LibreChat/api/server/services/Endpoints/custom/initialize.js

const {
  resolveHeaders,
  isUserProvided,
  getOpenAIConfig,
  getCustomEndpointConfig,
  createHandleLLMNewToken,
} = require('@librechat/api');
const {
  CacheKeys,
  ErrorTypes,
  envVarRegex,
  FetchTokenConfig,
  extractEnvVariable,
} = require('librechat-data-provider');
const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
const { fetchModels } = require('~/server/services/ModelService');
const OpenAIClient = require('~/app/clients/OpenAIClient');
const getLogStores = require('~/cache/getLogStores');
const { PROXY } = process.env;
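/**
 * Initializes an OpenAI-compatible client (or just its options) for a user-defined custom endpoint.
 * Parameter descriptions below are inferred from how the values are used in this module.
 *
 * @param {object} params
 * @param {object} params.req - Server request; `req.config` must hold the resolved app config.
 * @param {object} params.res - Server response.
 * @param {object} params.endpointOption - Parsed endpoint/model options for the current request.
 * @param {boolean} [params.optionsOnly] - When true, downstream logic is expected to return only the derived LLM options rather than a constructed client.
 * @param {string} [params.overrideEndpoint] - Takes precedence over `req.body.endpoint` when provided.
 * @example
 * // Illustrative call only; call sites typically receive `endpointOption` from request middleware.
 * const result = await initializeClient({ req, res, endpointOption });
 */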
const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrideEndpoint }) => {
  const appConfig = req.config;
  const { key: expiresAt } = req.body;
  const endpoint = overrideEndpoint ?? req.body.endpoint;
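  // Look up this endpoint's entry among the custom endpoints defined in librechat.yaml (via the resolved app config).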
  const endpointConfig = getCustomEndpointConfig({
    endpoint,
    appConfig,
  });
  if (!endpointConfig) {
    throw new Error(`Config not found for the ${endpoint} custom endpoint.`);
  }
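
  // `apiKey`/`baseURL` from the endpoint config may be environment-variable references
  // (e.g. `${CUSTOM_API_KEY}`) or the literal `user_provided`; env references are resolved here.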
  const CUSTOM_API_KEY = extractEnvVariable(endpointConfig.apiKey);
  const CUSTOM_BASE_URL = extractEnvVariable(endpointConfig.baseURL);

  /** Intentionally excludes passing `body`, i.e. `req.body`, as
   * values may not be accurate until `AgentClient` is initialized
   */
  let resolvedHeaders = resolveHeaders({
    headers: endpointConfig.headers,
    user: req.user,
  });
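
  // A value that still matches the `${VAR}` placeholder pattern means the referenced environment variable was never set.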
  if (CUSTOM_API_KEY.match(envVarRegex)) {
    throw new Error(`Missing API Key for ${endpoint}.`);
  }

  if (CUSTOM_BASE_URL.match(envVarRegex)) {
    throw new Error(`Missing Base URL for ${endpoint}.`);
  }
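
  // The literal value `user_provided` means each user must supply their own API key / base URL.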
  const userProvidesKey = isUserProvided(CUSTOM_API_KEY);
  const userProvidesURL = isUserProvided(CUSTOM_BASE_URL);
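  // User-supplied credentials are stored per user and may carry an expiry; `expiresAt` comes from `req.body.key`.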
  let userValues = null;
  if (expiresAt && (userProvidesKey || userProvidesURL)) {
    checkUserKeyExpiry(expiresAt, endpoint);
    userValues = await getUserKeyValues({ userId: req.user.id, name: endpoint });
  }

  let apiKey = userProvidesKey ? userValues?.apiKey : CUSTOM_API_KEY;
  let baseURL = userProvidesURL ? userValues?.baseURL : CUSTOM_BASE_URL;
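// Throw typed errors (NO_USER_KEY / NO_BASE_URL) when the user is expected to supply credentials but hasn't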
if (userProvidesKey && !apiKey) {
throw new Error(
JSON.stringify({
type: ErrorTypes.NO_USER_KEY,
}),
);
}
if (userProvidesURL && !baseURL) {
throw new Error(
JSON.stringify({
type: ErrorTypes.NO_BASE_URL,
}),
);
}
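// Regardless of who provides them, an API key and base URL are required to proceed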
if (!apiKey) {
throw new Error(`${endpoint} API key not provided.`);
}
if (!baseURL) {
throw new Error(`${endpoint} Base URL not provided.`);
}
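// Token config cache: scope the key per user when credentials are user-provided (and no static tokenConfig exists), otherwise share it per endpoint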
const cache = getLogStores(CacheKeys.TOKEN_CONFIG);
const tokenKey =
!endpointConfig.tokenConfig && (userProvidesKey || userProvidesURL)
? `${endpoint}:${req.user.id}`
: endpoint;
let endpointTokenConfig =
!endpointConfig.tokenConfig &&
FetchTokenConfig[endpoint.toLowerCase()] &&
(await cache.get(tokenKey));
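// For endpoints listed in FetchTokenConfig, fetching models also primes the cached token config when one isn't already available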
if (
FetchTokenConfig[endpoint.toLowerCase()] &&
endpointConfig &&
endpointConfig.models.fetch &&
!endpointTokenConfig
) {
await fetchModels({ apiKey, baseURL, name: endpoint, user: req.user.id, tokenKey });
endpointTokenConfig = await cache.get(tokenKey);
}
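// Client options sourced from the custom endpoint's config (librechat.yaml)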
const customOptions = {
headers: resolvedHeaders,
addParams: endpointConfig.addParams,
dropParams: endpointConfig.dropParams,
customParams: endpointConfig.customParams,
titleConvo: endpointConfig.titleConvo,
titleModel: endpointConfig.titleModel,
forcePrompt: endpointConfig.forcePrompt,
summaryModel: endpointConfig.summaryModel,
modelDisplayLabel: endpointConfig.modelDisplayLabel,
titleMethod: endpointConfig.titleMethod ?? 'completion',
contextStrategy: endpointConfig.summarize ? 'summarize' : null,
directEndpoint: endpointConfig.directEndpoint,
titleMessageRole: endpointConfig.titleMessageRole,
streamRate: endpointConfig.streamRate,
endpointTokenConfig,
};
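// A global `all` endpoints config, when present, overrides the per-endpoint streamRate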
const allConfig = appConfig.endpoints?.all;
if (allConfig) {
customOptions.streamRate = allConfig.streamRate;
}
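// Merge proxy settings, request context, custom options, and the per-request endpoint options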
let clientOptions = {
reverseProxyUrl: baseURL ?? null,
proxy: PROXY ?? null,
req,
res,
...customOptions,
...endpointOption,
};
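// optionsOnly: callers that only need an LLM config (rather than a full client instance) get an OpenAI-compatible config object back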
if (optionsOnly) {
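// Fold model parameters into the options and tag requests with the requesting user's id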
const modelOptions = endpointOption?.model_parameters ?? {};
clientOptions = Object.assign(
{
modelOptions,
},
clientOptions,
);
clientOptions.modelOptions.user = req.user.id;
const options = getOpenAIConfig(apiKey, clientOptions, endpoint);
if (options != null) {
options.useLegacyContent = true;
options.endpointTokenConfig = endpointTokenConfig;
}
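// Without a custom stream rate, return the config as-is; otherwise attach a token callback that paces streaming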
if (!clientOptions.streamRate) {
return options;
}
options.llmConfig.callbacks = [
{
handleLLMNewToken: createHandleLLMNewToken(clientOptions.streamRate),
},
];
return options;
}
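// Full client path: instantiate OpenAIClient with the merged options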
const client = new OpenAIClient(apiKey, clientOptions);
return {
client,
openAIApiKey: apiKey,
};
};
module.exports = initializeClient;