LibreChat/api/server/services/Endpoints/custom/initialize.js

const { isUserProvided, getOpenAIConfig, getCustomEndpointConfig } = require('@librechat/api');
const {
  CacheKeys,
  ErrorTypes,
  envVarRegex,
  FetchTokenConfig,
  extractEnvVariable,
} = require('librechat-data-provider');
const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
const { fetchModels } = require('~/server/services/ModelService');
const getLogStores = require('~/cache/getLogStores');
const { PROXY } = process.env;
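
/**
 * Initializes the client configuration for a custom (OpenAI-compatible) endpoint
 * defined in librechat.yaml, resolving its API key and base URL from environment
 * variables or from user-provided credentials.
 *
 * @param {object} params
 * @param {object} params.req - Express request; expects `req.config` (app config) and `req.user`.
 * @param {object} params.endpointOption - Parsed endpoint options for the current request.
 * @param {string} [params.overrideEndpoint] - Optional endpoint name that overrides `req.body.endpoint`.
 */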
const initializeClient = async ({ req, endpointOption, overrideEndpoint }) => {
  const appConfig = req.config;
  const { key: expiresAt } = req.body;
  const endpoint = overrideEndpoint ?? req.body.endpoint;
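
  // The endpoint configuration comes from the `endpoints.custom` list in
  // librechat.yaml. For example (values are illustrative):
  //
  //   endpoints:
  //     custom:
  //       - name: "Mistral"
  //         apiKey: "${MISTRAL_API_KEY}"
  //         baseURL: "https://api.mistral.ai/v1"
  //         models:
  //           default: ["mistral-small", "mistral-medium"]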
  const endpointConfig = getCustomEndpointConfig({
    endpoint,
    appConfig,
  });
  if (!endpointConfig) {
    throw new Error(`Config not found for the ${endpoint} custom endpoint.`);
  }

  const CUSTOM_API_KEY = extractEnvVariable(endpointConfig.apiKey);
  const CUSTOM_BASE_URL = extractEnvVariable(endpointConfig.baseURL);
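
  // `extractEnvVariable` resolves `${VAR_NAME}` placeholders against process.env;
  // if a value still matches `envVarRegex` here, the referenced variable is unset.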
  if (CUSTOM_API_KEY.match(envVarRegex)) {
    throw new Error(`Missing API Key for ${endpoint}.`);
  }

  if (CUSTOM_BASE_URL.match(envVarRegex)) {
    throw new Error(`Missing Base URL for ${endpoint}.`);
  }

  const userProvidesKey = isUserProvided(CUSTOM_API_KEY);
  const userProvidesURL = isUserProvided(CUSTOM_BASE_URL);
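
  // When `apiKey` or `baseURL` is set to `user_provided` in librechat.yaml, the
  // actual values are read from the user's stored key entry instead of the config file.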
  let userValues = null;
  if (expiresAt && (userProvidesKey || userProvidesURL)) {
    checkUserKeyExpiry(expiresAt, endpoint);
    userValues = await getUserKeyValues({ userId: req.user.id, name: endpoint });
  }

  let apiKey = userProvidesKey ? userValues?.apiKey : CUSTOM_API_KEY;
  let baseURL = userProvidesURL ? userValues?.baseURL : CUSTOM_BASE_URL;
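// When the endpoint is configured as `user_provided`, missing user-supplied
// credentials are raised as structured JSON errors so the caller can tell a
// missing API key apart from a missing base URL.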
if (userProvidesKey && !apiKey) {
  throw new Error(
    JSON.stringify({
      type: ErrorTypes.NO_USER_KEY,
    }),
  );
}
if (userProvidesURL && !baseURL) {
  throw new Error(
    JSON.stringify({
      type: ErrorTypes.NO_BASE_URL,
    }),
  );
}
if (!apiKey) {
  throw new Error(`${endpoint} API key not provided.`);
}
if (!baseURL) {
  throw new Error(`${endpoint} Base URL not provided.`);
}
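// Token configuration is cached per user when credentials are user-provided
// and the endpoint declares no static `tokenConfig`; otherwise a single cache
// entry per endpoint is shared. For endpoints listed in `FetchTokenConfig`
// with model fetching enabled, a models fetch populates the cache entry.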
const cache = getLogStores(CacheKeys.TOKEN_CONFIG);
const tokenKey =
  !endpointConfig.tokenConfig && (userProvidesKey || userProvidesURL)
    ? `${endpoint}:${req.user.id}`
    : endpoint;
let endpointTokenConfig =
  !endpointConfig.tokenConfig &&
  FetchTokenConfig[endpoint.toLowerCase()] &&
  (await cache.get(tokenKey));
if (
  FetchTokenConfig[endpoint.toLowerCase()] &&
  endpointConfig &&
  endpointConfig.models.fetch &&
  !endpointTokenConfig
) {
  await fetchModels({ apiKey, baseURL, name: endpoint, user: req.user.id, tokenKey });
  endpointTokenConfig = await cache.get(tokenKey);
}
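// Options sourced directly from this endpoint's entry in librechat.yaml.
// Note that `headers` are passed through as-is rather than resolved at this
// point.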
const customOptions = {
  headers: endpointConfig.headers,
  addParams: endpointConfig.addParams,
  dropParams: endpointConfig.dropParams,
  customParams: endpointConfig.customParams,
  titleConvo: endpointConfig.titleConvo,
  titleModel: endpointConfig.titleModel,
  forcePrompt: endpointConfig.forcePrompt,
  summaryModel: endpointConfig.summaryModel,
  modelDisplayLabel: endpointConfig.modelDisplayLabel,
  titleMethod: endpointConfig.titleMethod ?? 'completion',
  contextStrategy: endpointConfig.summarize ? 'summarize' : null,
  directEndpoint: endpointConfig.directEndpoint,
  titleMessageRole: endpointConfig.titleMessageRole,
  streamRate: endpointConfig.streamRate,
  endpointTokenConfig,
};
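// The shared `all` endpoint block in the config, when present, applies its
// stream rate to every custom endpoint.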
const allConfig = appConfig.endpoints?.all;
if (allConfig) {
  customOptions.streamRate = allConfig.streamRate;
}
let clientOptions = {
  reverseProxyUrl: baseURL ?? null,
  proxy: PROXY ?? null,
  ...customOptions,
  ...endpointOption,
};
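// Request-level `model_parameters` become `modelOptions`, merged so that any
// `modelOptions` already present on `clientOptions` takes precedence. The
// requesting user's id is then set on the model options (the OpenAI-style
// `user` field).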
const modelOptions = endpointOption?.model_parameters ?? {};
clientOptions = Object.assign(
  {
    modelOptions,
  },
  clientOptions,
);
clientOptions.modelOptions.user = req.user.id;
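// getOpenAIConfig maps the assembled client options into the final
// OpenAI-compatible configuration; the legacy-content flag and the token
// config are attached afterward so they travel with the returned options.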
const options = getOpenAIConfig(apiKey, clientOptions, endpoint);
if (options != null) {
  options.useLegacyContent = true;
  options.endpointTokenConfig = endpointTokenConfig;
}
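// A configured stream rate is mapped onto the LLM config as
// `_lc_stream_delay`, presumably read by the streaming layer as an
// inter-chunk delay (assumption; the consumer lives outside this file).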
if (clientOptions.streamRate) {
  options.llmConfig._lc_stream_delay = clientOptions.streamRate;
}
return options;
};
module.exports = initializeClient;
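/*
 * Illustrative usage (a minimal sketch, not code from this repo): assumes an
 * Express-style handler with a populated `req.user`, an `endpointOption`
 * already parsed by upstream middleware, and the destructured
 * `{ req, res, endpointOption }` parameter shape declared earlier in this
 * file.
 *
 *   const initializeClient = require('./initialize');
 *
 *   async function handler(req, res) {
 *     const options = await initializeClient({
 *       req,
 *       res,
 *       endpointOption: req.body.endpointOption, // hypothetical location
 *     });
 *     // `options.llmConfig` then feeds the OpenAI-compatible client setup.
 *   }
 */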