// LibreChat/api/server/services/MCP.js


const { tool } = require('@langchain/core/tools');
const { logger, getTenantId } = require('@librechat/data-schemas');
const {
  Providers,
  StepTypes,
  GraphEvents,
  Constants: AgentConstants,
} = require('@librechat/agents');
const {
sendEvent,
MCPOAuthHandler,
isMCPDomainAllowed,
normalizeServerName,
normalizeJsonSchema,
GenerationJobManager,
resolveJsonSchemaRefs,
buildOAuthToolCallName,
} = require('@librechat/api');
const { Time, CacheKeys, Constants, isAssistantsEndpoint } = require('librechat-data-provider');
const {
getOAuthReconnectionManager,
getMCPServersRegistry,
getFlowStateManager,
getMCPManager,
} = require('~/config');
const { findToken, createToken, updateToken } = require('~/models');
const { getGraphApiToken } = require('./GraphTokenService');
const { reinitMCPServer } = require('./Tools/mcp');
const { getAppConfig } = require('./Config');
const { getLogStores } = require('~/cache');
const MAX_CACHE_SIZE = 1000;
const lastReconnectAttempts = new Map();
const RECONNECT_THROTTLE_MS = 10_000;
const missingToolCache = new Map();
const MISSING_TOOL_TTL_MS = 10_000;
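These constants back a per-server reconnect throttle: a `Map` of last-attempt timestamps consulted before any reconnect is tried. A minimal standalone sketch of that check, assuming an illustrative helper name (`shouldThrottleReconnect` and `demoAttempts` are not part of MCP.js):

```javascript
// Sketch of the throttle pattern these constants support; names are illustrative.
const DEMO_THROTTLE_MS = 10_000;
const demoAttempts = new Map();

function shouldThrottleReconnect(serverKey, now = Date.now()) {
  const last = demoAttempts.get(serverKey);
  if (last != null && now - last < DEMO_THROTTLE_MS) {
    // A reconnect was attempted too recently for this server; skip this one.
    return true;
  }
  // Record this attempt so subsequent calls within the window are throttled.
  demoAttempts.set(serverKey, now);
  return false;
}

const t0 = Date.now();
console.log(shouldThrottleReconnect('user1:my-server', t0)); // false — first attempt
console.log(shouldThrottleReconnect('user1:my-server', t0 + 1_000)); // true — within window
console.log(shouldThrottleReconnect('user1:my-server', t0 + 11_000)); // false — window elapsed
```

Keying the map by user and server keeps one user's reconnect storm from suppressing another user's attempts.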
/**
 * Evicts entries older than `ttl` from `map`, but only once the map has grown
 * past `MAX_CACHE_SIZE`, and stops as soon as it is back within the budget.
 * Maps iterate in insertion order, so the oldest entries are checked first.
 * @param {Map<string, number>} map - Cache of keys to epoch-ms timestamps
 * @param {number} ttl - Time-to-live in milliseconds
 */
function evictStale(map, ttl) {
  if (map.size <= MAX_CACHE_SIZE) {
    return;
  }
  const now = Date.now();
  for (const [key, timestamp] of map) {
    if (now - timestamp >= ttl) {
      map.delete(key);
    }
    if (map.size <= MAX_CACHE_SIZE) {
      return;
    }
  }
}
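The eviction above can be exercised in isolation. The following self-contained sketch reimplements the same size-capped TTL sweep with a small demo cap so the behavior is visible (the `Demo` names and the cap of 3 are illustrative, not part of MCP.js):

```javascript
// Standalone demo of the size-capped TTL eviction; cap lowered to 3 entries.
const DEMO_CACHE_SIZE = 3;
const DEMO_TTL_MS = 10_000;

function evictStaleDemo(map, ttl, maxSize = DEMO_CACHE_SIZE) {
  if (map.size <= maxSize) {
    return; // within budget: no sweep needed
  }
  const now = Date.now();
  // Insertion order means the oldest entries are visited (and evicted) first.
  for (const [key, timestamp] of map) {
    if (now - timestamp >= ttl) {
      map.delete(key);
    }
    if (map.size <= maxSize) {
      return; // stop as soon as the map is back within budget
    }
  }
}

const demoCache = new Map();
const nowMs = Date.now();
demoCache.set('server-a', nowMs - 60_000); // stale
demoCache.set('server-b', nowMs - 30_000); // stale
demoCache.set('server-c', nowMs); // fresh
demoCache.set('server-d', nowMs); // fresh

evictStaleDemo(demoCache, DEMO_TTL_MS);
console.log([...demoCache.keys()]); // → [ 'server-b', 'server-c', 'server-d' ]
```

Only one stale entry is removed: the sweep exits the moment the size budget is satisfied, keeping eviction cost proportional to the overflow rather than the map size.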
const unavailableMsg =
  "This tool's MCP server is temporarily unavailable. Please try again shortly.";
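This message is surfaced through a stub tool returned in place of a real one when its server is down. A hedged sketch of that shape, assuming a content-and-artifact pair return value (the helper name `createUnavailableToolStubSketch` and `MSG_DEMO` are illustrative, not the actual MCP.js implementation):

```javascript
// Sketch only: stub tool surfacing the unavailable message with a null artifact.
const MSG_DEMO =
  "This tool's MCP server is temporarily unavailable. Please try again shortly.";

function createUnavailableToolStubSketch() {
  // The [text, artifact] pair shape here is an assumption for illustration.
  return () => [MSG_DEMO, null];
}

const [text, artifact] = createUnavailableToolStubSketch()();
console.log(text); // prints the unavailable message
```

Returning a stub instead of throwing lets the agent surface a readable explanation to the model rather than failing the whole tool-loading pass.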
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues: - Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries - TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired Changes: - Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers - Thread serverConfig from createMCPTool through createToolInstance closure to callTool - Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped) - Remove server-only readThrough fallback from getServerConfig - Increase config cache hash from 8 to 16 hex chars (64-bit) - Add isUserSourced boundary tests for all source/dbId combinations - Fix double Object.keys call in getMCPTools controller - Update test assertions for new getServerConfig behavior * fix: cache base configs for config-server users; narrow upsertConfigCache error handling - Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. 
Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users - Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages * fix: restore correct merge order and document upsert error matching - Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract) - Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message * feat: complete config-source server support across all execution paths Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions. - Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool - Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application - Add configServers param to createMCPTool and createMCPTools for reconnect path fallback - Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers - Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries - Update context.spec.ts assertions for new configServers parameter * fix: add missing mocks for config-source server dependencies in client.test.js Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations. 
* fix: update formatInstructionsForContext assertions for configServers param The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring. * fix: move configServers resolution before MCP tool loop to avoid TDZ configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a ReferenceError temporal dead zone. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution. * fix: address review findings — cache bypass, discoverServerTools gap, DRY - #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls. - #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows. - #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions. - #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern. - #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint. - #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs. 
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case - Merge duplicate @librechat/data-schemas requires in MCP.js into one - Move resolveConfigServers after module-level constants - Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys * fix: replace fragile string-match error detection with proper upsert method Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically: - InMemory: direct Map.set() - Redis: direct cache.set() - RedisAggregateKey: read-modify-write under write lock - DB: delegates to update() (DB servers use explicit add() with ACL setup) * fix: wire configServers through remaining HTTP endpoints - getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig - reinitialize route: resolve configServers before getServerConfig - auth-values route: resolve configServers before getServerConfig - getOAuthHeaders: accept configServers param, thread from callers - Update mcp.spec.js tests to mock getAllServerConfigs for GET by name * fix: thread serverConfig through getConnection for config-source servers Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup. 
* fix: thread configServers through reinit, reconnect, and tool definition paths Wire configServers through every remaining call chain that creates or reconnects MCP server connections: - reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools - reconnectServer: accepts and passes configServers to reinitMCPServer - createMCPTools/createMCPTool: pass configServers to reconnectServer - ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites - reinitialize route: passes serverConfig and configServers to reinitMCPServer * fix: address review findings — simplify merge, harden error paths, fix log labels - Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base } - Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error - Deduplicate getYamlServerNames cold-start with promise dedup pattern - Remove dead `if (!mcpConfig)` guard in getMCPSetupData - Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling - Remove misleading OAuth callback comment about readThrough cache - Move resolveConfigServers after module-level constants in MCP.js * fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label - Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working - Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers - Use source field instead of dbId for storageLocation derivation - Fix remaining hardcoded "App" in reset() leaderCheck message * fix: persist oauthHeaders in flow state 
for config-source OAuth servers The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers. * fix: update tests for getMCPSetupData null guard removal and ToolService mock - MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object) - MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks - ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers) * fix: address review findings — DRY, naming, logging, dead code, defensive guards - #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll() - #2: Add warning log when oauthHeaders absent from OAuth callback flow state - #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing - #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused - #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped - #6: Remove dead 'MCP config not found' error handlers from routes - #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache - #8: Remove logger.error from withTimeout to prevent double-logging timeouts - #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message - #12: Use spread instead of mutation in addServer for immutability consistency - Add upsert mock to ensureConfigServers.test.ts 
DB mock - Update route tests for resolveAllMcpConfigs import change * fix: restore correct merge priority, use immutable spread, fix test mock - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData * fix: error handling, stable hashing, flatten nesting, remove dead param - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff) - Remove dead configServers param from getAppToolFunctions (never passed) - Add security rationale comment for source field in redactServerSecrets * fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties. Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context. 
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config'). * chore: imports/fields ordering * fix: address review findings — error handling, targeted lookup, test gaps - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists. - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup. - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers. - Update getAllServerConfigs JSDoc to document disjoint-key design. - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback). - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation. * fix: add getOAuthReconnectionManager mock to OAuth callback tests * chore: imports ordering
2026-03-28 10:36:43 -04:00
/**
* Resolves config-source MCP servers from admin Config overrides for the current
* request context. Returns the parsed configs keyed by server name.
* @param {import('express').Request} req - Express request with user context
* @returns {Promise<Record<string, import('@librechat/api').ParsedServerConfig>>}
*/
async function resolveConfigServers(req) {
try {
const registry = getMCPServersRegistry();
const user = req?.user;
const appConfig = await getAppConfig({
role: user?.role,
tenantId: getTenantId(),
userId: user?.id,
});
return await registry.ensureConfigServers(appConfig?.mcpConfig || {});
} catch (error) {
logger.warn(
'[resolveConfigServers] Failed to resolve config servers, degrading to empty:',
error,
);
return {};
}
}
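A minimal sketch of the degrade-to-empty contract that `resolveConfigServers` implements: resolution failures are swallowed and an empty map is returned so transient DB/cache errors cannot crash the tool pipeline. `resolveConfigServersSketch` and `failingRegistry` are illustrative names for this sketch, not part of the module.

```javascript
// Sketch: same try/catch shape as resolveConfigServers above, with the
// registry passed in explicitly instead of fetched via getMCPServersRegistry.
async function resolveConfigServersSketch(registry, mcpConfig) {
  try {
    return await registry.ensureConfigServers(mcpConfig || {});
  } catch (error) {
    // Transient infrastructure errors degrade to "no config servers".
    return {};
  }
}

// A registry whose resolution throws should yield {}, not a rejection.
const failingRegistry = {
  ensureConfigServers: async () => {
    throw new Error('cache unavailable');
  },
};
```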
/**
* Resolves config-source servers and merges all server configs (YAML + config + user DB)
* for the given user context. Shared helper for controllers needing the full merged config.
* @param {string} userId
* @param {{ id?: string, role?: string }} [user]
* @returns {Promise<Record<string, import('@librechat/api').ParsedServerConfig>>}
*/
async function resolveAllMcpConfigs(userId, user) {
const registry = getMCPServersRegistry();
const appConfig = await getAppConfig({ role: user?.role, tenantId: getTenantId(), userId });
let configServers = {};
try {
configServers = await registry.ensureConfigServers(appConfig?.mcpConfig || {});
} catch (error) {
logger.warn(
'[resolveAllMcpConfigs] Config server resolution failed, continuing without:',
error,
);
}
return await registry.getAllServerConfigs(userId, configServers);
}
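The merge `getAllServerConfigs` is documented to apply — YAML (lowest) → admin Config overrides → user DB entries (highest) — reduces to object-spread precedence, where later spreads win. The three fixture maps below are illustrative, not real configs.

```javascript
// Illustrative fixtures: same key "shared" defined at all three tiers.
const yamlConfigs = { shared: { url: 'yaml' }, onlyYaml: { url: 'yaml' } };
const configServers = { shared: { url: 'config' }, onlyConfig: { url: 'config' } };
const userDbConfigs = { shared: { url: 'userdb' } };

// Later spreads override earlier ones, so user DB entries win.
const merged = { ...yamlConfigs, ...configServers, ...userDbConfigs };
```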
/**
* @param {string} toolName
* @param {string} serverName
*/
function createUnavailableToolStub(toolName, serverName) {
const normalizedToolKey = `${toolName}${Constants.mcp_delimiter}${normalizeServerName(serverName)}`;
🧪 chore: MCP Reconnect Storm Follow-Up Fixes and Integration Tests (#12172) * 🧪 test: Add reconnection storm regression tests for MCPConnection Introduced a comprehensive test suite for reconnection storm scenarios, validating circuit breaker, throttling, cooldown, and timeout fixes. The tests utilize real MCP SDK transports and a StreamableHTTP server to ensure accurate behavior under rapid connect/disconnect cycles and error handling for SSE 400/405 responses. This enhances the reliability of the MCPConnection by ensuring proper handling of reconnection logic and circuit breaker functionality. * 🔧 fix: Update createUnavailableToolStub to return structured response Modified the `createUnavailableToolStub` function to return an array containing the unavailable message and a null value, enhancing the response structure. Additionally, added a debug log to skip tool creation when the result is null, improving the handling of reconnection scenarios in the MCP service. * 🧪 test: Enhance MCP tool creation tests for cache and throttle interactions Added new test cases for the `createMCPTool` function to validate the caching behavior when tools are unavailable or throttled. The tests ensure that tools are correctly cached as missing and prevent unnecessary reconnects across different users, improving the reliability of the MCP service under concurrent usage scenarios. Additionally, introduced a test for the `createMCPTools` function to verify that it returns an empty array when reconnect is throttled, ensuring proper handling of throttling logic. * 📝 docs: Update AGENTS.md with testing philosophy and guidelines Expanded the testing section in AGENTS.md to emphasize the importance of using real logic over mocks, advocating for the use of spies and real dependencies in tests. Added specific recommendations for testing with MongoDB and MCP SDK, highlighting the need to mock only uncontrollable external services. 
This update aims to improve testing practices and encourage more robust test implementations. * 🧪 test: Enhance reconnection storm tests with socket tracking and SSE handling Updated the reconnection storm test suite to include a new socket tracking mechanism for better resource management during tests. Improved the handling of SSE 400/405 responses by ensuring they are processed in the same branch as 404 errors, preventing unhandled cases. This enhances the reliability of the MCPConnection under rapid reconnect scenarios and ensures proper error handling. * 🔧 fix: Implement cache eviction for stale reconnect attempts and missing tools Added an `evictStale` function to manage the size of the `lastReconnectAttempts` and `missingToolCache` maps, ensuring they do not exceed a maximum cache size. This enhancement improves resource management by removing outdated entries based on a specified time-to-live (TTL), thereby optimizing the MCP service's performance during reconnection scenarios.
2026-03-10 17:44:13 -04:00
const _call = async () => [unavailableMsg, null];
const toolInstance = tool(_call, {
schema: {
type: 'object',
properties: {
input: { type: 'string', description: 'Input for the tool' },
},
required: [],
},
name: normalizedToolKey,
description: unavailableMsg,
responseFormat: AgentConstants.CONTENT_AND_ARTIFACT,
});
toolInstance.mcp = true;
toolInstance.mcpRawServerName = serverName;
return toolInstance;
}
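The tool key built above joins the tool name and the normalized server name with a delimiter. A standalone sketch of that convention follows; `'_mcp_'` stands in for the real `Constants.mcp_delimiter`, and the normalizer here is a simplified assumption, not the actual `normalizeServerName`.

```javascript
// Assumed delimiter value; the real one comes from Constants.mcp_delimiter.
const MCP_DELIMITER = '_mcp_';

// Simplified stand-in for normalizeServerName: collapse whitespace to underscores.
const normalizeServerNameSketch = (name) => name.replace(/\s+/g, '_');

// Same shape as the normalizedToolKey template literal above.
function buildToolKey(toolName, serverName) {
  return `${toolName}${MCP_DELIMITER}${normalizeServerNameSketch(serverName)}`;
}
```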
- Updated the loading logic for built-in tool definitions to ensure that only valid definitions are pushed to the built-in tool definitions array. * fix: extend ExtendedJsonSchema to support 'null' type and nullable enums - Updated the ExtendedJsonSchema type to include 'null' as a valid type option. - Modified the enum property to accept an array of values that can include strings, numbers, booleans, and null, enhancing schema flexibility. * test: add comprehensive tests for tool definitions loading and registry behavior - Implemented tests to verify the handling of built-in tools without registry definitions, ensuring they are skipped correctly. - Added tests to confirm that built-in tools include descriptions and parameters in the registry. - Enhanced tests for action tools, checking for proper inclusion of metadata and handling of tools without parameters in the registry. * test: add tests for mixed-type and number enum schema handling - Introduced tests to validate the parsing of mixed-type enum values, including strings, numbers, booleans, and null. - Added tests for number enum schema values to ensure correct parsing of numeric inputs, enhancing schema validation coverage. * fix: update mock implementation for @librechat/agents - Changed the mock for @librechat/agents to spread the actual module's properties, ensuring that all necessary functionalities are preserved in tests. - This adjustment enhances the accuracy of the tests by reflecting the real structure of the module. * fix: change max_results type in GoogleSearch schema from number to integer - Updated the type of max_results in the Google Search JSON schema to 'integer' for better type accuracy and validation consistency. * fix: update max_results description and type in GoogleSearch schema - Changed the type of max_results from 'number' to 'integer' for improved type accuracy. - Updated the description to reflect the new default maximum number of search results, changing it from 10 to 5. 
* refactor: remove unused code and improve tool registry handling - Eliminated outdated comments and conditional logic related to event-driven mode in the ToolService. - Enhanced the handling of the tool registry by ensuring it is configurable for better integration during tool execution. * feat: add definitionsOnly option to buildToolClassification for event-driven mode - Introduced a new parameter, definitionsOnly, to the BuildToolClassificationParams interface to enable a mode that skips tool instance creation. - Updated the buildToolClassification function to conditionally add tool definitions without instantiating tools when definitionsOnly is true. - Modified the loadToolDefinitions function to pass definitionsOnly as true, ensuring compatibility with the new feature. * test: add unit tests for buildToolClassification with definitionsOnly option - Implemented tests to verify the behavior of buildToolClassification when definitionsOnly is set to true or false. - Ensured that tool instances are not created when definitionsOnly is true, while still adding necessary tool definitions. - Confirmed that loadAuthValues is called appropriately based on the definitionsOnly parameter, enhancing test coverage for this new feature.
2026-02-01 08:50:57 -05:00
/**
 * Checks whether a JSON schema describes an empty object: an `object` schema
 * with no declared properties and `additionalProperties` not enabled.
 * @param {Record<string, unknown> | null | undefined} jsonSchema
 * @returns {boolean}
 */
function isEmptyObjectSchema(jsonSchema) {
  return (
    jsonSchema != null &&
    typeof jsonSchema === 'object' &&
    jsonSchema.type === 'object' &&
    (jsonSchema.properties == null || Object.keys(jsonSchema.properties).length === 0) &&
    !jsonSchema.additionalProperties
  );
}
/**
* @param {object} params
* @param {ServerResponse} params.res - The Express response object for sending events.
* @param {string} params.stepId - The ID of the step in the flow.
* @param {ToolCallChunk} params.toolCall - The tool call object containing tool information.
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
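The USE_REDIS_STREAMS default described above (follow USE_REDIS when unset, honor an explicit value otherwise) can be sketched as a small resolver. This is a hypothetical illustration of the documented behavior, assuming env vars arrive as strings; the function name and shape are illustrative, not LibreChat's actual cacheConfig API.

```javascript
// Hypothetical sketch: USE_REDIS_STREAMS inherits from USE_REDIS when it is
// not explicitly set, and an explicit value always wins.
function resolveUseRedisStreams(env) {
  if (env.USE_REDIS_STREAMS !== undefined) {
    return env.USE_REDIS_STREAMS === 'true'; // explicit override
  }
  return env.USE_REDIS === 'true'; // fall back to the general Redis toggle
}

console.log(resolveUseRedisStreams({ USE_REDIS: 'true' })); // true (inherited)
console.log(resolveUseRedisStreams({ USE_REDIS: 'true', USE_REDIS_STREAMS: 'false' })); // false
```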
2025-12-19 10:12:39 -05:00
* @param {string | null} [params.streamId] - The stream ID for resumable mode.
🪐 feat: MCP OAuth 2.0 Discovery Support (#7924)
* chore: Update @modelcontextprotocol/sdk to version 1.12.3 in package.json and package-lock.json
  - Bump version of @modelcontextprotocol/sdk to 1.12.3 to incorporate recent updates.
  - Update dependencies for ajv and cross-spawn to their latest versions.
  - Add ajv as a new dependency in the sdk module.
  - Include json-schema-traverse as a new dependency in the sdk module.
* feat: @librechat/auth
* feat: Add crypto module exports to auth package
  - Introduced a new crypto module by creating index.ts in the crypto directory.
  - Updated the main index.ts of the auth package to export from the new crypto module.
* feat: Update package dependencies and build scripts for auth package
  - Added @librechat/auth as a dependency in package.json and package-lock.json.
  - Updated build scripts to include the auth package in both frontend and bun build processes.
  - Removed unused mongoose and openid-client dependencies from package-lock.json for cleaner dependency management.
* refactor: Migrate crypto utility functions to @librechat/auth
  - Replaced local crypto utility imports with the new @librechat/auth package across multiple files.
  - Removed the obsolete crypto.js file and its exports.
  - Updated relevant services and models to utilize the new encryption and decryption methods from @librechat/auth.
* feat: Enhance OAuth token handling and update dependencies in auth package
* chore: Remove Token model and TokenService due to restructuring of OAuth handling
  - Deleted the Token.js model and TokenService.js, which were responsible for managing OAuth tokens.
  - This change is part of a broader refactor to streamline OAuth token management and improve code organization.
* refactor: imports from '@librechat/auth' to '@librechat/api' and add OAuth token handling functionality
* refactor: Simplify logger usage in MCP and FlowStateManager classes
* chore: fix imports
* feat: Add OAuth configuration schema to MCP with token exchange method support
* feat: FIRST PASS Implement MCP OAuth flow with token management and error handling
  - Added a new route for handling OAuth callbacks and token retrieval.
  - Integrated OAuth token storage and retrieval mechanisms.
  - Enhanced MCP connection to support automatic OAuth flow initiation on 401 errors.
  - Implemented dynamic client registration and metadata discovery for OAuth.
  - Updated MCPManager to manage OAuth tokens and handle authentication requirements.
  - Introduced comprehensive logging for OAuth processes and error handling.
* refactor: Update MCPConnection and MCPManager to utilize new URL handling
  - Added a `url` property to MCPConnection for better URL management.
  - Refactored MCPManager to use the new `url` property instead of a deprecated method for OAuth handling.
  - Changed logging from info to debug level for flow manager and token methods initialization.
  - Improved comments for clarity on existing tokens and OAuth event listener setup.
* refactor: Improve connection timeout error messages in MCPConnection and MCPManager and use initTimeout for connection
  - Updated the connection timeout error messages to include the duration of the timeout.
  - Introduced a configurable `connectTimeout` variable in both MCPConnection and MCPManager for better flexibility.
* chore: cleanup MCP OAuth Token exchange handling; fix: erroneous use of flowsCache and remove verbose logs
* refactor: Update MCPManager and MCPTokenStorage to use TokenMethods for token management
  - Removed direct token storage handling in MCPManager and replaced it with TokenMethods for better abstraction.
  - Refactored MCPTokenStorage methods to accept parameters for token operations, enhancing flexibility and readability.
  - Improved logging messages related to token persistence and retrieval processes.
* refactor: Update MCP OAuth handling to use static methods and improve flow management
  - Refactored MCPOAuthHandler to utilize static methods for initiating and completing OAuth flows, enhancing clarity and reducing instance dependencies.
  - Updated MCPManager to pass flowManager explicitly to OAuth handling methods, improving flexibility in flow state management.
  - Enhanced comments and logging for better understanding of OAuth processes and flow state retrieval.
* refactor: Integrate token methods into createMCPTool for enhanced token management
* refactor: Change logging from info to debug level in MCPOAuthHandler for improved log management
* chore: clean up logging
* feat: first pass, auth URL from MCP OAuth flow
* chore: Improve logging format for OAuth authentication URL display
* chore: cleanup mcp manager comments
* feat: add connection reconnection logic in MCPManager
* refactor: reorganize token storage handling in MCP
  - Moved token storage logic from MCPManager to a new MCPTokenStorage class for better separation of concerns.
  - Updated imports to reflect the new token storage structure.
  - Enhanced methods for storing, retrieving, updating, and deleting OAuth tokens, improving overall token management.
* chore: update comment for SYSTEM_USER_ID in MCPManager for clarity
* feat: implement refresh token functionality in MCP
  - Added refresh token handling in MCPManager to support token renewal for both app-level and user-specific connections.
  - Introduced a refreshTokens function to facilitate token refresh logic.
  - Enhanced MCPTokenStorage to manage client information and refresh token processes.
  - Updated logging for better traceability during token operations.
* chore: cleanup @librechat/auth
* feat: implement MCP server initialization in a separate service
  - Added a new service to handle the initialization of MCP servers, improving code organization and readability.
  - Refactored the server startup logic to utilize the new initializeMCP function.
  - Removed redundant MCP initialization code from the main server file.
* fix: don't log auth url for user connections
* feat: enhance OAuth flow with success and error handling components
  - Updated OAuth callback routes to redirect to new success and error pages instead of sending status messages.
  - Introduced `OAuthSuccess` and `OAuthError` components to provide user feedback during authentication.
  - Added localization support for success and error messages in the translation files.
  - Implemented countdown functionality in the success component for a better user experience.
* fix: refresh token handling for user connections, add missing URL and methods
  - Add standard enum for system user id and helper for determining app-level vs. user-level connections.
* refactor: update token handling in MCPManager and MCPTokenStorage
* fix: improve error logging in OAuth authentication handler
* fix: concurrency issues for both login url emission and concurrency of oauth flows for shared flows (same user, same server, multiple calls for same server)
* fix: properly fail shared flows for concurrent server calls and prevent duplication of tokens
* chore: remove unused auth package directory from update configuration
* ci: fix mocks in samlStrategy tests
* ci: add mcpConfig to AppService test setup
* chore: remove obsolete MCP OAuth implementation documentation
* fix: update build script for API to use correct command
* chore: bump version of @librechat/api to 1.2.4
* fix: update abort signal handling in createMCPTool function
* fix: add optional clientInfo parameter to refreshTokensFunction metadata
* refactor: replace app.locals.availableTools with getCachedTools in multiple services and controllers for improved tool management
* fix: concurrent refresh token handling issue
* refactor: add signal parameter to getUserConnection method for improved abort handling
* chore: JSDoc typing for `loadEphemeralAgent`
* refactor: update isConnectionActive method to use destructured parameters for improved readability
* feat: implement caching for MCP tools to handle app-level disconnects for loading list of tools
* ci: fix agent test
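The refresh-token commits above hinge on one decision: reuse a stored access token or renew it before reconnecting an MCP server. A minimal sketch of that decision, under the assumption that tokens carry an expiry timestamp; the field names (`accessToken`, `expiresAt`) and function name are illustrative, not MCPTokenStorage's actual schema.

```javascript
// Hypothetical sketch: decide whether stored OAuth tokens should be refreshed
// before use. Refresh slightly early (skewMs) to avoid racing a 401 at the server.
function shouldRefreshTokens(tokens, now = Date.now(), skewMs = 60_000) {
  if (!tokens || !tokens.accessToken) {
    return true; // nothing stored: a refresh (or full OAuth flow) is required
  }
  if (tokens.expiresAt == null) {
    return false; // no recorded expiry: assume the token is still valid
  }
  return now >= tokens.expiresAt - skewMs; // renew once inside the skew window
}
```

A caller would branch on this before `getUserConnection`-style reconnects: refresh (or start a new OAuth flow) when it returns true, otherwise reuse the stored token.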
2025-06-17 13:50:33 -04:00
*/
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926)
* ✨ feat: Implement Resumable Generation Jobs with SSE Support
  - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
  - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
  - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
  - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
  - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.
* WIP: resuming
* WIP: resumable stream
* feat: Enhance Stream Management with Abort Functionality
  - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
  - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
  - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
  - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
  - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
  - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.
* fix: Update query parameter handling in useChatHelpers
  - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.
* fix: improve syncing when switching conversations
* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup
* fix: Improve content type mismatch handling in useStepHandler
  - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.
* fix: Allow dynamic content creation in useChatFunctions
  - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.
* fix: Refine response message handling in useStepHandler
  - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.
* refactor: Enhance GenerationJobManager with In-Memory Implementations
  - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
  - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
  - Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
  - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.
* refactor: Enhance GenerationJobManager with improved subscriber handling
  - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
  - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
  - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
  - Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.
* chore: Adjust timeout for subscriber readiness in ResumableAgentController
  - Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.
* refactor: Update GenerationJobManager documentation and structure
  - Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
  - Updated comments to reflect the potential for Redis integration and the need for async refactoring.
  - Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.
* refactor: Convert GenerationJobManager methods to async for improved performance
  - Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
  - Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
  - Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.
* refactor: Simplify initial response handling in useChatFunctions
  - Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.
* refactor: Clarify content handling logic in useStepHandler
  - Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
  - Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
* refactor: Improve message handling logic in useStepHandler
  - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
  - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.
* fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.
* refactor: Integrate streamId handling for improved resumable functionality for attachments
  - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing.
  - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
  - Improved logging and attachment management to accommodate both standard and resumable modes.
* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management
  - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
  - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
  - Simplified cleanup processes and improved error handling during abort operations.
  - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.
* refactor: Unify streamId and conversationId handling for improved job management
  - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
  - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
  - Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
  - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.
* refactor: Enhance resumable SSE handling with improved UI state management and error recovery
  - Added UI state restoration on successful SSE connection to indicate ongoing submission.
  - Implemented detailed error handling for network failures, including retry logic with exponential backoff.
  - Introduced abort event handling to reset UI state on intentional stream closure.
  - Enhanced debugging capabilities for testing reconnection and clean close scenarios.
  - Updated generation function to retry on network errors, improving resilience during submission processes.
* refactor: Consolidate content state management into IJobStore for improved job handling
  - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
  - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
  - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
  - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.
* feat: Introduce Redis-backed stream services for enhanced job management
  - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options.
  - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
  - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
  - Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness.
  - Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options.
* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport
  - Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity.
  - Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output.
  - Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency.
* refactor: Enhance job state management and TTL configuration in RedisJobStore
  - Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management.
  - Refactored the handling of job expiration and cleanup processes to align with new TTL configurations.
  - Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance.
  - Improved comments and documentation for better understanding of the changes made.
* refactor: cleanupOnComplete option to GenerationJobManager for flexible resource management
  - Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion.
  - Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management.
  - Improved cleanup logic in the cleanup method to handle orphaned resources effectively.
  - Enhanced documentation and comments for better clarity on the new functionality.
* refactor: Update TTL configuration for completed jobs in InMemoryJobStore
  - Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup.
  - Enhanced cleanup logic to respect the new TTL setting, improving resource management.
  - Updated comments for clarity on the behavior of the TTL configuration.
* refactor: Enhance RedisJobStore with local graph caching for improved performance
  - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance.
  - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed.
  - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects.
  - Improved documentation and comments for clarity on the caching mechanism and its benefits.
* feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support
  - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling.
  - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling.
  - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior.
  - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability.
* fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers
  - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately.
  - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting.
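The WeakRef-based local graph cache described above can be sketched briefly: same-instance reconnects reuse the in-memory graph object instead of a Redis round-trip, while the garbage collector stays free to reclaim graphs nobody else holds. The class name and methods are illustrative, not RedisJobStore's actual API.

```javascript
// Minimal sketch of a WeakRef local cache with self-healing stale entries.
class LocalGraphCache {
  constructor() {
    this.refs = new Map(); // streamId -> WeakRef<graph>
  }
  set(streamId, graph) {
    this.refs.set(streamId, new WeakRef(graph));
  }
  get(streamId) {
    const ref = this.refs.get(streamId);
    const graph = ref?.deref();
    if (ref && !graph) {
      this.refs.delete(streamId); // entry was garbage-collected: drop it
    }
    return graph; // undefined -> caller falls back to Redis
  }
  delete(streamId) {
    this.refs.delete(streamId); // keep the map in sync with job deletion
  }
}
```

The trade-off mirrored here is that a WeakRef never pins the graph in memory, so a cache miss is always possible and the Redis reconstruction path must remain correct on its own.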
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
/**
 * Creates an emitter that streams a tool call run step delta carrying an
 * OAuth authentication URL to the client.
 * @param {object} params
 * @param {object} params.res - Server response used to emit SSE events.
 * @param {string} params.stepId - ID of the run step being updated.
 * @param {object} params.toolCall - The tool call awaiting OAuth authentication.
 * @param {string | null} [params.streamId] - Stream ID when running in resumable mode.
 */
function createRunStepDeltaEmitter({ res, stepId, toolCall, streamId = null }) {
/**
* @param {string} authURL - The URL to redirect the user for OAuth authentication.
* @returns {Promise<void>}
*/
return async function (authURL) {
/** @type {{ id: string; delta: AgentToolCallDelta }} */
const data = {
  id: stepId,
  delta: {
    type: StepTypes.TOOL_CALLS,
    tool_calls: [{ ...toolCall, args: '' }],
    auth: authURL,
    expires_at: Date.now() + Time.TWO_MINUTES,
  },
};
const eventData = { event: GraphEvents.ON_RUN_STEP_DELTA, data };
if (streamId) {
  await GenerationJobManager.emitChunk(streamId, eventData);
- Enhanced cleanup logic to respect the new TTL setting, improving resource management. - Updated comments for clarity on the behavior of the TTL configuration. * refactor: Enhance RedisJobStore with local graph caching for improved performance - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance. - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed. - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects. - Improved documentation and comments for clarity on the caching mechanism and its benefits. * feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling. - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling. - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior. - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability. * fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately. - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting. 
* ci: Improve Redis client disconnection handling in integration tests - Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client. - Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown. - Improved comments for clarity on the disconnection process and error handling. * refactor: Enhance GenerationJobManager and event transports for improved resource management - Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup. - Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs. - Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams. - Improved comments for clarity on resource management and cleanup processes. * refactor: Update GenerationJobManager and ResumableAgentController for improved event handling - Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers. - Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions. - Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes. * fix: Update cache integration test command for stream to ensure proper execution - Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests. - This change enhances the reliability of the test suite by ensuring all tests complete as expected. 
* feat: Add active job management for user and show progress in conversation list - Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks. - Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs. - Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup. - Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback. * feat: Implement active job tracking by user in RedisJobStore - Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks. - Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs. - Updated job creation, update, and deletion methods to manage user-specific job sets effectively. - Enhanced integration tests to validate the new user-specific job management features. * refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore * WIP: Add backend inspect script for easier debugging in production * refactor: title generation logic - Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID. - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load. - Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion. - Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance. * feat: Enhance updateConvoInAllQueries to support moving conversations to the top * chore: temp. 
remove added multi convo * refactor: Update active jobs query integration for optimistic updates on abort - Introduced a new interface for active jobs response to standardize data handling. - Updated query keys for active jobs to ensure consistency across components. - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness. * refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away. - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type. - Refactored imports in ChatView for better organization. * refactor: streamline conversation title generation handling - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase. - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application. * feat: Add USE_REDIS_STREAMS configuration for stream job storage - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set. - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration. - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides. * fix: title generation queue management for assistants - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams. - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete. 
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow. * refactor: streamline agent controller and remove legacy resumable handling - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic. - Deprecated the legacy non-resumable path, providing a clear migration path for future use. - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode. - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance. * feat: Add USE_REDIS_STREAMS configuration to .env.example - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams. - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management. * refactor: remove unused setHeaders middleware from chat route - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process. - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks. 
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
} else {
sendEvent(res, eventData);
}
};
}
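// Hypothetical, self-contained sketch (not the project's actual helper): the
// branch above falls back to `sendEvent(res, eventData)`, which implies minimal
// Server-Sent Events framing on the response stream. The `fakeRes` object below
// is an assumption standing in for an Express ServerResponse.

```javascript
// Minimal SSE framing: an optional `event:` line naming the event type,
// a `data:` line carrying the JSON payload, and a blank line terminating
// the message.
function sendEvent(res, data) {
  if (data && data.event) {
    res.write(`event: ${data.event}\n`);
  }
  res.write(`data: ${JSON.stringify(data)}\n\n`);
}

// Example: capture what would be written to the response stream.
const written = [];
const fakeRes = { write: (chunk) => written.push(chunk) };
sendEvent(fakeRes, { event: 'message', text: 'partial token' });
console.log(written.join(''));
// → event: message
//   data: {"event":"message","text":"partial token"}
```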
/**
* @param {object} params
* @param {ServerResponse} params.res - The Express response object for sending events.
 * @param {string} params.runId - The Run ID (i.e., the message ID).
* @param {string} params.stepId - The ID of the step in the flow.
* @param {ToolCallChunk} params.toolCall - The tool call object containing tool information.
* @param {number} [params.index]
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
 * @param {string | null} [params.streamId] - The stream ID for resumable mode.
 * @returns {() => Promise<void>}
 */
function createRunStepEmitter({ res, runId, stepId, toolCall, index, streamId = null }) {
return async function () {
/** @type {import('@librechat/agents').RunStep} */
const data = {
runId: runId ?? Constants.USE_PRELIM_RESPONSE_MESSAGE_ID,
id: stepId,
type: StepTypes.TOOL_CALLS,
index: index ?? 0,
stepDetails: {
type: StepTypes.TOOL_CALLS,
tool_calls: [toolCall],
},
};
const eventData = { event: GraphEvents.ON_RUN_STEP, data };
if (streamId) {
      await GenerationJobManager.emitChunk(streamId, eventData);
    } else {
      sendEvent(res, eventData);
    }
  };
}
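For illustration, a minimal sketch of the dual-path dispatch above: when a `streamId` is present (resumable mode), events are awaited through the job manager so ordering is preserved; otherwise they are written directly to the SSE response. `MockJobManager`, `sendEvent`, and `emit` here are hypothetical stand-ins, not the real implementations — only the routing logic mirrors the code above.

```javascript
// Hypothetical stand-in for GenerationJobManager: buffers chunks so that
// late subscribers can replay them (the real manager publishes via Redis
// or in-memory transports).
const MockJobManager = {
  chunks: [],
  async emitChunk(streamId, eventData) {
    this.chunks.push({ streamId, eventData });
  },
};

// Direct SSE path: write the event straight to the HTTP response.
function sendEvent(res, eventData) {
  res.write(`event: message\ndata: ${JSON.stringify(eventData)}\n\n`);
}

// Mirrors the dispatch pattern: resumable path when streamId exists,
// direct SSE otherwise.
async function emit({ streamId, res, data }) {
  const eventData = { event: 'on_run_step', data };
  if (streamId) {
    await MockJobManager.emitChunk(streamId, eventData);
  } else {
    sendEvent(res, eventData);
  }
}

// Usage: one resumable emission, one direct SSE emission.
const fakeRes = { written: [], write(s) { this.written.push(s); } };

async function main() {
  await emit({ streamId: 'stream-1', res: fakeRes, data: { id: 1 } });
  await emit({ res: fakeRes, data: { id: 2 } });
  return { buffered: MockJobManager.chunks.length, written: fakeRes.written.length };
}

const done = main().then((result) => {
  console.log(result); // { buffered: 1, written: 1 }
  return result;
});
```

Awaiting `emitChunk` matters in Redis mode: without it, deltas can be published out of order across instances.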
/**
 * Creates an OAuth-start handler, using the flow manager to ensure the
 * login flow is only initiated once per flow ID.
 * @param {object} params
 * @param {string} params.flowId - The ID of the login flow.
 * @param {FlowStateManager<any>} params.flowManager - The flow manager instance.
 * @param {(authURL: string) => void} [params.callback] - Optional callback invoked with the OAuth authentication URL.
 */
function createOAuthStart({ flowId, flowManager, callback }) {
  /**
   * Handles an OAuth login request by registering the login flow once and
   * notifying the client of the authentication URL.
   * @param {string} authURL - The URL to redirect the user to for OAuth authentication.
   * @returns {Promise<void>} Resolves once the login request has been sent to the client.
   */
  return async function (authURL) {
    await flowManager.createFlowWithHandler(flowId, 'oauth_login', async () => {
      callback?.(authURL);
      logger.debug('Sent OAuth login request to client');
      return true;
    });
};
}
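The JSDoc below documents the parameters of a helper that streams a tool-call chunk to the client over SSE, with an optional `streamId` for resumable mode. As a hedged, self-contained sketch of what such an emitter might look like (the function name `emitToolCallChunk`, the event name, and the payload shape are illustrative assumptions, not this file's actual implementation):

```javascript
// Hypothetical sketch only: the name, event string, and payload shape
// are assumptions for illustration; they are not this file's real code.
function emitToolCallChunk({ res, stepId, toolCall, streamId = null }) {
  const event = {
    event: 'on_run_step_delta', // assumed event name
    data: { id: stepId, delta: { tool_call: toolCall } },
  };
  if (streamId != null) {
    // Resumable mode: tag the event so a reconnecting client can be
    // matched back to its stream after a dropped connection.
    event.streamId = streamId;
  }
  // SSE framing: a `data:` line terminated by a blank line.
  res.write(`data: ${JSON.stringify(event)}\n\n`);
  return event;
}
```

Because the chunk is serialized per event, a client that reconnects mid-generation can replay buffered frames without the server holding the original HTTP response open.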
/**
* @param {object} params
* @param {ServerResponse} params.res - The Express response object for sending events.
* @param {string} params.stepId - The ID of the step in the flow.
* @param {ToolCallChunk} params.toolCall - The tool call object containing tool information.
* @param {string | null} [params.streamId] - The stream ID for resumable mode.
*/
/**
 * Creates the handler invoked when an MCP OAuth flow for a tool call ends.
 * The returned function rebuilds the tool call as a run-step delta for the
 * client stream; `streamId` is set only when running in resumable-stream mode.
 */
function createOAuthEnd({ res, stepId, toolCall, streamId = null }) {
  return async function () {
    /** @type {{ id: string; delta: AgentToolCallDelta }} */
    const data = {
      id: stepId,
      delta: {
        type: StepTypes.TOOL_CALLS,
        // Shallow copy so downstream consumers cannot mutate the original tool call
        tool_calls: [{ ...toolCall }],
      },
    };
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling. - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability. * refactor: Enhance resumable SSE handling with improved UI state management and error recovery - Added UI state restoration on successful SSE connection to indicate ongoing submission. - Implemented detailed error handling for network failures, including retry logic with exponential backoff. - Introduced abort event handling to reset UI state on intentional stream closure. - Enhanced debugging capabilities for testing reconnection and clean close scenarios. - Updated generation function to retry on network errors, improving resilience during submission processes. * refactor: Consolidate content state management into IJobStore for improved job handling - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management. - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy. - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks. - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations. * feat: Introduce Redis-backed stream services for enhanced job management - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options. - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios. - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations. 
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness. - Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options. * refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport - Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity. - Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output. - Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency. * refactor: Enhance job state management and TTL configuration in RedisJobStore - Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management. - Refactored the handling of job expiration and cleanup processes to align with new TTL configurations. - Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance. - Improved comments and documentation for better understanding of the changes made. * refactor: cleanupOnComplete option to GenerationJobManager for flexible resource management - Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion. - Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management. - Improved cleanup logic in the cleanup method to handle orphaned resources effectively. - Enhanced documentation and comments for better clarity on the new functionality. * refactor: Update TTL configuration for completed jobs in InMemoryJobStore - Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup. 
- Enhanced cleanup logic to respect the new TTL setting, improving resource management. - Updated comments for clarity on the behavior of the TTL configuration. * refactor: Enhance RedisJobStore with local graph caching for improved performance - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance. - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed. - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects. - Improved documentation and comments for clarity on the caching mechanism and its benefits. * feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling. - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling. - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior. - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability. * fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately. - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting. 
* ci: Improve Redis client disconnection handling in integration tests - Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client. - Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown. - Improved comments for clarity on the disconnection process and error handling. * refactor: Enhance GenerationJobManager and event transports for improved resource management - Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup. - Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs. - Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams. - Improved comments for clarity on resource management and cleanup processes. * refactor: Update GenerationJobManager and ResumableAgentController for improved event handling - Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers. - Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions. - Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes. * fix: Update cache integration test command for stream to ensure proper execution - Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests. - This change enhances the reliability of the test suite by ensuring all tests complete as expected. 
* feat: Add active job management for user and show progress in conversation list - Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks. - Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs. - Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup. - Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback. * feat: Implement active job tracking by user in RedisJobStore - Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks. - Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs. - Updated job creation, update, and deletion methods to manage user-specific job sets effectively. - Enhanced integration tests to validate the new user-specific job management features. * refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore * WIP: Add backend inspect script for easier debugging in production * refactor: title generation logic - Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID. - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load. - Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion. - Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance. * feat: Enhance updateConvoInAllQueries to support moving conversations to the top * chore: temp. 
remove added multi convo * refactor: Update active jobs query integration for optimistic updates on abort - Introduced a new interface for active jobs response to standardize data handling. - Updated query keys for active jobs to ensure consistency across components. - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness. * refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away. - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type. - Refactored imports in ChatView for better organization. * refactor: streamline conversation title generation handling - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase. - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application. * feat: Add USE_REDIS_STREAMS configuration for stream job storage - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set. - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration. - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides. * fix: title generation queue management for assistants - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams. - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete. 
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow. * refactor: streamline agent controller and remove legacy resumable handling - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic. - Deprecated the legacy non-resumable path, providing a clear migration path for future use. - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode. - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance. * feat: Add USE_REDIS_STREAMS configuration to .env.example - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams. - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management. * refactor: remove unused setHeaders middleware from chat route - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process. - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks. 
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
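The commit log above describes `useResumableSSE` retrying failed stream connections with exponential backoff. A minimal sketch of that retry policy follows; the constants and helper names (`backoffDelays`, `reconnectWithBackoff`) are illustrative assumptions, not LibreChat's actual client code:

```javascript
// Hypothetical backoff schedule: exponential growth, capped at a maximum delay
// so reconnect attempts during long outages never wait unboundedly.
function backoffDelays({ attempts = 5, baseMs = 250, maxMs = 8000 } = {}) {
  const delays = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * 2 ** i, maxMs));
  }
  return delays;
}

// Retries `connect` (e.g. opening a new EventSource against the stream
// endpoint) once per scheduled delay, surfacing an error only after all
// attempts are exhausted.
async function reconnectWithBackoff(connect, opts) {
  for (const delayMs of backoffDelays(opts)) {
    try {
      return await connect();
    } catch {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('stream reconnect failed after all retries');
}
```

Capping the delay matters for resumable streams: the server keeps the job alive independently of the HTTP connection, so a client that reconnects within the job's TTL simply replays missed events.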
// Wrap the run-step delta in an event envelope before forwarding it to the stream
const eventData = { event: GraphEvents.ON_RUN_STEP_DELTA, data };
if (streamId) {
🔄 refactor: Sequential Event Ordering in Redis Streaming Mode (#11650) * chore: linting image context file * refactor: Event Emission with Async Handling for Redis Ordering - Updated emitEvent and related functions to be async, ensuring proper event ordering in Redis mode. - Refactored multiple handlers to await emitEvent calls, improving reliability for streaming deltas. - Enhanced GenerationJobManager to await chunk emissions, critical for maintaining sequential event delivery. - Added tests to verify that events are delivered in strict order when using Redis, addressing previous issues with out-of-order messages. * refactor: Clear Pending Buffers and Timeouts in RedisEventTransport - Enhanced the cleanup process in RedisEventTransport by ensuring that pending messages and flush timeouts are cleared when the last subscriber unsubscribes. - Updated the destroy method to also clear pending messages and flush timeouts for all streams, improving resource management and preventing memory leaks. * refactor: Update Event Emission to Async for Improved Ordering - Refactored GenerationJobManager and RedisEventTransport to make emitDone and emitError methods async, ensuring proper event ordering in Redis mode. - Updated all relevant calls to await these methods, enhancing reliability in event delivery. - Adjusted tests to verify that events are processed in the correct sequence, addressing previous issues with out-of-order messages. * refactor: Adjust RedisEventTransport for 0-Indexed Sequence Handling - Updated sequence handling in RedisEventTransport to be 0-indexed, ensuring consistency across event emissions and buffer management. - Modified integration tests to reflect the new sequence logic, improving the accuracy of event processing and delivery order. - Enhanced comments for clarity on sequence management and terminal event handling. 
* chore: Add Redis dump file to .gitignore - Included dump.rdb in .gitignore to prevent accidental commits of Redis database dumps, enhancing repository cleanliness and security. * test: Increase wait times in RedisEventTransport integration tests for CI stability - Adjusted wait times for subscription establishment and event propagation from 100ms and 200ms to 500ms to improve reliability in CI environments. - Enhanced code readability by formatting promise resolution lines for better clarity.
2026-02-05 17:57:33 +01:00
  // Awaited so chunks are emitted sequentially, preserving event order in Redis mode
  await GenerationJobManager.emitChunk(streamId, eventData);
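The `emitChunk` call above hands each delta to the job manager, which both buffers it for late subscribers and fans it out to connected clients; awaiting the call keeps delivery sequential. A minimal in-memory sketch of that contract (the class and its internals are illustrative assumptions, not LibreChat's actual `GenerationJobManager`):

```javascript
// Sketch of the buffer-and-fan-out pattern behind resumable streams.
class SketchJobManager {
  constructor() {
    // streamId -> { buffer: event[], subscribers: Set<fn> }
    this.streams = new Map();
  }

  createJob(streamId) {
    this.streams.set(streamId, { buffer: [], subscribers: new Set() });
  }

  async emitChunk(streamId, eventData) {
    const state = this.streams.get(streamId);
    if (!state) return;
    // Persist the chunk so a reconnecting client can replay missed events.
    state.buffer.push(eventData);
    for (const send of state.subscribers) {
      // Awaited delivery keeps events strictly ordered per subscriber,
      // mirroring the sequential emission required in Redis mode.
      await send(eventData);
    }
  }

  subscribe(streamId, send) {
    const state = this.streams.get(streamId);
    if (!state) return () => {};
    // Replay everything emitted before this subscriber connected.
    for (const evt of state.buffer) send(evt);
    state.subscribers.add(send);
    return () => state.subscribers.delete(send);
  }
}
```

In this sketch a client that subscribes mid-generation first receives the buffered history, then live chunks, which is what lets the UI resume a stream after a dropped connection without losing progress.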
* feat: Add active job management for user and show progress in conversation list - Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks. - Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs. - Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup. - Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback. * feat: Implement active job tracking by user in RedisJobStore - Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks. - Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs. - Updated job creation, update, and deletion methods to manage user-specific job sets effectively. - Enhanced integration tests to validate the new user-specific job management features. * refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore * WIP: Add backend inspect script for easier debugging in production * refactor: title generation logic - Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID. - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load. - Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion. - Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance. * feat: Enhance updateConvoInAllQueries to support moving conversations to the top * chore: temp. 
remove added multi convo * refactor: Update active jobs query integration for optimistic updates on abort - Introduced a new interface for active jobs response to standardize data handling. - Updated query keys for active jobs to ensure consistency across components. - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness. * refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away. - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type. - Refactored imports in ChatView for better organization. * refactor: streamline conversation title generation handling - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase. - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application. * feat: Add USE_REDIS_STREAMS configuration for stream job storage - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set. - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration. - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides. * fix: title generation queue management for assistants - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams. - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete. 
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow. * refactor: streamline agent controller and remove legacy resumable handling - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic. - Deprecated the legacy non-resumable path, providing a clear migration path for future use. - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode. - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance. * feat: Add USE_REDIS_STREAMS configuration to .env.example - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams. - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management. * refactor: remove unused setHeaders middleware from chat route - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process. - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks. 
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
    } else {
      sendEvent(res, eventData);
    }
    logger.debug('Sent OAuth login success to client');
  };
}
/**
 * Creates a handler that cleans up pending OAuth flows when a tool call is aborted.
 * @param {object} params
 * @param {string} params.userId - The ID of the user.
 * @param {string} params.serverName - The name of the server.
 * @param {string} params.toolName - The name of the tool.
 * @param {FlowStateManager<any>} params.flowManager - The flow manager instance.
 */
function createAbortHandler({ userId, serverName, toolName, flowManager }) {
  return function () {
    logger.info(`[MCP][User: ${userId}][${serverName}][${toolName}] Tool call aborted`);
    const flowId = MCPOAuthHandler.generateFlowId(userId, serverName);
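The abort-handler factory above returns a plain function so it can be attached to (and later removed from) an `AbortSignal`. A minimal, self-contained sketch of that wiring, using stand-in names (`makeAbortHandler`, a stub `flowManager`, and a simplified flow-ID format) rather than the actual LibreChat call sites:

```javascript
// Hedged sketch: how a factory shaped like createAbortHandler can be
// wired to an AbortSignal. All names here are illustrative stand-ins.
const calls = [];

// Stub flow manager that records which flows were failed on abort.
const flowManager = {
  failFlow(flowId, flowType, error) {
    calls.push({ flowId, flowType, message: error.message });
  },
};

// Simplified factory mirroring the shape of createAbortHandler.
function makeAbortHandler({ userId, serverName, flowManager }) {
  return function () {
    // Stand-in for MCPOAuthHandler.generateFlowId(userId, serverName).
    const flowId = `${userId}:${serverName}`;
    flowManager.failFlow(flowId, 'mcp_oauth', new Error('Tool call aborted'));
  };
}

const controller = new AbortController();
const handler = makeAbortHandler({ userId: 'u1', serverName: 'files', flowManager });

// Attach once; { once: true } removes the listener after it fires,
// so the handler does not leak if the signal is reused.
controller.signal.addEventListener('abort', handler, { once: true });
controller.abort();

console.log(calls[0].flowId); // → "u1:files"
```

Returning a closure over `userId`/`serverName` keeps the listener reference stable, which is what allows the caller to `removeEventListener` the same function during cleanup.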
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926) * ✨ feat: Implement Resumable Generation Jobs with SSE Support - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections. - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress. - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling. - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings. - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections. * WIP: resuming * WIP: resumable stream * feat: Enhance Stream Management with Abort Functionality - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId. - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration. - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations. - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation. - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly. - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately. * fix: Update query parameter handling in useChatHelpers - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations. 
* fix: improve syncing when switching conversations * fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup * fix: Improve content type mismatch handling in useStepHandler - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates. * fix: Allow dynamic content creation in useChatFunctions - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text. * fix: Refine response message handling in useStepHandler - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow. * refactor: Enhance GenerationJobManager with In-Memory Implementations - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling. - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance. - Enhanced job metadata handling to support user messages and response IDs for resumable functionality. - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage. * refactor: Enhance GenerationJobManager with improved subscriber handling - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count. - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects. - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness. 
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring. * chore: Adjust timeout for subscriber readiness in ResumableAgentController - Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process. * refactor: Update GenerationJobManager documentation and structure - Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design. - Updated comments to reflect the potential for Redis integration and the need for async refactoring. - Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code. * refactor: Convert GenerationJobManager methods to async for improved performance - Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management. - Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling. - Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management. * refactor: Simplify initial response handling in useChatFunctions - Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow. * refactor: Clarify content handling logic in useStepHandler - Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios. - Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability. 
* refactor: Improve message handling logic in useStepHandler - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized. - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow. * fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context. * refactor: Integrate streamId handling for improved resumable functionality for attachments - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing. - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience. - Improved logging and attachment management to accommodate both standard and resumable modes. * refactor: Streamline abort handling and integrate GenerationJobManager for improved job management - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager. - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency. - Simplified cleanup processes and improved error handling during abort operations. - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management. * refactor: Unify streamId and conversationId handling for improved job management - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency. - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks. 
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling. - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability. * refactor: Enhance resumable SSE handling with improved UI state management and error recovery - Added UI state restoration on successful SSE connection to indicate ongoing submission. - Implemented detailed error handling for network failures, including retry logic with exponential backoff. - Introduced abort event handling to reset UI state on intentional stream closure. - Enhanced debugging capabilities for testing reconnection and clean close scenarios. - Updated generation function to retry on network errors, improving resilience during submission processes. * refactor: Consolidate content state management into IJobStore for improved job handling - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management. - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy. - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks. - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations. * feat: Introduce Redis-backed stream services for enhanced job management - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options. - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios. - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations. 
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness. - Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options. * refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport - Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity. - Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output. - Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency. * refactor: Enhance job state management and TTL configuration in RedisJobStore - Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management. - Refactored the handling of job expiration and cleanup processes to align with new TTL configurations. - Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance. - Improved comments and documentation for better understanding of the changes made. * refactor: cleanupOnComplete option to GenerationJobManager for flexible resource management - Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion. - Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management. - Improved cleanup logic in the cleanup method to handle orphaned resources effectively. - Enhanced documentation and comments for better clarity on the new functionality. * refactor: Update TTL configuration for completed jobs in InMemoryJobStore - Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup. 
- Enhanced cleanup logic to respect the new TTL setting, improving resource management. - Updated comments for clarity on the behavior of the TTL configuration. * refactor: Enhance RedisJobStore with local graph caching for improved performance - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance. - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed. - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects. - Improved documentation and comments for clarity on the caching mechanism and its benefits. * feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling. - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling. - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior. - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability. * fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately. - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting. 
* ci: Improve Redis client disconnection handling in integration tests
- Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client.
- Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown.
- Improved comments for clarity on the disconnection process and error handling.

* refactor: Enhance GenerationJobManager and event transports for improved resource management
- Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
- Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs.
- Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams.
- Improved comments for clarity on resource management and cleanup processes.

* refactor: Update GenerationJobManager and ResumableAgentController for improved event handling
- Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
- Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions.
- Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes.

* fix: Update cache integration test command for stream to ensure proper execution
- Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests.
- This change enhances the reliability of the test suite by ensuring all tests complete as expected.
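The "buffer early events, replay on first subscriber" mechanism described above can be sketched as a small emitter. This is a hedged illustration of the pattern only; the class and method names are invented for the example and are not the GenerationJobManager API.

```javascript
/**
 * Sketch of early event buffering: events emitted before any client
 * connects are buffered, then replayed to the first subscriber so no
 * events are lost to connection race conditions.
 * (Illustrative names, not the actual implementation.)
 */
class BufferedStream {
  constructor() {
    this.buffer = [];
    this.subscribers = new Set();
  }

  emit(event) {
    if (this.subscribers.size === 0) {
      // No client yet: buffer so a late subscriber misses nothing.
      this.buffer.push(event);
      return;
    }
    for (const fn of this.subscribers) fn(event);
  }

  subscribe(fn) {
    const first = this.subscribers.size === 0;
    this.subscribers.add(fn);
    if (first && this.buffer.length > 0) {
      // Replay everything produced before the first client connected.
      for (const event of this.buffer) fn(event);
      this.buffer = [];
    }
    return () => this.subscribers.delete(fn);
  }
}

const stream = new BufferedStream();
stream.emit({ text: 'hello' }); // buffered: no subscriber yet
const received = [];
stream.subscribe((e) => received.push(e.text));
console.log(received); // the buffered event was replayed
```

The same idea works whether the transport is in-memory or Redis pub/sub: generation can start immediately, and late subscribers still see the full event sequence.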
* feat: Add active job management for user and show progress in conversation list
- Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks.
- Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs.
- Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
- Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback.

* feat: Implement active job tracking by user in RedisJobStore
- Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks.
- Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs.
- Updated job creation, update, and deletion methods to manage user-specific job sets effectively.
- Enhanced integration tests to validate the new user-specific job management features.

* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore

* WIP: Add backend inspect script for easier debugging in production

* refactor: title generation logic
- Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID.
- Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
- Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion.
- Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance.

* feat: Enhance updateConvoInAllQueries to support moving conversations to the top

* chore: temporarily remove added multi convo

* refactor: Update active jobs query integration for optimistic updates on abort
- Introduced a new interface for active jobs response to standardize data handling.
- Updated query keys for active jobs to ensure consistency across components.
- Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.

* refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints
- Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
- Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
- Refactored imports in ChatView for better organization.

* refactor: streamline conversation title generation handling
- Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
- Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.

* feat: Add USE_REDIS_STREAMS configuration for stream job storage
- Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
- Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
- Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.

* fix: title generation queue management for assistants
- Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
- Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow.

* refactor: streamline agent controller and remove legacy resumable handling
- Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic.
- Deprecated the legacy non-resumable path, providing a clear migration path for future use.
- Adjusted setHeaders middleware to remove unnecessary checks for resumable mode.
- Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance.

* feat: Add USE_REDIS_STREAMS configuration to .env.example
- Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
- Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management.

* refactor: remove unused setHeaders middleware from chat route
- Eliminated the setHeaders middleware from the chat route, streamlining the request handling process.
- This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks.
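The exponential backoff described for title fetching retries can be sketched as follows. The helper name, parameters, and delays are assumptions for illustration; the real hook likely wires this into a query library rather than a bare function.

```javascript
/**
 * Sketch of exponential-backoff polling: retry a fetch with doubling
 * delays until it yields a result or the attempts run out.
 * (`fetchTitle`, attempt counts, and delays are illustrative.)
 */
async function fetchWithBackoff(fetchTitle, { attempts = 5, baseDelayMs = 250 } = {}) {
  for (let i = 0; i < attempts; i++) {
    const title = await fetchTitle();
    if (title != null) return title;
    if (i < attempts - 1) {
      // Delays double each round: 250ms, 500ms, 1000ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  return null; // give up after the final attempt
}
```

A GET endpoint polled this way stays cheap: early attempts return quickly after job completion, while the doubling delay keeps load low when the title is slow to materialize.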
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)

* fix(flow): add immediate abort handling and fix intervalId initialization
- Add immediate abort handler that responds instantly to the abort signal.
- Declare intervalId before the cleanup function to prevent a 'Cannot access before initialization' error.
- Consolidate cleanup logic into a single function to avoid duplicate cleanup.
- Properly remove the abort event listener on cleanup.

* fix(mcp): clean up OAuth flows on abort and simplify flow handling
- Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows.
- Update createAbortHandler to clean up both flow types on tool call abort.
- Pass the abort signal to createFlow in the returnOnOAuth path.
- Simplify handleOAuthRequired to always cancel existing flows and start fresh.
- This ensures the user always gets a new OAuth URL instead of waiting for stale flows.

* fix(agents): handle 'new' conversationId and improve abort reliability
- Treat 'new' as a placeholder that needs a UUID in the request controller.
- Send the JSON response immediately, before tool loading, for a faster SSE connection.
- Use the job's abort controller instead of prelimAbortController.
- Emit errors to the stream if headers were already sent.
- Skip 'new' as a valid ID in the abort endpoint.
- Add a fallback to find active jobs by userId when conversationId is 'new'.

* fix(stream): detect early abort and prevent navigation to non-existent conversation
- Abort the controller on job completion to signal pending operations.
- Detect early abort (no content, no responseMessageId) in abortJob.
- Set conversation and responseMessage to null for early aborts.
- Add an earlyAbort flag to the final event for frontend detection.
- Remove the unused text field from the AbortResult interface.
- The frontend handles earlyAbort by staying on or navigating to a new chat.

* test(mcp): update test to expect signal parameter in createFlow

* fix(agents): include 'new' conversationId in newConvo check for title generation
- When the frontend sends 'new' as the conversationId, it should still trigger title generation, since it is a new conversation.
- Rename a boolean variable for clarity.

* fix(agents): check abort state before completeJob for title generation
- completeJob now triggers the abort signal for cleanup, so the abort state must be captured beforehand to correctly determine whether title generation should run.
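The flow cleanup described in the fix(mcp) entries above can be sketched like this. The `failFlow(flowId, type, error)` call mirrors the usage visible in this file; the surrounding handler shape and the stub flow manager are illustrative assumptions, not the actual createMCPTool code.

```javascript
/**
 * Sketch of the tool-call abort handler: on abort, fail both the pending
 * OAuth flow and the token-retrieval flow so the user gets a fresh OAuth
 * URL next time instead of waiting on a stale flow.
 * (Handler shape is illustrative; failFlow mirrors the file's usage.)
 */
function createAbortHandler({ flowManager, flowId }) {
  return function onAbort() {
    const error = new Error('Tool call aborted');
    // Clean up both mcp_oauth and mcp_get_tokens flows
    flowManager.failFlow(flowId, 'mcp_oauth', error);
    flowManager.failFlow(flowId, 'mcp_get_tokens', error);
  };
}

// Usage with a stub flow manager (illustrative):
const failed = [];
const handler = createAbortHandler({
  flowManager: {
    failFlow: (id, type, err) => failed.push({ id, type, message: err.message }),
  },
  flowId: 'user-1:github',
});
handler();
console.log(failed.map((f) => f.type)); // both flow types are failed
```

Failing both flow types on abort is what prevents a later retry from attaching to a half-finished OAuth flow.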
2025-12-19 10:12:39 -05:00
// Clean up both mcp_oauth and mcp_get_tokens flows
🪐 feat: MCP OAuth 2.0 Discovery Support (#7924)

* chore: Update @modelcontextprotocol/sdk to version 1.12.3 in package.json and package-lock.json
- Bump version of @modelcontextprotocol/sdk to 1.12.3 to incorporate recent updates.
- Update dependencies for ajv and cross-spawn to their latest versions.
- Add ajv as a new dependency in the sdk module.
- Include json-schema-traverse as a new dependency in the sdk module.

* feat: @librechat/auth

* feat: Add crypto module exports to auth package
- Introduced a new crypto module by creating index.ts in the crypto directory.
- Updated the main index.ts of the auth package to export from the new crypto module.

* feat: Update package dependencies and build scripts for auth package
- Added @librechat/auth as a dependency in package.json and package-lock.json.
- Updated build scripts to include the auth package in both frontend and bun build processes.
- Removed unused mongoose and openid-client dependencies from package-lock.json for cleaner dependency management.

* refactor: Migrate crypto utility functions to @librechat/auth
- Replaced local crypto utility imports with the new @librechat/auth package across multiple files.
- Removed the obsolete crypto.js file and its exports.
- Updated relevant services and models to utilize the new encryption and decryption methods from @librechat/auth.

* feat: Enhance OAuth token handling and update dependencies in auth package

* chore: Remove Token model and TokenService due to restructuring of OAuth handling
- Deleted the Token.js model and TokenService.js, which were responsible for managing OAuth tokens.
- This change is part of a broader refactor to streamline OAuth token management and improve code organization.

* refactor: imports from '@librechat/auth' to '@librechat/api' and add OAuth token handling functionality

* refactor: Simplify logger usage in MCP and FlowStateManager classes

* chore: fix imports

* feat: Add OAuth configuration schema to MCP with token exchange method support

* feat: FIRST PASS Implement MCP OAuth flow with token management and error handling
- Added a new route for handling OAuth callbacks and token retrieval.
- Integrated OAuth token storage and retrieval mechanisms.
- Enhanced MCP connection to support automatic OAuth flow initiation on 401 errors.
- Implemented dynamic client registration and metadata discovery for OAuth.
- Updated MCPManager to manage OAuth tokens and handle authentication requirements.
- Introduced comprehensive logging for OAuth processes and error handling.

* refactor: Update MCPConnection and MCPManager to utilize new URL handling
- Added a `url` property to MCPConnection for better URL management.
- Refactored MCPManager to use the new `url` property instead of a deprecated method for OAuth handling.
- Changed logging from info to debug level for flow manager and token methods initialization.
- Improved comments for clarity on existing tokens and OAuth event listener setup.

* refactor: Improve connection timeout error messages in MCPConnection and MCPManager and use initTimeout for connection
- Updated the connection timeout error messages to include the duration of the timeout.
- Introduced a configurable `connectTimeout` variable in both MCPConnection and MCPManager for better flexibility.

* chore: cleanup MCP OAuth Token exchange handling; fix: erroneous use of flowsCache and remove verbose logs

* refactor: Update MCPManager and MCPTokenStorage to use TokenMethods for token management
- Removed direct token storage handling in MCPManager and replaced it with TokenMethods for better abstraction.
- Refactored MCPTokenStorage methods to accept parameters for token operations, enhancing flexibility and readability.
- Improved logging messages related to token persistence and retrieval processes.

* refactor: Update MCP OAuth handling to use static methods and improve flow management
- Refactored MCPOAuthHandler to utilize static methods for initiating and completing OAuth flows, enhancing clarity and reducing instance dependencies.
- Updated MCPManager to pass flowManager explicitly to OAuth handling methods, improving flexibility in flow state management.
- Enhanced comments and logging for better understanding of OAuth processes and flow state retrieval.

* refactor: Integrate token methods into createMCPTool for enhanced token management

* refactor: Change logging from info to debug level in MCPOAuthHandler for improved log management

* chore: clean up logging

* feat: first pass, auth URL from MCP OAuth flow

* chore: Improve logging format for OAuth authentication URL display

* chore: cleanup mcp manager comments

* feat: add connection reconnection logic in MCPManager

* refactor: reorganize token storage handling in MCP
- Moved token storage logic from MCPManager to a new MCPTokenStorage class for better separation of concerns.
- Updated imports to reflect the new token storage structure.
- Enhanced methods for storing, retrieving, updating, and deleting OAuth tokens, improving overall token management.

* chore: update comment for SYSTEM_USER_ID in MCPManager for clarity

* feat: implement refresh token functionality in MCP
- Added refresh token handling in MCPManager to support token renewal for both app-level and user-specific connections.
- Introduced a refreshTokens function to facilitate token refresh logic.
- Enhanced MCPTokenStorage to manage client information and refresh token processes.
- Updated logging for better traceability during token operations.

* chore: cleanup @librechat/auth

* feat: implement MCP server initialization in a separate service
- Added a new service to handle the initialization of MCP servers, improving code organization and readability.
- Refactored the server startup logic to utilize the new initializeMCP function.
- Removed redundant MCP initialization code from the main server file.

* fix: don't log auth url for user connections

* feat: enhance OAuth flow with success and error handling components
- Updated OAuth callback routes to redirect to new success and error pages instead of sending status messages.
- Introduced `OAuthSuccess` and `OAuthError` components to provide user feedback during authentication.
- Added localization support for success and error messages in the translation files.
- Implemented countdown functionality in the success component for a better user experience.

* fix: refresh token handling for user connections, add missing URL and methods
- Add a standard enum for the system user id and a helper for determining app-level vs. user-level connections.

* refactor: update token handling in MCPManager and MCPTokenStorage

* fix: improve error logging in OAuth authentication handler

* fix: concurrency issues for both login url emission and concurrency of oauth flows for shared flows (same user, same server, multiple calls for same server)

* fix: properly fail shared flows for concurrent server calls and prevent duplication of tokens

* chore: remove unused auth package directory from update configuration

* ci: fix mocks in samlStrategy tests

* ci: add mcpConfig to AppService test setup

* chore: remove obsolete MCP OAuth implementation documentation

* fix: update build script for API to use correct command

* chore: bump version of @librechat/api to 1.2.4

* fix: update abort signal handling in createMCPTool function

* fix: add optional clientInfo parameter to refreshTokensFunction metadata

* refactor: replace app.locals.availableTools with getCachedTools in multiple services and controllers for improved tool management

* fix: concurrent refresh token handling issue

* refactor: add signal parameter to getUserConnection method for improved abort handling

* chore: JSDoc typing for `loadEphemeralAgent`

* refactor: update isConnectionActive method to use destructured parameters for improved readability

* feat: implement caching for MCP tools to handle app-level disconnects for loading list of tools

* ci: fix agent test
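One common way to address a "concurrent refresh token handling issue" like the fix above is to deduplicate in-flight refreshes so concurrent callers for the same user/server share one refresh instead of racing. The sketch below shows that generic pattern under that assumption; it is not LibreChat's actual MCPTokenStorage code, and all names are invented for the example.

```javascript
/**
 * Sketch of in-flight refresh deduplication: concurrent callers for the
 * same key await the same promise, so the upstream refresh runs once.
 * (Generic pattern; illustrative names only.)
 */
function createRefreshGate(refreshFn) {
  const inflight = new Map();
  return async function refresh(key) {
    if (inflight.has(key)) {
      // A refresh for this key is already running: share its result.
      return inflight.get(key);
    }
    const promise = refreshFn(key).finally(() => inflight.delete(key));
    inflight.set(key, promise);
    return promise;
  };
}
```

Sharing the promise also means a failed refresh rejects every waiting caller at once, rather than each caller retrying against an already-invalidated refresh token.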
2025-06-17 13:50:33 -04:00
flowManager.failFlow(flowId, 'mcp_oauth', new Error('Tool call aborted'));
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926)

* ✨ feat: Implement Resumable Generation Jobs with SSE Support
- Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
- Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
- Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
- Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
- Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.

* WIP: resuming

* WIP: resumable stream

* feat: Enhance Stream Management with Abort Functionality
- Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
- Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
- Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
- Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
- Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
- Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.

* fix: Update query parameter handling in useChatHelpers
- Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.

* fix: improve syncing when switching conversations

* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup

* fix: Improve content type mismatch handling in useStepHandler
- Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.

* fix: Allow dynamic content creation in useChatFunctions
- Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.

* fix: Refine response message handling in useStepHandler
- Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.

* refactor: Enhance GenerationJobManager with In-Memory Implementations
- Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
- Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
- Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
- Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.

* refactor: Enhance GenerationJobManager with improved subscriber handling
- Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
- Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
- Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.

* chore: Adjust timeout for subscriber readiness in ResumableAgentController
- Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.

* refactor: Update GenerationJobManager documentation and structure
- Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
- Updated comments to reflect the potential for Redis integration and the need for async refactoring.
- Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.

* refactor: Convert GenerationJobManager methods to async for improved performance
- Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
- Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
- Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.

* refactor: Simplify initial response handling in useChatFunctions
- Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.

* refactor: Clarify content handling logic in useStepHandler
- Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
- Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
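The "generation starts only when the first real client connects" synchronization described above can be sketched with a readiness promise plus a timeout fallback. This is an illustration under assumed names; the real createJob/startGeneration APIs are more involved.

```javascript
/**
 * Sketch of subscriber-gated generation: the job exposes a promise that
 * resolves when the first real client subscribes, and the controller
 * races it against a timeout so generation is never blocked forever.
 * (Illustrative names and shapes, not the actual API.)
 */
function createJob() {
  let resolveReady;
  const ready = new Promise((resolve) => {
    resolveReady = resolve;
  });
  let subscribers = 0;
  return {
    ready,
    subscribe() {
      subscribers += 1;
      if (subscribers === 1) resolveReady(); // first real client connected
    },
  };
}

async function waitForSubscriber(job, timeoutMs = 2500) {
  let timer;
  // Fall back after timeoutMs so background generation still starts
  // even if no client ever attaches.
  const timeout = new Promise((resolve) => {
    timer = setTimeout(() => resolve('timeout'), timeoutMs);
  });
  const result = await Promise.race([job.ready.then(() => 'subscriber'), timeout]);
  clearTimeout(timer);
  return result;
}
```

Pairing this gate with early event buffering means generation can also start before the client is ready without losing any events; the timeout is just a safety valve.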
* refactor: Improve message handling logic in useStepHandler
- Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
- Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.

* fix: remove unnecessary content length logging in the chat stream response
- Simplified the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.

* refactor: Integrate streamId handling for improved resumable functionality for attachments
- Added streamId parameter to various functions to support resumable mode in tool loading and memory processing.
- Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
- Improved logging and attachment management to accommodate both standard and resumable modes.

* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management
- Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
- Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
- Simplified cleanup processes and improved error handling during abort operations.
- Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.

* refactor: Unify streamId and conversationId handling for improved job management
- Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
- Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
- Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.

* refactor: Enhance resumable SSE handling with improved UI state management and error recovery
- Added UI state restoration on successful SSE connection to indicate ongoing submission.
- Implemented detailed error handling for network failures, including retry logic with exponential backoff.
- Introduced abort event handling to reset UI state on intentional stream closure.
- Enhanced debugging capabilities for testing reconnection and clean close scenarios.
- Updated generation function to retry on network errors, improving resilience during submission processes.

* refactor: Consolidate content state management into IJobStore for improved job handling
- Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
- Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
- Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
- Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.

* feat: Introduce Redis-backed stream services for enhanced job management
- Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options.
- Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
- Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow. * refactor: streamline agent controller and remove legacy resumable handling - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic. - Deprecated the legacy non-resumable path, providing a clear migration path for future use. - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode. - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance. * feat: Add USE_REDIS_STREAMS configuration to .env.example - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams. - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management. * refactor: remove unused setHeaders middleware from chat route - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process. - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks. 
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
flowManager.failFlow(flowId, 'mcp_get_tokens', new Error('Tool call aborted'));
};
}
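For context, the `failFlow` call above is part of the OAuth abort handling, which fails both pending MCP flow types (`mcp_oauth` and `mcp_get_tokens`) when a tool call is aborted so the user gets a fresh OAuth URL instead of waiting on a stale flow. A minimal sketch, assuming a `createAbortHandler` factory of this shape (the name and parameter shape are illustrative, not the exact implementation):

```javascript
/**
 * Hypothetical sketch of an abort handler that cleans up both pending MCP
 * OAuth flow types on tool call abort. The `flowManager.failFlow(flowId,
 * flowType, error)` signature is assumed from the call above.
 */
function createAbortHandler({ flowManager, flowId }) {
  return function handleAbort() {
    const error = new Error('Tool call aborted');
    // Fail both flow types so the next attempt starts a fresh OAuth flow
    // rather than resolving against a stale one.
    flowManager.failFlow(flowId, 'mcp_oauth', error);
    flowManager.failFlow(flowId, 'mcp_get_tokens', error);
  };
}
```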
/**
 * @param {Object} params
 * @param {() => void} params.runStepEmitter
 * @param {(authURL: string) => void} params.runStepDeltaEmitter
 * @returns {(authURL: string) => void}
 */
function createOAuthCallback({ runStepEmitter, runStepDeltaEmitter }) {
  return function (authURL) {
    runStepEmitter();
    runStepDeltaEmitter(authURL);
  };
}
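A self-contained usage sketch of the callback factory above: when the MCP flow requires user authorization, the returned callback first emits the run step, then streams the auth URL as a delta. The factory is repeated with stub emitters so the snippet runs standalone; the emitter behavior and the example URL are illustrative.

```javascript
// Repeated from above so this snippet runs standalone.
function createOAuthCallback({ runStepEmitter, runStepDeltaEmitter }) {
  return function (authURL) {
    runStepEmitter();
    runStepDeltaEmitter(authURL);
  };
}

// Stub emitters that record what would be sent over SSE.
const events = [];
const onOAuthRequired = createOAuthCallback({
  runStepEmitter: () => events.push('run_step'),
  runStepDeltaEmitter: (authURL) => events.push(`run_step_delta:${authURL}`),
});

// Emits the run step first, then the auth URL delta.
onOAuthRequired('https://idp.example.com/authorize');
```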
/**
 * @param {Object} params
 * @param {ServerResponse} params.res - The Express response object for sending events.
 * @param {IUser} params.user - The user from the request object.
 * @param {string} params.serverName
 * @param {AbortSignal} params.signal
 * @param {string} params.model
 * @param {number} [params.index]
remove added multi convo * refactor: Update active jobs query integration for optimistic updates on abort - Introduced a new interface for active jobs response to standardize data handling. - Updated query keys for active jobs to ensure consistency across components. - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness. * refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away. - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type. - Refactored imports in ChatView for better organization. * refactor: streamline conversation title generation handling - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase. - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application. * feat: Add USE_REDIS_STREAMS configuration for stream job storage - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set. - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration. - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides. * fix: title generation queue management for assistants - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams. - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete. 
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow. * refactor: streamline agent controller and remove legacy resumable handling - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic. - Deprecated the legacy non-resumable path, providing a clear migration path for future use. - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode. - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance. * feat: Add USE_REDIS_STREAMS configuration to .env.example - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams. - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management. * refactor: remove unused setHeaders middleware from chat route - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process. - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks. 
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
* @param {string | null} [params.streamId] - The stream ID for resumable mode.
 * @param {Record<string, Record<string, string>>} [params.userMCPAuthMap] - Per-server map of user-provided MCP auth values, keyed by server name.
 * @returns {Promise<Array<typeof tool | { _call: (toolInput: Object | string) => unknown }>>} An array of tools, each exposing a `_call` method to execute the tool input.
*/
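The return type above describes each loaded tool as an object exposing a `_call` method that accepts either a string or an object input. A minimal sketch of that shape (the `makeToolStub` helper is hypothetical and not part of this module):

```javascript
// Hypothetical stub matching the documented tool shape: each tool exposes
// a `_call` method with signature (toolInput: Object | string) => unknown.
function makeToolStub(name) {
  return {
    name,
    _call: async (toolInput) => {
      // Normalize string inputs into an object before dispatch.
      const args = typeof toolInput === 'string' ? { input: toolInput } : toolInput;
      return { tool: name, args };
    },
  };
}

async function demo() {
  // A loaded-tools array, as in the documented Promise<Array<...>> result.
  const tools = [makeToolStub('search'), makeToolStub('fetch')];
  return Promise.all(tools.map((t) => t._call('hello')));
}
```

Callers can then invoke every tool uniformly through `_call` without caring whether the input arrived as a raw string or a structured object.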
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926) * ✨ feat: Implement Resumable Generation Jobs with SSE Support - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections. - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress. - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling. - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings. - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections. * WIP: resuming * WIP: resumable stream * feat: Enhance Stream Management with Abort Functionality - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId. - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration. - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations. - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation. - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly. - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately. * fix: Update query parameter handling in useChatHelpers - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations. 
* fix: improve syncing when switching conversations * fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup * fix: Improve content type mismatch handling in useStepHandler - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates. * fix: Allow dynamic content creation in useChatFunctions - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text. * fix: Refine response message handling in useStepHandler - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow. * refactor: Enhance GenerationJobManager with In-Memory Implementations - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling. - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance. - Enhanced job metadata handling to support user messages and response IDs for resumable functionality. - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage. * refactor: Enhance GenerationJobManager with improved subscriber handling - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count. - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects. - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness. 
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring. * chore: Adjust timeout for subscriber readiness in ResumableAgentController - Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process. * refactor: Update GenerationJobManager documentation and structure - Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design. - Updated comments to reflect the potential for Redis integration and the need for async refactoring. - Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code. * refactor: Convert GenerationJobManager methods to async for improved performance - Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management. - Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling. - Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management. * refactor: Simplify initial response handling in useChatFunctions - Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow. * refactor: Clarify content handling logic in useStepHandler - Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios. - Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability. 
* refactor: Improve message handling logic in useStepHandler - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized. - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow. * fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context. * refactor: Integrate streamId handling for improved resumable functionality for attachments - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing. - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience. - Improved logging and attachment management to accommodate both standard and resumable modes. * refactor: Streamline abort handling and integrate GenerationJobManager for improved job management - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager. - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency. - Simplified cleanup processes and improved error handling during abort operations. - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management. * refactor: Unify streamId and conversationId handling for improved job management - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency. - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks. 
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling. - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability. * refactor: Enhance resumable SSE handling with improved UI state management and error recovery - Added UI state restoration on successful SSE connection to indicate ongoing submission. - Implemented detailed error handling for network failures, including retry logic with exponential backoff. - Introduced abort event handling to reset UI state on intentional stream closure. - Enhanced debugging capabilities for testing reconnection and clean close scenarios. - Updated generation function to retry on network errors, improving resilience during submission processes. * refactor: Consolidate content state management into IJobStore for improved job handling - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management. - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy. - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks. - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations. * feat: Introduce Redis-backed stream services for enhanced job management - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options. - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios. - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations. 
2025-12-19 10:12:39 -05:00
async function reconnectServer({
res,
user,
index,
signal,
serverName,
configServers,
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926) * ✨ feat: Implement Resumable Generation Jobs with SSE Support - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections. - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress. - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling. - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings. - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections. * WIP: resuming * WIP: resumable stream * feat: Enhance Stream Management with Abort Functionality - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId. - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration. - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations. - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation. - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly. - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately. * fix: Update query parameter handling in useChatHelpers - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations. 
* fix: improve syncing when switching conversations * fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup * fix: Improve content type mismatch handling in useStepHandler - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates. * fix: Allow dynamic content creation in useChatFunctions - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text. * fix: Refine response message handling in useStepHandler - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow. * refactor: Enhance GenerationJobManager with In-Memory Implementations - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling. - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance. - Enhanced job metadata handling to support user messages and response IDs for resumable functionality. - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage. * refactor: Enhance GenerationJobManager with improved subscriber handling - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count. - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects. - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness. 
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring. * chore: Adjust timeout for subscriber readiness in ResumableAgentController - Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process. * refactor: Update GenerationJobManager documentation and structure - Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design. - Updated comments to reflect the potential for Redis integration and the need for async refactoring. - Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code. * refactor: Convert GenerationJobManager methods to async for improved performance - Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management. - Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling. - Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management. * refactor: Simplify initial response handling in useChatFunctions - Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow. * refactor: Clarify content handling logic in useStepHandler - Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios. - Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability. 
* refactor: Improve message handling logic in useStepHandler
- Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
- Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.
* fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.
* refactor: Integrate streamId handling for improved resumable functionality for attachments
- Added streamId parameter to various functions to support resumable mode in tool loading and memory processing.
- Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
- Improved logging and attachment management to accommodate both standard and resumable modes.
* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management
- Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
- Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
- Simplified cleanup processes and improved error handling during abort operations.
- Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.
* refactor: Unify streamId and conversationId handling for improved job management
- Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
- Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
- Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.
* refactor: Enhance resumable SSE handling with improved UI state management and error recovery
- Added UI state restoration on successful SSE connection to indicate ongoing submission.
- Implemented detailed error handling for network failures, including retry logic with exponential backoff.
- Introduced abort event handling to reset UI state on intentional stream closure.
- Enhanced debugging capabilities for testing reconnection and clean close scenarios.
- Updated generation function to retry on network errors, improving resilience during submission processes.
* refactor: Consolidate content state management into IJobStore for improved job handling
- Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
- Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
- Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
- Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.
* feat: Introduce Redis-backed stream services for enhanced job management
- Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options.
- Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
- Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness.
- Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options.
* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport
- Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity.
- Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output.
- Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency.
* refactor: Enhance job state management and TTL configuration in RedisJobStore
- Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management.
- Refactored the handling of job expiration and cleanup processes to align with new TTL configurations.
- Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance.
- Improved comments and documentation for better understanding of the changes made.
* refactor: cleanupOnComplete option to GenerationJobManager for flexible resource management
- Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion.
- Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management.
- Improved cleanup logic in the cleanup method to handle orphaned resources effectively.
- Enhanced documentation and comments for better clarity on the new functionality.
* refactor: Update TTL configuration for completed jobs in InMemoryJobStore
- Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup.
- Enhanced cleanup logic to respect the new TTL setting, improving resource management.
- Updated comments for clarity on the behavior of the TTL configuration.
* refactor: Enhance RedisJobStore with local graph caching for improved performance
- Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance.
- Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed.
- Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects.
- Improved documentation and comments for clarity on the caching mechanism and its benefits.
* feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support
- Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling.
- Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling.
- Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior.
- Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability.
* fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers
- Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately.
- Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting.
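The WeakRef-based local graph cache described above (same-instance reconnects skip the Redis round-trip, with stale entries pruned once the graph is garbage-collected) might look roughly like this; `LocalGraphCache` and its method names are assumptions, not the RedisJobStore's actual implementation:

```javascript
// Sketch of a local graph cache keyed by stream id: entries hold WeakRefs
// so the cache never keeps a finished graph alive, and a deref() miss
// means the graph was GC'd and the entry is stale.
class LocalGraphCache {
  constructor() {
    this.refs = new Map(); // streamId -> WeakRef<graph>
  }
  set(streamId, graph) {
    this.refs.set(streamId, new WeakRef(graph));
  }
  get(streamId) {
    const ref = this.refs.get(streamId);
    if (!ref) return null;
    const graph = ref.deref();
    if (!graph) {
      this.refs.delete(streamId); // graph was GC'd: prune the stale entry
      return null;
    }
    return graph;
  }
  delete(streamId) {
    this.refs.delete(streamId);
  }
}

const cache = new LocalGraphCache();
const graph = { runSteps: [] };
cache.set('stream-1', graph);
```

On a same-instance reconnect, a `get` hit avoids reconstructing content from Redis chunks; a miss falls through to the Redis path.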
* ci: Improve Redis client disconnection handling in integration tests
- Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client.
- Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown.
- Improved comments for clarity on the disconnection process and error handling.
* refactor: Enhance GenerationJobManager and event transports for improved resource management
- Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
- Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs.
- Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams.
- Improved comments for clarity on resource management and cleanup processes.
* refactor: Update GenerationJobManager and ResumableAgentController for improved event handling
- Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
- Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions.
- Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes.
* fix: Update cache integration test command for stream to ensure proper execution
- Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests.
- This change enhances the reliability of the test suite by ensuring all tests complete as expected.
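The early-buffering behavior described above (events emitted before the first subscriber connects are queued and replayed on connect, so none are lost to the race) can be sketched like this; `createBufferedEmitter` is a hypothetical name for illustration:

```javascript
// Sketch: buffer events until the first subscriber attaches, then replay
// them in order before delivering live events directly.
function createBufferedEmitter() {
  const buffer = [];
  let subscriber = null;
  return {
    emit(event) {
      if (subscriber) subscriber(event);
      else buffer.push(event); // no one listening yet: buffer it
    },
    subscribe(fn) {
      subscriber = fn;
      while (buffer.length) fn(buffer.shift()); // replay buffered events in order
    },
  };
}

const emitter = createBufferedEmitter();
const received = [];
emitter.emit({ type: 'run_step', id: 1 });
emitter.emit({ type: 'message_delta', id: 2 });
emitter.subscribe((e) => received.push(e.id));
emitter.emit({ type: 'message_delta', id: 3 });
```

The subscriber sees the two buffered events first, then the live one, preserving stream order across the connect race.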
* feat: Add active job management for user and show progress in conversation list
- Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks.
- Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs.
- Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
- Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback.
* feat: Implement active job tracking by user in RedisJobStore
- Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks.
- Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs.
- Updated job creation, update, and deletion methods to manage user-specific job sets effectively.
- Enhanced integration tests to validate the new user-specific job management features.
* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore
* WIP: Add backend inspect script for easier debugging in production
* refactor: title generation logic
- Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID.
- Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
- Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion.
- Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance.
* feat: Enhance updateConvoInAllQueries to support moving conversations to the top
* chore: temp. remove added multi convo
* refactor: Update active jobs query integration for optimistic updates on abort
- Introduced a new interface for active jobs response to standardize data handling.
- Updated query keys for active jobs to ensure consistency across components.
- Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.
* refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints
- Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
- Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
- Refactored imports in ChatView for better organization.
* refactor: streamline conversation title generation handling
- Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
- Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.
* feat: Add USE_REDIS_STREAMS configuration for stream job storage
- Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
- Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
- Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.
* fix: title generation queue management for assistants
- Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
- Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow.
* refactor: streamline agent controller and remove legacy resumable handling
- Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic.
- Deprecated the legacy non-resumable path, providing a clear migration path for future use.
- Adjusted setHeaders middleware to remove unnecessary checks for resumable mode.
- Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance.
* feat: Add USE_REDIS_STREAMS configuration to .env.example
- Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
- Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management.
* refactor: remove unused setHeaders middleware from chat route
- Eliminated the setHeaders middleware from the chat route, streamlining the request handling process.
- This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks.
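The queueListeners mechanism described above (adding a conversation ID to the title queue notifies listeners, so hooks tracking a `queueVersion` state know to refetch) can be sketched as follows; the exact signatures are assumptions based on the commit description:

```javascript
// Sketch: a title-generation queue that notifies registered listeners
// whenever a conversation id is added, mirroring the queueTitleGeneration
// + queueListeners flow described above.
const titleQueue = new Set();
const queueListeners = new Set();

function queueTitleGeneration(conversationId) {
  titleQueue.add(conversationId);
  for (const listener of queueListeners) listener(); // notify subscribers of the change
}

function onQueueChange(listener) {
  queueListeners.add(listener);
  return () => queueListeners.delete(listener); // unsubscribe handle
}

// A hook-like consumer bumps a version counter on every queue change.
let queueVersion = 0;
const unsubscribe = onQueueChange(() => {
  queueVersion += 1;
});
queueTitleGeneration('convo-1');
queueTitleGeneration('convo-2');
unsubscribe();
queueTitleGeneration('convo-3'); // no longer observed
```

In the React version this counter would live in state, triggering a re-render (and title refetch) each time the queue changes.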
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)
* fix(flow): add immediate abort handling and fix intervalId initialization
- Add immediate abort handler that responds instantly to abort signal
- Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error
- Consolidate cleanup logic into single function to avoid duplicate cleanup
- Properly remove abort event listener on cleanup
* fix(mcp): clean up OAuth flows on abort and simplify flow handling
- Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows
- Update createAbortHandler to clean up both flow types on tool call abort
- Pass abort signal to createFlow in returnOnOAuth path
- Simplify handleOAuthRequired to always cancel existing flows and start fresh
- This ensures user always gets a new OAuth URL instead of waiting for stale flows
* fix(agents): handle 'new' conversationId and improve abort reliability
- Treat 'new' as placeholder that needs UUID in request controller
- Send JSON response immediately before tool loading for faster SSE connection
- Use job's abort controller instead of prelimAbortController
- Emit errors to stream if headers already sent
- Skip 'new' as valid ID in abort endpoint
- Add fallback to find active jobs by userId when conversationId is 'new'
* fix(stream): detect early abort and prevent navigation to non-existent conversation
- Abort controller on job completion to signal pending operations
- Detect early abort (no content, no responseMessageId) in abortJob
- Set conversation and responseMessage to null for early aborts
- Add earlyAbort flag to final event for frontend detection
- Remove unused text field from AbortResult interface
- Frontend handles earlyAbort by staying on/navigating to new chat
* test(mcp): update test to expect signal parameter in createFlow
* fix(agents): include 'new' conversationId in newConvo check for title generation
When frontend sends 'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity
* fix(agents): check abort state before completeJob for title generation
completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
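The USE_REDIS_STREAMS resolution described in the history above (the stream-specific flag defaults to the value of USE_REDIS when not explicitly set) reduces to a small lookup; the helper names below are hypothetical and the env-parsing convention is an assumption about cacheConfig's behavior:

```javascript
// Sketch: resolve the effective USE_REDIS_STREAMS setting, falling back
// to the global USE_REDIS flag when the stream flag is not set.
function isEnabled(value) {
  return String(value).toLowerCase() === 'true';
}

function resolveUseRedisStreams(env) {
  if (env.USE_REDIS_STREAMS !== undefined) {
    return isEnabled(env.USE_REDIS_STREAMS); // explicit value wins
  }
  return isEnabled(env.USE_REDIS); // default: follow the global Redis flag
}
```

So `USE_REDIS=true` alone enables Redis-backed streams, while `USE_REDIS_STREAMS=false` opts streams out even when Redis is on elsewhere.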
  userMCPAuthMap,
  streamId = null,
}) {
🦥 refactor: Event-Driven Lazy Tool Loading (#11588)
* refactor: json schema tools with lazy loading
- Added LocalToolExecutor class for lazy loading and caching of tools during execution.
- Introduced ToolExecutionContext and ToolExecutor interfaces for better type management.
- Created utility functions to generate tool proxies with JSON schema support.
- Added ExtendedJsonSchema type for enhanced schema definitions.
- Updated existing toolkits to utilize the new schema and executor functionalities.
chore: update @librechat/agents to version 3.1.2
refactor: enhance tool loading optimization and classification
- Improved the loadAgentToolsOptimized function to utilize a proxy pattern for all tools, enabling deferred execution and reducing overhead.
- Introduced caching for tool instances and refined tool classification logic to streamline tool management.
- Updated the handling of MCP tools to improve logging and error reporting for missing tools in the cache.
- Enhanced the structure of tool definitions to support better classification and integration with existing tools.
refactor: modularize tool loading and enhance optimization
- Moved the loadAgentToolsOptimized function to a new service file for better organization and maintainability.
- Updated the ToolService to utilize the new service for optimized tool loading, improving code clarity.
- Removed legacy tool loading methods and streamlined the tool loading process to enhance performance and reduce complexity.
- Introduced feature flag handling for optimized tool loading, allowing for easier toggling of this functionality.
refactor: replace loadAgentToolsWithFlag with loadAgentTools in tool loader
refactor: enhance MCP tool loading with proxy creation and classification
refactor: optimize MCP tool loading by grouping tools by server
- Introduced a Map to group cached tools by server name, improving the organization of tool data.
- Updated the createMCPProxyTool function to accept server name directly, enhancing clarity.
- Refactored the logic for handling MCP tools, streamlining the process of creating proxy tools for classification.
refactor: enhance MCP tool loading and proxy creation
- Added functionality to retrieve MCP server tools and reinitialize servers if necessary, improving tool availability.
- Updated the tool loading logic to utilize a Map for organizing tools by server, enhancing clarity and performance.
- Refactored the createToolProxy function to ensure a default response format, streamlining tool creation.
refactor: update createToolProxy to ensure consistent response format
- Modified the createToolProxy function to await the executor's execution and validate the result format.
- Ensured that the function returns a default response structure when the result is not an array of two elements, enhancing reliability in tool proxy creation.
refactor: ToolExecutionContext with toolCall property
- Added toolCall property to ToolExecutionContext interface for improved context handling during tool execution.
- Updated LocalToolExecutor to include toolCall in the runnable configuration, allowing for more flexible tool invocation.
- Modified createToolProxy to pass toolCall from the configuration, ensuring consistent context across tool executions.
refactor: enhance event-driven tool execution and logging
- Introduced ToolExecuteOptions for improved handling of event-driven tool execution, allowing for parallel execution of tool calls.
- Updated getDefaultHandlers to include support for ON_TOOL_EXECUTE events, enhancing the flexibility of tool invocation.
- Added detailed logging in LocalToolExecutor to track tool loading and execution metrics, improving observability and debugging capabilities.
- Refactored initializeClient to integrate event-driven tool loading, ensuring compatibility with the new execution model.
chore: update @librechat/agents to version 3.1.21
refactor: remove legacy tool loading and executor components
- Eliminated the loadAgentToolsWithFlag function, simplifying the tool loading process by directly using loadAgentTools.
- Removed the LocalToolExecutor and related executor components to streamline the tool execution architecture.
- Updated ToolService and related files to reflect the removal of deprecated features, enhancing code clarity and maintainability.
refactor: enhance tool classification and definitions handling
- Updated the loadAgentTools function to return toolDefinitions alongside toolRegistry, improving the structure of tool data returned to clients.
- Removed the convertRegistryToDefinitions function from the initialize.js file, simplifying the initialization process.
- Adjusted the buildToolClassification function to ensure toolDefinitions are built and returned simultaneously with the toolRegistry, enhancing efficiency in tool management.
- Updated type definitions in initialize.ts to include toolDefinitions, ensuring consistency across the codebase.
refactor: implement event-driven tool execution handler
- Introduced createToolExecuteHandler function to streamline the handling of ON_TOOL_EXECUTE events, allowing for parallel execution of tool calls.
- Updated getDefaultHandlers to utilize the new handler, simplifying the event-driven architecture.
- Added handlers.ts file to encapsulate tool execution logic, improving code organization and maintainability.
- Enhanced OpenAI handlers to integrate the new tool execution capabilities, ensuring consistent event handling across the application.
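An ON_TOOL_EXECUTE-style handler as described above (tool calls from one event are looked up in the loaded tool map and executed in parallel, with missing tools reported instead of failing the batch) might be sketched like this; `executeToolCalls` and the tool/result shapes are hypothetical, not the createToolExecuteHandler API:

```javascript
// Sketch: execute a batch of tool calls in parallel against a map of
// loaded tools, surfacing missing tools as per-call errors.
async function executeToolCalls(toolCalls, loadedTools) {
  return Promise.all(
    toolCalls.map(async (toolCall) => {
      const tool = loadedTools.get(toolCall.name);
      if (!tool) {
        // report the missing tool rather than rejecting the whole batch
        return { id: toolCall.id, error: `Tool "${toolCall.name}" not found` };
      }
      const output = await tool.invoke(toolCall.args);
      return { id: toolCall.id, output };
    }),
  );
}

const loadedTools = new Map([
  ['add', { invoke: async ({ a, b }) => a + b }],
]);
```

`Promise.all` over the mapped calls is what gives the parallel execution the commits describe; each result carries the originating call id so outputs can be matched back to the event.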
refactor: integrate event-driven tool execution options
- Added toolExecuteOptions to support event-driven tool execution in OpenAI and responses controllers, enhancing flexibility in tool handling.
- Updated handlers to utilize createToolExecuteHandler, allowing for streamlined execution of tools during agent interactions.
- Refactored service dependencies to include toolExecuteOptions, ensuring consistent integration across the application.
refactor: enhance tool loading with definitionsOnly parameter
- Updated createToolLoader and loadAgentTools functions to include a definitionsOnly parameter, allowing for the retrieval of only serializable tool definitions in event-driven mode.
- Adjusted related interfaces and documentation to reflect the new parameter, improving clarity and flexibility in tool management.
- Ensured compatibility across various components by integrating the definitionsOnly option in the initialization process.
refactor: improve agent tool presence check in initialization
- Added a check for tool presence using a new hasAgentTools variable, which evaluates both structuredTools and toolDefinitions.
- Updated the conditional logic in the agent initialization process to utilize the hasAgentTools variable, enhancing clarity and maintainability in tool management.
refactor: enhance agent tool extraction to support tool definitions
- Updated the extractMCPServers function to handle both tool instances and serializable tool definitions, improving flexibility in agent tool management.
- Added a new property toolDefinitions to the AgentWithTools type for better integration of event-driven mode.
- Enhanced documentation to clarify the function's capabilities in extracting unique MCP server names from both tools and tool definitions.
refactor: enhance tool classification and registry building
- Added serverName property to ToolDefinition for improved tool identification.
- Introduced buildToolRegistry function to streamline the creation of tool registries based on MCP tool definitions and agent options.
- Updated buildToolClassification to utilize the new registry building logic, ensuring basic definitions are returned even when advanced classification features are not allowed.
- Enhanced documentation and logging for clarity in tool classification processes.
refactor: update @librechat/agents dependency to version 3.1.22
fix: expose loadTools function in ToolService
- Added loadTools function to the exported module in ToolService.js, enhancing the accessibility of tool loading functionality.
chore: remove configurable options from tool execute options in OpenAI controller
refactor: enhance tool loading mechanism to utilize agent-specific context
chore: update @librechat/agents dependency to version 3.1.23
fix: simplify result handling in createToolExecuteHandler
* refactor: loadToolDefinitions for efficient tool loading in event-driven mode
* refactor: replace legacy tool loading with loadToolsForExecution in OpenAI and responses controllers
- Updated OpenAIChatCompletionController and createResponse functions to utilize loadToolsForExecution for improved tool loading.
- Removed deprecated loadToolsLegacy references, streamlining the tool execution process.
- Enhanced tool loading options to include agent-specific context and configurations.
* refactor: enhance tool loading and execution handling
- Introduced loadActionToolsForExecution function to streamline loading of action tools, improving organization and maintainability.
- Updated loadToolsForExecution to handle both regular and action tools, optimizing the tool loading process.
- Added detailed logging for missing tools in createToolExecuteHandler, enhancing error visibility.
- Refactored tool definitions to normalize action tool names, improving consistency in tool management.
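The result normalization described above for createToolProxy (await the executor, pass through results already in the two-element `[content, artifact]` shape, and wrap anything else in a default structure) comes down to a small guard; `normalizeToolResult` is a hypothetical standalone name for the check:

```javascript
// Sketch: ensure a tool result is always a [content, artifact] tuple,
// wrapping non-conforming results with a null artifact.
function normalizeToolResult(result) {
  if (Array.isArray(result) && result.length === 2) {
    return result; // already in [content, artifact] form
  }
  // default response structure: stringify non-string content, no artifact
  return [typeof result === 'string' ? result : JSON.stringify(result), null];
}
```

This keeps proxy-created tools behavior-compatible with tools that already return the CONTENT_AND_ARTIFACT tuple.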
* refactor: enhance built-in tool definitions loading
- Updated loadToolDefinitions to include descriptions and parameters from the tool registry for built-in tools, improving the clarity and usability of tool definitions.
- Integrated getToolDefinition to streamline the retrieval of tool metadata, enhancing the overall tool management process.
* feat: add action tool definitions loading to tool service
- Introduced getActionToolDefinitions function to load action tool definitions based on agent ID and tool names, enhancing the tool loading process.
- Updated loadToolDefinitions to integrate action tool definitions, allowing for better management and retrieval of action-specific tools.
- Added comprehensive tests for action tool definitions to ensure correct loading and parameter handling, improving overall reliability and functionality.
* chore: update @librechat/agents dependency to version 3.1.26
* refactor: add toolEndCallback to handle tool execution results
* fix: tool definitions and execution handling
- Introduced native tools (execute_code, file_search, web_search) to the tool service, allowing for better integration and management of these tools.
- Updated isBuiltInTool function to include native tools in the built-in check, improving tool recognition.
- Added comprehensive tests for loading parameters of native tools, ensuring correct functionality and parameter handling.
- Enhanced tool definitions registry to include new agent tool definitions, streamlining tool retrieval and management.
* refactor: enhance tool loading and execution context
- Added toolRegistry to the context for OpenAIChatCompletionController and createResponse functions, improving tool management.
- Updated loadToolsForExecution to utilize toolRegistry for better integration of programmatic tools and tool search functionalities.
- Enhanced the initialization process to include toolRegistry in agent context, streamlining tool access and configuration.
- Refactored tool classification logic to support event-driven execution, ensuring compatibility with new tool definitions.
* chore: add request duration logging to OpenAI and Responses controllers
- Introduced logging for request start and completion times in OpenAIChatCompletionController and createResponse functions.
- Calculated and logged the duration of each request, enhancing observability and performance tracking.
- Improved debugging capabilities by providing detailed logs for both streaming and non-streaming responses.
* chore: update @librechat/agents dependency to version 3.1.27
* refactor: implement buildToolSet function for tool management
- Introduced buildToolSet function to streamline the creation of tool sets from agent configurations, enhancing tool management across various controllers.
- Updated AgentClient, OpenAIChatCompletionController, and createResponse functions to utilize buildToolSet, improving consistency in tool handling.
- Added comprehensive tests for buildToolSet to ensure correct functionality and edge case handling, enhancing overall reliability.
* refactor: update import paths for ToolExecuteOptions and createToolExecuteHandler
* fix: update GoogleSearch.js description for maximum search results
- Changed the default maximum number of search results from 10 to 5 in the Google Search JSON schema description, ensuring accurate documentation of the expected behavior.
* chore: remove deprecated Browser tool and associated assets
- Deleted the Browser tool definition from manifest.json, which included its name, plugin key, description, and authentication configuration.
- Removed the web-browser.svg asset as it is no longer needed following the removal of the Browser tool.
* fix: ensure tool definitions are valid before processing
- Added a check to verify the existence of tool definitions in the registry before accessing their properties, preventing potential runtime errors.
- Updated the loading logic for built-in tool definitions to ensure that only valid definitions are pushed to the built-in tool definitions array.
* fix: extend ExtendedJsonSchema to support 'null' type and nullable enums
- Updated the ExtendedJsonSchema type to include 'null' as a valid type option.
- Modified the enum property to accept an array of values that can include strings, numbers, booleans, and null, enhancing schema flexibility.
* test: add comprehensive tests for tool definitions loading and registry behavior
- Implemented tests to verify the handling of built-in tools without registry definitions, ensuring they are skipped correctly.
- Added tests to confirm that built-in tools include descriptions and parameters in the registry.
- Enhanced tests for action tools, checking for proper inclusion of metadata and handling of tools without parameters in the registry.
* test: add tests for mixed-type and number enum schema handling
- Introduced tests to validate the parsing of mixed-type enum values, including strings, numbers, booleans, and null.
- Added tests for number enum schema values to ensure correct parsing of numeric inputs, enhancing schema validation coverage.
* fix: update mock implementation for @librechat/agents
- Changed the mock for @librechat/agents to spread the actual module's properties, ensuring that all necessary functionalities are preserved in tests.
- This adjustment enhances the accuracy of the tests by reflecting the real structure of the module.
* fix: change max_results type in GoogleSearch schema from number to integer
- Updated the type of max_results in the Google Search JSON schema to 'integer' for better type accuracy and validation consistency.
* fix: update max_results description and type in GoogleSearch schema
- Changed the type of max_results from 'number' to 'integer' for improved type accuracy.
- Updated the description to reflect the new default maximum number of search results, changing it from 10 to 5.
* refactor: remove unused code and improve tool registry handling - Eliminated outdated comments and conditional logic related to event-driven mode in the ToolService. - Enhanced the handling of the tool registry by ensuring it is configurable for better integration during tool execution. * feat: add definitionsOnly option to buildToolClassification for event-driven mode - Introduced a new parameter, definitionsOnly, to the BuildToolClassificationParams interface to enable a mode that skips tool instance creation. - Updated the buildToolClassification function to conditionally add tool definitions without instantiating tools when definitionsOnly is true. - Modified the loadToolDefinitions function to pass definitionsOnly as true, ensuring compatibility with the new feature. * test: add unit tests for buildToolClassification with definitionsOnly option - Implemented tests to verify the behavior of buildToolClassification when definitionsOnly is set to true or false. - Ensured that tool instances are not created when definitionsOnly is true, while still adding necessary tool definitions. - Confirmed that loadAuthValues is called appropriately based on the definitionsOnly parameter, enhancing test coverage for this new feature.
2026-02-01 08:50:57 -05:00
  logger.debug(
    `[MCP][reconnectServer] serverName: ${serverName}, user: ${user?.id}, hasUserMCPAuthMap: ${!!userMCPAuthMap}`,
  );
  // Throttle repeated reconnect attempts per user/server pair to prevent reconnect storms
  const throttleKey = `${user.id}:${serverName}`;
  const now = Date.now();
  const lastAttempt = lastReconnectAttempts.get(throttleKey) ?? 0;
  if (now - lastAttempt < RECONNECT_THROTTLE_MS) {
    logger.debug(`[MCP][reconnectServer] Throttled reconnect for ${serverName}`);
    return null;
  }
  lastReconnectAttempts.set(throttleKey, now);
  // Evict stale throttle entries so the map stays bounded
  evictStale(lastReconnectAttempts, RECONNECT_THROTTLE_MS);
  const runId = Constants.USE_PRELIM_RESPONSE_MESSAGE_ID;
  const flowId = `${user.id}:${serverName}:${Date.now()}`;
  const flowManager = getFlowStateManager(getLogStores(CacheKeys.FLOWS));
  const stepId = `step_oauth_login_${serverName}`;
  const toolCall = {
    id: flowId,
    name: buildOAuthToolCallName(serverName),
    type: 'tool_call_chunk',
  };
  // Set up an abort handler to clean up OAuth flows if the request is aborted
  const oauthFlowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
  const abortHandler = () => {
    logger.info(
      `[MCP][User: ${user.id}][${serverName}] Tool loading aborted, cleaning up OAuth flows`,
    );
    // Clean up both mcp_oauth and mcp_get_tokens flows
    flowManager.failFlow(oauthFlowId, 'mcp_oauth', new Error('Tool loading aborted'));
    flowManager.failFlow(oauthFlowId, 'mcp_get_tokens', new Error('Tool loading aborted'));
  };
  if (signal) {
    signal.addEventListener('abort', abortHandler, { once: true });
  }
  try {
    // Emit OAuth progress to the client as run-step events on the response stream
    const runStepEmitter = createRunStepEmitter({
      res,
      index,
      runId,
      stepId,
      toolCall,
      streamId,
    });
    const runStepDeltaEmitter = createRunStepDeltaEmitter({
      res,
      stepId,
      toolCall,
      streamId,
    });
    const callback = createOAuthCallback({ runStepEmitter, runStepDeltaEmitter });
    const oauthStart = createOAuthStart({
      res,
      flowId,
      callback,
      flowManager,
    });
    return await reinitMCPServer({
      user,
      signal,
      serverName,
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)

* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring

- Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`
- Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig`
- Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector
- Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB

* feat: three-layer MCPServersRegistry with config cache and lazy init

- Add `configCacheRepo` as third repository layer between YAML cache and DB for admin-defined config-source MCP servers
- Implement `ensureConfigServers()` that identifies config-override servers from resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'`
- Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via `pendingConfigInits` map
- Extend `getAllServerConfigs()` with optional `configServers` param for three-way merge: YAML → Config → User
- Add `getServerConfig()` lookup through config cache layer
- Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations
- Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()`

* feat: wire tenant context into MCP controllers, services, and cache invalidation

- Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers
- Pass `ensureConfigServers()` results through `getAllServerConfigs()` for three-way merge of YAML + Config + User servers
- Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` from ALS
- Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers

* feat: enforce tenantMcpPolicy on admin config mcpServers mutations

- Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains)
- Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated
- Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http)
- Validate server domains against policy allowlist when configured

* revert: remove tenantMcpPolicy schema and enforcement

The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future.

- Remove tenantMcpPolicy from mcpSettings Zod schema
- Remove validateMcpServerPolicy helper and TenantMcpPolicy interface
- Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers

* test: update test assertions for source field and config-server wiring

- Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs
- Add getTenantId and ensureConfigServers mocks to MCP route tests
- Add getAppConfig mock to route test Config service mock
- Update getMCPSetupData assertion to expect second options argument
- Update getAllServerConfigs assertions for new configServers parameter

* fix: disconnect active connections when config-source servers are evicted

When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout.
- Return evicted server names from invalidateConfigCache()
- Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect()

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

CRITICAL fixes:
- Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations
- Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode

MAJOR fixes:
- Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected
- Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS
- Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context

MINOR fixes:
- Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate
- Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing
- Remove narrative inline comments per style guide
- Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws)
- Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request

Test updates:
- Add ensureConfigServers mock to registry test fixtures
- Update getMCPSetupData assertions for inline OAuth derivation

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

CRITICAL fixes:
- Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory
- Fix dbSourced fail-closed: use source field when present, fall back to legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field)

MAJOR fixes:
- Add CONFIG_CACHE_NAMESPACE to aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation

MINOR fixes:
- Update MCPServerInspector test assertion for dbSourced change

* fix: restore getServerConfig lookup for config-source servers (NEW-1)

Add configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.

- Populate configNameToKey in ensureSingleConfigServer
- Clear configNameToKey in invalidateConfigCache and reset
- Clear stale read-through cache entries after lazy init
- Remove dead code in invalidateConfigCache (config.title, key parsing)
- Add getServerConfig tests for config-source server lookup

* fix: eliminate configNameToKey race via caller-provided configServers param

Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race.
- Add optional configServers param to getServerConfig; when provided, returns matching config directly without any global lookup
- Remove configNameToKey map entirely (was the source of the race)
- Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons)
- Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call
- Add cross-tenant isolation test for getServerConfig

* fix: populate read-through cache after config server lazy init

After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.

* fix: user-scoped getServerConfig fallback to server-only cache key

When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.

* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race

CRITICAL: Add findInConfigCache fallback in getServerConfig so config-source servers remain reachable after readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.

MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).

MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.

MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove redundant .catch(() => {}) inside Promise.allSettled. Tighten dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).

* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext

Add optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.

* fix: stale failure stubs retry after 5 min, upsert for cross-process races

- Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race)
- Extract upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded
- Add test for stale-stub retry after CONFIG_STUB_RETRY_MS

* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup

- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired)
- Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined
- Log double-failure in upsertConfigCache instead of silently swallowing
- Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests

* fix: server-only readThrough fallback only returns truthy values

Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.

* fix: remove findInConfigCache to eliminate cross-tenant config leakage

The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through:

1. The configServers param (callers with tenant context from ALS)
2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)

Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return.

- Remove findInConfigCache method and its call in getServerConfig
- Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup)
- Update tests to document tenant isolation behavior after cache expiry

* style: fix import order per AGENTS.md conventions

Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.
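The fail-closed source check that these commits converge on can be sketched roughly as follows. Field names (`source`, `dbId`) follow the commit text; the real helper lives in `mcp/utils.ts` and may differ in shape.

```javascript
// Sketch of the fail-closed isUserSourced() check, assuming the field
// semantics described in the commit messages (not the actual implementation).
function isUserSourced(config) {
  if (config.source != null) {
    // Explicit source tag wins: anything that is not an operator-managed
    // 'yaml' or 'config' server is treated as user-sourced (restricted mode).
    return config.source !== 'yaml' && config.source !== 'config';
  }
  // Pre-upgrade cached configs lack `source`; fall back to the legacy
  // dbId check so they keep their previous behavior.
  return Boolean(config.dbId);
}
```

The key property is that an unknown or missing `source` never grants the less-restricted operator treatment: the check fails closed.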
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures

Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues:

- Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries
- TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired

Changes:
- Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers
- Thread serverConfig from createMCPTool through createToolInstance closure to callTool
- Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped)
- Remove server-only readThrough fallback from getServerConfig
- Increase config cache hash from 8 to 16 hex chars (64-bit)
- Add isUserSourced boundary tests for all source/dbId combinations
- Fix double Object.keys call in getMCPTools controller
- Update test assertions for new getServerConfig behavior

* fix: cache base configs for config-server users; narrow upsertConfigCache error handling

- Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users
- Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages

* fix: restore correct merge order and document upsert error matching

- Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract)
- Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message

* feat: complete config-source server support across all execution paths

Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions.

- Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool
- Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application
- Add configServers param to createMCPTool and createMCPTools for reconnect path fallback
- Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers
- Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries
- Update context.spec.ts assertions for new configServers parameter

* fix: add missing mocks for config-source server dependencies in client.test.js

Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations.

* fix: update formatInstructionsForContext assertions for configServers param

The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring.

* fix: move configServers resolution before MCP tool loop to avoid TDZ

configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a temporal-dead-zone ReferenceError. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution.

* fix: address review findings — cache bypass, discoverServerTools gap, DRY

- #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls.
- #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows.
- #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions.
- #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern.
- #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint.
- #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs.
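The YAML → Config → User DB precedence these commits keep restoring maps directly onto object spread order, where later spreads win on key collisions. A minimal sketch (the function name is illustrative; the real merge lives inside `getAllServerConfigs`):

```javascript
// Three-way merge of MCP server config maps, per the precedence described
// in the commit messages: YAML (lowest) < admin config < user DB (highest).
// Later spreads overwrite earlier ones on duplicate server names.
function mergeServerConfigs(yamlConfigs, configServers, userDbConfigs) {
  return { ...yamlConfigs, ...configServers, ...userDbConfigs };
}
```

The fragility the reviews kept flagging is exactly the spread order: swapping the last two arguments silently inverts which tier wins for same-named servers.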
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case

- Merge duplicate @librechat/data-schemas requires in MCP.js into one
- Move resolveConfigServers after module-level constants
- Fix getAllServerConfigs edge case where a user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses a reference equality check to detect DB-overwritten YAML keys

* fix: replace fragile string-match error detection with proper upsert method

Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results.

Each implementation handles add-or-update atomically:
- InMemory: direct Map.set()
- Redis: direct cache.set()
- RedisAggregateKey: read-modify-write under write lock
- DB: delegates to update() (DB servers use explicit add() with ACL setup)

* fix: wire configServers through remaining HTTP endpoints

- getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
- reinitialize route: resolve configServers before getServerConfig
- auth-values route: resolve configServers before getServerConfig
- getOAuthHeaders: accept configServers param, thread from callers
- Update mcp.spec.js tests to mock getAllServerConfigs for GET by name

* fix: thread serverConfig through getConnection for config-source servers

Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup.

* fix: thread configServers through reinit, reconnect, and tool definition paths

Wire configServers through every remaining call chain that creates or reconnects MCP server connections:

- reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools
- reconnectServer: accepts and passes configServers to reinitMCPServer
- createMCPTools/createMCPTool: pass configServers to reconnectServer
- ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites
- reinitialize route: passes serverConfig and configServers to reinitMCPServer

* fix: address review findings — simplify merge, harden error paths, fix log labels

- Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base }
- Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error
- Deduplicate getYamlServerNames cold-start with promise dedup pattern
- Remove dead `if (!mcpConfig)` guard in getMCPSetupData
- Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling
- Remove misleading OAuth callback comment about readThrough cache
- Move resolveConfigServers after module-level constants in MCP.js

* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label

- Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working
- Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
- Use source field instead of dbId for storageLocation derivation
- Fix remaining hardcoded "App" in reset() leaderCheck message

* fix: persist oauthHeaders in flow state for config-source OAuth servers

The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers.

* fix: update tests for getMCPSetupData null guard removal and ToolService mock

- MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object)
- MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks
- ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers)

* fix: address review findings — DRY, naming, logging, dead code, defensive guards

- #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll()
- #2: Add warning log when oauthHeaders absent from OAuth callback flow state
- #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing
- #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused
- #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped
- #6: Remove dead 'MCP config not found' error handlers from routes
- #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
- #8: Remove logger.error from withTimeout to prevent double-logging timeouts
- #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
- #12: Use spread instead of mutation in addServer for immutability consistency
- Add upsert mock to ensureConfigServers.test.ts DB mock
- Update route tests for resolveAllMcpConfigs import change

* fix: restore correct merge priority, use immutable spread, fix test mock

- getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority
- lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix
- Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData

* fix: error handling, stable hashing, flatten nesting, remove dead param

- Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash the tool pipeline
- Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order
- Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff)
- Remove dead configServers param from getAppToolFunctions (never passed)
- Add security rationale comment for source field in redactServerSecrets

* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision

The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties.

Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context.
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency

The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config').

* chore: imports/fields ordering

* fix: address review findings — error handling, targeted lookup, test gaps

- Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists.
- Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup.
- Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers.
- Update getAllServerConfigs JSDoc to document disjoint-key design.
- Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback).
- Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation.

* fix: add getOAuthReconnectionManager mock to OAuth callback tests

* chore: imports ordering
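The lazy-init pattern this PR describes for `lazyInitConfigServer` (per-server timeout, stub-on-failure, concurrent-init deduplication via a pending map) can be sketched as follows. `inspectServer`, the stub shape, and the 30s default are illustrative assumptions, not the actual MCPServersRegistry implementation.

```javascript
// Concurrent callers for the same server share one in-flight promise.
const pendingConfigInits = new Map();

// Reject if inspection takes longer than `ms`; always clear the timer.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('config server init timed out')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

async function lazyInitConfigServer(serverName, rawConfig, inspectServer) {
  if (pendingConfigInits.has(serverName)) {
    return pendingConfigInits.get(serverName);
  }
  const init = (async () => {
    try {
      const parsed = await withTimeout(inspectServer(serverName, rawConfig), 30_000);
      // Immutable spread, tagging the result as a config-source server.
      return { ...parsed, source: 'config' };
    } catch (err) {
      // Stub-on-failure: return a disabled placeholder instead of throwing,
      // stamped with updatedAt so a stale stub can be retried later.
      return {
        serverName,
        source: 'config',
        initFailed: true,
        error: String(err),
        updatedAt: Date.now(),
      };
    } finally {
      pendingConfigInits.delete(serverName);
    }
  })();
  pendingConfigInits.set(serverName, init);
  return init;
}
```

The map entry is set synchronously before the first `await` resolves, so a second concurrent caller always observes the in-flight promise rather than triggering a duplicate inspection.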
2026-03-28 10:36:43 -04:00
configServers,
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926)

* ✨ feat: Implement Resumable Generation Jobs with SSE Support

- Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
- Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
- Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
- Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
- Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.

* WIP: resuming

* WIP: resumable stream

* feat: Enhance Stream Management with Abort Functionality

- Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
- Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
- Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
- Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
- Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
- Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.

* fix: Update query parameter handling in useChatHelpers

- Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.

* fix: improve syncing when switching conversations

* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup

* fix: Improve content type mismatch handling in useStepHandler

- Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.

* fix: Allow dynamic content creation in useChatFunctions

- Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.

* fix: Refine response message handling in useStepHandler

- Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.

* refactor: Enhance GenerationJobManager with In-Memory Implementations

- Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
- Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
- Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
- Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.

* refactor: Enhance GenerationJobManager with improved subscriber handling

- Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
- Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
- Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.

* chore: Adjust timeout for subscriber readiness in ResumableAgentController

- Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.

* refactor: Update GenerationJobManager documentation and structure

- Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
- Updated comments to reflect the potential for Redis integration and the need for async refactoring.
- Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.

* refactor: Convert GenerationJobManager methods to async for improved performance

- Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
- Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
- Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.

* refactor: Simplify initial response handling in useChatFunctions

- Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.

* refactor: Clarify content handling logic in useStepHandler

- Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
- Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
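The core resumable-stream idea in this PR, buffering emitted chunks so a reconnecting subscriber replays progress before receiving live events, can be sketched in memory like this. Class and method names here are illustrative, not the actual GenerationJobManager or InMemoryJobStore API.

```javascript
// Minimal in-memory sketch of a resumable generation job: published chunks
// are buffered so a late subscriber (e.g. a reconnecting SSE client) first
// replays everything emitted so far, then receives live events.
class InMemoryJob {
  constructor() {
    this.chunks = [];
    this.subscribers = new Set();
    this.done = false;
  }

  // Called by the generation loop as tokens arrive.
  publish(chunk) {
    this.chunks.push(chunk);
    for (const fn of this.subscribers) fn(chunk);
  }

  // Replay buffered content first, then attach for live updates.
  // Returns an unsubscribe function for client disconnects.
  subscribe(onChunk) {
    for (const chunk of this.chunks) onChunk(chunk);
    if (!this.done) this.subscribers.add(onChunk);
    return () => this.subscribers.delete(onChunk);
  }

  complete() {
    this.done = true;
    this.subscribers.clear();
  }
}
```

Because the job buffers independently of any HTTP connection, a client that drops mid-generation and re-subscribes sees the identical content a continuously connected client would have seen.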
* refactor: Improve message handling logic in useStepHandler
  - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
  - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.

* fix: remove unnecessary content length logging in the chat stream response
  - Simplified the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.

* refactor: Integrate streamId handling for improved resumable functionality for attachments
  - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing.
  - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
  - Improved logging and attachment management to accommodate both standard and resumable modes.

* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management
  - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
  - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
  - Simplified cleanup processes and improved error handling during abort operations.
  - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.

* refactor: Unify streamId and conversationId handling for improved job management
  - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
  - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
  - Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
  - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.

* refactor: Enhance resumable SSE handling with improved UI state management and error recovery
  - Added UI state restoration on successful SSE connection to indicate ongoing submission.
  - Implemented detailed error handling for network failures, including retry logic with exponential backoff.
  - Introduced abort event handling to reset UI state on intentional stream closure.
  - Enhanced debugging capabilities for testing reconnection and clean close scenarios.
  - Updated generation function to retry on network errors, improving resilience during submission processes.

* refactor: Consolidate content state management into IJobStore for improved job handling
  - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
  - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
  - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
  - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.

* feat: Introduce Redis-backed stream services for enhanced job management
  - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options.
  - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
  - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
  - Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness.
  - Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options.

* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport
  - Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity.
  - Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output.
  - Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency.

* refactor: Enhance job state management and TTL configuration in RedisJobStore
  - Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management.
  - Refactored the handling of job expiration and cleanup processes to align with new TTL configurations.
  - Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance.
  - Improved comments and documentation for better understanding of the changes made.

* refactor: Add cleanupOnComplete option to GenerationJobManager for flexible resource management
  - Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion.
  - Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management.
  - Improved cleanup logic in the cleanup method to handle orphaned resources effectively.
  - Enhanced documentation and comments for better clarity on the new functionality.

* refactor: Update TTL configuration for completed jobs in InMemoryJobStore
  - Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup.
  - Enhanced cleanup logic to respect the new TTL setting, improving resource management.
  - Updated comments for clarity on the behavior of the TTL configuration.

* refactor: Enhance RedisJobStore with local graph caching for improved performance
  - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance.
  - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed.
  - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects.
  - Improved documentation and comments for clarity on the caching mechanism and its benefits.

* feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore; add Redis Cluster support
  - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling.
  - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling.
  - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior.
  - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability.

* fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers
  - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately.
  - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting.
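The WeakRef-based local graph cache mentioned above — same-instance reconnects resolve a graph reference locally, while a collected or missing entry falls back to the shared store — might look something like this minimal sketch. The class name and streamId keying are assumptions for illustration only.

```javascript
// Illustrative local cache keyed by streamId; not the actual RedisJobStore code.
class LocalGraphCache {
  constructor() {
    this.refs = new Map(); // streamId -> WeakRef<graph>
  }
  set(streamId, graph) {
    // WeakRef lets the GC reclaim the graph once nothing else holds it,
    // so the cache never pins large objects in memory.
    this.refs.set(streamId, new WeakRef(graph));
  }
  get(streamId) {
    const ref = this.refs.get(streamId);
    const graph = ref?.deref();
    if (!graph) {
      // Entry was garbage-collected (or never existed): drop the stale ref
      // so the caller falls back to the shared (e.g. Redis) store.
      this.refs.delete(streamId);
      return null;
    }
    return graph;
  }
}
```

The payoff is that a reconnect landing on the same instance skips the Redis round-trip entirely, while reconnects on other instances still reconstruct state from the shared store.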
* ci: Improve Redis client disconnection handling in integration tests
  - Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client.
  - Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown.
  - Improved comments for clarity on the disconnection process and error handling.

* refactor: Enhance GenerationJobManager and event transports for improved resource management
  - Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
  - Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs.
  - Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams.
  - Improved comments for clarity on resource management and cleanup processes.

* refactor: Update GenerationJobManager and ResumableAgentController for improved event handling
  - Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
  - Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions.
  - Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes.

* fix: Update cache integration test command for stream to ensure proper execution
  - Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests.
  - This change enhances the reliability of the test suite by ensuring all tests complete as expected.
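The buffer-and-replay synchronization described above — generation may start emitting before the first subscriber attaches, so early events are buffered and replayed in order when that subscriber connects — reduces to a pattern like the following sketch. Names and the single-subscriber simplification are assumptions, not the real GenerationJobManager.

```javascript
// Minimal single-subscriber sketch of buffer-then-replay event delivery.
class BufferedEmitter {
  constructor() {
    this.buffer = [];
    this.subscriber = null;
  }
  emit(event) {
    if (this.subscriber) {
      this.subscriber(event); // live delivery once someone is attached
    } else {
      this.buffer.push(event); // no one listening yet: hold the event
    }
  }
  subscribe(fn) {
    this.subscriber = fn;
    // Replay, in order, everything that arrived before the subscriber
    // connected, so no events are lost to the startup race.
    for (const event of this.buffer.splice(0)) {
      fn(event);
    }
  }
}
```

This is why readiness can be resolved immediately instead of blocking generation on the first SSE connection: the buffer absorbs the race.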
* feat: Add active job management for user and show progress in conversation list
  - Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks.
  - Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs.
  - Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
  - Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback.

* feat: Implement active job tracking by user in RedisJobStore
  - Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks.
  - Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs.
  - Updated job creation, update, and deletion methods to manage user-specific job sets effectively.
  - Enhanced integration tests to validate the new user-specific job management features.

* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore

* WIP: Add backend inspect script for easier debugging in production

* refactor: title generation logic
  - Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID.
  - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
  - Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion.
  - Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance.

* feat: Enhance updateConvoInAllQueries to support moving conversations to the top

* chore: temporarily remove added multi convo

* refactor: Update active jobs query integration for optimistic updates on abort
  - Introduced a new interface for active jobs response to standardize data handling.
  - Updated query keys for active jobs to ensure consistency across components.
  - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.

* refactor: Add useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints
  - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
  - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
  - Refactored imports in ChatView for better organization.

* refactor: streamline conversation title generation handling
  - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
  - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.

* feat: Add USE_REDIS_STREAMS configuration for stream job storage
  - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
  - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
  - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.

* fix: title generation queue management for assistants
  - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
  - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
  - Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow.

* refactor: streamline agent controller and remove legacy resumable handling
  - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic.
  - Deprecated the legacy non-resumable path, providing a clear migration path for future use.
  - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode.
  - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance.

* feat: Add USE_REDIS_STREAMS configuration to .env.example
  - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
  - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management.

* refactor: remove unused setHeaders middleware from chat route
  - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process.
  - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks.
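The USE_REDIS_STREAMS defaulting rule described above — when the variable is unset it inherits USE_REDIS, and an explicit value always wins — amounts to roughly the following. The helper names and the string-only truthiness check are illustrative assumptions, not the actual cacheConfig code.

```javascript
// Hypothetical sketch of the USE_REDIS_STREAMS defaulting rule.
function isTruthy(value) {
  return value === true || value === 'true';
}

function useRedisStreams(env) {
  if (env.USE_REDIS_STREAMS !== undefined) {
    // An explicit setting overrides the inherited default.
    return isTruthy(env.USE_REDIS_STREAMS);
  }
  // Unset: fall back to whatever USE_REDIS says.
  return isTruthy(env.USE_REDIS);
}
```

So a deployment with `USE_REDIS=true` gets Redis-backed resumable-stream storage automatically, but can opt out with `USE_REDIS_STREAMS=false` while keeping Redis for everything else.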
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)

* fix(flow): add immediate abort handling and fix intervalId initialization
  - Add immediate abort handler that responds instantly to abort signal
  - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error
  - Consolidate cleanup logic into single function to avoid duplicate cleanup
  - Properly remove abort event listener on cleanup

* fix(mcp): clean up OAuth flows on abort and simplify flow handling
  - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows
  - Update createAbortHandler to clean up both flow types on tool call abort
  - Pass abort signal to createFlow in returnOnOAuth path
  - Simplify handleOAuthRequired to always cancel existing flows and start fresh
  - This ensures the user always gets a new OAuth URL instead of waiting for stale flows

* fix(agents): handle 'new' conversationId and improve abort reliability
  - Treat 'new' as a placeholder that needs a UUID in the request controller
  - Send JSON response immediately before tool loading for faster SSE connection
  - Use the job's abort controller instead of prelimAbortController
  - Emit errors to the stream if headers are already sent
  - Skip 'new' as a valid ID in the abort endpoint
  - Add fallback to find active jobs by userId when conversationId is 'new'

* fix(stream): detect early abort and prevent navigation to non-existent conversation
  - Abort controller on job completion to signal pending operations
  - Detect early abort (no content, no responseMessageId) in abortJob
  - Set conversation and responseMessage to null for early aborts
  - Add earlyAbort flag to the final event for frontend detection
  - Remove unused text field from AbortResult interface
  - Frontend handles earlyAbort by staying on/navigating to new chat

* test(mcp): update test to expect signal parameter in createFlow

* fix(agents): include 'new' conversationId in newConvo check for title generation
  When the frontend sends 'new' as the conversationId, it should still trigger title generation, since it is a new conversation. Also renames a boolean variable for clarity.

* fix(agents): check abort state before completeJob for title generation
  completeJob now triggers the abort signal for cleanup, so the abort state must be captured beforehand to correctly determine whether title generation should run.
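The fix(flow) items above describe a common JavaScript pitfall: a `let`-bound interval handle referenced by a cleanup closure before its declaration throws "Cannot access before initialization" when an early abort fires. A minimal sketch of the corrected ordering, with one consolidated cleanup and immediate handling of an already-aborted signal; the function and parameter names are illustrative, not the actual LibreChat flow-manager code.

```javascript
// Hedged sketch: declare the handle first, consolidate cleanup, abort early.
function pollWithAbort(onTick, signal, intervalMs = 100) {
  let intervalId; // declared before cleanup() so the closure can reference it
  let cleaned = false;
  const cleanup = () => {
    if (cleaned) return; // single consolidated cleanup path, runs at most once
    cleaned = true;
    clearInterval(intervalId);
    // Remove the abort listener so the signal does not retain this closure.
    signal.removeEventListener('abort', cleanup);
  };
  if (signal.aborted) {
    cleanup(); // immediate abort handling: respond instantly, never start polling
    return cleanup;
  }
  signal.addEventListener('abort', cleanup);
  intervalId = setInterval(onTick, intervalMs);
  return cleanup;
}
```

With `intervalId` declared up front, the pre-abort path can safely call `cleanup()` even though the interval was never started (`clearInterval(undefined)` is a no-op).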
2025-12-19 10:12:39 -05:00
oauthStart,
flowManager,
userMCPAuthMap,
forceNew: true,
returnOnOAuth: false,
connectionTimeout: Time.THIRTY_SECONDS,
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926)

* ✨ feat: Implement Resumable Generation Jobs with SSE Support
  - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
  - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
  - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
  - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
  - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.

* WIP: resuming

* WIP: resumable stream

* feat: Enhance Stream Management with Abort Functionality
  - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
  - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
  - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
  - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
  - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
  - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.

* fix: Update query parameter handling in useChatHelpers
  - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.
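The dual-key abort described above — a running stream can be aborted by either its streamId or its conversationId — implies a lookup from both identifiers to the same job. A minimal sketch, assuming a job record holding an AbortController; the map names and job shape are illustrative, not the actual abort-endpoint code.

```javascript
// Hypothetical dual-index registry for abort-by-streamId-or-conversationId.
const jobsByStreamId = new Map();
const streamIdByConvoId = new Map();

function registerJob(job) {
  jobsByStreamId.set(job.streamId, job);
  streamIdByConvoId.set(job.conversationId, job.streamId);
}

function abortStream({ streamId, conversationId }) {
  // Prefer the direct streamId; otherwise resolve it via the conversation.
  const id = streamId ?? streamIdByConvoId.get(conversationId);
  const job = id ? jobsByStreamId.get(id) : undefined;
  if (!job) {
    return false; // nothing active to abort
  }
  job.controller.abort(); // signal the running generation to stop
  return true;
}
```

Once streamId and conversationId were later unified (generated upfront to match), the second index becomes largely redundant, which is consistent with the simplification commits above.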
* fix: improve syncing when switching conversations

* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup

* fix: Improve content type mismatch handling in useStepHandler
  - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.

* fix: Allow dynamic content creation in useChatFunctions
  - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.

* fix: Refine response message handling in useStepHandler
  - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.

* refactor: Enhance GenerationJobManager with In-Memory Implementations
  - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
  - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
  - Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
  - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.

* refactor: Enhance GenerationJobManager with improved subscriber handling
  - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
  - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
  - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
    });
  } finally {
    // Clean up abort handler to prevent memory leaks
    if (signal) {
      signal.removeEventListener('abort', abortHandler);
    }
  }
}
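The `finally` block above removes the abort listener so a long-lived `AbortSignal` does not retain the handler after the call completes. A minimal, self-contained sketch of that lifecycle (the `runWithAbort` name and return shape are illustrative, not from this file):

```javascript
/**
 * Registers an abort handler on an optional AbortSignal, does some work,
 * and always removes the handler in `finally` to prevent memory leaks.
 * @param {AbortSignal} [signal]
 * @returns {{ aborted: boolean }}
 */
function runWithAbort(signal) {
  let aborted = false;
  const abortHandler = () => {
    aborted = true;
  };
  if (signal) {
    signal.addEventListener('abort', abortHandler);
  }
  try {
    // `signal.aborted` covers the case where abort fired before we attached
    // the handler; `aborted` covers aborts that fire while work is running.
    return { aborted: signal ? signal.aborted || aborted : false };
  } finally {
    // Mirror of the cleanup above: detach the handler whether or not
    // the work threw, so the signal holds no reference to it.
    if (signal) {
      signal.removeEventListener('abort', abortHandler);
    }
  }
}

const controller = new AbortController();
controller.abort();
console.log(runWithAbort(controller.signal).aborted); // true
```

The key point is that the listener is removed on every exit path; without the `finally`, a signal shared across many calls would accumulate dead handlers.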
/**
 * Creates all tools from the specified MCP server (identified by `serverName`).
 *
 * This function assumes the tools could not be aggregated from the cache of tool definitions
 * (i.e., `availableTools`), and will reinitialize the MCP server to ensure all tools are generated.
 *
 * @param {Object} params
 * @param {ServerResponse} params.res - The Express response object for sending events.
 * @param {IUser} params.user - The user from the request object.
 * @param {string} params.serverName - The name of the MCP server whose tools are created.
 * @param {string} params.model
 * @param {Providers | EModelEndpoint} params.provider - The provider for the tool.
 * @param {number} [params.index]
 * @param {AbortSignal} [params.signal] - Optional signal used to abort server initialization.
* @param {string | null} [params.streamId] - The stream ID for resumable mode.
* @param {import('@librechat/api').ParsedServerConfig} [params.config]
* @param {Record<string, Record<string, string>>} [params.userMCPAuthMap]
 * @returns {Promise<Array<typeof tool | { _call: (toolInput: Object | string) => unknown }>>} An array of tool instances, each exposing a `_call` method to execute the tool input.
*/
async function createMCPTools({
res,
user,
index,
signal,
config,
provider,
serverName,
configServers,
  userMCPAuthMap,
  streamId = null,
}) {
  const serverConfig =
0 → epoch → always expired) - Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined - Log double-failure in upsertConfigCache instead of silently swallowing - Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests * fix: server-only readThrough fallback only returns truthy values Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup. * fix: remove findInConfigCache to eliminate cross-tenant config leakage The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through: 1. The configServers param (callers with tenant context from ALS) 2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs) Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return. - Remove findInConfigCache method and its call in getServerConfig - Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup) - Update tests to document tenant isolation behavior after cache expiry * style: fix import order per AGENTS.md conventions Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector. 
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues: - Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries - TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired Changes: - Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers - Thread serverConfig from createMCPTool through createToolInstance closure to callTool - Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped) - Remove server-only readThrough fallback from getServerConfig - Increase config cache hash from 8 to 16 hex chars (64-bit) - Add isUserSourced boundary tests for all source/dbId combinations - Fix double Object.keys call in getMCPTools controller - Update test assertions for new getServerConfig behavior * fix: cache base configs for config-server users; narrow upsertConfigCache error handling - Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. 
Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users - Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages * fix: restore correct merge order and document upsert error matching - Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract) - Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message * feat: complete config-source server support across all execution paths Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions. - Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool - Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application - Add configServers param to createMCPTool and createMCPTools for reconnect path fallback - Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers - Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries - Update context.spec.ts assertions for new configServers parameter * fix: add missing mocks for config-source server dependencies in client.test.js Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations. 
* fix: update formatInstructionsForContext assertions for configServers param The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring. * fix: move configServers resolution before MCP tool loop to avoid TDZ configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a ReferenceError temporal dead zone. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution. * fix: address review findings — cache bypass, discoverServerTools gap, DRY - #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls. - #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows. - #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions. - #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern. - #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint. - #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs. 
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case - Merge duplicate @librechat/data-schemas requires in MCP.js into one - Move resolveConfigServers after module-level constants - Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys * fix: replace fragile string-match error detection with proper upsert method Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically: - InMemory: direct Map.set() - Redis: direct cache.set() - RedisAggregateKey: read-modify-write under write lock - DB: delegates to update() (DB servers use explicit add() with ACL setup) * fix: wire configServers through remaining HTTP endpoints - getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig - reinitialize route: resolve configServers before getServerConfig - auth-values route: resolve configServers before getServerConfig - getOAuthHeaders: accept configServers param, thread from callers - Update mcp.spec.js tests to mock getAllServerConfigs for GET by name * fix: thread serverConfig through getConnection for config-source servers Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup. 
* fix: thread configServers through reinit, reconnect, and tool definition paths Wire configServers through every remaining call chain that creates or reconnects MCP server connections: - reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools - reconnectServer: accepts and passes configServers to reinitMCPServer - createMCPTools/createMCPTool: pass configServers to reconnectServer - ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites - reinitialize route: passes serverConfig and configServers to reinitMCPServer * fix: address review findings — simplify merge, harden error paths, fix log labels - Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base } - Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error - Deduplicate getYamlServerNames cold-start with promise dedup pattern - Remove dead `if (!mcpConfig)` guard in getMCPSetupData - Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling - Remove misleading OAuth callback comment about readThrough cache - Move resolveConfigServers after module-level constants in MCP.js * fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label - Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working - Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers - Use source field instead of dbId for storageLocation derivation - Fix remaining hardcoded "App" in reset() leaderCheck message * fix: persist oauthHeaders in flow state 
for config-source OAuth servers The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers. * fix: update tests for getMCPSetupData null guard removal and ToolService mock - MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object) - MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks - ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers) * fix: address review findings — DRY, naming, logging, dead code, defensive guards - #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll() - #2: Add warning log when oauthHeaders absent from OAuth callback flow state - #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing - #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused - #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped - #6: Remove dead 'MCP config not found' error handlers from routes - #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache - #8: Remove logger.error from withTimeout to prevent double-logging timeouts - #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message - #12: Use spread instead of mutation in addServer for immutability consistency - Add upsert mock to ensureConfigServers.test.ts 
DB mock - Update route tests for resolveAllMcpConfigs import change * fix: restore correct merge priority, use immutable spread, fix test mock - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData * fix: error handling, stable hashing, flatten nesting, remove dead param - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff) - Remove dead configServers param from getAppToolFunctions (never passed) - Add security rationale comment for source field in redactServerSecrets * fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties. Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context. 
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config'). * chore: imports/fields ordering * fix: address review findings — error handling, targeted lookup, test gaps - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists. - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup. - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers. - Update getAllServerConfigs JSDoc to document disjoint-key design. - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback). - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation. * fix: add getOAuthReconnectionManager mock to OAuth callback tests * chore: imports ordering
2026-03-28 10:36:43 -04:00
config ?? (await getMCPServersRegistry().getServerConfig(serverName, user?.id, configServers));
if (serverConfig?.url) {
const appConfig = await getAppConfig({ role: user?.role, tenantId: user?.tenantId });
const allowedDomains = appConfig?.mcpSettings?.allowedDomains;
const isDomainAllowed = await isMCPDomainAllowed(serverConfig, allowedDomains);
if (!isDomainAllowed) {
logger.warn(`[MCP][${serverName}] Domain not allowed, skipping all tools`);
return [];
}
}
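// The block above gates a remote MCP server behind a role-scoped domain
// allowlist: it resolves the app config for the requesting user's role and
// tenant, reads `mcpSettings.allowedDomains`, and drops every tool for the
// server when its URL is not allowed. A minimal, self-contained sketch of
// that gating logic follows — the function and variable names here
// (`isDomainAllowed`, `filterServerTools`) are hypothetical stand-ins, not
// LibreChat's actual API, and the matching rule (exact host or subdomain)
// is an assumption for illustration only.

```javascript
// Hypothetical sketch of a domain-allowlist gate like the one above.
// An undefined/non-array allowlist is treated as "no restriction".
function isDomainAllowed(serverConfig, allowedDomains) {
  if (!serverConfig?.url || !Array.isArray(allowedDomains)) {
    return true; // nothing to check, or no restriction configured
  }
  const { hostname } = new URL(serverConfig.url);
  // Allow exact matches and subdomains of an allowed domain.
  return allowedDomains.some(
    (domain) => hostname === domain || hostname.endsWith(`.${domain}`),
  );
}

function filterServerTools(serverName, serverConfig, allowedDomains, tools) {
  if (serverConfig?.url && !isDomainAllowed(serverConfig, allowedDomains)) {
    // Mirrors the early `return []` above: skip every tool for this server.
    console.warn(`[MCP][${serverName}] Domain not allowed, skipping all tools`);
    return [];
  }
  return tools;
}
```

// Usage: `filterServerTools('github', { url: 'https://api.example.com/mcp' },
// ['example.com'], tools)` keeps the tools, while a host outside the
// allowlist yields an empty array.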
  const result = await reconnectServer({
    res,
    user,
    index,
    signal,
    serverName,
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435) * feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring - Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains` - Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig` - Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector - Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB * feat: three-layer MCPServersRegistry with config cache and lazy init - Add `configCacheRepo` as third repository layer between YAML cache and DB for admin-defined config-source MCP servers - Implement `ensureConfigServers()` that identifies config-override servers from resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'` - Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via `pendingConfigInits` map - Extend `getAllServerConfigs()` with optional `configServers` param for three-way merge: YAML → Config → User - Add `getServerConfig()` lookup through config cache layer - Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations - Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()` * feat: wire tenant context into MCP controllers, services, and cache invalidation - Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers - Pass `ensureConfigServers()` results through `getAllServerConfigs()` for three-way merge of YAML + Config + User servers - Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` 
from ALS - Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers * feat: enforce tenantMcpPolicy on admin config mcpServers mutations - Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains) - Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated - Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http) - Validate server domains against policy allowlist when configured * revert: remove tenantMcpPolicy schema and enforcement The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future. - Remove tenantMcpPolicy from mcpSettings Zod schema - Remove validateMcpServerPolicy helper and TenantMcpPolicy interface - Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers * test: update test assertions for source field and config-server wiring - Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs - Add getTenantId and ensureConfigServers mocks to MCP route tests - Add getAppConfig mock to route test Config service mock - Update getMCPSetupData assertion to expect second options argument - Update getAllServerConfigs assertions for new configServers parameter * fix: disconnect active connections when config-source servers are evicted When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout. 
- Return evicted server names from invalidateConfigCache() - Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect() * fix: address code review findings (CRITICAL, MAJOR, MINOR) CRITICAL fixes: - Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations - Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode MAJOR fixes: - Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected - Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS - Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls - Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context MINOR fixes: - Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate - Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing - Remove narrative inline comments per style guide - Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws) - Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request Test updates: - Add ensureConfigServers mock to registry test fixtures - Update getMCPSetupData assertions for inline OAuth derivation * fix: address code review findings (CRITICAL, MAJOR, MINOR) CRITICAL fixes: - Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory - Fix dbSourced fail-closed: use source field when present, fall back to 
legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field) MAJOR fixes: - Add CONFIG_CACHE_NAMESPACE to aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls - Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation MINOR fixes: - Update MCPServerInspector test assertion for dbSourced change * fix: restore getServerConfig lookup for config-source servers (NEW-1) Add configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths. - Populate configNameToKey in ensureSingleConfigServer - Clear configNameToKey in invalidateConfigCache and reset - Clear stale read-through cache entries after lazy init - Remove dead code in invalidateConfigCache (config.title, key parsing) - Add getServerConfig tests for config-source server lookup * fix: eliminate configNameToKey race via caller-provided configServers param Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race. 
- Add optional configServers param to getServerConfig; when provided, returns matching config directly without any global lookup - Remove configNameToKey map entirely (was the source of the race) - Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons) - Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call - Add cross-tenant isolation test for getServerConfig * fix: populate read-through cache after config server lazy init After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined. * fix: user-scoped getServerConfig fallback to server-only cache key When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache. * fix: configCacheRepo fallback, isUserSourced DRY, cross-process race CRITICAL: Add findInConfigCache fallback in getServerConfig so config-source servers remain reachable after readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers. MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector). MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined. 
MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove the redundant .catch(() => {}) inside Promise.allSettled. Tighten the dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).

* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext

Add an optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.

* fix: stale failure stubs retry after 5 min, upsert for cross-process races

- Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race)
- Extract an upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded
- Add a test for stale-stub retry after CONFIG_STUB_RETRY_MS

* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup

- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so the CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired)
- Add a null guard for rawConfig in MCPManager.callTool before passing it to preProcessGraphTokens — prevents an unsafe `as` cast on undefined
- Log double-failure in upsertConfigCache instead of silently swallowing it
- Replace the module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in the ensureConfigServers tests

* fix: server-only readThrough fallback only returns truthy values

Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.

* fix: remove findInConfigCache to eliminate cross-tenant config leakage

The findInConfigCache prefix scan (serverName:*) could return any tenant's config after the readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through:
1. The configServers param (callers with tenant context from ALS)
2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)

Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return.

- Remove the findInConfigCache method and its call in getServerConfig
- Update the server-only readThrough fallback to only return truthy values (prevents a cached undefined from short-circuiting the user-scoped DB lookup)
- Update tests to document tenant isolation behavior after cache expiry

* style: fix import order per AGENTS.md conventions

Sort package imports shortest-to-longest and local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.
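The stale-stub retry window described above reduces to an age check on a stamped timestamp. A minimal sketch, assuming a stub shape like `{ failed, updatedAt }`; `shouldRetryStub` is a hypothetical name, not the real helper:

```javascript
// Illustrative staleness check for failure stubs. The `updatedAt ?? 0`
// fallback is exactly the bug the commit fixes: an unstamped stub
// computes its age from the epoch and therefore always looks stale.
const CONFIG_STUB_RETRY_MS = 5 * 60 * 1000; // retry failed inspections after 5 minutes

function shouldRetryStub(stub, now = Date.now()) {
  if (!stub || !stub.failed) return false; // only failure stubs are retried
  const age = now - (stub.updatedAt ?? 0);
  return age >= CONFIG_STUB_RETRY_MS;
}
```

Stamping `updatedAt: Date.now()` when the stub is written is what makes the 5-minute window meaningful.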
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures

Thread pre-resolved serverConfig from tool creation context into callTool, removing the dependency on the readThrough cache for config-source servers. This fixes two issues:

- Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries
- TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired

Changes:

- Add an optional serverConfig param to MCPManager.callTool — uses the provided config directly, falling back to the getServerConfig lookup for YAML/user servers
- Thread serverConfig from createMCPTool through the createToolInstance closure to callTool
- Remove the readThrough write from ensureSingleConfigServer — config-source servers are only accessible via the configServers param (tenant-scoped)
- Remove the server-only readThrough fallback from getServerConfig
- Increase the config cache hash from 8 to 16 hex chars (64-bit)
- Add isUserSourced boundary tests for all source/dbId combinations
- Fix the double Object.keys call in the getMCPTools controller
- Update test assertions for the new getServerConfig behavior

* fix: cache base configs for config-server users; narrow upsertConfigCache error handling

- Refactor getAllServerConfigs to separate the base config fetch (YAML + DB) from config-server layering. Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users
- Narrow the upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages

* fix: restore correct merge order and document upsert error matching

- Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract)
- Add a source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message

* feat: complete config-source server support across all execution paths

Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions.

- Thread configServers into the handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool
- Thread configServers into the agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application
- Add a configServers param to createMCPTool and createMCPTools for the reconnect path fallback
- Add the source field to the redactServerSecrets allowlist for client UI differentiation of server tiers
- Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries
- Update context.spec.ts assertions for the new configServers parameter

* fix: add missing mocks for config-source server dependencies in client.test.js

Mock getMCPServersRegistry, getAppConfig, and getTenantId, which were added to client.js but not reflected in the test file's jest.mock declarations.
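The YAML → Config → User DB precedence described above falls out of JavaScript spread order, where later spreads win. A minimal sketch under the stated assumption that each argument is a plain name-to-config map (`mergeServerConfigs` is an illustrative name, not the actual function):

```javascript
// Later spreads overwrite earlier ones, so user-DB entries have the
// highest precedence, then config-source servers, then YAML.
function mergeServerConfigs(yamlConfigs, configServers, userDbConfigs) {
  return { ...yamlConfigs, ...configServers, ...userDbConfigs };
}
```

A later commit in this history simplifies the real merge to exactly this spread form after the reference-equality diffing proved fragile.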
* fix: update formatInstructionsForContext assertions for configServers param

The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring.

* fix: move configServers resolution before MCP tool loop to avoid TDZ

configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a temporal-dead-zone ReferenceError. Move the declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution.

* fix: address review findings — cache bypass, discoverServerTools gap, DRY

- #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from the cached base by diffing against YAML keys to maintain the YAML → Config → User DB merge order without extra MongoDB calls.
- #3: Add a configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows.
- #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions.
- #7: Extract a resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern.
- #10: Restore the removed "why" comment explaining the getLoaded() vs getAll() choice in getMCPSetupData — it documents a non-obvious correctness constraint.
- #11: Fix the incomplete JSDoc param type on resolveAllMcpConfigs.
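The temporal-dead-zone bug fixed above can be reproduced in miniature: a `let` binding exists in the whole enclosing scope but throws a ReferenceError if read before its declaration executes. Both functions here are illustrative, not the real controller code:

```javascript
// Reading a `let` binding before its declaration runs throws a
// ReferenceError (the temporal dead zone), even though the binding
// is hoisted to the enclosing scope.
function brokenOrder() {
  try {
    for (const tool of ['mcp_tool']) {
      void serverNames; // TDZ: `serverNames` is declared below this loop
    }
  } catch (e) {
    return e instanceof ReferenceError;
  }
  let serverNames = [];
  return false;
}

// The fix: declare and resolve before the loop that reads the value.
function fixedOrder(tools) {
  const serverNames = tools.filter((t) => t.startsWith('mcp_'));
  const seen = [];
  for (const t of tools) {
    if (serverNames.includes(t)) seen.push(t);
  }
  return seen;
}
```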
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case

- Merge duplicate @librechat/data-schemas requires in MCP.js into one
- Move resolveConfigServers after the module-level constants
- Fix a getAllServerConfigs edge case where a user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses a reference-equality check to detect DB-overwritten YAML keys

* fix: replace fragile string-match error detection with proper upsert method

Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error-message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically:

- InMemory: direct Map.set()
- Redis: direct cache.set()
- RedisAggregateKey: read-modify-write under a write lock
- DB: delegates to update() (DB servers use explicit add() with ACL setup)

* fix: wire configServers through remaining HTTP endpoints

- getMCPServerById: use resolveAllMcpConfigs instead of a bare getServerConfig
- reinitialize route: resolve configServers before getServerConfig
- auth-values route: resolve configServers before getServerConfig
- getOAuthHeaders: accept a configServers param, threaded from callers
- Update mcp.spec.js tests to mock getAllServerConfigs for GET by name

* fix: thread serverConfig through getConnection for config-source servers

Config-source servers exist only in configCacheRepo, not in the YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup.
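The add-or-update semantics described above can be sketched with the in-memory variant, where upsert is just a direct write. This is an illustrative class, not the actual `IServerConfigsRepositoryInterface` implementation:

```javascript
// Sketch of the upsert contract that replaced string-matching on
// "already exists in cache" errors. `add` stays strict (for paths that
// need ACL setup), while `upsert` is atomic add-or-update.
class InMemoryServerConfigsRepo {
  constructor() {
    this.store = new Map();
  }
  add(key, value) {
    if (this.store.has(key)) throw new Error(`${key} already exists in cache`);
    this.store.set(key, value);
  }
  update(key, value) {
    this.store.set(key, value);
  }
  // Atomic add-or-update: callers no longer need to inspect error messages.
  upsert(key, value) {
    this.store.set(key, value);
  }
  get(key) {
    return this.store.get(key);
  }
}
```

With upsert, a second process that finishes inspection later simply overwrites the entry instead of having its result discarded by a duplicate-key error.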
* fix: thread configServers through reinit, reconnect, and tool definition paths

Wire configServers through every remaining call chain that creates or reconnects MCP server connections:

- reinitMCPServer: accepts serverConfig and configServers, uses them for the getServerConfig fallback, getConnection, and discoverServerTools
- reconnectServer: accepts and passes configServers to reinitMCPServer
- createMCPTools/createMCPTool: pass configServers to reconnectServer
- ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites
- reinitialize route: passes serverConfig and configServers to reinitMCPServer

* fix: address review findings — simplify merge, harden error paths, fix log labels

- Simplify the getAllServerConfigs merge: replace the fragile reference-equality loop with a direct spread { ...yamlConfigs, ...configServers, ...base }
- Guard upsertConfigCache in the lazyInitConfigServer catch block so cache failures don't mask the original inspection error
- Deduplicate the getYamlServerNames cold-start with the promise dedup pattern
- Remove the dead `if (!mcpConfig)` guard in getMCPSetupData
- Fix the hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling
- Remove the misleading OAuth callback comment about the readThrough cache
- Move resolveConfigServers after the module-level constants in MCP.js

* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label

- Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working
- Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
- Use the source field instead of dbId for the storageLocation derivation
- Fix the remaining hardcoded "App" in the reset() leaderCheck message

* fix: persist oauthHeaders in flow state for config-source OAuth servers

The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers.

* fix: update tests for getMCPSetupData null guard removal and ToolService mock

- MCP.spec.js: update the test to expect graceful handling of a null mcpConfig instead of a throw (getAllServerConfigs always returns an object)
- MCP.js: add a defensive || {} for Object.entries(mcpConfig) in case of null from test mocks
- ToolService.spec.js: add the missing mock for ~/server/services/MCP (resolveConfigServers)

* fix: address review findings — DRY, naming, logging, dead code, defensive guards

- #1: Simplify getAllServerConfigs to a single getBaseServerConfigs call, eliminating the redundant double-fetch of cacheConfigsRepo.getAll()
- #2: Add a warning log when oauthHeaders are absent from the OAuth callback flow state
- #3: Extract resolveAllMcpConfigs to the MCP.js service layer; the controller imports the shared helper instead of reimplementing it
- #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused
- #5: Log rejected results from the ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped
- #6: Remove the dead 'MCP config not found' error handlers from routes
- #7: Document the circular-dependency reason for the dynamic require in clearMcpConfigCache
- #8: Remove logger.error from withTimeout to prevent double-logging timeouts
- #10: Add an explicit userId guard in ServerConfigsDB.upsert with a clear error message
- #12: Use spread instead of mutation in addServer for immutability consistency
- Add an upsert mock to the ensureConfigServers.test.ts DB mock
- Update route tests for the resolveAllMcpConfigs import change

* fix: restore correct merge priority, use immutable spread, fix test mock

- getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching the documented "User DB (highest)" priority
- lazyInitConfigServer: use an immutable spread instead of direct mutation for parsedConfig.source, consistent with the addServer fix
- Fix the test to mock getAllServerConfigs as {} instead of null, and remove the unnecessary || {} defensive guard in getMCPSetupData

* fix: error handling, stable hashing, flatten nesting, remove dead param

- Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with a graceful {} fallback so transient DB/cache errors don't crash the tool pipeline
- Sort keys in the configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order
- Flatten clearMcpConfigCache from 3 nested try-catch blocks to early returns; document that user connections are cleaned up lazily (an accepted tradeoff)
- Remove the dead configServers param from getAppToolFunctions (never passed)
- Add a security rationale comment for the source field in redactServerSecrets

* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision

The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties.

Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context.
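The function-replacer fix described above can be sketched as follows. An array replacer like `JSON.stringify(obj, ['url', 'headers'])` allowlists those keys at every depth and drops everything else; a function replacer that returns a key-sorted copy of each object canonicalizes without losing properties. `sortedStringify` is an illustrative name for the canonicalization step feeding the cache-key hash:

```javascript
// Canonical JSON: sort keys at every nesting depth without dropping any,
// so structurally equal configs always serialize (and hash) identically.
function sortedStringify(obj) {
  return JSON.stringify(obj, (_key, value) => {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      return Object.keys(value)
        .sort()
        .reduce((acc, k) => {
          acc[k] = value[k]; // insertion order of the copy is sorted order
          return acc;
        }, {});
    }
    return value; // arrays and primitives pass through unchanged
  });
}
```

Feeding this string into a hash (the history mentions a 16-hex-char key) makes the cache key deterministic regardless of property insertion order, while nested secrets like `headers['X-API-Key']` still contribute to the hash.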
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency

The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using a dynamic require('~/config').

* chore: imports/fields ordering

* fix: address review findings — error handling, targeted lookup, test gaps

- Narrow the resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists.
- Use a targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup.
- Forward configServers to inner createMCPTool calls so the reconnect path works for config-source servers.
- Update the getAllServerConfigs JSDoc to document the disjoint-key design.
- Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback).
- Add resolveConfigServers/resolveAllMcpConfigs unit tests covering the happy path and error propagation.

* fix: add getOAuthReconnectionManager mock to OAuth callback tests

* chore: imports ordering
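The promise-dedup pattern referenced in the commits above (the `yamlServerNamesPromise` cold-start, including clearing the promise on rejection so failures are retryable) has a small generic shape. `makeDedupedLoader` is a hypothetical helper, not LibreChat code:

```javascript
// Concurrent callers share one in-flight load; a rejected promise is
// cleared so the next caller retries instead of caching the failure.
function makeDedupedLoader(load) {
  let inflight = null;
  return function get() {
    if (!inflight) {
      inflight = load().catch((err) => {
        inflight = null; // transient errors must not be cached forever
        throw err; // still reject for the current callers
      });
    }
    return inflight;
  };
}
```

Without the `inflight = null` on rejection, one transient cache error would permanently pin a rejected promise, which is exactly the "transient cache errors permanently prevent ensureConfigServers" bug.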
2026-03-28 10:36:43 -04:00
configServers,
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926)

* ✨ feat: Implement Resumable Generation Jobs with SSE Support

- Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
- Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
- Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
- Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
- Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.

* WIP: resuming

* WIP: resumable stream

* feat: Enhance Stream Management with Abort Functionality

- Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
- Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
- Added a `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
- Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
- Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
- Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.

* fix: Update query parameter handling in useChatHelpers

- Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.
* fix: improve syncing when switching conversations

* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup

* fix: Improve content type mismatch handling in useStepHandler

- Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.

* fix: Allow dynamic content creation in useChatFunctions

- Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.

* fix: Refine response message handling in useStepHandler

- Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.

* refactor: Enhance GenerationJobManager with In-Memory Implementations

- Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
- Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
- Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
- Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.

* refactor: Enhance GenerationJobManager with improved subscriber handling

- Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
- Refined the createJob and subscribe methods to ensure generation starts only when the first real client connects.
- Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.

* chore: Adjust timeout for subscriber readiness in ResumableAgentController

- Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.

* refactor: Update GenerationJobManager documentation and structure

- Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
- Updated comments to reflect the potential for Redis integration and the need for async refactoring.
- Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.

* refactor: Convert GenerationJobManager methods to async for improved performance

- Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
- Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
- Increased the timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.

* refactor: Simplify initial response handling in useChatFunctions

- Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.

* refactor: Clarify content handling logic in useStepHandler

- Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
- Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
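The subscriber-readiness gating described in the commits above (start generation only when the first real client connects, with a 2500 to 3500 ms fallback) can be sketched as a promise gate. `createSubscriberGate` and its method names are illustrative, not the actual GenerationJobManager API:

```javascript
// Gate background generation on the first SSE subscriber, with a
// timeout fallback so a job still starts if no client ever attaches.
function createSubscriberGate(timeoutMs) {
  let resolveReady;
  const ready = new Promise((resolve) => {
    resolveReady = resolve;
  });
  const timer = setTimeout(() => resolveReady('timeout'), timeoutMs);
  return {
    // Called when the first real client connects.
    onFirstSubscriber() {
      clearTimeout(timer);
      resolveReady('subscriber');
    },
    // Generation awaits this before emitting events.
    waitUntilReady: () => ready,
  };
}
```

Because a promise resolves only once, a late timeout or duplicate subscriber signal is harmless.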
* refactor: Improve message handling logic in useStepHandler

- Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
- Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.

* fix: Remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.

* refactor: Integrate streamId handling for improved resumable functionality for attachments

- Added a streamId parameter to various functions to support resumable mode in tool loading and memory processing.
- Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
- Improved logging and attachment management to accommodate both standard and resumable modes.

* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management

- Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
- Updated the abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
- Simplified cleanup processes and improved error handling during abort operations.
- Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.

* refactor: Unify streamId and conversationId handling for improved job management

- Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
- Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
- Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.

* refactor: Enhance resumable SSE handling with improved UI state management and error recovery

- Added UI state restoration on successful SSE connection to indicate ongoing submission.
- Implemented detailed error handling for network failures, including retry logic with exponential backoff.
- Introduced abort event handling to reset UI state on intentional stream closure.
- Enhanced debugging capabilities for testing reconnection and clean close scenarios.
- Updated the generation function to retry on network errors, improving resilience during submission processes.

* refactor: Consolidate content state management into IJobStore for improved job handling

- Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
- Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
- Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
- Updated the IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.

* feat: Introduce Redis-backed stream services for enhanced job management

- Added a createStreamServices function to configure the job store and event transport, supporting both Redis and in-memory options.
- Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
- Refactored the IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
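The exponential-backoff retry mentioned above is a standard pattern; a generic sketch follows. `retryWithBackoff` and its option names are illustrative, not the hook's actual internals:

```javascript
// Retry an async operation with exponentially growing delays:
// baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ... up to `retries` retries.
async function retryWithBackoff(fn, { retries = 3, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; ; attempt += 1) {
    try {
      return await fn(attempt);
    } catch (err) {
      if (attempt >= retries) throw err; // budget exhausted: surface the error
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Production variants usually also cap the delay and add jitter so many reconnecting clients do not retry in lockstep.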
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness.
- Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options.

* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport

- Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity.
- Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output.
- Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency.

* refactor: Enhance job state management and TTL configuration in RedisJobStore

- Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management.
- Refactored the handling of job expiration and cleanup processes to align with the new TTL configurations.
- Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance.
- Improved comments and documentation for better understanding of the changes made.

* refactor: Add cleanupOnComplete option to GenerationJobManager for flexible resource management

- Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion.
- Updated the completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management.
- Improved cleanup logic in the cleanup method to handle orphaned resources effectively.
- Enhanced documentation and comments for better clarity on the new functionality.

* refactor: Update TTL configuration for completed jobs in InMemoryJobStore

- Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup.
- Enhanced cleanup logic to respect the new TTL setting, improving resource management.
- Updated comments for clarity on the behavior of the TTL configuration.

* refactor: Enhance RedisJobStore with local graph caching for improved performance

- Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance.
- Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed.
- Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects.
- Improved documentation and comments for clarity on the caching mechanism and its benefits.

* feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore; add Redis Cluster support

- Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling.
- Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling.
- Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior.
- Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability.

* fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers

- Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately.
- Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting.
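The WeakRef-based local graph cache described above has a simple shape: hold a weak reference to the live object for same-instance reconnects, and fall back to the shared store (e.g. rebuilding from Redis chunks) when the reference is gone. `LocalGraphCache` and `fallbackFetch` are illustrative names, not the RedisJobStore API:

```javascript
// WeakRef-backed local cache: the fast path reuses the live in-process
// object without a Redis round-trip; a collected (or never-local) entry
// falls back to the slower shared-store fetch.
class LocalGraphCache {
  constructor(fallbackFetch) {
    this.refs = new Map(); // streamId -> WeakRef<graph>
    this.fallbackFetch = fallbackFetch; // e.g. rebuild from Redis chunks
  }
  set(streamId, graph) {
    this.refs.set(streamId, new WeakRef(graph));
  }
  async get(streamId) {
    const ref = this.refs.get(streamId);
    const live = ref && ref.deref();
    if (live) return live; // fast path: same-instance reconnect
    this.refs.delete(streamId); // drop stale/collected entries
    return this.fallbackFetch(streamId);
  }
}
```

Because WeakRef does not keep the graph alive, this cache never delays garbage collection; correctness always rests on the shared store.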
* ci: Improve Redis client disconnection handling in integration tests

- Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client.
- Added a fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown.
- Improved comments for clarity on the disconnection process and error handling.

* refactor: Enhance GenerationJobManager and event transports for improved resource management

- Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
- Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs.
- Introduced a getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams.
- Improved comments for clarity on resource management and cleanup processes.

* refactor: Update GenerationJobManager and ResumableAgentController for improved event handling

- Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
- Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions.
- Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes.

* fix: Update cache integration test command for stream to ensure proper execution

- Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests.
- This change enhances the reliability of the test suite by ensuring all tests complete as expected.
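The buffer-and-replay behavior described above (events emitted before the first subscriber attaches are held and flushed on connect) can be sketched with a minimal transport. `BufferedEventTransport` is an illustrative single-subscriber sketch, not the real event transport interface:

```javascript
// Early buffering with replay: nothing published before the first
// subscriber is lost; the backlog is flushed in order on attach.
class BufferedEventTransport {
  constructor() {
    this.buffer = [];
    this.subscriber = null;
  }
  publish(event) {
    if (this.subscriber) {
      this.subscriber(event); // live delivery
    } else {
      this.buffer.push(event); // hold until someone is listening
    }
  }
  subscribe(handler) {
    this.subscriber = handler;
    for (const event of this.buffer) handler(event); // replay backlog in order
    this.buffer = [];
  }
}
```

This is what lets generation start before the client's SSE connection is established without dropping the opening events.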
* feat: Add active job management for user and show progress in conversation list

- Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks.
- Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs.
- Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
- Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback.

* feat: Implement active job tracking by user in RedisJobStore

- Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks.
- Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs.
- Updated job creation, update, and deletion methods to manage user-specific job sets effectively.
- Enhanced integration tests to validate the new user-specific job management features.

* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore

* WIP: Add backend inspect script for easier debugging in production

* refactor: title generation logic

- Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID.
- Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
- Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion.
- Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance.

* feat: Enhance updateConvoInAllQueries to support moving conversations to the top

* chore: temporarily remove added multi convo

* refactor: Update active jobs query integration for optimistic updates on abort

- Introduced a new interface for the active jobs response to standardize data handling.
- Updated query keys for active jobs to ensure consistency across components.
- Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.

* refactor: Add useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints

- Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
- Updated the ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
- Refactored imports in ChatView for better organization.

* refactor: streamline conversation title generation handling

- Removed the unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
- Integrated a queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.

* feat: Add USE_REDIS_STREAMS configuration for stream job storage

- Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
- Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
- Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.

* fix: title generation queue management for assistants

- Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
- Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow. * refactor: streamline agent controller and remove legacy resumable handling - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic. - Deprecated the legacy non-resumable path, providing a clear migration path for future use. - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode. - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance. * feat: Add USE_REDIS_STREAMS configuration to .env.example - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams. - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management. * refactor: remove unused setHeaders middleware from chat route - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process. - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks. 
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
userMCPAuthMap,
streamId,
});
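The `null`-result check further below corresponds to a throttled reconnect attempt. A minimal sketch of such a throttle follows; the function name, map name, and cooldown value are illustrative assumptions, not the project's actual implementation:

```javascript
// Sketch (assumed names/values): a reconnect attempted within the cooldown
// window is skipped, so the caller can return `null` to mean "throttled".
const RECONNECT_COOLDOWN_MS = 30_000; // assumed cooldown
const lastReconnectAttempts = new Map(); // serverName -> last attempt timestamp

function shouldAttemptReconnect(serverName, now = Date.now()) {
  const last = lastReconnectAttempts.get(serverName);
  if (last !== undefined && now - last < RECONNECT_COOLDOWN_MS) {
    return false; // throttled: too soon since the last attempt
  }
  lastReconnectAttempts.set(serverName, now);
  return true;
}
```

The per-server timestamp is only updated on attempts that go through, so a burst of calls collapses into a single reconnect per cooldown window.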
// `null` is the throttle sentinel: a reconnect was attempted too recently.
if (result === null) {
  logger.debug(`[MCP][${serverName}] Reconnect throttled, skipping tool creation.`);
  return [];
}
// A missing result or missing tools list means reinitialization failed outright.
if (!result || !result.tools) {
  logger.warn(`[MCP][${serverName}] Failed to reinitialize MCP server.`);
  return [];
}
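Throttle bookkeeping like the above keeps per-server state in maps that must not grow without bound. One way to keep such maps bounded is a TTL-plus-size eviction pass; the sketch below uses assumed names and limits and is not the project's actual code:

```javascript
// Illustrative sketch (assumed names/limits): drop entries older than `ttlMs`,
// then trim oldest-first until the map is within `maxSize`. Values are assumed
// to be insertion-time timestamps, so Map insertion order approximates age.
function evictStale(map, { ttlMs = 5 * 60_000, maxSize = 500, now = Date.now() } = {}) {
  for (const [key, timestamp] of map) {
    if (now - timestamp > ttlMs) {
      map.delete(key);
    }
  }
  while (map.size > maxSize) {
    // Map iteration order is insertion order, so the first key is the oldest.
    map.delete(map.keys().next().value);
  }
}
```

Running this opportunistically (e.g. before each insertion) keeps the cost amortized without needing a background timer.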
const serverTools = [];
for (const tool of result.tools) {
  const toolInstance = await createMCPTool({
    res,
    user,
    provider,
    userMCPAuthMap,
2026-03-28 10:36:43 -04:00
configServers,
streamId,
availableTools: result.availableTools,
toolKey: `${tool.name}${Constants.mcp_delimiter}${serverName}`,
config: serverConfig,
});
if (toolInstance) {
serverTools.push(toolInstance);
}
}
return serverTools;
}
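The loop above registers each tool under a flat key built from the tool name, the MCP delimiter constant, and the server name. A minimal illustrative sketch of that key scheme, assuming `Constants.mcp_delimiter` is `'_mcp_'` (the value used by `librechat-data-provider`); the helper names here are hypothetical, not part of this module:

```javascript
// Illustrative only: the delimiter value is assumed, and these helpers
// do not exist in MCP.js — they just demonstrate the key scheme.
const mcp_delimiter = '_mcp_';

/** Builds the flat tool key used to route a call to its MCP server. */
function buildToolKey(toolName, serverName) {
  return `${toolName}${mcp_delimiter}${serverName}`;
}

/** Splits a tool key back into tool and server parts (first delimiter wins). */
function parseToolKey(toolKey) {
  const idx = toolKey.indexOf(mcp_delimiter);
  if (idx === -1) {
    return { toolName: toolKey, serverName: null };
  }
  return {
    toolName: toolKey.slice(0, idx),
    serverName: toolKey.slice(idx + mcp_delimiter.length),
  };
}

// buildToolKey('search', 'github') → 'search_mcp_github'
```

Splitting on the first delimiter occurrence keeps server names containing `_mcp_` unambiguous only if tool names never contain the delimiter, which is the assumption this sketch makes.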
/**
* Creates a single tool from the specified MCP Server via `toolKey`.
* @param {Object} params
2025-06-17 13:50:33 -04:00
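For orientation, the documented return shape below (an object exposing a `_call` method) can be sketched with a minimal stand-in. This is a hypothetical illustration only, not LibreChat's implementation: `makeFakeMCPTool`, the server/tool names, and the echoed result shape are all invented here to show how a caller would invoke the resolved tool.

```javascript
// Hypothetical stand-in matching the `@returns` shape documented below:
// an object whose `_call(toolInput)` executes the tool with the given input.
function makeFakeMCPTool(serverName, toolName) {
  return {
    _call: async (toolInput) => {
      // A real MCP tool would forward `toolInput` to the connected server;
      // this sketch just echoes what would be sent.
      return { server: serverName, tool: toolName, input: toolInput };
    },
  };
}

// Usage: await the `_call` result as an agent tool invocation would.
const fakeTool = makeFakeMCPTool('everything', 'search');
fakeTool._call({ query: 'hello' }).then((result) => {
  console.log(JSON.stringify(result));
});
```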
* @param {ServerResponse} params.res - The Express response object for sending events.
* @param {IUser} params.user - The user from the request object.
* @param {string} params.toolKey - The toolKey for the tool.
* @param {string} params.model - The model for the tool.
 * @param {number} [params.index] - The index of the tool call.
 * @param {AbortSignal} [params.signal] - Abort signal for cancelling the tool call.
* @param {string | null} [params.streamId] - The stream ID for resumable mode.
* @param {Providers | EModelEndpoint} params.provider - The provider for the tool.
 * @param {LCAvailableTools} [params.availableTools] - Map of available tools.
 * @param {Record<string, Record<string, string>>} [params.userMCPAuthMap] - User-provided MCP auth values, keyed by server.
 * @param {import('@librechat/api').ParsedServerConfig} [params.config] - The parsed MCP server configuration.
 * @returns {Promise<typeof tool | { _call: (toolInput: Object | string) => unknown }>} An object with a `_call` method to execute the tool input.
*/
async function createMCPTool({
res,
user,
index,
signal,
toolKey,
provider,
userMCPAuthMap,
availableTools,
config,
configServers,
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926) * ✨ feat: Implement Resumable Generation Jobs with SSE Support - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections. - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress. - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling. - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings. - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections. * WIP: resuming * WIP: resumable stream * feat: Enhance Stream Management with Abort Functionality - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId. - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration. - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations. - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation. - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly. - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately. * fix: Update query parameter handling in useChatHelpers - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations. 
* fix: improve syncing when switching conversations

* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup

* fix: Improve content type mismatch handling in useStepHandler
  - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.

* fix: Allow dynamic content creation in useChatFunctions
  - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.

* fix: Refine response message handling in useStepHandler
  - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.

* refactor: Enhance GenerationJobManager with In-Memory Implementations
  - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
  - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
  - Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
  - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.

* refactor: Enhance GenerationJobManager with improved subscriber handling
  - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
  - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
  - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
  - Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.

* chore: Adjust timeout for subscriber readiness in ResumableAgentController
  - Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.

* refactor: Update GenerationJobManager documentation and structure
  - Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
  - Updated comments to reflect the potential for Redis integration and the need for async refactoring.
  - Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.

* refactor: Convert GenerationJobManager methods to async for improved performance
  - Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
  - Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
  - Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.

* refactor: Simplify initial response handling in useChatFunctions
  - Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.

* refactor: Clarify content handling logic in useStepHandler
  - Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
  - Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
* refactor: Improve message handling logic in useStepHandler
  - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
  - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.

* fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.

* refactor: Integrate streamId handling for improved resumable functionality for attachments
  - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing.
  - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
  - Improved logging and attachment management to accommodate both standard and resumable modes.

* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management
  - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
  - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
  - Simplified cleanup processes and improved error handling during abort operations.
  - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.

* refactor: Unify streamId and conversationId handling for improved job management
  - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
  - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
  - Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
  - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.

* refactor: Enhance resumable SSE handling with improved UI state management and error recovery
  - Added UI state restoration on successful SSE connection to indicate ongoing submission.
  - Implemented detailed error handling for network failures, including retry logic with exponential backoff.
  - Introduced abort event handling to reset UI state on intentional stream closure.
  - Enhanced debugging capabilities for testing reconnection and clean close scenarios.
  - Updated generation function to retry on network errors, improving resilience during submission processes.

* refactor: Consolidate content state management into IJobStore for improved job handling
  - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
  - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
  - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
  - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.

* feat: Introduce Redis-backed stream services for enhanced job management
  - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options.
  - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
  - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
  - Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness.
  - Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options.

* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport
  - Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity.
  - Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output.
  - Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency.

* refactor: Enhance job state management and TTL configuration in RedisJobStore
  - Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management.
  - Refactored the handling of job expiration and cleanup processes to align with new TTL configurations.
  - Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance.
  - Improved comments and documentation for better understanding of the changes made.

* refactor: add cleanupOnComplete option to GenerationJobManager for flexible resource management
  - Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion.
  - Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management.
  - Improved cleanup logic in the cleanup method to handle orphaned resources effectively.
  - Enhanced documentation and comments for better clarity on the new functionality.

* refactor: Update TTL configuration for completed jobs in InMemoryJobStore
  - Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup.
  - Enhanced cleanup logic to respect the new TTL setting, improving resource management.
  - Updated comments for clarity on the behavior of the TTL configuration.

* refactor: Enhance RedisJobStore with local graph caching for improved performance
  - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance.
  - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed.
  - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects.
  - Improved documentation and comments for clarity on the caching mechanism and its benefits.

* feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support
  - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling.
  - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling.
  - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior.
  - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability.

* fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers
  - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately.
  - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting.
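The "local graph caching with WeakRef" commit above relies on a standard JavaScript pattern: hold objects through `WeakRef` so the cache never prevents garbage collection, and treat a failed `deref()` as a cache miss that falls through to the slow path (Redis, in that commit). A minimal sketch, with illustrative names rather than RedisJobStore's actual fields:

```javascript
/**
 * WeakRef cache sketch: entries do not keep their targets alive. get()
 * returns undefined once the target has been collected, so callers simply
 * fall back to their slow path. Names are hypothetical.
 */
class WeakLocalCache {
  constructor() {
    this.refs = new Map(); // key -> WeakRef to the cached object
  }

  set(key, value) {
    this.refs.set(key, new WeakRef(value));
  }

  /** Returns the cached object, or undefined if never set or already GC'd. */
  get(key) {
    const ref = this.refs.get(key);
    const value = ref?.deref();
    if (ref && value === undefined) {
      this.refs.delete(key); // self-heal: drop the stale entry
    }
    return value;
  }
}
```

`WeakRef` is available in Node.js 14.6+; a production version would also sweep stale entries periodically (e.g. via `FinalizationRegistry`) instead of only on lookup.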
* ci: Improve Redis client disconnection handling in integration tests
  - Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client.
  - Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown.
  - Improved comments for clarity on the disconnection process and error handling.

* refactor: Enhance GenerationJobManager and event transports for improved resource management
  - Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
  - Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs.
  - Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams.
  - Improved comments for clarity on resource management and cleanup processes.

* refactor: Update GenerationJobManager and ResumableAgentController for improved event handling
  - Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
  - Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions.
  - Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes.

* fix: Update cache integration test command for stream to ensure proper execution
  - Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests.
  - This change enhances the reliability of the test suite by ensuring all tests complete as expected.
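The quit-then-disconnect teardown in the CI commit above is a small but useful pattern: prefer the graceful close (which flushes pending replies), and only force the socket closed if that fails. A sketch assuming an ioredis-style client shape (`quit()` returning a promise, synchronous `disconnect()`); any object with those two methods works:

```javascript
/**
 * Graceful-close sketch: try quit() first, fall back to disconnect() if the
 * connection is already broken. Returns which path was taken, for logging.
 */
async function closeRedisClient(client) {
  try {
    await client.quit();
    return 'quit';
  } catch {
    client.disconnect();
    return 'disconnect';
  }
}
```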
* feat: Add active job management for user and show progress in conversation list
  - Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks.
  - Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs.
  - Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
  - Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback.

* feat: Implement active job tracking by user in RedisJobStore
  - Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks.
  - Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs.
  - Updated job creation, update, and deletion methods to manage user-specific job sets effectively.
  - Enhanced integration tests to validate the new user-specific job management features.

* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore

* WIP: Add backend inspect script for easier debugging in production

* refactor: title generation logic
  - Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID.
  - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
  - Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion.
  - Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance.

* feat: Enhance updateConvoInAllQueries to support moving conversations to the top

* chore: temp. remove added multi convo

* refactor: Update active jobs query integration for optimistic updates on abort
  - Introduced a new interface for active jobs response to standardize data handling.
  - Updated query keys for active jobs to ensure consistency across components.
  - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.

* refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints
  - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
  - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
  - Refactored imports in ChatView for better organization.

* refactor: streamline conversation title generation handling
  - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
  - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.

* feat: Add USE_REDIS_STREAMS configuration for stream job storage
  - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
  - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
  - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.

* fix: title generation queue management for assistants
  - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
  - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
  - Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow.

* refactor: streamline agent controller and remove legacy resumable handling
  - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic.
  - Deprecated the legacy non-resumable path, providing a clear migration path for future use.
  - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode.
  - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance.

* feat: Add USE_REDIS_STREAMS configuration to .env.example
  - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
  - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management.

* refactor: remove unused setHeaders middleware from chat route
  - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process.
  - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks.
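Several of the commits above mention exponential backoff for title fetching and network retries. The technique is standard: retry a flaky async call with a delay that doubles each attempt, rethrowing once the retry budget is spent. A minimal sketch (the retry counts and delays here are illustrative, not LibreChat's actual values):

```javascript
/**
 * Exponential-backoff sketch: retries fn() up to `retries` extra times,
 * sleeping baseDelayMs * 2^attempt between attempts (250, 500, 1000, ...).
 * Hypothetical helper, not the project's actual implementation.
 */
async function withBackoff(fn, { retries = 4, baseDelayMs = 250 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) {
        throw err; // budget exhausted: surface the last error
      }
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Production versions usually add jitter and an abort signal so a cancelled conversation stops polling for its title.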
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)

* fix(flow): add immediate abort handling and fix intervalId initialization
  - Add immediate abort handler that responds instantly to abort signal
  - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error
  - Consolidate cleanup logic into single function to avoid duplicate cleanup
  - Properly remove abort event listener on cleanup

* fix(mcp): clean up OAuth flows on abort and simplify flow handling
  - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows
  - Update createAbortHandler to clean up both flow types on tool call abort
  - Pass abort signal to createFlow in returnOnOAuth path
  - Simplify handleOAuthRequired to always cancel existing flows and start fresh
  - This ensures the user always gets a new OAuth URL instead of waiting for stale flows

* fix(agents): handle 'new' conversationId and improve abort reliability
  - Treat 'new' as a placeholder that needs a UUID in the request controller
  - Send JSON response immediately before tool loading for faster SSE connection
  - Use the job's abort controller instead of prelimAbortController
  - Emit errors to stream if headers already sent
  - Skip 'new' as valid ID in abort endpoint
  - Add fallback to find active jobs by userId when conversationId is 'new'

* fix(stream): detect early abort and prevent navigation to non-existent conversation
  - Abort controller on job completion to signal pending operations
  - Detect early abort (no content, no responseMessageId) in abortJob
  - Set conversation and responseMessage to null for early aborts
  - Add earlyAbort flag to final event for frontend detection
  - Remove unused text field from AbortResult interface
  - Frontend handles earlyAbort by staying on/navigating to new chat

* test(mcp): update test to expect signal parameter in createFlow

* fix(agents): include 'new' conversationId in newConvo check for title generation
  When the frontend sends 'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity.

* fix(agents): check abort state before completeJob for title generation
  completeJob now triggers the abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
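The "Declare intervalId before cleanup function" fix above addresses a classic temporal-dead-zone bug: if an abort handler can run synchronously (e.g. the signal is already aborted) and its cleanup closure references a `const intervalId` declared further down, the closure throws "Cannot access 'intervalId' before initialization". Declaring the variable up front with `let` makes the cleanup safe at any point. A minimal sketch with hypothetical names:

```javascript
/**
 * TDZ-fix sketch: intervalId is declared BEFORE the cleanup closure, so the
 * closure can run safely even if abort fires before the interval is set.
 * Requires Node 15+ for AbortController. Illustrative, not the actual flow code.
 */
function startPolling(onTick, { signal, intervalMs = 100 } = {}) {
  let intervalId; // declared first: no temporal dead zone for cleanup()

  const cleanup = () => {
    clearInterval(intervalId); // no-op if still undefined
    signal?.removeEventListener('abort', cleanup);
  };

  signal?.addEventListener('abort', cleanup, { once: true });
  if (signal?.aborted) {
    // Abort arrived before we ever started: run cleanup and bail out.
    cleanup();
    return cleanup;
  }

  intervalId = setInterval(onTick, intervalMs);
  return cleanup; // single consolidated cleanup path, per the commit above
}
```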
streamId = null,
}) {
const [toolName, serverName] = toolKey.split(Constants.mcp_delimiter);
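The destructuring on the line above relies on the tool-key convention: a combined key embeds both the tool name and the MCP server name around a delimiter constant. Assuming the delimiter is the string `'_mcp_'` (check `Constants.mcp_delimiter` in librechat-data-provider for the authoritative value), a sketch of the round trip:

```javascript
/**
 * Tool-key parsing sketch. MCP_DELIMITER is a stand-in for
 * Constants.mcp_delimiter; its exact value is an assumption here.
 */
const MCP_DELIMITER = '_mcp_';

function buildToolKey(toolName, serverName) {
  return `${toolName}${MCP_DELIMITER}${serverName}`;
}

function parseToolKey(toolKey) {
  const [toolName, serverName] = toolKey.split(MCP_DELIMITER);
  return { toolName, serverName };
}
```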
✂️ refactor: MCP UI Separation for Agents (#9237)

* refactor: MCP UI Separation for Agents (Dustin WIP)
  feat: separate MCPs into their own lists away from tools + actions and add the status indicator functionality from chat to their dropdown ui
  fix: spotify mcp was not persisting on agent creation
  feat: show disconnected saved servers and their tools in agent mcp list in created agents
  fix: select-all regression fixed (caused by deleting tools we were drawing from for rendering list)
  fix: don't show all mcps, only those installed in agent in list
  feat: separate ToolSelectDialog for MCPServerTools
  fix: uninitialized mcp servers not showing as added in toolselectdialog
  refactor: reduce looping in AgentPanelContext for categorizing groups and mcps
  refactor: split ToolSelectDialog and MCPToolSelectDialog functionality (still needs customization for custom user vars)
  chore: address ESLint comments
  chore: address ESLint comments
  feat: one-click initialization on MCP servers in agent builder
  fix: stop propagation triggering reinit on caret click
  refactor: split uninitialized MCPs component from initialized MCPs
  feat: new mcp tool select dialog ui with custom user vars
  feat: show initialization state for CUV configurable MCPs too
  chore: remove unused localization string
  fix: deselecting all tools caused a re-render
  fix: remove subtools so removal from MCPToolSelectDialog works more consistently
  feat: added servers have all tools enabled by default
  feat: mcp server list now alphabetical to prevent annoying ui behavior of servers jumping around depending on tool selection
  fix: filter out placeholder group mcp tools from any actual tool calls / definitions
  feat: indicator now takes you to config dialog for uninitialized servers
  feat: show previously configured mcp servers that are now missing from the yaml
  feat: select all enabled by default on first add to mcp server list
  chore: address ESLint comments

* refactor: MCP UI Separation for Agents (Danny WIP)
  chore: remove use of `{serverName}_mcp_{serverName}`
  chore: import order
  WIP: separate component concerns
  refactor: streamline agent mcp tools
  refactor: unify MCP server handling and improve tool visibility logic, remove unnecessary normalization or sorting, remove nesting button, make variable names clear
  refactor: rename mcpServerIds to mcpServerNames for clarity and consistency across components
  refactor: remove groupedMCPTools and toolToServerMap, streamline MCP server handling in context and components to effectively utilize mcpServersMap
  refactor: optimize tool selection logic by replacing array includes with Set for improved performance
  chore: add error logging for failed auth URL parsing in ToolCall component
  refactor: enhance MCP tool handling by improving server name management and updating UI elements for better clarity

* refactor: decouple connection status from useMCPServerManager with useMCPConnectionStatus
* fix: improve MCP tool validation logic to handle unconfigured servers
* chore: enhance log message clarity for MCP server disconnection in updateUserPluginsController
* refactor: simplify connection status extraction in useMCPConnectionStatus hook
* refactor: improve initializing UX
* chore: replace string literal with ResourceType constant in useResourcePermissions
* refactor: cleanup code, remove redundancies, rename variables for clarity
* chore: add back filtering and sorting for mcp tools dialog
* refactor: initializeServer to return response and early return
* refactor: enhance server initialization logic and improve UI for OAuth interaction
* chore: clarify warning message for unconfigured MCP server in handleTools
* refactor: prevent CustomUserVarsSection from submitting tools dialog form
* fix: nested button of button issue in UninitializedMCPTool
* feat: add functionality to revoke custom user variables in MCPToolSelectDialog

---------
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-29 19:57:01 -07:00
🔒 feat: Add MCP server domain restrictions for remote transports (#11013)

* 🔒 feat: Add MCP server domain restrictions for remote transports

* 🔒 feat: Implement comprehensive MCP error handling and domain validation
  - Added `handleMCPError` function to centralize error responses for domain restrictions and inspection failures.
  - Introduced custom error classes: `MCPDomainNotAllowedError` and `MCPInspectionFailedError` for better error management.
  - Updated MCP server controllers to utilize the new error handling mechanism.
  - Enhanced domain validation logic in `createMCPTools` and `createMCPTool` functions to prevent operations on disallowed domains.
  - Added tests for runtime domain validation scenarios to ensure correct behavior.

* chore: import order

* 🔒 feat: Enhance domain validation in MCP tools with user role-based restrictions
  - Integrated `getAppConfig` to fetch allowed domains based on user roles in `createMCPTools` and `createMCPTool` functions.
  - Removed the deprecated `getAllowedDomains` method from `MCPServersRegistry`.
  - Updated tests to verify domain restrictions are applied correctly based on user roles.
  - Ensured that domain validation logic is consistent and efficient across tool creation processes.

* 🔒 test: Refactor MCP tests to utilize configurable app settings
  - Introduced a mock for `getAppConfig` to enhance test flexibility.
  - Removed redundant mock definition to streamline test setup.
  - Ensured tests are aligned with the latest domain validation logic.

---------
Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-12-18 19:57:49 +01:00
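The domain-restriction feature above boils down to validating a remote MCP server's URL hostname against an allowlist before connecting or creating tools. A minimal sketch; the matching semantics here (exact hostname or any subdomain, unparseable URLs fail closed) are an assumption for illustration, not necessarily LibreChat's exact rules:

```javascript
/**
 * Domain-allowlist sketch. An empty or missing allowlist means no
 * restriction is configured. Hypothetical helper, not the project's API.
 */
function isDomainAllowed(serverUrl, allowedDomains) {
  if (!allowedDomains || allowedDomains.length === 0) {
    return true; // no restriction configured
  }
  let hostname;
  try {
    hostname = new URL(serverUrl).hostname;
  } catch {
    return false; // unparseable URL fails closed
  }
  return allowedDomains.some(
    (domain) => hostname === domain || hostname.endsWith(`.${domain}`),
  );
}
```

Note the `.${domain}` suffix check: it admits `mcp.example.com` for `example.com` while rejecting lookalikes such as `notexample.com`.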
const serverConfig =
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)

* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring
  - Add `tenantMcpPolicy` to `mcpSettings` in the YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`
  - Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig`
  - Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector
  - Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB

* feat: three-layer MCPServersRegistry with config cache and lazy init
  - Add `configCacheRepo` as a third repository layer between the YAML cache and the DB for admin-defined config-source MCP servers
  - Implement `ensureConfigServers()` that identifies config-override servers from the resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'`
  - Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via the `pendingConfigInits` map
  - Extend `getAllServerConfigs()` with an optional `configServers` param for a three-way merge: YAML → Config → User
  - Add `getServerConfig()` lookup through the config cache layer
  - Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations
  - Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()`

* feat: wire tenant context into MCP controllers, services, and cache invalidation
  - Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers
  - Pass `ensureConfigServers()` results through `getAllServerConfigs()` for a three-way merge of YAML + Config + User servers
  - Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` from ALS
  - Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers

* feat: enforce tenantMcpPolicy on admin config mcpServers mutations
  - Add `validateMcpServerPolicy()` helper that checks mcpServers against the operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains)
  - Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when the policy is violated
  - Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http)
  - Validate server domains against the policy allowlist when configured

* revert: remove tenantMcpPolicy schema and enforcement
  The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future.
  - Remove tenantMcpPolicy from the mcpSettings Zod schema
  - Remove the validateMcpServerPolicy helper and TenantMcpPolicy interface
  - Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers

* test: update test assertions for source field and config-server wiring
  - Use objectContaining in the MCPServersRegistry reset test to account for the new source: 'yaml' field on CACHE-stored configs
  - Add getTenantId and ensureConfigServers mocks to MCP route tests
  - Add getAppConfig mock to the route test Config service mock
  - Update getMCPSetupData assertion to expect a second options argument
  - Update getAllServerConfigs assertions for the new configServers parameter

* fix: disconnect active connections when config-source servers are evicted
  When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout.
  - Return evicted server names from invalidateConfigCache()
  - Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect()

* fix: address code review findings (CRITICAL, MAJOR, MINOR)
  CRITICAL fixes:
  - Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations
  - Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode
  MAJOR fixes:
  - Derive OAuth servers from the already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected
  - Add parseInt radix (10) and a NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS
  - Add CONFIG_CACHE_NAMESPACE to the aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
  - Remove the `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context
  MINOR fixes:
  - Extract resolveAllMcpConfigs() helper in the mcp controller to eliminate 3x copy-pasted config resolution boilerplate
  - Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing them
  - Remove narrative inline comments per the style guide
  - Remove dead try/catch inside Promise.allSettled in ensureConfigServers (the inner method never throws)
  - Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request
  Test updates:
  - Add ensureConfigServers mock to registry test fixtures
  - Update getMCPSetupData assertions for inline OAuth derivation

* fix: address code review findings (CRITICAL, MAJOR, MINOR)
  CRITICAL fixes:
  - Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory
  - Fix dbSourced fail-closed: use the source field when present, fall back to the legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack the source field)
  MAJOR fixes:
  - Add CONFIG_CACHE_NAMESPACE to the aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
  - Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation
  MINOR fixes:
  - Update MCPServerInspector test assertion for the dbSourced change

* fix: restore getServerConfig lookup for config-source servers (NEW-1)
  Add a configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.
  - Populate configNameToKey in ensureSingleConfigServer
  - Clear configNameToKey in invalidateConfigCache and reset
  - Clear stale read-through cache entries after lazy init
  - Remove dead code in invalidateConfigCache (config.title, key parsing)
  - Add getServerConfig tests for config-source server lookup

* fix: eliminate configNameToKey race via caller-provided configServers param
  Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race.
  - Add optional configServers param to getServerConfig; when provided, returns the matching config directly without any global lookup
  - Remove the configNameToKey map entirely (it was the source of the race)
  - Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons)
  - Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call
  - Add cross-tenant isolation test for getServerConfig

* fix: populate read-through cache after config server lazy init
  After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.

* fix: user-scoped getServerConfig fallback to server-only cache key
  When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.

* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race
  - CRITICAL: Add a findInConfigCache fallback in getServerConfig so config-source servers remain reachable after the readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.
  - MAJOR: Extract an isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).
  - MAJOR: Fix a cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.
MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove redundant .catch(() => {}) inside Promise.allSettled. Tighten dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId). * feat: thread configServers through getAppToolFunctions and formatInstructionsForContext Add optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context. * fix: stale failure stubs retry after 5 min, upsert for cross-process races - Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race) - Extract upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded - Add test for stale-stub retry after CONFIG_STUB_RETRY_MS * fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup - Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ?? 
0 → epoch → always expired) - Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined - Log double-failure in upsertConfigCache instead of silently swallowing - Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests * fix: server-only readThrough fallback only returns truthy values Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup. * fix: remove findInConfigCache to eliminate cross-tenant config leakage The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through: 1. The configServers param (callers with tenant context from ALS) 2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs) Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return. - Remove findInConfigCache method and its call in getServerConfig - Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup) - Update tests to document tenant isolation behavior after cache expiry * style: fix import order per AGENTS.md conventions Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector. 
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues: - Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries - TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired Changes: - Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers - Thread serverConfig from createMCPTool through createToolInstance closure to callTool - Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped) - Remove server-only readThrough fallback from getServerConfig - Increase config cache hash from 8 to 16 hex chars (64-bit) - Add isUserSourced boundary tests for all source/dbId combinations - Fix double Object.keys call in getMCPTools controller - Update test assertions for new getServerConfig behavior * fix: cache base configs for config-server users; narrow upsertConfigCache error handling - Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. 
Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users - Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages * fix: restore correct merge order and document upsert error matching - Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract) - Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message * feat: complete config-source server support across all execution paths Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions. - Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool - Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application - Add configServers param to createMCPTool and createMCPTools for reconnect path fallback - Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers - Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries - Update context.spec.ts assertions for new configServers parameter * fix: add missing mocks for config-source server dependencies in client.test.js Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations. 
* fix: update formatInstructionsForContext assertions for configServers param The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring. * fix: move configServers resolution before MCP tool loop to avoid TDZ configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a ReferenceError temporal dead zone. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution. * fix: address review findings — cache bypass, discoverServerTools gap, DRY - #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls. - #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows. - #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions. - #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern. - #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint. - #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs. 
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case - Merge duplicate @librechat/data-schemas requires in MCP.js into one - Move resolveConfigServers after module-level constants - Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys * fix: replace fragile string-match error detection with proper upsert method Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically: - InMemory: direct Map.set() - Redis: direct cache.set() - RedisAggregateKey: read-modify-write under write lock - DB: delegates to update() (DB servers use explicit add() with ACL setup) * fix: wire configServers through remaining HTTP endpoints - getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig - reinitialize route: resolve configServers before getServerConfig - auth-values route: resolve configServers before getServerConfig - getOAuthHeaders: accept configServers param, thread from callers - Update mcp.spec.js tests to mock getAllServerConfigs for GET by name * fix: thread serverConfig through getConnection for config-source servers Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup. 
* fix: thread configServers through reinit, reconnect, and tool definition paths Wire configServers through every remaining call chain that creates or reconnects MCP server connections: - reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools - reconnectServer: accepts and passes configServers to reinitMCPServer - createMCPTools/createMCPTool: pass configServers to reconnectServer - ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites - reinitialize route: passes serverConfig and configServers to reinitMCPServer * fix: address review findings — simplify merge, harden error paths, fix log labels - Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base } - Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error - Deduplicate getYamlServerNames cold-start with promise dedup pattern - Remove dead `if (!mcpConfig)` guard in getMCPSetupData - Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling - Remove misleading OAuth callback comment about readThrough cache - Move resolveConfigServers after module-level constants in MCP.js * fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label - Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working - Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers - Use source field instead of dbId for storageLocation derivation - Fix remaining hardcoded "App" in reset() leaderCheck message * fix: persist oauthHeaders in flow state 
for config-source OAuth servers The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers. * fix: update tests for getMCPSetupData null guard removal and ToolService mock - MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object) - MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks - ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers) * fix: address review findings — DRY, naming, logging, dead code, defensive guards - #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll() - #2: Add warning log when oauthHeaders absent from OAuth callback flow state - #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing - #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused - #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped - #6: Remove dead 'MCP config not found' error handlers from routes - #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache - #8: Remove logger.error from withTimeout to prevent double-logging timeouts - #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message - #12: Use spread instead of mutation in addServer for immutability consistency - Add upsert mock to ensureConfigServers.test.ts 
DB mock - Update route tests for resolveAllMcpConfigs import change * fix: restore correct merge priority, use immutable spread, fix test mock - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData * fix: error handling, stable hashing, flatten nesting, remove dead param - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff) - Remove dead configServers param from getAppToolFunctions (never passed) - Add security rationale comment for source field in redactServerSecrets * fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties. Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context. 
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config'). * chore: imports/fields ordering * fix: address review findings — error handling, targeted lookup, test gaps - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists. - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup. - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers. - Update getAllServerConfigs JSDoc to document disjoint-key design. - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback). - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation. * fix: add getOAuthReconnectionManager mock to OAuth callback tests * chore: imports ordering
2026-03-28 10:36:43 -04:00
const serverConfig =
  config ?? (await getMCPServersRegistry().getServerConfig(serverName, user?.id, configServers));
/** Domain restrictions apply only to remote transports (#11013); stdio servers have no URL */
if (serverConfig?.url) {
  const appConfig = await getAppConfig({ role: user?.role, tenantId: user?.tenantId });
    const allowedDomains = appConfig?.mcpSettings?.allowedDomains;
    const isDomainAllowed = await isMCPDomainAllowed(serverConfig, allowedDomains);
    if (!isDomainAllowed) {
      logger.warn(`[MCP][${serverName}] Domain no longer allowed, skipping tool: ${toolName}`);
      return undefined;
    }
  }
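// The gate above delegates to isMCPDomainAllowed with the server config and the
// allowlist from appConfig?.mcpSettings?.allowedDomains. As a rough sketch (the
// helper below is hypothetical, not the actual isMCPDomainAllowed implementation),
// such a check reduces to comparing the server URL's hostname against the
// configured domains, with '*.'-prefixed entries matching subdomains:

```javascript
// Hypothetical allowlist check; the real isMCPDomainAllowed lives in
// another module and may differ in shape and semantics.
function matchesAllowedDomain(serverUrl, allowedDomains) {
  // No allowlist configured means no restriction.
  if (!Array.isArray(allowedDomains) || allowedDomains.length === 0) {
    return true;
  }
  const { hostname } = new URL(serverUrl);
  return allowedDomains.some((domain) =>
    domain.startsWith('*.')
      ? hostname.endsWith(domain.slice(1)) // '*.example.com' → any subdomain
      : hostname === domain,
  );
}
```

// Under this sketch, exact entries match only that hostname, while wildcard
// entries match subdomains but not the bare apex domain.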
  /** @type {LCTool | undefined} */
  let toolDefinition = availableTools?.[toolKey]?.function;
  if (!toolDefinition) {
    const cachedAt = missingToolCache.get(toolKey);
    if (cachedAt && Date.now() - cachedAt < MISSING_TOOL_TTL_MS) {
      logger.debug(
        `[MCP][${serverName}][${toolName}] Tool in negative cache, returning unavailable stub.`,
      );
      return createUnavailableToolStub(toolName, serverName);
    }
    logger.warn(
      `[MCP][${serverName}][${toolName}] Requested tool not found in available tools, re-initializing MCP server.`,
    );
    const result = await reconnectServer({
      res,
      user,
      index,
      signal,
      serverName,
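// The branch above consults a time-bounded negative cache: a recently failed
// lookup short-circuits to an unavailable stub instead of re-initializing the
// server on every call. A minimal standalone sketch of that pattern follows
// (names are illustrative; the module's actual missingToolCache and
// MISSING_TOOL_TTL_MS are defined elsewhere in this file):

```javascript
// Illustrative negative-cache sketch; not the module's actual implementation.
const NEGATIVE_TTL_MS = 5 * 60 * 1000; // assumed TTL for the example

/** Map of toolKey -> timestamp of the last failed lookup. */
const negativeCache = new Map();

function markMissing(toolKey) {
  negativeCache.set(toolKey, Date.now());
}

function isNegativelyCached(toolKey) {
  const cachedAt = negativeCache.get(toolKey);
  if (cachedAt == null) {
    return false;
  }
  if (Date.now() - cachedAt >= NEGATIVE_TTL_MS) {
    negativeCache.delete(toolKey); // entry expired; allow a fresh lookup
    return false;
  }
  return true;
}
```

// Expiring entries lazily on read keeps the cache self-cleaning without a
// background sweep, at the cost of stale keys lingering until next queried.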
      configServers,
      userMCPAuthMap,
      streamId,
    });
    toolDefinition = result?.availableTools?.[toolKey]?.function;
    if (!toolDefinition) {
      missingToolCache.set(toolKey, Date.now());
      evictStale(missingToolCache, MISSING_TOOL_TTL_MS);
    }
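    /*
     * Illustrative sketch only — the real `evictStale` helper is defined
     * elsewhere in this file. It is assumed to walk a Map of key -> timestamp
     * and delete entries older than the given TTL, keeping `missingToolCache`
     * (and `lastReconnectAttempts`) bounded:
     *
     *   function evictStale(cache, ttlMs) {
     *     const now = Date.now();
     *     for (const [key, ts] of cache) {
     *       if (now - ts > ttlMs) {
     *         cache.delete(key);
     *       }
     *     }
     *   }
     */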
  }
  if (!toolDefinition) {
    logger.warn(
      `[MCP][${serverName}][${toolName}] Tool definition not found, returning unavailable stub.`,
    );
    return createUnavailableToolStub(toolName, serverName);
  }
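  /*
   * Illustrative note (assumed shape — the real `createUnavailableToolStub`
   * is defined elsewhere in this file): rather than throwing, the stub's
   * invocation resolves to an `[unavailableMessage, null]`
   * content-and-artifact pair, so agent runs degrade gracefully while the
   * server cannot be reconnected.
   */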
  return createToolInstance({
    res,
    provider,
    toolName,
    serverName,
from ALS - Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers * feat: enforce tenantMcpPolicy on admin config mcpServers mutations - Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains) - Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated - Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http) - Validate server domains against policy allowlist when configured * revert: remove tenantMcpPolicy schema and enforcement The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future. - Remove tenantMcpPolicy from mcpSettings Zod schema - Remove validateMcpServerPolicy helper and TenantMcpPolicy interface - Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers * test: update test assertions for source field and config-server wiring - Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs - Add getTenantId and ensureConfigServers mocks to MCP route tests - Add getAppConfig mock to route test Config service mock - Update getMCPSetupData assertion to expect second options argument - Update getAllServerConfigs assertions for new configServers parameter * fix: disconnect active connections when config-source servers are evicted When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout. 
- Return evicted server names from invalidateConfigCache()
- Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect()

* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations
- Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode
MAJOR fixes:
- Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected
- Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS
- Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context
MINOR fixes:
- Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate
- Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing
- Remove narrative inline comments per style guide
- Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws)
- Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request
Test updates:
- Add ensureConfigServers mock to registry test fixtures
- Update getMCPSetupData assertions for inline OAuth derivation

* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory
- Fix dbSourced fail-closed: use source field when present, fall back to legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field)
MAJOR fixes:
- Add CONFIG_CACHE_NAMESPACE to aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation
MINOR fixes:
- Update MCPServerInspector test assertion for dbSourced change

* fix: restore getServerConfig lookup for config-source servers (NEW-1)
Add configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.
- Populate configNameToKey in ensureSingleConfigServer
- Clear configNameToKey in invalidateConfigCache and reset
- Clear stale read-through cache entries after lazy init
- Remove dead code in invalidateConfigCache (config.title, key parsing)
- Add getServerConfig tests for config-source server lookup

* fix: eliminate configNameToKey race via caller-provided configServers param
Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race.
- Add optional configServers param to getServerConfig; when provided, returns matching config directly without any global lookup
- Remove configNameToKey map entirely (was the source of the race)
- Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons)
- Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call
- Add cross-tenant isolation test for getServerConfig

* fix: populate read-through cache after config server lazy init
After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.

* fix: user-scoped getServerConfig fallback to server-only cache key
When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.

* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race
CRITICAL: Add findInConfigCache fallback in getServerConfig so config-source servers remain reachable after readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.
MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).
MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.
MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove redundant .catch(() => {}) inside Promise.allSettled. Tighten dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).

* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext
Add optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.

* fix: stale failure stubs retry after 5 min, upsert for cross-process races
- Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race)
- Extract upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded
- Add test for stale-stub retry after CONFIG_STUB_RETRY_MS

* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup
- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired)
- Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined
- Log double-failure in upsertConfigCache instead of silently swallowing
- Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests

* fix: server-only readThrough fallback only returns truthy values
Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.

* fix: remove findInConfigCache to eliminate cross-tenant config leakage
The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through:
1. The configServers param (callers with tenant context from ALS)
2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)
Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return.
- Remove findInConfigCache method and its call in getServerConfig
- Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup)
- Update tests to document tenant isolation behavior after cache expiry

* style: fix import order per AGENTS.md conventions
Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.
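The `updatedAt` stamping fix above hinges on one detail: with `updatedAt ?? 0` and no stamp, the staleness math runs against the epoch and every stub looks expired. A minimal sketch (the stub shape and `isStubStale` are illustrative; only `CONFIG_STUB_RETRY_MS` = 5 min comes from the log):

```javascript
const CONFIG_STUB_RETRY_MS = 5 * 60 * 1000; // 5-minute retry window, per the commit log

// Illustrative failure stub: stamping updatedAt at creation is what makes
// the retry window meaningful.
function makeFailureStub(serverName, error) {
  return { serverName, status: 'failed', error: String(error), updatedAt: Date.now() };
}

// A stub without updatedAt falls back to 0 (the epoch) and is therefore
// always "stale" — the exact bug the commit above fixes.
function isStubStale(stub, now = Date.now()) {
  return now - (stub.updatedAt ?? 0) > CONFIG_STUB_RETRY_MS;
}
```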
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures
Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues:
- Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries
- TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired
Changes:
- Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers
- Thread serverConfig from createMCPTool through createToolInstance closure to callTool
- Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped)
- Remove server-only readThrough fallback from getServerConfig
- Increase config cache hash from 8 to 16 hex chars (64-bit)
- Add isUserSourced boundary tests for all source/dbId combinations
- Fix double Object.keys call in getMCPTools controller
- Update test assertions for new getServerConfig behavior

* fix: cache base configs for config-server users; narrow upsertConfigCache error handling
- Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering.
Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users
- Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages

* fix: restore correct merge order and document upsert error matching
- Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract)
- Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message

* feat: complete config-source server support across all execution paths
Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions.
- Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool
- Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application
- Add configServers param to createMCPTool and createMCPTools for reconnect path fallback
- Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers
- Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries
- Update context.spec.ts assertions for new configServers parameter

* fix: add missing mocks for config-source server dependencies in client.test.js
Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations.
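The YAML → Config → User DB precedence restored above maps directly onto object-spread semantics, since later spreads overwrite earlier keys. A hedged sketch (the function name is illustrative; the real merge lives in getAllServerConfigs):

```javascript
// Illustrative merge: YAML is the lowest tier, config-source servers
// override YAML, and user-DB servers (highest precedence) override both.
function mergeServerConfigs(yamlConfigs, configServers, userDbConfigs) {
  return { ...yamlConfigs, ...configServers, ...userDbConfigs };
}
```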
* fix: update formatInstructionsForContext assertions for configServers param
The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring.

* fix: move configServers resolution before MCP tool loop to avoid TDZ
configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a ReferenceError temporal dead zone. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution.

* fix: address review findings — cache bypass, discoverServerTools gap, DRY
- #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls.
- #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows.
- #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions.
- #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern.
- #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint.
- #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs.
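The temporal dead zone bug fixed above is easy to reproduce in isolation: a `let` binding referenced before its declaration line throws a ReferenceError at runtime, even when the reference sits in a loop earlier in the same block. A minimal repro (illustrative names, not the actual handleTools.js code):

```javascript
// Minimal TDZ reproduction: configServers is declared with `let` AFTER the
// loop, so reading it inside the loop throws ReferenceError — the fix is
// simply to move the declaration and resolution above the loop.
function brokenOrder() {
  const seen = [];
  for (const tool of ['mcp_example_tool']) {
    seen.push(configServers); // ReferenceError: cannot access before initialization
  }
  let configServers = {};
  return seen;
}
```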
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case
- Merge duplicate @librechat/data-schemas requires in MCP.js into one
- Move resolveConfigServers after module-level constants
- Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys

* fix: replace fragile string-match error detection with proper upsert method
Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically:
- InMemory: direct Map.set()
- Redis: direct cache.set()
- RedisAggregateKey: read-modify-write under write lock
- DB: delegates to update() (DB servers use explicit add() with ACL setup)

* fix: wire configServers through remaining HTTP endpoints
- getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
- reinitialize route: resolve configServers before getServerConfig
- auth-values route: resolve configServers before getServerConfig
- getOAuthHeaders: accept configServers param, thread from callers
- Update mcp.spec.js tests to mock getAllServerConfigs for GET by name

* fix: thread serverConfig through getConnection for config-source servers
Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup.
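The upsert() change above replaces error-string matching with an add-or-update contract per implementation. A hedged sketch of the in-memory variant (class and method names are illustrative, modeled on the "InMemory: direct Map.set()" bullet):

```javascript
// Illustrative in-memory repository: add() rejects duplicates (the old
// behavior callers had to string-match against), while upsert() is
// add-or-update in one step — Map.set() already has those semantics.
class InMemoryServerConfigs {
  constructor() {
    this.store = new Map();
  }
  add(key, value) {
    if (this.store.has(key)) {
      throw new Error(`${key} already exists in cache`);
    }
    this.store.set(key, value);
  }
  upsert(key, value) {
    this.store.set(key, value);
  }
  get(key) {
    return this.store.get(key);
  }
}
```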
* fix: thread configServers through reinit, reconnect, and tool definition paths
Wire configServers through every remaining call chain that creates or reconnects MCP server connections:
- reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools
- reconnectServer: accepts and passes configServers to reinitMCPServer
- createMCPTools/createMCPTool: pass configServers to reconnectServer
- ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites
- reinitialize route: passes serverConfig and configServers to reinitMCPServer

* fix: address review findings — simplify merge, harden error paths, fix log labels
- Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base }
- Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error
- Deduplicate getYamlServerNames cold-start with promise dedup pattern
- Remove dead `if (!mcpConfig)` guard in getMCPSetupData
- Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling
- Remove misleading OAuth callback comment about readThrough cache
- Move resolveConfigServers after module-level constants in MCP.js

* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label
- Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working
- Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
- Use source field instead of dbId for storageLocation derivation
- Fix remaining hardcoded "App" in reset() leaderCheck message

* fix: persist oauthHeaders in flow state for config-source OAuth servers
The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers.

* fix: update tests for getMCPSetupData null guard removal and ToolService mock
- MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object)
- MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks
- ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers)

* fix: address review findings — DRY, naming, logging, dead code, defensive guards
- #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll()
- #2: Add warning log when oauthHeaders absent from OAuth callback flow state
- #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing
- #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused
- #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped
- #6: Remove dead 'MCP config not found' error handlers from routes
- #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
- #8: Remove logger.error from withTimeout to prevent double-logging timeouts
- #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
- #12: Use spread instead of mutation in addServer for immutability consistency
- Add upsert mock to ensureConfigServers.test.ts DB mock
- Update route tests for resolveAllMcpConfigs import change

* fix: restore correct merge priority, use immutable spread, fix test mock
- getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority
- lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix
- Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData

* fix: error handling, stable hashing, flatten nesting, remove dead param
- Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline
- Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order
- Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff)
- Remove dead configServers param from getAppToolFunctions (never passed)
- Add security rationale comment for source field in redactServerSecrets

* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision
The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties.
Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context.
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency
The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config').

* chore: imports/fields ordering

* fix: address review findings — error handling, targeted lookup, test gaps
- Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists.
- Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup.
- Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers.
- Update getAllServerConfigs JSDoc to document disjoint-key design.
- Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback).
- Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation.

* fix: add getOAuthReconnectionManager mock to OAuth callback tests

* chore: imports ordering
2026-03-28 10:36:43 -04:00
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926)

* ✨ feat: Implement Resumable Generation Jobs with SSE Support
- Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
- Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
- Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
- Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
- Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.

* WIP: resuming

* WIP: resumable stream

* feat: Enhance Stream Management with Abort Functionality
- Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
- Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
- Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
- Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
- Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
- Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.

* fix: Update query parameter handling in useChatHelpers
- Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.
* fix: improve syncing when switching conversations

* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup

* fix: Improve content type mismatch handling in useStepHandler
- Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.

* fix: Allow dynamic content creation in useChatFunctions
- Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.

* fix: Refine response message handling in useStepHandler
- Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.

* refactor: Enhance GenerationJobManager with In-Memory Implementations
- Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
- Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
- Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
- Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.

* refactor: Enhance GenerationJobManager with improved subscriber handling
- Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
- Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
- Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.

* chore: Adjust timeout for subscriber readiness in ResumableAgentController
- Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.

* refactor: Update GenerationJobManager documentation and structure
- Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
- Updated comments to reflect the potential for Redis integration and the need for async refactoring.
- Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.

* refactor: Convert GenerationJobManager methods to async for improved performance
- Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
- Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
- Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.

* refactor: Simplify initial response handling in useChatFunctions
- Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.

* refactor: Clarify content handling logic in useStepHandler
- Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
- Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
* refactor: Improve message handling logic in useStepHandler
- Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
- Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.

* fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.

* refactor: Integrate streamId handling for improved resumable functionality for attachments
- Added streamId parameter to various functions to support resumable mode in tool loading and memory processing.
- Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
- Improved logging and attachment management to accommodate both standard and resumable modes.

* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management
- Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
- Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
- Simplified cleanup processes and improved error handling during abort operations.
- Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.

* refactor: Unify streamId and conversationId handling for improved job management
- Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
- Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
- Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.

* refactor: Enhance resumable SSE handling with improved UI state management and error recovery
- Added UI state restoration on successful SSE connection to indicate ongoing submission.
- Implemented detailed error handling for network failures, including retry logic with exponential backoff.
- Introduced abort event handling to reset UI state on intentional stream closure.
- Enhanced debugging capabilities for testing reconnection and clean close scenarios.
- Updated generation function to retry on network errors, improving resilience during submission processes.

* refactor: Consolidate content state management into IJobStore for improved job handling
- Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
- Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
- Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
- Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.

* feat: Introduce Redis-backed stream services for enhanced job management
- Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options.
- Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
- Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
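The retry-with-exponential-backoff mentioned above can be sketched generically. The attempt cap, base delay, and injectable `sleep` parameter are assumptions for illustration, not values from the codebase:

```javascript
// Illustrative retry helper: the delay doubles per failed attempt
// (base, 2*base, 4*base, ...). `sleep` is injectable so tests can run
// without real timers.
async function retryWithBackoff(fn, { attempts = 3, baseDelayMs = 250, sleep } = {}) {
  const wait = sleep ?? ((ms) => new Promise((resolve) => setTimeout(resolve, ms)));
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn(i);
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await wait(baseDelayMs * 2 ** i); // exponential: 250, 500, 1000, ...
      }
    }
  }
  throw lastError;
}
```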
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness.
- Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options.

* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport
- Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity.
- Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output.
- Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency.

* refactor: Enhance job state management and TTL configuration in RedisJobStore
- Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management.
- Refactored the handling of job expiration and cleanup processes to align with new TTL configurations.
- Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance.
- Improved comments and documentation for better understanding of the changes made.

* refactor: cleanupOnComplete option to GenerationJobManager for flexible resource management
- Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion.
- Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management.
- Improved cleanup logic in the cleanup method to handle orphaned resources effectively.
- Enhanced documentation and comments for better clarity on the new functionality.

* refactor: Update TTL configuration for completed jobs in InMemoryJobStore
- Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup.
- Enhanced cleanup logic to respect the new TTL setting, improving resource management. - Updated comments for clarity on the behavior of the TTL configuration. * refactor: Enhance RedisJobStore with local graph caching for improved performance - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance. - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed. - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects. - Improved documentation and comments for clarity on the caching mechanism and its benefits. * feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling. - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling. - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior. - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability. * fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately. - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting. 
* ci: Improve Redis client disconnection handling in integration tests - Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client. - Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown. - Improved comments for clarity on the disconnection process and error handling. * refactor: Enhance GenerationJobManager and event transports for improved resource management - Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup. - Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs. - Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams. - Improved comments for clarity on resource management and cleanup processes. * refactor: Update GenerationJobManager and ResumableAgentController for improved event handling - Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers. - Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions. - Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes. * fix: Update cache integration test command for stream to ensure proper execution - Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests. - This change enhances the reliability of the test suite by ensuring all tests complete as expected. 
* feat: Add active job management for user and show progress in conversation list - Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks. - Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs. - Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup. - Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback. * feat: Implement active job tracking by user in RedisJobStore - Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks. - Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs. - Updated job creation, update, and deletion methods to manage user-specific job sets effectively. - Enhanced integration tests to validate the new user-specific job management features. * refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore * WIP: Add backend inspect script for easier debugging in production * refactor: title generation logic - Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID. - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load. - Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion. - Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance. * feat: Enhance updateConvoInAllQueries to support moving conversations to the top * chore: temp. 
remove added multi convo * refactor: Update active jobs query integration for optimistic updates on abort - Introduced a new interface for active jobs response to standardize data handling. - Updated query keys for active jobs to ensure consistency across components. - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness. * refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away. - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type. - Refactored imports in ChatView for better organization. * refactor: streamline conversation title generation handling - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase. - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application. * feat: Add USE_REDIS_STREAMS configuration for stream job storage - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set. - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration. - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides. * fix: title generation queue management for assistants - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams. - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete. 
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow. * refactor: streamline agent controller and remove legacy resumable handling - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic. - Deprecated the legacy non-resumable path, providing a clear migration path for future use. - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode. - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance. * feat: Add USE_REDIS_STREAMS configuration to .env.example - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams. - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management. * refactor: remove unused setHeaders middleware from chat route - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process. - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks. 
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth) * fix(flow): add immediate abort handling and fix intervalId initialization - Add immediate abort handler that responds instantly to abort signal - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error - Consolidate cleanup logic into single function to avoid duplicate cleanup - Properly remove abort event listener on cleanup * fix(mcp): clean up OAuth flows on abort and simplify flow handling - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows - Update createAbortHandler to clean up both flow types on tool call abort - Pass abort signal to createFlow in returnOnOAuth path - Simplify handleOAuthRequired to always cancel existing flows and start fresh - This ensures user always gets a new OAuth URL instead of waiting for stale flows * fix(agents): handle 'new' conversationId and improve abort reliability - Treat 'new' as placeholder that needs UUID in request controller - Send JSON response immediately before tool loading for faster SSE connection - Use job's abort controller instead of prelimAbortController - Emit errors to stream if headers already sent - Skip 'new' as valid ID in abort endpoint - Add fallback to find active jobs by userId when conversationId is 'new' * fix(stream): detect early abort and prevent navigation to non-existent conversation - Abort controller on job completion to signal pending operations - Detect early abort (no content, no responseMessageId) in abortJob - Set conversation and responseMessage to null for early aborts - Add earlyAbort flag to final event for frontend detection - Remove unused text field from AbortResult interface - Frontend handles earlyAbort by staying on/navigating to new chat * test(mcp): update test to expect signal parameter in createFlow fix(agents): include 'new' conversationId in newConvo check for title generation When frontend sends 
'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity fix(agents): check abort state before completeJob for title generation completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
streamId,
});
}
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926) * ✨ feat: Implement Resumable Generation Jobs with SSE Support - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections. - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress. - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling. - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings. - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections. * WIP: resuming * WIP: resumable stream * feat: Enhance Stream Management with Abort Functionality - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId. - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration. - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations. - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation. - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly. - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately. * fix: Update query parameter handling in useChatHelpers - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations. 
* fix: improve syncing when switching conversations * fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup * fix: Improve content type mismatch handling in useStepHandler - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates. * fix: Allow dynamic content creation in useChatFunctions - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text. * fix: Refine response message handling in useStepHandler - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow. * refactor: Enhance GenerationJobManager with In-Memory Implementations - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling. - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance. - Enhanced job metadata handling to support user messages and response IDs for resumable functionality. - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage. * refactor: Enhance GenerationJobManager with improved subscriber handling - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count. - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects. - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness. 
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring. * chore: Adjust timeout for subscriber readiness in ResumableAgentController - Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process. * refactor: Update GenerationJobManager documentation and structure - Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design. - Updated comments to reflect the potential for Redis integration and the need for async refactoring. - Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code. * refactor: Convert GenerationJobManager methods to async for improved performance - Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management. - Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling. - Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management. * refactor: Simplify initial response handling in useChatFunctions - Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow. * refactor: Clarify content handling logic in useStepHandler - Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios. - Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability. 
* refactor: Improve message handling logic in useStepHandler - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized. - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow. * fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context. * refactor: Integrate streamId handling for improved resumable functionality for attachments - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing. - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience. - Improved logging and attachment management to accommodate both standard and resumable modes. * refactor: Streamline abort handling and integrate GenerationJobManager for improved job management - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager. - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency. - Simplified cleanup processes and improved error handling during abort operations. - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management. * refactor: Unify streamId and conversationId handling for improved job management - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency. - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks. 
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling. - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability. * refactor: Enhance resumable SSE handling with improved UI state management and error recovery - Added UI state restoration on successful SSE connection to indicate ongoing submission. - Implemented detailed error handling for network failures, including retry logic with exponential backoff. - Introduced abort event handling to reset UI state on intentional stream closure. - Enhanced debugging capabilities for testing reconnection and clean close scenarios. - Updated generation function to retry on network errors, improving resilience during submission processes. * refactor: Consolidate content state management into IJobStore for improved job handling - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management. - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy. - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks. - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations. * feat: Introduce Redis-backed stream services for enhanced job management - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options. - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios. - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations. 
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness. - Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options. * refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport - Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity. - Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output. - Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency. * refactor: Enhance job state management and TTL configuration in RedisJobStore - Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management. - Refactored the handling of job expiration and cleanup processes to align with new TTL configurations. - Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance. - Improved comments and documentation for better understanding of the changes made. * refactor: cleanupOnComplete option to GenerationJobManager for flexible resource management - Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion. - Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management. - Improved cleanup logic in the cleanup method to handle orphaned resources effectively. - Enhanced documentation and comments for better clarity on the new functionality. * refactor: Update TTL configuration for completed jobs in InMemoryJobStore - Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup. 
- Enhanced cleanup logic to respect the new TTL setting, improving resource management. - Updated comments for clarity on the behavior of the TTL configuration. * refactor: Enhance RedisJobStore with local graph caching for improved performance - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance. - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed. - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects. - Improved documentation and comments for clarity on the caching mechanism and its benefits. * feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling. - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling. - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior. - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability. * fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately. - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting. 
* ci: Improve Redis client disconnection handling in integration tests - Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client. - Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown. - Improved comments for clarity on the disconnection process and error handling. * refactor: Enhance GenerationJobManager and event transports for improved resource management - Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup. - Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs. - Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams. - Improved comments for clarity on resource management and cleanup processes. * refactor: Update GenerationJobManager and ResumableAgentController for improved event handling - Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers. - Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions. - Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes. * fix: Update cache integration test command for stream to ensure proper execution - Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests. - This change enhances the reliability of the test suite by ensuring all tests complete as expected. 
* feat: Add active job management for user and show progress in conversation list - Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks. - Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs. - Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup. - Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback. * feat: Implement active job tracking by user in RedisJobStore - Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks. - Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs. - Updated job creation, update, and deletion methods to manage user-specific job sets effectively. - Enhanced integration tests to validate the new user-specific job management features. * refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore * WIP: Add backend inspect script for easier debugging in production * refactor: title generation logic - Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID. - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load. - Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion. - Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance. * feat: Enhance updateConvoInAllQueries to support moving conversations to the top * chore: temp. 
remove added multi convo * refactor: Update active jobs query integration for optimistic updates on abort - Introduced a new interface for active jobs response to standardize data handling. - Updated query keys for active jobs to ensure consistency across components. - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness. * refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away. - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type. - Refactored imports in ChatView for better organization. * refactor: streamline conversation title generation handling - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase. - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application. * feat: Add USE_REDIS_STREAMS configuration for stream job storage - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set. - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration. - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides. * fix: title generation queue management for assistants - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams. - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete. 
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow. * refactor: streamline agent controller and remove legacy resumable handling - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic. - Deprecated the legacy non-resumable path, providing a clear migration path for future use. - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode. - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance. * feat: Add USE_REDIS_STREAMS configuration to .env.example - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams. - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management. * refactor: remove unused setHeaders middleware from chat route - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process. - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks. 
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)
* fix(flow): add immediate abort handling and fix intervalId initialization
- Add immediate abort handler that responds instantly to abort signal
- Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error
- Consolidate cleanup logic into single function to avoid duplicate cleanup
- Properly remove abort event listener on cleanup
* fix(mcp): clean up OAuth flows on abort and simplify flow handling
- Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows
- Update createAbortHandler to clean up both flow types on tool call abort
- Pass abort signal to createFlow in returnOnOAuth path
- Simplify handleOAuthRequired to always cancel existing flows and start fresh
- This ensures user always gets a new OAuth URL instead of waiting for stale flows
* fix(agents): handle 'new' conversationId and improve abort reliability
- Treat 'new' as placeholder that needs UUID in request controller
- Send JSON response immediately before tool loading for faster SSE connection
- Use job's abort controller instead of prelimAbortController
- Emit errors to stream if headers already sent
- Skip 'new' as valid ID in abort endpoint
- Add fallback to find active jobs by userId when conversationId is 'new'
* fix(stream): detect early abort and prevent navigation to non-existent conversation
- Abort controller on job completion to signal pending operations
- Detect early abort (no content, no responseMessageId) in abortJob
- Set conversation and responseMessage to null for early aborts
- Add earlyAbort flag to final event for frontend detection
- Remove unused text field from AbortResult interface
- Frontend handles earlyAbort by staying on/navigating to new chat
* test(mcp): update test to expect signal parameter in createFlow
* fix(agents): include 'new' conversationId in newConvo check for title generation
When frontend sends 'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity.
* fix(agents): check abort state before completeJob for title generation
completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
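The fix(flow) commit above describes three coupled details: declaring `intervalId` before the cleanup closure (avoiding "Cannot access before initialization"), a single consolidated cleanup path, and removing the abort listener when the flow settles. A hedged sketch with illustrative names — not the actual flow manager code — of that shape:

```javascript
// Sketch of a pollable flow with immediate abort handling.
// `intervalId` is declared before `cleanup` so the closure never reads
// it inside its temporal dead zone; all teardown funnels through one
// cleanup() that also removes the abort listener.
function pollUntil(check, { signal, intervalMs = 50, timeoutMs = 1000 } = {}) {
  return new Promise((resolve, reject) => {
    // Declared before cleanup() to prevent the TDZ error.
    let intervalId;
    let timeoutId;

    const cleanup = () => {
      clearInterval(intervalId);
      clearTimeout(timeoutId);
      signal?.removeEventListener('abort', onAbort);
    };
    const onAbort = () => {
      cleanup();
      reject(new Error('aborted'));
    };

    // Immediate abort handling: respond instantly if already aborted.
    if (signal?.aborted) {
      return onAbort();
    }
    signal?.addEventListener('abort', onAbort, { once: true });

    timeoutId = setTimeout(() => {
      cleanup();
      reject(new Error('timed out'));
    }, timeoutMs);
    intervalId = setInterval(() => {
      const result = check();
      if (result !== undefined) {
        cleanup();
        resolve(result);
      }
    }, intervalMs);
  });
}
```

Had `let intervalId` appeared after `cleanup` was first invoked (for example from the already-aborted branch), the `clearInterval(intervalId)` read would throw the exact error the commit message names.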
function createToolInstance({
res,
toolName,
serverName,
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)
* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring
- Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`
- Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig`
- Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector
- Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB
* feat: three-layer MCPServersRegistry with config cache and lazy init
- Add `configCacheRepo` as third repository layer between YAML cache and DB for admin-defined config-source MCP servers
- Implement `ensureConfigServers()` that identifies config-override servers from resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'`
- Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via `pendingConfigInits` map
- Extend `getAllServerConfigs()` with optional `configServers` param for three-way merge: YAML → Config → User
- Add `getServerConfig()` lookup through config cache layer
- Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations
- Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()`
* feat: wire tenant context into MCP controllers, services, and cache invalidation
- Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers
- Pass `ensureConfigServers()` results through `getAllServerConfigs()` for three-way merge of YAML + Config + User servers
- Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` from ALS
- Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers
* feat: enforce tenantMcpPolicy on admin config mcpServers mutations
- Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains)
- Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated
- Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http)
- Validate server domains against policy allowlist when configured
* revert: remove tenantMcpPolicy schema and enforcement
The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future.
- Remove tenantMcpPolicy from mcpSettings Zod schema
- Remove validateMcpServerPolicy helper and TenantMcpPolicy interface
- Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers
* test: update test assertions for source field and config-server wiring
- Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs
- Add getTenantId and ensureConfigServers mocks to MCP route tests
- Add getAppConfig mock to route test Config service mock
- Update getMCPSetupData assertion to expect second options argument
- Update getAllServerConfigs assertions for new configServers parameter
* fix: disconnect active connections when config-source servers are evicted
When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout.
- Return evicted server names from invalidateConfigCache()
- Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect()
* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations
- Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode
MAJOR fixes:
- Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected
- Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS
- Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context
MINOR fixes:
- Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate
- Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing
- Remove narrative inline comments per style guide
- Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws)
- Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request
Test updates:
- Add ensureConfigServers mock to registry test fixtures
- Update getMCPSetupData assertions for inline OAuth derivation
* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory
- Fix dbSourced fail-closed: use source field when present, fall back to legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field)
MAJOR fixes:
- Add CONFIG_CACHE_NAMESPACE to aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation
MINOR fixes:
- Update MCPServerInspector test assertion for dbSourced change
* fix: restore getServerConfig lookup for config-source servers (NEW-1)
Add configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.
- Populate configNameToKey in ensureSingleConfigServer
- Clear configNameToKey in invalidateConfigCache and reset
- Clear stale read-through cache entries after lazy init
- Remove dead code in invalidateConfigCache (config.title, key parsing)
- Add getServerConfig tests for config-source server lookup
* fix: eliminate configNameToKey race via caller-provided configServers param
Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race.
- Add optional configServers param to getServerConfig; when provided, returns matching config directly without any global lookup
- Remove configNameToKey map entirely (was the source of the race)
- Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons)
- Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call
- Add cross-tenant isolation test for getServerConfig
* fix: populate read-through cache after config server lazy init
After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.
* fix: user-scoped getServerConfig fallback to server-only cache key
When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.
* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race
CRITICAL: Add findInConfigCache fallback in getServerConfig so config-source servers remain reachable after readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.
MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).
MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.
MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove redundant .catch(() => {}) inside Promise.allSettled. Tighten dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).
* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext
Add optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.
* fix: stale failure stubs retry after 5 min, upsert for cross-process races
- Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race)
- Extract upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded
- Add test for stale-stub retry after CONFIG_STUB_RETRY_MS
* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup
- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired)
- Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined
- Log double-failure in upsertConfigCache instead of silently swallowing
- Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests
* fix: server-only readThrough fallback only returns truthy values
Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.
* fix: remove findInConfigCache to eliminate cross-tenant config leakage
The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through:
1. The configServers param (callers with tenant context from ALS)
2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)
Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return.
- Remove findInConfigCache method and its call in getServerConfig
- Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup)
- Update tests to document tenant isolation behavior after cache expiry
* style: fix import order per AGENTS.md conventions
Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.
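The isUserSourced() extraction and the "fail closed with a legacy fallback" semantics described above reduce to a small pure function. A hedged sketch (the real helper lives in mcp/utils.ts; this mirrors the described behavior, not its exact code): configs explicitly tagged 'yaml' or 'config' are operator-defined, anything else is user-sourced, and pre-upgrade cached configs that lack a `source` field fall back to the legacy dbId check.

```javascript
// Sketch of the isUserSourced() helper described in the commits above.
function isUserSourced(config) {
  if (config.source !== undefined) {
    // Any source other than the two operator-defined tiers is user-sourced.
    return config.source !== 'yaml' && config.source !== 'config';
  }
  // Pre-upgrade cached configs lack `source`; use the legacy dbId check.
  return Boolean(config.dbId);
}
```

Centralizing this in one helper is what lets the five former inline ternaries stay in sync when the classification rule changes.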
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures
Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues:
- Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries
- TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired
Changes:
- Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers
- Thread serverConfig from createMCPTool through createToolInstance closure to callTool
- Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped)
- Remove server-only readThrough fallback from getServerConfig
- Increase config cache hash from 8 to 16 hex chars (64-bit)
- Add isUserSourced boundary tests for all source/dbId combinations
- Fix double Object.keys call in getMCPTools controller
- Update test assertions for new getServerConfig behavior
* fix: cache base configs for config-server users; narrow upsertConfigCache error handling
- Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users
- Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages
* fix: restore correct merge order and document upsert error matching
- Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract)
- Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message
* feat: complete config-source server support across all execution paths
Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions.
- Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool
- Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application
- Add configServers param to createMCPTool and createMCPTools for reconnect path fallback
- Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers
- Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries
- Update context.spec.ts assertions for new configServers parameter
* fix: add missing mocks for config-source server dependencies in client.test.js
Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations.
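The YAML → Config → User DB precedence described above maps directly onto JavaScript object-spread semantics, where later spreads win. A minimal sketch with illustrative names (the real merge in getAllServerConfigs involves cached repositories, not plain objects):

```javascript
// Sketch of the three-way merge: later spreads override earlier ones,
// so user-DB servers (highest precedence) beat config-source servers,
// which beat YAML servers.
function mergeServerConfigs(yamlConfigs, configServers, userDbConfigs) {
  return { ...yamlConfigs, ...configServers, ...userDbConfigs };
}
```

The commits above track exactly this property: an early revision accidentally spread the layers in the wrong order, letting config-source entries shadow user-DB entries until the "restore correct merge priority" fix.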
* fix: update formatInstructionsForContext assertions for configServers param
The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring.
* fix: move configServers resolution before MCP tool loop to avoid TDZ
configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a temporal-dead-zone ReferenceError. Move the declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution.
* fix: address review findings — cache bypass, discoverServerTools gap, DRY
- #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls.
- #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows.
- #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions.
- #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern.
- #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint.
- #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs.
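The temporal-dead-zone bug described above can be reproduced in a few lines. This is an isolated illustration (the variable names echo the commit message but this is not the actual MCP.js code): a `let` binding referenced before its declaration line executes throws at runtime, and hoisting the declaration above the loop is the fix.

```javascript
// Reproduction of the TDZ ReferenceError: the loop reads `configServers`
// before its `let` declaration has executed.
function brokenOrder() {
  const seen = [];
  for (const name of ['serverA']) {
    seen.push(configServers[name]); // ReferenceError: still in the TDZ
  }
  let configServers = { serverA: 1 };
  return seen;
}

// The fix: declare and resolve before the loop.
function fixedOrder() {
  const configServers = { serverA: 1 };
  const seen = [];
  for (const name of ['serverA']) {
    seen.push(configServers[name]);
  }
  return seen;
}
```

Unlike `var`, a `let`/`const` binding exists for the whole enclosing scope but cannot be read until its declaration runs, which is why this surfaces as a runtime ReferenceError rather than `undefined`.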
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case
- Merge duplicate @librechat/data-schemas requires in MCP.js into one
- Move resolveConfigServers after module-level constants
- Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys
* fix: replace fragile string-match error detection with proper upsert method
Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically:
- InMemory: direct Map.set()
- Redis: direct cache.set()
- RedisAggregateKey: read-modify-write under write lock
- DB: delegates to update() (DB servers use explicit add() with ACL setup)
* fix: wire configServers through remaining HTTP endpoints
- getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
- reinitialize route: resolve configServers before getServerConfig
- auth-values route: resolve configServers before getServerConfig
- getOAuthHeaders: accept configServers param, thread from callers
- Update mcp.spec.js tests to mock getAllServerConfigs for GET by name
* fix: thread serverConfig through getConnection for config-source servers
Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup.
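The upsert change above replaces "call add(), string-match the 'already exists' error, then retry with update()" with a first-class add-or-update operation. A hedged in-memory sketch (the interface names echo the commit; the real implementations also cover Redis and the DB):

```javascript
// Sketch of a server-configs repository with a first-class upsert().
class InMemoryServerConfigsRepo {
  constructor() {
    this.store = new Map();
  }
  /** add() is strict: it throws if the key already exists. */
  add(name, config) {
    if (this.store.has(name)) {
      throw new Error(`${name} already exists in cache`);
    }
    this.store.set(name, config);
  }
  /** update() is strict the other way: the key must already exist. */
  update(name, config) {
    if (!this.store.has(name)) {
      throw new Error(`${name} not found`);
    }
    this.store.set(name, config);
  }
  /** Add-or-update in one step; in-memory this is a single atomic set. */
  upsert(name, config) {
    this.store.set(name, config);
  }
  get(name) {
    return this.store.get(name);
  }
}
```

The point of the commit is that matching on the error *message* was the only thing distinguishing "duplicate key, safe to update" from a real failure; a dedicated upsert makes that contract explicit per backend (Map.set, cache.set, read-modify-write under lock, or delegate to update).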
* fix: thread configServers through reinit, reconnect, and tool definition paths
Wire configServers through every remaining call chain that creates or reconnects MCP server connections:
- reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools
- reconnectServer: accepts and passes configServers to reinitMCPServer
- createMCPTools/createMCPTool: pass configServers to reconnectServer
- ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites
- reinitialize route: passes serverConfig and configServers to reinitMCPServer
* fix: address review findings — simplify merge, harden error paths, fix log labels
- Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base }
- Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error
- Deduplicate getYamlServerNames cold-start with promise dedup pattern
- Remove dead `if (!mcpConfig)` guard in getMCPSetupData
- Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling
- Remove misleading OAuth callback comment about readThrough cache
- Move resolveConfigServers after module-level constants in MCP.js
* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label
- Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working
- Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
- Use source field instead of dbId for storageLocation derivation
- Fix remaining hardcoded "App" in reset() leaderCheck message
* fix: persist oauthHeaders in flow state for config-source OAuth servers
The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers.
* fix: update tests for getMCPSetupData null guard removal and ToolService mock
- MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object)
- MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks
- ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers)
* fix: address review findings — DRY, naming, logging, dead code, defensive guards
- #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll()
- #2: Add warning log when oauthHeaders absent from OAuth callback flow state
- #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing
- #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused
- #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped
- #6: Remove dead 'MCP config not found' error handlers from routes
- #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
- #8: Remove logger.error from withTimeout to prevent double-logging timeouts
- #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
- #12: Use spread instead of mutation in addServer for immutability consistency
- Add upsert mock to ensureConfigServers.test.ts DB mock
- Update route tests for resolveAllMcpConfigs import change
* fix: restore correct merge priority, use immutable spread, fix test mock
- getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority
- lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix
- Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData
* fix: error handling, stable hashing, flatten nesting, remove dead param
- Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline
- Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order
- Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff)
- Remove dead configServers param from getAppToolFunctions (never passed)
- Add security rationale comment for source field in redactServerSecrets
* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision
The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties. Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context.
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency
The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config').
* chore: imports/fields ordering
* fix: address review findings — error handling, targeted lookup, test gaps
- Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists.
- Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup.
- Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers.
- Update getAllServerConfigs JSDoc to document disjoint-key design.
- Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback).
- Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation.
* fix: add getOAuthReconnectionManager mock to OAuth callback tests
* chore: imports ordering
2026-03-28 10:36:43 -04:00
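The stub-on-failure and CONFIG_STUB_RETRY_MS logic threaded through the commits above reduces to a small staleness check. A hedged sketch (illustrative shapes, not the registry's actual code) that also shows why the "stamp updatedAt on failure stubs" fix mattered: without the timestamp, `updatedAt ?? 0` evaluates to the epoch, so every stub looks ancient and gets retried immediately instead of after the five-minute window.

```javascript
const CONFIG_STUB_RETRY_MS = 5 * 60 * 1000; // 5 minutes, per the commit above

// A failure stub records that inspection failed, stamped with a timestamp
// so the retry window has a reference point.
function makeFailureStub(serverName, error) {
  return {
    serverName,
    failed: true,
    error: String(error),
    updatedAt: Date.now(),
  };
}

// Retry only once the stub is older than the retry window. A stub with
// no updatedAt falls back to 0 (epoch) and is always considered stale —
// exactly the pre-fix behavior the commit describes.
function shouldRetryStub(stub, now = Date.now()) {
  return now - (stub.updatedAt ?? 0) > CONFIG_STUB_RETRY_MS;
}
```

This keeps transient failures (DNS outage, cold-start race) from permanently disabling a config-source server while still avoiding an inspection attempt on every request.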
serverConfig: capturedServerConfig,
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926)
* ✨ feat: Implement Resumable Generation Jobs with SSE Support
- Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
- Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
- Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
- Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
- Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.
* WIP: resuming
* WIP: resumable stream
* feat: Enhance Stream Management with Abort Functionality
- Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
- Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
- Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
- Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
- Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
- Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.
* fix: Update query parameter handling in useChatHelpers
- Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.
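The core resumable-stream idea described above — a job that outlives any single HTTP connection and lets a reconnecting client catch up — can be sketched in miniature. This is a heavily simplified, hypothetical illustration (the real GenerationJobManager adds stores, transports, metadata, and abort handling): the job buffers every emitted chunk, and a subscriber first receives the buffered history, then live events.

```javascript
// Minimal sketch of a resumable generation job: buffered replay + live fan-out.
class GenerationJob {
  constructor(streamId) {
    this.streamId = streamId;
    this.chunks = [];
    this.subscribers = new Set();
    this.done = false;
  }

  /** Record a chunk and fan it out to current subscribers. */
  emit(chunk) {
    this.chunks.push(chunk);
    for (const fn of this.subscribers) {
      fn(chunk);
    }
  }

  /**
   * Replay buffered chunks, then stream live ones — the resume path.
   * A client that reconnects mid-generation misses nothing.
   */
  subscribe(fn) {
    for (const chunk of this.chunks) {
      fn(chunk);
    }
    if (!this.done) {
      this.subscribers.add(fn);
    }
    return () => this.subscribers.delete(fn);
  }

  complete() {
    this.done = true;
    this.subscribers.clear();
  }
}
```

Decoupling generation from the HTTP connection this way is what makes both the SSE reconnect path and the separate abort endpoint possible.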
* fix: improve syncing when switching conversations
* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup
* fix: Improve content type mismatch handling in useStepHandler
- Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.
* fix: Allow dynamic content creation in useChatFunctions
- Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.
* fix: Refine response message handling in useStepHandler
- Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.
* refactor: Enhance GenerationJobManager with In-Memory Implementations
- Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
- Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
- Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
- Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.
* refactor: Enhance GenerationJobManager with improved subscriber handling
- Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
- Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
- Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.
* chore: Adjust timeout for subscriber readiness in ResumableAgentController
- Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.
* refactor: Update GenerationJobManager documentation and structure
- Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
- Updated comments to reflect the potential for Redis integration and the need for async refactoring.
- Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.
* refactor: Convert GenerationJobManager methods to async for improved performance
- Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
- Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
- Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.
* refactor: Simplify initial response handling in useChatFunctions
- Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.
* refactor: Clarify content handling logic in useStepHandler
- Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
- Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
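The "generation starts only when the first real client connects" refinement above is a deferred-start pattern: createJob records the start callback but does not invoke it until a subscriber attaches, so no tokens are produced into the void. A hedged sketch with illustrative names (the real createJob/subscribe also track readiness timeouts and disconnect handlers):

```javascript
// Sketch of deferred generation start: the start callback registered at
// job creation fires exactly once, when the first subscriber attaches.
class DeferredStartJob {
  constructor(start) {
    this.start = start;
    this.started = false;
    this.subscribers = new Set();
  }

  subscribe(fn) {
    this.subscribers.add(fn);
    if (!this.started) {
      this.started = true;
      this.start(); // first client is ready; begin generating
    }
    return () => this.subscribers.delete(fn);
  }
}
```

This is also why the controller needs a subscriber-readiness timeout (the 2500ms/3500ms values tuned in the commits above): if no client ever connects, generation must eventually start or the job must be abandoned.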
* refactor: Improve message handling logic in useStepHandler - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized. - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow. * fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context. * refactor: Integrate streamId handling for improved resumable functionality for attachments - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing. - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience. - Improved logging and attachment management to accommodate both standard and resumable modes. * refactor: Streamline abort handling and integrate GenerationJobManager for improved job management - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager. - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency. - Simplified cleanup processes and improved error handling during abort operations. - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management. * refactor: Unify streamId and conversationId handling for improved job management - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency. - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks. 
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling. - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability. * refactor: Enhance resumable SSE handling with improved UI state management and error recovery - Added UI state restoration on successful SSE connection to indicate ongoing submission. - Implemented detailed error handling for network failures, including retry logic with exponential backoff. - Introduced abort event handling to reset UI state on intentional stream closure. - Enhanced debugging capabilities for testing reconnection and clean close scenarios. - Updated generation function to retry on network errors, improving resilience during submission processes. * refactor: Consolidate content state management into IJobStore for improved job handling - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management. - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy. - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks. - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations. * feat: Introduce Redis-backed stream services for enhanced job management - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options. - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios. - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations. 
  - Implemented RedisEventTransport for real-time event delivery across instances.
  - Updated InMemoryJobStore to align with the new async patterns for content and run-step retrieval, ensuring consistent behavior across storage options.
* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport
  - Eliminated unnecessary debug statements in GenerationJobManager around subscriber actions and job updates.
  - Removed debug logging in RedisEventTransport for subscription and subscriber-disconnection events.
  - Trimmed debug messages in RedisJobStore to essential information.
* refactor: Enhance job state management and TTL configuration in RedisJobStore
  - Allowed customizable TTL values for job states.
  - Refactored job expiration and cleanup to align with the new TTL configurations.
  - Simplified the response structure in the chat status endpoint by consolidating state retrieval.
  - Improved comments and documentation.
* refactor: Add a cleanupOnComplete option to GenerationJobManager
  - Introduced a cleanupOnComplete configuration option allowing immediate cleanup of event transport and job resources upon job completion.
  - Updated completeJob and abortJob to respect the cleanupOnComplete setting, improving memory management.
  - Improved the cleanup method to handle orphaned resources.
* refactor: Update TTL configuration for completed jobs in InMemoryJobStore
  - Changed the TTL for completed jobs from 5 minutes to 0, allowing immediate cleanup.
  - Enhanced cleanup logic to respect the new TTL setting.
  - Updated comments to clarify the TTL behavior.
* refactor: Enhance RedisJobStore with local graph caching
  - Introduced a local cache for graph references using WeakRef to optimize reconnects on the same instance.
  - Updated job deletion and cleanup to manage the local cache, removing stale entries.
  - Prioritized local cache access in content retrieval, reducing Redis round-trips for same-instance reconnects.
* feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore; add Redis Cluster support
  - Introduced integration tests for GenerationJobManager covering both in-memory and Redis modes.
  - Added RedisEventTransport tests validating pub/sub functionality, including cross-instance event delivery and error handling.
  - Implemented RedisJobStore tests focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior.
  - Enhanced test setup and teardown to ensure a clean environment for each run.
* fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers
  - Enhanced error handling when retrieving content parts for allSubscribersLeft handlers, ensuring failures are logged.
  - Updated the promise chain to catch errors from getContentParts.
* ci: Improve Redis client disconnection handling in integration tests
  - Updated the afterAll cleanup in the GenerationJobManager, RedisEventTransport, and RedisJobStore integration tests to use `quit()` for graceful disconnection of the Redis client.
  - Added a fallback to `disconnect()` if `quit()` fails, improving robustness during test teardown.
* refactor: Enhance GenerationJobManager and event transports for resource management
  - Prevented immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
  - Added orphaned-stream cleanup in GenerationJobManager for streams without corresponding jobs.
  - Introduced getTrackedStreamIds in both InMemoryEventTransport and RedisEventTransport for managing orphaned streams.
* refactor: Update GenerationJobManager and ResumableAgentController event handling
  - Resolved readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
  - Replayed buffered events when the first subscriber connects, so no events are lost to race conditions.
  - Updated comments explaining the new event synchronization in both Redis and in-memory modes.
* fix: Update the cache integration test command for streams
  - Added the --forceExit flag to the stream cache integration test command to prevent hanging tests.
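The WeakRef-based local graph cache mentioned in the RedisJobStore commit above can be sketched roughly as follows. This is a minimal illustration of the pattern, not the actual RedisJobStore code; the class and method names are hypothetical.

```javascript
// Minimal sketch of a WeakRef local cache for graph references, as described
// above for RedisJobStore. Names here are illustrative, not the real API.
class LocalGraphCache {
  constructor() {
    /** @type {Map<string, WeakRef<object>>} */
    this.refs = new Map();
  }

  set(jobId, graph) {
    // Hold the graph weakly so the cache never prevents garbage collection.
    this.refs.set(jobId, new WeakRef(graph));
  }

  get(jobId) {
    const ref = this.refs.get(jobId);
    if (ref === undefined) {
      return undefined;
    }
    const graph = ref.deref();
    if (graph === undefined) {
      // The referent was collected; drop the stale entry.
      this.refs.delete(jobId);
    }
    return graph;
  }

  delete(jobId) {
    this.refs.delete(jobId);
  }
}

// Same-instance reconnects hit this cache and skip the Redis round-trip;
// a miss would fall through to Redis (not shown here).
const cache = new LocalGraphCache();
const graph = { runSteps: [], chunks: [] };
cache.set('job-1', graph);
console.log(cache.get('job-1') === graph); // true while `graph` is strongly referenced
```

The WeakRef indirection is what makes this safe as a cache: once nothing else references a job's graph, the garbage collector can reclaim it, and the next `get` self-heals by deleting the dead entry.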
* feat: Add active job management per user and show progress in the conversation list
  - Implemented a new endpoint to retrieve active generation job IDs for the current user, giving visibility into ongoing tasks.
  - Integrated active job tracking into the Conversations component, displaying generation indicators based on active jobs.
  - Optimized GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
  - Updated relevant components and hooks to use the new active jobs feature.
* feat: Implement active job tracking by user in RedisJobStore
  - Added retrieval of active job IDs for a specific user.
  - Implemented self-healing cleanup of stale job entries for accurate tracking.
  - Updated job creation, update, and deletion to manage user-specific job sets.
  - Enhanced integration tests to validate the user-specific job management.
* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore
* WIP: Add backend inspect script for easier debugging in production
* refactor: title generation logic
  - Changed the title generation endpoint from POST to GET, allowing more efficient retrieval of titles by conversation ID.
  - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
  - Introduced a queuing mechanism so titles are generated only after job completion.
  - Updated relevant components and hooks to use the new title generation logic.
* feat: Enhance updateConvoInAllQueries to support moving conversations to the top
* chore: temporarily remove added multi-convo
* refactor: Update active jobs query integration for optimistic updates on abort
  - Introduced an interface for the active jobs response to standardize data handling.
  - Updated query keys for active jobs for consistency across components.
  - Enhanced hook logic to properly reflect active job states.
* refactor: Add a useResumableStreamToggle hook for legacy/assistants endpoints
  - Introduced useResumableStreamToggle to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
  - Updated ChatView to use the new hook for endpoint-dependent streaming behavior.
  - Reorganized imports in ChatView.
* refactor: streamline conversation title generation handling
  - Removed the unused TGenTitleMutation type definition from mutations.ts.
  - Integrated a queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations.
* feat: Add USE_REDIS_STREAMS configuration for stream job storage
  - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true when USE_REDIS is enabled but USE_REDIS_STREAMS is not explicitly set.
  - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to use the new setting.
  - Added unit tests validating USE_REDIS_STREAMS under various environment settings, covering defaults and overrides.
* fix: title generation queue management for assistants
  - Introduced a queueListeners mechanism to notify on title generation queue changes, improving responsiveness for non-resumable streams.
  - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
  - Refactored queueTitleGeneration to trigger listeners when new conversation IDs are added.
* refactor: streamline agent controller and remove legacy resumable handling
  - Routed all AgentController requests to ResumableAgentController, simplifying the logic.
  - Deprecated the legacy non-resumable path, providing a clear migration path.
  - Removed unnecessary resumable-mode checks from the setHeaders middleware.
  - Cleaned up redundant query parameters in the useResumableSSE hook.
* feat: Add USE_REDIS_STREAMS configuration to .env.example
  - Added the USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
  - Documented the behavior of USE_REDIS_STREAMS when not explicitly set.
* refactor: remove unused setHeaders middleware from the chat route
  - Eliminated the setHeaders middleware from the chat route, streamlining request handling and reducing unnecessary middleware checks.
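The queueListeners mechanism described in the title-generation fix above amounts to a tiny observer pattern over a queue. A rough sketch, with hypothetical names (this is not the actual useTitleGeneration implementation):

```javascript
// Rough sketch of a listener-notified title generation queue, as described
// above. Function and variable names are hypothetical.
const titleQueue = new Set();
const queueListeners = new Set();

/** Subscribe to queue changes; returns an unsubscribe function. */
function onQueueChange(listener) {
  queueListeners.add(listener);
  return () => queueListeners.delete(listener);
}

/** Add a conversation to the queue and notify all listeners. */
function queueTitleGeneration(conversationId) {
  titleQueue.add(conversationId);
  for (const listener of queueListeners) {
    listener(titleQueue.size);
  }
}

// A hook can bump a `queueVersion` counter in its listener to re-render
// when the queue changes, which is the pattern the fix above describes.
let queueVersion = 0;
const unsubscribe = onQueueChange(() => {
  queueVersion += 1;
});
queueTitleGeneration('convo-1');
console.log(queueVersion); // 1
unsubscribe();
```

The listener set decouples producers (event handlers queuing conversation IDs) from consumers (hooks tracking a version counter), so queue changes propagate without polling.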
* fix: Add a streamId parameter for resumable stream handling across services (actions, MCP OAuth)
* fix(flow): add immediate abort handling and fix intervalId initialization
  - Add an immediate abort handler that responds instantly to the abort signal.
  - Declare intervalId before the cleanup function to prevent a "Cannot access before initialization" error.
  - Consolidate cleanup logic into a single function to avoid duplicate cleanup.
  - Properly remove the abort event listener on cleanup.
* fix(mcp): clean up OAuth flows on abort and simplify flow handling
  - Add an abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows.
  - Update createAbortHandler to clean up both flow types on tool call abort.
  - Pass the abort signal to createFlow in the returnOnOAuth path.
  - Simplify handleOAuthRequired to always cancel existing flows and start fresh, so the user always gets a new OAuth URL instead of waiting on stale flows.
* fix(agents): handle 'new' conversationId and improve abort reliability
  - Treat 'new' as a placeholder that needs a UUID in the request controller.
  - Send the JSON response immediately, before tool loading, for a faster SSE connection.
  - Use the job's abort controller instead of prelimAbortController.
  - Emit errors to the stream if headers were already sent.
  - Skip 'new' as a valid ID in the abort endpoint.
  - Add a fallback to find active jobs by userId when conversationId is 'new'.
* fix(stream): detect early abort and prevent navigation to non-existent conversations
  - Abort the controller on job completion to signal pending operations.
  - Detect early abort (no content, no responseMessageId) in abortJob.
  - Set conversation and responseMessage to null for early aborts.
  - Add an earlyAbort flag to the final event for frontend detection.
  - Remove the unused text field from the AbortResult interface.
  - The frontend handles earlyAbort by staying on, or navigating to, a new chat.
* test(mcp): update test to expect the signal parameter in createFlow
* fix(agents): include 'new' conversationId in the newConvo check for title generation
  - When the frontend sends 'new' as the conversationId, it should still trigger title generation, since it is a new conversation. Also renames a boolean variable for clarity.
* fix(agents): check abort state before completeJob for title generation
  - completeJob now triggers the abort signal for cleanup, so the abort state must be captured beforehand to correctly determine whether title generation should run.
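The last fix above (capturing the abort state before completeJob) reduces to an ordering rule that is easy to get wrong. A sketch under an assumed job shape; the function and field names below are hypothetical, not the actual GenerationJobManager API:

```javascript
// Illustration of capturing abort state BEFORE completing the job, since
// completion itself fires the abort signal for cleanup. The job shape and
// function names below are hypothetical.
async function finishJob(job, generateTitle) {
  // Read the abort state first; after job.complete() the signal is always
  // aborted, so a later read could not distinguish user aborts from cleanup.
  const wasAborted = job.abortController.signal.aborted;
  await job.complete();
  if (!wasAborted) {
    await generateTitle(job.conversationId);
  }
}

const titled = [];
const job = {
  conversationId: 'c1',
  abortController: new AbortController(),
  async complete() {
    // Cleanup triggers the abort signal, as in the fix described above.
    this.abortController.abort();
  },
};

finishJob(job, async (id) => titled.push(id)).then(() => {
  console.log(titled); // ['c1'] -- title generation still ran
});
```

Swapping the first two statements inside `finishJob` would make `wasAborted` always true after completion, silently disabling title generation for every job, which is the bug class the commit guards against.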
2025-12-19 10:12:39 -05:00
toolDefinition,
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)
* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring
  - Add `tenantMcpPolicy` to `mcpSettings` in the YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`.
  - Add an `MCPServerSource` type ('yaml' | 'config' | 'user') and a `source` field to `ParsedServerConfig`.
  - Change the `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector.
  - Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB.
* feat: three-layer MCPServersRegistry with config cache and lazy init
  - Add `configCacheRepo` as a third repository layer between the YAML cache and the DB for admin-defined config-source MCP servers.
  - Implement `ensureConfigServers()`, which identifies config-override servers from the resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'`.
  - Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via the `pendingConfigInits` map.
  - Extend `getAllServerConfigs()` with an optional `configServers` param for a three-way merge: YAML → Config → User.
  - Add a `getServerConfig()` lookup through the config cache layer.
  - Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations.
  - Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()`.
* feat: wire tenant context into MCP controllers, services, and cache invalidation
  - Resolve config-source servers via `getAppConfig({ role, tenantId })` in the `getMCPTools()` and `getMCPServersList()` controllers.
  - Pass `ensureConfigServers()` results through `getAllServerConfigs()` for a three-way merge of YAML + Config + User servers.
  - Add tenant/role context to `getMCPSetupData()` and the connection status routes via `getTenantId()` from ALS.
  - Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers.
* feat: enforce tenantMcpPolicy on admin config mcpServers mutations
  - Add a `validateMcpServerPolicy()` helper that checks mcpServers against the operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains).
  - Wire validation into the `upsertConfigOverrides` and `patchConfigField` handlers; reject with 403 when the policy is violated.
  - Infer the transport type from the config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http).
  - Validate server domains against the policy allowlist when configured.
* revert: remove tenantMcpPolicy schema and enforcement
  The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed.
  - Remove tenantMcpPolicy from the mcpSettings Zod schema.
  - Remove the validateMcpServerPolicy helper and TenantMcpPolicy interface.
  - Remove policy enforcement from the upsertConfigOverrides and patchConfigField handlers.
* test: update test assertions for the source field and config-server wiring
  - Use objectContaining in the MCPServersRegistry reset test to account for the new source: 'yaml' field on CACHE-stored configs.
  - Add getTenantId and ensureConfigServers mocks to the MCP route tests.
  - Add a getAppConfig mock to the route tests' Config service mock.
  - Update the getMCPSetupData assertion to expect a second options argument.
  - Update getAllServerConfigs assertions for the new configServers parameter.
* fix: disconnect active connections when config-source servers are evicted
  When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout.
  - Return evicted server names from invalidateConfigCache().
  - Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect().
* fix: address code review findings (CRITICAL, MAJOR, MINOR)
  CRITICAL:
  - Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations.
  - Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so an undefined source (pre-upgrade cached configs) fails closed to restricted mode.
  MAJOR:
  - Derive OAuth servers from the already-computed mcpConfig instead of calling getOAuthServers() separately; config-source OAuth servers are now properly detected.
  - Add a parseInt radix (10) and a NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS.
  - Add CONFIG_CACHE_NAMESPACE to the aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls.
  - Remove the `if (role || tenantId)` guard in getMCPSetupData; config servers now always resolve regardless of tenant context.
  MINOR:
  - Extract a resolveAllMcpConfigs() helper in the mcp controller to eliminate 3x copy-pasted config resolution boilerplate.
  - Distinguish "not initialized" from real errors in clearMcpConfigCache; log actual failures instead of swallowing them.
  - Remove narrative inline comments per the style guide.
  - Remove a dead try/catch inside Promise.allSettled in ensureConfigServers (the inner method never throws).
  - Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request.
  Test updates:
  - Add an ensureConfigServers mock to the registry test fixtures.
  - Update getMCPSetupData assertions for inline OAuth derivation.
* fix: address code review findings (CRITICAL, MAJOR, MINOR)
  CRITICAL:
  - Break a circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory.
  - Fix dbSourced fail-closed behavior: use the source field when present, falling back to the legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack a source field).
  MAJOR:
  - Add CONFIG_CACHE_NAMESPACE to the aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls.
  - Add a comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation.
  MINOR:
  - Update the MCPServerInspector test assertion for the dbSourced change.
* fix: restore getServerConfig lookup for config-source servers (NEW-1)
  Add a configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.
  - Populate configNameToKey in ensureSingleConfigServer.
  - Clear configNameToKey in invalidateConfigCache and reset.
  - Clear stale read-through cache entries after lazy init.
  - Remove dead code in invalidateConfigCache (config.title, key parsing).
  - Add getServerConfig tests for config-source server lookup.
* fix: eliminate the configNameToKey race via a caller-provided configServers param
  Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly: no shared mutable state, no cross-tenant race.
  - Add an optional configServers param to getServerConfig; when provided, the matching config is returned directly without any global lookup.
  - Remove the configNameToKey map entirely (it was the source of the race).
  - Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons).
  - Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call.
  - Add a cross-tenant isolation test for getServerConfig.
* fix: populate the read-through cache after config server lazy init
  After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.
* fix: user-scoped getServerConfig fallback to the server-only cache key
  When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.
* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race
  CRITICAL: Add a findInConfigCache fallback in getServerConfig so config-source servers remain reachable after the readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.
  MAJOR: Extract an isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).
  MAJOR: Fix a cross-process Redis race in lazyInitConfigServer: when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.
  MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove a redundant .catch(() => {}) inside Promise.allSettled. Tighten the dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).
* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext
  Add an optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.
* fix: stale failure stubs retry after 5 min; upsert for cross-process races
  - Add CONFIG_STUB_RETRY_MS (5 min): stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race).
  - Extract an upsertConfigCache() helper that tries add and then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded.
  - Add a test for stale-stub retry after CONFIG_STUB_RETRY_MS.
* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup
  - Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so the CONFIG_STUB_RETRY_MS (5 min) window works correctly; without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired).
  - Add a null guard for rawConfig in MCPManager.callTool before passing it to preProcessGraphTokens, preventing an unsafe `as` cast on undefined.
  - Log double failures in upsertConfigCache instead of silently swallowing them.
  - Replace the module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in the ensureConfigServers tests.
* fix: server-only readThrough fallback only returns truthy values
  Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.
* fix: remove findInConfigCache to eliminate cross-tenant config leakage
  The findInConfigCache prefix scan (serverName:*) could return any tenant's config after the readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through:
  1. The configServers param (callers with tenant context from ALS)
  2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)
  Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found, which is correct because there is no tenant context to determine which config to return.
  - Remove the findInConfigCache method and its call in getServerConfig.
  - Update the server-only readThrough fallback to only return truthy values (prevents a cached undefined from short-circuiting the user-scoped DB lookup).
  - Update tests to document the tenant isolation behavior after cache expiry.
* style: fix import order per AGENTS.md conventions
  Sort package imports shortest-to-longest and local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.
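The failure-stub retry window from the fixes above (CONFIG_STUB_RETRY_MS plus the updatedAt stamp) reduces to a small staleness check. A sketch with illustrative field names (the real stub shape in MCPServersRegistry may differ):

```javascript
// Sketch of the failure-stub retry check described above. Without a stamped
// updatedAt, `entry.updatedAt ?? 0` reads as the epoch and every stub looks
// stale immediately -- the exact bug the stamping fix above addresses.
const CONFIG_STUB_RETRY_MS = 5 * 60 * 1000;

function shouldRetryStub(entry, now = Date.now()) {
  if (!entry || !entry.isFailureStub) {
    return false; // real configs are never retried through this path
  }
  return now - (entry.updatedAt ?? 0) >= CONFIG_STUB_RETRY_MS;
}

const now = Date.now();
const freshStub = { isFailureStub: true, updatedAt: now };
const staleStub = { isFailureStub: true, updatedAt: now - CONFIG_STUB_RETRY_MS - 1 };
const unstamped = { isFailureStub: true }; // pre-fix stub: always looks stale

console.log(shouldRetryStub(freshStub, now)); // false
console.log(shouldRetryStub(staleStub, now)); // true
console.log(shouldRetryStub(unstamped, now)); // true
```

The retry window is what turns stub-on-failure from a permanent kill switch into a backoff: a transient DNS outage disables a server for at most five minutes instead of until the next deploy.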
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues: - Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries - TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired Changes: - Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers - Thread serverConfig from createMCPTool through createToolInstance closure to callTool - Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped) - Remove server-only readThrough fallback from getServerConfig - Increase config cache hash from 8 to 16 hex chars (64-bit) - Add isUserSourced boundary tests for all source/dbId combinations - Fix double Object.keys call in getMCPTools controller - Update test assertions for new getServerConfig behavior * fix: cache base configs for config-server users; narrow upsertConfigCache error handling - Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. 
Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users - Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages * fix: restore correct merge order and document upsert error matching - Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract) - Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message * feat: complete config-source server support across all execution paths Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions. - Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool - Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application - Add configServers param to createMCPTool and createMCPTools for reconnect path fallback - Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers - Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries - Update context.spec.ts assertions for new configServers parameter * fix: add missing mocks for config-source server dependencies in client.test.js Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations. 
* fix: update formatInstructionsForContext assertions for configServers param

  The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring.

* fix: move configServers resolution before MCP tool loop to avoid TDZ

  configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a temporal dead zone ReferenceError. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution.

* fix: address review findings — cache bypass, discoverServerTools gap, DRY

  - #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls.
  - #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows.
  - #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions.
  - #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern.
  - #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint.
  - #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs.
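The TDZ bug described in the second commit can be reproduced in a few lines. The names below are made up for illustration; only the failure shape matches the commit message.

```javascript
// A hoisted helper reads a `let` binding before its declaration runs:
// calling lookup() inside the loop throws a ReferenceError because
// `configServers` is still in its temporal dead zone.
function brokenPipeline(tools) {
  const results = [];
  for (const tool of tools) {
    results.push(lookup(tool)); // throws ReferenceError
  }
  let configServers = {};
  function lookup(tool) {
    return configServers[tool];
  }
  return results;
}

// The fix: declare and resolve before the loop, gating resolution on
// whether any tool actually matches the MCP pattern.
function fixedPipeline(tools, resolveConfigServers) {
  const mcpToolPattern = /^mcp_/;
  const configServers = tools.some((t) => mcpToolPattern.test(t))
    ? resolveConfigServers()
    : {};
  return tools.map((t) => configServers[t]);
}
```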
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case

  - Merge duplicate @librechat/data-schemas requires in MCP.js into one
  - Move resolveConfigServers after module-level constants
  - Fix getAllServerConfigs edge case where a user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses a reference equality check to detect DB-overwritten YAML keys

* fix: replace fragile string-match error detection with proper upsert method

  Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results.

  Each implementation handles add-or-update atomically:
  - InMemory: direct Map.set()
  - Redis: direct cache.set()
  - RedisAggregateKey: read-modify-write under write lock
  - DB: delegates to update() (DB servers use explicit add() with ACL setup)

* fix: wire configServers through remaining HTTP endpoints

  - getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
  - reinitialize route: resolve configServers before getServerConfig
  - auth-values route: resolve configServers before getServerConfig
  - getOAuthHeaders: accept configServers param, thread from callers
  - Update mcp.spec.js tests to mock getAllServerConfigs for GET by name

* fix: thread serverConfig through getConnection for config-source servers

  Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup.
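The upsert commit contrasts two shapes of the same operation. A simplified stand-in (class and method names mirror the commit message, not the real repository interface) shows why the string match was brittle:

```javascript
// Old shape: add() throws on conflict and callers string-matched the
// message -- any wording change would silently break the race handling.
// New shape: upsert() is an atomic add-or-update with no error to match.
class InMemoryServerConfigs {
  constructor() {
    this.store = new Map();
  }
  add(name, config) {
    if (this.store.has(name)) {
      throw new Error(`${name} already exists in cache`);
    }
    this.store.set(name, config);
  }
  upsert(name, config) {
    // races between processes simply become last-write-wins
    this.store.set(name, config);
  }
  get(name) {
    return this.store.get(name);
  }
}
```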
* fix: thread configServers through reinit, reconnect, and tool definition paths

  Wire configServers through every remaining call chain that creates or reconnects MCP server connections:
  - reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools
  - reconnectServer: accepts and passes configServers to reinitMCPServer
  - createMCPTools/createMCPTool: pass configServers to reconnectServer
  - ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites
  - reinitialize route: passes serverConfig and configServers to reinitMCPServer

* fix: address review findings — simplify merge, harden error paths, fix log labels

  - Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base }
  - Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error
  - Deduplicate getYamlServerNames cold-start with promise dedup pattern
  - Remove dead `if (!mcpConfig)` guard in getMCPSetupData
  - Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling
  - Remove misleading OAuth callback comment about readThrough cache
  - Move resolveConfigServers after module-level constants in MCP.js

* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label

  - Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working
  - Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
  - Use source field instead of dbId for storageLocation derivation
  - Fix remaining hardcoded "App" in reset() leaderCheck message

* fix: persist oauthHeaders in flow state for config-source OAuth servers

  The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers.

* fix: update tests for getMCPSetupData null guard removal and ToolService mock

  - MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object)
  - MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks
  - ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers)

* fix: address review findings — DRY, naming, logging, dead code, defensive guards

  - #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll()
  - #2: Add warning log when oauthHeaders absent from OAuth callback flow state
  - #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing
  - #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused
  - #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped
  - #6: Remove dead 'MCP config not found' error handlers from routes
  - #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
  - #8: Remove logger.error from withTimeout to prevent double-logging timeouts
  - #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
  - #12: Use spread instead of mutation in addServer for immutability consistency
  - Add upsert mock to ensureConfigServers.test.ts DB mock
  - Update route tests for resolveAllMcpConfigs import change

* fix: restore correct merge priority, use immutable spread, fix test mock

  - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority
  - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix
  - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData

* fix: error handling, stable hashing, flatten nesting, remove dead param

  - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash the tool pipeline
  - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order
  - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff)
  - Remove dead configServers param from getAppToolFunctions (never passed)
  - Add security rationale comment for source field in redactServerSecrets

* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision

  The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties.

  Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context.
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency

  The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config').

* chore: imports/fields ordering

* fix: address review findings — error handling, targeted lookup, test gaps

  - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists.
  - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup.
  - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers.
  - Update getAllServerConfigs JSDoc to document disjoint-key design.
  - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback).
  - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation.

* fix: add getOAuthReconnectionManager mock to OAuth callback tests

* chore: imports ordering
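The precedence contract these commits keep converging on (YAML → Config → User DB, with user-DB highest) reduces to spread order. A tiny illustration, with an invented helper name standing in for the relevant part of getAllServerConfigs:

```javascript
// Later spreads win: user-DB entries override config-source entries,
// which override YAML entries with the same server name.
function mergeServerConfigs(yamlConfigs, configServers, userDbConfigs) {
  return { ...yamlConfigs, ...configServers, ...userDbConfigs };
}
```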
2026-03-28 10:36:43 -04:00
provider: capturedProvider,
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926)

* ✨ feat: Implement Resumable Generation Jobs with SSE Support

  - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
  - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
  - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
  - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
  - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.

* WIP: resuming

* WIP: resumable stream

* feat: Enhance Stream Management with Abort Functionality

  - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
  - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
  - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
  - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
  - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
  - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.

* fix: Update query parameter handling in useChatHelpers

  - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.
* fix: improve syncing when switching conversations

* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup

* fix: Improve content type mismatch handling in useStepHandler

  - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.

* fix: Allow dynamic content creation in useChatFunctions

  - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.

* fix: Refine response message handling in useStepHandler

  - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.

* refactor: Enhance GenerationJobManager with In-Memory Implementations

  - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
  - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
  - Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
  - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.

* refactor: Enhance GenerationJobManager with improved subscriber handling

  - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
  - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
  - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
  - Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.

* chore: Adjust timeout for subscriber readiness in ResumableAgentController

  - Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.

* refactor: Update GenerationJobManager documentation and structure

  - Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
  - Updated comments to reflect the potential for Redis integration and the need for async refactoring.
  - Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.

* refactor: Convert GenerationJobManager methods to async for improved performance

  - Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
  - Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
  - Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.

* refactor: Simplify initial response handling in useChatFunctions

  - Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.

* refactor: Clarify content handling logic in useStepHandler

  - Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
  - Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
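"Generation starts only when the first real client connects" is a small gating pattern. A simplified stand-in (names are invented; the real GenerationJobManager uses promises and transports):

```javascript
// The job defers its startGeneration callback until subscribe() has been
// called once; later subscribers do not restart generation.
function createGatedJob(startGeneration) {
  const subscribers = new Set();
  let started = false;
  return {
    subscribe(onEvent) {
      subscribers.add(onEvent);
      if (!started) {
        started = true;
        startGeneration(); // first real client triggers generation
      }
      return () => subscribers.delete(onEvent); // unsubscribe handle
    },
    emit(event) {
      for (const sub of subscribers) sub(event);
    },
  };
}
```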
* refactor: Improve message handling logic in useStepHandler

  - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
  - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.

* fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.

* refactor: Integrate streamId handling for improved resumable functionality for attachments

  - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing.
  - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
  - Improved logging and attachment management to accommodate both standard and resumable modes.

* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management

  - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
  - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
  - Simplified cleanup processes and improved error handling during abort operations.
  - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.

* refactor: Unify streamId and conversationId handling for improved job management

  - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
  - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
  - Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
  - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.

* refactor: Enhance resumable SSE handling with improved UI state management and error recovery

  - Added UI state restoration on successful SSE connection to indicate ongoing submission.
  - Implemented detailed error handling for network failures, including retry logic with exponential backoff.
  - Introduced abort event handling to reset UI state on intentional stream closure.
  - Enhanced debugging capabilities for testing reconnection and clean close scenarios.
  - Updated generation function to retry on network errors, improving resilience during submission processes.

* refactor: Consolidate content state management into IJobStore for improved job handling

  - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
  - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
  - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
  - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.

* feat: Introduce Redis-backed stream services for enhanced job management

  - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options.
  - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
  - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
  - Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness.
  - Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options.

* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport

  - Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity.
  - Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output.
  - Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency.

* refactor: Enhance job state management and TTL configuration in RedisJobStore

  - Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management.
  - Refactored the handling of job expiration and cleanup processes to align with new TTL configurations.
  - Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance.
  - Improved comments and documentation for better understanding of the changes made.

* refactor: cleanupOnComplete option to GenerationJobManager for flexible resource management

  - Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion.
  - Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management.
  - Improved cleanup logic in the cleanup method to handle orphaned resources effectively.
  - Enhanced documentation and comments for better clarity on the new functionality.

* refactor: Update TTL configuration for completed jobs in InMemoryJobStore

  - Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup.
  - Enhanced cleanup logic to respect the new TTL setting, improving resource management.
  - Updated comments for clarity on the behavior of the TTL configuration.

* refactor: Enhance RedisJobStore with local graph caching for improved performance

  - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance.
  - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed.
  - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects.
  - Improved documentation and comments for clarity on the caching mechanism and its benefits.

* feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support

  - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling.
  - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling.
  - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior.
  - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability.

* fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers

  - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately.
  - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting.
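The WeakRef local graph cache can be sketched in a few lines (a simplified stand-in, not the RedisJobStore implementation): a Map of WeakRefs lets same-instance reconnects reuse the in-memory graph while still allowing garbage collection, and a dead ref falls through to the shared store.

```javascript
// Map<streamId, WeakRef<graph>>: strong Map of weak references.
class LocalGraphCache {
  constructor() {
    this.refs = new Map();
  }
  set(streamId, graph) {
    this.refs.set(streamId, new WeakRef(graph));
  }
  get(streamId) {
    const ref = this.refs.get(streamId);
    const graph = ref ? ref.deref() : undefined;
    if (ref && !graph) {
      this.refs.delete(streamId); // stale entry: the graph was collected
    }
    return graph; // undefined => caller falls back to Redis
  }
  delete(streamId) {
    this.refs.delete(streamId);
  }
}
```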
* ci: Improve Redis client disconnection handling in integration tests

  - Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client.
  - Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown.
  - Improved comments for clarity on the disconnection process and error handling.

* refactor: Enhance GenerationJobManager and event transports for improved resource management

  - Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
  - Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs.
  - Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams.
  - Improved comments for clarity on resource management and cleanup processes.

* refactor: Update GenerationJobManager and ResumableAgentController for improved event handling

  - Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
  - Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions.
  - Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes.

* fix: Update cache integration test command for stream to ensure proper execution

  - Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests.
  - This change enhances the reliability of the test suite by ensuring all tests complete as expected.
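The buffer-and-replay mechanism behind "no events are lost due to race conditions" can be shown minimally (an illustrative stand-in, not the GenerationJobManager API): events emitted before any client connects are buffered, and a subscriber replays the buffer before receiving live events.

```javascript
// Buffered event stream: history is replayed to each new subscriber so a
// client that connects after generation started still sees every event.
class BufferedStream {
  constructor() {
    this.buffer = [];
    this.subscribers = new Set();
  }
  emit(event) {
    this.buffer.push(event);
    for (const sub of this.subscribers) sub(event);
  }
  subscribe(onEvent) {
    for (const event of this.buffer) onEvent(event); // replay history first
    this.subscribers.add(onEvent);
    return () => this.subscribers.delete(onEvent);
  }
}
```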
* feat: Add active job management for user and show progress in conversation list

  - Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks.
  - Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs.
  - Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
  - Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback.

* feat: Implement active job tracking by user in RedisJobStore

  - Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks.
  - Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs.
  - Updated job creation, update, and deletion methods to manage user-specific job sets effectively.
  - Enhanced integration tests to validate the new user-specific job management features.

* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore

* WIP: Add backend inspect script for easier debugging in production

* refactor: title generation logic

  - Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID.
  - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
  - Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion.
  - Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance.

* feat: Enhance updateConvoInAllQueries to support moving conversations to the top

* chore: temp. remove added multi convo

* refactor: Update active jobs query integration for optimistic updates on abort

  - Introduced a new interface for active jobs response to standardize data handling.
  - Updated query keys for active jobs to ensure consistency across components.
  - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.

* refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints

  - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
  - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
  - Refactored imports in ChatView for better organization.

* refactor: streamline conversation title generation handling

  - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
  - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.

* feat: Add USE_REDIS_STREAMS configuration for stream job storage

  - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
  - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
  - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.

* fix: title generation queue management for assistants

  - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
  - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
  - Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow.

* refactor: streamline agent controller and remove legacy resumable handling

  - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic.
  - Deprecated the legacy non-resumable path, providing a clear migration path for future use.
  - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode.
  - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance.

* feat: Add USE_REDIS_STREAMS configuration to .env.example

  - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
  - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management.

* refactor: remove unused setHeaders middleware from chat route

  - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process.
  - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks.
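The exponential backoff for title fetching can be sketched generically. The retry count and base delay below are invented (the hook's actual values are not shown above), and `sleep` is injectable so the schedule is testable without real timers:

```javascript
// Retry fetchFn with delays of baseMs * 2^attempt between failures.
async function fetchWithBackoff(
  fetchFn,
  { retries = 4, baseMs = 250 } = {},
  sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms)),
) {
  let lastError;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fetchFn();
    } catch (err) {
      lastError = err;
      await sleep(baseMs * 2 ** attempt); // 250, 500, 1000, ... with defaults
    }
  }
  throw lastError;
}
```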
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)

* fix(flow): add immediate abort handling and fix intervalId initialization

  - Add immediate abort handler that responds instantly to abort signal
  - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error
  - Consolidate cleanup logic into single function to avoid duplicate cleanup
  - Properly remove abort event listener on cleanup

* fix(mcp): clean up OAuth flows on abort and simplify flow handling

  - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows
  - Update createAbortHandler to clean up both flow types on tool call abort
  - Pass abort signal to createFlow in returnOnOAuth path
  - Simplify handleOAuthRequired to always cancel existing flows and start fresh
  - This ensures the user always gets a new OAuth URL instead of waiting for stale flows

* fix(agents): handle 'new' conversationId and improve abort reliability

  - Treat 'new' as a placeholder that needs a UUID in the request controller
  - Send JSON response immediately before tool loading for faster SSE connection
  - Use job's abort controller instead of prelimAbortController
  - Emit errors to stream if headers already sent
  - Skip 'new' as valid ID in abort endpoint
  - Add fallback to find active jobs by userId when conversationId is 'new'

* fix(stream): detect early abort and prevent navigation to non-existent conversation

  - Abort controller on job completion to signal pending operations
  - Detect early abort (no content, no responseMessageId) in abortJob
  - Set conversation and responseMessage to null for early aborts
  - Add earlyAbort flag to final event for frontend detection
  - Remove unused text field from AbortResult interface
  - Frontend handles earlyAbort by staying on/navigating to new chat

* test(mcp): update test to expect signal parameter in createFlow

  fix(agents): include 'new' conversationId in newConvo check for title generation

  When the frontend sends 'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity.

  fix(agents): check abort state before completeJob for title generation

  completeJob now triggers the abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
  streamId = null,
}) {
  /** @type {LCTool} */
  const { description, parameters } = toolDefinition;
  const isGoogle = capturedProvider === Providers.VERTEXAI || capturedProvider === Providers.GOOGLE;
  let schema = parameters ? normalizeJsonSchema(resolveJsonSchemaRefs(parameters)) : null;
if (!schema || (isGoogle && isEmptyObjectSchema(schema))) {
  schema = {
    type: 'object',
    properties: {
      input: { type: 'string', description: 'Input for the tool' },
    },
    required: [],
  };
}
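A minimal standalone sketch of the fallback above: Google providers (Gemini/Vertex AI) reject function declarations whose parameter schema is an empty object, so a single-field placeholder schema is substituted. `isEmptyObjectSchema` and `ensureUsableSchema` here are simplified stand-ins for illustration, not the project's actual helpers.

```javascript
// Simplified stand-in for the real isEmptyObjectSchema helper.
function isEmptyObjectSchema(schema) {
  return (
    schema?.type === 'object' &&
    Object.keys(schema.properties ?? {}).length === 0
  );
}

// Mirrors the fallback logic: missing schemas (any provider) and empty
// object schemas (Google providers only) get a placeholder `input` field.
function ensureUsableSchema(schema, isGoogle) {
  if (!schema || (isGoogle && isEmptyObjectSchema(schema))) {
    return {
      type: 'object',
      properties: {
        input: { type: 'string', description: 'Input for the tool' },
      },
      required: [],
    };
  }
  return schema;
}
```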
const normalizedToolKey = `${toolName}${Constants.mcp_delimiter}${normalizeServerName(serverName)}`;
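The flattened tool key above joins the tool name and a normalized server name with the MCP delimiter. As a hedged sketch (the delimiter value `_mcp_` and the character-replacement rule are assumptions modeled on LibreChat's conventions, not taken from this file):

```javascript
// Assumed delimiter value; the real code reads Constants.mcp_delimiter.
const MCP_DELIMITER = '_mcp_';

// Hypothetical normalization: providers typically restrict function names
// to word characters and dashes, so anything else is replaced.
function normalizeServerName(serverName) {
  return serverName.replace(/[^a-zA-Z0-9_-]/g, '_');
}

function buildToolKey(toolName, serverName) {
  return `${toolName}${MCP_DELIMITER}${normalizeServerName(serverName)}`;
}
```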
/** @type {(toolArguments: Object | string, config?: GraphRunnableConfig) => Promise<unknown>} */
const _call = async (toolArguments, config) => {
  const userId = config?.configurable?.user?.id || config?.configurable?.user_id;
  /** @type {ReturnType<typeof createAbortHandler>} */
  let abortHandler = null;
  /** @type {AbortSignal} */
  let derivedSignal = null;
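A hypothetical sketch in the spirit of the declarations above: a per-call signal derived from the request's abort signal, so the derived signal fires when either the parent request aborts or the tool call is cancelled locally. `deriveSignal` is an illustrative helper, not the `createAbortHandler` used by this file.

```javascript
// Derive a child AbortSignal linked to an optional parent signal.
function deriveSignal(parentSignal) {
  const controller = new AbortController();
  const onAbort = () => controller.abort(parentSignal?.reason);
  if (parentSignal) {
    if (parentSignal.aborted) {
      onAbort(); // parent already aborted: propagate immediately
    } else {
      parentSignal.addEventListener('abort', onAbort, { once: true });
    }
  }
  return {
    signal: controller.signal,
    // Local cancellation, independent of the parent request.
    abort: (reason) => controller.abort(reason),
    // Remove the listener once the tool call settles to avoid leaks.
    cleanup: () => parentSignal?.removeEventListener('abort', onAbort),
  };
}
```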
  try {
2025-06-17 13:50:33 -04:00
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = getFlowStateManager(flowsCache);
derivedSignal = config?.signal ? AbortSignal.any([config.signal]) : undefined;
const mcpManager = getMCPManager(userId);
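The `derivedSignal` assignment above wraps the caller's `config.signal` in `AbortSignal.any`, producing a dependent signal the tool layer can extend (e.g. with its own timeout) without mutating the caller's signal. A minimal standalone sketch of that pattern — function and variable names here are illustrative, not LibreChat's actual helpers:

```javascript
// Sketch: derive an abort signal that fires when either the caller's
// signal aborts or a local timeout elapses (requires Node 20+ for
// AbortSignal.any). All names below are hypothetical.
function deriveSignal(callerSignal, timeoutMs) {
  const sources = [];
  if (callerSignal) {
    sources.push(callerSignal);
  }
  if (timeoutMs != null) {
    sources.push(AbortSignal.timeout(timeoutMs));
  }
  // With no sources, return undefined — mirroring the ternary above.
  return sources.length > 0 ? AbortSignal.any(sources) : undefined;
}

const controller = new AbortController();
const derived = deriveSignal(controller.signal, 60_000);
controller.abort(new Error('caller cancelled'));
console.log(derived.aborted); // true — the derived signal follows the caller's
```

Aborting the caller's controller never touches the derived signal's other sources, which is why wrapping even a single signal in `AbortSignal.any` is useful: downstream code can add sources later without any risk of propagating aborts back upstream.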
const provider = (config?.metadata?.provider || capturedProvider)?.toLowerCase();
🪐 feat: MCP OAuth 2.0 Discovery Support (#7924) * chore: Update @modelcontextprotocol/sdk to version 1.12.3 in package.json and package-lock.json - Bump version of @modelcontextprotocol/sdk to 1.12.3 to incorporate recent updates. - Update dependencies for ajv and cross-spawn to their latest versions. - Add ajv as a new dependency in the sdk module. - Include json-schema-traverse as a new dependency in the sdk module. * feat: @librechat/auth * feat: Add crypto module exports to auth package - Introduced a new crypto module by creating index.ts in the crypto directory. - Updated the main index.ts of the auth package to export from the new crypto module. * feat: Update package dependencies and build scripts for auth package - Added @librechat/auth as a dependency in package.json and package-lock.json. - Updated build scripts to include the auth package in both frontend and bun build processes. - Removed unused mongoose and openid-client dependencies from package-lock.json for cleaner dependency management. * refactor: Migrate crypto utility functions to @librechat/auth - Replaced local crypto utility imports with the new @librechat/auth package across multiple files. - Removed the obsolete crypto.js file and its exports. - Updated relevant services and models to utilize the new encryption and decryption methods from @librechat/auth. * feat: Enhance OAuth token handling and update dependencies in auth package * chore: Remove Token model and TokenService due to restructuring of OAuth handling - Deleted the Token.js model and TokenService.js, which were responsible for managing OAuth tokens. - This change is part of a broader refactor to streamline OAuth token management and improve code organization. 
* refactor: imports from '@librechat/auth' to '@librechat/api' and add OAuth token handling functionality * refactor: Simplify logger usage in MCP and FlowStateManager classes * chore: fix imports * feat: Add OAuth configuration schema to MCP with token exchange method support * feat: FIRST PASS Implement MCP OAuth flow with token management and error handling - Added a new route for handling OAuth callbacks and token retrieval. - Integrated OAuth token storage and retrieval mechanisms. - Enhanced MCP connection to support automatic OAuth flow initiation on 401 errors. - Implemented dynamic client registration and metadata discovery for OAuth. - Updated MCPManager to manage OAuth tokens and handle authentication requirements. - Introduced comprehensive logging for OAuth processes and error handling. * refactor: Update MCPConnection and MCPManager to utilize new URL handling - Added a `url` property to MCPConnection for better URL management. - Refactored MCPManager to use the new `url` property instead of a deprecated method for OAuth handling. - Changed logging from info to debug level for flow manager and token methods initialization. - Improved comments for clarity on existing tokens and OAuth event listener setup. * refactor: Improve connection timeout error messages in MCPConnection and MCPManager and use initTimeout for connection - Updated the connection timeout error messages to include the duration of the timeout. - Introduced a configurable `connectTimeout` variable in both MCPConnection and MCPManager for better flexibility. * chore: cleanup MCP OAuth Token exchange handling; fix: erroneous use of flowsCache and remove verbose logs * refactor: Update MCPManager and MCPTokenStorage to use TokenMethods for token management - Removed direct token storage handling in MCPManager and replaced it with TokenMethods for better abstraction. - Refactored MCPTokenStorage methods to accept parameters for token operations, enhancing flexibility and readability. 
- Improved logging messages related to token persistence and retrieval processes. * refactor: Update MCP OAuth handling to use static methods and improve flow management - Refactored MCPOAuthHandler to utilize static methods for initiating and completing OAuth flows, enhancing clarity and reducing instance dependencies. - Updated MCPManager to pass flowManager explicitly to OAuth handling methods, improving flexibility in flow state management. - Enhanced comments and logging for better understanding of OAuth processes and flow state retrieval. * refactor: Integrate token methods into createMCPTool for enhanced token management * refactor: Change logging from info to debug level in MCPOAuthHandler for improved log management * chore: clean up logging * feat: first pass, auth URL from MCP OAuth flow * chore: Improve logging format for OAuth authentication URL display * chore: cleanup mcp manager comments * feat: add connection reconnection logic in MCPManager * refactor: reorganize token storage handling in MCP - Moved token storage logic from MCPManager to a new MCPTokenStorage class for better separation of concerns. - Updated imports to reflect the new token storage structure. - Enhanced methods for storing, retrieving, updating, and deleting OAuth tokens, improving overall token management. * chore: update comment for SYSTEM_USER_ID in MCPManager for clarity * feat: implement refresh token functionality in MCP - Added refresh token handling in MCPManager to support token renewal for both app-level and user-specific connections. - Introduced a refreshTokens function to facilitate token refresh logic. - Enhanced MCPTokenStorage to manage client information and refresh token processes. - Updated logging for better traceability during token operations. * chore: cleanup @librechat/auth * feat: implement MCP server initialization in a separate service - Added a new service to handle the initialization of MCP servers, improving code organization and readability. 
- Refactored the server startup logic to utilize the new initializeMCP function. - Removed redundant MCP initialization code from the main server file. * fix: don't log auth url for user connections * feat: enhance OAuth flow with success and error handling components - Updated OAuth callback routes to redirect to new success and error pages instead of sending status messages. - Introduced `OAuthSuccess` and `OAuthError` components to provide user feedback during authentication. - Added localization support for success and error messages in the translation files. - Implemented countdown functionality in the success component for a better user experience. * fix: refresh token handling for user connections, add missing URL and methods - add standard enum for system user id and helper for determining app-level vs. user-level connections * refactor: update token handling in MCPManager and MCPTokenStorage * fix: improve error logging in OAuth authentication handler * fix: concurrency issues for both login url emission and concurrency of oauth flows for shared flows (same user, same server, multiple calls for same server) * fix: properly fail shared flows for concurrent server calls and prevent duplication of tokens * chore: remove unused auth package directory from update configuration * ci: fix mocks in samlStrategy tests * ci: add mcpConfig to AppService test setup * chore: remove obsolete MCP OAuth implementation documentation * fix: update build script for API to use correct command * chore: bump version of @librechat/api to 1.2.4 * fix: update abort signal handling in createMCPTool function * fix: add optional clientInfo parameter to refreshTokensFunction metadata * refactor: replace app.locals.availableTools with getCachedTools in multiple services and controllers for improved tool management * fix: concurrent refresh token handling issue * refactor: add signal parameter to getUserConnection method for improved abort handling * chore: JSDoc typing for 
`loadEphemeralAgent` * refactor: update isConnectionActive method to use destructured parameters for improved readability * feat: implement caching for MCP tools to handle app-level disconnects for loading list of tools * ci: fix agent test
2025-06-17 13:50:33 -04:00
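The commit notes above describe deduplicating concurrent OAuth flows via a composite flow ID (the same user and server joining one shared login instead of spawning duplicates, which previously caused token duplication). A minimal sketch of that pattern follows; the `FlowStateManager` class, `createFlow` signature, and flow-ID format here are illustrative assumptions, not LibreChat's actual API:

```javascript
// Minimal sketch of flow-ID based deduplication for concurrent OAuth logins.
// FlowStateManager, createFlow, and the flow-ID format are illustrative only.
class FlowStateManager {
  constructor() {
    this.flows = new Map(); // flowId -> shared in-flight Promise
  }

  /** Runs `handler` once per flowId; concurrent callers share its result. */
  createFlow(flowId, handler) {
    if (this.flows.has(flowId)) {
      return this.flows.get(flowId); // join the in-flight flow
    }
    const promise = Promise.resolve()
      .then(handler)
      .finally(() => this.flows.delete(flowId)); // clean up once settled
    this.flows.set(flowId, promise);
    return promise;
  }
}

// Two concurrent tool calls for the same user + server trigger one OAuth flow.
const manager = new FlowStateManager();
const flowId = 'github:oauth_login:user-123';
let logins = 0;
const startLogin = () => {
  logins += 1;
  return Promise.resolve('tokens');
};
Promise.all([
  manager.createFlow(flowId, startLogin),
  manager.createFlow(flowId, startLogin),
]).then(() => {
  console.log(logins); // → 1
});
```

Because the shared promise is registered synchronously before the handler runs, any caller arriving in the same tick joins the existing flow rather than emitting a second login URL.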
const { args: _args, stepId, ...toolCall } = config.toolCall ?? {};
const flowId = `${serverName}:oauth_login:${config.metadata.thread_id}:${config.metadata.run_id}`;
const runStepDeltaEmitter = createRunStepDeltaEmitter({
  res,
  stepId,
  toolCall,
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926) * ✨ feat: Implement Resumable Generation Jobs with SSE Support - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections. - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress. - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling. - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings. - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections. * WIP: resuming * WIP: resumable stream * feat: Enhance Stream Management with Abort Functionality - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId. - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration. - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations. - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation. - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly. - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately. * fix: Update query parameter handling in useChatHelpers - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations. 
* fix: improve syncing when switching conversations * fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup * fix: Improve content type mismatch handling in useStepHandler - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates. * fix: Allow dynamic content creation in useChatFunctions - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text. * fix: Refine response message handling in useStepHandler - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow. * refactor: Enhance GenerationJobManager with In-Memory Implementations - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling. - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance. - Enhanced job metadata handling to support user messages and response IDs for resumable functionality. - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage. * refactor: Enhance GenerationJobManager with improved subscriber handling - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count. - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects. - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness. 
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring. * chore: Adjust timeout for subscriber readiness in ResumableAgentController - Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process. * refactor: Update GenerationJobManager documentation and structure - Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design. - Updated comments to reflect the potential for Redis integration and the need for async refactoring. - Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code. * refactor: Convert GenerationJobManager methods to async for improved performance - Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management. - Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling. - Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management. * refactor: Simplify initial response handling in useChatFunctions - Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow. * refactor: Clarify content handling logic in useStepHandler - Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios. - Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability. 
* refactor: Improve message handling logic in useStepHandler - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized. - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow. * fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context. * refactor: Integrate streamId handling for improved resumable functionality for attachments - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing. - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience. - Improved logging and attachment management to accommodate both standard and resumable modes. * refactor: Streamline abort handling and integrate GenerationJobManager for improved job management - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager. - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency. - Simplified cleanup processes and improved error handling during abort operations. - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management. * refactor: Unify streamId and conversationId handling for improved job management - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency. - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks. 
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling. - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability. * refactor: Enhance resumable SSE handling with improved UI state management and error recovery - Added UI state restoration on successful SSE connection to indicate ongoing submission. - Implemented detailed error handling for network failures, including retry logic with exponential backoff. - Introduced abort event handling to reset UI state on intentional stream closure. - Enhanced debugging capabilities for testing reconnection and clean close scenarios. - Updated generation function to retry on network errors, improving resilience during submission processes. * refactor: Consolidate content state management into IJobStore for improved job handling - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management. - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy. - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks. - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations. * feat: Introduce Redis-backed stream services for enhanced job management - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options. - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios. - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations. 
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness. - Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options. * refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport - Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity. - Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output. - Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency. * refactor: Enhance job state management and TTL configuration in RedisJobStore - Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management. - Refactored the handling of job expiration and cleanup processes to align with new TTL configurations. - Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance. - Improved comments and documentation for better understanding of the changes made. * refactor: cleanupOnComplete option to GenerationJobManager for flexible resource management - Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion. - Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management. - Improved cleanup logic in the cleanup method to handle orphaned resources effectively. - Enhanced documentation and comments for better clarity on the new functionality. * refactor: Update TTL configuration for completed jobs in InMemoryJobStore - Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup. 
- Enhanced cleanup logic to respect the new TTL setting, improving resource management. - Updated comments for clarity on the behavior of the TTL configuration. * refactor: Enhance RedisJobStore with local graph caching for improved performance - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance. - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed. - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects. - Improved documentation and comments for clarity on the caching mechanism and its benefits. * feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling. - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling. - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior. - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability. * fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately. - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting. 
* ci: Improve Redis client disconnection handling in integration tests - Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client. - Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown. - Improved comments for clarity on the disconnection process and error handling. * refactor: Enhance GenerationJobManager and event transports for improved resource management - Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup. - Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs. - Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams. - Improved comments for clarity on resource management and cleanup processes. * refactor: Update GenerationJobManager and ResumableAgentController for improved event handling - Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers. - Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions. - Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes. * fix: Update cache integration test command for stream to ensure proper execution - Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests. - This change enhances the reliability of the test suite by ensuring all tests complete as expected. 
* feat: Add active job management for user and show progress in conversation list - Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks. - Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs. - Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup. - Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback. * feat: Implement active job tracking by user in RedisJobStore - Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks. - Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs. - Updated job creation, update, and deletion methods to manage user-specific job sets effectively. - Enhanced integration tests to validate the new user-specific job management features. * refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore * WIP: Add backend inspect script for easier debugging in production * refactor: title generation logic - Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID. - Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load. - Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion. - Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance. * feat: Enhance updateConvoInAllQueries to support moving conversations to the top * chore: temp. 
remove added multi convo
* refactor: Update active jobs query integration for optimistic updates on abort
  - Introduced a new interface for active jobs response to standardize data handling.
  - Updated query keys for active jobs to ensure consistency across components.
  - Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.
* refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints
  - Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
  - Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
  - Refactored imports in ChatView for better organization.
* refactor: streamline conversation title generation handling
  - Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
  - Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.
* feat: Add USE_REDIS_STREAMS configuration for stream job storage
  - Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
  - Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
  - Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.
* fix: title generation queue management for assistants
  - Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
  - Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
  - Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow.
* refactor: streamline agent controller and remove legacy resumable handling
  - Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic.
  - Deprecated the legacy non-resumable path, providing a clear migration path for future use.
  - Adjusted setHeaders middleware to remove unnecessary checks for resumable mode.
  - Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance.
* feat: Add USE_REDIS_STREAMS configuration to .env.example
  - Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
  - Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management.
* refactor: remove unused setHeaders middleware from chat route
  - Eliminated the setHeaders middleware from the chat route, streamlining the request handling process.
  - This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks.
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)
* fix(flow): add immediate abort handling and fix intervalId initialization
  - Add immediate abort handler that responds instantly to abort signal
  - Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error
  - Consolidate cleanup logic into single function to avoid duplicate cleanup
  - Properly remove abort event listener on cleanup
* fix(mcp): clean up OAuth flows on abort and simplify flow handling
  - Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows
  - Update createAbortHandler to clean up both flow types on tool call abort
  - Pass abort signal to createFlow in returnOnOAuth path
  - Simplify handleOAuthRequired to always cancel existing flows and start fresh
  - This ensures user always gets a new OAuth URL instead of waiting for stale flows
* fix(agents): handle 'new' conversationId and improve abort reliability
  - Treat 'new' as placeholder that needs UUID in request controller
  - Send JSON response immediately before tool loading for faster SSE connection
  - Use job's abort controller instead of prelimAbortController
  - Emit errors to stream if headers already sent
  - Skip 'new' as valid ID in abort endpoint
  - Add fallback to find active jobs by userId when conversationId is 'new'
* fix(stream): detect early abort and prevent navigation to non-existent conversation
  - Abort controller on job completion to signal pending operations
  - Detect early abort (no content, no responseMessageId) in abortJob
  - Set conversation and responseMessage to null for early aborts
  - Add earlyAbort flag to final event for frontend detection
  - Remove unused text field from AbortResult interface
  - Frontend handles earlyAbort by staying on/navigating to new chat
* test(mcp): update test to expect signal parameter in createFlow
* fix(agents): include 'new' conversationId in newConvo check for title generation
  When frontend sends 'new' as conversationId, it should still trigger title generation since it's a new conversation. Rename boolean variable for clarity
* fix(agents): check abort state before completeJob for title generation
  completeJob now triggers abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine if title generation should run.
2025-12-19 10:12:39 -05:00
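The "fix(flow): add immediate abort handling and fix intervalId initialization" entry above describes a pattern worth spelling out: declare `intervalId` before the cleanup closure that references it, consolidate cleanup into a single function, and remove the abort listener during cleanup. A minimal illustrative sketch (a hypothetical `pollWithAbort` helper, not LibreChat's actual `createFlow` implementation):

```javascript
/**
 * Polls `check()` until it returns a value, or rejects on abort.
 * Sketch of the cleanup pattern described in the commit message above.
 */
function pollWithAbort(check, { intervalMs = 100, signal } = {}) {
  return new Promise((resolve, reject) => {
    // Declared BEFORE cleanup() so the closure never hits a
    // "Cannot access before initialization" (TDZ) error.
    let intervalId;
    let settled = false;

    // Single consolidated cleanup function: clears the timer and
    // removes the abort listener, so cleanup never runs twice.
    const cleanup = () => {
      clearInterval(intervalId);
      if (signal) {
        signal.removeEventListener('abort', onAbort);
      }
    };

    // Immediate abort handler: responds to the signal right away
    // instead of waiting for the next poll tick.
    const onAbort = () => {
      if (settled) return;
      settled = true;
      cleanup();
      reject(new Error('aborted'));
    };

    if (signal) {
      if (signal.aborted) return onAbort();
      signal.addEventListener('abort', onAbort);
    }

    intervalId = setInterval(() => {
      const result = check();
      if (result !== undefined && !settled) {
        settled = true;
        cleanup();
        resolve(result);
      }
    }, intervalMs);
  });
}
```

Forgetting `removeEventListener` here is the leak the commit fixes: each poll would leave a live listener attached to a long-lived abort signal.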
streamId,
});
const oauthStart = createOAuthStart({
flowId,
🪐 feat: MCP OAuth 2.0 Discovery Support (#7924)
* chore: Update @modelcontextprotocol/sdk to version 1.12.3 in package.json and package-lock.json
  - Bump version of @modelcontextprotocol/sdk to 1.12.3 to incorporate recent updates.
  - Update dependencies for ajv and cross-spawn to their latest versions.
  - Add ajv as a new dependency in the sdk module.
  - Include json-schema-traverse as a new dependency in the sdk module.
* feat: @librechat/auth
* feat: Add crypto module exports to auth package
  - Introduced a new crypto module by creating index.ts in the crypto directory.
  - Updated the main index.ts of the auth package to export from the new crypto module.
* feat: Update package dependencies and build scripts for auth package
  - Added @librechat/auth as a dependency in package.json and package-lock.json.
  - Updated build scripts to include the auth package in both frontend and bun build processes.
  - Removed unused mongoose and openid-client dependencies from package-lock.json for cleaner dependency management.
* refactor: Migrate crypto utility functions to @librechat/auth
  - Replaced local crypto utility imports with the new @librechat/auth package across multiple files.
  - Removed the obsolete crypto.js file and its exports.
  - Updated relevant services and models to utilize the new encryption and decryption methods from @librechat/auth.
* feat: Enhance OAuth token handling and update dependencies in auth package
* chore: Remove Token model and TokenService due to restructuring of OAuth handling
  - Deleted the Token.js model and TokenService.js, which were responsible for managing OAuth tokens.
  - This change is part of a broader refactor to streamline OAuth token management and improve code organization.
* refactor: imports from '@librechat/auth' to '@librechat/api' and add OAuth token handling functionality
* refactor: Simplify logger usage in MCP and FlowStateManager classes
* chore: fix imports
* feat: Add OAuth configuration schema to MCP with token exchange method support
* feat: FIRST PASS Implement MCP OAuth flow with token management and error handling
  - Added a new route for handling OAuth callbacks and token retrieval.
  - Integrated OAuth token storage and retrieval mechanisms.
  - Enhanced MCP connection to support automatic OAuth flow initiation on 401 errors.
  - Implemented dynamic client registration and metadata discovery for OAuth.
  - Updated MCPManager to manage OAuth tokens and handle authentication requirements.
  - Introduced comprehensive logging for OAuth processes and error handling.
* refactor: Update MCPConnection and MCPManager to utilize new URL handling
  - Added a `url` property to MCPConnection for better URL management.
  - Refactored MCPManager to use the new `url` property instead of a deprecated method for OAuth handling.
  - Changed logging from info to debug level for flow manager and token methods initialization.
  - Improved comments for clarity on existing tokens and OAuth event listener setup.
* refactor: Improve connection timeout error messages in MCPConnection and MCPManager and use initTimeout for connection
  - Updated the connection timeout error messages to include the duration of the timeout.
  - Introduced a configurable `connectTimeout` variable in both MCPConnection and MCPManager for better flexibility.
* chore: cleanup MCP OAuth Token exchange handling; fix: erroneous use of flowsCache and remove verbose logs
* refactor: Update MCPManager and MCPTokenStorage to use TokenMethods for token management
  - Removed direct token storage handling in MCPManager and replaced it with TokenMethods for better abstraction.
  - Refactored MCPTokenStorage methods to accept parameters for token operations, enhancing flexibility and readability.
  - Improved logging messages related to token persistence and retrieval processes.
* refactor: Update MCP OAuth handling to use static methods and improve flow management
  - Refactored MCPOAuthHandler to utilize static methods for initiating and completing OAuth flows, enhancing clarity and reducing instance dependencies.
  - Updated MCPManager to pass flowManager explicitly to OAuth handling methods, improving flexibility in flow state management.
  - Enhanced comments and logging for better understanding of OAuth processes and flow state retrieval.
* refactor: Integrate token methods into createMCPTool for enhanced token management
* refactor: Change logging from info to debug level in MCPOAuthHandler for improved log management
* chore: clean up logging
* feat: first pass, auth URL from MCP OAuth flow
* chore: Improve logging format for OAuth authentication URL display
* chore: cleanup mcp manager comments
* feat: add connection reconnection logic in MCPManager
* refactor: reorganize token storage handling in MCP
  - Moved token storage logic from MCPManager to a new MCPTokenStorage class for better separation of concerns.
  - Updated imports to reflect the new token storage structure.
  - Enhanced methods for storing, retrieving, updating, and deleting OAuth tokens, improving overall token management.
* chore: update comment for SYSTEM_USER_ID in MCPManager for clarity
* feat: implement refresh token functionality in MCP
  - Added refresh token handling in MCPManager to support token renewal for both app-level and user-specific connections.
  - Introduced a refreshTokens function to facilitate token refresh logic.
  - Enhanced MCPTokenStorage to manage client information and refresh token processes.
  - Updated logging for better traceability during token operations.
* chore: cleanup @librechat/auth
* feat: implement MCP server initialization in a separate service
  - Added a new service to handle the initialization of MCP servers, improving code organization and readability.
  - Refactored the server startup logic to utilize the new initializeMCP function.
  - Removed redundant MCP initialization code from the main server file.
* fix: don't log auth url for user connections
* feat: enhance OAuth flow with success and error handling components
  - Updated OAuth callback routes to redirect to new success and error pages instead of sending status messages.
  - Introduced `OAuthSuccess` and `OAuthError` components to provide user feedback during authentication.
  - Added localization support for success and error messages in the translation files.
  - Implemented countdown functionality in the success component for a better user experience.
* fix: refresh token handling for user connections, add missing URL and methods
  - add standard enum for system user id and helper for determining app-level vs. user-level connections
* refactor: update token handling in MCPManager and MCPTokenStorage
* fix: improve error logging in OAuth authentication handler
* fix: concurrency issues for both login url emission and concurrency of oauth flows for shared flows (same user, same server, multiple calls for same server)
* fix: properly fail shared flows for concurrent server calls and prevent duplication of tokens
* chore: remove unused auth package directory from update configuration
* ci: fix mocks in samlStrategy tests
* ci: add mcpConfig to AppService test setup
* chore: remove obsolete MCP OAuth implementation documentation
* fix: update build script for API to use correct command
* chore: bump version of @librechat/api to 1.2.4
* fix: update abort signal handling in createMCPTool function
* fix: add optional clientInfo parameter to refreshTokensFunction metadata
* refactor: replace app.locals.availableTools with getCachedTools in multiple services and controllers for improved tool management
* fix: concurrent refresh token handling issue
* refactor: add signal parameter to getUserConnection method for improved abort handling
* chore: JSDoc typing for `loadEphemeralAgent`
* refactor: update isConnectionActive method to use destructured parameters for improved readability
* feat: implement caching for MCP tools to handle app-level disconnects for loading list of tools
* ci: fix agent test
2025-06-17 13:50:33 -04:00
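The concurrency fixes above ("shared flows (same user, same server, multiple calls for same server)" and "properly fail shared flows ... prevent duplication of tokens") amount to deduplicating in-flight OAuth flows by key. A rough sketch of that idea, assuming a hypothetical `SharedFlowManager` rather than LibreChat's actual FlowStateManager API:

```javascript
// Illustrative sketch: concurrent callers that compute the same flow key
// share one in-flight promise, so only a single OAuth flow runs and a
// single login URL is emitted per user+server.
class SharedFlowManager {
  constructor() {
    /** @type {Map<string, Promise<unknown>>} flowKey -> in-flight flow */
    this.flows = new Map();
  }

  /**
   * @param {string} flowKey e.g. `${userId}:${serverName}:mcp_oauth`
   * @param {() => Promise<unknown>} handler runs the real OAuth flow
   */
  createFlowWithHandler(flowKey, handler) {
    const existing = this.flows.get(flowKey);
    if (existing) {
      return existing; // join the flow already in progress
    }
    const flow = Promise.resolve()
      .then(handler)
      // A rejection propagates to every caller sharing the flow (the
      // "properly fail shared flows" behavior), and the entry is cleared
      // once settled so a fresh flow can start afterward.
      .finally(() => this.flows.delete(flowKey));
    this.flows.set(flowKey, flow);
    return flow;
  }
}
```

Because all concurrent callers await the same promise, the token exchange runs exactly once, which is what prevents the duplicated tokens the commit mentions.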
flowManager,
callback: runStepDeltaEmitter,
});
const oauthEnd = createOAuthEnd({
res,
stepId,
toolCall,
🌊 feat: Resumable LLM Streams with Horizontal Scaling (#10926) * ✨ feat: Implement Resumable Generation Jobs with SSE Support - Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections. - Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress. - Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling. - Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings. - Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections. * WIP: resuming * WIP: resumable stream * feat: Enhance Stream Management with Abort Functionality - Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId. - Introduced a new mutation hook `useAbortStreamMutation` for client-side integration. - Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations. - Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation. - Improved `useResumableSSE` to handle stream errors and token refresh seamlessly. - Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately. * fix: Update query parameter handling in useChatHelpers - Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations. 
* fix: improve syncing when switching conversations * fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup * fix: Improve content type mismatch handling in useStepHandler - Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates. * fix: Allow dynamic content creation in useChatFunctions - Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text. * fix: Refine response message handling in useStepHandler - Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow. * refactor: Enhance GenerationJobManager with In-Memory Implementations - Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling. - Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance. - Enhanced job metadata handling to support user messages and response IDs for resumable functionality. - Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage. * refactor: Enhance GenerationJobManager with improved subscriber handling - Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count. - Refined createJob and subscribe methods to ensure generation starts only when the first real client connects. - Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness. 
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring. * chore: Adjust timeout for subscriber readiness in ResumableAgentController - Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process. * refactor: Update GenerationJobManager documentation and structure - Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design. - Updated comments to reflect the potential for Redis integration and the need for async refactoring. - Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code. * refactor: Convert GenerationJobManager methods to async for improved performance - Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management. - Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling. - Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management. * refactor: Simplify initial response handling in useChatFunctions - Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow. * refactor: Clarify content handling logic in useStepHandler - Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios. - Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability. 
* refactor: Improve message handling logic in useStepHandler - Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized. - Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow. * fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context. * refactor: Integrate streamId handling for improved resumable functionality for attachments - Added streamId parameter to various functions to support resumable mode in tool loading and memory processing. - Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience. - Improved logging and attachment management to accommodate both standard and resumable modes. * refactor: Streamline abort handling and integrate GenerationJobManager for improved job management - Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager. - Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency. - Simplified cleanup processes and improved error handling during abort operations. - Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management. * refactor: Unify streamId and conversationId handling for improved job management - Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency. - Simplified job creation and metadata management by removing redundant conversationId updates from callbacks. 
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling. - Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability. * refactor: Enhance resumable SSE handling with improved UI state management and error recovery - Added UI state restoration on successful SSE connection to indicate ongoing submission. - Implemented detailed error handling for network failures, including retry logic with exponential backoff. - Introduced abort event handling to reset UI state on intentional stream closure. - Enhanced debugging capabilities for testing reconnection and clean close scenarios. - Updated generation function to retry on network errors, improving resilience during submission processes. * refactor: Consolidate content state management into IJobStore for improved job handling - Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management. - Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy. - Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks. - Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations. * feat: Introduce Redis-backed stream services for enhanced job management - Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options. - Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios. - Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations. 
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness. - Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options. * refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport - Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity. - Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output. - Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency. * refactor: Enhance job state management and TTL configuration in RedisJobStore - Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management. - Refactored the handling of job expiration and cleanup processes to align with new TTL configurations. - Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance. - Improved comments and documentation for better understanding of the changes made. * refactor: cleanupOnComplete option to GenerationJobManager for flexible resource management - Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion. - Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management. - Improved cleanup logic in the cleanup method to handle orphaned resources effectively. - Enhanced documentation and comments for better clarity on the new functionality. * refactor: Update TTL configuration for completed jobs in InMemoryJobStore - Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup. 
- Enhanced cleanup logic to respect the new TTL setting, improving resource management. - Updated comments for clarity on the behavior of the TTL configuration. * refactor: Enhance RedisJobStore with local graph caching for improved performance - Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance. - Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed. - Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects. - Improved documentation and comments for clarity on the caching mechanism and its benefits. * feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support - Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling. - Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling. - Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior. - Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability. * fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers - Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately. - Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting. 
* ci: Improve Redis client disconnection handling in integration tests
- Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client.
- Added a fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown.
- Improved comments for clarity on the disconnection process and error handling.

* refactor: Enhance GenerationJobManager and event transports for improved resource management
- Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
- Added orphaned-stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs.
- Introduced a getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams.
- Improved comments for clarity on resource management and cleanup processes.

* refactor: Update GenerationJobManager and ResumableAgentController for improved event handling
- Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
- Enhanced event-handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions.
- Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes.

* fix: Update cache integration test command for stream to ensure proper execution
- Modified the test command for stream cache integration by adding the --forceExit flag to prevent hanging tests.
- This change enhances the reliability of the test suite by ensuring all tests complete as expected.

* feat: Add active job management for user and show progress in conversation list
- Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks.
- Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs.
- Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
- Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback.

* feat: Implement active job tracking by user in RedisJobStore
- Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks.
- Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs.
- Updated job creation, update, and deletion methods to manage user-specific job sets effectively.
- Enhanced integration tests to validate the new user-specific job management features.

* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore

* WIP: Add backend inspect script for easier debugging in production

* refactor: title generation logic
- Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID.
- Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
- Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion.
- Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance.

* feat: Enhance updateConvoInAllQueries to support moving conversations to the top

* chore: temp. remove added multi convo

* refactor: Update active jobs query integration for optimistic updates on abort
- Introduced a new interface for the active jobs response to standardize data handling.
- Updated query keys for active jobs to ensure consistency across components.
- Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.

* refactor: useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints
- Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
- Updated the ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
- Refactored imports in ChatView for better organization.

* refactor: streamline conversation title generation handling
- Removed the unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
- Integrated a queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.

* feat: Add USE_REDIS_STREAMS configuration for stream job storage
- Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
- Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
- Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.

* fix: title generation queue management for assistants
- Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
- Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
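The exponential backoff for title fetching mentioned above follows a standard pattern; a minimal sketch, where `fetchTitle`, the base delay, and the attempt cap are all illustrative assumptions rather than the actual hook's values:

```javascript
// Doubling delay per attempt, capped, so retries back off instead of hammering
// the server while the title is still being generated.
const backoffDelay = (attempt, base = 500, max = 8000) =>
  Math.min(base * 2 ** attempt, max);

async function fetchTitleWithRetry(conversationId, fetchTitle, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // e.g. GET /api/convos/:id/title — returns null/undefined until ready
    const title = await fetchTitle(conversationId);
    if (title) return title;
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
  }
  return null;
}
```

With a 500 ms base, the delay schedule is 500, 1000, 2000, 4000, 8000 ms, then stays at the cap.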
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow.

* refactor: streamline agent controller and remove legacy resumable handling
- Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic.
- Deprecated the legacy non-resumable path, providing a clear migration path for future use.
- Adjusted the setHeaders middleware to remove unnecessary checks for resumable mode.
- Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance.

* feat: Add USE_REDIS_STREAMS configuration to .env.example
- Updated .env.example to include the USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
- Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management.

* refactor: remove unused setHeaders middleware from chat route
- Eliminated the setHeaders middleware from the chat route, streamlining the request handling process.
- This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks.
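The USE_REDIS_STREAMS defaulting rule described above ("defaults to true if USE_REDIS is enabled but not explicitly set") can be sketched as a small resolver; the truthiness convention here is an assumption about how the env vars are parsed:

```javascript
// Env vars arrive as strings; treat only 'true' (or boolean true) as enabled.
const isTruthy = (value) => value === true || value === 'true';

function resolveUseRedisStreams(env) {
  // An explicit USE_REDIS_STREAMS setting always wins;
  // otherwise inherit the USE_REDIS value.
  if (env.USE_REDIS_STREAMS !== undefined) {
    return isTruthy(env.USE_REDIS_STREAMS);
  }
  return isTruthy(env.USE_REDIS);
}
```

This lets operators keep Redis for caching while opting stream job storage out (or vice versa) without touching the main USE_REDIS flag.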
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)

* fix(flow): add immediate abort handling and fix intervalId initialization
- Add an immediate abort handler that responds instantly to the abort signal.
- Declare intervalId before the cleanup function to prevent a 'Cannot access before initialization' error.
- Consolidate cleanup logic into a single function to avoid duplicate cleanup.
- Properly remove the abort event listener on cleanup.

* fix(mcp): clean up OAuth flows on abort and simplify flow handling
- Add an abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows.
- Update createAbortHandler to clean up both flow types on tool call abort.
- Pass the abort signal to createFlow in the returnOnOAuth path.
- Simplify handleOAuthRequired to always cancel existing flows and start fresh; this ensures the user always gets a new OAuth URL instead of waiting for stale flows.

* fix(agents): handle 'new' conversationId and improve abort reliability
- Treat 'new' as a placeholder that needs a UUID in the request controller.
- Send the JSON response immediately, before tool loading, for a faster SSE connection.
- Use the job's abort controller instead of prelimAbortController.
- Emit errors to the stream if headers were already sent.
- Skip 'new' as a valid ID in the abort endpoint.
- Add a fallback to find active jobs by userId when conversationId is 'new'.

* fix(stream): detect early abort and prevent navigation to non-existent conversation
- Abort the controller on job completion to signal pending operations.
- Detect early abort (no content, no responseMessageId) in abortJob.
- Set conversation and responseMessage to null for early aborts.
- Add an earlyAbort flag to the final event for frontend detection.
- Remove the unused text field from the AbortResult interface.
- The frontend handles earlyAbort by staying on or navigating to the new chat.

* test(mcp): update test to expect signal parameter in createFlow

* fix(agents): include 'new' conversationId in newConvo check for title generation. When the frontend sends 'new' as the conversationId, it should still trigger title generation since it is a new conversation. Rename a boolean variable for clarity.

* fix(agents): check abort state before completeJob for title generation. completeJob now triggers the abort signal for cleanup, so we need to capture the abort state beforehand to correctly determine whether title generation should run.
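The intervalId initialization fix and consolidated cleanup described above boil down to a declaration-order issue: a `const cleanup = ...` closure that references `intervalId` must come after the `let intervalId` declaration, or invoking it during setup (e.g. on an already-aborted signal) throws a TDZ error. A hedged sketch of the pattern, with illustrative names:

```javascript
// Poll on an interval, but stop immediately when the abort signal fires.
function pollWithAbort(signal, onTick) {
  // Declared before `cleanup` so the closure can safely reference it;
  // the other way around, an immediate abort would hit
  // "Cannot access 'intervalId' before initialization".
  let intervalId;

  // Single consolidated cleanup: clears the timer and removes the listener.
  const cleanup = () => {
    if (intervalId !== undefined) clearInterval(intervalId);
    signal.removeEventListener('abort', cleanup);
  };

  if (signal.aborted) {
    // Immediate abort handling: never start the poll at all.
    cleanup();
    return cleanup;
  }

  signal.addEventListener('abort', cleanup, { once: true });
  intervalId = setInterval(onTick, 1000);
  return cleanup;
}
```

Returning `cleanup` lets the caller tear down on success paths too, so abort and normal completion share one code path.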
2025-12-19 10:12:39 -05:00
streamId,
🪐 feat: MCP OAuth 2.0 Discovery Support (#7924)

* chore: Update @modelcontextprotocol/sdk to version 1.12.3 in package.json and package-lock.json
- Bump @modelcontextprotocol/sdk to 1.12.3 to incorporate recent updates.
- Update dependencies for ajv and cross-spawn to their latest versions.
- Add ajv as a new dependency in the sdk module.
- Include json-schema-traverse as a new dependency in the sdk module.

* feat: @librechat/auth

* feat: Add crypto module exports to auth package
- Introduced a new crypto module by creating index.ts in the crypto directory.
- Updated the main index.ts of the auth package to export from the new crypto module.

* feat: Update package dependencies and build scripts for auth package
- Added @librechat/auth as a dependency in package.json and package-lock.json.
- Updated build scripts to include the auth package in both frontend and bun build processes.
- Removed unused mongoose and openid-client dependencies from package-lock.json for cleaner dependency management.

* refactor: Migrate crypto utility functions to @librechat/auth
- Replaced local crypto utility imports with the new @librechat/auth package across multiple files.
- Removed the obsolete crypto.js file and its exports.
- Updated relevant services and models to utilize the new encryption and decryption methods from @librechat/auth.

* feat: Enhance OAuth token handling and update dependencies in auth package

* chore: Remove Token model and TokenService due to restructuring of OAuth handling
- Deleted the Token.js model and TokenService.js, which were responsible for managing OAuth tokens.
- This change is part of a broader refactor to streamline OAuth token management and improve code organization.

* refactor: move imports from '@librechat/auth' to '@librechat/api' and add OAuth token handling functionality

* refactor: Simplify logger usage in MCP and FlowStateManager classes

* chore: fix imports

* feat: Add OAuth configuration schema to MCP with token exchange method support

* feat: FIRST PASS implement MCP OAuth flow with token management and error handling
- Added a new route for handling OAuth callbacks and token retrieval.
- Integrated OAuth token storage and retrieval mechanisms.
- Enhanced MCP connection to support automatic OAuth flow initiation on 401 errors.
- Implemented dynamic client registration and metadata discovery for OAuth.
- Updated MCPManager to manage OAuth tokens and handle authentication requirements.
- Introduced comprehensive logging for OAuth processes and error handling.

* refactor: Update MCPConnection and MCPManager to utilize new URL handling
- Added a `url` property to MCPConnection for better URL management.
- Refactored MCPManager to use the new `url` property instead of a deprecated method for OAuth handling.
- Changed logging from info to debug level for flow manager and token methods initialization.
- Improved comments for clarity on existing tokens and OAuth event listener setup.

* refactor: Improve connection timeout error messages in MCPConnection and MCPManager and use initTimeout for connection
- Updated the connection timeout error messages to include the duration of the timeout.
- Introduced a configurable `connectTimeout` variable in both MCPConnection and MCPManager for better flexibility.

* chore: clean up MCP OAuth token exchange handling; fix: erroneous use of flowsCache and remove verbose logs

* refactor: Update MCPManager and MCPTokenStorage to use TokenMethods for token management
- Removed direct token storage handling in MCPManager and replaced it with TokenMethods for better abstraction.
- Refactored MCPTokenStorage methods to accept parameters for token operations, enhancing flexibility and readability.
- Improved logging messages related to token persistence and retrieval processes.

* refactor: Update MCP OAuth handling to use static methods and improve flow management
- Refactored MCPOAuthHandler to utilize static methods for initiating and completing OAuth flows, enhancing clarity and reducing instance dependencies.
- Updated MCPManager to pass flowManager explicitly to OAuth handling methods, improving flexibility in flow state management.
- Enhanced comments and logging for better understanding of OAuth processes and flow state retrieval.

* refactor: Integrate token methods into createMCPTool for enhanced token management

* refactor: Change logging from info to debug level in MCPOAuthHandler for improved log management

* chore: clean up logging

* feat: first pass, auth URL from MCP OAuth flow

* chore: Improve logging format for OAuth authentication URL display

* chore: clean up mcp manager comments

* feat: add connection reconnection logic in MCPManager

* refactor: reorganize token storage handling in MCP
- Moved token storage logic from MCPManager to a new MCPTokenStorage class for better separation of concerns.
- Updated imports to reflect the new token storage structure.
- Enhanced methods for storing, retrieving, updating, and deleting OAuth tokens, improving overall token management.

* chore: update comment for SYSTEM_USER_ID in MCPManager for clarity

* feat: implement refresh token functionality in MCP
- Added refresh token handling in MCPManager to support token renewal for both app-level and user-specific connections.
- Introduced a refreshTokens function to facilitate token refresh logic.
- Enhanced MCPTokenStorage to manage client information and refresh token processes.
- Updated logging for better traceability during token operations.

* chore: clean up @librechat/auth

* feat: implement MCP server initialization in a separate service
- Added a new service to handle the initialization of MCP servers, improving code organization and readability.
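The refresh-token handling described above hinges on one decision: use the stored access token, or refresh before it expires. A hedged sketch of that decision, where the token shape (`accessToken`, `refreshToken`, `expiresAt`) and the skew window are assumptions, not MCPTokenStorage's actual schema:

```javascript
// Decide whether a stored token needs refreshing. A skew window refreshes
// slightly before expiry so in-flight requests don't race the cutoff.
function shouldRefresh(token, now = Date.now(), skewMs = 60_000) {
  if (!token) return true;
  return token.expiresAt - skewMs <= now;
}

// `refreshTokens` stands in for the commit's refreshTokens function; here it
// is any async callback that exchanges a refresh token for new credentials.
async function getAccessToken(stored, refreshTokens) {
  if (!shouldRefresh(stored)) return stored.accessToken;
  if (!stored?.refreshToken) {
    throw new Error('No refresh token available; full OAuth flow required');
  }
  const renewed = await refreshTokens(stored.refreshToken);
  return renewed.accessToken;
}
```

The same check works for app-level and user-specific connections; only the storage key differs.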
- Refactored the server startup logic to utilize the new initializeMCP function.
- Removed redundant MCP initialization code from the main server file.

* fix: don't log auth url for user connections

* feat: enhance OAuth flow with success and error handling components
- Updated OAuth callback routes to redirect to new success and error pages instead of sending status messages.
- Introduced `OAuthSuccess` and `OAuthError` components to provide user feedback during authentication.
- Added localization support for success and error messages in the translation files.
- Implemented countdown functionality in the success component for a better user experience.

* fix: refresh token handling for user connections; add missing URL and methods
- Add a standard enum for the system user ID and a helper for determining app-level vs. user-level connections.

* refactor: update token handling in MCPManager and MCPTokenStorage

* fix: improve error logging in OAuth authentication handler

* fix: concurrency issues for both login URL emission and concurrency of OAuth flows for shared flows (same user, same server, multiple calls for the same server)

* fix: properly fail shared flows for concurrent server calls and prevent duplication of tokens

* chore: remove unused auth package directory from update configuration

* ci: fix mocks in samlStrategy tests

* ci: add mcpConfig to AppService test setup

* chore: remove obsolete MCP OAuth implementation documentation

* fix: update build script for API to use correct command

* chore: bump version of @librechat/api to 1.2.4

* fix: update abort signal handling in createMCPTool function

* fix: add optional clientInfo parameter to refreshTokensFunction metadata

* refactor: replace app.locals.availableTools with getCachedTools in multiple services and controllers for improved tool management

* fix: concurrent refresh token handling issue

* refactor: add signal parameter to getUserConnection method for improved abort handling

* chore: JSDoc typing for `loadEphemeralAgent`

* refactor: update isConnectionActive method to use destructured parameters for improved readability

* feat: implement caching for MCP tools to handle app-level disconnects when loading the list of tools

* ci: fix agent test
2025-06-17 13:50:33 -04:00
});
if (derivedSignal) {
  // Clean up pending OAuth/token flows if the tool call is aborted mid-flight.
  abortHandler = createAbortHandler({ userId, serverName, toolName, flowManager });
  // `once: true` ensures the handler fires at most once and is then removed.
  derivedSignal.addEventListener('abort', abortHandler, { once: true });
}
🗝️ feat: User Provided Credentials for MCP Servers (#7980)

* 🗝️ feat: Per-User Credentials for MCP Servers
- chore: add aider to gitignore
- feat: fill custom variables to MCP server
- feat: replace placeholders with custom user MCP variables
- feat: handle MCP install/uninstall (uses pluginauths)
- feat: add MCP custom variables dialog to MCPSelect
- feat: add MCP custom variables dialog to the side panel
- feat: do not require filling MCP credentials in the tools dialog
- feat: add translation keys (en+cs) for custom MCP variables
- fix: handle LIBRECHAT_USER_ID correctly during MCP var replacement
- style: remove unused MCP translation keys
- style: fix eslint for MCP custom vars
- chore: move aider gitignore to AI section

* feat: Add Plugin Authentication Methods to data-schemas

* refactor: Replace PluginAuth model methods with new utility functions for improved code organization and maintainability

* refactor: Move IPluginAuth interface to types directory for better organization and update pluginAuth schema to use the new import

* refactor: Remove unused getUsersPluginsAuthValuesMap function and streamline PluginService.js; add new getPluginAuthMap function for improved plugin authentication handling

* chore: fix typing for optional tools property with GenericTool[] type

* chore: update librechat-data-provider version to 0.7.88

* refactor: optimize getUserMCPAuthMap function by reducing variable usage and improving server key collection logic

* refactor: streamline MCP tool creation by removing the customUserVars parameter and enhancing user-specific authentication handling to avoid closure encapsulation

* refactor: extract processSingleValue function to streamline MCP environment variable processing and enhance readability

* refactor: enhance MCP tool processing logic by simplifying conditions and improving authentication handling for custom user variables

* ci: fix action tests

* chore: fix imports, remove comments

* chore: remove non-English translations

* fix: remove newline at end of translation.json file

---------
Co-authored-by: Aleš Kůtek <kutekales@gmail.com>
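The placeholder replacement described above (filling custom user MCP variables and LIBRECHAT_USER_ID into server env values) can be sketched as follows. The `{{VAR}}` syntax and `processSingleValue` shape here are illustrative guesses, not the actual implementation:

```javascript
// Replace {{VAR}} placeholders in a single config value with the user's
// stored custom variables; {{LIBRECHAT_USER_ID}} is filled from the userId.
function processSingleValue(value, customUserVars = {}, userId) {
  return value.replace(/\{\{([A-Z0-9_]+)\}\}/g, (match, name) => {
    if (name === 'LIBRECHAT_USER_ID') return userId ?? match;
    // Leave the literal placeholder when the user never provided a value.
    return customUserVars[name] ?? match;
  });
}
```

Leaving unknown placeholders intact (rather than substituting an empty string) makes missing credentials visible to the MCP server instead of silently sending blank values.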
2025-06-19 18:27:55 -04:00
// Per-user credential values for this server, keyed by the prefixed server name.
const customUserVars =
  config?.configurable?.userMCPAuthMap?.[`${Constants.mcp_prefix}${serverName}`];
const result = await mcpManager.callTool({
serverName,
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)

* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring
- Add `tenantMcpPolicy` to `mcpSettings` in the YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`.
- Add an `MCPServerSource` type ('yaml' | 'config' | 'user') and a `source` field to `ParsedServerConfig`.
- Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector.
- Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB.

* feat: three-layer MCPServersRegistry with config cache and lazy init
- Add `configCacheRepo` as a third repository layer between the YAML cache and the DB for admin-defined config-source MCP servers.
- Implement `ensureConfigServers()`, which identifies config-override servers from the resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'`.
- Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via the `pendingConfigInits` map.
- Extend `getAllServerConfigs()` with an optional `configServers` param for a three-way merge: YAML → Config → User.
- Add a `getServerConfig()` lookup through the config cache layer.
- Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations.
- Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()`.

* feat: wire tenant context into MCP controllers, services, and cache invalidation
- Resolve config-source servers via `getAppConfig({ role, tenantId })` in the `getMCPTools()` and `getMCPServersList()` controllers.
- Pass `ensureConfigServers()` results through `getAllServerConfigs()` for a three-way merge of YAML + Config + User servers.
- Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` from ALS.
- Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers.

* feat: enforce tenantMcpPolicy on admin config mcpServers mutations
- Add a `validateMcpServerPolicy()` helper that checks mcpServers against the operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains).
- Wire validation into the `upsertConfigOverrides` and `patchConfigField` handlers; reject with 403 when the policy is violated.
- Infer the transport type from the config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http).
- Validate server domains against the policy allowlist when configured.

* revert: remove tenantMcpPolicy schema and enforcement
The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future.
- Remove tenantMcpPolicy from the mcpSettings Zod schema.
- Remove the validateMcpServerPolicy helper and TenantMcpPolicy interface.
- Remove policy enforcement from the upsertConfigOverrides and patchConfigField handlers.

* test: update test assertions for source field and config-server wiring
- Use objectContaining in the MCPServersRegistry reset test to account for the new source: 'yaml' field on CACHE-stored configs.
- Add getTenantId and ensureConfigServers mocks to MCP route tests.
- Add a getAppConfig mock to the route test's Config service mock.
- Update the getMCPSetupData assertion to expect a second options argument.
- Update getAllServerConfigs assertions for the new configServers parameter.

* fix: disconnect active connections when config-source servers are evicted
When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout.
- Return evicted server names from invalidateConfigCache().
- Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect().

* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations.
- Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so an undefined source (pre-upgrade cached configs) fails closed to restricted mode.
MAJOR fixes:
- Derive OAuth servers from the already-computed mcpConfig instead of calling getOAuthServers() separately; config-source OAuth servers are now properly detected.
- Add a parseInt radix (10) and a NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS.
- Add CONFIG_CACHE_NAMESPACE to the aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls.
- Remove the `if (role || tenantId)` guard in getMCPSetupData; config servers now always resolve regardless of tenant context.
MINOR fixes:
- Extract a resolveAllMcpConfigs() helper in the mcp controller to eliminate 3x copy-pasted config resolution boilerplate.
- Distinguish "not initialized" from real errors in clearMcpConfigCache; log actual failures instead of swallowing them.
- Remove narrative inline comments per the style guide.
- Remove the dead try/catch inside Promise.allSettled in ensureConfigServers (the inner method never throws).
- Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request.
Test updates:
- Add an ensureConfigServers mock to the registry test fixtures.
- Update getMCPSetupData assertions for inline OAuth derivation.

* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Break a circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory.
- Fix dbSourced fail-closed: use the source field when present, fall back to the legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack a source field).
MAJOR fixes:
- Add CONFIG_CACHE_NAMESPACE to the aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls.
- Add a comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation.
MINOR fixes:
- Update the MCPServerInspector test assertion for the dbSourced change.

* fix: restore getServerConfig lookup for config-source servers (NEW-1)
Add a configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.
- Populate configNameToKey in ensureSingleConfigServer.
- Clear configNameToKey in invalidateConfigCache and reset.
- Clear stale read-through cache entries after lazy init.
- Remove dead code in invalidateConfigCache (config.title, key parsing).
- Add getServerConfig tests for config-source server lookup.

* fix: eliminate the configNameToKey race via a caller-provided configServers param
Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly: no shared mutable state, no cross-tenant race.
- Add an optional configServers param to getServerConfig; when provided, return the matching config directly without any global lookup.
- Remove the configNameToKey map entirely (it was the source of the race).
- Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons).
- Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call.
- Add a cross-tenant isolation test for getServerConfig.

* fix: populate the read-through cache after config server lazy init
After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.

* fix: user-scoped getServerConfig fallback to the server-only cache key
When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.

* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race
- CRITICAL: Add a findInConfigCache fallback in getServerConfig so config-source servers remain reachable after the readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.
- MAJOR: Extract an isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).
- MAJOR: Fix a cross-process Redis race in lazyInitConfigServer: when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.
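The isUserSourced() helper extracted above, combined with the later backward-compatibility fix ("use the source field when present, fall back to the legacy dbId check when absent"), can be sketched as follows; the exact function shape in mcp/utils.ts is an assumption:

```javascript
// A server is user-sourced (restricted mode) unless it explicitly came from
// YAML or admin config. Pre-upgrade cache entries lack the source field, so
// fall back to the legacy dbId check for those.
function isUserSourced(config) {
  if (config.source !== undefined) {
    return config.source !== 'yaml' && config.source !== 'config';
  }
  // Legacy entry written before the source field existed.
  return Boolean(config.dbId);
}
```

Centralizing this replaces the five inline ternaries the commit mentions, so a future change to the source taxonomy touches one function instead of five call sites.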
- MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove the redundant .catch(() => {}) inside Promise.allSettled. Tighten the dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).

* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext
Add an optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.

* fix: stale failure stubs retry after 5 min; upsert for cross-process races
- Add CONFIG_STUB_RETRY_MS (5 min): stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race).
- Extract an upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded.
- Add a test for stale-stub retry after CONFIG_STUB_RETRY_MS.

* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup
- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so the CONFIG_STUB_RETRY_MS (5 min) window works correctly; without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired).
- Add a null guard for rawConfig in MCPManager.callTool before passing it to preProcessGraphTokens, preventing an unsafe `as` cast on undefined.
- Log the double failure in upsertConfigCache instead of silently swallowing it.
- Replace the module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in the ensureConfigServers tests.

* fix: server-only readThrough fallback only returns truthy values
Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.

* fix: remove findInConfigCache to eliminate cross-tenant config leakage
The findInConfigCache prefix scan (serverName:*) could return any tenant's config after the readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through:
1. The configServers param (callers with tenant context from ALS).
2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs).
Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found, which is correct because there is no tenant context to determine which config to return.
- Remove the findInConfigCache method and its call in getServerConfig.
- Update the server-only readThrough fallback to only return truthy values (prevents a cached undefined from short-circuiting the user-scoped DB lookup).
- Update tests to document tenant isolation behavior after cache expiry.

* style: fix import order per AGENTS.md conventions
Sort package imports shortest-to-longest and local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.
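The add-then-update upsert described above (so a second instance's successful inspection result isn't discarded, while swallowing only duplicate-key errors) follows a common pattern. A hedged sketch where the repo interface and the duplicate-key error matching are assumptions:

```javascript
// Try add(); if another process won the race (duplicate key), overwrite with
// update(). Infrastructure errors (Redis timeouts, network failures) propagate.
async function upsertConfigCache(repo, key, config) {
  try {
    await repo.add(key, config);
  } catch (err) {
    // Only duplicate-key errors are treated as "lost the race".
    if (!/already exists|duplicate/i.test(err?.message ?? '')) throw err;
    await repo.update(key, config);
  }
}
```

Narrowing the catch matters: swallowing infrastructure errors too would hide outages and, as the commit notes, invite inspection storms while Redis is down.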
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures
Thread the pre-resolved serverConfig from tool creation context into callTool, removing the dependency on the readThrough cache for config-source servers. This fixes two issues:
- Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries.
- TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired.
Changes:
- Add an optional serverConfig param to MCPManager.callTool; use the provided config directly, falling back to a getServerConfig lookup for YAML/user servers.
- Thread serverConfig from createMCPTool through the createToolInstance closure to callTool.
- Remove the readThrough write from ensureSingleConfigServer; config-source servers are only accessible via the configServers param (tenant-scoped).
- Remove the server-only readThrough fallback from getServerConfig.
- Increase the config cache hash from 8 to 16 hex chars (64-bit).
- Add isUserSourced boundary tests for all source/dbId combinations.
- Fix a double Object.keys call in the getMCPTools controller.
- Update test assertions for the new getServerConfig behavior.

* fix: cache base configs for config-server users; narrow upsertConfigCache error handling
- Refactor getAllServerConfigs to separate the base config fetch (YAML + DB) from config-server layering.
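The three-way merge order that recurs throughout this PR (YAML → Config → User DB, with later layers taking precedence) reduces to an ordered object spread. A minimal sketch; the function name is illustrative:

```javascript
// Merge server config layers: user DB servers override admin config servers,
// which override YAML servers, matching the documented precedence.
function mergeServerConfigs(yamlServers, configServers = {}, userServers = {}) {
  return { ...yamlServers, ...configServers, ...userServers };
}
```

Because JavaScript spread applies left to right, a server name defined in more than one layer keeps only the rightmost (highest-precedence) definition.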
Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users - Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages * fix: restore correct merge order and document upsert error matching - Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract) - Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message * feat: complete config-source server support across all execution paths Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions. - Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool - Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application - Add configServers param to createMCPTool and createMCPTools for reconnect path fallback - Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers - Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries - Update context.spec.ts assertions for new configServers parameter * fix: add missing mocks for config-source server dependencies in client.test.js Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations. 
* fix: update formatInstructionsForContext assertions for configServers param The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring. * fix: move configServers resolution before MCP tool loop to avoid TDZ configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a ReferenceError temporal dead zone. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution. * fix: address review findings — cache bypass, discoverServerTools gap, DRY - #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls. - #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows. - #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions. - #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern. - #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint. - #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs. 
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case - Merge duplicate @librechat/data-schemas requires in MCP.js into one - Move resolveConfigServers after module-level constants - Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys * fix: replace fragile string-match error detection with proper upsert method Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically: - InMemory: direct Map.set() - Redis: direct cache.set() - RedisAggregateKey: read-modify-write under write lock - DB: delegates to update() (DB servers use explicit add() with ACL setup) * fix: wire configServers through remaining HTTP endpoints - getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig - reinitialize route: resolve configServers before getServerConfig - auth-values route: resolve configServers before getServerConfig - getOAuthHeaders: accept configServers param, thread from callers - Update mcp.spec.js tests to mock getAllServerConfigs for GET by name * fix: thread serverConfig through getConnection for config-source servers Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup. 
* fix: thread configServers through reinit, reconnect, and tool definition paths Wire configServers through every remaining call chain that creates or reconnects MCP server connections: - reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools - reconnectServer: accepts and passes configServers to reinitMCPServer - createMCPTools/createMCPTool: pass configServers to reconnectServer - ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites - reinitialize route: passes serverConfig and configServers to reinitMCPServer * fix: address review findings — simplify merge, harden error paths, fix log labels - Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base } - Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error - Deduplicate getYamlServerNames cold-start with promise dedup pattern - Remove dead `if (!mcpConfig)` guard in getMCPSetupData - Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling - Remove misleading OAuth callback comment about readThrough cache - Move resolveConfigServers after module-level constants in MCP.js * fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label - Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working - Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers - Use source field instead of dbId for storageLocation derivation - Fix remaining hardcoded "App" in reset() leaderCheck message * fix: persist oauthHeaders in flow state 
for config-source OAuth servers The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers. * fix: update tests for getMCPSetupData null guard removal and ToolService mock - MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object) - MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks - ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers) * fix: address review findings — DRY, naming, logging, dead code, defensive guards - #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll() - #2: Add warning log when oauthHeaders absent from OAuth callback flow state - #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing - #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused - #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped - #6: Remove dead 'MCP config not found' error handlers from routes - #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache - #8: Remove logger.error from withTimeout to prevent double-logging timeouts - #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message - #12: Use spread instead of mutation in addServer for immutability consistency - Add upsert mock to ensureConfigServers.test.ts 
DB mock - Update route tests for resolveAllMcpConfigs import change * fix: restore correct merge priority, use immutable spread, fix test mock - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData * fix: error handling, stable hashing, flatten nesting, remove dead param - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff) - Remove dead configServers param from getAppToolFunctions (never passed) - Add security rationale comment for source field in redactServerSecrets * fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties. Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context. 
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config'). * chore: imports/fields ordering * fix: address review findings — error handling, targeted lookup, test gaps - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists. - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup. - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers. - Update getAllServerConfigs JSDoc to document disjoint-key design. - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback). - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation. * fix: add getOAuthReconnectionManager mock to OAuth callback tests * chore: imports ordering
2026-03-28 10:36:43 -04:00
serverConfig: capturedServerConfig,
toolName,
provider,
toolArguments,
options: {
🤖 refactor: Improve Agents Memory Usage, Bump Keyv, Grok 3 (#6850)

* chore: remove unused redis file
* chore: bump keyv dependencies, and update related imports
* refactor: Implement IoRedis client for rate limiting across middleware, as node-redis via keyv not compatible
* fix: Set max listeners to expected amount
* WIP: memory improvements
* refactor: Simplify getAbortData assignment in createAbortController
* refactor: Update getAbortData to use WeakRef for content management
* WIP: memory improvements in agent chat requests
* refactor: Enhance memory management with finalization registry and cleanup functions
* refactor: Simplify domainParser calls by removing unnecessary request parameter
* refactor: Update parameter types for action tools and agent loading functions to use minimal configs
* refactor: Simplify domainParser tests by removing unnecessary request parameter
* refactor: Simplify domainParser call by removing unnecessary request parameter
* refactor: Enhance client disposal by nullifying additional properties to improve memory management
* refactor: Improve title generation by adding abort controller and timeout handling, consolidate request cleanup
* refactor: Update checkIdleConnections to skip current user when checking for idle connections if passed
* refactor: Update createMCPTool to derive userId from config and handle abort signals
* refactor: Introduce createTokenCounter function and update tokenCounter usage; enhance disposeClient to reset Graph values
* refactor: Update getMCPManager to accept userId parameter for improved idle connection handling
* refactor: Extract logToolError function for improved error handling in AgentClient
* refactor: Update disposeClient to clear handlerRegistry and graphRunnable references in client.run
* refactor: Extract createHandleNewToken function to streamline token handling in initializeClient
* chore: bump @librechat/agents
* refactor: Improve timeout handling in addTitle function for better error management
* refactor: Introduce createFetch instead of using class method
* refactor: Enhance client disposal and request data handling in AskController and EditController
* refactor: Update import statements for AnthropicClient and OpenAIClient to use specific paths
* refactor: Use WeakRef for response handling in SplitStreamHandler to prevent memory leaks
* refactor: Simplify client disposal and rename getReqData to processReqData in AskController and EditController
* refactor: Improve logging structure and parameter handling in OpenAIClient
* refactor: Remove unused GraphEvents and improve stream event handling in AnthropicClient and OpenAIClient
* refactor: Simplify client initialization in AskController and EditController
* refactor: Remove unused mock functions and implement in-memory store for KeyvMongo
* chore: Update dependencies in package-lock.json to latest versions
* refactor: Await token usage recording in OpenAIClient to ensure proper async handling
* refactor: Remove handleAbort route from multiple endpoints and enhance client disposal logic
* refactor: Enhance abort controller logic by managing abortKey more effectively
* refactor: Add newConversation handling in useEventHandlers for improved conversation management
* fix: dropparams
* refactor: Use optional chaining for safer access to request properties in BaseClient
* refactor: Move client disposal and request data processing logic to cleanup module for better organization
* refactor: Remove aborted request check from addTitle function for cleaner logic
* feat: Add Grok 3 model pricing and update tests for new models
* chore: Remove trace warnings and inspect flags from backend start script used for debugging
* refactor: Replace user identifier handling with userId for consistency across controllers, use UserId in clientRegistry
* refactor: Enhance client disposal logic to prevent memory leaks by clearing additional references
* chore: Update @librechat/agents to version 2.4.14 in package.json and package-lock.json
2025-04-12 18:46:36 -04:00
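The commit above mentions holding the response behind a WeakRef in SplitStreamHandler to prevent memory leaks. A minimal sketch of the pattern (the class and field names here are illustrative, not LibreChat's actual handler): a WeakRef lets the handler use the response while it is alive without pinning it in memory after disposal.

```javascript
// Illustrative handler: keeps only a weak reference to the response object,
// so client disposal (dropping the last strong reference) lets the GC
// reclaim it even if this handler is still reachable.
class SplitStreamHandlerLike {
  constructor(response) {
    this.responseRef = new WeakRef(response);
  }
  write(chunk) {
    const res = this.responseRef.deref();
    if (!res) {
      return false; // response already disposed/collected — drop the chunk
    }
    res.chunks.push(chunk);
    return true;
  }
}
```

While a strong reference to the response exists elsewhere, `deref()` returns it and writes proceed normally; the weak link only matters at disposal time.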
signal: derivedSignal,
},
🗝️ feat: User Provided Credentials for MCP Servers (#7980)

* 🗝️ feat: Per-User Credentials for MCP Servers
chore: add aider to gitignore
feat: fill custom variables to MCP server
feat: replace placeholders with custom user MCP variables
feat: handle MCP install/uninstall (uses pluginauths)
feat: add MCP custom variables dialog to MCPSelect
feat: add MCP custom variables dialog to the side panel
feat: do not require to fill MCP credentials for in tools dialog
feat: add translations keys (en+cs) for custom MCP variables
fix: handle LIBRECHAT_USER_ID correctly during MCP var replacement
style: remove unused MCP translation keys
style: fix eslint for MCP custom vars
chore: move aider gitignore to AI section
* feat: Add Plugin Authentication Methods to data-schemas
* refactor: Replace PluginAuth model methods with new utility functions for improved code organization and maintainability
* refactor: Move IPluginAuth interface to types directory for better organization and update pluginAuth schema to use the new import
* refactor: Remove unused getUsersPluginsAuthValuesMap function and streamline PluginService.js; add new getPluginAuthMap function for improved plugin authentication handling
* chore: fix typing for optional tools property with GenericTool[] type
* chore: update librechat-data-provider version to 0.7.88
* refactor: optimize getUserMCPAuthMap function by reducing variable usage and improving server key collection logic
* refactor: streamline MCP tool creation by removing customUserVars parameter and enhancing user-specific authentication handling to avoid closure encapsulation
* refactor: extract processSingleValue function to streamline MCP environment variable processing and enhance readability
* refactor: enhance MCP tool processing logic by simplifying conditions and improving authentication handling for custom user variables
* ci: fix action tests
* chore: fix imports, remove comments
* chore: remove non-english translations
* fix: remove newline at end of translation.json file
---------
Co-authored-by: Aleš Kůtek <kutekales@gmail.com>
2025-06-19 18:27:55 -04:00
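The commits above describe replacing placeholders in MCP server values with per-user variables, including special handling for LIBRECHAT_USER_ID. A hedged sketch of what a processSingleValue-style substitution could look like (the signature is assumed, not LibreChat's actual API): known placeholders are swapped in, unknown ones are left untouched.

```javascript
// Illustrative placeholder substitution for a single MCP env/header value:
// {{LIBRECHAT_USER_ID}} resolves to the current user's id, any other
// {{NAME}} resolves from the user's saved custom variables, and
// unrecognized placeholders pass through unchanged.
function processSingleValue(value, { userId, customUserVars = {} }) {
  return value.replace(/\{\{([A-Z0-9_]+)\}\}/g, (match, name) => {
    if (name === 'LIBRECHAT_USER_ID') {
      return String(userId);
    }
    if (name in customUserVars) {
      return String(customUserVars[name]);
    }
    return match; // leave unknown placeholders as-is
  });
}
```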
user: config?.configurable?.user,
🏷️ feat: Request Placeholders for Custom Endpoint & MCP Headers (#9095)

* feat: Add conversation ID support to custom endpoint headers
- Add LIBRECHAT_CONVERSATION_ID to customUserVars when provided
- Pass conversation ID to header resolution for dynamic headers
- Add comprehensive test coverage
Enables custom endpoints to access conversation context using {{LIBRECHAT_CONVERSATION_ID}} placeholder.
* fix: filter out unresolved placeholders from headers (thanks @MrunmayS)
* feat: add support for request body placeholders in custom endpoint headers
- Add {{LIBRECHAT_BODY_*}} placeholders for conversationId, parentMessageId, messageId
- Update tests to reflect new body placeholder functionality
* refactor resolveHeaders
* style: minor styling cleanup
* fix: type error in unit test
* feat: add body to other endpoints
* feat: add body for mcp tool calls
* chore: remove changes that unnecessarily increase scope after clarification of requirements
* refactor: move http.ts to packages/api and have RequestBody intersect with Express request body
* refactor: processMCPEnv now uses single object argument pattern
* refactor: update processMCPEnv to use 'options' parameter and align types across MCP connection classes
* feat: enhance MCP connection handling with dynamic request headers to pass request body fields
--------- 
Co-authored-by: Gopal Sharma <gopalsharma@gopal.sharma1>
Co-authored-by: s10gopal <36487439+s10gopal@users.noreply.github.com>
Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-08-16 20:45:55 -04:00
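Two behaviors from the commits above — substituting body-derived variables into header templates and filtering out headers whose placeholders never resolved — can be sketched together. This is a hedged illustration, not the actual resolveHeaders signature; the variable names are examples:

```javascript
// Illustrative header resolution: substitute known {{NAME}} variables from a
// resolved-variables map, then drop any header whose value still contains an
// unresolved placeholder so it never reaches the upstream request.
function resolveHeadersSketch(headers, vars = {}) {
  const resolved = {};
  for (const [name, template] of Object.entries(headers)) {
    const value = template.replace(/\{\{([A-Z0-9_]+)\}\}/g, (match, key) =>
      vars[key] != null ? String(vars[key]) : match,
    );
    if (!/\{\{[A-Z0-9_]+\}\}/.test(value)) {
      resolved[name] = value; // keep only fully-resolved headers
    }
  }
  return resolved;
}
```

Dropping rather than passing through partially-resolved headers matches the "filter out unresolved placeholders" fix: a literal `{{...}}` token is never a useful header value for an upstream server.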
requestBody: config?.configurable?.requestBody,
🗝️ feat: User Provided Credentials for MCP Servers (#7980)
2025-06-19 18:27:55 -04:00
customUserVars,
🪐 feat: MCP OAuth 2.0 Discovery Support (#7924)

* chore: Update @modelcontextprotocol/sdk to version 1.12.3 in package.json and package-lock.json
- Bump version of @modelcontextprotocol/sdk to 1.12.3 to incorporate recent updates.
- Update dependencies for ajv and cross-spawn to their latest versions.
- Add ajv as a new dependency in the sdk module.
- Include json-schema-traverse as a new dependency in the sdk module.
* feat: @librechat/auth
* feat: Add crypto module exports to auth package
- Introduced a new crypto module by creating index.ts in the crypto directory.
- Updated the main index.ts of the auth package to export from the new crypto module.
* feat: Update package dependencies and build scripts for auth package
- Added @librechat/auth as a dependency in package.json and package-lock.json.
- Updated build scripts to include the auth package in both frontend and bun build processes.
- Removed unused mongoose and openid-client dependencies from package-lock.json for cleaner dependency management.
* refactor: Migrate crypto utility functions to @librechat/auth
- Replaced local crypto utility imports with the new @librechat/auth package across multiple files.
- Removed the obsolete crypto.js file and its exports.
- Updated relevant services and models to utilize the new encryption and decryption methods from @librechat/auth.
* feat: Enhance OAuth token handling and update dependencies in auth package
* chore: Remove Token model and TokenService due to restructuring of OAuth handling
- Deleted the Token.js model and TokenService.js, which were responsible for managing OAuth tokens.
- This change is part of a broader refactor to streamline OAuth token management and improve code organization.
* refactor: imports from '@librechat/auth' to '@librechat/api' and add OAuth token handling functionality
* refactor: Simplify logger usage in MCP and FlowStateManager classes
* chore: fix imports
* feat: Add OAuth configuration schema to MCP with token exchange method support
* feat: FIRST PASS Implement MCP OAuth flow with token management and error handling
- Added a new route for handling OAuth callbacks and token retrieval.
- Integrated OAuth token storage and retrieval mechanisms.
- Enhanced MCP connection to support automatic OAuth flow initiation on 401 errors.
- Implemented dynamic client registration and metadata discovery for OAuth.
- Updated MCPManager to manage OAuth tokens and handle authentication requirements.
- Introduced comprehensive logging for OAuth processes and error handling.
* refactor: Update MCPConnection and MCPManager to utilize new URL handling
- Added a `url` property to MCPConnection for better URL management.
- Refactored MCPManager to use the new `url` property instead of a deprecated method for OAuth handling.
- Changed logging from info to debug level for flow manager and token methods initialization.
- Improved comments for clarity on existing tokens and OAuth event listener setup.
* refactor: Improve connection timeout error messages in MCPConnection and MCPManager and use initTimeout for connection
- Updated the connection timeout error messages to include the duration of the timeout.
- Introduced a configurable `connectTimeout` variable in both MCPConnection and MCPManager for better flexibility.
* chore: cleanup MCP OAuth Token exchange handling; fix: erroneous use of flowsCache and remove verbose logs
* refactor: Update MCPManager and MCPTokenStorage to use TokenMethods for token management
- Removed direct token storage handling in MCPManager and replaced it with TokenMethods for better abstraction.
- Refactored MCPTokenStorage methods to accept parameters for token operations, enhancing flexibility and readability.
- Improved logging messages related to token persistence and retrieval processes.
* refactor: Update MCP OAuth handling to use static methods and improve flow management
- Refactored MCPOAuthHandler to utilize static methods for initiating and completing OAuth flows, enhancing clarity and reducing instance dependencies.
- Updated MCPManager to pass flowManager explicitly to OAuth handling methods, improving flexibility in flow state management.
- Enhanced comments and logging for better understanding of OAuth processes and flow state retrieval.
* refactor: Integrate token methods into createMCPTool for enhanced token management
* refactor: Change logging from info to debug level in MCPOAuthHandler for improved log management
* chore: clean up logging
* feat: first pass, auth URL from MCP OAuth flow
* chore: Improve logging format for OAuth authentication URL display
* chore: cleanup mcp manager comments
* feat: add connection reconnection logic in MCPManager
* refactor: reorganize token storage handling in MCP
- Moved token storage logic from MCPManager to a new MCPTokenStorage class for better separation of concerns.
- Updated imports to reflect the new token storage structure.
- Enhanced methods for storing, retrieving, updating, and deleting OAuth tokens, improving overall token management.
* chore: update comment for SYSTEM_USER_ID in MCPManager for clarity
* feat: implement refresh token functionality in MCP
- Added refresh token handling in MCPManager to support token renewal for both app-level and user-specific connections.
- Introduced a refreshTokens function to facilitate token refresh logic.
- Enhanced MCPTokenStorage to manage client information and refresh token processes.
- Updated logging for better traceability during token operations.
* chore: cleanup @librechat/auth
* feat: implement MCP server initialization in a separate service
- Added a new service to handle the initialization of MCP servers, improving code organization and readability.
- Refactored the server startup logic to utilize the new initializeMCP function.
- Removed redundant MCP initialization code from the main server file.
* fix: don't log auth url for user connections
* feat: enhance OAuth flow with success and error handling components
- Updated OAuth callback routes to redirect to new success and error pages instead of sending status messages.
- Introduced `OAuthSuccess` and `OAuthError` components to provide user feedback during authentication.
- Added localization support for success and error messages in the translation files.
- Implemented countdown functionality in the success component for a better user experience.
* fix: refresh token handling for user connections, add missing URL and methods
- add standard enum for system user id and helper for determining app-level vs. user-level connections
* refactor: update token handling in MCPManager and MCPTokenStorage
* fix: improve error logging in OAuth authentication handler
* fix: concurrency issues for both login url emission and concurrency of oauth flows for shared flows (same user, same server, multiple calls for same server)
* fix: properly fail shared flows for concurrent server calls and prevent duplication of tokens
* chore: remove unused auth package directory from update configuration
* ci: fix mocks in samlStrategy tests
* ci: add mcpConfig to AppService test setup
* chore: remove obsolete MCP OAuth implementation documentation
* fix: update build script for API to use correct command
* chore: bump version of @librechat/api to 1.2.4
* fix: update abort signal handling in createMCPTool function
* fix: add optional clientInfo parameter to refreshTokensFunction metadata
* refactor: replace app.locals.availableTools with getCachedTools in multiple services and controllers for improved tool management
* fix: concurrent refresh token handling issue
* refactor: add signal parameter to getUserConnection method for improved abort handling
* chore: JSDoc typing for `loadEphemeralAgent`
* refactor: update isConnectionActive method to use destructured parameters for improved readability
* feat: implement caching for MCP tools to handle app-level disconnects for loading list of tools
* ci: fix agent test
2025-06-17 13:50:33 -04:00
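The OAuth commits above call out deduplicating concurrent flows for the same user and server, so multiple simultaneous tool calls share one login flow instead of minting duplicate tokens. The core pattern can be sketched as a promise cache keyed by flow id (this is an illustrative helper, not the actual FlowStateManager API):

```javascript
// Illustrative shared-flow deduplication: concurrent run() calls with the
// same flowId get the same in-flight promise; the entry is cleared once the
// flow settles so a later attempt can retry.
class FlowDedup {
  constructor() {
    this.flows = new Map();
  }
  run(flowId, startFlow) {
    if (this.flows.has(flowId)) {
      return this.flows.get(flowId); // join the in-flight flow
    }
    const flow = Promise.resolve()
      .then(startFlow)
      .finally(() => this.flows.delete(flowId));
    this.flows.set(flowId, flow);
    return flow;
  }
}
```

Note that rejections propagate to every joined caller, which matches the "properly fail shared flows for concurrent server calls" fix: one failed login fails all waiters rather than leaving some hanging.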
flowManager,
tokenMethods: {
findToken,
createToken,
updateToken,
},
oauthStart,
oauthEnd,
graphTokenResolver: getGraphApiToken,
});
🔧 feat: Initial MCP Support (Tools) (#5015)

* 📝 chore: Add comment to clarify purpose of check_updates.sh script
* feat: mcp package
* feat: add librechat-mcp package and update dependencies
* feat: refactor MCPConnectionSingleton to handle transport initialization and connection management
* feat: change private methods to public in MCPConnectionSingleton for improved accessibility
* feat: filesystem demo
* chore: everything demo and move everything under mcp workspace
* chore: move ts-node to mcp workspace
* feat: mcp examples
* feat: working sse MCP example
* refactor: rename MCPConnectionSingleton to MCPConnection for clarity
* refactor: replace MCPConnectionSingleton with MCPConnection for consistency
* refactor: manager/connections
* refactor: update MCPConnection to use type definitions from mcp types
* refactor: update MCPManager to use winston logger and enhance server initialization
* refactor: share logger between connections and manager
* refactor: add schema definitions and update MCPManager to accept logger parameter
* feat: map available MCP tools
* feat: load manifest tools
* feat: add MCP tools delimiter constant and update plugin key generation
* feat: call MCP tools
* feat: update librechat-data-provider version to 0.7.63 and enhance StdioOptionsSchema with additional properties
* refactor: simplify typing
* chore: update types/packages
* feat: MCP Tool Content parsing
* chore: update dependencies and improve package configurations
* feat: add 'mcp' directory to package and update configurations
* refactor: return CONTENT_AND_ARTIFACT format for MCP callTool
* chore: bump @librechat/agents
* WIP: MCP artifacts
* chore: bump @librechat/agents to v1.8.7
* fix: ensure filename has extension when saving base64 image
* fix: move base64 buffer conversion before filename extension check
* chore: update backend review workflow to install MCP package
* fix: use correct `mime` method
* fix: enhance file metadata with message and tool call IDs in image saving process
* fix: refactor ToolCall component to handle MCP tool calls and improve domain extraction
* fix: update ToolItem component for default isInstalled value and improve localization in ToolSelectDialog
* fix: update ToolItem component to use consistent text color for tool description
* style: add theming to ToolSelectDialog
* fix: improve domain extraction logic in ToolCall component
* refactor: conversation item theming, fix rename UI bug, optimize props, add missing types
* feat: enhance MCP options schema with base options (iconPath to start) and make transport type optional, infer based on other option fields
* fix: improve reconnection logic with parallel init and exponential backoff and enhance transport debug logging
* refactor: improve logging format
* refactor: improve logging of available tools by displaying tool names
* refactor: improve reconnection/connection logic
* feat: add MCP package build process to Dockerfile
* feat: add fallback icon for tools without an image in ToolItem component
* feat: Assistants Support for MCP Tools
* fix(build): configure rollup to use output.dir for dynamic imports
* chore: update @librechat/agents to version 1.8.8 and add @langchain/anthropic dependency
* fix: update CONFIG_VERSION to 1.2.0
2024-12-17 13:12:57 -05:00
      if (isAssistantsEndpoint(provider) && Array.isArray(result)) {
        return result[0];
      }
      return result;
    } catch (error) {
      logger.error(
        `[MCP][${serverName}][${toolName}][User: ${userId}] Error calling MCP tool:`,
        error,
      );
      /** If this is an OAuth error, surface a helpful message instead of the raw failure */
      const isOAuthError =
        error.message?.includes('401') ||
        error.message?.includes('OAuth') ||
        error.message?.includes('authentication') ||
        error.message?.includes('Non-200 status code (401)');
      if (isOAuthError) {
        throw new Error(
          `[MCP][${serverName}][${toolName}] OAuth authentication required. Please check the server logs for the authentication URL.`,
        );
      }
      throw new Error(
        `[MCP][${serverName}][${toolName}] tool call failed${error?.message ? `: ${error?.message}` : '.'}`,
      );
    } finally {
      // Clean up the abort handler to prevent memory leaks
      if (abortHandler && derivedSignal) {
        derivedSignal.removeEventListener('abort', abortHandler);
      }
    }
  };

  const toolInstance = tool(_call, {
    schema,
    name: normalizedToolKey,
🔧 feat: Initial MCP Support (Tools) (#5015) * 📝 chore: Add comment to clarify purpose of check_updates.sh script * feat: mcp package * feat: add librechat-mcp package and update dependencies * feat: refactor MCPConnectionSingleton to handle transport initialization and connection management * feat: change private methods to public in MCPConnectionSingleton for improved accessibility * feat: filesystem demo * chore: everything demo and move everything under mcp workspace * chore: move ts-node to mcp workspace * feat: mcp examples * feat: working sse MCP example * refactor: rename MCPConnectionSingleton to MCPConnection for clarity * refactor: replace MCPConnectionSingleton with MCPConnection for consistency * refactor: manager/connections * refactor: update MCPConnection to use type definitions from mcp types * refactor: update MCPManager to use winston logger and enhance server initialization * refactor: share logger between connections and manager * refactor: add schema definitions and update MCPManager to accept logger parameter * feat: map available MCP tools * feat: load manifest tools * feat: add MCP tools delimiter constant and update plugin key generation * feat: call MCP tools * feat: update librechat-data-provider version to 0.7.63 and enhance StdioOptionsSchema with additional properties * refactor: simplify typing * chore: update types/packages * feat: MCP Tool Content parsing * chore: update dependencies and improve package configurations * feat: add 'mcp' directory to package and update configurations * refactor: return CONTENT_AND_ARTIFACT format for MCP callTool * chore: bump @librechat/agents * WIP: MCP artifacts * chore: bump @librechat/agents to v1.8.7 * fix: ensure filename has extension when saving base64 image * fix: move base64 buffer conversion before filename extension check * chore: update backend review workflow to install MCP package * fix: use correct `mime` method * fix: enhance file metadata with message and tool call IDs in image 
saving process * fix: refactor ToolCall component to handle MCP tool calls and improve domain extraction * fix: update ToolItem component for default isInstalled value and improve localization in ToolSelectDialog * fix: update ToolItem component to use consistent text color for tool description * style: add theming to ToolSelectDialog * fix: improve domain extraction logic in ToolCall component * refactor: conversation item theming, fix rename UI bug, optimize props, add missing types * feat: enhance MCP options schema with base options (iconPath to start) and make transport type optional, infer based on other option fields * fix: improve reconnection logic with parallel init and exponential backoff and enhance transport debug logging * refactor: improve logging format * refactor: improve logging of available tools by displaying tool names * refactor: improve reconnection/connection logic * feat: add MCP package build process to Dockerfile * feat: add fallback icon for tools without an image in ToolItem component * feat: Assistants Support for MCP Tools * fix(build): configure rollup to use output.dir for dynamic imports * chore: update @librechat/agents to version 1.8.8 and add @langchain/anthropic dependency * fix: update CONFIG_VERSION to 1.2.0
2024-12-17 13:12:57 -05:00
description: description || '',
responseFormat: AgentConstants.CONTENT_AND_ARTIFACT,
});
toolInstance.mcp = true;
toolInstance.mcpRawServerName = serverName;
toolInstance.mcpJsonSchema = parameters;
return toolInstance;
}
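/*
 * A minimal, self-contained sketch of the tagging pattern above, with
 * hypothetical stand-ins: `fakeMcpCall` and `createToolInstanceSketch` are
 * illustrative only and are NOT LibreChat's actual `tool()` factory or MCP
 * call path. It shows the CONTENT_AND_ARTIFACT shape — a tool call returns a
 * `[content, artifacts]` pair so text reaches the model while rich payloads
 * (images, files) travel separately — and the `mcp` / `mcpRawServerName` /
 * `mcpJsonSchema` properties stamped onto the returned instance.
 */

```javascript
// Stand-in for AgentConstants; the real value comes from @librechat/agents.
const AgentConstants = { CONTENT_AND_ARTIFACT: 'content_and_artifact' };

// Hypothetical stand-in for the MCP server call a real tool would perform.
async function fakeMcpCall(args) {
  return {
    content: [{ type: 'text', text: `echo: ${args.input}` }],
    artifacts: [],
  };
}

// Sketch of the factory: returns an async callable tagged with MCP metadata.
function createToolInstanceSketch({ serverName, description, parameters }) {
  const toolInstance = async (toolArguments) => {
    const result = await fakeMcpCall(toolArguments);
    // CONTENT_AND_ARTIFACT: tuple of model-visible content and side artifacts
    return [result.content, result.artifacts];
  };
  toolInstance.description = description || '';
  toolInstance.responseFormat = AgentConstants.CONTENT_AND_ARTIFACT;
  toolInstance.mcp = true;
  toolInstance.mcpRawServerName = serverName;
  toolInstance.mcpJsonSchema = parameters;
  return toolInstance;
}

const echoTool = createToolInstanceSketch({
  serverName: 'everything',
  description: 'Echo tool',
  parameters: { type: 'object', properties: { input: { type: 'string' } } },
});
echoTool({ input: 'hi' }).then(([content]) => console.log(content[0].text)); // prints "echo: hi"
```

Because the metadata lives as plain properties on the function object, downstream code can branch on `instance.mcp` without importing MCP types.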
/**
* Get MCP setup data including config, connections, and OAuth servers.
* Resolves config-source servers from admin Config overrides when tenant context is available.
* @param {string} userId - The user ID
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)

* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring
  - Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`
  - Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig`
  - Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector
  - Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB

* feat: three-layer MCPServersRegistry with config cache and lazy init
  - Add `configCacheRepo` as third repository layer between YAML cache and DB for admin-defined config-source MCP servers
  - Implement `ensureConfigServers()` that identifies config-override servers from resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'`
  - Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via `pendingConfigInits` map
  - Extend `getAllServerConfigs()` with optional `configServers` param for three-way merge: YAML → Config → User
  - Add `getServerConfig()` lookup through config cache layer
  - Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations
  - Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()`

* feat: wire tenant context into MCP controllers, services, and cache invalidation
  - Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers
  - Pass `ensureConfigServers()` results through `getAllServerConfigs()` for three-way merge of YAML + Config + User servers
  - Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` from ALS
  - Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers

* feat: enforce tenantMcpPolicy on admin config mcpServers mutations
  - Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains)
  - Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated
  - Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http)
  - Validate server domains against policy allowlist when configured

* revert: remove tenantMcpPolicy schema and enforcement
  The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future.
  - Remove tenantMcpPolicy from mcpSettings Zod schema
  - Remove validateMcpServerPolicy helper and TenantMcpPolicy interface
  - Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers

* test: update test assertions for source field and config-server wiring
  - Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs
  - Add getTenantId and ensureConfigServers mocks to MCP route tests
  - Add getAppConfig mock to route test Config service mock
  - Update getMCPSetupData assertion to expect second options argument
  - Update getAllServerConfigs assertions for new configServers parameter

* fix: disconnect active connections when config-source servers are evicted
  When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout.
  - Return evicted server names from invalidateConfigCache()
  - Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect()

* fix: address code review findings (CRITICAL, MAJOR, MINOR)
  CRITICAL fixes:
  - Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations
  - Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode
  MAJOR fixes:
  - Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected
  - Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS
  - Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
  - Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context
  MINOR fixes:
  - Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate
  - Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing
  - Remove narrative inline comments per style guide
  - Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws)
  - Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request
  Test updates:
  - Add ensureConfigServers mock to registry test fixtures
  - Update getMCPSetupData assertions for inline OAuth derivation

* fix: address code review findings (CRITICAL, MAJOR, MINOR)
  CRITICAL fixes:
  - Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory
  - Fix dbSourced fail-closed: use source field when present, fall back to legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field)
  MAJOR fixes:
  - Add CONFIG_CACHE_NAMESPACE to aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
  - Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation
  MINOR fixes:
  - Update MCPServerInspector test assertion for dbSourced change

* fix: restore getServerConfig lookup for config-source servers (NEW-1)
  Add configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.
  - Populate configNameToKey in ensureSingleConfigServer
  - Clear configNameToKey in invalidateConfigCache and reset
  - Clear stale read-through cache entries after lazy init
  - Remove dead code in invalidateConfigCache (config.title, key parsing)
  - Add getServerConfig tests for config-source server lookup

* fix: eliminate configNameToKey race via caller-provided configServers param
  Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race.
  - Add optional configServers param to getServerConfig; when provided, returns matching config directly without any global lookup
  - Remove configNameToKey map entirely (was the source of the race)
  - Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons)
  - Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call
  - Add cross-tenant isolation test for getServerConfig

* fix: populate read-through cache after config server lazy init
  After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.

* fix: user-scoped getServerConfig fallback to server-only cache key
  When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.

* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race
  CRITICAL: Add findInConfigCache fallback in getServerConfig so config-source servers remain reachable after readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.
  MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).
  MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.
  MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove redundant .catch(() => {}) inside Promise.allSettled. Tighten dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).

* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext
  Add optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.

* fix: stale failure stubs retry after 5 min, upsert for cross-process races
  - Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race)
  - Extract upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded
  - Add test for stale-stub retry after CONFIG_STUB_RETRY_MS

* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup
  - Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired)
  - Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined
  - Log double-failure in upsertConfigCache instead of silently swallowing
  - Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests

* fix: server-only readThrough fallback only returns truthy values
  Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.

* fix: remove findInConfigCache to eliminate cross-tenant config leakage
  The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through:
  1. The configServers param (callers with tenant context from ALS)
  2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)
  Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return.
  - Remove findInConfigCache method and its call in getServerConfig
  - Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup)
  - Update tests to document tenant isolation behavior after cache expiry

* style: fix import order per AGENTS.md conventions
  Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.

* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures
  Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues:
  - Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries
  - TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired
  Changes:
  - Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers
  - Thread serverConfig from createMCPTool through createToolInstance closure to callTool
  - Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped)
  - Remove server-only readThrough fallback from getServerConfig
  - Increase config cache hash from 8 to 16 hex chars (64-bit)
  - Add isUserSourced boundary tests for all source/dbId combinations
  - Fix double Object.keys call in getMCPTools controller
  - Update test assertions for new getServerConfig behavior

* fix: cache base configs for config-server users; narrow upsertConfigCache error handling
  - Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users
  - Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages

* fix: restore correct merge order and document upsert error matching
  - Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract)
  - Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message

* feat: complete config-source server support across all execution paths
  Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions.
  - Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool
  - Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application
  - Add configServers param to createMCPTool and createMCPTools for reconnect path fallback
  - Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers
  - Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries
  - Update context.spec.ts assertions for new configServers parameter

* fix: add missing mocks for config-source server dependencies in client.test.js
  Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations.

* fix: update formatInstructionsForContext assertions for configServers param
  The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring.

* fix: move configServers resolution before MCP tool loop to avoid TDZ
  configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a ReferenceError temporal dead zone. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution.

* fix: address review findings — cache bypass, discoverServerTools gap, DRY
  - #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls.
  - #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows.
  - #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions.
  - #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern.
  - #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint.
  - #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs.

* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case
  - Merge duplicate @librechat/data-schemas requires in MCP.js into one
  - Move resolveConfigServers after module-level constants
  - Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys

* fix: replace fragile string-match error detection with proper upsert method
  Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically:
  - InMemory: direct Map.set()
  - Redis: direct cache.set()
  - RedisAggregateKey: read-modify-write under write lock
  - DB: delegates to update() (DB servers use explicit add() with ACL setup)

* fix: wire configServers through remaining HTTP endpoints
  - getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
  - reinitialize route: resolve configServers before getServerConfig
  - auth-values route: resolve configServers before getServerConfig
  - getOAuthHeaders: accept configServers param, thread from callers
  - Update mcp.spec.js tests to mock getAllServerConfigs for GET by name

* fix: thread serverConfig through getConnection for config-source servers
  Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup.

* fix: thread configServers through reinit, reconnect, and tool definition paths
  Wire configServers through every remaining call chain that creates or reconnects MCP server connections:
  - reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools
  - reconnectServer: accepts and passes configServers to reinitMCPServer
  - createMCPTools/createMCPTool: pass configServers to reconnectServer
  - ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites
  - reinitialize route: passes serverConfig and configServers to reinitMCPServer

* fix: address review findings — simplify merge, harden error paths, fix log labels
  - Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base }
  - Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error
  - Deduplicate getYamlServerNames cold-start with promise dedup pattern
  - Remove dead `if (!mcpConfig)` guard in getMCPSetupData
  - Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling
  - Remove misleading OAuth callback comment about readThrough cache
  - Move resolveConfigServers after module-level constants in MCP.js

* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label
  - Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working
  - Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
  - Use source field instead of dbId for storageLocation derivation
  - Fix remaining hardcoded "App" in reset() leaderCheck message

* fix: persist oauthHeaders in flow state for config-source OAuth servers
  The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers.

* fix: update tests for getMCPSetupData null guard removal and ToolService mock
  - MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object)
  - MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks
  - ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers)

* fix: address review findings — DRY, naming, logging, dead code, defensive guards
  - #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll()
  - #2: Add warning log when oauthHeaders absent from OAuth callback flow state
  - #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing
  - #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused
  - #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped
  - #6: Remove dead 'MCP config not found' error handlers from routes
  - #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
  - #8: Remove logger.error from withTimeout to prevent double-logging timeouts
  - #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
  - #12: Use spread instead of mutation in addServer for immutability consistency
  - Add upsert mock to ensureConfigServers.test.ts DB mock
  - Update route tests for resolveAllMcpConfigs import change

* fix: restore correct merge priority, use immutable spread, fix test mock
  - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority
  - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix
  - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData

* fix: error handling, stable hashing, flatten nesting, remove dead param
  - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline
  - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order
  - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff)
  - Remove dead configServers param from getAppToolFunctions (never passed)
  - Add security rationale comment for source field in redactServerSecrets

* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision
  The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties. Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context.

* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency
  The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config').

* chore: imports/fields ordering

* fix: address review findings — error handling, targeted lookup, test gaps
  - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists.
  - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup.
  - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers.
  - Update getAllServerConfigs JSDoc to document disjoint-key design.
  - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback).
  - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation.

* fix: add getOAuthReconnectionManager mock to OAuth callback tests

* chore: imports ordering
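The merge-order fixes in the change log above all come down to object-spread precedence, where later spreads overwrite earlier keys. A minimal sketch of the documented YAML → Config → User DB priority, using illustrative names rather than the project's actual getAllServerConfigs:

```javascript
// Illustrative sketch only — names follow the change log, not the real code.
// Later spreads win, so YAML is the lowest tier and user-DB the highest.
function mergeServerConfigs(yamlConfigs, configServers, userDbConfigs) {
  return { ...yamlConfigs, ...configServers, ...userDbConfigs };
}

const merged = mergeServerConfigs(
  { github: { source: 'yaml' }, files: { source: 'yaml' } },
  { github: { source: 'config' } },
  { github: { source: 'user' } },
);
// merged.github.source === 'user'; merged.files.source === 'yaml'
```

A server defined in all three tiers resolves to its user-DB entry, while servers defined only in YAML pass through untouched; per the log, keys are normally disjoint across tiers, so overwrites are the exception rather than the rule.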
2026-03-28 10:36:43 -04:00
* @param {{ role?: string, tenantId?: string }} [options] - Optional role/tenant context
* @returns {Object} Object containing mcpConfig, appConnections, userConnections, and oauthServers
*/
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435) * feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring - Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains` - Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig` - Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector - Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB * feat: three-layer MCPServersRegistry with config cache and lazy init - Add `configCacheRepo` as third repository layer between YAML cache and DB for admin-defined config-source MCP servers - Implement `ensureConfigServers()` that identifies config-override servers from resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'` - Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via `pendingConfigInits` map - Extend `getAllServerConfigs()` with optional `configServers` param for three-way merge: YAML → Config → User - Add `getServerConfig()` lookup through config cache layer - Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations - Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()` * feat: wire tenant context into MCP controllers, services, and cache invalidation - Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers - Pass `ensureConfigServers()` results through `getAllServerConfigs()` for three-way merge of YAML + Config + User servers - Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` 
from ALS - Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers * feat: enforce tenantMcpPolicy on admin config mcpServers mutations - Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains) - Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated - Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http) - Validate server domains against policy allowlist when configured * revert: remove tenantMcpPolicy schema and enforcement The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future. - Remove tenantMcpPolicy from mcpSettings Zod schema - Remove validateMcpServerPolicy helper and TenantMcpPolicy interface - Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers * test: update test assertions for source field and config-server wiring - Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs - Add getTenantId and ensureConfigServers mocks to MCP route tests - Add getAppConfig mock to route test Config service mock - Update getMCPSetupData assertion to expect second options argument - Update getAllServerConfigs assertions for new configServers parameter * fix: disconnect active connections when config-source servers are evicted When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout. 
  - Return evicted server names from invalidateConfigCache()
  - Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect()

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

  CRITICAL fixes:
  - Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations
  - Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode

  MAJOR fixes:
  - Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected
  - Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS
  - Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
  - Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context

  MINOR fixes:
  - Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate
  - Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing
  - Remove narrative inline comments per style guide
  - Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws)
  - Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request

  Test updates:
  - Add ensureConfigServers mock to registry test fixtures
  - Update getMCPSetupData assertions for inline OAuth derivation

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

  CRITICAL fixes:
  - Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory
  - Fix dbSourced fail-closed: use source field when present, fall back to legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field)

  MAJOR fixes:
  - Add CONFIG_CACHE_NAMESPACE to aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
  - Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation

  MINOR fixes:
  - Update MCPServerInspector test assertion for dbSourced change

* fix: restore getServerConfig lookup for config-source servers (NEW-1)

  Add configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.

  - Populate configNameToKey in ensureSingleConfigServer
  - Clear configNameToKey in invalidateConfigCache and reset
  - Clear stale read-through cache entries after lazy init
  - Remove dead code in invalidateConfigCache (config.title, key parsing)
  - Add getServerConfig tests for config-source server lookup

* fix: eliminate configNameToKey race via caller-provided configServers param

  Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race.
  - Add optional configServers param to getServerConfig; when provided, returns matching config directly without any global lookup
  - Remove configNameToKey map entirely (was the source of the race)
  - Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons)
  - Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call
  - Add cross-tenant isolation test for getServerConfig

* fix: populate read-through cache after config server lazy init

  After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.

* fix: user-scoped getServerConfig fallback to server-only cache key

  When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.

* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race

  CRITICAL: Add findInConfigCache fallback in getServerConfig so config-source servers remain reachable after readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.

  MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).

  MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.

  MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove redundant .catch(() => {}) inside Promise.allSettled. Tighten dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).

* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext

  Add optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.

* fix: stale failure stubs retry after 5 min, upsert for cross-process races
  - Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race)
  - Extract upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded
  - Add test for stale-stub retry after CONFIG_STUB_RETRY_MS

* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup
  - Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired)
  - Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined
  - Log double-failure in upsertConfigCache instead of silently swallowing
  - Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests

* fix: server-only readThrough fallback only returns truthy values

  Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.

* fix: remove findInConfigCache to eliminate cross-tenant config leakage

  The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through:
  1. The configServers param (callers with tenant context from ALS)
  2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)

  Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return.

  - Remove findInConfigCache method and its call in getServerConfig
  - Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup)
  - Update tests to document tenant isolation behavior after cache expiry

* style: fix import order per AGENTS.md conventions

  Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.
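The stale-stub retry window described above (CONFIG_STUB_RETRY_MS plus the `updatedAt` stamp) reduces to one timestamp comparison. A hedged sketch of that check, with illustrative stub shapes; note how a stub missing `updatedAt` falls back to epoch and is always treated as stale, which is the bug the stamping fix addresses:

```javascript
// Sketch of the stale-stub retry check described in the commit log.
// A failure stub records updatedAt; once older than CONFIG_STUB_RETRY_MS,
// the server is re-inspected instead of staying permanently disabled.
const CONFIG_STUB_RETRY_MS = 5 * 60 * 1000; // 5 minutes, per the commit log

function isStubStale(stub, now = Date.now()) {
  // A stub without updatedAt falls back to 0 (epoch) and is always stale —
  // exactly why the fix above stamps updatedAt on every failure stub.
  return now - (stub.updatedAt ?? 0) > CONFIG_STUB_RETRY_MS;
}

const now = 10 * 60 * 1000; // fixed clock for a deterministic example
const freshStub = { initFailed: true, updatedAt: now - 1000 };
const staleStub = { initFailed: true, updatedAt: now - 6 * 60 * 1000 };
const legacyStub = { initFailed: true }; // no updatedAt → treated as epoch
// isStubStale(freshStub, now) === false
// isStubStale(staleStub, now) === true
// isStubStale(legacyStub, now) === true
```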
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures

  Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues:
  - Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries
  - TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired

  Changes:
  - Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers
  - Thread serverConfig from createMCPTool through createToolInstance closure to callTool
  - Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped)
  - Remove server-only readThrough fallback from getServerConfig
  - Increase config cache hash from 8 to 16 hex chars (64-bit)
  - Add isUserSourced boundary tests for all source/dbId combinations
  - Fix double Object.keys call in getMCPTools controller
  - Update test assertions for new getServerConfig behavior

* fix: cache base configs for config-server users; narrow upsertConfigCache error handling
  - Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users
  - Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages

* fix: restore correct merge order and document upsert error matching
  - Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract)
  - Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message

* feat: complete config-source server support across all execution paths

  Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions.

  - Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool
  - Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application
  - Add configServers param to createMCPTool and createMCPTools for reconnect path fallback
  - Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers
  - Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries
  - Update context.spec.ts assertions for new configServers parameter

* fix: add missing mocks for config-source server dependencies in client.test.js

  Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations.
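The `isUserSourced` boundary behavior referenced above can be sketched from the semantics the log settles on: the `source` field wins when present, with a legacy `dbId` fallback for pre-upgrade cached configs that lack it. This is a sketch of the described contract, not the code in `mcp/utils.ts`:

```javascript
// Sketch of the isUserSourced() helper described in the commit log:
// use the source field when present, fall back to the legacy dbId
// check for pre-upgrade cached configs that lack it.
function isUserSourced(config) {
  if (config.source != null) {
    return config.source === 'user';
  }
  return Boolean(config.dbId); // legacy fallback for pre-upgrade entries
}

// Boundary combinations, mirroring the tests mentioned above:
// isUserSourced({ source: 'yaml' })   === false
// isUserSourced({ source: 'config' }) === false
// isUserSourced({ source: 'user' })   === true
// isUserSourced({ dbId: 'abc123' })   === true  (legacy path)
// isUserSourced({})                   === false
```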
* fix: update formatInstructionsForContext assertions for configServers param

  The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring.

* fix: move configServers resolution before MCP tool loop to avoid TDZ

  configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a ReferenceError temporal dead zone. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution.

* fix: address review findings — cache bypass, discoverServerTools gap, DRY
  - #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls.
  - #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows.
  - #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions.
  - #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern.
  - #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint.
  - #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs.

* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case
  - Merge duplicate @librechat/data-schemas requires in MCP.js into one
  - Move resolveConfigServers after module-level constants
  - Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys

* fix: replace fragile string-match error detection with proper upsert method

  Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results.

  Each implementation handles add-or-update atomically:
  - InMemory: direct Map.set()
  - Redis: direct cache.set()
  - RedisAggregateKey: read-modify-write under write lock
  - DB: delegates to update() (DB servers use explicit add() with ACL setup)

* fix: wire configServers through remaining HTTP endpoints
  - getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
  - reinitialize route: resolve configServers before getServerConfig
  - auth-values route: resolve configServers before getServerConfig
  - getOAuthHeaders: accept configServers param, thread from callers
  - Update mcp.spec.js tests to mock getAllServerConfigs for GET by name

* fix: thread serverConfig through getConnection for config-source servers

  Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup.
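The add-or-update `upsert()` described above, contrasted with a throwing `add()`, can be sketched for the in-memory case (the log says it is a direct `Map.set()`). Class and error text here are illustrative, not the actual repository interface:

```javascript
// Sketch of the upsert() repository method described in the commit log.
// add() rejects existing keys; upsert() is a plain Map.set(), so a second
// process's inspection result is never discarded by an "already exists"
// error — the failure mode the string-match hack used to paper over.
class InMemoryServerConfigs {
  constructor() {
    this.configs = new Map();
  }
  add(key, config) {
    if (this.configs.has(key)) {
      throw new Error(`Server '${key}' already exists in cache`);
    }
    this.configs.set(key, config);
  }
  upsert(key, config) {
    this.configs.set(key, config); // add-or-update, no existence check
  }
  get(key) {
    return this.configs.get(key);
  }
}

const repo = new InMemoryServerConfigs();
repo.add('github', { source: 'config', version: 1 });
repo.upsert('github', { source: 'config', version: 2 }); // add() would throw here
// repo.get('github').version === 2
```

Making add-or-update a first-class repository operation moves the race handling into each backend (Map set, Redis set, read-modify-write under lock) instead of relying on parsing another implementation's error message.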
* fix: thread configServers through reinit, reconnect, and tool definition paths

  Wire configServers through every remaining call chain that creates or reconnects MCP server connections:
  - reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools
  - reconnectServer: accepts and passes configServers to reinitMCPServer
  - createMCPTools/createMCPTool: pass configServers to reconnectServer
  - ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites
  - reinitialize route: passes serverConfig and configServers to reinitMCPServer

* fix: address review findings — simplify merge, harden error paths, fix log labels
  - Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base }
  - Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error
  - Deduplicate getYamlServerNames cold-start with promise dedup pattern
  - Remove dead `if (!mcpConfig)` guard in getMCPSetupData
  - Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling
  - Remove misleading OAuth callback comment about readThrough cache
  - Move resolveConfigServers after module-level constants in MCP.js

* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label
  - Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working
  - Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
  - Use source field instead of dbId for storageLocation derivation
  - Fix remaining hardcoded "App" in reset() leaderCheck message

* fix: persist oauthHeaders in flow state for config-source OAuth servers

  The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers.

* fix: update tests for getMCPSetupData null guard removal and ToolService mock
  - MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object)
  - MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks
  - ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers)

* fix: address review findings — DRY, naming, logging, dead code, defensive guards
  - #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll()
  - #2: Add warning log when oauthHeaders absent from OAuth callback flow state
  - #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing
  - #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused
  - #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped
  - #6: Remove dead 'MCP config not found' error handlers from routes
  - #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
  - #8: Remove logger.error from withTimeout to prevent double-logging timeouts
  - #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
  - #12: Use spread instead of mutation in addServer for immutability consistency
  - Add upsert mock to ensureConfigServers.test.ts DB mock
  - Update route tests for resolveAllMcpConfigs import change

* fix: restore correct merge priority, use immutable spread, fix test mock
  - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority
  - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix
  - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData

* fix: error handling, stable hashing, flatten nesting, remove dead param
  - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline
  - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order
  - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff)
  - Remove dead configServers param from getAppToolFunctions (never passed)
  - Add security rationale comment for source field in redactServerSecrets

* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision

  The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties.

  Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context.
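The recursive key-sorting replacer fix above comes down to one property of `JSON.stringify`: an array replacer is a key allowlist at every depth, while a function replacer can re-emit each object with sorted keys without dropping anything. A sketch of the stable serialization (hashing to 16 hex chars, per the log, is elided here):

```javascript
// Sketch of the deterministic config serialization described above.
// A function replacer re-emits every non-array object with sorted keys
// at all depths, so property insertion order never changes the output —
// and, unlike an array replacer, no nested key is silently dropped.
// The real configCacheKey hashes this string; that step is omitted.
function stableStringify(value) {
  return JSON.stringify(value, (_key, val) => {
    if (val && typeof val === 'object' && !Array.isArray(val)) {
      return Object.keys(val)
        .sort()
        .reduce((sorted, k) => {
          sorted[k] = val[k];
          return sorted;
        }, {});
    }
    return val;
  });
}

const a = stableStringify({ url: 'https://x', headers: { 'X-API-Key': 'one' } });
const b = stableStringify({ headers: { 'X-API-Key': 'one' }, url: 'https://x' });
const c = stableStringify({ url: 'https://x', headers: { 'X-API-Key': 'two' } });
// a === b (top-level key order is irrelevant)
// a !== c (nested values change the serialized form, hence the cache key)
```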
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency

  The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config').

* chore: imports/fields ordering

* fix: address review findings — error handling, targeted lookup, test gaps
  - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists.
  - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup.
  - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers.
  - Update getAllServerConfigs JSDoc to document disjoint-key design.
  - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback).
  - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation.

* fix: add getOAuthReconnectionManager mock to OAuth callback tests

* chore: imports ordering
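The concurrent-init deduplication that the log attributes to `lazyInitConfigServer()` (via the `pendingConfigInits` map) follows a standard pending-promise pattern: overlapping callers share one in-flight promise instead of each triggering inspection. A hedged sketch with a stand-in `inspectServer`; the real method also applies a timeout and failure stubs:

```javascript
// Sketch of concurrent-init deduplication via a pending-promise map,
// as described for lazyInitConfigServer(). inspectServer stands in for
// the real (slow) server inspection; initCount makes dedup observable.
const pendingConfigInits = new Map();
let initCount = 0;

function inspectServer(serverName) {
  initCount += 1; // incremented synchronously so the dedup is visible below
  return Promise.resolve({ serverName, tools: [] });
}

function lazyInitConfigServer(serverName) {
  const pending = pendingConfigInits.get(serverName);
  if (pending) {
    return pending; // another caller already started this init
  }
  const promise = inspectServer(serverName).finally(() => {
    pendingConfigInits.delete(serverName); // allow later retries
  });
  pendingConfigInits.set(serverName, promise);
  return promise;
}

// Two overlapping calls for the same server share one inspection:
lazyInitConfigServer('github');
lazyInitConfigServer('github');
// initCount === 1
```

Deleting the map entry in `finally` is what distinguishes dedup from memoization: once the in-flight init settles, a later call (for example after a failure-stub retry window) starts a fresh inspection.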
2026-03-28 10:36:43 -04:00
async function getMCPSetupData(userId, options = {}) {
  const registry = getMCPServersRegistry();
  const { role, tenantId } = options;
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435) * feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring - Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains` - Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig` - Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector - Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB * feat: three-layer MCPServersRegistry with config cache and lazy init - Add `configCacheRepo` as third repository layer between YAML cache and DB for admin-defined config-source MCP servers - Implement `ensureConfigServers()` that identifies config-override servers from resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'` - Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via `pendingConfigInits` map - Extend `getAllServerConfigs()` with optional `configServers` param for three-way merge: YAML → Config → User - Add `getServerConfig()` lookup through config cache layer - Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations - Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()` * feat: wire tenant context into MCP controllers, services, and cache invalidation - Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers - Pass `ensureConfigServers()` results through `getAllServerConfigs()` for three-way merge of YAML + Config + User servers - Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` 
from ALS - Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers * feat: enforce tenantMcpPolicy on admin config mcpServers mutations - Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains) - Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated - Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http) - Validate server domains against policy allowlist when configured * revert: remove tenantMcpPolicy schema and enforcement The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future. - Remove tenantMcpPolicy from mcpSettings Zod schema - Remove validateMcpServerPolicy helper and TenantMcpPolicy interface - Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers * test: update test assertions for source field and config-server wiring - Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs - Add getTenantId and ensureConfigServers mocks to MCP route tests - Add getAppConfig mock to route test Config service mock - Update getMCPSetupData assertion to expect second options argument - Update getAllServerConfigs assertions for new configServers parameter * fix: disconnect active connections when config-source servers are evicted When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout. 
- Return evicted server names from invalidateConfigCache() - Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect() * fix: address code review findings (CRITICAL, MAJOR, MINOR) CRITICAL fixes: - Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations - Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode MAJOR fixes: - Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected - Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS - Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls - Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context MINOR fixes: - Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate - Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing - Remove narrative inline comments per style guide - Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws) - Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request Test updates: - Add ensureConfigServers mock to registry test fixtures - Update getMCPSetupData assertions for inline OAuth derivation * fix: address code review findings (CRITICAL, MAJOR, MINOR) CRITICAL fixes: - Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory - Fix dbSourced fail-closed: use source field when present, fall back to 
legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field)

MAJOR fixes:
- Add CONFIG_CACHE_NAMESPACE to aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation

MINOR fixes:
- Update MCPServerInspector test assertion for dbSourced change

* fix: restore getServerConfig lookup for config-source servers (NEW-1)

Add configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.

- Populate configNameToKey in ensureSingleConfigServer
- Clear configNameToKey in invalidateConfigCache and reset
- Clear stale read-through cache entries after lazy init
- Remove dead code in invalidateConfigCache (config.title, key parsing)
- Add getServerConfig tests for config-source server lookup

* fix: eliminate configNameToKey race via caller-provided configServers param

Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race.

- Add optional configServers param to getServerConfig; when provided, returns matching config directly without any global lookup
- Remove configNameToKey map entirely (was the source of the race)
- Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons)
- Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call
- Add cross-tenant isolation test for getServerConfig

* fix: populate read-through cache after config server lazy init

After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.

* fix: user-scoped getServerConfig fallback to server-only cache key

When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.

* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race

CRITICAL: Add findInConfigCache fallback in getServerConfig so config-source servers remain reachable after readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.

MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).

MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.

MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove redundant .catch(() => {}) inside Promise.allSettled. Tighten dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).

* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext

Add optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.

* fix: stale failure stubs retry after 5 min, upsert for cross-process races

- Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race)
- Extract upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded
- Add test for stale-stub retry after CONFIG_STUB_RETRY_MS

* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup

- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired)
- Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined
- Log double-failure in upsertConfigCache instead of silently swallowing
- Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests

* fix: server-only readThrough fallback only returns truthy values

Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.

* fix: remove findInConfigCache to eliminate cross-tenant config leakage

The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through:
1. The configServers param (callers with tenant context from ALS)
2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)

Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return.

- Remove findInConfigCache method and its call in getServerConfig
- Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup)
- Update tests to document tenant isolation behavior after cache expiry

* style: fix import order per AGENTS.md conventions

Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.
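The upsertConfigCache and stale-stub behavior described above can be sketched as follows. This is a minimal illustration, not the actual LibreChat implementation: the repository shape (`add`/`update`/`get`), the `failed` stub field, and `createRepo` are assumptions; only CONFIG_STUB_RETRY_MS (5 min) and the add-then-update / updatedAt-stamping logic come from the commit messages.

```javascript
// Sketch only: repo API and stub shape are assumed for illustration.
const CONFIG_STUB_RETRY_MS = 5 * 60 * 1000; // 5 min retry window, per the commit message

function createRepo() {
  const store = new Map();
  return {
    async add(key, value) {
      // Mimics a cache whose add() rejects duplicate keys (cross-process race).
      if (store.has(key)) throw new Error(`${key} already exists in cache`);
      store.set(key, value);
    },
    async update(key, value) {
      store.set(key, value);
    },
    async get(key) {
      return store.get(key);
    },
  };
}

// Try add first; on a duplicate-key race, fall back to update so a
// concurrent process's inspection result is not silently discarded.
async function upsertConfigCache(repo, key, value) {
  try {
    await repo.add(key, value);
  } catch {
    await repo.update(key, value);
  }
}

// Re-inspect a failed server only after its failure stub has gone stale.
async function lazyInitConfigServer(repo, key, inspect, now = Date.now()) {
  const cached = await repo.get(key);
  if (cached && !cached.failed) return cached;
  if (cached?.failed && now - (cached.updatedAt ?? 0) < CONFIG_STUB_RETRY_MS) {
    return cached; // fresh failure stub: skip re-inspection
  }
  try {
    const parsed = await inspect();
    await upsertConfigCache(repo, key, { ...parsed, updatedAt: now });
    return repo.get(key);
  } catch {
    // Stamp updatedAt so the retry window works; without it the stub
    // always looks stale and every request re-inspects.
    const stub = { failed: true, updatedAt: now };
    await upsertConfigCache(repo, key, stub);
    return stub;
  }
}
```

The key property is that a transient inspection failure disables the server only for the retry window, not permanently.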
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures

Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues:
- Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries
- TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired

Changes:
- Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers
- Thread serverConfig from createMCPTool through createToolInstance closure to callTool
- Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped)
- Remove server-only readThrough fallback from getServerConfig
- Increase config cache hash from 8 to 16 hex chars (64-bit)
- Add isUserSourced boundary tests for all source/dbId combinations
- Fix double Object.keys call in getMCPTools controller
- Update test assertions for new getServerConfig behavior

* fix: cache base configs for config-server users; narrow upsertConfigCache error handling

- Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users
- Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages

* fix: restore correct merge order and document upsert error matching

- Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract)
- Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message

* feat: complete config-source server support across all execution paths

Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions.

- Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool
- Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application
- Add configServers param to createMCPTool and createMCPTools for reconnect path fallback
- Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers
- Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries
- Update context.spec.ts assertions for new configServers parameter

* fix: add missing mocks for config-source server dependencies in client.test.js

Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations.

* fix: update formatInstructionsForContext assertions for configServers param

The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring.

* fix: move configServers resolution before MCP tool loop to avoid TDZ

configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a temporal-dead-zone ReferenceError. Move the declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution.

* fix: address review findings — cache bypass, discoverServerTools gap, DRY

- #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls.
- #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows.
- #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions.
- #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern.
- #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint.
- #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs.
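The TDZ fix described above (declare and resolve configServers before the tool loop, gated on whether any MCP tools are present) can be illustrated with a small sketch. All names here (`buildToolEntries`, `resolveConfigServers`, the `mcp_` prefix) are assumed for illustration, not the actual handleTools.js code.

```javascript
// Sketch only: illustrative names, not the real LibreChat pipeline.
const mcpToolPattern = /^mcp_/;

async function buildToolEntries(tools, resolveConfigServers) {
  // FIX: declare configServers BEFORE the loop. In the buggy version,
  // `let configServers` appeared after the first loop, so reading it
  // inside the loop hit the temporal dead zone and threw a ReferenceError.
  let configServers;
  if (tools.some((t) => mcpToolPattern.test(t))) {
    // Gate the async resolution so requests with no MCP tools skip it.
    configServers = await resolveConfigServers();
  }

  const entries = [];
  for (const tool of tools) {
    entries.push(
      mcpToolPattern.test(tool)
        ? { tool, config: configServers?.[tool] }
        : { tool },
    );
  }
  return entries;
}
```

Gating on `tools.some(...)` also means the resolver is never invoked for non-MCP requests, avoiding needless DB/cache work.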
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case

- Merge duplicate @librechat/data-schemas requires in MCP.js into one
- Move resolveConfigServers after module-level constants
- Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys

* fix: replace fragile string-match error detection with proper upsert method

Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically:
- InMemory: direct Map.set()
- Redis: direct cache.set()
- RedisAggregateKey: read-modify-write under write lock
- DB: delegates to update() (DB servers use explicit add() with ACL setup)

* fix: wire configServers through remaining HTTP endpoints

- getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
- reinitialize route: resolve configServers before getServerConfig
- auth-values route: resolve configServers before getServerConfig
- getOAuthHeaders: accept configServers param, thread from callers
- Update mcp.spec.js tests to mock getAllServerConfigs for GET by name

* fix: thread serverConfig through getConnection for config-source servers

Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup.

* fix: thread configServers through reinit, reconnect, and tool definition paths

Wire configServers through every remaining call chain that creates or reconnects MCP server connections:
- reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools
- reconnectServer: accepts and passes configServers to reinitMCPServer
- createMCPTools/createMCPTool: pass configServers to reconnectServer
- ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites
- reinitialize route: passes serverConfig and configServers to reinitMCPServer

* fix: address review findings — simplify merge, harden error paths, fix log labels

- Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base }
- Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error
- Deduplicate getYamlServerNames cold-start with promise dedup pattern
- Remove dead `if (!mcpConfig)` guard in getMCPSetupData
- Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling
- Remove misleading OAuth callback comment about readThrough cache
- Move resolveConfigServers after module-level constants in MCP.js

* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label

- Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working
- Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
- Use source field instead of dbId for storageLocation derivation
- Fix remaining hardcoded "App" in reset() leaderCheck message

* fix: persist oauthHeaders in flow state for config-source OAuth servers

The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers.

* fix: update tests for getMCPSetupData null guard removal and ToolService mock

- MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object)
- MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks
- ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers)

* fix: address review findings — DRY, naming, logging, dead code, defensive guards

- #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll()
- #2: Add warning log when oauthHeaders absent from OAuth callback flow state
- #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing
- #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused
- #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped
- #6: Remove dead 'MCP config not found' error handlers from routes
- #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
- #8: Remove logger.error from withTimeout to prevent double-logging timeouts
- #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
- #12: Use spread instead of mutation in addServer for immutability consistency
- Add upsert mock to ensureConfigServers.test.ts DB mock
- Update route tests for resolveAllMcpConfigs import change

* fix: restore correct merge priority, use immutable spread, fix test mock

- getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority
- lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix
- Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData

* fix: error handling, stable hashing, flatten nesting, remove dead param

- Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline
- Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order
- Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff)
- Remove dead configServers param from getAppToolFunctions (never passed)
- Add security rationale comment for source field in redactServerSecrets

* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision

The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties.

Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context.

* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency

The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config').

* chore: imports/fields ordering

* fix: address review findings — error handling, targeted lookup, test gaps

- Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists.
- Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup.
- Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers.
- Update getAllServerConfigs JSDoc to document disjoint-key design.
- Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback).
- Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation.

* fix: add getOAuthReconnectionManager mock to OAuth callback tests

* chore: imports ordering
const appConfig = await getAppConfig({ role, tenantId, userId });
const configServers = await registry.ensureConfigServers(appConfig?.mcpConfig || {});
const mcpConfig = await registry.getAllServerConfigs(userId, configServers);
const mcpManager = getMCPManager(userId);
/** @type {Map<string, import('@librechat/api').MCPConnection>} */
let appConnections = new Map();
try {
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)

* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring

- Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`
- Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig`
- Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector
- Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB

* feat: three-layer MCPServersRegistry with config cache and lazy init

- Add `configCacheRepo` as third repository layer between YAML cache and DB for admin-defined config-source MCP servers
- Implement `ensureConfigServers()` that identifies config-override servers from resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'`
- Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via `pendingConfigInits` map
- Extend `getAllServerConfigs()` with optional `configServers` param for three-way merge: YAML → Config → User
- Add `getServerConfig()` lookup through config cache layer
- Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations
- Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()`

* feat: wire tenant context into MCP controllers, services, and cache invalidation

- Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers
- Pass `ensureConfigServers()` results through `getAllServerConfigs()` for three-way merge of YAML + Config + User servers
- Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` from ALS
- Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers

* feat: enforce tenantMcpPolicy on admin config mcpServers mutations

- Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains)
- Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated
- Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http)
- Validate server domains against policy allowlist when configured

* revert: remove tenantMcpPolicy schema and enforcement

The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future.

- Remove tenantMcpPolicy from mcpSettings Zod schema
- Remove validateMcpServerPolicy helper and TenantMcpPolicy interface
- Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers

* test: update test assertions for source field and config-server wiring

- Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs
- Add getTenantId and ensureConfigServers mocks to MCP route tests
- Add getAppConfig mock to route test Config service mock
- Update getMCPSetupData assertion to expect second options argument
- Update getAllServerConfigs assertions for new configServers parameter

* fix: disconnect active connections when config-source servers are evicted

When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout.

- Return evicted server names from invalidateConfigCache()
- Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect()

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

CRITICAL fixes:
- Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations
- Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode

MAJOR fixes:
- Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected
- Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS
- Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context

MINOR fixes:
- Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate
- Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing
- Remove narrative inline comments per style guide
- Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws)
- Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request

Test updates:
- Add ensureConfigServers mock to registry test fixtures
- Update getMCPSetupData assertions for inline OAuth derivation

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

CRITICAL fixes:
- Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory
- Fix dbSourced fail-closed: use source field when present, fall back to legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field)
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case - Merge duplicate @librechat/data-schemas requires in MCP.js into one - Move resolveConfigServers after module-level constants - Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys * fix: replace fragile string-match error detection with proper upsert method Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results. Each implementation handles add-or-update atomically: - InMemory: direct Map.set() - Redis: direct cache.set() - RedisAggregateKey: read-modify-write under write lock - DB: delegates to update() (DB servers use explicit add() with ACL setup) * fix: wire configServers through remaining HTTP endpoints - getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig - reinitialize route: resolve configServers before getServerConfig - auth-values route: resolve configServers before getServerConfig - getOAuthHeaders: accept configServers param, thread from callers - Update mcp.spec.js tests to mock getAllServerConfigs for GET by name * fix: thread serverConfig through getConnection for config-source servers Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws. Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup. 
* fix: thread configServers through reinit, reconnect, and tool definition paths Wire configServers through every remaining call chain that creates or reconnects MCP server connections: - reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools - reconnectServer: accepts and passes configServers to reinitMCPServer - createMCPTools/createMCPTool: pass configServers to reconnectServer - ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites - reinitialize route: passes serverConfig and configServers to reinitMCPServer * fix: address review findings — simplify merge, harden error paths, fix log labels - Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base } - Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error - Deduplicate getYamlServerNames cold-start with promise dedup pattern - Remove dead `if (!mcpConfig)` guard in getMCPSetupData - Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling - Remove misleading OAuth callback comment about readThrough cache - Move resolveConfigServers after module-level constants in MCP.js * fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label - Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working - Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers - Use source field instead of dbId for storageLocation derivation - Fix remaining hardcoded "App" in reset() leaderCheck message * fix: persist oauthHeaders in flow state 
for config-source OAuth servers The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers. * fix: update tests for getMCPSetupData null guard removal and ToolService mock - MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object) - MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks - ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers) * fix: address review findings — DRY, naming, logging, dead code, defensive guards - #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll() - #2: Add warning log when oauthHeaders absent from OAuth callback flow state - #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing - #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused - #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped - #6: Remove dead 'MCP config not found' error handlers from routes - #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache - #8: Remove logger.error from withTimeout to prevent double-logging timeouts - #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message - #12: Use spread instead of mutation in addServer for immutability consistency - Add upsert mock to ensureConfigServers.test.ts 
DB mock - Update route tests for resolveAllMcpConfigs import change * fix: restore correct merge priority, use immutable spread, fix test mock - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData * fix: error handling, stable hashing, flatten nesting, remove dead param - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff) - Remove dead configServers param from getAppToolFunctions (never passed) - Add security rationale comment for source field in redactServerSecrets * fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties. Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context. 
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config'). * chore: imports/fields ordering * fix: address review findings — error handling, targeted lookup, test gaps - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists. - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup. - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers. - Update getAllServerConfigs JSDoc to document disjoint-key design. - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback). - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation. * fix: add getOAuthReconnectionManager mock to OAuth callback tests * chore: imports ordering
2026-03-28 10:36:43 -04:00
// Use getLoaded() instead of getAll() to avoid forcing connection creation.
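The rationale in the comment above can be illustrated with a minimal sketch. The class below is hypothetical (the real MCPServersRegistry API differs); it only shows the trade-off between a getter that forces connection creation and one that reads what already exists.

```javascript
// Hypothetical connection registry illustrating the getLoaded()/getAll() trade-off.
class ConnectionRegistry {
  constructor(serverNames) {
    this.serverNames = serverNames;
    this.connections = new Map(); // lazily populated
  }

  // Forces a connection for every known server. Problematic for servers that
  // need per-user context (OAuth tokens, user credentials) to connect.
  async getAll() {
    for (const name of this.serverNames) {
      if (!this.connections.has(name)) {
        this.connections.set(name, await this.connect(name));
      }
    }
    return this.connections;
  }

  // Returns only connections that already exist; never triggers creation.
  getLoaded() {
    return this.connections;
  }

  async connect(name) {
    return { name, connectedAt: Date.now() }; // stand-in for a real MCP handshake
  }
}
```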
🔄 refactor: MCP Server Init and Stale Cache Handling (#10984)

* 🔧 refactor: Update MCP connection handling to improve performance and testing
* refactor: Replace getAll() with getLoaded() in MCP.js to prevent unnecessary connection creation for user-context servers.
* test: Adjust MCP.spec.js to mock getLoaded() instead of getAll() for consistency with the new implementation.
* feat: Enhance MCPServersInitializer to reset initialization flag for better handling of process restarts and stale data.
* test: Add integration tests to verify re-initialization behavior and ensure stale data is cleared when necessary.
* 🔧 refactor: Enhance cached endpoints config handling for GPT plugins
* refactor: Update MCPServersInitializer tests to use new server management methods
* refactor: Replace direct Redis server manipulation with registry.addServer and registry.getServerConfig for better abstraction and consistency.
* test: Adjust integration tests to verify server initialization and stale data handling using the updated methods.
* 🔧 refactor: Increase retry limits and delay for MCP server creation
  * Updated MAX_CREATE_RETRIES from 3 to 5 to allow for more attempts during server creation.
  * Increased RETRY_BASE_DELAY_MS from 10 to 25 milliseconds to provide a longer wait time between retries, improving stability in server initialization.
* refactor: Update MCPServersInitializer tests to utilize new registry methods
  * refactor: Replace direct access to sharedAppServers with registry.getServerConfig for improved abstraction.
  * test: Adjust tests to verify server initialization and stale data handling using the updated registry methods, ensuring consistency and clarity in the test structure.
2025-12-15 16:46:56 -05:00
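The retry constants from commit #10984 above can be sketched in a small helper. The commit gives only the two values; the exponential growth of the delay between attempts, and the helper name, are assumptions of this example.

```javascript
const MAX_CREATE_RETRIES = 5; // raised from 3 in this commit
const RETRY_BASE_DELAY_MS = 25; // raised from 10 in this commit

// Illustrative retry wrapper for server creation. Re-throws the last error
// once all attempts are exhausted; backoff doubles per attempt (assumed).
async function createWithRetries(create) {
  let lastError;
  for (let attempt = 0; attempt < MAX_CREATE_RETRIES; attempt++) {
    try {
      return await create();
    } catch (err) {
      lastError = err;
      const delay = RETRY_BASE_DELAY_MS * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```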
// getAll() creates connections for all servers, which is problematic for servers
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)

* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring

- Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`
- Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig`
- Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector
- Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB

* feat: three-layer MCPServersRegistry with config cache and lazy init

- Add `configCacheRepo` as third repository layer between YAML cache and DB for admin-defined config-source MCP servers
- Implement `ensureConfigServers()` that identifies config-override servers from resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'`
- Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via `pendingConfigInits` map
- Extend `getAllServerConfigs()` with optional `configServers` param for three-way merge: YAML → Config → User
- Add `getServerConfig()` lookup through config cache layer
- Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations
- Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()`

* feat: wire tenant context into MCP controllers, services, and cache invalidation

- Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers
- Pass `ensureConfigServers()` results through `getAllServerConfigs()` for three-way merge of YAML + Config + User servers
- Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` from ALS
- Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers

* feat: enforce tenantMcpPolicy on admin config mcpServers mutations

- Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains)
- Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated
- Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http)
- Validate server domains against policy allowlist when configured

* revert: remove tenantMcpPolicy schema and enforcement

The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future.
- Remove tenantMcpPolicy from mcpSettings Zod schema
- Remove validateMcpServerPolicy helper and TenantMcpPolicy interface
- Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers

* test: update test assertions for source field and config-server wiring

- Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs
- Add getTenantId and ensureConfigServers mocks to MCP route tests
- Add getAppConfig mock to route test Config service mock
- Update getMCPSetupData assertion to expect second options argument
- Update getAllServerConfigs assertions for new configServers parameter

* fix: disconnect active connections when config-source servers are evicted

When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout.
- Return evicted server names from invalidateConfigCache()
- Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect()

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

CRITICAL fixes:
- Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations
- Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode

MAJOR fixes:
- Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected
- Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS
- Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context

MINOR fixes:
- Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate
- Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing
- Remove narrative inline comments per style guide
- Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws)
- Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request

Test updates:
- Add ensureConfigServers mock to registry test fixtures
- Update getMCPSetupData assertions for inline OAuth derivation

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

CRITICAL fixes:
- Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory
- Fix dbSourced fail-closed: use source field when present, fall back to legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field)

MAJOR fixes:
- Add CONFIG_CACHE_NAMESPACE to aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation

MINOR fixes:
- Update MCPServerInspector test assertion for dbSourced change

* fix: restore getServerConfig lookup for config-source servers (NEW-1)

Add configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced. Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.
- Populate configNameToKey in ensureSingleConfigServer
- Clear configNameToKey in invalidateConfigCache and reset
- Clear stale read-through cache entries after lazy init
- Remove dead code in invalidateConfigCache (config.title, key parsing)
- Add getServerConfig tests for config-source server lookup

* fix: eliminate configNameToKey race via caller-provided configServers param

Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race.
- Add optional configServers param to getServerConfig; when provided, returns matching config directly without any global lookup
- Remove configNameToKey map entirely (was the source of the race)
- Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons)
- Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call
- Add cross-tenant isolation test for getServerConfig

* fix: populate read-through cache after config server lazy init

After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers. Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.

* fix: user-scoped getServerConfig fallback to server-only cache key

When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.

* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race

CRITICAL: Add findInConfigCache fallback in getServerConfig so config-source servers remain reachable after readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.

MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).

MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.
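The concurrent-init deduplication via the `pendingConfigInits` map mentioned above follows a standard in-flight-promise pattern. A minimal sketch, assuming a map keyed by server name (the real lazyInitConfigServer also handles timeouts, failure stubs, and cache writes):

```javascript
const pendingConfigInits = new Map();

// Deduplicates concurrent lazy initializations of the same server: the first
// caller starts the inspection, later callers await the same in-flight promise.
// The entry is cleared in finally() so a rejected init can be retried later
// rather than caching the failure forever.
function lazyInitConfigServer(serverName, inspect) {
  if (pendingConfigInits.has(serverName)) {
    return pendingConfigInits.get(serverName);
  }
  const promise = inspect(serverName).finally(() => {
    pendingConfigInits.delete(serverName);
  });
  pendingConfigInits.set(serverName, promise);
  return promise;
}
```

The same clear-on-settle discipline is what the later "clear rejected yamlServerNames promise" fix enforces: a memoized promise that is never cleared on rejection would pin a transient error in place permanently.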
MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove redundant .catch(() => {}) inside Promise.allSettled. Tighten dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).

* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext

Add optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths. Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.

* fix: stale failure stubs retry after 5 min, upsert for cross-process races

- Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race)
- Extract upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded
- Add test for stale-stub retry after CONFIG_STUB_RETRY_MS

* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup

- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ??
0 → epoch → always expired) - Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined - Log double-failure in upsertConfigCache instead of silently swallowing - Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests * fix: server-only readThrough fallback only returns truthy values Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup. * fix: remove findInConfigCache to eliminate cross-tenant config leakage The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation. Config-source servers are now ONLY resolvable through: 1. The configServers param (callers with tenant context from ALS) 2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs) Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return. - Remove findInConfigCache method and its call in getServerConfig - Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup) - Update tests to document tenant isolation behavior after cache expiry * style: fix import order per AGENTS.md conventions Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector. 
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues: - Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries - TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired Changes: - Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers - Thread serverConfig from createMCPTool through createToolInstance closure to callTool - Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped) - Remove server-only readThrough fallback from getServerConfig - Increase config cache hash from 8 to 16 hex chars (64-bit) - Add isUserSourced boundary tests for all source/dbId combinations - Fix double Object.keys call in getMCPTools controller - Update test assertions for new getServerConfig behavior * fix: cache base configs for config-server users; narrow upsertConfigCache error handling - Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering. 
Base configs are cached via readThroughCacheAll regardless of whether configServers is provided, eliminating uncached MongoDB queries per request for config-server users - Narrow upsertConfigCache catch to duplicate-key errors only; infrastructure errors (Redis timeouts, network failures) now propagate instead of being silently swallowed, preventing inspection storms during outages * fix: restore correct merge order and document upsert error matching - Restore YAML → Config → User DB precedence in getAllServerConfigs (user DB servers have highest precedence, matching the JSDoc contract) - Add source comment on upsertConfigCache duplicate-key detection linking to the two cache implementations that define the error message * feat: complete config-source server support across all execution paths Wire configServers through the entire agent execution pipeline so config-source MCP servers are fully functional — not just visible in listings but executable in agent sessions. - Thread configServers into handleTools.js agent tool pipeline: resolve config servers from tenant context before MCP tool iteration, pass to getServerConfig, createMCPTools, and createMCPTool - Thread configServers into agent instructions pipeline: applyContextToAgent → getMCPInstructionsForServers → formatInstructionsForContext, resolved in client.js before agent context application - Add configServers param to createMCPTool and createMCPTools for reconnect path fallback - Add source field to redactServerSecrets allowlist for client UI differentiation of server tiers - Narrow invalidateConfigCache to only clear readThroughCacheAll (merged results), preserving YAML individual-server readThrough entries - Update context.spec.ts assertions for new configServers parameter * fix: add missing mocks for config-source server dependencies in client.test.js Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added to client.js but not reflected in the test file's jest.mock declarations. 
* fix: update formatInstructionsForContext assertions for configServers param The test assertions expected formatInstructionsForContext to be called with only the server names array, but it now receives configServers as a second argument after the config-source server feature wiring. * fix: move configServers resolution before MCP tool loop to avoid TDZ configServers was declared with `let` after the first tool loop but referenced inside it via getServerConfig(), causing a ReferenceError temporal dead zone. Move declaration and resolution before the loop, using tools.some(mcpToolPattern) to gate the async resolution. * fix: address review findings — cache bypass, discoverServerTools gap, DRY - #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via readThroughCacheAll) instead of bypassing it when configServers is present. Extracts user-DB entries from cached base by diffing against YAML keys to maintain YAML → Config → User DB merge order without extra MongoDB calls. - #3: Add configServers param to ToolDiscoveryOptions and thread it through discoverServerTools → getServerConfig so config-source servers are discoverable during OAuth reconnection flows. - #6: Replace inline import() type annotations in context.ts with proper import type { ParsedServerConfig } per AGENTS.md conventions. - #7: Extract resolveConfigServers(req) helper in MCP.js and use it from handleTools.js and client.js, eliminating the duplicated 6-line config resolution pattern. - #10: Restore removed "why" comment explaining getLoaded() vs getAll() choice in getMCPSetupData — documents non-obvious correctness constraint. - #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs. 
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case

- Merge duplicate @librechat/data-schemas requires in MCP.js into one
- Move resolveConfigServers after module-level constants
- Fix getAllServerConfigs edge case where user-DB entry overriding a YAML entry with the same name was excluded from userDbConfigs; now uses reference equality check to detect DB-overwritten YAML keys

* fix: replace fragile string-match error detection with proper upsert method

Add upsert() to IServerConfigsRepositoryInterface and all implementations (InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle error message string match ('already exists in cache') in upsertConfigCache that was the only thing preventing cross-process init races from silently discarding inspection results.

Each implementation handles add-or-update atomically:
- InMemory: direct Map.set()
- Redis: direct cache.set()
- RedisAggregateKey: read-modify-write under write lock
- DB: delegates to update() (DB servers use explicit add() with ACL setup)

* fix: wire configServers through remaining HTTP endpoints

- getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
- reinitialize route: resolve configServers before getServerConfig
- auth-values route: resolve configServers before getServerConfig
- getOAuthHeaders: accept configServers param, thread from callers
- Update mcp.spec.js tests to mock getAllServerConfigs for GET by name

* fix: thread serverConfig through getConnection for config-source servers

Config-source servers exist only in configCacheRepo, not in YAML cache or DB. When callTool → getConnection → getUserConnection → getServerConfig runs without configServers, it returns undefined and throws.

Fix by threading the pre-resolved serverConfig (providedConfig) from callTool through getConnection → getUserConnection → createUserConnectionInternal, using it as a fallback before the registry lookup.
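The add-vs-upsert distinction above can be sketched with an in-memory repository. The class and method names here are illustrative; the real InMemory/Redis/DB implementations additionally handle namespacing, write locks, and ACL setup:

```javascript
// Illustrative in-memory repository sketch. add() rejects duplicates (the error
// message upsertConfigCache used to string-match); upsert() is add-or-update.
class InMemoryServerConfigsRepo {
  constructor() {
    this.store = new Map();
  }
  async add(name, config) {
    if (this.store.has(name)) {
      throw new Error(`Server "${name}" already exists in cache`);
    }
    this.store.set(name, config);
  }
  async upsert(name, config) {
    // Map.set is already add-or-update, so this is race-free in-process; the
    // Redis aggregate-key variant needs a read-modify-write under a lock.
    this.store.set(name, config);
  }
  async get(name) {
    return this.store.get(name);
  }
}

(async () => {
  const repo = new InMemoryServerConfigsRepo();
  await repo.upsert('github', { source: 'config', rev: 1 });
  await repo.upsert('github', { source: 'config', rev: 2 }); // no duplicate-key throw
  console.log((await repo.get('github')).rev); // 2
})();
```

Giving each backend its own atomic upsert is what lets the caller drop the brittle 'already exists in cache' string match entirely.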
* fix: thread configServers through reinit, reconnect, and tool definition paths

Wire configServers through every remaining call chain that creates or reconnects MCP server connections:
- reinitMCPServer: accepts serverConfig and configServers, uses them for getServerConfig fallback, getConnection, and discoverServerTools
- reconnectServer: accepts and passes configServers to reinitMCPServer
- createMCPTools/createMCPTool: pass configServers to reconnectServer
- ToolService.loadToolDefinitionsWrapper: resolves configServers from req, passes to both reinitMCPServer call sites
- reinitialize route: passes serverConfig and configServers to reinitMCPServer

* fix: address review findings — simplify merge, harden error paths, fix log labels

- Simplify getAllServerConfigs merge: replace fragile reference-equality loop with direct spread { ...yamlConfigs, ...configServers, ...base }
- Guard upsertConfigCache in lazyInitConfigServer catch block so cache failures don't mask the original inspection error
- Deduplicate getYamlServerNames cold-start with promise dedup pattern
- Remove dead `if (!mcpConfig)` guard in getMCPSetupData
- Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error messages — now uses this.namespace for correct Config/App labeling
- Remove misleading OAuth callback comment about readThrough cache
- Move resolveConfigServers after module-level constants in MCP.js

* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label

- Clear yamlServerNamesPromise on rejection so transient cache errors don't permanently prevent ensureConfigServers from working
- Skip reinspectServer for config-source servers (source: 'config') in reinitMCPServer — they lack a CACHE/DB storage location; retry is handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
- Use source field instead of dbId for storageLocation derivation
- Fix remaining hardcoded "App" in reset() leaderCheck message

* fix: persist oauthHeaders in flow state for config-source OAuth servers

The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers.

Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers.

* fix: update tests for getMCPSetupData null guard removal and ToolService mock

- MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object)
- MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks
- ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers)

* fix: address review findings — DRY, naming, logging, dead code, defensive guards

- #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll()
- #2: Add warning log when oauthHeaders absent from OAuth callback flow state
- #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing
- #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused
- #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped
- #6: Remove dead 'MCP config not found' error handlers from routes
- #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
- #8: Remove logger.error from withTimeout to prevent double-logging timeouts
- #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
- #12: Use spread instead of mutation in addServer for immutability consistency
- Add upsert mock to ensureConfigServers.test.ts DB mock
- Update route tests for resolveAllMcpConfigs import change

* fix: restore correct merge priority, use immutable spread, fix test mock

- getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority
- lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix
- Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData

* fix: error handling, stable hashing, flatten nesting, remove dead param

- Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline
- Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order
- Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff)
- Remove dead configServers param from getAppToolFunctions (never passed)
- Add security rationale comment for source field in redactServerSecrets

* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision

The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination.

Switch to a function replacer that recursively sorts keys at all depths without dropping any properties.

Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context.
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency

The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config').

* chore: imports/fields ordering

* fix: address review findings — error handling, targeted lookup, test gaps

- Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists.
- Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup.
- Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers.
- Update getAllServerConfigs JSDoc to document disjoint-key design.
- Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback).
- Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation.

* fix: add getOAuthReconnectionManager mock to OAuth callback tests

* chore: imports ordering
2026-03-28 10:36:43 -04:00
// that require user context (e.g., those with {{LIBRECHAT_USER_ID}} placeholders).
🔄 refactor: MCP Server Init and Stale Cache Handling (#10984)

* 🔧 refactor: Update MCP connection handling to improve performance and testing
* refactor: Replace getAll() with getLoaded() in MCP.js to prevent unnecessary connection creation for user-context servers.
* test: Adjust MCP.spec.js to mock getLoaded() instead of getAll() for consistency with the new implementation.
* feat: Enhance MCPServersInitializer to reset initialization flag for better handling of process restarts and stale data.
* test: Add integration tests to verify re-initialization behavior and ensure stale data is cleared when necessary.
* 🔧 refactor: Enhance cached endpoints config handling for GPT plugins
* refactor: Update MCPServersInitializer tests to use new server management methods
* refactor: Replace direct Redis server manipulation with registry.addServer and registry.getServerConfig for better abstraction and consistency.
* test: Adjust integration tests to verify server initialization and stale data handling using the updated methods.
* 🔧 refactor: Increase retry limits and delay for MCP server creation
* Updated MAX_CREATE_RETRIES from 3 to 5 to allow for more attempts during server creation.
* Increased RETRY_BASE_DELAY_MS from 10 to 25 milliseconds to provide a longer wait time between retries, improving stability in server initialization.
* refactor: Update MCPServersInitializer tests to utilize new registry methods
* refactor: Replace direct access to sharedAppServers with registry.getServerConfig for improved abstraction.
* test: Adjust tests to verify server initialization and stale data handling using the updated registry methods, ensuring consistency and clarity in the test structure.
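The retry constants above (MAX_CREATE_RETRIES = 5, RETRY_BASE_DELAY_MS = 25) suggest a conventional exponential backoff. A hedged sketch: the doubling schedule is an assumption for illustration, not lifted from MCPServersInitializer:

```javascript
const MAX_CREATE_RETRIES = 5;
const RETRY_BASE_DELAY_MS = 25;

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retries `create` up to MAX_CREATE_RETRIES times, doubling the delay between
// attempts (25, 50, 100, 200 ms between the five tries). The doubling curve is
// an illustrative assumption; only the two constants come from the commit.
async function createWithRetry(create) {
  let lastError;
  for (let attempt = 0; attempt < MAX_CREATE_RETRIES; attempt++) {
    try {
      return await create();
    } catch (error) {
      lastError = error;
      if (attempt < MAX_CREATE_RETRIES - 1) {
        await sleep(RETRY_BASE_DELAY_MS * 2 ** attempt);
      }
    }
  }
  throw lastError;
}

(async () => {
  let calls = 0;
  const result = await createWithRetry(async () => {
    calls += 1;
    if (calls < 3) throw new Error('transient init failure');
    return 'server-created';
  });
  console.log(result, calls); // server-created 3
})();
```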
2025-12-15 16:46:56 -05:00
    appConnections = (await mcpManager.appConnections?.getLoaded()) || new Map();
  } catch (error) {
    logger.error(`[MCP][User: ${userId}] Error getting app connections:`, error);
  }
  const userConnections = mcpManager.getUserConnections(userId) || new Map();
🏗️ feat: 3-Tier MCP Server Architecture with Config-Source Lazy Init (#12435)

* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring

- Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with `enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`
- Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source` field to `ParsedServerConfig`
- Change `dbSourced` determination from `!!config.dbId` to `config.source === 'user'` across MCPManager, ConnectionsRepository, UserConnectionManager, and MCPServerInspector
- Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB

* feat: three-layer MCPServersRegistry with config cache and lazy init

- Add `configCacheRepo` as third repository layer between YAML cache and DB for admin-defined config-source MCP servers
- Implement `ensureConfigServers()` that identifies config-override servers from resolved `getAppConfig()` mcpConfig, lazily inspects them, and caches parsed configs with `source: 'config'`
- Add `lazyInitConfigServer()` with timeout, stub-on-failure, and concurrent-init deduplication via `pendingConfigInits` map
- Extend `getAllServerConfigs()` with optional `configServers` param for three-way merge: YAML → Config → User
- Add `getServerConfig()` lookup through config cache layer
- Add `invalidateConfigCache()` for clearing config-source inspection results on admin config mutations
- Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on DB-stored servers in `addServer()` and `addServerStub()`

* feat: wire tenant context into MCP controllers, services, and cache invalidation

- Resolve config-source servers via `getAppConfig({ role, tenantId })` in `getMCPTools()` and `getMCPServersList()` controllers
- Pass `ensureConfigServers()` results through `getAllServerConfigs()` for three-way merge of YAML + Config + User servers
- Add tenant/role context to `getMCPSetupData()` and connection status routes via `getTenantId()` from ALS
- Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin config mutations trigger re-inspection of config-source MCP servers

* feat: enforce tenantMcpPolicy on admin config mcpServers mutations

- Add `validateMcpServerPolicy()` helper that checks mcpServers against operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant, allowedTransports, allowedDomains)
- Wire validation into `upsertConfigOverrides` and `patchConfigField` handlers — rejects with 403 when policy is violated
- Infer transport type from config shape (command → stdio, url protocol → websocket/sse, type field → streamable-http)
- Validate server domains against policy allowlist when configured

* revert: remove tenantMcpPolicy schema and enforcement

The existing admin config CRUD routes already provide the mechanism for granular MCP server prepopulation (groups, roles, users). The tenantMcpPolicy gating adds unnecessary complexity that can be revisited if needed in the future.

- Remove tenantMcpPolicy from mcpSettings Zod schema
- Remove validateMcpServerPolicy helper and TenantMcpPolicy interface
- Remove policy enforcement from upsertConfigOverrides and patchConfigField handlers

* test: update test assertions for source field and config-server wiring

- Use objectContaining in MCPServersRegistry reset test to account for new source: 'yaml' field on CACHE-stored configs
- Add getTenantId and ensureConfigServers mocks to MCP route tests
- Add getAppConfig mock to route test Config service mock
- Update getMCPSetupData assertion to expect second options argument
- Update getAllServerConfigs assertions for new configServers parameter

* fix: disconnect active connections when config-source servers are evicted

When admin config overrides change and config-source MCP servers are removed, the invalidation now proactively disconnects active connections for evicted servers instead of leaving them lingering until timeout.
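The concurrent-init deduplication via a `pendingConfigInits` map, described above, can be sketched as follows. This assumes a shared in-flight promise stored per cache key and cleared on settle; `inspect` is a stand-in for the real server inspection:

```javascript
// Deduplicates concurrent lazy inits per cache key: the first caller starts the
// inspection, later callers await the same in-flight promise. Clearing the map
// entry in finally() means a failed init can be retried later rather than the
// rejection being cached forever. `inspect` stands in for the real inspection.
const pendingConfigInits = new Map();

async function lazyInitConfigServer(key, inspect) {
  if (pendingConfigInits.has(key)) {
    return pendingConfigInits.get(key);
  }
  const promise = inspect(key).finally(() => pendingConfigInits.delete(key));
  pendingConfigInits.set(key, promise);
  return promise;
}

(async () => {
  let inspections = 0;
  const inspect = async () => {
    inspections += 1;
    return { source: 'config' };
  };
  await Promise.all([
    lazyInitConfigServer('github:abc', inspect),
    lazyInitConfigServer('github:abc', inspect),
    lazyInitConfigServer('github:abc', inspect),
  ]);
  console.log(inspections); // 1 — concurrent callers share one inspection
})();
```

The same delete-on-settle shape is what the later "clear rejected yamlServerNames promise" fix applies to the single memoized YAML-names promise: without clearing on rejection, one transient cache error would pin the rejected promise in place permanently.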
- Return evicted server names from invalidateConfigCache()
- Disconnect app-level connections for evicted servers in clearMcpConfigCache() via MCPManager.appConnections.disconnect()

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

CRITICAL fixes:
- Scope configCacheRepo keys by config content hash to prevent cross-tenant cache poisoning when two tenants define the same server name with different configurations
- Change dbSourced checks from `source === 'user'` to `source !== 'yaml' && source !== 'config'` so undefined source (pre-upgrade cached configs) fails closed to restricted mode

MAJOR fixes:
- Derive OAuth servers from already-computed mcpConfig instead of calling getOAuthServers() separately — config-source OAuth servers are now properly detected
- Add parseInt radix (10) and NaN guard with fallback to 30_000 for CONFIG_SERVER_INIT_TIMEOUT_MS
- Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Remove `if (role || tenantId)` guard in getMCPSetupData — config servers now always resolve regardless of tenant context

MINOR fixes:
- Extract resolveAllMcpConfigs() helper in mcp controller to eliminate 3x copy-pasted config resolution boilerplate
- Distinguish "not initialized" from real errors in clearMcpConfigCache — log actual failures instead of swallowing
- Remove narrative inline comments per style guide
- Remove dead try/catch inside Promise.allSettled in ensureConfigServers (inner method never throws)
- Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll() calls per request

Test updates:
- Add ensureConfigServers mock to registry test fixtures
- Update getMCPSetupData assertions for inline OAuth derivation

* fix: address code review findings (CRITICAL, MAJOR, MINOR)

CRITICAL fixes:
- Break circular dependency: move CONFIG_CACHE_NAMESPACE from MCPServersRegistry to ServerConfigsCacheFactory
- Fix dbSourced fail-closed: use source field when present, fall back to legacy dbId check when absent (backward-compatible with pre-upgrade cached configs that lack source field)

MAJOR fixes:
- Add CONFIG_CACHE_NAMESPACE to aggregate-key set in ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests) covering lazy init, stub-on-failure, cross-tenant isolation via config hash keys, concurrent deduplication, merge order, and cache invalidation

MINOR fixes:
- Update MCPServerInspector test assertion for dbSourced change

* fix: restore getServerConfig lookup for config-source servers (NEW-1)

Add configNameToKey map that indexes server name → hash-based cache key for O(1) lookup by name in getServerConfig. This restores the config cache layer that was dropped when hash-based keys were introduced.

Without this fix, config-source servers appeared in tool listings (via getAllServerConfigs) but getServerConfig returned undefined, breaking all connection and tool call paths.

- Populate configNameToKey in ensureSingleConfigServer
- Clear configNameToKey in invalidateConfigCache and reset
- Clear stale read-through cache entries after lazy init
- Remove dead code in invalidateConfigCache (config.title, key parsing)
- Add getServerConfig tests for config-source server lookup

* fix: eliminate configNameToKey race via caller-provided configServers param

Replace the process-global configNameToKey map (last-writer-wins under concurrent multi-tenant load) with a configServers parameter on getServerConfig. Callers pass the pre-resolved config servers map directly — no shared mutable state, no cross-tenant race.
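The parseInt hardening called out above (radix 10, NaN guard, 30_000 ms fallback) is small enough to sketch in full; the env var name and fallback value come from the commit, the helper name is illustrative:

```javascript
// Parses CONFIG_SERVER_INIT_TIMEOUT_MS defensively: always pass radix 10
// (without a radix, strings like '0x1f' parse as hex), and fall back to
// 30_000 ms when the value is missing or not a number.
function getConfigServerInitTimeoutMs(env = process.env) {
  const parsed = parseInt(env.CONFIG_SERVER_INIT_TIMEOUT_MS ?? '', 10);
  return Number.isNaN(parsed) ? 30_000 : parsed;
}

console.log(getConfigServerInitTimeoutMs({ CONFIG_SERVER_INIT_TIMEOUT_MS: '5000' })); // 5000
console.log(getConfigServerInitTimeoutMs({ CONFIG_SERVER_INIT_TIMEOUT_MS: 'soon' })); // 30000
console.log(getConfigServerInitTimeoutMs({})); // 30000
```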
- Add optional configServers param to getServerConfig; when provided, returns matching config directly without any global lookup
- Remove configNameToKey map entirely (was the source of the race)
- Extract server names from cache keys via lastIndexOf in invalidateConfigCache (safe for names containing colons)
- Use mcpConfig[serverName] directly in getMCPTools instead of a redundant getServerConfig call
- Add cross-tenant isolation test for getServerConfig

* fix: populate read-through cache after config server lazy init

After lazyInitConfigServer succeeds, write the parsed config to readThroughCache keyed by serverName so that getServerConfig calls from ConnectionsRepository, UserConnectionManager, and MCPManager.callTool find the config without needing configServers.

Without this, config-source servers appeared in tool listings but every connection attempt and tool call returned undefined.

* fix: user-scoped getServerConfig fallback to server-only cache key

When getServerConfig is called with a userId (e.g., from callTool or UserConnectionManager), the cache key is serverName::userId. Config-source servers are cached under the server-only key (no userId). Add a fallback so user-scoped lookups find config-source servers in the read-through cache.

* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race

CRITICAL: Add findInConfigCache fallback in getServerConfig so config-source servers remain reachable after readThroughCache TTL expires (5s). Without this, every tool call after 5s returned undefined for config-source servers.

MAJOR: Extract isUserSourced() helper to mcp/utils.ts and replace all 5 inline dbSourced ternary expressions (MCPManager x2, ConnectionsRepository, UserConnectionManager, MCPServerInspector).

MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when configCacheRepo.add throws (key exists from another process), fall back to reading the existing entry instead of returning undefined.
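A sketch of the isUserSourced() boundary logic as described across these commits: trust the explicit `source` field when present, otherwise fall back to the legacy `dbId` check. The exact truth table in mcp/utils.ts may differ; this is inferred from the commit descriptions, not copied:

```javascript
// Sourcing check per the commits above: an explicit source field wins; for
// pre-upgrade cached configs that lack it, fall back to the legacy dbId check.
// Anything not provably YAML- or config-sourced is treated as user-sourced.
// (Illustrative reconstruction — the real helper lives in mcp/utils.ts.)
function isUserSourced(config) {
  if (config.source !== undefined) {
    return config.source !== 'yaml' && config.source !== 'config';
  }
  return Boolean(config.dbId);
}

console.log(isUserSourced({ source: 'yaml' }));   // false
console.log(isUserSourced({ source: 'config' })); // false
console.log(isUserSourced({ source: 'user' }));   // true
console.log(isUserSourced({ dbId: 'abc123' }));   // true — legacy pre-upgrade config
```

Centralizing this predicate is what let the refactor replace the five inline `dbSourced` ternaries with a single, testable boundary.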
MINOR: Parallelize invalidateConfigCache awaits with Promise.all. Remove redundant .catch(() => {}) inside Promise.allSettled. Tighten dedup test assertion to toBe(1). Add TTL-expiry tests for getServerConfig (with and without userId).

* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext

Add optional configServers parameter to getAppToolFunctions, getInstructions, and formatInstructionsForContext so config-source server tools and instructions are visible to agent initialization and context injection paths.

Existing callers (boot-time init, tests) pass no argument and continue to work unchanged. Agent runtime paths can now thread resolved config servers from request context.

* fix: stale failure stubs retry after 5 min, upsert for cross-process races

- Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried instead of permanently disabling config-source servers after transient errors (DNS outage, cold-start race)
- Extract upsertConfigCache() helper that tries add then falls back to update, preventing cross-process Redis races where a second instance's successful inspection result was discarded
- Add test for stale-stub retry after CONFIG_STUB_RETRY_MS

* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup

- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it, stubs were always considered stale (updatedAt ?? 0 → epoch → always expired)
- Add null guard for rawConfig in MCPManager.callTool before passing to preProcessGraphTokens — prevents unsafe `as` cast on undefined
- Log double-failure in upsertConfigCache instead of silently swallowing
- Replace module-scope Date.now monkey-patch with jest.useFakeTimers / jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests

* fix: server-only readThrough fallback only returns truthy values

Prevents a cached undefined from a prior no-userId lookup from short-circuiting the DB query on a subsequent userId-scoped lookup.

* fix: remove findInConfigCache to eliminate cross-tenant config leakage

The findInConfigCache prefix scan (serverName:*) could return any tenant's config after readThrough TTL expires, violating tenant isolation.

Config-source servers are now ONLY resolvable through:
1. The configServers param (callers with tenant context from ALS)
2. The readThrough cache (populated by ensureSingleConfigServer, 5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)

Connection/tool-call paths without tenant context rely exclusively on the readThrough cache. If it expires before the next HTTP request repopulates it, the server is not found — which is correct because there is no tenant context to determine which config to return.

- Remove findInConfigCache method and its call in getServerConfig
- Update server-only readThrough fallback to only return truthy values (prevents cached undefined from short-circuiting user-scoped DB lookup)
- Update tests to document tenant isolation behavior after cache expiry

* style: fix import order per AGENTS.md conventions

Sort package imports shortest-to-longest, local imports longest-to-shortest across MCPServersRegistry, ConnectionsRepository, MCPManager, UserConnectionManager, and MCPServerInspector.
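The updatedAt stamping bug above is easiest to see as a staleness predicate: under `?? 0`, an unstamped stub dates from the epoch and always reads as expired, defeating the 5-minute retry window. A sketch, with field names assumed from the commit text:

```javascript
const CONFIG_STUB_RETRY_MS = 5 * 60 * 1000; // 5 minutes, per the commit

// A failure stub without updatedAt dates from "the epoch" under `?? 0`, so it
// always reads as stale and gets re-inspected on every request; stamping
// updatedAt at stub creation makes the 5-minute retry window actually hold.
// (Field names are assumed from the commit text, not copied from the source.)
function isStubStale(stub, now = Date.now()) {
  return now - (stub.updatedAt ?? 0) > CONFIG_STUB_RETRY_MS;
}

const unstamped = { status: 'failed' };
const fresh = { status: 'failed', updatedAt: Date.now() };
const old = { status: 'failed', updatedAt: Date.now() - 6 * 60 * 1000 };

console.log(isStubStale(unstamped)); // true — bug: retried immediately, every time
console.log(isStubStale(fresh));     // false — retry deferred for the full window
console.log(isStubStale(old));       // true — past the window, eligible for re-inspection
```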
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures

Thread pre-resolved serverConfig from tool creation context into callTool, removing dependency on the readThrough cache for config-source servers. This fixes two issues:

- Cross-tenant contamination: the readThrough cache key was unscoped (just serverName), so concurrent multi-tenant requests for same-named servers would overwrite each other's entries
- TTL expiry: tool calls happening >5s after config resolution would fail with "Configuration not found" because the readThrough entry had expired

Changes:
- Add optional serverConfig param to MCPManager.callTool — uses provided config directly, falling back to getServerConfig lookup for YAML/user servers
- Thread serverConfig from createMCPTool through createToolInstance closure to callTool
- Remove readThrough write from ensureSingleConfigServer — config-source servers are only accessible via configServers param (tenant-scoped)
- Remove server-only readThrough fallback from getServerConfig
- Increase config cache hash from 8 to 16 hex chars (64-bit)
- Add isUserSourced boundary tests for all source/dbId combinations
- Fix double Object.keys call in getMCPTools controller
- Update test assertions for new getServerConfig behavior

* fix: cache base configs for config-server users; narrow upsertConfigCache error handling

- Refactor getAllServerConfigs to separate base config fetch (YAML + DB) from config-server layering.
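The callTool change above is a provided-config-with-fallback pattern; a sketch, where `resolveServerConfig` and `registry` are illustrative stand-ins for the logic inside MCPManager.callTool rather than the actual API:

```javascript
// Prefer the pre-resolved config threaded in from tool-creation context
// (required for config-source servers, whose configs are tenant-scoped and
// absent from any global cache), falling back to a registry lookup for
// YAML/user servers. Illustrative sketch, not MCPManager's real signature.
async function resolveServerConfig(registry, serverName, providedConfig) {
  if (providedConfig) {
    return providedConfig;
  }
  const config = await registry.getServerConfig(serverName);
  if (!config) {
    throw new Error(`Configuration not found for server "${serverName}"`);
  }
  return config;
}

(async () => {
  const registry = {
    getServerConfig: async (name) =>
      name === 'yaml-server' ? { source: 'yaml' } : undefined,
  };
  const a = await resolveServerConfig(registry, 'tenant-server', { source: 'config' });
  const b = await resolveServerConfig(registry, 'yaml-server');
  console.log(a.source, b.source); // config yaml
})();
```

Because the provided config is captured in the tool's closure at creation time, it neither expires with a cache TTL nor collides with another tenant's same-named server.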
DB mock - Update route tests for resolveAllMcpConfigs import change * fix: restore correct merge priority, use immutable spread, fix test mock - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData * fix: error handling, stable hashing, flatten nesting, remove dead param - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff) - Remove dead configServers param from getAppToolFunctions (never passed) - Add security rationale comment for source field in redactServerSecrets * fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties. Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context. 
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config'). * chore: imports/fields ordering * fix: address review findings — error handling, targeted lookup, test gaps - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists. - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup. - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers. - Update getAllServerConfigs JSDoc to document disjoint-key design. - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback). - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation. * fix: add getOAuthReconnectionManager mock to OAuth callback tests * chore: imports ordering
2026-03-28 10:36:43 -04:00
const oauthServers = new Set(
Object.entries(mcpConfig)
.filter(([, config]) => config.requiresOAuth)
.map(([name]) => name),
);
return {
mcpConfig,
oauthServers,
appConnections,
userConnections,
};
}
/**
 * Check OAuth flow status for a user and server
 * @param {string} userId - The user ID
 * @param {string} serverName - The server name
 * @returns {Promise<{ hasActiveFlow: boolean, hasFailedFlow: boolean }>} Flags indicating whether an OAuth flow is currently active or has failed
 */
async function checkOAuthFlowStatus(userId, serverName) {
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = getFlowStateManager(flowsCache);
const flowId = MCPOAuthHandler.generateFlowId(userId, serverName);
try {
const flowState = await flowManager.getFlowState(flowId, 'mcp_oauth');
if (!flowState) {
return { hasActiveFlow: false, hasFailedFlow: false };
}
const flowAge = Date.now() - flowState.createdAt;
const flowTTL = flowState.ttl || 180000; // Default 3 minutes
if (flowState.status === 'FAILED' || flowAge > flowTTL) {
const wasCancelled = flowState.error && flowState.error.includes('cancelled');
if (wasCancelled) {
logger.debug(`[MCP Connection Status] Found cancelled OAuth flow for ${serverName}`, {
flowId,
status: flowState.status,
error: flowState.error,
});
return { hasActiveFlow: false, hasFailedFlow: false };
} else {
logger.debug(`[MCP Connection Status] Found failed OAuth flow for ${serverName}`, {
flowId,
status: flowState.status,
flowAge,
flowTTL,
timedOut: flowAge > flowTTL,
error: flowState.error,
});
return { hasActiveFlow: false, hasFailedFlow: true };
}
}
if (flowState.status === 'PENDING') {
logger.debug(`[MCP Connection Status] Found active OAuth flow for ${serverName}`, {
flowId,
flowAge,
flowTTL,
});
return { hasActiveFlow: true, hasFailedFlow: false };
}
return { hasActiveFlow: false, hasFailedFlow: false };
} catch (error) {
logger.error(`[MCP Connection Status] Error checking OAuth flows for ${serverName}:`, error);
return { hasActiveFlow: false, hasFailedFlow: false };
}
}
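/*
 * Illustrative summary (not executed): the branches in checkOAuthFlowStatus
 * reduce to this decision table.
 *
 *   flow state                          → hasActiveFlow  hasFailedFlow
 *   no flow found                       → false          false
 *   FAILED or past TTL, cancelled       → false          false
 *   FAILED or past TTL, not cancelled   → false          true
 *   PENDING within TTL                  → true           false
 *   lookup error (caught)               → false          false
 */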
/**
 * Get connection status for a specific MCP server
 * @param {string} userId - The user ID
 * @param {string} serverName - The server name
 * @param {import('@librechat/api').ParsedServerConfig} config - The server configuration
 * @param {Map<string, import('@librechat/api').MCPConnection>} appConnections - App-level connections
 * @param {Map<string, import('@librechat/api').MCPConnection>} userConnections - User-level connections
 * @param {Set<string>} oauthServers - Names of servers that require OAuth
 * @returns {Promise<{ requiresOAuth: boolean, connectionState: string }>} The server's OAuth requirement and resolved connection state
 */
async function getServerConnectionStatus(
userId,
serverName,
config,
appConnections,
userConnections,
oauthServers,
) {
const connection = appConnections.get(serverName) || userConnections.get(serverName);
const isStaleOrMissing = connection ? connection.isStale(config.updatedAt) : true;
const baseConnectionState = isStaleOrMissing
? 'disconnected'
: connection.connectionState || 'disconnected';
let finalConnectionState = baseConnectionState;
// connection state overrides specific to OAuth servers
if (baseConnectionState === 'disconnected' && oauthServers.has(serverName)) {
// check if server is actively being reconnected
const oauthReconnectionManager = getOAuthReconnectionManager();
if (oauthReconnectionManager.isReconnecting(userId, serverName)) {
finalConnectionState = 'connecting';
} else {
const { hasActiveFlow, hasFailedFlow } = await checkOAuthFlowStatus(userId, serverName);
if (hasFailedFlow) {
finalConnectionState = 'error';
} else if (hasActiveFlow) {
finalConnectionState = 'connecting';
}
}
}
return {
requiresOAuth: oauthServers.has(serverName),
connectionState: finalConnectionState,
};
}
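/*
 * Illustrative sketch (commented out, not part of this module): the OAuth
 * overrides above amount to the following precedence; `resolveState` and its
 * parameter names are hypothetical.
 *
 *   function resolveState({ base, isOAuth, isReconnecting, activeFlow, failedFlow }) {
 *     if (base !== 'disconnected' || !isOAuth) return base;
 *     if (isReconnecting) return 'connecting';
 *     if (failedFlow) return 'error';
 *     if (activeFlow) return 'connecting';
 *     return 'disconnected';
 *   }
 */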
module.exports = {
createMCPTool,
createMCPTools,
getMCPSetupData,
for config-source OAuth servers The OAuth callback route has no JWT auth context and cannot resolve config-source server configs. Previously, getOAuthHeaders would silently return {} for config-source servers, dropping custom token exchange headers. Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow initiation (which has auth context), and the callback reads them from the stored flow state with a fallback to the registry lookup for YAML/user-DB servers. * fix: update tests for getMCPSetupData null guard removal and ToolService mock - MCP.spec.js: update test to expect graceful handling of null mcpConfig instead of a throw (getAllServerConfigs always returns an object) - MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of null from test mocks - ToolService.spec.js: add missing mock for ~/server/services/MCP (resolveConfigServers) * fix: address review findings — DRY, naming, logging, dead code, defensive guards - #1: Simplify getAllServerConfigs to single getBaseServerConfigs call, eliminating redundant double-fetch of cacheConfigsRepo.getAll() - #2: Add warning log when oauthHeaders absent from OAuth callback flow state - #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller imports shared helper instead of reimplementing - #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider in createToolInstance — these are actively used, not unused - #5: Log rejected results from ensureConfigServers Promise.allSettled so cache errors are visible instead of silently dropped - #6: Remove dead 'MCP config not found' error handlers from routes - #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache - #8: Remove logger.error from withTimeout to prevent double-logging timeouts - #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message - #12: Use spread instead of mutation in addServer for immutability consistency - Add upsert mock to ensureConfigServers.test.ts 
DB mock - Update route tests for resolveAllMcpConfigs import change * fix: restore correct merge priority, use immutable spread, fix test mock - getAllServerConfigs: { ...configServers, ...base } so userDB wins over configServers, matching documented "User DB (highest)" priority - lazyInitConfigServer: use immutable spread instead of direct mutation for parsedConfig.source, consistent with addServer fix - Fix test to mock getAllServerConfigs as {} instead of null, remove unnecessary || {} defensive guard in getMCPSetupData * fix: error handling, stable hashing, flatten nesting, remove dead param - Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with graceful {} fallback so transient DB/cache errors don't crash tool pipeline - Sort keys in configCacheKey JSON.stringify for deterministic hashing regardless of object property insertion order - Flatten clearMcpConfigCache from 3 nested try-catch to early returns; document that user connections are cleaned up lazily (accepted tradeoff) - Remove dead configServers param from getAppToolFunctions (never passed) - Add security rationale comment for source field in redactServerSecrets * fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision The array replacer in JSON.stringify acts as a property allowlist at every nesting depth, silently dropping nested keys like headers['X-API-Key'], oauth.client_secret, etc. Two configs with different nested values but identical top-level structure produced the same hash, causing cross-tenant cache hits and potential credential contamination. Switch to a function replacer that recursively sorts keys at all depths without dropping any properties. Also document the known gap in getOAuthServers: config-source OAuth servers are not covered by auto-reconnection or uninstall cleanup because callers lack request context. 
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency The function only depends on MCPServersRegistry and MCPManager, both of which live in packages/api. Import it directly from @librechat/api in the CJS layer instead of using dynamic require('~/config'). * chore: imports/fields ordering * fix: address review findings — error handling, targeted lookup, test gaps - Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so getAppConfig/getAllServerConfigs failures propagate instead of masking infrastructure errors as empty server lists. - Use targeted getServerConfig in getMCPServerById instead of fetching all server configs for a single-server lookup. - Forward configServers to inner createMCPTool calls so reconnect path works for config-source servers. - Update getAllServerConfigs JSDoc to document disjoint-key design. - Add OAuth callback oauthHeaders fallback tests (flow state present vs registry fallback). - Add resolveConfigServers/resolveAllMcpConfigs unit tests covering happy path and error propagation. * fix: add getOAuthReconnectionManager mock to OAuth callback tests * chore: imports ordering
2026-03-28 10:36:43 -04:00
  resolveConfigServers,
  resolveAllMcpConfigs,
  checkOAuthFlowStatus,
  getServerConnectionStatus,
  createUnavailableToolStub,
};
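This module's cache-key hashing depends on deterministic JSON serialization: a recursive key-sorting replacer sorts object keys at every nesting depth without dropping properties, whereas an array replacer would act as a per-level allowlist and silently drop nested keys such as `headers['X-API-Key']`, producing cross-tenant hash collisions. A minimal standalone sketch of that technique (`stableStringify` and `sortKeysReplacer` are illustrative names, not this module's actual identifiers):

```javascript
/**
 * Function replacer for JSON.stringify: when the current value is a plain
 * object, return a copy with its keys in sorted insertion order so the
 * serialized form is independent of property insertion order. Arrays and
 * primitives pass through unchanged; objects nested inside arrays are
 * still visited and sorted.
 */
function sortKeysReplacer(key, value) {
  if (value && typeof value === 'object' && !Array.isArray(value)) {
    return Object.keys(value)
      .sort()
      .reduce((sorted, k) => {
        sorted[k] = value[k];
        return sorted;
      }, {});
  }
  return value;
}

/** Deterministic serialization suitable for hashing config objects. */
function stableStringify(obj) {
  return JSON.stringify(obj, sortKeysReplacer);
}

// Two configs that differ only in key insertion order serialize identically:
// stableStringify({ b: 1, a: { d: 2, c: 3 } }) === '{"a":{"c":3,"d":2},"b":1}'
```

Unlike `JSON.stringify(obj, ['a', 'b'])`, which keeps only allowlisted keys at every depth, the function replacer preserves every property, so configs differing only in nested values hash differently.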