Mirror of https://github.com/danny-avila/LibreChat.git
* feat: add MCPServerSource type, tenantMcpPolicy schema, and source-based dbSourced wiring
- Add `tenantMcpPolicy` to `mcpSettings` in YAML config schema with
`enabled`, `maxServersPerTenant`, `allowedTransports`, and `allowedDomains`
- Add `MCPServerSource` type ('yaml' | 'config' | 'user') and `source`
field to `ParsedServerConfig`
- Change `dbSourced` determination from `!!config.dbId` to
`config.source === 'user'` across MCPManager, ConnectionsRepository,
UserConnectionManager, and MCPServerInspector
- Set `source: 'user'` on all DB-sourced servers in ServerConfigsDB
* feat: three-layer MCPServersRegistry with config cache and lazy init
- Add `configCacheRepo` as third repository layer between YAML cache and
DB for admin-defined config-source MCP servers
- Implement `ensureConfigServers()` that identifies config-override servers
from resolved `getAppConfig()` mcpConfig, lazily inspects them, and
caches parsed configs with `source: 'config'`
- Add `lazyInitConfigServer()` with timeout, stub-on-failure, and
concurrent-init deduplication via `pendingConfigInits` map
- Extend `getAllServerConfigs()` with optional `configServers` param for
three-way merge: YAML → Config → User (sketched after this commit)
- Add `getServerConfig()` lookup through config cache layer
- Add `invalidateConfigCache()` for clearing config-source inspection
results on admin config mutations
- Tag `source: 'yaml'` on CACHE-stored servers and `source: 'user'` on
DB-stored servers in `addServer()` and `addServerStub()`
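
A minimal sketch of the three-way merge this commit describes, in TypeScript (type shapes assumed; the real registry resolves each layer from its own repository). Later spread keys win, so user-DB servers take the highest precedence:

  type MCPServerSource = 'yaml' | 'config' | 'user';

  interface ParsedServerConfig {
    source?: MCPServerSource;
    [key: string]: unknown;
  }

  function mergeServerConfigs(
    yamlConfigs: Record<string, ParsedServerConfig>,
    configServers: Record<string, ParsedServerConfig>,
    userDbConfigs: Record<string, ParsedServerConfig>,
  ): Record<string, ParsedServerConfig> {
    // YAML → Config → User: a user-defined server overrides an
    // admin config server of the same name, which overrides YAML.
    return { ...yamlConfigs, ...configServers, ...userDbConfigs };
  }
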
* feat: wire tenant context into MCP controllers, services, and cache invalidation
- Resolve config-source servers via `getAppConfig({ role, tenantId })`
in `getMCPTools()` and `getMCPServersList()` controllers
- Pass `ensureConfigServers()` results through `getAllServerConfigs()`
for three-way merge of YAML + Config + User servers
- Add tenant/role context to `getMCPSetupData()` and connection status
routes via `getTenantId()` from ALS
- Add `clearMcpConfigCache()` to `invalidateConfigCaches()` so admin
config mutations trigger re-inspection of config-source MCP servers
* feat: enforce tenantMcpPolicy on admin config mcpServers mutations
- Add `validateMcpServerPolicy()` helper that checks mcpServers against
operator-defined `tenantMcpPolicy` (enabled, maxServersPerTenant,
allowedTransports, allowedDomains)
- Wire validation into `upsertConfigOverrides` and `patchConfigField`
handlers — rejects with 403 when policy is violated
- Infer transport type from config shape (command → stdio, url protocol
→ websocket/sse, type field → streamable-http)
- Validate server domains against policy allowlist when configured
* revert: remove tenantMcpPolicy schema and enforcement
The existing admin config CRUD routes already provide the mechanism
for granular MCP server prepopulation (groups, roles, users). The
tenantMcpPolicy gating adds unnecessary complexity that can be
revisited if needed in the future.
- Remove tenantMcpPolicy from mcpSettings Zod schema
- Remove validateMcpServerPolicy helper and TenantMcpPolicy interface
- Remove policy enforcement from upsertConfigOverrides and
patchConfigField handlers
* test: update test assertions for source field and config-server wiring
- Use objectContaining in MCPServersRegistry reset test to account for
new source: 'yaml' field on CACHE-stored configs
- Add getTenantId and ensureConfigServers mocks to MCP route tests
- Add getAppConfig mock to route test Config service mock
- Update getMCPSetupData assertion to expect second options argument
- Update getAllServerConfigs assertions for new configServers parameter
* fix: disconnect active connections when config-source servers are evicted
When admin config overrides change and config-source MCP servers are
removed, the invalidation now proactively disconnects active connections
for evicted servers instead of leaving them lingering until timeout.
- Return evicted server names from invalidateConfigCache()
- Disconnect app-level connections for evicted servers in
clearMcpConfigCache() via MCPManager.appConnections.disconnect()
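
A sketch of the eviction flow described above (object shapes assumed; only invalidateConfigCache() returning evicted names and appConnections.disconnect() come from the commit):

  declare const registry: { invalidateConfigCache(): Promise<string[]> };
  declare const mcpManager: {
    appConnections: { disconnect(serverName: string): Promise<void> };
  };

  async function clearMcpConfigCache(): Promise<void> {
    // Names of config-source servers removed from the admin config
    const evicted = await registry.invalidateConfigCache();
    // Proactively tear down their app-level connections instead of
    // leaving them to linger until the idle timeout
    await Promise.all(
      evicted.map((serverName) => mcpManager.appConnections.disconnect(serverName)),
    );
  }
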
* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Scope configCacheRepo keys by config content hash to prevent
cross-tenant cache poisoning when two tenants define the same
server name with different configurations
- Change dbSourced checks from `source === 'user'` to
`source !== 'yaml' && source !== 'config'` so undefined source
(pre-upgrade cached configs) fails closed to restricted mode
MAJOR fixes:
- Derive OAuth servers from already-computed mcpConfig instead of
calling getOAuthServers() separately — config-source OAuth servers
are now properly detected
- Add parseInt radix (10) and NaN guard with fallback to 30_000
for CONFIG_SERVER_INIT_TIMEOUT_MS
- Add CONFIG_CACHE_NAMESPACE to aggregate-key branch in
ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Remove `if (role || tenantId)` guard in getMCPSetupData — config
servers now always resolve regardless of tenant context
MINOR fixes:
- Extract resolveAllMcpConfigs() helper in mcp controller to
eliminate 3x copy-pasted config resolution boilerplate
- Distinguish "not initialized" from real errors in
clearMcpConfigCache — log actual failures instead of swallowing
- Remove narrative inline comments per style guide
- Remove dead try/catch inside Promise.allSettled in
ensureConfigServers (inner method never throws)
- Memoize YAML server names to avoid repeated cacheConfigsRepo.getAll()
calls per request
Test updates:
- Add ensureConfigServers mock to registry test fixtures
- Update getMCPSetupData assertions for inline OAuth derivation
* fix: address code review findings (CRITICAL, MAJOR, MINOR)
CRITICAL fixes:
- Break circular dependency: move CONFIG_CACHE_NAMESPACE from
MCPServersRegistry to ServerConfigsCacheFactory
- Fix dbSourced fail-closed: use source field when present, fall back to
legacy dbId check when absent (backward-compatible with pre-upgrade
cached configs that lack source field)
MAJOR fixes:
- Add CONFIG_CACHE_NAMESPACE to aggregate-key set in
ServerConfigsCacheFactory to avoid SCAN-based Redis stalls
- Add comprehensive test suite (ensureConfigServers.test.ts, 18 tests)
covering lazy init, stub-on-failure, cross-tenant isolation via config
hash keys, concurrent deduplication, merge order, and cache invalidation
MINOR fixes:
- Update MCPServerInspector test assertion for dbSourced change
* fix: restore getServerConfig lookup for config-source servers (NEW-1)
Add configNameToKey map that indexes server name → hash-based cache key
for O(1) lookup by name in getServerConfig. This restores the config
cache layer that was dropped when hash-based keys were introduced.
Without this fix, config-source servers appeared in tool listings
(via getAllServerConfigs) but getServerConfig returned undefined,
breaking all connection and tool call paths.
- Populate configNameToKey in ensureSingleConfigServer
- Clear configNameToKey in invalidateConfigCache and reset
- Clear stale read-through cache entries after lazy init
- Remove dead code in invalidateConfigCache (config.title, key parsing)
- Add getServerConfig tests for config-source server lookup
* fix: eliminate configNameToKey race via caller-provided configServers param
Replace the process-global configNameToKey map (last-writer-wins under
concurrent multi-tenant load) with a configServers parameter on
getServerConfig. Callers pass the pre-resolved config servers map
directly — no shared mutable state, no cross-tenant race.
- Add optional configServers param to getServerConfig; when provided,
returns matching config directly without any global lookup (see the
sketch after this commit)
- Remove configNameToKey map entirely (was the source of the race)
- Extract server names from cache keys via lastIndexOf in
invalidateConfigCache (safe for names containing colons)
- Use mcpConfig[serverName] directly in getMCPTools instead of a
redundant getServerConfig call
- Add cross-tenant isolation test for getServerConfig
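
A sketch of the caller-provided lookup (signatures assumed; registryLookup is a stand-in for the YAML/user-DB layers):

  type ParsedServerConfig = { source?: 'yaml' | 'config' | 'user' } & Record<string, unknown>;

  declare function registryLookup(
    serverName: string,
    userId?: string,
  ): Promise<ParsedServerConfig | undefined>;

  async function getServerConfig(
    serverName: string,
    userId?: string,
    configServers?: Record<string, ParsedServerConfig>,
  ): Promise<ParsedServerConfig | undefined> {
    // The tenant-scoped map resolved by the caller wins; with no shared
    // mutable state involved, there is no cross-tenant race.
    if (configServers?.[serverName]) {
      return configServers[serverName];
    }
    return registryLookup(serverName, userId);
  }
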
* fix: populate read-through cache after config server lazy init
After lazyInitConfigServer succeeds, write the parsed config to
readThroughCache keyed by serverName so that getServerConfig calls
from ConnectionsRepository, UserConnectionManager, and
MCPManager.callTool find the config without needing configServers.
Without this, config-source servers appeared in tool listings but
every connection attempt and tool call returned undefined.
* fix: user-scoped getServerConfig fallback to server-only cache key
When getServerConfig is called with a userId (e.g., from callTool or
UserConnectionManager), the cache key is serverName::userId. Config-source
servers are cached under the server-only key (no userId). Add a fallback
so user-scoped lookups find config-source servers in the read-through cache.
* fix: configCacheRepo fallback, isUserSourced DRY, cross-process race
CRITICAL: Add findInConfigCache fallback in getServerConfig so
config-source servers remain reachable after readThroughCache TTL
expires (5s). Without this, every tool call after 5s returned
undefined for config-source servers.
MAJOR: Extract isUserSourced() helper to mcp/utils.ts (sketched after
this commit) and replace all 5 inline dbSourced ternary expressions
(MCPManager x2, ConnectionsRepository, UserConnectionManager,
MCPServerInspector).
MAJOR: Fix cross-process Redis race in lazyInitConfigServer — when
configCacheRepo.add throws (key exists from another process), fall
back to reading the existing entry instead of returning undefined.
MINOR: Parallelize invalidateConfigCache awaits with Promise.all.
Remove redundant .catch(() => {}) inside Promise.allSettled.
Tighten dedup test assertion to toBe(1).
Add TTL-expiry tests for getServerConfig (with and without userId).
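
The extracted helper might look like this (a sketch; field shapes assumed). The source field is authoritative when present; the legacy dbId check covers pre-upgrade cached configs:

  function isUserSourced(config: {
    source?: 'yaml' | 'config' | 'user';
    dbId?: string;
  }): boolean {
    if (config.source != null) {
      return config.source === 'user';
    }
    // Legacy fallback for cached configs written before the source field
    return Boolean(config.dbId);
  }
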
* feat: thread configServers through getAppToolFunctions and formatInstructionsForContext
Add optional configServers parameter to getAppToolFunctions,
getInstructions, and formatInstructionsForContext so config-source
server tools and instructions are visible to agent initialization
and context injection paths.
Existing callers (boot-time init, tests) pass no argument and
continue to work unchanged. Agent runtime paths can now thread
resolved config servers from request context.
* fix: stale failure stubs retry after 5 min, upsert for cross-process races
- Add CONFIG_STUB_RETRY_MS (5 min) — stale failure stubs are retried
instead of permanently disabling config-source servers after transient
errors (DNS outage, cold-start race)
- Extract upsertConfigCache() helper that tries add then falls back to
update, preventing cross-process Redis races where a second instance's
successful inspection result was discarded
- Add test for stale-stub retry after CONFIG_STUB_RETRY_MS
* fix: stamp updatedAt on failure stubs, null-guard callTool config, test cleanup
- Add updatedAt: Date.now() to failure stubs in lazyInitConfigServer so
the CONFIG_STUB_RETRY_MS (5 min) window works correctly — without it,
stubs were always considered stale (updatedAt ?? 0 → epoch → always
expired); see the sketch after this commit
- Add null guard for rawConfig in MCPManager.callTool before passing to
preProcessGraphTokens — prevents unsafe `as` cast on undefined
- Log double-failure in upsertConfigCache instead of silently swallowing
- Replace module-scope Date.now monkey-patch with jest.useFakeTimers /
jest.setSystemTime / jest.useRealTimers in ensureConfigServers tests
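
Taken together, the two commits above imply a retry window like this sketch (field names other than updatedAt are assumed):

  const CONFIG_STUB_RETRY_MS = 5 * 60 * 1000;

  interface FailureStub {
    isFailureStub: true; // hypothetical marker field
    updatedAt?: number; // stamped at stub creation (this commit's fix)
  }

  function shouldRetryStub(stub: FailureStub, now = Date.now()): boolean {
    // Without the updatedAt stamp, `?? 0` dates every stub to the epoch,
    // so every stub looked expired and the 5-minute backoff never applied
    return now - (stub.updatedAt ?? 0) > CONFIG_STUB_RETRY_MS;
  }
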
* fix: server-only readThrough fallback only returns truthy values
Prevents a cached undefined from a prior no-userId lookup from
short-circuiting the DB query on a subsequent userId-scoped lookup.
* fix: remove findInConfigCache to eliminate cross-tenant config leakage
The findInConfigCache prefix scan (serverName:*) could return any
tenant's config after readThrough TTL expires, violating tenant
isolation. Config-source servers are now ONLY resolvable through:
1. The configServers param (callers with tenant context from ALS)
2. The readThrough cache (populated by ensureSingleConfigServer,
5s TTL, repopulated on every HTTP request via resolveAllMcpConfigs)
Connection/tool-call paths without tenant context rely exclusively on
the readThrough cache. If it expires before the next HTTP request
repopulates it, the server is not found — which is correct because
there is no tenant context to determine which config to return.
- Remove findInConfigCache method and its call in getServerConfig
- Update server-only readThrough fallback to only return truthy values
(prevents cached undefined from short-circuiting user-scoped DB lookup)
- Update tests to document tenant isolation behavior after cache expiry
* style: fix import order per AGENTS.md conventions
Sort package imports shortest-to-longest, local imports longest-to-shortest
across MCPServersRegistry, ConnectionsRepository, MCPManager,
UserConnectionManager, and MCPServerInspector.
* fix: eliminate cross-tenant readThrough contamination and TTL-expiry tool failures
Thread pre-resolved serverConfig from tool creation context into
callTool, removing dependency on the readThrough cache for config-source
servers. This fixes two issues:
- Cross-tenant contamination: the readThrough cache key was unscoped
(just serverName), so concurrent multi-tenant requests for same-named
servers would overwrite each other's entries
- TTL expiry: tool calls happening >5s after config resolution would
fail with "Configuration not found" because the readThrough entry
had expired
Changes:
- Add optional serverConfig param to MCPManager.callTool — uses
provided config directly, falling back to getServerConfig lookup
for YAML/user servers
- Thread serverConfig from createMCPTool through createToolInstance
closure to callTool
- Remove readThrough write from ensureSingleConfigServer — config-source
servers are only accessible via configServers param (tenant-scoped)
- Remove server-only readThrough fallback from getServerConfig
- Increase config cache hash from 8 to 16 hex chars (64-bit)
- Add isUserSourced boundary tests for all source/dbId combinations
- Fix double Object.keys call in getMCPTools controller
- Update test assertions for new getServerConfig behavior
* fix: cache base configs for config-server users; narrow upsertConfigCache error handling
- Refactor getAllServerConfigs to separate base config fetch (YAML + DB)
from config-server layering. Base configs are cached via readThroughCacheAll
regardless of whether configServers is provided, eliminating uncached
MongoDB queries per request for config-server users
- Narrow upsertConfigCache catch to duplicate-key errors only;
infrastructure errors (Redis timeouts, network failures) now propagate
instead of being silently swallowed, preventing inspection storms
during outages
* fix: restore correct merge order and document upsert error matching
- Restore YAML → Config → User DB precedence in getAllServerConfigs
(user DB servers have highest precedence, matching the JSDoc contract)
- Add source comment on upsertConfigCache duplicate-key detection
linking to the two cache implementations that define the error message
* feat: complete config-source server support across all execution paths
Wire configServers through the entire agent execution pipeline so
config-source MCP servers are fully functional — not just visible in
listings but executable in agent sessions.
- Thread configServers into handleTools.js agent tool pipeline: resolve
config servers from tenant context before MCP tool iteration, pass to
getServerConfig, createMCPTools, and createMCPTool
- Thread configServers into agent instructions pipeline:
applyContextToAgent → getMCPInstructionsForServers →
formatInstructionsForContext, resolved in client.js before agent
context application
- Add configServers param to createMCPTool and createMCPTools for
reconnect path fallback
- Add source field to redactServerSecrets allowlist for client UI
differentiation of server tiers
- Narrow invalidateConfigCache to only clear readThroughCacheAll (merged
results), preserving YAML individual-server readThrough entries
- Update context.spec.ts assertions for new configServers parameter
* fix: add missing mocks for config-source server dependencies in client.test.js
Mock getMCPServersRegistry, getAppConfig, and getTenantId that were added
to client.js but not reflected in the test file's jest.mock declarations.
* fix: update formatInstructionsForContext assertions for configServers param
The test assertions expected formatInstructionsForContext to be called with
only the server names array, but it now receives configServers as a second
argument after the config-source server feature wiring.
* fix: move configServers resolution before MCP tool loop to avoid TDZ
configServers was declared with `let` after the first tool loop but
referenced inside it via getServerConfig(), causing a temporal dead zone
ReferenceError. Move the declaration and resolution before the loop,
using tools.some(mcpToolPattern) to gate the async resolution (sketched
below).
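
A compressed sketch of the fixed ordering (function shapes assumed; mcpToolPattern and resolveConfigServers are named in the commit):

  declare function mcpToolPattern(toolKey: string): boolean;
  declare function resolveConfigServers(req: unknown): Promise<Record<string, unknown>>;

  async function loadAgentTools(req: unknown, tools: string[]) {
    // Resolve before the loop, and only when an MCP tool is present
    const configServers = tools.some(mcpToolPattern)
      ? await resolveConfigServers(req)
      : undefined;
    for (const toolKey of tools) {
      // configServers is initialized before its first read here, so the
      // temporal dead zone ReferenceError can no longer occur
    }
  }
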
* fix: address review findings — cache bypass, discoverServerTools gap, DRY
- #2: getAllServerConfigs now always uses getBaseServerConfigs (cached via
readThroughCacheAll) instead of bypassing it when configServers is present.
Extracts user-DB entries from cached base by diffing against YAML keys
to maintain YAML → Config → User DB merge order without extra MongoDB calls.
- #3: Add configServers param to ToolDiscoveryOptions and thread it through
discoverServerTools → getServerConfig so config-source servers are
discoverable during OAuth reconnection flows.
- #6: Replace inline import() type annotations in context.ts with proper
import type { ParsedServerConfig } per AGENTS.md conventions.
- #7: Extract resolveConfigServers(req) helper in MCP.js and use it from
handleTools.js and client.js, eliminating the duplicated 6-line config
resolution pattern.
- #10: Restore removed "why" comment explaining getLoaded() vs getAll()
choice in getMCPSetupData — documents non-obvious correctness constraint.
- #11: Fix incomplete JSDoc param type on resolveAllMcpConfigs.
* fix: consolidate imports, reorder constants, fix YAML-DB merge edge case
- Merge duplicate @librechat/data-schemas requires in MCP.js into one
- Move resolveConfigServers after module-level constants
- Fix getAllServerConfigs edge case where user-DB entry overriding a
YAML entry with the same name was excluded from userDbConfigs; now
uses reference equality check to detect DB-overwritten YAML keys
* fix: replace fragile string-match error detection with proper upsert method
Add upsert() to IServerConfigsRepositoryInterface and all implementations
(InMemory, Redis, RedisAggregateKey, DB). This eliminates the brittle
error message string match ('already exists in cache') in upsertConfigCache
that was the only thing preventing cross-process init races from silently
discarding inspection results.
Each implementation handles add-or-update atomically:
- InMemory: direct Map.set()
- Redis: direct cache.set()
- RedisAggregateKey: read-modify-write under write lock
- DB: delegates to update() (DB servers use explicit add() with ACL setup)
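
The InMemory variant shows the shape of the contract (a sketch; only the method names and the 'already exists in cache' message come from the commits):

  interface IServerConfigsRepositoryInterface<T> {
    add(key: string, value: T): Promise<void>;
    update(key: string, value: T): Promise<void>;
    upsert(key: string, value: T): Promise<void>;
  }

  class InMemoryServerConfigs<T> implements IServerConfigsRepositoryInterface<T> {
    private store = new Map<string, T>();

    async add(key: string, value: T): Promise<void> {
      if (this.store.has(key)) {
        throw new Error(`Config for '${key}' already exists in cache`);
      }
      this.store.set(key, value);
    }

    async update(key: string, value: T): Promise<void> {
      this.store.set(key, value);
    }

    /** Add-or-update atomically; a single Map.set covers both cases. */
    async upsert(key: string, value: T): Promise<void> {
      this.store.set(key, value);
    }
  }
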
* fix: wire configServers through remaining HTTP endpoints
- getMCPServerById: use resolveAllMcpConfigs instead of bare getServerConfig
- reinitialize route: resolve configServers before getServerConfig
- auth-values route: resolve configServers before getServerConfig
- getOAuthHeaders: accept configServers param, thread from callers
- Update mcp.spec.js tests to mock getAllServerConfigs for GET by name
* fix: thread serverConfig through getConnection for config-source servers
Config-source servers exist only in configCacheRepo, not in YAML cache or
DB. When callTool → getConnection → getUserConnection → getServerConfig
runs without configServers, it returns undefined and throws. Fix by
threading the pre-resolved serverConfig (providedConfig) from callTool
through getConnection → getUserConnection → createUserConnectionInternal,
using it as a fallback before the registry lookup.
* fix: thread configServers through reinit, reconnect, and tool definition paths
Wire configServers through every remaining call chain that creates or
reconnects MCP server connections:
- reinitMCPServer: accepts serverConfig and configServers, uses them for
getServerConfig fallback, getConnection, and discoverServerTools
- reconnectServer: accepts and passes configServers to reinitMCPServer
- createMCPTools/createMCPTool: pass configServers to reconnectServer
- ToolService.loadToolDefinitionsWrapper: resolves configServers from req,
passes to both reinitMCPServer call sites
- reinitialize route: passes serverConfig and configServers to reinitMCPServer
* fix: address review findings — simplify merge, harden error paths, fix log labels
- Simplify getAllServerConfigs merge: replace fragile reference-equality
loop with direct spread { ...yamlConfigs, ...configServers, ...base }
- Guard upsertConfigCache in lazyInitConfigServer catch block so cache
failures don't mask the original inspection error
- Deduplicate getYamlServerNames cold-start with promise dedup pattern
- Remove dead `if (!mcpConfig)` guard in getMCPSetupData
- Fix hardcoded "App server" in ServerConfigsCacheRedisAggregateKey error
messages — now uses this.namespace for correct Config/App labeling
- Remove misleading OAuth callback comment about readThrough cache
- Move resolveConfigServers after module-level constants in MCP.js
* fix: clear rejected yamlServerNames promise, fix config-source reinspect, fix reset log label
- Clear yamlServerNamesPromise on rejection so transient cache errors
don't permanently prevent ensureConfigServers from working (see the
promise-dedup sketch after this commit)
- Skip reinspectServer for config-source servers (source: 'config') in
reinitMCPServer — they lack a CACHE/DB storage location; retry is
handled by CONFIG_STUB_RETRY_MS in ensureConfigServers
- Use source field instead of dbId for storageLocation derivation
- Fix remaining hardcoded "App" in reset() leaderCheck message
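
The promise-dedup pattern referenced in the last two commits, as a sketch (names follow the commits; the fetchNames parameter is illustrative):

  let yamlServerNamesPromise: Promise<string[]> | null = null;

  function getYamlServerNames(fetchNames: () => Promise<string[]>): Promise<string[]> {
    if (!yamlServerNamesPromise) {
      yamlServerNamesPromise = fetchNames().catch((err) => {
        // Clear the slot on rejection so a transient cache error does
        // not permanently poison ensureConfigServers
        yamlServerNamesPromise = null;
        throw err;
      });
    }
    // Concurrent cold-start callers share the single in-flight promise
    return yamlServerNamesPromise;
  }
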
* fix: persist oauthHeaders in flow state for config-source OAuth servers
The OAuth callback route has no JWT auth context and cannot resolve
config-source server configs. Previously, getOAuthHeaders would silently
return {} for config-source servers, dropping custom token exchange headers.
Now oauthHeaders are persisted in MCPOAuthFlowMetadata during flow
initiation (which has auth context), and the callback reads them from
the stored flow state with a fallback to the registry lookup for
YAML/user-DB servers.
* fix: update tests for getMCPSetupData null guard removal and ToolService mock
- MCP.spec.js: update test to expect graceful handling of null mcpConfig
instead of a throw (getAllServerConfigs always returns an object)
- MCP.js: add defensive || {} for Object.entries(mcpConfig) in case of
null from test mocks
- ToolService.spec.js: add missing mock for ~/server/services/MCP
(resolveConfigServers)
* fix: address review findings — DRY, naming, logging, dead code, defensive guards
- #1: Simplify getAllServerConfigs to single getBaseServerConfigs call,
eliminating redundant double-fetch of cacheConfigsRepo.getAll()
- #2: Add warning log when oauthHeaders absent from OAuth callback flow state
- #3: Extract resolveAllMcpConfigs to MCP.js service layer; controller
imports shared helper instead of reimplementing
- #4: Rename _serverConfig/_provider to capturedServerConfig/capturedProvider
in createToolInstance — these are actively used, not unused
- #5: Log rejected results from ensureConfigServers Promise.allSettled
so cache errors are visible instead of silently dropped
- #6: Remove dead 'MCP config not found' error handlers from routes
- #7: Document circular-dependency reason for dynamic require in clearMcpConfigCache
- #8: Remove logger.error from withTimeout to prevent double-logging timeouts
- #10: Add explicit userId guard in ServerConfigsDB.upsert with clear error message
- #12: Use spread instead of mutation in addServer for immutability consistency
- Add upsert mock to ensureConfigServers.test.ts DB mock
- Update route tests for resolveAllMcpConfigs import change
* fix: restore correct merge priority, use immutable spread, fix test mock
- getAllServerConfigs: { ...configServers, ...base } so userDB wins over
configServers, matching documented "User DB (highest)" priority
- lazyInitConfigServer: use immutable spread instead of direct mutation
for parsedConfig.source, consistent with addServer fix
- Fix test to mock getAllServerConfigs as {} instead of null, remove
unnecessary || {} defensive guard in getMCPSetupData
* fix: error handling, stable hashing, flatten nesting, remove dead param
- Wrap resolveConfigServers/resolveAllMcpConfigs in try/catch with
graceful {} fallback so transient DB/cache errors don't crash tool pipeline
- Sort keys in configCacheKey JSON.stringify for deterministic hashing
regardless of object property insertion order
- Flatten clearMcpConfigCache from 3 nested try-catch to early returns;
document that user connections are cleaned up lazily (accepted tradeoff)
- Remove dead configServers param from getAppToolFunctions (never passed)
- Add security rationale comment for source field in redactServerSecrets
* fix: use recursive key-sorting replacer in configCacheKey to prevent cross-tenant cache collision
The array replacer in JSON.stringify acts as a property allowlist at
every nesting depth, silently dropping nested keys like headers['X-API-Key'],
oauth.client_secret, etc. Two configs with different nested values but
identical top-level structure produced the same hash, causing cross-tenant
cache hits and potential credential contamination.
Switch to a function replacer that recursively sorts keys at all depths
without dropping any properties (sketched after this commit).
Also document the known gap in getOAuthServers: config-source OAuth
servers are not covered by auto-reconnection or uninstall cleanup
because callers lack request context.
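
A sketch of the fix (the name:hash key shape and the 16-hex-char width follow earlier commits; the hash algorithm is an assumption). A function replacer is re-invoked at every depth, so it can sort keys everywhere without acting as an allowlist:

  import { createHash } from 'node:crypto';

  function sortKeysReplacer(_key: string, value: unknown): unknown {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      // Rebuild the object with sorted keys; JSON.stringify then walks
      // the returned object, applying this replacer to nested values too
      return Object.keys(value as Record<string, unknown>)
        .sort()
        .reduce<Record<string, unknown>>((sorted, k) => {
          sorted[k] = (value as Record<string, unknown>)[k];
          return sorted;
        }, {});
    }
    return value;
  }

  function configCacheKey(serverName: string, config: object): string {
    const canonical = JSON.stringify(config, sortKeysReplacer);
    const hash = createHash('sha256').update(canonical).digest('hex').slice(0, 16);
    return `${serverName}:${hash}`;
  }
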
* fix: move clearMcpConfigCache to packages/api to eliminate circular dependency
The function only depends on MCPServersRegistry and MCPManager, both of
which live in packages/api. Import it directly from @librechat/api in
the CJS layer instead of using dynamic require('~/config').
* chore: imports/fields ordering
* fix: address review findings — error handling, targeted lookup, test gaps
- Narrow resolveAllMcpConfigs catch to only wrap ensureConfigServers so
getAppConfig/getAllServerConfigs failures propagate instead of masking
infrastructure errors as empty server lists (sketched after this commit).
- Use targeted getServerConfig in getMCPServerById instead of fetching
all server configs for a single-server lookup.
- Forward configServers to inner createMCPTool calls so reconnect path
works for config-source servers.
- Update getAllServerConfigs JSDoc to document disjoint-key design.
- Add OAuth callback oauthHeaders fallback tests (flow state present
vs registry fallback).
- Add resolveConfigServers/resolveAllMcpConfigs unit tests covering
happy path and error propagation.
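
The narrowed error handling from the first item above, sketched with assumed signatures:

  declare function getAppConfig(ctx: { role?: string; tenantId?: string }): Promise<unknown>;
  declare function ensureConfigServers(appConfig: unknown): Promise<Record<string, unknown>>;
  declare function getAllServerConfigs(
    configServers: Record<string, unknown>,
  ): Promise<Record<string, unknown>>;

  async function resolveAllMcpConfigs(ctx: { role?: string; tenantId?: string }) {
    // Not wrapped: infrastructure failures here should propagate
    const appConfig = await getAppConfig(ctx);
    let configServers: Record<string, unknown> = {};
    try {
      configServers = await ensureConfigServers(appConfig);
    } catch {
      // Transient cache/DB error: continue without config-source servers
    }
    // Not wrapped either
    return getAllServerConfigs(configServers);
  }
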
* fix: add getOAuthReconnectionManager mock to OAuth callback tests
* chore: imports ordering
810 lines · 25 KiB · JavaScript
const { Router } = require('express');
const { logger, getTenantId } = require('@librechat/data-schemas');
const {
  CacheKeys,
  Constants,
  PermissionBits,
  PermissionTypes,
  Permissions,
} = require('librechat-data-provider');
const {
  getBasePath,
  createSafeUser,
  MCPOAuthHandler,
  MCPTokenStorage,
  setOAuthSession,
  PENDING_STALE_MS,
  getUserMCPAuthMap,
  validateOAuthCsrf,
  OAUTH_CSRF_COOKIE,
  setOAuthCsrfCookie,
  generateCheckAccess,
  validateOAuthSession,
  OAUTH_SESSION_COOKIE,
} = require('@librechat/api');
const {
  createMCPServerController,
  updateMCPServerController,
  deleteMCPServerController,
  getMCPServersList,
  getMCPServerById,
  getMCPTools,
} = require('~/server/controllers/mcp');
const {
  getOAuthReconnectionManager,
  getMCPServersRegistry,
  getFlowStateManager,
  getMCPManager,
} = require('~/config');
const {
  getServerConnectionStatus,
  resolveConfigServers,
  getMCPSetupData,
} = require('~/server/services/MCP');
const { requireJwtAuth, canAccessMCPServerResource } = require('~/server/middleware');
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
const { updateMCPServerTools } = require('~/server/services/Config/mcp');
const { reinitMCPServer } = require('~/server/services/Tools/mcp');
const { getLogStores } = require('~/cache');
const db = require('~/models');

const router = Router();

const OAUTH_CSRF_COOKIE_PATH = '/api/mcp';

const checkMCPUsePermissions = generateCheckAccess({
  permissionType: PermissionTypes.MCP_SERVERS,
  permissions: [Permissions.USE],
  getRoleByName: db.getRoleByName,
});

const checkMCPCreate = generateCheckAccess({
  permissionType: PermissionTypes.MCP_SERVERS,
  permissions: [Permissions.USE, Permissions.CREATE],
  getRoleByName: db.getRoleByName,
});

/**
 * Get all MCP tools available to the user
 * Returns only MCP tools, completely decoupled from regular LibreChat tools
 */
router.get('/tools', requireJwtAuth, async (req, res) => {
  return getMCPTools(req, res);
});

/**
 * Initiate OAuth flow
 * This endpoint is called when the user clicks the auth link in the UI
 */
router.get('/:serverName/oauth/initiate', requireJwtAuth, setOAuthSession, async (req, res) => {
  try {
    const { serverName } = req.params;
    const { userId, flowId } = req.query;
    const user = req.user;

    // Verify the userId matches the authenticated user
    if (userId !== user.id) {
      return res.status(403).json({ error: 'User mismatch' });
    }

    logger.debug('[MCP OAuth] Initiate request', { serverName, userId, flowId });

    const flowsCache = getLogStores(CacheKeys.FLOWS);
    const flowManager = getFlowStateManager(flowsCache);

    /** Flow state to retrieve OAuth config */
    const flowState = await flowManager.getFlowState(flowId, 'mcp_oauth');
    if (!flowState) {
      logger.error('[MCP OAuth] Flow state not found', { flowId });
      return res.status(404).json({ error: 'Flow not found' });
    }

    const { serverUrl, oauth: oauthConfig } = flowState.metadata || {};
    if (!serverUrl || !oauthConfig) {
      logger.error('[MCP OAuth] Missing server URL or OAuth config in flow state');
      return res.status(400).json({ error: 'Invalid flow state' });
    }

    const configServers = await resolveConfigServers(req);
    const oauthHeaders = await getOAuthHeaders(serverName, userId, configServers);
    const {
      authorizationUrl,
      flowId: oauthFlowId,
      flowMetadata,
    } = await MCPOAuthHandler.initiateOAuthFlow(
      serverName,
      serverUrl,
      userId,
      oauthHeaders,
      oauthConfig,
    );

    logger.debug('[MCP OAuth] OAuth flow initiated', { oauthFlowId, authorizationUrl });

    await MCPOAuthHandler.storeStateMapping(flowMetadata.state, oauthFlowId, flowManager);
    setOAuthCsrfCookie(res, oauthFlowId, OAUTH_CSRF_COOKIE_PATH);
    res.redirect(authorizationUrl);
  } catch (error) {
    logger.error('[MCP OAuth] Failed to initiate OAuth', error);
    res.status(500).json({ error: 'Failed to initiate OAuth' });
  }
});

/**
 * OAuth callback handler
 * This handles the OAuth callback after the user has authorized the application
 */
router.get('/:serverName/oauth/callback', async (req, res) => {
  const basePath = getBasePath();
  try {
    const { serverName } = req.params;
    const { code, state, error: oauthError } = req.query;

    logger.debug('[MCP OAuth] Callback received', {
      serverName,
      code: code ? 'present' : 'missing',
      state,
      error: oauthError,
    });

    if (oauthError) {
      logger.error('[MCP OAuth] OAuth error received', { error: oauthError });
      return res.redirect(
        `${basePath}/oauth/error?error=${encodeURIComponent(String(oauthError))}`,
      );
    }

    if (!code || typeof code !== 'string') {
      logger.error('[MCP OAuth] Missing or invalid code');
      return res.redirect(`${basePath}/oauth/error?error=missing_code`);
    }

    if (!state || typeof state !== 'string') {
      logger.error('[MCP OAuth] Missing or invalid state');
      return res.redirect(`${basePath}/oauth/error?error=missing_state`);
    }

    const flowsCache = getLogStores(CacheKeys.FLOWS);
    const flowManager = getFlowStateManager(flowsCache);

    const flowId = await MCPOAuthHandler.resolveStateToFlowId(state, flowManager);
    if (!flowId) {
      logger.error('[MCP OAuth] Could not resolve state to flow ID', { state });
      return res.redirect(`${basePath}/oauth/error?error=invalid_state`);
    }
    logger.debug('[MCP OAuth] Resolved flow ID from state', { flowId });

    const flowParts = flowId.split(':');
    if (flowParts.length < 2 || !flowParts[0] || !flowParts[1]) {
      logger.error('[MCP OAuth] Invalid flow ID format', { flowId });
      return res.redirect(`${basePath}/oauth/error?error=invalid_state`);
    }

    const [flowUserId] = flowParts;

    const hasCsrf = validateOAuthCsrf(req, res, flowId, OAUTH_CSRF_COOKIE_PATH);
    const hasSession = !hasCsrf && validateOAuthSession(req, flowUserId);
    let hasActiveFlow = false;
    if (!hasCsrf && !hasSession) {
      const pendingFlow = await flowManager.getFlowState(flowId, 'mcp_oauth');
      const pendingAge = pendingFlow?.createdAt ? Date.now() - pendingFlow.createdAt : Infinity;
      hasActiveFlow = pendingFlow?.status === 'PENDING' && pendingAge < PENDING_STALE_MS;
      if (hasActiveFlow) {
        logger.debug(
          '[MCP OAuth] CSRF/session cookies absent, validating via active PENDING flow',
          {
            flowId,
          },
        );
      }
    }

    if (!hasCsrf && !hasSession && !hasActiveFlow) {
      logger.error(
        '[MCP OAuth] CSRF validation failed: no valid CSRF cookie, session cookie, or active flow',
        {
          flowId,
          hasCsrfCookie: !!req.cookies?.[OAUTH_CSRF_COOKIE],
          hasSessionCookie: !!req.cookies?.[OAUTH_SESSION_COOKIE],
        },
      );
      return res.redirect(`${basePath}/oauth/error?error=csrf_validation_failed`);
    }

    logger.debug('[MCP OAuth] Getting flow state for flowId: ' + flowId);
    const flowState = await MCPOAuthHandler.getFlowState(flowId, flowManager);

    if (!flowState) {
      logger.error('[MCP OAuth] Flow state not found for flowId:', flowId);
      return res.redirect(`${basePath}/oauth/error?error=invalid_state`);
    }

    logger.debug('[MCP OAuth] Flow state details', {
      serverName: flowState.serverName,
      userId: flowState.userId,
      hasMetadata: !!flowState.metadata,
      hasClientInfo: !!flowState.clientInfo,
      hasCodeVerifier: !!flowState.codeVerifier,
    });

    /** Check if this flow has already been completed (idempotency protection) */
    const currentFlowState = await flowManager.getFlowState(flowId, 'mcp_oauth');
    if (currentFlowState?.status === 'COMPLETED') {
      logger.warn('[MCP OAuth] Flow already completed, preventing duplicate token exchange', {
        flowId,
        serverName,
      });
      return res.redirect(`${basePath}/oauth/success?serverName=${encodeURIComponent(serverName)}`);
    }

    logger.debug('[MCP OAuth] Completing OAuth flow');
    if (!flowState.oauthHeaders) {
      logger.warn(
        '[MCP OAuth] oauthHeaders absent from flow state — config-source server oauth_headers will be empty',
        { serverName, flowId },
      );
    }
    const oauthHeaders =
      flowState.oauthHeaders ?? (await getOAuthHeaders(serverName, flowState.userId));
    const tokens = await MCPOAuthHandler.completeOAuthFlow(flowId, code, flowManager, oauthHeaders);
    logger.info('[MCP OAuth] OAuth flow completed, tokens received in callback route');

    /** Persist tokens immediately so reconnection uses fresh credentials */
    if (flowState?.userId && tokens) {
      try {
        await MCPTokenStorage.storeTokens({
          userId: flowState.userId,
          serverName,
          tokens,
          createToken: db.createToken,
          updateToken: db.updateToken,
          findToken: db.findToken,
          clientInfo: flowState.clientInfo,
          metadata: flowState.metadata,
        });
        logger.debug('[MCP OAuth] Stored OAuth tokens prior to reconnection', {
          serverName,
          userId: flowState.userId,
        });
      } catch (error) {
        logger.error('[MCP OAuth] Failed to store OAuth tokens after callback', error);
        throw error;
      }

      /**
       * Clear any cached `mcp_get_tokens` flow result so subsequent lookups
       * re-fetch the freshly stored credentials instead of returning stale nulls.
       */
      if (typeof flowManager?.deleteFlow === 'function') {
        try {
          await flowManager.deleteFlow(flowId, 'mcp_get_tokens');
        } catch (error) {
          logger.warn('[MCP OAuth] Failed to clear cached token flow state', error);
        }
      }
    }

    try {
      const mcpManager = getMCPManager(flowState.userId);
      logger.debug(`[MCP OAuth] Attempting to reconnect ${serverName} with new OAuth tokens`);

      if (flowState.userId !== 'system') {
        const user = { id: flowState.userId };

        const userConnection = await mcpManager.getUserConnection({
          user,
          serverName,
          flowManager,
          tokenMethods: {
            findToken: db.findToken,
            updateToken: db.updateToken,
            createToken: db.createToken,
            deleteTokens: db.deleteTokens,
          },
        });

        logger.info(
          `[MCP OAuth] Successfully reconnected ${serverName} for user ${flowState.userId}`,
        );

        // clear any reconnection attempts
        const oauthReconnectionManager = getOAuthReconnectionManager();
        oauthReconnectionManager.clearReconnection(flowState.userId, serverName);

        const tools = await userConnection.fetchTools();
        await updateMCPServerTools({
          userId: flowState.userId,
          serverName,
          tools,
        });
      } else {
        logger.debug(`[MCP OAuth] System-level OAuth completed for ${serverName}`);
      }
    } catch (error) {
      logger.warn(
        `[MCP OAuth] Failed to reconnect ${serverName} after OAuth, but tokens are saved:`,
        error,
      );
    }

    /** ID of the flow that the tool/connection is waiting for */
    const toolFlowId = flowState.metadata?.toolFlowId;
    if (toolFlowId) {
      logger.debug('[MCP OAuth] Completing tool flow', { toolFlowId });
      const completed = await flowManager.completeFlow(toolFlowId, 'mcp_oauth', tokens);
      if (!completed) {
        logger.warn(
          '[MCP OAuth] Tool flow state not found during completion — waiter will time out',
          { toolFlowId },
        );
      }
    }

    /** Redirect to success page with serverName */
    const redirectUrl = `${basePath}/oauth/success?serverName=${encodeURIComponent(serverName)}`;
    res.redirect(redirectUrl);
  } catch (error) {
    logger.error('[MCP OAuth] OAuth callback error', error);
    res.redirect(`${basePath}/oauth/error?error=callback_failed`);
  }
});

/**
 * Get OAuth tokens for a completed flow
 * This is primarily for user-level OAuth flows
 */
router.get('/oauth/tokens/:flowId', requireJwtAuth, async (req, res) => {
  try {
    const { flowId } = req.params;
    const user = req.user;

    if (!user?.id) {
      return res.status(401).json({ error: 'User not authenticated' });
    }

    if (!flowId.startsWith(`${user.id}:`) && !flowId.startsWith('system:')) {
      return res.status(403).json({ error: 'Access denied' });
    }

    const flowsCache = getLogStores(CacheKeys.FLOWS);
    const flowManager = getFlowStateManager(flowsCache);

    const flowState = await flowManager.getFlowState(flowId, 'mcp_oauth');
    if (!flowState) {
      return res.status(404).json({ error: 'Flow not found' });
    }

    if (flowState.status !== 'COMPLETED') {
      return res.status(400).json({ error: 'Flow not completed' });
    }

    res.json({ tokens: flowState.result });
  } catch (error) {
    logger.error('[MCP OAuth] Failed to get tokens', error);
    res.status(500).json({ error: 'Failed to get tokens' });
  }
});

/**
 * Set CSRF binding cookie for OAuth flows initiated outside of HTTP request/response
 * (e.g. during chat via SSE). The frontend should call this before opening the OAuth URL
 * so the callback can verify the browser matches the flow initiator.
 */
router.post('/:serverName/oauth/bind', requireJwtAuth, setOAuthSession, async (req, res) => {
  try {
    const { serverName } = req.params;
    const user = req.user;

    if (!user?.id) {
      return res.status(401).json({ error: 'User not authenticated' });
    }

    const flowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
    setOAuthCsrfCookie(res, flowId, OAUTH_CSRF_COOKIE_PATH);

    res.json({ success: true });
  } catch (error) {
    logger.error('[MCP OAuth] Failed to set CSRF binding cookie', error);
    res.status(500).json({ error: 'Failed to bind OAuth flow' });
  }
});

/**
 * Check OAuth flow status
 * This endpoint can be used to poll the status of an OAuth flow
 */
router.get('/oauth/status/:flowId', requireJwtAuth, async (req, res) => {
  try {
    const { flowId } = req.params;
    const user = req.user;

    if (!user?.id) {
      return res.status(401).json({ error: 'User not authenticated' });
    }

    if (!flowId.startsWith(`${user.id}:`) && !flowId.startsWith('system:')) {
      return res.status(403).json({ error: 'Access denied' });
    }

    const flowsCache = getLogStores(CacheKeys.FLOWS);
    const flowManager = getFlowStateManager(flowsCache);

    const flowState = await flowManager.getFlowState(flowId, 'mcp_oauth');
    if (!flowState) {
      return res.status(404).json({ error: 'Flow not found' });
    }

    res.json({
      status: flowState.status,
      completed: flowState.status === 'COMPLETED',
      failed: flowState.status === 'FAILED',
      error: flowState.error,
    });
  } catch (error) {
    logger.error('[MCP OAuth] Failed to get flow status', error);
    res.status(500).json({ error: 'Failed to get flow status' });
  }
});

/**
 * Cancel OAuth flow
 * This endpoint cancels a pending OAuth flow
 */
router.post('/oauth/cancel/:serverName', requireJwtAuth, async (req, res) => {
  try {
    const { serverName } = req.params;
    const user = req.user;

    if (!user?.id) {
      return res.status(401).json({ error: 'User not authenticated' });
    }

    logger.info(`[MCP OAuth Cancel] Cancelling OAuth flow for ${serverName} by user ${user.id}`);

    const flowsCache = getLogStores(CacheKeys.FLOWS);
    const flowManager = getFlowStateManager(flowsCache);
    const flowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
    const flowState = await flowManager.getFlowState(flowId, 'mcp_oauth');

    if (!flowState) {
      logger.debug(`[MCP OAuth Cancel] No active flow found for ${serverName}`);
      return res.json({
        success: true,
        message: 'No active OAuth flow to cancel',
      });
    }

    await flowManager.failFlow(flowId, 'mcp_oauth', 'User cancelled OAuth flow');

    logger.info(`[MCP OAuth Cancel] Successfully cancelled OAuth flow for ${serverName}`);

    res.json({
      success: true,
      message: `OAuth flow for ${serverName} cancelled successfully`,
    });
  } catch (error) {
    logger.error('[MCP OAuth Cancel] Failed to cancel OAuth flow', error);
    res.status(500).json({ error: 'Failed to cancel OAuth flow' });
  }
});

/**
 * Reinitialize MCP server
 * This endpoint allows reinitializing a specific MCP server
 */
router.post(
  '/:serverName/reinitialize',
  requireJwtAuth,
  checkMCPUsePermissions,
  setOAuthSession,
  async (req, res) => {
    try {
      const { serverName } = req.params;
      const user = createSafeUser(req.user);

      if (!user.id) {
        return res.status(401).json({ error: 'User not authenticated' });
      }

      logger.info(`[MCP Reinitialize] Reinitializing server: ${serverName}`);

      const mcpManager = getMCPManager();
      const configServers = await resolveConfigServers(req);
      const serverConfig = await getMCPServersRegistry().getServerConfig(
        serverName,
        user.id,
        configServers,
      );
      if (!serverConfig) {
        return res.status(404).json({
          error: `MCP server '${serverName}' not found in configuration`,
        });
      }

      await mcpManager.disconnectUserConnection(user.id, serverName);
      logger.info(
        `[MCP Reinitialize] Disconnected existing user connection for server: ${serverName}`,
      );

      /** @type {Record<string, Record<string, string>> | undefined} */
      let userMCPAuthMap;
      if (serverConfig.customUserVars && typeof serverConfig.customUserVars === 'object') {
        userMCPAuthMap = await getUserMCPAuthMap({
          userId: user.id,
          servers: [serverName],
          findPluginAuthsByKeys: db.findPluginAuthsByKeys,
        });
      }

      const result = await reinitMCPServer({
        user,
        serverName,
        serverConfig,
        configServers,
        userMCPAuthMap,
      });

      if (!result) {
        return res.status(500).json({ error: 'Failed to reinitialize MCP server for user' });
      }

      const { success, message, oauthRequired, oauthUrl } = result;

      if (oauthRequired) {
        const flowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
        setOAuthCsrfCookie(res, flowId, OAUTH_CSRF_COOKIE_PATH);
      }

      res.json({
        success,
        message,
        oauthUrl,
        serverName,
        oauthRequired,
      });
    } catch (error) {
      logger.error('[MCP Reinitialize] Unexpected error', error);
      res.status(500).json({ error: 'Internal server error' });
    }
  },
);

/**
 * Get connection status for all MCP servers
 * This endpoint returns all app-level and user-scoped connection statuses from MCPManager without disconnecting idle connections
 */
router.get('/connection/status', requireJwtAuth, async (req, res) => {
  try {
    const user = req.user;

    if (!user?.id) {
      return res.status(401).json({ error: 'User not authenticated' });
    }

    const { mcpConfig, appConnections, userConnections, oauthServers } = await getMCPSetupData(
      user.id,
      { role: user.role, tenantId: getTenantId() },
    );
    const connectionStatus = {};

    for (const [serverName, config] of Object.entries(mcpConfig)) {
      try {
        connectionStatus[serverName] = await getServerConnectionStatus(
          user.id,
          serverName,
          config,
          appConnections,
          userConnections,
          oauthServers,
        );
      } catch (error) {
        const message = `Failed to get status for server "${serverName}"`;
        logger.error(`[MCP Connection Status] ${message},`, error);
        connectionStatus[serverName] = {
          connectionState: 'error',
          requiresOAuth: oauthServers.has(serverName),
          error: message,
        };
      }
    }

    res.json({
      success: true,
      connectionStatus,
    });
  } catch (error) {
    logger.error('[MCP Connection Status] Failed to get connection status', error);
    res.status(500).json({ error: 'Failed to get connection status' });
  }
});

/**
 * Get connection status for a single MCP server
 * This endpoint returns the connection status for a specific server for a given user
 */
router.get('/connection/status/:serverName', requireJwtAuth, async (req, res) => {
  try {
    const user = req.user;
    const { serverName } = req.params;

    if (!user?.id) {
      return res.status(401).json({ error: 'User not authenticated' });
    }

    const { mcpConfig, appConnections, userConnections, oauthServers } = await getMCPSetupData(
      user.id,
      { role: user.role, tenantId: getTenantId() },
    );

    if (!mcpConfig[serverName]) {
      return res
        .status(404)
        .json({ error: `MCP server '${serverName}' not found in configuration` });
    }

    const serverStatus = await getServerConnectionStatus(
      user.id,
      serverName,
      mcpConfig[serverName],
      appConnections,
      userConnections,
      oauthServers,
    );

    res.json({
      success: true,
      serverName,
      connectionStatus: serverStatus.connectionState,
      requiresOAuth: serverStatus.requiresOAuth,
    });
  } catch (error) {
    logger.error(
      `[MCP Per-Server Status] Failed to get connection status for ${req.params.serverName}`,
      error,
    );
    res.status(500).json({ error: 'Failed to get connection status' });
  }
});

/**
 * Check which authentication values exist for a specific MCP server
 * This endpoint returns only boolean flags indicating if values are set, not the actual values
 */
router.get('/:serverName/auth-values', requireJwtAuth, checkMCPUsePermissions, async (req, res) => {
  try {
    const { serverName } = req.params;
    const user = req.user;

    if (!user?.id) {
      return res.status(401).json({ error: 'User not authenticated' });
    }

    const configServers = await resolveConfigServers(req);
    const serverConfig = await getMCPServersRegistry().getServerConfig(
      serverName,
      user.id,
      configServers,
    );
    if (!serverConfig) {
      return res.status(404).json({
        error: `MCP server '${serverName}' not found in configuration`,
      });
    }

    const pluginKey = `${Constants.mcp_prefix}${serverName}`;
    const authValueFlags = {};

    if (serverConfig.customUserVars && typeof serverConfig.customUserVars === 'object') {
      for (const varName of Object.keys(serverConfig.customUserVars)) {
        try {
          const value = await getUserPluginAuthValue(user.id, varName, false, pluginKey);
          authValueFlags[varName] = !!(value && value.length > 0);
        } catch (err) {
          logger.error(
            `[MCP Auth Value Flags] Error checking ${varName} for user ${user.id}:`,
            err,
          );
          authValueFlags[varName] = false;
        }
      }
    }

    res.json({
      success: true,
      serverName,
      authValueFlags,
    });
  } catch (error) {
    logger.error(
      `[MCP Auth Value Flags] Failed to check auth value flags for ${req.params.serverName}`,
      error,
    );
    res.status(500).json({ error: 'Failed to check auth value flags' });
  }
});

async function getOAuthHeaders(serverName, userId, configServers) {
  const serverConfig = await getMCPServersRegistry().getServerConfig(
    serverName,
    userId,
    configServers,
  );
  return serverConfig?.oauth_headers ?? {};
}

/**
 * MCP Server CRUD Routes (User-Managed MCP Servers)
 */

/**
 * Get list of accessible MCP servers
 * @route GET /api/mcp/servers
 * @param {Object} req.query - Query parameters for pagination and search
 * @param {number} [req.query.limit] - Number of results per page
 * @param {string} [req.query.after] - Pagination cursor
 * @param {string} [req.query.search] - Search query for title/description
 * @returns {MCPServerListResponse} 200 - Success response - application/json
 */
router.get('/servers', requireJwtAuth, checkMCPUsePermissions, getMCPServersList);

/**
 * Create a new MCP server
 * @route POST /api/mcp/servers
 * @param {MCPServerCreateParams} req.body - The MCP server creation parameters.
 * @returns {MCPServer} 201 - Success response - application/json
 */
router.post('/servers', requireJwtAuth, checkMCPCreate, createMCPServerController);

/**
 * Get single MCP server by ID
 * @route GET /api/mcp/servers/:serverName
 * @param {string} req.params.serverName - MCP server identifier.
 * @returns {MCPServer} 200 - Success response - application/json
 */
router.get(
  '/servers/:serverName',
  requireJwtAuth,
  checkMCPUsePermissions,
  canAccessMCPServerResource({
    requiredPermission: PermissionBits.VIEW,
    resourceIdParam: 'serverName',
  }),
  getMCPServerById,
);

/**
 * Update MCP server
 * @route PATCH /api/mcp/servers/:serverName
 * @param {string} req.params.serverName - MCP server identifier.
 * @param {MCPServerUpdateParams} req.body - The MCP server update parameters.
 * @returns {MCPServer} 200 - Success response - application/json
 */
router.patch(
  '/servers/:serverName',
  requireJwtAuth,
  checkMCPCreate,
  canAccessMCPServerResource({
    requiredPermission: PermissionBits.EDIT,
    resourceIdParam: 'serverName',
  }),
  updateMCPServerController,
);

/**
 * Delete MCP server
 * @route DELETE /api/mcp/servers/:serverName
 * @param {string} req.params.serverName - MCP server identifier.
 * @returns {Object} 200 - Success response - application/json
 */
router.delete(
  '/servers/:serverName',
  requireJwtAuth,
  checkMCPCreate,
  canAccessMCPServerResource({
    requiredPermission: PermissionBits.DELETE,
    resourceIdParam: 'serverName',
  }),
  deleteMCPServerController,
);

module.exports = router;