🛸 feat: Remote Agent Access with External API Support (#11503)

* 🪪 feat: Microsoft Graph Access Token Placeholder for MCP Servers (#10867)

* feat: MCP Graph Token env var

* Addressing Copilot remarks

* Addressed Copilot review remarks

* Fixed graphtokenservice mock in MCP test suite

* fix: remove unnecessary type check and cast in resolveGraphTokensInRecord

* ci: add Graph Token integration tests in MCPManager

* refactor: update user type definitions to use Partial<IUser> in multiple functions

* test: enhance MCP tests for graph token processing and user placeholder resolution

- Added comprehensive tests to validate the interaction between preProcessGraphTokens and processMCPEnv.
- Ensured correct resolution of graph tokens and user placeholders in various configurations.
- Mocked OIDC utilities to facilitate testing of token extraction and validation.
- Verified that original options remain unchanged after processing.
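
The placeholder-resolution flow these tests exercise can be sketched roughly as follows. The `{{LIBRECHAT_USER_*}}` placeholder convention exists in LibreChat's MCP env processing, but the exact field list and helper shape here are illustrative assumptions, not the implementation:

```typescript
// Illustrative sketch of user-placeholder resolution over an MCP env record.
// Field names and the helper signature are assumptions for this example.
interface PartialUser {
  id?: string;
  email?: string;
}

function processUserPlaceholders(
  env: Record<string, string>,
  user: PartialUser,
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(env)) {
    out[key] = value
      .replace(/\{\{LIBRECHAT_USER_ID\}\}/g, user.id ?? '')
      .replace(/\{\{LIBRECHAT_USER_EMAIL\}\}/g, user.email ?? '');
  }
  // A new record is returned; the original env stays unchanged,
  // matching the "original options remain unchanged" expectation above.
  return out;
}
```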

* chore: import order

* chore: imports

---------

Co-authored-by: Danny Avila <danny@librechat.ai>

* WIP: OpenAI-compatible API for LibreChat agents

- Added OpenAIChatCompletionController for handling chat completions.
- Introduced ListModelsController and GetModelController for listing and retrieving agent details.
- Created routes for OpenAI API endpoints, including /v1/chat/completions and /v1/models.
- Developed event handlers for streaming responses in OpenAI format.
- Implemented request validation and error handling for API interactions.
- Integrated content aggregation and response formatting to align with OpenAI specifications.

This commit establishes a foundational API for interacting with LibreChat agents in a manner compatible with OpenAI's chat completion interface.
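
A minimal client-side sketch of a request against the new endpoint. Only the paths (`/v1/chat/completions`, `/v1/models`) and the agent-id-as-"model" convention come from this commit; the agent id, header value, and helper name below are placeholders:

```typescript
// Builds an OpenAI-style chat completion request targeting a LibreChat agent.
// The agent id is passed as "model", per the OpenAI spec mapping in this commit.
function buildAgentCompletionRequest(agentId: string, userMessage: string, stream = false) {
  return {
    url: '/v1/chat/completions',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Placeholder API key; real keys come from the agent API key management UI
      Authorization: 'Bearer <agent-api-key>',
    },
    body: JSON.stringify({
      model: agentId,
      messages: [{ role: 'user', content: userMessage }],
      stream,
    }),
  };
}
```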

* refactor: OpenAI-spec content aggregation for improved performance and clarity

* fix: OpenAI chat completion controller with safe user handling for correct tool loading

* refactor: Remove conversation ID from OpenAI response context and related handlers

* refactor: OpenAI chat completion handling with streaming support

- Introduced a lightweight tracker for streaming responses, allowing for efficient tracking of emitted content and usage metadata.
- Updated the OpenAIChatCompletionController to utilize the new tracker, improving the handling of streaming and non-streaming responses.
- Refactored event handlers to accommodate the new streaming logic, ensuring proper management of tool calls and content aggregation.
- Adjusted response handling to streamline error reporting during streaming sessions.

* WIP: Open Responses API with core service, types, and handlers

- Added Open Responses API module with comprehensive types and enums.
- Implemented core service for processing requests, including validation and input conversion.
- Developed event handlers for streaming responses and non-streaming aggregation.
- Established response building logic and error handling mechanisms.
- Created detailed types for input and output content, ensuring compliance with Open Responses specification.

* feat: Implement response storage and retrieval in Open Responses API

- Added functionality to save user input messages and assistant responses to the database when the `store` flag is set to true.
- Introduced a new endpoint to retrieve stored responses by ID, allowing users to access previous interactions.
- Enhanced the response creation process to include database operations for conversation and message storage.
- Implemented tests to validate the storage and retrieval of responses, ensuring correct behavior for both existing and non-existent response IDs.
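
A rough sketch of the store-and-retrieve flow. The commit only states that responses are stored when `store: true` and retrievable by ID; the `/v1/responses` paths below mirror the OpenAI Responses API and are an assumption, as is the `resp_` id prefix:

```typescript
// Hypothetical request builders for the store/retrieve round trip.
function buildStoreRequest(agentId: string, input: string) {
  return {
    url: '/v1/responses', // assumed path, mirroring the OpenAI Responses API
    method: 'POST',
    body: JSON.stringify({ model: agentId, input, store: true }),
  };
}

function retrieveResponseUrl(responseId: string): string {
  // Retrieval-by-ID endpoint shape is an assumption
  return `/v1/responses/${encodeURIComponent(responseId)}`;
}
```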

* refactor: Open Responses API with additional token tracking and validation

- Added support for tracking cached tokens in response usage, improving token management.
- Updated response structure to include new properties for top log probabilities and detailed usage metrics.
- Enhanced tests to validate the presence and types of new properties in API responses, ensuring compliance with updated specifications.
- Refactored response handling to accommodate new fields and improve overall clarity and performance.

* refactor: Update reasoning event handlers and types for consistency

- Renamed reasoning text events to simplify naming conventions, changing `emitReasoningTextDelta` to `emitReasoningDelta` and `emitReasoningTextDone` to `emitReasoningDone`.
- Updated event types in the API to reflect the new naming, ensuring consistency across the codebase.
- Added `logprobs` property to output events for enhanced tracking of log probabilities.

* feat: Add validation for streaming events in Open Responses API tests

* feat: Implement response.created event in Open Responses API

- Added emitResponseCreated function to emit the response.created event as the first event in the streaming sequence, adhering to the Open Responses specification.
- Updated createResponse function to emit response.created followed by response.in_progress.
- Enhanced tests to validate the order of emitted events, ensuring response.created is triggered before response.in_progress.
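
The ordering invariant those tests assert can be captured in a small checker; the event type strings come from the Open Responses spec as referenced above, while the helper itself is illustrative:

```typescript
// Returns true when response.created is the first event in the stream and
// precedes response.in_progress, as the Open Responses spec requires.
function firstEventsAreOrdered(eventTypes: string[]): boolean {
  const createdIdx = eventTypes.indexOf('response.created');
  const inProgressIdx = eventTypes.indexOf('response.in_progress');
  return createdIdx === 0 && inProgressIdx > createdIdx;
}
```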

* feat: Responses API with attachment event handling

- Introduced `createResponsesToolEndCallback` to handle attachment events in the Responses API, emitting `librechat:attachment` events as per the Open Responses extension specification.
- Updated the `createResponse` function to utilize the new callback for processing tool outputs and emitting attachments during streaming.
- Added helper functions for writing attachment events and defined types for attachment data, ensuring compatibility with the Open Responses protocol.
- Enhanced tests to validate the integration of attachment events within the Responses API workflow.
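
The emitted frame might look like the sketch below. The `librechat:attachment` event name comes from this commit; the attachment payload fields and data-only SSE framing are assumptions for illustration:

```typescript
// Serializes an attachment event as a data-only SSE frame (payload shape assumed).
interface AttachmentEvent {
  type: 'librechat:attachment';
  attachment: { filename: string; url: string };
}

function formatAttachmentSSE(evt: AttachmentEvent): string {
  return `data: ${JSON.stringify(evt)}\n\n`;
}
```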

* WIP: remote agent auth

* fix: Improve loading state handling in AgentApiKeys component

- Updated the rendering logic to conditionally display loading spinner and API keys based on the loading state.
- Removed unnecessary imports and streamlined the component for better readability.

* refactor: Update API key access handling in routes

- Replaced `checkAccess` with `generateCheckAccess` for improved access control.
- Consolidated access checks into a single `checkApiKeyAccess` function, enhancing code readability and maintainability.
- Streamlined route definitions for creating, listing, retrieving, and deleting API keys.

* fix: Add permission handling for REMOTE_AGENT resource type

* feat: Enhance permission handling for REMOTE_AGENT resources

- Updated the deleteAgent and deleteUserAgents functions to handle permissions for both AGENT and REMOTE_AGENT resource types.
- Introduced new functions to enrich REMOTE_AGENT principals and backfill permissions for AGENT owners.
- Modified createAgentHandler and duplicateAgentHandler to grant permissions for REMOTE_AGENT alongside AGENT.
- Added utility functions for retrieving effective permissions for REMOTE_AGENT resources, ensuring consistent access control across the application.

* refactor: Rename and update roles for remote agent access

- Changed role name from API User to Editor in translation files for clarity.
- Updated default editor role ID from REMOTE_AGENT_USER to REMOTE_AGENT_EDITOR in resource configurations.
- Adjusted role localization to reflect the new Editor role.
- Modified access permissions to align with the updated role definitions across the application.

* feat: Introduce remote agent permissions and update access handling

- Added support for REMOTE_AGENTS in permission schemas, including use, create, share, and share_public permissions.
- Updated the interface configuration to include remote agent settings.
- Modified middleware and API key access checks to align with the new remote agent permission structure.
- Enhanced role defaults to incorporate remote agent permissions, ensuring consistent access control across the application.
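
The permission names (USE, CREATE, SHARE, SHARE_PUBLIC) come from this commit; the role names and default values in this sketch are illustrative assumptions, not the shipped defaults:

```typescript
// Hypothetical REMOTE_AGENTS role defaults; values are illustrative only.
type RemoteAgentPerm = 'USE' | 'CREATE' | 'SHARE' | 'SHARE_PUBLIC';

const roleDefaults: Record<'ADMIN' | 'USER', Record<RemoteAgentPerm, boolean>> = {
  ADMIN: { USE: true, CREATE: true, SHARE: true, SHARE_PUBLIC: true },
  USER: { USE: true, CREATE: true, SHARE: false, SHARE_PUBLIC: false },
};

function hasRemoteAgentPermission(role: 'ADMIN' | 'USER', perm: RemoteAgentPerm): boolean {
  return roleDefaults[role][perm] === true;
}
```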

* refactor: Update AgentApiKeys component and permissions handling

- Refactored the AgentApiKeys component to improve structure and readability, including the introduction of ApiKeysContent for better separation of concerns.
- Updated CreateKeyDialog to accept an onKeyCreated callback, enhancing its functionality.
- Adjusted permission checks in Data component to use REMOTE_AGENTS and USE permissions, aligning with recent permission schema changes.
- Enhanced loading state handling and dialog management for a smoother user experience.

* refactor: Update remote agent access checks in API routes

- Replaced existing access checks with `generateCheckAccess` for remote agents in the API keys and agents routes.
- Introduced specific permission checks for creating, listing, retrieving, and deleting API keys, enhancing access control.
- Improved code structure by consolidating permission handling for remote agents across multiple routes.

* fix: Correct query parameters in ApiKeysContent component

- Updated the useGetAgentApiKeysQuery call to include an object for the enabled parameter, ensuring proper functionality when the component is open.
- This change improves the handling of API key retrieval based on the component's open state.

* feat: Implement remote agents permissions and update API routes

- Added new API route for updating remote agents permissions, enhancing role management capabilities.
- Introduced remote agents permissions handling in the AgentApiKeys component, including a dedicated settings dialog.
- Updated localization files to include new remote agents permission labels for better user experience.
- Refactored data provider to support remote agents permissions updates, ensuring consistent access control across the application.

* feat: Add remote agents permissions to role schema and interface

- Introduced new permissions for REMOTE_AGENTS in the role schema, including USE, CREATE, SHARE, and SHARE_PUBLIC.
- Updated the IRole interface to reflect the new remote agents permissions structure, enhancing role management capabilities.

* feat: Add remote agents settings button to API keys dialog

* feat: Update AgentFooter to include remote agent sharing permissions

- Refactored access checks to incorporate permissions for sharing remote agents.
- Enhanced conditional rendering logic to allow sharing by users with remote agent permissions.
- Improved loading state handling for remote agent permissions, ensuring a smoother user experience.

* refactor: Update API key creation access check and localization strings

- Replaced the access check for creating API keys to use the existing remote agents access check.
- Updated localization strings to correct the descriptions for remote agent permissions, ensuring clarity in the user interface.

* fix: resource permission mapping to include remote agents

- Changed the resourceToPermissionMap to use a Partial<Record> for better flexibility.
- Added mapping for REMOTE_AGENT permissions, enhancing the sharing capabilities for remote agents.

* feat: Implement remote access checks for agent models

- Enhanced ListModelsController and GetModelController to include checks for user permissions on remote agents.
- Integrated findAccessibleResources to filter agents based on VIEW permission for REMOTE_AGENT.
- Updated response handling to ensure users can only access agents they have permissions for, improving security and access control.

* fix: Update user parameter type in processUserPlaceholders function

- Changed the user parameter type in the processUserPlaceholders function from Partial<Partial<IUser>> to Partial<IUser> for improved type clarity and consistency.

* refactor: Simplify integration test structure by removing conditional describe

- Replaced conditional describeWithApiKey with a standard describe for all integration tests in responses.spec.js.
- This change enhances test clarity and ensures all tests are executed consistently, regardless of the SKIP_INTEGRATION_TESTS flag.

* test: Update AgentFooter tests to reflect new grant access dialog ID

- Changed test IDs for the grant access dialog in AgentFooter tests to include the resource type, ensuring accurate identification in the test cases.
- This update improves test clarity and aligns with recent changes in the component's implementation.

* test: Enhance integration tests for Open Responses API

- Updated integration tests in responses.spec.js to utilize an authRequest helper for consistent authorization handling across all test cases.
- Introduced a test user and API key creation to improve test setup and ensure proper permission checks for remote agents.
- Added checks for existing access roles and created necessary roles if they do not exist, enhancing test reliability and coverage.

* feat: Extend accessRole schema to include remoteAgent resource type

- Updated the accessRole schema to add 'remoteAgent' to the resourceType enum, enhancing the flexibility of role assignments and permissions management.

* test: Refactor test setup to create a minimal Express app for responses routes, enhancing test structure and maintainability

* test: Enhance abort.spec.js by mocking additional modules for improved test isolation

- Updated the test setup in abort.spec.js to include actual implementations of '@librechat/data-schemas' and '@librechat/api' while maintaining mock functionality.
- This change improves test reliability and ensures that the tests are more representative of the actual module behavior.

* refactor: Update conversation ID generation to use UUID

- Replaced the nanoid with uuidv4 for generating conversation IDs in the createResponse function, enhancing uniqueness and consistency in ID generation.
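
The commit uses the `uuid` package's `uuidv4`; the sketch below substitutes Node's built-in `randomUUID`, which produces the same RFC 4122 v4 format, so it stays dependency-free:

```typescript
// RFC 4122 v4 conversation ids via the Node standard library
// (the actual code uses uuidv4 from the uuid package).
import { randomUUID } from 'node:crypto';

function newConversationId(): string {
  return randomUUID();
}
```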

* test: Add remote agent access roles to AccessRole model tests

- Included additional access roles for remote agents (REMOTE_AGENT_EDITOR, REMOTE_AGENT_OWNER, REMOTE_AGENT_VIEWER) in the AccessRole model tests to ensure comprehensive coverage of role assignments and permissions management.

* chore: Add deletion of user agent API keys in user deletion process

- Updated the user deletion process in UserController and delete-user.js to include the removal of user agent API keys, ensuring comprehensive cleanup of user data upon account deletion.

* test: Add remote agents permissions to permissions.spec.ts

- Enhanced the permissions tests by including comprehensive permission settings for remote agents across various scenarios, ensuring accurate validation of access controls for remote agent roles.

* chore: Update remote agents translations for clarity and consistency

- Removed outdated remote agents translation entries and added revised entries to improve clarity on API key creation and sharing permissions for remote agents. This enhances user understanding of the available functionalities.

* feat: Add indexing and TTL for agent API keys

- Introduced an index on the `key` field for improved query performance.
- Added a TTL index on the `expiresAt` field to enable automatic cleanup of expired API keys, ensuring efficient management of stored keys.
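
As index specifications, the two additions look roughly like this (shape follows MongoDB's `createIndex` arguments; `expireAfterSeconds: 0` makes each document expire at its own `expiresAt` timestamp):

```typescript
// Index specs matching the commit description; exact schema wiring omitted.
const keyIndex = { keys: { key: 1 }, options: {} };
const ttlIndex = { keys: { expiresAt: 1 }, options: { expireAfterSeconds: 0 } };
```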

* chore: Update API route documentation for clarity

- Revised comments in the agents route file to clarify the handling of API key authentication.
- Removed outdated endpoint listings to streamline the documentation and focus on current functionality.

---------

Co-authored-by: Max Sanna <max@maxsanna.com>
Commit 6279ea8dd7 (parent dd4bbd38fc)
Author: Danny Avila, 2026-01-26 10:50:30 -05:00
Signed with GPG key ID BF31EEB2C5CA0956
70 changed files with 8926 additions and 50 deletions


@@ -6,6 +6,8 @@ export * from './initialize';
export * from './legacy';
export * from './memory';
export * from './migration';
export * from './openai';
export * from './resources';
export * from './responses';
export * from './run';
export * from './validation';


@@ -0,0 +1,454 @@
/**
* OpenAI-compatible event handlers for agent streaming.
*
* These handlers convert LibreChat's internal graph events into OpenAI-compatible
* streaming format (SSE with chat.completion.chunk objects).
*/
import type { Response as ServerResponse } from 'express';
import type {
ChatCompletionChunkChoice,
OpenAIResponseContext,
ChatCompletionChunk,
CompletionUsage,
ToolCall,
} from './types';
/**
* Create a chat completion chunk in OpenAI format
*/
export function createChunk(
context: OpenAIResponseContext,
delta: ChatCompletionChunkChoice['delta'],
finishReason: ChatCompletionChunkChoice['finish_reason'] = null,
usage?: CompletionUsage,
): ChatCompletionChunk {
return {
id: context.requestId,
object: 'chat.completion.chunk',
created: context.created,
model: context.model,
choices: [
{
index: 0,
delta,
finish_reason: finishReason,
},
],
...(usage && { usage }),
};
}
/**
* Write an SSE event to the response
*/
export function writeSSE(res: ServerResponse, data: ChatCompletionChunk | string): void {
if (typeof data === 'string') {
res.write(`data: ${data}\n\n`);
} else {
res.write(`data: ${JSON.stringify(data)}\n\n`);
}
}
/**
* Lightweight tracker for streaming responses.
* Only tracks what's needed for finish_reason and usage - doesn't store content.
*/
export interface OpenAIStreamTracker {
/** Whether any text content was emitted */
hasText: boolean;
/** Whether any reasoning content was emitted */
hasReasoning: boolean;
/** Accumulated tool calls by index */
toolCalls: Map<number, ToolCall>;
/** Accumulated usage metadata */
usage: {
promptTokens: number;
completionTokens: number;
reasoningTokens: number;
};
/** Mark that text was emitted */
addText: () => void;
/** Mark that reasoning was emitted */
addReasoning: () => void;
}
/**
* Create a lightweight stream tracker (doesn't store content)
*/
export function createOpenAIStreamTracker(): OpenAIStreamTracker {
const tracker: OpenAIStreamTracker = {
hasText: false,
hasReasoning: false,
toolCalls: new Map(),
usage: {
promptTokens: 0,
completionTokens: 0,
reasoningTokens: 0,
},
addText: () => {
tracker.hasText = true;
},
addReasoning: () => {
tracker.hasReasoning = true;
},
};
return tracker;
}
/**
* Content aggregator for non-streaming responses.
* Accumulates full text content, reasoning, and tool calls.
* Uses arrays for O(n) text accumulation instead of O(n²) string concatenation.
*/
export interface OpenAIContentAggregator {
/** Accumulated text chunks */
textChunks: string[];
/** Accumulated reasoning/thinking chunks */
reasoningChunks: string[];
/** Accumulated tool calls by index */
toolCalls: Map<number, ToolCall>;
/** Accumulated usage metadata */
usage: {
promptTokens: number;
completionTokens: number;
reasoningTokens: number;
};
/** Get accumulated text (joins chunks) */
getText: () => string;
/** Get accumulated reasoning (joins chunks) */
getReasoning: () => string;
/** Add text chunk */
addText: (text: string) => void;
/** Add reasoning chunk */
addReasoning: (text: string) => void;
}
/**
* Create a content aggregator for non-streaming responses
*/
export function createOpenAIContentAggregator(): OpenAIContentAggregator {
const textChunks: string[] = [];
const reasoningChunks: string[] = [];
return {
textChunks,
reasoningChunks,
toolCalls: new Map(),
usage: {
promptTokens: 0,
completionTokens: 0,
reasoningTokens: 0,
},
getText: () => textChunks.join(''),
getReasoning: () => reasoningChunks.join(''),
addText: (text: string) => textChunks.push(text),
addReasoning: (text: string) => reasoningChunks.push(text),
};
}
/**
* Handler configuration for OpenAI streaming
*/
export interface OpenAIStreamHandlerConfig {
res: ServerResponse;
context: OpenAIResponseContext;
tracker: OpenAIStreamTracker;
}
/**
* Graph event types from @librechat/agents
*/
export const GraphEvents = {
CHAT_MODEL_END: 'on_chat_model_end',
TOOL_END: 'on_tool_end',
CHAT_MODEL_STREAM: 'on_chat_model_stream',
ON_RUN_STEP: 'on_run_step',
ON_RUN_STEP_DELTA: 'on_run_step_delta',
ON_RUN_STEP_COMPLETED: 'on_run_step_completed',
ON_MESSAGE_DELTA: 'on_message_delta',
ON_REASONING_DELTA: 'on_reasoning_delta',
} as const;
/**
* Step types from librechat-data-provider
*/
export const StepTypes = {
MESSAGE_CREATION: 'message_creation',
TOOL_CALLS: 'tool_calls',
} as const;
/**
* Event data interfaces
*/
export interface MessageDeltaData {
id?: string;
content?: Array<{ type: string; text?: string }>;
}
export interface RunStepDeltaData {
id?: string;
delta?: {
type?: string;
tool_calls?: Array<{
index?: number;
id?: string;
type?: string;
function?: {
name?: string;
arguments?: string;
};
}>;
};
}
export interface ToolEndData {
output?: {
name?: string;
tool_call_id?: string;
content?: string;
};
}
export interface ModelEndData {
output?: {
usage_metadata?: {
input_tokens?: number;
output_tokens?: number;
model?: string;
};
};
}
/**
* Event handler interface
*/
export interface EventHandler {
handle(
event: string,
data: unknown,
metadata?: Record<string, unknown>,
graph?: unknown,
): void | Promise<void>;
}
/**
* Handler for message delta events - streams text content
*/
export class OpenAIMessageDeltaHandler implements EventHandler {
constructor(private config: OpenAIStreamHandlerConfig) {}
handle(_event: string, data: MessageDeltaData): void {
const content = data?.content;
if (!content || !Array.isArray(content)) {
return;
}
for (const part of content) {
if (part.type === 'text' && part.text) {
this.config.tracker.addText();
const chunk = createChunk(this.config.context, { content: part.text });
writeSSE(this.config.res, chunk);
}
}
}
}
/**
* Handler for run step delta events - streams tool calls
*/
export class OpenAIRunStepDeltaHandler implements EventHandler {
constructor(private config: OpenAIStreamHandlerConfig) {}
handle(_event: string, data: RunStepDeltaData): void {
const delta = data?.delta;
if (!delta || delta.type !== StepTypes.TOOL_CALLS) {
return;
}
const toolCalls = delta.tool_calls;
if (!toolCalls || !Array.isArray(toolCalls)) {
return;
}
for (const tc of toolCalls) {
if (tc.index === undefined) {
continue;
}
// Initialize tool call in tracker if needed
let trackedTc = this.config.tracker.toolCalls.get(tc.index);
if (!trackedTc && tc.id) {
trackedTc = {
id: tc.id,
type: 'function',
function: {
name: '',
arguments: '',
},
};
this.config.tracker.toolCalls.set(tc.index, trackedTc);
}
// Build the streaming delta
const streamDelta: ChatCompletionChunkChoice['delta'] = {
tool_calls: [
{
index: tc.index,
...(tc.id && { id: tc.id }),
...(tc.type && { type: tc.type as 'function' }),
...(tc.function && {
function: {
...(tc.function.name && { name: tc.function.name }),
...(tc.function.arguments && { arguments: tc.function.arguments }),
},
}),
},
],
};
// Update tracked tool call
if (trackedTc) {
if (tc.function?.name) {
trackedTc.function.name += tc.function.name;
}
if (tc.function?.arguments) {
trackedTc.function.arguments += tc.function.arguments;
}
}
const chunk = createChunk(this.config.context, streamDelta);
writeSSE(this.config.res, chunk);
}
}
}
/**
* Handler for run step events - sends initial tool call info
*/
export class OpenAIRunStepHandler implements EventHandler {
constructor(private config: OpenAIStreamHandlerConfig) {}
handle(_event: string, data: { stepDetails?: { type?: string } }): void {
// Run step events are primarily for the LibreChat UI; deltas carry the streamed content.
// This handler is a no-op for the OpenAI format.
if (data?.stepDetails?.type === StepTypes.TOOL_CALLS) {
// Tool calls will be streamed via delta events
}
}
}
/**
* Handler for model end events - captures usage
*/
export class OpenAIModelEndHandler implements EventHandler {
constructor(private config: OpenAIStreamHandlerConfig) {}
handle(_event: string, data: ModelEndData): void {
const usage = data?.output?.usage_metadata;
if (!usage) {
return;
}
this.config.tracker.usage.promptTokens += usage.input_tokens ?? 0;
this.config.tracker.usage.completionTokens += usage.output_tokens ?? 0;
}
}
/**
* Handler for chat model stream events
*/
export class OpenAIChatModelStreamHandler implements EventHandler {
handle(): void {
// Handled by message delta handler
}
}
/**
* Handler for tool end events
*/
export class OpenAIToolEndHandler implements EventHandler {
handle(): void {
// Tool results don't need to be streamed in OpenAI format
// They're used internally by the agent
}
}
/**
* Handler for reasoning delta events.
* Streams reasoning/thinking content using the `delta.reasoning` field (OpenRouter convention).
*/
export class OpenAIReasoningDeltaHandler implements EventHandler {
constructor(private config: OpenAIStreamHandlerConfig) {}
handle(_event: string, data: MessageDeltaData): void {
const content = data?.content;
if (!content || !Array.isArray(content)) {
return;
}
for (const part of content) {
if (part.type === 'text' && part.text) {
// Mark that reasoning was emitted
this.config.tracker.addReasoning();
// Stream as delta.reasoning (OpenRouter convention)
const chunk = createChunk(this.config.context, { reasoning: part.text });
writeSSE(this.config.res, chunk);
}
}
}
}
/**
* Create all handlers for OpenAI streaming format
*/
export function createOpenAIHandlers(
config: OpenAIStreamHandlerConfig,
): Record<string, EventHandler> {
return {
[GraphEvents.ON_MESSAGE_DELTA]: new OpenAIMessageDeltaHandler(config),
[GraphEvents.ON_RUN_STEP_DELTA]: new OpenAIRunStepDeltaHandler(config),
[GraphEvents.ON_RUN_STEP]: new OpenAIRunStepHandler(config),
[GraphEvents.ON_RUN_STEP_COMPLETED]: new OpenAIRunStepHandler(config),
[GraphEvents.CHAT_MODEL_END]: new OpenAIModelEndHandler(config),
[GraphEvents.CHAT_MODEL_STREAM]: new OpenAIChatModelStreamHandler(),
[GraphEvents.TOOL_END]: new OpenAIToolEndHandler(),
[GraphEvents.ON_REASONING_DELTA]: new OpenAIReasoningDeltaHandler(config),
};
}
/**
* Send the final chunk with finish_reason and optional usage
*/
export function sendFinalChunk(
config: OpenAIStreamHandlerConfig,
finishReason: ChatCompletionChunkChoice['finish_reason'] = 'stop',
): void {
const { res, context, tracker } = config;
// Determine finish reason based on content
let reason = finishReason;
if (tracker.toolCalls.size > 0 && !tracker.hasText) {
reason = 'tool_calls';
}
// Build usage object with reasoning token details (OpenRouter/OpenAI convention)
const usage: CompletionUsage = {
prompt_tokens: tracker.usage.promptTokens,
completion_tokens: tracker.usage.completionTokens,
total_tokens: tracker.usage.promptTokens + tracker.usage.completionTokens,
};
// Add reasoning token breakdown if there are reasoning tokens
if (tracker.usage.reasoningTokens > 0) {
usage.completion_tokens_details = {
reasoning_tokens: tracker.usage.reasoningTokens,
};
}
const finalChunk = createChunk(context, {}, reason, usage);
writeSSE(res, finalChunk);
// Send [DONE] marker
writeSSE(res, '[DONE]');
}


@@ -0,0 +1,52 @@
/**
* OpenAI-compatible API for LibreChat agents.
*
* This module provides an OpenAI v1/chat/completions compatible interface
* for interacting with LibreChat agents remotely via API.
*
* @example
* ```typescript
* import { createAgentChatCompletion, listAgentModels } from '@librechat/api';
*
* // POST /v1/chat/completions
* app.post('/v1/chat/completions', async (req, res) => {
* await createAgentChatCompletion(req, res, dependencies);
* });
*
* // GET /v1/models
* app.get('/v1/models', async (req, res) => {
* await listAgentModels(req, res, { getAgents });
* });
* ```
*
* Request format:
* ```json
* {
* "model": "agent_id_here",
* "messages": [
* {"role": "user", "content": "Hello!"}
* ],
* "stream": true
* }
* ```
*
* The "model" parameter should be the agent ID you want to invoke.
* Use the /v1/models endpoint to list available agents.
*/
export * from './types';
export * from './handlers';
export {
createAgentChatCompletion,
listAgentModels,
convertMessages,
validateRequest,
isChatCompletionValidationFailure,
createErrorResponse,
sendErrorResponse,
buildNonStreamingResponse,
type ChatCompletionDependencies,
type ChatCompletionValidationResult,
type ChatCompletionValidationSuccess,
type ChatCompletionValidationFailure,
} from './service';


@@ -0,0 +1,554 @@
/**
* OpenAI-compatible chat completions service for agents.
*
* This service provides an OpenAI v1/chat/completions compatible API for
* interacting with LibreChat agents. The agent_id is passed as the "model"
* parameter per OpenAI spec.
*
* Usage:
* ```typescript
* import { createAgentChatCompletion } from '@librechat/api';
*
* // In your Express route handler:
* app.post('/v1/chat/completions', async (req, res) => {
* await createAgentChatCompletion(req, res, {
* getAgent: db.getAgent,
* // ... other dependencies
* });
* });
* ```
*/
import { nanoid } from 'nanoid';
import type { Response as ServerResponse, Request } from 'express';
import type {
ChatCompletionResponse,
OpenAIResponseContext,
ChatCompletionRequest,
OpenAIErrorResponse,
CompletionUsage,
ChatMessage,
ToolCall,
} from './types';
import type { OpenAIStreamHandlerConfig, EventHandler } from './handlers';
import {
createOpenAIContentAggregator,
createOpenAIStreamTracker,
createOpenAIHandlers,
sendFinalChunk,
createChunk,
writeSSE,
} from './handlers';
/**
* Dependencies for the chat completion service
*/
export interface ChatCompletionDependencies {
/** Get agent by ID */
getAgent: (params: { id: string }) => Promise<Agent | null>;
/** Initialize agent for use */
initializeAgent: (params: InitializeAgentParams) => Promise<InitializedAgent>;
/** Load agent tools */
loadAgentTools?: LoadToolsFn;
/** Get models config */
getModelsConfig?: (req: Request) => Promise<unknown>;
/** Validate agent model */
validateAgentModel?: (
params: unknown,
) => Promise<{ isValid: boolean; error?: { message: string } }>;
/** Log violation */
logViolation?: (
req: Request,
res: ServerResponse,
type: string,
info: unknown,
score: number,
) => Promise<void>;
/** Create agent run */
createRun?: CreateRunFn;
/** App config */
appConfig?: AppConfig;
}
/**
* Agent type from librechat-data-provider
*/
interface Agent {
id: string;
name?: string;
model?: string;
provider: string;
tools?: string[];
instructions?: string;
model_parameters?: Record<string, unknown>;
tool_resources?: Record<string, unknown>;
tool_options?: Record<string, unknown>;
[key: string]: unknown;
}
/**
* Initialized agent type - note: after initialization, tools become structured tool objects
*/
interface InitializedAgent {
id: string;
name?: string;
model?: string;
provider: string;
/** After initialization, tools are structured tool objects, not strings */
tools: unknown[];
instructions?: string;
model_parameters?: Record<string, unknown>;
tool_resources?: Record<string, unknown>;
tool_options?: Record<string, unknown>;
attachments: unknown[];
toolContextMap: Record<string, unknown>;
maxContextTokens: number;
userMCPAuthMap?: Record<string, Record<string, string>>;
[key: string]: unknown;
}
/**
* Initialize agent params
*/
interface InitializeAgentParams {
req: Request;
res: ServerResponse;
agent: Agent;
conversationId?: string | null;
parentMessageId?: string | null;
requestFiles?: unknown[];
loadTools?: LoadToolsFn;
endpointOption?: Record<string, unknown>;
allowedProviders: Set<string>;
isInitialAgent?: boolean;
}
/**
* Tool loading function type
*/
type LoadToolsFn = (params: {
req: Request;
res: ServerResponse;
provider: string;
agentId: string;
tools: string[];
model: string | null;
tool_options: unknown;
tool_resources: unknown;
}) => Promise<{
tools: unknown[];
toolContextMap: Record<string, unknown>;
userMCPAuthMap?: Record<string, Record<string, string>>;
} | null>;
/**
* Create run function type
*/
type CreateRunFn = (params: {
agents: unknown[];
messages: unknown[];
runId: string;
signal: AbortSignal;
customHandlers: Record<string, EventHandler>;
requestBody: Record<string, unknown>;
user: Record<string, unknown>;
tokenCounter?: (message: unknown) => number;
}) => Promise<{
Graph?: unknown;
processStream: (
input: { messages: unknown[] },
config: Record<string, unknown>,
options: Record<string, unknown>,
) => Promise<void>;
} | null>;
/**
* App config type
*/
interface AppConfig {
endpoints?: Record<string, unknown>;
[key: string]: unknown;
}
/**
* Convert OpenAI messages to LibreChat format
*/
export function convertMessages(messages: ChatMessage[]): unknown[] {
return messages.map((msg) => {
let content: string | unknown[];
if (typeof msg.content === 'string') {
content = msg.content;
} else if (msg.content) {
content = msg.content.map((part) => {
if (part.type === 'text') {
return { type: 'text', text: part.text };
}
if (part.type === 'image_url') {
return { type: 'image_url', image_url: part.image_url };
}
return part;
});
} else {
content = '';
}
return {
role: msg.role,
content,
...(msg.name && { name: msg.name }),
...(msg.tool_calls && { tool_calls: msg.tool_calls }),
...(msg.tool_call_id && { tool_call_id: msg.tool_call_id }),
};
});
}
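As a quick illustration of the mapping above, here is a standalone sketch of the core normalization performed by `convertMessages` (types trimmed for brevity; the real function also forwards `name`, `tool_calls`, and `tool_call_id`):

```typescript
// Minimal standalone sketch of the content normalization in convertMessages above.
type Part = { type: 'text'; text: string } | { type: 'image_url'; image_url: { url: string } };
type Msg = { role: string; content: string | Part[] | null };

function convert(messages: Msg[]): Array<{ role: string; content: string | Part[] }> {
  return messages.map((msg) => ({
    role: msg.role,
    // Null content is normalized to an empty string; strings and part arrays pass through.
    content: msg.content ?? '',
  }));
}

const out = convert([
  { role: 'user', content: 'hello' },
  { role: 'assistant', content: null },
]);
```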
/**
* Create an error response in OpenAI format
*/
export function createErrorResponse(
message: string,
type = 'invalid_request_error',
code: string | null = null,
): OpenAIErrorResponse {
return {
error: {
message,
type,
param: null,
code,
},
};
}
/**
* Send an error response
*/
export function sendErrorResponse(
res: ServerResponse,
statusCode: number,
message: string,
type = 'invalid_request_error',
code: string | null = null,
): void {
res.status(statusCode).json(createErrorResponse(message, type, code));
}
/**
* Validation result types for chat completion requests
*/
export type ChatCompletionValidationSuccess = { valid: true; request: ChatCompletionRequest };
export type ChatCompletionValidationFailure = { valid: false; error: string };
export type ChatCompletionValidationResult =
| ChatCompletionValidationSuccess
| ChatCompletionValidationFailure;
/**
* Type guard for validation failure
*/
export function isChatCompletionValidationFailure(
result: ChatCompletionValidationResult,
): result is ChatCompletionValidationFailure {
return !result.valid;
}
/**
* Validate the chat completion request
*/
export function validateRequest(body: unknown): ChatCompletionValidationResult {
if (!body || typeof body !== 'object') {
return { valid: false, error: 'Request body is required' };
}
const request = body as Record<string, unknown>;
if (!request.model || typeof request.model !== 'string') {
return { valid: false, error: 'model (agent_id) is required' };
}
if (!request.messages || !Array.isArray(request.messages)) {
return { valid: false, error: 'messages array is required' };
}
if (request.messages.length === 0) {
return { valid: false, error: 'messages array cannot be empty' };
}
// Validate each message has a valid role
for (let i = 0; i < request.messages.length; i++) {
const msg = request.messages[i] as Record<string, unknown>;
if (!msg.role || typeof msg.role !== 'string') {
return { valid: false, error: `messages[${i}].role is required` };
}
if (!['system', 'user', 'assistant', 'tool'].includes(msg.role)) {
return {
valid: false,
error: `messages[${i}].role must be one of: system, user, assistant, tool`,
};
}
}
return { valid: true, request: request as unknown as ChatCompletionRequest };
}
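For instance, a request missing `messages` or using an unknown role is rejected before any agent lookup. A minimal sketch mirroring the same checks (simplified shapes, not the exported function):

```typescript
// Minimal sketch of the validation rules in validateRequest above.
type Result = { valid: true } | { valid: false; error: string };

const ROLES = new Set(['system', 'user', 'assistant', 'tool']);

function validate(body: unknown): Result {
  if (!body || typeof body !== 'object') {
    return { valid: false, error: 'Request body is required' };
  }
  const req = body as { model?: unknown; messages?: unknown };
  if (typeof req.model !== 'string') {
    return { valid: false, error: 'model (agent_id) is required' };
  }
  if (!Array.isArray(req.messages) || req.messages.length === 0) {
    return { valid: false, error: 'messages array is required' };
  }
  for (const [i, msg] of (req.messages as Array<{ role?: unknown }>).entries()) {
    if (typeof msg.role !== 'string' || !ROLES.has(msg.role)) {
      return { valid: false, error: `messages[${i}].role is invalid` };
    }
  }
  return { valid: true };
}

const bad = validate({ model: 'agent_abc', messages: [{ role: 'robot' }] });
const ok = validate({ model: 'agent_abc', messages: [{ role: 'user', content: 'hi' }] });
```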
/**
* Build a non-streaming response from aggregated content
*/
export function buildNonStreamingResponse(
context: OpenAIResponseContext,
text: string,
reasoning: string,
toolCalls: Map<number, ToolCall>,
usage: CompletionUsage,
): ChatCompletionResponse {
const toolCallsArray = Array.from(toolCalls.values());
const finishReason = toolCallsArray.length > 0 && !text ? 'tool_calls' : 'stop';
return {
id: context.requestId,
object: 'chat.completion',
created: context.created,
model: context.model,
choices: [
{
index: 0,
message: {
role: 'assistant',
content: text || null,
...(reasoning && { reasoning }),
...(toolCallsArray.length > 0 && { tool_calls: toolCallsArray }),
},
finish_reason: finishReason,
},
],
usage,
};
}
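The `finish_reason` above follows a simple rule: tool calls with no accompanying text yield `tool_calls`, everything else yields `stop`. Sketched in isolation:

```typescript
// finish_reason selection, mirroring the rule in buildNonStreamingResponse above.
function finishReason(toolCallCount: number, text: string): 'tool_calls' | 'stop' {
  return toolCallCount > 0 && !text ? 'tool_calls' : 'stop';
}

const toolsOnly = finishReason(2, '');
const withText = finishReason(2, 'done');
const plainText = finishReason(0, 'done');
```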
/**
* Main handler for OpenAI-compatible chat completions with agents.
*
* This function:
* 1. Validates the request
* 2. Looks up the agent by ID (model parameter)
* 3. Initializes the agent with tools
* 4. Runs the agent and streams/returns the response
*
* @param req - Express request object
* @param res - Express response object
* @param deps - Dependencies for the service
*/
export async function createAgentChatCompletion(
req: Request,
res: ServerResponse,
deps: ChatCompletionDependencies,
): Promise<void> {
// Validate request
const validation = validateRequest(req.body);
if (isChatCompletionValidationFailure(validation)) {
sendErrorResponse(res, 400, validation.error);
return;
}
const request = validation.request;
const agentId = request.model;
const requestedStreaming = request.stream === true;
// Look up the agent
const agent = await deps.getAgent({ id: agentId });
if (!agent) {
sendErrorResponse(
res,
404,
`Agent not found: ${agentId}`,
'invalid_request_error',
'model_not_found',
);
return;
}
// Generate IDs
const requestId = `chatcmpl-${nanoid()}`;
const conversationId = request.conversation_id ?? nanoid();
const created = Math.floor(Date.now() / 1000);
// Build response context
const context: OpenAIResponseContext = {
created,
requestId,
model: agentId,
};
// Set up abort controller
const abortController = new AbortController();
// Handle client disconnect
req.on('close', () => {
abortController.abort();
});
try {
// Build allowed providers set (empty = all allowed)
const allowedProviders = new Set<string>();
// Initialize the agent first to check for disableStreaming
const initializedAgent = await deps.initializeAgent({
req,
res,
agent,
conversationId,
parentMessageId: request.parent_message_id,
loadTools: deps.loadAgentTools,
endpointOption: {
endpoint: agent.provider,
model_parameters: agent.model_parameters ?? {},
},
allowedProviders,
isInitialAgent: true,
});
// Determine if streaming is enabled (check both request and agent config)
const streamingDisabled = !!(initializedAgent.model_parameters as Record<string, unknown>)
?.disableStreaming;
const isStreaming = requestedStreaming && !streamingDisabled;
// Create tracker for streaming or aggregator for non-streaming
const tracker = isStreaming ? createOpenAIStreamTracker() : null;
const aggregator = isStreaming ? null : createOpenAIContentAggregator();
// Set up response headers for streaming
if (isStreaming) {
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
res.setHeader('X-Accel-Buffering', 'no');
res.flushHeaders();
// Send initial chunk with role
const initialChunk = createChunk(context, { role: 'assistant' });
writeSSE(res, initialChunk);
}
// Create handler config (only used for streaming)
const handlerConfig: OpenAIStreamHandlerConfig | null =
isStreaming && tracker
? {
res,
context,
tracker,
}
: null;
// Create event handlers
const eventHandlers = isStreaming && handlerConfig ? createOpenAIHandlers(handlerConfig) : {};
// Convert messages to internal format
const messages = convertMessages(request.messages);
// Create and run the agent
if (deps.createRun) {
const userId = (req as unknown as { user?: { id?: string } }).user?.id ?? 'api-user';
const run = await deps.createRun({
agents: [initializedAgent],
messages,
runId: requestId,
signal: abortController.signal,
customHandlers: eventHandlers,
requestBody: {
messageId: requestId,
conversationId,
},
user: { id: userId },
});
if (run) {
await run.processStream(
{ messages },
{
runName: 'AgentRun',
configurable: {
thread_id: conversationId,
user_id: userId,
},
signal: abortController.signal,
streamMode: 'values',
version: 'v2',
},
{},
);
}
}
// Finalize response
if (isStreaming && handlerConfig) {
sendFinalChunk(handlerConfig);
res.end();
} else if (aggregator) {
// Build and send non-streaming response
const usage: CompletionUsage = {
prompt_tokens: aggregator.usage.promptTokens,
completion_tokens: aggregator.usage.completionTokens,
total_tokens: aggregator.usage.promptTokens + aggregator.usage.completionTokens,
...(aggregator.usage.reasoningTokens > 0 && {
completion_tokens_details: { reasoning_tokens: aggregator.usage.reasoningTokens },
}),
};
const response = buildNonStreamingResponse(
context,
aggregator.getText(),
aggregator.getReasoning(),
aggregator.toolCalls,
usage,
);
res.json(response);
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'An error occurred';
// If headers were already sent, streaming has started; emit the error in-stream
if (res.headersSent) {
const errorChunk = createChunk(context, { content: `\n\nError: ${errorMessage}` }, 'stop');
writeSSE(res, errorChunk);
writeSSE(res, '[DONE]');
res.end();
} else {
sendErrorResponse(res, 500, errorMessage, 'server_error');
}
}
}
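On the client side, the streaming branch above produces standard `data: {json}` SSE frames terminated by `data: [DONE]`. A hedged sketch of a consumer for that wire format (frame shape assumed from the chunk types in this PR):

```typescript
// Accumulate assistant text from SSE frames as emitted by the streaming branch above.
// Assumed frame payload shape: { choices: [{ delta: { content?: string } }] }.
function collectText(frames: string[]): string {
  let text = '';
  for (const frame of frames) {
    if (!frame.startsWith('data: ')) continue;
    const payload = frame.slice('data: '.length).trim();
    if (payload === '[DONE]') break;
    const chunk = JSON.parse(payload) as { choices: Array<{ delta: { content?: string } }> };
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

const collected = collectText([
  'data: {"choices":[{"delta":{"role":"assistant"}}]}',
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]',
]);
```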
/**
* List available agents/models
*
* Provides a /v1/models-compatible endpoint that lists the available agents.
*/
export async function listAgentModels(
_req: Request,
res: ServerResponse,
deps: { getAgents: (params: Record<string, unknown>) => Promise<Agent[]> },
): Promise<void> {
try {
const agents = await deps.getAgents({});
const models = agents.map((agent) => ({
id: agent.id,
object: 'model',
created: Math.floor(Date.now() / 1000),
owned_by: 'librechat',
permission: [],
root: agent.id,
parent: null,
// Extensions
name: agent.name,
provider: agent.provider,
}));
res.json({
object: 'list',
data: models,
});
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'Failed to list models';
sendErrorResponse(res, 500, errorMessage, 'server_error');
}
}

@@ -0,0 +1,194 @@
/**
* OpenAI-compatible types for the agent chat completions API.
* These types follow the OpenAI API spec for /v1/chat/completions.
*
* Note: this API accepts an agent ID in place of the OpenAI "model" parameter.
* In the future, this will be extended to support the Responses API.
*/
/**
* Content part types for OpenAI format
*/
export interface OpenAITextContentPart {
type: 'text';
text: string;
}
export interface OpenAIImageContentPart {
type: 'image_url';
image_url: {
url: string;
detail?: 'auto' | 'low' | 'high';
};
}
export type OpenAIContentPart = OpenAITextContentPart | OpenAIImageContentPart;
/**
* Tool call in OpenAI format
*/
export interface ToolCall {
id: string;
type: 'function';
function: {
name: string;
arguments: string;
};
}
/**
* OpenAI chat message format
*/
export interface ChatMessage {
role: 'system' | 'user' | 'assistant' | 'tool';
content: string | OpenAIContentPart[] | null;
name?: string;
tool_calls?: ToolCall[];
tool_call_id?: string;
}
/**
* OpenAI chat completion request
*/
export interface ChatCompletionRequest {
/** Agent ID to invoke (maps to model in OpenAI spec) */
model: string;
/** Conversation messages */
messages: ChatMessage[];
/** Whether to stream the response */
stream?: boolean;
/** Maximum tokens to generate */
max_tokens?: number;
/** Temperature for sampling */
temperature?: number;
/** Top-p sampling */
top_p?: number;
/** Frequency penalty */
frequency_penalty?: number;
/** Presence penalty */
presence_penalty?: number;
/** Stop sequences */
stop?: string | string[];
/** User identifier */
user?: string;
/** Conversation ID (LibreChat extension) */
conversation_id?: string;
/** Parent message ID (LibreChat extension) */
parent_message_id?: string;
}
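A minimal payload satisfying this interface might look like the following (the agent ID is a placeholder, not a real agent):

```typescript
// Minimal ChatCompletionRequest payload; 'agent_abc123' is an illustrative agent ID.
const request = {
  model: 'agent_abc123',
  messages: [{ role: 'user' as const, content: 'Summarize this repo' }],
  stream: true,
};
```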
/**
* Token usage information
*/
export interface CompletionUsage {
prompt_tokens: number;
completion_tokens: number;
total_tokens: number;
/** Detailed breakdown of output tokens (OpenRouter/OpenAI convention) */
completion_tokens_details?: {
reasoning_tokens?: number;
};
}
/**
* Non-streaming choice
*/
export interface ChatCompletionChoice {
index: number;
message: {
role: 'assistant';
content: string | null;
/** Reasoning/thinking content (OpenRouter convention) */
reasoning?: string | null;
tool_calls?: ToolCall[];
};
finish_reason: 'stop' | 'length' | 'tool_calls' | 'content_filter' | null;
}
/**
* Non-streaming response
*/
export interface ChatCompletionResponse {
id: string;
object: 'chat.completion';
created: number;
model: string;
choices: ChatCompletionChoice[];
usage?: CompletionUsage;
}
/**
* Streaming choice delta
* Note: `reasoning` field follows OpenRouter convention for streaming reasoning/thinking content
*/
export interface ChatCompletionChunkChoice {
index: number;
delta: {
role?: 'assistant';
content?: string | null;
/** Reasoning/thinking content (OpenRouter convention) */
reasoning?: string | null;
tool_calls?: Array<{
index: number;
id?: string;
type?: 'function';
function?: {
name?: string;
arguments?: string;
};
}>;
};
finish_reason: 'stop' | 'length' | 'tool_calls' | 'content_filter' | null;
}
/**
* Streaming response chunk
*/
export interface ChatCompletionChunk {
id: string;
object: 'chat.completion.chunk';
created: number;
model: string;
choices: ChatCompletionChunkChoice[];
/** Final chunk may include usage */
usage?: CompletionUsage;
}
/**
* SSE event wrapper for streaming
*/
export interface SSEEvent {
data: ChatCompletionChunk | '[DONE]';
}
/**
* Context for building OpenAI responses
*/
export interface OpenAIResponseContext {
/** Request ID for the chat completion */
requestId: string;
/** Model/agent ID */
model: string;
/** Created timestamp */
created: number;
}
/**
* Aggregated content for building final response
*/
export interface AggregatedContent {
text: string;
toolCalls: ToolCall[];
}
/**
* Error response in OpenAI format
*/
export interface OpenAIErrorResponse {
error: {
message: string;
type: string;
param: string | null;
code: string | null;
};
}

@@ -0,0 +1,914 @@
/**
* Open Responses API Handlers
*
* Semantic event emitters and response tracking for the Open Responses API.
* Events follow the Open Responses spec with proper lifecycle management.
*/
import type { Response as ServerResponse } from 'express';
import type {
Response,
ResponseContext,
ResponseEvent,
OutputItem,
MessageItem,
FunctionCallItem,
FunctionCallOutputItem,
ReasoningItem,
OutputTextContent,
ReasoningTextContent,
ItemStatus,
ResponseStatus,
} from './types';
/* =============================================================================
* RESPONSE TRACKER
* ============================================================================= */
/**
* Tracks the state of a response during streaming.
* Manages items, sequence numbers, and accumulated content.
*/
export interface ResponseTracker {
/** Current sequence number (monotonically increasing) */
sequenceNumber: number;
/** Output items being built */
items: OutputItem[];
/** Current message item (if any) */
currentMessage: MessageItem | null;
/** Current message content index */
currentContentIndex: number;
/** Current reasoning item (if any) */
currentReasoning: ReasoningItem | null;
/** Current reasoning content index */
currentReasoningContentIndex: number;
/** Map of function call items by call_id */
functionCalls: Map<string, FunctionCallItem>;
/** Map of function call outputs by call_id */
functionCallOutputs: Map<string, FunctionCallOutputItem>;
/** Accumulated text for current message */
accumulatedText: string;
/** Accumulated reasoning text */
accumulatedReasoningText: string;
/** Accumulated function call arguments by call_id */
accumulatedArguments: Map<string, string>;
/** Token usage */
usage: {
inputTokens: number;
outputTokens: number;
reasoningTokens: number;
cachedTokens: number;
};
/** Response status */
status: ResponseStatus;
/** Get next sequence number */
nextSequence: () => number;
}
/**
* Create a new response tracker
*/
export function createResponseTracker(): ResponseTracker {
const tracker: ResponseTracker = {
sequenceNumber: 0,
items: [],
currentMessage: null,
currentContentIndex: 0,
currentReasoning: null,
currentReasoningContentIndex: 0,
functionCalls: new Map(),
functionCallOutputs: new Map(),
accumulatedText: '',
accumulatedReasoningText: '',
accumulatedArguments: new Map(),
usage: {
inputTokens: 0,
outputTokens: 0,
reasoningTokens: 0,
cachedTokens: 0,
},
status: 'in_progress',
nextSequence: () => tracker.sequenceNumber++,
};
return tracker;
}
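Sequence numbers are handed out by the tracker itself via a closure, so every emitted event receives a strictly increasing number. In isolation:

```typescript
// Monotonic sequence numbering, as used by createResponseTracker above.
interface Tracker {
  sequenceNumber: number;
  nextSequence: () => number;
}

function createTracker(): Tracker {
  const tracker: Tracker = {
    sequenceNumber: 0,
    // Post-increment: returns the current number, then advances it.
    nextSequence: () => tracker.sequenceNumber++,
  };
  return tracker;
}

const t = createTracker();
const first = t.nextSequence();
const second = t.nextSequence();
```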
/* =============================================================================
* SSE EVENT WRITING
* ============================================================================= */
/**
* Write a semantic SSE event to the response.
* The `event:` field matches the `type` in the data payload.
*/
export function writeEvent(res: ServerResponse, event: ResponseEvent): void {
res.write(`event: ${event.type}\n`);
res.write(`data: ${JSON.stringify(event)}\n\n`);
}
/**
* Write the terminal [DONE] event
*/
export function writeDone(res: ServerResponse): void {
res.write('data: [DONE]\n\n');
}
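The two writers above frame each payload as a named SSE event whose `event:` field mirrors the `type` inside the JSON, followed by a terminal `[DONE]`. A standalone sketch with a stubbed response object:

```typescript
// Standalone sketch of the SSE framing used by writeEvent/writeDone above.
const written: string[] = [];
const res = {
  write: (s: string): void => {
    written.push(s);
  },
};

function writeEvent(event: { type: string; [k: string]: unknown }): void {
  // The event: field matches the type field in the JSON data payload.
  res.write(`event: ${event.type}\n`);
  res.write(`data: ${JSON.stringify(event)}\n\n`);
}

function writeDone(): void {
  res.write('data: [DONE]\n\n');
}

writeEvent({ type: 'response.created', sequence_number: 0 });
writeDone();
```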
/* =============================================================================
* RESPONSE BUILDING
* ============================================================================= */
/**
* Build a Response object from context and tracker
* Includes all required fields per Open Responses spec
*/
export function buildResponse(
context: ResponseContext,
tracker: ResponseTracker,
status: ResponseStatus = 'in_progress',
): Response {
const isCompleted = status === 'completed';
return {
// Required fields
id: context.responseId,
object: 'response',
created_at: context.createdAt,
completed_at: isCompleted ? Math.floor(Date.now() / 1000) : null,
status,
incomplete_details: null,
model: context.model,
previous_response_id: context.previousResponseId ?? null,
instructions: context.instructions ?? null,
output: tracker.items,
error: null,
tools: [],
tool_choice: 'auto',
truncation: 'disabled',
parallel_tool_calls: true,
text: { format: { type: 'text' } },
temperature: 1,
top_p: 1,
presence_penalty: 0,
frequency_penalty: 0,
top_logprobs: 0,
reasoning: null,
user: null,
usage: isCompleted
? {
input_tokens: tracker.usage.inputTokens,
output_tokens: tracker.usage.outputTokens,
total_tokens: tracker.usage.inputTokens + tracker.usage.outputTokens,
input_tokens_details: { cached_tokens: tracker.usage.cachedTokens },
output_tokens_details: { reasoning_tokens: tracker.usage.reasoningTokens },
}
: null,
max_output_tokens: null,
max_tool_calls: null,
store: false,
background: false,
service_tier: 'default',
metadata: {},
safety_identifier: null,
prompt_cache_key: null,
};
}
/* =============================================================================
* ITEM BUILDERS
* ============================================================================= */
let itemIdCounter = 0;
/**
* Generate a unique item ID
*/
export function generateItemId(prefix: string): string {
return `${prefix}_${Date.now().toString(36)}${(itemIdCounter++).toString(36)}`;
}
/**
* Create a new message item
*/
export function createMessageItem(status: ItemStatus = 'in_progress'): MessageItem {
return {
type: 'message',
id: generateItemId('msg'),
role: 'assistant',
status,
content: [],
};
}
/**
* Create a new function call item
*/
export function createFunctionCallItem(
callId: string,
name: string,
status: ItemStatus = 'in_progress',
): FunctionCallItem {
return {
type: 'function_call',
id: generateItemId('fc'),
call_id: callId,
name,
arguments: '',
status,
};
}
/**
* Create a new function call output item
*/
export function createFunctionCallOutputItem(
callId: string,
output: string,
status: ItemStatus = 'completed',
): FunctionCallOutputItem {
return {
type: 'function_call_output',
id: generateItemId('fco'),
call_id: callId,
output,
status,
};
}
/**
* Create a new reasoning item
*/
export function createReasoningItem(status: ItemStatus = 'in_progress'): ReasoningItem {
return {
type: 'reasoning',
id: generateItemId('reason'),
status,
content: [],
summary: [],
};
}
/**
* Create output text content
*/
export function createOutputTextContent(text: string = ''): OutputTextContent {
return {
type: 'output_text',
text,
annotations: [],
logprobs: [],
};
}
/**
* Create reasoning text content
*/
export function createReasoningTextContent(text: string = ''): ReasoningTextContent {
return {
type: 'reasoning_text',
text,
};
}
/* =============================================================================
* STREAMING EVENT EMITTERS
* ============================================================================= */
export interface StreamHandlerConfig {
res: ServerResponse;
context: ResponseContext;
tracker: ResponseTracker;
}
/**
* Emit response.created event
* This is the first event emitted per the Open Responses spec
*/
export function emitResponseCreated(config: StreamHandlerConfig): void {
const { res, context, tracker } = config;
const response = buildResponse(context, tracker, 'in_progress');
writeEvent(res, {
type: 'response.created',
sequence_number: tracker.nextSequence(),
response,
});
}
/**
* Emit response.in_progress event
*/
export function emitResponseInProgress(config: StreamHandlerConfig): void {
const { res, context, tracker } = config;
const response = buildResponse(context, tracker, 'in_progress');
writeEvent(res, {
type: 'response.in_progress',
sequence_number: tracker.nextSequence(),
response,
});
}
/**
* Emit response.completed event
*/
export function emitResponseCompleted(config: StreamHandlerConfig): void {
const { res, context, tracker } = config;
tracker.status = 'completed';
const response = buildResponse(context, tracker, 'completed');
writeEvent(res, {
type: 'response.completed',
sequence_number: tracker.nextSequence(),
response,
});
}
/**
* Emit response.failed event
*/
export function emitResponseFailed(
config: StreamHandlerConfig,
error: { type: string; message: string; code?: string },
): void {
const { res, context, tracker } = config;
tracker.status = 'failed';
const response = buildResponse(context, tracker, 'failed');
response.error = {
type: error.type as
| 'server_error'
| 'invalid_request'
| 'not_found'
| 'model_error'
| 'too_many_requests',
message: error.message,
code: error.code,
};
writeEvent(res, {
type: 'response.failed',
sequence_number: tracker.nextSequence(),
response,
});
}
/**
* Emit response.output_item.added event for a message
*/
export function emitMessageItemAdded(config: StreamHandlerConfig): MessageItem {
const { res, tracker } = config;
const item = createMessageItem('in_progress');
tracker.currentMessage = item;
tracker.currentContentIndex = 0;
tracker.accumulatedText = '';
tracker.items.push(item);
writeEvent(res, {
type: 'response.output_item.added',
sequence_number: tracker.nextSequence(),
output_index: tracker.items.length - 1,
item,
});
return item;
}
/**
* Emit response.output_item.done event for a message
*/
export function emitMessageItemDone(config: StreamHandlerConfig): void {
const { res, tracker } = config;
if (!tracker.currentMessage) {
return;
}
tracker.currentMessage.status = 'completed';
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
writeEvent(res, {
type: 'response.output_item.done',
sequence_number: tracker.nextSequence(),
output_index: outputIndex,
item: tracker.currentMessage,
});
tracker.currentMessage = null;
}
/**
* Emit response.content_part.added for text content
*/
export function emitTextContentPartAdded(config: StreamHandlerConfig): void {
const { res, tracker } = config;
if (!tracker.currentMessage) {
return;
}
const part = createOutputTextContent('');
tracker.currentMessage.content.push(part);
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
writeEvent(res, {
type: 'response.content_part.added',
sequence_number: tracker.nextSequence(),
item_id: tracker.currentMessage.id,
output_index: outputIndex,
content_index: tracker.currentContentIndex,
part,
});
}
/**
* Emit response.output_text.delta event
*/
export function emitOutputTextDelta(config: StreamHandlerConfig, delta: string): void {
const { res, tracker } = config;
if (!tracker.currentMessage) {
return;
}
tracker.accumulatedText += delta;
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
writeEvent(res, {
type: 'response.output_text.delta',
sequence_number: tracker.nextSequence(),
item_id: tracker.currentMessage.id,
output_index: outputIndex,
content_index: tracker.currentContentIndex,
delta,
logprobs: [],
});
}
/**
* Emit response.output_text.done event
*/
export function emitOutputTextDone(config: StreamHandlerConfig): void {
const { res, tracker } = config;
if (!tracker.currentMessage) {
return;
}
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
const contentIndex = tracker.currentContentIndex;
// Update the content part with final text
if (tracker.currentMessage.content[contentIndex]) {
(tracker.currentMessage.content[contentIndex] as OutputTextContent).text =
tracker.accumulatedText;
}
writeEvent(res, {
type: 'response.output_text.done',
sequence_number: tracker.nextSequence(),
item_id: tracker.currentMessage.id,
output_index: outputIndex,
content_index: contentIndex,
text: tracker.accumulatedText,
logprobs: [],
});
}
/**
* Emit response.content_part.done for text content
*/
export function emitTextContentPartDone(config: StreamHandlerConfig): void {
const { res, tracker } = config;
if (!tracker.currentMessage) {
return;
}
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
const contentIndex = tracker.currentContentIndex;
const part = tracker.currentMessage.content[contentIndex];
if (part) {
writeEvent(res, {
type: 'response.content_part.done',
sequence_number: tracker.nextSequence(),
item_id: tracker.currentMessage.id,
output_index: outputIndex,
content_index: contentIndex,
part,
});
}
tracker.currentContentIndex++;
}
/* =============================================================================
* FUNCTION CALL EVENT EMITTERS
* ============================================================================= */
/**
* Emit response.output_item.added for a function call
*/
export function emitFunctionCallItemAdded(
config: StreamHandlerConfig,
callId: string,
name: string,
): FunctionCallItem {
const { res, tracker } = config;
const item = createFunctionCallItem(callId, name, 'in_progress');
tracker.functionCalls.set(callId, item);
tracker.accumulatedArguments.set(callId, '');
tracker.items.push(item);
writeEvent(res, {
type: 'response.output_item.added',
sequence_number: tracker.nextSequence(),
output_index: tracker.items.length - 1,
item,
});
return item;
}
/**
* Emit response.function_call_arguments.delta event
*/
export function emitFunctionCallArgumentsDelta(
config: StreamHandlerConfig,
callId: string,
delta: string,
): void {
const { res, tracker } = config;
const item = tracker.functionCalls.get(callId);
if (!item) {
return;
}
const accumulated = (tracker.accumulatedArguments.get(callId) ?? '') + delta;
tracker.accumulatedArguments.set(callId, accumulated);
item.arguments = accumulated;
const outputIndex = tracker.items.indexOf(item);
writeEvent(res, {
type: 'response.function_call_arguments.delta',
sequence_number: tracker.nextSequence(),
item_id: item.id,
output_index: outputIndex,
call_id: callId,
delta,
});
}
/**
* Emit response.function_call_arguments.done event
*/
export function emitFunctionCallArgumentsDone(config: StreamHandlerConfig, callId: string): void {
const { res, tracker } = config;
const item = tracker.functionCalls.get(callId);
if (!item) {
return;
}
const outputIndex = tracker.items.indexOf(item);
const args = tracker.accumulatedArguments.get(callId) ?? '';
writeEvent(res, {
type: 'response.function_call_arguments.done',
sequence_number: tracker.nextSequence(),
item_id: item.id,
output_index: outputIndex,
call_id: callId,
arguments: args,
});
}
/**
* Emit response.output_item.done for a function call
*/
export function emitFunctionCallItemDone(config: StreamHandlerConfig, callId: string): void {
const { res, tracker } = config;
const item = tracker.functionCalls.get(callId);
if (!item) {
return;
}
item.status = 'completed';
const outputIndex = tracker.items.indexOf(item);
writeEvent(res, {
type: 'response.output_item.done',
sequence_number: tracker.nextSequence(),
output_index: outputIndex,
item,
});
}
/**
* Emit function call output item (internal tool result)
*/
export function emitFunctionCallOutputItem(
config: StreamHandlerConfig,
callId: string,
output: string,
): void {
const { res, tracker } = config;
const item = createFunctionCallOutputItem(callId, output, 'completed');
tracker.functionCallOutputs.set(callId, item);
tracker.items.push(item);
// Emit added
writeEvent(res, {
type: 'response.output_item.added',
sequence_number: tracker.nextSequence(),
output_index: tracker.items.length - 1,
item,
});
// Immediately emit done since it's already complete
writeEvent(res, {
type: 'response.output_item.done',
sequence_number: tracker.nextSequence(),
output_index: tracker.items.length - 1,
item,
});
}
/* =============================================================================
* REASONING EVENT EMITTERS
* ============================================================================= */
/**
* Emit response.output_item.added for reasoning
*/
export function emitReasoningItemAdded(config: StreamHandlerConfig): ReasoningItem {
const { res, tracker } = config;
const item = createReasoningItem('in_progress');
tracker.currentReasoning = item;
tracker.currentReasoningContentIndex = 0;
tracker.accumulatedReasoningText = '';
tracker.items.push(item);
writeEvent(res, {
type: 'response.output_item.added',
sequence_number: tracker.nextSequence(),
output_index: tracker.items.length - 1,
item,
});
return item;
}
/**
* Emit response.content_part.added for reasoning
*/
export function emitReasoningContentPartAdded(config: StreamHandlerConfig): void {
const { res, tracker } = config;
if (!tracker.currentReasoning) {
return;
}
const part = createReasoningTextContent('');
if (!tracker.currentReasoning.content) {
tracker.currentReasoning.content = [];
}
tracker.currentReasoning.content.push(part);
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
writeEvent(res, {
type: 'response.content_part.added',
sequence_number: tracker.nextSequence(),
item_id: tracker.currentReasoning.id,
output_index: outputIndex,
content_index: tracker.currentReasoningContentIndex,
part,
});
}
/**
* Emit response.reasoning.delta event
*/
export function emitReasoningDelta(config: StreamHandlerConfig, delta: string): void {
const { res, tracker } = config;
if (!tracker.currentReasoning) {
return;
}
tracker.accumulatedReasoningText += delta;
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
writeEvent(res, {
type: 'response.reasoning.delta',
sequence_number: tracker.nextSequence(),
item_id: tracker.currentReasoning.id,
output_index: outputIndex,
content_index: tracker.currentReasoningContentIndex,
delta,
});
}
/**
* Emit response.reasoning.done event
*/
export function emitReasoningDone(config: StreamHandlerConfig): void {
const { res, tracker } = config;
if (!tracker.currentReasoning || !tracker.currentReasoning.content) {
return;
}
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
const contentIndex = tracker.currentReasoningContentIndex;
// Update the content part with final text
if (tracker.currentReasoning.content[contentIndex]) {
(tracker.currentReasoning.content[contentIndex] as ReasoningTextContent).text =
tracker.accumulatedReasoningText;
}
writeEvent(res, {
type: 'response.reasoning.done',
sequence_number: tracker.nextSequence(),
item_id: tracker.currentReasoning.id,
output_index: outputIndex,
content_index: contentIndex,
text: tracker.accumulatedReasoningText,
});
}
/**
* Emit response.content_part.done for reasoning
*/
export function emitReasoningContentPartDone(config: StreamHandlerConfig): void {
const { res, tracker } = config;
if (!tracker.currentReasoning || !tracker.currentReasoning.content) {
return;
}
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
const contentIndex = tracker.currentReasoningContentIndex;
const part = tracker.currentReasoning.content[contentIndex];
if (part) {
writeEvent(res, {
type: 'response.content_part.done',
sequence_number: tracker.nextSequence(),
item_id: tracker.currentReasoning.id,
output_index: outputIndex,
content_index: contentIndex,
part,
});
}
tracker.currentReasoningContentIndex++;
}
/**
* Emit response.output_item.done for reasoning
*/
export function emitReasoningItemDone(config: StreamHandlerConfig): void {
const { res, tracker } = config;
if (!tracker.currentReasoning) {
return;
}
tracker.currentReasoning.status = 'completed';
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
writeEvent(res, {
type: 'response.output_item.done',
sequence_number: tracker.nextSequence(),
output_index: outputIndex,
item: tracker.currentReasoning,
});
tracker.currentReasoning = null;
}
/* =============================================================================
* ERROR HANDLING
* ============================================================================= */
/**
* Emit error event
*/
export function emitError(
config: StreamHandlerConfig,
error: { type: string; message: string; code?: string },
): void {
const { res, tracker } = config;
writeEvent(res, {
type: 'error',
sequence_number: tracker.nextSequence(),
error: {
type: error.type as 'server_error',
message: error.message,
code: error.code,
},
});
}
/* =============================================================================
* LIBRECHAT EXTENSION EVENTS
* Custom events prefixed with 'librechat:' per Open Responses spec
* @see https://openresponses.org/specification#extending-streaming-events
* ============================================================================= */
/**
* Attachment data for librechat:attachment events
*/
export interface AttachmentData {
/** File ID in LibreChat storage */
file_id?: string;
/** Original filename */
filename?: string;
/** MIME type */
type?: string;
/** URL to access the file */
url?: string;
/** Base64-encoded image data (for inline images) */
image_url?: string;
/** Width for images */
width?: number;
/** Height for images */
height?: number;
/** Associated tool call ID */
tool_call_id?: string;
/** Additional metadata */
[key: string]: unknown;
}
/**
* Emit librechat:attachment event for file/image attachments
* This is a LibreChat extension to the Open Responses streaming protocol.
* External clients can safely ignore these events.
*/
export function emitAttachment(
config: StreamHandlerConfig,
attachment: AttachmentData,
options?: {
messageId?: string;
conversationId?: string;
},
): void {
const { res, tracker } = config;
writeEvent(res, {
type: 'librechat:attachment',
sequence_number: tracker.nextSequence(),
attachment,
message_id: options?.messageId,
conversation_id: options?.conversationId,
});
}
/**
* Write attachment event directly to response (for use outside streaming context)
* Useful when attachment processing happens asynchronously
*/
export function writeAttachmentEvent(
res: ServerResponse,
sequenceNumber: number,
attachment: AttachmentData,
options?: {
messageId?: string;
conversationId?: string;
},
): void {
writeEvent(res, {
type: 'librechat:attachment',
sequence_number: sequenceNumber,
attachment,
message_id: options?.messageId,
conversation_id: options?.conversationId,
});
}
/* =============================================================================
* NON-STREAMING RESPONSE BUILDER
* ============================================================================= */
/**
* Build a complete non-streaming response
*/
export function buildResponsesNonStreamingResponse(
context: ResponseContext,
tracker: ResponseTracker,
): Response {
return buildResponse(context, tracker, 'completed');
}
/**
* Update tracker usage from collected data
*/
export function updateTrackerUsage(
tracker: ResponseTracker,
usage: {
promptTokens?: number;
completionTokens?: number;
reasoningTokens?: number;
cachedTokens?: number;
},
): void {
if (usage.promptTokens != null) {
tracker.usage.inputTokens = usage.promptTokens;
}
if (usage.completionTokens != null) {
tracker.usage.outputTokens = usage.completionTokens;
}
if (usage.reasoningTokens != null) {
tracker.usage.reasoningTokens = usage.reasoningTokens;
}
if (usage.cachedTokens != null) {
tracker.usage.cachedTokens = usage.cachedTokens;
}
}
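The merge semantics above matter because providers report usage piecemeal: one event may carry prompt/completion counts, a later one only cache details. A minimal standalone sketch of the same pattern (`applyUsage` and `UsageTotals` are illustrative names, not part of this module):

```typescript
// Sketch: partial usage payloads merge into running totals; only fields the
// provider actually reported (non-null) overwrite previous values.
interface UsageTotals {
  inputTokens: number;
  outputTokens: number;
  reasoningTokens: number;
  cachedTokens: number;
}

function applyUsage(
  totals: UsageTotals,
  usage: {
    promptTokens?: number;
    completionTokens?: number;
    reasoningTokens?: number;
    cachedTokens?: number;
  },
): void {
  if (usage.promptTokens != null) totals.inputTokens = usage.promptTokens;
  if (usage.completionTokens != null) totals.outputTokens = usage.completionTokens;
  if (usage.reasoningTokens != null) totals.reasoningTokens = usage.reasoningTokens;
  if (usage.cachedTokens != null) totals.cachedTokens = usage.cachedTokens;
}

const totals: UsageTotals = { inputTokens: 0, outputTokens: 0, reasoningTokens: 0, cachedTokens: 0 };
applyUsage(totals, { promptTokens: 120, completionTokens: 48 });
applyUsage(totals, { cachedTokens: 100 }); // later event fills in cache info only
```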


@@ -0,0 +1,183 @@
/**
* Open Responses API Module
*
* Exports for the Open Responses API implementation.
* @see https://openresponses.org/specification
*/
// Types
export type {
// Enums
ItemStatus,
ResponseStatus,
MessageRole,
ToolChoiceValue,
TruncationValue,
ServiceTier,
ReasoningEffort,
ReasoningSummary,
// Input content
InputTextContent,
InputImageContent,
InputFileContent,
InputContent,
// Output content
LogProb,
TopLogProb,
OutputTextContent,
RefusalContent,
ModelContent,
// Annotations
UrlCitationAnnotation,
FileCitationAnnotation,
Annotation,
// Reasoning content
ReasoningTextContent,
SummaryTextContent,
ReasoningContent,
// Input items
SystemMessageItemParam,
DeveloperMessageItemParam,
UserMessageItemParam,
AssistantMessageItemParam,
FunctionCallItemParam,
FunctionCallOutputItemParam,
ReasoningItemParam,
ItemReferenceParam,
InputItem,
// Output items
MessageItem,
FunctionCallItem,
FunctionCallOutputItem,
ReasoningItem,
OutputItem,
// Tools
FunctionTool,
HostedTool,
Tool,
FunctionToolChoice,
ToolChoice,
// Request
ReasoningConfig,
TextConfig,
StreamOptions,
Metadata,
ResponseRequest,
// Response field types
TextField,
// Response
InputTokensDetails,
OutputTokensDetails,
Usage,
IncompleteDetails,
ResponseError,
Response,
// Streaming events
BaseEvent,
ResponseCreatedEvent,
ResponseInProgressEvent,
ResponseCompletedEvent,
ResponseFailedEvent,
ResponseIncompleteEvent,
OutputItemAddedEvent,
OutputItemDoneEvent,
ContentPartAddedEvent,
ContentPartDoneEvent,
OutputTextDeltaEvent,
OutputTextDoneEvent,
RefusalDeltaEvent,
RefusalDoneEvent,
FunctionCallArgumentsDeltaEvent,
FunctionCallArgumentsDoneEvent,
ReasoningDeltaEvent,
ReasoningDoneEvent,
ErrorEvent,
ResponseEvent,
// LibreChat extensions
LibreChatAttachmentContent,
LibreChatAttachmentEvent,
// Internal
ResponseContext,
RequestValidationResult,
} from './types';
// Handlers
export {
// Tracker
createResponseTracker,
type ResponseTracker,
// SSE
writeEvent,
writeDone,
// Response building
buildResponse,
// Item builders
generateItemId,
createMessageItem,
createFunctionCallItem,
createFunctionCallOutputItem,
createReasoningItem,
createOutputTextContent,
createReasoningTextContent,
// Stream config
type StreamHandlerConfig,
// Response events
emitResponseCreated,
emitResponseInProgress,
emitResponseCompleted,
emitResponseFailed,
// Message events
emitMessageItemAdded,
emitMessageItemDone,
emitTextContentPartAdded,
emitOutputTextDelta,
emitOutputTextDone,
emitTextContentPartDone,
// Function call events
emitFunctionCallItemAdded,
emitFunctionCallArgumentsDelta,
emitFunctionCallArgumentsDone,
emitFunctionCallItemDone,
emitFunctionCallOutputItem,
// Reasoning events
emitReasoningItemAdded,
emitReasoningContentPartAdded,
emitReasoningDelta,
emitReasoningDone,
emitReasoningContentPartDone,
emitReasoningItemDone,
// Error events
emitError,
// LibreChat extension events
emitAttachment,
writeAttachmentEvent,
type AttachmentData,
// Non-streaming
buildResponsesNonStreamingResponse,
updateTrackerUsage,
} from './handlers';
// Service
export {
// Validation
validateResponseRequest,
isValidationFailure,
// Input conversion
convertInputToMessages,
mergeMessagesWithInput,
type InternalMessage,
// Error response
sendResponsesErrorResponse,
// Context
generateResponseId,
createResponseContext,
// Streaming setup
setupStreamingResponse,
// Event handlers
createResponsesEventHandlers,
// Non-streaming
createResponseAggregator,
buildAggregatedResponse,
createAggregatorEventHandlers,
type ResponseAggregator,
} from './service';


@@ -0,0 +1,869 @@
/**
* Open Responses API Service
*
* Core service for processing Open Responses API requests.
* Handles input conversion, message formatting, and request validation.
*/
import type { Response as ServerResponse } from 'express';
import type {
ResponseRequest,
RequestValidationResult,
InputItem,
InputContent,
ResponseContext,
Response,
} from './types';
import {
writeDone,
emitResponseCompleted,
emitMessageItemAdded,
emitMessageItemDone,
emitTextContentPartAdded,
emitOutputTextDelta,
emitOutputTextDone,
emitTextContentPartDone,
emitFunctionCallItemAdded,
emitFunctionCallArgumentsDelta,
emitFunctionCallArgumentsDone,
emitFunctionCallItemDone,
emitFunctionCallOutputItem,
emitReasoningItemAdded,
emitReasoningContentPartAdded,
emitReasoningDelta,
emitReasoningDone,
emitReasoningContentPartDone,
emitReasoningItemDone,
updateTrackerUsage,
type StreamHandlerConfig,
} from './handlers';
/* =============================================================================
* REQUEST VALIDATION
* ============================================================================= */
/**
* Validate a request body
*/
export function validateResponseRequest(body: unknown): RequestValidationResult {
if (!body || typeof body !== 'object') {
return { valid: false, error: 'Request body is required' };
}
const request = body as Record<string, unknown>;
// Required: model
if (!request.model || typeof request.model !== 'string') {
return { valid: false, error: 'model is required and must be a string' };
}
// Required: input (string or array)
if (request.input === undefined || request.input === null) {
return { valid: false, error: 'input is required' };
}
if (typeof request.input !== 'string' && !Array.isArray(request.input)) {
return { valid: false, error: 'input must be a string or array of items' };
}
// Optional validations
if (request.stream !== undefined && typeof request.stream !== 'boolean') {
return { valid: false, error: 'stream must be a boolean' };
}
  if (request.temperature !== undefined) {
    const temp = request.temperature;
    if (typeof temp !== 'number' || temp < 0 || temp > 2) {
      return { valid: false, error: 'temperature must be a number between 0 and 2' };
    }
  }
if (request.max_output_tokens !== undefined) {
if (typeof request.max_output_tokens !== 'number' || request.max_output_tokens < 1) {
return { valid: false, error: 'max_output_tokens must be a positive number' };
}
}
return { valid: true, request: request as unknown as ResponseRequest };
}
/**
* Check if validation failed
*/
export function isValidationFailure(
result: RequestValidationResult,
): result is { valid: false; error: string } {
return !result.valid;
}
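The validate-then-narrow flow is easiest to see end to end. A condensed, self-contained sketch mirroring the required-field checks above (`validate` and `ValidationResult` are illustrative stand-ins, and the input check is collapsed into a single branch):

```typescript
// Sketch: discriminated-union validation result lets callers narrow on `valid`.
type ValidationResult =
  | { valid: true; request: Record<string, unknown> }
  | { valid: false; error: string };

function validate(body: unknown): ValidationResult {
  if (!body || typeof body !== 'object') {
    return { valid: false, error: 'Request body is required' };
  }
  const request = body as Record<string, unknown>;
  if (!request.model || typeof request.model !== 'string') {
    return { valid: false, error: 'model is required and must be a string' };
  }
  if (typeof request.input !== 'string' && !Array.isArray(request.input)) {
    return { valid: false, error: 'input must be a string or array of items' };
  }
  return { valid: true, request };
}

const ok = validate({ model: 'gpt-4o', input: 'hello' });
const bad = validate({ input: 'hello' }); // missing model
```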
/* =============================================================================
* INPUT CONVERSION
* ============================================================================= */
/** Internal message format (LibreChat-compatible) */
export interface InternalMessage {
role: 'system' | 'user' | 'assistant' | 'tool';
content: string | Array<{ type: string; text?: string; image_url?: unknown }>;
name?: string;
tool_call_id?: string;
tool_calls?: Array<{
id: string;
type: 'function';
function: { name: string; arguments: string };
}>;
}
/**
* Convert Open Responses input to internal message format.
* Handles both string input and array of items.
*/
export function convertInputToMessages(input: string | InputItem[]): InternalMessage[] {
// Simple string input becomes a user message
if (typeof input === 'string') {
return [{ role: 'user', content: input }];
}
const messages: InternalMessage[] = [];
for (const item of input) {
if (item.type === 'item_reference') {
// Skip item references - they're handled by previous_response_id
continue;
}
if (item.type === 'message') {
const messageItem = item as {
type: 'message';
role: string;
content: string | InputContent[];
};
let content: InternalMessage['content'];
if (typeof messageItem.content === 'string') {
content = messageItem.content;
} else if (Array.isArray(messageItem.content)) {
content = messageItem.content.map((part) => {
if (part.type === 'input_text') {
return { type: 'text', text: part.text };
}
if (part.type === 'input_image') {
return {
type: 'image_url',
image_url: {
url: (part as { image_url?: string }).image_url,
detail: (part as { detail?: string }).detail,
},
};
}
        // Other input content (e.g. input_file) passes through type-only for now
        return { type: part.type };
});
} else {
content = '';
}
// Map developer role to system (LibreChat convention)
let role: InternalMessage['role'];
if (messageItem.role === 'developer') {
role = 'system';
} else if (messageItem.role === 'user') {
role = 'user';
} else if (messageItem.role === 'assistant') {
role = 'assistant';
} else if (messageItem.role === 'system') {
role = 'system';
} else {
role = 'user';
}
messages.push({ role, content });
}
if (item.type === 'function_call') {
// Function call items represent prior tool calls from assistant
const fcItem = item as {
type: 'function_call';
call_id: string;
name: string;
arguments: string;
};
// Add as assistant message with tool_calls
messages.push({
role: 'assistant',
content: '',
tool_calls: [
{
id: fcItem.call_id,
type: 'function',
function: { name: fcItem.name, arguments: fcItem.arguments },
},
],
});
}
if (item.type === 'function_call_output') {
// Function call output items represent tool results
const fcoItem = item as { type: 'function_call_output'; call_id: string; output: string };
messages.push({
role: 'tool',
content: fcoItem.output,
tool_call_id: fcoItem.call_id,
});
}
// Reasoning items are typically not passed back as input
// They're model-generated and may be encrypted
}
return messages;
}
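A condensed sketch of the conversion above, covering only the string and `'message'` item cases (`toMessages` and `Msg` are illustrative names; the real function also handles function calls and tool outputs):

```typescript
// Sketch: string input becomes a single user message; 'message' items are
// mapped through, with 'developer' normalized to 'system' per the convention above.
type Msg = { role: string; content: string };

function toMessages(
  input: string | Array<{ type: string; role?: string; content?: string }>,
): Msg[] {
  if (typeof input === 'string') {
    return [{ role: 'user', content: input }];
  }
  const messages: Msg[] = [];
  for (const item of input) {
    if (item.type !== 'message') continue; // item_reference etc. handled elsewhere
    const role = item.role === 'developer' ? 'system' : (item.role ?? 'user');
    messages.push({ role, content: item.content ?? '' });
  }
  return messages;
}

const msgs = toMessages([
  { type: 'message', role: 'developer', content: 'Be terse.' },
  { type: 'message', role: 'user', content: 'Hi' },
  { type: 'item_reference' }, // skipped
]);
```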
/**
* Merge previous conversation messages with new input
*/
export function mergeMessagesWithInput(
previousMessages: InternalMessage[],
newInput: InternalMessage[],
): InternalMessage[] {
return [...previousMessages, ...newInput];
}
/* =============================================================================
* ERROR RESPONSE
* ============================================================================= */
/**
* Send an error response in Open Responses format
*/
export function sendResponsesErrorResponse(
res: ServerResponse,
statusCode: number,
message: string,
type: string = 'invalid_request',
code?: string,
): void {
res.status(statusCode).json({
error: {
type,
message,
code: code ?? null,
param: null,
},
});
}
/* =============================================================================
* RESPONSE CONTEXT
* ============================================================================= */
/**
* Generate a unique response ID
*/
export function generateResponseId(): string {
return `resp_${Date.now().toString(36)}${Math.random().toString(36).substring(2, 8)}`;
}
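The ID scheme is a base36 millisecond timestamp plus a short random base36 suffix, which keeps IDs roughly time-sortable. A standalone sketch (`makeResponseId` is an illustrative copy of the one-liner above):

```typescript
// Sketch: resp_ prefix + base36 timestamp + up to 6 random base36 chars.
function makeResponseId(): string {
  return `resp_${Date.now().toString(36)}${Math.random().toString(36).substring(2, 8)}`;
}

const a = makeResponseId();
const b = makeResponseId();
```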
/**
* Create a response context from request
*/
export function createResponseContext(
request: ResponseRequest,
responseId?: string,
): ResponseContext {
return {
responseId: responseId ?? generateResponseId(),
model: request.model,
createdAt: Math.floor(Date.now() / 1000),
previousResponseId: request.previous_response_id,
instructions: request.instructions,
};
}
/* =============================================================================
* STREAMING SETUP
* ============================================================================= */
/**
* Set up streaming response headers
*/
export function setupStreamingResponse(res: ServerResponse): void {
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
res.setHeader('X-Accel-Buffering', 'no');
res.flushHeaders();
}
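Once these headers are set, each event goes out as an SSE `data:` block. A sketch of that framing against a mock response object, assuming the conventional `data: <json>\n\n` format with a terminal `data: [DONE]` marker (`MockRes`, `writeSseEvent`, and `writeSseDone` are illustrative stand-ins, not this module's `writeEvent`/`writeDone`):

```typescript
// Sketch: SSE framing — one `data:` line per JSON event, blank-line terminated.
interface MockRes {
  chunks: string[];
  write(chunk: string): void;
}

function writeSseEvent(res: MockRes, event: Record<string, unknown>): void {
  res.write(`data: ${JSON.stringify(event)}\n\n`);
}

function writeSseDone(res: MockRes): void {
  res.write('data: [DONE]\n\n');
}

const res: MockRes = {
  chunks: [],
  write(c: string) {
    this.chunks.push(c);
  },
};
writeSseEvent(res, { type: 'response.created', sequence_number: 0 });
writeSseDone(res);
```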
/* =============================================================================
* STREAM HANDLER FACTORY
* ============================================================================= */
/**
* State for tracking streaming progress
*/
interface StreamState {
messageStarted: boolean;
messageContentStarted: boolean;
reasoningStarted: boolean;
reasoningContentStarted: boolean;
activeToolCalls: Set<string>;
completedToolCalls: Set<string>;
}
/**
* Create LibreChat event handlers that emit Open Responses events
*/
export function createResponsesEventHandlers(config: StreamHandlerConfig): {
handlers: Record<string, { handle: (event: string, data: unknown) => void }>;
state: StreamState;
finalizeStream: () => void;
} {
const state: StreamState = {
messageStarted: false,
messageContentStarted: false,
reasoningStarted: false,
reasoningContentStarted: false,
activeToolCalls: new Set(),
completedToolCalls: new Set(),
};
/**
* Ensure message item is started
*/
const ensureMessageStarted = (): void => {
if (!state.messageStarted) {
emitMessageItemAdded(config);
state.messageStarted = true;
}
};
/**
* Ensure message content part is started
*/
const ensureMessageContentStarted = (): void => {
ensureMessageStarted();
if (!state.messageContentStarted) {
emitTextContentPartAdded(config);
state.messageContentStarted = true;
}
};
/**
* Ensure reasoning item is started
*/
const ensureReasoningStarted = (): void => {
if (!state.reasoningStarted) {
emitReasoningItemAdded(config);
state.reasoningStarted = true;
}
};
/**
* Ensure reasoning content part is started
*/
const ensureReasoningContentStarted = (): void => {
ensureReasoningStarted();
if (!state.reasoningContentStarted) {
emitReasoningContentPartAdded(config);
state.reasoningContentStarted = true;
}
};
/**
* Close any open content streams
*/
const closeOpenStreams = (): void => {
// Close message content if open
if (state.messageContentStarted) {
emitOutputTextDone(config);
emitTextContentPartDone(config);
state.messageContentStarted = false;
}
// Close message item if open
if (state.messageStarted) {
emitMessageItemDone(config);
state.messageStarted = false;
}
// Close reasoning content if open
if (state.reasoningContentStarted) {
emitReasoningDone(config);
emitReasoningContentPartDone(config);
state.reasoningContentStarted = false;
}
// Close reasoning item if open
if (state.reasoningStarted) {
emitReasoningItemDone(config);
state.reasoningStarted = false;
}
};
const handlers = {
/**
* Handle text message deltas
*/
on_message_delta: {
handle: (_event: string, data: unknown): void => {
const deltaData = data as { delta?: { content?: Array<{ type: string; text?: string }> } };
const content = deltaData?.delta?.content;
if (Array.isArray(content)) {
for (const part of content) {
if (part.type === 'text' && part.text) {
ensureMessageContentStarted();
emitOutputTextDelta(config, part.text);
}
}
}
},
},
/**
* Handle reasoning deltas
*/
on_reasoning_delta: {
handle: (_event: string, data: unknown): void => {
const deltaData = data as {
delta?: { content?: Array<{ type: string; text?: string; think?: string }> };
};
const content = deltaData?.delta?.content;
if (Array.isArray(content)) {
for (const part of content) {
const text = part.think || part.text;
if (text) {
ensureReasoningContentStarted();
emitReasoningDelta(config, text);
}
}
}
},
},
/**
* Handle run step (tool call initiation)
*/
on_run_step: {
handle: (_event: string, data: unknown): void => {
const stepData = data as {
stepDetails?: { type: string; tool_calls?: Array<{ id?: string; name?: string }> };
};
const stepDetails = stepData?.stepDetails;
if (stepDetails?.type === 'tool_calls' && stepDetails.tool_calls) {
// Close any open message/reasoning before tool calls
closeOpenStreams();
for (const tc of stepDetails.tool_calls) {
const callId = tc.id ?? '';
const name = tc.name ?? '';
if (callId && !state.activeToolCalls.has(callId)) {
state.activeToolCalls.add(callId);
emitFunctionCallItemAdded(config, callId, name);
}
}
}
},
},
/**
* Handle run step delta (tool call argument streaming)
*/
on_run_step_delta: {
handle: (_event: string, data: unknown): void => {
const deltaData = data as {
delta?: { type: string; tool_calls?: Array<{ index?: number; args?: string }> };
};
const delta = deltaData?.delta;
if (delta?.type === 'tool_calls' && delta.tool_calls) {
for (const tc of delta.tool_calls) {
const args = tc.args ?? '';
if (!args) {
continue;
}
          // Map the delta index to a call_id: Sets iterate in insertion order,
          // so index i corresponds to the i-th registered tool call
          const toolCallsArray = Array.from(state.activeToolCalls);
          const callId = toolCallsArray[tc.index ?? 0];
if (callId) {
emitFunctionCallArgumentsDelta(config, callId, args);
}
}
}
},
},
/**
* Handle tool end (tool execution complete)
*/
on_tool_end: {
handle: (_event: string, data: unknown): void => {
const toolData = data as { tool_call_id?: string; output?: string };
const callId = toolData?.tool_call_id;
const output = toolData?.output ?? '';
if (callId && state.activeToolCalls.has(callId) && !state.completedToolCalls.has(callId)) {
state.completedToolCalls.add(callId);
// Complete the function call item
emitFunctionCallArgumentsDone(config, callId);
emitFunctionCallItemDone(config, callId);
// Emit the function call output (internal tool result)
emitFunctionCallOutputItem(config, callId, output);
}
},
},
/**
* Handle chat model end (usage collection)
*/
on_chat_model_end: {
handle: (_event: string, data: unknown): void => {
const endData = data as {
output?: {
usage_metadata?: {
input_tokens?: number;
output_tokens?: number;
// OpenAI format
input_token_details?: {
cache_creation?: number;
cache_read?: number;
};
// Anthropic format
cache_creation_input_tokens?: number;
cache_read_input_tokens?: number;
};
};
};
const usage = endData?.output?.usage_metadata;
if (usage) {
// Extract cached tokens from either OpenAI or Anthropic format
const cachedTokens =
(usage.input_token_details?.cache_read ?? 0) + (usage.cache_read_input_tokens ?? 0);
updateTrackerUsage(config.tracker, {
promptTokens: usage.input_tokens,
completionTokens: usage.output_tokens,
cachedTokens,
});
}
},
},
};
/**
* Finalize the stream - close open items and emit completed
*/
const finalizeStream = (): void => {
closeOpenStreams();
emitResponseCompleted(config);
writeDone(config.res);
};
return { handlers, state, finalizeStream };
}
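The core invariant in the factory above is lazy open/close gating: the first delta opens the item and content part exactly once, and finalization closes whatever is still open before `response.completed`. A minimal standalone sketch of that lifecycle (event names are recorded in an array instead of written as SSE; `onTextDelta` and `finalize` are illustrative):

```typescript
// Sketch: boolean flags guarantee each open/close event fires at most once,
// and finalize() emits the done events in reverse-nesting order.
const events: string[] = [];
let messageStarted = false;
let contentStarted = false;

function onTextDelta(text: string): void {
  if (!messageStarted) {
    events.push('output_item.added');
    messageStarted = true;
  }
  if (!contentStarted) {
    events.push('content_part.added');
    contentStarted = true;
  }
  events.push(`output_text.delta:${text}`);
}

function finalize(): void {
  if (contentStarted) {
    events.push('output_text.done');
    events.push('content_part.done');
    contentStarted = false;
  }
  if (messageStarted) {
    events.push('output_item.done');
    messageStarted = false;
  }
  events.push('response.completed');
}

onTextDelta('Hel');
onTextDelta('lo'); // second delta does not re-open the item
finalize();
```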
/* =============================================================================
* NON-STREAMING AGGREGATOR
* ============================================================================= */
/**
* Aggregator for non-streaming responses
*/
export interface ResponseAggregator {
textChunks: string[];
reasoningChunks: string[];
toolCalls: Map<
string,
{
id: string;
name: string;
arguments: string;
}
>;
toolOutputs: Map<string, string>;
usage: {
inputTokens: number;
outputTokens: number;
reasoningTokens: number;
cachedTokens: number;
};
addText: (text: string) => void;
addReasoning: (text: string) => void;
getText: () => string;
getReasoning: () => string;
}
/**
* Create an aggregator for non-streaming responses
*/
export function createResponseAggregator(): ResponseAggregator {
const aggregator: ResponseAggregator = {
textChunks: [],
reasoningChunks: [],
toolCalls: new Map(),
toolOutputs: new Map(),
usage: {
inputTokens: 0,
outputTokens: 0,
reasoningTokens: 0,
cachedTokens: 0,
},
addText: (text: string) => {
aggregator.textChunks.push(text);
},
addReasoning: (text: string) => {
aggregator.reasoningChunks.push(text);
},
getText: () => aggregator.textChunks.join(''),
getReasoning: () => aggregator.reasoningChunks.join(''),
};
return aggregator;
}
/**
* Build a non-streaming response from aggregator
* Includes all required fields per Open Responses spec
*/
export function buildAggregatedResponse(
context: ResponseContext,
aggregator: ResponseAggregator,
): Response {
const output: Response['output'] = [];
// Add reasoning item if present
const reasoningText = aggregator.getReasoning();
if (reasoningText) {
output.push({
type: 'reasoning',
id: `reason_${Date.now().toString(36)}`,
status: 'completed',
content: [{ type: 'reasoning_text', text: reasoningText }],
summary: [],
});
}
// Add function calls and outputs
for (const [callId, tc] of aggregator.toolCalls) {
output.push({
type: 'function_call',
id: `fc_${Date.now().toString(36)}${Math.random().toString(36).substring(2, 6)}`,
call_id: callId,
name: tc.name,
arguments: tc.arguments,
status: 'completed',
});
const toolOutput = aggregator.toolOutputs.get(callId);
if (toolOutput) {
output.push({
type: 'function_call_output',
id: `fco_${Date.now().toString(36)}${Math.random().toString(36).substring(2, 6)}`,
call_id: callId,
output: toolOutput,
status: 'completed',
});
}
}
// Add message item if there's text (or always add one if no other output)
const text = aggregator.getText();
if (text || output.length === 0) {
output.push({
type: 'message',
id: `msg_${Date.now().toString(36)}`,
role: 'assistant',
status: 'completed',
content: text ? [{ type: 'output_text', text, annotations: [], logprobs: [] }] : [],
});
}
return {
// Required fields per Open Responses spec
id: context.responseId,
object: 'response',
created_at: context.createdAt,
completed_at: Math.floor(Date.now() / 1000),
status: 'completed',
incomplete_details: null,
model: context.model,
previous_response_id: context.previousResponseId ?? null,
instructions: context.instructions ?? null,
output,
error: null,
tools: [],
tool_choice: 'auto',
truncation: 'disabled',
parallel_tool_calls: true,
text: { format: { type: 'text' } },
temperature: 1,
top_p: 1,
presence_penalty: 0,
frequency_penalty: 0,
top_logprobs: 0,
reasoning: null,
user: null,
usage: {
input_tokens: aggregator.usage.inputTokens,
output_tokens: aggregator.usage.outputTokens,
total_tokens: aggregator.usage.inputTokens + aggregator.usage.outputTokens,
input_tokens_details: { cached_tokens: aggregator.usage.cachedTokens },
output_tokens_details: { reasoning_tokens: aggregator.usage.reasoningTokens },
},
max_output_tokens: null,
max_tool_calls: null,
store: false,
background: false,
service_tier: 'default',
metadata: {},
safety_identifier: null,
prompt_cache_key: null,
};
}
/**
* Create event handlers for non-streaming aggregation
*/
export function createAggregatorEventHandlers(aggregator: ResponseAggregator): Record<
string,
{
handle: (event: string, data: unknown) => void;
}
> {
const activeToolCalls = new Set<string>();
return {
on_message_delta: {
handle: (_event: string, data: unknown): void => {
const deltaData = data as { delta?: { content?: Array<{ type: string; text?: string }> } };
const content = deltaData?.delta?.content;
if (Array.isArray(content)) {
for (const part of content) {
if (part.type === 'text' && part.text) {
aggregator.addText(part.text);
}
}
}
},
},
on_reasoning_delta: {
handle: (_event: string, data: unknown): void => {
const deltaData = data as {
delta?: { content?: Array<{ type: string; text?: string; think?: string }> };
};
const content = deltaData?.delta?.content;
if (Array.isArray(content)) {
for (const part of content) {
const text = part.think || part.text;
if (text) {
aggregator.addReasoning(text);
}
}
}
},
},
on_run_step: {
handle: (_event: string, data: unknown): void => {
const stepData = data as {
stepDetails?: { type: string; tool_calls?: Array<{ id?: string; name?: string }> };
};
const stepDetails = stepData?.stepDetails;
if (stepDetails?.type === 'tool_calls' && stepDetails.tool_calls) {
for (const tc of stepDetails.tool_calls) {
const callId = tc.id ?? '';
const name = tc.name ?? '';
if (callId && !activeToolCalls.has(callId)) {
activeToolCalls.add(callId);
aggregator.toolCalls.set(callId, { id: callId, name, arguments: '' });
}
}
}
},
},
on_run_step_delta: {
handle: (_event: string, data: unknown): void => {
const deltaData = data as {
delta?: { type: string; tool_calls?: Array<{ index?: number; args?: string }> };
};
const delta = deltaData?.delta;
if (delta?.type === 'tool_calls' && delta.tool_calls) {
for (const tc of delta.tool_calls) {
const args = tc.args ?? '';
if (!args) {
continue;
}
          // Sets iterate in insertion order, so index i maps to the i-th registered call
          const toolCallsArray = Array.from(activeToolCalls);
          const callId = toolCallsArray[tc.index ?? 0];
if (callId) {
const existing = aggregator.toolCalls.get(callId);
if (existing) {
existing.arguments += args;
}
}
}
}
},
},
on_tool_end: {
handle: (_event: string, data: unknown): void => {
const toolData = data as { tool_call_id?: string; output?: string };
const callId = toolData?.tool_call_id;
const output = toolData?.output ?? '';
if (callId) {
aggregator.toolOutputs.set(callId, output);
}
},
},
on_chat_model_end: {
handle: (_event: string, data: unknown): void => {
const endData = data as {
output?: {
usage_metadata?: {
input_tokens?: number;
output_tokens?: number;
// OpenAI format
input_token_details?: {
cache_creation?: number;
cache_read?: number;
};
// Anthropic format
cache_creation_input_tokens?: number;
cache_read_input_tokens?: number;
};
};
};
const usage = endData?.output?.usage_metadata;
if (usage) {
aggregator.usage.inputTokens = usage.input_tokens ?? 0;
aggregator.usage.outputTokens = usage.output_tokens ?? 0;
// Extract cached tokens from either OpenAI or Anthropic format
aggregator.usage.cachedTokens =
(usage.input_token_details?.cache_read ?? 0) + (usage.cache_read_input_tokens ?? 0);
}
},
},
};
}
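The aggregator handlers above accumulate tool-call arguments across deltas keyed by `call_id`. A standalone sketch of that accumulation (`onToolCallStart` and `onArgsDelta` are illustrative names mirroring the `on_run_step` / `on_run_step_delta` logic):

```typescript
// Sketch: each delta appends to the matching call's argument buffer,
// yielding complete JSON arguments by the time the response is built.
const toolCalls = new Map<string, { name: string; arguments: string }>();

function onToolCallStart(callId: string, name: string): void {
  if (!toolCalls.has(callId)) {
    toolCalls.set(callId, { name, arguments: '' });
  }
}

function onArgsDelta(callId: string, args: string): void {
  const tc = toolCalls.get(callId);
  if (tc) {
    tc.arguments += args;
  }
}

onToolCallStart('call_1', 'get_weather');
onArgsDelta('call_1', '{"city":');
onArgsDelta('call_1', '"Paris"}');
```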


@@ -0,0 +1,779 @@
/**
* Open Responses API Types
*
* Types following the Open Responses specification for building multi-provider,
* interoperable LLM interfaces. Items are the fundamental unit of context,
* and streaming uses semantic events rather than simple deltas.
*
* @see https://openresponses.org/specification
*/
/* =============================================================================
* ENUMS
* ============================================================================= */
/** Item status lifecycle */
export type ItemStatus = 'in_progress' | 'incomplete' | 'completed';
/** Response status lifecycle */
export type ResponseStatus = 'in_progress' | 'completed' | 'failed' | 'incomplete';
/** Message roles */
export type MessageRole = 'user' | 'assistant' | 'system' | 'developer';
/** Tool choice options */
export type ToolChoiceValue = 'none' | 'auto' | 'required';
/** Truncation options */
export type TruncationValue = 'auto' | 'disabled';
/** Service tier options */
export type ServiceTier = 'auto' | 'default' | 'flex' | 'priority';
/** Reasoning effort levels */
export type ReasoningEffort = 'none' | 'low' | 'medium' | 'high' | 'xhigh';
/** Reasoning summary options */
export type ReasoningSummary = 'concise' | 'detailed' | 'auto';
/* =============================================================================
* INPUT CONTENT TYPES
* ============================================================================= */
/** Text input content */
export interface InputTextContent {
type: 'input_text';
text: string;
}
/** Image input content */
export interface InputImageContent {
type: 'input_image';
image_url?: string;
file_id?: string;
detail?: 'auto' | 'low' | 'high';
}
/** File input content */
export interface InputFileContent {
type: 'input_file';
file_id?: string;
file_data?: string;
filename?: string;
}
/** Union of all input content types */
export type InputContent = InputTextContent | InputImageContent | InputFileContent;
/* =============================================================================
* OUTPUT CONTENT TYPES
* ============================================================================= */
/** Log probability for a token */
export interface LogProb {
token: string;
logprob: number;
bytes?: number[];
top_logprobs?: TopLogProb[];
}
/** Top log probability entry */
export interface TopLogProb {
token: string;
logprob: number;
bytes?: number[];
}
/** Text output content */
export interface OutputTextContent {
type: 'output_text';
text: string;
annotations: Annotation[];
logprobs: LogProb[];
}
/** Refusal content */
export interface RefusalContent {
type: 'refusal';
refusal: string;
}
/** Union of model output content types */
export type ModelContent = OutputTextContent | RefusalContent;
/* =============================================================================
* ANNOTATIONS
* ============================================================================= */
/** URL citation annotation */
export interface UrlCitationAnnotation {
type: 'url_citation';
url: string;
title?: string;
start_index: number;
end_index: number;
}
/** File citation annotation */
export interface FileCitationAnnotation {
type: 'file_citation';
file_id: string;
start_index: number;
end_index: number;
}
/** Union of annotation types */
export type Annotation = UrlCitationAnnotation | FileCitationAnnotation;
/* =============================================================================
* REASONING CONTENT
* ============================================================================= */
/** Reasoning text content */
export interface ReasoningTextContent {
type: 'reasoning_text';
text: string;
}
/** Summary text content */
export interface SummaryTextContent {
type: 'summary_text';
text: string;
}
/** Reasoning content (currently only reasoning_text) */
export type ReasoningContent = ReasoningTextContent;
/* =============================================================================
* INPUT ITEMS (for request)
* ============================================================================= */
/** System message input item */
export interface SystemMessageItemParam {
type: 'message';
role: 'system';
content: string | InputContent[];
}
/** Developer message input item */
export interface DeveloperMessageItemParam {
type: 'message';
role: 'developer';
content: string | InputContent[];
}
/** User message input item */
export interface UserMessageItemParam {
type: 'message';
role: 'user';
content: string | InputContent[];
}
/** Assistant message input item */
export interface AssistantMessageItemParam {
type: 'message';
role: 'assistant';
content: string | ModelContent[];
}
/** Function call input item (for providing context) */
export interface FunctionCallItemParam {
type: 'function_call';
id: string;
call_id: string;
name: string;
arguments: string;
status?: ItemStatus;
}
/** Function call output input item (for providing tool results) */
export interface FunctionCallOutputItemParam {
type: 'function_call_output';
call_id: string;
output: string;
status?: ItemStatus;
}
/** Reasoning input item */
export interface ReasoningItemParam {
type: 'reasoning';
id?: string;
content?: ReasoningContent[];
encrypted_content?: string;
summary?: SummaryTextContent[];
status?: ItemStatus;
}
/** Item reference (for referencing existing items) */
export interface ItemReferenceParam {
type: 'item_reference';
id: string;
}
/** Union of all input item types */
export type InputItem =
| SystemMessageItemParam
| DeveloperMessageItemParam
| UserMessageItemParam
| AssistantMessageItemParam
| FunctionCallItemParam
| FunctionCallOutputItemParam
| ReasoningItemParam
| ItemReferenceParam;
/* =============================================================================
* OUTPUT ITEMS (in response)
* ============================================================================= */
/** Message output item */
export interface MessageItem {
type: 'message';
id: string;
role: 'assistant';
status: ItemStatus;
content: ModelContent[];
}
/** Function call output item */
export interface FunctionCallItem {
type: 'function_call';
id: string;
call_id: string;
name: string;
arguments: string;
status: ItemStatus;
}
/** Function call output result item (internal tool execution result) */
export interface FunctionCallOutputItem {
type: 'function_call_output';
id: string;
call_id: string;
output: string;
status: ItemStatus;
}
/** Reasoning output item */
export interface ReasoningItem {
type: 'reasoning';
id: string;
status?: ItemStatus;
content?: ReasoningContent[];
encrypted_content?: string;
/** Required per Open Responses spec - summary content parts */
summary: SummaryTextContent[];
}
/** Union of all output item types */
export type OutputItem = MessageItem | FunctionCallItem | FunctionCallOutputItem | ReasoningItem;
/* =============================================================================
* TOOLS
* ============================================================================= */
/** Function tool definition */
export interface FunctionTool {
type: 'function';
name: string;
description?: string;
parameters?: Record<string, unknown>;
strict?: boolean;
}
/** Hosted tool (provider-specific) */
export interface HostedTool {
type: string; // e.g., 'librechat:web_search'
[key: string]: unknown;
}
/** Union of tool types */
export type Tool = FunctionTool | HostedTool;
/** Specific function tool choice */
export interface FunctionToolChoice {
type: 'function';
name: string;
}
/** Tool choice parameter */
export type ToolChoice = ToolChoiceValue | FunctionToolChoice;
/* =============================================================================
* REQUEST
* ============================================================================= */
/** Reasoning configuration */
export interface ReasoningConfig {
effort?: ReasoningEffort;
summary?: ReasoningSummary;
}
/** Text output configuration */
export interface TextConfig {
format?: {
type: 'text' | 'json_object' | 'json_schema';
json_schema?: Record<string, unknown>;
};
}
/** Stream options */
export interface StreamOptions {
include_usage?: boolean;
}
/** Metadata (key-value pairs) */
export type Metadata = Record<string, string>;
/** Open Responses API Request */
export interface ResponseRequest {
/** Model/agent ID to use */
model: string;
/** Input context - string or array of items */
input: string | InputItem[];
/** Previous response ID for conversation continuation */
previous_response_id?: string;
/** Tools available to the model */
tools?: Tool[];
/** Tool choice configuration */
tool_choice?: ToolChoice;
/** Whether to stream the response */
stream?: boolean;
/** Stream options */
stream_options?: StreamOptions;
/** Additional instructions */
instructions?: string;
/** Maximum output tokens */
max_output_tokens?: number;
/** Maximum tool calls */
max_tool_calls?: number;
/** Sampling temperature */
temperature?: number;
/** Top-p sampling */
top_p?: number;
/** Presence penalty */
presence_penalty?: number;
/** Frequency penalty */
frequency_penalty?: number;
/** Reasoning configuration */
reasoning?: ReasoningConfig;
/** Text output configuration */
text?: TextConfig;
/** Truncation behavior */
truncation?: TruncationValue;
/** Service tier */
service_tier?: ServiceTier;
/** Whether to store the response */
store?: boolean;
/** Metadata */
metadata?: Metadata;
/** Whether model can call multiple tools in parallel */
parallel_tool_calls?: boolean;
/** User identifier for safety */
user?: string;
}
/* =============================================================================
* RESPONSE
* ============================================================================= */
/** Input tokens details */
export interface InputTokensDetails {
cached_tokens: number;
}
/** Output tokens details */
export interface OutputTokensDetails {
reasoning_tokens: number;
}
/** Token usage statistics */
export interface Usage {
input_tokens: number;
output_tokens: number;
total_tokens: number;
input_tokens_details: InputTokensDetails;
output_tokens_details: OutputTokensDetails;
}
/** Incomplete details */
export interface IncompleteDetails {
reason: 'max_output_tokens' | 'max_tool_calls' | 'content_filter' | 'other';
}
/** Error object */
export interface ResponseError {
type: 'server_error' | 'invalid_request' | 'not_found' | 'model_error' | 'too_many_requests';
code?: string;
message: string;
param?: string;
}
/** Text field configuration */
export interface TextField {
format?: {
type: 'text' | 'json_object' | 'json_schema';
json_schema?: Record<string, unknown>;
};
}
/** Open Responses API Response - All required fields per spec */
export interface Response {
/** Response ID */
id: string;
/** Object type - always "response" */
object: 'response';
/** Creation timestamp (Unix seconds) */
created_at: number;
/** Completion timestamp (Unix seconds) - null if not completed */
completed_at: number | null;
/** Response status */
status: ResponseStatus;
/** Incomplete details - null if not incomplete */
incomplete_details: IncompleteDetails | null;
/** Model that generated the response */
model: string;
/** Previous response ID - null if not a continuation */
previous_response_id: string | null;
/** Instructions used - null if none */
instructions: string | null;
/** Output items */
output: OutputItem[];
/** Error - null if no error */
error: ResponseError | null;
/** Tools available */
tools: Tool[];
/** Tool choice setting */
tool_choice: ToolChoice;
/** Truncation setting used */
truncation: TruncationValue;
/** Whether parallel tool calls were allowed */
parallel_tool_calls: boolean;
/** Text configuration used */
text: TextField;
/** Temperature used */
temperature: number;
/** Top-p used */
top_p: number;
/** Presence penalty used */
presence_penalty: number;
/** Frequency penalty used */
frequency_penalty: number;
/** Top logprobs - number of most likely tokens to return */
top_logprobs: number;
/** Reasoning configuration - null if none */
reasoning: ReasoningConfig | null;
/** User identifier - null if none */
user: string | null;
/** Token usage - null if not available */
usage: Usage | null;
/** Max output tokens - null if not set */
max_output_tokens: number | null;
/** Max tool calls - null if not set */
max_tool_calls: number | null;
/** Whether response was stored */
store: boolean;
/** Whether request was run in background */
background: boolean;
/** Service tier used */
service_tier: string;
/** Metadata */
metadata: Metadata;
/** Safety identifier - null if none */
safety_identifier: string | null;
/** Prompt cache key - null if none */
prompt_cache_key: string | null;
}
/* =============================================================================
* STREAMING EVENTS
* ============================================================================= */
/** Base event structure */
export interface BaseEvent {
type: string;
sequence_number: number;
}
/** Response created event (first event in stream) */
export interface ResponseCreatedEvent extends BaseEvent {
type: 'response.created';
response: Response;
}
/** Response in_progress event */
export interface ResponseInProgressEvent extends BaseEvent {
type: 'response.in_progress';
response: Response;
}
/** Response completed event */
export interface ResponseCompletedEvent extends BaseEvent {
type: 'response.completed';
response: Response;
}
/** Response failed event */
export interface ResponseFailedEvent extends BaseEvent {
type: 'response.failed';
response: Response;
}
/** Response incomplete event */
export interface ResponseIncompleteEvent extends BaseEvent {
type: 'response.incomplete';
response: Response;
}
/** Output item added event */
export interface OutputItemAddedEvent extends BaseEvent {
type: 'response.output_item.added';
output_index: number;
item: OutputItem;
}
/** Output item done event */
export interface OutputItemDoneEvent extends BaseEvent {
type: 'response.output_item.done';
output_index: number;
item: OutputItem;
}
/** Content part added event */
export interface ContentPartAddedEvent extends BaseEvent {
type: 'response.content_part.added';
item_id: string;
output_index: number;
content_index: number;
part: ModelContent | ReasoningContent;
}
/** Content part done event */
export interface ContentPartDoneEvent extends BaseEvent {
type: 'response.content_part.done';
item_id: string;
output_index: number;
content_index: number;
part: ModelContent | ReasoningContent;
}
/** Output text delta event */
export interface OutputTextDeltaEvent extends BaseEvent {
type: 'response.output_text.delta';
item_id: string;
output_index: number;
content_index: number;
delta: string;
logprobs: LogProb[];
}
/** Output text done event */
export interface OutputTextDoneEvent extends BaseEvent {
type: 'response.output_text.done';
item_id: string;
output_index: number;
content_index: number;
text: string;
logprobs: LogProb[];
}
/** Refusal delta event */
export interface RefusalDeltaEvent extends BaseEvent {
type: 'response.refusal.delta';
item_id: string;
output_index: number;
content_index: number;
delta: string;
}
/** Refusal done event */
export interface RefusalDoneEvent extends BaseEvent {
type: 'response.refusal.done';
item_id: string;
output_index: number;
content_index: number;
refusal: string;
}
/** Function call arguments delta event */
export interface FunctionCallArgumentsDeltaEvent extends BaseEvent {
type: 'response.function_call_arguments.delta';
item_id: string;
output_index: number;
call_id: string;
delta: string;
}
/** Function call arguments done event */
export interface FunctionCallArgumentsDoneEvent extends BaseEvent {
type: 'response.function_call_arguments.done';
item_id: string;
output_index: number;
call_id: string;
arguments: string;
}
/** Reasoning delta event */
export interface ReasoningDeltaEvent extends BaseEvent {
type: 'response.reasoning.delta';
item_id: string;
output_index: number;
content_index: number;
delta: string;
}
/** Reasoning done event */
export interface ReasoningDoneEvent extends BaseEvent {
type: 'response.reasoning.done';
item_id: string;
output_index: number;
content_index: number;
text: string;
}
/** Error event */
export interface ErrorEvent extends BaseEvent {
type: 'error';
error: ResponseError;
}
/* =============================================================================
* LIBRECHAT EXTENSION TYPES
* Per Open Responses spec, custom types MUST be prefixed with implementor slug
* @see https://openresponses.org/specification#extending-streaming-events
* ============================================================================= */
/** Attachment content types for LibreChat extensions */
export interface LibreChatAttachmentContent {
/** File ID in LibreChat storage */
file_id?: string;
/** Original filename */
filename?: string;
/** MIME type */
type?: string;
/** URL to access the file */
url?: string;
/** Base64-encoded image data (for inline images) */
image_url?: string;
/** Width for images */
width?: number;
/** Height for images */
height?: number;
/** Associated tool call ID */
tool_call_id?: string;
/** Additional metadata */
[key: string]: unknown;
}
/**
* LibreChat attachment event - custom streaming event for file/image attachments
* Follows Open Responses extension pattern with librechat: prefix
*/
export interface LibreChatAttachmentEvent extends BaseEvent {
type: 'librechat:attachment';
/** The attachment data */
attachment: LibreChatAttachmentContent;
/** Associated message ID */
message_id?: string;
/** Associated conversation ID */
conversation_id?: string;
}
/** Union of all streaming events (including LibreChat extensions) */
export type ResponseEvent =
| ResponseCreatedEvent
| ResponseInProgressEvent
| ResponseCompletedEvent
| ResponseFailedEvent
| ResponseIncompleteEvent
| OutputItemAddedEvent
| OutputItemDoneEvent
| ContentPartAddedEvent
| ContentPartDoneEvent
| OutputTextDeltaEvent
| OutputTextDoneEvent
| RefusalDeltaEvent
| RefusalDoneEvent
| FunctionCallArgumentsDeltaEvent
| FunctionCallArgumentsDoneEvent
| ReasoningDeltaEvent
| ReasoningDoneEvent
| ErrorEvent
// LibreChat extensions (prefixed per Open Responses spec)
| LibreChatAttachmentEvent;
/* =============================================================================
* INTERNAL TYPES
* ============================================================================= */
/** Context for building responses */
export interface ResponseContext {
/** Response ID */
responseId: string;
/** Model/agent ID */
model: string;
/** Creation timestamp */
createdAt: number;
/** Previous response ID */
previousResponseId?: string;
/** Instructions */
instructions?: string;
}
/** Validation result for requests */
export interface RequestValidationResult {
valid: boolean;
request?: ResponseRequest;
error?: string;
}
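As a usage sketch (not part of the PR), the `type` discriminant on `ResponseEvent` lets consumers narrow streaming events without casts. The snippet redeclares two minimal event shapes locally so it is self-contained; the aggregation helper itself is invented for illustration:

```typescript
// Minimal local redeclarations of the delta/done event shapes above (illustration only).
interface OutputTextDeltaEvent {
  type: 'response.output_text.delta';
  item_id: string;
  sequence_number: number;
  delta: string;
}
interface OutputTextDoneEvent {
  type: 'response.output_text.done';
  item_id: string;
  sequence_number: number;
  text: string;
}
type Ev = OutputTextDeltaEvent | OutputTextDoneEvent;

/** Concatenates text deltas; TypeScript narrows `ev` inside the branch. */
function aggregateOutputText(events: Ev[]): string {
  let text = '';
  for (const ev of events) {
    if (ev.type === 'response.output_text.delta') {
      text += ev.delta; // narrowed to OutputTextDeltaEvent here
    }
  }
  return text;
}
```

The same pattern extends to the full `ResponseEvent` union: a `switch (ev.type)` gives exhaustive, type-narrowed handling of every event kind.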


@@ -0,0 +1,129 @@
import type { Request, Response } from 'express';
import type { Types } from 'mongoose';
import { logger } from '@librechat/data-schemas';
export interface ApiKeyHandlerDependencies {
createAgentApiKey: (params: {
userId: string | Types.ObjectId;
name: string;
expiresAt?: Date | null;
}) => Promise<{
id: string;
name: string;
key: string;
keyPrefix: string;
createdAt: Date;
expiresAt?: Date;
}>;
listAgentApiKeys: (userId: string | Types.ObjectId) => Promise<
Array<{
id: string;
name: string;
keyPrefix: string;
lastUsedAt?: Date;
expiresAt?: Date;
createdAt: Date;
}>
>;
deleteAgentApiKey: (
keyId: string | Types.ObjectId,
userId: string | Types.ObjectId,
) => Promise<boolean>;
getAgentApiKeyById: (
keyId: string | Types.ObjectId,
userId: string | Types.ObjectId,
) => Promise<{
id: string;
name: string;
keyPrefix: string;
lastUsedAt?: Date;
expiresAt?: Date;
createdAt: Date;
} | null>;
}
interface AuthenticatedRequest extends Request {
user?: {
id: string;
_id: Types.ObjectId;
};
}
export function createApiKeyHandlers(deps: ApiKeyHandlerDependencies) {
async function createApiKey(req: AuthenticatedRequest, res: Response) {
try {
const { name, expiresAt } = req.body;
if (!name || typeof name !== 'string' || name.trim() === '') {
return res.status(400).json({
error: 'API key name is required',
});
}
const result = await deps.createAgentApiKey({
userId: req.user?.id || '',
name: name.trim(),
expiresAt: expiresAt ? new Date(expiresAt) : null,
});
res.status(201).json({
id: result.id,
name: result.name,
key: result.key,
keyPrefix: result.keyPrefix,
createdAt: result.createdAt,
expiresAt: result.expiresAt,
});
} catch (error) {
logger.error('[createApiKey] Error creating API key:', error);
res.status(500).json({ error: 'Failed to create API key' });
}
}
async function listApiKeys(req: AuthenticatedRequest, res: Response) {
try {
const keys = await deps.listAgentApiKeys(req.user?.id || '');
res.status(200).json({ keys });
} catch (error) {
logger.error('[listApiKeys] Error listing API keys:', error);
res.status(500).json({ error: 'Failed to list API keys' });
}
}
async function getApiKey(req: AuthenticatedRequest, res: Response) {
try {
const key = await deps.getAgentApiKeyById(req.params.id, req.user?.id || '');
if (!key) {
return res.status(404).json({ error: 'API key not found' });
}
res.status(200).json(key);
} catch (error) {
logger.error('[getApiKey] Error getting API key:', error);
res.status(500).json({ error: 'Failed to get API key' });
}
}
async function deleteApiKey(req: AuthenticatedRequest, res: Response) {
try {
const deleted = await deps.deleteAgentApiKey(req.params.id, req.user?.id || '');
if (!deleted) {
return res.status(404).json({ error: 'API key not found' });
}
res.status(204).send();
} catch (error) {
logger.error('[deleteApiKey] Error deleting API key:', error);
res.status(500).json({ error: 'Failed to delete API key' });
}
}
return {
createApiKey,
listApiKeys,
getApiKey,
deleteApiKey,
};
}
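The `createApiKey` handler above rejects missing or blank key names before delegating to `createAgentApiKey`; that validation rule can be sketched in isolation (the helper name is invented):

```typescript
/** Mirrors the name check in createApiKey: a string that is non-empty after trimming. */
function isValidKeyName(name: unknown): name is string {
  return typeof name === 'string' && name.trim() !== '';
}
```

Extracting the predicate like this keeps the 400-response branch trivially testable without mocking Express.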


@@ -0,0 +1,4 @@
export * from './service';
export * from './middleware';
export * from './handlers';
export * from './permissions';


@@ -0,0 +1,163 @@
import { logger } from '@librechat/data-schemas';
import { ResourceType, PermissionBits, hasPermissions } from 'librechat-data-provider';
import type { Request, Response, NextFunction } from 'express';
import type { IUser } from '@librechat/data-schemas';
import type { Types } from 'mongoose';
import { getRemoteAgentPermissions } from './service';
export interface ApiKeyAuthDependencies {
validateAgentApiKey: (apiKey: string) => Promise<{
userId: Types.ObjectId;
keyId: Types.ObjectId;
} | null>;
findUser: (query: { _id: string | Types.ObjectId }) => Promise<IUser | null>;
}
export interface RemoteAgentAccessDependencies {
getAgent: (query: {
id: string;
}) => Promise<{ _id: Types.ObjectId; [key: string]: unknown } | null>;
getEffectivePermissions: (params: {
userId: string;
role?: string;
resourceType: ResourceType;
resourceId: string | Types.ObjectId;
}) => Promise<number>;
}
export interface ApiKeyAuthRequest extends Request {
user?: IUser & { id: string };
apiKeyId?: Types.ObjectId;
}
export interface RemoteAgentAccessRequest extends ApiKeyAuthRequest {
agent?: { _id: Types.ObjectId; [key: string]: unknown };
agentPermissions?: number;
}
export function createRequireApiKeyAuth(deps: ApiKeyAuthDependencies) {
return async (req: ApiKeyAuthRequest, res: Response, next: NextFunction) => {
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).json({
error: {
message: 'Missing or invalid Authorization header. Expected: Bearer <api_key>',
type: 'invalid_request_error',
code: 'missing_api_key',
},
});
}
const apiKey = authHeader.slice(7);
if (!apiKey || apiKey.trim() === '') {
return res.status(401).json({
error: {
message: 'API key is required',
type: 'invalid_request_error',
code: 'missing_api_key',
},
});
}
try {
const keyValidation = await deps.validateAgentApiKey(apiKey);
if (!keyValidation) {
return res.status(401).json({
error: {
message: 'Invalid API key',
type: 'invalid_request_error',
code: 'invalid_api_key',
},
});
}
const user = await deps.findUser({ _id: keyValidation.userId });
if (!user) {
return res.status(401).json({
error: {
message: 'User not found for this API key',
type: 'invalid_request_error',
code: 'invalid_api_key',
},
});
}
user.id = (user._id as Types.ObjectId).toString();
req.user = user as IUser & { id: string };
req.apiKeyId = keyValidation.keyId;
next();
} catch (error) {
logger.error('[requireApiKeyAuth] Error validating API key:', error);
return res.status(500).json({
error: {
message: 'Internal server error during authentication',
type: 'server_error',
code: 'internal_error',
},
});
}
};
}
export function createCheckRemoteAgentAccess(deps: RemoteAgentAccessDependencies) {
return async (req: RemoteAgentAccessRequest, res: Response, next: NextFunction) => {
const agentId = req.body?.model || req.params?.model;
if (!agentId) {
return res.status(400).json({
error: {
message: 'Model (agent ID) is required',
type: 'invalid_request_error',
code: 'missing_model',
},
});
}
try {
const agent = await deps.getAgent({ id: agentId });
if (!agent) {
return res.status(404).json({
error: {
message: `Agent not found: ${agentId}`,
type: 'invalid_request_error',
code: 'model_not_found',
},
});
}
const userId = req.user?.id || '';
const permissions = await getRemoteAgentPermissions(deps, userId, req.user?.role, agent._id);
if (!hasPermissions(permissions, PermissionBits.VIEW)) {
return res.status(403).json({
error: {
message: `No remote access to agent: ${agentId}`,
type: 'permission_error',
code: 'access_denied',
},
});
}
req.agent = agent;
req.agentPermissions = permissions;
next();
} catch (error) {
logger.error('[checkRemoteAgentAccess] Error checking agent access:', error);
return res.status(500).json({
error: {
message: 'Internal server error while checking agent access',
type: 'server_error',
code: 'internal_error',
},
});
}
};
}
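The Authorization parsing in `createRequireApiKeyAuth` reduces to a small pure function, sketched here under an invented name; it mirrors the two rejection branches above (missing/invalid `Bearer` scheme, and a blank key after the prefix):

```typescript
/**
 * Extracts the API key from an Authorization header, or null when the
 * header is absent, uses another scheme, or carries only whitespace.
 */
function extractBearerToken(header?: string): string | null {
  if (!header || !header.startsWith('Bearer ')) {
    return null; // missing header or wrong scheme -> 401 missing_api_key
  }
  const key = header.slice(7);
  return key.trim() === '' ? null : key; // blank key -> 401 missing_api_key
}
```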


@@ -0,0 +1,169 @@
import {
ResourceType,
PrincipalType,
PermissionBits,
AccessRoleIds,
} from 'librechat-data-provider';
import type { Types, Model } from 'mongoose';
export interface Principal {
type: string;
id: string;
name: string;
email?: string;
avatar?: string;
source?: string;
idOnTheSource?: string;
accessRoleId: string;
isImplicit?: boolean;
}
export interface EnricherDependencies {
AclEntry: Model<{
principalType: string;
principalId: Types.ObjectId;
resourceType: string;
resourceId: Types.ObjectId;
permBits: number;
roleId: Types.ObjectId;
grantedBy: Types.ObjectId;
grantedAt: Date;
}>;
AccessRole: Model<{
accessRoleId: string;
permBits: number;
}>;
logger: { error: (msg: string, ...args: unknown[]) => void };
}
export interface EnrichResult {
principals: Principal[];
entriesToBackfill: Types.ObjectId[];
}
/** Enriches REMOTE_AGENT principals with implicit AGENT owners */
export async function enrichRemoteAgentPrincipals(
deps: EnricherDependencies,
resourceId: string | Types.ObjectId,
principals: Principal[],
): Promise<EnrichResult> {
const { AclEntry } = deps;
const resourceObjectId =
typeof resourceId === 'string' && /^[a-f\d]{24}$/i.test(resourceId)
? AclEntry.base.Types.ObjectId.createFromHexString(resourceId)
: resourceId;
const agentOwnerEntries = await AclEntry.aggregate([
{
$match: {
resourceType: ResourceType.AGENT,
resourceId: resourceObjectId,
principalType: PrincipalType.USER,
permBits: { $bitsAllSet: PermissionBits.SHARE },
},
},
{
$lookup: {
from: 'users',
localField: 'principalId',
foreignField: '_id',
as: 'userInfo',
},
},
{
$project: {
principalId: 1,
userInfo: { $arrayElemAt: ['$userInfo', 0] },
},
},
]);
const enrichedPrincipals = [...principals];
const entriesToBackfill: Types.ObjectId[] = [];
for (const entry of agentOwnerEntries) {
if (!entry.userInfo) {
continue;
}
const alreadyIncluded = enrichedPrincipals.some(
(p) => p.type === PrincipalType.USER && p.id === entry.principalId.toString(),
);
if (!alreadyIncluded) {
enrichedPrincipals.unshift({
type: PrincipalType.USER,
id: entry.userInfo._id.toString(),
name: entry.userInfo.name || entry.userInfo.username,
email: entry.userInfo.email,
avatar: entry.userInfo.avatar,
source: 'local',
idOnTheSource: entry.userInfo.idOnTheSource || entry.userInfo._id.toString(),
accessRoleId: AccessRoleIds.REMOTE_AGENT_OWNER,
isImplicit: true,
});
entriesToBackfill.push(entry.principalId);
}
}
return { principals: enrichedPrincipals, entriesToBackfill };
}
/** Backfills REMOTE_AGENT ACL entries for AGENT owners (fire-and-forget) */
export function backfillRemoteAgentPermissions(
deps: EnricherDependencies,
resourceId: string | Types.ObjectId,
entriesToBackfill: Types.ObjectId[],
): void {
if (entriesToBackfill.length === 0) {
return;
}
const { AclEntry, AccessRole, logger } = deps;
const resourceObjectId =
typeof resourceId === 'string' && /^[a-f\d]{24}$/i.test(resourceId)
? AclEntry.base.Types.ObjectId.createFromHexString(resourceId)
: resourceId;
AccessRole.findOne({ accessRoleId: AccessRoleIds.REMOTE_AGENT_OWNER })
.lean()
.then((role) => {
if (!role) {
logger.error('[backfillRemoteAgentPermissions] REMOTE_AGENT_OWNER role not found');
return;
}
const bulkOps = entriesToBackfill.map((principalId) => ({
updateOne: {
filter: {
principalType: PrincipalType.USER,
principalId,
resourceType: ResourceType.REMOTE_AGENT,
resourceId: resourceObjectId,
},
update: {
$setOnInsert: {
principalType: PrincipalType.USER,
principalId,
principalModel: 'User',
resourceType: ResourceType.REMOTE_AGENT,
resourceId: resourceObjectId,
permBits: role.permBits,
roleId: role._id,
grantedBy: principalId,
grantedAt: new Date(),
},
},
upsert: true,
},
}));
return AclEntry.bulkWrite(bulkOps, { ordered: false });
})
.catch((err) => {
logger.error('[backfillRemoteAgentPermissions] Failed to backfill:', err);
});
}
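Both functions above normalize `resourceId` with the same 24-character hex test before converting it to an ObjectId; that check, extracted as a hypothetical helper:

```typescript
/** True when the string is a 24-character hex ObjectId candidate (case-insensitive). */
function isObjectIdHex(s: string): boolean {
  return /^[a-f\d]{24}$/i.test(s);
}
```

Only strings passing this test are handed to `createFromHexString`; anything else is passed through unchanged, on the assumption it is already an ObjectId.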


@@ -0,0 +1,146 @@
import { createMethods } from '@librechat/data-schemas';
import { ResourceType, PermissionBits, hasPermissions } from 'librechat-data-provider';
import type { AllMethods, IUser } from '@librechat/data-schemas';
import type { Types } from 'mongoose';
export interface ApiKeyServiceDependencies {
validateAgentApiKey: AllMethods['validateAgentApiKey'];
createAgentApiKey: AllMethods['createAgentApiKey'];
listAgentApiKeys: AllMethods['listAgentApiKeys'];
deleteAgentApiKey: AllMethods['deleteAgentApiKey'];
getAgentApiKeyById: AllMethods['getAgentApiKeyById'];
findUser: (query: { _id: string | Types.ObjectId }) => Promise<IUser | null>;
}
export interface RemoteAgentAccessResult {
hasAccess: boolean;
permissions: number;
agent: { _id: Types.ObjectId; [key: string]: unknown } | null;
}
export class AgentApiKeyService {
private deps: ApiKeyServiceDependencies;
constructor(deps: ApiKeyServiceDependencies) {
this.deps = deps;
}
async validateApiKey(apiKey: string): Promise<{
userId: Types.ObjectId;
keyId: Types.ObjectId;
} | null> {
return this.deps.validateAgentApiKey(apiKey);
}
async createApiKey(params: {
userId: string | Types.ObjectId;
name: string;
expiresAt?: Date | null;
}) {
return this.deps.createAgentApiKey(params);
}
async listApiKeys(userId: string | Types.ObjectId) {
return this.deps.listAgentApiKeys(userId);
}
async deleteApiKey(keyId: string | Types.ObjectId, userId: string | Types.ObjectId) {
return this.deps.deleteAgentApiKey(keyId, userId);
}
async getApiKeyById(keyId: string | Types.ObjectId, userId: string | Types.ObjectId) {
return this.deps.getAgentApiKeyById(keyId, userId);
}
async getUserFromApiKey(apiKey: string): Promise<IUser | null> {
const keyValidation = await this.validateApiKey(apiKey);
if (!keyValidation) {
return null;
}
return this.deps.findUser({ _id: keyValidation.userId });
}
}
export function createApiKeyServiceDependencies(
mongoose: typeof import('mongoose'),
): ApiKeyServiceDependencies {
const methods = createMethods(mongoose);
return {
validateAgentApiKey: methods.validateAgentApiKey,
createAgentApiKey: methods.createAgentApiKey,
listAgentApiKeys: methods.listAgentApiKeys,
deleteAgentApiKey: methods.deleteAgentApiKey,
getAgentApiKeyById: methods.getAgentApiKeyById,
findUser: methods.findUser,
};
}
export interface GetRemoteAgentPermissionsDeps {
getEffectivePermissions: (params: {
userId: string;
role?: string;
resourceType: ResourceType;
resourceId: string | Types.ObjectId;
}) => Promise<number>;
}
/** AGENT owners automatically have full REMOTE_AGENT permissions */
export async function getRemoteAgentPermissions(
deps: GetRemoteAgentPermissionsDeps,
userId: string,
role: string | undefined,
resourceId: string | Types.ObjectId,
): Promise<number> {
const agentPerms = await deps.getEffectivePermissions({
userId,
role,
resourceType: ResourceType.AGENT,
resourceId,
});
if (hasPermissions(agentPerms, PermissionBits.SHARE)) {
return PermissionBits.VIEW | PermissionBits.EDIT | PermissionBits.DELETE | PermissionBits.SHARE;
}
return deps.getEffectivePermissions({
userId,
role,
resourceType: ResourceType.REMOTE_AGENT,
resourceId,
});
}
export async function checkRemoteAgentAccess(params: {
userId: string;
role?: string;
agentId: string;
getAgent: (query: {
id: string;
}) => Promise<{ _id: Types.ObjectId; [key: string]: unknown } | null>;
getEffectivePermissions: (params: {
userId: string;
role?: string;
resourceType: ResourceType;
resourceId: string | Types.ObjectId;
}) => Promise<number>;
}): Promise<RemoteAgentAccessResult> {
const { userId, role, agentId, getAgent, getEffectivePermissions } = params;
const agent = await getAgent({ id: agentId });
if (!agent) {
return { hasAccess: false, permissions: 0, agent: null };
}
const permissions = await getRemoteAgentPermissions(
{ getEffectivePermissions },
userId,
role,
agent._id,
);
const hasAccess = hasPermissions(permissions, PermissionBits.VIEW);
return { hasAccess, permissions, agent };
}
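`getRemoteAgentPermissions` encodes an escalation rule: SHARE on the AGENT resource implies full REMOTE_AGENT permissions, otherwise explicit REMOTE_AGENT grants apply. A minimal sketch of the bit logic (the numeric bit values here are assumptions for illustration; the real ones come from `PermissionBits`):

```typescript
// Assumed bit values for illustration only.
const VIEW = 1, EDIT = 2, DELETE = 4, SHARE = 8;

// All required bits must be set in the granted bitmask.
const hasAllBits = (granted: number, required: number): boolean =>
  (granted & required) === required;

/**
 * Mirrors getRemoteAgentPermissions: SHARE on the AGENT resource grants
 * the full REMOTE_AGENT bitmask; otherwise explicit remote grants win.
 */
function resolveRemotePerms(agentPerms: number, explicitRemotePerms: number): number {
  return hasAllBits(agentPerms, SHARE)
    ? VIEW | EDIT | DELETE | SHARE
    : explicitRemotePerms;
}
```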


@@ -100,6 +100,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: false,
[Permissions.CREATE]: false,
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
};
const expectedPermissionsForAdmin = {
@@ -141,6 +147,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: true,
[Permissions.CREATE]: true,
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
};
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
@@ -246,6 +258,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: false,
[Permissions.CREATE]: false,
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
};
const expectedPermissionsForAdmin = {
@@ -287,6 +305,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: true,
[Permissions.CREATE]: true,
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
};
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
@@ -378,6 +402,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: false,
[Permissions.CREATE]: false,
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
};
const expectedPermissionsForAdmin = {
@@ -419,6 +449,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: true,
[Permissions.CREATE]: true,
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
};
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
@@ -523,6 +559,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: false,
[Permissions.CREATE]: false,
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
};
const expectedPermissionsForAdmin = {
@@ -564,6 +606,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: true,
[Permissions.CREATE]: true,
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
};
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
@@ -655,6 +703,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: false,
[Permissions.CREATE]: false,
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
};
const expectedPermissionsForAdmin = {
@@ -696,6 +750,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: true,
[Permissions.CREATE]: true,
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
};
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
@@ -784,6 +844,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: false,
[Permissions.CREATE]: false,
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
};
const expectedPermissionsForAdmin = {
@@ -813,6 +879,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: true,
[Permissions.CREATE]: true,
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
};
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
@@ -920,6 +992,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: false,
[Permissions.CREATE]: false,
[Permissions.SHARE]: false,
[Permissions.SHARE_PUBLIC]: false,
},
};
const expectedPermissionsForAdmin = {
@@ -955,6 +1033,12 @@ describe('updateInterfacePermissions - permissions', () => {
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: true,
[Permissions.CREATE]: true,
[Permissions.SHARE]: true,
[Permissions.SHARE_PUBLIC]: true,
},
};
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);


@@ -43,6 +43,8 @@ function hasExplicitConfig(
return interfaceConfig?.fileCitations !== undefined;
case PermissionTypes.MCP_SERVERS:
return interfaceConfig?.mcpServers !== undefined;
case PermissionTypes.REMOTE_AGENTS:
return interfaceConfig?.remoteAgents !== undefined;
default:
return false;
}
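The new `REMOTE_AGENTS` case follows the existing pattern: a permission type counts as "explicitly configured" only when its key appears in the loaded `librechat.yaml` interface block, even if that key's value is an empty object. A self-contained sketch of that check (the local type and enum stand-ins are assumptions for illustration, not the project's actual definitions):

```typescript
// Hypothetical local stand-ins for the project's enums/types (names assumed):
const PermissionTypes = {
  FILE_CITATIONS: 'FILE_CITATIONS',
  MCP_SERVERS: 'MCP_SERVERS',
  REMOTE_AGENTS: 'REMOTE_AGENTS',
} as const;
type PermissionType = (typeof PermissionTypes)[keyof typeof PermissionTypes];

interface InterfaceConfig {
  fileCitations?: unknown;
  mcpServers?: unknown;
  remoteAgents?: unknown;
}

// Mirrors the switch in the diff: only a present key makes a permission
// type "explicit"; unknown types fall through to false.
function hasExplicitConfig(
  interfaceConfig: InterfaceConfig | undefined,
  permissionType: PermissionType,
): boolean {
  switch (permissionType) {
    case PermissionTypes.FILE_CITATIONS:
      return interfaceConfig?.fileCitations !== undefined;
    case PermissionTypes.MCP_SERVERS:
      return interfaceConfig?.mcpServers !== undefined;
    case PermissionTypes.REMOTE_AGENTS:
      return interfaceConfig?.remoteAgents !== undefined;
    default:
      return false;
  }
}

// An empty object still counts as explicit: the admin wrote the key.
console.log(hasExplicitConfig({ remoteAgents: {} }, PermissionTypes.REMOTE_AGENTS)); // true
console.log(hasExplicitConfig({}, PermissionTypes.REMOTE_AGENTS)); // false
```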
@@ -101,7 +103,9 @@ export async function updateInterfacePermissions({
const defaultPerms = roleDefaults[roleName]?.permissions;
const existingRole = await getRoleByName(roleName);
- const existingPermissions = existingRole?.permissions;
+ const existingPermissions = existingRole?.permissions as
+   | Partial<Record<PermissionTypes, Record<string, boolean | undefined>>>
+   | undefined;
const permissionsToUpdate: Partial<
Record<PermissionTypes, Record<string, boolean | undefined>>
> = {};
@@ -335,6 +339,28 @@ export async function updateInterfacePermissions({
defaults.mcpServers?.public,
),
},
[PermissionTypes.REMOTE_AGENTS]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.remoteAgents?.use,
defaultPerms[PermissionTypes.REMOTE_AGENTS]?.[Permissions.USE],
defaults.remoteAgents?.use,
),
[Permissions.CREATE]: getPermissionValue(
loadedInterface.remoteAgents?.create,
defaultPerms[PermissionTypes.REMOTE_AGENTS]?.[Permissions.CREATE],
defaults.remoteAgents?.create,
),
[Permissions.SHARE]: getPermissionValue(
loadedInterface.remoteAgents?.share,
defaultPerms[PermissionTypes.REMOTE_AGENTS]?.[Permissions.SHARE],
defaults.remoteAgents?.share,
),
[Permissions.SHARE_PUBLIC]: getPermissionValue(
loadedInterface.remoteAgents?.public,
defaultPerms[PermissionTypes.REMOTE_AGENTS]?.[Permissions.SHARE_PUBLIC],
defaults.remoteAgents?.public,
),
},
};
// Check and add each permission type if needed
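Every call site above passes three candidate values in the same order, which suggests a three-tier precedence: the explicit `librechat.yaml` value, then the role's stored default, then the system default. A minimal sketch of that resolution under that assumption (the real `getPermissionValue` in the codebase may differ):

```typescript
// Hypothetical sketch of the precedence implied by the call sites:
// explicit interface config > role default > system default.
function getPermissionValue(
  explicit: boolean | undefined,
  roleDefault: boolean | undefined,
  systemDefault: boolean | undefined,
): boolean | undefined {
  if (explicit !== undefined) {
    return explicit; // the admin set it in librechat.yaml
  }
  if (roleDefault !== undefined) {
    return roleDefault; // keep what the role already defines
  }
  return systemDefault; // fall back to the built-in default
}

// e.g. remoteAgents.use unset in config, role default true, system default false
console.log(getPermissionValue(undefined, true, false)); // true
```

An explicit `false` wins over a `true` default under this scheme, which is why the tests above assert all-false `REMOTE_AGENTS` permissions when the config disables them.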


@@ -2,6 +2,8 @@ export * from './app';
export * from './cdn';
/* Auth */
export * from './auth';
/* API Keys */
export * from './apiKeys';
/* MCP */
export * from './mcp/registry/MCPServersRegistry';
export * from './mcp/MCPManager';