LibreChat/api/server/controllers/agents/responses.js
Danny Avila 5af1342dbb
🦥 refactor: Event-Driven Lazy Tool Loading (#11588)
* refactor: json schema tools with lazy loading

- Added LocalToolExecutor class for lazy loading and caching of tools during execution.
- Introduced ToolExecutionContext and ToolExecutor interfaces for better type management.
- Created utility functions to generate tool proxies with JSON schema support.
- Added ExtendedJsonSchema type for enhanced schema definitions.
- Updated existing toolkits to utilize the new schema and executor functionalities.
- Introduced a comprehensive tool definitions registry for managing various tool schemas.
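The lazy-loading-with-caching idea above can be sketched as follows. This is an illustrative sketch, not the actual `LocalToolExecutor` implementation: the class name, the factory registry, and the `loads` counter are all hypothetical.

```javascript
// Hypothetical sketch: tool factories are registered up front, but a tool
// instance is only built the first time it is requested, then served from
// the cache on every later call.
class LocalToolExecutorSketch {
  constructor(factories) {
    this.factories = factories; // Map of toolName -> () => tool instance
    this.cache = new Map(); // toolName -> instantiated tool
    this.loads = 0; // counts how many instantiations actually happened
  }

  getTool(name) {
    if (this.cache.has(name)) {
      return this.cache.get(name);
    }
    const factory = this.factories.get(name);
    if (!factory) {
      throw new Error(`Unknown tool: ${name}`);
    }
    this.loads += 1;
    const tool = factory();
    this.cache.set(name, tool);
    return tool;
  }
}

const executor = new LocalToolExecutorSketch(
  new Map([['echo', () => ({ invoke: (args) => args })]]),
);
executor.getTool('echo');
executor.getTool('echo'); // cache hit; no second instantiation
```

The point of the pattern is that registering a tool costs only a map entry; the expensive construction is deferred until (and unless) the tool is actually needed.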

chore: update @librechat/agents to version 3.1.2

refactor: enhance tool loading optimization and classification

- Improved the loadAgentToolsOptimized function to utilize a proxy pattern for all tools, enabling deferred execution and reducing overhead.
- Introduced caching for tool instances and refined tool classification logic to streamline tool management.
- Updated the handling of MCP tools to improve logging and error reporting for missing tools in the cache.
- Enhanced the structure of tool definitions to support better classification and integration with existing tools.
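The proxy pattern described above can be sketched roughly like this, assuming a simplified tool shape; the function name and `resolveTool` callback are illustrative, not the repository's actual API.

```javascript
// Hedged sketch: the proxy carries only serializable metadata (name,
// description, schema) so the agent can plan with it, while the real
// implementation is resolved only when the proxy is actually invoked.
function createToolProxySketch(definition, resolveTool) {
  let resolved = null;
  return {
    name: definition.name,
    description: definition.description,
    schema: definition.parameters,
    async invoke(args) {
      if (resolved == null) {
        resolved = await resolveTool(definition.name); // deferred load
      }
      return resolved.invoke(args);
    },
  };
}

const proxy = createToolProxySketch(
  { name: 'add', description: 'adds two numbers', parameters: {} },
  async () => ({ invoke: ({ a, b }) => a + b }),
);
```

Until `proxy.invoke` runs, no real tool instance exists, which is what keeps the upfront loading overhead low.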

refactor: modularize tool loading and enhance optimization

- Moved the loadAgentToolsOptimized function to a new service file for better organization and maintainability.
- Updated the ToolService to utilize the new service for optimized tool loading, improving code clarity.
- Removed legacy tool loading methods and streamlined the tool loading process to enhance performance and reduce complexity.
- Introduced feature flag handling for optimized tool loading, allowing for easier toggling of this functionality.

refactor: replace loadAgentToolsWithFlag with loadAgentTools in tool loader

refactor: enhance MCP tool loading with proxy creation and classification

refactor: optimize MCP tool loading by grouping tools by server

- Introduced a Map to group cached tools by server name, improving the organization of tool data.
- Updated the createMCPProxyTool function to accept server name directly, enhancing clarity.
- Refactored the logic for handling MCP tools, streamlining the process of creating proxy tools for classification.
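The grouping step can be illustrated with a small sketch; the entry shape (`name`, `serverName`) is assumed for illustration.

```javascript
// Illustrative only: fold a flat list of cached tool entries into a Map
// keyed by server name, so each MCP server's tools are processed together.
function groupToolsByServer(cachedTools) {
  const byServer = new Map();
  for (const tool of cachedTools) {
    const existing = byServer.get(tool.serverName) ?? [];
    existing.push(tool);
    byServer.set(tool.serverName, existing);
  }
  return byServer;
}

const grouped = groupToolsByServer([
  { name: 'search', serverName: 'alpha' },
  { name: 'fetch', serverName: 'beta' },
  { name: 'list', serverName: 'alpha' },
]);
```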

refactor: enhance MCP tool loading and proxy creation

- Added functionality to retrieve MCP server tools and reinitialize servers if necessary, improving tool availability.
- Updated the tool loading logic to utilize a Map for organizing tools by server, enhancing clarity and performance.
- Refactored the createToolProxy function to ensure a default response format, streamlining tool creation.

refactor: update createToolProxy to ensure consistent response format

- Modified the createToolProxy function to await the executor's execution and validate the result format.
- Ensured that the function returns a default response structure when the result is not an array of two elements, enhancing reliability in tool proxy creation.
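The validation described above can be sketched as a small normalizer. The `[output, artifact]` tuple shape and the fallback values are assumptions of this sketch, inferred from the commit text.

```javascript
// Sketch of the result-format guard (assumed shape): if the executor does
// not return a two-element [output, artifact] tuple, fall back to a default
// structure so callers can always safely destructure the result.
function normalizeToolResult(result) {
  if (Array.isArray(result) && result.length === 2) {
    return result;
  }
  return [result ?? '', null]; // default: raw output, no artifact
}
```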

refactor: ToolExecutionContext with toolCall property

- Added toolCall property to ToolExecutionContext interface for improved context handling during tool execution.
- Updated LocalToolExecutor to include toolCall in the runnable configuration, allowing for more flexible tool invocation.
- Modified createToolProxy to pass toolCall from the configuration, ensuring consistent context across tool executions.

refactor: enhance event-driven tool execution and logging

- Introduced ToolExecuteOptions for improved handling of event-driven tool execution, allowing for parallel execution of tool calls.
- Updated getDefaultHandlers to include support for ON_TOOL_EXECUTE events, enhancing the flexibility of tool invocation.
- Added detailed logging in LocalToolExecutor to track tool loading and execution metrics, improving observability and debugging capabilities.
- Refactored initializeClient to integrate event-driven tool loading, ensuring compatibility with the new execution model.

chore: update @librechat/agents to version 3.1.21

refactor: remove legacy tool loading and executor components

- Eliminated the loadAgentToolsWithFlag function, simplifying the tool loading process by directly using loadAgentTools.
- Removed the LocalToolExecutor and related executor components to streamline the tool execution architecture.
- Updated ToolService and related files to reflect the removal of deprecated features, enhancing code clarity and maintainability.

refactor: enhance tool classification and definitions handling

- Updated the loadAgentTools function to return toolDefinitions alongside toolRegistry, improving the structure of tool data returned to clients.
- Removed the convertRegistryToDefinitions function from the initialize.js file, simplifying the initialization process.
- Adjusted the buildToolClassification function to ensure toolDefinitions are built and returned simultaneously with the toolRegistry, enhancing efficiency in tool management.
- Updated type definitions in initialize.ts to include toolDefinitions, ensuring consistency across the codebase.

refactor: implement event-driven tool execution handler

- Introduced createToolExecuteHandler function to streamline the handling of ON_TOOL_EXECUTE events, allowing for parallel execution of tool calls.
- Updated getDefaultHandlers to utilize the new handler, simplifying the event-driven architecture.
- Added handlers.ts file to encapsulate tool execution logic, improving code organization and maintainability.
- Enhanced OpenAI handlers to integrate the new tool execution capabilities, ensuring consistent event handling across the application.
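The handler's behavior can be sketched as: load only the tools the event names, then run the requested calls in parallel. The function name mirrors the commit, but the shapes of `toolCalls` and the `loadTools` result are assumptions.

```javascript
// Hypothetical sketch of an ON_TOOL_EXECUTE handler: resolve just the tools
// named by the event, then execute every tool call concurrently.
function createToolExecuteHandlerSketch({ loadTools }) {
  return async function handle(toolCalls) {
    const names = [...new Set(toolCalls.map((call) => call.name))];
    const tools = await loadTools(names); // assumed: Map of name -> tool
    return Promise.all(
      toolCalls.map(async (call) => {
        const tool = tools.get(call.name);
        if (!tool) {
          // mirrors the "detailed logging for missing tools" noted later
          return { id: call.id, error: `Tool not found: ${call.name}` };
        }
        return { id: call.id, output: await tool.invoke(call.args) };
      }),
    );
  };
}

const handle = createToolExecuteHandlerSketch({
  loadTools: async () => new Map([['double', { invoke: ({ n }) => n * 2 }]]),
});
```

`Promise.all` is what gives the parallel execution: each call starts immediately, and the handler resolves once every call has settled.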

refactor: integrate event-driven tool execution options

- Added toolExecuteOptions to support event-driven tool execution in OpenAI and responses controllers, enhancing flexibility in tool handling.
- Updated handlers to utilize createToolExecuteHandler, allowing for streamlined execution of tools during agent interactions.
- Refactored service dependencies to include toolExecuteOptions, ensuring consistent integration across the application.

refactor: enhance tool loading with definitionsOnly parameter

- Updated createToolLoader and loadAgentTools functions to include a definitionsOnly parameter, allowing for the retrieval of only serializable tool definitions in event-driven mode.
- Adjusted related interfaces and documentation to reflect the new parameter, improving clarity and flexibility in tool management.
- Ensured compatibility across various components by integrating the definitionsOnly option in the initialization process.
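The `definitionsOnly` split can be sketched like this; the function and field names are assumed, and the real `loadAgentTools` is asynchronous and far more involved.

```javascript
// Hedged sketch: with definitionsOnly set, only serializable metadata is
// returned and no tool instances are created — instances are built later,
// on demand, by the event-driven executor.
function loadAgentToolsSketch({ tools, definitionsOnly = false }) {
  const toolDefinitions = tools.map((t) => ({
    name: t.name,
    description: t.description,
    parameters: t.parameters,
  }));
  if (definitionsOnly) {
    return { toolDefinitions };
  }
  return { toolDefinitions, structuredTools: tools.map((t) => t.instantiate()) };
}

const result = loadAgentToolsSketch({
  tools: [{ name: 'echo', description: 'echoes', parameters: {}, instantiate: () => ({ name: 'echo' }) }],
  definitionsOnly: true,
});
```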

refactor: improve agent tool presence check in initialization

- Added a check for tool presence using a new hasAgentTools variable, which evaluates both structuredTools and toolDefinitions.
- Updated the conditional logic in the agent initialization process to utilize the hasAgentTools variable, enhancing clarity and maintainability in tool management.

refactor: enhance agent tool extraction to support tool definitions

- Updated the extractMCPServers function to handle both tool instances and serializable tool definitions, improving flexibility in agent tool management.
- Added a new property toolDefinitions to the AgentWithTools type for better integration of event-driven mode.
- Enhanced documentation to clarify the function's capabilities in extracting unique MCP server names from both tools and tool definitions.
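A rough sketch of the dual-source extraction, assuming MCP tool keys embed the server name after an `_mcp_` delimiter (the delimiter convention is an assumption of this sketch):

```javascript
// Sketch: collect unique MCP server names from both live tool instances and
// serializable tool definitions. Assumes keys shaped like
// 'toolName_mcp_serverName'; names without the delimiter are skipped.
const MCP_DELIMITER = '_mcp_';

function extractMCPServersSketch(agent) {
  const names = [
    ...(agent.tools ?? []).map((tool) => tool.name),
    ...(agent.toolDefinitions ?? []).map((def) => def.name),
  ];
  const servers = new Set();
  for (const name of names) {
    const index = name.indexOf(MCP_DELIMITER);
    if (index !== -1) {
      servers.add(name.slice(index + MCP_DELIMITER.length));
    }
  }
  return servers;
}

const servers = extractMCPServersSketch({
  tools: [{ name: 'search_mcp_alpha' }],
  toolDefinitions: [{ name: 'fetch_mcp_beta' }, { name: 'plain_tool' }],
});
```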

refactor: enhance tool classification and registry building

- Added serverName property to ToolDefinition for improved tool identification.
- Introduced buildToolRegistry function to streamline the creation of tool registries based on MCP tool definitions and agent options.
- Updated buildToolClassification to utilize the new registry building logic, ensuring basic definitions are returned even when advanced classification features are not allowed.
- Enhanced documentation and logging for clarity in tool classification processes.

refactor: update @librechat/agents dependency to version 3.1.22

fix: expose loadTools function in ToolService

- Added loadTools function to the exported module in ToolService.js, enhancing the accessibility of tool loading functionality.

chore: remove configurable options from tool execute options in OpenAI controller

refactor: enhance tool loading mechanism to utilize agent-specific context

chore: update @librechat/agents dependency to version 3.1.23

fix: simplify result handling in createToolExecuteHandler

* refactor: loadToolDefinitions for efficient tool loading in event-driven mode

* refactor: replace legacy tool loading with loadToolsForExecution in OpenAI and responses controllers

- Updated OpenAIChatCompletionController and createResponse functions to utilize loadToolsForExecution for improved tool loading.
- Removed deprecated loadToolsLegacy references, streamlining the tool execution process.
- Enhanced tool loading options to include agent-specific context and configurations.

* refactor: enhance tool loading and execution handling

- Introduced loadActionToolsForExecution function to streamline loading of action tools, improving organization and maintainability.
- Updated loadToolsForExecution to handle both regular and action tools, optimizing the tool loading process.
- Added detailed logging for missing tools in createToolExecuteHandler, enhancing error visibility.
- Refactored tool definitions to normalize action tool names, improving consistency in tool management.

* refactor: enhance built-in tool definitions loading

- Updated loadToolDefinitions to include descriptions and parameters from the tool registry for built-in tools, improving the clarity and usability of tool definitions.
- Integrated getToolDefinition to streamline the retrieval of tool metadata, enhancing the overall tool management process.

* feat: add action tool definitions loading to tool service

- Introduced getActionToolDefinitions function to load action tool definitions based on agent ID and tool names, enhancing the tool loading process.
- Updated loadToolDefinitions to integrate action tool definitions, allowing for better management and retrieval of action-specific tools.
- Added comprehensive tests for action tool definitions to ensure correct loading and parameter handling, improving overall reliability and functionality.

* chore: update @librechat/agents dependency to version 3.1.26

* refactor: add toolEndCallback to handle tool execution results

* fix: tool definitions and execution handling

- Introduced native tools (execute_code, file_search, web_search) to the tool service, allowing for better integration and management of these tools.
- Updated isBuiltInTool function to include native tools in the built-in check, improving tool recognition.
- Added comprehensive tests for loading parameters of native tools, ensuring correct functionality and parameter handling.
- Enhanced tool definitions registry to include new agent tool definitions, streamlining tool retrieval and management.
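The broadened built-in check can be sketched as below. The three native tool names come from the change above; the manifest set and the function signature are illustrative.

```javascript
// Sketch: a tool counts as built-in if it is one of the native tools or
// appears in the plugin manifest. The manifest contents here are made up.
const NATIVE_TOOLS = new Set(['execute_code', 'file_search', 'web_search']);

function isBuiltInToolSketch(toolName, manifestToolKeys = new Set()) {
  return NATIVE_TOOLS.has(toolName) || manifestToolKeys.has(toolName);
}
```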

* refactor: enhance tool loading and execution context

- Added toolRegistry to the context for OpenAIChatCompletionController and createResponse functions, improving tool management.
- Updated loadToolsForExecution to utilize toolRegistry for better integration of programmatic tools and tool search functionalities.
- Enhanced the initialization process to include toolRegistry in agent context, streamlining tool access and configuration.
- Refactored tool classification logic to support event-driven execution, ensuring compatibility with new tool definitions.

* chore: add request duration logging to OpenAI and Responses controllers

- Introduced logging for request start and completion times in OpenAIChatCompletionController and createResponse functions.
- Calculated and logged the duration of each request, enhancing observability and performance tracking.
- Improved debugging capabilities by providing detailed logs for both streaming and non-streaming responses.

* chore: update @librechat/agents dependency to version 3.1.27

* refactor: implement buildToolSet function for tool management

- Introduced buildToolSet function to streamline the creation of tool sets from agent configurations, enhancing tool management across various controllers.
- Updated AgentClient, OpenAIChatCompletionController, and createResponse functions to utilize buildToolSet, improving consistency in tool handling.
- Added comprehensive tests for buildToolSet to ensure correct functionality and edge case handling, enhancing overall reliability.
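A plausible sketch of what such a helper does, given that `formatAgentMessages` in the code below receives a tool set: collect every known tool name into a `Set`. The exact inputs `buildToolSet` reads are an assumption here.

```javascript
// Hedged sketch: gather tool names from both instantiated tools and
// serializable definitions so message formatting can recognize tool calls.
function buildToolSetSketch(agentConfig) {
  const toolSet = new Set();
  for (const tool of agentConfig.tools ?? []) {
    if (tool?.name) {
      toolSet.add(tool.name);
    }
  }
  for (const def of agentConfig.toolDefinitions ?? []) {
    if (def?.name) {
      toolSet.add(def.name);
    }
  }
  return toolSet;
}

const toolSet = buildToolSetSketch({
  tools: [{ name: 'web_search' }],
  toolDefinitions: [{ name: 'execute_code' }, { name: 'web_search' }],
});
```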

* refactor: update import paths for ToolExecuteOptions and createToolExecuteHandler

* fix: update GoogleSearch.js description for maximum search results

- Changed the default maximum number of search results from 10 to 5 in the Google Search JSON schema description, ensuring accurate documentation of the expected behavior.

* chore: remove deprecated Browser tool and associated assets

- Deleted the Browser tool definition from manifest.json, which included its name, plugin key, description, and authentication configuration.
- Removed the web-browser.svg asset as it is no longer needed following the removal of the Browser tool.

* fix: ensure tool definitions are valid before processing

- Added a check to verify the existence of tool definitions in the registry before accessing their properties, preventing potential runtime errors.
- Updated the loading logic for built-in tool definitions to ensure that only valid definitions are pushed to the built-in tool definitions array.

* fix: extend ExtendedJsonSchema to support 'null' type and nullable enums

- Updated the ExtendedJsonSchema type to include 'null' as a valid type option.
- Modified the enum property to accept an array of values that can include strings, numbers, booleans, and null, enhancing schema flexibility.
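An example of a schema the extended type now permits (the field names follow JSON Schema; the schema itself is made up for illustration):

```javascript
// Illustrative schema: 'null' may appear in the type array, and enum values
// may mix strings, numbers, booleans, and null.
const prioritySchema = {
  type: ['string', 'integer', 'null'],
  enum: ['low', 'high', 1, 2, null],
  description: 'Priority level; null clears any previously set value',
};

// A value is allowed when it appears in the enum list (including null).
const isAllowed = (value) => prioritySchema.enum.includes(value);
```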

* test: add comprehensive tests for tool definitions loading and registry behavior

- Implemented tests to verify the handling of built-in tools without registry definitions, ensuring they are skipped correctly.
- Added tests to confirm that built-in tools include descriptions and parameters in the registry.
- Enhanced tests for action tools, checking for proper inclusion of metadata and handling of tools without parameters in the registry.

* test: add tests for mixed-type and number enum schema handling

- Introduced tests to validate the parsing of mixed-type enum values, including strings, numbers, booleans, and null.
- Added tests for number enum schema values to ensure correct parsing of numeric inputs, enhancing schema validation coverage.

* fix: update mock implementation for @librechat/agents

- Changed the mock for @librechat/agents to spread the actual module's properties, ensuring that all necessary functionalities are preserved in tests.
- This adjustment enhances the accuracy of the tests by reflecting the real structure of the module.

* fix: change max_results type in GoogleSearch schema from number to integer

- Updated the type of max_results in the Google Search JSON schema to 'integer' for better type accuracy and validation consistency.

* fix: update max_results description and type in GoogleSearch schema

- Changed the type of max_results from 'number' to 'integer' for improved type accuracy.
- Updated the description to reflect the new default maximum number of search results, changing it from 10 to 5.

* refactor: remove unused code and improve tool registry handling

- Eliminated outdated comments and conditional logic related to event-driven mode in the ToolService.
- Enhanced the handling of the tool registry by ensuring it is configurable for better integration during tool execution.

* feat: add definitionsOnly option to buildToolClassification for event-driven mode

- Introduced a new parameter, definitionsOnly, to the BuildToolClassificationParams interface to enable a mode that skips tool instance creation.
- Updated the buildToolClassification function to conditionally add tool definitions without instantiating tools when definitionsOnly is true.
- Modified the loadToolDefinitions function to pass definitionsOnly as true, ensuring compatibility with the new feature.

* test: add unit tests for buildToolClassification with definitionsOnly option

- Implemented tests to verify the behavior of buildToolClassification when definitionsOnly is set to true or false.
- Ensured that tool instances are not created when definitionsOnly is true, while still adding necessary tool definitions.
- Confirmed that loadAuthValues is called appropriately based on the definitionsOnly parameter, enhancing test coverage for this new feature.
2026-02-01 08:50:57 -05:00


const { nanoid } = require('nanoid');
const { v4: uuidv4 } = require('uuid');
const { logger } = require('@librechat/data-schemas');
const { EModelEndpoint, ResourceType, PermissionBits } = require('librechat-data-provider');
const {
Callback,
ToolEndHandler,
formatAgentMessages,
ChatModelStreamHandler,
} = require('@librechat/agents');
const {
createRun,
buildToolSet,
createSafeUser,
initializeAgent,
createToolExecuteHandler,
// Responses API
writeDone,
buildResponse,
generateResponseId,
isValidationFailure,
emitResponseCreated,
createResponseContext,
createResponseTracker,
setupStreamingResponse,
emitResponseInProgress,
convertInputToMessages,
validateResponseRequest,
buildAggregatedResponse,
createResponseAggregator,
sendResponsesErrorResponse,
createResponsesEventHandlers,
createAggregatorEventHandlers,
} = require('@librechat/api');
const {
createResponsesToolEndCallback,
createToolEndCallback,
} = require('~/server/controllers/agents/callbacks');
const { loadAgentTools, loadToolsForExecution } = require('~/server/services/ToolService');
const { findAccessibleResources } = require('~/server/services/PermissionService');
const { getConvoFiles, saveConvo, getConvo } = require('~/models/Conversation');
const { getAgent, getAgents } = require('~/models/Agent');
const db = require('~/models');
/** @type {import('@librechat/api').AppConfig | null} */
let appConfig = null;
/**
* Set the app config for the controller
* @param {import('@librechat/api').AppConfig} config
*/
function setAppConfig(config) {
appConfig = config;
}
/**
* Creates a tool loader function for the agent.
* @param {AbortSignal} signal - The abort signal
* @param {boolean} [definitionsOnly=true] - When true, returns only serializable
* tool definitions without creating full tool instances (for event-driven mode)
*/
function createToolLoader(signal, definitionsOnly = true) {
return async function loadTools({
req,
res,
tools,
model,
agentId,
provider,
tool_options,
tool_resources,
}) {
const agent = { id: agentId, tools, provider, model, tool_options };
try {
return await loadAgentTools({
req,
res,
agent,
signal,
tool_resources,
definitionsOnly,
streamId: null,
});
} catch (error) {
logger.error(`Error loading tools for agent ${agentId}`, error);
}
};
}
/**
* Convert Open Responses input items to internal messages
* @param {import('@librechat/api').InputItem[]} input
* @returns {Array} Internal messages
*/
function convertToInternalMessages(input) {
return convertInputToMessages(input);
}
/**
* Load messages from a previous response/conversation
* @param {string} conversationId - The conversation/response ID
* @param {string} userId - The user ID
* @returns {Promise<Array>} Messages from the conversation
*/
async function loadPreviousMessages(conversationId, userId) {
try {
const messages = await db.getMessages({ conversationId, user: userId });
if (!messages || messages.length === 0) {
return [];
}
// Convert stored messages to internal format
return messages.map((msg) => {
const internalMsg = {
role: msg.isCreatedByUser ? 'user' : 'assistant',
content: '',
messageId: msg.messageId,
};
// Handle content - could be string or array
if (typeof msg.text === 'string') {
internalMsg.content = msg.text;
} else if (Array.isArray(msg.content)) {
// Handle content parts
internalMsg.content = msg.content;
} else if (msg.text) {
internalMsg.content = String(msg.text);
}
return internalMsg;
});
} catch (error) {
logger.error('[Responses API] Error loading previous messages:', error);
return [];
}
}
/**
* Save input messages to database
* @param {import('express').Request} req
* @param {string} conversationId
* @param {Array} inputMessages - Internal format messages
* @param {string} agentId
* @returns {Promise<void>}
*/
async function saveInputMessages(req, conversationId, inputMessages, agentId) {
for (const msg of inputMessages) {
if (msg.role === 'user') {
await db.saveMessage(
req,
{
messageId: msg.messageId || nanoid(),
conversationId,
parentMessageId: null,
isCreatedByUser: true,
text: typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content),
sender: 'User',
endpoint: EModelEndpoint.agents,
model: agentId,
},
{ context: 'Responses API - save user input' },
);
}
}
}
/**
* Save response output to database
* @param {import('express').Request} req
* @param {string} conversationId
* @param {string} responseId
* @param {import('@librechat/api').Response} response
* @param {string} agentId
* @returns {Promise<void>}
*/
async function saveResponseOutput(req, conversationId, responseId, response, agentId) {
// Extract text content from output items
let responseText = '';
for (const item of response.output) {
if (item.type === 'message' && item.content) {
for (const part of item.content) {
if (part.type === 'output_text' && part.text) {
responseText += part.text;
}
}
}
}
// Save the assistant message
await db.saveMessage(
req,
{
messageId: responseId,
conversationId,
parentMessageId: null,
isCreatedByUser: false,
text: responseText,
sender: 'Agent',
endpoint: EModelEndpoint.agents,
model: agentId,
finish_reason: response.status === 'completed' ? 'stop' : response.status,
tokenCount: response.usage?.output_tokens,
},
{ context: 'Responses API - save assistant response' },
);
}
/**
* Save or update conversation
* @param {import('express').Request} req
* @param {string} conversationId
* @param {string} agentId
* @param {object} agent
* @returns {Promise<void>}
*/
async function saveConversation(req, conversationId, agentId, agent) {
await saveConvo(
req,
{
conversationId,
endpoint: EModelEndpoint.agents,
agentId,
title: agent?.name || 'Open Responses Conversation',
model: agent?.model,
},
{ context: 'Responses API - save conversation' },
);
}
/**
* Convert stored messages to Open Responses output format
* @param {Array} messages - Stored messages
* @returns {Array} Output items
*/
function convertMessagesToOutputItems(messages) {
const output = [];
for (const msg of messages) {
if (!msg.isCreatedByUser) {
output.push({
type: 'message',
id: msg.messageId,
role: 'assistant',
status: 'completed',
content: [
{
type: 'output_text',
text: msg.text || '',
annotations: [],
},
],
});
}
}
return output;
}
/**
* Create Response - POST /v1/responses
*
* Creates a model response following the Open Responses API specification.
* Supports both streaming and non-streaming responses.
*
* @param {import('express').Request} req
* @param {import('express').Response} res
*/
const createResponse = async (req, res) => {
const requestStartTime = Date.now();
// Validate request
const validation = validateResponseRequest(req.body);
if (isValidationFailure(validation)) {
return sendResponsesErrorResponse(res, 400, validation.error);
}
const request = validation.request;
const agentId = request.model;
const isStreaming = request.stream === true;
// Look up the agent
const agent = await getAgent({ id: agentId });
if (!agent) {
return sendResponsesErrorResponse(
res,
404,
`Agent not found: ${agentId}`,
'not_found',
'model_not_found',
);
}
// Generate IDs
const responseId = generateResponseId();
const conversationId = request.previous_response_id ?? uuidv4();
const parentMessageId = null;
// Create response context
const context = createResponseContext(request, responseId);
logger.debug(
`[Responses API] Request ${responseId} started for agent ${agentId}, stream: ${isStreaming}`,
);
// Set up abort controller
const abortController = new AbortController();
// Handle client disconnect
req.on('close', () => {
if (!abortController.signal.aborted) {
abortController.abort();
logger.debug('[Responses API] Client disconnected, aborting');
}
});
try {
// Build allowed providers set
const allowedProviders = new Set(
appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders,
);
// Create tool loader
const loadTools = createToolLoader(abortController.signal);
// Initialize the agent first to check for disableStreaming
const endpointOption = {
endpoint: agent.provider,
model_parameters: agent.model_parameters ?? {},
};
const primaryConfig = await initializeAgent(
{
req,
res,
loadTools,
requestFiles: [],
conversationId,
parentMessageId,
agent,
endpointOption,
allowedProviders,
isInitialAgent: true,
},
{
getConvoFiles,
getFiles: db.getFiles,
getUserKey: db.getUserKey,
getMessages: db.getMessages,
updateFilesUsage: db.updateFilesUsage,
getUserKeyValues: db.getUserKeyValues,
getUserCodeFiles: db.getUserCodeFiles,
getToolFilesByIds: db.getToolFilesByIds,
getCodeGeneratedFiles: db.getCodeGeneratedFiles,
},
);
// Determine if streaming is enabled (check both request and agent config)
const streamingDisabled = !!primaryConfig.model_parameters?.disableStreaming;
const actuallyStreaming = isStreaming && !streamingDisabled;
// Load previous messages if previous_response_id is provided
let previousMessages = [];
if (request.previous_response_id) {
const userId = req.user?.id ?? 'api-user';
previousMessages = await loadPreviousMessages(request.previous_response_id, userId);
}
// Convert input to internal messages (request.input may be a string or an array of input items)
const inputMessages = convertToInternalMessages(request.input);
// Merge previous messages with new input
const allMessages = [...previousMessages, ...inputMessages];
const toolSet = buildToolSet(primaryConfig);
const { messages: formattedMessages, indexTokenCountMap } = formatAgentMessages(
allMessages,
{},
toolSet,
);
// Create tracker for streaming or aggregator for non-streaming
const tracker = actuallyStreaming ? createResponseTracker() : null;
const aggregator = actuallyStreaming ? null : createResponseAggregator();
// Set up response for streaming
if (actuallyStreaming) {
setupStreamingResponse(res);
// Create handler config
const handlerConfig = {
res,
context,
tracker,
};
// Emit response.created then response.in_progress per Open Responses spec
emitResponseCreated(handlerConfig);
emitResponseInProgress(handlerConfig);
// Create event handlers
const { handlers: responsesHandlers, finalizeStream } =
createResponsesEventHandlers(handlerConfig);
// Built-in handler for processing raw model stream chunks
const chatModelStreamHandler = new ChatModelStreamHandler();
// Artifact promises for processing tool outputs
/** @type {Promise<import('librechat-data-provider').TAttachment | null>[]} */
const artifactPromises = [];
// Use Responses API-specific callback that emits librechat:attachment events
const toolEndCallback = createResponsesToolEndCallback({
req,
res,
tracker,
artifactPromises,
});
// Create tool execute options for event-driven tool execution
const toolExecuteOptions = {
loadTools: async (toolNames) => {
return loadToolsForExecution({
req,
res,
agent,
toolNames,
signal: abortController.signal,
toolRegistry: primaryConfig.toolRegistry,
userMCPAuthMap: primaryConfig.userMCPAuthMap,
tool_resources: primaryConfig.tool_resources,
});
},
toolEndCallback,
};
// Combine handlers
const handlers = {
on_chat_model_stream: {
handle: async (event, data, metadata, graph) => {
await chatModelStreamHandler.handle(event, data, metadata, graph);
},
},
on_message_delta: responsesHandlers.on_message_delta,
on_reasoning_delta: responsesHandlers.on_reasoning_delta,
on_run_step: responsesHandlers.on_run_step,
on_run_step_delta: responsesHandlers.on_run_step_delta,
on_chat_model_end: responsesHandlers.on_chat_model_end,
on_tool_end: new ToolEndHandler(toolEndCallback, logger),
on_run_step_completed: { handle: () => {} },
on_chain_stream: { handle: () => {} },
on_chain_end: { handle: () => {} },
on_agent_update: { handle: () => {} },
on_custom_event: { handle: () => {} },
on_tool_execute: createToolExecuteHandler(toolExecuteOptions),
};
// Create and run the agent
const userId = req.user?.id ?? 'api-user';
const userMCPAuthMap = primaryConfig.userMCPAuthMap;
const run = await createRun({
agents: [primaryConfig],
messages: formattedMessages,
indexTokenCountMap,
runId: responseId,
signal: abortController.signal,
customHandlers: handlers,
requestBody: {
messageId: responseId,
conversationId,
},
user: { id: userId },
});
if (!run) {
throw new Error('Failed to create agent run');
}
// Process the stream
const config = {
runName: 'AgentRun',
configurable: {
thread_id: conversationId,
user_id: userId,
user: createSafeUser(req.user),
...(userMCPAuthMap != null && { userMCPAuthMap }),
},
signal: abortController.signal,
streamMode: 'values',
version: 'v2',
};
await run.processStream({ messages: formattedMessages }, config, {
callbacks: {
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
logger.error(`[Responses API] Tool Error "${toolId}"`, error);
},
},
});
// Finalize the stream
finalizeStream();
res.end();
const duration = Date.now() - requestStartTime;
logger.debug(`[Responses API] Request ${responseId} completed in ${duration}ms (streaming)`);
// Save to database if store: true
if (request.store === true) {
try {
// Save conversation
await saveConversation(req, conversationId, agentId, agent);
// Save input messages
await saveInputMessages(req, conversationId, inputMessages, agentId);
// Build response for saving (use tracker with buildResponse for streaming)
const finalResponse = buildResponse(context, tracker, 'completed');
await saveResponseOutput(req, conversationId, responseId, finalResponse, agentId);
logger.debug(
`[Responses API] Stored response ${responseId} in conversation ${conversationId}`,
);
} catch (saveError) {
logger.error('[Responses API] Error saving response:', saveError);
// Don't fail the request if saving fails
}
}
// Wait for artifact processing after response ends (non-blocking)
if (artifactPromises.length > 0) {
Promise.all(artifactPromises).catch((artifactError) => {
logger.warn('[Responses API] Error processing artifacts:', artifactError);
});
}
} else {
const aggregatorHandlers = createAggregatorEventHandlers(aggregator);
const chatModelStreamHandler = new ChatModelStreamHandler();
/** @type {Promise<import('librechat-data-provider').TAttachment | null>[]} */
const artifactPromises = [];
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises, streamId: null });
const toolExecuteOptions = {
loadTools: async (toolNames) => {
return loadToolsForExecution({
req,
res,
agent,
toolNames,
signal: abortController.signal,
toolRegistry: primaryConfig.toolRegistry,
userMCPAuthMap: primaryConfig.userMCPAuthMap,
tool_resources: primaryConfig.tool_resources,
});
},
toolEndCallback,
};
const handlers = {
on_chat_model_stream: {
handle: async (event, data, metadata, graph) => {
await chatModelStreamHandler.handle(event, data, metadata, graph);
},
},
on_message_delta: aggregatorHandlers.on_message_delta,
on_reasoning_delta: aggregatorHandlers.on_reasoning_delta,
on_run_step: aggregatorHandlers.on_run_step,
on_run_step_delta: aggregatorHandlers.on_run_step_delta,
on_chat_model_end: aggregatorHandlers.on_chat_model_end,
on_tool_end: new ToolEndHandler(toolEndCallback, logger),
on_run_step_completed: { handle: () => {} },
on_chain_stream: { handle: () => {} },
on_chain_end: { handle: () => {} },
on_agent_update: { handle: () => {} },
on_custom_event: { handle: () => {} },
on_tool_execute: createToolExecuteHandler(toolExecuteOptions),
};
const userId = req.user?.id ?? 'api-user';
const userMCPAuthMap = primaryConfig.userMCPAuthMap;
const run = await createRun({
agents: [primaryConfig],
messages: formattedMessages,
indexTokenCountMap,
runId: responseId,
signal: abortController.signal,
customHandlers: handlers,
requestBody: {
messageId: responseId,
conversationId,
},
user: { id: userId },
});
if (!run) {
throw new Error('Failed to create agent run');
}
const config = {
runName: 'AgentRun',
configurable: {
thread_id: conversationId,
user_id: userId,
user: createSafeUser(req.user),
...(userMCPAuthMap != null && { userMCPAuthMap }),
},
signal: abortController.signal,
streamMode: 'values',
version: 'v2',
};
await run.processStream({ messages: formattedMessages }, config, {
callbacks: {
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
logger.error(`[Responses API] Tool Error "${toolId}"`, error);
},
},
});
if (artifactPromises.length > 0) {
try {
await Promise.all(artifactPromises);
} catch (artifactError) {
logger.warn('[Responses API] Error processing artifacts:', artifactError);
}
}
const response = buildAggregatedResponse(context, aggregator);
if (request.store === true) {
try {
await saveConversation(req, conversationId, agentId, agent);
await saveInputMessages(req, conversationId, inputMessages, agentId);
await saveResponseOutput(req, conversationId, responseId, response, agentId);
logger.debug(
`[Responses API] Stored response ${responseId} in conversation ${conversationId}`,
);
} catch (saveError) {
logger.error('[Responses API] Error saving response:', saveError);
// Don't fail the request if saving fails
}
}
res.json(response);
const duration = Date.now() - requestStartTime;
logger.debug(
`[Responses API] Request ${responseId} completed in ${duration}ms (non-streaming)`,
);
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'An error occurred';
logger.error('[Responses API] Error:', error);
// Check if we already started streaming (headers sent)
if (res.headersSent) {
      // Headers already sent; emit the terminal done event and close the stream
writeDone(res);
res.end();
} else {
sendResponsesErrorResponse(res, 500, errorMessage, 'server_error');
}
}
};
/**
* List available agents as models - GET /v1/models (also works with /v1/responses/models)
*
* Returns a list of available agents the user has remote access to.
*
* @param {import('express').Request} req
* @param {import('express').Response} res
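 * @example
 * // Illustrative response shape (IDs and field values are examples, not actual data):
 * // {
 * //   "object": "list",
 * //   "data": [{
 * //     "id": "agent_abc123",
 * //     "object": "model",
 * //     "created": 1700000000,
 * //     "owned_by": "librechat",
 * //     "name": "Example Agent",
 * //     "description": "An example agent",
 * //     "provider": "openai"
 * //   }]
 * // }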
*/
const listModels = async (req, res) => {
try {
const userId = req.user?.id;
const userRole = req.user?.role;
if (!userId) {
return sendResponsesErrorResponse(res, 401, 'Authentication required', 'auth_error');
}
// Find agents the user has remote access to (VIEW permission on REMOTE_AGENT)
const accessibleAgentIds = await findAccessibleResources({
userId,
role: userRole,
resourceType: ResourceType.REMOTE_AGENT,
requiredPermissions: PermissionBits.VIEW,
});
// Get the accessible agents
let agents = [];
if (accessibleAgentIds.length > 0) {
agents = await getAgents({ _id: { $in: accessibleAgentIds } });
}
// Convert to models format
const models = agents.map((agent) => ({
id: agent.id,
object: 'model',
      created: Math.floor(new Date(agent.createdAt || Date.now()).getTime() / 1000),
owned_by: agent.author ?? 'librechat',
// Additional metadata
name: agent.name,
description: agent.description,
provider: agent.provider,
}));
res.json({
object: 'list',
data: models,
});
} catch (error) {
logger.error('[Responses API] Error listing models:', error);
sendResponsesErrorResponse(
res,
500,
error instanceof Error ? error.message : 'Failed to list models',
'server_error',
);
}
};
/**
* Get Response - GET /v1/responses/:id
*
* Retrieves a stored response by its ID.
* The response ID maps to a conversationId in LibreChat's storage.
*
* @param {import('express').Request} req
* @param {import('express').Response} res
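 * @example
 * // Illustrative request/response flow (IDs and values are examples, not actual data):
 * // GET /v1/responses/resp_abc123
 * //   200 → { "id": "resp_abc123", "object": "response", "status": "completed", "output": [/* ... *\/] }
 * //   404 (code "response_not_found") → when no conversation or messages match the ID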
*/
const getResponse = async (req, res) => {
try {
    const responseId = req.params.id;
    const userId = req.user?.id;
    if (!userId) {
      return sendResponsesErrorResponse(res, 401, 'Authentication required', 'auth_error');
    }
    if (!responseId) {
      return sendResponsesErrorResponse(res, 400, 'Response ID is required');
    }
// The responseId could be either the response ID or the conversation ID
// Try to find a conversation with this ID
const conversation = await getConvo(userId, responseId);
if (!conversation) {
return sendResponsesErrorResponse(
res,
404,
`Response not found: ${responseId}`,
'not_found',
'response_not_found',
);
}
// Load messages for this conversation
const messages = await db.getMessages({ conversationId: responseId, user: userId });
if (!messages || messages.length === 0) {
return sendResponsesErrorResponse(
res,
404,
`No messages found for response: ${responseId}`,
'not_found',
'response_not_found',
);
}
// Convert messages to Open Responses output format
const output = convertMessagesToOutputItems(messages);
// Find the last assistant message for usage info
const lastAssistantMessage = messages.filter((m) => !m.isCreatedByUser).pop();
// Build the response object
const response = {
id: responseId,
object: 'response',
created_at: Math.floor(new Date(conversation.createdAt || Date.now()).getTime() / 1000),
completed_at: Math.floor(new Date(conversation.updatedAt || Date.now()).getTime() / 1000),
status: 'completed',
incomplete_details: null,
model: conversation.agentId || conversation.model || 'unknown',
previous_response_id: null,
instructions: null,
output,
error: null,
tools: [],
tool_choice: 'auto',
truncation: 'disabled',
parallel_tool_calls: true,
text: { format: { type: 'text' } },
temperature: 1,
top_p: 1,
presence_penalty: 0,
frequency_penalty: 0,
top_logprobs: null,
reasoning: null,
user: userId,
      // Only the assistant message's own token count is stored, so input tokens are not recoverable here
      usage: lastAssistantMessage?.tokenCount
? {
input_tokens: 0,
output_tokens: lastAssistantMessage.tokenCount,
total_tokens: lastAssistantMessage.tokenCount,
}
: null,
max_output_tokens: null,
max_tool_calls: null,
store: true,
background: false,
service_tier: 'default',
metadata: {},
safety_identifier: null,
prompt_cache_key: null,
};
res.json(response);
} catch (error) {
logger.error('[Responses API] Error getting response:', error);
sendResponsesErrorResponse(
res,
500,
error instanceof Error ? error.message : 'Failed to get response',
'server_error',
);
}
};
module.exports = {
createResponse,
getResponse,
listModels,
setAppConfig,
};