* refactor: json schema tools with lazy loading
  - Added LocalToolExecutor class for lazy loading and caching of tools during execution.
  - Introduced ToolExecutionContext and ToolExecutor interfaces for better type management.
  - Created utility functions to generate tool proxies with JSON schema support.
  - Added ExtendedJsonSchema type for enhanced schema definitions.
  - Updated existing toolkits to utilize the new schema and executor functionalities.
  - Introduced a comprehensive tool definitions registry for managing various tool schemas.
* chore: update @librechat/agents to version 3.1.2
* refactor: enhance tool loading optimization and classification
  - Improved the loadAgentToolsOptimized function to utilize a proxy pattern for all tools, enabling deferred execution and reducing overhead.
  - Introduced caching for tool instances and refined tool classification logic to streamline tool management.
  - Updated the handling of MCP tools to improve logging and error reporting for missing tools in the cache.
  - Enhanced the structure of tool definitions to support better classification and integration with existing tools.
* refactor: modularize tool loading and enhance optimization
  - Moved the loadAgentToolsOptimized function to a new service file for better organization and maintainability.
  - Updated the ToolService to utilize the new service for optimized tool loading, improving code clarity.
  - Removed legacy tool loading methods and streamlined the tool loading process to enhance performance and reduce complexity.
  - Introduced feature flag handling for optimized tool loading, allowing for easier toggling of this functionality.
* refactor: replace loadAgentToolsWithFlag with loadAgentTools in tool loader
* refactor: enhance MCP tool loading with proxy creation and classification
* refactor: optimize MCP tool loading by grouping tools by server
  - Introduced a Map to group cached tools by server name, improving the organization of tool data.
  - Updated the createMCPProxyTool function to accept server name directly, enhancing clarity.
  - Refactored the logic for handling MCP tools, streamlining the process of creating proxy tools for classification.
* refactor: enhance MCP tool loading and proxy creation
  - Added functionality to retrieve MCP server tools and reinitialize servers if necessary, improving tool availability.
  - Updated the tool loading logic to utilize a Map for organizing tools by server, enhancing clarity and performance.
  - Refactored the createToolProxy function to ensure a default response format, streamlining tool creation.
* refactor: update createToolProxy to ensure consistent response format
  - Modified the createToolProxy function to await the executor's execution and validate the result format.
  - Ensured that the function returns a default response structure when the result is not an array of two elements, enhancing reliability in tool proxy creation.
* refactor: ToolExecutionContext with toolCall property
  - Added toolCall property to ToolExecutionContext interface for improved context handling during tool execution.
  - Updated LocalToolExecutor to include toolCall in the runnable configuration, allowing for more flexible tool invocation.
  - Modified createToolProxy to pass toolCall from the configuration, ensuring consistent context across tool executions.
* refactor: enhance event-driven tool execution and logging
  - Introduced ToolExecuteOptions for improved handling of event-driven tool execution, allowing for parallel execution of tool calls.
  - Updated getDefaultHandlers to include support for ON_TOOL_EXECUTE events, enhancing the flexibility of tool invocation.
  - Added detailed logging in LocalToolExecutor to track tool loading and execution metrics, improving observability and debugging capabilities.
  - Refactored initializeClient to integrate event-driven tool loading, ensuring compatibility with the new execution model.
* chore: update @librechat/agents to version 3.1.21
* refactor: remove legacy tool loading and executor components
  - Eliminated the loadAgentToolsWithFlag function, simplifying the tool loading process by directly using loadAgentTools.
  - Removed the LocalToolExecutor and related executor components to streamline the tool execution architecture.
  - Updated ToolService and related files to reflect the removal of deprecated features, enhancing code clarity and maintainability.
* refactor: enhance tool classification and definitions handling
  - Updated the loadAgentTools function to return toolDefinitions alongside toolRegistry, improving the structure of tool data returned to clients.
  - Removed the convertRegistryToDefinitions function from the initialize.js file, simplifying the initialization process.
  - Adjusted the buildToolClassification function to ensure toolDefinitions are built and returned simultaneously with the toolRegistry, enhancing efficiency in tool management.
  - Updated type definitions in initialize.ts to include toolDefinitions, ensuring consistency across the codebase.
* refactor: implement event-driven tool execution handler
  - Introduced createToolExecuteHandler function to streamline the handling of ON_TOOL_EXECUTE events, allowing for parallel execution of tool calls.
  - Updated getDefaultHandlers to utilize the new handler, simplifying the event-driven architecture.
  - Added handlers.ts file to encapsulate tool execution logic, improving code organization and maintainability.
  - Enhanced OpenAI handlers to integrate the new tool execution capabilities, ensuring consistent event handling across the application.
* refactor: integrate event-driven tool execution options
  - Added toolExecuteOptions to support event-driven tool execution in OpenAI and responses controllers, enhancing flexibility in tool handling.
  - Updated handlers to utilize createToolExecuteHandler, allowing for streamlined execution of tools during agent interactions.
  - Refactored service dependencies to include toolExecuteOptions, ensuring consistent integration across the application.
* refactor: enhance tool loading with definitionsOnly parameter
  - Updated createToolLoader and loadAgentTools functions to include a definitionsOnly parameter, allowing for the retrieval of only serializable tool definitions in event-driven mode.
  - Adjusted related interfaces and documentation to reflect the new parameter, improving clarity and flexibility in tool management.
  - Ensured compatibility across various components by integrating the definitionsOnly option in the initialization process.
* refactor: improve agent tool presence check in initialization
  - Added a check for tool presence using a new hasAgentTools variable, which evaluates both structuredTools and toolDefinitions.
  - Updated the conditional logic in the agent initialization process to utilize the hasAgentTools variable, enhancing clarity and maintainability in tool management.
* refactor: enhance agent tool extraction to support tool definitions
  - Updated the extractMCPServers function to handle both tool instances and serializable tool definitions, improving flexibility in agent tool management.
  - Added a new property toolDefinitions to the AgentWithTools type for better integration of event-driven mode.
  - Enhanced documentation to clarify the function's capabilities in extracting unique MCP server names from both tools and tool definitions.
* refactor: enhance tool classification and registry building
  - Added serverName property to ToolDefinition for improved tool identification.
  - Introduced buildToolRegistry function to streamline the creation of tool registries based on MCP tool definitions and agent options.
  - Updated buildToolClassification to utilize the new registry building logic, ensuring basic definitions are returned even when advanced classification features are not allowed.
  - Enhanced documentation and logging for clarity in tool classification processes.
* refactor: update @librechat/agents dependency to version 3.1.22
* fix: expose loadTools function in ToolService
  - Added loadTools function to the exported module in ToolService.js, enhancing the accessibility of tool loading functionality.
* chore: remove configurable options from tool execute options in OpenAI controller
* refactor: enhance tool loading mechanism to utilize agent-specific context
* chore: update @librechat/agents dependency to version 3.1.23
* fix: simplify result handling in createToolExecuteHandler
* refactor: loadToolDefinitions for efficient tool loading in event-driven mode
* refactor: replace legacy tool loading with loadToolsForExecution in OpenAI and responses controllers
  - Updated OpenAIChatCompletionController and createResponse functions to utilize loadToolsForExecution for improved tool loading.
  - Removed deprecated loadToolsLegacy references, streamlining the tool execution process.
  - Enhanced tool loading options to include agent-specific context and configurations.
* refactor: enhance tool loading and execution handling
  - Introduced loadActionToolsForExecution function to streamline loading of action tools, improving organization and maintainability.
  - Updated loadToolsForExecution to handle both regular and action tools, optimizing the tool loading process.
  - Added detailed logging for missing tools in createToolExecuteHandler, enhancing error visibility.
  - Refactored tool definitions to normalize action tool names, improving consistency in tool management.
* refactor: enhance built-in tool definitions loading
  - Updated loadToolDefinitions to include descriptions and parameters from the tool registry for built-in tools, improving the clarity and usability of tool definitions.
  - Integrated getToolDefinition to streamline the retrieval of tool metadata, enhancing the overall tool management process.
* feat: add action tool definitions loading to tool service
  - Introduced getActionToolDefinitions function to load action tool definitions based on agent ID and tool names, enhancing the tool loading process.
  - Updated loadToolDefinitions to integrate action tool definitions, allowing for better management and retrieval of action-specific tools.
  - Added comprehensive tests for action tool definitions to ensure correct loading and parameter handling, improving overall reliability and functionality.
* chore: update @librechat/agents dependency to version 3.1.26
* refactor: add toolEndCallback to handle tool execution results
* fix: tool definitions and execution handling
  - Introduced native tools (execute_code, file_search, web_search) to the tool service, allowing for better integration and management of these tools.
  - Updated isBuiltInTool function to include native tools in the built-in check, improving tool recognition.
  - Added comprehensive tests for loading parameters of native tools, ensuring correct functionality and parameter handling.
  - Enhanced tool definitions registry to include new agent tool definitions, streamlining tool retrieval and management.
* refactor: enhance tool loading and execution context
  - Added toolRegistry to the context for OpenAIChatCompletionController and createResponse functions, improving tool management.
  - Updated loadToolsForExecution to utilize toolRegistry for better integration of programmatic tools and tool search functionalities.
  - Enhanced the initialization process to include toolRegistry in agent context, streamlining tool access and configuration.
  - Refactored tool classification logic to support event-driven execution, ensuring compatibility with new tool definitions.
* chore: add request duration logging to OpenAI and Responses controllers
  - Introduced logging for request start and completion times in OpenAIChatCompletionController and createResponse functions.
  - Calculated and logged the duration of each request, enhancing observability and performance tracking.
  - Improved debugging capabilities by providing detailed logs for both streaming and non-streaming responses.
* chore: update @librechat/agents dependency to version 3.1.27
* refactor: implement buildToolSet function for tool management
  - Introduced buildToolSet function to streamline the creation of tool sets from agent configurations, enhancing tool management across various controllers.
  - Updated AgentClient, OpenAIChatCompletionController, and createResponse functions to utilize buildToolSet, improving consistency in tool handling.
  - Added comprehensive tests for buildToolSet to ensure correct functionality and edge case handling, enhancing overall reliability.
* refactor: update import paths for ToolExecuteOptions and createToolExecuteHandler
* fix: update GoogleSearch.js description for maximum search results
  - Changed the default maximum number of search results from 10 to 5 in the Google Search JSON schema description, ensuring accurate documentation of the expected behavior.
* chore: remove deprecated Browser tool and associated assets
  - Deleted the Browser tool definition from manifest.json, which included its name, plugin key, description, and authentication configuration.
  - Removed the web-browser.svg asset as it is no longer needed following the removal of the Browser tool.
* fix: ensure tool definitions are valid before processing
  - Added a check to verify the existence of tool definitions in the registry before accessing their properties, preventing potential runtime errors.
  - Updated the loading logic for built-in tool definitions to ensure that only valid definitions are pushed to the built-in tool definitions array.
* fix: extend ExtendedJsonSchema to support 'null' type and nullable enums
  - Updated the ExtendedJsonSchema type to include 'null' as a valid type option.
  - Modified the enum property to accept an array of values that can include strings, numbers, booleans, and null, enhancing schema flexibility.
* test: add comprehensive tests for tool definitions loading and registry behavior
  - Implemented tests to verify the handling of built-in tools without registry definitions, ensuring they are skipped correctly.
  - Added tests to confirm that built-in tools include descriptions and parameters in the registry.
  - Enhanced tests for action tools, checking for proper inclusion of metadata and handling of tools without parameters in the registry.
* test: add tests for mixed-type and number enum schema handling
  - Introduced tests to validate the parsing of mixed-type enum values, including strings, numbers, booleans, and null.
  - Added tests for number enum schema values to ensure correct parsing of numeric inputs, enhancing schema validation coverage.
* fix: update mock implementation for @librechat/agents
  - Changed the mock for @librechat/agents to spread the actual module's properties, ensuring that all necessary functionalities are preserved in tests.
  - This adjustment enhances the accuracy of the tests by reflecting the real structure of the module.
* fix: change max_results type in GoogleSearch schema from number to integer
  - Updated the type of max_results in the Google Search JSON schema to 'integer' for better type accuracy and validation consistency.
* fix: update max_results description and type in GoogleSearch schema
  - Changed the type of max_results from 'number' to 'integer' for improved type accuracy.
  - Updated the description to reflect the new default maximum number of search results, changing it from 10 to 5.
* refactor: remove unused code and improve tool registry handling
  - Eliminated outdated comments and conditional logic related to event-driven mode in the ToolService.
  - Enhanced the handling of the tool registry by ensuring it is configurable for better integration during tool execution.
* feat: add definitionsOnly option to buildToolClassification for event-driven mode
  - Introduced a new parameter, definitionsOnly, to the BuildToolClassificationParams interface to enable a mode that skips tool instance creation.
  - Updated the buildToolClassification function to conditionally add tool definitions without instantiating tools when definitionsOnly is true.
  - Modified the loadToolDefinitions function to pass definitionsOnly as true, ensuring compatibility with the new feature.
* test: add unit tests for buildToolClassification with definitionsOnly option
  - Implemented tests to verify the behavior of buildToolClassification when definitionsOnly is set to true or false.
  - Ensured that tool instances are not created when definitionsOnly is true, while still adding necessary tool definitions.
  - Confirmed that loadAuthValues is called appropriately based on the definitionsOnly parameter, enhancing test coverage for this new feature.
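The handlers module below is where this event-driven path is wired in: when a caller supplies toolExecuteOptions, getDefaultHandlers registers createToolExecuteHandler for the ON_TOOL_EXECUTE graph event. A minimal sketch of the options shape, assuming a loadToolsForExecution helper and placeholder configurable values (both are illustrative, not the exact controller code):

    const toolExecuteOptions = {
      // Resolves tool instances by name at execution time; per the typedef below it
      // should return a Promise of { loadedTools }.
      loadTools: (toolNames) => loadToolsForExecution(toolNames),
      // Runnable context forwarded to tool invocations (placeholder values).
      configurable: { user_id: 'user-id', thread_id: 'conversation-id' },
    };
    // getDefaultHandlers({ ..., toolExecuteOptions }) then adds the ON_TOOL_EXECUTE handler.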
731 lines
24 KiB
JavaScript
const { nanoid } = require('nanoid');
const { Constants } = require('@librechat/agents');
const { logger } = require('@librechat/data-schemas');
const {
  sendEvent,
  GenerationJobManager,
  writeAttachmentEvent,
  createToolExecuteHandler,
} = require('@librechat/api');
const { Tools, StepTypes, FileContext, ErrorTypes } = require('librechat-data-provider');
const {
  EnvVar,
  Providers,
  GraphEvents,
  getMessageId,
  ToolEndHandler,
  handleToolCalls,
  ChatModelStreamHandler,
} = require('@librechat/agents');
const { processFileCitations } = require('~/server/services/Files/Citations');
const { processCodeOutput } = require('~/server/services/Files/Code/process');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { saveBase64Image } = require('~/server/services/Files/process');

class ModelEndHandler {
  /**
   * @param {Array<UsageMetadata>} collectedUsage
   */
  constructor(collectedUsage) {
    if (!Array.isArray(collectedUsage)) {
      throw new Error('collectedUsage must be an array');
    }
    this.collectedUsage = collectedUsage;
  }

  finalize(errorMessage) {
    if (!errorMessage) {
      return;
    }
    throw new Error(errorMessage);
  }

  /**
   * @param {string} event
   * @param {ModelEndData | undefined} data
   * @param {Record<string, unknown> | undefined} metadata
   * @param {StandardGraph} graph
   * @returns {Promise<void>}
   */
  async handle(event, data, metadata, graph) {
    if (!graph || !metadata) {
      console.warn(`Graph or metadata not found in ${event} event`);
      return;
    }

    /** @type {string | undefined} */
    let errorMessage;
    try {
      const agentContext = graph.getAgentContext(metadata);
      const isGoogle = agentContext.provider === Providers.GOOGLE;
      const streamingDisabled = !!agentContext.clientOptions?.disableStreaming;
      if (data?.output?.additional_kwargs?.stop_reason === 'refusal') {
        const info = { ...data.output.additional_kwargs };
        errorMessage = JSON.stringify({
          type: ErrorTypes.REFUSAL,
          info,
        });
        logger.debug(`[ModelEndHandler] Model refused to respond`, {
          ...info,
          userId: metadata.user_id,
          messageId: metadata.run_id,
          conversationId: metadata.thread_id,
        });
      }

      const toolCalls = data?.output?.tool_calls;
      let hasUnprocessedToolCalls = false;
      if (Array.isArray(toolCalls) && toolCalls.length > 0 && graph?.toolCallStepIds?.has) {
        try {
          hasUnprocessedToolCalls = toolCalls.some(
            (tc) => tc?.id && !graph.toolCallStepIds.has(tc.id),
          );
        } catch {
          hasUnprocessedToolCalls = false;
        }
      }
      if (isGoogle || streamingDisabled || hasUnprocessedToolCalls) {
        await handleToolCalls(toolCalls, metadata, graph);
      }

      const usage = data?.output?.usage_metadata;
      if (!usage) {
        return this.finalize(errorMessage);
      }
      const modelName = metadata?.ls_model_name || agentContext.clientOptions?.model;
      if (modelName) {
        usage.model = modelName;
      }

      this.collectedUsage.push(usage);
      if (!streamingDisabled) {
        return this.finalize(errorMessage);
      }
      if (!data.output.content) {
        return this.finalize(errorMessage);
      }
      const stepKey = graph.getStepKey(metadata);
      const message_id = getMessageId(stepKey, graph) ?? '';
      if (message_id) {
        await graph.dispatchRunStep(stepKey, {
          type: StepTypes.MESSAGE_CREATION,
          message_creation: {
            message_id,
          },
        });
      }
      const stepId = graph.getStepIdByKey(stepKey);
      const content = data.output.content;
      if (typeof content === 'string') {
        await graph.dispatchMessageDelta(stepId, {
          content: [
            {
              type: 'text',
              text: content,
            },
          ],
        });
      } else if (content.every((c) => c.type?.startsWith('text'))) {
        await graph.dispatchMessageDelta(stepId, {
          content,
        });
      }
    } catch (error) {
      logger.error('Error handling model end event:', error);
      return this.finalize(errorMessage);
    }
  }
}

/**
 * @deprecated Agent Chain helper
 * @param {string | undefined} [last_agent_id]
 * @param {string | undefined} [langgraph_node]
 * @returns {boolean}
 */
function checkIfLastAgent(last_agent_id, langgraph_node) {
  if (!last_agent_id || !langgraph_node) {
    return false;
  }
  return langgraph_node?.endsWith(last_agent_id);
}

/**
 * Helper to emit events either to res (standard mode) or to job emitter (resumable mode).
 * @param {ServerResponse} res - The server response object
 * @param {string | null} streamId - The stream ID for resumable mode, or null for standard mode
 * @param {Object} eventData - The event data to send
 */
function emitEvent(res, streamId, eventData) {
  if (streamId) {
    GenerationJobManager.emitChunk(streamId, eventData);
  } else {
    sendEvent(res, eventData);
  }
}

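/*
 * Illustrative sketch (not part of this module): how emitEvent routes a payload depending
 * on whether a resumable stream is active. The event payload here is a placeholder.
 *
 *   // Standard mode: streamId is null, so the payload is written directly via sendEvent.
 *   emitEvent(res, null, { event: 'on_message_delta', data: { delta: 'Hello' } });
 *
 *   // Resumable mode: a stream ID routes the same payload through
 *   // GenerationJobManager.emitChunk for the resumable job.
 *   emitEvent(res, 'stream_abc123', { event: 'on_message_delta', data: { delta: 'Hello' } });
 */
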
/**
 * @typedef {Object} ToolExecuteOptions
 * @property {(toolNames: string[]) => Promise<{loadedTools: StructuredTool[]}>} loadTools - Function to load tools by name
 * @property {Object} configurable - Configurable context for tool invocation
 */

/**
 * Get default handlers for stream events.
 * @param {Object} options - The options object.
 * @param {ServerResponse} options.res - The server response object.
 * @param {ContentAggregator} options.aggregateContent - Content aggregator function.
 * @param {ToolEndCallback} options.toolEndCallback - Callback to use when tool ends.
 * @param {Array<UsageMetadata>} options.collectedUsage - The list of collected usage metadata.
 * @param {string | null} [options.streamId] - The stream ID for resumable mode, or null for standard mode.
 * @param {ToolExecuteOptions} [options.toolExecuteOptions] - Options for event-driven tool execution.
 * @returns {Record<string, t.EventHandler>} The default handlers.
 * @throws {Error} If the request is not found.
 */
function getDefaultHandlers({
  res,
  aggregateContent,
  toolEndCallback,
  collectedUsage,
  streamId = null,
  toolExecuteOptions = null,
}) {
  if (!res || !aggregateContent) {
    throw new Error(
      `[getDefaultHandlers] Missing required options: res: ${!res}, aggregateContent: ${!aggregateContent}`,
    );
  }
  const handlers = {
    [GraphEvents.CHAT_MODEL_END]: new ModelEndHandler(collectedUsage),
    [GraphEvents.TOOL_END]: new ToolEndHandler(toolEndCallback, logger),
    [GraphEvents.CHAT_MODEL_STREAM]: new ChatModelStreamHandler(),
    [GraphEvents.ON_RUN_STEP]: {
      /**
       * Handle ON_RUN_STEP event.
       * @param {string} event - The event name.
       * @param {StreamEventData} data - The event data.
       * @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
       */
      handle: (event, data, metadata) => {
        if (data?.stepDetails.type === StepTypes.TOOL_CALLS) {
          emitEvent(res, streamId, { event, data });
        } else if (checkIfLastAgent(metadata?.last_agent_id, metadata?.langgraph_node)) {
          emitEvent(res, streamId, { event, data });
        } else if (!metadata?.hide_sequential_outputs) {
          emitEvent(res, streamId, { event, data });
        } else {
          const agentName = metadata?.name ?? 'Agent';
          const isToolCall = data?.stepDetails.type === StepTypes.TOOL_CALLS;
          const action = isToolCall ? 'performing a task...' : 'thinking...';
          emitEvent(res, streamId, {
            event: 'on_agent_update',
            data: {
              runId: metadata?.run_id,
              message: `${agentName} is ${action}`,
            },
          });
        }
        aggregateContent({ event, data });
      },
    },
    [GraphEvents.ON_RUN_STEP_DELTA]: {
      /**
       * Handle ON_RUN_STEP_DELTA event.
       * @param {string} event - The event name.
       * @param {StreamEventData} data - The event data.
       * @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
       */
      handle: (event, data, metadata) => {
        if (data?.delta.type === StepTypes.TOOL_CALLS) {
          emitEvent(res, streamId, { event, data });
        } else if (checkIfLastAgent(metadata?.last_agent_id, metadata?.langgraph_node)) {
          emitEvent(res, streamId, { event, data });
        } else if (!metadata?.hide_sequential_outputs) {
          emitEvent(res, streamId, { event, data });
        }
        aggregateContent({ event, data });
      },
    },
    [GraphEvents.ON_RUN_STEP_COMPLETED]: {
      /**
       * Handle ON_RUN_STEP_COMPLETED event.
       * @param {string} event - The event name.
       * @param {StreamEventData & { result: ToolEndData }} data - The event data.
       * @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
       */
      handle: (event, data, metadata) => {
        if (data?.result != null) {
          emitEvent(res, streamId, { event, data });
        } else if (checkIfLastAgent(metadata?.last_agent_id, metadata?.langgraph_node)) {
          emitEvent(res, streamId, { event, data });
        } else if (!metadata?.hide_sequential_outputs) {
          emitEvent(res, streamId, { event, data });
        }
        aggregateContent({ event, data });
      },
    },
    [GraphEvents.ON_MESSAGE_DELTA]: {
      /**
       * Handle ON_MESSAGE_DELTA event.
       * @param {string} event - The event name.
       * @param {StreamEventData} data - The event data.
       * @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
       */
      handle: (event, data, metadata) => {
        if (checkIfLastAgent(metadata?.last_agent_id, metadata?.langgraph_node)) {
          emitEvent(res, streamId, { event, data });
        } else if (!metadata?.hide_sequential_outputs) {
          emitEvent(res, streamId, { event, data });
        }
        aggregateContent({ event, data });
      },
    },
    [GraphEvents.ON_REASONING_DELTA]: {
      /**
       * Handle ON_REASONING_DELTA event.
       * @param {string} event - The event name.
       * @param {StreamEventData} data - The event data.
       * @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
       */
      handle: (event, data, metadata) => {
        if (checkIfLastAgent(metadata?.last_agent_id, metadata?.langgraph_node)) {
          emitEvent(res, streamId, { event, data });
        } else if (!metadata?.hide_sequential_outputs) {
          emitEvent(res, streamId, { event, data });
        }
        aggregateContent({ event, data });
      },
    },
  };

  if (toolExecuteOptions) {
    handlers[GraphEvents.ON_TOOL_EXECUTE] = createToolExecuteHandler(toolExecuteOptions);
  }

  return handlers;
}

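/*
 * Illustrative sketch (not part of this module): assembling the default handlers for a run.
 * `contentAggregator`, `collectedUsage`, and `toolEndCallback` are hypothetical values the
 * caller already has in scope; `streamId` would be a job/stream ID in resumable mode.
 *
 *   const handlers = getDefaultHandlers({
 *     res,
 *     aggregateContent: contentAggregator,
 *     toolEndCallback,
 *     collectedUsage,
 *     streamId: null, // standard SSE mode; pass a stream ID to route through GenerationJobManager
 *     // toolExecuteOptions: { loadTools, configurable }, // optional event-driven tool execution
 *   });
 */
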
/**
 * Helper to write attachment events either to res or to job emitter.
 * @param {ServerResponse} res - The server response object
 * @param {string | null} streamId - The stream ID for resumable mode, or null for standard mode
 * @param {Object} attachment - The attachment data
 */
function writeAttachment(res, streamId, attachment) {
  if (streamId) {
    GenerationJobManager.emitChunk(streamId, { event: 'attachment', data: attachment });
  } else {
    res.write(`event: attachment\ndata: ${JSON.stringify(attachment)}\n\n`);
  }
}

/**
 *
 * @param {Object} params
 * @param {ServerRequest} params.req
 * @param {ServerResponse} params.res
 * @param {Promise<MongoFile | { filename: string; filepath: string; expires: number;} | null>[]} params.artifactPromises
 * @param {string | null} [params.streamId] - The stream ID for resumable mode, or null for standard mode.
 * @returns {ToolEndCallback} The tool end callback.
 */
function createToolEndCallback({ req, res, artifactPromises, streamId = null }) {
  /**
   * @type {ToolEndCallback}
   */
  return async (data, metadata) => {
    const output = data?.output;
    if (!output) {
      return;
    }

    if (!output.artifact) {
      return;
    }

    if (output.artifact[Tools.file_search]) {
      artifactPromises.push(
        (async () => {
          const user = req.user;
          const attachment = await processFileCitations({
            user,
            metadata,
            appConfig: req.config,
            toolArtifact: output.artifact,
            toolCallId: output.tool_call_id,
          });
          if (!attachment) {
            return null;
          }
          if (!streamId && !res.headersSent) {
            return attachment;
          }
          writeAttachment(res, streamId, attachment);
          return attachment;
        })().catch((error) => {
          logger.error('Error processing file citations:', error);
          return null;
        }),
      );
    }

    if (output.artifact[Tools.ui_resources]) {
      artifactPromises.push(
        (async () => {
          const attachment = {
            type: Tools.ui_resources,
            messageId: metadata.run_id,
            toolCallId: output.tool_call_id,
            conversationId: metadata.thread_id,
            [Tools.ui_resources]: output.artifact[Tools.ui_resources].data,
          };
          if (!streamId && !res.headersSent) {
            return attachment;
          }
          writeAttachment(res, streamId, attachment);
          return attachment;
        })().catch((error) => {
          logger.error('Error processing artifact content:', error);
          return null;
        }),
      );
    }

    if (output.artifact[Tools.web_search]) {
      artifactPromises.push(
        (async () => {
          const attachment = {
            type: Tools.web_search,
            messageId: metadata.run_id,
            toolCallId: output.tool_call_id,
            conversationId: metadata.thread_id,
            [Tools.web_search]: { ...output.artifact[Tools.web_search] },
          };
          if (!streamId && !res.headersSent) {
            return attachment;
          }
          writeAttachment(res, streamId, attachment);
          return attachment;
        })().catch((error) => {
          logger.error('Error processing artifact content:', error);
          return null;
        }),
      );
    }

    if (output.artifact.content) {
      /** @type {FormattedContent[]} */
      const content = output.artifact.content;
      for (let i = 0; i < content.length; i++) {
        const part = content[i];
        if (!part) {
          continue;
        }
        if (part.type !== 'image_url') {
          continue;
        }
        const { url } = part.image_url;
        artifactPromises.push(
          (async () => {
            const filename = `${output.name}_img_${nanoid()}`;
            const file_id = output.artifact.file_ids?.[i];
            const file = await saveBase64Image(url, {
              req,
              file_id,
              filename,
              endpoint: metadata.provider,
              context: FileContext.image_generation,
            });
            const fileMetadata = Object.assign(file, {
              messageId: metadata.run_id,
              toolCallId: output.tool_call_id,
              conversationId: metadata.thread_id,
            });
            if (!streamId && !res.headersSent) {
              return fileMetadata;
            }

            if (!fileMetadata) {
              return null;
            }

            writeAttachment(res, streamId, fileMetadata);
            return fileMetadata;
          })().catch((error) => {
            logger.error('Error processing artifact content:', error);
            return null;
          }),
        );
      }
      return;
    }

    const isCodeTool =
      output.name === Tools.execute_code || output.name === Constants.PROGRAMMATIC_TOOL_CALLING;
    if (!isCodeTool) {
      return;
    }

    if (!output.artifact.files) {
      return;
    }

    for (const file of output.artifact.files) {
      const { id, name } = file;
      artifactPromises.push(
        (async () => {
          const result = await loadAuthValues({
            userId: req.user.id,
            authFields: [EnvVar.CODE_API_KEY],
          });
          const fileMetadata = await processCodeOutput({
            req,
            id,
            name,
            apiKey: result[EnvVar.CODE_API_KEY],
            messageId: metadata.run_id,
            toolCallId: output.tool_call_id,
            conversationId: metadata.thread_id,
            session_id: output.artifact.session_id,
          });
          if (!streamId && !res.headersSent) {
            return fileMetadata;
          }

          if (!fileMetadata) {
            return null;
          }

          writeAttachment(res, streamId, fileMetadata);
          return fileMetadata;
        })().catch((error) => {
          logger.error('Error processing code output:', error);
          return null;
        }),
      );
    }
  };
}

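/*
 * Illustrative sketch (hypothetical surrounding code): collecting artifacts produced by
 * tool calls during a run. The callback pushes promises into `artifactPromises`, which
 * the caller can await after the run completes; failed entries resolve to null.
 *
 *   const artifactPromises = [];
 *   const toolEndCallback = createToolEndCallback({ req, res, artifactPromises, streamId: null });
 *   // ...pass toolEndCallback to getDefaultHandlers and run the agent...
 *   const attachments = (await Promise.all(artifactPromises)).filter(Boolean);
 */
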
/**
 * Helper to write attachment events in Open Responses format (librechat:attachment)
 * @param {ServerResponse} res - The server response object
 * @param {Object} tracker - The response tracker with sequence number
 * @param {Object} attachment - The attachment data
 * @param {Object} metadata - Additional metadata (messageId, conversationId)
 */
function writeResponsesAttachment(res, tracker, attachment, metadata) {
  const sequenceNumber = tracker.nextSequence();
  writeAttachmentEvent(res, sequenceNumber, attachment, {
    messageId: metadata.run_id,
    conversationId: metadata.thread_id,
  });
}

/**
 * Creates a tool end callback specifically for the Responses API.
 * Emits attachments as `librechat:attachment` events per the Open Responses extension spec.
 *
 * @param {Object} params
 * @param {ServerRequest} params.req
 * @param {ServerResponse} params.res
 * @param {Object} params.tracker - Response tracker with sequence number
 * @param {Promise<MongoFile | { filename: string; filepath: string; expires: number;} | null>[]} params.artifactPromises
 * @returns {ToolEndCallback} The tool end callback.
 */
function createResponsesToolEndCallback({ req, res, tracker, artifactPromises }) {
  /**
   * @type {ToolEndCallback}
   */
  return async (data, metadata) => {
    const output = data?.output;
    if (!output) {
      return;
    }

    if (!output.artifact) {
      return;
    }

    if (output.artifact[Tools.file_search]) {
      artifactPromises.push(
        (async () => {
          const user = req.user;
          const attachment = await processFileCitations({
            user,
            metadata,
            appConfig: req.config,
            toolArtifact: output.artifact,
            toolCallId: output.tool_call_id,
          });
          if (!attachment) {
            return null;
          }
          // For Responses API, emit attachment during streaming
          if (res.headersSent && !res.writableEnded) {
            writeResponsesAttachment(res, tracker, attachment, metadata);
          }
          return attachment;
        })().catch((error) => {
          logger.error('Error processing file citations:', error);
          return null;
        }),
      );
    }

    if (output.artifact[Tools.ui_resources]) {
      artifactPromises.push(
        (async () => {
          const attachment = {
            type: Tools.ui_resources,
            toolCallId: output.tool_call_id,
            [Tools.ui_resources]: output.artifact[Tools.ui_resources].data,
          };
          // For Responses API, always emit attachment during streaming
          if (res.headersSent && !res.writableEnded) {
            writeResponsesAttachment(res, tracker, attachment, metadata);
          }
          return attachment;
        })().catch((error) => {
          logger.error('Error processing artifact content:', error);
          return null;
        }),
      );
    }

    if (output.artifact[Tools.web_search]) {
      artifactPromises.push(
        (async () => {
          const attachment = {
            type: Tools.web_search,
            toolCallId: output.tool_call_id,
            [Tools.web_search]: { ...output.artifact[Tools.web_search] },
          };
          // For Responses API, always emit attachment during streaming
          if (res.headersSent && !res.writableEnded) {
            writeResponsesAttachment(res, tracker, attachment, metadata);
          }
          return attachment;
        })().catch((error) => {
          logger.error('Error processing artifact content:', error);
          return null;
        }),
      );
    }

    if (output.artifact.content) {
      /** @type {FormattedContent[]} */
      const content = output.artifact.content;
      for (let i = 0; i < content.length; i++) {
        const part = content[i];
        if (!part) {
          continue;
        }
        if (part.type !== 'image_url') {
          continue;
        }
        const { url } = part.image_url;
        artifactPromises.push(
          (async () => {
            const filename = `${output.name}_img_${nanoid()}`;
            const file_id = output.artifact.file_ids?.[i];
            const file = await saveBase64Image(url, {
              req,
              file_id,
              filename,
              endpoint: metadata.provider,
              context: FileContext.image_generation,
            });
            const fileMetadata = Object.assign(file, {
              toolCallId: output.tool_call_id,
            });

            if (!fileMetadata) {
              return null;
            }

            // For Responses API, emit attachment during streaming
            if (res.headersSent && !res.writableEnded) {
              const attachment = {
                file_id: fileMetadata.file_id,
                filename: fileMetadata.filename,
                type: fileMetadata.type,
                url: fileMetadata.filepath,
                width: fileMetadata.width,
                height: fileMetadata.height,
                tool_call_id: output.tool_call_id,
              };
              writeResponsesAttachment(res, tracker, attachment, metadata);
            }

            return fileMetadata;
          })().catch((error) => {
            logger.error('Error processing artifact content:', error);
            return null;
          }),
        );
      }
      return;
    }

    const isCodeTool =
      output.name === Tools.execute_code || output.name === Constants.PROGRAMMATIC_TOOL_CALLING;
    if (!isCodeTool) {
      return;
    }

    if (!output.artifact.files) {
      return;
    }

    for (const file of output.artifact.files) {
      const { id, name } = file;
      artifactPromises.push(
        (async () => {
          const result = await loadAuthValues({
            userId: req.user.id,
            authFields: [EnvVar.CODE_API_KEY],
          });
          const fileMetadata = await processCodeOutput({
            req,
            id,
            name,
            apiKey: result[EnvVar.CODE_API_KEY],
            messageId: metadata.run_id,
            toolCallId: output.tool_call_id,
            conversationId: metadata.thread_id,
            session_id: output.artifact.session_id,
          });

          if (!fileMetadata) {
            return null;
          }

          // For Responses API, emit attachment during streaming
          if (res.headersSent && !res.writableEnded) {
            const attachment = {
              file_id: fileMetadata.file_id,
              filename: fileMetadata.filename,
              type: fileMetadata.type,
              url: fileMetadata.filepath,
              width: fileMetadata.width,
              height: fileMetadata.height,
              tool_call_id: output.tool_call_id,
            };
            writeResponsesAttachment(res, tracker, attachment, metadata);
          }

          return fileMetadata;
        })().catch((error) => {
          logger.error('Error processing code output:', error);
          return null;
        }),
      );
    }
  };
}

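/*
 * Illustrative sketch (hypothetical `tracker`): the Responses API variant only emits
 * `librechat:attachment` events while the response stream is open, and it needs a
 * sequence-number tracker, presumably shared with the rest of the streaming pipeline.
 *
 *   let sequence = 0;
 *   const tracker = { nextSequence: () => sequence++ };
 *   const artifactPromises = [];
 *   const toolEndCallback = createResponsesToolEndCallback({ req, res, tracker, artifactPromises });
 */
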
module.exports = {
  getDefaultHandlers,
  createToolEndCallback,
  createResponsesToolEndCallback,
};