Mirror of https://github.com/danny-avila/LibreChat.git (synced 2026-02-02 07:41:49 +01:00)
🦥 refactor: Event-Driven Lazy Tool Loading (#11588)
* refactor: json schema tools with lazy loading
- Added LocalToolExecutor class for lazy loading and caching of tools during execution.
- Introduced ToolExecutionContext and ToolExecutor interfaces for better type management.
- Created utility functions to generate tool proxies with JSON schema support.
- Added ExtendedJsonSchema type for enhanced schema definitions.
- Updated existing toolkits to utilize the new schema and executor functionalities.
- Introduced a comprehensive tool definitions registry for managing various tool schemas.

chore: update @librechat/agents to version 3.1.2

refactor: enhance tool loading optimization and classification
- Improved the loadAgentToolsOptimized function to utilize a proxy pattern for all tools, enabling deferred execution and reducing overhead.
- Introduced caching for tool instances and refined tool classification logic to streamline tool management.
- Updated the handling of MCP tools to improve logging and error reporting for missing tools in the cache.
- Enhanced the structure of tool definitions to support better classification and integration with existing tools.

refactor: modularize tool loading and enhance optimization
- Moved the loadAgentToolsOptimized function to a new service file for better organization and maintainability.
- Updated the ToolService to utilize the new service for optimized tool loading, improving code clarity.
- Removed legacy tool loading methods and streamlined the tool loading process to enhance performance and reduce complexity.
- Introduced feature flag handling for optimized tool loading, allowing for easier toggling of this functionality.

refactor: replace loadAgentToolsWithFlag with loadAgentTools in tool loader

refactor: enhance MCP tool loading with proxy creation and classification

refactor: optimize MCP tool loading by grouping tools by server
- Introduced a Map to group cached tools by server name, improving the organization of tool data.
- Updated the createMCPProxyTool function to accept server name directly, enhancing clarity.
- Refactored the logic for handling MCP tools, streamlining the process of creating proxy tools for classification.

refactor: enhance MCP tool loading and proxy creation
- Added functionality to retrieve MCP server tools and reinitialize servers if necessary, improving tool availability.
- Updated the tool loading logic to utilize a Map for organizing tools by server, enhancing clarity and performance.
- Refactored the createToolProxy function to ensure a default response format, streamlining tool creation.

refactor: update createToolProxy to ensure consistent response format
- Modified the createToolProxy function to await the executor's execution and validate the result format.
- Ensured that the function returns a default response structure when the result is not an array of two elements, enhancing reliability in tool proxy creation.

refactor: ToolExecutionContext with toolCall property
- Added toolCall property to ToolExecutionContext interface for improved context handling during tool execution.
- Updated LocalToolExecutor to include toolCall in the runnable configuration, allowing for more flexible tool invocation.
- Modified createToolProxy to pass toolCall from the configuration, ensuring consistent context across tool executions.

refactor: enhance event-driven tool execution and logging
- Introduced ToolExecuteOptions for improved handling of event-driven tool execution, allowing for parallel execution of tool calls.
- Updated getDefaultHandlers to include support for ON_TOOL_EXECUTE events, enhancing the flexibility of tool invocation.
- Added detailed logging in LocalToolExecutor to track tool loading and execution metrics, improving observability and debugging capabilities.
- Refactored initializeClient to integrate event-driven tool loading, ensuring compatibility with the new execution model.
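The proxy pattern the commits above describe can be sketched in a few lines; `createToolProxy` and the loader below follow the commit's terminology, but the bodies are illustrative and are not LibreChat's actual implementation:

```javascript
// Illustrative sketch: a proxy defers constructing the real tool until the
// first invocation, then caches the instance for subsequent calls.
function createToolProxy(name, loadTool) {
  let cached = null; // real tool instance, created lazily
  return {
    name,
    async invoke(input, config) {
      if (!cached) {
        cached = await loadTool(name); // expensive work happens only on demand
      }
      return cached.invoke(input, config);
    },
  };
}

// Hypothetical loader that would normally build a heavyweight tool instance.
let loads = 0;
const proxy = createToolProxy('echo', async () => {
  loads += 1;
  return { invoke: async (input) => `echo: ${input}` };
});
```

Because the loader runs at most once per proxy, tools that are never called never pay their construction cost, which is the point of the lazy-loading refactor.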
chore: update @librechat/agents to version 3.1.21

refactor: remove legacy tool loading and executor components
- Eliminated the loadAgentToolsWithFlag function, simplifying the tool loading process by directly using loadAgentTools.
- Removed the LocalToolExecutor and related executor components to streamline the tool execution architecture.
- Updated ToolService and related files to reflect the removal of deprecated features, enhancing code clarity and maintainability.

refactor: enhance tool classification and definitions handling
- Updated the loadAgentTools function to return toolDefinitions alongside toolRegistry, improving the structure of tool data returned to clients.
- Removed the convertRegistryToDefinitions function from the initialize.js file, simplifying the initialization process.
- Adjusted the buildToolClassification function to ensure toolDefinitions are built and returned simultaneously with the toolRegistry, enhancing efficiency in tool management.
- Updated type definitions in initialize.ts to include toolDefinitions, ensuring consistency across the codebase.

refactor: implement event-driven tool execution handler
- Introduced createToolExecuteHandler function to streamline the handling of ON_TOOL_EXECUTE events, allowing for parallel execution of tool calls.
- Updated getDefaultHandlers to utilize the new handler, simplifying the event-driven architecture.
- Added handlers.ts file to encapsulate tool execution logic, improving code organization and maintainability.
- Enhanced OpenAI handlers to integrate the new tool execution capabilities, ensuring consistent event handling across the application.

refactor: integrate event-driven tool execution options
- Added toolExecuteOptions to support event-driven tool execution in OpenAI and responses controllers, enhancing flexibility in tool handling.
- Updated handlers to utilize createToolExecuteHandler, allowing for streamlined execution of tools during agent interactions.
- Refactored service dependencies to include toolExecuteOptions, ensuring consistent integration across the application.

refactor: enhance tool loading with definitionsOnly parameter
- Updated createToolLoader and loadAgentTools functions to include a definitionsOnly parameter, allowing for the retrieval of only serializable tool definitions in event-driven mode.
- Adjusted related interfaces and documentation to reflect the new parameter, improving clarity and flexibility in tool management.
- Ensured compatibility across various components by integrating the definitionsOnly option in the initialization process.

refactor: improve agent tool presence check in initialization
- Added a check for tool presence using a new hasAgentTools variable, which evaluates both structuredTools and toolDefinitions.
- Updated the conditional logic in the agent initialization process to utilize the hasAgentTools variable, enhancing clarity and maintainability in tool management.

refactor: enhance agent tool extraction to support tool definitions
- Updated the extractMCPServers function to handle both tool instances and serializable tool definitions, improving flexibility in agent tool management.
- Added a new property toolDefinitions to the AgentWithTools type for better integration of event-driven mode.
- Enhanced documentation to clarify the function's capabilities in extracting unique MCP server names from both tools and tool definitions.

refactor: enhance tool classification and registry building
- Added serverName property to ToolDefinition for improved tool identification.
- Introduced buildToolRegistry function to streamline the creation of tool registries based on MCP tool definitions and agent options.
- Updated buildToolClassification to utilize the new registry building logic, ensuring basic definitions are returned even when advanced classification features are not allowed.
- Enhanced documentation and logging for clarity in tool classification processes.
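The ON_TOOL_EXECUTE flow described above — resolve the tools a batch of calls needs, then run the calls in parallel — can be sketched roughly as follows; the handler and loader names mirror the commit message, but the shapes are hypothetical:

```javascript
// Illustrative sketch of an event-driven tool-execute handler: tool calls
// arriving in one event are resolved from a lazily built tool map and
// executed concurrently with Promise.all.
function createToolExecuteHandler(loadToolsForExecution) {
  return async function onToolExecute(toolCalls) {
    const tools = await loadToolsForExecution(toolCalls.map((call) => call.name));
    return Promise.all(
      toolCalls.map(async (call) => {
        const tool = tools[call.name];
        if (!tool) {
          // Surface missing tools per call rather than failing the whole batch.
          return { id: call.id, error: `Tool not found: ${call.name}` };
        }
        return { id: call.id, output: await tool.invoke(call.args) };
      }),
    );
  };
}

// Hypothetical loader returning simple in-memory tools.
const handler = createToolExecuteHandler(async () => ({
  add: { invoke: async ({ a, b }) => a + b },
}));
```

Reporting a per-call error object for unknown tools matches the commit's emphasis on logging missing tools without aborting parallel execution.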
refactor: update @librechat/agents dependency to version 3.1.22

fix: expose loadTools function in ToolService
- Added loadTools function to the exported module in ToolService.js, enhancing the accessibility of tool loading functionality.

chore: remove configurable options from tool execute options in OpenAI controller

refactor: enhance tool loading mechanism to utilize agent-specific context

chore: update @librechat/agents dependency to version 3.1.23

fix: simplify result handling in createToolExecuteHandler

* refactor: loadToolDefinitions for efficient tool loading in event-driven mode

* refactor: replace legacy tool loading with loadToolsForExecution in OpenAI and responses controllers
- Updated OpenAIChatCompletionController and createResponse functions to utilize loadToolsForExecution for improved tool loading.
- Removed deprecated loadToolsLegacy references, streamlining the tool execution process.
- Enhanced tool loading options to include agent-specific context and configurations.

* refactor: enhance tool loading and execution handling
- Introduced loadActionToolsForExecution function to streamline loading of action tools, improving organization and maintainability.
- Updated loadToolsForExecution to handle both regular and action tools, optimizing the tool loading process.
- Added detailed logging for missing tools in createToolExecuteHandler, enhancing error visibility.
- Refactored tool definitions to normalize action tool names, improving consistency in tool management.

* refactor: enhance built-in tool definitions loading
- Updated loadToolDefinitions to include descriptions and parameters from the tool registry for built-in tools, improving the clarity and usability of tool definitions.
- Integrated getToolDefinition to streamline the retrieval of tool metadata, enhancing the overall tool management process.
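The "definitions only" idea above — hand the model serializable metadata instead of live tool instances — reduces to a small projection over a registry. This is a minimal sketch with a hypothetical registry shape, not LibreChat's loadToolDefinitions:

```javascript
// Sketch of definitions-only loading: emit just the serializable metadata
// (name, description, JSON schema parameters) the model needs for tool
// selection, skipping tool instantiation entirely.
function loadToolDefinitions(registry, toolNames) {
  const definitions = [];
  for (const name of toolNames) {
    const entry = registry[name];
    if (!entry) {
      continue; // skip tools that have no registry definition
    }
    definitions.push({
      name,
      description: entry.description,
      parameters: entry.jsonSchema, // plain JSON schema, safe to serialize
    });
  }
  return definitions;
}

// Hypothetical registry entry keyed by tool name.
const registry = {
  google: {
    description: 'Search the web',
    jsonSchema: { type: 'object', properties: { query: { type: 'string' } }, required: ['query'] },
  },
};
```

Because the output is plain data, it survives JSON serialization, which is what makes it usable across the event-driven boundary.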
* feat: add action tool definitions loading to tool service
- Introduced getActionToolDefinitions function to load action tool definitions based on agent ID and tool names, enhancing the tool loading process.
- Updated loadToolDefinitions to integrate action tool definitions, allowing for better management and retrieval of action-specific tools.
- Added comprehensive tests for action tool definitions to ensure correct loading and parameter handling, improving overall reliability and functionality.

* chore: update @librechat/agents dependency to version 3.1.26

* refactor: add toolEndCallback to handle tool execution results

* fix: tool definitions and execution handling
- Introduced native tools (execute_code, file_search, web_search) to the tool service, allowing for better integration and management of these tools.
- Updated isBuiltInTool function to include native tools in the built-in check, improving tool recognition.
- Added comprehensive tests for loading parameters of native tools, ensuring correct functionality and parameter handling.
- Enhanced tool definitions registry to include new agent tool definitions, streamlining tool retrieval and management.

* refactor: enhance tool loading and execution context
- Added toolRegistry to the context for OpenAIChatCompletionController and createResponse functions, improving tool management.
- Updated loadToolsForExecution to utilize toolRegistry for better integration of programmatic tools and tool search functionalities.
- Enhanced the initialization process to include toolRegistry in agent context, streamlining tool access and configuration.
- Refactored tool classification logic to support event-driven execution, ensuring compatibility with new tool definitions.

* chore: add request duration logging to OpenAI and Responses controllers
- Introduced logging for request start and completion times in OpenAIChatCompletionController and createResponse functions.
- Calculated and logged the duration of each request, enhancing observability and performance tracking.
- Improved debugging capabilities by providing detailed logs for both streaming and non-streaming responses.

* chore: update @librechat/agents dependency to version 3.1.27

* refactor: implement buildToolSet function for tool management
- Introduced buildToolSet function to streamline the creation of tool sets from agent configurations, enhancing tool management across various controllers.
- Updated AgentClient, OpenAIChatCompletionController, and createResponse functions to utilize buildToolSet, improving consistency in tool handling.
- Added comprehensive tests for buildToolSet to ensure correct functionality and edge case handling, enhancing overall reliability.

* refactor: update import paths for ToolExecuteOptions and createToolExecuteHandler

* fix: update GoogleSearch.js description for maximum search results
- Changed the default maximum number of search results from 10 to 5 in the Google Search JSON schema description, ensuring accurate documentation of the expected behavior.

* chore: remove deprecated Browser tool and associated assets
- Deleted the Browser tool definition from manifest.json, which included its name, plugin key, description, and authentication configuration.
- Removed the web-browser.svg asset as it is no longer needed following the removal of the Browser tool.

* fix: ensure tool definitions are valid before processing
- Added a check to verify the existence of tool definitions in the registry before accessing their properties, preventing potential runtime errors.
- Updated the loading logic for built-in tool definitions to ensure that only valid definitions are pushed to the built-in tool definitions array.

* fix: extend ExtendedJsonSchema to support 'null' type and nullable enums
- Updated the ExtendedJsonSchema type to include 'null' as a valid type option.
- Modified the enum property to accept an array of values that can include strings, numbers, booleans, and null, enhancing schema flexibility.

* test: add comprehensive tests for tool definitions loading and registry behavior
- Implemented tests to verify the handling of built-in tools without registry definitions, ensuring they are skipped correctly.
- Added tests to confirm that built-in tools include descriptions and parameters in the registry.
- Enhanced tests for action tools, checking for proper inclusion of metadata and handling of tools without parameters in the registry.

* test: add tests for mixed-type and number enum schema handling
- Introduced tests to validate the parsing of mixed-type enum values, including strings, numbers, booleans, and null.
- Added tests for number enum schema values to ensure correct parsing of numeric inputs, enhancing schema validation coverage.

* fix: update mock implementation for @librechat/agents
- Changed the mock for @librechat/agents to spread the actual module's properties, ensuring that all necessary functionalities are preserved in tests.
- This adjustment enhances the accuracy of the tests by reflecting the real structure of the module.

* fix: change max_results type in GoogleSearch schema from number to integer
- Updated the type of max_results in the Google Search JSON schema to 'integer' for better type accuracy and validation consistency.

* fix: update max_results description and type in GoogleSearch schema
- Changed the type of max_results from 'number' to 'integer' for improved type accuracy.
- Updated the description to reflect the new default maximum number of search results, changing it from 10 to 5.

* refactor: remove unused code and improve tool registry handling
- Eliminated outdated comments and conditional logic related to event-driven mode in the ToolService.
- Enhanced the handling of the tool registry by ensuring it is configurable for better integration during tool execution.

* feat: add definitionsOnly option to buildToolClassification for event-driven mode
- Introduced a new parameter, definitionsOnly, to the BuildToolClassificationParams interface to enable a mode that skips tool instance creation.
- Updated the buildToolClassification function to conditionally add tool definitions without instantiating tools when definitionsOnly is true.
- Modified the loadToolDefinitions function to pass definitionsOnly as true, ensuring compatibility with the new feature.

* test: add unit tests for buildToolClassification with definitionsOnly option
- Implemented tests to verify the behavior of buildToolClassification when definitionsOnly is set to true or false.
- Ensured that tool instances are not created when definitionsOnly is true, while still adding necessary tool definitions.
- Confirmed that loadAuthValues is called appropriately based on the definitionsOnly parameter, enhancing test coverage for this new feature.
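The definitionsOnly switch described in the last two items boils down to gating instantiation (and thus auth loading) behind the flag while always recording the definition. A minimal sketch with hypothetical shapes, not the real buildToolClassification:

```javascript
// When definitionsOnly is true, classification records serializable
// definitions but never constructs tool instances; when false, it
// builds instances as before.
async function buildToolClassification({ registry, toolNames, definitionsOnly, createInstance }) {
  const toolDefinitions = {};
  const toolInstances = {};
  for (const name of toolNames) {
    const def = registry[name];
    if (!def) {
      continue;
    }
    toolDefinitions[name] = { name, parameters: def.jsonSchema };
    if (!definitionsOnly) {
      // Instantiation (and any auth value loading) happens only in full mode.
      toolInstances[name] = await createInstance(name);
    }
  }
  return { toolDefinitions, toolInstances };
}

// Hypothetical registry and instance factory for illustration.
let instantiations = 0;
const classificationRegistry = { weather: { jsonSchema: { type: 'object' } } };
const createInstance = async (name) => {
  instantiations += 1;
  return { name };
};
```

This mirrors the tests described above: definitions are present in both modes, while the instance map (and the instantiation counter standing in for loadAuthValues) only moves in full mode.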
parent: 6279ea8dd7
commit: 5af1342dbb
46 changed files with 3297 additions and 565 deletions
@@ -41,9 +41,9 @@ jest.mock('~/models', () => ({
 const { getConvo, saveConvo } = require('~/models');
 
 jest.mock('@librechat/agents', () => {
-  const { Providers } = jest.requireActual('@librechat/agents');
+  const actual = jest.requireActual('@librechat/agents');
   return {
-    Providers,
+    ...actual,
     ChatOpenAI: jest.fn().mockImplementation(() => {
       return {};
     }),
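The hunk above swaps a cherry-picked `Providers` re-export for spreading the actual module, so every real export survives the mock. The same pattern in plain object terms (with `actualModule` standing in for `jest.requireActual('@librechat/agents')`, and hypothetical exports):

```javascript
// Spreading the real module keeps all of its exports while overriding
// only the pieces the test needs to stub.
const actualModule = { Providers: { OPENAI: 'openai' }, getDefaultHandlers: () => [] };

const mocked = {
  ...actualModule,
  // Only ChatOpenAI is stubbed; everything else passes through unchanged.
  ChatOpenAI: function ChatOpenAI() {
    return {};
  },
};
```

Spreading first and overriding after means later additions to the real module flow into the mock automatically, which is why the commit message calls this more accurate than re-exporting individual names.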
@@ -57,19 +57,6 @@
       }
     ]
   },
-  {
-    "name": "Browser",
-    "pluginKey": "web-browser",
-    "description": "Scrape and summarize webpage data",
-    "icon": "assets/web-browser.svg",
-    "authConfig": [
-      {
-        "authField": "OPENAI_API_KEY",
-        "label": "OpenAI API Key",
-        "description": "Browser makes use of OpenAI embeddings"
-      }
-    ]
-  },
   {
     "name": "DALL-E-3",
     "pluginKey": "dalle",
@@ -1,14 +1,28 @@
-const { z } = require('zod');
 const { Tool } = require('@langchain/core/tools');
 const { logger } = require('@librechat/data-schemas');
 const { SearchClient, AzureKeyCredential } = require('@azure/search-documents');
 
+const azureAISearchJsonSchema = {
+  type: 'object',
+  properties: {
+    query: {
+      type: 'string',
+      description: 'Search word or phrase to Azure AI Search',
+    },
+  },
+  required: ['query'],
+};
+
 class AzureAISearch extends Tool {
   // Constants for default values
   static DEFAULT_API_VERSION = '2023-11-01';
   static DEFAULT_QUERY_TYPE = 'simple';
   static DEFAULT_TOP = 5;
 
+  static get jsonSchema() {
+    return azureAISearchJsonSchema;
+  }
+
   // Helper function for initializing properties
   _initializeField(field, envVar, defaultValue) {
     return field || process.env[envVar] || defaultValue;
@@ -22,10 +36,7 @@ class AzureAISearch extends Tool {
     /* Used to initialize the Tool without necessary variables. */
     this.override = fields.override ?? false;
 
-    // Define schema
-    this.schema = z.object({
-      query: z.string().describe('Search word or phrase to Azure AI Search'),
-    });
+    this.schema = azureAISearchJsonSchema;
 
     // Initialize properties using helper function
     this.serviceEndpoint = this._initializeField(
@@ -1,4 +1,3 @@
-const { z } = require('zod');
 const path = require('path');
 const OpenAI = require('openai');
 const { v4: uuidv4 } = require('uuid');
@@ -8,6 +7,36 @@ const { logger } = require('@librechat/data-schemas');
 const { getImageBasename, extractBaseURL } = require('@librechat/api');
 const { FileContext, ContentTypes } = require('librechat-data-provider');
 
+const dalle3JsonSchema = {
+  type: 'object',
+  properties: {
+    prompt: {
+      type: 'string',
+      maxLength: 4000,
+      description:
+        'A text description of the desired image, following the rules, up to 4000 characters.',
+    },
+    style: {
+      type: 'string',
+      enum: ['vivid', 'natural'],
+      description:
+        'Must be one of `vivid` or `natural`. `vivid` generates hyper-real and dramatic images, `natural` produces more natural, less hyper-real looking images',
+    },
+    quality: {
+      type: 'string',
+      enum: ['hd', 'standard'],
+      description: 'The quality of the generated image. Only `hd` and `standard` are supported.',
+    },
+    size: {
+      type: 'string',
+      enum: ['1024x1024', '1792x1024', '1024x1792'],
+      description:
+        'The size of the requested image. Use 1024x1024 (square) as the default, 1792x1024 if the user requests a wide image, and 1024x1792 for full-body portraits. Always include this parameter in the request.',
+    },
+  },
+  required: ['prompt', 'style', 'quality', 'size'],
+};
+
 const displayMessage =
   "DALL-E displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
 class DALLE3 extends Tool {
@@ -72,27 +101,11 @@ class DALLE3 extends Tool {
     // The prompt must intricately describe every part of the image in concrete, objective detail. THINK about what the end goal of the description is, and extrapolate that to what would make satisfying images.
     // All descriptions sent to dalle should be a paragraph of text that is extremely descriptive and detailed. Each should be more than 3 sentences long.
     // - The "vivid" style is HIGHLY preferred, but "natural" is also supported.`;
-    this.schema = z.object({
-      prompt: z
-        .string()
-        .max(4000)
-        .describe(
-          'A text description of the desired image, following the rules, up to 4000 characters.',
-        ),
-      style: z
-        .enum(['vivid', 'natural'])
-        .describe(
-          'Must be one of `vivid` or `natural`. `vivid` generates hyper-real and dramatic images, `natural` produces more natural, less hyper-real looking images',
-        ),
-      quality: z
-        .enum(['hd', 'standard'])
-        .describe('The quality of the generated image. Only `hd` and `standard` are supported.'),
-      size: z
-        .enum(['1024x1024', '1792x1024', '1024x1792'])
-        .describe(
-          'The size of the requested image. Use 1024x1024 (square) as the default, 1792x1024 if the user requests a wide image, and 1024x1792 for full-body portraits. Always include this parameter in the request.',
-        ),
-    });
+    this.schema = dalle3JsonSchema;
   }
 
+  static get jsonSchema() {
+    return dalle3JsonSchema;
+  }
+
   getApiKey() {
@@ -1,4 +1,3 @@
-const { z } = require('zod');
 const axios = require('axios');
 const fetch = require('node-fetch');
 const { v4: uuidv4 } = require('uuid');
@@ -7,6 +6,84 @@ const { logger } = require('@librechat/data-schemas');
 const { HttpsProxyAgent } = require('https-proxy-agent');
 const { FileContext, ContentTypes } = require('librechat-data-provider');
 
+const fluxApiJsonSchema = {
+  type: 'object',
+  properties: {
+    action: {
+      type: 'string',
+      enum: ['generate', 'list_finetunes', 'generate_finetuned'],
+      description:
+        'Action to perform: "generate" for image generation, "generate_finetuned" for finetuned model generation, "list_finetunes" to get available custom models',
+    },
+    prompt: {
+      type: 'string',
+      description:
+        'Text prompt for image generation. Required when action is "generate". Not used for list_finetunes.',
+    },
+    width: {
+      type: 'number',
+      description:
+        'Width of the generated image in pixels. Must be a multiple of 32. Default is 1024.',
+    },
+    height: {
+      type: 'number',
+      description:
+        'Height of the generated image in pixels. Must be a multiple of 32. Default is 768.',
+    },
+    prompt_upsampling: {
+      type: 'boolean',
+      description: 'Whether to perform upsampling on the prompt.',
+    },
+    steps: {
+      type: 'integer',
+      description: 'Number of steps to run the model for, a number from 1 to 50. Default is 40.',
+    },
+    seed: {
+      type: 'number',
+      description: 'Optional seed for reproducibility.',
+    },
+    safety_tolerance: {
+      type: 'number',
+      description:
+        'Tolerance level for input and output moderation. Between 0 and 6, 0 being most strict, 6 being least strict.',
+    },
+    endpoint: {
+      type: 'string',
+      enum: [
+        '/v1/flux-pro-1.1',
+        '/v1/flux-pro',
+        '/v1/flux-dev',
+        '/v1/flux-pro-1.1-ultra',
+        '/v1/flux-pro-finetuned',
+        '/v1/flux-pro-1.1-ultra-finetuned',
+      ],
+      description: 'Endpoint to use for image generation.',
+    },
+    raw: {
+      type: 'boolean',
+      description:
+        'Generate less processed, more natural-looking images. Only works for /v1/flux-pro-1.1-ultra.',
+    },
+    finetune_id: {
+      type: 'string',
+      description: 'ID of the finetuned model to use',
+    },
+    finetune_strength: {
+      type: 'number',
+      description: 'Strength of the finetuning effect (typically between 0.1 and 1.2)',
+    },
+    guidance: {
+      type: 'number',
+      description: 'Guidance scale for finetuned models',
+    },
+    aspect_ratio: {
+      type: 'string',
+      description: 'Aspect ratio for ultra models (e.g., "16:9")',
+    },
+  },
+  required: [],
+};
+
 const displayMessage =
   "Flux displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
@@ -57,82 +134,11 @@ class FluxAPI extends Tool {
     // Add base URL from environment variable with fallback
     this.baseUrl = process.env.FLUX_API_BASE_URL || 'https://api.us1.bfl.ai';
 
-    // Define the schema for structured input
-    this.schema = z.object({
-      action: z
-        .enum(['generate', 'list_finetunes', 'generate_finetuned'])
-        .default('generate')
-        .describe(
-          'Action to perform: "generate" for image generation, "generate_finetuned" for finetuned model generation, "list_finetunes" to get available custom models',
-        ),
-      prompt: z
-        .string()
-        .optional()
-        .describe(
-          'Text prompt for image generation. Required when action is "generate". Not used for list_finetunes.',
-        ),
-      width: z
-        .number()
-        .optional()
-        .describe(
-          'Width of the generated image in pixels. Must be a multiple of 32. Default is 1024.',
-        ),
-      height: z
-        .number()
-        .optional()
-        .describe(
-          'Height of the generated image in pixels. Must be a multiple of 32. Default is 768.',
-        ),
-      prompt_upsampling: z
-        .boolean()
-        .optional()
-        .default(false)
-        .describe('Whether to perform upsampling on the prompt.'),
-      steps: z
-        .number()
-        .int()
-        .optional()
-        .describe('Number of steps to run the model for, a number from 1 to 50. Default is 40.'),
-      seed: z.number().optional().describe('Optional seed for reproducibility.'),
-      safety_tolerance: z
-        .number()
-        .optional()
-        .default(6)
-        .describe(
-          'Tolerance level for input and output moderation. Between 0 and 6, 0 being most strict, 6 being least strict.',
-        ),
-      endpoint: z
-        .enum([
-          '/v1/flux-pro-1.1',
-          '/v1/flux-pro',
-          '/v1/flux-dev',
-          '/v1/flux-pro-1.1-ultra',
-          '/v1/flux-pro-finetuned',
-          '/v1/flux-pro-1.1-ultra-finetuned',
-        ])
-        .optional()
-        .default('/v1/flux-pro-1.1')
-        .describe('Endpoint to use for image generation.'),
-      raw: z
-        .boolean()
-        .optional()
-        .default(false)
-        .describe(
-          'Generate less processed, more natural-looking images. Only works for /v1/flux-pro-1.1-ultra.',
-        ),
-      finetune_id: z.string().optional().describe('ID of the finetuned model to use'),
-      finetune_strength: z
-        .number()
-        .optional()
-        .default(1.1)
-        .describe('Strength of the finetuning effect (typically between 0.1 and 1.2)'),
-      guidance: z.number().optional().default(2.5).describe('Guidance scale for finetuned models'),
-      aspect_ratio: z
-        .string()
-        .optional()
-        .default('16:9')
-        .describe('Aspect ratio for ultra models (e.g., "16:9")'),
-    });
+    this.schema = fluxApiJsonSchema;
   }
 
+  static get jsonSchema() {
+    return fluxApiJsonSchema;
+  }
+
   getAxiosConfig() {
@@ -1,12 +1,33 @@
-const { z } = require('zod');
 const { Tool } = require('@langchain/core/tools');
 const { getEnvironmentVariable } = require('@langchain/core/utils/env');
 
+const googleSearchJsonSchema = {
+  type: 'object',
+  properties: {
+    query: {
+      type: 'string',
+      minLength: 1,
+      description: 'The search query string.',
+    },
+    max_results: {
+      type: 'integer',
+      minimum: 1,
+      maximum: 10,
+      description: 'The maximum number of search results to return. Defaults to 5.',
+    },
+  },
+  required: ['query'],
+};
+
 class GoogleSearchResults extends Tool {
   static lc_name() {
     return 'google';
   }
 
+  static get jsonSchema() {
+    return googleSearchJsonSchema;
+  }
+
   constructor(fields = {}) {
     super(fields);
     this.name = 'google';
@@ -28,25 +49,11 @@ class GoogleSearchResults extends Tool {
     this.description =
       'A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events.';
 
-    this.schema = z.object({
-      query: z.string().min(1).describe('The search query string.'),
-      max_results: z
-        .number()
-        .min(1)
-        .max(10)
-        .optional()
-        .describe('The maximum number of search results to return. Defaults to 10.'),
-      // Note: Google API has its own parameters for search customization, adjust as needed.
-    });
+    this.schema = googleSearchJsonSchema;
   }
 
   async _call(input) {
-    const validationResult = this.schema.safeParse(input);
-    if (!validationResult.success) {
-      throw new Error(`Validation failed: ${JSON.stringify(validationResult.error.issues)}`);
-    }
-
-    const { query, max_results = 5 } = validationResult.data;
+    const { query, max_results = 5 } = input;
 
     const response = await fetch(
       `https://www.googleapis.com/customsearch/v1?key=${this.apiKey}&cx=${
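With the Zod `safeParse` call removed in the hunk above, the `max_results` default now comes from plain destructuring, which applies only when the property is `undefined`. A small standalone illustration of that behavior:

```javascript
// Destructuring defaults fire only for `undefined` values, which is the
// semantics the tool relies on after dropping schema-level validation.
function readSearchArgs(input) {
  const { query, max_results = 5 } = input;
  return { query, max_results };
}
```

Note that an explicit `null` is not replaced by the default, one behavioral difference from a Zod `.default()` pipeline worth keeping in mind.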
@@ -1,8 +1,52 @@
 const { Tool } = require('@langchain/core/tools');
-const { z } = require('zod');
 const { getEnvironmentVariable } = require('@langchain/core/utils/env');
 const fetch = require('node-fetch');
 
+const openWeatherJsonSchema = {
+  type: 'object',
+  properties: {
+    action: {
+      type: 'string',
+      enum: ['help', 'current_forecast', 'timestamp', 'daily_aggregation', 'overview'],
+      description: 'The action to perform',
+    },
+    city: {
+      type: 'string',
+      description: 'City name for geocoding if lat/lon not provided',
+    },
+    lat: {
+      type: 'number',
+      description: 'Latitude coordinate',
+    },
+    lon: {
+      type: 'number',
+      description: 'Longitude coordinate',
+    },
+    exclude: {
+      type: 'string',
+      description: 'Parts to exclude from the response',
+    },
+    units: {
+      type: 'string',
+      enum: ['Celsius', 'Kelvin', 'Fahrenheit'],
+      description: 'Temperature units',
+    },
+    lang: {
+      type: 'string',
+      description: 'Language code',
+    },
+    date: {
+      type: 'string',
+      description: 'Date in YYYY-MM-DD format for timestamp and daily_aggregation',
+    },
+    tz: {
+      type: 'string',
+      description: 'Timezone',
+    },
+  },
+  required: ['action'],
+};
+
 /**
  * Map user-friendly units to OpenWeather units.
  * Defaults to Celsius if not specified.
@ -66,17 +110,11 @@ class OpenWeather extends Tool {
|
|||
'Units: "Celsius", "Kelvin", or "Fahrenheit" (default: Celsius). ' +
|
||||
'For timestamp action, use "date" in YYYY-MM-DD format.';
|
||||
|
||||
schema = z.object({
|
||||
action: z.enum(['help', 'current_forecast', 'timestamp', 'daily_aggregation', 'overview']),
|
||||
city: z.string().optional(),
|
||||
lat: z.number().optional(),
|
||||
lon: z.number().optional(),
|
||||
exclude: z.string().optional(),
|
||||
units: z.enum(['Celsius', 'Kelvin', 'Fahrenheit']).optional(),
|
||||
lang: z.string().optional(),
|
||||
date: z.string().optional(), // For timestamp and daily_aggregation
|
||||
tz: z.string().optional(),
|
||||
});
|
||||
schema = openWeatherJsonSchema;
|
||||
|
||||
static get jsonSchema() {
|
||||
return openWeatherJsonSchema;
|
||||
}
|
||||
|
||||
constructor(fields = {}) {
|
||||
super();
|
||||
|
|
|
|||
|
|
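The `Map user-friendly units to OpenWeather units` docblock above refers to translating the schema's `Celsius`/`Kelvin`/`Fahrenheit` values into the `metric`/`standard`/`imperial` values the OpenWeather API expects. A sketch of that mapping; the function name and fallback behavior are assumptions (the docblock only states the Celsius default), while the API-side unit names are documented OpenWeather behavior:

```javascript
// Hypothetical mapping helper; OpenWeather's `units` query parameter accepts
// `standard` (Kelvin), `metric` (Celsius), or `imperial` (Fahrenheit).
function mapUnitsToOpenWeather(units) {
  const mapping = {
    Celsius: 'metric',
    Kelvin: 'standard',
    Fahrenheit: 'imperial',
  };
  return mapping[units] ?? 'metric'; // Defaults to Celsius if not specified.
}

console.log(mapUnitsToOpenWeather('Fahrenheit')); // imperial
```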
@@ -1,6 +1,5 @@
 // Generates image using stable diffusion webui's api (automatic1111)
 const fs = require('fs');
-const { z } = require('zod');
 const path = require('path');
 const axios = require('axios');
 const sharp = require('sharp');
@@ -11,6 +10,23 @@ const { FileContext, ContentTypes } = require('librechat-data-provider');
 const { getBasePath } = require('@librechat/api');
 const paths = require('~/config/paths');
 
+const stableDiffusionJsonSchema = {
+  type: 'object',
+  properties: {
+    prompt: {
+      type: 'string',
+      description:
+        'Detailed keywords to describe the subject, using at least 7 keywords to accurately describe the image, separated by comma',
+    },
+    negative_prompt: {
+      type: 'string',
+      description:
+        'Keywords we want to exclude from the final image, using at least 7 keywords to accurately describe the image, separated by comma',
+    },
+  },
+  required: ['prompt', 'negative_prompt'],
+};
+
 const displayMessage =
   "Stable Diffusion displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
 
@@ -46,18 +62,11 @@ class StableDiffusionAPI extends Tool {
     // - Generate images only once per human query unless explicitly requested by the user`;
     this.description =
       "You can generate images using text with 'stable-diffusion'. This tool is exclusively for visual content.";
-    this.schema = z.object({
-      prompt: z
-        .string()
-        .describe(
-          'Detailed keywords to describe the subject, using at least 7 keywords to accurately describe the image, separated by comma',
-        ),
-      negative_prompt: z
-        .string()
-        .describe(
-          'Keywords we want to exclude from the final image, using at least 7 keywords to accurately describe the image, separated by comma',
-        ),
-    });
+    this.schema = stableDiffusionJsonSchema;
   }
 
+  static get jsonSchema() {
+    return stableDiffusionJsonSchema;
+  }
+
   replaceNewLinesWithSpaces(inputString) {
@@ -1,8 +1,75 @@
-const { z } = require('zod');
 const { ProxyAgent, fetch } = require('undici');
 const { Tool } = require('@langchain/core/tools');
 const { getEnvironmentVariable } = require('@langchain/core/utils/env');
 
+const tavilySearchJsonSchema = {
+  type: 'object',
+  properties: {
+    query: {
+      type: 'string',
+      minLength: 1,
+      description: 'The search query string.',
+    },
+    max_results: {
+      type: 'number',
+      minimum: 1,
+      maximum: 10,
+      description: 'The maximum number of search results to return. Defaults to 5.',
+    },
+    search_depth: {
+      type: 'string',
+      enum: ['basic', 'advanced'],
+      description:
+        'The depth of the search, affecting result quality and response time (`basic` or `advanced`). Default is basic for quick results and advanced for indepth high quality results but longer response time. Advanced calls equals 2 requests.',
+    },
+    include_images: {
+      type: 'boolean',
+      description:
+        'Whether to include a list of query-related images in the response. Default is False.',
+    },
+    include_answer: {
+      type: 'boolean',
+      description: 'Whether to include answers in the search results. Default is False.',
+    },
+    include_raw_content: {
+      type: 'boolean',
+      description: 'Whether to include raw content in the search results. Default is False.',
+    },
+    include_domains: {
+      type: 'array',
+      items: { type: 'string' },
+      description: 'A list of domains to specifically include in the search results.',
+    },
+    exclude_domains: {
+      type: 'array',
+      items: { type: 'string' },
+      description: 'A list of domains to specifically exclude from the search results.',
+    },
+    topic: {
+      type: 'string',
+      enum: ['general', 'news', 'finance'],
+      description:
+        'The category of the search. Use news ONLY if query SPECIFCALLY mentions the word "news".',
+    },
+    time_range: {
+      type: 'string',
+      enum: ['day', 'week', 'month', 'year', 'd', 'w', 'm', 'y'],
+      description: 'The time range back from the current date to filter results.',
+    },
+    days: {
+      type: 'number',
+      minimum: 1,
+      description: 'Number of days back from the current date to include. Only if topic is news.',
+    },
+    include_image_descriptions: {
+      type: 'boolean',
+      description:
+        'When include_images is true, also add a descriptive text for each image. Default is false.',
+    },
+  },
+  required: ['query'],
+};
+
 class TavilySearchResults extends Tool {
   static lc_name() {
     return 'TavilySearchResults';
@@ -20,64 +87,11 @@ class TavilySearchResults extends Tool {
     this.description =
       'A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events.';
 
-    this.schema = z.object({
-      query: z.string().min(1).describe('The search query string.'),
-      max_results: z
-        .number()
-        .min(1)
-        .max(10)
-        .optional()
-        .describe('The maximum number of search results to return. Defaults to 5.'),
-      search_depth: z
-        .enum(['basic', 'advanced'])
-        .optional()
-        .describe(
-          'The depth of the search, affecting result quality and response time (`basic` or `advanced`). Default is basic for quick results and advanced for indepth high quality results but longer response time. Advanced calls equals 2 requests.',
-        ),
-      include_images: z
-        .boolean()
-        .optional()
-        .describe(
-          'Whether to include a list of query-related images in the response. Default is False.',
-        ),
-      include_answer: z
-        .boolean()
-        .optional()
-        .describe('Whether to include answers in the search results. Default is False.'),
-      include_raw_content: z
-        .boolean()
-        .optional()
-        .describe('Whether to include raw content in the search results. Default is False.'),
-      include_domains: z
-        .array(z.string())
-        .optional()
-        .describe('A list of domains to specifically include in the search results.'),
-      exclude_domains: z
-        .array(z.string())
-        .optional()
-        .describe('A list of domains to specifically exclude from the search results.'),
-      topic: z
-        .enum(['general', 'news', 'finance'])
-        .optional()
-        .describe(
-          'The category of the search. Use news ONLY if query SPECIFCALLY mentions the word "news".',
-        ),
-      time_range: z
-        .enum(['day', 'week', 'month', 'year', 'd', 'w', 'm', 'y'])
-        .optional()
-        .describe('The time range back from the current date to filter results.'),
-      days: z
-        .number()
-        .min(1)
-        .optional()
-        .describe('Number of days back from the current date to include. Only if topic is news.'),
-      include_image_descriptions: z
-        .boolean()
-        .optional()
-        .describe(
-          'When include_images is true, also add a descriptive text for each image. Default is false.',
-        ),
-    });
+    this.schema = tavilySearchJsonSchema;
   }
 
+  static get jsonSchema() {
+    return tavilySearchJsonSchema;
+  }
+
   getApiKey() {
@@ -89,12 +103,7 @@ class TavilySearchResults extends Tool {
   }
 
   async _call(input) {
-    const validationResult = this.schema.safeParse(input);
-    if (!validationResult.success) {
-      throw new Error(`Validation failed: ${JSON.stringify(validationResult.error.issues)}`);
-    }
-
-    const { query, ...rest } = validationResult.data;
+    const { query, ...rest } = input;
 
     const requestBody = {
       api_key: this.apiKey,
@@ -1,8 +1,19 @@
-const { z } = require('zod');
 const { Tool } = require('@langchain/core/tools');
 const { logger } = require('@librechat/data-schemas');
 const { getEnvironmentVariable } = require('@langchain/core/utils/env');
 
+const traversaalSearchJsonSchema = {
+  type: 'object',
+  properties: {
+    query: {
+      type: 'string',
+      description:
+        "A properly written sentence to be interpreted by an AI to search the web according to the user's request.",
+    },
+  },
+  required: ['query'],
+};
+
 /**
  * Tool for the Traversaal AI search API, Ares.
  */
@@ -17,17 +28,15 @@ class TraversaalSearch extends Tool {
     Useful for when you need to answer questions about current events. Input should be a search query.`;
     this.description_for_model =
       '\'Please create a specific sentence for the AI to understand and use as a query to search the web based on the user\'s request. For example, "Find information about the highest mountains in the world." or "Show me the latest news articles about climate change and its impact on polar ice caps."\'';
-    this.schema = z.object({
-      query: z
-        .string()
-        .describe(
-          "A properly written sentence to be interpreted by an AI to search the web according to the user's request.",
-        ),
-    });
+    this.schema = traversaalSearchJsonSchema;
 
     this.apiKey = fields?.TRAVERSAAL_API_KEY ?? this.getApiKey();
   }
 
+  static get jsonSchema() {
+    return traversaalSearchJsonSchema;
+  }
+
   getApiKey() {
     const apiKey = getEnvironmentVariable('TRAVERSAAL_API_KEY');
     if (!apiKey && this.override) {
@@ -1,9 +1,19 @@
 /* eslint-disable no-useless-escape */
-const { z } = require('zod');
 const axios = require('axios');
 const { Tool } = require('@langchain/core/tools');
 const { logger } = require('@librechat/data-schemas');
 
+const wolframJsonSchema = {
+  type: 'object',
+  properties: {
+    input: {
+      type: 'string',
+      description: 'Natural language query to WolframAlpha following the guidelines',
+    },
+  },
+  required: ['input'],
+};
+
 class WolframAlphaAPI extends Tool {
   constructor(fields) {
     super();
@@ -41,9 +51,11 @@ class WolframAlphaAPI extends Tool {
     // -- Do not explain each step unless user input is needed. Proceed directly to making a better API call based on the available assumptions.`;
     this.description = `WolframAlpha offers computation, math, curated knowledge, and real-time data. It handles natural language queries and performs complex calculations.
     Follow the guidelines to get the best results.`;
-    this.schema = z.object({
-      input: z.string().describe('Natural language query to WolframAlpha following the guidelines'),
-    });
+    this.schema = wolframJsonSchema;
   }
 
+  static get jsonSchema() {
+    return wolframJsonSchema;
+  }
+
   async fetchRawText(url) {
@@ -1,4 +1,3 @@
-const { z } = require('zod');
 const axios = require('axios');
 const { tool } = require('@langchain/core/tools');
 const { logger } = require('@librechat/data-schemas');
@@ -7,6 +6,18 @@ const { Tools, EToolResources } = require('librechat-data-provider');
 const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
 const { getFiles } = require('~/models');
 
+const fileSearchJsonSchema = {
+  type: 'object',
+  properties: {
+    query: {
+      type: 'string',
+      description:
+        "A natural language query to search for relevant information in the files. Be specific and use keywords related to the information you're looking for. The query will be used for semantic similarity matching against the file contents.",
+    },
+  },
+  required: ['query'],
+};
+
 /**
  *
  * @param {Object} options
@@ -182,15 +193,9 @@ Use the EXACT anchor markers shown below (copy them verbatim) immediately after
 **ALWAYS mention the filename in your text before the citation marker. NEVER use markdown links or footnotes.**`
           : ''
       }`,
-      schema: z.object({
-        query: z
-          .string()
-          .describe(
-            "A natural language query to search for relevant information in the files. Be specific and use keywords related to the information you're looking for. The query will be used for semantic similarity matching against the file contents.",
-          ),
-      }),
+      schema: fileSearchJsonSchema,
     },
   );
 };
 
-module.exports = { createFileSearchTool, primeFiles };
+module.exports = { createFileSearchTool, primeFiles, fileSearchJsonSchema };
@@ -45,7 +45,7 @@
     "@google/genai": "^1.19.0",
     "@keyv/redis": "^4.3.3",
     "@langchain/core": "^0.3.80",
-    "@librechat/agents": "^3.1.0",
+    "@librechat/agents": "^3.1.27",
     "@librechat/api": "*",
     "@librechat/data-schemas": "*",
     "@microsoft/microsoft-graph-client": "^3.0.7",
@@ -1,7 +1,12 @@
 const { nanoid } = require('nanoid');
 const { Constants } = require('@librechat/agents');
 const { logger } = require('@librechat/data-schemas');
-const { sendEvent, GenerationJobManager, writeAttachmentEvent } = require('@librechat/api');
+const {
+  sendEvent,
+  GenerationJobManager,
+  writeAttachmentEvent,
+  createToolExecuteHandler,
+} = require('@librechat/api');
 const { Tools, StepTypes, FileContext, ErrorTypes } = require('librechat-data-provider');
 const {
   EnvVar,
@@ -159,6 +164,12 @@ function emitEvent(res, streamId, eventData) {
   }
 }
 
+/**
+ * @typedef {Object} ToolExecuteOptions
+ * @property {(toolNames: string[]) => Promise<{loadedTools: StructuredTool[]}>} loadTools - Function to load tools by name
+ * @property {Object} configurable - Configurable context for tool invocation
+ */
+
 /**
  * Get default handlers for stream events.
  * @param {Object} options - The options object.
@@ -167,6 +178,7 @@ function emitEvent(res, streamId, eventData) {
  * @param {ToolEndCallback} options.toolEndCallback - Callback to use when tool ends.
  * @param {Array<UsageMetadata>} options.collectedUsage - The list of collected usage metadata.
  * @param {string | null} [options.streamId] - The stream ID for resumable mode, or null for standard mode.
+ * @param {ToolExecuteOptions} [options.toolExecuteOptions] - Options for event-driven tool execution.
  * @returns {Record<string, t.EventHandler>} The default handlers.
  * @throws {Error} If the request is not found.
  */
@@ -176,6 +188,7 @@ function getDefaultHandlers({
   toolEndCallback,
   collectedUsage,
   streamId = null,
+  toolExecuteOptions = null,
 }) {
   if (!res || !aggregateContent) {
     throw new Error(
@@ -285,6 +298,10 @@ function getDefaultHandlers({
     },
   };
 
+  if (toolExecuteOptions) {
+    handlers[GraphEvents.ON_TOOL_EXECUTE] = createToolExecuteHandler(toolExecuteOptions);
+  }
+
   return handlers;
 }
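The `ON_TOOL_EXECUTE` handler registered above is what makes the loading lazy: tools are announced to the agent as definitions only, and instances are created when a tool is first executed. A minimal sketch of that load-and-cache pattern; `createLazyToolLoader` and the factory/tool shapes are hypothetical, not the PR's actual `loadToolsForExecution` API:

```javascript
// Illustrative sketch: defer tool construction until first execution, then cache.
function createLazyToolLoader(factories) {
  const cache = new Map();
  return async function loadTools(toolNames) {
    const loadedTools = [];
    for (const name of toolNames) {
      if (!cache.has(name)) {
        // Instantiate on first use only; subsequent calls reuse the cached instance.
        cache.set(name, await factories[name]());
      }
      loadedTools.push(cache.get(name));
    }
    return { loadedTools };
  };
}

// Usage: nothing is constructed until the agent actually invokes the tool.
const loadTools = createLazyToolLoader({
  search: async () => ({ name: 'search', invoke: async (q) => `results for ${q}` }),
});
loadTools(['search']).then(({ loadedTools }) => console.log(loadedTools[0].name)); // search
```

This mirrors the `{ loadedTools }` return shape documented in the `ToolExecuteOptions` typedef above.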
@@ -5,6 +5,7 @@ const {
   createRun,
   Tokenizer,
   checkAccess,
+  buildToolSet,
   logAxiosError,
   sanitizeTitle,
   resolveHeaders,
@@ -974,7 +975,7 @@ class AgentClient extends BaseClient {
       version: 'v2',
     };
 
-    const toolSet = new Set((this.options.agent.tools ?? []).map((tool) => tool && tool.name));
+    const toolSet = buildToolSet(this.options.agent);
     let { messages: initialMessages, indexTokenCountMap } = formatAgentMessages(
       payload,
       this.indexTokenCountMap,
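`buildToolSet` deduplicates the Set construction that was previously inlined at each call site. A plausible shape for the helper, inferred directly from the expression it replaces (the real implementation lives in `@librechat/api` and may differ):

```javascript
// Assumed implementation, mirroring the replaced expression
// `new Set((agent.tools ?? []).map((tool) => tool && tool.name))`.
function buildToolSet(agent) {
  return new Set((agent?.tools ?? []).map((tool) => tool && tool.name));
}

console.log(buildToolSet({ tools: [{ name: 'web_search' }] }).has('web_search')); // true
```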
@@ -11,6 +11,7 @@ const {
   writeSSE,
   createRun,
   createChunk,
+  buildToolSet,
   sendFinalChunk,
   createSafeUser,
   validateRequest,
@@ -19,11 +20,12 @@ const {
   buildNonStreamingResponse,
   createOpenAIStreamTracker,
   createOpenAIContentAggregator,
+  createToolExecuteHandler,
   isChatCompletionValidationFailure,
 } = require('@librechat/api');
+const { loadAgentTools, loadToolsForExecution } = require('~/server/services/ToolService');
 const { createToolEndCallback } = require('~/server/controllers/agents/callbacks');
 const { findAccessibleResources } = require('~/server/services/PermissionService');
-const { loadAgentTools } = require('~/server/services/ToolService');
 const { getConvoFiles } = require('~/models/Conversation');
 const { getAgent, getAgents } = require('~/models/Agent');
 const db = require('~/models');
@@ -31,8 +33,10 @@ const db = require('~/models');
 /**
  * Creates a tool loader function for the agent.
  * @param {AbortSignal} signal - The abort signal
+ * @param {boolean} [definitionsOnly=true] - When true, returns only serializable
+ * tool definitions without creating full tool instances (for event-driven mode)
  */
-function createToolLoader(signal) {
+function createToolLoader(signal, definitionsOnly = true) {
   return async function loadTools({
     req,
     res,
@@ -51,6 +55,7 @@ function createToolLoader(signal) {
       agent,
       signal,
       tool_resources,
+      definitionsOnly,
       streamId: null, // No resumable stream for OpenAI compat
     });
   } catch (error) {
@@ -123,6 +128,7 @@ function sendErrorResponse(res, statusCode, message, type = 'invalid_request_err
 */
const OpenAIChatCompletionController = async (req, res) => {
   const appConfig = req.config;
+  const requestStartTime = Date.now();
 
   // Validate request
   const validation = validateRequest(req.body);
@@ -157,6 +163,10 @@ const OpenAIChatCompletionController = async (req, res) => {
     model: agentId,
   };
 
+  logger.debug(
+    `[OpenAI API] Request ${requestId} started for agent ${agentId}, stream: ${request.stream}`,
+  );
+
   // Set up abort controller
   const abortController = new AbortController();
@@ -239,19 +249,31 @@ const OpenAIChatCompletionController = async (req, res) => {
         }
       : null;
 
-  // We need custom handlers that stream in OpenAI format
   const collectedUsage = [];
   /** @type {Promise<import('librechat-data-provider').TAttachment | null>[]} */
   const artifactPromises = [];
 
-  // Convert messages to internal format
+  // Create tool end callback for processing artifacts (images, file citations, code output)
+  const toolEndCallback = createToolEndCallback({ req, res, artifactPromises, streamId: null });
+
+  const toolExecuteOptions = {
+    loadTools: async (toolNames) => {
+      return loadToolsForExecution({
+        req,
+        res,
+        agent,
+        toolNames,
+        signal: abortController.signal,
+        toolRegistry: primaryConfig.toolRegistry,
+        userMCPAuthMap: primaryConfig.userMCPAuthMap,
+        tool_resources: primaryConfig.tool_resources,
+      });
+    },
+    toolEndCallback,
+  };
+
   const openaiMessages = convertMessages(request.messages);
 
   // Format for agent
-  const toolSet = new Set((primaryConfig.tools ?? []).map((tool) => tool && tool.name));
+  const toolSet = buildToolSet(primaryConfig);
   const { messages: formattedMessages, indexTokenCountMap } = formatAgentMessages(
     openaiMessages,
     {},
@@ -425,6 +447,8 @@ const OpenAIChatCompletionController = async (req, res) => {
     on_chain_end: createHandler(),
     on_agent_update: createHandler(),
     on_custom_event: createHandler(),
+    // Event-driven tool execution handler
+    on_tool_execute: createToolExecuteHandler(toolExecuteOptions),
   };
 
   // Create and run the agent
@@ -474,9 +498,11 @@ const OpenAIChatCompletionController = async (req, res) => {
   });
 
   // Finalize response
+  const duration = Date.now() - requestStartTime;
   if (isStreaming) {
     sendFinalChunk(handlerConfig);
     res.end();
+    logger.debug(`[OpenAI API] Request ${requestId} completed in ${duration}ms (streaming)`);
 
     // Wait for artifact processing after response ends (non-blocking)
     if (artifactPromises.length > 0) {
@@ -515,6 +541,7 @@ const OpenAIChatCompletionController = async (req, res) => {
       usage,
     );
     res.json(response);
+    logger.debug(`[OpenAI API] Request ${requestId} completed in ${duration}ms (non-streaming)`);
   }
 } catch (error) {
   const errorMessage = error instanceof Error ? error.message : 'An error occurred';
@@ -10,8 +10,10 @@ const {
 } = require('@librechat/agents');
 const {
   createRun,
+  buildToolSet,
   createSafeUser,
   initializeAgent,
+  createToolExecuteHandler,
   // Responses API
   writeDone,
   buildResponse,
@@ -34,9 +36,9 @@ const {
   createResponsesToolEndCallback,
   createToolEndCallback,
 } = require('~/server/controllers/agents/callbacks');
+const { loadAgentTools, loadToolsForExecution } = require('~/server/services/ToolService');
 const { findAccessibleResources } = require('~/server/services/PermissionService');
 const { getConvoFiles, saveConvo, getConvo } = require('~/models/Conversation');
-const { loadAgentTools } = require('~/server/services/ToolService');
 const { getAgent, getAgents } = require('~/models/Agent');
 const db = require('~/models');
@@ -54,8 +56,10 @@ function setAppConfig(config) {
 /**
  * Creates a tool loader function for the agent.
  * @param {AbortSignal} signal - The abort signal
+ * @param {boolean} [definitionsOnly=true] - When true, returns only serializable
+ * tool definitions without creating full tool instances (for event-driven mode)
  */
-function createToolLoader(signal) {
+function createToolLoader(signal, definitionsOnly = true) {
   return async function loadTools({
     req,
     res,
@@ -74,6 +78,7 @@ function createToolLoader(signal) {
       agent,
       signal,
       tool_resources,
+      definitionsOnly,
       streamId: null,
     });
   } catch (error) {
@@ -261,6 +266,8 @@ function convertMessagesToOutputItems(messages) {
  * @param {import('express').Response} res
  */
const createResponse = async (req, res) => {
+  const requestStartTime = Date.now();
+
   // Validate request
   const validation = validateResponseRequest(req.body);
   if (isValidationFailure(validation)) {
@@ -291,6 +298,10 @@ const createResponse = async (req, res) => {
   // Create response context
   const context = createResponseContext(request, responseId);
 
+  logger.debug(
+    `[Responses API] Request ${responseId} started for agent ${agentId}, stream: ${isStreaming}`,
+  );
+
   // Set up abort controller
   const abortController = new AbortController();
@@ -362,8 +373,7 @@ const createResponse = async (req, res) => {
   // Merge previous messages with new input
   const allMessages = [...previousMessages, ...inputMessages];
 
-  // Format for agent
-  const toolSet = new Set((primaryConfig.tools ?? []).map((tool) => tool && tool.name));
+  const toolSet = buildToolSet(primaryConfig);
   const { messages: formattedMessages, indexTokenCountMap } = formatAgentMessages(
     allMessages,
     {},
@@ -407,6 +417,23 @@ const createResponse = async (req, res) => {
     artifactPromises,
   });
 
+  // Create tool execute options for event-driven tool execution
+  const toolExecuteOptions = {
+    loadTools: async (toolNames) => {
+      return loadToolsForExecution({
+        req,
+        res,
+        agent,
+        toolNames,
+        signal: abortController.signal,
+        toolRegistry: primaryConfig.toolRegistry,
+        userMCPAuthMap: primaryConfig.userMCPAuthMap,
+        tool_resources: primaryConfig.tool_resources,
+      });
+    },
+    toolEndCallback,
+  };
+
   // Combine handlers
   const handlers = {
     on_chat_model_stream: {
@@ -425,6 +452,7 @@ const createResponse = async (req, res) => {
     on_chain_end: { handle: () => {} },
     on_agent_update: { handle: () => {} },
     on_custom_event: { handle: () => {} },
+    on_tool_execute: createToolExecuteHandler(toolExecuteOptions),
   };
 
   // Create and run the agent
@@ -475,6 +503,9 @@ const createResponse = async (req, res) => {
     finalizeStream();
     res.end();
 
+    const duration = Date.now() - requestStartTime;
+    logger.debug(`[Responses API] Request ${responseId} completed in ${duration}ms (streaming)`);
+
     // Save to database if store: true
     if (request.store === true) {
       try {
@@ -504,18 +535,30 @@ const createResponse = async (req, res) => {
       });
     }
   } else {
     // Non-streaming response
     const aggregatorHandlers = createAggregatorEventHandlers(aggregator);
 
     // Built-in handler for processing raw model stream chunks
     const chatModelStreamHandler = new ChatModelStreamHandler();
 
+    // Artifact promises for processing tool outputs
     /** @type {Promise<import('librechat-data-provider').TAttachment | null>[]} */
     const artifactPromises = [];
+    const toolEndCallback = createToolEndCallback({ req, res, artifactPromises, streamId: null });
 
-    // Combine handlers
+    const toolExecuteOptions = {
+      loadTools: async (toolNames) => {
+        return loadToolsForExecution({
+          req,
+          res,
+          agent,
+          toolNames,
+          signal: abortController.signal,
+          toolRegistry: primaryConfig.toolRegistry,
+          userMCPAuthMap: primaryConfig.userMCPAuthMap,
+          tool_resources: primaryConfig.tool_resources,
+        });
+      },
+      toolEndCallback,
+    };
+
     const handlers = {
       on_chat_model_stream: {
         handle: async (event, data, metadata, graph) => {
@@ -533,9 +576,9 @@ const createResponse = async (req, res) => {
       on_chain_end: { handle: () => {} },
       on_agent_update: { handle: () => {} },
       on_custom_event: { handle: () => {} },
+      on_tool_execute: createToolExecuteHandler(toolExecuteOptions),
     };
 
     // Create and run the agent
     const userId = req.user?.id ?? 'api-user';
     const userMCPAuthMap = primaryConfig.userMCPAuthMap;
@@ -557,7 +600,6 @@ const createResponse = async (req, res) => {
       throw new Error('Failed to create agent run');
     }
 
-    // Process the stream
     const config = {
       runName: 'AgentRun',
       configurable: {
@@ -579,7 +621,6 @@ const createResponse = async (req, res) => {
       },
     });
 
-    // Wait for artifacts before sending response
     if (artifactPromises.length > 0) {
       try {
         await Promise.all(artifactPromises);
@@ -588,19 +629,14 @@ const createResponse = async (req, res) => {
       }
     }
 
-    // Build and send the response
     const response = buildAggregatedResponse(context, aggregator);
 
     // Save to database if store: true
     if (request.store === true) {
       try {
-        // Save conversation
         await saveConversation(req, conversationId, agentId, agent);
-
-        // Save input messages
         await saveInputMessages(req, conversationId, inputMessages, agentId);
-        // Save response output
         await saveResponseOutput(req, conversationId, responseId, response, agentId);
 
         logger.debug(
@@ -613,6 +649,11 @@ const createResponse = async (req, res) => {
       }
 
     res.json(response);
+
+    const duration = Date.now() - requestStartTime;
+    logger.debug(
+      `[Responses API] Request ${responseId} completed in ${duration}ms (non-streaming)`,
+    );
   }
 } catch (error) {
   const errorMessage = error instanceof Error ? error.message : 'An error occurred';
@@ -19,8 +19,8 @@ const {
   createToolEndCallback,
   getDefaultHandlers,
 } = require('~/server/controllers/agents/callbacks');
+const { loadAgentTools, loadToolsForExecution } = require('~/server/services/ToolService');
 const { getModelsConfig } = require('~/server/controllers/ModelController');
-const { loadAgentTools } = require('~/server/services/ToolService');
 const AgentClient = require('~/server/controllers/agents/client');
 const { getConvoFiles } = require('~/models/Conversation');
 const { processAddedConvo } = require('./addedConvo');
@@ -32,8 +32,10 @@ const db = require('~/models');
 * Creates a tool loader function for the agent.
 * @param {AbortSignal} signal - The abort signal
 * @param {string | null} [streamId] - The stream ID for resumable mode
+ * @param {boolean} [definitionsOnly=false] - When true, returns only serializable
+ * tool definitions without creating full tool instances (for event-driven mode)
 */
-function createToolLoader(signal, streamId = null) {
+function createToolLoader(signal, streamId = null, definitionsOnly = false) {
   /**
    * @param {object} params
    * @param {ServerRequest} params.req
@@ -44,8 +46,9 @@ function createToolLoader(signal, streamId = null) {
    * @param {string} params.model
    * @param {AgentToolResources} params.tool_resources
    * @returns {Promise<{
-   *   tools: StructuredTool[],
+   *   tools?: StructuredTool[],
    *   toolContextMap: Record<string, unknown>,
+   *   toolDefinitions?: import('@librechat/agents').LCTool[],
    *   userMCPAuthMap?: Record<string, Record<string, string>>,
    *   toolRegistry?: import('@librechat/agents').LCToolRegistry
    * } | undefined>}
@@ -67,8 +70,9 @@ function createToolLoader(signal, streamId = null) {
       res,
       agent,
       signal,
-      tool_resources,
       streamId,
+      tool_resources,
+      definitionsOnly,
     });
   } catch (error) {
     logger.error('Error loading tools for agent ' + agentId, error);
@@ -91,8 +95,46 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
   const artifactPromises = [];
   const { contentParts, aggregateContent } = createContentAggregator();
   const toolEndCallback = createToolEndCallback({ req, res, artifactPromises, streamId });
 
+  /**
+   * Agent context store - populated after initialization, accessed by callback via closure.
+   * Maps agentId -> { userMCPAuthMap, agent, tool_resources, toolRegistry, openAIApiKey }
+   * @type {Map<string, {
+   *   userMCPAuthMap?: Record<string, Record<string, string>>,
+   *   agent?: object,
+   *   tool_resources?: object,
+   *   toolRegistry?: import('@librechat/agents').LCToolRegistry,
+   *   openAIApiKey?: string
+   * }>}
+   */
+  const agentToolContexts = new Map();
+
+  const toolExecuteOptions = {
+    loadTools: async (toolNames, agentId) => {
+      const ctx = agentToolContexts.get(agentId) ?? {};
+      logger.debug(`[ON_TOOL_EXECUTE] ctx found: ${!!ctx.userMCPAuthMap}, agent: ${ctx.agent?.id}`);
|
||||
|
||||
const result = await loadToolsForExecution({
|
||||
req,
|
||||
res,
|
||||
signal,
|
||||
streamId,
|
||||
toolNames,
|
||||
agent: ctx.agent,
|
||||
toolRegistry: ctx.toolRegistry,
|
||||
userMCPAuthMap: ctx.userMCPAuthMap,
|
||||
tool_resources: ctx.tool_resources,
|
||||
});
|
||||
|
||||
logger.debug(`[ON_TOOL_EXECUTE] loaded ${result.loadedTools?.length ?? 0} tools`);
|
||||
return result;
|
||||
},
|
||||
toolEndCallback,
|
||||
};
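The `loadTools` closure above is the heart of the event-driven mode: tool definitions are registered up front, and instances are only created the first time a tool actually executes. A minimal standalone sketch of that pattern, with illustrative names rather than LibreChat APIs (the real `loadToolsForExecution` also wires auth maps, callbacks, and registries):

```javascript
// Minimal sketch of event-driven lazy tool loading (hypothetical names).
// Definitions are cheap and known up front; instantiation is deferred until
// a tool is first requested, then cached for subsequent executions.
function createLazyToolLoader(definitions, instantiate) {
  const cache = new Map(); // toolName -> instantiated tool
  return async function loadTools(toolNames) {
    const loadedTools = [];
    for (const name of toolNames) {
      if (!definitions.has(name)) {
        continue; // unknown tool: skip rather than throw, mirroring the warn-and-continue style
      }
      if (!cache.has(name)) {
        cache.set(name, await instantiate(definitions.get(name)));
      }
      loadedTools.push(cache.get(name));
    }
    return { loadedTools };
  };
}
```

A run that never calls a tool never pays its instantiation cost, which is the point of the refactor.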

const eventHandlers = getDefaultHandlers({
res,
toolExecuteOptions,
aggregateContent,
toolEndCallback,
collectedUsage,

@@ -125,7 +167,8 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
const agentConfigs = new Map();
const allowedProviders = new Set(appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders);

const loadTools = createToolLoader(signal, streamId);
/** Event-driven mode: only load tool definitions, not full instances */
const loadTools = createToolLoader(signal, streamId, true);
/** @type {Array<MongoFile>} */
const requestFiles = req.body.files ?? [];
/** @type {string} */

@@ -159,6 +202,19 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
},
);

logger.debug(
`[initializeClient] Tool definitions for primary agent: ${primaryConfig.toolDefinitions?.length ?? 0}`,
);

/** Store primary agent's tool context for ON_TOOL_EXECUTE callback */
logger.debug(`[initializeClient] Storing tool context for agentId: ${primaryConfig.id}`);
agentToolContexts.set(primaryConfig.id, {
agent: primaryAgent,
toolRegistry: primaryConfig.toolRegistry,
userMCPAuthMap: primaryConfig.userMCPAuthMap,
tool_resources: primaryConfig.tool_resources,
});

const agent_ids = primaryConfig.agent_ids;
let userMCPAuthMap = primaryConfig.userMCPAuthMap;

@@ -211,11 +267,21 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
getCodeGeneratedFiles: db.getCodeGeneratedFiles,
},
);

if (userMCPAuthMap != null) {
Object.assign(userMCPAuthMap, config.userMCPAuthMap ?? {});
} else {
userMCPAuthMap = config.userMCPAuthMap;
}

/** Store handoff agent's tool context for ON_TOOL_EXECUTE callback */
agentToolContexts.set(agentId, {
agent,
toolRegistry: config.toolRegistry,
userMCPAuthMap: config.userMCPAuthMap,
tool_resources: config.tool_resources,
});

agentConfigs.set(agentId, config);
return agent;
}

@@ -1,4 +1,3 @@
const { z } = require('zod');
const { tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const {

@@ -12,7 +11,7 @@ const {
MCPOAuthHandler,
isMCPDomainAllowed,
normalizeServerName,
convertWithResolvedRefs,
resolveJsonSchemaRefs,
GenerationJobManager,
} = require('@librechat/api');
const {

@@ -34,6 +33,16 @@ const { reinitMCPServer } = require('./Tools/mcp');
const { getAppConfig } = require('./Config');
const { getLogStores } = require('~/cache');

function isEmptyObjectSchema(jsonSchema) {
return (
jsonSchema != null &&
typeof jsonSchema === 'object' &&
jsonSchema.type === 'object' &&
(jsonSchema.properties == null || Object.keys(jsonSchema.properties).length === 0) &&
!jsonSchema.additionalProperties
);
}
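For reference, the predicate (copied verbatim from the function above so the snippet runs standalone) treats an object schema as empty only when it declares no properties and does not allow additional ones:

```javascript
// Copied from isEmptyObjectSchema above; classifies which JSON schemas
// count as "empty object" and therefore need the fallback input schema.
function isEmptyObjectSchema(jsonSchema) {
  return (
    jsonSchema != null &&
    typeof jsonSchema === 'object' &&
    jsonSchema.type === 'object' &&
    (jsonSchema.properties == null || Object.keys(jsonSchema.properties).length === 0) &&
    !jsonSchema.additionalProperties
  );
}

// Empty: { type: 'object' } or { type: 'object', properties: {} }.
// Not empty: any declared property, or additionalProperties enabled.
```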

/**
* @param {object} params
* @param {ServerResponse} params.res - The Express response object for sending events.

@@ -197,6 +206,9 @@ async function reconnectServer({
userMCPAuthMap,
streamId = null,
}) {
logger.debug(
`[MCP][reconnectServer] serverName: ${serverName}, user: ${user?.id}, hasUserMCPAuthMap: ${!!userMCPAuthMap}`,
);
const runId = Constants.USE_PRELIM_RESPONSE_MESSAGE_ID;
const flowId = `${user.id}:${serverName}:${Date.now()}`;
const flowManager = getFlowStateManager(getLogStores(CacheKeys.FLOWS));

@@ -429,13 +441,17 @@ function createToolInstance({
/** @type {LCTool} */
const { description, parameters } = toolDefinition;
const isGoogle = _provider === Providers.VERTEXAI || _provider === Providers.GOOGLE;
let schema = convertWithResolvedRefs(parameters, {
allowEmptyObject: !isGoogle,
transformOneOfAnyOf: true,
});

if (!schema) {
schema = z.object({ input: z.string().optional() });
let schema = parameters ? resolveJsonSchemaRefs(parameters) : null;

if (!schema || (isGoogle && isEmptyObjectSchema(schema))) {
schema = {
type: 'object',
properties: {
input: { type: 'string', description: 'Input for the tool' },
},
required: [],
};
}

const normalizedToolKey = `${toolName}${Constants.mcp_delimiter}${normalizeServerName(serverName)}`;

@@ -53,7 +53,7 @@ jest.mock('@librechat/api', () => {
},
sendEvent: jest.fn(),
normalizeServerName: jest.fn((name) => name),
convertWithResolvedRefs: jest.fn((params) => params),
resolveJsonSchemaRefs: jest.fn((params) => params),
get isMCPDomainAllowed() {
return mockIsMCPDomainAllowed;
},

@@ -1,10 +1,17 @@
const { sleep } = require('@librechat/agents');
const {
sleep,
EnvVar,
Constants,
createToolSearch,
createProgrammaticToolCallingTool,
} = require('@librechat/agents');
const { logger } = require('@librechat/data-schemas');
const { tool: toolFn, DynamicStructuredTool } = require('@langchain/core/tools');
const {
getToolkitKey,
hasCustomUserVars,
getUserMCPAuthMap,
loadToolDefinitions,
isActionDomainAllowed,
buildToolClassification,
} = require('@librechat/api');

@@ -20,9 +27,12 @@ const {
AgentCapabilities,
isEphemeralAgentId,
validateActionDomain,
actionDomainSeparator,
defaultAgentCapabilities,
validateAndParseOpenAPISpec,
} = require('librechat-data-provider');

const domainSeparatorRegex = new RegExp(actionDomainSeparator, 'g');
const {
createActionTool,
decryptMetadata,

@@ -30,14 +40,19 @@ const {
domainParser,
} = require('./ActionService');
const { processFileURL, uploadImageBuffer } = require('~/server/services/Files/process');
const { getEndpointsConfig, getCachedTools } = require('~/server/services/Config');
const {
getEndpointsConfig,
getCachedTools,
getMCPServerTools,
} = require('~/server/services/Config');
const { manifestToolMap, toolkits } = require('~/app/clients/tools/manifest');
const { createOnSearchResults } = require('~/server/services/Tools/search');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { reinitMCPServer } = require('~/server/services/Tools/mcp');
const { recordUsage } = require('~/server/services/Threads');
const { loadTools } = require('~/app/clients/tools/util');
const { redactMessage } = require('~/config/parsers');
const { findPluginAuthsByKeys } = require('~/models');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
/**
* Processes the required actions by calling the appropriate tools and returning the outputs.
* @param {OpenAIClient} client - OpenAI or StreamRunManager Client.

@@ -377,6 +392,187 @@ async function processRequiredActions(client, requiredActions) {
* hasDeferredTools?: boolean;
* }>} The agent tools and registry.
*/
/** Native LibreChat tools that are not in the manifest */
const nativeTools = new Set([Tools.execute_code, Tools.file_search, Tools.web_search]);

/** Checks if a tool name is a known built-in tool */
const isBuiltInTool = (toolName) =>
Boolean(
manifestToolMap[toolName] ||
toolkits.some((t) => t.pluginKey === toolName) ||
nativeTools.has(toolName),
);

/**
* Loads only tool definitions without creating tool instances.
* This is the efficient path for event-driven mode where tools are loaded on-demand.
*
* @param {Object} params
* @param {ServerRequest} params.req - The request object
* @param {Object} params.agent - The agent configuration
* @returns {Promise<{
* toolDefinitions?: import('@librechat/api').LCTool[];
* toolRegistry?: Map<string, import('@librechat/api').LCTool>;
* userMCPAuthMap?: Record<string, Record<string, string>>;
* hasDeferredTools?: boolean;
* }>}
*/
async function loadToolDefinitionsWrapper({ req, agent }) {
if (!agent.tools || agent.tools.length === 0) {
return { toolDefinitions: [] };
}

if (
agent.tools.length === 1 &&
(agent.tools[0] === AgentCapabilities.context || agent.tools[0] === AgentCapabilities.ocr)
) {
return { toolDefinitions: [] };
}

const appConfig = req.config;
const endpointsConfig = await getEndpointsConfig(req);
let enabledCapabilities = new Set(endpointsConfig?.[EModelEndpoint.agents]?.capabilities ?? []);

if (enabledCapabilities.size === 0 && isEphemeralAgentId(agent.id)) {
enabledCapabilities = new Set(
appConfig.endpoints?.[EModelEndpoint.agents]?.capabilities ?? defaultAgentCapabilities,
);
}

const checkCapability = (capability) => enabledCapabilities.has(capability);
const areToolsEnabled = checkCapability(AgentCapabilities.tools);
const deferredToolsEnabled = checkCapability(AgentCapabilities.deferred_tools);

const filteredTools = agent.tools?.filter((tool) => {
if (tool === Tools.file_search) {
return checkCapability(AgentCapabilities.file_search);
}
if (tool === Tools.execute_code) {
return checkCapability(AgentCapabilities.execute_code);
}
if (tool === Tools.web_search) {
return checkCapability(AgentCapabilities.web_search);
}
if (!areToolsEnabled && !tool.includes(actionDelimiter)) {
return false;
}
return true;
});

if (!filteredTools || filteredTools.length === 0) {
return { toolDefinitions: [] };
}

/** @type {Record<string, Record<string, string>>} */
let userMCPAuthMap;
if (hasCustomUserVars(req.config)) {
userMCPAuthMap = await getUserMCPAuthMap({
tools: agent.tools,
userId: req.user.id,
findPluginAuthsByKeys,
});
}

const getOrFetchMCPServerTools = async (userId, serverName) => {
const cached = await getMCPServerTools(userId, serverName);
if (cached) {
return cached;
}

const result = await reinitMCPServer({
user: req.user,
serverName,
userMCPAuthMap,
});

return result?.availableTools || null;
};
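`getOrFetchMCPServerTools` above follows a read-through cache shape: consult the cache first, fall back to the expensive server reinitialization, and normalize a missing result to `null`. Stripped of the MCP specifics it reduces to this sketch (illustrative names, not a LibreChat API):

```javascript
// Read-through cache helper: cacheGet() and fetchFresh() are caller-supplied
// async functions; a falsy cache result triggers the fetch, and an absent
// fetch result is normalized to null.
async function getOrFetch(cacheGet, fetchFresh) {
  const cached = await cacheGet();
  if (cached) {
    return cached;
  }
  const result = await fetchFresh();
  return result ?? null;
}
```

Keeping the fallback inside the helper means callers never observe `undefined`, which simplifies the downstream "tool not found" handling.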

const getActionToolDefinitions = async (agentId, actionToolNames) => {
const actionSets = (await loadActionSets({ agent_id: agentId })) ?? [];
if (actionSets.length === 0) {
return [];
}

const definitions = [];
const allowedDomains = appConfig?.actions?.allowedDomains;

for (const action of actionSets) {
const domain = await domainParser(action.metadata.domain, true);
const normalizedDomain = domain.replace(domainSeparatorRegex, '_');

const isDomainAllowed = await isActionDomainAllowed(action.metadata.domain, allowedDomains);
if (!isDomainAllowed) {
logger.warn(
`[Actions] Domain "${action.metadata.domain}" not in allowedDomains. ` +
`Add it to librechat.yaml actions.allowedDomains to enable this action.`,
);
continue;
}

const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
if (!validationResult.spec || !validationResult.serverUrl) {
logger.warn(`[Actions] Invalid OpenAPI spec for domain: ${domain}`);
continue;
}

const { functionSignatures } = openapiToFunction(validationResult.spec, true);

for (const sig of functionSignatures) {
const toolName = `${sig.name}${actionDelimiter}${normalizedDomain}`;
if (!actionToolNames.some((name) => name.replace(domainSeparatorRegex, '_') === toolName)) {
continue;
}

definitions.push({
name: toolName,
description: sig.description,
parameters: sig.parameters,
});
}
}

return definitions;
};

const { toolDefinitions, toolRegistry, hasDeferredTools } = await loadToolDefinitions(
{
userId: req.user.id,
agentId: agent.id,
tools: filteredTools,
toolOptions: agent.tool_options,
deferredToolsEnabled,
},
{
isBuiltInTool,
loadAuthValues,
getOrFetchMCPServerTools,
getActionToolDefinitions,
},
);

return {
toolRegistry,
userMCPAuthMap,
toolDefinitions,
hasDeferredTools,
};
}

/**
* Loads agent tools for initialization or execution.
* @param {Object} params
* @param {ServerRequest} params.req - The request object
* @param {ServerResponse} params.res - The response object
* @param {Object} params.agent - The agent configuration
* @param {AbortSignal} [params.signal] - Abort signal
* @param {Object} [params.tool_resources] - Tool resources
* @param {string} [params.openAIApiKey] - OpenAI API key
* @param {string|null} [params.streamId] - Stream ID for resumable mode
* @param {boolean} [params.definitionsOnly=true] - When true, returns only serializable
* tool definitions without creating full tool instances. Use for event-driven mode
* where tools are loaded on-demand during execution.
*/
async function loadAgentTools({
req,
res,

@@ -385,16 +581,21 @@ async function loadAgentTools({
tool_resources,
openAIApiKey,
streamId = null,
definitionsOnly = true,
}) {
if (definitionsOnly) {
return loadToolDefinitionsWrapper({ req, agent });
}

if (!agent.tools || agent.tools.length === 0) {
return {};
return { toolDefinitions: [] };
} else if (
agent.tools &&
agent.tools.length === 1 &&
/** Legacy handling for `ocr` as may still exist in existing Agents */
(agent.tools[0] === AgentCapabilities.context || agent.tools[0] === AgentCapabilities.ocr)
) {
return {};
return { toolDefinitions: [] };
}

const appConfig = req.config;

@@ -480,6 +681,18 @@ async function loadAgentTools({
imageOutputType: appConfig.imageOutputType,
});

/** Build tool registry from MCP tools and create PTC/tool search tools if configured */
const deferredToolsEnabled = checkCapability(AgentCapabilities.deferred_tools);
const { toolRegistry, toolDefinitions, additionalTools, hasDeferredTools } =
await buildToolClassification({
loadedTools,
userId: req.user.id,
agentId: agent.id,
agentToolOptions: agent.tool_options,
deferredToolsEnabled,
loadAuthValues,
});

const agentTools = [];
for (let i = 0; i < loadedTools.length; i++) {
const tool = loadedTools[i];

@@ -524,25 +737,16 @@ async function loadAgentTools({
return map;
}, {});

/** Build tool registry from MCP tools and create PTC/tool search tools if configured */
const deferredToolsEnabled = checkCapability(AgentCapabilities.deferred_tools);
const { toolRegistry, additionalTools, hasDeferredTools } = await buildToolClassification({
loadedTools,
userId: req.user.id,
agentId: agent.id,
agentToolOptions: agent.tool_options,
deferredToolsEnabled,
loadAuthValues,
});
agentTools.push(...additionalTools);

if (!checkCapability(AgentCapabilities.actions)) {
return {
tools: agentTools,
toolRegistry,
userMCPAuthMap,
toolContextMap,
toolRegistry,
toolDefinitions,
hasDeferredTools,
tools: agentTools,
};
}

@@ -552,11 +756,12 @@ async function loadAgentTools({
logger.warn(`No tools found for the specified tool calls: ${_agentTools.join(', ')}`);
}
return {
tools: agentTools,
toolRegistry,
userMCPAuthMap,
toolContextMap,
toolRegistry,
toolDefinitions,
hasDeferredTools,
tools: agentTools,
};
}

@@ -681,16 +886,293 @@ async function loadAgentTools({
}

return {
tools: agentTools,
toolRegistry,
toolContextMap,
userMCPAuthMap,
toolRegistry,
toolDefinitions,
hasDeferredTools,
tools: agentTools,
};
}

/**
* Loads tools for event-driven execution (ON_TOOL_EXECUTE handler).
* This function encapsulates all dependencies needed for tool loading,
* so callers don't need to import processFileURL, uploadImageBuffer, etc.
*
* Handles both regular tools (MCP, built-in) and action tools.
*
* @param {Object} params
* @param {ServerRequest} params.req - The request object
* @param {ServerResponse} params.res - The response object
* @param {AbortSignal} [params.signal] - Abort signal
* @param {Object} params.agent - The agent object
* @param {string[]} params.toolNames - Names of tools to load
* @param {Record<string, Record<string, string>>} [params.userMCPAuthMap] - User MCP auth map
* @param {Object} [params.tool_resources] - Tool resources
* @param {string|null} [params.streamId] - Stream ID for web search callbacks
* @returns {Promise<{ loadedTools: Array, configurable: Object }>}
*/
async function loadToolsForExecution({
req,
res,
signal,
agent,
toolNames,
toolRegistry,
userMCPAuthMap,
tool_resources,
streamId = null,
}) {
const appConfig = req.config;
const allLoadedTools = [];
const configurable = { userMCPAuthMap };

const isToolSearch = toolNames.includes(Constants.TOOL_SEARCH);
const isPTC = toolNames.includes(Constants.PROGRAMMATIC_TOOL_CALLING);

if (isToolSearch && toolRegistry) {
const toolSearchTool = createToolSearch({
mode: 'local',
toolRegistry,
});
allLoadedTools.push(toolSearchTool);
configurable.toolRegistry = toolRegistry;
}

if (isPTC && toolRegistry) {
configurable.toolRegistry = toolRegistry;
try {
const authValues = await loadAuthValues({
userId: req.user.id,
authFields: [EnvVar.CODE_API_KEY],
});
const codeApiKey = authValues[EnvVar.CODE_API_KEY];

if (codeApiKey) {
const ptcTool = createProgrammaticToolCallingTool({ apiKey: codeApiKey });
allLoadedTools.push(ptcTool);
} else {
logger.warn('[loadToolsForExecution] PTC requested but CODE_API_KEY not available');
}
} catch (error) {
logger.error('[loadToolsForExecution] Error creating PTC tool:', error);
}
}

const specialToolNames = new Set([Constants.TOOL_SEARCH, Constants.PROGRAMMATIC_TOOL_CALLING]);

let ptcOrchestratedToolNames = [];
if (isPTC && toolRegistry) {
ptcOrchestratedToolNames = Array.from(toolRegistry.keys()).filter(
(name) => !specialToolNames.has(name),
);
}

const requestedNonSpecialToolNames = toolNames.filter((name) => !specialToolNames.has(name));
const allToolNamesToLoad = isPTC
? [...new Set([...requestedNonSpecialToolNames, ...ptcOrchestratedToolNames])]
: requestedNonSpecialToolNames;
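The name-resolution step above can be exercised in isolation: special tools (tool search, PTC) are filtered out of the requested names, and under PTC the remainder is unioned, deduplicated, with every non-special registry tool so the orchestrator can call any of them. A sketch with a hypothetical helper name:

```javascript
// Mirrors the filter-then-union logic above (hypothetical function name).
// Special tools are handled separately and never loaded as regular tools;
// PTC mode widens the load set to the whole registry, deduplicated via Set.
function resolveToolNamesToLoad(toolNames, registryNames, { isPTC, specialToolNames }) {
  const requested = toolNames.filter((name) => !specialToolNames.has(name));
  if (!isPTC) {
    return requested;
  }
  const orchestrated = registryNames.filter((name) => !specialToolNames.has(name));
  return [...new Set([...requested, ...orchestrated])];
}
```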

const actionToolNames = allToolNamesToLoad.filter((name) => name.includes(actionDelimiter));
const regularToolNames = allToolNamesToLoad.filter((name) => !name.includes(actionDelimiter));

if (regularToolNames.length > 0) {
const includesWebSearch = regularToolNames.includes(Tools.web_search);
const webSearchCallbacks = includesWebSearch ? createOnSearchResults(res, streamId) : undefined;

const { loadedTools } = await loadTools({
agent,
signal,
userMCPAuthMap,
functions: true,
tools: regularToolNames,
user: req.user.id,
options: {
req,
res,
processFileURL,
uploadImageBuffer,
returnMetadata: true,
tool_resources,
[Tools.web_search]: webSearchCallbacks,
},
webSearch: appConfig?.webSearch,
fileStrategy: appConfig?.fileStrategy,
imageOutputType: appConfig?.imageOutputType,
});

if (loadedTools) {
allLoadedTools.push(...loadedTools);
}
}

if (actionToolNames.length > 0 && agent) {
const actionTools = await loadActionToolsForExecution({
req,
res,
agent,
appConfig,
streamId,
actionToolNames,
});
allLoadedTools.push(...actionTools);
}

if (isPTC && allLoadedTools.length > 0) {
const ptcToolMap = new Map();
for (const tool of allLoadedTools) {
if (tool.name && tool.name !== Constants.PROGRAMMATIC_TOOL_CALLING) {
ptcToolMap.set(tool.name, tool);
}
}
configurable.ptcToolMap = ptcToolMap;
}

return {
configurable,
loadedTools: allLoadedTools,
};
}

/**
* Loads action tools for event-driven execution.
* @param {Object} params
* @param {ServerRequest} params.req - The request object
* @param {ServerResponse} params.res - The response object
* @param {Object} params.agent - The agent object
* @param {Object} params.appConfig - App configuration
* @param {string|null} params.streamId - Stream ID
* @param {string[]} params.actionToolNames - Action tool names to load
* @returns {Promise<Array>} Loaded action tools
*/
async function loadActionToolsForExecution({
req,
res,
agent,
appConfig,
streamId,
actionToolNames,
}) {
const loadedActionTools = [];

const actionSets = (await loadActionSets({ agent_id: agent.id })) ?? [];
if (actionSets.length === 0) {
return loadedActionTools;
}

const processedActionSets = new Map();
const domainMap = new Map();
const allowedDomains = appConfig?.actions?.allowedDomains;

for (const action of actionSets) {
const domain = await domainParser(action.metadata.domain, true);
domainMap.set(domain, action);

const isDomainAllowed = await isActionDomainAllowed(action.metadata.domain, allowedDomains);
if (!isDomainAllowed) {
logger.warn(
`[Actions] Domain "${action.metadata.domain}" not in allowedDomains. ` +
`Add it to librechat.yaml actions.allowedDomains to enable this action.`,
);
continue;
}

const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
if (!validationResult.spec || !validationResult.serverUrl) {
logger.warn(`[Actions] Invalid OpenAPI spec for domain: ${domain}`);
continue;
}

const domainValidation = validateActionDomain(
action.metadata.domain,
validationResult.serverUrl,
);
if (!domainValidation.isValid) {
logger.error(`Domain mismatch in stored action: ${domainValidation.message}`, {
userId: req.user.id,
agent_id: agent.id,
action_id: action.action_id,
});
continue;
}

const encrypted = {
oauth_client_id: action.metadata.oauth_client_id,
oauth_client_secret: action.metadata.oauth_client_secret,
};

const decryptedAction = { ...action };
decryptedAction.metadata = await decryptMetadata(action.metadata);

const { requestBuilders, functionSignatures, zodSchemas } = openapiToFunction(
validationResult.spec,
true,
);

processedActionSets.set(domain, {
action: decryptedAction,
requestBuilders,
functionSignatures,
zodSchemas,
encrypted,
});
}

for (const toolName of actionToolNames) {
let currentDomain = '';
for (const domain of domainMap.keys()) {
const normalizedDomain = domain.replace(domainSeparatorRegex, '_');
if (toolName.includes(normalizedDomain)) {
currentDomain = domain;
break;
}
}

if (!currentDomain || !processedActionSets.has(currentDomain)) {
continue;
}

const { action, encrypted, zodSchemas, requestBuilders, functionSignatures } =
processedActionSets.get(currentDomain);
const normalizedDomain = currentDomain.replace(domainSeparatorRegex, '_');
const functionName = toolName.replace(`${actionDelimiter}${normalizedDomain}`, '');
const functionSig = functionSignatures.find((sig) => sig.name === functionName);
const requestBuilder = requestBuilders[functionName];
const zodSchema = zodSchemas[functionName];

if (!requestBuilder) {
continue;
}

const tool = await createActionTool({
userId: req.user.id,
res,
action,
streamId,
zodSchema,
encrypted,
requestBuilder,
name: toolName,
description: functionSig?.description ?? '',
});

if (!tool) {
logger.warn(`[Actions] Failed to create action tool: ${toolName}`);
continue;
}

loadedActionTools.push(tool);
}

return loadedActionTools;
}
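The loop above resolves each action tool name back to its domain and bare function name. A simplified sketch of that split, using a hypothetical `'---'` stand-in for the real `actionDelimiter` from librechat-data-provider (the production code also normalizes domain separators before matching):

```javascript
// Hypothetical stand-in for the real delimiter constant.
const ACTION_DELIMITER = '---';

// Splits 'functionName---domain' back into its parts, given the set of
// known (already normalized) domains; returns null when no domain matches,
// which mirrors the warn-and-skip behavior above.
function splitActionToolName(toolName, domains) {
  for (const domain of domains) {
    const suffix = `${ACTION_DELIMITER}${domain}`;
    if (toolName.endsWith(suffix)) {
      return { functionName: toolName.slice(0, -suffix.length), domain };
    }
  }
  return null;
}
```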

module.exports = {
loadTools,
isBuiltInTool,
getToolkitKey,
loadAgentTools,
loadToolsForExecution,
processRequiredActions,
};

@@ -107,22 +107,33 @@ function loadAndFormatTools({ directory, adminFilter = [], adminIncluded = [] })
}, {});
}

/**
* Checks if a schema is a Zod schema by looking for the _def property
* @param {unknown} schema - The schema to check
* @returns {boolean} True if it's a Zod schema
*/
function isZodSchema(schema) {
return schema && typeof schema === 'object' && '_def' in schema;
}

/**
* Formats a `StructuredTool` instance into a format that is compatible
* with OpenAI's ChatCompletionFunctions. It uses the `zodToJsonSchema`
* function to convert the schema of the `StructuredTool` into a JSON
* schema, which is then used as the parameters for the OpenAI function.
* If the schema is already a JSON schema, it is used directly.
*
* @param {StructuredTool} tool - The StructuredTool to format.
* @returns {FunctionTool} The OpenAI Assistant Tool.
*/
function formatToOpenAIAssistantTool(tool) {
const parameters = isZodSchema(tool.schema) ? zodToJsonSchema(tool.schema) : tool.schema;
return {
type: Tools.function,
[Tools.function]: {
name: tool.name,
description: tool.description,
parameters: zodToJsonSchema(tool.schema),
parameters,
},
};
}
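The dual-schema branch added in `formatToOpenAIAssistantTool` can be shown standalone. Zod schemas expose an internal `_def` property, which is what `isZodSchema` keys on; a plain JSON schema object passes through unchanged. The converter is stubbed here so the sketch has no dependency on zod-to-json-schema, and the helper names beyond `isZodSchema` are illustrative:

```javascript
// Same duck-type check as in the diff above: Zod schemas carry `_def`.
function isZodSchema(schema) {
  return schema && typeof schema === 'object' && '_def' in schema;
}

// Zod schemas go through the converter; plain JSON schemas pass through as-is.
// `convert` stands in for zodToJsonSchema (illustrative name).
function toParameters(schema, convert) {
  return isZodSchema(schema) ? convert(schema) : schema;
}
```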
|
||||
|
|
|
|||
|
|
@@ -1,86 +0,0 @@
[deleted file: web-browser.svg — Inkscape SVG icon source (86 lines; black square with a gradient globe path) removed]
Before Width: | Height: | Size: 5 KiB
package-lock.json (generated file; 10 changes)
@@ -59,7 +59,7 @@
         "@google/genai": "^1.19.0",
         "@keyv/redis": "^4.3.3",
         "@langchain/core": "^0.3.80",
-        "@librechat/agents": "^3.1.0",
+        "@librechat/agents": "^3.1.27",
         "@librechat/api": "*",
         "@librechat/data-schemas": "*",
         "@microsoft/microsoft-graph-client": "^3.0.7",
@@ -11709,9 +11709,9 @@
       }
     },
     "node_modules/@librechat/agents": {
-      "version": "3.1.0",
-      "resolved": "https://registry.npmjs.org/@librechat/agents/-/agents-3.1.0.tgz",
-      "integrity": "sha512-cnZTxSdIfBZZslsQlizA4grxVhAgeHeNPZiwIly+E2IfPJDg8oz+uvtQGXOEKRA0PHL1xRzAb6sqPMUTdm/2RA==",
+      "version": "3.1.27",
+      "resolved": "https://registry.npmjs.org/@librechat/agents/-/agents-3.1.27.tgz",
+      "integrity": "sha512-cThf2+OoyjBGf1PoG3H9Au3zm+zFICHF53qHYc6B3/j9mss9NgmGXd30ILRXiXPgsMCfOHqJoqUWidQHFJLiiA==",
       "license": "MIT",
       "dependencies": {
         "@aws-sdk/client-bedrock-runtime": "^3.970.0",
@@ -43020,7 +43020,7 @@
         "@google/genai": "^1.19.0",
         "@keyv/redis": "^4.3.3",
         "@langchain/core": "^0.3.80",
-        "@librechat/agents": "^3.1.0",
+        "@librechat/agents": "^3.1.27",
         "@librechat/data-schemas": "*",
         "@modelcontextprotocol/sdk": "^1.25.3",
         "@smithy/node-http-handler": "^4.4.5",

@@ -87,7 +87,7 @@
       "@google/genai": "^1.19.0",
       "@keyv/redis": "^4.3.3",
       "@langchain/core": "^0.3.80",
-      "@librechat/agents": "^3.1.0",
+      "@librechat/agents": "^3.1.27",
       "@librechat/data-schemas": "*",
       "@modelcontextprotocol/sdk": "^1.25.3",
       "@smithy/node-http-handler": "^4.4.5",
@@ -1,6 +1,7 @@
 import { DynamicStructuredTool } from '@langchain/core/tools';
 import { Constants } from 'librechat-data-provider';
 import type { Agent, TEphemeralAgent } from 'librechat-data-provider';
+import type { LCTool } from '@librechat/agents';
 import type { Logger } from 'winston';
 import type { MCPManager } from '~/mcp/MCPManager';

@@ -11,27 +12,43 @@ import type { MCPManager } from '~/mcp/MCPManager';
 export type AgentWithTools = Pick<Agent, 'id'> &
   Partial<Omit<Agent, 'id' | 'tools'>> & {
     tools?: Array<DynamicStructuredTool | string>;
+    /** Serializable tool definitions for event-driven mode */
+    toolDefinitions?: LCTool[];
   };

 /**
- * Extracts unique MCP server names from an agent's tools.
- * @param agent - The agent with tools
+ * Extracts unique MCP server names from an agent's tools or tool definitions.
+ * Supports both full tool instances (tools) and serializable definitions (toolDefinitions).
+ * @param agent - The agent with tools and/or tool definitions
  * @returns Array of unique MCP server names
  */
 export function extractMCPServers(agent: AgentWithTools): string[] {
-  if (!agent?.tools?.length) {
-    return [];
-  }
   const mcpServers = new Set<string>();
-  for (let i = 0; i < agent.tools.length; i++) {
-    const tool = agent.tools[i];
-    if (tool instanceof DynamicStructuredTool && tool.name.includes(Constants.mcp_delimiter)) {
-      const serverName = tool.name.split(Constants.mcp_delimiter).pop();
-      if (serverName) {
-        mcpServers.add(serverName);
+
+  /** Check tool instances (non-event-driven mode) */
+  if (agent?.tools?.length) {
+    for (const tool of agent.tools) {
+      if (tool instanceof DynamicStructuredTool && tool.name.includes(Constants.mcp_delimiter)) {
+        const serverName = tool.name.split(Constants.mcp_delimiter).pop();
+        if (serverName) {
+          mcpServers.add(serverName);
+        }
       }
     }
   }
+
+  /** Check tool definitions (event-driven mode) */
+  if (agent?.toolDefinitions?.length) {
+    for (const toolDef of agent.toolDefinitions) {
+      if (toolDef.name?.includes(Constants.mcp_delimiter)) {
+        const serverName = toolDef.name.split(Constants.mcp_delimiter).pop();
+        if (serverName) {
+          mcpServers.add(serverName);
+        }
+      }
+    }
+  }
+
   return Array.from(mcpServers);
 }
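The delimiter-based extraction can be exercised standalone. The `_mcp_` delimiter below is an assumption about the value of `Constants.mcp_delimiter` (consistent with tool names like `list_commits_mcp_github` seen in this PR's tests), so treat this as an illustration rather than the library constant:

```typescript
// Standalone sketch of the server-name extraction; '_mcp_' stands in for
// Constants.mcp_delimiter (an assumption based on tool names in this PR).
const MCP_DELIMITER = '_mcp_';

function extractServers(toolNames: string[]): string[] {
  const servers = new Set<string>();
  for (const name of toolNames) {
    if (name.includes(MCP_DELIMITER)) {
      // Everything after the last delimiter is treated as the server name.
      const server = name.split(MCP_DELIMITER).pop();
      if (server) {
        servers.add(server);
      }
    }
  }
  return Array.from(servers);
}

const servers = extractServers(['list_commits_mcp_github', 'create_issue_mcp_github', 'calculator']);
console.log(servers);
```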
packages/api/src/agents/handlers.ts — new file (168 lines)
@@ -0,0 +1,168 @@
import { logger } from '@librechat/data-schemas';
import { GraphEvents, Constants } from '@librechat/agents';
import type {
  LCTool,
  EventHandler,
  LCToolRegistry,
  ToolCallRequest,
  ToolExecuteResult,
  ToolExecuteBatchRequest,
} from '@librechat/agents';
import type { StructuredToolInterface } from '@langchain/core/tools';

export interface ToolEndCallbackData {
  output: {
    name: string;
    tool_call_id: string;
    content: string | unknown;
    artifact?: unknown;
  };
}

export interface ToolEndCallbackMetadata {
  run_id?: string;
  thread_id?: string;
  [key: string]: unknown;
}

export type ToolEndCallback = (
  data: ToolEndCallbackData,
  metadata: ToolEndCallbackMetadata,
) => Promise<void>;

export interface ToolExecuteOptions {
  /** Loads tools by name, using agentId to look up agent-specific context */
  loadTools: (
    toolNames: string[],
    agentId?: string,
  ) => Promise<{
    loadedTools: StructuredToolInterface[];
    /** Additional configurable properties to merge (e.g., userMCPAuthMap) */
    configurable?: Record<string, unknown>;
  }>;
  /** Callback to process tool artifacts (code output files, file citations, etc.) */
  toolEndCallback?: ToolEndCallback;
}

/**
 * Creates the ON_TOOL_EXECUTE handler for event-driven tool execution.
 * This handler receives batched tool calls, loads the required tools,
 * executes them in parallel, and resolves with the results.
 */
export function createToolExecuteHandler(options: ToolExecuteOptions): EventHandler {
  const { loadTools, toolEndCallback } = options;

  return {
    handle: async (_event: string, data: ToolExecuteBatchRequest) => {
      const { toolCalls, agentId, configurable, metadata, resolve, reject } = data;

      try {
        const toolNames = [...new Set(toolCalls.map((tc: ToolCallRequest) => tc.name))];
        const { loadedTools, configurable: toolConfigurable } = await loadTools(toolNames, agentId);
        const toolMap = new Map(loadedTools.map((t) => [t.name, t]));
        const mergedConfigurable = { ...configurable, ...toolConfigurable };

        const results: ToolExecuteResult[] = await Promise.all(
          toolCalls.map(async (tc: ToolCallRequest) => {
            const tool = toolMap.get(tc.name);

            if (!tool) {
              logger.warn(
                `[ON_TOOL_EXECUTE] Tool "${tc.name}" not found. Available: ${[...toolMap.keys()].join(', ')}`,
              );
              return {
                toolCallId: tc.id,
                status: 'error' as const,
                content: '',
                errorMessage: `Tool ${tc.name} not found`,
              };
            }

            try {
              const toolCallConfig: Record<string, unknown> = {
                id: tc.id,
                stepId: tc.stepId,
                turn: tc.turn,
              };

              if (tc.name === Constants.PROGRAMMATIC_TOOL_CALLING) {
                const toolRegistry = mergedConfigurable?.toolRegistry as LCToolRegistry | undefined;
                const ptcToolMap = mergedConfigurable?.ptcToolMap as
                  | Map<string, StructuredToolInterface>
                  | undefined;
                if (toolRegistry) {
                  const toolDefs: LCTool[] = Array.from(toolRegistry.values()).filter(
                    (t) =>
                      t.name !== Constants.PROGRAMMATIC_TOOL_CALLING &&
                      t.name !== Constants.TOOL_SEARCH,
                  );
                  toolCallConfig.toolDefs = toolDefs;
                  toolCallConfig.toolMap = ptcToolMap ?? toolMap;
                }
              }

              const result = await tool.invoke(tc.args, {
                toolCall: toolCallConfig,
                configurable: mergedConfigurable,
                metadata,
              } as Record<string, unknown>);

              if (toolEndCallback) {
                await toolEndCallback(
                  {
                    output: {
                      name: tc.name,
                      tool_call_id: tc.id,
                      content: result.content,
                      artifact: result.artifact,
                    },
                  },
                  {
                    run_id: (metadata as Record<string, unknown>)?.run_id as string | undefined,
                    thread_id: (metadata as Record<string, unknown>)?.thread_id as
                      | string
                      | undefined,
                    ...metadata,
                  },
                );
              }

              return {
                toolCallId: tc.id,
                content: result.content,
                artifact: result.artifact,
                status: 'success' as const,
              };
            } catch (toolError) {
              const error = toolError as Error;
              logger.error(`[ON_TOOL_EXECUTE] Tool ${tc.name} error:`, error);
              return {
                toolCallId: tc.id,
                status: 'error' as const,
                content: '',
                errorMessage: error.message,
              };
            }
          }),
        );

        resolve(results);
      } catch (error) {
        logger.error('[ON_TOOL_EXECUTE] Fatal error:', error);
        reject(error as Error);
      }
    },
  };
}

/**
 * Creates a handlers object that includes ON_TOOL_EXECUTE.
 * Can be merged with other handler objects.
 */
export function createToolExecuteHandlers(
  options: ToolExecuteOptions,
): Record<string, EventHandler> {
  return {
    [GraphEvents.ON_TOOL_EXECUTE]: createToolExecuteHandler(options),
  };
}
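The handler's core loop — dedupe the requested tool names, look each call up in a map, run everything through `Promise.all`, and report per-call success or error — can be sketched with stub tools. Names here are illustrative, not the `@librechat/agents` API:

```typescript
// Self-contained sketch of the ON_TOOL_EXECUTE batching logic: stub tools,
// per-call error isolation, parallel execution via Promise.all.
type Call = { id: string; name: string; args: Record<string, unknown> };
type Result = { toolCallId: string; status: 'success' | 'error'; content: string };

const stubTools = new Map<string, (args: Record<string, unknown>) => Promise<string>>([
  ['echo', async (args) => String(args.text)],
]);

async function executeBatch(calls: Call[]): Promise<Result[]> {
  // Dedupe names once (the real handler loads tools by unique name first).
  const uniqueNames = [...new Set(calls.map((c) => c.name))];
  void uniqueNames; // a real loader would lazily fetch these tools

  return Promise.all(
    calls.map(async (c): Promise<Result> => {
      const tool = stubTools.get(c.name);
      if (!tool) {
        // A missing tool fails only its own call, not the whole batch.
        return { toolCallId: c.id, status: 'error', content: '' };
      }
      try {
        return { toolCallId: c.id, status: 'success', content: await tool(c.args) };
      } catch {
        return { toolCallId: c.id, status: 'error', content: '' };
      }
    }),
  );
}

executeBatch([
  { id: '1', name: 'echo', args: { text: 'hi' } },
  { id: '2', name: 'missing', args: {} },
]).then((results) => console.log(JSON.stringify(results)));
```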
@@ -2,6 +2,7 @@ export * from './avatars';
 export * from './chain';
 export * from './context';
 export * from './edges';
+export * from './handlers';
 export * from './initialize';
 export * from './legacy';
 export * from './memory';
@@ -10,4 +11,5 @@ export * from './openai';
 export * from './resources';
 export * from './responses';
 export * from './run';
+export * from './tools';
 export * from './validation';
@@ -17,7 +17,7 @@ import type {
   Agent,
   TUser,
 } from 'librechat-data-provider';
-import type { GenericTool, LCToolRegistry, ToolMap } from '@librechat/agents';
+import type { GenericTool, LCToolRegistry, ToolMap, LCTool } from '@librechat/agents';
 import type { Response as ServerResponse } from 'express';
 import type { IMongoFile } from '@librechat/data-schemas';
 import type { InitializeResultBase, ServerRequest, EndpointDbMethods } from '~/types';
@@ -47,6 +47,8 @@ export type InitializedAgent = Agent & {
   toolMap?: ToolMap;
   /** Tool registry for PTC and tool search (only present when MCP tools with env classification exist) */
   toolRegistry?: LCToolRegistry;
+  /** Serializable tool definitions for event-driven execution */
+  toolDefinitions?: LCTool[];
+  /** Precomputed flag indicating if any tools have defer_loading enabled (for efficient runtime checks) */
+  hasDeferredTools?: boolean;
 };
@@ -79,10 +81,13 @@ export interface InitializeAgentParams {
     tool_options: AgentToolOptions | undefined;
     tool_resources: AgentToolResources | undefined;
   }) => Promise<{
-    tools: GenericTool[];
-    toolContextMap: Record<string, unknown>;
+    /** Full tool instances (only present when definitionsOnly=false) */
+    tools?: GenericTool[];
+    toolContextMap?: Record<string, unknown>;
     userMCPAuthMap?: Record<string, Record<string, string>>;
     toolRegistry?: LCToolRegistry;
+    /** Serializable tool definitions for event-driven mode */
+    toolDefinitions?: LCTool[];
+    hasDeferredTools?: boolean;
   } | null>;
   /** Endpoint option (contains model_parameters and endpoint info) */
@@ -272,11 +277,12 @@ export async function initializeAgent(
   });

   const {
-    tools: structuredTools,
-    toolRegistry,
     toolContextMap,
     userMCPAuthMap,
+    toolRegistry,
+    toolDefinitions,
+    hasDeferredTools,
+    tools: structuredTools,
   } = (await loadTools?.({
     req,
     res,
@@ -291,6 +297,7 @@ export async function initializeAgent(
     toolContextMap: {},
     userMCPAuthMap: undefined,
     toolRegistry: undefined,
+    toolDefinitions: [],
+    hasDeferredTools: false,
   };

@@ -343,13 +350,17 @@ export async function initializeAgent(
     agent.provider = options.provider;
   }

+  /** Check for tool presence from either full instances or definitions (event-driven mode) */
+  const hasAgentTools = (structuredTools?.length ?? 0) > 0 || (toolDefinitions?.length ?? 0) > 0;
+
   let tools: GenericTool[] = options.tools?.length
     ? (options.tools as GenericTool[])
-    : structuredTools;
+    : (structuredTools ?? []);

   if (
     (agent.provider === Providers.GOOGLE || agent.provider === Providers.VERTEXAI) &&
     options.tools?.length &&
-    structuredTools?.length
+    hasAgentTools
   ) {
     throw new Error(`{ "type": "${ErrorTypes.GOOGLE_TOOL_CONFLICT}"}`);
   } else if (
@@ -396,6 +407,7 @@ export async function initializeAgent(
     resendFiles,
     userMCPAuthMap,
     toolRegistry,
+    toolDefinitions,
+    hasDeferredTools,
     toolContextMap: toolContextMap ?? {},
     useLegacyContent: !!options.useLegacyContent,
@@ -12,6 +12,8 @@ import type {
   CompletionUsage,
   ToolCall,
 } from './types';
+import type { ToolExecuteOptions } from '~/agents/handlers';
+import { createToolExecuteHandler } from '~/agents/handlers';

 /**
  * Create a chat completion chunk in OpenAI format
@@ -167,6 +169,7 @@ export const GraphEvents = {
   ON_RUN_STEP_COMPLETED: 'on_run_step_completed',
   ON_MESSAGE_DELTA: 'on_message_delta',
   ON_REASONING_DELTA: 'on_reasoning_delta',
+  ON_TOOL_EXECUTE: 'on_tool_execute',
 } as const;

 /**
@@ -404,8 +407,9 @@ export class OpenAIReasoningDeltaHandler implements EventHandler {
  */
 export function createOpenAIHandlers(
   config: OpenAIStreamHandlerConfig,
+  toolExecuteOptions?: ToolExecuteOptions,
 ): Record<string, EventHandler> {
-  return {
+  const handlers: Record<string, EventHandler> = {
     [GraphEvents.ON_MESSAGE_DELTA]: new OpenAIMessageDeltaHandler(config),
     [GraphEvents.ON_RUN_STEP_DELTA]: new OpenAIRunStepDeltaHandler(config),
     [GraphEvents.ON_RUN_STEP]: new OpenAIRunStepHandler(config),
@@ -415,6 +419,12 @@ export function createOpenAIHandlers(
     [GraphEvents.TOOL_END]: new OpenAIToolEndHandler(),
     [GraphEvents.ON_REASONING_DELTA]: new OpenAIReasoningDeltaHandler(config),
   };

+  if (toolExecuteOptions) {
+    handlers[GraphEvents.ON_TOOL_EXECUTE] = createToolExecuteHandler(toolExecuteOptions);
+  }
+
+  return handlers;
 }

 /**
@@ -38,6 +38,7 @@ import {
   createChunk,
   writeSSE,
 } from './handlers';
+import type { ToolExecuteOptions } from '../handlers';

 /**
  * Dependencies for the chat completion service
@@ -67,6 +68,8 @@ export interface ChatCompletionDependencies {
   createRun?: CreateRunFn;
   /** App config */
   appConfig?: AppConfig;
+  /** Tool execute options for event-driven tool execution */
+  toolExecuteOptions?: ToolExecuteOptions;
 }

 /**
@@ -438,7 +441,10 @@ export async function createAgentChatCompletion(
     : null;

   // Create event handlers
-  const eventHandlers = isStreaming && handlerConfig ? createOpenAIHandlers(handlerConfig) : {};
+  const eventHandlers =
+    isStreaming && handlerConfig
+      ? createOpenAIHandlers(handlerConfig, deps.toolExecuteOptions)
+      : {};

   // Convert messages to internal format
   const messages = convertMessages(request.messages);
packages/api/src/agents/run.spec.ts — new file (133 lines)
@@ -0,0 +1,133 @@
import { ToolMessage, AIMessage, HumanMessage } from '@langchain/core/messages';
import { extractDiscoveredToolsFromHistory } from './run';

describe('extractDiscoveredToolsFromHistory', () => {
  it('extracts tool names from tool_search JSON output', () => {
    const toolSearchOutput = JSON.stringify({
      found: 3,
      tools: [
        { name: 'tool_a', score: 1.0 },
        { name: 'tool_b', score: 0.8 },
        { name: 'tool_c', score: 0.5 },
      ],
    });

    const messages = [
      new HumanMessage('Find tools'),
      new AIMessage({ content: '', tool_calls: [{ id: 'call_1', name: 'tool_search', args: {} }] }),
      new ToolMessage({ content: toolSearchOutput, tool_call_id: 'call_1', name: 'tool_search' }),
    ];

    const discovered = extractDiscoveredToolsFromHistory(messages);

    expect(discovered.size).toBe(3);
    expect(discovered.has('tool_a')).toBe(true);
    expect(discovered.has('tool_b')).toBe(true);
    expect(discovered.has('tool_c')).toBe(true);
  });

  it('extracts tool names from legacy tool_search format', () => {
    const legacyOutput = `Found 2 tools:
- tool_x (score: 0.95)
- tool_y (score: 0.80)`;

    const messages = [
      new ToolMessage({ content: legacyOutput, tool_call_id: 'call_1', name: 'tool_search' }),
    ];

    const discovered = extractDiscoveredToolsFromHistory(messages);

    expect(discovered.size).toBe(2);
    expect(discovered.has('tool_x')).toBe(true);
    expect(discovered.has('tool_y')).toBe(true);
  });

  it('returns empty set when no tool_search messages exist', () => {
    const messages = [new HumanMessage('Hello'), new AIMessage('Hi there!')];

    const discovered = extractDiscoveredToolsFromHistory(messages);

    expect(discovered.size).toBe(0);
  });

  it('ignores non-tool_search ToolMessages', () => {
    const messages = [
      new ToolMessage({
        content: '[{"sha": "abc123"}]',
        tool_call_id: 'call_1',
        name: 'list_commits_mcp_github',
      }),
    ];

    const discovered = extractDiscoveredToolsFromHistory(messages);

    expect(discovered.size).toBe(0);
  });

  it('handles multiple tool_search calls in history', () => {
    const firstOutput = JSON.stringify({
      tools: [{ name: 'tool_1' }, { name: 'tool_2' }],
    });
    const secondOutput = JSON.stringify({
      tools: [{ name: 'tool_2' }, { name: 'tool_3' }],
    });

    const messages = [
      new ToolMessage({ content: firstOutput, tool_call_id: 'call_1', name: 'tool_search' }),
      new AIMessage('Using discovered tools'),
      new ToolMessage({ content: secondOutput, tool_call_id: 'call_2', name: 'tool_search' }),
    ];

    const discovered = extractDiscoveredToolsFromHistory(messages);

    expect(discovered.size).toBe(3);
    expect(discovered.has('tool_1')).toBe(true);
    expect(discovered.has('tool_2')).toBe(true);
    expect(discovered.has('tool_3')).toBe(true);
  });

  it('handles malformed JSON in tool_search output', () => {
    const messages = [
      new ToolMessage({
        content: 'This is not valid JSON',
        tool_call_id: 'call_1',
        name: 'tool_search',
      }),
    ];

    const discovered = extractDiscoveredToolsFromHistory(messages);

    // Should not throw, just return empty set
    expect(discovered.size).toBe(0);
  });

  it('handles tool_search output with empty tools array', () => {
    const output = JSON.stringify({
      found: 0,
      tools: [],
    });

    const messages = [
      new ToolMessage({ content: output, tool_call_id: 'call_1', name: 'tool_search' }),
    ];

    const discovered = extractDiscoveredToolsFromHistory(messages);

    expect(discovered.size).toBe(0);
  });

  it('handles non-string content in ToolMessage', () => {
    const messages = [
      new ToolMessage({
        content: [{ type: 'text', text: 'array content' }],
        tool_call_id: 'call_1',
        name: 'tool_search',
      }),
    ];

    const discovered = extractDiscoveredToolsFromHistory(messages);

    // Should handle gracefully
    expect(discovered.size).toBe(0);
  });
});
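The behavior pinned down by these specs can be satisfied by a small parser. The sketch below is inferred from the tests (plain objects in place of LangChain `ToolMessage`s, and `extractDiscovered` is a hypothetical helper name), not the actual `./run` implementation:

```typescript
// Sketch inferred from the specs above: collect tool names from tool_search
// outputs, accepting both the JSON format and the legacy "- name (score: x)" format.
interface HistoryMessage {
  name?: string;
  content: unknown;
}

function extractDiscovered(messages: HistoryMessage[]): Set<string> {
  const discovered = new Set<string>();
  for (const msg of messages) {
    if (msg.name !== 'tool_search' || typeof msg.content !== 'string') {
      continue; // non-tool_search and non-string content are ignored
    }
    try {
      const parsed = JSON.parse(msg.content) as { tools?: Array<{ name?: string }> };
      for (const t of parsed.tools ?? []) {
        if (t.name) discovered.add(t.name);
      }
    } catch {
      // Legacy plain-text format: lines like "- tool_x (score: 0.95)"
      const re = /^- (\S+) \(score:/gm;
      let m: RegExpExecArray | null;
      while ((m = re.exec(msg.content)) !== null) {
        discovered.add(m[1]);
      }
    }
  }
  return discovered;
}

const found = extractDiscovered([
  { name: 'tool_search', content: JSON.stringify({ tools: [{ name: 'tool_a' }, { name: 'tool_b' }] }) },
  { name: 'other_tool', content: 'ignored' },
]);
console.log([...found]);
```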
@@ -10,6 +10,7 @@ import type {
   GenericTool,
   RunConfig,
   IState,
+  LCTool,
 } from '@librechat/agents';
 import type { IUser } from '@librechat/data-schemas';
 import type { Agent } from 'librechat-data-provider';
@@ -166,6 +167,8 @@ type RunAgent = Omit<Agent, 'tools'> & {
   useLegacyContent?: boolean;
   toolContextMap?: Record<string, string>;
   toolRegistry?: LCToolRegistry;
+  /** Serializable tool definitions for event-driven execution */
+  toolDefinitions?: LCTool[];
+  /** Precomputed flag indicating if any tools have defer_loading enabled */
+  hasDeferredTools?: boolean;
 };
@@ -279,23 +282,39 @@ export async function createRun({
   /**
    * Override defer_loading for tools that were discovered in previous turns.
    * This prevents the LLM from having to re-discover tools via tool_search.
+   * Also add the discovered tools' definitions so the LLM has their schemas.
    */
+  let toolDefinitions = agent.toolDefinitions ?? [];
   if (discoveredTools.size > 0 && agent.toolRegistry) {
     overrideDeferLoadingForDiscoveredTools(agent.toolRegistry, discoveredTools);
+
+    /** Add discovered tools' definitions so the LLM can see their schemas */
+    const existingToolNames = new Set(toolDefinitions.map((d) => d.name));
+    for (const toolName of discoveredTools) {
+      if (existingToolNames.has(toolName)) {
+        continue;
+      }
+      const toolDef = agent.toolRegistry.get(toolName);
+      if (toolDef) {
+        toolDefinitions = [...toolDefinitions, toolDef];
+      }
+    }
   }

   const reasoningKey = getReasoningKey(provider, llmConfig, agent.endpoint);
   const agentInput: AgentInputs = {
     provider,
     reasoningKey,
+    toolDefinitions,
+    agentId: agent.id,
+    name: agent.name ?? undefined,
     tools: agent.tools,
     clientOptions: llmConfig,
     instructions: systemContent,
-    name: agent.name ?? undefined,
     toolRegistry: agent.toolRegistry,
     maxContextTokens: agent.maxContextTokens,
     useLegacyContent: agent.useLegacyContent ?? false,
+    discoveredTools: discoveredTools.size > 0 ? Array.from(discoveredTools) : undefined,
   };
   agentInputs.push(agentInput);
 };
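The merge step above — append definitions for newly discovered tools while skipping names already present — reduces to a set-based dedup. This standalone sketch uses plain objects in place of the agent's `LCToolRegistry`:

```typescript
// Standalone sketch of the discovered-tools merge: definitions already in
// toolDefinitions are skipped; newly discovered ones are appended from the registry.
interface ToolDef {
  name: string;
  description?: string;
}

function mergeDiscovered(
  toolDefinitions: ToolDef[],
  discovered: Set<string>,
  registry: Map<string, ToolDef>,
): ToolDef[] {
  let merged = toolDefinitions;
  const existing = new Set(toolDefinitions.map((d) => d.name));
  for (const toolName of discovered) {
    if (existing.has(toolName)) {
      continue; // already visible to the LLM
    }
    const def = registry.get(toolName);
    if (def) {
      merged = [...merged, def];
    }
  }
  return merged;
}

const registry = new Map<string, ToolDef>([
  ['tool_a', { name: 'tool_a' }],
  ['tool_b', { name: 'tool_b' }],
]);
const merged = mergeDiscovered([{ name: 'tool_a' }], new Set(['tool_a', 'tool_b']), registry);
console.log(merged.map((d) => d.name));
```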
packages/api/src/agents/tools.spec.ts — new file (126 lines)
@@ -0,0 +1,126 @@
import { buildToolSet, BuildToolSetConfig } from './tools';

describe('buildToolSet', () => {
  describe('event-driven mode (toolDefinitions)', () => {
    it('builds toolSet from toolDefinitions when available', () => {
      const agentConfig: BuildToolSetConfig = {
        toolDefinitions: [
          { name: 'tool_search', description: 'Search for tools' },
          { name: 'list_commits_mcp_github', description: 'List commits' },
          { name: 'calculator', description: 'Calculate' },
        ],
        tools: [],
      };

      const toolSet = buildToolSet(agentConfig);

      expect(toolSet.size).toBe(3);
      expect(toolSet.has('tool_search')).toBe(true);
      expect(toolSet.has('list_commits_mcp_github')).toBe(true);
      expect(toolSet.has('calculator')).toBe(true);
    });

    it('includes tool_search in toolSet for deferred tools workflow', () => {
      const agentConfig: BuildToolSetConfig = {
        toolDefinitions: [
          { name: 'tool_search', description: 'Search for deferred tools' },
          { name: 'deferred_tool_1', description: 'A deferred tool', defer_loading: true },
          { name: 'deferred_tool_2', description: 'Another deferred tool', defer_loading: true },
        ],
      };

      const toolSet = buildToolSet(agentConfig);

      expect(toolSet.has('tool_search')).toBe(true);
      expect(toolSet.has('deferred_tool_1')).toBe(true);
      expect(toolSet.has('deferred_tool_2')).toBe(true);
    });

    it('prefers toolDefinitions over tools when both are present', () => {
      const agentConfig: BuildToolSetConfig = {
        toolDefinitions: [{ name: 'from_definitions' }],
        tools: [{ name: 'from_tools' }],
      };

      const toolSet = buildToolSet(agentConfig);

      expect(toolSet.size).toBe(1);
      expect(toolSet.has('from_definitions')).toBe(true);
      expect(toolSet.has('from_tools')).toBe(false);
    });
  });

  describe('legacy mode (tools)', () => {
    it('falls back to tools when toolDefinitions is empty', () => {
      const agentConfig: BuildToolSetConfig = {
        toolDefinitions: [],
        tools: [{ name: 'web_search' }, { name: 'calculator' }],
      };

      const toolSet = buildToolSet(agentConfig);

      expect(toolSet.size).toBe(2);
      expect(toolSet.has('web_search')).toBe(true);
      expect(toolSet.has('calculator')).toBe(true);
    });

    it('falls back to tools when toolDefinitions is undefined', () => {
      const agentConfig: BuildToolSetConfig = {
        tools: [{ name: 'tool_a' }, { name: 'tool_b' }],
      };

      const toolSet = buildToolSet(agentConfig);

      expect(toolSet.size).toBe(2);
      expect(toolSet.has('tool_a')).toBe(true);
      expect(toolSet.has('tool_b')).toBe(true);
    });
  });

  describe('edge cases', () => {
    it('returns empty set when agentConfig is null', () => {
      const toolSet = buildToolSet(null);
      expect(toolSet.size).toBe(0);
    });

    it('returns empty set when agentConfig is undefined', () => {
      const toolSet = buildToolSet(undefined);
      expect(toolSet.size).toBe(0);
    });

    it('returns empty set when both toolDefinitions and tools are empty', () => {
      const agentConfig: BuildToolSetConfig = {
        toolDefinitions: [],
        tools: [],
      };

      const toolSet = buildToolSet(agentConfig);
      expect(toolSet.size).toBe(0);
    });

    it('filters out null/undefined tool entries', () => {
      const agentConfig: BuildToolSetConfig = {
        tools: [{ name: 'valid_tool' }, null, undefined, { name: 'another_valid' }],
      };

      const toolSet = buildToolSet(agentConfig);

      expect(toolSet.size).toBe(2);
      expect(toolSet.has('valid_tool')).toBe(true);
      expect(toolSet.has('another_valid')).toBe(true);
    });

    it('filters out empty string tool names', () => {
      const agentConfig: BuildToolSetConfig = {
        toolDefinitions: [{ name: 'valid' }, { name: '' }, { name: 'also_valid' }],
      };

      const toolSet = buildToolSet(agentConfig);

      expect(toolSet.size).toBe(2);
      expect(toolSet.has('valid')).toBe(true);
      expect(toolSet.has('also_valid')).toBe(true);
      expect(toolSet.has('')).toBe(false);
    });
  });
});
packages/api/src/agents/tools.ts (Normal file, 39 lines)
@@ -0,0 +1,39 @@
interface ToolDefLike {
  name: string;
  [key: string]: unknown;
}

interface ToolInstanceLike {
  name: string;
  [key: string]: unknown;
}

export interface BuildToolSetConfig {
  toolDefinitions?: ToolDefLike[];
  tools?: (ToolInstanceLike | null | undefined)[];
}

/**
 * Builds a Set of tool names for use with formatAgentMessages.
 *
 * In event-driven mode, tools are defined via toolDefinitions (which includes
 * deferred tools like tool_search). In legacy mode, tools come from loaded
 * tool instances.
 *
 * This ensures tool_search and other deferred tools are included in the toolSet,
 * allowing their ToolMessages to be preserved in conversation history.
 */
export function buildToolSet(agentConfig: BuildToolSetConfig | null | undefined): Set<string> {
  if (!agentConfig) {
    return new Set();
  }

  const { toolDefinitions, tools } = agentConfig;

  const toolNames =
    toolDefinitions && toolDefinitions.length > 0
      ? toolDefinitions.map((def) => def.name)
      : (tools ?? []).map((tool) => tool?.name);

  return new Set(toolNames.filter((name): name is string => Boolean(name)));
}
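As a quick illustration of the precedence rule documented in the new file above, here is a small standalone sketch reproducing `buildToolSet` together with both modes (the config literals are made-up examples, not fixtures from the PR):

```typescript
// Reproduction of buildToolSet from the diff above, with a usage sketch
// showing the precedence rule: toolDefinitions wins over tools when non-empty.
interface BuildToolSetConfig {
  toolDefinitions?: { name: string }[];
  tools?: ({ name: string } | null | undefined)[];
}

function buildToolSet(agentConfig: BuildToolSetConfig | null | undefined): Set<string> {
  if (!agentConfig) {
    return new Set();
  }
  const { toolDefinitions, tools } = agentConfig;
  const toolNames =
    toolDefinitions && toolDefinitions.length > 0
      ? toolDefinitions.map((def) => def.name)
      : (tools ?? []).map((tool) => tool?.name);
  return new Set(toolNames.filter((name): name is string => Boolean(name)));
}

// Event-driven mode: definitions (including deferred tools) take precedence.
const eventDriven = buildToolSet({
  toolDefinitions: [{ name: 'web_search' }, { name: 'tool_search' }],
  tools: [{ name: 'calculator' }], // ignored while definitions are non-empty
});

// Legacy mode: empty definitions fall back to loaded tool instances.
const legacy = buildToolSet({ toolDefinitions: [], tools: [{ name: 'calculator' }, null] });
```

Note that an empty `toolDefinitions` array triggers the legacy fallback, which is exactly what the `definitionsOnly`-era tests below rely on.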
@@ -187,6 +187,32 @@ describe('convertJsonSchemaToZod', () => {
    expect(() => zodSchema?.parse('invalid')).toThrow();
  });

  it('should accept mixed-type enum schema values', () => {
    const schema = {
      enum: ['active', 'inactive', 0, 1, true, false, null],
    };
    const zodSchema = convertWithResolvedRefs(schema as JsonSchemaType);

    expect(zodSchema?.parse('active')).toBe('active');
    expect(zodSchema?.parse(0)).toBe(0);
    expect(zodSchema?.parse(1)).toBe(1);
    expect(zodSchema?.parse(true)).toBe(true);
    expect(zodSchema?.parse(false)).toBe(false);
    expect(zodSchema?.parse(null)).toBe(null);
  });

  it('should accept number enum schema values', () => {
    const schema = {
      type: 'number' as const,
      enum: [1, 2, 3, 5, 8, 13],
    };
    const zodSchema = convertWithResolvedRefs(schema as JsonSchemaType);

    expect(zodSchema?.parse(1)).toBe(1);
    expect(zodSchema?.parse(13)).toBe(13);
    expect(zodSchema?.parse(5)).toBe(5);
  });

  it('should convert number schema', () => {
    const schema: JsonSchemaType = {
      type: 'number',
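The mixed-type enum behavior exercised above can be summarized with a small standalone sketch (`parseEnum` is a hypothetical helper for illustration, not the library implementation): any `string | number | boolean | null` member of the `enum` array round-trips, anything else is rejected.

```typescript
// Hypothetical mixed-type enum validator mirroring the semantics tested above.
type EnumMember = string | number | boolean | null;
type JsonEnumSchema = { enum: EnumMember[] };

function parseEnum(schema: JsonEnumSchema, value: EnumMember): EnumMember {
  // JSON Schema `enum` is a plain membership check; types may be mixed.
  if (!schema.enum.includes(value)) {
    throw new Error(`Value ${String(value)} is not a member of the enum`);
  }
  return value;
}

const mixed: JsonEnumSchema = { enum: ['active', 'inactive', 0, 1, true, false, null] };
```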
@@ -248,6 +248,15 @@ export function resolveJsonSchemaRefs<T extends Record<string, unknown>>(
  return result as T;
}

/**
 * Converts a JSON Schema to a Zod schema.
 *
 * @deprecated This function is deprecated in favor of using JSON schemas directly.
 * LangChain.js now supports JSON schemas natively, eliminating the need for Zod conversion.
 * Use `resolveJsonSchemaRefs` to handle $ref references and pass the JSON schema directly to tools.
 *
 * @see https://js.langchain.com/docs/how_to/custom_tools/
 */
export function convertJsonSchemaToZod(
  schema: JsonSchemaType & Record<string, unknown>,
  options: ConvertJsonSchemaToZodOptions = {},

@@ -474,8 +483,13 @@ export function convertJsonSchemaToZod(
 }
 
 /**
- * Helper function for tests that automatically resolves refs before converting to Zod
- * This ensures all tests use resolveJsonSchemaRefs even when not explicitly testing it
+ * Helper function that resolves refs before converting to Zod.
+ *
+ * @deprecated This function is deprecated in favor of using JSON schemas directly.
+ * LangChain.js now supports JSON schemas natively, eliminating the need for Zod conversion.
+ * Use `resolveJsonSchemaRefs` to handle $ref references and pass the JSON schema directly to tools.
+ *
+ * @see https://js.langchain.com/docs/how_to/custom_tools/
  */
 export function convertWithResolvedRefs(
   schema: JsonSchemaType & Record<string, unknown>,
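The deprecation notes above point at `resolveJsonSchemaRefs` as the replacement path. As a rough illustration of what `$ref` resolution means, here is a hypothetical minimal resolver for local `#/definitions/...` pointers (an assumption-laden sketch; the real implementation handles more cases such as arrays of refs and cycles):

```typescript
// Hypothetical minimal $ref resolver for local '#/...' JSON pointers,
// illustrating what resolveJsonSchemaRefs is for. Not the library code.
type Schema = Record<string, unknown>;

function resolveLocalRefs(schema: Schema, root: Schema = schema): Schema {
  const out: Schema = {};
  for (const [key, value] of Object.entries(schema)) {
    if (key === '$ref' && typeof value === 'string' && value.startsWith('#/')) {
      // Walk the pointer segments from the document root and inline the target.
      const target = value
        .slice(2)
        .split('/')
        .reduce<unknown>((node, seg) => (node as Schema)?.[seg], root);
      Object.assign(out, resolveLocalRefs(target as Schema, root));
    } else if (value && typeof value === 'object' && !Array.isArray(value)) {
      out[key] = resolveLocalRefs(value as Schema, root);
    } else {
      out[key] = value;
    }
  }
  return out;
}

const resolved = resolveLocalRefs({
  type: 'object',
  properties: { location: { $ref: '#/definitions/Location' } },
  definitions: { Location: { type: 'string' } },
});
```

After resolution the schema contains no `$ref` pointers, so it can be passed to a tool definition as-is.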
@@ -450,4 +450,142 @@ describe('classification.ts', () => {
      expect(result.additionalTools.length).toBe(0);
    });
  });

  describe('buildToolClassification with definitionsOnly', () => {
    const mockLoadAuthValues = jest.fn().mockResolvedValue({ CODE_API_KEY: 'test-key' });

    const createMCPTool = (name: string, description?: string) =>
      ({
        name,
        description,
        mcp: true,
        mcpJsonSchema: { type: 'object', properties: {} },
      }) as unknown as GenericTool;

    beforeEach(() => {
      jest.clearAllMocks();
    });

    it('should NOT create tool instances when definitionsOnly=true', async () => {
      const loadedTools: GenericTool[] = [createMCPTool('tool1')];

      const agentToolOptions: AgentToolOptions = {
        tool1: { defer_loading: true },
      };

      const result = await buildToolClassification({
        loadedTools,
        userId: 'user1',
        agentId: 'agent1',
        agentToolOptions,
        deferredToolsEnabled: true,
        definitionsOnly: true,
        loadAuthValues: mockLoadAuthValues,
      });

      expect(result.additionalTools.length).toBe(0);
    });

    it('should still add tool_search definition when definitionsOnly=true and has deferred tools', async () => {
      const loadedTools: GenericTool[] = [createMCPTool('tool1')];

      const agentToolOptions: AgentToolOptions = {
        tool1: { defer_loading: true },
      };

      const result = await buildToolClassification({
        loadedTools,
        userId: 'user1',
        agentId: 'agent1',
        agentToolOptions,
        deferredToolsEnabled: true,
        definitionsOnly: true,
        loadAuthValues: mockLoadAuthValues,
      });

      expect(result.toolDefinitions.some((d) => d.name === 'tool_search')).toBe(true);
      expect(result.toolRegistry?.has('tool_search')).toBe(true);
    });

    it('should still add PTC definition when definitionsOnly=true and has programmatic tools', async () => {
      const loadedTools: GenericTool[] = [createMCPTool('tool1')];

      const agentToolOptions: AgentToolOptions = {
        tool1: { allowed_callers: ['code_execution'] },
      };

      const result = await buildToolClassification({
        loadedTools,
        userId: 'user1',
        agentId: 'agent1',
        agentToolOptions,
        deferredToolsEnabled: true,
        definitionsOnly: true,
        loadAuthValues: mockLoadAuthValues,
      });

      expect(result.toolDefinitions.some((d) => d.name === 'run_tools_with_code')).toBe(true);
      expect(result.toolRegistry?.has('run_tools_with_code')).toBe(true);
      expect(result.additionalTools.length).toBe(0);
    });

    it('should NOT call loadAuthValues for PTC when definitionsOnly=true', async () => {
      const loadedTools: GenericTool[] = [createMCPTool('tool1')];

      const agentToolOptions: AgentToolOptions = {
        tool1: { allowed_callers: ['code_execution'] },
      };

      await buildToolClassification({
        loadedTools,
        userId: 'user1',
        agentId: 'agent1',
        agentToolOptions,
        deferredToolsEnabled: true,
        definitionsOnly: true,
        loadAuthValues: mockLoadAuthValues,
      });

      expect(mockLoadAuthValues).not.toHaveBeenCalled();
    });

    it('should call loadAuthValues for PTC when definitionsOnly=false', async () => {
      const loadedTools: GenericTool[] = [createMCPTool('tool1')];

      const agentToolOptions: AgentToolOptions = {
        tool1: { allowed_callers: ['code_execution'] },
      };

      await buildToolClassification({
        loadedTools,
        userId: 'user1',
        agentId: 'agent1',
        agentToolOptions,
        deferredToolsEnabled: true,
        definitionsOnly: false,
        loadAuthValues: mockLoadAuthValues,
      });

      expect(mockLoadAuthValues).toHaveBeenCalled();
    });

    it('should create tool instances when definitionsOnly=false (default)', async () => {
      const loadedTools: GenericTool[] = [createMCPTool('tool1')];

      const agentToolOptions: AgentToolOptions = {
        tool1: { defer_loading: true },
      };

      const result = await buildToolClassification({
        loadedTools,
        userId: 'user1',
        agentId: 'agent1',
        agentToolOptions,
        deferredToolsEnabled: true,
        loadAuthValues: mockLoadAuthValues,
      });

      expect(result.additionalTools.some((t) => t.name === 'tool_search')).toBe(true);
    });
  });
});
@@ -26,7 +26,13 @@
 import { logger } from '@librechat/data-schemas';
 import { Constants } from 'librechat-data-provider';
-import { EnvVar, createProgrammaticToolCallingTool, createToolSearch } from '@librechat/agents';
+import {
+  EnvVar,
+  createToolSearch,
+  ToolSearchToolDefinition,
+  createProgrammaticToolCallingTool,
+  ProgrammaticToolCallingDefinition,
+} from '@librechat/agents';
 import type { AgentToolOptions } from 'librechat-data-provider';
 import type {
   LCToolRegistry,

@@ -45,6 +51,8 @@ export interface ToolDefinition {
  name: string;
  description?: string;
  parameters?: JsonSchemaType;
  /** MCP server name extracted from tool name */
  serverName?: string;
}

/**

@@ -286,6 +294,12 @@ export function extractMCPToolDefinition(tool: MCPToolInstance): ToolDefinition {
    def.parameters = tool.mcpJsonSchema;
  }

  /** Extract server name from tool name (format: toolName_mcp_ServerName) */
  const serverName = getServerNameFromTool(tool.name);
  if (serverName) {
    def.serverName = serverName;
  }

  return def;
}
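The `serverName` extraction above delegates to `getServerNameFromTool`, defined elsewhere in this module. A rough standalone sketch of the naming convention noted in the comment (`toolName_mcp_ServerName`), with the `_mcp_` delimiter taken as an assumption from that comment rather than from the helper's source:

```typescript
// Hypothetical sketch of server-name extraction; the real helper is
// getServerNameFromTool. MCP tool names follow `toolName_mcp_ServerName`.
const MCP_DELIMITER = '_mcp_'; // assumption: matches the project's MCP delimiter constant

function serverNameFromToolName(toolName: string): string | undefined {
  // Use the last occurrence so tool names containing the delimiter still parse.
  const idx = toolName.lastIndexOf(MCP_DELIMITER);
  if (idx === -1) {
    return undefined;
  }
  return toolName.slice(idx + MCP_DELIMITER.length) || undefined;
}
```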
@@ -312,6 +326,36 @@ export function cleanupMCPToolSchemas(tools: MCPToolInstance[]): void {
  }
}

/**
 * Builds tool registry from MCP tool definitions using the appropriate strategy.
 * Uses early returns to avoid nesting (Torvalds principle).
 */
function buildToolRegistry(
  mcpToolDefs: ToolDefinition[],
  agentToolOptions?: AgentToolOptions,
): LCToolRegistry {
  if (agentToolOptions && Object.keys(agentToolOptions).length > 0) {
    return buildToolRegistryFromAgentOptions(mcpToolDefs, agentToolOptions);
  }

  if (process.env.TOOL_CLASSIFICATION_FROM_ENV === 'true') {
    return buildToolRegistryFromEnv(mcpToolDefs);
  }

  /** No classification config - build basic definitions for event-driven mode */
  const registry: LCToolRegistry = new Map<string, LCTool>();
  for (const toolDef of mcpToolDefs) {
    registry.set(toolDef.name, {
      name: toolDef.name,
      description: toolDef.description,
      parameters: toolDef.parameters,
      serverName: toolDef.serverName,
      toolType: 'mcp',
    });
  }
  return registry;
}

/** Parameters for building tool classification and creating PTC/tool search tools */
export interface BuildToolClassificationParams {
  /** All loaded tools (will be filtered for MCP tools) */
@@ -324,6 +368,8 @@ export interface BuildToolClassificationParams {
  agentToolOptions?: AgentToolOptions;
  /** Whether the deferred_tools capability is enabled (from agent config) */
  deferredToolsEnabled?: boolean;
  /** When true, skip creating tool instances (for event-driven mode) */
  definitionsOnly?: boolean;
  /** Function to load auth values (dependency injection) */
  loadAuthValues: (params: {
    userId: string;
@@ -335,6 +381,8 @@
export interface BuildToolClassificationResult {
  /** Tool registry built from MCP tools (undefined if no MCP tools) */
  toolRegistry?: LCToolRegistry;
  /** Tool definitions array for event-driven execution (built simultaneously with registry) */
  toolDefinitions: LCTool[];
  /** Additional tools created (PTC and/or tool search) */
  additionalTools: GenericTool[];
  /** Whether any tools have defer_loading enabled (precomputed for efficiency) */
@@ -407,26 +455,35 @@ export async function buildToolClassification(
   params: BuildToolClassificationParams,
 ): Promise<BuildToolClassificationResult> {
   const {
+    loadedTools,
     userId,
     agentId,
-    loadedTools,
     agentToolOptions,
+    definitionsOnly = false,
     deferredToolsEnabled = true,
     loadAuthValues,
   } = params;
   const additionalTools: GenericTool[] = [];
 
-  /** Check if this agent is allowed to have classification features (requires agentId) */
-  if (!isAgentAllowedForClassification(agentId)) {
-    logger.debug(
-      `[buildToolClassification] Agent ${agentId ?? 'undefined'} not allowed for classification, skipping`,
-    );
-    return { toolRegistry: undefined, additionalTools, hasDeferredTools: false };
-  }
-
   const mcpTools = loadedTools.filter(isMCPTool);
   if (mcpTools.length === 0) {
-    return { toolRegistry: undefined, additionalTools, hasDeferredTools: false };
+    return {
+      additionalTools,
+      toolDefinitions: [],
+      toolRegistry: undefined,
+      hasDeferredTools: false,
+    };
   }
 
+  /**
+   * Check if this agent is allowed to have advanced classification features (PTC, deferred tools).
+   * Even if not allowed, we still build basic tool definitions for event-driven execution.
+   */
+  const isAllowedForClassification = isAgentAllowedForClassification(agentId);
+  if (!isAllowedForClassification) {
+    logger.debug(
+      `[buildToolClassification] Agent ${agentId ?? 'undefined'} not allowed for classification, building basic definitions only`,
+    );
+  }
+
   const mcpToolDefs = mcpTools.map(extractMCPToolDefinition);
@@ -435,17 +492,11 @@
    * Build registry from agent's tool_options if provided (UI config).
    * Environment variable-based classification is only used as fallback
    * when TOOL_CLASSIFICATION_FROM_ENV=true is explicitly set.
+   *
+   * Even without classification config, we still build basic tool definitions
+   * for event-driven execution.
    */
-  let toolRegistry: LCToolRegistry | undefined;
-
-  if (agentToolOptions && Object.keys(agentToolOptions).length > 0) {
-    toolRegistry = buildToolRegistryFromAgentOptions(mcpToolDefs, agentToolOptions);
-  } else if (process.env.TOOL_CLASSIFICATION_FROM_ENV === 'true') {
-    toolRegistry = buildToolRegistryFromEnv(mcpToolDefs);
-  } else {
-    /** No agent-level config and env-based classification not enabled */
-    return { toolRegistry: undefined, additionalTools, hasDeferredTools: false };
-  }
+  const toolRegistry: LCToolRegistry = buildToolRegistry(mcpToolDefs, agentToolOptions);
 
   /** Clean up temporary mcpJsonSchema property from tools now that registry is populated */
   cleanupMCPToolSchemas(mcpTools);
@@ -458,55 +509,111 @@
   const hasProgrammaticTools = agentHasProgrammaticTools(toolRegistry);
   const hasDeferredTools = deferredToolsEnabled && agentHasDeferredTools(toolRegistry);
 
-  /**
-   * If deferred tools capability is disabled, clear defer_loading from all tools
-   * to ensure no tools are treated as deferred at runtime.
-   */
+  /** Clear defer_loading if capability disabled */
   if (!deferredToolsEnabled) {
     for (const toolDef of toolRegistry.values()) {
-      if (toolDef.defer_loading === true) {
-        toolDef.defer_loading = false;
+      if (toolDef.defer_loading !== true) {
+        continue;
       }
+      toolDef.defer_loading = false;
     }
   }
 
+  /** Build toolDefinitions array from registry (single pass, reused) */
+  const toolDefinitions: LCTool[] = Array.from(toolRegistry.values());
+
+  /** Agent not allowed for classification - return basic definitions */
+  if (!isAllowedForClassification) {
+    logger.debug(
+      `[buildToolClassification] Agent ${agentId} not allowed for classification, returning basic definitions`,
+    );
+    return { toolRegistry, toolDefinitions, additionalTools, hasDeferredTools: false };
+  }
+
   /** No programmatic or deferred tools - skip PTC/ToolSearch */
   if (!hasProgrammaticTools && !hasDeferredTools) {
     logger.debug(
       `[buildToolClassification] Agent ${agentId} has no programmatic or deferred tools, skipping PTC/ToolSearch`,
     );
-    return { toolRegistry, additionalTools, hasDeferredTools: false };
+    return { toolRegistry, toolDefinitions, additionalTools, hasDeferredTools: false };
   }
 
   /** Tool search uses local mode (no API key needed) */
   if (hasDeferredTools) {
-    const toolSearchTool = createToolSearch({
-      mode: 'local',
-      toolRegistry,
+    if (!definitionsOnly) {
+      const toolSearchTool = createToolSearch({
+        mode: 'local',
+        toolRegistry,
+      });
+      additionalTools.push(toolSearchTool);
+    }
+
+    /** Add ToolSearch definition for event-driven mode */
+    toolDefinitions.push({
+      name: ToolSearchToolDefinition.name,
+      description: ToolSearchToolDefinition.description,
+      parameters: ToolSearchToolDefinition.schema as unknown as LCTool['parameters'],
     });
-    additionalTools.push(toolSearchTool);
+    toolRegistry.set(ToolSearchToolDefinition.name, {
+      name: ToolSearchToolDefinition.name,
+      allowed_callers: ['direct'],
+    });
 
     logger.debug(`[buildToolClassification] Tool Search enabled for agent ${agentId}`);
   }
 
-  /** PTC requires CODE_API_KEY for sandbox execution */
-  if (hasProgrammaticTools) {
-    try {
-      const authValues = await loadAuthValues({
-        userId,
-        authFields: [EnvVar.CODE_API_KEY],
-      });
-      const codeApiKey = authValues[EnvVar.CODE_API_KEY];
-
-      if (!codeApiKey) {
-        logger.warn('[buildToolClassification] PTC configured but CODE_API_KEY not available');
-      } else {
-        const ptcTool = createProgrammaticToolCallingTool({ apiKey: codeApiKey });
-        additionalTools.push(ptcTool);
-        logger.debug(`[buildToolClassification] PTC tool enabled for agent ${agentId}`);
-      }
-    } catch (error) {
-      logger.error('[buildToolClassification] Error creating PTC tool:', error);
-    }
+  if (!hasProgrammaticTools) {
+    return { toolRegistry, toolDefinitions, additionalTools, hasDeferredTools };
   }
 
-  return { toolRegistry, additionalTools, hasDeferredTools };
+  /** In definitions-only mode, add PTC definition without creating the tool instance */
+  if (definitionsOnly) {
+    toolDefinitions.push({
+      name: ProgrammaticToolCallingDefinition.name,
+      description: ProgrammaticToolCallingDefinition.description,
+      parameters: ProgrammaticToolCallingDefinition.schema as unknown as LCTool['parameters'],
+    });
+    toolRegistry.set(ProgrammaticToolCallingDefinition.name, {
+      name: ProgrammaticToolCallingDefinition.name,
+      allowed_callers: ['direct'],
+    });
+    logger.debug(
+      `[buildToolClassification] PTC definition added for agent ${agentId} (definitions only)`,
+    );
+    return { toolRegistry, toolDefinitions, additionalTools, hasDeferredTools };
+  }
+
+  try {
+    const authValues = await loadAuthValues({
+      userId,
+      authFields: [EnvVar.CODE_API_KEY],
+    });
+    const codeApiKey = authValues[EnvVar.CODE_API_KEY];
+
+    if (!codeApiKey) {
+      logger.warn('[buildToolClassification] PTC configured but CODE_API_KEY not available');
+      return { toolRegistry, toolDefinitions, additionalTools, hasDeferredTools };
+    }
+
+    const ptcTool = createProgrammaticToolCallingTool({ apiKey: codeApiKey });
+    additionalTools.push(ptcTool);
+
+    /** Add PTC definition for event-driven mode */
+    toolDefinitions.push({
+      name: ProgrammaticToolCallingDefinition.name,
+      description: ProgrammaticToolCallingDefinition.description,
+      parameters: ProgrammaticToolCallingDefinition.schema as unknown as LCTool['parameters'],
+    });
+    toolRegistry.set(ProgrammaticToolCallingDefinition.name, {
+      name: ProgrammaticToolCallingDefinition.name,
+      allowed_callers: ['direct'],
+    });
+
+    logger.debug(`[buildToolClassification] PTC tool enabled for agent ${agentId}`);
+  } catch (error) {
+    logger.error('[buildToolClassification] Error creating PTC tool:', error);
+  }
+
+  return { toolRegistry, toolDefinitions, additionalTools, hasDeferredTools };
 }
packages/api/src/tools/definitions.spec.ts (Normal file, 361 lines)
@@ -0,0 +1,361 @@
import { loadToolDefinitions } from './definitions';
import type {
  LoadToolDefinitionsParams,
  LoadToolDefinitionsDeps,
  ActionToolDefinition,
} from './definitions';

describe('definitions.ts', () => {
  const mockLoadAuthValues = jest.fn().mockResolvedValue({});
  const mockGetOrFetchMCPServerTools = jest.fn().mockResolvedValue(null);
  const mockIsBuiltInTool = jest.fn().mockReturnValue(false);

  beforeEach(() => {
    jest.clearAllMocks();
  });

  describe('loadToolDefinitions', () => {
    it('should return empty result for empty tools array', async () => {
      const params: LoadToolDefinitionsParams = {
        userId: 'user-123',
        agentId: 'agent-123',
        tools: [],
      };

      const deps: LoadToolDefinitionsDeps = {
        getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
        isBuiltInTool: mockIsBuiltInTool,
        loadAuthValues: mockLoadAuthValues,
      };

      const result = await loadToolDefinitions(params, deps);

      expect(result.toolDefinitions).toHaveLength(0);
      expect(result.toolRegistry.size).toBe(0);
      expect(result.hasDeferredTools).toBe(false);
    });

    describe('action tool definitions', () => {
      it('should include parameters in action tool definitions', async () => {
        const mockActionDefs: ActionToolDefinition[] = [
          {
            name: 'getWeather_action_weather_com',
            description: 'Get weather for a location',
            parameters: {
              type: 'object',
              properties: {
                latitude: { type: 'number', description: 'Latitude coordinate' },
                longitude: { type: 'number', description: 'Longitude coordinate' },
              },
              required: ['latitude', 'longitude'],
            },
          },
        ];

        const mockGetActionToolDefinitions = jest.fn().mockResolvedValue(mockActionDefs);

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['getWeather_action_weather---com'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
          getActionToolDefinitions: mockGetActionToolDefinitions,
        };

        const result = await loadToolDefinitions(params, deps);

        expect(mockGetActionToolDefinitions).toHaveBeenCalledWith('agent-123', [
          'getWeather_action_weather---com',
        ]);

        const actionDef = result.toolDefinitions.find(
          (d) => d.name === 'getWeather_action_weather_com',
        );
        expect(actionDef).toBeDefined();
        expect(actionDef?.parameters).toBeDefined();
        expect(actionDef?.parameters?.type).toBe('object');
        expect(actionDef?.parameters?.properties).toHaveProperty('latitude');
        expect(actionDef?.parameters?.properties).toHaveProperty('longitude');
        expect(actionDef?.parameters?.required).toContain('latitude');
        expect(actionDef?.parameters?.required).toContain('longitude');
      });

      it('should handle action definitions without parameters', async () => {
        const mockActionDefs: ActionToolDefinition[] = [
          {
            name: 'listItems_action_api_example_com',
            description: 'List all items',
          },
        ];

        const mockGetActionToolDefinitions = jest.fn().mockResolvedValue(mockActionDefs);

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['listItems_action_api---example---com'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
          getActionToolDefinitions: mockGetActionToolDefinitions,
        };

        const result = await loadToolDefinitions(params, deps);

        const actionDef = result.toolDefinitions.find(
          (d) => d.name === 'listItems_action_api_example_com',
        );
        expect(actionDef).toBeDefined();
        expect(actionDef?.parameters).toBeUndefined();
      });

      it('should not call getActionToolDefinitions when no action tools present', async () => {
        const mockGetActionToolDefinitions = jest.fn();
        mockIsBuiltInTool.mockReturnValue(true);

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['calculator', 'web_search'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
          getActionToolDefinitions: mockGetActionToolDefinitions,
        };

        await loadToolDefinitions(params, deps);

        expect(mockGetActionToolDefinitions).not.toHaveBeenCalled();
      });
    });
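The fixtures above pass tool keys like `getWeather_action_weather---com` while the resulting definitions are named `getWeather_action_weather_com`. A hypothetical sketch of that domain-separator normalization, inferred purely from these fixtures (the `'---'` to `'_'` mapping is an assumption, not taken from the implementation):

```typescript
// Hypothetical normalization inferred from the test fixtures above: stored
// action tool keys encode the domain separator as '---', while definition
// names use '_'.
function normalizeActionToolName(toolKey: string): string {
  return toolKey.split('---').join('_');
}

const definitionName = normalizeActionToolName('getWeather_action_weather---com');
```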
    describe('built-in tool definitions', () => {
      it('should include parameters for known built-in tools', async () => {
        mockIsBuiltInTool.mockImplementation((name) => name === 'calculator');

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['calculator'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
        };

        const result = await loadToolDefinitions(params, deps);

        const calcDef = result.toolDefinitions.find((d) => d.name === 'calculator');
        expect(calcDef).toBeDefined();
        expect(calcDef?.parameters).toBeDefined();
      });

      it('should include parameters for execute_code native tool', async () => {
        mockIsBuiltInTool.mockImplementation((name) => name === 'execute_code');

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['execute_code'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
        };

        const result = await loadToolDefinitions(params, deps);

        const execCodeDef = result.toolDefinitions.find((d) => d.name === 'execute_code');
        expect(execCodeDef).toBeDefined();
        expect(execCodeDef?.parameters).toBeDefined();
        expect(execCodeDef?.parameters?.properties).toHaveProperty('lang');
        expect(execCodeDef?.parameters?.properties).toHaveProperty('code');
        expect(execCodeDef?.parameters?.required).toContain('lang');
        expect(execCodeDef?.parameters?.required).toContain('code');
      });

      it('should include parameters for web_search native tool', async () => {
        mockIsBuiltInTool.mockImplementation((name) => name === 'web_search');

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['web_search'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
        };

        const result = await loadToolDefinitions(params, deps);

        const webSearchDef = result.toolDefinitions.find((d) => d.name === 'web_search');
        expect(webSearchDef).toBeDefined();
        expect(webSearchDef?.parameters).toBeDefined();
        expect(webSearchDef?.parameters?.properties).toHaveProperty('query');
        expect(webSearchDef?.parameters?.required).toContain('query');
      });

      it('should include parameters for file_search native tool', async () => {
        mockIsBuiltInTool.mockImplementation((name) => name === 'file_search');

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['file_search'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
        };

        const result = await loadToolDefinitions(params, deps);

        const fileSearchDef = result.toolDefinitions.find((d) => d.name === 'file_search');
        expect(fileSearchDef).toBeDefined();
        expect(fileSearchDef?.parameters).toBeDefined();
        expect(fileSearchDef?.parameters?.properties).toHaveProperty('query');
        expect(fileSearchDef?.parameters?.required).toContain('query');
      });

      it('should skip built-in tools without registry definitions', async () => {
        mockIsBuiltInTool.mockImplementation((name) => name === 'unknown_tool');

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['unknown_tool'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
        };

        const result = await loadToolDefinitions(params, deps);

        const unknownDef = result.toolDefinitions.find((d) => d.name === 'unknown_tool');
        expect(unknownDef).toBeUndefined();
        expect(result.toolRegistry.has('unknown_tool')).toBe(false);
      });

      it('should include description and parameters in registry for built-in tools', async () => {
        mockIsBuiltInTool.mockImplementation((name) => name === 'calculator');

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['calculator'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
        };

        const result = await loadToolDefinitions(params, deps);

        const registryEntry = result.toolRegistry.get('calculator');
        expect(registryEntry).toBeDefined();
        expect(registryEntry?.description).toBeDefined();
        expect(registryEntry?.parameters).toBeDefined();
        expect(registryEntry?.allowed_callers).toContain('direct');
      });
    });

    describe('tool registry metadata', () => {
      it('should include description and parameters in registry for action tools', async () => {
        const mockActionDefs: ActionToolDefinition[] = [
          {
            name: 'getWeather_action_weather_com',
            description: 'Get weather for a location',
            parameters: {
              type: 'object',
              properties: {
                city: { type: 'string', description: 'City name' },
              },
              required: ['city'],
            },
          },
        ];

        const mockGetActionToolDefinitions = jest.fn().mockResolvedValue(mockActionDefs);

        const params: LoadToolDefinitionsParams = {
          userId: 'user-123',
          agentId: 'agent-123',
          tools: ['getWeather_action_weather---com'],
        };

        const deps: LoadToolDefinitionsDeps = {
          getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
          isBuiltInTool: mockIsBuiltInTool,
          loadAuthValues: mockLoadAuthValues,
          getActionToolDefinitions: mockGetActionToolDefinitions,
        };

        const result = await loadToolDefinitions(params, deps);

        const registryEntry = result.toolRegistry.get('getWeather_action_weather_com');
        expect(registryEntry).toBeDefined();
        expect(registryEntry?.description).toBe('Get weather for a location');
        expect(registryEntry?.parameters).toBeDefined();
        expect(registryEntry?.parameters?.properties).toHaveProperty('city');
|
||||
expect(registryEntry?.allowed_callers).toContain('direct');
|
||||
});
|
||||
|
||||
it('should handle action tools without parameters in registry', async () => {
|
||||
const mockActionDefs: ActionToolDefinition[] = [
|
||||
{
|
||||
name: 'ping_action_api_com',
|
||||
description: 'Ping the API',
|
||||
},
|
||||
];
|
||||
|
||||
const mockGetActionToolDefinitions = jest.fn().mockResolvedValue(mockActionDefs);
|
||||
|
||||
const params: LoadToolDefinitionsParams = {
|
||||
userId: 'user-123',
|
||||
agentId: 'agent-123',
|
||||
tools: ['ping_action_api---com'],
|
||||
};
|
||||
|
||||
const deps: LoadToolDefinitionsDeps = {
|
||||
getOrFetchMCPServerTools: mockGetOrFetchMCPServerTools,
|
||||
isBuiltInTool: mockIsBuiltInTool,
|
||||
loadAuthValues: mockLoadAuthValues,
|
||||
getActionToolDefinitions: mockGetActionToolDefinitions,
|
||||
};
|
||||
|
||||
const result = await loadToolDefinitions(params, deps);
|
||||
|
||||
const registryEntry = result.toolRegistry.get('ping_action_api_com');
|
||||
expect(registryEntry).toBeDefined();
|
||||
expect(registryEntry?.description).toBe('Ping the API');
|
||||
expect(registryEntry?.parameters).toBeUndefined();
|
||||
expect(registryEntry?.allowed_callers).toContain('direct');
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
225  packages/api/src/tools/definitions.ts  Normal file

@@ -0,0 +1,225 @@
/**
 * @fileoverview Tool definitions loader for event-driven mode.
 * Loads tool definitions without creating tool instances for efficient initialization.
 *
 * @module packages/api/src/tools/definitions
 */

import { Constants, actionDelimiter } from 'librechat-data-provider';
import type { AgentToolOptions } from 'librechat-data-provider';
import type { LCToolRegistry, JsonSchemaType, LCTool, GenericTool } from '@librechat/agents';
import { buildToolClassification, type ToolDefinition } from './classification';
import { getToolDefinition } from './registry/definitions';
import { resolveJsonSchemaRefs } from '~/mcp/zod';

export interface MCPServerTool {
  function?: {
    name?: string;
    description?: string;
    parameters?: JsonSchemaType;
  };
}

export type MCPServerTools = Record<string, MCPServerTool>;

export interface LoadToolDefinitionsParams {
  /** User ID for MCP server tool lookup */
  userId: string;
  /** Agent ID for tool classification */
  agentId: string;
  /** Agent's tool list (tool names/identifiers) */
  tools: string[];
  /** Agent-specific tool options */
  toolOptions?: AgentToolOptions;
  /** Whether deferred tools feature is enabled */
  deferredToolsEnabled?: boolean;
}

export interface ActionToolDefinition {
  name: string;
  description?: string;
  parameters?: JsonSchemaType;
}

export interface LoadToolDefinitionsDeps {
  /** Gets MCP server tools - first checks cache, then initializes server if needed */
  getOrFetchMCPServerTools: (userId: string, serverName: string) => Promise<MCPServerTools | null>;
  /** Checks if a tool name is a known built-in tool */
  isBuiltInTool: (toolName: string) => boolean;
  /** Loads auth values for tool search (passed to buildToolClassification) */
  loadAuthValues: (params: {
    userId: string;
    authFields: string[];
  }) => Promise<Record<string, string>>;
  /** Loads action tool definitions (schemas) from OpenAPI specs */
  getActionToolDefinitions?: (
    agentId: string,
    actionToolNames: string[],
  ) => Promise<ActionToolDefinition[]>;
}

export interface LoadToolDefinitionsResult {
  toolDefinitions: (ToolDefinition | LCTool)[];
  toolRegistry: LCToolRegistry;
  hasDeferredTools: boolean;
}

const mcpToolPattern = /_mcp_/;

/**
 * Loads tool definitions without creating tool instances.
 * This is the efficient path for event-driven mode where tools are loaded on-demand.
 */
export async function loadToolDefinitions(
  params: LoadToolDefinitionsParams,
  deps: LoadToolDefinitionsDeps,
): Promise<LoadToolDefinitionsResult> {
  const { userId, agentId, tools, toolOptions = {}, deferredToolsEnabled = false } = params;
  const { getOrFetchMCPServerTools, isBuiltInTool, loadAuthValues, getActionToolDefinitions } =
    deps;

  const emptyResult: LoadToolDefinitionsResult = {
    toolDefinitions: [],
    toolRegistry: new Map(),
    hasDeferredTools: false,
  };

  if (!tools || tools.length === 0) {
    return emptyResult;
  }

  const mcpServerToolsCache = new Map<string, MCPServerTools>();
  const mcpToolDefs: ToolDefinition[] = [];
  const builtInToolDefs: ToolDefinition[] = [];
  let actionToolDefs: ToolDefinition[] = [];
  const actionToolNames: string[] = [];

  const mcpAllPattern = `${Constants.mcp_all}${Constants.mcp_delimiter}`;

  for (const toolName of tools) {
    if (toolName.includes(actionDelimiter)) {
      actionToolNames.push(toolName);
      continue;
    }

    if (!mcpToolPattern.test(toolName)) {
      if (!isBuiltInTool(toolName)) {
        continue;
      }
      const registryDef = getToolDefinition(toolName);
      if (!registryDef) {
        continue;
      }
      builtInToolDefs.push({
        name: toolName,
        description: registryDef.description,
        parameters: registryDef.schema as JsonSchemaType | undefined,
      });
      continue;
    }

    const parts = toolName.split(Constants.mcp_delimiter);
    const serverName = parts[parts.length - 1];

    if (!mcpServerToolsCache.has(serverName)) {
      const serverTools = await getOrFetchMCPServerTools(userId, serverName);
      mcpServerToolsCache.set(serverName, serverTools || {});
    }

    const serverTools = mcpServerToolsCache.get(serverName);
    if (!serverTools) {
      continue;
    }

    if (toolName.startsWith(mcpAllPattern)) {
      for (const [actualToolName, toolDef] of Object.entries(serverTools)) {
        if (toolDef?.function) {
          mcpToolDefs.push({
            name: actualToolName,
            description: toolDef.function.description,
            parameters: toolDef.function.parameters
              ? resolveJsonSchemaRefs(toolDef.function.parameters)
              : undefined,
            serverName,
          });
        }
      }
      continue;
    }

    const toolDef = serverTools[toolName];
    if (toolDef?.function) {
      mcpToolDefs.push({
        name: toolName,
        description: toolDef.function.description,
        parameters: toolDef.function.parameters
          ? resolveJsonSchemaRefs(toolDef.function.parameters)
          : undefined,
        serverName,
      });
    }
  }

  if (actionToolNames.length > 0 && getActionToolDefinitions) {
    const fetchedActionDefs = await getActionToolDefinitions(agentId, actionToolNames);
    actionToolDefs = fetchedActionDefs.map((def) => ({
      name: def.name,
      description: def.description,
      parameters: def.parameters,
    }));
  }

  const loadedTools = mcpToolDefs.map((def) => ({
    name: def.name,
    description: def.description,
    mcp: true as const,
    mcpJsonSchema: def.parameters,
  })) as unknown as GenericTool[];

  const classificationResult = await buildToolClassification({
    userId,
    agentId,
    loadedTools,
    loadAuthValues,
    deferredToolsEnabled,
    definitionsOnly: true,
    agentToolOptions: toolOptions,
  });

  const { toolDefinitions, hasDeferredTools } = classificationResult;
  const toolRegistry: LCToolRegistry = classificationResult.toolRegistry ?? new Map();

  for (const actionDef of actionToolDefs) {
    if (!toolRegistry.has(actionDef.name)) {
      toolRegistry.set(actionDef.name, {
        name: actionDef.name,
        description: actionDef.description,
        parameters: actionDef.parameters,
        allowed_callers: ['direct'],
      });
    }
  }

  for (const builtInDef of builtInToolDefs) {
    if (!toolRegistry.has(builtInDef.name)) {
      toolRegistry.set(builtInDef.name, {
        name: builtInDef.name,
        description: builtInDef.description,
        parameters: builtInDef.parameters,
        allowed_callers: ['direct'],
      });
    }
  }

  const allDefinitions: (ToolDefinition | LCTool)[] = [
    ...toolDefinitions,
    ...actionToolDefs.filter((d) => !toolDefinitions.some((td) => td.name === d.name)),
    ...builtInToolDefs.filter((d) => !toolDefinitions.some((td) => td.name === d.name)),
  ];

  return {
    toolDefinitions: allDefinitions,
    toolRegistry,
    hasDeferredTools,
  };
}
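The MCP branch above derives the server name from the namespaced tool identifier and treats names prefixed with the `mcp_all` constant as a whole-server wildcard. A minimal sketch of that naming convention, using assumed stand-in values for `Constants.mcp_delimiter` and `Constants.mcp_all` (the real constants live in `librechat-data-provider`):

```typescript
// Illustrative stand-ins; the actual values come from librechat-data-provider.
const MCP_DELIMITER = '_mcp_';
const MCP_ALL = 'mcp_all';

/** Mirrors the loader's parsing: the server name is the segment after the last delimiter. */
function parseMcpToolName(toolName: string): { serverName: string; isWildcard: boolean } {
  const parts = toolName.split(MCP_DELIMITER);
  return {
    serverName: parts[parts.length - 1],
    isWildcard: toolName.startsWith(`${MCP_ALL}${MCP_DELIMITER}`),
  };
}
```

Grouping by `serverName` before fetching is what lets the loader call `getOrFetchMCPServerTools` once per server rather than once per tool.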

@@ -1,3 +1,5 @@
export * from './format';
export * from './registry';
export * from './toolkits';
export * from './definitions';
export * from './classification';
637  packages/api/src/tools/registry/definitions.ts  Normal file

@@ -0,0 +1,637 @@
import {
  WebSearchToolDefinition,
  CalculatorToolDefinition,
  CodeExecutionToolDefinition,
} from '@librechat/agents';

/** Extended JSON Schema type that includes standard validation keywords */
export type ExtendedJsonSchema = {
  type?: 'string' | 'number' | 'integer' | 'float' | 'boolean' | 'array' | 'object' | 'null';
  enum?: (string | number | boolean | null)[];
  items?: ExtendedJsonSchema;
  properties?: Record<string, ExtendedJsonSchema>;
  required?: string[];
  description?: string;
  additionalProperties?: boolean | ExtendedJsonSchema;
  minLength?: number;
  maxLength?: number;
  minimum?: number;
  maximum?: number;
  minItems?: number;
  maxItems?: number;
  pattern?: string;
  format?: string;
  default?: unknown;
  const?: unknown;
  oneOf?: ExtendedJsonSchema[];
  anyOf?: ExtendedJsonSchema[];
  allOf?: ExtendedJsonSchema[];
  $ref?: string;
  $defs?: Record<string, ExtendedJsonSchema>;
  definitions?: Record<string, ExtendedJsonSchema>;
};
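Because `ExtendedJsonSchema` admits `$ref` and `$defs`, schemas may carry local references that need to be inlined before they are handed to a model; the loader in `definitions.ts` does this through `resolveJsonSchemaRefs` from `~/mcp/zod`. A generic sketch of that technique (an assumption-level illustration, not the project's actual implementation; it ignores arrays and circular references):

```typescript
// Minimal local $ref resolution over $defs. Illustrative only: does not recurse
// into arrays (anyOf/oneOf) and does not guard against reference cycles.
type Schema = { $ref?: string; $defs?: Record<string, Schema>; [key: string]: unknown };

function resolveLocalRefs(schema: Schema, root: Schema = schema): Schema {
  if (schema.$ref?.startsWith('#/$defs/')) {
    const name = schema.$ref.slice('#/$defs/'.length);
    const target = root.$defs?.[name];
    if (target) {
      return resolveLocalRefs(target, root);
    }
  }
  const out: Schema = {};
  for (const [key, value] of Object.entries(schema)) {
    if (key === '$defs') {
      continue; // definitions are inlined, so the table is dropped from the output
    }
    out[key] =
      value && typeof value === 'object' && !Array.isArray(value)
        ? resolveLocalRefs(value as Schema, root)
        : value;
  }
  return out;
}
```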

export interface ToolRegistryDefinition {
  name: string;
  description: string;
  schema: ExtendedJsonSchema;
  description_for_model?: string;
  responseFormat?: 'content_and_artifact' | 'content';
  toolType: 'builtin' | 'mcp' | 'action' | 'custom';
}

/** Google Search tool JSON schema */
export const googleSearchSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      minLength: 1,
      description: 'The search query string.',
    },
    max_results: {
      type: 'integer',
      minimum: 1,
      maximum: 10,
      description: 'The maximum number of search results to return. Defaults to 5.',
    },
  },
  required: ['query'],
};

/** DALL-E 3 tool JSON schema */
export const dalle3Schema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    prompt: {
      type: 'string',
      maxLength: 4000,
      description:
        'A text description of the desired image, following the rules, up to 4000 characters.',
    },
    style: {
      type: 'string',
      enum: ['vivid', 'natural'],
      description:
        'Must be one of `vivid` or `natural`. `vivid` generates hyper-real and dramatic images, `natural` produces more natural, less hyper-real looking images',
    },
    quality: {
      type: 'string',
      enum: ['hd', 'standard'],
      description: 'The quality of the generated image. Only `hd` and `standard` are supported.',
    },
    size: {
      type: 'string',
      enum: ['1024x1024', '1792x1024', '1024x1792'],
      description:
        'The size of the requested image. Use 1024x1024 (square) as the default, 1792x1024 if the user requests a wide image, and 1024x1792 for full-body portraits. Always include this parameter in the request.',
    },
  },
  required: ['prompt', 'style', 'quality', 'size'],
};

/** Flux API tool JSON schema */
export const fluxApiSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    action: {
      type: 'string',
      enum: ['generate', 'list_finetunes', 'generate_finetuned'],
      description:
        'Action to perform: "generate" for image generation, "generate_finetuned" for finetuned model generation, "list_finetunes" to get available custom models',
    },
    prompt: {
      type: 'string',
      description:
        'Text prompt for image generation. Required when action is "generate". Not used for list_finetunes.',
    },
    width: {
      type: 'number',
      description:
        'Width of the generated image in pixels. Must be a multiple of 32. Default is 1024.',
    },
    height: {
      type: 'number',
      description:
        'Height of the generated image in pixels. Must be a multiple of 32. Default is 768.',
    },
    prompt_upsampling: {
      type: 'boolean',
      description: 'Whether to perform upsampling on the prompt.',
    },
    steps: {
      type: 'integer',
      description: 'Number of steps to run the model for, a number from 1 to 50. Default is 40.',
    },
    seed: {
      type: 'number',
      description: 'Optional seed for reproducibility.',
    },
    safety_tolerance: {
      type: 'number',
      description:
        'Tolerance level for input and output moderation. Between 0 and 6, 0 being most strict, 6 being least strict.',
    },
    endpoint: {
      type: 'string',
      enum: [
        '/v1/flux-pro-1.1',
        '/v1/flux-pro',
        '/v1/flux-dev',
        '/v1/flux-pro-1.1-ultra',
        '/v1/flux-pro-finetuned',
        '/v1/flux-pro-1.1-ultra-finetuned',
      ],
      description: 'Endpoint to use for image generation.',
    },
    raw: {
      type: 'boolean',
      description:
        'Generate less processed, more natural-looking images. Only works for /v1/flux-pro-1.1-ultra.',
    },
    finetune_id: {
      type: 'string',
      description: 'ID of the finetuned model to use',
    },
    finetune_strength: {
      type: 'number',
      description: 'Strength of the finetuning effect (typically between 0.1 and 1.2)',
    },
    guidance: {
      type: 'number',
      description: 'Guidance scale for finetuned models',
    },
    aspect_ratio: {
      type: 'string',
      description: 'Aspect ratio for ultra models (e.g., "16:9")',
    },
  },
  required: [],
};

/** OpenWeather tool JSON schema */
export const openWeatherSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    action: {
      type: 'string',
      enum: ['help', 'current_forecast', 'timestamp', 'daily_aggregation', 'overview'],
      description: 'The action to perform',
    },
    city: {
      type: 'string',
      description: 'City name for geocoding if lat/lon not provided',
    },
    lat: {
      type: 'number',
      description: 'Latitude coordinate',
    },
    lon: {
      type: 'number',
      description: 'Longitude coordinate',
    },
    exclude: {
      type: 'string',
      description: 'Parts to exclude from the response',
    },
    units: {
      type: 'string',
      enum: ['Celsius', 'Kelvin', 'Fahrenheit'],
      description: 'Temperature units',
    },
    lang: {
      type: 'string',
      description: 'Language code',
    },
    date: {
      type: 'string',
      description: 'Date in YYYY-MM-DD format for timestamp and daily_aggregation',
    },
    tz: {
      type: 'string',
      description: 'Timezone',
    },
  },
  required: ['action'],
};

/** Wolfram Alpha tool JSON schema */
export const wolframSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    input: {
      type: 'string',
      description: 'Natural language query to WolframAlpha following the guidelines',
    },
  },
  required: ['input'],
};

/** Stable Diffusion tool JSON schema */
export const stableDiffusionSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    prompt: {
      type: 'string',
      description:
        'Detailed keywords to describe the subject, using at least 7 keywords to accurately describe the image, separated by comma',
    },
    negative_prompt: {
      type: 'string',
      description:
        'Keywords we want to exclude from the final image, using at least 7 keywords to accurately describe the image, separated by comma',
    },
  },
  required: ['prompt', 'negative_prompt'],
};

/** Azure AI Search tool JSON schema */
export const azureAISearchSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      description: 'Search word or phrase to Azure AI Search',
    },
  },
  required: ['query'],
};

/** Traversaal Search tool JSON schema */
export const traversaalSearchSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      description:
        "A properly written sentence to be interpreted by an AI to search the web according to the user's request.",
    },
  },
  required: ['query'],
};

/** Tavily Search Results tool JSON schema */
export const tavilySearchSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      minLength: 1,
      description: 'The search query string.',
    },
    max_results: {
      type: 'number',
      minimum: 1,
      maximum: 10,
      description: 'The maximum number of search results to return. Defaults to 5.',
    },
    search_depth: {
      type: 'string',
      enum: ['basic', 'advanced'],
      description:
        'The depth of the search, affecting result quality and response time (`basic` or `advanced`). Default is basic for quick results; advanced returns in-depth, high-quality results but takes longer. Advanced calls equal 2 requests.',
    },
    include_images: {
      type: 'boolean',
      description:
        'Whether to include a list of query-related images in the response. Default is False.',
    },
    include_answer: {
      type: 'boolean',
      description: 'Whether to include answers in the search results. Default is False.',
    },
    include_raw_content: {
      type: 'boolean',
      description: 'Whether to include raw content in the search results. Default is False.',
    },
    include_domains: {
      type: 'array',
      items: { type: 'string' },
      description: 'A list of domains to specifically include in the search results.',
    },
    exclude_domains: {
      type: 'array',
      items: { type: 'string' },
      description: 'A list of domains to specifically exclude from the search results.',
    },
    topic: {
      type: 'string',
      enum: ['general', 'news', 'finance'],
      description:
        'The category of the search. Use news ONLY if query SPECIFICALLY mentions the word "news".',
    },
    time_range: {
      type: 'string',
      enum: ['day', 'week', 'month', 'year', 'd', 'w', 'm', 'y'],
      description: 'The time range back from the current date to filter results.',
    },
    days: {
      type: 'number',
      minimum: 1,
      description: 'Number of days back from the current date to include. Only if topic is news.',
    },
    include_image_descriptions: {
      type: 'boolean',
      description:
        'When include_images is true, also add a descriptive text for each image. Default is false.',
    },
  },
  required: ['query'],
};

/** File Search tool JSON schema */
export const fileSearchSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      description:
        "A natural language query to search for relevant information in the files. Be specific and use keywords related to the information you're looking for. The query will be used for semantic similarity matching against the file contents.",
    },
  },
  required: ['query'],
};

/** OpenAI Image Generation tool JSON schema */
export const imageGenOaiSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    prompt: {
      type: 'string',
      maxLength: 32000,
      description: `Describe the image you want in detail.
Be highly specific—break your idea into layers:
(1) main concept and subject,
(2) composition and position,
(3) lighting and mood,
(4) style, medium, or camera details,
(5) important features (age, expression, clothing, etc.),
(6) background.
Use positive, descriptive language and specify what should be included, not what to avoid.
List number and characteristics of people/objects, and mention style/technical requirements (e.g., "DSLR photo, 85mm lens, golden hour").
Do not reference any uploaded images—use for new image creation from text only.`,
    },
    background: {
      type: 'string',
      enum: ['transparent', 'opaque', 'auto'],
      description:
        'Sets transparency for the background. Must be one of transparent, opaque or auto (default). When transparent, the output format should be png or webp.',
    },
    quality: {
      type: 'string',
      enum: ['auto', 'high', 'medium', 'low'],
      description: 'The quality of the image. One of auto (default), high, medium, or low.',
    },
    size: {
      type: 'string',
      enum: ['auto', '1024x1024', '1536x1024', '1024x1536'],
      description:
        'The size of the generated image. One of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default).',
    },
  },
  required: ['prompt'],
};

/** OpenAI Image Edit tool JSON schema */
export const imageEditOaiSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    image_ids: {
      type: 'array',
      items: { type: 'string' },
      minItems: 1,
      description: `IDs (image ID strings) of previously generated or uploaded images that should guide the edit.

Guidelines:
- If the user's request depends on any prior image(s), copy their image IDs into the \`image_ids\` array (in the same order the user refers to them).
- Never invent or hallucinate IDs; only use IDs that are still visible in the conversation context.
- If no earlier image is relevant, omit the field entirely.`,
    },
    prompt: {
      type: 'string',
      maxLength: 32000,
      description: `Describe the changes, enhancements, or new ideas to apply to the uploaded image(s).
Be highly specific—break your request into layers:
(1) main concept or transformation,
(2) specific edits/replacements or composition guidance,
(3) desired style, mood, or technique,
(4) features/items to keep, change, or add (such as objects, people, clothing, lighting, etc.).
Use positive, descriptive language and clarify what should be included or changed, not what to avoid.
Always base this prompt on the most recently uploaded reference images.`,
    },
    quality: {
      type: 'string',
      enum: ['auto', 'high', 'medium', 'low'],
      description:
        'The quality of the image. One of auto (default), high, medium, or low. High/medium/low only supported for gpt-image-1.',
    },
    size: {
      type: 'string',
      enum: ['auto', '1024x1024', '1536x1024', '1024x1536', '256x256', '512x512'],
      description:
        'The size of the generated images. For gpt-image-1: auto (default), 1024x1024, 1536x1024, 1024x1536. For dall-e-2: 256x256, 512x512, 1024x1024.',
    },
  },
  required: ['image_ids', 'prompt'],
};

/** Gemini Image Generation tool JSON schema */
export const geminiImageGenSchema: ExtendedJsonSchema = {
  type: 'object',
  properties: {
    prompt: {
      type: 'string',
      maxLength: 32000,
      description:
        'A detailed text description of the desired image, up to 32000 characters. For "editing" requests, describe the changes you want to make to the referenced image. Be specific about composition, style, lighting, and subject matter.',
    },
    image_ids: {
      type: 'array',
      items: { type: 'string' },
      description: `Optional array of image IDs to use as visual context for generation.

Guidelines:
- For "editing" requests: ALWAYS include the image ID being "edited"
- For new generation with context: Include any relevant reference image IDs
- If the user's request references any prior images, include their image IDs in this array
- These images will be used as visual context/inspiration for the new generation
- Never invent or hallucinate IDs; only use IDs that are visible in the conversation
- If no images are relevant, omit this field entirely`,
    },
    aspectRatio: {
      type: 'string',
      enum: ['1:1', '2:3', '3:2', '3:4', '4:3', '4:5', '5:4', '9:16', '16:9', '21:9'],
      description:
        'The aspect ratio of the generated image. Use 16:9 or 3:2 for landscape, 9:16 or 2:3 for portrait, 21:9 for ultra-wide/cinematic, 1:1 for square. Defaults to 1:1 if not specified.',
    },
    imageSize: {
      type: 'string',
      enum: ['1K', '2K', '4K'],
      description:
        'The resolution of the generated image. Use 1K for standard, 2K for high, 4K for maximum quality. Defaults to 1K if not specified.',
    },
  },
  required: ['prompt'],
};

/** Tool definitions registry - maps tool names to their definitions */
export const toolDefinitions: Record<string, ToolRegistryDefinition> = {
  google: {
    name: 'google',
    description:
      'A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events.',
    schema: googleSearchSchema,
    toolType: 'builtin',
  },
  dalle: {
    name: 'dalle',
    description: `Use DALLE to create images from text descriptions.
- It requires prompts to be in English, detailed, and to specify image type and human features for diversity.
- Create only one image, without repeating or listing descriptions outside the "prompts" field.
- Maintains the original intent of the description, with parameters for image style, quality, and size to tailor the output.`,
    schema: dalle3Schema,
    toolType: 'builtin',
  },
  flux: {
    name: 'flux',
    description:
      'Use Flux to generate images from text descriptions. This tool can generate images and list available finetunes. Each generate call creates one image. For multiple images, make multiple consecutive calls.',
    schema: fluxApiSchema,
    toolType: 'builtin',
  },
  open_weather: {
    name: 'open_weather',
    description:
      'Provides weather data from OpenWeather One Call API 3.0. Actions: help, current_forecast, timestamp, daily_aggregation, overview. If lat/lon not provided, specify "city" for geocoding. Units: "Celsius", "Kelvin", or "Fahrenheit" (default: Celsius). For timestamp action, use "date" in YYYY-MM-DD format.',
    schema: openWeatherSchema,
    toolType: 'builtin',
  },
  wolfram: {
    name: 'wolfram',
    description:
      'WolframAlpha offers computation, math, curated knowledge, and real-time data. It handles natural language queries and performs complex calculations. Follow the guidelines to get the best results.',
    schema: wolframSchema,
    toolType: 'builtin',
  },
  'stable-diffusion': {
    name: 'stable-diffusion',
    description:
      "You can generate images using text with 'stable-diffusion'. This tool is exclusively for visual content.",
    schema: stableDiffusionSchema,
    toolType: 'builtin',
  },
  'azure-ai-search': {
    name: 'azure-ai-search',
    description: "Use the 'azure-ai-search' tool to retrieve search results relevant to your input",
    schema: azureAISearchSchema,
    toolType: 'builtin',
  },
  traversaal_search: {
    name: 'traversaal_search',
    description:
      'An AI search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events. Input should be a search query.',
    schema: traversaalSearchSchema,
    toolType: 'builtin',
  },
  tavily_search_results_json: {
    name: 'tavily_search_results_json',
    description:
      'A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events.',
    schema: tavilySearchSchema,
    toolType: 'builtin',
  },
  file_search: {
    name: 'file_search',
    description:
      'Performs semantic search across attached "file_search" documents using natural language queries. This tool analyzes the content of uploaded files to find relevant information, quotes, and passages that best match your query.',
    schema: fileSearchSchema,
    toolType: 'builtin',
    responseFormat: 'content_and_artifact',
  },
  image_gen_oai: {
    name: 'image_gen_oai',
    description: `Generates high-quality, original images based solely on text, not using any uploaded reference images.

When to use \`image_gen_oai\`:
- To create entirely new images from detailed text descriptions that do NOT reference any image files.

When NOT to use \`image_gen_oai\`:
- If the user has uploaded any images and requests modifications, enhancements, or remixing based on those uploads → use \`image_edit_oai\` instead.

Generated image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.`,
    schema: imageGenOaiSchema,
    toolType: 'builtin',
    responseFormat: 'content_and_artifact',
  },
  image_edit_oai: {
    name: 'image_edit_oai',
    description: `Generates high-quality, original images based on text and one or more uploaded/referenced images.

When to use \`image_edit_oai\`:
- The user wants to modify, extend, or remix one **or more** uploaded images, either:
  - Previously generated, or in the current request (both to be included in the \`image_ids\` array).
- Always when the user refers to uploaded images for editing, enhancement, remixing, style transfer, or combining elements.
- Any current or existing images are to be used as visual guides.
- If there are any files in the current request, they are more likely than not expected as references for image edit requests.
|
||||
|
||||
When NOT to use \`image_edit_oai\`:
|
||||
- Brand-new generations that do not rely on an existing image → use \`image_gen_oai\` instead.
|
||||
|
||||
Both generated and referenced image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.`,
|
||||
schema: imageEditOaiSchema,
|
||||
toolType: 'builtin',
|
||||
responseFormat: 'content_and_artifact',
|
||||
},
|
||||
gemini_image_gen: {
|
||||
name: 'gemini_image_gen',
|
||||
description: `Generates high-quality, original images based on text prompts, with optional image context.
|
||||
|
||||
When to use \`gemini_image_gen\`:
|
||||
- To create entirely new images from detailed text descriptions
|
||||
- To generate images using existing images as context or inspiration
|
||||
- When the user requests image generation, creation, or asks to "generate an image"
|
||||
- When the user asks to "edit", "modify", "change", or "swap" elements in an image (generates new image with changes)
|
||||
|
||||
When NOT to use \`gemini_image_gen\`:
|
||||
- For uploading or saving existing images without modification
|
||||
|
||||
Generated image IDs will be returned in the response, so you can refer to them in future requests.`,
|
||||
schema: geminiImageGenSchema,
|
||||
toolType: 'builtin',
|
||||
responseFormat: 'content_and_artifact',
|
||||
},
|
||||
};

/** Tool definitions from @librechat/agents */
const agentToolDefinitions: Record<string, ToolRegistryDefinition> = {
  [CalculatorToolDefinition.name]: {
    name: CalculatorToolDefinition.name,
    description: CalculatorToolDefinition.description,
    schema: CalculatorToolDefinition.schema as unknown as ExtendedJsonSchema,
    toolType: 'builtin',
  },
  [CodeExecutionToolDefinition.name]: {
    name: CodeExecutionToolDefinition.name,
    description: CodeExecutionToolDefinition.description,
    schema: CodeExecutionToolDefinition.schema as unknown as ExtendedJsonSchema,
    toolType: 'builtin',
  },
  [WebSearchToolDefinition.name]: {
    name: WebSearchToolDefinition.name,
    description: WebSearchToolDefinition.description,
    schema: WebSearchToolDefinition.schema as unknown as ExtendedJsonSchema,
    toolType: 'builtin',
  },
};

export function getToolDefinition(toolName: string): ToolRegistryDefinition | undefined {
  return toolDefinitions[toolName] ?? agentToolDefinitions[toolName];
}

export function getAllToolDefinitions(): ToolRegistryDefinition[] {
  return [...Object.values(toolDefinitions), ...Object.values(agentToolDefinitions)];
}

export function getToolSchema(toolName: string): ExtendedJsonSchema | undefined {
  return getToolDefinition(toolName)?.schema;
}
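The registry accessors above implement a two-map lookup: the local `toolDefinitions` map takes precedence, and the definitions re-exported from `@librechat/agents` act as a fallback via `??`. A minimal self-contained sketch of that pattern (the `Demo*` names and entries below are illustrative stand-ins, not the real registry contents):

```typescript
// Sketch of the two-map registry lookup pattern; names are hypothetical.
interface DemoToolDefinition {
  name: string;
  description: string;
  toolType: 'builtin' | 'dynamic';
}

// Local definitions take precedence.
const demoToolDefinitions: Record<string, DemoToolDefinition> = {
  calculator: { name: 'calculator', description: 'Evaluates math expressions.', toolType: 'builtin' },
};

// Fallback map, standing in for the @librechat/agents definitions.
const demoAgentToolDefinitions: Record<string, DemoToolDefinition> = {
  web_search: { name: 'web_search', description: 'Searches the web.', toolType: 'builtin' },
};

function demoGetToolDefinition(toolName: string): DemoToolDefinition | undefined {
  // `??` falls through to the agent-package map only when the local map misses.
  return demoToolDefinitions[toolName] ?? demoAgentToolDefinitions[toolName];
}

function demoGetAllToolDefinitions(): DemoToolDefinition[] {
  return [...Object.values(demoToolDefinitions), ...Object.values(demoAgentToolDefinitions)];
}
```

Because lookup is by plain object key, resolving a definition is O(1) and never constructs the underlying tool, which is what makes the lazy-loading refactor cheap at agent initialization time.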

packages/api/src/tools/registry/index.ts (new file)
@@ -0,0 +1 @@
+export * from './definitions';

@@ -1,4 +1,4 @@
-import { z } from 'zod';
+import type { ExtendedJsonSchema } from '../registry/definitions';
 
 /** Default description for Gemini image generation tool */
 const DEFAULT_GEMINI_IMAGE_GEN_DESCRIPTION =
@@ -46,6 +46,35 @@ const getGeminiImageIdsDescription = () => {
   return process.env.GEMINI_IMAGE_IDS_DESCRIPTION || DEFAULT_GEMINI_IMAGE_IDS_DESCRIPTION;
 };
 
+const geminiImageGenJsonSchema: ExtendedJsonSchema = {
+  type: 'object',
+  properties: {
+    prompt: {
+      type: 'string',
+      maxLength: 32000,
+      description: getGeminiImageGenPromptDescription(),
+    },
+    image_ids: {
+      type: 'array',
+      items: { type: 'string' },
+      description: getGeminiImageIdsDescription(),
+    },
+    aspectRatio: {
+      type: 'string',
+      enum: ['1:1', '2:3', '3:2', '3:4', '4:3', '4:5', '5:4', '9:16', '16:9', '21:9'],
+      description:
+        'The aspect ratio of the generated image. Use 16:9 or 3:2 for landscape, 9:16 or 2:3 for portrait, 21:9 for ultra-wide/cinematic, 1:1 for square. Defaults to 1:1 if not specified.',
+    },
+    imageSize: {
+      type: 'string',
+      enum: ['1K', '2K', '4K'],
+      description:
+        'The resolution of the generated image. Use 1K for standard, 2K for high, 4K for maximum quality. Defaults to 1K if not specified.',
+    },
+  },
+  required: ['prompt'],
+};
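A plain JSON-schema object like the one above can gate tool arguments before execution without instantiating the tool itself. The sketch below is a hand-rolled illustration of that idea covering only `required` and `enum` checks; it is not LibreChat's validator (a real implementation would typically use a library such as ajv), and the `Demo*` names are hypothetical:

```typescript
// Minimal, hand-rolled argument check against a JSON-schema-shaped object.
// Illustrative only: covers `required` and string `enum` constraints.
type DemoSchema = {
  type: 'object';
  properties: Record<string, { type: string; enum?: string[] }>;
  required: string[];
};

// A trimmed-down stand-in for a schema like geminiImageGenJsonSchema.
const demoImageGenSchema: DemoSchema = {
  type: 'object',
  properties: {
    prompt: { type: 'string' },
    aspectRatio: { type: 'string', enum: ['1:1', '16:9', '9:16'] },
  },
  required: ['prompt'],
};

function demoValidate(schema: DemoSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required) {
    if (args[key] === undefined) errors.push(`missing required property: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) continue; // unknown keys are ignored in this sketch
    if (prop.enum && typeof value === 'string' && !prop.enum.includes(value)) {
      errors.push(`invalid enum value for ${key}: ${value}`);
    }
  }
  return errors;
}
```

Keeping the schema as data (rather than a zod object) is what lets the registry hand model-facing tool definitions to the LLM without ever importing or constructing the tool implementation.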
+
 export const geminiToolkit = {
   gemini_image_gen: {
     name: 'gemini_image_gen' as const,
@@ -77,22 +106,7 @@
 9. Use imageSize to control the resolution: 1K (standard), 2K (high), 4K (maximum quality).
 
 The prompt should be a detailed paragraph describing every part of the image in concrete, objective detail.`,
-    schema: z.object({
-      prompt: z.string().max(32000).describe(getGeminiImageGenPromptDescription()),
-      image_ids: z.array(z.string()).optional().describe(getGeminiImageIdsDescription()),
-      aspectRatio: z
-        .enum(['1:1', '2:3', '3:2', '3:4', '4:3', '4:5', '5:4', '9:16', '16:9', '21:9'])
-        .optional()
-        .describe(
-          'The aspect ratio of the generated image. Use 16:9 or 3:2 for landscape, 9:16 or 2:3 for portrait, 21:9 for ultra-wide/cinematic, 1:1 for square. Defaults to 1:1 if not specified.',
-        ),
-      imageSize: z
-        .enum(['1K', '2K', '4K'])
-        .optional()
-        .describe(
-          'The resolution of the generated image. Use 1K for standard, 2K for high, 4K for maximum quality. Defaults to 1K if not specified.',
-        ),
-    }),
+    schema: geminiImageGenJsonSchema,
     responseFormat: 'content_and_artifact' as const,
   },
 } as const;

@@ -1,4 +1,4 @@
-import { z } from 'zod';
+import type { ExtendedJsonSchema } from '../registry/definitions';
 
 /** Default descriptions for image generation tool */
 const DEFAULT_IMAGE_GEN_DESCRIPTION =
@@ -67,87 +67,81 @@ const getImageEditPromptDescription = () => {
   return process.env.IMAGE_EDIT_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION;
 };
 
+const imageGenOaiJsonSchema: ExtendedJsonSchema = {
+  type: 'object',
+  properties: {
+    prompt: {
+      type: 'string',
+      maxLength: 32000,
+      description: getImageGenPromptDescription(),
+    },
+    background: {
+      type: 'string',
+      enum: ['transparent', 'opaque', 'auto'],
+      description:
+        'Sets transparency for the background. Must be one of transparent, opaque or auto (default). When transparent, the output format should be png or webp.',
+    },
+    quality: {
+      type: 'string',
+      enum: ['auto', 'high', 'medium', 'low'],
+      description: 'The quality of the image. One of auto (default), high, medium, or low.',
+    },
+    size: {
+      type: 'string',
+      enum: ['auto', '1024x1024', '1536x1024', '1024x1536'],
+      description:
+        'The size of the generated image. One of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default).',
+    },
+  },
+  required: ['prompt'],
+};
+
+const imageEditOaiJsonSchema: ExtendedJsonSchema = {
+  type: 'object',
+  properties: {
+    image_ids: {
+      type: 'array',
+      items: { type: 'string' },
+      minItems: 1,
+      description: `IDs (image ID strings) of previously generated or uploaded images that should guide the edit.
+
+Guidelines:
+- If the user's request depends on any prior image(s), copy their image IDs into the \`image_ids\` array (in the same order the user refers to them).
+- Never invent or hallucinate IDs; only use IDs that are still visible in the conversation context.
+- If no earlier image is relevant, omit the field entirely.`,
+    },
+    prompt: {
+      type: 'string',
+      maxLength: 32000,
+      description: getImageEditPromptDescription(),
+    },
+    quality: {
+      type: 'string',
+      enum: ['auto', 'high', 'medium', 'low'],
+      description:
+        'The quality of the image. One of auto (default), high, medium, or low. High/medium/low only supported for gpt-image-1.',
+    },
+    size: {
+      type: 'string',
+      enum: ['auto', '1024x1024', '1536x1024', '1024x1536', '256x256', '512x512'],
+      description:
+        'The size of the generated images. For gpt-image-1: auto (default), 1024x1024, 1536x1024, 1024x1536. For dall-e-2: 256x256, 512x512, 1024x1024.',
+    },
+  },
+  required: ['image_ids', 'prompt'],
+};
+
 export const oaiToolkit = {
   image_gen_oai: {
     name: 'image_gen_oai' as const,
     description: getImageGenDescription(),
-    schema: z.object({
-      prompt: z.string().max(32000).describe(getImageGenPromptDescription()),
-      background: z
-        .enum(['transparent', 'opaque', 'auto'])
-        .optional()
-        .describe(
-          'Sets transparency for the background. Must be one of transparent, opaque or auto (default). When transparent, the output format should be png or webp.',
-        ),
-      /*
-      n: z
-        .number()
-        .int()
-        .min(1)
-        .max(10)
-        .optional()
-        .describe('The number of images to generate. Must be between 1 and 10.'),
-      output_compression: z
-        .number()
-        .int()
-        .min(0)
-        .max(100)
-        .optional()
-        .describe('The compression level (0-100%) for webp or jpeg formats. Defaults to 100.'),
-      */
-      quality: z
-        .enum(['auto', 'high', 'medium', 'low'])
-        .optional()
-        .describe('The quality of the image. One of auto (default), high, medium, or low.'),
-      size: z
-        .enum(['auto', '1024x1024', '1536x1024', '1024x1536'])
-        .optional()
-        .describe(
-          'The size of the generated image. One of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default).',
-        ),
-    }),
+    schema: imageGenOaiJsonSchema,
     responseFormat: 'content_and_artifact' as const,
   } as const,
   image_edit_oai: {
     name: 'image_edit_oai' as const,
     description: getImageEditDescription(),
-    schema: z.object({
-      image_ids: z
-        .array(z.string())
-        .min(1)
-        .describe(
-          `
-IDs (image ID strings) of previously generated or uploaded images that should guide the edit.
-
-Guidelines:
-- If the user's request depends on any prior image(s), copy their image IDs into the \`image_ids\` array (in the same order the user refers to them).
-- Never invent or hallucinate IDs; only use IDs that are still visible in the conversation context.
-- If no earlier image is relevant, omit the field entirely.
-`.trim(),
-        ),
-      prompt: z.string().max(32000).describe(getImageEditPromptDescription()),
-      /*
-      n: z
-        .number()
-        .int()
-        .min(1)
-        .max(10)
-        .optional()
-        .describe('The number of images to generate. Must be between 1 and 10. Defaults to 1.'),
-      */
-      quality: z
-        .enum(['auto', 'high', 'medium', 'low'])
-        .optional()
-        .describe(
-          'The quality of the image. One of auto (default), high, medium, or low. High/medium/low only supported for gpt-image-1.',
-        ),
-      size: z
-        .enum(['auto', '1024x1024', '1536x1024', '1024x1536', '256x256', '512x512'])
-        .optional()
-        .describe(
-          'The size of the generated images. For gpt-image-1: auto (default), 1024x1024, 1536x1024, 1024x1536. For dall-e-2: 256x256, 512x512, 1024x1024.',
-        ),
-    }),
+    schema: imageEditOaiJsonSchema,
     responseFormat: 'content_and_artifact' as const,
   },
 } as const;
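Once toolkits carry only plain schema data, the "event-driven lazy loading" described in this PR amounts to handing the model a lightweight proxy and constructing the real tool only on first invocation. The sketch below illustrates that deferred-construction-with-caching idea under stated assumptions; `DemoTool`, `createLazyToolProxy`, and `buildExpensiveTool` are hypothetical names, not LibreChat's LocalToolExecutor API:

```typescript
// Illustrative lazy tool proxy: defer construction to first call, then cache.
interface DemoTool {
  name: string;
  invoke(args: Record<string, unknown>): Promise<string>;
}

let constructions = 0;

// Stands in for the expensive part: loading credentials, building clients, etc.
function buildExpensiveTool(name: string): DemoTool {
  constructions += 1;
  return { name, invoke: async () => `${name} executed` };
}

function createLazyToolProxy(name: string): DemoTool {
  let instance: DemoTool | undefined;
  return {
    name,
    async invoke(args) {
      // Construct on first invocation only; reuse the cached instance afterwards.
      instance ??= buildExpensiveTool(name);
      return instance.invoke(args);
    },
  };
}
```

The payoff is that an agent configured with many tools pays construction cost only for the tools the model actually calls, while the proxy still exposes the name and schema needed for the model-facing tool definition up front.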