Mirror of https://github.com/danny-avila/LibreChat.git, synced 2026-02-02 07:41:49 +01:00
🦥 refactor: Event-Driven Lazy Tool Loading (#11588)
* refactor: json schema tools with lazy loading
  - Added LocalToolExecutor class for lazy loading and caching of tools during execution.
  - Introduced ToolExecutionContext and ToolExecutor interfaces for better type management.
  - Created utility functions to generate tool proxies with JSON schema support.
  - Added ExtendedJsonSchema type for enhanced schema definitions.
  - Updated existing toolkits to utilize the new schema and executor functionalities.
  - Introduced a comprehensive tool definitions registry for managing various tool schemas.

  chore: update @librechat/agents to version 3.1.2

  refactor: enhance tool loading optimization and classification
  - Improved the loadAgentToolsOptimized function to utilize a proxy pattern for all tools, enabling deferred execution and reducing overhead.
  - Introduced caching for tool instances and refined tool classification logic to streamline tool management.
  - Updated the handling of MCP tools to improve logging and error reporting for missing tools in the cache.
  - Enhanced the structure of tool definitions to support better classification and integration with existing tools.

  refactor: modularize tool loading and enhance optimization
  - Moved the loadAgentToolsOptimized function to a new service file for better organization and maintainability.
  - Updated the ToolService to utilize the new service for optimized tool loading, improving code clarity.
  - Removed legacy tool loading methods and streamlined the tool loading process to enhance performance and reduce complexity.
  - Introduced feature flag handling for optimized tool loading, allowing for easier toggling of this functionality.

  refactor: replace loadAgentToolsWithFlag with loadAgentTools in tool loader

  refactor: enhance MCP tool loading with proxy creation and classification

  refactor: optimize MCP tool loading by grouping tools by server
  - Introduced a Map to group cached tools by server name, improving the organization of tool data.
  - Updated the createMCPProxyTool function to accept server name directly, enhancing clarity.
  - Refactored the logic for handling MCP tools, streamlining the process of creating proxy tools for classification.

  refactor: enhance MCP tool loading and proxy creation
  - Added functionality to retrieve MCP server tools and reinitialize servers if necessary, improving tool availability.
  - Updated the tool loading logic to utilize a Map for organizing tools by server, enhancing clarity and performance.
  - Refactored the createToolProxy function to ensure a default response format, streamlining tool creation.

  refactor: update createToolProxy to ensure consistent response format
  - Modified the createToolProxy function to await the executor's execution and validate the result format.
  - Ensured that the function returns a default response structure when the result is not an array of two elements, enhancing reliability in tool proxy creation.

  refactor: ToolExecutionContext with toolCall property
  - Added toolCall property to ToolExecutionContext interface for improved context handling during tool execution.
  - Updated LocalToolExecutor to include toolCall in the runnable configuration, allowing for more flexible tool invocation.
  - Modified createToolProxy to pass toolCall from the configuration, ensuring consistent context across tool executions.

  refactor: enhance event-driven tool execution and logging
  - Introduced ToolExecuteOptions for improved handling of event-driven tool execution, allowing for parallel execution of tool calls.
  - Updated getDefaultHandlers to include support for ON_TOOL_EXECUTE events, enhancing the flexibility of tool invocation.
  - Added detailed logging in LocalToolExecutor to track tool loading and execution metrics, improving observability and debugging capabilities.
  - Refactored initializeClient to integrate event-driven tool loading, ensuring compatibility with the new execution model.

  chore: update @librechat/agents to version 3.1.21

  refactor: remove legacy tool loading and executor components
  - Eliminated the loadAgentToolsWithFlag function, simplifying the tool loading process by directly using loadAgentTools.
  - Removed the LocalToolExecutor and related executor components to streamline the tool execution architecture.
  - Updated ToolService and related files to reflect the removal of deprecated features, enhancing code clarity and maintainability.

  refactor: enhance tool classification and definitions handling
  - Updated the loadAgentTools function to return toolDefinitions alongside toolRegistry, improving the structure of tool data returned to clients.
  - Removed the convertRegistryToDefinitions function from the initialize.js file, simplifying the initialization process.
  - Adjusted the buildToolClassification function to ensure toolDefinitions are built and returned simultaneously with the toolRegistry, enhancing efficiency in tool management.
  - Updated type definitions in initialize.ts to include toolDefinitions, ensuring consistency across the codebase.

  refactor: implement event-driven tool execution handler
  - Introduced createToolExecuteHandler function to streamline the handling of ON_TOOL_EXECUTE events, allowing for parallel execution of tool calls.
  - Updated getDefaultHandlers to utilize the new handler, simplifying the event-driven architecture.
  - Added handlers.ts file to encapsulate tool execution logic, improving code organization and maintainability.
  - Enhanced OpenAI handlers to integrate the new tool execution capabilities, ensuring consistent event handling across the application.

  refactor: integrate event-driven tool execution options
  - Added toolExecuteOptions to support event-driven tool execution in OpenAI and responses controllers, enhancing flexibility in tool handling.
  - Updated handlers to utilize createToolExecuteHandler, allowing for streamlined execution of tools during agent interactions.
  - Refactored service dependencies to include toolExecuteOptions, ensuring consistent integration across the application.

  refactor: enhance tool loading with definitionsOnly parameter
  - Updated createToolLoader and loadAgentTools functions to include a definitionsOnly parameter, allowing for the retrieval of only serializable tool definitions in event-driven mode.
  - Adjusted related interfaces and documentation to reflect the new parameter, improving clarity and flexibility in tool management.
  - Ensured compatibility across various components by integrating the definitionsOnly option in the initialization process.

  refactor: improve agent tool presence check in initialization
  - Added a check for tool presence using a new hasAgentTools variable, which evaluates both structuredTools and toolDefinitions.
  - Updated the conditional logic in the agent initialization process to utilize the hasAgentTools variable, enhancing clarity and maintainability in tool management.

  refactor: enhance agent tool extraction to support tool definitions
  - Updated the extractMCPServers function to handle both tool instances and serializable tool definitions, improving flexibility in agent tool management.
  - Added a new property toolDefinitions to the AgentWithTools type for better integration of event-driven mode.
  - Enhanced documentation to clarify the function's capabilities in extracting unique MCP server names from both tools and tool definitions.

  refactor: enhance tool classification and registry building
  - Added serverName property to ToolDefinition for improved tool identification.
  - Introduced buildToolRegistry function to streamline the creation of tool registries based on MCP tool definitions and agent options.
  - Updated buildToolClassification to utilize the new registry building logic, ensuring basic definitions are returned even when advanced classification features are not allowed.
  - Enhanced documentation and logging for clarity in tool classification processes.

  refactor: update @librechat/agents dependency to version 3.1.22

  fix: expose loadTools function in ToolService
  - Added loadTools function to the exported module in ToolService.js, enhancing the accessibility of tool loading functionality.

  chore: remove configurable options from tool execute options in OpenAI controller

  refactor: enhance tool loading mechanism to utilize agent-specific context

  chore: update @librechat/agents dependency to version 3.1.23

  fix: simplify result handling in createToolExecuteHandler

* refactor: loadToolDefinitions for efficient tool loading in event-driven mode

* refactor: replace legacy tool loading with loadToolsForExecution in OpenAI and responses controllers
  - Updated OpenAIChatCompletionController and createResponse functions to utilize loadToolsForExecution for improved tool loading.
  - Removed deprecated loadToolsLegacy references, streamlining the tool execution process.
  - Enhanced tool loading options to include agent-specific context and configurations.

* refactor: enhance tool loading and execution handling
  - Introduced loadActionToolsForExecution function to streamline loading of action tools, improving organization and maintainability.
  - Updated loadToolsForExecution to handle both regular and action tools, optimizing the tool loading process.
  - Added detailed logging for missing tools in createToolExecuteHandler, enhancing error visibility.
  - Refactored tool definitions to normalize action tool names, improving consistency in tool management.

* refactor: enhance built-in tool definitions loading
  - Updated loadToolDefinitions to include descriptions and parameters from the tool registry for built-in tools, improving the clarity and usability of tool definitions.
  - Integrated getToolDefinition to streamline the retrieval of tool metadata, enhancing the overall tool management process.

* feat: add action tool definitions loading to tool service
  - Introduced getActionToolDefinitions function to load action tool definitions based on agent ID and tool names, enhancing the tool loading process.
  - Updated loadToolDefinitions to integrate action tool definitions, allowing for better management and retrieval of action-specific tools.
  - Added comprehensive tests for action tool definitions to ensure correct loading and parameter handling, improving overall reliability and functionality.

* chore: update @librechat/agents dependency to version 3.1.26

* refactor: add toolEndCallback to handle tool execution results

* fix: tool definitions and execution handling
  - Introduced native tools (execute_code, file_search, web_search) to the tool service, allowing for better integration and management of these tools.
  - Updated isBuiltInTool function to include native tools in the built-in check, improving tool recognition.
  - Added comprehensive tests for loading parameters of native tools, ensuring correct functionality and parameter handling.
  - Enhanced tool definitions registry to include new agent tool definitions, streamlining tool retrieval and management.

* refactor: enhance tool loading and execution context
  - Added toolRegistry to the context for OpenAIChatCompletionController and createResponse functions, improving tool management.
  - Updated loadToolsForExecution to utilize toolRegistry for better integration of programmatic tools and tool search functionalities.
  - Enhanced the initialization process to include toolRegistry in agent context, streamlining tool access and configuration.
  - Refactored tool classification logic to support event-driven execution, ensuring compatibility with new tool definitions.

* chore: add request duration logging to OpenAI and Responses controllers
  - Introduced logging for request start and completion times in OpenAIChatCompletionController and createResponse functions.
  - Calculated and logged the duration of each request, enhancing observability and performance tracking.
  - Improved debugging capabilities by providing detailed logs for both streaming and non-streaming responses.

* chore: update @librechat/agents dependency to version 3.1.27

* refactor: implement buildToolSet function for tool management
  - Introduced buildToolSet function to streamline the creation of tool sets from agent configurations, enhancing tool management across various controllers.
  - Updated AgentClient, OpenAIChatCompletionController, and createResponse functions to utilize buildToolSet, improving consistency in tool handling.
  - Added comprehensive tests for buildToolSet to ensure correct functionality and edge case handling, enhancing overall reliability.

* refactor: update import paths for ToolExecuteOptions and createToolExecuteHandler

* fix: update GoogleSearch.js description for maximum search results
  - Changed the default maximum number of search results from 10 to 5 in the Google Search JSON schema description, ensuring accurate documentation of the expected behavior.

* chore: remove deprecated Browser tool and associated assets
  - Deleted the Browser tool definition from manifest.json, which included its name, plugin key, description, and authentication configuration.
  - Removed the web-browser.svg asset as it is no longer needed following the removal of the Browser tool.

* fix: ensure tool definitions are valid before processing
  - Added a check to verify the existence of tool definitions in the registry before accessing their properties, preventing potential runtime errors.
  - Updated the loading logic for built-in tool definitions to ensure that only valid definitions are pushed to the built-in tool definitions array.

* fix: extend ExtendedJsonSchema to support 'null' type and nullable enums
  - Updated the ExtendedJsonSchema type to include 'null' as a valid type option.
  - Modified the enum property to accept an array of values that can include strings, numbers, booleans, and null, enhancing schema flexibility.

* test: add comprehensive tests for tool definitions loading and registry behavior
  - Implemented tests to verify the handling of built-in tools without registry definitions, ensuring they are skipped correctly.
  - Added tests to confirm that built-in tools include descriptions and parameters in the registry.
  - Enhanced tests for action tools, checking for proper inclusion of metadata and handling of tools without parameters in the registry.

* test: add tests for mixed-type and number enum schema handling
  - Introduced tests to validate the parsing of mixed-type enum values, including strings, numbers, booleans, and null.
  - Added tests for number enum schema values to ensure correct parsing of numeric inputs, enhancing schema validation coverage.

* fix: update mock implementation for @librechat/agents
  - Changed the mock for @librechat/agents to spread the actual module's properties, ensuring that all necessary functionalities are preserved in tests.
  - This adjustment enhances the accuracy of the tests by reflecting the real structure of the module.

* fix: change max_results type in GoogleSearch schema from number to integer
  - Updated the type of max_results in the Google Search JSON schema to 'integer' for better type accuracy and validation consistency.

* fix: update max_results description and type in GoogleSearch schema
  - Changed the type of max_results from 'number' to 'integer' for improved type accuracy.
  - Updated the description to reflect the new default maximum number of search results, changing it from 10 to 5.

* refactor: remove unused code and improve tool registry handling
  - Eliminated outdated comments and conditional logic related to event-driven mode in the ToolService.
  - Enhanced the handling of the tool registry by ensuring it is configurable for better integration during tool execution.

* feat: add definitionsOnly option to buildToolClassification for event-driven mode
  - Introduced a new parameter, definitionsOnly, to the BuildToolClassificationParams interface to enable a mode that skips tool instance creation.
  - Updated the buildToolClassification function to conditionally add tool definitions without instantiating tools when definitionsOnly is true.
  - Modified the loadToolDefinitions function to pass definitionsOnly as true, ensuring compatibility with the new feature.

* test: add unit tests for buildToolClassification with definitionsOnly option
  - Implemented tests to verify the behavior of buildToolClassification when definitionsOnly is set to true or false.
  - Ensured that tool instances are not created when definitionsOnly is true, while still adding necessary tool definitions.
  - Confirmed that loadAuthValues is called appropriately based on the definitionsOnly parameter, enhancing test coverage for this new feature.
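The proxy-based lazy loading summarized above can be pictured with a small sketch. This is a minimal illustration in plain Node.js, not the actual LibreChat implementation; the names createLazyToolProxy and loadRealTool are assumptions. It only shows the general shape of a tool proxy that carries a serializable JSON schema, defers construction of the real tool until first invocation, and caches the instance afterwards.

function createLazyToolProxy({ name, description, schema, loadRealTool }) {
  let instance = null; // the real tool, constructed lazily and cached

  return {
    name,
    description,
    schema, // plain JSON schema: serializable, so it can be sent to the model up front
    async invoke(input, config) {
      if (instance === null) {
        // Expensive setup (auth lookups, client construction) only happens here,
        // the first time the model actually calls the tool.
        instance = await loadRealTool(name, config);
      }
      return instance.invoke(input, config);
    },
  };
}

A loader built this way can hand every tool to the agent as a cheap proxy and pay the construction cost only for tools that are actually called.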
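The event-driven path (ON_TOOL_EXECUTE handled by a createToolExecuteHandler-style function, with a turn's tool calls executed in parallel) could look roughly like the sketch below. The event payload shape, the getTool lookup, and the result shape are assumptions for illustration, not the exact @librechat/agents contract.

function createToolExecuteHandler(getTool, logger = console) {
  return async function onToolExecute({ toolCalls = [] }, config) {
    // Execute all tool calls of a turn concurrently.
    return Promise.all(
      toolCalls.map(async (toolCall) => {
        const toolInstance = await getTool(toolCall.name);
        if (!toolInstance) {
          // Missing tools are logged rather than crashing the whole batch.
          logger.warn(`Tool "${toolCall.name}" not found in cache`);
          return { toolCall, error: new Error(`Unknown tool: ${toolCall.name}`) };
        }
        // Pass the originating toolCall through the config, mirroring the
        // ToolExecutionContext.toolCall idea mentioned above.
        const output = await toolInstance.invoke(toolCall.args, { ...config, toolCall });
        return { toolCall, output };
      }),
    );
  };
}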
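The definitionsOnly mode boils down to building serializable definitions from each tool's static jsonSchema getter (the pattern added throughout this diff) without ever calling a constructor. A hedged sketch follows; the registry lookup for descriptions and the helper name are assumptions, not the real loadToolDefinitions export.

function loadToolDefinitionsSketch(toolClasses, registry = {}) {
  const toolDefinitions = {};
  for (const [key, ToolClass] of Object.entries(toolClasses)) {
    // Note: no `new ToolClass(...)` here; only static metadata is read.
    toolDefinitions[key] = {
      name: key,
      description: registry[key]?.description ?? '',
      parameters: ToolClass.jsonSchema,
    };
  }
  return toolDefinitions;
}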
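buildToolSet, as described above, is essentially a normalization step from agent configuration to a Set used for membership checks when classifying incoming tool calls. The field names read from the agent object in this sketch are assumptions:

function buildToolSetSketch(agent) {
  const toolSet = new Set();
  for (const entry of agent?.tools ?? []) {
    if (typeof entry === 'string') {
      toolSet.add(entry); // plain tool key
    } else if (typeof entry?.name === 'string') {
      toolSet.add(entry.name); // structured tool instance
    } else if (typeof entry?.function?.name === 'string') {
      toolSet.add(entry.function.name); // OpenAI-style function tool definition
    }
  }
  return toolSet;
}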
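A test in the spirit of the definitionsOnly unit tests mentioned above can assert that no tool instance gets constructed. This jest-style sketch exercises the illustrative loadToolDefinitionsSketch helper from the previous block, not the real buildToolClassification export.

describe('definitionsOnly mode (sketch)', () => {
  it('builds definitions without instantiating tools', () => {
    const constructed = jest.fn();
    class FakeTool {
      constructor() {
        constructed();
      }
      static get jsonSchema() {
        return { type: 'object', properties: {}, required: [] };
      }
    }

    const defs = loadToolDefinitionsSketch({ fake_tool: FakeTool });

    expect(constructed).not.toHaveBeenCalled();
    expect(defs.fake_tool.parameters).toEqual({ type: 'object', properties: {}, required: [] });
  });
});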
Parent 6279ea8dd7 · commit 5af1342dbb
46 changed files with 3297 additions and 565 deletions
@@ -41,9 +41,9 @@ jest.mock('~/models', () => ({
const { getConvo, saveConvo } = require('~/models');

jest.mock('@librechat/agents', () => {
  const { Providers } = jest.requireActual('@librechat/agents');
  const actual = jest.requireActual('@librechat/agents');
  return {
    Providers,
    ...actual,
    ChatOpenAI: jest.fn().mockImplementation(() => {
      return {};
    }),
@@ -57,19 +57,6 @@
      }
    ]
  },
  {
    "name": "Browser",
    "pluginKey": "web-browser",
    "description": "Scrape and summarize webpage data",
    "icon": "assets/web-browser.svg",
    "authConfig": [
      {
        "authField": "OPENAI_API_KEY",
        "label": "OpenAI API Key",
        "description": "Browser makes use of OpenAI embeddings"
      }
    ]
  },
  {
    "name": "DALL-E-3",
    "pluginKey": "dalle",
@@ -1,14 +1,28 @@
const { z } = require('zod');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { SearchClient, AzureKeyCredential } = require('@azure/search-documents');

const azureAISearchJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      description: 'Search word or phrase to Azure AI Search',
    },
  },
  required: ['query'],
};

class AzureAISearch extends Tool {
  // Constants for default values
  static DEFAULT_API_VERSION = '2023-11-01';
  static DEFAULT_QUERY_TYPE = 'simple';
  static DEFAULT_TOP = 5;

  static get jsonSchema() {
    return azureAISearchJsonSchema;
  }

  // Helper function for initializing properties
  _initializeField(field, envVar, defaultValue) {
    return field || process.env[envVar] || defaultValue;

@@ -22,10 +36,7 @@ class AzureAISearch extends Tool {
    /* Used to initialize the Tool without necessary variables. */
    this.override = fields.override ?? false;

    // Define schema
    this.schema = z.object({
      query: z.string().describe('Search word or phrase to Azure AI Search'),
    });
    this.schema = azureAISearchJsonSchema;

    // Initialize properties using helper function
    this.serviceEndpoint = this._initializeField(
@@ -1,4 +1,3 @@
const { z } = require('zod');
const path = require('path');
const OpenAI = require('openai');
const { v4: uuidv4 } = require('uuid');

@@ -8,6 +7,36 @@ const { logger } = require('@librechat/data-schemas');
const { getImageBasename, extractBaseURL } = require('@librechat/api');
const { FileContext, ContentTypes } = require('librechat-data-provider');

const dalle3JsonSchema = {
  type: 'object',
  properties: {
    prompt: {
      type: 'string',
      maxLength: 4000,
      description:
        'A text description of the desired image, following the rules, up to 4000 characters.',
    },
    style: {
      type: 'string',
      enum: ['vivid', 'natural'],
      description:
        'Must be one of `vivid` or `natural`. `vivid` generates hyper-real and dramatic images, `natural` produces more natural, less hyper-real looking images',
    },
    quality: {
      type: 'string',
      enum: ['hd', 'standard'],
      description: 'The quality of the generated image. Only `hd` and `standard` are supported.',
    },
    size: {
      type: 'string',
      enum: ['1024x1024', '1792x1024', '1024x1792'],
      description:
        'The size of the requested image. Use 1024x1024 (square) as the default, 1792x1024 if the user requests a wide image, and 1024x1792 for full-body portraits. Always include this parameter in the request.',
    },
  },
  required: ['prompt', 'style', 'quality', 'size'],
};

const displayMessage =
  "DALL-E displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
class DALLE3 extends Tool {

@@ -72,27 +101,11 @@ class DALLE3 extends Tool {
    // The prompt must intricately describe every part of the image in concrete, objective detail. THINK about what the end goal of the description is, and extrapolate that to what would make satisfying images.
    // All descriptions sent to dalle should be a paragraph of text that is extremely descriptive and detailed. Each should be more than 3 sentences long.
    // - The "vivid" style is HIGHLY preferred, but "natural" is also supported.`;
    this.schema = z.object({
      prompt: z
        .string()
        .max(4000)
        .describe(
          'A text description of the desired image, following the rules, up to 4000 characters.',
        ),
      style: z
        .enum(['vivid', 'natural'])
        .describe(
          'Must be one of `vivid` or `natural`. `vivid` generates hyper-real and dramatic images, `natural` produces more natural, less hyper-real looking images',
        ),
      quality: z
        .enum(['hd', 'standard'])
        .describe('The quality of the generated image. Only `hd` and `standard` are supported.'),
      size: z
        .enum(['1024x1024', '1792x1024', '1024x1792'])
        .describe(
          'The size of the requested image. Use 1024x1024 (square) as the default, 1792x1024 if the user requests a wide image, and 1024x1792 for full-body portraits. Always include this parameter in the request.',
        ),
    });
    this.schema = dalle3JsonSchema;
  }

  static get jsonSchema() {
    return dalle3JsonSchema;
  }

  getApiKey() {
@@ -1,4 +1,3 @@
const { z } = require('zod');
const axios = require('axios');
const fetch = require('node-fetch');
const { v4: uuidv4 } = require('uuid');

@@ -7,6 +6,84 @@ const { logger } = require('@librechat/data-schemas');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { FileContext, ContentTypes } = require('librechat-data-provider');

const fluxApiJsonSchema = {
  type: 'object',
  properties: {
    action: {
      type: 'string',
      enum: ['generate', 'list_finetunes', 'generate_finetuned'],
      description:
        'Action to perform: "generate" for image generation, "generate_finetuned" for finetuned model generation, "list_finetunes" to get available custom models',
    },
    prompt: {
      type: 'string',
      description:
        'Text prompt for image generation. Required when action is "generate". Not used for list_finetunes.',
    },
    width: {
      type: 'number',
      description:
        'Width of the generated image in pixels. Must be a multiple of 32. Default is 1024.',
    },
    height: {
      type: 'number',
      description:
        'Height of the generated image in pixels. Must be a multiple of 32. Default is 768.',
    },
    prompt_upsampling: {
      type: 'boolean',
      description: 'Whether to perform upsampling on the prompt.',
    },
    steps: {
      type: 'integer',
      description: 'Number of steps to run the model for, a number from 1 to 50. Default is 40.',
    },
    seed: {
      type: 'number',
      description: 'Optional seed for reproducibility.',
    },
    safety_tolerance: {
      type: 'number',
      description:
        'Tolerance level for input and output moderation. Between 0 and 6, 0 being most strict, 6 being least strict.',
    },
    endpoint: {
      type: 'string',
      enum: [
        '/v1/flux-pro-1.1',
        '/v1/flux-pro',
        '/v1/flux-dev',
        '/v1/flux-pro-1.1-ultra',
        '/v1/flux-pro-finetuned',
        '/v1/flux-pro-1.1-ultra-finetuned',
      ],
      description: 'Endpoint to use for image generation.',
    },
    raw: {
      type: 'boolean',
      description:
        'Generate less processed, more natural-looking images. Only works for /v1/flux-pro-1.1-ultra.',
    },
    finetune_id: {
      type: 'string',
      description: 'ID of the finetuned model to use',
    },
    finetune_strength: {
      type: 'number',
      description: 'Strength of the finetuning effect (typically between 0.1 and 1.2)',
    },
    guidance: {
      type: 'number',
      description: 'Guidance scale for finetuned models',
    },
    aspect_ratio: {
      type: 'string',
      description: 'Aspect ratio for ultra models (e.g., "16:9")',
    },
  },
  required: [],
};

const displayMessage =
  "Flux displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";

@@ -57,82 +134,11 @@ class FluxAPI extends Tool {
    // Add base URL from environment variable with fallback
    this.baseUrl = process.env.FLUX_API_BASE_URL || 'https://api.us1.bfl.ai';

    // Define the schema for structured input
    this.schema = z.object({
      action: z
        .enum(['generate', 'list_finetunes', 'generate_finetuned'])
        .default('generate')
        .describe(
          'Action to perform: "generate" for image generation, "generate_finetuned" for finetuned model generation, "list_finetunes" to get available custom models',
        ),
      prompt: z
        .string()
        .optional()
        .describe(
          'Text prompt for image generation. Required when action is "generate". Not used for list_finetunes.',
        ),
      width: z
        .number()
        .optional()
        .describe(
          'Width of the generated image in pixels. Must be a multiple of 32. Default is 1024.',
        ),
      height: z
        .number()
        .optional()
        .describe(
          'Height of the generated image in pixels. Must be a multiple of 32. Default is 768.',
        ),
      prompt_upsampling: z
        .boolean()
        .optional()
        .default(false)
        .describe('Whether to perform upsampling on the prompt.'),
      steps: z
        .number()
        .int()
        .optional()
        .describe('Number of steps to run the model for, a number from 1 to 50. Default is 40.'),
      seed: z.number().optional().describe('Optional seed for reproducibility.'),
      safety_tolerance: z
        .number()
        .optional()
        .default(6)
        .describe(
          'Tolerance level for input and output moderation. Between 0 and 6, 0 being most strict, 6 being least strict.',
        ),
      endpoint: z
        .enum([
          '/v1/flux-pro-1.1',
          '/v1/flux-pro',
          '/v1/flux-dev',
          '/v1/flux-pro-1.1-ultra',
          '/v1/flux-pro-finetuned',
          '/v1/flux-pro-1.1-ultra-finetuned',
        ])
        .optional()
        .default('/v1/flux-pro-1.1')
        .describe('Endpoint to use for image generation.'),
      raw: z
        .boolean()
        .optional()
        .default(false)
        .describe(
          'Generate less processed, more natural-looking images. Only works for /v1/flux-pro-1.1-ultra.',
        ),
      finetune_id: z.string().optional().describe('ID of the finetuned model to use'),
      finetune_strength: z
        .number()
        .optional()
        .default(1.1)
        .describe('Strength of the finetuning effect (typically between 0.1 and 1.2)'),
      guidance: z.number().optional().default(2.5).describe('Guidance scale for finetuned models'),
      aspect_ratio: z
        .string()
        .optional()
        .default('16:9')
        .describe('Aspect ratio for ultra models (e.g., "16:9")'),
    });
    this.schema = fluxApiJsonSchema;
  }

  static get jsonSchema() {
    return fluxApiJsonSchema;
  }

  getAxiosConfig() {
@@ -1,12 +1,33 @@
const { z } = require('zod');
const { Tool } = require('@langchain/core/tools');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');

const googleSearchJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      minLength: 1,
      description: 'The search query string.',
    },
    max_results: {
      type: 'integer',
      minimum: 1,
      maximum: 10,
      description: 'The maximum number of search results to return. Defaults to 5.',
    },
  },
  required: ['query'],
};

class GoogleSearchResults extends Tool {
  static lc_name() {
    return 'google';
  }

  static get jsonSchema() {
    return googleSearchJsonSchema;
  }

  constructor(fields = {}) {
    super(fields);
    this.name = 'google';

@@ -28,25 +49,11 @@ class GoogleSearchResults extends Tool {
    this.description =
      'A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events.';

    this.schema = z.object({
      query: z.string().min(1).describe('The search query string.'),
      max_results: z
        .number()
        .min(1)
        .max(10)
        .optional()
        .describe('The maximum number of search results to return. Defaults to 10.'),
      // Note: Google API has its own parameters for search customization, adjust as needed.
    });
    this.schema = googleSearchJsonSchema;
  }

  async _call(input) {
    const validationResult = this.schema.safeParse(input);
    if (!validationResult.success) {
      throw new Error(`Validation failed: ${JSON.stringify(validationResult.error.issues)}`);
    }

    const { query, max_results = 5 } = validationResult.data;
    const { query, max_results = 5 } = input;

    const response = await fetch(
      `https://www.googleapis.com/customsearch/v1?key=${this.apiKey}&cx=${
@@ -1,8 +1,52 @@
const { Tool } = require('@langchain/core/tools');
const { z } = require('zod');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');
const fetch = require('node-fetch');

const openWeatherJsonSchema = {
  type: 'object',
  properties: {
    action: {
      type: 'string',
      enum: ['help', 'current_forecast', 'timestamp', 'daily_aggregation', 'overview'],
      description: 'The action to perform',
    },
    city: {
      type: 'string',
      description: 'City name for geocoding if lat/lon not provided',
    },
    lat: {
      type: 'number',
      description: 'Latitude coordinate',
    },
    lon: {
      type: 'number',
      description: 'Longitude coordinate',
    },
    exclude: {
      type: 'string',
      description: 'Parts to exclude from the response',
    },
    units: {
      type: 'string',
      enum: ['Celsius', 'Kelvin', 'Fahrenheit'],
      description: 'Temperature units',
    },
    lang: {
      type: 'string',
      description: 'Language code',
    },
    date: {
      type: 'string',
      description: 'Date in YYYY-MM-DD format for timestamp and daily_aggregation',
    },
    tz: {
      type: 'string',
      description: 'Timezone',
    },
  },
  required: ['action'],
};

/**
 * Map user-friendly units to OpenWeather units.
 * Defaults to Celsius if not specified.

@@ -66,17 +110,11 @@ class OpenWeather extends Tool {
    'Units: "Celsius", "Kelvin", or "Fahrenheit" (default: Celsius). ' +
    'For timestamp action, use "date" in YYYY-MM-DD format.';

  schema = z.object({
    action: z.enum(['help', 'current_forecast', 'timestamp', 'daily_aggregation', 'overview']),
    city: z.string().optional(),
    lat: z.number().optional(),
    lon: z.number().optional(),
    exclude: z.string().optional(),
    units: z.enum(['Celsius', 'Kelvin', 'Fahrenheit']).optional(),
    lang: z.string().optional(),
    date: z.string().optional(), // For timestamp and daily_aggregation
    tz: z.string().optional(),
  });
  schema = openWeatherJsonSchema;

  static get jsonSchema() {
    return openWeatherJsonSchema;
  }

  constructor(fields = {}) {
    super();
@@ -1,6 +1,5 @@
// Generates image using stable diffusion webui's api (automatic1111)
const fs = require('fs');
const { z } = require('zod');
const path = require('path');
const axios = require('axios');
const sharp = require('sharp');

@@ -11,6 +10,23 @@ const { FileContext, ContentTypes } = require('librechat-data-provider');
const { getBasePath } = require('@librechat/api');
const paths = require('~/config/paths');

const stableDiffusionJsonSchema = {
  type: 'object',
  properties: {
    prompt: {
      type: 'string',
      description:
        'Detailed keywords to describe the subject, using at least 7 keywords to accurately describe the image, separated by comma',
    },
    negative_prompt: {
      type: 'string',
      description:
        'Keywords we want to exclude from the final image, using at least 7 keywords to accurately describe the image, separated by comma',
    },
  },
  required: ['prompt', 'negative_prompt'],
};

const displayMessage =
  "Stable Diffusion displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";

@@ -46,18 +62,11 @@ class StableDiffusionAPI extends Tool {
    // - Generate images only once per human query unless explicitly requested by the user`;
    this.description =
      "You can generate images using text with 'stable-diffusion'. This tool is exclusively for visual content.";
    this.schema = z.object({
      prompt: z
        .string()
        .describe(
          'Detailed keywords to describe the subject, using at least 7 keywords to accurately describe the image, separated by comma',
        ),
      negative_prompt: z
        .string()
        .describe(
          'Keywords we want to exclude from the final image, using at least 7 keywords to accurately describe the image, separated by comma',
        ),
    });
    this.schema = stableDiffusionJsonSchema;
  }

  static get jsonSchema() {
    return stableDiffusionJsonSchema;
  }

  replaceNewLinesWithSpaces(inputString) {
@@ -1,8 +1,75 @@
const { z } = require('zod');
const { ProxyAgent, fetch } = require('undici');
const { Tool } = require('@langchain/core/tools');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');

const tavilySearchJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      minLength: 1,
      description: 'The search query string.',
    },
    max_results: {
      type: 'number',
      minimum: 1,
      maximum: 10,
      description: 'The maximum number of search results to return. Defaults to 5.',
    },
    search_depth: {
      type: 'string',
      enum: ['basic', 'advanced'],
      description:
        'The depth of the search, affecting result quality and response time (`basic` or `advanced`). Default is basic for quick results and advanced for indepth high quality results but longer response time. Advanced calls equals 2 requests.',
    },
    include_images: {
      type: 'boolean',
      description:
        'Whether to include a list of query-related images in the response. Default is False.',
    },
    include_answer: {
      type: 'boolean',
      description: 'Whether to include answers in the search results. Default is False.',
    },
    include_raw_content: {
      type: 'boolean',
      description: 'Whether to include raw content in the search results. Default is False.',
    },
    include_domains: {
      type: 'array',
      items: { type: 'string' },
      description: 'A list of domains to specifically include in the search results.',
    },
    exclude_domains: {
      type: 'array',
      items: { type: 'string' },
      description: 'A list of domains to specifically exclude from the search results.',
    },
    topic: {
      type: 'string',
      enum: ['general', 'news', 'finance'],
      description:
        'The category of the search. Use news ONLY if query SPECIFCALLY mentions the word "news".',
    },
    time_range: {
      type: 'string',
      enum: ['day', 'week', 'month', 'year', 'd', 'w', 'm', 'y'],
      description: 'The time range back from the current date to filter results.',
    },
    days: {
      type: 'number',
      minimum: 1,
      description: 'Number of days back from the current date to include. Only if topic is news.',
    },
    include_image_descriptions: {
      type: 'boolean',
      description:
        'When include_images is true, also add a descriptive text for each image. Default is false.',
    },
  },
  required: ['query'],
};

class TavilySearchResults extends Tool {
  static lc_name() {
    return 'TavilySearchResults';

@@ -20,64 +87,11 @@ class TavilySearchResults extends Tool {
    this.description =
      'A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events.';

    this.schema = z.object({
      query: z.string().min(1).describe('The search query string.'),
      max_results: z
        .number()
        .min(1)
        .max(10)
        .optional()
        .describe('The maximum number of search results to return. Defaults to 5.'),
      search_depth: z
        .enum(['basic', 'advanced'])
        .optional()
        .describe(
          'The depth of the search, affecting result quality and response time (`basic` or `advanced`). Default is basic for quick results and advanced for indepth high quality results but longer response time. Advanced calls equals 2 requests.',
        ),
      include_images: z
        .boolean()
        .optional()
        .describe(
          'Whether to include a list of query-related images in the response. Default is False.',
        ),
      include_answer: z
        .boolean()
        .optional()
        .describe('Whether to include answers in the search results. Default is False.'),
      include_raw_content: z
        .boolean()
        .optional()
        .describe('Whether to include raw content in the search results. Default is False.'),
      include_domains: z
        .array(z.string())
        .optional()
        .describe('A list of domains to specifically include in the search results.'),
      exclude_domains: z
        .array(z.string())
        .optional()
        .describe('A list of domains to specifically exclude from the search results.'),
      topic: z
        .enum(['general', 'news', 'finance'])
        .optional()
        .describe(
          'The category of the search. Use news ONLY if query SPECIFCALLY mentions the word "news".',
        ),
      time_range: z
        .enum(['day', 'week', 'month', 'year', 'd', 'w', 'm', 'y'])
        .optional()
        .describe('The time range back from the current date to filter results.'),
      days: z
        .number()
        .min(1)
        .optional()
        .describe('Number of days back from the current date to include. Only if topic is news.'),
      include_image_descriptions: z
        .boolean()
        .optional()
        .describe(
          'When include_images is true, also add a descriptive text for each image. Default is false.',
        ),
    });
    this.schema = tavilySearchJsonSchema;
  }

  static get jsonSchema() {
    return tavilySearchJsonSchema;
  }

  getApiKey() {

@@ -89,12 +103,7 @@ class TavilySearchResults extends Tool {
  }

  async _call(input) {
    const validationResult = this.schema.safeParse(input);
    if (!validationResult.success) {
      throw new Error(`Validation failed: ${JSON.stringify(validationResult.error.issues)}`);
    }

    const { query, ...rest } = validationResult.data;
    const { query, ...rest } = input;

    const requestBody = {
      api_key: this.apiKey,
@@ -1,8 +1,19 @@
const { z } = require('zod');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');

const traversaalSearchJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      description:
        "A properly written sentence to be interpreted by an AI to search the web according to the user's request.",
    },
  },
  required: ['query'],
};

/**
 * Tool for the Traversaal AI search API, Ares.
 */

@@ -17,17 +28,15 @@ class TraversaalSearch extends Tool {
      Useful for when you need to answer questions about current events. Input should be a search query.`;
    this.description_for_model =
      '\'Please create a specific sentence for the AI to understand and use as a query to search the web based on the user\'s request. For example, "Find information about the highest mountains in the world." or "Show me the latest news articles about climate change and its impact on polar ice caps."\'';
    this.schema = z.object({
      query: z
        .string()
        .describe(
          "A properly written sentence to be interpreted by an AI to search the web according to the user's request.",
        ),
    });
    this.schema = traversaalSearchJsonSchema;

    this.apiKey = fields?.TRAVERSAAL_API_KEY ?? this.getApiKey();
  }

  static get jsonSchema() {
    return traversaalSearchJsonSchema;
  }

  getApiKey() {
    const apiKey = getEnvironmentVariable('TRAVERSAAL_API_KEY');
    if (!apiKey && this.override) {
@@ -1,9 +1,19 @@
/* eslint-disable no-useless-escape */
const { z } = require('zod');
const axios = require('axios');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');

const wolframJsonSchema = {
  type: 'object',
  properties: {
    input: {
      type: 'string',
      description: 'Natural language query to WolframAlpha following the guidelines',
    },
  },
  required: ['input'],
};

class WolframAlphaAPI extends Tool {
  constructor(fields) {
    super();

@@ -41,9 +51,11 @@ class WolframAlphaAPI extends Tool {
    // -- Do not explain each step unless user input is needed. Proceed directly to making a better API call based on the available assumptions.`;
    this.description = `WolframAlpha offers computation, math, curated knowledge, and real-time data. It handles natural language queries and performs complex calculations.
    Follow the guidelines to get the best results.`;
    this.schema = z.object({
      input: z.string().describe('Natural language query to WolframAlpha following the guidelines'),
    });
    this.schema = wolframJsonSchema;
  }

  static get jsonSchema() {
    return wolframJsonSchema;
  }

  async fetchRawText(url) {
@@ -1,4 +1,3 @@
const { z } = require('zod');
const axios = require('axios');
const { tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');

@@ -7,6 +6,18 @@ const { Tools, EToolResources } = require('librechat-data-provider');
const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
const { getFiles } = require('~/models');

const fileSearchJsonSchema = {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      description:
        "A natural language query to search for relevant information in the files. Be specific and use keywords related to the information you're looking for. The query will be used for semantic similarity matching against the file contents.",
    },
  },
  required: ['query'],
};

/**
 *
 * @param {Object} options

@@ -182,15 +193,9 @@ Use the EXACT anchor markers shown below (copy them verbatim) immediately after
**ALWAYS mention the filename in your text before the citation marker. NEVER use markdown links or footnotes.**`
          : ''
      }`,
      schema: z.object({
        query: z
          .string()
          .describe(
            "A natural language query to search for relevant information in the files. Be specific and use keywords related to the information you're looking for. The query will be used for semantic similarity matching against the file contents.",
          ),
      }),
      schema: fileSearchJsonSchema,
    },
  );
};

module.exports = { createFileSearchTool, primeFiles };
module.exports = { createFileSearchTool, primeFiles, fileSearchJsonSchema };