🎉 feat: Code Interpreter API and Agents Release (#4860)

* feat: Code Interpreter API & File Search Agent Uploads

chore: add back code files

wip: first pass, abstract key dialog

refactor: influence checkbox on key changes

refactor: update localization keys for 'execute code' to 'run code'

wip: run code button

refactor: add throwError parameter to loadAuthValues and getUserPluginAuthValue functions

feat: first pass, API tool calling

fix: handle missing toolId in callTool function and return 404 for non-existent tools

feat: show code outputs

fix: improve error handling in callTool function and log errors

fix: handle potential null value for filepath in attachment destructuring

fix: normalize language before rendering and prevent null return

fix: add loading indicator in RunCode component while executing code

feat: add support for conditional code execution in Markdown components

feat: attachments

refactor: remove bash

fix: pass abort signal to graph/run

refactor: debounce and rate limit tool call

refactor: increase debounce delay for execute function

feat: set code output attachments

feat: image attachments

refactor: apply message context

refactor: pass `partIndex`

feat: toolCall schema/model/methods

feat: block indexing

feat: get tool calls

chore: imports

chore: typing

chore: condense type imports

feat: get tool calls

fix: block indexing

chore: typing

refactor: update tool calls mapping to support multiple results

fix: add unique key to nav link for rendering

wip: first pass, tool call results

refactor: update query cache from successful tool call mutation

style: improve result switcher styling

chore: note on using `.toObject()`

feat: add agent_id field to conversation schema

chore: typing

refactor: rename agentMap to agentsMap for consistency

feat: Agent Name as chat input placeholder

chore: bump agents

📦 chore: update @langchain dependencies to latest versions to match agents package

📦 chore: update @librechat/agents dependency to version 1.8.0

fix: Aborting agent stream removes sender; fix(bedrock): completion removes preset name label

refactor: remove direct file parameter to use req.file, add `processAgentFileUpload` for image uploads

feat: upload menu

feat: prime message_file resources

feat: implement conversation access validation in chat route

refactor: remove file parameter from processFileUpload and use req.file instead

feat: add savedMessageIds set to track saved message IDs in BaseClient, preventing unnecessary double-writes to the DB

feat: prevent duplicate message saves by checking savedMessageIds in AgentController

refactor: skip legacy RAG API handling for agents

feat: add files field to convoSchema

refactor: update request type annotations from Express.Request to ServerRequest in file processing functions

feat: track conversation files

fix: resendFiles, addPreviousAttachments handling

feat: add ID validation for session_id and file_id in download route

feat: entity_id for code file uploads/downloads

fix: code file edge cases

feat: delete related tool calls

feat: add stream rate handling for LLM configuration

feat: enhance system content with attached file information

fix: improve error logging in resource priming function

* WIP: PoC, sequential agents

WIP: PoC Sequential Agents, first pass content data + bump agents package

fix: package-lock

WIP: PoC, o1 support, refactor bufferString

feat: convertJsonSchemaToZod

fix: form issues and schema defining erroneous model

fix: max length issue on agent form instructions, limit conversation messages to sequential agents

feat: add abort signal support to createRun function and AgentClient

feat: PoC, hide prior sequential agent steps

fix: update parameter naming from config to metadata in event handlers for clarity, add model to usage data

refactor: use only last contentData, track model for usage data

chore: bump agents package

fix: content parts issue

refactor: filter contentParts to include tool calls and relevant indices

feat: show function calls

refactor: filter context messages to exclude tool calls when no tools are available to the agent

fix: ensure tool call content is not undefined in formatMessages

feat: add agent_id field to conversationPreset schema

feat: hide sequential agents

feat: increase upload toast duration to 10 seconds

* refactor: tool context handling & update Code API Key Dialog

feat: toolContextMap

chore: skipSpecs -> useSpecs

ci: fix handleTools tests

feat: API Key Dialog

* feat: Agent Permissions Admin Controls

feat: replace label with button for prompt permission toggle

feat: update agent permissions

feat: enable experimental agents and streamline capability configuration

feat: implement access control for agents and enhance endpoint menu items

feat: add welcome message for agent selection in localization

feat: add agents permission to access control and update version to 0.7.57

* fix: update types in useAssistantListMap and useMentions hooks for better null handling

* feat: mention agents

* fix: agent tool resource race conditions when deleting agent tool resource files

* feat: add error handling for code execution with user feedback

* refactor: rename AdminControls to AdminSettings for clarity

* style: add gap to button in AdminSettings for improved layout

* refactor: separate agent query hooks and check access to enable fetching

* fix: remove unused provider from agent initialization options, as it creates issues with custom endpoints

* refactor: remove redundant/deprecated modelOptions from AgentClient processes

* chore: update @librechat/agents to version 1.8.5 in package.json and package-lock.json

* fix: minor styling issues + agent panel uniformity

* fix: agent edge cases when set endpoint is no longer defined

* refactor: remove unused cleanup function call from AppService

* fix: update link in ApiKeyDialog to point to pricing page

* fix: improve type handling and layout calculations in SidePanel component

* fix: add missing localization string for agent selection in SidePanel

* chore: form styling and localizations for file search/code interpreter uploads

* fix: model selection placeholder logic in AgentConfig component

* style: agent capabilities

* fix: add localization for provider selection and improve dropdown styling in ModelPanel

* refactor: use gpt-4o-mini over gpt-3.5-turbo

* fix: agents configuration for loadDefaultInterface and update related tests

* feat: DALLE Agents support
Author: Danny Avila · 2024-12-04 15:48:13 -05:00 (committed by GitHub)
Commit: 1a815f5e19 · Parent: affcebd48c
189 changed files with 5056 additions and 1815 deletions


@@ -127,6 +127,7 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
},
};
/** @type {TMessage} */
let response = await client.sendMessage(text, messageOptions);
response.endpoint = endpointOption.endpoint;
@@ -150,11 +151,13 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
});
res.end();
await saveMessage(
req,
{ ...response, user },
{ context: 'api/server/controllers/AskController.js - response end' },
);
if (!client.savedMessageIds.has(response.messageId)) {
await saveMessage(
req,
{ ...response, user },
{ context: 'api/server/controllers/AskController.js - response end' },
);
}
}
if (!client.skipSaveUserMessage) {
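
The guard above keeps the controller from re-saving a response message that the client has already persisted. A minimal sketch of the pattern the hunk relies on, using a plain `Set` as the commit message describes for BaseClient (the property name `savedMessageIds` comes from the hunk; everything else here is illustrative, not the actual BaseClient code):

```js
// Illustrative only: a client records which message IDs it has already saved,
// so callers can skip a second write for the same message.
class SketchClient {
  constructor(saveFn) {
    this.saveMessage = saveFn;
    /** @type {Set<string>} */
    this.savedMessageIds = new Set();
  }

  async saveResponse(message) {
    await this.saveMessage(message);
    this.savedMessageIds.add(message.messageId);
  }
}

// Caller-side guard, mirroring the hunk above:
async function finishRequest(client, response) {
  if (!client.savedMessageIds.has(response.messageId)) {
    await client.saveResponse(response); // only written once
  }
}

// Quick demo: the second finishRequest call is a no-op.
(async () => {
  const saved = [];
  const client = new SketchClient(async (msg) => saved.push(msg.messageId));
  const response = { messageId: 'msg-1', text: 'done' };
  await finishRequest(client, response);
  await finishRequest(client, response);
  console.log(saved); // ['msg-1']
})();
```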


@@ -14,6 +14,7 @@ const { updateUserPluginsService, deleteUserKey } = require('~/server/services/U
const { verifyEmail, resendVerificationEmail } = require('~/server/services/AuthService');
const { processDeleteRequest } = require('~/server/services/Files/process');
const { deleteAllSharedLinks } = require('~/models/Share');
const { deleteToolCalls } = require('~/models/ToolCall');
const { Transaction } = require('~/models/Transaction');
const { logger } = require('~/config');
@@ -123,6 +124,7 @@ const deleteUserController = async (req, res) => {
await deleteAllSharedLinks(user.id); // delete user shared links
await deleteUserFiles(req); // delete user files
await deleteFiles(null, user.id); // delete database files in case of orphaned files from previous steps
await deleteToolCalls(user.id); // delete user tool calls
/* TODO: queue job for cleaning actions and assistants of non-existant users */
logger.info(`User deleted account. Email: ${user.email} ID: ${user.id}`);
res.status(200).send({ message: 'User deleted' });
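
The account-deletion path now also clears the user's stored tool calls via `deleteToolCalls` from `~/models/ToolCall`, which is not part of this excerpt. Assuming a Mongoose-backed ToolCall collection keyed by `user` (a fuller sketch of that model appears after the callTool controller at the end of this diff), the helper could be as small as:

```js
// Hypothetical sketch only — not the shipped ~/models/ToolCall implementation.
const mongoose = require('mongoose');

// Minimal stand-in model for illustration; the real schema is richer.
const ToolCall =
  mongoose.models.ToolCall ||
  mongoose.model(
    'ToolCall',
    new mongoose.Schema({ user: { type: String, index: true } }, { strict: false }),
  );

/** Removes every tool call recorded for a user, e.g. when the account is deleted. */
async function deleteToolCalls(userId) {
  return ToolCall.deleteMany({ user: userId });
}

module.exports = { deleteToolCalls };
```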


@@ -1,4 +1,4 @@
const { Tools } = require('librechat-data-provider');
const { Tools, StepTypes, imageGenTools } = require('librechat-data-provider');
const {
EnvVar,
GraphEvents,
@@ -57,6 +57,9 @@ class ModelEndHandler {
}
const usage = data?.output?.usage_metadata;
if (metadata?.model) {
usage.model = metadata.model;
}
if (usage) {
this.collectedUsage.push(usage);
@@ -89,9 +92,27 @@ function getDefaultHandlers({ res, aggregateContent, toolEndCallback, collectedU
* Handle ON_RUN_STEP event.
* @param {string} event - The event name.
* @param {StreamEventData} data - The event data.
* @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
handle: (event, data, metadata) => {
if (data?.stepDetails.type === StepTypes.TOOL_CALLS) {
sendEvent(res, { event, data });
} else if (metadata?.last_agent_index === metadata?.agent_index) {
sendEvent(res, { event, data });
} else if (!metadata?.hide_sequential_outputs) {
sendEvent(res, { event, data });
} else {
const agentName = metadata?.name ?? 'Agent';
const isToolCall = data?.stepDetails.type === StepTypes.TOOL_CALLS;
const action = isToolCall ? 'performing a task...' : 'thinking...';
sendEvent(res, {
event: 'on_agent_update',
data: {
runId: metadata?.run_id,
message: `${agentName} is ${action}`,
},
});
}
aggregateContent({ event, data });
},
},
@@ -100,9 +121,16 @@ function getDefaultHandlers({ res, aggregateContent, toolEndCallback, collectedU
* Handle ON_RUN_STEP_DELTA event.
* @param {string} event - The event name.
* @param {StreamEventData} data - The event data.
* @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
handle: (event, data, metadata) => {
if (data?.delta.type === StepTypes.TOOL_CALLS) {
sendEvent(res, { event, data });
} else if (metadata?.last_agent_index === metadata?.agent_index) {
sendEvent(res, { event, data });
} else if (!metadata?.hide_sequential_outputs) {
sendEvent(res, { event, data });
}
aggregateContent({ event, data });
},
},
@@ -111,9 +139,16 @@ function getDefaultHandlers({ res, aggregateContent, toolEndCallback, collectedU
* Handle ON_RUN_STEP_COMPLETED event.
* @param {string} event - The event name.
* @param {StreamEventData & { result: ToolEndData }} data - The event data.
* @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
handle: (event, data, metadata) => {
if (data?.result != null) {
sendEvent(res, { event, data });
} else if (metadata?.last_agent_index === metadata?.agent_index) {
sendEvent(res, { event, data });
} else if (!metadata?.hide_sequential_outputs) {
sendEvent(res, { event, data });
}
aggregateContent({ event, data });
},
},
@@ -122,9 +157,14 @@ function getDefaultHandlers({ res, aggregateContent, toolEndCallback, collectedU
* Handle ON_MESSAGE_DELTA event.
* @param {string} event - The event name.
* @param {StreamEventData} data - The event data.
* @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
handle: (event, data, metadata) => {
if (metadata?.last_agent_index === metadata?.agent_index) {
sendEvent(res, { event, data });
} else if (!metadata?.hide_sequential_outputs) {
sendEvent(res, { event, data });
}
aggregateContent({ event, data });
},
},
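
All four handlers above apply the same precedence when deciding whether to forward a streaming event: tool-call steps (or completed tool results) are always sent, the final agent in a sequence is always sent, and intermediate agents are only streamed when `hide_sequential_outputs` is off; otherwise the client gets a short `on_agent_update` status instead. A condensed sketch of that decision, factored into one predicate (the metadata field names come from the hunks; the helper itself is illustrative):

```js
// Illustrative predicate mirroring the branching in the handlers above.
// `metadata` is the runnable's configurable metadata; `isToolCallStep` marks
// tool-call steps / completed tool results that should always be surfaced.
function shouldStreamToClient(metadata = {}, isToolCallStep = false) {
  if (isToolCallStep) {
    return true; // tool activity is always shown
  }
  if (metadata.last_agent_index === metadata.agent_index) {
    return true; // the last agent's output is always shown
  }
  // intermediate agents are only shown when sequential outputs are not hidden
  return !metadata.hide_sequential_outputs;
}

// Example: an intermediate agent (index 0 of 2) with hidden outputs is suppressed,
// so the client would receive an `on_agent_update` status message instead.
console.log(
  shouldStreamToClient({ agent_index: 0, last_agent_index: 2, hide_sequential_outputs: true }),
); // false
```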
@@ -151,16 +191,41 @@ function createToolEndCallback({ req, res, artifactPromises }) {
return;
}
if (imageGenTools.has(output.name) && output.artifact) {
artifactPromises.push(
(async () => {
const fileMetadata = Object.assign(output.artifact, {
messageId: metadata.run_id,
toolCallId: output.tool_call_id,
conversationId: metadata.thread_id,
});
if (!res.headersSent) {
return fileMetadata;
}
if (!fileMetadata) {
return null;
}
res.write(`event: attachment\ndata: ${JSON.stringify(fileMetadata)}\n\n`);
return fileMetadata;
})().catch((error) => {
logger.error('Error processing code output:', error);
return null;
}),
);
return;
}
if (output.name !== Tools.execute_code) {
return;
}
const { tool_call_id, artifact } = output;
if (!artifact.files) {
if (!output.artifact.files) {
return;
}
for (const file of artifact.files) {
for (const file of output.artifact.files) {
const { id, name } = file;
artifactPromises.push(
(async () => {
@@ -173,10 +238,10 @@ function createToolEndCallback({ req, res, artifactPromises }) {
id,
name,
apiKey: result[EnvVar.CODE_API_KEY],
toolCallId: tool_call_id,
messageId: metadata.run_id,
session_id: artifact.session_id,
toolCallId: output.tool_call_id,
conversationId: metadata.thread_id,
session_id: output.artifact.session_id,
});
if (!res.headersSent) {
return fileMetadata;
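
Once headers have been sent, the callback streams each file's metadata to the browser as a custom `attachment` server-sent event via `res.write('event: attachment\ndata: ...')`. A minimal sketch of how a consumer of that stream might pick those frames out (the parsing helper is illustrative, not LibreChat's actual client code):

```js
// Illustrative SSE frame handling for the `attachment` events written above.
// `frame` is one raw SSE block, e.g. "event: attachment\ndata: {...}\n\n".
function parseAttachmentFrame(frame) {
  const lines = frame.split('\n');
  const eventLine = lines.find((line) => line.startsWith('event: '));
  const dataLine = lines.find((line) => line.startsWith('data: '));
  if (!eventLine || eventLine.slice('event: '.length) !== 'attachment' || !dataLine) {
    return null; // not an attachment frame
  }
  return JSON.parse(dataLine.slice('data: '.length)); // file metadata object
}

const example =
  'event: attachment\ndata: {"toolCallId":"abc","messageId":"run-1","conversationId":"convo-1"}\n\n';
console.log(parseAttachmentFrame(example).toolCallId); // "abc"
```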


@@ -12,9 +12,11 @@ const {
Constants,
VisionModes,
openAISchema,
ContentTypes,
EModelEndpoint,
KnownEndpoints,
anthropicSchema,
isAgentsEndpoint,
bedrockOutputParser,
removeNullishValues,
} = require('librechat-data-provider');
@@ -30,10 +32,10 @@ const {
createContextHandlers,
} = require('~/app/clients/prompts');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { getBufferString, HumanMessage } = require('@langchain/core/messages');
const Tokenizer = require('~/server/services/Tokenizer');
const { spendTokens } = require('~/models/spendTokens');
const BaseClient = require('~/app/clients/BaseClient');
// const { sleep } = require('~/server/utils');
const { createRun } = require('./run');
const { logger } = require('~/config');
@@ -48,6 +50,12 @@ const providerParsers = {
const legacyContentEndpoints = new Set([KnownEndpoints.groq, KnownEndpoints.deepseek]);
const noSystemModelRegex = [/\bo1\b/gi];
// const { processMemory, memoryInstructions } = require('~/server/services/Endpoints/agents/memory');
// const { getFormattedMemories } = require('~/models/Memory');
// const { getCurrentDateTime } = require('~/utils');
class AgentClient extends BaseClient {
constructor(options = {}) {
super(null, options);
@@ -62,15 +70,15 @@ class AgentClient extends BaseClient {
this.run;
const {
agentConfigs,
contentParts,
collectedUsage,
artifactPromises,
maxContextTokens,
modelOptions = {},
...clientOptions
} = options;
this.modelOptions = modelOptions;
this.agentConfigs = agentConfigs;
this.maxContextTokens = maxContextTokens;
/** @type {MessageContentComplex[]} */
this.contentParts = contentParts;
@@ -80,6 +88,8 @@ class AgentClient extends BaseClient {
this.artifactPromises = artifactPromises;
/** @type {AgentClientOptions} */
this.options = Object.assign({ endpoint: options.endpoint }, clientOptions);
/** @type {string} */
this.model = this.options.agent.model_parameters.model;
}
/**
@@ -169,7 +179,7 @@ class AgentClient extends BaseClient {
: {};
if (parseOptions) {
runOptions = parseOptions(this.modelOptions);
runOptions = parseOptions(this.options.agent.model_parameters);
}
return removeNullishValues(
@@ -224,7 +234,28 @@ class AgentClient extends BaseClient {
let promptTokens;
/** @type {string} */
let systemContent = `${instructions ?? ''}${additional_instructions ?? ''}`;
let systemContent = [instructions ?? '', additional_instructions ?? '']
.filter(Boolean)
.join('\n')
.trim();
// this.systemMessage = getCurrentDateTime();
// const { withKeys, withoutKeys } = await getFormattedMemories({
// userId: this.options.req.user.id,
// });
// processMemory({
// userId: this.options.req.user.id,
// message: this.options.req.body.text,
// parentMessageId,
// memory: withKeys,
// thread_id: this.conversationId,
// }).catch((error) => {
// logger.error('Memory Agent failed to process memory', error);
// });
// this.systemMessage += '\n\n' + memoryInstructions;
// if (withoutKeys) {
// this.systemMessage += `\n\n# Existing memory about the user:\n${withoutKeys}`;
// }
if (this.options.attachments) {
const attachments = await this.options.attachments;
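
The systemContent change earlier in this hunk replaces plain template concatenation with filter/join/trim, which inserts a newline between the two instruction blocks and drops the separator when either is empty. A quick illustration of the difference (the values are made up):

```js
const instructions = 'Answer concisely.';
const additional_instructions = 'Prefer metric units.';

// old behavior: the two blocks run together with no separator
const oldContent = `${instructions ?? ''}${additional_instructions ?? ''}`;
// 'Answer concisely.Prefer metric units.'

// new behavior: newline-separated, empty values filtered out, whitespace trimmed
const newContent = [instructions ?? '', additional_instructions ?? '']
  .filter(Boolean)
  .join('\n')
  .trim();
// 'Answer concisely.\nPrefer metric units.'

console.log({ oldContent, newContent });
```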
@@ -245,7 +276,8 @@ class AgentClient extends BaseClient {
this.options.attachments = files;
}
if (this.message_file_map) {
/** Note: Bedrock uses legacy RAG API handling */
if (this.message_file_map && !isAgentsEndpoint(this.options.endpoint)) {
this.contextHandlers = createContextHandlers(
this.options.req,
orderedMessages[orderedMessages.length - 1].text,
@@ -319,7 +351,6 @@ class AgentClient extends BaseClient {
/** @type {sendCompletion} */
async sendCompletion(payload, opts = {}) {
this.modelOptions.user = this.user;
await this.chatCompletion({
payload,
onProgress: opts.onProgress,
@@ -339,10 +370,10 @@ class AgentClient extends BaseClient {
await spendTokens(
{
context,
model: model ?? this.modelOptions.model,
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
model: usage.model ?? model ?? this.model ?? this.options.agent.model_parameters.model,
},
{ promptTokens: usage.input_tokens, completionTokens: usage.output_tokens },
);
@@ -457,43 +488,190 @@ class AgentClient extends BaseClient {
// });
// }
const run = await createRun({
req: this.options.req,
agent: this.options.agent,
tools: this.options.tools,
runId: this.responseMessageId,
modelOptions: this.modelOptions,
customHandlers: this.options.eventHandlers,
});
const config = {
configurable: {
thread_id: this.conversationId,
last_agent_index: this.agentConfigs?.size ?? 0,
hide_sequential_outputs: this.options.agent.hide_sequential_outputs,
},
signal: abortController.signal,
streamMode: 'values',
version: 'v2',
};
if (!run) {
throw new Error('Failed to create run');
}
this.run = run;
const messages = formatAgentMessages(payload);
const initialMessages = formatAgentMessages(payload);
if (legacyContentEndpoints.has(this.options.agent.endpoint)) {
formatContentStrings(messages);
formatContentStrings(initialMessages);
}
await run.processStream({ messages }, config, {
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Tool Error',
error,
toolId,
);
},
/** @type {ReturnType<createRun>} */
let run;
/**
*
* @param {Agent} agent
* @param {BaseMessage[]} messages
* @param {number} [i]
* @param {TMessageContentParts[]} [contentData]
*/
const runAgent = async (agent, messages, i = 0, contentData = []) => {
config.configurable.model = agent.model_parameters.model;
if (i > 0) {
this.model = agent.model_parameters.model;
}
config.configurable.agent_id = agent.id;
config.configurable.name = agent.name;
config.configurable.agent_index = i;
const noSystemMessages = noSystemModelRegex.some((regex) =>
agent.model_parameters.model.match(regex),
);
const systemMessage = Object.values(agent.toolContextMap ?? {})
.join('\n')
.trim();
let systemContent = [
systemMessage,
agent.instructions ?? '',
i !== 0 ? agent.additional_instructions ?? '' : '',
]
.join('\n')
.trim();
if (noSystemMessages === true) {
agent.instructions = undefined;
agent.additional_instructions = undefined;
} else {
agent.instructions = systemContent;
agent.additional_instructions = undefined;
}
if (noSystemMessages === true && systemContent?.length) {
let latestMessage = messages.pop().content;
if (typeof latestMessage !== 'string') {
latestMessage = latestMessage[0].text;
}
latestMessage = [systemContent, latestMessage].join('\n');
messages.push(new HumanMessage(latestMessage));
}
run = await createRun({
agent,
req: this.options.req,
runId: this.responseMessageId,
signal: abortController.signal,
customHandlers: this.options.eventHandlers,
});
if (!run) {
throw new Error('Failed to create run');
}
if (i === 0) {
this.run = run;
}
if (contentData.length) {
run.Graph.contentData = contentData;
}
await run.processStream({ messages }, config, {
keepContent: i !== 0,
callbacks: {
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Tool Error',
error,
toolId,
);
},
},
});
};
await runAgent(this.options.agent, initialMessages);
let finalContentStart = 0;
if (this.agentConfigs && this.agentConfigs.size > 0) {
let latestMessage = initialMessages.pop().content;
if (typeof latestMessage !== 'string') {
latestMessage = latestMessage[0].text;
}
let i = 1;
let runMessages = [];
const lastFiveMessages = initialMessages.slice(-5);
for (const [agentId, agent] of this.agentConfigs) {
if (abortController.signal.aborted === true) {
break;
}
const currentRun = await run;
if (
i === this.agentConfigs.size &&
config.configurable.hide_sequential_outputs === true
) {
const content = this.contentParts.filter(
(part) => part.type === ContentTypes.TOOL_CALL,
);
this.options.res.write(
`event: message\ndata: ${JSON.stringify({
event: 'on_content_update',
data: {
runId: this.responseMessageId,
content,
},
})}\n\n`,
);
}
const _runMessages = currentRun.Graph.getRunMessages();
finalContentStart = this.contentParts.length;
runMessages = runMessages.concat(_runMessages);
const contentData = currentRun.Graph.contentData.slice();
const bufferString = getBufferString([new HumanMessage(latestMessage), ...runMessages]);
if (i === this.agentConfigs.size) {
logger.debug(`SEQUENTIAL AGENTS: Last buffer string:\n${bufferString}`);
}
try {
const contextMessages = [];
for (const message of lastFiveMessages) {
const messageType = message._getType();
if (
(!agent.tools || agent.tools.length === 0) &&
(messageType === 'tool' || (message.tool_calls?.length ?? 0) > 0)
) {
continue;
}
contextMessages.push(message);
}
const currentMessages = [...contextMessages, new HumanMessage(bufferString)];
await runAgent(agent, currentMessages, i, contentData);
} catch (err) {
logger.error(
`[api/server/controllers/agents/client.js #chatCompletion] Error running agent ${agentId} (${i})`,
err,
);
}
i++;
}
}
if (config.configurable.hide_sequential_outputs !== true) {
finalContentStart = 0;
}
this.contentParts = this.contentParts.filter((part, index) => {
// Include parts that are either:
// 1. At or after the finalContentStart index
// 2. Of type tool_call
// 3. Have tool_call_ids property
return (
index >= finalContentStart || part.type === ContentTypes.TOOL_CALL || part.tool_call_ids
);
});
this.recordCollectedUsage({ context: 'message' }).catch((err) => {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Error recording collected usage',
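
Between agents, the loop above folds the previous run's messages into a single transcript with `getBufferString` and hands it to the next agent as a new `HumanMessage`, alongside up to five prior context messages (tool messages are dropped when the next agent has no tools). A stripped-down sketch of just the hand-off step, assuming only `@langchain/core` as a dependency; the real loop also tracks content parts, abort signals, and per-agent run configuration:

```js
// Simplified sketch of the sequential-agent hand-off shown above.
const { HumanMessage, AIMessage, getBufferString } = require('@langchain/core/messages');

function buildHandOffMessage(latestUserText, previousRunMessages) {
  // Collapse the previous agent's exchange into one readable transcript...
  const bufferString = getBufferString([new HumanMessage(latestUserText), ...previousRunMessages]);
  // ...and hand it to the next agent as a single human turn.
  return new HumanMessage(bufferString);
}

const handOff = buildHandOffMessage('Summarize the report.', [
  new AIMessage('Here is a first-pass summary of the report...'),
]);
console.log(handOff.content.startsWith('Human: Summarize the report.')); // true
```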
@@ -586,7 +764,7 @@ class AgentClient extends BaseClient {
}
getEncoding() {
return this.modelOptions.model?.includes('gpt-4o') ? 'o200k_base' : 'cl100k_base';
return this.model?.includes('gpt-4o') ? 'o200k_base' : 'cl100k_base';
}
/**


@@ -94,8 +94,14 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
conversation.title =
conversation && !conversation.title ? null : conversation?.title || 'New Chat';
if (client.options.attachments) {
userMessage.files = client.options.attachments;
if (req.body.files && client.options.attachments) {
userMessage.files = [];
const messageFiles = new Set(req.body.files.map((file) => file.file_id));
for (let attachment of client.options.attachments) {
if (messageFiles.has(attachment.file_id)) {
userMessage.files.push(attachment);
}
}
delete userMessage.image_urls;
}
@@ -109,11 +115,13 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
});
res.end();
await saveMessage(
req,
{ ...response, user },
{ context: 'api/server/controllers/agents/request.js - response end' },
);
if (!client.savedMessageIds.has(response.messageId)) {
await saveMessage(
req,
{ ...response, user },
{ context: 'api/server/controllers/agents/request.js - response end' },
);
}
}
if (!client.skipSaveUserMessage) {


@@ -3,8 +3,8 @@ const { providerEndpointMap } = require('librechat-data-provider');
/**
* @typedef {import('@librechat/agents').t} t
* @typedef {import('@librechat/agents').StandardGraphConfig} StandardGraphConfig
* @typedef {import('@librechat/agents').StreamEventData} StreamEventData
* @typedef {import('@librechat/agents').ClientOptions} ClientOptions
* @typedef {import('@librechat/agents').EventHandler} EventHandler
* @typedef {import('@librechat/agents').GraphEvents} GraphEvents
* @typedef {import('@librechat/agents').IState} IState
@@ -17,18 +17,16 @@ const { providerEndpointMap } = require('librechat-data-provider');
* @param {ServerRequest} [options.req] - The server request.
* @param {string | undefined} [options.runId] - Optional run ID; otherwise, a new run ID will be generated.
* @param {Agent} options.agent - The agent for this run.
* @param {StructuredTool[] | undefined} [options.tools] - The tools to use in the run.
* @param {AbortSignal} options.signal - The signal for this run.
* @param {Record<GraphEvents, EventHandler> | undefined} [options.customHandlers] - Custom event handlers.
* @param {ClientOptions} [options.modelOptions] - Optional model to use; if not provided, it will use the default from modelMap.
* @param {boolean} [options.streaming=true] - Whether to use streaming.
* @param {boolean} [options.streamUsage=true] - Whether to stream usage information.
* @returns {Promise<Run<IState>>} A promise that resolves to a new Run instance.
*/
async function createRun({
runId,
tools,
agent,
modelOptions,
signal,
customHandlers,
streaming = true,
streamUsage = true,
@@ -40,14 +38,17 @@ async function createRun({
streaming,
streamUsage,
},
modelOptions,
agent.model_parameters,
);
/** @type {StandardGraphConfig} */
const graphConfig = {
tools,
signal,
llmConfig,
tools: agent.tools,
instructions: agent.instructions,
additional_instructions: agent.additional_instructions,
// toolEnd: agent.end_after_tools,
};
// TEMPORARY FOR TESTING
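
After this refactor, `createRun` derives the LLM config and tools from the agent itself (`agent.model_parameters`, `agent.tools`, `agent.instructions`) rather than separate `modelOptions`/`tools` parameters. Based on the updated JSDoc above and the call site in the AgentClient hunk, a caller would look roughly like this (the surrounding wrapper and parameter object are placeholders):

```js
// Sketch of calling the refactored createRun; the agent shape is abbreviated.
const { createRun } = require('./run');

async function startAgentRun({ req, agent, responseMessageId, abortController, eventHandlers }) {
  const run = await createRun({
    req,
    agent, // model_parameters, tools, and instructions all come from the agent now
    runId: responseMessageId,
    signal: abortController.signal,
    customHandlers: eventHandlers,
  });
  if (!run) {
    throw new Error('Failed to create run');
  }
  return run;
}

module.exports = { startAgentRun };
```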


@@ -1,6 +1,12 @@
const { nanoid } = require('nanoid');
const { EnvVar } = require('@librechat/agents');
const { Tools, AuthType } = require('librechat-data-provider');
const { loadAuthValues } = require('~/app/clients/tools/util');
const { Tools, AuthType, ToolCallTypes } = require('librechat-data-provider');
const { processFileURL, uploadImageBuffer } = require('~/server/services/Files/process');
const { processCodeOutput } = require('~/server/services/Files/Code/process');
const { loadAuthValues, loadTools } = require('~/app/clients/tools/util');
const { createToolCall, getToolCallsByConvo } = require('~/models/ToolCall');
const { getMessage } = require('~/models/Message');
const { logger } = require('~/config');
const fieldsMap = {
[Tools.execute_code]: [EnvVar.CODE_API_KEY],
@@ -24,6 +30,7 @@ const verifyToolAuth = async (req, res) => {
result = await loadAuthValues({
userId: req.user.id,
authFields,
throwError: false,
});
} catch (error) {
res.status(200).json({ authenticated: false, message: AuthType.USER_PROVIDED });
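
`verifyToolAuth` passes the new `throwError: false` option so that missing credentials surface as a normal "not authenticated" response instead of an exception. The loader itself is not shown in this excerpt; a hedged sketch of the general pattern such a flag enables (not the actual `loadAuthValues` implementation, and `lookupFn` is a stand-in for however values are resolved):

```js
// Illustrative only — with throwError disabled, missing fields are simply omitted
// so the caller can decide how to respond (e.g. report `authenticated: false`).
async function loadAuthValuesSketch({ userId, authFields, throwError = true }, lookupFn) {
  const result = {};
  for (const field of authFields) {
    const value = await lookupFn(userId, field);
    if (value == null) {
      if (throwError) {
        throw new Error(`No value found for ${field}`);
      }
      continue; // skip silently when throwing is disabled
    }
    result[field] = value;
  }
  return result;
}

module.exports = { loadAuthValuesSketch };
```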
@@ -48,6 +55,131 @@ const verifyToolAuth = async (req, res) => {
}
};
/**
* @param {ServerRequest} req - The request object, containing information about the HTTP request.
* @param {ServerResponse} res - The response object, used to send back the desired HTTP response.
* @returns {Promise<void>} A promise that resolves when the function has completed.
*/
const callTool = async (req, res) => {
try {
const { toolId = '' } = req.params;
if (!fieldsMap[toolId]) {
logger.warn(`[${toolId}/call] User ${req.user.id} attempted call to invalid tool`);
res.status(404).json({ message: 'Tool not found' });
return;
}
const { partIndex, blockIndex, messageId, conversationId, ...args } = req.body;
if (!messageId) {
logger.warn(`[${toolId}/call] User ${req.user.id} attempted call without message ID`);
res.status(400).json({ message: 'Message ID required' });
return;
}
const message = await getMessage({ user: req.user.id, messageId });
if (!message) {
logger.debug(`[${toolId}/call] User ${req.user.id} attempted call with invalid message ID`);
res.status(404).json({ message: 'Message not found' });
return;
}
logger.debug(`[${toolId}/call] User: ${req.user.id}`);
const { loadedTools } = await loadTools({
user: req.user.id,
tools: [toolId],
functions: true,
options: {
req,
returnMetadata: true,
processFileURL,
uploadImageBuffer,
fileStrategy: req.app.locals.fileStrategy,
},
});
const tool = loadedTools[0];
const toolCallId = `${req.user.id}_${nanoid()}`;
const result = await tool.invoke({
args,
name: toolId,
id: toolCallId,
type: ToolCallTypes.TOOL_CALL,
});
const { content, artifact } = result;
const toolCallData = {
toolId,
messageId,
partIndex,
blockIndex,
conversationId,
result: content,
user: req.user.id,
};
if (!artifact || !artifact.files || toolId !== Tools.execute_code) {
createToolCall(toolCallData).catch((error) => {
logger.error(`Error creating tool call: ${error.message}`);
});
return res.status(200).json({
result: content,
});
}
const artifactPromises = [];
for (const file of artifact.files) {
const { id, name } = file;
artifactPromises.push(
(async () => {
const fileMetadata = await processCodeOutput({
req,
id,
name,
apiKey: tool.apiKey,
messageId,
toolCallId,
conversationId,
session_id: artifact.session_id,
});
if (!fileMetadata) {
return null;
}
return fileMetadata;
})().catch((error) => {
logger.error('Error processing code output:', error);
return null;
}),
);
}
const attachments = await Promise.all(artifactPromises);
toolCallData.attachments = attachments;
createToolCall(toolCallData).catch((error) => {
logger.error(`Error creating tool call: ${error.message}`);
});
res.status(200).json({
result: content,
attachments,
});
} catch (error) {
logger.error('Error calling tool', error);
res.status(500).json({ message: 'Error calling tool' });
}
};
const getToolCalls = async (req, res) => {
try {
const { conversationId } = req.query;
const toolCalls = await getToolCallsByConvo(conversationId, req.user.id);
res.status(200).json(toolCalls);
} catch (error) {
logger.error('Error getting tool calls', error);
res.status(500).json({ message: 'Error getting tool calls' });
}
};
module.exports = {
callTool,
getToolCalls,
verifyToolAuth,
};
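
`createToolCall` and `getToolCallsByConvo` come from `~/models/ToolCall`, which is not included in this excerpt. Judging from the fields assembled in `toolCallData` above and the call `getToolCallsByConvo(conversationId, req.user.id)`, plausible implementations would look like the following; the schema is an assumption for illustration, not the shipped one:

```js
// Hypothetical sketch of ~/models/ToolCall based on the controller usage above.
const mongoose = require('mongoose');

const toolCallSchema = new mongoose.Schema(
  {
    toolId: { type: String, required: true },
    messageId: { type: String, required: true, index: true },
    conversationId: { type: String, required: true, index: true },
    partIndex: Number,
    blockIndex: Number,
    result: mongoose.Schema.Types.Mixed,
    attachments: mongoose.Schema.Types.Mixed,
    user: { type: String, required: true, index: true },
  },
  { timestamps: true },
);

const ToolCall = mongoose.models.ToolCall || mongoose.model('ToolCall', toolCallSchema);

/** Persists one tool call result (fire-and-forget in the controller above). */
async function createToolCall(toolCallData) {
  return ToolCall.create(toolCallData);
}

/** Returns all tool calls recorded for a conversation, scoped to the requesting user. */
async function getToolCallsByConvo(conversationId, userId) {
  return ToolCall.find({ conversationId, user: userId }).lean();
}

module.exports = { createToolCall, getToolCallsByConvo };
```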