🎉 feat: Code Interpreter API and Agents Release (#4860)

* feat: Code Interpreter API & File Search Agent Uploads

chore: add back code files

wip: first pass, abstract key dialog

refactor: influence checkbox on key changes

refactor: update localization keys for 'execute code' to 'run code'

wip: run code button

refactor: add throwError parameter to loadAuthValues and getUserPluginAuthValue functions

feat: first pass, API tool calling

fix: handle missing toolId in callTool function and return 404 for non-existent tools

feat: show code outputs

fix: improve error handling in callTool function and log errors

fix: handle potential null value for filepath in attachment destructuring

fix: normalize language before rendering and prevent null return

fix: add loading indicator in RunCode component while executing code

feat: add support for conditional code execution in Markdown components

feat: attachments

refactor: remove bash

fix: pass abort signal to graph/run

refactor: debounce and rate limit tool call

refactor: increase debounce delay for execute function

feat: set code output attachments

feat: image attachments

refactor: apply message context

refactor: pass `partIndex`

feat: toolCall schema/model/methods

feat: block indexing

feat: get tool calls

chore: imports

chore: typing

chore: condense type imports

feat: get tool calls

fix: block indexing

chore: typing

refactor: update tool calls mapping to support multiple results

fix: add unique key to nav link for rendering

wip: first pass, tool call results

refactor: update query cache from successful tool call mutation

style: improve result switcher styling

chore: note on using `.toObject()`

feat: add agent_id field to conversation schema

chore: typing

refactor: rename agentMap to agentsMap for consistency

feat: Agent Name as chat input placeholder

chore: bump agents

📦 chore: update @langchain dependencies to latest versions to match agents package

📦 chore: update @librechat/agents dependency to version 1.8.0

fix: Aborting agent stream removes sender; fix(bedrock): completion removes preset name label

refactor: remove direct file parameter to use req.file, add `processAgentFileUpload` for image uploads

feat: upload menu

feat: prime message_file resources

feat: implement conversation access validation in chat route

refactor: remove file parameter from processFileUpload and use req.file instead

feat: add savedMessageIds set to BaseClient to track saved message IDs and prevent unnecessary double-writes to the DB

feat: prevent duplicate message saves by checking savedMessageIds in AgentController

refactor: skip legacy RAG API handling for agents

feat: add files field to convoSchema

refactor: update request type annotations from Express.Request to ServerRequest in file processing functions

feat: track conversation files

fix: resendFiles, addPreviousAttachments handling

feat: add ID validation for session_id and file_id in download route

feat: entity_id for code file uploads/downloads

fix: code file edge cases

feat: delete related tool calls

feat: add stream rate handling for LLM configuration

feat: enhance system content with attached file information

fix: improve error logging in resource priming function

* WIP: PoC, sequential agents

WIP: PoC Sequential Agents, first pass content data + bump agents package

fix: package-lock

WIP: PoC, o1 support, refactor bufferString

feat: convertJsonSchemaToZod

fix: form issues and schema defining erroneous model

fix: max length issue on agent form instructions, limit conversation messages to sequential agents

feat: add abort signal support to createRun function and AgentClient

feat: PoC, hide prior sequential agent steps

fix: update parameter naming from config to metadata in event handlers for clarity, add model to usage data

refactor: use only last contentData, track model for usage data

chore: bump agents package

fix: content parts issue

refactor: filter contentParts to include tool calls and relevant indices

feat: show function calls

refactor: filter context messages to exclude tool calls when no tools are available to the agent

fix: ensure tool call content is not undefined in formatMessages

feat: add agent_id field to conversationPreset schema

feat: hide sequential agents

feat: increase upload toast duration to 10 seconds

* refactor: tool context handling & update Code API Key Dialog

feat: toolContextMap

chore: skipSpecs -> useSpecs

ci: fix handleTools tests

feat: API Key Dialog

* feat: Agent Permissions Admin Controls

feat: replace label with button for prompt permission toggle

feat: update agent permissions

feat: enable experimental agents and streamline capability configuration

feat: implement access control for agents and enhance endpoint menu items

feat: add welcome message for agent selection in localization

feat: add agents permission to access control and update version to 0.7.57

* fix: update types in useAssistantListMap and useMentions hooks for better null handling

* feat: mention agents

* fix: agent tool resource race conditions when deleting agent tool resource files

* feat: add error handling for code execution with user feedback

* refactor: rename AdminControls to AdminSettings for clarity

* style: add gap to button in AdminSettings for improved layout

* refactor: separate agent query hooks and check access to enable fetching

* fix: remove unused provider from agent initialization options, creates issue with custom endpoints

* refactor: remove redundant/deprecated modelOptions from AgentClient processes

* chore: update @librechat/agents to version 1.8.5 in package.json and package-lock.json

* fix: minor styling issues + agent panel uniformity

* fix: agent edge cases when set endpoint is no longer defined

* refactor: remove unused cleanup function call from AppService

* fix: update link in ApiKeyDialog to point to pricing page

* fix: improve type handling and layout calculations in SidePanel component

* fix: add missing localization string for agent selection in SidePanel

* chore: form styling and localizations for upload filesearch/code interpreter

* fix: model selection placeholder logic in AgentConfig component

* style: agent capabilities

* fix: add localization for provider selection and improve dropdown styling in ModelPanel

* refactor: use gpt-4o-mini instead of gpt-3.5-turbo

* fix: agents configuration for loadDefaultInterface and update related tests

* feat: DALLE Agents support
Danny Avila 2024-12-04 15:48:13 -05:00 committed by GitHub
commit 1a815f5e19 (parent affcebd48c)
189 changed files with 5056 additions and 1815 deletions

View file

@ -127,6 +127,7 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
},
};
/** @type {TMessage} */
let response = await client.sendMessage(text, messageOptions);
response.endpoint = endpointOption.endpoint;
@ -150,11 +151,13 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
});
res.end();
await saveMessage(
req,
{ ...response, user },
{ context: 'api/server/controllers/AskController.js - response end' },
);
if (!client.savedMessageIds.has(response.messageId)) {
await saveMessage(
req,
{ ...response, user },
{ context: 'api/server/controllers/AskController.js - response end' },
);
}
}
if (!client.skipSaveUserMessage) {
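
A minimal sketch of the `savedMessageIds` guard applied above, assuming the client registers each message ID as it persists a response so the controller can skip its own write (names other than `savedMessageIds`/`saveMessage` are illustrative):

```js
// Sketch only — not the actual BaseClient internals.
class SketchClient {
  constructor() {
    /** @type {Set<string>} message IDs this client has already persisted */
    this.savedMessageIds = new Set();
  }

  async persistResponse(req, response, saveMessage) {
    await saveMessage(req, response, { context: 'SketchClient - stream end' });
    this.savedMessageIds.add(response.messageId);
  }
}

/** Controller-side guard: only save if the client has not already written this message. */
async function finalizeResponse({ req, client, response, user, saveMessage }) {
  if (!client.savedMessageIds.has(response.messageId)) {
    await saveMessage(req, { ...response, user }, { context: 'controller - response end' });
  }
}
```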

View file

@ -14,6 +14,7 @@ const { updateUserPluginsService, deleteUserKey } = require('~/server/services/U
const { verifyEmail, resendVerificationEmail } = require('~/server/services/AuthService');
const { processDeleteRequest } = require('~/server/services/Files/process');
const { deleteAllSharedLinks } = require('~/models/Share');
const { deleteToolCalls } = require('~/models/ToolCall');
const { Transaction } = require('~/models/Transaction');
const { logger } = require('~/config');
@ -123,6 +124,7 @@ const deleteUserController = async (req, res) => {
await deleteAllSharedLinks(user.id); // delete user shared links
await deleteUserFiles(req); // delete user files
await deleteFiles(null, user.id); // delete database files in case of orphaned files from previous steps
await deleteToolCalls(user.id); // delete user tool calls
/* TODO: queue job for cleaning actions and assistants of non-existent users */
logger.info(`User deleted account. Email: ${user.email} ID: ${user.id}`);
res.status(200).send({ message: 'User deleted' });

View file

@ -1,4 +1,4 @@
const { Tools } = require('librechat-data-provider');
const { Tools, StepTypes, imageGenTools } = require('librechat-data-provider');
const {
EnvVar,
GraphEvents,
@ -57,6 +57,9 @@ class ModelEndHandler {
}
const usage = data?.output?.usage_metadata;
if (metadata?.model) {
usage.model = metadata.model;
}
if (usage) {
this.collectedUsage.push(usage);
@ -89,9 +92,27 @@ function getDefaultHandlers({ res, aggregateContent, toolEndCallback, collectedU
* Handle ON_RUN_STEP event.
* @param {string} event - The event name.
* @param {StreamEventData} data - The event data.
* @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
handle: (event, data, metadata) => {
if (data?.stepDetails.type === StepTypes.TOOL_CALLS) {
sendEvent(res, { event, data });
} else if (metadata?.last_agent_index === metadata?.agent_index) {
sendEvent(res, { event, data });
} else if (!metadata?.hide_sequential_outputs) {
sendEvent(res, { event, data });
} else {
const agentName = metadata?.name ?? 'Agent';
const isToolCall = data?.stepDetails.type === StepTypes.TOOL_CALLS;
const action = isToolCall ? 'performing a task...' : 'thinking...';
sendEvent(res, {
event: 'on_agent_update',
data: {
runId: metadata?.run_id,
message: `${agentName} is ${action}`,
},
});
}
aggregateContent({ event, data });
},
},
@ -100,9 +121,16 @@ function getDefaultHandlers({ res, aggregateContent, toolEndCallback, collectedU
* Handle ON_RUN_STEP_DELTA event.
* @param {string} event - The event name.
* @param {StreamEventData} data - The event data.
* @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
handle: (event, data, metadata) => {
if (data?.delta.type === StepTypes.TOOL_CALLS) {
sendEvent(res, { event, data });
} else if (metadata?.last_agent_index === metadata?.agent_index) {
sendEvent(res, { event, data });
} else if (!metadata?.hide_sequential_outputs) {
sendEvent(res, { event, data });
}
aggregateContent({ event, data });
},
},
@ -111,9 +139,16 @@ function getDefaultHandlers({ res, aggregateContent, toolEndCallback, collectedU
* Handle ON_RUN_STEP_COMPLETED event.
* @param {string} event - The event name.
* @param {StreamEventData & { result: ToolEndData }} data - The event data.
* @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
handle: (event, data, metadata) => {
if (data?.result != null) {
sendEvent(res, { event, data });
} else if (metadata?.last_agent_index === metadata?.agent_index) {
sendEvent(res, { event, data });
} else if (!metadata?.hide_sequential_outputs) {
sendEvent(res, { event, data });
}
aggregateContent({ event, data });
},
},
@ -122,9 +157,14 @@ function getDefaultHandlers({ res, aggregateContent, toolEndCallback, collectedU
* Handle ON_MESSAGE_DELTA event.
* @param {string} event - The event name.
* @param {StreamEventData} data - The event data.
* @param {GraphRunnableConfig['configurable']} [metadata] The runnable metadata.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
handle: (event, data, metadata) => {
if (metadata?.last_agent_index === metadata?.agent_index) {
sendEvent(res, { event, data });
} else if (!metadata?.hide_sequential_outputs) {
sendEvent(res, { event, data });
}
aggregateContent({ event, data });
},
},
@ -151,16 +191,41 @@ function createToolEndCallback({ req, res, artifactPromises }) {
return;
}
if (imageGenTools.has(output.name) && output.artifact) {
artifactPromises.push(
(async () => {
const fileMetadata = Object.assign(output.artifact, {
messageId: metadata.run_id,
toolCallId: output.tool_call_id,
conversationId: metadata.thread_id,
});
if (!res.headersSent) {
return fileMetadata;
}
if (!fileMetadata) {
return null;
}
res.write(`event: attachment\ndata: ${JSON.stringify(fileMetadata)}\n\n`);
return fileMetadata;
})().catch((error) => {
logger.error('Error processing code output:', error);
return null;
}),
);
return;
}
if (output.name !== Tools.execute_code) {
return;
}
const { tool_call_id, artifact } = output;
if (!artifact.files) {
if (!output.artifact.files) {
return;
}
for (const file of artifact.files) {
for (const file of output.artifact.files) {
const { id, name } = file;
artifactPromises.push(
(async () => {
@ -173,10 +238,10 @@ function createToolEndCallback({ req, res, artifactPromises }) {
id,
name,
apiKey: result[EnvVar.CODE_API_KEY],
toolCallId: tool_call_id,
messageId: metadata.run_id,
session_id: artifact.session_id,
toolCallId: output.tool_call_id,
conversationId: metadata.thread_id,
session_id: output.artifact.session_id,
});
if (!res.headersSent) {
return fileMetadata;
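
The handlers above add two extra stream events: a placeholder `on_agent_update` status when sequential agent output is hidden, and an `attachment` event for code-interpreter and image artifacts. A rough sketch of how a streaming client might branch on them, assuming the chat response is consumed as an SSE-style text stream (endpoint path and field handling are illustrative, not the app's actual client code):

```js
// Illustrative consumer of the event stream.
async function consumeChatStream(url, body) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const frames = buffer.split('\n\n');
    buffer = frames.pop(); // keep any partial frame for the next chunk
    for (const frame of frames) {
      const lines = frame.split('\n');
      const eventName = lines.find((l) => l.startsWith('event: '))?.slice(7) ?? 'message';
      const dataLine = lines.find((l) => l.startsWith('data: '));
      if (!dataLine) continue;
      const data = JSON.parse(dataLine.slice(6));
      if (eventName === 'attachment') {
        // e.g. render a file or image produced by the code interpreter
        console.log('attachment:', data);
      } else if (data?.event === 'on_agent_update') {
        // placeholder status such as "Agent is thinking..." for hidden sequential steps
        console.log('status:', data.data?.message);
      }
    }
  }
}
```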

View file

@ -12,9 +12,11 @@ const {
Constants,
VisionModes,
openAISchema,
ContentTypes,
EModelEndpoint,
KnownEndpoints,
anthropicSchema,
isAgentsEndpoint,
bedrockOutputParser,
removeNullishValues,
} = require('librechat-data-provider');
@ -30,10 +32,10 @@ const {
createContextHandlers,
} = require('~/app/clients/prompts');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { getBufferString, HumanMessage } = require('@langchain/core/messages');
const Tokenizer = require('~/server/services/Tokenizer');
const { spendTokens } = require('~/models/spendTokens');
const BaseClient = require('~/app/clients/BaseClient');
// const { sleep } = require('~/server/utils');
const { createRun } = require('./run');
const { logger } = require('~/config');
@ -48,6 +50,12 @@ const providerParsers = {
const legacyContentEndpoints = new Set([KnownEndpoints.groq, KnownEndpoints.deepseek]);
const noSystemModelRegex = [/\bo1\b/gi];
// const { processMemory, memoryInstructions } = require('~/server/services/Endpoints/agents/memory');
// const { getFormattedMemories } = require('~/models/Memory');
// const { getCurrentDateTime } = require('~/utils');
class AgentClient extends BaseClient {
constructor(options = {}) {
super(null, options);
@ -62,15 +70,15 @@ class AgentClient extends BaseClient {
this.run;
const {
agentConfigs,
contentParts,
collectedUsage,
artifactPromises,
maxContextTokens,
modelOptions = {},
...clientOptions
} = options;
this.modelOptions = modelOptions;
this.agentConfigs = agentConfigs;
this.maxContextTokens = maxContextTokens;
/** @type {MessageContentComplex[]} */
this.contentParts = contentParts;
@ -80,6 +88,8 @@ class AgentClient extends BaseClient {
this.artifactPromises = artifactPromises;
/** @type {AgentClientOptions} */
this.options = Object.assign({ endpoint: options.endpoint }, clientOptions);
/** @type {string} */
this.model = this.options.agent.model_parameters.model;
}
/**
@ -169,7 +179,7 @@ class AgentClient extends BaseClient {
: {};
if (parseOptions) {
runOptions = parseOptions(this.modelOptions);
runOptions = parseOptions(this.options.agent.model_parameters);
}
return removeNullishValues(
@ -224,7 +234,28 @@ class AgentClient extends BaseClient {
let promptTokens;
/** @type {string} */
let systemContent = `${instructions ?? ''}${additional_instructions ?? ''}`;
let systemContent = [instructions ?? '', additional_instructions ?? '']
.filter(Boolean)
.join('\n')
.trim();
// this.systemMessage = getCurrentDateTime();
// const { withKeys, withoutKeys } = await getFormattedMemories({
// userId: this.options.req.user.id,
// });
// processMemory({
// userId: this.options.req.user.id,
// message: this.options.req.body.text,
// parentMessageId,
// memory: withKeys,
// thread_id: this.conversationId,
// }).catch((error) => {
// logger.error('Memory Agent failed to process memory', error);
// });
// this.systemMessage += '\n\n' + memoryInstructions;
// if (withoutKeys) {
// this.systemMessage += `\n\n# Existing memory about the user:\n${withoutKeys}`;
// }
if (this.options.attachments) {
const attachments = await this.options.attachments;
@ -245,7 +276,8 @@ class AgentClient extends BaseClient {
this.options.attachments = files;
}
if (this.message_file_map) {
/** Note: Bedrock uses legacy RAG API handling */
if (this.message_file_map && !isAgentsEndpoint(this.options.endpoint)) {
this.contextHandlers = createContextHandlers(
this.options.req,
orderedMessages[orderedMessages.length - 1].text,
@ -319,7 +351,6 @@ class AgentClient extends BaseClient {
/** @type {sendCompletion} */
async sendCompletion(payload, opts = {}) {
this.modelOptions.user = this.user;
await this.chatCompletion({
payload,
onProgress: opts.onProgress,
@ -339,10 +370,10 @@ class AgentClient extends BaseClient {
await spendTokens(
{
context,
model: model ?? this.modelOptions.model,
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
model: usage.model ?? model ?? this.model ?? this.options.agent.model_parameters.model,
},
{ promptTokens: usage.input_tokens, completionTokens: usage.output_tokens },
);
@ -457,43 +488,190 @@ class AgentClient extends BaseClient {
// });
// }
const run = await createRun({
req: this.options.req,
agent: this.options.agent,
tools: this.options.tools,
runId: this.responseMessageId,
modelOptions: this.modelOptions,
customHandlers: this.options.eventHandlers,
});
const config = {
configurable: {
thread_id: this.conversationId,
last_agent_index: this.agentConfigs?.size ?? 0,
hide_sequential_outputs: this.options.agent.hide_sequential_outputs,
},
signal: abortController.signal,
streamMode: 'values',
version: 'v2',
};
if (!run) {
throw new Error('Failed to create run');
}
this.run = run;
const messages = formatAgentMessages(payload);
const initialMessages = formatAgentMessages(payload);
if (legacyContentEndpoints.has(this.options.agent.endpoint)) {
formatContentStrings(messages);
formatContentStrings(initialMessages);
}
await run.processStream({ messages }, config, {
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Tool Error',
error,
toolId,
);
},
/** @type {ReturnType<createRun>} */
let run;
/**
*
* @param {Agent} agent
* @param {BaseMessage[]} messages
* @param {number} [i]
* @param {TMessageContentParts[]} [contentData]
*/
const runAgent = async (agent, messages, i = 0, contentData = []) => {
config.configurable.model = agent.model_parameters.model;
if (i > 0) {
this.model = agent.model_parameters.model;
}
config.configurable.agent_id = agent.id;
config.configurable.name = agent.name;
config.configurable.agent_index = i;
const noSystemMessages = noSystemModelRegex.some((regex) =>
agent.model_parameters.model.match(regex),
);
const systemMessage = Object.values(agent.toolContextMap ?? {})
.join('\n')
.trim();
let systemContent = [
systemMessage,
agent.instructions ?? '',
i !== 0 ? agent.additional_instructions ?? '' : '',
]
.join('\n')
.trim();
if (noSystemMessages === true) {
agent.instructions = undefined;
agent.additional_instructions = undefined;
} else {
agent.instructions = systemContent;
agent.additional_instructions = undefined;
}
if (noSystemMessages === true && systemContent?.length) {
let latestMessage = messages.pop().content;
if (typeof latestMessage !== 'string') {
latestMessage = latestMessage[0].text;
}
latestMessage = [systemContent, latestMessage].join('\n');
messages.push(new HumanMessage(latestMessage));
}
run = await createRun({
agent,
req: this.options.req,
runId: this.responseMessageId,
signal: abortController.signal,
customHandlers: this.options.eventHandlers,
});
if (!run) {
throw new Error('Failed to create run');
}
if (i === 0) {
this.run = run;
}
if (contentData.length) {
run.Graph.contentData = contentData;
}
await run.processStream({ messages }, config, {
keepContent: i !== 0,
callbacks: {
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Tool Error',
error,
toolId,
);
},
},
});
};
await runAgent(this.options.agent, initialMessages);
let finalContentStart = 0;
if (this.agentConfigs && this.agentConfigs.size > 0) {
let latestMessage = initialMessages.pop().content;
if (typeof latestMessage !== 'string') {
latestMessage = latestMessage[0].text;
}
let i = 1;
let runMessages = [];
const lastFiveMessages = initialMessages.slice(-5);
for (const [agentId, agent] of this.agentConfigs) {
if (abortController.signal.aborted === true) {
break;
}
const currentRun = await run;
if (
i === this.agentConfigs.size &&
config.configurable.hide_sequential_outputs === true
) {
const content = this.contentParts.filter(
(part) => part.type === ContentTypes.TOOL_CALL,
);
this.options.res.write(
`event: message\ndata: ${JSON.stringify({
event: 'on_content_update',
data: {
runId: this.responseMessageId,
content,
},
})}\n\n`,
);
}
const _runMessages = currentRun.Graph.getRunMessages();
finalContentStart = this.contentParts.length;
runMessages = runMessages.concat(_runMessages);
const contentData = currentRun.Graph.contentData.slice();
const bufferString = getBufferString([new HumanMessage(latestMessage), ...runMessages]);
if (i === this.agentConfigs.size) {
logger.debug(`SEQUENTIAL AGENTS: Last buffer string:\n${bufferString}`);
}
try {
const contextMessages = [];
for (const message of lastFiveMessages) {
const messageType = message._getType();
if (
(!agent.tools || agent.tools.length === 0) &&
(messageType === 'tool' || (message.tool_calls?.length ?? 0) > 0)
) {
continue;
}
contextMessages.push(message);
}
const currentMessages = [...contextMessages, new HumanMessage(bufferString)];
await runAgent(agent, currentMessages, i, contentData);
} catch (err) {
logger.error(
`[api/server/controllers/agents/client.js #chatCompletion] Error running agent ${agentId} (${i})`,
err,
);
}
i++;
}
}
if (config.configurable.hide_sequential_outputs !== true) {
finalContentStart = 0;
}
this.contentParts = this.contentParts.filter((part, index) => {
// Include parts that are either:
// 1. At or after the finalContentStart index
// 2. Of type tool_call
// 3. Have tool_call_ids property
return (
index >= finalContentStart || part.type === ContentTypes.TOOL_CALL || part.tool_call_ids
);
});
this.recordCollectedUsage({ context: 'message' }).catch((err) => {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Error recording collected usage',
@ -586,7 +764,7 @@ class AgentClient extends BaseClient {
}
getEncoding() {
return this.modelOptions.model?.includes('gpt-4o') ? 'o200k_base' : 'cl100k_base';
return this.model?.includes('gpt-4o') ? 'o200k_base' : 'cl100k_base';
}
/**
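
A condensed sketch of the sequential hand-off implemented above: the prior run's messages are flattened with `getBufferString` and passed to the next agent as a single human turn (only `getBufferString`, `HumanMessage`, and `AIMessage` are real imports; the wiring is simplified):

```js
const { getBufferString, HumanMessage, AIMessage } = require('@langchain/core/messages');

// Condense the previous agent's exchange into one human message for the next agent.
function buildHandoffMessage(latestUserText, priorRunMessages) {
  const bufferString = getBufferString([new HumanMessage(latestUserText), ...priorRunMessages]);
  return new HumanMessage(bufferString);
}

// The next agent sees something like "Human: Summarize the report\nAI: Here is a summary..."
const handoff = buildHandoffMessage('Summarize the report', [new AIMessage('Here is a summary...')]);
console.log(handoff.content);
```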

View file

@ -94,8 +94,14 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
conversation.title =
conversation && !conversation.title ? null : conversation?.title || 'New Chat';
if (client.options.attachments) {
userMessage.files = client.options.attachments;
if (req.body.files && client.options.attachments) {
userMessage.files = [];
const messageFiles = new Set(req.body.files.map((file) => file.file_id));
for (let attachment of client.options.attachments) {
if (messageFiles.has(attachment.file_id)) {
userMessage.files.push(attachment);
}
}
delete userMessage.image_urls;
}
@ -109,11 +115,13 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
});
res.end();
await saveMessage(
req,
{ ...response, user },
{ context: 'api/server/controllers/agents/request.js - response end' },
);
if (!client.savedMessageIds.has(response.messageId)) {
await saveMessage(
req,
{ ...response, user },
{ context: 'api/server/controllers/agents/request.js - response end' },
);
}
}
if (!client.skipSaveUserMessage) {

View file

@ -3,8 +3,8 @@ const { providerEndpointMap } = require('librechat-data-provider');
/**
* @typedef {import('@librechat/agents').t} t
* @typedef {import('@librechat/agents').StandardGraphConfig} StandardGraphConfig
* @typedef {import('@librechat/agents').StreamEventData} StreamEventData
* @typedef {import('@librechat/agents').ClientOptions} ClientOptions
* @typedef {import('@librechat/agents').EventHandler} EventHandler
* @typedef {import('@librechat/agents').GraphEvents} GraphEvents
* @typedef {import('@librechat/agents').IState} IState
@ -17,18 +17,16 @@ const { providerEndpointMap } = require('librechat-data-provider');
* @param {ServerRequest} [options.req] - The server request.
* @param {string | undefined} [options.runId] - Optional run ID; otherwise, a new run ID will be generated.
* @param {Agent} options.agent - The agent for this run.
* @param {StructuredTool[] | undefined} [options.tools] - The tools to use in the run.
* @param {AbortSignal} options.signal - The signal for this run.
* @param {Record<GraphEvents, EventHandler> | undefined} [options.customHandlers] - Custom event handlers.
* @param {ClientOptions} [options.modelOptions] - Optional model to use; if not provided, it will use the default from modelMap.
* @param {boolean} [options.streaming=true] - Whether to use streaming.
* @param {boolean} [options.streamUsage=true] - Whether to stream usage information.
* @returns {Promise<Run<IState>>} A promise that resolves to a new Run instance.
*/
async function createRun({
runId,
tools,
agent,
modelOptions,
signal,
customHandlers,
streaming = true,
streamUsage = true,
@ -40,14 +38,17 @@ async function createRun({
streaming,
streamUsage,
},
modelOptions,
agent.model_parameters,
);
/** @type {StandardGraphConfig} */
const graphConfig = {
tools,
signal,
llmConfig,
tools: agent.tools,
instructions: agent.instructions,
additional_instructions: agent.additional_instructions,
// toolEnd: agent.end_after_tools,
};
// TEMPORARY FOR TESTING

View file

@ -1,6 +1,12 @@
const { nanoid } = require('nanoid');
const { EnvVar } = require('@librechat/agents');
const { Tools, AuthType } = require('librechat-data-provider');
const { loadAuthValues } = require('~/app/clients/tools/util');
const { Tools, AuthType, ToolCallTypes } = require('librechat-data-provider');
const { processFileURL, uploadImageBuffer } = require('~/server/services/Files/process');
const { processCodeOutput } = require('~/server/services/Files/Code/process');
const { loadAuthValues, loadTools } = require('~/app/clients/tools/util');
const { createToolCall, getToolCallsByConvo } = require('~/models/ToolCall');
const { getMessage } = require('~/models/Message');
const { logger } = require('~/config');
const fieldsMap = {
[Tools.execute_code]: [EnvVar.CODE_API_KEY],
@ -24,6 +30,7 @@ const verifyToolAuth = async (req, res) => {
result = await loadAuthValues({
userId: req.user.id,
authFields,
throwError: false,
});
} catch (error) {
res.status(200).json({ authenticated: false, message: AuthType.USER_PROVIDED });
@ -48,6 +55,131 @@ const verifyToolAuth = async (req, res) => {
}
};
/**
* @param {ServerRequest} req - The request object, containing information about the HTTP request.
* @param {ServerResponse} res - The response object, used to send back the desired HTTP response.
* @returns {Promise<void>} A promise that resolves when the function has completed.
*/
const callTool = async (req, res) => {
try {
const { toolId = '' } = req.params;
if (!fieldsMap[toolId]) {
logger.warn(`[${toolId}/call] User ${req.user.id} attempted call to invalid tool`);
res.status(404).json({ message: 'Tool not found' });
return;
}
const { partIndex, blockIndex, messageId, conversationId, ...args } = req.body;
if (!messageId) {
logger.warn(`[${toolId}/call] User ${req.user.id} attempted call without message ID`);
res.status(400).json({ message: 'Message ID required' });
return;
}
const message = await getMessage({ user: req.user.id, messageId });
if (!message) {
logger.debug(`[${toolId}/call] User ${req.user.id} attempted call with invalid message ID`);
res.status(404).json({ message: 'Message not found' });
return;
}
logger.debug(`[${toolId}/call] User: ${req.user.id}`);
const { loadedTools } = await loadTools({
user: req.user.id,
tools: [toolId],
functions: true,
options: {
req,
returnMetadata: true,
processFileURL,
uploadImageBuffer,
fileStrategy: req.app.locals.fileStrategy,
},
});
const tool = loadedTools[0];
const toolCallId = `${req.user.id}_${nanoid()}`;
const result = await tool.invoke({
args,
name: toolId,
id: toolCallId,
type: ToolCallTypes.TOOL_CALL,
});
const { content, artifact } = result;
const toolCallData = {
toolId,
messageId,
partIndex,
blockIndex,
conversationId,
result: content,
user: req.user.id,
};
if (!artifact || !artifact.files || toolId !== Tools.execute_code) {
createToolCall(toolCallData).catch((error) => {
logger.error(`Error creating tool call: ${error.message}`);
});
return res.status(200).json({
result: content,
});
}
const artifactPromises = [];
for (const file of artifact.files) {
const { id, name } = file;
artifactPromises.push(
(async () => {
const fileMetadata = await processCodeOutput({
req,
id,
name,
apiKey: tool.apiKey,
messageId,
toolCallId,
conversationId,
session_id: artifact.session_id,
});
if (!fileMetadata) {
return null;
}
return fileMetadata;
})().catch((error) => {
logger.error('Error processing code output:', error);
return null;
}),
);
}
const attachments = await Promise.all(artifactPromises);
toolCallData.attachments = attachments;
createToolCall(toolCallData).catch((error) => {
logger.error(`Error creating tool call: ${error.message}`);
});
res.status(200).json({
result: content,
attachments,
});
} catch (error) {
logger.error('Error calling tool', error);
res.status(500).json({ message: 'Error calling tool' });
}
};
const getToolCalls = async (req, res) => {
try {
const { conversationId } = req.query;
const toolCalls = await getToolCallsByConvo(conversationId, req.user.id);
res.status(200).json(toolCalls);
} catch (error) {
logger.error('Error getting tool calls', error);
res.status(500).json({ message: 'Error getting tool calls' });
}
};
module.exports = {
callTool,
getToolCalls,
verifyToolAuth,
};

View file

@ -10,6 +10,7 @@ const openAI = require('~/server/services/Endpoints/openAI');
const agents = require('~/server/services/Endpoints/agents');
const custom = require('~/server/services/Endpoints/custom');
const google = require('~/server/services/Endpoints/google');
const { getConvoFiles } = require('~/models/Conversation');
const { handleError } = require('~/server/utils');
const buildFunction = {
@ -72,21 +73,32 @@ async function buildEndpointOption(req, res, next) {
}
}
const endpointFn = buildFunction[endpointType ?? endpoint];
const builder = isAgentsEndpoint(endpoint) ? (...args) => endpointFn(req, ...args) : endpointFn;
try {
const isAgents = isAgentsEndpoint(endpoint);
const endpointFn = buildFunction[endpointType ?? endpoint];
const builder = isAgents ? (...args) => endpointFn(req, ...args) : endpointFn;
// TODO: use object params
req.body.endpointOption = builder(endpoint, parsedBody, endpointType);
// TODO: use object params
req.body.endpointOption = builder(endpoint, parsedBody, endpointType);
// TODO: use `getModelsConfig` only when necessary
const modelsConfig = await getModelsConfig(req);
req.body.endpointOption.modelsConfig = modelsConfig;
if (req.body.files) {
// hold the promise
req.body.endpointOption.attachments = processFiles(req.body.files);
// TODO: use `getModelsConfig` only when necessary
const modelsConfig = await getModelsConfig(req);
const { resendFiles = true } = req.body.endpointOption;
req.body.endpointOption.modelsConfig = modelsConfig;
if (isAgents && resendFiles && req.body.conversationId) {
const fileIds = await getConvoFiles(req.body.conversationId);
const requestFiles = req.body.files ?? [];
if (requestFiles.length || fileIds.length) {
req.body.endpointOption.attachments = processFiles(requestFiles, fileIds);
}
} else if (req.body.files) {
// hold the promise
req.body.endpointOption.attachments = processFiles(req.body.files);
}
next();
} catch (error) {
return handleError(res, { text: 'Error building endpoint option' });
}
next();
}
module.exports = buildEndpointOption;

View file

@ -5,6 +5,7 @@ const loginLimiter = require('./loginLimiter');
const importLimiters = require('./importLimiters');
const uploadLimiters = require('./uploadLimiters');
const registerLimiter = require('./registerLimiter');
const toolCallLimiter = require('./toolCallLimiter');
const messageLimiters = require('./messageLimiters');
const verifyEmailLimiter = require('./verifyEmailLimiter');
const resetPasswordLimiter = require('./resetPasswordLimiter');
@ -15,6 +16,7 @@ module.exports = {
...messageLimiters,
loginLimiter,
registerLimiter,
toolCallLimiter,
createTTSLimiters,
createSTTLimiters,
verifyEmailLimiter,

View file

@ -0,0 +1,25 @@
const rateLimit = require('express-rate-limit');
const { ViolationTypes } = require('librechat-data-provider');
const logViolation = require('~/cache/logViolation');
const toolCallLimiter = rateLimit({
windowMs: 1000,
max: 1,
handler: async (req, res) => {
const type = ViolationTypes.TOOL_CALL_LIMIT;
const errorMessage = {
type,
max: 1,
limiter: 'user',
windowInMinutes: 1,
};
await logViolation(req, res, type, errorMessage, 0);
res.status(429).json({ message: 'Too many tool call requests. Try again later' });
},
keyGenerator: function (req) {
return req.user?.id;
},
});
module.exports = toolCallLimiter;
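
Note the limiter window is one second (`windowMs: 1000`), even though the logged violation metadata reports `windowInMinutes: 1`. A minimal wiring sketch of the same pattern, keyed per authenticated user (route path and handler are illustrative):

```js
const express = require('express');
const rateLimit = require('express-rate-limit');

// Same shape as toolCallLimiter above: one request per second per user.
const toolCallLimiter = rateLimit({
  windowMs: 1000,
  max: 1,
  keyGenerator: (req) => req.user?.id,
  handler: (req, res) =>
    res.status(429).json({ message: 'Too many tool call requests. Try again later' }),
});

const router = express.Router();
router.post('/:toolId/call', toolCallLimiter, (req, res) => {
  // requests beyond 1/sec per req.user.id receive HTTP 429 before reaching this handler
  res.json({ ok: true });
});
```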

View file

@ -1,19 +1,23 @@
const express = require('express');
const router = express.Router();
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const {
setHeaders,
handleAbort,
// validateModel,
// validateEndpoint,
generateCheckAccess,
validateConvoAccess,
buildEndpointOption,
} = require('~/server/middleware');
const { initializeClient } = require('~/server/services/Endpoints/agents');
const AgentController = require('~/server/controllers/agents/request');
const addTitle = require('~/server/services/Endpoints/agents/title');
const router = express.Router();
router.post('/abort', handleAbort());
const checkAgentAccess = generateCheckAccess(PermissionTypes.AGENTS, [Permissions.USE]);
/**
* @route POST /
* @desc Chat with an agent
@ -25,7 +29,8 @@ router.post('/abort', handleAbort());
router.post(
'/',
// validateModel,
// validateEndpoint,
checkAgentAccess,
validateConvoAccess,
buildEndpointOption,
setHeaders,
async (req, res, next) => {

View file

@ -1,6 +1,7 @@
const express = require('express');
const { callTool, verifyToolAuth, getToolCalls } = require('~/server/controllers/tools');
const { getAvailableTools } = require('~/server/controllers/PluginController');
const { verifyToolAuth } = require('~/server/controllers/tools');
const { toolCallLimiter } = require('~/server/middleware/limiters');
const router = express.Router();
@ -11,6 +12,13 @@ const router = express.Router();
*/
router.get('/', getAvailableTools);
/**
* Get a list of tool calls.
* @route GET /agents/tools/calls
* @returns {ToolCallData[]} 200 - application/json
*/
router.get('/calls', getToolCalls);
/**
* Verify authentication for a specific tool
* @route GET /agents/tools/:toolId/auth
@ -19,4 +27,13 @@ router.get('/', getAvailableTools);
*/
router.get('/:toolId/auth', verifyToolAuth);
/**
* Execute code for a specific tool
* @route POST /agents/tools/:toolId/call
* @param {string} toolId - The ID of the tool to execute
* @param {object} req.body - Request body
* @returns {object} Result of code execution
*/
router.post('/:toolId/call', toolCallLimiter, callTool);
module.exports = router;
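
Illustrative client calls against the two new routes; the mount path (`/api/agents/tools`) and the `execute_code` argument names (`code`, `lang`) are assumptions based on this diff, not confirmed here:

```js
// Re-run a code block from a message part, then list prior tool calls for a conversation.
async function rerunCodeBlock({ messageId, conversationId, partIndex, blockIndex, code, lang }) {
  const res = await fetch('/api/agents/tools/execute_code/call', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messageId, conversationId, partIndex, blockIndex, code, lang }),
  });
  return res.json(); // -> { result, attachments? } per the controller above
}

async function fetchToolCalls(conversationId) {
  const res = await fetch(`/api/agents/tools/calls?conversationId=${conversationId}`);
  return res.json(); // -> ToolCallData[]
}
```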

View file

@ -7,6 +7,7 @@ const requireJwtAuth = require('~/server/middleware/requireJwtAuth');
const { forkConversation } = require('~/server/utils/import/fork');
const { importConversations } = require('~/server/utils/import');
const { createImportLimiters } = require('~/server/middleware');
const { deleteToolCalls } = require('~/models/ToolCall');
const getLogStores = require('~/cache/getLogStores');
const { sleep } = require('~/server/utils');
const { logger } = require('~/config');
@ -105,6 +106,7 @@ router.post('/clear', async (req, res) => {
try {
const dbResponse = await deleteConvos(req.user.id, filter);
await deleteToolCalls(req.user.id, filter.conversationId);
res.status(201).json(dbResponse);
} catch (error) {
logger.error('Error clearing conversations', error);

View file

@ -107,6 +107,10 @@ router.delete('/', async (req, res) => {
}
});
function isValidID(str) {
return /^[A-Za-z0-9_-]{21}$/.test(str);
}
router.get('/code/download/:session_id/:fileId', async (req, res) => {
try {
const { session_id, fileId } = req.params;
@ -117,6 +121,11 @@ router.get('/code/download/:session_id/:fileId', async (req, res) => {
return res.status(400).send('Bad request');
}
if (!isValidID(session_id) || !isValidID(fileId)) {
logger.debug(`${logPrefix} invalid session_id or fileId`);
return res.status(400).send('Bad request');
}
const { getDownloadStream } = getStrategyFunctions(FileSources.execute_code);
if (!getDownloadStream) {
logger.warn(
@ -213,21 +222,20 @@ router.get('/download/:userId/:file_id', async (req, res) => {
});
router.post('/', async (req, res) => {
const file = req.file;
const metadata = req.body;
let cleanup = true;
try {
filterFile({ req, file });
filterFile({ req });
metadata.temp_file_id = metadata.file_id;
metadata.file_id = req.file_id;
if (isAgentsEndpoint(metadata.endpoint)) {
return await processAgentFileUpload({ req, res, file, metadata });
return await processAgentFileUpload({ req, res, metadata });
}
await processFileUpload({ req, res, file, metadata });
await processFileUpload({ req, res, metadata });
} catch (error) {
let message = 'Error processing file';
logger.error('[/files] Error processing file:', error);
@ -238,7 +246,7 @@ router.post('/', async (req, res) => {
// TODO: delete remote file if it exists
try {
await fs.unlink(file.path);
await fs.unlink(req.file.path);
cleanup = false;
} catch (error) {
logger.error('[/files] Error deleting file:', error);
@ -248,7 +256,7 @@ router.post('/', async (req, res) => {
if (cleanup) {
try {
await fs.unlink(file.path);
await fs.unlink(req.file.path);
} catch (error) {
logger.error('[/files] Error deleting file after file processing:', error);
}
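
For reference, the new `isValidID` guard only admits 21-character nanoid-style identifiers, which keeps arbitrary path segments out of the code-file download route:

```js
// Quick illustration of the validation above (example values only).
const isValidID = (str) => /^[A-Za-z0-9_-]{21}$/.test(str);

console.log(isValidID('V1StGXR8_Z5jdHi6B-myT')); // true  — 21 URL-safe characters
console.log(isValidID('../etc/passwd'));         // false — path traversal rejected
console.log(isValidID('shortid'));               // false — wrong length
```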

View file

@ -1,7 +1,12 @@
const path = require('path');
const fs = require('fs').promises;
const express = require('express');
const { filterFile, processImageFile } = require('~/server/services/Files/process');
const { isAgentsEndpoint } = require('librechat-data-provider');
const {
filterFile,
processImageFile,
processAgentFileUpload,
} = require('~/server/services/Files/process');
const { logger } = require('~/config');
const router = express.Router();
@ -10,12 +15,16 @@ router.post('/', async (req, res) => {
const metadata = req.body;
try {
filterFile({ req, file: req.file, image: true });
filterFile({ req, image: true });
metadata.temp_file_id = metadata.file_id;
metadata.file_id = req.file_id;
await processImageFile({ req, res, file: req.file, metadata });
if (isAgentsEndpoint(metadata.endpoint) && metadata.tool_resource != null) {
return await processAgentFileUpload({ req, res, metadata });
}
await processImageFile({ req, res, metadata });
} catch (error) {
// TODO: delete remote file if it exists
logger.error('[/files/images] Error processing file:', error);

View file

@ -1,6 +1,7 @@
const express = require('express');
const {
promptPermissionsSchema,
agentPermissionsSchema,
PermissionTypes,
roleDefaults,
SystemRoles,
@ -72,4 +73,37 @@ router.put('/:roleName/prompts', checkAdmin, async (req, res) => {
}
});
/**
* PUT /api/roles/:roleName/agents
* Update agent permissions for a specific role
*/
router.put('/:roleName/agents', checkAdmin, async (req, res) => {
const { roleName: _r } = req.params;
// TODO: TEMP, use a better parsing for roleName
const roleName = _r.toUpperCase();
/** @type {TRole['AGENTS']} */
const updates = req.body;
try {
const parsedUpdates = agentPermissionsSchema.partial().parse(updates);
const role = await getRoleByName(roleName);
if (!role) {
return res.status(404).send({ message: 'Role not found' });
}
const mergedUpdates = {
[PermissionTypes.AGENTS]: {
...role[PermissionTypes.AGENTS],
...parsedUpdates,
},
};
const updatedRole = await updateRoleByName(roleName, mergedUpdates);
res.status(200).send(updatedRole);
} catch (error) {
return res.status(400).send({ message: 'Invalid agent permissions.', error: error.errors });
}
});
module.exports = router;
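
An illustrative request to the new endpoint, assuming the roles router is mounted at `/api/roles`; the exact permission flags come from `agentPermissionsSchema` in librechat-data-provider (`USE`/`CREATE` shown here as examples):

```js
// Grant agent usage to the USER role while leaving creation disabled (flags illustrative).
await fetch('/api/roles/user/agents', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ USE: true, CREATE: false }),
});
```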

View file

@ -8,7 +8,6 @@ const { loadDefaultInterface } = require('./start/interface');
const { azureConfigSetup } = require('./start/azureOpenAI');
const { loadAndFormatTools } = require('./ToolService');
const { initializeRoles } = require('~/models/Role');
const { cleanup } = require('./cleanup');
const paths = require('~/config/paths');
/**
@ -18,7 +17,6 @@ const paths = require('~/config/paths');
* @param {Express.Application} app - The Express application object.
*/
const AppService = async (app) => {
cleanup();
await initializeRoles();
/** @type {TCustomConfig}*/
const config = (await loadCustomConfig()) ?? {};

View file

@ -49,10 +49,6 @@ module.exports = {
process.env.BEDROCK_AWS_SECRET_ACCESS_KEY ?? process.env.BEDROCK_AWS_DEFAULT_REGION,
),
/* key will be part of separate config */
[EModelEndpoint.agents]: generateConfig(
process.env.EXPERIMENTAL_AGENTS,
undefined,
EModelEndpoint.agents,
),
[EModelEndpoint.agents]: generateConfig('true', undefined, EModelEndpoint.agents),
},
};

View file

@ -2,8 +2,14 @@ const { loadAgent } = require('~/models/Agent');
const { logger } = require('~/config');
const buildOptions = (req, endpoint, parsedBody) => {
const { agent_id, instructions, spec, ...model_parameters } = parsedBody;
const {
agent_id,
instructions,
spec,
maxContextTokens,
resendFiles = true,
...model_parameters
} = parsedBody;
const agentPromise = loadAgent({
req,
agent_id,
@ -13,12 +19,14 @@ const buildOptions = (req, endpoint, parsedBody) => {
});
const endpointOption = {
agent: agentPromise,
spec,
endpoint,
agent_id,
resendFiles,
instructions,
spec,
maxContextTokens,
model_parameters,
agent: agentPromise,
};
return endpointOption;

View file

@ -16,6 +16,8 @@ const { getCustomEndpointConfig } = require('~/server/services/Config');
const { loadAgentTools } = require('~/server/services/ToolService');
const AgentClient = require('~/server/controllers/agents/client');
const { getModelMaxTokens } = require('~/utils');
const { getAgent } = require('~/models/Agent');
const { logger } = require('~/config');
const providerConfigMap = {
[EModelEndpoint.openAI]: initOpenAI,
@ -25,6 +27,113 @@ const providerConfigMap = {
[Providers.OLLAMA]: initCustom,
};
/**
*
* @param {Promise<Array<MongoFile | null>> | undefined} _attachments
* @param {AgentToolResources | undefined} _tool_resources
* @returns {Promise<{ attachments: Array<MongoFile | undefined> | undefined, tool_resources: AgentToolResources | undefined }>}
*/
const primeResources = async (_attachments, _tool_resources) => {
try {
if (!_attachments) {
return { attachments: undefined, tool_resources: _tool_resources };
}
/** @type {Array<MongoFile | undefined> | undefined} */
const files = await _attachments;
const attachments = [];
const tool_resources = _tool_resources ?? {};
for (const file of files) {
if (!file) {
continue;
}
if (file.metadata?.fileIdentifier) {
const execute_code = tool_resources.execute_code ?? {};
if (!execute_code.files) {
tool_resources.execute_code = { ...execute_code, files: [] };
}
tool_resources.execute_code.files.push(file);
} else if (file.embedded === true) {
const file_search = tool_resources.file_search ?? {};
if (!file_search.files) {
tool_resources.file_search = { ...file_search, files: [] };
}
tool_resources.file_search.files.push(file);
}
attachments.push(file);
}
return { attachments, tool_resources };
} catch (error) {
logger.error('Error priming resources', error);
return { attachments: _attachments, tool_resources: _tool_resources };
}
};
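
A small illustration of what `primeResources` produces, assuming one code-interpreter file (carrying `metadata.fileIdentifier`) and one embedded file-search file, with `primeResources` in scope:

```js
// Hypothetical inputs to illustrate the routing logic above (file shapes simplified).
const attachmentsPromise = Promise.resolve([
  { file_id: 'a', filename: 'data.csv', metadata: { fileIdentifier: 'sess/abc' } },
  { file_id: 'b', filename: 'notes.pdf', embedded: true },
]);

primeResources(attachmentsPromise, undefined).then(({ attachments, tool_resources }) => {
  // attachments    -> both files, in order
  // tool_resources -> { execute_code: { files: [data.csv] }, file_search: { files: [notes.pdf] } }
  console.log(attachments.length, Object.keys(tool_resources));
});
```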
const initializeAgentOptions = async ({
req,
res,
agent,
endpointOption,
tool_resources,
isInitialAgent = false,
}) => {
const { tools, toolContextMap } = await loadAgentTools({
req,
tools: agent.tools,
agent_id: agent.id,
tool_resources,
});
const provider = agent.provider;
let getOptions = providerConfigMap[provider];
if (!getOptions) {
const customEndpointConfig = await getCustomEndpointConfig(provider);
if (!customEndpointConfig) {
throw new Error(`Provider ${provider} not supported`);
}
getOptions = initCustom;
agent.provider = Providers.OPENAI;
agent.endpoint = provider.toLowerCase();
}
const model_parameters = agent.model_parameters ?? { model: agent.model };
const _endpointOption = isInitialAgent
? endpointOption
: {
model_parameters,
};
const options = await getOptions({
req,
res,
optionsOnly: true,
overrideEndpoint: provider,
overrideModel: agent.model,
endpointOption: _endpointOption,
});
agent.model_parameters = Object.assign(model_parameters, options.llmConfig);
if (options.configOptions) {
agent.model_parameters.configuration = options.configOptions;
}
if (!agent.model_parameters.model) {
agent.model_parameters.model = agent.model;
}
return {
...agent,
tools,
toolContextMap,
maxContextTokens:
agent.max_context_tokens ??
getModelMaxTokens(agent.model_parameters.model, providerEndpointMap[provider]) ??
4000,
};
};
const initializeClient = async ({ req, res, endpointOption }) => {
if (!endpointOption) {
throw new Error('Endpoint option not provided');
@ -48,70 +157,68 @@ const initializeClient = async ({ req, res, endpointOption }) => {
throw new Error('No agent promise provided');
}
/** @type {Agent | null} */
const agent = await endpointOption.agent;
if (!agent) {
// Initialize primary agent
const primaryAgent = await endpointOption.agent;
if (!primaryAgent) {
throw new Error('Agent not found');
}
const { tools } = await loadAgentTools({
req,
tools: agent.tools,
agent_id: agent.id,
tool_resources: agent.tool_resources,
});
const { attachments, tool_resources } = await primeResources(
endpointOption.attachments,
primaryAgent.tool_resources,
);
const provider = agent.provider;
let modelOptions = { model: agent.model };
let getOptions = providerConfigMap[provider];
if (!getOptions) {
const customEndpointConfig = await getCustomEndpointConfig(provider);
if (!customEndpointConfig) {
throw new Error(`Provider ${provider} not supported`);
}
getOptions = initCustom;
agent.provider = Providers.OPENAI;
agent.endpoint = provider.toLowerCase();
}
const agentConfigs = new Map();
// TODO: pass-in override settings that are specific to current run
endpointOption.model_parameters.model = agent.model;
const options = await getOptions({
// Handle primary agent
const primaryConfig = await initializeAgentOptions({
req,
res,
agent: primaryAgent,
endpointOption,
optionsOnly: true,
overrideEndpoint: provider,
overrideModel: agent.model,
tool_resources,
isInitialAgent: true,
});
modelOptions = Object.assign(modelOptions, options.llmConfig);
if (options.configOptions) {
modelOptions.configuration = options.configOptions;
const agent_ids = primaryConfig.agent_ids;
if (agent_ids?.length) {
for (const agentId of agent_ids) {
const agent = await getAgent({ id: agentId });
if (!agent) {
throw new Error(`Agent ${agentId} not found`);
}
const config = await initializeAgentOptions({
req,
res,
agent,
endpointOption,
});
agentConfigs.set(agentId, config);
}
}
const sender = getResponseSender({
...endpointOption,
model: endpointOption.model_parameters.model,
});
const sender =
primaryAgent.name ??
getResponseSender({
...endpointOption,
model: endpointOption.model_parameters.model,
});
const client = new AgentClient({
req,
agent,
tools,
agent: primaryConfig,
sender,
attachments,
contentParts,
modelOptions,
eventHandlers,
collectedUsage,
artifactPromises,
spec: endpointOption.spec,
agentConfigs,
endpoint: EModelEndpoint.agents,
attachments: endpointOption.attachments,
maxContextTokens:
agent.max_context_tokens ??
getModelMaxTokens(modelOptions.model, providerEndpointMap[provider]) ??
4000,
maxContextTokens: primaryConfig.maxContextTokens,
});
return { client };
};

View file

@ -5,7 +5,6 @@ const {
getResponseSender,
} = require('librechat-data-provider');
const { getDefaultHandlers } = require('~/server/controllers/agents/callbacks');
// const { loadAgentTools } = require('~/server/services/ToolService');
const getOptions = require('~/server/services/Endpoints/bedrock/options');
const AgentClient = require('~/server/controllers/agents/client');
const { getModelMaxTokens } = require('~/utils');
@ -20,8 +19,6 @@ const initializeClient = async ({ req, res, endpointOption }) => {
const { contentParts, aggregateContent } = createContentAggregator();
const eventHandlers = getDefaultHandlers({ res, aggregateContent, collectedUsage });
// const tools = [createTavilySearchTool()];
/** @type {Agent} */
const agent = {
id: EModelEndpoint.bedrock,
@ -36,8 +33,6 @@ const initializeClient = async ({ req, res, endpointOption }) => {
agent.instructions = `${agent.instructions ?? ''}\n${endpointOption.artifactsPrompt}`.trim();
}
let modelOptions = { model: agent.model };
// TODO: pass-in override settings that are specific to current run
const options = await getOptions({
req,
@ -45,28 +40,34 @@ const initializeClient = async ({ req, res, endpointOption }) => {
endpointOption,
});
modelOptions = Object.assign(modelOptions, options.llmConfig);
const maxContextTokens =
agent.max_context_tokens ??
getModelMaxTokens(modelOptions.model, providerEndpointMap[agent.provider]);
agent.model_parameters = Object.assign(agent.model_parameters, options.llmConfig);
if (options.configOptions) {
agent.model_parameters.configuration = options.configOptions;
}
const sender = getResponseSender({
...endpointOption,
model: endpointOption.model_parameters.model,
});
const sender =
agent.name ??
getResponseSender({
...endpointOption,
model: endpointOption.model_parameters.model,
});
const client = new AgentClient({
req,
agent,
sender,
// tools,
modelOptions,
contentParts,
eventHandlers,
collectedUsage,
maxContextTokens,
spec: endpointOption.spec,
endpoint: EModelEndpoint.bedrock,
configOptions: options.configOptions,
resendFiles: endpointOption.resendFiles,
maxContextTokens:
endpointOption.maxContextTokens ??
agent.max_context_tokens ??
getModelMaxTokens(agent.model_parameters.model, providerEndpointMap[agent.provider]) ??
4000,
attachments: endpointOption.attachments,
});
return { client };

View file

@ -10,8 +10,8 @@ const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/User
const { getLLMConfig } = require('~/server/services/Endpoints/openAI/llm');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const { fetchModels } = require('~/server/services/ModelService');
const { isUserProvided, sleep } = require('~/server/utils');
const getLogStores = require('~/cache/getLogStores');
const { isUserProvided } = require('~/server/utils');
const { OpenAIClient } = require('~/app');
const { PROXY } = process.env;
@ -141,7 +141,18 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
},
clientOptions,
);
return getLLMConfig(apiKey, requestOptions);
const options = getLLMConfig(apiKey, requestOptions);
if (!customOptions.streamRate) {
return options;
}
options.llmConfig.callbacks = [
{
handleLLMNewToken: async () => {
await sleep(customOptions.streamRate);
},
},
];
return options;
}
if (clientOptions.reverseProxyUrl) {

View file

@ -6,7 +6,7 @@ const {
} = require('librechat-data-provider');
const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getLLMConfig } = require('~/server/services/Endpoints/openAI/llm');
const { isEnabled, isUserProvided } = require('~/server/utils');
const { isEnabled, isUserProvided, sleep } = require('~/server/utils');
const { getAzureCredentials } = require('~/utils');
const { OpenAIClient } = require('~/app');
@ -140,7 +140,18 @@ const initializeClient = async ({
},
clientOptions,
);
return getLLMConfig(apiKey, requestOptions);
const options = getLLMConfig(apiKey, requestOptions);
if (!clientOptions.streamRate) {
return options;
}
options.llmConfig.callbacks = [
{
handleLLMNewToken: async () => {
await sleep(clientOptions.streamRate);
},
},
];
return options;
}
const client = new OpenAIClient(apiKey, Object.assign({ req, res }, clientOptions));

View file

@ -40,12 +40,16 @@ async function getCodeOutputDownloadStream(fileIdentifier, apiKey) {
* @param {import('fs').ReadStream | import('stream').Readable} params.stream - The read stream for the file.
* @param {string} params.filename - The name of the file.
* @param {string} params.apiKey - The API key for authentication.
* @param {string} [params.entity_id] - Optional entity ID for the file.
* @returns {Promise<string>}
* @throws {Error} If there's an error during the upload process.
*/
async function uploadCodeEnvFile({ req, stream, filename, apiKey }) {
async function uploadCodeEnvFile({ req, stream, filename, apiKey, entity_id = '' }) {
try {
const form = new FormData();
if (entity_id.length > 0) {
form.append('entity_id', entity_id);
}
form.append('file', stream, filename);
const baseURL = getCodeBaseURL();
@ -67,7 +71,12 @@ async function uploadCodeEnvFile({ req, stream, filename, apiKey }) {
throw new Error(`Error uploading file: ${result.message}`);
}
return `${result.session_id}/${result.files[0].fileId}`;
const fileIdentifier = `${result.session_id}/${result.files[0].fileId}`;
if (entity_id.length === 0) {
return fileIdentifier;
}
return `${fileIdentifier}?entity_id=${entity_id}`;
} catch (error) {
throw new Error(`Error uploading file: ${error.message}`);
}
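
The returned identifier therefore has the shape `session_id/fileId`, with an optional `?entity_id=...` suffix when the upload is scoped to an agent; downstream code (`getSessionInfo`, `primeFiles` below) splits on `?` to recover the query parameters. A quick illustration with made-up values:

```js
// entity_id is only appended for agent-scoped uploads.
const fileIdentifier = 'sess_0f3a9c/file_d41d8c?entity_id=agent_abc123';
const [path, queryString] = fileIdentifier.split('?');
const [session_id, fileId] = path.split('/');
const entity_id = queryString ? new URLSearchParams(queryString).get('entity_id') : null;
console.log({ session_id, fileId, entity_id }); // -> sess_0f3a9c, file_d41d8c, agent_abc123
```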

View file

@ -3,10 +3,11 @@ const { v4 } = require('uuid');
const axios = require('axios');
const { getCodeBaseURL } = require('@librechat/agents');
const {
EToolResources,
Tools,
FileContext,
imageExtRegex,
FileSources,
imageExtRegex,
EToolResources,
} = require('librechat-data-provider');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { convertImage } = require('~/server/services/Files/images/convert');
@ -110,12 +111,20 @@ function checkIfActive(dateString) {
async function getSessionInfo(fileIdentifier, apiKey) {
try {
const baseURL = getCodeBaseURL();
const session_id = fileIdentifier.split('/')[0];
const [path, queryString] = fileIdentifier.split('?');
const session_id = path.split('/')[0];
let queryParams = {};
if (queryString) {
queryParams = Object.fromEntries(new URLSearchParams(queryString).entries());
}
const response = await axios({
method: 'get',
url: `${baseURL}/files/${session_id}`,
params: {
detail: 'summary',
...queryParams,
},
headers: {
'User-Agent': 'LibreChat/1.0',
@ -124,7 +133,7 @@ async function getSessionInfo(fileIdentifier, apiKey) {
timeout: 5000,
});
return response.data.find((file) => file.name.startsWith(fileIdentifier))?.lastModified;
return response.data.find((file) => file.name.startsWith(path))?.lastModified;
} catch (error) {
logger.error(`Error fetching session info: ${error.message}`, error);
return null;
@ -137,29 +146,56 @@ async function getSessionInfo(fileIdentifier, apiKey) {
* @param {ServerRequest} options.req
* @param {Agent['tool_resources']} options.tool_resources
* @param {string} apiKey
* @returns {Promise<Array<{ id: string; session_id: string; name: string }>>}
* @returns {Promise<{
* files: Array<{ id: string; session_id: string; name: string }>,
* toolContext: string,
* }>}
*/
const primeFiles = async (options, apiKey) => {
const { tool_resources } = options;
const file_ids = tool_resources?.[EToolResources.execute_code]?.file_ids ?? [];
const dbFiles = await getFiles({ file_id: { $in: file_ids } });
const agentResourceIds = new Set(file_ids);
const resourceFiles = tool_resources?.[EToolResources.execute_code]?.files ?? [];
const dbFiles = ((await getFiles({ file_id: { $in: file_ids } })) ?? []).concat(resourceFiles);
const files = [];
const sessions = new Map();
for (const file of dbFiles) {
let toolContext = '';
for (let i = 0; i < dbFiles.length; i++) {
const file = dbFiles[i];
if (!file) {
continue;
}
if (file.metadata.fileIdentifier) {
const [session_id, id] = file.metadata.fileIdentifier.split('/');
const [path, queryString] = file.metadata.fileIdentifier.split('?');
const [session_id, id] = path.split('/');
const pushFile = () => {
if (!toolContext) {
toolContext = `- Note: The following files are available in the "${Tools.execute_code}" tool environment:`;
}
toolContext += `\n\t- /mnt/data/${file.filename}${
agentResourceIds.has(file.file_id) ? '' : ' (just attached by user)'
}`;
files.push({
id,
session_id,
name: file.filename,
});
};
if (sessions.has(session_id)) {
pushFile();
continue;
}
let queryParams = {};
if (queryString) {
queryParams = Object.fromEntries(new URLSearchParams(queryString).entries());
}
const reuploadFile = async () => {
try {
const { getDownloadStream } = getStrategyFunctions(file.source);
@ -171,6 +207,7 @@ const primeFiles = async (options, apiKey) => {
req: options.req,
stream,
filename: file.filename,
entity_id: queryParams.entity_id,
apiKey,
});
await updateFile({ file_id: file.file_id, metadata: { fileIdentifier } });
@ -198,7 +235,7 @@ const primeFiles = async (options, apiKey) => {
}
}
return files;
return { files, toolContext };
};
module.exports = {
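The `toolContext` string assembled in `primeFiles` is what the agent ultimately sees about its sandbox files. The same pattern in isolation (function name is illustrative):

```js
// Builds the note injected into the agent's context for the code tool.
// Files outside the agent's own resource ids are marked as user attachments.
function buildCodeToolContext(files, agentResourceIds, toolName = 'execute_code') {
  let toolContext = '';
  for (const file of files) {
    if (!toolContext) {
      toolContext = `- Note: The following files are available in the "${toolName}" tool environment:`;
    }
    toolContext += `\n\t- /mnt/data/${file.filename}${
      agentResourceIds.has(file.file_id) ? '' : ' (just attached by user)'
    }`;
  }
  return toolContext;
}

// buildCodeToolContext(
//   [{ file_id: 'a', filename: 'data.csv' }, { file_id: 'b', filename: 'notes.txt' }],
//   new Set(['a']),
// );
// -> '- Note: ... tool environment:\n\t- /mnt/data/data.csv\n\t- /mnt/data/notes.txt (just attached by user)'
```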

View file

@ -97,6 +97,7 @@ async function encodeAndFormat(req, files, endpoint, mode) {
filepath: file.filepath,
filename: file.filename,
embedded: !!file.embedded,
metadata: file.metadata,
};
if (file.height && file.width) {

View file

@ -20,7 +20,7 @@ const {
const { EnvVar } = require('@librechat/agents');
const { addResourceFileId, deleteResourceFileId } = require('~/server/controllers/assistants/v2');
const { convertImage, resizeAndConvert } = require('~/server/services/Files/images');
const { addAgentResourceFile, removeAgentResourceFile } = require('~/models/Agent');
const { addAgentResourceFile, removeAgentResourceFiles } = require('~/models/Agent');
const { getOpenAIClient } = require('~/server/controllers/assistants/helpers');
const { createFile, updateFileUsage, deleteFiles } = require('~/models/File');
const { loadAuthValues } = require('~/app/clients/tools/util');
@ -29,10 +29,34 @@ const { getStrategyFunctions } = require('./strategies');
const { determineFileType } = require('~/server/utils');
const { logger } = require('~/config');
const processFiles = async (files) => {
/**
*
* @param {Array<MongoFile>} files
* @param {Array<string>} [fileIds]
* @returns
*/
const processFiles = async (files, fileIds) => {
const promises = [];
const seen = new Set();
for (let file of files) {
const { file_id } = file;
if (seen.has(file_id)) {
continue;
}
seen.add(file_id);
promises.push(updateFileUsage({ file_id }));
}
if (!fileIds) {
return await Promise.all(promises);
}
for (let file_id of fileIds) {
if (seen.has(file_id)) {
continue;
}
seen.add(file_id);
promises.push(updateFileUsage({ file_id }));
}
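The reworked `processFiles` issues at most one usage update per file id, whether the id arrives as a full file object or via the optional `fileIds` list. The same pattern, condensed (names illustrative; `updateFileUsage` assumed to return a promise as in the model layer):

```js
async function touchFilesOnce(files, fileIds, updateFileUsage) {
  const promises = [];
  const seen = new Set();
  for (const { file_id } of files) {
    if (seen.has(file_id)) continue; // already queued via a file object
    seen.add(file_id);
    promises.push(updateFileUsage({ file_id }));
  }
  for (const file_id of fileIds ?? []) {
    if (seen.has(file_id)) continue; // already queued above
    seen.add(file_id);
    promises.push(updateFileUsage({ file_id }));
  }
  return Promise.all(promises);
}
```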
@ -44,7 +68,7 @@ const processFiles = async (files) => {
* Enqueues the delete operation to the leaky bucket queue if necessary, or adds it directly to promises.
*
* @param {object} params - The passed parameters.
* @param {Express.Request} params.req - The express request object.
* @param {ServerRequest} params.req - The express request object.
* @param {MongoFile} params.file - The file object to delete.
* @param {Function} params.deleteFile - The delete file function.
* @param {Promise[]} params.promises - The array of promises to await.
@ -91,7 +115,7 @@ function enqueueDeleteOperation({ req, file, deleteFile, promises, resolvedFileI
*
* @param {Object} params - The params object.
* @param {MongoFile[]} params.files - The file objects to delete.
* @param {Express.Request} params.req - The express request object.
* @param {ServerRequest} params.req - The express request object.
* @param {DeleteFilesBody} params.req.body - The request body.
* @param {string} [params.req.body.agent_id] - The agent ID if file uploaded is associated to an agent.
* @param {string} [params.req.body.assistant_id] - The assistant ID if file uploaded is associated to an assistant.
@ -128,18 +152,16 @@ const processDeleteRequest = async ({ req, files }) => {
await initializeClients();
}
const agentFiles = [];
for (const file of files) {
const source = file.source ?? FileSources.local;
if (req.body.agent_id && req.body.tool_resource) {
promises.push(
removeAgentResourceFile({
req,
file_id: file.file_id,
agent_id: req.body.agent_id,
tool_resource: req.body.tool_resource,
}),
);
agentFiles.push({
tool_resource: req.body.tool_resource,
file_id: file.file_id,
});
}
if (checkOpenAIStorage(source) && !client[source]) {
@ -183,6 +205,15 @@ const processDeleteRequest = async ({ req, files }) => {
enqueueDeleteOperation({ req, file, deleteFile, promises, resolvedFileIds, openai });
}
if (agentFiles.length > 0) {
promises.push(
removeAgentResourceFiles({
agent_id: req.body.agent_id,
files: agentFiles,
}),
);
}
await Promise.allSettled(promises);
await deleteFiles(resolvedFileIds);
};
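Deletions tied to an agent are no longer issued one call per file; they are collected during the loop and flushed once through the batched `removeAgentResourceFiles`. A sketch of the collect-then-flush shape (wrapper name illustrative):

```js
function removeFilesFromAgent({ files, agent_id, tool_resource, removeAgentResourceFiles }) {
  // Gather every file slated for removal from this agent's tool resource.
  const agentFiles = files.map((file) => ({ tool_resource, file_id: file.file_id }));
  if (agentFiles.length === 0) {
    return Promise.resolve();
  }
  // Single batched write instead of one update per file.
  return removeAgentResourceFiles({ agent_id, files: agentFiles });
}
```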
@ -242,14 +273,14 @@ const processFileURL = async ({ fileStrategy, userId, URL, fileName, basePath, c
* Saves file metadata to the database with an expiry TTL.
*
* @param {Object} params - The parameters object.
* @param {Express.Request} params.req - The Express request object.
* @param {ServerRequest} params.req - The Express request object.
* @param {Express.Response} [params.res] - The Express response object.
* @param {Express.Multer.File} params.file - The uploaded file.
* @param {ImageMetadata} params.metadata - Additional metadata for the file.
* @param {boolean} params.returnFile - Whether to return the file metadata or return response as normal.
* @returns {Promise<void>}
*/
const processImageFile = async ({ req, res, file, metadata, returnFile = false }) => {
const processImageFile = async ({ req, res, metadata, returnFile = false }) => {
const { file } = req;
const source = req.app.locals.fileStrategy;
const { handleImageUpload } = getStrategyFunctions(source);
const { file_id, temp_file_id, endpoint } = metadata;
@ -289,7 +320,7 @@ const processImageFile = async ({ req, res, file, metadata, returnFile = false }
* returns minimal file metadata, without saving to the database.
*
* @param {Object} params - The parameters object.
* @param {Express.Request} params.req - The Express request object.
* @param {ServerRequest} params.req - The Express request object.
* @param {FileContext} params.context - The context of the file (e.g., 'avatar', 'image_generation', etc.)
* @param {boolean} [params.resize=true] - Whether to resize and convert the image to target format. Default is `true`.
* @param {{ buffer: Buffer, width: number, height: number, bytes: number, filename: string, type: string, file_id: string }} [params.metadata] - Required metadata for the file if resize is false.
@ -335,13 +366,12 @@ const uploadImageBuffer = async ({ req, context, metadata = {}, resize = true })
* Files must be deleted from the server filesystem manually.
*
* @param {Object} params - The parameters object.
* @param {Express.Request} params.req - The Express request object.
* @param {ServerRequest} params.req - The Express request object.
* @param {Express.Response} params.res - The Express response object.
* @param {Express.Multer.File} params.file - The uploaded file.
* @param {FileMetadata} params.metadata - Additional metadata for the file.
* @returns {Promise<void>}
*/
const processFileUpload = async ({ req, res, file, metadata }) => {
const processFileUpload = async ({ req, res, metadata }) => {
const isAssistantUpload = isAssistantsEndpoint(metadata.endpoint);
const assistantSource =
metadata.endpoint === EModelEndpoint.azureAssistants ? FileSources.azure : FileSources.openai;
@ -355,6 +385,7 @@ const processFileUpload = async ({ req, res, file, metadata }) => {
({ openai } = await getOpenAIClient({ req }));
}
const { file } = req;
const {
id,
bytes,
@ -422,13 +453,13 @@ const processFileUpload = async ({ req, res, file, metadata }) => {
* Files must be deleted from the server filesystem manually.
*
* @param {Object} params - The parameters object.
* @param {Express.Request} params.req - The Express request object.
* @param {ServerRequest} params.req - The Express request object.
* @param {Express.Response} params.res - The Express response object.
* @param {Express.Multer.File} params.file - The uploaded file.
* @param {FileMetadata} params.metadata - Additional metadata for the file.
* @returns {Promise<void>}
*/
const processAgentFileUpload = async ({ req, res, file, metadata }) => {
const processAgentFileUpload = async ({ req, res, metadata }) => {
const { file } = req;
const { agent_id, tool_resource } = metadata;
if (agent_id && !tool_resource) {
throw new Error('No tool resource provided for agent file upload');
@ -453,6 +484,7 @@ const processAgentFileUpload = async ({ req, res, file, metadata }) => {
stream,
filename: file.originalname,
apiKey: result[EnvVar.CODE_API_KEY],
entity_id: messageAttachment === true ? undefined : agent_id,
});
fileInfoMetadata = { fileIdentifier };
}
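The `entity_id` passed to the code-file upload depends on how the file arrived: message attachments stay unscoped, while files added to an agent's resources are tagged with that agent. A one-line sketch of the decision (helper name illustrative):

```js
// Undefined for ad-hoc message attachments; the agent id for agent resource uploads.
function resolveUploadEntityId({ messageAttachment, agent_id }) {
  return messageAttachment === true ? undefined : agent_id;
}

// resolveUploadEntityId({ messageAttachment: true, agent_id: 'agent_123' })  -> undefined
// resolveUploadEntityId({ messageAttachment: false, agent_id: 'agent_123' }) -> 'agent_123'
```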
@ -576,7 +608,7 @@ const processOpenAIFile = async ({
/**
* Process OpenAI image files, convert to target format, save and return file metadata.
* @param {object} params - The params object.
* @param {Express.Request} params.req - The Express request object.
* @param {ServerRequest} params.req - The Express request object.
* @param {Buffer} params.buffer - The image buffer.
* @param {string} params.file_id - The file ID.
* @param {string} params.filename - The filename.
@ -708,20 +740,20 @@ async function retrieveAndProcessFile({
* Filters a file based on its size and the endpoint origin.
*
* @param {Object} params - The parameters for the function.
* @param {object} params.req - The request object from Express.
* @param {ServerRequest} params.req - The request object from Express.
* @param {string} [params.req.endpoint]
* @param {string} [params.req.file_id]
* @param {number} [params.req.width]
* @param {number} [params.req.height]
* @param {number} [params.req.version]
* @param {Express.Multer.File} params.file - The file uploaded to the server via multer.
* @param {boolean} [params.image] - Whether the file expected is an image.
* @param {boolean} [params.isAvatar] - Whether the file expected is a user or entity avatar.
* @returns {void}
*
* @throws {Error} If a file exception is caught (invalid file size or type, lack of metadata).
*/
function filterFile({ req, file, image, isAvatar }) {
function filterFile({ req, image, isAvatar }) {
const { file } = req;
const { endpoint, file_id, width, height } = req.body;
if (!file_id && !isAvatar) {
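Several helpers (`processImageFile`, `processFileUpload`, `processAgentFileUpload`, `filterFile`) now read the upload from `req.file` instead of taking a `file` argument, which assumes multer has already run in the route chain. An illustrative route sketch (not the PR's router) showing where that object comes from:

```js
const express = require('express');
const multer = require('multer');

const upload = multer({ storage: multer.memoryStorage() });
const router = express.Router();

router.post('/files', upload.single('file'), (req, res) => {
  const { file } = req; // the same object the refactored helpers destructure
  res.json({ filename: file.originalname, bytes: file.size });
});

module.exports = router;
```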

View file

@ -7,6 +7,7 @@ const { logger } = require('~/config');
*
* @param {string} userId - The unique identifier of the user for whom the plugin authentication value is to be retrieved.
* @param {string} authField - The specific authentication field (e.g., 'API_KEY', 'URL') whose value is to be retrieved and decrypted.
* @param {boolean} throwError - Whether to throw an error if the authentication value does not exist. Defaults to `true`.
* @returns {Promise<string|null>} A promise that resolves to the decrypted authentication value if found, or `null` if no such authentication value exists for the given user and field.
*
* The function throws an error if it encounters any issue during the retrieval or decryption process, or if the authentication value does not exist.
@ -22,7 +23,7 @@ const { logger } = require('~/config');
* @throws {Error} Throws an error if there's an issue during the retrieval or decryption process, or if the authentication value does not exist.
* @async
*/
const getUserPluginAuthValue = async (userId, authField) => {
const getUserPluginAuthValue = async (userId, authField, throwError = true) => {
try {
const pluginAuth = await PluginAuth.findOne({ userId, authField }).lean();
if (!pluginAuth) {
@ -32,6 +33,9 @@ const getUserPluginAuthValue = async (userId, authField) => {
const decryptedValue = await decrypt(pluginAuth.value);
return decryptedValue;
} catch (err) {
if (!throwError) {
return null;
}
logger.error('[getUserPluginAuthValue]', err);
throw err;
}
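With the new `throwError` flag, callers that have a fallback key can receive `null` instead of catching an exception. A usage sketch (the auth field and env var names here are illustrative, not taken from the diff):

```js
async function resolveCodeApiKey(userId, getUserPluginAuthValue) {
  // Returns null instead of throwing when the user has not stored a key.
  const userKey = await getUserPluginAuthValue(userId, 'CODE_API_KEY', false);
  if (userKey != null) {
    return userKey;
  }
  // Fall back to a server-level key, if configured.
  return process.env.CODE_API_KEY ?? null;
}
```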

View file

@ -1,8 +1,8 @@
const fs = require('fs');
const path = require('path');
const { zodToJsonSchema } = require('zod-to-json-schema');
const { Calculator } = require('@langchain/community/tools/calculator');
const { tool: toolFn, Tool } = require('@langchain/core/tools');
const { Calculator } = require('@langchain/community/tools/calculator');
const {
Tools,
ContentTypes,
@ -170,7 +170,7 @@ async function processRequiredActions(client, requiredActions) {
requiredActions,
);
const tools = requiredActions.map((action) => action.tool);
const loadedTools = await loadTools({
const { loadedTools } = await loadTools({
user: client.req.user.id,
model: client.req.body.model ?? 'gpt-4o-mini',
tools,
@ -183,7 +183,6 @@ async function processRequiredActions(client, requiredActions) {
fileStrategy: client.req.app.locals.fileStrategy,
returnMetadata: true,
},
skipSpecs: true,
});
const ToolMap = loadedTools.reduce((map, tool) => {
@ -378,21 +377,21 @@ async function loadAgentTools({ req, agent_id, tools, tool_resources, openAIApiK
if (!tools || tools.length === 0) {
return {};
}
const loadedTools = await loadTools({
const { loadedTools, toolContextMap } = await loadTools({
user: req.user.id,
// model: req.body.model ?? 'gpt-4o-mini',
tools,
functions: true,
isAgent: agent_id != null,
options: {
req,
openAIApiKey,
tool_resources,
returnMetadata: true,
processFileURL,
uploadImageBuffer,
returnMetadata: true,
fileStrategy: req.app.locals.fileStrategy,
},
skipSpecs: true,
});
const agentTools = [];
@ -403,16 +402,19 @@ async function loadAgentTools({ req, agent_id, tools, tool_resources, openAIApiK
continue;
}
const toolInstance = toolFn(
async (...args) => {
return tool['_call'](...args);
},
{
name: tool.name,
description: tool.description,
schema: tool.schema,
},
);
const toolDefinition = {
name: tool.name,
schema: tool.schema,
description: tool.description,
};
if (imageGenTools.has(tool.name)) {
toolDefinition.responseFormat = 'content_and_artifact';
}
const toolInstance = toolFn(async (...args) => {
return tool['_call'](...args);
}, toolDefinition);
agentTools.push(toolInstance);
}
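The loop above re-wraps each loaded tool with `tool()` from `@langchain/core/tools`, attaching `responseFormat: 'content_and_artifact'` only for image-generation tools so they can return text plus an artifact. The same wrapping, extracted into a standalone sketch:

```js
const { tool } = require('@langchain/core/tools');

function wrapLoadedTool(loadedTool, imageGenTools) {
  const toolDefinition = {
    name: loadedTool.name,
    schema: loadedTool.schema,
    description: loadedTool.description,
  };
  if (imageGenTools.has(loadedTool.name)) {
    // Lets the tool return [content, artifact] tuples instead of plain strings.
    toolDefinition.responseFormat = 'content_and_artifact';
  }
  return tool(async (...args) => loadedTool._call(...args), toolDefinition);
}
```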
@ -476,6 +478,7 @@ async function loadAgentTools({ req, agent_id, tools, tool_resources, openAIApiK
return {
tools: agentTools,
toolContextMap,
};
}

View file

@ -32,17 +32,20 @@ async function loadDefaultInterface(config, configDefaults, roleName = SystemRol
bookmarks: interfaceConfig?.bookmarks ?? defaults.bookmarks,
prompts: interfaceConfig?.prompts ?? defaults.prompts,
multiConvo: interfaceConfig?.multiConvo ?? defaults.multiConvo,
agents: interfaceConfig?.agents ?? defaults.agents,
});
await updateAccessPermissions(roleName, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: loadedInterface.prompts },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: loadedInterface.bookmarks },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: loadedInterface.multiConvo },
[PermissionTypes.AGENTS]: { [Permissions.USE]: loadedInterface.agents },
});
await updateAccessPermissions(SystemRoles.ADMIN, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: loadedInterface.prompts },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: loadedInterface.bookmarks },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: loadedInterface.multiConvo },
[PermissionTypes.AGENTS]: { [Permissions.USE]: loadedInterface.agents },
});
let i = 0;
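The `agents` interface flag follows the same path as prompts, bookmarks, and multiConvo: it lands in `loadedInterface` and is forwarded to `updateAccessPermissions` for both roles. A minimal sketch of that mapping (assuming the same constants used above):

```js
const { PermissionTypes, Permissions } = require('librechat-data-provider');

function buildAgentPermissionUpdate(loadedInterface) {
  return {
    [PermissionTypes.AGENTS]: { [Permissions.USE]: loadedInterface.agents },
  };
}

// buildAgentPermissionUpdate({ agents: true })
// -> { [PermissionTypes.AGENTS]: { [Permissions.USE]: true } }
```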

View file

@ -7,8 +7,15 @@ jest.mock('~/models/Role', () => ({
}));
describe('loadDefaultInterface', () => {
it('should call updateAccessPermissions with the correct parameters when prompts and bookmarks are true', async () => {
const config = { interface: { prompts: true, bookmarks: true } };
it('should call updateAccessPermissions with the correct parameters when permission types are true', async () => {
const config = {
interface: {
prompts: true,
bookmarks: true,
multiConvo: true,
agents: true,
},
};
const configDefaults = { interface: {} };
await loadDefaultInterface(config, configDefaults);
@ -16,12 +23,20 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: true },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: true },
});
});
it('should call updateAccessPermissions with false when prompts and bookmarks are false', async () => {
const config = { interface: { prompts: false, bookmarks: false } };
it('should call updateAccessPermissions with false when permission types are false', async () => {
const config = {
interface: {
prompts: false,
bookmarks: false,
multiConvo: false,
agents: false,
},
};
const configDefaults = { interface: {} };
await loadDefaultInterface(config, configDefaults);
@ -29,11 +44,12 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: false },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: false },
[PermissionTypes.AGENTS]: { [Permissions.USE]: false },
});
});
it('should call updateAccessPermissions with undefined when prompts and bookmarks are not specified in config', async () => {
it('should call updateAccessPermissions with undefined when permission types are not specified in config', async () => {
const config = {};
const configDefaults = { interface: {} };
@ -43,11 +59,19 @@ describe('loadDefaultInterface', () => {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
});
});
it('should call updateAccessPermissions with undefined when prompts and bookmarks are explicitly undefined', async () => {
const config = { interface: { prompts: undefined, bookmarks: undefined } };
it('should call updateAccessPermissions with undefined when permission types are explicitly undefined', async () => {
const config = {
interface: {
prompts: undefined,
bookmarks: undefined,
multiConvo: undefined,
agents: undefined,
},
};
const configDefaults = { interface: {} };
await loadDefaultInterface(config, configDefaults);
@ -56,11 +80,19 @@ describe('loadDefaultInterface', () => {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
});
});
it('should call updateAccessPermissions with mixed values for prompts and bookmarks', async () => {
const config = { interface: { prompts: true, bookmarks: false } };
it('should call updateAccessPermissions with mixed values for permission types', async () => {
const config = {
interface: {
prompts: true,
bookmarks: false,
multiConvo: undefined,
agents: true,
},
};
const configDefaults = { interface: {} };
await loadDefaultInterface(config, configDefaults);
@ -69,19 +101,28 @@ describe('loadDefaultInterface', () => {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: true },
});
});
it('should call updateAccessPermissions with true when config is undefined', async () => {
const config = undefined;
const configDefaults = { interface: { prompts: true, bookmarks: true } };
const configDefaults = {
interface: {
prompts: true,
bookmarks: true,
multiConvo: true,
agents: true,
},
};
await loadDefaultInterface(config, configDefaults);
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: true },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: true },
});
});
@ -95,6 +136,7 @@ describe('loadDefaultInterface', () => {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
});
});
@ -108,6 +150,7 @@ describe('loadDefaultInterface', () => {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: false },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
});
});
@ -121,11 +164,19 @@ describe('loadDefaultInterface', () => {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
});
});
it('should call updateAccessPermissions with all interface options including multiConvo', async () => {
const config = { interface: { prompts: true, bookmarks: false, multiConvo: true } };
const config = {
interface: {
prompts: true,
bookmarks: false,
multiConvo: true,
agents: false,
},
};
const configDefaults = { interface: {} };
await loadDefaultInterface(config, configDefaults);
@ -134,12 +185,20 @@ describe('loadDefaultInterface', () => {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: false },
});
});
it('should use default values for multiConvo when config is undefined', async () => {
const config = undefined;
const configDefaults = { interface: { prompts: true, bookmarks: true, multiConvo: false } };
const configDefaults = {
interface: {
prompts: true,
bookmarks: true,
multiConvo: false,
agents: undefined,
},
};
await loadDefaultInterface(config, configDefaults);
@ -147,6 +206,7 @@ describe('loadDefaultInterface', () => {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: true },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: false },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
});
});
});

View file

@ -196,14 +196,11 @@ function generateConfig(key, baseURL, endpoint) {
if (agents) {
config.capabilities = [
AgentCapabilities.execute_code,
AgentCapabilities.file_search,
AgentCapabilities.actions,
AgentCapabilities.tools,
];
if (key === 'EXPERIMENTAL_RUN_CODE') {
config.capabilities.push(AgentCapabilities.execute_code);
}
}
if (assistants && endpoint === EModelEndpoint.azureAssistants) {