LibreChat/api/models/Agent.js
Danny Avila 6279ea8dd7
🛸 feat: Remote Agent Access with External API Support (#11503)
* 🪪 feat: Microsoft Graph Access Token Placeholder for MCP Servers (#10867)

* feat: MCP Graph Token env var

* Addressing Copilot remarks

* Addressed Copilot review remarks

* Fixed GraphTokenService mock in MCP test suite

* fix: remove unnecessary type check and cast in resolveGraphTokensInRecord

* ci: add Graph Token integration tests in MCPManager

* refactor: update user type definitions to use Partial<IUser> in multiple functions

* test: enhance MCP tests for graph token processing and user placeholder resolution

- Added comprehensive tests to validate the interaction between preProcessGraphTokens and processMCPEnv.
- Ensured correct resolution of graph tokens and user placeholders in various configurations.
- Mocked OIDC utilities to facilitate testing of token extraction and validation.
- Verified that original options remain unchanged after processing.

* chore: import order

* chore: imports

---------

Co-authored-by: Danny Avila <danny@librechat.ai>

* WIP: OpenAI-compatible API for LibreChat agents

- Added OpenAIChatCompletionController for handling chat completions.
- Introduced ListModelsController and GetModelController for listing and retrieving agent details.
- Created routes for OpenAI API endpoints, including /v1/chat/completions and /v1/models.
- Developed event handlers for streaming responses in OpenAI format.
- Implemented request validation and error handling for API interactions.
- Integrated content aggregation and response formatting to align with OpenAI specifications.

This commit establishes a foundational API for interacting with LibreChat agents in a manner compatible with OpenAI's chat completion interface.
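
An illustrative request against the chat completions route described above (the base URL, auth header, and agent ID below are assumptions for the sketch, not values defined by this commit):

  // Request body follows the OpenAI chat-completions convention; here the agent ID is
  // assumed to stand in for the model name, per the ListModels/GetModel controllers above.
  const response = await fetch('https://librechat.example.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer <agent-api-key>',
    },
    body: JSON.stringify({
      model: 'agent_abc123',
      messages: [{ role: 'user', content: 'Hello' }],
      stream: true,
    }),
  });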

* refactor: OpenAI-spec content aggregation for improved performance and clarity

* fix: OpenAI chat completion controller with safe user handling for correct tool loading

* refactor: Remove conversation ID from OpenAI response context and related handlers

* refactor: OpenAI chat completion handling with streaming support

- Introduced a lightweight tracker for streaming responses, allowing for efficient tracking of emitted content and usage metadata.
- Updated the OpenAIChatCompletionController to utilize the new tracker, improving the handling of streaming and non-streaming responses.
- Refactored event handlers to accommodate the new streaming logic, ensuring proper management of tool calls and content aggregation.
- Adjusted response handling to streamline error reporting during streaming sessions.

* WIP: Open Responses API with core service, types, and handlers

- Added Open Responses API module with comprehensive types and enums.
- Implemented core service for processing requests, including validation and input conversion.
- Developed event handlers for streaming responses and non-streaming aggregation.
- Established response building logic and error handling mechanisms.
- Created detailed types for input and output content, ensuring compliance with Open Responses specification.

* feat: Implement response storage and retrieval in Open Responses API

- Added functionality to save user input messages and assistant responses to the database when the `store` flag is set to true.
- Introduced a new endpoint to retrieve stored responses by ID, allowing users to access previous interactions.
- Enhanced the response creation process to include database operations for conversation and message storage.
- Implemented tests to validate the storage and retrieval of responses, ensuring correct behavior for both existing and non-existent response IDs.

* refactor: Open Responses API with additional token tracking and validation

- Added support for tracking cached tokens in response usage, improving token management.
- Updated response structure to include new properties for top log probabilities and detailed usage metrics.
- Enhanced tests to validate the presence and types of new properties in API responses, ensuring compliance with updated specifications.
- Refactored response handling to accommodate new fields and improve overall clarity and performance.

* refactor: Update reasoning event handlers and types for consistency

- Renamed reasoning text events to simplify naming conventions, changing `emitReasoningTextDelta` to `emitReasoningDelta` and `emitReasoningTextDone` to `emitReasoningDone`.
- Updated event types in the API to reflect the new naming, ensuring consistency across the codebase.
- Added `logprobs` property to output events for enhanced tracking of log probabilities.

* feat: Add validation for streaming events in Open Responses API tests

* feat: Implement response.created event in Open Responses API

- Added emitResponseCreated function to emit the response.created event as the first event in the streaming sequence, adhering to the Open Responses specification.
- Updated createResponse function to emit response.created followed by response.in_progress.
- Enhanced tests to validate the order of emitted events, ensuring response.created is triggered before response.in_progress.
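
A minimal sketch of the required ordering (the helper name and SSE framing are assumptions for illustration, not the actual handler API):

  function openResponseStream(res, response) {
    const write = (event) => res.write(`event: ${event.type}\ndata: ${JSON.stringify(event)}\n\n`);
    // response.created must be the first event in the stream, followed by response.in_progress.
    write({ type: 'response.created', response });
    write({ type: 'response.in_progress', response });
  }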

* feat: Responses API with attachment event handling

- Introduced `createResponsesToolEndCallback` to handle attachment events in the Responses API, emitting `librechat:attachment` events as per the Open Responses extension specification.
- Updated the `createResponse` function to utilize the new callback for processing tool outputs and emitting attachments during streaming.
- Added helper functions for writing attachment events and defined types for attachment data, ensuring compatibility with the Open Responses protocol.
- Enhanced tests to validate the integration of attachment events within the Responses API workflow.

* WIP: remote agent auth

* fix: Improve loading state handling in AgentApiKeys component

- Updated the rendering logic to conditionally display loading spinner and API keys based on the loading state.
- Removed unnecessary imports and streamlined the component for better readability.

* refactor: Update API key access handling in routes

- Replaced `checkAccess` with `generateCheckAccess` for improved access control.
- Consolidated access checks into a single `checkApiKeyAccess` function, enhancing code readability and maintainability.
- Streamlined route definitions for creating, listing, retrieving, and deleting API keys.

* fix: Add permission handling for REMOTE_AGENT resource type

* feat: Enhance permission handling for REMOTE_AGENT resources

- Updated the deleteAgent and deleteUserAgents functions to handle permissions for both AGENT and REMOTE_AGENT resource types.
- Introduced new functions to enrich REMOTE_AGENT principals and backfill permissions for AGENT owners.
- Modified createAgentHandler and duplicateAgentHandler to grant permissions for REMOTE_AGENT alongside AGENT.
- Added utility functions for retrieving effective permissions for REMOTE_AGENT resources, ensuring consistent access control across the application.

* refactor: Rename and update roles for remote agent access

- Changed role name from API User to Editor in translation files for clarity.
- Updated default editor role ID from REMOTE_AGENT_USER to REMOTE_AGENT_EDITOR in resource configurations.
- Adjusted role localization to reflect the new Editor role.
- Modified access permissions to align with the updated role definitions across the application.

* feat: Introduce remote agent permissions and update access handling

- Added support for REMOTE_AGENTS in permission schemas, including use, create, share, and share_public permissions.
- Updated the interface configuration to include remote agent settings.
- Modified middleware and API key access checks to align with the new remote agent permission structure.
- Enhanced role defaults to incorporate remote agent permissions, ensuring consistent access control across the application.

* refactor: Update AgentApiKeys component and permissions handling

- Refactored the AgentApiKeys component to improve structure and readability, including the introduction of ApiKeysContent for better separation of concerns.
- Updated CreateKeyDialog to accept an onKeyCreated callback, enhancing its functionality.
- Adjusted permission checks in Data component to use REMOTE_AGENTS and USE permissions, aligning with recent permission schema changes.
- Enhanced loading state handling and dialog management for a smoother user experience.

* refactor: Update remote agent access checks in API routes

- Replaced existing access checks with `generateCheckAccess` for remote agents in the API keys and agents routes.
- Introduced specific permission checks for creating, listing, retrieving, and deleting API keys, enhancing access control.
- Improved code structure by consolidating permission handling for remote agents across multiple routes.

* fix: Correct query parameters in ApiKeysContent component

- Updated the useGetAgentApiKeysQuery call to include an object for the enabled parameter, ensuring proper functionality when the component is open.
- This change improves the handling of API key retrieval based on the component's open state.

* feat: Implement remote agents permissions and update API routes

- Added new API route for updating remote agents permissions, enhancing role management capabilities.
- Introduced remote agents permissions handling in the AgentApiKeys component, including a dedicated settings dialog.
- Updated localization files to include new remote agents permission labels for better user experience.
- Refactored data provider to support remote agents permissions updates, ensuring consistent access control across the application.

* feat: Add remote agents permissions to role schema and interface

- Introduced new permissions for REMOTE_AGENTS in the role schema, including USE, CREATE, SHARE, and SHARE_PUBLIC.
- Updated the IRole interface to reflect the new remote agents permissions structure, enhancing role management capabilities.

* feat: Add remote agents settings button to API keys dialog

* feat: Update AgentFooter to include remote agent sharing permissions

- Refactored access checks to incorporate permissions for sharing remote agents.
- Enhanced conditional rendering logic to allow sharing by users with remote agent permissions.
- Improved loading state handling for remote agent permissions, ensuring a smoother user experience.

* refactor: Update API key creation access check and localization strings

- Replaced the access check for creating API keys to use the existing remote agents access check.
- Updated localization strings to correct the descriptions for remote agent permissions, ensuring clarity in user interface.

* fix: resource permission mapping to include remote agents

- Changed the resourceToPermissionMap to use a Partial<Record> for better flexibility.
- Added mapping for REMOTE_AGENT permissions, enhancing the sharing capabilities for remote agents.

* feat: Implement remote access checks for agent models

- Enhanced ListModelsController and GetModelController to include checks for user permissions on remote agents.
- Integrated findAccessibleResources to filter agents based on VIEW permission for REMOTE_AGENT.
- Updated response handling to ensure users can only access agents they have permissions for, improving security and access control.

* fix: Update user parameter type in processUserPlaceholders function

- Changed the user parameter type in the processUserPlaceholders function from Partial<Partial<IUser>> to Partial<IUser> for improved type clarity and consistency.

* refactor: Simplify integration test structure by removing conditional describe

- Replaced conditional describeWithApiKey with a standard describe for all integration tests in responses.spec.js.
- This change enhances test clarity and ensures all tests are executed consistently, regardless of the SKIP_INTEGRATION_TESTS flag.

* test: Update AgentFooter tests to reflect new grant access dialog ID

- Changed test IDs for the grant access dialog in AgentFooter tests to include the resource type, ensuring accurate identification in the test cases.
- This update improves test clarity and aligns with recent changes in the component's implementation.

* test: Enhance integration tests for Open Responses API

- Updated integration tests in responses.spec.js to utilize an authRequest helper for consistent authorization handling across all test cases.
- Introduced a test user and API key creation to improve test setup and ensure proper permission checks for remote agents.
- Added checks for existing access roles and created necessary roles if they do not exist, enhancing test reliability and coverage.

* feat: Extend accessRole schema to include remoteAgent resource type

- Updated the accessRole schema to add 'remoteAgent' to the resourceType enum, enhancing the flexibility of role assignments and permissions management.

* test: Refactor test setup to create a minimal Express app for responses routes, enhancing test structure and maintainability

* test: Enhance abort.spec.js by mocking additional modules for improved test isolation

- Updated the test setup in abort.spec.js to include actual implementations of '@librechat/data-schemas' and '@librechat/api' while maintaining mock functionality.
- This change improves test reliability and ensures that the tests are more representative of the actual module behavior.

* refactor: Update conversation ID generation to use UUID

- Replaced the nanoid with uuidv4 for generating conversation IDs in the createResponse function, enhancing uniqueness and consistency in ID generation.

* test: Add remote agent access roles to AccessRole model tests

- Included additional access roles for remote agents (REMOTE_AGENT_EDITOR, REMOTE_AGENT_OWNER, REMOTE_AGENT_VIEWER) in the AccessRole model tests to ensure comprehensive coverage of role assignments and permissions management.

* chore: Add deletion of user agent API keys in user deletion process

- Updated the user deletion process in UserController and delete-user.js to include the removal of user agent API keys, ensuring comprehensive cleanup of user data upon account deletion.

* test: Add remote agents permissions to permissions.spec.ts

- Enhanced the permissions tests by including comprehensive permission settings for remote agents across various scenarios, ensuring accurate validation of access controls for remote agent roles.

* chore: Update remote agents translations for clarity and consistency

- Removed outdated remote agents translation entries and added revised entries to improve clarity on API key creation and sharing permissions for remote agents. This enhances user understanding of the available functionalities.

* feat: Add indexing and TTL for agent API keys

- Introduced an index on the `key` field for improved query performance.
- Added a TTL index on the `expiresAt` field to enable automatic cleanup of expired API keys, ensuring efficient management of stored keys.
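
A minimal mongoose sketch of the indexing pattern described above (the schema and field layout are assumptions for illustration, not the actual LibreChat model):

  const { Schema } = require('mongoose');
  const agentApiKeySchema = new Schema({
    key: { type: String, required: true },
    expiresAt: { type: Date },
  });
  // Secondary index to speed up lookups by key
  agentApiKeySchema.index({ key: 1 });
  // TTL index: MongoDB removes documents once expiresAt has passed
  agentApiKeySchema.index({ expiresAt: 1 }, { expireAfterSeconds: 0 });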

* chore: Update API route documentation for clarity

- Revised comments in the agents route file to clarify the handling of API key authentication.
- Removed outdated endpoint listings to streamline the documentation and focus on current functionality.

---------

Co-authored-by: Max Sanna <max@maxsanna.com>
2026-01-28 17:44:33 -05:00


const mongoose = require('mongoose');
const crypto = require('node:crypto');
const { logger } = require('@librechat/data-schemas');
const { getCustomEndpointConfig } = require('@librechat/api');
const {
Tools,
SystemRoles,
ResourceType,
actionDelimiter,
isAgentsEndpoint,
isEphemeralAgentId,
encodeEphemeralAgentId,
} = require('librechat-data-provider');
const { mcp_all, mcp_delimiter } = require('librechat-data-provider').Constants;
const {
removeAgentFromAllProjects,
removeAgentIdsFromProject,
addAgentIdsToProject,
} = require('./Project');
const { removeAllPermissions } = require('~/server/services/PermissionService');
const { getMCPServerTools } = require('~/server/services/Config');
const { Agent, AclEntry, User } = require('~/db/models');
const { getActions } = require('./Action');
/**
* Extracts unique MCP server names from tools array
* Tools format: "toolName_mcp_serverName" or "sys__server__sys_mcp_serverName"
* @param {string[]} tools - Array of tool identifiers
* @returns {string[]} Array of unique MCP server names
*/
const extractMCPServerNames = (tools) => {
if (!tools || !Array.isArray(tools)) {
return [];
}
const serverNames = new Set();
for (const tool of tools) {
if (!tool || !tool.includes(mcp_delimiter)) {
continue;
}
const parts = tool.split(mcp_delimiter);
if (parts.length >= 2) {
serverNames.add(parts[parts.length - 1]);
}
}
return Array.from(serverNames);
};
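// Illustrative example (assumes the mcp_delimiter constant resolves to `_mcp_`; the
// actual value comes from librechat-data-provider at runtime):
//   extractMCPServerNames(['search_mcp_github', 'fetch_mcp_github', 'execute_code'])
//   yields ['github']: tools without the delimiter are skipped, and the Set collapses
//   duplicate server names.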
/**
* Create an agent with the provided data.
* @param {Object} agentData - The agent data to create.
* @returns {Promise<Agent>} The created agent document as a plain object.
* @throws {Error} If the agent creation fails.
*/
const createAgent = async (agentData) => {
const { author: _author, ...versionData } = agentData;
const timestamp = new Date();
const initialAgentData = {
...agentData,
versions: [
{
...versionData,
createdAt: timestamp,
updatedAt: timestamp,
},
],
category: agentData.category || 'general',
mcpServerNames: extractMCPServerNames(agentData.tools),
};
return (await Agent.create(initialAgentData)).toObject();
};
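// Usage sketch (field values are illustrative only): the initial version entry mirrors
// the agent's starting state, so a freshly created agent has versions.length === 1,
// a default category of 'general', and mcpServerNames derived from its tools.
//   const agent = await createAgent({
//     id: 'agent_abc123',
//     name: 'Docs Helper',
//     provider: 'openai',
//     model: 'gpt-4o-mini',
//     tools: ['web_search_mcp_my-server'],
//     author: userId,
//   });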
/**
* Get an agent document based on the provided ID.
*
* @param {Object} searchParameter - The search parameters to find the agent.
* @param {string} searchParameter.id - The ID of the agent to retrieve.
* @param {string} [searchParameter.author] - The user ID of the agent's author.
* @returns {Promise<Agent|null>} The agent document as a plain object, or null if not found.
*/
const getAgent = async (searchParameter) => await Agent.findOne(searchParameter).lean();
/**
* Get multiple agent documents based on the provided search parameters.
*
* @param {Object} searchParameter - The search parameters to find agents.
* @returns {Promise<Agent[]>} Array of agent documents as plain objects.
*/
const getAgents = async (searchParameter) => await Agent.find(searchParameter).lean();
/**
* Build an ephemeral agent from the request context (model spec, ephemeral agent flags, and MCP servers)
*
* @param {Object} params
* @param {ServerRequest} params.req
* @param {string} params.spec
* @param {string} params.endpoint
* @param {import('@librechat/agents').ClientOptions} [params.model_parameters]
* @returns {Promise<Agent>} The constructed ephemeral agent object (never persisted).
*/
const loadEphemeralAgent = async ({ req, spec, endpoint, model_parameters: _m }) => {
const { model, ...model_parameters } = _m;
const modelSpecs = req.config?.modelSpecs?.list;
/** @type {TModelSpec | null} */
let modelSpec = null;
if (spec != null && spec !== '') {
modelSpec = modelSpecs?.find((s) => s.name === spec) || null;
}
/** @type {TEphemeralAgent | null} */
const ephemeralAgent = req.body.ephemeralAgent;
const mcpServers = new Set(ephemeralAgent?.mcp);
const userId = req.user?.id; // note: userId cannot be undefined at runtime
if (modelSpec?.mcpServers) {
for (const mcpServer of modelSpec.mcpServers) {
mcpServers.add(mcpServer);
}
}
/** @type {string[]} */
const tools = [];
if (ephemeralAgent?.execute_code === true || modelSpec?.executeCode === true) {
tools.push(Tools.execute_code);
}
if (ephemeralAgent?.file_search === true || modelSpec?.fileSearch === true) {
tools.push(Tools.file_search);
}
if (ephemeralAgent?.web_search === true || modelSpec?.webSearch === true) {
tools.push(Tools.web_search);
}
const addedServers = new Set();
if (mcpServers.size > 0) {
for (const mcpServer of mcpServers) {
if (addedServers.has(mcpServer)) {
continue;
}
const serverTools = await getMCPServerTools(userId, mcpServer);
if (!serverTools) {
tools.push(`${mcp_all}${mcp_delimiter}${mcpServer}`);
addedServers.add(mcpServer);
continue;
}
tools.push(...Object.keys(serverTools));
addedServers.add(mcpServer);
}
}
const instructions = req.body.promptPrefix;
// Get endpoint config for modelDisplayLabel fallback
const appConfig = req.config;
let endpointConfig = appConfig?.endpoints?.[endpoint];
if (!isAgentsEndpoint(endpoint) && !endpointConfig) {
try {
endpointConfig = getCustomEndpointConfig({ endpoint, appConfig });
} catch (err) {
logger.error('[loadEphemeralAgent] Error getting custom endpoint config', err);
}
}
// For ephemeral agents, use modelLabel if provided, then model spec's label,
// then modelDisplayLabel from endpoint config, otherwise empty string to show model name
const sender =
model_parameters?.modelLabel ?? modelSpec?.label ?? endpointConfig?.modelDisplayLabel ?? '';
// Encode ephemeral agent ID with endpoint, model, and computed sender for display
const ephemeralId = encodeEphemeralAgentId({ endpoint, model, sender });
const result = {
id: ephemeralId,
instructions,
provider: endpoint,
model_parameters,
model,
tools,
};
if (ephemeralAgent?.artifacts) {
result.artifacts = ephemeralAgent.artifacts;
}
return result;
};
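// Note: the object built above is never persisted; its id is an encoded tuple of
// endpoint, model, and display sender (see encodeEphemeralAgentId), and its tool list
// is rebuilt on every request from the ephemeralAgent flags plus any MCP servers named
// in the request body or the matched model spec.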
/**
* Load an agent based on the provided ID
*
* @param {Object} params
* @param {ServerRequest} params.req
* @param {string} params.spec
* @param {string} params.agent_id
* @param {string} params.endpoint
* @param {import('@librechat/agents').ClientOptions} [params.model_parameters]
* @returns {Promise<Agent|null>} The agent document as a plain object, or null if not found.
*/
const loadAgent = async ({ req, spec, agent_id, endpoint, model_parameters }) => {
if (!agent_id) {
return null;
}
if (isEphemeralAgentId(agent_id)) {
return await loadEphemeralAgent({ req, spec, endpoint, model_parameters });
}
const agent = await getAgent({
id: agent_id,
});
if (!agent) {
return null;
}
agent.version = agent.versions ? agent.versions.length : 0;
return agent;
};
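// Resolution order: ephemeral IDs are rebuilt from the request via loadEphemeralAgent;
// any other ID is read from the database and annotated with `version`, the length of
// its versions array. Null is returned only for a falsy agent_id or a missing document.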
/**
* Check if a version already exists in the versions array, excluding timestamp and author fields
* @param {Object} updateData - The update data to compare
* @param {Object} currentData - The current agent data
* @param {Array} versions - The existing versions array
* @param {string} [actionsHash] - Hash of current action metadata
* @returns {Object|null} - The matching version if found, null otherwise
*/
const isDuplicateVersion = (updateData, currentData, versions, actionsHash = null) => {
if (!versions || versions.length === 0) {
return null;
}
const excludeFields = [
'_id',
'id',
'createdAt',
'updatedAt',
'author',
'updatedBy',
'created_at',
'updated_at',
'__v',
'versions',
'actionsHash', // Exclude actionsHash from direct comparison
];
const { $push: _$push, $pull: _$pull, $addToSet: _$addToSet, ...directUpdates } = updateData;
if (Object.keys(directUpdates).length === 0 && !actionsHash) {
return null;
}
const wouldBeVersion = { ...currentData, ...directUpdates };
const lastVersion = versions[versions.length - 1];
if (actionsHash && lastVersion.actionsHash !== actionsHash) {
return null;
}
const allFields = new Set([...Object.keys(wouldBeVersion), ...Object.keys(lastVersion)]);
const importantFields = Array.from(allFields).filter((field) => !excludeFields.includes(field));
let isMatch = true;
for (const field of importantFields) {
const wouldBeValue = wouldBeVersion[field];
const lastVersionValue = lastVersion[field];
// Skip if both are falsy (undefined, null, empty string, false, 0)
if (!wouldBeValue && !lastVersionValue) {
continue;
}
// Handle arrays
if (Array.isArray(wouldBeValue) || Array.isArray(lastVersionValue)) {
// Normalize: treat undefined/null as empty array for comparison
let wouldBeArr;
if (Array.isArray(wouldBeValue)) {
wouldBeArr = wouldBeValue;
} else if (wouldBeValue == null) {
wouldBeArr = [];
} else {
wouldBeArr = [wouldBeValue];
}
let lastVersionArr;
if (Array.isArray(lastVersionValue)) {
lastVersionArr = lastVersionValue;
} else if (lastVersionValue == null) {
lastVersionArr = [];
} else {
lastVersionArr = [lastVersionValue];
}
if (wouldBeArr.length !== lastVersionArr.length) {
isMatch = false;
break;
}
// Special handling for projectIds (MongoDB ObjectIds)
if (field === 'projectIds') {
const wouldBeIds = wouldBeArr.map((id) => id.toString()).sort();
const versionIds = lastVersionArr.map((id) => id.toString()).sort();
if (!wouldBeIds.every((id, i) => id === versionIds[i])) {
isMatch = false;
break;
}
}
// Handle arrays of objects
else if (
wouldBeArr.length > 0 &&
typeof wouldBeArr[0] === 'object' &&
wouldBeArr[0] !== null
) {
const sortedWouldBe = [...wouldBeArr].map((item) => JSON.stringify(item)).sort();
const sortedVersion = [...lastVersionArr].map((item) => JSON.stringify(item)).sort();
if (!sortedWouldBe.every((item, i) => item === sortedVersion[i])) {
isMatch = false;
break;
}
} else {
const sortedWouldBe = [...wouldBeArr].sort();
const sortedVersion = [...lastVersionArr].sort();
if (!sortedWouldBe.every((item, i) => item === sortedVersion[i])) {
isMatch = false;
break;
}
}
}
// Handle objects
else if (typeof wouldBeValue === 'object' && wouldBeValue !== null) {
const lastVersionObj =
typeof lastVersionValue === 'object' && lastVersionValue !== null ? lastVersionValue : {};
// For empty objects, normalize the comparison
const wouldBeKeys = Object.keys(wouldBeValue);
const lastVersionKeys = Object.keys(lastVersionObj);
// If both are empty objects, they're equal
if (wouldBeKeys.length === 0 && lastVersionKeys.length === 0) {
continue;
}
// Otherwise do a deep comparison
if (JSON.stringify(wouldBeValue) !== JSON.stringify(lastVersionObj)) {
isMatch = false;
break;
}
}
// Handle primitive values
else {
// For primitives, handle the case where one is undefined and the other is a default value
if (wouldBeValue !== lastVersionValue) {
// Special handling for boolean false vs undefined
if (
typeof wouldBeValue === 'boolean' &&
wouldBeValue === false &&
lastVersionValue === undefined
) {
continue;
}
// Special handling for empty string vs undefined
if (
typeof wouldBeValue === 'string' &&
wouldBeValue === '' &&
lastVersionValue === undefined
) {
continue;
}
isMatch = false;
break;
}
}
}
return isMatch ? lastVersion : null;
};
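// Comparison notes: arrays are compared order-insensitively after normalization, so
// reordering `tools` alone does not produce a new version; `false` and `''` are treated
// as equal to `undefined` for primitives; and a changed actionsHash always counts as a
// change even when every direct field matches the last version.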
/**
* Update an agent with new data without overwriting existing
* properties. When an agent is updated, a snapshot of the resulting state
* is appended to the versions array (unless versioning is skipped).
*
* @param {Object} searchParameter - The search parameters to find the agent to update.
* @param {string} searchParameter.id - The ID of the agent to update.
* @param {string} [searchParameter.author] - The user ID of the agent's author.
* @param {Object} updateData - An object containing the properties to update.
* @param {Object} [options] - Optional configuration object.
* @param {string} [options.updatingUserId] - The ID of the user performing the update (used for tracking non-author updates).
* @param {boolean} [options.forceVersion] - Force creation of a new version even if no fields changed.
* @param {boolean} [options.skipVersioning] - Skip version creation entirely (useful for isolated operations like sharing).
* @returns {Promise<Agent|null>} The updated agent document as a plain object, or null if no agent matches.
* If the update would duplicate the latest version, the current document is returned unchanged.
*/
const updateAgent = async (searchParameter, updateData, options = {}) => {
const { updatingUserId = null, forceVersion = false, skipVersioning = false } = options;
const mongoOptions = { new: true, upsert: false };
const currentAgent = await Agent.findOne(searchParameter);
if (currentAgent) {
const {
__v,
_id,
id: __id,
versions,
author: _author,
...versionData
} = currentAgent.toObject();
const { $push, $pull, $addToSet, ...directUpdates } = updateData;
// Sync mcpServerNames when tools are updated
if (directUpdates.tools !== undefined) {
const mcpServerNames = extractMCPServerNames(directUpdates.tools);
directUpdates.mcpServerNames = mcpServerNames;
updateData.mcpServerNames = mcpServerNames; // Also update the original updateData
}
let actionsHash = null;
// Generate actions hash if agent has actions
if (currentAgent.actions && currentAgent.actions.length > 0) {
// Extract action IDs from the format "domain_action_id"
const actionIds = currentAgent.actions
.map((action) => {
const parts = action.split(actionDelimiter);
return parts[1]; // Get just the action ID part
})
.filter(Boolean);
if (actionIds.length > 0) {
try {
const actions = await getActions(
{
action_id: { $in: actionIds },
},
true,
); // Include sensitive data for hash
actionsHash = await generateActionMetadataHash(currentAgent.actions, actions);
} catch (error) {
logger.error('Error fetching actions for hash generation:', error);
}
}
}
const shouldCreateVersion =
!skipVersioning &&
(forceVersion || Object.keys(directUpdates).length > 0 || $push || $pull || $addToSet);
if (shouldCreateVersion) {
const duplicateVersion = isDuplicateVersion(updateData, versionData, versions, actionsHash);
if (duplicateVersion && !forceVersion) {
// No changes detected, return the current agent without creating a new version
const agentObj = currentAgent.toObject();
agentObj.version = versions.length;
return agentObj;
}
}
const versionEntry = {
...versionData,
...directUpdates,
updatedAt: new Date(),
};
// Include actions hash in version if available
if (actionsHash) {
versionEntry.actionsHash = actionsHash;
}
// Store updatedBy to track who made the change when the updating user is known
if (updatingUserId) {
versionEntry.updatedBy = new mongoose.Types.ObjectId(updatingUserId);
}
if (shouldCreateVersion) {
updateData.$push = {
...($push || {}),
versions: versionEntry,
};
}
}
return Agent.findOneAndUpdate(searchParameter, updateData, mongoOptions).lean();
};
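// Usage sketch (illustrative values): a plain field update appends a version entry
// unless it would duplicate the latest one, in which case the current document is
// returned unchanged.
//   await updateAgent(
//     { id: 'agent_abc123' },
//     { name: 'Docs Helper v2' },
//     { updatingUserId: req.user.id },
//   );
// Pass skipVersioning: true for bookkeeping-only writes (e.g. sharing/project updates)
// and forceVersion: true to snapshot even when nothing changed.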
/**
* Modifies an agent with the resource file id.
* @param {object} params
* @param {ServerRequest} params.req
* @param {string} params.agent_id
* @param {string} params.tool_resource
* @param {string} params.file_id
* @returns {Promise<Agent>} The updated agent.
*/
const addAgentResourceFile = async ({ req, agent_id, tool_resource, file_id }) => {
const searchParameter = { id: agent_id };
let agent = await getAgent(searchParameter);
if (!agent) {
throw new Error('Agent not found for adding resource file');
}
const fileIdsPath = `tool_resources.${tool_resource}.file_ids`;
await Agent.updateOne(
{
id: agent_id,
[`${fileIdsPath}`]: { $exists: false },
},
{
$set: {
[`${fileIdsPath}`]: [],
},
},
);
const updateData = {
$addToSet: {
tools: tool_resource,
[fileIdsPath]: file_id,
},
};
const updatedAgent = await updateAgent(searchParameter, updateData, {
updatingUserId: req?.user?.id,
});
if (updatedAgent) {
return updatedAgent;
} else {
throw new Error('Agent not found for adding resource file');
}
};
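// Note on the two-step write above: the first Agent.updateOne seeds an empty file_ids
// array only when the path does not exist yet, so the following $addToSet can run
// atomically without racing concurrent writers on the same tool resource.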
/**
* Removes multiple resource files from an agent using atomic operations.
* @param {object} params
* @param {string} params.agent_id
* @param {Array<{tool_resource: string, file_id: string}>} params.files
* @returns {Promise<Agent>} The updated agent.
* @throws {Error} If the agent is not found or update fails.
*/
const removeAgentResourceFiles = async ({ agent_id, files }) => {
const searchParameter = { id: agent_id };
// Group files to remove by resource
const filesByResource = files.reduce((acc, { tool_resource, file_id }) => {
if (!acc[tool_resource]) {
acc[tool_resource] = [];
}
acc[tool_resource].push(file_id);
return acc;
}, {});
// Step 1: Atomically remove file IDs using $pull
const pullOps = {};
const resourcesToCheck = new Set();
for (const [resource, fileIds] of Object.entries(filesByResource)) {
const fileIdsPath = `tool_resources.${resource}.file_ids`;
pullOps[fileIdsPath] = { $in: fileIds };
resourcesToCheck.add(resource);
}
const updatePullData = { $pull: pullOps };
const agentAfterPull = await Agent.findOneAndUpdate(searchParameter, updatePullData, {
new: true,
}).lean();
if (!agentAfterPull) {
// Agent might have been deleted concurrently, or never existed.
// Check if it existed before trying to throw.
const agentExists = await getAgent(searchParameter);
if (!agentExists) {
throw new Error('Agent not found for removing resource files');
}
// If it existed but findOneAndUpdate returned null, something else went wrong.
throw new Error('Failed to update agent during file removal (pull step)');
}
// Return the agent state directly after the $pull operation.
// Skipping the $unset step for now to simplify and test core $pull atomicity.
// Empty arrays might remain, but the removal itself should be correct.
return agentAfterPull;
};
/**
* Deletes an agent based on the provided ID.
*
* @param {Object} searchParameter - The search parameters to find the agent to delete.
* @param {string} searchParameter.id - The ID of the agent to delete.
* @param {string} [searchParameter.author] - The user ID of the agent's author.
* @returns {Promise<Agent|null>} The deleted agent document, or null if no agent matched.
*/
const deleteAgent = async (searchParameter) => {
const agent = await Agent.findOneAndDelete(searchParameter);
if (agent) {
await removeAgentFromAllProjects(agent.id);
await Promise.all([
removeAllPermissions({
resourceType: ResourceType.AGENT,
resourceId: agent._id,
}),
removeAllPermissions({
resourceType: ResourceType.REMOTE_AGENT,
resourceId: agent._id,
}),
]);
try {
await Agent.updateMany({ 'edges.to': agent.id }, { $pull: { edges: { to: agent.id } } });
} catch (error) {
logger.error('[deleteAgent] Error removing agent from handoff edges', error);
}
try {
await User.updateMany(
{ 'favorites.agentId': agent.id },
{ $pull: { favorites: { agentId: agent.id } } },
);
} catch (error) {
logger.error('[deleteAgent] Error removing agent from user favorites', error);
}
}
return agent;
};
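// Note: permissions are cleared for both AGENT and REMOTE_AGENT resource types because
// remote-agent API access is granted against the same underlying agent _id; project
// membership, handoff edges, and user favorites referencing the agent are also cleaned up.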
/**
* Deletes all agents created by a specific user.
* @param {string} userId - The ID of the user whose agents should be deleted.
* @returns {Promise<void>} A promise that resolves when all user agents have been deleted.
*/
const deleteUserAgents = async (userId) => {
try {
const userAgents = await getAgents({ author: userId });
if (userAgents.length === 0) {
return;
}
const agentIds = userAgents.map((agent) => agent.id);
const agentObjectIds = userAgents.map((agent) => agent._id);
for (const agentId of agentIds) {
await removeAgentFromAllProjects(agentId);
}
await AclEntry.deleteMany({
resourceType: { $in: [ResourceType.AGENT, ResourceType.REMOTE_AGENT] },
resourceId: { $in: agentObjectIds },
});
try {
await User.updateMany(
{ 'favorites.agentId': { $in: agentIds } },
{ $pull: { favorites: { agentId: { $in: agentIds } } } },
);
} catch (error) {
logger.error('[deleteUserAgents] Error removing agents from user favorites', error);
}
await Agent.deleteMany({ author: userId });
} catch (error) {
logger.error('[deleteUserAgents] General error:', error);
}
};
/**
* Get agents by accessible IDs with optional cursor-based pagination.
* @param {Object} params - The parameters for getting accessible agents.
* @param {Array} [params.accessibleIds] - Array of agent ObjectIds the user has ACL access to.
* @param {Object} [params.otherParams] - Additional query parameters (including author filter).
* @param {number} [params.limit] - Number of agents to return (max 100). If not provided, returns all agents.
* @param {string} [params.after] - Pagination cursor (base64-encoded JSON containing updatedAt and _id); returns agents after this cursor.
* @returns {Promise<Object>} A promise that resolves to an object containing the agents data and pagination info.
*/
const getListAgentsByAccess = async ({
accessibleIds = [],
otherParams = {},
limit = null,
after = null,
}) => {
const isPaginated = limit !== null && limit !== undefined;
const normalizedLimit = isPaginated ? Math.min(Math.max(1, parseInt(limit) || 20), 100) : null;
// Build base query combining ACL accessible agents with other filters
const baseQuery = { ...otherParams, _id: { $in: accessibleIds } };
// Add cursor condition
if (after) {
try {
const cursor = JSON.parse(Buffer.from(after, 'base64').toString('utf8'));
const { updatedAt, _id } = cursor;
const cursorCondition = {
$or: [
{ updatedAt: { $lt: new Date(updatedAt) } },
{ updatedAt: new Date(updatedAt), _id: { $gt: new mongoose.Types.ObjectId(_id) } },
],
};
// Merge cursor condition with base query
if (Object.keys(baseQuery).length > 0) {
baseQuery.$and = [{ ...baseQuery }, cursorCondition];
// Remove the original conditions from baseQuery to avoid duplication
Object.keys(baseQuery).forEach((key) => {
if (key !== '$and') delete baseQuery[key];
});
} else {
Object.assign(baseQuery, cursorCondition);
}
} catch (error) {
logger.warn('Invalid cursor:', error.message);
}
}
let query = Agent.find(baseQuery, {
id: 1,
_id: 1,
name: 1,
avatar: 1,
author: 1,
projectIds: 1,
description: 1,
updatedAt: 1,
category: 1,
support_contact: 1,
is_promoted: 1,
}).sort({ updatedAt: -1, _id: 1 });
// Only apply limit if pagination is requested
if (isPaginated) {
query = query.limit(normalizedLimit + 1);
}
const agents = await query.lean();
const hasMore = isPaginated ? agents.length > normalizedLimit : false;
const data = (isPaginated ? agents.slice(0, normalizedLimit) : agents).map((agent) => {
if (agent.author) {
agent.author = agent.author.toString();
}
return agent;
});
// Generate next cursor only if paginated
let nextCursor = null;
if (isPaginated && hasMore && data.length > 0) {
const lastAgent = agents[normalizedLimit - 1];
nextCursor = Buffer.from(
JSON.stringify({
updatedAt: lastAgent.updatedAt.toISOString(),
_id: lastAgent._id.toString(),
}),
).toString('base64');
}
return {
object: 'list',
data,
first_id: data.length > 0 ? data[0].id : null,
last_id: data.length > 0 ? data[data.length - 1].id : null,
has_more: hasMore,
after: nextCursor,
};
};
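// Cursor sketch (illustrative values): `after` is the base64 encoding of
//   { "updatedAt": "2024-01-01T00:00:00.000Z", "_id": "65a1f0c2e4b0a1b2c3d4e5f6" }
// so a caller pages by feeding the returned cursor back verbatim:
//   const page1 = await getListAgentsByAccess({ accessibleIds, limit: 20 });
//   const page2 = await getListAgentsByAccess({ accessibleIds, limit: 20, after: page1.after });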
/**
* Updates the projects associated with an agent, adding and removing project IDs as specified.
* This function also updates the corresponding projects to include or exclude the agent ID.
*
* @param {Object} params - Parameters for updating the agent's projects.
* @param {IUser} params.user - The user performing the update.
* @param {string} params.agentId - The ID of the agent to update.
* @param {string[]} [params.projectIds] - Array of project IDs to add to the agent.
* @param {string[]} [params.removeProjectIds] - Array of project IDs to remove from the agent.
* @returns {Promise<MongoAgent>} The updated agent document.
* @throws {Error} If there's an error updating the agent or projects.
*/
const updateAgentProjects = async ({ user, agentId, projectIds, removeProjectIds }) => {
const updateOps = {};
if (removeProjectIds && removeProjectIds.length > 0) {
for (const projectId of removeProjectIds) {
await removeAgentIdsFromProject(projectId, [agentId]);
}
updateOps.$pull = { projectIds: { $in: removeProjectIds } };
}
if (projectIds && projectIds.length > 0) {
for (const projectId of projectIds) {
await addAgentIdsToProject(projectId, [agentId]);
}
updateOps.$addToSet = { projectIds: { $each: projectIds } };
}
if (Object.keys(updateOps).length === 0) {
return await getAgent({ id: agentId });
}
const updateQuery = { id: agentId, author: user.id };
if (user.role === SystemRoles.ADMIN) {
delete updateQuery.author;
}
const updatedAgent = await updateAgent(updateQuery, updateOps, {
updatingUserId: user.id,
skipVersioning: true,
});
if (updatedAgent) {
return updatedAgent;
}
if (updateOps.$addToSet) {
for (const projectId of projectIds) {
await removeAgentIdsFromProject(projectId, [agentId]);
}
} else if (updateOps.$pull) {
for (const projectId of removeProjectIds) {
await addAgentIdsToProject(projectId, [agentId]);
}
}
return await getAgent({ id: agentId });
};
/**
* Reverts an agent to a specific version in its version history.
* @param {Object} searchParameter - The search parameters to find the agent to revert.
* @param {string} searchParameter.id - The ID of the agent to revert.
* @param {string} [searchParameter.author] - The user ID of the agent's author.
* @param {number} versionIndex - The index of the version to revert to in the versions array.
* @returns {Promise<MongoAgent>} The updated agent document after reverting.
* @throws {Error} If the agent is not found or the specified version does not exist.
*/
const revertAgentVersion = async (searchParameter, versionIndex) => {
const agent = await Agent.findOne(searchParameter);
if (!agent) {
throw new Error('Agent not found');
}
if (!agent.versions || !agent.versions[versionIndex]) {
throw new Error(`Version ${versionIndex} not found`);
}
const revertToVersion = agent.versions[versionIndex];
const updateData = {
...revertToVersion,
};
delete updateData._id;
delete updateData.id;
delete updateData.versions;
delete updateData.author;
delete updateData.updatedBy;
return Agent.findOneAndUpdate(searchParameter, updateData, { new: true }).lean();
};
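// Note: reverting copies the selected snapshot over the live fields but deliberately
// drops identity and audit fields (_id, id, versions, author, updatedBy), so the
// version history itself is never rewritten by a revert.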
/**
* Generates a hash of action metadata for version comparison
* @param {string[]} actionIds - Array of action IDs in format "domain_action_id"
* @param {Action[]} actions - Array of action documents
* @returns {Promise<string>} - SHA256 hash of the action metadata
*/
const generateActionMetadataHash = async (actionIds, actions) => {
if (!actionIds || actionIds.length === 0) {
return '';
}
// Create a map of action_id to metadata for quick lookup
const actionMap = new Map();
actions.forEach((action) => {
actionMap.set(action.action_id, action.metadata);
});
// Sort action IDs for consistent hashing
const sortedActionIds = [...actionIds].sort();
// Build a deterministic string representation of all action metadata
const metadataString = sortedActionIds
.map((actionFullId) => {
// Extract just the action_id part (after the delimiter)
const parts = actionFullId.split(actionDelimiter);
const actionId = parts[1];
const metadata = actionMap.get(actionId);
if (!metadata) {
return `${actionId}:null`;
}
// Sort metadata keys for deterministic output
const sortedKeys = Object.keys(metadata).sort();
const metadataStr = sortedKeys
.map((key) => `${key}:${JSON.stringify(metadata[key])}`)
.join(',');
return `${actionId}:{${metadataStr}}`;
})
.join(';');
// Use Web Crypto API to generate hash
const encoder = new TextEncoder();
const data = encoder.encode(metadataString);
const hashBuffer = await crypto.webcrypto.subtle.digest('SHA-256', data);
const hashArray = Array.from(new Uint8Array(hashBuffer));
const hashHex = hashArray.map((b) => b.toString(16).padStart(2, '0')).join('');
return hashHex;
};
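// Hash input sketch (illustrative): for an action ID of the form
// `domain<actionDelimiter>abc123` with metadata { auth: 'none' }, the string that gets
// hashed is `abc123:{auth:"none"}`; sorting both the action IDs and each metadata key
// keeps the SHA-256 digest stable regardless of insertion order.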
/**
* Counts the number of promoted agents.
* @returns {Promise<number>} - The count of promoted agents
*/
const countPromotedAgents = async () => {
const count = await Agent.countDocuments({ is_promoted: true });
return count;
};
module.exports = {
getAgent,
getAgents,
loadAgent,
createAgent,
updateAgent,
deleteAgent,
deleteUserAgents,
revertAgentVersion,
updateAgentProjects,
countPromotedAgents,
addAgentResourceFile,
getListAgentsByAccess,
removeAgentResourceFiles,
generateActionMetadataHash,
};