mirror of
https://github.com/danny-avila/LibreChat.git
synced 2026-01-29 22:05:18 +01:00
🛸 feat: Remote Agent Access with External API Support (#11503)
* 🪪 feat: Microsoft Graph Access Token Placeholder for MCP Servers (#10867) * feat: MCP Graph Token env var * Addressing copilot remarks * Addressed Copilot review remarks * Fixed graphtokenservice mock in MCP test suite * fix: remove unnecessary type check and cast in resolveGraphTokensInRecord * ci: add Graph Token integration tests in MCPManager * refactor: update user type definitions to use Partial<IUser> in multiple functions * test: enhance MCP tests for graph token processing and user placeholder resolution - Added comprehensive tests to validate the interaction between preProcessGraphTokens and processMCPEnv. - Ensured correct resolution of graph tokens and user placeholders in various configurations. - Mocked OIDC utilities to facilitate testing of token extraction and validation. - Verified that original options remain unchanged after processing. * chore: import order * chore: imports --------- Co-authored-by: Danny Avila <danny@librechat.ai> * WIP: OpenAI-compatible API for LibreChat agents - Added OpenAIChatCompletionController for handling chat completions. - Introduced ListModelsController and GetModelController for listing and retrieving agent details. - Created routes for OpenAI API endpoints, including /v1/chat/completions and /v1/models. - Developed event handlers for streaming responses in OpenAI format. - Implemented request validation and error handling for API interactions. - Integrated content aggregation and response formatting to align with OpenAI specifications. This commit establishes a foundational API for interacting with LibreChat agents in a manner compatible with OpenAI's chat completion interface. * refactor: OpenAI-spec content aggregation for improved performance and clarity * fix: OpenAI chat completion controller with safe user handling for correct tool loading * refactor: Remove conversation ID from OpenAI response context and related handlers * refactor: OpenAI chat completion handling with streaming support - Introduced a lightweight tracker for streaming responses, allowing for efficient tracking of emitted content and usage metadata. - Updated the OpenAIChatCompletionController to utilize the new tracker, improving the handling of streaming and non-streaming responses. - Refactored event handlers to accommodate the new streaming logic, ensuring proper management of tool calls and content aggregation. - Adjusted response handling to streamline error reporting during streaming sessions. * WIP: Open Responses API with core service, types, and handlers - Added Open Responses API module with comprehensive types and enums. - Implemented core service for processing requests, including validation and input conversion. - Developed event handlers for streaming responses and non-streaming aggregation. - Established response building logic and error handling mechanisms. - Created detailed types for input and output content, ensuring compliance with Open Responses specification. * feat: Implement response storage and retrieval in Open Responses API - Added functionality to save user input messages and assistant responses to the database when the `store` flag is set to true. - Introduced a new endpoint to retrieve stored responses by ID, allowing users to access previous interactions. - Enhanced the response creation process to include database operations for conversation and message storage. - Implemented tests to validate the storage and retrieval of responses, ensuring correct behavior for both existing and non-existent response IDs. 
* refactor: Open Responses API with additional token tracking and validation - Added support for tracking cached tokens in response usage, improving token management. - Updated response structure to include new properties for top log probabilities and detailed usage metrics. - Enhanced tests to validate the presence and types of new properties in API responses, ensuring compliance with updated specifications. - Refactored response handling to accommodate new fields and improve overall clarity and performance. * refactor: Update reasoning event handlers and types for consistency - Renamed reasoning text events to simplify naming conventions, changing `emitReasoningTextDelta` to `emitReasoningDelta` and `emitReasoningTextDone` to `emitReasoningDone`. - Updated event types in the API to reflect the new naming, ensuring consistency across the codebase. - Added `logprobs` property to output events for enhanced tracking of log probabilities. * feat: Add validation for streaming events in Open Responses API tests * feat: Implement response.created event in Open Responses API - Added emitResponseCreated function to emit the response.created event as the first event in the streaming sequence, adhering to the Open Responses specification. - Updated createResponse function to emit response.created followed by response.in_progress. - Enhanced tests to validate the order of emitted events, ensuring response.created is triggered before response.in_progress. * feat: Responses API with attachment event handling - Introduced `createResponsesToolEndCallback` to handle attachment events in the Responses API, emitting `librechat:attachment` events as per the Open Responses extension specification. - Updated the `createResponse` function to utilize the new callback for processing tool outputs and emitting attachments during streaming. - Added helper functions for writing attachment events and defined types for attachment data, ensuring compatibility with the Open Responses protocol. - Enhanced tests to validate the integration of attachment events within the Responses API workflow. * WIP: remote agent auth * fix: Improve loading state handling in AgentApiKeys component - Updated the rendering logic to conditionally display loading spinner and API keys based on the loading state. - Removed unnecessary imports and streamlined the component for better readability. * refactor: Update API key access handling in routes - Replaced `checkAccess` with `generateCheckAccess` for improved access control. - Consolidated access checks into a single `checkApiKeyAccess` function, enhancing code readability and maintainability. - Streamlined route definitions for creating, listing, retrieving, and deleting API keys. * fix: Add permission handling for REMOTE_AGENT resource type * feat: Enhance permission handling for REMOTE_AGENT resources - Updated the deleteAgent and deleteUserAgents functions to handle permissions for both AGENT and REMOTE_AGENT resource types. - Introduced new functions to enrich REMOTE_AGENT principals and backfill permissions for AGENT owners. - Modified createAgentHandler and duplicateAgentHandler to grant permissions for REMOTE_AGENT alongside AGENT. - Added utility functions for retrieving effective permissions for REMOTE_AGENT resources, ensuring consistent access control across the application. * refactor: Rename and update roles for remote agent access - Changed role name from API User to Editor in translation files for clarity. 
- Updated default editor role ID from REMOTE_AGENT_USER to REMOTE_AGENT_EDITOR in resource configurations. - Adjusted role localization to reflect the new Editor role. - Modified access permissions to align with the updated role definitions across the application. * feat: Introduce remote agent permissions and update access handling - Added support for REMOTE_AGENTS in permission schemas, including use, create, share, and share_public permissions. - Updated the interface configuration to include remote agent settings. - Modified middleware and API key access checks to align with the new remote agent permission structure. - Enhanced role defaults to incorporate remote agent permissions, ensuring consistent access control across the application. * refactor: Update AgentApiKeys component and permissions handling - Refactored the AgentApiKeys component to improve structure and readability, including the introduction of ApiKeysContent for better separation of concerns. - Updated CreateKeyDialog to accept an onKeyCreated callback, enhancing its functionality. - Adjusted permission checks in Data component to use REMOTE_AGENTS and USE permissions, aligning with recent permission schema changes. - Enhanced loading state handling and dialog management for a smoother user experience. * refactor: Update remote agent access checks in API routes - Replaced existing access checks with `generateCheckAccess` for remote agents in the API keys and agents routes. - Introduced specific permission checks for creating, listing, retrieving, and deleting API keys, enhancing access control. - Improved code structure by consolidating permission handling for remote agents across multiple routes. * fix: Correct query parameters in ApiKeysContent component - Updated the useGetAgentApiKeysQuery call to include an object for the enabled parameter, ensuring proper functionality when the component is open. - This change improves the handling of API key retrieval based on the component's open state. * feat: Implement remote agents permissions and update API routes - Added new API route for updating remote agents permissions, enhancing role management capabilities. - Introduced remote agents permissions handling in the AgentApiKeys component, including a dedicated settings dialog. - Updated localization files to include new remote agents permission labels for better user experience. - Refactored data provider to support remote agents permissions updates, ensuring consistent access control across the application. * feat: Add remote agents permissions to role schema and interface - Introduced new permissions for REMOTE_AGENTS in the role schema, including USE, CREATE, SHARE, and SHARE_PUBLIC. - Updated the IRole interface to reflect the new remote agents permissions structure, enhancing role management capabilities. * feat: Add remote agents settings button to API keys dialog * feat: Update AgentFooter to include remote agent sharing permissions - Refactored access checks to incorporate permissions for sharing remote agents. - Enhanced conditional rendering logic to allow sharing by users with remote agent permissions. - Improved loading state handling for remote agent permissions, ensuring a smoother user experience. * refactor: Update API key creation access check and localization strings - Replaced the access check for creating API keys to use the existing remote agents access check. - Updated localization strings to correct the descriptions for remote agent permissions, ensuring clarity in user interface. 
* fix: resource permission mapping to include remote agents - Changed the resourceToPermissionMap to use a Partial<Record> for better flexibility. - Added mapping for REMOTE_AGENT permissions, enhancing the sharing capabilities for remote agents. * feat: Implement remote access checks for agent models - Enhanced ListModelsController and GetModelController to include checks for user permissions on remote agents. - Integrated findAccessibleResources to filter agents based on VIEW permission for REMOTE_AGENT. - Updated response handling to ensure users can only access agents they have permissions for, improving security and access control. * fix: Update user parameter type in processUserPlaceholders function - Changed the user parameter type in the processUserPlaceholders function from Partial<Partial<IUser>> to Partial<IUser> for improved type clarity and consistency. * refactor: Simplify integration test structure by removing conditional describe - Replaced conditional describeWithApiKey with a standard describe for all integration tests in responses.spec.js. - This change enhances test clarity and ensures all tests are executed consistently, regardless of the SKIP_INTEGRATION_TESTS flag. * test: Update AgentFooter tests to reflect new grant access dialog ID - Changed test IDs for the grant access dialog in AgentFooter tests to include the resource type, ensuring accurate identification in the test cases. - This update improves test clarity and aligns with recent changes in the component's implementation. * test: Enhance integration tests for Open Responses API - Updated integration tests in responses.spec.js to utilize an authRequest helper for consistent authorization handling across all test cases. - Introduced a test user and API key creation to improve test setup and ensure proper permission checks for remote agents. - Added checks for existing access roles and created necessary roles if they do not exist, enhancing test reliability and coverage. * feat: Extend accessRole schema to include remoteAgent resource type - Updated the accessRole schema to add 'remoteAgent' to the resourceType enum, enhancing the flexibility of role assignments and permissions management. * test: refactored test setup to create a minimal Express app for responses routes, enhancing test structure and maintainability. * test: Enhance abort.spec.js by mocking additional modules for improved test isolation - Updated the test setup in abort.spec.js to include actual implementations of '@librechat/data-schemas' and '@librechat/api' while maintaining mock functionality. - This change improves test reliability and ensures that the tests are more representative of the actual module behavior. * refactor: Update conversation ID generation to use UUID - Replaced the nanoid with uuidv4 for generating conversation IDs in the createResponse function, enhancing uniqueness and consistency in ID generation. * test: Add remote agent access roles to AccessRole model tests - Included additional access roles for remote agents (REMOTE_AGENT_EDITOR, REMOTE_AGENT_OWNER, REMOTE_AGENT_VIEWER) in the AccessRole model tests to ensure comprehensive coverage of role assignments and permissions management. * chore: Add deletion of user agent API keys in user deletion process - Updated the user deletion process in UserController and delete-user.js to include the removal of user agent API keys, ensuring comprehensive cleanup of user data upon account deletion. 
* test: Add remote agents permissions to permissions.spec.ts - Enhanced the permissions tests by including comprehensive permission settings for remote agents across various scenarios, ensuring accurate validation of access controls for remote agent roles. * chore: Update remote agents translations for clarity and consistency - Removed outdated remote agents translation entries and added revised entries to improve clarity on API key creation and sharing permissions for remote agents. This enhances user understanding of the available functionalities. * feat: Add indexing and TTL for agent API keys - Introduced an index on the `key` field for improved query performance. - Added a TTL index on the `expiresAt` field to enable automatic cleanup of expired API keys, ensuring efficient management of stored keys. * chore: Update API route documentation for clarity - Revised comments in the agents route file to clarify the handling of API key authentication. - Removed outdated endpoint listings to streamline the documentation and focus on current functionality. --------- Co-authored-by: Max Sanna <max@maxsanna.com>
This commit is contained in:
parent
dd4bbd38fc
commit
6279ea8dd7
70 changed files with 8926 additions and 50 deletions
|
|
@ -589,10 +589,16 @@ const deleteAgent = async (searchParameter) => {
|
|||
const agent = await Agent.findOneAndDelete(searchParameter);
|
||||
if (agent) {
|
||||
await removeAgentFromAllProjects(agent.id);
|
||||
await removeAllPermissions({
|
||||
resourceType: ResourceType.AGENT,
|
||||
resourceId: agent._id,
|
||||
});
|
||||
await Promise.all([
|
||||
removeAllPermissions({
|
||||
resourceType: ResourceType.AGENT,
|
||||
resourceId: agent._id,
|
||||
}),
|
||||
removeAllPermissions({
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
resourceId: agent._id,
|
||||
}),
|
||||
]);
|
||||
try {
|
||||
await Agent.updateMany({ 'edges.to': agent.id }, { $pull: { edges: { to: agent.id } } });
|
||||
} catch (error) {
|
||||
|
|
@ -631,7 +637,7 @@ const deleteUserAgents = async (userId) => {
|
|||
}
|
||||
|
||||
await AclEntry.deleteMany({
|
||||
resourceType: ResourceType.AGENT,
|
||||
resourceType: { $in: [ResourceType.AGENT, ResourceType.REMOTE_AGENT] },
|
||||
resourceId: { $in: agentObjectIds },
|
||||
});
|
||||
|
||||
|
|
|
|||
|
|
@ -5,6 +5,7 @@
|
|||
const mongoose = require('mongoose');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { ResourceType, PrincipalType, PermissionBits } = require('librechat-data-provider');
|
||||
const { enrichRemoteAgentPrincipals, backfillRemoteAgentPermissions } = require('@librechat/api');
|
||||
const {
|
||||
bulkUpdateResourcePermissions,
|
||||
ensureGroupPrincipalExists,
|
||||
|
|
@ -14,7 +15,6 @@ const {
|
|||
findAccessibleResources,
|
||||
getResourcePermissionsMap,
|
||||
} = require('~/server/services/PermissionService');
|
||||
const { AclEntry } = require('~/db/models');
|
||||
const {
|
||||
searchPrincipals: searchLocalPrincipals,
|
||||
sortPrincipalsByRelevance,
|
||||
|
|
@ -24,6 +24,7 @@ const {
|
|||
entraIdPrincipalFeatureEnabled,
|
||||
searchEntraIdPrincipals,
|
||||
} = require('~/server/services/GraphApiService');
|
||||
const { AclEntry, AccessRole } = require('~/db/models');
|
||||
|
||||
/**
|
||||
* Generic controller for resource permission endpoints
|
||||
|
|
@ -234,7 +235,7 @@ const getResourcePermissions = async (req, res) => {
|
|||
},
|
||||
]);
|
||||
|
||||
const principals = [];
|
||||
let principals = [];
|
||||
let publicPermission = null;
|
||||
|
||||
// Process aggregation results
|
||||
|
|
@ -280,6 +281,13 @@ const getResourcePermissions = async (req, res) => {
|
|||
}
|
||||
}
|
||||
|
||||
if (resourceType === ResourceType.REMOTE_AGENT) {
|
||||
const enricherDeps = { AclEntry, AccessRole, logger };
|
||||
const enrichResult = await enrichRemoteAgentPrincipals(enricherDeps, resourceId, principals);
|
||||
principals = enrichResult.principals;
|
||||
backfillRemoteAgentPermissions(enricherDeps, resourceId, enrichResult.entriesToBackfill);
|
||||
}
|
||||
|
||||
// Return response in format expected by frontend
|
||||
const response = {
|
||||
resourceType,
|
||||
|
|
|
|||
|
|
@ -22,6 +22,7 @@ const {
|
|||
} = require('~/models');
|
||||
const {
|
||||
ConversationTag,
|
||||
AgentApiKey,
|
||||
Transaction,
|
||||
MemoryEntry,
|
||||
Assistant,
|
||||
|
|
@ -256,6 +257,7 @@ const deleteUserController = async (req, res) => {
|
|||
await deleteFiles(null, user.id); // delete database files in case of orphaned files from previous steps
|
||||
await deleteToolCalls(user.id); // delete user tool calls
|
||||
await deleteUserAgents(user.id); // delete user agents
|
||||
await AgentApiKey.deleteMany({ user: user._id }); // delete user agent API keys
|
||||
await Assistant.deleteMany({ user: user.id }); // delete user assistants
|
||||
await ConversationTag.deleteMany({ user: user.id }); // delete user conversation tags
|
||||
await MemoryEntry.deleteMany({ userId: user.id }); // delete user memory entries
|
||||
|
|
|
|||
|
|
@ -1,7 +1,7 @@
|
|||
const { nanoid } = require('nanoid');
|
||||
const { Constants } = require('@librechat/agents');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { sendEvent, GenerationJobManager } = require('@librechat/api');
|
||||
const { sendEvent, GenerationJobManager, writeAttachmentEvent } = require('@librechat/api');
|
||||
const { Tools, StepTypes, FileContext, ErrorTypes } = require('librechat-data-provider');
|
||||
const {
|
||||
EnvVar,
|
||||
|
|
@ -489,7 +489,226 @@ function createToolEndCallback({ req, res, artifactPromises, streamId = null })
|
|||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Helper to write attachment events in Open Responses format (librechat:attachment)
|
||||
* @param {ServerResponse} res - The server response object
|
||||
* @param {Object} tracker - The response tracker with sequence number
|
||||
* @param {Object} attachment - The attachment data
|
||||
* @param {Object} metadata - Additional metadata (messageId, conversationId)
|
||||
*/
|
||||
function writeResponsesAttachment(res, tracker, attachment, metadata) {
|
||||
const sequenceNumber = tracker.nextSequence();
|
||||
writeAttachmentEvent(res, sequenceNumber, attachment, {
|
||||
messageId: metadata.run_id,
|
||||
conversationId: metadata.thread_id,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Creates a tool end callback specifically for the Responses API.
|
||||
* Emits attachments as `librechat:attachment` events per the Open Responses extension spec.
|
||||
*
|
||||
* @param {Object} params
|
||||
* @param {ServerRequest} params.req
|
||||
* @param {ServerResponse} params.res
|
||||
* @param {Object} params.tracker - Response tracker with sequence number
|
||||
* @param {Promise<MongoFile | { filename: string; filepath: string; expires: number;} | null>[]} params.artifactPromises
|
||||
* @returns {ToolEndCallback} The tool end callback.
|
||||
*/
|
||||
function createResponsesToolEndCallback({ req, res, tracker, artifactPromises }) {
|
||||
/**
|
||||
* @type {ToolEndCallback}
|
||||
*/
|
||||
return async (data, metadata) => {
|
||||
const output = data?.output;
|
||||
if (!output) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (!output.artifact) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (output.artifact[Tools.file_search]) {
|
||||
artifactPromises.push(
|
||||
(async () => {
|
||||
const user = req.user;
|
||||
const attachment = await processFileCitations({
|
||||
user,
|
||||
metadata,
|
||||
appConfig: req.config,
|
||||
toolArtifact: output.artifact,
|
||||
toolCallId: output.tool_call_id,
|
||||
});
|
||||
if (!attachment) {
|
||||
return null;
|
||||
}
|
||||
// For Responses API, emit attachment during streaming
|
||||
if (res.headersSent && !res.writableEnded) {
|
||||
writeResponsesAttachment(res, tracker, attachment, metadata);
|
||||
}
|
||||
return attachment;
|
||||
})().catch((error) => {
|
||||
logger.error('Error processing file citations:', error);
|
||||
return null;
|
||||
}),
|
||||
);
|
||||
}
|
||||
|
||||
if (output.artifact[Tools.ui_resources]) {
|
||||
artifactPromises.push(
|
||||
(async () => {
|
||||
const attachment = {
|
||||
type: Tools.ui_resources,
|
||||
toolCallId: output.tool_call_id,
|
||||
[Tools.ui_resources]: output.artifact[Tools.ui_resources].data,
|
||||
};
|
||||
// For Responses API, always emit attachment during streaming
|
||||
if (res.headersSent && !res.writableEnded) {
|
||||
writeResponsesAttachment(res, tracker, attachment, metadata);
|
||||
}
|
||||
return attachment;
|
||||
})().catch((error) => {
|
||||
logger.error('Error processing artifact content:', error);
|
||||
return null;
|
||||
}),
|
||||
);
|
||||
}
|
||||
|
||||
if (output.artifact[Tools.web_search]) {
|
||||
artifactPromises.push(
|
||||
(async () => {
|
||||
const attachment = {
|
||||
type: Tools.web_search,
|
||||
toolCallId: output.tool_call_id,
|
||||
[Tools.web_search]: { ...output.artifact[Tools.web_search] },
|
||||
};
|
||||
// For Responses API, always emit attachment during streaming
|
||||
if (res.headersSent && !res.writableEnded) {
|
||||
writeResponsesAttachment(res, tracker, attachment, metadata);
|
||||
}
|
||||
return attachment;
|
||||
})().catch((error) => {
|
||||
logger.error('Error processing artifact content:', error);
|
||||
return null;
|
||||
}),
|
||||
);
|
||||
}
|
||||
|
||||
if (output.artifact.content) {
|
||||
/** @type {FormattedContent[]} */
|
||||
const content = output.artifact.content;
|
||||
for (let i = 0; i < content.length; i++) {
|
||||
const part = content[i];
|
||||
if (!part) {
|
||||
continue;
|
||||
}
|
||||
if (part.type !== 'image_url') {
|
||||
continue;
|
||||
}
|
||||
const { url } = part.image_url;
|
||||
artifactPromises.push(
|
||||
(async () => {
|
||||
const filename = `${output.name}_img_${nanoid()}`;
|
||||
const file_id = output.artifact.file_ids?.[i];
|
||||
const file = await saveBase64Image(url, {
|
||||
req,
|
||||
file_id,
|
||||
filename,
|
||||
endpoint: metadata.provider,
|
||||
context: FileContext.image_generation,
|
||||
});
|
||||
const fileMetadata = Object.assign(file, {
|
||||
toolCallId: output.tool_call_id,
|
||||
});
|
||||
|
||||
if (!fileMetadata) {
|
||||
return null;
|
||||
}
|
||||
|
||||
// For Responses API, emit attachment during streaming
|
||||
if (res.headersSent && !res.writableEnded) {
|
||||
const attachment = {
|
||||
file_id: fileMetadata.file_id,
|
||||
filename: fileMetadata.filename,
|
||||
type: fileMetadata.type,
|
||||
url: fileMetadata.filepath,
|
||||
width: fileMetadata.width,
|
||||
height: fileMetadata.height,
|
||||
tool_call_id: output.tool_call_id,
|
||||
};
|
||||
writeResponsesAttachment(res, tracker, attachment, metadata);
|
||||
}
|
||||
|
||||
return fileMetadata;
|
||||
})().catch((error) => {
|
||||
logger.error('Error processing artifact content:', error);
|
||||
return null;
|
||||
}),
|
||||
);
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
const isCodeTool =
|
||||
output.name === Tools.execute_code || output.name === Constants.PROGRAMMATIC_TOOL_CALLING;
|
||||
if (!isCodeTool) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (!output.artifact.files) {
|
||||
return;
|
||||
}
|
||||
|
||||
for (const file of output.artifact.files) {
|
||||
const { id, name } = file;
|
||||
artifactPromises.push(
|
||||
(async () => {
|
||||
const result = await loadAuthValues({
|
||||
userId: req.user.id,
|
||||
authFields: [EnvVar.CODE_API_KEY],
|
||||
});
|
||||
const fileMetadata = await processCodeOutput({
|
||||
req,
|
||||
id,
|
||||
name,
|
||||
apiKey: result[EnvVar.CODE_API_KEY],
|
||||
messageId: metadata.run_id,
|
||||
toolCallId: output.tool_call_id,
|
||||
conversationId: metadata.thread_id,
|
||||
session_id: output.artifact.session_id,
|
||||
});
|
||||
|
||||
if (!fileMetadata) {
|
||||
return null;
|
||||
}
|
||||
|
||||
// For Responses API, emit attachment during streaming
|
||||
if (res.headersSent && !res.writableEnded) {
|
||||
const attachment = {
|
||||
file_id: fileMetadata.file_id,
|
||||
filename: fileMetadata.filename,
|
||||
type: fileMetadata.type,
|
||||
url: fileMetadata.filepath,
|
||||
width: fileMetadata.width,
|
||||
height: fileMetadata.height,
|
||||
tool_call_id: output.tool_call_id,
|
||||
};
|
||||
writeResponsesAttachment(res, tracker, attachment, metadata);
|
||||
}
|
||||
|
||||
return fileMetadata;
|
||||
})().catch((error) => {
|
||||
logger.error('Error processing code output:', error);
|
||||
return null;
|
||||
}),
|
||||
);
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
module.exports = {
|
||||
getDefaultHandlers,
|
||||
createToolEndCallback,
|
||||
createResponsesToolEndCallback,
|
||||
};
|
||||
|
|
|
|||
660
api/server/controllers/agents/openai.js
Normal file
660
api/server/controllers/agents/openai.js
Normal file
|
|
@ -0,0 +1,660 @@
|
|||
const { nanoid } = require('nanoid');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { EModelEndpoint, ResourceType, PermissionBits } = require('librechat-data-provider');
|
||||
const {
|
||||
Callback,
|
||||
ToolEndHandler,
|
||||
formatAgentMessages,
|
||||
ChatModelStreamHandler,
|
||||
} = require('@librechat/agents');
|
||||
const {
|
||||
writeSSE,
|
||||
createRun,
|
||||
createChunk,
|
||||
sendFinalChunk,
|
||||
createSafeUser,
|
||||
validateRequest,
|
||||
initializeAgent,
|
||||
createErrorResponse,
|
||||
buildNonStreamingResponse,
|
||||
createOpenAIStreamTracker,
|
||||
createOpenAIContentAggregator,
|
||||
isChatCompletionValidationFailure,
|
||||
} = require('@librechat/api');
|
||||
const { createToolEndCallback } = require('~/server/controllers/agents/callbacks');
|
||||
const { findAccessibleResources } = require('~/server/services/PermissionService');
|
||||
const { loadAgentTools } = require('~/server/services/ToolService');
|
||||
const { getConvoFiles } = require('~/models/Conversation');
|
||||
const { getAgent, getAgents } = require('~/models/Agent');
|
||||
const db = require('~/models');
|
||||
|
||||
/**
|
||||
* Creates a tool loader function for the agent.
|
||||
* @param {AbortSignal} signal - The abort signal
|
||||
*/
|
||||
function createToolLoader(signal) {
|
||||
return async function loadTools({
|
||||
req,
|
||||
res,
|
||||
tools,
|
||||
model,
|
||||
agentId,
|
||||
provider,
|
||||
tool_options,
|
||||
tool_resources,
|
||||
}) {
|
||||
const agent = { id: agentId, tools, provider, model, tool_options };
|
||||
try {
|
||||
return await loadAgentTools({
|
||||
req,
|
||||
res,
|
||||
agent,
|
||||
signal,
|
||||
tool_resources,
|
||||
streamId: null, // No resumable stream for OpenAI compat
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('Error loading tools for agent ' + agentId, error);
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert content part to internal format
|
||||
* @param {Object} part - Content part
|
||||
* @returns {Object} Converted part
|
||||
*/
|
||||
function convertContentPart(part) {
|
||||
if (part.type === 'text') {
|
||||
return { type: 'text', text: part.text };
|
||||
}
|
||||
if (part.type === 'image_url') {
|
||||
return { type: 'image_url', image_url: part.image_url };
|
||||
}
|
||||
return part;
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert OpenAI messages to internal format
|
||||
* @param {Array} messages - OpenAI format messages
|
||||
* @returns {Array} Internal format messages
|
||||
*/
|
||||
function convertMessages(messages) {
|
||||
return messages.map((msg) => {
|
||||
let content;
|
||||
if (typeof msg.content === 'string') {
|
||||
content = msg.content;
|
||||
} else if (msg.content) {
|
||||
content = msg.content.map(convertContentPart);
|
||||
} else {
|
||||
content = '';
|
||||
}
|
||||
|
||||
return {
|
||||
role: msg.role,
|
||||
content,
|
||||
...(msg.name && { name: msg.name }),
|
||||
...(msg.tool_calls && { tool_calls: msg.tool_calls }),
|
||||
...(msg.tool_call_id && { tool_call_id: msg.tool_call_id }),
|
||||
};
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Send an error response in OpenAI format
|
||||
*/
|
||||
function sendErrorResponse(res, statusCode, message, type = 'invalid_request_error', code = null) {
|
||||
res.status(statusCode).json(createErrorResponse(message, type, code));
|
||||
}
|
||||
|
||||
/**
|
||||
* OpenAI-compatible chat completions controller for agents.
|
||||
*
|
||||
* POST /v1/chat/completions
|
||||
*
|
||||
* Request format:
|
||||
* {
|
||||
* "model": "agent_id_here",
|
||||
* "messages": [{"role": "user", "content": "Hello!"}],
|
||||
* "stream": true,
|
||||
* "conversation_id": "optional",
|
||||
* "parent_message_id": "optional"
|
||||
* }
|
||||
*/
|
||||
const OpenAIChatCompletionController = async (req, res) => {
|
||||
const appConfig = req.config;
|
||||
|
||||
// Validate request
|
||||
const validation = validateRequest(req.body);
|
||||
if (isChatCompletionValidationFailure(validation)) {
|
||||
return sendErrorResponse(res, 400, validation.error);
|
||||
}
|
||||
|
||||
const request = validation.request;
|
||||
const agentId = request.model;
|
||||
|
||||
// Look up the agent
|
||||
const agent = await getAgent({ id: agentId });
|
||||
if (!agent) {
|
||||
return sendErrorResponse(
|
||||
res,
|
||||
404,
|
||||
`Agent not found: ${agentId}`,
|
||||
'invalid_request_error',
|
||||
'model_not_found',
|
||||
);
|
||||
}
|
||||
|
||||
// Generate IDs
|
||||
const requestId = `chatcmpl-${nanoid()}`;
|
||||
const conversationId = request.conversation_id ?? nanoid();
|
||||
const parentMessageId = request.parent_message_id ?? null;
|
||||
const created = Math.floor(Date.now() / 1000);
|
||||
|
||||
const context = {
|
||||
created,
|
||||
requestId,
|
||||
model: agentId,
|
||||
};
|
||||
|
||||
// Set up abort controller
|
||||
const abortController = new AbortController();
|
||||
|
||||
// Handle client disconnect
|
||||
req.on('close', () => {
|
||||
if (!abortController.signal.aborted) {
|
||||
abortController.abort();
|
||||
logger.debug('[OpenAI API] Client disconnected, aborting');
|
||||
}
|
||||
});
|
||||
|
||||
try {
|
||||
// Build allowed providers set
|
||||
const allowedProviders = new Set(
|
||||
appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders,
|
||||
);
|
||||
|
||||
// Create tool loader
|
||||
const loadTools = createToolLoader(abortController.signal);
|
||||
|
||||
// Initialize the agent first to check for disableStreaming
|
||||
const endpointOption = {
|
||||
endpoint: agent.provider,
|
||||
model_parameters: agent.model_parameters ?? {},
|
||||
};
|
||||
|
||||
const primaryConfig = await initializeAgent(
|
||||
{
|
||||
req,
|
||||
res,
|
||||
loadTools,
|
||||
requestFiles: [],
|
||||
conversationId,
|
||||
parentMessageId,
|
||||
agent,
|
||||
endpointOption,
|
||||
allowedProviders,
|
||||
isInitialAgent: true,
|
||||
},
|
||||
{
|
||||
getConvoFiles,
|
||||
getFiles: db.getFiles,
|
||||
getUserKey: db.getUserKey,
|
||||
getMessages: db.getMessages,
|
||||
updateFilesUsage: db.updateFilesUsage,
|
||||
getUserKeyValues: db.getUserKeyValues,
|
||||
getUserCodeFiles: db.getUserCodeFiles,
|
||||
getToolFilesByIds: db.getToolFilesByIds,
|
||||
getCodeGeneratedFiles: db.getCodeGeneratedFiles,
|
||||
},
|
||||
);
|
||||
|
||||
// Determine if streaming is enabled (check both request and agent config)
|
||||
const streamingDisabled = !!primaryConfig.model_parameters?.disableStreaming;
|
||||
const isStreaming = request.stream === true && !streamingDisabled;
|
||||
|
||||
// Create tracker for streaming or aggregator for non-streaming
|
||||
const tracker = isStreaming ? createOpenAIStreamTracker() : null;
|
||||
const aggregator = isStreaming ? null : createOpenAIContentAggregator();
|
||||
|
||||
// Set up response for streaming
|
||||
if (isStreaming) {
|
||||
res.setHeader('Content-Type', 'text/event-stream');
|
||||
res.setHeader('Cache-Control', 'no-cache');
|
||||
res.setHeader('Connection', 'keep-alive');
|
||||
res.setHeader('X-Accel-Buffering', 'no');
|
||||
res.flushHeaders();
|
||||
|
||||
// Send initial chunk with role
|
||||
const initialChunk = createChunk(context, { role: 'assistant' });
|
||||
writeSSE(res, initialChunk);
|
||||
}
|
||||
|
||||
// Create handler config for OpenAI streaming (only used when streaming)
|
||||
const handlerConfig = isStreaming
|
||||
? {
|
||||
res,
|
||||
context,
|
||||
tracker,
|
||||
}
|
||||
: null;
|
||||
|
||||
// We need custom handlers that stream in OpenAI format
|
||||
const collectedUsage = [];
|
||||
/** @type {Promise<import('librechat-data-provider').TAttachment | null>[]} */
|
||||
const artifactPromises = [];
|
||||
|
||||
// Create tool end callback for processing artifacts (images, file citations, code output)
|
||||
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises, streamId: null });
|
||||
|
||||
// Convert messages to internal format
|
||||
const openaiMessages = convertMessages(request.messages);
|
||||
|
||||
// Format for agent
|
||||
const toolSet = new Set((primaryConfig.tools ?? []).map((tool) => tool && tool.name));
|
||||
const { messages: formattedMessages, indexTokenCountMap } = formatAgentMessages(
|
||||
openaiMessages,
|
||||
{},
|
||||
toolSet,
|
||||
);
|
||||
|
||||
/**
|
||||
* Create a simple handler that processes data
|
||||
*/
|
||||
const createHandler = (processor) => ({
|
||||
handle: (_event, data) => {
|
||||
if (processor) {
|
||||
processor(data);
|
||||
}
|
||||
},
|
||||
});
|
||||
|
||||
/**
|
||||
* Stream text content in OpenAI format
|
||||
*/
|
||||
const streamText = (text) => {
|
||||
if (!text) {
|
||||
return;
|
||||
}
|
||||
if (isStreaming) {
|
||||
tracker.addText();
|
||||
writeSSE(res, createChunk(context, { content: text }));
|
||||
} else {
|
||||
aggregator.addText(text);
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Stream reasoning content in OpenAI format (OpenRouter convention)
|
||||
*/
|
||||
const streamReasoning = (text) => {
|
||||
if (!text) {
|
||||
return;
|
||||
}
|
||||
if (isStreaming) {
|
||||
tracker.addReasoning();
|
||||
writeSSE(res, createChunk(context, { reasoning: text }));
|
||||
} else {
|
||||
aggregator.addReasoning(text);
|
||||
}
|
||||
};
|
||||
|
||||
// Built-in handler for processing raw model stream chunks
|
||||
const chatModelStreamHandler = new ChatModelStreamHandler();
|
||||
|
||||
// Event handlers for OpenAI-compatible streaming
|
||||
const handlers = {
|
||||
// Process raw model chunks and dispatch message/reasoning deltas
|
||||
on_chat_model_stream: {
|
||||
handle: async (event, data, metadata, graph) => {
|
||||
await chatModelStreamHandler.handle(event, data, metadata, graph);
|
||||
},
|
||||
},
|
||||
|
||||
// Text content streaming
|
||||
on_message_delta: createHandler((data) => {
|
||||
const content = data?.delta?.content;
|
||||
if (Array.isArray(content)) {
|
||||
for (const part of content) {
|
||||
if (part.type === 'text' && part.text) {
|
||||
streamText(part.text);
|
||||
}
|
||||
}
|
||||
}
|
||||
}),
|
||||
|
||||
// Reasoning/thinking content streaming
|
||||
on_reasoning_delta: createHandler((data) => {
|
||||
const content = data?.delta?.content;
|
||||
if (Array.isArray(content)) {
|
||||
for (const part of content) {
|
||||
const text = part.think || part.text;
|
||||
if (text) {
|
||||
streamReasoning(text);
|
||||
}
|
||||
}
|
||||
}
|
||||
}),
|
||||
|
||||
// Tool call initiation - streams id and name (from on_run_step)
|
||||
on_run_step: createHandler((data) => {
|
||||
const stepDetails = data?.stepDetails;
|
||||
if (stepDetails?.type === 'tool_calls' && stepDetails.tool_calls) {
|
||||
for (const tc of stepDetails.tool_calls) {
|
||||
const toolIndex = data.index ?? 0;
|
||||
const toolId = tc.id ?? '';
|
||||
const toolName = tc.name ?? '';
|
||||
const toolCall = {
|
||||
id: toolId,
|
||||
type: 'function',
|
||||
function: { name: toolName, arguments: '' },
|
||||
};
|
||||
|
||||
// Track tool call in tracker or aggregator
|
||||
if (isStreaming) {
|
||||
if (!tracker.toolCalls.has(toolIndex)) {
|
||||
tracker.toolCalls.set(toolIndex, toolCall);
|
||||
}
|
||||
// Stream initial tool call chunk (like OpenAI does)
|
||||
writeSSE(
|
||||
res,
|
||||
createChunk(context, {
|
||||
tool_calls: [{ index: toolIndex, ...toolCall }],
|
||||
}),
|
||||
);
|
||||
} else {
|
||||
if (!aggregator.toolCalls.has(toolIndex)) {
|
||||
aggregator.toolCalls.set(toolIndex, toolCall);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}),
|
||||
|
||||
// Tool call argument streaming (from on_run_step_delta)
|
||||
on_run_step_delta: createHandler((data) => {
|
||||
const delta = data?.delta;
|
||||
if (delta?.type === 'tool_calls' && delta.tool_calls) {
|
||||
for (const tc of delta.tool_calls) {
|
||||
const args = tc.args ?? '';
|
||||
if (!args) {
|
||||
continue;
|
||||
}
|
||||
|
||||
const toolIndex = tc.index ?? 0;
|
||||
|
||||
// Update tool call arguments
|
||||
const targetMap = isStreaming ? tracker.toolCalls : aggregator.toolCalls;
|
||||
const tracked = targetMap.get(toolIndex);
|
||||
if (tracked) {
|
||||
tracked.function.arguments += args;
|
||||
}
|
||||
|
||||
// Stream argument delta (only for streaming)
|
||||
if (isStreaming) {
|
||||
writeSSE(
|
||||
res,
|
||||
createChunk(context, {
|
||||
tool_calls: [
|
||||
{
|
||||
index: toolIndex,
|
||||
function: { arguments: args },
|
||||
},
|
||||
],
|
||||
}),
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
}),
|
||||
|
||||
// Usage tracking
|
||||
on_chat_model_end: createHandler((data) => {
|
||||
const usage = data?.output?.usage_metadata;
|
||||
if (usage) {
|
||||
collectedUsage.push(usage);
|
||||
const target = isStreaming ? tracker : aggregator;
|
||||
target.usage.promptTokens += usage.input_tokens ?? 0;
|
||||
target.usage.completionTokens += usage.output_tokens ?? 0;
|
||||
}
|
||||
}),
|
||||
on_run_step_completed: createHandler(),
|
||||
// Use proper ToolEndHandler for processing artifacts (images, file citations, code output)
|
||||
on_tool_end: new ToolEndHandler(toolEndCallback, logger),
|
||||
on_chain_stream: createHandler(),
|
||||
on_chain_end: createHandler(),
|
||||
on_agent_update: createHandler(),
|
||||
on_custom_event: createHandler(),
|
||||
};
|
||||
|
||||
// Create and run the agent
|
||||
const userId = req.user?.id ?? 'api-user';
|
||||
|
||||
// Extract userMCPAuthMap from primaryConfig (needed for MCP tool connections)
|
||||
const userMCPAuthMap = primaryConfig.userMCPAuthMap;
|
||||
|
||||
const run = await createRun({
|
||||
agents: [primaryConfig],
|
||||
messages: formattedMessages,
|
||||
indexTokenCountMap,
|
||||
runId: requestId,
|
||||
signal: abortController.signal,
|
||||
customHandlers: handlers,
|
||||
requestBody: {
|
||||
messageId: requestId,
|
||||
conversationId,
|
||||
},
|
||||
user: { id: userId },
|
||||
});
|
||||
|
||||
if (!run) {
|
||||
throw new Error('Failed to create agent run');
|
||||
}
|
||||
|
||||
// Process the stream
|
||||
const config = {
|
||||
runName: 'AgentRun',
|
||||
configurable: {
|
||||
thread_id: conversationId,
|
||||
user_id: userId,
|
||||
user: createSafeUser(req.user),
|
||||
...(userMCPAuthMap != null && { userMCPAuthMap }),
|
||||
},
|
||||
signal: abortController.signal,
|
||||
streamMode: 'values',
|
||||
version: 'v2',
|
||||
};
|
||||
|
||||
await run.processStream({ messages: formattedMessages }, config, {
|
||||
callbacks: {
|
||||
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
|
||||
logger.error(`[OpenAI API] Tool Error "${toolId}"`, error);
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
// Finalize response
|
||||
if (isStreaming) {
|
||||
sendFinalChunk(handlerConfig);
|
||||
res.end();
|
||||
|
||||
// Wait for artifact processing after response ends (non-blocking)
|
||||
if (artifactPromises.length > 0) {
|
||||
Promise.all(artifactPromises).catch((artifactError) => {
|
||||
logger.warn('[OpenAI API] Error processing artifacts:', artifactError);
|
||||
});
|
||||
}
|
||||
} else {
|
||||
// For non-streaming, wait for artifacts before sending response
|
||||
if (artifactPromises.length > 0) {
|
||||
try {
|
||||
await Promise.all(artifactPromises);
|
||||
} catch (artifactError) {
|
||||
logger.warn('[OpenAI API] Error processing artifacts:', artifactError);
|
||||
}
|
||||
}
|
||||
|
||||
// Build usage from aggregated data
|
||||
const usage = {
|
||||
prompt_tokens: aggregator.usage.promptTokens,
|
||||
completion_tokens: aggregator.usage.completionTokens,
|
||||
total_tokens: aggregator.usage.promptTokens + aggregator.usage.completionTokens,
|
||||
};
|
||||
|
||||
if (aggregator.usage.reasoningTokens > 0) {
|
||||
usage.completion_tokens_details = {
|
||||
reasoning_tokens: aggregator.usage.reasoningTokens,
|
||||
};
|
||||
}
|
||||
|
||||
const response = buildNonStreamingResponse(
|
||||
context,
|
||||
aggregator.getText(),
|
||||
aggregator.getReasoning(),
|
||||
aggregator.toolCalls,
|
||||
usage,
|
||||
);
|
||||
res.json(response);
|
||||
}
|
||||
} catch (error) {
|
||||
const errorMessage = error instanceof Error ? error.message : 'An error occurred';
|
||||
logger.error('[OpenAI API] Error:', error);
|
||||
|
||||
// Check if we already started streaming (headers sent)
|
||||
if (res.headersSent) {
|
||||
// Headers already sent, send error in stream
|
||||
const errorChunk = createChunk(context, { content: `\n\nError: ${errorMessage}` }, 'stop');
|
||||
writeSSE(res, errorChunk);
|
||||
writeSSE(res, '[DONE]');
|
||||
res.end();
|
||||
} else {
|
||||
sendErrorResponse(res, 500, errorMessage, 'server_error');
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* List available agents as models (filtered by remote access permissions)
|
||||
*
|
||||
* GET /v1/models
|
||||
*/
|
||||
const ListModelsController = async (req, res) => {
|
||||
try {
|
||||
const userId = req.user?.id;
|
||||
const userRole = req.user?.role;
|
||||
|
||||
if (!userId) {
|
||||
return sendErrorResponse(res, 401, 'Authentication required', 'auth_error');
|
||||
}
|
||||
|
||||
// Find agents the user has remote access to (VIEW permission on REMOTE_AGENT)
|
||||
const accessibleAgentIds = await findAccessibleResources({
|
||||
userId,
|
||||
role: userRole,
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
requiredPermissions: PermissionBits.VIEW,
|
||||
});
|
||||
|
||||
// Get the accessible agents
|
||||
let agents = [];
|
||||
if (accessibleAgentIds.length > 0) {
|
||||
agents = await getAgents({ _id: { $in: accessibleAgentIds } });
|
||||
}
|
||||
|
||||
const models = agents.map((agent) => ({
|
||||
id: agent.id,
|
||||
object: 'model',
|
||||
created: Math.floor(new Date(agent.createdAt || Date.now()).getTime() / 1000),
|
||||
owned_by: 'librechat',
|
||||
permission: [],
|
||||
root: agent.id,
|
||||
parent: null,
|
||||
// LibreChat extensions
|
||||
name: agent.name,
|
||||
description: agent.description,
|
||||
provider: agent.provider,
|
||||
}));
|
||||
|
||||
res.json({
|
||||
object: 'list',
|
||||
data: models,
|
||||
});
|
||||
} catch (error) {
|
||||
const errorMessage = error instanceof Error ? error.message : 'Failed to list models';
|
||||
logger.error('[OpenAI API] Error listing models:', error);
|
||||
sendErrorResponse(res, 500, errorMessage, 'server_error');
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Get a specific model/agent (with remote access permission check)
|
||||
*
|
||||
* GET /v1/models/:model
|
||||
*/
|
||||
const GetModelController = async (req, res) => {
|
||||
try {
|
||||
const { model } = req.params;
|
||||
const userId = req.user?.id;
|
||||
const userRole = req.user?.role;
|
||||
|
||||
if (!userId) {
|
||||
return sendErrorResponse(res, 401, 'Authentication required', 'auth_error');
|
||||
}
|
||||
|
||||
const agent = await getAgent({ id: model });
|
||||
|
||||
if (!agent) {
|
||||
return sendErrorResponse(
|
||||
res,
|
||||
404,
|
||||
`Model not found: ${model}`,
|
||||
'invalid_request_error',
|
||||
'model_not_found',
|
||||
);
|
||||
}
|
||||
|
||||
// Check if user has remote access to this agent
|
||||
const accessibleAgentIds = await findAccessibleResources({
|
||||
userId,
|
||||
role: userRole,
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
requiredPermissions: PermissionBits.VIEW,
|
||||
});
|
||||
|
||||
const hasAccess = accessibleAgentIds.some((id) => id.toString() === agent._id.toString());
|
||||
|
||||
if (!hasAccess) {
|
||||
return sendErrorResponse(
|
||||
res,
|
||||
403,
|
||||
`No remote access to model: ${model}`,
|
||||
'permission_error',
|
||||
'access_denied',
|
||||
);
|
||||
}
|
||||
|
||||
res.json({
|
||||
id: agent.id,
|
||||
object: 'model',
|
||||
created: Math.floor(new Date(agent.createdAt || Date.now()).getTime() / 1000),
|
||||
owned_by: 'librechat',
|
||||
permission: [],
|
||||
root: agent.id,
|
||||
parent: null,
|
||||
// LibreChat extensions
|
||||
name: agent.name,
|
||||
description: agent.description,
|
||||
provider: agent.provider,
|
||||
});
|
||||
} catch (error) {
|
||||
const errorMessage = error instanceof Error ? error.message : 'Failed to get model';
|
||||
logger.error('[OpenAI API] Error getting model:', error);
|
||||
sendErrorResponse(res, 500, errorMessage, 'server_error');
|
||||
}
|
||||
};
|
||||
|
||||
module.exports = {
|
||||
OpenAIChatCompletionController,
|
||||
ListModelsController,
|
||||
GetModelController,
|
||||
};
|
||||
800
api/server/controllers/agents/responses.js
Normal file
800
api/server/controllers/agents/responses.js
Normal file
|
|
@ -0,0 +1,800 @@
|
|||
const { nanoid } = require('nanoid');
|
||||
const { v4: uuidv4 } = require('uuid');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { EModelEndpoint, ResourceType, PermissionBits } = require('librechat-data-provider');
|
||||
const {
|
||||
Callback,
|
||||
ToolEndHandler,
|
||||
formatAgentMessages,
|
||||
ChatModelStreamHandler,
|
||||
} = require('@librechat/agents');
|
||||
const {
|
||||
createRun,
|
||||
createSafeUser,
|
||||
initializeAgent,
|
||||
// Responses API
|
||||
writeDone,
|
||||
buildResponse,
|
||||
generateResponseId,
|
||||
isValidationFailure,
|
||||
emitResponseCreated,
|
||||
createResponseContext,
|
||||
createResponseTracker,
|
||||
setupStreamingResponse,
|
||||
emitResponseInProgress,
|
||||
convertInputToMessages,
|
||||
validateResponseRequest,
|
||||
buildAggregatedResponse,
|
||||
createResponseAggregator,
|
||||
sendResponsesErrorResponse,
|
||||
createResponsesEventHandlers,
|
||||
createAggregatorEventHandlers,
|
||||
} = require('@librechat/api');
|
||||
const {
|
||||
createResponsesToolEndCallback,
|
||||
createToolEndCallback,
|
||||
} = require('~/server/controllers/agents/callbacks');
|
||||
const { findAccessibleResources } = require('~/server/services/PermissionService');
|
||||
const { getConvoFiles, saveConvo, getConvo } = require('~/models/Conversation');
|
||||
const { loadAgentTools } = require('~/server/services/ToolService');
|
||||
const { getAgent, getAgents } = require('~/models/Agent');
|
||||
const db = require('~/models');
|
||||
|
||||
/** @type {import('@librechat/api').AppConfig | null} */
|
||||
let appConfig = null;
|
||||
|
||||
/**
|
||||
* Set the app config for the controller
|
||||
* @param {import('@librechat/api').AppConfig} config
|
||||
*/
|
||||
function setAppConfig(config) {
|
||||
appConfig = config;
|
||||
}
|
||||
|
||||
/**
|
||||
* Creates a tool loader function for the agent.
|
||||
* @param {AbortSignal} signal - The abort signal
|
||||
*/
|
||||
function createToolLoader(signal) {
|
||||
return async function loadTools({
|
||||
req,
|
||||
res,
|
||||
tools,
|
||||
model,
|
||||
agentId,
|
||||
provider,
|
||||
tool_options,
|
||||
tool_resources,
|
||||
}) {
|
||||
const agent = { id: agentId, tools, provider, model, tool_options };
|
||||
try {
|
||||
return await loadAgentTools({
|
||||
req,
|
||||
res,
|
||||
agent,
|
||||
signal,
|
||||
tool_resources,
|
||||
streamId: null,
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('Error loading tools for agent ' + agentId, error);
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert Open Responses input items to internal messages
|
||||
* @param {import('@librechat/api').InputItem[]} input
|
||||
* @returns {Array} Internal messages
|
||||
*/
|
||||
function convertToInternalMessages(input) {
|
||||
return convertInputToMessages(input);
|
||||
}
|
||||
|
||||
/**
|
||||
* Load messages from a previous response/conversation
|
||||
* @param {string} conversationId - The conversation/response ID
|
||||
* @param {string} userId - The user ID
|
||||
* @returns {Promise<Array>} Messages from the conversation
|
||||
*/
|
||||
async function loadPreviousMessages(conversationId, userId) {
|
||||
try {
|
||||
const messages = await db.getMessages({ conversationId, user: userId });
|
||||
if (!messages || messages.length === 0) {
|
||||
return [];
|
||||
}
|
||||
|
||||
// Convert stored messages to internal format
|
||||
return messages.map((msg) => {
|
||||
const internalMsg = {
|
||||
role: msg.isCreatedByUser ? 'user' : 'assistant',
|
||||
content: '',
|
||||
messageId: msg.messageId,
|
||||
};
|
||||
|
||||
// Handle content - could be string or array
|
||||
if (typeof msg.text === 'string') {
|
||||
internalMsg.content = msg.text;
|
||||
} else if (Array.isArray(msg.content)) {
|
||||
// Handle content parts
|
||||
internalMsg.content = msg.content;
|
||||
} else if (msg.text) {
|
||||
internalMsg.content = String(msg.text);
|
||||
}
|
||||
|
||||
return internalMsg;
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('[Responses API] Error loading previous messages:', error);
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Save input messages to database
|
||||
* @param {import('express').Request} req
|
||||
* @param {string} conversationId
|
||||
* @param {Array} inputMessages - Internal format messages
|
||||
* @param {string} agentId
|
||||
* @returns {Promise<void>}
|
||||
*/
|
||||
async function saveInputMessages(req, conversationId, inputMessages, agentId) {
|
||||
for (const msg of inputMessages) {
|
||||
if (msg.role === 'user') {
|
||||
await db.saveMessage(
|
||||
req,
|
||||
{
|
||||
messageId: msg.messageId || nanoid(),
|
||||
conversationId,
|
||||
parentMessageId: null,
|
||||
isCreatedByUser: true,
|
||||
text: typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content),
|
||||
sender: 'User',
|
||||
endpoint: EModelEndpoint.agents,
|
||||
model: agentId,
|
||||
},
|
||||
{ context: 'Responses API - save user input' },
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Save response output to database
|
||||
* @param {import('express').Request} req
|
||||
* @param {string} conversationId
|
||||
* @param {string} responseId
|
||||
* @param {import('@librechat/api').Response} response
|
||||
* @param {string} agentId
|
||||
* @returns {Promise<void>}
|
||||
*/
|
||||
async function saveResponseOutput(req, conversationId, responseId, response, agentId) {
|
||||
// Extract text content from output items
|
||||
let responseText = '';
|
||||
for (const item of response.output) {
|
||||
if (item.type === 'message' && item.content) {
|
||||
for (const part of item.content) {
|
||||
if (part.type === 'output_text' && part.text) {
|
||||
responseText += part.text;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Save the assistant message
|
||||
await db.saveMessage(
|
||||
req,
|
||||
{
|
||||
messageId: responseId,
|
||||
conversationId,
|
||||
parentMessageId: null,
|
||||
isCreatedByUser: false,
|
||||
text: responseText,
|
||||
sender: 'Agent',
|
||||
endpoint: EModelEndpoint.agents,
|
||||
model: agentId,
|
||||
finish_reason: response.status === 'completed' ? 'stop' : response.status,
|
||||
tokenCount: response.usage?.output_tokens,
|
||||
},
|
||||
{ context: 'Responses API - save assistant response' },
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Save or update conversation
|
||||
* @param {import('express').Request} req
|
||||
* @param {string} conversationId
|
||||
* @param {string} agentId
|
||||
* @param {object} agent
|
||||
* @returns {Promise<void>}
|
||||
*/
|
||||
async function saveConversation(req, conversationId, agentId, agent) {
|
||||
await saveConvo(
|
||||
req,
|
||||
{
|
||||
conversationId,
|
||||
endpoint: EModelEndpoint.agents,
|
||||
agentId,
|
||||
title: agent?.name || 'Open Responses Conversation',
|
||||
model: agent?.model,
|
||||
},
|
||||
{ context: 'Responses API - save conversation' },
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert stored messages to Open Responses output format
|
||||
* @param {Array} messages - Stored messages
|
||||
* @returns {Array} Output items
|
||||
*/
|
||||
function convertMessagesToOutputItems(messages) {
|
||||
const output = [];
|
||||
|
||||
for (const msg of messages) {
|
||||
if (!msg.isCreatedByUser) {
|
||||
output.push({
|
||||
type: 'message',
|
||||
id: msg.messageId,
|
||||
role: 'assistant',
|
||||
status: 'completed',
|
||||
content: [
|
||||
{
|
||||
type: 'output_text',
|
||||
text: msg.text || '',
|
||||
annotations: [],
|
||||
},
|
||||
],
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
return output;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create Response - POST /v1/responses
|
||||
*
|
||||
* Creates a model response following the Open Responses API specification.
|
||||
* Supports both streaming and non-streaming responses.
|
||||
*
|
||||
* @param {import('express').Request} req
|
||||
* @param {import('express').Response} res
|
||||
*/
|
||||
const createResponse = async (req, res) => {
|
||||
// Validate request
|
||||
const validation = validateResponseRequest(req.body);
|
||||
if (isValidationFailure(validation)) {
|
||||
return sendResponsesErrorResponse(res, 400, validation.error);
|
||||
}
|
||||
|
||||
const request = validation.request;
|
||||
const agentId = request.model;
|
||||
const isStreaming = request.stream === true;
|
||||
|
||||
// Look up the agent
|
||||
const agent = await getAgent({ id: agentId });
|
||||
if (!agent) {
|
||||
return sendResponsesErrorResponse(
|
||||
res,
|
||||
404,
|
||||
`Agent not found: ${agentId}`,
|
||||
'not_found',
|
||||
'model_not_found',
|
||||
);
|
||||
}
|
||||
|
||||
// Generate IDs
|
||||
const responseId = generateResponseId();
|
||||
const conversationId = request.previous_response_id ?? uuidv4();
|
||||
const parentMessageId = null;
|
||||
|
||||
// Create response context
|
||||
const context = createResponseContext(request, responseId);
|
||||
|
||||
// Set up abort controller
|
||||
const abortController = new AbortController();
|
||||
|
||||
// Handle client disconnect
|
||||
req.on('close', () => {
|
||||
if (!abortController.signal.aborted) {
|
||||
abortController.abort();
|
||||
logger.debug('[Responses API] Client disconnected, aborting');
|
||||
}
|
||||
});
|
||||
|
||||
try {
|
||||
// Build allowed providers set
|
||||
const allowedProviders = new Set(
|
||||
appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders,
|
||||
);
|
||||
|
||||
// Create tool loader
|
||||
const loadTools = createToolLoader(abortController.signal);
|
||||
|
||||
// Initialize the agent first to check for disableStreaming
|
||||
const endpointOption = {
|
||||
endpoint: agent.provider,
|
||||
model_parameters: agent.model_parameters ?? {},
|
||||
};
|
||||
|
||||
const primaryConfig = await initializeAgent(
|
||||
{
|
||||
req,
|
||||
res,
|
||||
loadTools,
|
||||
requestFiles: [],
|
||||
conversationId,
|
||||
parentMessageId,
|
||||
agent,
|
||||
endpointOption,
|
||||
allowedProviders,
|
||||
isInitialAgent: true,
|
||||
},
|
||||
{
|
||||
getConvoFiles,
|
||||
getFiles: db.getFiles,
|
||||
getUserKey: db.getUserKey,
|
||||
getMessages: db.getMessages,
|
||||
updateFilesUsage: db.updateFilesUsage,
|
||||
getUserKeyValues: db.getUserKeyValues,
|
||||
getUserCodeFiles: db.getUserCodeFiles,
|
||||
getToolFilesByIds: db.getToolFilesByIds,
|
||||
getCodeGeneratedFiles: db.getCodeGeneratedFiles,
|
||||
},
|
||||
);
|
||||
|
||||
// Determine if streaming is enabled (check both request and agent config)
|
||||
const streamingDisabled = !!primaryConfig.model_parameters?.disableStreaming;
|
||||
const actuallyStreaming = isStreaming && !streamingDisabled;
|
||||
|
||||
// Load previous messages if previous_response_id is provided
|
||||
let previousMessages = [];
|
||||
if (request.previous_response_id) {
|
||||
const userId = req.user?.id ?? 'api-user';
|
||||
previousMessages = await loadPreviousMessages(request.previous_response_id, userId);
|
||||
}
|
||||
|
||||
// Convert input to internal messages
|
||||
const inputMessages = convertToInternalMessages(
|
||||
typeof request.input === 'string' ? request.input : request.input,
|
||||
);
|
||||
|
||||
// Merge previous messages with new input
|
||||
const allMessages = [...previousMessages, ...inputMessages];
|
||||
|
||||
// Format for agent
|
||||
const toolSet = new Set((primaryConfig.tools ?? []).map((tool) => tool && tool.name));
|
||||
const { messages: formattedMessages, indexTokenCountMap } = formatAgentMessages(
|
||||
allMessages,
|
||||
{},
|
||||
toolSet,
|
||||
);
|
||||
|
||||
// Create tracker for streaming or aggregator for non-streaming
|
||||
const tracker = actuallyStreaming ? createResponseTracker() : null;
|
||||
const aggregator = actuallyStreaming ? null : createResponseAggregator();
|
||||
|
||||
// Set up response for streaming
|
||||
if (actuallyStreaming) {
|
||||
setupStreamingResponse(res);
|
||||
|
||||
// Create handler config
|
||||
const handlerConfig = {
|
||||
res,
|
||||
context,
|
||||
tracker,
|
||||
};
|
||||
|
||||
// Emit response.created then response.in_progress per Open Responses spec
|
||||
emitResponseCreated(handlerConfig);
|
||||
emitResponseInProgress(handlerConfig);
|
||||
|
||||
// Create event handlers
|
||||
const { handlers: responsesHandlers, finalizeStream } =
|
||||
createResponsesEventHandlers(handlerConfig);
|
||||
|
||||
// Built-in handler for processing raw model stream chunks
|
||||
const chatModelStreamHandler = new ChatModelStreamHandler();
|
||||
|
||||
// Artifact promises for processing tool outputs
|
||||
/** @type {Promise<import('librechat-data-provider').TAttachment | null>[]} */
|
||||
const artifactPromises = [];
|
||||
// Use Responses API-specific callback that emits librechat:attachment events
|
||||
const toolEndCallback = createResponsesToolEndCallback({
|
||||
req,
|
||||
res,
|
||||
tracker,
|
||||
artifactPromises,
|
||||
});
|
||||
|
||||
// Combine handlers
|
||||
const handlers = {
|
||||
on_chat_model_stream: {
|
||||
handle: async (event, data, metadata, graph) => {
|
||||
await chatModelStreamHandler.handle(event, data, metadata, graph);
|
||||
},
|
||||
},
|
||||
on_message_delta: responsesHandlers.on_message_delta,
|
||||
on_reasoning_delta: responsesHandlers.on_reasoning_delta,
|
||||
on_run_step: responsesHandlers.on_run_step,
|
||||
on_run_step_delta: responsesHandlers.on_run_step_delta,
|
||||
on_chat_model_end: responsesHandlers.on_chat_model_end,
|
||||
on_tool_end: new ToolEndHandler(toolEndCallback, logger),
|
||||
on_run_step_completed: { handle: () => {} },
|
||||
on_chain_stream: { handle: () => {} },
|
||||
on_chain_end: { handle: () => {} },
|
||||
on_agent_update: { handle: () => {} },
|
||||
on_custom_event: { handle: () => {} },
|
||||
};
|
||||
|
||||
// Create and run the agent
|
||||
const userId = req.user?.id ?? 'api-user';
|
||||
const userMCPAuthMap = primaryConfig.userMCPAuthMap;
|
||||
|
||||
const run = await createRun({
|
||||
agents: [primaryConfig],
|
||||
messages: formattedMessages,
|
||||
indexTokenCountMap,
|
||||
runId: responseId,
|
||||
signal: abortController.signal,
|
||||
customHandlers: handlers,
|
||||
requestBody: {
|
||||
messageId: responseId,
|
||||
conversationId,
|
||||
},
|
||||
user: { id: userId },
|
||||
});
|
||||
|
||||
if (!run) {
|
||||
throw new Error('Failed to create agent run');
|
||||
}
|
||||
|
||||
// Process the stream
|
||||
const config = {
|
||||
runName: 'AgentRun',
|
||||
configurable: {
|
||||
thread_id: conversationId,
|
||||
user_id: userId,
|
||||
user: createSafeUser(req.user),
|
||||
...(userMCPAuthMap != null && { userMCPAuthMap }),
|
||||
},
|
||||
signal: abortController.signal,
|
||||
streamMode: 'values',
|
||||
version: 'v2',
|
||||
};
|
||||
|
||||
await run.processStream({ messages: formattedMessages }, config, {
|
||||
callbacks: {
|
||||
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
|
||||
logger.error(`[Responses API] Tool Error "${toolId}"`, error);
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
// Finalize the stream
|
||||
finalizeStream();
|
||||
res.end();
|
||||
|
||||
// Save to database if store: true
|
||||
if (request.store === true) {
|
||||
try {
|
||||
// Save conversation
|
||||
await saveConversation(req, conversationId, agentId, agent);
|
||||
|
||||
// Save input messages
|
||||
await saveInputMessages(req, conversationId, inputMessages, agentId);
|
||||
|
||||
// Build response for saving (use tracker with buildResponse for streaming)
|
||||
const finalResponse = buildResponse(context, tracker, 'completed');
|
||||
await saveResponseOutput(req, conversationId, responseId, finalResponse, agentId);
|
||||
|
||||
logger.debug(
|
||||
`[Responses API] Stored response ${responseId} in conversation ${conversationId}`,
|
||||
);
|
||||
} catch (saveError) {
|
||||
logger.error('[Responses API] Error saving response:', saveError);
|
||||
// Don't fail the request if saving fails
|
||||
}
|
||||
}
|
||||
|
||||
// Wait for artifact processing after response ends (non-blocking)
|
||||
if (artifactPromises.length > 0) {
|
||||
Promise.all(artifactPromises).catch((artifactError) => {
|
||||
logger.warn('[Responses API] Error processing artifacts:', artifactError);
|
||||
});
|
||||
}
|
||||
} else {
|
||||
// Non-streaming response
|
||||
const aggregatorHandlers = createAggregatorEventHandlers(aggregator);
|
||||
|
||||
// Built-in handler for processing raw model stream chunks
|
||||
const chatModelStreamHandler = new ChatModelStreamHandler();
|
||||
|
||||
// Artifact promises for processing tool outputs
|
||||
/** @type {Promise<import('librechat-data-provider').TAttachment | null>[]} */
|
||||
const artifactPromises = [];
|
||||
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises, streamId: null });
|
||||
|
||||
// Combine handlers
|
||||
const handlers = {
|
||||
on_chat_model_stream: {
|
||||
handle: async (event, data, metadata, graph) => {
|
||||
await chatModelStreamHandler.handle(event, data, metadata, graph);
|
||||
},
|
||||
},
|
||||
on_message_delta: aggregatorHandlers.on_message_delta,
|
||||
on_reasoning_delta: aggregatorHandlers.on_reasoning_delta,
|
||||
on_run_step: aggregatorHandlers.on_run_step,
|
||||
on_run_step_delta: aggregatorHandlers.on_run_step_delta,
|
||||
on_chat_model_end: aggregatorHandlers.on_chat_model_end,
|
||||
on_tool_end: new ToolEndHandler(toolEndCallback, logger),
|
||||
on_run_step_completed: { handle: () => {} },
|
||||
on_chain_stream: { handle: () => {} },
|
||||
on_chain_end: { handle: () => {} },
|
||||
on_agent_update: { handle: () => {} },
|
||||
on_custom_event: { handle: () => {} },
|
||||
};
|
||||
|
||||
// Create and run the agent
|
||||
const userId = req.user?.id ?? 'api-user';
|
||||
const userMCPAuthMap = primaryConfig.userMCPAuthMap;
|
||||
|
||||
const run = await createRun({
|
||||
agents: [primaryConfig],
|
||||
messages: formattedMessages,
|
||||
indexTokenCountMap,
|
||||
runId: responseId,
|
||||
signal: abortController.signal,
|
||||
customHandlers: handlers,
|
||||
requestBody: {
|
||||
messageId: responseId,
|
||||
conversationId,
|
||||
},
|
||||
user: { id: userId },
|
||||
});
|
||||
|
||||
if (!run) {
|
||||
throw new Error('Failed to create agent run');
|
||||
}
|
||||
|
||||
// Process the stream
|
||||
const config = {
|
||||
runName: 'AgentRun',
|
||||
configurable: {
|
||||
thread_id: conversationId,
|
||||
user_id: userId,
|
||||
user: createSafeUser(req.user),
|
||||
...(userMCPAuthMap != null && { userMCPAuthMap }),
|
||||
},
|
||||
signal: abortController.signal,
|
||||
streamMode: 'values',
|
||||
version: 'v2',
|
||||
};
|
||||
|
||||
await run.processStream({ messages: formattedMessages }, config, {
|
||||
callbacks: {
|
||||
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
|
||||
logger.error(`[Responses API] Tool Error "${toolId}"`, error);
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
// Wait for artifacts before sending response
|
||||
if (artifactPromises.length > 0) {
|
||||
try {
|
||||
await Promise.all(artifactPromises);
|
||||
} catch (artifactError) {
|
||||
logger.warn('[Responses API] Error processing artifacts:', artifactError);
|
||||
}
|
||||
}
|
||||
|
||||
// Build and send the response
|
||||
const response = buildAggregatedResponse(context, aggregator);
|
||||
|
||||
// Save to database if store: true
|
||||
if (request.store === true) {
|
||||
try {
|
||||
// Save conversation
|
||||
await saveConversation(req, conversationId, agentId, agent);
|
||||
|
||||
// Save input messages
|
||||
await saveInputMessages(req, conversationId, inputMessages, agentId);
|
||||
|
||||
// Save response output
|
||||
await saveResponseOutput(req, conversationId, responseId, response, agentId);
|
||||
|
||||
logger.debug(
|
||||
`[Responses API] Stored response ${responseId} in conversation ${conversationId}`,
|
||||
);
|
||||
} catch (saveError) {
|
||||
logger.error('[Responses API] Error saving response:', saveError);
|
||||
// Don't fail the request if saving fails
|
||||
}
|
||||
}
|
||||
|
||||
res.json(response);
|
||||
}
|
||||
} catch (error) {
|
||||
const errorMessage = error instanceof Error ? error.message : 'An error occurred';
|
||||
logger.error('[Responses API] Error:', error);
|
||||
|
||||
// Check if we already started streaming (headers sent)
|
||||
if (res.headersSent) {
|
||||
// Headers already sent, write error event and close
|
||||
writeDone(res);
|
||||
res.end();
|
||||
} else {
|
||||
sendResponsesErrorResponse(res, 500, errorMessage, 'server_error');
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* List available agents as models - GET /v1/models (also works with /v1/responses/models)
|
||||
*
|
||||
* Returns a list of available agents the user has remote access to.
|
||||
*
|
||||
* @param {import('express').Request} req
|
||||
* @param {import('express').Response} res
|
||||
*/
|
||||
const listModels = async (req, res) => {
|
||||
try {
|
||||
const userId = req.user?.id;
|
||||
const userRole = req.user?.role;
|
||||
|
||||
if (!userId) {
|
||||
return sendResponsesErrorResponse(res, 401, 'Authentication required', 'auth_error');
|
||||
}
|
||||
|
||||
// Find agents the user has remote access to (VIEW permission on REMOTE_AGENT)
|
||||
const accessibleAgentIds = await findAccessibleResources({
|
||||
userId,
|
||||
role: userRole,
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
requiredPermissions: PermissionBits.VIEW,
|
||||
});
|
||||
|
||||
// Get the accessible agents
|
||||
let agents = [];
|
||||
if (accessibleAgentIds.length > 0) {
|
||||
agents = await getAgents({ _id: { $in: accessibleAgentIds } });
|
||||
}
|
||||
|
||||
// Convert to models format
|
||||
const models = agents.map((agent) => ({
|
||||
id: agent.id,
|
||||
object: 'model',
|
||||
created: Math.floor(new Date(agent.createdAt).getTime() / 1000),
|
||||
owned_by: agent.author ?? 'librechat',
|
||||
// Additional metadata
|
||||
name: agent.name,
|
||||
description: agent.description,
|
||||
provider: agent.provider,
|
||||
}));
|
||||
|
||||
res.json({
|
||||
object: 'list',
|
||||
data: models,
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('[Responses API] Error listing models:', error);
|
||||
sendResponsesErrorResponse(
|
||||
res,
|
||||
500,
|
||||
error instanceof Error ? error.message : 'Failed to list models',
|
||||
'server_error',
|
||||
);
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Get Response - GET /v1/responses/:id
|
||||
*
|
||||
* Retrieves a stored response by its ID.
|
||||
* The response ID maps to a conversationId in LibreChat's storage.
|
||||
*
|
||||
* @param {import('express').Request} req
|
||||
* @param {import('express').Response} res
|
||||
*/
|
||||
const getResponse = async (req, res) => {
|
||||
try {
|
||||
const responseId = req.params.id;
|
||||
const userId = req.user?.id;
|
||||
|
||||
if (!responseId) {
|
||||
return sendResponsesErrorResponse(res, 400, 'Response ID is required');
|
||||
}
|
||||
|
||||
// The responseId could be either the response ID or the conversation ID
|
||||
// Try to find a conversation with this ID
|
||||
const conversation = await getConvo(userId, responseId);
|
||||
|
||||
if (!conversation) {
|
||||
return sendResponsesErrorResponse(
|
||||
res,
|
||||
404,
|
||||
`Response not found: ${responseId}`,
|
||||
'not_found',
|
||||
'response_not_found',
|
||||
);
|
||||
}
|
||||
|
||||
// Load messages for this conversation
|
||||
const messages = await db.getMessages({ conversationId: responseId, user: userId });
|
||||
|
||||
if (!messages || messages.length === 0) {
|
||||
return sendResponsesErrorResponse(
|
||||
res,
|
||||
404,
|
||||
`No messages found for response: ${responseId}`,
|
||||
'not_found',
|
||||
'response_not_found',
|
||||
);
|
||||
}
|
||||
|
||||
// Convert messages to Open Responses output format
|
||||
const output = convertMessagesToOutputItems(messages);
|
||||
|
||||
// Find the last assistant message for usage info
|
||||
const lastAssistantMessage = messages.filter((m) => !m.isCreatedByUser).pop();
|
||||
|
||||
// Build the response object
|
||||
const response = {
|
||||
id: responseId,
|
||||
object: 'response',
|
||||
created_at: Math.floor(new Date(conversation.createdAt || Date.now()).getTime() / 1000),
|
||||
completed_at: Math.floor(new Date(conversation.updatedAt || Date.now()).getTime() / 1000),
|
||||
status: 'completed',
|
||||
incomplete_details: null,
|
||||
model: conversation.agentId || conversation.model || 'unknown',
|
||||
previous_response_id: null,
|
||||
instructions: null,
|
||||
output,
|
||||
error: null,
|
||||
tools: [],
|
||||
tool_choice: 'auto',
|
||||
truncation: 'disabled',
|
||||
parallel_tool_calls: true,
|
||||
text: { format: { type: 'text' } },
|
||||
temperature: 1,
|
||||
top_p: 1,
|
||||
presence_penalty: 0,
|
||||
frequency_penalty: 0,
|
||||
top_logprobs: null,
|
||||
reasoning: null,
|
||||
user: userId,
|
||||
usage: lastAssistantMessage?.tokenCount
|
||||
? {
|
||||
input_tokens: 0,
|
||||
output_tokens: lastAssistantMessage.tokenCount,
|
||||
total_tokens: lastAssistantMessage.tokenCount,
|
||||
}
|
||||
: null,
|
||||
max_output_tokens: null,
|
||||
max_tool_calls: null,
|
||||
store: true,
|
||||
background: false,
|
||||
service_tier: 'default',
|
||||
metadata: {},
|
||||
safety_identifier: null,
|
||||
prompt_cache_key: null,
|
||||
};
|
||||
|
||||
res.json(response);
|
||||
} catch (error) {
|
||||
logger.error('[Responses API] Error getting response:', error);
|
||||
sendResponsesErrorResponse(
|
||||
res,
|
||||
500,
|
||||
error instanceof Error ? error.message : 'Failed to get response',
|
||||
'server_error',
|
||||
);
|
||||
}
|
||||
};
|
||||
|
||||
module.exports = {
|
||||
createResponse,
|
||||
getResponse,
|
||||
listModels,
|
||||
setAppConfig,
|
||||
};
|
||||
|
|
@ -11,7 +11,9 @@ const {
|
|||
convertOcrToContextInPlace,
|
||||
} = require('@librechat/api');
|
||||
const {
|
||||
Time,
|
||||
Tools,
|
||||
CacheKeys,
|
||||
Constants,
|
||||
FileSources,
|
||||
ResourceType,
|
||||
|
|
@ -21,8 +23,6 @@ const {
|
|||
PermissionBits,
|
||||
actionDelimiter,
|
||||
removeNullishValues,
|
||||
CacheKeys,
|
||||
Time,
|
||||
} = require('librechat-data-provider');
|
||||
const {
|
||||
getListAgentsByAccess,
|
||||
|
|
@ -94,16 +94,25 @@ const createAgentHandler = async (req, res) => {
|
|||
|
||||
const agent = await createAgent(agentData);
|
||||
|
||||
// Automatically grant owner permissions to the creator
|
||||
try {
|
||||
await grantPermission({
|
||||
principalType: PrincipalType.USER,
|
||||
principalId: userId,
|
||||
resourceType: ResourceType.AGENT,
|
||||
resourceId: agent._id,
|
||||
accessRoleId: AccessRoleIds.AGENT_OWNER,
|
||||
grantedBy: userId,
|
||||
});
|
||||
await Promise.all([
|
||||
grantPermission({
|
||||
principalType: PrincipalType.USER,
|
||||
principalId: userId,
|
||||
resourceType: ResourceType.AGENT,
|
||||
resourceId: agent._id,
|
||||
accessRoleId: AccessRoleIds.AGENT_OWNER,
|
||||
grantedBy: userId,
|
||||
}),
|
||||
grantPermission({
|
||||
principalType: PrincipalType.USER,
|
||||
principalId: userId,
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
resourceId: agent._id,
|
||||
accessRoleId: AccessRoleIds.REMOTE_AGENT_OWNER,
|
||||
grantedBy: userId,
|
||||
}),
|
||||
]);
|
||||
logger.debug(
|
||||
`[createAgent] Granted owner permissions to user ${userId} for agent ${agent.id}`,
|
||||
);
|
||||
|
|
@ -396,16 +405,25 @@ const duplicateAgentHandler = async (req, res) => {
|
|||
newAgentData.actions = agentActions;
|
||||
const newAgent = await createAgent(newAgentData);
|
||||
|
||||
// Automatically grant owner permissions to the duplicator
|
||||
try {
|
||||
await grantPermission({
|
||||
principalType: PrincipalType.USER,
|
||||
principalId: userId,
|
||||
resourceType: ResourceType.AGENT,
|
||||
resourceId: newAgent._id,
|
||||
accessRoleId: AccessRoleIds.AGENT_OWNER,
|
||||
grantedBy: userId,
|
||||
});
|
||||
await Promise.all([
|
||||
grantPermission({
|
||||
principalType: PrincipalType.USER,
|
||||
principalId: userId,
|
||||
resourceType: ResourceType.AGENT,
|
||||
resourceId: newAgent._id,
|
||||
accessRoleId: AccessRoleIds.AGENT_OWNER,
|
||||
grantedBy: userId,
|
||||
}),
|
||||
grantPermission({
|
||||
principalType: PrincipalType.USER,
|
||||
principalId: userId,
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
resourceId: newAgent._id,
|
||||
accessRoleId: AccessRoleIds.REMOTE_AGENT_OWNER,
|
||||
grantedBy: userId,
|
||||
}),
|
||||
]);
|
||||
logger.debug(
|
||||
`[duplicateAgent] Granted owner permissions to user ${userId} for duplicated agent ${newAgent.id}`,
|
||||
);
|
||||
|
|
|
|||
|
|
@ -299,6 +299,7 @@ if (cluster.isMaster) {
|
|||
app.use('/api/auth', routes.auth);
|
||||
app.use('/api/actions', routes.actions);
|
||||
app.use('/api/keys', routes.keys);
|
||||
app.use('/api/api-keys', routes.apiKeys);
|
||||
app.use('/api/user', routes.user);
|
||||
app.use('/api/search', routes.search);
|
||||
app.use('/api/messages', routes.messages);
|
||||
|
|
|
|||
|
|
@ -137,6 +137,7 @@ const startServer = async () => {
|
|||
app.use('/api/admin', routes.adminAuth);
|
||||
app.use('/api/actions', routes.actions);
|
||||
app.use('/api/keys', routes.keys);
|
||||
app.use('/api/api-keys', routes.apiKeys);
|
||||
app.use('/api/user', routes.user);
|
||||
app.use('/api/search', routes.search);
|
||||
app.use('/api/messages', routes.messages);
|
||||
|
|
|
|||
|
|
@ -9,6 +9,7 @@ const resourceToPermissionType = {
|
|||
[ResourceType.AGENT]: PermissionTypes.AGENTS,
|
||||
[ResourceType.PROMPTGROUP]: PermissionTypes.PROMPTS,
|
||||
[ResourceType.MCPSERVER]: PermissionTypes.MCP_SERVERS,
|
||||
[ResourceType.REMOTE_AGENT]: PermissionTypes.REMOTE_AGENTS,
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
|
|||
|
|
@ -53,6 +53,12 @@ const checkResourcePermissionAccess = (requiredPermission) => (req, res, next) =
|
|||
requiredPermission,
|
||||
resourceIdParam: 'resourceId',
|
||||
});
|
||||
} else if (resourceType === ResourceType.REMOTE_AGENT) {
|
||||
middleware = canAccessResource({
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
requiredPermission,
|
||||
resourceIdParam: 'resourceId',
|
||||
});
|
||||
} else if (resourceType === ResourceType.PROMPTGROUP) {
|
||||
middleware = canAccessResource({
|
||||
resourceType: ResourceType.PROMPTGROUP,
|
||||
|
|
|
|||
|
|
@ -26,10 +26,12 @@ const mockGenerationJobManager = {
|
|||
const mockSaveMessage = jest.fn();
|
||||
|
||||
jest.mock('@librechat/data-schemas', () => ({
|
||||
...jest.requireActual('@librechat/data-schemas'),
|
||||
logger: mockLogger,
|
||||
}));
|
||||
|
||||
jest.mock('@librechat/api', () => ({
|
||||
...jest.requireActual('@librechat/api'),
|
||||
isEnabled: jest.fn().mockReturnValue(false),
|
||||
GenerationJobManager: mockGenerationJobManager,
|
||||
}));
|
||||
|
|
|
|||
1125
api/server/routes/agents/__tests__/responses.spec.js
Normal file
1125
api/server/routes/agents/__tests__/responses.spec.js
Normal file
File diff suppressed because it is too large
Load diff
|
|
@ -10,6 +10,8 @@ const {
|
|||
messageUserLimiter,
|
||||
} = require('~/server/middleware');
|
||||
const { saveMessage } = require('~/models');
|
||||
const openai = require('./openai');
|
||||
const responses = require('./responses');
|
||||
const { v1 } = require('./v1');
|
||||
const chat = require('./chat');
|
||||
|
||||
|
|
@ -17,6 +19,20 @@ const { LIMIT_MESSAGE_IP, LIMIT_MESSAGE_USER } = process.env ?? {};
|
|||
|
||||
const router = express.Router();
|
||||
|
||||
/**
|
||||
* Open Responses API routes (API key authentication handled in route file)
|
||||
* Mounted at /agents/v1/responses (full path: /api/agents/v1/responses)
|
||||
* NOTE: Must be mounted BEFORE /v1 to avoid being caught by the less specific route
|
||||
* @see https://openresponses.org/specification
|
||||
*/
|
||||
router.use('/v1/responses', responses);
|
||||
|
||||
/**
|
||||
* OpenAI-compatible API routes (API key authentication handled in route file)
|
||||
* Mounted at /agents/v1 (full path: /api/agents/v1/chat/completions)
|
||||
*/
|
||||
router.use('/v1', openai);
|
||||
|
||||
router.use(requireJwtAuth);
|
||||
router.use(checkBan);
|
||||
router.use(uaParser);
|
||||
|
|
|
|||
110
api/server/routes/agents/openai.js
Normal file
110
api/server/routes/agents/openai.js
Normal file
|
|
@ -0,0 +1,110 @@
|
|||
/**
|
||||
* OpenAI-compatible API routes for LibreChat agents.
|
||||
*
|
||||
* Provides a /v1/chat/completions compatible interface for
|
||||
* interacting with LibreChat agents remotely via API.
|
||||
*
|
||||
* Usage:
|
||||
* POST /v1/chat/completions - Chat with an agent
|
||||
* GET /v1/models - List available agents
|
||||
* GET /v1/models/:model - Get agent details
|
||||
*
|
||||
* Request format:
|
||||
* {
|
||||
* "model": "agent_id_here",
|
||||
* "messages": [{"role": "user", "content": "Hello!"}],
|
||||
* "stream": true
|
||||
* }
|
||||
*/
|
||||
const express = require('express');
|
||||
const { PermissionTypes, Permissions } = require('librechat-data-provider');
|
||||
const {
|
||||
generateCheckAccess,
|
||||
createRequireApiKeyAuth,
|
||||
createCheckRemoteAgentAccess,
|
||||
} = require('@librechat/api');
|
||||
const {
|
||||
OpenAIChatCompletionController,
|
||||
ListModelsController,
|
||||
GetModelController,
|
||||
} = require('~/server/controllers/agents/openai');
|
||||
const { getEffectivePermissions } = require('~/server/services/PermissionService');
|
||||
const { validateAgentApiKey, findUser } = require('~/models');
|
||||
const { configMiddleware } = require('~/server/middleware');
|
||||
const { getRoleByName } = require('~/models/Role');
|
||||
const { getAgent } = require('~/models/Agent');
|
||||
|
||||
const router = express.Router();
|
||||
|
||||
const requireApiKeyAuth = createRequireApiKeyAuth({
|
||||
validateAgentApiKey,
|
||||
findUser,
|
||||
});
|
||||
|
||||
const checkRemoteAgentsFeature = generateCheckAccess({
|
||||
permissionType: PermissionTypes.REMOTE_AGENTS,
|
||||
permissions: [Permissions.USE],
|
||||
getRoleByName,
|
||||
});
|
||||
|
||||
const checkAgentPermission = createCheckRemoteAgentAccess({
|
||||
getAgent,
|
||||
getEffectivePermissions,
|
||||
});
|
||||
|
||||
router.use(requireApiKeyAuth);
|
||||
router.use(configMiddleware);
|
||||
router.use(checkRemoteAgentsFeature);
|
||||
|
||||
/**
|
||||
* @route POST /v1/chat/completions
|
||||
* @desc OpenAI-compatible chat completions with agents
|
||||
* @access Private (API key auth required)
|
||||
*
|
||||
* Request body:
|
||||
* {
|
||||
* "model": "agent_id", // Required: The agent ID to use
|
||||
* "messages": [...], // Required: Array of chat messages
|
||||
* "stream": true, // Optional: Whether to stream (default: false)
|
||||
* "conversation_id": "...", // Optional: Conversation ID for context
|
||||
* "parent_message_id": "..." // Optional: Parent message for threading
|
||||
* }
|
||||
*
|
||||
* Response (streaming):
|
||||
* - SSE stream with OpenAI chat.completion.chunk format
|
||||
* - Includes delta.reasoning for thinking/reasoning content
|
||||
*
|
||||
* Response (non-streaming):
|
||||
* - Standard OpenAI chat.completion format
|
||||
*/
|
||||
router.post('/chat/completions', checkAgentPermission, OpenAIChatCompletionController);
|
||||
|
||||
/**
|
||||
* @route GET /v1/models
|
||||
* @desc List available agents as models
|
||||
* @access Private (API key auth required)
|
||||
*
|
||||
* Response:
|
||||
* {
|
||||
* "object": "list",
|
||||
* "data": [
|
||||
* {
|
||||
* "id": "agent_id",
|
||||
* "object": "model",
|
||||
* "name": "Agent Name",
|
||||
* "provider": "openai",
|
||||
* ...
|
||||
* }
|
||||
* ]
|
||||
* }
|
||||
*/
|
||||
router.get('/models', ListModelsController);
|
||||
|
||||
/**
|
||||
* @route GET /v1/models/:model
|
||||
* @desc Get details for a specific agent/model
|
||||
* @access Private (API key auth required)
|
||||
*/
|
||||
router.get('/models/:model', GetModelController);
|
||||
|
||||
module.exports = router;
|
||||
144
api/server/routes/agents/responses.js
Normal file
144
api/server/routes/agents/responses.js
Normal file
|
|
@ -0,0 +1,144 @@
|
|||
/**
|
||||
* Open Responses API routes for LibreChat agents.
|
||||
*
|
||||
* Implements the Open Responses specification for a forward-looking,
|
||||
* agentic API that uses items as the fundamental unit and semantic
|
||||
* streaming events.
|
||||
*
|
||||
* Usage:
|
||||
* POST /v1/responses - Create a response
|
||||
* GET /v1/models - List available agents
|
||||
*
|
||||
* Request format:
|
||||
* {
|
||||
* "model": "agent_id_here",
|
||||
* "input": "Hello!" or [{ type: "message", role: "user", content: "Hello!" }],
|
||||
* "stream": true,
|
||||
* "previous_response_id": "optional_conversation_id"
|
||||
* }
|
||||
*
|
||||
* @see https://openresponses.org/specification
|
||||
*/
|
||||
const express = require('express');
|
||||
const { PermissionTypes, Permissions } = require('librechat-data-provider');
|
||||
const {
|
||||
generateCheckAccess,
|
||||
createRequireApiKeyAuth,
|
||||
createCheckRemoteAgentAccess,
|
||||
} = require('@librechat/api');
|
||||
const {
|
||||
createResponse,
|
||||
getResponse,
|
||||
listModels,
|
||||
} = require('~/server/controllers/agents/responses');
|
||||
const { getEffectivePermissions } = require('~/server/services/PermissionService');
|
||||
const { validateAgentApiKey, findUser } = require('~/models');
|
||||
const { configMiddleware } = require('~/server/middleware');
|
||||
const { getRoleByName } = require('~/models/Role');
|
||||
const { getAgent } = require('~/models/Agent');
|
||||
|
||||
const router = express.Router();
|
||||
|
||||
const requireApiKeyAuth = createRequireApiKeyAuth({
|
||||
validateAgentApiKey,
|
||||
findUser,
|
||||
});
|
||||
|
||||
const checkRemoteAgentsFeature = generateCheckAccess({
|
||||
permissionType: PermissionTypes.REMOTE_AGENTS,
|
||||
permissions: [Permissions.USE],
|
||||
getRoleByName,
|
||||
});
|
||||
|
||||
const checkAgentPermission = createCheckRemoteAgentAccess({
|
||||
getAgent,
|
||||
getEffectivePermissions,
|
||||
});
|
||||
|
||||
router.use(requireApiKeyAuth);
|
||||
router.use(configMiddleware);
|
||||
router.use(checkRemoteAgentsFeature);
|
||||
|
||||
/**
|
||||
* @route POST /v1/responses
|
||||
* @desc Create a model response following Open Responses specification
|
||||
* @access Private (API key auth required)
|
||||
*
|
||||
* Request body:
|
||||
* {
|
||||
* "model": "agent_id", // Required: The agent ID to use
|
||||
* "input": "..." | [...], // Required: String or array of input items
|
||||
* "stream": true, // Optional: Whether to stream (default: false)
|
||||
* "previous_response_id": "...", // Optional: Previous response for continuation
|
||||
* "instructions": "...", // Optional: Additional instructions
|
||||
* "tools": [...], // Optional: Additional tools
|
||||
* "tool_choice": "auto", // Optional: Tool choice mode
|
||||
* "max_output_tokens": 4096, // Optional: Max tokens
|
||||
* "temperature": 0.7 // Optional: Temperature
|
||||
* }
|
||||
*
|
||||
* Response (streaming):
|
||||
* - SSE stream with semantic events:
|
||||
* - response.in_progress
|
||||
* - response.output_item.added
|
||||
* - response.content_part.added
|
||||
* - response.output_text.delta
|
||||
* - response.output_text.done
|
||||
* - response.function_call_arguments.delta
|
||||
* - response.output_item.done
|
||||
* - response.completed
|
||||
* - [DONE]
|
||||
*
|
||||
* Response (non-streaming):
|
||||
* {
|
||||
* "id": "resp_xxx",
|
||||
* "object": "response",
|
||||
* "created_at": 1234567890,
|
||||
* "status": "completed",
|
||||
* "model": "agent_id",
|
||||
* "output": [...], // Array of output items
|
||||
* "usage": { ... }
|
||||
* }
|
||||
*/
|
||||
router.post('/', checkAgentPermission, createResponse);
|
||||
|
||||
/**
|
||||
* @route GET /v1/responses/models
|
||||
* @desc List available agents as models
|
||||
* @access Private (API key auth required)
|
||||
*
|
||||
* Response:
|
||||
* {
|
||||
* "object": "list",
|
||||
* "data": [
|
||||
* {
|
||||
* "id": "agent_id",
|
||||
* "object": "model",
|
||||
* "name": "Agent Name",
|
||||
* "provider": "openai",
|
||||
* ...
|
||||
* }
|
||||
* ]
|
||||
* }
|
||||
*/
|
||||
router.get('/models', listModels);
|
||||
|
||||
/**
|
||||
* @route GET /v1/responses/:id
|
||||
* @desc Retrieve a stored response by ID
|
||||
* @access Private (API key auth required)
|
||||
*
|
||||
* Response:
|
||||
* {
|
||||
* "id": "resp_xxx",
|
||||
* "object": "response",
|
||||
* "created_at": 1234567890,
|
||||
* "status": "completed",
|
||||
* "model": "agent_id",
|
||||
* "output": [...],
|
||||
* "usage": { ... }
|
||||
* }
|
||||
*/
|
||||
router.get('/:id', getResponse);
|
||||
|
||||
module.exports = router;
|
||||
36
api/server/routes/apiKeys.js
Normal file
36
api/server/routes/apiKeys.js
Normal file
|
|
@ -0,0 +1,36 @@
|
|||
const express = require('express');
|
||||
const { generateCheckAccess, createApiKeyHandlers } = require('@librechat/api');
|
||||
const { PermissionTypes, Permissions } = require('librechat-data-provider');
|
||||
const {
|
||||
getAgentApiKeyById,
|
||||
createAgentApiKey,
|
||||
deleteAgentApiKey,
|
||||
listAgentApiKeys,
|
||||
} = require('~/models');
|
||||
const { requireJwtAuth } = require('~/server/middleware');
|
||||
const { getRoleByName } = require('~/models/Role');
|
||||
|
||||
const router = express.Router();
|
||||
|
||||
const handlers = createApiKeyHandlers({
|
||||
createAgentApiKey,
|
||||
listAgentApiKeys,
|
||||
deleteAgentApiKey,
|
||||
getAgentApiKeyById,
|
||||
});
|
||||
|
||||
const checkRemoteAgentsUse = generateCheckAccess({
|
||||
permissionType: PermissionTypes.REMOTE_AGENTS,
|
||||
permissions: [Permissions.USE],
|
||||
getRoleByName,
|
||||
});
|
||||
|
||||
router.post('/', requireJwtAuth, checkRemoteAgentsUse, handlers.createApiKey);
|
||||
|
||||
router.get('/', requireJwtAuth, checkRemoteAgentsUse, handlers.listApiKeys);
|
||||
|
||||
router.get('/:id', requireJwtAuth, checkRemoteAgentsUse, handlers.getApiKey);
|
||||
|
||||
router.delete('/:id', requireJwtAuth, checkRemoteAgentsUse, handlers.deleteApiKey);
|
||||
|
||||
module.exports = router;
|
||||
|
|
@ -10,6 +10,7 @@ const presets = require('./presets');
|
|||
const prompts = require('./prompts');
|
||||
const balance = require('./balance');
|
||||
const actions = require('./actions');
|
||||
const apiKeys = require('./apiKeys');
|
||||
const banner = require('./banner');
|
||||
const search = require('./search');
|
||||
const models = require('./models');
|
||||
|
|
@ -31,6 +32,7 @@ module.exports = {
|
|||
auth,
|
||||
adminAuth,
|
||||
keys,
|
||||
apiKeys,
|
||||
user,
|
||||
tags,
|
||||
roles,
|
||||
|
|
|
|||
|
|
@ -6,9 +6,10 @@ const {
|
|||
agentPermissionsSchema,
|
||||
promptPermissionsSchema,
|
||||
memoryPermissionsSchema,
|
||||
mcpServersPermissionsSchema,
|
||||
marketplacePermissionsSchema,
|
||||
peoplePickerPermissionsSchema,
|
||||
mcpServersPermissionsSchema,
|
||||
remoteAgentsPermissionsSchema,
|
||||
} = require('librechat-data-provider');
|
||||
const { checkAdmin, requireJwtAuth } = require('~/server/middleware');
|
||||
const { updateRoleByName, getRoleByName } = require('~/models/Role');
|
||||
|
|
@ -51,6 +52,11 @@ const permissionConfigs = {
|
|||
permissionType: PermissionTypes.MARKETPLACE,
|
||||
errorMessage: 'Invalid marketplace permissions.',
|
||||
},
|
||||
'remote-agents': {
|
||||
schema: remoteAgentsPermissionsSchema,
|
||||
permissionType: PermissionTypes.REMOTE_AGENTS,
|
||||
errorMessage: 'Invalid remote agents permissions.',
|
||||
},
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
@ -160,4 +166,10 @@ router.put('/:roleName/mcp-servers', checkAdmin, createPermissionUpdateHandler('
|
|||
*/
|
||||
router.put('/:roleName/marketplace', checkAdmin, createPermissionUpdateHandler('marketplace'));
|
||||
|
||||
/**
|
||||
* PUT /api/roles/:roleName/remote-agents
|
||||
* Update remote agents (API) permissions for a specific role
|
||||
*/
|
||||
router.put('/:roleName/remote-agents', checkAdmin, createPermissionUpdateHandler('remote-agents'));
|
||||
|
||||
module.exports = router;
|
||||
|
|
|
|||
|
|
@ -141,7 +141,6 @@ const checkPermission = async ({ userId, role, resourceType, resourceId, require
|
|||
|
||||
validateResourceType(resourceType);
|
||||
|
||||
// Get all principals for the user (user + groups + public)
|
||||
const principals = await getUserPrincipals({ userId, role });
|
||||
|
||||
if (principals.length === 0) {
|
||||
|
|
@ -151,7 +150,6 @@ const checkPermission = async ({ userId, role, resourceType, resourceId, require
|
|||
return await hasPermission(principals, resourceType, resourceId, requiredPermission);
|
||||
} catch (error) {
|
||||
logger.error(`[PermissionService.checkPermission] Error: ${error.message}`);
|
||||
// Re-throw validation errors
|
||||
if (error.message.includes('requiredPermission must be')) {
|
||||
throw error;
|
||||
}
|
||||
|
|
@ -172,12 +170,12 @@ const getEffectivePermissions = async ({ userId, role, resourceType, resourceId
|
|||
try {
|
||||
validateResourceType(resourceType);
|
||||
|
||||
// Get all principals for the user (user + groups + public)
|
||||
const principals = await getUserPrincipals({ userId, role });
|
||||
|
||||
if (principals.length === 0) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
return await getEffectivePermissionsACL(principals, resourceType, resourceId);
|
||||
} catch (error) {
|
||||
logger.error(`[PermissionService.getEffectivePermissions] Error: ${error.message}`);
|
||||
|
|
|
|||
362
client/src/components/Nav/SettingsTabs/Data/AgentApiKeys.tsx
Normal file
362
client/src/components/Nav/SettingsTabs/Data/AgentApiKeys.tsx
Normal file
|
|
@ -0,0 +1,362 @@
|
|||
import React, { useState } from 'react';
|
||||
import {
|
||||
useGetAgentApiKeysQuery,
|
||||
useCreateAgentApiKeyMutation,
|
||||
useDeleteAgentApiKeyMutation,
|
||||
} from 'librechat-data-provider/react-query';
|
||||
import { Permissions, PermissionTypes } from 'librechat-data-provider';
|
||||
import { Plus, Trash2, Copy, CopyCheck, Key, Eye, EyeOff, ShieldEllipsis } from 'lucide-react';
|
||||
import {
|
||||
Button,
|
||||
Input,
|
||||
Label,
|
||||
Spinner,
|
||||
OGDialog,
|
||||
OGDialogClose,
|
||||
OGDialogTitle,
|
||||
OGDialogHeader,
|
||||
OGDialogContent,
|
||||
OGDialogTrigger,
|
||||
useToastContext,
|
||||
} from '@librechat/client';
|
||||
import type { PermissionConfig } from '~/components/ui';
|
||||
import { useUpdateRemoteAgentsPermissionsMutation } from '~/data-provider';
|
||||
import { useLocalize, useCopyToClipboard } from '~/hooks';
|
||||
import { AdminSettingsDialog } from '~/components/ui';
|
||||
|
||||
function CreateKeyDialog({ onKeyCreated }: { onKeyCreated?: () => void }) {
|
||||
const localize = useLocalize();
|
||||
const { showToast } = useToastContext();
|
||||
const [open, setOpen] = useState(false);
|
||||
const [name, setName] = useState('');
|
||||
const [newKey, setNewKey] = useState<string | null>(null);
|
||||
const [showKey, setShowKey] = useState(false);
|
||||
const [isCopying, setIsCopying] = useState(false);
|
||||
const createMutation = useCreateAgentApiKeyMutation();
|
||||
const copyKey = useCopyToClipboard({ text: newKey || '' });
|
||||
|
||||
const handleCreate = async () => {
|
||||
if (!name.trim()) {
|
||||
showToast({ message: localize('com_ui_api_key_name_required'), status: 'error' });
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
const result = await createMutation.mutateAsync({ name: name.trim() });
|
||||
setNewKey(result.key);
|
||||
showToast({ message: localize('com_ui_api_key_created'), status: 'success' });
|
||||
onKeyCreated?.();
|
||||
} catch {
|
||||
showToast({ message: localize('com_ui_api_key_create_error'), status: 'error' });
|
||||
}
|
||||
};
|
||||
|
||||
const handleClose = () => {
|
||||
setName('');
|
||||
setNewKey(null);
|
||||
setShowKey(false);
|
||||
setOpen(false);
|
||||
};
|
||||
|
||||
const handleCopy = () => {
|
||||
if (isCopying) {
|
||||
return;
|
||||
}
|
||||
copyKey(setIsCopying);
|
||||
showToast({ message: localize('com_ui_api_key_copied'), status: 'success' });
|
||||
};
|
||||
|
||||
return (
|
||||
<OGDialog open={open} onOpenChange={setOpen}>
|
||||
<OGDialogTrigger asChild>
|
||||
<Button variant="outline" size="sm" className="gap-2">
|
||||
<Plus className="h-4 w-4" />
|
||||
{localize('com_ui_create_api_key')}
|
||||
</Button>
|
||||
</OGDialogTrigger>
|
||||
<OGDialogContent className="max-w-md">
|
||||
<OGDialogTitle>{localize('com_ui_create_api_key')}</OGDialogTitle>
|
||||
<div className="space-y-4 py-4">
|
||||
{!newKey ? (
|
||||
<>
|
||||
<div className="space-y-2">
|
||||
<Label htmlFor="key-name">{localize('com_ui_api_key_name')}</Label>
|
||||
<Input
|
||||
id="key-name"
|
||||
value={name}
|
||||
onChange={(e) => setName(e.target.value)}
|
||||
placeholder={localize('com_ui_api_key_name_placeholder')}
|
||||
/>
|
||||
</div>
|
||||
<div className="flex justify-end gap-2">
|
||||
<OGDialogClose asChild>
|
||||
<Button variant="outline" onClick={handleClose}>
|
||||
{localize('com_ui_cancel')}
|
||||
</Button>
|
||||
</OGDialogClose>
|
||||
<Button onClick={handleCreate} disabled={createMutation.isLoading}>
|
||||
{createMutation.isLoading ? (
|
||||
<Spinner className="h-4 w-4" />
|
||||
) : (
|
||||
localize('com_ui_create')
|
||||
)}
|
||||
</Button>
|
||||
</div>
|
||||
</>
|
||||
) : (
|
||||
<>
|
||||
<div className="rounded-lg border border-yellow-500/50 bg-yellow-50 p-4 dark:bg-yellow-900/20">
|
||||
<p className="text-sm text-yellow-800 dark:text-yellow-200">
|
||||
{localize('com_ui_api_key_warning')}
|
||||
</p>
|
||||
</div>
|
||||
<div className="space-y-2">
|
||||
<Label>{localize('com_ui_your_api_key')}</Label>
|
||||
<div className="flex gap-2">
|
||||
<Input
|
||||
value={showKey ? newKey : '•'.repeat(newKey.length)}
|
||||
readOnly
|
||||
className="font-mono text-sm"
|
||||
/>
|
||||
<Button
|
||||
variant="outline"
|
||||
size="icon"
|
||||
onClick={() => setShowKey(!showKey)}
|
||||
title={showKey ? localize('com_ui_hide') : localize('com_ui_show')}
|
||||
>
|
||||
{showKey ? <EyeOff className="h-4 w-4" /> : <Eye className="h-4 w-4" />}
|
||||
</Button>
|
||||
<Button
|
||||
variant="outline"
|
||||
size="icon"
|
||||
onClick={handleCopy}
|
||||
disabled={isCopying}
|
||||
title={localize('com_ui_copy')}
|
||||
>
|
||||
{isCopying ? <CopyCheck className="h-4 w-4" /> : <Copy className="h-4 w-4" />}
|
||||
</Button>
|
||||
</div>
|
||||
</div>
|
||||
<div className="flex justify-end">
|
||||
<Button onClick={handleClose}>{localize('com_ui_done')}</Button>
|
||||
</div>
|
||||
</>
|
||||
)}
|
||||
</div>
|
||||
</OGDialogContent>
|
||||
</OGDialog>
|
||||
);
|
||||
}
|
||||
|
||||
function KeyItem({
|
||||
id,
|
||||
name,
|
||||
keyPrefix,
|
||||
createdAt,
|
||||
lastUsedAt,
|
||||
}: {
|
||||
id: string;
|
||||
name: string;
|
||||
keyPrefix: string;
|
||||
createdAt: string;
|
||||
lastUsedAt?: string;
|
||||
}) {
|
||||
const localize = useLocalize();
|
||||
const { showToast } = useToastContext();
|
||||
const [confirmDelete, setConfirmDelete] = useState(false);
|
||||
const deleteMutation = useDeleteAgentApiKeyMutation();
|
||||
|
||||
const handleDelete = async () => {
|
||||
try {
|
||||
await deleteMutation.mutateAsync(id);
|
||||
showToast({ message: localize('com_ui_api_key_deleted'), status: 'success' });
|
||||
} catch {
|
||||
showToast({ message: localize('com_ui_api_key_delete_error'), status: 'error' });
|
||||
}
|
||||
setConfirmDelete(false);
|
||||
};
|
||||
|
||||
const formatDate = (dateStr: string) => {
|
||||
return new Date(dateStr).toLocaleDateString(undefined, {
|
||||
year: 'numeric',
|
||||
month: 'short',
|
||||
day: 'numeric',
|
||||
});
|
||||
};
|
||||
|
||||
return (
|
||||
<div className="flex items-center justify-between rounded-lg border border-border-light p-3">
|
||||
<div className="flex items-center gap-3">
|
||||
<Key className="h-5 w-5 text-text-secondary" />
|
||||
<div>
|
||||
<div className="font-medium">{name}</div>
|
||||
<div className="text-sm text-text-secondary">
|
||||
<span className="font-mono">{keyPrefix}...</span>
|
||||
<span className="mx-2">•</span>
|
||||
<span>
|
||||
{localize('com_ui_created')} {formatDate(createdAt)}
|
||||
</span>
|
||||
{lastUsedAt && (
|
||||
<>
|
||||
<span className="mx-2">•</span>
|
||||
<span>
|
||||
{localize('com_ui_last_used')} {formatDate(lastUsedAt)}
|
||||
</span>
|
||||
</>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<div>
|
||||
{confirmDelete ? (
|
||||
<div className="flex gap-2">
|
||||
<Button variant="outline" size="sm" onClick={() => setConfirmDelete(false)}>
|
||||
{localize('com_ui_cancel')}
|
||||
</Button>
|
||||
<Button
|
||||
variant="destructive"
|
||||
size="sm"
|
||||
onClick={handleDelete}
|
||||
disabled={deleteMutation.isLoading}
|
||||
>
|
||||
{deleteMutation.isLoading ? (
|
||||
<Spinner className="h-4 w-4" />
|
||||
) : (
|
||||
localize('com_ui_delete')
|
||||
)}
|
||||
</Button>
|
||||
</div>
|
||||
) : (
|
||||
<Button
|
||||
variant="ghost"
|
||||
size="icon"
|
||||
onClick={() => setConfirmDelete(true)}
|
||||
title={localize('com_ui_delete')}
|
||||
>
|
||||
<Trash2 className="h-4 w-4 text-text-secondary hover:text-red-500" />
|
||||
</Button>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
function ApiKeysContent({ isOpen }: { isOpen: boolean }) {
|
||||
const localize = useLocalize();
|
||||
const { data, isLoading, error } = useGetAgentApiKeysQuery({ enabled: isOpen });
|
||||
|
||||
if (error) {
|
||||
return <div className="text-sm text-red-500">{localize('com_ui_api_keys_load_error')}</div>;
|
||||
}
|
||||
|
||||
return (
|
||||
<div className="space-y-4">
|
||||
<div className="flex items-center justify-end gap-2">
|
||||
<RemoteAgentsAdminSettings />
|
||||
<CreateKeyDialog />
|
||||
</div>
|
||||
|
||||
<div className="max-h-[400px] space-y-2 overflow-y-auto">
|
||||
{isLoading && (
|
||||
<div className="flex items-center justify-center py-8">
|
||||
<Spinner className="h-6 w-6" />
|
||||
</div>
|
||||
)}
|
||||
{!isLoading &&
|
||||
data?.keys &&
|
||||
data.keys.length > 0 &&
|
||||
data.keys.map((key) => (
|
||||
<KeyItem
|
||||
key={key.id}
|
||||
id={key.id}
|
||||
name={key.name}
|
||||
keyPrefix={key.keyPrefix}
|
||||
createdAt={key.createdAt}
|
||||
lastUsedAt={key.lastUsedAt}
|
||||
/>
|
||||
))}
|
||||
{!isLoading && (!data?.keys || data.keys.length === 0) && (
|
||||
<div className="rounded-lg border-2 border-dashed border-border-light p-8 text-center">
|
||||
<Key className="mx-auto h-8 w-8 text-text-secondary" />
|
||||
<p className="mt-2 text-sm text-text-secondary">{localize('com_ui_no_api_keys')}</p>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
const remoteAgentsPermissions: PermissionConfig[] = [
|
||||
{ permission: Permissions.USE, labelKey: 'com_ui_remote_agents_allow_use' },
|
||||
{ permission: Permissions.CREATE, labelKey: 'com_ui_remote_agents_allow_create' },
|
||||
{ permission: Permissions.SHARE, labelKey: 'com_ui_remote_agents_allow_share' },
|
||||
{ permission: Permissions.SHARE_PUBLIC, labelKey: 'com_ui_remote_agents_allow_share_public' },
|
||||
];
|
||||
|
||||
function RemoteAgentsAdminSettings() {
|
||||
const localize = useLocalize();
|
||||
const { showToast } = useToastContext();
|
||||
|
||||
const mutation = useUpdateRemoteAgentsPermissionsMutation({
|
||||
onSuccess: () => {
|
||||
showToast({ status: 'success', message: localize('com_ui_saved') });
|
||||
},
|
||||
onError: () => {
|
||||
showToast({ status: 'error', message: localize('com_ui_error_save_admin_settings') });
|
||||
},
|
||||
});
|
||||
|
||||
const trigger = (
|
||||
<Button
|
||||
variant="ghost"
|
||||
size="icon"
|
||||
className="h-8 w-8"
|
||||
aria-label={localize('com_ui_admin_settings')}
|
||||
>
|
||||
<ShieldEllipsis className="h-5 w-5" aria-hidden="true" />
|
||||
</Button>
|
||||
);
|
||||
|
||||
return (
|
||||
<AdminSettingsDialog
|
||||
permissionType={PermissionTypes.REMOTE_AGENTS}
|
||||
sectionKey="com_ui_remote_agents"
|
||||
permissions={remoteAgentsPermissions}
|
||||
menuId="remote-agents-role-dropdown"
|
||||
mutation={mutation}
|
||||
trigger={trigger}
|
||||
/>
|
||||
);
|
||||
}
|
||||
|
||||
export function AgentApiKeys() {
|
||||
const localize = useLocalize();
|
||||
const [isOpen, setIsOpen] = useState(false);
|
||||
|
||||
return (
|
||||
<div className="flex items-center justify-between">
|
||||
<Label id="api-keys-label">{localize('com_ui_agent_api_keys')}</Label>
|
||||
|
||||
<OGDialog open={isOpen} onOpenChange={setIsOpen}>
|
||||
<OGDialogTrigger asChild>
|
||||
<Button aria-labelledby="api-keys-label" variant="outline">
|
||||
{localize('com_ui_manage')}
|
||||
</Button>
|
||||
</OGDialogTrigger>
|
||||
|
||||
<OGDialogContent
|
||||
title={localize('com_ui_agent_api_keys')}
|
||||
className="w-11/12 max-w-2xl bg-background text-text-primary shadow-2xl"
|
||||
>
|
||||
<OGDialogHeader>
|
||||
<OGDialogTitle>{localize('com_ui_agent_api_keys')}</OGDialogTitle>
|
||||
<p className="text-sm text-text-secondary">
|
||||
{localize('com_ui_agent_api_keys_description')}
|
||||
</p>
|
||||
</OGDialogHeader>
|
||||
<ApiKeysContent isOpen={isOpen} />
|
||||
</OGDialogContent>
|
||||
</OGDialog>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
|
@ -1,15 +1,22 @@
|
|||
import React, { useState, useRef } from 'react';
|
||||
import { useOnClickOutside } from '@librechat/client';
|
||||
import { Permissions, PermissionTypes } from 'librechat-data-provider';
|
||||
import ImportConversations from './ImportConversations';
|
||||
import { RevokeKeys } from './RevokeKeys';
|
||||
import { AgentApiKeys } from './AgentApiKeys';
|
||||
import { DeleteCache } from './DeleteCache';
|
||||
import { RevokeKeys } from './RevokeKeys';
|
||||
import { ClearChats } from './ClearChats';
|
||||
import SharedLinks from './SharedLinks';
|
||||
import { useHasAccess } from '~/hooks';
|
||||
|
||||
function Data() {
|
||||
const dataTabRef = useRef(null);
|
||||
const [confirmClearConvos, setConfirmClearConvos] = useState(false);
|
||||
useOnClickOutside(dataTabRef, () => confirmClearConvos && setConfirmClearConvos(false), []);
|
||||
const hasAccessToApiKeys = useHasAccess({
|
||||
permissionType: PermissionTypes.REMOTE_AGENTS,
|
||||
permission: Permissions.USE,
|
||||
});
|
||||
|
||||
return (
|
||||
<div className="flex flex-col gap-3 p-1 text-sm text-text-primary">
|
||||
|
|
@ -19,6 +26,11 @@ function Data() {
|
|||
<div className="pb-3">
|
||||
<SharedLinks />
|
||||
</div>
|
||||
{hasAccessToApiKeys && (
|
||||
<div className="pb-3">
|
||||
<AgentApiKeys />
|
||||
</div>
|
||||
)}
|
||||
<div className="pb-3">
|
||||
<RevokeKeys />
|
||||
</div>
|
||||
|
|
|
|||
|
|
@ -1,3 +1,4 @@
|
|||
import { Globe } from 'lucide-react';
|
||||
import { Spinner } from '@librechat/client';
|
||||
import { useWatch, useFormContext } from 'react-hook-form';
|
||||
import {
|
||||
|
|
@ -44,13 +45,20 @@ export default function AgentFooter({
|
|||
permissionType: PermissionTypes.AGENTS,
|
||||
permission: Permissions.SHARE,
|
||||
});
|
||||
const hasAccessToShareRemoteAgents = useHasAccess({
|
||||
permissionType: PermissionTypes.REMOTE_AGENTS,
|
||||
permission: Permissions.SHARE,
|
||||
});
|
||||
const { hasPermission, isLoading: permissionsLoading } = useResourcePermissions(
|
||||
ResourceType.AGENT,
|
||||
agent?._id || '',
|
||||
);
|
||||
const { hasPermission: hasRemoteAgentPermission, isLoading: remotePermissionsLoading } =
|
||||
useResourcePermissions(ResourceType.REMOTE_AGENT, agent?._id || '');
|
||||
|
||||
const canShareThisAgent = hasPermission(PermissionBits.SHARE);
|
||||
const canDeleteThisAgent = hasPermission(PermissionBits.DELETE);
|
||||
const canShareRemoteAgent = hasRemoteAgentPermission(PermissionBits.SHARE);
|
||||
const isSaving = createMutation.isLoading || updateMutation.isLoading || isAvatarUploading;
|
||||
const renderSaveButton = () => {
|
||||
if (isSaving) {
|
||||
|
|
@ -91,6 +99,25 @@ export default function AgentFooter({
|
|||
resourceType={ResourceType.AGENT}
|
||||
/>
|
||||
)}
|
||||
{(agent?.author === user?.id || user?.role === SystemRoles.ADMIN || canShareRemoteAgent) &&
|
||||
hasAccessToShareRemoteAgents &&
|
||||
!remotePermissionsLoading &&
|
||||
agent?._id && (
|
||||
<GenericGrantAccessDialog
|
||||
resourceDbId={agent?._id}
|
||||
resourceId={agent_id}
|
||||
resourceName={agent?.name ?? ''}
|
||||
resourceType={ResourceType.REMOTE_AGENT}
|
||||
>
|
||||
<button
|
||||
type="button"
|
||||
className="btn btn-neutral border-token-border-light h-9 px-3"
|
||||
title={localize('com_ui_remote_access')}
|
||||
>
|
||||
<Globe className="h-4 w-4" aria-hidden="true" />
|
||||
</button>
|
||||
</GenericGrantAccessDialog>
|
||||
)}
|
||||
{agent && agent.author === user?.id && <DuplicateAgent agent_id={agent_id} />}
|
||||
{/* Submit Button */}
|
||||
<button
|
||||
|
|
|
|||
|
|
@ -174,7 +174,7 @@ jest.mock('~/components/Sharing', () => ({
|
|||
resourceType: ResourceType;
|
||||
}) => (
|
||||
<div
|
||||
data-testid="grant-access-dialog"
|
||||
data-testid={`grant-access-dialog-${resourceType}`}
|
||||
data-resource-db-id={resourceDbId}
|
||||
data-resource-id={resourceId}
|
||||
data-resource-name={resourceName}
|
||||
|
|
@ -274,7 +274,7 @@ describe('AgentFooter', () => {
|
|||
expect(screen.getByTestId('version-button')).toBeInTheDocument();
|
||||
expect(screen.getByTestId('delete-button')).toBeInTheDocument();
|
||||
expect(screen.queryByTestId('admin-settings')).not.toBeInTheDocument();
|
||||
expect(screen.getByTestId('grant-access-dialog')).toBeInTheDocument();
|
||||
expect(screen.getByTestId('grant-access-dialog-agent')).toBeInTheDocument();
|
||||
expect(screen.getByTestId('duplicate-button')).toBeInTheDocument();
|
||||
expect(screen.queryByTestId('spinner')).not.toBeInTheDocument();
|
||||
});
|
||||
|
|
@ -338,7 +338,7 @@ describe('AgentFooter', () => {
|
|||
expect(screen.getByText('Create')).toBeInTheDocument();
|
||||
expect(screen.queryByTestId('version-button')).not.toBeInTheDocument();
|
||||
expect(screen.queryByTestId('delete-button')).not.toBeInTheDocument();
|
||||
expect(screen.queryByTestId('grant-access-dialog')).not.toBeInTheDocument();
|
||||
expect(screen.queryByTestId('grant-access-dialog-agent')).not.toBeInTheDocument();
|
||||
expect(screen.queryByTestId('duplicate-agent')).not.toBeInTheDocument();
|
||||
});
|
||||
|
||||
|
|
@ -346,7 +346,7 @@ describe('AgentFooter', () => {
|
|||
mockUseAuthContext.mockReturnValue(createAuthContext(mockUsers.admin));
|
||||
const { unmount } = render(<AgentFooter {...defaultProps} />);
|
||||
expect(screen.getByTestId('admin-settings')).toBeInTheDocument();
|
||||
expect(screen.getByTestId('grant-access-dialog')).toBeInTheDocument();
|
||||
expect(screen.getByTestId('grant-access-dialog-agent')).toBeInTheDocument();
|
||||
|
||||
// Clean up the first render
|
||||
unmount();
|
||||
|
|
@ -363,7 +363,7 @@ describe('AgentFooter', () => {
|
|||
return undefined;
|
||||
});
|
||||
render(<AgentFooter {...defaultProps} />);
|
||||
expect(screen.queryByTestId('grant-access-dialog')).toBeInTheDocument(); // Still shows because hasAccess is true
|
||||
expect(screen.queryByTestId('grant-access-dialog-agent')).toBeInTheDocument(); // Still shows because hasAccess is true
|
||||
expect(screen.queryByTestId('duplicate-agent')).not.toBeInTheDocument(); // Should not show for different author
|
||||
});
|
||||
|
||||
|
|
@ -392,7 +392,7 @@ describe('AgentFooter', () => {
|
|||
permissionBits: 0,
|
||||
});
|
||||
render(<AgentFooter {...defaultProps} />);
|
||||
expect(screen.queryByTestId('grant-access-dialog')).not.toBeInTheDocument();
|
||||
expect(screen.queryByTestId('grant-access-dialog-agent')).not.toBeInTheDocument();
|
||||
});
|
||||
|
||||
test('hides action buttons when permissions are loading', () => {
|
||||
|
|
@ -419,7 +419,7 @@ describe('AgentFooter', () => {
|
|||
});
|
||||
render(<AgentFooter {...defaultProps} />);
|
||||
expect(screen.queryByTestId('delete-button')).not.toBeInTheDocument();
|
||||
expect(screen.queryByTestId('grant-access-dialog')).not.toBeInTheDocument();
|
||||
expect(screen.queryByTestId('grant-access-dialog-agent')).not.toBeInTheDocument();
|
||||
// Duplicate button should still show as it doesn't depend on permissions loading
|
||||
expect(screen.getByTestId('duplicate-button')).toBeInTheDocument();
|
||||
});
|
||||
|
|
|
|||
|
|
@ -4,14 +4,15 @@ import {
|
|||
dataService,
|
||||
promptPermissionsSchema,
|
||||
memoryPermissionsSchema,
|
||||
mcpServersPermissionsSchema,
|
||||
marketplacePermissionsSchema,
|
||||
peoplePickerPermissionsSchema,
|
||||
mcpServersPermissionsSchema,
|
||||
remoteAgentsPermissionsSchema,
|
||||
} from 'librechat-data-provider';
|
||||
import type {
|
||||
UseQueryOptions,
|
||||
UseMutationResult,
|
||||
QueryObserverResult,
|
||||
UseMutationResult,
|
||||
UseQueryOptions,
|
||||
} from '@tanstack/react-query';
|
||||
import type * as t from 'librechat-data-provider';
|
||||
|
||||
|
|
@ -243,3 +244,39 @@ export const useUpdateMarketplacePermissionsMutation = (
|
|||
},
|
||||
);
|
||||
};
|
||||
|
||||
export const useUpdateRemoteAgentsPermissionsMutation = (
|
||||
options?: t.UpdateRemoteAgentsPermOptions,
|
||||
): UseMutationResult<
|
||||
t.UpdatePermResponse,
|
||||
t.TError | undefined,
|
||||
t.UpdateRemoteAgentsPermVars,
|
||||
unknown
|
||||
> => {
|
||||
const queryClient = useQueryClient();
|
||||
const { onMutate, onSuccess, onError } = options ?? {};
|
||||
return useMutation(
|
||||
(variables) => {
|
||||
remoteAgentsPermissionsSchema.partial().parse(variables.updates);
|
||||
return dataService.updateRemoteAgentsPermissions(variables);
|
||||
},
|
||||
{
|
||||
onSuccess: (data, variables, context) => {
|
||||
queryClient.invalidateQueries([QueryKeys.roles, variables.roleName]);
|
||||
if (onSuccess) {
|
||||
onSuccess(data, variables, context);
|
||||
}
|
||||
},
|
||||
onError: (...args) => {
|
||||
const error = args[0];
|
||||
if (error != null) {
|
||||
console.error('Failed to update remote agents permissions:', error);
|
||||
}
|
||||
if (onError) {
|
||||
onError(...args);
|
||||
}
|
||||
},
|
||||
onMutate,
|
||||
},
|
||||
);
|
||||
};
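A hypothetical usage sketch of the new mutation hook. The variables shape is inferred from the hook body (`variables.roleName`, `variables.updates`); the import path and the `USE` field inside `updates` are illustrative assumptions, not confirmed by this diff.

```typescript
import { useUpdateRemoteAgentsPermissionsMutation } from '~/data-provider'; // path assumed

/** Returns a callback that grants a role permission to use the remote agents API. */
function useGrantRemoteAgentUse(roleName: string) {
  const { mutate } = useUpdateRemoteAgentsPermissionsMutation({
    onSuccess: () => console.log(`Remote agent permissions updated for ${roleName}`),
  });
  // `USE` is an illustrative permission flag, not confirmed by this diff
  return () => mutate({ roleName, updates: { USE: true } });
}
```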
|
||||
|
|
|
|||
|
|
@@ -1,10 +1,11 @@
|
|||
import { ResourceType, PermissionTypes, Permissions } from 'librechat-data-provider';
|
||||
import { useHasAccess } from '~/hooks';
|
||||
|
||||
const resourceToPermissionMap: Record<ResourceType, PermissionTypes> = {
|
||||
const resourceToPermissionMap: Partial<Record<ResourceType, PermissionTypes>> = {
|
||||
[ResourceType.AGENT]: PermissionTypes.AGENTS,
|
||||
[ResourceType.PROMPTGROUP]: PermissionTypes.PROMPTS,
|
||||
[ResourceType.MCPSERVER]: PermissionTypes.MCP_SERVERS,
|
||||
[ResourceType.REMOTE_AGENT]: PermissionTypes.REMOTE_AGENTS,
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
|
|||
|
|
@@ -708,6 +708,21 @@
|
|||
"com_ui_analyzing": "Analyzing",
|
||||
"com_ui_analyzing_finished": "Finished analyzing",
|
||||
"com_ui_api_key": "API Key",
|
||||
"com_ui_api_key_copied": "API key copied to clipboard",
|
||||
"com_ui_api_key_create_error": "Failed to create API key",
|
||||
"com_ui_api_key_created": "API key created successfully",
|
||||
"com_ui_api_key_delete_error": "Failed to delete API key",
|
||||
"com_ui_api_key_deleted": "API key deleted successfully",
|
||||
"com_ui_api_key_name": "Key Name",
|
||||
"com_ui_api_key_name_placeholder": "My API Key",
|
||||
"com_ui_api_key_name_required": "API key name is required",
|
||||
"com_ui_api_key_warning": "Make sure to copy your API key now. You won't be able to see it again!",
|
||||
"com_ui_api_keys_load_error": "Failed to load API keys",
|
||||
"com_ui_agent_api_keys": "Agent API Keys",
|
||||
"com_ui_agent_api_keys_description": "Create API keys to access agents remotely via the API",
|
||||
"com_ui_create_api_key": "Create API Key",
|
||||
"com_ui_no_api_keys": "No API keys yet. Create one to get started.",
|
||||
"com_ui_your_api_key": "Your API Key",
|
||||
"com_ui_archive": "Archive",
|
||||
"com_ui_archive_delete_error": "Failed to delete archived conversation",
|
||||
"com_ui_archive_error": "Failed to archive conversation",
|
||||
|
|
@@ -838,6 +853,7 @@
|
|||
"com_ui_copy_to_clipboard": "Copy to clipboard",
|
||||
"com_ui_copy_url_to_clipboard": "Copy URL to clipboard",
|
||||
"com_ui_create": "Create",
|
||||
"com_ui_created": "Created",
|
||||
"com_ui_create_assistant": "Create Assistant",
|
||||
"com_ui_create_link": "Create link",
|
||||
"com_ui_create_memory": "Create Memory",
|
||||
|
|
@@ -1023,6 +1039,7 @@
|
|||
"com_ui_hide_image_details": "Hide Image Details",
|
||||
"com_ui_hide_password": "Hide password",
|
||||
"com_ui_hide_qr": "Hide QR Code",
|
||||
"com_ui_hide": "Hide",
|
||||
"com_ui_high": "High",
|
||||
"com_ui_host": "Host",
|
||||
"com_ui_icon": "Icon",
|
||||
|
|
@@ -1044,6 +1061,7 @@
|
|||
"com_ui_instructions": "Instructions",
|
||||
"com_ui_key": "Key",
|
||||
"com_ui_key_required": "API key is required",
|
||||
"com_ui_last_used": "Last used",
|
||||
"com_ui_late_night": "Happy late night",
|
||||
"com_ui_latest_footer": "Every AI for Everyone.",
|
||||
"com_ui_latest_production_version": "Latest production version",
|
||||
|
|
@@ -1089,6 +1107,13 @@
|
|||
"com_ui_mcp_server_role_viewer": "MCP Server Viewer",
|
||||
"com_ui_mcp_server_role_viewer_desc": "Can view and use MCP servers",
|
||||
"com_ui_mcp_server_updated": "MCP server updated successfully",
|
||||
"com_ui_remote_access": "Remote Access",
|
||||
"com_ui_remote_agent_role_owner": "API Owner",
|
||||
"com_ui_remote_agent_role_owner_desc": "Full API access and can grant remote access to others",
|
||||
"com_ui_remote_agent_role_editor": "Editor",
|
||||
"com_ui_remote_agent_role_editor_desc": "Can view and modify the agent via API",
|
||||
"com_ui_remote_agent_role_viewer": "API Viewer",
|
||||
"com_ui_remote_agent_role_viewer_desc": "Can query the agent via API",
|
||||
"com_ui_mcp_server_url_placeholder": "https://mcp.example.com",
|
||||
"com_ui_mcp_servers": "MCP Servers",
|
||||
"com_ui_mcp_servers_allow_create": "Allow users to create MCP servers",
|
||||
|
|
@@ -1238,6 +1263,11 @@
|
|||
"com_ui_regenerating": "Regenerating...",
|
||||
"com_ui_region": "Region",
|
||||
"com_ui_reinitialize": "Reinitialize",
|
||||
"com_ui_remote_agents": "Remote Agents (API)",
|
||||
"com_ui_remote_agents_allow_use": "Allow users to create API keys and query agents remotely",
|
||||
"com_ui_remote_agents_allow_create": "Allow users to create agents via API",
|
||||
"com_ui_remote_agents_allow_share": "Allow users to grant API access to agents to others",
|
||||
"com_ui_remote_agents_allow_share_public": "Allow users to grant API access to agents to all users",
|
||||
"com_ui_remove_agent_from_chain": "Remove {{0}} from chain",
|
||||
"com_ui_remove_user": "Remove {{0}}",
|
||||
"com_ui_rename": "Rename",
|
||||
|
|
@@ -1327,6 +1357,7 @@
|
|||
"com_ui_shared_link_not_found": "Shared link not found",
|
||||
"com_ui_shared_prompts": "Shared Prompts",
|
||||
"com_ui_shop": "Shopping",
|
||||
"com_ui_show": "Show",
|
||||
"com_ui_show_all": "Show All",
|
||||
"com_ui_show_code": "Show Code",
|
||||
"com_ui_show_image_details": "Show Image Details",
|
||||
|
|
|
|||
|
|
@@ -47,6 +47,19 @@ export const RESOURCE_CONFIGS: Record<ResourceType, ResourceConfig> = {
|
|||
`Manage permissions for ${name && name !== '' ? `"${name}"` : 'MCP server'}`,
|
||||
getCopyUrlMessage: () => 'MCP Server URL copied',
|
||||
},
|
||||
[ResourceType.REMOTE_AGENT]: {
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
defaultViewerRoleId: AccessRoleIds.REMOTE_AGENT_VIEWER,
|
||||
defaultEditorRoleId: AccessRoleIds.REMOTE_AGENT_EDITOR,
|
||||
defaultOwnerRoleId: AccessRoleIds.REMOTE_AGENT_OWNER,
|
||||
getResourceUrl: () => `${window.location.origin}/api/v1/responses`,
|
||||
getResourceName: (name?: string) => (name && name !== '' ? `"${name}"` : 'remote agent'),
|
||||
getShareMessage: (name?: string) =>
|
||||
name && name !== '' ? `"${name}" (API Access)` : 'remote agent access',
|
||||
getManageMessage: (name?: string) =>
|
||||
`Manage API access for ${name && name !== '' ? `"${name}"` : 'agent'}`,
|
||||
getCopyUrlMessage: () => 'API endpoint copied',
|
||||
},
|
||||
};
|
||||
|
||||
export const getResourceConfig = (resourceType: ResourceType): ResourceConfig | undefined => {
|
||||
|
|
|
|||
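A minimal sketch of what consumers of the new `REMOTE_AGENT` entry see through `getResourceConfig`. The return values follow directly from the config above; the import path for this module is assumed.

```typescript
import { ResourceType } from 'librechat-data-provider';
import { getResourceConfig } from './resourceConfig'; // path assumed

const remoteAgentConfig = getResourceConfig(ResourceType.REMOTE_AGENT);

// `${window.location.origin}/api/v1/responses`
console.log(remoteAgentConfig?.getResourceUrl());
// 'Manage API access for "Support Bot"'
console.log(remoteAgentConfig?.getManageMessage('Support Bot'));
// 'API endpoint copied'
console.log(remoteAgentConfig?.getCopyUrlMessage());
```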
|
|
@@ -48,6 +48,18 @@ export const ROLE_LOCALIZATIONS = {
|
|||
name: 'com_ui_mcp_server_role_owner' as const,
|
||||
description: 'com_ui_mcp_server_role_owner_desc' as const,
|
||||
} as const,
|
||||
remoteAgent_viewer: {
|
||||
name: 'com_ui_remote_agent_role_viewer' as const,
|
||||
description: 'com_ui_remote_agent_role_viewer_desc' as const,
|
||||
} as const,
|
||||
remoteAgent_editor: {
|
||||
name: 'com_ui_remote_agent_role_editor' as const,
|
||||
description: 'com_ui_remote_agent_role_editor_desc' as const,
|
||||
} as const,
|
||||
remoteAgent_owner: {
|
||||
name: 'com_ui_remote_agent_role_owner' as const,
|
||||
description: 'com_ui_remote_agent_role_owner_desc' as const,
|
||||
} as const,
|
||||
};
|
||||
|
||||
/**
|
||||
|
|
|
|||
|
|
@@ -23,6 +23,7 @@ const {
|
|||
PluginAuth,
|
||||
MemoryEntry,
|
||||
PromptGroup,
|
||||
AgentApiKey,
|
||||
Transaction,
|
||||
Conversation,
|
||||
ConversationTag,
|
||||
|
|
@@ -79,6 +80,7 @@ async function gracefulExit(code = 0) {
|
|||
const tasks = [
|
||||
Action.deleteMany({ user: uid }),
|
||||
Agent.deleteMany({ author: uid }),
|
||||
AgentApiKey.deleteMany({ user: uid }),
|
||||
Assistant.deleteMany({ user: uid }),
|
||||
Balance.deleteMany({ user: uid }),
|
||||
ConversationTag.deleteMany({ user: uid }),
|
||||
|
|
|
|||
|
|
@@ -6,6 +6,8 @@ export * from './initialize';
|
|||
export * from './legacy';
|
||||
export * from './memory';
|
||||
export * from './migration';
|
||||
export * from './openai';
|
||||
export * from './resources';
|
||||
export * from './responses';
|
||||
export * from './run';
|
||||
export * from './validation';
|
||||
|
|
|
|||
packages/api/src/agents/openai/handlers.ts (new file, +454 lines)
@@ -0,0 +1,454 @@
|
|||
/**
|
||||
* OpenAI-compatible event handlers for agent streaming.
|
||||
*
|
||||
* These handlers convert LibreChat's internal graph events into OpenAI-compatible
|
||||
* streaming format (SSE with chat.completion.chunk objects).
|
||||
*/
|
||||
import type { Response as ServerResponse } from 'express';
|
||||
import type {
|
||||
ChatCompletionChunkChoice,
|
||||
OpenAIResponseContext,
|
||||
ChatCompletionChunk,
|
||||
CompletionUsage,
|
||||
ToolCall,
|
||||
} from './types';
|
||||
|
||||
/**
|
||||
* Create a chat completion chunk in OpenAI format
|
||||
*/
|
||||
export function createChunk(
|
||||
context: OpenAIResponseContext,
|
||||
delta: ChatCompletionChunkChoice['delta'],
|
||||
finishReason: ChatCompletionChunkChoice['finish_reason'] = null,
|
||||
usage?: CompletionUsage,
|
||||
): ChatCompletionChunk {
|
||||
return {
|
||||
id: context.requestId,
|
||||
object: 'chat.completion.chunk',
|
||||
created: context.created,
|
||||
model: context.model,
|
||||
choices: [
|
||||
{
|
||||
index: 0,
|
||||
delta,
|
||||
finish_reason: finishReason,
|
||||
},
|
||||
],
|
||||
...(usage && { usage }),
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Write an SSE event to the response
|
||||
*/
|
||||
export function writeSSE(res: ServerResponse, data: ChatCompletionChunk | string): void {
|
||||
if (typeof data === 'string') {
|
||||
res.write(`data: ${data}\n\n`);
|
||||
} else {
|
||||
res.write(`data: ${JSON.stringify(data)}\n\n`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Lightweight tracker for streaming responses.
|
||||
* Only tracks what's needed for finish_reason and usage - doesn't store content.
|
||||
*/
|
||||
export interface OpenAIStreamTracker {
|
||||
/** Whether any text content was emitted */
|
||||
hasText: boolean;
|
||||
/** Whether any reasoning content was emitted */
|
||||
hasReasoning: boolean;
|
||||
/** Accumulated tool calls by index */
|
||||
toolCalls: Map<number, ToolCall>;
|
||||
/** Accumulated usage metadata */
|
||||
usage: {
|
||||
promptTokens: number;
|
||||
completionTokens: number;
|
||||
reasoningTokens: number;
|
||||
};
|
||||
/** Mark that text was emitted */
|
||||
addText: () => void;
|
||||
/** Mark that reasoning was emitted */
|
||||
addReasoning: () => void;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a lightweight stream tracker (doesn't store content)
|
||||
*/
|
||||
export function createOpenAIStreamTracker(): OpenAIStreamTracker {
|
||||
const tracker: OpenAIStreamTracker = {
|
||||
hasText: false,
|
||||
hasReasoning: false,
|
||||
toolCalls: new Map(),
|
||||
usage: {
|
||||
promptTokens: 0,
|
||||
completionTokens: 0,
|
||||
reasoningTokens: 0,
|
||||
},
|
||||
addText: () => {
|
||||
tracker.hasText = true;
|
||||
},
|
||||
addReasoning: () => {
|
||||
tracker.hasReasoning = true;
|
||||
},
|
||||
};
|
||||
return tracker;
|
||||
}
|
||||
|
||||
/**
|
||||
* Content aggregator for non-streaming responses.
|
||||
* Accumulates full text content, reasoning, and tool calls.
|
||||
* Uses arrays for O(n) text accumulation instead of O(n²) string concatenation.
|
||||
*/
|
||||
export interface OpenAIContentAggregator {
|
||||
/** Accumulated text chunks */
|
||||
textChunks: string[];
|
||||
/** Accumulated reasoning/thinking chunks */
|
||||
reasoningChunks: string[];
|
||||
/** Accumulated tool calls by index */
|
||||
toolCalls: Map<number, ToolCall>;
|
||||
/** Accumulated usage metadata */
|
||||
usage: {
|
||||
promptTokens: number;
|
||||
completionTokens: number;
|
||||
reasoningTokens: number;
|
||||
};
|
||||
/** Get accumulated text (joins chunks) */
|
||||
getText: () => string;
|
||||
/** Get accumulated reasoning (joins chunks) */
|
||||
getReasoning: () => string;
|
||||
/** Add text chunk */
|
||||
addText: (text: string) => void;
|
||||
/** Add reasoning chunk */
|
||||
addReasoning: (text: string) => void;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a content aggregator for non-streaming responses
|
||||
*/
|
||||
export function createOpenAIContentAggregator(): OpenAIContentAggregator {
|
||||
const textChunks: string[] = [];
|
||||
const reasoningChunks: string[] = [];
|
||||
|
||||
return {
|
||||
textChunks,
|
||||
reasoningChunks,
|
||||
toolCalls: new Map(),
|
||||
usage: {
|
||||
promptTokens: 0,
|
||||
completionTokens: 0,
|
||||
reasoningTokens: 0,
|
||||
},
|
||||
getText: () => textChunks.join(''),
|
||||
getReasoning: () => reasoningChunks.join(''),
|
||||
addText: (text: string) => textChunks.push(text),
|
||||
addReasoning: (text: string) => reasoningChunks.push(text),
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Handler configuration for OpenAI streaming
|
||||
*/
|
||||
export interface OpenAIStreamHandlerConfig {
|
||||
res: ServerResponse;
|
||||
context: OpenAIResponseContext;
|
||||
tracker: OpenAIStreamTracker;
|
||||
}
|
||||
|
||||
/**
|
||||
* Graph event types from @librechat/agents
|
||||
*/
|
||||
export const GraphEvents = {
|
||||
CHAT_MODEL_END: 'on_chat_model_end',
|
||||
TOOL_END: 'on_tool_end',
|
||||
CHAT_MODEL_STREAM: 'on_chat_model_stream',
|
||||
ON_RUN_STEP: 'on_run_step',
|
||||
ON_RUN_STEP_DELTA: 'on_run_step_delta',
|
||||
ON_RUN_STEP_COMPLETED: 'on_run_step_completed',
|
||||
ON_MESSAGE_DELTA: 'on_message_delta',
|
||||
ON_REASONING_DELTA: 'on_reasoning_delta',
|
||||
} as const;
|
||||
|
||||
/**
|
||||
* Step types from librechat-data-provider
|
||||
*/
|
||||
export const StepTypes = {
|
||||
MESSAGE_CREATION: 'message_creation',
|
||||
TOOL_CALLS: 'tool_calls',
|
||||
} as const;
|
||||
|
||||
/**
|
||||
* Event data interfaces
|
||||
*/
|
||||
export interface MessageDeltaData {
|
||||
id?: string;
|
||||
content?: Array<{ type: string; text?: string }>;
|
||||
}
|
||||
|
||||
export interface RunStepDeltaData {
|
||||
id?: string;
|
||||
delta?: {
|
||||
type?: string;
|
||||
tool_calls?: Array<{
|
||||
index?: number;
|
||||
id?: string;
|
||||
type?: string;
|
||||
function?: {
|
||||
name?: string;
|
||||
arguments?: string;
|
||||
};
|
||||
}>;
|
||||
};
|
||||
}
|
||||
|
||||
export interface ToolEndData {
|
||||
output?: {
|
||||
name?: string;
|
||||
tool_call_id?: string;
|
||||
content?: string;
|
||||
};
|
||||
}
|
||||
|
||||
export interface ModelEndData {
|
||||
output?: {
|
||||
usage_metadata?: {
|
||||
input_tokens?: number;
|
||||
output_tokens?: number;
|
||||
model?: string;
|
||||
};
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Event handler interface
|
||||
*/
|
||||
export interface EventHandler {
|
||||
handle(
|
||||
event: string,
|
||||
data: unknown,
|
||||
metadata?: Record<string, unknown>,
|
||||
graph?: unknown,
|
||||
): void | Promise<void>;
|
||||
}
|
||||
|
||||
/**
|
||||
* Handler for message delta events - streams text content
|
||||
*/
|
||||
export class OpenAIMessageDeltaHandler implements EventHandler {
|
||||
constructor(private config: OpenAIStreamHandlerConfig) {}
|
||||
|
||||
handle(_event: string, data: MessageDeltaData): void {
|
||||
const content = data?.content;
|
||||
if (!content || !Array.isArray(content)) {
|
||||
return;
|
||||
}
|
||||
|
||||
for (const part of content) {
|
||||
if (part.type === 'text' && part.text) {
|
||||
this.config.tracker.addText();
|
||||
const chunk = createChunk(this.config.context, { content: part.text });
|
||||
writeSSE(this.config.res, chunk);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handler for run step delta events - streams tool calls
|
||||
*/
|
||||
export class OpenAIRunStepDeltaHandler implements EventHandler {
|
||||
constructor(private config: OpenAIStreamHandlerConfig) {}
|
||||
|
||||
handle(_event: string, data: RunStepDeltaData): void {
|
||||
const delta = data?.delta;
|
||||
if (!delta || delta.type !== StepTypes.TOOL_CALLS) {
|
||||
return;
|
||||
}
|
||||
|
||||
const toolCalls = delta.tool_calls;
|
||||
if (!toolCalls || !Array.isArray(toolCalls)) {
|
||||
return;
|
||||
}
|
||||
|
||||
for (const tc of toolCalls) {
|
||||
if (tc.index === undefined) {
|
||||
continue;
|
||||
}
|
||||
|
||||
// Initialize tool call in tracker if needed
|
||||
let trackedTc = this.config.tracker.toolCalls.get(tc.index);
|
||||
if (!trackedTc && tc.id) {
|
||||
trackedTc = {
|
||||
id: tc.id,
|
||||
type: 'function',
|
||||
function: {
|
||||
name: '',
|
||||
arguments: '',
|
||||
},
|
||||
};
|
||||
this.config.tracker.toolCalls.set(tc.index, trackedTc);
|
||||
}
|
||||
|
||||
// Build the streaming delta
|
||||
const streamDelta: ChatCompletionChunkChoice['delta'] = {
|
||||
tool_calls: [
|
||||
{
|
||||
index: tc.index,
|
||||
...(tc.id && { id: tc.id }),
|
||||
...(tc.type && { type: tc.type as 'function' }),
|
||||
...(tc.function && {
|
||||
function: {
|
||||
...(tc.function.name && { name: tc.function.name }),
|
||||
...(tc.function.arguments && { arguments: tc.function.arguments }),
|
||||
},
|
||||
}),
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
// Update tracked tool call
|
||||
if (trackedTc) {
|
||||
if (tc.function?.name) {
|
||||
trackedTc.function.name += tc.function.name;
|
||||
}
|
||||
if (tc.function?.arguments) {
|
||||
trackedTc.function.arguments += tc.function.arguments;
|
||||
}
|
||||
}
|
||||
|
||||
const chunk = createChunk(this.config.context, streamDelta);
|
||||
writeSSE(this.config.res, chunk);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handler for run step events - sends initial tool call info
|
||||
*/
|
||||
export class OpenAIRunStepHandler implements EventHandler {
|
||||
constructor(private config: OpenAIStreamHandlerConfig) {}
|
||||
|
||||
handle(_event: string, data: { stepDetails?: { type?: string } }): void {
|
||||
// Run step events are primarily for LibreChat UI, we use deltas for streaming
|
||||
// This handler is a no-op for OpenAI format
|
||||
if (data?.stepDetails?.type === StepTypes.TOOL_CALLS) {
|
||||
// Tool calls will be streamed via delta events
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handler for model end events - captures usage
|
||||
*/
|
||||
export class OpenAIModelEndHandler implements EventHandler {
|
||||
constructor(private config: OpenAIStreamHandlerConfig) {}
|
||||
|
||||
handle(_event: string, data: ModelEndData): void {
|
||||
const usage = data?.output?.usage_metadata;
|
||||
if (!usage) {
|
||||
return;
|
||||
}
|
||||
|
||||
this.config.tracker.usage.promptTokens += usage.input_tokens ?? 0;
|
||||
this.config.tracker.usage.completionTokens += usage.output_tokens ?? 0;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handler for chat model stream events
|
||||
*/
|
||||
export class OpenAIChatModelStreamHandler implements EventHandler {
|
||||
handle(): void {
|
||||
// Handled by message delta handler
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handler for tool end events
|
||||
*/
|
||||
export class OpenAIToolEndHandler implements EventHandler {
|
||||
handle(): void {
|
||||
// Tool results don't need to be streamed in OpenAI format
|
||||
// They're used internally by the agent
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Handler for reasoning delta events.
|
||||
* Streams reasoning/thinking content using the `delta.reasoning` field (OpenRouter convention).
|
||||
*/
|
||||
export class OpenAIReasoningDeltaHandler implements EventHandler {
|
||||
constructor(private config: OpenAIStreamHandlerConfig) {}
|
||||
|
||||
handle(_event: string, data: MessageDeltaData): void {
|
||||
const content = data?.content;
|
||||
if (!content || !Array.isArray(content)) {
|
||||
return;
|
||||
}
|
||||
|
||||
for (const part of content) {
|
||||
if (part.type === 'text' && part.text) {
|
||||
// Mark that reasoning was emitted
|
||||
this.config.tracker.addReasoning();
|
||||
|
||||
// Stream as delta.reasoning (OpenRouter convention)
|
||||
const chunk = createChunk(this.config.context, { reasoning: part.text });
|
||||
writeSSE(this.config.res, chunk);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Create all handlers for OpenAI streaming format
|
||||
*/
|
||||
export function createOpenAIHandlers(
|
||||
config: OpenAIStreamHandlerConfig,
|
||||
): Record<string, EventHandler> {
|
||||
return {
|
||||
[GraphEvents.ON_MESSAGE_DELTA]: new OpenAIMessageDeltaHandler(config),
|
||||
[GraphEvents.ON_RUN_STEP_DELTA]: new OpenAIRunStepDeltaHandler(config),
|
||||
[GraphEvents.ON_RUN_STEP]: new OpenAIRunStepHandler(config),
|
||||
[GraphEvents.ON_RUN_STEP_COMPLETED]: new OpenAIRunStepHandler(config),
|
||||
[GraphEvents.CHAT_MODEL_END]: new OpenAIModelEndHandler(config),
|
||||
[GraphEvents.CHAT_MODEL_STREAM]: new OpenAIChatModelStreamHandler(),
|
||||
[GraphEvents.TOOL_END]: new OpenAIToolEndHandler(),
|
||||
[GraphEvents.ON_REASONING_DELTA]: new OpenAIReasoningDeltaHandler(config),
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Send the final chunk with finish_reason and optional usage
|
||||
*/
|
||||
export function sendFinalChunk(
|
||||
config: OpenAIStreamHandlerConfig,
|
||||
finishReason: ChatCompletionChunkChoice['finish_reason'] = 'stop',
|
||||
): void {
|
||||
const { res, context, tracker } = config;
|
||||
|
||||
// Determine finish reason based on content
|
||||
let reason = finishReason;
|
||||
if (tracker.toolCalls.size > 0 && !tracker.hasText) {
|
||||
reason = 'tool_calls';
|
||||
}
|
||||
|
||||
// Build usage object with reasoning token details (OpenRouter/OpenAI convention)
|
||||
const usage: CompletionUsage = {
|
||||
prompt_tokens: tracker.usage.promptTokens,
|
||||
completion_tokens: tracker.usage.completionTokens,
|
||||
total_tokens: tracker.usage.promptTokens + tracker.usage.completionTokens,
|
||||
};
|
||||
|
||||
// Add reasoning token breakdown if there are reasoning tokens
|
||||
if (tracker.usage.reasoningTokens > 0) {
|
||||
usage.completion_tokens_details = {
|
||||
reasoning_tokens: tracker.usage.reasoningTokens,
|
||||
};
|
||||
}
|
||||
|
||||
const finalChunk = createChunk(context, {}, reason, usage);
|
||||
writeSSE(res, finalChunk);
|
||||
|
||||
// Send [DONE] marker
|
||||
writeSSE(res, '[DONE]');
|
||||
}
|
||||
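A small sketch, assuming only the exports above (`createChunk`, `writeSSE`, and the context type), of the SSE frames these helpers write for one text delta followed by the terminal marker.

```typescript
import type { Response } from 'express';
import { createChunk, writeSSE } from './handlers';
import type { OpenAIResponseContext } from './types';

const context: OpenAIResponseContext = {
  requestId: 'chatcmpl-abc123', // example ID
  model: 'agent_demo',
  created: 1700000000,
};

/** Writes one content delta and the [DONE] marker to a live SSE response. */
function emitHello(res: Response): void {
  // data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk",...,"choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
  writeSSE(res, createChunk(context, { content: 'Hello' }));
  // data: [DONE]
  writeSSE(res, '[DONE]');
}
```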
packages/api/src/agents/openai/index.ts (new file, +52 lines)
@@ -0,0 +1,52 @@
|
|||
/**
|
||||
* OpenAI-compatible API for LibreChat agents.
|
||||
*
|
||||
* This module provides an OpenAI v1/chat/completions compatible interface
|
||||
* for interacting with LibreChat agents remotely via API.
|
||||
*
|
||||
* @example
|
||||
* ```typescript
|
||||
* import { createAgentChatCompletion, listAgentModels } from '@librechat/api';
|
||||
*
|
||||
* // POST /v1/chat/completions
|
||||
* app.post('/v1/chat/completions', async (req, res) => {
|
||||
* await createAgentChatCompletion(req, res, dependencies);
|
||||
* });
|
||||
*
|
||||
* // GET /v1/models
|
||||
* app.get('/v1/models', async (req, res) => {
|
||||
* await listAgentModels(req, res, { getAgents });
|
||||
* });
|
||||
* ```
|
||||
*
|
||||
* Request format:
|
||||
* ```json
|
||||
* {
|
||||
* "model": "agent_id_here",
|
||||
* "messages": [
|
||||
* {"role": "user", "content": "Hello!"}
|
||||
* ],
|
||||
* "stream": true
|
||||
* }
|
||||
* ```
|
||||
*
|
||||
* The "model" parameter should be the agent ID you want to invoke.
|
||||
* Use the /v1/models endpoint to list available agents.
|
||||
*/
|
||||
|
||||
export * from './types';
|
||||
export * from './handlers';
|
||||
export {
|
||||
createAgentChatCompletion,
|
||||
listAgentModels,
|
||||
convertMessages,
|
||||
validateRequest,
|
||||
isChatCompletionValidationFailure,
|
||||
createErrorResponse,
|
||||
sendErrorResponse,
|
||||
buildNonStreamingResponse,
|
||||
type ChatCompletionDependencies,
|
||||
type ChatCompletionValidationResult,
|
||||
type ChatCompletionValidationSuccess,
|
||||
type ChatCompletionValidationFailure,
|
||||
} from './service';
|
||||
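A hypothetical client-side call against the endpoint documented above, shown non-streaming for brevity. The base URL, mount path, and Bearer-token auth header are assumptions, not confirmed by this diff.

```typescript
async function queryAgent(apiKey: string, agentId: string): Promise<string | null> {
  const res = await fetch('https://librechat.example.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
    },
    body: JSON.stringify({
      model: agentId, // the agent ID is passed as the "model" parameter
      messages: [{ role: 'user', content: 'Hello!' }],
      stream: false,
    }),
  });

  const data = (await res.json()) as {
    choices?: Array<{ message?: { content?: string | null } }>;
  };
  return data.choices?.[0]?.message?.content ?? null;
}
```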
packages/api/src/agents/openai/service.ts (new file, +554 lines)
@@ -0,0 +1,554 @@
|
|||
/**
|
||||
* OpenAI-compatible chat completions service for agents.
|
||||
*
|
||||
* This service provides an OpenAI v1/chat/completions compatible API for
|
||||
* interacting with LibreChat agents. The agent_id is passed as the "model"
|
||||
* parameter per OpenAI spec.
|
||||
*
|
||||
* Usage:
|
||||
* ```typescript
|
||||
* import { createAgentChatCompletion } from '@librechat/api';
|
||||
*
|
||||
* // In your Express route handler:
|
||||
* app.post('/v1/chat/completions', async (req, res) => {
|
||||
* await createAgentChatCompletion(req, res, {
|
||||
* getAgent: db.getAgent,
|
||||
* // ... other dependencies
|
||||
* });
|
||||
* });
|
||||
* ```
|
||||
*/
|
||||
import { nanoid } from 'nanoid';
|
||||
import type { Response as ServerResponse, Request } from 'express';
|
||||
import type {
|
||||
ChatCompletionResponse,
|
||||
OpenAIResponseContext,
|
||||
ChatCompletionRequest,
|
||||
OpenAIErrorResponse,
|
||||
CompletionUsage,
|
||||
ChatMessage,
|
||||
ToolCall,
|
||||
} from './types';
|
||||
import type { OpenAIStreamHandlerConfig, EventHandler } from './handlers';
|
||||
import {
|
||||
createOpenAIContentAggregator,
|
||||
createOpenAIStreamTracker,
|
||||
createOpenAIHandlers,
|
||||
sendFinalChunk,
|
||||
createChunk,
|
||||
writeSSE,
|
||||
} from './handlers';
|
||||
|
||||
/**
|
||||
* Dependencies for the chat completion service
|
||||
*/
|
||||
export interface ChatCompletionDependencies {
|
||||
/** Get agent by ID */
|
||||
getAgent: (params: { id: string }) => Promise<Agent | null>;
|
||||
/** Initialize agent for use */
|
||||
initializeAgent: (params: InitializeAgentParams) => Promise<InitializedAgent>;
|
||||
/** Load agent tools */
|
||||
loadAgentTools?: LoadToolsFn;
|
||||
/** Get models config */
|
||||
getModelsConfig?: (req: Request) => Promise<unknown>;
|
||||
/** Validate agent model */
|
||||
validateAgentModel?: (
|
||||
params: unknown,
|
||||
) => Promise<{ isValid: boolean; error?: { message: string } }>;
|
||||
/** Log violation */
|
||||
logViolation?: (
|
||||
req: Request,
|
||||
res: ServerResponse,
|
||||
type: string,
|
||||
info: unknown,
|
||||
score: number,
|
||||
) => Promise<void>;
|
||||
/** Create agent run */
|
||||
createRun?: CreateRunFn;
|
||||
/** App config */
|
||||
appConfig?: AppConfig;
|
||||
}
|
||||
|
||||
/**
|
||||
* Agent type from librechat-data-provider
|
||||
*/
|
||||
interface Agent {
|
||||
id: string;
|
||||
name?: string;
|
||||
model?: string;
|
||||
provider: string;
|
||||
tools?: string[];
|
||||
instructions?: string;
|
||||
model_parameters?: Record<string, unknown>;
|
||||
tool_resources?: Record<string, unknown>;
|
||||
tool_options?: Record<string, unknown>;
|
||||
[key: string]: unknown;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialized agent type - note: after initialization, tools become structured tool objects
|
||||
*/
|
||||
interface InitializedAgent {
|
||||
id: string;
|
||||
name?: string;
|
||||
model?: string;
|
||||
provider: string;
|
||||
/** After initialization, tools are structured tool objects, not strings */
|
||||
tools: unknown[];
|
||||
instructions?: string;
|
||||
model_parameters?: Record<string, unknown>;
|
||||
tool_resources?: Record<string, unknown>;
|
||||
tool_options?: Record<string, unknown>;
|
||||
attachments: unknown[];
|
||||
toolContextMap: Record<string, unknown>;
|
||||
maxContextTokens: number;
|
||||
userMCPAuthMap?: Record<string, Record<string, string>>;
|
||||
[key: string]: unknown;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize agent params
|
||||
*/
|
||||
interface InitializeAgentParams {
|
||||
req: Request;
|
||||
res: ServerResponse;
|
||||
agent: Agent;
|
||||
conversationId?: string | null;
|
||||
parentMessageId?: string | null;
|
||||
requestFiles?: unknown[];
|
||||
loadTools?: LoadToolsFn;
|
||||
endpointOption?: Record<string, unknown>;
|
||||
allowedProviders: Set<string>;
|
||||
isInitialAgent?: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* Tool loading function type
|
||||
*/
|
||||
type LoadToolsFn = (params: {
|
||||
req: Request;
|
||||
res: ServerResponse;
|
||||
provider: string;
|
||||
agentId: string;
|
||||
tools: string[];
|
||||
model: string | null;
|
||||
tool_options: unknown;
|
||||
tool_resources: unknown;
|
||||
}) => Promise<{
|
||||
tools: unknown[];
|
||||
toolContextMap: Record<string, unknown>;
|
||||
userMCPAuthMap?: Record<string, Record<string, string>>;
|
||||
} | null>;
|
||||
|
||||
/**
|
||||
* Create run function type
|
||||
*/
|
||||
type CreateRunFn = (params: {
|
||||
agents: unknown[];
|
||||
messages: unknown[];
|
||||
runId: string;
|
||||
signal: AbortSignal;
|
||||
customHandlers: Record<string, EventHandler>;
|
||||
requestBody: Record<string, unknown>;
|
||||
user: Record<string, unknown>;
|
||||
tokenCounter?: (message: unknown) => number;
|
||||
}) => Promise<{
|
||||
Graph?: unknown;
|
||||
processStream: (
|
||||
input: { messages: unknown[] },
|
||||
config: Record<string, unknown>,
|
||||
options: Record<string, unknown>,
|
||||
) => Promise<void>;
|
||||
} | null>;
|
||||
|
||||
/**
|
||||
* App config type
|
||||
*/
|
||||
interface AppConfig {
|
||||
endpoints?: Record<string, unknown>;
|
||||
[key: string]: unknown;
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert OpenAI messages to LibreChat format
|
||||
*/
|
||||
export function convertMessages(messages: ChatMessage[]): unknown[] {
|
||||
return messages.map((msg) => {
|
||||
let content: string | unknown[];
|
||||
if (typeof msg.content === 'string') {
|
||||
content = msg.content;
|
||||
} else if (msg.content) {
|
||||
content = msg.content.map((part) => {
|
||||
if (part.type === 'text') {
|
||||
return { type: 'text', text: part.text };
|
||||
}
|
||||
if (part.type === 'image_url') {
|
||||
return { type: 'image_url', image_url: part.image_url };
|
||||
}
|
||||
return part;
|
||||
});
|
||||
} else {
|
||||
content = '';
|
||||
}
|
||||
|
||||
return {
|
||||
role: msg.role,
|
||||
content,
|
||||
...(msg.name && { name: msg.name }),
|
||||
...(msg.tool_calls && { tool_calls: msg.tool_calls }),
|
||||
...(msg.tool_call_id && { tool_call_id: msg.tool_call_id }),
|
||||
};
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Create an error response in OpenAI format
|
||||
*/
|
||||
export function createErrorResponse(
|
||||
message: string,
|
||||
type = 'invalid_request_error',
|
||||
code: string | null = null,
|
||||
): OpenAIErrorResponse {
|
||||
return {
|
||||
error: {
|
||||
message,
|
||||
type,
|
||||
param: null,
|
||||
code,
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Send an error response
|
||||
*/
|
||||
export function sendErrorResponse(
|
||||
res: ServerResponse,
|
||||
statusCode: number,
|
||||
message: string,
|
||||
type = 'invalid_request_error',
|
||||
code: string | null = null,
|
||||
): void {
|
||||
res.status(statusCode).json(createErrorResponse(message, type, code));
|
||||
}
|
||||
|
||||
/**
|
||||
* Validation result types for chat completion requests
|
||||
*/
|
||||
export type ChatCompletionValidationSuccess = { valid: true; request: ChatCompletionRequest };
|
||||
export type ChatCompletionValidationFailure = { valid: false; error: string };
|
||||
export type ChatCompletionValidationResult =
|
||||
| ChatCompletionValidationSuccess
|
||||
| ChatCompletionValidationFailure;
|
||||
|
||||
/**
|
||||
* Type guard for validation failure
|
||||
*/
|
||||
export function isChatCompletionValidationFailure(
|
||||
result: ChatCompletionValidationResult,
|
||||
): result is ChatCompletionValidationFailure {
|
||||
return !result.valid;
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate the chat completion request
|
||||
*/
|
||||
export function validateRequest(body: unknown): ChatCompletionValidationResult {
|
||||
if (!body || typeof body !== 'object') {
|
||||
return { valid: false, error: 'Request body is required' };
|
||||
}
|
||||
|
||||
const request = body as Record<string, unknown>;
|
||||
|
||||
if (!request.model || typeof request.model !== 'string') {
|
||||
return { valid: false, error: 'model (agent_id) is required' };
|
||||
}
|
||||
|
||||
if (!request.messages || !Array.isArray(request.messages)) {
|
||||
return { valid: false, error: 'messages array is required' };
|
||||
}
|
||||
|
||||
if (request.messages.length === 0) {
|
||||
return { valid: false, error: 'messages array cannot be empty' };
|
||||
}
|
||||
|
||||
// Validate each message has role and content
|
||||
for (let i = 0; i < request.messages.length; i++) {
|
||||
const msg = request.messages[i] as Record<string, unknown>;
|
||||
if (!msg.role || typeof msg.role !== 'string') {
|
||||
return { valid: false, error: `messages[${i}].role is required` };
|
||||
}
|
||||
if (!['system', 'user', 'assistant', 'tool'].includes(msg.role)) {
|
||||
return {
|
||||
valid: false,
|
||||
error: `messages[${i}].role must be one of: system, user, assistant, tool`,
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
return { valid: true, request: request as unknown as ChatCompletionRequest };
|
||||
}
|
||||
|
||||
/**
|
||||
* Build a non-streaming response from aggregated content
|
||||
*/
|
||||
export function buildNonStreamingResponse(
|
||||
context: OpenAIResponseContext,
|
||||
text: string,
|
||||
reasoning: string,
|
||||
toolCalls: Map<number, ToolCall>,
|
||||
usage: CompletionUsage,
|
||||
): ChatCompletionResponse {
|
||||
const toolCallsArray = Array.from(toolCalls.values());
|
||||
const finishReason = toolCallsArray.length > 0 && !text ? 'tool_calls' : 'stop';
|
||||
|
||||
return {
|
||||
id: context.requestId,
|
||||
object: 'chat.completion',
|
||||
created: context.created,
|
||||
model: context.model,
|
||||
choices: [
|
||||
{
|
||||
index: 0,
|
||||
message: {
|
||||
role: 'assistant',
|
||||
content: text || null,
|
||||
...(reasoning && { reasoning }),
|
||||
...(toolCallsArray.length > 0 && { tool_calls: toolCallsArray }),
|
||||
},
|
||||
finish_reason: finishReason,
|
||||
},
|
||||
],
|
||||
usage,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Main handler for OpenAI-compatible chat completions with agents.
|
||||
*
|
||||
* This function:
|
||||
* 1. Validates the request
|
||||
* 2. Looks up the agent by ID (model parameter)
|
||||
* 3. Initializes the agent with tools
|
||||
* 4. Runs the agent and streams/returns the response
|
||||
*
|
||||
* @param req - Express request object
|
||||
* @param res - Express response object
|
||||
* @param deps - Dependencies for the service
|
||||
*/
|
||||
export async function createAgentChatCompletion(
|
||||
req: Request,
|
||||
res: ServerResponse,
|
||||
deps: ChatCompletionDependencies,
|
||||
): Promise<void> {
|
||||
// Validate request
|
||||
const validation = validateRequest(req.body);
|
||||
if (isChatCompletionValidationFailure(validation)) {
|
||||
sendErrorResponse(res, 400, validation.error);
|
||||
return;
|
||||
}
|
||||
|
||||
const request = validation.request;
|
||||
const agentId = request.model;
|
||||
const requestedStreaming = request.stream === true;
|
||||
|
||||
// Look up the agent
|
||||
const agent = await deps.getAgent({ id: agentId });
|
||||
if (!agent) {
|
||||
sendErrorResponse(
|
||||
res,
|
||||
404,
|
||||
`Agent not found: ${agentId}`,
|
||||
'invalid_request_error',
|
||||
'model_not_found',
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
// Generate IDs
|
||||
const requestId = `chatcmpl-${nanoid()}`;
|
||||
const conversationId = request.conversation_id ?? nanoid();
|
||||
const created = Math.floor(Date.now() / 1000);
|
||||
|
||||
// Build response context
|
||||
const context: OpenAIResponseContext = {
|
||||
created,
|
||||
requestId,
|
||||
model: agentId,
|
||||
};
|
||||
|
||||
// Set up abort controller
|
||||
const abortController = new AbortController();
|
||||
|
||||
// Handle client disconnect
|
||||
req.on('close', () => {
|
||||
abortController.abort();
|
||||
});
|
||||
|
||||
try {
|
||||
// Build allowed providers set (empty = all allowed)
|
||||
const allowedProviders = new Set<string>();
|
||||
|
||||
// Initialize the agent first to check for disableStreaming
|
||||
const initializedAgent = await deps.initializeAgent({
|
||||
req,
|
||||
res,
|
||||
agent,
|
||||
conversationId,
|
||||
parentMessageId: request.parent_message_id,
|
||||
loadTools: deps.loadAgentTools,
|
||||
endpointOption: {
|
||||
endpoint: agent.provider,
|
||||
model_parameters: agent.model_parameters ?? {},
|
||||
},
|
||||
allowedProviders,
|
||||
isInitialAgent: true,
|
||||
});
|
||||
|
||||
// Determine if streaming is enabled (check both request and agent config)
|
||||
const streamingDisabled = !!(initializedAgent.model_parameters as Record<string, unknown>)
|
||||
?.disableStreaming;
|
||||
const isStreaming = requestedStreaming && !streamingDisabled;
|
||||
|
||||
// Create tracker for streaming or aggregator for non-streaming
|
||||
const tracker = isStreaming ? createOpenAIStreamTracker() : null;
|
||||
const aggregator = isStreaming ? null : createOpenAIContentAggregator();
|
||||
|
||||
// Set up response headers for streaming
|
||||
if (isStreaming) {
|
||||
res.setHeader('Content-Type', 'text/event-stream');
|
||||
res.setHeader('Cache-Control', 'no-cache');
|
||||
res.setHeader('Connection', 'keep-alive');
|
||||
res.setHeader('X-Accel-Buffering', 'no');
|
||||
res.flushHeaders();
|
||||
|
||||
// Send initial chunk with role
|
||||
const initialChunk = createChunk(context, { role: 'assistant' });
|
||||
writeSSE(res, initialChunk);
|
||||
}
|
||||
|
||||
// Create handler config (only used for streaming)
|
||||
const handlerConfig: OpenAIStreamHandlerConfig | null =
|
||||
isStreaming && tracker
|
||||
? {
|
||||
res,
|
||||
context,
|
||||
tracker,
|
||||
}
|
||||
: null;
|
||||
|
||||
// Create event handlers
|
||||
const eventHandlers = isStreaming && handlerConfig ? createOpenAIHandlers(handlerConfig) : {};
|
||||
|
||||
// Convert messages to internal format
|
||||
const messages = convertMessages(request.messages);
|
||||
|
||||
// Create and run the agent
|
||||
if (deps.createRun) {
|
||||
const userId = (req as unknown as { user?: { id?: string } }).user?.id ?? 'api-user';
|
||||
|
||||
const run = await deps.createRun({
|
||||
agents: [initializedAgent],
|
||||
messages,
|
||||
runId: requestId,
|
||||
signal: abortController.signal,
|
||||
customHandlers: eventHandlers,
|
||||
requestBody: {
|
||||
messageId: requestId,
|
||||
conversationId,
|
||||
},
|
||||
user: { id: userId },
|
||||
});
|
||||
|
||||
if (run) {
|
||||
await run.processStream(
|
||||
{ messages },
|
||||
{
|
||||
runName: 'AgentRun',
|
||||
configurable: {
|
||||
thread_id: conversationId,
|
||||
user_id: userId,
|
||||
},
|
||||
signal: abortController.signal,
|
||||
streamMode: 'values',
|
||||
version: 'v2',
|
||||
},
|
||||
{},
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// Finalize response
|
||||
if (isStreaming && handlerConfig) {
|
||||
sendFinalChunk(handlerConfig);
|
||||
res.end();
|
||||
} else if (aggregator) {
|
||||
// Build and send non-streaming response
|
||||
const usage: CompletionUsage = {
|
||||
prompt_tokens: aggregator.usage.promptTokens,
|
||||
completion_tokens: aggregator.usage.completionTokens,
|
||||
total_tokens: aggregator.usage.promptTokens + aggregator.usage.completionTokens,
|
||||
...(aggregator.usage.reasoningTokens > 0 && {
|
||||
completion_tokens_details: { reasoning_tokens: aggregator.usage.reasoningTokens },
|
||||
}),
|
||||
};
|
||||
const response = buildNonStreamingResponse(
|
||||
context,
|
||||
aggregator.getText(),
|
||||
aggregator.getReasoning(),
|
||||
aggregator.toolCalls,
|
||||
usage,
|
||||
);
|
||||
res.json(response);
|
||||
}
|
||||
} catch (error) {
|
||||
const errorMessage = error instanceof Error ? error.message : 'An error occurred';
|
||||
|
||||
// Check if we already started streaming (headers sent)
|
||||
if (res.headersSent) {
|
||||
// Headers already sent, try to send error in stream format
|
||||
const errorChunk = createChunk(context, { content: `\n\nError: ${errorMessage}` }, 'stop');
|
||||
writeSSE(res, errorChunk);
|
||||
writeSSE(res, '[DONE]');
|
||||
res.end();
|
||||
} else {
|
||||
sendErrorResponse(res, 500, errorMessage, 'server_error');
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* List available agents/models
|
||||
*
|
||||
* This provides a /v1/models compatible endpoint that lists available agents.
|
||||
*/
|
||||
export async function listAgentModels(
|
||||
_req: Request,
|
||||
res: ServerResponse,
|
||||
deps: { getAgents: (params: Record<string, unknown>) => Promise<Agent[]> },
|
||||
): Promise<void> {
|
||||
try {
|
||||
const agents = await deps.getAgents({});
|
||||
|
||||
const models = agents.map((agent) => ({
|
||||
id: agent.id,
|
||||
object: 'model',
|
||||
created: Math.floor(Date.now() / 1000),
|
||||
owned_by: 'librechat',
|
||||
permission: [],
|
||||
root: agent.id,
|
||||
parent: null,
|
||||
// Extensions
|
||||
name: agent.name,
|
||||
provider: agent.provider,
|
||||
}));
|
||||
|
||||
res.json({
|
||||
object: 'list',
|
||||
data: models,
|
||||
});
|
||||
} catch (error) {
|
||||
const errorMessage = error instanceof Error ? error.message : 'Failed to list models';
|
||||
sendErrorResponse(res, 500, errorMessage, 'server_error');
|
||||
}
|
||||
}
|
||||
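A short sketch of the exported validation helpers in isolation; both calls use only functions defined in the service file above.

```typescript
import { validateRequest, isChatCompletionValidationFailure } from './service';

const result = validateRequest({
  model: 'agent_abc',
  messages: [{ role: 'user', content: 'Ping' }],
});

if (isChatCompletionValidationFailure(result)) {
  // e.g. 'model (agent_id) is required' or 'messages array cannot be empty'
  console.error(result.error);
} else {
  console.log(result.request.model); // 'agent_abc'
}
```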
packages/api/src/agents/openai/types.ts (new file, +194 lines)
@@ -0,0 +1,194 @@
|
|||
/**
|
||||
* OpenAI-compatible types for the agent chat completions API.
|
||||
* These types follow the OpenAI API spec for /v1/chat/completions.
|
||||
*
|
||||
* Note: This API uses agent_id as the "model" parameter per OpenAI spec.
|
||||
* In the future, this will be extended to support the Responses API.
|
||||
*/
|
||||
|
||||
/**
|
||||
* Content part types for OpenAI format
|
||||
*/
|
||||
export interface OpenAITextContentPart {
|
||||
type: 'text';
|
||||
text: string;
|
||||
}
|
||||
|
||||
export interface OpenAIImageContentPart {
|
||||
type: 'image_url';
|
||||
image_url: {
|
||||
url: string;
|
||||
detail?: 'auto' | 'low' | 'high';
|
||||
};
|
||||
}
|
||||
|
||||
export type OpenAIContentPart = OpenAITextContentPart | OpenAIImageContentPart;
|
||||
|
||||
/**
|
||||
* Tool call in OpenAI format
|
||||
*/
|
||||
export interface ToolCall {
|
||||
id: string;
|
||||
type: 'function';
|
||||
function: {
|
||||
name: string;
|
||||
arguments: string;
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* OpenAI chat message format
|
||||
*/
|
||||
export interface ChatMessage {
|
||||
role: 'system' | 'user' | 'assistant' | 'tool';
|
||||
content: string | OpenAIContentPart[] | null;
|
||||
name?: string;
|
||||
tool_calls?: ToolCall[];
|
||||
tool_call_id?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* OpenAI chat completion request
|
||||
*/
|
||||
export interface ChatCompletionRequest {
|
||||
/** Agent ID to invoke (maps to model in OpenAI spec) */
|
||||
model: string;
|
||||
/** Conversation messages */
|
||||
messages: ChatMessage[];
|
||||
/** Whether to stream the response */
|
||||
stream?: boolean;
|
||||
/** Maximum tokens to generate */
|
||||
max_tokens?: number;
|
||||
/** Temperature for sampling */
|
||||
temperature?: number;
|
||||
/** Top-p sampling */
|
||||
top_p?: number;
|
||||
/** Frequency penalty */
|
||||
frequency_penalty?: number;
|
||||
/** Presence penalty */
|
||||
presence_penalty?: number;
|
||||
/** Stop sequences */
|
||||
stop?: string | string[];
|
||||
/** User identifier */
|
||||
user?: string;
|
||||
/** Conversation ID (LibreChat extension) */
|
||||
conversation_id?: string;
|
||||
/** Parent message ID (LibreChat extension) */
|
||||
parent_message_id?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Token usage information
|
||||
*/
|
||||
export interface CompletionUsage {
|
||||
prompt_tokens: number;
|
||||
completion_tokens: number;
|
||||
total_tokens: number;
|
||||
/** Detailed breakdown of output tokens (OpenRouter/OpenAI convention) */
|
||||
completion_tokens_details?: {
|
||||
reasoning_tokens?: number;
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Non-streaming choice
|
||||
*/
|
||||
export interface ChatCompletionChoice {
|
||||
index: number;
|
||||
message: {
|
||||
role: 'assistant';
|
||||
content: string | null;
|
||||
/** Reasoning/thinking content (OpenRouter convention) */
|
||||
reasoning?: string | null;
|
||||
tool_calls?: ToolCall[];
|
||||
};
|
||||
finish_reason: 'stop' | 'length' | 'tool_calls' | 'content_filter' | null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Non-streaming response
|
||||
*/
|
||||
export interface ChatCompletionResponse {
|
||||
id: string;
|
||||
object: 'chat.completion';
|
||||
created: number;
|
||||
model: string;
|
||||
choices: ChatCompletionChoice[];
|
||||
usage?: CompletionUsage;
|
||||
}
|
||||
|
||||
/**
|
||||
* Streaming choice delta
|
||||
* Note: `reasoning` field follows OpenRouter convention for streaming reasoning/thinking content
|
||||
*/
|
||||
export interface ChatCompletionChunkChoice {
|
||||
index: number;
|
||||
delta: {
|
||||
role?: 'assistant';
|
||||
content?: string | null;
|
||||
/** Reasoning/thinking content (OpenRouter convention) */
|
||||
reasoning?: string | null;
|
||||
tool_calls?: Array<{
|
||||
index: number;
|
||||
id?: string;
|
||||
type?: 'function';
|
||||
function?: {
|
||||
name?: string;
|
||||
arguments?: string;
|
||||
};
|
||||
}>;
|
||||
};
|
||||
finish_reason: 'stop' | 'length' | 'tool_calls' | 'content_filter' | null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Streaming response chunk
|
||||
*/
|
||||
export interface ChatCompletionChunk {
|
||||
id: string;
|
||||
object: 'chat.completion.chunk';
|
||||
created: number;
|
||||
model: string;
|
||||
choices: ChatCompletionChunkChoice[];
|
||||
/** Final chunk may include usage */
|
||||
usage?: CompletionUsage;
|
||||
}
|
||||
|
||||
/**
|
||||
* SSE event wrapper for streaming
|
||||
*/
|
||||
export interface SSEEvent {
|
||||
data: ChatCompletionChunk | '[DONE]';
|
||||
}
|
||||
|
||||
/**
|
||||
* Context for building OpenAI responses
|
||||
*/
|
||||
export interface OpenAIResponseContext {
|
||||
/** Request ID for the chat completion */
|
||||
requestId: string;
|
||||
/** Model/agent ID */
|
||||
model: string;
|
||||
/** Created timestamp */
|
||||
created: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Aggregated content for building final response
|
||||
*/
|
||||
export interface AggregatedContent {
|
||||
text: string;
|
||||
toolCalls: ToolCall[];
|
||||
}
|
||||
|
||||
/**
|
||||
* Error response in OpenAI format
|
||||
*/
|
||||
export interface OpenAIErrorResponse {
|
||||
error: {
|
||||
message: string;
|
||||
type: string;
|
||||
param: string | null;
|
||||
code: string | null;
|
||||
};
|
||||
}
|
||||
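An example value conforming to the `ChatCompletionChunk` shape above (the ID and timestamp are made up); useful as a reference when comparing against real stream output.

```typescript
import type { ChatCompletionChunk } from './types';

const exampleChunk: ChatCompletionChunk = {
  id: 'chatcmpl-xyz789',
  object: 'chat.completion.chunk',
  created: 1700000000,
  model: 'agent_demo',
  choices: [
    {
      index: 0,
      delta: { content: 'Hello' }, // or { reasoning: '...' } for thinking tokens
      finish_reason: null,
    },
  ],
  // usage appears only on the final chunk
};
```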
packages/api/src/agents/responses/handlers.ts (new file, +914 lines)
@@ -0,0 +1,914 @@
|
|||
/**
|
||||
* Open Responses API Handlers
|
||||
*
|
||||
* Semantic event emitters and response tracking for the Open Responses API.
|
||||
* Events follow the Open Responses spec with proper lifecycle management.
|
||||
*/
|
||||
import type { Response as ServerResponse } from 'express';
|
||||
import type {
|
||||
Response,
|
||||
ResponseContext,
|
||||
ResponseEvent,
|
||||
OutputItem,
|
||||
MessageItem,
|
||||
FunctionCallItem,
|
||||
FunctionCallOutputItem,
|
||||
ReasoningItem,
|
||||
OutputTextContent,
|
||||
ReasoningTextContent,
|
||||
ItemStatus,
|
||||
ResponseStatus,
|
||||
} from './types';
|
||||
|
||||
/* =============================================================================
|
||||
* RESPONSE TRACKER
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Tracks the state of a response during streaming.
|
||||
* Manages items, sequence numbers, and accumulated content.
|
||||
*/
|
||||
export interface ResponseTracker {
|
||||
/** Current sequence number (monotonically increasing) */
|
||||
sequenceNumber: number;
|
||||
/** Output items being built */
|
||||
items: OutputItem[];
|
||||
/** Current message item (if any) */
|
||||
currentMessage: MessageItem | null;
|
||||
/** Current message content index */
|
||||
currentContentIndex: number;
|
||||
/** Current reasoning item (if any) */
|
||||
currentReasoning: ReasoningItem | null;
|
||||
/** Current reasoning content index */
|
||||
currentReasoningContentIndex: number;
|
||||
/** Map of function call items by call_id */
|
||||
functionCalls: Map<string, FunctionCallItem>;
|
||||
/** Map of function call outputs by call_id */
|
||||
functionCallOutputs: Map<string, FunctionCallOutputItem>;
|
||||
/** Accumulated text for current message */
|
||||
accumulatedText: string;
|
||||
/** Accumulated reasoning text */
|
||||
accumulatedReasoningText: string;
|
||||
/** Accumulated function call arguments by call_id */
|
||||
accumulatedArguments: Map<string, string>;
|
||||
/** Token usage */
|
||||
usage: {
|
||||
inputTokens: number;
|
||||
outputTokens: number;
|
||||
reasoningTokens: number;
|
||||
cachedTokens: number;
|
||||
};
|
||||
/** Response status */
|
||||
status: ResponseStatus;
|
||||
/** Get next sequence number */
|
||||
nextSequence: () => number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a new response tracker
|
||||
*/
|
||||
export function createResponseTracker(): ResponseTracker {
|
||||
const tracker: ResponseTracker = {
|
||||
sequenceNumber: 0,
|
||||
items: [],
|
||||
currentMessage: null,
|
||||
currentContentIndex: 0,
|
||||
currentReasoning: null,
|
||||
currentReasoningContentIndex: 0,
|
||||
functionCalls: new Map(),
|
||||
functionCallOutputs: new Map(),
|
||||
accumulatedText: '',
|
||||
accumulatedReasoningText: '',
|
||||
accumulatedArguments: new Map(),
|
||||
usage: {
|
||||
inputTokens: 0,
|
||||
outputTokens: 0,
|
||||
reasoningTokens: 0,
|
||||
cachedTokens: 0,
|
||||
},
|
||||
status: 'in_progress',
|
||||
nextSequence: () => tracker.sequenceNumber++,
|
||||
};
|
||||
return tracker;
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* SSE EVENT WRITING
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Write a semantic SSE event to the response.
|
||||
* The `event:` field matches the `type` in the data payload.
|
||||
*/
|
||||
export function writeEvent(res: ServerResponse, event: ResponseEvent): void {
|
||||
res.write(`event: ${event.type}\n`);
|
||||
res.write(`data: ${JSON.stringify(event)}\n\n`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Write the terminal [DONE] event
|
||||
*/
|
||||
export function writeDone(res: ServerResponse): void {
|
||||
res.write('data: [DONE]\n\n');
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* RESPONSE BUILDING
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Build a Response object from context and tracker
|
||||
* Includes all required fields per Open Responses spec
|
||||
*/
|
||||
export function buildResponse(
|
||||
context: ResponseContext,
|
||||
tracker: ResponseTracker,
|
||||
status: ResponseStatus = 'in_progress',
|
||||
): Response {
|
||||
const isCompleted = status === 'completed';
|
||||
|
||||
return {
|
||||
// Required fields
|
||||
id: context.responseId,
|
||||
object: 'response',
|
||||
created_at: context.createdAt,
|
||||
completed_at: isCompleted ? Math.floor(Date.now() / 1000) : null,
|
||||
status,
|
||||
incomplete_details: null,
|
||||
model: context.model,
|
||||
previous_response_id: context.previousResponseId ?? null,
|
||||
instructions: context.instructions ?? null,
|
||||
output: tracker.items,
|
||||
error: null,
|
||||
tools: [],
|
||||
tool_choice: 'auto',
|
||||
truncation: 'disabled',
|
||||
parallel_tool_calls: true,
|
||||
text: { format: { type: 'text' } },
|
||||
temperature: 1,
|
||||
top_p: 1,
|
||||
presence_penalty: 0,
|
||||
frequency_penalty: 0,
|
||||
top_logprobs: 0,
|
||||
reasoning: null,
|
||||
user: null,
|
||||
usage: isCompleted
|
||||
? {
|
||||
input_tokens: tracker.usage.inputTokens,
|
||||
output_tokens: tracker.usage.outputTokens,
|
||||
total_tokens: tracker.usage.inputTokens + tracker.usage.outputTokens,
|
||||
input_tokens_details: { cached_tokens: tracker.usage.cachedTokens },
|
||||
output_tokens_details: { reasoning_tokens: tracker.usage.reasoningTokens },
|
||||
}
|
||||
: null,
|
||||
max_output_tokens: null,
|
||||
max_tool_calls: null,
|
||||
store: false,
|
||||
background: false,
|
||||
service_tier: 'default',
|
||||
metadata: {},
|
||||
safety_identifier: null,
|
||||
prompt_cache_key: null,
|
||||
};
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* ITEM BUILDERS
|
||||
* ============================================================================= */
|
||||
|
||||
let itemIdCounter = 0;
|
||||
|
||||
/**
|
||||
* Generate a unique item ID
|
||||
*/
|
||||
export function generateItemId(prefix: string): string {
|
||||
return `${prefix}_${Date.now().toString(36)}${(itemIdCounter++).toString(36)}`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a new message item
|
||||
*/
|
||||
export function createMessageItem(status: ItemStatus = 'in_progress'): MessageItem {
|
||||
return {
|
||||
type: 'message',
|
||||
id: generateItemId('msg'),
|
||||
role: 'assistant',
|
||||
status,
|
||||
content: [],
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a new function call item
|
||||
*/
|
||||
export function createFunctionCallItem(
|
||||
callId: string,
|
||||
name: string,
|
||||
status: ItemStatus = 'in_progress',
|
||||
): FunctionCallItem {
|
||||
return {
|
||||
type: 'function_call',
|
||||
id: generateItemId('fc'),
|
||||
call_id: callId,
|
||||
name,
|
||||
arguments: '',
|
||||
status,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a new function call output item
|
||||
*/
|
||||
export function createFunctionCallOutputItem(
|
||||
callId: string,
|
||||
output: string,
|
||||
status: ItemStatus = 'completed',
|
||||
): FunctionCallOutputItem {
|
||||
return {
|
||||
type: 'function_call_output',
|
||||
id: generateItemId('fco'),
|
||||
call_id: callId,
|
||||
output,
|
||||
status,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a new reasoning item
|
||||
*/
|
||||
export function createReasoningItem(status: ItemStatus = 'in_progress'): ReasoningItem {
|
||||
return {
|
||||
type: 'reasoning',
|
||||
id: generateItemId('reason'),
|
||||
status,
|
||||
content: [],
|
||||
summary: [],
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Create output text content
|
||||
*/
|
||||
export function createOutputTextContent(text: string = ''): OutputTextContent {
|
||||
return {
|
||||
type: 'output_text',
|
||||
text,
|
||||
annotations: [],
|
||||
logprobs: [],
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Create reasoning text content
|
||||
*/
|
||||
export function createReasoningTextContent(text: string = ''): ReasoningTextContent {
|
||||
return {
|
||||
type: 'reasoning_text',
|
||||
text,
|
||||
};
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* STREAMING EVENT EMITTERS
|
||||
* ============================================================================= */
|
||||
|
||||
export interface StreamHandlerConfig {
|
||||
res: ServerResponse;
|
||||
context: ResponseContext;
|
||||
tracker: ResponseTracker;
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.created event
|
||||
* This is the first event emitted per the Open Responses spec
|
||||
*/
|
||||
export function emitResponseCreated(config: StreamHandlerConfig): void {
|
||||
const { res, context, tracker } = config;
|
||||
const response = buildResponse(context, tracker, 'in_progress');
|
||||
writeEvent(res, {
|
||||
type: 'response.created',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
response,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.in_progress event
|
||||
*/
|
||||
export function emitResponseInProgress(config: StreamHandlerConfig): void {
|
||||
const { res, context, tracker } = config;
|
||||
const response = buildResponse(context, tracker, 'in_progress');
|
||||
writeEvent(res, {
|
||||
type: 'response.in_progress',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
response,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.completed event
|
||||
*/
|
||||
export function emitResponseCompleted(config: StreamHandlerConfig): void {
|
||||
const { res, context, tracker } = config;
|
||||
tracker.status = 'completed';
|
||||
const response = buildResponse(context, tracker, 'completed');
|
||||
writeEvent(res, {
|
||||
type: 'response.completed',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
response,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.failed event
|
||||
*/
|
||||
export function emitResponseFailed(
|
||||
config: StreamHandlerConfig,
|
||||
error: { type: string; message: string; code?: string },
|
||||
): void {
|
||||
const { res, context, tracker } = config;
|
||||
tracker.status = 'failed';
|
||||
const response = buildResponse(context, tracker, 'failed');
|
||||
response.error = {
|
||||
type: error.type as
|
||||
| 'server_error'
|
||||
| 'invalid_request'
|
||||
| 'not_found'
|
||||
| 'model_error'
|
||||
| 'too_many_requests',
|
||||
message: error.message,
|
||||
code: error.code,
|
||||
};
|
||||
writeEvent(res, {
|
||||
type: 'response.failed',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
response,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.output_item.added event for a message
|
||||
*/
|
||||
export function emitMessageItemAdded(config: StreamHandlerConfig): MessageItem {
|
||||
const { res, tracker } = config;
|
||||
const item = createMessageItem('in_progress');
|
||||
tracker.currentMessage = item;
|
||||
tracker.currentContentIndex = 0;
|
||||
tracker.accumulatedText = '';
|
||||
tracker.items.push(item);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.output_item.added',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
output_index: tracker.items.length - 1,
|
||||
item,
|
||||
});
|
||||
|
||||
return item;
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.output_item.done event for a message
|
||||
*/
|
||||
export function emitMessageItemDone(config: StreamHandlerConfig): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentMessage) {
|
||||
return;
|
||||
}
|
||||
|
||||
tracker.currentMessage.status = 'completed';
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.output_item.done',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
output_index: outputIndex,
|
||||
item: tracker.currentMessage,
|
||||
});
|
||||
|
||||
tracker.currentMessage = null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.content_part.added for text content
|
||||
*/
|
||||
export function emitTextContentPartAdded(config: StreamHandlerConfig): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentMessage) {
|
||||
return;
|
||||
}
|
||||
|
||||
const part = createOutputTextContent('');
|
||||
tracker.currentMessage.content.push(part);
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.content_part.added',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: tracker.currentMessage.id,
|
||||
output_index: outputIndex,
|
||||
content_index: tracker.currentContentIndex,
|
||||
part,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.output_text.delta event
|
||||
*/
|
||||
export function emitOutputTextDelta(config: StreamHandlerConfig, delta: string): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentMessage) {
|
||||
return;
|
||||
}
|
||||
|
||||
tracker.accumulatedText += delta;
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.output_text.delta',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: tracker.currentMessage.id,
|
||||
output_index: outputIndex,
|
||||
content_index: tracker.currentContentIndex,
|
||||
delta,
|
||||
logprobs: [],
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.output_text.done event
|
||||
*/
|
||||
export function emitOutputTextDone(config: StreamHandlerConfig): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentMessage) {
|
||||
return;
|
||||
}
|
||||
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
|
||||
const contentIndex = tracker.currentContentIndex;
|
||||
|
||||
// Update the content part with final text
|
||||
if (tracker.currentMessage.content[contentIndex]) {
|
||||
(tracker.currentMessage.content[contentIndex] as OutputTextContent).text =
|
||||
tracker.accumulatedText;
|
||||
}
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.output_text.done',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: tracker.currentMessage.id,
|
||||
output_index: outputIndex,
|
||||
content_index: contentIndex,
|
||||
text: tracker.accumulatedText,
|
||||
logprobs: [],
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.content_part.done for text content
|
||||
*/
|
||||
export function emitTextContentPartDone(config: StreamHandlerConfig): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentMessage) {
|
||||
return;
|
||||
}
|
||||
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentMessage);
|
||||
const contentIndex = tracker.currentContentIndex;
|
||||
const part = tracker.currentMessage.content[contentIndex];
|
||||
|
||||
if (part) {
|
||||
writeEvent(res, {
|
||||
type: 'response.content_part.done',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: tracker.currentMessage.id,
|
||||
output_index: outputIndex,
|
||||
content_index: contentIndex,
|
||||
part,
|
||||
});
|
||||
}
|
||||
|
||||
tracker.currentContentIndex++;
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* FUNCTION CALL EVENT EMITTERS
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Emit response.output_item.added for a function call
|
||||
*/
|
||||
export function emitFunctionCallItemAdded(
|
||||
config: StreamHandlerConfig,
|
||||
callId: string,
|
||||
name: string,
|
||||
): FunctionCallItem {
|
||||
const { res, tracker } = config;
|
||||
const item = createFunctionCallItem(callId, name, 'in_progress');
|
||||
tracker.functionCalls.set(callId, item);
|
||||
tracker.accumulatedArguments.set(callId, '');
|
||||
tracker.items.push(item);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.output_item.added',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
output_index: tracker.items.length - 1,
|
||||
item,
|
||||
});
|
||||
|
||||
return item;
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.function_call_arguments.delta event
|
||||
*/
|
||||
export function emitFunctionCallArgumentsDelta(
|
||||
config: StreamHandlerConfig,
|
||||
callId: string,
|
||||
delta: string,
|
||||
): void {
|
||||
const { res, tracker } = config;
|
||||
const item = tracker.functionCalls.get(callId);
|
||||
if (!item) {
|
||||
return;
|
||||
}
|
||||
|
||||
const accumulated = (tracker.accumulatedArguments.get(callId) ?? '') + delta;
|
||||
tracker.accumulatedArguments.set(callId, accumulated);
|
||||
item.arguments = accumulated;
|
||||
|
||||
const outputIndex = tracker.items.indexOf(item);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.function_call_arguments.delta',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: item.id,
|
||||
output_index: outputIndex,
|
||||
call_id: callId,
|
||||
delta,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.function_call_arguments.done event
|
||||
*/
|
||||
export function emitFunctionCallArgumentsDone(config: StreamHandlerConfig, callId: string): void {
|
||||
const { res, tracker } = config;
|
||||
const item = tracker.functionCalls.get(callId);
|
||||
if (!item) {
|
||||
return;
|
||||
}
|
||||
|
||||
const outputIndex = tracker.items.indexOf(item);
|
||||
const args = tracker.accumulatedArguments.get(callId) ?? '';
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.function_call_arguments.done',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: item.id,
|
||||
output_index: outputIndex,
|
||||
call_id: callId,
|
||||
arguments: args,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.output_item.done for a function call
|
||||
*/
|
||||
export function emitFunctionCallItemDone(config: StreamHandlerConfig, callId: string): void {
|
||||
const { res, tracker } = config;
|
||||
const item = tracker.functionCalls.get(callId);
|
||||
if (!item) {
|
||||
return;
|
||||
}
|
||||
|
||||
item.status = 'completed';
|
||||
const outputIndex = tracker.items.indexOf(item);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.output_item.done',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
output_index: outputIndex,
|
||||
item,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit function call output item (internal tool result)
|
||||
*/
|
||||
export function emitFunctionCallOutputItem(
|
||||
config: StreamHandlerConfig,
|
||||
callId: string,
|
||||
output: string,
|
||||
): void {
|
||||
const { res, tracker } = config;
|
||||
const item = createFunctionCallOutputItem(callId, output, 'completed');
|
||||
tracker.functionCallOutputs.set(callId, item);
|
||||
tracker.items.push(item);
|
||||
|
||||
// Emit added
|
||||
writeEvent(res, {
|
||||
type: 'response.output_item.added',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
output_index: tracker.items.length - 1,
|
||||
item,
|
||||
});
|
||||
|
||||
// Immediately emit done since it's already complete
|
||||
writeEvent(res, {
|
||||
type: 'response.output_item.done',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
output_index: tracker.items.length - 1,
|
||||
item,
|
||||
});
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* REASONING EVENT EMITTERS
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Emit response.output_item.added for reasoning
|
||||
*/
|
||||
export function emitReasoningItemAdded(config: StreamHandlerConfig): ReasoningItem {
|
||||
const { res, tracker } = config;
|
||||
const item = createReasoningItem('in_progress');
|
||||
tracker.currentReasoning = item;
|
||||
tracker.currentReasoningContentIndex = 0;
|
||||
tracker.accumulatedReasoningText = '';
|
||||
tracker.items.push(item);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.output_item.added',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
output_index: tracker.items.length - 1,
|
||||
item,
|
||||
});
|
||||
|
||||
return item;
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.content_part.added for reasoning
|
||||
*/
|
||||
export function emitReasoningContentPartAdded(config: StreamHandlerConfig): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentReasoning) {
|
||||
return;
|
||||
}
|
||||
|
||||
const part = createReasoningTextContent('');
|
||||
if (!tracker.currentReasoning.content) {
|
||||
tracker.currentReasoning.content = [];
|
||||
}
|
||||
tracker.currentReasoning.content.push(part);
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.content_part.added',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: tracker.currentReasoning.id,
|
||||
output_index: outputIndex,
|
||||
content_index: tracker.currentReasoningContentIndex,
|
||||
part,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.reasoning.delta event
|
||||
*/
|
||||
export function emitReasoningDelta(config: StreamHandlerConfig, delta: string): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentReasoning) {
|
||||
return;
|
||||
}
|
||||
|
||||
tracker.accumulatedReasoningText += delta;
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.reasoning.delta',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: tracker.currentReasoning.id,
|
||||
output_index: outputIndex,
|
||||
content_index: tracker.currentReasoningContentIndex,
|
||||
delta,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.reasoning.done event
|
||||
*/
|
||||
export function emitReasoningDone(config: StreamHandlerConfig): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentReasoning || !tracker.currentReasoning.content) {
|
||||
return;
|
||||
}
|
||||
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
|
||||
const contentIndex = tracker.currentReasoningContentIndex;
|
||||
|
||||
// Update the content part with final text
|
||||
if (tracker.currentReasoning.content[contentIndex]) {
|
||||
(tracker.currentReasoning.content[contentIndex] as ReasoningTextContent).text =
|
||||
tracker.accumulatedReasoningText;
|
||||
}
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.reasoning.done',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: tracker.currentReasoning.id,
|
||||
output_index: outputIndex,
|
||||
content_index: contentIndex,
|
||||
text: tracker.accumulatedReasoningText,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.content_part.done for reasoning
|
||||
*/
|
||||
export function emitReasoningContentPartDone(config: StreamHandlerConfig): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentReasoning || !tracker.currentReasoning.content) {
|
||||
return;
|
||||
}
|
||||
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
|
||||
const contentIndex = tracker.currentReasoningContentIndex;
|
||||
const part = tracker.currentReasoning.content[contentIndex];
|
||||
|
||||
if (part) {
|
||||
writeEvent(res, {
|
||||
type: 'response.content_part.done',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
item_id: tracker.currentReasoning.id,
|
||||
output_index: outputIndex,
|
||||
content_index: contentIndex,
|
||||
part,
|
||||
});
|
||||
}
|
||||
|
||||
tracker.currentReasoningContentIndex++;
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit response.output_item.done for reasoning
|
||||
*/
|
||||
export function emitReasoningItemDone(config: StreamHandlerConfig): void {
|
||||
const { res, tracker } = config;
|
||||
if (!tracker.currentReasoning) {
|
||||
return;
|
||||
}
|
||||
|
||||
tracker.currentReasoning.status = 'completed';
|
||||
const outputIndex = tracker.items.indexOf(tracker.currentReasoning);
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'response.output_item.done',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
output_index: outputIndex,
|
||||
item: tracker.currentReasoning,
|
||||
});
|
||||
|
||||
tracker.currentReasoning = null;
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* ERROR HANDLING
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Emit error event
|
||||
*/
|
||||
export function emitError(
|
||||
config: StreamHandlerConfig,
|
||||
error: { type: string; message: string; code?: string },
|
||||
): void {
|
||||
const { res, tracker } = config;
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'error',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
error: {
|
||||
type: error.type as 'server_error',
|
||||
message: error.message,
|
||||
code: error.code,
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* LIBRECHAT EXTENSION EVENTS
|
||||
* Custom events prefixed with 'librechat:' per Open Responses spec
|
||||
* @see https://openresponses.org/specification#extending-streaming-events
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Attachment data for librechat:attachment events
|
||||
*/
|
||||
export interface AttachmentData {
|
||||
/** File ID in LibreChat storage */
|
||||
file_id?: string;
|
||||
/** Original filename */
|
||||
filename?: string;
|
||||
/** MIME type */
|
||||
type?: string;
|
||||
/** URL to access the file */
|
||||
url?: string;
|
||||
/** Base64-encoded image data (for inline images) */
|
||||
image_url?: string;
|
||||
/** Width for images */
|
||||
width?: number;
|
||||
/** Height for images */
|
||||
height?: number;
|
||||
/** Associated tool call ID */
|
||||
tool_call_id?: string;
|
||||
/** Additional metadata */
|
||||
[key: string]: unknown;
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit librechat:attachment event for file/image attachments
|
||||
* This is a LibreChat extension to the Open Responses streaming protocol.
|
||||
* External clients can safely ignore these events.
|
||||
*/
|
||||
export function emitAttachment(
|
||||
config: StreamHandlerConfig,
|
||||
attachment: AttachmentData,
|
||||
options?: {
|
||||
messageId?: string;
|
||||
conversationId?: string;
|
||||
},
|
||||
): void {
|
||||
const { res, tracker } = config;
|
||||
|
||||
writeEvent(res, {
|
||||
type: 'librechat:attachment',
|
||||
sequence_number: tracker.nextSequence(),
|
||||
attachment,
|
||||
message_id: options?.messageId,
|
||||
conversation_id: options?.conversationId,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Write attachment event directly to response (for use outside streaming context)
|
||||
* Useful when attachment processing happens asynchronously
|
||||
*/
|
||||
export function writeAttachmentEvent(
|
||||
res: ServerResponse,
|
||||
sequenceNumber: number,
|
||||
attachment: AttachmentData,
|
||||
options?: {
|
||||
messageId?: string;
|
||||
conversationId?: string;
|
||||
},
|
||||
): void {
|
||||
writeEvent(res, {
|
||||
type: 'librechat:attachment',
|
||||
sequence_number: sequenceNumber,
|
||||
attachment,
|
||||
message_id: options?.messageId,
|
||||
conversation_id: options?.conversationId,
|
||||
});
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* NON-STREAMING RESPONSE BUILDER
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Build a complete non-streaming response
|
||||
*/
|
||||
export function buildResponsesNonStreamingResponse(
|
||||
context: ResponseContext,
|
||||
tracker: ResponseTracker,
|
||||
): Response {
|
||||
return buildResponse(context, tracker, 'completed');
|
||||
}
|
||||
|
||||
/**
|
||||
* Update tracker usage from collected data
|
||||
*/
|
||||
export function updateTrackerUsage(
|
||||
tracker: ResponseTracker,
|
||||
usage: {
|
||||
promptTokens?: number;
|
||||
completionTokens?: number;
|
||||
reasoningTokens?: number;
|
||||
cachedTokens?: number;
|
||||
},
|
||||
): void {
|
||||
if (usage.promptTokens != null) {
|
||||
tracker.usage.inputTokens = usage.promptTokens;
|
||||
}
|
||||
if (usage.completionTokens != null) {
|
||||
tracker.usage.outputTokens = usage.completionTokens;
|
||||
}
|
||||
if (usage.reasoningTokens != null) {
|
||||
tracker.usage.reasoningTokens = usage.reasoningTokens;
|
||||
}
|
||||
if (usage.cachedTokens != null) {
|
||||
tracker.usage.cachedTokens = usage.cachedTokens;
|
||||
}
|
||||
}
|
||||
183
packages/api/src/agents/responses/index.ts
Normal file
183
packages/api/src/agents/responses/index.ts
Normal file
|
|
@ -0,0 +1,183 @@
|
|||
/**
|
||||
* Open Responses API Module
|
||||
*
|
||||
* Exports for the Open Responses API implementation.
|
||||
* @see https://openresponses.org/specification
|
||||
*/
|
||||
|
||||
// Types
|
||||
export type {
|
||||
// Enums
|
||||
ItemStatus,
|
||||
ResponseStatus,
|
||||
MessageRole,
|
||||
ToolChoiceValue,
|
||||
TruncationValue,
|
||||
ServiceTier,
|
||||
ReasoningEffort,
|
||||
ReasoningSummary,
|
||||
// Input content
|
||||
InputTextContent,
|
||||
InputImageContent,
|
||||
InputFileContent,
|
||||
InputContent,
|
||||
// Output content
|
||||
LogProb,
|
||||
TopLogProb,
|
||||
OutputTextContent,
|
||||
RefusalContent,
|
||||
ModelContent,
|
||||
// Annotations
|
||||
UrlCitationAnnotation,
|
||||
FileCitationAnnotation,
|
||||
Annotation,
|
||||
// Reasoning content
|
||||
ReasoningTextContent,
|
||||
SummaryTextContent,
|
||||
ReasoningContent,
|
||||
// Input items
|
||||
SystemMessageItemParam,
|
||||
DeveloperMessageItemParam,
|
||||
UserMessageItemParam,
|
||||
AssistantMessageItemParam,
|
||||
FunctionCallItemParam,
|
||||
FunctionCallOutputItemParam,
|
||||
ReasoningItemParam,
|
||||
ItemReferenceParam,
|
||||
InputItem,
|
||||
// Output items
|
||||
MessageItem,
|
||||
FunctionCallItem,
|
||||
FunctionCallOutputItem,
|
||||
ReasoningItem,
|
||||
OutputItem,
|
||||
// Tools
|
||||
FunctionTool,
|
||||
HostedTool,
|
||||
Tool,
|
||||
FunctionToolChoice,
|
||||
ToolChoice,
|
||||
// Request
|
||||
ReasoningConfig,
|
||||
TextConfig,
|
||||
StreamOptions,
|
||||
Metadata,
|
||||
ResponseRequest,
|
||||
// Response field types
|
||||
TextField,
|
||||
// Response
|
||||
InputTokensDetails,
|
||||
OutputTokensDetails,
|
||||
Usage,
|
||||
IncompleteDetails,
|
||||
ResponseError,
|
||||
Response,
|
||||
// Streaming events
|
||||
BaseEvent,
|
||||
ResponseCreatedEvent,
|
||||
ResponseInProgressEvent,
|
||||
ResponseCompletedEvent,
|
||||
ResponseFailedEvent,
|
||||
ResponseIncompleteEvent,
|
||||
OutputItemAddedEvent,
|
||||
OutputItemDoneEvent,
|
||||
ContentPartAddedEvent,
|
||||
ContentPartDoneEvent,
|
||||
OutputTextDeltaEvent,
|
||||
OutputTextDoneEvent,
|
||||
RefusalDeltaEvent,
|
||||
RefusalDoneEvent,
|
||||
FunctionCallArgumentsDeltaEvent,
|
||||
FunctionCallArgumentsDoneEvent,
|
||||
ReasoningDeltaEvent,
|
||||
ReasoningDoneEvent,
|
||||
ErrorEvent,
|
||||
ResponseEvent,
|
||||
// LibreChat extensions
|
||||
LibreChatAttachmentContent,
|
||||
LibreChatAttachmentEvent,
|
||||
// Internal
|
||||
ResponseContext,
|
||||
RequestValidationResult,
|
||||
} from './types';
|
||||
|
||||
// Handlers
|
||||
export {
|
||||
// Tracker
|
||||
createResponseTracker,
|
||||
type ResponseTracker,
|
||||
// SSE
|
||||
writeEvent,
|
||||
writeDone,
|
||||
// Response building
|
||||
buildResponse,
|
||||
// Item builders
|
||||
generateItemId,
|
||||
createMessageItem,
|
||||
createFunctionCallItem,
|
||||
createFunctionCallOutputItem,
|
||||
createReasoningItem,
|
||||
createOutputTextContent,
|
||||
createReasoningTextContent,
|
||||
// Stream config
|
||||
type StreamHandlerConfig,
|
||||
// Response events
|
||||
emitResponseCreated,
|
||||
emitResponseInProgress,
|
||||
emitResponseCompleted,
|
||||
emitResponseFailed,
|
||||
// Message events
|
||||
emitMessageItemAdded,
|
||||
emitMessageItemDone,
|
||||
emitTextContentPartAdded,
|
||||
emitOutputTextDelta,
|
||||
emitOutputTextDone,
|
||||
emitTextContentPartDone,
|
||||
// Function call events
|
||||
emitFunctionCallItemAdded,
|
||||
emitFunctionCallArgumentsDelta,
|
||||
emitFunctionCallArgumentsDone,
|
||||
emitFunctionCallItemDone,
|
||||
emitFunctionCallOutputItem,
|
||||
// Reasoning events
|
||||
emitReasoningItemAdded,
|
||||
emitReasoningContentPartAdded,
|
||||
emitReasoningDelta,
|
||||
emitReasoningDone,
|
||||
emitReasoningContentPartDone,
|
||||
emitReasoningItemDone,
|
||||
// Error events
|
||||
emitError,
|
||||
// LibreChat extension events
|
||||
emitAttachment,
|
||||
writeAttachmentEvent,
|
||||
type AttachmentData,
|
||||
// Non-streaming
|
||||
buildResponsesNonStreamingResponse,
|
||||
updateTrackerUsage,
|
||||
} from './handlers';
|
||||
|
||||
// Service
|
||||
export {
|
||||
// Validation
|
||||
validateResponseRequest,
|
||||
isValidationFailure,
|
||||
// Input conversion
|
||||
convertInputToMessages,
|
||||
mergeMessagesWithInput,
|
||||
type InternalMessage,
|
||||
// Error response
|
||||
sendResponsesErrorResponse,
|
||||
// Context
|
||||
generateResponseId,
|
||||
createResponseContext,
|
||||
// Streaming setup
|
||||
setupStreamingResponse,
|
||||
// Event handlers
|
||||
createResponsesEventHandlers,
|
||||
// Non-streaming
|
||||
createResponseAggregator,
|
||||
buildAggregatedResponse,
|
||||
createAggregatorEventHandlers,
|
||||
type ResponseAggregator,
|
||||
} from './service';
|
||||
869
packages/api/src/agents/responses/service.ts
Normal file
869
packages/api/src/agents/responses/service.ts
Normal file
|
|
@ -0,0 +1,869 @@
|
|||
/**
|
||||
* Open Responses API Service
|
||||
*
|
||||
* Core service for processing Open Responses API requests.
|
||||
* Handles input conversion, message formatting, and request validation.
|
||||
*/
|
||||
import type { Response as ServerResponse } from 'express';
|
||||
import type {
|
||||
ResponseRequest,
|
||||
RequestValidationResult,
|
||||
InputItem,
|
||||
InputContent,
|
||||
ResponseContext,
|
||||
Response,
|
||||
} from './types';
|
||||
import {
|
||||
writeDone,
|
||||
emitResponseCompleted,
|
||||
emitMessageItemAdded,
|
||||
emitMessageItemDone,
|
||||
emitTextContentPartAdded,
|
||||
emitOutputTextDelta,
|
||||
emitOutputTextDone,
|
||||
emitTextContentPartDone,
|
||||
emitFunctionCallItemAdded,
|
||||
emitFunctionCallArgumentsDelta,
|
||||
emitFunctionCallArgumentsDone,
|
||||
emitFunctionCallItemDone,
|
||||
emitFunctionCallOutputItem,
|
||||
emitReasoningItemAdded,
|
||||
emitReasoningContentPartAdded,
|
||||
emitReasoningDelta,
|
||||
emitReasoningDone,
|
||||
emitReasoningContentPartDone,
|
||||
emitReasoningItemDone,
|
||||
updateTrackerUsage,
|
||||
type StreamHandlerConfig,
|
||||
} from './handlers';
|
||||
|
||||
/* =============================================================================
|
||||
* REQUEST VALIDATION
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Validate a request body
|
||||
*/
|
||||
export function validateResponseRequest(body: unknown): RequestValidationResult {
|
||||
if (!body || typeof body !== 'object') {
|
||||
return { valid: false, error: 'Request body is required' };
|
||||
}
|
||||
|
||||
const request = body as Record<string, unknown>;
|
||||
|
||||
// Required: model
|
||||
if (!request.model || typeof request.model !== 'string') {
|
||||
return { valid: false, error: 'model is required and must be a string' };
|
||||
}
|
||||
|
||||
// Required: input (string or array)
|
||||
if (request.input === undefined || request.input === null) {
|
||||
return { valid: false, error: 'input is required' };
|
||||
}
|
||||
|
||||
if (typeof request.input !== 'string' && !Array.isArray(request.input)) {
|
||||
return { valid: false, error: 'input must be a string or array of items' };
|
||||
}
|
||||
|
||||
// Optional validations
|
||||
if (request.stream !== undefined && typeof request.stream !== 'boolean') {
|
||||
return { valid: false, error: 'stream must be a boolean' };
|
||||
}
|
||||
|
||||
if (request.temperature !== undefined) {
|
||||
const temp = request.temperature as number;
|
||||
if (typeof temp !== 'number' || temp < 0 || temp > 2) {
|
||||
return { valid: false, error: 'temperature must be a number between 0 and 2' };
|
||||
}
|
||||
}
|
||||
|
||||
if (request.max_output_tokens !== undefined) {
|
||||
if (typeof request.max_output_tokens !== 'number' || request.max_output_tokens < 1) {
|
||||
return { valid: false, error: 'max_output_tokens must be a positive number' };
|
||||
}
|
||||
}
|
||||
|
||||
return { valid: true, request: request as unknown as ResponseRequest };
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if validation failed
|
||||
*/
|
||||
export function isValidationFailure(
|
||||
result: RequestValidationResult,
|
||||
): result is { valid: false; error: string } {
|
||||
return !result.valid;
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* INPUT CONVERSION
|
||||
* ============================================================================= */
|
||||
|
||||
/** Internal message format (LibreChat-compatible) */
|
||||
export interface InternalMessage {
|
||||
role: 'system' | 'user' | 'assistant' | 'tool';
|
||||
content: string | Array<{ type: string; text?: string; image_url?: unknown }>;
|
||||
name?: string;
|
||||
tool_call_id?: string;
|
||||
tool_calls?: Array<{
|
||||
id: string;
|
||||
type: 'function';
|
||||
function: { name: string; arguments: string };
|
||||
}>;
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert Open Responses input to internal message format.
|
||||
* Handles both string input and array of items.
|
||||
*/
|
||||
export function convertInputToMessages(input: string | InputItem[]): InternalMessage[] {
|
||||
// Simple string input becomes a user message
|
||||
if (typeof input === 'string') {
|
||||
return [{ role: 'user', content: input }];
|
||||
}
|
||||
|
||||
const messages: InternalMessage[] = [];
|
||||
|
||||
for (const item of input) {
|
||||
if (item.type === 'item_reference') {
|
||||
// Skip item references - they're handled by previous_response_id
|
||||
continue;
|
||||
}
|
||||
|
||||
if (item.type === 'message') {
|
||||
const messageItem = item as {
|
||||
type: 'message';
|
||||
role: string;
|
||||
content: string | InputContent[];
|
||||
};
|
||||
|
||||
let content: InternalMessage['content'];
|
||||
|
||||
if (typeof messageItem.content === 'string') {
|
||||
content = messageItem.content;
|
||||
} else if (Array.isArray(messageItem.content)) {
|
||||
content = messageItem.content.map((part) => {
|
||||
if (part.type === 'input_text') {
|
||||
return { type: 'text', text: part.text };
|
||||
}
|
||||
if (part.type === 'input_image') {
|
||||
return {
|
||||
type: 'image_url',
|
||||
image_url: {
|
||||
url: (part as { image_url?: string }).image_url,
|
||||
detail: (part as { detail?: string }).detail,
|
||||
},
|
||||
};
|
||||
}
|
||||
return { type: part.type };
|
||||
});
|
||||
} else {
|
||||
content = '';
|
||||
}
|
||||
|
||||
// Map developer role to system (LibreChat convention)
|
||||
let role: InternalMessage['role'];
|
||||
if (messageItem.role === 'developer') {
|
||||
role = 'system';
|
||||
} else if (messageItem.role === 'user') {
|
||||
role = 'user';
|
||||
} else if (messageItem.role === 'assistant') {
|
||||
role = 'assistant';
|
||||
} else if (messageItem.role === 'system') {
|
||||
role = 'system';
|
||||
} else {
|
||||
role = 'user';
|
||||
}
|
||||
|
||||
messages.push({ role, content });
|
||||
}
|
||||
|
||||
if (item.type === 'function_call') {
|
||||
// Function call items represent prior tool calls from assistant
|
||||
const fcItem = item as {
|
||||
type: 'function_call';
|
||||
call_id: string;
|
||||
name: string;
|
||||
arguments: string;
|
||||
};
|
||||
|
||||
// Add as assistant message with tool_calls
|
||||
messages.push({
|
||||
role: 'assistant',
|
||||
content: '',
|
||||
tool_calls: [
|
||||
{
|
||||
id: fcItem.call_id,
|
||||
type: 'function',
|
||||
function: { name: fcItem.name, arguments: fcItem.arguments },
|
||||
},
|
||||
],
|
||||
});
|
||||
}
|
||||
|
||||
if (item.type === 'function_call_output') {
|
||||
// Function call output items represent tool results
|
||||
const fcoItem = item as { type: 'function_call_output'; call_id: string; output: string };
|
||||
|
||||
messages.push({
|
||||
role: 'tool',
|
||||
content: fcoItem.output,
|
||||
tool_call_id: fcoItem.call_id,
|
||||
});
|
||||
}
|
||||
|
||||
// Reasoning items are typically not passed back as input
|
||||
// They're model-generated and may be encrypted
|
||||
}
|
||||
|
||||
return messages;
|
||||
}
|
||||
|
||||
/**
|
||||
* Merge previous conversation messages with new input
|
||||
*/
|
||||
export function mergeMessagesWithInput(
|
||||
previousMessages: InternalMessage[],
|
||||
newInput: InternalMessage[],
|
||||
): InternalMessage[] {
|
||||
return [...previousMessages, ...newInput];
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* ERROR RESPONSE
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Send an error response in Open Responses format
|
||||
*/
|
||||
export function sendResponsesErrorResponse(
|
||||
res: ServerResponse,
|
||||
statusCode: number,
|
||||
message: string,
|
||||
type: string = 'invalid_request',
|
||||
code?: string,
|
||||
): void {
|
||||
res.status(statusCode).json({
|
||||
error: {
|
||||
type,
|
||||
message,
|
||||
code: code ?? null,
|
||||
param: null,
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* RESPONSE CONTEXT
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Generate a unique response ID
|
||||
*/
|
||||
export function generateResponseId(): string {
|
||||
return `resp_${Date.now().toString(36)}${Math.random().toString(36).substring(2, 8)}`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a response context from request
|
||||
*/
|
||||
export function createResponseContext(
|
||||
request: ResponseRequest,
|
||||
responseId?: string,
|
||||
): ResponseContext {
|
||||
return {
|
||||
responseId: responseId ?? generateResponseId(),
|
||||
model: request.model,
|
||||
createdAt: Math.floor(Date.now() / 1000),
|
||||
previousResponseId: request.previous_response_id,
|
||||
instructions: request.instructions,
|
||||
};
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* STREAMING SETUP
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Set up streaming response headers
|
||||
*/
|
||||
export function setupStreamingResponse(res: ServerResponse): void {
|
||||
res.setHeader('Content-Type', 'text/event-stream');
|
||||
res.setHeader('Cache-Control', 'no-cache');
|
||||
res.setHeader('Connection', 'keep-alive');
|
||||
res.setHeader('X-Accel-Buffering', 'no');
|
||||
res.flushHeaders();
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* STREAM HANDLER FACTORY
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* State for tracking streaming progress
|
||||
*/
|
||||
interface StreamState {
|
||||
messageStarted: boolean;
|
||||
messageContentStarted: boolean;
|
||||
reasoningStarted: boolean;
|
||||
reasoningContentStarted: boolean;
|
||||
activeToolCalls: Set<string>;
|
||||
completedToolCalls: Set<string>;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create LibreChat event handlers that emit Open Responses events
|
||||
*/
|
||||
export function createResponsesEventHandlers(config: StreamHandlerConfig): {
|
||||
handlers: Record<string, { handle: (event: string, data: unknown) => void }>;
|
||||
state: StreamState;
|
||||
finalizeStream: () => void;
|
||||
} {
|
||||
const state: StreamState = {
|
||||
messageStarted: false,
|
||||
messageContentStarted: false,
|
||||
reasoningStarted: false,
|
||||
reasoningContentStarted: false,
|
||||
activeToolCalls: new Set(),
|
||||
completedToolCalls: new Set(),
|
||||
};
|
||||
|
||||
/**
|
||||
* Ensure message item is started
|
||||
*/
|
||||
const ensureMessageStarted = (): void => {
|
||||
if (!state.messageStarted) {
|
||||
emitMessageItemAdded(config);
|
||||
state.messageStarted = true;
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Ensure message content part is started
|
||||
*/
|
||||
const ensureMessageContentStarted = (): void => {
|
||||
ensureMessageStarted();
|
||||
if (!state.messageContentStarted) {
|
||||
emitTextContentPartAdded(config);
|
||||
state.messageContentStarted = true;
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Ensure reasoning item is started
|
||||
*/
|
||||
const ensureReasoningStarted = (): void => {
|
||||
if (!state.reasoningStarted) {
|
||||
emitReasoningItemAdded(config);
|
||||
state.reasoningStarted = true;
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Ensure reasoning content part is started
|
||||
*/
|
||||
const ensureReasoningContentStarted = (): void => {
|
||||
ensureReasoningStarted();
|
||||
if (!state.reasoningContentStarted) {
|
||||
emitReasoningContentPartAdded(config);
|
||||
state.reasoningContentStarted = true;
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Close any open content streams
|
||||
*/
|
||||
const closeOpenStreams = (): void => {
|
||||
// Close message content if open
|
||||
if (state.messageContentStarted) {
|
||||
emitOutputTextDone(config);
|
||||
emitTextContentPartDone(config);
|
||||
state.messageContentStarted = false;
|
||||
}
|
||||
|
||||
// Close message item if open
|
||||
if (state.messageStarted) {
|
||||
emitMessageItemDone(config);
|
||||
state.messageStarted = false;
|
||||
}
|
||||
|
||||
// Close reasoning content if open
|
||||
if (state.reasoningContentStarted) {
|
||||
emitReasoningDone(config);
|
||||
emitReasoningContentPartDone(config);
|
||||
state.reasoningContentStarted = false;
|
||||
}
|
||||
|
||||
// Close reasoning item if open
|
||||
if (state.reasoningStarted) {
|
||||
emitReasoningItemDone(config);
|
||||
state.reasoningStarted = false;
|
||||
}
|
||||
};
|
||||
|
||||
const handlers = {
|
||||
/**
|
||||
* Handle text message deltas
|
||||
*/
|
||||
on_message_delta: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const deltaData = data as { delta?: { content?: Array<{ type: string; text?: string }> } };
|
||||
const content = deltaData?.delta?.content;
|
||||
|
||||
if (Array.isArray(content)) {
|
||||
for (const part of content) {
|
||||
if (part.type === 'text' && part.text) {
|
||||
ensureMessageContentStarted();
|
||||
emitOutputTextDelta(config, part.text);
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
/**
|
||||
* Handle reasoning deltas
|
||||
*/
|
||||
on_reasoning_delta: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const deltaData = data as {
|
||||
delta?: { content?: Array<{ type: string; text?: string; think?: string }> };
|
||||
};
|
||||
const content = deltaData?.delta?.content;
|
||||
|
||||
if (Array.isArray(content)) {
|
||||
for (const part of content) {
|
||||
const text = part.think || part.text;
|
||||
if (text) {
|
||||
ensureReasoningContentStarted();
|
||||
emitReasoningDelta(config, text);
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
/**
|
||||
* Handle run step (tool call initiation)
|
||||
*/
|
||||
on_run_step: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const stepData = data as {
|
||||
stepDetails?: { type: string; tool_calls?: Array<{ id?: string; name?: string }> };
|
||||
};
|
||||
const stepDetails = stepData?.stepDetails;
|
||||
|
||||
if (stepDetails?.type === 'tool_calls' && stepDetails.tool_calls) {
|
||||
// Close any open message/reasoning before tool calls
|
||||
closeOpenStreams();
|
||||
|
||||
for (const tc of stepDetails.tool_calls) {
|
||||
const callId = tc.id ?? '';
|
||||
const name = tc.name ?? '';
|
||||
|
||||
if (callId && !state.activeToolCalls.has(callId)) {
|
||||
state.activeToolCalls.add(callId);
|
||||
emitFunctionCallItemAdded(config, callId, name);
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
/**
|
||||
* Handle run step delta (tool call argument streaming)
|
||||
*/
|
||||
on_run_step_delta: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const deltaData = data as {
|
||||
delta?: { type: string; tool_calls?: Array<{ index?: number; args?: string }> };
|
||||
};
|
||||
const delta = deltaData?.delta;
|
||||
|
||||
if (delta?.type === 'tool_calls' && delta.tool_calls) {
|
||||
for (const tc of delta.tool_calls) {
|
||||
const args = tc.args ?? '';
|
||||
if (!args) {
|
||||
continue;
|
||||
}
|
||||
|
||||
// Find the call_id for this tool call by index
|
||||
const toolCallsArray = Array.from(state.activeToolCalls);
|
||||
const callId = toolCallsArray[tc.index ?? 0];
|
||||
|
||||
if (callId) {
|
||||
emitFunctionCallArgumentsDelta(config, callId, args);
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
/**
|
||||
* Handle tool end (tool execution complete)
|
||||
*/
|
||||
on_tool_end: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const toolData = data as { tool_call_id?: string; output?: string };
|
||||
const callId = toolData?.tool_call_id;
|
||||
const output = toolData?.output ?? '';
|
||||
|
||||
if (callId && state.activeToolCalls.has(callId) && !state.completedToolCalls.has(callId)) {
|
||||
state.completedToolCalls.add(callId);
|
||||
|
||||
// Complete the function call item
|
||||
emitFunctionCallArgumentsDone(config, callId);
|
||||
emitFunctionCallItemDone(config, callId);
|
||||
|
||||
// Emit the function call output (internal tool result)
|
||||
emitFunctionCallOutputItem(config, callId, output);
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
/**
|
||||
* Handle chat model end (usage collection)
|
||||
*/
|
||||
on_chat_model_end: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const endData = data as {
|
||||
output?: {
|
||||
usage_metadata?: {
|
||||
input_tokens?: number;
|
||||
output_tokens?: number;
|
||||
// OpenAI format
|
||||
input_token_details?: {
|
||||
cache_creation?: number;
|
||||
cache_read?: number;
|
||||
};
|
||||
// Anthropic format
|
||||
cache_creation_input_tokens?: number;
|
||||
cache_read_input_tokens?: number;
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
const usage = endData?.output?.usage_metadata;
|
||||
if (usage) {
|
||||
// Extract cached tokens from either OpenAI or Anthropic format
|
||||
const cachedTokens =
|
||||
(usage.input_token_details?.cache_read ?? 0) + (usage.cache_read_input_tokens ?? 0);
|
||||
|
||||
updateTrackerUsage(config.tracker, {
|
||||
promptTokens: usage.input_tokens,
|
||||
completionTokens: usage.output_tokens,
|
||||
cachedTokens,
|
||||
});
|
||||
}
|
||||
},
|
||||
},
|
||||
};
|
||||
|
||||
/**
|
||||
* Finalize the stream - close open items and emit completed
|
||||
*/
|
||||
const finalizeStream = (): void => {
|
||||
closeOpenStreams();
|
||||
emitResponseCompleted(config);
|
||||
writeDone(config.res);
|
||||
};
|
||||
|
||||
return { handlers, state, finalizeStream };
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* NON-STREAMING AGGREGATOR
|
||||
* ============================================================================= */
|
||||
|
||||
/**
|
||||
* Aggregator for non-streaming responses
|
||||
*/
|
||||
export interface ResponseAggregator {
|
||||
textChunks: string[];
|
||||
reasoningChunks: string[];
|
||||
toolCalls: Map<
|
||||
string,
|
||||
{
|
||||
id: string;
|
||||
name: string;
|
||||
arguments: string;
|
||||
}
|
||||
>;
|
||||
toolOutputs: Map<string, string>;
|
||||
usage: {
|
||||
inputTokens: number;
|
||||
outputTokens: number;
|
||||
reasoningTokens: number;
|
||||
cachedTokens: number;
|
||||
};
|
||||
addText: (text: string) => void;
|
||||
addReasoning: (text: string) => void;
|
||||
getText: () => string;
|
||||
getReasoning: () => string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create an aggregator for non-streaming responses
|
||||
*/
|
||||
export function createResponseAggregator(): ResponseAggregator {
|
||||
const aggregator: ResponseAggregator = {
|
||||
textChunks: [],
|
||||
reasoningChunks: [],
|
||||
toolCalls: new Map(),
|
||||
toolOutputs: new Map(),
|
||||
usage: {
|
||||
inputTokens: 0,
|
||||
outputTokens: 0,
|
||||
reasoningTokens: 0,
|
||||
cachedTokens: 0,
|
||||
},
|
||||
addText: (text: string) => {
|
||||
aggregator.textChunks.push(text);
|
||||
},
|
||||
addReasoning: (text: string) => {
|
||||
aggregator.reasoningChunks.push(text);
|
||||
},
|
||||
getText: () => aggregator.textChunks.join(''),
|
||||
getReasoning: () => aggregator.reasoningChunks.join(''),
|
||||
};
|
||||
return aggregator;
|
||||
}
|
||||
|
||||
/**
|
||||
* Build a non-streaming response from aggregator
|
||||
* Includes all required fields per Open Responses spec
|
||||
*/
|
||||
export function buildAggregatedResponse(
|
||||
context: ResponseContext,
|
||||
aggregator: ResponseAggregator,
|
||||
): Response {
|
||||
const output: Response['output'] = [];
|
||||
|
||||
// Add reasoning item if present
|
||||
const reasoningText = aggregator.getReasoning();
|
||||
if (reasoningText) {
|
||||
output.push({
|
||||
type: 'reasoning',
|
||||
id: `reason_${Date.now().toString(36)}`,
|
||||
status: 'completed',
|
||||
content: [{ type: 'reasoning_text', text: reasoningText }],
|
||||
summary: [],
|
||||
});
|
||||
}
|
||||
|
||||
// Add function calls and outputs
|
||||
for (const [callId, tc] of aggregator.toolCalls) {
|
||||
output.push({
|
||||
type: 'function_call',
|
||||
id: `fc_${Date.now().toString(36)}${Math.random().toString(36).substring(2, 6)}`,
|
||||
call_id: callId,
|
||||
name: tc.name,
|
||||
arguments: tc.arguments,
|
||||
status: 'completed',
|
||||
});
|
||||
|
||||
const toolOutput = aggregator.toolOutputs.get(callId);
|
||||
if (toolOutput) {
|
||||
output.push({
|
||||
type: 'function_call_output',
|
||||
id: `fco_${Date.now().toString(36)}${Math.random().toString(36).substring(2, 6)}`,
|
||||
call_id: callId,
|
||||
output: toolOutput,
|
||||
status: 'completed',
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Add message item if there's text (or always add one if no other output)
|
||||
const text = aggregator.getText();
|
||||
if (text || output.length === 0) {
|
||||
output.push({
|
||||
type: 'message',
|
||||
id: `msg_${Date.now().toString(36)}`,
|
||||
role: 'assistant',
|
||||
status: 'completed',
|
||||
content: text ? [{ type: 'output_text', text, annotations: [], logprobs: [] }] : [],
|
||||
});
|
||||
}
|
||||
|
||||
return {
|
||||
// Required fields per Open Responses spec
|
||||
id: context.responseId,
|
||||
object: 'response',
|
||||
created_at: context.createdAt,
|
||||
completed_at: Math.floor(Date.now() / 1000),
|
||||
status: 'completed',
|
||||
incomplete_details: null,
|
||||
model: context.model,
|
||||
previous_response_id: context.previousResponseId ?? null,
|
||||
instructions: context.instructions ?? null,
|
||||
output,
|
||||
error: null,
|
||||
tools: [],
|
||||
tool_choice: 'auto',
|
||||
truncation: 'disabled',
|
||||
parallel_tool_calls: true,
|
||||
text: { format: { type: 'text' } },
|
||||
temperature: 1,
|
||||
top_p: 1,
|
||||
presence_penalty: 0,
|
||||
frequency_penalty: 0,
|
||||
top_logprobs: 0,
|
||||
reasoning: null,
|
||||
user: null,
|
||||
usage: {
|
||||
input_tokens: aggregator.usage.inputTokens,
|
||||
output_tokens: aggregator.usage.outputTokens,
|
||||
total_tokens: aggregator.usage.inputTokens + aggregator.usage.outputTokens,
|
||||
input_tokens_details: { cached_tokens: aggregator.usage.cachedTokens },
|
||||
output_tokens_details: { reasoning_tokens: aggregator.usage.reasoningTokens },
|
||||
},
|
||||
max_output_tokens: null,
|
||||
max_tool_calls: null,
|
||||
store: false,
|
||||
background: false,
|
||||
service_tier: 'default',
|
||||
metadata: {},
|
||||
safety_identifier: null,
|
||||
prompt_cache_key: null,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Create event handlers for non-streaming aggregation
|
||||
*/
|
||||
export function createAggregatorEventHandlers(aggregator: ResponseAggregator): Record<
|
||||
string,
|
||||
{
|
||||
handle: (event: string, data: unknown) => void;
|
||||
}
|
||||
> {
|
||||
const activeToolCalls = new Set<string>();
|
||||
|
||||
return {
|
||||
on_message_delta: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const deltaData = data as { delta?: { content?: Array<{ type: string; text?: string }> } };
|
||||
const content = deltaData?.delta?.content;
|
||||
|
||||
if (Array.isArray(content)) {
|
||||
for (const part of content) {
|
||||
if (part.type === 'text' && part.text) {
|
||||
aggregator.addText(part.text);
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
on_reasoning_delta: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const deltaData = data as {
|
||||
delta?: { content?: Array<{ type: string; text?: string; think?: string }> };
|
||||
};
|
||||
const content = deltaData?.delta?.content;
|
||||
|
||||
if (Array.isArray(content)) {
|
||||
for (const part of content) {
|
||||
const text = part.think || part.text;
|
||||
if (text) {
|
||||
aggregator.addReasoning(text);
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
on_run_step: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const stepData = data as {
|
||||
stepDetails?: { type: string; tool_calls?: Array<{ id?: string; name?: string }> };
|
||||
};
|
||||
const stepDetails = stepData?.stepDetails;
|
||||
|
||||
if (stepDetails?.type === 'tool_calls' && stepDetails.tool_calls) {
|
||||
for (const tc of stepDetails.tool_calls) {
|
||||
const callId = tc.id ?? '';
|
||||
const name = tc.name ?? '';
|
||||
|
||||
if (callId && !activeToolCalls.has(callId)) {
|
||||
activeToolCalls.add(callId);
|
||||
aggregator.toolCalls.set(callId, { id: callId, name, arguments: '' });
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
on_run_step_delta: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const deltaData = data as {
|
||||
delta?: { type: string; tool_calls?: Array<{ index?: number; args?: string }> };
|
||||
};
|
||||
const delta = deltaData?.delta;
|
||||
|
||||
if (delta?.type === 'tool_calls' && delta.tool_calls) {
|
||||
for (const tc of delta.tool_calls) {
|
||||
const args = tc.args ?? '';
|
||||
if (!args) {
|
||||
continue;
|
||||
}
|
||||
|
||||
const toolCallsArray = Array.from(activeToolCalls);
|
||||
const callId = toolCallsArray[tc.index ?? 0];
|
||||
|
||||
if (callId) {
|
||||
const existing = aggregator.toolCalls.get(callId);
|
||||
if (existing) {
|
||||
existing.arguments += args;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
on_tool_end: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const toolData = data as { tool_call_id?: string; output?: string };
|
||||
const callId = toolData?.tool_call_id;
|
||||
const output = toolData?.output ?? '';
|
||||
|
||||
if (callId) {
|
||||
aggregator.toolOutputs.set(callId, output);
|
||||
}
|
||||
},
|
||||
},
|
||||
|
||||
on_chat_model_end: {
|
||||
handle: (_event: string, data: unknown): void => {
|
||||
const endData = data as {
|
||||
output?: {
|
||||
usage_metadata?: {
|
||||
input_tokens?: number;
|
||||
output_tokens?: number;
|
||||
// OpenAI format
|
||||
input_token_details?: {
|
||||
cache_creation?: number;
|
||||
cache_read?: number;
|
||||
};
|
||||
// Anthropic format
|
||||
cache_creation_input_tokens?: number;
|
||||
cache_read_input_tokens?: number;
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
const usage = endData?.output?.usage_metadata;
|
||||
if (usage) {
|
||||
aggregator.usage.inputTokens = usage.input_tokens ?? 0;
|
||||
aggregator.usage.outputTokens = usage.output_tokens ?? 0;
|
||||
|
||||
// Extract cached tokens from either OpenAI or Anthropic format
|
||||
aggregator.usage.cachedTokens =
|
||||
(usage.input_token_details?.cache_read ?? 0) + (usage.cache_read_input_tokens ?? 0);
|
||||
}
|
||||
},
|
||||
},
|
||||
};
|
||||
}
|
||||
779
packages/api/src/agents/responses/types.ts
Normal file
779
packages/api/src/agents/responses/types.ts
Normal file
|
|
@ -0,0 +1,779 @@
|
|||
/**
|
||||
* Open Responses API Types
|
||||
*
|
||||
* Types following the Open Responses specification for building multi-provider,
|
||||
* interoperable LLM interfaces. Items are the fundamental unit of context,
|
||||
* and streaming uses semantic events rather than simple deltas.
|
||||
*
|
||||
* @see https://openresponses.org/specification
|
||||
*/
|
||||
|
||||
/* =============================================================================
|
||||
* ENUMS
|
||||
* ============================================================================= */
|
||||
|
||||
/** Item status lifecycle */
|
||||
export type ItemStatus = 'in_progress' | 'incomplete' | 'completed';
|
||||
|
||||
/** Response status lifecycle */
|
||||
export type ResponseStatus = 'in_progress' | 'completed' | 'failed' | 'incomplete';
|
||||
|
||||
/** Message roles */
|
||||
export type MessageRole = 'user' | 'assistant' | 'system' | 'developer';
|
||||
|
||||
/** Tool choice options */
|
||||
export type ToolChoiceValue = 'none' | 'auto' | 'required';
|
||||
|
||||
/** Truncation options */
|
||||
export type TruncationValue = 'auto' | 'disabled';
|
||||
|
||||
/** Service tier options */
|
||||
export type ServiceTier = 'auto' | 'default' | 'flex' | 'priority';
|
||||
|
||||
/** Reasoning effort levels */
|
||||
export type ReasoningEffort = 'none' | 'low' | 'medium' | 'high' | 'xhigh';
|
||||
|
||||
/** Reasoning summary options */
|
||||
export type ReasoningSummary = 'concise' | 'detailed' | 'auto';
|
||||
|
||||
/* =============================================================================
|
||||
* INPUT CONTENT TYPES
|
||||
* ============================================================================= */
|
||||
|
||||
/** Text input content */
|
||||
export interface InputTextContent {
|
||||
type: 'input_text';
|
||||
text: string;
|
||||
}
|
||||
|
||||
/** Image input content */
|
||||
export interface InputImageContent {
|
||||
type: 'input_image';
|
||||
image_url?: string;
|
||||
file_id?: string;
|
||||
detail?: 'auto' | 'low' | 'high';
|
||||
}
|
||||
|
||||
/** File input content */
|
||||
export interface InputFileContent {
|
||||
type: 'input_file';
|
||||
file_id?: string;
|
||||
file_data?: string;
|
||||
filename?: string;
|
||||
}
|
||||
|
||||
/** Union of all input content types */
|
||||
export type InputContent = InputTextContent | InputImageContent | InputFileContent;
|
||||
|
||||
/* =============================================================================
|
||||
* OUTPUT CONTENT TYPES
|
||||
* ============================================================================= */
|
||||
|
||||
/** Log probability for a token */
|
||||
export interface LogProb {
|
||||
token: string;
|
||||
logprob: number;
|
||||
bytes?: number[];
|
||||
top_logprobs?: TopLogProb[];
|
||||
}
|
||||
|
||||
/** Top log probability entry */
|
||||
export interface TopLogProb {
|
||||
token: string;
|
||||
logprob: number;
|
||||
bytes?: number[];
|
||||
}
|
||||
|
||||
/** Text output content */
|
||||
export interface OutputTextContent {
|
||||
type: 'output_text';
|
||||
text: string;
|
||||
annotations: Annotation[];
|
||||
logprobs: LogProb[];
|
||||
}
|
||||
|
||||
/** Refusal content */
|
||||
export interface RefusalContent {
|
||||
type: 'refusal';
|
||||
refusal: string;
|
||||
}
|
||||
|
||||
/** Union of model output content types */
|
||||
export type ModelContent = OutputTextContent | RefusalContent;
|
||||
|
||||
/* =============================================================================
|
||||
* ANNOTATIONS
|
||||
* ============================================================================= */
|
||||
|
||||
/** URL citation annotation */
|
||||
export interface UrlCitationAnnotation {
|
||||
type: 'url_citation';
|
||||
url: string;
|
||||
title?: string;
|
||||
start_index: number;
|
||||
end_index: number;
|
||||
}
|
||||
|
||||
/** File citation annotation */
|
||||
export interface FileCitationAnnotation {
|
||||
type: 'file_citation';
|
||||
file_id: string;
|
||||
start_index: number;
|
||||
end_index: number;
|
||||
}
|
||||
|
||||
/** Union of annotation types */
|
||||
export type Annotation = UrlCitationAnnotation | FileCitationAnnotation;
|
||||
|
||||
/* =============================================================================
|
||||
* REASONING CONTENT
|
||||
* ============================================================================= */
|
||||
|
||||
/** Reasoning text content */
|
||||
export interface ReasoningTextContent {
|
||||
type: 'reasoning_text';
|
||||
text: string;
|
||||
}
|
||||
|
||||
/** Summary text content */
|
||||
export interface SummaryTextContent {
|
||||
type: 'summary_text';
|
||||
text: string;
|
||||
}
|
||||
|
||||
/** Reasoning content union */
|
||||
export type ReasoningContent = ReasoningTextContent;
|
||||
|
||||
/* =============================================================================
|
||||
* INPUT ITEMS (for request)
|
||||
* ============================================================================= */
|
||||
|
||||
/** System message input item */
|
||||
export interface SystemMessageItemParam {
|
||||
type: 'message';
|
||||
role: 'system';
|
||||
content: string | InputContent[];
|
||||
}
|
||||
|
||||
/** Developer message input item */
|
||||
export interface DeveloperMessageItemParam {
|
||||
type: 'message';
|
||||
role: 'developer';
|
||||
content: string | InputContent[];
|
||||
}
|
||||
|
||||
/** User message input item */
|
||||
export interface UserMessageItemParam {
|
||||
type: 'message';
|
||||
role: 'user';
|
||||
content: string | InputContent[];
|
||||
}
|
||||
|
||||
/** Assistant message input item */
|
||||
export interface AssistantMessageItemParam {
|
||||
type: 'message';
|
||||
role: 'assistant';
|
||||
content: string | ModelContent[];
|
||||
}
|
||||
|
||||
/** Function call input item (for providing context) */
|
||||
export interface FunctionCallItemParam {
|
||||
type: 'function_call';
|
||||
id: string;
|
||||
call_id: string;
|
||||
name: string;
|
||||
arguments: string;
|
||||
status?: ItemStatus;
|
||||
}
|
||||
|
||||
/** Function call output input item (for providing tool results) */
|
||||
export interface FunctionCallOutputItemParam {
|
||||
type: 'function_call_output';
|
||||
call_id: string;
|
||||
output: string;
|
||||
status?: ItemStatus;
|
||||
}
|
||||
|
||||
/** Reasoning input item */
|
||||
export interface ReasoningItemParam {
|
||||
type: 'reasoning';
|
||||
id?: string;
|
||||
content?: ReasoningContent[];
|
||||
encrypted_content?: string;
|
||||
summary?: SummaryTextContent[];
|
||||
status?: ItemStatus;
|
||||
}
|
||||
|
||||
/** Item reference (for referencing existing items) */
|
||||
export interface ItemReferenceParam {
|
||||
type: 'item_reference';
|
||||
id: string;
|
||||
}
|
||||
|
||||
/** Union of all input item types */
|
||||
export type InputItem =
|
||||
| SystemMessageItemParam
|
||||
| DeveloperMessageItemParam
|
||||
| UserMessageItemParam
|
||||
| AssistantMessageItemParam
|
||||
| FunctionCallItemParam
|
||||
| FunctionCallOutputItemParam
|
||||
| ReasoningItemParam
|
||||
| ItemReferenceParam;
|
||||
|
||||
/* =============================================================================
|
||||
* OUTPUT ITEMS (in response)
|
||||
* ============================================================================= */
|
||||
|
||||
/** Message output item */
|
||||
export interface MessageItem {
|
||||
type: 'message';
|
||||
id: string;
|
||||
role: 'assistant';
|
||||
status: ItemStatus;
|
||||
content: ModelContent[];
|
||||
}
|
||||
|
||||
/** Function call output item */
|
||||
export interface FunctionCallItem {
|
||||
type: 'function_call';
|
||||
id: string;
|
||||
call_id: string;
|
||||
name: string;
|
||||
arguments: string;
|
||||
status: ItemStatus;
|
||||
}
|
||||
|
||||
/** Function call output result item (internal tool execution result) */
|
||||
export interface FunctionCallOutputItem {
|
||||
type: 'function_call_output';
|
||||
id: string;
|
||||
call_id: string;
|
||||
output: string;
|
||||
status: ItemStatus;
|
||||
}
|
||||
|
||||
/** Reasoning output item */
|
||||
export interface ReasoningItem {
|
||||
type: 'reasoning';
|
||||
id: string;
|
||||
status?: ItemStatus;
|
||||
content?: ReasoningContent[];
|
||||
encrypted_content?: string;
|
||||
/** Required per Open Responses spec - summary content parts */
|
||||
summary: SummaryTextContent[];
|
||||
}
|
||||
|
||||
/** Union of all output item types */
|
||||
export type OutputItem = MessageItem | FunctionCallItem | FunctionCallOutputItem | ReasoningItem;
|
||||
|
||||
/* =============================================================================
|
||||
* TOOLS
|
||||
* ============================================================================= */
|
||||
|
||||
/** Function tool definition */
|
||||
export interface FunctionTool {
|
||||
type: 'function';
|
||||
name: string;
|
||||
description?: string;
|
||||
parameters?: Record<string, unknown>;
|
||||
strict?: boolean;
|
||||
}
|
||||
|
||||
/** Hosted tool (provider-specific) */
|
||||
export interface HostedTool {
|
||||
type: string; // e.g., 'librechat:web_search'
|
||||
[key: string]: unknown;
|
||||
}
|
||||
|
||||
/** Union of tool types */
|
||||
export type Tool = FunctionTool | HostedTool;
|
||||
|
||||
/** Specific function tool choice */
|
||||
export interface FunctionToolChoice {
|
||||
type: 'function';
|
||||
name: string;
|
||||
}
|
||||
|
||||
/** Tool choice parameter */
|
||||
export type ToolChoice = ToolChoiceValue | FunctionToolChoice;
|
||||
|
||||
/* =============================================================================
|
||||
* REQUEST
|
||||
* ============================================================================= */
|
||||
|
||||
/** Reasoning configuration */
|
||||
export interface ReasoningConfig {
|
||||
effort?: ReasoningEffort;
|
||||
summary?: ReasoningSummary;
|
||||
}
|
||||
|
||||
/** Text output configuration */
|
||||
export interface TextConfig {
|
||||
format?: {
|
||||
type: 'text' | 'json_object' | 'json_schema';
|
||||
json_schema?: Record<string, unknown>;
|
||||
};
|
||||
}
|
||||
|
||||
/** Stream options */
|
||||
export interface StreamOptions {
|
||||
include_usage?: boolean;
|
||||
}
|
||||
|
||||
/** Metadata (key-value pairs) */
|
||||
export type Metadata = Record<string, string>;
|
||||
|
||||
/** Open Responses API Request */
|
||||
export interface ResponseRequest {
|
||||
/** Model/agent ID to use */
|
||||
model: string;
|
||||
|
||||
/** Input context - string or array of items */
|
||||
input: string | InputItem[];
|
||||
|
||||
/** Previous response ID for conversation continuation */
|
||||
previous_response_id?: string;
|
||||
|
||||
/** Tools available to the model */
|
||||
tools?: Tool[];
|
||||
|
||||
/** Tool choice configuration */
|
||||
tool_choice?: ToolChoice;
|
||||
|
||||
/** Whether to stream the response */
|
||||
stream?: boolean;
|
||||
|
||||
/** Stream options */
|
||||
stream_options?: StreamOptions;
|
||||
|
||||
/** Additional instructions */
|
||||
instructions?: string;
|
||||
|
||||
/** Maximum output tokens */
|
||||
max_output_tokens?: number;
|
||||
|
||||
/** Maximum tool calls */
|
||||
max_tool_calls?: number;
|
||||
|
||||
/** Sampling temperature */
|
||||
temperature?: number;
|
||||
|
||||
/** Top-p sampling */
|
||||
top_p?: number;
|
||||
|
||||
/** Presence penalty */
|
||||
presence_penalty?: number;
|
||||
|
||||
/** Frequency penalty */
|
||||
frequency_penalty?: number;
|
||||
|
||||
/** Reasoning configuration */
|
||||
reasoning?: ReasoningConfig;
|
||||
|
||||
/** Text output configuration */
|
||||
text?: TextConfig;
|
||||
|
||||
/** Truncation behavior */
|
||||
truncation?: TruncationValue;
|
||||
|
||||
/** Service tier */
|
||||
service_tier?: ServiceTier;
|
||||
|
||||
/** Whether to store the response */
|
||||
store?: boolean;
|
||||
|
||||
/** Metadata */
|
||||
metadata?: Metadata;
|
||||
|
||||
/** Whether model can call multiple tools in parallel */
|
||||
parallel_tool_calls?: boolean;
|
||||
|
||||
/** User identifier for safety */
|
||||
user?: string;
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* RESPONSE
|
||||
* ============================================================================= */
|
||||
|
||||
/** Token usage details */
|
||||
export interface InputTokensDetails {
|
||||
cached_tokens: number;
|
||||
}
|
||||
|
||||
/** Output tokens details */
|
||||
export interface OutputTokensDetails {
|
||||
reasoning_tokens: number;
|
||||
}
|
||||
|
||||
/** Token usage statistics */
|
||||
export interface Usage {
|
||||
input_tokens: number;
|
||||
output_tokens: number;
|
||||
total_tokens: number;
|
||||
input_tokens_details: InputTokensDetails;
|
||||
output_tokens_details: OutputTokensDetails;
|
||||
}
|
||||
|
||||
/** Incomplete details */
|
||||
export interface IncompleteDetails {
|
||||
reason: 'max_output_tokens' | 'max_tool_calls' | 'content_filter' | 'other';
|
||||
}
|
||||
|
||||
/** Error object */
|
||||
export interface ResponseError {
|
||||
type: 'server_error' | 'invalid_request' | 'not_found' | 'model_error' | 'too_many_requests';
|
||||
code?: string;
|
||||
message: string;
|
||||
param?: string;
|
||||
}
|
||||
|
||||
/** Text field configuration */
|
||||
export interface TextField {
|
||||
format?: {
|
||||
type: 'text' | 'json_object' | 'json_schema';
|
||||
json_schema?: Record<string, unknown>;
|
||||
};
|
||||
}
|
||||
|
||||
/** Open Responses API Response - All required fields per spec */
|
||||
export interface Response {
|
||||
/** Response ID */
|
||||
id: string;
|
||||
|
||||
/** Object type - always "response" */
|
||||
object: 'response';
|
||||
|
||||
/** Creation timestamp (Unix seconds) */
|
||||
created_at: number;
|
||||
|
||||
/** Completion timestamp (Unix seconds) - null if not completed */
|
||||
completed_at: number | null;
|
||||
|
||||
/** Response status */
|
||||
status: ResponseStatus;
|
||||
|
||||
/** Incomplete details - null if not incomplete */
|
||||
incomplete_details: IncompleteDetails | null;
|
||||
|
||||
/** Model that generated the response */
|
||||
model: string;
|
||||
|
||||
/** Previous response ID - null if not a continuation */
|
||||
previous_response_id: string | null;
|
||||
|
||||
/** Instructions used - null if none */
|
||||
instructions: string | null;
|
||||
|
||||
/** Output items */
|
||||
output: OutputItem[];
|
||||
|
||||
/** Error - null if no error */
|
||||
error: ResponseError | null;
|
||||
|
||||
/** Tools available */
|
||||
tools: Tool[];
|
||||
|
||||
/** Tool choice setting */
|
||||
tool_choice: ToolChoice;
|
||||
|
||||
/** Truncation setting used */
|
||||
truncation: TruncationValue;
|
||||
|
||||
/** Whether parallel tool calls were allowed */
|
||||
parallel_tool_calls: boolean;
|
||||
|
||||
/** Text configuration used */
|
||||
text: TextField;
|
||||
|
||||
/** Temperature used */
|
||||
temperature: number;
|
||||
|
||||
/** Top-p used */
|
||||
top_p: number;
|
||||
|
||||
/** Presence penalty used */
|
||||
presence_penalty: number;
|
||||
|
||||
/** Frequency penalty used */
|
||||
frequency_penalty: number;
|
||||
|
||||
/** Top logprobs - number of most likely tokens to return */
|
||||
top_logprobs: number;
|
||||
|
||||
/** Reasoning configuration - null if none */
|
||||
reasoning: ReasoningConfig | null;
|
||||
|
||||
/** User identifier - null if none */
|
||||
user: string | null;
|
||||
|
||||
/** Token usage - null if not available */
|
||||
usage: Usage | null;
|
||||
|
||||
/** Max output tokens - null if not set */
|
||||
max_output_tokens: number | null;
|
||||
|
||||
/** Max tool calls - null if not set */
|
||||
max_tool_calls: number | null;
|
||||
|
||||
/** Whether response was stored */
|
||||
store: boolean;
|
||||
|
||||
/** Whether request was run in background */
|
||||
background: boolean;
|
||||
|
||||
/** Service tier used */
|
||||
service_tier: string;
|
||||
|
||||
/** Metadata */
|
||||
metadata: Metadata;
|
||||
|
||||
/** Safety identifier - null if none */
|
||||
safety_identifier: string | null;
|
||||
|
||||
/** Prompt cache key - null if none */
|
||||
prompt_cache_key: string | null;
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* STREAMING EVENTS
|
||||
* ============================================================================= */
|
||||
|
||||
/** Base event structure */
|
||||
export interface BaseEvent {
|
||||
type: string;
|
||||
sequence_number: number;
|
||||
}
|
||||
|
||||
/** Response created event (first event in stream) */
|
||||
export interface ResponseCreatedEvent extends BaseEvent {
|
||||
type: 'response.created';
|
||||
response: Response;
|
||||
}
|
||||
|
||||
/** Response in_progress event */
|
||||
export interface ResponseInProgressEvent extends BaseEvent {
|
||||
type: 'response.in_progress';
|
||||
response: Response;
|
||||
}
|
||||
|
||||
/** Response completed event */
|
||||
export interface ResponseCompletedEvent extends BaseEvent {
|
||||
type: 'response.completed';
|
||||
response: Response;
|
||||
}
|
||||
|
||||
/** Response failed event */
|
||||
export interface ResponseFailedEvent extends BaseEvent {
|
||||
type: 'response.failed';
|
||||
response: Response;
|
||||
}
|
||||
|
||||
/** Response incomplete event */
|
||||
export interface ResponseIncompleteEvent extends BaseEvent {
|
||||
type: 'response.incomplete';
|
||||
response: Response;
|
||||
}
|
||||
|
||||
/** Output item added event */
|
||||
export interface OutputItemAddedEvent extends BaseEvent {
|
||||
type: 'response.output_item.added';
|
||||
output_index: number;
|
||||
item: OutputItem;
|
||||
}
|
||||
|
||||
/** Output item done event */
|
||||
export interface OutputItemDoneEvent extends BaseEvent {
|
||||
type: 'response.output_item.done';
|
||||
output_index: number;
|
||||
item: OutputItem;
|
||||
}
|
||||
|
||||
/** Content part added event */
|
||||
export interface ContentPartAddedEvent extends BaseEvent {
|
||||
type: 'response.content_part.added';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
content_index: number;
|
||||
part: ModelContent | ReasoningContent;
|
||||
}
|
||||
|
||||
/** Content part done event */
|
||||
export interface ContentPartDoneEvent extends BaseEvent {
|
||||
type: 'response.content_part.done';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
content_index: number;
|
||||
part: ModelContent | ReasoningContent;
|
||||
}
|
||||
|
||||
/** Output text delta event */
|
||||
export interface OutputTextDeltaEvent extends BaseEvent {
|
||||
type: 'response.output_text.delta';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
content_index: number;
|
||||
delta: string;
|
||||
logprobs: LogProb[];
|
||||
}
|
||||
|
||||
/** Output text done event */
|
||||
export interface OutputTextDoneEvent extends BaseEvent {
|
||||
type: 'response.output_text.done';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
content_index: number;
|
||||
text: string;
|
||||
logprobs: LogProb[];
|
||||
}
|
||||
|
||||
/** Refusal delta event */
|
||||
export interface RefusalDeltaEvent extends BaseEvent {
|
||||
type: 'response.refusal.delta';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
content_index: number;
|
||||
delta: string;
|
||||
}
|
||||
|
||||
/** Refusal done event */
|
||||
export interface RefusalDoneEvent extends BaseEvent {
|
||||
type: 'response.refusal.done';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
content_index: number;
|
||||
refusal: string;
|
||||
}
|
||||
|
||||
/** Function call arguments delta event */
|
||||
export interface FunctionCallArgumentsDeltaEvent extends BaseEvent {
|
||||
type: 'response.function_call_arguments.delta';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
call_id: string;
|
||||
delta: string;
|
||||
}
|
||||
|
||||
/** Function call arguments done event */
|
||||
export interface FunctionCallArgumentsDoneEvent extends BaseEvent {
|
||||
type: 'response.function_call_arguments.done';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
call_id: string;
|
||||
arguments: string;
|
||||
}
|
||||
|
||||
/** Reasoning delta event */
|
||||
export interface ReasoningDeltaEvent extends BaseEvent {
|
||||
type: 'response.reasoning.delta';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
content_index: number;
|
||||
delta: string;
|
||||
}
|
||||
|
||||
/** Reasoning done event */
|
||||
export interface ReasoningDoneEvent extends BaseEvent {
|
||||
type: 'response.reasoning.done';
|
||||
item_id: string;
|
||||
output_index: number;
|
||||
content_index: number;
|
||||
text: string;
|
||||
}
|
||||
|
||||
/** Error event */
|
||||
export interface ErrorEvent extends BaseEvent {
|
||||
type: 'error';
|
||||
error: ResponseError;
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
* LIBRECHAT EXTENSION TYPES
|
||||
* Per Open Responses spec, custom types MUST be prefixed with implementor slug
|
||||
* @see https://openresponses.org/specification#extending-streaming-events
|
||||
* ============================================================================= */
|
||||
|
||||
/** Attachment content types for LibreChat extensions */
|
||||
export interface LibreChatAttachmentContent {
|
||||
/** File ID in LibreChat storage */
|
||||
file_id?: string;
|
||||
/** Original filename */
|
||||
filename?: string;
|
||||
/** MIME type */
|
||||
type?: string;
|
||||
/** URL to access the file */
|
||||
url?: string;
|
||||
/** Base64-encoded image data (for inline images) */
|
||||
image_url?: string;
|
||||
/** Width for images */
|
||||
width?: number;
|
||||
/** Height for images */
|
||||
height?: number;
|
||||
/** Associated tool call ID */
|
||||
tool_call_id?: string;
|
||||
/** Additional metadata */
|
||||
[key: string]: unknown;
|
||||
}
|
||||
|
||||
/**
|
||||
* LibreChat attachment event - custom streaming event for file/image attachments
|
||||
* Follows Open Responses extension pattern with librechat: prefix
|
||||
*/
|
||||
export interface LibreChatAttachmentEvent extends BaseEvent {
|
||||
type: 'librechat:attachment';
|
||||
/** The attachment data */
|
||||
attachment: LibreChatAttachmentContent;
|
||||
/** Associated message ID */
|
||||
message_id?: string;
|
||||
/** Associated conversation ID */
|
||||
conversation_id?: string;
|
||||
}
|
||||
|
||||
/** Union of all streaming events (including LibreChat extensions) */
|
||||
export type ResponseEvent =
|
||||
| ResponseCreatedEvent
|
||||
| ResponseInProgressEvent
|
||||
| ResponseCompletedEvent
|
||||
| ResponseFailedEvent
|
||||
| ResponseIncompleteEvent
|
||||
| OutputItemAddedEvent
|
||||
| OutputItemDoneEvent
|
||||
| ContentPartAddedEvent
|
||||
| ContentPartDoneEvent
|
||||
| OutputTextDeltaEvent
|
||||
| OutputTextDoneEvent
|
||||
| RefusalDeltaEvent
|
||||
| RefusalDoneEvent
|
||||
| FunctionCallArgumentsDeltaEvent
|
||||
| FunctionCallArgumentsDoneEvent
|
||||
| ReasoningDeltaEvent
|
||||
| ReasoningDoneEvent
|
||||
| ErrorEvent
|
||||
// LibreChat extensions (prefixed per Open Responses spec)
|
||||
| LibreChatAttachmentEvent;
|
||||
|
||||
/* =============================================================================
|
||||
* INTERNAL TYPES
|
||||
* ============================================================================= */
|
||||
|
||||
/** Context for building responses */
|
||||
export interface ResponseContext {
|
||||
/** Response ID */
|
||||
responseId: string;
|
||||
/** Model/agent ID */
|
||||
model: string;
|
||||
/** Creation timestamp */
|
||||
createdAt: number;
|
||||
/** Previous response ID */
|
||||
previousResponseId?: string;
|
||||
/** Instructions */
|
||||
instructions?: string;
|
||||
}
|
||||
|
||||
/** Validation result for requests */
|
||||
export interface RequestValidationResult {
|
||||
valid: boolean;
|
||||
request?: ResponseRequest;
|
||||
error?: string;
|
||||
}
|
||||
129
packages/api/src/apiKeys/handlers.ts
Normal file
129
packages/api/src/apiKeys/handlers.ts
Normal file
|
|
@ -0,0 +1,129 @@
|
|||
import type { Request, Response } from 'express';
|
||||
import type { Types } from 'mongoose';
|
||||
import { logger } from '@librechat/data-schemas';
|
||||
|
||||
export interface ApiKeyHandlerDependencies {
|
||||
createAgentApiKey: (params: {
|
||||
userId: string | Types.ObjectId;
|
||||
name: string;
|
||||
expiresAt?: Date | null;
|
||||
}) => Promise<{
|
||||
id: string;
|
||||
name: string;
|
||||
key: string;
|
||||
keyPrefix: string;
|
||||
createdAt: Date;
|
||||
expiresAt?: Date;
|
||||
}>;
|
||||
listAgentApiKeys: (userId: string | Types.ObjectId) => Promise<
|
||||
Array<{
|
||||
id: string;
|
||||
name: string;
|
||||
keyPrefix: string;
|
||||
lastUsedAt?: Date;
|
||||
expiresAt?: Date;
|
||||
createdAt: Date;
|
||||
}>
|
||||
>;
|
||||
deleteAgentApiKey: (
|
||||
keyId: string | Types.ObjectId,
|
||||
userId: string | Types.ObjectId,
|
||||
) => Promise<boolean>;
|
||||
getAgentApiKeyById: (
|
||||
keyId: string | Types.ObjectId,
|
||||
userId: string | Types.ObjectId,
|
||||
) => Promise<{
|
||||
id: string;
|
||||
name: string;
|
||||
keyPrefix: string;
|
||||
lastUsedAt?: Date;
|
||||
expiresAt?: Date;
|
||||
createdAt: Date;
|
||||
} | null>;
|
||||
}
|
||||
|
||||
interface AuthenticatedRequest extends Request {
|
||||
user?: {
|
||||
id: string;
|
||||
_id: Types.ObjectId;
|
||||
};
|
||||
}
|
||||
|
||||
export function createApiKeyHandlers(deps: ApiKeyHandlerDependencies) {
|
||||
async function createApiKey(req: AuthenticatedRequest, res: Response) {
|
||||
try {
|
||||
const { name, expiresAt } = req.body;
|
||||
|
||||
if (!name || typeof name !== 'string' || name.trim() === '') {
|
||||
return res.status(400).json({
|
||||
error: 'API key name is required',
|
||||
});
|
||||
}
|
||||
|
||||
const result = await deps.createAgentApiKey({
|
||||
userId: req.user?.id || '',
|
||||
name: name.trim(),
|
||||
expiresAt: expiresAt ? new Date(expiresAt) : null,
|
||||
});
|
||||
|
||||
res.status(201).json({
|
||||
id: result.id,
|
||||
name: result.name,
|
||||
key: result.key,
|
||||
keyPrefix: result.keyPrefix,
|
||||
createdAt: result.createdAt,
|
||||
expiresAt: result.expiresAt,
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('[createApiKey] Error creating API key:', error);
|
||||
res.status(500).json({ error: 'Failed to create API key' });
|
||||
}
|
||||
}
|
||||
|
||||
async function listApiKeys(req: AuthenticatedRequest, res: Response) {
|
||||
try {
|
||||
const keys = await deps.listAgentApiKeys(req.user?.id || '');
|
||||
res.status(200).json({ keys });
|
||||
} catch (error) {
|
||||
logger.error('[listApiKeys] Error listing API keys:', error);
|
||||
res.status(500).json({ error: 'Failed to list API keys' });
|
||||
}
|
||||
}
|
||||
|
||||
async function getApiKey(req: AuthenticatedRequest, res: Response) {
|
||||
try {
|
||||
const key = await deps.getAgentApiKeyById(req.params.id, req.user?.id || '');
|
||||
|
||||
if (!key) {
|
||||
return res.status(404).json({ error: 'API key not found' });
|
||||
}
|
||||
|
||||
res.status(200).json(key);
|
||||
} catch (error) {
|
||||
logger.error('[getApiKey] Error getting API key:', error);
|
||||
res.status(500).json({ error: 'Failed to get API key' });
|
||||
}
|
||||
}
|
||||
|
||||
async function deleteApiKey(req: AuthenticatedRequest, res: Response) {
|
||||
try {
|
||||
const deleted = await deps.deleteAgentApiKey(req.params.id, req.user?.id || '');
|
||||
|
||||
if (!deleted) {
|
||||
return res.status(404).json({ error: 'API key not found' });
|
||||
}
|
||||
|
||||
res.status(204).send();
|
||||
} catch (error) {
|
||||
logger.error('[deleteApiKey] Error deleting API key:', error);
|
||||
res.status(500).json({ error: 'Failed to delete API key' });
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
createApiKey,
|
||||
listApiKeys,
|
||||
getApiKey,
|
||||
deleteApiKey,
|
||||
};
|
||||
}
|
||||
4
packages/api/src/apiKeys/index.ts
Normal file
4
packages/api/src/apiKeys/index.ts
Normal file
|
|
@ -0,0 +1,4 @@
|
|||
export * from './service';
|
||||
export * from './middleware';
|
||||
export * from './handlers';
|
||||
export * from './permissions';
|
||||
163
packages/api/src/apiKeys/middleware.ts
Normal file
163
packages/api/src/apiKeys/middleware.ts
Normal file
|
|
@ -0,0 +1,163 @@
|
|||
import { logger } from '@librechat/data-schemas';
|
||||
import { ResourceType, PermissionBits, hasPermissions } from 'librechat-data-provider';
|
||||
import type { Request, Response, NextFunction } from 'express';
|
||||
import type { IUser } from '@librechat/data-schemas';
|
||||
import type { Types } from 'mongoose';
|
||||
import { getRemoteAgentPermissions } from './service';
|
||||
|
||||
export interface ApiKeyAuthDependencies {
|
||||
validateAgentApiKey: (apiKey: string) => Promise<{
|
||||
userId: Types.ObjectId;
|
||||
keyId: Types.ObjectId;
|
||||
} | null>;
|
||||
findUser: (query: { _id: string | Types.ObjectId }) => Promise<IUser | null>;
|
||||
}
|
||||
|
||||
export interface RemoteAgentAccessDependencies {
|
||||
getAgent: (query: {
|
||||
id: string;
|
||||
}) => Promise<{ _id: Types.ObjectId; [key: string]: unknown } | null>;
|
||||
getEffectivePermissions: (params: {
|
||||
userId: string;
|
||||
role?: string;
|
||||
resourceType: ResourceType;
|
||||
resourceId: string | Types.ObjectId;
|
||||
}) => Promise<number>;
|
||||
}
|
||||
|
||||
export interface ApiKeyAuthRequest extends Request {
|
||||
user?: IUser & { id: string };
|
||||
apiKeyId?: Types.ObjectId;
|
||||
}
|
||||
|
||||
export interface RemoteAgentAccessRequest extends ApiKeyAuthRequest {
|
||||
agent?: { _id: Types.ObjectId; [key: string]: unknown };
|
||||
agentPermissions?: number;
|
||||
}
|
||||
|
||||
export function createRequireApiKeyAuth(deps: ApiKeyAuthDependencies) {
|
||||
return async (req: ApiKeyAuthRequest, res: Response, next: NextFunction) => {
|
||||
const authHeader = req.headers.authorization;
|
||||
|
||||
if (!authHeader || !authHeader.startsWith('Bearer ')) {
|
||||
return res.status(401).json({
|
||||
error: {
|
||||
message: 'Missing or invalid Authorization header. Expected: Bearer <api_key>',
|
||||
type: 'invalid_request_error',
|
||||
code: 'missing_api_key',
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
const apiKey = authHeader.slice(7);
|
||||
|
||||
if (!apiKey || apiKey.trim() === '') {
|
||||
return res.status(401).json({
|
||||
error: {
|
||||
message: 'API key is required',
|
||||
type: 'invalid_request_error',
|
||||
code: 'missing_api_key',
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
try {
|
||||
const keyValidation = await deps.validateAgentApiKey(apiKey);
|
||||
|
||||
if (!keyValidation) {
|
||||
return res.status(401).json({
|
||||
error: {
|
||||
message: 'Invalid API key',
|
||||
type: 'invalid_request_error',
|
||||
code: 'invalid_api_key',
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
const user = await deps.findUser({ _id: keyValidation.userId });
|
||||
|
||||
if (!user) {
|
||||
return res.status(401).json({
|
||||
error: {
|
||||
message: 'User not found for this API key',
|
||||
type: 'invalid_request_error',
|
||||
code: 'invalid_api_key',
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
user.id = (user._id as Types.ObjectId).toString();
|
||||
req.user = user as IUser & { id: string };
|
||||
req.apiKeyId = keyValidation.keyId;
|
||||
|
||||
next();
|
||||
} catch (error) {
|
||||
logger.error('[requireApiKeyAuth] Error validating API key:', error);
|
||||
return res.status(500).json({
|
||||
error: {
|
||||
message: 'Internal server error during authentication',
|
||||
type: 'server_error',
|
||||
code: 'internal_error',
|
||||
},
|
||||
});
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
export function createCheckRemoteAgentAccess(deps: RemoteAgentAccessDependencies) {
|
||||
return async (req: RemoteAgentAccessRequest, res: Response, next: NextFunction) => {
|
||||
const agentId = req.body?.model || req.params?.model;
|
||||
|
||||
if (!agentId) {
|
||||
return res.status(400).json({
|
||||
error: {
|
||||
message: 'Model (agent ID) is required',
|
||||
type: 'invalid_request_error',
|
||||
code: 'missing_model',
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
try {
|
||||
const agent = await deps.getAgent({ id: agentId });
|
||||
|
||||
if (!agent) {
|
||||
return res.status(404).json({
|
||||
error: {
|
||||
message: `Agent not found: ${agentId}`,
|
||||
type: 'invalid_request_error',
|
||||
code: 'model_not_found',
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
const userId = req.user?.id || '';
|
||||
|
||||
const permissions = await getRemoteAgentPermissions(deps, userId, req.user?.role, agent._id);
|
||||
|
||||
if (!hasPermissions(permissions, PermissionBits.VIEW)) {
|
||||
return res.status(403).json({
|
||||
error: {
|
||||
message: `No remote access to agent: ${agentId}`,
|
||||
type: 'permission_error',
|
||||
code: 'access_denied',
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
req.agent = agent;
|
||||
req.agentPermissions = permissions;
|
||||
|
||||
next();
|
||||
} catch (error) {
|
||||
logger.error('[checkRemoteAgentAccess] Error checking agent access:', error);
|
||||
return res.status(500).json({
|
||||
error: {
|
||||
message: 'Internal server error while checking agent access',
|
||||
type: 'server_error',
|
||||
code: 'internal_error',
|
||||
},
|
||||
});
|
||||
}
|
||||
};
|
||||
}
|
||||
169
packages/api/src/apiKeys/permissions.ts
Normal file
169
packages/api/src/apiKeys/permissions.ts
Normal file
|
|
@ -0,0 +1,169 @@
|
|||
import {
|
||||
ResourceType,
|
||||
PrincipalType,
|
||||
PermissionBits,
|
||||
AccessRoleIds,
|
||||
} from 'librechat-data-provider';
|
||||
import type { Types, Model } from 'mongoose';
|
||||
|
||||
export interface Principal {
|
||||
type: string;
|
||||
id: string;
|
||||
name: string;
|
||||
email?: string;
|
||||
avatar?: string;
|
||||
source?: string;
|
||||
idOnTheSource?: string;
|
||||
accessRoleId: string;
|
||||
isImplicit?: boolean;
|
||||
}
|
||||
|
||||
export interface EnricherDependencies {
|
||||
AclEntry: Model<{
|
||||
principalType: string;
|
||||
principalId: Types.ObjectId;
|
||||
resourceType: string;
|
||||
resourceId: Types.ObjectId;
|
||||
permBits: number;
|
||||
roleId: Types.ObjectId;
|
||||
grantedBy: Types.ObjectId;
|
||||
grantedAt: Date;
|
||||
}>;
|
||||
AccessRole: Model<{
|
||||
accessRoleId: string;
|
||||
permBits: number;
|
||||
}>;
|
||||
logger: { error: (msg: string, ...args: unknown[]) => void };
|
||||
}
|
||||
|
||||
export interface EnrichResult {
|
||||
principals: Principal[];
|
||||
entriesToBackfill: Types.ObjectId[];
|
||||
}
|
||||
|
||||
/** Enriches REMOTE_AGENT principals with implicit AGENT owners */
|
||||
export async function enrichRemoteAgentPrincipals(
|
||||
deps: EnricherDependencies,
|
||||
resourceId: string | Types.ObjectId,
|
||||
principals: Principal[],
|
||||
): Promise<EnrichResult> {
|
||||
const { AclEntry } = deps;
|
||||
|
||||
const resourceObjectId =
|
||||
typeof resourceId === 'string' && /^[a-f\d]{24}$/i.test(resourceId)
|
||||
? deps.AclEntry.base.Types.ObjectId.createFromHexString(resourceId)
|
||||
: resourceId;
|
||||
|
||||
const agentOwnerEntries = await AclEntry.aggregate([
|
||||
{
|
||||
$match: {
|
||||
resourceType: ResourceType.AGENT,
|
||||
resourceId: resourceObjectId,
|
||||
principalType: PrincipalType.USER,
|
||||
permBits: { $bitsAllSet: PermissionBits.SHARE },
|
||||
},
|
||||
},
|
||||
{
|
||||
$lookup: {
|
||||
from: 'users',
|
||||
localField: 'principalId',
|
||||
foreignField: '_id',
|
||||
as: 'userInfo',
|
||||
},
|
||||
},
|
||||
{
|
||||
$project: {
|
||||
principalId: 1,
|
||||
userInfo: { $arrayElemAt: ['$userInfo', 0] },
|
||||
},
|
||||
},
|
||||
]);
|
||||
|
||||
const enrichedPrincipals = [...principals];
|
||||
const entriesToBackfill: Types.ObjectId[] = [];
|
||||
|
||||
for (const entry of agentOwnerEntries) {
|
||||
if (!entry.userInfo) {
|
||||
continue;
|
||||
}
|
||||
|
||||
const alreadyIncluded = enrichedPrincipals.some(
|
||||
(p) => p.type === PrincipalType.USER && p.id === entry.principalId.toString(),
|
||||
);
|
||||
|
||||
if (!alreadyIncluded) {
|
||||
enrichedPrincipals.unshift({
|
||||
type: PrincipalType.USER,
|
||||
id: entry.userInfo._id.toString(),
|
||||
name: entry.userInfo.name || entry.userInfo.username,
|
||||
email: entry.userInfo.email,
|
||||
avatar: entry.userInfo.avatar,
|
||||
source: 'local',
|
||||
idOnTheSource: entry.userInfo.idOnTheSource || entry.userInfo._id.toString(),
|
||||
accessRoleId: AccessRoleIds.REMOTE_AGENT_OWNER,
|
||||
isImplicit: true,
|
||||
});
|
||||
|
||||
entriesToBackfill.push(entry.principalId);
|
||||
}
|
||||
}
|
||||
|
||||
return { principals: enrichedPrincipals, entriesToBackfill };
|
||||
}
|
||||
|
||||
/** Backfills REMOTE_AGENT ACL entries for AGENT owners (fire-and-forget) */
|
||||
export function backfillRemoteAgentPermissions(
|
||||
deps: EnricherDependencies,
|
||||
resourceId: string | Types.ObjectId,
|
||||
entriesToBackfill: Types.ObjectId[],
|
||||
): void {
|
||||
if (entriesToBackfill.length === 0) {
|
||||
return;
|
||||
}
|
||||
|
||||
const { AclEntry, AccessRole, logger } = deps;
|
||||
|
||||
const resourceObjectId =
|
||||
typeof resourceId === 'string' && /^[a-f\d]{24}$/i.test(resourceId)
|
||||
? AclEntry.base.Types.ObjectId.createFromHexString(resourceId)
|
||||
: resourceId;
|
||||
|
||||
AccessRole.findOne({ accessRoleId: AccessRoleIds.REMOTE_AGENT_OWNER })
|
||||
.lean()
|
||||
.then((role) => {
|
||||
if (!role) {
|
||||
logger.error('[backfillRemoteAgentPermissions] REMOTE_AGENT_OWNER role not found');
|
||||
return;
|
||||
}
|
||||
|
||||
const bulkOps = entriesToBackfill.map((principalId) => ({
|
||||
updateOne: {
|
||||
filter: {
|
||||
principalType: PrincipalType.USER,
|
||||
principalId,
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
resourceId: resourceObjectId,
|
||||
},
|
||||
update: {
|
||||
$setOnInsert: {
|
||||
principalType: PrincipalType.USER,
|
||||
principalId,
|
||||
principalModel: 'User',
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
resourceId: resourceObjectId,
|
||||
permBits: role.permBits,
|
||||
roleId: role._id,
|
||||
grantedBy: principalId,
|
||||
grantedAt: new Date(),
|
||||
},
|
||||
},
|
||||
upsert: true,
|
||||
},
|
||||
}));
|
||||
|
||||
return AclEntry.bulkWrite(bulkOps, { ordered: false });
|
||||
})
|
||||
.catch((err) => {
|
||||
logger.error('[backfillRemoteAgentPermissions] Failed to backfill:', err);
|
||||
});
|
||||
}
|
||||
146
packages/api/src/apiKeys/service.ts
Normal file
146
packages/api/src/apiKeys/service.ts
Normal file
|
|
@ -0,0 +1,146 @@
|
|||
import { createMethods } from '@librechat/data-schemas';
|
||||
import { ResourceType, PermissionBits, hasPermissions } from 'librechat-data-provider';
|
||||
import type { AllMethods, IUser } from '@librechat/data-schemas';
|
||||
import type { Types } from 'mongoose';
|
||||
|
||||
export interface ApiKeyServiceDependencies {
|
||||
validateAgentApiKey: AllMethods['validateAgentApiKey'];
|
||||
createAgentApiKey: AllMethods['createAgentApiKey'];
|
||||
listAgentApiKeys: AllMethods['listAgentApiKeys'];
|
||||
deleteAgentApiKey: AllMethods['deleteAgentApiKey'];
|
||||
getAgentApiKeyById: AllMethods['getAgentApiKeyById'];
|
||||
findUser: (query: { _id: string | Types.ObjectId }) => Promise<IUser | null>;
|
||||
}
|
||||
|
||||
export interface RemoteAgentAccessResult {
|
||||
hasAccess: boolean;
|
||||
permissions: number;
|
||||
agent: { _id: Types.ObjectId; [key: string]: unknown } | null;
|
||||
}
|
||||
|
||||
export class AgentApiKeyService {
|
||||
private deps: ApiKeyServiceDependencies;
|
||||
|
||||
constructor(deps: ApiKeyServiceDependencies) {
|
||||
this.deps = deps;
|
||||
}
|
||||
|
||||
async validateApiKey(apiKey: string): Promise<{
|
||||
userId: Types.ObjectId;
|
||||
keyId: Types.ObjectId;
|
||||
} | null> {
|
||||
return this.deps.validateAgentApiKey(apiKey);
|
||||
}
|
||||
|
||||
async createApiKey(params: {
|
||||
userId: string | Types.ObjectId;
|
||||
name: string;
|
||||
expiresAt?: Date | null;
|
||||
}) {
|
||||
return this.deps.createAgentApiKey(params);
|
||||
}
|
||||
|
||||
async listApiKeys(userId: string | Types.ObjectId) {
|
||||
return this.deps.listAgentApiKeys(userId);
|
||||
}
|
||||
|
||||
async deleteApiKey(keyId: string | Types.ObjectId, userId: string | Types.ObjectId) {
|
||||
return this.deps.deleteAgentApiKey(keyId, userId);
|
||||
}
|
||||
|
||||
async getApiKeyById(keyId: string | Types.ObjectId, userId: string | Types.ObjectId) {
|
||||
return this.deps.getAgentApiKeyById(keyId, userId);
|
||||
}
|
||||
|
||||
async getUserFromApiKey(apiKey: string): Promise<IUser | null> {
|
||||
const keyValidation = await this.validateApiKey(apiKey);
|
||||
if (!keyValidation) {
|
||||
return null;
|
||||
}
|
||||
|
||||
return this.deps.findUser({ _id: keyValidation.userId });
|
||||
}
|
||||
}
|
||||
|
||||
export function createApiKeyServiceDependencies(
|
||||
mongoose: typeof import('mongoose'),
|
||||
): ApiKeyServiceDependencies {
|
||||
const methods = createMethods(mongoose);
|
||||
return {
|
||||
validateAgentApiKey: methods.validateAgentApiKey,
|
||||
createAgentApiKey: methods.createAgentApiKey,
|
||||
listAgentApiKeys: methods.listAgentApiKeys,
|
||||
deleteAgentApiKey: methods.deleteAgentApiKey,
|
||||
getAgentApiKeyById: methods.getAgentApiKeyById,
|
||||
findUser: methods.findUser,
|
||||
};
|
||||
}
|
||||
|
||||
export interface GetRemoteAgentPermissionsDeps {
|
||||
getEffectivePermissions: (params: {
|
||||
userId: string;
|
||||
role?: string;
|
||||
resourceType: ResourceType;
|
||||
resourceId: string | Types.ObjectId;
|
||||
}) => Promise<number>;
|
||||
}
|
||||
|
||||
/** AGENT owners automatically have full REMOTE_AGENT permissions */
|
||||
export async function getRemoteAgentPermissions(
|
||||
deps: GetRemoteAgentPermissionsDeps,
|
||||
userId: string,
|
||||
role: string | undefined,
|
||||
resourceId: string | Types.ObjectId,
|
||||
): Promise<number> {
|
||||
const agentPerms = await deps.getEffectivePermissions({
|
||||
userId,
|
||||
role,
|
||||
resourceType: ResourceType.AGENT,
|
||||
resourceId,
|
||||
});
|
||||
|
||||
if (hasPermissions(agentPerms, PermissionBits.SHARE)) {
|
||||
return PermissionBits.VIEW | PermissionBits.EDIT | PermissionBits.DELETE | PermissionBits.SHARE;
|
||||
}
|
||||
|
||||
return deps.getEffectivePermissions({
|
||||
userId,
|
||||
role,
|
||||
resourceType: ResourceType.REMOTE_AGENT,
|
||||
resourceId,
|
||||
});
|
||||
}
|
||||
|
||||
export async function checkRemoteAgentAccess(params: {
|
||||
userId: string;
|
||||
role?: string;
|
||||
agentId: string;
|
||||
getAgent: (query: {
|
||||
id: string;
|
||||
}) => Promise<{ _id: Types.ObjectId; [key: string]: unknown } | null>;
|
||||
getEffectivePermissions: (params: {
|
||||
userId: string;
|
||||
role?: string;
|
||||
resourceType: ResourceType;
|
||||
resourceId: string | Types.ObjectId;
|
||||
}) => Promise<number>;
|
||||
}): Promise<RemoteAgentAccessResult> {
|
||||
const { userId, role, agentId, getAgent, getEffectivePermissions } = params;
|
||||
|
||||
const agent = await getAgent({ id: agentId });
|
||||
|
||||
if (!agent) {
|
||||
return { hasAccess: false, permissions: 0, agent: null };
|
||||
}
|
||||
|
||||
const permissions = await getRemoteAgentPermissions(
|
||||
{ getEffectivePermissions },
|
||||
userId,
|
||||
role,
|
||||
agent._id,
|
||||
);
|
||||
|
||||
const hasAccess = hasPermissions(permissions, PermissionBits.VIEW);
|
||||
|
||||
return { hasAccess, permissions, agent };
|
||||
}
|
||||
|
|
@ -100,6 +100,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: false,
|
||||
[Permissions.CREATE]: false,
|
||||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
};
|
||||
|
||||
const expectedPermissionsForAdmin = {
|
||||
|
|
@ -141,6 +147,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: true,
|
||||
[Permissions.CREATE]: true,
|
||||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
};
|
||||
|
||||
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
|
||||
|
|
@ -246,6 +258,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: false,
|
||||
[Permissions.CREATE]: false,
|
||||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
};
|
||||
|
||||
const expectedPermissionsForAdmin = {
|
||||
|
|
@ -287,6 +305,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: true,
|
||||
[Permissions.CREATE]: true,
|
||||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
};
|
||||
|
||||
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
|
||||
|
|
@ -378,6 +402,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: false,
|
||||
[Permissions.CREATE]: false,
|
||||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
};
|
||||
|
||||
const expectedPermissionsForAdmin = {
|
||||
|
|
@ -419,6 +449,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: true,
|
||||
[Permissions.CREATE]: true,
|
||||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
};
|
||||
|
||||
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
|
||||
|
|
@ -523,6 +559,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: false,
|
||||
[Permissions.CREATE]: false,
|
||||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
};
|
||||
|
||||
const expectedPermissionsForAdmin = {
|
||||
|
|
@ -564,6 +606,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: true,
|
||||
[Permissions.CREATE]: true,
|
||||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
};
|
||||
|
||||
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
|
||||
|
|
@ -655,6 +703,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: false,
|
||||
[Permissions.CREATE]: false,
|
||||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
};
|
||||
|
||||
const expectedPermissionsForAdmin = {
|
||||
|
|
@ -696,6 +750,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: true,
|
||||
[Permissions.CREATE]: true,
|
||||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
};
|
||||
|
||||
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
|
||||
|
|
@ -784,6 +844,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: false,
|
||||
[Permissions.CREATE]: false,
|
||||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
};
|
||||
|
||||
const expectedPermissionsForAdmin = {
|
||||
|
|
@ -813,6 +879,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: true,
|
||||
[Permissions.CREATE]: true,
|
||||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
};
|
||||
|
||||
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
|
||||
|
|
@ -920,6 +992,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: false,
|
||||
[Permissions.CREATE]: false,
|
||||
[Permissions.SHARE]: false,
|
||||
[Permissions.SHARE_PUBLIC]: false,
|
||||
},
|
||||
};
|
||||
|
||||
const expectedPermissionsForAdmin = {
|
||||
|
|
@ -955,6 +1033,12 @@ describe('updateInterfacePermissions - permissions', () => {
|
|||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: true,
|
||||
[Permissions.CREATE]: true,
|
||||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
};
|
||||
|
||||
expect(mockUpdateAccessPermissions).toHaveBeenCalledTimes(2);
|
||||
|
|
|
|||
|
|
@ -43,6 +43,8 @@ function hasExplicitConfig(
|
|||
return interfaceConfig?.fileCitations !== undefined;
|
||||
case PermissionTypes.MCP_SERVERS:
|
||||
return interfaceConfig?.mcpServers !== undefined;
|
||||
case PermissionTypes.REMOTE_AGENTS:
|
||||
return interfaceConfig?.remoteAgents !== undefined;
|
||||
default:
|
||||
return false;
|
||||
}
|
||||
|
|
@ -101,7 +103,9 @@ export async function updateInterfacePermissions({
|
|||
const defaultPerms = roleDefaults[roleName]?.permissions;
|
||||
|
||||
const existingRole = await getRoleByName(roleName);
|
||||
const existingPermissions = existingRole?.permissions;
|
||||
const existingPermissions = existingRole?.permissions as
|
||||
| Partial<Record<PermissionTypes, Record<string, boolean | undefined>>>
|
||||
| undefined;
|
||||
const permissionsToUpdate: Partial<
|
||||
Record<PermissionTypes, Record<string, boolean | undefined>>
|
||||
> = {};
|
||||
|
|
@ -335,6 +339,28 @@ export async function updateInterfacePermissions({
|
|||
defaults.mcpServers?.public,
|
||||
),
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: getPermissionValue(
|
||||
loadedInterface.remoteAgents?.use,
|
||||
defaultPerms[PermissionTypes.REMOTE_AGENTS]?.[Permissions.USE],
|
||||
defaults.remoteAgents?.use,
|
||||
),
|
||||
[Permissions.CREATE]: getPermissionValue(
|
||||
loadedInterface.remoteAgents?.create,
|
||||
defaultPerms[PermissionTypes.REMOTE_AGENTS]?.[Permissions.CREATE],
|
||||
defaults.remoteAgents?.create,
|
||||
),
|
||||
[Permissions.SHARE]: getPermissionValue(
|
||||
loadedInterface.remoteAgents?.share,
|
||||
defaultPerms[PermissionTypes.REMOTE_AGENTS]?.[Permissions.SHARE],
|
||||
defaults.remoteAgents?.share,
|
||||
),
|
||||
[Permissions.SHARE_PUBLIC]: getPermissionValue(
|
||||
loadedInterface.remoteAgents?.public,
|
||||
defaultPerms[PermissionTypes.REMOTE_AGENTS]?.[Permissions.SHARE_PUBLIC],
|
||||
defaults.remoteAgents?.public,
|
||||
),
|
||||
},
|
||||
};
|
||||
|
||||
// Check and add each permission type if needed
|
||||
|
|
|
|||
|
|
@ -2,6 +2,8 @@ export * from './app';
|
|||
export * from './cdn';
|
||||
/* Auth */
|
||||
export * from './auth';
|
||||
/* API Keys */
|
||||
export * from './apiKeys';
|
||||
/* MCP */
|
||||
export * from './mcp/registry/MCPServersRegistry';
|
||||
export * from './mcp/MCPManager';
|
||||
|
|
|
|||
|
|
@ -46,6 +46,7 @@ export enum ResourceType {
|
|||
AGENT = 'agent',
|
||||
PROMPTGROUP = 'promptGroup',
|
||||
MCPSERVER = 'mcpServer',
|
||||
REMOTE_AGENT = 'remoteAgent',
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@ -75,6 +76,9 @@ export enum AccessRoleIds {
|
|||
MCPSERVER_VIEWER = 'mcpServer_viewer',
|
||||
MCPSERVER_EDITOR = 'mcpServer_editor',
|
||||
MCPSERVER_OWNER = 'mcpServer_owner',
|
||||
REMOTE_AGENT_VIEWER = 'remoteAgent_viewer',
|
||||
REMOTE_AGENT_EDITOR = 'remoteAgent_editor',
|
||||
REMOTE_AGENT_OWNER = 'remoteAgent_owner',
|
||||
}
|
||||
|
||||
// ===== ZOD SCHEMAS =====
|
||||
|
|
@ -310,11 +314,22 @@ export function permBitsToAccessLevel(permBits: number): TAccessLevel {
|
|||
export function accessRoleToPermBits(accessRoleId: string): number {
|
||||
switch (accessRoleId) {
|
||||
case AccessRoleIds.AGENT_VIEWER:
|
||||
case AccessRoleIds.PROMPTGROUP_VIEWER:
|
||||
case AccessRoleIds.MCPSERVER_VIEWER:
|
||||
case AccessRoleIds.REMOTE_AGENT_VIEWER:
|
||||
return PermissionBits.VIEW;
|
||||
case AccessRoleIds.AGENT_EDITOR:
|
||||
case AccessRoleIds.PROMPTGROUP_EDITOR:
|
||||
case AccessRoleIds.MCPSERVER_EDITOR:
|
||||
case AccessRoleIds.REMOTE_AGENT_EDITOR:
|
||||
return PermissionBits.VIEW | PermissionBits.EDIT;
|
||||
case AccessRoleIds.AGENT_OWNER:
|
||||
return PermissionBits.VIEW | PermissionBits.EDIT | PermissionBits.DELETE;
|
||||
case AccessRoleIds.PROMPTGROUP_OWNER:
|
||||
case AccessRoleIds.MCPSERVER_OWNER:
|
||||
case AccessRoleIds.REMOTE_AGENT_OWNER:
|
||||
return (
|
||||
PermissionBits.VIEW | PermissionBits.EDIT | PermissionBits.DELETE | PermissionBits.SHARE
|
||||
);
|
||||
default:
|
||||
return PermissionBits.VIEW;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -95,6 +95,12 @@ export const revokeUserKey = (name: string) => `${keysEndpoint}/${name}`;
|
|||
|
||||
export const revokeAllUserKeys = () => `${keysEndpoint}?all=true`;
|
||||
|
||||
const apiKeysEndpoint = `${BASE_URL}/api/api-keys`;
|
||||
|
||||
export const apiKeys = () => apiKeysEndpoint;
|
||||
|
||||
export const apiKeyById = (id: string) => `${apiKeysEndpoint}/${id}`;
|
||||
|
||||
export const conversationsRoot = `${BASE_URL}/api/convos`;
|
||||
|
||||
export const conversations = (params: q.ConversationListParams) => {
|
||||
|
|
@ -329,6 +335,8 @@ export const updateAgentPermissions = (roleName: string) => `${getRole(roleName)
|
|||
export const updatePeoplePickerPermissions = (roleName: string) =>
|
||||
`${getRole(roleName)}/people-picker`;
|
||||
export const updateMCPServersPermissions = (roleName: string) => `${getRole(roleName)}/mcp-servers`;
|
||||
export const updateRemoteAgentsPermissions = (roleName: string) =>
|
||||
`${getRole(roleName)}/remote-agents`;
|
||||
|
||||
export const updateMarketplacePermissions = (roleName: string) =>
|
||||
`${getRole(roleName)}/marketplace`;
|
||||
|
|
|
|||
|
|
@ -660,6 +660,14 @@ export const interfaceSchema = z
|
|||
.optional(),
|
||||
fileSearch: z.boolean().optional(),
|
||||
fileCitations: z.boolean().optional(),
|
||||
remoteAgents: z
|
||||
.object({
|
||||
use: z.boolean().optional(),
|
||||
create: z.boolean().optional(),
|
||||
share: z.boolean().optional(),
|
||||
public: z.boolean().optional(),
|
||||
})
|
||||
.optional(),
|
||||
})
|
||||
.default({
|
||||
endpointsMenu: true,
|
||||
|
|
@ -699,6 +707,12 @@ export const interfaceSchema = z
|
|||
},
|
||||
fileSearch: true,
|
||||
fileCitations: true,
|
||||
remoteAgents: {
|
||||
use: false,
|
||||
create: false,
|
||||
share: false,
|
||||
public: false,
|
||||
},
|
||||
});
|
||||
|
||||
export type TInterfaceConfig = z.infer<typeof interfaceSchema>;
|
||||
|
|
|
|||
|
|
@ -81,6 +81,20 @@ export function updateUserKey(payload: t.TUpdateUserKeyRequest) {
|
|||
return request.put(endpoints.keys(), payload);
|
||||
}
|
||||
|
||||
export function getAgentApiKeys(): Promise<t.TAgentApiKeyListResponse> {
|
||||
return request.get(endpoints.apiKeys());
|
||||
}
|
||||
|
||||
export function createAgentApiKey(
|
||||
payload: t.TAgentApiKeyCreateRequest,
|
||||
): Promise<t.TAgentApiKeyCreateResponse> {
|
||||
return request.post(endpoints.apiKeys(), payload);
|
||||
}
|
||||
|
||||
export function deleteAgentApiKey(id: string): Promise<void> {
|
||||
return request.delete(endpoints.apiKeyById(id));
|
||||
}
|
||||
|
||||
export function getPresets(): Promise<s.TPreset[]> {
|
||||
return request.get(endpoints.presets());
|
||||
}
|
||||
|
|
@ -877,6 +891,15 @@ export function updateMCPServersPermissions(
|
|||
return request.put(endpoints.updateMCPServersPermissions(variables.roleName), variables.updates);
|
||||
}
|
||||
|
||||
export function updateRemoteAgentsPermissions(
|
||||
variables: m.UpdateRemoteAgentsPermVars,
|
||||
): Promise<m.UpdatePermResponse> {
|
||||
return request.put(
|
||||
endpoints.updateRemoteAgentsPermissions(variables.roleName),
|
||||
variables.updates,
|
||||
);
|
||||
}
|
||||
|
||||
export function updateMarketplacePermissions(
|
||||
variables: m.UpdateMarketplacePermVars,
|
||||
): Promise<m.UpdatePermResponse> {
|
||||
|
|
|
|||
|
|
@ -62,6 +62,8 @@ export enum QueryKeys {
|
|||
mcpServer = 'mcpServer',
|
||||
/* Active Jobs */
|
||||
activeJobs = 'activeJobs',
|
||||
/* Agent API Keys */
|
||||
agentApiKeys = 'agentApiKeys',
|
||||
}
|
||||
|
||||
// Dynamic query keys that require parameters
|
||||
|
|
@ -70,6 +72,8 @@ export const DynamicQueryKeys = {
|
|||
} as const;
|
||||
|
||||
export enum MutationKeys {
|
||||
createAgentApiKey = 'createAgentApiKey',
|
||||
deleteAgentApiKey = 'deleteAgentApiKey',
|
||||
fileUpload = 'fileUpload',
|
||||
fileDelete = 'fileDelete',
|
||||
updatePreset = 'updatePreset',
|
||||
|
|
|
|||
|
|
@ -56,6 +56,10 @@ export enum PermissionTypes {
|
|||
* Type for MCP Server Permissions
|
||||
*/
|
||||
MCP_SERVERS = 'MCP_SERVERS',
|
||||
/**
|
||||
* Type for Remote Agent (API) Permissions
|
||||
*/
|
||||
REMOTE_AGENTS = 'REMOTE_AGENTS',
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@ -157,6 +161,14 @@ export const mcpServersPermissionsSchema = z.object({
|
|||
});
|
||||
export type TMcpServersPermissions = z.infer<typeof mcpServersPermissionsSchema>;
|
||||
|
||||
export const remoteAgentsPermissionsSchema = z.object({
|
||||
[Permissions.USE]: z.boolean().default(false),
|
||||
[Permissions.CREATE]: z.boolean().default(false),
|
||||
[Permissions.SHARE]: z.boolean().default(false),
|
||||
[Permissions.SHARE_PUBLIC]: z.boolean().default(false),
|
||||
});
|
||||
export type TRemoteAgentsPermissions = z.infer<typeof remoteAgentsPermissionsSchema>;
|
||||
|
||||
// Define a single permissions schema that holds all permission types.
|
||||
export const permissionsSchema = z.object({
|
||||
[PermissionTypes.PROMPTS]: promptPermissionsSchema,
|
||||
|
|
@ -172,4 +184,5 @@ export const permissionsSchema = z.object({
|
|||
[PermissionTypes.FILE_SEARCH]: fileSearchPermissionsSchema,
|
||||
[PermissionTypes.FILE_CITATIONS]: fileCitationsPermissionsSchema,
|
||||
[PermissionTypes.MCP_SERVERS]: mcpServersPermissionsSchema,
|
||||
[PermissionTypes.REMOTE_AGENTS]: remoteAgentsPermissionsSchema,
|
||||
});
|
||||
|
|
|
|||
|
|
@ -524,3 +524,43 @@ export const useMCPServerConnectionStatusQuery = (
|
|||
},
|
||||
);
|
||||
};
|
||||
|
||||
export const useGetAgentApiKeysQuery = (
|
||||
config?: UseQueryOptions<t.TAgentApiKeyListResponse>,
|
||||
): QueryObserverResult<t.TAgentApiKeyListResponse> => {
|
||||
return useQuery<t.TAgentApiKeyListResponse>(
|
||||
[QueryKeys.agentApiKeys],
|
||||
() => dataService.getAgentApiKeys(),
|
||||
{
|
||||
refetchOnWindowFocus: false,
|
||||
refetchOnReconnect: false,
|
||||
refetchOnMount: false,
|
||||
...config,
|
||||
},
|
||||
);
|
||||
};
|
||||
|
||||
export const useCreateAgentApiKeyMutation = (): UseMutationResult<
|
||||
t.TAgentApiKeyCreateResponse,
|
||||
unknown,
|
||||
t.TAgentApiKeyCreateRequest
|
||||
> => {
|
||||
const queryClient = useQueryClient();
|
||||
return useMutation(
|
||||
(payload: t.TAgentApiKeyCreateRequest) => dataService.createAgentApiKey(payload),
|
||||
{
|
||||
onSuccess: () => {
|
||||
queryClient.invalidateQueries([QueryKeys.agentApiKeys]);
|
||||
},
|
||||
},
|
||||
);
|
||||
};
|
||||
|
||||
export const useDeleteAgentApiKeyMutation = (): UseMutationResult<void, unknown, string> => {
|
||||
const queryClient = useQueryClient();
|
||||
return useMutation((id: string) => dataService.deleteAgentApiKey(id), {
|
||||
onSuccess: () => {
|
||||
queryClient.invalidateQueries([QueryKeys.agentApiKeys]);
|
||||
},
|
||||
});
|
||||
};
|
||||
|
|
|
|||
|
|
@ -11,10 +11,11 @@ import {
|
|||
webSearchPermissionsSchema,
|
||||
fileSearchPermissionsSchema,
|
||||
multiConvoPermissionsSchema,
|
||||
temporaryChatPermissionsSchema,
|
||||
peoplePickerPermissionsSchema,
|
||||
fileCitationsPermissionsSchema,
|
||||
mcpServersPermissionsSchema,
|
||||
peoplePickerPermissionsSchema,
|
||||
remoteAgentsPermissionsSchema,
|
||||
temporaryChatPermissionsSchema,
|
||||
fileCitationsPermissionsSchema,
|
||||
} from './permissions';
|
||||
|
||||
/**
|
||||
|
|
@ -96,6 +97,12 @@ const defaultRolesSchema = z.object({
|
|||
[Permissions.SHARE]: z.boolean().default(true),
|
||||
[Permissions.SHARE_PUBLIC]: z.boolean().default(true),
|
||||
}),
|
||||
[PermissionTypes.REMOTE_AGENTS]: remoteAgentsPermissionsSchema.extend({
|
||||
[Permissions.USE]: z.boolean().default(true),
|
||||
[Permissions.CREATE]: z.boolean().default(true),
|
||||
[Permissions.SHARE]: z.boolean().default(true),
|
||||
[Permissions.SHARE_PUBLIC]: z.boolean().default(true),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
[SystemRoles.USER]: roleSchema.extend({
|
||||
|
|
@ -162,6 +169,12 @@ export const roleDefaults = defaultRolesSchema.parse({
|
|||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {
|
||||
[Permissions.USE]: true,
|
||||
[Permissions.CREATE]: true,
|
||||
[Permissions.SHARE]: true,
|
||||
[Permissions.SHARE_PUBLIC]: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
[SystemRoles.USER]: {
|
||||
|
|
@ -186,6 +199,7 @@ export const roleDefaults = defaultRolesSchema.parse({
|
|||
[PermissionTypes.FILE_SEARCH]: {},
|
||||
[PermissionTypes.FILE_CITATIONS]: {},
|
||||
[PermissionTypes.MCP_SERVERS]: {},
|
||||
[PermissionTypes.REMOTE_AGENTS]: {},
|
||||
},
|
||||
},
|
||||
});
|
||||
|
|
|
|||
|
|
@@ -235,6 +235,33 @@ export type TUpdateUserKeyRequest = {
  expiresAt: string;
};

export type TAgentApiKeyCreateRequest = {
  name: string;
  expiresAt?: string | null;
};

export type TAgentApiKeyCreateResponse = {
  id: string;
  name: string;
  key: string;
  keyPrefix: string;
  createdAt: string;
  expiresAt?: string;
};

export type TAgentApiKeyListItem = {
  id: string;
  name: string;
  keyPrefix: string;
  lastUsedAt?: string;
  expiresAt?: string;
  createdAt: string;
};

export type TAgentApiKeyListResponse = {
  keys: TAgentApiKeyListItem[];
};

export type TUpdateConversationRequest = {
  conversationId: string;
  title: string;

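For orientation, hedged example values for the new key types; the literals and the import path are made up, but the shapes come from the definitions above. Note that the full `key` appears only in the create response, while list items expose just the `keyPrefix`.

```ts
// Example values only; the types are the ones defined above, import path assumed.
import type {
  TAgentApiKeyCreateResponse,
  TAgentApiKeyListResponse,
} from 'librechat-data-provider';

const created: TAgentApiKeyCreateResponse = {
  id: '6650f0c2a1b2c3d4e5f60718',
  name: 'CI key',
  key: 'sk-abc123…', // full secret, shown once at creation time
  keyPrefix: 'sk-abc12',
  createdAt: new Date().toISOString(),
};

// Listing later returns only non-secret metadata:
const listed: TAgentApiKeyListResponse = {
  keys: [
    {
      id: created.id,
      name: created.name,
      keyPrefix: created.keyPrefix,
      createdAt: created.createdAt,
    },
  ],
};
```
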
@@ -313,6 +313,15 @@ export type UpdateMCPServersPermOptions = MutationOptions<
  types.TError | null | undefined
>;

export type UpdateRemoteAgentsPermVars = UpdatePermVars<p.TRemoteAgentsPermissions>;

export type UpdateRemoteAgentsPermOptions = MutationOptions<
  UpdatePermResponse,
  UpdateRemoteAgentsPermVars,
  unknown,
  types.TError | null | undefined
>;

export type UpdateMarketplacePermVars = UpdatePermVars<p.TMarketplacePermissions>;

export type UpdateMarketplacePermOptions = MutationOptions<

@@ -203,6 +203,9 @@ describe('AccessRole Model Tests', () => {
        AccessRoleIds.MCPSERVER_EDITOR,
        AccessRoleIds.MCPSERVER_OWNER,
        AccessRoleIds.MCPSERVER_VIEWER,
        AccessRoleIds.REMOTE_AGENT_EDITOR,
        AccessRoleIds.REMOTE_AGENT_OWNER,
        AccessRoleIds.REMOTE_AGENT_VIEWER,
      ].sort(),
    );

@@ -167,6 +167,27 @@ export function createAccessRoleMethods(mongoose: typeof import('mongoose')) {
      resourceType: ResourceType.MCPSERVER,
      permBits: RoleBits.OWNER,
    },
    {
      accessRoleId: AccessRoleIds.REMOTE_AGENT_VIEWER,
      name: 'com_ui_remote_agent_role_viewer',
      description: 'com_ui_remote_agent_role_viewer_desc',
      resourceType: ResourceType.REMOTE_AGENT,
      permBits: RoleBits.VIEWER,
    },
    {
      accessRoleId: AccessRoleIds.REMOTE_AGENT_EDITOR,
      name: 'com_ui_remote_agent_role_editor',
      description: 'com_ui_remote_agent_role_editor_desc',
      resourceType: ResourceType.REMOTE_AGENT,
      permBits: RoleBits.EDITOR,
    },
    {
      accessRoleId: AccessRoleIds.REMOTE_AGENT_OWNER,
      name: 'com_ui_remote_agent_role_owner',
      description: 'com_ui_remote_agent_role_owner_desc',
      resourceType: ResourceType.REMOTE_AGENT,
      permBits: RoleBits.OWNER,
    },
  ];

  const result: Record<string, IAccessRole> = {};

164 packages/data-schemas/src/methods/agentApiKey.ts Normal file
@@ -0,0 +1,164 @@
import type { Types } from 'mongoose';
import type {
  AgentApiKeyCreateResult,
  AgentApiKeyCreateData,
  AgentApiKeyListItem,
  IAgentApiKey,
} from '~/types';
import { hashToken, getRandomValues } from '~/crypto';
import logger from '~/config/winston';

const API_KEY_PREFIX = 'sk-';
const API_KEY_LENGTH = 32;

export function createAgentApiKeyMethods(mongoose: typeof import('mongoose')) {
  async function generateApiKey(): Promise<{ key: string; keyHash: string; keyPrefix: string }> {
    const randomPart = await getRandomValues(API_KEY_LENGTH);
    const key = `${API_KEY_PREFIX}${randomPart}`;
    const keyHash = await hashToken(key);
    const keyPrefix = key.slice(0, 8);
    return { key, keyHash, keyPrefix };
  }

  async function createAgentApiKey(data: AgentApiKeyCreateData): Promise<AgentApiKeyCreateResult> {
    try {
      const AgentApiKey = mongoose.models.AgentApiKey;
      const { key, keyHash, keyPrefix } = await generateApiKey();

      const apiKeyDoc = await AgentApiKey.create({
        userId: data.userId,
        name: data.name,
        keyHash,
        keyPrefix,
        expiresAt: data.expiresAt || undefined,
      });

      return {
        id: apiKeyDoc._id.toString(),
        name: apiKeyDoc.name,
        keyPrefix,
        key,
        createdAt: apiKeyDoc.createdAt,
        expiresAt: apiKeyDoc.expiresAt,
      };
    } catch (error) {
      logger.error('[createAgentApiKey] Error creating API key:', error);
      throw error;
    }
  }

  async function validateAgentApiKey(
    apiKey: string,
  ): Promise<{ userId: Types.ObjectId; keyId: Types.ObjectId } | null> {
    try {
      const AgentApiKey = mongoose.models.AgentApiKey;
      const keyHash = await hashToken(apiKey);

      const keyDoc = (await AgentApiKey.findOne({ keyHash }).lean()) as IAgentApiKey | null;

      if (!keyDoc) {
        return null;
      }

      if (keyDoc.expiresAt && new Date(keyDoc.expiresAt) < new Date()) {
        return null;
      }

      await AgentApiKey.updateOne({ _id: keyDoc._id }, { $set: { lastUsedAt: new Date() } });

      return {
        userId: keyDoc.userId,
        keyId: keyDoc._id as Types.ObjectId,
      };
    } catch (error) {
      logger.error('[validateAgentApiKey] Error validating API key:', error);
      return null;
    }
  }

  async function listAgentApiKeys(userId: string | Types.ObjectId): Promise<AgentApiKeyListItem[]> {
    try {
      const AgentApiKey = mongoose.models.AgentApiKey;
      const keys = (await AgentApiKey.find({ userId })
        .sort({ createdAt: -1 })
        .lean()) as unknown as IAgentApiKey[];

      return keys.map((key) => ({
        id: (key._id as Types.ObjectId).toString(),
        name: key.name,
        keyPrefix: key.keyPrefix,
        lastUsedAt: key.lastUsedAt,
        expiresAt: key.expiresAt,
        createdAt: key.createdAt,
      }));
    } catch (error) {
      logger.error('[listAgentApiKeys] Error listing API keys:', error);
      throw error;
    }
  }

  async function deleteAgentApiKey(
    keyId: string | Types.ObjectId,
    userId: string | Types.ObjectId,
  ): Promise<boolean> {
    try {
      const AgentApiKey = mongoose.models.AgentApiKey;
      const result = await AgentApiKey.deleteOne({ _id: keyId, userId });
      return result.deletedCount > 0;
    } catch (error) {
      logger.error('[deleteAgentApiKey] Error deleting API key:', error);
      throw error;
    }
  }

  async function deleteAllAgentApiKeys(userId: string | Types.ObjectId): Promise<number> {
    try {
      const AgentApiKey = mongoose.models.AgentApiKey;
      const result = await AgentApiKey.deleteMany({ userId });
      return result.deletedCount;
    } catch (error) {
      logger.error('[deleteAllAgentApiKeys] Error deleting all API keys:', error);
      throw error;
    }
  }

  async function getAgentApiKeyById(
    keyId: string | Types.ObjectId,
    userId: string | Types.ObjectId,
  ): Promise<AgentApiKeyListItem | null> {
    try {
      const AgentApiKey = mongoose.models.AgentApiKey;
      const keyDoc = (await AgentApiKey.findOne({
        _id: keyId,
        userId,
      }).lean()) as IAgentApiKey | null;

      if (!keyDoc) {
        return null;
      }

      return {
        id: (keyDoc._id as Types.ObjectId).toString(),
        name: keyDoc.name,
        keyPrefix: keyDoc.keyPrefix,
        lastUsedAt: keyDoc.lastUsedAt,
        expiresAt: keyDoc.expiresAt,
        createdAt: keyDoc.createdAt,
      };
    } catch (error) {
      logger.error('[getAgentApiKeyById] Error getting API key:', error);
      throw error;
    }
  }

  return {
    createAgentApiKey,
    validateAgentApiKey,
    listAgentApiKeys,
    deleteAgentApiKey,
    deleteAllAgentApiKeys,
    getAgentApiKeyById,
  };
}

export type AgentApiKeyMethods = ReturnType<typeof createAgentApiKeyMethods>;

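One way the `validateAgentApiKey` helper above could be wired into request handling; this Express middleware is a sketch under assumptions (an Express server, direct use of `createAgentApiKeyMethods`, and an assumed import path), not code from this PR. Only the helper's contract — resolving a bearer key to `{ userId, keyId }` or `null` — comes from the method above.

```ts
// Sketch only: middleware name, route wiring, and import paths are assumptions.
import type { Request, Response, NextFunction } from 'express';
import type { Types } from 'mongoose';
import mongoose from 'mongoose';
import { createAgentApiKeyMethods } from '~/methods/agentApiKey'; // assumed path

const { validateAgentApiKey } = createAgentApiKeyMethods(mongoose);

type ApiKeyAuth = { userId: Types.ObjectId; keyId: Types.ObjectId };

export async function requireAgentApiKey(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? '';
  const apiKey = header.startsWith('Bearer ') ? header.slice('Bearer '.length) : '';

  const result = apiKey ? await validateAgentApiKey(apiKey) : null;
  if (!result) {
    return res.status(401).json({ error: 'Invalid or expired API key' });
  }

  // Downstream handlers can load the owning user from result.userId.
  (req as Request & { apiKeyAuth?: ApiKeyAuth }).apiKeyAuth = result;
  next();
}
```

Expiration and `lastUsedAt` bookkeeping already happen inside `validateAgentApiKey`, so the middleware only needs to map a missing result to a 401.
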
@@ -10,6 +10,8 @@ import { createFileMethods, type FileMethods } from './file';
import { createMemoryMethods, type MemoryMethods } from './memory';
/* Agent Categories */
import { createAgentCategoryMethods, type AgentCategoryMethods } from './agentCategory';
/* Agent API Keys */
import { createAgentApiKeyMethods, type AgentApiKeyMethods } from './agentApiKey';
/* MCP Servers */
import { createMCPServerMethods, type MCPServerMethods } from './mcpServer';
/* Plugin Auth */

@@ -28,6 +30,7 @@ export type AllMethods = UserMethods &
  FileMethods &
  MemoryMethods &
  AgentCategoryMethods &
  AgentApiKeyMethods &
  MCPServerMethods &
  UserGroupMethods &
  AclEntryMethods &

@@ -49,6 +52,7 @@ export function createMethods(mongoose: typeof import('mongoose')): AllMethods {
    ...createFileMethods(mongoose),
    ...createMemoryMethods(mongoose),
    ...createAgentCategoryMethods(mongoose),
    ...createAgentApiKeyMethods(mongoose),
    ...createMCPServerMethods(mongoose),
    ...createAccessRoleMethods(mongoose),
    ...createUserGroupMethods(mongoose),

@@ -67,6 +71,7 @@ export type {
  FileMethods,
  MemoryMethods,
  AgentCategoryMethods,
  AgentApiKeyMethods,
  MCPServerMethods,
  UserGroupMethods,
  AclEntryMethods,

|||
7
packages/data-schemas/src/models/agentApiKey.ts
Normal file
7
packages/data-schemas/src/models/agentApiKey.ts
Normal file
|
|
@ -0,0 +1,7 @@
|
|||
import agentApiKeySchema, { IAgentApiKey } from '~/schema/agentApiKey';
|
||||
|
||||
export function createAgentApiKeyModel(mongoose: typeof import('mongoose')) {
|
||||
return (
|
||||
mongoose.models.AgentApiKey || mongoose.model<IAgentApiKey>('AgentApiKey', agentApiKeySchema)
|
||||
);
|
||||
}
|
||||
|
|
@@ -5,6 +5,7 @@ import { createBalanceModel } from './balance';
import { createConversationModel } from './convo';
import { createMessageModel } from './message';
import { createAgentModel } from './agent';
import { createAgentApiKeyModel } from './agentApiKey';
import { createAgentCategoryModel } from './agentCategory';
import { createMCPServerModel } from './mcpServer';
import { createRoleModel } from './role';

@@ -39,6 +40,7 @@ export function createModels(mongoose: typeof import('mongoose')) {
    Conversation: createConversationModel(mongoose),
    Message: createMessageModel(mongoose),
    Agent: createAgentModel(mongoose),
    AgentApiKey: createAgentApiKeyModel(mongoose),
    AgentCategory: createAgentCategoryModel(mongoose),
    MCPServer: createMCPServerModel(mongoose),
    Role: createRoleModel(mongoose),

@@ -16,7 +16,7 @@ const accessRoleSchema = new Schema<IAccessRole>(
    description: String,
    resourceType: {
      type: String,
      enum: ['agent', 'project', 'file', 'promptGroup', 'mcpServer'],
      enum: ['agent', 'project', 'file', 'promptGroup', 'mcpServer', 'remoteAgent'],
      required: true,
      default: 'agent',
    },

59 packages/data-schemas/src/schema/agentApiKey.ts Normal file
@@ -0,0 +1,59 @@
import mongoose, { Schema, Document, Types } from 'mongoose';

export interface IAgentApiKey extends Document {
  userId: Types.ObjectId;
  name: string;
  keyHash: string;
  keyPrefix: string;
  lastUsedAt?: Date;
  expiresAt?: Date;
  createdAt: Date;
  updatedAt: Date;
}

const agentApiKeySchema: Schema<IAgentApiKey> = new Schema(
  {
    userId: {
      type: mongoose.Schema.Types.ObjectId,
      ref: 'User',
      required: true,
      index: true,
    },
    name: {
      type: String,
      required: true,
      trim: true,
      maxlength: 100,
    },
    keyHash: {
      type: String,
      required: true,
      select: false,
      index: true,
    },
    keyPrefix: {
      type: String,
      required: true,
      index: true,
    },
    lastUsedAt: {
      type: Date,
    },
    expiresAt: {
      type: Date,
    },
  },
  { timestamps: true },
);

agentApiKeySchema.index({ userId: 1, name: 1 });

/**
 * TTL index for automatic cleanup of expired keys.
 * MongoDB deletes documents when expiresAt passes (expireAfterSeconds: 0 means immediate).
 * Note: Expired keys are permanently removed, not soft-deleted.
 * If audit trails are needed, remove this index and check expiration programmatically.
 */
agentApiKeySchema.index({ expiresAt: 1 }, { expireAfterSeconds: 0 });

export default agentApiKeySchema;

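As a reference for the trade-off the schema comment describes: MongoDB's background TTL monitor (which runs roughly once a minute) removes documents once `expiresAt` is in the past, so expired keys disappear rather than being marked as expired. A hedged sketch of the programmatic alternative the comment mentions — keep expired keys for auditing and filter them out at read time; the helper name and wiring are assumptions, not part of this PR:

```ts
// Sketch of the audit-friendly alternative: no TTL index, filter at read time.
// `findActiveKeys` and its wiring are assumptions, not part of this PR.
import type { Model, Types } from 'mongoose';
import type { IAgentApiKey } from '~/schema/agentApiKey';

export async function findActiveKeys(
  AgentApiKey: Model<IAgentApiKey>,
  userId: Types.ObjectId,
): Promise<IAgentApiKey[]> {
  const now = new Date();
  const keys = await AgentApiKey.find({
    userId,
    $or: [{ expiresAt: { $exists: false } }, { expiresAt: null }, { expiresAt: { $gt: now } }],
  }).lean();
  return keys as unknown as IAgentApiKey[];
}
```
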
@@ -1,5 +1,6 @@
export { default as actionSchema } from './action';
export { default as agentSchema } from './agent';
export { default as agentApiKeySchema } from './agentApiKey';
export { default as agentCategorySchema } from './agentCategory';
export { default as assistantSchema } from './assistant';
export { default as balanceSchema } from './balance';

@@ -61,6 +61,12 @@ const rolePermissionsSchema = new Schema(
      [Permissions.SHARE]: { type: Boolean },
      [Permissions.SHARE_PUBLIC]: { type: Boolean },
    },
    [PermissionTypes.REMOTE_AGENTS]: {
      [Permissions.USE]: { type: Boolean },
      [Permissions.CREATE]: { type: Boolean },
      [Permissions.SHARE]: { type: Boolean },
      [Permissions.SHARE_PUBLIC]: { type: Boolean },
    },
  },
  { _id: false },
);

46 packages/data-schemas/src/types/agentApiKey.ts Normal file
@@ -0,0 +1,46 @@
import { Document, Types } from 'mongoose';

export interface IAgentApiKey extends Document {
  userId: Types.ObjectId;
  name: string;
  keyHash: string;
  keyPrefix: string;
  lastUsedAt?: Date;
  expiresAt?: Date;
  createdAt: Date;
  updatedAt: Date;
}

export interface AgentApiKeyCreateData {
  userId: Types.ObjectId | string;
  name: string;
  expiresAt?: Date | null;
}

export interface AgentApiKeyCreateResult {
  id: string;
  name: string;
  keyPrefix: string;
  key: string;
  createdAt: Date;
  expiresAt?: Date;
}

export interface AgentApiKeyListItem {
  id: string;
  name: string;
  keyPrefix: string;
  lastUsedAt?: Date;
  expiresAt?: Date;
  createdAt: Date;
}

export interface AgentApiKeyQuery {
  userId?: Types.ObjectId | string;
  keyPrefix?: string;
  id?: string;
}

export interface AgentApiKeyDeleteResult {
  deletedCount?: number;
}

@@ -10,6 +10,7 @@ export * from './balance';
export * from './banner';
export * from './message';
export * from './agent';
export * from './agentApiKey';
export * from './agentCategory';
export * from './role';
export * from './action';

@@ -59,6 +59,12 @@ export interface IRole extends Document {
      [Permissions.SHARE]?: boolean;
      [Permissions.SHARE_PUBLIC]?: boolean;
    };
    [PermissionTypes.REMOTE_AGENTS]?: {
      [Permissions.USE]?: boolean;
      [Permissions.CREATE]?: boolean;
      [Permissions.SHARE]?: boolean;
      [Permissions.SHARE_PUBLIC]?: boolean;
    };
  };
}