Mirror of https://github.com/danny-avila/LibreChat.git (synced 2025-12-20 18:30:15 +01:00)
* ✨ feat: Implement Resumable Generation Jobs with SSE Support
- Introduced GenerationJobManager to handle resumable LLM generation jobs independently of HTTP connections.
- Added support for subscribing to ongoing generation jobs via SSE, allowing clients to reconnect and receive updates without losing progress.
- Enhanced existing agent controllers and routes to integrate resumable functionality, including job creation, completion, and error handling.
- Updated client-side hooks to manage adaptive SSE streams, switching between standard and resumable modes based on user settings.
- Added UI components and settings for enabling/disabling resumable streams, improving user experience during unstable connections.
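  A minimal sketch of the reconnect flow this enables, assuming a hypothetical endpoint path and payload shape (the actual routes and event names live in the agent controllers and client hooks):

      // Hypothetical sketch only: resubscribe to an in-flight generation job.
      function subscribeToJob(streamId: string, onDelta: (text: string) => void) {
        const es = new EventSource(`/api/agents/chat/${streamId}/resume`);
        es.onmessage = (e) => onDelta(JSON.parse(e.data).text);
        es.onerror = () => es.close(); // a real client would retry with backoff
        return () => es.close();
      }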
* WIP: resuming
* WIP: resumable stream
* feat: Enhance Stream Management with Abort Functionality
- Updated the abort endpoint to support aborting ongoing generation streams using either streamId or conversationId.
- Introduced a new mutation hook `useAbortStreamMutation` for client-side integration.
- Added `useStreamStatus` query to monitor stream status and facilitate resuming conversations.
- Enhanced `useChatHelpers` to incorporate abort functionality when stopping generation.
- Improved `useResumableSSE` to handle stream errors and token refresh seamlessly.
- Updated `useResumeOnLoad` to check for active streams and resume conversations appropriately.
* fix: Update query parameter handling in useChatHelpers
- Refactored the logic for determining the query parameter used in fetching messages to prioritize paramId from the URL, falling back to conversationId only if paramId is not available. This change ensures consistency with the ChatView component's expectations.
* fix: improve syncing when switching conversations
* fix: Prevent memory leaks in useResumableSSE by clearing handler maps on stream completion and cleanup
* fix: Improve content type mismatch handling in useStepHandler
- Enhanced the condition for detecting content type mismatches to include additional checks, ensuring more robust validation of content types before processing updates.
* fix: Allow dynamic content creation in useChatFunctions
- Updated the initial response handling to avoid pre-initializing content types, enabling dynamic creation of content parts based on incoming delta events. This change supports various content types such as think and text.
* fix: Refine response message handling in useStepHandler
- Updated logic to determine the appropriate response message based on the last message's origin, ensuring correct message replacement or appending based on user interaction. This change enhances the accuracy of message updates in the chat flow.
* refactor: Enhance GenerationJobManager with In-Memory Implementations
- Introduced InMemoryJobStore, InMemoryEventTransport, and InMemoryContentState for improved job management and event handling.
- Updated GenerationJobManager to utilize these new implementations, allowing for better separation of concerns and easier maintenance.
- Enhanced job metadata handling to support user messages and response IDs for resumable functionality.
- Improved cleanup and state management processes to prevent memory leaks and ensure efficient resource usage.
* refactor: Enhance GenerationJobManager with improved subscriber handling
- Updated RuntimeJobState to include allSubscribersLeftHandlers for managing client disconnections without affecting subscriber count.
- Refined createJob and subscribe methods to ensure generation starts only when the first real client connects.
- Added detailed documentation for methods and properties to clarify the synchronization of job generation with client readiness.
- Improved logging for subscriber checks and event handling to facilitate debugging and monitoring.
* chore: Adjust timeout for subscriber readiness in ResumableAgentController
- Reduced the timeout duration from 5000ms to 2500ms in the startGeneration function to improve responsiveness when waiting for subscriber readiness. This change aims to enhance the efficiency of the agent's background generation process.
* refactor: Update GenerationJobManager documentation and structure
- Enhanced the documentation for GenerationJobManager to clarify the architecture and pluggable service design.
- Updated comments to reflect the potential for Redis integration and the need for async refactoring.
- Improved the structure of the GenerationJob facade to emphasize the unified API while allowing for implementation swapping without affecting consumer code.
* refactor: Convert GenerationJobManager methods to async for improved performance
- Updated methods in GenerationJobManager and InMemoryJobStore to be asynchronous, enhancing the handling of job creation, retrieval, and management.
- Adjusted the ResumableAgentController and related routes to await job operations, ensuring proper flow and error handling.
- Increased timeout duration in ResumableAgentController's startGeneration function to 3500ms for better subscriber readiness management.
* refactor: Simplify initial response handling in useChatFunctions
- Removed unnecessary pre-initialization of content types in the initial response, allowing for dynamic content creation based on incoming delta events. This change enhances flexibility in handling various content types in the chat flow.
* refactor: Clarify content handling logic in useStepHandler
- Updated comments to better explain the handling of initialContent and existingContent in edit and resume scenarios.
- Simplified the logic for merging content, ensuring that initialContent is used directly when available, improving clarity and maintainability.
* refactor: Improve message handling logic in useStepHandler
- Enhanced the logic for managing messages in multi-tab scenarios, ensuring that the most up-to-date message history is utilized.
- Removed existing response placeholders and ensured user messages are included, improving the accuracy of message updates in the chat flow.
* fix: remove unnecessary content length logging in the chat stream response, simplifying the debug message while retaining essential information about run steps. This change enhances clarity in logging without losing critical context.
* refactor: Integrate streamId handling for improved resumable functionality for attachments
- Added streamId parameter to various functions to support resumable mode in tool loading and memory processing.
- Updated related methods to ensure proper handling of attachments and responses based on the presence of streamId, enhancing the overall streaming experience.
- Improved logging and attachment management to accommodate both standard and resumable modes.
* refactor: Streamline abort handling and integrate GenerationJobManager for improved job management
- Removed the abortControllers middleware and integrated abort handling directly into GenerationJobManager.
- Updated abortMessage function to utilize GenerationJobManager for aborting jobs by conversation ID, enhancing clarity and efficiency.
- Simplified cleanup processes and improved error handling during abort operations.
- Enhanced metadata management for jobs, including endpoint and model information, to facilitate better tracking and resource management.
* refactor: Unify streamId and conversationId handling for improved job management
- Updated ResumableAgentController and AgentController to generate conversationId upfront, ensuring it matches streamId for consistency.
- Simplified job creation and metadata management by removing redundant conversationId updates from callbacks.
- Refactored abortMiddleware and related methods to utilize the unified streamId/conversationId approach, enhancing clarity in job handling.
- Removed deprecated methods from GenerationJobManager and InMemoryJobStore, streamlining the codebase and improving maintainability.
* refactor: Enhance resumable SSE handling with improved UI state management and error recovery
- Added UI state restoration on successful SSE connection to indicate ongoing submission.
- Implemented detailed error handling for network failures, including retry logic with exponential backoff.
- Introduced abort event handling to reset UI state on intentional stream closure.
- Enhanced debugging capabilities for testing reconnection and clean close scenarios.
- Updated generation function to retry on network errors, improving resilience during submission processes.
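  The retry behavior described above, reduced to a generic sketch (retry count and delays are illustrative, not the actual values):

      async function retryWithBackoff<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
        for (let attempt = 0; ; attempt++) {
          try {
            return await fn();
          } catch (err) {
            if (attempt >= maxRetries) throw err;
            const delay = Math.min(1000 * 2 ** attempt, 30_000); // 1s, 2s, 4s, ... capped
            await new Promise((resolve) => setTimeout(resolve, delay));
          }
        }
      }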
* refactor: Consolidate content state management into IJobStore for improved job handling
- Removed InMemoryContentState and integrated its functionality into InMemoryJobStore, streamlining content state management.
- Updated GenerationJobManager to utilize jobStore for content state operations, enhancing clarity and reducing redundancy.
- Introduced RedisJobStore for horizontal scaling, allowing for efficient job management and content reconstruction from chunks.
- Updated IJobStore interface to reflect changes in content state handling, ensuring consistency across implementations.
* feat: Introduce Redis-backed stream services for enhanced job management
- Added createStreamServices function to configure job store and event transport, supporting both Redis and in-memory options.
- Updated GenerationJobManager to allow configuration with custom job stores and event transports, improving flexibility for different deployment scenarios.
- Refactored IJobStore interface to support asynchronous content retrieval, ensuring compatibility with Redis implementations.
- Implemented RedisEventTransport for real-time event delivery across instances, enhancing scalability and responsiveness.
- Updated InMemoryJobStore to align with new async patterns for content and run step retrieval, ensuring consistent behavior across storage options.
* refactor: Remove redundant debug logging in GenerationJobManager and RedisEventTransport
- Eliminated unnecessary debug statements in GenerationJobManager related to subscriber actions and job updates, enhancing log clarity.
- Removed debug logging in RedisEventTransport for subscription and subscriber disconnection events, streamlining the logging output.
- Cleaned up debug messages in RedisJobStore to focus on essential information, improving overall logging efficiency.
* refactor: Enhance job state management and TTL configuration in RedisJobStore
- Updated the RedisJobStore to allow customizable TTL values for job states, improving flexibility in job management.
- Refactored the handling of job expiration and cleanup processes to align with new TTL configurations.
- Simplified the response structure in the chat status endpoint by consolidating state retrieval, enhancing clarity and performance.
- Improved comments and documentation for better understanding of the changes made.
* refactor: add cleanupOnComplete option to GenerationJobManager for flexible resource management
- Introduced a new configuration option, cleanupOnComplete, allowing immediate cleanup of event transport and job resources upon job completion.
- Updated completeJob and abortJob methods to respect the cleanupOnComplete setting, enhancing memory management.
- Improved cleanup logic in the cleanup method to handle orphaned resources effectively.
- Enhanced documentation and comments for better clarity on the new functionality.
* refactor: Update TTL configuration for completed jobs in InMemoryJobStore
- Changed the TTL for completed jobs from 5 minutes to 0, allowing for immediate cleanup.
- Enhanced cleanup logic to respect the new TTL setting, improving resource management.
- Updated comments for clarity on the behavior of the TTL configuration.
* refactor: Enhance RedisJobStore with local graph caching for improved performance
- Introduced a local cache for graph references using WeakRef to optimize reconnects for the same instance.
- Updated job deletion and cleanup methods to manage the local cache effectively, ensuring stale entries are removed.
- Enhanced content retrieval methods to prioritize local cache access, reducing Redis round-trips for same-instance reconnects.
- Improved documentation and comments for clarity on the caching mechanism and its benefits.
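  The caching idea, as an illustrative sketch (names are hypothetical): a WeakRef lets the store hand back a live graph on same-instance reconnects without preventing garbage collection.

      const graphCache = new Map<string, WeakRef<object>>();

      function cacheGraph(streamId: string, graph: object) {
        graphCache.set(streamId, new WeakRef(graph));
      }

      function getCachedGraph(streamId: string): object | undefined {
        const ref = graphCache.get(streamId);
        const graph = ref?.deref();
        if (ref && graph === undefined) {
          graphCache.delete(streamId); // collected: drop the stale entry
        }
        return graph; // undefined -> reconstruct from Redis instead
      }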
* feat: Add integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore, add Redis Cluster support
- Introduced comprehensive integration tests for GenerationJobManager, covering both in-memory and Redis modes to ensure consistent job management and event handling.
- Added tests for RedisEventTransport to validate pub/sub functionality, including cross-instance event delivery and error handling.
- Implemented integration tests for RedisJobStore, focusing on multi-instance job access, content reconstruction from chunks, and consumer group behavior.
- Enhanced test setup and teardown processes to ensure a clean environment for each test run, improving reliability and maintainability.
* fix: Improve error handling in GenerationJobManager for allSubscribersLeft handlers
- Enhanced the error handling logic when retrieving content parts for allSubscribersLeft handlers, ensuring that any failures are logged appropriately.
- Updated the promise chain to catch errors from getContentParts, improving robustness and clarity in error reporting.
* ci: Improve Redis client disconnection handling in integration tests
- Updated the afterAll cleanup logic in integration tests for GenerationJobManager, RedisEventTransport, and RedisJobStore to use `quit()` for graceful disconnection of the Redis client.
- Added fallback to `disconnect()` if `quit()` fails, enhancing robustness in resource management during test teardown.
- Improved comments for clarity on the disconnection process and error handling.
* refactor: Enhance GenerationJobManager and event transports for improved resource management
- Updated GenerationJobManager to prevent immediate cleanup of eventTransport upon job completion, allowing final events to transmit fully before cleanup.
- Added orphaned stream cleanup logic in GenerationJobManager to handle streams without corresponding jobs.
- Introduced getTrackedStreamIds method in both InMemoryEventTransport and RedisEventTransport for better management of orphaned streams.
- Improved comments for clarity on resource management and cleanup processes.
* refactor: Update GenerationJobManager and ResumableAgentController for improved event handling
- Modified GenerationJobManager to resolve readyPromise immediately, eliminating startup latency and allowing early event buffering for late subscribers.
- Enhanced event handling logic to replay buffered events when the first subscriber connects, ensuring no events are lost due to race conditions.
- Updated comments for clarity on the new event synchronization mechanism and its benefits in both Redis and in-memory modes.
* fix: Update cache integration test command for stream to ensure proper execution
- Modified the test command for cache integration related to streams by adding the --forceExit flag to prevent hanging tests.
- This change enhances the reliability of the test suite by ensuring all tests complete as expected.
* feat: Add active job management for user and show progress in conversation list
- Implemented a new endpoint to retrieve active generation job IDs for the current user, enhancing user experience by allowing visibility of ongoing tasks.
- Integrated active job tracking in the Conversations component, displaying generation indicators based on active jobs.
- Optimized job management in the GenerationJobManager and InMemoryJobStore to support user-specific job queries, ensuring efficient resource handling and cleanup.
- Updated relevant components and hooks to utilize the new active jobs feature, improving overall application responsiveness and user feedback.
* feat: Implement active job tracking by user in RedisJobStore
- Added functionality to retrieve active job IDs for a specific user, enhancing user experience by allowing visibility of ongoing tasks.
- Implemented self-healing cleanup for stale job entries, ensuring accurate tracking of active jobs.
- Updated job creation, update, and deletion methods to manage user-specific job sets effectively.
- Enhanced integration tests to validate the new user-specific job management features.
* refactor: Simplify job deletion logic by removing user job cleanup from InMemoryJobStore and RedisJobStore
* WIP: Add backend inspect script for easier debugging in production
* refactor: title generation logic
- Changed the title generation endpoint from POST to GET, allowing for more efficient retrieval of titles based on conversation ID.
- Implemented exponential backoff for title fetching retries, improving responsiveness and reducing server load.
- Introduced a queuing mechanism for title generation, ensuring titles are generated only after job completion.
- Updated relevant components and hooks to utilize the new title generation logic, enhancing user experience and application performance.
* feat: Enhance updateConvoInAllQueries to support moving conversations to the top
* chore: temporarily remove added multi-convo
* refactor: Update active jobs query integration for optimistic updates on abort
- Introduced a new interface for active jobs response to standardize data handling.
- Updated query keys for active jobs to ensure consistency across components.
- Enhanced job management logic in hooks to properly reflect active job states, improving overall application responsiveness.
* refactor: add useResumableStreamToggle hook to manage resumable streams for legacy/assistants endpoints
- Introduced a new hook, useResumableStreamToggle, to automatically toggle resumable streams off for assistants endpoints and restore the previous value when switching away.
- Updated ChatView component to utilize the new hook, enhancing the handling of streaming behavior based on endpoint type.
- Refactored imports in ChatView for better organization.
* refactor: streamline conversation title generation handling
- Removed unused type definition for TGenTitleMutation in mutations.ts to clean up the codebase.
- Integrated queueTitleGeneration call in useEventHandlers to trigger title generation for new conversations, enhancing the responsiveness of the application.
* feat: Add USE_REDIS_STREAMS configuration for stream job storage
- Introduced USE_REDIS_STREAMS to control Redis usage for resumable stream job storage, defaulting to true if USE_REDIS is enabled but not explicitly set.
- Updated cacheConfig to include USE_REDIS_STREAMS and modified createStreamServices to utilize this new configuration.
- Enhanced unit tests to validate the behavior of USE_REDIS_STREAMS under various environment settings, ensuring correct defaults and overrides.
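  The defaulting rule, sketched (boolean parsing simplified; the real code goes through cacheConfig):

      const useRedisStreams =
        process.env.USE_REDIS_STREAMS !== undefined
          ? process.env.USE_REDIS_STREAMS === 'true' // explicit setting wins
          : process.env.USE_REDIS === 'true'; // otherwise follow USE_REDIS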
* fix: title generation queue management for assistants
- Introduced a queueListeners mechanism to notify changes in the title generation queue, improving responsiveness for non-resumable streams.
- Updated the useTitleGeneration hook to track queue changes with a queueVersion state, ensuring accurate updates when jobs complete.
- Refactored the queueTitleGeneration function to trigger listeners upon adding new conversation IDs, enhancing the overall title generation flow.
* refactor: streamline agent controller and remove legacy resumable handling
- Updated the AgentController to route all requests to ResumableAgentController, simplifying the logic.
- Deprecated the legacy non-resumable path, providing a clear migration path for future use.
- Adjusted setHeaders middleware to remove unnecessary checks for resumable mode.
- Cleaned up the useResumableSSE hook to eliminate redundant query parameters, enhancing clarity and performance.
* feat: Add USE_REDIS_STREAMS configuration to .env.example
- Updated .env.example to include USE_REDIS_STREAMS setting, allowing control over Redis usage for resumable LLM streams.
- Provided additional context on the behavior of USE_REDIS_STREAMS when not explicitly set, enhancing clarity for configuration management.
* refactor: remove unused setHeaders middleware from chat route
- Eliminated the setHeaders middleware from the chat route, streamlining the request handling process.
- This change contributes to cleaner code and improved performance by reducing unnecessary middleware checks.
* fix: Add streamId parameter for resumable stream handling across services (actions, mcp oauth)
* fix(flow): add immediate abort handling and fix intervalId initialization
- Add immediate abort handler that responds instantly to abort signal
- Declare intervalId before cleanup function to prevent 'Cannot access before initialization' error
- Consolidate cleanup logic into single function to avoid duplicate cleanup
- Properly remove abort event listener on cleanup
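  The initialization pitfall behind this fix, as a generic sketch (not the actual code): if a cleanup function closes over a `let` binding declared below it and runs first, reading the binding throws "Cannot access before initialization".

      function startPolling(tick: () => void) {
        let intervalId: ReturnType<typeof setInterval> | undefined; // declared before cleanup
        const cleanup = () => {
          if (intervalId !== undefined) clearInterval(intervalId);
        };
        intervalId = setInterval(tick, 1000);
        return cleanup;
      }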
* fix(mcp): clean up OAuth flows on abort and simplify flow handling
- Add abort handler in reconnectServer to clean up mcp_oauth and mcp_get_tokens flows
- Update createAbortHandler to clean up both flow types on tool call abort
- Pass abort signal to createFlow in returnOnOAuth path
- Simplify handleOAuthRequired to always cancel existing flows and start fresh
- This ensures user always gets a new OAuth URL instead of waiting for stale flows
* fix(agents): handle 'new' conversationId and improve abort reliability
- Treat 'new' as placeholder that needs UUID in request controller
- Send JSON response immediately before tool loading for faster SSE connection
- Use job's abort controller instead of prelimAbortController
- Emit errors to stream if headers already sent
- Skip 'new' as valid ID in abort endpoint
- Add fallback to find active jobs by userId when conversationId is 'new'
* fix(stream): detect early abort and prevent navigation to non-existent conversation
- Abort controller on job completion to signal pending operations
- Detect early abort (no content, no responseMessageId) in abortJob
- Set conversation and responseMessage to null for early aborts
- Add earlyAbort flag to final event for frontend detection
- Remove unused text field from AbortResult interface
- Frontend handles earlyAbort by staying on/navigating to new chat
* test(mcp): update test to expect signal parameter in createFlow
540 lines · 16 KiB · TypeScript
/** Memories */
import { z } from 'zod';
import { tool } from '@langchain/core/tools';
import { Tools } from 'librechat-data-provider';
import { logger } from '@librechat/data-schemas';
import { Run, Providers, GraphEvents } from '@librechat/agents';
import type {
  OpenAIClientOptions,
  StreamEventData,
  ToolEndCallback,
  ClientOptions,
  EventHandler,
  ToolEndData,
  LLMConfig,
} from '@librechat/agents';
import type { TAttachment, MemoryArtifact } from 'librechat-data-provider';
import type { ObjectId, MemoryMethods } from '@librechat/data-schemas';
import type { BaseMessage, ToolMessage } from '@langchain/core/messages';
import type { Response as ServerResponse } from 'express';
import { GenerationJobManager } from '~/stream/GenerationJobManager';
import { Tokenizer } from '~/utils';

type RequiredMemoryMethods = Pick<
  MemoryMethods,
  'setMemory' | 'deleteMemory' | 'getFormattedMemories'
>;

type ToolEndMetadata = Record<string, unknown> & {
  run_id?: string;
  thread_id?: string;
};

export interface MemoryConfig {
  validKeys?: string[];
  instructions?: string;
  llmConfig?: Partial<LLMConfig>;
  tokenLimit?: number;
}

export const memoryInstructions =
  'The system automatically stores important user information and can update or delete memories based on user requests, enabling dynamic memory management.';

const getDefaultInstructions = (
  validKeys?: string[],
  tokenLimit?: number,
) => `Use the \`set_memory\` tool to save important information about the user, but ONLY when the user has requested you to remember something.

The \`delete_memory\` tool should only be used in two scenarios:
1. When the user explicitly asks to forget or remove specific information
2. When updating existing memories, use the \`set_memory\` tool instead of deleting and re-adding the memory.

1. ONLY use memory tools when the user requests memory actions with phrases like:
- "Remember [that] [I]..."
- "Don't forget [that] [I]..."
- "Please remember..."
- "Store this..."
- "Forget [that] [I]..."
- "Delete the memory about..."

2. NEVER store information just because the user mentioned it in conversation.

3. NEVER use memory tools when the user asks you to use other tools or invoke tools in general.

4. Memory tools are ONLY for memory requests, not for general tool usage.

5. If the user doesn't ask you to remember or forget something, DO NOT use any memory tools.

${validKeys && validKeys.length > 0 ? `\nVALID KEYS: ${validKeys.join(', ')}` : ''}

${tokenLimit ? `\nTOKEN LIMIT: Maximum ${tokenLimit} tokens per memory value.` : ''}

When in doubt, and the user hasn't asked to remember or forget anything, END THE TURN IMMEDIATELY.`;

/**
 * Creates a memory tool instance with user context
 */
export const createMemoryTool = ({
  userId,
  setMemory,
  validKeys,
  tokenLimit,
  totalTokens = 0,
}: {
  userId: string | ObjectId;
  setMemory: MemoryMethods['setMemory'];
  validKeys?: string[];
  tokenLimit?: number;
  totalTokens?: number;
}) => {
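  // With a tokenLimit, compute the remaining budget up front; otherwise the
  // budget is unbounded and the overflow checks below never trigger.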
  const remainingTokens = tokenLimit ? tokenLimit - totalTokens : Infinity;
  const isOverflowing = tokenLimit ? remainingTokens <= 0 : false;

  return tool(
    async ({ key, value }) => {
      try {
        if (validKeys && validKeys.length > 0 && !validKeys.includes(key)) {
          logger.warn(
            `Memory Agent failed to set memory: Invalid key "${key}". Must be one of: ${validKeys.join(
              ', ',
            )}`,
          );
          return [`Invalid key "${key}". Must be one of: ${validKeys.join(', ')}`, undefined];
        }

        const tokenCount = Tokenizer.getTokenCount(value, 'o200k_base');

        if (isOverflowing) {
          const errorArtifact: Record<Tools.memory, MemoryArtifact> = {
            [Tools.memory]: {
              key: 'system',
              type: 'error',
              value: JSON.stringify({
                errorType: 'already_exceeded',
                tokenCount: Math.abs(remainingTokens),
                totalTokens: totalTokens,
                tokenLimit: tokenLimit!,
              }),
              tokenCount: totalTokens,
            },
          };
          return [`Memory storage exceeded. Cannot save new memories.`, errorArtifact];
        }

        if (tokenLimit) {
          const newTotalTokens = totalTokens + tokenCount;
          const newRemainingTokens = tokenLimit - newTotalTokens;

          if (newRemainingTokens < 0) {
            const errorArtifact: Record<Tools.memory, MemoryArtifact> = {
              [Tools.memory]: {
                key: 'system',
                type: 'error',
                value: JSON.stringify({
                  errorType: 'would_exceed',
                  tokenCount: Math.abs(newRemainingTokens),
                  totalTokens: newTotalTokens,
                  tokenLimit,
                }),
                tokenCount: totalTokens,
              },
            };
            return [`Memory storage would exceed limit. Cannot save this memory.`, errorArtifact];
          }
        }

        const artifact: Record<Tools.memory, MemoryArtifact> = {
          [Tools.memory]: {
            key,
            value,
            tokenCount,
            type: 'update',
          },
        };

        const result = await setMemory({ userId, key, value, tokenCount });
        if (result.ok) {
          logger.debug(`Memory set for key "${key}" (${tokenCount} tokens) for user "${userId}"`);
          return [`Memory set for key "${key}" (${tokenCount} tokens)`, artifact];
        }
        logger.warn(`Failed to set memory for key "${key}" for user "${userId}"`);
        return [`Failed to set memory for key "${key}"`, undefined];
      } catch (error) {
        logger.error('Memory Agent failed to set memory', error);
        return [`Error setting memory for key "${key}"`, undefined];
      }
    },
    {
      name: 'set_memory',
      description: 'Saves important information about the user into memory.',
      responseFormat: 'content_and_artifact',
      schema: z.object({
        key: z
          .string()
          .describe(
            validKeys && validKeys.length > 0
              ? `The key of the memory value. Must be one of: ${validKeys.join(', ')}`
              : 'The key identifier for this memory',
          ),
        value: z
          .string()
          .describe(
            'Value MUST be a complete sentence that fully describes relevant user information.',
          ),
      }),
    },
  );
};

/**
 * Creates a delete memory tool instance with user context
 */
const createDeleteMemoryTool = ({
  userId,
  deleteMemory,
  validKeys,
}: {
  userId: string | ObjectId;
  deleteMemory: MemoryMethods['deleteMemory'];
  validKeys?: string[];
}) => {
  return tool(
    async ({ key }) => {
      try {
        if (validKeys && validKeys.length > 0 && !validKeys.includes(key)) {
          logger.warn(
            `Memory Agent failed to delete memory: Invalid key "${key}". Must be one of: ${validKeys.join(
              ', ',
            )}`,
          );
          return [`Invalid key "${key}". Must be one of: ${validKeys.join(', ')}`, undefined];
        }

        const artifact: Record<Tools.memory, MemoryArtifact> = {
          [Tools.memory]: {
            key,
            type: 'delete',
          },
        };

        const result = await deleteMemory({ userId, key });
        if (result.ok) {
          logger.debug(`Memory deleted for key "${key}" for user "${userId}"`);
          return [`Memory deleted for key "${key}"`, artifact];
        }
        logger.warn(`Failed to delete memory for key "${key}" for user "${userId}"`);
        return [`Failed to delete memory for key "${key}"`, undefined];
      } catch (error) {
        logger.error('Memory Agent failed to delete memory', error);
        return [`Error deleting memory for key "${key}"`, undefined];
      }
    },
    {
      name: 'delete_memory',
      description:
        'Deletes specific memory data about the user using the provided key. For updating existing memories, use the `set_memory` tool instead',
      responseFormat: 'content_and_artifact',
      schema: z.object({
        key: z
          .string()
          .describe(
            validKeys && validKeys.length > 0
              ? `The key of the memory to delete. Must be one of: ${validKeys.join(', ')}`
              : 'The key identifier of the memory to delete',
          ),
      }),
    },
  );
};
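
/**
 * Minimal event handler that forwards TOOL_END event data to an optional
 * callback, warning when metadata or tool output is missing.
 */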
export class BasicToolEndHandler implements EventHandler {
  private callback?: ToolEndCallback;
  constructor(callback?: ToolEndCallback) {
    this.callback = callback;
  }

  handle(
    event: string,
    data: StreamEventData | undefined,
    metadata?: Record<string, unknown>,
  ): void {
    if (!metadata) {
      console.warn(`Graph or metadata not found in ${event} event`);
      return;
    }
    const toolEndData = data as ToolEndData | undefined;
    if (!toolEndData?.output) {
      console.warn('No output found in tool_end event');
      return;
    }
    this.callback?.(toolEndData, metadata);
  }
}
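
/**
 * Runs the Memory Agent over the given messages with the `set_memory` and
 * `delete_memory` tools, returning any memory attachment artifacts produced.
 */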
export async function processMemory({
  res,
  userId,
  setMemory,
  deleteMemory,
  messages,
  memory,
  messageId,
  conversationId,
  validKeys,
  instructions,
  llmConfig,
  tokenLimit,
  totalTokens = 0,
  streamId = null,
}: {
  res: ServerResponse;
  setMemory: MemoryMethods['setMemory'];
  deleteMemory: MemoryMethods['deleteMemory'];
  userId: string | ObjectId;
  memory: string;
  messageId: string;
  conversationId: string;
  messages: BaseMessage[];
  validKeys?: string[];
  instructions: string;
  tokenLimit?: number;
  totalTokens?: number;
  llmConfig?: Partial<LLMConfig>;
  streamId?: string | null;
}): Promise<(TAttachment | null)[] | undefined> {
  try {
    const memoryTool = createMemoryTool({
      userId,
      tokenLimit,
      setMemory,
      validKeys,
      totalTokens,
    });
    const deleteMemoryTool = createDeleteMemoryTool({
      userId,
      validKeys,
      deleteMemory,
    });

    const currentMemoryTokens = totalTokens;

    let memoryStatus = `# Existing memory:\n${memory ?? 'No existing memories'}`;

    if (tokenLimit) {
      const remainingTokens = tokenLimit - currentMemoryTokens;
      memoryStatus = `# Memory Status:
Current memory usage: ${currentMemoryTokens} tokens
Token limit: ${tokenLimit} tokens
Remaining capacity: ${remainingTokens} tokens

# Existing memory:
${memory ?? 'No existing memories'}`;
    }

    const defaultLLMConfig: LLMConfig = {
      provider: Providers.OPENAI,
      model: 'gpt-4.1-mini',
      temperature: 0.4,
      streaming: false,
      disableStreaming: true,
    };

    const finalLLMConfig: ClientOptions = {
      ...defaultLLMConfig,
      ...llmConfig,
      /**
       * Ensure streaming is always disabled for memory processing
       */
      streaming: false,
      disableStreaming: true,
    };

    // Handle GPT-5+ models
    if ('model' in finalLLMConfig && /\bgpt-[5-9](?:\.\d+)?\b/i.test(finalLLMConfig.model ?? '')) {
      // Remove temperature for GPT-5+ models
      delete finalLLMConfig.temperature;

      // Move maxTokens to modelKwargs for GPT-5+ models
      if ('maxTokens' in finalLLMConfig && finalLLMConfig.maxTokens != null) {
        const modelKwargs = (finalLLMConfig as OpenAIClientOptions).modelKwargs ?? {};
        const paramName =
          (finalLLMConfig as OpenAIClientOptions).useResponsesApi === true
            ? 'max_output_tokens'
            : 'max_completion_tokens';
        modelKwargs[paramName] = finalLLMConfig.maxTokens;
        delete finalLLMConfig.maxTokens;
        (finalLLMConfig as OpenAIClientOptions).modelKwargs = modelKwargs;
      }
    }

    const artifactPromises: Promise<TAttachment | null>[] = [];
    const memoryCallback = createMemoryCallback({ res, artifactPromises, streamId });
    const customHandlers = {
      [GraphEvents.TOOL_END]: new BasicToolEndHandler(memoryCallback),
    };

    const run = await Run.create({
      runId: messageId,
      graphConfig: {
        type: 'standard',
        llmConfig: finalLLMConfig,
        tools: [memoryTool, deleteMemoryTool],
        instructions,
        additional_instructions: memoryStatus,
        toolEnd: true,
      },
      customHandlers,
      returnContent: true,
    });

    const config = {
      runName: 'MemoryRun',
      configurable: {
        user_id: userId,
        thread_id: conversationId,
        provider: llmConfig?.provider,
      },
      streamMode: 'values',
      recursionLimit: 3,
      version: 'v2',
    } as const;

    const inputs = {
      messages,
    };
    const content = await run.processStream(inputs, config);
    if (content) {
      logger.debug('Memory Agent processed memory successfully', content);
    } else {
      logger.warn('Memory Agent processed memory but returned no content');
    }
    return await Promise.all(artifactPromises);
  } catch (error) {
    logger.error('Memory Agent failed to process memory', error);
  }
}
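
/**
 * Loads the user's formatted memories and returns a tuple of the key-less
 * memory text plus a processor that runs the Memory Agent over messages.
 */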
export async function createMemoryProcessor({
  res,
  userId,
  messageId,
  memoryMethods,
  conversationId,
  config = {},
  streamId = null,
}: {
  res: ServerResponse;
  messageId: string;
  conversationId: string;
  userId: string | ObjectId;
  memoryMethods: RequiredMemoryMethods;
  config?: MemoryConfig;
  streamId?: string | null;
}): Promise<[string, (messages: BaseMessage[]) => Promise<(TAttachment | null)[] | undefined>]> {
  const { validKeys, instructions, llmConfig, tokenLimit } = config;
  const finalInstructions = instructions || getDefaultInstructions(validKeys, tokenLimit);

  const { withKeys, withoutKeys, totalTokens } = await memoryMethods.getFormattedMemories({
    userId,
  });

  return [
    withoutKeys,
    async function (messages: BaseMessage[]): Promise<(TAttachment | null)[] | undefined> {
      try {
        return await processMemory({
          res,
          userId,
          messages,
          validKeys,
          llmConfig,
          messageId,
          tokenLimit,
          streamId,
          conversationId,
          memory: withKeys,
          totalTokens: totalTokens || 0,
          instructions: finalInstructions,
          setMemory: memoryMethods.setMemory,
          deleteMemory: memoryMethods.deleteMemory,
        });
      } catch (error) {
        logger.error('Memory Agent failed to process memory', error);
      }
    },
  ];
}
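
/**
 * Builds a memory attachment from a tool artifact and delivers it via
 * GenerationJobManager when a streamId is set (resumable mode), or as an SSE
 * `attachment` event on the response otherwise; in either case, delivery only
 * happens once response headers have been sent.
 */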
async function handleMemoryArtifact({
  res,
  data,
  metadata,
  streamId = null,
}: {
  res: ServerResponse;
  data: ToolEndData;
  metadata?: ToolEndMetadata;
  streamId?: string | null;
}) {
  const output = data?.output as ToolMessage | undefined;
  if (!output) {
    return null;
  }

  if (!output.artifact) {
    return null;
  }

  const memoryArtifact = output.artifact[Tools.memory] as MemoryArtifact | undefined;
  if (!memoryArtifact) {
    return null;
  }

  const attachment: Partial<TAttachment> = {
    type: Tools.memory,
    toolCallId: output.tool_call_id,
    messageId: metadata?.run_id ?? '',
    conversationId: metadata?.thread_id ?? '',
    [Tools.memory]: memoryArtifact,
  };
  if (!res.headersSent) {
    return attachment;
  }
  if (streamId) {
    GenerationJobManager.emitChunk(streamId, { event: 'attachment', data: attachment });
  } else {
    res.write(`event: attachment\ndata: ${JSON.stringify(attachment)}\n\n`);
  }
  return attachment;
}

/**
 * Creates a memory callback for handling memory artifacts
 * @param params - The parameters object
 * @param params.res - The server response object
 * @param params.artifactPromises - Array to collect artifact promises
 * @param params.streamId - The stream ID for resumable mode, or null for standard mode
 * @returns The memory callback function
 */
export function createMemoryCallback({
  res,
  artifactPromises,
  streamId = null,
}: {
  res: ServerResponse;
  artifactPromises: Promise<Partial<TAttachment> | null>[];
  streamId?: string | null;
}): ToolEndCallback {
  return async (data: ToolEndData, metadata?: Record<string, unknown>) => {
    const output = data?.output as ToolMessage | undefined;
    const memoryArtifact = output?.artifact?.[Tools.memory] as MemoryArtifact;
    if (memoryArtifact == null) {
      return;
    }
    artifactPromises.push(
      handleMemoryArtifact({ res, data, metadata, streamId }).catch((error) => {
        logger.error('Error processing memory artifact content:', error);
        return null;
      }),
    );
  };
}