🧠 feat: User Memories for Conversational Context (#7760)
* 🧠 feat: User Memories for Conversational Context
chore: mcp typing, use `t`
WIP: first pass, Memories UI
- Added MemoryViewer component for displaying, editing, and deleting user memories.
- Integrated data provider hooks for fetching, updating, and deleting memories.
- Implemented pagination and loading states for better user experience.
- Created unit tests for MemoryViewer to ensure functionality and interaction with data provider.
- Updated translation files to include new UI strings related to memories.
chore: move mcp-related files to own directory
chore: rename librechat-mcp to librechat-api
WIP: first pass, memory processing and data schemas
chore: linting in fileSearch.js query description
chore: rename librechat-api to @librechat/api across the project
WIP: first pass, functional memory agent
feat: add MemoryEditDialog and MemoryViewer components for managing user memories
- Introduced MemoryEditDialog for editing memory entries with validation and toast notifications.
- Updated MemoryViewer to support editing and deleting memories, including pagination and loading states.
- Enhanced data provider to handle memory updates with optional original key for better management.
- Added new localization strings for memory-related UI elements.
feat: add memory permissions management
- Implemented memory permissions in the backend, allowing roles to have specific permissions for using, creating, updating, and reading memories.
- Added new API endpoints for updating memory permissions associated with roles.
- Created a new AdminSettings component for managing memory permissions in the frontend.
- Integrated memory permissions into the existing roles and permissions schemas.
- Updated the interface to include memory settings and permissions.
- Enhanced the MemoryViewer component to conditionally render admin settings based on user roles.
- Added localization support for memory permissions in the translation files.
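Conceptually, each role then carries a set of memory permission flags along these lines (a minimal sketch; the flag and role names are illustrative, not the project's exact constants):

// Illustrative memory permission flags, keyed per role.
interface MemoryPermissions {
  USE: boolean;    // the agent may read and use memories during a run
  CREATE: boolean; // the user may create new memory entries
  UPDATE: boolean; // the user may edit existing entries
  READ: boolean;   // the user may view stored memories
}

const defaultMemoryPermissions: Record<'ADMIN' | 'USER', MemoryPermissions> = {
  ADMIN: { USE: true, CREATE: true, UPDATE: true, READ: true },
  USER: { USE: true, CREATE: true, UPDATE: true, READ: true },
};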
feat: move AdminSettings component to a new position in MemoryViewer for better visibility
refactor: clean up commented code in MemoryViewer component
feat: enhance MemoryViewer with search functionality and improve MemoryEditDialog integration
- Added a search input to filter memories in the MemoryViewer component.
- Refactored MemoryEditDialog to accept children for better customization.
- Updated MemoryViewer to utilize the new EditMemoryButton and DeleteMemoryButton components for editing and deleting memories.
- Improved localization support by adding new strings for memory filtering and deletion confirmation.
refactor: optimize memory filtering in MemoryViewer using match-sorter
- Replaced manual filtering logic with match-sorter for improved search functionality.
- Enhanced performance and readability of the filteredMemories computation.
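A minimal sketch of the match-sorter-based filtering described above (the memory fields `key` and `value` and the `useMemo` wrapper are assumptions for illustration, not the exact component code):

import { useMemo } from 'react';
import { matchSorter } from 'match-sorter';

type TUserMemory = { key: string; value: string };

/** Filters memories by the search query, ranking closer matches first. */
function useFilteredMemories(memories: TUserMemory[], query: string): TUserMemory[] {
  return useMemo(() => {
    if (!query.trim()) {
      return memories;
    }
    // match-sorter handles ranking and case-insensitive matching across both fields
    return matchSorter(memories, query, { keys: ['key', 'value'] });
  }, [memories, query]);
}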
feat: enhance MemoryEditDialog with triggerRef and improve updateMemory mutation handling
feat: implement access control for MemoryEditDialog and MemoryViewer components
refactor: remove commented out code and create runMemory method
refactor: rename role-based files
feat: implement access control for memory usage in AgentClient
refactor: simplify checkVisionRequest method in AgentClient by removing commented-out code
refactor: make `agents` dir in api package
refactor: migrate Azure utilities to TypeScript and consolidate imports
refactor: move sanitizeFilename function to a new file and update imports, add related tests
refactor: update LLM configuration types and consolidate Azure options in the API package
chore: linting
chore: import order
refactor: replace getLLMConfig with getOpenAIConfig and remove unused LLM configuration file
chore: update winston-daily-rotate-file to version 5.0.0 and add object-hash dependency in package-lock.json
refactor: move primeResources and optionalChainWithEmptyCheck functions to resources.ts and update imports
refactor: move createRun function to a new run.ts file and update related imports
fix: ensure safeAttachments is correctly typed as an array of TFile
chore: add node-fetch dependency and refactor fetch-related functions into packages/api/utils, removing the old generators file
refactor: enhance TEndpointOption type by using Pick to streamline endpoint fields and add new properties for model parameters and client options
feat: implement initializeOpenAIOptions function and update OpenAI types for enhanced configuration handling
fix: update types due to new TEndpointOption typing
fix: ensure safe access to group parameters in initializeOpenAIOptions function
fix: remove redundant API key validation comment in initializeOpenAIOptions function
refactor: rename initializeOpenAIOptions to initializeOpenAI for consistency and update related documentation
refactor: decouple req.body fields and tool loading from initializeAgentOptions
chore: linting
refactor: adjust column widths in MemoryViewer for improved layout
refactor: simplify agent initialization by creating loadAgent function and removing unused code
feat: add memory configuration loading and validation functions
WIP: first pass, memory processing with config
feat: implement memory callback and artifact handling
feat: implement memory artifacts display and processing updates
feat: add memory configuration options and schema validation for validKeys
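As a rough illustration, the validKeys check could be expressed as a zod schema along these lines (the neighboring fields mirror options mentioned later in this changelog; the exact shape and defaults are assumptions):

import { z } from 'zod';

// Illustrative sketch of a memory config schema; not the project's exact schema.
const memorySchema = z.object({
  validKeys: z.array(z.string()).optional(),
  tokenLimit: z.number().int().positive().optional(),
  messageWindowSize: z.number().int().positive().default(5),
});

type MemoryConfig = z.infer<typeof memorySchema>;

/** Returns true when a proposed memory key is allowed by the config. */
function isValidKey(config: MemoryConfig, key: string): boolean {
  if (!config.validKeys || config.validKeys.length === 0) {
    return true; // no restriction configured
  }
  return config.validKeys.includes(key);
}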
fix: update MemoryEditDialog and MemoryViewer to handle memory state and display improvements
refactor: remove padding from BookmarkTable and MemoryViewer headers for consistent styling
WIP: initial tokenLimit config and move Tokenizer to @librechat/api
refactor: update mongoMeili plugin methods to use callback for better error handling
feat: enhance memory management with token tracking and usage metrics
- Added token counting for memory entries to enforce limits and provide usage statistics.
- Updated memory retrieval and update routes to include total token usage and limit.
- Enhanced MemoryEditDialog and MemoryViewer components to display memory usage and token information.
- Refactored memory processing functions to handle token limits and provide feedback on memory capacity.
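A simplified sketch of the token-limit accounting described above (the entry shape and helper names are assumptions for illustration):

type MemoryEntry = { key: string; value: string; tokenCount: number };

/** Summarizes current usage against the configured token limit. */
function getMemoryUsage(entries: MemoryEntry[], tokenLimit: number) {
  const totalTokens = entries.reduce((sum, entry) => sum + entry.tokenCount, 0);
  return {
    totalTokens,
    tokenLimit,
    remaining: Math.max(tokenLimit - totalTokens, 0),
  };
}

/** Rejects a new entry when it would push usage past the limit. */
function canStoreMemory(entries: MemoryEntry[], tokenLimit: number, newEntryTokens: number): boolean {
  const { totalTokens } = getMemoryUsage(entries, tokenLimit);
  return totalTokens + newEntryTokens <= tokenLimit;
}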
feat: implement memory artifact handling in attachment handler
- Enhanced useAttachmentHandler to process memory artifacts when receiving updates.
- Introduced handleMemoryArtifact utility to manage memory updates and deletions.
- Updated query client to reflect changes in memory state based on incoming data.
refactor: restructure web search key extraction logic
- Moved the logic for extracting API keys from the webSearchAuth configuration into a dedicated function, getWebSearchKeys.
- Updated webSearchKeys to utilize the new function for improved clarity and maintainability.
- Prevents build-time errors.
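A hedged sketch of the kind of extraction getWebSearchKeys performs (the webSearchAuth shape shown here is an assumption):

// Assumed shape: each provider maps to an object whose values name required API keys.
type WebSearchAuth = Record<string, Record<string, string>>;

// Computing the keys inside a function, rather than at module load time, defers
// the work until the configuration actually exists, which avoids build-time errors.
function getWebSearchKeys(webSearchAuth: WebSearchAuth): string[] {
  return Object.values(webSearchAuth).flatMap((provider) => Object.values(provider));
}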
feat: add personalization settings and memory preferences management
- Introduced a new Personalization tab in settings to manage user memory preferences.
- Implemented API endpoints and client-side logic for updating memory preferences.
- Enhanced user interface components to reflect personalization options and memory usage.
- Updated permissions to allow users to opt out of memory features.
- Added localization support for new settings and messages related to personalization.
style: personalization switch class
feat: add PersonalizationIcon and align Side Panel UI
feat: implement memory creation functionality
- Added a new API endpoint for creating memory entries, including validation for key and value.
- Introduced MemoryCreateDialog component for user interface to facilitate memory creation.
- Integrated token limit checks to prevent exceeding user memory capacity.
- Updated MemoryViewer to include a button for opening the memory creation dialog.
- Enhanced localization support for new messages related to memory creation.
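An illustrative Express-style handler for the creation endpoint described above (the route path, helper names, and status codes are assumptions, not the project's exact implementation):

import express, { type Request, type Response } from 'express';

// Stand-ins for the project's real helpers; names and shapes are assumptions.
declare function countTokens(text: string): number;
declare function getMemoryTokenUsage(userId: string): Promise<{ totalTokens: number; tokenLimit: number }>;
declare function createMemory(entry: { userId: string; key: string; value: string; tokenCount: number }): Promise<unknown>;

const router = express.Router();

router.post('/memories', async (req: Request, res: Response) => {
  const { key, value } = req.body ?? {};
  // Validate both fields before doing any work.
  if (typeof key !== 'string' || !key.trim() || typeof value !== 'string' || !value.trim()) {
    return res.status(400).json({ error: 'Both key and value are required' });
  }

  // Enforce the per-user token limit before persisting.
  const userId = (req as Request & { user: { id: string } }).user.id;
  const tokenCount = countTokens(value);
  const { totalTokens, tokenLimit } = await getMemoryTokenUsage(userId);
  if (totalTokens + tokenCount > tokenLimit) {
    return res.status(400).json({ error: 'Memory token limit exceeded' });
  }

  const memory = await createMemory({ userId, key, value, tokenCount });
  return res.status(201).json(memory);
});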
feat: enhance message processing with configurable window size
- Updated AgentClient to use a configurable message window size for processing messages.
- Introduced messageWindowSize option in memory configuration schema with a default value of 5.
- Improved logic for selecting messages to process based on the configured window size.
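A minimal sketch of the window selection, assuming a simple slice over the most recent messages (the real selection logic may take more into account):

type TMessage = { messageId: string; text: string };

/** Selects the most recent `messageWindowSize` messages for memory processing. */
function selectMessageWindow(messages: TMessage[], messageWindowSize = 5): TMessage[] {
  if (messageWindowSize <= 0) {
    return [];
  }
  return messages.slice(-messageWindowSize);
}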
chore: update librechat-data-provider version to 0.7.87 in package.json and package-lock.json
chore: remove OpenAPIPlugin and its associated tests
chore: remove MIGRATION_README.md as migration tasks are completed
ci: fix backend tests
chore: remove unused translation keys from localization file
chore: remove problematic test file and unused var in AgentClient
chore: remove unused import and import directly for JSDoc
* feat: add api package build stage in Dockerfile for improved modularity
* docs: reorder build steps in contributing guide for clarity

import type * as t from './types';

const RECOGNIZED_PROVIDERS = new Set([
  'google',
  'anthropic',
  'openai',
  'azureopenai',
  'openrouter',
  'xai',
  'deepseek',
  'ollama',
  'bedrock',
]);

const CONTENT_ARRAY_PROVIDERS = new Set(['google', 'anthropic', 'azureopenai', 'openai']);

const imageFormatters: Record<string, undefined | t.ImageFormatter> = {
  // google: (item) => ({
  //   type: 'image',
  //   inlineData: {
  //     mimeType: item.mimeType,
  //     data: item.data,
  //   },
  // }),
  // anthropic: (item) => ({
  //   type: 'image',
  //   source: {
  //     type: 'base64',
  //     media_type: item.mimeType,
  //     data: item.data,
  //   },
  // }),
  default: (item) => ({
    type: 'image_url',
    image_url: {
      url: item.data.startsWith('http') ? item.data : `data:${item.mimeType};base64,${item.data}`,
    },
  }),
};

function isImageContent(item: t.ToolContentPart): item is t.ImageContent {
  return item.type === 'image';
}

function parseAsString(result: t.MCPToolCallResponse): string {
  const content = result?.content ?? [];
  if (!content.length) {
    return '(No response)';
  }

  const text = content
    .map((item) => {
      if (item.type === 'text') {
        return item.text;
      }
      if (item.type === 'resource') {
        const resourceText = [];
        if (item.resource.text != null && item.resource.text) {
          resourceText.push(item.resource.text);
        }
        if (item.resource.uri) {
          resourceText.push(`Resource URI: ${item.resource.uri}`);
        }
        if (item.resource.name) {
          resourceText.push(`Resource: ${item.resource.name}`);
        }
        if (item.resource.description) {
          resourceText.push(`Description: ${item.resource.description}`);
        }
        if (item.resource.mimeType != null && item.resource.mimeType) {
          resourceText.push(`Type: ${item.resource.mimeType}`);
        }
        return resourceText.join('\n');
      }
      return JSON.stringify(item, null, 2);
    })
    .filter(Boolean)
    .join('\n\n');

  return text;
}

/**
 * Converts MCPToolCallResponse content into recognized content block types.
 * Recognized types: "image", "image_url", "text", "json".
 * First element: string or formatted content (excluding image_url)
 * Second element: image_url content, if any
 *
 * @param {t.MCPToolCallResponse} result - The MCPToolCallResponse object
 * @param {string} provider - The provider name (google, anthropic, openai)
 * @returns {t.FormattedContentResult} Tuple of content and image_urls
 */
export function formatToolContent(
  result: t.MCPToolCallResponse,
  provider: t.Provider,
): t.FormattedContentResult {
  if (!RECOGNIZED_PROVIDERS.has(provider)) {
    return [parseAsString(result), undefined];
  }

  const content = result?.content ?? [];
  if (!content.length) {
    return [[{ type: 'text', text: '(No response)' }], undefined];
  }

  const formattedContent: t.FormattedContent[] = [];
  const imageUrls: t.FormattedContent[] = [];
  let currentTextBlock = '';

  type ContentHandler = undefined | ((item: t.ToolContentPart) => void);

  const contentHandlers: {
    text: (item: Extract<t.ToolContentPart, { type: 'text' }>) => void;
    image: (item: t.ToolContentPart) => void;
    resource: (item: Extract<t.ToolContentPart, { type: 'resource' }>) => void;
  } = {
    text: (item) => {
      currentTextBlock += (currentTextBlock ? '\n\n' : '') + item.text;
    },

    image: (item) => {
      if (!isImageContent(item)) {
        return;
      }
      if (CONTENT_ARRAY_PROVIDERS.has(provider) && currentTextBlock) {
        formattedContent.push({ type: 'text', text: currentTextBlock });
        currentTextBlock = '';
      }
      const formatter = imageFormatters.default as t.ImageFormatter;
      const formattedImage = formatter(item);

      if (formattedImage.type === 'image_url') {
        imageUrls.push(formattedImage);
      } else {
        formattedContent.push(formattedImage);
      }
    },

    resource: (item) => {
      const resourceText = [];
      if (item.resource.text != null && item.resource.text) {
        resourceText.push(item.resource.text);
      }
      if (item.resource.uri.length) {
        resourceText.push(`Resource URI: ${item.resource.uri}`);
      }
      if (item.resource.name) {
        resourceText.push(`Resource: ${item.resource.name}`);
      }
      if (item.resource.description) {
        resourceText.push(`Description: ${item.resource.description}`);
      }
      if (item.resource.mimeType != null && item.resource.mimeType) {
        resourceText.push(`Type: ${item.resource.mimeType}`);
      }
      currentTextBlock += (currentTextBlock ? '\n\n' : '') + resourceText.join('\n');
    },
  };

  for (const item of content) {
    const handler = contentHandlers[item.type as keyof typeof contentHandlers] as ContentHandler;
    if (handler) {
      handler(item as never);
    } else {
      const stringified = JSON.stringify(item, null, 2);
      currentTextBlock += (currentTextBlock ? '\n\n' : '') + stringified;
    }
  }

  if (CONTENT_ARRAY_PROVIDERS.has(provider) && currentTextBlock) {
    formattedContent.push({ type: 'text', text: currentTextBlock });
  }

  const artifacts = imageUrls.length ? { content: imageUrls } : undefined;

  if (CONTENT_ARRAY_PROVIDERS.has(provider)) {
    return [formattedContent, artifacts];
  }

  return [currentTextBlock, artifacts];
}
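For reference, a hypothetical call site showing how the returned tuple is consumed (the import path and sample response content are assumptions):

// Hypothetical usage; the import path is an assumption.
import { formatToolContent } from './parsers';
import type * as t from './types';

const result: t.MCPToolCallResponse = {
  content: [
    { type: 'text', text: 'Here is the chart you asked for.' },
    { type: 'image', data: 'iVBORw0KGgo...', mimeType: 'image/png' },
  ],
};

// For providers in CONTENT_ARRAY_PROVIDERS the first tuple element is an array of
// content blocks; otherwise it is a plain string. Image blocks are returned
// separately as artifacts so the caller can attach them to the message.
const [content, artifacts] = formatToolContent(result, 'openai');
// content   -> [{ type: 'text', text: 'Here is the chart you asked for.' }]
// artifacts -> { content: [{ type: 'image_url', image_url: { url: 'data:image/png;base64,...' } }] }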