mirror of
https://github.com/danny-avila/LibreChat.git
synced 2025-09-22 08:12:00 +02:00

* chore: Update @modelcontextprotocol/sdk to version 1.12.3 in package.json and package-lock.json
  - Bump version of @modelcontextprotocol/sdk to 1.12.3 to incorporate recent updates.
  - Update dependencies for ajv and cross-spawn to their latest versions.
  - Add ajv as a new dependency in the sdk module.
  - Include json-schema-traverse as a new dependency in the sdk module.
* feat: @librechat/auth
* feat: Add crypto module exports to auth package
  - Introduced a new crypto module by creating index.ts in the crypto directory.
  - Updated the main index.ts of the auth package to export from the new crypto module.
* feat: Update package dependencies and build scripts for auth package
  - Added @librechat/auth as a dependency in package.json and package-lock.json.
  - Updated build scripts to include the auth package in both frontend and bun build processes.
  - Removed unused mongoose and openid-client dependencies from package-lock.json for cleaner dependency management.
* refactor: Migrate crypto utility functions to @librechat/auth
  - Replaced local crypto utility imports with the new @librechat/auth package across multiple files.
  - Removed the obsolete crypto.js file and its exports.
  - Updated relevant services and models to utilize the new encryption and decryption methods from @librechat/auth.
* feat: Enhance OAuth token handling and update dependencies in auth package
* chore: Remove Token model and TokenService due to restructuring of OAuth handling
  - Deleted the Token.js model and TokenService.js, which were responsible for managing OAuth tokens.
  - This change is part of a broader refactor to streamline OAuth token management and improve code organization.
* refactor: imports from '@librechat/auth' to '@librechat/api' and add OAuth token handling functionality
* refactor: Simplify logger usage in MCP and FlowStateManager classes
* chore: fix imports
* feat: Add OAuth configuration schema to MCP with token exchange method support
* feat: FIRST PASS Implement MCP OAuth flow with token management and error handling
  - Added a new route for handling OAuth callbacks and token retrieval.
  - Integrated OAuth token storage and retrieval mechanisms.
  - Enhanced MCP connection to support automatic OAuth flow initiation on 401 errors.
  - Implemented dynamic client registration and metadata discovery for OAuth.
  - Updated MCPManager to manage OAuth tokens and handle authentication requirements.
  - Introduced comprehensive logging for OAuth processes and error handling.
* refactor: Update MCPConnection and MCPManager to utilize new URL handling
  - Added a `url` property to MCPConnection for better URL management.
  - Refactored MCPManager to use the new `url` property instead of a deprecated method for OAuth handling.
  - Changed logging from info to debug level for flow manager and token methods initialization.
  - Improved comments for clarity on existing tokens and OAuth event listener setup.
* refactor: Improve connection timeout error messages in MCPConnection and MCPManager and use initTimeout for connection
  - Updated the connection timeout error messages to include the duration of the timeout.
  - Introduced a configurable `connectTimeout` variable in both MCPConnection and MCPManager for better flexibility.
* chore: cleanup MCP OAuth Token exchange handling; fix: erroneous use of flowsCache and remove verbose logs
* refactor: Update MCPManager and MCPTokenStorage to use TokenMethods for token management
  - Removed direct token storage handling in MCPManager and replaced it with TokenMethods for better abstraction.
  - Refactored MCPTokenStorage methods to accept parameters for token operations, enhancing flexibility and readability.
  - Improved logging messages related to token persistence and retrieval processes.
* refactor: Update MCP OAuth handling to use static methods and improve flow management
  - Refactored MCPOAuthHandler to utilize static methods for initiating and completing OAuth flows, enhancing clarity and reducing instance dependencies.
  - Updated MCPManager to pass flowManager explicitly to OAuth handling methods, improving flexibility in flow state management.
  - Enhanced comments and logging for better understanding of OAuth processes and flow state retrieval.
* refactor: Integrate token methods into createMCPTool for enhanced token management
* refactor: Change logging from info to debug level in MCPOAuthHandler for improved log management
* chore: clean up logging
* feat: first pass, auth URL from MCP OAuth flow
* chore: Improve logging format for OAuth authentication URL display
* chore: cleanup mcp manager comments
* feat: add connection reconnection logic in MCPManager
* refactor: reorganize token storage handling in MCP
  - Moved token storage logic from MCPManager to a new MCPTokenStorage class for better separation of concerns.
  - Updated imports to reflect the new token storage structure.
  - Enhanced methods for storing, retrieving, updating, and deleting OAuth tokens, improving overall token management.
* chore: update comment for SYSTEM_USER_ID in MCPManager for clarity
* feat: implement refresh token functionality in MCP
  - Added refresh token handling in MCPManager to support token renewal for both app-level and user-specific connections.
  - Introduced a refreshTokens function to facilitate token refresh logic.
  - Enhanced MCPTokenStorage to manage client information and refresh token processes.
  - Updated logging for better traceability during token operations.
* chore: cleanup @librechat/auth
* feat: implement MCP server initialization in a separate service
  - Added a new service to handle the initialization of MCP servers, improving code organization and readability.
  - Refactored the server startup logic to utilize the new initializeMCP function.
  - Removed redundant MCP initialization code from the main server file.
* fix: don't log auth url for user connections
* feat: enhance OAuth flow with success and error handling components
  - Updated OAuth callback routes to redirect to new success and error pages instead of sending status messages.
  - Introduced `OAuthSuccess` and `OAuthError` components to provide user feedback during authentication.
  - Added localization support for success and error messages in the translation files.
  - Implemented countdown functionality in the success component for a better user experience.
* fix: refresh token handling for user connections, add missing URL and methods
  - add standard enum for system user id and helper for determining app-level vs. user-level connections
* refactor: update token handling in MCPManager and MCPTokenStorage
* fix: improve error logging in OAuth authentication handler
* fix: concurrency issues for both login url emission and concurrency of oauth flows for shared flows (same user, same server, multiple calls for same server)
* fix: properly fail shared flows for concurrent server calls and prevent duplication of tokens
* chore: remove unused auth package directory from update configuration
* ci: fix mocks in samlStrategy tests
* ci: add mcpConfig to AppService test setup
* chore: remove obsolete MCP OAuth implementation documentation
* fix: update build script for API to use correct command
* chore: bump version of @librechat/api to 1.2.4
* fix: update abort signal handling in createMCPTool function
* fix: add optional clientInfo parameter to refreshTokensFunction metadata
* refactor: replace app.locals.availableTools with getCachedTools in multiple services and controllers for improved tool management
* fix: concurrent refresh token handling issue
* refactor: add signal parameter to getUserConnection method for improved abort handling
* chore: JSDoc typing for `loadEphemeralAgent`
* refactor: update isConnectionActive method to use destructured parameters for improved readability
* feat: implement caching for MCP tools to handle app-level disconnects for loading list of tools
* ci: fix agent test
281 lines
9.2 KiB
JavaScript
const { Keyv } = require('keyv');
const { CacheKeys, ViolationTypes, Time } = require('librechat-data-provider');
const { logFile, violationFile } = require('./keyvFiles');
const { isEnabled, math } = require('~/server/utils');
const keyvRedis = require('./keyvRedis');
const keyvMongo = require('./keyvMongo');

const { BAN_DURATION, USE_REDIS, DEBUG_MEMORY_CACHE, CI } = process.env ?? {};

const duration = math(BAN_DURATION, 7200000);
const isRedisEnabled = isEnabled(USE_REDIS);
const debugMemoryCache = isEnabled(DEBUG_MEMORY_CACHE);

const createViolationInstance = (namespace) => {
  const config = isRedisEnabled ? { store: keyvRedis } : { store: violationFile, namespace };
  return new Keyv(config);
};
// Serve cache from memory so no need to clear it on startup/exit
const pending_req = isRedisEnabled
  ? new Keyv({ store: keyvRedis })
  : new Keyv({ namespace: CacheKeys.PENDING_REQ });

const config = isRedisEnabled
  ? new Keyv({ store: keyvRedis })
  : new Keyv({ namespace: CacheKeys.CONFIG_STORE });

const roles = isRedisEnabled
  ? new Keyv({ store: keyvRedis })
  : new Keyv({ namespace: CacheKeys.ROLES });

const mcpTools = isRedisEnabled
  ? new Keyv({ store: keyvRedis })
  : new Keyv({ namespace: CacheKeys.MCP_TOOLS });

const audioRuns = isRedisEnabled
  ? new Keyv({ store: keyvRedis, ttl: Time.TEN_MINUTES })
  : new Keyv({ namespace: CacheKeys.AUDIO_RUNS, ttl: Time.TEN_MINUTES });

const messages = isRedisEnabled
  ? new Keyv({ store: keyvRedis, ttl: Time.ONE_MINUTE })
  : new Keyv({ namespace: CacheKeys.MESSAGES, ttl: Time.ONE_MINUTE });

const flows = isRedisEnabled
  ? new Keyv({ store: keyvRedis, ttl: Time.TWO_MINUTES })
  : new Keyv({ namespace: CacheKeys.FLOWS, ttl: Time.ONE_MINUTE * 3 });

const tokenConfig = isRedisEnabled
  ? new Keyv({ store: keyvRedis, ttl: Time.THIRTY_MINUTES })
  : new Keyv({ namespace: CacheKeys.TOKEN_CONFIG, ttl: Time.THIRTY_MINUTES });

const genTitle = isRedisEnabled
  ? new Keyv({ store: keyvRedis, ttl: Time.TWO_MINUTES })
  : new Keyv({ namespace: CacheKeys.GEN_TITLE, ttl: Time.TWO_MINUTES });

const s3ExpiryInterval = isRedisEnabled
  ? new Keyv({ store: keyvRedis, ttl: Time.THIRTY_MINUTES })
  : new Keyv({ namespace: CacheKeys.S3_EXPIRY_INTERVAL, ttl: Time.THIRTY_MINUTES });

const modelQueries = isRedisEnabled
  ? new Keyv({ store: keyvRedis })
  : new Keyv({ namespace: CacheKeys.MODEL_QUERIES });

const abortKeys = isRedisEnabled
  ? new Keyv({ store: keyvRedis })
  : new Keyv({ namespace: CacheKeys.ABORT_KEYS, ttl: Time.TEN_MINUTES });

const openIdExchangedTokensCache = isRedisEnabled
  ? new Keyv({ store: keyvRedis, ttl: Time.TEN_MINUTES })
  : new Keyv({ namespace: CacheKeys.OPENID_EXCHANGED_TOKENS, ttl: Time.TEN_MINUTES });
const namespaces = {
  [CacheKeys.ROLES]: roles,
  [CacheKeys.MCP_TOOLS]: mcpTools,
  [CacheKeys.CONFIG_STORE]: config,
  [CacheKeys.PENDING_REQ]: pending_req,
  [ViolationTypes.BAN]: new Keyv({ store: keyvMongo, namespace: CacheKeys.BANS, ttl: duration }),
  [CacheKeys.ENCODED_DOMAINS]: new Keyv({
    store: keyvMongo,
    namespace: CacheKeys.ENCODED_DOMAINS,
    ttl: 0,
  }),
  general: new Keyv({ store: logFile, namespace: 'violations' }),
  concurrent: createViolationInstance('concurrent'),
  non_browser: createViolationInstance('non_browser'),
  message_limit: createViolationInstance('message_limit'),
  token_balance: createViolationInstance(ViolationTypes.TOKEN_BALANCE),
  registrations: createViolationInstance('registrations'),
  [ViolationTypes.TTS_LIMIT]: createViolationInstance(ViolationTypes.TTS_LIMIT),
  [ViolationTypes.STT_LIMIT]: createViolationInstance(ViolationTypes.STT_LIMIT),
  [ViolationTypes.CONVO_ACCESS]: createViolationInstance(ViolationTypes.CONVO_ACCESS),
  [ViolationTypes.TOOL_CALL_LIMIT]: createViolationInstance(ViolationTypes.TOOL_CALL_LIMIT),
  [ViolationTypes.FILE_UPLOAD_LIMIT]: createViolationInstance(ViolationTypes.FILE_UPLOAD_LIMIT),
  [ViolationTypes.VERIFY_EMAIL_LIMIT]: createViolationInstance(ViolationTypes.VERIFY_EMAIL_LIMIT),
  [ViolationTypes.RESET_PASSWORD_LIMIT]: createViolationInstance(
    ViolationTypes.RESET_PASSWORD_LIMIT,
  ),
  [ViolationTypes.ILLEGAL_MODEL_REQUEST]: createViolationInstance(
    ViolationTypes.ILLEGAL_MODEL_REQUEST,
  ),
  logins: createViolationInstance('logins'),
  [CacheKeys.ABORT_KEYS]: abortKeys,
  [CacheKeys.TOKEN_CONFIG]: tokenConfig,
  [CacheKeys.GEN_TITLE]: genTitle,
  [CacheKeys.S3_EXPIRY_INTERVAL]: s3ExpiryInterval,
  [CacheKeys.MODEL_QUERIES]: modelQueries,
  [CacheKeys.AUDIO_RUNS]: audioRuns,
  [CacheKeys.MESSAGES]: messages,
  [CacheKeys.FLOWS]: flows,
  [CacheKeys.OPENID_EXCHANGED_TOKENS]: openIdExchangedTokensCache,
};
/**
 * Gets all cache stores that have TTL configured
 * @returns {Keyv[]}
 */
function getTTLStores() {
  return Object.values(namespaces).filter(
    (store) => store instanceof Keyv && typeof store.opts?.ttl === 'number' && store.opts.ttl > 0,
  );
}
/**
 * Clears entries older than the cache's TTL
 * @param {Keyv} cache
 */
async function clearExpiredFromCache(cache) {
  if (!cache?.opts?.store?.entries) {
    return;
  }

  const ttl = cache.opts.ttl;
  if (!ttl) {
    return;
  }

  const expiryTime = Date.now() - ttl;
  let cleared = 0;

  // Get all keys first to avoid modification during iteration
  const keys = Array.from(cache.opts.store.keys());

  for (const key of keys) {
    try {
      const raw = cache.opts.store.get(key);
      if (!raw) {
        continue;
      }

      const data = cache.opts.deserialize(raw);
      // Check if the entry is older than TTL
      if (data?.expires && data.expires <= expiryTime) {
        const deleted = await cache.opts.store.delete(key);
        if (!deleted) {
          debugMemoryCache &&
            console.warn(`[Cache] Error deleting entry: ${key} from ${cache.opts.namespace}`);
          continue;
        }
        cleared++;
      }
    } catch (error) {
      debugMemoryCache &&
        console.log(`[Cache] Error processing entry from ${cache.opts.namespace}:`, error);
      const deleted = await cache.opts.store.delete(key);
      if (!deleted) {
        debugMemoryCache &&
          console.warn(`[Cache] Error deleting entry: ${key} from ${cache.opts.namespace}`);
        continue;
      }
      cleared++;
    }
  }

  if (cleared > 0) {
    debugMemoryCache &&
      console.log(
        `[Cache] Cleared ${cleared} entries older than ${ttl}ms from ${cache.opts.namespace}`,
      );
  }
}
const auditCache = () => {
  const ttlStores = getTTLStores();
  console.log('[Cache] Starting audit');

  ttlStores.forEach((store) => {
    if (!store?.opts?.store?.entries) {
      return;
    }

    console.log(`[Cache] ${store.opts.namespace} entries:`, {
      count: store.opts.store.size,
      ttl: store.opts.ttl,
      keys: Array.from(store.opts.store.keys()),
      entriesWithTimestamps: Array.from(store.opts.store.entries()).map(([key, value]) => ({
        key,
        value,
      })),
    });
  });
};
/**
 * Clears expired entries from all TTL-enabled stores
 */
async function clearAllExpiredFromCache() {
  const ttlStores = getTTLStores();
  await Promise.all(ttlStores.map((store) => clearExpiredFromCache(store)));

  // Force garbage collection if available (Node.js with --expose-gc flag)
  if (global.gc) {
    global.gc();
  }
}
if (!isRedisEnabled && !isEnabled(CI)) {
  /** @type {Set<NodeJS.Timeout>} */
  const cleanupIntervals = new Set();

  // Clear expired entries every 30 seconds
  const cleanup = setInterval(() => {
    clearAllExpiredFromCache();
  }, Time.THIRTY_SECONDS);

  cleanupIntervals.add(cleanup);

  if (debugMemoryCache) {
    const monitor = setInterval(() => {
      const ttlStores = getTTLStores();
      const memory = process.memoryUsage();
      const totalSize = ttlStores.reduce((sum, store) => sum + (store.opts?.store?.size ?? 0), 0);

      console.log('[Cache] Memory usage:', {
        heapUsed: `${(memory.heapUsed / 1024 / 1024).toFixed(2)} MB`,
        heapTotal: `${(memory.heapTotal / 1024 / 1024).toFixed(2)} MB`,
        rss: `${(memory.rss / 1024 / 1024).toFixed(2)} MB`,
        external: `${(memory.external / 1024 / 1024).toFixed(2)} MB`,
        totalCacheEntries: totalSize,
      });

      auditCache();
    }, Time.ONE_MINUTE);

    cleanupIntervals.add(monitor);
  }

  const dispose = () => {
    debugMemoryCache && console.log('[Cache] Cleaning up and shutting down...');
    cleanupIntervals.forEach((interval) => clearInterval(interval));
    cleanupIntervals.clear();

    // One final cleanup before exit
    clearAllExpiredFromCache().then(() => {
      debugMemoryCache && console.log('[Cache] Final cleanup completed');
      process.exit(0);
    });
  };

  // Handle various termination signals
  process.on('SIGTERM', dispose);
  process.on('SIGINT', dispose);
  process.on('SIGQUIT', dispose);
  process.on('SIGHUP', dispose);
}
/**
 * Returns the keyv cache specified by type.
 * If an invalid type is passed, an error will be thrown.
 *
 * @param {string} key - The key for the namespace to access
 * @returns {Keyv} - If a valid key is passed, returns the cache store registered under that key.
 * @throws Will throw an error if an invalid key is passed.
 */
const getLogStores = (key) => {
  if (!key || !namespaces[key]) {
    throw new Error(`Invalid store key: ${key}`);
  }
  return namespaces[key];
};

module.exports = getLogStores;
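The lookup-with-guard pattern behind `getLogStores` can be sketched in isolation. Below is a minimal, self-contained illustration: plain `Map` objects stand in for the Keyv stores, and the string keys are hypothetical stand-ins for the `CacheKeys` constants used by the real module.

```javascript
// Minimal sketch of the namespace-lookup pattern used by getLogStores.
// Plain Maps stand in for the Keyv stores; 'CONFIG_STORE' and
// 'PENDING_REQ' are hypothetical stand-ins for CacheKeys constants.
const namespaces = {
  CONFIG_STORE: new Map(),
  PENDING_REQ: new Map(),
};

const getLogStores = (key) => {
  if (!key || !namespaces[key]) {
    throw new Error(`Invalid store key: ${key}`);
  }
  return namespaces[key];
};

// Valid keys always return the same backing store; unknown keys throw.
const store = getLogStores('CONFIG_STORE');
store.set('greeting', 'hello');
console.log(getLogStores('CONFIG_STORE').get('greeting')); // → hello
console.log(store === getLogStores('CONFIG_STORE')); // → true
```

Failing fast on an unknown key turns a typo'd cache name into an immediate error at the call site, rather than a silently `undefined` store that only surfaces later.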