🛜 refactor: Streamline App Config Usage (#9234)

* WIP: app.locals refactoring

WIP: appConfig

fix: update memory configuration retrieval to use getAppConfig based on user role

fix: update comment for AppConfig interface to clarify purpose

🏷️ refactor: Update tests to use getAppConfig for endpoint configurations

ci: Update AppService tests to initialize app config instead of app.locals

ci: Integrate getAppConfig into remaining tests

refactor: Update multer storage destination to use promise-based getAppConfig and improve error handling in tests

refactor: Rename initializeAppConfig to setAppConfig and update related tests

ci: Mock getAppConfig in various tests to provide default configurations

refactor: Update convertMCPToolsToPlugins to use mcpManager for server configuration and adjust related tests

chore: rename `Config/getAppConfig` -> `Config/app`

fix: streamline OpenAI image tools configuration by removing direct appConfig dependency and using function parameters

chore: correct parameter documentation for imageOutputType in ToolService.js

refactor: remove `getCustomConfig` dependency in config route

refactor: update domain validation to use appConfig for allowed domains

refactor: use appConfig registration property

chore: remove app parameter from AppService invocation

refactor: update AppConfig interface to correct registration and turnstile configurations

refactor: remove getCustomConfig dependency and use getAppConfig in PluginController, multer, and MCP services

refactor: replace getCustomConfig with getAppConfig in STTService, TTSService, and related files

refactor: replace getCustomConfig with getAppConfig in Conversation and Message models, update tempChatRetention functions to use AppConfig type

refactor: update getAppConfig calls in Conversation and Message models to include user role for temporary chat expiration

ci: update related tests

refactor: update getAppConfig call in getCustomConfigSpeech to include user role

fix: update appConfig usage to access allowedDomains from actions instead of registration

refactor: enhance AppConfig to include fileStrategies and update related file strategy logic

refactor: update imports to use normalizeEndpointName from @librechat/api and remove redundant definitions

chore: remove deprecated unused RunManager

refactor: get balance config primarily from appConfig

refactor: remove customConfig dependency for appConfig and streamline loadConfigModels logic

refactor: remove getCustomConfig usage and use app config in file citations

refactor: consolidate endpoint loading logic into loadEndpoints function

refactor: update appConfig access to use endpoints structure across various services

refactor: implement custom endpoints configuration and streamline endpoint loading logic

refactor: update getAppConfig call to include user role parameter

refactor: streamline endpoint configuration and enhance appConfig usage across services

refactor: replace getMCPAuthMap with getUserMCPAuthMap and remove unused getCustomConfig file

refactor: add type annotation for loadedEndpoints in loadEndpoints function

refactor: move /services/Files/images/parse to TS API

chore: add missing FILE_CITATIONS permission to IRole interface

refactor: restructure toolkits to TS API

refactor: separate manifest logic into its own module

refactor: consolidate tool loading logic into a new tools module for startup logic

refactor: move interface config logic to TS API

refactor: migrate checkEmailConfig to TypeScript and update imports

refactor: add FunctionTool interface and availableTools to AppConfig

refactor: decouple caching and DB operations from AppService, make part of consolidated `getAppConfig`

WIP: fix tests

* fix: rebase conflicts

* refactor: remove app.locals references

* refactor: replace getBalanceConfig with getAppConfig in various strategies and middleware

* refactor: replace appConfig?.balance with getBalanceConfig in various controllers and clients

* test: add balance configuration to titleConvo method in AgentClient tests

* chore: remove unused `openai-chat-tokens` package

* chore: remove unused imports in initializeMCPs.js

* refactor: update balance configuration to use getAppConfig instead of getBalanceConfig

* refactor: integrate configMiddleware for centralized configuration handling

* refactor: optimize email domain validation by removing unnecessary async calls

* refactor: simplify multer storage configuration by removing async calls

* refactor: reorder imports for better readability in user.js

* refactor: replace getAppConfig calls with req.config for improved performance

* chore: replace getAppConfig calls with req.config in tests for centralized configuration handling

* chore: remove unused override config

* refactor: add configMiddleware to endpoint route and replace getAppConfig with req.config

* chore: remove customConfig parameter from TTSService constructor

* refactor: pass appConfig from request to processFileCitations for improved configuration handling

* refactor: remove configMiddleware from endpoint route and retrieve appConfig directly in getEndpointsConfig if not in `req.config`

* test: add mockAppConfig to processFileCitations tests for improved configuration handling

* fix: pass req.config to hasCustomUserVars and call without await after synchronous refactor

* fix: type safety in useExportConversation

* refactor: retrieve appConfig using getAppConfig in PluginController and remove configMiddleware from plugins route, to avoid always retrieving when plugins are cached

* chore: change `MongoUser` typedef to `IUser`

* fix: Add `user` and `config` fields to ServerRequest and update JSDoc type annotations from Express.Request to ServerRequest

* fix: remove unused setAppConfig mock from Server configuration tests
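Taken together, these commits replace `app.locals` lookups with a request-scoped `req.config` populated from the consolidated `getAppConfig`. A minimal sketch of the resulting pattern follows; the middleware body and handler name are illustrative, not taken from the diff:

const { EModelEndpoint } = require('librechat-data-provider');
const { getAppConfig } = require('~/server/services/Config');

/** Assumed middleware: resolves the cached app config once per request. */
async function configMiddleware(req, res, next) {
  try {
    req.config = await getAppConfig({ role: req.user?.role });
    next();
  } catch (err) {
    next(err);
  }
}

/** Handlers read req.config and fall back to getAppConfig only when it is absent. */
async function exampleHandler(req, res) {
  const appConfig = req.config ?? (await getAppConfig({ role: req.user?.role }));
  // Endpoint settings now live under appConfig.endpoints instead of app.locals.
  const agentsConfig = appConfig?.endpoints?.[EModelEndpoint.agents];
  res.json({ allowedProviders: agentsConfig?.allowedProviders ?? [] });
}

Routes that always need the config mount the middleware; routes with their own caching (for example the plugins route) call getAppConfig directly instead.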
Danny Avila, 2025-08-26 12:10:18 -04:00, committed by GitHub
parent e1ad235f17
commit 9a210971f5
210 changed files with 4102 additions and 3465 deletions

View file

@ -1,10 +1,7 @@
const { Constants, EModelEndpoint, actionDomainSeparator } = require('librechat-data-provider');
const { Constants, actionDomainSeparator } = require('librechat-data-provider');
const { domainParser } = require('./ActionService');
jest.mock('keyv');
jest.mock('~/server/services/Config', () => ({
getCustomConfig: jest.fn(),
}));
const globalCache = {};
jest.mock('~/cache/getLogStores', () => {
@ -53,26 +50,6 @@ jest.mock('~/cache/getLogStores', () => {
});
describe('domainParser', () => {
const req = {
app: {
locals: {
[EModelEndpoint.azureOpenAI]: {
assistants: true,
},
},
},
};
const reqNoAzure = {
app: {
locals: {
[EModelEndpoint.azureOpenAI]: {
assistants: false,
},
},
},
};
const TLD = '.com';
// Non-azure request

View file

@ -1,15 +1,4 @@
jest.mock('~/models', () => ({
initializeRoles: jest.fn(),
seedDefaultRoles: jest.fn(),
ensureDefaultCategories: jest.fn(),
}));
jest.mock('~/models/Role', () => ({
updateAccessPermissions: jest.fn(),
getRoleByName: jest.fn().mockResolvedValue(null),
updateRoleByName: jest.fn(),
}));
jest.mock('~/config', () => ({
jest.mock('@librechat/data-schemas', () => ({
logger: {
info: jest.fn(),
warn: jest.fn(),
@ -17,11 +6,11 @@ jest.mock('~/config', () => ({
},
}));
jest.mock('./Config/loadCustomConfig', () => jest.fn());
jest.mock('./start/interface', () => ({
jest.mock('@librechat/api', () => ({
...jest.requireActual('@librechat/api'),
loadDefaultInterface: jest.fn(),
}));
jest.mock('./ToolService', () => ({
jest.mock('./start/tools', () => ({
loadAndFormatTools: jest.fn().mockReturnValue({}),
}));
jest.mock('./start/checks', () => ({
@ -32,15 +21,15 @@ jest.mock('./start/checks', () => ({
checkWebSearchConfig: jest.fn(),
}));
jest.mock('./Config/loadCustomConfig', () => jest.fn());
const AppService = require('./AppService');
const { loadDefaultInterface } = require('./start/interface');
const { loadDefaultInterface } = require('@librechat/api');
describe('AppService interface configuration', () => {
let app;
let mockLoadCustomConfig;
beforeEach(() => {
app = { locals: {} };
jest.resetModules();
jest.clearAllMocks();
mockLoadCustomConfig = require('./Config/loadCustomConfig');
@ -50,10 +39,16 @@ describe('AppService interface configuration', () => {
mockLoadCustomConfig.mockResolvedValue({});
loadDefaultInterface.mockResolvedValue({ prompts: true, bookmarks: true });
await AppService(app);
const result = await AppService();
expect(app.locals.interfaceConfig.prompts).toBe(true);
expect(app.locals.interfaceConfig.bookmarks).toBe(true);
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
prompts: true,
bookmarks: true,
}),
}),
);
expect(loadDefaultInterface).toHaveBeenCalled();
});
@ -61,10 +56,16 @@ describe('AppService interface configuration', () => {
mockLoadCustomConfig.mockResolvedValue({ interface: { prompts: false, bookmarks: false } });
loadDefaultInterface.mockResolvedValue({ prompts: false, bookmarks: false });
await AppService(app);
const result = await AppService();
expect(app.locals.interfaceConfig.prompts).toBe(false);
expect(app.locals.interfaceConfig.bookmarks).toBe(false);
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
prompts: false,
bookmarks: false,
}),
}),
);
expect(loadDefaultInterface).toHaveBeenCalled();
});
@ -72,10 +73,17 @@ describe('AppService interface configuration', () => {
mockLoadCustomConfig.mockResolvedValue({});
loadDefaultInterface.mockResolvedValue({});
await AppService(app);
const result = await AppService();
expect(app.locals.interfaceConfig.prompts).toBeUndefined();
expect(app.locals.interfaceConfig.bookmarks).toBeUndefined();
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.anything(),
}),
);
// Verify that prompts and bookmarks are undefined when not provided
expect(result.interfaceConfig.prompts).toBeUndefined();
expect(result.interfaceConfig.bookmarks).toBeUndefined();
expect(loadDefaultInterface).toHaveBeenCalled();
});
@ -83,10 +91,16 @@ describe('AppService interface configuration', () => {
mockLoadCustomConfig.mockResolvedValue({ interface: { prompts: true, bookmarks: false } });
loadDefaultInterface.mockResolvedValue({ prompts: true, bookmarks: false });
await AppService(app);
const result = await AppService();
expect(app.locals.interfaceConfig.prompts).toBe(true);
expect(app.locals.interfaceConfig.bookmarks).toBe(false);
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
prompts: true,
bookmarks: false,
}),
}),
);
expect(loadDefaultInterface).toHaveBeenCalled();
});
@ -108,14 +122,19 @@ describe('AppService interface configuration', () => {
},
});
await AppService(app);
const result = await AppService();
expect(app.locals.interfaceConfig.peoplePicker).toBeDefined();
expect(app.locals.interfaceConfig.peoplePicker).toMatchObject({
users: true,
groups: true,
roles: true,
});
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
peoplePicker: expect.objectContaining({
users: true,
groups: true,
roles: true,
}),
}),
}),
);
expect(loadDefaultInterface).toHaveBeenCalled();
});
@ -137,11 +156,19 @@ describe('AppService interface configuration', () => {
},
});
await AppService(app);
const result = await AppService();
expect(app.locals.interfaceConfig.peoplePicker.users).toBe(true);
expect(app.locals.interfaceConfig.peoplePicker.groups).toBe(false);
expect(app.locals.interfaceConfig.peoplePicker.roles).toBe(true);
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
peoplePicker: expect.objectContaining({
users: true,
groups: false,
roles: true,
}),
}),
}),
);
});
it('should set default peoplePicker permissions when not provided', async () => {
@ -154,11 +181,18 @@ describe('AppService interface configuration', () => {
},
});
await AppService(app);
const result = await AppService();
expect(app.locals.interfaceConfig.peoplePicker).toBeDefined();
expect(app.locals.interfaceConfig.peoplePicker.users).toBe(true);
expect(app.locals.interfaceConfig.peoplePicker.groups).toBe(true);
expect(app.locals.interfaceConfig.peoplePicker.roles).toBe(true);
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
peoplePicker: expect.objectContaining({
users: true,
groups: true,
roles: true,
}),
}),
}),
);
});
});

View file

@ -3,6 +3,7 @@ const {
loadMemoryConfig,
agentsConfigSetup,
loadWebSearchConfig,
loadDefaultInterface,
} = require('@librechat/api');
const {
FileSources,
@ -12,35 +13,26 @@ const {
} = require('librechat-data-provider');
const {
checkWebSearchConfig,
checkAzureVariables,
checkVariables,
checkHealth,
checkConfig,
} = require('./start/checks');
const { ensureDefaultCategories, seedDefaultRoles, initializeRoles } = require('~/models');
const { azureAssistantsDefaults, assistantsConfigSetup } = require('./start/assistants');
const { initializeAzureBlobService } = require('./Files/Azure/initialize');
const { initializeFirebase } = require('./Files/Firebase/initialize');
const loadCustomConfig = require('./Config/loadCustomConfig');
const handleRateLimits = require('./Config/handleRateLimits');
const { loadDefaultInterface } = require('./start/interface');
const loadCustomConfig = require('./Config/loadCustomConfig');
const { loadTurnstileConfig } = require('./start/turnstile');
const { azureConfigSetup } = require('./start/azureOpenAI');
const { processModelSpecs } = require('./start/modelSpecs');
const { initializeS3 } = require('./Files/S3/initialize');
const { loadAndFormatTools } = require('./ToolService');
const { setCachedTools } = require('./Config');
const { loadAndFormatTools } = require('./start/tools');
const { loadEndpoints } = require('./start/endpoints');
const paths = require('~/config/paths');
/**
* Loads custom config and initializes app-wide variables.
* @function AppService
* @param {Express.Application} app - The Express application object.
*/
const AppService = async (app) => {
await initializeRoles();
await seedDefaultRoles();
await ensureDefaultCategories();
const AppService = async () => {
/** @type {TCustomConfig} */
const config = (await loadCustomConfig()) ?? {};
const configDefaults = getConfigDefaults();
@ -79,101 +71,57 @@ const AppService = async (app) => {
directory: paths.structuredTools,
});
await setCachedTools(availableTools, { isGlobal: true });
// Store MCP config for later initialization
const mcpConfig = config.mcpServers || null;
const socialLogins =
config?.registration?.socialLogins ?? configDefaults?.registration?.socialLogins;
const interfaceConfig = await loadDefaultInterface(config, configDefaults);
const registration = config.registration ?? configDefaults.registration;
const interfaceConfig = await loadDefaultInterface({ config, configDefaults });
const turnstileConfig = loadTurnstileConfig(config, configDefaults);
const speech = config.speech;
const defaultLocals = {
config,
const defaultConfig = {
ocr,
paths,
config,
memory,
speech,
balance,
mcpConfig,
webSearch,
fileStrategy,
socialLogins,
registration,
filteredTools,
includedTools,
availableTools,
imageOutputType,
interfaceConfig,
turnstileConfig,
balance,
mcpConfig,
fileStrategies: config.fileStrategies,
};
const agentsDefaults = agentsConfigSetup(config);
if (!Object.keys(config).length) {
app.locals = {
...defaultLocals,
[EModelEndpoint.agents]: agentsDefaults,
const appConfig = {
...defaultConfig,
endpoints: {
[EModelEndpoint.agents]: agentsDefaults,
},
};
return;
return appConfig;
}
checkConfig(config);
handleRateLimits(config?.rateLimits);
const loadedEndpoints = loadEndpoints(config, agentsDefaults);
const endpointLocals = {};
const endpoints = config?.endpoints;
if (endpoints?.[EModelEndpoint.azureOpenAI]) {
endpointLocals[EModelEndpoint.azureOpenAI] = azureConfigSetup(config);
checkAzureVariables();
}
if (endpoints?.[EModelEndpoint.azureOpenAI]?.assistants) {
endpointLocals[EModelEndpoint.azureAssistants] = azureAssistantsDefaults();
}
if (endpoints?.[EModelEndpoint.azureAssistants]) {
endpointLocals[EModelEndpoint.azureAssistants] = assistantsConfigSetup(
config,
EModelEndpoint.azureAssistants,
endpointLocals[EModelEndpoint.azureAssistants],
);
}
if (endpoints?.[EModelEndpoint.assistants]) {
endpointLocals[EModelEndpoint.assistants] = assistantsConfigSetup(
config,
EModelEndpoint.assistants,
endpointLocals[EModelEndpoint.assistants],
);
}
endpointLocals[EModelEndpoint.agents] = agentsConfigSetup(config, agentsDefaults);
const endpointKeys = [
EModelEndpoint.openAI,
EModelEndpoint.google,
EModelEndpoint.bedrock,
EModelEndpoint.anthropic,
EModelEndpoint.gptPlugins,
];
endpointKeys.forEach((key) => {
if (endpoints?.[key]) {
endpointLocals[key] = endpoints[key];
}
});
if (endpoints?.all) {
endpointLocals.all = endpoints.all;
}
app.locals = {
...defaultLocals,
const appConfig = {
...defaultConfig,
fileConfig: config?.fileConfig,
secureImageLinks: config?.secureImageLinks,
modelSpecs: processModelSpecs(endpoints, config.modelSpecs, interfaceConfig),
...endpointLocals,
modelSpecs: processModelSpecs(config?.endpoints, config.modelSpecs, interfaceConfig),
endpoints: loadedEndpoints,
};
return appConfig;
};
module.exports = AppService;

File diff suppressed because it is too large.

View file

@ -350,6 +350,7 @@ async function runAssistant({
accumulatedMessages = [],
in_progress: inProgress,
}) {
const appConfig = openai.req.config;
let steps = accumulatedSteps;
let messages = accumulatedMessages;
const in_progress = inProgress ?? createInProgressHandler(openai, thread_id, messages);
@ -396,8 +397,8 @@ async function runAssistant({
});
const { endpoint = EModelEndpoint.azureAssistants } = openai.req.body;
/** @type {TCustomConfig.endpoints.assistants} */
const assistantsEndpointConfig = openai.req.app.locals?.[endpoint] ?? {};
/** @type {AppConfig['endpoints']['assistants']} */
const assistantsEndpointConfig = appConfig.endpoints?.[endpoint] ?? {};
const { pollIntervalMs, timeoutMs } = assistantsEndpointConfig;
const run = await waitForRun({

View file

@ -1,8 +1,8 @@
const bcrypt = require('bcryptjs');
const jwt = require('jsonwebtoken');
const { webcrypto } = require('node:crypto');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { isEnabled, checkEmailConfig } = require('@librechat/api');
const { SystemRoles, errorsToString } = require('librechat-data-provider');
const {
findUser,
@ -21,9 +21,9 @@ const {
generateRefreshToken,
} = require('~/models');
const { isEmailDomainAllowed } = require('~/server/services/domains');
const { checkEmailConfig, sendEmail } = require('~/server/utils');
const { getBalanceConfig } = require('~/server/services/Config');
const { registerSchema } = require('~/strategies/validators');
const { getAppConfig } = require('~/server/services/Config');
const { sendEmail } = require('~/server/utils');
const domains = {
client: process.env.DOMAIN_CLIENT,
@ -78,7 +78,7 @@ const createTokenHash = () => {
/**
* Send Verification Email
* @param {Partial<MongoUser> & { _id: ObjectId, email: string, name: string}} user
* @param {Partial<IUser>} user
* @returns {Promise<void>}
*/
const sendVerificationEmail = async (user) => {
@ -112,7 +112,7 @@ const sendVerificationEmail = async (user) => {
/**
* Verify Email
* @param {Express.Request} req
* @param {ServerRequest} req
*/
const verifyEmail = async (req) => {
const { email, token } = req.body;
@ -160,9 +160,9 @@ const verifyEmail = async (req) => {
/**
* Register a new user.
* @param {MongoUser} user <email, password, name, username>
* @param {Partial<MongoUser>} [additionalData={}]
* @returns {Promise<{status: number, message: string, user?: MongoUser}>}
* @param {IUser} user <email, password, name, username>
* @param {Partial<IUser>} [additionalData={}]
* @returns {Promise<{status: number, message: string, user?: IUser}>}
*/
const registerUser = async (user, additionalData = {}) => {
const { error } = registerSchema.safeParse(user);
@ -195,7 +195,8 @@ const registerUser = async (user, additionalData = {}) => {
return { status: 200, message: genericVerificationMessage };
}
if (!(await isEmailDomainAllowed(email))) {
const appConfig = await getAppConfig({ role: user.role });
if (!isEmailDomainAllowed(email, appConfig?.registration?.allowedDomains)) {
const errorMessage =
'The email address provided cannot be used. Please use a different email address.';
logger.error(`[registerUser] [Registration not allowed] [Email: ${user.email}]`);
@ -219,9 +220,8 @@ const registerUser = async (user, additionalData = {}) => {
const emailEnabled = checkEmailConfig();
const disableTTL = isEnabled(process.env.ALLOW_UNVERIFIED_EMAIL_LOGIN);
const balanceConfig = await getBalanceConfig();
const newUser = await createUser(newUserData, balanceConfig, disableTTL, true);
const newUser = await createUser(newUserData, appConfig.balance, disableTTL, true);
newUserId = newUser._id;
if (emailEnabled && !newUser.emailVerified) {
await sendVerificationEmail({
@ -248,7 +248,7 @@ const registerUser = async (user, additionalData = {}) => {
/**
* Request password reset
* @param {Express.Request} req
* @param {ServerRequest} req
*/
const requestPasswordReset = async (req) => {
const { email } = req.body;

View file

@ -0,0 +1,68 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys } = require('librechat-data-provider');
const AppService = require('~/server/services/AppService');
const { setCachedTools } = require('./getCachedTools');
const getLogStores = require('~/cache/getLogStores');
/**
* Get the app configuration based on user context
* @param {Object} [options]
* @param {string} [options.role] - User role for role-based config
* @param {boolean} [options.refresh] - Force refresh the cache
* @returns {Promise<AppConfig>}
*/
async function getAppConfig(options = {}) {
const { role, refresh } = options;
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cacheKey = role ? `${CacheKeys.APP_CONFIG}:${role}` : CacheKeys.APP_CONFIG;
if (!refresh) {
const cached = await cache.get(cacheKey);
if (cached) {
return cached;
}
}
let baseConfig = await cache.get(CacheKeys.APP_CONFIG);
if (!baseConfig) {
logger.info('[getAppConfig] App configuration not initialized. Initializing AppService...');
baseConfig = await AppService();
if (!baseConfig) {
throw new Error('Failed to initialize app configuration through AppService.');
}
if (baseConfig.availableTools) {
await setCachedTools(baseConfig.availableTools, { isGlobal: true });
}
await cache.set(CacheKeys.APP_CONFIG, baseConfig);
}
// For now, return the base config
// In the future, this is where we'll apply role-based modifications
if (role) {
// TODO: Apply role-based config modifications
// const roleConfig = await applyRoleBasedConfig(baseConfig, role);
// await cache.set(cacheKey, roleConfig);
// return roleConfig;
}
return baseConfig;
}
/**
* Clear the app configuration cache
* @returns {Promise<boolean>}
*/
async function clearAppConfigCache() {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cacheKey = CacheKeys.APP_CONFIG;
return await cache.delete(cacheKey);
}
module.exports = {
getAppConfig,
clearAppConfigCache,
};
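For reference, a minimal usage sketch of this module based on the JSDoc above; the surrounding function is hypothetical:

const { getAppConfig, clearAppConfigCache } = require('~/server/services/Config');

async function exampleUsage(userRole) {
  // Returns the cached AppConfig, running AppService on the first call.
  const appConfig = await getAppConfig({ role: userRole });

  // Dropping the cached entry forces the next call to re-run AppService.
  await clearAppConfigCache();
  const reloaded = await getAppConfig({ refresh: true });
  return { appConfig, reloaded };
}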

View file

@ -1,69 +0,0 @@
const { isEnabled } = require('@librechat/api');
const { CacheKeys, EModelEndpoint } = require('librechat-data-provider');
const { normalizeEndpointName } = require('~/server/utils');
const loadCustomConfig = require('./loadCustomConfig');
const getLogStores = require('~/cache/getLogStores');
/**
* Retrieves the configuration object
* @function getCustomConfig
* @returns {Promise<TCustomConfig | null>}
* */
async function getCustomConfig() {
const cache = getLogStores(CacheKeys.STATIC_CONFIG);
return (await cache.get(CacheKeys.LIBRECHAT_YAML_CONFIG)) || (await loadCustomConfig());
}
/**
* Retrieves the configuration object
* @function getBalanceConfig
* @returns {Promise<TCustomConfig['balance'] | null>}
* */
async function getBalanceConfig() {
const isLegacyEnabled = isEnabled(process.env.CHECK_BALANCE);
const startBalance = process.env.START_BALANCE;
/** @type {TCustomConfig['balance']} */
const config = {
enabled: isLegacyEnabled,
startBalance: startBalance != null && startBalance ? parseInt(startBalance, 10) : undefined,
};
const customConfig = await getCustomConfig();
if (!customConfig) {
return config;
}
return { ...config, ...(customConfig?.['balance'] ?? {}) };
}
/**
*
* @param {string | EModelEndpoint} endpoint
* @returns {Promise<TEndpoint | undefined>}
*/
const getCustomEndpointConfig = async (endpoint) => {
const customConfig = await getCustomConfig();
if (!customConfig) {
throw new Error(`Config not found for the ${endpoint} custom endpoint.`);
}
const { endpoints = {} } = customConfig;
const customEndpoints = endpoints[EModelEndpoint.custom] ?? [];
return customEndpoints.find(
(endpointConfig) => normalizeEndpointName(endpointConfig.name) === endpoint,
);
};
/**
* @returns {Promise<boolean>}
*/
async function hasCustomUserVars() {
const customConfig = await getCustomConfig();
const mcpServers = customConfig?.mcpServers;
return Object.values(mcpServers ?? {}).some((server) => server.customUserVars);
}
module.exports = {
getCustomConfig,
getBalanceConfig,
hasCustomUserVars,
getCustomEndpointConfig,
};

View file

@ -1,3 +1,4 @@
const { loadCustomEndpointsConfig } = require('@librechat/api');
const {
CacheKeys,
EModelEndpoint,
@ -6,8 +7,8 @@ const {
defaultAgentCapabilities,
} = require('librechat-data-provider');
const loadDefaultEndpointsConfig = require('./loadDefaultEConfig');
const loadConfigEndpoints = require('./loadConfigEndpoints');
const getLogStores = require('~/cache/getLogStores');
const { getAppConfig } = require('./app');
/**
*
@ -21,14 +22,36 @@ async function getEndpointsConfig(req) {
return cachedEndpointsConfig;
}
const defaultEndpointsConfig = await loadDefaultEndpointsConfig(req);
const customConfigEndpoints = await loadConfigEndpoints(req);
const appConfig = req.config ?? (await getAppConfig({ role: req.user?.role }));
const defaultEndpointsConfig = await loadDefaultEndpointsConfig(appConfig);
const customEndpointsConfig = loadCustomEndpointsConfig(appConfig?.endpoints?.custom);
/** @type {TEndpointsConfig} */
const mergedConfig = { ...defaultEndpointsConfig, ...customConfigEndpoints };
if (mergedConfig[EModelEndpoint.assistants] && req.app.locals?.[EModelEndpoint.assistants]) {
const mergedConfig = {
...defaultEndpointsConfig,
...customEndpointsConfig,
};
if (appConfig.endpoints?.[EModelEndpoint.azureOpenAI]) {
/** @type {Omit<TConfig, 'order'>} */
mergedConfig[EModelEndpoint.azureOpenAI] = {
userProvide: false,
};
}
if (appConfig.endpoints?.[EModelEndpoint.azureOpenAI]?.assistants) {
/** @type {Omit<TConfig, 'order'>} */
mergedConfig[EModelEndpoint.azureAssistants] = {
userProvide: false,
};
}
if (
mergedConfig[EModelEndpoint.assistants] &&
appConfig?.endpoints?.[EModelEndpoint.assistants]
) {
const { disableBuilder, retrievalModels, capabilities, version, ..._rest } =
req.app.locals[EModelEndpoint.assistants];
appConfig.endpoints[EModelEndpoint.assistants];
mergedConfig[EModelEndpoint.assistants] = {
...mergedConfig[EModelEndpoint.assistants],
@ -38,9 +61,9 @@ async function getEndpointsConfig(req) {
capabilities,
};
}
if (mergedConfig[EModelEndpoint.agents] && req.app.locals?.[EModelEndpoint.agents]) {
if (mergedConfig[EModelEndpoint.agents] && appConfig?.endpoints?.[EModelEndpoint.agents]) {
const { disableBuilder, capabilities, allowedProviders, ..._rest } =
req.app.locals[EModelEndpoint.agents];
appConfig.endpoints[EModelEndpoint.agents];
mergedConfig[EModelEndpoint.agents] = {
...mergedConfig[EModelEndpoint.agents],
@ -52,10 +75,10 @@ async function getEndpointsConfig(req) {
if (
mergedConfig[EModelEndpoint.azureAssistants] &&
req.app.locals?.[EModelEndpoint.azureAssistants]
appConfig?.endpoints?.[EModelEndpoint.azureAssistants]
) {
const { disableBuilder, retrievalModels, capabilities, version, ..._rest } =
req.app.locals[EModelEndpoint.azureAssistants];
appConfig.endpoints[EModelEndpoint.azureAssistants];
mergedConfig[EModelEndpoint.azureAssistants] = {
...mergedConfig[EModelEndpoint.azureAssistants],
@ -66,8 +89,8 @@ async function getEndpointsConfig(req) {
};
}
if (mergedConfig[EModelEndpoint.bedrock] && req.app.locals?.[EModelEndpoint.bedrock]) {
const { availableRegions } = req.app.locals[EModelEndpoint.bedrock];
if (mergedConfig[EModelEndpoint.bedrock] && appConfig?.endpoints?.[EModelEndpoint.bedrock]) {
const { availableRegions } = appConfig.endpoints[EModelEndpoint.bedrock];
mergedConfig[EModelEndpoint.bedrock] = {
...mergedConfig[EModelEndpoint.bedrock],
availableRegions,

View file

@ -1,12 +1,11 @@
const appConfig = require('./app');
const { config } = require('./EndpointService');
const getCachedTools = require('./getCachedTools');
const getCustomConfig = require('./getCustomConfig');
const mcpToolsCache = require('./mcpToolsCache');
const loadCustomConfig = require('./loadCustomConfig');
const loadConfigModels = require('./loadConfigModels');
const loadDefaultModels = require('./loadDefaultModels');
const getEndpointsConfig = require('./getEndpointsConfig');
const loadOverrideConfig = require('./loadOverrideConfig');
const loadAsyncEndpoints = require('./loadAsyncEndpoints');
module.exports = {
@ -14,10 +13,9 @@ module.exports = {
loadCustomConfig,
loadConfigModels,
loadDefaultModels,
loadOverrideConfig,
loadAsyncEndpoints,
...appConfig,
...getCachedTools,
...getCustomConfig,
...mcpToolsCache,
...getEndpointsConfig,
};

View file

@ -1,16 +1,16 @@
const path = require('path');
const { logger } = require('@librechat/data-schemas');
const { loadServiceKey, isUserProvided } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const { loadServiceKey, isUserProvided } = require('@librechat/api');
const { config } = require('./EndpointService');
const { openAIApiKey, azureOpenAIApiKey, useAzurePlugins, userProvidedOpenAI, googleKey } = config;
/**
* Load async endpoints and return a configuration object
* @param {Express.Request} req - The request object
* @param {AppConfig} [appConfig] - The app configuration object
*/
async function loadAsyncEndpoints(req) {
async function loadAsyncEndpoints(appConfig) {
let serviceKey, googleUserProvides;
/** Check if GOOGLE_KEY is provided at all(including 'user_provided') */
@ -34,7 +34,7 @@ async function loadAsyncEndpoints(req) {
const google = serviceKey || isGoogleKeyProvided ? { userProvide: googleUserProvides } : false;
const useAzure = req.app.locals[EModelEndpoint.azureOpenAI]?.plugins;
const useAzure = !!appConfig?.endpoints?.[EModelEndpoint.azureOpenAI]?.plugins;
const gptPlugins =
useAzure || openAIApiKey || azureOpenAIApiKey
? {

View file

@ -1,73 +0,0 @@
const { EModelEndpoint, extractEnvVariable } = require('librechat-data-provider');
const { isUserProvided, normalizeEndpointName } = require('~/server/utils');
const { getCustomConfig } = require('./getCustomConfig');
/**
* Load config endpoints from the cached configuration object
* @param {Express.Request} req - The request object
* @returns {Promise<TEndpointsConfig>} A promise that resolves to an object containing the endpoints configuration
*/
async function loadConfigEndpoints(req) {
const customConfig = await getCustomConfig();
if (!customConfig) {
return {};
}
const { endpoints = {} } = customConfig ?? {};
const endpointsConfig = {};
if (Array.isArray(endpoints[EModelEndpoint.custom])) {
const customEndpoints = endpoints[EModelEndpoint.custom].filter(
(endpoint) =>
endpoint.baseURL &&
endpoint.apiKey &&
endpoint.name &&
endpoint.models &&
(endpoint.models.fetch || endpoint.models.default),
);
for (let i = 0; i < customEndpoints.length; i++) {
const endpoint = customEndpoints[i];
const {
baseURL,
apiKey,
name: configName,
iconURL,
modelDisplayLabel,
customParams,
} = endpoint;
const name = normalizeEndpointName(configName);
const resolvedApiKey = extractEnvVariable(apiKey);
const resolvedBaseURL = extractEnvVariable(baseURL);
endpointsConfig[name] = {
type: EModelEndpoint.custom,
userProvide: isUserProvided(resolvedApiKey),
userProvideURL: isUserProvided(resolvedBaseURL),
modelDisplayLabel,
iconURL,
customParams,
};
}
}
if (req.app.locals[EModelEndpoint.azureOpenAI]) {
/** @type {Omit<TConfig, 'order'>} */
endpointsConfig[EModelEndpoint.azureOpenAI] = {
userProvide: false,
};
}
if (req.app.locals[EModelEndpoint.azureOpenAI]?.assistants) {
/** @type {Omit<TConfig, 'order'>} */
endpointsConfig[EModelEndpoint.azureAssistants] = {
userProvide: false,
};
}
return endpointsConfig;
}
module.exports = loadConfigEndpoints;

View file

@ -1,43 +1,39 @@
const { isUserProvided, normalizeEndpointName } = require('@librechat/api');
const { EModelEndpoint, extractEnvVariable } = require('librechat-data-provider');
const { isUserProvided, normalizeEndpointName } = require('~/server/utils');
const { fetchModels } = require('~/server/services/ModelService');
const { getCustomConfig } = require('./getCustomConfig');
const { getAppConfig } = require('./app');
/**
* Load config endpoints from the cached configuration object
* @function loadConfigModels
* @param {Express.Request} req - The Express request object.
* @param {ServerRequest} req - The Express request object.
*/
async function loadConfigModels(req) {
const customConfig = await getCustomConfig();
if (!customConfig) {
const appConfig = await getAppConfig({ role: req.user?.role });
if (!appConfig) {
return {};
}
const { endpoints = {} } = customConfig ?? {};
const modelsConfig = {};
const azureEndpoint = endpoints[EModelEndpoint.azureOpenAI];
const azureConfig = req.app.locals[EModelEndpoint.azureOpenAI];
const azureConfig = appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
const { modelNames } = azureConfig ?? {};
if (modelNames && azureEndpoint) {
if (modelNames && azureConfig) {
modelsConfig[EModelEndpoint.azureOpenAI] = modelNames;
}
if (modelNames && azureEndpoint && azureEndpoint.plugins) {
if (modelNames && azureConfig && azureConfig.plugins) {
modelsConfig[EModelEndpoint.gptPlugins] = modelNames;
}
if (azureEndpoint?.assistants && azureConfig.assistantModels) {
if (azureConfig?.assistants && azureConfig.assistantModels) {
modelsConfig[EModelEndpoint.azureAssistants] = azureConfig.assistantModels;
}
if (!Array.isArray(endpoints[EModelEndpoint.custom])) {
if (!Array.isArray(appConfig.endpoints?.[EModelEndpoint.custom])) {
return modelsConfig;
}
const customEndpoints = endpoints[EModelEndpoint.custom].filter(
const customEndpoints = appConfig.endpoints[EModelEndpoint.custom].filter(
(endpoint) =>
endpoint.baseURL &&
endpoint.apiKey &&

View file

@ -1,9 +1,9 @@
const { fetchModels } = require('~/server/services/ModelService');
const { getCustomConfig } = require('./getCustomConfig');
const loadConfigModels = require('./loadConfigModels');
const { getAppConfig } = require('./app');
jest.mock('~/server/services/ModelService');
jest.mock('./getCustomConfig');
jest.mock('./app');
const exampleConfig = {
endpoints: {
@ -60,7 +60,7 @@ const exampleConfig = {
};
describe('loadConfigModels', () => {
const mockRequest = { app: { locals: {} }, user: { id: 'testUserId' } };
const mockRequest = { user: { id: 'testUserId' } };
const originalEnv = process.env;
@ -68,6 +68,9 @@ describe('loadConfigModels', () => {
jest.resetAllMocks();
jest.resetModules();
process.env = { ...originalEnv };
// Default mock for getAppConfig
getAppConfig.mockResolvedValue({});
});
afterEach(() => {
@ -75,18 +78,15 @@ describe('loadConfigModels', () => {
});
it('should return an empty object if customConfig is null', async () => {
getCustomConfig.mockResolvedValue(null);
getAppConfig.mockResolvedValue(null);
const result = await loadConfigModels(mockRequest);
expect(result).toEqual({});
});
it('handles azure models and endpoint correctly', async () => {
mockRequest.app.locals.azureOpenAI = { modelNames: ['model1', 'model2'] };
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
azureOpenAI: {
models: ['model1', 'model2'],
},
azureOpenAI: { modelNames: ['model1', 'model2'] },
},
});
@ -97,18 +97,16 @@ describe('loadConfigModels', () => {
it('fetches custom models based on the unique key', async () => {
process.env.BASE_URL = 'http://example.com';
process.env.API_KEY = 'some-api-key';
const customEndpoints = {
custom: [
{
baseURL: '${BASE_URL}',
apiKey: '${API_KEY}',
name: 'CustomModel',
models: { fetch: true },
},
],
};
const customEndpoints = [
{
baseURL: '${BASE_URL}',
apiKey: '${API_KEY}',
name: 'CustomModel',
models: { fetch: true },
},
];
getCustomConfig.mockResolvedValue({ endpoints: customEndpoints });
getAppConfig.mockResolvedValue({ endpoints: { custom: customEndpoints } });
fetchModels.mockResolvedValue(['customModel1', 'customModel2']);
const result = await loadConfigModels(mockRequest);
@ -117,7 +115,7 @@ describe('loadConfigModels', () => {
});
it('correctly associates models to names using unique keys', async () => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: [
{
@ -146,7 +144,7 @@ describe('loadConfigModels', () => {
it('correctly handles multiple endpoints with the same baseURL but different apiKeys', async () => {
// Mock the custom configuration to simulate the user's scenario
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: [
{
@ -210,7 +208,7 @@ describe('loadConfigModels', () => {
process.env.MY_OPENROUTER_API_KEY = 'actual_openrouter_api_key';
// Setup custom configuration with specific API keys for Mistral and OpenRouter
// and "user_provided" for groq and Ollama, indicating no fetch for the latter two
getCustomConfig.mockResolvedValue(exampleConfig);
getAppConfig.mockResolvedValue(exampleConfig);
// Assuming fetchModels would be called only for Mistral and OpenRouter
fetchModels.mockImplementation(({ name }) => {
@ -273,7 +271,7 @@ describe('loadConfigModels', () => {
});
it('falls back to default models if fetching returns an empty array', async () => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: [
{
@ -306,7 +304,7 @@ describe('loadConfigModels', () => {
});
it('falls back to default models if fetching returns a falsy value', async () => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: [
{
@ -367,7 +365,7 @@ describe('loadConfigModels', () => {
},
];
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: testCases,
},

View file

@ -4,11 +4,11 @@ const { config } = require('./EndpointService');
/**
* Load async endpoints and return a configuration object
* @param {Express.Request} req - The request object
* @param {AppConfig} appConfig - The app configuration object
* @returns {Promise<Object.<string, EndpointWithOrder>>} An object whose keys are endpoint names and values are objects that contain the endpoint configuration and an order.
*/
async function loadDefaultEndpointsConfig(req) {
const { google, gptPlugins } = await loadAsyncEndpoints(req);
async function loadDefaultEndpointsConfig(appConfig) {
const { google, gptPlugins } = await loadAsyncEndpoints(appConfig);
const { assistants, azureAssistants, azureOpenAI, chatGPTBrowser } = config;
const enabledEndpoints = getEnabledEndpoints();

View file

@ -11,7 +11,7 @@ const {
* Loads the default models for the application.
* @async
* @function
* @param {Express.Request} req - The Express request object.
* @param {ServerRequest} req - The Express request object.
*/
async function loadDefaultModels(req) {
try {

View file

@ -1,6 +0,0 @@
// fetch some remote config
async function loadOverrideConfig() {
return false;
}
module.exports = loadOverrideConfig;

View file

@ -49,6 +49,7 @@ const initializeAgent = async ({
allowedProviders,
isInitialAgent = false,
}) => {
const appConfig = req.config;
if (
isAgentsEndpoint(endpointOption?.endpoint) &&
allowedProviders.size > 0 &&
@ -90,10 +91,11 @@ const initializeAgent = async ({
const { attachments, tool_resources } = await primeResources({
req,
getFiles,
appConfig,
agentId: agent.id,
attachments: currentFiles,
tool_resources: agent.tool_resources,
requestFileSet: new Set(requestFiles?.map((file) => file.file_id)),
agentId: agent.id,
});
const provider = agent.provider;
@ -112,7 +114,7 @@ const initializeAgent = async ({
})) ?? {};
agent.endpoint = provider;
const { getOptions, overrideProvider } = await getProviderConfig(provider);
const { getOptions, overrideProvider } = getProviderConfig({ provider, appConfig });
if (overrideProvider !== agent.provider) {
agent.provider = overrideProvider;
}

View file

@ -1,6 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { validateAgentModel } = require('@librechat/api');
const { createContentAggregator } = require('@librechat/agents');
const { validateAgentModel, getCustomEndpointConfig } = require('@librechat/api');
const {
Constants,
EModelEndpoint,
@ -13,7 +13,6 @@ const {
} = require('~/server/controllers/agents/callbacks');
const { initializeAgent } = require('~/server/services/Endpoints/agents/agent');
const { getModelsConfig } = require('~/server/controllers/ModelController');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const { loadAgentTools } = require('~/server/services/ToolService');
const AgentClient = require('~/server/controllers/agents/client');
const { getAgent } = require('~/models/Agent');
@ -58,6 +57,7 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
if (!endpointOption) {
throw new Error('Endpoint option not provided');
}
const appConfig = req.config;
// TODO: use endpointOption to determine options/modelOptions
/** @type {Array<UsageMetadata>} */
@ -97,8 +97,7 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
}
const agentConfigs = new Map();
/** @type {Set<string>} */
const allowedProviders = new Set(req?.app?.locals?.[EModelEndpoint.agents]?.allowedProviders);
const allowedProviders = new Set(appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders);
const loadTools = createToolLoader(signal);
/** @type {Array<MongoFile>} */
@ -158,10 +157,13 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
}
}
let endpointConfig = req.app.locals[primaryConfig.endpoint];
let endpointConfig = appConfig.endpoints?.[primaryConfig.endpoint];
if (!isAgentsEndpoint(primaryConfig.endpoint) && !endpointConfig) {
try {
endpointConfig = await getCustomEndpointConfig(primaryConfig.endpoint);
endpointConfig = getCustomEndpointConfig({
endpoint: primaryConfig.endpoint,
appConfig,
});
} catch (err) {
logger.error(
'[api/server/controllers/agents/client.js #titleConvo] Error getting custom endpoint config',

View file

@ -4,6 +4,7 @@ const { getLLMConfig } = require('~/server/services/Endpoints/anthropic/llm');
const AnthropicClient = require('~/app/clients/AnthropicClient');
const initializeClient = async ({ req, res, endpointOption, overrideModel, optionsOnly }) => {
const appConfig = req.config;
const { ANTHROPIC_API_KEY, ANTHROPIC_REVERSE_PROXY, PROXY } = process.env;
const expiresAt = req.body.key;
const isUserProvided = ANTHROPIC_API_KEY === 'user_provided';
@ -23,15 +24,14 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
let clientOptions = {};
/** @type {undefined | TBaseEndpoint} */
const anthropicConfig = req.app.locals[EModelEndpoint.anthropic];
const anthropicConfig = appConfig.endpoints?.[EModelEndpoint.anthropic];
if (anthropicConfig) {
clientOptions.streamRate = anthropicConfig.streamRate;
clientOptions.titleModel = anthropicConfig.titleModel;
}
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
const allConfig = appConfig.endpoints?.all;
if (allConfig) {
clientOptions.streamRate = allConfig.streamRate;
}

View file

@ -48,6 +48,7 @@ class Files {
}
const initializeClient = async ({ req, res, version, endpointOption, initAppClient = false }) => {
const appConfig = req.config;
const { PROXY, OPENAI_ORGANIZATION, AZURE_ASSISTANTS_API_KEY, AZURE_ASSISTANTS_BASE_URL } =
process.env;
@ -81,7 +82,7 @@ const initializeClient = async ({ req, res, version, endpointOption, initAppClie
};
/** @type {TAzureConfig | undefined} */
const azureConfig = req.app.locals[EModelEndpoint.azureOpenAI];
const azureConfig = appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
/** @type {AzureOptions | undefined} */
let azureOptions;

View file

@ -1,6 +1,6 @@
// const OpenAI = require('openai');
const { ProxyAgent } = require('undici');
const { ErrorTypes } = require('librechat-data-provider');
const { ErrorTypes, EModelEndpoint } = require('librechat-data-provider');
const { getUserKey, getUserKeyExpiry, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initialize');
// const { OpenAIClient } = require('~/app');
@ -12,6 +12,8 @@ jest.mock('~/server/services/UserService', () => ({
checkUserKeyExpiry: jest.requireActual('~/server/services/UserService').checkUserKeyExpiry,
}));
// Config is now passed via req.config, not getAppConfig
const today = new Date();
const tenDaysFromToday = new Date(today.setDate(today.getDate() + 10));
const isoString = tenDaysFromToday.toISOString();
@ -41,7 +43,11 @@ describe('initializeClient', () => {
isUserProvided: jest.fn().mockReturnValueOnce(false),
}));
const req = { user: { id: 'user123' }, app };
const req = {
user: { id: 'user123' },
app,
config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
};
const res = {};
const { openai, openAIApiKey } = await initializeClient({ req, res });
@ -57,7 +63,11 @@ describe('initializeClient', () => {
getUserKeyValues.mockResolvedValue({ apiKey: 'user-api-key', baseURL: 'https://user.api.url' });
getUserKeyExpiry.mockResolvedValue(isoString);
const req = { user: { id: 'user123' }, app };
const req = {
user: { id: 'user123' },
app,
config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
};
const res = {};
const { openai, openAIApiKey } = await initializeClient({ req, res });
@ -74,7 +84,7 @@ describe('initializeClient', () => {
let userValues = getUserKey();
try {
userValues = JSON.parse(userValues);
} catch (e) {
} catch {
throw new Error(
JSON.stringify({
type: ErrorTypes.INVALID_USER_KEY,
@ -84,7 +94,10 @@ describe('initializeClient', () => {
return userValues;
});
const req = { user: { id: 'user123' } };
const req = {
user: { id: 'user123' },
config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
};
const res = {};
await expect(initializeClient({ req, res })).rejects.toThrow(/invalid_user_key/);
@ -93,7 +106,11 @@ describe('initializeClient', () => {
test('throws error if API key is not provided', async () => {
delete process.env.AZURE_ASSISTANTS_API_KEY; // Simulate missing API key
const req = { user: { id: 'user123' }, app };
const req = {
user: { id: 'user123' },
app,
config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
};
const res = {};
await expect(initializeClient({ req, res })).rejects.toThrow(/Assistants API key not/);
@ -103,7 +120,11 @@ describe('initializeClient', () => {
process.env.AZURE_ASSISTANTS_API_KEY = 'test-key';
process.env.PROXY = 'http://proxy.server';
const req = { user: { id: 'user123' }, app };
const req = {
user: { id: 'user123' },
app,
config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
};
const res = {};
const { openai } = await initializeClient({ req, res });

View file

@ -11,6 +11,7 @@ const {
const { getUserKey, checkUserKeyExpiry } = require('~/server/services/UserService');
const getOptions = async ({ req, overrideModel, endpointOption }) => {
const appConfig = req.config;
const {
BEDROCK_AWS_SECRET_ACCESS_KEY,
BEDROCK_AWS_ACCESS_KEY_ID,
@ -50,14 +51,13 @@ const getOptions = async ({ req, overrideModel, endpointOption }) => {
let streamRate = Constants.DEFAULT_STREAM_RATE;
/** @type {undefined | TBaseEndpoint} */
const bedrockConfig = req.app.locals[EModelEndpoint.bedrock];
const bedrockConfig = appConfig.endpoints?.[EModelEndpoint.bedrock];
if (bedrockConfig && bedrockConfig.streamRate) {
streamRate = bedrockConfig.streamRate;
}
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
const allConfig = appConfig.endpoints?.all;
if (allConfig && allConfig.streamRate) {
streamRate = allConfig.streamRate;
}

View file

@ -1,3 +1,11 @@
const { Providers } = require('@librechat/agents');
const {
resolveHeaders,
isUserProvided,
getOpenAIConfig,
getCustomEndpointConfig,
createHandleLLMNewToken,
} = require('@librechat/api');
const {
CacheKeys,
ErrorTypes,
@ -5,22 +13,22 @@ const {
FetchTokenConfig,
extractEnvVariable,
} = require('librechat-data-provider');
const { Providers } = require('@librechat/agents');
const { getOpenAIConfig, createHandleLLMNewToken, resolveHeaders } = require('@librechat/api');
const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const { fetchModels } = require('~/server/services/ModelService');
const OpenAIClient = require('~/app/clients/OpenAIClient');
const { isUserProvided } = require('~/server/utils');
const getLogStores = require('~/cache/getLogStores');
const { PROXY } = process.env;
const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrideEndpoint }) => {
const appConfig = req.config;
const { key: expiresAt } = req.body;
const endpoint = overrideEndpoint ?? req.body.endpoint;
const endpointConfig = await getCustomEndpointConfig(endpoint);
const endpointConfig = getCustomEndpointConfig({
endpoint,
appConfig,
});
if (!endpointConfig) {
throw new Error(`Config not found for the ${endpoint} custom endpoint.`);
}
@ -117,8 +125,7 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
endpointTokenConfig,
};
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
const allConfig = appConfig.endpoints?.all;
if (allConfig) {
customOptions.streamRate = allConfig.streamRate;
}

View file

@ -1,21 +1,16 @@
const initializeClient = require('./initialize');
jest.mock('@librechat/api', () => ({
...jest.requireActual('@librechat/api'),
resolveHeaders: jest.fn(),
getOpenAIConfig: jest.fn(),
createHandleLLMNewToken: jest.fn(),
}));
jest.mock('librechat-data-provider', () => ({
CacheKeys: { TOKEN_CONFIG: 'token_config' },
ErrorTypes: { NO_USER_KEY: 'NO_USER_KEY', NO_BASE_URL: 'NO_BASE_URL' },
envVarRegex: /\$\{([^}]+)\}/,
FetchTokenConfig: {},
extractEnvVariable: jest.fn((value) => value),
}));
jest.mock('@librechat/agents', () => ({
Providers: { OLLAMA: 'ollama' },
getCustomEndpointConfig: jest.fn().mockReturnValue({
apiKey: 'test-key',
baseURL: 'https://test.com',
headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
models: { default: ['test-model'] },
}),
}));
jest.mock('~/server/services/UserService', () => ({
@ -23,14 +18,7 @@ jest.mock('~/server/services/UserService', () => ({
checkUserKeyExpiry: jest.fn(),
}));
jest.mock('~/server/services/Config', () => ({
getCustomEndpointConfig: jest.fn().mockResolvedValue({
apiKey: 'test-key',
baseURL: 'https://test.com',
headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
models: { default: ['test-model'] },
}),
}));
// Config is now passed via req.config, not getAppConfig
jest.mock('~/server/services/ModelService', () => ({
fetchModels: jest.fn(),
@ -42,10 +30,6 @@ jest.mock('~/app/clients/OpenAIClient', () => {
}));
});
jest.mock('~/server/utils', () => ({
isUserProvided: jest.fn().mockReturnValue(false),
}));
jest.mock('~/cache/getLogStores', () =>
jest.fn().mockReturnValue({
get: jest.fn(),
@ -55,13 +39,35 @@ jest.mock('~/cache/getLogStores', () =>
describe('custom/initializeClient', () => {
const mockRequest = {
body: { endpoint: 'test-endpoint' },
user: { id: 'user-123', email: 'test@example.com' },
user: { id: 'user-123', email: 'test@example.com', role: 'user' },
app: { locals: {} },
config: {
endpoints: {
all: {
streamRate: 25,
},
},
},
};
const mockResponse = {};
beforeEach(() => {
jest.clearAllMocks();
const { getCustomEndpointConfig, resolveHeaders, getOpenAIConfig } = require('@librechat/api');
getCustomEndpointConfig.mockReturnValue({
apiKey: 'test-key',
baseURL: 'https://test.com',
headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
models: { default: ['test-model'] },
});
resolveHeaders.mockReturnValue({ 'x-user': 'user-123', 'x-email': 'test@example.com' });
getOpenAIConfig.mockReturnValue({
useLegacyContent: true,
endpointTokenConfig: null,
llmConfig: {
callbacks: [],
},
});
});
it('calls resolveHeaders with headers, user, and body for body placeholder support', async () => {
@ -69,14 +75,14 @@ describe('custom/initializeClient', () => {
await initializeClient({ req: mockRequest, res: mockResponse, optionsOnly: true });
expect(resolveHeaders).toHaveBeenCalledWith({
headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
user: { id: 'user-123', email: 'test@example.com' },
user: { id: 'user-123', email: 'test@example.com', role: 'user' },
body: { endpoint: 'test-endpoint' }, // body - supports {{LIBRECHAT_BODY_*}} placeholders
});
});
it('throws if endpoint config is missing', async () => {
const { getCustomEndpointConfig } = require('~/server/services/Config');
getCustomEndpointConfig.mockResolvedValueOnce(null);
const { getCustomEndpointConfig } = require('@librechat/api');
getCustomEndpointConfig.mockReturnValueOnce(null);
await expect(
initializeClient({ req: mockRequest, res: mockResponse, optionsOnly: true }),
).rejects.toThrow('Config not found for the test-endpoint custom endpoint.');

View file

@ -46,10 +46,11 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
let clientOptions = {};
const appConfig = req.config;
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
const allConfig = appConfig.endpoints?.all;
/** @type {undefined | TBaseEndpoint} */
const googleConfig = req.app.locals[EModelEndpoint.google];
const googleConfig = appConfig.endpoints?.[EModelEndpoint.google];
if (googleConfig) {
clientOptions.streamRate = googleConfig.streamRate;

View file

@ -8,6 +8,8 @@ jest.mock('~/server/services/UserService', () => ({
getUserKey: jest.fn().mockImplementation(() => ({})),
}));
// Config is now passed via req.config, not getAppConfig
const app = { locals: {} };
describe('google/initializeClient', () => {
@ -26,6 +28,12 @@ describe('google/initializeClient', () => {
body: { key: expiresAt },
user: { id: '123' },
app,
config: {
endpoints: {
all: {},
google: {},
},
},
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
@ -48,6 +56,12 @@ describe('google/initializeClient', () => {
body: { key: null },
user: { id: '123' },
app,
config: {
endpoints: {
all: {},
google: {},
},
},
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
@ -71,6 +85,12 @@ describe('google/initializeClient', () => {
body: { key: expiresAt },
user: { id: '123' },
app,
config: {
endpoints: {
all: {},
google: {},
},
},
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };

View file

@ -1,7 +1,7 @@
const { isEnabled } = require('@librechat/api');
const { EModelEndpoint, CacheKeys, Constants, googleSettings } = require('librechat-data-provider');
const getLogStores = require('~/cache/getLogStores');
const initializeClient = require('./initialize');
const { isEnabled } = require('~/server/utils');
const { saveConvo } = require('~/models');
const addTitle = async (req, { text, response, client }) => {
@ -14,7 +14,8 @@ const addTitle = async (req, { text, response, client }) => {
return;
}
const { GOOGLE_TITLE_MODEL } = process.env ?? {};
const providerConfig = req.app.locals[EModelEndpoint.google];
const appConfig = req.config;
const providerConfig = appConfig.endpoints?.[EModelEndpoint.google];
let model =
providerConfig?.titleModel ??
GOOGLE_TITLE_MODEL ??

View file

@ -1,11 +1,11 @@
const { Providers } = require('@librechat/agents');
const { EModelEndpoint } = require('librechat-data-provider');
const { getCustomEndpointConfig } = require('@librechat/api');
const initAnthropic = require('~/server/services/Endpoints/anthropic/initialize');
const getBedrockOptions = require('~/server/services/Endpoints/bedrock/options');
const initOpenAI = require('~/server/services/Endpoints/openAI/initialize');
const initCustom = require('~/server/services/Endpoints/custom/initialize');
const initGoogle = require('~/server/services/Endpoints/google/initialize');
const { getCustomEndpointConfig } = require('~/server/services/Config');
/** Check if the provider is a known custom provider
* @param {string | undefined} [provider] - The provider string
@ -31,14 +31,16 @@ const providerConfigMap = {
/**
* Get the provider configuration and override endpoint based on the provider string
* @param {string} provider - The provider string
* @returns {Promise<{
* getOptions: Function,
* @param {Object} params
* @param {string} params.provider - The provider string
* @param {AppConfig} params.appConfig - The application configuration
* @returns {{
* getOptions: (typeof providerConfigMap)[keyof typeof providerConfigMap],
* overrideProvider: string,
* customEndpointConfig?: TEndpoint
* }>}
* }}
*/
async function getProviderConfig(provider) {
function getProviderConfig({ provider, appConfig }) {
let getOptions = providerConfigMap[provider];
let overrideProvider = provider;
/** @type {TEndpoint | undefined} */
@ -48,7 +50,7 @@ async function getProviderConfig(provider) {
overrideProvider = provider.toLowerCase();
getOptions = providerConfigMap[overrideProvider];
} else if (!getOptions) {
customEndpointConfig = await getCustomEndpointConfig(provider);
customEndpointConfig = getCustomEndpointConfig({ endpoint: provider, appConfig });
if (!customEndpointConfig) {
throw new Error(`Provider ${provider} not supported`);
}
@ -57,7 +59,7 @@ async function getProviderConfig(provider) {
}
if (isKnownCustomProvider(overrideProvider) && !customEndpointConfig) {
customEndpointConfig = await getCustomEndpointConfig(provider);
customEndpointConfig = getCustomEndpointConfig({ endpoint: provider, appConfig });
if (!customEndpointConfig) {
throw new Error(`Provider ${provider} not supported`);
}
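
A hedged usage sketch of the now-synchronous `getProviderConfig`, assuming the caller holds the request-scoped config; the endpoint name below is illustrative:

// Custom endpoints are resolved from appConfig instead of an async config load.
const { getOptions, overrideProvider, customEndpointConfig } = getProviderConfig({
  provider: 'my-custom-endpoint', // illustrative custom endpoint name
  appConfig: req.config,
});
// `getOptions` is then awaited with the usual initialization params, and
// `customEndpointConfig` is only defined for custom providers.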


@ -18,6 +18,7 @@ const initializeClient = async ({
overrideEndpoint,
overrideModel,
}) => {
const appConfig = req.config;
const {
PROXY,
OPENAI_API_KEY,
@ -64,7 +65,7 @@ const initializeClient = async ({
const isAzureOpenAI = endpoint === EModelEndpoint.azureOpenAI;
/** @type {false | TAzureConfig} */
const azureConfig = isAzureOpenAI && req.app.locals[EModelEndpoint.azureOpenAI];
const azureConfig = isAzureOpenAI && appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
let serverless = false;
if (isAzureOpenAI && azureConfig) {
const { modelGroupMap, groupMap } = azureConfig;
@ -113,15 +114,14 @@ const initializeClient = async ({
}
/** @type {undefined | TBaseEndpoint} */
const openAIConfig = req.app.locals[EModelEndpoint.openAI];
const openAIConfig = appConfig.endpoints?.[EModelEndpoint.openAI];
if (!isAzureOpenAI && openAIConfig) {
clientOptions.streamRate = openAIConfig.streamRate;
clientOptions.titleModel = openAIConfig.titleModel;
}
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
const allConfig = appConfig.endpoints?.all;
if (allConfig) {
clientOptions.streamRate = allConfig.streamRate;
}


@ -1,4 +1,13 @@
jest.mock('~/cache/getLogStores');
jest.mock('~/cache/getLogStores', () => ({
getLogStores: jest.fn().mockReturnValue({
get: jest.fn().mockResolvedValue({
openAI: { apiKey: 'test-key' },
}),
set: jest.fn(),
delete: jest.fn(),
}),
}));
const { EModelEndpoint, ErrorTypes, validateAzureGroups } = require('librechat-data-provider');
const { getUserKey, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initialize');
@ -11,6 +20,38 @@ jest.mock('~/server/services/UserService', () => ({
checkUserKeyExpiry: jest.requireActual('~/server/services/UserService').checkUserKeyExpiry,
}));
const mockAppConfig = {
endpoints: {
openAI: {
apiKey: 'test-key',
},
azureOpenAI: {
apiKey: 'test-azure-key',
modelNames: ['gpt-4-vision-preview', 'gpt-3.5-turbo', 'gpt-4'],
modelGroupMap: {
'gpt-4-vision-preview': {
group: 'librechat-westus',
deploymentName: 'gpt-4-vision-preview',
version: '2024-02-15-preview',
},
},
groupMap: {
'librechat-westus': {
apiKey: 'WESTUS_API_KEY',
instanceName: 'librechat-westus',
version: '2023-12-01-preview',
models: {
'gpt-4-vision-preview': {
deploymentName: 'gpt-4-vision-preview',
version: '2024-02-15-preview',
},
},
},
},
},
},
};
describe('initializeClient', () => {
// Set up environment variables
const originalEnvironment = process.env;
@ -79,7 +120,7 @@ describe('initializeClient', () => {
},
];
const { modelNames, modelGroupMap, groupMap } = validateAzureGroups(validAzureConfigs);
const { modelNames } = validateAzureGroups(validAzureConfigs);
beforeEach(() => {
jest.resetModules(); // Clears the cache
@ -99,6 +140,7 @@ describe('initializeClient', () => {
body: { key: null, endpoint: EModelEndpoint.openAI },
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -112,25 +154,30 @@ describe('initializeClient', () => {
test('should initialize client with Azure credentials when endpoint is azureOpenAI', async () => {
process.env.AZURE_API_KEY = 'test-azure-api-key';
(process.env.AZURE_OPENAI_API_INSTANCE_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_VERSION = 'some-value'),
(process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME = 'some-value'),
(process.env.OPENAI_API_KEY = 'test-openai-api-key');
(process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_VERSION = 'some-value'),
(process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME = 'some-value'),
(process.env.OPENAI_API_KEY = 'test-openai-api-key');
process.env.DEBUG_OPENAI = 'false';
process.env.OPENAI_SUMMARIZE = 'false';
const req = {
body: { key: null, endpoint: 'azureOpenAI' },
body: {
key: null,
endpoint: 'azureOpenAI',
model: 'gpt-4-vision-preview',
},
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = { modelOptions: { model: 'test-model' } };
const endpointOption = {};
const client = await initializeClient({ req, res, endpointOption });
expect(client.openAIApiKey).toBe('test-azure-api-key');
expect(client.openAIApiKey).toBe('WESTUS_API_KEY');
expect(client.client).toBeInstanceOf(OpenAIClient);
});
@ -142,6 +189,7 @@ describe('initializeClient', () => {
body: { key: null, endpoint: EModelEndpoint.openAI },
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -159,6 +207,7 @@ describe('initializeClient', () => {
body: { key: null, endpoint: EModelEndpoint.openAI },
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -177,6 +226,7 @@ describe('initializeClient', () => {
body: { key: null, endpoint: EModelEndpoint.openAI },
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -198,6 +248,7 @@ describe('initializeClient', () => {
body: { key: expiresAt, endpoint: EModelEndpoint.openAI },
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -216,6 +267,7 @@ describe('initializeClient', () => {
body: { key: null, endpoint: EModelEndpoint.openAI },
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -236,6 +288,7 @@ describe('initializeClient', () => {
id: '123',
},
app,
config: mockAppConfig,
};
const res = {};
@ -260,6 +313,7 @@ describe('initializeClient', () => {
body: { key: invalidKey, endpoint: EModelEndpoint.openAI },
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -281,6 +335,7 @@ describe('initializeClient', () => {
body: { key: new Date(Date.now() + 10000).toISOString(), endpoint: EModelEndpoint.openAI },
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -291,7 +346,7 @@ describe('initializeClient', () => {
let userValues = getUserKey();
try {
userValues = JSON.parse(userValues);
} catch (e) {
} catch {
throw new Error(
JSON.stringify({
type: ErrorTypes.INVALID_USER_KEY,
@ -307,6 +362,9 @@ describe('initializeClient', () => {
});
test('should initialize client correctly for Azure OpenAI with valid configuration', async () => {
// Set up Azure environment variables
process.env.WESTUS_API_KEY = 'test-westus-key';
const req = {
body: {
key: null,
@ -314,15 +372,7 @@ describe('initializeClient', () => {
model: modelNames[0],
},
user: { id: '123' },
app: {
locals: {
[EModelEndpoint.azureOpenAI]: {
modelNames,
modelGroupMap,
groupMap,
},
},
},
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -340,6 +390,7 @@ describe('initializeClient', () => {
body: { key: null, endpoint: EModelEndpoint.openAI },
user: { id: '123' },
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};
@ -362,6 +413,7 @@ describe('initializeClient', () => {
id: '123',
},
app,
config: mockAppConfig,
};
const res = {};
const endpointOption = {};


@ -2,10 +2,10 @@ const axios = require('axios');
const fs = require('fs').promises;
const FormData = require('form-data');
const { Readable } = require('stream');
const { logger } = require('@librechat/data-schemas');
const { genAzureEndpoint } = require('@librechat/api');
const { extractEnvVariable, STTProviders } = require('librechat-data-provider');
const { getCustomConfig } = require('~/server/services/Config');
const { logger } = require('~/config');
const { getAppConfig } = require('~/server/services/Config');
/**
* Maps MIME types to their corresponding file extensions for audio files.
@ -84,12 +84,7 @@ function getFileExtensionFromMime(mimeType) {
* @class
*/
class STTService {
/**
* Creates an instance of STTService.
* @param {Object} customConfig - The custom configuration object.
*/
constructor(customConfig) {
this.customConfig = customConfig;
constructor() {
this.providerStrategies = {
[STTProviders.OPENAI]: this.openAIProvider,
[STTProviders.AZURE_OPENAI]: this.azureOpenAIProvider,
@ -104,21 +99,20 @@ class STTService {
* @throws {Error} If the custom config is not found.
*/
static async getInstance() {
const customConfig = await getCustomConfig();
if (!customConfig) {
throw new Error('Custom config not found');
}
return new STTService(customConfig);
return new STTService();
}
/**
* Retrieves the configured STT provider and its schema.
* @param {ServerRequest} req - The request object.
* @returns {Promise<[string, Object]>} A promise that resolves to an array containing the provider name and its schema.
* @throws {Error} If no STT schema is set, multiple providers are set, or no provider is set.
*/
async getProviderSchema() {
const sttSchema = this.customConfig.speech.stt;
async getProviderSchema(req) {
const appConfig = await getAppConfig({
role: req?.user?.role,
});
const sttSchema = appConfig?.speech?.stt;
if (!sttSchema) {
throw new Error(
'No STT schema is set. Did you configure STT in the custom config (librechat.yaml)?',
@ -274,7 +268,7 @@ class STTService {
* @param {Object} res - The response object.
* @returns {Promise<void>}
*/
async processTextToSpeech(req, res) {
async processSpeechToText(req, res) {
if (!req.file) {
return res.status(400).json({ message: 'No audio file provided in the FormData' });
}
@ -287,7 +281,7 @@ class STTService {
};
try {
const [provider, sttSchema] = await this.getProviderSchema();
const [provider, sttSchema] = await this.getProviderSchema(req);
const text = await this.sttRequest(provider, sttSchema, { audioBuffer, audioFile });
res.json({ text });
} catch (error) {
@ -297,7 +291,7 @@ class STTService {
try {
await fs.unlink(req.file.path);
logger.debug('[/speech/stt] Temp. audio upload file deleted');
} catch (error) {
} catch {
logger.debug('[/speech/stt] Temp. audio upload file already deleted');
}
}
@ -322,7 +316,7 @@ async function createSTTService() {
*/
async function speechToText(req, res) {
const sttService = await createSTTService();
await sttService.processTextToSpeech(req, res);
await sttService.processSpeechToText(req, res);
}
module.exports = { speechToText };
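
A rough sketch of the per-request lookup that replaces the cached `customConfig`, assuming `getAppConfig` accepts the caller's role and returns the merged configuration:

// Speech settings are resolved per request so role-based overrides apply.
async function resolveSttSchema(req) {
  const appConfig = await getAppConfig({ role: req?.user?.role });
  const sttSchema = appConfig?.speech?.stt;
  if (!sttSchema) {
    throw new Error('No STT schema is set. Did you configure STT in librechat.yaml?');
  }
  return sttSchema;
}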


@ -1,9 +1,9 @@
const axios = require('axios');
const { logger } = require('@librechat/data-schemas');
const { genAzureEndpoint } = require('@librechat/api');
const { extractEnvVariable, TTSProviders } = require('librechat-data-provider');
const { getRandomVoiceId, createChunkProcessor, splitTextIntoChunks } = require('./streamAudio');
const { getCustomConfig } = require('~/server/services/Config');
const { logger } = require('~/config');
const { getAppConfig } = require('~/server/services/Config');
/**
* Service class for handling Text-to-Speech (TTS) operations.
@ -12,10 +12,8 @@ const { logger } = require('~/config');
class TTSService {
/**
* Creates an instance of TTSService.
* @param {Object} customConfig - The custom configuration object.
*/
constructor(customConfig) {
this.customConfig = customConfig;
constructor() {
this.providerStrategies = {
[TTSProviders.OPENAI]: this.openAIProvider.bind(this),
[TTSProviders.AZURE_OPENAI]: this.azureOpenAIProvider.bind(this),
@ -32,11 +30,7 @@ class TTSService {
* @throws {Error} If the custom config is not found.
*/
static async getInstance() {
const customConfig = await getCustomConfig();
if (!customConfig) {
throw new Error('Custom config not found');
}
return new TTSService(customConfig);
return new TTSService();
}
/**
@ -293,10 +287,13 @@ class TTSService {
return res.status(400).send('Missing text in request body');
}
const appConfig = await getAppConfig({
role: req.user?.role,
});
try {
res.setHeader('Content-Type', 'audio/mpeg');
const provider = this.getProvider();
const ttsSchema = this.customConfig.speech.tts[provider];
const ttsSchema = appConfig?.speech?.tts?.[provider];
const voice = await this.getVoice(ttsSchema, requestVoice);
if (input.length < 4096) {


@ -1,5 +1,5 @@
const { getCustomConfig } = require('~/server/services/Config');
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
const { getAppConfig } = require('~/server/services/Config');
/**
* This function retrieves the speechTab settings from the custom configuration
@ -15,26 +15,28 @@ const { logger } = require('~/config');
*/
async function getCustomConfigSpeech(req, res) {
try {
const customConfig = await getCustomConfig();
const appConfig = await getAppConfig({
role: req.user?.role,
});
if (!customConfig) {
if (!appConfig) {
return res.status(200).send({
message: 'not_found',
});
}
const sttExternal = !!customConfig.speech?.stt;
const ttsExternal = !!customConfig.speech?.tts;
const sttExternal = !!appConfig.speech?.stt;
const ttsExternal = !!appConfig.speech?.tts;
let settings = {
sttExternal,
ttsExternal,
};
if (!customConfig.speech?.speechTab) {
if (!appConfig.speech?.speechTab) {
return res.status(200).send(settings);
}
const speechTab = customConfig.speech.speechTab;
const speechTab = appConfig.speech.speechTab;
if (speechTab.advancedMode !== undefined) {
settings.advancedMode = speechTab.advancedMode;


@ -1,5 +1,5 @@
const { TTSProviders } = require('librechat-data-provider');
const { getCustomConfig } = require('~/server/services/Config');
const { getAppConfig } = require('~/server/services/Config');
const { getProvider } = require('./TTSService');
/**
@ -14,13 +14,15 @@ const { getProvider } = require('./TTSService');
*/
async function getVoices(req, res) {
try {
const customConfig = await getCustomConfig();
const appConfig = await getAppConfig({
role: req.user?.role,
});
if (!customConfig || !customConfig?.speech?.tts) {
if (!appConfig || !appConfig?.speech?.tts) {
throw new Error('Configuration or TTS schema is missing');
}
const ttsSchema = customConfig?.speech?.tts;
const ttsSchema = appConfig?.speech?.tts;
const provider = await getProvider(ttsSchema);
let voices;


@ -30,6 +30,7 @@ async function uploadImageToAzure({
containerName,
}) {
try {
const appConfig = req.config;
const inputFilePath = file.path;
const inputBuffer = await fs.promises.readFile(inputFilePath);
const {
@ -41,12 +42,12 @@ async function uploadImageToAzure({
const userId = req.user.id;
let webPBuffer;
let fileName = `${file_id}__${path.basename(inputFilePath)}`;
const targetExtension = `.${req.app.locals.imageOutputType}`;
const targetExtension = `.${appConfig.imageOutputType}`;
if (extension.toLowerCase() === targetExtension) {
webPBuffer = resizedBuffer;
} else {
webPBuffer = await sharp(resizedBuffer).toFormat(req.app.locals.imageOutputType).toBuffer();
webPBuffer = await sharp(resizedBuffer).toFormat(appConfig.imageOutputType).toBuffer();
const extRegExp = new RegExp(path.extname(fileName) + '$');
fileName = fileName.replace(extRegExp, targetExtension);
if (!path.extname(fileName)) {
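
A compact sketch of the conversion step the image handlers now share, assuming `appConfig.imageOutputType` is a sharp-supported format such as 'webp'; the helper name is illustrative:

// Convert the resized buffer only when the source extension differs
// from the configured output type.
async function toConfiguredFormat(resizedBuffer, extension, appConfig) {
  const targetExtension = `.${appConfig.imageOutputType}`;
  if (extension.toLowerCase() === targetExtension) {
    return { buffer: resizedBuffer, targetExtension };
  }
  const buffer = await sharp(resizedBuffer).toFormat(appConfig.imageOutputType).toBuffer();
  return { buffer, targetExtension };
}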


@ -1,21 +1,27 @@
const { nanoid } = require('nanoid');
const { checkAccess } = require('@librechat/api');
const { Tools, PermissionTypes, Permissions } = require('librechat-data-provider');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
const { logger } = require('@librechat/data-schemas');
const {
Tools,
Permissions,
FileSources,
EModelEndpoint,
PermissionTypes,
} = require('librechat-data-provider');
const { getRoleByName } = require('~/models/Role');
const { logger } = require('~/config');
const { Files } = require('~/models');
/**
* Process file search results from tool calls
* @param {Object} options
* @param {IUser} options.user - The user object
* @param {AppConfig} options.appConfig - The app configuration object
* @param {GraphRunnableConfig['configurable']} options.metadata - The metadata
* @param {any} options.toolArtifact - The tool artifact containing structured data
* @param {string} options.toolCallId - The tool call ID
* @returns {Promise<Object|null>} The file search attachment or null
*/
async function processFileCitations({ user, toolArtifact, toolCallId, metadata }) {
async function processFileCitations({ user, appConfig, toolArtifact, toolCallId, metadata }) {
try {
if (!toolArtifact?.[Tools.file_search]?.sources) {
return null;
@ -44,10 +50,11 @@ async function processFileCitations({ user, toolArtifact, toolCallId, metadata }
}
}
const customConfig = await getCustomConfig();
const maxCitations = customConfig?.endpoints?.agents?.maxCitations ?? 30;
const maxCitationsPerFile = customConfig?.endpoints?.agents?.maxCitationsPerFile ?? 5;
const minRelevanceScore = customConfig?.endpoints?.agents?.minRelevanceScore ?? 0.45;
const maxCitations = appConfig.endpoints?.[EModelEndpoint.agents]?.maxCitations ?? 30;
const maxCitationsPerFile =
appConfig.endpoints?.[EModelEndpoint.agents]?.maxCitationsPerFile ?? 5;
const minRelevanceScore =
appConfig.endpoints?.[EModelEndpoint.agents]?.minRelevanceScore ?? 0.45;
const sources = toolArtifact[Tools.file_search].sources || [];
const filteredSources = sources.filter((source) => source.relevance >= minRelevanceScore);
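
A condensed sketch of the defaults resolution shown above, assuming agent settings live under `appConfig.endpoints[EModelEndpoint.agents]`:

// Citation limits come from the request-scoped config, with the same
// fallbacks as before (30 / 5 / 0.45).
function getCitationLimits(appConfig) {
  const agentsConfig = appConfig?.endpoints?.[EModelEndpoint.agents] ?? {};
  return {
    maxCitations: agentsConfig.maxCitations ?? 30,
    maxCitationsPerFile: agentsConfig.maxCitationsPerFile ?? 5,
    minRelevanceScore: agentsConfig.minRelevanceScore ?? 0.45,
  };
}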
@ -59,7 +66,7 @@ async function processFileCitations({ user, toolArtifact, toolCallId, metadata }
}
const selectedSources = applyCitationLimits(filteredSources, maxCitations, maxCitationsPerFile);
const enhancedSources = await enhanceSourcesWithMetadata(selectedSources, customConfig);
const enhancedSources = await enhanceSourcesWithMetadata(selectedSources, appConfig);
if (enhancedSources.length > 0) {
const fileSearchAttachment = {
@ -110,10 +117,10 @@ function applyCitationLimits(sources, maxCitations, maxCitationsPerFile) {
/**
* Enhance sources with file metadata from database
* @param {Array} sources - Selected sources
* @param {Object} customConfig - Custom configuration
* @param {AppConfig} appConfig - Custom configuration
* @returns {Promise<Array>} Enhanced sources
*/
async function enhanceSourcesWithMetadata(sources, customConfig) {
async function enhanceSourcesWithMetadata(sources, appConfig) {
const fileIds = [...new Set(sources.map((source) => source.fileId))];
let fileMetadataMap = {};
@ -129,7 +136,7 @@ async function enhanceSourcesWithMetadata(sources, customConfig) {
return sources.map((source) => {
const fileRecord = fileMetadataMap[source.fileId] || {};
const configuredStorageType = fileRecord.source || customConfig?.fileStrategy || 'local';
const configuredStorageType = fileRecord.source || appConfig?.fileStrategy || FileSources.local;
return {
...source,


@ -43,8 +43,7 @@ async function getCodeOutputDownloadStream(fileIdentifier, apiKey) {
/**
* Uploads a file to the Code Environment server.
* @param {Object} params - The params object.
* @param {ServerRequest} params.req - The request object from Express. It should have a `user` property with an `id`
* representing the user, and an `app.locals.paths` object with an `uploads` path.
* @param {ServerRequest} params.req - The request object from Express. It should have a `user` property with an `id` representing the user
* @param {import('fs').ReadStream | import('stream').Readable} params.stream - The read stream for the file.
* @param {string} params.filename - The name of the file.
* @param {string} params.apiKey - The API key for authentication.


@ -38,6 +38,7 @@ const processCodeOutput = async ({
messageId,
session_id,
}) => {
const appConfig = req.config;
const currentDate = new Date();
const baseURL = getCodeBaseURL();
const fileExt = path.extname(name);
@ -77,10 +78,10 @@ const processCodeOutput = async ({
filename: name,
conversationId,
user: req.user.id,
type: `image/${req.app.locals.imageOutputType}`,
type: `image/${appConfig.imageOutputType}`,
createdAt: formattedDate,
updatedAt: formattedDate,
source: req.app.locals.fileStrategy,
source: appConfig.fileStrategy,
context: FileContext.execute_code,
};
createFile(file, true);


@ -11,8 +11,7 @@ const { saveBufferToFirebase } = require('./crud');
* resolution.
*
* @param {Object} params - The params object.
* @param {Express.Request} params.req - The request object from Express. It should have a `user` property with an `id`
* representing the user, and an `app.locals.paths` object with an `imageOutput` path.
* @param {ServerRequest} params.req - The request object from Express. It should have a `user` property with an `id` representing the user
* @param {Express.Multer.File} params.file - The file object, which is part of the request. The file object should
* have a `path` property that points to the location of the uploaded file.
* @param {EModelEndpoint} params.endpoint - The params object.
@ -26,6 +25,7 @@ const { saveBufferToFirebase } = require('./crud');
* - height: The height of the converted image.
*/
async function uploadImageToFirebase({ req, file, file_id, endpoint, resolution = 'high' }) {
const appConfig = req.config;
const inputFilePath = file.path;
const inputBuffer = await fs.promises.readFile(inputFilePath);
const {
@ -38,11 +38,11 @@ async function uploadImageToFirebase({ req, file, file_id, endpoint, resolution
let webPBuffer;
let fileName = `${file_id}__${path.basename(inputFilePath)}`;
const targetExtension = `.${req.app.locals.imageOutputType}`;
const targetExtension = `.${appConfig.imageOutputType}`;
if (extension.toLowerCase() === targetExtension) {
webPBuffer = resizedBuffer;
} else {
webPBuffer = await sharp(resizedBuffer).toFormat(req.app.locals.imageOutputType).toBuffer();
webPBuffer = await sharp(resizedBuffer).toFormat(appConfig.imageOutputType).toBuffer();
// Replace or append the correct extension
const extRegExp = new RegExp(path.extname(fileName) + '$');
fileName = fileName.replace(extRegExp, targetExtension);


@ -38,14 +38,15 @@ async function saveLocalFile(file, outputPath, outputFilename) {
/**
* Saves an uploaded image file to a specified directory based on the user's ID and a filename.
*
* @param {Express.Request} req - The Express request object, containing the user's information and app configuration.
* @param {ServerRequest} req - The Express request object, containing the user's information and app configuration.
* @param {Express.Multer.File} file - The uploaded file object.
* @param {string} filename - The new filename to assign to the saved image (without extension).
* @returns {Promise<void>}
* @throws Will throw an error if the image saving process fails.
*/
const saveLocalImage = async (req, file, filename) => {
const imagePath = req.app.locals.paths.imageOutput;
const appConfig = req.config;
const imagePath = appConfig.paths.imageOutput;
const outputPath = path.join(imagePath, req.user.id ?? '');
await saveLocalFile(file, outputPath, filename);
};
@ -162,7 +163,7 @@ async function getLocalFileURL({ fileName, basePath = 'images' }) {
* the expected base path using the base, subfolder, and user id from the request, and then checks if the
* provided filepath starts with this constructed base path.
*
* @param {Express.Request} req - The request object from Express. It should contain a `user` property with an `id`.
* @param {ServerRequest} req - The request object from Express. It should contain a `user` property with an `id`.
* @param {string} base - The base directory path.
* @param {string} subfolder - The subdirectory under the base path.
* @param {string} filepath - The complete file path to be validated.
@ -191,8 +192,7 @@ const unlinkFile = async (filepath) => {
* Deletes a file from the filesystem. This function takes a file object, constructs the full path, and
* verifies the path's validity before deleting the file. If the path is invalid, an error is thrown.
*
* @param {Express.Request} req - The request object from Express. It should have an `app.locals.paths` object with
* a `publicPath` property.
* @param {ServerRequest} req - The request object from Express.
* @param {MongoFile} file - The file object to be deleted. It should have a `filepath` property that is
* a string representing the path of the file relative to the publicPath.
*
@ -201,7 +201,8 @@ const unlinkFile = async (filepath) => {
* file path is invalid or if there is an error in deletion.
*/
const deleteLocalFile = async (req, file) => {
const { publicPath, uploads } = req.app.locals.paths;
const appConfig = req.config;
const { publicPath, uploads } = appConfig.paths;
/** Filepath stripped of query parameters (e.g., ?manual=true) */
const cleanFilepath = file.filepath.split('?')[0];
@ -256,8 +257,7 @@ const deleteLocalFile = async (req, file) => {
* Uploads a file to the specified upload directory.
*
* @param {Object} params - The params object.
* @param {ServerRequest} params.req - The request object from Express. It should have a `user` property with an `id`
* representing the user, and an `app.locals.paths` object with an `uploads` path.
* @param {ServerRequest} params.req - The request object from Express. It should have a `user` property with an `id` representing the user
* @param {Express.Multer.File} params.file - The file object, which is part of the request. The file object should
* have a `path` property that points to the location of the uploaded file.
* @param {string} params.file_id - The file ID.
@ -268,11 +268,12 @@ const deleteLocalFile = async (req, file) => {
* - bytes: The size of the file in bytes.
*/
async function uploadLocalFile({ req, file, file_id }) {
const appConfig = req.config;
const inputFilePath = file.path;
const inputBuffer = await fs.promises.readFile(inputFilePath);
const bytes = Buffer.byteLength(inputBuffer);
const { uploads } = req.app.locals.paths;
const { uploads } = appConfig.paths;
const userPath = path.join(uploads, req.user.id);
if (!fs.existsSync(userPath)) {
@ -295,8 +296,9 @@ async function uploadLocalFile({ req, file, file_id }) {
* @param {string} filepath - The filepath.
* @returns {ReadableStream} A readable stream of the file.
*/
function getLocalFileStream(req, filepath) {
async function getLocalFileStream(req, filepath) {
try {
const appConfig = req.config;
if (filepath.includes('/uploads/')) {
const basePath = filepath.split('/uploads/')[1];
@ -305,8 +307,8 @@ function getLocalFileStream(req, filepath) {
throw new Error(`Invalid file path: ${filepath}`);
}
const fullPath = path.join(req.app.locals.paths.uploads, basePath);
const uploadsDir = req.app.locals.paths.uploads;
const fullPath = path.join(appConfig.paths.uploads, basePath);
const uploadsDir = appConfig.paths.uploads;
const rel = path.relative(uploadsDir, fullPath);
if (rel.startsWith('..') || path.isAbsolute(rel) || rel.includes(`..${path.sep}`)) {
@ -323,8 +325,8 @@ function getLocalFileStream(req, filepath) {
throw new Error(`Invalid file path: ${filepath}`);
}
const fullPath = path.join(req.app.locals.paths.imageOutput, basePath);
const publicDir = req.app.locals.paths.imageOutput;
const fullPath = path.join(appConfig.paths.imageOutput, basePath);
const publicDir = appConfig.paths.imageOutput;
const rel = path.relative(publicDir, fullPath);
if (rel.startsWith('..') || path.isAbsolute(rel) || rel.includes(`..${path.sep}`)) {


@ -13,8 +13,7 @@ const { updateUser, updateFile } = require('~/models');
*
* The original image is deleted after conversion.
* @param {Object} params - The params object.
* @param {Object} params.req - The request object from Express. It should have a `user` property with an `id`
* representing the user, and an `app.locals.paths` object with an `imageOutput` path.
* @param {Object} params.req - The request object from Express. It should have a `user` property with an `id` representing the user
* @param {Express.Multer.File} params.file - The file object, which is part of the request. The file object should
* have a `path` property that points to the location of the uploaded file.
* @param {string} params.file_id - The file ID.
@ -29,6 +28,7 @@ const { updateUser, updateFile } = require('~/models');
* - height: The height of the converted image.
*/
async function uploadLocalImage({ req, file, file_id, endpoint, resolution = 'high' }) {
const appConfig = req.config;
const inputFilePath = file.path;
const inputBuffer = await fs.promises.readFile(inputFilePath);
const {
@ -38,7 +38,7 @@ async function uploadLocalImage({ req, file, file_id, endpoint, resolution = 'hi
} = await resizeImageBuffer(inputBuffer, resolution, endpoint);
const extension = path.extname(inputFilePath);
const { imageOutput } = req.app.locals.paths;
const { imageOutput } = appConfig.paths;
const userPath = path.join(imageOutput, req.user.id);
if (!fs.existsSync(userPath)) {
@ -47,7 +47,7 @@ async function uploadLocalImage({ req, file, file_id, endpoint, resolution = 'hi
const fileName = `${file_id}__${path.basename(inputFilePath)}`;
const newPath = path.join(userPath, fileName);
const targetExtension = `.${req.app.locals.imageOutputType}`;
const targetExtension = `.${appConfig.imageOutputType}`;
if (extension.toLowerCase() === targetExtension) {
const bytes = Buffer.byteLength(resizedBuffer);
@ -57,7 +57,7 @@ async function uploadLocalImage({ req, file, file_id, endpoint, resolution = 'hi
}
const outputFilePath = newPath.replace(extension, targetExtension);
const data = await sharp(resizedBuffer).toFormat(req.app.locals.imageOutputType).toBuffer();
const data = await sharp(resizedBuffer).toFormat(appConfig.imageOutputType).toBuffer();
await fs.promises.writeFile(outputFilePath, data);
const bytes = Buffer.byteLength(data);
const filepath = path.posix.join('/', 'images', req.user.id, path.basename(outputFilePath));
@ -90,7 +90,8 @@ function encodeImage(imagePath) {
* @returns {Promise<[MongoFile, string]>} - A promise that resolves to an array of results from updateFile and encodeImage.
*/
async function prepareImagesLocal(req, file) {
const { publicPath, imageOutput } = req.app.locals.paths;
const appConfig = req.config;
const { publicPath, imageOutput } = appConfig.paths;
const userPath = path.join(imageOutput, req.user.id);
if (!fs.existsSync(userPath)) {


@ -7,8 +7,7 @@ const { logger } = require('~/config');
* Uploads a file that can be used across various OpenAI services.
*
* @param {Object} params - The params object.
* @param {ServerRequest} params.req - The request object from Express. It should have a `user` property with an `id`
* representing the user, and an `app.locals.paths` object with an `imageOutput` path.
* @param {ServerRequest} params.req - The request object from Express. It should have a `user` property with an `id` representing the user
* @param {Express.Multer.File} params.file - The file uploaded to the server via multer.
* @param {OpenAIClient} params.openai - The initialized OpenAI client.
* @returns {Promise<OpenAIFile>}


@ -12,7 +12,7 @@ const defaultBasePath = 'images';
* Resizes, converts, and uploads an image file to S3.
*
* @param {Object} params
* @param {import('express').Request} params.req - Express request (expects user and app.locals.imageOutputType).
* @param {import('express').Request} params.req - Express request (expects `user` and `appConfig.imageOutputType`).
* @param {Express.Multer.File} params.file - File object from Multer.
* @param {string} params.file_id - Unique file identifier.
* @param {any} params.endpoint - Endpoint identifier used in image processing.
@ -29,6 +29,7 @@ async function uploadImageToS3({
basePath = defaultBasePath,
}) {
try {
const appConfig = req.config;
const inputFilePath = file.path;
const inputBuffer = await fs.promises.readFile(inputFilePath);
const {
@ -41,14 +42,12 @@ async function uploadImageToS3({
let processedBuffer;
let fileName = `${file_id}__${path.basename(inputFilePath)}`;
const targetExtension = `.${req.app.locals.imageOutputType}`;
const targetExtension = `.${appConfig.imageOutputType}`;
if (extension.toLowerCase() === targetExtension) {
processedBuffer = resizedBuffer;
} else {
processedBuffer = await sharp(resizedBuffer)
.toFormat(req.app.locals.imageOutputType)
.toBuffer();
processedBuffer = await sharp(resizedBuffer).toFormat(appConfig.imageOutputType).toBuffer();
fileName = fileName.replace(new RegExp(path.extname(fileName) + '$'), targetExtension);
if (!path.extname(fileName)) {
fileName += targetExtension;


@ -10,8 +10,7 @@ const { generateShortLivedToken } = require('~/server/services/AuthService');
* Deletes a file from the vector database. This function takes a file object, constructs the full path, and
* verifies the path's validity before deleting the file. If the path is invalid, an error is thrown.
*
* @param {ServerRequest} req - The request object from Express. It should have an `app.locals.paths` object with
* a `publicPath` property.
* @param {ServerRequest} req - The request object from Express.
* @param {MongoFile} file - The file object to be deleted. It should have a `filepath` property that is
* a string representing the path of the file relative to the publicPath.
*
@ -54,8 +53,7 @@ const deleteVectors = async (req, file) => {
* Uploads a file to the configured Vector database
*
* @param {Object} params - The params object.
* @param {Object} params.req - The request object from Express. It should have a `user` property with an `id`
* representing the user, and an `app.locals.paths` object with an `uploads` path.
* @param {Object} params.req - The request object from Express. It should have a `user` property with an `id` representing the user
* @param {Express.Multer.File} params.file - The file object, which is part of the request. The file object should
* have a `path` property that points to the location of the uploaded file.
* @param {string} params.file_id - The file ID.


@ -1,14 +1,14 @@
const fs = require('fs');
const path = require('path');
const sharp = require('sharp');
const { resizeImageBuffer } = require('./resize');
const { getStrategyFunctions } = require('../strategies');
const { resizeImageBuffer } = require('./resize');
const { logger } = require('~/config');
/**
* Converts an image file or buffer to target output type with specified resolution.
*
* @param {Express.Request} req - The request object, containing user and app configuration data.
* @param {ServerRequest} req - The request object, containing user and app configuration data.
* @param {Buffer | Express.Multer.File} file - The file object, containing either a path or a buffer.
* @param {'low' | 'high'} [resolution='high'] - The desired resolution for the output image.
* @param {string} [basename=''] - The basename of the input file, if it is a buffer.
@ -17,6 +17,7 @@ const { logger } = require('~/config');
*/
async function convertImage(req, file, resolution = 'high', basename = '') {
try {
const appConfig = req.config;
let inputBuffer;
let outputBuffer;
let extension = path.extname(file.path ?? basename).toLowerCase();
@ -39,11 +40,11 @@ async function convertImage(req, file, resolution = 'high', basename = '') {
} = await resizeImageBuffer(inputBuffer, resolution);
// Check if the file is already in target format; if it isn't, convert it:
const targetExtension = `.${req.app.locals.imageOutputType}`;
const targetExtension = `.${appConfig.imageOutputType}`;
if (extension === targetExtension) {
outputBuffer = resizedBuffer;
} else {
outputBuffer = await sharp(resizedBuffer).toFormat(req.app.locals.imageOutputType).toBuffer();
outputBuffer = await sharp(resizedBuffer).toFormat(appConfig.imageOutputType).toBuffer();
extension = targetExtension;
}
@ -51,7 +52,7 @@ async function convertImage(req, file, resolution = 'high', basename = '') {
const newFileName =
path.basename(file.path ?? basename, path.extname(file.path ?? basename)) + extension;
const { saveBuffer } = getStrategyFunctions(req.app.locals.fileStrategy);
const { saveBuffer } = getStrategyFunctions(appConfig.fileStrategy);
const savedFilePath = await saveBuffer({
userId: req.user.id,


@ -81,7 +81,7 @@ const blobStorageSources = new Set([FileSources.azure_blob, FileSources.s3]);
/**
* Encodes and formats the given files.
* @param {Express.Request} req - The request object.
* @param {ServerRequest} req - The request object.
* @param {Array<MongoFile>} files - The array of files to encode and format.
* @param {EModelEndpoint} [endpoint] - Optional: The endpoint for the image.
* @param {string} [mode] - Optional: The endpoint mode for the image.


@ -1,13 +1,11 @@
const avatar = require('./avatar');
const convert = require('./convert');
const encode = require('./encode');
const parse = require('./parse');
const resize = require('./resize');
module.exports = {
...convert,
...encode,
...parse,
...resize,
avatar,
};


@ -1,45 +0,0 @@
const URL = require('url').URL;
const path = require('path');
const imageExtensionRegex = /\.(jpg|jpeg|png|gif|bmp|tiff|svg|webp)$/i;
/**
* Extracts the image basename from a given URL.
*
* @param {string} urlString - The URL string from which the image basename is to be extracted.
* @returns {string} The basename of the image file from the URL.
* Returns an empty string if the URL does not contain a valid image basename.
*/
function getImageBasename(urlString) {
try {
const url = new URL(urlString);
const basename = path.basename(url.pathname);
return imageExtensionRegex.test(basename) ? basename : '';
} catch (error) {
// If URL parsing fails, return an empty string
return '';
}
}
/**
* Extracts the basename of a file from a given URL.
*
* @param {string} urlString - The URL string from which the file basename is to be extracted.
* @returns {string} The basename of the file from the URL.
* Returns an empty string if the URL parsing fails.
*/
function getFileBasename(urlString) {
try {
const url = new URL(urlString);
return path.basename(url.pathname);
} catch (error) {
// If URL parsing fails, return an empty string
return '';
}
}
module.exports = {
getImageBasename,
getFileBasename,
};


@ -16,8 +16,8 @@ const {
removeNullishValues,
isAssistantsEndpoint,
} = require('librechat-data-provider');
const { sanitizeFilename } = require('@librechat/api');
const { EnvVar } = require('@librechat/agents');
const { sanitizeFilename } = require('@librechat/api');
const {
convertImage,
resizeAndConvert,
@ -28,10 +28,10 @@ const { addAgentResourceFile, removeAgentResourceFiles } = require('~/models/Age
const { getOpenAIClient } = require('~/server/controllers/assistants/helpers');
const { createFile, updateFileUsage, deleteFiles } = require('~/models/File');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { getFileStrategy } = require('~/server/utils/getFileStrategy');
const { checkCapability } = require('~/server/services/Config');
const { LB_QueueAsyncCall } = require('~/server/utils/queue');
const { getStrategyFunctions } = require('./strategies');
const { getFileStrategy } = require('~/server/utils/getFileStrategy');
const { determineFileType } = require('~/server/utils');
const { logger } = require('~/config');
@ -157,6 +157,7 @@ function enqueueDeleteOperation({ req, file, deleteFile, promises, resolvedFileI
* @returns {Promise<void>}
*/
const processDeleteRequest = async ({ req, files }) => {
const appConfig = req.config;
const resolvedFileIds = [];
const deletionMethods = {};
const promises = [];
@ -164,7 +165,7 @@ const processDeleteRequest = async ({ req, files }) => {
/** @type {Record<string, OpenAI | undefined>} */
const client = { [FileSources.openai]: undefined, [FileSources.azure]: undefined };
const initializeClients = async () => {
if (req.app.locals[EModelEndpoint.assistants]) {
if (appConfig.endpoints?.[EModelEndpoint.assistants]) {
const openAIClient = await getOpenAIClient({
req,
overrideEndpoint: EModelEndpoint.assistants,
@ -172,7 +173,7 @@ const processDeleteRequest = async ({ req, files }) => {
client[FileSources.openai] = openAIClient.openai;
}
if (!req.app.locals[EModelEndpoint.azureOpenAI]?.assistants) {
if (!appConfig.endpoints?.[EModelEndpoint.azureOpenAI]?.assistants) {
return;
}
@ -320,7 +321,8 @@ const processFileURL = async ({ fileStrategy, userId, URL, fileName, basePath, c
*/
const processImageFile = async ({ req, res, metadata, returnFile = false }) => {
const { file } = req;
const source = getFileStrategy(req.app.locals, { isImage: true });
const appConfig = req.config;
const source = getFileStrategy(appConfig, { isImage: true });
const { handleImageUpload } = getStrategyFunctions(source);
const { file_id, temp_file_id, endpoint } = metadata;
@ -341,7 +343,7 @@ const processImageFile = async ({ req, res, metadata, returnFile = false }) => {
filename: file.originalname,
context: FileContext.message_attachment,
source,
type: `image/${req.app.locals.imageOutputType}`,
type: `image/${appConfig.imageOutputType}`,
width,
height,
},
@ -366,18 +368,19 @@ const processImageFile = async ({ req, res, metadata, returnFile = false }) => {
* @returns {Promise<{ filepath: string, filename: string, source: string, type: string}>}
*/
const uploadImageBuffer = async ({ req, context, metadata = {}, resize = true }) => {
const source = getFileStrategy(req.app.locals, { isImage: true });
const appConfig = req.config;
const source = getFileStrategy(appConfig, { isImage: true });
const { saveBuffer } = getStrategyFunctions(source);
let { buffer, width, height, bytes, filename, file_id, type } = metadata;
if (resize) {
file_id = v4();
type = `image/${req.app.locals.imageOutputType}`;
type = `image/${appConfig.imageOutputType}`;
({ buffer, width, height, bytes } = await resizeAndConvert({
inputBuffer: buffer,
desiredFormat: req.app.locals.imageOutputType,
desiredFormat: appConfig.imageOutputType,
}));
filename = `${path.basename(req.file.originalname, path.extname(req.file.originalname))}.${
req.app.locals.imageOutputType
appConfig.imageOutputType
}`;
}
const fileName = `${file_id}-${filename}`;
@ -411,11 +414,12 @@ const uploadImageBuffer = async ({ req, context, metadata = {}, resize = true })
* @returns {Promise<void>}
*/
const processFileUpload = async ({ req, res, metadata }) => {
const appConfig = req.config;
const isAssistantUpload = isAssistantsEndpoint(metadata.endpoint);
const assistantSource =
metadata.endpoint === EModelEndpoint.azureAssistants ? FileSources.azure : FileSources.openai;
// Use the configured file strategy for regular file uploads (not vectordb)
const source = isAssistantUpload ? assistantSource : req.app.locals.fileStrategy;
const source = isAssistantUpload ? assistantSource : appConfig.fileStrategy;
const { handleFileUpload } = getStrategyFunctions(source);
const { file_id, temp_file_id = null } = metadata;
@ -501,6 +505,7 @@ const processFileUpload = async ({ req, res, metadata }) => {
*/
const processAgentFileUpload = async ({ req, res, metadata }) => {
const { file } = req;
const appConfig = req.config;
const { agent_id, tool_resource, file_id, temp_file_id = null } = metadata;
if (agent_id && !tool_resource) {
throw new Error('No tool resource provided for agent file upload');
@ -553,7 +558,7 @@ const processAgentFileUpload = async ({ req, res, metadata }) => {
}
const { handleFileUpload: uploadOCR } = getStrategyFunctions(
req.app.locals?.ocr?.strategy ?? FileSources.mistral_ocr,
appConfig?.ocr?.strategy ?? FileSources.mistral_ocr,
);
const { file_id, temp_file_id = null } = metadata;
@ -564,7 +569,7 @@ const processAgentFileUpload = async ({ req, res, metadata }) => {
images: _i,
filename,
filepath: ocrFileURL,
} = await uploadOCR({ req, file, loadAuthValues });
} = await uploadOCR({ req, appConfig, file, loadAuthValues });
const fileInfo = removeNullishValues({
text,
@ -597,7 +602,7 @@ const processAgentFileUpload = async ({ req, res, metadata }) => {
// Dual storage pattern for RAG files: Storage + Vector DB
let storageResult, embeddingResult;
const isImageFile = file.mimetype.startsWith('image');
const source = getFileStrategy(req.app.locals, { isImage: isImageFile });
const source = getFileStrategy(appConfig, { isImage: isImageFile });
if (tool_resource === EToolResources.file_search) {
// FIRST: Upload to Storage for permanent backup (S3/local/etc.)
@ -752,6 +757,7 @@ const processOpenAIFile = async ({
const processOpenAIImageOutput = async ({ req, buffer, file_id, filename, fileExt }) => {
const currentDate = new Date();
const formattedDate = currentDate.toISOString();
const appConfig = req.config;
const _file = await convertImage(req, buffer, undefined, `${file_id}${fileExt}`);
// Create only one file record with the correct information
@ -762,7 +768,7 @@ const processOpenAIImageOutput = async ({ req, buffer, file_id, filename, fileEx
type: mime.getType(fileExt),
createdAt: formattedDate,
updatedAt: formattedDate,
source: getFileStrategy(req.app.locals, { isImage: true }),
source: getFileStrategy(appConfig, { isImage: true }),
context: FileContext.assistants_output,
file_id,
filename,
@ -889,7 +895,7 @@ async function saveBase64Image(
url,
{ req, file_id: _file_id, filename: _filename, endpoint, context, resolution },
) {
const effectiveResolution = resolution ?? req.app.locals.fileConfig?.imageGeneration ?? 'high';
const effectiveResolution = resolution ?? appConfig.fileConfig?.imageGeneration ?? 'high';
const file_id = _file_id ?? v4();
let filename = `${file_id}-${_filename}`;
const { buffer: inputBuffer, type } = base64ToBuffer(url);
@ -903,7 +909,8 @@ async function saveBase64Image(
}
const image = await resizeImageBuffer(inputBuffer, effectiveResolution, endpoint);
const source = getFileStrategy(req.app.locals, { isImage: true });
const appConfig = req.config;
const source = getFileStrategy(appConfig, { isImage: true });
const { saveBuffer } = getStrategyFunctions(source);
const filepath = await saveBuffer({
userId: req.user.id,
@ -964,7 +971,8 @@ function filterFile({ req, image, isAvatar }) {
throw new Error('No endpoint provided');
}
const fileConfig = mergeFileConfig(req.app.locals.fileConfig);
const appConfig = req.config;
const fileConfig = mergeFileConfig(appConfig.fileConfig);
const { fileSizeLimit: sizeLimit, supportedMimeTypes } =
fileConfig.endpoints[endpoint] ?? fileConfig.endpoints.default;


@ -20,9 +20,9 @@ const {
ContentTypes,
isAssistantsEndpoint,
} = require('librechat-data-provider');
const { getCachedTools, loadCustomConfig } = require('./Config');
const { findToken, createToken, updateToken } = require('~/models');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { getCachedTools, getAppConfig } = require('./Config');
const { reinitMCPServer } = require('./Tools/mcp');
const { getLogStores } = require('~/cache');
@ -428,9 +428,8 @@ function createToolInstance({ res, toolName, serverName, toolDefinition, provide
* @returns {Object} Object containing mcpConfig, appConnections, userConnections, and oauthServers
*/
async function getMCPSetupData(userId) {
const printConfig = false;
const config = await loadCustomConfig(printConfig);
const mcpConfig = config?.mcpServers;
const config = await getAppConfig();
const mcpConfig = config?.mcpConfig;
if (!mcpConfig) {
throw new Error('MCP config not found');
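
A minimal sketch of the new lookup, assuming `getAppConfig` serves the cached app config and exposes MCP servers as `mcpConfig`:

// MCP server definitions are read from the cached app config
// instead of re-parsing librechat.yaml on every status check.
async function getMcpServers() {
  const config = await getAppConfig();
  const mcpConfig = config?.mcpConfig;
  if (!mcpConfig) {
    throw new Error('MCP config not found');
  }
  return mcpConfig;
}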


@ -25,6 +25,7 @@ jest.mock('librechat-data-provider', () => ({
jest.mock('./Config', () => ({
loadCustomConfig: jest.fn(),
getAppConfig: jest.fn(),
}));
jest.mock('~/config', () => ({
@ -65,8 +66,10 @@ describe('tests for the new helper functions used by the MCP connection status e
server2: { type: 'http' },
},
};
let mockGetAppConfig;
beforeEach(() => {
mockGetAppConfig = require('./Config').getAppConfig;
mockGetMCPManager.mockReturnValue({
getAllConnections: jest.fn(() => new Map()),
getUserConnections: jest.fn(() => new Map()),
@ -75,7 +78,7 @@ describe('tests for the new helper functions used by the MCP connection status e
});
it('should successfully return MCP setup data', async () => {
mockLoadCustomConfig.mockResolvedValue(mockConfig);
mockGetAppConfig.mockResolvedValue({ mcpConfig: mockConfig.mcpServers });
const mockAppConnections = new Map([['server1', { status: 'connected' }]]);
const mockUserConnections = new Map([['server2', { status: 'disconnected' }]]);
@ -90,7 +93,7 @@ describe('tests for the new helper functions used by the MCP connection status e
const result = await getMCPSetupData(mockUserId);
expect(mockLoadCustomConfig).toHaveBeenCalledWith(false);
expect(mockGetAppConfig).toHaveBeenCalled();
expect(mockGetMCPManager).toHaveBeenCalledWith(mockUserId);
expect(mockMCPManager.getAllConnections).toHaveBeenCalled();
expect(mockMCPManager.getUserConnections).toHaveBeenCalledWith(mockUserId);
@ -105,12 +108,12 @@ describe('tests for the new helper functions used by the MCP connection status e
});
it('should throw error when MCP config not found', async () => {
mockLoadCustomConfig.mockResolvedValue({});
mockGetAppConfig.mockResolvedValue({});
await expect(getMCPSetupData(mockUserId)).rejects.toThrow('MCP config not found');
});
it('should handle null values from MCP manager gracefully', async () => {
mockLoadCustomConfig.mockResolvedValue(mockConfig);
mockGetAppConfig.mockResolvedValue({ mcpConfig: mockConfig.mcpServers });
const mockMCPManager = {
getAllConnections: jest.fn(() => null),


@ -36,7 +36,7 @@ class StreamRunManager {
/** @type {Run | null} */
this.run = null;
/** @type {Express.Request} */
/** @type {ServerRequest} */
this.req = fields.req;
/** @type {Express.Response} */
this.res = fields.res;


@ -18,6 +18,7 @@ const { EModelEndpoint } = require('librechat-data-provider');
* @returns {Promise<Object>} The data retrieved from the API.
*/
async function retrieveRun({ thread_id, run_id, timeout, openai }) {
const appConfig = openai.req.config;
const { apiKey, baseURL, httpAgent, organization } = openai;
let url = `${baseURL}/threads/${thread_id}/runs/${run_id}`;
@ -31,7 +32,7 @@ async function retrieveRun({ thread_id, run_id, timeout, openai }) {
}
/** @type {TAzureConfig | undefined} */
const azureConfig = openai.req.app.locals[EModelEndpoint.azureOpenAI];
const azureConfig = appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
if (azureConfig && azureConfig.assistants) {
delete headers.Authorization;


@ -1,11 +1,7 @@
const fs = require('fs');
const path = require('path');
const { sleep } = require('@librechat/agents');
const { logger } = require('@librechat/data-schemas');
const { zodToJsonSchema } = require('zod-to-json-schema');
const { getToolkitKey, getUserMCPAuthMap } = require('@librechat/api');
const { Calculator } = require('@langchain/community/tools/calculator');
const { tool: toolFn, Tool, DynamicStructuredTool } = require('@langchain/core/tools');
const { tool: toolFn, DynamicStructuredTool } = require('@langchain/core/tools');
const { getToolkitKey, hasCustomUserVars, getUserMCPAuthMap } = require('@librechat/api');
const {
Tools,
Constants,
@ -26,145 +22,15 @@ const {
loadActionSets,
domainParser,
} = require('./ActionService');
const {
createOpenAIImageTools,
createYouTubeTools,
manifestToolMap,
toolkits,
} = require('~/app/clients/tools');
const { processFileURL, uploadImageBuffer } = require('~/server/services/Files/process');
const {
getEndpointsConfig,
hasCustomUserVars,
getCachedTools,
} = require('~/server/services/Config');
const { getEndpointsConfig, getCachedTools } = require('~/server/services/Config');
const { manifestToolMap, toolkits } = require('~/app/clients/tools/manifest');
const { createOnSearchResults } = require('~/server/services/Tools/search');
const { isActionDomainAllowed } = require('~/server/services/domains');
const { recordUsage } = require('~/server/services/Threads');
const { loadTools } = require('~/app/clients/tools/util');
const { redactMessage } = require('~/config/parsers');
const { findPluginAuthsByKeys } = require('~/models');
/**
* Loads and formats tools from the specified tool directory.
*
* The directory is scanned for JavaScript files, excluding any files in the filter set.
* For each file, it attempts to load the file as a module and instantiate a class, if it's a subclass of `StructuredTool`.
* Each tool instance is then formatted to be compatible with the OpenAI Assistant.
* Additionally, instances of LangChain Tools are included in the result.
*
* @param {object} params - The parameters for the function.
* @param {string} params.directory - The directory path where the tools are located.
* @param {Array<string>} [params.adminFilter=[]] - Array of admin-defined tool keys to exclude from loading.
* @param {Array<string>} [params.adminIncluded=[]] - Array of admin-defined tool keys to include from loading.
* @returns {Record<string, FunctionTool>} An object mapping each tool's plugin key to its instance.
*/
function loadAndFormatTools({ directory, adminFilter = [], adminIncluded = [] }) {
const filter = new Set([...adminFilter]);
const included = new Set(adminIncluded);
const tools = [];
/* Structured Tools Directory */
const files = fs.readdirSync(directory);
if (included.size > 0 && adminFilter.length > 0) {
logger.warn(
'Both `includedTools` and `filteredTools` are defined; `filteredTools` will be ignored.',
);
}
for (const file of files) {
const filePath = path.join(directory, file);
if (!file.endsWith('.js') || (filter.has(file) && included.size === 0)) {
continue;
}
let ToolClass = null;
try {
ToolClass = require(filePath);
} catch (error) {
logger.error(`[loadAndFormatTools] Error loading tool from ${filePath}:`, error);
continue;
}
if (!ToolClass || !(ToolClass.prototype instanceof Tool)) {
continue;
}
let toolInstance = null;
try {
toolInstance = new ToolClass({ override: true });
} catch (error) {
logger.error(
`[loadAndFormatTools] Error initializing \`${file}\` tool; if it requires authentication, is the \`override\` field configured?`,
error,
);
continue;
}
if (!toolInstance) {
continue;
}
if (filter.has(toolInstance.name) && included.size === 0) {
continue;
}
if (included.size > 0 && !included.has(file) && !included.has(toolInstance.name)) {
continue;
}
const formattedTool = formatToOpenAIAssistantTool(toolInstance);
tools.push(formattedTool);
}
/** Basic Tools & Toolkits; schema: { input: string } */
const basicToolInstances = [
new Calculator(),
...createOpenAIImageTools({ override: true }),
...createYouTubeTools({ override: true }),
];
for (const toolInstance of basicToolInstances) {
const formattedTool = formatToOpenAIAssistantTool(toolInstance);
let toolName = formattedTool[Tools.function].name;
toolName = getToolkitKey({ toolkits, toolName }) ?? toolName;
if (filter.has(toolName) && included.size === 0) {
continue;
}
if (included.size > 0 && !included.has(toolName)) {
continue;
}
tools.push(formattedTool);
}
tools.push(ImageVisionTool);
return tools.reduce((map, tool) => {
map[tool.function.name] = tool;
return map;
}, {});
}
/**
* Formats a `StructuredTool` instance into a format that is compatible
* with OpenAI's ChatCompletionFunctions. It uses the `zodToJsonSchema`
* function to convert the schema of the `StructuredTool` into a JSON
* schema, which is then used as the parameters for the OpenAI function.
*
* @param {StructuredTool} tool - The StructuredTool to format.
* @returns {FunctionTool} The OpenAI Assistant Tool.
*/
function formatToOpenAIAssistantTool(tool) {
return {
type: Tools.function,
[Tools.function]: {
name: tool.name,
description: tool.description,
parameters: zodToJsonSchema(tool.schema),
},
};
}
/**
* Processes the required actions by calling the appropriate tools and returning the outputs.
* @param {OpenAIClient} client - OpenAI or StreamRunManager Client.
@ -207,6 +73,7 @@ async function processRequiredActions(client, requiredActions) {
`[required actions] user: ${client.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id}`,
requiredActions,
);
const appConfig = client.req.config;
const toolDefinitions = await getCachedTools({ userId: client.req.user.id, includeGlobal: true });
const seenToolkits = new Set();
const tools = requiredActions
@ -238,9 +105,11 @@ async function processRequiredActions(client, requiredActions) {
req: client.req,
uploadImageBuffer,
openAIApiKey: client.apiKey,
fileStrategy: client.req.app.locals.fileStrategy,
returnMetadata: true,
},
webSearch: appConfig.webSearch,
fileStrategy: appConfig.fileStrategy,
imageOutputType: appConfig.imageOutputType,
});
const ToolMap = loadedTools.reduce((map, tool) => {
@ -353,8 +222,10 @@ async function processRequiredActions(client, requiredActions) {
const domain = await domainParser(action.metadata.domain, true);
domainMap.set(domain, action);
// Check if domain is allowed
const isDomainAllowed = await isActionDomainAllowed(action.metadata.domain);
const isDomainAllowed = await isActionDomainAllowed(
action.metadata.domain,
appConfig?.actions?.allowedDomains,
);
if (!isDomainAllowed) {
continue;
}
@ -486,12 +357,13 @@ async function loadAgentTools({ req, res, agent, signal, tool_resources, openAIA
return {};
}
const appConfig = req.config;
const endpointsConfig = await getEndpointsConfig(req);
let enabledCapabilities = new Set(endpointsConfig?.[EModelEndpoint.agents]?.capabilities ?? []);
/** Edge case: use defined/fallback capabilities when the "agents" endpoint is not enabled */
if (enabledCapabilities.size === 0 && agent.id === Constants.EPHEMERAL_AGENT_ID) {
enabledCapabilities = new Set(
req.app?.locals?.[EModelEndpoint.agents]?.capabilities ?? defaultAgentCapabilities,
appConfig.endpoints?.[EModelEndpoint.agents]?.capabilities ?? defaultAgentCapabilities,
);
}
const checkCapability = (capability) => {
@ -523,7 +395,7 @@ async function loadAgentTools({ req, res, agent, signal, tool_resources, openAIA
if (!_agentTools || _agentTools.length === 0) {
return {};
}
/** @type {ReturnType<createOnSearchResults>} */
/** @type {ReturnType<typeof createOnSearchResults>} */
let webSearchCallbacks;
if (includesWebSearch) {
webSearchCallbacks = createOnSearchResults(res);
@ -531,7 +403,7 @@ async function loadAgentTools({ req, res, agent, signal, tool_resources, openAIA
/** @type {Record<string, Record<string, string>>} */
let userMCPAuthMap;
if (await hasCustomUserVars()) {
if (hasCustomUserVars(req.config)) {
userMCPAuthMap = await getUserMCPAuthMap({
tools: agent.tools,
userId: req.user.id,
@ -554,9 +426,11 @@ async function loadAgentTools({ req, res, agent, signal, tool_resources, openAIA
processFileURL,
uploadImageBuffer,
returnMetadata: true,
fileStrategy: req.app.locals.fileStrategy,
[Tools.web_search]: webSearchCallbacks,
},
webSearch: appConfig.webSearch,
fileStrategy: appConfig.fileStrategy,
imageOutputType: appConfig.imageOutputType,
});
const agentTools = [];
@ -632,7 +506,10 @@ async function loadAgentTools({ req, res, agent, signal, tool_resources, openAIA
domainMap.set(domain, action);
// Check if domain is allowed (do this once per action set)
const isDomainAllowed = await isActionDomainAllowed(action.metadata.domain);
const isDomainAllowed = await isActionDomainAllowed(
action.metadata.domain,
appConfig?.actions?.allowedDomains,
);
if (!isDomainAllowed) {
continue;
}
@ -734,7 +611,5 @@ async function loadAgentTools({ req, res, agent, signal, tool_resources, openAIA
module.exports = {
getToolkitKey,
loadAgentTools,
loadAndFormatTools,
processRequiredActions,
formatToOpenAIAssistantTool,
};

@ -1,6 +1,6 @@
const { nanoid } = require('nanoid');
const { Tools } = require('librechat-data-provider');
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
/**
* Creates a function to handle search results and stream them as attachments

@ -1,10 +1,9 @@
const { getCustomConfig } = require('~/server/services/Config');
/**
* @param {string} email
* @returns {Promise<boolean>}
* @param {string[]} [allowedDomains]
* @returns {boolean}
*/
async function isEmailDomainAllowed(email) {
function isEmailDomainAllowed(email, allowedDomains) {
if (!email) {
return false;
}
@ -15,14 +14,13 @@ async function isEmailDomainAllowed(email) {
return false;
}
const customConfig = await getCustomConfig();
if (!customConfig) {
if (!allowedDomains) {
return true;
} else if (!customConfig?.registration?.allowedDomains) {
} else if (!Array.isArray(allowedDomains) || !allowedDomains.length) {
return true;
}
return customConfig.registration.allowedDomains.includes(domain);
return allowedDomains.includes(domain);
}
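
A hedged sketch of the new call pattern — the surrounding registration handler and the exact `getAppConfig` options are assumptions, but the synchronous check itself mirrors the function above:

```js
// Illustrative registration check; handler shape and getAppConfig options are assumed.
const appConfig = await getAppConfig({ role: req.user?.role });
const allowedDomains = appConfig?.registration?.allowedDomains;

if (!isEmailDomainAllowed(email, allowedDomains)) {
  return res.status(403).json({ message: 'Registration from this email domain is not allowed' });
}
```
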
/**
@ -65,16 +63,14 @@ function normalizeDomain(domain) {
/**
* Checks if the given domain is allowed. If no restrictions are set, allows all domains.
* @param {string} [domain]
* @param {string[]} [allowedDomains]
* @returns {Promise<boolean>}
*/
async function isActionDomainAllowed(domain) {
async function isActionDomainAllowed(domain, allowedDomains) {
if (!domain || typeof domain !== 'string') {
return false;
}
const customConfig = await getCustomConfig();
const allowedDomains = customConfig?.actions?.allowedDomains;
if (!Array.isArray(allowedDomains) || !allowedDomains.length) {
return true;
}

@ -1,8 +1,8 @@
const { isEmailDomainAllowed, isActionDomainAllowed } = require('~/server/services/domains');
const { getCustomConfig } = require('~/server/services/Config');
const { getAppConfig } = require('~/server/services/Config');
jest.mock('~/server/services/Config', () => ({
getCustomConfig: jest.fn(),
getAppConfig: jest.fn(),
}));
describe('isEmailDomainAllowed', () => {
@ -12,49 +12,49 @@ describe('isEmailDomainAllowed', () => {
it('should return false if email is falsy', async () => {
const email = '';
const result = await isEmailDomainAllowed(email);
const result = isEmailDomainAllowed(email);
expect(result).toBe(false);
});
it('should return false if domain is not present in the email', async () => {
const email = 'test';
const result = await isEmailDomainAllowed(email);
const result = isEmailDomainAllowed(email);
expect(result).toBe(false);
});
it('should return true if customConfig is not available', async () => {
const email = 'test@domain1.com';
getCustomConfig.mockResolvedValue(null);
const result = await isEmailDomainAllowed(email);
getAppConfig.mockResolvedValue(null);
const result = isEmailDomainAllowed(email, null);
expect(result).toBe(true);
});
it('should return true if allowedDomains is not defined in customConfig', async () => {
const email = 'test@domain1.com';
getCustomConfig.mockResolvedValue({});
const result = await isEmailDomainAllowed(email);
getAppConfig.mockResolvedValue({});
const result = isEmailDomainAllowed(email, undefined);
expect(result).toBe(true);
});
it('should return true if domain is included in the allowedDomains', async () => {
const email = 'user@domain1.com';
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
registration: {
allowedDomains: ['domain1.com', 'domain2.com'],
},
});
const result = await isEmailDomainAllowed(email);
const result = isEmailDomainAllowed(email, ['domain1.com', 'domain2.com']);
expect(result).toBe(true);
});
it('should return false if domain is not included in the allowedDomains', async () => {
const email = 'user@domain3.com';
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
registration: {
allowedDomains: ['domain1.com', 'domain2.com'],
},
});
const result = await isEmailDomainAllowed(email);
const result = isEmailDomainAllowed(email, ['domain1.com', 'domain2.com']);
expect(result).toBe(false);
});
});
@ -80,114 +80,119 @@ describe('isActionDomainAllowed', () => {
});
it('should return false for invalid domain formats', async () => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
actions: { allowedDomains: ['http://', 'https://'] },
});
expect(await isActionDomainAllowed('http://')).toBe(false);
expect(await isActionDomainAllowed('https://')).toBe(false);
expect(await isActionDomainAllowed('http://', ['http://', 'https://'])).toBe(false);
expect(await isActionDomainAllowed('https://', ['http://', 'https://'])).toBe(false);
});
});
// Configuration Tests
describe('configuration handling', () => {
it('should return true if customConfig is null', async () => {
getCustomConfig.mockResolvedValue(null);
expect(await isActionDomainAllowed('example.com')).toBe(true);
getAppConfig.mockResolvedValue(null);
expect(await isActionDomainAllowed('example.com', null)).toBe(true);
});
it('should return true if actions.allowedDomains is not defined', async () => {
getCustomConfig.mockResolvedValue({});
expect(await isActionDomainAllowed('example.com')).toBe(true);
getAppConfig.mockResolvedValue({});
expect(await isActionDomainAllowed('example.com', undefined)).toBe(true);
});
it('should return true if allowedDomains is empty array', async () => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
actions: { allowedDomains: [] },
});
expect(await isActionDomainAllowed('example.com')).toBe(true);
expect(await isActionDomainAllowed('example.com', [])).toBe(true);
});
});
// Domain Matching Tests
describe('domain matching', () => {
const allowedDomains = [
'example.com',
'*.subdomain.com',
'specific.domain.com',
'www.withprefix.com',
'swapi.dev',
];
beforeEach(() => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
actions: {
allowedDomains: [
'example.com',
'*.subdomain.com',
'specific.domain.com',
'www.withprefix.com',
'swapi.dev',
],
allowedDomains,
},
});
});
it('should match exact domains', async () => {
expect(await isActionDomainAllowed('example.com')).toBe(true);
expect(await isActionDomainAllowed('other.com')).toBe(false);
expect(await isActionDomainAllowed('swapi.dev')).toBe(true);
expect(await isActionDomainAllowed('example.com', allowedDomains)).toBe(true);
expect(await isActionDomainAllowed('other.com', allowedDomains)).toBe(false);
expect(await isActionDomainAllowed('swapi.dev', allowedDomains)).toBe(true);
});
it('should handle domains with www prefix', async () => {
expect(await isActionDomainAllowed('www.example.com')).toBe(true);
expect(await isActionDomainAllowed('www.withprefix.com')).toBe(true);
expect(await isActionDomainAllowed('www.example.com', allowedDomains)).toBe(true);
expect(await isActionDomainAllowed('www.withprefix.com', allowedDomains)).toBe(true);
});
it('should handle full URLs', async () => {
expect(await isActionDomainAllowed('https://example.com')).toBe(true);
expect(await isActionDomainAllowed('http://example.com')).toBe(true);
expect(await isActionDomainAllowed('https://example.com/path')).toBe(true);
expect(await isActionDomainAllowed('https://example.com', allowedDomains)).toBe(true);
expect(await isActionDomainAllowed('http://example.com', allowedDomains)).toBe(true);
expect(await isActionDomainAllowed('https://example.com/path', allowedDomains)).toBe(true);
});
it('should handle wildcard subdomains', async () => {
expect(await isActionDomainAllowed('test.subdomain.com')).toBe(true);
expect(await isActionDomainAllowed('any.subdomain.com')).toBe(true);
expect(await isActionDomainAllowed('subdomain.com')).toBe(true);
expect(await isActionDomainAllowed('test.subdomain.com', allowedDomains)).toBe(true);
expect(await isActionDomainAllowed('any.subdomain.com', allowedDomains)).toBe(true);
expect(await isActionDomainAllowed('subdomain.com', allowedDomains)).toBe(true);
});
it('should handle specific subdomains', async () => {
expect(await isActionDomainAllowed('specific.domain.com')).toBe(true);
expect(await isActionDomainAllowed('other.domain.com')).toBe(false);
expect(await isActionDomainAllowed('specific.domain.com', allowedDomains)).toBe(true);
expect(await isActionDomainAllowed('other.domain.com', allowedDomains)).toBe(false);
});
});
// Edge Cases
describe('edge cases', () => {
const edgeAllowedDomains = ['example.com', '*.test.com'];
beforeEach(() => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
actions: {
allowedDomains: ['example.com', '*.test.com'],
allowedDomains: edgeAllowedDomains,
},
});
});
it('should handle domains with query parameters', async () => {
expect(await isActionDomainAllowed('example.com?param=value')).toBe(true);
expect(await isActionDomainAllowed('example.com?param=value', edgeAllowedDomains)).toBe(true);
});
it('should handle domains with ports', async () => {
expect(await isActionDomainAllowed('example.com:8080')).toBe(true);
expect(await isActionDomainAllowed('example.com:8080', edgeAllowedDomains)).toBe(true);
});
it('should handle domains with trailing slashes', async () => {
expect(await isActionDomainAllowed('example.com/')).toBe(true);
expect(await isActionDomainAllowed('example.com/', edgeAllowedDomains)).toBe(true);
});
it('should handle case insensitivity', async () => {
expect(await isActionDomainAllowed('EXAMPLE.COM')).toBe(true);
expect(await isActionDomainAllowed('Example.Com')).toBe(true);
expect(await isActionDomainAllowed('EXAMPLE.COM', edgeAllowedDomains)).toBe(true);
expect(await isActionDomainAllowed('Example.Com', edgeAllowedDomains)).toBe(true);
});
it('should handle invalid entries in allowedDomains', async () => {
getCustomConfig.mockResolvedValue({
const invalidAllowedDomains = ['example.com', null, undefined, '', 'test.com'];
getAppConfig.mockResolvedValue({
actions: {
allowedDomains: ['example.com', null, undefined, '', 'test.com'],
allowedDomains: invalidAllowedDomains,
},
});
expect(await isActionDomainAllowed('example.com')).toBe(true);
expect(await isActionDomainAllowed('test.com')).toBe(true);
expect(await isActionDomainAllowed('example.com', invalidAllowedDomains)).toBe(true);
expect(await isActionDomainAllowed('test.com', invalidAllowedDomains)).toBe(true);
});
});
});

@ -1,13 +1,13 @@
const { logger } = require('@librechat/data-schemas');
const { mergeAppTools, getAppConfig } = require('./Config');
const { createMCPManager } = require('~/config');
const { mergeAppTools } = require('./Config');
/**
* Initialize MCP servers
* @param {import('express').Application} app - Express app instance
*/
async function initializeMCPs(app) {
const mcpServers = app.locals.mcpConfig;
async function initializeMCPs() {
const appConfig = await getAppConfig();
const mcpServers = appConfig.mcpConfig;
if (!mcpServers) {
return;
}
@ -15,7 +15,6 @@ async function initializeMCPs(app) {
const mcpManager = await createMCPManager(mcpServers);
try {
delete app.locals.mcpConfig;
const mcpTools = mcpManager.getAppToolFunctions() || {};
await mergeAppTools(mcpTools);
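
The function no longer takes the Express app; a sketch of the assumed call site after this change:

```js
// Assumed startup wiring; previously this was `initializeMCPs(app)`.
try {
  await initializeMCPs(); // reads mcpConfig from getAppConfig() rather than app.locals
} catch (error) {
  logger.error('Failed to initialize MCP servers:', error);
}
```
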

@ -1,9 +1,9 @@
const { logger } = require('@librechat/data-schemas');
const {
Capabilities,
assistantEndpointSchema,
defaultAssistantsVersion,
} = require('librechat-data-provider');
const { logger } = require('~/config');
/**
* Sets up the minimum, default Assistants configuration if Azure OpenAI Assistants option is enabled.

@ -1,9 +1,9 @@
const { logger } = require('@librechat/data-schemas');
const {
EModelEndpoint,
validateAzureGroups,
mapModelToAzureConfig,
} = require('librechat-data-provider');
const { logger } = require('~/config');
/**
* Sets up the Azure OpenAI configuration from the config (`librechat.yaml`) file.

@ -1,12 +1,11 @@
const { webSearchKeys } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { isEnabled, webSearchKeys, checkEmailConfig } = require('@librechat/api');
const {
Constants,
extractVariableName,
deprecatedAzureVariables,
conflictingAzureVariables,
extractVariableName,
} = require('librechat-data-provider');
const { isEnabled, checkEmailConfig } = require('~/server/utils');
const { logger } = require('~/config');
const secretDefaults = {
CREDS_KEY: 'f34be427ebb29de8d88c107a71546019685ed8b241d8f2ed00c3df97ad2566f0',
@ -76,7 +75,7 @@ async function checkHealth() {
if (response?.ok && response?.status === 200) {
logger.info(`RAG API is running and reachable at ${process.env.RAG_API_URL}.`);
}
} catch (error) {
} catch {
logger.warn(
`RAG API is either not running or not reachable at ${process.env.RAG_API_URL}, you may experience errors with file uploads.`,
);

@ -1,11 +1,10 @@
// Mock librechat-data-provider
jest.mock('librechat-data-provider', () => ({
...jest.requireActual('librechat-data-provider'),
extractVariableName: jest.fn(),
}));
// Mock the config logger
jest.mock('~/config', () => ({
jest.mock('@librechat/data-schemas', () => ({
...jest.requireActual('@librechat/data-schemas'),
logger: {
debug: jest.fn(),
warn: jest.fn(),
@ -13,7 +12,7 @@ jest.mock('~/config', () => ({
}));
const { checkWebSearchConfig } = require('./checks');
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
const { extractVariableName } = require('librechat-data-provider');
describe('checkWebSearchConfig', () => {

@ -0,0 +1,67 @@
const { agentsConfigSetup } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const { azureAssistantsDefaults, assistantsConfigSetup } = require('./assistants');
const { azureConfigSetup } = require('./azureOpenAI');
const { checkAzureVariables } = require('./checks');
/**
* Loads endpoint configurations from the custom config (`librechat.yaml`)
* @param {TCustomConfig} [config]
* @param {TCustomConfig['endpoints']['agents']} [agentsDefaults]
*/
const loadEndpoints = (config, agentsDefaults) => {
/** @type {AppConfig['endpoints']} */
const loadedEndpoints = {};
const endpoints = config?.endpoints;
if (endpoints?.[EModelEndpoint.azureOpenAI]) {
loadedEndpoints[EModelEndpoint.azureOpenAI] = azureConfigSetup(config);
checkAzureVariables();
}
if (endpoints?.[EModelEndpoint.azureOpenAI]?.assistants) {
loadedEndpoints[EModelEndpoint.azureAssistants] = azureAssistantsDefaults();
}
if (endpoints?.[EModelEndpoint.azureAssistants]) {
loadedEndpoints[EModelEndpoint.azureAssistants] = assistantsConfigSetup(
config,
EModelEndpoint.azureAssistants,
loadedEndpoints[EModelEndpoint.azureAssistants],
);
}
if (endpoints?.[EModelEndpoint.assistants]) {
loadedEndpoints[EModelEndpoint.assistants] = assistantsConfigSetup(
config,
EModelEndpoint.assistants,
loadedEndpoints[EModelEndpoint.assistants],
);
}
loadedEndpoints[EModelEndpoint.agents] = agentsConfigSetup(config, agentsDefaults);
const endpointKeys = [
EModelEndpoint.openAI,
EModelEndpoint.google,
EModelEndpoint.custom,
EModelEndpoint.bedrock,
EModelEndpoint.anthropic,
];
endpointKeys.forEach((key) => {
if (endpoints?.[key]) {
loadedEndpoints[key] = endpoints[key];
}
});
if (endpoints?.all) {
loadedEndpoints.all = endpoints.all;
}
return loadedEndpoints;
};
module.exports = {
loadEndpoints,
};
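
A usage sketch under stated assumptions: `config` is the parsed `librechat.yaml` and `agentsDefaults` are whatever defaults AppService already supplies for the agents endpoint.

```js
// Illustrative consumer; the actual AppService wiring may differ.
const { loadEndpoints } = require('./endpoints'); // assumed module path
const endpoints = loadEndpoints(config, agentsDefaults);

// The result becomes the `endpoints` branch of the AppConfig, so callers can read
// e.g. appConfig.endpoints[EModelEndpoint.agents]?.capabilities after getAppConfig().
```
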

@ -1,292 +0,0 @@
const {
SystemRoles,
Permissions,
roleDefaults,
PermissionTypes,
removeNullishValues,
} = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { isMemoryEnabled } = require('@librechat/api');
const { updateAccessPermissions, getRoleByName } = require('~/models/Role');
/**
* Checks if a permission type has explicit configuration
*/
function hasExplicitConfig(interfaceConfig, permissionType) {
switch (permissionType) {
case PermissionTypes.PROMPTS:
return interfaceConfig.prompts !== undefined;
case PermissionTypes.BOOKMARKS:
return interfaceConfig.bookmarks !== undefined;
case PermissionTypes.MEMORIES:
return interfaceConfig.memories !== undefined;
case PermissionTypes.MULTI_CONVO:
return interfaceConfig.multiConvo !== undefined;
case PermissionTypes.AGENTS:
return interfaceConfig.agents !== undefined;
case PermissionTypes.TEMPORARY_CHAT:
return interfaceConfig.temporaryChat !== undefined;
case PermissionTypes.RUN_CODE:
return interfaceConfig.runCode !== undefined;
case PermissionTypes.WEB_SEARCH:
return interfaceConfig.webSearch !== undefined;
case PermissionTypes.PEOPLE_PICKER:
return interfaceConfig.peoplePicker !== undefined;
case PermissionTypes.MARKETPLACE:
return interfaceConfig.marketplace !== undefined;
case PermissionTypes.FILE_SEARCH:
return interfaceConfig.fileSearch !== undefined;
case PermissionTypes.FILE_CITATIONS:
return interfaceConfig.fileCitations !== undefined;
default:
return false;
}
}
/**
* Loads the default interface object.
* @param {TCustomConfig | undefined} config - The loaded custom configuration.
* @param {TConfigDefaults} configDefaults - The custom configuration default values.
* @returns {Promise<TCustomConfig['interface']>} The default interface object.
*/
async function loadDefaultInterface(config, configDefaults) {
const { interface: interfaceConfig } = config ?? {};
const { interface: defaults } = configDefaults;
const hasModelSpecs = config?.modelSpecs?.list?.length > 0;
const includesAddedEndpoints = config?.modelSpecs?.addedEndpoints?.length > 0;
const memoryConfig = config?.memory;
const memoryEnabled = isMemoryEnabled(memoryConfig);
/** Only disable memories if memory config is present but disabled/invalid */
const shouldDisableMemories = memoryConfig && !memoryEnabled;
/** Check if personalization is enabled (defaults to true if memory is configured and enabled) */
const isPersonalizationEnabled =
memoryConfig && memoryEnabled && memoryConfig.personalize !== false;
/** @type {TCustomConfig['interface']} */
const loadedInterface = removeNullishValues({
// UI elements - use schema defaults
endpointsMenu:
interfaceConfig?.endpointsMenu ?? (hasModelSpecs ? false : defaults.endpointsMenu),
modelSelect:
interfaceConfig?.modelSelect ??
(hasModelSpecs ? includesAddedEndpoints : defaults.modelSelect),
parameters: interfaceConfig?.parameters ?? (hasModelSpecs ? false : defaults.parameters),
presets: interfaceConfig?.presets ?? (hasModelSpecs ? false : defaults.presets),
sidePanel: interfaceConfig?.sidePanel ?? defaults.sidePanel,
privacyPolicy: interfaceConfig?.privacyPolicy ?? defaults.privacyPolicy,
termsOfService: interfaceConfig?.termsOfService ?? defaults.termsOfService,
mcpServers: interfaceConfig?.mcpServers ?? defaults.mcpServers,
customWelcome: interfaceConfig?.customWelcome ?? defaults.customWelcome,
// Permissions - only include if explicitly configured
bookmarks: interfaceConfig?.bookmarks,
memories: shouldDisableMemories ? false : interfaceConfig?.memories,
prompts: interfaceConfig?.prompts,
multiConvo: interfaceConfig?.multiConvo,
agents: interfaceConfig?.agents,
temporaryChat: interfaceConfig?.temporaryChat,
runCode: interfaceConfig?.runCode,
webSearch: interfaceConfig?.webSearch,
fileSearch: interfaceConfig?.fileSearch,
fileCitations: interfaceConfig?.fileCitations,
peoplePicker: interfaceConfig?.peoplePicker,
marketplace: interfaceConfig?.marketplace,
});
// Helper to get permission value with proper precedence
const getPermissionValue = (configValue, roleDefault, schemaDefault) => {
if (configValue !== undefined) return configValue;
if (roleDefault !== undefined) return roleDefault;
return schemaDefault;
};
// Permission precedence order:
// 1. Explicit user configuration (from librechat.yaml)
// 2. Role-specific defaults (from roleDefaults)
// 3. Interface schema defaults (from interfaceSchema.default())
for (const roleName of [SystemRoles.USER, SystemRoles.ADMIN]) {
const defaultPerms = roleDefaults[roleName].permissions;
const existingRole = await getRoleByName(roleName);
const existingPermissions = existingRole?.permissions || {};
const permissionsToUpdate = {};
// Helper to add permission if it should be updated
const addPermissionIfNeeded = (permType, permissions) => {
const permTypeExists = existingPermissions[permType];
const isExplicitlyConfigured =
interfaceConfig && hasExplicitConfig(interfaceConfig, permType);
// Only update if: doesn't exist OR explicitly configured
if (!permTypeExists || isExplicitlyConfigured) {
permissionsToUpdate[permType] = permissions;
if (!permTypeExists) {
logger.debug(`Role '${roleName}': Setting up default permissions for '${permType}'`);
} else if (isExplicitlyConfigured) {
logger.debug(`Role '${roleName}': Applying explicit config for '${permType}'`);
}
} else {
logger.debug(`Role '${roleName}': Preserving existing permissions for '${permType}'`);
}
};
// Build permissions for each type
const allPermissions = {
[PermissionTypes.PROMPTS]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.prompts,
defaultPerms[PermissionTypes.PROMPTS]?.[Permissions.USE],
defaults.prompts,
),
},
[PermissionTypes.BOOKMARKS]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.bookmarks,
defaultPerms[PermissionTypes.BOOKMARKS]?.[Permissions.USE],
defaults.bookmarks,
),
},
[PermissionTypes.MEMORIES]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.memories,
defaultPerms[PermissionTypes.MEMORIES]?.[Permissions.USE],
defaults.memories,
),
[Permissions.OPT_OUT]: isPersonalizationEnabled,
},
[PermissionTypes.MULTI_CONVO]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.multiConvo,
defaultPerms[PermissionTypes.MULTI_CONVO]?.[Permissions.USE],
defaults.multiConvo,
),
},
[PermissionTypes.AGENTS]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.agents,
defaultPerms[PermissionTypes.AGENTS]?.[Permissions.USE],
defaults.agents,
),
},
[PermissionTypes.TEMPORARY_CHAT]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.temporaryChat,
defaultPerms[PermissionTypes.TEMPORARY_CHAT]?.[Permissions.USE],
defaults.temporaryChat,
),
},
[PermissionTypes.RUN_CODE]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.runCode,
defaultPerms[PermissionTypes.RUN_CODE]?.[Permissions.USE],
defaults.runCode,
),
},
[PermissionTypes.WEB_SEARCH]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.webSearch,
defaultPerms[PermissionTypes.WEB_SEARCH]?.[Permissions.USE],
defaults.webSearch,
),
},
[PermissionTypes.PEOPLE_PICKER]: {
[Permissions.VIEW_USERS]: getPermissionValue(
loadedInterface.peoplePicker?.users,
defaultPerms[PermissionTypes.PEOPLE_PICKER]?.[Permissions.VIEW_USERS],
defaults.peoplePicker?.users,
),
[Permissions.VIEW_GROUPS]: getPermissionValue(
loadedInterface.peoplePicker?.groups,
defaultPerms[PermissionTypes.PEOPLE_PICKER]?.[Permissions.VIEW_GROUPS],
defaults.peoplePicker?.groups,
),
[Permissions.VIEW_ROLES]: getPermissionValue(
loadedInterface.peoplePicker?.roles,
defaultPerms[PermissionTypes.PEOPLE_PICKER]?.[Permissions.VIEW_ROLES],
defaults.peoplePicker?.roles,
),
},
[PermissionTypes.MARKETPLACE]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.marketplace?.use,
defaultPerms[PermissionTypes.MARKETPLACE]?.[Permissions.USE],
defaults.marketplace?.use,
),
},
[PermissionTypes.FILE_SEARCH]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.fileSearch,
defaultPerms[PermissionTypes.FILE_SEARCH]?.[Permissions.USE],
defaults.fileSearch,
),
},
[PermissionTypes.FILE_CITATIONS]: {
[Permissions.USE]: getPermissionValue(
loadedInterface.fileCitations,
defaultPerms[PermissionTypes.FILE_CITATIONS]?.[Permissions.USE],
defaults.fileCitations,
),
},
};
// Check and add each permission type if needed
for (const [permType, permissions] of Object.entries(allPermissions)) {
addPermissionIfNeeded(permType, permissions);
}
// Update permissions if any need updating
if (Object.keys(permissionsToUpdate).length > 0) {
await updateAccessPermissions(roleName, permissionsToUpdate, existingRole);
}
}
let i = 0;
const logSettings = () => {
// log interface object and model specs object (without list) for reference
logger.warn(`\`interface\` settings:\n${JSON.stringify(loadedInterface, null, 2)}`);
logger.warn(
`\`modelSpecs\` settings:\n${JSON.stringify(
{ ...(config?.modelSpecs ?? {}), list: undefined },
null,
2,
)}`,
);
};
// warn about config.modelSpecs.prioritize if true and presets are enabled, that default presets will conflict with prioritizing model specs.
if (config?.modelSpecs?.prioritize && loadedInterface.presets) {
logger.warn(
"Note: Prioritizing model specs can conflict with default presets if a default preset is set. It's recommended to disable presets from the interface or disable use of a default preset.",
);
i === 0 && i++;
}
// warn about config.modelSpecs.enforce if true and if any of these, endpointsMenu, modelSelect, presets, or parameters are enabled, that enforcing model specs can conflict with these options.
if (
config?.modelSpecs?.enforce &&
(loadedInterface.endpointsMenu ||
loadedInterface.modelSelect ||
loadedInterface.presets ||
loadedInterface.parameters)
) {
logger.warn(
"Note: Enforcing model specs can conflict with the interface options: endpointsMenu, modelSelect, presets, and parameters. It's recommended to disable these options from the interface or disable enforcing model specs.",
);
i === 0 && i++;
}
// warn if enforce is true and prioritize is not, that enforcing model specs without prioritizing them can lead to unexpected behavior.
if (config?.modelSpecs?.enforce && !config?.modelSpecs?.prioritize) {
logger.warn(
"Note: Enforcing model specs without prioritizing them can lead to unexpected behavior. It's recommended to enable prioritizing model specs if enforcing them.",
);
i === 0 && i++;
}
if (i > 0) {
logSettings();
}
return loadedInterface;
}
module.exports = { loadDefaultInterface };
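
To make the precedence explicit (explicit config, then role default, then schema default), a few worked calls of the helper in the removed loader, with made-up values:

```js
// Worked examples of getPermissionValue(configValue, roleDefault, schemaDefault):
getPermissionValue(undefined, true, false); // -> true  (role default used when config is unset)
getPermissionValue(false, true, true); // -> false (explicit librechat.yaml value always wins)
getPermissionValue(undefined, undefined, true); // -> true  (schema default as the last resort)
```
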

File diff suppressed because it is too large.

@ -1,6 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { normalizeEndpointName } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const { normalizeEndpointName } = require('~/server/utils');
const { logger } = require('~/config');
/**
* Sets up Model Specs from the config (`librechat.yaml`) file.

View file

@ -0,0 +1,132 @@
const fs = require('fs');
const path = require('path');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { zodToJsonSchema } = require('zod-to-json-schema');
const { Tools, ImageVisionTool } = require('librechat-data-provider');
const { Calculator } = require('@langchain/community/tools/calculator');
const { getToolkitKey, oaiToolkit, ytToolkit } = require('@librechat/api');
const { toolkits } = require('~/app/clients/tools/manifest');
/**
* Loads and formats tools from the specified tool directory.
*
* The directory is scanned for JavaScript files, excluding any files in the filter set.
* For each file, it attempts to load the file as a module and instantiate a class, if it's a subclass of `StructuredTool`.
* Each tool instance is then formatted to be compatible with the OpenAI Assistants API.
* Additionally, instances of LangChain Tools are included in the result.
*
* @param {object} params - The parameters for the function.
* @param {string} params.directory - The directory path where the tools are located.
* @param {Array<string>} [params.adminFilter=[]] - Array of admin-defined tool keys to exclude from loading.
* @param {Array<string>} [params.adminIncluded=[]] - Array of admin-defined tool keys to explicitly include when loading.
* @returns {Record<string, FunctionTool>} An object mapping each tool's function name to its OpenAI-compatible definition.
*/
function loadAndFormatTools({ directory, adminFilter = [], adminIncluded = [] }) {
const filter = new Set([...adminFilter]);
const included = new Set(adminIncluded);
const tools = [];
/* Structured Tools Directory */
const files = fs.readdirSync(directory);
if (included.size > 0 && adminFilter.length > 0) {
logger.warn(
'Both `includedTools` and `filteredTools` are defined; `filteredTools` will be ignored.',
);
}
for (const file of files) {
const filePath = path.join(directory, file);
if (!file.endsWith('.js') || (filter.has(file) && included.size === 0)) {
continue;
}
let ToolClass = null;
try {
ToolClass = require(filePath);
} catch (error) {
logger.error(`[loadAndFormatTools] Error loading tool from ${filePath}:`, error);
continue;
}
if (!ToolClass || !(ToolClass.prototype instanceof Tool)) {
continue;
}
let toolInstance = null;
try {
toolInstance = new ToolClass({ override: true });
} catch (error) {
logger.error(
`[loadAndFormatTools] Error initializing \`${file}\` tool; if it requires authentication, is the \`override\` field configured?`,
error,
);
continue;
}
if (!toolInstance) {
continue;
}
if (filter.has(toolInstance.name) && included.size === 0) {
continue;
}
if (included.size > 0 && !included.has(file) && !included.has(toolInstance.name)) {
continue;
}
const formattedTool = formatToOpenAIAssistantTool(toolInstance);
tools.push(formattedTool);
}
const basicToolInstances = [
new Calculator(),
...Object.values(oaiToolkit),
...Object.values(ytToolkit),
];
for (const toolInstance of basicToolInstances) {
const formattedTool = formatToOpenAIAssistantTool(toolInstance);
let toolName = formattedTool[Tools.function].name;
toolName = getToolkitKey({ toolkits, toolName }) ?? toolName;
if (filter.has(toolName) && included.size === 0) {
continue;
}
if (included.size > 0 && !included.has(toolName)) {
continue;
}
tools.push(formattedTool);
}
tools.push(ImageVisionTool);
return tools.reduce((map, tool) => {
map[tool.function.name] = tool;
return map;
}, {});
}
/**
* Converts a `StructuredTool` instance into a format that is compatible
* with OpenAI's ChatCompletionFunctions. It uses the `zodToJsonSchema`
* function to convert the schema of the `StructuredTool` into a JSON
* schema, which is then used as the parameters for the OpenAI function.
*
* @param {StructuredTool} tool - The StructuredTool to format.
* @returns {FunctionTool} The OpenAI Assistant Tool.
*/
function formatToOpenAIAssistantTool(tool) {
return {
type: Tools.function,
[Tools.function]: {
name: tool.name,
description: tool.description,
parameters: zodToJsonSchema(tool.schema),
},
};
}
module.exports = {
loadAndFormatTools,
};
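
A sketch of how the relocated module might be consumed at startup; the module path, config keys, and the `availableTools` destination are assumptions based on this PR's AppConfig changes:

```js
// Assumed consumer; exact paths and field names may differ.
const path = require('path');
const { loadAndFormatTools } = require('~/server/services/start/tools');

const availableTools = loadAndFormatTools({
  directory: path.join(__dirname, 'app', 'clients', 'tools', 'structured'), // assumed location
  adminFilter: config?.filteredTools,
  adminIncluded: config?.includedTools,
});
// `availableTools` would then be stored on the AppConfig for lookup by function name.
```
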

@ -1,5 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { removeNullishValues } = require('librechat-data-provider');
const { logger } = require('~/config');
/**
* Loads and maps the Cloudflare Turnstile configuration.