🛜 refactor: Streamline App Config Usage (#9234)

* WIP: app.locals refactoring

WIP: appConfig

fix: update memory configuration retrieval to use getAppConfig based on user role

fix: update comment for AppConfig interface to clarify purpose

🏷️ refactor: Update tests to use getAppConfig for endpoint configurations

ci: Update AppService tests to initialize app config instead of app.locals

ci: Integrate getAppConfig into remaining tests

refactor: Update multer storage destination to use promise-based getAppConfig and improve error handling in tests

refactor: Rename initializeAppConfig to setAppConfig and update related tests

ci: Mock getAppConfig in various tests to provide default configurations

refactor: Update convertMCPToolsToPlugins to use mcpManager for server configuration and adjust related tests

chore: rename `Config/getAppConfig` -> `Config/app`

fix: streamline OpenAI image tools configuration by removing direct appConfig dependency and using function parameters

chore: correct parameter documentation for imageOutputType in ToolService.js

refactor: remove `getCustomConfig` dependency in config route

refactor: update domain validation to use appConfig for allowed domains

refactor: use appConfig registration property

chore: remove app parameter from AppService invocation

refactor: update AppConfig interface to correct registration and turnstile configurations

refactor: remove getCustomConfig dependency and use getAppConfig in PluginController, multer, and MCP services

refactor: replace getCustomConfig with getAppConfig in STTService, TTSService, and related files

refactor: replace getCustomConfig with getAppConfig in Conversation and Message models, update tempChatRetention functions to use AppConfig type

refactor: update getAppConfig calls in Conversation and Message models to include user role for temporary chat expiration

ci: update related tests

refactor: update getAppConfig call in getCustomConfigSpeech to include user role

fix: update appConfig usage to access allowedDomains from actions instead of registration

refactor: enhance AppConfig to include fileStrategies and update related file strategy logic

refactor: update imports to use normalizeEndpointName from @librechat/api and remove redundant definitions

chore: remove deprecated unused RunManager

refactor: get balance config primarily from appConfig

refactor: remove customConfig dependency for appConfig and streamline loadConfigModels logic

refactor: remove getCustomConfig usage and use app config in file citations

refactor: consolidate endpoint loading logic into loadEndpoints function

refactor: update appConfig access to use endpoints structure across various services

refactor: implement custom endpoints configuration and streamline endpoint loading logic

refactor: update getAppConfig call to include user role parameter

refactor: streamline endpoint configuration and enhance appConfig usage across services

refactor: replace getMCPAuthMap with getUserMCPAuthMap and remove unused getCustomConfig file

refactor: add type annotation for loadedEndpoints in loadEndpoints function

refactor: move /services/Files/images/parse to TS API

chore: add missing FILE_CITATIONS permission to IRole interface

refactor: restructure toolkits to TS API

refactor: separate manifest logic into its own module

refactor: consolidate tool loading logic into a new tools module used at startup

refactor: move interface config logic to TS API

refactor: migrate checkEmailConfig to TypeScript and update imports

refactor: add FunctionTool interface and availableTools to AppConfig

refactor: decouple caching and DB operations from AppService, making them part of the consolidated `getAppConfig`

WIP: fix tests

* fix: rebase conflicts

* refactor: remove app.locals references

* refactor: replace getBalanceConfig with getAppConfig in various strategies and middleware

* refactor: replace appConfig?.balance with getBalanceConfig in various controllers and clients

* test: add balance configuration to titleConvo method in AgentClient tests

* chore: remove unused `openai-chat-tokens` package

* chore: remove unused imports in initializeMCPs.js

* refactor: update balance configuration to use getAppConfig instead of getBalanceConfig

* refactor: integrate configMiddleware for centralized configuration handling (a sketch of this pattern follows the commit list)

* refactor: optimize email domain validation by removing unnecessary async calls

* refactor: simplify multer storage configuration by removing async calls

* refactor: reorder imports for better readability in user.js

* refactor: replace getAppConfig calls with req.config for improved performance

* chore: replace getAppConfig calls with req.config in tests for centralized configuration handling

* chore: remove unused override config

* refactor: add configMiddleware to endpoint route and replace getAppConfig with req.config

* chore: remove customConfig parameter from TTSService constructor

* refactor: pass appConfig from request to processFileCitations for improved configuration handling

* refactor: remove configMiddleware from endpoint route and retrieve appConfig directly in getEndpointsConfig if not in `req.config`

* test: add mockAppConfig to processFileCitations tests for improved configuration handling

* fix: pass req.config to hasCustomUserVars and call without await after synchronous refactor

* fix: type safety in useExportConversation

* refactor: retrieve appConfig via getAppConfig in PluginController and remove configMiddleware from the plugins route, so the config is not fetched when plugins are already cached

* chore: change `MongoUser` typedef to `IUser`

* fix: Add `user` and `config` fields to ServerRequest and update JSDoc type annotations from Express.Request to ServerRequest

* fix: remove unused setAppConfig mock from Server configuration tests
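
For orientation, the configMiddleware / req.config pattern described in the commits above amounts to resolving the app configuration once per request and letting downstream code read it from the request object. The sketch below is illustrative only: the middleware body, the require path, and the example route are assumptions, not the exact code shipped in this PR.

const express = require('express');
const { getAppConfig } = require('~/server/services/Config'); // require path assumed

/** Resolve the app config once per request and expose it as `req.config`. */
async function configMiddleware(req, res, next) {
  try {
    req.config = await getAppConfig({ role: req.user?.role });
    next();
  } catch (err) {
    next(err);
  }
}

const app = express();
app.use(configMiddleware);

// Downstream handlers read `req.config` instead of calling getAppConfig again.
app.get('/api/example', (req, res) => {
  res.json({ endpoints: Object.keys(req.config?.endpoints ?? {}) });
});

Routes that can be served from a cache (such as the plugins route mentioned above) skip the middleware and call getAppConfig lazily only when the cache misses.
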
Danny Avila 2025-08-26 12:10:18 -04:00 committed by GitHub
parent e1ad235f17
commit 9a210971f5
210 changed files with 4102 additions and 3465 deletions


@ -0,0 +1,68 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys } = require('librechat-data-provider');
const AppService = require('~/server/services/AppService');
const { setCachedTools } = require('./getCachedTools');
const getLogStores = require('~/cache/getLogStores');
/**
* Get the app configuration based on user context
* @param {Object} [options]
* @param {string} [options.role] - User role for role-based config
* @param {boolean} [options.refresh] - Force refresh the cache
* @returns {Promise<AppConfig>}
*/
async function getAppConfig(options = {}) {
const { role, refresh } = options;
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cacheKey = role ? `${CacheKeys.APP_CONFIG}:${role}` : CacheKeys.APP_CONFIG;
if (!refresh) {
const cached = await cache.get(cacheKey);
if (cached) {
return cached;
}
}
let baseConfig = await cache.get(CacheKeys.APP_CONFIG);
if (!baseConfig) {
logger.info('[getAppConfig] App configuration not initialized. Initializing AppService...');
baseConfig = await AppService();
if (!baseConfig) {
throw new Error('Failed to initialize app configuration through AppService.');
}
if (baseConfig.availableTools) {
await setCachedTools(baseConfig.availableTools, { isGlobal: true });
}
await cache.set(CacheKeys.APP_CONFIG, baseConfig);
}
// For now, return the base config
// In the future, this is where we'll apply role-based modifications
if (role) {
// TODO: Apply role-based config modifications
// const roleConfig = await applyRoleBasedConfig(baseConfig, role);
// await cache.set(cacheKey, roleConfig);
// return roleConfig;
}
return baseConfig;
}
/**
* Clear the app configuration cache
* @returns {Promise<boolean>}
*/
async function clearAppConfigCache() {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cacheKey = CacheKeys.APP_CONFIG;
return await cache.delete(cacheKey);
}
module.exports = {
getAppConfig,
clearAppConfigCache,
};
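
A minimal usage sketch for this module follows; the require path, the handler, and the `registration` field read here are assumptions based on the surrounding changes, not code from this commit.

const { getAppConfig, clearAppConfigCache } = require('~/server/services/Config');

// In a route handler: prefer the per-request config set by configMiddleware,
// falling back to resolving it directly when the middleware was not applied.
async function exampleHandler(req, res) {
  const appConfig = req.config ?? (await getAppConfig({ role: req.user?.role }));
  res.json({ registrationEnabled: Boolean(appConfig?.registration) });
}

// After editing librechat.yaml, the cached config can be rebuilt explicitly.
async function reloadAppConfig() {
  await clearAppConfigCache();
  return getAppConfig({ refresh: true });
}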


@ -1,69 +0,0 @@
const { isEnabled } = require('@librechat/api');
const { CacheKeys, EModelEndpoint } = require('librechat-data-provider');
const { normalizeEndpointName } = require('~/server/utils');
const loadCustomConfig = require('./loadCustomConfig');
const getLogStores = require('~/cache/getLogStores');
/**
* Retrieves the configuration object
* @function getCustomConfig
* @returns {Promise<TCustomConfig | null>}
* */
async function getCustomConfig() {
const cache = getLogStores(CacheKeys.STATIC_CONFIG);
return (await cache.get(CacheKeys.LIBRECHAT_YAML_CONFIG)) || (await loadCustomConfig());
}
/**
* Retrieves the configuration object
* @function getBalanceConfig
* @returns {Promise<TCustomConfig['balance'] | null>}
* */
async function getBalanceConfig() {
const isLegacyEnabled = isEnabled(process.env.CHECK_BALANCE);
const startBalance = process.env.START_BALANCE;
/** @type {TCustomConfig['balance']} */
const config = {
enabled: isLegacyEnabled,
startBalance: startBalance != null && startBalance ? parseInt(startBalance, 10) : undefined,
};
const customConfig = await getCustomConfig();
if (!customConfig) {
return config;
}
return { ...config, ...(customConfig?.['balance'] ?? {}) };
}
/**
*
* @param {string | EModelEndpoint} endpoint
* @returns {Promise<TEndpoint | undefined>}
*/
const getCustomEndpointConfig = async (endpoint) => {
const customConfig = await getCustomConfig();
if (!customConfig) {
throw new Error(`Config not found for the ${endpoint} custom endpoint.`);
}
const { endpoints = {} } = customConfig;
const customEndpoints = endpoints[EModelEndpoint.custom] ?? [];
return customEndpoints.find(
(endpointConfig) => normalizeEndpointName(endpointConfig.name) === endpoint,
);
};
/**
* @returns {Promise<boolean>}
*/
async function hasCustomUserVars() {
const customConfig = await getCustomConfig();
const mcpServers = customConfig?.mcpServers;
return Object.values(mcpServers ?? {}).some((server) => server.customUserVars);
}
module.exports = {
getCustomConfig,
getBalanceConfig,
hasCustomUserVars,
getCustomEndpointConfig,
};
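
With getCustomConfig removed, the balance fallback above has to be derived from the consolidated app config instead. The helper below is a rough reconstruction for illustration; the function name, merge order, and exact fields are assumptions rather than this PR's implementation.

const { isEnabled } = require('@librechat/api');

/** Merge legacy env-based balance settings with the `balance` block on appConfig. */
function getBalanceConfig(appConfig) {
  const startBalance = process.env.START_BALANCE;
  const legacy = {
    enabled: isEnabled(process.env.CHECK_BALANCE),
    startBalance: startBalance ? parseInt(startBalance, 10) : undefined,
  };
  return { ...legacy, ...(appConfig?.balance ?? {}) };
}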


@ -1,3 +1,4 @@
const { loadCustomEndpointsConfig } = require('@librechat/api');
const {
CacheKeys,
EModelEndpoint,
@ -6,8 +7,8 @@ const {
defaultAgentCapabilities,
} = require('librechat-data-provider');
const loadDefaultEndpointsConfig = require('./loadDefaultEConfig');
const loadConfigEndpoints = require('./loadConfigEndpoints');
const getLogStores = require('~/cache/getLogStores');
const { getAppConfig } = require('./app');
/**
*
@ -21,14 +22,36 @@ async function getEndpointsConfig(req) {
return cachedEndpointsConfig;
}
const defaultEndpointsConfig = await loadDefaultEndpointsConfig(req);
const customConfigEndpoints = await loadConfigEndpoints(req);
const appConfig = req.config ?? (await getAppConfig({ role: req.user?.role }));
const defaultEndpointsConfig = await loadDefaultEndpointsConfig(appConfig);
const customEndpointsConfig = loadCustomEndpointsConfig(appConfig?.endpoints?.custom);
/** @type {TEndpointsConfig} */
const mergedConfig = { ...defaultEndpointsConfig, ...customConfigEndpoints };
if (mergedConfig[EModelEndpoint.assistants] && req.app.locals?.[EModelEndpoint.assistants]) {
const mergedConfig = {
...defaultEndpointsConfig,
...customEndpointsConfig,
};
if (appConfig.endpoints?.[EModelEndpoint.azureOpenAI]) {
/** @type {Omit<TConfig, 'order'>} */
mergedConfig[EModelEndpoint.azureOpenAI] = {
userProvide: false,
};
}
if (appConfig.endpoints?.[EModelEndpoint.azureOpenAI]?.assistants) {
/** @type {Omit<TConfig, 'order'>} */
mergedConfig[EModelEndpoint.azureAssistants] = {
userProvide: false,
};
}
if (
mergedConfig[EModelEndpoint.assistants] &&
appConfig?.endpoints?.[EModelEndpoint.assistants]
) {
const { disableBuilder, retrievalModels, capabilities, version, ..._rest } =
req.app.locals[EModelEndpoint.assistants];
appConfig.endpoints[EModelEndpoint.assistants];
mergedConfig[EModelEndpoint.assistants] = {
...mergedConfig[EModelEndpoint.assistants],
@ -38,9 +61,9 @@ async function getEndpointsConfig(req) {
capabilities,
};
}
if (mergedConfig[EModelEndpoint.agents] && req.app.locals?.[EModelEndpoint.agents]) {
if (mergedConfig[EModelEndpoint.agents] && appConfig?.endpoints?.[EModelEndpoint.agents]) {
const { disableBuilder, capabilities, allowedProviders, ..._rest } =
req.app.locals[EModelEndpoint.agents];
appConfig.endpoints[EModelEndpoint.agents];
mergedConfig[EModelEndpoint.agents] = {
...mergedConfig[EModelEndpoint.agents],
@ -52,10 +75,10 @@ async function getEndpointsConfig(req) {
if (
mergedConfig[EModelEndpoint.azureAssistants] &&
req.app.locals?.[EModelEndpoint.azureAssistants]
appConfig?.endpoints?.[EModelEndpoint.azureAssistants]
) {
const { disableBuilder, retrievalModels, capabilities, version, ..._rest } =
req.app.locals[EModelEndpoint.azureAssistants];
appConfig.endpoints[EModelEndpoint.azureAssistants];
mergedConfig[EModelEndpoint.azureAssistants] = {
...mergedConfig[EModelEndpoint.azureAssistants],
@ -66,8 +89,8 @@ async function getEndpointsConfig(req) {
};
}
if (mergedConfig[EModelEndpoint.bedrock] && req.app.locals?.[EModelEndpoint.bedrock]) {
const { availableRegions } = req.app.locals[EModelEndpoint.bedrock];
if (mergedConfig[EModelEndpoint.bedrock] && appConfig?.endpoints?.[EModelEndpoint.bedrock]) {
const { availableRegions } = appConfig.endpoints[EModelEndpoint.bedrock];
mergedConfig[EModelEndpoint.bedrock] = {
...mergedConfig[EModelEndpoint.bedrock],
availableRegions,
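
For reference, an appConfig.endpoints object consistent with the fields destructured above might look roughly like the following; every value is an illustrative assumption, not a recommended configuration.

const appConfig = {
  endpoints: {
    azureOpenAI: { plugins: true, assistants: true, modelNames: ['gpt-4o'] },
    assistants: { disableBuilder: false, retrievalModels: ['gpt-4o'], capabilities: [], version: 2 },
    agents: { disableBuilder: false, capabilities: [], allowedProviders: [] },
    bedrock: { availableRegions: ['us-east-1', 'us-west-2'] },
    custom: [
      { name: 'CustomModel', baseURL: '${BASE_URL}', apiKey: '${API_KEY}', models: { fetch: true } },
    ],
  },
};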


@ -1,12 +1,11 @@
const appConfig = require('./app');
const { config } = require('./EndpointService');
const getCachedTools = require('./getCachedTools');
const getCustomConfig = require('./getCustomConfig');
const mcpToolsCache = require('./mcpToolsCache');
const loadCustomConfig = require('./loadCustomConfig');
const loadConfigModels = require('./loadConfigModels');
const loadDefaultModels = require('./loadDefaultModels');
const getEndpointsConfig = require('./getEndpointsConfig');
const loadOverrideConfig = require('./loadOverrideConfig');
const loadAsyncEndpoints = require('./loadAsyncEndpoints');
module.exports = {
@ -14,10 +13,9 @@ module.exports = {
loadCustomConfig,
loadConfigModels,
loadDefaultModels,
loadOverrideConfig,
loadAsyncEndpoints,
...appConfig,
...getCachedTools,
...getCustomConfig,
...mcpToolsCache,
...getEndpointsConfig,
};


@ -1,16 +1,16 @@
const path = require('path');
const { logger } = require('@librechat/data-schemas');
const { loadServiceKey, isUserProvided } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const { loadServiceKey, isUserProvided } = require('@librechat/api');
const { config } = require('./EndpointService');
const { openAIApiKey, azureOpenAIApiKey, useAzurePlugins, userProvidedOpenAI, googleKey } = config;
/**
* Load async endpoints and return a configuration object
* @param {Express.Request} req - The request object
* @param {AppConfig} [appConfig] - The app configuration object
*/
async function loadAsyncEndpoints(req) {
async function loadAsyncEndpoints(appConfig) {
let serviceKey, googleUserProvides;
/** Check if GOOGLE_KEY is provided at all(including 'user_provided') */
@ -34,7 +34,7 @@ async function loadAsyncEndpoints(req) {
const google = serviceKey || isGoogleKeyProvided ? { userProvide: googleUserProvides } : false;
const useAzure = req.app.locals[EModelEndpoint.azureOpenAI]?.plugins;
const useAzure = !!appConfig?.endpoints?.[EModelEndpoint.azureOpenAI]?.plugins;
const gptPlugins =
useAzure || openAIApiKey || azureOpenAIApiKey
? {


@ -1,73 +0,0 @@
const { EModelEndpoint, extractEnvVariable } = require('librechat-data-provider');
const { isUserProvided, normalizeEndpointName } = require('~/server/utils');
const { getCustomConfig } = require('./getCustomConfig');
/**
* Load config endpoints from the cached configuration object
* @param {Express.Request} req - The request object
* @returns {Promise<TEndpointsConfig>} A promise that resolves to an object containing the endpoints configuration
*/
async function loadConfigEndpoints(req) {
const customConfig = await getCustomConfig();
if (!customConfig) {
return {};
}
const { endpoints = {} } = customConfig ?? {};
const endpointsConfig = {};
if (Array.isArray(endpoints[EModelEndpoint.custom])) {
const customEndpoints = endpoints[EModelEndpoint.custom].filter(
(endpoint) =>
endpoint.baseURL &&
endpoint.apiKey &&
endpoint.name &&
endpoint.models &&
(endpoint.models.fetch || endpoint.models.default),
);
for (let i = 0; i < customEndpoints.length; i++) {
const endpoint = customEndpoints[i];
const {
baseURL,
apiKey,
name: configName,
iconURL,
modelDisplayLabel,
customParams,
} = endpoint;
const name = normalizeEndpointName(configName);
const resolvedApiKey = extractEnvVariable(apiKey);
const resolvedBaseURL = extractEnvVariable(baseURL);
endpointsConfig[name] = {
type: EModelEndpoint.custom,
userProvide: isUserProvided(resolvedApiKey),
userProvideURL: isUserProvided(resolvedBaseURL),
modelDisplayLabel,
iconURL,
customParams,
};
}
}
if (req.app.locals[EModelEndpoint.azureOpenAI]) {
/** @type {Omit<TConfig, 'order'>} */
endpointsConfig[EModelEndpoint.azureOpenAI] = {
userProvide: false,
};
}
if (req.app.locals[EModelEndpoint.azureOpenAI]?.assistants) {
/** @type {Omit<TConfig, 'order'>} */
endpointsConfig[EModelEndpoint.azureAssistants] = {
userProvide: false,
};
}
return endpointsConfig;
}
module.exports = loadConfigEndpoints;


@ -1,43 +1,39 @@
const { isUserProvided, normalizeEndpointName } = require('@librechat/api');
const { EModelEndpoint, extractEnvVariable } = require('librechat-data-provider');
const { isUserProvided, normalizeEndpointName } = require('~/server/utils');
const { fetchModels } = require('~/server/services/ModelService');
const { getCustomConfig } = require('./getCustomConfig');
const { getAppConfig } = require('./app');
/**
* Load config endpoints from the cached configuration object
* @function loadConfigModels
* @param {Express.Request} req - The Express request object.
* @param {ServerRequest} req - The Express request object.
*/
async function loadConfigModels(req) {
const customConfig = await getCustomConfig();
if (!customConfig) {
const appConfig = await getAppConfig({ role: req.user?.role });
if (!appConfig) {
return {};
}
const { endpoints = {} } = customConfig ?? {};
const modelsConfig = {};
const azureEndpoint = endpoints[EModelEndpoint.azureOpenAI];
const azureConfig = req.app.locals[EModelEndpoint.azureOpenAI];
const azureConfig = appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
const { modelNames } = azureConfig ?? {};
if (modelNames && azureEndpoint) {
if (modelNames && azureConfig) {
modelsConfig[EModelEndpoint.azureOpenAI] = modelNames;
}
if (modelNames && azureEndpoint && azureEndpoint.plugins) {
if (modelNames && azureConfig && azureConfig.plugins) {
modelsConfig[EModelEndpoint.gptPlugins] = modelNames;
}
if (azureEndpoint?.assistants && azureConfig.assistantModels) {
if (azureConfig?.assistants && azureConfig.assistantModels) {
modelsConfig[EModelEndpoint.azureAssistants] = azureConfig.assistantModels;
}
if (!Array.isArray(endpoints[EModelEndpoint.custom])) {
if (!Array.isArray(appConfig.endpoints?.[EModelEndpoint.custom])) {
return modelsConfig;
}
const customEndpoints = endpoints[EModelEndpoint.custom].filter(
const customEndpoints = appConfig.endpoints[EModelEndpoint.custom].filter(
(endpoint) =>
endpoint.baseURL &&
endpoint.apiKey &&


@ -1,9 +1,9 @@
const { fetchModels } = require('~/server/services/ModelService');
const { getCustomConfig } = require('./getCustomConfig');
const loadConfigModels = require('./loadConfigModels');
const { getAppConfig } = require('./app');
jest.mock('~/server/services/ModelService');
jest.mock('./getCustomConfig');
jest.mock('./app');
const exampleConfig = {
endpoints: {
@ -60,7 +60,7 @@ const exampleConfig = {
};
describe('loadConfigModels', () => {
const mockRequest = { app: { locals: {} }, user: { id: 'testUserId' } };
const mockRequest = { user: { id: 'testUserId' } };
const originalEnv = process.env;
@ -68,6 +68,9 @@ describe('loadConfigModels', () => {
jest.resetAllMocks();
jest.resetModules();
process.env = { ...originalEnv };
// Default mock for getAppConfig
getAppConfig.mockResolvedValue({});
});
afterEach(() => {
@ -75,18 +78,15 @@ describe('loadConfigModels', () => {
});
it('should return an empty object if customConfig is null', async () => {
getCustomConfig.mockResolvedValue(null);
getAppConfig.mockResolvedValue(null);
const result = await loadConfigModels(mockRequest);
expect(result).toEqual({});
});
it('handles azure models and endpoint correctly', async () => {
mockRequest.app.locals.azureOpenAI = { modelNames: ['model1', 'model2'] };
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
azureOpenAI: {
models: ['model1', 'model2'],
},
azureOpenAI: { modelNames: ['model1', 'model2'] },
},
});
@ -97,18 +97,16 @@ describe('loadConfigModels', () => {
it('fetches custom models based on the unique key', async () => {
process.env.BASE_URL = 'http://example.com';
process.env.API_KEY = 'some-api-key';
const customEndpoints = {
custom: [
{
baseURL: '${BASE_URL}',
apiKey: '${API_KEY}',
name: 'CustomModel',
models: { fetch: true },
},
],
};
const customEndpoints = [
{
baseURL: '${BASE_URL}',
apiKey: '${API_KEY}',
name: 'CustomModel',
models: { fetch: true },
},
];
getCustomConfig.mockResolvedValue({ endpoints: customEndpoints });
getAppConfig.mockResolvedValue({ endpoints: { custom: customEndpoints } });
fetchModels.mockResolvedValue(['customModel1', 'customModel2']);
const result = await loadConfigModels(mockRequest);
@ -117,7 +115,7 @@ describe('loadConfigModels', () => {
});
it('correctly associates models to names using unique keys', async () => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: [
{
@ -146,7 +144,7 @@ describe('loadConfigModels', () => {
it('correctly handles multiple endpoints with the same baseURL but different apiKeys', async () => {
// Mock the custom configuration to simulate the user's scenario
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: [
{
@ -210,7 +208,7 @@ describe('loadConfigModels', () => {
process.env.MY_OPENROUTER_API_KEY = 'actual_openrouter_api_key';
// Setup custom configuration with specific API keys for Mistral and OpenRouter
// and "user_provided" for groq and Ollama, indicating no fetch for the latter two
getCustomConfig.mockResolvedValue(exampleConfig);
getAppConfig.mockResolvedValue(exampleConfig);
// Assuming fetchModels would be called only for Mistral and OpenRouter
fetchModels.mockImplementation(({ name }) => {
@ -273,7 +271,7 @@ describe('loadConfigModels', () => {
});
it('falls back to default models if fetching returns an empty array', async () => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: [
{
@ -306,7 +304,7 @@ describe('loadConfigModels', () => {
});
it('falls back to default models if fetching returns a falsy value', async () => {
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: [
{
@ -367,7 +365,7 @@ describe('loadConfigModels', () => {
},
];
getCustomConfig.mockResolvedValue({
getAppConfig.mockResolvedValue({
endpoints: {
custom: testCases,
},


@ -4,11 +4,11 @@ const { config } = require('./EndpointService');
/**
* Load async endpoints and return a configuration object
* @param {Express.Request} req - The request object
* @param {AppConfig} appConfig - The app configuration object
* @returns {Promise<Object.<string, EndpointWithOrder>>} An object whose keys are endpoint names and values are objects that contain the endpoint configuration and an order.
*/
async function loadDefaultEndpointsConfig(req) {
const { google, gptPlugins } = await loadAsyncEndpoints(req);
async function loadDefaultEndpointsConfig(appConfig) {
const { google, gptPlugins } = await loadAsyncEndpoints(appConfig);
const { assistants, azureAssistants, azureOpenAI, chatGPTBrowser } = config;
const enabledEndpoints = getEnabledEndpoints();


@ -11,7 +11,7 @@ const {
* Loads the default models for the application.
* @async
* @function
* @param {Express.Request} req - The Express request object.
* @param {ServerRequest} req - The Express request object.
*/
async function loadDefaultModels(req) {
try {


@ -1,6 +0,0 @@
// fetch some remote config
async function loadOverrideConfig() {
return false;
}
module.exports = loadOverrideConfig;