🛜 refactor: Streamline App Config Usage (#9234)
* WIP: app.locals refactoring
WIP: appConfig
fix: update memory configuration retrieval to use getAppConfig based on user role
fix: update comment for AppConfig interface to clarify purpose
🏷️ refactor: Update tests to use getAppConfig for endpoint configurations
ci: Update AppService tests to initialize app config instead of app.locals
ci: Integrate getAppConfig into remaining tests
refactor: Update multer storage destination to use promise-based getAppConfig and improve error handling in tests
refactor: Rename initializeAppConfig to setAppConfig and update related tests
ci: Mock getAppConfig in various tests to provide default configurations
refactor: Update convertMCPToolsToPlugins to use mcpManager for server configuration and adjust related tests
chore: rename `Config/getAppConfig` -> `Config/app`
fix: streamline OpenAI image tools configuration by removing direct appConfig dependency and using function parameters
chore: correct parameter documentation for imageOutputType in ToolService.js
refactor: remove `getCustomConfig` dependency in config route
refactor: update domain validation to use appConfig for allowed domains
refactor: use appConfig registration property
chore: remove app parameter from AppService invocation
refactor: update AppConfig interface to correct registration and turnstile configurations
refactor: remove getCustomConfig dependency and use getAppConfig in PluginController, multer, and MCP services
refactor: replace getCustomConfig with getAppConfig in STTService, TTSService, and related files
refactor: replace getCustomConfig with getAppConfig in Conversation and Message models, update tempChatRetention functions to use AppConfig type
refactor: update getAppConfig calls in Conversation and Message models to include user role for temporary chat expiration
ci: update related tests
refactor: update getAppConfig call in getCustomConfigSpeech to include user role
fix: update appConfig usage to access allowedDomains from actions instead of registration
refactor: enhance AppConfig to include fileStrategies and update related file strategy logic
refactor: update imports to use normalizeEndpointName from @librechat/api and remove redundant definitions
chore: remove deprecated unused RunManager
refactor: get balance config primarily from appConfig
refactor: remove customConfig dependency for appConfig and streamline loadConfigModels logic
refactor: remove getCustomConfig usage and use app config in file citations
refactor: consolidate endpoint loading logic into loadEndpoints function
refactor: update appConfig access to use endpoints structure across various services
refactor: implement custom endpoints configuration and streamline endpoint loading logic
refactor: update getAppConfig call to include user role parameter
refactor: streamline endpoint configuration and enhance appConfig usage across services
refactor: replace getMCPAuthMap with getUserMCPAuthMap and remove unused getCustomConfig file
refactor: add type annotation for loadedEndpoints in loadEndpoints function
refactor: move /services/Files/images/parse to TS API
chore: add missing FILE_CITATIONS permission to IRole interface
refactor: restructure toolkits to TS API
refactor: separate manifest logic into its own module
refactor: consolidate tool loading logic into a new tools module for startup logic
refactor: move interface config logic to TS API
refactor: migrate checkEmailConfig to TypeScript and update imports
refactor: add FunctionTool interface and availableTools to AppConfig
refactor: decouple caching and DB operations from AppService, make part of consolidated `getAppConfig`
WIP: fix tests
* fix: rebase conflicts
* refactor: remove app.locals references
* refactor: replace getBalanceConfig with getAppConfig in various strategies and middleware
* refactor: replace appConfig?.balance with getBalanceConfig in various controllers and clients
* test: add balance configuration to titleConvo method in AgentClient tests
* chore: remove unused `openai-chat-tokens` package
* chore: remove unused imports in initializeMCPs.js
* refactor: update balance configuration to use getAppConfig instead of getBalanceConfig
* refactor: integrate configMiddleware for centralized configuration handling
* refactor: optimize email domain validation by removing unnecessary async calls
* refactor: simplify multer storage configuration by removing async calls
* refactor: reorder imports for better readability in user.js
* refactor: replace getAppConfig calls with req.config for improved performance
* chore: replace getAppConfig calls with req.config in tests for centralized configuration handling
* chore: remove unused override config
* refactor: add configMiddleware to endpoint route and replace getAppConfig with req.config
* chore: remove customConfig parameter from TTSService constructor
* refactor: pass appConfig from request to processFileCitations for improved configuration handling
* refactor: remove configMiddleware from endpoint route and retrieve appConfig directly in getEndpointsConfig if not in `req.config`
* test: add mockAppConfig to processFileCitations tests for improved configuration handling
* fix: pass req.config to hasCustomUserVars and call without await after synchronous refactor
* fix: type safety in useExportConversation
* refactor: retrieve appConfig using getAppConfig in PluginController and remove configMiddleware from plugins route, to avoid always retrieving when plugins are cached
* chore: change `MongoUser` typedef to `IUser`
* fix: Add `user` and `config` fields to ServerRequest and update JSDoc type annotations from Express.Request to ServerRequest
* fix: remove unused setAppConfig mock from Server configuration tests
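
Taken together, the commits above converge on one request-scoped config object. A minimal sketch of that pattern, assuming a hypothetical createConfigMiddleware wrapper — the names getAppConfig, req.config, and the endpoints/streamRate lookups come from the commits and the diff below; everything else here is illustrative, not the actual implementation:

// Sketch only: resolve the AppConfig once per request and expose it as `req.config`,
// so services read `appConfig.endpoints?.[endpoint]` instead of `req.app.locals`.
function createConfigMiddleware(getAppConfig) {
  return async function configMiddleware(req, res, next) {
    try {
      // Role-aware lookup; `getAppConfig` is assumed to handle caching internally.
      req.config = await getAppConfig({ role: req.user?.role });
      next();
    } catch (err) {
      next(err);
    }
  };
}

// Example downstream read after the middleware has run: an endpoint-specific value
// falling back to the `all` block, mirroring the streamRate handling in the diff.
function getStreamRate(req, endpoint) {
  const appConfig = req.config;
  return appConfig?.endpoints?.[endpoint]?.streamRate ?? appConfig?.endpoints?.all?.streamRate;
}

module.exports = { createConfigMiddleware, getStreamRate };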
parent e1ad235f17
commit 9a210971f5
210 changed files with 4102 additions and 3465 deletions
@@ -49,6 +49,7 @@ const initializeAgent = async ({
   allowedProviders,
   isInitialAgent = false,
 }) => {
+  const appConfig = req.config;
   if (
     isAgentsEndpoint(endpointOption?.endpoint) &&
     allowedProviders.size > 0 &&
@@ -90,10 +91,11 @@ const initializeAgent = async ({
   const { attachments, tool_resources } = await primeResources({
     req,
     getFiles,
+    appConfig,
+    agentId: agent.id,
     attachments: currentFiles,
     tool_resources: agent.tool_resources,
     requestFileSet: new Set(requestFiles?.map((file) => file.file_id)),
-    agentId: agent.id,
   });

   const provider = agent.provider;
@@ -112,7 +114,7 @@ const initializeAgent = async ({
   })) ?? {};

   agent.endpoint = provider;
-  const { getOptions, overrideProvider } = await getProviderConfig(provider);
+  const { getOptions, overrideProvider } = getProviderConfig({ provider, appConfig });
   if (overrideProvider !== agent.provider) {
     agent.provider = overrideProvider;
   }
@@ -1,6 +1,6 @@
 const { logger } = require('@librechat/data-schemas');
-const { validateAgentModel } = require('@librechat/api');
 const { createContentAggregator } = require('@librechat/agents');
+const { validateAgentModel, getCustomEndpointConfig } = require('@librechat/api');
 const {
   Constants,
   EModelEndpoint,
@@ -13,7 +13,6 @@ const {
 } = require('~/server/controllers/agents/callbacks');
 const { initializeAgent } = require('~/server/services/Endpoints/agents/agent');
 const { getModelsConfig } = require('~/server/controllers/ModelController');
-const { getCustomEndpointConfig } = require('~/server/services/Config');
 const { loadAgentTools } = require('~/server/services/ToolService');
 const AgentClient = require('~/server/controllers/agents/client');
 const { getAgent } = require('~/models/Agent');
@@ -58,6 +57,7 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
   if (!endpointOption) {
     throw new Error('Endpoint option not provided');
   }
+  const appConfig = req.config;

   // TODO: use endpointOption to determine options/modelOptions
   /** @type {Array<UsageMetadata>} */
@@ -97,8 +97,7 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
   }

   const agentConfigs = new Map();
   /** @type {Set<string>} */
-  const allowedProviders = new Set(req?.app?.locals?.[EModelEndpoint.agents]?.allowedProviders);
+  const allowedProviders = new Set(appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders);

   const loadTools = createToolLoader(signal);
   /** @type {Array<MongoFile>} */
@@ -158,10 +157,13 @@ const initializeClient = async ({ req, res, signal, endpointOption }) => {
     }
   }

-  let endpointConfig = req.app.locals[primaryConfig.endpoint];
+  let endpointConfig = appConfig.endpoints?.[primaryConfig.endpoint];
   if (!isAgentsEndpoint(primaryConfig.endpoint) && !endpointConfig) {
     try {
-      endpointConfig = await getCustomEndpointConfig(primaryConfig.endpoint);
+      endpointConfig = getCustomEndpointConfig({
+        endpoint: primaryConfig.endpoint,
+        appConfig,
+      });
     } catch (err) {
       logger.error(
         '[api/server/controllers/agents/client.js #titleConvo] Error getting custom endpoint config',
@@ -4,6 +4,7 @@ const { getLLMConfig } = require('~/server/services/Endpoints/anthropic/llm');
 const AnthropicClient = require('~/app/clients/AnthropicClient');

 const initializeClient = async ({ req, res, endpointOption, overrideModel, optionsOnly }) => {
+  const appConfig = req.config;
   const { ANTHROPIC_API_KEY, ANTHROPIC_REVERSE_PROXY, PROXY } = process.env;
   const expiresAt = req.body.key;
   const isUserProvided = ANTHROPIC_API_KEY === 'user_provided';
@@ -23,15 +24,14 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
   let clientOptions = {};

   /** @type {undefined | TBaseEndpoint} */
-  const anthropicConfig = req.app.locals[EModelEndpoint.anthropic];
+  const anthropicConfig = appConfig.endpoints?.[EModelEndpoint.anthropic];

   if (anthropicConfig) {
     clientOptions.streamRate = anthropicConfig.streamRate;
     clientOptions.titleModel = anthropicConfig.titleModel;
   }

   /** @type {undefined | TBaseEndpoint} */
-  const allConfig = req.app.locals.all;
+  const allConfig = appConfig.endpoints?.all;
   if (allConfig) {
     clientOptions.streamRate = allConfig.streamRate;
   }
@@ -48,6 +48,7 @@ class Files {
 }

 const initializeClient = async ({ req, res, version, endpointOption, initAppClient = false }) => {
+  const appConfig = req.config;
   const { PROXY, OPENAI_ORGANIZATION, AZURE_ASSISTANTS_API_KEY, AZURE_ASSISTANTS_BASE_URL } =
     process.env;

@@ -81,7 +82,7 @@ const initializeClient = async ({ req, res, version, endpointOption, initAppClie
   };

   /** @type {TAzureConfig | undefined} */
-  const azureConfig = req.app.locals[EModelEndpoint.azureOpenAI];
+  const azureConfig = appConfig.endpoints?.[EModelEndpoint.azureOpenAI];

   /** @type {AzureOptions | undefined} */
   let azureOptions;
@@ -1,6 +1,6 @@
 // const OpenAI = require('openai');
 const { ProxyAgent } = require('undici');
-const { ErrorTypes } = require('librechat-data-provider');
+const { ErrorTypes, EModelEndpoint } = require('librechat-data-provider');
 const { getUserKey, getUserKeyExpiry, getUserKeyValues } = require('~/server/services/UserService');
 const initializeClient = require('./initialize');
 // const { OpenAIClient } = require('~/app');
@@ -12,6 +12,8 @@ jest.mock('~/server/services/UserService', () => ({
   checkUserKeyExpiry: jest.requireActual('~/server/services/UserService').checkUserKeyExpiry,
 }));

+// Config is now passed via req.config, not getAppConfig
+
 const today = new Date();
 const tenDaysFromToday = new Date(today.setDate(today.getDate() + 10));
 const isoString = tenDaysFromToday.toISOString();
@@ -41,7 +43,11 @@ describe('initializeClient', () => {
      isUserProvided: jest.fn().mockReturnValueOnce(false),
    }));

-    const req = { user: { id: 'user123' }, app };
+    const req = {
+      user: { id: 'user123' },
+      app,
+      config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
+    };
    const res = {};

    const { openai, openAIApiKey } = await initializeClient({ req, res });
@@ -57,7 +63,11 @@ describe('initializeClient', () => {
    getUserKeyValues.mockResolvedValue({ apiKey: 'user-api-key', baseURL: 'https://user.api.url' });
    getUserKeyExpiry.mockResolvedValue(isoString);

-    const req = { user: { id: 'user123' }, app };
+    const req = {
+      user: { id: 'user123' },
+      app,
+      config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
+    };
    const res = {};

    const { openai, openAIApiKey } = await initializeClient({ req, res });
@@ -74,7 +84,7 @@ describe('initializeClient', () => {
      let userValues = getUserKey();
      try {
        userValues = JSON.parse(userValues);
-      } catch (e) {
+      } catch {
        throw new Error(
          JSON.stringify({
            type: ErrorTypes.INVALID_USER_KEY,
@@ -84,7 +94,10 @@ describe('initializeClient', () => {
      return userValues;
    });

-    const req = { user: { id: 'user123' } };
+    const req = {
+      user: { id: 'user123' },
+      config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
+    };
    const res = {};

    await expect(initializeClient({ req, res })).rejects.toThrow(/invalid_user_key/);
@@ -93,7 +106,11 @@ describe('initializeClient', () => {
  test('throws error if API key is not provided', async () => {
    delete process.env.AZURE_ASSISTANTS_API_KEY; // Simulate missing API key

-    const req = { user: { id: 'user123' }, app };
+    const req = {
+      user: { id: 'user123' },
+      app,
+      config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
+    };
    const res = {};

    await expect(initializeClient({ req, res })).rejects.toThrow(/Assistants API key not/);
@@ -103,7 +120,11 @@ describe('initializeClient', () => {
    process.env.AZURE_ASSISTANTS_API_KEY = 'test-key';
    process.env.PROXY = 'http://proxy.server';

-    const req = { user: { id: 'user123' }, app };
+    const req = {
+      user: { id: 'user123' },
+      app,
+      config: { endpoints: { [EModelEndpoint.azureOpenAI]: {} } },
+    };
    const res = {};

    const { openai } = await initializeClient({ req, res });
@@ -11,6 +11,7 @@ const {
 const { getUserKey, checkUserKeyExpiry } = require('~/server/services/UserService');

 const getOptions = async ({ req, overrideModel, endpointOption }) => {
+  const appConfig = req.config;
   const {
     BEDROCK_AWS_SECRET_ACCESS_KEY,
     BEDROCK_AWS_ACCESS_KEY_ID,
@@ -50,14 +51,13 @@ const getOptions = async ({ req, overrideModel, endpointOption }) => {
   let streamRate = Constants.DEFAULT_STREAM_RATE;

   /** @type {undefined | TBaseEndpoint} */
-  const bedrockConfig = req.app.locals[EModelEndpoint.bedrock];
+  const bedrockConfig = appConfig.endpoints?.[EModelEndpoint.bedrock];

   if (bedrockConfig && bedrockConfig.streamRate) {
     streamRate = bedrockConfig.streamRate;
   }

   /** @type {undefined | TBaseEndpoint} */
-  const allConfig = req.app.locals.all;
+  const allConfig = appConfig.endpoints?.all;
   if (allConfig && allConfig.streamRate) {
     streamRate = allConfig.streamRate;
   }
@@ -1,3 +1,11 @@
+const { Providers } = require('@librechat/agents');
+const {
+  resolveHeaders,
+  isUserProvided,
+  getOpenAIConfig,
+  getCustomEndpointConfig,
+  createHandleLLMNewToken,
+} = require('@librechat/api');
 const {
   CacheKeys,
   ErrorTypes,
@@ -5,22 +13,22 @@ const {
   FetchTokenConfig,
   extractEnvVariable,
 } = require('librechat-data-provider');
-const { Providers } = require('@librechat/agents');
-const { getOpenAIConfig, createHandleLLMNewToken, resolveHeaders } = require('@librechat/api');
 const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
-const { getCustomEndpointConfig } = require('~/server/services/Config');
 const { fetchModels } = require('~/server/services/ModelService');
 const OpenAIClient = require('~/app/clients/OpenAIClient');
-const { isUserProvided } = require('~/server/utils');
 const getLogStores = require('~/cache/getLogStores');

 const { PROXY } = process.env;

 const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrideEndpoint }) => {
+  const appConfig = req.config;
   const { key: expiresAt } = req.body;
   const endpoint = overrideEndpoint ?? req.body.endpoint;

-  const endpointConfig = await getCustomEndpointConfig(endpoint);
+  const endpointConfig = getCustomEndpointConfig({
+    endpoint,
+    appConfig,
+  });
   if (!endpointConfig) {
     throw new Error(`Config not found for the ${endpoint} custom endpoint.`);
   }
@@ -117,8 +125,7 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
     endpointTokenConfig,
   };

   /** @type {undefined | TBaseEndpoint} */
-  const allConfig = req.app.locals.all;
+  const allConfig = appConfig.endpoints?.all;
   if (allConfig) {
     customOptions.streamRate = allConfig.streamRate;
   }
@@ -1,21 +1,16 @@
 const initializeClient = require('./initialize');

+jest.mock('@librechat/api', () => ({
+  ...jest.requireActual('@librechat/api'),
+  resolveHeaders: jest.fn(),
+  getOpenAIConfig: jest.fn(),
+  createHandleLLMNewToken: jest.fn(),
+}));

 jest.mock('librechat-data-provider', () => ({
   CacheKeys: { TOKEN_CONFIG: 'token_config' },
   ErrorTypes: { NO_USER_KEY: 'NO_USER_KEY', NO_BASE_URL: 'NO_BASE_URL' },
   envVarRegex: /\$\{([^}]+)\}/,
   FetchTokenConfig: {},
   extractEnvVariable: jest.fn((value) => value),
 }));

 jest.mock('@librechat/agents', () => ({
   Providers: { OLLAMA: 'ollama' },
   getCustomEndpointConfig: jest.fn().mockReturnValue({
     apiKey: 'test-key',
     baseURL: 'https://test.com',
     headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
     models: { default: ['test-model'] },
   }),
 }));

 jest.mock('~/server/services/UserService', () => ({
@@ -23,14 +18,7 @@ jest.mock('~/server/services/UserService', () => ({
   checkUserKeyExpiry: jest.fn(),
 }));

-jest.mock('~/server/services/Config', () => ({
-  getCustomEndpointConfig: jest.fn().mockResolvedValue({
-    apiKey: 'test-key',
-    baseURL: 'https://test.com',
-    headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
-    models: { default: ['test-model'] },
-  }),
-}));
+// Config is now passed via req.config, not getAppConfig

 jest.mock('~/server/services/ModelService', () => ({
   fetchModels: jest.fn(),
@@ -42,10 +30,6 @@ jest.mock('~/app/clients/OpenAIClient', () => {
   }));
 });

-jest.mock('~/server/utils', () => ({
-  isUserProvided: jest.fn().mockReturnValue(false),
-}));
-
 jest.mock('~/cache/getLogStores', () =>
   jest.fn().mockReturnValue({
     get: jest.fn(),
@@ -55,13 +39,35 @@ jest.mock('~/cache/getLogStores', () =>
 describe('custom/initializeClient', () => {
   const mockRequest = {
     body: { endpoint: 'test-endpoint' },
-    user: { id: 'user-123', email: 'test@example.com' },
+    user: { id: 'user-123', email: 'test@example.com', role: 'user' },
     app: { locals: {} },
+    config: {
+      endpoints: {
+        all: {
+          streamRate: 25,
+        },
+      },
+    },
   };
   const mockResponse = {};

   beforeEach(() => {
     jest.clearAllMocks();
+    const { getCustomEndpointConfig, resolveHeaders, getOpenAIConfig } = require('@librechat/api');
+    getCustomEndpointConfig.mockReturnValue({
+      apiKey: 'test-key',
+      baseURL: 'https://test.com',
+      headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
+      models: { default: ['test-model'] },
+    });
+    resolveHeaders.mockReturnValue({ 'x-user': 'user-123', 'x-email': 'test@example.com' });
+    getOpenAIConfig.mockReturnValue({
+      useLegacyContent: true,
+      endpointTokenConfig: null,
+      llmConfig: {
+        callbacks: [],
+      },
+    });
   });

   it('calls resolveHeaders with headers, user, and body for body placeholder support', async () => {
@@ -69,14 +75,14 @@ describe('custom/initializeClient', () => {
     await initializeClient({ req: mockRequest, res: mockResponse, optionsOnly: true });
     expect(resolveHeaders).toHaveBeenCalledWith({
       headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
-      user: { id: 'user-123', email: 'test@example.com' },
+      user: { id: 'user-123', email: 'test@example.com', role: 'user' },
       body: { endpoint: 'test-endpoint' }, // body - supports {{LIBRECHAT_BODY_*}} placeholders
     });
   });

   it('throws if endpoint config is missing', async () => {
-    const { getCustomEndpointConfig } = require('~/server/services/Config');
-    getCustomEndpointConfig.mockResolvedValueOnce(null);
+    const { getCustomEndpointConfig } = require('@librechat/api');
+    getCustomEndpointConfig.mockReturnValueOnce(null);
     await expect(
       initializeClient({ req: mockRequest, res: mockResponse, optionsOnly: true }),
     ).rejects.toThrow('Config not found for the test-endpoint custom endpoint.');
@@ -46,10 +46,11 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio

   let clientOptions = {};

+  const appConfig = req.config;
   /** @type {undefined | TBaseEndpoint} */
-  const allConfig = req.app.locals.all;
+  const allConfig = appConfig.endpoints?.all;
   /** @type {undefined | TBaseEndpoint} */
-  const googleConfig = req.app.locals[EModelEndpoint.google];
+  const googleConfig = appConfig.endpoints?.[EModelEndpoint.google];

   if (googleConfig) {
     clientOptions.streamRate = googleConfig.streamRate;
@@ -8,6 +8,8 @@ jest.mock('~/server/services/UserService', () => ({
   getUserKey: jest.fn().mockImplementation(() => ({})),
 }));

+// Config is now passed via req.config, not getAppConfig
+
 const app = { locals: {} };

 describe('google/initializeClient', () => {
@@ -26,6 +28,12 @@ describe('google/initializeClient', () => {
       body: { key: expiresAt },
       user: { id: '123' },
       app,
+      config: {
+        endpoints: {
+          all: {},
+          google: {},
+        },
+      },
     };
     const res = {};
     const endpointOption = { modelOptions: { model: 'default-model' } };
@@ -48,6 +56,12 @@ describe('google/initializeClient', () => {
       body: { key: null },
       user: { id: '123' },
       app,
+      config: {
+        endpoints: {
+          all: {},
+          google: {},
+        },
+      },
     };
     const res = {};
     const endpointOption = { modelOptions: { model: 'default-model' } };
@@ -71,6 +85,12 @@ describe('google/initializeClient', () => {
       body: { key: expiresAt },
       user: { id: '123' },
       app,
+      config: {
+        endpoints: {
+          all: {},
+          google: {},
+        },
+      },
     };
     const res = {};
     const endpointOption = { modelOptions: { model: 'default-model' } };
@@ -1,7 +1,7 @@
+const { isEnabled } = require('@librechat/api');
 const { EModelEndpoint, CacheKeys, Constants, googleSettings } = require('librechat-data-provider');
 const getLogStores = require('~/cache/getLogStores');
 const initializeClient = require('./initialize');
-const { isEnabled } = require('~/server/utils');
 const { saveConvo } = require('~/models');

 const addTitle = async (req, { text, response, client }) => {
@@ -14,7 +14,8 @@ const addTitle = async (req, { text, response, client }) => {
     return;
   }
   const { GOOGLE_TITLE_MODEL } = process.env ?? {};
-  const providerConfig = req.app.locals[EModelEndpoint.google];
+  const appConfig = req.config;
+  const providerConfig = appConfig.endpoints?.[EModelEndpoint.google];
   let model =
     providerConfig?.titleModel ??
     GOOGLE_TITLE_MODEL ??
@@ -1,11 +1,11 @@
 const { Providers } = require('@librechat/agents');
 const { EModelEndpoint } = require('librechat-data-provider');
+const { getCustomEndpointConfig } = require('@librechat/api');
 const initAnthropic = require('~/server/services/Endpoints/anthropic/initialize');
 const getBedrockOptions = require('~/server/services/Endpoints/bedrock/options');
 const initOpenAI = require('~/server/services/Endpoints/openAI/initialize');
 const initCustom = require('~/server/services/Endpoints/custom/initialize');
 const initGoogle = require('~/server/services/Endpoints/google/initialize');
-const { getCustomEndpointConfig } = require('~/server/services/Config');

 /** Check if the provider is a known custom provider
  * @param {string | undefined} [provider] - The provider string
@@ -31,14 +31,16 @@ const providerConfigMap = {

 /**
  * Get the provider configuration and override endpoint based on the provider string
- * @param {string} provider - The provider string
- * @returns {Promise<{
- *   getOptions: Function,
+ * @param {Object} params
+ * @param {string} params.provider - The provider string
+ * @param {AppConfig} params.appConfig - The application configuration
+ * @returns {{
+ *   getOptions: (typeof providerConfigMap)[keyof typeof providerConfigMap],
  *   overrideProvider: string,
  *   customEndpointConfig?: TEndpoint
- * }>}
+ * }}
  */
-async function getProviderConfig(provider) {
+function getProviderConfig({ provider, appConfig }) {
   let getOptions = providerConfigMap[provider];
   let overrideProvider = provider;
   /** @type {TEndpoint | undefined} */
@@ -48,7 +50,7 @@ async function getProviderConfig(provider) {
     overrideProvider = provider.toLowerCase();
     getOptions = providerConfigMap[overrideProvider];
   } else if (!getOptions) {
-    customEndpointConfig = await getCustomEndpointConfig(provider);
+    customEndpointConfig = getCustomEndpointConfig({ endpoint: provider, appConfig });
     if (!customEndpointConfig) {
       throw new Error(`Provider ${provider} not supported`);
     }
@@ -57,7 +59,7 @@ async function getProviderConfig(provider) {
   }

   if (isKnownCustomProvider(overrideProvider) && !customEndpointConfig) {
-    customEndpointConfig = await getCustomEndpointConfig(provider);
+    customEndpointConfig = getCustomEndpointConfig({ endpoint: provider, appConfig });
     if (!customEndpointConfig) {
       throw new Error(`Provider ${provider} not supported`);
     }
@@ -18,6 +18,7 @@ const initializeClient = async ({
   overrideEndpoint,
   overrideModel,
 }) => {
+  const appConfig = req.config;
   const {
     PROXY,
     OPENAI_API_KEY,
@@ -64,7 +65,7 @@ const initializeClient = async ({

   const isAzureOpenAI = endpoint === EModelEndpoint.azureOpenAI;
   /** @type {false | TAzureConfig} */
-  const azureConfig = isAzureOpenAI && req.app.locals[EModelEndpoint.azureOpenAI];
+  const azureConfig = isAzureOpenAI && appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
   let serverless = false;
   if (isAzureOpenAI && azureConfig) {
     const { modelGroupMap, groupMap } = azureConfig;
@@ -113,15 +114,14 @@ const initializeClient = async ({
   }

   /** @type {undefined | TBaseEndpoint} */
-  const openAIConfig = req.app.locals[EModelEndpoint.openAI];
+  const openAIConfig = appConfig.endpoints?.[EModelEndpoint.openAI];

   if (!isAzureOpenAI && openAIConfig) {
     clientOptions.streamRate = openAIConfig.streamRate;
     clientOptions.titleModel = openAIConfig.titleModel;
   }

   /** @type {undefined | TBaseEndpoint} */
-  const allConfig = req.app.locals.all;
+  const allConfig = appConfig.endpoints?.all;
   if (allConfig) {
     clientOptions.streamRate = allConfig.streamRate;
   }
@@ -1,4 +1,13 @@
-jest.mock('~/cache/getLogStores');
+jest.mock('~/cache/getLogStores', () => ({
+  getLogStores: jest.fn().mockReturnValue({
+    get: jest.fn().mockResolvedValue({
+      openAI: { apiKey: 'test-key' },
+    }),
+    set: jest.fn(),
+    delete: jest.fn(),
+  }),
+}));
+
 const { EModelEndpoint, ErrorTypes, validateAzureGroups } = require('librechat-data-provider');
 const { getUserKey, getUserKeyValues } = require('~/server/services/UserService');
 const initializeClient = require('./initialize');
@@ -11,6 +20,38 @@ jest.mock('~/server/services/UserService', () => ({
   checkUserKeyExpiry: jest.requireActual('~/server/services/UserService').checkUserKeyExpiry,
 }));

+const mockAppConfig = {
+  endpoints: {
+    openAI: {
+      apiKey: 'test-key',
+    },
+    azureOpenAI: {
+      apiKey: 'test-azure-key',
+      modelNames: ['gpt-4-vision-preview', 'gpt-3.5-turbo', 'gpt-4'],
+      modelGroupMap: {
+        'gpt-4-vision-preview': {
+          group: 'librechat-westus',
+          deploymentName: 'gpt-4-vision-preview',
+          version: '2024-02-15-preview',
+        },
+      },
+      groupMap: {
+        'librechat-westus': {
+          apiKey: 'WESTUS_API_KEY',
+          instanceName: 'librechat-westus',
+          version: '2023-12-01-preview',
+          models: {
+            'gpt-4-vision-preview': {
+              deploymentName: 'gpt-4-vision-preview',
+              version: '2024-02-15-preview',
+            },
+          },
+        },
+      },
+    },
+  },
+};
+
 describe('initializeClient', () => {
   // Set up environment variables
   const originalEnvironment = process.env;
@@ -79,7 +120,7 @@ describe('initializeClient', () => {
     },
   ];

-  const { modelNames, modelGroupMap, groupMap } = validateAzureGroups(validAzureConfigs);
+  const { modelNames } = validateAzureGroups(validAzureConfigs);

   beforeEach(() => {
     jest.resetModules(); // Clears the cache
@@ -99,6 +140,7 @@ describe('initializeClient', () => {
       body: { key: null, endpoint: EModelEndpoint.openAI },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -112,25 +154,30 @@ describe('initializeClient', () => {
   test('should initialize client with Azure credentials when endpoint is azureOpenAI', async () => {
     process.env.AZURE_API_KEY = 'test-azure-api-key';
     (process.env.AZURE_OPENAI_API_INSTANCE_NAME = 'some-value'),
-      (process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME = 'some-value'),
-      (process.env.AZURE_OPENAI_API_VERSION = 'some-value'),
-      (process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME = 'some-value'),
-      (process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME = 'some-value'),
-      (process.env.OPENAI_API_KEY = 'test-openai-api-key');
+    (process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME = 'some-value'),
+    (process.env.AZURE_OPENAI_API_VERSION = 'some-value'),
+    (process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME = 'some-value'),
+    (process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME = 'some-value'),
+    (process.env.OPENAI_API_KEY = 'test-openai-api-key');
     process.env.DEBUG_OPENAI = 'false';
     process.env.OPENAI_SUMMARIZE = 'false';

     const req = {
-      body: { key: null, endpoint: 'azureOpenAI' },
+      body: {
+        key: null,
+        endpoint: 'azureOpenAI',
+        model: 'gpt-4-vision-preview',
+      },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
-    const endpointOption = { modelOptions: { model: 'test-model' } };
+    const endpointOption = {};

     const client = await initializeClient({ req, res, endpointOption });

-    expect(client.openAIApiKey).toBe('test-azure-api-key');
+    expect(client.openAIApiKey).toBe('WESTUS_API_KEY');
     expect(client.client).toBeInstanceOf(OpenAIClient);
   });

@@ -142,6 +189,7 @@ describe('initializeClient', () => {
       body: { key: null, endpoint: EModelEndpoint.openAI },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -159,6 +207,7 @@ describe('initializeClient', () => {
       body: { key: null, endpoint: EModelEndpoint.openAI },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -177,6 +226,7 @@ describe('initializeClient', () => {
       body: { key: null, endpoint: EModelEndpoint.openAI },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -198,6 +248,7 @@ describe('initializeClient', () => {
       body: { key: expiresAt, endpoint: EModelEndpoint.openAI },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -216,6 +267,7 @@ describe('initializeClient', () => {
       body: { key: null, endpoint: EModelEndpoint.openAI },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -236,6 +288,7 @@ describe('initializeClient', () => {
         id: '123',
       },
       app,
+      config: mockAppConfig,
     };

     const res = {};
@@ -260,6 +313,7 @@ describe('initializeClient', () => {
       body: { key: invalidKey, endpoint: EModelEndpoint.openAI },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -281,6 +335,7 @@ describe('initializeClient', () => {
       body: { key: new Date(Date.now() + 10000).toISOString(), endpoint: EModelEndpoint.openAI },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -291,7 +346,7 @@ describe('initializeClient', () => {
       let userValues = getUserKey();
       try {
         userValues = JSON.parse(userValues);
-      } catch (e) {
+      } catch {
         throw new Error(
           JSON.stringify({
             type: ErrorTypes.INVALID_USER_KEY,
@@ -307,6 +362,9 @@ describe('initializeClient', () => {
   });

   test('should initialize client correctly for Azure OpenAI with valid configuration', async () => {
+    // Set up Azure environment variables
+    process.env.WESTUS_API_KEY = 'test-westus-key';
+
     const req = {
       body: {
         key: null,
@@ -314,15 +372,7 @@ describe('initializeClient', () => {
         model: modelNames[0],
       },
       user: { id: '123' },
-      app: {
-        locals: {
-          [EModelEndpoint.azureOpenAI]: {
-            modelNames,
-            modelGroupMap,
-            groupMap,
-          },
-        },
-      },
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -340,6 +390,7 @@ describe('initializeClient', () => {
       body: { key: null, endpoint: EModelEndpoint.openAI },
       user: { id: '123' },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};
@@ -362,6 +413,7 @@ describe('initializeClient', () => {
         id: '123',
       },
       app,
+      config: mockAppConfig,
     };
     const res = {};
     const endpointOption = {};