Mirror of https://github.com/danny-avila/LibreChat.git
🎉 feat: Code Interpreter API and Agents Release (#4860)
* feat: Code Interpreter API & File Search Agent Uploads
  chore: add back code files
  wip: first pass, abstract key dialog
  refactor: influence checkbox on key changes
  refactor: update localization keys for 'execute code' to 'run code'
  wip: run code button
  refactor: add throwError parameter to loadAuthValues and getUserPluginAuthValue functions
  feat: first pass, API tool calling
  fix: handle missing toolId in callTool function and return 404 for non-existent tools
  feat: show code outputs
  fix: improve error handling in callTool function and log errors
  fix: handle potential null value for filepath in attachment destructuring
  fix: normalize language before rendering and prevent null return
  fix: add loading indicator in RunCode component while executing code
  feat: add support for conditional code execution in Markdown components
  feat: attachments
  refactor: remove bash
  fix: pass abort signal to graph/run
  refactor: debounce and rate limit tool call
  refactor: increase debounce delay for execute function
  feat: set code output attachments
  feat: image attachments
  refactor: apply message context
  refactor: pass `partIndex`
  feat: toolCall schema/model/methods
  feat: block indexing
  feat: get tool calls
  chore: imports
  chore: typing
  chore: condense type imports
  feat: get tool calls
  fix: block indexing
  chore: typing
  refactor: update tool calls mapping to support multiple results
  fix: add unique key to nav link for rendering
  wip: first pass, tool call results
  refactor: update query cache from successful tool call mutation
  style: improve result switcher styling
  chore: note on using `.toObject()`
  feat: add agent_id field to conversation schema
  chore: typing
  refactor: rename agentMap to agentsMap for consistency
  feat: Agent Name as chat input placeholder
  chore: bump agents
  📦 chore: update @langchain dependencies to latest versions to match agents package
  📦 chore: update @librechat/agents dependency to version 1.8.0
  fix: Aborting agent stream removes sender; fix(bedrock): completion removes preset name label
  refactor: remove direct file parameter to use req.file, add `processAgentFileUpload` for image uploads
  feat: upload menu
  feat: prime message_file resources
  feat: implement conversation access validation in chat route
  refactor: remove file parameter from processFileUpload and use req.file instead
  feat: add savedMessageIds set to track saved message IDs in BaseClient, to prevent unnecessary double-write to db
  feat: prevent duplicate message saves by checking savedMessageIds in AgentController
  refactor: skip legacy RAG API handling for agents
  feat: add files field to convoSchema
  refactor: update request type annotations from Express.Request to ServerRequest in file processing functions
  feat: track conversation files
  fix: resendFiles, addPreviousAttachments handling
  feat: add ID validation for session_id and file_id in download route
  feat: entity_id for code file uploads/downloads
  fix: code file edge cases
  feat: delete related tool calls
  feat: add stream rate handling for LLM configuration
  feat: enhance system content with attached file information
  fix: improve error logging in resource priming function
* WIP: PoC, sequential agents
  WIP: PoC Sequential Agents, first pass content data + bump agents package
  fix: package-lock
  WIP: PoC, o1 support, refactor bufferString
  feat: convertJsonSchemaToZod
  fix: form issues and schema defining erroneous model
  fix: max length issue on agent form instructions, limit conversation messages to sequential agents
  feat: add abort signal support to createRun function and AgentClient
  feat: PoC, hide prior sequential agent steps
  fix: update parameter naming from config to metadata in event handlers for clarity, add model to usage data
  refactor: use only last contentData, track model for usage data
  chore: bump agents package
  fix: content parts issue
  refactor: filter contentParts to include tool calls and relevant indices
  feat: show function calls
  refactor: filter context messages to exclude tool calls when no tools are available to the agent
  fix: ensure tool call content is not undefined in formatMessages
  feat: add agent_id field to conversationPreset schema
  feat: hide sequential agents
  feat: increase upload toast duration to 10 seconds
* refactor: tool context handling & update Code API Key Dialog
  feat: toolContextMap
  chore: skipSpecs -> useSpecs
  ci: fix handleTools tests
  feat: API Key Dialog
* feat: Agent Permissions Admin Controls
  feat: replace label with button for prompt permission toggle
  feat: update agent permissions
  feat: enable experimental agents and streamline capability configuration
  feat: implement access control for agents and enhance endpoint menu items
  feat: add welcome message for agent selection in localization
  feat: add agents permission to access control and update version to 0.7.57
* fix: update types in useAssistantListMap and useMentions hooks for better null handling
* feat: mention agents
* fix: agent tool resource race conditions when deleting agent tool resource files
* feat: add error handling for code execution with user feedback
* refactor: rename AdminControls to AdminSettings for clarity
* style: add gap to button in AdminSettings for improved layout
* refactor: separate agent query hooks and check access to enable fetching
* fix: remove unused provider from agent initialization options, creates issue with custom endpoints
* refactor: remove redundant/deprecated modelOptions from AgentClient processes
* chore: update @librechat/agents to version 1.8.5 in package.json and package-lock.json
* fix: minor styling issues + agent panel uniformity
* fix: agent edge cases when set endpoint is no longer defined
* refactor: remove unused cleanup function call from AppService
* fix: update link in ApiKeyDialog to point to pricing page
* fix: improve type handling and layout calculations in SidePanel component
* fix: add missing localization string for agent selection in SidePanel
* chore: form styling and localizations for upload filesearch/code interpreter
* fix: model selection placeholder logic in AgentConfig component
* style: agent capabilities
* fix: add localization for provider selection and improve dropdown styling in ModelPanel
* refactor: use gpt-4o-mini > gpt-3.5-turbo
* fix: agents configuration for loadDefaultInterface and update related tests
* feat: DALLE Agents support
Parent: affcebd48c
Commit: 1a815f5e19
189 changed files with 5056 additions and 1815 deletions
@@ -8,7 +8,6 @@ const { loadDefaultInterface } = require('./start/interface');
const { azureConfigSetup } = require('./start/azureOpenAI');
const { loadAndFormatTools } = require('./ToolService');
const { initializeRoles } = require('~/models/Role');
const { cleanup } = require('./cleanup');
const paths = require('~/config/paths');

/**

@@ -18,7 +17,6 @@ const paths = require('~/config/paths');
 * @param {Express.Application} app - The Express application object.
 */
const AppService = async (app) => {
  cleanup();
  await initializeRoles();
  /** @type {TCustomConfig}*/
  const config = (await loadCustomConfig()) ?? {};

@@ -49,10 +49,6 @@ module.exports = {
      process.env.BEDROCK_AWS_SECRET_ACCESS_KEY ?? process.env.BEDROCK_AWS_DEFAULT_REGION,
    ),
    /* key will be part of separate config */
    [EModelEndpoint.agents]: generateConfig(
      process.env.EXPERIMENTAL_AGENTS,
      undefined,
      EModelEndpoint.agents,
    ),
    [EModelEndpoint.agents]: generateConfig('true', undefined, EModelEndpoint.agents),
  },
};

@@ -2,8 +2,14 @@ const { loadAgent } = require('~/models/Agent');
const { logger } = require('~/config');

const buildOptions = (req, endpoint, parsedBody) => {
  const { agent_id, instructions, spec, ...model_parameters } = parsedBody;

  const {
    agent_id,
    instructions,
    spec,
    maxContextTokens,
    resendFiles = true,
    ...model_parameters
  } = parsedBody;
  const agentPromise = loadAgent({
    req,
    agent_id,

@@ -13,12 +19,14 @@ const buildOptions = (req, endpoint, parsedBody) => {
  });

  const endpointOption = {
    agent: agentPromise,
    spec,
    endpoint,
    agent_id,
    resendFiles,
    instructions,
    spec,
    maxContextTokens,
    model_parameters,
    agent: agentPromise,
  };

  return endpointOption;

@@ -16,6 +16,8 @@ const { getCustomEndpointConfig } = require('~/server/services/Config');
const { loadAgentTools } = require('~/server/services/ToolService');
const AgentClient = require('~/server/controllers/agents/client');
const { getModelMaxTokens } = require('~/utils');
const { getAgent } = require('~/models/Agent');
const { logger } = require('~/config');

const providerConfigMap = {
  [EModelEndpoint.openAI]: initOpenAI,

@@ -25,6 +27,113 @@ const providerConfigMap = {
  [Providers.OLLAMA]: initCustom,
};

/**
 *
 * @param {Promise<Array<MongoFile | null>> | undefined} _attachments
 * @param {AgentToolResources | undefined} _tool_resources
 * @returns {Promise<{ attachments: Array<MongoFile | undefined> | undefined, tool_resources: AgentToolResources | undefined }>}
 */
const primeResources = async (_attachments, _tool_resources) => {
  try {
    if (!_attachments) {
      return { attachments: undefined, tool_resources: _tool_resources };
    }
    /** @type {Array<MongoFile | undefined> | undefined} */
    const files = await _attachments;
    const attachments = [];
    const tool_resources = _tool_resources ?? {};

    for (const file of files) {
      if (!file) {
        continue;
      }
      if (file.metadata?.fileIdentifier) {
        const execute_code = tool_resources.execute_code ?? {};
        if (!execute_code.files) {
          tool_resources.execute_code = { ...execute_code, files: [] };
        }
        tool_resources.execute_code.files.push(file);
      } else if (file.embedded === true) {
        const file_search = tool_resources.file_search ?? {};
        if (!file_search.files) {
          tool_resources.file_search = { ...file_search, files: [] };
        }
        tool_resources.file_search.files.push(file);
      }

      attachments.push(file);
    }
    return { attachments, tool_resources };
  } catch (error) {
    logger.error('Error priming resources', error);
    return { attachments: _attachments, tool_resources: _tool_resources };
  }
};

const initializeAgentOptions = async ({
  req,
  res,
  agent,
  endpointOption,
  tool_resources,
  isInitialAgent = false,
}) => {
  const { tools, toolContextMap } = await loadAgentTools({
    req,
    tools: agent.tools,
    agent_id: agent.id,
    tool_resources,
  });

  const provider = agent.provider;
  let getOptions = providerConfigMap[provider];

  if (!getOptions) {
    const customEndpointConfig = await getCustomEndpointConfig(provider);
    if (!customEndpointConfig) {
      throw new Error(`Provider ${provider} not supported`);
    }
    getOptions = initCustom;
    agent.provider = Providers.OPENAI;
    agent.endpoint = provider.toLowerCase();
  }

  const model_parameters = agent.model_parameters ?? { model: agent.model };
  const _endpointOption = isInitialAgent
    ? endpointOption
    : {
        model_parameters,
      };

  const options = await getOptions({
    req,
    res,
    optionsOnly: true,
    overrideEndpoint: provider,
    overrideModel: agent.model,
    endpointOption: _endpointOption,
  });

  agent.model_parameters = Object.assign(model_parameters, options.llmConfig);
  if (options.configOptions) {
    agent.model_parameters.configuration = options.configOptions;
  }

  if (!agent.model_parameters.model) {
    agent.model_parameters.model = agent.model;
  }

  return {
    ...agent,
    tools,
    toolContextMap,
    maxContextTokens:
      agent.max_context_tokens ??
      getModelMaxTokens(agent.model_parameters.model, providerEndpointMap[provider]) ??
      4000,
  };
};

const initializeClient = async ({ req, res, endpointOption }) => {
  if (!endpointOption) {
    throw new Error('Endpoint option not provided');

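The primeResources helper above decides where an attachment belongs: a file carrying metadata.fileIdentifier is routed to the execute_code tool resource, an embedded file goes to file_search, and every file is also kept as a plain attachment. A minimal, self-contained sketch of that bucketing; the sample file objects and field values below are illustrative, not taken from the database:

// Illustrative sketch only: mirrors the bucketing decision in primeResources above.
const files = [
  { file_id: 'a', filename: 'data.csv', metadata: { fileIdentifier: 'sess-1/abc' } }, // -> execute_code
  { file_id: 'b', filename: 'notes.pdf', embedded: true, metadata: {} },              // -> file_search
  { file_id: 'c', filename: 'image.png', embedded: false, metadata: {} },             // -> attachment only
];

const tool_resources = {};
for (const file of files) {
  if (file.metadata?.fileIdentifier) {
    // ??= creates the bucket on first use, then pushes into it
    (tool_resources.execute_code ??= { files: [] }).files.push(file);
  } else if (file.embedded === true) {
    (tool_resources.file_search ??= { files: [] }).files.push(file);
  }
}
console.log(Object.keys(tool_resources)); // [ 'execute_code', 'file_search' ]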
@@ -48,70 +157,68 @@ const initializeClient = async ({ req, res, endpointOption }) => {
    throw new Error('No agent promise provided');
  }

  /** @type {Agent | null} */
  const agent = await endpointOption.agent;
  if (!agent) {
  // Initialize primary agent
  const primaryAgent = await endpointOption.agent;
  if (!primaryAgent) {
    throw new Error('Agent not found');
  }

  const { tools } = await loadAgentTools({
    req,
    tools: agent.tools,
    agent_id: agent.id,
    tool_resources: agent.tool_resources,
  });
  const { attachments, tool_resources } = await primeResources(
    endpointOption.attachments,
    primaryAgent.tool_resources,
  );

  const provider = agent.provider;
  let modelOptions = { model: agent.model };
  let getOptions = providerConfigMap[provider];
  if (!getOptions) {
    const customEndpointConfig = await getCustomEndpointConfig(provider);
    if (!customEndpointConfig) {
      throw new Error(`Provider ${provider} not supported`);
    }
    getOptions = initCustom;
    agent.provider = Providers.OPENAI;
    agent.endpoint = provider.toLowerCase();
  }
  const agentConfigs = new Map();

  // TODO: pass-in override settings that are specific to current run
  endpointOption.model_parameters.model = agent.model;
  const options = await getOptions({
  // Handle primary agent
  const primaryConfig = await initializeAgentOptions({
    req,
    res,
    agent: primaryAgent,
    endpointOption,
    optionsOnly: true,
    overrideEndpoint: provider,
    overrideModel: agent.model,
    tool_resources,
    isInitialAgent: true,
  });

  modelOptions = Object.assign(modelOptions, options.llmConfig);
  if (options.configOptions) {
    modelOptions.configuration = options.configOptions;
  const agent_ids = primaryConfig.agent_ids;
  if (agent_ids?.length) {
    for (const agentId of agent_ids) {
      const agent = await getAgent({ id: agentId });
      if (!agent) {
        throw new Error(`Agent ${agentId} not found`);
      }
      const config = await initializeAgentOptions({
        req,
        res,
        agent,
        endpointOption,
      });
      agentConfigs.set(agentId, config);
    }
  }

  const sender = getResponseSender({
    ...endpointOption,
    model: endpointOption.model_parameters.model,
  });
  const sender =
    primaryAgent.name ??
    getResponseSender({
      ...endpointOption,
      model: endpointOption.model_parameters.model,
    });

  const client = new AgentClient({
    req,
    agent,
    tools,
    agent: primaryConfig,
    sender,
    attachments,
    contentParts,
    modelOptions,
    eventHandlers,
    collectedUsage,
    artifactPromises,
    spec: endpointOption.spec,
    agentConfigs,
    endpoint: EModelEndpoint.agents,
    attachments: endpointOption.attachments,
    maxContextTokens:
      agent.max_context_tokens ??
      getModelMaxTokens(modelOptions.model, providerEndpointMap[provider]) ??
      4000,
    maxContextTokens: primaryConfig.maxContextTokens,
  });

  return { client };
};

@@ -5,7 +5,6 @@ const {
  getResponseSender,
} = require('librechat-data-provider');
const { getDefaultHandlers } = require('~/server/controllers/agents/callbacks');
// const { loadAgentTools } = require('~/server/services/ToolService');
const getOptions = require('~/server/services/Endpoints/bedrock/options');
const AgentClient = require('~/server/controllers/agents/client');
const { getModelMaxTokens } = require('~/utils');

@@ -20,8 +19,6 @@ const initializeClient = async ({ req, res, endpointOption }) => {
  const { contentParts, aggregateContent } = createContentAggregator();
  const eventHandlers = getDefaultHandlers({ res, aggregateContent, collectedUsage });

  // const tools = [createTavilySearchTool()];

  /** @type {Agent} */
  const agent = {
    id: EModelEndpoint.bedrock,

@@ -36,8 +33,6 @@ const initializeClient = async ({ req, res, endpointOption }) => {
    agent.instructions = `${agent.instructions ?? ''}\n${endpointOption.artifactsPrompt}`.trim();
  }

  let modelOptions = { model: agent.model };

  // TODO: pass-in override settings that are specific to current run
  const options = await getOptions({
    req,

@@ -45,28 +40,34 @@ const initializeClient = async ({ req, res, endpointOption }) => {
    endpointOption,
  });

  modelOptions = Object.assign(modelOptions, options.llmConfig);
  const maxContextTokens =
    agent.max_context_tokens ??
    getModelMaxTokens(modelOptions.model, providerEndpointMap[agent.provider]);
  agent.model_parameters = Object.assign(agent.model_parameters, options.llmConfig);
  if (options.configOptions) {
    agent.model_parameters.configuration = options.configOptions;
  }

  const sender = getResponseSender({
    ...endpointOption,
    model: endpointOption.model_parameters.model,
  });
  const sender =
    agent.name ??
    getResponseSender({
      ...endpointOption,
      model: endpointOption.model_parameters.model,
    });

  const client = new AgentClient({
    req,
    agent,
    sender,
    // tools,
    modelOptions,
    contentParts,
    eventHandlers,
    collectedUsage,
    maxContextTokens,
    spec: endpointOption.spec,
    endpoint: EModelEndpoint.bedrock,
    configOptions: options.configOptions,
    resendFiles: endpointOption.resendFiles,
    maxContextTokens:
      endpointOption.maxContextTokens ??
      agent.max_context_tokens ??
      getModelMaxTokens(agent.model_parameters.model, providerEndpointMap[agent.provider]) ??
      4000,
    attachments: endpointOption.attachments,
  });
  return { client };

@@ -10,8 +10,8 @@ const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/User
const { getLLMConfig } = require('~/server/services/Endpoints/openAI/llm');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const { fetchModels } = require('~/server/services/ModelService');
const { isUserProvided, sleep } = require('~/server/utils');
const getLogStores = require('~/cache/getLogStores');
const { isUserProvided } = require('~/server/utils');
const { OpenAIClient } = require('~/app');

const { PROXY } = process.env;

@@ -141,7 +141,18 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
      },
      clientOptions,
    );
    return getLLMConfig(apiKey, requestOptions);
    const options = getLLMConfig(apiKey, requestOptions);
    if (!customOptions.streamRate) {
      return options;
    }
    options.llmConfig.callbacks = [
      {
        handleLLMNewToken: async () => {
          await sleep(customOptions.streamRate);
        },
      },
    ];
    return options;
  }

  if (clientOptions.reverseProxyUrl) {

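The hunk above (and the matching change in the next file) adds stream rate handling: when a streamRate is configured, a handleLLMNewToken callback pauses between generated tokens so the server does not flood slow clients. A minimal sketch of the same pattern, assuming streamRate is a per-token delay in milliseconds; the sleep helper is inlined here, while the real one lives in ~/server/utils:

// Sketch only: wraps an llmConfig with a token-pacing callback, as in the diff above.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

function withStreamRate(llmConfig, streamRate) {
  if (!streamRate) {
    return llmConfig;
  }
  return {
    ...llmConfig,
    callbacks: [
      {
        // LangChain invokes this handler for each new token; awaiting here
        // throttles how fast tokens are forwarded to the client stream.
        handleLLMNewToken: async () => {
          await sleep(streamRate);
        },
      },
    ],
  };
}

console.log(withStreamRate({ model: 'gpt-4o-mini' }, 25));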
@@ -6,7 +6,7 @@ const {
} = require('librechat-data-provider');
const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getLLMConfig } = require('~/server/services/Endpoints/openAI/llm');
const { isEnabled, isUserProvided } = require('~/server/utils');
const { isEnabled, isUserProvided, sleep } = require('~/server/utils');
const { getAzureCredentials } = require('~/utils');
const { OpenAIClient } = require('~/app');

@@ -140,7 +140,18 @@ const initializeClient = async ({
      },
      clientOptions,
    );
    return getLLMConfig(apiKey, requestOptions);
    const options = getLLMConfig(apiKey, requestOptions);
    if (!clientOptions.streamRate) {
      return options;
    }
    options.llmConfig.callbacks = [
      {
        handleLLMNewToken: async () => {
          await sleep(clientOptions.streamRate);
        },
      },
    ];
    return options;
  }

  const client = new OpenAIClient(apiKey, Object.assign({ req, res }, clientOptions));

@@ -40,12 +40,16 @@ async function getCodeOutputDownloadStream(fileIdentifier, apiKey) {
 * @param {import('fs').ReadStream | import('stream').Readable} params.stream - The read stream for the file.
 * @param {string} params.filename - The name of the file.
 * @param {string} params.apiKey - The API key for authentication.
 * @param {string} [params.entity_id] - Optional entity ID for the file.
 * @returns {Promise<string>}
 * @throws {Error} If there's an error during the upload process.
 */
async function uploadCodeEnvFile({ req, stream, filename, apiKey }) {
async function uploadCodeEnvFile({ req, stream, filename, apiKey, entity_id = '' }) {
  try {
    const form = new FormData();
    if (entity_id.length > 0) {
      form.append('entity_id', entity_id);
    }
    form.append('file', stream, filename);

    const baseURL = getCodeBaseURL();

@@ -67,7 +71,12 @@ async function uploadCodeEnvFile({ req, stream, filename, apiKey }) {
      throw new Error(`Error uploading file: ${result.message}`);
    }

    return `${result.session_id}/${result.files[0].fileId}`;
    const fileIdentifier = `${result.session_id}/${result.files[0].fileId}`;
    if (entity_id.length === 0) {
      return fileIdentifier;
    }

    return `${fileIdentifier}?entity_id=${entity_id}`;
  } catch (error) {
    throw new Error(`Error uploading file: ${error.message}`);
  }

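With these changes a code-environment file is identified by a composite string, session_id/fileId, optionally suffixed with ?entity_id=... when the upload belongs to an agent. A small sketch of building and parsing that identifier; the concrete values are made up:

// Sketch of the fileIdentifier format implied by uploadCodeEnvFile/getSessionInfo above.
const session_id = 'd3adb33f';
const fileId = 'file_123';
const entity_id = 'agent_abc';

const fileIdentifier = entity_id
  ? `${session_id}/${fileId}?entity_id=${entity_id}`
  : `${session_id}/${fileId}`;

// Parsing mirrors the updated getSessionInfo/primeFiles: split off the query string first.
const [path, queryString] = fileIdentifier.split('?');
const parsedSession = path.split('/')[0];
const queryParams = queryString
  ? Object.fromEntries(new URLSearchParams(queryString).entries())
  : {};

console.log(parsedSession, queryParams.entity_id); // d3adb33f agent_abc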
@@ -3,10 +3,11 @@ const { v4 } = require('uuid');
const axios = require('axios');
const { getCodeBaseURL } = require('@librechat/agents');
const {
  EToolResources,
  Tools,
  FileContext,
  imageExtRegex,
  FileSources,
  imageExtRegex,
  EToolResources,
} = require('librechat-data-provider');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { convertImage } = require('~/server/services/Files/images/convert');

@@ -110,12 +111,20 @@ function checkIfActive(dateString) {
async function getSessionInfo(fileIdentifier, apiKey) {
  try {
    const baseURL = getCodeBaseURL();
    const session_id = fileIdentifier.split('/')[0];
    const [path, queryString] = fileIdentifier.split('?');
    const session_id = path.split('/')[0];

    let queryParams = {};
    if (queryString) {
      queryParams = Object.fromEntries(new URLSearchParams(queryString).entries());
    }

    const response = await axios({
      method: 'get',
      url: `${baseURL}/files/${session_id}`,
      params: {
        detail: 'summary',
        ...queryParams,
      },
      headers: {
        'User-Agent': 'LibreChat/1.0',

@@ -124,7 +133,7 @@ async function getSessionInfo(fileIdentifier, apiKey) {
      timeout: 5000,
    });

    return response.data.find((file) => file.name.startsWith(fileIdentifier))?.lastModified;
    return response.data.find((file) => file.name.startsWith(path))?.lastModified;
  } catch (error) {
    logger.error(`Error fetching session info: ${error.message}`, error);
    return null;

@@ -137,29 +146,56 @@ async function getSessionInfo(fileIdentifier, apiKey) {
 * @param {ServerRequest} options.req
 * @param {Agent['tool_resources']} options.tool_resources
 * @param {string} apiKey
 * @returns {Promise<Array<{ id: string; session_id: string; name: string }>>}
 * @returns {Promise<{
 *  files: Array<{ id: string; session_id: string; name: string }>,
 *  toolContext: string,
 * }>}
 */
const primeFiles = async (options, apiKey) => {
  const { tool_resources } = options;
  const file_ids = tool_resources?.[EToolResources.execute_code]?.file_ids ?? [];
  const dbFiles = await getFiles({ file_id: { $in: file_ids } });
  const agentResourceIds = new Set(file_ids);
  const resourceFiles = tool_resources?.[EToolResources.execute_code]?.files ?? [];
  const dbFiles = ((await getFiles({ file_id: { $in: file_ids } })) ?? []).concat(resourceFiles);

  const files = [];
  const sessions = new Map();
  for (const file of dbFiles) {
  let toolContext = '';

  for (let i = 0; i < dbFiles.length; i++) {
    const file = dbFiles[i];
    if (!file) {
      continue;
    }

    if (file.metadata.fileIdentifier) {
      const [session_id, id] = file.metadata.fileIdentifier.split('/');
      const [path, queryString] = file.metadata.fileIdentifier.split('?');
      const [session_id, id] = path.split('/');

      const pushFile = () => {
        if (!toolContext) {
          toolContext = `- Note: The following files are available in the "${Tools.execute_code}" tool environment:`;
        }
        toolContext += `\n\t- /mnt/data/${file.filename}${
          agentResourceIds.has(file.file_id) ? '' : ' (just attached by user)'
        }`;
        files.push({
          id,
          session_id,
          name: file.filename,
        });
      };

      if (sessions.has(session_id)) {
        pushFile();
        continue;
      }

      let queryParams = {};
      if (queryString) {
        queryParams = Object.fromEntries(new URLSearchParams(queryString).entries());
      }

      const reuploadFile = async () => {
        try {
          const { getDownloadStream } = getStrategyFunctions(file.source);

@@ -171,6 +207,7 @@ const primeFiles = async (options, apiKey) => {
            req: options.req,
            stream,
            filename: file.filename,
            entity_id: queryParams.entity_id,
            apiKey,
          });
          await updateFile({ file_id: file.file_id, metadata: { fileIdentifier } });

@@ -198,7 +235,7 @@ const primeFiles = async (options, apiKey) => {
    }
  }

  return files;
  return { files, toolContext };
};

module.exports = {

@@ -97,6 +97,7 @@ async function encodeAndFormat(req, files, endpoint, mode) {
      filepath: file.filepath,
      filename: file.filename,
      embedded: !!file.embedded,
      metadata: file.metadata,
    };

    if (file.height && file.width) {

@@ -20,7 +20,7 @@ const {
const { EnvVar } = require('@librechat/agents');
const { addResourceFileId, deleteResourceFileId } = require('~/server/controllers/assistants/v2');
const { convertImage, resizeAndConvert } = require('~/server/services/Files/images');
const { addAgentResourceFile, removeAgentResourceFile } = require('~/models/Agent');
const { addAgentResourceFile, removeAgentResourceFiles } = require('~/models/Agent');
const { getOpenAIClient } = require('~/server/controllers/assistants/helpers');
const { createFile, updateFileUsage, deleteFiles } = require('~/models/File');
const { loadAuthValues } = require('~/app/clients/tools/util');

@@ -29,10 +29,34 @@ const { getStrategyFunctions } = require('./strategies');
const { determineFileType } = require('~/server/utils');
const { logger } = require('~/config');

const processFiles = async (files) => {
/**
 *
 * @param {Array<MongoFile>} files
 * @param {Array<string>} [fileIds]
 * @returns
 */
const processFiles = async (files, fileIds) => {
  const promises = [];
  const seen = new Set();

  for (let file of files) {
    const { file_id } = file;
    if (seen.has(file_id)) {
      continue;
    }
    seen.add(file_id);
    promises.push(updateFileUsage({ file_id }));
  }

  if (!fileIds) {
    return await Promise.all(promises);
  }

  for (let file_id of fileIds) {
    if (seen.has(file_id)) {
      continue;
    }
    seen.add(file_id);
    promises.push(updateFileUsage({ file_id }));
  }

@@ -44,7 +68,7 @@ const processFiles = async (files) => {
 * Enqueues the delete operation to the leaky bucket queue if necessary, or adds it directly to promises.
 *
 * @param {object} params - The passed parameters.
 * @param {Express.Request} params.req - The express request object.
 * @param {ServerRequest} params.req - The express request object.
 * @param {MongoFile} params.file - The file object to delete.
 * @param {Function} params.deleteFile - The delete file function.
 * @param {Promise[]} params.promises - The array of promises to await.

@@ -91,7 +115,7 @@ function enqueueDeleteOperation({ req, file, deleteFile, promises, resolvedFileI
 *
 * @param {Object} params - The params object.
 * @param {MongoFile[]} params.files - The file objects to delete.
 * @param {Express.Request} params.req - The express request object.
 * @param {ServerRequest} params.req - The express request object.
 * @param {DeleteFilesBody} params.req.body - The request body.
 * @param {string} [params.req.body.agent_id] - The agent ID if file uploaded is associated to an agent.
 * @param {string} [params.req.body.assistant_id] - The assistant ID if file uploaded is associated to an assistant.

@@ -128,18 +152,16 @@ const processDeleteRequest = async ({ req, files }) => {
    await initializeClients();
  }

  const agentFiles = [];

  for (const file of files) {
    const source = file.source ?? FileSources.local;

    if (req.body.agent_id && req.body.tool_resource) {
      promises.push(
        removeAgentResourceFile({
          req,
          file_id: file.file_id,
          agent_id: req.body.agent_id,
          tool_resource: req.body.tool_resource,
        }),
      );
      agentFiles.push({
        tool_resource: req.body.tool_resource,
        file_id: file.file_id,
      });
    }

    if (checkOpenAIStorage(source) && !client[source]) {

@@ -183,6 +205,15 @@ const processDeleteRequest = async ({ req, files }) => {
    enqueueDeleteOperation({ req, file, deleteFile, promises, resolvedFileIds, openai });
  }

  if (agentFiles.length > 0) {
    promises.push(
      removeAgentResourceFiles({
        agent_id: req.body.agent_id,
        files: agentFiles,
      }),
    );
  }

  await Promise.allSettled(promises);
  await deleteFiles(resolvedFileIds);
};

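Instead of issuing one removeAgentResourceFile update per deleted file, the loop above now only collects { tool_resource, file_id } pairs and removeAgentResourceFiles applies them in a single batched update, which is the fix for the tool-resource race condition named in the commit message. A toy sketch of the batching idea with an in-memory stand-in for the agent document (this is not the real Mongo model):

// Sketch only: one write per batch instead of one read-modify-write per file.
const agentStore = new Map([
  ['agent_1', { tool_resources: { execute_code: { file_ids: ['a', 'b', 'c'] } } }],
]);

function removeAgentResourceFiles({ agent_id, files }) {
  const agent = agentStore.get(agent_id);
  for (const { tool_resource, file_id } of files) {
    const resource = agent.tool_resources[tool_resource];
    resource.file_ids = resource.file_ids.filter((id) => id !== file_id);
  }
  agentStore.set(agent_id, agent); // single write covering the whole batch
  return agent;
}

const updated = removeAgentResourceFiles({
  agent_id: 'agent_1',
  files: [
    { tool_resource: 'execute_code', file_id: 'a' },
    { tool_resource: 'execute_code', file_id: 'c' },
  ],
});
console.log(updated.tool_resources.execute_code.file_ids); // [ 'b' ]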
@@ -242,14 +273,14 @@ const processFileURL = async ({ fileStrategy, userId, URL, fileName, basePath, c
 * Saves file metadata to the database with an expiry TTL.
 *
 * @param {Object} params - The parameters object.
 * @param {Express.Request} params.req - The Express request object.
 * @param {ServerRequest} params.req - The Express request object.
 * @param {Express.Response} [params.res] - The Express response object.
 * @param {Express.Multer.File} params.file - The uploaded file.
 * @param {ImageMetadata} params.metadata - Additional metadata for the file.
 * @param {boolean} params.returnFile - Whether to return the file metadata or return response as normal.
 * @returns {Promise<void>}
 */
const processImageFile = async ({ req, res, file, metadata, returnFile = false }) => {
const processImageFile = async ({ req, res, metadata, returnFile = false }) => {
  const { file } = req;
  const source = req.app.locals.fileStrategy;
  const { handleImageUpload } = getStrategyFunctions(source);
  const { file_id, temp_file_id, endpoint } = metadata;

@@ -289,7 +320,7 @@ const processImageFile = async ({ req, res, file, metadata, returnFile = false }
 * returns minimal file metadata, without saving to the database.
 *
 * @param {Object} params - The parameters object.
 * @param {Express.Request} params.req - The Express request object.
 * @param {ServerRequest} params.req - The Express request object.
 * @param {FileContext} params.context - The context of the file (e.g., 'avatar', 'image_generation', etc.)
 * @param {boolean} [params.resize=true] - Whether to resize and convert the image to target format. Default is `true`.
 * @param {{ buffer: Buffer, width: number, height: number, bytes: number, filename: string, type: string, file_id: string }} [params.metadata] - Required metadata for the file if resize is false.

@@ -335,13 +366,12 @@ const uploadImageBuffer = async ({ req, context, metadata = {}, resize = true })
 * Files must be deleted from the server filesystem manually.
 *
 * @param {Object} params - The parameters object.
 * @param {Express.Request} params.req - The Express request object.
 * @param {ServerRequest} params.req - The Express request object.
 * @param {Express.Response} params.res - The Express response object.
 * @param {Express.Multer.File} params.file - The uploaded file.
 * @param {FileMetadata} params.metadata - Additional metadata for the file.
 * @returns {Promise<void>}
 */
const processFileUpload = async ({ req, res, file, metadata }) => {
const processFileUpload = async ({ req, res, metadata }) => {
  const isAssistantUpload = isAssistantsEndpoint(metadata.endpoint);
  const assistantSource =
    metadata.endpoint === EModelEndpoint.azureAssistants ? FileSources.azure : FileSources.openai;

@@ -355,6 +385,7 @@ const processFileUpload = async ({ req, res, file, metadata }) => {
    ({ openai } = await getOpenAIClient({ req }));
  }

  const { file } = req;
  const {
    id,
    bytes,

@@ -422,13 +453,13 @@ const processFileUpload = async ({ req, res, file, metadata }) => {
 * Files must be deleted from the server filesystem manually.
 *
 * @param {Object} params - The parameters object.
 * @param {Express.Request} params.req - The Express request object.
 * @param {ServerRequest} params.req - The Express request object.
 * @param {Express.Response} params.res - The Express response object.
 * @param {Express.Multer.File} params.file - The uploaded file.
 * @param {FileMetadata} params.metadata - Additional metadata for the file.
 * @returns {Promise<void>}
 */
const processAgentFileUpload = async ({ req, res, file, metadata }) => {
const processAgentFileUpload = async ({ req, res, metadata }) => {
  const { file } = req;
  const { agent_id, tool_resource } = metadata;
  if (agent_id && !tool_resource) {
    throw new Error('No tool resource provided for agent file upload');

@@ -453,6 +484,7 @@ const processAgentFileUpload = async ({ req, res, file, metadata }) => {
      stream,
      filename: file.originalname,
      apiKey: result[EnvVar.CODE_API_KEY],
      entity_id: messageAttachment === true ? undefined : agent_id,
    });
    fileInfoMetadata = { fileIdentifier };
  }

@@ -576,7 +608,7 @@ const processOpenAIFile = async ({
/**
 * Process OpenAI image files, convert to target format, save and return file metadata.
 * @param {object} params - The params object.
 * @param {Express.Request} params.req - The Express request object.
 * @param {ServerRequest} params.req - The Express request object.
 * @param {Buffer} params.buffer - The image buffer.
 * @param {string} params.file_id - The file ID.
 * @param {string} params.filename - The filename.

@@ -708,20 +740,20 @@ async function retrieveAndProcessFile({
 * Filters a file based on its size and the endpoint origin.
 *
 * @param {Object} params - The parameters for the function.
 * @param {object} params.req - The request object from Express.
 * @param {ServerRequest} params.req - The request object from Express.
 * @param {string} [params.req.endpoint]
 * @param {string} [params.req.file_id]
 * @param {number} [params.req.width]
 * @param {number} [params.req.height]
 * @param {number} [params.req.version]
 * @param {Express.Multer.File} params.file - The file uploaded to the server via multer.
 * @param {boolean} [params.image] - Whether the file expected is an image.
 * @param {boolean} [params.isAvatar] - Whether the file expected is a user or entity avatar.
 * @returns {void}
 *
 * @throws {Error} If a file exception is caught (invalid file size or type, lack of metadata).
 */
function filterFile({ req, file, image, isAvatar }) {
function filterFile({ req, image, isAvatar }) {
  const { file } = req;
  const { endpoint, file_id, width, height } = req.body;

  if (!file_id && !isAvatar) {

@@ -7,6 +7,7 @@ const { logger } = require('~/config');
 *
 * @param {string} userId - The unique identifier of the user for whom the plugin authentication value is to be retrieved.
 * @param {string} authField - The specific authentication field (e.g., 'API_KEY', 'URL') whose value is to be retrieved and decrypted.
 * @param {boolean} throwError - Whether to throw an error if the authentication value does not exist. Defaults to `true`.
 * @returns {Promise<string|null>} A promise that resolves to the decrypted authentication value if found, or `null` if no such authentication value exists for the given user and field.
 *
 * The function throws an error if it encounters any issue during the retrieval or decryption process, or if the authentication value does not exist.

@@ -22,7 +23,7 @@ const { logger } = require('~/config');
 * @throws {Error} Throws an error if there's an issue during the retrieval or decryption process, or if the authentication value does not exist.
 * @async
 */
const getUserPluginAuthValue = async (userId, authField) => {
const getUserPluginAuthValue = async (userId, authField, throwError = true) => {
  try {
    const pluginAuth = await PluginAuth.findOne({ userId, authField }).lean();
    if (!pluginAuth) {

@@ -32,6 +33,9 @@ const getUserPluginAuthValue = async (userId, authField) => {
    const decryptedValue = await decrypt(pluginAuth.value);
    return decryptedValue;
  } catch (err) {
    if (!throwError) {
      return null;
    }
    logger.error('[getUserPluginAuthValue]', err);
    throw err;
  }

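getUserPluginAuthValue now accepts a throwError flag: passing false makes a missing credential resolve to null instead of throwing, which suits optional keys such as the Code Interpreter API key. A hedged usage sketch; the require path and the 'CODE_API_KEY' field name are assumptions for illustration only:

// Usage sketch (module path assumed; '~/...' is the project's module alias).
const { getUserPluginAuthValue } = require('~/server/services/PluginService');

async function getOptionalApiKey(userId) {
  // With throwError = false, a missing value comes back as null instead of an exception.
  const key = await getUserPluginAuthValue(userId, 'CODE_API_KEY', false);
  return key ?? process.env.CODE_API_KEY ?? null;
}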
@@ -1,8 +1,8 @@
const fs = require('fs');
const path = require('path');
const { zodToJsonSchema } = require('zod-to-json-schema');
const { Calculator } = require('@langchain/community/tools/calculator');
const { tool: toolFn, Tool } = require('@langchain/core/tools');
const { Calculator } = require('@langchain/community/tools/calculator');
const {
  Tools,
  ContentTypes,

@@ -170,7 +170,7 @@ async function processRequiredActions(client, requiredActions) {
    requiredActions,
  );
  const tools = requiredActions.map((action) => action.tool);
  const loadedTools = await loadTools({
  const { loadedTools } = await loadTools({
    user: client.req.user.id,
    model: client.req.body.model ?? 'gpt-4o-mini',
    tools,

@@ -183,7 +183,6 @@ async function processRequiredActions(client, requiredActions) {
      fileStrategy: client.req.app.locals.fileStrategy,
      returnMetadata: true,
    },
    skipSpecs: true,
  });

  const ToolMap = loadedTools.reduce((map, tool) => {

@@ -378,21 +377,21 @@ async function loadAgentTools({ req, agent_id, tools, tool_resources, openAIApiK
  if (!tools || tools.length === 0) {
    return {};
  }
  const loadedTools = await loadTools({
  const { loadedTools, toolContextMap } = await loadTools({
    user: req.user.id,
    // model: req.body.model ?? 'gpt-4o-mini',
    tools,
    functions: true,
    isAgent: agent_id != null,
    options: {
      req,
      openAIApiKey,
      tool_resources,
      returnMetadata: true,
      processFileURL,
      uploadImageBuffer,
      returnMetadata: true,
      fileStrategy: req.app.locals.fileStrategy,
    },
    skipSpecs: true,
  });

  const agentTools = [];

@@ -403,16 +402,19 @@ async function loadAgentTools({ req, agent_id, tools, tool_resources, openAIApiK
      continue;
    }

    const toolInstance = toolFn(
      async (...args) => {
        return tool['_call'](...args);
      },
      {
        name: tool.name,
        description: tool.description,
        schema: tool.schema,
      },
    );
    const toolDefinition = {
      name: tool.name,
      schema: tool.schema,
      description: tool.description,
    };

    if (imageGenTools.has(tool.name)) {
      toolDefinition.responseFormat = 'content_and_artifact';
    }

    const toolInstance = toolFn(async (...args) => {
      return tool['_call'](...args);
    }, toolDefinition);

    agentTools.push(toolInstance);
  }

@@ -476,6 +478,7 @@ async function loadAgentTools({ req, agent_id, tools, tool_resources, openAIApiK

  return {
    tools: agentTools,
    toolContextMap,
  };
}

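loadAgentTools above rebuilds each structured tool with toolFn from @langchain/core/tools and marks image-generation tools with responseFormat: 'content_and_artifact', so a tool can return model-facing content plus a separate artifact for the client. A standalone sketch of that wrapper shape; the echo tool below is invented for illustration and is not part of the commit:

const { tool } = require('@langchain/core/tools');
const { z } = require('zod');

const echo = tool(
  async ({ text }) => {
    // With 'content_and_artifact', the handler returns a [content, artifact] tuple:
    // the string goes back to the model, the artifact is kept for the caller/UI.
    return [`You said: ${text}`, { raw: text, createdAt: Date.now() }];
  },
  {
    name: 'echo',
    description: 'Echoes the input text back to the caller.',
    schema: z.object({ text: z.string() }),
    responseFormat: 'content_and_artifact',
  },
);

echo.invoke({ text: 'hi' }).then((result) => console.log(result));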
@@ -32,17 +32,20 @@ async function loadDefaultInterface(config, configDefaults, roleName = SystemRol
    bookmarks: interfaceConfig?.bookmarks ?? defaults.bookmarks,
    prompts: interfaceConfig?.prompts ?? defaults.prompts,
    multiConvo: interfaceConfig?.multiConvo ?? defaults.multiConvo,
    agents: interfaceConfig?.agents ?? defaults.agents,
  });

  await updateAccessPermissions(roleName, {
    [PermissionTypes.PROMPTS]: { [Permissions.USE]: loadedInterface.prompts },
    [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: loadedInterface.bookmarks },
    [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: loadedInterface.multiConvo },
    [PermissionTypes.AGENTS]: { [Permissions.USE]: loadedInterface.agents },
  });
  await updateAccessPermissions(SystemRoles.ADMIN, {
    [PermissionTypes.PROMPTS]: { [Permissions.USE]: loadedInterface.prompts },
    [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: loadedInterface.bookmarks },
    [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: loadedInterface.multiConvo },
    [PermissionTypes.AGENTS]: { [Permissions.USE]: loadedInterface.agents },
  });

  let i = 0;

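loadDefaultInterface now resolves an agents flag from the interface config (falling back to the defaults) and forwards it to updateAccessPermissions as PermissionTypes.AGENTS for both the given role and SystemRoles.ADMIN. A small sketch of the flag-to-permission mapping in isolation; the real function also persists the result to the Role model:

// Sketch only: shows the shape passed to updateAccessPermissions above.
const { PermissionTypes, Permissions } = require('librechat-data-provider');

function toPermissionUpdate(loadedInterface) {
  return {
    [PermissionTypes.PROMPTS]: { [Permissions.USE]: loadedInterface.prompts },
    [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: loadedInterface.bookmarks },
    [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: loadedInterface.multiConvo },
    [PermissionTypes.AGENTS]: { [Permissions.USE]: loadedInterface.agents },
  };
}

console.log(toPermissionUpdate({ prompts: true, bookmarks: true, multiConvo: true, agents: true }));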
@@ -7,8 +7,15 @@ jest.mock('~/models/Role', () => ({
}));

describe('loadDefaultInterface', () => {
  it('should call updateAccessPermissions with the correct parameters when prompts and bookmarks are true', async () => {
    const config = { interface: { prompts: true, bookmarks: true } };
  it('should call updateAccessPermissions with the correct parameters when permission types are true', async () => {
    const config = {
      interface: {
        prompts: true,
        bookmarks: true,
        multiConvo: true,
        agents: true,
      },
    };
    const configDefaults = { interface: {} };

    await loadDefaultInterface(config, configDefaults);

@@ -16,12 +23,20 @@ describe('loadDefaultInterface', () => {
    expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: true },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: true },
    });
  });

  it('should call updateAccessPermissions with false when prompts and bookmarks are false', async () => {
    const config = { interface: { prompts: false, bookmarks: false } };
  it('should call updateAccessPermissions with false when permission types are false', async () => {
    const config = {
      interface: {
        prompts: false,
        bookmarks: false,
        multiConvo: false,
        agents: false,
      },
    };
    const configDefaults = { interface: {} };

    await loadDefaultInterface(config, configDefaults);

@@ -29,11 +44,12 @@ describe('loadDefaultInterface', () => {
    expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: false },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: false },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: false },
    });
  });

  it('should call updateAccessPermissions with undefined when prompts and bookmarks are not specified in config', async () => {
  it('should call updateAccessPermissions with undefined when permission types are not specified in config', async () => {
    const config = {};
    const configDefaults = { interface: {} };

@@ -43,11 +59,19 @@ describe('loadDefaultInterface', () => {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
    });
  });

  it('should call updateAccessPermissions with undefined when prompts and bookmarks are explicitly undefined', async () => {
    const config = { interface: { prompts: undefined, bookmarks: undefined } };
  it('should call updateAccessPermissions with undefined when permission types are explicitly undefined', async () => {
    const config = {
      interface: {
        prompts: undefined,
        bookmarks: undefined,
        multiConvo: undefined,
        agents: undefined,
      },
    };
    const configDefaults = { interface: {} };

    await loadDefaultInterface(config, configDefaults);

@@ -56,11 +80,19 @@ describe('loadDefaultInterface', () => {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
    });
  });

  it('should call updateAccessPermissions with mixed values for prompts and bookmarks', async () => {
    const config = { interface: { prompts: true, bookmarks: false } };
  it('should call updateAccessPermissions with mixed values for permission types', async () => {
    const config = {
      interface: {
        prompts: true,
        bookmarks: false,
        multiConvo: undefined,
        agents: true,
      },
    };
    const configDefaults = { interface: {} };

    await loadDefaultInterface(config, configDefaults);

@@ -69,19 +101,28 @@ describe('loadDefaultInterface', () => {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: true },
    });
  });

  it('should call updateAccessPermissions with true when config is undefined', async () => {
    const config = undefined;
    const configDefaults = { interface: { prompts: true, bookmarks: true } };
    const configDefaults = {
      interface: {
        prompts: true,
        bookmarks: true,
        multiConvo: true,
        agents: true,
      },
    };

    await loadDefaultInterface(config, configDefaults);

    expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: true },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: true },
    });
  });

@@ -95,6 +136,7 @@ describe('loadDefaultInterface', () => {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
    });
  });

@@ -108,6 +150,7 @@ describe('loadDefaultInterface', () => {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: false },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
    });
  });

@@ -121,11 +164,19 @@ describe('loadDefaultInterface', () => {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
    });
  });

  it('should call updateAccessPermissions with all interface options including multiConvo', async () => {
    const config = { interface: { prompts: true, bookmarks: false, multiConvo: true } };
    const config = {
      interface: {
        prompts: true,
        bookmarks: false,
        multiConvo: true,
        agents: false,
      },
    };
    const configDefaults = { interface: {} };

    await loadDefaultInterface(config, configDefaults);

@@ -134,12 +185,20 @@ describe('loadDefaultInterface', () => {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: false },
    });
  });

  it('should use default values for multiConvo when config is undefined', async () => {
    const config = undefined;
    const configDefaults = { interface: { prompts: true, bookmarks: true, multiConvo: false } };
    const configDefaults = {
      interface: {
        prompts: true,
        bookmarks: true,
        multiConvo: false,
        agents: undefined,
      },
    };

    await loadDefaultInterface(config, configDefaults);

@@ -147,6 +206,7 @@ describe('loadDefaultInterface', () => {
      [PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
      [PermissionTypes.BOOKMARKS]: { [Permissions.USE]: true },
      [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: false },
      [PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
    });
  });
});