mirror of https://github.com/danny-avila/LibreChat.git (synced 2025-09-21 21:50:49 +02:00)
✨ feat: Assistants API, General File Support, Side Panel, File Explorer (#1696)
* feat: assistant name/icon in Landing & Header
* feat: assistant name in textarea placeholder, and use `Assistant` as default name
* feat: display non-image files in user messages
* fix: only render files if files.length is > 0
* refactor(config -> file-config): move file related configuration values to separate module, add excel types
* chore: spreadsheet file rendering
* fix(Landing): dark mode style for Assistant Name
* refactor: move progress incrementing to own hook, start smaller, cap near limit (1)
* refactor(useContentHandler): add empty Text part if last part was completed tool or image
* chore: add accordion trigger border styling for dark mode
* feat: Assistant Builder model selection
* chore: use Spinner when Assistant is mutating
* fix(get/assistants): return correct response object `AssistantListResponse`
* refactor(Spinner): pass size as prop
* refactor: make assistant CRUD mutations optimistic, add types for options
* chore: remove assistants route and view
* chore: move assistant builder components to separate directory
* feat(ContextButton): delete Assistant via context button/dialog, add localization
* refactor: conditionally show use and context menu buttons, add localization for create assistant
* feat: save side panel states to localStorage
* style(SidePanel): improve avatar menu and assistant select styling for dark mode
* refactor: make NavToggle reusable for either side (left or right), add SidePanel Toggle with ability to close it completely
* fix: resize handle and navToggle behavior
* fix(/avatar/:assistant_id): await `deleteFile` and assign unique name to uploaded image
* WIP: file UI components from PR #576
* refactor(OpenAIMinimalIcon): pass className
* feat: formatDate helper fn
* feat: DataTableColumnHeader
* feat: add row selection, formatted row values, number of rows selected
* WIP: add files to Side panel temporarily
* feat: `LB_QueueAsyncCall`: Leaky Bucket queue for external APIs, use in `processDeleteRequest`
* fix(TFile): correct `source` type with `FileSources`
* fix(useFileHandling): use `continue` instead of return when iterating multiple files, add file type to extendedFile
* chore: add generic setter type
* refactor(processDeleteRequest): settle promises to prevent rejections from processing deletions, log errors
* feat: `useFileDeletion` to reuse file deletion logic
* refactor(useFileDeletion): make `setFiles` an optional param and use object as param
* feat: useDeleteFilesFromTable
* feat: use real `files` data and add deletion action to data table
* fix(Table): make headers sticky
* feat: add dynamic filtering for columns; only show to user Host or OpenAI storage type
* style(DropdownMenu): replace `slate` with `gray`
* style(DataTable): apply dark mode themes and other misc styling
* style(Columns): add color to OpenAI Storage option
* refactor(FileContainer): make file preview reusable
* refactor(Images): make image preview reusable
* refactor(FilePreview): make file prop optional for FileIcon and FilePreview, fix relative style
* feat(Columns): add file/image previews, set a minimum size to show for file size in bytes
* WIP: File Panel with real files and formatted
* feat: open files dialog from panel
* style: file data table mobile and general column styling fixes
* refactor(api/files): return files sorted by the most recently updated
* refactor: provide fileMap through context to prevent re-selecting files to map in different areas; remove unused imports commented out in PanelColumns
* refactor(ExtendFile): make File type optional, add `attached` to prevent attached files from being deleted on remove, make Message.files a partial TFile type
* feat: attach files through file panel
* refactor(useFileHandling): move files to the start of cache list when uploaded
* refactor(useDeleteFilesMutation): delete files from cache when successfully deleted from server
* fix(FileRow): handle possible edge case of duplication due to attaching recently uploaded file
* style(SidePanel): make resize grip border transparent, remove unnecessary styling on close sidepanel button
* feat: action utilities and tests
* refactor(actions): add `ValidationResult` type and change wording for no server URL found
* refactor(actions): check for empty server URL
* fix(data-provider): revert tsconfig to fix type issue resolution
* feat(client): first pass of actions input for assistants
* refactor(FunctionSignature): change method to output object instead of string
* refactor(models/Assistant): add actions field to schema, use searchParams object for methods, and add `getAssistant`
* feat: post actions input first pass
  - create new Action document
  - add actions to Assistant DB document
  - create /action/:assistant_id POST route
  - pass more props down from PanelSwitcher, derive assistant_id from switcher
  - move privacy policy to ActionInput
  - reset data on input change/validation
  - add `useUpdateAction`
  - conform FunctionSignature type to FunctionTool
  - add action, assistant doc, update hook related types
* refactor: optimize assistant/actions relationship
  - pass domain in metadata as hostname and not a URL
  - include domain in tool name
  - add `getActions` for actions retrieval by user
  - add `getAssistants` for assistant docs retrieval by user
  - add `assistant_id` to Action schema
  - move actions to own module as a subroute to `api/assistants`
  - add `useGetActionsQuery` and `useGetAssistantDocsQuery` hooks
  - fix Action type def
* feat: show assistant actions in assistant builder
* feat: switch to actions on action click, editing action styling
* fix: add Assistant state for builder panel to allow immediate selection of newly created assistants as well as retaining the current assistant when switching to a different panel within the builder
* refactor(SidePanel/NavToggle): offset less from right when SidePanel is completely collapsed
* chore: rename `processActions` -> `processRequiredActions`
* chore: rename Assistant API Action to RequiredAction
* refactor(actions): avoid nesting actual API params under generic `requestBody` to optimize LLM token usage
* fix(handleTools): avoid calling `validTool` if not defined, add optional param to skip the loading of specs, which throws an error in the context of assistants
* WIP: working first pass of toolCalls generated from openapi specs
* WIP: first pass ToolCall styling
* feat: programmatic IV encryption/decryption helpers
* fix: correct ActionAuth types/enums, and define type for AuthForm
* feat: encryption/decryption helpers for Action AuthMetadata
* refactor(getActions): remove sensitive fields from query response
* refactor(POST/actions): encrypt and remove sensitive fields from mutation response
* fix(ActionService): change ESM import to CJS
* feat: frontend auth handling for actions + optimistic update on action update/creation
* refactor(actions): use the correct variables and types for setAuth method
* refactor: POST /:assistant_id action can now handle updating an existing action, add `saved_auth_fields` to determine when user explicitly saves new auth creds; only send auth metadata if user explicitly saved fields
* refactor(createActionTool): catch errors and send back meaningful error message, add flag to `getActions` to determine whether to retrieve sensitive values or not
* refactor(ToolService): add `action` property to ToolCall PartMetadata to determine if the tool call was an action, fix parsing function name issue with actionDelimiter
* fix(ActionRequest): use URL class to correctly join endpoint parts for `execute` call
* feat: delete assistant actions
* refactor: conditionally show Available actions
* refactor: show `retrieval` and `code_interpreter` as Capabilities, swap `Switch` for `Checkbox`
* chore: remove shadow-stroke from messages
* WIP: first pass of Assistants Knowledge attachments
* refactor: remove AssistantsProvider in favor of FormProvider, fix selectedAssistant re-render bug, map Assistant file_ids to files via fileMap, initialize Knowledge component with mapped files if any exist
* fix: prevent deleting files on assistant file upload
* chore: remove console.log
* refactor(useUploadFileMutation): update files and assistants cache on upload
* chore: disable OAuth option as not supported yet
* feat: cancel assistant runs
* refactor: initialize OpenAI client with helper function, resolve all related circular dependencies
* fix(DALL-E): initialization
* fix(process): openai client initialization
* fix: select an existing Assistant when the active one is deleted
* chore: allow attaching files for assistant endpoint, send back relevant OpenAI error message when uploading, deconstruct openAI initialization correctly, add `message_file` to formData when a file is attached to the message but not the assistant
* fix: add assistant_id on newConvo
* fix(initializeClient): import fix
* chore: swap setAssistant for setOption in useEffect
* fix(DALL-E): add processFileURL to loadTools call
* chore: add customConfig to debug logs
* feat: delete threads on convo delete
* chore: replace Assistants icon
* chore: remove console.dir() in `abortRun`
* feat(AssistantService): accumulate text values from run in openai.responseText
* feat: titling for assistants endpoint
* chore: move panel file components to appropriate directory, add file checks for attaching files, change icon for Attach Files
* refactor: add localizations to tools, plugins, add condition for adding/removing user plugins so tool selections don't affect this value
* chore: disable `import from url` action for now
* chore: remove textMimeTypes from default fileConfig for now
* fix: catch tool errors and send as outputs with error messages
* fix: React warning about button as descendant of button
* style: retrieval and cancelled icon
* WIP: pass isSubmitting to Parts, use InProgressCall to display cancelled tool calls correctly, show domain/function name
* fix(meilisearch): fix `postSaveHook` issue where indexing expects a mongo document, and join all text content parts for meili indexing
* ci: fix dall-e tests
* ci: fix client tests
* fix: button types in actions panel
* fix: plugin auth form persisting across tool selections
* fix(ci): update AppService spec with `loadAndFormatTools`
* fix(clearConvos): add id check earlier on
* refactor(AssistantAvatar): set previewURL dynamically when metadata.avatar changes
* feat(assistants): addTitle cache setting
* fix(useSSE): resolve rebase conflicts
* fix: delete mutation
* style(SidePanel): make grip visible on active and hover, invisible otherwise
* ci: add data-provider tests to workflow, also update eslint/tsconfig to recognize specs, and add `text/csv` to fileConfig
* fix: handle edge case where auth object is undefined, and log errors
* refactor(actions): resolve schemas, add tests for resolving refs, import specs from separate file for tests
* chore: remove comment
* fix(ActionsInput): re-render bug when initializing states with action fields
* fix(patch/assistant): filter undefined tools
* chore: add logging for errors in assistants routes
* fix(updateAssistant): map actions to functions to avoid overwriting
* fix(actions): properly handle GET paths
* fix(convos): unhandled delete thread exception
* refactor(AssistantService): pass both thread_id and conversationId when sending intermediate assistant messages, remove `mapMessagesToSteps` from AssistantService
* refactor(useSSE): replace all messages with runMessages and pass latestMessageId to abortRun; fix(checkMessageGaps): include tool calls when syncing messages
* refactor(assistants/chat): invoke `createOnTextProgress` after thread creation
* chore: add typing
* style: sidepanel styling
* style: action tool call domain styling
* feat(assistants): default models, limit retrieval to certain models, add env variables to env.example
* feat: assistants API key in EndpointService
* refactor: set assistant model to conversation on assistant switch
* refactor: set assistant model to conversation on assistant select from panel
* fix(retrieveAndProcessFile): catch attempt to download file with `assistant` purpose which is not allowed; add logging
* feat: retrieval styling, handling, and logging
* chore: rename ASSISTANTS_REVERSE_PROXY to ASSISTANTS_BASE_URL
* feat: FileContext for file metadata
* feat: context file mgmt and filtering
* style(Select): hover/rounded changes
* refactor: explicit conversation switch, endpoint dependent, through `useSelectAssistant`, which does not create new chat if current endpoint is assistant endpoint
* fix(AssistantAvatar): make empty previewURL if no avatar present
* refactor: side panel mobile styling
* style: merge tool and action section, optimize mobile styling for action/tool buttons
* fix: localStorage issues
* fix(useSelectAssistant): invoke react query hook directly in select hook as Map was not being updated in time
* style: light mode fixes
* fix: prevent sidepanel nav styling from shifting layout up
* refactor: change default layout (collapsed by default)
* style: mobile optimization of DataTable
* style: datatable
* feat: client-side hide right-side panel
* chore(useNewConvo): add partial typing for preset
* fix(useSelectAssistant): pass correct model name by using template as preset
* WIP: assistant presets
* refactor(ToolService): add native solution for `TavilySearchResults` and log tool output errors
* refactor: organize imports and use native TavilySearchResults
* fix(TavilySearchResults): stringify result
* fix(ToolCall): show tool call outputs when not an action
* chore: rename Prompt Prefix to custom instructions (in user facing text only)
* refactor(EditPresetDialog): optimize setting title by debouncing, reset preset on dialog close to avoid state mixture
* feat: add `presetOverride` to overwrite active conversation settings when saving a Preset (relevant for client side updates only)
* feat: Assistant preset settings (client-side)
* fix(Switcher): only set assistant_id and model if current endpoint is Assistants
* feat: use `useDebouncedInput` for updating conversation settings, starting with EditPresetDialog title setting and Assistant instructions setting
* feat(Assistants): add instructions field to settings
* feat(chat/assistants): pass conversation settings to run body
* wip: begin localization and only allow actions if the assistant is created
* refactor(AssistantsPanel): knowledge localization, allow tools on creation
* feat: experimental: allow 'priming' values before assistant is created, that would normally require an assistant_id to be defined
* chore: trim console logs and make more meaningful
* chore: toast messages
* fix(ci): date test
* feat: create file when uploading Assistant Avatar
* feat: file upload rate limiting from custom config with dynamic file route initialization
* refactor: use file upload limiters on POST routes only
* refactor(fileConfig): add endpoints field for endpoint specific fileConfigs, add mergeConfig function, add tests
* refactor: fileConfig route, dynamic multer instances used on all '/' and '/images' POST routes, data service and query hook
* feat: supportedMimeTypesSchema, test for array of regex
* feat: configurable file config limits
* chore: clarify assistants file knowledge prereq.
* chore(useTextarea): default to localized 'Assistant' if assistant name is empty
* feat: configurable file limits and toggle file upload per endpoint
* fix(useUploadFileMutation): prevent updating assistant.files cache if file upload is a message_file attachment
* fix(AssistantSelect): set last selected assistant only when timeout successfully runs
* refactor(queries): disable assistant queries if assistants endpoint is not enabled
* chore(Switcher): add localization
* chore: pluralize `assistant` for `EModelEndpoint` key and value
* feat: show/hide assistant UI components based on endpoint availability; librechat.yaml config for disabling builder section and setting polling/timeout intervals
* fix(compactEndpointSchemas): use EModelEndpoint for schema access
* feat(runAssistant): use configured values from `librechat.yaml` for `pollIntervalMs` and `timeout`
* fix: naming issue
* wip: revert landing
* 🎉 happy birthday LibreChat (#1768)
  * happy birthday LibreChat
  * Refactor endpoint condition in Landing component
  * Update birthday message in Eng.tsx
  * fix(/config): avoid nesting ternaries
  * refactor(/config): check birthday
  ---------
  Co-authored-by: Danny Avila <messagedaniel@protonmail.com>
* fix: landing
* fix: landing
* fix(useMessageHelpers): hardcoded check to use EModelEndpoint instead
* fix(ci): convo test revert to main
* fix(assistants/chat): fix issue where assistant_id was being saved as model for convo
* chore: added logging, promises racing to prevent longer timeouts, explicit setting of maxRetries and timeouts, robust catching of invalid abortRun params
* refactor: use recoil state for `showStopButton` and only show for assistants endpoint after syncing conversation data
* refactor: optimize abortRun strategy using localStorage, refactor `abortConversation` to use async/await and await the result, refactor how the abortKey cache is set for runs
* fix(checkMessageGaps): assign `assistant_id` to synced messages if defined; prevents UI from showing blank assistant for cancelled messages
* refactor: re-order sequence of chat route, only allow aborting messages after run is created, cancel abortRun if there was a cancelling error (likely due to the run already being cancelled in the chat route), and add extra logging
* chore(typedefs): add httpAgent type to OpenAIClient
* refactor: use custom implementation of retrieving run with axios to allow for timing out run query
* fix(waitForRun): handle timed out run retrieval query
* refactor: update preset conditions:
  - presets will retain settings when a different endpoint is selected; for existing convos, either when modular or on assistant switch
  - no longer use `navigateToConvo` on preset select
* fix: temporary calculator hack as it expects string input when invoked
* fix: cancel abortRun only when cancelling error is a result of the run already being cancelled
* chore: remove use of `fileMaxSizeMB` and total counterpart (redundant)
* docs: custom config documentation update
* docs: assistants API setup and dotenv, new custom config fields
* refactor(Switcher): make Assistant switcher sticky in SidePanel
* chore(useSSE): remove console log of data and message index
* refactor(AssistantPanel): button styling and add secondary select button to bottom of panel
* refactor(OpenAIClient): allow passing conversationId to RunManager through titleConvo and initializeLLM to properly record title context tokens used in cases where conversationId was not defined by the client
* feat(assistants): token tracking for assistant runs
* chore(spendTokens): improve logging
* feat: support/exclude specific assistant IDs
* chore: update `librechat.example.yaml`, optimize `AppService` handling, add new tests for `AppService`, optimize missing/outdated config logging
* chore: mount docker logs to root of project
* chore: condense axios errors
* chore: bump vite
* chore: vite hot reload fix using latest version
* chore(getOpenAIModels): sort instruct models to the end of models list
* fix(assistants): user provided key
* fix(assistants): user provided key, invalidate more queries on revoke

---------

Co-authored-by: Marco Beretta <81851188+Berry-13@users.noreply.github.com>
This commit is contained in:
parent
cd2786441a
commit
ecd63eb9f1
316 changed files with 21873 additions and 6315 deletions
@@ -119,6 +119,14 @@ DEBUG_OPENAI=false
# OPENAI_ORGANIZATION=

#====================#
# Assistants API #
#====================#

# ASSISTANTS_API_KEY=
# ASSISTANTS_BASE_URL=
# ASSISTANTS_MODELS=gpt-3.5-turbo-0125,gpt-3.5-turbo-16k-0613,gpt-3.5-turbo-16k,gpt-3.5-turbo,gpt-4,gpt-4-0314,gpt-4-32k-0314,gpt-4-0613,gpt-3.5-turbo-0613,gpt-3.5-turbo-1106,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview

#============#
# OpenRouter #
#============#
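The Assistants API variables above are optional. As a point of reference only (this snippet is not part of the diff; the config object shape is assumed), they could be read like this when wiring up the endpoint:

// Illustrative only: parse the optional Assistants API settings from process.env.
const assistantsConfig = {
  apiKey: process.env.ASSISTANTS_API_KEY,
  baseURL: process.env.ASSISTANTS_BASE_URL || undefined, // formerly ASSISTANTS_REVERSE_PROXY
  models: (process.env.ASSISTANTS_MODELS || '')
    .split(',')
    .map((name) => name.trim())
    .filter(Boolean),
};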
@@ -131,6 +131,12 @@ module.exports = {
      },
    ],
    },
    {
      files: ['./packages/data-provider/specs/**/*.ts'],
      parserOptions: {
        project: './packages/data-provider/tsconfig.spec.json',
      },
    },
  ],
  settings: {
    react: {
3  .github/workflows/backend-review.yml  vendored
@@ -39,6 +39,9 @@ jobs:
      - name: Run unit tests
        run: cd api && npm run test:ci

      - name: Run librechat-data-provider unit tests
        run: cd packages/data-provider && npm run test:ci

      - name: Run linters
        uses: wearerequired/lint-action@v2
        with:
3  .gitignore  vendored
@@ -89,3 +89,6 @@ auth.json
/images

!client/src/components/Nav/SettingsTabs/Data/

# User uploads
uploads/
@@ -1,5 +1,6 @@
require('dotenv').config();
const { KeyvFile } = require('keyv-file');
const { Constants } = require('librechat-data-provider');
const { getUserKey, checkUserKeyExpiry } = require('../server/services/UserService');

const browserClient = async ({
@@ -48,7 +49,7 @@ const browserClient = async ({
    options = { ...options, parentMessageId, conversationId };
  }

  if (parentMessageId === '00000000-0000-0000-0000-000000000000') {
  if (parentMessageId === Constants.NO_PARENT) {
    delete options.conversationId;
  }
@@ -1,5 +1,5 @@
const crypto = require('crypto');
const { supportsBalanceCheck } = require('librechat-data-provider');
const { supportsBalanceCheck, Constants } = require('librechat-data-provider');
const { getConvo, getMessages, saveMessage, updateMessage, saveConvo } = require('~/models');
const { addSpaceIfNeeded, isEnabled } = require('~/server/utils');
const checkBalance = require('~/models/checkBalance');
@@ -77,7 +77,7 @@ class BaseClient {
    const saveOptions = this.getSaveOptions();
    this.abortController = opts.abortController ?? new AbortController();
    const conversationId = opts.conversationId ?? crypto.randomUUID();
    const parentMessageId = opts.parentMessageId ?? '00000000-0000-0000-0000-000000000000';
    const parentMessageId = opts.parentMessageId ?? Constants.NO_PARENT;
    const userMessageId = opts.overrideParentMessageId ?? crypto.randomUUID();
    let responseMessageId = opts.responseMessageId ?? crypto.randomUUID();
    let head = isEdited ? responseMessageId : parentMessageId;
@@ -552,7 +552,7 @@ class BaseClient {
   *
   * Each message object should have an 'id' or 'messageId' property and may have a 'parentMessageId' property.
   * The 'parentMessageId' is the ID of the message that the current message is a reply to.
   * If 'parentMessageId' is not present, null, or is '00000000-0000-0000-0000-000000000000',
   * If 'parentMessageId' is not present, null, or is Constants.NO_PARENT,
   * the message is considered a root message.
   *
   * @param {Object} options - The options for the function.
@@ -607,9 +607,7 @@ class BaseClient {
      }

      currentMessageId =
        message.parentMessageId === '00000000-0000-0000-0000-000000000000'
          ? null
          : message.parentMessageId;
        message.parentMessageId === Constants.NO_PARENT ? null : message.parentMessageId;
    }

    orderedMessages.reverse();
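The recurring change in these client files replaces the hardcoded zero UUID used to mark root messages with `Constants.NO_PARENT` from `librechat-data-provider`. A minimal sketch of the resulting pattern (the helper name here is illustrative, not from the diff):

const { Constants } = require('librechat-data-provider');

// A message with no real parent (or the NO_PARENT sentinel) is a conversation root.
function isRootMessage(message) {
  return !message.parentMessageId || message.parentMessageId === Constants.NO_PARENT;
}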
@@ -4,12 +4,13 @@ const { GoogleVertexAI } = require('langchain/llms/googlevertexai');
const { ChatGoogleGenerativeAI } = require('@langchain/google-genai');
const { ChatGoogleVertexAI } = require('langchain/chat_models/googlevertexai');
const { AIMessage, HumanMessage, SystemMessage } = require('langchain/schema');
const { encodeAndFormat, validateVisionModel } = require('~/server/services/Files/images');
const { encodeAndFormat } = require('~/server/services/Files/images');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('tiktoken');
const {
  validateVisionModel,
  getResponseSender,
  EModelEndpoint,
  endpointSettings,
  EModelEndpoint,
  AuthKeys,
} = require('librechat-data-provider');
const { getModelMaxTokens } = require('~/utils');
@@ -1,14 +1,19 @@
const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { getResponseSender, ImageDetailCost, ImageDetail } = require('librechat-data-provider');
const {
  getResponseSender,
  validateVisionModel,
  ImageDetailCost,
  ImageDetail,
} = require('librechat-data-provider');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('tiktoken');
const {
  getModelMaxTokens,
  genAzureChatCompletion,
  extractBaseURL,
  constructAzureURL,
  getModelMaxTokens,
  genAzureChatCompletion,
} = require('~/utils');
const { encodeAndFormat, validateVisionModel } = require('~/server/services/Files/images');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { truncateText, formatMessage, CUT_OFF_PROMPT } = require('./prompts');
const { handleOpenAIErrors } = require('./tools/util');
const spendTokens = require('~/models/spendTokens');
@@ -630,6 +635,7 @@ class OpenAIClient extends BaseClient {
    context,
    tokenBuffer,
    initialMessageCount,
    conversationId,
  }) {
    const modelOptions = {
      modelName: modelName ?? model,
@@ -677,7 +683,7 @@ class OpenAIClient extends BaseClient {
      callbacks: runManager.createCallbacks({
        context,
        tokenBuffer,
        conversationId: this.conversationId,
        conversationId: this.conversationId ?? conversationId,
        initialMessageCount,
      }),
    });
@@ -693,12 +699,13 @@ class OpenAIClient extends BaseClient {
   *
   * @param {Object} params - The parameters for the conversation title generation.
   * @param {string} params.text - The user's input.
   * @param {string} [params.conversationId] - The current conversationId, if not already defined on client initialization.
   * @param {string} [params.responseText=''] - The AI's immediate response to the user.
   *
   * @returns {Promise<string | 'New Chat'>} A promise that resolves to the generated conversation title.
   * In case of failure, it will return the default title, "New Chat".
   */
  async titleConvo({ text, responseText = '' }) {
  async titleConvo({ text, conversationId, responseText = '' }) {
    let title = 'New Chat';
    const convo = `||>User:
"${truncateText(text)}"
@@ -758,7 +765,12 @@ ${convo}

    try {
      this.abortController = new AbortController();
      const llm = this.initializeLLM({ ...modelOptions, context: 'title', tokenBuffer: 150 });
      const llm = this.initializeLLM({
        ...modelOptions,
        conversationId,
        context: 'title',
        tokenBuffer: 150,
      });
      title = await runTitleChain({ llm, text, convo, signal: this.abortController.signal });
    } catch (e) {
      if (e?.message?.toLowerCase()?.includes('abort')) {
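`titleConvo` now accepts an optional `conversationId` so title-generation token usage can be attributed to the right conversation even when the client was initialized without one. A hedged usage sketch (client setup omitted; wrapper and variable names are illustrative):

// Sketch: `client` is an initialized OpenAIClient.
async function generateTitle(client, conversation, userMessage, response) {
  // Falls back to the default title 'New Chat' if generation fails or is aborted.
  return client.titleConvo({
    text: userMessage.text,
    conversationId: conversation.conversationId,
    responseText: response.text,
  });
}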
@@ -3,6 +3,7 @@ const { CallbackManager } = require('langchain/callbacks');
const { BufferMemory, ChatMessageHistory } = require('langchain/memory');
const { initializeCustomAgent, initializeFunctionsAgent } = require('./agents');
const { addImages, buildErrorInput, buildPromptPrefix } = require('./output_parsers');
const { processFileURL } = require('~/server/services/Files/process');
const { EModelEndpoint } = require('librechat-data-provider');
const { formatLangChainMessages } = require('./prompts');
const checkBalance = require('~/models/checkBalance');
@@ -113,6 +114,7 @@ class PluginsClient extends OpenAIClient {
        openAIApiKey: this.openAIApiKey,
        conversationId: this.conversationId,
        fileStrategy: this.options.req.app.locals.fileStrategy,
        processFileURL,
        message,
      },
    });
@@ -1,5 +1,6 @@
const { formatMessage, formatLangChainMessages, formatFromLangChain } = require('./formatMessages');
const { Constants } = require('librechat-data-provider');
const { HumanMessage, AIMessage, SystemMessage } = require('langchain/schema');
const { formatMessage, formatLangChainMessages, formatFromLangChain } = require('./formatMessages');

describe('formatMessage', () => {
  it('formats user message', () => {
@@ -61,7 +62,7 @@ describe('formatMessage', () => {
      isCreatedByUser: true,
      isEdited: false,
      model: null,
      parentMessageId: '00000000-0000-0000-0000-000000000000',
      parentMessageId: Constants.NO_PARENT,
      sender: 'User',
      text: 'hi',
      tokenCount: 5,
@@ -1,3 +1,4 @@
const { Constants } = require('librechat-data-provider');
const { initializeFakeClient } = require('./FakeClient');

jest.mock('../../../lib/db/connectDb');
@@ -307,7 +308,7 @@ describe('BaseClient', () => {
    const unorderedMessages = [
      { id: '3', parentMessageId: '2', text: 'Message 3' },
      { id: '2', parentMessageId: '1', text: 'Message 2' },
      { id: '1', parentMessageId: '00000000-0000-0000-0000-000000000000', text: 'Message 1' },
      { id: '1', parentMessageId: Constants.NO_PARENT, text: 'Message 1' },
    ];

    it('should return ordered messages based on parentMessageId', () => {
@@ -316,7 +317,7 @@ describe('BaseClient', () => {
        parentMessageId: '3',
      });
      expect(result).toEqual([
        { id: '1', parentMessageId: '00000000-0000-0000-0000-000000000000', text: 'Message 1' },
        { id: '1', parentMessageId: Constants.NO_PARENT, text: 'Message 1' },
        { id: '2', parentMessageId: '1', text: 'Message 2' },
        { id: '3', parentMessageId: '2', text: 'Message 3' },
      ]);
@@ -1,6 +1,7 @@
const crypto = require('crypto');
const { Constants } = require('librechat-data-provider');
const { HumanChatMessage, AIChatMessage } = require('langchain/schema');
const PluginsClient = require('../PluginsClient');
const crypto = require('crypto');

jest.mock('~/lib/db/connectDb');
jest.mock('~/models/Conversation', () => {
@@ -66,7 +67,7 @@ describe('PluginsClient', () => {
      TestAgent.setOptions(opts);
    }
    const conversationId = opts.conversationId || crypto.randomUUID();
    const parentMessageId = opts.parentMessageId || '00000000-0000-0000-0000-000000000000';
    const parentMessageId = opts.parentMessageId || Constants.NO_PARENT;
    const userMessageId = opts.overrideParentMessageId || crypto.randomUUID();
    this.pastMessages = await TestAgent.loadHistory(
      conversationId,
@@ -3,8 +3,8 @@ const OpenAI = require('openai');
const { v4: uuidv4 } = require('uuid');
const { Tool } = require('langchain/tools');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { FileContext } = require('librechat-data-provider');
const { getImageBasename } = require('~/server/services/Files/images');
const { processFileURL } = require('~/server/services/Files/process');
const extractBaseURL = require('~/utils/extractBaseURL');
const { logger } = require('~/config');

@@ -14,6 +14,9 @@ class OpenAICreateImage extends Tool {

    this.userId = fields.userId;
    this.fileStrategy = fields.fileStrategy;
    if (fields.processFileURL) {
      this.processFileURL = fields.processFileURL.bind(this);
    }
    let apiKey = fields.DALLE2_API_KEY ?? fields.DALLE_API_KEY ?? this.getApiKey();

    const config = { apiKey };
@@ -80,13 +83,21 @@ Guidelines:
  }

  async _call(input) {
    const resp = await this.openai.images.generate({
    let resp;

    try {
      resp = await this.openai.images.generate({
        prompt: this.replaceUnwantedChars(input),
        // TODO: Future idea -- could we ask an LLM to extract these arguments from an input that might contain them?
        n: 1,
        // size: '1024x1024'
        size: '512x512',
      });
    } catch (error) {
      logger.error('[DALL-E] Problem generating the image:', error);
      return `Something went wrong when trying to generate the image. The DALL-E API may be unavailable:
Error Message: ${error.message}`;
    }

    const theImageUrl = resp.data[0].url;

@@ -110,15 +121,16 @@ Guidelines:
    });

    try {
      const result = await processFileURL({
      const result = await this.processFileURL({
        fileStrategy: this.fileStrategy,
        userId: this.userId,
        URL: theImageUrl,
        fileName: imageName,
        basePath: 'images',
        context: FileContext.image_generation,
      });

      this.result = this.wrapInMarkdown(result);
      this.result = this.wrapInMarkdown(result.filepath);
    } catch (error) {
      logger.error('Error while saving the image:', error);
      this.result = `Failed to save the image locally. ${error.message}`;
@@ -1,35 +1,42 @@
const availableTools = require('./manifest.json');
// Basic Tools
const CodeBrew = require('./CodeBrew');
const GoogleSearchAPI = require('./GoogleSearch');
const OpenAICreateImage = require('./DALL-E');
const DALLE3 = require('./structured/DALLE3');
const StructuredSD = require('./structured/StableDiffusion');
const StableDiffusionAPI = require('./StableDiffusion');
const WolframAlphaAPI = require('./Wolfram');
const StructuredWolfram = require('./structured/Wolfram');
const SelfReflectionTool = require('./SelfReflection');
const AzureAiSearch = require('./AzureAiSearch');
const StructuredACS = require('./structured/AzureAISearch');
const OpenAICreateImage = require('./DALL-E');
const StableDiffusionAPI = require('./StableDiffusion');
const SelfReflectionTool = require('./SelfReflection');

// Structured Tools
const DALLE3 = require('./structured/DALLE3');
const ChatTool = require('./structured/ChatTool');
const E2BTools = require('./structured/E2BTools');
const CodeSherpa = require('./structured/CodeSherpa');
const StructuredSD = require('./structured/StableDiffusion');
const StructuredACS = require('./structured/AzureAISearch');
const CodeSherpaTools = require('./structured/CodeSherpaTools');
const availableTools = require('./manifest.json');
const CodeBrew = require('./CodeBrew');
const StructuredWolfram = require('./structured/Wolfram');
const TavilySearchResults = require('./structured/TavilySearchResults');

module.exports = {
  availableTools,
  GoogleSearchAPI,
  OpenAICreateImage,
  DALLE3,
  StableDiffusionAPI,
  StructuredSD,
  WolframAlphaAPI,
  StructuredWolfram,
  SelfReflectionTool,
  AzureAiSearch,
  StructuredACS,
  E2BTools,
  ChatTool,
  CodeSherpa,
  CodeSherpaTools,
  // Basic Tools
  CodeBrew,
  AzureAiSearch,
  GoogleSearchAPI,
  WolframAlphaAPI,
  OpenAICreateImage,
  StableDiffusionAPI,
  SelfReflectionTool,
  // Structured Tools
  DALLE3,
  ChatTool,
  E2BTools,
  CodeSherpa,
  StructuredSD,
  StructuredACS,
  CodeSherpaTools,
  StructuredWolfram,
  TavilySearchResults,
};
@@ -108,6 +108,19 @@
      }
    ]
  },
  {
    "name": "Tavily Search",
    "pluginKey": "tavily_search_results_json",
    "description": "Tavily Search is a robust search API tailored specifically for LLM Agents. It seamlessly integrates with diverse data sources to ensure a superior, relevant search experience.",
    "icon": "https://tavily.com/favicon.ico",
    "authConfig": [
      {
        "authField": "TAVILY_API_KEY",
        "label": "Tavily API Key",
        "description": "Get your API key here: https://app.tavily.com/"
      }
    ]
  },
  {
    "name": "Calculator",
    "pluginKey": "calculator",
@@ -19,6 +19,13 @@ class AzureAISearch extends StructuredTool {
    this.name = 'azure-ai-search';
    this.description =
      'Use the \'azure-ai-search\' tool to retrieve search results relevant to your input';
    /* Used to initialize the Tool without necessary variables. */
    this.override = fields.override ?? false;

    // Define schema
    this.schema = z.object({
      query: z.string().describe('Search word or phrase to Azure AI Search'),
    });

    // Initialize properties using helper function
    this.serviceEndpoint = this._initializeField(
@@ -51,12 +58,16 @@ class AzureAISearch extends StructuredTool {
    );

    // Check for required fields
    if (!this.serviceEndpoint || !this.indexName || !this.apiKey) {
    if (!this.override && (!this.serviceEndpoint || !this.indexName || !this.apiKey)) {
      throw new Error(
        'Missing AZURE_AI_SEARCH_SERVICE_ENDPOINT, AZURE_AI_SEARCH_INDEX_NAME, or AZURE_AI_SEARCH_API_KEY environment variable.',
      );
    }

    if (this.override) {
      return;
    }

    // Create SearchClient
    this.client = new SearchClient(
      this.serviceEndpoint,
@@ -64,11 +75,6 @@ class AzureAISearch extends StructuredTool {
      new AzureKeyCredential(this.apiKey),
      { apiVersion: this.apiVersion },
    );

    // Define schema
    this.schema = z.object({
      query: z.string().describe('Search word or phrase to Azure AI Search'),
    });
  }

  // Improved error handling and logging
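The `override` flag added to these tool constructors lets the app instantiate a tool without its required environment variables, for example when it only needs tool metadata at startup; the missing-variable error and client creation are both skipped. A small sketch, with the require path assumed from the repository layout:

const AzureAISearch = require('~/app/clients/tools/structured/AzureAISearch');

// No AZURE_AI_SEARCH_* variables required; the SearchClient is never created.
const metadataOnly = new AzureAISearch({ override: true });

// Without override, missing variables still throw exactly as before.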
@@ -4,17 +4,25 @@ const OpenAI = require('openai');
const { v4: uuidv4 } = require('uuid');
const { Tool } = require('langchain/tools');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { FileContext } = require('librechat-data-provider');
const { getImageBasename } = require('~/server/services/Files/images');
const { processFileURL } = require('~/server/services/Files/process');
const extractBaseURL = require('~/utils/extractBaseURL');
const { logger } = require('~/config');

class DALLE3 extends Tool {
  constructor(fields = {}) {
    super();
    /* Used to initialize the Tool without necessary variables. */
    this.override = fields.override ?? false;
    /* Necessary for output to contain all image metadata. */
    this.returnMetadata = fields.returnMetadata ?? false;

    this.userId = fields.userId;
    this.fileStrategy = fields.fileStrategy;
    if (fields.processFileURL) {
      this.processFileURL = fields.processFileURL.bind(this);
    }

    let apiKey = fields.DALLE3_API_KEY ?? fields.DALLE_API_KEY ?? this.getApiKey();
    const config = { apiKey };
    if (process.env.DALLE_REVERSE_PROXY) {
@@ -81,7 +89,7 @@ class DALLE3 extends Tool {

  getApiKey() {
    const apiKey = process.env.DALLE3_API_KEY ?? process.env.DALLE_API_KEY ?? '';
    if (!apiKey) {
    if (!apiKey && !this.override) {
      throw new Error('Missing DALLE_API_KEY environment variable.');
    }
    return apiKey;
@@ -115,6 +123,7 @@ class DALLE3 extends Tool {
        n: 1,
      });
    } catch (error) {
      logger.error('[DALL-E-3] Problem generating the image:', error);
      return `Something went wrong when trying to generate the image. The DALL-E API may be unavailable:
Error Message: ${error.message}`;
    }
@@ -145,15 +154,26 @@ Error Message: ${error.message}`;
    });

    try {
      const result = await processFileURL({
      const result = await this.processFileURL({
        fileStrategy: this.fileStrategy,
        userId: this.userId,
        URL: theImageUrl,
        fileName: imageName,
        basePath: 'images',
        context: FileContext.image_generation,
      });

      this.result = this.wrapInMarkdown(result);
      if (this.returnMetadata) {
        this.result = {
          file_id: result.file_id,
          filename: result.filename,
          filepath: result.filepath,
          height: result.height,
          width: result.width,
        };
      } else {
        this.result = this.wrapInMarkdown(result.filepath);
      }
    } catch (error) {
      logger.error('Error while saving the image:', error);
      this.result = `Failed to save the image locally. ${error.message}`;
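With `returnMetadata` enabled, the DALL-E 3 tool stores the saved file's metadata object on `this.result` instead of a markdown link to `result.filepath`. A hedged construction sketch (require paths and option values are illustrative; assumes DALLE3_API_KEY or DALLE_API_KEY is set):

const DALLE3 = require('~/app/clients/tools/structured/DALLE3');
const { processFileURL } = require('~/server/services/Files/process');

const dalle = new DALLE3({
  processFileURL, // injected so the tool can save the generated image via the configured strategy
  fileStrategy: 'local',
  returnMetadata: true, // result becomes { file_id, filename, filepath, height, width }
});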
@@ -10,6 +10,9 @@ const { logger } = require('~/config');
class StableDiffusionAPI extends StructuredTool {
  constructor(fields) {
    super();
    /* Used to initialize the Tool without necessary variables. */
    this.override = fields.override ?? false;

    this.name = 'stable-diffusion';
    this.url = fields.SD_WEBUI_URL || this.getServerURL();
    this.description_for_model = `// Generate images and visuals using text.
@@ -52,7 +55,7 @@ class StableDiffusionAPI extends StructuredTool {

  getServerURL() {
    const url = process.env.SD_WEBUI_URL || '';
    if (!url) {
    if (!url && !this.override) {
      throw new Error('Missing SD_WEBUI_URL environment variable.');
    }
    return url;
92  api/app/clients/tools/structured/TavilySearchResults.js  Normal file
@@ -0,0 +1,92 @@
const { z } = require('zod');
const { Tool } = require('@langchain/core/tools');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');

class TavilySearchResults extends Tool {
  static lc_name() {
    return 'TavilySearchResults';
  }

  constructor(fields = {}) {
    super(fields);
    this.envVar = 'TAVILY_API_KEY';
    /* Used to initialize the Tool without necessary variables. */
    this.override = fields.override ?? false;
    this.apiKey = fields.apiKey ?? this.getApiKey();

    this.kwargs = fields?.kwargs ?? {};
    this.name = 'tavily_search_results_json';
    this.description =
      'A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events.';

    this.schema = z.object({
      query: z.string().min(1).describe('The search query string.'),
      max_results: z
        .number()
        .min(1)
        .max(10)
        .optional()
        .describe('The maximum number of search results to return. Defaults to 5.'),
      search_depth: z
        .enum(['basic', 'advanced'])
        .optional()
        .describe(
          'The depth of the search, affecting result quality and response time (`basic` or `advanced`). Default is basic for quick results and advanced for indepth high quality results but longer response time. Advanced calls equals 2 requests.',
        ),
      include_images: z
        .boolean()
        .optional()
        .describe(
          'Whether to include a list of query-related images in the response. Default is False.',
        ),
      include_answer: z
        .boolean()
        .optional()
        .describe('Whether to include answers in the search results. Default is False.'),
      // include_raw_content: z.boolean().optional().describe('Whether to include raw content in the search results. Default is False.'),
      // include_domains: z.array(z.string()).optional().describe('A list of domains to specifically include in the search results.'),
      // exclude_domains: z.array(z.string()).optional().describe('A list of domains to specifically exclude from the search results.'),
    });
  }

  getApiKey() {
    const apiKey = getEnvironmentVariable(this.envVar);
    if (!apiKey && !this.override) {
      throw new Error(`Missing ${this.envVar} environment variable.`);
    }
    return apiKey;
  }

  async _call(input) {
    const validationResult = this.schema.safeParse(input);
    if (!validationResult.success) {
      throw new Error(`Validation failed: ${JSON.stringify(validationResult.error.issues)}`);
    }

    const { query, ...rest } = validationResult.data;

    const requestBody = {
      api_key: this.apiKey,
      query,
      ...rest,
      ...this.kwargs,
    };

    const response = await fetch('https://api.tavily.com/search', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(requestBody),
    });

    const json = await response.json();
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}: ${json.error}`);
    }

    return JSON.stringify(json);
  }
}

module.exports = TavilySearchResults;
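A short usage sketch for the new tool: the key may also come from the TAVILY_API_KEY environment variable, and the input shape follows the zod schema above (the relative require path assumes a caller in the same directory):

const TavilySearchResults = require('./TavilySearchResults');

async function searchExample() {
  const tavily = new TavilySearchResults({ apiKey: process.env.TAVILY_API_KEY });
  // Validates input, POSTs to https://api.tavily.com/search, and returns the JSON body as a string.
  const raw = await tavily._call({ query: 'LibreChat Assistants API', max_results: 5, search_depth: 'basic' });
  return JSON.parse(raw);
}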
@@ -7,6 +7,9 @@ const { logger } = require('~/config');
class WolframAlphaAPI extends StructuredTool {
  constructor(fields) {
    super();
    /* Used to initialize the Tool without necessary variables. */
    this.override = fields.override ?? false;

    this.name = 'wolfram';
    this.apiKey = fields.WOLFRAM_APP_ID || this.getAppId();
    this.description_for_model = `// Access dynamic computation and curated data from WolframAlpha and Wolfram Cloud.
@@ -55,7 +58,7 @@ class WolframAlphaAPI extends StructuredTool {

  getAppId() {
    const appId = process.env.WOLFRAM_APP_ID || '';
    if (!appId) {
    if (!appId && !this.override) {
      throw new Error('Missing WOLFRAM_APP_ID environment variable.');
    }
    return appId;
@@ -1,14 +1,11 @@
const OpenAI = require('openai');
const DALLE3 = require('../DALLE3');
const { processFileURL } = require('~/server/services/Files/process');

const { logger } = require('~/config');

jest.mock('openai');

jest.mock('~/server/services/Files/process', () => ({
  processFileURL: jest.fn(),
}));
const processFileURL = jest.fn();

jest.mock('~/server/services/Files/images', () => ({
  getImageBasename: jest.fn().mockImplementation((url) => {
@@ -69,7 +66,7 @@ describe('DALLE3', () => {
    jest.resetModules();
    process.env = { ...originalEnv, DALLE_API_KEY: mockApiKey };
    // Instantiate DALLE3 for tests that do not depend on DALLE3_SYSTEM_PROMPT
    dalle = new DALLE3();
    dalle = new DALLE3({ processFileURL });
  });

  afterEach(() => {
@@ -78,7 +75,8 @@ describe('DALLE3', () => {
    process.env = originalEnv;
  });

  it('should throw an error if DALLE_API_KEY is missing', () => {
  it('should throw an error if all potential API keys are missing', () => {
    delete process.env.DALLE3_API_KEY;
    delete process.env.DALLE_API_KEY;
    expect(() => new DALLE3()).toThrow('Missing DALLE_API_KEY environment variable.');
  });
@@ -112,7 +110,9 @@ describe('DALLE3', () => {
    };

    generate.mockResolvedValue(mockResponse);
    processFileURL.mockResolvedValue('http://example.com/img-test.png');
    processFileURL.mockResolvedValue({
      filepath: 'http://example.com/img-test.png',
    });

    const result = await dalle._call(mockData);
@@ -6,19 +6,22 @@ const { OpenAIEmbeddings } = require('langchain/embeddings/openai');
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
const {
  availableTools,
  // Basic Tools
  CodeBrew,
  AzureAISearch,
  GoogleSearchAPI,
  WolframAlphaAPI,
  StructuredWolfram,
  OpenAICreateImage,
  StableDiffusionAPI,
  // Structured Tools
  DALLE3,
  StructuredSD,
  AzureAISearch,
  StructuredACS,
  E2BTools,
  CodeSherpa,
  StructuredSD,
  StructuredACS,
  CodeSherpaTools,
  CodeBrew,
  StructuredWolfram,
  TavilySearchResults,
} = require('../');
const { loadToolSuite } = require('./loadToolSuite');
const { loadSpecs } = require('./loadSpecs');
@@ -151,8 +154,10 @@ const loadTools = async ({
  returnMap = false,
  tools = [],
  options = {},
  skipSpecs = false,
}) => {
  const toolConstructors = {
    tavily_search_results_json: TavilySearchResults,
    calculator: Calculator,
    google: GoogleSearchAPI,
    wolfram: functions ? StructuredWolfram : WolframAlphaAPI,
@@ -229,10 +234,17 @@ const loadTools = async ({
    toolConstructors.codesherpa = CodeSherpa;
  }

  const imageGenOptions = {
    fileStrategy: options.fileStrategy,
    processFileURL: options.processFileURL,
    returnMetadata: options.returnMetadata,
  };

  const toolOptions = {
    serpapi: { location: 'Austin,Texas,United States', hl: 'en', gl: 'us' },
    dalle: { fileStrategy: options.fileStrategy },
    'dall-e': { fileStrategy: options.fileStrategy },
    dalle: imageGenOptions,
    'dall-e': imageGenOptions,
    'stable-diffusion': imageGenOptions,
  };

  const toolAuthFields = {};
@@ -271,7 +283,7 @@ const loadTools = async ({
  }

  let specs = null;
  if (functions && remainingTools.length > 0) {
  if (functions && remainingTools.length > 0 && skipSpecs !== true) {
    specs = await loadSpecs({
      llm: model,
      user,
@@ -298,6 +310,9 @@ const loadTools = async ({
  let result = [];
  for (const tool of tools) {
    const validTool = requestedTools[tool];
    if (!validTool) {
      continue;
    }
    const plugin = await validTool();

    if (Array.isArray(plugin)) {
8  api/cache/getLogStores.js  vendored
@@ -33,7 +33,11 @@ const genTitle = isEnabled(USE_REDIS) // ttl: 2 minutes

const modelQueries = isEnabled(process.env.USE_REDIS)
  ? new Keyv({ store: keyvRedis })
  : new Keyv({ namespace: 'models' });
  : new Keyv({ namespace: CacheKeys.MODEL_QUERIES });

const abortKeys = isEnabled(USE_REDIS)
  ? new Keyv({ store: keyvRedis })
  : new Keyv({ namespace: CacheKeys.ABORT_KEYS });

const namespaces = {
  [CacheKeys.CONFIG_STORE]: config,
@@ -45,7 +49,9 @@ const namespaces = {
  message_limit: createViolationInstance('message_limit'),
  token_balance: createViolationInstance('token_balance'),
  registrations: createViolationInstance('registrations'),
  [CacheKeys.FILE_UPLOAD_LIMIT]: createViolationInstance(CacheKeys.FILE_UPLOAD_LIMIT),
  logins: createViolationInstance('logins'),
  [CacheKeys.ABORT_KEYS]: abortKeys,
  [CacheKeys.TOKEN_CONFIG]: tokenConfig,
  [CacheKeys.GEN_TITLE]: genTitle,
  [CacheKeys.MODEL_QUERIES]: modelQueries,
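The model-query and abort-key stores are now registered under `CacheKeys` values. A small sketch of reading one back, assuming the module's usual `getLogStores(key)` accessor and an illustrative key/value shape:

const { CacheKeys } = require('librechat-data-provider');
const getLogStores = require('~/cache/getLogStores');

async function storeAbortKey(conversationId, abortKey) {
  // Redis-backed when USE_REDIS is enabled, otherwise an in-memory Keyv namespace.
  const abortCache = getLogStores(CacheKeys.ABORT_KEYS);
  await abortCache.set(conversationId, abortKey); // key/value shape illustrative
}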
@@ -33,6 +33,10 @@ function getMatchingSensitivePatterns(valueStr) {
 * @returns {string} - The redacted console message.
 */
function redactMessage(str) {
  if (!str) {
    return '';
  }

  const patterns = getMatchingSensitivePatterns(str);

  if (patterns.length === 0) {
@@ -1,7 +1,10 @@
const path = require('path');

module.exports = {
  uploads: path.resolve(__dirname, '..', '..', 'uploads'),
  dist: path.resolve(__dirname, '..', '..', 'client', 'dist'),
  publicPath: path.resolve(__dirname, '..', '..', 'client', 'public'),
  imageOutput: path.resolve(__dirname, '..', '..', 'client', 'public', 'images'),
  structuredTools: path.resolve(__dirname, '..', 'app', 'clients', 'tools', 'structured'),
  pluginManifest: path.resolve(__dirname, '..', 'app', 'clients', 'tools', 'manifest.json'),
};
68  api/models/Action.js  Normal file
@@ -0,0 +1,68 @@
const mongoose = require('mongoose');
const actionSchema = require('./schema/action');

const Action = mongoose.model('action', actionSchema);

/**
 * Update an action with new data without overwriting existing properties,
 * or create a new action if it doesn't exist.
 *
 * @param {Object} searchParams - The search parameters to find the action to update.
 * @param {string} searchParams.action_id - The ID of the action to update.
 * @param {string} searchParams.user - The user ID of the action's author.
 * @param {Object} updateData - An object containing the properties to update.
 * @returns {Promise<Object>} The updated or newly created action document as a plain object.
 */
const updateAction = async (searchParams, updateData) => {
  return await Action.findOneAndUpdate(searchParams, updateData, {
    new: true,
    upsert: true,
  }).lean();
};

/**
 * Retrieves all actions that match the given search parameters.
 *
 * @param {Object} searchParams - The search parameters to find matching actions.
 * @param {boolean} includeSensitive - Flag to include sensitive data in the metadata.
 * @returns {Promise<Array<Object>>} A promise that resolves to an array of action documents as plain objects.
 */
const getActions = async (searchParams, includeSensitive = false) => {
  const actions = await Action.find(searchParams).lean();

  if (!includeSensitive) {
    for (let i = 0; i < actions.length; i++) {
      const metadata = actions[i].metadata;
      if (!metadata) {
        continue;
      }

      const sensitiveFields = ['api_key', 'oauth_client_id', 'oauth_client_secret'];
      for (let field of sensitiveFields) {
        if (metadata[field]) {
          delete metadata[field];
        }
      }
    }
  }

  return actions;
};

/**
 * Deletes an action by its ID.
 *
 * @param {Object} searchParams - The search parameters to find the action to update.
 * @param {string} searchParams.action_id - The ID of the action to update.
 * @param {string} searchParams.user - The user ID of the action's author.
 * @returns {Promise<Object>} A promise that resolves to the deleted action document as a plain object, or null if no document was found.
 */
const deleteAction = async (searchParams) => {
  return await Action.findOneAndDelete(searchParams).lean();
};

module.exports = {
  updateAction,
  getActions,
  deleteAction,
};
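A usage sketch for the new Action helpers (the require path follows the `~` alias used elsewhere in the diff; IDs and metadata values are illustrative):

const { updateAction, getActions } = require('~/models/Action');

async function actionExample(userId) {
  // Upsert an action document tied to an assistant.
  await updateAction(
    { action_id: 'action_abc123', user: userId },
    { assistant_id: 'asst_abc123', metadata: { domain: 'example.com' } },
  );

  // api_key, oauth_client_id, and oauth_client_secret are stripped unless includeSensitive is true.
  const safeActions = await getActions({ user: userId });
  const fullActions = await getActions({ user: userId }, true);
  return { safeActions, fullActions };
}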
47  api/models/Assistant.js  Normal file
@@ -0,0 +1,47 @@
const mongoose = require('mongoose');
const assistantSchema = require('./schema/assistant');

const Assistant = mongoose.model('assistant', assistantSchema);

/**
 * Update an assistant with new data without overwriting existing properties,
 * or create a new assistant if it doesn't exist.
 *
 * @param {Object} searchParams - The search parameters to find the assistant to update.
 * @param {string} searchParams.assistant_id - The ID of the assistant to update.
 * @param {string} searchParams.user - The user ID of the assistant's author.
 * @param {Object} updateData - An object containing the properties to update.
 * @returns {Promise<Object>} The updated or newly created assistant document as a plain object.
 */
const updateAssistant = async (searchParams, updateData) => {
  return await Assistant.findOneAndUpdate(searchParams, updateData, {
    new: true,
    upsert: true,
  }).lean();
};

/**
 * Retrieves an assistant document based on the provided ID.
 *
 * @param {Object} searchParams - The search parameters to find the assistant to update.
 * @param {string} searchParams.assistant_id - The ID of the assistant to update.
 * @param {string} searchParams.user - The user ID of the assistant's author.
 * @returns {Promise<Object|null>} The assistant document as a plain object, or null if not found.
 */
const getAssistant = async (searchParams) => await Assistant.findOne(searchParams).lean();

/**
 * Retrieves all assistants that match the given search parameters.
 *
 * @param {Object} searchParams - The search parameters to find matching assistants.
 * @returns {Promise<Array<Object>>} A promise that resolves to an array of action documents as plain objects.
 */
const getAssistants = async (searchParams) => {
  return await Assistant.find(searchParams).lean();
};

module.exports = {
  updateAssistant,
  getAssistants,
  getAssistant,
};
|
@@ -14,24 +14,32 @@ const findFileById = async (file_id, options = {}) => {
};

/**
 * Retrieves files matching a given filter.
 * Retrieves files matching a given filter, sorted by the most recently updated.
 * @param {Object} filter - The filter criteria to apply.
 * @param {Object} [_sortOptions] - Optional sort parameters.
 * @returns {Promise<Array<MongoFile>>} A promise that resolves to an array of file documents.
 */
const getFiles = async (filter) => {
  return await File.find(filter).lean();
const getFiles = async (filter, _sortOptions) => {
  const sortOptions = { updatedAt: -1, ..._sortOptions };
  return await File.find(filter).sort(sortOptions).lean();
};

/**
 * Creates a new file with a TTL of 1 hour.
 * @param {MongoFile} data - The file data to be created, must contain file_id.
 * @param {boolean} disableTTL - Whether to disable the TTL.
 * @returns {Promise<MongoFile>} A promise that resolves to the created file document.
 */
const createFile = async (data) => {
const createFile = async (data, disableTTL) => {
  const fileData = {
    ...data,
    expiresAt: new Date(Date.now() + 3600 * 1000),
  };

  if (disableTTL) {
    delete fileData.expiresAt;
  }

  return await File.findOneAndUpdate({ file_id: data.file_id }, fileData, {
    new: true,
    upsert: true,

@@ -75,6 +83,15 @@ const deleteFile = async (file_id) => {
  return await File.findOneAndDelete({ file_id }).lean();
};

/**
 * Deletes a file identified by a filter.
 * @param {object} filter - The filter criteria to apply.
 * @returns {Promise<MongoFile>} A promise that resolves to the deleted file document or null.
 */
const deleteFileByFilter = async (filter) => {
  return await File.findOneAndDelete(filter).lean();
};

/**
 * Deletes multiple files identified by an array of file_ids.
 * @param {Array<string>} file_ids - The unique identifiers of the files to delete.

@@ -93,4 +110,5 @@ module.exports = {
  updateFileUsage,
  deleteFile,
  deleteFiles,
  deleteFileByFilter,
};
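A short, hypothetical sketch of the TTL behavior of createFile above: the one-hour expiresAt is set by default and removed when disableTTL is truthy (file_id and user are placeholders, and other required MongoFile fields are omitted for brevity):

// Illustrative only; assumes this package's `~` module alias.
const { createFile } = require('~/models/File');

async function exampleFileFlow(userId) {
  // Temporary record: expiresAt is set to one hour from now.
  const tempFile = await createFile({ file_id: 'example-file-id', user: userId });

  // Passing a truthy disableTTL deletes expiresAt before the upsert, keeping the record.
  const keptFile = await createFile({ file_id: 'example-file-id', user: userId }, true);

  return { tempFile, keptFile };
}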
@@ -72,11 +72,49 @@ module.exports = {
      throw new Error('Failed to save message.');
    }
  },
  /**
   * Records a message in the database.
   *
   * @async
   * @function recordMessage
   * @param {Object} params - The message data object.
   * @param {string} params.user - The identifier of the user.
   * @param {string} params.endpoint - The endpoint where the message originated.
   * @param {string} params.messageId - The unique identifier for the message.
   * @param {string} params.conversationId - The identifier of the conversation.
   * @param {string} [params.parentMessageId] - The identifier of the parent message, if any.
   * @param {Partial<TMessage>} rest - Any additional properties from the TMessage typedef not explicitly listed.
   * @returns {Promise<Object>} The updated or newly inserted message document.
   * @throws {Error} If there is an error in saving the message.
   */
  async recordMessage({ user, endpoint, messageId, conversationId, parentMessageId, ...rest }) {
    try {
      // No parsing of convoId as may use threadId
      const message = {
        user,
        endpoint,
        messageId,
        conversationId,
        parentMessageId,
        ...rest,
      };

      return await Message.findOneAndUpdate({ user, messageId }, message, {
        upsert: true,
        new: true,
      });
    } catch (err) {
      logger.error('Error saving message:', err);
      throw new Error('Failed to save message.');
    }
  },
  async updateMessage(message) {
    try {
      const { messageId, ...update } = message;
      update.isEdited = true;
      const updatedMessage = await Message.findOneAndUpdate({ messageId }, update, { new: true });
      const updatedMessage = await Message.findOneAndUpdate({ messageId }, update, {
        new: true,
      });

      if (!updatedMessage) {
        throw new Error('Message not found.');
@@ -1,12 +1,14 @@
const {
  getMessages,
  saveMessage,
  recordMessage,
  updateMessage,
  deleteMessagesSince,
  deleteMessages,
} = require('./Message');
const { getConvoTitle, getConvo, saveConvo, deleteConvos } = require('./Conversation');
const { getPreset, getPresets, savePreset, deletePresets } = require('./Preset');
const { hashPassword, getUser, updateUser } = require('./userMethods');
const {
  findFileById,
  createFile,

@@ -29,8 +31,13 @@ module.exports = {
  Balance,
  Transaction,

  hashPassword,
  updateUser,
  getUser,

  getMessages,
  saveMessage,
  recordMessage,
  updateMessage,
  deleteMessagesSince,
  deleteMessages,
@@ -183,6 +183,15 @@ const createMeiliMongooseModel = function ({ index, attributesToIndex }) {
      if (object.conversationId && object.conversationId.includes('|')) {
        object.conversationId = object.conversationId.replace(/\|/g, '--');
      }

      if (object.content && Array.isArray(object.content)) {
        object.text = object.content
          .filter((item) => item.type === 'text' && item.text && item.text.value)
          .map((item) => item.text.value)
          .join(' ');
        delete object.content;
      }

      return object;
    }
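An illustrative before/after for the content flattening above (the content shape is assumed from the filter on item.type === 'text'; the values are placeholders):

// Example input object prior to indexing:
const object = {
  conversationId: 'abc|123',
  content: [
    { type: 'text', text: { value: 'Hello' } },
    { type: 'image_file', image_file: { file_id: 'file_abc' } },
    { type: 'text', text: { value: 'world' } },
  ],
};
// After the transform: object.conversationId === 'abc--123',
// object.text === 'Hello world', and object.content is deleted,
// so MeiliSearch indexes a plain text field instead of the content array.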
api/models/schema/action.js (new file, 60 lines)
@@ -0,0 +1,60 @@
const mongoose = require('mongoose');

const { Schema } = mongoose;

const AuthSchema = new Schema(
  {
    authorization_type: String,
    custom_auth_header: String,
    type: {
      type: String,
      enum: ['service_http', 'oauth', 'none'],
    },
    authorization_content_type: String,
    authorization_url: String,
    client_url: String,
    scope: String,
    token_exchange_method: {
      type: String,
      enum: ['default_post', 'basic_auth_header', null],
    },
  },
  { _id: false },
);

const actionSchema = new Schema({
  user: {
    type: mongoose.Schema.Types.ObjectId,
    ref: 'User',
    index: true,
    required: true,
  },
  action_id: {
    type: String,
    index: true,
    required: true,
  },
  type: {
    type: String,
    default: 'action_prototype',
  },
  settings: Schema.Types.Mixed,
  assistant_id: String,
  metadata: {
    api_key: String, // private, encrypted
    auth: AuthSchema,
    domain: {
      type: String,
      unique: true,
      required: true,
    },
    // json_schema: Schema.Types.Mixed,
    privacy_policy_url: String,
    raw_spec: String,
    oauth_client_id: String, // private, encrypted
    oauth_client_secret: String, // private, encrypted
  },
});
// }, { minimize: false }); // Prevent removal of empty objects

module.exports = actionSchema;
api/models/schema/assistant.js (new file, 34 lines)
@@ -0,0 +1,34 @@
const mongoose = require('mongoose');

const assistantSchema = mongoose.Schema(
  {
    user: {
      type: mongoose.Schema.Types.ObjectId,
      ref: 'User',
      required: true,
    },
    assistant_id: {
      type: String,
      unique: true,
      index: true,
      required: true,
    },
    avatar: {
      type: {
        filepath: String,
        source: String,
      },
      default: undefined,
    },
    access_level: {
      type: Number,
    },
    file_ids: { type: [String], default: undefined },
    actions: { type: [String], default: undefined },
  },
  {
    timestamps: true,
  },
);

module.exports = assistantSchema;
@@ -11,152 +11,133 @@ const conversationPreset = {
  // for azureOpenAI, openAI, chatGPTBrowser only
  model: {
    type: String,
    // default: null,
    required: false,
  },
  // for azureOpenAI, openAI only
  chatGptLabel: {
    type: String,
    // default: null,
    required: false,
  },
  // for google only
  modelLabel: {
    type: String,
    // default: null,
    required: false,
  },
  promptPrefix: {
    type: String,
    // default: null,
    required: false,
  },
  temperature: {
    type: Number,
    // default: 1,
    required: false,
  },
  top_p: {
    type: Number,
    // default: 1,
    required: false,
  },
  // for google only
  topP: {
    type: Number,
    // default: 0.95,
    required: false,
  },
  topK: {
    type: Number,
    // default: 40,
    required: false,
  },
  maxOutputTokens: {
    type: Number,
    // default: 1024,
    required: false,
  },
  presence_penalty: {
    type: Number,
    // default: 0,
    required: false,
  },
  frequency_penalty: {
    type: Number,
    // default: 0,
    required: false,
  },
  // for bingai only
  jailbreak: {
    type: Boolean,
    // default: false,
  },
  context: {
    type: String,
    // default: null,
  },
  systemMessage: {
    type: String,
    // default: null,
  },
  toneStyle: {
    type: String,
    // default: null,
  },
  file_ids: { type: [{ type: String }], default: undefined },
  // vision
  resendImages: {
    type: Boolean,
  },
  imageDetail: {
    type: String,
  },
  /* assistants */
  assistant_id: {
    type: String,
  },
  instructions: {
    type: String,
  },
};

const agentOptions = {
  model: {
    type: String,
    // default: null,
    required: false,
  },
  // for azureOpenAI, openAI only
  chatGptLabel: {
    type: String,
    // default: null,
    required: false,
  },
  modelLabel: {
    type: String,
    // default: null,
    required: false,
  },
  promptPrefix: {
    type: String,
    // default: null,
    required: false,
  },
  temperature: {
    type: Number,
    // default: 1,
    required: false,
  },
  top_p: {
    type: Number,
    // default: 1,
    required: false,
  },
  // for google only
  topP: {
    type: Number,
    // default: 0.95,
    required: false,
  },
  topK: {
    type: Number,
    // default: 40,
    required: false,
  },
  maxOutputTokens: {
    type: Number,
    // default: 1024,
    required: false,
  },
  presence_penalty: {
    type: Number,
    // default: 0,
    required: false,
  },
  frequency_penalty: {
    type: Number,
    // default: 0,
    required: false,
  },
  context: {
    type: String,
    // default: null,
  },
  systemMessage: {
    type: String,
    // default: null,
  },
};
@@ -3,6 +3,8 @@ const mongoose = require('mongoose');

/**
 * @typedef {Object} MongoFile
 * @property {mongoose.Schema.Types.ObjectId} [_id] - MongoDB Document ID
 * @property {number} [__v] - MongoDB Version Key
 * @property {mongoose.Schema.Types.ObjectId} user - User ID
 * @property {string} [conversationId] - Optional conversation ID
 * @property {string} file_id - File identifier

@@ -17,6 +19,8 @@ const mongoose = require('mongoose');
 * @property {number} [width] - Optional width of the file
 * @property {number} [height] - Optional height of the file
 * @property {Date} [expiresAt] - Optional expiration date of the file
 * @property {Date} [createdAt] - Date when the file was created
 * @property {Date} [updatedAt] - Date when the file was updated
 */
const fileSchema = mongoose.Schema(
  {

@@ -61,6 +65,10 @@ const fileSchema = mongoose.Schema(
      type: String,
      required: true,
    },
    context: {
      type: String,
      // required: true,
    },
    usage: {
      type: Number,
      required: true,
@@ -17,6 +17,7 @@ const messageSchema = mongoose.Schema(
    user: {
      type: String,
      index: true,
      required: true,
      default: null,
    },
    model: {

@@ -46,12 +47,10 @@ const messageSchema = mongoose.Schema(
    },
    sender: {
      type: String,
      required: true,
      meiliIndex: true,
    },
    text: {
      type: String,
      required: true,
      meiliIndex: true,
    },
    summary: {

@@ -103,6 +102,14 @@ const messageSchema = mongoose.Schema(
      default: undefined,
    },
    plugins: { type: [{ type: mongoose.Schema.Types.Mixed }], default: undefined },
    content: {
      type: [{ type: mongoose.Schema.Types.Mixed }],
      default: undefined,
      meiliIndex: true,
    },
    thread_id: {
      type: String,
    },
  },
  { timestamps: true },
);
@@ -21,6 +21,10 @@ const { logger } = require('~/config');
 */
const spendTokens = async (txData, tokenUsage) => {
  const { promptTokens, completionTokens } = tokenUsage;
  logger.debug(`[spendTokens] conversationId: ${txData.conversationId} | Token usage: `, {
    promptTokens,
    completionTokens,
  });
  let prompt, completion;
  try {
    if (promptTokens >= 0) {

@@ -42,7 +46,12 @@ const spendTokens = async (txData, tokenUsage) => {
      rawAmount: -completionTokens,
    });

    logger.debug('[spendTokens] post-transaction', { prompt, completion });
    prompt &&
      completion &&
      logger.debug('[spendTokens] Transaction data record against balance:', {
        prompt,
        completion,
      });
  } catch (err) {
    logger.error('[spendTokens]', err);
  }
api/models/userMethods.js (new file, 46 lines)
@@ -0,0 +1,46 @@
const bcrypt = require('bcryptjs');
const User = require('./User');

const hashPassword = async (password) => {
  const hashedPassword = await new Promise((resolve, reject) => {
    bcrypt.hash(password, 10, function (err, hash) {
      if (err) {
        reject(err);
      } else {
        resolve(hash);
      }
    });
  });

  return hashedPassword;
};

/**
 * Retrieve a user by ID and convert the found user document to a plain object.
 *
 * @param {string} userId - The ID of the user to find and return as a plain object.
 * @returns {Promise<Object>} A plain object representing the user document, or `null` if no user is found.
 */
const getUser = async function (userId) {
  return await User.findById(userId).lean();
};

/**
 * Update a user with new data without overwriting existing properties.
 *
 * @param {string} userId - The ID of the user to update.
 * @param {Object} updateData - An object containing the properties to update.
 * @returns {Promise<Object>} The updated user document as a plain object, or `null` if no user is found.
 */
const updateUser = async function (userId, updateData) {
  return await User.findByIdAndUpdate(userId, updateData, {
    new: true,
    runValidators: true,
  }).lean();
};

module.exports = {
  hashPassword,
  updateUser,
  getUser,
};
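A brief, hypothetical sketch of how these user helpers could be combined with bcryptjs for verification (bcrypt.compare is part of bcryptjs; the password value and flow are illustrative only):

// Illustrative only; assumes this package's `~` module alias.
const bcrypt = require('bcryptjs');
const { hashPassword, getUser, updateUser } = require('~/models/userMethods');

async function exampleUserFlow(userId, plainPassword) {
  // Hash with a cost factor of 10, matching hashPassword above.
  const hashed = await hashPassword(plainPassword);
  await updateUser(userId, { password: hashed });

  // Verify a later login attempt against the stored hash.
  const user = await getUser(userId);
  return bcrypt.compare(plainPassword, user.password);
}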
@@ -31,6 +31,7 @@
    "@azure/search-documents": "^12.0.0",
    "@keyv/mongo": "^2.1.8",
    "@keyv/redis": "^2.8.1",
    "@langchain/community": "^0.0.17",
    "@langchain/google-genai": "^0.0.8",
    "axios": "^1.3.4",
    "bcryptjs": "^2.4.3",

@@ -44,6 +45,7 @@
    "express-mongo-sanitize": "^2.2.0",
    "express-rate-limit": "^6.9.0",
    "express-session": "^1.17.3",
    "file-type": "^18.7.0",
    "firebase": "^10.6.0",
    "googleapis": "^126.0.1",
    "handlebars": "^4.7.7",

@@ -58,6 +60,7 @@
    "librechat-data-provider": "*",
    "lodash": "^4.17.21",
    "meilisearch": "^0.33.0",
    "mime": "^3.0.0",
    "module-alias": "^2.2.3",
    "mongoose": "^7.1.1",
    "multer": "^1.4.5-lts.1",
@@ -1,4 +1,4 @@
const { getResponseSender } = require('librechat-data-provider');
const { getResponseSender, Constants } = require('librechat-data-provider');
const { sendMessage, createOnProgress } = require('~/server/utils');
const { saveMessage, getConvoTitle, getConvo } = require('~/models');
const { createAbortController, handleAbortError } = require('~/server/middleware');

@@ -140,7 +140,7 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {

  await saveMessage(userMessage);

  if (addTitle && parentMessageId === '00000000-0000-0000-0000-000000000000' && newConvo) {
  if (addTitle && parentMessageId === Constants.NO_PARENT && newConvo) {
    addTitle(req, {
      text,
      response,
@@ -1,4 +1,4 @@
const { CacheKeys } = require('librechat-data-provider');
const { CacheKeys, EModelEndpoint } = require('librechat-data-provider');
const { loadDefaultEndpointsConfig, loadConfigEndpoints } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');

@@ -14,6 +14,10 @@ async function endpointController(req, res) {
  const customConfigEndpoints = await loadConfigEndpoints();

  const endpointsConfig = { ...defaultEndpointsConfig, ...customConfigEndpoints };
  if (endpointsConfig[EModelEndpoint.assistants] && req.app.locals?.[EModelEndpoint.assistants]) {
    endpointsConfig[EModelEndpoint.assistants].disableBuilder =
      req.app.locals[EModelEndpoint.assistants].disableBuilder;
  }

  await cache.set(CacheKeys.ENDPOINT_CONFIG, endpointsConfig);
  res.send(JSON.stringify(endpointsConfig));
@@ -1,4 +1,3 @@
const path = require('path');
const { promises: fs } = require('fs');
const { CacheKeys } = require('librechat-data-provider');
const { addOpenAPISpecs } = require('~/app/clients/tools/util/addOpenAPISpecs');

@@ -56,12 +55,10 @@ const getAvailablePluginsController = async (req, res) => {
      return;
    }

    const manifestFile = await fs.readFile(
      path.join(__dirname, '..', '..', 'app', 'clients', 'tools', 'manifest.json'),
      'utf8',
    );
    const pluginManifest = await fs.readFile(req.app.locals.paths.pluginManifest, 'utf8');

    const jsonData = JSON.parse(manifestFile);
    const jsonData = JSON.parse(pluginManifest);
    /** @type {TPlugin[]} */
    const uniquePlugins = filterUniquePlugins(jsonData);
    const authenticatedPlugins = uniquePlugins.map((plugin) => {
      if (isPluginAuthenticated(plugin)) {

@@ -78,6 +75,53 @@ const getAvailablePluginsController = async (req, res) => {
  }
};

/**
 * Retrieves and returns a list of available tools, either from a cache or by reading a plugin manifest file.
 *
 * This function first attempts to retrieve the list of tools from a cache. If the tools are not found in the cache,
 * it reads a plugin manifest file, filters for unique plugins, and determines if each plugin is authenticated.
 * Only plugins that are marked as available in the application's local state are included in the final list.
 * The resulting list of tools is then cached and sent to the client.
 *
 * @param {object} req - The request object, containing information about the HTTP request.
 * @param {object} res - The response object, used to send back the desired HTTP response.
 * @returns {Promise<void>} A promise that resolves when the function has completed.
 */
const getAvailableTools = async (req, res) => {
  try {
    const cache = getLogStores(CacheKeys.CONFIG_STORE);
    const cachedTools = await cache.get(CacheKeys.TOOLS);
    if (cachedTools) {
      res.status(200).json(cachedTools);
      return;
    }

    const pluginManifest = await fs.readFile(req.app.locals.paths.pluginManifest, 'utf8');

    const jsonData = JSON.parse(pluginManifest);
    /** @type {TPlugin[]} */
    const uniquePlugins = filterUniquePlugins(jsonData);

    const authenticatedPlugins = uniquePlugins.map((plugin) => {
      if (isPluginAuthenticated(plugin)) {
        return { ...plugin, authenticated: true };
      } else {
        return plugin;
      }
    });

    const tools = authenticatedPlugins.filter(
      (plugin) => req.app.locals.availableTools[plugin.pluginKey] !== undefined,
    );

    await cache.set(CacheKeys.TOOLS, tools);
    res.status(200).json(tools);
  } catch (error) {
    res.status(500).json({ message: error.message });
  }
};

module.exports = {
  getAvailableTools,
  getAvailablePluginsController,
};
@@ -8,9 +8,10 @@ const getUserController = async (req, res) => {

const updateUserPluginsController = async (req, res) => {
  const { user } = req;
  const { pluginKey, action, auth } = req.body;
  const { pluginKey, action, auth, isAssistantTool } = req.body;
  let authService;
  try {
    if (!isAssistantTool) {
      const userPluginsService = await updateUserPluginsService(user, pluginKey, action);

      if (userPluginsService instanceof Error) {

@@ -18,6 +19,8 @@ const updateUserPluginsController = async (req, res) => {
        const { status, message } = userPluginsService;
        res.status(status).send({ message });
      }
    }

    if (auth) {
      const keys = Object.keys(auth);
      const values = Object.values(auth);
@@ -76,7 +76,7 @@ const startServer = async () => {
  app.use('/api/plugins', routes.plugins);
  app.use('/api/config', routes.config);
  app.use('/api/assistants', routes.assistants);
  app.use('/api/files', routes.files);
  app.use('/api/files', await routes.files.initialize());

  app.use((req, res) => {
    res.status(404).sendFile(path.join(app.locals.paths.dist, 'index.html'));
@@ -1,18 +1,24 @@
const { EModelEndpoint } = require('librechat-data-provider');
const { sendMessage, sendError, countTokens, isEnabled } = require('~/server/utils');
const { saveMessage, getConvo, getConvoTitle } = require('~/models');
const clearPendingReq = require('~/cache/clearPendingReq');
const abortControllers = require('./abortControllers');
const { redactMessage } = require('~/config/parsers');
const spendTokens = require('~/models/spendTokens');
const { abortRun } = require('./abortRun');
const { logger } = require('~/config');

async function abortMessage(req, res) {
  let { abortKey, conversationId } = req.body;
  let { abortKey, conversationId, endpoint } = req.body;

  if (!abortKey && conversationId) {
    abortKey = conversationId;
  }

  if (endpoint === EModelEndpoint.assistants) {
    return await abortRun(req, res);
  }

  if (!abortControllers.has(abortKey) && !res.headersSent) {
    return res.status(204).send({ message: 'Request not found' });
  }
api/server/middleware/abortRun.js (new file, 87 lines)
@@ -0,0 +1,87 @@
const { CacheKeys, RunStatus, isUUID } = require('librechat-data-provider');
const { initializeClient } = require('~/server/services/Endpoints/assistant');
const { checkMessageGaps, recordUsage } = require('~/server/services/Threads');
const { getConvo } = require('~/models/Conversation');
const getLogStores = require('~/cache/getLogStores');
const { sendMessage } = require('~/server/utils');
// const spendTokens = require('~/models/spendTokens');
const { logger } = require('~/config');

async function abortRun(req, res) {
  res.setHeader('Content-Type', 'application/json');
  const { abortKey } = req.body;
  const [conversationId, latestMessageId] = abortKey.split(':');

  if (!isUUID.safeParse(conversationId).success) {
    logger.error('[abortRun] Invalid conversationId', { conversationId });
    return res.status(400).send({ message: 'Invalid conversationId' });
  }

  const cacheKey = `${req.user.id}:${conversationId}`;
  const cache = getLogStores(CacheKeys.ABORT_KEYS);
  const runValues = await cache.get(cacheKey);
  const [thread_id, run_id] = runValues.split(':');

  if (!run_id) {
    logger.warn('[abortRun] Couldn\'t find run for cancel request', { thread_id });
    return res.status(204).send({ message: 'Run not found' });
  } else if (run_id === 'cancelled') {
    logger.warn('[abortRun] Run already cancelled', { thread_id });
    return res.status(204).send({ message: 'Run already cancelled' });
  }

  let runMessages = [];
  /** @type {{ openai: OpenAI }} */
  const { openai } = await initializeClient({ req, res });

  try {
    await cache.set(cacheKey, 'cancelled');
    const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
    logger.debug('Cancelled run:', cancelledRun);
  } catch (error) {
    logger.error('[abortRun] Error cancelling run', error);
    if (
      error?.message?.includes(RunStatus.CANCELLED) ||
      error?.message?.includes(RunStatus.CANCELLING)
    ) {
      return res.end();
    }
  }

  try {
    const run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
    await recordUsage({
      ...run.usage,
      model: run.model,
      user: req.user.id,
      conversationId,
    });
  } catch (error) {
    logger.error('[abortRun] Error fetching or processing run', error);
  }

  runMessages = await checkMessageGaps({
    openai,
    latestMessageId,
    thread_id,
    run_id,
    conversationId,
  });

  const finalEvent = {
    title: 'New Chat',
    final: true,
    conversation: await getConvo(req.user.id, conversationId),
    runMessages,
  };

  if (res.headersSent && finalEvent) {
    return sendMessage(res, finalEvent);
  }

  res.json(finalEvent);
}

module.exports = {
  abortRun,
};
@@ -5,6 +5,7 @@ const anthropic = require('~/server/services/Endpoints/anthropic');
const openAI = require('~/server/services/Endpoints/openAI');
const custom = require('~/server/services/Endpoints/custom');
const google = require('~/server/services/Endpoints/google');
const assistant = require('~/server/services/Endpoints/assistant');

const buildFunction = {
  [EModelEndpoint.openAI]: openAI.buildOptions,

@@ -13,6 +14,7 @@ const buildFunction = {
  [EModelEndpoint.azureOpenAI]: openAI.buildOptions,
  [EModelEndpoint.anthropic]: anthropic.buildOptions,
  [EModelEndpoint.gptPlugins]: gptPlugins.buildOptions,
  [EModelEndpoint.assistants]: assistant.buildOptions,
};

function buildEndpointOption(req, res, next) {
@@ -1,7 +1,7 @@
const crypto = require('crypto');
const { saveMessage } = require('~/models');
const { getResponseSender, Constants } = require('librechat-data-provider');
const { sendMessage, sendError } = require('~/server/utils');
const { getResponseSender } = require('librechat-data-provider');
const { saveMessage } = require('~/models');

/**
 * Denies a request by sending an error message and optionally saves the user's message.

@@ -38,8 +38,7 @@ const denyRequest = async (req, res, errorMessage) => {
  };
  sendMessage(res, { message: userMessage, created: true });

  const shouldSaveMessage =
    _convoId && parentMessageId && parentMessageId !== '00000000-0000-0000-0000-000000000000';
  const shouldSaveMessage = _convoId && parentMessageId && parentMessageId !== Constants.NO_PARENT;

  if (shouldSaveMessage) {
    await saveMessage({ ...userMessage, user: req.user.id });
@@ -4,6 +4,7 @@ const uaParser = require('./uaParser');
const setHeaders = require('./setHeaders');
const loginLimiter = require('./loginLimiter');
const requireJwtAuth = require('./requireJwtAuth');
const uploadLimiters = require('./uploadLimiters');
const registerLimiter = require('./registerLimiter');
const messageLimiters = require('./messageLimiters');
const requireLocalAuth = require('./requireLocalAuth');

@@ -16,6 +17,7 @@ const moderateText = require('./moderateText');
const noIndex = require('./noIndex');

module.exports = {
  ...uploadLimiters,
  ...abortMiddleware,
  ...messageLimiters,
  checkBan,
api/server/middleware/uploadLimiters.js (new file, 75 lines)
@@ -0,0 +1,75 @@
const rateLimit = require('express-rate-limit');
const { CacheKeys } = require('librechat-data-provider');
const logViolation = require('~/cache/logViolation');

const getEnvironmentVariables = () => {
  const FILE_UPLOAD_IP_MAX = parseInt(process.env.FILE_UPLOAD_IP_MAX) || 100;
  const FILE_UPLOAD_IP_WINDOW = parseInt(process.env.FILE_UPLOAD_IP_WINDOW) || 15;
  const FILE_UPLOAD_USER_MAX = parseInt(process.env.FILE_UPLOAD_USER_MAX) || 50;
  const FILE_UPLOAD_USER_WINDOW = parseInt(process.env.FILE_UPLOAD_USER_WINDOW) || 15;

  const fileUploadIpWindowMs = FILE_UPLOAD_IP_WINDOW * 60 * 1000;
  const fileUploadIpMax = FILE_UPLOAD_IP_MAX;
  const fileUploadIpWindowInMinutes = fileUploadIpWindowMs / 60000;

  const fileUploadUserWindowMs = FILE_UPLOAD_USER_WINDOW * 60 * 1000;
  const fileUploadUserMax = FILE_UPLOAD_USER_MAX;
  const fileUploadUserWindowInMinutes = fileUploadUserWindowMs / 60000;

  return {
    fileUploadIpWindowMs,
    fileUploadIpMax,
    fileUploadIpWindowInMinutes,
    fileUploadUserWindowMs,
    fileUploadUserMax,
    fileUploadUserWindowInMinutes,
  };
};

const createFileUploadHandler = (ip = true) => {
  const {
    fileUploadIpMax,
    fileUploadIpWindowInMinutes,
    fileUploadUserMax,
    fileUploadUserWindowInMinutes,
  } = getEnvironmentVariables();

  return async (req, res) => {
    const type = CacheKeys.FILE_UPLOAD_LIMIT;
    const errorMessage = {
      type,
      max: ip ? fileUploadIpMax : fileUploadUserMax,
      limiter: ip ? 'ip' : 'user',
      windowInMinutes: ip ? fileUploadIpWindowInMinutes : fileUploadUserWindowInMinutes,
    };

    await logViolation(req, res, type, errorMessage);
    res.status(429).json({ message: 'Too many file upload requests. Try again later' });
  };
};

const createFileLimiters = () => {
  const { fileUploadIpWindowMs, fileUploadIpMax, fileUploadUserWindowMs, fileUploadUserMax } =
    getEnvironmentVariables();

  const fileUploadIpLimiter = rateLimit({
    windowMs: fileUploadIpWindowMs,
    max: fileUploadIpMax,
    handler: createFileUploadHandler(),
  });

  const fileUploadUserLimiter = rateLimit({
    windowMs: fileUploadUserWindowMs,
    max: fileUploadUserMax,
    handler: createFileUploadHandler(false),
    keyGenerator: function (req) {
      return req.user?.id; // Use the user ID or NULL if not available
    },
  });

  return { fileUploadIpLimiter, fileUploadUserLimiter };
};

module.exports = {
  createFileLimiters,
};
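A minimal sketch of how createFileLimiters might be wired into an upload route (the route path and handler are placeholders; the limiters returned by express-rate-limit are ordinary Express middleware):

// Illustrative wiring only; assumes this package's `~` module alias.
const express = require('express');
const { createFileLimiters } = require('~/server/middleware/uploadLimiters');

const router = express.Router();
const { fileUploadIpLimiter, fileUploadUserLimiter } = createFileLimiters();

// Both limiters run before the upload handler; either can short-circuit with a 429.
router.post('/example-upload', fileUploadIpLimiter, fileUploadUserLimiter, (req, res) => {
  res.status(200).json({ message: 'upload accepted' });
});

module.exports = router;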
@@ -1,5 +1,6 @@
const crypto = require('crypto');
const express = require('express');
const { Constants } = require('librechat-data-provider');
const { saveMessage, getConvoTitle, saveConvo, getConvo } = require('~/models');
const { handleError, sendMessage, createOnProgress, handleText } = require('~/server/utils');
const { setHeaders } = require('~/server/middleware');

@@ -27,7 +28,7 @@ router.post('/', setHeaders, async (req, res) => {
  const conversationId = oldConversationId || crypto.randomUUID();
  const isNewConversation = !oldConversationId;
  const userMessageId = crypto.randomUUID();
  const userParentMessageId = parentMessageId || '00000000-0000-0000-0000-000000000000';
  const userParentMessageId = parentMessageId || Constants.NO_PARENT;
  const userMessage = {
    messageId: userMessageId,
    sender: 'User',

@@ -209,7 +210,7 @@ const ask = async ({
  });
  res.end();

  if (userParentMessageId == '00000000-0000-0000-0000-000000000000') {
  if (userParentMessageId == Constants.NO_PARENT) {
    // const title = await titleConvo({ endpoint: endpointOption?.endpoint, text, response: responseMessage });
    const title = await response.details.title;
    await saveConvo(user, {
@@ -1,5 +1,6 @@
const express = require('express');
const crypto = require('crypto');
const express = require('express');
const { Constants } = require('librechat-data-provider');
const { handleError, sendMessage, createOnProgress, handleText } = require('~/server/utils');
const { saveMessage, getConvoTitle, saveConvo, getConvo } = require('~/models');
const { setHeaders } = require('~/server/middleware');

@@ -28,7 +29,7 @@ router.post('/', setHeaders, async (req, res) => {
  const conversationId = oldConversationId || crypto.randomUUID();
  const isNewConversation = !oldConversationId;
  const userMessageId = messageId;
  const userParentMessageId = parentMessageId || '00000000-0000-0000-0000-000000000000';
  const userParentMessageId = parentMessageId || Constants.NO_PARENT;
  let userMessage = {
    messageId: userMessageId,
    sender: 'User',

@@ -238,7 +239,7 @@ const ask = async ({
  });
  res.end();

  if (userParentMessageId == '00000000-0000-0000-0000-000000000000') {
  if (userParentMessageId == Constants.NO_PARENT) {
    const title = await titleConvoBing({
      text,
      response: responseMessage,
@@ -1,6 +1,6 @@
const express = require('express');
const router = express.Router();
const { getResponseSender } = require('librechat-data-provider');
const { getResponseSender, Constants } = require('librechat-data-provider');
const { validateTools } = require('~/app');
const { addTitle } = require('~/server/services/Endpoints/openAI');
const { initializeClient } = require('~/server/services/Endpoints/gptPlugins');

@@ -204,7 +204,7 @@ router.post('/', validateEndpoint, buildEndpointOption, setHeaders, async (req,
  });
  res.end();

  if (parentMessageId === '00000000-0000-0000-0000-000000000000' && newConvo) {
  if (parentMessageId === Constants.NO_PARENT && newConvo) {
    addTitle(req, {
      text,
      response,
api/server/routes/assistants/actions.js (new file, 201 lines)
@@ -0,0 +1,201 @@
const { v4 } = require('uuid');
const express = require('express');
const { actionDelimiter } = require('librechat-data-provider');
const { initializeClient } = require('~/server/services/Endpoints/assistant');
const { updateAction, getActions, deleteAction } = require('~/models/Action');
const { updateAssistant, getAssistant } = require('~/models/Assistant');
const { encryptMetadata } = require('~/server/services/ActionService');
const { logger } = require('~/config');

const router = express.Router();

/**
 * Retrieves all user's actions
 * @route GET /actions/
 * @param {string} req.params.id - Assistant identifier.
 * @returns {Action[]} 200 - success response - application/json
 */
router.get('/', async (req, res) => {
  try {
    res.json(await getActions({ user: req.user.id }));
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

/**
 * Adds or updates actions for a specific assistant.
 * @route POST /actions/:assistant_id
 * @param {string} req.params.assistant_id - The ID of the assistant.
 * @param {FunctionTool[]} req.body.functions - The functions to be added or updated.
 * @param {string} [req.body.action_id] - Optional ID for the action.
 * @param {ActionMetadata} req.body.metadata - Metadata for the action.
 * @returns {Object} 200 - success response - application/json
 */
router.post('/:assistant_id', async (req, res) => {
  try {
    const { assistant_id } = req.params;

    /** @type {{ functions: FunctionTool[], action_id: string, metadata: ActionMetadata }} */
    const { functions, action_id: _action_id, metadata: _metadata } = req.body;
    if (!functions.length) {
      return res.status(400).json({ message: 'No functions provided' });
    }

    let metadata = encryptMetadata(_metadata);

    const { domain } = metadata;
    if (!domain) {
      return res.status(400).json({ message: 'No domain provided' });
    }

    const action_id = _action_id ?? v4();
    const initialPromises = [];

    /** @type {{ openai: OpenAI }} */
    const { openai } = await initializeClient({ req, res });

    initialPromises.push(getAssistant({ assistant_id, user: req.user.id }));
    initialPromises.push(openai.beta.assistants.retrieve(assistant_id));
    !!_action_id && initialPromises.push(getActions({ user: req.user.id, action_id }, true));

    /** @type {[AssistantDocument, Assistant, [Action|undefined]]} */
    const [assistant_data, assistant, actions_result] = await Promise.all(initialPromises);

    if (actions_result && actions_result.length) {
      const action = actions_result[0];
      metadata = { ...action.metadata, ...metadata };
    }

    if (!assistant) {
      return res.status(404).json({ message: 'Assistant not found' });
    }

    const { actions: _actions = [] } = assistant_data ?? {};
    const actions = [];
    for (const action of _actions) {
      const [action_domain, current_action_id] = action.split(actionDelimiter);
      if (action_domain === domain && !_action_id) {
        // TODO: dupe check on the frontend
        return res.status(400).json({
          message: `Action sets cannot have duplicate domains - ${domain} already exists on another action`,
        });
      }

      if (current_action_id === action_id) {
        continue;
      }

      actions.push(action);
    }

    actions.push(`${domain}${actionDelimiter}${action_id}`);

    /** @type {{ tools: FunctionTool[] | { type: 'code_interpreter'|'retrieval'}[]}} */
    const { tools: _tools = [] } = assistant;

    const tools = _tools
      .filter(
        (tool) =>
          !(
            tool.function &&
            (tool.function.name.includes(domain) || tool.function.name.includes(action_id))
          ),
      )
      .concat(
        functions.map((tool) => ({
          ...tool,
          function: {
            ...tool.function,
            name: `${tool.function.name}${actionDelimiter}${domain}`,
          },
        })),
      );

    const promises = [];
    promises.push(
      updateAssistant(
        { assistant_id, user: req.user.id },
        {
          actions,
        },
      ),
    );
    promises.push(openai.beta.assistants.update(assistant_id, { tools }));
    promises.push(updateAction({ action_id, user: req.user.id }, { metadata, assistant_id }));

    /** @type {[AssistantDocument, Assistant, Action]} */
    const resolved = await Promise.all(promises);
    const sensitiveFields = ['api_key', 'oauth_client_id', 'oauth_client_secret'];
    for (let field of sensitiveFields) {
      if (resolved[2].metadata[field]) {
        delete resolved[2].metadata[field];
      }
    }
    res.json(resolved);
  } catch (error) {
    const message = 'Trouble updating the Assistant Action';
    logger.error(message, error);
    res.status(500).json({ message });
  }
});

/**
 * Deletes an action for a specific assistant.
 * @route DELETE /actions/:assistant_id/:action_id
 * @param {string} req.params.assistant_id - The ID of the assistant.
 * @param {string} req.params.action_id - The ID of the action to delete.
 * @returns {Object} 200 - success response - application/json
 */
router.delete('/:assistant_id/:action_id', async (req, res) => {
  try {
    const { assistant_id, action_id } = req.params;

    /** @type {{ openai: OpenAI }} */
    const { openai } = await initializeClient({ req, res });

    const initialPromises = [];
    initialPromises.push(getAssistant({ assistant_id, user: req.user.id }));
    initialPromises.push(openai.beta.assistants.retrieve(assistant_id));

    /** @type {[AssistantDocument, Assistant]} */
    const [assistant_data, assistant] = await Promise.all(initialPromises);

    const { actions } = assistant_data ?? {};
    const { tools = [] } = assistant ?? {};

    let domain = '';
    const updatedActions = actions.filter((action) => {
      if (action.includes(action_id)) {
        [domain] = action.split(actionDelimiter);
        return false;
      }
      return true;
    });

    const updatedTools = tools.filter(
      (tool) => !(tool.function && tool.function.name.includes(domain)),
    );

    const promises = [];
    promises.push(
      updateAssistant(
        { assistant_id, user: req.user.id },
        {
          actions: updatedActions,
        },
      ),
    );
    promises.push(openai.beta.assistants.update(assistant_id, { tools: updatedTools }));
    promises.push(deleteAction({ action_id, user: req.user.id }));

    await Promise.all(promises);
    res.status(200).json({ message: 'Action deleted successfully' });
  } catch (error) {
    const message = 'Trouble deleting the Assistant Action';
    logger.error(message, error);
    res.status(500).json({ message });
  }
});

module.exports = router;
@@ -1,9 +1,31 @@
const OpenAI = require('openai');
const multer = require('multer');
const express = require('express');
const { FileContext, EModelEndpoint } = require('librechat-data-provider');
const { updateAssistant, getAssistants } = require('~/models/Assistant');
const { initializeClient } = require('~/server/services/Endpoints/assistant');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { uploadImageBuffer } = require('~/server/services/Files/process');
const { deleteFileByFilter } = require('~/models/File');
const { logger } = require('~/config');
const actions = require('./actions');
const tools = require('./tools');

const upload = multer();
const router = express.Router();

/**
 * Assistant actions route.
 * @route GET|POST /assistants/actions
 */
router.use('/actions', actions);

/**
 * Assistant tools route.
 * @route GET /assistants/tools
 * @returns {TPlugin[]} 200 - application/json
 */
router.use('/tools', tools);

/**
 * Create an assistant.
 * @route POST /assistants

@@ -12,12 +34,25 @@ const router = express.Router();
 */
router.post('/', async (req, res) => {
  try {
    const openai = new OpenAI(process.env.OPENAI_API_KEY);
    const assistantData = req.body;
    /** @type {{ openai: OpenAI }} */
    const { openai } = await initializeClient({ req, res });

    const { tools = [], ...assistantData } = req.body;
    assistantData.tools = tools
      .map((tool) => {
        if (typeof tool !== 'string') {
          return tool;
        }

        return req.app.locals.availableTools[tool];
      })
      .filter((tool) => tool);

    const assistant = await openai.beta.assistants.create(assistantData);
    logger.debug('/assistants/', assistant);
    res.status(201).json(assistant);
  } catch (error) {
    logger.error('[/assistants] Error creating assistant', error);
    res.status(500).json({ error: error.message });
  }
});

@@ -30,11 +65,14 @@ router.post('/', async (req, res) => {
 */
router.get('/:id', async (req, res) => {
  try {
    const openai = new OpenAI(process.env.OPENAI_API_KEY);
    /** @type {{ openai: OpenAI }} */
    const { openai } = await initializeClient({ req, res });

    const assistant_id = req.params.id;
    const assistant = await openai.beta.assistants.retrieve(assistant_id);
    res.json(assistant);
  } catch (error) {
    logger.error('[/assistants/:id] Error retrieving assistant', error);
    res.status(500).json({ error: error.message });
  }
});

@@ -48,12 +86,25 @@ router.get('/:id', async (req, res) => {
 */
router.patch('/:id', async (req, res) => {
  try {
    const openai = new OpenAI(process.env.OPENAI_API_KEY);
    /** @type {{ openai: OpenAI }} */
    const { openai } = await initializeClient({ req, res });

    const assistant_id = req.params.id;
    const updateData = req.body;
    updateData.tools = (updateData.tools ?? [])
      .map((tool) => {
        if (typeof tool !== 'string') {
          return tool;
        }

        return req.app.locals.availableTools[tool];
      })
      .filter((tool) => tool);

    const updatedAssistant = await openai.beta.assistants.update(assistant_id, updateData);
    res.json(updatedAssistant);
  } catch (error) {
    logger.error('[/assistants/:id] Error updating assistant', error);
    res.status(500).json({ error: error.message });
  }
});

@@ -66,12 +117,15 @@ router.patch('/:id', async (req, res) => {
 */
router.delete('/:id', async (req, res) => {
  try {
    const openai = new OpenAI(process.env.OPENAI_API_KEY);
    /** @type {{ openai: OpenAI }} */
    const { openai } = await initializeClient({ req, res });

    const assistant_id = req.params.id;
    const deletionStatus = await openai.beta.assistants.del(assistant_id);
    res.json(deletionStatus);
  } catch (error) {
    res.status(500).json({ error: error.message });
    logger.error('[/assistants/:id] Error deleting assistant', error);
    res.status(500).json({ error: 'Error deleting assistant' });
  }
});


@@ -79,22 +133,121 @@ router.delete('/:id', async (req, res) => {
 * Returns a list of assistants.
 * @route GET /assistants
 * @param {AssistantListParams} req.query - The assistant list parameters for pagination and sorting.
 * @returns {Array<Assistant>} 200 - success response - application/json
 * @returns {AssistantListResponse} 200 - success response - application/json
 */
router.get('/', async (req, res) => {
  try {
    const openai = new OpenAI(process.env.OPENAI_API_KEY);
    /** @type {{ openai: OpenAI }} */
    const { openai } = await initializeClient({ req, res });

    const { limit, order, after, before } = req.query;
    const assistants = await openai.beta.assistants.list({
    const response = await openai.beta.assistants.list({
      limit,
      order,
      after,
      before,
    });
    res.json(assistants);

    /** @type {AssistantListResponse} */
    let body = response.body;

    if (req.app.locals?.[EModelEndpoint.assistants]) {
      /** @type {Partial<TAssistantEndpoint>} */
      const assistantsConfig = req.app.locals[EModelEndpoint.assistants];
      const { supportedIds, excludedIds } = assistantsConfig;
      if (supportedIds?.length) {
        body.data = body.data.filter((assistant) => supportedIds.includes(assistant.id));
      } else if (excludedIds?.length) {
        body.data = body.data.filter((assistant) => !excludedIds.includes(assistant.id));
      }
    }

    res.json(body);
  } catch (error) {
    logger.error('[/assistants] Error listing assistants', error);
    res.status(500).json({ error: error.message });
  }
});

/**
 * Returns a list of the user's assistant documents (metadata saved to database).
 * @route GET /assistants/documents
 * @returns {AssistantDocument[]} 200 - success response - application/json
 */
router.get('/documents', async (req, res) => {
  try {
    res.json(await getAssistants({ user: req.user.id }));
  } catch (error) {
    logger.error('[/assistants/documents] Error listing assistant documents', error);
    res.status(500).json({ error: error.message });
  }
});

/**
 * Uploads and updates an avatar for a specific assistant.
 * @route POST /avatar/:assistant_id
 * @param {string} req.params.assistant_id - The ID of the assistant.
 * @param {Express.Multer.File} req.file - The avatar image file.
 * @param {string} [req.body.metadata] - Optional metadata for the assistant's avatar.
 * @returns {Object} 200 - success response - application/json
 */
router.post('/avatar/:assistant_id', upload.single('file'), async (req, res) => {
  try {
    const { assistant_id } = req.params;
    if (!assistant_id) {
      return res.status(400).json({ message: 'Assistant ID is required' });
    }

    let { metadata: _metadata = '{}' } = req.body;
    /** @type {{ openai: OpenAI }} */
    const { openai } = await initializeClient({ req, res });

    const image = await uploadImageBuffer({ req, context: FileContext.avatar });

    try {
      _metadata = JSON.parse(_metadata);
    } catch (error) {
      logger.error('[/avatar/:assistant_id] Error parsing metadata', error);
      _metadata = {};
    }

    if (_metadata.avatar && _metadata.avatar_source) {
      const { deleteFile } = getStrategyFunctions(_metadata.avatar_source);
      try {
        await deleteFile(req, { filepath: _metadata.avatar });
        await deleteFileByFilter({ filepath: _metadata.avatar });
      } catch (error) {
        logger.error('[/avatar/:assistant_id] Error deleting old avatar', error);
      }
    }

    const metadata = {
      ..._metadata,
      avatar: image.filepath,
      avatar_source: req.app.locals.fileStrategy,
    };

    const promises = [];
    promises.push(
      updateAssistant(
        { assistant_id, user: req.user.id },
        {
          avatar: {
            filepath: image.filepath,
            source: req.app.locals.fileStrategy,
          },
        },
      ),
    );
    promises.push(openai.beta.assistants.update(assistant_id, { metadata }));

    const resolved = await Promise.all(promises);
    res.status(201).json(resolved[1]);
  } catch (error) {
    const message = 'An error occurred while updating the Assistant Avatar';
    logger.error(message, error);
    res.status(500).json({ message });
  }
});

module.exports = router;
@@ -1,64 +1,217 @@
const crypto = require('crypto');
const OpenAI = require('openai');
const { logger } = require('~/config');
const { sendMessage } = require('../../utils');
const { initThread, createRun, handleRun } = require('../../services/AssistantService');
const { v4 } = require('uuid');
const express = require('express');
const { EModelEndpoint, Constants, RunStatus, CacheKeys } = require('librechat-data-provider');
const {
  initThread,
  recordUsage,
  saveUserMessage,
  checkMessageGaps,
  addThreadMetadata,
  saveAssistantMessage,
} = require('~/server/services/Threads');
const { runAssistant, createOnTextProgress } = require('~/server/services/AssistantService');
const { addTitle, initializeClient } = require('~/server/services/Endpoints/assistant');
const { createRun, sleep } = require('~/server/services/Runs');
const { getConvo } = require('~/models/Conversation');
const getLogStores = require('~/cache/getLogStores');
const { sendMessage } = require('~/server/utils');
const { logger } = require('~/config');

const router = express.Router();
const {
  setHeaders,
  // handleAbort,
  // handleAbortError,
  handleAbort,
  handleAbortError,
  // validateEndpoint,
  // buildEndpointOption,
  // createAbortController,
} = require('../../middleware');
  buildEndpointOption,
} = require('~/server/middleware');

// const thread = {
//   id: 'thread_LexzJUVugYFqfslS7c7iL3Zo',
//   "thread_nZoiCbPauU60LqY1Q0ME1elg"
// };
router.post('/abort', handleAbort());

/**
 * Chat with an assistant.
 * @route POST /
 * @desc Chat with an assistant
 * @access Public
 * @param {express.Request} req - The request object, containing the request data.
 * @param {express.Response} res - The response object, used to send back a response.
 * @returns {void}
 */
router.post('/', setHeaders, async (req, res) => {
  try {
router.post('/', buildEndpointOption, setHeaders, async (req, res) => {
  logger.debug('[/assistants/chat/] req.body', req.body);
  // test message:
  // How many polls of 500 ms intervals are there in 18 seconds?
  const {
    text,
    model,
    files = [],
    promptPrefix,
    assistant_id,
    instructions,
    thread_id: _thread_id,
    messageId: _messageId,
    conversationId: convoId,
    parentMessageId: _parentId = Constants.NO_PARENT,
  } = req.body;

  const { assistant_id, messages, text: userMessage, messageId } = req.body;
  const conversationId = req.body.conversationId || crypto.randomUUID();
  // let thread_id = req.body.thread_id ?? 'thread_nZoiCbPauU60LqY1Q0ME1elg'; // for testing
  let thread_id = req.body.thread_id;
  /** @type {Partial<TAssistantEndpoint>} */
  const assistantsConfig = req.app.locals?.[EModelEndpoint.assistants];

  if (assistantsConfig) {
    const { supportedIds, excludedIds } = assistantsConfig;
    const error = { message: 'Assistant not supported' };
    if (supportedIds?.length && !supportedIds.includes(assistant_id)) {
      return await handleAbortError(res, req, error, {
        sender: 'System',
        conversationId: convoId,
        messageId: v4(),
        parentMessageId: _messageId,
        error,
      });
    } else if (excludedIds?.length && excludedIds.includes(assistant_id)) {
      return await handleAbortError(res, req, error, {
        sender: 'System',
        conversationId: convoId,
        messageId: v4(),
        parentMessageId: _messageId,
      });
    }
  }

  /** @type {OpenAIClient} */
  let openai;
  /** @type {string|undefined} - the current thread id */
  let thread_id = _thread_id;
  /** @type {string|undefined} - the current run id */
  let run_id;
  /** @type {string|undefined} - the parent messageId */
  let parentMessageId = _parentId;
  /** @type {TMessage[]} */
  let previousMessages = [];

  const userMessageId = v4();
  const responseMessageId = v4();

  /** @type {string} - The conversation UUID - created if undefined */
  const conversationId = convoId ?? v4();

  const cache = getLogStores(CacheKeys.ABORT_KEYS);
  const cacheKey = `${req.user.id}:${conversationId}`;

  try {
    if (convoId && !_thread_id) {
      throw new Error('Missing thread_id for existing conversation');
    }

    if (!assistant_id) {
      throw new Error('Missing assistant_id');
    }

    const openai = new OpenAI(process.env.OPENAI_API_KEY);
    console.log(messages);
    /** @type {{ openai: OpenAIClient }} */
    const { openai: _openai, client } = await initializeClient({
      req,
      res,
      endpointOption: req.body.endpointOption,
      initAppClient: true,
    });

    const initThreadBody = {
      messages: [
        {
    openai = _openai;

    // if (thread_id) {
    //   previousMessages = await checkMessageGaps({ openai, thread_id, conversationId });
    // }

    if (previousMessages.length) {
      parentMessageId = previousMessages[previousMessages.length - 1].messageId;
    }

    const userMessage = {
      role: 'user',
      content: userMessage,
      content: text,
      metadata: {
        messageId,
        messageId: userMessageId,
      },
        },
      ],
    };

    let thread_file_ids = [];
    if (convoId) {
      const convo = await getConvo(req.user.id, convoId);
      if (convo && convo.file_ids) {
        thread_file_ids = convo.file_ids;
      }
    }

    const file_ids = files.map(({ file_id }) => file_id);
    if (file_ids.length || thread_file_ids.length) {
      userMessage.file_ids = file_ids;
      openai.attachedFileIds = new Set([...file_ids, ...thread_file_ids]);
|
||||
}
|
||||
|
||||
// TODO: may allow multiple messages to be created beforehand in a future update
|
||||
const initThreadBody = {
|
||||
messages: [userMessage],
|
||||
metadata: {
|
||||
user: req.user.id,
|
||||
conversationId,
|
||||
},
|
||||
};
|
||||
|
||||
const result = await initThread({ openai, body: initThreadBody, thread_id });
|
||||
// const { messages: _messages } = result;
|
||||
thread_id = result.thread_id;
|
||||
|
||||
createOnTextProgress({
|
||||
openai,
|
||||
conversationId,
|
||||
userMessageId,
|
||||
messageId: responseMessageId,
|
||||
thread_id,
|
||||
});
|
||||
|
||||
const requestMessage = {
|
||||
user: req.user.id,
|
||||
text,
|
||||
messageId: userMessageId,
|
||||
parentMessageId,
|
||||
// TODO: make sure client sends correct format for `files`, use zod
|
||||
files,
|
||||
file_ids,
|
||||
conversationId,
|
||||
isCreatedByUser: true,
|
||||
assistant_id,
|
||||
thread_id,
|
||||
model: assistant_id,
|
||||
};
|
||||
|
||||
previousMessages.push(requestMessage);
|
||||
|
||||
await saveUserMessage({ ...requestMessage, model });
|
||||
|
||||
const conversation = {
|
||||
conversationId,
|
||||
// TODO: title feature
|
||||
title: 'New Chat',
|
||||
endpoint: EModelEndpoint.assistants,
|
||||
promptPrefix: promptPrefix,
|
||||
instructions: instructions,
|
||||
assistant_id,
|
||||
// model,
|
||||
};
|
||||
|
||||
if (file_ids.length) {
|
||||
conversation.file_ids = file_ids;
|
||||
}
|
||||
|
||||
/** @type {CreateRunBody} */
|
||||
const body = {
|
||||
assistant_id,
|
||||
model,
|
||||
};
|
||||
|
||||
if (promptPrefix) {
|
||||
body.additional_instructions = promptPrefix;
|
||||
}
|
||||
|
||||
if (instructions) {
|
||||
body.instructions = instructions;
|
||||
}
|
||||
|
||||
/* NOTE:
|
||||
* By default, a Run will use the model and tools configuration specified in Assistant object,
|
||||
* but you can override most of these when creating the Run for added flexibility:
|
||||
|
@@ -66,43 +219,160 @@ router.post('/', setHeaders, async (req, res) => {
|
|||
const run = await createRun({
|
||||
openai,
|
||||
thread_id,
|
||||
body: { assistant_id, model: 'gpt-3.5-turbo-1106' },
|
||||
body,
|
||||
});
|
||||
const response = await handleRun({ openai, thread_id, run_id: run.id });
|
||||
|
||||
run_id = run.id;
|
||||
await cache.set(cacheKey, `${thread_id}:${run_id}`);
|
||||
|
||||
sendMessage(res, {
|
||||
sync: true,
|
||||
conversationId,
|
||||
// messages: previousMessages,
|
||||
requestMessage,
|
||||
responseMessage: {
|
||||
user: req.user.id,
|
||||
messageId: openai.responseMessage.messageId,
|
||||
parentMessageId: userMessageId,
|
||||
conversationId,
|
||||
assistant_id,
|
||||
thread_id,
|
||||
model: assistant_id,
|
||||
},
|
||||
});
|
||||
|
||||
// todo: retry logic
|
||||
let response = await runAssistant({ openai, thread_id, run_id });
|
||||
logger.debug('[/assistants/chat/] response', response);
|
||||
|
||||
if (response.run.status === RunStatus.IN_PROGRESS) {
|
||||
response = await runAssistant({
|
||||
openai,
|
||||
thread_id,
|
||||
run_id,
|
||||
in_progress: openai.in_progress,
|
||||
});
|
||||
}
|
||||
|
||||
/** @type {ResponseMessage} */
|
||||
const responseMessage = {
|
||||
...openai.responseMessage,
|
||||
parentMessageId: userMessageId,
|
||||
conversationId,
|
||||
user: req.user.id,
|
||||
assistant_id,
|
||||
thread_id,
|
||||
model: assistant_id,
|
||||
};
|
||||
|
||||
// TODO: token count from usage returned in run
|
||||
// TODO: parse responses, save to db, send to user
|
||||
|
||||
sendMessage(res, {
|
||||
title: 'New Chat',
|
||||
final: true,
|
||||
conversation: {
|
||||
conversationId: 'fake-convo-id',
|
||||
title: 'New Chat',
|
||||
},
|
||||
conversation,
|
||||
requestMessage: {
|
||||
messageId: 'fake-user-message-id',
|
||||
parentMessageId: '00000000-0000-0000-0000-000000000000',
|
||||
conversationId: 'fake-convo-id',
|
||||
sender: 'User',
|
||||
text: req.body.text,
|
||||
isCreatedByUser: true,
|
||||
},
|
||||
responseMessage: {
|
||||
messageId: 'fake-response-id',
|
||||
conversationId: 'fake-convo-id',
|
||||
parentMessageId: 'fake-user-message-id',
|
||||
isCreatedByUser: false,
|
||||
isEdited: false,
|
||||
model: 'gpt-3.5-turbo-1106',
|
||||
sender: 'Assistant',
|
||||
text: response.choices[0].text,
|
||||
parentMessageId,
|
||||
thread_id,
|
||||
},
|
||||
});
|
||||
res.end();
|
||||
|
||||
await saveAssistantMessage({ ...responseMessage, model });
|
||||
|
||||
if (parentMessageId === Constants.NO_PARENT && !_thread_id) {
|
||||
addTitle(req, {
|
||||
text,
|
||||
responseText: openai.responseText,
|
||||
conversationId,
|
||||
client,
|
||||
});
|
||||
}
|
||||
|
||||
await addThreadMetadata({
|
||||
openai,
|
||||
thread_id,
|
||||
messageId: responseMessage.messageId,
|
||||
messages: response.messages,
|
||||
});
|
||||
|
||||
if (!response.run.usage) {
|
||||
await sleep(3000);
|
||||
const completedRun = await openai.beta.threads.runs.retrieve(thread_id, run.id);
|
||||
if (completedRun.usage) {
|
||||
await recordUsage({
|
||||
...completedRun.usage,
|
||||
user: req.user.id,
|
||||
model: completedRun.model ?? model,
|
||||
conversationId,
|
||||
});
|
||||
}
|
||||
} else {
|
||||
await recordUsage({
|
||||
...response.run.usage,
|
||||
user: req.user.id,
|
||||
model: response.run.model ?? model,
|
||||
conversationId,
|
||||
});
|
||||
}
|
||||
} catch (error) {
|
||||
// res.status(500).json({ error: error.message });
|
||||
if (error.message === 'Run cancelled') {
|
||||
return res.end();
|
||||
}
|
||||
|
||||
logger.error('[/assistants/chat/]', error);
|
||||
res.end();
|
||||
|
||||
if (!openai || !thread_id || !run_id) {
|
||||
return res.status(500).json({ error: 'The Assistant run failed to initialize' });
|
||||
}
|
||||
|
||||
try {
|
||||
await cache.delete(cacheKey);
|
||||
const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
|
||||
logger.debug('Cancelled run:', cancelledRun);
|
||||
} catch (error) {
|
||||
logger.error('[abortRun] Error cancelling run', error);
|
||||
}
|
||||
|
||||
await sleep(2000);
|
||||
try {
|
||||
const run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
|
||||
await recordUsage({
|
||||
...run.usage,
|
||||
model: run.model,
|
||||
user: req.user.id,
|
||||
conversationId,
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('[/assistants/chat/] Error fetching or processing run', error);
|
||||
}
|
||||
|
||||
try {
|
||||
const runMessages = await checkMessageGaps({
|
||||
openai,
|
||||
run_id,
|
||||
thread_id,
|
||||
conversationId,
|
||||
latestMessageId: responseMessageId,
|
||||
});
|
||||
|
||||
const finalEvent = {
|
||||
title: 'New Chat',
|
||||
final: true,
|
||||
conversation: await getConvo(req.user.id, conversationId),
|
||||
runMessages,
|
||||
};
|
||||
|
||||
if (res.headersSent && finalEvent) {
|
||||
return sendMessage(res, finalEvent);
|
||||
}
|
||||
|
||||
res.json(finalEvent);
|
||||
} catch (error) {
|
||||
logger.error('[/assistants/chat/] Error finalizing error process', error);
|
||||
return res.status(500).json({ error: 'The Assistant run failed' });
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
|
|
api/server/routes/assistants/tools.js (new file, 8 lines)

@@ -0,0 +1,8 @@
const express = require('express');
const { getAvailableTools } = require('~/server/controllers/PluginController');

const router = express.Router();

router.get('/', getAvailableTools);

module.exports = router;
@@ -1,10 +1,10 @@
const express = require('express');
const { CacheKeys } = require('librechat-data-provider');
const { getConvosByPage, deleteConvos } = require('~/models/Conversation');
const { initializeClient } = require('~/server/services/Endpoints/assistant');
const { getConvosByPage, deleteConvos, getConvo, saveConvo } = require('~/models/Conversation');
const requireJwtAuth = require('~/server/middleware/requireJwtAuth');
const { sleep } = require('~/server/services/AssistantService');
const { sleep } = require('~/server/services/Runs/handle');
const getLogStores = require('~/cache/getLogStores');
const { getConvo, saveConvo } = require('~/models');
const { logger } = require('~/config');

const router = express.Router();
@@ -47,9 +47,7 @@ router.post('/gen_title', async (req, res) => {
    await titleCache.delete(key);
    res.status(200).json({ title });
  } else {
    res
      .status(404)
      .json({
    res.status(404).json({
      message: 'Title not found or method not implemented for the conversation\'s endpoint',
    });
  }
@@ -57,18 +55,29 @@ router.post('/gen_title', async (req, res) => {

router.post('/clear', async (req, res) => {
  let filter = {};
  const { conversationId, source } = req.body.arg;
  const { conversationId, source, thread_id } = req.body.arg;
  if (conversationId) {
    filter = { conversationId };
  }

  // for debugging deletion source
  // logger.debug('source:', source);

  if (source === 'button' && !conversationId) {
    return res.status(200).send('No conversationId provided');
  }

  if (thread_id) {
    /** @type {{ openai: OpenAI}} */
    const { openai } = await initializeClient({ req, res });
    try {
      const response = await openai.beta.threads.del(thread_id);
      logger.debug('Deleted OpenAI thread:', response);
    } catch (error) {
      logger.error('Error deleting OpenAI thread:', error);
    }
  }

  // for debugging deletion source
  // logger.debug('source:', source);

  try {
    const dbResponse = await deleteConvos(req.user.id, filter);
    res.status(201).json(dbResponse);
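The updated /clear handler treats OpenAI thread deletion as best effort: a failure to delete the remote thread is logged but never blocks removal of the local conversation. A small sketch of that pattern, assuming an openai client shaped like the one returned by initializeClient:

// Sketch: remote cleanup is isolated so local deletion always proceeds.
async function clearConversation({ openai, thread_id, deleteLocal }) {
  if (thread_id) {
    try {
      await openai.beta.threads.del(thread_id);
    } catch (error) {
      console.error('Error deleting OpenAI thread:', error); // logged, not rethrown
    }
  }
  return deleteLocal(); // e.g. deleteConvos(userId, filter) as in the route above
}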
@@ -1,38 +1,36 @@
const express = require('express');
const multer = require('multer');

const uploadAvatar = require('~/server/services/Files/images/avatar');
const { requireJwtAuth } = require('~/server/middleware/');
const User = require('~/models/User');
const express = require('express');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { resizeAvatar } = require('~/server/services/Files/images/avatar');
const { logger } = require('~/config');

const upload = multer();
const router = express.Router();

router.post('/', requireJwtAuth, upload.single('input'), async (req, res) => {
router.post('/', upload.single('input'), async (req, res) => {
  try {
    const userId = req.user.id;
    const { manual } = req.body;
    const input = req.file.buffer;

    if (!userId) {
      throw new Error('User ID is undefined');
    }

    // TODO: do not use Model directly, instead use a service method that uses the model
    const user = await User.findById(userId).lean();

    if (!user) {
      throw new Error('User not found');
    }
    const url = await uploadAvatar({
      input,
    const fileStrategy = req.app.locals.fileStrategy;
    const webPBuffer = await resizeAvatar({
      userId,
      manual,
      fileStrategy: req.app.locals.fileStrategy,
      input,
    });

    const { processAvatar } = getStrategyFunctions(fileStrategy);
    const url = await processAvatar({ buffer: webPBuffer, userId, manual });

    res.json({ url });
  } catch (error) {
    res.status(500).json({ message: 'An error occurred while uploading the profile picture' });
    const message = 'An error occurred while uploading the profile picture';
    logger.error(message, error);
    res.status(500).json({ message });
  }
});
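The rewritten avatar route splits the work into a storage-agnostic resize step (resizeAvatar) and a storage-specific upload step resolved through getStrategyFunctions(fileStrategy). A minimal sketch of that strategy lookup with hypothetical local/Firebase handlers (the real registry lives in ~/server/services/Files/strategies and returns more functions than processAvatar):

// Hypothetical, simplified strategy registry for illustration only.
const strategies = {
  local: { processAvatar: async ({ userId }) => `/images/${userId}/avatar.webp` },
  firebase: { processAvatar: async ({ userId }) => `https://cdn.example.com/${userId}/avatar.webp` },
};

function getStrategyFunctionsSketch(source) {
  const strategy = strategies[source];
  if (!strategy) {
    throw new Error(`No file strategy implemented for ${source}`);
  }
  return strategy; // caller destructures the function it needs, e.g. { processAvatar }
}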
@@ -1,14 +1,17 @@
const { z } = require('zod');
const axios = require('axios');
const fs = require('fs').promises;
const express = require('express');
const { FileSources } = require('librechat-data-provider');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { deleteFiles, getFiles } = require('~/models');
const { isUUID } = require('librechat-data-provider');
const {
  filterFile,
  processFileUpload,
  processDeleteRequest,
} = require('~/server/services/Files/process');
const { getFiles } = require('~/models/File');
const { logger } = require('~/config');

const router = express.Router();

const isUUID = z.string().uuid();

router.get('/', async (req, res) => {
  try {
    const files = await getFiles({ user: req.user.id });
@@ -19,6 +22,15 @@ router.get('/', async (req, res) => {
  }
});

router.get('/config', async (req, res) => {
  try {
    res.status(200).json(req.app.locals.fileConfig);
  } catch (error) {
    logger.error('[/files] Error getting fileConfig', error);
    res.status(400).json({ message: 'Error in request', error: error.message });
  }
});

router.delete('/', async (req, res) => {
  try {
    const { files: _files } = req.body;
@@ -31,6 +43,11 @@ router.delete('/', async (req, res) => {
      if (!file.filepath) {
        return false;
      }

      if (/^file-/.test(file.file_id)) {
        return true;
      }

      return isUUID.safeParse(file.file_id).success;
    });

@@ -39,29 +56,8 @@ router.delete('/', async (req, res) => {
      return;
    }

    const file_ids = files.map((file) => file.file_id);
    const deletionMethods = {};
    const promises = [];
    promises.push(await deleteFiles(file_ids));
    await processDeleteRequest({ req, files });

    for (const file of files) {
      const source = file.source ?? FileSources.local;

      if (deletionMethods[source]) {
        promises.push(deletionMethods[source](req, file));
        continue;
      }

      const { deleteFile } = getStrategyFunctions(source);
      if (!deleteFile) {
        throw new Error(`Delete function not implemented for ${source}`);
      }

      deletionMethods[source] = deleteFile;
      promises.push(deleteFile(req, file));
    }

    await Promise.all(promises);
    res.status(200).json({ message: 'Files deleted successfully' });
  } catch (error) {
    logger.error('[/files] Error deleting files:', error);
@@ -69,4 +65,69 @@ router.delete('/', async (req, res) => {
  }
});

router.get('/download/:fileId', async (req, res) => {
  try {
    const { fileId } = req.params;

    const options = {
      headers: {
        // TODO: Client initialization for OpenAI API Authentication
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      responseType: 'stream',
    };

    const fileResponse = await axios.get(`https://api.openai.com/v1/files/${fileId}`, {
      headers: options.headers,
    });
    const { filename } = fileResponse.data;

    const response = await axios.get(`https://api.openai.com/v1/files/${fileId}/content`, options);
    res.setHeader('Content-Disposition', `attachment; filename="${filename}"`);
    response.data.pipe(res);
  } catch (error) {
    console.error('Error downloading file:', error);
    res.status(500).send('Error downloading file');
  }
});

router.post('/', async (req, res) => {
  const file = req.file;
  const metadata = req.body;
  let cleanup = true;

  try {
    filterFile({ req, file });

    metadata.temp_file_id = metadata.file_id;
    metadata.file_id = req.file_id;

    await processFileUpload({ req, res, file, metadata });
  } catch (error) {
    let message = 'Error processing file';
    logger.error('[/files] Error processing file:', error);
    cleanup = false;

    if (error.message?.includes('file_ids')) {
      message += ': ' + error.message;
    }

    // TODO: delete remote file if it exists
    try {
      await fs.unlink(file.path);
    } catch (error) {
      logger.error('[/files] Error deleting file:', error);
    }
    res.status(500).json({ message });
  }

  if (cleanup) {
    try {
      await fs.unlink(file.path);
    } catch (error) {
      logger.error('[/files/images] Error deleting file after file processing:', error);
    }
  }
});

module.exports = router;
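The DELETE handler above only forwards entries whose file_id is either an OpenAI id (file-...) or a valid UUID, using z.string().uuid() for the latter. That filter in isolation, mirroring the route's own checks:

const { z } = require('zod');

const isUUID = z.string().uuid();

// Keep OpenAI file ids and locally generated UUIDs; drop anything else.
function filterDeletableFiles(files) {
  return files.filter((file) => {
    if (!file.filepath) {
      return false;
    }
    if (/^file-/.test(file.file_id)) {
      return true;
    }
    return isUUID.safeParse(file.file_id).success;
  });
}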
@@ -1,49 +1,29 @@
const { z } = require('zod');
const path = require('path');
const fs = require('fs').promises;
const express = require('express');
const upload = require('./multer');
const { processImageUpload } = require('~/server/services/Files/process');
const { filterFile, processImageFile } = require('~/server/services/Files/process');
const { logger } = require('~/config');

const router = express.Router();

router.post('/', upload.single('file'), async (req, res) => {
  const file = req.file;
router.post('/', async (req, res) => {
  const metadata = req.body;
  // TODO: add file size/type validation

  const uuidSchema = z.string().uuid();

  try {
    if (!file) {
      throw new Error('No file provided');
    }
    filterFile({ req, file: req.file, image: true });

    if (!metadata.file_id) {
      throw new Error('No file_id provided');
    }

    if (!metadata.width) {
      throw new Error('No width provided');
    }

    if (!metadata.height) {
      throw new Error('No height provided');
    }
    /* parse to validate api call */
    uuidSchema.parse(metadata.file_id);
    metadata.temp_file_id = metadata.file_id;
    metadata.file_id = req.file_id;

    await processImageUpload({ req, res, file, metadata });
    await processImageFile({ req, res, file: req.file, metadata });
  } catch (error) {
    // TODO: delete remote file if it exists
    logger.error('[/files/images] Error processing file:', error);
    try {
      const filepath = path.join(
        req.app.locals.paths.imageOutput,
        req.user.id,
        path.basename(file.filename),
        path.basename(req.file.filename),
      );
      await fs.unlink(filepath);
    } catch (error) {
@@ -51,16 +31,6 @@ router.post('/', upload.single('file'), async (req, res) => {
    }
    res.status(500).json({ message: 'Error processing file' });
  }

  // do this if strategy is not local
  // finally {
  //   try {
  //     // await fs.unlink(file.path);
  //   } catch (error) {
  //     logger.error('[/files/images] Error deleting file:', error);

  //   }
  // }
});

module.exports = router;
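On failure, the image route tries to remove the temporary upload it wrote under paths.imageOutput/<userId>/<filename> before responding with a 500. A compact sketch of that cleanup-on-error shape, assuming the same directory layout:

const path = require('path');
const fs = require('fs').promises;

// Best-effort removal of the temp image written for this request.
async function cleanupTempImage({ imageOutput, userId, filename }) {
  const filepath = path.join(imageOutput, userId, path.basename(filename));
  try {
    await fs.unlink(filepath);
  } catch (error) {
    // ignore: the route logs and still responds with a 500 either way
  }
}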
@@ -1,24 +1,27 @@
const express = require('express');
const router = express.Router();
const {
  uaParser,
  checkBan,
  requireJwtAuth,
  // concurrentLimiter,
  // messageIpLimiter,
  // messageUserLimiter,
} = require('../../middleware');
const createMulterInstance = require('./multer');
const { uaParser, checkBan, requireJwtAuth, createFileLimiters } = require('~/server/middleware');

const files = require('./files');
const images = require('./images');
const avatar = require('./avatar');

router.use(requireJwtAuth);
router.use(checkBan);
router.use(uaParser);
const initialize = async () => {
  const router = express.Router();
  router.use(requireJwtAuth);
  router.use(checkBan);
  router.use(uaParser);

router.use('/', files);
router.use('/images', images);
router.use('/images/avatar', avatar);
  const upload = await createMulterInstance();
  const { fileUploadIpLimiter, fileUploadUserLimiter } = createFileLimiters();
  router.post('*', fileUploadIpLimiter, fileUploadUserLimiter);
  router.post('/', upload.single('file'));
  router.post('/images', upload.single('file'));

module.exports = router;
  router.use('/', files);
  router.use('/images', images);
  router.use('/images/avatar', avatar);
  return router;
};

module.exports = { initialize };
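Because multer now needs the merged file config before it can be built, the files router is constructed asynchronously and exported as { initialize } instead of a ready-made router. A sketch of how a caller would mount it, assuming an async server bootstrap; the require path and the /api/files mount point are illustrative, since the actual wiring is outside this diff:

// Hypothetical bootstrap; the real app mounts this in its own server setup.
const { initialize } = require('./routes/files');

async function mountFileRoutes(app) {
  const filesRouter = await initialize(); // builds multer + upload rate limiters first
  app.use('/api/files', filesRouter);
}

// usage: await mountFileRoutes(app) before app.listen(...)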
@@ -2,13 +2,12 @@ const fs = require('fs');
const path = require('path');
const crypto = require('crypto');
const multer = require('multer');

const supportedTypes = ['image/jpeg', 'image/jpg', 'image/png', 'image/webp'];
const sizeLimit = 20 * 1024 * 1024; // 20 MB
const { fileConfig: defaultFileConfig, mergeFileConfig } = require('librechat-data-provider');
const getCustomConfig = require('~/server/services/Config/getCustomConfig');

const storage = multer.diskStorage({
  destination: function (req, file, cb) {
    const outputPath = path.join(req.app.locals.paths.imageOutput, 'temp');
    const outputPath = path.join(req.app.locals.paths.uploads, 'temp', req.user.id);
    if (!fs.existsSync(outputPath)) {
      fs.mkdirSync(outputPath, { recursive: true });
    }
@@ -16,22 +15,30 @@ const storage = multer.diskStorage({
  },
  filename: function (req, file, cb) {
    req.file_id = crypto.randomUUID();
    const fileExt = path.extname(file.originalname);
    cb(null, `img-${req.file_id}${fileExt}`);
    cb(null, `${file.originalname}`);
  },
});

const fileFilter = (req, file, cb) => {
  if (!supportedTypes.includes(file.mimetype)) {
    return cb(
      new Error('Unsupported file type. Only JPEG, JPG, PNG, and WEBP files are allowed.'),
      false,
    );
  if (!file) {
    return cb(new Error('No file provided'), false);
  }

  if (!defaultFileConfig.checkType(file.mimetype)) {
    return cb(new Error('Unsupported file type: ' + file.mimetype), false);
  }

  cb(null, true);
};

const upload = multer({ storage, fileFilter, limits: { fileSize: sizeLimit } });
const createMulterInstance = async () => {
  const customConfig = await getCustomConfig();
  const fileConfig = mergeFileConfig(customConfig?.fileConfig);
  return multer({
    storage,
    fileFilter,
    limits: { fileSize: fileConfig.serverFileSizeLimit },
  });
};

module.exports = upload;
module.exports = createMulterInstance;
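The new fileFilter delegates MIME checking to defaultFileConfig.checkType instead of a fixed image whitelist, so accepted types follow the shared file config. A minimal sketch of the same guard shape, with a simplified image-only checkType standing in for the config-driven check in librechat-data-provider:

// Stand-in for defaultFileConfig.checkType; the real check is config-driven and broader.
const checkType = (mimetype) => /^image\/(jpeg|jpg|png|webp)$/.test(mimetype);

const fileFilterSketch = (req, file, cb) => {
  if (!file) {
    return cb(new Error('No file provided'), false);
  }
  if (!checkType(file.mimetype)) {
    return cb(new Error('Unsupported file type: ' + file.mimetype), false);
  }
  cb(null, true); // accept the upload
};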
api/server/services/ActionService.js (new file, 118 lines)

@@ -0,0 +1,118 @@
|
|||
const { AuthTypeEnum } = require('librechat-data-provider');
|
||||
const { encryptV2, decryptV2 } = require('~/server/utils/crypto');
|
||||
const { getActions } = require('~/models/Action');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
/**
|
||||
* Loads action sets based on the user and assistant ID.
|
||||
*
|
||||
* @param {Object} params - The parameters for loading action sets.
|
||||
* @param {string} params.user - The user identifier.
|
||||
* @param {string} params.assistant_id - The assistant identifier.
|
||||
* @returns {Promise<Action[] | null>} A promise that resolves to an array of actions or `null` if no match.
|
||||
*/
|
||||
async function loadActionSets({ user, assistant_id }) {
|
||||
return await getActions({ user, assistant_id }, true);
|
||||
}
|
||||
|
||||
/**
|
||||
* Creates a general tool for an entire action set.
|
||||
*
|
||||
* @param {Object} params - The parameters for loading action sets.
|
||||
* @param {Action} params.action - The action set. Necessary for decrypting authentication values.
|
||||
* @param {ActionRequest} params.requestBuilder - The ActionRequest builder class to execute the API call.
|
||||
* @returns { { _call: (toolInput: Object) => unknown} } An object with `_call` method to execute the tool input.
|
||||
*/
|
||||
function createActionTool({ action, requestBuilder }) {
|
||||
action.metadata = decryptMetadata(action.metadata);
|
||||
const _call = async (toolInput) => {
|
||||
try {
|
||||
requestBuilder.setParams(toolInput);
|
||||
if (action.metadata.auth && action.metadata.auth.type !== AuthTypeEnum.None) {
|
||||
await requestBuilder.setAuth(action.metadata);
|
||||
}
|
||||
const res = await requestBuilder.execute();
|
||||
if (typeof res.data === 'object') {
|
||||
return JSON.stringify(res.data);
|
||||
}
|
||||
return res.data;
|
||||
} catch (error) {
|
||||
logger.error(`API call to ${action.metadata.domain} failed`, error);
|
||||
if (error.response) {
|
||||
const { status, data } = error.response;
|
||||
return `API call to ${action.metadata.domain} failed with status ${status}: ${data}`;
|
||||
}
|
||||
|
||||
return `API call to ${action.metadata.domain} failed.`;
|
||||
}
|
||||
};
|
||||
|
||||
return {
|
||||
_call,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Encrypts sensitive metadata values for an action.
|
||||
*
|
||||
* @param {ActionMetadata} metadata - The action metadata to encrypt.
|
||||
* @returns {ActionMetadata} The updated action metadata with encrypted values.
|
||||
*/
|
||||
function encryptMetadata(metadata) {
|
||||
const encryptedMetadata = { ...metadata };
|
||||
|
||||
// ServiceHttp
|
||||
if (metadata.auth && metadata.auth.type === AuthTypeEnum.ServiceHttp) {
|
||||
if (metadata.api_key) {
|
||||
encryptedMetadata.api_key = encryptV2(metadata.api_key);
|
||||
}
|
||||
}
|
||||
|
||||
// OAuth
|
||||
else if (metadata.auth && metadata.auth.type === AuthTypeEnum.OAuth) {
|
||||
if (metadata.oauth_client_id) {
|
||||
encryptedMetadata.oauth_client_id = encryptV2(metadata.oauth_client_id);
|
||||
}
|
||||
if (metadata.oauth_client_secret) {
|
||||
encryptedMetadata.oauth_client_secret = encryptV2(metadata.oauth_client_secret);
|
||||
}
|
||||
}
|
||||
|
||||
return encryptedMetadata;
|
||||
}
|
||||
|
||||
/**
|
||||
* Decrypts sensitive metadata values for an action.
|
||||
*
|
||||
* @param {ActionMetadata} metadata - The action metadata to decrypt.
|
||||
* @returns {ActionMetadata} The updated action metadata with decrypted values.
|
||||
*/
|
||||
function decryptMetadata(metadata) {
|
||||
const decryptedMetadata = { ...metadata };
|
||||
|
||||
// ServiceHttp
|
||||
if (metadata.auth && metadata.auth.type === AuthTypeEnum.ServiceHttp) {
|
||||
if (metadata.api_key) {
|
||||
decryptedMetadata.api_key = decryptV2(metadata.api_key);
|
||||
}
|
||||
}
|
||||
|
||||
// OAuth
|
||||
else if (metadata.auth && metadata.auth.type === AuthTypeEnum.OAuth) {
|
||||
if (metadata.oauth_client_id) {
|
||||
decryptedMetadata.oauth_client_id = decryptV2(metadata.oauth_client_id);
|
||||
}
|
||||
if (metadata.oauth_client_secret) {
|
||||
decryptedMetadata.oauth_client_secret = decryptV2(metadata.oauth_client_secret);
|
||||
}
|
||||
}
|
||||
|
||||
return decryptedMetadata;
|
||||
}
|
||||
|
||||
module.exports = {
|
||||
loadActionSets,
|
||||
createActionTool,
|
||||
encryptMetadata,
|
||||
decryptMetadata,
|
||||
};
|
|
@@ -1,7 +1,10 @@
|
|||
const { FileSources } = require('librechat-data-provider');
|
||||
const { FileSources, EModelEndpoint, Constants } = require('librechat-data-provider');
|
||||
const { initializeFirebase } = require('./Files/Firebase/initialize');
|
||||
const loadCustomConfig = require('./Config/loadCustomConfig');
|
||||
const handleRateLimits = require('./Config/handleRateLimits');
|
||||
const { loadAndFormatTools } = require('./ToolService');
|
||||
const paths = require('~/config/paths');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
/**
|
||||
*
|
||||
|
@@ -12,13 +15,7 @@ const paths = require('~/config/paths');
|
|||
const AppService = async (app) => {
|
||||
/** @type {TCustomConfig}*/
|
||||
const config = (await loadCustomConfig()) ?? {};
|
||||
const socialLogins = config?.registration?.socialLogins ?? [
|
||||
'google',
|
||||
'facebook',
|
||||
'openid',
|
||||
'github',
|
||||
'discord',
|
||||
];
|
||||
|
||||
const fileStrategy = config.fileStrategy ?? FileSources.local;
|
||||
process.env.CDN_PROVIDER = fileStrategy;
|
||||
|
||||
|
@@ -26,11 +23,72 @@ const AppService = async (app) => {
|
|||
initializeFirebase();
|
||||
}
|
||||
|
||||
/** @type {Record<string, FunctionTool} */
|
||||
const availableTools = loadAndFormatTools({
|
||||
directory: paths.structuredTools,
|
||||
filter: new Set([
|
||||
'ChatTool.js',
|
||||
'CodeSherpa.js',
|
||||
'CodeSherpaTools.js',
|
||||
'E2BTools.js',
|
||||
'extractionChain.js',
|
||||
]),
|
||||
});
|
||||
|
||||
if (!Object.keys(config).length) {
|
||||
app.locals = {
|
||||
socialLogins,
|
||||
availableTools,
|
||||
fileStrategy,
|
||||
paths,
|
||||
};
|
||||
|
||||
return;
|
||||
}
|
||||
|
||||
if (config.version !== Constants.CONFIG_VERSION) {
|
||||
logger.info(
|
||||
`\nOutdated Config version: ${config.version}. Current version: ${Constants.CONFIG_VERSION}\n\nCheck out the latest config file guide for new options and features.\nhttps://docs.librechat.ai/install/configuration/custom_config.html\n\n`,
|
||||
);
|
||||
}
|
||||
|
||||
handleRateLimits(config?.rateLimits);
|
||||
const socialLogins = config?.registration?.socialLogins ?? [
|
||||
'google',
|
||||
'facebook',
|
||||
'openid',
|
||||
'github',
|
||||
'discord',
|
||||
];
|
||||
|
||||
const endpointLocals = {};
|
||||
if (config?.endpoints?.[EModelEndpoint.assistants]) {
|
||||
const { disableBuilder, pollIntervalMs, timeoutMs, supportedIds, excludedIds } =
|
||||
config.endpoints[EModelEndpoint.assistants];
|
||||
|
||||
if (supportedIds?.length && excludedIds?.length) {
|
||||
logger.warn(
|
||||
`Both \`supportedIds\` and \`excludedIds\` are defined for the ${EModelEndpoint.assistants} endpoint; \`excludedIds\` field will be ignored.`,
|
||||
);
|
||||
}
|
||||
|
||||
/** @type {Partial<TAssistantEndpoint>} */
|
||||
endpointLocals[EModelEndpoint.assistants] = {
|
||||
disableBuilder,
|
||||
pollIntervalMs,
|
||||
timeoutMs,
|
||||
supportedIds,
|
||||
excludedIds,
|
||||
};
|
||||
}
|
||||
|
||||
app.locals = {
|
||||
socialLogins,
|
||||
availableTools,
|
||||
fileStrategy,
|
||||
fileConfig: config?.fileConfig,
|
||||
paths,
|
||||
...endpointLocals,
|
||||
};
|
||||
};
|
||||
|
||||
module.exports = AppService;
|
||||
|
|
|
@@ -13,6 +13,24 @@ jest.mock('./Config/loadCustomConfig', () => {
|
|||
jest.mock('./Files/Firebase/initialize', () => ({
|
||||
initializeFirebase: jest.fn(),
|
||||
}));
|
||||
jest.mock('./ToolService', () => ({
|
||||
loadAndFormatTools: jest.fn().mockReturnValue({
|
||||
ExampleTool: {
|
||||
type: 'function',
|
||||
function: {
|
||||
description: 'Example tool function',
|
||||
name: 'exampleFunction',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
param1: { type: 'string', description: 'An example parameter' },
|
||||
},
|
||||
required: ['param1'],
|
||||
},
|
||||
},
|
||||
},
|
||||
}),
|
||||
}));
|
||||
|
||||
describe('AppService', () => {
|
||||
let app;
|
||||
|
@@ -30,10 +48,39 @@ describe('AppService', () => {
|
|||
expect(app.locals).toEqual({
|
||||
socialLogins: ['testLogin'],
|
||||
fileStrategy: 'testStrategy',
|
||||
availableTools: {
|
||||
ExampleTool: {
|
||||
type: 'function',
|
||||
function: expect.objectContaining({
|
||||
description: 'Example tool function',
|
||||
name: 'exampleFunction',
|
||||
parameters: expect.objectContaining({
|
||||
type: 'object',
|
||||
properties: expect.any(Object),
|
||||
required: expect.arrayContaining(['param1']),
|
||||
}),
|
||||
}),
|
||||
},
|
||||
},
|
||||
paths: expect.anything(),
|
||||
});
|
||||
});
|
||||
|
||||
it('should log a warning if the config version is outdated', async () => {
|
||||
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
|
||||
Promise.resolve({
|
||||
version: '0.9.0', // An outdated version for this test
|
||||
registration: { socialLogins: ['testLogin'] },
|
||||
fileStrategy: 'testStrategy',
|
||||
}),
|
||||
);
|
||||
|
||||
await AppService(app);
|
||||
|
||||
const { logger } = require('~/config');
|
||||
expect(logger.info).toHaveBeenCalledWith(expect.stringContaining('Outdated Config version'));
|
||||
});
|
||||
|
||||
it('should initialize Firebase when fileStrategy is firebase', async () => {
|
||||
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
|
||||
Promise.resolve({
|
||||
|
@@ -48,4 +95,217 @@ describe('AppService', () => {
|
|||
|
||||
expect(process.env.CDN_PROVIDER).toEqual(FileSources.firebase);
|
||||
});
|
||||
|
||||
it('should load and format tools accurately with defined structure', async () => {
|
||||
const { loadAndFormatTools } = require('./ToolService');
|
||||
await AppService(app);
|
||||
|
||||
expect(loadAndFormatTools).toHaveBeenCalledWith({
|
||||
directory: expect.anything(),
|
||||
filter: expect.anything(),
|
||||
});
|
||||
|
||||
expect(app.locals.availableTools.ExampleTool).toBeDefined();
|
||||
expect(app.locals.availableTools.ExampleTool).toEqual({
|
||||
type: 'function',
|
||||
function: {
|
||||
description: 'Example tool function',
|
||||
name: 'exampleFunction',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
param1: { type: 'string', description: 'An example parameter' },
|
||||
},
|
||||
required: ['param1'],
|
||||
},
|
||||
},
|
||||
});
|
||||
});
|
||||
|
||||
it('should correctly configure endpoints based on custom config', async () => {
|
||||
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
|
||||
Promise.resolve({
|
||||
endpoints: {
|
||||
assistants: {
|
||||
disableBuilder: true,
|
||||
pollIntervalMs: 5000,
|
||||
timeoutMs: 30000,
|
||||
supportedIds: ['id1', 'id2'],
|
||||
},
|
||||
},
|
||||
}),
|
||||
);
|
||||
|
||||
await AppService(app);
|
||||
|
||||
expect(app.locals).toHaveProperty('assistants');
|
||||
expect(app.locals.assistants).toEqual(
|
||||
expect.objectContaining({
|
||||
disableBuilder: true,
|
||||
pollIntervalMs: 5000,
|
||||
timeoutMs: 30000,
|
||||
supportedIds: expect.arrayContaining(['id1', 'id2']),
|
||||
}),
|
||||
);
|
||||
});
|
||||
|
||||
it('should not modify FILE_UPLOAD environment variables without rate limits', async () => {
|
||||
// Setup initial environment variables
|
||||
process.env.FILE_UPLOAD_IP_MAX = '10';
|
||||
process.env.FILE_UPLOAD_IP_WINDOW = '15';
|
||||
process.env.FILE_UPLOAD_USER_MAX = '5';
|
||||
process.env.FILE_UPLOAD_USER_WINDOW = '20';
|
||||
|
||||
const initialEnv = { ...process.env };
|
||||
|
||||
await AppService(app);
|
||||
|
||||
// Expect environment variables to remain unchanged
|
||||
expect(process.env.FILE_UPLOAD_IP_MAX).toEqual(initialEnv.FILE_UPLOAD_IP_MAX);
|
||||
expect(process.env.FILE_UPLOAD_IP_WINDOW).toEqual(initialEnv.FILE_UPLOAD_IP_WINDOW);
|
||||
expect(process.env.FILE_UPLOAD_USER_MAX).toEqual(initialEnv.FILE_UPLOAD_USER_MAX);
|
||||
expect(process.env.FILE_UPLOAD_USER_WINDOW).toEqual(initialEnv.FILE_UPLOAD_USER_WINDOW);
|
||||
});
|
||||
|
||||
it('should correctly set FILE_UPLOAD environment variables based on rate limits', async () => {
|
||||
// Define and mock a custom configuration with rate limits
|
||||
const rateLimitsConfig = {
|
||||
rateLimits: {
|
||||
fileUploads: {
|
||||
ipMax: '100',
|
||||
ipWindowInMinutes: '60',
|
||||
userMax: '50',
|
||||
userWindowInMinutes: '30',
|
||||
},
|
||||
},
|
||||
};
|
||||
|
||||
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
|
||||
Promise.resolve(rateLimitsConfig),
|
||||
);
|
||||
|
||||
await AppService(app);
|
||||
|
||||
// Verify that process.env has been updated according to the rate limits config
|
||||
expect(process.env.FILE_UPLOAD_IP_MAX).toEqual('100');
|
||||
expect(process.env.FILE_UPLOAD_IP_WINDOW).toEqual('60');
|
||||
expect(process.env.FILE_UPLOAD_USER_MAX).toEqual('50');
|
||||
expect(process.env.FILE_UPLOAD_USER_WINDOW).toEqual('30');
|
||||
});
|
||||
|
||||
it('should fallback to default FILE_UPLOAD environment variables when rate limits are unspecified', async () => {
|
||||
// Setup initial environment variables to non-default values
|
||||
process.env.FILE_UPLOAD_IP_MAX = 'initialMax';
|
||||
process.env.FILE_UPLOAD_IP_WINDOW = 'initialWindow';
|
||||
process.env.FILE_UPLOAD_USER_MAX = 'initialUserMax';
|
||||
process.env.FILE_UPLOAD_USER_WINDOW = 'initialUserWindow';
|
||||
|
||||
// Mock a custom configuration without specific rate limits
|
||||
require('./Config/loadCustomConfig').mockImplementationOnce(() => Promise.resolve({}));
|
||||
|
||||
await AppService(app);
|
||||
|
||||
// Verify that process.env falls back to the initial values
|
||||
expect(process.env.FILE_UPLOAD_IP_MAX).toEqual('initialMax');
|
||||
expect(process.env.FILE_UPLOAD_IP_WINDOW).toEqual('initialWindow');
|
||||
expect(process.env.FILE_UPLOAD_USER_MAX).toEqual('initialUserMax');
|
||||
expect(process.env.FILE_UPLOAD_USER_WINDOW).toEqual('initialUserWindow');
|
||||
});
|
||||
});
|
||||
|
||||
describe('AppService updating app.locals', () => {
|
||||
let app;
|
||||
let initialEnv;
|
||||
|
||||
beforeEach(() => {
|
||||
// Store initial environment variables to restore them after each test
|
||||
initialEnv = { ...process.env };
|
||||
|
||||
app = { locals: {} };
|
||||
process.env.CDN_PROVIDER = undefined;
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
// Restore initial environment variables
|
||||
process.env = { ...initialEnv };
|
||||
});
|
||||
|
||||
it('should update app.locals with default values if loadCustomConfig returns undefined', async () => {
|
||||
// Mock loadCustomConfig to return undefined
|
||||
require('./Config/loadCustomConfig').mockImplementationOnce(() => Promise.resolve(undefined));
|
||||
|
||||
await AppService(app);
|
||||
|
||||
expect(app.locals).toBeDefined();
|
||||
expect(app.locals.paths).toBeDefined();
|
||||
expect(app.locals.availableTools).toBeDefined();
|
||||
expect(app.locals.fileStrategy).toEqual(FileSources.local);
|
||||
});
|
||||
|
||||
it('should update app.locals with values from loadCustomConfig', async () => {
|
||||
// Mock loadCustomConfig to return a specific config object
|
||||
const customConfig = {
|
||||
fileStrategy: 'firebase',
|
||||
registration: { socialLogins: ['testLogin'] },
|
||||
};
|
||||
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
|
||||
Promise.resolve(customConfig),
|
||||
);
|
||||
|
||||
await AppService(app);
|
||||
|
||||
expect(app.locals).toBeDefined();
|
||||
expect(app.locals.paths).toBeDefined();
|
||||
expect(app.locals.availableTools).toBeDefined();
|
||||
expect(app.locals.fileStrategy).toEqual(customConfig.fileStrategy);
|
||||
expect(app.locals.socialLogins).toEqual(customConfig.registration.socialLogins);
|
||||
});
|
||||
|
||||
it('should apply the assistants endpoint configuration correctly to app.locals', async () => {
|
||||
const mockConfig = {
|
||||
endpoints: {
|
||||
assistants: {
|
||||
disableBuilder: true,
|
||||
pollIntervalMs: 5000,
|
||||
timeoutMs: 30000,
|
||||
supportedIds: ['id1', 'id2'],
|
||||
},
|
||||
},
|
||||
};
|
||||
require('./Config/loadCustomConfig').mockImplementationOnce(() => Promise.resolve(mockConfig));
|
||||
|
||||
const app = { locals: {} };
|
||||
await AppService(app);
|
||||
|
||||
expect(app.locals).toHaveProperty('assistants');
|
||||
const { assistants } = app.locals;
|
||||
expect(assistants.disableBuilder).toBe(true);
|
||||
expect(assistants.pollIntervalMs).toBe(5000);
|
||||
expect(assistants.timeoutMs).toBe(30000);
|
||||
expect(assistants.supportedIds).toEqual(['id1', 'id2']);
|
||||
expect(assistants.excludedIds).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should log a warning when both supportedIds and excludedIds are provided', async () => {
|
||||
const mockConfig = {
|
||||
endpoints: {
|
||||
assistants: {
|
||||
disableBuilder: false,
|
||||
pollIntervalMs: 3000,
|
||||
timeoutMs: 20000,
|
||||
supportedIds: ['id1', 'id2'],
|
||||
excludedIds: ['id3'],
|
||||
},
|
||||
},
|
||||
};
|
||||
require('./Config/loadCustomConfig').mockImplementationOnce(() => Promise.resolve(mockConfig));
|
||||
|
||||
const app = { locals: {} };
|
||||
await require('./AppService')(app);
|
||||
|
||||
const { logger } = require('~/config');
|
||||
expect(logger.warn).toHaveBeenCalledWith(
|
||||
expect.stringContaining('Both `supportedIds` and `excludedIds` are defined'),
|
||||
);
|
||||
});
|
||||
});
|
||||
|
|
|
@@ -1,256 +1,93 @@
|
|||
const RunManager = require('./Runs/RunMananger');
|
||||
const path = require('path');
|
||||
const { klona } = require('klona');
|
||||
const {
|
||||
StepTypes,
|
||||
RunStatus,
|
||||
StepStatus,
|
||||
FilePurpose,
|
||||
ContentTypes,
|
||||
ToolCallTypes,
|
||||
imageExtRegex,
|
||||
imageGenTools,
|
||||
EModelEndpoint,
|
||||
defaultOrderQuery,
|
||||
} = require('librechat-data-provider');
|
||||
const { retrieveAndProcessFile } = require('~/server/services/Files/process');
|
||||
const { RunManager, waitForRun, sleep } = require('~/server/services/Runs');
|
||||
const { processRequiredActions } = require('~/server/services/ToolService');
|
||||
const { createOnProgress, sendMessage } = require('~/server/utils');
|
||||
const { TextStream } = require('~/app/clients');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
/**
|
||||
* @typedef {Object} Message
|
||||
* @property {string} id - The identifier of the message.
|
||||
* @property {string} object - The object type, always 'thread.message'.
|
||||
* @property {number} created_at - The Unix timestamp (in seconds) for when the message was created.
|
||||
* @property {string} thread_id - The thread ID that this message belongs to.
|
||||
* @property {string} role - The entity that produced the message. One of 'user' or 'assistant'.
|
||||
* @property {Object[]} content - The content of the message in an array of text and/or images.
|
||||
* @property {string} content[].type - The type of content, either 'text' or 'image_file'.
|
||||
* @property {Object} [content[].text] - The text content, present if type is 'text'.
|
||||
* @property {string} content[].text.value - The data that makes up the text.
|
||||
* @property {Object[]} [content[].text.annotations] - Annotations for the text content.
|
||||
* @property {Object} [content[].image_file] - The image file content, present if type is 'image_file'.
|
||||
* @property {string} content[].image_file.file_id - The File ID of the image in the message content.
|
||||
* @property {string[]} [file_ids] - Optional list of File IDs for the message.
|
||||
* @property {string|null} [assistant_id] - If applicable, the ID of the assistant that authored this message.
|
||||
* @property {string|null} [run_id] - If applicable, the ID of the run associated with the authoring of this message.
|
||||
* @property {Object} [metadata] - Optional metadata for the message, a map of key-value pairs.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @typedef {Object} FunctionTool
|
||||
* @property {string} type - The type of tool, 'function'.
|
||||
* @property {Object} function - The function definition.
|
||||
* @property {string} function.description - A description of what the function does.
|
||||
* @property {string} function.name - The name of the function to be called.
|
||||
* @property {Object} function.parameters - The parameters the function accepts, described as a JSON Schema object.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @typedef {Object} Tool
|
||||
* @property {string} type - The type of tool, can be 'code_interpreter', 'retrieval', or 'function'.
|
||||
* @property {FunctionTool} [function] - The function tool, present if type is 'function'.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @typedef {Object} Run
|
||||
* @property {string} id - The identifier of the run.
|
||||
* @property {string} object - The object type, always 'thread.run'.
|
||||
* @property {number} created_at - The Unix timestamp (in seconds) for when the run was created.
|
||||
* @property {string} thread_id - The ID of the thread that was executed on as a part of this run.
|
||||
* @property {string} assistant_id - The ID of the assistant used for execution of this run.
|
||||
* @property {string} status - The status of the run (e.g., 'queued', 'completed').
|
||||
* @property {Object} [required_action] - Details on the action required to continue the run.
|
||||
* @property {string} required_action.type - The type of required action, always 'submit_tool_outputs'.
|
||||
* @property {Object} required_action.submit_tool_outputs - Details on the tool outputs needed for the run to continue.
|
||||
* @property {Object[]} required_action.submit_tool_outputs.tool_calls - A list of the relevant tool calls.
|
||||
* @property {string} required_action.submit_tool_outputs.tool_calls[].id - The ID of the tool call.
|
||||
* @property {string} required_action.submit_tool_outputs.tool_calls[].type - The type of tool call the output is required for, always 'function'.
|
||||
* @property {Object} required_action.submit_tool_outputs.tool_calls[].function - The function definition.
|
||||
* @property {string} required_action.submit_tool_outputs.tool_calls[].function.name - The name of the function.
|
||||
* @property {string} required_action.submit_tool_outputs.tool_calls[].function.arguments - The arguments that the model expects you to pass to the function.
|
||||
* @property {Object} [last_error] - The last error associated with this run.
|
||||
* @property {string} last_error.code - One of 'server_error' or 'rate_limit_exceeded'.
|
||||
* @property {string} last_error.message - A human-readable description of the error.
|
||||
* @property {number} [expires_at] - The Unix timestamp (in seconds) for when the run will expire.
|
||||
* @property {number} [started_at] - The Unix timestamp (in seconds) for when the run was started.
|
||||
* @property {number} [cancelled_at] - The Unix timestamp (in seconds) for when the run was cancelled.
|
||||
* @property {number} [failed_at] - The Unix timestamp (in seconds) for when the run failed.
|
||||
* @property {number} [completed_at] - The Unix timestamp (in seconds) for when the run was completed.
|
||||
* @property {string} [model] - The model that the assistant used for this run.
|
||||
* @property {string} [instructions] - The instructions that the assistant used for this run.
|
||||
* @property {Tool[]} [tools] - The list of tools used for this run.
|
||||
* @property {string[]} [file_ids] - The list of File IDs used for this run.
|
||||
* @property {Object} [metadata] - Metadata associated with this run.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @typedef {Object} RunStep
|
||||
* @property {string} id - The identifier of the run step.
|
||||
* @property {string} object - The object type, always 'thread.run.step'.
|
||||
* @property {number} created_at - The Unix timestamp (in seconds) for when the run step was created.
|
||||
* @property {string} assistant_id - The ID of the assistant associated with the run step.
|
||||
* @property {string} thread_id - The ID of the thread that was run.
|
||||
* @property {string} run_id - The ID of the run that this run step is a part of.
|
||||
* @property {string} type - The type of run step, either 'message_creation' or 'tool_calls'.
|
||||
* @property {string} status - The status of the run step, can be 'in_progress', 'cancelled', 'failed', 'completed', or 'expired'.
|
||||
* @property {Object} step_details - The details of the run step.
|
||||
* @property {Object} [last_error] - The last error associated with this run step.
|
||||
* @property {string} last_error.code - One of 'server_error' or 'rate_limit_exceeded'.
|
||||
* @property {string} last_error.message - A human-readable description of the error.
|
||||
* @property {number} [expired_at] - The Unix timestamp (in seconds) for when the run step expired.
|
||||
* @property {number} [cancelled_at] - The Unix timestamp (in seconds) for when the run step was cancelled.
|
||||
* @property {number} [failed_at] - The Unix timestamp (in seconds) for when the run step failed.
|
||||
* @property {number} [completed_at] - The Unix timestamp (in seconds) for when the run step completed.
|
||||
* @property {Object} [metadata] - Metadata associated with this run step, a map of up to 16 key-value pairs.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @typedef {Object} StepMessage
|
||||
* @property {Message} message - The complete message object created by the step.
|
||||
* @property {string} id - The identifier of the run step.
|
||||
* @property {string} object - The object type, always 'thread.run.step'.
|
||||
* @property {number} created_at - The Unix timestamp (in seconds) for when the run step was created.
|
||||
* @property {string} assistant_id - The ID of the assistant associated with the run step.
|
||||
* @property {string} thread_id - The ID of the thread that was run.
|
||||
* @property {string} run_id - The ID of the run that this run step is a part of.
|
||||
* @property {string} type - The type of run step, either 'message_creation' or 'tool_calls'.
|
||||
* @property {string} status - The status of the run step, can be 'in_progress', 'cancelled', 'failed', 'completed', or 'expired'.
|
||||
* @property {Object} step_details - The details of the run step.
|
||||
* @property {Object} [last_error] - The last error associated with this run step.
|
||||
* @property {string} last_error.code - One of 'server_error' or 'rate_limit_exceeded'.
|
||||
* @property {string} last_error.message - A human-readable description of the error.
|
||||
* @property {number} [expired_at] - The Unix timestamp (in seconds) for when the run step expired.
|
||||
* @property {number} [cancelled_at] - The Unix timestamp (in seconds) for when the run step was cancelled.
|
||||
* @property {number} [failed_at] - The Unix timestamp (in seconds) for when the run step failed.
|
||||
* @property {number} [completed_at] - The Unix timestamp (in seconds) for when the run step completed.
|
||||
* @property {Object} [metadata] - Metadata associated with this run step, a map of up to 16 key-value pairs.
|
||||
*/
|
||||
|
||||
/**
|
||||
* Initializes a new thread or adds messages to an existing thread.
|
||||
* Sorts, processes, and flattens messages to a single string.
|
||||
*
|
||||
* @param {Object} params - The parameters for initializing a thread.
|
||||
* @param {OpenAI} params.openai - The OpenAI client instance.
|
||||
* @param {Object} params.body - The body of the request.
|
||||
* @param {Message[]} params.body.messages - A list of messages to start the thread with.
|
||||
* @param {Object} [params.body.metadata] - Optional metadata for the thread.
|
||||
* @param {string} [params.thread_id] - Optional existing thread ID. If provided, a message will be added to this thread.
|
||||
* @return {Promise<Thread>} A promise that resolves to the newly created thread object or the updated thread object.
|
||||
* @param {Object} params - Params for creating the onTextProgress function.
|
||||
* @param {OpenAIClient} params.openai - The OpenAI client instance.
|
||||
* @param {string} params.conversationId - The current conversation ID.
|
||||
* @param {string} params.userMessageId - The user message ID; response's `parentMessageId`.
|
||||
* @param {string} params.messageId - The response message ID.
|
||||
* @param {string} params.thread_id - The current thread ID.
|
||||
* @returns {void}
|
||||
*/
|
||||
async function initThread({ openai, body, thread_id: _thread_id }) {
|
||||
let thread = {};
|
||||
const messages = [];
|
||||
if (_thread_id) {
|
||||
const message = await openai.beta.threads.messages.create(_thread_id, body.messages[0]);
|
||||
messages.push(message);
|
||||
} else {
|
||||
thread = await openai.beta.threads.create(body);
|
||||
}
|
||||
|
||||
const thread_id = _thread_id ?? thread.id;
|
||||
return { messages, thread_id, ...thread };
|
||||
}
|
||||
|
||||
/**
 * Creates a run on a thread using the OpenAI API.
 *
 * @param {Object} params - The parameters for creating a run.
 * @param {OpenAI} params.openai - The OpenAI client instance.
 * @param {string} params.thread_id - The ID of the thread to run.
 * @param {Object} params.body - The body of the request to create a run.
 * @param {string} params.body.assistant_id - The ID of the assistant to use for this run.
 * @param {string} [params.body.model] - Optional. The ID of the model to be used for this run.
 * @param {string} [params.body.instructions] - Optional. Override the default system message of the assistant.
 * @param {Object[]} [params.body.tools] - Optional. Override the tools the assistant can use for this run.
 * @param {string[]} [params.body.file_ids] - Optional. List of File IDs the assistant can use for this run.
 * @param {Object} [params.body.metadata] - Optional. Metadata for the run.
 * @return {Promise<Run>} A promise that resolves to the created run object.
 */
async function createRun({ openai, thread_id, body }) {
  const run = await openai.beta.threads.runs.create(thread_id, body);
  return run;
}

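// Example usage (illustrative sketch, not part of the original source; the
// `assistant_id` and model values are placeholders):
//
// const run = await createRun({
//   openai,
//   thread_id,
//   body: { assistant_id: 'asst_example123', model: 'gpt-4-1106-preview' },
// });
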
// /**
//  * Retrieves all steps of a run.
//  *
//  * @param {Object} params - The parameters for the retrieveRunSteps function.
//  * @param {OpenAI} params.openai - The OpenAI client instance.
//  * @param {string} params.thread_id - The ID of the thread associated with the run.
//  * @param {string} params.run_id - The ID of the run to retrieve steps for.
//  * @return {Promise<RunStep[]>} A promise that resolves to an array of RunStep objects.
//  */
// async function retrieveRunSteps({ openai, thread_id, run_id }) {
//   const runSteps = await openai.beta.threads.runs.steps.list(thread_id, run_id);
//   return runSteps;
// }

/**
 * Delays the execution for a specified number of milliseconds.
 *
 * @param {number} ms - The number of milliseconds to delay.
 * @return {Promise<void>} A promise that resolves after the specified delay.
 */
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

/**
 * Waits for a run to complete by repeatedly checking its status. It uses a RunManager instance to fetch and manage run steps based on the run status.
 *
 * @param {Object} params - The parameters for the waitForRun function.
 * @param {OpenAI} params.openai - The OpenAI client instance.
 * @param {string} params.run_id - The ID of the run to wait for.
 * @param {string} params.thread_id - The ID of the thread associated with the run.
 * @param {RunManager} params.runManager - The RunManager instance to manage run steps.
 * @param {number} params.pollIntervalMs - The interval for polling the run status, default is 500 milliseconds.
 * @return {Promise<Run>} A promise that resolves to the last fetched run object.
 */
async function waitForRun({ openai, run_id, thread_id, runManager, pollIntervalMs = 500 }) {
  const timeout = 18000; // 18 seconds
  let timeElapsed = 0;
  let run;

  // this runManager will be passed in from the caller
  // const runManager = new RunManager({
  //   'in_progress': (step) => { /* ... */ },
  //   'queued': (step) => { /* ... */ },
  // });

  while (timeElapsed < timeout) {
    run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
    console.log(`Run status: ${run.status}`);

    if (!['in_progress', 'queued'].includes(run.status)) {
      await runManager.fetchRunSteps({
        openai,
        thread_id: thread_id,
        run_id: run_id,
        runStatus: run.status,
        final: true,
      });
      break;
    }

    // may use in future
    // await runManager.fetchRunSteps({
    //   openai,
    //   thread_id: thread_id,
    //   run_id: run_id,
    //   runStatus: run.status,
    // });

    await sleep(pollIntervalMs);
    timeElapsed += pollIntervalMs;
  }

  return run;
}

/**
 * @param {Object} params - Params for creating the onTextProgress function.
 * @param {OpenAIClient} params.openai - The OpenAI client instance.
 * @param {string} params.conversationId - The current conversation ID.
 * @param {string} params.userMessageId - The user message ID; response's `parentMessageId`.
 * @param {string} params.messageId - The response message ID.
 * @param {string} params.thread_id - The current thread ID.
 * @returns {void}
 */
async function createOnTextProgress({
  openai,
  conversationId,
  userMessageId,
  messageId,
  thread_id,
}) {
  openai.responseMessage = {
    conversationId,
    parentMessageId: userMessageId,
    role: 'assistant',
    messageId,
    content: [],
  };

  openai.responseText = '';

  openai.addContentData = (data) => {
    const { type, index } = data;
    openai.responseMessage.content[index] = { type, [type]: data[type] };

    if (type === ContentTypes.TEXT) {
      openai.responseText += data[type].value;
      return;
    }

    const contentData = {
      index,
      type,
      [type]: data[type],
      messageId,
      thread_id,
      conversationId,
    };

    logger.debug('Content data:', contentData);
    sendMessage(openai.res, contentData);
  };
}

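// Example wiring (illustrative sketch, not part of the original source; the IDs are
// placeholders and the RunManager handler shape follows the commented example above):
//
// await createOnTextProgress({ openai, conversationId, userMessageId, messageId, thread_id });
// const runManager = new RunManager({
//   final: async ({ step, runStatus }) => {
//     logger.debug(`Run ended with status ${runStatus}`, step);
//   },
// });
// const run = await waitForRun({ openai, run_id, thread_id, runManager, pollIntervalMs: 750 });
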
/**
 * Retrieves the response from an OpenAI run.
 *
 * @param {Object} params - The parameters for getting the response.
 * @param {OpenAIClient} params.openai - The OpenAI client instance.
 * @param {string} params.run_id - The ID of the run to get the response for.
 * @param {string} params.thread_id - The ID of the thread associated with the run.
 * @return {Promise<OpenAIAssistantFinish | OpenAIAssistantAction[] | ThreadMessage[] | RequiredActionFunctionToolCall[]>}
 */
async function getResponse({ openai, run_id, thread_id }) {
  const run = await waitForRun({ openai, run_id, thread_id, pollIntervalMs: 500 });

  if (run.status === RunStatus.COMPLETED) {
    const messages = await openai.beta.threads.messages.list(thread_id, defaultOrderQuery);
    const newMessages = messages.data.filter((msg) => msg.run_id === run_id);

    return newMessages;
  } else if (run.status === RunStatus.REQUIRES_ACTION) {
    const actions = [];
    run.required_action?.submit_tool_outputs.tool_calls.forEach((item) => {
      const functionCall = item.function;

@@ -259,7 +96,6 @@ async function getResponse({ openai, run_id, thread_id }) {
        tool: functionCall.name,
        toolInput: args,
        toolCallId: item.id,
        run_id,
        thread_id,
      });

@@ -273,90 +109,432 @@ async function getResponse({ openai, run_id, thread_id }) {
}

/**
 * Initializes a RunManager with handlers, then invokes waitForRun to monitor and manage an OpenAI run.
 *
 * @param {Object} params - The parameters for managing and monitoring the run.
 * @param {OpenAI} params.openai - The OpenAI client instance.
 * @param {string} params.run_id - The ID of the run to manage and monitor.
 * @param {string} params.thread_id - The ID of the thread associated with the run.
 * @return {Promise<Object>} A promise that resolves to an object containing the run and managed steps.
 */
async function handleRun({ openai, run_id, thread_id }) {
  let steps;
  let messages;
  const runManager = new RunManager({
    // 'in_progress': async ({ step, final, isLast }) => {
    //   // Define logic for handling steps with 'in_progress' status
    // },
    // 'queued': async ({ step, final, isLast }) => {
    //   // Define logic for handling steps with 'queued' status
    // },
    final: async ({ step, runStatus, stepsByStatus }) => {
      console.log(`Final step for ${run_id} with status ${runStatus}`);
      console.dir(step, { depth: null });

      const promises = [];
      promises.push(
        openai.beta.threads.messages.list(thread_id, {
          order: 'asc',
        }),
      );

      const finalSteps = stepsByStatus[runStatus];

      // loop across all statuses, may use in the future
      // for (const [_status, stepsPromises] of Object.entries(stepsByStatus)) {
      //   promises.push(...stepsPromises);
      // }
      for (const stepPromise of finalSteps) {
        promises.push(stepPromise);
      }

      const resolved = await Promise.all(promises);
      const res = resolved.shift();
      messages = res.data.filter((msg) => msg.run_id === run_id);
      resolved.push(step);
      steps = resolved;
    },
  });

  const run = await waitForRun({ openai, run_id, thread_id, runManager, pollIntervalMs: 500 });

  return { run, steps, messages };
}

/**
 * Filters the steps to keep only the most recent instance of each unique step.
 * @param {RunStep[]} steps - The array of RunSteps to filter.
 * @return {RunStep[]} The filtered array of RunSteps.
 */
function filterSteps(steps = []) {
  if (steps.length <= 1) {
    return steps;
  }
  const stepMap = new Map();

  steps.forEach((step) => {
    if (!step) {
      return;
    }

    const effectiveTimestamp = Math.max(
      step.created_at,
      step.expired_at || 0,
      step.cancelled_at || 0,
      step.failed_at || 0,
      step.completed_at || 0,
    );

    if (!stepMap.has(step.id) || effectiveTimestamp > stepMap.get(step.id).effectiveTimestamp) {
      const latestStep = { ...step, effectiveTimestamp };
      if (latestStep.last_error) {
        // testing to see if we ever step into this
      }
      stepMap.set(step.id, latestStep);
    }
  });

  return Array.from(stepMap.values()).map((step) => {
    delete step.effectiveTimestamp;
    return step;
  });
}

/**
 * Maps messages to their corresponding steps. Steps with message creation will be paired with their messages,
 * while steps without message creation will be returned as is.
 *
 * @param {RunStep[]} steps - An array of steps from the run.
 * @param {Message[]} messages - An array of message objects.
 * @returns {(StepMessage | RunStep)[]} An array where each element is either a step with its corresponding message (StepMessage) or a step without a message (RunStep).
 */
function mapMessagesToSteps(steps, messages) {
  // Create a map of messages indexed by their IDs for efficient lookup
  const messageMap = messages.reduce((acc, msg) => {
    acc[msg.id] = msg;
    return acc;
  }, {});

  // Map each step to its corresponding message, or return the step as is if no message ID is present
  return steps.map((step) => {
    const messageId = step.step_details?.message_creation?.message_id;

    if (messageId && messageMap[messageId]) {
      return { step, message: messageMap[messageId] };
    }
    return step;
  });
}

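// Example (illustrative sketch, not part of the original source): deduplicate polled
// steps, then pair message_creation steps with the thread messages they produced.
//
// const uniqueSteps = filterSteps(accumulatedSteps);
// const stepsWithMessages = mapMessagesToSteps(uniqueSteps, threadMessages);
// // message_creation steps become `{ step, message }`; other steps pass through unchanged
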
/**
 * @callback InProgressFunction
 * @param {Object} params - The parameters for the in progress step.
 * @param {RunStep} params.step - The step object with details about the message creation.
 * @returns {Promise<void>} - A promise that resolves when the step is processed.
 */

function hasToolCallChanged(previousCall, currentCall) {
  return JSON.stringify(previousCall) !== JSON.stringify(currentCall);
}

/**
|
||||
* Creates a handler function for steps in progress, specifically for
|
||||
* processing messages and managing seen completed messages.
|
||||
*
|
||||
* @param {OpenAIClient} openai - The OpenAI client instance.
|
||||
* @param {string} thread_id - The ID of the thread the run is in.
|
||||
* @param {ThreadMessage[]} messages - The accumulated messages for the run.
|
||||
* @return {InProgressFunction} a function to handle steps in progress.
|
||||
*/
|
||||
function createInProgressHandler(openai, thread_id, messages) {
|
||||
openai.index = 0;
|
||||
openai.mappedOrder = new Map();
|
||||
openai.seenToolCalls = new Map();
|
||||
openai.processedFileIds = new Set();
|
||||
openai.completeToolCallSteps = new Set();
|
||||
openai.seenCompletedMessages = new Set();
|
||||
|
||||
/**
|
||||
* The in_progress function for handling message creation steps.
|
||||
*
|
||||
* @type {InProgressFunction}
|
||||
*/
|
||||
async function in_progress({ step }) {
|
||||
if (step.type === StepTypes.TOOL_CALLS) {
|
||||
const { tool_calls } = step.step_details;
|
||||
|
||||
for (const _toolCall of tool_calls) {
|
||||
/** @type {StepToolCall} */
|
||||
const toolCall = _toolCall;
|
||||
const previousCall = openai.seenToolCalls.get(toolCall.id);
|
||||
|
||||
// If the tool call isn't new and hasn't changed
|
||||
if (previousCall && !hasToolCallChanged(previousCall, toolCall)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
let toolCallIndex = openai.mappedOrder.get(toolCall.id);
|
||||
if (toolCallIndex === undefined) {
|
||||
// New tool call
|
||||
toolCallIndex = openai.index;
|
||||
openai.mappedOrder.set(toolCall.id, openai.index);
|
||||
openai.index++;
|
||||
}
|
||||
|
||||
if (step.status === StepStatus.IN_PROGRESS) {
|
||||
toolCall.progress =
|
||||
previousCall && previousCall.progress
|
||||
? Math.min(previousCall.progress + 0.2, 0.95)
|
||||
: 0.01;
|
||||
} else {
|
||||
toolCall.progress = 1;
|
||||
openai.completeToolCallSteps.add(step.id);
|
||||
}
|
||||
|
||||
if (
|
||||
toolCall.type === ToolCallTypes.CODE_INTERPRETER &&
|
||||
step.status === StepStatus.COMPLETED
|
||||
) {
|
||||
const { outputs } = toolCall[toolCall.type];
|
||||
|
||||
for (const output of outputs) {
|
||||
if (output.type !== 'image') {
|
||||
continue;
|
||||
}
|
||||
|
||||
if (openai.processedFileIds.has(output.image?.file_id)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
const { file_id } = output.image;
|
||||
const file = await retrieveAndProcessFile({
|
||||
openai,
|
||||
file_id,
|
||||
basename: `${file_id}.png`,
|
||||
});
|
||||
// toolCall.asset_pointer = file.filepath;
|
||||
const prelimImage = {
|
||||
file_id,
|
||||
filename: path.basename(file.filepath),
|
||||
filepath: file.filepath,
|
||||
height: file.height,
|
||||
width: file.width,
|
||||
};
|
||||
// check if every key has a value before adding to content
|
||||
const prelimImageKeys = Object.keys(prelimImage);
|
||||
const validImageFile = prelimImageKeys.every((key) => prelimImage[key]);
|
||||
|
||||
if (!validImageFile) {
|
||||
continue;
|
||||
}
|
||||
|
||||
const image_file = {
|
||||
[ContentTypes.IMAGE_FILE]: prelimImage,
|
||||
type: ContentTypes.IMAGE_FILE,
|
||||
index: openai.index,
|
||||
};
|
||||
openai.addContentData(image_file);
|
||||
openai.processedFileIds.add(file_id);
|
||||
openai.index++;
|
||||
}
|
||||
} else if (
|
||||
toolCall.type === ToolCallTypes.FUNCTION &&
|
||||
step.status === StepStatus.COMPLETED &&
|
||||
imageGenTools.has(toolCall[toolCall.type].name)
|
||||
) {
|
||||
/* If a change is detected, skip image generation tools as already processed */
|
||||
openai.seenToolCalls.set(toolCall.id, toolCall);
|
||||
continue;
|
||||
}
|
||||
|
||||
openai.addContentData({
|
||||
[ContentTypes.TOOL_CALL]: toolCall,
|
||||
index: toolCallIndex,
|
||||
type: ContentTypes.TOOL_CALL,
|
||||
});
|
||||
|
||||
// Update the stored tool call
|
||||
openai.seenToolCalls.set(toolCall.id, toolCall);
|
||||
}
|
||||
} else if (step.type === StepTypes.MESSAGE_CREATION && step.status === StepStatus.COMPLETED) {
|
||||
const { message_id } = step.step_details.message_creation;
|
||||
if (openai.seenCompletedMessages.has(message_id)) {
|
||||
return;
|
||||
}
|
||||
|
||||
openai.seenCompletedMessages.add(message_id);
|
||||
|
||||
const message = await openai.beta.threads.messages.retrieve(thread_id, message_id);
|
||||
messages.push(message);
|
||||
|
||||
let messageIndex = openai.mappedOrder.get(step.id);
|
||||
if (messageIndex === undefined) {
|
||||
// New message
|
||||
messageIndex = openai.index;
|
||||
openai.mappedOrder.set(step.id, openai.index);
|
||||
openai.index++;
|
||||
}
|
||||
|
||||
const result = await processMessages(openai, [message]);
|
||||
openai.addContentData({
|
||||
[ContentTypes.TEXT]: { value: result.text },
|
||||
type: ContentTypes.TEXT,
|
||||
index: messageIndex,
|
||||
});
|
||||
|
||||
// Create the Factory Function to stream the message
|
||||
const { onProgress: progressCallback } = createOnProgress({
|
||||
// todo: add option to save partialText to db
|
||||
// onProgress: () => {},
|
||||
});
|
||||
|
||||
// This creates a function that attaches all of the parameters
|
||||
// specified here to each SSE message generated by the TextStream
|
||||
const onProgress = progressCallback({
|
||||
res: openai.res,
|
||||
index: messageIndex,
|
||||
messageId: openai.responseMessage.messageId,
|
||||
type: ContentTypes.TEXT,
|
||||
stream: true,
|
||||
thread_id,
|
||||
});
|
||||
|
||||
// Create a small buffer before streaming begins
|
||||
await sleep(500);
|
||||
|
||||
const stream = new TextStream(result.text, { delay: 9 });
|
||||
await stream.processTextStream(onProgress);
|
||||
}
|
||||
}
|
||||
|
||||
return in_progress;
|
||||
}
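
// Example usage (illustrative sketch, not part of the original source): the handler is
// created once per run, attached to the client, and passed to the RunManager as the
// `in_progress` callback; `runAssistant` below does exactly this.
//
// const accumulatedMessages = [];
// const in_progress = createInProgressHandler(openai, thread_id, accumulatedMessages);
// const runManager = new RunManager({ in_progress });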
|
||||
|
||||
/**
|
||||
* Initializes a RunManager with handlers, then invokes waitForRun to monitor and manage an OpenAI run.
|
||||
*
|
||||
* @param {Object} params - The parameters for managing and monitoring the run.
|
||||
* @param {OpenAIClient} params.openai - The OpenAI client instance.
|
||||
* @param {string} params.run_id - The ID of the run to manage and monitor.
|
||||
* @param {string} params.thread_id - The ID of the thread associated with the run.
|
||||
* @param {RunStep[]} params.accumulatedSteps - The accumulated steps for the run.
|
||||
* @param {ThreadMessage[]} params.accumulatedMessages - The accumulated messages for the run.
|
||||
* @param {InProgressFunction} [params.in_progress] - The `in_progress` function from a previous run.
|
||||
* @return {Promise<RunResponse>} A promise that resolves to an object containing the run and managed steps.
|
||||
*/
|
||||
async function runAssistant({
|
||||
openai,
|
||||
run_id,
|
||||
thread_id,
|
||||
accumulatedSteps = [],
|
||||
accumulatedMessages = [],
|
||||
in_progress: inProgress,
|
||||
}) {
|
||||
let steps = accumulatedSteps;
|
||||
let messages = accumulatedMessages;
|
||||
const in_progress = inProgress ?? createInProgressHandler(openai, thread_id, messages);
|
||||
openai.in_progress = in_progress;
|
||||
|
||||
const runManager = new RunManager({
|
||||
in_progress,
|
||||
final: async ({ step, runStatus, stepsByStatus }) => {
|
||||
logger.debug(`[runAssistant] Final step for ${run_id} with status ${runStatus}`, step);
|
||||
|
||||
const promises = [];
|
||||
// promises.push(
|
||||
// openai.beta.threads.messages.list(thread_id, defaultOrderQuery),
|
||||
// );
|
||||
|
||||
// const finalSteps = stepsByStatus[runStatus];
|
||||
// for (const stepPromise of finalSteps) {
|
||||
// promises.push(stepPromise);
|
||||
// }
|
||||
|
||||
// loop across all statuses
|
||||
for (const [_status, stepsPromises] of Object.entries(stepsByStatus)) {
|
||||
promises.push(...stepsPromises);
|
||||
}
|
||||
|
||||
const resolved = await Promise.all(promises);
|
||||
const finalSteps = filterSteps(steps.concat(resolved));
|
||||
|
||||
if (step.type === StepTypes.MESSAGE_CREATION) {
|
||||
const incompleteToolCallSteps = finalSteps.filter(
|
||||
(s) => s && s.type === StepTypes.TOOL_CALLS && !openai.completeToolCallSteps.has(s.id),
|
||||
);
|
||||
for (const incompleteToolCallStep of incompleteToolCallSteps) {
|
||||
await in_progress({ step: incompleteToolCallStep });
|
||||
}
|
||||
}
|
||||
await in_progress({ step });
|
||||
// const res = resolved.shift();
|
||||
// messages = messages.concat(res.data.filter((msg) => msg && msg.run_id === run_id));
|
||||
resolved.push(step);
|
||||
/* Note: no issues without deep cloning, but it's safer to do so */
|
||||
steps = klona(finalSteps);
|
||||
},
|
||||
});
|
||||
|
||||
/** @type {TCustomConfig.endpoints.assistants} */
|
||||
const assistantsEndpointConfig = openai.req.app.locals?.[EModelEndpoint.assistants] ?? {};
|
||||
const { pollIntervalMs, timeoutMs } = assistantsEndpointConfig;
|
||||
|
||||
const run = await waitForRun({
|
||||
openai,
|
||||
run_id,
|
||||
thread_id,
|
||||
runManager,
|
||||
pollIntervalMs,
|
||||
timeout: timeoutMs,
|
||||
});
|
||||
|
||||
if (!run.required_action) {
|
||||
// const { messages: sortedMessages, text } = await processMessages(openai, messages);
|
||||
// return { run, steps, messages: sortedMessages, text };
|
||||
const sortedMessages = messages.sort((a, b) => a.created_at - b.created_at);
|
||||
return { run, steps, messages: sortedMessages };
|
||||
}
|
||||
|
||||
const { submit_tool_outputs } = run.required_action;
|
||||
const actions = submit_tool_outputs.tool_calls.map((item) => {
|
||||
const functionCall = item.function;
|
||||
const args = JSON.parse(functionCall.arguments);
|
||||
return {
|
||||
tool: functionCall.name,
|
||||
toolInput: args,
|
||||
toolCallId: item.id,
|
||||
run_id,
|
||||
thread_id,
|
||||
};
|
||||
});
|
||||
|
||||
const outputs = await processRequiredActions(openai, actions);
|
||||
|
||||
const toolRun = await openai.beta.threads.runs.submitToolOutputs(run.thread_id, run.id, outputs);
|
||||
|
||||
// Recursive call with accumulated steps and messages
|
||||
return await runAssistant({
|
||||
openai,
|
||||
run_id: toolRun.id,
|
||||
thread_id,
|
||||
accumulatedSteps: steps,
|
||||
accumulatedMessages: messages,
|
||||
in_progress,
|
||||
});
|
||||
}
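
// Example usage (illustrative sketch, not part of the original source; assumes the run was
// created with `createRun` above and that `openai` carries the request/response objects):
//
// const result = await runAssistant({ openai, run_id: createdRun.id, thread_id });
// // result => { run, steps, messages } once the run no longer requires action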
|
||||
|
||||
/**
|
||||
* Sorts, processes, and flattens messages to a single string.
|
||||
*
|
||||
* @param {OpenAIClient} openai - The OpenAI client instance.
|
||||
* @param {ThreadMessage[]} messages - An array of messages.
|
||||
* @returns {Promise<{messages: ThreadMessage[], text: string}>} The sorted messages and the flattened text.
|
||||
*/
|
||||
async function processMessages(openai, messages = []) {
|
||||
const sorted = messages.sort((a, b) => a.created_at - b.created_at);
|
||||
|
||||
let text = '';
|
||||
for (const message of sorted) {
|
||||
message.files = [];
|
||||
for (const content of message.content) {
|
||||
const processImageFile =
|
||||
content.type === 'image_file' && !openai.processedFileIds.has(content.image_file?.file_id);
|
||||
if (processImageFile) {
|
||||
const { file_id } = content.image_file;
|
||||
|
||||
const file = await retrieveAndProcessFile({ openai, file_id, basename: `${file_id}.png` });
|
||||
openai.processedFileIds.add(file_id);
|
||||
message.files.push(file);
|
||||
continue;
|
||||
}
|
||||
|
||||
text += (content.text?.value ?? '') + ' ';
|
||||
|
||||
// Process annotations if they exist
|
||||
if (!content.text?.annotations) {
|
||||
continue;
|
||||
}
|
||||
|
||||
logger.debug('Processing annotations:', content.text.annotations);
|
||||
for (const annotation of content.text.annotations) {
|
||||
logger.debug('Current annotation:', annotation);
|
||||
let file;
|
||||
const processFilePath =
|
||||
annotation.file_path && !openai.processedFileIds.has(annotation.file_path?.file_id);
|
||||
|
||||
if (processFilePath) {
|
||||
const basename = imageExtRegex.test(annotation.text)
|
||||
? path.basename(annotation.text)
|
||||
: null;
|
||||
file = await retrieveAndProcessFile({
|
||||
openai,
|
||||
file_id: annotation.file_path.file_id,
|
||||
basename,
|
||||
});
|
||||
openai.processedFileIds.add(annotation.file_path.file_id);
|
||||
}
|
||||
|
||||
const processFileCitation =
|
||||
annotation.file_citation &&
|
||||
!openai.processedFileIds.has(annotation.file_citation?.file_id);
|
||||
|
||||
if (processFileCitation) {
|
||||
file = await retrieveAndProcessFile({
|
||||
openai,
|
||||
file_id: annotation.file_citation.file_id,
|
||||
unknownType: true,
|
||||
});
|
||||
openai.processedFileIds.add(annotation.file_citation.file_id);
|
||||
}
|
||||
|
||||
if (!file && (annotation.file_path || annotation.file_citation)) {
|
||||
const { file_id } = annotation.file_citation || annotation.file_path || {};
|
||||
file = await retrieveAndProcessFile({ openai, file_id, unknownType: true });
|
||||
openai.processedFileIds.add(file_id);
|
||||
}
|
||||
|
||||
if (!file) {
|
||||
continue;
|
||||
}
|
||||
|
||||
if (file.purpose && file.purpose === FilePurpose.Assistants) {
|
||||
text = text.replace(annotation.text, file.filename);
|
||||
} else if (file.filepath) {
|
||||
text = text.replace(annotation.text, file.filepath);
|
||||
}
|
||||
|
||||
message.files.push(file);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return { messages: sorted, text };
|
||||
}
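
// Example usage (illustrative sketch, not part of the original source):
//
// const { messages: sorted, text } = await processMessages(openai, accumulatedMessages);
// // `sorted` is ordered by `created_at`; `text` is the flattened assistant output with
// // file annotations replaced by filenames or filepaths.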
|
||||
|
||||
module.exports = {
  initThread,
  createRun,
  waitForRun,
  getResponse,
  handleRun,
  sleep,
  mapMessagesToSteps,
  runAssistant,
  processMessages,
  createOnTextProgress,
};
|
||||
|
|
|
@ -2,6 +2,7 @@ const { EModelEndpoint } = require('librechat-data-provider');
|
|||
|
||||
const {
|
||||
OPENAI_API_KEY: openAIApiKey,
|
||||
ASSISTANTS_API_KEY: assistantsApiKey,
|
||||
AZURE_API_KEY: azureOpenAIApiKey,
|
||||
ANTHROPIC_API_KEY: anthropicApiKey,
|
||||
CHATGPT_TOKEN: chatGPTToken,
|
||||
|
@ -28,7 +29,7 @@ module.exports = {
|
|||
userProvidedOpenAI,
|
||||
googleKey,
|
||||
[EModelEndpoint.openAI]: isUserProvided(openAIApiKey),
|
||||
[EModelEndpoint.assistant]: isUserProvided(openAIApiKey),
|
||||
[EModelEndpoint.assistants]: isUserProvided(assistantsApiKey),
|
||||
[EModelEndpoint.azureOpenAI]: isUserProvided(azureOpenAIApiKey),
|
||||
[EModelEndpoint.chatGPTBrowser]: isUserProvided(chatGPTToken),
|
||||
[EModelEndpoint.anthropic]: isUserProvided(anthropicApiKey),
|
||||
|
|
22
api/server/services/Config/handleRateLimits.js
Normal file
22
api/server/services/Config/handleRateLimits.js
Normal file
|
@ -0,0 +1,22 @@
|
|||
/**
|
||||
*
|
||||
* @param {TCustomConfig['rateLimits'] | undefined} rateLimits
|
||||
*/
|
||||
const handleRateLimits = (rateLimits) => {
|
||||
if (!rateLimits) {
|
||||
return;
|
||||
}
|
||||
const { fileUploads } = rateLimits;
|
||||
if (!fileUploads) {
|
||||
return;
|
||||
}
|
||||
|
||||
process.env.FILE_UPLOAD_IP_MAX = fileUploads.ipMax ?? process.env.FILE_UPLOAD_IP_MAX;
|
||||
process.env.FILE_UPLOAD_IP_WINDOW =
|
||||
fileUploads.ipWindowInMinutes ?? process.env.FILE_UPLOAD_IP_WINDOW;
|
||||
process.env.FILE_UPLOAD_USER_MAX = fileUploads.userMax ?? process.env.FILE_UPLOAD_USER_MAX;
|
||||
process.env.FILE_UPLOAD_USER_WINDOW =
|
||||
fileUploads.userWindowInMinutes ?? process.env.FILE_UPLOAD_USER_WINDOW;
|
||||
};
|
||||
|
||||
module.exports = handleRateLimits;
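
// Example (illustrative sketch, not part of the original source): a `rateLimits` section
// in librechat.yaml with a `fileUploads` block maps onto the env vars above.
//
// handleRateLimits({
//   fileUploads: { ipMax: 100, ipWindowInMinutes: 60, userMax: 50, userWindowInMinutes: 60 },
// });
// // sets FILE_UPLOAD_IP_MAX=100, FILE_UPLOAD_IP_WINDOW=60,
// //      FILE_UPLOAD_USER_MAX=50, FILE_UPLOAD_USER_WINDOW=60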
|
|
@ -7,17 +7,22 @@ const { logger } = require('~/config');
|
|||
const projectRoot = path.resolve(__dirname, '..', '..', '..', '..');
|
||||
const configPath = path.resolve(projectRoot, 'librechat.yaml');
|
||||
|
||||
let i = 0;
|
||||
|
||||
/**
|
||||
* Load custom configuration files and caches the object if the `cache` field at root is true.
|
||||
* Validation via parsing the config file with the config schema.
|
||||
* @function loadCustomConfig
|
||||
* @returns {Promise<TCustomConfig | null>} A promise that resolves to null or the custom config object.
|
||||
* */
|
||||
|
||||
async function loadCustomConfig() {
|
||||
const customConfig = loadYaml(configPath);
|
||||
  if (!customConfig) {
    i === 0 &&
      logger.info(
        'Custom config file missing or YAML format invalid.\n\nCheck out the latest config file guide for configurable options and features.\nhttps://docs.librechat.ai/install/configuration/custom_config.html\n\n',
      );
    i === 0 && i++;
    return null;
  }
|
||||
|
||||
|
@ -28,6 +33,7 @@ async function loadCustomConfig() {
|
|||
} else {
|
||||
logger.info('Custom config file loaded:');
|
||||
logger.info(JSON.stringify(customConfig, null, 2));
|
||||
logger.debug('Custom config:', customConfig);
|
||||
}
|
||||
|
||||
if (customConfig.cache) {
|
||||
|
|
|
@ -9,10 +9,11 @@ const { config } = require('./EndpointService');
|
|||
*/
|
||||
async function loadDefaultEndpointsConfig() {
|
||||
const { google, gptPlugins } = await loadAsyncEndpoints();
|
||||
const { openAI, bingAI, anthropic, azureOpenAI, chatGPTBrowser } = config;
|
||||
const { openAI, assistants, bingAI, anthropic, azureOpenAI, chatGPTBrowser } = config;
|
||||
|
||||
let enabledEndpoints = [
|
||||
EModelEndpoint.openAI,
|
||||
EModelEndpoint.assistants,
|
||||
EModelEndpoint.azureOpenAI,
|
||||
EModelEndpoint.google,
|
||||
EModelEndpoint.bingAI,
|
||||
|
@ -31,6 +32,7 @@ async function loadDefaultEndpointsConfig() {
|
|||
|
||||
const endpointConfig = {
|
||||
[EModelEndpoint.openAI]: openAI,
|
||||
[EModelEndpoint.assistants]: assistants,
|
||||
[EModelEndpoint.azureOpenAI]: azureOpenAI,
|
||||
[EModelEndpoint.google]: google,
|
||||
[EModelEndpoint.bingAI]: bingAI,
|
||||
|
|
|
@ -7,10 +7,6 @@ const {
|
|||
getChatGPTBrowserModels,
|
||||
} = require('~/server/services/ModelService');
|
||||
|
||||
const fitlerAssistantModels = (str) => {
|
||||
return /gpt-4|gpt-3\\.5/i.test(str) && !/vision|instruct/i.test(str);
|
||||
};
|
||||
|
||||
/**
|
||||
* Loads the default models for the application.
|
||||
* @async
|
||||
|
@ -28,6 +24,7 @@ async function loadDefaultModels(req) {
|
|||
azure: useAzurePlugins,
|
||||
plugins: true,
|
||||
});
|
||||
const assistant = await getOpenAIModels({ assistants: true });
|
||||
|
||||
return {
|
||||
[EModelEndpoint.openAI]: openAI,
|
||||
|
@ -37,7 +34,7 @@ async function loadDefaultModels(req) {
|
|||
[EModelEndpoint.azureOpenAI]: azureOpenAI,
|
||||
[EModelEndpoint.bingAI]: ['BingAI', 'Sydney'],
|
||||
[EModelEndpoint.chatGPTBrowser]: chatGPTBrowser,
|
||||
[EModelEndpoint.assistant]: openAI.filter(fitlerAssistantModels),
|
||||
[EModelEndpoint.assistants]: assistant,
|
||||
};
|
||||
}
|
||||
|
||||
|
|
28
api/server/services/Endpoints/assistant/addTitle.js
Normal file
28
api/server/services/Endpoints/assistant/addTitle.js
Normal file
|
@ -0,0 +1,28 @@
|
|||
const { CacheKeys } = require('librechat-data-provider');
|
||||
const { saveConvo } = require('~/models/Conversation');
|
||||
const getLogStores = require('~/cache/getLogStores');
|
||||
const { isEnabled } = require('~/server/utils');
|
||||
|
||||
const addTitle = async (req, { text, responseText, conversationId, client }) => {
|
||||
const { TITLE_CONVO = 'true' } = process.env ?? {};
|
||||
if (!isEnabled(TITLE_CONVO)) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (client.options.titleConvo === false) {
|
||||
return;
|
||||
}
|
||||
|
||||
const titleCache = getLogStores(CacheKeys.GEN_TITLE);
|
||||
const key = `${req.user.id}-${conversationId}`;
|
||||
|
||||
const title = await client.titleConvo({ text, conversationId, responseText });
|
||||
await titleCache.set(key, title);
|
||||
|
||||
await saveConvo(req.user.id, {
|
||||
conversationId,
|
||||
title,
|
||||
});
|
||||
};
|
||||
|
||||
module.exports = addTitle;
|
15
api/server/services/Endpoints/assistant/buildOptions.js
Normal file
15
api/server/services/Endpoints/assistant/buildOptions.js
Normal file
|
@ -0,0 +1,15 @@
|
|||
const buildOptions = (endpoint, parsedBody) => {
|
||||
// eslint-disable-next-line no-unused-vars
|
||||
const { promptPrefix, chatGptLabel, resendImages, imageDetail, ...rest } = parsedBody;
|
||||
const endpointOption = {
|
||||
endpoint,
|
||||
promptPrefix,
|
||||
modelOptions: {
|
||||
...rest,
|
||||
},
|
||||
};
|
||||
|
||||
return endpointOption;
|
||||
};
|
||||
|
||||
module.exports = buildOptions;
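
// Example (illustrative sketch, not part of the original source; values are placeholders):
//
// const endpointOption = buildOptions('assistants', {
//   promptPrefix: 'You are a helpful assistant.',
//   assistant_id: 'asst_example123',
//   model: 'gpt-4-1106-preview',
// });
// // => { endpoint: 'assistants', promptPrefix: '...', modelOptions: { assistant_id, model } }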
|
9
api/server/services/Endpoints/assistant/index.js
Normal file
9
api/server/services/Endpoints/assistant/index.js
Normal file
|
@ -0,0 +1,9 @@
|
|||
const addTitle = require('./addTitle');
|
||||
const buildOptions = require('./buildOptions');
|
||||
const initializeClient = require('./initializeClient');
|
||||
|
||||
module.exports = {
|
||||
addTitle,
|
||||
buildOptions,
|
||||
initializeClient,
|
||||
};
|
80
api/server/services/Endpoints/assistant/initializeClient.js
Normal file
80
api/server/services/Endpoints/assistant/initializeClient.js
Normal file
|
@ -0,0 +1,80 @@
|
|||
const OpenAI = require('openai');
|
||||
const { HttpsProxyAgent } = require('https-proxy-agent');
|
||||
const { EModelEndpoint } = require('librechat-data-provider');
|
||||
const {
|
||||
getUserKey,
|
||||
getUserKeyExpiry,
|
||||
checkUserKeyExpiry,
|
||||
} = require('~/server/services/UserService');
|
||||
const OpenAIClient = require('~/app/clients/OpenAIClient');
|
||||
|
||||
const initializeClient = async ({ req, res, endpointOption, initAppClient = false }) => {
|
||||
const { PROXY, OPENAI_ORGANIZATION, ASSISTANTS_API_KEY, ASSISTANTS_BASE_URL } = process.env;
|
||||
|
||||
const opts = {};
|
||||
const baseURL = ASSISTANTS_BASE_URL ?? null;
|
||||
|
||||
if (baseURL) {
|
||||
opts.baseURL = baseURL;
|
||||
}
|
||||
|
||||
if (PROXY) {
|
||||
opts.httpAgent = new HttpsProxyAgent(PROXY);
|
||||
}
|
||||
|
||||
if (OPENAI_ORGANIZATION) {
|
||||
opts.organization = OPENAI_ORGANIZATION;
|
||||
}
|
||||
|
||||
const credentials = ASSISTANTS_API_KEY;
|
||||
|
||||
const isUserProvided = credentials === 'user_provided';
|
||||
|
||||
let userKey = null;
|
||||
if (isUserProvided) {
|
||||
const expiresAt = getUserKeyExpiry({ userId: req.user.id, name: EModelEndpoint.assistants });
|
||||
checkUserKeyExpiry(
|
||||
expiresAt,
|
||||
'Your Assistants API key has expired. Please provide your API key again.',
|
||||
);
|
||||
userKey = await getUserKey({ userId: req.user.id, name: EModelEndpoint.assistants });
|
||||
}
|
||||
|
||||
let apiKey = isUserProvided ? userKey : credentials;
|
||||
|
||||
if (!apiKey) {
|
||||
throw new Error('API key not provided.');
|
||||
}
|
||||
|
||||
/** @type {OpenAIClient} */
|
||||
const openai = new OpenAI({
|
||||
apiKey,
|
||||
...opts,
|
||||
});
|
||||
openai.req = req;
|
||||
openai.res = res;
|
||||
|
||||
if (endpointOption && initAppClient) {
|
||||
const clientOptions = {
|
||||
reverseProxyUrl: baseURL,
|
||||
proxy: PROXY ?? null,
|
||||
req,
|
||||
res,
|
||||
...endpointOption,
|
||||
};
|
||||
|
||||
const client = new OpenAIClient(apiKey, clientOptions);
|
||||
return {
|
||||
client,
|
||||
openai,
|
||||
openAIApiKey: apiKey,
|
||||
};
|
||||
}
|
||||
|
||||
return {
|
||||
openai,
|
||||
openAIApiKey: apiKey,
|
||||
};
|
||||
};
|
||||
|
||||
module.exports = initializeClient;
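
// Example usage (illustrative sketch, not part of the original source; `req`/`res` are the
// Express request and response for the current API call):
//
// const { openai } = await initializeClient({ req, res });
// const assistants = await openai.beta.assistants.list({ limit: 20 });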
|
|
@ -1,5 +1,6 @@
|
|||
const fetch = require('node-fetch');
|
||||
const { ref, uploadBytes, getDownloadURL, deleteObject } = require('firebase/storage');
|
||||
const { getBufferMetadata } = require('~/server/utils');
|
||||
const { getFirebaseStorage } = require('./initialize');
|
||||
|
||||
/**
|
||||
|
@ -41,9 +42,8 @@ async function deleteFile(basePath, fileName) {
|
|||
* @param {string} [params.basePath='images'] - Optional. The base basePath in Firebase Storage where the file will
|
||||
* be stored. Defaults to 'images' if not specified.
|
||||
*
|
||||
* @returns {Promise<string|null>}
|
||||
* A promise that resolves to the file name if the file is successfully uploaded, or null if there
|
||||
* is an error in initialization or upload.
|
||||
* @returns {Promise<{ bytes: number, type: string, dimensions: Record<string, number>} | null>}
|
||||
* A promise that resolves to the file metadata if the file is successfully saved, or null if there is an error.
|
||||
*/
|
||||
async function saveURLToFirebase({ userId, URL, fileName, basePath = 'images' }) {
|
||||
const storage = getFirebaseStorage();
|
||||
|
@ -53,10 +53,12 @@ async function saveURLToFirebase({ userId, URL, fileName, basePath = 'images' })
|
|||
}
|
||||
|
||||
const storageRef = ref(storage, `${basePath}/${userId.toString()}/${fileName}`);
|
||||
const response = await fetch(URL);
|
||||
const buffer = await response.buffer();
|
||||
|
||||
try {
|
||||
await uploadBytes(storageRef, await fetch(URL).then((response) => response.buffer()));
|
||||
return fileName;
|
||||
await uploadBytes(storageRef, buffer);
|
||||
return await getBufferMetadata(buffer);
|
||||
} catch (error) {
|
||||
console.error('Error uploading file to Firebase Storage:', error.message);
|
||||
return null;
|
||||
|
|
|
@ -1,7 +1,8 @@
|
|||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const sharp = require('sharp');
|
||||
const { resizeImage } = require('../images/resize');
|
||||
const { resizeImageBuffer } = require('../images/resize');
|
||||
const { updateUser } = require('~/models/userMethods');
|
||||
const { saveBufferToFirebase } = require('./crud');
|
||||
const { updateFile } = require('~/models/File');
|
||||
const { logger } = require('~/config');
|
||||
|
@ -11,7 +12,7 @@ const { logger } = require('~/config');
|
|||
* resolution.
|
||||
*
|
||||
*
|
||||
* @param {Object} req - The request object from Express. It should have a `user` property with an `id`
|
||||
* @param {Express.Request} req - The request object from Express. It should have a `user` property with an `id`
|
||||
* representing the user, and an `app.locals.paths` object with an `imageOutput` path.
|
||||
* @param {Express.Multer.File} file - The file object, which is part of the request. The file object should
|
||||
* have a `path` property that points to the location of the uploaded file.
|
||||
|
@ -26,7 +27,8 @@ const { logger } = require('~/config');
|
|||
*/
|
||||
async function uploadImageToFirebase(req, file, resolution = 'high') {
|
||||
const inputFilePath = file.path;
|
||||
const { buffer: resizedBuffer, width, height } = await resizeImage(inputFilePath, resolution);
|
||||
const inputBuffer = await fs.promises.readFile(inputFilePath);
|
||||
const { buffer: resizedBuffer, width, height } = await resizeImageBuffer(inputBuffer, resolution);
|
||||
const extension = path.extname(inputFilePath);
|
||||
const userId = req.user.id;
|
||||
|
||||
|
@ -73,15 +75,15 @@ async function prepareImageURL(req, file) {
|
|||
*
|
||||
* @param {object} params - The parameters object.
|
||||
* @param {Buffer} params.buffer - The Buffer containing the avatar image in WebP format.
|
||||
* @param {object} params.User - The User document (mongoose); TODO: remove direct use of Model, `User`
|
||||
* @param {string} params.userId - The user ID.
|
||||
* @param {string} params.manual - A string flag indicating whether the update is manual ('true' or 'false').
|
||||
* @returns {Promise<string>} - A promise that resolves with the URL of the uploaded avatar.
|
||||
* @throws {Error} - Throws an error if Firebase is not initialized or if there is an error in uploading.
|
||||
*/
|
||||
async function processFirebaseAvatar({ buffer, User, manual }) {
|
||||
async function processFirebaseAvatar({ buffer, userId, manual }) {
|
||||
try {
|
||||
const downloadURL = await saveBufferToFirebase({
|
||||
userId: User._id.toString(),
|
||||
userId,
|
||||
buffer,
|
||||
fileName: 'avatar.png',
|
||||
});
|
||||
|
@ -91,8 +93,7 @@ async function processFirebaseAvatar({ buffer, User, manual }) {
|
|||
const url = `${downloadURL}?manual=${isManual}`;
|
||||
|
||||
if (isManual) {
|
||||
User.avatar = url;
|
||||
await User.save();
|
||||
await updateUser(userId, { avatar: url });
|
||||
}
|
||||
|
||||
return url;
|
||||
|
|
|
@ -1,8 +1,9 @@
|
|||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const axios = require('axios');
|
||||
const { logger } = require('~/config');
|
||||
const { getBufferMetadata } = require('~/server/utils');
|
||||
const paths = require('~/config/paths');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
/**
|
||||
* Saves a file to a specified output path with a new filename.
|
||||
|
@ -13,7 +14,7 @@ const paths = require('~/config/paths');
|
|||
* @returns {Promise<string>} The full path of the saved file.
|
||||
* @throws Will throw an error if the file saving process fails.
|
||||
*/
|
||||
async function saveFile(file, outputPath, outputFilename) {
|
||||
async function saveLocalFile(file, outputPath, outputFilename) {
|
||||
try {
|
||||
if (!fs.existsSync(outputPath)) {
|
||||
fs.mkdirSync(outputPath, { recursive: true });
|
||||
|
@ -44,9 +45,41 @@ async function saveFile(file, outputPath, outputFilename) {
|
|||
const saveLocalImage = async (req, file, filename) => {
|
||||
const imagePath = req.app.locals.paths.imageOutput;
|
||||
const outputPath = path.join(imagePath, req.user.id ?? '');
|
||||
await saveFile(file, outputPath, filename);
|
||||
await saveLocalFile(file, outputPath, filename);
|
||||
};
|
||||
|
||||
/**
|
||||
* Saves a buffer to a specified directory on the local file system.
|
||||
*
|
||||
* @param {Object} params - The parameters object.
|
||||
* @param {string} params.userId - The user's unique identifier. This is used to create a user-specific directory.
|
||||
* @param {Buffer} params.buffer - The buffer to be saved.
|
||||
* @param {string} params.fileName - The name of the file to be saved.
|
||||
* @param {string} [params.basePath='images'] - Optional. The base path where the file will be stored.
|
||||
* Defaults to 'images' if not specified.
|
||||
* @returns {Promise<string>} - A promise that resolves to the path of the saved file.
|
||||
*/
|
||||
async function saveLocalBuffer({ userId, buffer, fileName, basePath = 'images' }) {
|
||||
try {
|
||||
const { publicPath, uploads } = paths;
|
||||
|
||||
const directoryPath = path.join(basePath === 'images' ? publicPath : uploads, basePath, userId);
|
||||
|
||||
if (!fs.existsSync(directoryPath)) {
|
||||
fs.mkdirSync(directoryPath, { recursive: true });
|
||||
}
|
||||
|
||||
fs.writeFileSync(path.join(directoryPath, fileName), buffer);
|
||||
|
||||
const filePath = path.posix.join('/', basePath, userId, fileName);
|
||||
|
||||
return filePath;
|
||||
} catch (error) {
|
||||
logger.error('[saveLocalBuffer] Error while saving the buffer:', error);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Saves a file from a given URL to a local directory. The function fetches the file using the provided URL,
|
||||
* determines the content type, and saves it to a specified local directory with the correct file extension.
|
||||
|
@ -62,20 +95,18 @@ const saveLocalImage = async (req, file, filename) => {
|
|||
* @param {string} [params.basePath='images'] - Optional. The base directory where the file will be saved.
|
||||
* Defaults to 'images' if not specified.
|
||||
*
|
||||
* @returns {Promise<string|null>}
|
||||
* A promise that resolves to the file name if the file is successfully saved, or null if there is an error.
|
||||
* @returns {Promise<{ bytes: number, type: string, dimensions: Record<string, number>} | null>}
|
||||
* A promise that resolves to the file metadata if the file is successfully saved, or null if there is an error.
|
||||
*/
|
||||
async function saveFileFromURL({ userId, URL, fileName, basePath = 'images' }) {
|
||||
try {
|
||||
// Fetch the file from the URL
|
||||
const response = await axios({
|
||||
url: URL,
|
||||
responseType: 'stream',
|
||||
responseType: 'arraybuffer',
|
||||
});
|
||||
|
||||
// Get the content type from the response headers
|
||||
const contentType = response.headers['content-type'];
|
||||
let extension = contentType.split('/').pop();
|
||||
const buffer = Buffer.from(response.data, 'binary');
|
||||
const { bytes, type, dimensions, extension } = await getBufferMetadata(buffer);
|
||||
|
||||
// Construct the outputPath based on the basePath and userId
|
||||
const outputPath = path.join(paths.publicPath, basePath, userId.toString());
|
||||
|
@ -92,17 +123,15 @@ async function saveFileFromURL({ userId, URL, fileName, basePath = 'images' }) {
|
|||
fileName += `.${extension}`;
|
||||
}
|
||||
|
||||
// Create a writable stream for the output path
|
||||
const outputFilePath = path.join(outputPath, path.basename(fileName));
|
||||
const writer = fs.createWriteStream(outputFilePath);
|
||||
// Save the file to the output path
|
||||
const outputFilePath = path.join(outputPath, fileName);
|
||||
fs.writeFileSync(outputFilePath, buffer);
|
||||
|
||||
// Pipe the response data to the output file
|
||||
response.data.pipe(writer);
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
writer.on('finish', () => resolve(fileName));
|
||||
writer.on('error', reject);
|
||||
});
|
||||
return {
|
||||
bytes,
|
||||
type,
|
||||
dimensions,
|
||||
};
|
||||
} catch (error) {
|
||||
logger.error('[saveFileFromURL] Error while saving the file:', error);
|
||||
return null;
|
||||
|
@ -171,4 +200,11 @@ const deleteLocalFile = async (req, file) => {
|
|||
await fs.promises.unlink(filepath);
|
||||
};
|
||||
|
||||
module.exports = { saveFile, saveLocalImage, saveFileFromURL, getLocalFileURL, deleteLocalFile };
|
||||
module.exports = {
|
||||
saveLocalFile,
|
||||
saveLocalImage,
|
||||
saveLocalBuffer,
|
||||
saveFileFromURL,
|
||||
getLocalFileURL,
|
||||
deleteLocalFile,
|
||||
};
|
||||
|
|
|
@ -1,7 +1,8 @@
|
|||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const sharp = require('sharp');
|
||||
const { resizeImage } = require('../images/resize');
|
||||
const { resizeImageBuffer } = require('../images/resize');
|
||||
const { updateUser } = require('~/models/userMethods');
|
||||
const { updateFile } = require('~/models/File');
|
||||
|
||||
/**
|
||||
|
@ -28,7 +29,8 @@ const { updateFile } = require('~/models/File');
|
|||
*/
|
||||
async function uploadLocalImage(req, file, resolution = 'high') {
|
||||
const inputFilePath = file.path;
|
||||
const { buffer: resizedBuffer, width, height } = await resizeImage(inputFilePath, resolution);
|
||||
const inputBuffer = await fs.promises.readFile(inputFilePath);
|
||||
const { buffer: resizedBuffer, width, height } = await resizeImageBuffer(inputBuffer, resolution);
|
||||
const extension = path.extname(inputFilePath);
|
||||
|
||||
const { imageOutput } = req.app.locals.paths;
|
||||
|
@ -96,17 +98,17 @@ async function prepareImagesLocal(req, file) {
|
|||
}
|
||||
|
||||
/**
|
||||
* Uploads a user's avatar to Firebase Storage and returns the URL.
|
||||
* Uploads a user's avatar to local server storage and returns the URL.
|
||||
* If the 'manual' flag is set to 'true', it also updates the user's avatar URL in the database.
|
||||
*
|
||||
* @param {object} params - The parameters object.
|
||||
* @param {Buffer} params.buffer - The Buffer containing the avatar image in WebP format.
|
||||
* @param {object} params.User - The User document (mongoose); TODO: remove direct use of Model, `User`
|
||||
* @param {string} params.userId - The user ID.
|
||||
* @param {string} params.manual - A string flag indicating whether the update is manual ('true' or 'false').
|
||||
* @returns {Promise<string>} - A promise that resolves with the URL of the uploaded avatar.
|
||||
* @throws {Error} - Throws an error if Firebase is not initialized or if there is an error in uploading.
|
||||
*/
|
||||
async function processLocalAvatar({ buffer, User, manual }) {
|
||||
async function processLocalAvatar({ buffer, userId, manual }) {
|
||||
const userDir = path.resolve(
|
||||
__dirname,
|
||||
'..',
|
||||
|
@ -117,10 +119,11 @@ async function processLocalAvatar({ buffer, User, manual }) {
|
|||
'client',
|
||||
'public',
|
||||
'images',
|
||||
User._id.toString(),
|
||||
userId,
|
||||
);
|
||||
|
||||
const fileName = `avatar-${new Date().getTime()}.png`;
|
||||
const urlRoute = `/images/${User._id.toString()}/${fileName}`;
|
||||
const urlRoute = `/images/${userId}/${fileName}`;
|
||||
const avatarPath = path.join(userDir, fileName);
|
||||
|
||||
await fs.promises.mkdir(userDir, { recursive: true });
|
||||
|
@ -130,8 +133,7 @@ async function processLocalAvatar({ buffer, User, manual }) {
|
|||
let url = `${urlRoute}?manual=${isManual}`;
|
||||
|
||||
if (isManual) {
|
||||
User.avatar = url;
|
||||
await User.save();
|
||||
await updateUser(userId, { avatar: url });
|
||||
}
|
||||
|
||||
return url;
|
||||
|
|
49
api/server/services/Files/OpenAI/crud.js
Normal file
49
api/server/services/Files/OpenAI/crud.js
Normal file
|
@ -0,0 +1,49 @@
|
|||
const fs = require('fs');
|
||||
|
||||
/**
|
||||
* Uploads a file that can be used across various OpenAI services.
|
||||
*
|
||||
* @param {Express.Request} req - The request object from Express. It should have a `user` property with an `id`
|
||||
* representing the user, and an `app.locals.paths` object with an `imageOutput` path.
|
||||
* @param {Express.Multer.File} file - The file uploaded to the server via multer.
|
||||
* @param {OpenAI} openai - The initialized OpenAI client.
|
||||
* @returns {Promise<OpenAIFile>}
|
||||
*/
|
||||
async function uploadOpenAIFile(req, file, openai) {
|
||||
try {
|
||||
const uploadedFile = await openai.files.create({
|
||||
file: fs.createReadStream(file.path),
|
||||
purpose: 'assistants',
|
||||
});
|
||||
|
||||
console.log('File uploaded successfully to OpenAI');
|
||||
|
||||
return uploadedFile;
|
||||
} catch (error) {
|
||||
console.error('Error uploading file to OpenAI:', error.message);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Deletes a file previously uploaded to OpenAI.
|
||||
*
|
||||
* @param {Express.Request} req - The request object from Express.
|
||||
* @param {MongoFile} file - The database representation of the uploaded file.
|
||||
* @param {OpenAI} openai - The initialized OpenAI client.
|
||||
* @returns {Promise<void>}
|
||||
*/
|
||||
async function deleteOpenAIFile(req, file, openai) {
|
||||
try {
|
||||
const res = await openai.files.del(file.file_id);
|
||||
if (!res.deleted) {
|
||||
throw new Error('OpenAI returned `false` for deleted status');
|
||||
}
|
||||
console.log('File deleted successfully from OpenAI');
|
||||
} catch (error) {
|
||||
console.error('Error deleting file from OpenAI:', error.message);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { uploadOpenAIFile, deleteOpenAIFile };
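
// Example usage (illustrative sketch, not part of the original source; `file` is the multer
// upload from the route handler and `dbFile` is its MongoFile record):
//
// const uploadedFile = await uploadOpenAIFile(req, file, openai);
// // ...later, when the user deletes it:
// await deleteOpenAIFile(req, dbFile, openai);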
|
5
api/server/services/Files/OpenAI/index.js
Normal file
5
api/server/services/Files/OpenAI/index.js
Normal file
|
@ -0,0 +1,5 @@
|
|||
const crud = require('./crud');
|
||||
|
||||
module.exports = {
|
||||
...crud,
|
||||
};
|
|
@ -1,42 +1,29 @@
|
|||
const sharp = require('sharp');
|
||||
const fs = require('fs').promises;
|
||||
const fetch = require('node-fetch');
|
||||
const User = require('~/models/User');
|
||||
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
|
||||
const { resizeAndConvert } = require('./resize');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
async function convertToWebP(inputBuffer) {
|
||||
return sharp(inputBuffer).resize({ width: 150 }).toFormat('webp').toBuffer();
|
||||
}
|
||||
|
||||
/**
|
||||
* Uploads an avatar image for a user. This function can handle various types of input (URL, Buffer, or File object),
|
||||
* processes the image to a square format, converts it to WebP format, and then uses a specified file strategy for
|
||||
* further processing. It performs validation on the user ID and the input type. The function can throw errors for
|
||||
* invalid input types, fetching issues, or other processing errors.
|
||||
* processes the image to a square format, converts it to WebP format, and returns the resized buffer.
|
||||
*
|
||||
* @param {Object} params - The parameters object.
|
||||
* @param {string} params.userId - The unique identifier of the user for whom the avatar is being uploaded.
|
||||
* @param {FileSources} params.fileStrategy - The file handling strategy to use, determining how the avatar is processed.
|
||||
* @param {(string|Buffer|File)} params.input - The input representing the avatar image. Can be a URL (string),
|
||||
* a Buffer, or a File object.
|
||||
* @param {string} params.manual - A string flag indicating whether the upload process is manual.
|
||||
*
|
||||
* @returns {Promise<any>}
|
||||
* A promise that resolves to the result of the `processAvatar` function, specific to the chosen file
|
||||
* strategy. Throws an error if any step in the process fails.
|
||||
* A promise that resolves to a resized buffer.
|
||||
*
|
||||
* @throws {Error} Throws an error if the user ID is undefined, the input type is invalid, the image fetching fails,
|
||||
* or any other error occurs during the processing.
|
||||
*/
|
||||
async function uploadAvatar({ userId, fileStrategy, input, manual }) {
|
||||
async function resizeAvatar({ userId, input }) {
|
||||
try {
|
||||
if (userId === undefined) {
|
||||
throw new Error('User ID is undefined');
|
||||
}
|
||||
const _id = userId;
|
||||
// TODO: remove direct use of Model, `User`
|
||||
const oldUser = await User.findOne({ _id });
|
||||
|
||||
let imageBuffer;
|
||||
if (typeof input === 'string') {
|
||||
|
@ -66,13 +53,12 @@ async function uploadAvatar({ userId, fileStrategy, input, manual }) {
|
|||
})
|
||||
.toBuffer();
|
||||
|
||||
const webPBuffer = await convertToWebP(squaredBuffer);
|
||||
const { processAvatar } = getStrategyFunctions(fileStrategy);
|
||||
return await processAvatar({ buffer: webPBuffer, User: oldUser, manual });
|
||||
const { buffer } = await resizeAndConvert(squaredBuffer);
|
||||
return buffer;
|
||||
} catch (error) {
|
||||
logger.error('Error uploading the avatar:', error);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = uploadAvatar;
|
||||
module.exports = { resizeAvatar };
|
||||
|
|
69
api/server/services/Files/images/convert.js
Normal file
69
api/server/services/Files/images/convert.js
Normal file
|
@ -0,0 +1,69 @@
|
|||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const sharp = require('sharp');
|
||||
const { resizeImageBuffer } = require('./resize');
|
||||
const { getStrategyFunctions } = require('../strategies');
|
||||
|
||||
/**
|
||||
* Converts an image file or buffer to WebP format with specified resolution.
|
||||
*
|
||||
* @param {Express.Request} req - The request object, containing user and app configuration data.
|
||||
* @param {Buffer | Express.Multer.File} file - The file object, containing either a path or a buffer.
|
||||
* @param {'low' | 'high'} [resolution='high'] - The desired resolution for the output image.
|
||||
* @param {string} [basename=''] - The basename of the input file, if it is a buffer.
|
||||
* @returns {Promise<{filepath: string, bytes: number, width: number, height: number}>} An object containing the path, size, and dimensions of the converted image.
|
||||
* @throws Throws an error if there is an issue during the conversion process.
|
||||
*/
|
||||
async function convertToWebP(req, file, resolution = 'high', basename = '') {
|
||||
try {
|
||||
let inputBuffer;
|
||||
let outputBuffer;
|
||||
let extension = path.extname(file.path ?? basename).toLowerCase();
|
||||
|
||||
// Check if the input is a buffer or a file path
|
||||
if (Buffer.isBuffer(file)) {
|
||||
inputBuffer = file;
|
||||
} else if (file && file.path) {
|
||||
const inputFilePath = file.path;
|
||||
inputBuffer = await fs.promises.readFile(inputFilePath);
|
||||
} else {
|
||||
throw new Error('Invalid input: file must be a buffer or contain a valid path.');
|
||||
}
|
||||
|
||||
// Resize the image buffer
|
||||
const {
|
||||
buffer: resizedBuffer,
|
||||
width,
|
||||
height,
|
||||
} = await resizeImageBuffer(inputBuffer, resolution);
|
||||
|
||||
// Check if the file is already in WebP format
|
||||
// If it isn't, convert it:
|
||||
if (extension === '.webp') {
|
||||
outputBuffer = resizedBuffer;
|
||||
} else {
|
||||
outputBuffer = await sharp(resizedBuffer).toFormat('webp').toBuffer();
|
||||
extension = '.webp';
|
||||
}
|
||||
|
||||
// Generate a new filename for the output file
|
||||
const newFileName =
|
||||
path.basename(file.path ?? basename, path.extname(file.path ?? basename)) + extension;
|
||||
|
||||
const { saveBuffer } = getStrategyFunctions(req.app.locals.fileStrategy);
|
||||
|
||||
const savedFilePath = await saveBuffer({
|
||||
userId: req.user.id,
|
||||
buffer: outputBuffer,
|
||||
fileName: newFileName,
|
||||
});
|
||||
|
||||
const bytes = Buffer.byteLength(outputBuffer);
|
||||
return { filepath: savedFilePath, bytes, width, height };
|
||||
} catch (err) {
|
||||
console.error(err);
|
||||
throw err;
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { convertToWebP };
|
|
@ -1,13 +1,13 @@
|
|||
const avatar = require('./avatar');
|
||||
const convert = require('./convert');
|
||||
const encode = require('./encode');
|
||||
const parse = require('./parse');
|
||||
const resize = require('./resize');
|
||||
const validate = require('./validate');
|
||||
|
||||
module.exports = {
|
||||
...convert,
|
||||
...encode,
|
||||
...parse,
|
||||
...resize,
|
||||
...validate,
|
||||
avatar,
|
||||
};
|
||||
|
|
|
@@ -1,6 +1,16 @@
const sharp = require('sharp');

async function resizeImage(inputFilePath, resolution) {
/**
 * Resizes an image from a given buffer based on the specified resolution.
 *
 * @param {Buffer} inputBuffer - The buffer of the image to be resized.
 * @param {'low' | 'high'} resolution - The resolution to resize the image to.
 *   'low' for a maximum of 512x512 resolution,
 *   'high' for a maximum of 768x2000 resolution.
 * @returns {Promise<{buffer: Buffer, width: number, height: number}>} An object containing the resized image buffer and its dimensions.
 * @throws Will throw an error if the resolution parameter is invalid.
 */
async function resizeImageBuffer(inputBuffer, resolution) {
  const maxLowRes = 512;
  const maxShortSideHighRes = 768;
  const maxLongSideHighRes = 2000;

@@ -12,7 +22,7 @@ async function resizeImage(inputFilePath, resolution) {
    resizeOptions.width = maxLowRes;
    resizeOptions.height = maxLowRes;
  } else if (resolution === 'high') {
    const metadata = await sharp(inputFilePath).metadata();
    const metadata = await sharp(inputBuffer).metadata();
    const isWidthShorter = metadata.width < metadata.height;

    if (isWidthShorter) {

@@ -43,10 +53,28 @@ async function resizeImage(inputFilePath, resolution) {
    throw new Error('Invalid resolution parameter');
  }

  const resizedBuffer = await sharp(inputFilePath).rotate().resize(resizeOptions).toBuffer();
  const resizedBuffer = await sharp(inputBuffer).rotate().resize(resizeOptions).toBuffer();

  const resizedMetadata = await sharp(resizedBuffer).metadata();
  return { buffer: resizedBuffer, width: resizedMetadata.width, height: resizedMetadata.height };
}

module.exports = { resizeImage };
/**
 * Resizes an image buffer to a 150px width and converts it to WebP format.
 *
 * @param {Buffer} inputBuffer - The buffer of the image to be resized.
 * @returns {Promise<{ buffer: Buffer, width: number, height: number, bytes: number }>} An object containing the resized image buffer, its size and dimensions.
 * @throws Will throw an error if the image processing fails.
 */
async function resizeAndConvert(inputBuffer) {
  const resizedBuffer = await sharp(inputBuffer).resize({ width: 150 }).toFormat('webp').toBuffer();
  const resizedMetadata = await sharp(resizedBuffer).metadata();
  return {
    buffer: resizedBuffer,
    width: resizedMetadata.width,
    height: resizedMetadata.height,
    bytes: Buffer.byteLength(resizedBuffer),
  };
}

module.exports = { resizeImageBuffer, resizeAndConvert };
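A small usage sketch of the two helpers above (illustrative only; the file path and logging are assumptions, not part of the diff):

const fs = require('fs');
const { resizeImageBuffer, resizeAndConvert } = require('./resize');

async function demo() {
  // hypothetical local image used only for this example
  const input = await fs.promises.readFile('photo.png');

  const { width, height } = await resizeImageBuffer(input, 'high');
  console.log(`resized within 768x2000 bounds: ${width}x${height}`);

  const thumb = await resizeAndConvert(input);
  console.log(`150px WebP thumbnail: ${thumb.width}x${thumb.height}, ${thumb.bytes} bytes`);
}

demo().catch(console.error);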
@@ -1,13 +0,0 @@
const { visionModels } = require('librechat-data-provider');

function validateVisionModel(model) {
  if (!model) {
    return false;
  }

  return visionModels.some((visionModel) => model.includes(visionModel));
}

module.exports = {
  validateVisionModel,
};
@@ -1,7 +1,25 @@
const { updateFileUsage, createFile } = require('~/models');
const path = require('path');
const { v4 } = require('uuid');
const mime = require('mime/lite');
const {
  isUUID,
  megabyte,
  FileContext,
  FileSources,
  imageExtRegex,
  EModelEndpoint,
  mergeFileConfig,
} = require('librechat-data-provider');
const { convertToWebP, resizeAndConvert } = require('~/server/services/Files/images');
const { initializeClient } = require('~/server/services/Endpoints/assistant');
const { createFile, updateFileUsage, deleteFiles } = require('~/models/File');
const { isEnabled, determineFileType } = require('~/server/utils');
const { LB_QueueAsyncCall } = require('~/server/utils/queue');
const { getStrategyFunctions } = require('./strategies');
const { logger } = require('~/config');

const { GPTS_DOWNLOAD_IMAGES = 'true' } = process.env;

const processFiles = async (files) => {
  const promises = [];
  for (let file of files) {

@@ -13,6 +31,99 @@ const processFiles = async (files) => {
  return await Promise.all(promises);
};

/**
 * Enqueues the delete operation to the leaky bucket queue if necessary, or adds it directly to promises.
 *
 * @param {Express.Request} req - The express request object.
 * @param {MongoFile} file - The file object to delete.
 * @param {Function} deleteFile - The delete file function.
 * @param {Promise[]} promises - The array of promises to await.
 * @param {OpenAI | undefined} [openai] - If an OpenAI file, the initialized OpenAI client.
 */
function enqueueDeleteOperation(req, file, deleteFile, promises, openai) {
  if (file.source === FileSources.openai) {
    // Enqueue to leaky bucket
    promises.push(
      new Promise((resolve, reject) => {
        LB_QueueAsyncCall(
          () => deleteFile(req, file, openai),
          [],
          (err, result) => {
            if (err) {
              logger.error('Error deleting file from OpenAI source', err);
              reject(err);
            } else {
              resolve(result);
            }
          },
        );
      }),
    );
  } else {
    // Add directly to promises
    promises.push(
      deleteFile(req, file).catch((err) => {
        logger.error('Error deleting file', err);
        return Promise.reject(err);
      }),
    );
  }
}

// TODO: refactor as currently only image files can be deleted this way
// as other filetypes will not reside in public path
/**
 * Deletes a list of files from the server filesystem and the database.
 *
 * @param {Object} params - The params object.
 * @param {MongoFile[]} params.files - The file objects to delete.
 * @param {Express.Request} params.req - The express request object.
 * @param {DeleteFilesBody} params.req.body - The request body.
 * @param {string} [params.req.body.assistant_id] - The assistant ID if file uploaded is associated to an assistant.
 *
 * @returns {Promise<void>}
 */
const processDeleteRequest = async ({ req, files }) => {
  const file_ids = files.map((file) => file.file_id);

  const deletionMethods = {};
  const promises = [];
  promises.push(deleteFiles(file_ids));

  /** @type {OpenAI | undefined} */
  let openai;
  if (req.body.assistant_id) {
    ({ openai } = await initializeClient({ req }));
  }

  for (const file of files) {
    const source = file.source ?? FileSources.local;

    if (source === FileSources.openai && !openai) {
      ({ openai } = await initializeClient({ req }));
    }

    if (req.body.assistant_id) {
      promises.push(openai.beta.assistants.files.del(req.body.assistant_id, file.file_id));
    }

    if (deletionMethods[source]) {
      enqueueDeleteOperation(req, file, deletionMethods[source], promises, openai);
      continue;
    }

    const { deleteFile } = getStrategyFunctions(source);
    if (!deleteFile) {
      throw new Error(`Delete function not implemented for ${source}`);
    }

    deletionMethods[source] = deleteFile;
    enqueueDeleteOperation(req, file, deleteFile, promises, openai);
  }

  await Promise.allSettled(promises);
};
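A minimal sketch of how a delete route might hand its payload to processDeleteRequest (the route path and body shape are assumptions for illustration; only the function call itself comes from the code above):

// Hypothetical Express handler; promise rejections are settled inside processDeleteRequest.
const express = require('express');
const { processDeleteRequest } = require('~/server/services/Files/process');

const router = express.Router();

router.delete('/', async (req, res) => {
  try {
    const { files } = req.body; // array of MongoFile-like objects with file_id and source
    await processDeleteRequest({ req, files });
    res.status(200).json({ message: 'Files deleted successfully' });
  } catch (error) {
    res.status(400).json({ error: error.message });
  }
});

module.exports = router;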
/**
 * Processes a file URL using a specified file handling strategy. This function accepts a strategy name,
 * fetches the corresponding file processing functions (for saving and retrieving file URLs), and then

@@ -21,25 +132,38 @@ const processFiles = async (files) => {
 * exception with an appropriate message.
 *
 * @param {Object} params - The parameters object.
 * @param {FileSources} params.fileStrategy - The file handling strategy to use. Must be a value from the
 *   `FileSources` enum, which defines different file handling
 *   strategies (like saving to Firebase, local storage, etc.).
 * @param {FileSources} params.fileStrategy - The file handling strategy to use.
 *   Must be a value from the `FileSources` enum, which defines different file
 *   handling strategies (like saving to Firebase, local storage, etc.).
 * @param {string} params.userId - The user's unique identifier. Used for creating user-specific paths or
 *   references in the file handling process.
 * @param {string} params.URL - The URL of the file to be processed.
 * @param {string} params.fileName - The name that will be used to save the file. This should include the
 *   file extension.
 * @param {string} params.fileName - The name that will be used to save the file (including extension)
 * @param {string} params.basePath - The base path or directory where the file will be saved or retrieved from.
 *
 * @returns {Promise<string>}
 *   A promise that resolves to the URL of the processed file. It throws an error if the file processing
 *   fails at any stage.
 * @param {FileContext} params.context - The context of the file (e.g., 'avatar', 'image_generation', etc.)
 * @returns {Promise<MongoFile>} A promise that resolves to the DB representation (MongoFile)
 *   of the processed file. It throws an error if the file processing fails at any stage.
 */
const processFileURL = async ({ fileStrategy, userId, URL, fileName, basePath }) => {
const processFileURL = async ({ fileStrategy, userId, URL, fileName, basePath, context }) => {
  const { saveURL, getFileURL } = getStrategyFunctions(fileStrategy);
  try {
    await saveURL({ userId, URL, fileName, basePath });
    return await getFileURL({ fileName: `${userId}/${fileName}`, basePath });
    const { bytes, type, dimensions } = await saveURL({ userId, URL, fileName, basePath });
    const filepath = await getFileURL({ fileName: `${userId}/${fileName}`, basePath });
    return await createFile(
      {
        user: userId,
        file_id: v4(),
        bytes,
        filepath,
        filename: fileName,
        source: fileStrategy,
        type,
        context,
        width: dimensions.width,
        height: dimensions.height,
      },
      true,
    );
  } catch (error) {
    logger.error(`Error while processing the image with ${fileStrategy}:`, error);
    throw new Error(`Failed to process the image with ${fileStrategy}. ${error.message}`);

@@ -49,7 +173,6 @@ const processFileURL = async ({ fileStrategy, userId, URL, fileName, basePath })
/**
 * Applies the current strategy for image uploads.
 * Saves file metadata to the database with an expiry TTL.
 * Files must be deleted from the server filesystem manually.
 *
 * @param {Object} params - The parameters object.
 * @param {Express.Request} params.req - The Express request object.

@@ -58,7 +181,7 @@ const processFileURL = async ({ fileStrategy, userId, URL, fileName, basePath })
 * @param {ImageMetadata} params.metadata - Additional metadata for the file.
 * @returns {Promise<void>}
 */
const processImageUpload = async ({ req, res, file, metadata }) => {
const processImageFile = async ({ req, res, file, metadata }) => {
  const source = req.app.locals.fileStrategy;
  const { handleImageUpload } = getStrategyFunctions(source);
  const { file_id, temp_file_id } = metadata;

@@ -71,6 +194,7 @@ const processImageUpload = async ({ req, res, file, metadata }) => {
    bytes,
    filepath,
    filename: file.originalname,
    context: FileContext.message_attachment,
    source,
    type: 'image/webp',
    width,

@@ -81,8 +205,271 @@ const processImageUpload = async ({ req, res, file, metadata }) => {
  res.status(200).json({ message: 'File uploaded and processed successfully', ...result });
};

/**
 * Applies the current strategy for image uploads and
 * returns minimal file metadata, without saving to the database.
 *
 * @param {Object} params - The parameters object.
 * @param {Express.Request} params.req - The Express request object.
 * @param {FileContext} params.context - The context of the file (e.g., 'avatar', 'image_generation', etc.)
 * @returns {Promise<{ filepath: string, filename: string, source: string, type: 'image/webp'}>}
 */
const uploadImageBuffer = async ({ req, context }) => {
  const source = req.app.locals.fileStrategy;
  const { saveBuffer } = getStrategyFunctions(source);
  const { buffer, width, height, bytes } = await resizeAndConvert(req.file.buffer);
  const file_id = v4();
  const fileName = `img-${file_id}.webp`;

  const filepath = await saveBuffer({ userId: req.user.id, fileName, buffer });
  return await createFile(
    {
      user: req.user.id,
      file_id,
      bytes,
      filepath,
      filename: req.file.originalname,
      context,
      source,
      type: 'image/webp',
      width,
      height,
    },
    true,
  );
};

/**
 * Applies the current strategy for file uploads.
 * Saves file metadata to the database with an expiry TTL.
 * Files must be deleted from the server filesystem manually.
 *
 * @param {Object} params - The parameters object.
 * @param {Express.Request} params.req - The Express request object.
 * @param {Express.Response} params.res - The Express response object.
 * @param {Express.Multer.File} params.file - The uploaded file.
 * @param {FileMetadata} params.metadata - Additional metadata for the file.
 * @returns {Promise<void>}
 */
const processFileUpload = async ({ req, res, file, metadata }) => {
  const isAssistantUpload = metadata.endpoint === EModelEndpoint.assistants;
  const source = isAssistantUpload ? FileSources.openai : req.app.locals.fileStrategy;
  const { handleFileUpload } = getStrategyFunctions(source);
  const { file_id, temp_file_id } = metadata;

  /** @type {OpenAI | undefined} */
  let openai;
  if (source === FileSources.openai) {
    ({ openai } = await initializeClient({ req }));
  }

  const { id, bytes, filename, filepath } = await handleFileUpload(req, file, openai);

  if (isAssistantUpload && !metadata.message_file) {
    await openai.beta.assistants.files.create(metadata.assistant_id, {
      file_id: id,
    });
  }

  const result = await createFile(
    {
      user: req.user.id,
      file_id: id ?? file_id,
      temp_file_id,
      bytes,
      filepath: isAssistantUpload ? `https://api.openai.com/v1/files/${id}` : filepath,
      filename: filename ?? file.originalname,
      context: isAssistantUpload ? FileContext.assistants : FileContext.message_attachment,
      source,
      type: file.mimetype,
    },
    true,
  );
  res.status(200).json({ message: 'File uploaded and processed successfully', ...result });
};
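A minimal sketch of wiring processFileUpload behind multer (the route path, field name, and storage choice are assumptions for illustration; the metadata keys mirror the function above):

// Hypothetical upload route; memory storage keeps file.buffer and file.mimetype available.
const multer = require('multer');
const express = require('express');
const { processFileUpload } = require('~/server/services/Files/process');

const upload = multer({ storage: multer.memoryStorage() });
const router = express.Router();

router.post('/', upload.single('file'), async (req, res) => {
  try {
    // when metadata.endpoint is the assistants endpoint, the upload goes to OpenAI storage instead of local/Firebase
    const metadata = req.body; // expects endpoint, file_id, and optionally assistant_id / message_file
    await processFileUpload({ req, res, file: req.file, metadata });
  } catch (error) {
    res.status(400).json({ error: error.message });
  }
});

module.exports = router;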
/**
 * Retrieves and processes an OpenAI file based on its type.
 *
 * @param {Object} params - The params passed to the function.
 * @param {OpenAIClient} params.openai - The initialized OpenAI client.
 * @param {string} params.file_id - The ID of the file to retrieve.
 * @param {string} params.basename - The basename of the file (if image); e.g., 'image.jpg'.
 * @param {boolean} [params.unknownType] - Whether the file type is unknown.
 * @returns {Promise<{file_id: string, filepath: string, source: string, bytes?: number, width?: number, height?: number} | null>}
 *   - Returns null if `file_id` is not defined; else, the file metadata if successfully retrieved and processed.
 */
async function retrieveAndProcessFile({ openai, file_id, basename: _basename, unknownType }) {
  if (!file_id) {
    return null;
  }

  if (openai.attachedFileIds?.has(file_id)) {
    return {
      file_id,
      // filepath: TODO: local source filepath?,
      source: FileSources.openai,
    };
  }

  let basename = _basename;
  const downloadImages = isEnabled(GPTS_DOWNLOAD_IMAGES);

  /**
   * @param {string} file_id - The ID of the file to retrieve.
   * @param {boolean} [save] - Whether to save the file metadata to the database.
   */
  const retrieveFile = async (file_id, save = false) => {
    const _file = await openai.files.retrieve(file_id);
    const filepath = `/api/files/download/${file_id}`;
    const file = {
      ..._file,
      type: mime.getType(_file.filename),
      filepath,
      usage: 1,
      file_id,
      context: _file.purpose ?? FileContext.message_attachment,
      source: FileSources.openai,
    };

    if (save) {
      await createFile(file, true);
    } else {
      try {
        await updateFileUsage({ file_id });
      } catch (error) {
        logger.error('Error updating file usage', error);
      }
    }

    return file;
  };

  // If image downloads are not enabled or no basename provided, return only the file metadata
  if (!downloadImages || (!basename && !downloadImages)) {
    return await retrieveFile(file_id, true);
  }

  let data;
  try {
    const response = await openai.files.content(file_id);
    data = await response.arrayBuffer();
  } catch (error) {
    logger.error('Error downloading file from OpenAI:', error);
    return await retrieveFile(file_id);
  }

  if (!data) {
    return await retrieveFile(file_id);
  }
  const dataBuffer = Buffer.from(data);

  /**
   * @param {Buffer} dataBuffer
   * @param {string} fileExt
   */
  const processAsImage = async (dataBuffer, fileExt) => {
    // Logic to process image files, convert to webp, etc.
    const _file = await convertToWebP(openai.req, dataBuffer, 'high', `${file_id}${fileExt}`);
    const file = {
      ..._file,
      type: 'image/webp',
      usage: 1,
      file_id,
      source: FileSources.openai,
    };
    createFile(file, true);
    return file;
  };

  /** @param {Buffer} dataBuffer */
  const processOtherFileTypes = async (dataBuffer) => {
    // Logic to handle other file types
    logger.debug('[retrieveAndProcessFile] Non-image file type detected');
    return { filepath: `/api/files/download/${file_id}`, bytes: dataBuffer.length };
  };

  // If the filetype is unknown, inspect the file
  if (unknownType || !path.extname(basename)) {
    const detectedExt = await determineFileType(dataBuffer);
    if (detectedExt && imageExtRegex.test('.' + detectedExt)) {
      return await processAsImage(dataBuffer, detectedExt);
    } else {
      return await processOtherFileTypes(dataBuffer);
    }
  }

  // Existing logic for processing known image types
  if (downloadImages && basename && path.extname(basename) && imageExtRegex.test(basename)) {
    return await processAsImage(dataBuffer, path.extname(basename));
  } else {
    logger.debug('[retrieveAndProcessFile] Not an image or invalid extension: ', basename);
    return await processOtherFileTypes(dataBuffer);
  }
}

/**
 * Filters a file based on its size and the endpoint origin.
 *
 * @param {Object} params - The parameters for the function.
 * @param {Express.Request} params.req - The request object from Express.
 * @param {Express.Multer.File} params.file - The file uploaded to the server via multer.
 * @param {boolean} [params.image] - Whether the file expected is an image.
 * @returns {void}
 *
 * @throws {Error} If a file exception is caught (invalid file size or type, lack of metadata).
 */
function filterFile({ req, file, image }) {
  const { endpoint, file_id, width, height } = req.body;

  if (!file_id) {
    throw new Error('No file_id provided');
  }

  /* parse to validate api call, throws error on fail */
  isUUID.parse(file_id);

  if (!endpoint) {
    throw new Error('No endpoint provided');
  }

  const fileConfig = mergeFileConfig(req.app.locals.fileConfig);

  const { fileSizeLimit, supportedMimeTypes } =
    fileConfig.endpoints[endpoint] ?? fileConfig.endpoints.default;

  if (file.size > fileSizeLimit) {
    throw new Error(
      `File size limit of ${fileSizeLimit / megabyte} MB exceeded for ${endpoint} endpoint`,
    );
  }

  const isSupportedMimeType = fileConfig.checkType(file.mimetype, supportedMimeTypes);

  if (!isSupportedMimeType) {
    throw new Error('Unsupported file type');
  }

  if (!image) {
    return;
  }

  if (!width) {
    throw new Error('No width provided');
  }

  if (!height) {
    throw new Error('No height provided');
  }
}

module.exports = {
  processImageUpload,
  filterFile,
  processFiles,
  processFileURL,
  processImageFile,
  uploadImageBuffer,
  processFileUpload,
  processDeleteRequest,
  retrieveAndProcessFile,
};
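To make the validation flow above concrete, a small illustrative sketch of filterFile rejecting a bad upload (the wrapper function and request shape are assumptions for the example, not values from the diff):

// Hypothetical invocation showing what filterFile expects on req.body for image uploads.
const { filterFile } = require('~/server/services/Files/process');

function validateImageRequest(req) {
  try {
    // req.body must carry endpoint, a UUID file_id, and width/height when image === true
    filterFile({ req, file: req.file, image: true });
    return { ok: true };
  } catch (error) {
    // e.g. a size-limit message or 'Unsupported file type'
    return { ok: false, reason: error.message };
  }
}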

@@ -4,46 +4,82 @@ const {
  prepareImageURL,
  saveURLToFirebase,
  deleteFirebaseFile,
  saveBufferToFirebase,
  uploadImageToFirebase,
  processFirebaseAvatar,
} = require('./Firebase');
const {
  // saveLocalFile,
  getLocalFileURL,
  saveFileFromURL,
  saveLocalBuffer,
  deleteLocalFile,
  uploadLocalImage,
  prepareImagesLocal,
  processLocalAvatar,
} = require('./Local');
const { uploadOpenAIFile, deleteOpenAIFile } = require('./OpenAI');

// Firebase Strategy Functions
/**
 * Firebase Storage Strategy Functions
 *
 * */
const firebaseStrategy = () => ({
  // saveFile:
  saveURL: saveURLToFirebase,
  getFileURL: getFirebaseURL,
  deleteFile: deleteFirebaseFile,
  saveBuffer: saveBufferToFirebase,
  prepareImagePayload: prepareImageURL,
  processAvatar: processFirebaseAvatar,
  handleImageUpload: uploadImageToFirebase,
});

// Local Strategy Functions
/**
 * Local Server Storage Strategy Functions
 *
 * */
const localStrategy = () => ({
  // saveFile: ,
  // saveFile: saveLocalFile,
  saveURL: saveFileFromURL,
  getFileURL: getLocalFileURL,
  saveBuffer: saveLocalBuffer,
  deleteFile: deleteLocalFile,
  processAvatar: processLocalAvatar,
  handleImageUpload: uploadLocalImage,
  prepareImagePayload: prepareImagesLocal,
});

/**
 * OpenAI Strategy Functions
 *
 * Note: null values mean that the strategy is not supported.
 * */
const openAIStrategy = () => ({
  /** @type {typeof saveFileFromURL | null} */
  saveURL: null,
  /** @type {typeof getLocalFileURL | null} */
  getFileURL: null,
  /** @type {typeof saveLocalBuffer | null} */
  saveBuffer: null,
  /** @type {typeof processLocalAvatar | null} */
  processAvatar: null,
  /** @type {typeof uploadLocalImage | null} */
  handleImageUpload: null,
  /** @type {typeof prepareImagesLocal | null} */
  prepareImagePayload: null,
  deleteFile: deleteOpenAIFile,
  handleFileUpload: uploadOpenAIFile,
});

// Strategy Selector
const getStrategyFunctions = (fileSource) => {
  if (fileSource === FileSources.firebase) {
    return firebaseStrategy();
  } else if (fileSource === FileSources.local) {
    return localStrategy();
  } else if (fileSource === FileSources.openai) {
    return openAIStrategy();
  } else {
    throw new Error('Invalid file source');
  }
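A short sketch of the strategy-selector pattern in use (illustrative; only getStrategyFunctions, FileSources, and the null-capability convention come from the code above):

// Pick the storage backend at runtime, then call only the capabilities it implements.
const { getStrategyFunctions } = require('./strategies');
const { FileSources } = require('librechat-data-provider');

async function saveWithConfiguredStrategy(req, buffer, fileName) {
  const strategy = getStrategyFunctions(req.app.locals.fileStrategy ?? FileSources.local);
  if (!strategy.saveBuffer) {
    // e.g. the OpenAI strategy only exposes deleteFile/handleFileUpload
    throw new Error('saveBuffer is not supported by the configured file source');
  }
  return await strategy.saveBuffer({ userId: req.user.id, fileName, buffer });
}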

@@ -69,8 +69,33 @@ const fetchModels = async ({
      await cache.set(name, endpointTokenConfig);
    }
    models = input.data.map((item) => item.id);
  } catch (err) {
    logger.error(`Failed to fetch models from ${azure ? 'Azure ' : ''}${name} API`, err);
  } catch (error) {
    const logMessage = `Failed to fetch models from ${azure ? 'Azure ' : ''}${name} API`;
    if (error.response) {
      logger.error(
        `${logMessage} The request was made and the server responded with a status code that falls out of the range of 2xx: ${
          error.message ? error.message : ''
        }`,
        {
          headers: error.response.headers,
          status: error.response.status,
          data: error.response.data,
        },
      );
    } else if (error.request) {
      logger.error(
        `${logMessage} The request was made but no response was received: ${
          error.message ? error.message : ''
        }`,
        {
          request: error.request,
        },
      );
    } else {
      logger.error(`${logMessage} Something happened in setting up the request`, {
        message: error.message ? error.message : '',
      });
    }
  }

  return models;

@@ -131,6 +156,9 @@ const fetchOpenAIModels = async (opts, _models = []) => {
  if (baseURL === openaiBaseURL) {
    const regex = /(text-davinci-003|gpt-)/;
    models = models.filter((model) => regex.test(model));
    const instructModels = models.filter((model) => model.includes('instruct'));
    const otherModels = models.filter((model) => !model.includes('instruct'));
    models = otherModels.concat(instructModels);
  }

  await modelsCache.set(baseURL, models);

@@ -147,7 +175,11 @@ const fetchOpenAIModels = async (opts, _models = []) => {
 * @param {boolean} [opts.plugins=false] - Whether to fetch models from the plugins.
 */
const getOpenAIModels = async (opts) => {
  let models = defaultModels.openAI;
  let models = defaultModels[EModelEndpoint.openAI];

  if (opts.assistants) {
    models = defaultModels[EModelEndpoint.assistants];
  }

  if (opts.plugins) {
    models = models.filter(

@@ -161,7 +193,9 @@ const getOpenAIModels = async (opts) => {
  }

  let key;
  if (opts.azure) {
  if (opts.assistants) {
    key = 'ASSISTANTS_MODELS';
  } else if (opts.azure) {
    key = 'AZURE_OPENAI_MODELS';
  } else if (opts.plugins) {
    key = 'PLUGIN_MODELS';

@@ -178,6 +212,10 @@ const getOpenAIModels = async (opts) => {
    return models;
  }

  if (opts.assistants) {
    return models;
  }

  return await fetchOpenAIModels(opts, models);
};
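An illustrative sketch of how the new assistants option changes model resolution (the env value is an assumption; only the option name and the ASSISTANTS_MODELS key come from the diff, and the env lookup itself happens in code outside these hunks):

// Hypothetical .env-style override; 'ASSISTANTS_MODELS' is the key selected above.
process.env.ASSISTANTS_MODELS = 'gpt-4-1106-preview,gpt-3.5-turbo-1106';

const { getOpenAIModels } = require('~/server/services/ModelService');

async function listAssistantModels() {
  // With opts.assistants set, the function returns before fetchOpenAIModels is called.
  return await getOpenAIModels({ assistants: true });
}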

@@ -210,3 +210,49 @@ describe('getOpenAIModels with mocked config', () => {
    expect(models).toContain('some-default-model');
  });
});

describe('getOpenAIModels sorting behavior', () => {
  beforeEach(() => {
    axios.get.mockResolvedValue({
      data: {
        data: [
          { id: 'gpt-3.5-turbo-instruct-0914' },
          { id: 'gpt-3.5-turbo-instruct' },
          { id: 'gpt-3.5-turbo' },
          { id: 'gpt-4-0314' },
          { id: 'gpt-4-turbo-preview' },
        ],
      },
    });
  });

  it('ensures instruct models are listed last', async () => {
    const models = await getOpenAIModels({ user: 'user456' });

    // Check if the last model is an "instruct" model
    expect(models[models.length - 1]).toMatch(/instruct/);

    // Check if the "instruct" models are placed at the end
    const instructIndexes = models
      .map((model, index) => (model.includes('instruct') ? index : -1))
      .filter((index) => index !== -1);
    const nonInstructIndexes = models
      .map((model, index) => (!model.includes('instruct') ? index : -1))
      .filter((index) => index !== -1);

    expect(Math.max(...nonInstructIndexes)).toBeLessThan(Math.min(...instructIndexes));

    const expectedOrder = [
      'gpt-3.5-turbo',
      'gpt-4-0314',
      'gpt-4-turbo-preview',
      'gpt-3.5-turbo-instruct-0914',
      'gpt-3.5-turbo-instruct',
    ];
    expect(models).toEqual(expectedOrder);
  });

  afterEach(() => {
    jest.clearAllMocks();
  });
});

@@ -90,8 +90,7 @@ const updateUserPluginAuth = async (userId, authField, pluginKey, value) => {

const deleteUserPluginAuth = async (userId, authField) => {
  try {
    const response = await PluginAuth.deleteOne({ userId, authField });
    return response;
    return await PluginAuth.deleteOne({ userId, authField });
  } catch (err) {
    logger.error('[deleteUserPluginAuth]', err);
    return err;

@@ -44,15 +44,23 @@ class RunManager {
   */
  async fetchRunSteps({ openai, thread_id, run_id, runStatus, final = false }) {
    // const { data: steps, first_id, last_id, has_more } = await openai.beta.threads.runs.steps.list(thread_id, run_id);
    const { data: _steps } = await openai.beta.threads.runs.steps.list(thread_id, run_id);
    const { data: _steps } = await openai.beta.threads.runs.steps.list(
      thread_id,
      run_id,
      {},
      {
        timeout: 3000,
        maxRetries: 5,
      },
    );
    const steps = _steps.sort((a, b) => a.created_at - b.created_at);
    for (const [i, step] of steps.entries()) {
      if (this.seenSteps.has(step.id)) {
      if (!final && this.seenSteps.has(`${step.id}-${step.status}`)) {
        continue;
      }

      const isLast = i === steps.length - 1;
      this.seenSteps.add(step.id);
      this.seenSteps.add(`${step.id}-${step.status}`);
      this.stepsByStatus[runStatus] = this.stepsByStatus[runStatus] || [];

      const currentStepPromise = (async () => {

@@ -64,6 +72,13 @@ class RunManager {
        return await currentStepPromise;
      }

      if (step.type === 'tool_calls') {
        await currentStepPromise;
      }
      if (step.type === 'message_creation' && step.status === 'completed') {
        await currentStepPromise;
      }

      this.lastStepPromiseByStatus[runStatus] = currentStepPromise;
      this.stepsByStatus[runStatus].push(currentStepPromise);
    }

@@ -79,7 +94,7 @@ class RunManager {
   */
  async handleStep({ step, runStatus, final, isLast }) {
    if (this.handlers[runStatus]) {
      return this.handlers[runStatus]({ step, final, isLast });
      return await this.handlers[runStatus]({ step, final, isLast });
    }

    if (final && isLast && this.handlers['final']) {

api/server/services/Runs/handle.js (new file, 270 lines)
@@ -0,0 +1,270 @@
const { RunStatus, defaultOrderQuery, CacheKeys } = require('librechat-data-provider');
const getLogStores = require('~/cache/getLogStores');
const { retrieveRun } = require('./methods');
const RunManager = require('./RunManager');
const { logger } = require('~/config');

async function withTimeout(promise, timeoutMs, timeoutMessage) {
  let timeoutHandle;

  const timeoutPromise = new Promise((_, reject) => {
    timeoutHandle = setTimeout(() => {
      logger.debug(timeoutMessage);
      reject(new Error('Operation timed out'));
    }, timeoutMs);
  });

  try {
    return await Promise.race([promise, timeoutPromise]);
  } finally {
    clearTimeout(timeoutHandle);
  }
}
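A minimal sketch of the withTimeout race pattern above (the wrapped SDK call and the 3-second budget are assumptions for illustration; withTimeout is module-local in handle.js, so this sketch assumes same-file scope):

// Any promise can be raced against the timeout; the timer is always cleared in finally.
async function retrieveWithBudget(openai, thread_id, run_id) {
  try {
    return await withTimeout(
      openai.beta.threads.runs.retrieve(thread_id, run_id),
      3000,
      'Run retrieval timed out after 3000 ms',
    );
  } catch (error) {
    // 'Operation timed out' when the timer wins the race; otherwise the original rejection
    return null;
  }
}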
/**
 * Creates a run on a thread using the OpenAI API.
 *
 * @param {Object} params - The parameters for creating a run.
 * @param {OpenAIClient} params.openai - The OpenAI client instance.
 * @param {string} params.thread_id - The ID of the thread to run.
 * @param {Object} params.body - The body of the request to create a run.
 * @param {string} params.body.assistant_id - The ID of the assistant to use for this run.
 * @param {string} [params.body.model] - Optional. The ID of the model to be used for this run.
 * @param {string} [params.body.instructions] - Optional. Override the default system message of the assistant.
 * @param {string} [params.body.additional_instructions] - Optional. Appends additional instructions
 *   at the end of the instructions for the run. This is useful for modifying
 *   the behavior on a per-run basis without overriding other instructions.
 * @param {Object[]} [params.body.tools] - Optional. Override the tools the assistant can use for this run.
 * @param {string[]} [params.body.file_ids] - Optional.
 *   List of File IDs the assistant can use for this run.
 *
 *   **Note:** The API seems to prefer files added to messages, not runs.
 * @param {Object} [params.body.metadata] - Optional. Metadata for the run.
 * @return {Promise<Run>} A promise that resolves to the created run object.
 */
async function createRun({ openai, thread_id, body }) {
  return await openai.beta.threads.runs.create(thread_id, body);
}

/**
 * Delays the execution for a specified number of milliseconds.
 *
 * @param {number} ms - The number of milliseconds to delay.
 * @return {Promise<void>} A promise that resolves after the specified delay.
 */
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

/**
 * Waits for a run to complete by repeatedly checking its status. It uses a RunManager instance to fetch and manage run steps based on the run status.
 *
 * @param {Object} params - The parameters for the waitForRun function.
 * @param {OpenAIClient} params.openai - The OpenAI client instance.
 * @param {string} params.run_id - The ID of the run to wait for.
 * @param {string} params.thread_id - The ID of the thread associated with the run.
 * @param {RunManager} params.runManager - The RunManager instance to manage run steps.
 * @param {number} [params.pollIntervalMs=750] - The interval for polling the run status; default is 750 milliseconds.
 * @param {number} [params.timeout=180000] - The period to wait until timing out polling; default is 3 minutes (in ms).
 * @return {Promise<Run>} A promise that resolves to the last fetched run object.
 */
async function waitForRun({
  openai,
  run_id,
  thread_id,
  runManager,
  pollIntervalMs = 750,
  timeout = 60000 * 3,
}) {
  let timeElapsed = 0;
  let run;

  const cache = getLogStores(CacheKeys.ABORT_KEYS);
  const cacheKey = `${openai.req.user.id}:${openai.responseMessage.conversationId}`;

  let i = 0;
  let lastSeenStatus = null;
  const runIdLog = `run_id: ${run_id}`;
  const runInfo = `user: ${openai.req.user.id} | thread_id: ${thread_id} | ${runIdLog}`;
  const raceTimeoutMs = 3000;
  let maxRetries = 5;
  let attempt = 0;
  while (timeElapsed < timeout) {
    i++;
    logger.debug(`[heartbeat ${i}] ${runIdLog} | Retrieving run status...`);
    let updatedRun;

    const startTime = Date.now();
    while (!updatedRun && attempt < maxRetries) {
      try {
        updatedRun = await withTimeout(
          retrieveRun({ thread_id, run_id, timeout: raceTimeoutMs, openai }),
          raceTimeoutMs,
          `[heartbeat ${i}] ${runIdLog} | Run retrieval timed out at ${timeElapsed} ms. Trying again (attempt ${
            attempt + 1
          } of ${maxRetries})...`,
        );
        attempt++;
      } catch (error) {
        logger.warn(`${runIdLog} | Error retrieving run status: ${error}`);
      }
    }
    const endTime = Date.now();
    logger.debug(
      `[heartbeat ${i}] ${runIdLog} | Elapsed run retrieval time: ${endTime - startTime}`,
    );
    if (!updatedRun) {
      const errorMessage = `[waitForRun] ${runIdLog} | Run retrieval failed after ${maxRetries} attempts`;
      throw new Error(errorMessage);
    }
    run = updatedRun;
    attempt = 0;
    const runStatus = `${runInfo} | status: ${run.status}`;

    if (run.status !== lastSeenStatus) {
      logger.debug(`[${run.status}] ${runInfo}`);
      lastSeenStatus = run.status;
    }

    logger.debug(`[heartbeat ${i}] ${runStatus}`);

    let cancelStatus;
    try {
      const timeoutMessage = `[heartbeat ${i}] ${runIdLog} | Cancel Status check operation timed out.`;
      cancelStatus = await withTimeout(cache.get(cacheKey), raceTimeoutMs, timeoutMessage);
    } catch (error) {
      logger.warn(`Error retrieving cancel status: ${error}`);
    }

    if (cancelStatus === 'cancelled') {
      logger.warn(`[waitForRun] ${runStatus} | RUN CANCELLED`);
      throw new Error('Run cancelled');
    }

    if (![RunStatus.IN_PROGRESS, RunStatus.QUEUED].includes(run.status)) {
      logger.debug(`[FINAL] ${runInfo} | status: ${run.status}`);
      await runManager.fetchRunSteps({
        openai,
        thread_id: thread_id,
        run_id: run_id,
        runStatus: run.status,
        final: true,
      });
      break;
    }

    // may use in future; for now, just fetch from the final status
    await runManager.fetchRunSteps({
      openai,
      thread_id: thread_id,
      run_id: run_id,
      runStatus: run.status,
    });

    await sleep(pollIntervalMs);
    timeElapsed += pollIntervalMs;
  }

  if (timeElapsed >= timeout) {
    const timeoutMessage = `[waitForRun] ${runInfo} | status: ${run.status} | timed out after ${timeout} ms`;
    logger.warn(timeoutMessage);
    throw new Error(timeoutMessage);
  }

  return run;
}
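A condensed sketch of the polling flow above, from run creation to completion (the handler bodies are placeholders and the wrapper function is an assumption; createRun, waitForRun, and RunManager are the real exports of this module):

// Illustrative only: create a run, then poll it while streaming step updates through handlers.
// waitForRun also reads openai.req.user.id and openai.responseMessage.conversationId for its cancel-key lookup.
const { createRun, waitForRun } = require('./handle');
const RunManager = require('./RunManager');

async function runAssistantOnce({ openai, thread_id, assistant_id }) {
  const runManager = new RunManager({
    in_progress: async ({ step }) => {
      // e.g. forward tool_calls / message_creation steps to the client as they arrive
    },
    final: async ({ step, runStatus }) => {
      // e.g. flush remaining steps once the run leaves the queued/in_progress states
    },
  });

  const run = await createRun({ openai, thread_id, body: { assistant_id } });
  return await waitForRun({ openai, run_id: run.id, thread_id, runManager });
}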
/**
 * Retrieves all steps of a run.
 *
 * @deprecated: Steps are handled with runAssistant now.
 * @param {Object} params - The parameters for the retrieveRunSteps function.
 * @param {OpenAIClient} params.openai - The OpenAI client instance.
 * @param {string} params.thread_id - The ID of the thread associated with the run.
 * @param {string} params.run_id - The ID of the run to retrieve steps for.
 * @return {Promise<RunStep[]>} A promise that resolves to an array of RunStep objects.
 */
async function _retrieveRunSteps({ openai, thread_id, run_id }) {
  const runSteps = await openai.beta.threads.runs.steps.list(thread_id, run_id);
  return runSteps;
}

/**
 * Initializes a RunManager with handlers, then invokes waitForRun to monitor and manage an OpenAI run.
 *
 * @deprecated Use runAssistant instead.
 * @param {Object} params - The parameters for managing and monitoring the run.
 * @param {OpenAIClient} params.openai - The OpenAI client instance.
 * @param {string} params.run_id - The ID of the run to manage and monitor.
 * @param {string} params.thread_id - The ID of the thread associated with the run.
 * @return {Promise<Object>} A promise that resolves to an object containing the run and managed steps.
 */
async function _handleRun({ openai, run_id, thread_id }) {
  let steps = [];
  let messages = [];
  const runManager = new RunManager({
    // 'in_progress': async ({ step, final, isLast }) => {
    //   // Define logic for handling steps with 'in_progress' status
    // },
    // 'queued': async ({ step, final, isLast }) => {
    //   // Define logic for handling steps with 'queued' status
    // },
    final: async ({ step, runStatus, stepsByStatus }) => {
      console.log(`Final step for ${run_id} with status ${runStatus}`);
      console.dir(step, { depth: null });

      const promises = [];
      promises.push(openai.beta.threads.messages.list(thread_id, defaultOrderQuery));

      // const finalSteps = stepsByStatus[runStatus];
      // for (const stepPromise of finalSteps) {
      //   promises.push(stepPromise);
      // }

      // loop across all statuses
      for (const [_status, stepsPromises] of Object.entries(stepsByStatus)) {
        promises.push(...stepsPromises);
      }

      const resolved = await Promise.all(promises);
      const res = resolved.shift();
      messages = res.data.filter((msg) => msg.run_id === run_id);
      resolved.push(step);
      steps = resolved;
    },
  });

  const run = await waitForRun({
    openai,
    run_id,
    thread_id,
    runManager,
    pollIntervalMs: 750,
    timeout: 60000,
  });
  const actions = [];
  if (run.required_action) {
    const { submit_tool_outputs } = run.required_action;
    submit_tool_outputs.tool_calls.forEach((item) => {
      const functionCall = item.function;
      const args = JSON.parse(functionCall.arguments);
      actions.push({
        tool: functionCall.name,
        toolInput: args,
        toolCallId: item.id,
        run_id,
        thread_id,
      });
    });
  }

  return { run, steps, messages, actions };
}

module.exports = {
  sleep,
  createRun,
  waitForRun,
  // _handleRun,
  // retrieveRunSteps,
};

api/server/services/Runs/index.js (new file, 9 lines)
@@ -0,0 +1,9 @@
const handle = require('./handle');
const methods = require('./methods');
const RunManager = require('./RunManager');

module.exports = {
  ...handle,
  ...methods,
  RunManager,
};
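The barrel above lets callers pull run helpers from one path; a one-line usage sketch (illustrative):

// Everything re-exported by the barrel, including the RunManager class itself.
const { createRun, waitForRun, retrieveRun, RunManager } = require('~/server/services/Runs');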
api/server/services/Runs/methods.js (new file, 76 lines)

@@ -0,0 +1,76 @@
const axios = require('axios');
const { logger } = require('~/config');

/**
 * @typedef {Object} RetrieveOptions
 * @property {string} thread_id - The ID of the thread to retrieve.
 * @property {string} run_id - The ID of the run to retrieve.
 * @property {number} [timeout] - Optional timeout for the API call.
 * @property {number} [maxRetries] - TODO: not yet implemented; Optional maximum number of retries for the API call.
 * @property {OpenAIClient} openai - Configuration and credentials for OpenAI API access.
 */

/**
 * Asynchronously retrieves data from an API endpoint based on provided thread and run IDs.
 *
 * @param {RetrieveOptions} options - The options for the retrieve operation.
 * @returns {Promise<Object>} The data retrieved from the API.
 */
async function retrieveRun({ thread_id, run_id, timeout, openai }) {
  const { apiKey, baseURL, httpAgent, organization } = openai;
  const url = `${baseURL}/threads/${thread_id}/runs/${run_id}`;

  const headers = {
    Authorization: `Bearer ${apiKey}`,
    'OpenAI-Beta': 'assistants=v1',
  };

  if (organization) {
    headers['OpenAI-Organization'] = organization;
  }

  try {
    const axiosConfig = {
      headers: headers,
      timeout: timeout,
    };

    if (httpAgent) {
      axiosConfig.httpAgent = httpAgent;
      axiosConfig.httpsAgent = httpAgent;
    }

    const response = await axios.get(url, axiosConfig);
    return response.data;
  } catch (error) {
    const logMessage = '[retrieveRun] Failed to retrieve run data:';
    if (error.response) {
      logger.error(
        `${logMessage} The request was made and the server responded with a status code that falls out of the range of 2xx: ${
          error.message ? error.message : ''
        }`,
        {
          headers: error.response.headers,
          status: error.response.status,
          data: error.response.data,
        },
      );
    } else if (error.request) {
      logger.error(
        `${logMessage} The request was made but no response was received: ${
          error.message ? error.message : ''
        }`,
        {
          request: error.request,
        },
      );
    } else {
      logger.error(`${logMessage} Something happened in setting up the request`, {
        message: error.message ? error.message : '',
      });
    }
    throw error;
  }
}

module.exports = { retrieveRun };
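A short sketch of calling retrieveRun directly (illustrative; the wrapper is an assumption, and the function only reads apiKey, baseURL, httpAgent, and organization from the client object):

// Bypasses the SDK and hits the REST endpoint with a hard axios timeout.
const { retrieveRun } = require('~/server/services/Runs');

async function pollOnce(openai, thread_id, run_id) {
  // openai here only needs { apiKey, baseURL, httpAgent?, organization? }
  const run = await retrieveRun({ thread_id, run_id, timeout: 3000, openai });
  return run.status; // e.g. 'queued', 'in_progress', 'completed'
}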
api/server/services/Threads/index.js (new file, 5 lines)

@@ -0,0 +1,5 @@
const manage = require('./manage');

module.exports = {
  ...manage,
};

api/server/services/Threads/manage.js (new file, 495 lines)
@ -0,0 +1,495 @@
|
|||
const { v4 } = require('uuid');
|
||||
const {
|
||||
EModelEndpoint,
|
||||
Constants,
|
||||
defaultOrderQuery,
|
||||
ContentTypes,
|
||||
} = require('librechat-data-provider');
|
||||
const { recordMessage, getMessages } = require('~/models/Message');
|
||||
const { saveConvo } = require('~/models/Conversation');
|
||||
const spendTokens = require('~/models/spendTokens');
|
||||
const { countTokens } = require('~/server/utils');
|
||||
|
||||
/**
|
||||
* Initializes a new thread or adds messages to an existing thread.
|
||||
*
|
||||
* @param {Object} params - The parameters for initializing a thread.
|
||||
* @param {OpenAIClient} params.openai - The OpenAI client instance.
|
||||
* @param {Object} params.body - The body of the request.
|
||||
* @param {ThreadMessage[]} params.body.messages - A list of messages to start the thread with.
|
||||
* @param {Object} [params.body.metadata] - Optional metadata for the thread.
|
||||
* @param {string} [params.thread_id] - Optional existing thread ID. If provided, a message will be added to this thread.
|
||||
* @return {Promise<Thread>} A promise that resolves to the newly created thread object or the updated thread object.
|
||||
*/
|
||||
async function initThread({ openai, body, thread_id: _thread_id }) {
|
||||
let thread = {};
|
||||
const messages = [];
|
||||
if (_thread_id) {
|
||||
const message = await openai.beta.threads.messages.create(_thread_id, body.messages[0]);
|
||||
messages.push(message);
|
||||
} else {
|
||||
thread = await openai.beta.threads.create(body);
|
||||
}
|
||||
|
||||
const thread_id = _thread_id ?? thread.id;
|
||||
return { messages, thread_id, ...thread };
|
||||
}
|
||||
|
||||
/**
|
||||
* Saves a user message to the DB in the Assistants endpoint format.
|
||||
*
|
||||
* @param {Object} params - The parameters of the user message
|
||||
* @param {string} params.user - The user's ID.
|
||||
* @param {string} params.text - The user's prompt.
|
||||
* @param {string} params.messageId - The user message Id.
|
||||
* @param {string} params.model - The model used by the assistant.
|
||||
* @param {string} params.assistant_id - The current assistant Id.
|
||||
* @param {string} params.thread_id - The thread Id.
|
||||
* @param {string} params.conversationId - The message's conversationId
|
||||
* @param {string} [params.parentMessageId] - Optional if initial message.
|
||||
* Defaults to Constants.NO_PARENT.
|
||||
* @param {string} [params.instructions] - Optional: from preset for `instructions` field.
|
||||
* Overrides the instructions of the assistant.
|
||||
* @param {string} [params.promptPrefix] - Optional: from preset for `additional_instructions` field.
|
||||
* @param {import('librechat-data-provider').TFile[]} [params.files] - Optional. List of Attached File Objects.
|
||||
* @param {string[]} [params.file_ids] - Optional. List of File IDs attached to the userMessage.
|
||||
* @return {Promise<Run>} A promise that resolves to the created run object.
|
||||
*/
|
||||
async function saveUserMessage(params) {
|
||||
const tokenCount = await countTokens(params.text);
|
||||
|
||||
// todo: do this on the frontend
|
||||
// const { file_ids = [] } = params;
|
||||
// let content;
|
||||
// if (file_ids.length) {
|
||||
// content = [
|
||||
// {
|
||||
// value: params.text,
|
||||
// },
|
||||
// ...(
|
||||
// file_ids
|
||||
// .filter(f => f)
|
||||
// .map((file_id) => ({
|
||||
// file_id,
|
||||
// }))
|
||||
// ),
|
||||
// ];
|
||||
// }
|
||||
|
||||
const userMessage = {
|
||||
user: params.user,
|
||||
endpoint: EModelEndpoint.assistants,
|
||||
messageId: params.messageId,
|
||||
conversationId: params.conversationId,
|
||||
parentMessageId: params.parentMessageId ?? Constants.NO_PARENT,
|
||||
/* For messages, use the assistant_id instead of model */
|
||||
model: params.assistant_id,
|
||||
thread_id: params.thread_id,
|
||||
sender: 'User',
|
||||
text: params.text,
|
||||
isCreatedByUser: true,
|
||||
tokenCount,
|
||||
};
|
||||
|
||||
const convo = {
|
||||
endpoint: EModelEndpoint.assistants,
|
||||
conversationId: params.conversationId,
|
||||
promptPrefix: params.promptPrefix,
|
||||
instructions: params.instructions,
|
||||
assistant_id: params.assistant_id,
|
||||
model: params.model,
|
||||
};
|
||||
|
||||
if (params.files?.length) {
|
||||
userMessage.files = params.files.map(({ file_id }) => ({ file_id }));
|
||||
convo.file_ids = params.file_ids;
|
||||
}
|
||||
|
||||
const message = await recordMessage(userMessage);
|
||||
await saveConvo(params.user, convo);
|
||||
|
||||
return message;
|
||||
}
|
||||
|
||||
/**
|
||||
* Saves an Assistant message to the DB in the Assistants endpoint format.
|
||||
*
|
||||
* @param {Object} params - The parameters of the Assistant message
|
||||
* @param {string} params.user - The user's ID.
|
||||
* @param {string} params.messageId - The message Id.
|
||||
* @param {string} params.assistant_id - The assistant Id.
|
||||
* @param {string} params.thread_id - The thread Id.
|
||||
* @param {string} params.model - The model used by the assistant.
|
||||
* @param {ContentPart[]} params.content - The message content parts.
|
||||
* @param {string} params.conversationId - The message's conversationId
|
||||
* @param {string} params.parentMessageId - The latest user message that triggered this response.
|
||||
* @param {string} [params.instructions] - Optional: from preset for `instructions` field.
|
||||
* Overrides the instructions of the assistant.
|
||||
* @param {string} [params.promptPrefix] - Optional: from preset for `additional_instructions` field.
|
||||
* @return {Promise<Run>} A promise that resolves to the created run object.
|
||||
*/
|
||||
async function saveAssistantMessage(params) {
|
||||
const text = params.content.reduce((acc, part) => {
|
||||
if (!part.value) {
|
||||
return acc;
|
||||
}
|
||||
|
||||
return acc + ' ' + part.value;
|
||||
}, '');
|
||||
|
||||
// const tokenCount = // TODO: need to count each content part
|
||||
|
||||
const message = await recordMessage({
|
||||
user: params.user,
|
||||
endpoint: EModelEndpoint.assistants,
|
||||
messageId: params.messageId,
|
||||
conversationId: params.conversationId,
|
||||
parentMessageId: params.parentMessageId,
|
||||
thread_id: params.thread_id,
|
||||
/* For messages, use the assistant_id instead of model */
|
||||
model: params.assistant_id,
|
||||
content: params.content,
|
||||
sender: 'Assistant',
|
||||
isCreatedByUser: false,
|
||||
text: text.trim(),
|
||||
// tokenCount,
|
||||
});
|
||||
|
||||
await saveConvo(params.user, {
|
||||
endpoint: EModelEndpoint.assistants,
|
||||
conversationId: params.conversationId,
|
||||
promptPrefix: params.promptPrefix,
|
||||
instructions: params.instructions,
|
||||
assistant_id: params.assistant_id,
|
||||
model: params.model,
|
||||
});
|
||||
|
||||
return message;
|
||||
}
|
||||
|
||||
/**
|
||||
* Records LibreChat messageId to all response messages' metadata
|
||||
*
|
||||
* @param {Object} params - The parameters for initializing a thread.
|
||||
* @param {OpenAIClient} params.openai - The OpenAI client instance.
|
||||
* @param {string} params.thread_id - Response thread ID.
|
||||
* @param {string} params.messageId - The response `messageId` generated by LibreChat.
|
||||
* @param {StepMessage[] | Message[]} params.messages - A list of messages to start the thread with.
|
||||
* @return {Promise<ThreadMessage[]>} A promise that resolves to the updated messages
|
||||
*/
|
||||
async function addThreadMetadata({ openai, thread_id, messageId, messages }) {
|
||||
const promises = [];
|
||||
for (const message of messages) {
|
||||
promises.push(
|
||||
openai.beta.threads.messages.update(thread_id, message.id, {
|
||||
metadata: {
|
||||
messageId,
|
||||
},
|
||||
}),
|
||||
);
|
||||
}
|
||||
|
||||
return await Promise.all(promises);
|
||||
}
|
||||
|
||||
/**
|
||||
* Synchronizes LibreChat messages to Thread Messages.
|
||||
* Updates the LibreChat DB with any missing Thread Messages and
|
||||
* updates the missing Thread Messages' metadata with their corresponding db messageId's.
|
||||
*
|
||||
* Also updates the existing conversation's file_ids with any new file_ids.
|
||||
*
|
||||
* @param {Object} params - The parameters for synchronizing messages.
|
||||
* @param {OpenAIClient} params.openai - The OpenAI client instance.
|
||||
* @param {TMessage[]} params.dbMessages - The LibreChat DB messages.
|
||||
* @param {ThreadMessage[]} params.apiMessages - The thread messages from the API.
|
||||
* @param {string} params.conversationId - The current conversation ID.
|
||||
* @param {string} params.thread_id - The current thread ID.
|
||||
* @param {string} [params.assistant_id] - The current assistant ID.
|
||||
* @return {Promise<TMessage[]>} A promise that resolves to the updated messages
|
||||
*/
|
||||
async function syncMessages({
|
||||
openai,
|
||||
apiMessages,
|
||||
dbMessages,
|
||||
conversationId,
|
||||
thread_id,
|
||||
assistant_id,
|
||||
}) {
|
||||
let result = [];
|
||||
let dbMessageMap = new Map(dbMessages.map((msg) => [msg.messageId, msg]));
|
||||
|
||||
const modifyPromises = [];
|
||||
const recordPromises = [];
|
||||
|
||||
/**
|
||||
*
|
||||
* Modify API message and save newMessage to DB
|
||||
*
|
||||
* @param {Object} params - The parameters object
|
||||
* @param {TMessage} params.dbMessage
|
||||
* @param {dbMessage} params.apiMessage
|
||||
*/
|
||||
  const processNewMessage = async ({ dbMessage, apiMessage }) => {
    recordPromises.push(recordMessage({ ...dbMessage, user: openai.req.user.id }));

    if (!apiMessage.id.includes('msg_')) {
      return;
    }

    if (dbMessage.aggregateMessages?.length > 1) {
      modifyPromises.push(
        addThreadMetadata({
          openai,
          thread_id,
          messageId: dbMessage.messageId,
          messages: dbMessage.aggregateMessages,
        }),
      );
      return;
    }

    modifyPromises.push(
      openai.beta.threads.messages.update(thread_id, apiMessage.id, {
        metadata: {
          messageId: dbMessage.messageId,
        },
      }),
    );
  };

  let lastMessage = null;

  for (let i = 0; i < apiMessages.length; i++) {
    const apiMessage = apiMessages[i];

    // Check if the message exists in the database based on metadata
    const dbMessageId = apiMessage.metadata && apiMessage.metadata.messageId;
    let dbMessage = dbMessageMap.get(dbMessageId);

    if (dbMessage) {
      // If message exists in DB, use its messageId and update parentMessageId
      dbMessage.parentMessageId = lastMessage ? lastMessage.messageId : Constants.NO_PARENT;
      lastMessage = dbMessage;
      result.push(dbMessage);
      continue;
    }

    if (apiMessage.role === 'assistant' && lastMessage && lastMessage.role === 'assistant') {
      // Aggregate assistant messages
      lastMessage.content = [...lastMessage.content, ...apiMessage.content];
      lastMessage.files = [...(lastMessage.files ?? []), ...(apiMessage.files ?? [])];
      lastMessage.aggregateMessages.push({ id: apiMessage.id });
    } else {
      // Handle new or missing message
      const newMessage = {
        thread_id,
        conversationId,
        messageId: v4(),
        endpoint: EModelEndpoint.assistants,
        parentMessageId: lastMessage ? lastMessage.messageId : Constants.NO_PARENT,
        role: apiMessage.role,
        isCreatedByUser: apiMessage.role === 'user',
        // TODO: process generated files in content parts
        content: apiMessage.content,
        aggregateMessages: [{ id: apiMessage.id }],
        model: apiMessage.role === 'user' ? null : apiMessage.assistant_id,
        user: openai.req.user.id,
      };

      if (apiMessage.file_ids?.length) {
        // TODO: retrieve file objects from API
        newMessage.files = apiMessage.file_ids.map((file_id) => ({ file_id }));
      }

      /* Assign assistant_id if defined */
      if (assistant_id && apiMessage.role === 'assistant' && !newMessage.model) {
        apiMessage.model = assistant_id;
        newMessage.model = assistant_id;
      }

      result.push(newMessage);
      lastMessage = newMessage;

      if (apiMessage.role === 'user') {
        processNewMessage({ dbMessage: newMessage, apiMessage });
        continue;
      }
    }

    const nextMessage = apiMessages[i + 1];
    const processAssistant = !nextMessage || nextMessage.role === 'user';

    if (apiMessage.role === 'assistant' && processAssistant) {
      processNewMessage({ dbMessage: lastMessage, apiMessage });
    }
  }

  const attached_file_ids = apiMessages.reduce((acc, msg) => {
    if (msg.role === 'user' && msg.file_ids?.length) {
      return [...acc, ...msg.file_ids];
    }

    return acc;
  }, []);

  await Promise.all(modifyPromises);
  await Promise.all(recordPromises);

  await saveConvo(openai.req.user.id, {
    conversationId,
    file_ids: attached_file_ids,
  });

  return result;
}

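/*
 * Illustrative sketch of the aggregation behavior above (IDs and content parts
 * are hypothetical, not from the diff). Two consecutive assistant thread
 * messages collapse into a single TMessage whose LibreChat messageId is then
 * recorded via addThreadMetadata rather than messages.update:
 *
 *   const apiMessages = [
 *     { id: 'msg_1', role: 'user', content: [userPart], metadata: {} },
 *     { id: 'msg_2', role: 'assistant', content: [part1], assistant_id: 'asst_abc', metadata: {} },
 *     { id: 'msg_3', role: 'assistant', content: [part2], assistant_id: 'asst_abc', metadata: {} },
 *   ];
 *   const synced = await syncMessages({ openai, apiMessages, dbMessages: [], conversationId, thread_id });
 *   // synced has 2 entries: the user message, and one assistant message with
 *   // content [part1, part2] and aggregateMessages [{ id: 'msg_2' }, { id: 'msg_3' }].
 */
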
/**
 * Maps messages to their corresponding steps. Steps with message creation will be paired with their messages,
 * while steps without message creation will be returned as is.
 *
 * @param {RunStep[]} steps - An array of steps from the run.
 * @param {Message[]} messages - An array of message objects.
 * @returns {(StepMessage | RunStep)[]} An array where each element is either a step with its corresponding message (StepMessage) or a step without a message (RunStep).
 */
function mapMessagesToSteps(steps, messages) {
  // Create a map of messages indexed by their IDs for efficient lookup
  const messageMap = messages.reduce((acc, msg) => {
    acc[msg.id] = msg;
    return acc;
  }, {});

  // Map each step to its corresponding message, or return the step as is if no message ID is present
  return steps
    .sort((a, b) => a.created_at - b.created_at)
    .map((step) => {
      const messageId = step.step_details?.message_creation?.message_id;

      if (messageId && messageMap[messageId]) {
        return { step, message: messageMap[messageId] };
      }
      return step;
    });
}

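/*
 * Illustrative sketch (hypothetical IDs): pairing run steps with the messages
 * they created, while tool-call steps pass through unchanged.
 *
 *   const steps = [
 *     { id: 'step_1', created_at: 1, step_details: { type: 'tool_calls', tool_calls: [toolCall] } },
 *     { id: 'step_2', created_at: 2, step_details: { type: 'message_creation', message_creation: { message_id: 'msg_1' } } },
 *   ];
 *   const messages = [{ id: 'msg_1', content: [textPart] }];
 *   mapMessagesToSteps(steps, messages);
 *   // => [ steps[0], { step: steps[1], message: messages[0] } ]
 */
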
/**
 * Checks for any missing messages; if missing,
 * synchronizes LibreChat messages to Thread Messages
 *
 * @param {Object} params - The parameters for checking message gaps.
 * @param {OpenAIClient} params.openai - The OpenAI client instance.
 * @param {string} [params.latestMessageId] - Optional: The latest message ID from LibreChat.
 * @param {string} params.thread_id - Response thread ID.
 * @param {string} params.run_id - Response Run ID.
 * @param {string} params.conversationId - LibreChat conversation ID.
 * @return {Promise<TMessage[]>} A promise that resolves to the updated messages
 */
async function checkMessageGaps({ openai, latestMessageId, thread_id, run_id, conversationId }) {
  const promises = [];
  promises.push(openai.beta.threads.messages.list(thread_id, defaultOrderQuery));
  promises.push(openai.beta.threads.runs.steps.list(thread_id, run_id));
  /** @type {[{ data: ThreadMessage[] }, { data: RunStep[] }]} */
  const [response, stepsResponse] = await Promise.all(promises);

  const steps = mapMessagesToSteps(stepsResponse.data, response.data);
  /** @type {ThreadMessage} */
  const currentMessage = {
    id: v4(),
    content: [],
    assistant_id: null,
    created_at: Math.floor(new Date().getTime() / 1000),
    object: 'thread.message',
    role: 'assistant',
    run_id,
    thread_id,
    metadata: {
      messageId: latestMessageId,
    },
  };

  for (const step of steps) {
    if (!currentMessage.assistant_id && step.assistant_id) {
      currentMessage.assistant_id = step.assistant_id;
    }
    if (step.message) {
      currentMessage.id = step.message.id;
      currentMessage.created_at = step.message.created_at;
      currentMessage.content = currentMessage.content.concat(step.message.content);
    } else if (step.step_details?.type === 'tool_calls' && step.step_details?.tool_calls?.length) {
      currentMessage.content = currentMessage.content.concat(
        step.step_details?.tool_calls.map((toolCall) => ({
          [ContentTypes.TOOL_CALL]: {
            ...toolCall,
            progress: 2,
          },
          type: ContentTypes.TOOL_CALL,
        })),
      );
    }
  }

  let addedCurrentMessage = false;
  const apiMessages = response.data.map((msg) => {
    if (msg.id === currentMessage.id) {
      addedCurrentMessage = true;
      return currentMessage;
    }
    return msg;
  });

  if (!addedCurrentMessage) {
    apiMessages.push(currentMessage);
  }

  const dbMessages = await getMessages({ conversationId });
  const assistant_id = dbMessages?.[0]?.model;

  const syncedMessages = await syncMessages({
    openai,
    dbMessages,
    apiMessages,
    thread_id,
    conversationId,
    assistant_id,
  });

  return Object.values(
    [...dbMessages, ...syncedMessages].reduce(
      (acc, message) => ({ ...acc, [message.messageId]: message }),
      {},
    ),
  );
}

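/*
 * Minimal usage sketch, assuming a run has just finished and the response
 * messageId is known (variable names are hypothetical):
 *
 *   const messages = await checkMessageGaps({
 *     openai,
 *     latestMessageId: responseMessageId,
 *     thread_id,
 *     run_id,
 *     conversationId,
 *   });
 *   // `messages` merges DB messages with any thread messages or tool-call steps
 *   // that were missing from LibreChat, deduplicated by messageId.
 */
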
/**
 * Records token usage for a given completion request.
 *
 * @param {Object} params - The parameters for recording token usage.
 * @param {number} params.prompt_tokens - The number of prompt tokens used.
 * @param {number} params.completion_tokens - The number of completion tokens used.
 * @param {string} params.model - The model used by the assistant run.
 * @param {string} params.user - The user's ID.
 * @param {string} params.conversationId - LibreChat conversation ID.
 * @return {Promise<void>} A promise that resolves once the token usage has been recorded.
 */
const recordUsage = async ({ prompt_tokens, completion_tokens, model, user, conversationId }) => {
  await spendTokens(
    {
      user,
      model,
      context: 'message',
      conversationId,
    },
    { promptTokens: prompt_tokens, completionTokens: completion_tokens },
  );
};

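/*
 * Minimal usage sketch, assuming `run.usage` comes from a completed run
 * retrieved via the API (field names mirror the OpenAI usage object; the
 * surrounding variables are hypothetical):
 *
 *   await recordUsage({
 *     ...run.usage, // { prompt_tokens, completion_tokens, total_tokens }
 *     model: run.model,
 *     user: openai.req.user.id,
 *     conversationId,
 *   });
 */
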
module.exports = {
  initThread,
  recordUsage,
  saveUserMessage,
  checkMessageGaps,
  addThreadMetadata,
  mapMessagesToSteps,
  saveAssistantMessage,
};

api/server/services/ToolService.js (new file, 317 lines)
@@ -0,0 +1,317 @@
const fs = require('fs');
const path = require('path');
const { StructuredTool } = require('langchain/tools');
const { zodToJsonSchema } = require('zod-to-json-schema');
const { Calculator } = require('langchain/tools/calculator');
const {
  ContentTypes,
  imageGenTools,
  openapiToFunction,
  validateAndParseOpenAPISpec,
  actionDelimiter,
} = require('librechat-data-provider');
const { loadActionSets, createActionTool } = require('./ActionService');
const { processFileURL } = require('~/server/services/Files/process');
const { loadTools } = require('~/app/clients/tools/util');
const { redactMessage } = require('~/config/parsers');
const { sleep } = require('./Runs/handle');
const { logger } = require('~/config');

/**
 * Loads and formats tools from the specified tool directory.
 *
 * The directory is scanned for JavaScript files, excluding any files in the filter set.
 * For each file, it attempts to load the file as a module and instantiate a class, if it's a subclass of `StructuredTool`.
 * Each tool instance is then formatted to be compatible with the OpenAI Assistant.
 * Additionally, instances of LangChain Tools are included in the result.
 *
 * @param {object} params - The parameters for the function.
 * @param {string} params.directory - The directory path where the tools are located.
 * @param {Set<string>} [params.filter=new Set()] - A set of filenames to exclude from loading.
 * @returns {Record<string, FunctionTool>} An object mapping each tool's plugin key to its instance.
 */
function loadAndFormatTools({ directory, filter = new Set() }) {
  const tools = [];
  /* Structured Tools Directory */
  const files = fs.readdirSync(directory);

  for (const file of files) {
    if (file.endsWith('.js') && !filter.has(file)) {
      const filePath = path.join(directory, file);
      let ToolClass = null;
      try {
        ToolClass = require(filePath);
      } catch (error) {
        logger.error(`[loadAndFormatTools] Error loading tool from ${filePath}:`, error);
        continue;
      }

      if (!ToolClass) {
        continue;
      }

      if (ToolClass.prototype instanceof StructuredTool) {
        /** @type {StructuredTool | null} */
        let toolInstance = null;
        try {
          toolInstance = new ToolClass({ override: true });
        } catch (error) {
          logger.error(
            `[loadAndFormatTools] Error initializing \`${file}\` tool; if it requires authentication, is the \`override\` field configured?`,
            error,
          );
          continue;
        }

        if (!toolInstance) {
          continue;
        }

        const formattedTool = formatToOpenAIAssistantTool(toolInstance);
        tools.push(formattedTool);
      }
    }
  }

  /**
   * Basic Tools; schema: { input: string }
   */
  const basicToolInstances = [new Calculator()];

  for (const toolInstance of basicToolInstances) {
    const formattedTool = formatToOpenAIAssistantTool(toolInstance);
    tools.push(formattedTool);
  }

  return tools.reduce((map, tool) => {
    map[tool.function.name] = tool;
    return map;
  }, {});
}

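/*
 * Minimal usage sketch. The directory path and filter entries below are
 * illustrative assumptions, not values defined in this file:
 *
 *   const toolsDir = path.join(__dirname, '..', '..', 'app', 'clients', 'tools', 'structured');
 *   const filteredTools = new Set(['SomeAuthOnlyTool.js']);
 *   const toolDefinitions = loadAndFormatTools({ directory: toolsDir, filter: filteredTools });
 *   // => { calculator: { type: 'function', function: { name: 'calculator', ... } }, ... }
 */
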
/**
 * Formats a `StructuredTool` instance into a format that is compatible
 * with OpenAI's ChatCompletionFunctions. It uses the `zodToJsonSchema`
 * function to convert the schema of the `StructuredTool` into a JSON
 * schema, which is then used as the parameters for the OpenAI function.
 *
 * @param {StructuredTool} tool - The StructuredTool to format.
 * @returns {FunctionTool} The OpenAI Assistant Tool.
 */
function formatToOpenAIAssistantTool(tool) {
  return {
    type: 'function',
    function: {
      name: tool.name,
      description: tool.description,
      parameters: zodToJsonSchema(tool.schema),
    },
  };
}

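/*
 * Illustrative result for the bundled Calculator tool (shape only; the exact
 * description text and JSON schema depend on the LangChain tool and on
 * zodToJsonSchema output):
 *
 *   formatToOpenAIAssistantTool(new Calculator());
 *   // => {
 *   //   type: 'function',
 *   //   function: {
 *   //     name: 'calculator',
 *   //     description: '...',
 *   //     parameters: { type: 'object', properties: { input: { type: 'string' } }, ... },
 *   //   },
 *   // }
 */
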
/**
 * Processes the required actions returned from a run, submitting tool outputs.
 *
 * @param {OpenAIClient} openai - OpenAI Client.
 * @param {RequiredAction[]} requiredActions - The required actions to submit outputs for.
 * @returns {Promise<ToolOutputs>} The outputs of the tools.
 *
 */
async function processRequiredActions(openai, requiredActions) {
  logger.debug(
    `[required actions] user: ${openai.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id}`,
    requiredActions,
  );
  const tools = requiredActions.map((action) => action.tool);
  const loadedTools = await loadTools({
    user: openai.req.user.id,
    model: openai.req.body.model ?? 'gpt-3.5-turbo-1106',
    tools,
    functions: true,
    options: {
      processFileURL,
      openAIApiKey: openai.apiKey,
      fileStrategy: openai.req.app.locals.fileStrategy,
      returnMetadata: true,
    },
    skipSpecs: true,
  });

  const ToolMap = loadedTools.reduce((map, tool) => {
    map[tool.name] = tool;
    return map;
  }, {});

  const promises = [];

  /** @type {Action[]} */
  let actionSets = [];
  let isActionTool = false;
  const ActionToolMap = {};
  const ActionBuildersMap = {};

  for (let i = 0; i < requiredActions.length; i++) {
    const currentAction = requiredActions[i];
    let tool = ToolMap[currentAction.tool] ?? ActionToolMap[currentAction.tool];

    const handleToolOutput = async (output) => {
      requiredActions[i].output = output;

      /** @type {FunctionToolCall & PartMetadata} */
      const toolCall = {
        function: {
          name: currentAction.tool,
          arguments: JSON.stringify(currentAction.toolInput),
          output,
        },
        id: currentAction.toolCallId,
        type: 'function',
        progress: 1,
        action: isActionTool,
      };

      const toolCallIndex = openai.mappedOrder.get(toolCall.id);

      if (imageGenTools.has(currentAction.tool)) {
        const imageOutput = output;
        toolCall.function.output = `${currentAction.tool} displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.`;

        // Streams the "Finished" state of the tool call in the UI
        openai.addContentData({
          [ContentTypes.TOOL_CALL]: toolCall,
          index: toolCallIndex,
          type: ContentTypes.TOOL_CALL,
        });

        await sleep(500);

        /** @type {ImageFile} */
        const imageDetails = {
          ...imageOutput,
          ...currentAction.toolInput,
        };

        const image_file = {
          [ContentTypes.IMAGE_FILE]: imageDetails,
          type: ContentTypes.IMAGE_FILE,
          // Replace the tool call output with Image file
          index: toolCallIndex,
        };

        openai.addContentData(image_file);

        // Update the stored tool call
        openai.seenToolCalls.set(toolCall.id, toolCall);

        return {
          tool_call_id: currentAction.toolCallId,
          output: toolCall.function.output,
        };
      }

      openai.seenToolCalls.set(toolCall.id, toolCall);
      openai.addContentData({
        [ContentTypes.TOOL_CALL]: toolCall,
        index: toolCallIndex,
        type: ContentTypes.TOOL_CALL,
        // TODO: to append tool properties to stream, pass metadata rest to addContentData
        // result: tool.result,
      });

      return {
        tool_call_id: currentAction.toolCallId,
        output,
      };
    };

    if (!tool) {
      // throw new Error(`Tool ${currentAction.tool} not found.`);

      if (!actionSets.length) {
        actionSets =
          (await loadActionSets({
            user: openai.req.user.id,
            assistant_id: openai.req.body.assistant_id,
          })) ?? [];
      }

      const actionSet = actionSets.find((action) =>
        currentAction.tool.includes(action.metadata.domain),
      );

      if (!actionSet) {
        // TODO: try `function` if no action set is found
        // throw new Error(`Tool ${currentAction.tool} not found.`);
        continue;
      }

      let builders = ActionBuildersMap[actionSet.metadata.domain];

      if (!builders) {
        const validationResult = validateAndParseOpenAPISpec(actionSet.metadata.raw_spec);
        if (!validationResult.spec) {
          throw new Error(
            `Invalid spec: user: ${openai.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id}`,
          );
        }
        const { requestBuilders } = openapiToFunction(validationResult.spec);
        // Cache the builders under the domain so subsequent actions for the same domain reuse them
        ActionBuildersMap[actionSet.metadata.domain] = requestBuilders;
        builders = requestBuilders;
      }

      const functionName = currentAction.tool.replace(
        `${actionDelimiter}${actionSet.metadata.domain}`,
        '',
      );
      const requestBuilder = builders[functionName];

      if (!requestBuilder) {
        // throw new Error(`Tool ${currentAction.tool} not found.`);
        continue;
      }

      tool = createActionTool({ action: actionSet, requestBuilder });
      isActionTool = !!tool;
      ActionToolMap[currentAction.tool] = tool;
    }

    if (currentAction.tool === 'calculator') {
      currentAction.toolInput = currentAction.toolInput.input;
    }

    try {
      const promise = tool
        ._call(currentAction.toolInput)
        .then(handleToolOutput)
        .catch((error) => {
          logger.error(`Error processing tool ${currentAction.tool}`, error);
          return {
            tool_call_id: currentAction.toolCallId,
            output: `Error processing tool ${currentAction.tool}: ${redactMessage(error.message)}`,
          };
        });
      promises.push(promise);
    } catch (error) {
      logger.error(
        `tool_call_id: ${currentAction.toolCallId} | Error processing tool ${currentAction.tool}`,
        error,
      );
      promises.push(
        Promise.resolve({
          tool_call_id: currentAction.toolCallId,
          error: error.message,
        }),
      );
    }
  }

  return {
    tool_outputs: await Promise.all(promises),
  };
}

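/*
 * Minimal sketch of the expected input/output shapes, assuming the required
 * actions were already parsed from the run's submit_tool_outputs payload
 * (values are hypothetical):
 *
 *   const requiredActions = [
 *     {
 *       tool: 'calculator',
 *       toolInput: { input: '2 + 2' },
 *       toolCallId: 'call_abc123',
 *       run_id,
 *       thread_id,
 *     },
 *   ];
 *   const { tool_outputs } = await processRequiredActions(openai, requiredActions);
 *   // tool_outputs => [{ tool_call_id: 'call_abc123', output: '4' }]
 *   // which can then be submitted back to the run via the OpenAI API.
 */
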
module.exports = {
  formatToOpenAIAssistantTool,
  loadAndFormatTools,
  processRequiredActions,
};

@@ -3,6 +3,20 @@ const p50k_base = require('tiktoken/encoders/p50k_base.json');
const cl100k_base = require('tiktoken/encoders/cl100k_base.json');
const logger = require('~/config/winston');

/**
 * Counts the number of tokens in a given text using a specified encoding model.
 *
 * This function utilizes the 'Tiktoken' library to encode text based on the selected model.
 * It supports two models, 'text-davinci-003' and 'gpt-3.5-turbo', each with its own encoding strategy.
 * For 'text-davinci-003', the 'p50k_base' encoder is used, whereas for other models, the 'cl100k_base' encoder is applied.
 * In case of an error during encoding, the error is logged, and the function returns 0.
 *
 * @async
 * @param {string} text - The text to be tokenized. Defaults to an empty string if not provided.
 * @param {string} modelName - The name of the model used for tokenizing. Defaults to 'gpt-3.5-turbo'.
 * @returns {Promise<number>} The number of tokens in the provided text. Returns 0 if an error occurs.
 * @throws Logs the error to a logger if any error occurs during tokenization; the error is not rethrown and 0 is returned instead.
 */
const countTokens = async (text = '', modelName = 'gpt-3.5-turbo') => {
  let encoder = null;
  try {