* WIP: first pass ModelSpecs
* refactor(onSelectEndpoint): use `getConvoSwitchLogic`
* feat: introduce iconURL, greeting, frontend fields for conversations/presets/messages
* feat: conversation.iconURL & greeting in Landing
* feat: conversation.iconURL & greeting in New Chat button
* feat: message.iconURL
* refactor: ConversationIcon -> ConvoIconURL
* WIP: add spec as a conversation field
* refactor: useAppStartup, set spec on initial load for new chat, allow undefined spec, add localStorage keys enum, additional type fields for spec
* feat: handle `showIconInMenu`, `showIconInHeader`, undefined `iconURL` and no specs on initial load
* chore: handle undefined or empty modelSpecs
* WIP: first pass, modelSpec schema for custom config
* refactor: move default filtered tools definition to ToolService
* feat: pass modelSpecs from backend via startupConfig
* refactor: modelSpecs config, return and define list
* fix: React error and include iconURL in responseMessage
* refactor: add iconURL to responseMessage only
* refactor: getIconEndpoint
* refactor: pass TSpecsConfig
* fix(assistants): differentiate compactAssistantSchema; correctly reset conversation state shared with other endpoints
* refactor: assistant id prefix localStorage key
* refactor: add more LocalStorageKeys and replace hardcoded values
* feat: prioritize spec on new chat: restore last selected modelSpec (via localStorage)
* feat: first pass, interface config
* chore: WIP, todo: add warnings based on config.modelSpecs settings.
* feat: enforce modelSpecs if configured
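Taken together, the modelSpecs commits above describe a config block roughly like the sketch below. This is only an illustration: the field names (`enforce`, `prioritize`, `list`, `iconURL`, `greeting`, `showIconInMenu`, `showIconInHeader`, `preset`) are lifted from the commit titles, and the actual schema may differ.
```js
// Hedged sketch of a modelSpecs config (field names inferred from the commits
// above; values are hypothetical examples, not the real schema).
const modelSpecs = {
  enforce: true,    // only allow the specs defined below
  prioritize: true, // restore the last selected spec on new chats
  list: [
    {
      name: 'support-bot', // hypothetical spec name
      label: 'Support Bot',
      iconURL: 'https://example.com/icon.png',
      greeting: 'How can I help you today?',
      showIconInMenu: true,
      showIconInHeader: true,
      preset: {
        endpoint: 'openAI',
        model: 'gpt-4-turbo-preview',
      },
    },
  ],
};
```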
* feat: show config file yaml errors
* chore: delete unused legacy Plugins component
* refactor: set tools to localStorage from recoil store
* chore: add stable recoil setter to useEffect deps
* refactor: save tools to conversation documents
* style(MultiSelectPop): dynamic height, remove unused import
* refactor(react-query): use localstorage keys and pass config to useAvailablePluginsQuery
* feat(utils): add mapPlugins
* refactor(Convo): use conversation.tools if defined, lastSelectedTools if not
* refactor: remove unused legacy code using `useSetOptions`, remove conditional flag `isMultiChat` for using legacy settings
* refactor(PluginStoreDialog): add exhaustive-deps which are stable react state setters
* fix(HeaderOptions): pass `popover` as true
* refactor(useSetStorage): use project enums
* refactor: use LocalStorageKeys enum
* fix: prevent setConversation from setting falsy values in lastSelectedTools
* refactor: use map for availableTools state and available Plugins query
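For reference, a minimal sketch of what a `mapPlugins`-style helper (see the commit above) might look like; the `pluginKey` field name is an assumption.
```js
// Hedged sketch: turn the available plugins array into a map keyed by
// pluginKey so availableTools lookups are O(1). Not the actual utility.
function mapPlugins(plugins) {
  return plugins.reduce((acc, plugin) => {
    acc[plugin.pluginKey] = plugin;
    return acc;
  }, {});
}
```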
* refactor(updateLastSelectedModel): organize logic better and add note on purpose
* fix(setAgentOption): prevent resetting last model to secondary model for gptPlugins
* refactor(buildDefaultConvo): use enum
* refactor: remove `useSetStorage` and consolidate areas where conversation state is saved to localStorage
* fix: conversations retain tools on refresh
* fix(gptPlugins): prevent nullish tools from being saved
* chore: delete useServerStream
* refactor: move initial plugins logic to useAppStartup
* refactor(MultiSelectDropDown): add more pass-in className props
* feat: use tools in presets
* chore: delete unused usePresetOptions
* refactor: new agentOptions default handling
* chore: note
* feat: add label and custom instructions to agents
* chore: remove 'disabled with tools' message
* style: move plugins to 2nd column in parameters
* fix: TPreset type for agentOptions
* fix: interface controls
* refactor: add interfaceConfig, use Separator within Switcher
* refactor: hide Assistants panel if interface.parameters are disabled
* fix(Header): only show modelSpecs if the list length is greater than 0
* refactor: separate MessageIcon logic from useMessageHelpers for better react rule-following
* fix(AppService): don't use reserved keyword 'interface'
* feat: set existing Icon for custom endpoints through iconURL
* fix(ci): tests passing for App Service
* docs: refactor custom_config.md for readability and better organization, also include missing values
* docs: interface section and re-organize docs
* docs: update modelSpecs info
* chore: remove unused files
* chore: remove unused files
* chore: move useSetIndexOptions
* chore: remove unused file
* chore: move useConversation(s)
* chore: move useDefaultConvo
* chore: move useNavigateToConvo
* refactor: use plugin install hook so it can be used elsewhere
* chore: import order
* update docs
* refactor(OpenAI/Plugins): allow modelLabel as an initial value for chatGptLabel
* chore: remove unused EndpointOptionsPopover and hide 'Save as Preset' button if preset UI visibility disabled
* feat(loadDefaultInterface): issue warnings based on values
* feat: changelog for custom config file
* docs: add additional changelog note
* fix: prevent unavailable tool selection from preset and update availableTools on Plugin installations
* feat: add `filteredTools` option in custom config
* chore: changelog
* fix(MessageIcon): always overwrite conversation.iconURL in messageSettings
* fix(ModelSpecsMenu): icon edge cases
* fix(NewChat): dynamic icon
* fix(PluginsClient): always include endpoint in responseMessage
* fix: always include endpoint and iconURL in responseMessage across different response methods
* feat: interchangeable keys for modelSpec enforcement
* feat: `stop` conversation parameter
* feat: Tag primitive
* feat: dynamic tags
* refactor: update tag styling
* feat: add stop sequences to OpenAI settings
* fix(Presentation): prevent `SidePanel` re-renders that cause the side panel to flicker
* refactor: use stop placeholder
* feat: type and schema update for `stop` and `TPreset` in generation param related types
* refactor: pass conversation to dynamic settings
* refactor(OpenAIClient): remove default handling for `modelOptions.stop`
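For context on the `stop` work above, the sketch below shows the underlying OpenAI parameter in isolation; this is not LibreChat's client code, and the model and stop sequence are arbitrary examples.
```js
// Illustration only: generation halts before any listed stop sequence.
const OpenAI = require('openai');

const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

async function demoStop() {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4-turbo-preview',
    messages: [{ role: 'user', content: 'List three fruits, numbered, one per line.' }],
    stop: ['\n3.'], // hypothetical stop sequence: cut off before the third item
  });
  console.log(completion.choices[0].message.content);
}

demoStop();
```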
* docs: fix Google AI Setup formatting
* feat: current_model
* docs: WIP update
* fix(ChatRoute): prevent default preset override before `hasSetConversation.current` becomes true by including latest conversation state as template
* docs: update docs with more info on `stop`
* chore: bump config_version
* refactor: CURRENT_MODEL handling
* fix: remove use of transactions
* fix: remove use of transactions
* refactor(actions): perform OpenAI API operation first, before attempting database updates
* chore: bump data-provider
* feat: script to check recent dependency updates
* fix: override vite/rollup version for vite build fix
- also remove unused vite-plugin-html
- add vite build to file output command
* chore: bump rollup override to last known working version (v4.16.0 is breaking)
* chore(vite): increase file size cache for workbox
* fix: resolve openai to the last known version supporting the latest assistants v1 features and default header
* chore: update openrouter examples
* chore: replace violation cache accessors with enum
* chore: fix test
* chore(fileSchema): index timestamps
* fix(ActionService): use encoding/caching strategy for handling assistant function character length limit
* refactor(actions): async `domainParser`; also resolve the retrieved model (the deployment name) to the user-defined model
* style(AssistantAction): add `whitespace-nowrap` for ellipsis
* refactor(ActionService): if the domain is at or under the fixed encoded-domain length, return the domain with only the separator replaced
* refactor(actions): use sessions/transactions for updating Assistant Action database records
* chore: remove TTL from ENCODED_DOMAINS cache
* refactor(domainParser): minor optimization and add tests
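A rough sketch of the encoding idea behind `domainParser`: OpenAI/Azure function names have a character limit, so long action domains are encoded to a fixed length and the original is cached for later recovery. The separator, length budget, and cache shape here are hypothetical stand-ins, not the actual ActionService values.
```js
// Hedged sketch only; constants and cache are hypothetical.
const DOMAIN_SEPARATOR = '_';   // hypothetical replacement for '.' in function names
const ENCODED_LENGTH = 27;      // hypothetical fixed length budget

const encodedDomainCache = new Map(); // stands in for the ENCODED_DOMAINS cache

function encodeDomain(domain) {
  // Short domains: just swap the separator so the name stays a valid identifier.
  if (domain.length <= ENCODED_LENGTH) {
    return domain.replaceAll('.', DOMAIN_SEPARATOR);
  }
  // Long domains: use a truncated base64url encoding and remember the original.
  const encoded = Buffer.from(domain).toString('base64url').slice(0, ENCODED_LENGTH);
  encodedDomainCache.set(encoded, domain);
  return encoded;
}

function decodeDomain(value) {
  return encodedDomainCache.get(value) ?? value.replaceAll(DOMAIN_SEPARATOR, '.');
}
```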
* fix(spendTokens): use txData.user for token usage logging
* refactor(actions): add helper function `withSession` for database operations with sessions/transactions
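A minimal sketch of a `withSession`-style helper, assuming mongoose sessions; the signature and usage are illustrative, not the actual LibreChat helper.
```js
// Hedged sketch: run database operations inside a mongoose session/transaction
// and always clean up the session afterwards.
const mongoose = require('mongoose');

async function withSession(operations) {
  const session = await mongoose.startSession();
  try {
    let result;
    // `withTransaction` starts the transaction, retries transient errors,
    // and commits when the callback resolves.
    await session.withTransaction(async () => {
      result = await operations(session);
    });
    return result;
  } finally {
    session.endSession();
  }
}

// Hypothetical usage: update an Action and its Assistant record atomically.
// await withSession(async (session) => {
//   await Action.updateOne({ action_id }, update, { session });
//   await Assistant.updateOne({ assistant_id }, assistantUpdate, { session });
// });
```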
* fix(PluginsClient): logger debug `message` field edge case
* fix(useCreateAssistantMutation): force re-render of the assistants map by avoiding a shallow reference to listRes.data
* fix(AppService): regression where azure assistant defaults were not included when no assistant endpoint values are set
* chore: bump example config version
* refactor(AppService): issue warnings from separate modules where possible
* refactor(AppService): consolidate AppService logic to separate modules as much as possible
* chore: bump data-provider
* chore: remove unnecessary variable definition
* chore: warning wording
* avatar fix
* chore: ensure `imageOutputType` is always defined
* ci(AppService): extra test for default value
* chore: replace default value for `desiredFormat` with `EImageOutputType` enum
* WIP: gemini-1.5 support
* feat: extended vertex ai support
* fix: handle possibly undefined modelName
* fix: gpt-4-turbo-preview invalid vision model
* feat: specify `fileConfig.imageOutputType` and make PNG default image conversion type
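As an illustration of `fileConfig.imageOutputType`, a conversion step might look like the sketch below; sharp is used here purely as an example library, and the actual implementation may differ.
```js
// Hedged sketch: convert uploaded images to the configured output type,
// defaulting to PNG per the commit above.
const sharp = require('sharp');

const DEFAULT_IMAGE_OUTPUT_TYPE = 'png';

async function convertImage(inputBuffer, outputType = DEFAULT_IMAGE_OUTPUT_TYPE) {
  // `toFormat` accepts 'png', 'jpeg', 'webp', etc.
  return sharp(inputBuffer).toFormat(outputType).toBuffer();
}
```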
* feat: better truncation for errors that include base64 strings
* fix: gemini inlineData formatting
* feat: RAG augmented prompt for gemini-1.5
* feat: gemini-1.5 rates and token window
* chore: adjust tokens, update docs, update vision Models
* chore: add back `ChatGoogleVertexAI` for chat models via vertex ai
* refactor: ask/edit controllers to not use `unfinished` field for google endpoint
* chore: remove comment
* chore(ci): fix AppService test
* chore: remove comment
* refactor(GoogleSearch): use `GOOGLE_SEARCH_API_KEY` instead, issue warning for old variable
* chore: bump data-provider to 0.5.4
* chore: update docs
* fix: condition for gemini-1.5 using generative ai lib
* chore: update docs
* ci: add additional AppService test for `imageOutputType`
* refactor: optimize new config value `imageOutputType`
* chore: bump CONFIG_VERSION
* fix(assistants): avatar upload
* Patch for OpenID username
`username` is generally based on email rather than `given_name`. The challenge with `given_name` is that it can be a multi-value array (e.g., "Nick, Fullname"), which completely breaks the system with:
```
LibreChat | ValidationError: User validation failed: username: Cast to string failed for value "[ 'Nickname', 'Firstname' ]" (type Array) at path "username"
LibreChat | at Document.invalidate (/app/node_modules/mongoose/lib/document.js:3200:32)
LibreChat | at model.$set (/app/node_modules/mongoose/lib/document.js:1459:12)
LibreChat | at model.set [as username] (/app/node_modules/mongoose/lib/helpers/document/compile.js:205:19)
LibreChat | at OpenIDConnectStrategy._verify (/app/api/strategies/openidStrategy.js:127:27)
LibreChat | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
```
* Update openidStrategy.js
* refactor(openidStrategy): add helper function for stringy username (see the sketch below)
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
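A minimal sketch of the "stringy username" helper referenced above; the function name and fallback behavior are assumptions, not the actual openidStrategy.js code.
```js
// Hedged sketch: OpenID claims like `given_name` may arrive as arrays
// (e.g. ['Nickname', 'Firstname']), which fails Mongoose's string cast,
// so coerce whatever claim is chosen into a single string.
function toStringyUsername(claim, fallback = '') {
  if (Array.isArray(claim)) {
    return claim.join(' ').trim() || fallback;
  }
  if (typeof claim === 'string') {
    return claim.trim() || fallback;
  }
  return fallback;
}

// toStringyUsername(['Nickname', 'Firstname']) -> 'Nickname Firstname'
// toStringyUsername('jdoe@example.com')        -> 'jdoe@example.com'
```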
* refactor(getFiledownload): explicitly accept `application/octet-stream`
* chore: test compose file
* chore: test compose file fix
* chore(files/download): add more logs
* Fix proxy_pass URLs in nginx.conf
* fix: proxy_pass URLs in nginx.conf to fix file downloads from URL
* chore: move test compose file to utils dir
* refactor(useFileDownload): simplify API request by passing `file_id` instead of `filepath`
* fix(processModelData): handle `openrouter/auto` edge case
* fix(Tx.create): prevent negative multiplier edge case and prevent balance from becoming negative
* fix(NavLinks): render 0 balance properly
* refactor(NavLinks): show only up to 2 decimal places for balance
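The `Tx.create` fix above amounts to guarding the balance arithmetic roughly as follows (hypothetical function, not the actual model code).
```js
// Hedged sketch: never apply a negative multiplier, and never let the
// resulting balance go below zero.
function applyTransaction(balance, tokenCost, multiplier) {
  const safeMultiplier = Math.max(multiplier, 0); // negative multiplier edge case
  const charge = tokenCost * safeMultiplier;
  return Math.max(balance - charge, 0);           // balance cannot become negative
}
```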
* fix(OpenAIClient/titleConvo): fix cohere condition and record token usage for `this.options.titleMethod === 'completion'`
* fix(deleteVectors): handle errors gracefully
* chore: update docs based on new alternate env vars prefixed with RAG to avoid conflicts with LibreChat keys
* fix(processMessages): properly handle assistant file citations and add sources list
* feat: improve file download UX by making any downloaded files accessible within the app post-download
* refactor(processOpenAIImageOutput): correctly handle the two different outputs for images, since OpenAI generates a file in its storage and shares the filepath for image rendering
* refactor: create `addFileToCache` helper to use across frontend
* refactor: add ImageFile parts to cache on processing content stream
* chore: add TEndpoint type/typedef
* refactor(loadConfigModels.spec): stricter default model matching (fails with the current implementation)
* refactor(loadConfigModels): return default models on endpoint basis and not fetch basis
* refactor: rename `uniqueKeyToNameMap` to `uniqueKeyToEndpointsMap` for clarity
* WIP: basic route for file downloads and file strategy for generating a ReadableStream to pipe as the response
* chore(DALLE3): add typing for OpenAI client
* chore: add `CONSOLE_JSON` notes to dotenv.md
* WIP: first pass OpenAI Assistants File Output handling
* feat: first pass assistants output file download from openai
* chore: add yml vs. yaml variation to .gitignore for `librechat.yml`
* refactor(retrieveAndProcessFile): remove redundancies
* fix(syncMessages): explicit sort of apiMessages to fix message order on abort
* chore: add logs for warnings and errors, show toast on frontend
* chore: add logger where console was still being used
* fix(initializeClient.spec.js): remove condition that fails the test on local installations
* docs: remove comments and invalid HTML (required by the embeddings generator) and add new documentation guidelines
* refactor: use debug statement for runStepCompleted message
* fix(ChatRoute): prevent use of `newConversation` from resetting `latestMessage`, which would fire asynchronously and finalize after `latestMessage` was already correctly set
* fix(assistants): default query to limit of 100 and `desc` order
* refactor(useMultiSearch): use object as params and fix styling for assistants
* feat: informative message when thread initialization fails due to a long message
* refactor(assistants/chat): use promises to speed up initialization, initialize shared variables, include `attachedFileIds` to streamRunManager
* chore: additional typedefs
* fix(OpenAIClient): handle edge case where attachments promise is resolved
* feat: createVisionPrompt
* feat: Vision Support for Assistants
* feat: add claude-3-haiku-20240307 to default anthropic list
* refactor: optimize `saveMessage` calls mid-stream via throttling
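A minimal sketch of the mid-stream throttling idea above; the interval and wrapper shape are hypothetical, not the actual client code.
```js
// Hedged sketch: while a response streams in, persist the partial message at
// most once per interval instead of on every token, then flush the final state.
function createThrottledSave(saveMessage, intervalMs = 3000) {
  let lastSave = 0;
  let pending = null;

  return {
    update(message) {
      pending = message;
      const now = Date.now();
      if (now - lastSave >= intervalMs) {
        lastSave = now;
        saveMessage(pending);
        pending = null;
      }
    },
    async flush() {
      if (pending) {
        await saveMessage(pending);
        pending = null;
      }
    },
  };
}
```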
* chore: remove addMetadata operations and consolidate in BaseClient
* fix(listAssistantsForAzure): attempt to specify correct model mapping as accurately as possible (#2177)
* refactor(client): update last conversation setup with current assistant model, call newConvo again when assistants load to allow fast initial load and ensure assistant model is always the default, not the last selected model
* refactor(cache): explicitly add TTL of 2 minutes when setting titleCache and add default TTL of 10 minutes to abortKeys cache
* feat(AnthropicClient): conversation titling using Anthropic Function Calling
* chore: remove extraneous token usage logging
* fix(convos): unhandled edge case for conversation grouping (undefined conversation)
* style: improve Search Bar style after recent UI update
* chore: remove unused code, content part helpers
* feat: always show code option
* feat: new vector file processing strategy
* chore: remove unused client files
* chore: remove more unused client files
* chore: remove more unused client files and move used to new dir
* chore(DataIcon): add className
* WIP: Model Endpoint Settings Update, draft additional context settings
* feat: improve parsing for augmented prompt, add full context option
* chore: remove volume mounting from rag.yml as no longer necessary
* chore: bump openai to 4.29.0 and npm audit fix
* chore: remove unnecessary stream field from ContentData
* feat: new enum and types for AssistantStreamEvent
* refactor(AssistantService): remove stream field and add conversationId to text ContentData
- return `finalMessage` and `text` on run completion
- move `processMessages` to services/Threads to avoid circular dependencies with new stream handling
- refactor(processMessages/retrieveAndProcessFile): add new `client` field to differentiate new RunClient type
* WIP: new assistants stream handling
* chore: store messages in StreamRunManager
* chore: add additional typedefs
* fix: pass req and openai to StreamRunManager
* fix(AssistantService): pass openai as client to `retrieveAndProcessFile`
* WIP: streaming tool i/o, handle in_progress and completed run steps
* feat(assistants): process required actions with streaming enabled
* chore: condense early return check for useSSE useEffect
* chore: remove unnecessary comments and only handle completed tool calls when not a function
* feat: add TTL for assistants run abort cacheKey
* feat: abort stream runs
* fix(assistants): render streaming cursor
* fix(assistants): hide edit icon as functionality is not supported
* fix(textArea): handle pasting edge cases; first, when onChange events wouldn't fire; second, when textarea wouldn't resize
* chore: memoize Conversations
* chore(useTextarea): reverse args order
* fix: load default capabilities when azure is configured to support assistants but the `assistants` endpoint is not configured
* fix(AssistantSelect): update form assistant model on assistant form select
* fix(actions): handle azure strict validation for function names to fix CRUD for actions
* chore: remove content data debug log as it fires in rapid succession
* feat: improve UX for assistant errors mid-request
* feat: add tool call localizations and replace any domain separators from azure action names
* refactor(chat): error out tool calls without outputs during handleError
* fix(ToolService): handle domain separators allowing Azure use of actions
* refactor(StreamRunManager): types and throw Error if tool submission fails