* fix: avatar
* chore: ensure `imageOutputType` is always defined
* ci(AppService): extra test for default value
* chore: replace default value for `desiredFormat` with `EImageOutputType` enum
* WIP: gemini-1.5 support
* feat: extended vertex ai support
* fix: handle possibly undefined modelName
* fix: gpt-4-turbo-preview invalid vision model
* feat: specify `fileConfig.imageOutputType` and make PNG default image conversion type
* feat: better truncation for errors including base64 strings
* fix: gemini inlineData formatting
* feat: RAG augmented prompt for gemini-1.5
* feat: gemini-1.5 rates and token window
* chore: adjust tokens, update docs, update vision models
* chore: add back `ChatGoogleVertexAI` for chat models via vertex ai
* refactor: ask/edit controllers to not use `unfinished` field for google endpoint
* chore: remove comment
* chore(ci): fix AppService test
* chore: remove comment
* refactor(GoogleSearch): use `GOOGLE_SEARCH_API_KEY` instead, issue warning for old variable
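  A minimal sketch of the fallback-with-warning pattern described here; the legacy variable name `GOOGLE_API_KEY` is an assumption for illustration, not necessarily the actual old variable:
  ```js
  // Sketch only: prefer GOOGLE_SEARCH_API_KEY, fall back to the legacy
  // variable with a deprecation warning.
  function getSearchApiKey(env = process.env) {
    if (env.GOOGLE_SEARCH_API_KEY) {
      return env.GOOGLE_SEARCH_API_KEY;
    }
    if (env.GOOGLE_API_KEY) {
      console.warn(
        'Using GOOGLE_API_KEY for Google Search is deprecated; set GOOGLE_SEARCH_API_KEY instead.',
      );
      return env.GOOGLE_API_KEY;
    }
    return undefined;
  }
  ```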
* chore: bump data-provider to 0.5.4
* chore: update docs
* fix: condition for gemini-1.5 using generative ai lib
* chore: update docs
* ci: add additional AppService test for `imageOutputType`
* refactor: optimize new config value `imageOutputType`
* chore: bump CONFIG_VERSION
* fix(assistants): avatar upload
* Patch for OpenID username
`username` is generally based on email rather than `given_name`. The challenge with `given_name` is that it can be a multi-value array (e.g. "Nick, Fullname"), which completely breaks the system with:
```
LibreChat | ValidationError: User validation failed: username: Cast to string failed for value "[ 'Nickname', 'Firstname' ]" (type Array) at path "username"
LibreChat | at Document.invalidate (/app/node_modules/mongoose/lib/document.js:3200:32)
LibreChat | at model.$set (/app/node_modules/mongoose/lib/document.js:1459:12)
LibreChat | at model.set [as username] (/app/node_modules/mongoose/lib/helpers/document/compile.js:205:19)
LibreChat | at OpenIDConnectStrategy._verify (/app/api/strategies/openidStrategy.js:127:27)
LibreChat | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
```
* Update openidStrategy.js
* refactor(openidStrategy): add helper function for stringy username
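  A hedged sketch of what such a helper might look like; the function name and profile shape are illustrative, not the actual implementation:
  ```js
  // Illustrative: normalize an OpenID claim into a plain string so the
  // Mongoose `username` field (type String) never receives an Array value.
  function toStringValue(claim) {
    if (Array.isArray(claim)) {
      // e.g. given_name: [ 'Nickname', 'Firstname' ] -> 'Nickname Firstname'
      return claim.join(' ');
    }
    return claim == null ? '' : String(claim);
  }

  // Example: username is derived from email, with given_name only as a fallback
  const profile = { email: '', given_name: ['Nickname', 'Firstname'] };
  const username = toStringValue(profile.email || profile.given_name);
  console.log(username); // 'Nickname Firstname'
  ```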
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
* refactor(getFileDownload): explicitly accept `application/octet-stream`
* chore: test compose file
* chore: test compose file fix
* chore(files/download): add more logs
* Fix proxy_pass URLs in nginx.conf
* fix: proxy_pass URLs in nginx.conf to fix file downloads from URL
* chore: move test compose file to utils dir
* refactor(useFileDownload): simplify API request by passing `file_id` instead of `filepath`
* fix(processModelData): handle `openrouter/auto` edge case
* fix(Tx.create): prevent negative multiplier edge case and prevent balance from becoming negative
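  A minimal sketch of the guard being described; the function and field names are assumptions, not the actual `Tx.create` code:
  ```js
  // Illustrative: clamp the multiplier and the resulting balance so a
  // single transaction can never drive the token balance below zero.
  function applyTransaction(balance, amount, rawMultiplier) {
    const multiplier = Math.abs(rawMultiplier ?? 1); // guard against a negative multiplier
    const cost = amount * multiplier;
    return Math.max(0, balance - cost); // balance never goes negative
  }

  console.log(applyTransaction(1000, 1200, 1)); // 0 instead of -200
  ```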
* fix(NavLinks): render 0 balance properly
* refactor(NavLinks): show only up to 2 decimal places for balance
* fix(OpenAIClient/titleConvo): fix cohere condition and record token usage for `this.options.titleMethod === 'completion'`
* fix(deleteVectors): handle errors gracefully
* chore: update docs based on new alternate env vars prefixed with RAG to avoid conflicts with LibreChat keys
* fix(processMessages): properly handle assistant file citations and add sources list
* feat: improve file download UX by making any downloaded files accessible within the app post-download
* refactor(processOpenAIImageOutput): correctly handle the two different outputs for images, since OpenAI generates a file in their storage; share the filepath for image rendering
* refactor: create `addFileToCache` helper to use across frontend
* refactor: add ImageFile parts to cache on processing content stream
* chore: add TEndpoint type/typedef
* refactor(loadConfigModels.spec): stricter default model matching (fails with current impl.)
* refactor(loadConfigModels): return default models on endpoint basis and not fetch basis
* refactor: rename `uniqueKeyToNameMap` to `uniqueKeyToEndpointsMap` for clarity
* WIP: basic route for file downloads and file strategy for generating a ReadableStream to pipe as the response
* chore(DALLE3): add typing for OpenAI client
* chore: add `CONSOLE_JSON` notes to dotenv.md
* WIP: first pass OpenAI Assistants File Output handling
* feat: first pass assistants output file download from openai
* chore: yml vs. yaml variation to .gitignore for `librechat.yml`
* refactor(retrieveAndProcessFile): remove redundancies
* fix(syncMessages): explicit sort of apiMessages to fix message order on abort
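  A sketch of the kind of explicit sort this refers to, assuming messages carry a `createdAt` timestamp:
  ```js
  // Illustrative: sort fetched messages explicitly by creation time so the
  // rendered order stays correct even when an abort persisted them out of order.
  function sortByCreatedAt(apiMessages) {
    return [...apiMessages].sort(
      (a, b) => new Date(a.createdAt) - new Date(b.createdAt),
    );
  }
  ```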
* chore: add logs for warnings and errors, show toast on frontend
* chore: add logger where console was still being used
* fix(initializeClient.spec.js): remove condition failing test on local installations
* docs: remove comments and invalid HTML, as required by the embeddings generator, and add new documentation guidelines
* refactor: use debug statement for runStepCompleted message
* fix(ChatRoute): prevent use of `newConversation` from resetting `latestMessage`, which would fire asynchronously and finalize after `latestMessage` was already correctly set
* fix(assistants): default query to limit of 100 and `desc` order
* refactor(useMultiSearch): use object as params and fix styling for assistants
* feat: informative message for thread initialization failing due to long message
* refactor(assistants/chat): use promises to speed up initialization, initialize shared variables, include `attachedFileIds` to streamRunManager
* chore: additional typedefs
* fix(OpenAIClient): handle edge case where attachments promise is resolved
* feat: createVisionPrompt
* feat: Vision Support for Assistants
* feat: add claude-3-haiku-20240307 to default anthropic list
* refactor: optimize `saveMessage` calls mid-stream via throttling
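  A hedged sketch of the throttling idea, assuming a lodash-style `throttle`; the 3-second interval and call sites are illustrative:
  ```js
  const { throttle } = require('lodash');

  // Illustrative: persist the partial message at most once per interval
  // while tokens stream in, rather than on every delta.
  function createThrottledSave(saveMessage, intervalMs = 3000) {
    return throttle(saveMessage, intervalMs, { leading: true, trailing: true });
  }

  // const saveDraft = createThrottledSave((msg) => saveMessage(req, msg));
  // saveDraft({ messageId, text: partialText }); // called on every token, runs at most every 3s
  ```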
* chore: remove addMetadata operations and consolidate in BaseClient
* fix(listAssistantsForAzure): attempt to specify correct model mapping as accurately as possible (#2177)
* refactor(client): update last conversation setup with the current assistant model; call newConvo again when assistants load to allow a fast initial load and ensure the assistant model is always the default, not the last selected model
* refactor(cache): explicitly add TTL of 2 minutes when setting titleCache and add default TTL of 10 minutes to abortKeys cache
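  A sketch of how these TTLs might look with a Keyv-style cache; the exact cache layer used is an assumption:
  ```js
  const Keyv = require('keyv');

  // Illustrative: a default TTL of 10 minutes for abort keys, and an explicit
  // 2-minute TTL supplied when a title is cached.
  const abortKeys = new Keyv({ namespace: 'abortKeys', ttl: 1000 * 60 * 10 });
  const titleCache = new Keyv({ namespace: 'titleCache' });

  async function cacheTitle(conversationId, title) {
    await titleCache.set(conversationId, title, 1000 * 60 * 2);
  }
  ```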
* feat(AnthropicClient): conversation titling using Anthropic Function Calling
* chore: remove extraneous token usage logging
* fix(convos): unhandled edge case for conversation grouping (undefined conversation)
* style: improve Search Bar styling after recent UI update
* chore: remove unused code, content part helpers
* feat: always show code option
* feat: new vector file processing strategy
* chore: remove unused client files
* chore: remove more unused client files
* chore: remove more unused client files and move used to new dir
* chore(DataIcon): add className
* WIP: Model Endpoint Settings Update, draft additional context settings
* feat: improve parsing for augmented prompt, add full context option
* chore: remove volume mounting from rag.yml as no longer necessary
* chore: bump openai to 4.29.0 and npm audit fix
* chore: remove unnecessary stream field from ContentData
* feat: new enum and types for AssistantStreamEvent
* refactor(AssistantService): remove stream field and add conversationId to text ContentData
> - return `finalMessage` and `text` on run completion
> - move `processMessages` to services/Threads to avoid circular dependencies with new stream handling
> - refactor(processMessages/retrieveAndProcessFile): add new `client` field to differentiate new RunClient type
* WIP: new assistants stream handling
* chore: stores messages to StreamRunManager
* chore: add additional typedefs
* fix: pass req and openai to StreamRunManager
* fix(AssistantService): pass openai as client to `retrieveAndProcessFile`
* WIP: streaming tool i/o, handle in_progress and completed run steps
* feat(assistants): process required actions with streaming enabled
* chore: condense early return check for useSSE useEffect
* chore: remove unnecessary comments and only handle completed tool calls when not function
* feat: add TTL for assistants run abort cacheKey
* feat: abort stream runs
* fix(assistants): render streaming cursor
* fix(assistants): hide edit icon as functionality is not supported
* fix(textArea): handle pasting edge cases; first, when onChange events wouldn't fire; second, when textarea wouldn't resize
* chore: memoize Conversations
* chore(useTextarea): reverse args order
* fix: load default capabilities when azure is configured to support assistants but the `assistants` endpoint is not configured
* fix(AssistantSelect): update form assistant model on assistant form select
* fix(actions): handle azure strict validation for function names to fix crud for actions
* chore: remove content data debug log as it fires in rapid succession
* feat: improve UX for assistant errors mid-request
* feat: add tool call localizations and replace any domain separators from azure action names
* refactor(chat): error out tool calls without outputs during handleError
* fix(ToolService): handle domain separators allowing Azure use of actions
* refactor(StreamRunManager): types and throw Error if tool submission fails
- note: in other words, setting `rejectUnauthorized: true` means self-signed certificates are not allowed, i.e. `EMAIL_ALLOW_SELFSIGNED` is set to false
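  As an illustration of that relationship, a hedged nodemailer-style transport config; the host/port variable names are assumptions:
  ```js
  const nodemailer = require('nodemailer');

  // Sketch: EMAIL_ALLOW_SELFSIGNED=true relaxes certificate verification;
  // rejectUnauthorized: true (the default) refuses self-signed certificates.
  const allowSelfSigned = process.env.EMAIL_ALLOW_SELFSIGNED === 'true';

  const transporter = nodemailer.createTransport({
    host: process.env.EMAIL_HOST,
    port: Number(process.env.EMAIL_PORT) || 587,
    tls: { rejectUnauthorized: !allowSelfSigned },
  });

  module.exports = transporter;
  ```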
* refactor: re-purpose `resendImages` as `resendFiles`
* feat: upload general files
* feat: embed file during upload
* feat: delete file embeddings on file deletion
* chore(fileConfig): add epub+zip type
* feat(encodeAndFormat): handle non-image files
* feat(createContextHandlers): build context prompt from file attachments and successful RAG
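  A minimal sketch of the context-building idea; the data shapes and wording are assumptions, not the actual `createContextHandlers` output:
  ```js
  // Illustrative: assemble an augmented prompt from per-file RAG results.
  function buildAugmentedPrompt(userQuery, fileResults) {
    const context = fileResults
      .map(({ filename, passages }) => `File: ${filename}\n${passages.join('\n')}`)
      .join('\n\n');
    return `Use the following file context when answering.\n\n${context}\n\nQuestion: ${userQuery}`;
  }
  ```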
* fix: prevent non-temp files as well as embedded files from being deleted on new conversation
* fix: remove temp_file_id on usage; prevent non-temp files as well as embedded files from being deleted on new conversation
* feat(OpenAI/Anthropic/Google): basic RAG support
* fix: delete `resendFiles` only when true (default)
* refactor(RAG): update endpoints and pass JWT
* fix(resendFiles): default values
* fix(context/processFile): query unique ids only
* feat: rag-api.yaml
* feat: improved file upload UX for longer uploads
* chore: await embed call and catch embedding errors
* refactor: store augmentedPrompt in Client
* refactor(processFileUpload): throw error if not assistant file upload
* fix(useFileHandling): handle markdown empty mimetype issue
* chore: necessary compose file changes
* fix: remove unique field from assistant_id, which can be shared between different users
* refactor: remove unique user fields from actions/assistant queries
* feat: only allow user who saved action to delete it
* refactor: allow deletions for anyone with builder access
* refactor: update user.id when updating assistants/actions records, instead of searching with it
* fix: stringify response data in case it's an object
* fix: correctly handle path input
* fix(decryptV2): handle edge case where value is already decrypted
* chore: add assistants to supportsBalanceCheck
* feat(Transaction): getTransactions and refactor export of model
* refactor: use enum: ViolationTypes.TOKEN_BALANCE
* feat(assistants): check balance
* refactor(assistants): only add promptBuffer if new convo (for title), and remove endpoint definition
* refactor(assistants): count tokens up to the current context window
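  A hedged sketch of counting tokens up to the context window; the function shape mirrors common client logic but is not the actual implementation:
  ```js
  // Illustrative: walk messages newest-to-oldest and keep only those that
  // still fit within the model's context window.
  function getMessagesWithinTokenLimit(messages, maxContextTokens, countTokens) {
    const context = [];
    let total = 0;
    for (let i = messages.length - 1; i >= 0; i -= 1) {
      const tokens = countTokens(messages[i].text ?? '');
      if (total + tokens > maxContextTokens) {
        break;
      }
      total += tokens;
      context.unshift(messages[i]);
    }
    return { context, remainingContextTokens: maxContextTokens - total };
  }
  ```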
* fix(Switcher): make Select list explicitly controlled
* feat(assistants): use assistant's default model when no model is specified instead of the last selected assistant, prevent assistant_id from being recorded in non-assistant endpoints
* chore(assistants/chat): import order
* chore: bump librechat-data-provider due to changes
* chore: rename dir from `assistant` to plural
* feat: `assistants` field for azure config, spread options in AppService
* refactor: rename constructAzureURL's `azure` param to `azureOptions`
* chore: bump openai and bun
* chore(loadDefaultModels): change naming of assistant -> assistants
* feat: load azure settings with correct baseURL for assistants' initializeClient
* refactor: add `assistants` flags to groups and model configs, add mapGroupToAzureConfig
* feat(loadConfigEndpoints): initialize assistants endpoint if azure flag `assistants` is enabled
* feat(AppService): determine assistant models on startup, throw Error if none
* refactor(useDeleteAssistantMutation): send model along with assistant id for delete mutations
* feat: support listing and deleting assistants with azure
* feat: add model query to assistant avatar upload
* feat: add azure support for retrieveRun method
* refactor: update OpenAIClient initialization
* chore: update README
* fix(ci): tests passing
* refactor(uploadOpenAIFile): improve logging and use more efficient REST API method
* refactor(useFileHandling): add model to metadata to target Azure region compatible with current model
* chore(files): add azure naming pattern for valid file id recognition
* fix(assistants): initialize openai with first available assistant model if none provided
* refactor(uploadOpenAIFile): add content type for azure, initialize formdata before azure options
* refactor(sleep): move sleep function out of Runs and into `~/server/utils`
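  For reference, the usual shape of such a helper (a sketch, not necessarily the exact code):
  ```js
  // Promise-based sleep, e.g. for pausing between run-status polls.
  function sleep(ms) {
    return new Promise((resolve) => setTimeout(resolve, ms));
  }

  // await sleep(500); // wait half a second before the next request
  ```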
* fix(azureOpenAI/assistants): make sure to only overwrite models with assistant models if `assistants` flag is enabled
* refactor(uploadOpenAIFile): revert to old method
* chore(uploadOpenAIFile): use enum for file purpose
* docs: azureOpenAI update guide with more info, examples
* feat: enable/disable assistant capabilities and specify retrieval models
* refactor: optional chain conditional statement in loadConfigModels.js
* docs: add assistants examples
* chore: update librechat.example.yaml
* docs(azure): update note of file upload behavior in Azure OpenAI Assistants
* chore: update docs and add descriptive message about assistant errors
* fix: prevent message submission with invalid assistant or if files loading
* style: update Landing icon & text when assistant is not selected
* chore: bump librechat-data-provider to 0.4.8
* fix(assistants/azure): assign req.body.model for proper azure init to abort runs