* WIP: gemini-1.5 support
* feat: extended vertex ai support
* fix: handle possibly undefined modelName
* fix: gpt-4-turbo-preview invalid vision model
* feat: specify `fileConfig.imageOutputType` and make PNG default image conversion type
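The new option lives under `fileConfig` in `librechat.yaml`; a minimal sketch (the alternative values in the comment are assumptions based on common image types, not confirmed by this changelog):

```yaml
# librechat.yaml — imageOutputType controls the format uploaded images
# are converted to; "png" is the new default.
fileConfig:
  imageOutputType: "png"  # other types such as "jpeg" or "webp" may be supported
```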
* feat: better truncation for errors including base64 strings
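A minimal sketch of what truncating base64 payloads in error messages can look like; the helper name, pattern, and length threshold are illustrative, not LibreChat's actual implementation:

```javascript
// Replace long base64-looking runs in an error message with a short
// prefix and a marker, so logs stay readable.
const BASE64_PATTERN = /[A-Za-z0-9+/=]{100,}/g;

function truncateBase64(message, keep = 24) {
  return message.replace(
    BASE64_PATTERN,
    (match) => `${match.slice(0, keep)}... [truncated ${match.length - keep} chars]`
  );
}
```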
* fix: gemini inlineData formatting
* feat: RAG augmented prompt for gemini-1.5
* feat: gemini-1.5 rates and token window
* chore: adjust tokens, update docs, update vision Models
* chore: add back `ChatGoogleVertexAI` for chat models via vertex ai
* refactor: ask/edit controllers to not use `unfinished` field for google endpoint
* chore: remove comment
* chore(ci): fix AppService test
* chore: remove comment
* refactor(GoogleSearch): use `GOOGLE_SEARCH_API_KEY` instead, issue warning for old variable
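A sketch of the fallback-with-warning pattern this refactor describes; the legacy variable name `GOOGLE_API_KEY` is an assumption (chosen because it would conflict with the Google LLM key):

```javascript
// Prefer the new env var; warn (but still work) if only the old one is set.
function getSearchApiKey(env = process.env, warn = console.warn) {
  if (env.GOOGLE_SEARCH_API_KEY) {
    return env.GOOGLE_SEARCH_API_KEY;
  }
  if (env.GOOGLE_API_KEY) {
    warn('GOOGLE_API_KEY is deprecated for search; use GOOGLE_SEARCH_API_KEY');
    return env.GOOGLE_API_KEY;
  }
  return undefined;
}
```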
* chore: bump data-provider to 0.5.4
* chore: update docs
* fix: condition for gemini-1.5 using generative ai lib
* chore: update docs
* ci: add additional AppService test for `imageOutputType`
* refactor: optimize new config value `imageOutputType`
* chore: bump CONFIG_VERSION
* fix(assistants): avatar upload
* refactor(getFileDownload): explicitly accept `application/octet-stream`
* chore: test compose file
* chore: test compose file fix
* chore(files/download): add more logs
* Fix proxy_pass URLs in nginx.conf
* fix: proxy_pass URLs in nginx.conf to fix file downloads from URL
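The usual `proxy_pass` pitfall is its trailing-slash/URI behavior; a sketch with an illustrative upstream name and port (not necessarily the actual fix applied here):

```nginx
# Without a URI part, nginx forwards the original request path untouched;
# with one (e.g. proxy_pass http://api:3080/;), the matched location
# prefix is replaced, which can silently break download paths.
location /api/ {
    proxy_pass http://api:3080;
}
```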
* chore: move test compose file to utils dir
* refactor(useFileDownload): simplify API request by passing `file_id` instead of `filepath`
* fix(deleteVectors): handle errors gracefully
* chore: update docs based on new alternate env vars prefixed with RAG to avoid conflicts with LibreChat keys
* fix(processMessages): properly handle assistant file citations and add sources list
* feat: improve file download UX by making any downloaded files accessible within the app post-download
* refactor(processOpenAIImageOutput): correctly handle two different outputs for images, since OpenAI generates a file in its storage and shares the filepath for image rendering
* refactor: create `addFileToCache` helper to use across frontend
* refactor: add ImageFile parts to cache on processing content stream
* chore: add TEndpoint type/typedef
* refactor(loadConfigModels.spec): stricter default model matching (fails with current impl.)
* refactor(loadConfigModels): return default models on endpoint basis and not fetch basis
* refactor: rename `uniqueKeyToNameMap` to `uniqueKeyToEndpointsMap` for clarity
* WIP: basic route for file downloads and file strategy for generating a ReadableStream to pipe as the response
* chore(DALLE3): add typing for OpenAI client
* chore: add `CONSOLE_JSON` notes to dotenv.md
* WIP: first pass OpenAI Assistants File Output handling
* feat: first pass assistants output file download from openai
* chore: yml vs. yaml variation to .gitignore for `librechat.yml`
* refactor(retrieveAndProcessFile): remove redundancies
* fix(syncMessages): explicit sort of apiMessages to fix message order on abort
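A minimal sketch of the explicit sort: order fetched messages by creation time before syncing, so an aborted stream cannot leave them out of order. The `createdAt` field name is an assumption:

```javascript
// Return a new array sorted oldest-first by creation timestamp.
function sortByCreatedAt(messages) {
  return [...messages].sort(
    (a, b) => new Date(a.createdAt) - new Date(b.createdAt)
  );
}
```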
* chore: add logs for warnings and errors, show toast on frontend
* chore: add logger where console was still being used
* fix(initializeClient.spec.js): remove condition failing test on local installations
* docs: remove comments and invalid HTML as required by the embeddings generator; add new documentation guidelines
* refactor: use debug statement for runStepCompleted message
* fix(ChatRoute): prevent use of `newConversation` from resetting `latestMessage`, which would fire asynchronously and finalize after `latestMessage` was already correctly set
* fix(assistants): default query to limit of 100 and `desc` order
* refactor(useMultiSearch): use object as params and fix styling for assistants
* feat: informative message for thread initialization failing due to long message
* refactor(assistants/chat): use promises to speed up initialization, initialize shared variables, include `attachedFileIds` to streamRunManager
* chore: additional typedefs
* fix(OpenAIClient): handle edge case where attachments promise is resolved
* feat: createVisionPrompt
* feat: Vision Support for Assistants
* feat: add claude-3-haiku-20240307 to default anthropic list
* refactor: optimize `saveMessage` calls mid-stream via throttling
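The throttling idea can be sketched as a trailing-edge throttle: persist at most once per interval while tokens stream, always keeping the latest text. Names and the interval are illustrative, not LibreChat's actual code:

```javascript
// Wrap a save function so rapid mid-stream calls collapse into one
// deferred save of the most recent message.
function createThrottledSave(saveFn, intervalMs = 3000) {
  let timer = null;
  let latest = null;
  return (message) => {
    latest = message;
    if (timer) return; // a save is already scheduled
    timer = setTimeout(() => {
      timer = null;
      saveFn(latest);
    }, intervalMs);
  };
}
```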
* chore: remove addMetadata operations and consolidate in BaseClient
* fix(listAssistantsForAzure): attempt to specify correct model mapping as accurately as possible (#2177)
* refactor(client): update last conversation setup with current assistant model, call newConvo again when assistants load to allow fast initial load and ensure assistant model is always the default, not the last selected model
* refactor(cache): explicitly add TTL of 2 minutes when setting titleCache and add default TTL of 10 minutes to abortKeys cache
* feat(AnthropicClient): conversation titling using Anthropic Function Calling
* chore: remove extraneous token usage logging
* fix(convos): unhandled edge case for conversation grouping (undefined conversation)
* style: Improved style of Search Bar after recent UI update
* chore: remove unused code, content part helpers
* feat: always show code option
* feat: new vector file processing strategy
* chore: remove unused client files
* chore: remove more unused client files
* chore: remove more unused client files and move used to new dir
* chore(DataIcon): add className
* WIP: Model Endpoint Settings Update, draft additional context settings
* feat: improve parsing for augmented prompt, add full context option
* chore: remove volume mounting from rag.yml as no longer necessary
* chore: bump openai to 4.29.0 and npm audit fix
* chore: remove unnecessary stream field from ContentData
* feat: new enum and types for AssistantStreamEvent
* refactor(AssistantService): remove stream field and add conversationId to text ContentData
  - return `finalMessage` and `text` on run completion
  - move `processMessages` to services/Threads to avoid circular dependencies with new stream handling
  - refactor(processMessages/retrieveAndProcessFile): add new `client` field to differentiate new RunClient type
* WIP: new assistants stream handling
* chore: stores messages to StreamRunManager
* chore: add additional typedefs
* fix: pass req and openai to StreamRunManager
* fix(AssistantService): pass openai as client to `retrieveAndProcessFile`
* WIP: streaming tool i/o, handle in_progress and completed run steps
* feat(assistants): process required actions with streaming enabled
* chore: condense early return check for useSSE useEffect
* chore: remove unnecessary comments and only handle completed tool calls when not function
* feat: add TTL for assistants run abort cacheKey
* feat: abort stream runs
* fix(assistants): render streaming cursor
* fix(assistants): hide edit icon as functionality is not supported
* fix(textArea): handle pasting edge cases; first, when onChange events wouldn't fire; second, when textarea wouldn't resize
* chore: memoize Conversations
* chore(useTextarea): reverse args order
* fix: load default capabilities when azure is configured to support assistants but the `assistants` endpoint is not configured
* fix(AssistantSelect): update form assistant model on assistant form select
* fix(actions): handle azure strict validation for function names to fix crud for actions
* chore: remove content data debug log as it fires in rapid succession
* feat: improve UX for assistant errors mid-request
* feat: add tool call localizations and replace any domain separators from azure action names
* refactor(chat): error out tool calls without outputs during handleError
* fix(ToolService): handle domain separators allowing Azure use of actions
* refactor(StreamRunManager): types and throw Error if tool submission fails
- note: put differently, `rejectUnauthorized: true` means self-signed certificates are not allowed, which corresponds to `EMAIL_ALLOW_SELFSIGNED` being set to false
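The mapping in that note can be sketched as follows (nodemailer-style TLS options; reading the env var as the string `'true'` is an assumption):

```javascript
// Map the EMAIL_ALLOW_SELFSIGNED flag onto the transport's TLS option:
// allowing self-signed certs means NOT rejecting unauthorized ones.
function getTlsOptions(env = process.env) {
  const allowSelfSigned = env.EMAIL_ALLOW_SELFSIGNED === 'true';
  return { rejectUnauthorized: !allowSelfSigned };
}
```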
* refactor: re-purpose `resendImages` as `resendFiles`
* refactor: re-purpose `resendImages` as `resendFiles`
* feat: upload general files
* feat: embed file during upload
* feat: delete file embeddings on file deletion
* chore(fileConfig): add epub+zip type
* feat(encodeAndFormat): handle non-image files
* feat(createContextHandlers): build context prompt from file attachments and successful RAG
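An illustrative shape for a prompt assembled from retrieved file chunks; the wording and field names are assumptions, not LibreChat's actual template:

```javascript
// Build an augmented prompt from RAG results: number each retrieved
// chunk, label it with its source file, and append the user's question.
function buildContextPrompt(query, chunks) {
  const context = chunks
    .map((c, i) => `[${i + 1}] ${c.filename}:\n${c.text}`)
    .join('\n\n');
  return `Use the following file context to answer.\n\n${context}\n\nQuestion: ${query}`;
}
```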
* fix: prevent non-temp and embedded files from being deleted on new conversation
* fix: remove temp_file_id on usage; prevent non-temp and embedded files from being deleted on new conversation
* fix: prevent non-temp and embedded files from being deleted on new conversation
* feat(OpenAI/Anthropic/Google): basic RAG support
* fix: delete `resendFiles` only when true (Default)
* refactor(RAG): update endpoints and pass JWT
* fix(resendFiles): default values
* fix(context/processFile): query unique ids only
* feat: rag-api.yaml
* feat: file upload improved ux for longer uploads
* chore: await embed call and catch embedding errors
* refactor: store augmentedPrompt in Client
* refactor(processFileUpload): throw error if not assistant file upload
* fix(useFileHandling): handle markdown empty mimetype issue
* chore: necessary compose file changes
* fix: remove unique field from assistant_id, which can be shared between different users
* refactor: remove unique user fields from actions/assistant queries
* feat: only allow user who saved action to delete it
* refactor: allow deletions for anyone with builder access
* refactor: update user.id when updating assistants/actions records, instead of searching with it
* fix: stringify response data in case it's an object
* fix: correctly handle path input
* fix(decryptV2): handle edge case where value is already decrypted
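One way to handle the already-decrypted edge case is to sniff the stored value's shape before decrypting; the `iv:ciphertext` format check below is an assumption for illustration:

```javascript
// If the value doesn't look like "hexIV:ciphertext", assume it is
// already plaintext and return it as-is rather than failing.
function decryptV2(value, decryptFn) {
  const parts = value.split(':');
  if (parts.length < 2 || !/^[0-9a-f]+$/i.test(parts[0])) {
    return value; // already decrypted / legacy plaintext
  }
  return decryptFn(value);
}
```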
* chore: add assistants to supportsBalanceCheck
* feat(Transaction): getTransactions and refactor export of model
* refactor: use enum: ViolationTypes.TOKEN_BALANCE
* feat(assistants): check balance
* refactor(assistants): only add promptBuffer if new convo (for title), and remove endpoint definition
* refactor(assistants): Count tokens up to the current context window
* fix(Switcher): make Select list explicitly controlled
* feat(assistants): use assistant's default model when no model is specified instead of the last selected assistant, prevent assistant_id from being recorded in non-assistant endpoints
* chore(assistants/chat): import order
* chore: bump librechat-data-provider due to changes
* chore: rename dir from `assistant` to plural
* feat: `assistants` field for azure config, spread options in AppService
* refactor: rename constructAzureURL param for azure as `azureOptions`
* chore: bump openai and bun
* chore(loadDefaultModels): change naming of assistant -> assistants
* feat: load azure settings with correct baseURL for assistants' initializeClient
* refactor: add `assistants` flags to groups and model configs, add mapGroupToAzureConfig
* feat(loadConfigEndpoints): initialize assistants endpoint if azure flag `assistants` is enabled
* feat(AppService): determine assistant models on startup, throw Error if none
* refactor(useDeleteAssistantMutation): send model along with assistant id for delete mutations
* feat: support listing and deleting assistants with azure
* feat: add model query to assistant avatar upload
* feat: add azure support for retrieveRun method
* refactor: update OpenAIClient initialization
* chore: update README
* fix(ci): tests passing
* refactor(uploadOpenAIFile): improve logging and use more efficient REST API method
* refactor(useFileHandling): add model to metadata to target Azure region compatible with current model
* chore(files): add azure naming pattern for valid file id recognition
* fix(assistants): initialize openai with first available assistant model if none provided
* refactor(uploadOpenAIFile): add content type for azure, initialize formdata before azure options
* refactor(sleep): move sleep function out of Runs and into `~/server/utils`
* fix(azureOpenAI/assistants): make sure to only overwrite models with assistant models if `assistants` flag is enabled
* refactor(uploadOpenAIFile): revert to old method
* chore(uploadOpenAIFile): use enum for file purpose
* docs: azureOpenAI update guide with more info, examples
* feat: enable/disable assistant capabilities and specify retrieval models
* refactor: optional chain conditional statement in loadConfigModels.js
* docs: add assistants examples
* chore: update librechat.example.yaml
* docs(azure): update note of file upload behavior in Azure OpenAI Assistants
* chore: update docs and add descriptive message about assistant errors
* fix: prevent message submission with invalid assistant or if files loading
* style: update Landing icon & text when assistant is not selected
* chore: bump librechat-data-provider to 0.4.8
* fix(assistants/azure): assign req.body.model for proper azure init to abort runs
* feat: make assistants endpoint appendable since message state is not managed by LibreChat
* fix(ask): search currentMessages for thread_id if it's not defined
* refactor(abortMiddleware): remove use of `overrideProps` and spread unknown fields instead
* chore: remove console.log in `abortConversation`
* refactor(assistants): improve error handling/cancellation flow
* chore: bump anthropic SDK
* chore: update anthropic config settings (fileSupport, default models)
* feat: anthropic multi modal formatting
* refactor: update vision models and use endpoint specific max long side resizing
* feat(anthropic): multimodal messages, retry logic, and messages payload
* chore: add more safety to trimming content due to whitespace error for assistant messages
* feat(anthropic): token accounting and resending multiple images in progress
* chore: bump data-provider
* feat(anthropic): resendImages feature
* chore: optimize Edit/Ask controllers, switch model back to req model
* fix: false positive of invalid model
* refactor(validateVisionModel): use object as arg, pass in additional/available models
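A sketch of the object-argument shape this refactor describes: a model qualifies if it matches a known vision list and, when provided, appears among the currently available models. The vision list here is a small illustrative subset:

```javascript
const visionModels = ['gpt-4-vision-preview', 'claude-3-opus-20240229', 'gemini-pro-vision'];

// Takes a single options object instead of positional args, so callers
// can optionally pass the models actually available to the user.
function validateVisionModel({ model, availableModels }) {
  if (!model) return false;
  if (availableModels && !availableModels.includes(model)) return false;
  return visionModels.some((name) => model.includes(name));
}
```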
* refactor(validateModel): use helper function, `getModelsConfig`
* feat: add modelsConfig to endpointOption so it gets passed to all clients, use for properly validating vision models
* refactor: initialize default vision model and make sure it's available before assigning it
* refactor(useSSE): avoid resetting model if user selected a new model between request and response
* feat: show rate in transaction logging
* fix: return tokenCountMap regardless of payload shape
* chore: remove unused code in progressCallback, as well as handle reply.trim(), post `getCompletion`
* chore(Dockerfile): remove curl installation
* experimental: dev image parallelized with matrix strategy and building for amd64/arm64 support
* make platforms explicit
* fix(useContentHandler): retain undefined parts and handle them within `ContentParts` rendering
* fix(AssistantService/in_progress): skip empty messages
* refactor(RunManager): create highly specific `seenSteps` Set keys for RunSteps with use of `getDetailsSignature` and `getToolCallSignature`, to ensure changes from polling are always captured
* chore: bump browserslist-db@latest
* refactor(EndpointService): simplify with `generateConfig`, utilize optional baseURL for OpenAI-based endpoints, use `isUserProvided` helper fn wherever needed
* refactor(custom/initializeClient): use standardized naming for common variables
* feat: user provided baseURL for openAI-based endpoints
* refactor(custom/initializeClient): re-order operations
* fix: `KnownEndpoints` enum definition and add FetchTokenConfig, bump data-provider
* refactor(custom): use tokenKey dependent on userProvided conditions for caching and fetching endpointTokenConfig, anticipate token rates from custom config
* refactor(custom): assure endpointTokenConfig is only accessed from cache if qualifies for fetching
* fix(ci): update tests for initializeClient based on userProvideURL changes
* fix(EndpointService): correct baseURL env var for assistants: `ASSISTANTS_BASE_URL`
* fix: unnecessary run cancellation on res.close() when response.run is completed
* feat(assistants): user provided URL option
* ci: update tests and add test for `assistants` endpoint
* chore: leaner condition for request closing
* chore: more descriptive error message to provide keys again
* fix(bun): fix bun compatibility to allow gzip header: https://github.com/oven-sh/bun/issues/267#issuecomment-1854460357
* chore: update custom config examples
* fix(OpenAIClient.chatCompletion): remove redundant `stream.controller.abort()` call, since `break` already aborts the request; skipping the redundant call prevents abort errors
* chore: bump bun.lockb
* fix: remove result-thinking class when message is no longer streaming
* fix(bun): improve Bun support by forcing use of old method in bun env, also update old methods with new customizable params
* fix(ci): pass tests
* feat(data-provider): add Azure serverless inference handling through librechat.yaml
* feat(azureOpenAI): serverless inference handling in api
* docs: update docs with new azureOpenAI endpoint config fields and serverless inference endpoint setup
* chore: remove unnecessary checks for apiKey as schema would not allow apiKey to be undefined
* ci(azureOpenAI): update tests for serverless configurations
* WIP: first pass for azure endpoint schema
* refactor: azure config to return groupMap and modelConfigMap
* WIP: naming and schema changes
* refactor(errorsToString): move to data-provider
* feat: rename to azureGroups, add additional tests, tests all expected outcomes, return errors
* feat(AppService): load Azure groups
* refactor(azure): use imported types, write `mapModelToAzureConfig`
* refactor: move `extractEnvVariable` to data-provider
* refactor(validateAzureGroups): throw on duplicate groups or models; feat(mapModelToAzureConfig): throw if env vars not present, add tests
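A minimal duplicate check in the spirit of `validateAzureGroups`: reject configs that repeat a group name or a model name across groups. Field names are illustrative:

```javascript
// Throw on the first duplicate group or model name; return true if the
// config is free of duplicates.
function checkDuplicates(groups) {
  const groupNames = new Set();
  const modelNames = new Set();
  for (const { group, models } of groups) {
    if (groupNames.has(group)) throw new Error(`Duplicate group: ${group}`);
    groupNames.add(group);
    for (const model of Object.keys(models)) {
      if (modelNames.has(model)) throw new Error(`Duplicate model: ${model}`);
      modelNames.add(model);
    }
  }
  return true;
}
```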
* refactor(AppService): ensure each model is properly configured on startup
* refactor: deprecate azureOpenAI environment variables in favor of librechat.yaml config
* feat: use helper functions to handle and order enabled/default endpoints; initialize azureOpenAI from config file
* refactor: redefine types as well as load azureOpenAI models from config file
* chore(ci): fix test description naming
* feat(azureOpenAI): use validated model grouping for request authentication
* chore: bump data-provider following rebase
* chore: bump config file version noting significant changes
* feat: add title options and switch azure configs for titling and vision requests
* feat: enable azure plugins from config file
* fix(ci): pass tests
* chore(.env.example): mark `PLUGINS_USE_AZURE` as deprecated
* fix(fetchModels): early return if apiKey not passed
* chore: fix azure config typing
* refactor(mapModelToAzureConfig): return baseURL and headers as well as azureOptions
* feat(createLLM): use `azureOpenAIBasePath`
* feat(parsers): resolveHeaders
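A sketch of what a `resolveHeaders` parser can do: substitute `${ENV_VAR}` placeholders in header values from the environment, leaving unknown placeholders intact. The exact substitution rules in LibreChat may differ:

```javascript
// Resolve ${VAR} placeholders in each header value from env; values
// without placeholders pass through unchanged.
function resolveHeaders(headers = {}, env = process.env) {
  const resolved = {};
  for (const [key, value] of Object.entries(headers)) {
    resolved[key] = value.replace(/\$\{([A-Z0-9_]+)\}/g, (full, name) =>
      env[name] !== undefined ? env[name] : full
    );
  }
  return resolved;
}
```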
* refactor(extractBaseURL): handle invalid input
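Handling invalid input can be sketched like this: trim an OpenAI-style URL down to its `/v1` base and return `null` for non-string input or URLs without a `/v1` segment. The exact rules in LibreChat's `extractBaseURL` may differ:

```javascript
// Extract everything up to and including "/v1"; null for anything else.
function extractBaseURL(url) {
  if (typeof url !== 'string') return null;
  const match = url.match(/^(https?:\/\/[^\s]+?\/v1)(\/|$)/);
  return match ? match[1] : null;
}
```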
* feat(OpenAIClient): handle headers and baseURL for azureConfig
* fix(ci): pass `OpenAIClient` tests
* chore: extract env var for azureOpenAI group config, baseURL
* docs: azureOpenAI config setup docs
* feat: safe check of potential conflicting env vars that map to unique placeholders
* fix: reset apiKey when model switches from originally requested model (vision or title)
* chore: linting
* docs: CONFIG_PATH notes in custom_config.md