* chore: use node-fetch for OpenAIClient fetch key for non-crashing usage of AbortController in Bun runtime
* chore: variable order
* fix(useSSE): prevent finalHandler call in abortConversation from updating messages/conversation after the user has navigated away
* chore: params order
* refactor: organize intermediate message logic and ensure correct variables are passed
* fix: Add stt and tts routes before upload limiters, prevent bans
* fix(abortRun): temp fix to delete unfinished messages to avoid message thread parent relationship issues
* refactor: Update AnthropicClient to use node-fetch for fetch key and add proxy support
* fix(gptPlugins): ensure parentMessageId/messageId relationship is maintained
* feat(BaseClient): custom fetch function to analyze/edit payloads just before sending (also prevents abortController crash on Bun runtime)
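The custom-fetch idea from this commit (intercepting the payload just before it goes out, which also sidesteps the AbortController crash when a supplied fetch implementation such as node-fetch is used under Bun) can be sketched roughly as follows. `createPayloadFetch` and its callback are hypothetical names for illustration, not the actual BaseClient API:

```typescript
// Hypothetical sketch: wrap a fetch-like function so the JSON payload can
// be inspected or edited just before the request is sent.
type FetchLike = (
  url: string,
  init?: { method?: string; body?: string; headers?: Record<string, string> },
) => Promise<unknown>;

function createPayloadFetch(
  baseFetch: FetchLike,
  onPayload: (payload: Record<string, unknown>) => Record<string, unknown>,
): FetchLike {
  return async (url, init = {}) => {
    if (typeof init.body === 'string') {
      try {
        // Parse, let the caller transform, then re-serialize the body.
        const edited = onPayload(JSON.parse(init.body));
        init = { ...init, body: JSON.stringify(edited) };
      } catch {
        // Non-JSON bodies pass through untouched.
      }
    }
    return baseFetch(url, init);
  };
}
```

The wrapped function would then be handed to the client in place of the runtime's global fetch (e.g. node-fetch), which is what avoids the Bun AbortController issue mentioned above.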
* feat: `directEndpoint` and `titleMessageRole` custom endpoint options
* chore: Bump version to 0.6.6 in data-provider package.json
* chore: bump browserslist-db@latest
* refactor(EndpointService): simplify with `generateConfig`, utilize optional baseURL for OpenAI-based endpoints, use `isUserProvided` helper fn wherever needed
* refactor(custom/initializeClient): use standardized naming for common variables
* feat: user provided baseURL for openAI-based endpoints
* refactor(custom/initializeClient): re-order operations
* fix: `KnownEndpoints` enum definition and add FetchTokenConfig, bump data-provider
* refactor(custom): use tokenKey dependent on userProvided conditions for caching and fetching endpointTokenConfig, anticipate token rates from custom config
* refactor(custom): ensure endpointTokenConfig is only accessed from cache if it qualifies for fetching
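The user-provided-dependent cache key described in these two commits might be sketched as below; `getTokenKey` is a hypothetical helper name, and the exact key format is an assumption, but the idea is that user-provided credentials force a per-user cache entry so one user's endpointTokenConfig is never served to another:

```typescript
// Hypothetical sketch: when the key or baseURL is user-provided, scope the
// cache key to the user; otherwise a single shared entry per endpoint is fine.
function getTokenKey(endpoint: string, userId: string, userProvided: boolean): string {
  return userProvided ? `${endpoint}:${userId}` : endpoint;
}
```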
* fix(ci): update tests for initializeClient based on user-provided baseURL changes
* fix(EndpointService): correct baseURL env var for assistants: `ASSISTANTS_BASE_URL`
* fix: unnecessary run cancellation on res.close() when response.run is completed
* feat(assistants): user provided URL option
* ci: update tests and add test for `assistants` endpoint
* chore: leaner condition for request closing
* chore: more descriptive error message to provide keys again
* wip: first pass for azure endpoint schema
* refactor: azure config to return groupMap and modelConfigMap
* wip: naming and schema changes
* refactor(errorsToString): move to data-provider
* feat: rename to azureGroups, add additional tests to cover all expected outcomes, return errors
* feat(AppService): load Azure groups
* refactor(azure): use imported types, write `mapModelToAzureConfig`
* refactor: move `extractEnvVariable` to data-provider
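The `extractEnvVariable` helper moved here lives in data-provider; a minimal sketch of the `${VAR_NAME}` substitution such a helper performs (details assumed, not the actual implementation) could be:

```typescript
// Minimal sketch of a `${VAR_NAME}` placeholder resolver: if the whole value
// is a single placeholder and the env var exists, return the env var's value;
// otherwise return the input unchanged.
function extractEnvVariable(value: string): string {
  const match = value.match(/^\$\{(\w+)\}$/);
  if (match && process.env[match[1]] !== undefined) {
    return process.env[match[1]] as string;
  }
  return value;
}
```

Leaving unresolved placeholders intact (rather than substituting an empty string) makes misconfigured env vars easy to spot downstream.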
* refactor(validateAzureGroups): throw on duplicate groups or models; feat(mapModelToAzureConfig): throw if env vars not present, add tests
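The duplicate-group/duplicate-model check this commit describes might look like the sketch below. The real `validateAzureGroups` is a zod-based schema in data-provider; the `AzureGroup` shape and `checkDuplicates` name here are illustrative assumptions:

```typescript
// Hypothetical shapes; the real validation is a zod schema in data-provider.
interface AzureGroup {
  group: string;
  models: Record<string, unknown>;
}

// Throw if two groups share a name, or a model appears in more than one group.
function checkDuplicates(groups: AzureGroup[]): void {
  const groupNames = new Set<string>();
  const modelNames = new Set<string>();
  for (const { group, models } of groups) {
    if (groupNames.has(group)) {
      throw new Error(`Duplicate group name: ${group}`);
    }
    groupNames.add(group);
    for (const model of Object.keys(models)) {
      if (modelNames.has(model)) {
        throw new Error(`Duplicate model name: ${model}`);
      }
      modelNames.add(model);
    }
  }
}
```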
* refactor(AppService): ensure each model is properly configured on startup
* refactor: deprecate azureOpenAI environment variables in favor of librechat.yaml config
* feat: use helper functions to handle and order enabled/default endpoints; initialize azureOpenAI from config file
* refactor: redefine types as well as load azureOpenAI models from config file
* chore(ci): fix test description naming
* feat(azureOpenAI): use validated model grouping for request authentication
* chore: bump data-provider following rebase
* chore: bump config file version noting significant changes
* feat: add title options and switch azure configs for titling and vision requests
* feat: enable azure plugins from config file
* fix(ci): pass tests
* chore(.env.example): mark `PLUGINS_USE_AZURE` as deprecated
* fix(fetchModels): early return if apiKey not passed
* chore: fix azure config typing
* refactor(mapModelToAzureConfig): return baseURL and headers as well as azureOptions
* feat(createLLM): use `azureOpenAIBasePath`
* feat(parsers): resolveHeaders
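A `resolveHeaders` helper as named here would plausibly apply the same `${VAR}` substitution across a headers object; the following is a self-contained sketch under that assumption, not the actual parser:

```typescript
// Sketch: resolve `${VAR}` placeholders in every header value, leaving
// values without a matching env var untouched.
function resolveHeaders(headers: Record<string, string> = {}): Record<string, string> {
  const resolved: Record<string, string> = {};
  for (const [key, value] of Object.entries(headers)) {
    const match = value.match(/^\$\{(\w+)\}$/);
    resolved[key] =
      match && process.env[match[1]] !== undefined ? (process.env[match[1]] as string) : value;
  }
  return resolved;
}
```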
* refactor(extractBaseURL): handle invalid input
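The invalid-input handling added in this commit amounts to returning a null/empty result instead of throwing on malformed input. A minimal sketch (the real `extractBaseURL` also normalizes known reverse-proxy path patterns, which is omitted here):

```typescript
// Sketch: return the origin + pathname of a valid URL (without a trailing
// slash), or null for missing/invalid input instead of throwing.
function extractBaseURL(input?: string | null): string | null {
  if (typeof input !== 'string' || input.length === 0) {
    return null;
  }
  try {
    const url = new URL(input);
    return `${url.origin}${url.pathname.replace(/\/$/, '')}`;
  } catch {
    return null;
  }
}
```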
* feat(OpenAIClient): handle headers and baseURL for azureConfig
* fix(ci): pass `OpenAIClient` tests
* chore: extract env var for azureOpenAI group config, baseURL
* docs: azureOpenAI config setup docs
* feat: safely check for potentially conflicting env vars that map to unique placeholders
* fix: reset apiKey when model switches from originally requested model (vision or title)
* chore: linting
* docs: CONFIG_PATH notes in custom_config.md
* fix: handle non-assistant role ChatCompletionMessage error
* refactor(ModelController): decouple res.send from loading/caching models
* fix(custom/initializeClient): only fetch custom endpoint models if models.fetch is true
* refactor(validateModel): load models if modelsConfig is not yet cached
* docs: update on file upload rate limiting
* feat: send the LibreChat user ID as a query param when fetching the list of models
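Appending the user ID when fetching models (gated behind the `userIdQuery` option introduced in a later commit) can be sketched with the WHATWG URL API. `buildModelsURL` and the query key `user` are assumptions for illustration, not the actual ModelService code:

```typescript
// Sketch: append the requesting user's ID to the models endpoint URL only
// when the admin has opted in via `userIdQuery`.
function buildModelsURL(baseURL: string, userId?: string, userIdQuery = false): string {
  const url = new URL(`${baseURL.replace(/\/$/, '')}/models`);
  if (userIdQuery && userId) {
    url.searchParams.set('user', userId);
  }
  return url.toString();
}
```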
* chore: update bun
* chore: change bun command for building data-provider
* refactor: prefer use of `getCustomConfig` to access custom config, also move to `server/services/Config`
* refactor: make endpoints/custom option for the config optional, add userIdQuery, and use modelQueries log store in ModelService
* refactor(ModelService): use env variables at runtime, use default models from data-provider, and add tests
* docs: add `userIdQuery`
* fix(ci): import changed
* style(Icon): remove error bubble from message icon
* fix(custom): `initializeClient` now throws error if apiKey or baseURL are admin provided but no env var was found
* refactor(tPresetSchema): match `conversationId` type to `tConversationSchema` but optional, use `extendedModelEndpointSchema` for `endpoint`
* fix(useSSE): minor improvements
- use `completed` set to avoid submitting an unnecessary abort request
- set preset with `newConversation` calls using initial conversation settings to prevent default Preset override as well as default settings
- return early if there is a parsing error within `onerror`, as expected errors from the server are properly formatted
* WIP(backend/api): custom endpoint
* WIP(frontend/client): custom endpoint
* chore: adjust typedefs for configs
* refactor: use data-provider for cache keys and rename enums and custom endpoint for better clarity and compatibility
* feat: loadYaml utility
* refactor: rename back, and proof-of-concept for creating schemas from user-defined defaults
* refactor: remove custom endpoint from default endpointsConfig as it will be exclusively managed by yaml config
* refactor(EndpointController): rename variables for clarity
* feat: initial load custom config
* feat(server/utils): add simple `isUserProvided` helper
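The `isUserProvided` helper is simple enough to sketch in full: a credential value set to the literal string `user_provided` signals that the user supplies the key (or URL) at runtime rather than the admin via env vars:

```typescript
// Sketch of the helper: the literal string 'user_provided' marks a value
// that the end user must supply, rather than the server environment.
function isUserProvided(value?: string): boolean {
  return value === 'user_provided';
}
```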
* chore(types): update TConfig type
* refactor: remove custom endpoint handling from model services as will be handled by config, modularize fetching of models
* feat: loadCustomConfig, loadConfigEndpoints, loadConfigModels
* chore: reorganize server init imports, invoke loadCustomConfig
* refactor(loadConfigEndpoints/Models): return each custom endpoint as standalone endpoint
* refactor(Endpoint/ModelController): spread config values after default (temporary)
* chore(client): fix type issues
* WIP: first pass for multiple custom endpoints
- add endpointType to Conversation schema
- update zod schemas for both convo/presets to allow non-EModelEndpoint value as endpoint (also using type assertion)
- use `endpointType` value as `endpoint` where mapping to type is necessary using this field
- use custom defined `endpoint` value and not type for mapping to modelsConfig
- misc: add return type to `getDefaultEndpoint`
- in `useNewConvo`, add the endpointType if it wasn't already added to conversation
- EndpointsMenu: use user-defined endpoint name as Title in menu
- TODO: custom icon via custom config, change unknown to robot icon
* refactor(parseConvo): pass args as an object and change where used accordingly; chore: comment out 'create schema' code
* chore: remove unused availableModels field in TConfig type
* refactor(parseCompactConvo): pass args as an object and change where used accordingly
* feat: chat through custom endpoint
* chore(message/convoSchemas): avoid saving empty arrays
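Avoiding empty arrays on save could be done with a strip step like the one below; `omitEmptyArrays` is a hypothetical helper, not the actual mongoose schema hook used in the commit:

```typescript
// Sketch: drop empty-array fields before persisting, so documents don't
// accumulate meaningless `[]` values.
function omitEmptyArrays<T extends Record<string, unknown>>(doc: T): Partial<T> {
  const out: Partial<T> = {};
  for (const [key, value] of Object.entries(doc)) {
    if (Array.isArray(value) && value.length === 0) {
      continue; // skip empty arrays entirely
    }
    (out as Record<string, unknown>)[key] = value;
  }
  return out;
}
```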
* fix(BaseClient/saveMessageToDatabase): save endpointType
* refactor(ChatRoute): show Spinner if endpointsQuery or modelsQuery are still loading, which is apparent with slow fetching of models/remote config on first serve
* fix(useConversation): assign endpointType if it's missing
* fix(SaveAsPreset): pass real endpoint and endpointType when saving a Preset
* chore: reorganize types order for TConfig, add `iconURL`
* feat: custom endpoint icon support:
- use UnknownIcon in all icon contexts
- add mistral and openrouter as known endpoints, and add their icons
- iconURL support
* fix(presetSchema): move endpointType to default schema definitions shared between convoSchema and defaults
* refactor(Settings/OpenAI): remove legacy `isOpenAI` flag
* fix(OpenAIClient): do not invoke abortCompletion on completion error
* feat: add responseSender/label support for custom endpoints:
- use defaultModelLabel field in endpointOption
- add model defaults for custom endpoints in `getResponseSender`
- add `useGetSender` hook which uses EndpointsQuery to determine `defaultModelLabel`
- include defaultModelLabel from endpointConfig in custom endpoint client options
- pass `endpointType` to `getResponseSender`
* feat(OpenAIClient): use custom options from config file
* refactor: rename `defaultModelLabel` to `modelDisplayLabel`
* refactor(data-provider): separate concerns from `schemas` into `parsers`, `config`, and fix imports elsewhere
* feat: `iconURL` and extract environment variables from custom endpoint config values
* feat: custom config validation via zod schema, rename and move to `./projectRoot/librechat.yaml`
* docs: custom config docs and examples
* fix(OpenAIClient/mistral): mistral does not allow singular system message, also add `useChatCompletion` flag to use openai-node for title completions
* fix(custom/initializeClient): extract env var and use `isUserProvided` function
* Update librechat.example.yaml
* feat(InputWithLabel): add className props, and forwardRef
* fix(streamResponse): handle error edge case where either messages or convos query throws an error
* fix(useSSE): handle errorHandler edge cases where the error response is or is not properly formatted by the API, especially when a conversationId is not yet provided; ensures the stream is properly closed on error
* feat: user_provided keys for custom endpoints
* fix(config/endpointSchema): do not allow default endpoint values in custom endpoint `name`
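The rule this commit enforces, that a custom endpoint's `name` may not shadow a built-in endpoint, could be sketched as below. The real check is a zod refinement in the config schema, and the name list here is illustrative, not exhaustive:

```typescript
// Sketch of the rule: a custom endpoint may not reuse a built-in endpoint
// name. This list is an illustrative assumption, not the canonical enum.
const defaultEndpoints = new Set([
  'openAI',
  'azureOpenAI',
  'google',
  'anthropic',
  'assistants',
  'gptPlugins',
]);

function isValidCustomEndpointName(name: string): boolean {
  return name.trim().length > 0 && !defaultEndpoints.has(name);
}
```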
* feat(loadConfigModels): extract env variables and optimize fetching models
* feat: support custom endpoint iconURL for messages and Nav
* feat(OpenAIClient): add/dropParams support
* docs: update docs with default params, add/dropParams, and notes to use config file instead of `OPENAI_REVERSE_PROXY`
* docs: update docs with additional notes
* feat(maxTokensMap): add mistral models (32k context)
* docs: update openrouter notes
* Update ai_setup.md
* docs(custom_config): add table of contents and fix note about custom name
* docs(custom_config): reorder ToC
* Update custom_config.md
* Add note about `max_tokens` field in custom_config.md