* feat: send the LibreChat user ID as a query param when fetching the list of models
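  A minimal sketch of the idea (helper name and shape are illustrative, not the exact LibreChat code), assuming an OpenAI-style `/models` route and a per-endpoint `userIdQuery` flag:

```ts
import axios from 'axios';

// Hypothetical fetchModels sketch: when the endpoint config sets
// `userIdQuery: true`, the requesting user's ID is appended as a query param.
async function fetchModels(
  baseURL: string,
  apiKey: string,
  userId: string,
  userIdQuery = false,
): Promise<string[]> {
  const url = userIdQuery ? `${baseURL}/models?user=${userId}` : `${baseURL}/models`;
  const { data } = await axios.get(url, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return data.data.map((model: { id: string }) => model.id);
}
```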
* chore: update bun
* chore: change bun command for building data-provider
* refactor: prefer use of `getCustomConfig` to access custom config, also move to `server/services/Config`
* refactor: make endpoints/custom option for the config optional, add userIdQuery, and use modelQueries log store in ModelService
* refactor(ModelService): use env variables at runtime, use default models from data-provider, and add tests
* docs: add `userIdQuery`
* fix(ci): import changed
* WIP(backend/api): custom endpoint
* WIP(frontend/client): custom endpoint
* chore: adjust typedefs for configs
* refactor: use data-provider for cache keys and rename enums and custom endpoint for better clarity and compatibility
* feat: loadYaml utility
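  A sketch of such a utility using js-yaml (the real helper may differ in error handling):

```ts
import fs from 'fs';
import yaml from 'js-yaml';

// Read and parse a YAML file; return undefined instead of throwing,
// so callers can treat a missing or invalid config as "not configured".
function loadYaml(filepath: string): unknown {
  try {
    return yaml.load(fs.readFileSync(filepath, 'utf8'));
  } catch {
    return undefined;
  }
}
```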
* refactor: rename back to `from`, and proof-of-concept for creating schemas from user-defined defaults
* refactor: remove custom endpoint from default endpointsConfig as it will be exclusively managed by yaml config
* refactor(EndpointController): rename variables for clarity
* feat: initial load custom config
* feat(server/utils): add simple `isUserProvided` helper
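  The helper is likely a one-liner along these lines:

```ts
// A value of the literal string 'user_provided' means the user must
// supply their own credential at runtime rather than it coming from env.
const isUserProvided = (value?: string): boolean => value === 'user_provided';
```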
* chore(types): update TConfig type
* refactor: remove custom endpoint handling from model services as it will be handled by config; modularize fetching of models
* feat: loadCustomConfig, loadConfigEndpoints, loadConfigModels
* chore: reorganize server init imports, invoke loadCustomConfig
* refactor(loadConfigEndpoints/Models): return each custom endpoint as a standalone endpoint
* refactor(Endpoint/ModelController): spread config values after default (temporary)
* chore(client): fix type issues
* WIP: first pass for multiple custom endpoints
- add endpointType to Conversation schema
- update zod schemas for both convos/presets to allow a non-EModelEndpoint value as endpoint (also using type assertion; see the sketch after this list)
- use `endpointType` value as `endpoint` where mapping to type is necessary using this field
- use custom defined `endpoint` value and not type for mapping to modelsConfig
- misc: add return type to `getDefaultEndpoint`
- in `useNewConvo`, add the endpointType if it wasn't already added to conversation
- EndpointsMenu: use user-defined endpoint name as Title in menu
- TODO: custom icon via custom config, change unknown to robot icon
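  A sketch of the loosened schema described in the second item above (enum members abbreviated; the real schemas live in the data-provider package):

```ts
import { z } from 'zod';

// Abbreviated stand-in for the real enum.
enum EModelEndpoint {
  openAI = 'openAI',
  google = 'google',
  anthropic = 'anthropic',
  custom = 'custom',
}

// `endpoint` may now be any string (a user-defined custom endpoint name),
// while `endpointType` records the known type it should be treated as.
const convoEndpointFields = z.object({
  endpoint: z.union([z.nativeEnum(EModelEndpoint), z.string()]).nullable(),
  endpointType: z.nativeEnum(EModelEndpoint).optional(),
});
```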
* refactor(parseConvo): pass args as an object and change where used accordingly; chore: comment out 'create schema' code
* chore: remove unused availableModels field in TConfig type
* refactor(parseCompactConvo): pass args as an object and change where used accordingly
* feat: chat through custom endpoint
* chore(message/convoSchemas): avoid saving empty arrays
* fix(BaseClient/saveMessageToDatabase): save endpointType
* refactor(ChatRoute): show Spinner if endpointsQuery or modelsQuery are still loading, which is apparent with slow fetching of models/remote config on first serve
* fix(useConversation): assign endpointType if it's missing
* fix(SaveAsPreset): pass real endpoint and endpointType when saving a Preset
* chore: reorganize types order for TConfig, add `iconURL`
* feat: custom endpoint icon support:
- use UnknownIcon in all icon contexts
- add mistral and openrouter as known endpoints, and add their icons
- iconURL support
* fix(presetSchema): move endpointType to default schema definitions shared between convoSchema and defaults
* refactor(Settings/OpenAI): remove legacy `isOpenAI` flag
* fix(OpenAIClient): do not invoke abortCompletion on completion error
* feat: add responseSender/label support for custom endpoints:
- use defaultModelLabel field in endpointOption
- add model defaults for custom endpoints in `getResponseSender` (see the sketch after this list)
- add `useGetSender` hook which uses EndpointsQuery to determine `defaultModelLabel`
- include defaultModelLabel from endpointConfig in custom endpoint client options
- pass `endpointType` to `getResponseSender`
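  A hedged sketch of the default resolution described above (field names follow this list; `defaultModelLabel` is renamed to `modelDisplayLabel` a few commits later):

```ts
type SenderOptions = {
  model?: string;
  modelLabel?: string;        // user-set label for the response sender
  defaultModelLabel?: string; // from the custom endpoint's config
};

// Prefer the explicit label, then the endpoint config's default,
// then fall back to the model name itself.
function getResponseSender({ model, modelLabel, defaultModelLabel }: SenderOptions): string {
  return modelLabel ?? defaultModelLabel ?? model ?? 'AI';
}
```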
* feat(OpenAIClient): use custom options from config file
* refactor: rename `defaultModelLabel` to `modelDisplayLabel`
* refactor(data-provider): separate concerns from `schemas` into `parsers`, `config`, and fix imports elsewhere
* feat: `iconURL` and extract environment variables from custom endpoint config values
* feat: custom config validation via zod schema, rename and move to `./projectRoot/librechat.yaml`
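  An abbreviated sketch of what the zod validation might look like (the authoritative schema is in the repo; the fields here are a subset):

```ts
import { z } from 'zod';

const customEndpointSchema = z.object({
  name: z.string(),
  apiKey: z.string(),   // may be 'user_provided' or an env var reference
  baseURL: z.string(),
  models: z.object({
    default: z.array(z.string()).min(1),
    fetch: z.boolean().optional(),
  }),
  iconURL: z.string().optional(),
  titleConvo: z.boolean().optional(),
});

const configSchema = z.object({
  version: z.string(),
  cache: z.boolean().optional(),
  endpoints: z
    .object({ custom: z.array(customEndpointSchema).optional() })
    .optional(),
});
```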
* docs: custom config docs and examples
* fix(OpenAIClient/mistral): mistral does not allow singular system message, also add `useChatCompletion` flag to use openai-node for title completions
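  A sketch of the Mistral workaround under one plausible reading of the fix (the exact transformation in OpenAIClient may differ):

```ts
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// If the payload would be a lone system message (e.g. a titling prompt),
// demote it to a user message so Mistral's API accepts the request.
function adjustForMistral(messages: ChatMessage[]): ChatMessage[] {
  if (messages.length === 1 && messages[0].role === 'system') {
    return [{ ...messages[0], role: 'user' }];
  }
  return messages;
}
```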
* fix(custom/initializeClient): extract env var and use `isUserProvided` function
* Update librechat.example.yaml
* feat(InputWithLabel): add className props, and forwardRef
* fix(streamResponse): handle error edge case where either messages or convos query throws an error
* fix(useSSE): handle errorHandler edge cases where the error response is or is not properly formatted by the API, especially when a conversationId is not yet provided, ensuring the stream is properly closed on error
* feat: user_provided keys for custom endpoints
* fix(config/endpointSchema): do not allow default endpoint values in custom endpoint `name`
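  Likely enforced with a zod refinement, roughly:

```ts
import { z } from 'zod';

// Reject custom endpoint names that collide with built-in endpoints.
const defaultEndpoints = ['openAI', 'azureOpenAI', 'google', 'anthropic', 'bingAI', 'gptPlugins', 'chatGPTBrowser'];

const endpointNameSchema = z.string().refine((name) => !defaultEndpoints.includes(name), {
  message: 'Custom endpoint name must not match a default endpoint',
});
```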
* feat(loadConfigModels): extract env variables and optimize fetching models
* feat: support custom endpoint iconURL for messages and Nav
* feat(OpenAIClient): add/dropParams support
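  The semantics are straightforward; a minimal sketch (not the client's exact code):

```ts
// Merge `addParams` into the request body, then delete any `dropParams`
// keys, letting a config override or strip fields the upstream API rejects.
function applyParamOverrides(
  body: Record<string, unknown>,
  addParams?: Record<string, unknown>,
  dropParams?: string[],
): Record<string, unknown> {
  const result = { ...body, ...addParams };
  for (const key of dropParams ?? []) {
    delete result[key];
  }
  return result;
}
```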
* docs: update docs with default params, add/dropParams, and notes to use config file instead of `OPENAI_REVERSE_PROXY`
* docs: update docs with additional notes
* feat(maxTokensMap): add mistral models (32k context)
* docs: update openrouter notes
* Update ai_setup.md
* docs(custom_config): add table of contents and fix note about custom name
* docs(custom_config): reorder ToC
* Update custom_config.md
* Add note about `max_tokens` field in custom_config.md
* feat: add GOOGLE_MODELS env var
* feat: add gemini vision support
* refactor(GoogleClient): adjust clientOptions handling depending on model
* fix(logger): fix redact logic and redact errors only
* fix(GoogleClient): do not allow non-multiModal messages when gemini-pro-vision is selected
* refactor(OpenAIClient): use `isVisionModel` client property to avoid calling validateVisionModel multiple times
* refactor: better debug logging by correctly traversing, redacting sensitive info, and logging condensed versions of long values
* refactor(GoogleClient): allow response errors to be thrown/caught above client handling so user receives meaningful error message
- debug orderedMessages, parentMessageId, and buildMessages result
* refactor(AskController): use model from client.modelOptions.model when saving intermediate messages, which requires the progress callback to be initialized after the client is initialized
* feat(useSSE): revert to previous model if the model was auto-switched by backend due to message attachments
* docs: update with google updates, notes about Gemini Pro Vision
* fix: do not initialize redis without USE_REDIS, and increase max listeners to 20
* refactor: add gemini-pro to google Models list; use defaultModels for central model listing
* refactor(SetKeyDialog): create useMultipleKeys hook to use for Azure, export `isJson` from utils, use EModelEndpoint
* refactor(useUserKey): change variable names to make keyName setting more clear
* refactor(FileUpload): allow passing container className string
* feat(GoogleClient): Gemini support
* refactor(GoogleClient): alternate stream speed for Gemini models
* feat(Gemini): styling/settings configuration for Gemini
* refactor(GoogleClient): subtract max response tokens from max context tokens if context is above 32k (I/O max is combined between the two)
* refactor(tokens): correct google max token counts and subtract max response tokens when input/output count are combined towards max context count
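  A worked example of the combined I/O budget (numbers are illustrative):

```ts
// When a model's token window is shared between input and output, the
// usable prompt context is the total window minus the reserved response.
const maxTotalTokens = 32_760;   // combined input + output limit (assumed)
const maxResponseTokens = 4_096; // reserved for the model's reply
const maxContextTokens = maxTotalTokens - maxResponseTokens; // 28_664 for the prompt
```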
* feat(google/initializeClient): handle both local and user_provided credentials and write tests
* fix(GoogleClient): catch if credentials are undefined, handle if serviceKey is a string or object correctly, handle no examples passed, throw error if it is not a Generative Language model and no service account JSON key is provided, throw error if it is a Generative Language model but no Google API key was provided
* refactor(loadAsyncEndpoints/google): activate Google endpoint if either the service key JSON file is provided in /api/data, or a GOOGLE_KEY is defined.
* docs: updated Google configuration
* fix(ci): Mock import of Service Account Key JSON file (auth.json)
* Update apis_and_tokens.md
* feat: increase max output tokens slider for gemini pro
* refactor(GoogleSettings): handle max and default maxOutputTokens on model change
* chore: add sensitive redact regex
* docs: add warning about data privacy
* Update apis_and_tokens.md
* WIP: initial logging changes
- add several transports in ~/config/winston
- omit messages in logs, truncate long strings
- add short blurb in dotenv for debug logging
- GoogleClient: using logger
- OpenAIClient: using logger, handleOpenAIErrors
- add typedef for payload message
- bump winston and use winston-daily-rotate-file
- move config for server paths to ~/config dir
- add `DEBUG_LOGGING=true` to .env.example
* WIP: Refactor logging statements in code
* WIP: Refactor logging statements and import configurations
* WIP: Refactor logging statements and import configurations
* refactor: broadcast Redis initialization message with `info` not `debug`
* refactor: complete Refactor logging statements and import configurations
* chore: delete unused tools
* fix: circular dependencies due to accessing logger
* refactor(handleText): handle booleans and write tests
* refactor: redact sensitive values, better formatting
* chore: improve log formatting, avoid passing strings to 2nd arg
* fix(ci): fix jest tests due to logger changes
* refactor(getAvailablePluginsController): cache plugins as they are static, avoiding the async addOpenAPISpecs call every time
* chore: update docs
* chore: update docs
* chore: create separate meiliSync logger, clean up logs to avoid being unnecessarily verbose
* chore: spread objects where they are commonly logged to allow string truncation
* chore: improve error log formatting
* refactor: move endpoint services to own directory
* refactor: make endpoint config handling more concise, separate logic, and cache result for subsequent serving
* refactor: ModelController gets same treatment as EndpointController, draft OverrideController
* WIP: flesh out override controller further to return a real value
* refactor: client/api changes in anticipation of override
* fix: correct preset title for Anthropic endpoint
* fix(Settings/Anthropic): show correct default value for LLM temperature
* fix(AnthropicClient): use `getModelMaxTokens` to get the correct LLM max context tokens, correctly set default temperature to 1, use only 2 params for class constructor, use `getResponseSender` to add correct sender to response message
* refactor(/api/ask|edit/anthropic): save messages to database after the final response is sent to the client, and do not save conversation from route controller
* fix(initializeClient/anthropic): correctly pass client options (endpointOption) to class initialization
* feat(ModelService/Anthropic): add claude-1.2
* feat: add claude-2.1 to default anthropic models
* chore: remove console log in NavLinks
* fix: issue with response sender not using model name, change anthropic default value to Claude
* fix: preset will not be selected on edit
* feat(azureOpenAI): allow switching deployment name by model name
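  A rough sketch of the idea (URL shape per Azure's API; the option names in LibreChat may differ):

```ts
// Build the Azure OpenAI completions URL; switching deployment by model
// means passing the selected model name as the deployment segment.
function azureCompletionsURL(instanceName: string, deploymentName: string, apiVersion: string): string {
  return (
    `https://${instanceName}.openai.azure.com/openai/deployments/` +
    `${deploymentName}/chat/completions?api-version=${apiVersion}`
  );
}

// Hypothetical usage: the selected model doubles as the deployment name.
const url = azureCompletionsURL('my-instance', 'gpt-4', '2023-07-01-preview');
```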
* ci: add unit tests and throw error when no api key is provided, to avoid an API call
* fix(gptPlugins/initializeClient): check if azure is enabled; ci: add unit tests for gptPlugins/initializeClient
* fix(ci): fix expected error message for partial regex match: unexpected token
* refactor: use keyv for search caching with 1 min expirations
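  A sketch of the pattern with keyv (store and key shape are illustrative):

```ts
import Keyv from 'keyv';

// In-memory cache whose entries expire after one minute.
const searchCache = new Keyv({ namespace: 'search', ttl: 60_000 });

async function cachedSearch(query: string, runSearch: (q: string) => Promise<unknown>) {
  const hit = await searchCache.get(query);
  if (hit !== undefined) {
    return hit;
  }
  const result = await runSearch(query);
  await searchCache.set(query, result);
  return result;
}
```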
* feat: keyvRedis; chore: bump keyv, bun.lockb, add jsconfig for vscode file resolution
* feat: api/search redis support
* refactor(redis) use ioredis cluster for keyv
- fix(OpenID): when redis is configured, use redis memory store for express-session
* fix: revert using uri for keyvredis
* fix(SearchBar): properly debounce search queries, fix weird render behaviors
* refactor: add authentication to search endpoint and show error messages in results
* feat: redis support for violation logs
* fix(logViolation): ensure a number is always being stored in cache
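  A sketch of the fix (cache and key names are illustrative):

```ts
import Keyv from 'keyv';

const violationCache = new Keyv({ namespace: 'violations' });

// Coerce the cached value to a number before incrementing, so a
// stringified value coming back from Redis cannot yield '01' or NaN.
async function incrementViolation(key: string): Promise<number> {
  const raw = await violationCache.get(key);
  const count = (typeof raw === 'number' ? raw : Number(raw) || 0) + 1;
  await violationCache.set(key, count);
  return count;
}
```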
* feat(concurrentLimiter): uses clearPendingReq, clears pendingReq on abort, redis support
* fix(api/search/enable): query only when authenticated
* feat(ModelService): redis support
* feat(checkBan): redis support
* refactor(api/search): consolidate keyv logic
* fix(ci): add default empty value for REDIS_URI
* refactor(keyvRedis): use condition to initialize keyvRedis assignment
* refactor(connectDb): handle disconnected state (should create a new conn)
* fix(ci/e2e): handle case where cleanUp did not successfully run
* fix(getDefaultEndpoint): return endpoint from localStorage if defined and endpointsConfig is default
* ci(e2e): remove afterAll messages as startup/cleanUp will clear messages
* ci(e2e): remove teardown for CI until further notice
* chore: bump playwright/test
* ci(e2e): reinstate teardown as CI issue is specific to github env
* fix(ci): click settings menu trigger by testid
* fix(OpenAIClient): handle completions request in reverse proxy, also force prompt by env var
* fix(reverseProxyUrl): allow url without /v1/ but add server warning as it will not be compatible with plugins
* fix(ModelService): handle reverse proxy without v1
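  One plausible shape of the /v1 handling these three fixes converge on (a sketch, not the exact ModelService code):

```ts
// Derive the models URL from a reverse proxy URL that may or may not
// include a /v1 path; warn when it does not, since plugins expect /v1.
function modelsEndpoint(reverseProxyUrl: string): string {
  const base = reverseProxyUrl.replace(/\/+$/, '');
  if (base.includes('/v1')) {
    return `${base.split('/v1')[0]}/v1/models`;
  }
  console.warn('Reverse proxy URL has no /v1 path; it will not be compatible with plugins.');
  return `${base}/models`;
}
```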
* refactor: make changes cleaner
* ci(OpenAIClient): add tests for OPENROUTER_API_KEY, FORCE_PROMPT, and reverseProxyUrl handling in setOptions
* chore(ChatGPTClient.js): add support for OpenRouter API
- chore(OpenAIClient.js): add support for OpenRouter API
* chore: comment out token debugging
* chore: add back streamResult assignment
* chore: remove double condition/assignment from merging
* refactor(routes/endpoints): -> controller/services logic
* feat: add openrouter model fetching
* chore: remove unused endpointsConfig in cleanupPreset function
* refactor: separate models concern from endpointsConfig
* refactor(data-provider): add TModels type and make TEndpointsConfig adaptible to new endpoint keys
* refactor: complete models endpoint service in data-provider
* refactor: onMutate for refreshToken and login, invalidate models query
* feat: complete models endpoint logic for frontend
* chore: remove requireJwtAuth from /api/endpoints and /api/models as not implemented yet
* fix: endpoint will not be overwritten and will instead use the active value
* feat: openrouter support for plugins
* chore(EndpointOptionsDialog): remove unused recoil value
* refactor(schemas/parseConvo): add handling of secondaryModels to use the first of the defined secondary models (the last selected one is ordered first), or default to the convo's secondary model value
* refactor: remove hooks from store and move to hooks
- refactor(switchToConversation): use latest recoil state, which is necessary to get the most up-to-date models list; replace wrapper function
- refactor(getDefaultConversation): factor out logic into 3 pieces to reduce complexity
* fix: backend tests
* feat: optimistic update by calling newConvo when models are fetched
* feat: openrouter support for titling convos
* feat: cache models fetch
* chore: add missing dep to AuthContext useEffect
* chore: fix useTimeout types
* chore: delete old getDefaultConvo file
* chore: remove newConvo logic from Root, remove console log from api models caching
* chore: ensure bun is used for building in b:client script
* fix: default endpoint will not default to null on a completely fresh login (no localStorage/cookies)
* chore: add openrouter docs to free_ai_apis.md and .env.example
* chore: remove openrouter console logs
* feat: add debugging env variable for Plugins