* style: update cursor to match ChatGPT
* style(Markdown.tsx): add space before cursor when there is text
* fix: revert OpenAIClient.tokens.js change
* fix(Markdown.tsx): revert change to unused file
* fix(convos.spec.ts): test fix
* chore: remove raw HTML for cursor animations
---------
Co-authored-by: Danny Avila <danacordially@gmail.com>
Co-authored-by: Danny Avila <messagedaniel@protonmail.com>
* refactor(DALL-E): retrieve env variables at runtime and not from memory
* feat(plugins): add alternate env variable handling to allow setting one api key for multiple plugins
* docs: update docs
* feat: send the LibreChat user ID as a query param when fetching the list of models
* chore: update bun
* chore: change bun command for building data-provider
* refactor: prefer use of `getCustomConfig` to access custom config, also move to `server/services/Config`
* refactor: make endpoints/custom option for the config optional, add userIdQuery, and use modelQueries log store in ModelService
* refactor(ModelService): use env variables at runtime, use default models from data-provider, and add tests
* docs: add `userIdQuery`
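For illustration, a rough sketch of how a model-list fetch might append the user ID when `userIdQuery` is enabled; the parameter names and the `user` query key are assumptions, not the actual ModelService contract:

```js
// Hypothetical sketch only: append the user ID as a query param when `userIdQuery` is enabled.
const axios = require('axios');

async function fetchModels({ baseURL, apiKey, userId, userIdQuery = false }) {
  const url = new URL(`${baseURL}/models`);
  if (userIdQuery && userId) {
    url.searchParams.set('user', userId); // query key name is an assumption
  }
  const { data } = await axios.get(url.toString(), {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return Array.isArray(data?.data) ? data.data.map((model) => model.id) : [];
}
```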
* fix(ci): import changed
* refactor(Login & Registration)
* fix(Registration): test errors
* refactor(LoginForm & ResetPassword)
* fix(LoginForm): 'undefined' displayed when loading the page; style(SocialButton): match OpenAI's graphics
* refactor & style: update social logins
* style: width like OpenAI; feat: custom social login order; refactor: alphabetical socials
* fix(Registration & Login): tests
* Update .env.example
* Update dotenv.md
* refactor: remove `SOCIAL_LOGIN_ORDER` in favor of `socialLogins` configured from `librechat.yaml`
- initialized by AppService, attached as app.locals property
- rename socialLoginOrder and loginOrder to socialLogins app-wide for consistency
- update types and docs
- initialize config variable as array and not singular string to parse
- bump data-provider to 0.3.9
---------
Co-authored-by: Danny Avila <messagedaniel@protonmail.com>
* feat: allow registration only from certain email domains
* Update dotenv.md
* refactor(registrationController): handle the case where ALLOWED_REGISTRATION_DOMAINS is not specified
* refactor: clean up and move the domain check to AuthService for better error handling
* refactor: replace environment variable with librechat config item, add typedef for custom config, update docs for new registration object and allowedDomains values
* ci(AuthService): test for `isDomainAllowed`
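For context, a minimal sketch of what an `isDomainAllowed` check could look like, assuming a `registration.allowedDomains` array in the custom config; the import path and exact semantics are assumptions:

```js
// Hypothetical sketch of a registration domain allow-list check.
const { getCustomConfig } = require('~/server/services/Config'); // path per the Config refactor; assumed

async function isDomainAllowed(email) {
  if (!email || !email.includes('@')) {
    return false;
  }
  const customConfig = await getCustomConfig();
  const allowedDomains = customConfig?.registration?.allowedDomains;
  if (!allowedDomains || allowedDomains.length === 0) {
    return true; // no restriction configured
  }
  const domain = email.split('@').pop().toLowerCase();
  return allowedDomains.some((allowed) => allowed.toLowerCase() === domain);
}
```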
---------
Co-authored-by: Danny Avila <messagedaniel@protonmail.com>
* Style: Infinite Scroll and Group convos by date
* Style: Infinite Scroll and Group convos by date - Redesign NavBar
* Style: Infinite Scroll and Group convos by date - Redesign NavBar - Clean code
* Style: Infinite Scroll and Group convos by date - Redesign NavBar - Redesign NewChat Component
* feat: include OpenRouter and Mistral icons
* refactor(Conversations): cleanup use of utility functions and typing
* refactor(Nav/NewChat): use localStorage `lastConversationSetup` to determine the endpoint to use, as well as icons -> JSX components, remove use of `endpointSelected`
* refactor: remove use of `isFirstToday`
* refactor(Nav): remove use of `endpointSelected`, consolidate scrolling logic to its own hook `useNavScrolling`, remove use of recoil `conversation`
* refactor: Add spinner to bottom of list, throttle fetching, move query hooks to client workspace
* chore: sort by `updatedAt` field
* refactor: optimize conversation infinite query, use optimistic updates, add conversation helpers for managing pagination, remove unnecessary operations
* feat: gen_title route for generating the title for the conversation
* style(Convo): change hover bg-color
* refactor: memoize groupedConversations and return as array of tuples, correctly update convos pre/post message stream, only call genTitle if conversation is new, make `addConversation` dynamically either add/update depending if convo exists in pages already, reorganize type definitions
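As an illustration of the grouping described above, a hedged sketch that buckets conversations by `updatedAt` and returns an array of tuples; the bucket labels and field names are assumptions:

```js
// Hypothetical sketch: group conversations into date buckets, deduplicating by conversationId.
function groupConversationsByDate(conversations) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const now = new Date();
  const startOfToday = new Date(now.getFullYear(), now.getMonth(), now.getDate());
  const groups = new Map([
    ['Today', []],
    ['Yesterday', []],
    ['Previous 7 Days', []],
    ['Previous 30 Days', []],
    ['Older', []],
  ]);
  const seen = new Set();

  for (const convo of conversations) {
    if (seen.has(convo.conversationId)) {
      continue; // never group duplicate conversations
    }
    seen.add(convo.conversationId);
    const updated = new Date(convo.updatedAt);
    let label = 'Older';
    if (updated >= startOfToday) {
      label = 'Today';
    } else {
      const daysBeforeToday = Math.ceil((startOfToday - updated) / msPerDay);
      if (daysBeforeToday === 1) label = 'Yesterday';
      else if (daysBeforeToday <= 7) label = 'Previous 7 Days';
      else if (daysBeforeToday <= 30) label = 'Previous 30 Days';
    }
    groups.get(label).push(convo);
  }

  return [...groups.entries()]; // array of [label, conversations] tuples
}
```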
* style: rename Header NewChat Button -> HeaderNewChat, add NewChatIcon, closely match main Nav New Chat button to ChatGPT
* style(NewChat): add hover bg color
* style: cleanup comments, match ChatGPT nav styling, redesign search bar, make part of new chat sticky header, move Nav under same parent as outlet/mobilenav, remove legacy code, search only if searchQuery is not empty
* feat: add tests for conversation helpers and ensure no duplicate conversations are ever grouped
* style: hover bg-color
* feat: alt-click on convo item to open conversation in new tab
* chore: send error message when `gen_title` fails
---------
Co-authored-by: Walber Cardoso <walbercardoso@gmail.com>
* refactor(custom): add all recognized models to maxTokensMap for custom endpoint
* feat(librechat.yaml): log the custom config file on initial load
* fix(OpenAIClient): pass endpointType/endpoint to `getModelMaxTokens` call
* refactor(gptPlugins): prevent edge case where the exact word `azure` in an API key would falsely trigger Azure key detection when it is not an Azure key
* refactor(SetKeyDialog): clean up OpenAI config, show 'set azure key' when the `PLUGINS_USE_AZURE` env var is enabled
* refactor(extractBaseURL): add handling for all possible Cloudflare AI Gateway endpoints
* chore: add endpointOption TODO for updating its type and optimizing handling app-wide
* feat(azureUtils):
- `genAzureChatCompletion`: allow optional client pass to update azure property
- `constructAzureURL`: optionally replace placeholders for instance and deployment names of an azure baseURL
- add tests for module
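A hedged sketch of the placeholder replacement `constructAzureURL` is described as doing; the placeholder tokens and parameter names below are assumptions, not the module's exact contract:

```js
// Hypothetical sketch: fill instance/deployment placeholders in an Azure baseURL.
function constructAzureURL({ baseURL, instanceName, deploymentName }) {
  let url = baseURL;
  if (instanceName) {
    url = url.replace('${INSTANCE_NAME}', instanceName);
  }
  if (deploymentName) {
    url = url.replace('${DEPLOYMENT_NAME}', deploymentName);
  }
  return url;
}

// e.g. constructAzureURL({
//   baseURL: 'https://${INSTANCE_NAME}.openai.azure.com/openai/deployments/${DEPLOYMENT_NAME}',
//   instanceName: 'my-instance',
//   deploymentName: 'gpt-4',
// });
```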
* refactor(extractBaseURL): return entire input when cloudflare `azure-openai` suffix detected
- also add more tests for both construct and extract URL
* refactor(genAzureChatCompletion): only allow omitting instance name if baseURL is not set
* refactor(initializeClient): determine `reverseProxyUrl` based on endpoint (azure or openai)
* refactor: utilize `constructAzureURL` when `AZURE_OPENAI_BASEURL` is set
* docs: update docs on `AZURE_OPENAI_BASEURL`
* fix(ci): update expected error message for `azureUtils` tests
* chore: fix `endpoint` typescript issues and typo in console info message
* feat(api): files GET endpoint and save only file_id references to messages
* refactor(client): `useGetFiles` query hook, update file types, optimistic update of filesQuery on file upload
* refactor(buildTree): update to use params object and accept fileMap
* feat: map files to messages; refactor(ChatView): messages only available after files are fetched
* fix: fetch files only when authenticated
* feat(api): AppService
- rename app.locals.configs to app.locals.paths
- load custom config and set fileStrategy from the yaml config in app.locals
* refactor: separate Firebase and Local strategies, call based on config
* refactor: modularize file strategies and employ with use of DALL-E
* refactor(librechat.yaml): add fileStrategy field
* feat: add source to MongoFile schema, as well as BatchFile, and ExtendedFile types
* feat: employ file strategies for upload/delete files
* refactor(deleteFirebaseFile): add user id validation for firebase file deletion
* chore(deleteFirebaseFile): update jsdocs
* feat: employ strategies for vision requests
* fix(client): handle messages with deleted files
* fix(client): ensure `filesToDelete` always saves/sends `file.source`
* feat(openAI): configurable `resendImages` and `imageDetail`
* refactor(getTokenCountForMessage): recursively process only arrays of objects, and only their values (not keys), aside from `image_url` types
* feat(OpenAIClient): calculateImageTokenCost
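For reference, a sketch of image token accounting based on OpenAI's published vision pricing (85 base tokens plus 170 per 512px tile for high detail); the function name follows the commit, but the signature and rounding details are assumptions:

```js
// Sketch of OpenAI-style vision token cost: flat 85 for 'low' detail,
// otherwise 85 + 170 per 512px tile after the documented resizing steps.
function calculateImageTokenCost({ width, height, detail = 'auto' }) {
  if (detail === 'low') {
    return 85;
  }

  let w = width;
  let h = height;

  // Fit within a 2048x2048 square.
  const maxSide = Math.max(w, h);
  if (maxSide > 2048) {
    const scale = 2048 / maxSide;
    w = Math.round(w * scale);
    h = Math.round(h * scale);
  }

  // Scale so the shortest side is at most 768px.
  const minSide = Math.min(w, h);
  if (minSide > 768) {
    const scale = 768 / minSide;
    w = Math.round(w * scale);
    h = Math.round(h * scale);
  }

  const tiles = Math.ceil(w / 512) * Math.ceil(h / 512);
  return 85 + 170 * tiles;
}
```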
* chore: remove comment
* refactor(uploadAvatar): employ fileStrategy for avatars, from social logins or user upload
* docs: update docs on how to configure fileStrategy
* fix(ci): mock winston and winston related modules, update DALLE3.spec.js with changes made
* refactor(redis): change terminal message to reflect current development state
* fix(DALL-E-2): pass fileStrategy to dall-e
* style(Icon): remove error bubble from message icon
* fix(custom): `initializeClient` now throws error if apiKey or baseURL are admin provided but no env var was found
* refactor(tPresetSchema): match `conversationId` type to `tConversationSchema` but optional, use `extendedModelEndpointSchema` for `endpoint`
* fix(useSSE): minor improvements
- use `completed` set to avoid submitting unnecessary abort requests
- set preset with `newConversation` calls using initial conversation settings to prevent default Preset override as well as default settings
- return if there is a parsing error within `onerror` as expected errors from server are properly formatted
* fix(custom): prevent presets using removed custom endpoints from causing frontend errors
* refactor(abortMiddleware): send 204 status when abortController is not found/active, set expected header `application/json` when not set
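A rough Express-style sketch of the 204 behavior described above; the handler shape and names are illustrative only:

```js
// Hypothetical sketch: no active AbortController was found for this request.
function respondNoActiveAbort(res) {
  if (!res.headersSent && !res.getHeader('Content-Type')) {
    res.setHeader('Content-Type', 'application/json'); // set expected header when not already set
  }
  return res.status(204).send();
}
```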
* fix(useSSE): general improvements:
- Add endpointType to fetch URL in useSSE hook
- use EndpointURLs enum
- handle 204 response by setting `data` to initiated response
- add better error handling UX, make clear when there is an explicit error
* fix: load all existing conversation settings on refresh
* refactor(buildDefaultConvo): use `lastConversationSetup.endpointType` before `conversation.endpointType`
* refactor(TMessage/messageSchema): add `endpoint` field to messages to differentiate generation origin
* feat(useNewConvo): `keepLatestMessage` param to prevent resetting the `latestMessage` mid-conversation
* style(Settings): adjust height styling to allow more space in dialog for additional settings
* feat: Modular Chat: experimental setting to Enable switching Endpoints mid-conversation
* fix(ChatRoute): fix potential parsing issue with tPresetSchema
* fix(api): version mismatch between langchain packages `@langchain/google-genai` & `langchain`
* chore(loadYaml): silence config file not found error
* chore: improve firebase init message when not configured (generalized)
* fix(deploy-compose.yml): mount `librechat.yaml` config file
* WIP(backend/api): custom endpoint
* WIP(frontend/client): custom endpoint
* chore: adjust typedefs for configs
* refactor: use data-provider for cache keys and rename enums and custom endpoint for better clarity and compatibility
* feat: loadYaml utility
* refactor: rename back to from and proof-of-concept for creating schemas from user-defined defaults
* refactor: remove custom endpoint from default endpointsConfig as it will be exclusively managed by yaml config
* refactor(EndpointController): rename variables for clarity
* feat: initial load custom config
* feat(server/utils): add simple `isUserProvided` helper
* chore(types): update TConfig type
* refactor: remove custom endpoint handling from model services as will be handled by config, modularize fetching of models
* feat: loadCustomConfig, loadConfigEndpoints, loadConfigModels
* chore: reorganize server init imports, invoke loadCustomConfig
* refactor(loadConfigEndpoints/Models): return each custom endpoint as standalone endpoint
* refactor(Endpoint/ModelController): spread config values after default (temporary)
* chore(client): fix type issues
* WIP: first pass for multiple custom endpoints
- add endpointType to Conversation schema
- add/update zod schemas for both convos/presets to allow non-EModelEndpoint values as endpoint (also using type assertion)
- use `endpointType` value as `endpoint` where mapping to type is necessary using this field
- use custom defined `endpoint` value and not type for mapping to modelsConfig
- misc: add return type to `getDefaultEndpoint`
- in `useNewConvo`, add the endpointType if it wasn't already added to conversation
- EndpointsMenu: use user-defined endpoint name as Title in menu
- TODO: custom icon via custom config, change unknown to robot icon
* refactor(parseConvo): pass args as an object and change where used accordingly; chore: comment out 'create schema' code
* chore: remove unused availableModels field in TConfig type
* refactor(parseCompactConvo): pass args as an object and change where used accordingly
* feat: chat through custom endpoint
* chore(message/convoSchemas): avoid saving empty arrays
* fix(BaseClient/saveMessageToDatabase): save endpointType
* refactor(ChatRoute): show Spinner if endpointsQuery or modelsQuery are still loading, which is apparent with slow fetching of models/remote config on first serve
* fix(useConversation): assign endpointType if it's missing
* fix(SaveAsPreset): pass real endpoint and endpointType when saving a Preset
* chore: reorganize types order for TConfig, add `iconURL`
* feat: custom endpoint icon support:
- use UnknownIcon in all icon contexts
- add mistral and openrouter as known endpoints, and add their icons
- iconURL support
* fix(presetSchema): move endpointType to default schema definitions shared between convoSchema and defaults
* refactor(Settings/OpenAI): remove legacy `isOpenAI` flag
* fix(OpenAIClient): do not invoke abortCompletion on completion error
* feat: add responseSender/label support for custom endpoints:
- use defaultModelLabel field in endpointOption
- add model defaults for custom endpoints in `getResponseSender`
- add `useGetSender` hook which uses EndpointsQuery to determine `defaultModelLabel`
- include defaultModelLabel from endpointConfig in custom endpoint client options
- pass `endpointType` to `getResponseSender`
* feat(OpenAIClient): use custom options from config file
* refactor: rename `defaultModelLabel` to `modelDisplayLabel`
* refactor(data-provider): separate concerns from `schemas` into `parsers`, `config`, and fix imports elsewhere
* feat: `iconURL` and extract environment variables from custom endpoint config values
* feat: custom config validation via zod schema, rename and move to `./projectRoot/librechat.yaml`
* docs: custom config docs and examples
* fix(OpenAIClient/mistral): mistral does not allow singular system message, also add `useChatCompletion` flag to use openai-node for title completions
* fix(custom/initializeClient): extract env var and use `isUserProvided` function
* Update librechat.example.yaml
* feat(InputWithLabel): add className props, and forwardRef
* fix(streamResponse): handle error edge case where either messages or convos query throws an error
* fix(useSSE): handle errorHandler edge cases where error response is and is not properly formatted from API, especially when a conversationId is not yet provided, which ensures stream is properly closed on error
* feat: user_provided keys for custom endpoints
* fix(config/endpointSchema): do not allow default endpoint values in custom endpoint `name`
* feat(loadConfigModels): extract env variables and optimize fetching models
* feat: support custom endpoint iconURL for messages and Nav
* feat(OpenAIClient): add/dropParams support
* docs: update docs with default params, add/dropParams, and notes to use config file instead of `OPENAI_REVERSE_PROXY`
* docs: update docs with additional notes
* feat(maxTokensMap): add mistral models (32k context)
* docs: update openrouter notes
* Update ai_setup.md
* docs(custom_config): add table of contents and fix note about custom name
* docs(custom_config): reorder ToC
* Update custom_config.md
* Add note about `max_tokens` field in custom_config.md
* fix(Message): avoid overwriting unprovided properties
* fix(OpenAIClient): return intermediateReply on user abort
* fix(AskController): do not send/save final message if abort was triggered
* fix(countTokens): avoid fetching remote registry and exclusively use cl100k_base or p50k_base weights for token counting
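A hedged sketch of offline token counting with the `tiktoken` package, which bundles encodings such as cl100k_base and p50k_base locally so no remote registry fetch is needed; the wrapper name follows the commit, everything else is assumed:

```js
// Sketch: count tokens with a locally bundled encoding (no remote registry fetch).
const { get_encoding } = require('tiktoken');

function countTokens(text = '', encodingName = 'cl100k_base') {
  const encoder = get_encoding(encodingName);
  try {
    return encoder.encode(text).length;
  } finally {
    encoder.free(); // WASM encoders should be freed to avoid leaking memory
  }
}
```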
* refactor(Message/messageSchema): rely on messageSchema for default values when saving messages
* fix(EditController): do not send/save final message if abort was triggered
* fix(config/helpers): fix module resolution error
* chore: bump langchain to v0.0.213 from v0.0.186
* fix: handle abort edge cases:
- abort message server-side if response experienced error mid-generation
- attempt to recover message if aborting resulted in error
- if abortKey is not provided, use conversationId if it exists
- if headers were already sent, send an Event stream message
- issue warning for possible Google censor/filter
refactor(streamResponse): for `sendError`, allow passing overrides so that error can include partial generation, improve typing for `sendMessage`
* chore(MessageContent): remove eslint warning for unused `i`, rephrase unfinished message text
* fix(useSSE): avoid invoking cancelHandler if the abort response was 404
* chore(TMessage): remove unnecessary, unused legacy message property `submitting`
* chore(TMessage): remove unnecessary legacy message property `cancelled`
* chore(abortMiddleware): remove unused `errorText` property to avoid confusion
* localization + api-endpoint
* docs: added firebase documentation
* chore: icons
* chore: SettingsTabs
* feat: account panel; fix: gear icons
* docs: position update
* feat: firebase
* feat: plugin support
* route
* fix: Firebase bugs and move many related files
* chore(DALLE3): using UUID v4
* feat: support for social strategies; moved '/images' path
* fix: data ignored
* chore: update .gitignore
* docs: update firebase guide
* refactor: Firebase
- use singleton pattern for firebase initialization, initially on server start
- reorganize imports, move firebase specific files to own service under Files
- rename modules to remove 'avatar' redundancy
- fix imports based on changes
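A minimal singleton sketch in the spirit of the refactor above, assuming the modular Firebase web SDK; the env var names and config shape are illustrative:

```js
// Hypothetical singleton-style Firebase initialization (modular web SDK assumed).
const { initializeApp, getApps, getApp } = require('firebase/app');
const { getStorage } = require('firebase/storage');

let firebaseApp = null;

function initializeFirebase() {
  if (firebaseApp) {
    return firebaseApp;
  }
  if (!process.env.FIREBASE_API_KEY) {
    console.info('Firebase not configured; skipping initialization.');
    return null;
  }
  firebaseApp = getApps().length
    ? getApp()
    : initializeApp({
        apiKey: process.env.FIREBASE_API_KEY,
        authDomain: process.env.FIREBASE_AUTH_DOMAIN,
        projectId: process.env.FIREBASE_PROJECT_ID,
        storageBucket: process.env.FIREBASE_STORAGE_BUCKET,
      });
  return firebaseApp;
}

const getFirebaseStorage = () => (initializeFirebase() ? getStorage(firebaseApp) : null);

module.exports = { initializeFirebase, getFirebaseStorage };
```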
* ci(DALLE/DALLE3): fix tests to use logger and new expected outputs, add firebase tests
* refactor(loadToolWithAuth): pass userId to tool as field
* feat(images/parse): add URL image basename extraction
Implement a new module to extract the basename of an image from a given URL. It adds the `getImageBasename` function, which parses the URL and retrieves the basename using the Node.js 'url' and 'path' modules. The function is documented with JSDoc comments for better maintainability and understanding. This feature improves the application's ability to handle and process image URLs.
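A short sketch of the described extraction using Node's URL and path modules; the image-extension check is an added assumption:

```js
const path = require('path');

function getImageBasename(urlString) {
  try {
    const { pathname } = new URL(urlString);
    const basename = path.basename(decodeURIComponent(pathname));
    // Only return values that look like image filenames (extension list is an assumption).
    return /\.(png|jpe?g|gif|webp)$/i.test(basename) ? basename : '';
  } catch {
    return '';
  }
}

// getImageBasename('https://example.com/images/img-abc123.png?alt=media') => 'img-abc123.png'
```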
* refactor(addImages): use a more specific regular expression for observedImagePath, based on the generated-image markdown standard used across the app
* refactor(DALLE/DALLE3): utilize `getImageBasename` and `this.userId`; fix: pass correct image path to firebase url helper
* fix(addImages): make more general to match any image markdown descriptor
* fix(parse/getImageBasename): test result of this function for an actual image basename
* ci(DALLE3): mock getImageBasename
* refactor(AuthContext): use Recoil atom state for user
* feat: useUploadAvatarMutation, react-query hook for avatar upload
* fix(Toast): stack z-order of Toast over all components (1000)
* refactor(showToast): add optional status field to avoid importing NotificationSeverity on each use of the function
* refactor(routes/avatar): remove unnecessary get route, get userId from req.user.id, require auth on POST request
* chore(uploadAvatar): TODO: remove direct use of Model, `User`
* fix(client): fix Spinner imports
* refactor(Avatar): use react-query hook, Toast, remove unnecessary states, add optimistic UI to upload
* fix(avatar/localStrategy): correctly save local profile picture and cache bust for immediate rendering; fix: firebase init info message (only show once)
* fix: use `includes` instead of `endsWith` for checking manual query of avatar image path in case more queries are appended (as is done in avatar/localStrategy)
---------
Co-authored-by: Danny Avila <messagedaniel@protonmail.com>
* feat: add GOOGLE_MODELS env var
* feat: add gemini vision support
* refactor(GoogleClient): adjust clientOptions handling depending on model
* fix(logger): fix redact logic and redact errors only
* fix(GoogleClient): do not allow non-multiModal messages when gemini-pro-vision is selected
* refactor(OpenAIClient): use `isVisionModel` client property to avoid calling validateVisionModel multiple times
* refactor: better debug logging by correctly traversing, redacting sensitive info, and logging condensed versions of long values
* refactor(GoogleClient): allow response errors to be thrown/caught above client handling so user receives meaningful error message
debug orderedMessages, parentMessageId, and buildMessages result
* refactor(AskController): use the model from client.modelOptions.model when saving intermediate messages, which requires the progress callback to be initialized after the client
* feat(useSSE): revert to previous model if the model was auto-switched by backend due to message attachments
* docs: update with google updates, notes about Gemini Pro Vision
* fix: redis should not be initialized without USE_REDIS and increase max listeners to 20
* refactor(Ask/Edit): consolidate ask/edit controllers between the main modules and openAI controllers to reduce repetition of code and increase reusability
* fix(winston/logger): circular dependency issue
* fix(config/scripts): fix script imports
* refactor(indexSync): log the 'not configured' message at info level
* chore: create a rollup script for api/server/index.js to check circular dependencies
* chore: bump @keyv/redis
* refactor: only remove conversation states from localStorage on login/logout but not on refresh
* chore: add debugging log for azure completion url
* chore: add api-key to redact regex
* fix: do not show endpoint selector if endpoint is falsy
* chore: remove logger from genAzureChatCompletion
* feat(ci): mock fetchEventSource
* refactor(ci): mock all model methods in BaseClient.test, as well as mock the implementation for getCompletion in FakeClient
* fix(OpenAIClient): consider chatCompletion if model name includes `gpt` as opposed to `gpt-`
* fix(ChatGPTClient/azureOpenAI): Remove 'model' option for Azure compatibility (cannot be sent in payload body)
* feat(ci): write new test suite that significantly increase test coverage for OpenAIClient and BaseClient by covering most of the real implementation of the `sendMessage` method
- test for the azure edge case where model option is appended to modelOptions, ensuring removal before sent to the azure endpoint
- test for expected azure url being passed to SSE POST request
- test for AZURE_OPENAI_DEFAULT_MODEL being set, but is not included in the URL deployment name as expected
- test getCompletion method to have correct payload
fix(ci/OpenAIClient.test.js): correctly mock hanging/async methods
* refactor(addTitle): allow Azure to generate titles, as the signal is aborted on completion
* refactor: add gemini-pro to google Models list; use defaultModels for central model listing
* refactor(SetKeyDialog): create useMultipleKeys hook to use for Azure, export `isJson` from utils, use EModelEndpoint
* refactor(useUserKey): change variable names to make keyName setting more clear
* refactor(FileUpload): allow passing container className string
* feat(GoogleClient): Gemini support
* refactor(GoogleClient): alternate stream speed for Gemini models
* feat(Gemini): styling/settings configuration for Gemini
* refactor(GoogleClient): subtract max response tokens from max context tokens if context is above 32k (I/O max is combined between the two)
* refactor(tokens): correct google max token counts and subtract max response tokens when input/output count are combined towards max context count
* feat(google/initializeClient): handle both local and user_provided credentials and write tests
* fix(GoogleClient): catch if credentials are undefined, handle serviceKey correctly whether it is a string or an object, handle no examples passed, throw an error if it is not a Generative Language model and no service account JSON key is provided, and throw an error if it is a Generative Language model but no Google API key was provided
* refactor(loadAsyncEndpoints/google): activate Google endpoint if either the service key JSON file is provided in /api/data, or a GOOGLE_KEY is defined.
* docs: updated Google configuration
* fix(ci): Mock import of Service Account Key JSON file (auth.json)
* Update apis_and_tokens.md
* feat: increase max output tokens slider for gemini pro
* refactor(GoogleSettings): handle max and default maxOutputTokens on model change
* chore: add sensitive redact regex
* docs: add warning about data privacy
* Update apis_and_tokens.md
* WIP: initial logging changes
add several transports in ~/config/winston
omit messages in logs, truncate long strings
add short blurb in dotenv for debug logging
GoogleClient: using logger
OpenAIClient: using logger, handleOpenAIErrors
Adding typedef for payload message
bumped winston and using winston-daily-rotate-file
moved config for server paths to ~/config dir
Added `DEBUG_LOGGING=true` to .env.example
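A hedged sketch of the kind of winston setup these commits describe, with winston-daily-rotate-file and a `DEBUG_LOGGING` toggle; filenames, formats, and retention values are assumptions:

```js
// Illustrative winston config: console transport plus daily-rotated debug file.
const winston = require('winston');
require('winston-daily-rotate-file'); // registers winston.transports.DailyRotateFile

const logger = winston.createLogger({
  level: process.env.DEBUG_LOGGING === 'true' ? 'debug' : 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [
    new winston.transports.Console({ format: winston.format.simple() }),
    new winston.transports.DailyRotateFile({
      filename: 'logs/debug-%DATE%.log',
      datePattern: 'YYYY-MM-DD',
      maxFiles: '14d',
      zippedArchive: true,
    }),
  ],
});

module.exports = logger;
```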
* WIP: Refactor logging statements in code
* WIP: Refactor logging statements and import configurations
* refactor: broadcast Redis initialization message with `info` not `debug`
* refactor: complete the refactor of logging statements and import configurations
* chore: delete unused tools
* fix: circular dependencies due to accessing logger
* refactor(handleText): handle booleans and write tests
* refactor: redact sensitive values, better formatting
* chore: improve log formatting, avoid passing strings to 2nd arg
* fix(ci): fix jest tests due to logger changes
* refactor(getAvailablePluginsController): cache plugins since they are static, avoiding the async addOpenAPISpecs call on every request
* chore: update docs
* chore: create separate meiliSync logger, clean up logs to avoid being unnecessarily verbose
* chore: spread objects where they are commonly logged to allow string truncation
* chore: improve error log formatting
* chore: bump vite, vitejs/plugin-react, mark client package as esm, move react-query as a peer dep in data-provider
* chore: import changes due to new data-provider export strategy, also fix type imports where applicable
* chore: export react-query services as separate to avoid react dependencies in /api/
* chore: suppress sourcemap warnings and polyfill node:path which is used by filenamify
TODO: replace filenamify with an alternative and REMOVE polyfill
* chore: /api/ changes to support `librechat-data-provider`
* refactor: rewrite Dockerfile.multi in light of /api/ changes to support `librechat-data-provider`
* chore: remove volume mapping to node_modules directories in default compose file
* chore: remove schemas from /api/ as they are no longer needed with the use of `librechat-data-provider`
* fix(ci): jest `librechat-data-provider/react-query` module resolution
* feat: update PaLM icons
* feat: add additional google models
* POC: formatting inputs for Vertex AI streaming
* refactor: move endpoints services outside of /routes dir to /services/Endpoints
* refactor: shorten schemas import
* refactor: rename PALM to GOOGLE
* feat: make Google editable endpoint
* feat: reusable Ask and Edit controllers based off Anthropic
* chore: organize imports/logic
* fix(parseConvo): include examples in googleSchema
* fix: google only allows odd number of messages to be sent
* fix: pass proxy to AnthropicClient
* refactor: change `google` altName to `Google`
* refactor: update getModelMaxTokens and related functions to handle maxTokensMap with nested endpoint model key/values
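A hedged sketch of the nested map lookup described above; the endpoint keys and token values are illustrative, not the project's exact numbers:

```js
// Hypothetical shape: maxTokensMap keyed by endpoint, then by model name,
// with a longest-prefix fallback when the exact model key is missing.
const maxTokensMap = {
  openAI: { 'gpt-4': 8191, 'gpt-4-32k': 32767, 'gpt-3.5-turbo': 4095 },
  google: { 'gemini-pro': 30720, 'chat-bison': 4096 },
};

function getModelMaxTokens(modelName, endpoint = 'openAI') {
  const endpointMap = maxTokensMap[endpoint];
  if (!endpointMap) {
    return undefined;
  }
  if (endpointMap[modelName]) {
    return endpointMap[modelName];
  }
  const match = Object.keys(endpointMap)
    .filter((key) => modelName.startsWith(key))
    .sort((a, b) => b.length - a.length)[0];
  return match ? endpointMap[match] : undefined;
}
```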
* refactor: google Icon and response sender changes (Codey and Google logo instead of PaLM in all cases)
* feat: google support for maxTokensMap
* feat: google updated endpoints with Ask/Edit controllers, buildOptions, and initializeClient
* feat(GoogleClient): now builds prompt for text models and supports real streaming from Vertex AI through langchain
* chore(GoogleClient): remove comments, left before for reference in git history
* docs: update google instructions (WIP)
* docs(apis_and_tokens.md): add images to google instructions
* docs: remove typo apis_and_tokens.md
* Update apis_and_tokens.md
* feat(Google): use default settings map, fully support context for both text and chat models, fully support examples for chat models
* chore: update more PaLM references to Google
* chore: move playwright out of workflows to avoid failing tests
* refactor: move endpoint services to own directory
* refactor: make endpoint config handling more concise, separate the logic, and cache the result for subsequent serving
* refactor: ModelController gets same treatment as EndpointController, draft OverrideController
* wip: flesh out override controller more to return real value
* refactor: client/api changes in anticipation of override
* fix: type issues with icons
* refactor: use react query for presets, show toasts on preset crud, refactor mutations, remove presetsQuery from Root (breaking change)
* refactor: change preset titling
* refactor: update preset schemas and methods for necessary new properties `order` and `defaultPreset`
* feat: add `defaultPreset` Recoil value
* refactor(getPresetTitle): make logic cleaner and more concise
* feat: complete UI portion of defaultPreset feature, with animations added to preset items
* chore: remove console.logs()
* feat: complete default preset handling
* refactor: remove user sensitive values on logout
* fix: allow endpoint selection without default preset overwriting
* refactor(addTitle): avoid generating title when a request was aborted
* chore: bump openai to latest
* fix: catch OpenAIError Uncaught error as last resort
* fix: handle final messages that exclude role=assistant
* Update OpenAIClient.js
* chore: fix linting errors
* fix: typo for passwordReset.handlebars
* fix(useSSE): prevent unnecessary JSON.parse abort error, handle immediate abort-submit gracefully by reverting to previous state before immediate abort-submit, add showStopButton state to explicitly render disabled sendButton when message generation is cancelled, filter undefined messages and replace undefined convo for cancelHandler
* fix: attempt to catch more errors, especially when generation started
* fix: pass the right properties to getResponseSender
* chore: Update .eslintrc.js and fix sendEmail.js linting errors
* fix: correct preset title for Anthropic endpoint
* fix(Settings/Anthropic): show correct default value for LLM temperature
* fix(AnthropicClient): use `getModelMaxTokens` to get the correct LLM max context tokens, correctly set default temperature to 1, use only 2 params for class constructor, use `getResponseSender` to add correct sender to response message
* refactor(/api/ask|edit/anthropic): save messages to database after the final response is sent to the client, and do not save conversation from route controller
* fix(initializeClient/anthropic): correctly pass client options (endpointOption) to class initialization
* feat(ModelService/Anthropic): add claude-1.2
* fix: handle webp images correctly
* refactor: use the userPath from the start of the file lifecycle to avoid handling the blob, whose loading may fail upon user request
* refactor: delete temp files on reload and new chat