Commit graph

779 commits

Author SHA1 Message Date
Marco Beretta
a2c35e8415
🔄🔐 refactor: auth; style: match OpenAI; feat: custom social login order (#1421)
* refactor(Login & Registration)

* fix(Registration) test errors

* refactor(LoginForm & ResetPassword)

* fix(LoginForm): display 'undefined' when loading page; style(SocialButton): match OpenAI's graphics

* some refactoring and style updates for social logins

* style: width like OpenAI; feat: custom social login order; refactor: alphabetical socials

* fix(Registration & Login) test

* Update .env.example

* Update .env.example

* Update dotenv.md

* refactor: remove `SOCIAL_LOGIN_ORDER` for `socialLogins` configured from `librechat.yaml`
- initialized by AppService, attached as app.locals property
- rename socialLoginOrder and loginOrder to socialLogins app-wide for consistency
- update types and docs
- initialize config variable as array and not singular string to parse
- bump data-provider to 0.3.9

---------

Co-authored-by: Danny Avila <messagedaniel@protonmail.com>
2024-02-05 03:31:18 -05:00
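
A minimal sketch of how the `socialLogins` setting described in #1421 might be read from the parsed `librechat.yaml` and attached to `app.locals` by an AppService-style startup step; the config shape and default order below are assumptions:

```ts
import express from 'express';

// Assumed shape of the relevant slice of a parsed librechat.yaml.
interface CustomConfig {
  registration?: { socialLogins?: string[] };
}

// Alphabetical fallback used when the YAML omits the field (illustrative).
const defaultSocialLogins = ['discord', 'facebook', 'github', 'google', 'openid'];

function attachSocialLogins(app: express.Application, config: CustomConfig): void {
  // Attach as an app.locals property so routes and the client config endpoint can read the order.
  app.locals.socialLogins = config.registration?.socialLogins ?? defaultSocialLogins;
}

const app = express();
attachSocialLogins(app, { registration: { socialLogins: ['github', 'google'] } });
console.log(app.locals.socialLogins); // ['github', 'google']
```
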
Marco Beretta
25da90657d
🔒✉️ feat: allow only certain domain (#1562)
* feat: allow only certain domain

* Update dotenv.md

* refactor(registrationController) & handle ALLOWED_REGISTRATION_DOMAINS not specified

* cleanup and moved to AuthService for better error handling

* refactor: replace environment variable with librechat config item, add typedef for custom config, update docs for new registration object and allowedDomains values

* ci(AuthService): test for `isDomainAllowed`

---------

Co-authored-by: Danny Avila <messagedaniel@protonmail.com>
2024-02-05 02:14:52 -05:00
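
The `isDomainAllowed` check tested in #1562 can be pictured as a small helper: registration stays open when no `allowedDomains` list is configured, otherwise the email's domain must match a configured entry. A hedged sketch, with the signature assumed:

```ts
// Minimal sketch (signature assumed) of an isDomainAllowed-style check.
function isDomainAllowed(email: string, allowedDomains?: string[]): boolean {
  if (!allowedDomains || allowedDomains.length === 0) {
    return true; // no restriction configured
  }
  const domain = email.split('@')[1]?.toLowerCase();
  if (!domain) {
    return false; // malformed email
  }
  return allowedDomains.some((allowed) => allowed.toLowerCase() === domain);
}

console.log(isDomainAllowed('user@example.com', ['example.com'])); // true
console.log(isDomainAllowed('user@other.com', ['example.com'])); // false
```
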
Danny Avila
74459d6261
♾️ style: Infinite Scroll Nav and Sort Convos by Date/Usage (#1708)
* Style: Infinite Scroll and Group convos by date

* Style: Infinite Scroll and Group convos by date- Redesign NavBar

* Style: Infinite Scroll and Group convos by date- Redesign NavBar - Clean code

* Style: Infinite Scroll and Group convos by date- Redesign NavBar - Redesign NewChat Component

* Style: Infinite Scroll and Group convos by date- Redesign NavBar - Redesign NewChat Component

* Style: Infinite Scroll and Group convos by date- Redesign NavBar - Redesign NewChat Component

* Including OpenRouter and Mistral icon

* refactor(Conversations): cleanup use of utility functions and typing

* refactor(Nav/NewChat): use localStorage `lastConversationSetup` to determine the endpoint to use, as well as icons -> JSX components, remove use of `endpointSelected`

* refactor: remove use of `isFirstToday`

* refactor(Nav): remove use of `endpointSelected`, consolidate scrolling logic to its own hook `useNavScrolling`, remove use of recoil `conversation`

* refactor: Add spinner to bottom of list, throttle fetching, move query hooks to client workspace

* chore: sort by `updatedAt` field

* refactor: optimize conversation infinite query, use optimistic updates, add conversation helpers for managing pagination, remove unnecessary operations

* feat: gen_title route for generating the title for the conversation

* style(Convo): change hover bg-color

* refactor: memoize groupedConversations and return as array of tuples, correctly update convos pre/post message stream, only call genTitle if conversation is new, make `addConversation` dynamically either add/update depending if convo exists in pages already, reorganize type definitions

* style: rename Header NewChat Button -> HeaderNewChat, add NewChatIcon, closely match main Nav New Chat button to ChatGPT

* style(NewChat): add hover bg color

* style: cleanup comments, match ChatGPT nav styling, redesign search bar, make part of new chat sticky header, move Nav under same parent as outlet/mobilenav, remove legacy code, search only if searchQuery is not empty

* feat: add tests for conversation helpers and ensure no duplicate conversations are ever grouped

* style: hover bg-color

* feat: alt-click on convo item to open conversation in new tab

* chore: send error message when `gen_title` fails

---------

Co-authored-by: Walber Cardoso <walbercardoso@gmail.com>
2024-02-03 20:25:35 -05:00
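
A sketch of the date-bucketed grouping that `groupedConversations` performs in #1708, returned as an array of tuples and never grouping duplicates (per the tests added in that PR); the bucket labels and sort key below are assumptions based on the commit notes:

```ts
interface Convo {
  conversationId: string;
  updatedAt: string;
}

// Group by coarse date buckets; labels here are illustrative.
function groupConversationsByDate(convos: Convo[]): Array<[string, Convo[]]> {
  const msPerDay = 86_400_000;
  const now = new Date();
  const startOfToday = new Date(now.getFullYear(), now.getMonth(), now.getDate()).getTime();
  const groups = new Map<string, Convo[]>();
  const seen = new Set<string>();

  const sorted = [...convos].sort(
    (a, b) => new Date(b.updatedAt).getTime() - new Date(a.updatedAt).getTime(),
  );

  for (const convo of sorted) {
    if (seen.has(convo.conversationId)) {
      continue; // guard against duplicates across paginated results
    }
    seen.add(convo.conversationId);
    const updated = new Date(convo.updatedAt).getTime();
    const label =
      updated >= startOfToday
        ? 'Today'
        : updated >= startOfToday - 7 * msPerDay
          ? 'Previous 7 Days'
          : 'Older';
    groups.set(label, [...(groups.get(label) ?? []), convo]);
  }

  return Array.from(groups.entries());
}
```
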
Danny Avila
8479ac7293
🚀 feat: Support for GPT-3.5 Turbo/0125 Model (#1704)
* 🚀 feat: Support for GPT-3.5 Turbo/0125 Model

* ci: fix tx test
2024-02-02 01:01:11 -05:00
Danny Avila
30e143e96d
🪙 feat: Use OpenRouter Model Data for Token Cost and Context (#1703)
* feat: use openrouter data for model token cost/context

* chore: add ttl for tokenConfig and refetch models if cache expired
2024-02-02 00:42:11 -05:00
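
An illustrative sketch of turning fetched model data into a per-model token config (context length plus prompt/completion cost) with a simple TTL cache, as #1703 describes; the response field names and cache shape are assumptions:

```ts
interface FetchedModel {
  id: string;
  context_length: number;
  pricing: { prompt: string; completion: string };
}

interface TokenConfig {
  [model: string]: { context: number; prompt: number; completion: number };
}

function buildTokenConfig(models: FetchedModel[]): TokenConfig {
  const config: TokenConfig = {};
  for (const model of models) {
    config[model.id] = {
      context: model.context_length,
      prompt: parseFloat(model.pricing.prompt),
      completion: parseFloat(model.pricing.completion),
    };
  }
  return config;
}

// Tiny TTL wrapper: refetch once the cached entry is older than `ttlMs`.
function withTTL<T>(ttlMs: number, fetcher: () => Promise<T>) {
  let cached: { value: T; at: number } | undefined;
  return async (): Promise<T> => {
    if (!cached || Date.now() - cached.at > ttlMs) {
      cached = { value: await fetcher(), at: Date.now() };
    }
    return cached.value;
  };
}
```
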
Danny Avila
fcbaa74e4a
🚀 feat: Support for GPT-4 Turbo/0125 Models (#1643) 2024-01-25 22:57:18 -05:00
Danny Avila
5a74ac9a60
Release v0.6.6 (#1597) 2024-01-19 15:34:06 -05:00
Danny Avila
e73608ba46
🪶 feat: Add Support for Azure OpenAI Base URL (#1596)
* refactor(extractBaseURL): add handling for all possible Cloudflare AI Gateway endpoints

* chore: added endpointOption TODO for updating type and optimizing handling app-wide

* feat(azureUtils):
- `genAzureChatCompletion`: allow optional client pass to update azure property
- `constructAzureURL`: optionally replace placeholders for instance and deployment names of an azure baseURL
- add tests for module

* refactor(extractBaseURL): return entire input when cloudflare `azure-openai` suffix detected
- also add more tests for both construct and extract URL

* refactor(genAzureChatCompletion): only allow omitting instance name if baseURL is not set

* refactor(initializeClient): determine `reverseProxyUrl` based on endpoint (azure or openai)

* refactor: utilize `constructAzureURL` when `AZURE_OPENAI_BASEURL` is set

* docs: update docs on `AZURE_OPENAI_BASEURL`

* fix(ci): update expected error message for `azureUtils` tests
2024-01-19 14:57:03 -05:00
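
A hedged sketch of a `constructAzureURL`-style helper from #1596 that optionally replaces instance and deployment name placeholders in a configured `AZURE_OPENAI_BASEURL`; the placeholder syntax and option names are assumptions:

```ts
interface AzureOptions {
  azureOpenAIApiInstanceName?: string;
  azureOpenAIApiDeploymentName?: string;
}

function constructAzureURL(baseURL: string, azure: AzureOptions = {}): string {
  let url = baseURL;
  if (azure.azureOpenAIApiInstanceName) {
    url = url.replace('${INSTANCE_NAME}', azure.azureOpenAIApiInstanceName);
  }
  if (azure.azureOpenAIApiDeploymentName) {
    url = url.replace('${DEPLOYMENT_NAME}', azure.azureOpenAIApiDeploymentName);
  }
  return url;
}

console.log(
  constructAzureURL(
    'https://${INSTANCE_NAME}.openai.azure.com/openai/deployments/${DEPLOYMENT_NAME}',
    { azureOpenAIApiInstanceName: 'my-instance', azureOpenAIApiDeploymentName: 'gpt-4' },
  ),
);
// -> https://my-instance.openai.azure.com/openai/deployments/gpt-4
```
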
Danny Avila
7e2e19a134
🎯 feat(config): Custom Endpoint Request Headers (#1588) 2024-01-18 20:11:42 -05:00
Danny Avila
a8d6bfde7a
✏️ feat: LaTeX parsing for Messages (#1585)
* feat: Beta features tab in Settings and LaTeX Parsing toggle

* feat: LaTex parsing with spec
2024-01-18 14:44:10 -05:00
Danny Avila
d20970f5c5
🚀 Feat: Streamline File Strategies & GPT-4-Vision Settings (#1535)
* chore: fix `endpoint` typescript issues and typo in console info message

* feat(api): files GET endpoint and save only file_id references to messages

* refactor(client): `useGetFiles` query hook, update file types, optimistic update of filesQuery on file upload

* refactor(buildTree): update to use params object and accept fileMap

* feat: map files to messages; refactor(ChatView): messages only available after files are fetched

* fix: fetch files only when authenticated

* feat(api): AppService
- rename app.locals.configs to app.locals.paths
- load custom config use fileStrategy from yaml config in app.locals

* refactor: separate Firebase and Local strategies, call based on config

* refactor: modularize file strategies and employ with use of DALL-E

* refactor(librechat.yaml): add fileStrategy field

* feat: add source to MongoFile schema, as well as BatchFile, and ExtendedFile types

* feat: employ file strategies for upload/delete files

* refactor(deleteFirebaseFile): add user id validation for firebase file deletion

* chore(deleteFirebaseFile): update jsdocs

* feat: employ strategies for vision requests

* fix(client): handle messages with deleted files

* fix(client): ensure `filesToDelete` always saves/sends `file.source`

* feat(openAI): configurable `resendImages` and `imageDetail`

* refactor(getTokenCountForMessage): recursive process only when array of Objects and only their values (not keys) aside from `image_url` types

* feat(OpenAIClient): calculateImageTokenCost

* chore: remove comment

* refactor(uploadAvatar): employ fileStrategy for avatars, from social logins or user upload

* docs: update docs on how to configure fileStrategy

* fix(ci): mock winston and winston related modules, update DALLE3.spec.js with changes made

* refactor(redis): change terminal message to reflect current development state

* fix(DALL-E-2): pass fileStrategy to dall-e
2024-01-11 11:37:54 -05:00
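
For the configurable `imageDetail` and `calculateImageTokenCost` work in #1535, an illustrative token calculation following OpenAI's published vision accounting at the time (flat base for low detail; per-512px-tile cost after rescaling for high detail). The constants and rescaling rules here are assumptions, and `auto` is treated as `high` for simplicity:

```ts
type ImageDetail = 'low' | 'high' | 'auto';

function calculateImageTokenCost(width: number, height: number, detail: ImageDetail): number {
  const baseTokens = 85;  // flat cost, also the 'low' detail total (assumed)
  const tileTokens = 170; // cost per 512px tile at 'high' detail (assumed)
  if (detail === 'low') {
    return baseTokens;
  }

  // Scale down so the longest side is at most 2048px.
  let w = width;
  let h = height;
  if (Math.max(w, h) > 2048) {
    const scale = 2048 / Math.max(w, h);
    w = Math.floor(w * scale);
    h = Math.floor(h * scale);
  }

  // Then scale down so the shortest side is at most 768px.
  if (Math.min(w, h) > 768) {
    const scale = 768 / Math.min(w, h);
    w = Math.floor(w * scale);
    h = Math.floor(h * scale);
  }

  const tiles = Math.ceil(w / 512) * Math.ceil(h / 512);
  return baseTokens + tileTokens * tiles;
}

console.log(calculateImageTokenCost(1024, 1024, 'high')); // 765
```
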
Danny Avila
ead1c3c797
🔧 fix: Error Handling Improvements (#1518)
* style(Icon): remove error bubble from message icon

* fix(custom): `initializeClient` now throws error if apiKey or baseURL are admin provided but no env var was found

* refactor(tPresetSchema): match `conversationId` type to `tConversationSchema` but optional, use `extendedModelEndpointSchema` for `endpoint`

* fix(useSSE): minor improvements
- use `completed` set to avoid submitting an unnecessary abort request
- set preset with `newConversation` calls using initial conversation settings to prevent default Preset override as well as default settings
- return if there is a parsing error within `onerror` as expected errors from server are properly formatted
2024-01-08 09:30:38 -05:00
Danny Avila
9864fc8700
🔧 fix: Improve Endpoint Handling and Address Edge Cases (#1486)
* fix(TEndpointsConfig): resolve property access issues with typesafe helper function

* fix: undefined or null endpoint edge case

* refactor(mapEndpoints -> endpoints): renamed module to be more general for endpoint handling, wrote unit tests, export all helpers
2024-01-04 10:17:15 -05:00
Danny Avila
e1a529b5ae
🧪 feat: Experimental: Enable Switching Endpoints Mid-Conversation (#1483)
* fix: load all existing conversation settings on refresh

* refactor(buildDefaultConvo): use `lastConversationSetup.endpointType` before `conversation.endpointType`

* refactor(TMessage/messageSchema): add `endpoint` field to messages to differentiate generation origin

* feat(useNewConvo): `keepLatestMessage` param to prevent resetting the `latestMessage` mid-conversation

* style(Settings): adjust height styling to allow more space in dialog for additional settings

* feat: Modular Chat: experimental setting to Enable switching Endpoints mid-conversation

* fix(ChatRoute): fix potential parsing issue with tPresetSchema
2024-01-03 19:17:42 -05:00
Danny Avila
29473a72db
💫 feat: Config File & Custom Endpoints (#1474)
* WIP(backend/api): custom endpoint

* WIP(frontend/client): custom endpoint

* chore: adjust typedefs for configs

* refactor: use data-provider for cache keys and rename enums and custom endpoint for better clarity and compatibility

* feat: loadYaml utility

* refactor: rename back to  from  and proof-of-concept for creating schemas from user-defined defaults

* refactor: remove custom endpoint from default endpointsConfig as it will be exclusively managed by yaml config

* refactor(EndpointController): rename variables for clarity

* feat: initial load custom config

* feat(server/utils): add simple `isUserProvided` helper

* chore(types): update TConfig type

* refactor: remove custom endpoint handling from model services as will be handled by config, modularize fetching of models

* feat: loadCustomConfig, loadConfigEndpoints, loadConfigModels

* chore: reorganize server init imports, invoke loadCustomConfig

* refactor(loadConfigEndpoints/Models): return each custom endpoint as standalone endpoint

* refactor(Endpoint/ModelController): spread config values after default (temporary)

* chore(client): fix type issues

* WIP: first pass for multiple custom endpoints
- add endpointType to Conversation schema
- add/update zod schemas for both convo/presets to allow non-EModelEndpoint values as endpoint (also using type assertion)
- use `endpointType` value as `endpoint` where mapping to type is necessary using this field
- use custom defined `endpoint` value and not type for mapping to modelsConfig
- misc: add return type to `getDefaultEndpoint`
- in `useNewConvo`, add the endpointType if it wasn't already added to conversation
- EndpointsMenu: use user-defined endpoint name as Title in menu
- TODO: custom icon via custom config, change unknown to robot icon

* refactor(parseConvo): pass args as an object and change where used accordingly; chore: comment out 'create schema' code

* chore: remove unused availableModels field in TConfig type

* refactor(parseCompactConvo): pass args as an object and change where used accordingly

* feat: chat through custom endpoint

* chore(message/convoSchemas): avoid saving empty arrays

* fix(BaseClient/saveMessageToDatabase): save endpointType

* refactor(ChatRoute): show Spinner if endpointsQuery or modelsQuery are still loading, which is apparent with slow fetching of models/remote config on first serve

* fix(useConversation): assign endpointType if it's missing

* fix(SaveAsPreset): pass real endpoint and endpointType when saving a Preset

* chore: reorganize type order for TConfig, add `iconURL`

* feat: custom endpoint icon support:
- use UnknownIcon in all icon contexts
- add mistral and openrouter as known endpoints, and add their icons
- iconURL support

* fix(presetSchema): move endpointType to default schema definitions shared between convoSchema and defaults

* refactor(Settings/OpenAI): remove legacy `isOpenAI` flag

* fix(OpenAIClient): do not invoke abortCompletion on completion error

* feat: add responseSender/label support for custom endpoints:
- use defaultModelLabel field in endpointOption
- add model defaults for custom endpoints in `getResponseSender`
- add `useGetSender` hook which uses EndpointsQuery to determine `defaultModelLabel`
- include defaultModelLabel from endpointConfig in custom endpoint client options
- pass `endpointType` to `getResponseSender`

* feat(OpenAIClient): use custom options from config file

* refactor: rename `defaultModelLabel` to `modelDisplayLabel`

* refactor(data-provider): separate concerns from `schemas` into `parsers`, `config`, and fix imports elsewhere

* feat: `iconURL` and extract environment variables from custom endpoint config values

* feat: custom config validation via zod schema, rename and move to `./projectRoot/librechat.yaml`

* docs: custom config docs and examples

* fix(OpenAIClient/mistral): mistral does not allow singular system message, also add `useChatCompletion` flag to use openai-node for title completions

* fix(custom/initializeClient): extract env var and use `isUserProvided` function

* Update librechat.example.yaml

* feat(InputWithLabel): add className props, and forwardRef

* fix(streamResponse): handle error edge case where either messages or convos query throws an error

* fix(useSSE): handle errorHandler edge cases where error response is and is not properly formatted from API, especially when a conversationId is not yet provided, which ensures stream is properly closed on error

* feat: user_provided keys for custom endpoints

* fix(config/endpointSchema): do not allow default endpoint values in custom endpoint `name`

* feat(loadConfigModels): extract env variables and optimize fetching models

* feat: support custom endpoint iconURL for messages and Nav

* feat(OpenAIClient): add/dropParams support

* docs: update docs with default params, add/dropParams, and notes to use config file instead of `OPENAI_REVERSE_PROXY`

* docs: update docs with additional notes

* feat(maxTokensMap): add mistral models (32k context)

* docs: update openrouter notes

* Update ai_setup.md

* docs(custom_config): add table of contents and fix note about custom name

* docs(custom_config): reorder ToC

* Update custom_config.md

* Add note about `max_tokens` field in custom_config.md
2024-01-03 09:22:48 -05:00
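
Two of the smaller helpers mentioned in #1474, `isUserProvided` and the extraction of environment variables referenced from `librechat.yaml` values, can be sketched as follows; the `${VAR_NAME}` placeholder syntax and the `user_provided` sentinel follow the commit notes, everything else is illustrative:

```ts
// Sentinel meaning the key is supplied by the end user rather than the admin.
function isUserProvided(value?: string): boolean {
  return value === 'user_provided';
}

// Resolve values written as "${SOME_ENV_VAR}" from the environment;
// plain literals pass through untouched.
function extractEnvVariable(value: string): string {
  const match = value.match(/^\$\{(\w+)\}$/);
  if (!match) {
    return value;
  }
  return process.env[match[1]] ?? value;
}

console.log(isUserProvided('user_provided')); // true
process.env.MY_CUSTOM_KEY = 'sk-123';
console.log(extractEnvVariable('${MY_CUSTOM_KEY}')); // 'sk-123'
```
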
Danny Avila
379e470e38
🧹fix: Handle Abort Message Edge Cases (#1462)
* chore: bump langchain to v0.0.213 from v0.0.186

* fix: handle abort edge cases:
- abort message server-side if response experienced error mid-generation
- attempt to recover message if aborting resulted in error
- if abortKey is not provided, use conversationId if it exists
- if headers were already sent, send an Event stream message
- issue warning for possible Google censor/filter

refactor(streamResponse): for `sendError`, allow passing overrides so that error can include partial generation, improve typing for `sendMessage`

* chore(MessageContent): remove eslint warning for unused `i`, rephrase unfinished message text

* fix(useSSE): avoid invoking cancelHandler if the abort response was 404

* chore(TMessage): remove unnecessary, unused legacy message property `submitting`

* chore(TMessage): remove unnecessary legacy message property `cancelled`

* chore(abortMiddleware): remove unused `errorText` property to avoid confusion
2023-12-30 12:34:23 -05:00
Marco Beretta
f19f5dca8e
🔥🚀 feat: CDN (Firebase) & feat: account section (#1438)
* localization + api-endpoint

* docs: added firebase documentation

* chore: icons

* chore: SettingsTabs

* feat: account panel; fix: gear icons

* docs: position update

* feat: firebase

* feat: plugin support

* route

* fixed bugs with firebase and moved a lot of files

* chore(DALLE3): using UUID v4

* feat: support for social strategies; moved '/images' path

* fix: data ignored

* gitignore update

* docs: update firebase guide

* refactor: Firebase
- use singleton pattern for firebase initialization, initially on server start
- reorganize imports, move firebase specific files to own service under Files
- rename modules to remove 'avatar' redundancy
- fix imports based on changes

* ci(DALLE/DALLE3): fix tests to use logger and new expected outputs, add firebase tests

* refactor(loadToolWithAuth): pass userId to tool as field

* feat(images/parse): Add URL Image Basename Extraction

Implement a new module to extract the basename of an image from a given URL. This addition includes the  function, which parses the URL and retrieves the basename using the Node.js 'url' and 'path' modules. The function is documented with JSDoc comments for better maintainability and understanding. This feature enhances the application's ability to handle and process image URLs efficiently.

* refactor(addImages): function to use a more specific regular expression for observedImagePath based on the generated image markdown standard across the app

* refactor(DALLE/DALLE3): utilize `getImageBasename` and `this.userId`; fix: pass correct image path to firebase url helper

* fix(addImages): make more general to match any image markdown descriptor

* fix(parse/getImageBasename): test result of this function for an actual image basename

* ci(DALLE3): mock getImageBasename

* refactor(AuthContext): use Recoil atom state for user

* feat: useUploadAvatarMutation, react-query hook for avatar upload

* fix(Toast): stack z-order of Toast over all components (1000)

* refactor(showToast): add optional status field to avoid importing NotificationSeverity on each use of the function

* refactor(routes/avatar): remove unnecessary get route, get userId from req.user.id, require auth on POST request

* chore(uploadAvatar): TODO: remove direct use of Model, `User`

* fix(client): fix Spinner imports

* refactor(Avatar): use react-query hook, Toast, remove unnecessary states, add optimistic UI to upload

* fix(avatar/localStrategy): correctly save local profile picture and cache bust for immediate rendering; fix: firebase init info message (only show once)

* fix: use `includes` instead of `endsWith` for checking manual query of avatar image path in case more queries are appended (as is done in avatar/localStrategy)

---------

Co-authored-by: Danny Avila <messagedaniel@protonmail.com>
2023-12-29 21:42:19 -05:00
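
The URL image-basename extraction described in #1438 can be sketched with Node's 'url' and 'path' modules, as the commit notes say; the exact validation rules (e.g. which extensions count as images) are assumptions:

```ts
import { URL } from 'url';
import * as path from 'path';

const imageExtensions = new Set(['.png', '.jpg', '.jpeg', '.gif', '.webp']);

// Returns the image file basename from a URL, or '' when the URL is invalid
// or does not end in a recognizable image file.
function getImageBasename(imageUrl: string): string {
  try {
    const { pathname } = new URL(imageUrl);
    const basename = path.basename(pathname);
    return imageExtensions.has(path.extname(basename).toLowerCase()) ? basename : '';
  } catch {
    return '';
  }
}

console.log(getImageBasename('https://example.com/images/img-abc123.png?alt=media')); // img-abc123.png
console.log(getImageBasename('https://example.com/profile')); // ''
```
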
Danny Avila
5b28362282
Release v0.6.5 (#1391)
*  Release v0.6.5

* fix(ci): use dynamic currentDateString
2023-12-19 01:09:42 -05:00
Danny Avila
8d563d61f1
feat: Azure Vision Support & Docs Update (#1389)
* feat(AzureOpenAI): Vision Support

* chore(ci/OpenAIClient.test): update test to reflect Azure now uses chatCompletion method as opposed to getCompletion, while still testing the latter method

* docs: update documentation mainly revolving around Azure setup, but also reformatting the 'Tokens and API' section completely

* docs: add images and links to ai_setup.md

* docs: ai setup reference
2023-12-18 18:43:50 -05:00
Danny Avila
c9d3e0ab6a
🚩 fix: Initialize Conversation Only when Necessary Data is Fetched (#1379)
* fix(ChatRoute): only initialize conversation after all data is fetched (models, endpoints, initialConversationQuery if not `new`)

* chore: remove unnecessary packages for rolling up api

* chore: bump data-provider package.json
2023-12-17 18:56:01 -05:00
Danny Avila
0c326797dd
📸 feat: Gemini vision, Improved Logs and Multi-modal Handling (#1368)
* feat: add GOOGLE_MODELS env var

* feat: add gemini vision support

* refactor(GoogleClient): adjust clientOptions handling depending on model

* fix(logger): fix redact logic and redact errors only

* fix(GoogleClient): do not allow non-multiModal messages when gemini-pro-vision is selected

* refactor(OpenAIClient): use `isVisionModel` client property to avoid calling validateVisionModel multiple times

* refactor: better debug logging by correctly traversing, redacting sensitive info, and logging condensed versions of long values

* refactor(GoogleClient): allow response errors to be thrown/caught above client handling so user receives meaningful error message
debug orderedMessages, parentMessageId, and buildMessages result

* refactor(AskController): use model from client.modelOptions.model when saving intermediate messages, which requires for the progress callback to be initialized after the client is initialized

* feat(useSSE): revert to previous model if the model was auto-switched by backend due to message attachments

* docs: update with google updates, notes about Gemini Pro Vision

* fix: redis should not be initialized without USE_REDIS and increase max listeners to 20
2023-12-16 20:45:27 -05:00
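
A rough sketch of the condensed, redacted debug logging described in #1368: traverse a value, redact keys that look sensitive, and truncate long strings. The key patterns and length cap are invented for illustration:

```ts
const sensitiveKeyPattern = /key|token|secret|password|api[-_]?key/i;

function condenseForLog(value: unknown, maxLength = 100): unknown {
  if (typeof value === 'string') {
    // Log condensed versions of long values.
    return value.length > maxLength ? `${value.slice(0, maxLength)}... [truncated]` : value;
  }
  if (Array.isArray(value)) {
    return value.map((item) => condenseForLog(item, maxLength));
  }
  if (value && typeof value === 'object') {
    const result: Record<string, unknown> = {};
    for (const [key, val] of Object.entries(value)) {
      result[key] = sensitiveKeyPattern.test(key) ? '[REDACTED]' : condenseForLog(val, maxLength);
    }
    return result;
  }
  return value;
}

console.log(condenseForLog({ apiKey: 'sk-123', prompt: 'hello' }));
// -> { apiKey: '[REDACTED]', prompt: 'hello' }
```
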
Danny Avila
2dfade1c42
Update package.json 2023-12-15 15:52:56 -05:00
Danny Avila
509b1e5c63
🔄 refactor: Consolidate Ask/Edit Controllers (#1365)
* refactor(Ask/Edit): consolidate ask/edit controllers between the main modules and openAI controllers to reduce repetition of code and increase reusability

* fix(winston/logger): circular dependency issue

* fix(config/scripts): fix script imports

* refactor(indexSync): make not configured message an info log message

* chore: create a rollup script for api/server/index.js to check circular dependencies

* chore: bump @keyv/redis
2023-12-15 15:47:40 -05:00
Danny Avila
0958db3825
fix: Enhance Test Coverage and Fix Compatibility Issues 👷‍♂️ (#1363)
* refactor: only remove conversation states from localStorage on login/logout but not on refresh

* chore: add debugging log for azure completion url

* chore: add api-key to redact regex

* fix: do not show endpoint selector if endpoint is falsy

* chore: remove logger from genAzureChatCompletion

* feat(ci): mock fetchEventSource

* refactor(ci): mock all model methods in BaseClient.test, as well as mock the implementation for getCompletion in FakeClient

* fix(OpenAIClient): consider chatCompletion if model name includes `gpt` as opposed to `gpt-`

* fix(ChatGPTClient/azureOpenAI): Remove 'model' option for Azure compatibility (cannot be sent in payload body)

* feat(ci): write new test suite that significantly increase test coverage for OpenAIClient and BaseClient by covering most of the real implementation of the `sendMessage` method
- test for the azure edge case where model option is appended to modelOptions, ensuring removal before being sent to the azure endpoint
- test for expected azure url being passed to SSE POST request
- test for AZURE_OPENAI_DEFAULT_MODEL being set, but is not included in the URL deployment name as expected
- test getCompletion method to have correct payload
fix(ci/OpenAIClient.test.js): correctly mock hanging/async methods

* refactor(addTitle): allow azure to title as it aborts signal on completion
2023-12-15 13:27:13 -05:00
Danny Avila
ff59a2e41d
fix: Avoid Throwing Errors for Unsupported Token Count Endpoints 🪙 (#1356) 2023-12-15 02:40:15 -05:00
Danny Avila
561ce8e86a
feat: Google Gemini ❇️ (#1355)
* refactor: add gemini-pro to google Models list; use defaultModels for central model listing

* refactor(SetKeyDialog): create useMultipleKeys hook to use for Azure, export `isJson` from utils, use EModelEndpoint

* refactor(useUserKey): change variable names to make keyName setting more clear

* refactor(FileUpload): allow passing container className string

* feat(GoogleClient): Gemini support

* refactor(GoogleClient): alternate stream speed for Gemini models

* feat(Gemini): styling/settings configuration for Gemini

* refactor(GoogleClient): subtract max response tokens from max context tokens if context is above 32k (I/O max is combined between the two)

* refactor(tokens): correct google max token counts and subtract max response tokens when input/output count are combined towards max context count

* feat(google/initializeClient): handle both local and user_provided credentials and write tests

* fix(GoogleClient): catch if credentials are undefined, handle if serviceKey is string or object correctly, handle no examples passed, throw error if not a Generative Language model and no service account JSON key is provided, throw error if it is a Generative model but no Google API key was provided

* refactor(loadAsyncEndpoints/google): activate Google endpoint if either the service key JSON file is provided in /api/data, or a GOOGLE_KEY is defined.

* docs: updated Google configuration

* fix(ci): Mock import of Service Account Key JSON file (auth.json)

* Update apis_and_tokens.md

* feat: increase max output tokens slider for gemini pro

* refactor(GoogleSettings): handle max and default maxOutputTokens on model change

* chore: add sensitive redact regex

* docs: add warning about data privacy

* Update apis_and_tokens.md
2023-12-15 02:18:07 -05:00
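
The Gemini token arithmetic in #1355 (subtracting max response tokens from the max context tokens when input and output share a combined limit) amounts to a one-line reservation; a sketch with illustrative numbers:

```ts
// When a model's input/output limit is combined, reserve the response budget
// out of the context budget before building the prompt.
function getAvailableContextTokens(
  maxContextTokens: number,
  maxOutputTokens: number,
  ioIsCombined: boolean,
): number {
  if (!ioIsCombined) {
    return maxContextTokens;
  }
  return Math.max(maxContextTokens - maxOutputTokens, 0);
}

// e.g. a 32k combined window with 2,048 tokens reserved for the response
console.log(getAvailableContextTokens(32_768, 2_048, true)); // 30720
```
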
Danny Avila
fac2580a19
fix(librechat-data-provider): Update types paths in package.json (#1333)
* fix(librechat-data-provider): Update types paths in package.json

* chore: bump version

* fix(librechat-data-provider): Update types paths in react-query/package.json
2023-12-12 09:16:29 -05:00
Danny Avila
df1dfa7d46
refactor: Use librechat-data-provider app-wide 🔄 (#1326)
* chore: bump vite, vitejs/plugin-react, mark client package as esm, move react-query as a peer dep in data-provider

* chore: import changes due to new data-provider export strategy, also fix type imports where applicable

* chore: export react-query services as separate to avoid react dependencies in /api/

* chore: suppress sourcemap warnings and polyfill node:path which is used by filenamify
TODO: replace filenamify with an alternative and REMOVE polyfill

* chore: /api/ changes to support `librechat-data-provider`

* refactor: rewrite Dockerfile.multi in light of /api/ changes to support `librechat-data-provider`

* chore: remove volume mapping to node_modules directories in default compose file

* chore: remove schemas from /api/ as is no longer needed with use of `librechat-data-provider`

* fix(ci): jest `librechat-data-provider/react-query` module resolution
2023-12-11 14:48:40 -05:00
Danny Avila
968b8ccdbd
🔒 fix: Robust Cache Reset on User Logout (#1324)
* refactor(Logout): rely on hooks for mutation behavior

* fix: logging out now correctly resets cache, disallowing any cache mixing between the next logged in user on the same browser

* chore: remove additional localStorage values on logout
2023-12-10 17:13:42 -05:00
Danny Avila
583e978a82
feat(Google): Support all Text/Chat Models, Response streaming, PaLM -> Google 🤖 (#1316)
* feat: update PaLM icons

* feat: add additional google models

* POC: formatting inputs for Vertex AI streaming

* refactor: move endpoints services outside of /routes dir to /services/Endpoints

* refactor: shorten schemas import

* refactor: rename PALM to GOOGLE

* feat: make Google editable endpoint

* feat: reusable Ask and Edit controllers based off Anthropic

* chore: organize imports/logic

* fix(parseConvo): include examples in googleSchema

* fix: google only allows odd number of messages to be sent

* fix: pass proxy to AnthropicClient

* refactor: change `google` altName to `Google`

* refactor: update getModelMaxTokens and related functions to handle maxTokensMap with nested endpoint model key/values

* refactor: google Icon and response sender changes (Codey and Google logo instead of PaLM in all cases)

* feat: google support for maxTokensMap

* feat: google updated endpoints with Ask/Edit controllers, buildOptions, and initializeClient

* feat(GoogleClient): now builds prompt for text models and supports real streaming from Vertex AI through langchain

* chore(GoogleClient): remove comments, left before for reference in git history

* docs: update google instructions (WIP)

* docs(apis_and_tokens.md): add images to google instructions

* docs: remove typo apis_and_tokens.md

* Update apis_and_tokens.md

* feat(Google): use default settings map, fully support context for both text and chat models, fully support examples for chat models

* chore: update more PaLM references to Google

* chore: move playwright out of workflows to avoid failing tests
2023-12-10 14:54:13 -05:00
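
A hedged sketch of the `getModelMaxTokens` refactor from #1316, where `maxTokensMap` is nested by endpoint and the model name falls back to a partial match; the map values here are illustrative only:

```ts
const maxTokensMap: Record<string, Record<string, number>> = {
  openAI: { 'gpt-4': 8191, 'gpt-3.5-turbo': 4095 },
  google: { 'gemini-pro': 30720, 'chat-bison': 4096 },
};

function getModelMaxTokens(model: string, endpoint = 'openAI'): number | undefined {
  const endpointMap = maxTokensMap[endpoint];
  if (!endpointMap) {
    return undefined;
  }
  if (endpointMap[model] !== undefined) {
    return endpointMap[model];
  }
  // Fall back to the first key the model name includes, e.g. 'gpt-4-0613' -> 'gpt-4'.
  const partial = Object.keys(endpointMap).find((key) => model.includes(key));
  return partial ? endpointMap[partial] : undefined;
}

console.log(getModelMaxTokens('gemini-pro', 'google')); // 30720
console.log(getModelMaxTokens('gpt-4-0613')); // 8191
```
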
Danny Avila
0bae503a0a
refactor: Speed up Config fetching and Setup Config Groundwork 👷🚧 (#1297)
* refactor: move endpoint services to own directory

* refactor: make endpointconfig handling more concise, separate logic, and cache result for subsequent serving

* refactor: ModelController gets same treatment as EndpointController, draft OverrideController

* wip: flesh out override controller more to return real value

* refactor: client/api changes in anticipation of override
2023-12-06 19:36:57 -05:00
Danny Avila
ca64efec1b
feat: Implement Default Preset Selection for Conversations 📌 (#1275)
* fix: type issues with icons

* refactor: use react query for presets, show toasts on preset crud, refactor mutations, remove presetsQuery from Root (breaking change)

* refactor: change preset titling

* refactor: update preset schemas and methods for necessary new properties `order` and `defaultPreset`

* feat: add `defaultPreset` Recoil value

* refactor(getPresetTitle): make logic cleaner and more concise

* feat: complete UI portion of defaultPreset feature, with animations added to preset items

* chore: remove console.logs()

* feat: complete default preset handling

* refactor: remove user sensitive values on logout

* fix: allow endpoint selection without default preset overwriting
2023-12-06 14:00:15 -05:00
Marco Beretta
fdb65366d7
📧 feat: disable login (ALLOW_EMAIL_LOGIN) (#1282)
* added ALLOW_EMAIL_LOGIN

* update .env.example

* fix(config) email login true by default

* Update dotenv.md
2023-12-06 07:08:49 -05:00
Danny Avila
5e6f8cbce7
fix: Correct Default Model Name in Response Sender and Update Anthropics 🤖 (#1208)
* feat: add claude-2.1 to default anthropic models

* chore: remove console log in NavLinks

* fix: issue with response sender not using model name, change anthropic default value to Claude

* fix: preset will not be selected on edit
2023-11-22 18:29:09 -05:00
Danny Avila
f3402401f1
feat: Order/disable endpoints with ENDPOINTS env var (#1206)
* fix: endpoint will not be select if disabled

* feat: order and disable endpoints with ENDPOINTS env var

* chore: remove console.log
2023-11-22 13:56:38 -05:00
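
A minimal sketch of how an `ENDPOINTS` env var from #1206 might both order and disable endpoints, e.g. `ENDPOINTS=google,openAI`; the default list and parsing details are assumptions:

```ts
const defaultOrder = ['openAI', 'azureOpenAI', 'google', 'anthropic', 'gptPlugins'];

function getEnabledEndpoints(envValue?: string): string[] {
  if (!envValue || !envValue.trim()) {
    return defaultOrder; // nothing configured: keep every endpoint, default order
  }
  // Keep only known endpoints, in the order the user listed them;
  // anything omitted is effectively disabled.
  const requested = envValue.split(',').map((name) => name.trim()).filter(Boolean);
  return requested.filter((name) => defaultOrder.includes(name));
}

console.log(getEnabledEndpoints('google,openAI')); // ['google', 'openAI']
```
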
Danny Avila
317cdd3f77
feat: Vision Support + New UI (#1203)
* feat: add timer duration to showToast, show toast for preset selection

* refactor: replace old /chat/ route with /c/. e2e tests will fail here

* refactor: move typedefs to root of /api/ and add a few to assistant types in TS

* refactor: reorganize data-provider imports, fix dependency cycle, strategize new plan to separate react dependent packages

* feat: add dataService for uploading images

* feat(data-provider): add mutation keys

* feat: file resizing and upload

* WIP: initial API image handling

* fix: catch JSON.parse of localStorage tools

* chore: experimental: use module-alias for absolute imports

* refactor: change temp_file_id strategy

* fix: updating files state by using Map and defining react query callbacks in a way that keeps them during component unmount, initial delete handling

* feat: properly handle file deletion

* refactor: unexpose complete filepath and resize from server for higher fidelity

* fix: make sure resized height, width is saved, catch bad requests

* refactor: use absolute imports

* fix: prevent setOptions from being called more than once for OpenAIClient, made note to fix for PluginsClient

* refactor: import supportsFiles and models vars from schemas

* fix: correctly replace temp file id

* refactor(BaseClient): use absolute imports, pass message 'opts' to buildMessages method, count tokens for nested objects/arrays

* feat: add validateVisionModel to determine if model has vision capabilities

* chore(checkBalance): update jsdoc

* feat: formatVisionMessage: change message content format dependent on role and image_urls passed

* refactor: add usage to File schema, make create and updateFile, correctly set and remove TTL

* feat: working vision support
TODO: file size, type, amount validations, making sure they are styled right, and making sure you can add images from the clipboard/dragging

* feat: clipboard support for uploading images

* feat: handle files on drop to screen, refactor top level view code to Presentation component so the useDragHelpers hook has ChatContext

* fix(Images): replace uploaded images in place

* feat: add filepath validation to protect sensitive files

* fix: ensure correct file_ids are pushed and not the Map key values

* fix(ToastContext): type issue

* feat: add basic file validation

* fix(useDragHelpers): correct context issue with `files` dependency

* refactor: consolidate setErrors logic to setError

* feat: add dialog Image overlay on image click

* fix: close endpoints menu on click

* chore: set detail to auto, make note for configuration

* fix: react warning (button desc. of button)

* refactor: optimize filepath handling, pass file_ids to images for easier re-use

* refactor: optimize image file handling, allow re-using files in regen, pass more file metadata in messages

* feat: lazy loading images including use of upload preview

* fix: SetKeyDialog closing, stopPropagation on Dialog content click

* style(EndpointMenuItem): tighten up the style, fix dark theme showing in lightmode, make menu more ux friendly

* style: change maxheight of all settings textareas to 138px from 300px

* style: better styling for textarea and enclosing buttons

* refactor(PresetItems): swap back edit and delete icons

* feat: make textarea placeholder dynamic to endpoint

* style: show user hover buttons only on hover when message is streaming

* fix: ordered list not going past 9, fix css

* feat: add User/AI labels; style: hide loading spinner

* feat: add back custom footer, change original footer text

* feat: dynamic landing icons based on endpoint

* chore: comment out assistants route

* fix: autoScroll to newest on /c/ view

* fix: Export Conversation on new UI

* style: match message style of official more closely

* ci: fix api jest unit tests, comment out e2e tests for now as they will fail until addressed

* feat: more file validation and use blob in preview field, not filepath, to fix temp deletion

* feat: filefilter for multer

* feat: better AI labels based on custom name, model, and endpoint instead of `ChatGPT`
2023-11-21 20:12:48 -05:00
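
The vision message formatting from #1203 (`formatVisionMessage`, changing message content depending on role and attached `image_urls`) can be pictured as converting a plain text string into an OpenAI-style content array; field shapes follow the OpenAI vision message format, while the helper name and signature are assumptions:

```ts
type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'image_url'; image_url: { url: string; detail: 'auto' | 'low' | 'high' } };

interface VisionMessage {
  role: 'user' | 'assistant' | 'system';
  content: string | ContentPart[];
}

function formatVisionMessage(
  message: VisionMessage,
  imageUrls: string[],
  detail: 'auto' | 'low' | 'high' = 'auto',
): VisionMessage {
  // Only user messages with attachments and plain-string content are converted.
  if (message.role !== 'user' || imageUrls.length === 0 || typeof message.content !== 'string') {
    return message;
  }
  const parts: ContentPart[] = [{ type: 'text', text: message.content }];
  for (const url of imageUrls) {
    parts.push({ type: 'image_url', image_url: { url, detail } });
  }
  return { ...message, content: parts };
}
```
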
Danny Avila
bac1fb67d2
WIP: Update UI to match Official Style; Vision and Assistants 👷🏽 (#1190)
* wip: initial client side code

* wip: initial api code

* refactor: export query keys from own module, export assistant hooks

* refactor(SelectDropDown): more customization via props

* feat: create Assistant and render real Assistants

* refactor: major refactor of UI components to allow multi-chat, working alongside CreationPanel

* refactor: move assistant routes to own directory

* fix(CreationHeader): state issue with assistant select

* refactor: style changes for form, fix setSiblingIdx from useChatHelpers to use latestMessageParentId, fix render issue with ChatView and change location

* feat: parseCompactConvo: begin refactor of slimmer JSON payloads between client/api

* refactor(endpoints): add assistant endpoint, also use EModelEndpoint as much as possible

* refactor(useGetConversationsQuery): use object to access query data easily

* fix(MultiMessage): react warning of bad state set, making use of effect during render (instead of useEffect)

* fix(useNewConvo): use correct atom key (index instead of convoId) for reset latestMessageFamily

* refactor: make routing navigation/conversation change simpler

* chore: add removeNullishValues for smaller payloads, remove unused fields, setup frontend pinging of assistant endpoint

* WIP: initial complete assistant run handling

* fix: CreationPanel form correctly setting internal state

* refactor(api/assistants/chat): revise functions to working run handling strategy

* refactor(UI): initial major refactor of ChatForm and options

* feat: textarea hook

* refactor: useAuthRedirect hook and change directory name

* feat: add ChatRoute (/c/), make optionsBar absolute and change on textarea height, add temp header

* feat: match new toggle Nav open button to ChatGPT's

* feat: add OpenAI custom classnames

* feat: useOriginNavigate

* feat: messages loading view

* fix: conversation navigation and effects

* refactor: make toggle change nav opacity

* WIP: new endpoint menu

* feat: NewEndpointsMenu complete

* fix: ensure set key dialog shows on endpoint change, and new conversation resets messages

* WIP: textarea styling fix, add temp footer, create basic file handling component

* feat: image file handling (UI)

* feat: PopOver and ModelSelect in Header, remove GenButtons

* feat: drop file handling

* refactor: bug fixes
use SSE at route level
add opts to useOriginNavigate
delay render of unfinishedMessage to avoid flickering
pass params (convoId) to chatHelpers to set messages query data based on param when the route is new (fixes can't continue convo on /new/)
style(MessagesView): matches height to official
fix(SSE): pass paramId and invalidate convos
style(Message): make bg uniform

* refactor(useSSE): setStorage within setConversation updates

* feat: conversationKeysAtom, allConversationsSelector, update convos query data on created message (if new), correctly handle convo deletion (individual)

* feat: add popover select dropdowns to allow options in header while allowing horizontal scroll for mobile

* style(pluginsSelect): styling changes

* refactor(NewEndpointsMenu): make UI components modular

* feat: Presets complete

* fix: preset editing, make by index

* fix: conversations not setting on initial navigation, fix getMessages() based on query param

* fix: changing preset no longer resets latestMessage

* feat: useOnClickOutside for OptionsPopover and fix bug that causes selection of preset when deleting

* fix: revert /chat/ switchToConvo, also use NewDeleteButton in Convo

* fix: Popover correctly closes on close Popover button using custom condition for useOnClickOutside

* style: new message and nav styling

* style: hover/sibling buttons and preset menu scrolling

* feat: new convo header button

* style(Textarea): minor style changes to textarea buttons

* feat: stop/continue generating and hide hoverbuttons when submitting

* feat: compact AI Provider schemas to make json payloads and db saves smaller

* style: styling changes for consistency on chat route

* fix: created usePresetIndexOptions to prevent bugs between /c/ and /chat/ routes when editing presets, removed redundant code from the new dialog

* chore: make /chat/ route default for now since we still lack full image support
2023-11-16 10:42:24 -05:00
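
One small utility called out in #1190, `removeNullishValues` for smaller JSON payloads, might look like this sketch:

```ts
// Drop null/undefined fields before sending or saving a payload.
function removeNullishValues<T extends Record<string, unknown>>(obj: T): Partial<T> {
  const entries = Object.entries(obj).filter(([, value]) => value !== null && value !== undefined);
  return Object.fromEntries(entries) as Partial<T>;
}

console.log(removeNullishValues({ model: 'gpt-4', temperature: undefined, top_p: null }));
// -> { model: 'gpt-4' }
```
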
Danny Avila
adbeb46399
v0.6.1 (#1189) 2023-11-16 08:53:09 -05:00
Danny Avila
3a38b4b842
feat: DALL-E-3 support 🎨 (#1147)
* feat: DALL-E-3 support

* fix(ci): lock-in openai dependency for types used in data-provider
2023-11-06 19:45:59 -05:00
Danny Avila
05c4c7e551
feat: add CUSTOM_FOOTER env variable (#1098) 2023-10-23 21:08:18 -04:00
Danny Avila
4073b7d05d
Refactor: replace endpointsConfig recoil store with react query (#1085) 2023-10-21 13:50:29 -04:00
Danny Avila
6cb561abcf
fix: getLogStores Property and Handle 401 Error from Refresh Token Request (#1084)
* fix(getLogStores): correct wrong prop passed to keyv opts: duration -> ttl

* fix: edge case where we get a blank screen if the initially intercepted 401 error is from a refresh token request; in this case, make it explicit to the server that we are retrying from a refreshToken request
2023-10-21 12:39:08 -04:00
Danny Avila
fd99bac121
fix(data-provider): typo 'messsages' -> 'messages', export named default (#1073) 2023-10-18 11:10:06 -04:00
Danny Avila
ddf56db316
fix(auth/refresh): send 403 res for invalid token to properly invalidate session (#1068) 2023-10-17 08:34:14 -04:00
Danny Avila
b3aac97710
fix(balance/models): request only when authenticated, modelsQuery "optimistic" update (#1031)
* fix(balanceQuery/modelsQuery): request only when authenticated

* style: match new chat capitalization to official

* fix(modelsQuery): update selected model optimistically

* ci: update e2e changes, disable title in ci env

* fix(ci): get new chat button by data-testid and not text
2023-10-09 15:10:23 -04:00
Danny Avila
365c39c405
feat: Accurate Token Usage Tracking & Optional Balance (#1018)
* refactor(Chains/llms): allow passing callbacks

* refactor(BaseClient): accurately count completion tokens as generation only

* refactor(OpenAIClient): remove unused getTokenCountForResponse, pass streaming var and callbacks in initializeLLM

* wip: summary prompt tokens

* refactor(summarizeMessages): new cut-off strategy that generates a better summary by adding context from beginning, truncating the middle, and providing the end
wip: draft out relevant providers and variables for token tracing

* refactor(createLLM): make streaming prop false by default

* chore: remove use of getTokenCountForResponse

* refactor(agents): use BufferMemory as ConversationSummaryBufferMemory token usage not easy to trace

* chore: remove passing of streaming prop, also console log useful vars for tracing

* feat: formatFromLangChain helper function to count tokens for ChatModelStart

* refactor(initializeLLM): add role for LLM tracing

* chore(formatFromLangChain): update JSDoc

* feat(formatMessages): formats langChain messages into OpenAI payload format

* chore: install openai-chat-tokens

* refactor(formatMessage): optimize conditional langChain logic
fix(formatFromLangChain): fix destructuring

* feat: accurate prompt tokens for ChatModelStart before generation

* refactor(handleChatModelStart): move to callbacks dir, use factory function

* refactor(initializeLLM): rename 'role' to 'context'

* feat(Balance/Transaction): new schema/models for tracking token spend
refactor(Key): factor out model export to separate file

* refactor(initializeClient): add req,res objects to client options

* feat: add-balance script to add to an existing user's token balance
refactor(Transaction): use multiplier map/function, return balance update

* refactor(Tx): update enum for tokenType, return 1 for multiplier if no map match

* refactor(Tx): add a fair fallback value multiplier in case the config result is undefined

* refactor(Balance): rename 'tokens' to 'tokenCredits'

* feat: balance check, add tx.js for new tx-related methods and tests

* chore(summaryPrompts): update prompt token count

* refactor(callbacks): pass req, res
wip: check balance

* refactor(Tx): make convoId a String type, fix(calculateTokenValue)

* refactor(BaseClient): add conversationId as client prop when assigned

* feat(RunManager): track LLM runs with manager, track token spend from LLM,
refactor(OpenAIClient): use RunManager to create callbacks, pass user prop to langchain api calls

* feat(spendTokens): helper to spend prompt/completion tokens

* feat(checkBalance): add helper to check, log, deny request if balance doesn't have enough funds
refactor(Balance): static check method to return object instead of boolean now
wip(OpenAIClient): implement use of checkBalance

* refactor(initializeLLM): add token buffer to assure summary isn't generated when subsequent payload is too large
refactor(OpenAIClient): add checkBalance
refactor(createStartHandler): add checkBalance

* chore: remove prompt and completion token logging from route handler

* chore(spendTokens): add JSDoc

* feat(logTokenCost): record transactions for basic api calls

* chore(ask/edit): invoke getResponseSender only once per API call

* refactor(ask/edit): pass promptTokens to getIds and include in abort data

* refactor(getIds -> getReqData): rename function

* refactor(Tx): increase value if incomplete message

* feat: record tokenUsage when message is aborted

* refactor: subtract tokens when payload includes function_call

* refactor: add namespace for token_balance

* fix(spendTokens): only execute if corresponding token type amounts are defined

* refactor(checkBalance): throws Error if not enough token credits

* refactor(runTitleChain): pass and use signal, spread object props in create helpers, and use 'call' instead of 'run'

* fix(abortMiddleware): circular dependency, and default to empty string for completionTokens

* fix: properly cancel title requests when there isn't enough tokens to generate

* feat(predictNewSummary): custom chain for summaries to allow signal passing
refactor(summaryBuffer): use new custom chain

* feat(RunManager): add getRunByConversationId method, refactor: remove run and throw llm error on handleLLMError

* refactor(createStartHandler): if summary, add error details to runs

* fix(OpenAIClient): support aborting from summarization & showing error to user
refactor(summarizeMessages): remove unnecessary operations counting summaryPromptTokens and note for alternative, pass signal to summaryBuffer

* refactor(logTokenCost -> recordTokenUsage): rename

* refactor(checkBalance): include promptTokens in errorMessage

* refactor(checkBalance/spendTokens): move to models dir

* fix(createLanguageChain): correctly pass config

* refactor(initializeLLM/title): add tokenBuffer of 150 for balance check

* refactor(openAPIPlugin): pass signal and memory, filter functions by the one being called

* refactor(createStartHandler): add error to run if context is plugins as well

* refactor(RunManager/handleLLMError): throw error immediately if plugins, don't remove run

* refactor(PluginsClient): pass memory and signal to tools, cleanup error handling logic

* chore: use absolute equality for addTitle condition

* refactor(checkBalance): move checkBalance to execute after userMessage and tokenCounts are saved, also make conditional

* style: icon changes to match official

* fix(BaseClient): getTokenCountForResponse -> getTokenCount

* fix(formatLangChainMessages): add kwargs as fallback prop from lc_kwargs, update JSDoc

* refactor(Tx.create): does not update balance if CHECK_BALANCE is not enabled

* fix(e2e/cleanUp): cleanup new collections, import all model methods from index

* fix(config/add-balance): add uncaughtException listener

* fix: circular dependency

* refactor(initializeLLM/checkBalance): append new generations to errorMessage if cost exceeds balance

* fix(handleResponseMessage): only record token usage in this method if not error and completion is not skipped

* fix(createStartHandler): correct condition for generations

* chore: bump postcss due to moderate severity vulnerability

* chore: bump zod due to low severity vulnerability

* chore: bump openai & data-provider version

* feat(types): OpenAI Message types

* chore: update bun lockfile

* refactor(CodeBlock): add error block formatting

* refactor(utils/Plugin): factor out formatJSON and cn to separate files (json.ts and cn.ts), add extractJSON

* chore(logViolation): delete user_id after error is logged

* refactor(getMessageError -> Error): change to React.FC, add token_balance handling, use extractJSON to determine JSON instead of regex

* fix(DALL-E): use latest openai SDK

* chore: reorganize imports, fix type issue

* feat(server): add balance route

* fix(api/models): add auth

* feat(data-provider): /api/balance query

* feat: show balance if checking is enabled, refetch on final message or error

* chore: update docs, .env.example with token_usage info, add balance script command

* fix(Balance): fallback to empty obj for balance query

* style: slight adjustment of balance element

* docs(token_usage): add PR notes
2023-10-05 18:34:10 -04:00
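
A hedged sketch of the token-credit accounting introduced in #1018: a multiplier map per model and token type, a spend calculation, and a `checkBalance`-style guard that throws when credits run short. All rates below are invented for illustration:

```ts
type TokenType = 'prompt' | 'completion';

// Illustrative multipliers only; real rates live in the app's config.
const tokenValues: Record<string, Record<TokenType, number>> = {
  'gpt-3.5-turbo': { prompt: 1.5, completion: 2 },
  'gpt-4': { prompt: 30, completion: 60 },
};

function getMultiplier(model: string, tokenType: TokenType): number {
  return tokenValues[model]?.[tokenType] ?? 1; // fall back to 1 when no map match
}

function calculateTokenValue(model: string, tokenType: TokenType, rawAmount: number): number {
  return rawAmount * getMultiplier(model, tokenType);
}

// Deny the request by throwing when the balance cannot cover the prompt cost.
function checkBalance(balance: number, model: string, promptTokens: number): void {
  const cost = calculateTokenValue(model, 'prompt', promptTokens);
  if (balance < cost) {
    throw new Error(`Insufficient funds: needed ${cost} token credits, have ${balance}.`);
  }
}

checkBalance(100_000, 'gpt-4', 1_000); // ok: 30,000 credits needed
```
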
Danny Avila
317a1bd8da
feat: ConversationSummaryBufferMemory (#973)
* refactor: pass model in message edit payload, use encoder in standalone util function

* feat: add summaryBuffer helper

* refactor(api/messages): use new countTokens helper and add auth middleware at top

* wip: ConversationSummaryBufferMemory

* refactor: move pre-generation helpers to prompts dir

* chore: remove console log

* chore: remove test as payload will no longer carry tokenCount

* chore: update getMessagesWithinTokenLimit JSDoc

* refactor: optimize getMessagesForConversation and also break on summary, feat(ci): getMessagesForConversation tests

* refactor(getMessagesForConvo): count '00000000-0000-0000-0000-000000000000' as root message

* chore: add newer model to token map

* fix: condition was pointing to a prop of the array instead of the message prop

* refactor(BaseClient): use object for refineMessages param, rename 'summary' to 'summaryMessage', add previous_summary
refactor(getMessagesWithinTokenLimit): replace text and tokenCount if should summarize, summary, and summaryTokenCount are present
fix/refactor(handleContextStrategy): use the right comparison length for context diff, and replace payload first message when a summary is present

* chore: log previous_summary if debugging

* refactor(formatMessage): assume if role is defined that it's a valid value

* refactor(getMessagesWithinTokenLimit): remove summary logic
refactor(handleContextStrategy): add usePrevSummary logic in case only summary was pruned
refactor(loadHistory): initial message query will return all ordered messages but keep track of the latest summary
refactor(getMessagesForConversation): use object for single param, edit jsdoc, edit all files using the method
refactor(ChatGPTClient): order messages before buildPrompt is called, TODO: add convoSumBuffMemory logic

* fix: undefined handling and summarizing only when shouldRefineContext is true

* chore(BaseClient): fix test results omitting system role for summaries and test edge case

* chore: export summaryBuffer from index file

* refactor(OpenAIClient/BaseClient): move refineMessages to subclass, implement LLM initialization for summaryBuffer

* feat: add OPENAI_SUMMARIZE to enable summarizing, refactor: rename client prop 'shouldRefineContext' to 'shouldSummarize', change contextStrategy value to 'summarize' from 'refine'

* refactor: rename refineMessages method to summarizeMessages for clarity

* chore: clarify summary future intent in .env.example

* refactor(initializeLLM): handle case for either 'model' or 'modelName' being passed

* feat(gptPlugins): enable summarization for plugins

* refactor(gptPlugins): utilize new initializeLLM method and formatting methods for messages, use payload array for currentMessages and assign pastMessages sooner

* refactor(agents): use ConversationSummaryBufferMemory for both agent types

* refactor(formatMessage): optimize original method for langchain, add helper function for langchain messages, add JSDocs and tests

* refactor(summaryBuffer): add helper to createSummaryBufferMemory, and use new formatting helpers

* fix: forgot to spread formatMessages; also took the opportunity to pluralize the filename

* refactor: pass memory to tools, namely openapi specs. not used and may never be used by new method but added for testing

* ci(formatMessages): add more exhaustive checks for langchain messages

* feat: add debug env var for OpenAI

* chore: delete unnecessary comments

* chore: add extra note about summary feature

* fix: remove tokenCount from payload instructions

* fix: test fail

* fix: only pass instructions to payload when defined or not empty object

* refactor: fromPromptMessages is deprecated, use renamed method fromMessages

* refactor: use 'includes' instead of 'startsWith' for extended OpenRouter compatibility

* fix(PluginsClient.buildPromptBody): handle undefined message strings

* chore: log langchain titling error

* feat: getModelMaxTokens helper

* feat: tokenSplit helper

* feat: summary prompts updated

* fix: optimize _CUT_OFF_SUMMARIZER prompt

* refactor(summaryBuffer): use custom summary prompt, allow prompt to be passed, pass humanPrefix and aiPrefix to memory, along with any future variables, rename messagesToRefine to context

* fix(summaryBuffer): handle edge case where messagesToRefine exceeds summary context,
refactor(BaseClient): allow custom maxContextTokens to be passed to getMessagesWithinTokenLimit, add defined check before unshifting summaryMessage, update shouldSummarize based on this
refactor(OpenAIClient): use getModelMaxTokens, use cut-off message method for summary if no messages were left after pruning

* fix(handleContextStrategy): handle case where incoming prompt is bigger than model context

* chore: rename refinedContent to splitText

* chore: remove unnecessary debug log
2023-09-26 21:02:28 -04:00
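
The pruning step behind `getMessagesWithinTokenLimit` in #973 can be sketched as a newest-first walk that keeps messages while they fit the context budget and hands the remainder back as candidate input for the summarizer; token counts are assumed precomputed and the return shape is an assumption:

```ts
interface CountedMessage {
  text: string;
  tokenCount: number;
}

function getMessagesWithinTokenLimit(
  messages: CountedMessage[],
  maxContextTokens: number,
): { context: CountedMessage[]; remaining: CountedMessage[]; remainingTokens: number } {
  const context: CountedMessage[] = [];
  let remainingTokens = maxContextTokens;
  let index = messages.length - 1;

  // Newest messages are at the end of the array; include them first.
  while (index >= 0 && messages[index].tokenCount <= remainingTokens) {
    remainingTokens -= messages[index].tokenCount;
    context.unshift(messages[index]);
    index -= 1;
  }

  // Everything older than the cut-off becomes input for the summary buffer.
  return { context, remaining: messages.slice(0, index + 1), remainingTokens };
}
```
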
Danny Avila
8580f1c3d3
v0.5.9 (#970)
*  v0.5.9

* chore: bump data-provider
2023-09-18 17:23:32 -04:00
Danny Avila
6358383001
feat(db & e2e): Enhance DB Schemas/Controllers and Improve E2E Tests (#966)
* feat: add global teardown to remove test data and add registration/log-out to auth flow

* refactor(models/Conversation): index user field and add JSDoc to deleteConvos

* refactor: add user index to message schema and ensure user is saved to each Message

* refactor: add user to each saveMessage call

* fix: handle case where title is null in zod schema

* feat(e2e): ensure messages are deleted on cleanUp

* fix: set last convo for all endpoints on conversation update

* fix: enable registration for CI env
2023-09-18 15:19:50 -04:00
Danny Avila
fd70e21732
feat: OpenRouter Support & Improve Model Fetching ⇆ (#936)
* chore(ChatGPTClient.js): add support for OpenRouter API
chore(OpenAIClient.js): add support for OpenRouter API

* chore: comment out token debugging

* chore: add back streamResult assignment

* chore: remove double condition/assignment from merging

* refactor(routes/endpoints): -> controller/services logic

* feat: add openrouter model fetching

* chore: remove unused endpointsConfig in cleanupPreset function

* refactor: separate models concern from endpointsConfig

* refactor(data-provider): add TModels type and make TEndpointsConfig adaptable to new endpoint keys

* refactor: complete models endpoint service in data-provider

* refactor: onMutate for refreshToken and login, invalidate models query

* feat: complete models endpoint logic for frontend

* chore: remove requireJwtAuth from /api/endpoints and /api/models as not implemented yet

* fix: endpoint will not be overwritten and instead use active value

* feat: openrouter support for plugins

* chore(EndpointOptionsDialog): remove unused recoil value

* refactor(schemas/parseConvo): add handling of secondaryModels to use first of defined secondary models, which includes last selected one as first, or default to the convo's secondary model value

* refactor: remove hooks from store and move to hooks
refactor(switchToConversation): make switchToConversation use latest recoil state, which is necessary to get the most up-to-date models list, replace wrapper function
refactor(getDefaultConversation): factor out logic into 3 pieces to reduce complexity.

* fix: backend tests

* feat: optimistic update by calling newConvo when models are fetched

* feat: openrouter support for titling convos

* feat: cache models fetch

* chore: add missing dep to AuthContext useEffect

* chore: fix useTimeout types

* chore: delete old getDefaultConvo file

* chore: remove newConvo logic from Root, remove console log from api models caching

* chore: ensure bun is used for building in b:client script

* fix: default endpoint will not default to null on a completely fresh login (no localStorage/cookies)

* chore: add openrouter docs to free_ai_apis.md and .env.example

* chore: remove openrouter console logs

* feat: add debugging env variable for Plugins
2023-09-18 12:55:51 -04:00
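
A hedged sketch of the model fetching and caching added in #936 against an OpenAI-compatible `/models` endpoint such as OpenRouter's; the URL and response shape are assumptions for illustration:

```ts
interface ModelsResponse {
  data: Array<{ id: string }>;
}

let cachedModels: string[] | undefined;

async function fetchModels(baseURL: string, apiKey?: string): Promise<string[]> {
  if (cachedModels) {
    return cachedModels; // serve from cache on subsequent calls
  }
  const res = await fetch(`${baseURL}/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : undefined,
  });
  if (!res.ok) {
    throw new Error(`Model fetch failed: ${res.status}`);
  }
  const body = (await res.json()) as ModelsResponse;
  cachedModels = body.data.map((model) => model.id);
  return cachedModels;
}

// e.g. fetchModels('https://openrouter.ai/api/v1', process.env.OPENROUTER_KEY)
```
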