* Basic implementation of ChatGPT conversation import
* remove debug code
* Handle citations
* Fix updatedAt in import
* update default model
* Use job scheduler to handle import requests
* import job status endpoint
* Add wrapper around Agenda
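  The import items above hand uploads off to a background job runner via an Agenda wrapper. A minimal sketch of what such a wrapper could look like; the module name, job name, and handler shape are illustrative assumptions, not the actual LibreChat code.

  ```ts
  // jobScheduler.ts - hypothetical thin wrapper around Agenda for import jobs
  import { Agenda, Job } from 'agenda';

  const agenda = new Agenda({
    db: { address: process.env.MONGO_URI ?? '', collection: 'jobs' },
  });

  // Register a named job; the real handler would parse the uploaded file and save conversations.
  export function defineImportJob(handler: (job: Job) => Promise<void>) {
    agenda.define('import-conversations', handler);
  }

  // Enqueue an import request and return the job id so the client can poll its status.
  export async function enqueueImport(userId: string, filepath: string): Promise<string> {
    const job = await agenda.now('import-conversations', { userId, filepath });
    return String(job.attrs._id);
  }

  export async function startScheduler() {
    await agenda.start();
  }
  ```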
* Rate limits for import endpoint
* rename import api path
* Batch save import to mongo
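  "Batch save import to mongo" suggests the importer buffers parsed documents and writes them in chunks rather than one at a time. A hypothetical Mongoose sketch; the batch size and generic model are assumptions.

  ```ts
  import { Model } from 'mongoose';

  // Flush parsed documents in chunks so a large export doesn't become one giant insert.
  async function batchSave<T>(model: Model<T>, docs: T[], batchSize = 100): Promise<void> {
    for (let i = 0; i < docs.length; i += batchSize) {
      const chunk = docs.slice(i, i + batchSize);
      // `ordered: false` keeps going if a single document fails validation.
      await model.insertMany(chunk, { ordered: false });
    }
  }
  ```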
* Improve naming
* Add documenting comments
* Test for importers
* Change button for importing conversations
* Frontend changes
* Import job status endpoint
* Import endpoint response
* Add translations to new phrases
* Fix conversations refreshing
* cleanup unused functions
* set timeout for import job status polling
* Add documentation
* get extra spaces back
* Improve error message
* Fix translation files after merge
* fix translation files 2
* Add zh translation for import functionality
* Sync Meilisearch index after import
* chore: add dummy uri for jest tests, as MONGO_URI should only be real for E2E tests
* docs: fix links
* docs: fix conversationsImport section
* fix: user role issue for librechat imports
* refactor: import conversations from json
- organize imports
- add additional jsdocs
- use multer with diskStorage to avoid loading file into memory outside of job (see the sketch after this list)
- use filepath instead of loading data string for imports
- replace console logs and some logger.info() with logger.debug
- only use multer for import route
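  A minimal sketch of scoping multer's diskStorage to just the import route, so the upload is streamed to disk and only the filepath is passed to the job. The destination directory, field name, and `enqueueImport` helper are illustrative assumptions.

  ```ts
  import { randomUUID } from 'crypto';
  import path from 'path';
  import multer from 'multer';
  import express from 'express';

  const storage = multer.diskStorage({
    // Write uploads straight to disk instead of buffering the whole file in memory.
    // The destination directory is assumed to already exist.
    destination: (_req, _file, cb) => cb(null, path.resolve('uploads', 'imports')),
    filename: (_req, file, cb) => cb(null, `${randomUUID()}-${file.originalname}`),
  });
  const upload = multer({ storage });

  const router = express.Router();

  // Only the import route uses multer; the background job later reads the file from disk.
  router.post('/import', upload.single('file'), (req, res) => {
    res.json({ filepath: req.file?.path });
  });
  ```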
* fix: undefined metadata edge case and replace ChatGtp -> ChatGpt
* Refactor importChatGptConvo function to handle undefined metadata edge case and replace ChatGtp with ChatGpt
* fix: chatgpt importer
* feat: maintain tree relationship for librechat messages
* chore: use enum
* refactor: saveMessage to use single object arg, replace console logs, add userId to log message
* chore: additional comment
* chore: multer edge case
* feat: first pass, maintain tree relationship
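  "Maintain tree relationship" presumably means preserving each imported message's parent link while re-keying messages with new ids. A hedged sketch of that idea; the field names and id source are assumptions, not the actual importer.

  ```ts
  import { randomUUID } from 'crypto';

  interface ImportedMessage {
    id: string;              // id from the exported JSON
    parentId: string | null; // parent id from the exported JSON
    text: string;
  }

  // Re-key messages with fresh ids while keeping the parent/child structure intact.
  function rebuildTree(messages: ImportedMessage[], conversationId: string) {
    const idMap = new Map<string, string>();
    for (const msg of messages) {
      idMap.set(msg.id, randomUUID());
    }
    return messages.map((msg) => ({
      messageId: idMap.get(msg.id),
      parentMessageId: msg.parentId ? idMap.get(msg.parentId) ?? null : null,
      conversationId,
      text: msg.text,
    }));
  }
  ```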
* chore: organize
* chore: remove log
* ci: add hierarchy test for chatgpt
* ci: test hierarchy maintenance for librechat
* wip: allow non-text content type messages
* refactor: import content part object json string
* refactor: more content types to format
* chore: consolidate messageText formatting
* docs: update on changes, bump data-provider/config versions, update readme
* refactor(indexSync): singleton pattern for MeiliSearchClient
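  The indexSync refactor mentions a singleton pattern for the MeiliSearch client. A minimal sketch of that pattern with the meilisearch JS SDK; the environment variable names are assumed for illustration.

  ```ts
  import { MeiliSearch } from 'meilisearch';

  let client: MeiliSearch | null = null;

  // Reuse one MeiliSearch client across sync runs instead of constructing a new one per call.
  export function getMeiliSearchClient(): MeiliSearch {
    if (!client) {
      client = new MeiliSearch({
        host: process.env.MEILI_HOST ?? 'http://127.0.0.1:7700',
        apiKey: process.env.MEILI_MASTER_KEY,
      });
    }
    return client;
  }
  ```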
* refactor: debug log after batch is done
* chore: add back indexSync error handling
---------
Co-authored-by: jakubmieszczak <jakub.mieszczak@zendesk.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
* add integration with Apple MLX
* fix: apple icon + image mkd link
---------
Co-authored-by: Extremys <Extremys@email.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
* WIP: first pass ModelSpecs
* refactor(onSelectEndpoint): use `getConvoSwitchLogic`
* feat: introduce iconURL, greeting, frontend fields for conversations/presets/messages
* feat: conversation.iconURL & greeting in Landing
* feat: conversation.iconURL & greeting in New Chat button
* feat: message.iconURL
* refactor: ConversationIcon -> ConvoIconURL
* WIP: add spec as a conversation field
* refactor: useAppStartup, set spec on initial load for new chat, allow undefined spec, add localStorage keys enum, additional type fields for spec
* feat: handle `showIconInMenu`, `showIconInHeader`, undefined `iconURL` and no specs on initial load
* chore: handle undefined or empty modelSpecs
* WIP: first pass, modelSpec schema for custom config
* refactor: move default filtered tools definition to ToolService
* feat: pass modelSpecs from backend via startupConfig
* refactor: modelSpecs config, return and define list
* fix: react error and include iconURL in responseMessage
* refactor: add iconURL to responseMessage only
* refactor: getIconEndpoint
* refactor: pass TSpecsConfig
* fix(assistants): differentiate compactAssistantSchema, correctly resets shared conversation state with other endpoints
* refactor: assistant id prefix localStorage key
* refactor: add more LocalStorageKeys and replace hardcoded values
* feat: prioritize spec on new chat behavior: last selected modelSpec behavior (localStorage)
* feat: first pass, interface config
* chore: WIP, todo: add warnings based on config.modelSpecs settings.
* feat: enforce modelSpecs if configured
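  Several of the items above revolve around the new modelSpecs config (iconURL, greeting, showIconInMenu/Header, enforcement). A rough TypeScript shape assembled only from the fields named in these commits; an illustrative sketch, not the actual TSpecsConfig schema.

  ```ts
  // Hypothetical shape based on fields mentioned in the commits above.
  interface TModelSpecSketch {
    name: string;               // key stored in localStorage / conversation.spec
    label: string;              // display name in the menu
    iconURL?: string;           // custom icon, optional
    greeting?: string;          // landing / new-chat greeting text
    showIconInMenu?: boolean;
    showIconInHeader?: boolean;
    preset: Record<string, unknown>; // endpoint/model settings applied on selection
  }

  interface TSpecsConfigSketch {
    enforce?: boolean;          // when true, only the listed specs are usable
    list: TModelSpecSketch[];
  }
  ```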
* feat: show config file yaml errors
* chore: delete unused legacy Plugins component
* refactor: set tools to localStorage from recoil store
* chore: add stable recoil setter to useEffect deps
* refactor: save tools to conversation documents
* style(MultiSelectPop): dynamic height, remove unused import
* refactor(react-query): use localstorage keys and pass config to useAvailablePluginsQuery
* feat(utils): add mapPlugins
* refactor(Convo): use conversation.tools if defined, lastSelectedTools if not
* refactor: remove unused legacy code using `useSetOptions`, remove conditional flag `isMultiChat` for using legacy settings
* refactor(PluginStoreDialog): add exhaustive-deps which are stable react state setters
* fix(HeaderOptions): pass `popover` as true
* refactor(useSetStorage): use project enums
* refactor: use LocalStorageKeys enum
* fix: prevent setConversation from setting falsy values in lastSelectedTools
* refactor: use map for availableTools state and available Plugins query
* refactor(updateLastSelectedModel): organize logic better and add note on purpose
* fix(setAgentOption): prevent resetting last model to secondary model for gptPlugins
* refactor(buildDefaultConvo): use enum
* refactor: remove `useSetStorage` and consolidate areas where conversation state is saved to localStorage
* fix: conversations retain tools on refresh
* fix(gptPlugins): prevent nullish tools from being saved
* chore: delete useServerStream
* refactor: move initial plugins logic to useAppStartup
* refactor(MultiSelectDropDown): add more pass-in className props
* feat: use tools in presets
* chore: delete unused usePresetOptions
* refactor: new agentOptions default handling
* chore: note
* feat: add label and custom instructions to agents
* chore: remove 'disabled with tools' message
* style: move plugins to 2nd column in parameters
* fix: TPreset type for agentOptions
* fix: interface controls
* refactor: add interfaceConfig, use Separator within Switcher
* refactor: hide Assistants panel if interface.parameters are disabled
* fix(Header): only modelSpecs if list is greater than 0
* refactor: separate MessageIcon logic from useMessageHelpers for better react rule-following
* fix(AppService): don't use reserved keyword 'interface'
* feat: set existing Icon for custom endpoints through iconURL
* fix(ci): tests passing for App Service
* docs: refactor custom_config.md for readability and better organization, also include missing values
* docs: interface section and re-organize docs
* docs: update modelSpecs info
* chore: remove unused files
* chore: remove unused files
* chore: move useSetIndexOptions
* chore: remove unused file
* chore: move useConversation(s)
* chore: move useDefaultConvo
* chore: move useNavigateToConvo
* refactor: use plugin install hook so it can be used elsewhere
* chore: import order
* update docs
* refactor(OpenAI/Plugins): allow modelLabel as an initial value for chatGptLabel
* chore: remove unused EndpointOptionsPopover and hide 'Save as Preset' button if preset UI visibility disabled
* feat(loadDefaultInterface): issue warnings based on values
* feat: changelog for custom config file
* docs: add additional changelog note
* fix: prevent unavailable tool selection from preset and update availableTools on Plugin installations
* feat: add `filteredTools` option in custom config
* chore: changelog
* fix(MessageIcon): always overwrite conversation.iconURL in messageSettings
* fix(ModelSpecsMenu): icon edge cases
* fix(NewChat): dynamic icon
* fix(PluginsClient): always include endpoint in responseMessage
* fix: always include endpoint and iconURL in responseMessage across different response methods
* feat: interchangeable keys for modelSpec enforcing
* feat: `stop` conversation parameter
* feat: Tag primitive
* feat: dynamic tags
* refactor: update tag styling
* feat: add stop sequences to OpenAI settings
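  The `stop` items add user-configurable stop sequences. A minimal sketch of passing them through to an OpenAI chat completion with the official Node SDK; how the sequences reach this call from the conversation settings is an assumption.

  ```ts
  import OpenAI from 'openai';

  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  // `stop` accepts up to four sequences; generation halts when one is produced.
  async function complete(prompt: string, stop?: string[]) {
    const response = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
      stop,
    });
    return response.choices[0]?.message?.content;
  }
  ```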
* fix(Presentation): prevent `SidePanel` re-renders that flicker side panel
* refactor: use stop placeholder
* feat: type and schema update for `stop` and `TPreset` in generation param related types
* refactor: pass conversation to dynamic settings
* refactor(OpenAIClient): remove default handling for `modelOptions.stop`
* docs: fix Google AI Setup formatting
* feat: current_model
* docs: WIP update
* fix(ChatRoute): prevent default preset override before `hasSetConversation.current` becomes true by including latest conversation state as template
* docs: update docs with more info on `stop`
* chore: bump config_version
* refactor: CURRENT_MODEL handling
* chore: bump data-provider
* feat: script to check recent dependency updates
* fix: override vite/rollup version for vite build fix
- also remove unused vite-plugin-html
- add vite build to file output command
* chore: bump rollup override to last known working version (v4.16.0 is breaking)
* chore(vite): increase file size cache for workbox
* fix: resolve openai to last known version using assistants v1 latest features and default header
* chore: update openrouter examples
* WIP: gemini-1.5 support
* feat: extended vertex ai support
* fix: handle possibly undefined modelName
* fix: gpt-4-turbo-preview invalid vision model
* feat: specify `fileConfig.imageOutputType` and make PNG default image conversion type
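  `fileConfig.imageOutputType` makes PNG the default conversion target for uploaded images. A hedged sketch of how such a conversion could be applied with sharp; the config plumbing and function name are assumptions.

  ```ts
  import sharp from 'sharp';

  type ImageOutputType = 'png' | 'jpeg' | 'webp';

  // Convert an uploaded image buffer to the configured output type (PNG by default).
  async function convertImage(
    input: Buffer,
    outputType: ImageOutputType = 'png',
  ): Promise<Buffer> {
    return sharp(input).toFormat(outputType).toBuffer();
  }
  ```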
* feat: better truncation for errors including base64 strings
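  "Better truncation for errors including base64 strings" presumably keeps logs readable by collapsing long base64 runs embedded in error messages. A small illustrative sketch; the regex and thresholds are assumptions.

  ```ts
  // Collapse long base64 runs so an error containing an inline image stays readable in logs.
  function truncateBase64(message: string, keep = 32): string {
    return message.replace(/[A-Za-z0-9+/=]{256,}/g, (match) => {
      return `${match.slice(0, keep)}... [truncated ${match.length - keep} chars]`;
    });
  }
  ```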
* fix: gemini inlineData formatting
* feat: RAG augmented prompt for gemini-1.5
* feat: gemini-1.5 rates and token window
* chore: adjust tokens, update docs, update vision Models
* chore: add back `ChatGoogleVertexAI` for chat models via vertex ai
* refactor: ask/edit controllers to not use `unfinished` field for google endpoint
* chore: remove comment
* chore(ci): fix AppService test
* chore: remove comment
* refactor(GoogleSearch): use `GOOGLE_SEARCH_API_KEY` instead, issue warning for old variable
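  The GoogleSearch refactor switches to `GOOGLE_SEARCH_API_KEY` and warns when the old variable is still used. A sketch of that fallback; the legacy variable name (`GOOGLE_API_KEY`) is assumed here for illustration.

  ```ts
  // Prefer the new variable; fall back to the legacy one (assumed name) with a deprecation warning.
  function getGoogleSearchApiKey(): string | undefined {
    if (process.env.GOOGLE_SEARCH_API_KEY) {
      return process.env.GOOGLE_SEARCH_API_KEY;
    }
    if (process.env.GOOGLE_API_KEY) {
      console.warn(
        'GOOGLE_API_KEY for search is deprecated; please use GOOGLE_SEARCH_API_KEY instead.',
      );
      return process.env.GOOGLE_API_KEY;
    }
    return undefined;
  }
  ```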
* chore: bump data-provider to 0.5.4
* chore: update docs
* fix: condition for gemini-1.5 using generative ai lib
* chore: update docs
* ci: add additional AppService test for `imageOutputType`
* refactor: optimize new config value `imageOutputType`
* chore: bump CONFIG_VERSION
* fix(assistants): avatar upload
Traefik sometimes created a race condition: LibreChat would be up on tcp/3080 and Traefik up on tcp/443, yet Traefik could not route to the LibreChat container because of the multiple interfaces, depending on the order in which they came up. This is easily solved by simply using one interface.
* fix(deleteVectors): handle errors gracefully
* chore: update docs based on new alternate env vars prefixed with RAG to avoid conflicts with LibreChat keys
* docs: update URL to access ollama and comment on 'host.docker.internal'
* Update ai_endpoints.md
---------
Co-authored-by: Danny Avila <danacordially@gmail.com>
* WIP: basic route for file downloads and file strategy for generating a ReadableStream to pipe as the response
* chore(DALLE3): add typing for OpenAI client
* chore: add `CONSOLE_JSON` notes to dotenv.md
* WIP: first pass OpenAI Assistants File Output handling
* feat: first pass assistants output file download from openai
* chore: add yml vs. yaml variation to .gitignore for `librechat.yml`
* refactor(retrieveAndProcessFile): remove redundancies
* fix(syncMessages): explicit sort of apiMessages to fix message order on abort
* chore: add logs for warnings and errors, show toast on frontend
* chore: add logger where console was still being used
* fix(initializeClient.spec.js): remove condition failing test on local installations
* docs: remove comments and invalid HTML as required by the embeddings generator, and add new documentation guidelines
The default `.env` contains the line `ASSISTANTS_API_KEY=user_provided`. When pre-configuring Azure OpenAI models, this setting makes it impossible to use assistants because the user-provided key is missing. The Azure setup only works once that line is commented out.