* filter out unavailable servers
* bump render time
* Fix import path for useGetStartupConfig
* refactor: Change configuredServers to use Set for improved filtering of available MCPs
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
* refactor: Default Model Spec Retrieval Logic, allowing last selected spec on new chat if last selection was a spec
* chore: Replace hardcoded 'new' conversation ID with Constants.NEW_CONVO for consistency
* chore: remove redundant condition for model spec preset selection in useNewConvo hook
* updated gpt-5-pro
it is now available from OpenAI and on OpenRouter
https://platform.openai.com/docs/models/gpt-5-pro
* feat: Add gpt-5-pro pricing
- Implemented handling for the new gpt-5-pro model in the getValueKey function.
- Updated tests to ensure correct behavior for gpt-5-pro across various scenarios.
- Adjusted token limits and multipliers for gpt-5-pro in the tokens utility files.
- Enhanced model matching functionality to include gpt-5-pro variations.
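An illustrative sketch of the value-key resolution described above; the pricing numbers and matching strategy below are stand-ins, not the exact contents of `tx.js`:

```ts
// Illustrative pricing map (USD per 1M tokens); real values live in the tokens/tx utilities.
const tokenValues: Record<string, { prompt: number; completion: number }> = {
  'gpt-5-pro': { prompt: 15, completion: 120 },
  'gpt-5-mini': { prompt: 0.25, completion: 2 },
  'gpt-5': { prompt: 1.25, completion: 10 },
};

/** Resolves a raw model name (e.g. "gpt-5-pro-2025-10-06") to a pricing key. */
function getValueKey(model: string): string | undefined {
  const name = model.toLowerCase();
  // Check longer keys first so "gpt-5-pro" wins over "gpt-5".
  return Object.keys(tokenValues)
    .sort((a, b) => b.length - a.length)
    .find((key) => name.includes(key));
}

console.log(getValueKey('gpt-5-pro-2025-10-06')); // "gpt-5-pro"
```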
* refactor: optimize model pricing and validation logic
- Added new model pricing entries for llama2, llama3, and qwen variants in tx.js.
- Updated tokenValues to include additional models and their pricing structures.
- Implemented validation tests in tx.spec.js to ensure all models resolve correctly to pricing.
- Refactored getValueKey function to improve model matching and resolution efficiency.
- Removed outdated model entries from tokens.ts to streamline pricing management.
* fix: add missing pricing
* chore: update model pricing for qwen and gemma variants
* chore: update model pricing and add validation for context windows
- Removed outdated model entries from tx.js and updated tokenValues with new models.
- Added a test in tx.spec.js to ensure all models with pricing have corresponding context windows defined in tokens.ts.
- Introduced 'command-text' model pricing in tokens.ts to maintain consistency across model definitions.
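A hedged sketch of what the pricing/context-window cross-check in `tx.spec.js` could look like; the `tokenValues` and `maxTokensMap` exports are assumptions based on the files named above:

```ts
// Illustrative spec: every model with pricing should also define a context window.
import { tokenValues } from './tx'; // assumed export of the pricing map
import { maxTokensMap } from './tokens'; // assumed export of per-endpoint context limits

describe('pricing/context-window consistency', () => {
  // Collect every model name that has a context window, across all endpoints.
  const contextKeys = new Set(
    Object.values(maxTokensMap).flatMap((endpoint) => Object.keys(endpoint)),
  );

  it('defines a context window for every priced model', () => {
    for (const model of Object.keys(tokenValues)) {
      expect(contextKeys.has(model)).toBe(true);
    }
  });
});
```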
* chore: update model names and pricing for AI21 and Amazon models
- Refactored model names in tx.js for AI21 and Amazon models to remove versioning and improve consistency.
- Updated pricing values in tokens.ts to reflect the new model names.
- Added comprehensive tests in tx.spec.js to validate pricing for both short and full model names across AI21 and Amazon models.
* feat: add pricing and validation for Claude Haiku 4.5 model
* chore: increase default max context tokens to 18000 for agents
* feat: add Qwen3 model pricing and validation tests
* chore: reorganize and update Qwen model pricing in tx.js and tokens.ts
---------
Co-authored-by: khfung <68192841+khfung@users.noreply.github.com>
* fix: reapply chat font size on load
* refactor: streamline font size handling in localStorage
* fix: update matchMedia mock to accurately reflect desktop and touchscreen capabilities
* refactor: implement Jotai for font size management and initialize on app load
- Replaced Recoil with Jotai for font size state management across components.
- Added a new `fontSize` atom to handle font size changes and persist them in localStorage.
- Implemented `initializeFontSize` function to apply saved font size on app load.
- Updated relevant components to utilize the new font size atom.
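A minimal sketch of the `fontSize` atom and `initializeFontSize` described above, assuming a `fontSize` localStorage key and a CSS variable as the application mechanism (both are assumptions; the real store lives in the client package):

```ts
import { atom } from 'jotai';

const STORAGE_KEY = 'fontSize'; // assumed localStorage key
const DEFAULT_SIZE = 'text-base';

// Assumed application mechanism: write the value to a CSS variable on the root element.
const applyFontSize = (value: string) => {
  document.documentElement.style.setProperty('--chat-font-size', value);
};

/** Reapplies the persisted font size on app load, before React renders. */
export const initializeFontSize = () => {
  const saved = localStorage.getItem(STORAGE_KEY);
  if (saved != null) {
    applyFontSize(JSON.parse(saved));
  }
};

const baseAtom = atom<string>(
  JSON.parse(localStorage.getItem(STORAGE_KEY) ?? JSON.stringify(DEFAULT_SIZE)),
);

/** Writable atom that persists changes to localStorage and applies them immediately. */
export const fontSizeAtom = atom(
  (get) => get(baseAtom),
  (_get, set, next: string) => {
    set(baseAtom, next);
    localStorage.setItem(STORAGE_KEY, JSON.stringify(next));
    applyFontSize(next);
  },
);
```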
---------
Co-authored-by: ddooochii <ddooochii@gmail.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
* fix: Remove ephemeral cache control from document encoding function
* refactor: Improve document encoding types and add file context for anthropic messages api
- Added AnthropicDocumentBlock interface to define the structure for documents from the Anthropic provider.
- Updated encodeAndFormatDocuments function to utilize the new type and include optional context for filenames.
- Refactored DocumentResult to use a union type for various document formats, improving type safety and clarity.
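For reference, a hedged sketch of what an `AnthropicDocumentBlock` could look like, including the optional filename context mentioned above; the field names follow Anthropic's public document-block format, but the project's actual interface may differ:

```ts
// Illustrative shape of a document content block sent to the Anthropic Messages API.
interface AnthropicDocumentBlock {
  type: 'document';
  source: {
    type: 'base64';
    media_type: 'application/pdf';
    data: string; // base64-encoded PDF bytes
  };
  title?: string; // e.g. the original filename
  context?: string; // extra context for the model about this file
  citations?: { enabled: boolean };
  cache_control?: { type: 'ephemeral' };
}
```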
* 🔧 feat: Enhance Redis Configuration and Connection Handling in Cache Flush Utility
- Added support for Redis username, password, and CA certificate.
- Improved Redis client creation to handle both cluster and single instance configurations.
- Implemented a helper function to read the Redis CA certificate with error handling.
- Updated connection logic to include timeout and error handling for Redis connections.
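A sketch of the CA-reading helper described above, assuming the certificate path comes from a `REDIS_CA` environment variable:

```ts
import fs from 'node:fs';

/** Reads the Redis CA certificate if configured; returns undefined when unset or unreadable. */
function readRedisCA(): string | undefined {
  const caPath = process.env.REDIS_CA; // assumed env var name
  if (!caPath) {
    return undefined;
  }
  try {
    return fs.readFileSync(caPath, 'utf8');
  } catch (err) {
    console.error(`Failed to read Redis CA certificate at ${caPath}:`, err);
    return undefined;
  }
}
```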
* refactor: flush cache if redis URI is defined
Remove deprecated OpenAI models and add latest GPT-5, o3/o4, and GPT-4.1 series models based on current API offerings as of October 2025.
Removed deprecated models:
- gpt-4.5-preview (deprecated July 2025)
- o1-preview, o1-mini (deprecated July/October 2025)
- gpt-4-vision-preview (shut down December 2024)
- Dated GPT-3.5 and GPT-4 variants (consolidated into base versions)
Added new flagship models:
- GPT-5 series: gpt-5, gpt-5-mini, gpt-5-nano
- o3/o4 reasoning models: o3, o4-mini, o3-pro, o3-mini
- GPT-4.1 series: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano
Reorganized list with newest models first for better discoverability.
References:
- https://platform.openai.com/docs/models
- https://platform.openai.com/docs/deprecations
Enables word wrapping for text, markdown, and plaintext code blocks
to prevent horizontal scrolling on long lines. Improves readability
for prose content in fenced code blocks.
* Add Markdown rendering support for artifacts
* Add tests
* Remove custom code for mermaid
* Remove unnecessary dark mode hook
* refactor: optimize mermaid dependencies
- Added support for additional MIME types in artifact templates.
- Updated mermaidDependencies to include new packages: class-variance-authority, clsx, tailwind-merge, and @radix-ui/react-slot.
- Refactored zoom and refresh icons in MermaidDiagram component for improved clarity and maintainability.
* fix: add Markdown support for artifacts rendering
* feat: support 'text/md' as an additional MIME type for Markdown artifacts
* refactor: simplify markdownDependencies structure in artifacts utility
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
* feat: Add group field to modelSpecs for flexible grouping
* resolve lint issues
* fix test
* docs: enhance modelSpecs group field documentation for clarity
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
* Update prompt value for gemini-2.5-flash-lite
New Input price (text, image, video) --> $0.10
https://ai.google.dev/gemini-api/docs/pricing
* Fix formatting of gemini-2.5-flash-lite values
changed 0.10 to 0.1 to follow standards
* feat: Enhance shared link functionality with target message support
* refactor: Remove comment on compound index in share schema
* chore: Reorganize imports in ShareButton component for clarity
* refactor: Integrate Recoil for latest message tracking in ShareButton component
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
- problem: `addImageUrls` had a side effect that was previously leveraged to populate both the `ocr` message field (now `fileContext`) and `client.options.attachments`. This recorded the user's uploaded attachments on the user message when it was saved to the database and returned at the end of the request lifecycle
- solution: created dedicated handling for file context, and made sure to populate `allFiles` with non-provider attachments
* chore: enhance logging for latest message actions in message components
* fix: Extract previous convoId from latest text in message helpers and process hooks
- Updated `useMessageHelpers` and `useMessageProcess` to extract `convoId` from the previous text key for improved message handling.
- Refactored `getLengthAndLastTenChars` to `getLengthAndLastNChars` for better flexibility in character length retrieval.
- Introduced `getLatestContentForKey` function to streamline content extraction from messages.
* chore: Enhance logging for clearing latest messages in conversation hooks
* refactor: Update message key formatting for improved URL parameter handling
- Modified `getLatestContentForKey` to change the format from `${text}-${i}` to `${text}&i=${i}` for better URL parameter structure.
- Adjusted `getTextKey` to increase character length retrieval from 12 to 16 in `getLengthAndLastNChars` for enhanced text processing.
* refactor: Simplify convoId extraction and enhance message formatting
- Updated `useMessageHelpers` and `useMessageProcess` to extract `convoId` using a new format for improved clarity.
- Refactored `getLatestContentForKey` to streamline content formatting and ensure consistent use of `Constants.COMMON_DIVIDER` for better message structure.
- Removed redundant length and last character extraction logic from `getLengthAndLastNChars` for cleaner code.
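A rough, illustrative sketch of the text-key scheme these commits converge on; `COMMON_DIVIDER`'s value, the field order, and the helper names here are assumptions:

```ts
const COMMON_DIVIDER = '|'; // assumed value of Constants.COMMON_DIVIDER

/** Compact fingerprint of a message: its length plus the last N characters. */
const getLengthAndLastNChars = (text = '', n = 16): string =>
  `${text.length}${text.slice(-n)}`;

/** Builds a stable key for the latest message from its text, index, and conversation id. */
const getTextKey = (text: string, index: number, convoId: string): string =>
  [getLengthAndLastNChars(text), `i=${index}`, `convoId=${convoId}`].join(COMMON_DIVIDER);

/** Extracts the conversation id that a previous key was built for. */
const getConvoIdFromKey = (key: string): string | undefined =>
  key
    .split(COMMON_DIVIDER)
    .find((part) => part.startsWith('convoId='))
    ?.split('=')[1];
```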
* chore: linting
* chore: Simplify pre-commit hook by removing unnecessary lines
* feat: Add support for users to be admins when logging in using OpenID
* fix: Linting issues
* fix: whitespace
* chore: add unit tests for OIDC_ADMIN_ROLE
* refactor: Replace custom property retrieval function with lodash's get for improved readability and maintainability
* feat: Enhance OpenID role extraction and error handling in setupOpenId function
- Improved role validation to check for both array and string types.
- Added detailed error messages for missing or invalid role paths in tokens.
- Expanded unit tests to cover various scenarios for nested role extraction and error handling.
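A hedged sketch of the role extraction these commits describe, using lodash's `get`; `OIDC_ADMIN_ROLE` comes from the commits above, while the claim-path variable name and return shape are assumptions:

```ts
import get from 'lodash/get';

interface RoleCheckResult {
  isAdmin: boolean;
  error?: string;
}

/** Looks up the configured role claim in the decoded token and checks for the admin role. */
function checkAdminRole(token: Record<string, unknown>): RoleCheckResult {
  const adminRole = process.env.OIDC_ADMIN_ROLE;
  const rolePath = process.env.OIDC_ADMIN_ROLE_PATH; // assumed name for the claim path
  if (!adminRole || !rolePath) {
    return { isAdmin: false };
  }

  const roles = get(token, rolePath);
  if (roles == null) {
    return { isAdmin: false, error: `No roles found at path "${rolePath}" in token` };
  }
  if (Array.isArray(roles)) {
    return { isAdmin: roles.includes(adminRole) };
  }
  if (typeof roles === 'string') {
    return { isAdmin: roles === adminRole };
  }
  return { isAdmin: false, error: `Roles at "${rolePath}" must be an array or string` };
}
```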
* fix: Improve error handling for role extraction in OpenID strategy
- Enhanced validation to check for invalid role types (array or string).
- Updated error messages for clarity when roles are missing or of incorrect type.
- Added unit tests to cover scenarios where roles return invalid types (object, number).
* feat: Implement user role demotion in OpenID strategy when admin role is absent from token
- Added logic to demote users from 'ADMIN' to 'USER' if the admin role is not present in the token.
- Enhanced logging to capture role changes for better traceability.
- Introduced unit tests to verify the demotion behavior and ensure correct handling when admin role environment variables are not configured.
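The demotion rule itself reduces to a small pure function, sketched here with assumed role names:

```ts
type Role = 'ADMIN' | 'USER';

/** Promotes or demotes a user based on whether the admin role is present in the token. */
function reconcileAdminRole(currentRole: Role, isAdminInToken: boolean): Role {
  if (isAdminInToken) {
    return 'ADMIN';
  }
  // Admin role absent from the token: demote an existing admin back to a regular user.
  return currentRole === 'ADMIN' ? 'USER' : currentRole;
}

console.log(reconcileAdminRole('ADMIN', false)); // "USER"
```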
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
* Check the file size of an imported conversation against a configured maximum to prevent a large upload from bringing down the application
chore: remove non-English localization, as it needs to be added via Locize
* feat: Implement file size validation for conversation imports to prevent oversized uploads
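A minimal sketch of the size guard, assuming a multer-style file object and an assumed default limit:

```ts
const DEFAULT_IMPORT_LIMIT = 20 * 1024 * 1024; // 20 MB, assumed default

/** Rejects conversation imports that exceed the configured maximum size. */
function assertImportSize(file: { size: number }, maxBytes = DEFAULT_IMPORT_LIMIT): void {
  if (file.size > maxBytes) {
    throw new Error(
      `Import file is ${file.size} bytes; the configured maximum is ${maxBytes} bytes`,
    );
  }
}
```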
---------
Co-authored-by: Marc Amick <MarcAmick@jhu.edu>
Co-authored-by: Danny Avila <danny@librechat.ai>
* 🔧 refactor: Improve accessibility and styling in ChatGroupItem and FilterPrompts components
* 🔧 fix: Add button type and keyboard accessibility to dropdown menu trigger in ChatGroupItem
* 🔧 fix(757): Enhance accessibility by updating aria-labels and adding localization for prompt groups
* 🔧 fix(618): Update version to 0.3.1 and enhance accessibility in InfoHoverCard component
* 🔧 fix(618): Update aria-label in InfoHoverCard to use dynamic text prop for improved accessibility
* 🔧 fix: Enhance accessibility by updating aria-labels and roles in Conversations components
* 🔧 fix(620): Enhance accessibility by adding tabIndex to Tabs.Content components in ArtifactTabs, Settings, and Speech components
* refactor: remove RevokeKeysButton component and update related components for accessibility
- Deleted RevokeKeysButton component.
- Updated SharedLinks and General components to use Label for accessibility.
- Enhanced Personalization component with aria-labelledby and aria-describedby attributes.
- Refactored ConversationModeSwitch to use ToggleSwitch for better state management.
- Improved AutoSendTextSelector with local state management and accessibility attributes.
- Replaced Switch components with ToggleSwitch in various Speech and TTS components for consistency.
- Added aria-labelledby attributes to Dropdown components for better accessibility.
- Updated translation.json to include new localization keys and improved existing ones.
- Enhanced Slider component to support aria attributes for better accessibility.
* 🔧 fix: Enhance user feedback for API key operations with success and error messages
* 🔧 fix: Update aria-labels in Avatar component for improved localization and accessibility
* 🔧 fix: Refactor handleFile and handleDrop functions for improved readability and maintainability
* 📎 feat: Direct Provider Attachment Support for Multimodal Content
* 📑 feat: Anthropic Direct Provider Upload (#9072)
* feat: implement Anthropic native PDF support with document preservation
- Add comprehensive debug logging throughout PDF processing pipeline
- Refactor attachment processing to separate image and document handling
- Create distinct addImageURLs(), addDocuments(), and processAttachments() methods
- Fix critical bugs in stream handling and parameter passing
- Add streamToBuffer utility for proper stream-to-buffer conversion
- Remove api/agents submodule from repository
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* chore: remove out of scope formatting changes
* fix: stop duplication of file in chat on end of response stream
* chore: bring back file search and ocr options
* chore: localize upload to provider string in file menu
* refactor: change createMenuItems args to fit new pattern introduced by anthropic-native-pdf-support
* feat: add cache point for pdfs processed by anthropic endpoint since they are unlikely to change and should benefit from caching
* feat: combine Upload Image into Upload to Provider since they both perform direct upload and change provider upload icon to reflect multimodal upload
* feat: add citations support according to docs
* refactor: remove redundant 'document' check since documents are handled properly by formatMessage in the agents repo now
* refactor: change upload logic so the anthropic endpoint isn't exempted from the normal Agents upload path, for consistency with the rest of the upload logic
* fix: include width and height in return from uploadLocalFile so images are correctly identified when going through an AgentUpload in addImageURLs
* chore: remove client specific handling since the direct provider stuff is handled by the agent client
* feat: handle documents in AgentClient so no need for change to agents repo
* chore: removed unused changes
* chore: remove auto generated comments from OG commit
* feat: add logic for agents to use direct to provider uploads if supported (currently just anthropic)
* fix: reintroduce role check to fix render error because of undefined value for Content Part
* fix: actually fix render bug by using proper isCreatedByUser check and making sure our mutation of formattedMessage.content is consistent
---------
Co-authored-by: Andres Restrepo <andres@thelinuxkid.com>
Co-authored-by: Claude <noreply@anthropic.com>
📁 feat: Send Attachments Directly to Provider (OpenAI) (#9098)
* refactor: change references from direct upload to direct attach to better reflect functionality
since we now use the base64 encoding strategy rather than the Files/File API to send attachments directly to the provider, the upload nomenclature no longer makes sense. direct_attach better describes the different methods of sending attachments to providers, even if we later introduce direct upload support
* feat: add upload to provider option for openai (and agent) ui
* chore: move anthropic pdf validator over to packages/api
* feat: simple pdf validation according to openai docs
* feat: add provider agnostic validatePdf logic to start handling multiple endpoints
* feat: add handling for openai specific documentPart formatting
* refactor: move require statement to proper place at top of file
* chore: add in openAI endpoint for the rest of the document handling logic
* feat: add direct attach support for azureOpenAI endpoint and agents
* feat: add pdf validation for azureOpenAI endpoint
* refactor: unify all the endpoint checks with isDocumentSupportedEndpoint
* refactor: consolidate Upload to Provider vs Upload image logic for clarity
* refactor: remove anthropic from anthropic_multimodal fileType since we support multiple providers now
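For reference, a hedged sketch of the document content part the OpenAI-specific formatting above would produce when attaching a PDF as base64; the shape follows OpenAI's published file-content format, but treat it as illustrative rather than the project's exact implementation:

```ts
/** Builds an OpenAI Chat Completions content part for a base64-encoded PDF. */
function formatOpenAIDocumentPart(filename: string, pdfBuffer: Buffer) {
  return {
    type: 'file' as const,
    file: {
      filename,
      file_data: `data:application/pdf;base64,${pdfBuffer.toString('base64')}`,
    },
  };
}
```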
🗂️ feat: Send Attachments Directly to Provider (Google) (#9100)
* feat: add validation for google PDFs and add google endpoint as a document supporting endpoint
* feat: add proper pdf formatting for google endpoints (requires PR #14 in agents)
* feat: add multimodal support for google endpoint attachments
* feat: add audio file svg
* fix: refactor attachments logic so multi-attachment messages work properly
* feat: add video file svg
* fix: allows for followup questions of uploaded multimodal attachments
* fix: remove incorrect final message filtering that was breaking Attachment component rendering
fix: manually rename 'documents' to 'Documents' in git, since the change wasn't picked up due to case insensitivity in the directory name
fix: add logic so the file picker for a Google agent has proper file type filtering
🛫 refactor: Move Encoding Logic to packages/api (#9182)
* refactor: move audio encode over to TS
* refactor: audio encoding now functional in LC again
* refactor: move video encode over to TS
* refactor: move document encode over to TS
* refactor: video encoding now functional in LC again
* refactor: document encoding now functional in LC again
* fix: extend file type options in AttachFileMenu to include 'google_multimodal' and update dependency array to include agent?.provider
* feat: only accept pdfs if responses api is enabled for openai convos
chore: address ESLint comments
chore: add missing audio mimetype
* fix: type safety for message content parts and improve null handling
* chore: reorder AttachFileMenuProps for consistency and clarity
* chore: import order in AttachFileMenu
* fix: improve null handling for text parts in parseTextParts function
* fix: remove no longer used unsupported capability error message for file uploads
* fix: OpenAI Direct File Attachment Format
* fix: update encodeAndFormatDocuments to support OpenAI responses API and enhance document result types
* refactor: broaden providers supported for documents
* feat: enhance DragDrop context and modal to support document uploads based on provider capabilities
* fix: reorder import statements for consistency in video encoding module
---------
Co-authored-by: Dustin Healy <54083382+dustinhealy@users.noreply.github.com>
* 🔧 chore: Update @librechat/agents to v2.4.84 in package.json and package-lock.json
* feat: Serper as new scraperProvider for Web Search and add firecrawlVersion support
* fix: TWebSearchKeys and ensure unique API keys extraction
* chore: Add build:packages script to streamline package builds
- Added a new function `deleteDocumentsWithoutUserField` to remove documents lacking the user field from the specified MeiliSearch index.
- Integrated this function into the `ensureFilterableAttributes` method to clean up orphaned documents when existing messages or conversations are found without a user field.
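A hedged sketch of how that cleanup could be expressed with the MeiliSearch JS client, assuming the `user` attribute is filterable on the index:

```ts
import { MeiliSearch } from 'meilisearch';

const client = new MeiliSearch({ host: 'http://127.0.0.1:7700' });

/** Deletes documents that lack a `user` field from the given index. */
async function deleteDocumentsWithoutUserField(indexUid: string): Promise<void> {
  const index = client.index(indexUid);
  // `user NOT EXISTS` requires `user` to be listed in filterableAttributes.
  await index.deleteDocuments({ filter: 'user NOT EXISTS' });
}
```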
🔧 feat: Enhance Index Synchronization Logic
- Updated `ensureFilterableAttributes` to return an object indicating whether settings were updated and if orphaned documents were found.
- Integrated orphaned document cleanup directly into the index synchronization process without forcing a full re-sync unless settings were updated.
- Improved logging for clarity on index configuration updates and orphaned document handling.
🔧 feat: Improve Flow State Management in Index Synchronization
- Refactored flow state management logic to ensure cleanup occurs after synchronization, regardless of success or error.
- Enhanced logging for flow state cleanup to provide better visibility into the synchronization process.
- Streamlined the structure of the index synchronization function for improved readability.
* fix: update @librechat/agents to v2.4.83 to handle reasoning edge case encountered with GLM models
* feat: GLM Context Window & Pricing Support
* feat: Add support for glm4 model in token values and tests
* chore: linting for `loadCustomConfig`
* refactor: decouple CDN init and variable/health checks from AppService
* refactor: move AppService to packages/data-schemas
* chore: update AppConfig import path to use data-schemas
* chore: update JsonSchemaType import path to use data-schemas
* refactor: update UserController to import webSearchKeys and redefine FunctionTool typedef
* chore: remove AppService.js
* refactor: update AppConfig interface to use Partial<TCustomConfig> and make paths and fileStrategies optional
* refactor: update checkConfig function to accept Partial<TCustomConfig>
* chore: fix types
* refactor: move handleRateLimits to startup checks as is an effect
* test: remove outdated rate limit tests from AppService.spec and add new handleRateLimits tests in checks.spec
* ⚙️ chore: Resolve Build Warning and `keyvMongo` types
* 🔄 chore: Update mongodb version to ^6.14.2 in package.json and package-lock.json
* chore: remove @langchain/openai dep
* 🔄 refactor: Change log level from warn to debug for missing endpoint config
* 🔄 refactor: Improve temp chat expiration date calculation in tests and implementation
* initialize servers sequentially
* adjust for exported properties that are not nullable anymore
* use underscore separator
* mock with set
* customize init timeout via env var
* refactor for readability, use loaded conns for tool functions
* address PR comments
* clean up fire-and-forget
* fix tests
* Refactor: Moved Redis cache infra logic into `packages/api`
- Moved cacheFactory and redisClients from `api/cache` into `packages/api/src/cache` so that features in `packages/api` can use cache without importing backward from the backend.
- Converted all moved files into TS with proper typing.
- Created integration tests to run against actual Redis servers for redisClients and cacheFactory.
- Added a GitHub workflow to run integration tests for the cache feature.
- Bug fix: keyvRedisClient now implements the PING feature properly.
* chore: consolidate imports in getLogStores.js
* chore: reorder imports
* chore: re-add fs-extra as dev dep.
* chore: reorder imports in cacheConfig.ts, cacheFactory.ts, and keyvMongo.ts
---------
Co-authored-by: Danny Avila <danny@librechat.ai>