* fix: check supportedMimeTypes before routing unrecognized file types

  In processAttachments, files not matching the hardcoded MIME type categories (image, PDF, video, audio) were silently dropped. Now resolves the endpoint's file config and checks the file type against supportedMimeTypes before routing to the documents pipeline. Files not matching any config are still skipped (original behavior). Closes #12482

* feat: encode generic document types for supported providers

  Remove the restrictive MIME type filter in encodeAndFormatDocuments that only allowed PDFs and application/* types. Add a generic encoding path for non-PDF, non-Bedrock files using the provider's native format (Anthropic base64 document, OpenAI file block, Google media block). Files are already validated upstream by supportedMimeTypes.

* fix: guard file.type and cache file config in processAttachments

  - Add a file.type truthiness check before checkType to prevent coercion of null/undefined to the strings 'null'/'undefined'
  - Cache mergedFileConfig and endpointFileConfig on the instance so addPreviousAttachments doesn't recompute them per message

* refactor: harden generic document encoding with validation and tests

  - Extract a formatDocumentBlock helper to eliminate ~30 lines of duplicated provider-dispatch code between the PDF and generic paths
  - Add size validation in the generic encoding path using configuredFileSizeLimit (previously fetched but unused)
  - Guard Bedrock from the generic path: non-bedrockDocumentFormats types are now skipped instead of silently tracking metadata
  - Only push metadata to result.files when a document block was actually created, preventing silent inconsistent state
  - Enable Anthropic citations for text/plain, text/html, and text/markdown (supported by Anthropic's document API)
  - Fix != to !== for the Providers.AZURE comparison
  - Add 9 tests covering all four provider branches, Bedrock exclusion, size limit enforcement, and an unhandled provider

* fix: resolve filename type mismatch in formatDocumentBlock

  The filename parameter is string | undefined, but OpenAIFileBlock and OpenAIInputFileBlock require string. Default to 'document' when filename is undefined.

* fix: use endpoint name for file config lookup in processAttachments

  Agent runs can have agent.provider set to a base provider (e.g., openAI) while agent.endpoint is a custom endpoint name. Using provider for the getEndpointFileConfig lookup bypassed a custom endpoint's supportedMimeTypes config. Now uses agent.endpoint, matching the pattern in addDocuments.

* perf: filter non-Bedrock files before fetching streams

  Bedrock only supports types in bedrockDocumentFormats. Previously, getFileStream was called for all files and unsupported types were discarded after download. Now pre-filters the file list for Bedrock to avoid unnecessary network and memory overhead for large unsupported attachments.

* refactor: clean up processAttachments file config handling

  - Remove redundant ?? null intermediaries; use instance properties directly in the else-if condition
  - Add JSDoc @type annotations for _mergedFileConfig and _endpointFileConfig in the constructor

* refactor: harden document encoding and add routing tests

  - Hoist configuredFileSizeLimit above the loop to avoid recomputing mergeFileConfig per file
  - Replace the Buffer.from decode with a base64 length formula in the generic size check to avoid unnecessary heap allocation
  - Use nullish coalescing (??) for the filename fallback
  - Clean up tests: remove an unnecessary type cast; use the createMockRequest helper for the size-limit test
  - Add 14 tests for processAttachments categorization logic covering supportedMimeTypes routing, null/undefined guards, standard type passthrough, and edge cases

* fix: use optional chaining for checkType in routing tests

  FileConfig.checkType is typed as optional. Use optional chaining to satisfy strict type checking.

* fix: skip stream fetches for unsupported providers, block Bedrock generic routing

  - Return early from encodeAndFormatDocuments when the provider is neither document-supported nor Bedrock, avoiding unnecessary getFileStream calls for providers that would discard all results
  - Add a !isBedrock guard to the supportedMimeTypes fallback branch in processAttachments so permissive patterns like '.*' don't route non-Bedrock types into documents that would be silently dropped
  - Add a test for Bedrock + non-Bedrock-document-type skipping

* fix: respect supportedMimeTypes config for Bedrock endpoints

  Remove the !isBedrock guard from the generic supportedMimeTypes routing branch. If a user configures permissive supportedMimeTypes for a Bedrock endpoint, the upload validation already accepted the file. The encoding layer pre-filters to Bedrock-supported types before fetching streams, so unsupported types are handled there without silently dropping files the user explicitly allowed.
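The supportedMimeTypes routing and the file.type truthiness guard described in the commits above can be sketched roughly as follows. This is a minimal illustration under assumed names (checkType, routeAttachment, and the plain-object file shape are simplified stand-ins, not LibreChat's actual API):

```typescript
// Hypothetical sketch of the routing guard; names and shapes are illustrative.
function checkType(fileType: string, supportedMimeTypes: RegExp[]): boolean {
  // A file is supported if any configured pattern matches its MIME type.
  return supportedMimeTypes.some((pattern) => pattern.test(fileType));
}

function routeAttachment(
  file: { type?: string },
  supportedMimeTypes: RegExp[],
): 'documents' | 'skip' {
  // Check truthiness first: coercing null/undefined to the strings
  // 'null'/'undefined' could spuriously match a permissive pattern like /.*/
  if (file.type && checkType(file.type, supportedMimeTypes)) {
    return 'documents';
  }
  // Files matching no configured pattern keep the original behavior: skipped.
  return 'skip';
}
```

Note how a file with a missing type is skipped even under a catch-all pattern, which is exactly the coercion bug the truthiness check prevents.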
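The provider-dispatch helper and the allocation-free base64 size check can likewise be sketched. This is a simplified assumption-laden sketch, not LibreChat's implementation: the Provider union, DocumentInput shape, and return types are invented for illustration, and the per-provider block layouts follow each provider's documented inline-document formats (an Anthropic base64 document block, an OpenAI file content part with a data-URL payload, a Google inlineData part):

```typescript
// Hypothetical names; not the actual LibreChat API.
type Provider = 'anthropic' | 'openai' | 'google';

interface DocumentInput {
  filename?: string;
  mimeType: string;
  base64: string; // base64-encoded file contents
}

// Estimate decoded byte size from the base64 string length without
// allocating a Buffer: every 4 base64 chars encode 3 bytes, minus padding.
function estimateBase64Bytes(base64: string): number {
  const padding = base64.endsWith('==') ? 2 : base64.endsWith('=') ? 1 : 0;
  return (base64.length / 4) * 3 - padding;
}

function formatDocumentBlock(
  provider: Provider,
  doc: DocumentInput,
  sizeLimitBytes: number,
): object | null {
  if (estimateBase64Bytes(doc.base64) > sizeLimitBytes) {
    return null; // over the configured limit; the caller skips the file
  }
  const filename = doc.filename ?? 'document'; // fallback when filename is absent
  switch (provider) {
    case 'anthropic':
      return {
        type: 'document',
        source: { type: 'base64', media_type: doc.mimeType, data: doc.base64 },
      };
    case 'openai':
      return {
        type: 'file',
        file: { filename, file_data: `data:${doc.mimeType};base64,${doc.base64}` },
      };
    case 'google':
      return { inlineData: { mimeType: doc.mimeType, data: doc.base64 } };
    default:
      return null; // unhandled provider: skip rather than emit a bad block
  }
}
```

Returning null from both the size check and the default branch lets the caller push metadata to result.files only when a block was actually created, mirroring the "no silent inconsistent state" fix above.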
LibreChat
English · 中文
✨ Features
- 🖥️ UI & Experience inspired by ChatGPT with enhanced design and features
- 🤖 AI Model Selection:
  - Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Responses API (incl. Azure)
  - Custom Endpoints: Use any OpenAI-compatible API with LibreChat, no proxy required
  - Compatible with Local & Remote AI Providers:
    - Ollama, groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai
    - OpenRouter, Helicone, Perplexity, ShuttleAI, Deepseek, Qwen, and more
- 💻 Code Interpreter:
  - Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
  - Seamless File Handling: Upload, process, and download files directly
  - No Privacy Concerns: Fully isolated and secure execution
- 🔦 Agents & Tools Integration:
  - LibreChat Agents:
    - No-Code Custom Assistants: Build specialized, AI-driven helpers
    - Agent Marketplace: Discover and deploy community-built agents
    - Collaborative Sharing: Share agents with specific users and groups
    - Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
    - Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
  - Model Context Protocol (MCP) Support for Tools
- 🔍 Web Search:
  - Search the internet and retrieve relevant information to enhance your AI context
  - Combines search providers, content scrapers, and result rerankers for optimal results
  - Customizable Jina Reranking: Configure custom Jina API URLs for reranking services
- 🪄 Generative UI with Code Artifacts:
  - Code Artifacts let you create React components, HTML, and Mermaid diagrams directly in chat
- 🎨 Image Generation & Editing:
  - Text-to-image and image-to-image with GPT-Image-1
  - Text-to-image with DALL-E (3/2), Stable Diffusion, Flux, or any MCP server
  - Produce stunning visuals from prompts or refine existing images with a single instruction
- 💾 Presets & Context Management:
  - Create, Save, & Share Custom Presets
  - Switch between AI Endpoints and Presets mid-chat
  - Edit, Resubmit, and Continue Messages with Conversation branching
  - Create and share prompts with specific users and groups
  - Fork Messages & Conversations for Advanced Context control
- 💬 Multimodal & File Interactions:
  - Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
  - Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
- 🌎 Multilingual UI:
  - English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
  - Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
  - Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
  - Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
- 🧠 Reasoning UI:
  - Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1
- 🎨 Customizable Interface:
  - Customizable Dropdown & Interface that adapts to both power users and newcomers
- 🔄 Reliability & Sync:
  - Never lose a response: AI responses automatically reconnect and resume if your connection drops
  - Multi-Tab & Multi-Device Sync: Open the same chat in multiple tabs or pick up on another device
  - Production-Ready: Works from single-server setups to horizontally scaled deployments with Redis
- 🗣️ Speech & Audio:
  - Chat hands-free with Speech-to-Text and Text-to-Speech
  - Automatically send and play audio
  - Supports OpenAI, Azure OpenAI, and ElevenLabs
- 📥 Import & Export Conversations:
  - Import Conversations from LibreChat, ChatGPT, and Chatbot UI
  - Export conversations as screenshots, Markdown, text, or JSON
- 🔍 Search & Discovery:
  - Search all messages and conversations
- 👥 Multi-User & Secure Access:
  - Multi-User, Secure Authentication with OAuth2, LDAP, & Email Login Support
  - Built-in moderation and token-spend tools
- ⚙️ Configuration & Deployment:
  - Configure Proxy, Reverse Proxy, Docker, & many Deployment options
  - Use completely local or deploy on the cloud
- 📖 Open-Source & Community:
  - Completely Open-Source & Built in Public
  - Community-driven development, support, and feedback
For a thorough review of our features, see our documentation at librechat.ai/docs 📚
🪶 All-In-One AI Conversations with LibreChat
LibreChat is a self-hosted AI chat platform that unifies all major AI providers in a single, privacy-focused interface.
Beyond chat, LibreChat provides AI Agents, Model Context Protocol (MCP) support, Artifacts, Code Interpreter, custom actions, conversation search, and enterprise-ready multi-user authentication.
Open source, actively developed, and built for anyone who values control over their AI infrastructure.
🌐 Resources
GitHub Repos:
- LibreChat: github.com/danny-avila/LibreChat
- RAG API: github.com/danny-avila/rag_api
- Website: github.com/LibreChat-AI/librechat.ai
Other:
- Website: librechat.ai
- Documentation: librechat.ai/docs
- Blog: librechat.ai/blog
📝 Changelog
Keep up with the latest updates by visiting the releases page and release notes on GitHub.
⚠️ Please consult the changelog for breaking changes before updating.
⭐ Star History
✨ Contributions
Contributions, suggestions, bug reports and fixes are welcome!
For new features, components, or extensions, please open an issue and discuss before sending a PR.
If you'd like to help translate LibreChat into your language, we'd love your contribution! Improving our translations not only makes LibreChat more accessible to users around the world but also enhances the overall user experience. Please check out our Translation Guide.
💖 This project exists in its current state thanks to all the people who contribute
🎉 Special Thanks
We thank Locize for their translation management tools that support multiple languages in LibreChat.