# LibreChat

## ✨ Features
- 🖥️ UI & Experience inspired by ChatGPT with enhanced design and features
- 🤖 AI Model Selection:
  - Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Responses API (incl. Azure)
  - Custom Endpoints: Use any OpenAI-compatible API with LibreChat, no proxy required (see the example config after this list)
  - Compatible with Local & Remote AI Providers:
    - Ollama, groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai
    - OpenRouter, Helicone, Perplexity, ShuttleAI, Deepseek, Qwen, and more
- 💻 Code Interpreter API:
  - Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
  - Seamless File Handling: Upload, process, and download files directly
  - No Privacy Concerns: Fully isolated and secure execution
- 🔦 Agents & Tools Integration:
  - LibreChat Agents:
    - No-Code Custom Assistants: Build specialized, AI-driven helpers
    - Agent Marketplace: Discover and deploy community-built agents
    - Collaborative Sharing: Share agents with specific users and groups
    - Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
    - Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
  - Model Context Protocol (MCP) Support for Tools (see the MCP sketch after this list)
- 🔍 Web Search:
  - Search the internet and retrieve relevant information to enhance your AI context
  - Combines search providers, content scrapers, and result rerankers for optimal results
  - Customizable Jina Reranking: Configure custom Jina API URLs for reranking services
  - Learn More →
- 🪄 Generative UI with Code Artifacts:
  - Code Artifacts allow creation of React, HTML, and Mermaid diagrams directly in chat
- 🎨 Image Generation & Editing:
  - Text-to-image and image-to-image with GPT-Image-1
  - Text-to-image with DALL-E (3/2), Stable Diffusion, Flux, or any MCP server
  - Produce stunning visuals from prompts or refine existing images with a single instruction
- 💾 Presets & Context Management:
  - Create, Save, & Share Custom Presets
  - Switch between AI Endpoints and Presets mid-chat
  - Edit, Resubmit, and Continue Messages with Conversation branching
  - Create and share prompts with specific users and groups
  - Fork Messages & Conversations for Advanced Context control
- 💬 Multimodal & File Interactions:
  - Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
  - Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
- 🌎 Multilingual UI:
  - English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
  - Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
  - Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
  - Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
- 🧠 Reasoning UI:
  - Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1
- 🎨 Customizable Interface:
  - Customizable Dropdown & Interface that adapts to both power users and newcomers
- 🔄 Resumable Streams:
  - Never lose a response: AI responses automatically reconnect and resume if your connection drops
  - Multi-Tab & Multi-Device Sync: Open the same chat in multiple tabs or pick up on another device
  - Production-Ready: Works from single-server setups to horizontally scaled deployments with Redis
- 🗣️ Speech & Audio:
  - Chat hands-free with Speech-to-Text and Text-to-Speech
  - Automatically send and play Audio
  - Supports OpenAI, Azure OpenAI, and ElevenLabs
- 📥 Import & Export Conversations:
  - Import Conversations from LibreChat, ChatGPT, Chatbot UI
  - Export conversations as screenshots, markdown, text, json
- 🔍 Search & Discovery:
  - Search all messages/conversations
- 👥 Multi-User & Secure Access:
  - Multi-User, Secure Authentication with OAuth2, LDAP, & Email Login Support
  - Built-in Moderation and Token spend tools
- ⚙️ Configuration & Deployment:
  - Configure Proxy, Reverse Proxy, Docker, & many Deployment options
  - Use completely local or deploy on the cloud
- 📖 Open-Source & Community:
  - Completely Open-Source & Built in Public
  - Community-driven development, support, and feedback
For a thorough review of our features, see our docs here 📚
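As a concrete illustration of the Custom Endpoints feature above, here is a minimal sketch of a custom OpenAI-compatible endpoint entry for `librechat.yaml`, modeled on `librechat.example.yaml`; the provider name, base URL, model, and environment variable are placeholders, so check the configuration docs for the current schema:

```yaml
# librechat.yaml — minimal sketch of a custom OpenAI-compatible endpoint.
# The provider name, baseURL, model, and env variable are illustrative placeholders.
version: 1.2.1            # match the schema version used in librechat.example.yaml
endpoints:
  custom:
    - name: "OpenRouter"                       # label shown in the endpoint menu
      apiKey: "${OPENROUTER_KEY}"              # resolved from your .env file
      baseURL: "https://openrouter.ai/api/v1"  # any OpenAI-compatible API
      models:
        default: ["meta-llama/llama-3.1-70b-instruct"]
        fetch: true                            # fetch available models from the API
      titleConvo: true
      titleModel: "current_model"
```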
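The MCP servers mentioned under Agents & Tools are configured in the same file. The sketch below assumes a stdio-based server; the filesystem server package and directory are placeholders for whatever MCP server you want to expose as tools:

```yaml
# librechat.yaml — illustrative MCP server entry (stdio transport).
# The server package and directory are placeholders; any MCP-compliant server works here.
mcpServers:
  filesystem:
    command: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-filesystem"
      - "/path/to/shared/files"
```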
## 🪶 All-In-One AI Conversations with LibreChat
LibreChat is a self-hosted AI chat platform that unifies all major AI providers in a single, privacy-focused interface.
Beyond chat, LibreChat provides AI Agents, Model Context Protocol (MCP) support, Artifacts, Code Interpreter, custom actions, conversation search, and enterprise-ready multi-user authentication.
Open source, actively developed, and built for anyone who values control over their AI infrastructure.
## 🌐 Resources

GitHub Repos:
- RAG API: github.com/danny-avila/rag_api
- Website: github.com/LibreChat-AI/librechat.ai
Other:
- Website: librechat.ai
- Documentation: librechat.ai/docs
- Blog: librechat.ai/blog
## 📝 Changelog

Keep up with the latest updates by visiting the releases page and notes.
⚠️ Please consult the changelog for breaking changes before updating.
## ⭐ Star History
## ✨ Contributions
Contributions, suggestions, bug reports and fixes are welcome!
For new features, components, or extensions, please open an issue and discuss before sending a PR.
If you'd like to help translate LibreChat into your language, we'd love your contribution! Improving our translations not only makes LibreChat more accessible to users around the world but also enhances the overall user experience. Please check out our Translation Guide.
💖 This project exists in its current state thanks to all the people who contribute
## 🎉 Special Thanks
We thank Locize for their translation management tools that support multiple languages in LibreChat.