* 🔧 refactor: Simplify MCP selection logic in useMCPSelect hook
  - Removed the redundant useEffect for setting the ephemeral agent when MCP values change.
  - Integrated the ephemeral agent update directly into the MCP value change handler, improving code clarity and reducing unnecessary re-renders.
  - Updated dependencies in the effect hook to ensure proper state management.

  Why Effect 2 Was Added (PR #9528)

  PR #9528 was a refactor that migrated MCP state from useLocalStorage hooks to Jotai atomWithStorage. Before that PR, useLocalStorage handled bidirectional sync between localStorage and Recoil in one abstraction. After the migration, two useEffect hooks were introduced to bridge Jotai ↔ Recoil:
  - Effect 1 (Recoil → Jotai): when ephemeralAgent.mcp changes externally, update the Jotai atom (which drives the UI dropdown).
  - Effect 2 (Jotai → Recoil): when mcpValues changes, push it back to ephemeralAgent.mcp (which is read at submission time).

  Effect 2 was needed because, in that PR's design, setMCPValues only wrote to Jotai and never touched Recoil. Effect 2 was the bridge that propagated user selections into the ephemeral agent.

  Why Removing It Is Correct

  All user-initiated MCP changes go through setMCPValues. The callers are in useMCPServerManager: toggleServerSelection, batchToggleServers, the OAuth success callbacks, and access revocation. Our change puts the Recoil write directly in that callback, so all of these paths are covered (a sketch of the consolidated callback appears after these commit notes). All external changes go through Recoil and are handled by Effect 1, which is kept: model spec application (applyModelSpecEphemeralAgent), agent template application after submission, and BadgeRowContext initialization all write directly to ephemeralAgentByConvoId, and Effect 1 watches ephemeralAgent?.mcp and syncs those values into the Jotai atom for the UI. There is no code path where mcpValues changes without going through setMCPValues or Effect 1. The only other source is atomWithStorage's getOnInit reading from localStorage on mount; that merely restores persisted state and is harmless (it is overwritten by Effect 1 if the ephemeral agent has values).

  Additional Benefits
  - Eliminates the race condition: Effect 2 fired on mount with Jotai's stale default ([]), overwriting an ephemeralAgent.mcp that had been set by a model spec. Our change prevents that because the imperative sync only fires on explicit user action.
  - Eliminates the infinite-loop risk: the old bidirectional two-effect approach relied on isEqual/JSON.stringify checks to break cycles. The new approach, unidirectional-reactive (Effect 1) plus imperative (setMCPValues), has no such risk.
  - Effect 1's enhancements are preserved: the mcp_clear sentinel handling and configuredServers filtering (both added after PR #9528) continue to work correctly.

* ✨ feat: Add artifacts support to model specifications and ephemeral agents
  - Introduced an `artifacts` property in the model specification and ephemeral agent types, allowing string or boolean values.
  - Updated `applyModelSpecEphemeralAgent` to handle artifacts, defaulting to 'default' when the value is true and to an empty string when not specified (see the sketch after these commit notes).
  - Enhanced localStorage handling to store artifacts alongside other agent properties, improving state management for ephemeral agents.

* 🔧 refactor: Update BadgeRowContext to improve localStorage handling
  - Modified the logic to only apply values from localStorage that were actually stored, preventing unnecessary overrides of the ephemeral agent.
  - Simplified the setting of ephemeral agent values by directly using initialValues, enhancing code clarity and maintainability.
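The consolidated write path described above can be pictured with a short sketch. This is a minimal, self-contained approximation rather than the repository code: the Jotai atom name, the state shapes, and the hook signature are assumptions, while `ephemeralAgentByConvoId` and `setMCPValues` are named in the notes above.

```ts
import { useCallback } from 'react';
import { atom, useSetAtom } from 'jotai';
import { atomFamily, useSetRecoilState } from 'recoil';

// Stand-in atoms; the real ones live in the LibreChat store. The Recoil family name
// comes from the commit notes above, the Jotai atom name here is hypothetical.
const mcpValuesAtom = atom<string[]>([]);
const ephemeralAgentByConvoId = atomFamily<{ mcp?: string[] } | null, string>({
  key: 'ephemeralAgentByConvoId',
  default: null,
});

export function useMCPSelect({ conversationId }: { conversationId: string }) {
  const setMCPAtom = useSetAtom(mcpValuesAtom);
  const setEphemeralAgent = useSetRecoilState(ephemeralAgentByConvoId(conversationId));

  // Single imperative write path: every user-initiated change updates the Jotai atom
  // (drives the dropdown UI) and the Recoil ephemeral agent (read at submission time)
  // together, which is what makes the old Jotai → Recoil effect (Effect 2) unnecessary.
  const setMCPValues = useCallback(
    (values: string[]) => {
      setMCPAtom(values);
      setEphemeralAgent((prev) => ({ ...(prev ?? {}), mcp: values }));
    },
    [setMCPAtom, setEphemeralAgent],
  );

  return { setMCPValues };
}
```

Because the Recoil write happens only inside this callback, nothing fires on mount, which is why the stale-default race described above cannot recur.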
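The artifacts defaulting in `applyModelSpecEphemeralAgent` can likewise be sketched. The type names and field sets below are assumptions; only the true → 'default' and unspecified → empty-string behavior is taken from the notes above.

```ts
// Minimal types standing in for the real model spec / ephemeral agent shapes.
type TModelSpec = {
  artifacts?: string | boolean;
  // ...other spec fields omitted
};

type TEphemeralAgent = {
  mcp?: string[];
  artifacts?: string;
};

// `true` maps to the 'default' artifacts mode, a string value is passed through,
// and an absent or false value becomes an empty string (artifacts disabled).
function resolveArtifacts(spec: TModelSpec): string {
  if (spec.artifacts === true) {
    return 'default';
  }
  return typeof spec.artifacts === 'string' ? spec.artifacts : '';
}

function applyModelSpecEphemeralAgent(spec: TModelSpec, current: TEphemeralAgent): TEphemeralAgent {
  return { ...current, artifacts: resolveArtifacts(spec) };
}
```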
* 🔧 refactor: Enhance ephemeral agent handling in BadgeRowContext and model spec application
  - Updated BadgeRowContext to apply localStorage values only for tools not already set in the ephemeral agent, improving state management.
  - Modified useApplyModelSpecEffects to reset the ephemeral agent when no spec is provided but specs are configured, ensuring localStorage defaults are applied correctly.
  - Streamlined the logic for applying model spec properties, enhancing clarity and maintainability.

* refactor: Isolate spec and non-spec tool/MCP state with environment-keyed storage

  Spec tool state (badges, MCP) and non-spec user preferences previously shared conversation-keyed localStorage, causing cross-pollination when switching between spec and non-spec models. This change introduces environment-keyed storage so each context maintains independent persisted state (a sketch of the key scheme appears after these commit notes).

  Key changes:
  - Spec active: no localStorage persistence; the admin config is always applied fresh
  - Non-spec (with specs configured): tool/MCP state persisted to the __defaults__ key
  - No specs configured: zero behavior change (conversation-keyed storage)
  - Per-conversation isolation preserved for existing conversations
  - Dual-write on user interaction updates both the conversation and environment keys
  - Removed the mcp_clear sentinel in favor of a null ephemeral agent reset

* refactor: Enhance ephemeral agent initialization and MCP handling in BadgeRowContext and useMCPSelect
  - Updated BadgeRowContext to clarify the handling of localStorage values for ephemeral agents, ensuring proper initialization based on conversation state.
  - Improved the useMCPSelect tests to accurately reflect behavior when setting empty MCP values, ensuring the visual selection clears as expected.
  - Introduced environment-keyed storage logic to maintain independent state for spec and non-spec contexts, improving the experience when switching contexts.

* test: Add comprehensive tests for useToolToggle and applyModelSpecEphemeralAgent hooks
  - Introduced unit tests for the useToolToggle hook, covering dual-write behavior in non-spec mode and per-conversation isolation.
  - Added tests for applyModelSpecEphemeralAgent, ensuring correct application of model specifications and user overrides from localStorage.
  - Enhanced test coverage for ephemeral agent state management during conversation transitions, validating expected behaviors for both new and existing conversations.
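To make the storage isolation concrete, here is a small sketch of the key scheme under the behavior stated in the notes above. The helper names, the context fields, and the key format are hypothetical; only the `__defaults__` environment key and the spec / non-spec / no-specs cases come from the commit notes.

```ts
// Context the storage layer needs in order to decide where (if anywhere) to persist.
type StorageContext = {
  conversationId: string;
  hasSpecs: boolean;   // are any model specs configured?
  specActive: boolean; // is a spec currently selected?
};

const ENVIRONMENT_KEY = '__defaults__';

// Which localStorage keys a tool/MCP value should be written to.
function getToolStorageKeys(tool: string, ctx: StorageContext): string[] {
  if (ctx.specActive) {
    // Spec active: admin config is always applied fresh, nothing is persisted.
    return [];
  }
  const conversationKey = `${tool}_${ctx.conversationId}`;
  if (!ctx.hasSpecs) {
    // No specs configured: unchanged behavior, conversation-keyed storage only.
    return [conversationKey];
  }
  // Non-spec model while specs are configured: dual-write to the conversation key
  // and to the environment-level defaults key, keeping both in sync.
  return [conversationKey, `${tool}_${ENVIRONMENT_KEY}`];
}

function persistToolValue(tool: string, value: unknown, ctx: StorageContext): void {
  for (const key of getToolStorageKeys(tool, ctx)) {
    localStorage.setItem(key, JSON.stringify(value));
  }
}
```

The read side is not spelled out in the notes; presumably an existing conversation restores from its conversation key while a new non-spec chat falls back to the `__defaults__` entry.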
LibreChat
✨ Features
- 🖥️ UI & Experience inspired by ChatGPT with enhanced design and features
- 🤖 AI Model Selection:
  - Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Responses API (incl. Azure)
  - Custom Endpoints: Use any OpenAI-compatible API with LibreChat, no proxy required
  - Compatible with Local & Remote AI Providers:
    - Ollama, groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai, OpenRouter, Helicone, Perplexity, ShuttleAI, Deepseek, Qwen, and more
- 🧰 Code Interpreter API:
  - Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
  - Seamless File Handling: Upload, process, and download files directly
  - No Privacy Concerns: Fully isolated and secure execution
- 🔦 Agents & Tools Integration:
  - LibreChat Agents:
    - No-Code Custom Assistants: Build specialized, AI-driven helpers
    - Agent Marketplace: Discover and deploy community-built agents
    - Collaborative Sharing: Share agents with specific users and groups
    - Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
    - Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
  - Model Context Protocol (MCP) Support for Tools
- 🔍 Web Search:
  - Search the internet and retrieve relevant information to enhance your AI context
  - Combines search providers, content scrapers, and result rerankers for optimal results
  - Customizable Jina Reranking: Configure custom Jina API URLs for reranking services
  - Learn More →
- 🪄 Generative UI with Code Artifacts:
  - Code Artifacts allow creation of React, HTML, and Mermaid diagrams directly in chat
- 🎨 Image Generation & Editing:
  - Text-to-image and image-to-image with GPT-Image-1
  - Text-to-image with DALL-E (3/2), Stable Diffusion, Flux, or any MCP server
  - Produce stunning visuals from prompts or refine existing images with a single instruction
- 💾 Presets & Context Management:
  - Create, Save, & Share Custom Presets
  - Switch between AI Endpoints and Presets mid-chat
  - Edit, Resubmit, and Continue Messages with Conversation branching
  - Create and share prompts with specific users and groups
  - Fork Messages & Conversations for Advanced Context control
- 💬 Multimodal & File Interactions:
  - Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
  - Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
- 🌎 Multilingual UI:
  - English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
  - Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
  - Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
  - Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
- 🧠 Reasoning UI:
  - Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1
- 🎨 Customizable Interface:
  - Customizable Dropdown & Interface that adapts to both power users and newcomers
- Never lose a response: AI responses automatically reconnect and resume if your connection drops
- Multi-Tab & Multi-Device Sync: Open the same chat in multiple tabs or pick up on another device
- Production-Ready: Works from single-server setups to horizontally scaled deployments with Redis
- 🗣️ Speech & Audio:
  - Chat hands-free with Speech-to-Text and Text-to-Speech
  - Automatically send and play Audio
  - Supports OpenAI, Azure OpenAI, and ElevenLabs
- 📥 Import & Export Conversations:
  - Import Conversations from LibreChat, ChatGPT, Chatbot UI
  - Export conversations as screenshots, markdown, text, json
- 🔍 Search & Discovery:
  - Search all messages/conversations
- 👥 Multi-User & Secure Access:
  - Multi-User, Secure Authentication with OAuth2, LDAP, & Email Login Support
  - Built-in Moderation and Token spend tools
- ⚙️ Configuration & Deployment:
  - Configure Proxy, Reverse Proxy, Docker, & many Deployment options
  - Run fully locally or deploy in the cloud
- 📖 Open-Source & Community:
  - Completely Open-Source & Built in Public
  - Community-driven development, support, and feedback
For a thorough review of our features, see our docs here 📚
🪶 All-In-One AI Conversations with LibreChat
LibreChat is a self-hosted AI chat platform that unifies all major AI providers in a single, privacy-focused interface.
Beyond chat, LibreChat provides AI Agents, Model Context Protocol (MCP) support, Artifacts, Code Interpreter, custom actions, conversation search, and enterprise-ready multi-user authentication.
Open source, actively developed, and built for anyone who values control over their AI infrastructure.
🌐 Resources
GitHub Repo:
- RAG API: github.com/danny-avila/rag_api
- Website: github.com/LibreChat-AI/librechat.ai
Other:
- Website: librechat.ai
- Documentation: librechat.ai/docs
- Blog: librechat.ai/blog
📝 Changelog
Keep up with the latest updates by visiting the releases page and release notes:
⚠️ Please consult the changelog for breaking changes before updating.
⭐ Star History
✨ Contributions
Contributions, suggestions, bug reports and fixes are welcome!
For new features, components, or extensions, please open an issue and discuss before sending a PR.
If you'd like to help translate LibreChat into your language, we'd love your contribution! Improving our translations not only makes LibreChat more accessible to users around the world but also enhances the overall user experience. Please check out our Translation Guide.
💖 This project exists in its current state thanks to all the people who contribute
🎉 Special Thanks
We thank Locize for their translation management tools that support multiple languages in LibreChat.