* debug: add instrumentation to MessageIcon arePropsEqual + render cycle tests
Temporary debug commit to identify which field triggers MessageIcon
re-renders during message creation and streaming.
arePropsEqual now logs 'icon_memo_diff' with the exact field name and
prev/next values whenever it returns false. Filter browser console for
'icon_memo_diff' to see the trigger.
Also adds render-level integration tests that simulate the message
lifecycle (initial mount, streaming chunks, context updates) and
assert render counts via logger spy.
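The instrumented comparator described above can be sketched as a plain function. A minimal sketch: the prop shape and the logger call are illustrative assumptions, not the project's exact code.

```typescript
// Sketch of the instrumented memo comparator; prop shape and logger call
// are assumptions for illustration.
type IconProps = Record<string, unknown>;

function arePropsEqual(prev: IconProps, next: IconProps): boolean {
  const fields = Array.from(new Set(Object.keys(prev).concat(Object.keys(next))));
  for (const field of fields) {
    if (!Object.is(prev[field], next[field])) {
      // Filter the browser console for 'icon_memo_diff' to find the trigger.
      console.debug('icon_memo_diff', { field, prev: prev[field], next: next[field] });
      return false; // a differing field lets the re-render through
    }
  }
  return true; // all fields equal: React.memo skips the re-render
}
```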
* perf: stabilize MultiMessage key to prevent unmount/remount during SSE lifecycle
messageId takes three different values over the course of the SSE message lifecycle:
1. useChatFunctions creates initialResponse with client-generated UUID
2. createdHandler replaces it with userMessageId + '_'
3. finalHandler replaces it with server-assigned messageId
Since MultiMessage used key={message.messageId}, each change caused
React to destroy and recreate the entire message component subtree,
unmounting MessageIcon and all memoized children. This produced visible
icon/image flickering that no memo comparator could prevent.
Switch to key={parentMessageId + '_' + siblingIdx}:
- parentMessageId is stable from creation through final response
- siblingIdx ensures sibling switches still get clean remounts
- Eliminates 2 unnecessary unmount/remount cycles per message
Add key stability tests verifying:
- Current key={messageId} causes 3 mounts / 2 unmounts per lifecycle
- Stable key causes 1 mount / 0 unmounts per lifecycle
- Sibling switches still trigger clean remounts with stable key
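The proposed key derivation is simple enough to sketch as a pure function; the field names mirror the message shape described above, but the exact types are assumptions.

```typescript
// Sketch of the stable-key derivation; field names follow the commit
// description, exact types are assumptions.
interface MessageLike {
  messageId: string;
  parentMessageId: string;
}

// parentMessageId is stable across the three messageId values the SSE
// lifecycle produces, while siblingIdx keeps sibling switches remounting.
function stableMessageKey(message: MessageLike, siblingIdx: number): string {
  return `${message.parentMessageId}_${siblingIdx}`;
}
```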
* perf: stabilize root MultiMessage key across new conversation lifecycle
When a user sends their first message in a new conversation,
conversationId transitions from null/'new' to the server-assigned
UUID. MessagesView used key={conversationId} on the root MultiMessage,
so this transition destroyed the entire message tree and rebuilt it
from scratch — causing all MessageIcons to unmount/remount (visible
as image flickering).
Use a ref-based stable key that captures the first real conversationId
and only changes on genuine conversation switches (navigating to a
different conversation), not on the null→UUID transition within the
same conversation.
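The capture logic behind the ref-based key can be sketched as a plain closure; in the component this state would live in a useRef, and all names here are hypothetical.

```typescript
// Sketch of the ref-based stable-key capture logic, written as a closure
// for clarity; names are hypothetical.
function createStableConversationKey(): (id: string | null) => string {
  let captured: string | null = null;
  return (conversationId: string | null): string => {
    const isReal = conversationId != null && conversationId !== 'new';
    if (isReal && captured !== conversationId) {
      // First real UUID is captured; a different real ID is a genuine switch.
      captured = conversationId;
    }
    return captured ?? 'new';
  };
}
```

Note that the very first 'new' → UUID transition still changes the returned key once, which is the residual flaw a later commit in this series resolves by dropping the key entirely.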
* debug: add mount/unmount lifecycle tracking to MessageIcon
Adds icon_lifecycle logs (MOUNT/UNMOUNT) and render count to
distinguish between fresh mounts (memo comparator not called)
and internal re-renders (hook bypassing memo).
Enable: localStorage.setItem('DEBUG_LOGGING', 'icon_lifecycle,icon_data,icon_memo_diff')
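A minimal sketch of the channel gate implied by the enable command above; the 'DEBUG_LOGGING' storage key comes from the commit message, while the helper name and parsing details are assumptions.

```typescript
// Sketch of the debug-channel gate; helper name and parsing are assumptions.
function isDebugEnabled(channel: string, stored: string | null): boolean {
  // In the browser, `stored` would be localStorage.getItem('DEBUG_LOGGING').
  return (stored ?? '')
    .split(',')
    .map((entry) => entry.trim())
    .includes(channel);
}
```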
* debug: add key and root tracking to MultiMessage and MessagesView
Logs multi_message_key (stableKey, messageId, parentMessageId, route)
and messages_view_key (rootKey, conversationId) to trace which key
changes trigger unmount/remount cycles.
Enable: localStorage.setItem('DEBUG_LOGGING', 'icon_lifecycle,icon_data,icon_memo_diff,multi_message_key,messages_view_key')
* perf: remove key from root MultiMessage to prevent tree destruction
The ref-based stable key still changed during 'new' → real UUID
transition, destroying the entire tree. The root MultiMessage is the
sole child at its position, so React reuses the instance via
positional reconciliation without any key. The messageId prop
(conversationId) naturally resets Recoil siblingIdxFamily state on
conversation switches.
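React's remount decision for a single child slot can be modeled as a tiny predicate. This is a sketch of the documented rule (same element type assumed at the same position), not React's source.

```typescript
// Minimal model of React's remount rule for one child slot: the instance is
// reused when the element type matches and the keys match (both keys absent
// counts as a match). Sketch only, not React internals.
function remounts(prevKey: string | undefined, nextKey: string | undefined): boolean {
  return prevKey !== nextKey; // same element type assumed
}
```

With key={messageId}, remounts('clientUuid', 'serverId') is true at every ID rewrite; with no key on a sole child, remounts(undefined, undefined) is false and React updates props in place.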
* perf: remove unstable keys from MultiMessage to prevent SSE lifecycle remounts
Both messageId and parentMessageId change during the SSE lifecycle
(client UUID → CREATED server ID → FINAL server ID), making neither
viable as a stable React key. Each key change caused React to destroy
and recreate the entire message component subtree, including all
memoized children — visible as icon/image flickering.
Remove explicit keys entirely and rely on React's positional
reconciliation. MultiMessage always renders exactly one child at
the same position, so React reuses the component instance and
updates props in place. The existing memo comparators on
ContentRender/MessageRender handle field-level diffing correctly.
Update tests to verify: key={messageId} causes 3 mounts/2 unmounts
per lifecycle, while no key causes 1 mount/0 unmounts.
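The mount/unmount arithmetic these tests assert can be simulated without React: feed in the key a slot sees on each render and count the remounts React would perform. A hypothetical sketch, not the project's test code.

```typescript
// Hypothetical simulation of the lifecycle assertion: count mounts/unmounts
// caused by a sequence of keys observed at one child slot.
function countMounts(keys: Array<string | undefined>): { mounts: number; unmounts: number } {
  let mounts = 0;
  let unmounts = 0;
  let current: string | undefined | symbol = Symbol('unmounted');
  for (const key of keys) {
    if (current !== key) {
      if (mounts > 0) unmounts += 1; // replacing an existing instance
      mounts += 1; // fresh mount for the new key
      current = key;
    }
  }
  return { mounts, unmounts };
}
```

countMounts(['clientUuid', 'createdId', 'finalId']) yields 3 mounts and 2 unmounts; countMounts([undefined, undefined, undefined]) yields 1 mount and 0 unmounts.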
* perf: remove unstable keys from child MultiMessage in message wrappers
Message.tsx, MessageContent.tsx, and MessageParts.tsx each render a
child MultiMessage with key={messageId} for the current message's
children. Since messageId changes during the SSE lifecycle (CREATED
event replaces the user message ID), the child MultiMessage gets
destroyed and recreated, unmounting the entire agent response subtree
including its MessageIcon.
Remove these keys for the same reason as the parent MultiMessage:
each child MultiMessage renders exactly one child at a fixed position,
so positional reconciliation correctly reuses the instance.
* chore: remove MultiMessage key tests — they test React behavior, not our code
The tests verified that key={messageId} causes remounts while no key
doesn't, but this is React's own reconciliation behavior. No unit test
can prevent someone from re-adding a key prop to JSX. The JSDoc comments
on MultiMessage document the decision and rationale.
LibreChat
✨ Features
- 🖥️ UI & Experience inspired by ChatGPT with enhanced design and features
- 🤖 AI Model Selection:
  - Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Responses API (incl. Azure)
  - Custom Endpoints: Use any OpenAI-compatible API with LibreChat, no proxy required
  - Compatible with Local & Remote AI Providers:
    - Ollama, Groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai
    - OpenRouter, Helicone, Perplexity, ShuttleAI, DeepSeek, Qwen, and more
- 💻 Code Interpreter API:
  - Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
  - Seamless File Handling: Upload, process, and download files directly
  - No Privacy Concerns: Fully isolated and secure execution
- 🔦 Agents & Tools Integration:
  - LibreChat Agents:
    - No-Code Custom Assistants: Build specialized, AI-driven helpers
    - Agent Marketplace: Discover and deploy community-built agents
    - Collaborative Sharing: Share agents with specific users and groups
    - Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
    - Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
  - Model Context Protocol (MCP) Support for Tools
- 🔍 Web Search:
  - Search the internet and retrieve relevant information to enhance your AI context
  - Combines search providers, content scrapers, and result rerankers for optimal results
  - Customizable Jina Reranking: Configure custom Jina API URLs for reranking services
  - Learn More →
- 🪄 Generative UI with Code Artifacts:
  - Code Artifacts allow creation of React, HTML, and Mermaid diagrams directly in chat
- 🎨 Image Generation & Editing:
  - Text-to-image and image-to-image with GPT-Image-1
  - Text-to-image with DALL-E (3/2), Stable Diffusion, Flux, or any MCP server
  - Produce stunning visuals from prompts or refine existing images with a single instruction
- 💾 Presets & Context Management:
  - Create, Save, & Share Custom Presets
  - Switch between AI Endpoints and Presets mid-chat
  - Edit, Resubmit, and Continue Messages with Conversation branching
  - Create and share prompts with specific users and groups
  - Fork Messages & Conversations for Advanced Context control
- 💬 Multimodal & File Interactions:
  - Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
  - Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
- 🌎 Multilingual UI:
  - English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
  - Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
  - Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
  - Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
- 🧠 Reasoning UI:
  - Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1
- 🎨 Customizable Interface:
  - Customizable Dropdown & Interface that adapts to both power users and newcomers
- Never lose a response: AI responses automatically reconnect and resume if your connection drops
- Multi-Tab & Multi-Device Sync: Open the same chat in multiple tabs or pick up on another device
- Production-Ready: Works from single-server setups to horizontally scaled deployments with Redis
- 🗣️ Speech & Audio:
  - Chat hands-free with Speech-to-Text and Text-to-Speech
  - Automatically send and play Audio
  - Supports OpenAI, Azure OpenAI, and ElevenLabs
- 📥 Import & Export Conversations:
  - Import Conversations from LibreChat, ChatGPT, Chatbot UI
  - Export conversations as screenshots, markdown, text, json
- 🔍 Search & Discovery:
  - Search all messages/conversations
- 👥 Multi-User & Secure Access:
  - Multi-User, Secure Authentication with OAuth2, LDAP, & Email Login Support
  - Built-in Moderation and Token Spend tools
- ⚙️ Configuration & Deployment:
  - Configure Proxy, Reverse Proxy, Docker, & many Deployment options
  - Use completely local or deploy on the cloud
- 📖 Open-Source & Community:
  - Completely Open-Source & Built in Public
  - Community-driven development, support, and feedback
For a thorough review of our features, see our docs here 📚
🪶 All-In-One AI Conversations with LibreChat
LibreChat is a self-hosted AI chat platform that unifies all major AI providers in a single, privacy-focused interface.
Beyond chat, LibreChat provides AI Agents, Model Context Protocol (MCP) support, Artifacts, Code Interpreter, custom actions, conversation search, and enterprise-ready multi-user authentication.
Open source, actively developed, and built for anyone who values control over their AI infrastructure.
🌐 Resources
GitHub Repo:
- RAG API: github.com/danny-avila/rag_api
- Website: github.com/LibreChat-AI/librechat.ai
Other:
- Website: librechat.ai
- Documentation: librechat.ai/docs
- Blog: librechat.ai/blog
📝 Changelog
Keep up with the latest updates by visiting the releases page and release notes.
⚠️ Please consult the changelog for breaking changes before updating.
⭐ Star History
✨ Contributions
Contributions, suggestions, bug reports and fixes are welcome!
For new features, components, or extensions, please open an issue and discuss before sending a PR.
If you'd like to help translate LibreChat into your language, we'd love your contribution! Improving our translations not only makes LibreChat more accessible to users around the world but also enhances the overall user experience. Please check out our Translation Guide.
💖 This project exists in its current state thanks to all the people who contribute
🎉 Special Thanks
We thank Locize for their translation management tools that support multiple languages in LibreChat.