Enhanced ChatGPT Clone: Features Agents, MCP, DeepSeek, Anthropic, AWS, OpenAI, Responses API, Azure, Groq, o1, GPT-5, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message search, Code Interpreter, langchain, DALL-E-3, OpenAPI Actions, Functions, Secure Multi-User Auth, Presets, open-source for self-hosting. Active. https://librechat.ai/
Danny Avila 359cc63b41
refactor: Use in-memory cache for App MCP configs to avoid Redis SCAN (#12410)
* perf: Use in-memory cache for App MCP configs to avoid Redis SCAN

The 'App' namespace holds static YAML-loaded configs identical on every
instance. Storing them in Redis and retrieving via SCAN + batch-GET
caused 60s+ stalls under concurrent load (#11624). Since these configs
are already loaded into memory at startup, bypass Redis entirely by
always returning ServerConfigsCacheInMemory for the 'App' namespace.
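The routing this commit describes can be sketched as follows. The constant and class names mirror the commit text, but the surrounding interface is a simplified assumption for illustration, not LibreChat's actual API:

```typescript
// Simplified sketch: App-namespace configs are static and identical on every
// instance, so the factory short-circuits to one process-local cache and
// never touches Redis for that namespace.
const APP_CACHE_NAMESPACE = 'App';

class ServerConfigsCacheInMemory {
  private store = new Map<string, unknown>();
  async getAll(): Promise<Record<string, unknown>> {
    return Object.fromEntries(this.store);
  }
  async addServer(name: string, config: unknown): Promise<void> {
    this.store.set(name, config);
  }
}

// One shared instance per process for the static App configs.
const appConfigsCache = new ServerConfigsCacheInMemory();

function serverConfigsCacheFactory(namespace: string): ServerConfigsCacheInMemory {
  if (namespace === APP_CACHE_NAMESPACE) {
    // Bypass Redis entirely: no SCAN, no batched GETs.
    return appConfigsCache;
  }
  // Non-App namespaces use a Redis-backed cache in the real code;
  // an independent in-memory cache stands in for it here.
  return new ServerConfigsCacheInMemory();
}
```

Because the factory always returns the same object for the App namespace, every caller in the process sees one consistent view of the YAML-loaded configs.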

* ♻️ refactor: Extract APP_CACHE_NAMESPACE constant and harden tests

- Extract magic string 'App' to a shared `APP_CACHE_NAMESPACE` constant
  used by both ServerConfigsCacheFactory and MCPServersRegistry
- Document that `leaderOnly` is ignored for the App namespace
- Reset `cacheConfig.USE_REDIS` in test `beforeEach` to prevent
  ordering-dependent flakiness
- Fix import order in test file (longest to shortest)

* 🐛 fix: Populate App cache on follower instances in cluster mode

In cluster deployments, only the leader runs MCPServersInitializer to
inspect and cache MCP server configs. Followers previously read these
from Redis, but with the App namespace now using in-memory storage,
followers would have an empty cache.

Add populateLocalCache() so follower processes independently initialize
their own in-memory App cache from the same YAML configs after the
leader signals completion. The method is idempotent — if the cache is
already populated (leader case), it's a no-op.

* 🐛 fix: Use static flag for populateLocalCache idempotency

Replace getAllServerConfigs() idempotency check with a static
localCachePopulated flag. The previous check merged App + DB caches,
causing false early returns in deployments with publicly shared DB
configs, and poisoned the TTL read-through cache with stale results.

The static flag is zero-cost (no async/Redis/DB calls), immune to
DB config interference, and is reset alongside hasInitializedThisProcess
in resetProcessFlag() for test teardown.

Also set localCachePopulated=true after leader initialization completes,
so subsequent calls on the leader don't redundantly re-run populateLocalCache.
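A minimal sketch of the flag-based idempotency check; the method bodies and surrounding class shape are assumptions, only the names come from the commit message:

```typescript
class MCPServersInitializer {
  // Process-local flag: checking it costs nothing (no async, Redis, or DB
  // calls) and cannot be confused by publicly shared DB configs.
  private static localCachePopulated = false;

  /** Test-teardown hook, reset alongside other process flags. */
  static resetProcessFlag(): void {
    MCPServersInitializer.localCachePopulated = false;
  }

  /** Idempotently fill this process's App cache from the YAML configs. */
  populateLocalCache(
    yamlConfigs: Record<string, unknown>,
    appCache: Map<string, unknown>,
  ): void {
    if (MCPServersInitializer.localCachePopulated) return; // leader / repeat call
    for (const [name, config] of Object.entries(yamlConfigs)) {
      appCache.set(name, config);
    }
    MCPServersInitializer.localCachePopulated = true;
  }
}
```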

* 📝 docs: Document process-local reset() semantics for App cache

With the App namespace using in-memory storage, reset() only clears the
calling process's cache. Add JSDoc noting this behavioral change so
callers in cluster deployments know each instance must reset independently.

* test: Add follower cache population tests for MCPServersInitializer

Cover the populateLocalCache code path:
- Follower populates its own App cache after leader signals completion
- localCachePopulated flag prevents redundant re-initialization
- Fresh follower process independently initializes all servers

* 🧹 style: Fix import order to longest-to-shortest convention

* 🔬 test: Add Redis perf benchmark to isolate getAll() bottleneck

Benchmarks that run against a live Redis instance to measure:
1. SCAN vs batched GET phases independently
2. SCAN cost scaling with total keyspace size (noise keys)
3. Concurrent getAll() at various concurrency levels (1/10/50/100)
4. Alternative: single aggregate key vs SCAN+GET
5. Alternative: raw MGET vs Keyv batch GET (serialization overhead)

Run with: npx jest --config packages/api/jest.config.mjs \
  --testPathPatterns="perf_benchmark" --coverage=false

* feat: Add aggregate-key Redis cache for MCP App configs

ServerConfigsCacheRedisAggregateKey stores all configs under a single
Redis key, making getAll() a single GET instead of SCAN + N GETs.

This eliminates the O(keyspace_size) SCAN that caused 60s+ stalls in
large deployments while preserving cross-instance visibility — all
instances read/write the same Redis key, so reinspection results
propagate automatically after readThroughCache TTL expiry.
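The access pattern can be sketched like this, with a `Map` standing in for the Redis client; the key name and method signatures are assumptions based on the commit text:

```typescript
const AGGREGATE_KEY = 'MCP:App:serverConfigs'; // hypothetical key name

class ServerConfigsCacheRedisAggregateKey {
  // A Map stands in for a Redis client; in the real code these are
  // GET/SET calls against one key.
  constructor(private redis: Map<string, string>) {}

  /** One GET, O(1) in total keyspace size — no SCAN. */
  async getAll(): Promise<Record<string, unknown>> {
    const raw = this.redis.get(AGGREGATE_KEY);
    return raw ? JSON.parse(raw) : {};
  }

  /** Read-modify-write of the single aggregate value. */
  async addServer(name: string, config: unknown): Promise<void> {
    const all = await this.getAll();
    if (name in all) throw new Error(`duplicate server config: ${name}`);
    all[name] = config;
    this.redis.set(AGGREGATE_KEY, JSON.stringify(all));
  }
}
```

Every instance reads and writes the same key, which is what keeps reinspection results visible cross-instance once the read-through TTL expires.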

* ♻️ refactor: Use aggregate-key cache for App namespace in factory

Update ServerConfigsCacheFactory to return ServerConfigsCacheRedisAggregateKey
for the App namespace when Redis is enabled, instead of ServerConfigsCacheInMemory.

This preserves cross-instance visibility (reinspection results propagate
through Redis) while eliminating SCAN. Non-App namespaces still use the
standard per-key ServerConfigsCacheRedis.

* 🗑️ revert: Remove populateLocalCache — no longer needed with aggregate key

With App configs stored under a single Redis key (aggregate approach),
followers read from Redis like before. The populateLocalCache mechanism
and its localCachePopulated flag are no longer necessary.

Also reverts the process-local reset() JSDoc since reset() is now
cluster-wide again via Redis.

* 🐛 fix: Add write mutex to aggregate cache and exclude perf benchmark from CI

- Add promise-based write lock to ServerConfigsCacheRedisAggregateKey to
  prevent concurrent read-modify-write races during parallel initialization
  (Promise.allSettled runs multiple addServer calls concurrently, causing
  last-write-wins data loss on the aggregate key)
- Rename perf benchmark to cache_integration pattern so CI skips it
  (requires live Redis)
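The promise-based lock can be sketched as a chained "tail" promise. This is the generic pattern, not LibreChat's exact implementation, and as a later review note records, it only serializes writers within one process:

```typescript
// Generic promise-chain mutex: each writer runs only after the previous
// writer's promise settles, serializing read-modify-write sections within
// this process. It does NOT guard against writers in other processes.
class WriteLock {
  private tail: Promise<void> = Promise.resolve();

  run<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.tail.then(fn);
    // Keep the chain alive even if fn rejects, so later writers still run.
    this.tail = result.then(
      () => undefined,
      () => undefined,
    );
    return result;
  }
}
```

Without the lock, parallel `addServer` calls launched by `Promise.allSettled` would each read the same aggregate snapshot, and the last write back would silently discard the others.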

* 🔧 fix: Rename perf benchmark to *.manual.spec.ts to exclude from all CI

The cache_integration pattern is picked up by test:cache-integration:mcp
in CI. Rename to *.manual.spec.ts which isn't matched by any CI runner.

* test: Add cache integration tests for ServerConfigsCacheRedisAggregateKey

Tests against a live Redis instance covering:
- CRUD operations (add, get, update, remove)
- getAll with empty/populated cache
- Duplicate add rejection, missing update/remove errors
- Concurrent write safety (20 parallel adds without data loss)
- Concurrent read safety (50 parallel getAll calls)
- Reset clears all configs

* 🔧 fix: Rename perf benchmark to *.manual.spec.ts to exclude from all CI

The perf benchmark file was renamed to *.manual.spec.ts but no
testPathIgnorePatterns existed for that convention. Add .*manual\.spec\.
to both test and test:ci scripts, plus jest.config.mjs, so manual-only
tests never run in CI unit test jobs.

* fix: Address review findings for aggregate key cache

- Add successCheck() to all write paths (add/update/remove) so Redis
  SET failures throw instead of being silently swallowed
- Override reset() to use targeted cache.delete(AGGREGATE_KEY) instead
  of inherited SCAN-based cache.clear() — consistent with eliminating
  SCAN operations
- Document cross-instance write race invariant in class JSDoc: the
  promise-based writeLock is process-local only; callers must enforce
  single-writer semantics externally (leader-only init)
- Use definite-assignment assertion (let resolve!:) instead of non-null
  assertion at call site
- Fix import type convention in integration test
- Verify Promise.allSettled rejections explicitly in concurrent write test
- Fix broken run command in benchmark file header
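On the definite-assignment point: `let resolve!:` asserts once, at the declaration, that the variable is assigned before use (the `Promise` executor runs synchronously), instead of sprinkling non-null `!` at every call site. A small illustrative helper, not code from this PR:

```typescript
// Deferred-promise helper. `resolve!` is a definite-assignment assertion:
// TypeScript cannot see that the executor runs synchronously and assigns
// `resolve` before `deferred` returns, so we assert it once here.
function deferred<T>(): { promise: Promise<T>; resolve: (value: T) => void } {
  let resolve!: (value: T) => void;
  const promise = new Promise<T>((res) => {
    resolve = res;
  });
  return { promise, resolve };
}
```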

* style: Fix import ordering per AGENTS.md convention

Local/project imports sorted longest to shortest.

* chore: Update import ordering and clean up unused imports in MCPServersRegistry.ts

* chore: import order

* chore: import order
2026-03-26 14:44:31 -04:00
.devcontainer 🪦 refactor: Remove Legacy Code (#10533) 2025-12-11 16:36:12 -05:00
.github refactor: Parallelize CI Workflows with Isolated Caching and Fan-Out Test Jobs (#12088) 2026-03-05 13:56:07 -05:00
.husky 🎨 refactor: Redesign Sidebar with Unified Icon Strip Layout (#12013) 2026-03-22 01:15:20 -04:00
.vscode 🔐 feat: Granular Role-based Permissions + Entra ID Group Discovery (#7804) 2025-08-13 16:24:17 -04:00
api 🎛️ feat: DB-Backed Per-Principal Config System (#12354) 2026-03-25 19:39:29 -04:00
client 🏁 fix: Invalidate Message Cache on Stream 404 Instead of Showing Error (#12411) 2026-03-26 12:27:31 -04:00
config 📦 refactor: Consolidate DB models, encapsulating Mongoose usage in data-schemas (#11830) 2026-03-21 14:28:53 -04:00
e2e 🐘 feat: FerretDB Compatibility (#11769) 2026-03-21 14:28:49 -04:00
helm v0.8.4 (#12339) 2026-03-20 18:01:00 -04:00
packages refactor: Use in-memory cache for App MCP configs to avoid Redis SCAN (#12410) 2026-03-26 14:44:31 -04:00
redis-config 🔄 refactor: Migrate Cache Logic to TypeScript (#9771) 2025-10-02 09:33:58 -04:00
src/tests 🆔 feat: Add OpenID Connect Federated Provider Token Support (#9931) 2025-11-21 09:51:11 -05:00
utils 🐳 chore: Update image registry references in Docker/Helm configurations (#12026) 2026-03-02 22:14:50 -05:00
.dockerignore 🐳 : Further Docker build Cleanup & Docs Update (#1502) 2024-01-06 11:59:08 -05:00
.env.example ⚗️ feat: Agent Context Compaction/Summarization (#12287) 2026-03-21 14:28:56 -04:00
.gitattributes 🎛️ feat: DB-Backed Per-Principal Config System (#12354) 2026-03-25 19:39:29 -04:00
.gitignore 🧩 feat: Redesign Tool Call UI with Contextual Icons, Smart Grouping, and Rich Output Rendering (#12163) 2026-03-25 12:31:39 -04:00
.prettierrc 🧹 chore: Migrate to Flat ESLint Config & Update Prettier Settings (#5737) 2025-02-09 12:15:20 -05:00
AGENTS.md 🎛️ feat: DB-Backed Per-Principal Config System (#12354) 2026-03-25 19:39:29 -04:00
bun.lock v0.8.4 (#12339) 2026-03-20 18:01:00 -04:00
CLAUDE.md ✳️ docs: Point CLAUDE.md to AGENTS.md (#11886) 2026-02-20 16:23:33 -05:00
deploy-compose.yml 🔒 chore: Bump MongoDB from 8.0.17 to 8.0.20 in Docker Compose Files (#12399) 2026-03-25 13:56:43 -04:00
docker-compose.override.yml.example 🐳 chore: Update image registry references in Docker/Helm configurations (#12026) 2026-03-02 22:14:50 -05:00
docker-compose.yml 🔒 chore: Bump MongoDB from 8.0.17 to 8.0.20 in Docker Compose Files (#12399) 2026-03-25 13:56:43 -04:00
Dockerfile v0.8.4 (#12339) 2026-03-20 18:01:00 -04:00
Dockerfile.multi v0.8.4 (#12339) 2026-03-20 18:01:00 -04:00
eslint.config.mjs 📦 refactor: Consolidate DB models, encapsulating Mongoose usage in data-schemas (#11830) 2026-03-21 14:28:53 -04:00
librechat.example.yaml v0.8.3 (#12161) 2026-03-09 15:19:57 -04:00
LICENSE ⚖️ docs: Update LICENSE.md Year: 2024 -> 2025 (#5915) 2025-02-17 10:39:46 -05:00
package-lock.json 🔄 refactor: Migrate to react-resizable-panels v4 with Artifacts Header polish (#12356) 2026-03-22 02:21:27 -04:00
package.json v0.8.4 (#12339) 2026-03-20 18:01:00 -04:00
rag.yml 🐳 chore: Update image registry references in Docker/Helm configurations (#12026) 2026-03-02 22:14:50 -05:00
README.md 📝 docs: add UTM tracking parameters to Railway deployment links (#12228) 2026-03-26 13:43:33 -04:00
README.zh.md 🏮 docs: Add Simplified Chinese README Translation (#12323) 2026-03-20 13:08:48 -04:00
turbo.json 🏎️ feat: Smart Reinstall with Turborepo Caching for Better DX (#11785) 2026-02-13 14:25:26 -05:00

LibreChat

English · 中文

Deploy on Railway · Deploy on Zeabur · Deploy on Sealos

Translation Progress

Features

  • 🖥️ UI & Experience inspired by ChatGPT with enhanced design and features

  • 🤖 AI Model Selection:

    • Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Responses API (incl. Azure)
    • Custom Endpoints: Use any OpenAI-compatible API with LibreChat, no proxy required
    • Compatible with Local & Remote AI Providers:
      • Ollama, Groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai,
      • OpenRouter, Helicone, Perplexity, ShuttleAI, Deepseek, Qwen, and more
  • 🔧 Code Interpreter API:

    • Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
    • Seamless File Handling: Upload, process, and download files directly
    • No Privacy Concerns: Fully isolated and secure execution
  • 🔦 Agents & Tools Integration:

    • LibreChat Agents:
      • No-Code Custom Assistants: Build specialized, AI-driven helpers
      • Agent Marketplace: Discover and deploy community-built agents
      • Collaborative Sharing: Share agents with specific users and groups
      • Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
      • Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
      • Model Context Protocol (MCP) Support for Tools
  • 🔍 Web Search:

    • Search the internet and retrieve relevant information to enhance your AI context
    • Combines search providers, content scrapers, and result rerankers for optimal results
    • Customizable Jina Reranking: Configure custom Jina API URLs for reranking services
    • Learn More →
  • 🪄 Generative UI with Code Artifacts:

    • Code Artifacts allow creation of React components, HTML pages, and Mermaid diagrams directly in chat
  • 🎨 Image Generation & Editing

  • 💾 Presets & Context Management:

    • Create, Save, & Share Custom Presets
    • Switch between AI Endpoints and Presets mid-chat
    • Edit, Resubmit, and Continue Messages with Conversation branching
    • Create and share prompts with specific users and groups
    • Fork Messages & Conversations for Advanced Context control
  • 💬 Multimodal & File Interactions:

    • Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
    • Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
  • 🌎 Multilingual UI:

    • English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
    • Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
    • Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
    • Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
  • 🧠 Reasoning UI:

    • Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1
  • 🎨 Customizable Interface:

    • Customizable Dropdown & Interface that adapts to both power users and newcomers
  • 🌊 Resumable Streams:

    • Never lose a response: AI responses automatically reconnect and resume if your connection drops
    • Multi-Tab & Multi-Device Sync: Open the same chat in multiple tabs or pick up on another device
    • Production-Ready: Works from single-server setups to horizontally scaled deployments with Redis
  • 🗣️ Speech & Audio:

    • Chat hands-free with Speech-to-Text and Text-to-Speech
    • Automatically send and play Audio
    • Supports OpenAI, Azure OpenAI, and ElevenLabs
  • 📥 Import & Export Conversations:

    • Import Conversations from LibreChat, ChatGPT, Chatbot UI
    • Export conversations as screenshots, Markdown, text, or JSON
  • 🔍 Search & Discovery:

    • Search all messages/conversations
  • 👥 Multi-User & Secure Access:

    • Multi-User, Secure Authentication with OAuth2, LDAP, & Email Login Support
    • Built-in Moderation and Token Spend tools
  • ⚙️ Configuration & Deployment:

    • Configure Proxy, Reverse Proxy, Docker, & many Deployment options
    • Use completely local or deploy on the cloud
  • 📖 Open-Source & Community:

    • Completely Open-Source & Built in Public
    • Community-driven development, support, and feedback

For a thorough review of our features, see our docs here 📚

🪶 All-In-One AI Conversations with LibreChat

LibreChat is a self-hosted AI chat platform that unifies all major AI providers in a single, privacy-focused interface.

Beyond chat, LibreChat provides AI Agents, Model Context Protocol (MCP) support, Artifacts, Code Interpreter, custom actions, conversation search, and enterprise-ready multi-user authentication.

Open source, actively developed, and built for anyone who values control over their AI infrastructure.


🌐 Resources

GitHub Repo:

Other:


📝 Changelog

Keep up with the latest updates by visiting the releases page and notes:

⚠️ Please consult the changelog for breaking changes before updating.


Star History

Star History Chart

Featured on Trendshift and in the Runa Capital ROSS Index (Fastest-Growing Open-Source Startups, Q1 2024)


Contributions

Contributions, suggestions, bug reports and fixes are welcome!

For new features, components, or extensions, please open an issue and discuss before sending a PR.

If you'd like to help translate LibreChat into your language, we'd love your contribution! Improving our translations not only makes LibreChat more accessible to users around the world but also enhances the overall user experience. Please check out our Translation Guide.


💖 This project exists in its current state thanks to all the people who contribute


🎉 Special Thanks

We thank Locize for their translation management tools that support multiple languages in LibreChat.
