# LibreChat

## 📃 Features
- 🖥️ UI matching ChatGPT, including Dark mode, Streaming, and the November 2023 updates
- 💬 Multimodal Chat:
  - Upload and analyze images with GPT-4 and Gemini Vision 📸
  - More filetypes and Assistants API integration in Active Development 🚧
- 🌎 Multilingual UI:
  - English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro, Русский, 日本語, Svenska, 한국어, Tiếng Việt, 繁體中文, العربية, Türkçe, Nederlands
- 🤖 AI model selection: OpenAI API, Azure, BingAI, ChatGPT, Google Vertex AI, Anthropic (Claude), Plugins
- 💾 Create, Save, & Share Custom Presets
- 🔄 Edit, Resubmit, and Continue messages with conversation branching
- 📤 Export conversations as screenshots, Markdown, text, or JSON
- 🔍 Search all messages/conversations
- 🔌 Plugins, including web access, image generation with DALL-E 3, and more
- 👥 Multi-User, Secure Authentication with Moderation and Token spend tools
- ⚙️ Configure Proxy, Reverse Proxy, Docker, and many other deployment options; completely open-source (see the deployment sketch below)
For a thorough review of our features, see our docs at docs.librechat.ai 📚
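Since the repository ships a `docker-compose.yml` and an `.env.example` at its root, a local deployment can look roughly like the sketch below. This is an illustrative outline rather than the official install guide: the exact environment variable names (for example `OPENAI_API_KEY`) and the compose workflow may differ between releases, so follow the installation guides at docs.librechat.ai for the authoritative steps.

```bash
# Rough local-deployment sketch (assumes the docker-compose.yml and
# .env.example shipped in the repository root; see docs.librechat.ai
# for the authoritative, version-specific instructions).
git clone https://github.com/danny-avila/LibreChat.git
cd LibreChat

# Copy the example environment file and fill in your own API keys.
# Variable names such as OPENAI_API_KEY are illustrative here; check
# .env.example for the exact names your release expects.
cp .env.example .env

# Start the stack in the background, then follow the logs.
docker compose up -d
docker compose logs -f
```

The compose route keeps the whole stack behind a single command; the documentation also covers the other deployment options mentioned above.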
## 🪶 All-In-One AI Conversations with LibreChat
LibreChat brings together the future of assistant AIs with the revolutionary technology of OpenAI's ChatGPT. Celebrating the original styling, LibreChat gives you the ability to integrate multiple AI models. It also integrates and enhances original client features such as conversation and message search, prompt templates and plugins.
With LibreChat, you no longer need to opt for ChatGPT Plus and can instead use free or pay-per-call APIs. We welcome contributions, cloning, and forking to enhance the capabilities of this advanced chatbot platform.
## 📚 Documentation
For more information on how to use our advanced features, install and configure our software, and access our guidelines and tutorials, please check out our documentation at docs.librechat.ai
## 📝 Changelog

Keep up with the latest updates by visiting the Releases page.

⚠️ Please consult the breaking changes before updating.
## ⭐ Star History

## ✨ Contributions
Contributions, suggestions, bug reports and fixes are welcome!
For new features, components, or extensions, please open an issue and discuss before sending a PR.