Enhanced ChatGPT Clone: Features Agents, MCP, DeepSeek, Anthropic, AWS, OpenAI, Responses API, Azure, Groq, o1, GPT-5, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message search, Code Interpreter, langchain, DALL-E-3, OpenAPI Actions, Functions, Secure Multi-User Auth, Presets, open-source for self-hosting. Active. https://librechat.ai/
Danny Avila 317a1bd8da
feat: ConversationSummaryBufferMemory (#973)
* refactor: pass model in message edit payload, use encoder in standalone util function

* feat: add summaryBuffer helper

* refactor(api/messages): use new countTokens helper and add auth middleware at top

* wip: ConversationSummaryBufferMemory

* refactor: move pre-generation helpers to prompts dir

* chore: remove console log

* chore: remove test as payload will no longer carry tokenCount

* chore: update getMessagesWithinTokenLimit JSDoc

* refactor: optimize getMessagesForConversation and also break on summary, feat(ci): getMessagesForConversation tests

* refactor(getMessagesForConvo): count '00000000-0000-0000-0000-000000000000' as root message

* chore: add newer model to token map

* fix: condition was pointing to a prop of the array instead of the message prop

* refactor(BaseClient): use object for refineMessages param, rename 'summary' to 'summaryMessage', add previous_summary
refactor(getMessagesWithinTokenLimit): replace text and tokenCount if should summarize, summary, and summaryTokenCount are present
fix/refactor(handleContextStrategy): use the right comparison length for context diff, and replace payload first message when a summary is present

* chore: log previous_summary if debugging

* refactor(formatMessage): assume if role is defined that it's a valid value

* refactor(getMessagesWithinTokenLimit): remove summary logic
refactor(handleContextStrategy): add usePrevSummary logic in case only summary was pruned
refactor(loadHistory): initial message query will return all ordered messages but keep track of the latest summary
refactor(getMessagesForConversation): use object for single param, edit jsdoc, edit all files using the method
refactor(ChatGPTClient): order messages before buildPrompt is called, TODO: add convoSumBuffMemory logic

* fix: undefined handling and summarizing only when shouldRefineContext is true

* chore(BaseClient): fix test results omitting system role for summaries and test edge case

* chore: export summaryBuffer from index file

* refactor(OpenAIClient/BaseClient): move refineMessages to subclass, implement LLM initialization for summaryBuffer

* feat: add OPENAI_SUMMARIZE to enable summarizing, refactor: rename client prop 'shouldRefineContext' to 'shouldSummarize', change contextStrategy value to 'summarize' from 'refine'

* refactor: rename refineMessages method to summarizeMessages for clarity

* chore: clarify summary future intent in .env.example

* refactor(initializeLLM): handle case for either 'model' or 'modelName' being passed

* feat(gptPlugins): enable summarization for plugins

* refactor(gptPlugins): utilize new initializeLLM method and formatting methods for messages, use payload array for currentMessages and assign pastMessages sooner

* refactor(agents): use ConversationSummaryBufferMemory for both agent types

* refactor(formatMessage): optimize original method for langchain, add helper function for langchain messages, add JSDocs and tests

* refactor(summaryBuffer): add helper to createSummaryBufferMemory, and use new formatting helpers

* fix: forgot to spread formatMessages; also took the opportunity to pluralize the filename

* refactor: pass memory to tools, namely OpenAPI specs; not used (and may never be used) by the new method, but added for testing

* ci(formatMessages): add more exhaustive checks for langchain messages

* feat: add debug env var for OpenAI

* chore: delete unnecessary comments

* chore: add extra note about summary feature

* fix: remove tokenCount from payload instructions

* fix: test fail

* fix: only pass instructions to payload when defined or not empty object

* refactor: fromPromptMessages is deprecated, use renamed method fromMessages

* refactor: use 'includes' instead of 'startsWith' for extended OpenRouter compatibility

* fix(PluginsClient.buildPromptBody): handle undefined message strings

* chore: log langchain titling error

* feat: getModelMaxTokens helper

* feat: tokenSplit helper

* feat: summary prompts updated

* fix: optimize _CUT_OFF_SUMMARIZER prompt

* refactor(summaryBuffer): use custom summary prompt, allow prompt to be passed, pass humanPrefix and aiPrefix to memory, along with any future variables, rename messagesToRefine to context

* fix(summaryBuffer): handle edge case where messagesToRefine exceeds summary context,
refactor(BaseClient): allow custom maxContextTokens to be passed to getMessagesWithinTokenLimit, add defined check before unshifting summaryMessage, update shouldSummarize based on this
refactor(OpenAIClient): use getModelMaxTokens, use cut-off message method for summary if no messages were left after pruning

* fix(handleContextStrategy): handle case where incoming prompt is bigger than model context

* chore: rename refinedContent to splitText

* chore: remove unnecessary debug log
Committed 2023-09-26 21:02:28 -04:00
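Taken together, the commits above replace the old 'refine' context strategy with summarization: when OPENAI_SUMMARIZE is enabled, older turns are folded into a running summary produced by a separate LLM while the most recent messages stay verbatim. The sketch below shows the underlying langchain primitive, ConversationSummaryBufferMemory; it illustrates the approach rather than reproducing the project's actual summaryBuffer helper, and the model name, token limit, and prefixes are placeholder values.

```javascript
// Minimal sketch: langchain's ConversationSummaryBufferMemory folds turns that
// overflow maxTokenLimit into a running summary produced by a separate LLM.
// Values below (model, limit, prefixes) are illustrative, not LibreChat's defaults.
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { ConversationSummaryBufferMemory, ChatMessageHistory } = require('langchain/memory');
const { HumanMessage, AIMessage } = require('langchain/schema');

async function demoSummaryBuffer() {
  const memory = new ConversationSummaryBufferMemory({
    llm: new ChatOpenAI({ modelName: 'gpt-3.5-turbo', temperature: 0.2 }),
    maxTokenLimit: 1000,   // turns beyond this budget get summarized
    returnMessages: true,  // return chat messages rather than a single string
    humanPrefix: 'User',   // the PR forwards humanPrefix/aiPrefix to the memory
    aiPrefix: 'Assistant',
    chatHistory: new ChatMessageHistory([
      new HumanMessage('Explain how the summary buffer works.'),
      new AIMessage('Older messages are condensed into a summary...'),
    ]),
  });

  await memory.saveContext(
    { input: 'And what survives pruning?' },
    { output: 'The most recent turns plus the running summary.' },
  );

  // Once pruning has happened, the summary comes back as a system message
  // followed by the most recent, un-summarized turns.
  const { history } = await memory.loadMemoryVariables({});
  console.log(history);
}

demoSummaryBuffer();
```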
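The context strategy described in the handleContextStrategy / getMessagesWithinTokenLimit commits walks the conversation from newest to oldest, keeps what fits in the model's window, and hands everything that fell off to the summarizer. The helper below is a simplified, hypothetical reconstruction of that flow; the real functions take different parameters and also track summaries and instructions.

```javascript
// Simplified, hypothetical sketch of the pruning step described above:
// keep the newest messages that fit the budget, summarize the rest.
function pruneToTokenLimit(orderedMessages, maxContextTokens, countTokens) {
  const context = [];
  let remaining = maxContextTokens;

  // Walk newest-to-oldest so the latest turns always survive.
  for (const message of [...orderedMessages].reverse()) {
    const tokens = countTokens(message.text);
    if (tokens > remaining) break;
    remaining -= tokens;
    context.unshift(message); // keep chronological order in the result
  }

  // Everything older than the kept window becomes input for the summary LLM.
  const messagesToSummarize = orderedMessages.slice(0, orderedMessages.length - context.length);
  return { context, messagesToSummarize, remainingTokens: remaining };
}
```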
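Two of the new helpers deal with model context sizes. Below is a hypothetical sketch in the spirit of getModelMaxTokens; the actual map and values live in the api package, so the entries here are assumptions, but it shows why the lookup uses 'includes' rather than 'startsWith' (so prefixed OpenRouter ids such as 'openai/gpt-4' still match).

```javascript
// Hypothetical token map; the real keys and values ship with LibreChat's api package.
// Longer keys come first so 'gpt-3.5-turbo-16k' is matched before 'gpt-3.5-turbo'.
const maxTokensMap = {
  'gpt-4-32k': 32767,
  'gpt-4': 8191,
  'gpt-3.5-turbo-16k': 15999,
  'gpt-3.5-turbo': 4095,
};

// Sketch of a getModelMaxTokens-style helper. Matching with `includes` keeps
// OpenRouter-prefixed ids like 'openai/gpt-4' compatible with the same map.
function getModelMaxTokens(modelName) {
  const key = Object.keys(maxTokensMap).find((k) => modelName.includes(k));
  return key ? maxTokensMap[key] : undefined;
}

console.log(getModelMaxTokens('openai/gpt-3.5-turbo-16k')); // 15999
```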
Path | Latest commit | Date
.devcontainer | fix: devcontainer image and networking (#891) | 2023-09-07
.github | Update CONTRIBUTING.md | 2023-09-26
.husky | chore: move files out of root to declutter | 2023-09-06
api | feat: ConversationSummaryBufferMemory (#973) | 2023-09-26
client | feat: ConversationSummaryBufferMemory (#973) | 2023-09-26
config | chore: Remove Unused Dependencies 🧹 (#939) | 2023-09-14
docs | docs: update render.md to include meilisearch guide (#982) | 2023-09-22
e2e | feat(db & e2e): Enhance DB Schemas/Controllers and Improve E2E Tests (#966) | 2023-09-18
packages/data-provider | feat: ConversationSummaryBufferMemory (#973) | 2023-09-26
pyserver | feat: Add Code Interpreter Plugin (#837) | 2023-08-28
.dockerignore | chore: Update docker, Minor Styling fix (#528) | 2023-06-17
.env.example | feat: ConversationSummaryBufferMemory (#973) | 2023-09-26
.eslintrc.js | refactor(client): Refactors recent typescript changes for best practices (#763) | 2023-08-05
.gitignore | feat: Message Rate Limiters, Violation Logging, & Ban System 🔨 (#903) | 2023-09-13
bun.lockb | chore: Remove Unused Dependencies 🧹 (#939) | 2023-09-14
deploy-compose.yml | chore(docker-compose.yml): comment out meilisearch ports in docker-compose.yml (#807) | 2023-08-14
docker-compose.yml | chore(docker-compose.yml): comment out meilisearch ports in docker-compose.yml (#807) | 2023-08-14
Dockerfile | Add podman installation instructions. Update dockerfile to stub env (#819) | 2023-08-24
Dockerfile.multi | chore(Dockerfile.multi): add data-provider package build and copy step | 2023-07-30
index.html | Update index.html to replace ChatGPT Clone with LibreChat (#724) | 2023-07-28
mkdocs.yml | docs: Utilize Meilisearch Using LibreChat in Render (#972) | 2023-09-22
package-lock.json | feat: Logins log for Fail2Ban (#986) | 2023-09-24
package.json | v0.5.9 (#970) | 2023-09-18
prettier.config.js | refactor: Settings/Presets UI Restructure, convert many files to TS (#740) | 2023-08-04
README.md | feat: Message Rate Limiters, Violation Logging, & Ban System 🔨 (#903) | 2023-09-13

LibreChat

All-In-One AI Conversations with LibreChat

LibreChat brings together the future of assistant AIs with the revolutionary technology of OpenAI's ChatGPT. Celebrating the original styling, LibreChat gives you the ability to integrate multiple AI models. It also integrates and enhances original client features such as conversation and message search, prompt templates and plugins.

With LibreChat, you no longer need to opt for ChatGPT Plus and can instead use free or pay-per-call APIs. We welcome contributions, cloning, and forking to enhance the capabilities of this advanced chatbot platform.

Watch the video: click on the thumbnail to open it ☝️

Features

  • Response streaming identical to ChatGPT through server-sent events (see the sketch after this list)
  • UI from original ChatGPT, including Dark mode
  • AI model selection: OpenAI API, BingAI, ChatGPT Browser, PaLM2, Anthropic (Claude), Plugins
  • Create, Save, & Share custom presets - More info on prompt presets here
  • Edit and Resubmit messages with conversation branching
  • Search all messages/conversations - More info here
  • Plugins now available (including web access, image generation and more)
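For the streaming item above: responses arrive over a standard server-sent events connection, so a browser client can render tokens as they are generated. Below is a minimal, generic sketch; the endpoint path and payload shape are illustrative, not LibreChat's actual API.

```javascript
// Generic SSE consumption sketch; endpoint and payload shape are illustrative only.
const source = new EventSource('/api/messages/stream');

source.onmessage = (event) => {
  const { text } = JSON.parse(event.data);               // each event carries the next chunk
  document.querySelector('#reply').textContent += text;  // append to the chat bubble as it streams
};

source.onerror = () => source.close();                   // server ends the stream when the reply is complete
```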

⚠️ Breaking Changes ⚠️

Please read this before updating from a previous version


Changelog

Keep up with the latest updates by visiting the releases page - Releases


Table of Contents

Getting Started
General Information
Features
Cloud Deployment
Contributions

Star History

Star History Chart


Sponsors

Sponsored by @mjtechguy, @SphaeroX, @DavidDev1334, @fuegovic, @Pharrcyde


Contributors

Contributions, suggestions, bug reports, and fixes are welcome! Please read the documentation before you do!


For new features, components, or extensions, please open an issue and discuss before sending a PR.

This project exists in its current state thanks to all the people who contribute.