LibreChat/api/app/clients/llm
Danny Avila a45b384bbc
💾 feat: Anthropic Prompt Caching (#3670)
* wip: initial cache control implementation; add typing for transaction handling (see the cache_control sketch below)

* feat: first pass of Anthropic Prompt Caching

* feat: standardize passing stream usage in when calculating token counts

* feat: Add getCacheMultiplier function to calculate the cache multiplier for different valueKeys and cacheTypes (see the pricing sketch below)

* chore: imports order

* refactor: token usage recording in AnthropicClient; no need to "correct" counts, as the API reports the exact amount

* feat: more accurate token counting using stream usage data

* feat: Improve token counting accuracy with stream usage data (see the stream usage sketch below)

* refactor: ensure token estimations are more accurate than not when custom instructions or files are not resent with every request

* refactor: clean up updateUserMessageTokenCount so transactions stay as accurate as possible even when user message token counts should not be updated (see the transaction sketch below)

* ci: fix tests
2024-08-17 03:24:09 -04:00
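
A minimal sketch of the cache breakpoint the first commit refers to, assuming a raw fetch against the Anthropic Messages API from an async context; the beta header and cache_control field follow Anthropic's prompt-caching documentation, and LONG_CUSTOM_INSTRUCTIONS is a placeholder:

// Mark a large, stable system prompt as cacheable so Anthropic can reuse
// it across requests. Requires the prompt-caching beta header.
const response = await fetch('https://api.anthropic.com/v1/messages', {
  method: 'POST',
  headers: {
    'x-api-key': process.env.ANTHROPIC_API_KEY,
    'anthropic-version': '2023-06-01',
    'anthropic-beta': 'prompt-caching-2024-07-31',
    'content-type': 'application/json',
  },
  body: JSON.stringify({
    model: 'claude-3-5-sonnet-20240620',
    max_tokens: 1024,
    system: [
      {
        type: 'text',
        text: LONG_CUSTOM_INSTRUCTIONS, // placeholder: large, stable instructions
        cache_control: { type: 'ephemeral' }, // the cache breakpoint
      },
    ],
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});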
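
The getCacheMultiplier commit suggests a pricing lookup along these lines; the shape below is a hypothetical illustration, with values mirroring Anthropic's published cache pricing (writes at 1.25x base input, reads at 0.1x):

// Hypothetical sketch: valueKey identifies the model's pricing tier and
// cacheType is either 'write' or 'read'. Values are USD per 1M tokens.
const cacheTokenValues = {
  'claude-3-5-sonnet': { write: 3.75, read: 0.3 }, // base input: 3.00
  'claude-3-haiku': { write: 0.3, read: 0.03 }, // base input: 0.25
};

function getCacheMultiplier({ valueKey, cacheType }) {
  return cacheTokenValues[valueKey]?.[cacheType] ?? null;
}

getCacheMultiplier({ valueKey: 'claude-3-5-sonnet', cacheType: 'read' }) would then return 0.3, and an unknown key falls through to null so callers can fall back to regular input pricing.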
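
For the stream-usage commits, a sketch of where the exact counts come from, assuming Anthropic's documented streaming event shape: message_start carries the input and cache token counts, while message_delta carries the cumulative output count:

// Accumulate exact token usage from Anthropic stream events instead of
// estimating counts locally.
function collectStreamUsage() {
  const usage = {
    input_tokens: 0,
    output_tokens: 0,
    cache_creation_input_tokens: 0,
    cache_read_input_tokens: 0,
  };
  return {
    usage,
    onEvent(event) {
      if (event.type === 'message_start') {
        Object.assign(usage, event.message.usage); // prompt-side counts
      } else if (event.type === 'message_delta' && event.usage) {
        usage.output_tokens = event.usage.output_tokens; // cumulative total
      }
    },
  };
}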
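
The last two refactors read as a split between billing and persistence; the flow below is a hypothetical reconstruction (the helper signatures are assumptions, not the PR's actual code):

// Bill transactions from exact stream usage when available, but only
// overwrite the stored user-message token count when doing so would not
// skew future estimates. Both helpers are assumed, injected dependencies.
function recordUsage({ streamUsage, estimatedPromptTokens, shouldUpdateUserMessage }, helpers) {
  const { spendTokens, updateUserMessageTokenCount } = helpers;
  const promptTokens = streamUsage
    ? streamUsage.input_tokens +
      (streamUsage.cache_creation_input_tokens ?? 0) +
      (streamUsage.cache_read_input_tokens ?? 0)
    : estimatedPromptTokens;
  spendTokens({ promptTokens });
  if (streamUsage && shouldUpdateUserMessage) {
    updateUserMessageTokenCount(streamUsage.input_tokens);
  }
}
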
createCoherePayload.js 🧠 fix(Cohere): map to expected SDK params (#2329) 2024-04-05 16:45:18 -04:00
createLLM.js 🅰️ feat: Azure OpenAI Assistants API Support (#1992) 2024-03-14 17:21:42 -04:00
index.js 🧠 feat: Cohere support as Custom Endpoint (#2328) 2024-04-05 15:19:41 -04:00
RunManager.js 💾 feat: Anthropic Prompt Caching (#3670) 2024-08-17 03:24:09 -04:00