🌉 feat: Integrate Helicone AI Gateway Provider (#10287)

* feat: integrate Helicone AI gateway provider

- Add Helicone provider support with automatic model fetching
- Implement custom API logic for the Helicone model registry endpoint
- Enable access to 75+ models from multiple AI providers through the Helicone gateway
- Add Helicone to supported providers list in README
- Include Helicone configuration in example YAML
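
The example YAML entry registers Helicone as an OpenAI-compatible custom endpoint in `librechat.yaml`. Below is a minimal sketch of that configuration; the `baseURL`, the placeholder default model, and the environment variable name are assumptions rather than the exact values shipped in this change, and `models.fetch: true` is what drives the automatic model fetching noted above.

```yaml
# Minimal sketch of a Helicone custom endpoint in librechat.yaml.
# The baseURL and model names are assumptions; use the values from
# the example YAML included with this change.
endpoints:
  custom:
    - name: "Helicone"
      apiKey: "${HELICONE_API_KEY}"
      # Assumed OpenAI-compatible gateway URL
      baseURL: "https://ai-gateway.helicone.ai/v1"
      models:
        default: ["gpt-4o-mini"] # placeholder fallback model
        fetch: true              # fetch the full model list from the gateway
      titleConvo: true
      titleModel: "current_model"
      modelDisplayLabel: "Helicone"
```

With `fetch: true`, LibreChat queries the endpoint's model list at startup (via the custom registry logic mentioned above), so the gateway's models appear in the model selector without being hard-coded in `default`.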

* docs: add Helicone to supported providers list

* fix comments

* fix Helicone icon asset to be backgroundless

* removed unnecessary changes

* replace png Helicone image with svg
Authored by _juliettech on 2025-11-13 08:45:32 -05:00; committed by GitHub
parent 6e19026c48
commit bc561840bb
5 changed files with 37 additions and 4 deletions

README.md

@@ -56,7 +56,7 @@
 - [Custom Endpoints](https://www.librechat.ai/docs/quick_start/custom_endpoints): Use any OpenAI-compatible API with LibreChat, no proxy required
 - Compatible with [Local & Remote AI Providers](https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints):
   - Ollama, groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai,
-    OpenRouter, Perplexity, ShuttleAI, Deepseek, Qwen, and more
+    OpenRouter, Helicone, Perplexity, ShuttleAI, Deepseek, Qwen, and more
 - 🔧 **[Code Interpreter API](https://www.librechat.ai/docs/features/code_interpreter)**:
   - Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran