| title | description | weight |
|---|---|---|
| 🚅 LiteLLM and Ollama | Using LibreChat with LiteLLM Proxy | -7 |
# Using LibreChat with LiteLLM Proxy
Use LiteLLM Proxy for:

- Calling 100+ LLMs (Huggingface, Bedrock, TogetherAI, etc.) in the OpenAI ChatCompletions & Completions format
- Load balancing between multiple models and deployments of the same model; the LiteLLM proxy can handle 1k+ requests/second during load tests
- Authentication & spend tracking via virtual keys
## Start the LiteLLM Proxy Server

### Install litellm

```sh
pip install litellm
```

### Create a config.yaml for the litellm proxy

More information on LiteLLM configurations is available at [docs.litellm.ai/docs/simple_proxy](https://docs.litellm.ai/docs/simple_proxy):
```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-small-eu
      api_base: https://my-endpoint-europe-berri-992.openai.azure.com/
      api_key:
      rpm: 6      # Rate limit for this deployment: in requests per minute (rpm)
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-small-ca
      api_base: https://my-endpoint-canada-berri992.openai.azure.com/
      api_key:
      rpm: 6
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-large
      api_base: https://openai-france-1234.openai.azure.com/
      api_key:
      rpm: 1440
```
### Start the proxy

```sh
litellm --config /path/to/config.yaml

#INFO: Proxy running on http://0.0.0.0:8000
```
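With the proxy running, you can send it a quick test request in the OpenAI format before wiring it into LibreChat. This is a minimal sketch that assumes the proxy is listening locally on port 8000 with the config above and no master key configured (so no `Authorization` header is required):

```sh
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello, which deployment am I talking to?"}]
      }'
```

A JSON chat completion response indicates the proxy is reachable and at least one configured deployment is working.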
## Use LiteLLM Proxy Server with LibreChat

### 1. Clone the repo

```sh
git clone https://github.com/danny-avila/LibreChat.git
```

### 2. Modify LibreChat's docker-compose.yml

```
OPENAI_REVERSE_PROXY=http://host.docker.internal:8000/v1/chat/completions
```
**Important**: As of v0.6.6, it is recommended that you use the `librechat.yaml` configuration file (guide here) to add reverse proxies as separate endpoints.
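For illustration, a custom endpoint entry for the LiteLLM proxy in `librechat.yaml` might look like the sketch below. The endpoint name, placeholder key, base URL, and model list are assumptions for this particular setup, so treat it as a starting point and follow the guide above for the authoritative schema:

```yaml
endpoints:
  custom:
    - name: "LiteLLM"                                 # label shown in the LibreChat UI
      apiKey: "sk-1234"                               # placeholder; LiteLLM ignores it unless a master key is set
      baseURL: "http://host.docker.internal:8000/v1"  # the LiteLLM proxy started above
      models:
        default: ["gpt-3.5-turbo"]
        fetch: true                                   # let LibreChat fetch the proxy's model list
      titleConvo: true
      titleModel: "gpt-3.5-turbo"
```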
### 3. Save a fake OpenAI key in LibreChat's .env

Copy LibreChat's `.env.example` to `.env` and overwrite the default `OPENAI_API_KEY` (by default it requires the user to pass a key).

```
OPENAI_API_KEY=sk-1234
```

### 4. Run LibreChat

```sh
docker compose up
```
## Why use LiteLLM?

- **Access to Multiple LLMs**: It allows calling over 100 LLMs from platforms like Huggingface, Bedrock, TogetherAI, etc., using OpenAI's ChatCompletions and Completions format.
- **Load Balancing**: Capable of handling over 1,000 requests per second during load tests, it balances load across various models and deployments.
- **Authentication & Spend Tracking**: The server supports virtual keys for authentication and tracks spending.
Key components and features include:
- Installation: Easy installation.
- Testing: Testing features to route requests to specific models.
- Server Endpoints: Offers multiple endpoints for chat completions, completions, embeddings, model lists, and key generation.
- Supported LLMs: Supports a wide range of LLMs, including AWS Bedrock, Azure OpenAI, Huggingface, AWS Sagemaker, Anthropic, and more.
- Proxy Configurations: Allows setting various parameters like model list, server settings, environment variables, and more.
- Multiple Models Management: Configurations can be set up for managing multiple models with fallbacks, cooldowns, retries, and timeouts (a config sketch follows at the end of this section).
- Embedding Models Support: Special configurations for embedding models.
- Authentication Management: Features for managing authentication through virtual keys, model upgrades/downgrades, and tracking spend.
- Custom Configurations: Supports setting model-specific parameters, caching responses, and custom prompt templates.
- Debugging Tools: Options for debugging and logging proxy input/output.
- Deployment and Performance: Information on deploying LiteLLM Proxy and its performance metrics.
- Proxy CLI Arguments: A wide range of command-line arguments for customization.
Overall, LiteLLM Server offers a comprehensive suite of tools for managing, deploying, and interacting with a variety of LLMs, making it a versatile choice for large-scale AI applications.
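To make the multiple-models management point above concrete, the proxy config can carry global retry, timeout, and fallback settings alongside `model_list`. The keys below are a hedged sketch only; the exact names and placement are assumptions drawn from the LiteLLM proxy docs, so verify them at docs.litellm.ai before relying on them:

```yaml
# Sketch only: confirm key names against the LiteLLM proxy documentation.
litellm_settings:
  num_retries: 3              # retry a failed call a few times before giving up
  request_timeout: 120        # seconds before a request is abandoned
  fallbacks: [{"gpt-3.5-turbo": ["gpt-4"]}]   # assumed syntax: try gpt-4 if every gpt-3.5-turbo deployment fails

router_settings:
  routing_strategy: simple-shuffle   # spread requests across deployments sharing a model_name
```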
## Ollama

Use Ollama to:

- Run large language models on local hardware
- Host multiple models
- Dynamically load a model upon request
### docker-compose.yaml with GPU
```yaml
version: "3.8"
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-v1.18.8
    volumes:
      - ./litellm/litellm-config.yaml:/app/config.yaml
    command: [ "--config", "/app/config.yaml", "--port", "8000", "--num_workers", "8" ]
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [compute, utility]
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
```
### Loading Models in Ollama

- Browse the available models at the Ollama Library
- Run `docker exec -it ollama /bin/bash` to open a shell in the Ollama container
- Copy the command from the Tags tab on the library website; it should begin with `ollama run` (see the example after this list)
- Check the model size; models that fit entirely in GPU memory perform best
- Use `/bye` to exit the model prompt
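For example, assuming the container from the compose file above is reachable as `ollama` and using `mistral` as an illustrative model tag:

```sh
# Open a shell in the Ollama container
docker exec -it ollama /bin/bash

# Inside the container: pull the model so the first request doesn't block on a download,
# then run it interactively to confirm it loads
ollama pull mistral
ollama run mistral
# Type /bye to leave the model prompt, then `exit` to leave the container shell
```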
### LiteLLM Ollama Configuration

Add the lines below to the LiteLLM config (the `litellm-config.yaml` mounted above) under `model_list` to access the Ollama models through the proxy:
```yaml
  - model_name: mixtral
    litellm_params:
      model: ollama/mixtral:8x7b-instruct-v0.1-q5_K_M
      api_base: http://ollama:11434
      stream: True
  - model_name: mistral
    litellm_params:
      model: ollama/mistral
      api_base: http://ollama:11434
      stream: True
```
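Once the compose stack is running, you can check that the proxy routes to Ollama with an OpenAI-format request. This sketch assumes the `litellm` service's port 8000 is reachable from where you run it (add a `ports` mapping such as `"8000:8000"` to the `litellm` service if you want to call it from the host):

```sh
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral",
        "messages": [{"role": "user", "content": "Say hello in one short sentence."}]
      }'
```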