🦙 doc update: llama3 (#2470)

* docs: update breaking_changes.md

* docs: update ai_endpoints.md -> llama3 for Ollama and groq

* librechat.yaml: update groq models

* Update breaking_changes.md

logs location

* Update breaking_changes.md

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
Fuegovic 2024-04-19 21:40:12 -04:00 committed by GitHub
parent e6310c806a
commit 4196a86fa9
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
3 changed files with 51 additions and 7 deletions


@@ -11,14 +11,22 @@ weight: -10
Certain changes in the updates may impact cookies, leading to unexpected behaviors if not cleared properly.
---
## v0.7.1+
!!! info "🔍 Google Search Plugin"
- **[Google Search Plugin](../features/plugins/google_search.md)**: Changed the environment variable for this plugin from `GOOGLE_API_KEY` to `GOOGLE_SEARCH_API_KEY` due to a conflict with the Google Generative AI library pulling this variable automatically. If you are using this plugin, please update your `.env` file accordingly.
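The rename above amounts to a one-line edit in your `.env` file (the key value shown is a placeholder, not a real credential):

```sh
# Before (v0.7.0): conflicted with the Google Generative AI library,
# which reads GOOGLE_API_KEY automatically
# GOOGLE_API_KEY=your-google-search-api-key

# After (v0.7.1+): read only by the Google Search plugin
GOOGLE_SEARCH_API_KEY=your-google-search-api-key
```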
## v0.7.0+
!!! failure "Error Messages (UI)"
![image](https://github.com/danny-avila/LibreChat/assets/32828263/0ab27798-5515-49b4-ac29-e4ad83d73d7c)
Client-facing error messages now display this warning, asking users to contact the admin. For the full error, consult the console logs or the additional logs located in `./logs`.
!!! warning "🪵 Logs Location"
- The full logs are now in `./logs` (they are still in `./api/logs` for local, non-docker installations)
!!! warning "🔍 Google Search Plugin"
- **[Google Search Plugin](../features/plugins/google_search.md)**: Changed the environment variable for this plugin from `GOOGLE_API_KEY` to `GOOGLE_SEARCH_API_KEY` due to a conflict with the Google Generative AI library pulling this variable automatically. If you are using this plugin, please update your `.env` file accordingly.
!!! info "🗃️ RAG API (Chat with Files)"
- **RAG API Update**: The default Docker compose files now include a Python API and Vector Database for RAG (Retrieval-Augmented Generation). Read more about this in the [RAG API page](../features/rag_api.md)


@@ -64,9 +64,11 @@ Some of the endpoints are marked as **Known,** which means they might have speci
baseURL: "https://api.groq.com/openai/v1/"
models:
default: [
"llama3-70b-8192",
"llama3-8b-8192",
"llama2-70b-4096", "llama2-70b-4096",
"mixtral-8x7b-32768", "mixtral-8x7b-32768",
"gemma-7b-it" "gemma-7b-it",
] ]
fetch: false fetch: false
titleConvo: true titleConvo: true
@@ -374,3 +376,31 @@ Some of the endpoints are marked as **Known,** which means they might have speci
forcePrompt: false
modelDisplayLabel: "Ollama"
```
!!! tip "Ollama -> llama3"
To prevent llama3 from generating endlessly (it can fail to stop at its special tokens), add this `addParams` block to the config:
```yaml
- name: "Ollama"
apiKey: "ollama"
baseURL: "http://host.docker.internal:11434/v1/"
models:
default: [
"llama3"
]
fetch: false # fetching list of models is not supported
titleConvo: true
titleModel: "llama3"
summarize: false
summaryModel: "llama3"
forcePrompt: false
modelDisplayLabel: "Ollama"
addParams:
"stop": [
"<|start_header_id|>",
"<|end_header_id|>",
"<|eot_id|>",
"<|reserved_special_token"
]
```
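If a model or proxy still leaks these special tokens into responses despite the `stop` list, the same list can also be applied client-side. A minimal sketch (the helper name and usage are illustrative, not part of LibreChat or Ollama):

```python
# llama3 special tokens from the `addParams.stop` list above.
LLAMA3_STOP = [
    "<|start_header_id|>",
    "<|end_header_id|>",
    "<|eot_id|>",
    "<|reserved_special_token",
]

def trim_at_stop(text: str, stops=LLAMA3_STOP) -> str:
    """Cut `text` at the earliest occurrence of any stop sequence,
    mirroring what the server-side `stop` parameter is asked to do."""
    cut = len(text)
    for stop in stops:
        index = text.find(stop)
        if index != -1:
            cut = min(cut, index)
    return text[:cut]

print(trim_at_stop("Hello there!<|eot_id|>assistant"))  # → "Hello there!"
```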


@@ -50,7 +50,13 @@ endpoints:
apiKey: '${GROQ_API_KEY}'
baseURL: 'https://api.groq.com/openai/v1/'
models:
default: [
"llama3-70b-8192",
"llama3-8b-8192",
"llama2-70b-4096",
"mixtral-8x7b-32768",
"gemma-7b-it",
]
fetch: false
titleConvo: true
titleModel: 'mixtral-8x7b-32768'