🧹📚 docs: refactor and clean up (#1392)

* 📑 update mkdocs
* rename docker override file and add to gitignore
* update .env.example - GOOGLE_MODELS
* update index.md
* doc refactor: split installation and configuration in two sub-folders
* doc update: installation guides
* doc update: configuration guides
* doc: new docker override guide
* doc: new beginner's guide for contributions - Thanks @Berry-13
* doc: update documentation_guidelines.md
* doc: update testing.md
* doc: update deployment guides
* doc: update /dev readme
* doc: update general_info
* doc: add 0 value to doc weight
* doc: add index.md to every doc folders
* doc: add weight to index.md and move openrouter from free_ai_apis.md to ai_setup.md
* doc: update toc so they display properly on the right had side in mkdocs
* doc: update pandoranext.md
* doc: index logging_system.md
* doc: update readme.md
* doc: update litellm.md
* doc: update ./dev/readme.md
* doc:🔖 new presets.md
* doc: minor corrections
* doc update: user_auth_system.md and presets.md, doc feat: add mermaid support to mkdocs
* doc update: add screenshots to presets.md
* doc update: add screenshots to - OpenID with AWS Cognito
* doc update: BingAI cookie instruction
* doc update: discord auth
* doc update: facebook auth
* doc: corrections to user_auth_system.md
* doc update: github auth
* doc update: google auth
* doc update: auth clean up
* doc organization: installation
* doc organization: configuration
* doc organization: features+plugins & update: plugins screenshots
* doc organization: deploymend + general_info & update: tech_stack.md
* doc organization: contributions
* doc: minor fixes
* doc: minor fixes

This commit is contained in:
parent 5c27fa304a
commit 51050cc4d3

66 changed files with 1617 additions and 869 deletions

docs/install/configuration/ai_setup.md (new file, 401 lines)

---
title: 🤖 AI Setup
weight: -8
---

<!-- # Table of Contents

- [Table of Contents](#table-of-contents)
- [AI Setup](#ai-setup)
- [General](#general)
- [Free AI APIs](#free-ai-apis)
- [Setting a Default Endpoint](#setting-a-default-endpoint)
- [Setting a Default Preset](#setting-a-default-preset)
- [OpenAI](#openai)
- [Anthropic](#anthropic)
- [Google](#google)
- [Generative Language API (Gemini)](#generative-language-api-gemini)
- [Vertex AI (PaLM 2 \& Codey)](#vertex-ai-palm-2--codey)
- [1. Once signed up, Enable the Vertex AI API on Google Cloud:](#1-once-signed-up-enable-the-vertex-ai-api-on-google-cloud)
- [2. Create a Service Account with Vertex AI role:](#2-create-a-service-account-with-vertex-ai-role)
- [3. Create a JSON key to Save in your Project Directory:](#3-create-a-json-key-to-save-in-your-project-directory)
- [Azure OpenAI](#azure-openai)
- [Required Variables](#required-variables)
- [Model Deployments](#model-deployments)
- [Setting a Default Model for Azure](#setting-a-default-model-for-azure)
- [Enabling Auto-Generated Titles with Azure](#enabling-auto-generated-titles-with-azure)
- [Using GPT-4 Vision with Azure](#using-gpt-4-vision-with-azure)
- [Optional Variables](#optional-variables)
- [Using Plugins with Azure](#using-plugins-with-azure)
- [OpenRouter](#openrouter)
- [Unofficial APIs](#unofficial-apis)
- [ChatGPTBrowser](#chatgptbrowser)
- [BingAI](#bingai)
- [Conclusion](#conclusion) -->

---

# AI Setup

This doc explains how to set up your AI providers, their APIs, and credentials.

**"Endpoints"** refer to the AI provider, configuration, or API to use, which determines what models and settings are available for the current chat request.

For example, OpenAI, Google, Plugins, Azure OpenAI, and Anthropic are all different "endpoints". Since OpenAI was the first supported endpoint, it's listed first by default.

Using the default environment values from `.env.example` will enable several endpoints, with credentials to be provided on a per-user basis from the web app. Alternatively, you can provide credentials for all users of your instance.

This guide will walk you through setting up each endpoint as needed.

**Reminder: If you use Docker, you should [rebuild the Docker image (here's how)](dotenv.md) each time you update your credentials.**

*Note: Configuring pre-made Endpoint/model/conversation settings as singular options for your users is a planned feature. See the related discussion here: [System-wide custom model settings (lightweight GPTs) #1291](https://github.com/danny-avila/LibreChat/discussions/1291)*

## General

### [Free AI APIs](free_ai_apis.md)

### Setting a Default Endpoint

If you have multiple endpoints set up but want a specific one to be first in the order, you need to set the following environment variable:

```bash
# .env file
# No spaces between values
ENDPOINTS=azureOpenAI,openAI,google
```

Note that LibreChat will use your last selected endpoint when creating a new conversation. So if Azure OpenAI is first in the order, but you used or viewed an OpenAI conversation last, OpenAI will be selected with its default conversation settings when you hit "New Chat."

To override this behavior, create a preset and set it as the default to use on every new chat.

### Setting a Default Preset

A preset refers to a specific combination of Endpoint/Model/Conversation settings that you can save.

The default preset will always be used when creating a new conversation.

Here's a video to demonstrate:

https://github.com/danny-avila/LibreChat/assets/110412045/bbde830f-18d9-4884-88e5-1bd8f7ac585d

---

## OpenAI

To get your OpenAI API key, you need to:

- Go to [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys)
- Create an account or log in with your existing one
- Add a payment method to your account (this is not free, sorry 😬)
- Copy your secret key (sk-...) and save it in `./.env` as `OPENAI_API_KEY` (see the example below)

Notes:
- Selecting a vision model for messages with attachments is not necessary, as it will be switched behind the scenes for you. If you didn't outright select a vision model, it will only be used for the vision request, and you should still see the non-vision model you had selected after the request is successful.
- OpenAI Vision models allow for messages without attachments.
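
For reference, a minimal sketch of the corresponding `.env` entry; the key shown is a placeholder, and `user_provided` lets each user supply their own key from the web app instead:

```bash
# .env file
# Provide the key for all users of your instance:
OPENAI_API_KEY=sk-your-secret-key-here
# ...or let each user provide their own key from the WebUI:
# OPENAI_API_KEY=user_provided
```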

---

## Anthropic

- Create an account at [https://console.anthropic.com/](https://console.anthropic.com/)
- Go to [https://console.anthropic.com/account/keys](https://console.anthropic.com/account/keys) and get your API key
- Add it to `ANTHROPIC_API_KEY=` in the `.env` file (see the example below)
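
A minimal `.env` sketch; `user_provided` (as documented in the Environment Variables guide) lets users supply their own key from the WebUI, and the model list shown is the default one from that guide:

```bash
# .env file
ANTHROPIC_API_KEY=user_provided
ANTHROPIC_MODELS=claude-1,claude-instant-1,claude-2
```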

---

## Google

For the Google Endpoint, you can either use the **Generative Language API** (for Gemini models) or the **Vertex AI API** (for PaLM 2 & Codey models; Gemini support coming soon).

The Generative Language API uses an API key, which you can get from **Google AI Studio**.

For Vertex AI, you need a Service Account JSON key file, with appropriate access configured.

Instructions for both are given below.

### Generative Language API (Gemini)

**60 Gemini requests/minute are currently free until early next year, when it enters general availability.**

⚠️ Google will be using that free input/output to help improve the model, with data de-identified from your Google Account and API key.
⚠️ During this period, your messages "may be accessible to trained reviewers."

To use Gemini models, you'll need an API key. If you don't already have one, create a key in Google AI Studio.

<p><a class="button button-primary" href="https://makersuite.google.com/app/apikey" target="_blank" rel="noopener noreferrer">Get an API key here</a></p>

Once you have your key, provide it in your `.env` file, which allows all users of your instance to use it:

```bash
GOOGLE_KEY=mY_SeCreT_w9347w8_kEY
```

Or, you can make users provide it from the frontend by setting the following:

```bash
GOOGLE_KEY=user_provided
```

Notes:
- PaLM 2 and Codey models cannot be accessed through the Generative Language API, only through Vertex AI.
- Selecting `gemini-pro-vision` for messages with attachments is not necessary, as it will be switched behind the scenes for you.
- Since `gemini-pro-vision` does not accept non-attachment messages, messages without attachments are automatically switched to use `gemini-pro` (otherwise, Google responds with an error).

Setting `GOOGLE_KEY=user_provided` in your `.env` file will configure both the Vertex AI Service Account JSON key file and the Generative Language API key to be provided from the frontend like so:

![]()

### Vertex AI (PaLM 2 & Codey)

To set up Google LLMs (via Google Cloud Vertex AI), first sign up for Google Cloud: https://cloud.google.com/

You can usually get a **$300 starting credit**, which makes this option free for 90 days.

### 1. Once signed up, Enable the Vertex AI API on Google Cloud:

- Go to the [Vertex AI page on the Google Cloud console](https://console.cloud.google.com/vertex-ai)
- Click on "Enable API" if prompted

### 2. Create a Service Account with Vertex AI role:

- **[Click here to create a Service Account](https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts/create?walkthrough_id=iam--create-service-account#step_index=1)**
- **Select or create a project**
- ### Enter a service account ID (required); the name and description are optional
- ![]()
- ### Click on "Create and Continue" to give at least the "Vertex AI User" role
- ![]()
- **Click on "Continue/Done"**

### 3. Create a JSON key to Save in your Project Directory:

- **Go back to [the Service Accounts page](https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts)**
- **Select your service account**
- ### Click on "Keys"
- ![]()
- ### Click on "Add Key" and then "Create new key"
- ![]()
- **Choose JSON as the key type and click on "Create"**
- **Download the key file and rename it 'auth.json'**
- **Save it within the project directory, in `/api/data/` (see the command sketch below)**
- ![]()

**Saving your JSON key file in the project directory allows all users of your LibreChat instance to use it.**

Alternatively, you can make users provide it from the frontend by setting the following:

```bash
# Note: this configures both the Vertex AI Service Account JSON key file
# and the Generative Language API key to be provided from the frontend.
GOOGLE_KEY=user_provided
```

Note: Using Gemini models through Vertex AI is possible but not yet supported.

---

## Azure OpenAI

In order to use Azure OpenAI with this project, specific environment variables must be set in your `.env` file. These variables will be used for constructing the API URLs.

The variables needed are outlined below:

### Required Variables

These variables construct the API URL for Azure OpenAI (a filled-in example follows the list):

* `AZURE_API_KEY`: Your Azure OpenAI API key.
* `AZURE_OPENAI_API_INSTANCE_NAME`: The instance name of your Azure OpenAI API.
* `AZURE_OPENAI_API_DEPLOYMENT_NAME`: The deployment name of your Azure OpenAI API.
* `AZURE_OPENAI_API_VERSION`: The version of your Azure OpenAI API.
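
Here is a minimal `.env` sketch of these entries; all values are placeholders that you should replace with your own key, instance, deployment, and API version:

```bash
# .env file (placeholder values)
AZURE_API_KEY=your-azure-openai-key
AZURE_OPENAI_API_INSTANCE_NAME=your-instance-name
AZURE_OPENAI_API_DEPLOYMENT_NAME=gpt-35-turbo
AZURE_OPENAI_API_VERSION=2023-07-01-preview
```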

For example, with these variables, the URL for chat completion would look something like:

```plaintext
https://{AZURE_OPENAI_API_INSTANCE_NAME}.openai.azure.com/openai/deployments/{AZURE_OPENAI_API_DEPLOYMENT_NAME}/chat/completions?api-version={AZURE_OPENAI_API_VERSION}
```

You should also consider changing the `AZURE_OPENAI_MODELS` variable to the models available in your deployment.

```bash
# .env file
AZURE_OPENAI_MODELS=gpt-4-1106-preview,gpt-4,gpt-3.5-turbo,gpt-3.5-turbo-1106,gpt-4-vision-preview
```

Overriding the construction of the API URL will be possible but is not yet implemented. Follow progress on this feature here: [Issue #1266](https://github.com/danny-avila/LibreChat/issues/1266)

### Model Deployments

*Note: a change is planned to improve the current configuration settings and allow multiple deployments/model configurations to be set up with ease: [#1390](https://github.com/danny-avila/LibreChat/issues/1390)*

As of 2023-12-18, the Azure API allows only one model per deployment.

**It's highly recommended** to name your deployments *after* the model name (e.g., "gpt-3.5-turbo") for easy deployment switching.

When you do so, LibreChat will correctly switch the deployment, while associating the correct max context per model, if you have the following environment variable set:

```bash
AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE
```

For example, when you have set `AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE`, the following deployment configuration provides the most seamless, error-free experience for LibreChat, including Vision support and tracking of the correct max context tokens:

![]()

Alternatively, you can use custom deployment names and set `AZURE_OPENAI_DEFAULT_MODEL` for expected functionality.

- **`AZURE_OPENAI_MODELS`**: List the available models, separated by commas without spaces. The first listed model will be the default. If left blank, internal settings will be used. Note that deployment names can't have periods, which are removed when generating the endpoint.

Example use:

```bash
# .env file
AZURE_OPENAI_MODELS=gpt-3.5-turbo,gpt-4,gpt-5
```

- **`AZURE_USE_MODEL_AS_DEPLOYMENT_NAME`**: Enable using the model name as the deployment name for the API URL.

Example use:

```bash
# .env file
AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE
```

### Setting a Default Model for Azure

This section is relevant when you are **not** naming deployments after model names as shown above.

**Important:** The Azure OpenAI API does not use the `model` field in the payload, but it is a necessary identifier for LibreChat. If your deployment names do not correspond to the model names, and you're having issues with the model not being recognized, you should set this field to explicitly tell LibreChat to treat your Azure OpenAI API requests as if the specified model was selected.

If `AZURE_USE_MODEL_AS_DEPLOYMENT_NAME` is enabled, the model you set with `AZURE_OPENAI_DEFAULT_MODEL` will **not** be recognized and will **not** be used as the deployment name; instead, the model selected by the user will be used as the "deployment" name.

- **`AZURE_OPENAI_DEFAULT_MODEL`**: Override the model setting for Azure, useful if using custom deployment names.

Example use:

```bash
# .env file
# MUST be a real OpenAI model, named exactly how it is recognized by the OpenAI API (not Azure)
AZURE_OPENAI_DEFAULT_MODEL=gpt-3.5-turbo # do include periods in the model name here
```

### Enabling Auto-Generated Titles with Azure

The default titling model is set to `gpt-3.5-turbo`.

If you're using `AZURE_USE_MODEL_AS_DEPLOYMENT_NAME` and have "gpt-35-turbo" set up as a deployment name, this should work out of the box.

In any case, you can adjust the title model as such: `OPENAI_TITLE_MODEL=your-title-model`
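
For example, a sketch assuming your titling deployment/model is named `gpt-35-turbo`; substitute whatever deployment or model you actually use:

```bash
# .env file
OPENAI_TITLE_MODEL=gpt-35-turbo
```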

### Using GPT-4 Vision with Azure

Currently, the best way to set up Vision is to use your deployment names as the model names, as [shown here](#model-deployments).

This will work seamlessly, as it does with the [OpenAI endpoint](#openai) (no need to select the vision model; it will be switched behind the scenes).

Alternatively, you can set the [required variables](#required-variables) to explicitly use your vision deployment, but this may limit you to exclusively using your vision deployment for all Azure chat settings.

As of December 18th, 2023, Vision models seem to have degraded performance with Azure OpenAI when compared to [OpenAI](#openai).

![]()

*Note: a change is planned to improve the current configuration settings and allow multiple deployments/model configurations to be set up with ease: [#1390](https://github.com/danny-avila/LibreChat/issues/1390)*

### Optional Variables

*These variables are currently not used by LibreChat.*

* `AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME`: The deployment name for completion. This is currently not in use but may be used in the future.
* `AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME`: The deployment name for embedding. This is currently not in use but may be used in the future.

These two variables are optional but may be used in future updates of this project.

### Using Plugins with Azure

Note: To use the Plugins endpoint with Azure OpenAI, you need a deployment supporting [function calling](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/function-calling-is-now-available-in-azure-openai-service/ba-p/3879241). Otherwise, you need to turn "Functions" off in the Agent settings. When you are not using "functions" mode, it's recommended to have "skip completion" off as well, which is a review step of what the agent generated.

To use Azure with the Plugins endpoint, make sure the following environment variables are set (see the example below):

* `PLUGINS_USE_AZURE`: If set to "true" or any truthy value, this will enable the program to use Azure with the Plugins endpoint.
* `AZURE_API_KEY`: Your Azure API key must be set with an environment variable.
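
A minimal `.env` sketch, assuming the Azure variables from the [Required Variables](#required-variables) section are already set; the key value is a placeholder:

```bash
# .env file
PLUGINS_USE_AZURE="true"
AZURE_API_KEY=your-azure-openai-key
```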

---

## [OpenRouter](https://openrouter.ai/)

[OpenRouter](https://openrouter.ai/) is a legitimate proxy service to a multitude of LLMs, both closed and open source, including:

- OpenAI models (great if you are barred from their API for whatever reason)
- Anthropic Claude models (same as above)
- Meta's Llama models
- pygmalionai/mythalion-13b
- and many more open-source models. Newer integrations are usually discounted, too!

> See their available models and pricing here: [Supported Models](https://openrouter.ai/docs#models)

OpenRouter is so great, I decided to integrate it into the project as a standalone feature.

**Setup** (see the example `.env` entries below):
- Sign up to [OpenRouter](https://openrouter.ai/) and create a key. You should name it and set a limit as well.
- Set the environment variable `OPENROUTER_API_KEY` in your `.env` file to the key you just created.
- Set something in `OPENAI_API_KEY`; it can be anything, but **do not** leave it blank or set it to `user_provided`.
- Restart your LibreChat server and use the OpenAI or Plugins endpoints.
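
As a sketch, the relevant `.env` entries might look like this; both values are placeholders, and `OPENAI_API_KEY` just needs to be non-blank and not `user_provided`:

```bash
# .env file
OPENROUTER_API_KEY=your-openrouter-key
OPENAI_API_KEY=placeholder-not-used-with-openrouter
```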

**Notes:**
- [TODO] **In the future, you will be able to set up OpenRouter from the frontend as well.**
- This will override the official OpenAI API or your reverse proxy settings for both Plugins and OpenAI.
- On initial setup, you may need to refresh your page twice to see all their supported models populate automatically.
- Plugins: the Functions Agent works with OpenRouter when using OpenAI models.
- Plugins: Turn functions off to try plugins with non-OpenAI models (ChatGPT plugins will not work and others may not work as expected).
- Plugins: Make sure `PLUGINS_USE_AZURE` is not set in your `.env` file when you want to use OpenRouter and you have Azure configured.

---

## Unofficial APIs

**Important:** Stability for unofficial APIs is not guaranteed. Access methods to these APIs are hacky, prone to errors and patching, and are marked lowest in priority in LibreChat's development.

### ChatGPTBrowser

**Backend Access to https://chat.openai.com/api**

This is not to be confused with [OpenAI's Official API](#openai)!

> Note that this is disabled by default and requires additional configuration to work.
> Also, using this may have your data exposed to 3rd parties if using a proxy, and OpenAI may flag your account.
> See: [ChatGPT Reverse Proxy](../../features/pandoranext.md)

To get your access token for ChatGPT Browser Access, you need to:

- Go to [https://chat.openai.com](https://chat.openai.com)
- Create an account or log in with your existing one
- Visit [https://chat.openai.com/api/auth/session](https://chat.openai.com/api/auth/session)
- Copy the value of the "accessToken" field and save it in `./.env` as CHATGPT_ACCESS_TOKEN

Warning: There may be a chance of your account being banned if you deploy the app to multiple users with this method. Use at your own risk. 😱

---

### BingAI

I recommend using Microsoft Edge for this:

- Navigate to **[Bing Chat](https://www.bing.com/chat)**
- **Login** if you haven't already
- Initiate a conversation with Bing
- Open `Dev Tools`, usually with `F12` or `Ctrl + Shift + C`
- Navigate to the `Network` tab
- Look for `lsp.asx` (if it's not there, look through the other entries for one with a **very long** cookie)
- Copy the whole cookie value (yes, it's very long 😉)
- Use this **"full cookie string"** for your "BingAI Token"

<p align="left">
<img src="https://github.com/danny-avila/LibreChat/assets/32828263/d4dfd370-eddc-4694-ab16-076f913ff430" width="50%">
</p>

---

## Conclusion

<h3>That's it! You're all set. 🎉</h3>

---

>⚠️ Note: If you're having trouble, before creating a new issue, please search for similar ones on our [#issues thread on our Discord](https://discord.gg/weqZFtD9C4) or our [troubleshooting discussion](https://github.com/danny-avila/LibreChat/discussions/categories/troubleshooting) on our Discussions page. If you don't find a relevant issue, feel free to create a new one and provide as much detail as possible.

docs/install/configuration/default_language.md (new file, 41 lines)

---
title: 🌍 Default Language
weight: -3
---

# Default Language 🌍

## How to change the default language

- Open the file `client/src/store/language.ts`
- Modify the "default" value in the `lang` variable with your locale identifier:

Example:
from **English** as default

```js
import { atom } from 'recoil';

const lang = atom({
  key: 'lang',
  default: localStorage.getItem('lang') || 'en-US',
});

export default { lang };
```

to **Italian** as default

```js
import { atom } from 'recoil';

const lang = atom({
  key: 'lang',
  default: localStorage.getItem('lang') || 'it-IT',
});

export default { lang };
```
---

> **❗If you wish to contribute your own translation to LibreChat, please refer to this document for instructions: [Contribute a Translation](../../contributions/translation_contribution.md)**

docs/install/configuration/docker_override.md (new file, 76 lines)

---
title: 🐋 Docker Compose Override
weight: -9
---

# How to Use the Docker Compose Override File

In Docker Compose, an override file is a powerful feature that allows you to modify the default configuration provided by the main `docker-compose.yml` without the need to directly edit or duplicate the whole file. The primary use of the override file is for local development customizations, and Docker Compose merges the configurations of the `docker-compose.yml` and the `docker-compose.override.yml` files when you run `docker-compose up`.

Here's a quick guide on how to use the `docker-compose.override.yml`:

> Note: Please consult the `docker-compose.override.yml.example` for more examples

See the official Docker documentation for more info:

- **[docker docs - understanding-multiple-compose-files](https://docs.docker.com/compose/multiple-compose-files/extends/#understanding-multiple-compose-files)**
- **[docker docs - merge-compose-files](https://docs.docker.com/compose/multiple-compose-files/merge/#merge-compose-files)**
- **[docker docs - specifying-multiple-compose-files](https://docs.docker.com/compose/reference/#specifying-multiple-compose-files)**

## Step 1: Create a `docker-compose.override.yml` file

If you don't already have a `docker-compose.override.yml` file, you can create one by copying the example override content:

```bash
cp docker-compose.override.yml.example docker-compose.override.yml
```

This file will be picked up by Docker Compose automatically when you run docker-compose commands.

## Step 2: Edit the override file

Open your `docker-compose.override.yml` file with VS Code or any text editor.

Make your desired changes by uncommenting the relevant sections and customizing them as needed.

For example, if you want to use a prebuilt image for the `api` service and expose MongoDB's port, your `docker-compose.override.yml` might look like this:

```yaml
version: '3.4'

services:
  api:
    image: ghcr.io/danny-avila/librechat:latest

  mongodb:
    ports:
      - 27018:27017
```

> Note: Be cautious with exposing ports like MongoDB to the public, as it can make your database vulnerable to attacks.

## Step 3: Apply the changes

To apply your configuration changes, simply run Docker Compose as usual. Docker Compose automatically takes into account both the `docker-compose.yml` and the `docker-compose.override.yml` files:

```bash
docker-compose up -d
```

If you want to invoke a build with the changes before starting containers:

```bash
docker-compose build
docker-compose up -d
```

## Step 4: Verify the changes

After starting your services with the modified configuration, you can verify that the changes have been applied using the `docker ps` command to list the running containers and their properties, such as ports (see the example below).
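
For instance, one way to check that the MongoDB port mapping from the override took effect; the `--format` template just trims the output to container names and ports:

```bash
docker ps --format "table {{.Names}}\t{{.Ports}}"
```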

## Important Considerations

- **Order of Precedence**: Values defined in the override file take precedence over those specified in the original `docker-compose.yml` file.
- **Security**: When customizing ports and publicly exposing services, always be conscious of the security implications. Avoid using defaults for production or sensitive environments.

By following these steps and considerations, you can easily and safely modify your Docker Compose configuration without altering the original `docker-compose.yml` file, making it simpler to manage and maintain different environments or local customizations.

docs/install/configuration/dotenv.md (new file, 759 lines)

---
title: ⚙️ Environment Variables
weight: -10
---

# .env File Configuration

Welcome to the comprehensive guide for configuring your application's environment with the `.env` file. This document is your one-stop resource for understanding and customizing the environment variables that will shape your application's behavior in different contexts.

While the default settings provide a solid foundation for a standard `docker` installation, delving into this guide will unveil the full potential of LibreChat. This guide empowers you to tailor LibreChat to your precise needs. Discover how to adjust language model availability, integrate social logins, manage the automatic moderation system, and much more. It's all about giving you the control to fine-tune LibreChat for an optimal user experience.

**If you use Docker, you should rebuild the Docker image each time you update your environment variables.**

Rebuild command:
```bash
npm run update:docker

# OR, if you don't have npm
docker-compose build --no-cache
```

Alternatively, you can create a new file named `docker-compose.override.yml` in the same directory as your main `docker-compose.yml` file for LibreChat, where you can set your .env variables as needed under `environment`, or modify the default configuration provided by the main `docker-compose.yml`, without the need to directly edit or duplicate the whole file.

For more info see:

- Our quick guide:
  - **[Docker Override](../configuration/docker_override.md)**

- The official Docker documentation:
  - **[docker docs - understanding-multiple-compose-files](https://docs.docker.com/compose/multiple-compose-files/extends/#understanding-multiple-compose-files)**
  - **[docker docs - merge-compose-files](https://docs.docker.com/compose/multiple-compose-files/merge/#merge-compose-files)**
  - **[docker docs - specifying-multiple-compose-files](https://docs.docker.com/compose/reference/#specifying-multiple-compose-files)**

- You can also view an example of an override file for LibreChat in your LibreChat folder and on GitHub:
  - **[docker-compose.override.example](https://github.com/danny-avila/LibreChat/blob/main/docker-compose.override.yaml.example)**

---

## Server Configuration

### Customization
- Here you can change the app title and footer.
- Uncomment to add a custom footer.
- Uncomment and make it empty ("") to remove the footer.

```bash
APP_TITLE=LibreChat
CUSTOM_FOOTER="My custom footer"
```

### Port

- The server will listen on localhost:3080 by default. You can change the target IP as you want. If you want to make this server available externally, for example to share the server with others or expose it from a Docker container, set the host to 0.0.0.0 or your external IP interface.

> Tip: Setting the host to 0.0.0.0 means listening on all interfaces. It's not a real IP.

- Use localhost:port rather than 0.0.0.0:port to access the server.

```bash
HOST=localhost
PORT=3080
```

### MongoDB Database

- Change this to your MongoDB URI if different. You should also add `LibreChat` or your own `APP_TITLE` as the database name in the URI. For example:
  - if you are using docker, the URI format is `mongodb://<ip>:<port>/<database>`. Your `MONGO_URI` should look like this: `mongodb://127.0.0.1:27018/LibreChat`
  - if you are using an online db, the URI format is `mongodb+srv://<username>:<password>@<host>/<database>?<options>`. Your `MONGO_URI` should look like this: `mongodb+srv://username:password@host.mongodb.net/LibreChat?retryWrites=true` (`retryWrites=true` is the only option you need when using the online db)
- Instructions on how to create an online MongoDB database (useful for use without docker):
  - [Online MongoDB](./mongodb.md)
- Securely access your docker MongoDB database:
  - [Manage your database](../../features/manage_your_database.md)

```bash
MONGO_URI=mongodb://127.0.0.1:27018/LibreChat
```

### Application Domains

- To use LibreChat locally, set `DOMAIN_CLIENT` and `DOMAIN_SERVER` to `http://localhost:3080` (3080 being the port previously configured)
- When deploying LibreChat to a custom domain, set `DOMAIN_CLIENT` and `DOMAIN_SERVER` to your deployed URL, e.g. `https://librechat.example.com`

```bash
DOMAIN_CLIENT=http://localhost:3080
DOMAIN_SERVER=http://localhost:3080
```

### Prevent Public Search Engines Indexing

By default, your website will not be indexed by public search engines (e.g. Google, Bing, …). This means that people will not be able to find your website through these search engines. If you want to make your website more visible and searchable, you can change the following setting to `false`:

```bash
NO_INDEX=true
```

> ❗**Note:** This method is not guaranteed to work for all search engines, and some search engines may still index your website or web page for other purposes, such as caching or archiving. Therefore, you should not rely solely on this method to protect sensitive or confidential information on your website or web page.

### Logging

LibreChat has built-in central logging, see [Logging System](../../features/logging_system.md) for more info.

- Debug logging is enabled by default and crucial for development.
- To report issues, reproduce the error and submit logs from `./api/logs/debug-%DATE%.log` at [LibreChat GitHub Issues](https://github.com/danny-avila/LibreChat/issues).
- Error logs are stored in the same location.
- Keep debug logs active by default, or disable them by setting `DEBUG_LOGGING=false` as an environment variable.
- For more information about this feature, read our docs: https://docs.librechat.ai/features/logging_system.html

```bash
DEBUG_LOGGING=true
```

- Enable verbose server output in the console with `DEBUG_CONSOLE=TRUE`, though it's not recommended due to high verbosity.

```bash
DEBUG_CONSOLE=false
```

This is not recommended, however, as the outputs can be quite verbose, and so it's disabled by default.

### Permission
> UID and GID are numbers assigned by Linux to each user and group on the system. If you have permission problems, set here the UID and GID of the user running the docker compose command. The applications in the container will run with these uid/gid.

```bash
UID=1000
GID=1000
```

## Endpoints

In this section you can configure the endpoints and model selection, their API keys, and the proxy and reverse proxy settings for the endpoints that support it.

### General Config
- Uncomment `ENDPOINTS` to customize the available endpoints in LibreChat
- `PROXY` is to be used by all endpoints (leave blank by default)

```bash
ENDPOINTS=openAI,azureOpenAI,bingAI,chatGPTBrowser,google,gptPlugins,anthropic
PROXY=
```

### Anthropic
see: [Anthropic Endpoint](./ai_setup.md#anthropic)
- You can request an access key from https://console.anthropic.com/
- Leave `ANTHROPIC_API_KEY=` blank to disable this endpoint
- Set `ANTHROPIC_API_KEY=` to "user_provided" to allow users to provide their own API key from the WebUI
- If you have access to a reverse proxy for `Anthropic`, you can set it with `ANTHROPIC_REVERSE_PROXY=`
  - leave blank or comment it out to use the default base URL

```bash
ANTHROPIC_API_KEY=user_provided
ANTHROPIC_MODELS=claude-1,claude-instant-1,claude-2
ANTHROPIC_REVERSE_PROXY=
```

### Azure
**Important:** See [the complete Azure OpenAI setup guide](./ai_setup.md#azure-openai) for thorough instructions on enabling Azure OpenAI

- To use Azure with this project, set the following variables. These will be used to build the API URL.

```bash
AZURE_API_KEY=
AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_VERSION=
AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=
```
> Note: As of 2023-11-10, the Azure API allows only one model per deployment.

- Chat completion: `https://{AZURE_OPENAI_API_INSTANCE_NAME}.openai.azure.com/openai/deployments/{AZURE_OPENAI_API_DEPLOYMENT_NAME}/chat/completions?api-version={AZURE_OPENAI_API_VERSION}`
- You should also consider changing the `OPENAI_MODELS` variable to the models available in your instance/deployment.

> Note: `AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME` and `AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME` are optional but might be used in the future

- It's recommended to name your deployments after the model name, e.g. `gpt-35-turbo`, which allows for fast deployment switching with `AZURE_USE_MODEL_AS_DEPLOYMENT_NAME` **enabled**. However, you can use non-model deployment names and set `AZURE_OPENAI_DEFAULT_MODEL` to ensure it works as expected.

- Identify the available models, separated by commas *without spaces*. The first will be the default. Leave it blank or as is to use internal settings.

> Note: as deployment names can't have periods, they will be removed when the endpoint is generated.

```bash
AZURE_OPENAI_MODELS=gpt-3.5-turbo,gpt-4
```

- This enables the use of the model name as the deployment name, e.g. "gpt-3.5-turbo" as the deployment name **(Advanced)**

```bash
AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE
```

- To use Azure with the Plugins endpoint, you need the variables above, and uncomment the following variable:

> Note: This may not work as expected and Azure OpenAI may not support OpenAI Functions yet
> Omit/leave it commented to use the default OpenAI API

```bash
PLUGINS_USE_AZURE="true"
```

### BingAI
Bing, also used for Sydney, jailbreak, and Bing Image Creator, see: [Bing Access Token](./ai_setup.md#bingai) and [Bing Jailbreak](../../features/bing_jailbreak.md)

- Follow these instructions to get your Bing access token (it's best to use the full cookie string for that purpose): [Bing Access Token](https://github.com/danny-avila/LibreChat/issues/370#issuecomment-1560382302)
- Leave `BINGAI_TOKEN=` blank to disable this endpoint
- Set `BINGAI_TOKEN=` to "user_provided" to allow users to provide their own API key from the WebUI

> Note: It is recommended to leave it as "user_provided" and provide the token from the WebUI.

- `BINGAI_HOST` can be necessary for some people in different countries, e.g. China (https://cn.bing.com). Leave it blank or commented out to use the default server.

```bash
BINGAI_TOKEN=user_provided
BINGAI_HOST=
```

### ChatGPT
see: [ChatGPT Free Access token](./ai_setup.md#chatgptbrowser)

> **Warning**: To use this endpoint you'll have to set up your own reverse proxy. Here is the installation guide to deploy your own (based on [PandoraNext](https://github.com/pandora-next/deploy)): **[PandoraNext Deployment Guide](../../features/pandoranext.md)**

```bash
CHATGPT_REVERSE_PROXY=<YOUR-REVERSE-PROXY>
```

> ~~Note: If you're a GPT plus user you can add gpt-4, gpt-4-plugins, gpt-4-code-interpreter, and gpt-4-browsing to the list above and use the models for these features; however, the view/display portion of these features are not supported, but you can use the underlying models, which have higher token context~~
> **Note:** The current method only works with `text-davinci-002-render-sha`

- Leave `CHATGPT_TOKEN=` blank to disable this endpoint
- Set `CHATGPT_TOKEN=` to "user_provided" to allow users to provide their own API key from the WebUI
- It is not recommended to provide your token in the `.env` file since it expires often and sharing it could get you banned.

```bash
CHATGPT_TOKEN=
CHATGPT_MODELS=text-davinci-002-render-sha
```

### Google
Follow these instructions to set up the [Google Endpoint](./ai_setup.md#google)

```bash
GOOGLE_KEY=user_provided
GOOGLE_REVERSE_PROXY=
```

- Customize the available models, separated by commas, **without spaces**.
  - The first will be the default.
  - Leave it blank or commented out to use internal settings (default: all listed below).

```bash
# all available models as of 12/16/23
GOOGLE_MODELS=gemini-pro,gemini-pro-vision,chat-bison,chat-bison-32k,codechat-bison,codechat-bison-32k,text-bison,text-bison-32k,text-unicorn,code-gecko,code-bison,code-bison-32k
```

### OpenAI

- To get your OpenAI API key, you need to:
  - Go to https://platform.openai.com/account/api-keys
  - Create an account or log in with your existing one
  - Add a payment method to your account (this is not free, sorry 😬)
  - Copy your secret key (sk-...) to `OPENAI_API_KEY`

- Leave `OPENAI_API_KEY=` blank to disable this endpoint
- Set `OPENAI_API_KEY=` to "user_provided" to allow users to provide their own API key from the WebUI

```bash
OPENAI_API_KEY=user_provided
```

- Set to true to enable debug mode for the OpenAI endpoint

```bash
DEBUG_OPENAI=false
```

- Customize the available models, separated by commas, **without spaces**.
  - The first will be the default.
  - Leave it blank or commented out to use internal settings.

```bash
OPENAI_MODELS=gpt-3.5-turbo-1106,gpt-4-1106-preview,gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,text-davinci-003,gpt-4,gpt-4-0314,gpt-4-0613
```

- Titling is enabled by default when initiating a conversation.
  - Set to false to disable this feature.

```bash
TITLE_CONVO=true
```

- The default model used for titling is gpt-3.5-turbo. You can change it by uncommenting the following and setting the desired model. **(Optional)**

> **Note:** Must be compatible with the OpenAI Endpoint.

```bash
OPENAI_TITLE_MODEL=gpt-3.5-turbo
```

- Enable message summarization by uncommenting the following **(Optional/Experimental)**

> **Note:** this may affect response time when a summary is being generated.

```bash
OPENAI_SUMMARIZE=true
```

> **Not yet implemented**: this will be a conversation option enabled by default to save users on tokens. We are using the ConversationSummaryBufferMemory method to summarize messages. To learn more about this, see this article: [https://www.pinecone.io/learn/series/langchain/langchain-conversational-memory/](https://www.pinecone.io/learn/series/langchain/langchain-conversational-memory/)

- Reverse proxy settings for OpenAI:
  - see: [LiteLLM](./litellm.md)
  - see also: [Free AI APIs](./free_ai_apis.md#nagaai)

```bash
OPENAI_REVERSE_PROXY=
```

- Sometimes when using local LLM APIs, you may need to force the API to be called with a `prompt` payload instead of a `messages` payload, to mimic the `/v1/completions` request instead of `/v1/chat/completions`. This may be the case for LocalAI with some models. To do so, uncomment the following **(Advanced)**

```bash
OPENAI_FORCE_PROMPT=true
```

### OpenRouter
See [OpenRouter](./free_ai_apis.md#openrouter-preferred) for more info.

- OpenRouter is a legitimate proxy service to a multitude of LLMs, both closed and open source, including: OpenAI models, Anthropic models, Meta's Llama models, pygmalionai/mythalion-13b and many more open-source models. Newer integrations are usually discounted, too!

> Note: this overrides the OpenAI and Plugins endpoints.

```bash
OPENROUTER_API_KEY=
```

### Plugins
Here is some useful documentation about plugins:

- [Introduction](../../features/plugins/introduction.md)
- [Make Your Own](../../features/plugins/make_your_own.md)
- [Using official ChatGPT Plugins](../../features/plugins/chatgpt_plugins_openapi.md)

#### General Configuration:
- Identify the available models, separated by commas **without spaces**. The first model in the list will be set as default. Leave it blank or commented out to use internal settings.

```bash
PLUGIN_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,gpt-4,gpt-4-0314,gpt-4-0613
```

- Set to false or comment out to disable debug mode for plugins

```bash
DEBUG_PLUGINS=true
```

- For securely storing credentials, you need a fixed key and IV. You can set them here for prod and dev environments.
- You need a 32-byte key (64 characters in hex) and a 16-byte IV (32 characters in hex). You can use this replit to generate some quickly: [Key Generator](https://replit.com/@daavila/crypto#index.js)

> Warning: If you don't set them, the app will crash on startup.

```bash
CREDS_KEY=f34be427ebb29de8d88c107a71546019685ed8b241d8f2ed00c3df97ad2566f0
CREDS_IV=e2341419ec3dd3d19b13a1a87fafcbfb
```

#### Azure AI Search
This plugin supports searching Azure AI Search for answers to your questions. See: [Azure AI Search](../../features/plugins/azure_ai_search.md)

```bash
AZURE_AI_SEARCH_SERVICE_ENDPOINT=
AZURE_AI_SEARCH_INDEX_NAME=
AZURE_AI_SEARCH_API_KEY=

AZURE_AI_SEARCH_API_VERSION=
AZURE_AI_SEARCH_SEARCH_OPTION_QUERY_TYPE=
AZURE_AI_SEARCH_SEARCH_OPTION_TOP=
AZURE_AI_SEARCH_SEARCH_OPTION_SELECT=
```

#### DALL-E 3:
- OpenAI API key for DALL-E / DALL-E-3. Leave commented out to have the user provide their own key when installing the plugin. If you want to provide your own key for all users, you can uncomment this line and add your OpenAI API key here.

```bash
# DALLE_API_KEY=
```

- For customization of the DALL-E-3 system prompt, uncomment the following and provide your own prompt. **(Advanced)**
  - See the official prompt for reference: [DALL-E System Prompt](https://github.com/spdustin/ChatGPT-AutoExpert/blob/main/_system-prompts/dall-e.md)

```bash
DALLE3_SYSTEM_PROMPT="Your System Prompt here"
```

- DALL-E Proxy settings. This is separate from its OpenAI counterpart for customization purposes **(Advanced)**

> Reverse proxy settings; changes the baseURL for the DALL-E-3 API calls.
> The URL must match the "url/v1" pattern; the "openai" suffix is also allowed.
> ```
> Examples:
> - https://open.ai/v1
> - https://open.ai/v1/ACCOUNT/GATEWAY/openai
> - https://open.ai/v1/hi/openai
> ```

```bash
DALLE_REVERSE_PROXY=
```

> Note: if you have PROXY set, it will be used for DALL-E calls also, as it is universal for the app

#### Google Search
See detailed instructions here: [Google Search](../../features/plugins/google_search.md)

```bash
GOOGLE_API_KEY=
GOOGLE_CSE_ID=
```

#### SerpAPI
SerpApi is a real-time API to access Google search results (not as performant)

```bash
SERPAPI_API_KEY=
```

#### Stable Diffusion (Automatic1111)
See detailed instructions here: [Stable Diffusion](../../features/plugins/stable_diffusion.md)

- Use "http://127.0.0.1:7860" with a local install and "http://host.docker.internal:7860" for docker

```bash
SD_WEBUI_URL=http://host.docker.internal:7860
```

#### WolframAlpha
See detailed instructions here: [Wolfram Alpha](../../features/plugins/wolfram.md)

```bash
WOLFRAM_APP_ID=
```

#### Zapier
- You need a Zapier account. Get your API key from here: [Zapier](https://nla.zapier.com/credentials/)
- Create allowed actions - follow step 3 in this getting started guide from Zapier

> Note: Zapier is known to be finicky with certain actions. Writing email drafts is probably the best use of it.

```bash
ZAPIER_NLA_API_KEY=
```

## Search (Meilisearch)

Enables search in messages and conversations:

```bash
SEARCH=true
```

> Note: If you're not using docker, it requires the installation of the free self-hosted Meilisearch or a paid remote plan

To disable anonymized telemetry analytics for MeiliSearch for absolute privacy, set to true:

```bash
MEILI_NO_ANALYTICS=true
```

For the API server to connect to the search server. Replace '0.0.0.0' with 'meilisearch' if serving MeiliSearch with docker-compose.

```bash
MEILI_HOST=http://0.0.0.0:7700
```

MeiliSearch HTTP Address, mainly for docker-compose to expose the search server. Replace '0.0.0.0' with 'meilisearch' if serving MeiliSearch with docker-compose.

```bash
MEILI_HTTP_ADDR=0.0.0.0:7700
```

This master key must be at least 16 bytes, composed of valid UTF-8 characters. MeiliSearch will throw an error and refuse to launch if no master key is provided or if it is under 16 bytes. MeiliSearch will suggest a secure autogenerated master key. This is a ready-made secure key for docker-compose; you can replace it with your own.

```bash
MEILI_MASTER_KEY=DrhYf7zENyR6AlUCKmnz0eYASOQdl6zxH7s7MKFSfFCt
```

## User System

This section contains the configuration for:

- [Automated Moderation](#moderation)
- [Balance/Token Usage](#balance)
- [Registration and Social Logins](#registration-and-login)
- [Email Password Reset](#email-password-reset)

### Moderation
The Automated Moderation System uses a scoring mechanism to track user violations. As users commit actions like excessive logins, registrations, or messaging, they accumulate violation scores. Upon reaching a set threshold, the user and their IP are temporarily banned. This system ensures platform security by monitoring and penalizing rapid or suspicious activities.

see: [Automated Moderation](../../features/mod_system.md)

#### Basic Moderation Settings

- `OPENAI_MODERATION`: Set to true or false to enable or disable OpenAI moderation on the **OpenAI** and **Plugins** endpoints
- `OPENAI_MODERATION_API_KEY`: Your OpenAI API key
- `OPENAI_MODERATION_REVERSE_PROXY`: Note: commented out by default, as this does not work with all reverse proxies

```bash
OPENAI_MODERATION=false
OPENAI_MODERATION_API_KEY=
OPENAI_MODERATION_REVERSE_PROXY=
```

- `BAN_VIOLATIONS`: Whether or not to enable banning users for violations (they will still be logged)
- `BAN_DURATION`: How long the user and associated IP are banned for (in milliseconds)
- `BAN_INTERVAL`: The user will be banned every time their score reaches/crosses over the interval threshold

```bash
BAN_VIOLATIONS=true
BAN_DURATION=1000 * 60 * 60 * 2
BAN_INTERVAL=20
```

#### Score for each violation

```bash
LOGIN_VIOLATION_SCORE=1
REGISTRATION_VIOLATION_SCORE=1
CONCURRENT_VIOLATION_SCORE=1
MESSAGE_VIOLATION_SCORE=1
NON_BROWSER_VIOLATION_SCORE=20
```

#### Login and registration rate limiting
- `LOGIN_MAX`: The max amount of logins allowed per IP per `LOGIN_WINDOW`
- `LOGIN_WINDOW`: In minutes, determines the window of time for `LOGIN_MAX` logins
- `REGISTER_MAX`: The max amount of registrations allowed per IP per `REGISTER_WINDOW`
- `REGISTER_WINDOW`: In minutes, determines the window of time for `REGISTER_MAX` registrations

```bash
LOGIN_MAX=7
LOGIN_WINDOW=5
REGISTER_MAX=5
REGISTER_WINDOW=60
```

#### Message rate limiting (per user & IP)

- `LIMIT_CONCURRENT_MESSAGES`: Whether to limit the amount of messages a user can send per request
- `CONCURRENT_MESSAGE_MAX`: The max amount of messages a user can send per request

```bash
LIMIT_CONCURRENT_MESSAGES=true
CONCURRENT_MESSAGE_MAX=2
```

#### Limiters

> Note: You can utilize both limiters, but the default is to limit by IP only.

- **IP Limiter:**
  - `LIMIT_MESSAGE_IP`: Whether to limit the amount of messages an IP can send per `MESSAGE_IP_WINDOW`
  - `MESSAGE_IP_MAX`: The max amount of messages an IP can send per `MESSAGE_IP_WINDOW`
  - `MESSAGE_IP_WINDOW`: In minutes, determines the window of time for `MESSAGE_IP_MAX` messages

```bash
LIMIT_MESSAGE_IP=true
MESSAGE_IP_MAX=40
MESSAGE_IP_WINDOW=1
```

- **User Limiter:**
  - `LIMIT_MESSAGE_USER`: Whether to limit the amount of messages a user can send per `MESSAGE_USER_WINDOW`
  - `MESSAGE_USER_MAX`: The max amount of messages a user can send per `MESSAGE_USER_WINDOW`
  - `MESSAGE_USER_WINDOW`: In minutes, determines the window of time for `MESSAGE_USER_MAX` messages

```bash
LIMIT_MESSAGE_USER=false
MESSAGE_USER_MAX=40
MESSAGE_USER_WINDOW=1
```

### Balance
The following enables user balances for the OpenAI/Plugins endpoints, which you can add manually, or you will need to build out a balance-accruing system for users.

see: [Token Usage](../../features/token_usage.md)

- To manually add balances, run the following command: `npm run add-balance`
- You can also specify the email and token credit amount to add, e.g.: `npm run add-balance example@example.com 1000` (see the example below)
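
Both forms, for reference:

```bash
# Manually add balances
npm run add-balance

# Specify the email and token credit amount to add
npm run add-balance example@example.com 1000
```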
|
||||
|
||||
> **Note:** 1000 credits = $0.001 (1 mill USD)
|
||||
|
||||
- Set to `true` to enable token credit balances for the OpenAI/Plugins endpoints
|
||||
|
||||
```bash
|
||||
CHECK_BALANCE=false
|
||||
```
|
||||
|
||||

### Registration and Login

see: [User/Auth System](../configuration/user_auth_system.md)

![image](https://github.com/danny-avila/LibreChat/assets/81851188/52a37d1d-7392-4a9a-a79f-90ed2da7f841)

- General Settings:
    - `ALLOW_EMAIL_LOGIN`: Email login. Set to `true` or `false` to enable or disable ONLY email login.
    - `ALLOW_REGISTRATION`: Email registration of new users. Set to `true` or `false` to enable or disable Email registration.
    - `ALLOW_SOCIAL_LOGIN`: Allow users to connect to LibreChat with various social networks, see below. Set to `true` or `false` to enable or disable.
    - `ALLOW_SOCIAL_REGISTRATION`: Enable or disable registration of new users using various social networks. Set to `true` or `false` to enable or disable.

> **Quick Tip:** Even with registration disabled, you can add users directly to the database using `npm run create-user`.
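
If LibreChat is running in Docker, the script has to be run inside the container (a minimal sketch; the container name `LibreChat` matches the setup described in the Authentication guide and may differ in your deployment):

```bash
# Open a shell inside the running container, then run the interactive script
sudo docker exec -ti LibreChat sh
npm run create-user
```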

```bash
ALLOW_EMAIL_LOGIN=true
ALLOW_REGISTRATION=true
ALLOW_SOCIAL_LOGIN=false
ALLOW_SOCIAL_REGISTRATION=false
```

- Default values: session expiry: 15 minutes, refresh token expiry: 7 days
- For more information: [Refresh Token](https://github.com/danny-avila/LibreChat/pull/927)

```bash
SESSION_EXPIRY=1000 * 60 * 15
REFRESH_TOKEN_EXPIRY=(1000 * 60 * 60 * 24) * 7
```

- You should use new secure values. The examples given are 32-byte keys (64 characters in hex).
- Use this Replit to generate some quickly: [JWT Keys](https://replit.com/@daavila/crypto#index.js)

```bash
JWT_SECRET=16f8c0ef4a5d391b26034086c628469d3f9f497f08163ab9b40137092f2909ef
JWT_REFRESH_SECRET=eaa5191f2914e30b9387fd84e254e4ba6fc51b4654968a9b0803b456a54b8418
```
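
If you'd rather generate the keys locally, either of these one-liners produces a 32-byte hex key (assuming `openssl` or Node.js is installed):

```bash
# 32 random bytes, printed as 64 hex characters
openssl rand -hex 32
# or, with Node.js:
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
```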

### Social Logins

#### [Discord](../configuration/user_auth_system.md#discord-authentication)

for more information: [Discord](../configuration/user_auth_system.md#discord-authentication)

```bash
# Discord
DISCORD_CLIENT_ID=your_client_id
DISCORD_CLIENT_SECRET=your_client_secret
DISCORD_CALLBACK_URL=/oauth/discord/callback
```

#### [Facebook](../configuration/user_auth_system.md#facebook-authentication)

for more information: [Facebook](../configuration/user_auth_system.md#facebook-authentication)

```bash
# Facebook
FACEBOOK_CLIENT_ID=
FACEBOOK_CLIENT_SECRET=
FACEBOOK_CALLBACK_URL=/oauth/facebook/callback
```

#### [GitHub](../configuration/user_auth_system.md#github-authentication)

for more information: [GitHub](../configuration/user_auth_system.md#github-authentication)

```bash
# GitHub
GITHUB_CLIENT_ID=your_client_id
GITHUB_CLIENT_SECRET=your_client_secret
GITHUB_CALLBACK_URL=/oauth/github/callback
```

#### [Google](../configuration/user_auth_system.md#google-authentication)

for more information: [Google](../configuration/user_auth_system.md#google-authentication)

```bash
# Google
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
GOOGLE_CALLBACK_URL=/oauth/google/callback
```

#### [OpenID](../configuration/user_auth_system.md#openid-authentication-with-azure-ad)

for more information: [Azure OpenID](../configuration/user_auth_system.md#openid-authentication-with-azure-ad) or [AWS Cognito OpenID](../configuration/user_auth_system.md#openid-authentication-with-aws-cognito)

```bash
# OpenID
OPENID_CLIENT_ID=
OPENID_CLIENT_SECRET=
OPENID_ISSUER=
OPENID_SESSION_SECRET=
OPENID_SCOPE="openid profile email"
OPENID_CALLBACK_URL=/oauth/openid/callback

OPENID_BUTTON_LABEL=
OPENID_IMAGE_URL=
```

### Email Password Reset

Email is used for password reset. See: [Email Password Reset](../configuration/user_auth_system.md#email-and-password-reset)

- Note that either the service or the host, plus the username, password, and the From address, must all be set for email to work.

> If using `EMAIL_SERVICE`, **do NOT** set the extended connection parameters:
>
> `HOST`, `PORT`, `ENCRYPTION`, `ENCRYPTION_HOSTNAME`, `ALLOW_SELFSIGNED`
>
> Failing to set valid values here will result in LibreChat using the unsecured password reset!

See: [nodemailer well-known-services](https://community.nodemailer.com/2-0-0-beta/setup-smtp/well-known-services/)

```bash
EMAIL_SERVICE=
```

If `EMAIL_SERVICE` is not set, connect to this server:

```bash
EMAIL_HOST=
```

Mail server port to connect to with EMAIL_HOST (usually 25, 465, 587, 2525):

```bash
EMAIL_PORT=25
```

Encryption valid values: `starttls` (force STARTTLS), `tls` (obligatory TLS), anything else (use STARTTLS if available):

```bash
EMAIL_ENCRYPTION=
```

Check the name in the certificate against this instead of `EMAIL_HOST`:

```bash
EMAIL_ENCRYPTION_HOSTNAME=
```

Set to `true` to allow self-signed certificates; anything else will disallow self-signed:

```bash
EMAIL_ALLOW_SELFSIGNED=
```

Username used for authentication. For consumer services, this MUST usually match EMAIL_FROM:

```bash
EMAIL_USERNAME=
```

Password used for authentication:

```bash
EMAIL_PASSWORD=
```

The human-readable address in the From field is constructed as `EMAIL_FROM_NAME <EMAIL_FROM>`. Defaults to `APP_TITLE`:

```bash
EMAIL_FROM_NAME=
```

Mail address for the From field. It is **REQUIRED** to set a value here (even if it is not a working address):

```bash
EMAIL_FROM=noreply@librechat.ai
```
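
Putting the variables above together, a minimal service-based configuration might look like the following (all values are placeholders; when `EMAIL_SERVICE` is set, the extended connection parameters stay unset):

```bash
EMAIL_SERVICE=gmail
EMAIL_USERNAME=me@gmail.com
EMAIL_PASSWORD=my-app-password
EMAIL_FROM=me@gmail.com
EMAIL_FROM_NAME="My LibreChat Server"
# EMAIL_HOST, EMAIL_PORT, EMAIL_ENCRYPTION, EMAIL_ENCRYPTION_HOSTNAME and
# EMAIL_ALLOW_SELFSIGNED are left unset when EMAIL_SERVICE is used
```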

docs/install/configuration/free_ai_apis.md

---
title: 💸 Free AI APIs
weight: -6
---

# Free AI APIs

There are services offering free or free-trial access to AI APIs via reverse proxy.

Here is a well-maintained public list of [Free AI APIs](https://github.com/zukixa/cool-ai-stuff) that may or may not be compatible with LibreChat.

> ⚠️ [OpenRouter](./ai_setup.md#openrouter) is in a category of its own, and is highly recommended over the "free" services below. NagaAI and other 'free' API proxies tend to have intermittent issues, data leaks, and/or problems with the guidelines of the platforms they advertise on. Use the below at your own risk.

### NagaAI

Since NagaAI works with LibreChat, and offers Llama2 along with OpenAI models, let's start with that one: [NagaAI](https://t.me/chimera_ai)

> ⚠️ Never trust 3rd parties. Use at your own risk of privacy loss. Your data may be used for AI training at best or for nefarious reasons at worst; this is true in all cases, even with official endpoints: never give an LLM sensitive/identifying information. If something is free, you are the product. If errors arise, they are more likely to be due to the 3rd party, and not this project, as I test the official endpoints first and foremost.

You will get your API key from the Discord server. The instructions are pretty clear when you join, so I won't repeat them.

Once you have the API key, you should adjust your `.env` file like this:

```bash
##########################
# OpenAI Endpoint:
##########################

OPENAI_API_KEY=your-naga-ai-api-key
# Reverse proxy settings for OpenAI:
OPENAI_REVERSE_PROXY=https://api.naga.ac/v1/chat/completions

# OPENAI_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,text-davinci-003,gpt-4,gpt-4-0314,gpt-4-0613
```

**Note:** The `OPENAI_MODELS` variable is commented out so that the server can fetch all available models from NagaAI's API. Uncomment and adjust it if you wish to specify exactly which models you want to use.

It's worth noting that not all models listed by their API will work, with or without this project. The exact URL may also change; just make sure you include `/v1/chat/completions` in the reverse proxy URL if it ever changes.

You can set `OPENAI_API_KEY=user_provided` if you would like each user to add their own NagaAI API key. In that case, be sure to specify the models with `OPENAI_MODELS`, since they cannot be fetched without an admin-set API key.
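
For example, a user-provided setup might look like this (the model list is illustrative; adjust it to the models NagaAI actually serves):

```bash
OPENAI_API_KEY=user_provided
# Must be set explicitly, since models can't be fetched without an admin key
OPENAI_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-4
```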

## That's it! You're all set. 🎉

### Here's me using Llama2 via NagaAI

![Screenshot 2023-07-23 201709](https://github.com/danny-avila/LibreChat/assets/110412045/f3ce0226-152c-4d53-9a6e-c3713a1cd3be)

### Plugins also work with this reverse proxy (OpenAI models). [More info on plugins here](https://docs.librechat.ai/features/plugins/introduction.html)

![Screenshot 2023-07-23 202426](https://github.com/danny-avila/LibreChat/assets/110412045/bef36f80-0de6-4c2a-9ca4-ae010db7a8a3)

---

>⚠️ Note: If you're having trouble, before creating a new issue, please search for similar ones on our [#issues thread on our discord](https://discord.gg/weqZFtD9C4) or our [troubleshooting discussion](https://github.com/danny-avila/LibreChat/discussions/categories/troubleshooting) on our Discussions page. If you don't find a relevant issue, feel free to create a new one and provide as much detail as possible.

docs/install/configuration/index.md

---
title: Configuration
weight: 2
---

# Configuration

* ⚙️ [Environment Variables](./dotenv.md)
* 🐋 [Docker Compose Override](./docker_override.md)
---
* 🤖 [AI Setup](./ai_setup.md)
* 🚅 [LiteLLM](./litellm.md)
* 💸 [Free AI APIs](./free_ai_apis.md)
---
* 🛂 [Authentication System](./user_auth_system.md)
* 🍃 [Online MongoDB](./mongodb.md)
* 🌍 [Default Language](./default_language.md)
* 🌀 [Miscellaneous](./misc.md)

docs/install/configuration/litellm.md

---
title: 🚅 LiteLLM
weight: -7
---

# Using LibreChat with LiteLLM Proxy

Use [LiteLLM Proxy](https://docs.litellm.ai/docs/simple_proxy) for:

* Calling 100+ LLMs (Huggingface/Bedrock/TogetherAI/etc.) in the OpenAI ChatCompletions & Completions format
* Load balancing between multiple models and deployments of the same model - the LiteLLM proxy can handle 1k+ requests/second during load tests
* Authentication & spend tracking via virtual keys

https://docs.litellm.ai/docs/simple_proxy

## Start LiteLLM Proxy Server

### Pip install litellm

```shell
pip install litellm
```

### Create a config.yaml for litellm proxy

More information on LiteLLM configurations here: https://docs.litellm.ai/docs/simple_proxy#proxy-configs

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-small-eu
      api_base: https://my-endpoint-europe-berri-992.openai.azure.com/
      api_key:
      rpm: 6      # Rate limit for this deployment: in requests per minute (rpm)
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-small-ca
      api_base: https://my-endpoint-canada-berri992.openai.azure.com/
      api_key:
      rpm: 6
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-turbo-large
      api_base: https://openai-france-1234.openai.azure.com/
      api_key:
      rpm: 1440
```

### Start the proxy

```shell
litellm --config /path/to/config.yaml

#INFO: Proxy running on http://0.0.0.0:8000
```
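
Before wiring it into LibreChat, you can sanity-check the proxy with a plain OpenAI-style request (a minimal sketch, assuming the proxy is running locally on port 8000 with the config above):

```bash
curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```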

## Use LiteLLM Proxy Server with LibreChat

#### 1. Clone the repo

```shell
git clone https://github.com/danny-avila/LibreChat.git
```

#### 2. Modify LibreChat's `docker-compose.yml`

```yaml
OPENAI_REVERSE_PROXY=http://host.docker.internal:8000/v1/chat/completions
```

#### 3. Save fake OpenAI key in LibreChat's `.env`

Copy LibreChat's `.env.example` to `.env` and overwrite the default `OPENAI_API_KEY` (by default it requires the user to pass a key).

```env
OPENAI_API_KEY=sk-1234
```

#### 4. Run LibreChat:

```shell
docker compose up
```

---

### Why use LiteLLM?

1. **Access to Multiple LLMs**: It allows calling over 100 LLMs from platforms like Huggingface, Bedrock, TogetherAI, etc., using OpenAI's ChatCompletions and Completions format.

2. **Load Balancing**: Capable of handling over 1,000 requests per second during load tests, it balances load across various models and deployments.

3. **Authentication & Spend Tracking**: The server supports virtual keys for authentication and tracks spending.

Key components and features include:

- **Installation**: Easy installation.
- **Testing**: Testing features to route requests to specific models.
- **Server Endpoints**: Offers multiple endpoints for chat completions, completions, embeddings, model lists, and key generation.
- **Supported LLMs**: Supports a wide range of LLMs, including AWS Bedrock, Azure OpenAI, Huggingface, AWS Sagemaker, Anthropic, and more.
- **Proxy Configurations**: Allows setting various parameters like model list, server settings, environment variables, and more.
- **Multiple Models Management**: Configurations can be set up for managing multiple models with fallbacks, cooldowns, retries, and timeouts.
- **Embedding Models Support**: Special configurations for embedding models.
- **Authentication Management**: Features for managing authentication through virtual keys, model upgrades/downgrades, and tracking spend.
- **Custom Configurations**: Supports setting model-specific parameters, caching responses, and custom prompt templates.
- **Debugging Tools**: Options for debugging and logging proxy input/output.
- **Deployment and Performance**: Information on deploying LiteLLM Proxy and its performance metrics.
- **Proxy CLI Arguments**: A wide range of command-line arguments for customization.

Overall, LiteLLM Server offers a comprehensive suite of tools for managing, deploying, and interacting with a variety of LLMs, making it a versatile choice for large-scale AI applications.

docs/install/configuration/misc.md

---
title: 🌀 Miscellaneous
weight: -2
author: danny-avila and jerkstorecaller
---

As LibreChat has varying use cases and environment possibilities, this page hosts niche setups and configurations, contributed by the community, that are not better suited to any of the other guides.

# Using LibreChat behind a reverse proxy with Basic Authentication

Written by [@danny-avila](https://github.com/danny-avila) and [@jerkstorecaller](https://github.com/jerkstorecaller)

### Basic Authentication (Basic Auth)

Basic Authentication is a simple authentication scheme built into the HTTP protocol. When a client sends a request to a server, the server can respond with a `401 Unauthorized` status code, prompting the client to provide a username and password. This username and password are then sent with subsequent requests in the HTTP header, encoded in Base64 format.

For example, if the username is `Aladdin` and the password is `open sesame`, the client sends:

```
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```

Where `QWxhZGRpbjpvcGVuIHNlc2FtZQ==` is the Base64 encoding of `Aladdin:open sesame`.
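
You can reproduce that header value yourself (a quick check, assuming a standard `base64` utility is available):

```bash
echo -n 'Aladdin:open sesame' | base64
# QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```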

**Note**: Basic Auth is not considered very secure on its own because the credentials are sent in easily decodable Base64 format. It should always be used in conjunction with HTTPS to encrypt the credentials during transmission.

### Reverse Proxy

A reverse proxy is a server that sits between client devices and a web server, forwarding client requests to the web server and returning the server's responses back to the clients. This is useful for load balancing, caching, and, in this context, adding an additional layer of security or authentication.

### The Issue with LibreChat and Basic Auth

If LibreChat is behind a webserver acting as a reverse proxy with Basic Auth (a common scenario for casual users), LibreChat will not function properly without some extra configuration. You will connect to LibreChat and be prompted for Basic Auth credentials; after entering your username/password, LibreChat will load, but you will not get a response from the AI services.

The reason is that LibreChat uses Bearer authentication when calling the backend API at domain.com/api. Because those calls use Bearer rather than Basic auth, your webserver will view this as an unauthenticated connection attempt and return 401.

The solution is to enable Basic Auth, but disable it specifically for the /api/ endpoint (this is safe because the API calls still require an authenticated user).

You will therefore need to create a new rule that disables Basic Auth for /api/. This rule must be higher priority than the rule activating Basic Auth.

### Nginx Configuration

For example, for nginx, you might do:

```
#https://librechat.domain.com
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name librechat.*;
    include /config/nginx/ssl.conf;

    #all connections to librechat.domain.com require basic_auth
    location / {
        auth_basic "Access Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy_params.conf;
        proxy_pass http://127.0.0.1:3080;
    }

    #...except for /api/, which will use LibreChat's own auth system
    location ~ ^/api/ {
        auth_basic off;
        include /config/nginx/proxy_params.conf;
        proxy_pass http://127.0.0.1:3080;
    }
}
```

The provided Nginx configuration sets up a server block for `librechat.domain.com`:

1. **Basic Auth for All Requests**: The `location /` block sets up Basic Auth for all requests to `librechat.domain.com`. The `auth_basic` directive activates Basic Auth, and the `auth_basic_user_file` directive points to the file containing valid usernames and passwords.

2. **Exception for `/api/` Endpoint**: The `location ~ ^/api/` block matches any URL path starting with `/api/`. For these requests, Basic Auth is turned off using `auth_basic off;`. This ensures that LibreChat's own authentication system can operate without interference.
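
If you still need to create the `.htpasswd` file referenced above, the `htpasswd` utility (from the `apache2-utils` package on Debian/Ubuntu) can generate it; the username below is a placeholder:

```bash
# -c creates the file with a first user; you will be prompted for the password
htpasswd -c /config/nginx/.htpasswd myuser
```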

docs/install/configuration/mongodb.md

---
title: 🍃 Online MongoDB
weight: -4
---

# Set Up an Online MongoDB Database

## Create an account
- Open a new tab and go to [https://account.mongodb.com/account/register](https://account.mongodb.com/account/register) to create an account.

## Create a project
- Once you have set up your account, create a new project and name it (the name can be anything):

![image](https://github.com/danny-avila/LibreChat/assets/32828263/87e8a898-2865-4a70-b626-b32094669b27)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/73aa1863-a441-4fd4-a85b-fa518bfdd4d6)

## Build a database
- Now select `Build a Database`:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/0c4cbb0f-70b5-4987-a7b0-2d055cde2c6a)

## Choose your cloud environment
- Select the free tier:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/8ba6939b-68a7-4c1a-b70d-e4f57b48a3ea)

## Name your cluster
- Name your cluster (leave everything else default) and click create:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/d85399ed-9c8f-43df-badd-a29a95f9b189)

## Database credentials
- Enter a username and a secure password:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/1cd13ea5-6d84-4d8c-81fc-a4d3b2f0a39d)

## Select environment
- Select `Cloud Environment`:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/4a9f5a3c-5d87-4c29-b4c2-4117aaaa29ca)

## Complete database configuration
- Click `Finish and Close`:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/f609d00b-66c2-43c1-9962-f12234dc1711)

## Go to your database
- Click `Go to Databases`:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/fcbe4c49-2ab2-4fd2-9a3e-b96fe0b4a4b1)

## Network access
- Click on `Network Access` in the side menu:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/5f4dbb00-a1b8-43b5-87ae-0c8ed532fbdf)

## Add IP address
- Add an IP Address:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/99e45db1-e6e8-4ecb-a843-437cfa905b53)

## Allow access
- Select `Allow access from anywhere` and `Confirm`:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/a1f2c92b-9b35-4b14-a9b2-2ea8bdaca0c4)

## Get your connection string

- Select `Database` in the side menu

![image](https://github.com/danny-avila/LibreChat/assets/32828263/e4b1a631-8f36-434c-9cfb-d8e437420550)

- Select `Connect`:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/282f478b-e8e0-43de-9bee-ad335c1a1e4c)

- Select the first option (`Drivers`)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/1425d125-d352-44fe-8a57-ba4fed256097)

- Copy the `connection string`:

![image](https://github.com/danny-avila/LibreChat/assets/32828263/f6f47fb0-aca0-4a4c-86a9-9597b0611ebd)

- The URI format is `mongodb+srv://<username>:<password>@<host>/<database>?<options>`. Make sure to replace `<password>` with the database password you created in the "[database credentials](#database-credentials)" section above. Do not forget to remove the `<` `>` around the password. Also remove `&w=majority` at the end of the connection string; `retryWrites=true` is the only option you need to keep. You should also add `LibreChat` or your own `APP_TITLE` as the database name in the URI.
- example:

```
mongodb+srv://fuegovic:1Gr8Banana@render-librechat.fgycwpi.mongo.net/LibreChat?retryWrites=true
```
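
Once you have the connection string, it goes into LibreChat's environment configuration; assuming your setup reads it from the `MONGO_URI` variable (as the default Docker configuration does), that would look like:

```bash
# Replace the placeholders with your own values
MONGO_URI=mongodb+srv://<username>:<password>@<host>/LibreChat?retryWrites=true
```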

---

>⚠️ Note: If you're having trouble, before creating a new issue, please search for similar ones on our [#issues thread on our discord](https://discord.gg/weqZFtD9C4) or our [troubleshooting discussion](https://github.com/danny-avila/LibreChat/discussions/categories/troubleshooting) on our Discussions page. If you don't find a relevant issue, feel free to create a new one and provide as much detail as possible.

docs/install/configuration/user_auth_system.md

---
title: 🛂 Authentication System
weight: -5
---

# User Authentication System

LibreChat has a user authentication system that allows users to sign up and log in securely and easily. The system is scalable and can handle a large number of concurrent users without compromising performance or security.

By default, we have email signup and login enabled, which means users can create an account using their email address and a password. They can also reset their password if they forget it.

Additionally, our system can integrate social logins from various platforms such as Google, GitHub, Discord, OpenID, and more. This means users can log in using their existing accounts on these platforms, without having to create a new account or remember another password.

>❗**Important:** When you run the app for the first time, you need to create a new account by clicking on "Sign up" on the login page. The first account you make will be the admin account. The admin account doesn't have any special features right now, but it might be useful if you want to make an admin dashboard to manage other users later.

>> **Note:** The first account created should ideally be a local account (email and password).

## Basic Configuration

### General

Here's an overview of the general configuration, located in the `.env` file at the root of the LibreChat folder.

- `ALLOW_EMAIL_LOGIN`: Email login. Set to `true` or `false` to enable or disable ONLY email login.
- `ALLOW_REGISTRATION`: Email registration of new users. Set to `true` or `false` to enable or disable Email registration.
- `ALLOW_SOCIAL_LOGIN`: Allow users to connect to LibreChat with various social networks, see below. Set to `true` or `false` to enable or disable.
- `ALLOW_SOCIAL_REGISTRATION`: Enable or disable registration of new users using various social networks. Set to `true` or `false` to enable or disable.

> **Note:** OpenID does not support the ability to disable only registration.

>> **Quick Tip:** Even with registration disabled, you can add users directly to the database using `npm run create-user`. If you can't get npm to work, try `sudo docker exec -ti LibreChat sh` first to "ssh" into the container.

![image](https://github.com/danny-avila/LibreChat/assets/81851188/52a37d1d-7392-4a9a-a79f-90ed2da7f841)

```bash
ALLOW_EMAIL_LOGIN=true
ALLOW_REGISTRATION=true
ALLOW_SOCIAL_LOGIN=false
ALLOW_SOCIAL_REGISTRATION=false
```

### Session Expiry and Refresh Token

- Default values: session expiry: 15 minutes, refresh token expiry: 7 days
- For more information: [Refresh Token](https://github.com/danny-avila/LibreChat/pull/927)

```bash
SESSION_EXPIRY=1000 * 60 * 15
REFRESH_TOKEN_EXPIRY=(1000 * 60 * 60 * 24) * 7
```

``` mermaid
sequenceDiagram
    Client->>Server: Login request with credentials
    Server->>Passport: Use authentication strategy (e.g., 'local', 'google', etc.)
    Passport-->>Server: User object or false/error
    Note over Server: If valid user...
    Server->>Server: Generate access and refresh tokens
    Server->>Database: Store hashed refresh token
    Server-->>Client: Access token and refresh token
    Client->>Client: Store access token in HTTP Header and refresh token in HttpOnly cookie
    Client->>Server: Request with access token from HTTP Header
    Server-->>Client: Requested data
    Note over Client,Server: Access token expires
    Client->>Server: Request with expired access token
    Server-->>Client: Unauthorized
    Client->>Server: Request with refresh token from HttpOnly cookie
    Server->>Database: Retrieve hashed refresh token
    Server->>Server: Compare hash of provided refresh token with stored hash
    Note over Server: If hashes match...
    Server-->>Client: New access token and refresh token
    Client->>Server: Retry request with new access token
    Server-->>Client: Requested data
```

### JWT Secret and Refresh Secret

- You should use new secure values. The examples given are 32-byte keys (64 characters in hex).
- Use this Replit to generate some quickly: [JWT Keys](https://replit.com/@daavila/crypto#index.js)

```bash
JWT_SECRET=16f8c0ef4a5d391b26034086c628469d3f9f497f08163ab9b40137092f2909ef
JWT_REFRESH_SECRET=eaa5191f2914e30b9387fd84e254e4ba6fc51b4654968a9b0803b456a54b8418
```

---

## Automated Moderation System (optional)

The Automated Moderation System is enabled by default. It uses a scoring mechanism to track user violations. As users commit actions like excessive logins, registrations, or messaging, they accumulate violation scores. Upon reaching a set threshold, the user and their IP are temporarily banned. This system ensures platform security by monitoring and penalizing rapid or suspicious activities.

To set up the mod system, review [the setup guide](../../features/mod_system.md).

> *Please Note: If you want this to work in development mode, you will need to create a file called `.env.development` in the root directory and set `DOMAIN_CLIENT` to `http://localhost:3090` or whatever port is provided by vite when running `npm run frontend-dev`*

---

## **Email and Password Reset**

### General setup

In the `.env` file, modify these variables:

```
EMAIL_SERVICE=                  # eg. gmail - see https://community.nodemailer.com/2-0-0-beta/setup-smtp/well-known-services/
EMAIL_HOST=                     # eg. example.com - if EMAIL_SERVICE is not set, connect to this server.
EMAIL_PORT=25                   # eg. 25 - mail server port to connect to with EMAIL_HOST (usually 25, 465, 587)
EMAIL_ENCRYPTION=               # eg. starttls - valid values: starttls (force STARTTLS), tls (obligatory TLS), anything else (use STARTTLS if available)
EMAIL_ENCRYPTION_HOSTNAME=      # eg. example.com - check the name in the certificate against this instead of EMAIL_HOST
EMAIL_ALLOW_SELFSIGNED=         # eg. true - valid values: true (allow self-signed), anything else (disallow self-signed)
EMAIL_USERNAME=                 # eg. me@gmail.com - the username used for authentication. For consumer services, this MUST usually match EMAIL_FROM.
EMAIL_PASSWORD=                 # eg. password - the password used for authentication
EMAIL_FROM=                     # eg. noreply@librechat.ai - the email address that will appear in the "from" field (required, see below)
EMAIL_FROM_NAME=                # eg. LibreChat - the human-readable address in the From is constructed as "EMAIL_FROM_NAME <EMAIL_FROM>". Defaults to APP_TITLE.
```

If you want to use one of the predefined services, configure only these variables:

- `EMAIL_SERVICE` is the name of the email service you are using (Gmail, Outlook, Yahoo Mail, ProtonMail, iCloud Mail, etc.) as defined in the NodeMailer well-known services linked above.
- `EMAIL_USERNAME` is the username of the email service (usually, it will be the email address, but in some cases, it can be an actual username used to access the account).
- `EMAIL_PASSWORD` is the password used to access the email service. This is not the password to access the email account directly, but a password specifically generated for this service.
- `EMAIL_FROM` is the email address that will appear in the "from" field when a user receives an email.
- `EMAIL_FROM_NAME` is the name that will appear in the "from" field when a user receives an email. If left unset, it defaults to the app title.

If you want to use a generic SMTP service or need advanced configuration for one of the predefined providers, configure these variables:

- `EMAIL_HOST` is the hostname to connect to, or an IP address.
- `EMAIL_PORT` is the port to connect to. Be aware that different ports usually come with different requirements - 25 is for mailserver-to-mailserver, 465 requires encryption at the start of the connection, and 587 allows submission of mail as a user.
- `EMAIL_ENCRYPTION` defines if encryption is required at the start (`tls`) or started after the connection is set up (`starttls`). If either of these values are set, they are enforced. If they are not set, an encrypted connection is started if available.
- `EMAIL_ENCRYPTION_HOSTNAME` allows specification of a hostname against which the certificate is validated. Use this if the mail server does have a valid certificate, but you are connecting with an IP or a different name for some reason.
- `EMAIL_ALLOW_SELFSIGNED` defines whether self-signed certificates can be accepted from the server. As the mails being sent contain sensitive information, ONLY use this for testing.

NOTE: ⚠️ **Failing to perform either of the below setups will result in LibreChat using the unsecured password reset! This allows anyone to reset any password on your server immediately, without mail being sent at all!** The variable `EMAIL_FROM` does not support all email providers **but is still required**. To stay updated, check the bug fixes [here](https://github.com/danny-avila/LibreChat/tags).

### Setup with Gmail

1. Create a Google Account and enable 2-step verification.
2. In the [Google Account settings](https://myaccount.google.com/), click on the "Security" tab and open "2-step verification."
3. Scroll down and open "App passwords." Choose "Mail" for the app and select "Other" for the device, then give it a random name.
4. Click on "Generate" to create a password, and copy the generated password.
5. In the .env file, modify the variables as follows:

```
EMAIL_SERVICE=gmail
EMAIL_USERNAME=your-email
EMAIL_PASSWORD=your-app-password
EMAIL_FROM=email address for the from field, e.g., noreply@librechat.ai
EMAIL_FROM_NAME="My LibreChat Server"
```

### Setup with custom mail server

1. Gather your SMTP login data from your provider. The steps are different for each, but they will usually list values for all variables.
2. In the .env file, modify the variables as follows, assuming some sensible example values:

```
EMAIL_HOST=mail.example.com
EMAIL_PORT=587
EMAIL_ENCRYPTION=starttls
EMAIL_USERNAME=your-email
EMAIL_PASSWORD=your-app-password
EMAIL_FROM=email address for the from field, e.g., noreply@librechat.ai
EMAIL_FROM_NAME="My LibreChat Server"
```

---

## Social Authentication - Setup and Configuration

![image](https://github.com/danny-avila/LibreChat/assets/138638445/cacc2ee0-acf9-4d05-883a-ca9952de1165)

### Discord

#### Create a new Discord Application

- Go to **[Discord Developer Portal](https://discord.com/developers)**

- Create a new Application and give it a name

![image](https://github.com/danny-avila/LibreChat/assets/32828263/7e7cdfa0-d1d6-4b6b-a8a9-c6177f3761c2)

#### Discord Application Configuration

- In the OAuth2 general settings, add a valid redirect URL:
    - Example for localhost: `http://localhost:3080/oauth/discord/callback`
    - Example for a domain: `https://example.com/oauth/discord/callback`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/6c56fb92-f4ab-43b9-981b-f98babeeb19d)

- In `Default Authorization Link`, select `In-app Authorization` and set the scopes to `applications.commands`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/2ce94670-9422-48d2-97e9-ec40bd331573)

- Save changes and reset the Client Secret

![image](https://github.com/danny-avila/LibreChat/assets/32828263/3a7f10e6-32b5-4391-a3ac-daa33f5a1cb2)
![image](https://github.com/danny-avila/LibreChat/assets/32828263/b6870a5b-cdf2-4632-abb0-54b1f1b32f90)

#### .env Configuration

- Paste your `Client ID` and `Client Secret` in the `.env` file:

```bash
DISCORD_CLIENT_ID=your_client_id
DISCORD_CLIENT_SECRET=your_client_secret
DISCORD_CALLBACK_URL=/oauth/discord/callback
```

- Save the `.env` file

> Note: If using docker, run `docker-compose up -d` to apply the .env configuration changes

---

### Facebook - WIP

> ⚠️ **Warning: Work in progress, not currently functional**

> ❗ Note: Facebook Authentication will not work from `localhost`

#### Create a Facebook Application

- Go to the **[Facebook Developer Portal](https://developers.facebook.com/)**

- Click on "My Apps" in the header menu

![image](https://github.com/danny-avila/LibreChat/assets/32828263/b75ccb8b-d56b-41b7-8b0d-a32c2e762962)

- Create a new application

![image](https://github.com/danny-avila/LibreChat/assets/32828263/706f050d-5423-44cc-80f0-120913695d8f)

- Select "Authenticate and request data from users with Facebook Login"

![image](https://github.com/danny-avila/LibreChat/assets/32828263/2ebbb571-afe8-429e-ab33-be8bee977f5d)

- Choose "No, I'm not creating a game"

![image](https://github.com/danny-avila/LibreChat/assets/32828263/88b28211-008f-4d83-9eee-33d06c124cca)

- Provide an `app name` and `App contact email` and click `Create app`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/e1928e0c-333b-4b99-b9e5-53ed9bd437d0)

#### Facebook Application Configuration

- In the side menu, select "Use cases" and click "Customize" under "Authentication and account creation."

![image](https://github.com/danny-avila/LibreChat/assets/32828263/b83fbf79-6148-4bbb-a8ee-fc8cbd86d354)

- Add the `email permission`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/53eaa884-bbdc-4506-a9cf-87d6c8a3b3b3)

- Now click `Go to settings`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/1bdbbd52-a4ed-42bf-bc31-3c2a7f89cbaf)

- Ensure that `Client OAuth login`, `Web OAuth login` and `Enforce HTTPS` are **enabled**.

![image](https://github.com/danny-avila/LibreChat/assets/32828263/4a766236-edbb-4c26-a808-43d3b4e36c50)

- Add a `Valid OAuth Redirect URIs` and "Save changes"
    - Example for a domain: `https://example.com/oauth/facebook/callback`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/01b84a2a-705e-4f75-b8a0-07ba35f99d6b)

- Click `Go back` and select `Basic` in the `App settings` tab

![image](https://github.com/danny-avila/LibreChat/assets/32828263/f938a927-1f35-45b5-988a-8e2d02a50c54)

- Click "Show" next to the App secret.

![image](https://github.com/danny-avila/LibreChat/assets/32828263/8788786e-e764-4e63-95c0-f1a234b7cdd5)

#### .env Configuration

- Copy the `App ID` and `App Secret` and paste them into the `.env` file as follows:

```bash
FACEBOOK_CLIENT_ID=your_app_id
FACEBOOK_CLIENT_SECRET=your_app_secret
FACEBOOK_CALLBACK_URL=/oauth/facebook/callback
```

- Save the `.env` file.

> Note: If using docker, run `docker-compose up -d` to apply the .env configuration changes

---

### GitHub

#### Create a GitHub Application

- Go to your **[GitHub Developer settings](https://github.com/settings/apps)**
- Create a new GitHub App

![image](https://github.com/danny-avila/LibreChat/assets/32828263/3b3b2b3a-7111-4d3d-85dd-6fedd01a8904)

#### GitHub Application Configuration

- Give it a `GitHub App name` and set your `Homepage URL`
    - Example for localhost: `http://localhost:3080`
    - Example for a domain: `https://example.com`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/c6c8f9ad-0a0f-4e75-9914-9569c0e7c42e)

- Add a valid `Callback URL`:
    - Example for localhost: `http://localhost:3080/oauth/github/callback`
    - Example for a domain: `https://example.com/oauth/github/callback`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/7e58d82b-304e-4a8f-90bf-98e57e01243e)

- Uncheck the box labeled `Active` in the `Webhook` section

![image](https://github.com/danny-avila/LibreChat/assets/32828263/35b1f4c6-3952-45e2-b1b3-41b5c84cba21)

- Scroll down to `Account permissions` and set `Email addresses` to `Access: Read-only`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/2a856d18-d1c2-4110-a3fa-b2667a8828a7)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/6f87a74c-3bcb-447d-9a74-fe8a3cec1069)

- Click on `Create GitHub App`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/354a9466-ca12-48ad-a842-c8fc93188257)

#### .env Configuration

- Click `Generate a new client secret`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/73a51a2e-da7b-448c-9dea-0b796e0ce3bd)

- Copy the `Client ID` and `Client Secret` in the `.env` file

![image](https://github.com/danny-avila/LibreChat/assets/32828263/9a35d559-dc6d-4c70-8a80-bc0e1b6a21fc)

```bash
GITHUB_CLIENT_ID=your_client_id
GITHUB_CLIENT_SECRET=your_client_secret
GITHUB_CALLBACK_URL=/oauth/github/callback
```

- Save the `.env` file

> Note: If using docker, run `docker-compose up -d` to apply the .env configuration changes

---

### Google

#### Create a Google Application

- Visit: **[Google Cloud Console](https://cloud.google.com)** and open the `Console`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/f7609c9e-541d-4dd9-9bb2-e03f21b4f6ba)

- Create a New Project and give it a name

![image](https://github.com/danny-avila/LibreChat/assets/32828263/ab94fe63-b2c9-43cb-b4e8-48e0f95544b8)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/fdc3a804-bff5-4eb7-9eab-5d5e65ee4e45)

#### Google Application Configuration

- Select the project you just created and go to `APIs and Services`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/f1e9e757-e4f2-487d-82ff-1cbd2e2acbab)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/b51bfbdd-be05-4c44-bd33-d5ea4f2c943d)

- Select `Credentials` and click `CONFIGURE CONSENT SCREEN`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/6c2aa557-9b9c-4205-87da-dbd21a0a6c3a)

- Select `External` then click `CREATE`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/8782fdbc-5c37-4acb-84e1-aaa03b47a653)

- Fill in your App information

> Note: You can get a logo from your LibreChat folder here: `docs\assets\favicon_package\android-chrome-192x192.png`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/59e4ef9b-f1f0-4c05-ae25-b49bd65c0259)

- Configure your `App domain` and add your `Developer contact information` then click `SAVE AND CONTINUE`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/3e7bc1cb-2ce2-4743-b484-26397a907b81)

- Configure the `Scopes`
    - Add `email`, `profile` and `openid`
    - Click `UPDATE` and `SAVE AND CONTINUE`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/a4290349-e6d0-4713-a6da-343aa46441a3)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/e16a01d9-2c24-4be9-b29d-abd01827ec76)

- Click `SAVE AND CONTINUE`
- Review your app and go back to dashboard

- Go back to the `Credentials` tab, click on `+ CREATE CREDENTIALS` and select `OAuth client ID`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/aa729fd6-1e45-4b4d-b8e5-d04a81018951)

- Select `Web application` and give it a name

![image](https://github.com/danny-avila/LibreChat/assets/32828263/89f0f47e-eaa9-48d3-b8cb-76eb520340c1)

- Configure the `Authorized JavaScript origins`, you can add both your domain and localhost if you desire
    - Example for localhost: `http://localhost:3080`
    - Example for a domain: `https://example.com`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/6bf9ae4a-55d7-460c-a1eb-ec34b325ba0e)

- Add a valid `Authorized redirect URIs`
    - Example for localhost: `http://localhost:3080/oauth/google/callback`
    - Example for a domain: `https://example.com/oauth/google/callback`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/5cdc3cff-da85-4ab4-9ef5-df7a51a7ba20)

#### .env Configuration

- Click `CREATE` and copy your `Client ID` and `Client secret`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/7c513f3f-9e40-41cf-9e48-bff26ccbd814)

- Add them to your `.env` file:

```bash
GOOGLE_CLIENT_ID=your_client_id
GOOGLE_CLIENT_SECRET=your_client_secret
GOOGLE_CALLBACK_URL=/oauth/google/callback
```

- Save the `.env` file

> Note: If using docker, run `docker-compose up -d` to apply the .env configuration changes

---

### OpenID with AWS Cognito

#### Create a new User Pool in Cognito

- Visit: **[https://console.aws.amazon.com/cognito/](https://console.aws.amazon.com/cognito/)**
- Sign in as Root User
- Click on `Create user pool`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/e9b412c3-2cf1-4f54-998c-d1d6c12581a5)

#### Configure sign-in experience

Your Cognito user pool sign-in options should include `User Name` and `Email`.

![image](https://github.com/danny-avila/LibreChat/assets/32828263/d2b18247-8da2-4c9b-8f81-a1b1fa982437)

#### Configure Security Requirements

You can configure the password requirements now if you desire.

![image](https://github.com/danny-avila/LibreChat/assets/32828263/e125e8f1-961b-4a38-a6b7-ed1faf29c4a3)

#### Configure sign-up experience

Choose the attributes required at signup. The minimum required is `name`. If you want to require users to use their full name at sign up, use `given_name` and `family_name` as required attributes.

![image](https://github.com/danny-avila/LibreChat/assets/32828263/558b8e2c-afbd-4dd1-87c5-44983d2d49cc)

#### Configure message delivery

Sending email with Cognito is free for up to 50 emails a day.

![image](https://github.com/danny-avila/LibreChat/assets/32828263/9f1dc0b5-d930-46b3-a5f0-d19aa5b75f67)

#### Integrate your app

Select `Use Cognito Hosted UI` and choose a domain name

![image](https://github.com/danny-avila/LibreChat/assets/32828263/51e665b6-0b9c-4672-be39-1a1b35531b4b)

Set the app type to `Confidential client`
Make sure `Generate a client secret` is set.
Set the `Allowed callback URLs` to `https://YOUR_DOMAIN/oauth/openid/callback`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/2ba86cba-cd41-4b7f-b0a3-157e6db8eebd)

Under `Advanced app client settings`, make sure `Profile` is included in the `OpenID Connect scopes` (at the bottom)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/1a1a3d4c-e9fa-4ced-bfbe-e42e7900d96a)

#### Review and create

You can now make last-minute changes; click on `Create user pool` when you're done reviewing the configuration

![image](https://github.com/danny-avila/LibreChat/assets/32828263/11580401-f9a9-4929-839e-bc14a9a2d334)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/3c40f135-8e19-43ea-8a12-207a397d26de)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/3a81ed9c-e196-43c6-979c-ba1a03a1d132)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/162d7cd5-b40b-4ef5-bb57-6b0fe2bc9575)

#### Get your environment variables

1. Open your User Pool

![image](https://github.com/danny-avila/LibreChat/assets/32828263/b658ff2a-e4f4-4041-a1fa-0626ccc7c2cf)

2. The `User Pool ID` and your AWS region will be used to construct the `OPENID_ISSUER` (see below)

![image](https://github.com/danny-avila/LibreChat/assets/32828263/dc8b2374-9adb-4065-85dc-a087d625372d)
![image](https://github.com/danny-avila/LibreChat/assets/32828263/d606f5c8-c60b-4d20-bdb2-d0d69e49ea1e)

3. Go to the `App Integrations` tab

![image](https://github.com/danny-avila/LibreChat/assets/32828263/58713bdc-24bd-4e3f-8d99-a4fbfcb3c80d)

4. Open the app client

![image](https://github.com/danny-avila/LibreChat/assets/32828263/a53d4b0b-02a8-4be2-86c4-689a2f14e5d0)

5. Toggle `Show Client Secret`

![image](https://github.com/danny-avila/LibreChat/assets/32828263/9575f3a9-72ed-4356-8a3b-4feab9908e6e)

- Use the `Client ID` for `OPENID_CLIENT_ID`

- Use the `Client secret` for `OPENID_CLIENT_SECRET`

- Generate a random string for the `OPENID_SESSION_SECRET`

> The `OPENID_SCOPE` and `OPENID_CALLBACK_URL` are pre-configured with the correct values

6. Open the `.env` file at the root of your LibreChat folder and add the following variables with the values you copied:

```bash
OPENID_CLIENT_ID=Your client ID
OPENID_CLIENT_SECRET=Your client secret
OPENID_ISSUER=https://cognito-idp.[AWS REGION].amazonaws.com/[USER POOL ID]/.well-known/openid-configuration
OPENID_SESSION_SECRET=Any random string
OPENID_SCOPE=openid profile email
OPENID_CALLBACK_URL=/oauth/openid/callback
```
7. Save the `.env` file

> Note: If using docker, run `docker-compose up -d` to apply the .env configuration changes

---

### OpenID with Azure AD

1. Go to the [Azure Portal](https://portal.azure.com/) and sign in with your account.
2. In the search box, type "Azure Active Directory" and click on it.
3. On the left menu, click on App registrations and then on New registration.
4. Give your app a name and select Web as the platform type.
5. In the Redirect URI field, enter `http://localhost:3080/oauth/openid/callback` and click on Register.
6. You will see an Overview page with some information about your app. Copy the Application (client) ID and the Directory (tenant) ID and save them somewhere.
7. On the left menu, click on Authentication and check the boxes for Access tokens and ID tokens under Implicit grant and hybrid flows.
8. On the left menu, click on Certificates & Secrets and then on New client secret. Give your secret a name and an expiration date and click on Add.
9. You will see a Value column with your secret. Copy it and save it somewhere. Don't share it with anyone!
10. Open the .env file in your project folder and add the following variables with the values you copied:

```bash
OPENID_CLIENT_ID=Your Application (client) ID
OPENID_CLIENT_SECRET=Your client secret
OPENID_ISSUER=https://login.microsoftonline.com/Your Directory (tenant ID)/v2.0/
OPENID_SESSION_SECRET=Any random string
OPENID_SCOPE=openid profile email #DO NOT CHANGE THIS
OPENID_CALLBACK_URL=/oauth/openid/callback # this should be the same for everyone
```
11. Save the .env file

> Note: If using docker, run `docker-compose up -d` to apply the .env configuration changes

---