
# How to set up various tokens and APIs for the project
This doc explains how to set up the various tokens and APIs the project uses. Some features require specific credentials, and you must configure at least one of them to run the app.
## Docker notes
If you use Docker, you should rebuild the Docker image each time you update your credentials.

Rebuild command:

```bash
npm run update:docker

# OR, if you don't have npm
docker-compose build --no-cache
```
Alternatively, you can create a new file named `docker-compose.override.yml` in the same directory as your main `docker-compose.yml` file for LibreChat, where you can set your `.env` variables as needed under `environment`. See the Docker docs for more info; you can also view an example of an override file for LibreChat in the "Manage Your Database" section.
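As a minimal sketch, an override file might look like the following (the service name `api` and the variables shown here are assumptions; match them to your own compose file and credentials):

```yaml
# docker-compose.override.yml (hypothetical example)
version: '3.4'

services:
  api:
    environment:
      # set or override any .env variables as needed
      - OPENAI_API_KEY=sk-your-secret-key
      - GOOGLE_KEY=user_provided
```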
## OpenAI API key
To get your OpenAI API key, you need to:
- Go to https://platform.openai.com/account/api-keys
- Create an account or log in with your existing one
- Add a payment method to your account (this is not free, sorry 😬)
- Copy your secret key (`sk-...`) and save it in `./.env` as `OPENAI_API_KEY`
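For example, your `.env` entry would look like this (the key value is a placeholder):

```bash
# .env file
OPENAI_API_KEY=sk-your-secret-key-here
```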
## ChatGPT Free Access token
Note that this is disabled by default and requires additional configuration to work. See: ChatGPT Reverse Proxy
To get your Access token for ChatGPT 'Web Version', you need to:
- Go to https://chat.openai.com
- Create an account or log in with your existing one
- Visit https://chat.openai.com/api/auth/session
- Copy the value of the "accessToken" field and save it in `./.env` as `CHATGPT_ACCESS_TOKEN`
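For example (placeholder value):

```bash
# .env file
CHATGPT_ACCESS_TOKEN=your-accessToken-value-here
```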
Warning: There may be a chance of your account being banned if you deploy the app to multiple users with this method. Use at your own risk. 😱
## Bing Access Token
To get your Bing Access Token, you have a few options:

- You can try leaving it blank and see if it works (fingers crossed 🤞)
- You can follow these new instructions (thanks @danny-avila for sharing 🙌)
- You can use MS Edge, navigate to bing.com, and do the following:
    - Make sure you are logged in
    - Open the DevTools by pressing F12 on your keyboard
    - Click on the tab "Application" (on the left of the DevTools)
    - Expand the "Cookies" (under "Storage")
    - Copy the value of the "_U" cookie and save it in `./.env` as `BING_ACCESS_TOKEN`
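For example (placeholder value):

```bash
# .env file
BING_ACCESS_TOKEN=your-_U-cookie-value-here
```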
## Anthropic Endpoint (Claude)
- Create an account at https://console.anthropic.com/
- Go to https://console.anthropic.com/account/keys and get your API key
- Add it to `ANTHROPIC_API_KEY=` in the `.env` file
## Google LLMs

For the Google Endpoint, you can either use the **Generative Language API** (for Gemini models), or the **Vertex AI API** (for PaLM 2 & Codey models; Gemini support coming soon).

The Generative Language API uses an API key, which you can get from Google AI Studio.

For Vertex AI, you need a Service Account JSON key file, with appropriate access configured.

Instructions for both are given below.
Setting `GOOGLE_KEY=user_provided` in your `.env` file will configure both values to be provided from the client (or frontend).
### Generative Language API (Gemini)
60 Gemini requests/minute are currently free until early next year when it enters general availability.
⚠️ Google will be using that free input/output to help improve the model, with data de-identified from your Google Account and API key. ⚠️ During this period, your messages “may be accessible to trained reviewers.”
To use Gemini models, you'll need an API key. If you don't already have one, create a key in Google AI Studio.

Once you have your key, you can either provide it from the frontend by setting the following:

```bash
GOOGLE_KEY=user_provided
```

Or, provide the key in your `.env` file, which allows all users of your instance to use it:

```bash
GOOGLE_KEY=mY_SeCreT_w9347w8_kEY
```
Notes:
- As of 12/15/23, Gemini Pro Vision is not yet supported but is planned.
- PaLM2 and Codey models cannot be accessed through the Generative Language API.
### Vertex AI (PaLM 2 & Codey)
To set up Google LLMs (via Google Cloud Vertex AI), first sign up for Google Cloud: https://cloud.google.com/
You can usually get $300 starting credit, which makes this option free for 90 days.
1. Once signed up, enable the Vertex AI API on Google Cloud:
    - Go to the Vertex AI page on the Google Cloud console
    - Click on "Enable API" if prompted
2. Create a Service Account with the Vertex AI role:
    - Click here to create a Service Account
    - Select or create a project
    - Enter a service account ID (required); the name and description are optional
    - Click on "Create and Continue" to give at least the "Vertex AI User" role
    - Click on "Continue/Done"
3. Create a JSON key to save in your project directory:
    - Go back to the Service Accounts page
    - Select your service account
    - Click on "Keys"
    - Click on "Add Key" and then "Create new key"
    - Choose JSON as the key type and click on "Create"
    - Download the key file and rename it to `auth.json`
    - Save it within the project directory, in `/api/data/`
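For example, on Linux or macOS, moving the downloaded key into place might look like this (the downloaded filename is hypothetical):

```bash
# adjust the source path to wherever your key was downloaded
mv ~/Downloads/my-project-1a2b3c4d5e6f.json ./api/data/auth.json
```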
Saving your JSON key file in the project directory allows all users of your LibreChat instance to use it.
Alternatively, once you have your JSON key file, you can provide it from the frontend on a per-user basis by setting the following:

```bash
GOOGLE_KEY=user_provided
```
Notes:
- As of 12/15/23, Gemini and Gemini Pro Vision are not yet supported through Vertex AI but are planned.
## Azure OpenAI
In order to use Azure OpenAI with this project, specific environment variables must be set in your `.env` file. These variables will be used for constructing the API URLs.

The variables needed are outlined below:
### Required Variables
- `AZURE_API_KEY`: Your Azure OpenAI API key.
- `AZURE_OPENAI_API_INSTANCE_NAME`: The instance name of your Azure OpenAI API.
- `AZURE_OPENAI_API_DEPLOYMENT_NAME`: The deployment name of your Azure OpenAI API.
- `AZURE_OPENAI_API_VERSION`: The version of your Azure OpenAI API.
For example, with these variables, the URL for chat completions would look something like:

```
https://{AZURE_OPENAI_API_INSTANCE_NAME}.openai.azure.com/openai/deployments/{AZURE_OPENAI_API_DEPLOYMENT_NAME}/chat/completions?api-version={AZURE_OPENAI_API_VERSION}
```
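As a concrete sketch, hypothetical values like these:

```bash
# .env file (example values, not real credentials)
AZURE_API_KEY=your-azure-openai-key
AZURE_OPENAI_API_INSTANCE_NAME=my-instance
AZURE_OPENAI_API_DEPLOYMENT_NAME=gpt-35-turbo
AZURE_OPENAI_API_VERSION=2023-05-15
```

would produce the following chat completions URL:

```
https://my-instance.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-05-15
```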
You should also consider changing the `AZURE_OPENAI_MODELS` variable to the models available in your deployment.
### Additional Configuration Notes
- **Endpoint Construction**: The provided variables help customize the construction of the API URL for Azure.
- **Model Deployment Naming**: As of 2023-11-10, the Azure API allows only one model per deployment. It's advisable to name your deployments after the model name (e.g., "gpt-3.5-turbo") for easy deployment switching. This is facilitated by setting `AZURE_USE_MODEL_AS_DEPLOYMENT_NAME` to `TRUE`.
Alternatively, use custom deployment names and set `AZURE_OPENAI_DEFAULT_MODEL` for expected functionality.
- `AZURE_OPENAI_MODELS`: List the available models, separated by commas without spaces. The first listed model will be the default. If left blank, internal settings will be used. Note that deployment names can't have periods, which are removed when generating the endpoint.

Example use:

```bash
# .env file
AZURE_OPENAI_MODELS=gpt-3.5-turbo,gpt-4,gpt-5
```
- `AZURE_USE_MODEL_AS_DEPLOYMENT_NAME`: Enable using the model name as the deployment name for the API URL.

Example use:

```bash
# .env file
AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE
```
Note: The Azure API does not use the `model` field in the payload; it is more of an identifying field for the LibreChat app. If you're using non-model deployment names but are having issues with the model not being recognized, you should set this field. It will also not be used as the deployment name if `AZURE_USE_MODEL_AS_DEPLOYMENT_NAME` is enabled, which prioritizes what the user selects as the model.
- `AZURE_OPENAI_DEFAULT_MODEL`: Override the model setting for Azure, useful if using custom deployment names.

Example use:

```bash
# .env file
AZURE_OPENAI_DEFAULT_MODEL=gpt-3.5-turbo # do include periods in the model name here
```
### Optional Variables
- `AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME`: The deployment name for completions. This is currently not in use but may be used in the future.
- `AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME`: The deployment name for embeddings. This is currently not in use but may be used in the future.

These two variables are optional but may be used in future updates of this project.
### Using Plugins with Azure
Note: To use the Plugins endpoint with Azure OpenAI, you need a deployment that supports function calling. Otherwise, you need to turn "Functions" off in the Agent settings. When you are not using "functions" mode, it's recommended to have "skip completion" off as well, which is a review step of what the agent generated.
To use Azure with the Plugins endpoint, make sure the following environment variables are set:
- `PLUGINS_USE_AZURE`: If set to "true" or any truthy value, this will enable the program to use Azure with the Plugins endpoint.
- `AZURE_API_KEY`: Your Azure API key must be set with an environment variable.
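For example (placeholder key):

```bash
# .env file
PLUGINS_USE_AZURE=true
AZURE_API_KEY=your-azure-openai-key
```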
That's it! You're all set. 🎉
## Free AI APIs
⚠️ Note: If you're having trouble, before creating a new issue, please search for similar ones in the #issues thread on our Discord or the troubleshooting discussion on our Discussions page. If you don't find a relevant issue, feel free to create a new one and provide as much detail as possible.