🤖 docs: add copilot-gpt4-service AI setup info (#1695)

Adds information and setup details for [aaamoon's copilot-gpt4-service](https://github.com/aaamoon/copilot-gpt4-service) to the Unofficial APIs section of the documentation.

Uses GitHub Copilot to access the OpenAI API.
zimmra 2024-01-31 13:21:12 -08:00 committed by GitHub
parent b37f55cd3a
commit a9220375d3

@@ -457,11 +457,76 @@ I recommend using Microsoft Edge for this:
- Look for `lsp.asx` (if it's not there look into the other entries for one with a **very long** cookie)
- Copy the whole cookie value. (Yes it's very long 😉)
- Use this **"full cookie string"** for your "BingAI Token"
<p align="left">
<img src="https://github.com/danny-avila/LibreChat/assets/32828263/d4dfd370-eddc-4694-ab16-076f913ff430" width="50%">
</p>
### copilot-gpt4-service
For this setup, an additional Docker container will need to be set up.
***It is necessary to obtain your token first.***
Follow the instructions at **[copilot-gpt4-service#obtaining-token](https://github.com/aaamoon/copilot-gpt4-service#obtaining-copilot-token)** and keep your token for use within the service. More detailed instructions for setting up copilot-gpt4-service are available at the [GitHub repo](https://github.com/aaamoon/copilot-gpt4-service).
It is *not* recommended to use the obtained Copilot token directly; instead, use the `SUPER_TOKEN` variable. (You can generate your own `SUPER_TOKEN` with the OpenSSL command `openssl rand -hex 16`, and set the `ENABLE_SUPER_TOKEN` variable to `true` to enable it.)
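The token generation step above can be sketched in a shell session (this simply runs the OpenSSL command just mentioned and stores the result for later use):

```shell
# Generate a 16-byte value (32 hex characters) to use as SUPER_TOKEN,
# then print it so it can be copied into the container environment below.
SUPER_TOKEN=$(openssl rand -hex 16)
echo "$SUPER_TOKEN"
```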
1. Once your Docker environment is ready and your tokens are generated, proceed with this Docker run command to start the service:
```shell
docker run -d \
--name copilot-gpt4-service \
-e HOST=0.0.0.0 \
-e COPILOT_TOKEN=ghp_xxxxxxx \
-e SUPER_TOKEN=your_super_token \
-e ENABLE_SUPER_TOKEN=true \
--restart always \
-p 8080:8080 \
aaamoon/copilot-gpt4-service:latest
```
2. For Docker Compose users, the equivalent YAML configuration is provided below:
```yaml
version: '3.8'
services:
copilot-gpt4-service:
image: aaamoon/copilot-gpt4-service:latest
environment:
- HOST=0.0.0.0
- COPILOT_TOKEN=ghp_xxxxxxx # Default GitHub Copilot Token, if this item is set, the Token carried with the request will be ignored. Default is empty.
- SUPER_TOKEN=your_super_token # Super Token is a user-defined standalone token that can access COPILOT_TOKEN above. This allows you to share the service without exposing your COPILOT_TOKEN. Multiple tokens are separated by commas. Default is empty.
- ENABLE_SUPER_TOKEN=true # Whether to enable SUPER_TOKEN, default is false. If false, but COPILOT_TOKEN is not empty, COPILOT_TOKEN will be used without any authentication for all requests.
ports:
- 8080:8080
restart: unless-stopped
container_name: copilot-gpt4-service
```
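Either way, once the container is up you can sanity-check it before wiring it into LibreChat. A minimal sketch, assuming the service exposes an OpenAI-compatible `/v1/chat/completions` endpoint on the port mapped above (substitute your own `SUPER_TOKEN`):

```shell
# Send a minimal OpenAI-style chat request to the service; fall back to a
# message if nothing is listening on port 8080 yet.
response=$(curl -s --max-time 5 http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_super_token" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}' \
  || echo "service not reachable on port 8080")
echo "$response"
```

A JSON completion response indicates the service is ready to be added to `librechat.yaml`.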
3. After setting up the Docker container for `copilot-gpt4-service`, you can add it to your `librechat.yaml` configuration. Here is an example configuration:
```yaml
version: 1.0.1
cache: true
endpoints:
custom:
- name: "OpenAI via Copilot"
apiKey: "your_super_token"
baseURL: "http://[copilotgpt4service_host_ip]:8080/v1"
models:
default: ["gpt-4", "gpt-3.5-turbo"] # *See Notes
titleConvo: true
titleModel: "gpt-3.5-turbo"
summarize: true
summaryModel: "gpt-3.5-turbo"
forcePrompt: false
modelDisplayLabel: "OpenAI"
dropParams: ["user"]
```
Replace `your_super_token` with the token you obtained following the instructions above, and `[copilotgpt4service_host_ip]` with the IP of your Docker host. (**See Notes)
Restart LibreChat after adding the configuration, then select `OpenAI via Copilot` to start using it!
> Notes:
> - *The only models available are `gpt-4` and `gpt-3.5-turbo`.
> - **Advanced users can add this service to their existing docker-compose file/existing Docker network and avoid exposing port 8080 (or any port) from the copilot-gpt4-service container.
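The second note can be sketched as a compose fragment. This is a hypothetical example: the network name `librechat_default` depends on how your LibreChat stack was started (check yours with `docker network ls`), and with a shared network the `baseURL` becomes `http://copilot-gpt4-service:8080/v1` instead of a host IP:

```yaml
# Hypothetical sketch: join LibreChat's existing compose network instead of
# publishing port 8080 on the host. Network name is an assumption.
services:
  copilot-gpt4-service:
    image: aaamoon/copilot-gpt4-service:latest
    container_name: copilot-gpt4-service
    environment:
      - SUPER_TOKEN=your_super_token
      - ENABLE_SUPER_TOKEN=true
    restart: unless-stopped
    networks:
      - librechat_default
networks:
  librechat_default:
    external: true
```

Other containers on that network can then reach the service at `http://copilot-gpt4-service:8080/v1` with no published ports.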
---
## Conclusion