📝 docs update: remove ChatGPTbrowser and other small fixes (#1686)
* 🧼 docs: remove references to ChatGPTbrowser and PandoraNext
* docs: clean up .env file. Update OpenAI models with the list of automatically fetched models, update Plugin models with the list of models supporting functions, and comment out the ToC in custom_config.md since it conflicts with the mkdocs right-sidebar ToC
* 🖋️ docs: fix formatting in linux_install.md
* docs: update example model lists in dotenv.md
* docs: update breaking_changesv.md
This commit is contained in: parent 972402e029, commit b37f55cd3a
8 changed files with 54 additions and 257 deletions

.env.example (36 changes)
@@ -44,7 +44,7 @@ DEBUG_CONSOLE=false
# Endpoints #
#===================================================#

# ENDPOINTS=openAI,azureOpenAI,bingAI,chatGPTBrowser,google,gptPlugins,anthropic
# ENDPOINTS=openAI,azureOpenAI,bingAI,google,gptPlugins,anthropic

PROXY=

@@ -80,14 +80,6 @@ AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE
BINGAI_TOKEN=user_provided
# BINGAI_HOST=https://cn.bing.com

#============#
# ChatGPT #
#============#

CHATGPT_TOKEN=
CHATGPT_MODELS=text-davinci-002-render-sha
# CHATGPT_REVERSE_PROXY=<YOUR REVERSE PROXY>

#============#
# Google #
#============#

@@ -101,7 +93,7 @@ GOOGLE_KEY=user_provided
#============#

OPENAI_API_KEY=user_provided
# OPENAI_MODELS=gpt-3.5-turbo-1106,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,gpt-4,gpt-4-0314,gpt-4-0613
# OPENAI_MODELS=gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k

DEBUG_OPENAI=false

@@ -127,7 +119,7 @@ DEBUG_OPENAI=false
# Plugins #
#============#

# PLUGIN_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,gpt-4,gpt-4-0314,gpt-4-0613
# PLUGIN_MODELS=gpt-4,gpt-4-turbo-preview,gpt-4-0125-preview,gpt-4-1106-preview,gpt-4-0613,gpt-3.5-turbo,gpt-3.5-turbo-1106,gpt-3.5-turbo-0613

DEBUG_PLUGINS=true

@@ -147,20 +139,20 @@ AZURE_AI_SEARCH_SEARCH_OPTION_SELECT=

# DALL·E
#----------------
# DALLE_API_KEY= # Key for both DALL-E-2 and DALL-E-3
# DALLE3_API_KEY= # Key for DALL-E-3 only
# DALLE2_API_KEY= # Key for DALL-E-2 only
# DALLE3_SYSTEM_PROMPT="Your DALL-E-3 System Prompt here"
# DALLE2_SYSTEM_PROMPT="Your DALL-E-2 System Prompt here"
# DALLE_REVERSE_PROXY= # Reverse proxy for DALL-E-2 and DALL-E-3
# DALLE3_BASEURL= # Base URL for DALL-E-3
# DALLE2_BASEURL= # Base URL for DALL-E-2
# DALLE_API_KEY=
# DALLE3_API_KEY=
# DALLE2_API_KEY=
# DALLE3_SYSTEM_PROMPT=
# DALLE2_SYSTEM_PROMPT=
# DALLE_REVERSE_PROXY=
# DALLE3_BASEURL=
# DALLE2_BASEURL=

# DALL·E (via Azure OpenAI)
# Note: requires some of the variables above to be set
#----------------
# DALLE3_AZURE_API_VERSION= # Azure OpenAI API version for DALL-E-3
# DALLE2_AZURE_API_VERSION= # Azure OpenAI API versiion for DALL-E-2
# DALLE3_AZURE_API_VERSION=
# DALLE2_AZURE_API_VERSION=

# Google
#-----------------

@@ -202,7 +194,7 @@ MEILI_MASTER_KEY=DrhYf7zENyR6AlUCKmnz0eYASOQdl6zxH7s7MKFSfFCt

OPENAI_MODERATION=false
OPENAI_MODERATION_API_KEY=
# OPENAI_MODERATION_REVERSE_PROXY=not working with some reverse proxys
# OPENAI_MODERATION_REVERSE_PROXY=

BAN_VIOLATIONS=true
BAN_DURATION=1000 * 60 * 60 * 2

@@ -26,7 +26,6 @@ weight: 2
* 🔥 [Firebase CDN](./firebase.md)
* 🍃 [Manage Your Database](./manage_your_database.md)
* 🪵 [Logging System](./logging_system.md)
* 📦 [PandoraNext](./pandoranext.md)
* 😈 [Bing Jailbreak](./bing_jailbreak.md)

---

@@ -1,172 +0,0 @@
---
title: 📦 PandoraNext
description: How to deploy PandoraNext to enable the `CHATGPT_REVERSE_PROXY` for use with LibreChat.
weight: -3
---

# PandoraNext Deployment Guide

If you're looking to use the `ChatGPT` Endpoint in LibreChat, setting up a reverse proxy is essential. PandoraNext offers a robust solution for this purpose. This guide will walk you through deploying PandoraNext to enable the `CHATGPT_REVERSE_PROXY` for use with LibreChat.

> Using this method you will only be able to use `text-davinci-002-render-sha` with PandoraNext in LibreChat. Other models offered with the `plus` subscription do not work.

You can run it locally in Docker or deploy it on the web for remote access.

---

## Deploy Locally Using Docker

For local deployment using Docker, the steps are as follows:

### 1. **Clone or Download the Repository:**
Get the latest release from the [PandoraNext GitHub repository](https://github.com/pandora-next/deploy).

```bash
git clone https://github.com/pandora-next/deploy.git
```

### 2. Get your PandoraNext `License ID`
Visit the **[PandoraNext Dashboard](https://dash.pandoranext.com/)** to get your `license ID`

### 3. **Configure `config.json`:**
Within the cloned repository, in the `data` folder, edit `config.json`. Specify your `license_id` and `proxy_api_prefix`. For the `proxy_api_prefix`, use at least 8 characters, avoid characters that can't be used in a URL, and make sure it's unique.
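If you need a quick way to produce a prefix that meets those constraints, a random hex string works well since hex characters are URL-safe; this is only a convenience sketch and assumes `openssl` is available on your machine:

```bash
# Generate a random, URL-safe, 16-character value to use as proxy_api_prefix
openssl rand -hex 8
```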

Here's the `config.json` for your reference:

```json
{
    "bind": "0.0.0.0:8181",
    "tls": {
        "enabled": false,
        "cert_file": "",
        "key_file": ""
    },
    "timeout": 600,
    "proxy_url": "",
    "license_id": "",
    "public_share": false,
    "site_password": "",
    "setup_password": "",
    "server_tokens": true,
    "proxy_api_prefix": "",
    "isolated_conv_title": "*",
    "captcha": {
        "provider": "",
        "site_key": "",
        "site_secret": "",
        "site_login": false,
        "setup_login": false,
        "oai_username": false,
        "oai_password": false
    },
    "whitelist": null
}
```

### 4. **Set Up the LibreChat `.env` File:**
In the `.env` file within your LibreChat directory, you'll need to set the `CHATGPT_REVERSE_PROXY` variable:

```bash
CHATGPT_REVERSE_PROXY=http://host.docker.internal:8181/your_proxy_api_prefix_here/backend-api/conversation
```
- Replace `your_proxy_api_prefix_here` with the actual proxy API prefix.

### 5. **Start Docker Containers:**
From the PandoraNext directory, run the following command to launch the Docker containers:

```bash
docker-compose up -d
```
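To confirm the deployment came up, the checks below are a minimal sketch; the second command assumes the compose file publishes the sample config's bind port (8181) on localhost:

```bash
# Show the PandoraNext containers and their status
docker-compose ps

# Optionally verify that the service is listening on the configured bind port
curl -I http://localhost:8181/
```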

---

## Deploy Online on Hugging Face

To deploy PandoraNext online by duplicating the Hugging Face Space, follow these steps:

### 1. Get your PandoraNext `License ID`
Visit the **[PandoraNext Dashboard](https://dash.pandoranext.com/)** to get your `license ID`

### 2. **Configure `config.json`:**
Edit the following `config.json`. Specify your `license_id` and `proxy_api_prefix`. For the `proxy_api_prefix`, use at least 8 characters, avoid characters that can't be used in a URL, and make sure it's unique.

Here's the `config.json` for your reference:

```json
{
    "bind": "0.0.0.0:8181",
    "tls": {
        "enabled": false,
        "cert_file": "",
        "key_file": ""
    },
    "timeout": 600,
    "proxy_url": "",
    "license_id": "",
    "public_share": false,
    "site_password": "",
    "setup_password": "",
    "server_tokens": true,
    "proxy_api_prefix": "",
    "isolated_conv_title": "*",
    "captcha": {
        "provider": "",
        "site_key": "",
        "site_secret": "",
        "site_login": false,
        "setup_login": false,
        "oai_username": false,
        "oai_password": false
    },
    "whitelist": null
}
```

### 3. **Hugging Face Space:**
Visit the [PandoraNext LibreChat Space](https://huggingface.co/spaces/LibreChat/PandoraNext) on Hugging Face.

### 4. **Duplicate the Space:**
Utilize the available options to duplicate or fork the space into your own Hugging Face account.

### 5. **Fill the required secrets**
When asked for the `SECRETS`,
- for `CONFIG_JSON` use the whole content of the `config.json` you just modified,
- for `TOKENS_JSON` use the following default `token.json`:
```json
{
    "test-1": {
        "token": "access token / session token / refresh token",
        "shared": true,
        "show_user_info": false
    },
    "test-2": {
        "token": "access token / session token / refresh token",
        "shared": true,
        "show_user_info": true,
        "plus": true
    },
    "test2": {
        "token": "access token / session token / refresh token / share token",
        "password": "12345"
    }
}
```

### 6. **Configure LibreChat:**
In the `.env` file (or the secrets settings if you host LibreChat on Hugging Face), set the `CHATGPT_REVERSE_PROXY` variable using the following format:

```bash
CHATGPT_REVERSE_PROXY=http://your_server_domain.com/your_proxy_api_prefix_here/backend-api/conversation
```

- Replace `your_server_domain.com` with the domain of your deployed space.
  - You can use this format: `https://username-pandoranext.hf.space` (replace `username` with your Hugging Face username)
- Replace `your_proxy_api_prefix_here` with the `proxy_api_prefix` you have set in your `config.json`.
- The resulting URL should look similar to:
  `https://username-pandoranext.hf.space/your_proxy_api_prefix_here/backend-api/conversation`

## Final Notes

- The `proxy_api_prefix` should be sufficiently random and unique to prevent errors.
- The default `token.json` doesn't need to be modified for use with LibreChat.
- Ensure you have obtained a license ID from the [PandoraNext Dashboard](https://dash.pandoranext.com/).

@@ -9,6 +9,12 @@ weight: -10
**If you experience any issues after updating, we recommend clearing your browser cache and cookies.**
Certain changes in the updates may impact cookies, leading to unexpected behaviors if not cleared properly.

## January 30th 2024
- Since PandoraNext has shut down, the ChatGPTbrowser endpoint is no longer available in LibreChat.
- For more info:
  - [https://github.com/danny-avila/LibreChat/discussions/1663](https://github.com/danny-avila/LibreChat/discussions/1663#discussioncomment-8314025)
  - [https://linux.do/t/topic/1051](https://linux.do/t/topic/1051)

## v0.6.6

- **DALL-E Update**: user-provided keys for DALL-E are now specific to each DALL-E version, i.e.: `DALLE3_API_KEY` and `DALLE2_API_KEY`
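For reference, a minimal `.env` sketch of the version-specific keys (the values below are placeholders, not real keys):

```bash
# Version-specific DALL-E keys; set one or both as needed
DALLE3_API_KEY=your-dall-e-3-key   # used for DALL-E-3 only
DALLE2_API_KEY=your-dall-e-2-key   # used for DALL-E-2 only
```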

@@ -30,7 +30,6 @@ weight: -8
- [Using Plugins with Azure](#using-plugins-with-azure)
- [OpenRouter](#openrouter)
- [Unofficial APIs](#unofficial-apis)
- [ChatGPTBrowser](#chatgptbrowser)
- [BingAI](#bingai)
- [Conclusion](#conclusion) -->

@@ -447,27 +446,6 @@ OpenRouter is integrated to the LibreChat by overriding the OpenAI endpoint.

**Important:** Stability for Unofficial APIs is not guaranteed. Access methods for these APIs are hacky, prone to errors and patching, and are marked lowest priority in LibreChat's development.

### ChatGPTBrowser

**Backend Access to https://chat.openai.com/api**

This is not to be confused with [OpenAI's Official API](#openai)!

> Note that this is disabled by default and requires additional configuration to work.
> Also, using this may expose your data to third parties if you use a proxy, and OpenAI may flag your account.
> See: [ChatGPT Reverse Proxy](../../features/pandoranext.md)

To get your Access token for ChatGPT Browser Access, you need to:

- Go to **[https://chat.openai.com](https://chat.openai.com)**
- Create an account or log in with your existing one
- Visit **[https://chat.openai.com/api/auth/session](https://chat.openai.com/api/auth/session)**
- Copy the value of the "accessToken" field and save it in `./.env` as `CHATGPT_ACCESS_TOKEN` (see the sketch below)
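A minimal sketch of the resulting `.env` entry (the token value is a placeholder):

```bash
# ./.env
CHATGPT_ACCESS_TOKEN=eyJhbGciOi...   # paste the "accessToken" value from the session endpoint
```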

Warning: There may be a chance of your account being banned if you deploy the app to multiple users with this method. Use at your own risk.

---

### BingAI
I recommend using Microsoft Edge for this:

@@ -33,20 +33,37 @@ Future updates will streamline configuration further by migrating some settings

Stay tuned for ongoing enhancements to customize your LibreChat instance!

# Table of Contents
<!-- # Table of Contents

1. [Intro](#librechat-configuration-guide)
- [Setup](#setup)
- [Docker Setup](#docker-setup)
- [Config Structure](#config-structure)
- [1. Version](#1-version)
- [2. Cache Settings](#2-cache-settings)
- [3. Endpoints](#3-endpoints)
- [Endpoint Object Structure](#endpoint-object-structure)
- [Additional Notes](#additional-notes)
- [Default Parameters](#default-parameters)
- [Breakdown of Default Params](#breakdown-of-default-params)
- [Example Config](#example-config)
- [LibreChat Configuration Guide](#librechat-configuration-guide)
- [Table of Contents](#table-of-contents)
- [Setup](#setup)
- [Docker Setup](#docker-setup)
- [Config Structure](#config-structure)
- [Version](#version)
- [Cache Settings](#cache-settings)
- [File Strategy](#file-strategy)
- [Endpoints](#endpoints)
- [Endpoint Object Structure](#endpoint-object-structure)
- [**name**:](#name)
- [**apiKey**:](#apikey)
- [**baseURL**:](#baseurl)
- [**iconURL**:](#iconurl)
- [**models**:](#models)
- [**titleConvo**:](#titleconvo)
- [**titleMethod**:](#titlemethod)
- [**titleModel**:](#titlemodel)
- [**summarize**:](#summarize)
- [**summaryModel**:](#summarymodel)
- [**forcePrompt**:](#forceprompt)
- [**modelDisplayLabel**:](#modeldisplaylabel)
- [**addParams**:](#addparams)
- [**dropParams**:](#dropparams)
- [**headers**:](#headers)
- [Additional Notes](#additional-notes)
- [Default Parameters](#default-parameters)
- [Breakdown of Default Params](#breakdown-of-default-params)
- [Example Config](#example-config) -->

## Setup

@@ -58,7 +75,7 @@ The example config file has some options ready to go for Mistral AI and Openrouter

## Docker Setup

For Docker, you need to make use of an [override file](./docker_override), named `docker-compose.override.yml`, to ensure the config file works for you.
For Docker, you need to make use of an [override file](./docker_override.md), named `docker-compose.override.yml`, to ensure the config file works for you.

- First, make sure your containers stop running with `docker-compose down`
- Create or edit an existing `docker-compose.override.yml` at the root of the project:
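A minimal sketch of such an override, assuming the API service is named `api` (as in the stock `docker-compose.yml`) and that `librechat.yaml` sits at the project root:

```bash
# Write a docker-compose.override.yml that mounts your librechat.yaml into the container
cat > docker-compose.override.yml <<'EOF'
version: '3.4'
services:
  api:
    volumes:
      - type: bind
        source: ./librechat.yaml
        target: /app/librechat.yaml
EOF
```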

@@ -135,7 +135,7 @@ In this section you can configure the endpoints and models selection, their API
- `PROXY` is to be used by all endpoints (leave blank by default)

```bash
ENDPOINTS=openAI,azureOpenAI,bingAI,chatGPTBrowser,google,gptPlugins,anthropic
ENDPOINTS=openAI,azureOpenAI,bingAI,google,gptPlugins,anthropic
PROXY=
```
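If all traffic does need to go through an HTTP proxy, `PROXY` takes a full proxy URL; the address below is a hypothetical example, not a LibreChat default:

```bash
# Hypothetical example; leave PROXY blank unless every endpoint should use a proxy
PROXY=http://127.0.0.1:7890
```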

@@ -245,28 +245,6 @@ BINGAI_TOKEN=user_provided
BINGAI_HOST=
```

### ChatGPT
See: [ChatGPT Free Access token](../configuration/ai_setup.md#chatgptbrowser)

> **Warning**: To use this endpoint you'll have to set up your own reverse proxy. Here is the installation guide to deploy your own (based on [PandoraNext](https://github.com/pandora-next/deploy)): **[PandoraNext Deployment Guide](../../features/pandoranext.md)**

```bash
CHATGPT_REVERSE_PROXY=<YOUR-REVERSE-PROXY>
```

> **Note:** If you're a ChatGPT Plus user, you can try adding `gpt-4`, `gpt-4-plugins`, `gpt-4-code-interpreter`, and `gpt-4-browsing` to the list above and use the models for these features; **however, the view/display portion of these features is not supported**, but you can use the underlying models, which have a higher token context.

> This method **might only work** with `text-davinci-002-render-sha` and **might stop working** at any moment.

- Leave `CHATGPT_TOKEN=` blank to disable this endpoint
- Set `CHATGPT_TOKEN=` to "user_provided" to allow users to provide their own API key from the WebUI
- It is not recommended to provide your token in the `.env` file since it expires often and sharing it could get you banned.

```bash
CHATGPT_TOKEN=
CHATGPT_MODELS=text-davinci-002-render-sha
```

### Google
Follow these instructions to set up the [Google Endpoint](./ai_setup.md#google)

@@ -317,7 +295,7 @@ DEBUG_OPENAI=false
- Leave it blank or commented out to use internal settings.

```bash
OPENAI_MODELS=gpt-3.5-turbo-1106,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,gpt-4,gpt-4-0314,gpt-4-0613
OPENAI_MODELS=gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k
```

- Titling is enabled by default when initiating a conversation.

@@ -383,7 +361,7 @@ Here are some useful documentation about plugins:
- Identify the available models, separated by commas **without spaces**. The first model in the list will be set as default. Leave it blank or commented out to use internal settings.

```bash
PLUGIN_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,gpt-4,gpt-4-0314,gpt-4-0613
PLUGIN_MODELS=gpt-4,gpt-4-turbo-preview,gpt-4-0125-preview,gpt-4-1106-preview,gpt-4-0613,gpt-3.5-turbo,gpt-3.5-turbo-1106,gpt-3.5-turbo-0613
```

- Set to false or comment out to disable debug mode for plugins

@@ -26,7 +26,6 @@ In this video, you will learn how to install and run LibreChat, using Docker on

#### Instructions

Here are the steps to follow:
- Update the system: `sudo apt update`
- Clone LibreChat: `git clone https://github.com/danny-avila/LibreChat.git`
- Install Docker: `sudo apt install docker.io && sudo apt install docker-compose -y`
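The same steps as a single copy-paste sequence (a sketch assuming Ubuntu/Debian with `sudo` available):

```bash
# Update packages, fetch LibreChat, and install Docker plus docker-compose
sudo apt update
git clone https://github.com/danny-avila/LibreChat.git
sudo apt install docker.io -y
sudo apt install docker-compose -y
```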