Mirror of https://github.com/danny-avila/LibreChat.git, synced 2025-12-21 02:40:14 +01:00
📥 feat: Import Conversations from LibreChat, ChatGPT, Chatbot UI (#2355)
- Basic implementation of ChatGPT conversation import
- remove debug code
- Handle citations
- Fix updatedAt in import
- update default model
- Use job scheduler to handle import requests
- import job status endpoint
- Add wrapper around Agenda
- Rate limits for import endpoint
- rename import api path
- Batch save import to mongo
- Improve naming
- Add documenting comments
- Test for importers
- Change button for importing conversations
- Frontend changes
- Import job status endpoint
- Import endpoint response
- Add translations to new phrases
- Fix conversations refreshing
- cleanup unused functions
- set timeout for import job status polling
- Add documentation
- get extra spaces back
- Improve error message
- Fix translation files after merge
- fix translation files 2
- Add zh translation for import functionality
- Sync Meilisearch index after import
- chore: add dummy uri for jest tests, as MONGO_URI should only be real for E2E tests
- docs: fix links
- docs: fix conversationsImport section
- fix: user role issue for librechat imports
- refactor: import conversations from json
  - organize imports
  - add additional jsdocs
  - use multer with diskStorage to avoid loading file into memory outside of job
  - use filepath instead of loading data string for imports
  - replace console logs and some logger.info() with logger.debug
  - only use multer for import route
- fix: undefined metadata edge case and replace ChatGtp -> ChatGpt
- Refactor importChatGptConvo function to handle undefined metadata edge case and replace ChatGtp with ChatGpt
- fix: chatgpt importer
- feat: maintain tree relationship for librechat messages
- chore: use enum
- refactor: saveMessage to use single object arg, replace console logs, add userId to log message
- chore: additional comment
- chore: multer edge case
- feat: first pass, maintain tree relationship
- chore: organize
- chore: remove log
- ci: add hierarchy test for chatgpt
- ci: test maintaining of hierarchy for librechat
- wip: allow non-text content type messages
- refactor: import content part object json string
- refactor: more content types to format
- chore: consolidate messageText formatting
- docs: update on changes, bump data-provider/config versions, update readme
- refactor(indexSync): singleton pattern for MeiliSearchClient
- refactor: debug log after batch is done
- chore: add back indexSync error handling

Co-authored-by: jakubmieszczak <jakub.mieszczak@zendesk.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
Parent 3b44741cf9, commit ab6fbe48f1
64 changed files with 3795 additions and 98 deletions
@@ -239,7 +239,7 @@ Applying these setup requirements thoughtfully will ensure a correct and efficient…
### Model Deployments
The list of models available to your users is determined by the model groupings specified in your [`azureOpenAI` endpoint config.](./custom_config.md#models_1)
For example:
@@ -408,7 +408,7 @@ endpoints:
To use Vision (image analysis) with Azure OpenAI, you need to make sure `gpt-4-vision-preview` is a specified model [in one of your groupings](#model-deployments)
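A minimal sketch of such a grouping in `librechat.yaml`, assuming illustrative group, instance, and API key names (none of these identifiers come from the original; only the `gpt-4-vision-preview` model name is from the doc):

```yaml
endpoints:
  azureOpenAI:
    groups:
      - group: "my-vision-group"          # hypothetical group name
        apiKey: "${AZURE_API_KEY}"        # hypothetical env var reference
        instanceName: "my-instance"       # hypothetical Azure instance
        models:
          gpt-4-vision-preview:
            deploymentName: "gpt-4-vision-preview"
```

With the vision model present in a group, image requests can be routed to it without any extra endpoint configuration.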
This will work seamlessly as it does with the [OpenAI endpoint](./ai_setup.md#openai) (no need to select the vision model, it will be switched behind the scenes)
### Generate images with Azure OpenAI Service (DALL-E)
@@ -639,15 +639,15 @@ In any case, you can adjust the title model as such: `OPENAI_TITLE_MODEL=your-ti`…
Currently, the best way to set up Vision is to use your deployment names as the model names, as [shown here](#model-deployments)
This will work seamlessly as it does with the [OpenAI endpoint](./ai_setup.md#openai) (no need to select the vision model, it will be switched behind the scenes)
Alternatively, you can set the [required variables](#required-fields) to explicitly use your vision deployment, but this may limit you to exclusively using your vision deployment for all Azure chat settings.
**Notes:**
- If using `AZURE_OPENAI_BASEURL`, you should not specify instance and deployment names in place of the placeholders, as the vision request will fail.
- As of December 18th, 2023, Vision models seem to have degraded performance with Azure OpenAI when compared to [OpenAI](./ai_setup.md#openai)

|
||||
|
||||
|
|
|
|||
|
|
@@ -6,6 +6,10 @@ weight: -10
# 🖥️ Config Changelog
## v1.0.9
- Added `conversationsImport` to [rateLimits](./custom_config.md#ratelimits) along with the [new feature](https://github.com/danny-avila/LibreChat/pull/2355) for importing conversations from LibreChat, ChatGPT, and Chatbot UI.
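As a sketch, the new limiter follows the same shape as the existing `fileUploads` rate limit; the values below are the example values used in the linked docs, not required defaults:

```yaml
rateLimits:
  conversationsImport:
    ipMax: 100              # max imports per IP per window
    ipWindowInMinutes: 60   # IP-based window
    userMax: 50             # max imports per user per window
    userWindowInMinutes: 60 # user-based window
```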
## v1.0.8
- Added additional fields to [interface config](./custom_config.md#interface-object-structure) to toggle access to specific features:
@@ -112,6 +112,13 @@ docker compose up
    userMax: 50
    # Rate limit window for file uploads per user
    userWindowInMinutes: 60
  conversationsImport:
    ipMax: 100
    # Rate limit window for imports per IP
    ipWindowInMinutes: 60
    userMax: 50
    # Rate limit window for imports per user
    userWindowInMinutes: 60
registration:
  socialLogins: ["google", "facebook", "github", "discord", "openid"]
  allowedDomains:
@@ -278,7 +285,7 @@ docker compose up
- `fileUploads`
  - **Type**: Object
  - **Description**: Configures rate limits specifically for file upload operations.
  - <u>**Sub-keys:**</u>
    - `ipMax`
      - **Type**: Number
      - **Description**: Maximum number of uploads allowed per IP address per window.
@@ -291,6 +298,22 @@ docker compose up
    - `userWindowInMinutes`
      - **Type**: Number
      - **Description**: Time window in minutes for the user-based upload limit.
- `conversationsImport`
  - **Type**: Object
  - **Description**: Configures rate limits specifically for conversation import operations.
  - <u>**Sub-keys:**</u>
    - `ipMax`
      - **Type**: Number
      - **Description**: Maximum number of imports allowed per IP address per window.
    - `ipWindowInMinutes`
      - **Type**: Number
      - **Description**: Time window in minutes for the IP-based imports limit.
    - `userMax`
      - **Type**: Number
      - **Description**: Maximum number of imports per user per window.
    - `userWindowInMinutes`
      - **Type**: Number
      - **Description**: Time window in minutes for the user-based imports limit.

- **Example**:
  ```yaml
@@ -300,6 +323,11 @@ docker compose up
      ipWindowInMinutes: 60
      userMax: 50
      userWindowInMinutes: 60
    conversationsImport:
      ipMax: 100
      ipWindowInMinutes: 60
      userMax: 50
      userWindowInMinutes: 60
  ```
### registration
@@ -308,8 +336,8 @@ docker compose up
- **Type**: Object
- **Description**: Configures registration-related settings for the application.
- <u>**Sub-keys:**</u>
  - `socialLogins`: [More info](#sociallogins)
  - `allowedDomains`: [More info](#alloweddomains)
- [Registration Object Structure](#registration-object-structure)
### interface
@@ -1015,7 +1043,7 @@ The preset field for a modelSpec list item is made up of a comprehensive configuration…
```yaml
socialLogins: ["google", "facebook", "github", "discord", "openid"]
```
- **Note**: The order of the providers in the list determines their appearance order on the login/registration page. Each provider listed must be [properly configured](./user_auth_system.md#social-authentication) within the system to be active and available for users. This configuration allows for a tailored authentication experience, emphasizing the most relevant or preferred social login options for your user base.
### **allowedDomains**
@@ -1656,7 +1684,7 @@ Custom endpoints share logic with the OpenAI endpoint, and thus have default parameters…
- `stream`: If set, partial message deltas will be sent, like in ChatGPT. Otherwise, generation will only be available when completed.
- `messages`: [OpenAI format for messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages); the `name` field is added to messages with `system` and `assistant` roles when a custom name is specified via preset.
**Note:** The `max_tokens` field is not sent, in order to use the maximum number of tokens available, which is the default OpenAI API behavior. Some alternate APIs require this field, or it may default to a very low value and your responses may appear cut off; in this case, you should add it to the `addParams` field as shown in the [Custom Endpoint Object Structure](#custom-endpoint-object-structure).
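A minimal sketch of pinning `max_tokens` via `addParams` for a custom endpoint; the endpoint name, base URL, model name, and token value are all illustrative assumptions:

```yaml
endpoints:
  custom:
    - name: "MyCustomAPI"                # hypothetical endpoint name
      apiKey: "${CUSTOM_API_KEY}"        # hypothetical env var reference
      baseURL: "https://example.com/v1"  # hypothetical base URL
      models:
        default: ["my-model"]
      addParams:
        max_tokens: 2048                 # explicitly send max_tokens for APIs that require it
```

Any key under `addParams` is merged into the request payload, so this is the place for provider-specific required fields.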
### Additional Notes
@@ -232,7 +232,7 @@ AZURE_OPENAI_BASEURL=https://gateway.ai.cloudflare.com/v1/ACCOUNT_TAG/GATEWAY/az…
- Sets the base URL for Azure OpenAI API requests.
- Can include `${INSTANCE_NAME}` and `${DEPLOYMENT_NAME}` placeholders or specific credentials.
- Example: "https://gateway.ai.cloudflare.com/v1/ACCOUNT_TAG/GATEWAY/azure-openai/${INSTANCE_NAME}/${DEPLOYMENT_NAME}"
- [More info about `AZURE_OPENAI_BASEURL` here](./azure_openai.md#using-a-specified-base-url-with-azure)
> Note: as deployment names can't have periods, they will be removed when the endpoint is generated.
@@ -412,7 +412,7 @@ ASSISTANTS_BASE_URL=http://your-alt-baseURL:3080/
- Depending on your needs, additional optional configuration is available via the [`librechat.yaml` custom config file](./custom_config.md#assistants-endpoint-object-structure), such as disabling the assistant builder UI and determining which assistants can be used.
### OpenRouter
See [OpenRouter](./ai_endpoints.md#openrouter) for more info.
- OpenRouter is a legitimate proxy service to a multitude of LLMs, both closed and open source, including: OpenAI models, Anthropic models, Meta's Llama models, pygmalionai/mythalion-13b and many more open source models. Newer integrations are usually discounted, too!