#=====================================================================#
#                       LibreChat Configuration                       #
#=====================================================================#
# Please refer to the reference documentation for assistance          #
# with configuring your LibreChat environment.                        #
#                                                                     #
# https://www.librechat.ai/docs/configuration/dotenv                  #
#=====================================================================#

#==================================================#
#               Server Configuration               #
#==================================================#

HOST=localhost
PORT=3080

MONGO_URI=mongodb://127.0.0.1:27017/LibreChat
# The maximum number of connections in the connection pool.
MONGO_MAX_POOL_SIZE=
# The minimum number of connections in the connection pool.
MONGO_MIN_POOL_SIZE=
# The maximum number of connections that may be in the process of being established concurrently by the connection pool.
MONGO_MAX_CONNECTING=
# The maximum number of milliseconds that a connection can remain idle in the pool before being removed and closed.
MONGO_MAX_IDLE_TIME_MS=
# The maximum time in milliseconds that a thread can wait for a connection to become available.
MONGO_WAIT_QUEUE_TIMEOUT_MS=
# Set to false to disable automatic index creation for all models associated with this connection.
MONGO_AUTO_INDEX=
# Set to `false` to disable Mongoose automatically calling `createCollection()` on every model created on this connection.
MONGO_AUTO_CREATE=

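# Example pool settings (illustrative values only, not LibreChat recommendations;
# leave the variables above empty to use the MongoDB driver defaults):
# MONGO_MAX_POOL_SIZE=100
# MONGO_MIN_POOL_SIZE=0
# MONGO_MAX_CONNECTING=2
# MONGO_MAX_IDLE_TIME_MS=60000
# MONGO_WAIT_QUEUE_TIMEOUT_MS=5000
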
DOMAIN_CLIENT=http://localhost:3080
DOMAIN_SERVER=http://localhost:3080

NO_INDEX=true

# Use the address that is at most n number of hops away from the Express application.
# req.socket.remoteAddress is the first hop, and the rest are looked for in the X-Forwarded-For header from right to left.
# A value of 0 means that the first untrusted address would be req.socket.remoteAddress, i.e. there is no reverse proxy.
# Defaults to 1.
TRUST_PROXY=1

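# Worked example (assuming a single reverse proxy such as nginx or Traefik in
# front of LibreChat): that proxy is 1 hop away, so TRUST_PROXY=1 lets Express
# resolve the client IP from X-Forwarded-For. With no proxy at all, use 0;
# with a CDN in front of your proxy, use 2.
# TRUST_PROXY=2
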
# Minimum password length for user authentication
# Default: 8
# Note: When using LDAP authentication, you may want to set this to 1
# to bypass local password validation, as LDAP servers handle their own
# password policies.
# MIN_PASSWORD_LENGTH=8

#===============#
# JSON Logging  #
#===============#

# Use when processing console logs in cloud deployments like GCP/AWS
CONSOLE_JSON=false

#===============#
# Debug Logging #
#===============#

DEBUG_LOGGING=true
DEBUG_CONSOLE=false

#=============#
# Permissions #
#=============#

# UID=1000
# GID=1000

#==============#
# Node Options #
#==============#

# NOTE: NODE_MAX_OLD_SPACE_SIZE is NOT recognized by Node.js directly.
# This variable is used as a build argument for Docker or CI/CD workflows,
# and is NOT used by Node.js to set the heap size at runtime.
# To configure Node.js memory, use NODE_OPTIONS, e.g.:
# NODE_OPTIONS="--max-old-space-size=6144"
# See: https://nodejs.org/api/cli.html#--max-old-space-sizesize-in-mib
NODE_MAX_OLD_SPACE_SIZE=6144

#===============#
# Configuration #
#===============#
# Use an absolute path, a relative path, or a URL

# CONFIG_PATH="/alternative/path/to/librechat.yaml"

#==================#
# Langfuse Tracing #
#==================#

# Get Langfuse API keys for your project from the project settings page: https://cloud.langfuse.com

# LANGFUSE_PUBLIC_KEY=
# LANGFUSE_SECRET_KEY=
# LANGFUSE_BASE_URL=

#===================================================#
#                     Endpoints                     #
#===================================================#

# ENDPOINTS=openAI,assistants,azureOpenAI,google,anthropic

PROXY=

#===================================#
# Known Endpoints - librechat.yaml  #
#===================================#
# https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints

# ANYSCALE_API_KEY=
# APIPIE_API_KEY=
# COHERE_API_KEY=
# DEEPSEEK_API_KEY=
# DATABRICKS_API_KEY=
# FIREWORKS_API_KEY=
# GROQ_API_KEY=
# HUGGINGFACE_TOKEN=
# MISTRAL_API_KEY=
# OPENROUTER_KEY=
# PERPLEXITY_API_KEY=
# SHUTTLEAI_API_KEY=
# TOGETHERAI_API_KEY=
# UNIFY_API_KEY=
# XAI_API_KEY=

#============#
# Anthropic  #
#============#

ANTHROPIC_API_KEY=user_provided
# ANTHROPIC_MODELS=claude-opus-4-6,claude-opus-4-20250514,claude-sonnet-4-20250514,claude-3-7-sonnet-20250219,claude-3-5-sonnet-20241022,claude-3-5-haiku-20241022,claude-3-opus-20240229,claude-3-sonnet-20240229,claude-3-haiku-20240307
# ANTHROPIC_REVERSE_PROXY=

# Set to true to use Anthropic models through Google Vertex AI instead of direct API
# ANTHROPIC_USE_VERTEX=
# ANTHROPIC_VERTEX_REGION=us-east5

#============#
#   Azure    #
#============#

# Note: these variables are DEPRECATED
# Use the `librechat.yaml` configuration for `azureOpenAI` instead
# You may also continue to use them if you opt out of using the `librechat.yaml` configuration

# AZURE_OPENAI_DEFAULT_MODEL=gpt-3.5-turbo # Deprecated
# AZURE_OPENAI_MODELS=gpt-3.5-turbo,gpt-4 # Deprecated
# AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE # Deprecated
# AZURE_API_KEY= # Deprecated
# AZURE_OPENAI_API_INSTANCE_NAME= # Deprecated
# AZURE_OPENAI_API_DEPLOYMENT_NAME= # Deprecated
# AZURE_OPENAI_API_VERSION= # Deprecated
# AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME= # Deprecated
# AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME= # Deprecated

#=================#
#   AWS Bedrock   #
#=================#

# BEDROCK_AWS_DEFAULT_REGION=us-east-1 # A default region must be provided
# BEDROCK_AWS_ACCESS_KEY_ID=someAccessKey
# BEDROCK_AWS_SECRET_ACCESS_KEY=someSecretAccessKey
# BEDROCK_AWS_SESSION_TOKEN=someSessionToken

# Note: This example list is not meant to be exhaustive. If omitted, all known, supported model IDs will be included for you.
# BEDROCK_AWS_MODELS=anthropic.claude-opus-4-6-v1,anthropic.claude-3-5-sonnet-20240620-v1:0,meta.llama3-1-8b-instruct-v1:0
# Cross-region inference model IDs: us.anthropic.claude-opus-4-6-v1,global.anthropic.claude-opus-4-6-v1

# See all Bedrock model IDs here: https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns

# Notes on specific models:
# The following models are not supported because they do not support streaming:
# ai21.j2-mid-v1

# The following models are not supported because they do not support conversation history:
# ai21.j2-ultra-v1, cohere.command-text-v14, cohere.command-light-text-v14

#============#
#   Google   #
#============#

GOOGLE_KEY=user_provided

# GOOGLE_REVERSE_PROXY=
# Some reverse proxies do not support the X-goog-api-key header, uncomment to pass the API key in the Authorization header instead.
# GOOGLE_AUTH_HEADER=true

# Gemini API (AI Studio)
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.0-flash,gemini-2.0-flash-lite

# Vertex AI
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.0-flash-001,gemini-2.0-flash-lite-001

# GOOGLE_TITLE_MODEL=gemini-2.0-flash-lite-001

# Google Cloud region for Vertex AI (used by both chat and image generation)
# GOOGLE_LOC=us-central1

# Alternative region env var for Gemini Image Generation
# GOOGLE_CLOUD_LOCATION=global

# Vertex AI Service Account Configuration
# Path to your Google Cloud service account JSON file
# GOOGLE_SERVICE_KEY_FILE=/path/to/service-account.json

# Google Safety Settings
# NOTE: These settings apply to both Vertex AI and Gemini API (AI Studio)
#
# For Vertex AI:
# To use the BLOCK_NONE setting, you need either:
# (a) Access through an allowlist via your Google account team, or
# (b) Switch to monthly invoiced billing: https://cloud.google.com/billing/docs/how-to/invoiced-billing
#
# For Gemini API (AI Studio):
# BLOCK_NONE is available by default, no special account requirements.
#
# Available options: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE
#
# GOOGLE_SAFETY_SEXUALLY_EXPLICIT=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_HATE_SPEECH=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_HARASSMENT=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_DANGEROUS_CONTENT=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_CIVIC_INTEGRITY=BLOCK_ONLY_HIGH

#==========================#
# Gemini Image Generation  #
#==========================#

# Gemini Image Generation Tool (for Agents)
# Supports multiple authentication methods in priority order:
# 1. User-provided API key (via GUI)
# 2. GEMINI_API_KEY env var (admin-configured)
# 3. GOOGLE_KEY env var (shared with Google chat endpoint)
# 4. Vertex AI service account (via GOOGLE_SERVICE_KEY_FILE)

# Option A: Use a dedicated Gemini API key for image generation
# GEMINI_API_KEY=your-gemini-api-key

# Option B: Use Vertex AI (no API key needed, uses service account)
# Set this to enable Vertex AI and allow the tool without requiring API keys
# GEMINI_VERTEX_ENABLED=true

# Vertex AI model for image generation (defaults to gemini-2.5-flash-image)
# GEMINI_IMAGE_MODEL=gemini-2.5-flash-image

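# Example Vertex AI setup for image generation (illustrative placeholders only;
# the variable names come from the options above):
# GEMINI_VERTEX_ENABLED=true
# GOOGLE_SERVICE_KEY_FILE=/path/to/service-account.json
# GOOGLE_LOC=us-central1
# GEMINI_IMAGE_MODEL=gemini-2.5-flash-image
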
#============#
#   OpenAI   #
#============#

OPENAI_API_KEY=user_provided
# OPENAI_MODELS=gpt-5,gpt-5-codex,gpt-5-mini,gpt-5-nano,o3-pro,o3,o4-mini,gpt-4.1,gpt-4.1-mini,gpt-4.1-nano,o3-mini,o1-pro,o1,gpt-4o,gpt-4o-mini

DEBUG_OPENAI=false

# TITLE_CONVO=false
# OPENAI_TITLE_MODEL=gpt-4o-mini

# OPENAI_SUMMARIZE=true
# OPENAI_SUMMARY_MODEL=gpt-4o-mini

# OPENAI_FORCE_PROMPT=true

# OPENAI_REVERSE_PROXY=

# OPENAI_ORGANIZATION=

#====================#
#   Assistants API   #
#====================#

ASSISTANTS_API_KEY=user_provided
# ASSISTANTS_BASE_URL=
# ASSISTANTS_MODELS=gpt-4o,gpt-4o-mini,gpt-3.5-turbo-0125,gpt-3.5-turbo-16k-0613,gpt-3.5-turbo-16k,gpt-3.5-turbo,gpt-4,gpt-4-0314,gpt-4-32k-0314,gpt-4-0613,gpt-3.5-turbo-0613,gpt-3.5-turbo-1106,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview

#==========================#
#   Azure Assistants API   #
#==========================#

# Note: You should map your credentials with custom variables according to your Azure OpenAI configuration
# The models for Azure Assistants are also determined by your Azure OpenAI configuration.

# More info, including how to enable use of Assistants with Azure, here:
# https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints/azure#using-assistants-with-azure

CREDS_KEY=f34be427ebb29de8d88c107a71546019685ed8b241d8f2ed00c3df97ad2566f0
CREDS_IV=e2341419ec3dd3d19b13a1a87fafcbfb

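# Note: replace the example CREDS_KEY / CREDS_IV values above with your own
# random hex strings before deploying. For instance (assuming openssl is
# available), a 32-byte key and 16-byte IV can be generated with:
# openssl rand -hex 32   # CREDS_KEY (also suitable for JWT_SECRET / JWT_REFRESH_SECRET below)
# openssl rand -hex 16   # CREDS_IV
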
# Azure AI Search
#-----------------
AZURE_AI_SEARCH_SERVICE_ENDPOINT=
AZURE_AI_SEARCH_INDEX_NAME=
AZURE_AI_SEARCH_API_KEY=

AZURE_AI_SEARCH_API_VERSION=
AZURE_AI_SEARCH_SEARCH_OPTION_QUERY_TYPE=
AZURE_AI_SEARCH_SEARCH_OPTION_TOP=
AZURE_AI_SEARCH_SEARCH_OPTION_SELECT=

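# Illustrative Azure AI Search values (placeholders, not defaults; the endpoint
# follows the standard https://<service-name>.search.windows.net scheme):
# AZURE_AI_SEARCH_SERVICE_ENDPOINT=https://my-service.search.windows.net
# AZURE_AI_SEARCH_INDEX_NAME=my-index
# AZURE_AI_SEARCH_SEARCH_OPTION_QUERY_TYPE=simple
# AZURE_AI_SEARCH_SEARCH_OPTION_TOP=5
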
# OpenAI Image Tools Customization
#----------------
# IMAGE_GEN_OAI_API_KEY= # Create or reuse OpenAI API key for image generation tool
# IMAGE_GEN_OAI_BASEURL= # Custom OpenAI base URL for image generation tool
# IMAGE_GEN_OAI_AZURE_API_VERSION= # Custom Azure OpenAI deployments
# IMAGE_GEN_OAI_MODEL=gpt-image-1 # OpenAI image model (e.g., gpt-image-1, gpt-image-1.5)
# IMAGE_GEN_OAI_DESCRIPTION=
# IMAGE_GEN_OAI_DESCRIPTION_WITH_FILES=Custom description for image generation tool when files are present
# IMAGE_GEN_OAI_DESCRIPTION_NO_FILES=Custom description for image generation tool when no files are present
# IMAGE_EDIT_OAI_DESCRIPTION=Custom description for image editing tool
# IMAGE_GEN_OAI_PROMPT_DESCRIPTION=Custom prompt description for image generation tool
# IMAGE_EDIT_OAI_PROMPT_DESCRIPTION=Custom prompt description for image editing tool

# DALL·E
#----------------
# DALLE_API_KEY=
# DALLE3_API_KEY=
# DALLE2_API_KEY=
# DALLE3_SYSTEM_PROMPT=
# DALLE2_SYSTEM_PROMPT=
# DALLE_REVERSE_PROXY=
# DALLE3_BASEURL=
# DALLE2_BASEURL=

# DALL·E (via Azure OpenAI)
# Note: requires some of the variables above to be set
#----------------
# DALLE3_AZURE_API_VERSION=
# DALLE2_AZURE_API_VERSION=

# Flux
#-----------------
FLUX_API_BASE_URL=https://api.us1.bfl.ai
# FLUX_API_BASE_URL=https://api.bfl.ml

# Get your API key at https://api.us1.bfl.ai/auth/profile
# FLUX_API_KEY=

# Google
#-----------------
GOOGLE_SEARCH_API_KEY=
GOOGLE_CSE_ID=

# Stable Diffusion
#-----------------
SD_WEBUI_URL=http://host.docker.internal:7860

# Tavily
#-----------------
TAVILY_API_KEY=

# Traversaal
#-----------------
TRAVERSAAL_API_KEY=

# WolframAlpha
#-----------------
WOLFRAM_APP_ID=

# Zapier
#-----------------
ZAPIER_NLA_API_KEY=

#==================================================#
#                      Search                      #
#==================================================#

SEARCH=true
MEILI_NO_ANALYTICS=true
MEILI_HOST=http://0.0.0.0:7700
MEILI_MASTER_KEY=DrhYf7zENyR6AlUCKmnz0eYASOQdl6zxH7s7MKFSfFCt

# Optional: Disable indexing, useful in a multi-node setup
# where only one instance should perform an index sync.
# MEILI_NO_SYNC=true

#==================================================#
#         Speech to Text & Text to Speech          #
#==================================================#

STT_API_KEY=
TTS_API_KEY=

#==================================================#
#                       RAG                        #
#==================================================#
# More info: https://www.librechat.ai/docs/configuration/rag_api

# RAG_OPENAI_BASEURL=
# RAG_OPENAI_API_KEY=
# RAG_USE_FULL_CONTEXT=
# EMBEDDINGS_PROVIDER=openai
# EMBEDDINGS_MODEL=text-embedding-3-small

#===================================================#
#                    User System                    #
#===================================================#

#========================#
#       Moderation       #
#========================#

OPENAI_MODERATION=false
OPENAI_MODERATION_API_KEY=
# OPENAI_MODERATION_REVERSE_PROXY=

BAN_VIOLATIONS=true
BAN_DURATION=1000 * 60 * 60 * 2
BAN_INTERVAL=20

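# Worked example: BAN_DURATION is evaluated in milliseconds, so the default
# 1000 * 60 * 60 * 2 = 7,200,000 ms = 2 hours. BAN_INTERVAL=20 means a ban is
# applied each time a user's violation score climbs by another 20 points.
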
LOGIN_VIOLATION_SCORE=1
REGISTRATION_VIOLATION_SCORE=1
CONCURRENT_VIOLATION_SCORE=1
MESSAGE_VIOLATION_SCORE=1
NON_BROWSER_VIOLATION_SCORE=20
TTS_VIOLATION_SCORE=0
STT_VIOLATION_SCORE=0
FORK_VIOLATION_SCORE=0
IMPORT_VIOLATION_SCORE=0
FILE_UPLOAD_VIOLATION_SCORE=0

LOGIN_MAX=7
LOGIN_WINDOW=5
REGISTER_MAX=5
REGISTER_WINDOW=60

LIMIT_CONCURRENT_MESSAGES=true
CONCURRENT_MESSAGE_MAX=2

LIMIT_MESSAGE_IP=true
MESSAGE_IP_MAX=40
MESSAGE_IP_WINDOW=1

LIMIT_MESSAGE_USER=false
MESSAGE_USER_MAX=40
MESSAGE_USER_WINDOW=1

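# How to read the limiter pairs above (window values are in minutes):
# MESSAGE_IP_MAX=40 with MESSAGE_IP_WINDOW=1 allows at most 40 messages per IP
# per minute; LOGIN_MAX=7 with LOGIN_WINDOW=5 allows at most 7 login attempts
# per 5 minutes.
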
ILLEGAL_MODEL_REQ_SCORE=5

#========================#
#        Balance         #
#========================#

# CHECK_BALANCE=false
# START_BALANCE=20000 # note: the number of tokens that will be credited after registration.

#========================#
# Registration and Login #
#========================#

ALLOW_EMAIL_LOGIN=true
ALLOW_REGISTRATION=true
ALLOW_SOCIAL_LOGIN=false
ALLOW_SOCIAL_REGISTRATION=false
ALLOW_PASSWORD_RESET=false
# ALLOW_ACCOUNT_DELETION=true # note: enabled by default if omitted/commented out
ALLOW_UNVERIFIED_EMAIL_LOGIN=true

SESSION_EXPIRY=1000 * 60 * 15
REFRESH_TOKEN_EXPIRY=(1000 * 60 * 60 * 24) * 7

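# Worked example: both expiry values are in milliseconds.
# SESSION_EXPIRY = 1000 * 60 * 15 = 900,000 ms (15 minutes)
# REFRESH_TOKEN_EXPIRY = (1000 * 60 * 60 * 24) * 7 = 604,800,000 ms (7 days)
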
JWT_SECRET=16f8c0ef4a5d391b26034086c628469d3f9f497f08163ab9b40137092f2909ef
JWT_REFRESH_SECRET=eaa5191f2914e30b9387fd84e254e4ba6fc51b4654968a9b0803b456a54b8418

# Discord
DISCORD_CLIENT_ID=
DISCORD_CLIENT_SECRET=
DISCORD_CALLBACK_URL=/oauth/discord/callback

# Facebook
FACEBOOK_CLIENT_ID=
FACEBOOK_CLIENT_SECRET=
FACEBOOK_CALLBACK_URL=/oauth/facebook/callback

# GitHub
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
GITHUB_CALLBACK_URL=/oauth/github/callback
# GitHub Enterprise
# GITHUB_ENTERPRISE_BASE_URL=
# GITHUB_ENTERPRISE_USER_AGENT=

# Google
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
GOOGLE_CALLBACK_URL=/oauth/google/callback

# Apple
APPLE_CLIENT_ID=
APPLE_TEAM_ID=
APPLE_KEY_ID=
APPLE_PRIVATE_KEY_PATH=
APPLE_CALLBACK_URL=/oauth/apple/callback

# OpenID
OPENID_CLIENT_ID=
OPENID_CLIENT_SECRET=
OPENID_ISSUER=
OPENID_SESSION_SECRET=
OPENID_SCOPE="openid profile email"
OPENID_CALLBACK_URL=/oauth/openid/callback
OPENID_REQUIRED_ROLE=
OPENID_REQUIRED_ROLE_TOKEN_KIND=
OPENID_REQUIRED_ROLE_PARAMETER_PATH=
OPENID_ADMIN_ROLE=
OPENID_ADMIN_ROLE_PARAMETER_PATH=
OPENID_ADMIN_ROLE_TOKEN_KIND=
# Determines which user info property returned by the OpenID Provider is stored as the user's username
OPENID_USERNAME_CLAIM=
# Determines which user info property returned by the OpenID Provider is stored as the user's name
OPENID_NAME_CLAIM=
# Optional audience parameter for OpenID authorization requests
OPENID_AUDIENCE=

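# Illustrative example for Microsoft Entra ID (placeholder values; any
# standards-compliant OpenID Connect provider is configured the same way):
# OPENID_CLIENT_ID=00000000-0000-0000-0000-000000000000
# OPENID_CLIENT_SECRET=your-client-secret
# OPENID_ISSUER=https://login.microsoftonline.com/<tenant-id>/v2.0
# OPENID_SESSION_SECRET=any-long-random-string
# OPENID_SCOPE="openid profile email"
# OPENID_CALLBACK_URL=/oauth/openid/callback
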
OPENID_BUTTON_LABEL=
OPENID_IMAGE_URL=
# Set to true to automatically redirect to the OpenID provider when a user visits the login page
# This will bypass the login form completely for users, only use this if OpenID is your only authentication method
OPENID_AUTO_REDIRECT=false
# Set to true to use PKCE (Proof Key for Code Exchange) for OpenID authentication
OPENID_USE_PKCE=false
# Set to true to reuse OpenID tokens for authentication management instead of using the MongoDB session and the custom refresh token.
OPENID_REUSE_TOKENS=
# By default, signing key verification results are cached in order to prevent excessive HTTP requests to the JWKS endpoint.
# If a signing key matching the kid is found, it will be cached, and the next time that kid is requested the signing key will be served from the cache.
# Default is true.
OPENID_JWKS_URL_CACHE_ENABLED=
OPENID_JWKS_URL_CACHE_TIME= # e.g. 600000 ms (10 minutes); leave empty to disable caching
# Set to true to trigger the token exchange flow to acquire an access token for the userinfo endpoint.
OPENID_ON_BEHALF_FLOW_FOR_USERINFO_REQUIRED=
OPENID_ON_BEHALF_FLOW_USERINFO_SCOPE="user.read" # example scope needed for the Microsoft Graph API
# Set to true to use the OpenID Connect end session endpoint for logout
OPENID_USE_END_SESSION_ENDPOINT=
# URL to redirect to after OpenID logout (defaults to ${DOMAIN_CLIENT}/login)
OPENID_POST_LOGOUT_REDIRECT_URI=

#========================#
# SharePoint Integration #
#========================#
# Requires Entra ID (OpenID) authentication to be configured

# Enable SharePoint file picker in chat and agent panels
# ENABLE_SHAREPOINT_FILEPICKER=true

# SharePoint tenant base URL (e.g., https://yourtenant.sharepoint.com)
# SHAREPOINT_BASE_URL=https://yourtenant.sharepoint.com

# Microsoft Graph API and SharePoint scopes for file picker
# SHAREPOINT_PICKER_SHAREPOINT_SCOPE=https://yourtenant.sharepoint.com/AllSites.Read
# SHAREPOINT_PICKER_GRAPH_SCOPE=Files.Read.All
#========================#

# SAML
# Note: If OpenID is enabled, SAML authentication will be automatically disabled.
SAML_ENTRY_POINT=
SAML_ISSUER=
SAML_CERT=
SAML_CALLBACK_URL=/oauth/saml/callback
SAML_SESSION_SECRET=

# Attribute mappings (optional)
SAML_EMAIL_CLAIM=
SAML_USERNAME_CLAIM=
SAML_GIVEN_NAME_CLAIM=
SAML_FAMILY_NAME_CLAIM=
SAML_PICTURE_CLAIM=
SAML_NAME_CLAIM=

# Login button settings (optional)
SAML_BUTTON_LABEL=
SAML_IMAGE_URL=

# Whether the SAML Response should be signed.
# - If "true", the entire `SAML Response` will be signed.
# - If "false" or unset, only the `SAML Assertion` will be signed (default behavior).
# SAML_USE_AUTHN_RESPONSE_SIGNED=

#===============================================#
#  Microsoft Graph API / Entra ID Integration   #
#===============================================#

# Enable Entra ID people search integration in the permissions/sharing system
# When enabled, the people picker will search both the local database and Entra ID
USE_ENTRA_ID_FOR_PEOPLE_SEARCH=false

# When enabled, Entra ID group owners will be considered members of the group
ENTRA_ID_INCLUDE_OWNERS_AS_MEMBERS=false

# Microsoft Graph API scopes needed for people/group search
# Default scopes provide access to user profiles and group memberships
OPENID_GRAPH_SCOPES=User.Read,People.Read,GroupMember.Read.All

# LDAP
LDAP_URL=
LDAP_BIND_DN=
LDAP_BIND_CREDENTIALS=
LDAP_USER_SEARCH_BASE=
# LDAP_SEARCH_FILTER="mail="
LDAP_CA_CERT_PATH=
# LDAP_TLS_REJECT_UNAUTHORIZED=
# LDAP_STARTTLS=
# LDAP_LOGIN_USES_USERNAME=true
# LDAP_ID=
# LDAP_USERNAME=
# LDAP_EMAIL=
# LDAP_FULL_NAME=

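# Illustrative LDAP values (placeholders only; adjust the DN layout to match
# your directory):
# LDAP_URL=ldap://ldap.example.com:389
# LDAP_BIND_DN=cn=admin,dc=example,dc=com
# LDAP_BIND_CREDENTIALS=your-bind-password
# LDAP_USER_SEARCH_BASE=ou=users,dc=example,dc=com
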
#========================#
#  Email Password Reset  #
#========================#

EMAIL_SERVICE=
EMAIL_HOST=
EMAIL_PORT=25
EMAIL_ENCRYPTION=
EMAIL_ENCRYPTION_HOSTNAME=
EMAIL_ALLOW_SELFSIGNED=
EMAIL_USERNAME=
EMAIL_PASSWORD=
EMAIL_FROM_NAME=
EMAIL_FROM=noreply@librechat.ai

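# Illustrative SMTP setup (placeholder values; a typical STARTTLS submission
# port is 587):
# EMAIL_HOST=smtp.example.com
# EMAIL_PORT=587
# EMAIL_ENCRYPTION=starttls
# EMAIL_USERNAME=smtp-user@example.com
# EMAIL_PASSWORD=your-smtp-password
# EMAIL_FROM_NAME="LibreChat"
# EMAIL_FROM=noreply@example.com
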
#========================#
#      Mailgun API       #
#========================#

# MAILGUN_API_KEY=your-mailgun-api-key
# MAILGUN_DOMAIN=mg.yourdomain.com
# EMAIL_FROM=noreply@yourdomain.com
# EMAIL_FROM_NAME="LibreChat"

# Optional: For EU region
# MAILGUN_HOST=https://api.eu.mailgun.net

#========================#
#      Firebase CDN      #
#========================#

FIREBASE_API_KEY=
FIREBASE_AUTH_DOMAIN=
FIREBASE_PROJECT_ID=
FIREBASE_STORAGE_BUCKET=
FIREBASE_MESSAGING_SENDER_ID=
FIREBASE_APP_ID=

#========================#
#     S3 AWS Bucket      #
#========================#

AWS_ENDPOINT_URL=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_REGION=
AWS_BUCKET_NAME=

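# Illustrative values (placeholders; AWS_ENDPOINT_URL is typically only needed
# for S3-compatible storage such as MinIO and can stay empty for AWS itself):
# AWS_ENDPOINT_URL=http://minio:9000
# AWS_ACCESS_KEY_ID=your-access-key-id
# AWS_SECRET_ACCESS_KEY=your-secret-access-key
# AWS_REGION=us-east-1
# AWS_BUCKET_NAME=librechat-files
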
#========================#
#   Azure Blob Storage   #
#========================#

AZURE_STORAGE_CONNECTION_STRING=
AZURE_STORAGE_PUBLIC_ACCESS=false
AZURE_CONTAINER_NAME=files

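# Illustrative connection string (placeholder credentials; this follows the
# standard Azure Storage connection string format):
# AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<base64-key>;EndpointSuffix=core.windows.net"
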
#========================#
#      Shared Links      #
#========================#

ALLOW_SHARED_LINKS=true
ALLOW_SHARED_LINKS_PUBLIC=true

#==============================#
#  Static File Cache Control   #
#==============================#

# Leave commented out to use defaults: 1 day (86400 seconds) for s-maxage and 2 days (172800 seconds) for max-age
# NODE_ENV must be set to production for these to take effect
# STATIC_CACHE_MAX_AGE=172800
# STATIC_CACHE_S_MAX_AGE=86400

# If you have another service in front of your LibreChat doing compression, disable express-based compression here
# DISABLE_COMPRESSION=true

# If you have gzipped versions of uploaded images in the same folder, this will enable gzip scanning and serving of those images
# Note: The images folder is scanned on startup and a map is kept in memory. Be careful with a large number of images.
# ENABLE_IMAGE_OUTPUT_GZIP_SCAN=true

#===================================================#
#                        UI                         #
#===================================================#

APP_TITLE=LibreChat
# CUSTOM_FOOTER="My custom footer"
HELP_AND_FAQ_URL=https://librechat.ai

# SHOW_BIRTHDAY_ICON=true

# Google Tag Manager ID
# ANALYTICS_GTM_ID=your-google-tag-manager-id

# Limit conversation file imports to a certain size in bytes to keep the container
# from maxing out its memory. Uncomment this line and supply a file size in bytes,
# such as the example below of 250 MiB (262144000 bytes).
# CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES=262144000

#===============#
# REDIS Options #
#===============#

# Enable Redis for caching and session storage
# USE_REDIS=true
# Enable Redis for resumable LLM streams (defaults to USE_REDIS value if not set)
# Set to false to use in-memory storage for streams while keeping Redis for other caches
# USE_REDIS_STREAMS=true

# Single Redis instance
# REDIS_URI=redis://127.0.0.1:6379

# Redis cluster (multiple nodes)
# REDIS_URI=redis://127.0.0.1:7001,redis://127.0.0.1:7002,redis://127.0.0.1:7003

# Redis with TLS/SSL encryption and CA certificate
# REDIS_URI=rediss://127.0.0.1:6380
# REDIS_CA=/path/to/ca-cert.pem

# ElastiCache may need an alternate dnsLookup for TLS connections; see "Special Note: Aws Elasticache Clusters with TLS" at https://www.npmjs.com/package/ioredis
# Enable alternative dnsLookup for Redis
# REDIS_USE_ALTERNATIVE_DNS_LOOKUP=true

# Redis authentication (if required)
# REDIS_USERNAME=your_redis_username
# REDIS_PASSWORD=your_redis_password

# Redis key prefix configuration
# Use an environment variable name for a dynamic prefix (recommended for cloud deployments)
# REDIS_KEY_PREFIX_VAR=K_REVISION
# Or use a static prefix directly
# REDIS_KEY_PREFIX=librechat

# Redis connection limits
# REDIS_MAX_LISTENERS=40

# Redis ping interval in seconds (0 = disabled, >0 = enabled)
# When set to a positive integer, Redis clients will ping the server at this interval to keep connections alive
# When unset or 0, no pinging is performed (recommended for most use cases)
# REDIS_PING_INTERVAL=300

# Force specific cache namespaces to use in-memory storage even when Redis is enabled
# Comma-separated list of CacheKeys (e.g., ROLES,MESSAGES)
# FORCED_IN_MEMORY_CACHE_NAMESPACES=ROLES,MESSAGES

# Leader Election Configuration (for multi-instance deployments with Redis)
# Duration in seconds that the leader lease is valid before it expires (default: 25)
# LEADER_LEASE_DURATION=25
# Interval in seconds at which the leader renews its lease (default: 10)
# LEADER_RENEW_INTERVAL=10
# Maximum number of retry attempts when renewing the lease fails (default: 3)
# LEADER_RENEW_ATTEMPTS=3
# Delay in seconds between retry attempts when renewing the lease (default: 0.5)
# LEADER_RENEW_RETRY_DELAY=0.5

#==================================================#
#                      Others                      #
#==================================================#
# You should leave the following commented out #

# NODE_ENV=

# E2E_USER_EMAIL=
# E2E_USER_PASSWORD=

#=====================================================#
#                    Cache Headers                    #
#=====================================================#
# Headers that control caching of the index.html      #
# Default configuration prevents caching to ensure    #
# users always get the latest version. Customize      #
# only if you understand caching implications.        #

# INDEX_CACHE_CONTROL=no-cache, no-store, must-revalidate
# INDEX_PRAGMA=no-cache
# INDEX_EXPIRES=0

# no-cache: Forces validation with server before using cached version
# no-store: Prevents storing the response entirely
# must-revalidate: Prevents using stale content when offline

#=====================================================#
#                     OpenWeather                     #
#=====================================================#
OPENWEATHER_API_KEY=

#====================================#
#   LibreChat Code Interpreter API   #
#====================================#

# https://code.librechat.ai
# LIBRECHAT_CODE_API_KEY=your-key

#======================#
#      Web Search      #
#======================#

# Note: All of the following variable names can be customized.
# Omit values to allow users to provide them.

# For more information on configuration values, see:
# https://librechat.ai/docs/features/web_search

# Search Provider (Required)
# SERPER_API_KEY=your_serper_api_key

# Scraper (Required)
# FIRECRAWL_API_KEY=your_firecrawl_api_key
# Optional: Custom Firecrawl API URL
# FIRECRAWL_API_URL=your_firecrawl_api_url

# Reranker (Required)
# JINA_API_KEY=your_jina_api_key
# or
# COHERE_API_KEY=your_cohere_api_key

#======================#
#  MCP Configuration   #
#======================#

# Treat 401/403 responses as an OAuth requirement when no OAuth metadata is found
# MCP_OAUTH_ON_AUTH_ERROR=true

# Timeout for OAuth detection requests in milliseconds
# MCP_OAUTH_DETECTION_TIMEOUT=5000

# Cache connection status checks for this many milliseconds to avoid expensive verification
# MCP_CONNECTION_CHECK_TTL=60000

# Skip code challenge method validation (e.g., for AWS Cognito, which supports S256 but doesn't advertise it)
# When set to true, forces S256 code challenge even if not advertised in .well-known/openid-configuration
# MCP_SKIP_CODE_CHALLENGE_CHECK=false