* 🦙 fix: Ollama Custom Headers
* chore: Correct import order for resolveHeaders in OllamaClient.js
* fix: Improve error logging for Ollama API model fetch failure
* ci: update Ollama model fetch tests
* ci: Add unit test for passing headers and user object to Ollama fetchModels
* refactor: resolve request-based headers for custom endpoints right before LLM request
* ci: clarify request-based header resolution in initializeClient test
* 🪶 feat: Add Support for Uploading Plaintext Files
feat: delineate between OCR and text handling in fileConfig field of config file
- also adds support for passing in mimetypes as just plain file extensions
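A minimal TypeScript sketch of the routing idea described above, under assumed names (`textMimeTypes`, `ocrMimeTypes`, and `resolveUploadPath` are illustrative, not the real schema): entries may be full mimetypes or bare extensions, and the upload path picks text parsing or OCR depending on which list matches.

```ts
// Illustrative sketch only; field and function names are assumptions, not LibreChat's actual schema.
interface FileTypeConfig {
  textMimeTypes: string[]; // parsed as plain text
  ocrMimeTypes: string[]; // routed through the OCR pipeline
}

/** Accept either a full mimetype ('text/plain') or a bare extension ('md'). */
const normalizeType = (entry: string): string =>
  entry.includes('/') ? entry.toLowerCase() : `.${entry.replace(/^\./, '').toLowerCase()}`;

const matchesType = (file: { mimetype: string; name: string }, entry: string): boolean =>
  entry.includes('/') ? file.mimetype === entry : file.name.toLowerCase().endsWith(entry);

function resolveUploadPath(
  file: { mimetype: string; name: string },
  config: FileTypeConfig,
): 'text' | 'ocr' | 'other' {
  const textTypes = config.textMimeTypes.map(normalizeType);
  const ocrTypes = config.ocrMimeTypes.map(normalizeType);
  if (textTypes.some((t) => matchesType(file, t))) return 'text';
  if (ocrTypes.some((t) => matchesType(file, t))) return 'ocr';
  return 'other';
}
```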
feat: add showLabel bool to support future synthetic component DynamicDropdownInput
feat: add new combination dropdown-input component in params panel to support file type token limits
refactor: move hovercard to side to align with other hovercards
chore: clean up autogenerated comments
feat: add delineation to file upload path between text and ocr configured filetypes
feat: add token limit checks during file upload
refactor: move textParsing out of ocrEnabled logic
refactor: clean up types for filetype config
refactor: finish decoupling DynamicDropdownInput from fileTokenLimits
fix: move image token cost function into its own file to fix a circular dependency causing a unit test to fail, and remove an unused var for the linter
chore: remove out of scope code following review
refactor: make fileTokenLimit conform to existing styles
chore: remove unused localization string
chore: undo changes to DynamicInput and other strays
feat: add fileTokenLimit to all provider config panels
fix: move textParsing back into ocr tool_resource block for now so that it doesn't interfere with other upload types
* 📤 feat: Add RAG API Endpoint Support for Text Parsing (#8849)
* feat: implement RAG API integration for text parsing with fallback to native parsing
* chore: remove TODO now that placeholder and fallback are implemented
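A rough sketch of the fallback flow, using the `parseText`/`parseTextNative` names from the commits; their real signatures differ and live in packages/api, so treat this as illustrative.

```ts
type TextParser = (file: Buffer, filename: string) => Promise<string>;

/** Try the RAG API-backed parser first, then fall back to native parsing on failure. */
async function parseWithFallback(
  file: Buffer,
  filename: string,
  parseText: TextParser, // RAG API text parsing (name from the commits; signature assumed)
  parseTextNative: TextParser, // local fallback parser
): Promise<string> {
  try {
    return await parseText(file, filename);
  } catch {
    return parseTextNative(file, filename);
  }
}
```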
* ✈️ refactor: Migrate Text Parsing to TS (#8892)
* refactor: move generateShortLivedToken to packages/api
* refactor: move textParsing logic into packages/api
* refactor: reduce nesting and dry code with createTextFile
* fix: add proper source handling
* fix: mock new parseText and parseTextNative functions in jest file
* ci: add test coverage for textParser
* 💬 feat: Add Audio File Support to Upload as Text (#8893)
* feat: add STT support for Upload as Text
* refactor: move processAudioFile to packages/api
* refactor: move textParsing from utils to files
* fix: remove audio/mp3 from unsupported mimetypes test since it is now supported
* ✂️ feat: Configurable File Token Limits and Truncation (#8911)
* feat: add configurable fileTokenLimit default value
* fix: add stt to fileConfig merge logic
* fix: add fileTokenLimit to mergeFileConfig logic so configurable value is actually respected from yaml
* feat: add token limiting to parsed text files
* fix: add extraction logic and update tests so fileTokenLimit isn't sent to LLM providers
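A sketch of the two pieces these commits describe, with assumed names and an assumed injected tokenizer (not the actual implementation in packages/api): parsed text is truncated to the configured budget, and `fileTokenLimit` itself is stripped from options before the request reaches the provider.

```ts
type CountTokens = (text: string) => number;

/** Truncate parsed file text to a token budget before it enters the model context. */
function truncateToFileTokenLimit(
  text: string,
  countTokens: CountTokens,
  fileTokenLimit: number,
): string {
  const total = countTokens(text);
  if (total <= fileTokenLimit) return text;
  // Character-proportional cut as a cheap approximation of the exact token boundary.
  return text.slice(0, Math.floor((text.length * fileTokenLimit) / total));
}

/** `fileTokenLimit` is config-only: remove it so it is never forwarded to the LLM provider. */
function extractFileTokenLimit<T extends { fileTokenLimit?: number }>(options: T) {
  const { fileTokenLimit, ...modelOptions } = options;
  return { fileTokenLimit, modelOptions };
}
```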
* fix: address comments
* refactor: rename textTokenLimiter.ts to text.ts
* chore: update form-data package to address CVE-2025-7783 and update package-lock
* feat: use default supported mime types for ocr on frontend file validation
* fix: should be using logger.debug not console.debug
* fix: mock existsSync in text.spec.ts
* fix: mock logger rather than every one of its function calls
* fix: reorganize imports and streamline file upload processing logic
* refactor: update createTextFile function to use destructured parameters and improve readability
* chore: update file validation to use EToolResources for improved type safety
* chore: update import path for types in audio processing module
* fix: update file configuration access and replace console.debug with logger.debug for improved logging
---------
Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
Co-authored-by: Dustin Healy <54083382+dustinhealy@users.noreply.github.com>
* WIP: app.locals refactoring
WIP: appConfig
fix: update memory configuration retrieval to use getAppConfig based on user role
fix: update comment for AppConfig interface to clarify purpose
🏷️ refactor: Update tests to use getAppConfig for endpoint configurations
ci: Update AppService tests to initialize app config instead of app.locals
ci: Integrate getAppConfig into remaining tests
refactor: Update multer storage destination to use promise-based getAppConfig and improve error handling in tests
refactor: Rename initializeAppConfig to setAppConfig and update related tests
ci: Mock getAppConfig in various tests to provide default configurations
refactor: Update convertMCPToolsToPlugins to use mcpManager for server configuration and adjust related tests
chore: rename `Config/getAppConfig` -> `Config/app`
fix: streamline OpenAI image tools configuration by removing direct appConfig dependency and using function parameters
chore: correct parameter documentation for imageOutputType in ToolService.js
refactor: remove `getCustomConfig` dependency in config route
refactor: update domain validation to use appConfig for allowed domains
refactor: use appConfig registration property
chore: remove app parameter from AppService invocation
refactor: update AppConfig interface to correct registration and turnstile configurations
refactor: remove getCustomConfig dependency and use getAppConfig in PluginController, multer, and MCP services
refactor: replace getCustomConfig with getAppConfig in STTService, TTSService, and related files
refactor: replace getCustomConfig with getAppConfig in Conversation and Message models, update tempChatRetention functions to use AppConfig type
refactor: update getAppConfig calls in Conversation and Message models to include user role for temporary chat expiration
ci: update related tests
refactor: update getAppConfig call in getCustomConfigSpeech to include user role
fix: update appConfig usage to access allowedDomains from actions instead of registration
refactor: enhance AppConfig to include fileStrategies and update related file strategy logic
refactor: update imports to use normalizeEndpointName from @librechat/api and remove redundant definitions
chore: remove deprecated unused RunManager
refactor: get balance config primarily from appConfig
refactor: remove customConfig dependency for appConfig and streamline loadConfigModels logic
refactor: remove getCustomConfig usage and use app config in file citations
refactor: consolidate endpoint loading logic into loadEndpoints function
refactor: update appConfig access to use endpoints structure across various services
refactor: implement custom endpoints configuration and streamline endpoint loading logic
refactor: update getAppConfig call to include user role parameter
refactor: streamline endpoint configuration and enhance appConfig usage across services
refactor: replace getMCPAuthMap with getUserMCPAuthMap and remove unused getCustomConfig file
refactor: add type annotation for loadedEndpoints in loadEndpoints function
refactor: move /services/Files/images/parse to TS API
chore: add missing FILE_CITATIONS permission to IRole interface
refactor: restructure toolkits to TS API
refactor: separate manifest logic into its own module
refactor: consolidate tool loading logic into a new tools module for startup logic
refactor: move interface config logic to TS API
refactor: migrate checkEmailConfig to TypeScript and update imports
refactor: add FunctionTool interface and availableTools to AppConfig
refactor: decouple caching and DB operations from AppService, make part of consolidated `getAppConfig`
WIP: fix tests
* fix: rebase conflicts
* refactor: remove app.locals references
* refactor: replace getBalanceConfig with getAppConfig in various strategies and middleware
* refactor: replace appConfig?.balance with getBalanceConfig in various controllers and clients
* test: add balance configuration to titleConvo method in AgentClient tests
* chore: remove unused `openai-chat-tokens` package
* chore: remove unused imports in initializeMCPs.js
* refactor: update balance configuration to use getAppConfig instead of getBalanceConfig
* refactor: integrate configMiddleware for centralized configuration handling
* refactor: optimize email domain validation by removing unnecessary async calls
* refactor: simplify multer storage configuration by removing async calls
* refactor: reorder imports for better readability in user.js
* refactor: replace getAppConfig calls with req.config for improved performance
* chore: replace getAppConfig calls with req.config in tests for centralized configuration handling
* chore: remove unused override config
* refactor: add configMiddleware to endpoint route and replace getAppConfig with req.config
* chore: remove customConfig parameter from TTSService constructor
* refactor: pass appConfig from request to processFileCitations for improved configuration handling
* refactor: remove configMiddleware from endpoint route and retrieve appConfig directly in getEndpointsConfig if not in `req.config`
* test: add mockAppConfig to processFileCitations tests for improved configuration handling
* fix: pass req.config to hasCustomUserVars and call without await after synchronous refactor
* fix: type safety in useExportConversation
* refactor: retrieve appConfig using getAppConfig in PluginController and remove configMiddleware from plugins route, to avoid always retrieving when plugins are cached
* chore: change `MongoUser` typedef to `IUser`
* fix: Add `user` and `config` fields to ServerRequest and update JSDoc type annotations from Express.Request to ServerRequest
* fix: remove unused setAppConfig mock from Server configuration tests
* feat: Add conversation ID support to custom endpoint headers
- Add LIBRECHAT_CONVERSATION_ID to customUserVars when provided
- Pass conversation ID to header resolution for dynamic headers
- Add comprehensive test coverage
Enables custom endpoints to access conversation context via the {{LIBRECHAT_CONVERSATION_ID}} placeholder.
* fix: filter out unresolved placeholders from headers (thanks @MrunmayS)
* feat: add support for request body placeholders in custom endpoint headers
- Add {{LIBRECHAT_BODY_*}} placeholders for conversationId, parentMessageId, messageId
- Update tests to reflect new body placeholder functionality
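A simplified sketch of the placeholder substitution these commits describe (the real `resolveHeaders` in packages/api handles more user fields and custom user vars, and the exact spellings of the `{{LIBRECHAT_BODY_*}}` placeholders below are assumptions): known values are substituted, and any header still containing an unresolved `{{...}}` placeholder is dropped.

```ts
interface RequestBodyFields {
  conversationId?: string;
  parentMessageId?: string;
  messageId?: string;
}

function resolveHeadersSketch(
  headers: Record<string, string>,
  body: RequestBodyFields,
): Record<string, string> {
  const substitutions: Record<string, string | undefined> = {
    '{{LIBRECHAT_CONVERSATION_ID}}': body.conversationId,
    '{{LIBRECHAT_BODY_CONVERSATIONID}}': body.conversationId, // assumed spelling
    '{{LIBRECHAT_BODY_PARENTMESSAGEID}}': body.parentMessageId, // assumed spelling
    '{{LIBRECHAT_BODY_MESSAGEID}}': body.messageId, // assumed spelling
  };
  const resolved: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    let output = value;
    for (const [placeholder, replacement] of Object.entries(substitutions)) {
      if (replacement != null) {
        output = output.split(placeholder).join(replacement);
      }
    }
    // Filter out headers with unresolved placeholders rather than sending them as-is.
    if (!/\{\{[^}]+\}\}/.test(output)) {
      resolved[name] = output;
    }
  }
  return resolved;
}
```

For example, a custom endpoint header configured as `X-Conversation: '{{LIBRECHAT_CONVERSATION_ID}}'` would receive the active conversation's ID on each request.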
* refactor: resolveHeaders
* style: minor styling cleanup
* fix: type error in unit test
* feat: add body to other endpoints
* feat: add body for mcp tool calls
* chore: remove changes that unnecessarily increase scope after clarification of requirements
* refactor: move http.ts to packages/api and have RequestBody intersect with Express request body
* refactor: processMCPEnv now uses single object argument pattern
* refactor: update processMCPEnv to use 'options' parameter and align types across MCP connection classes
* feat: enhance MCP connection handling with dynamic request headers to pass request body fields
---------
Co-authored-by: Gopal Sharma <gopalsharma@gopal.sharma1>
Co-authored-by: s10gopal <36487439+s10gopal@users.noreply.github.com>
Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
* 🕒 refactor: Use Legacy Content for Custom Endpoints to Improve Compatibility
- Also applies to Azure serverless endpoints from AI Foundry
* chore: move useLegacyContent condition before early return
* fix: Ensure useLegacyContent is set only when options are available
* Enhanced existing tests for the `resolveHeaders` function to cover all user field placeholders and messy scenarios.
* Added basic integration tests for custom endpoints initialization file
* 🔧 fix: enhance client options handling in AgentClient and set default recursion limit
- Updated the recursion limit to default to 25 if not specified in agentsEConfig.
- Enhanced client options in AgentClient to include model parameters such as apiKey and anthropicApiUrl from agentModelParams.
- Updated requestOptions in the anthropic endpoint to use reverseProxyUrl as anthropicApiUrl.
* Enhance LLM configuration tests with edge case handling
* chore: add return type annotation for getCustomEndpointConfig function
* fix: update modelOptions handling to use optional chaining and default to empty object in multiple endpoint initializations
* chore: update @librechat/agents to version 2.4.42
* refactor: streamline agent endpoint configuration and enhance client options handling for title generations
- Introduced a new `getProviderConfig` function to centralize provider configuration logic.
- Updated `AgentClient` to utilize the new provider configuration, improving clarity and maintainability.
- Removed redundant code related to endpoint initialization and model parameter handling.
- Enhanced error logging for missing endpoint configurations.
* fix: add abort handling for image generation and editing in OpenAIImageTools
* ci: enhance getLLMConfig tests to verify fetchOptions and dispatcher properties
* fix: use optional chaining for endpointOption properties in getOptions
* fix: increase title generation timeout from 25s to 45s, pass `endpointOption` to `getOptions`
* fix: update file filtering logic in getToolFilesByIds to ensure text field is properly checked
* fix: add error handling for empty OCR results in uploadMistralOCR and uploadAzureMistralOCR
* fix: enhance error handling in file upload to include 'No OCR result' message
* chore: update error messages in uploadMistralOCR and uploadAzureMistralOCR
* fix: enhance filtering logic in getToolFilesByIds to include context checks for OCR resources to only include files directly attached to agent
---------
Co-authored-by: Matt Burnett <matt.burnett@shopify.com>
* 🧠 feat: User Memories for Conversational Context
chore: mcp typing, use `t`
WIP: first pass, Memories UI
- Added MemoryViewer component for displaying, editing, and deleting user memories.
- Integrated data provider hooks for fetching, updating, and deleting memories.
- Implemented pagination and loading states for better user experience.
- Created unit tests for MemoryViewer to ensure functionality and interaction with data provider.
- Updated translation files to include new UI strings related to memories.
chore: move mcp-related files to own directory
chore: rename librechat-mcp to librechat-api
WIP: first pass, memory processing and data schemas
chore: linting in fileSearch.js query description
chore: rename librechat-api to @librechat/api across the project
WIP: first pass, functional memory agent
feat: add MemoryEditDialog and MemoryViewer components for managing user memories
- Introduced MemoryEditDialog for editing memory entries with validation and toast notifications.
- Updated MemoryViewer to support editing and deleting memories, including pagination and loading states.
- Enhanced data provider to handle memory updates with optional original key for better management.
- Added new localization strings for memory-related UI elements.
feat: add memory permissions management
- Implemented memory permissions in the backend, allowing roles to have specific permissions for using, creating, updating, and reading memories.
- Added new API endpoints for updating memory permissions associated with roles.
- Created a new AdminSettings component for managing memory permissions in the frontend.
- Integrated memory permissions into the existing roles and permissions schemas.
- Updated the interface to include memory settings and permissions.
- Enhanced the MemoryViewer component to conditionally render admin settings based on user roles.
- Added localization support for memory permissions in the translation files.
feat: move AdminSettings component to a new position in MemoryViewer for better visibility
refactor: clean up commented code in MemoryViewer component
feat: enhance MemoryViewer with search functionality and improve MemoryEditDialog integration
- Added a search input to filter memories in the MemoryViewer component.
- Refactored MemoryEditDialog to accept children for better customization.
- Updated MemoryViewer to utilize the new EditMemoryButton and DeleteMemoryButton components for editing and deleting memories.
- Improved localization support by adding new strings for memory filtering and deletion confirmation.
refactor: optimize memory filtering in MemoryViewer using match-sorter
- Replaced manual filtering logic with match-sorter for improved search functionality.
- Enhanced performance and readability of the filteredMemories computation.
feat: enhance MemoryEditDialog with triggerRef and improve updateMemory mutation handling
feat: implement access control for MemoryEditDialog and MemoryViewer components
refactor: remove commented out code and create runMemory method
refactor: rename role based files
feat: implement access control for memory usage in AgentClient
refactor: simplify checkVisionRequest method in AgentClient by removing commented-out code
refactor: make `agents` dir in api package
refactor: migrate Azure utilities to TypeScript and consolidate imports
refactor: move sanitizeFilename function to a new file and update imports, add related tests
refactor: update LLM configuration types and consolidate Azure options in the API package
chore: linting
chore: import order
refactor: replace getLLMConfig with getOpenAIConfig and remove unused LLM configuration file
chore: update winston-daily-rotate-file to version 5.0.0 and add object-hash dependency in package-lock.json
refactor: move primeResources and optionalChainWithEmptyCheck functions to resources.ts and update imports
refactor: move createRun function to a new run.ts file and update related imports
fix: ensure safeAttachments is correctly typed as an array of TFile
chore: add node-fetch dependency and refactor fetch-related functions into packages/api/utils, removing the old generators file
refactor: enhance TEndpointOption type by using Pick to streamline endpoint fields and add new properties for model parameters and client options
feat: implement initializeOpenAIOptions function and update OpenAI types for enhanced configuration handling
fix: update types due to new TEndpointOption typing
fix: ensure safe access to group parameters in initializeOpenAIOptions function
fix: remove redundant API key validation comment in initializeOpenAIOptions function
refactor: rename initializeOpenAIOptions to initializeOpenAI for consistency and update related documentation
refactor: decouple req.body fields and tool loading from initializeAgentOptions
chore: linting
refactor: adjust column widths in MemoryViewer for improved layout
refactor: simplify agent initialization by creating loadAgent function and removing unused code
feat: add memory configuration loading and validation functions
WIP: first pass, memory processing with config
feat: implement memory callback and artifact handling
feat: implement memory artifacts display and processing updates
feat: add memory configuration options and schema validation for validKeys
fix: update MemoryEditDialog and MemoryViewer to handle memory state and display improvements
refactor: remove padding from BookmarkTable and MemoryViewer headers for consistent styling
WIP: initial tokenLimit config and move Tokenizer to @librechat/api
refactor: update mongoMeili plugin methods to use callback for better error handling
feat: enhance memory management with token tracking and usage metrics
- Added token counting for memory entries to enforce limits and provide usage statistics.
- Updated memory retrieval and update routes to include total token usage and limit.
- Enhanced MemoryEditDialog and MemoryViewer components to display memory usage and token information.
- Refactored memory processing functions to handle token limits and provide feedback on memory capacity.
feat: implement memory artifact handling in attachment handler
- Enhanced useAttachmentHandler to process memory artifacts when receiving updates.
- Introduced handleMemoryArtifact utility to manage memory updates and deletions.
- Updated query client to reflect changes in memory state based on incoming data.
refactor: restructure web search key extraction logic
- Moved the logic for extracting API keys from the webSearchAuth configuration into a dedicated function, getWebSearchKeys.
- Updated webSearchKeys to utilize the new function for improved clarity and maintainability.
- Prevents build time errors
feat: add personalization settings and memory preferences management
- Introduced a new Personalization tab in settings to manage user memory preferences.
- Implemented API endpoints and client-side logic for updating memory preferences.
- Enhanced user interface components to reflect personalization options and memory usage.
- Updated permissions to allow users to opt out of memory features.
- Added localization support for new settings and messages related to personalization.
style: personalization switch class
feat: add PersonalizationIcon and align Side Panel UI
feat: implement memory creation functionality
- Added a new API endpoint for creating memory entries, including validation for key and value.
- Introduced MemoryCreateDialog component for user interface to facilitate memory creation.
- Integrated token limit checks to prevent exceeding user memory capacity.
- Updated MemoryViewer to include a button for opening the memory creation dialog.
- Enhanced localization support for new messages related to memory creation.
feat: enhance message processing with configurable window size
- Updated AgentClient to use a configurable message window size for processing messages.
- Introduced messageWindowSize option in memory configuration schema with a default value of 5.
- Improved logic for selecting messages to process based on the configured window size.
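A sketch under assumed shapes (`validKeys`, `tokenLimit`, and `messageWindowSize` are the options these commits mention; everything else is illustrative): the memory agent only sees the last `messageWindowSize` messages (default 5), and entries are rejected once the token budget is exhausted.

```ts
interface MemoryConfigSketch {
  validKeys?: string[]; // allowed memory keys
  tokenLimit?: number; // total token budget across all memory entries
  messageWindowSize?: number; // how many recent messages the memory agent processes
}

function selectMemoryWindow<T>(messages: T[], config: MemoryConfigSketch): T[] {
  const windowSize = config.messageWindowSize ?? 5; // default from the schema
  return messages.slice(-windowSize);
}

function withinTokenBudget(
  entries: Array<{ key: string; tokenCount: number }>,
  config: MemoryConfigSketch,
): boolean {
  if (config.tokenLimit == null) return true;
  const used = entries.reduce((sum, entry) => sum + entry.tokenCount, 0);
  return used <= config.tokenLimit;
}
```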
chore: update librechat-data-provider version to 0.7.87 in package.json and package-lock.json
chore: remove OpenAPIPlugin and its associated tests
chore: remove MIGRATION_README.md as migration tasks are completed
ci: fix backend tests
chore: remove unused translation keys from localization file
chore: remove problematic test file and unused var in AgentClient
chore: remove unused import and import directly for JSDoc
* feat: add api package build stage in Dockerfile for improved modularity
* docs: reorder build steps in contributing guide for clarity
* refactor: simplified getCustomConfig func
* feat: persist values for parameters with optionType of custom
* refactor: moved `Parameters/settings.ts` into `data-provider` so that both frontend and backend code can use it
* feat: loadCustomConfig can now parse and validate customParams property for `endpoints.custom` in `librechat.yaml`
* fix: linter
* chore: removed .strict() in config.ts
* chore: added packages/data-provider/src to SOURCE_DIRS for i18n check
* chore: removed unnecessary lodash imports
* chore: addressed PR comments
* fix: lint for updated files
* chore: better import for lodash (without relying on tree-shaking)
* refactor: agent token handling to use createHandleLLMNewToken for improved closure
* refactor: update llmConfig to use maxTokens instead of max_tokens for consistency
* chore: remove unused redis file
* chore: bump keyv dependencies, and update related imports
* refactor: Implement IoRedis client for rate limiting across middleware, as node-redis via keyv is not compatible
* fix: Set max listeners to expected amount
* WIP: memory improvements
* refactor: Simplify getAbortData assignment in createAbortController
* refactor: Update getAbortData to use WeakRef for content management
* WIP: memory improvements in agent chat requests
* refactor: Enhance memory management with finalization registry and cleanup functions
* refactor: Simplify domainParser calls by removing unnecessary request parameter
* refactor: Update parameter types for action tools and agent loading functions to use minimal configs
* refactor: Simplify domainParser tests by removing unnecessary request parameter
* refactor: Simplify domainParser call by removing unnecessary request parameter
* refactor: Enhance client disposal by nullifying additional properties to improve memory management
* refactor: Improve title generation by adding abort controller and timeout handling, consolidate request cleanup
* refactor: Update checkIdleConnections to skip current user when checking for idle connections if passed
* refactor: Update createMCPTool to derive userId from config and handle abort signals
* refactor: Introduce createTokenCounter function and update tokenCounter usage; enhance disposeClient to reset Graph values
* refactor: Update getMCPManager to accept userId parameter for improved idle connection handling
* refactor: Extract logToolError function for improved error handling in AgentClient
* refactor: Update disposeClient to clear handlerRegistry and graphRunnable references in client.run
* refactor: Extract createHandleNewToken function to streamline token handling in initializeClient
* chore: bump @librechat/agents
* refactor: Improve timeout handling in addTitle function for better error management
* refactor: Introduce createFetch instead of using class method
* refactor: Enhance client disposal and request data handling in AskController and EditController
* refactor: Update import statements for AnthropicClient and OpenAIClient to use specific paths
* refactor: Use WeakRef for response handling in SplitStreamHandler to prevent memory leaks
* refactor: Simplify client disposal and rename getReqData to processReqData in AskController and EditController
* refactor: Improve logging structure and parameter handling in OpenAIClient
* refactor: Remove unused GraphEvents and improve stream event handling in AnthropicClient and OpenAIClient
* refactor: Simplify client initialization in AskController and EditController
* refactor: Remove unused mock functions and implement in-memory store for KeyvMongo
* chore: Update dependencies in package-lock.json to latest versions
* refactor: Await token usage recording in OpenAIClient to ensure proper async handling
* refactor: Remove handleAbort route from multiple endpoints and enhance client disposal logic
* refactor: Enhance abort controller logic by managing abortKey more effectively
* refactor: Add newConversation handling in useEventHandlers for improved conversation management
* fix: dropParams
* refactor: Use optional chaining for safer access to request properties in BaseClient
* refactor: Move client disposal and request data processing logic to cleanup module for better organization
* refactor: Remove aborted request check from addTitle function for cleaner logic
* feat: Add Grok 3 model pricing and update tests for new models
* chore: Remove trace warnings and inspect flags from backend start script used for debugging
* refactor: Replace user identifier handling with userId for consistency across controllers, use UserId in clientRegistry
* refactor: Enhance client disposal logic to prevent memory leaks by clearing additional references
* chore: Update @librechat/agents to version 2.4.14 in package.json and package-lock.json
* chore: update @librechat/agents to version 2.1.9
* feat: xAI standalone provider for agents
* chore: bump librechat-data-provider version to 0.7.6997
* fix: reorder import statements and enhance user listing output
* fix: Update Docker Compose commands to support v2 syntax with fallback
* 🔧 fix: drop `reasoning_effort` for o1-preview/mini models
* chore: requireLocalAuth logging
* fix: edge case artifact message editing logic to handle `new` conversation IDs
* fix: remove `temperature` from model options in OpenAIClient if o1-mini/preview
* fix: update type annotation for fetchPromisesMap to use Promise<string[]> instead of string[]
* feat: anthropic model fetching
* fix: update model name to use EModelEndpoint.openAI in fetchModels and fetchOpenAIModels
* fix: add error handling to modelController for loadModels
* fix: add error handling and logging for model fetching in loadDefaultModels
* ci: update getAnthropicModels tests to be asynchronous
* feat: add user ID to model options in OpenAI and custom endpoint initialization
---------
Co-authored-by: Andrei Berceanu <andreicberceanu@gmail.com>
Co-authored-by: KiGamji <maloyh44@gmail.com>
* feat: Add thinking and thinkingBudget parameters for Bedrock Anthropic models
* chore: Update @librechat/agents to version 2.1.8
* refactor: change region order in params
* refactor: Add maxTokens parameter to conversation preset schema
* refactor: Update agent client to use bedrockInputSchema and improve error handling for model parameters
* refactor: streamline/optimize llmConfig initialization and saving for bedrock
* fix: ensure config titleModel is used for all endpoints
* refactor: enhance OpenAIClient and agent initialization to support endpoint checks for OpenRouter
* chore: bump @google/generative-ai
* feat: Refactor ModelEndHandler to collect usage metadata only if it exists
* feat: google tool end handling, custom anthropic class for better token ux
* refactor: differentiate between client <> request options
* feat: initial support for google agents
* feat: only cache messages with non-empty text
* feat: Cache non-empty messages in chatV2 controller
* fix: anthropic llm client options llmConfig
* refactor: streamline client options handling in LLM configuration
* fix: VertexAI Agent Auth & Tool Handling
* fix: additional fields for llmConfig; however, customHeaders are not supported by langchain and require an upstream PR
* feat: set default location for vertexai LLM configuration
* fix: outdated OpenAI Client options for getLLMConfig
* chore: agent provider options typing
* chore: add note about currently unsupported customHeaders in langchain GenAI client
* fix: skip transaction creation when rawAmount is NaN
* feat: Code Interpreter API & File Search Agent Uploads
chore: add back code files
wip: first pass, abstract key dialog
refactor: influence checkbox on key changes
refactor: update localization keys for 'execute code' to 'run code'
wip: run code button
refactor: add throwError parameter to loadAuthValues and getUserPluginAuthValue functions
feat: first pass, API tool calling
fix: handle missing toolId in callTool function and return 404 for non-existent tools
feat: show code outputs
fix: improve error handling in callTool function and log errors
fix: handle potential null value for filepath in attachment destructuring
fix: normalize language before rendering and prevent null return
fix: add loading indicator in RunCode component while executing code
feat: add support for conditional code execution in Markdown components
feat: attachments
refactor: remove bash
fix: pass abort signal to graph/run
refactor: debounce and rate limit tool call
refactor: increase debounce delay for execute function
feat: set code output attachments
feat: image attachments
refactor: apply message context
refactor: pass `partIndex`
feat: toolCall schema/model/methods
feat: block indexing
feat: get tool calls
chore: imports
chore: typing
chore: condense type imports
feat: get tool calls
fix: block indexing
chore: typing
refactor: update tool calls mapping to support multiple results
fix: add unique key to nav link for rendering
wip: first pass, tool call results
refactor: update query cache from successful tool call mutation
style: improve result switcher styling
chore: note on using `.toObject()`
feat: add agent_id field to conversation schema
chore: typing
refactor: rename agentMap to agentsMap for consistency
feat: Agent Name as chat input placeholder
chore: bump agents
📦 chore: update @langchain dependencies to latest versions to match agents package
📦 chore: update @librechat/agents dependency to version 1.8.0
fix: Aborting agent stream removes sender; fix(bedrock): completion removes preset name label
refactor: remove direct file parameter to use req.file, add `processAgentFileUpload` for image uploads
feat: upload menu
feat: prime message_file resources
feat: implement conversation access validation in chat route
refactor: remove file parameter from processFileUpload and use req.file instead
feat: add savedMessageIds set to track saved message IDs in BaseClient, to prevent unnecessary double-write to db
feat: prevent duplicate message saves by checking savedMessageIds in AgentController
refactor: skip legacy RAG API handling for agents
feat: add files field to convoSchema
refactor: update request type annotations from Express.Request to ServerRequest in file processing functions
feat: track conversation files
fix: resendFiles, addPreviousAttachments handling
feat: add ID validation for session_id and file_id in download route
feat: entity_id for code file uploads/downloads
fix: code file edge cases
feat: delete related tool calls
feat: add stream rate handling for LLM configuration
feat: enhance system content with attached file information
fix: improve error logging in resource priming function
* WIP: PoC, sequential agents
WIP: PoC Sequential Agents, first pass content data + bump agents package
fix: package-lock
WIP: PoC, o1 support, refactor bufferString
feat: convertJsonSchemaToZod
fix: form issues and schema defining erroneous model
fix: max length issue on agent form instructions, limit conversation messages to sequential agents
feat: add abort signal support to createRun function and AgentClient
feat: PoC, hide prior sequential agent steps
fix: update parameter naming from config to metadata in event handlers for clarity, add model to usage data
refactor: use only last contentData, track model for usage data
chore: bump agents package
fix: content parts issue
refactor: filter contentParts to include tool calls and relevant indices
feat: show function calls
refactor: filter context messages to exclude tool calls when no tools are available to the agent
fix: ensure tool call content is not undefined in formatMessages
feat: add agent_id field to conversationPreset schema
feat: hide sequential agents
feat: increase upload toast duration to 10 seconds
* refactor: tool context handling & update Code API Key Dialog
feat: toolContextMap
chore: skipSpecs -> useSpecs
ci: fix handleTools tests
feat: API Key Dialog
* feat: Agent Permissions Admin Controls
feat: replace label with button for prompt permission toggle
feat: update agent permissions
feat: enable experimental agents and streamline capability configuration
feat: implement access control for agents and enhance endpoint menu items
feat: add welcome message for agent selection in localization
feat: add agents permission to access control and update version to 0.7.57
* fix: update types in useAssistantListMap and useMentions hooks for better null handling
* feat: mention agents
* fix: agent tool resource race conditions when deleting agent tool resource files
* feat: add error handling for code execution with user feedback
* refactor: rename AdminControls to AdminSettings for clarity
* style: add gap to button in AdminSettings for improved layout
* refactor: separate agent query hooks and check access to enable fetching
* fix: remove unused provider from agent initialization options, creates issue with custom endpoints
* refactor: remove redundant/deprecated modelOptions from AgentClient processes
* chore: update @librechat/agents to version 1.8.5 in package.json and package-lock.json
* fix: minor styling issues + agent panel uniformity
* fix: agent edge cases when set endpoint is no longer defined
* refactor: remove unused cleanup function call from AppService
* fix: update link in ApiKeyDialog to point to pricing page
* fix: improve type handling and layout calculations in SidePanel component
* fix: add missing localization string for agent selection in SidePanel
* chore: form styling and localizations for upload filesearch/code interpreter
* fix: model selection placeholder logic in AgentConfig component
* style: agent capabilities
* fix: add localization for provider selection and improve dropdown styling in ModelPanel
* refactor: prefer gpt-4o-mini over gpt-3.5-turbo
* fix: agents configuration for loadDefaultInterface and update related tests
* feat: DALLE Agents support
* feat: Add CodeArtifacts component to Beta settings tab
* chore: Update npm dependency to @codesandbox/sandpack-react@2.18.2
* WIP: artifacts first pass
* WIP: first pass, remark-directive
* chore: revert markdown to original component + new artifacts rendering
* refactor: first pass rewrite
* refactor: add throttling
* first pass styling
* style: Add Radix Tabs, more styling changes
* feat: second pass
* style: code styling
* fix: package markdown fixes
* feat: Add useEffect hook to Artifacts component for visibility control, slide in animation
* fix: only set artifact if there is content
* refactor: typing and make latest artifact active if the number of artifacts changed
* feat: artifacts + shadcnui
* feat: Add Copy Code button to Artifacts component
* feat: first pass streaming updates
* refactor: optimize ordering of artifacts in Artifacts component
* refactor: optimize ordering of artifacts and add latest artifact activation in Artifacts component
* refactor: add order prop to Artifact
* feat: update to latest, use update time for ordering
* refactor: optimize ordering of artifacts and activate latest artifact in Artifacts component
* wip: remove thinking text and artifact formatting if empty
* refactor: optimize Markdown rendering and add support for code artifacts
* feat: global state for current artifact Id and set on artifact preview click
* refactor: Rename CodePreview component to ArtifactButton
* refactor: apply growth to artifact frame so artifact preview can take full space
* refactor: remove artifactIdsState
* refactor: nullify artifact state and reset on empty conversation
* feat: reset artifact state on conversation change
* feat: artifacts system prompt in backend
* refactor: update UI artifact toggle label to match localization key
* style: remove ArtifactButton inline-block styling
* feat: memoize ArtifactPreview, add html support
* refactor: abstract out components
* chore: bump react-resizable-panel
* refactor: resizable panel order props
* fix: side panel resizing crashes
* style: temporarily remove scrolling, add better styling
* chore: remove thinking for now
* chore: preprocess artifacts for now
* feat: Add auto scrolling to CodeMarkdown (artifacts)
* feat: autoswitch to preview
* feat: auto switch to code, adjust prompt, remove unused code
* feat: refresh button
* feat: open/close artifacts
* wip: mermaid
* refactor: w-fit Artifact button
* chore: organize code
* feat: first pass mermaid
* refactor: improve panning logic in MermaidDiagram component
* feat: center/zoom on first render
* refactor: add centering with reset button
* style: mermaid styling
* refactor: add back MermaidDiagram
* fix: static/html template
* fix: mermaid
* add examples to artifacts prompt
* refactor: fix CodeBar plugin prop logic
* refactor: remove unnecessary mention of artifacts when not requested
* fix: remove preprocessCodeArtifacts function and fix imports
* feat: improve artifacts guidelines and remove unnecessary mentions
* refactor: improve artifacts guidelines and remove unnecessary mentions
* chore: uninstall unused packages
* chore: bump vite
* chore: update three dependency to version 0.167.1
* refactor: move beta settings, add additional artifacts toggles
* feat: artifacts mode toggles
* refactor: adjust prompt
* feat: shadcnui instructions
* feat: code artifacts custom prompt mode
* chore: Update artifacts UI labels and instructions localizations
* refactor: Remove unused code in Markdown component
* refactor: use parseCompactConvo in buildOptions, and generate no default values for the API to avoid weird model behavior with defaults
* refactor: always show cursor when the markdown component is empty (preferable to not showing it)
* refactor(OpenAISettings): use config object for setting defaults app-wide
* refactor: Use removeNullishValues in buildOptions for ALL endpoints
* fix: add missing conversationId to title methods for transactions; refactor(GoogleClient): model options, set no default, add todo note for recording token usage
* fix: at minimum set a model default, as is required by API (edge case)
* chore: use node-fetch for OpenAIClient fetch key for non-crashing usage of AbortController in Bun runtime
* chore: variable order
* fix(useSSE): prevent finalHandler call in abortConversation to update messages/conversation after user navigated away
* chore: params order
* refactor: organize intermediate message logic and ensure correct variables are passed
* fix: Add stt and tts routes before upload limiters, prevent bans
* fix(abortRun): temp fix to delete unfinished messages to avoid message thread parent relationship issues
* refactor: Update AnthropicClient to use node-fetch for fetch key and add proxy support
* fix(gptPlugins): ensure parentMessageId/messageId relationship is maintained
* feat(BaseClient): custom fetch function to analyze/edit payloads just before sending (also prevents abortController crash on Bun runtime)
* feat: `directEndpoint` and `titleMessageRole` custom endpoint options
* chore: Bump version to 0.6.6 in data-provider package.json
* chore: make frequent 'error' log into 'debug' log
* feat: add maxContextTokens as a conversation field
* refactor(settings): increase popover height
* feat: add DynamicInputNumber and maxContextTokens to all endpoints that support it (frontend), fix schema
* feat: maxContextTokens handling (backend)
* style: revert popover height
* feat: max tokens
* fix: Ollama Vision firebase compatibility
* fix: Ollama Vision, use message_file_map to determine multimodal request
* refactor: bring back MobileNav and improve title styling
* WIP: first pass ModelSpecs
* refactor(onSelectEndpoint): use `getConvoSwitchLogic`
* feat: introduce iconURL, greeting, frontend fields for conversations/presets/messages
* feat: conversation.iconURL & greeting in Landing
* feat: conversation.iconURL & greeting in New Chat button
* feat: message.iconURL
* refactor: ConversationIcon -> ConvoIconURL
* WIP: add spec as a conversation field
* refactor: useAppStartup, set spec on initial load for new chat, allow undefined spec, add localStorage keys enum, additional type fields for spec
* feat: handle `showIconInMenu`, `showIconInHeader`, undefined `iconURL` and no specs on initial load
* chore: handle undefined or empty modelSpecs
* WIP: first pass, modelSpec schema for custom config
* refactor: move default filtered tools definition to ToolService
* feat: pass modelSpecs from backend via startupConfig
* refactor: modelSpecs config, return and define list
* fix: react error and include iconURL in responseMessage
* refactor: add iconURL to responseMessage only
* refactor: getIconEndpoint
* refactor: pass TSpecsConfig
* fix(assistants): differentiate compactAssistantSchema, correctly resets shared conversation state with other endpoints
* refactor: assistant id prefix localStorage key
* refactor: add more LocalStorageKeys and replace hardcoded values
* feat: prioritize spec on new chat behavior: last selected modelSpec behavior (localStorage)
* feat: first pass, interface config
* chore: WIP, todo: add warnings based on config.modelSpecs settings.
* feat: enforce modelSpecs if configured
* feat: show config file yaml errors
* chore: delete unused legacy Plugins component
* refactor: set tools to localStorage from recoil store
* chore: add stable recoil setter to useEffect deps
* refactor: save tools to conversation documents
* style(MultiSelectPop): dynamic height, remove unused import
* refactor(react-query): use localstorage keys and pass config to useAvailablePluginsQuery
* feat(utils): add mapPlugins
* refactor(Convo): use conversation.tools if defined, lastSelectedTools if not
* refactor: remove unused legacy code using `useSetOptions`, remove conditional flag `isMultiChat` for using legacy settings
* refactor(PluginStoreDialog): add exhaustive-deps which are stable react state setters
* fix(HeaderOptions): pass `popover` as true
* refactor(useSetStorage): use project enums
* refactor: use LocalStorageKeys enum
* fix: prevent setConversation from setting falsy values in lastSelectedTools
* refactor: use map for availableTools state and available Plugins query
* refactor(updateLastSelectedModel): organize logic better and add note on purpose
* fix(setAgentOption): prevent resetting last model to secondary model for gptPlugins
* refactor(buildDefaultConvo): use enum
* refactor: remove `useSetStorage` and consolidate areas where conversation state is saved to localStorage
* fix: conversations retain tools on refresh
* fix(gptPlugins): prevent nullish tools from being saved
* chore: delete useServerStream
* refactor: move initial plugins logic to useAppStartup
* refactor(MultiSelectDropDown): add more pass-in className props
* feat: use tools in presets
* chore: delete unused usePresetOptions
* refactor: new agentOptions default handling
* chore: note
* feat: add label and custom instructions to agents
* chore: remove 'disabled with tools' message
* style: move plugins to 2nd column in parameters
* fix: TPreset type for agentOptions
* fix: interface controls
* refactor: add interfaceConfig, use Separator within Switcher
* refactor: hide Assistants panel if interface.parameters are disabled
* fix(Header): only show modelSpecs if the list is greater than 0
* refactor: separate MessageIcon logic from useMessageHelpers for better react rule-following
* fix(AppService): don't use reserved keyword 'interface'
* feat: set existing Icon for custom endpoints through iconURL
* fix(ci): tests passing for App Service
* docs: refactor custom_config.md for readability and better organization, also include missing values
* docs: interface section and re-organize docs
* docs: update modelSpecs info
* chore: remove unused files
* chore: remove unused files
* chore: move useSetIndexOptions
* chore: remove unused file
* chore: move useConversation(s)
* chore: move useDefaultConvo
* chore: move useNavigateToConvo
* refactor: use plugin install hook so it can be used elsewhere
* chore: import order
* update docs
* refactor(OpenAI/Plugins): allow modelLabel as an initial value for chatGptLabel
* chore: remove unused EndpointOptionsPopover and hide 'Save as Preset' button if preset UI visibility disabled
* feat(loadDefaultInterface): issue warnings based on values
* feat: changelog for custom config file
* docs: add additional changelog note
* fix: prevent unavailable tool selection from preset and update availableTools on Plugin installations
* feat: add `filteredTools` option in custom config
* chore: changelog
* fix(MessageIcon): always overwrite conversation.iconURL in messageSettings
* fix(ModelSpecsMenu): icon edge cases
* fix(NewChat): dynamic icon
* fix(PluginsClient): always include endpoint in responseMessage
* fix: always include endpoint and iconURL in responseMessage across different response methods
* feat: interchangeable keys for modelSpec enforcement
* refactor: re-purpose `resendImages` as `resendFiles`
* refactor: re-purpose `resendImages` as `resendFiles`
* feat: upload general files
* feat: embed file during upload
* feat: delete file embeddings on file deletion
* chore(fileConfig): add epub+zip type
* feat(encodeAndFormat): handle non-image files
* feat(createContextHandlers): build context prompt from file attachments and successful RAG
* fix: prevent non-temp files as well as embedded files from being deleted on new conversation
* fix: remove temp_file_id on usage, prevent non-temp files as well as embedded files from being deleted on new conversation
* fix: prevent non-temp files as well as embedded files from being deleted on new conversation
* feat(OpenAI/Anthropic/Google): basic RAG support
* fix: delete `resendFiles` only when true (Default)
* refactor(RAG): update endpoints and pass JWT
* fix(resendFiles): default values
* fix(context/processFile): query unique ids only
* feat: rag-api.yaml
* feat: file upload improved ux for longer uploads
* chore: await embed call and catch embedding errors
* refactor: store augmentedPrompt in Client
* refactor(processFileUpload): throw error if not assistant file upload
* fix(useFileHandling): handle markdown empty mimetype issue
* chore: necessary compose file changes
* chore: bump browserslist-db@latest
* refactor(EndpointService): simplify with `generateConfig`, utilize optional baseURL for OpenAI-based endpoints, use `isUserProvided` helper fn wherever needed
* refactor(custom/initializeClient): use standardized naming for common variables
* feat: user provided baseURL for openAI-based endpoints
* refactor(custom/initializeClient): re-order operations
* fix: KnownEndpoints enum definition and add FetchTokenConfig, bump data-provider
* refactor(custom): use tokenKey dependent on userProvided conditions for caching and fetching endpointTokenConfig, anticipate token rates from custom config
* refactor(custom): assure endpointTokenConfig is only accessed from cache if qualifies for fetching
* fix(ci): update tests for initializeClient based on userProvideURL changes
* fix(EndpointService): correct baseURL env var for assistants: `ASSISTANTS_BASE_URL`
* fix: unnecessary run cancellation on res.close() when response.run is completed
* feat(assistants): user provided URL option
* ci: update tests and add test for `assistants` endpoint
* chore: leaner condition for request closing
* chore: more descriptive error message to provide keys again
* wip: first pass for azure endpoint schema
* refactor: azure config to return groupMap and modelConfigMap
* wip: naming and schema changes
* refactor(errorsToString): move to data-provider
* feat: rename to azureGroups, add additional tests, tests all expected outcomes, return errors
* feat(AppService): load Azure groups
* refactor(azure): use imported types, write `mapModelToAzureConfig`
* refactor: move `extractEnvVariable` to data-provider
* refactor(validateAzureGroups): throw on duplicate groups or models; feat(mapModelToAzureConfig): throw if env vars not present, add tests
* refactor(AppService): ensure each model is properly configured on startup
* refactor: deprecate azureOpenAI environment variables in favor of librechat.yaml config
* feat: use helper functions to handle and order enabled/default endpoints; initialize azureOpenAI from config file
* refactor: redefine types as well as load azureOpenAI models from config file
* chore(ci): fix test description naming
* feat(azureOpenAI): use validated model grouping for request authentication
* chore: bump data-provider following rebase
* chore: bump config file version noting significant changes
* feat: add title options and switch azure configs for titling and vision requests
* feat: enable azure plugins from config file
* fix(ci): pass tests
* chore(.env.example): mark `PLUGINS_USE_AZURE` as deprecated
* fix(fetchModels): early return if apiKey not passed
* chore: fix azure config typing
* refactor(mapModelToAzureConfig): return baseURL and headers as well as azureOptions
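A condensed sketch of the group-to-model mapping described here; the real `mapModelToAzureConfig` lives in the data-provider package, validates env vars, and returns richer types, so treat every field name below as illustrative.

```ts
interface AzureGroupSketch {
  group: string;
  apiKey: string; // may reference an env var placeholder, e.g. '${EASTUS_API_KEY}' (hypothetical)
  instanceName: string;
  version?: string;
  baseURL?: string;
  headers?: Record<string, string>;
  models: Record<string, { deploymentName?: string; version?: string } | true>;
}

function mapModelToAzureConfigSketch(modelName: string, groups: AzureGroupSketch[]) {
  for (const group of groups) {
    const model = group.models[modelName];
    if (!model) continue;
    const azureOptions = {
      azureOpenAIApiKey: group.apiKey,
      azureOpenAIApiInstanceName: group.instanceName,
      azureOpenAIApiDeploymentName:
        typeof model === 'object' ? model.deploymentName ?? modelName : modelName,
      azureOpenAIApiVersion: (typeof model === 'object' && model.version) || group.version,
    };
    return { azureOptions, baseURL: group.baseURL, headers: group.headers };
  }
  throw new Error(`Model "${modelName}" is not configured in any Azure group`);
}
```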
* feat(createLLM): use `azureOpenAIBasePath`
* feat(parsers): resolveHeaders
* refactor(extractBaseURL): handle invalid input
* feat(OpenAIClient): handle headers and baseURL for azureConfig
* fix(ci): pass `OpenAIClient` tests
* chore: extract env var for azureOpenAI group config, baseURL
* docs: azureOpenAI config setup docs
* feat: safe check of potential conflicting env vars that map to unique placeholders
* fix: reset apiKey when model switches from originally requested model (vision or title)
* chore: linting
* docs: CONFIG_PATH notes in custom_config.md
* fix: handle non-assistant role ChatCompletionMessage error
* refactor(ModelController): decouple res.send from loading/caching models
* fix(custom/initializeClient): only fetch custom endpoint models if models.fetch is true
* refactor(validateModel): load models if modelsConfig is not yet cached
* docs: update on file upload rate limiting
* feat: send the LibreChat user ID as a query param when fetching the list of models
* chore: update bun
* chore: change bun command for building data-provider
* refactor: prefer use of `getCustomConfig` to access custom config, also move to `server/services/Config`
* refactor: make endpoints/custom option for the config optional, add userIdQuery, and use modelQueries log store in ModelService
* refactor(ModelService): use env variables at runtime, use default models from data-provider, and add tests
* docs: add `userIdQuery`
* fix(ci): import changed
* style(Icon): remove error bubble from message icon
* fix(custom): `initializeClient` now throws error if apiKey or baseURL are admin provided but no env var was found
* refactor(tPresetSchema): match `conversationId` type to `tConversationSchema` but optional, use `extendedModelEndpointSchema` for `endpoint`
* fix(useSSE): minor improvements
- use `completed` set to avoid submitting unnecessary abort request
- set preset with `newConversation` calls using initial conversation settings to prevent default Preset override as well as default settings
- return if there is a parsing error within `onerror` as expected errors from server are properly formatted
* WIP(backend/api): custom endpoint
* WIP(frontend/client): custom endpoint
* chore: adjust typedefs for configs
* refactor: use data-provider for cache keys and rename enums and custom endpoint for better clarity and compatibility
* feat: loadYaml utility
* refactor: rename back to from and proof-of-concept for creating schemas from user-defined defaults
* refactor: remove custom endpoint from default endpointsConfig as it will be exclusively managed by yaml config
* refactor(EndpointController): rename variables for clarity
* feat: initial load custom config
* feat(server/utils): add simple `isUserProvided` helper
* chore(types): update TConfig type
* refactor: remove custom endpoint handling from model services as will be handled by config, modularize fetching of models
* feat: loadCustomConfig, loadConfigEndpoints, loadConfigModels
* chore: reorganize server init imports, invoke loadCustomConfig
* refactor(loadConfigEndpoints/Models): return each custom endpoint as standalone endpoint
* refactor(Endpoint/ModelController): spread config values after default (temporary)
* chore(client): fix type issues
* WIP: first pass for multiple custom endpoints
- add endpointType to Conversation schema
- add/update zod schemas for both convo/presets to allow non-EModelEndpoint value as endpoint (also using type assertion)
- use `endpointType` value as `endpoint` where mapping to type is necessary using this field
- use custom defined `endpoint` value and not type for mapping to modelsConfig
- misc: add return type to `getDefaultEndpoint`
- in `useNewConvo`, add the endpointType if it wasn't already added to conversation
- EndpointsMenu: use user-defined endpoint name as Title in menu
- TODO: custom icon via custom config, change unknown to robot icon
* refactor(parseConvo): pass args as an object and change where used accordingly; chore: comment out 'create schema' code
* chore: remove unused availableModels field in TConfig type
* refactor(parseCompactConvo): pass args as an object and change where used accordingly
* feat: chat through custom endpoint
* chore(message/convoSchemas): avoid saving empty arrays
* fix(BaseClient/saveMessageToDatabase): save endpointType
* refactor(ChatRoute): show Spinner if endpointsQuery or modelsQuery are still loading, which is apparent with slow fetching of models/remote config on first serve
* fix(useConversation): assign endpointType if it's missing
* fix(SaveAsPreset): pass real endpoint and endpointType when saving Preset
* chore: reorganize types order for TConfig, add `iconURL`
* feat: custom endpoint icon support:
- use UnknownIcon in all icon contexts
- add mistral and openrouter as known endpoints, and add their icons
- iconURL support
* fix(presetSchema): move endpointType to default schema definitions shared between convoSchema and defaults
* refactor(Settings/OpenAI): remove legacy `isOpenAI` flag
* fix(OpenAIClient): do not invoke abortCompletion on completion error
* feat: add responseSender/label support for custom endpoints:
- use defaultModelLabel field in endpointOption
- add model defaults for custom endpoints in `getResponseSender`
- add `useGetSender` hook which uses EndpointsQuery to determine `defaultModelLabel`
- include defaultModelLabel from endpointConfig in custom endpoint client options
- pass `endpointType` to `getResponseSender`
* feat(OpenAIClient): use custom options from config file
* refactor: rename `defaultModelLabel` to `modelDisplayLabel`
* refactor(data-provider): separate concerns from `schemas` into `parsers`, `config`, and fix imports elsewhere
* feat: `iconURL` and extract environment variables from custom endpoint config values
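A sketch of the `${ENV_VAR}` extraction idea for config values (the real `extractEnvVariable` lives in the data-provider package; behavior for partial or missing matches is an assumption here, and the example env var name is hypothetical).

```ts
/** Resolve values like '${MISTRAL_API_KEY}' from the environment, leaving literal values untouched. */
function extractEnvVariableSketch(value: string): string {
  const match = value.match(/^\$\{(\w+)\}$/);
  if (!match) {
    return value; // literal value, use as-is
  }
  return process.env[match[1]] ?? value; // fall back to the raw value if the env var is unset
}
```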
* feat: custom config validation via zod schema, rename and move to `./projectRoot/librechat.yaml`
* docs: custom config docs and examples
* fix(OpenAIClient/mistral): mistral does not allow singular system message, also add `useChatCompletion` flag to use openai-node for title completions
* fix(custom/initializeClient): extract env var and use `isUserProvided` function
* Update librechat.example.yaml
* feat(InputWithLabel): add className props, and forwardRef
* fix(streamResponse): handle error edge case where either messages or convos query throws an error
* fix(useSSE): handle errorHandler edge cases where error response is and is not properly formatted from API, especially when a conversationId is not yet provided, which ensures stream is properly closed on error
* feat: user_provided keys for custom endpoints
* fix(config/endpointSchema): do not allow default endpoint values in custom endpoint `name`
* feat(loadConfigModels): extract env variables and optimize fetching models
* feat: support custom endpoint iconURL for messages and Nav
* feat(OpenAIClient): add/dropParams support
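A minimal sketch of the add/dropParams idea for custom endpoints (payload handling is simplified and illustrative): extra default parameters are merged into the request body, and listed parameters are removed before the request is sent.

```ts
function applyParamOverrides(
  payload: Record<string, unknown>,
  addParams?: Record<string, unknown>,
  dropParams?: string[],
): Record<string, unknown> {
  const result: Record<string, unknown> = { ...payload, ...(addParams ?? {}) };
  for (const key of dropParams ?? []) {
    delete result[key]; // e.g. dropParams: ['frequency_penalty'] strips it from the request body
  }
  return result;
}
```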
* docs: update docs with default params, add/dropParams, and notes to use config file instead of `OPENAI_REVERSE_PROXY`
* docs: update docs with additional notes
* feat(maxTokensMap): add mistral models (32k context)
* docs: update openrouter notes
* Update ai_setup.md
* docs(custom_config): add table of contents and fix note about custom name
* docs(custom_config): reorder ToC
* Update custom_config.md
* Add note about `max_tokens` field in custom_config.md