* chore: update express to version 5.1.0 in package.json
* chore: update express-rate-limit to version 8.2.1 in package.json and package-lock.json
* fix: Enhance server startup error handling in experimental and index files
* Added error handling for server startup in both experimental.js and index.js to log errors and exit the process if the server fails to start (sketched below).
* Updated comments in openidStrategy.js to clarify the purpose of the CustomOpenIDStrategy class and its relation to Express version changes.
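A minimal sketch of the startup error handling described above, assuming a console logger and a `PORT` environment variable; the real entry points use LibreChat's configured logger and host/port values:

```ts
import express from 'express';

const app = express();
const port = Number(process.env.PORT ?? 3080); // assumed default

const server = app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});

// Log startup failures (e.g. EADDRINUSE) and exit so a process manager can restart the service.
server.on('error', (err) => {
  console.error('Failed to start server:', err);
  process.exit(1);
});
```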
* chore: Implement rate limiting for all POST routes excluding /speech, required for Express 5
* Added middleware to apply IP and user rate limiters to all POST requests, ensuring that the /speech route remains unaffected (sketched below).
* Enhanced code clarity with comments explaining the new rate limiting logic.
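A rough sketch of the POST-only limiter described above, using express-rate-limit's `rateLimit` factory with an illustrative window; the real IP and user limiters are configured elsewhere in the codebase:

```ts
import { rateLimit } from 'express-rate-limit';
import type { NextFunction, Request, RequestHandler, Response } from 'express';

// Illustrative limiter; LibreChat wires separate IP and user limiters with their own settings.
const ipLimiter: RequestHandler = rateLimit({ windowMs: 60_000, limit: 100 });

// Apply the limiter to every POST request except the /speech routes (per the Express 5-related change above).
export function postRateLimiter(req: Request, res: Response, next: NextFunction) {
  if (req.method !== 'POST' || req.path.startsWith('/speech')) {
    return next();
  }
  return ipLimiter(req, res, next);
}
```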
* chore: Enable writable req.query for mongoSanitize compatibility in Express 5
* chore: Ensure req.body exists in multiple middleware and route files for Express 5 compatibility
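A rough sketch of the Express 5 request-object shims these two changes imply; the actual middleware in the codebase may be structured differently:

```ts
import type { NextFunction, Request, Response } from 'express';

// Express 5 exposes req.query through a getter, so packages that reassign it
// (such as express-mongo-sanitize) need a writable own property instead.
export function express5Compat(req: Request, _res: Response, next: NextFunction) {
  Object.defineProperty(req, 'query', {
    value: { ...req.query },
    writable: true,
    enumerable: true,
    configurable: true,
  });
  // Express 5 leaves req.body undefined when no body parser ran; downstream code expects an object.
  req.body = req.body ?? {};
  next();
}
```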
* 🗑️ chore: Remove unused Legacy Provider clients and related helpers
* Deleted OpenAIClient and GoogleClient files along with their associated tests.
* Removed references to these clients in the clients index file.
* Cleaned up typedefs by removing the OpenAISpecClient export.
* Updated chat controllers to use the OpenAI SDK directly instead of the removed client classes.
* chore/remove-openapi-specs
* 🗑️ chore: Remove unused mergeSort and misc utility functions
* Deleted mergeSort.js and misc.js files as they are no longer needed.
* Removed references to cleanUpPrimaryKeyValue in messages.js and adjusted related logic.
* Updated mongoMeili.ts to eliminate local implementations of removed functions.
* chore: remove legacy endpoints
* chore: remove all plugins endpoint related code
* chore: remove unused prompt handling code and clean up imports
* Deleted handleInputs.js and instructions.js files as they are no longer needed.
* Removed references to these files in the prompts index.js.
* Updated docker-compose.yml to simplify reverse proxy configuration.
* chore: remove unused LightningIcon import from Icons.tsx
* chore: clean up translation.json by removing deprecated and unused keys
* chore: update Jest configuration and remove unused mock file
* Simplified the setupFiles array in jest.config.js by removing the fetchEventSource mock.
* Deleted the fetchEventSource.js mock file as it is no longer needed.
* fix: simplify endpoint type check in Landing and ConversationStarters components
* Updated the endpoint type check to use strict equality for better clarity and performance.
* Ensured consistency in the handling of the azureOpenAI endpoint across both components.
* chore: remove unused dependencies from package.json and package-lock.json
* chore: remove legacy EditController, associated routes and imports
* chore: update banResponse logic to refine request handling for banned users
* chore: remove unused validateEndpoint middleware and its references
* chore: remove unused 'res' parameter from initializeClient in multiple endpoint files
* chore: remove unused 'isSmallScreen' prop from BookmarkNav and NewChat components; clean up imports in ArchivedChatsTable and useSetIndexOptions hooks; enhance localization in PromptVersions
* chore: remove unused import of Constants and TMessage from MobileNav; retain only necessary QueryKeys import
* chore: remove unused TResPlugin type and related references; clean up imports in types and schemas
* refactor: Token Limit Processing with Enhanced Efficiency
- Added a new test suite for `processTextWithTokenLimit`, ensuring comprehensive coverage of various scenarios including under, at, and exceeding token limits.
- Refactored the `processTextWithTokenLimit` function to use a ratio-based estimation method, significantly reducing the number of token-counting calls compared to the previous binary search approach (see the sketch after this entry).
- Improved handling of edge cases and variable token density, ensuring accurate truncation and performance across diverse text inputs.
- Included direct comparisons with the old implementation to validate correctness and efficiency improvements.
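A simplified sketch of the ratio-based approach, with a synchronous counter and an illustrative signature; the real function handles more options and edge cases:

```ts
type TokenCountFn = (text: string) => number;

// Truncate `text` to at most `tokenLimit` tokens. Instead of binary-searching over the
// string, estimate a cut point from the observed characters-per-token ratio and re-count,
// which usually needs only a handful of countTokens calls.
function processTextWithTokenLimit(text: string, tokenLimit: number, countTokens: TokenCountFn): string {
  let current = text;
  let tokens = countTokens(current);
  while (tokens > tokenLimit && current.length > 0) {
    const charsPerToken = current.length / tokens;
    const estimatedLength = Math.floor(tokenLimit * charsPerToken);
    // Always shrink by at least one character so the loop terminates on dense inputs.
    current = current.slice(0, Math.min(estimatedLength, current.length - 1));
    tokens = countTokens(current);
  }
  return current;
}
```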
* refactor: Remove Tokenizer Route and Related References
- Deleted the tokenizer route from the server and removed its references from the routes index and server files, streamlining the API structure.
- This change simplifies the routing configuration by eliminating unused endpoints.
* refactor: Migrate countTokens Utility to API Module
- Removed the local countTokens utility and integrated it into the @librechat/api module for centralized access.
- Updated various files to reference the new countTokens import from the API module, ensuring consistent usage across the application.
- Cleaned up unused references and imports related to the previous countTokens implementation.
* refactor: Centralize escapeRegExp Utility in API Module
- Moved the escapeRegExp function from local utility files to the @librechat/api module for consistent usage across the application (shown below).
- Updated imports in various files to reference the new centralized escapeRegExp function, ensuring cleaner code and reducing redundancy.
- Removed duplicate implementations of escapeRegExp from multiple files, streamlining the codebase.
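For reference, the centralized helper is typically the standard metacharacter escape; the exact export shape in @librechat/api may differ:

```ts
// Escape characters that have special meaning in regular expressions, so user-supplied
// strings can be embedded safely in a RegExp.
export function escapeRegExp(text: string): string {
  return text.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
```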
* refactor: Enhance Token Counting Flexibility in Text Processing
- Updated the `processTextWithTokenLimit` function to accept both synchronous and asynchronous token counting functions, improving its versatility.
- Introduced a new `TokenCountFn` type to define the token counting function signature.
- Added comprehensive tests to validate the behavior of `processTextWithTokenLimit` with both sync and async token counting functions, ensuring consistent results.
- Implemented a wrapper that tracks call counts for the `countTokens` function, used in tests to verify that the new implementation avoids unnecessary calls (see the sketch after this entry).
- Enhanced existing tests to compare the performance of the new implementation against the old one, demonstrating significant improvements in efficiency.
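A sketch of the widened signature and the call-counting wrapper mentioned above; the names are illustrative rather than the exact exports:

```ts
// The counter may be synchronous (local tokenizer) or asynchronous (e.g. worker- or API-based).
export type TokenCountFn = (text: string) => number | Promise<number>;

// Awaiting the result gives a single code path for both variants.
export async function countSafely(text: string, countTokens: TokenCountFn): Promise<number> {
  return await countTokens(text);
}

// Test helper: wrap a counter so tests can assert how many times it was invoked.
export function withCallCount(fn: TokenCountFn): TokenCountFn & { calls: number } {
  const wrapped = ((text: string) => {
    wrapped.calls += 1;
    return fn(text);
  }) as TokenCountFn & { calls: number };
  wrapped.calls = 0;
  return wrapped;
}
```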
* chore: documentation for Truncation Safety Buffer in Token Processing
- Added a safety buffer multiplier to the character position estimates during text truncation to prevent overshooting token limits (illustrated below).
- Updated the `processTextWithTokenLimit` function to utilize the new `TRUNCATION_SAFETY_BUFFER` constant, enhancing the accuracy of token limit processing.
- Improved documentation to clarify the rationale behind the buffer and its impact on performance and efficiency in token counting.
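A small sketch of the buffer in use; the constant name comes from the description above, and the 0.9 value is purely illustrative:

```ts
// Illustrative value; the real constant is defined alongside processTextWithTokenLimit.
const TRUNCATION_SAFETY_BUFFER = 0.9;

// Shrink the character estimate slightly so the first cut rarely overshoots the token limit,
// avoiding extra rounds of token counting.
function estimateCutIndex(textLength: number, totalTokens: number, tokenLimit: number): number {
  const charsPerToken = textLength / totalTokens;
  return Math.floor(tokenLimit * charsPerToken * TRUNCATION_SAFETY_BUFFER);
}
```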
* 🗨️ fix: Safe Validation for Prompt Updates
- Added `safeValidatePromptGroupUpdate` function to validate and sanitize prompt group update requests, ensuring only allowed fields are processed and sensitive fields are stripped (sketched after this entry).
- Updated the `patchPromptGroup` route to utilize the new validation function, returning appropriate error messages for invalid requests.
- Introduced comprehensive tests for the validation logic, covering various scenarios including allowed and disallowed fields, enhancing overall request integrity and security.
- Created a new schema file for prompt group updates, defining validation rules and types for better maintainability.
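A sketch of what such a validator can look like with zod; the allowed fields shown are illustrative, not the exact schema:

```ts
import { z } from 'zod';

// Illustrative allow-list; the real schema file defines the exact updatable fields.
const promptGroupUpdateSchema = z
  .object({
    name: z.string().min(1).optional(),
    category: z.string().optional(),
    oneliner: z.string().optional(),
  })
  .strip(); // unknown fields (e.g. author identifiers) are dropped rather than persisted

export function safeValidatePromptGroupUpdate(payload: unknown) {
  const result = promptGroupUpdateSchema.safeParse(payload);
  if (!result.success) {
    return { data: null, error: result.error.issues };
  }
  return { data: result.data, error: null };
}
```

The patched route can then reject the request with the returned issues whenever validation fails.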
* 🔒 feat: Add JSON parse error handling middleware
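A minimal sketch of such a middleware; `express.json()` forwards malformed request bodies as a `SyntaxError` carrying a 400 status, and the handler name here is illustrative:

```ts
import type { NextFunction, Request, Response } from 'express';

// Registered after the body parsers: converts JSON parse failures into a clean 400 response
// instead of an unhandled error page.
export function jsonParseErrorHandler(
  err: Error & { status?: number },
  _req: Request,
  res: Response,
  next: NextFunction,
) {
  if (err instanceof SyntaxError && err.status === 400) {
    return res.status(400).json({ error: 'Invalid JSON payload' });
  }
  return next(err);
}
```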
* chore: Add experimental backend server for multi-pod simulation
* Introduced a new backend script (`experimental.js`) to manage a clustered server environment with Redis cache flushing on startup (see the sketch after this entry).
* Updated `package.json` to include a new script command for the experimental backend.
* This setup is intended to simulate multi-pod production deployments and evaluate scalability and performance.
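A rough sketch of the shape of such a script, assuming ioredis for the cache flush and a worker count taken from the environment; the actual experimental.js wiring differs:

```ts
import cluster from 'node:cluster';
import os from 'node:os';
import Redis from 'ioredis'; // assumed client; the real script uses LibreChat's configured Redis cache

const workerCount = Number(process.env.WORKER_COUNT ?? os.cpus().length);

async function start() {
  if (cluster.isPrimary) {
    // Flush the Redis cache once, before any worker ("pod") begins serving traffic.
    const redis = new Redis(process.env.REDIS_URI ?? 'redis://127.0.0.1:6379');
    await redis.flushdb();
    await redis.quit();

    for (let i = 0; i < workerCount; i += 1) {
      cluster.fork();
    }
    cluster.on('exit', (worker) => {
      console.error(`Worker ${worker.process.pid} exited; forking a replacement`);
      cluster.fork();
    });
  } else {
    // Each worker runs the regular server entry point (path assumed here).
    await import('./index.js');
  }
}

start().catch((err) => {
  console.error('Failed to start experimental server:', err);
  process.exit(1);
});
```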
* refactor: Remove server disconnection handling logic from useMCPServerManager