* fix: handle top_k and top_p parameters for Claude-3.7 models (only allowed when reasoning is disabled)
* feat: bump @librechat/agents for Anthropic Reasoning support
* fix: update reasoning handling for OpenRouter integration
* fix: enhance agent token spending logic to include cache creation and read details (sketched after this list)
* fix: update logic for thinking status in ContentParts component
* refactor: improve agent title handling
* chore: bump @librechat/agents to version 2.1.7 to enable parallel tool calling for Google models
* chore: include all assets for the service worker, remove unused tsconfig.node.json, and add the Vite config to the ESLint ignore list
* chore: exclude image files from service worker caching
* refactor: simplify googleSchema transformation and error handling
* fix: cap max output tokens for Claude 3.7 models (sketched after this list)
* fix: skip index fixing in CI, development, and test environments (sketched after this list)
* ci: add maxOutputTokens handling tests for Claude models
* refactor: drop top_k and top_p parameters for claude-3.7 in AnthropicClient and add tests for the new behavior
* refactor: conditionally include top_k and top_p parameters for non-claude-3.7 models (see the sketch after this list)
* ci: add unit tests for getLLMConfig function with various model options
* chore: remove all OPENROUTER_API_KEY legacy logic
* refactor: optimize stream chunk handling
* feat: add a button to reset model parameters
* refactor: remove unused examples field from convoSchema and presetSchema
* chore: update librechat-data-provider version to 0.7.6993
* refactor: move excludedKeys set to data-provider for better reusability (see the sketch after this list)
* feat: enhance saveMessageToDatabase to handle unset fields and fetched conversation state
* feat: add 'iconURL' and 'greeting' to excludedKeys in data provider config
* fix: add optional chaining to user ID retrieval in getConvo call
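
The cache-aware token spending change is easiest to picture as arithmetic over the usage fields. Below is a minimal sketch, assuming hypothetical `UsageDetails`/`TokenRates` shapes and illustrative cache multipliers; it is not the actual spend logic.

```typescript
// Sketch only: fold cache creation/read tokens into the spend calculation.
// Field names and multipliers are assumptions for illustration.
interface UsageDetails {
  input_tokens: number;
  output_tokens: number;
  cache_creation_input_tokens?: number;
  cache_read_input_tokens?: number;
}

interface TokenRates {
  prompt: number;                // cost per input token
  completion: number;            // cost per output token
  cacheWriteMultiplier?: number; // cache writes billed above the input rate
  cacheReadMultiplier?: number;  // cache reads billed below the input rate
}

function calculateSpend(usage: UsageDetails, rates: TokenRates): number {
  const writeRate = rates.prompt * (rates.cacheWriteMultiplier ?? 1.25);
  const readRate = rates.prompt * (rates.cacheReadMultiplier ?? 0.1);
  return (
    usage.input_tokens * rates.prompt +
    usage.output_tokens * rates.completion +
    (usage.cache_creation_input_tokens ?? 0) * writeRate +
    (usage.cache_read_input_tokens ?? 0) * readRate
  );
}
```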
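For the Claude 3.7 max output tokens cap, a minimal sketch of clamping a requested value to a per-model ceiling; the keys, numeric limits, and helper name are illustrative placeholders rather than the documented caps.

```typescript
// Sketch only: clamp requested max output tokens to a per-model ceiling.
// The keys and numeric limits are illustrative placeholders.
const MAX_OUTPUT_TOKENS: Record<string, number> = {
  'claude-3-7': 64000,
  'claude-3': 8192,
};
const DEFAULT_MAX_OUTPUT_TOKENS = 4096;

function capMaxOutputTokens(model: string, requested?: number): number {
  const limit =
    Object.entries(MAX_OUTPUT_TOKENS).find(([prefix]) => model.includes(prefix))?.[1] ??
    DEFAULT_MAX_OUTPUT_TOKENS;
  return Math.min(requested ?? limit, limit);
}
```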
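The environment guard for skipping index fixing could look like the following sketch; the function name and exact environment checks are assumptions, not the actual startup code.

```typescript
// Sketch only: skip the index-repair step in CI, development, and test runs.
// The function name and environment checks are assumptions.
function shouldSkipIndexFix(): boolean {
  const env = process.env.NODE_ENV ?? 'development';
  return process.env.CI === 'true' || env === 'development' || env === 'test';
}
```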
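The Claude 3.7 sampling-parameter handling reduces to a conditional include: pass top_k/top_p through unless the model is claude-3.7 with reasoning enabled. A minimal sketch, assuming a hypothetical option shape and helper names; it does not mirror AnthropicClient's real code.

```typescript
// Sketch only: conditionally pass top_k/top_p. The option shape and helper
// names are assumptions, not AnthropicClient's actual implementation.
interface SamplingOptions {
  model: string;
  max_tokens: number;
  temperature?: number;
  top_k?: number;
  top_p?: number;
}

const isClaude37 = (model: string): boolean => /claude-3[.-]7/.test(model);

function withSamplingParams(
  base: SamplingOptions,
  { topK, topP, reasoning }: { topK?: number; topP?: number; reasoning?: boolean },
): SamplingOptions {
  const options = { ...base };
  // top_k/top_p are only allowed for claude-3.7 when reasoning is off,
  // so include them for non-3.7 models or when reasoning is disabled.
  const allowSampling = !isClaude37(base.model) || !reasoning;
  if (allowSampling) {
    if (topK !== undefined) options.top_k = topK;
    if (topP !== undefined) options.top_p = topP;
  }
  return options;
}
```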
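Finally, the shared excludedKeys set in the data provider can be pictured as below; only 'iconURL' and 'greeting' come from this changelog, and the filtering helper is hypothetical.

```typescript
// Sketch only: a shared set of fields that should not be copied from an
// incoming payload onto the stored conversation. Entries beyond 'iconURL'
// and 'greeting', and the helper itself, are illustrative.
export const excludedKeys = new Set<string>([
  'iconURL',
  'greeting',
  // ...other fields managed elsewhere
]);

export function filterConvoUpdate<T extends Record<string, unknown>>(update: T) {
  return Object.fromEntries(
    Object.entries(update).filter(
      ([key, value]) => value !== undefined && !excludedKeys.has(key),
    ),
  );
}
```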