Commit graph

6 commits

Author SHA1 Message Date
Danny Avila
24e8a258cd
🔧 fix: Clean empty strings from model_parameters for Agents/OpenAI (#11248)
2026-01-07 11:26:53 -05:00
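A minimal sketch of what "cleaning empty strings from model_parameters" can look like before the values reach the Agents/OpenAI client; the helper name and parameter shape below are illustrative assumptions, not LibreChat's actual implementation:

```typescript
/**
 * Illustrative sketch only: drops keys whose values are empty strings so they
 * are not forwarded to the provider as invalid parameter values.
 */
function cleanEmptyStrings<T extends Record<string, unknown>>(params: T): Partial<T> {
  const cleaned: Partial<T> = {};
  for (const [key, value] of Object.entries(params)) {
    if (typeof value === 'string' && value.trim() === '') {
      continue; // skip empty-string values such as `stop: ''`
    }
    cleaned[key as keyof T] = value as T[keyof T];
  }
  return cleaned;
}

// Example: an empty `stop` string is removed; other values pass through.
const modelParameters = { model: 'gpt-4o', temperature: 0.7, stop: '' };
console.log(cleanEmptyStrings(modelParameters));
// -> { model: 'gpt-4o', temperature: 0.7 }
```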
Danny Avila
6c0aad423f
📐 refactor: Exclude Params from OAI Reasoning Models (#10745)
* 📐 refactor: Exclude Params from OAI Reasoning Models

- Introduced a new test suite for `getOpenAILLMConfig` covering various model configurations, including basic settings, reasoning models, and web search functionality.
- Validated parameter handling for different models, ensuring correct exclusions and conversions, particularly for temperature and max_tokens.
- Enhanced tests for default and additional parameters, drop parameters, and verbosity handling, ensuring robust coverage of the configuration logic.

* ci: Update OpenAI model version in configuration tests

- Changed model references from 'gpt-5' to 'gpt-4' across multiple test cases in the `getOpenAIConfig` function.
- Adjusted related parameter handling to ensure compatibility with the updated model version, including maxTokens and temperature settings.
- Enhanced test coverage for model options and their expected configurations.
2025-12-01 12:00:54 -05:00
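The exclusions and conversions these tests describe (dropping temperature for reasoning models and mapping max_tokens to a field those models accept) could look roughly like the sketch below; the model pattern and field names are assumptions for illustration, not the real configuration logic:

```typescript
interface LLMParams {
  model: string;
  temperature?: number;
  max_tokens?: number;
  max_completion_tokens?: number;
  [key: string]: unknown;
}

// assumption, not the real check used in getOpenAILLMConfig
const REASONING_MODEL_PATTERN = /^(o\d|gpt-5)/i;

function adjustForReasoningModel(params: LLMParams): LLMParams {
  if (!REASONING_MODEL_PATTERN.test(params.model)) {
    return params;
  }
  // `temperature` is dropped entirely; `max_tokens` is converted below
  const { temperature, max_tokens, ...rest } = params;
  void temperature;
  return {
    ...rest,
    ...(max_tokens != null ? { max_completion_tokens: max_tokens } : {}),
  };
}

console.log(adjustForReasoningModel({ model: 'o1', temperature: 0.2, max_tokens: 1024 }));
// -> { model: 'o1', max_completion_tokens: 1024 }
```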
Dustin Healy
c6ecf0095b
🎚️ feat: Anthropic Parameter Set Support via Custom Endpoints (#9415)
* refactor: modularize openai llm config logic into new getOpenAILLMConfig function (#9412)

* ✈️ refactor: Migrate Anthropic's getLLMConfig to TypeScript (#9413)

* refactor: move tokens.js over to packages/api and update imports

* refactor: port tokens.js to typescript

* refactor: move helpers.js over to packages/api and update imports

* refactor: port helpers.js to typescript

* refactor: move anthropic/llm.js over to packages/api and update imports

* refactor: port anthropic/llm.js to typescript with supporting types in types/anthropic.ts and updated tests in llm.spec.js

* refactor: move llm.spec.js over to packages/api and update import

* refactor: port llm.spec.js over to typescript

* 📝 Add Prompt Parameter Support for Anthropic Custom Endpoints (#9414)

feat: add anthropic llm config support for openai-like (custom) endpoints

* fix: missed compiler / type issues from addition of getAnthropicLLMConfig

* refactor: update tokens.ts to export constants and functions, enhance type definitions, and adjust default values

* WIP: first pass, decouple `llmConfig` from `configOptions`

* chore: update import path for OpenAI configuration from 'llm' to 'config'

* refactor: enhance type definitions for ThinkingConfig and update modelOptions in AnthropicConfigOptions

* refactor: cleanup type, introduce openai transform from alt provider

* chore: integrate removeNullishValues in Google llmConfig and update OpenAI exports

* chore: bump version of @librechat/api to 1.3.5 in package.json and package-lock.json

* refactor: update customParams type in OpenAIConfigOptions to use TConfig['customParams']

* refactor: enhance transformToOpenAIConfig to include fromEndpoint and improve config extraction

* refactor: conform userId field for anthropic/openai, cleanup anthropic typing

* ci: add backward compatibility tests for getOpenAIConfig with various endpoints and configurations

* ci: replace userId with user in clientOptions for getLLMConfig

* test: add Azure OpenAI endpoint tests for various configurations in getOpenAIConfig

* refactor: defaultHeaders retrieval for prompt caching for anthropic-based custom endpoint (litellm)

* test: add unit tests for getOpenAIConfig with various Anthropic model configurations

* test: enhance Anthropic compatibility tests with addParams and dropParams handling

* chore: update @librechat/agents dependency to version 2.4.78 in package.json and package-lock.json

* chore: update @librechat/agents dependency to version 2.4.79 in package.json and package-lock.json

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-08 14:35:29 -04:00
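A rough, hypothetical sketch of the overall idea: mapping Anthropic-style options (thinking budget, prompt caching) onto an OpenAI-compatible custom-endpoint config. The field names and shapes are assumptions; the real getAnthropicLLMConfig and transformToOpenAIConfig in packages/api may differ:

```typescript
interface AnthropicLikeOptions {
  model: string;
  maxTokens?: number | null;
  thinking?: { type: 'enabled'; budget_tokens: number } | null;
  promptCache?: boolean;
}

interface OpenAICompatibleConfig {
  model: string;
  max_tokens?: number;
  defaultHeaders?: Record<string, string>;
  modelKwargs?: Record<string, unknown>;
}

// illustrative stand-in for a removeNullishValues-style cleanup
const removeNullish = <T extends Record<string, unknown>>(obj: T): Partial<T> =>
  Object.fromEntries(
    Object.entries(obj).filter(([, v]) => v !== null && v !== undefined),
  ) as Partial<T>;

function toOpenAICompatible(options: AnthropicLikeOptions): OpenAICompatibleConfig {
  const config = {
    model: options.model,
    max_tokens: options.maxTokens ?? undefined,
    // provider-specific parameters travel as extra model kwargs
    modelKwargs: options.thinking ? { thinking: options.thinking } : undefined,
    // e.g. an Anthropic-compatible proxy (LiteLLM) may need the beta header forwarded
    defaultHeaders: options.promptCache
      ? { 'anthropic-beta': 'prompt-caching-2024-07-31' }
      : undefined,
  };
  // strip the undefined entries so they are not sent to the endpoint
  return removeNullish(config) as OpenAICompatibleConfig;
}

console.log(
  toOpenAICompatible({
    model: 'claude-3-7-sonnet',
    maxTokens: 4096,
    thinking: { type: 'enabled', budget_tokens: 2048 },
    promptCache: true,
  }),
);
```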
Danny Avila
035f85c3ba
🧪 ci: Tests for Anthropic and OpenAI LLM Configuration (#9484)
* fix: freq. and pres. penalty use camelcase

* ci: OpenAI Configuration Tests

* ci: Enhance OpenAI Configuration Tests with Azure and Custom Endpoint Scenarios

* Added integration tests for OpenAI and Azure configurations simulating various initialization scenarios.
* Updated OpenAIConfigOptions to allow null values for reverseProxyUrl and proxy.
* Improved handling of reasoning parameters in tests for both OpenAI and Azure setups.
* Ensured robust error handling for missing API keys and malformed configurations.
* Optimized performance for large parameter sets in configuration.

* test: Add comprehensive integration tests for Anthropic LLM configuration

* Introduced real usage integration tests for various Anthropic endpoint configurations, including handling of proxy and reverse proxy setups.
* Implemented model-specific scenarios for Claude-3.7 and web search functionality.
* Enhanced error handling for missing user IDs and large parameter sets.
* Validated parameter logic, including default values, boundary conditions, and type handling for numeric and array parameters.
* Ensured proper exclusion of system options from model options and maintained expected behavior across different model variations.
2025-09-06 09:42:12 -04:00
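As an illustration of the kind of assertion these configuration tests make (for example, the "freq. and pres. penalty use camelcase" fix above), here is a self-contained Jest-style sketch; the mapping helper is a stand-in for illustration, not the real getOpenAIConfig:

```typescript
import { describe, expect, it } from '@jest/globals';

// illustrative helper: converts snake_case penalty options to camelCase client fields
function toClientPenaltyOptions(modelOptions: {
  frequency_penalty?: number;
  presence_penalty?: number;
}) {
  return {
    ...(modelOptions.frequency_penalty != null
      ? { frequencyPenalty: modelOptions.frequency_penalty }
      : {}),
    ...(modelOptions.presence_penalty != null
      ? { presencePenalty: modelOptions.presence_penalty }
      : {}),
  };
}

describe('penalty option mapping (illustrative)', () => {
  it('converts snake_case penalties to the camelCase fields the client expects', () => {
    expect(
      toClientPenaltyOptions({ frequency_penalty: 0.5, presence_penalty: 0.2 }),
    ).toEqual({ frequencyPenalty: 0.5, presencePenalty: 0.2 });
  });
});
```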
Dustin Healy
21e00168b1
🪙 fix: Max Output Tokens Refactor for Responses API (#8972)

chore: Remove `max_output_tokens` from model kwargs in `titleConvo` if provided
2025-08-10 14:08:35 -04:00
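A small sketch of the cleanup this commit describes: dropping `max_output_tokens` from the extra model kwargs before the title-generation call. The `modelKwargs` shape here is an assumption for illustration:

```typescript
function stripMaxOutputTokens(
  modelKwargs: Record<string, unknown> | undefined,
): Record<string, unknown> | undefined {
  if (!modelKwargs || !('max_output_tokens' in modelKwargs)) {
    return modelKwargs;
  }
  const { max_output_tokens, ...rest } = modelKwargs;
  void max_output_tokens; // intentionally discarded for the title call
  return Object.keys(rest).length > 0 ? rest : undefined;
}

console.log(stripMaxOutputTokens({ max_output_tokens: 512, reasoning: { effort: 'low' } }));
// -> { reasoning: { effort: 'low' } }
```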
Danny Avila
7147bce3c3
feat: Add OpenAI Verbosity Parameter (#8929)
* WIP: Verbosity OpenAI Parameter

* 🔧 chore: remove unused import of extractEnvVariable from parsers.ts

* feat: add comprehensive tests for getOpenAIConfig and enhance verbosity handling

* fix: Handling for maxTokens in GPT-5+ models and add corresponding tests

* feat: Implement GPT-5+ model handling in processMemory function
2025-08-07 20:49:40 -04:00
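A hedged sketch of threading a verbosity setting into request options for newer OpenAI models; the gating and output shape are assumptions, not the actual implementation:

```typescript
type Verbosity = 'low' | 'medium' | 'high';

interface RequestOptions {
  model: string;
  modelKwargs?: Record<string, unknown>;
}

function withVerbosity(options: RequestOptions, verbosity?: Verbosity): RequestOptions {
  if (!verbosity) {
    return options;
  }
  // carry the setting as an extra model kwarg so the client forwards it
  return {
    ...options,
    modelKwargs: { ...options.modelKwargs, verbosity },
  };
}

console.log(withVerbosity({ model: 'gpt-5' }, 'low'));
// -> { model: 'gpt-5', modelKwargs: { verbosity: 'low' } }
```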