Mirror of https://github.com/danny-avila/LibreChat.git, synced 2026-03-11 18:42:36 +01:00
🔀 feat: update OpenRouter with new Reasoning config (#11993)
* fix: Update OpenRouter reasoning handling in LLM configuration
  - Modified the OpenRouter configuration to use a unified `reasoning` object instead of separate `reasoning_effort` and `include_reasoning` properties.
  - Updated tests to ensure that `reasoning_summary` is excluded from the reasoning object and that the configuration behaves correctly based on the presence of reasoning parameters.
  - Enhanced test coverage for OpenRouter-specific configurations, ensuring proper handling of various reasoning effort levels.

* refactor: Improve OpenRouter reasoning handling in LLM configuration
  - Updated the handling of the `reasoning` object in the OpenRouter configuration to clarify the relationship between `reasoning_effort` and `include_reasoning`.
  - Enhanced comments to explain the behavior of the `reasoning` object and its compatibility with legacy parameters.
  - Ensured that the configuration correctly falls back to legacy behavior when no explicit reasoning effort is provided.

* test: Enhance OpenRouter LLM configuration tests
  - Added a new test to verify the combination of web search plugins and the reasoning object for OpenRouter configurations.
  - Updated existing tests to ensure proper handling of reasoning effort levels and fallback behavior when `reasoning_effort` is unset.
  - Improved test coverage for OpenRouter-specific configurations, ensuring accurate validation of reasoning parameters.

* chore: Update @librechat/agents dependency to version 3.1.53
  - Bumped the version of @librechat/agents in package-lock.json and related package.json files to ensure compatibility with the latest features and fixes.
  - Updated integrity hashes to reflect the new version.
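The core of the change described above is mapping the legacy OpenRouter options onto a single `reasoning` object. A minimal sketch of that mapping, under stated assumptions: `buildOpenRouterReasoning` and `ModelOptions` are illustrative names, not LibreChat's actual identifiers, and OpenRouter's `reasoning` object is assumed to accept `effort` and `exclude` fields while having no summary field.

```typescript
// Hypothetical sketch of the unified-reasoning mapping; function and type
// names are illustrative, not LibreChat's actual helpers.
type ReasoningEffort = 'low' | 'medium' | 'high';

interface ModelOptions {
  reasoning_effort?: ReasoningEffort;
  reasoning_summary?: string;  // excluded for OpenRouter: no summary field in its reasoning object
  include_reasoning?: boolean; // legacy OpenRouter flag
}

interface OpenRouterReasoning {
  effort?: ReasoningEffort;
  exclude?: boolean;
}

function buildOpenRouterReasoning(opts: ModelOptions): OpenRouterReasoning | undefined {
  // Prefer the explicit effort level; `reasoning_summary` is never forwarded.
  if (opts.reasoning_effort != null) {
    return { effort: opts.reasoning_effort };
  }
  // Fall back to legacy behavior when no explicit reasoning effort is provided:
  // include_reasoning=true means "do not exclude reasoning output".
  if (opts.include_reasoning != null) {
    return { exclude: !opts.include_reasoning };
  }
  return undefined;
}
```

With this shape, options like `{ reasoning_effort: 'high', reasoning_summary: 'detailed' }` would yield `reasoning: { effort: 'high' }` with no top-level `include_reasoning`, which is the behavior the updated tests assert.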
This commit is contained in:
parent
e6b324b259
commit
826b494578
6 changed files with 110 additions and 17 deletions
@@ -861,7 +861,7 @@ describe('getOpenAIConfig', () => {
       expect(result.provider).toBe('openrouter');
     });

-    it('should handle OpenRouter with reasoning params', () => {
+    it('should handle OpenRouter with reasoning params (no summary)', () => {
       const modelOptions = {
         reasoning_effort: ReasoningEffort.high,
         reasoning_summary: ReasoningSummary.detailed,
@@ -872,10 +872,11 @@ describe('getOpenAIConfig', () => {
         modelOptions,
       });

+      // OpenRouter reasoning object should only include effort, not summary
       expect(result.llmConfig.reasoning).toEqual({
         effort: ReasoningEffort.high,
-        summary: ReasoningSummary.detailed,
       });
+      expect(result.llmConfig.include_reasoning).toBeUndefined();
       expect(result.provider).toBe('openrouter');
     });

@@ -1205,8 +1206,9 @@ describe('getOpenAIConfig', () => {
         model: 'gpt-4-turbo',
         temperature: 0.8,
         streaming: false,
-        include_reasoning: true, // OpenRouter specific
+        reasoning: { effort: ReasoningEffort.high }, // OpenRouter reasoning object
       });
+      expect(result.llmConfig.include_reasoning).toBeUndefined();
       // Should NOT have useResponsesApi for OpenRouter
       expect(result.llmConfig.useResponsesApi).toBeUndefined();
       expect(result.llmConfig.maxTokens).toBe(2000);
@@ -1480,13 +1482,12 @@ describe('getOpenAIConfig', () => {
         user: 'openrouter-user',
         temperature: 0.7,
         maxTokens: 4000,
-        include_reasoning: true, // OpenRouter specific
         reasoning: {
           effort: ReasoningEffort.high,
-          summary: ReasoningSummary.detailed,
         },
         apiKey: apiKey,
       });
+      expect(result.llmConfig.include_reasoning).toBeUndefined();
       expect(result.llmConfig.modelKwargs).toMatchObject({
         top_k: 50,
         repetition_penalty: 1.1,