🤖 feat: Claude Opus 4.6 - 1M Context, Premium Pricing, Adaptive Thinking (#11670)

* feat: Implement new features for Claude Opus 4.6 model

- Added support for tiered pricing based on input token count for the Claude Opus 4.6 model.
- Updated token value calculations to include inputTokenCount for accurate pricing.
- Enhanced transaction handling to apply premium rates when input tokens exceed defined thresholds.
- Introduced comprehensive tests to validate pricing logic for both standard and premium rates across various scenarios.
- Updated related utility functions and models to accommodate new pricing structure.

This change improves the flexibility and accuracy of token pricing for the Claude Opus 4.6 model, ensuring users are charged appropriately based on their usage.
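The tiered-pricing rule described above can be sketched roughly as follows. The model name mirrors the PR, but the threshold and rate numbers here are hypothetical placeholders, not the actual values shipped in tx.js:

```javascript
// Illustrative sketch of threshold-based tiered pricing. The threshold and
// rate numbers are hypothetical, not the actual values from tx.js.
const tokenValues = {
  'claude-opus-4-6': { prompt: 5, completion: 25 },
};
const premiumTokenValues = {
  'claude-opus-4-6': { threshold: 200000, prompt: 10, completion: 50 },
};

// Pick the per-token rate for a model, switching to the premium rate only
// when the input token count strictly exceeds the threshold (so usage at
// exactly the threshold is still billed at the standard rate).
function getRate(model, tokenType, inputTokenCount = 0) {
  const premium = premiumTokenValues[model];
  if (premium && inputTokenCount > premium.threshold) {
    return premium[tokenType];
  }
  return tokenValues[model]?.[tokenType];
}
```

The strict `>` comparison matches the test below that expects standard pricing at exactly the premium threshold.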

* feat: Add effort field to conversation and preset schemas

- Introduced a new optional `effort` field of type `String` in both the `IPreset` and `IConversation` interfaces.
- Updated the `conversationPreset` schema to include the `effort` field, enhancing the data structure for better context management.
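For illustration, the new optional field might be declared as in the fragment below; the shape is an assumption for this sketch, and the real `conversationPreset` schema defines many more fields alongside it:

```javascript
// Hypothetical fragment: an optional String `effort` field as it might be
// declared inside the conversationPreset schema definition (the real schema
// contains many other fields alongside it).
const effortField = {
  effort: {
    type: String,
    required: false,
  },
};
```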

* chore: Clean up unused variable and comments in initialize function

* chore: update dependencies and SDK versions

- Updated @anthropic-ai/sdk to version 0.73.0 in package.json and overrides.
- Updated @anthropic-ai/vertex-sdk to version 0.14.3 in packages/api/package.json.
- Updated @librechat/agents to version 3.1.34 in packages/api/package.json.
- Refactored imports in packages/api/src/endpoints/anthropic/vertex.ts for consistency.

* chore: remove postcss-loader from dependencies

* feat: Bedrock model support for adaptive thinking configuration

- Updated .env.example to include new Bedrock model IDs for Claude Opus 4.6.
- Refactored bedrockInputParser to support adaptive thinking for Opus models, allowing for dynamic thinking configurations.
- Introduced a new function to check model compatibility with adaptive thinking.
- Added an optional `effort` field to the input schemas and updated related configurations.
- Enhanced tests to validate the new adaptive thinking logic and model configurations.
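The model-compatibility check could be sketched as below; the helper names, version cutoff, and model ID patterns are assumptions for illustration, not the exact code from bedrockInputParser:

```javascript
// Hypothetical sketch of a model-compatibility check for adaptive thinking.
// Real model ID patterns and helper names may differ.
function parseOpusVersion(modelId) {
  // Match e.g. "claude-opus-4-6" or "anthropic.claude-opus-4-6-v1:0".
  const match = /claude-opus-(\d+)(?:[-.](\d+))?/.exec(modelId);
  if (!match) {
    return null;
  }
  return { major: Number(match[1]), minor: Number(match[2] ?? 0) };
}

function supportsAdaptiveThinking(modelId) {
  const version = parseOpusVersion(modelId);
  if (!version) {
    return false;
  }
  // Adaptive thinking is assumed available from Opus 4.6 onward.
  return version.major > 4 || (version.major === 4 && version.minor >= 6);
}
```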

* feat: Add tests for Opus 4.6 adaptive thinking configuration

* feat: Update model references for Opus 4.6 by removing version suffix

* feat: Update @librechat/agents to version 3.1.35 in package.json and package-lock.json

* chore: Update @librechat/agents to version 3.1.36 in package.json and package-lock.json

* feat: Normalize inputTokenCount for spendTokens and enhance transaction handling

- Introduced normalization for promptTokens to ensure inputTokenCount does not go negative.
- Updated transaction logic to reflect normalized inputTokenCount in pricing calculations.
- Added comprehensive tests to validate the new normalization logic and its impact on transaction rates for both standard and premium models.
- Refactored related functions to improve clarity and maintainability of token value calculations.
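A minimal sketch of the clamping step, assuming a simple `Math.max` guard; the real spendTokens builds full transaction records around this:

```javascript
// Minimal sketch of the non-negative normalization applied before pricing.
// Math.max clamps negative counts to zero so inputTokenCount (and the
// resulting token values) can never go negative.
function normalizeTokenCounts({ promptTokens = 0, completionTokens = 0 }) {
  return {
    promptTokens: Math.max(promptTokens, 0),
    completionTokens: Math.max(completionTokens, 0),
  };
}
```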

* chore: Simplify adaptive thinking configuration in helpers.ts

- Removed unnecessary type casting for the thinking property in updatedOptions.
- Ensured that adaptive thinking is directly assigned when conditions are met, improving code clarity.

* refactor: Replace hard-coded token values with dynamic retrieval from maxTokensMap in model tests

* fix: Ensure non-negative token values in spendTokens calculations

- Updated token value retrieval to use Math.max for prompt and completion tokens, preventing negative values.
- Enhanced clarity in token calculations for both prompt and completion transactions.

* test: Add test for normalization of negative structured token values in spendStructuredTokens

- Implemented a test to ensure that negative structured token values are normalized to zero during token spending.
- Verified that the transaction rates remain consistent with the expected standard values after normalization.

* refactor: Bedrock model support for adaptive thinking and context handling

- Added tests for various alternate naming conventions of Claude models to validate adaptive thinking and context support.
- Refactored `supportsAdaptiveThinking` and `supportsContext1m` functions to utilize new parsing methods for model version extraction.
- Updated `bedrockInputParser` to handle effort configurations more effectively and strip unnecessary fields for non-adaptive models.
- Improved handling of anthropic model configurations in the input parser.
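The field-stripping behavior can be sketched as below; the function name and field list are assumptions for illustration, and the real bedrockInputParser handles many more options:

```javascript
// Hypothetical sketch: drop adaptive-only fields from the request options
// when the target model does not support adaptive thinking.
function stripAdaptiveFields(options, modelSupportsAdaptive) {
  if (modelSupportsAdaptive) {
    return options;
  }
  // Destructure the adaptive-only fields away and keep everything else.
  const { effort, thinking, ...rest } = options;
  return rest;
}
```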

* fix: Improve token value retrieval in getMultiplier function

- Updated the token value retrieval logic to use optional chaining for better safety against undefined values.
- Added a test case to ensure that the function returns the default rate when the provided valueKey does not exist in tokenValues.
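The guard described above amounts to something like this sketch; the rate numbers and the default are illustrative, not the actual values in tx.js:

```javascript
// Sketch of a defensive rate lookup. Optional chaining makes an unknown
// valueKey yield undefined instead of throwing, so the nullish fallback to
// the default rate kicks in. Numbers here are illustrative.
const defaultRate = 6;
const tokenValues = {
  'gpt-4': { prompt: 30, completion: 60 },
};

function getMultiplier({ valueKey, tokenType }) {
  return tokenValues[valueKey]?.[tokenType] ?? defaultRate;
}
```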
Commit 41e2348d47 by Danny Avila, 2026-02-06 18:35:36 -05:00 (committed by GitHub)
Parent: 1d5f2eb04b
32 changed files with 2902 additions and 1087 deletions
@@ -1,7 +1,7 @@
 const mongoose = require('mongoose');
 const { MongoMemoryServer } = require('mongodb-memory-server');
 const { spendTokens, spendStructuredTokens } = require('./spendTokens');
-const { getMultiplier, getCacheMultiplier } = require('./tx');
+const { getMultiplier, getCacheMultiplier, premiumTokenValues, tokenValues } = require('./tx');
 const { createTransaction, createStructuredTransaction } = require('./Transaction');
 const { Balance, Transaction } = require('~/db/models');
@@ -564,3 +564,291 @@ describe('Transactions Config Tests', () => {
    expect(balance.tokenCredits).toBe(initialBalance);
  });
});

describe('calculateTokenValue Edge Cases', () => {
  test('should derive multiplier from model when valueKey is not provided', async () => {
    const userId = new mongoose.Types.ObjectId();
    const initialBalance = 100000000;
    await Balance.create({ user: userId, tokenCredits: initialBalance });
    const model = 'gpt-4';
    const promptTokens = 1000;
    const result = await createTransaction({
      user: userId,
      conversationId: 'test-no-valuekey',
      model,
      tokenType: 'prompt',
      rawAmount: -promptTokens,
      context: 'test',
      balance: { enabled: true },
    });
    const expectedRate = getMultiplier({ model, tokenType: 'prompt' });
    expect(result.rate).toBe(expectedRate);
    const tx = await Transaction.findOne({ user: userId });
    expect(tx.tokenValue).toBe(-promptTokens * expectedRate);
    expect(tx.rate).toBe(expectedRate);
  });

  test('should derive valueKey and apply correct rate for an unknown model with tokenType', async () => {
    const userId = new mongoose.Types.ObjectId();
    const initialBalance = 100000000;
    await Balance.create({ user: userId, tokenCredits: initialBalance });
    await createTransaction({
      user: userId,
      conversationId: 'test-unknown-model',
      model: 'some-unrecognized-model-xyz',
      tokenType: 'prompt',
      rawAmount: -500,
      context: 'test',
      balance: { enabled: true },
    });
    const tx = await Transaction.findOne({ user: userId });
    expect(tx.rate).toBeDefined();
    expect(tx.rate).toBeGreaterThan(0);
    expect(tx.tokenValue).toBe(tx.rawAmount * tx.rate);
  });

  test('should correctly apply model-derived multiplier without valueKey for completion', async () => {
    const userId = new mongoose.Types.ObjectId();
    const initialBalance = 100000000;
    await Balance.create({ user: userId, tokenCredits: initialBalance });
    const model = 'claude-opus-4-6';
    const completionTokens = 500;
    const result = await createTransaction({
      user: userId,
      conversationId: 'test-completion-no-valuekey',
      model,
      tokenType: 'completion',
      rawAmount: -completionTokens,
      context: 'test',
      balance: { enabled: true },
    });
    const expectedRate = getMultiplier({ model, tokenType: 'completion' });
    expect(expectedRate).toBe(tokenValues[model].completion);
    expect(result.rate).toBe(expectedRate);
    const updatedBalance = await Balance.findOne({ user: userId });
    expect(updatedBalance.tokenCredits).toBeCloseTo(
      initialBalance - completionTokens * expectedRate,
      0,
    );
  });
});

describe('Premium Token Pricing Integration Tests', () => {
  test('spendTokens should apply standard pricing when prompt tokens are below premium threshold', async () => {
    const userId = new mongoose.Types.ObjectId();
    const initialBalance = 100000000;
    await Balance.create({ user: userId, tokenCredits: initialBalance });
    const model = 'claude-opus-4-6';
    const promptTokens = 100000;
    const completionTokens = 500;
    const txData = {
      user: userId,
      conversationId: 'test-premium-below',
      model,
      context: 'test',
      endpointTokenConfig: null,
      balance: { enabled: true },
    };
    await spendTokens(txData, { promptTokens, completionTokens });
    const standardPromptRate = tokenValues[model].prompt;
    const standardCompletionRate = tokenValues[model].completion;
    const expectedCost =
      promptTokens * standardPromptRate + completionTokens * standardCompletionRate;
    const updatedBalance = await Balance.findOne({ user: userId });
    expect(updatedBalance.tokenCredits).toBeCloseTo(initialBalance - expectedCost, 0);
  });

  test('spendTokens should apply premium pricing when prompt tokens exceed premium threshold', async () => {
    const userId = new mongoose.Types.ObjectId();
    const initialBalance = 100000000;
    await Balance.create({ user: userId, tokenCredits: initialBalance });
    const model = 'claude-opus-4-6';
    const promptTokens = 250000;
    const completionTokens = 500;
    const txData = {
      user: userId,
      conversationId: 'test-premium-above',
      model,
      context: 'test',
      endpointTokenConfig: null,
      balance: { enabled: true },
    };
    await spendTokens(txData, { promptTokens, completionTokens });
    const premiumPromptRate = premiumTokenValues[model].prompt;
    const premiumCompletionRate = premiumTokenValues[model].completion;
    const expectedCost =
      promptTokens * premiumPromptRate + completionTokens * premiumCompletionRate;
    const updatedBalance = await Balance.findOne({ user: userId });
    expect(updatedBalance.tokenCredits).toBeCloseTo(initialBalance - expectedCost, 0);
  });

  test('spendTokens should apply standard pricing at exactly the premium threshold', async () => {
    const userId = new mongoose.Types.ObjectId();
    const initialBalance = 100000000;
    await Balance.create({ user: userId, tokenCredits: initialBalance });
    const model = 'claude-opus-4-6';
    const promptTokens = premiumTokenValues[model].threshold;
    const completionTokens = 500;
    const txData = {
      user: userId,
      conversationId: 'test-premium-exact',
      model,
      context: 'test',
      endpointTokenConfig: null,
      balance: { enabled: true },
    };
    await spendTokens(txData, { promptTokens, completionTokens });
    const standardPromptRate = tokenValues[model].prompt;
    const standardCompletionRate = tokenValues[model].completion;
    const expectedCost =
      promptTokens * standardPromptRate + completionTokens * standardCompletionRate;
    const updatedBalance = await Balance.findOne({ user: userId });
    expect(updatedBalance.tokenCredits).toBeCloseTo(initialBalance - expectedCost, 0);
  });

  test('spendStructuredTokens should apply premium pricing when total input tokens exceed threshold', async () => {
    const userId = new mongoose.Types.ObjectId();
    const initialBalance = 100000000;
    await Balance.create({ user: userId, tokenCredits: initialBalance });
    const model = 'claude-opus-4-6';
    const txData = {
      user: userId,
      conversationId: 'test-structured-premium',
      model,
      context: 'message',
      endpointTokenConfig: null,
      balance: { enabled: true },
    };
    const tokenUsage = {
      promptTokens: {
        input: 200000,
        write: 10000,
        read: 5000,
      },
      completionTokens: 1000,
    };
    const totalInput =
      tokenUsage.promptTokens.input + tokenUsage.promptTokens.write + tokenUsage.promptTokens.read;
    await spendStructuredTokens(txData, tokenUsage);
    const premiumPromptRate = premiumTokenValues[model].prompt;
    const premiumCompletionRate = premiumTokenValues[model].completion;
    const writeMultiplier = getCacheMultiplier({ model, cacheType: 'write' });
    const readMultiplier = getCacheMultiplier({ model, cacheType: 'read' });
    const expectedPromptCost =
      tokenUsage.promptTokens.input * premiumPromptRate +
      tokenUsage.promptTokens.write * writeMultiplier +
      tokenUsage.promptTokens.read * readMultiplier;
    const expectedCompletionCost = tokenUsage.completionTokens * premiumCompletionRate;
    const expectedTotalCost = expectedPromptCost + expectedCompletionCost;
    const updatedBalance = await Balance.findOne({ user: userId });
    expect(totalInput).toBeGreaterThan(premiumTokenValues[model].threshold);
    expect(updatedBalance.tokenCredits).toBeCloseTo(initialBalance - expectedTotalCost, 0);
  });

  test('spendStructuredTokens should apply standard pricing when total input tokens are below threshold', async () => {
    const userId = new mongoose.Types.ObjectId();
    const initialBalance = 100000000;
    await Balance.create({ user: userId, tokenCredits: initialBalance });
    const model = 'claude-opus-4-6';
    const txData = {
      user: userId,
      conversationId: 'test-structured-standard',
      model,
      context: 'message',
      endpointTokenConfig: null,
      balance: { enabled: true },
    };
    const tokenUsage = {
      promptTokens: {
        input: 50000,
        write: 10000,
        read: 5000,
      },
      completionTokens: 1000,
    };
    const totalInput =
      tokenUsage.promptTokens.input + tokenUsage.promptTokens.write + tokenUsage.promptTokens.read;
    await spendStructuredTokens(txData, tokenUsage);
    const standardPromptRate = tokenValues[model].prompt;
    const standardCompletionRate = tokenValues[model].completion;
    const writeMultiplier = getCacheMultiplier({ model, cacheType: 'write' });
    const readMultiplier = getCacheMultiplier({ model, cacheType: 'read' });
    const expectedPromptCost =
      tokenUsage.promptTokens.input * standardPromptRate +
      tokenUsage.promptTokens.write * writeMultiplier +
      tokenUsage.promptTokens.read * readMultiplier;
    const expectedCompletionCost = tokenUsage.completionTokens * standardCompletionRate;
    const expectedTotalCost = expectedPromptCost + expectedCompletionCost;
    const updatedBalance = await Balance.findOne({ user: userId });
    expect(totalInput).toBeLessThanOrEqual(premiumTokenValues[model].threshold);
    expect(updatedBalance.tokenCredits).toBeCloseTo(initialBalance - expectedTotalCost, 0);
  });

  test('non-premium models should not be affected by inputTokenCount regardless of prompt size', async () => {
    const userId = new mongoose.Types.ObjectId();
    const initialBalance = 100000000;
    await Balance.create({ user: userId, tokenCredits: initialBalance });
    const model = 'claude-opus-4-5';
    const promptTokens = 300000;
    const completionTokens = 500;
    const txData = {
      user: userId,
      conversationId: 'test-no-premium',
      model,
      context: 'test',
      endpointTokenConfig: null,
      balance: { enabled: true },
    };
    await spendTokens(txData, { promptTokens, completionTokens });
    const standardPromptRate = getMultiplier({ model, tokenType: 'prompt' });
    const standardCompletionRate = getMultiplier({ model, tokenType: 'completion' });
    const expectedCost =
      promptTokens * standardPromptRate + completionTokens * standardCompletionRate;
    const updatedBalance = await Balance.findOne({ user: userId });
    expect(updatedBalance.tokenCredits).toBeCloseTo(initialBalance - expectedCost, 0);
  });
});