Commit graph

2 commits

Author SHA1 Message Date
Danny Avila
5ea59ecb2b
🐛 fix: Normalize output_text blocks in Responses API input conversion (#11835)
* 🐛 fix: Normalize `output_text` blocks in Responses API input conversion

Treat `output_text` content blocks the same as `input_text` when
converting Responses API input to internal message format. Previously,
assistant messages containing `output_text` blocks fell through to the
default handler, producing `{ type: 'output_text' }` without a `text`
field, which caused downstream provider adapters (e.g. Bedrock) to fail
with "Unsupported content block type: output_text".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: Remove ChatModelStreamHandler from OpenAI and Responses controllers

Removed ChatModelStreamHandler from both OpenAIChatCompletionController and createResponse to streamline event handling. Message deltas and reasoning deltas are now handled by the existing handlers, which simplifies the agent's event processing logic and improves maintainability.

* feat: Enhance input conversion in Responses API

Updated the `convertInputToMessages` function to handle additional content types, including `input_file` and `refusal` blocks, converting them to the appropriate internal message formats. Added null filtering for content arrays and default values for missing fields to improve robustness, plus unit tests covering these input scenarios.
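
A hedged sketch of how those additional block types might be mapped, with the null filtering and defaults mentioned above; the output shapes and field names are assumptions:

```ts
// Sketch: handling `refusal` and `input_file` blocks with null filtering and
// defaults for missing fields. Output shapes and field names are assumptions.
type ConvertedBlock =
  | { type: 'text'; text: string }
  | { type: 'file'; file_id: string };

function convertExtraBlock(block: { type: string; [key: string]: unknown }): ConvertedBlock | null {
  switch (block.type) {
    case 'refusal':
      // Default to an empty string when the refusal text is absent.
      return { type: 'text', text: typeof block.refusal === 'string' ? block.refusal : '' };
    case 'input_file':
      return { type: 'file', file_id: typeof block.file_id === 'string' ? block.file_id : '' };
    default:
      // Unknown block types are dropped by the null filter below.
      return null;
  }
}

function convertContent(blocks: Array<{ type: string; [key: string]: unknown }>): ConvertedBlock[] {
  return blocks.map(convertExtraBlock).filter((b): b is ConvertedBlock => b !== null);
}
```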

* fix: Forward upstream provider status codes in error responses

Updated error handling in OpenAIChatCompletionController and createResponse to forward upstream provider status codes (e.g., Anthropic 400s) instead of masking them as 500, so clients receive accurate status codes and error types.
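
A minimal sketch of the status-forwarding idea in an Express-style handler; the error fields read here (`status`, `message`) are assumptions about the provider SDK's error shape:

```ts
import type { Response } from 'express';

// Sketch: forward the upstream provider's status code instead of masking it
// as 500. The error shape is an assumption, not the actual SDK type.
function sendProviderError(res: Response, err: unknown): void {
  const status =
    typeof (err as { status?: unknown })?.status === 'number'
      ? (err as { status: number }).status
      : 500; // fall back to 500 only when the provider gave no status
  const message = err instanceof Error ? err.message : 'Unknown error';
  res.status(status).json({
    error: {
      message,
      type: status >= 500 ? 'server_error' : 'invalid_request_error',
    },
  });
}
```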

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 22:34:19 -05:00
Danny Avila
9a38af5875
📉 feat: Add Token Usage Tracking for Agents API Routes (#11600)
* feat: Implement token usage tracking for OpenAI and Responses controllers

- Added functionality to record token usage against user balances in OpenAIChatCompletionController and createResponse functions.
- Introduced new utility functions for managing token spending and structured token usage.
- Enhanced error handling for token recording to improve logging and debugging capabilities.
- Updated imports to include new usage tracking methods and configurations.
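
A hedged sketch of what recording collected usage against a user balance could look like; the function signature, `spendTokens` shape, and field names are illustrative assumptions drawn from this commit message, not the actual API:

```ts
// Sketch of recording collected usage entries against a user's balance.
// All names and shapes below are assumptions for illustration.
interface UsageEntry {
  model: string;
  input_tokens: number;
  output_tokens: number;
}

type SpendTokens = (
  txData: { user: string; model: string; context: string },
  usage: { promptTokens: number; completionTokens: number },
) => Promise<void>;

async function recordCollectedUsage({
  user,
  collectedUsage,
  spendTokens,
}: {
  user: string;
  collectedUsage: UsageEntry[] | null | undefined;
  spendTokens: SpendTokens;
}): Promise<void> {
  if (!collectedUsage?.length) {
    return; // nothing collected, nothing to record
  }
  for (const usage of collectedUsage) {
    try {
      await spendTokens(
        { user, model: usage.model, context: 'message' },
        { promptTokens: usage.input_tokens, completionTokens: usage.output_tokens },
      );
    } catch (err) {
      // Log and continue so one failed entry does not block the others.
      console.error('[recordCollectedUsage] failed to record usage entry', err);
    }
  }
}
```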

* test: Add unit tests for recordCollectedUsage function in usage.spec.ts

- Introduced comprehensive tests for the recordCollectedUsage function, covering various scenarios including handling empty and null collectedUsage, single and multiple usage entries, and sequential and parallel execution cases.
- Enhanced token handling tests to ensure correct calculations for both OpenAI and Anthropic formats, including cache token management.
- Improved overall test coverage for usage tracking functionality, ensuring robust validation of expected behaviors and outcomes.
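
Illustrative Jest cases in the spirit of those tests, exercising the hypothetical `recordCollectedUsage` sketched above (the import path is a stand-in, not the real module):

```ts
// Illustrative tests against the hypothetical sketch above.
import { recordCollectedUsage } from './usage';

describe('recordCollectedUsage', () => {
  it('records each collected usage entry via the injected spendTokens', async () => {
    const spendTokens = jest.fn().mockResolvedValue(undefined);
    await recordCollectedUsage({
      user: 'user-1',
      collectedUsage: [
        { model: 'gpt-4o', input_tokens: 120, output_tokens: 40 },
        { model: 'gpt-4o', input_tokens: 10, output_tokens: 5 },
      ],
      spendTokens,
    });
    expect(spendTokens).toHaveBeenCalledTimes(2);
  });

  it('is a no-op for null or empty collectedUsage', async () => {
    const spendTokens = jest.fn();
    await recordCollectedUsage({ user: 'user-1', collectedUsage: null, spendTokens });
    await recordCollectedUsage({ user: 'user-1', collectedUsage: [], spendTokens });
    expect(spendTokens).not.toHaveBeenCalled();
  });
});
```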

* test: Add unit tests for OpenAI and Responses API controllers

- Introduced comprehensive unit tests for the OpenAIChatCompletionController and createResponse functions, focusing on the correct invocation of recordCollectedUsage for token spending.
- Enhanced tests to validate the passing of balance and transactions configuration to the recordCollectedUsage function.
- Ensured proper dependency injection of spendTokens and spendStructuredTokens in the usage recording process.
- Improved overall test coverage for token usage tracking, ensuring robust validation of expected behaviors and outcomes.
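
A hedged sketch of the dependency wiring those tests are described as verifying; the config field names and the structured/plain split are assumptions:

```ts
// Sketch: the controller injects spendTokens/spendStructuredTokens plus the
// balance/transactions config into the usage recorder. Names are illustrative.
type SpendFn = (
  txData: Record<string, unknown>,
  usage: Record<string, number>,
) => Promise<void>;

interface UsageRecorderDeps {
  spendTokens: SpendFn;
  spendStructuredTokens: SpendFn;
  balance: { enabled: boolean };
  transactions: { enabled: boolean };
}

function createUsageRecorder(deps: UsageRecorderDeps) {
  return async (user: string, collectedUsage: Array<Record<string, number>>) => {
    if (!deps.transactions.enabled || collectedUsage.length === 0) {
      return; // usage tracking disabled or nothing to record
    }
    for (const usage of collectedUsage) {
      // Entries carrying cache token fields (Anthropic-style) go through the
      // structured spender; plain prompt/completion usage uses spendTokens.
      const isStructured =
        'cache_read_input_tokens' in usage || 'cache_creation_input_tokens' in usage;
      const spend = isStructured ? deps.spendStructuredTokens : deps.spendTokens;
      await spend({ user }, usage);
    }
  };
}
```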
2026-02-01 21:36:51 -05:00