🌊 feat: Add Disable Streaming Toggle (#8177)

* 🌊 feat: Add Disable Streaming Option in Configuration

- Introduced a new setting to disable streaming responses in the OpenAI, Azure, and custom endpoint parameter panels.
- Updated translation files to include labels and descriptions for the disable streaming feature.
- Modified the relevant schemas and parameter settings to support the new disable streaming functionality (see the schema sketch after this list).
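A minimal sketch of what the schema change could look like, assuming a zod-based parameter schema; the schema name and all fields other than disableStreaming are illustrative, not this commit's code:

```ts
import { z } from 'zod';

// Illustrative endpoint parameter schema; only `disableStreaming` comes from
// this commit. The surrounding fields and the schema name are assumptions.
export const endpointParamsSchema = z.object({
  temperature: z.number().min(0).max(2).optional(),
  top_p: z.number().min(0).max(1).optional(),
  /** When true, request the complete response in one payload instead of a stream. */
  disableStreaming: z.boolean().optional(),
});
```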

* 🔧 fix: disableStreaming state not persisting when returning to a conversation

- Added a disableStreaming field to the IPreset interface and conversationPreset (see the interface sketch after this list).
- Moved toggles and sliders around for a cleaner left-right UI split in the parameters panel.
- Removed the old reference to 'grounding' in conversationPreset (now web_search) and added web_search to IPreset.
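Roughly, the persistence fix amounts to extending the preset type; a sketch, with the existing fields elided and illustrative:

```ts
// Sketch of the IPreset change; fields other than the two added here are placeholders.
interface IPreset {
  presetId?: string;
  endpoint?: string;
  // ...other existing preset fields elided...
  /** Persisted so the toggle state survives returning to a conversation. */
  disableStreaming?: boolean;
  /** Replaces the removed 'grounding' field. */
  web_search?: boolean;
}
```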
Author: Dustin Healy, 2025-07-16 07:09:40 -07:00 (committed by GitHub)
Parent: 52bbac3a37
Commit: 7f8c327509
6 changed files with 27 additions and 3 deletions

@@ -231,6 +231,7 @@
"com_endpoint_openai_reasoning_summary": "Responses API only: A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process. Set to none,auto, concise, or detailed.",
"com_endpoint_openai_resend": "Resend all previously attached images. Note: this can significantly increase token cost and you may experience errors with many image attachments.",
"com_endpoint_openai_resend_files": "Resend all previously attached files. Note: this will increase token cost and you may experience errors with many attachments.",
"com_endpoint_disable_streaming": "Disable streaming responses and receive the complete response at once. Useful for models like o3 that require organization verification for streaming",
"com_endpoint_openai_stop": "Up to 4 sequences where the API will stop generating further tokens.",
"com_endpoint_openai_temp": "Higher values = more random, while lower values = more focused and deterministic. We recommend altering this or Top P but not both.",
"com_endpoint_openai_topp": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We recommend altering this or temperature but not both.",
@@ -239,6 +240,7 @@
"com_endpoint_output": "Output",
"com_endpoint_plug_image_detail": "Image Detail",
"com_endpoint_plug_resend_files": "Resend Files",
"com_endpoint_disable_streaming_label": "Disable Streaming",
"com_endpoint_plug_set_custom_instructions_for_gpt_placeholder": "Set custom instructions to include in System Message. Default: none",
"com_endpoint_plug_skip_completion": "Skip Completion",
"com_endpoint_plug_use_functions": "Use Functions",