🧠 feat: Prompt caching switch, prompt query params; refactor: static cache, prompt/markdown styling, trim copied code, switch new chat to convo URL (#3784)

* refactor: Update staticCache to use oneDayInSeconds for sMaxAge and maxAge

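A minimal sketch of what such a helper could look like, assuming an Express static middleware and a shared `oneDayInSeconds` constant; the exact options in LibreChat's `staticCache` may differ:

```ts
import express from 'express';

const oneDayInSeconds = 24 * 60 * 60;

// Serve a directory with browser (max-age) and shared-cache (s-maxage) lifetimes of one day.
export function staticCache(staticPath: string) {
  return express.static(staticPath, {
    maxAge: oneDayInSeconds * 1000, // express.static expects milliseconds
    setHeaders: (res) => {
      res.setHeader(
        'Cache-Control',
        `public, max-age=${oneDayInSeconds}, s-maxage=${oneDayInSeconds}`,
      );
    },
  });
}
```
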
* refactor: role updates

* style: first pass at cursor styling

* style: Update nested list styles in style.css

* feat: set isSubmitting to true in the message handler to prevent an edge case where the submitting state turns false during the message stream

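A sketch of the idea rather than the actual hook: re-asserting the flag from inside the stream handler means a stale completion or error callback cannot flip it to false while tokens are still arriving. The hook and setter names below are assumptions:

```ts
import { useCallback, useState } from 'react';

// Illustrative only; LibreChat's real handler lives in its SSE/message hooks.
function useMessageStream() {
  const [isSubmitting, setIsSubmitting] = useState(false);
  const [text, setText] = useState('');

  const onMessage = useCallback((event: MessageEvent<string>) => {
    // Re-assert the flag on every streamed chunk so a late "done"/error from a
    // previous request can't turn it false mid-stream.
    setIsSubmitting(true);
    setText((prev) => prev + event.data);
  }, []);

  return { isSubmitting, text, onMessage };
}
```
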
* feat: Add logic to redirect to conversation page after creating a new conversation

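Roughly, the flow could look like this react-router sketch; the `/c/:conversationId` path matches the "switch new chat to convo URL" item in the title, but the mutation shape is assumed:

```ts
import { useNavigate } from 'react-router-dom';

// Sketch: once the new conversation exists, move the client onto its canonical URL.
export function useCreateAndOpenConversation(
  createConversation: () => Promise<{ conversationId: string }>,
) {
  const navigate = useNavigate();

  return async () => {
    const { conversationId } = await createConversation();
    navigate(`/c/${conversationId}`);
  };
}
```
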
* refactor: Trim code string before copying in CodeBlock component

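The change amounts to trimming before the clipboard write; a minimal sketch (the real component may use a clipboard helper rather than the raw API):

```ts
// Trim the code string so leading/trailing newlines from the fenced block aren't copied.
export async function copyCode(codeString: string): Promise<void> {
  await navigator.clipboard.writeText(codeString.trim());
}
```
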
* feat: configSchema defaults for bookmarks and presets

* feat: Update loadDefaultInterface to handle undefined config

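Taken together, the two items above amount to giving the interface config explicit defaults and tolerating a missing config object. A hedged zod sketch of that idea (the real schema lives in librechat-data-provider and has more fields):

```ts
import { z } from 'zod';

// Defaults mean bookmarks/presets render unless explicitly disabled.
const interfaceSchema = z
  .object({
    bookmarks: z.boolean().default(true),
    presets: z.boolean().default(true),
  })
  .default({});

// loadDefaultInterface-style handling: parsing an undefined config still yields the defaults.
export function loadDefaultInterface(config?: { interface?: unknown }) {
  return interfaceSchema.parse(config?.interface ?? {});
}
```
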
* refactor: use  for compression check

* feat: first pass at query params

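One plausible reading of the prompt query params feature, sketched with react-router; the `prompt` key and the setter are assumptions, not the commit's actual parameter names:

```ts
import { useEffect } from 'react';
import { useSearchParams } from 'react-router-dom';

// Read a prompt from the URL, prefill the input, then strip the param from the address bar.
export function usePromptQueryParam(setInputText: (text: string) => void) {
  const [searchParams, setSearchParams] = useSearchParams();

  useEffect(() => {
    const prompt = searchParams.get('prompt');
    if (prompt) {
      setInputText(prompt);
      searchParams.delete('prompt');
      setSearchParams(searchParams, { replace: true });
    }
  }, [searchParams, setSearchParams, setInputText]);
}
```
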
* fix: styling issues for prompt cards

* feat: anthropic prompt caching UI switch

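Conceptually this is a boolean endpoint setting surfaced as a toggle. The sketch below uses a plain checkbox and the label text of the new `com_endpoint_prompt_cache` string from the diff further down; LibreChat's actual switch component, setting name, and state wiring are not shown:

```tsx
import React from 'react';

type Props = {
  promptCache: boolean;
  onChange: (value: boolean) => void;
};

// Illustrative toggle bound to a per-endpoint promptCache setting (name assumed).
export function PromptCacheSwitch({ promptCache, onChange }: Props) {
  return (
    <label>
      <input
        type="checkbox"
        checked={promptCache}
        onChange={(e) => onChange(e.target.checked)}
      />
      Use Prompt Caching
    </label>
  );
}
```
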
* chore: Update static file cache control defaults/comments in .env.example

* ci: fix tests

* ci: fix tests

* chore: use the "submitting" class for the server error connection Suspense fallback
Author: Danny Avila, 2024-08-26 15:34:46 -04:00 (committed by GitHub)
Commit: 5694ad4e55 (parent: bd701c197e)
Signature: GPG key ID B5690EEEBB952194 (no known key found for this signature in database)
31 changed files with 519 additions and 112 deletions

```diff
@@ -472,6 +472,9 @@ export default {
     'Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model\'s vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).',
   com_endpoint_anthropic_maxoutputtokens:
     'Maximum number of tokens that can be generated in the response. Specify a lower value for shorter responses and a higher value for longer responses. Note: models may stop before reaching this maximum.',
+  com_endpoint_anthropic_prompt_cache:
+    'Prompt caching allows reusing large context or instructions across API calls, reducing costs and latency',
+  com_endpoint_prompt_cache: 'Use Prompt Caching',
   com_endpoint_anthropic_custom_name_placeholder: 'Set a custom name for Anthropic',
   com_endpoint_frequency_penalty: 'Frequency Penalty',
   com_endpoint_presence_penalty: 'Presence Penalty',
```
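
For context on what the new strings describe: at the time of this commit, Anthropic's prompt caching was a beta feature in which stable, reusable blocks (for example a long system prompt) are marked with `cache_control` so subsequent calls can reuse them. Below is a sketch of the request shape against the public Messages API; this is not LibreChat's client code, and the beta header may no longer be required on newer API versions:

```ts
const LONG_SYSTEM_INSTRUCTIONS = '...large, stable context worth caching...';

async function sendWithPromptCache(userText: string) {
  return fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': process.env.ANTHROPIC_API_KEY ?? '',
      'anthropic-version': '2023-06-01',
      'anthropic-beta': 'prompt-caching-2024-07-31', // beta opt-in as of August 2024
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet-20240620',
      max_tokens: 1024,
      system: [
        {
          type: 'text',
          text: LONG_SYSTEM_INSTRUCTIONS,
          cache_control: { type: 'ephemeral' }, // mark this block as cacheable
        },
      ],
      messages: [{ role: 'user', content: userText }],
    }),
  });
}
```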