Add localization support for endpoint pages components (#667)

* init localization

* Update default to en

* Fix merge issue and import path.

* Set default to en

* Change jsx to tsx

* Update the password max length string.

* Remove languageContext as using the recoil instead.

* Add localization to component endpoints pages

* Revert default to en after testing.

* Update LoginForm.tsx

* Fix translation.

* Make lint happy
Abner Chou 2023-07-22 15:09:45 -04:00 committed by GitHub
parent 4148c6d219
commit b64273957a
15 changed files with 212 additions and 67 deletions


@@ -78,4 +78,74 @@ export default {
com_auth_to_try_again: 'to try again.',
com_auth_submit_registration: 'Submit registration',
com_auth_welcome_back: 'Welcome back',
com_endpoint_bing_enable_sydney: 'Enable Sydney',
com_endpoint_bing_to_enable_sydney: 'To enable Sydney',
com_endpoint_bing_jailbreak: 'Jailbreak',
com_endpoint_bing_context_placeholder:
'Bing can use up to 7k tokens for \'context\', which it can reference for the conversation. The specific limit is not known but may run into errors exceeding 7k tokens',
com_endpoint_bing_system_message_placeholder:
'WARNING: Misuse of this feature can get you BANNED from using Bing! Click on \'System Message\' for full instructions and the default message if omitted, which is the \'Sydney\' preset that is considered safe.',
com_endpoint_system_message: 'System Message',
com_endpoint_default_blank: 'default: blank',
com_endpoint_default_false: 'default: false',
com_endpoint_default_creative: 'default: creative',
com_endpoint_default_empty: 'default: empty',
com_endpoint_default_with_num: 'default: {0}',
com_endpoint_context: 'Context',
com_endpoint_tone_style: 'Tone Style',
com_endpoint_token_count: 'Token count',
com_endpoint_output: 'Output',
com_endpoint_google_temp:
'Higher values = more random, while lower values = more focused and deterministic. We recommend altering this or Top P but not both.',
  com_endpoint_google_topp:
    'Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (see the topK parameter) until the sum of their probabilities equals the top-p value.',
com_endpoint_google_topk:
'Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model\'s vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).',
  com_endpoint_google_maxoutputtokens:
    'Maximum number of tokens that can be generated in the response. Specify a lower value for shorter responses and a higher value for longer responses.',
com_endpoint_google_custom_name_placeholder: 'Set a custom name for PaLM2',
com_endpoint_google_prompt_prefix_placeholder:
'Set custom instructions or context. Ignored if empty.',
com_endpoint_custom_name: 'Custom Name',
com_endpoint_prompt_prefix: 'Prompt Prefix',
com_endpoint_temperature: 'Temperature',
com_endpoint_default: 'default',
com_endpoint_top_p: 'Top P',
com_endpoint_top_k: 'Top K',
com_endpoint_max_output_tokens: 'Max Output Tokens',
com_endpoint_openai_temp:
'Higher values = more random, while lower values = more focused and deterministic. We recommend altering this or Top P but not both.',
com_endpoint_openai_max:
'The max tokens to generate. The total length of input tokens and generated tokens is limited by the model\'s context length.',
com_endpoint_openai_topp:
'An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We recommend altering this or temperature but not both.',
com_endpoint_openai_freq:
'Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model\'s likelihood to repeat the same line verbatim.',
com_endpoint_openai_pres:
'Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model\'s likelihood to talk about new topics.',
com_endpoint_openai_custom_name_placeholder: 'Set a custom name for ChatGPT',
com_endpoint_openai_prompt_prefix_placeholder:
'Set custom instructions to include in System Message. Default: none',
com_endpoint_frequency_penalty: 'Frequency Penalty',
com_endpoint_presence_penalty: 'Presence Penalty',
com_endpoint_plug_use_functions: 'Use Functions',
com_endpoint_plug_skip_completion: 'Skip Completion',
com_endpoint_disabled_with_tools: 'disabled with tools',
com_endpoint_disabled_with_tools_placeholder: 'Disabled with Tools Selected',
com_endpoint_plug_set_custom_name_for_gpt_placeholder: 'Set a custom name for ChatGPT.',
com_endpoint_plug_set_custom_instructions_for_gpt_placeholder:
'Set custom instructions to include in System Message. Default: none',
  com_endpoint_set_custom_name: 'Set a custom name, so you can find this preset',
com_endpoint_preset_name: 'Preset Name',
com_endpoint: 'Endpoint',
com_endpoint_hide: 'Hide',
com_endpoint_show: 'Show',
com_endpoint_examples: ' Examples',
com_endpoint_completion: 'Completion',
com_endpoint_agent: 'Agent',
com_endpoint_show_what_settings: 'Show {0} Settings',
com_endpoint_save: 'Save',
com_endpoint_export: 'Export',
com_endpoint_save_as_preset: 'Save As Preset',
com_endpoint_not_implemented: 'Not implemented',
};
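Several of the strings above carry positional placeholders, e.g. `com_endpoint_default_with_num: 'default: {0}'` and `com_endpoint_show_what_settings: 'Show {0} Settings'`. A minimal sketch of how such keys might be resolved at render time — the `localize` helper and its fallback behavior here are hypothetical, not LibreChat's actual implementation:

```typescript
// Hypothetical lookup table using two of the keys added in this commit.
const en: Record<string, string> = {
  com_endpoint_default_with_num: 'default: {0}',
  com_endpoint_show_what_settings: 'Show {0} Settings',
};

// Resolve a key and substitute positional '{n}' placeholders.
// Unknown keys fall back to the key itself; unmatched placeholders
// are left untouched.
function localize(key: string, ...values: (string | number)[]): string {
  const template = en[key] ?? key;
  return template.replace(/\{(\d+)\}/g, (match, index) => {
    const value = values[Number(index)];
    return value === undefined ? match : String(value);
  });
}

console.log(localize('com_endpoint_default_with_num', 1024)); // default: 1024
console.log(localize('com_endpoint_show_what_settings', 'OpenAI')); // Show OpenAI Settings
```

Keeping substitution regex-based like this means adding a new language is purely a data change: only the lookup table per locale needs to be swapped, not the components that call the helper.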