client/src/locales/en/translation.json (6 additions, 6 deletions)
@@ -140,11 +140,9 @@
   "com_endpoint_ai": "AI",
   "com_endpoint_anthropic_maxoutputtokens": "Maximum number of tokens that can be generated in the response. Specify a lower value for shorter responses and a higher value for longer responses. Note: models may stop before reaching this maximum.",
   "com_endpoint_anthropic_prompt_cache": "Prompt caching allows reusing large context or instructions across API calls, reducing costs and latency",
-  "com_endpoint_anthropic_thinking": "Enables internal reasoning for supported Claude models (3.7 Sonnet). Note: requires \"Thinking Budget\" to be set and lower than \"Max Output Tokens\"",
-  "com_endpoint_anthropic_thinking_budget": "Determines the max number of tokens Claude is allowed to use for its internal reasoning process. Larger budgets can improve response quality by enabling more thorough analysis for complex problems, although Claude may not use the entire budget allocated, especially at ranges above 32K. This setting must be lower than \"Max Output Tokens.\"",
   "com_endpoint_anthropic_temp": "Ranges from 0 to 1. Use temp closer to 0 for analytical / multiple choice, and closer to 1 for creative and generative tasks. We recommend altering this or Top P but not both.",
+  "com_endpoint_anthropic_thinking": "Enables internal reasoning for supported Claude models (3.7 Sonnet). Note: requires \"Thinking Budget\" to be set and lower than \"Max Output Tokens\"",
"com_endpoint_anthropic_thinking_budget": "Determines the max number of tokens Claude is allowed use for its internal reasoning process. Larger budgets can improve response quality by enabling more thorough analysis for complex problems, although Claude may not use the entire budget allocated, especially at ranges above 32K. This setting must be lower than \"Max Output Tokens.\"",
   "com_endpoint_anthropic_topk": "Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).",
   "com_endpoint_anthropic_topp": "Top-p changes how the model selects tokens for output. Tokens are selected from most K (see topK parameter) probable to least until the sum of their probabilities equals the top-p value.",
   "com_endpoint_assistant": "Assistant",

@@ -250,6 +248,8 @@
   "com_endpoint_stop": "Stop Sequences",
   "com_endpoint_stop_placeholder": "Separate values by pressing `Enter`",