src/transformers/generation_utils.py: 3 additions & 3 deletions
@@ -950,9 +950,9 @@ def generate(
             eos_token_id (`int`, *optional*):
                 The id of the *end-of-sequence* token.
             length_penalty (`float`, *optional*, defaults to 1.0):
-                Exponential penalty to the length. 1.0 means no penalty. Set to values < 1.0 in order to encourage the
-                model to generate shorter sequences, to a value > 1.0 in order to encourage the model to produce longer
-                sequences.
+                Exponential penalty to the length. 1.0 means that the beam score is penalized by the sequence length. 0.0 means no penalty. Set to values < 0.0 in order to encourage the
+                model to generate longer sequences, to a value > 0.0 in order to encourage the model to produce shorter
+                sequences.
             no_repeat_ngram_size (`int`, *optional*, defaults to 0):
                 If set to int > 0, all ngrams of that size can only occur once.
             encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0):
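
For reference, the scoring rule this docstring describes divides a hypothesis's summed log-probabilities by its length raised to `length_penalty`. Below is a minimal, self-contained sketch of that rule; the function name `beam_score` and the example numbers are hypothetical, not code from the library:

```python
def beam_score(sum_logprobs: float, length: int, length_penalty: float) -> float:
    # The rule described above: the summed log-probabilities of a finished
    # hypothesis are divided by its length raised to `length_penalty`.
    return sum_logprobs / (length ** length_penalty)

# Hypothetical hypotheses; log-probabilities are negative, so the best
# score is the one closest to zero.
short_sum, short_len = -1.8, 3   # 3-token hypothesis
long_sum, long_len = -2.7, 5     # 5-token hypothesis

for lp in (0.0, 0.5, 1.0):
    print(f"length_penalty={lp}: "
          f"short={beam_score(short_sum, short_len, lp):.3f}, "
          f"long={beam_score(long_sum, long_len, lp):.3f}")
# With length_penalty=0.0 the raw sums are compared; larger exponents
# divide by a bigger power of the length, which changes how long and
# short hypotheses rank against each other.
```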