Conversation

@STHSF commented Jun 26, 2023

Adjust the pad token before counting the number of tokens

Add the pad token before counting the number of tokens.
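For context, a minimal sketch of the pattern this PR proposes (the tokenizer path and the pad-token string are placeholders, not the script's actual values):

```python
from transformers import LlamaTokenizer

# Hypothetical path; substitute the merged tokenizer used in #666.
tokenizer = LlamaTokenizer.from_pretrained("path/to/merged_tokenizer")

# Add the pad token *before* counting the vocabulary, so it is included
# in the count. The merged tokenizer has 49953 tokens; adding a pad token
# brings it to 49954, matching the Alpaca tokenizer's size.
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({"pad_token": "[PAD]"})

print(len(tokenizer))  # 49954 after the pad token is added
```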
@airaria (Contributor) commented Jun 26, 2023

We recommend using the Alpaca tokenizer when running run_clm_sft_with_peft.py.
The if statement checks whether the tokenizer is the Alpaca tokenizer (whose vocab size is 49954).

In #666, you used the merged tokenizer (whose vocab size is 49953) instead of the Alpaca tokenizer from chinese-alpaca-lora.
Therefore, if you switch to the Alpaca tokenizer, the script should function correctly.
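In other words, the guard in run_clm_sft_with_peft.py presumably looks something like the following sketch (the exact structure and message in the script may differ):

```python
# Sketch of the vocab-size check described above: reject any tokenizer
# whose size does not match the Alpaca tokenizer's 49954 tokens.
if len(tokenizer) != 49954:
    raise ValueError(
        f"Expected the Alpaca tokenizer (vocab size 49954), "
        f"got vocab size {len(tokenizer)}."
    )
```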

@ymcui marked this pull request as draft on July 7, 2023 06:42