⏯️ Fix: handle None inputs when resuming GRPO Trainer from checkpoint #3148
Conversation
Thanks!! Can you share a simple piece of code that would fail without your fix?
@qgallouedec here's the sample code:

```python
from datasets import load_dataset
from trl import GRPOTrainer, GRPOConfig

def reward_fn(completions, **_):
    return [1.0 for _ in completions]

# Normal training
trainer = GRPOTrainer(
    model="facebook/opt-125m",
    args=GRPOConfig(
        output_dir="save/test",
        num_generations=2,
        per_device_train_batch_size=2,
        num_iterations=4,
        save_steps=1,
        max_steps=10,
        max_prompt_length=1,
        max_completion_length=1,
    ),
    reward_funcs=reward_fn,
    train_dataset=load_dataset("trl-lib/tldr", split="train"),
)
trainer.train()

# Simulate a fresh trainer instance after an interruption
trainer = GRPOTrainer(
    model="facebook/opt-125m",
    args=GRPOConfig(
        output_dir="save/test",
        num_generations=2,
        per_device_train_batch_size=2,
        num_iterations=4,
        save_steps=1,
        max_steps=10,
        max_prompt_length=1,
        max_completion_length=1,
    ),
    reward_funcs=reward_fn,
    train_dataset=load_dataset("trl-lib/tldr", split="train"),
)

# Resume from a checkpoint at a step that is not divisible by num_iterations
trainer.train(resume_from_checkpoint="save/test/checkpoint-6")
```

Error traceback:
Nice, thank you @PenutChen! In this case, the results won't be exactly the same as if we hadn't interrupted the training (we would have to save and load this buffer), but that's not a big deal.
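For reference, a hypothetical sketch of what saving and loading that buffer could look like, using a `transformers` `TrainerCallback`. This is not part of this PR; the callback class, the `buffered_inputs.pt` file name, and the reliance on the private `_buffered_inputs` attribute are all illustrative assumptions:

```python
# Hypothetical sketch (not part of this PR): persist GRPOTrainer's generation
# buffer next to each checkpoint so resumption could be made bit-exact.
import os
import torch
from transformers import TrainerCallback

class BufferCheckpointCallback(TrainerCallback):
    def __init__(self, trainer):
        self.trainer = trainer  # assumes access to the GRPOTrainer instance

    def on_save(self, args, state, control, **kwargs):
        # Trainer writes checkpoints to output_dir/checkpoint-<global_step>.
        ckpt_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        # _buffered_inputs is a private attribute; this may break across TRL versions.
        torch.save(self.trainer._buffered_inputs,
                   os.path.join(ckpt_dir, "buffered_inputs.pt"))
```

On resume, one would load `buffered_inputs.pt` from the checkpoint directory back into `trainer._buffered_inputs` before calling `trainer.train(resume_from_checkpoint=...)`.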
What does this PR do?

This PR refactors the `_prepare_inputs` method to ensure it never returns `None` when resuming training from checkpoints in `GRPOTrainer`.

Previously, if `self._buffered_inputs[...]` was `None` during the resume process and `self.state.global_step % self.num_iterations != 0`, the method would return `None`. This caused issues downstream where non-`None` inputs were expected.

To fix this, the logic has been updated so that if the buffered input is `None`, it always falls back to generating new inputs via `_generate_and_score_completions(inputs)`, regardless of the step count. This ensures stability and correctness when resuming from checkpoints.
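In pseudocode, the updated control flow looks roughly like this (a minimal sketch paraphrasing the PR description, not the exact TRL source; the surrounding method body is simplified):

```python
# Minimal sketch of the fixed _prepare_inputs control flow (paraphrased from
# the PR description; not the exact TRL source).
def _prepare_inputs(self, inputs):
    accumulation_index = self._step % self.args.gradient_accumulation_steps
    buffered = self._buffered_inputs[accumulation_index]

    # Generate fresh completions at the start of each iteration cycle, or
    # whenever the buffer is empty (e.g. right after resuming from a checkpoint).
    if self.state.global_step % self.num_iterations == 0 or buffered is None:
        inputs = self._generate_and_score_completions(inputs)
        self._buffered_inputs[accumulation_index] = inputs
    else:
        # Otherwise reuse the buffered generations from the previous cycle.
        inputs = buffered

    self._step += 1
    return inputs
```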
In addition, the modulo expression `self._step % self.args.gradient_accumulation_steps` has been refactored into a dedicated variable, `accumulation_index`, to improve readability and reduce redundancy.

Fixes

No corresponding issue was filed, but this change addresses a potential silent failure when using `resume_from_checkpoint` with `GRPOTrainer`.

Motivation and context
Users resuming training from checkpoints may encounter `None` inputs in `_prepare_inputs`, leading to errors in the training loop. This fix ensures robustness by never returning `None` and improves the maintainability of the code through the refactoring.

Before submitting

- Ran `make precommit` to ensure code style consistency.

Who can review?
Anyone familiar with `GRPOTrainer`, buffered input logic, or checkpoint resume behavior in TRL.