🌡️ Fix temperature inconsistency in GRPO trainer #3029

Merged

Conversation

Aladoro
Contributor

@Aladoro Aladoro commented Mar 8, 2025

What does this PR do?

This PR fixes an omission of temperature scaling when computing the model's and reference model's log probabilities, which could otherwise produce highly biased policy gradients.

This makes the computation consistent with other TRL implementations such as PPO and RLOO. Omitting this term has been shown to cause detrimental instabilities when fine-tuning LLMs with RL (e.g., see here). Consistent with these results, my own custom implementation also seems to work better after this fix. I did some additional research, but I could not find any reference suggesting that purposefully biasing the optimization this way is beneficial... am I missing something?
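To illustrate the issue, here is a minimal, self-contained sketch (the function and variable names are illustrative, not TRL's actual code): if tokens are sampled from a temperature-scaled distribution, the log-probabilities used in the policy-gradient loss must also be computed from logits divided by that same temperature.

```python
import math

def token_logprob(logits, token_id, temperature=1.0):
    # Scale logits by 1/temperature *before* the softmax, matching how
    # tokens were actually sampled during generation.
    scaled = [l / temperature for l in logits]
    log_z = math.log(sum(math.exp(l) for l in scaled))
    return scaled[token_id] - log_z

logits = [2.0, 1.0, 0.5]  # toy vocabulary of 3 tokens
lp_biased = token_logprob(logits, 0)                   # temperature ignored (pre-fix behavior)
lp_fixed = token_logprob(logits, 0, temperature=0.7)   # consistent with sampling temperature
```

With `temperature != 1.0` the two values differ, so the unscaled version evaluates the wrong distribution and the resulting gradient estimate is biased.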

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@qgallouedec
Member

That's a good point.
That's also what's done in open-instruct: https://github.com/allenai/open-instruct/blob/6d5320539f23a6dd55c892fd35e7e86907569af1/open_instruct/grpo_vllm_thread_ray_gtrl.py#L777C9-L777C37
Ideally, we would like to have some curves to show this gap, so if someone has any, feel free to share.

@qgallouedec qgallouedec changed the title fix temperature inconsistency in GRPO trainer 🌡️ Fix temperature inconsistency in GRPO trainer Mar 11, 2025
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@qgallouedec qgallouedec merged commit 04f6597 into huggingface:main Mar 11, 2025
13 checks passed
@Aladoro Aladoro deleted the fix-temperature-logits-inconsistency branch March 12, 2025 04:38
jhinpan pushed a commit to jhinpan/trl-jin that referenced this pull request Mar 12, 2025
* fix temperature inconsistency in GRPO trainer

* adding 1e-7 isn't necessary

* comment

---------

Co-authored-by: Quentin Gallouédec <[email protected]>
yxliu-TAMU pushed a commit to mincheolseong/ECEN743-GRPO-Project-Proposal that referenced this pull request Apr 20, 2025
* fix temperature inconsistency in GRPO trainer

* adding 1e-7 isn't necessary

* comment

---------

Co-authored-by: Quentin Gallouédec <[email protected]>