
Conversation


@dzh19990407 dzh19990407 commented Sep 17, 2025

What does this PR do?

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

This PR implements Single-stream Policy Optimization (SPO), as proposed in the paper https://arxiv.org/abs/2509.13232.
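Based on the components described in the commit messages later in this thread (a KL-adaptive value tracker, elimination of group synchronization, prioritized sampling, and global advantage normalization), a minimal illustrative sketch of single-stream advantage computation is shown below. All names, the tracker update rule, and the hyperparameters are assumptions for illustration, not the PR's actual code.

```python
# Illustrative sketch only, not the PR's implementation. One rollout per prompt
# ("single stream"): the baseline comes from a per-prompt value tracker instead of
# group statistics, so no group synchronization is needed, and advantages are
# normalized globally across the batch. Names and the update rule are assumptions.
import torch


class ValueTracker:
    """Running per-prompt value estimate; `beta` is a hypothetical smoothing rate."""

    def __init__(self, beta: float = 0.1, default: float = 0.5):
        self.values: dict[str, float] = {}
        self.beta = beta
        self.default = default

    def get(self, prompt_id: str) -> float:
        return self.values.get(prompt_id, self.default)

    def update(self, prompt_id: str, reward: float) -> None:
        old = self.get(prompt_id)
        self.values[prompt_id] = (1.0 - self.beta) * old + self.beta * reward


def single_stream_advantages(
    rewards: torch.Tensor, prompt_ids: list[str], tracker: ValueTracker
) -> torch.Tensor:
    # Baseline each sample against its prompt's tracked value (no group mean).
    baselines = torch.tensor([tracker.get(pid) for pid in prompt_ids], dtype=rewards.dtype)
    adv = rewards - baselines
    # Global advantage normalization over the whole batch.
    adv = (adv - adv.mean()) / (adv.std() + 1e-8)
    # Update the tracker after advantages are computed.
    for pid, r in zip(prompt_ids, rewards.tolist()):
        tracker.update(pid, r)
    return adv
```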

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: [algo] feat: add GSPO-token policy loss computation function #2775
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

dzh19990407 and others added 4 commits September 17, 2025 11:37
- Add SPO algorithm implementation with KL-adaptive value tracker
- Implement single-stream architecture eliminating group synchronization
- Add prioritized sampling and global advantage normalization
- Include comprehensive README with performance results and usage guide
- Add configuration files and training scripts
- Achieve +3.4 pp improvement on math benchmarks vs GRPO
Remove Chinese language comments from spo_ray_trainer.py to improve code readability and maintain English-only codebase standards.

CLAassistant commented Sep 17, 2025

CLA assistant check
All committers have signed the CLA.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces the Single-stream Policy Optimization (SPO) algorithm, a novel reinforcement learning method for Large Language Models. The changes primarily consist of new files for the SPO recipe, including configuration, the main training script, a run script, and the core Ray trainer implementation. My review has identified two critical issues. First, the run_spo.sh script uses an undefined variable which will cause the training to fail at launch. Second, the spo_ray_trainer.py contains unsafe exception handling during data resampling, which could lead to silent data corruption and hard-to-debug training failures. Addressing these issues is crucial for the correctness and stability of the new algorithm's implementation.
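To illustrate the second point, the sketch below shows the kind of fail-fast resampling guard the bot is asking for. The function and variable names here are hypothetical and are not taken from spo_ray_trainer.py.

```python
# Hypothetical sketch of fail-fast resampling, illustrating the review point:
# swallowing exceptions here can silently reuse a stale or partial batch.
# Names are placeholders, not the PR's actual code.
def resample_batch(dataloader_iter, expected_size: int):
    try:
        batch = next(dataloader_iter)
    except StopIteration as exc:
        # Surface the problem instead of silently continuing with bad data.
        raise RuntimeError(
            "Dataloader exhausted during resampling; "
            "check dataset size and batch configuration."
        ) from exc
    if len(batch) != expected_size:
        raise ValueError(
            f"Resampled batch has {len(batch)} samples, expected {expected_size}."
        )
    return batch
```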

@vermouth1992
Collaborator

Could you pin the verl commit in your readme?

```bash
# Enable SPO training mode
export SPO_ENABLE=True
export SPO_OFFLINE_VALUES="/path/to/offline/values.json"
```

what is the purpose of this file?


see Appendix A

Author


This file is an offline value estimate (Appendix A); I have added a link to the Hugging Face dataset in the README.
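For readers following along, loading such an offline value table from a Hugging Face dataset (rather than a local JSON file) might look like the sketch below. The dataset path and column names are placeholders, not the ones used in this PR.

```python
# Illustrative only: load per-prompt offline value estimates from a Hugging Face
# dataset. The dataset path and column names below are placeholders.
from datasets import load_dataset


def load_offline_values(dataset_name: str, split: str = "train") -> dict[str, float]:
    ds = load_dataset(dataset_name, split=split)
    # Map each prompt identifier to its offline value estimate (Appendix A of the paper),
    # e.g. to initialize the value tracker before online updates.
    return {row["prompt_id"]: float(row["value"]) for row in ds}
```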

dzh19990407 and others added 3 commits September 28, 2025 14:54
@dzh19990407
Author

> Could you pin the verl commit in your readme?

I have updated it in the README file.

- Switch offline values from local JSON file to HuggingFace dataset loading
- Update README with offline value generation instructions
- Add debug mode support with RAY_DEBUG flag in config
- Fix config name reference from ppo_trainer to spo_trainer
- Update batch sizes and paths to use environment variables
- Change custom module paths from retool to spo directory
- Switch multi-turn format from retool_paper to hermes
- Adjust offline value threshold from 0 to 0.5 for binary classification

This improves the SPO training pipeline by using centralized dataset storage
and providing better configuration flexibility through environment variables.
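As a rough illustration of two changes described in the commit message above (environment-variable-driven configuration and binarizing offline values at a 0.5 threshold), a hedged sketch follows; the exact variable names and semantics in the PR may differ.

```python
# Hypothetical sketch, not the PR's code: read configuration from environment
# variables and binarize offline values at a 0.5 threshold.
import os


def debug_mode_enabled() -> bool:
    # The commit mentions a RAY_DEBUG flag; here any truthy string enables it.
    return os.environ.get("RAY_DEBUG", "").lower() in {"1", "true", "yes"}


def binarize_offline_value(value: float, threshold: float = 0.5) -> int:
    # Values at or above the threshold count as a positive (correct) outcome.
    return int(value >= threshold)
```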
@dzh19990407 dzh19990407 requested a review from hustnn September 28, 2025 08:35
@dzh19990407
Author

@wuxibin89 @vermouth1992 @tongyx361 @PeterSH6
Hi team,

Thanks for all the great feedback! I have updated the code based on the review comments and pushed the changes.

Please take a quick look when you have a chance. Thanks!
