[BREAKING][misc] feat: Abstract optimizer #3656
base: main
Conversation
Code Review

This pull request introduces a flexible optimizer abstraction, allowing users to specify any optimizer via configuration. This is a great enhancement for modularity. My review focuses on the implementation of the new `build_optimizer` function. I've identified a critical issue: the argument handling for the dynamically loaded optimizer is not robust and can lead to runtime `TypeError` exceptions. My suggestion is to use Python's `inspect` module to build the arguments dictionary safely, ensuring only valid parameters are passed to the optimizer's constructor. This will make the implementation more generic and prevent unexpected crashes.
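The `inspect`-based filtering the review suggests could be sketched as follows. This is an illustration, not the PR's code: the function name `filter_optimizer_kwargs` and the `FakeAdamW` stand-in class are hypothetical; the idea is simply to drop any config key the optimizer's constructor does not accept before instantiating it.

```python
import inspect

def filter_optimizer_kwargs(optimizer_cls, config):
    """Keep only the config keys that the optimizer's constructor accepts,
    so unexpected keys do not raise TypeError at instantiation time."""
    sig = inspect.signature(optimizer_cls.__init__)
    # If the constructor accepts **kwargs, everything can pass through.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD
           for p in sig.parameters.values()):
        return dict(config)
    valid = set(sig.parameters) - {"self", "params"}
    return {k: v for k, v in config.items() if k in valid}

# Hypothetical stand-in with an AdamW-like constructor, for illustration.
class FakeAdamW:
    def __init__(self, params, lr=1e-3, weight_decay=0.01):
        self.lr, self.weight_decay = lr, weight_decay

cfg = {"lr": 5e-4, "weight_decay": 0.1, "total_steps": 1000}
kwargs = filter_optimizer_kwargs(FakeAdamW, cfg)
opt = FakeAdamW(params=[], **kwargs)  # "total_steps" is safely dropped
```

The same call works against `torch.optim.AdamW` or any third-party optimizer class, which is what makes the abstraction generic.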
Force-pushed from 098c2c8 to e9cfd78
@EduardDurech Please resolve conflicts with main branch.
@wuxibin89 I overwrote the PR #3692 as extra parameters should now be defined in

CI failure is unrelated to this PR; tests passed at cf4cc6a6c60b2a21b1765825b83158ae6bea101b: cpu_unit_tests, sgl, e2e_ascend
Abstracts the optimizer so it can be used with whatever module and method a user wants. This should be backwards compatible, as the default is `torch.optim.AdamW`. Adds `{actor_rollout_ref.actor,critic}.optim.{optimizer,optimizer_impl,override_optimizer_config}`.

Important: `fsdp_sft_trainer` optim aligned with FSDP optim: `optim.warmup_steps_ratio` -> `optim.lr_warmup_steps_ratio`
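Given the config keys named above, one plausible shape for a config-driven optimizer factory is sketched below. This is a hedged illustration, not the PR's actual implementation: the key semantics (`optimizer_impl` as a module path, `optimizer` as a class name, `override_optimizer_config` as extra constructor kwargs) are inferred from the PR description, and the defaults assume `torch.optim.AdamW` as stated.

```python
import importlib

def build_optimizer(params, optim_config):
    """Illustrative factory: resolve an optimizer class from config
    and instantiate it on the given parameters.

    Assumed keys (inferred from the PR description, not verified):
      optimizer_impl: module to import, default "torch.optim"
      optimizer:      class name in that module, default "AdamW"
      override_optimizer_config: extra kwargs for the constructor
    """
    module = importlib.import_module(
        optim_config.get("optimizer_impl", "torch.optim"))
    optimizer_cls = getattr(module, optim_config.get("optimizer", "AdamW"))
    extra = optim_config.get("override_optimizer_config") or {}
    return optimizer_cls(params, lr=optim_config["lr"], **extra)
```

With no overrides, `build_optimizer(params, {"lr": 1e-3})` would resolve to `torch.optim.AdamW(params, lr=1e-3)`, preserving backward compatibility; pointing `optimizer_impl` at a third-party module swaps in any other optimizer without code changes.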