Commit 5b417da
[megatron] fix: fix logits process error when disable pack_seqs (volcengine#3777)
### What does this PR do?

When `pack_seqs = False` is set (although this is not the default behavior when using Megatron as the training backend), the current code hits a runtime error because the `logits_processor` is never applied; the last PP rank needs to perform this operation. A short illustrative sketch follows this description.

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (this will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,`, like `[megatron, fsdp, doc]`
  - `{type}` is one of `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes, if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review; otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
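For context, below is a minimal sketch of the kind of `logits_processor` this fix enables in the non-packed path. The function name, signature, and shapes are illustrative assumptions, not the verl API; the point is that such a processor consumes vocabulary logits, which exist only on the last pipeline-parallel (PP) rank.

```python
import torch

# Hypothetical logits processor (illustrative, not verl's actual API):
# turns raw logits into per-token log-probs of the given labels.
def compute_log_probs(logits: torch.Tensor, labels: torch.Tensor) -> dict:
    # logits: [batch, seq_len, vocab]; labels: [batch, seq_len]
    log_probs = torch.log_softmax(logits.float(), dim=-1)
    token_log_probs = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return {"log_probs": token_log_probs}
```

Because intermediate PP ranks output hidden states rather than logits, a processor like this must run only where `post_process` is true.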
1 parent f0539a5 commit 5b417da

1 file changed: +2 −1
verl/models/mcore/model_forward.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -68,12 +68,13 @@ def gptmodel_forward(
             output_orig, packed_seq_params, attention_mask, batch_size, seq_len, post_process=post_process
         )
     else:
-        assert logits_processor is None, "logits_processor is not supported for non-packed sequence"
         batch_size, sequence_length = attention_mask.shape
         new_input_ids, new_attention_mask, new_position_ids = remove_left_padding(
             input_ids, attention_mask, position_ids, sequence_parallel, pre_process=pre_process
         )
         output = model(input_ids=new_input_ids, attention_mask=new_attention_mask, position_ids=new_position_ids)
+        if post_process:
+            output = logits_processor(output, **logits_processor_args)
         output = recover_left_padding(
             output, new_attention_mask, attention_mask, sequence_length, post_process=post_process
         )
```
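To make the gating explicit, here is a hedged sketch of the non-packed branch after the fix. The function name and the extra `logits_processor is not None` guard are simplifications for illustration; the actual change lives in `gptmodel_forward` in `verl/models/mcore/model_forward.py`.

```python
# Simplified sketch (not the exact verl code): only the last PP rank
# (post_process=True) holds logits, so only it applies the processor.
def forward_unpacked(model, input_ids, attention_mask, position_ids,
                     post_process, logits_processor=None,
                     logits_processor_args=None):
    output = model(input_ids=input_ids, attention_mask=attention_mask,
                   position_ids=position_ids)
    if post_process and logits_processor is not None:
        # On earlier PP ranks `output` is hidden states, not logits,
        # so the processor must be skipped there.
        output = logits_processor(output, **(logits_processor_args or {}))
    return output
```

Previously this branch asserted `logits_processor is None`, so any caller passing a processor (e.g., for log-prob computation) failed whenever `pack_seqs = False`.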
