Fix model saving bug post training with tensor parallel in Accelerate #36434
```diff
@@ -3662,6 +3662,17 @@ def save_pretrained(
         if self._tp_size is not None:
             state_dict = replace_state_dict_local_with_dtensor(state_dict, self._tp_plan, self._device_mesh)

+        # if using tensor parallel we need to gather the tensors in state dict
+        gathered_state_dict = {}
+        for key, value in state_dict.items():
+            if hasattr(value, "_local_tensor"):
+                gathered_state_dict[key] = value.to_local().cpu()
+            else:
+                gathered_state_dict[key] = value.cpu()
+
+        del state_dict
+        state_dict = gathered_state_dict
+
         if safe_serialization:
             # TODO: fix safe_serialization for tied weights
             # Safetensors does not allow tensor aliasing.
```

Review comment on `gathered_state_dict[key] = value.to_local().cpu()`: memory will explode, no? This should happen in the function that writes the files, to make sure you save bit by bit.
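To make that concern concrete, here is a minimal sketch of the alternative the reviewer describes: gathering each sharded tensor only inside the loop that writes the checkpoint, so at most one fully materialized tensor lives on CPU at a time. `write_tensor` is a hypothetical writer callback standing in for the actual file-writing code in `save_pretrained`; it is not an existing transformers or safetensors API.

```python
# Sketch only: gather sharded tensors one at a time while writing, instead of
# building a fully gathered copy of the whole state dict up front.
# `write_tensor` is a hypothetical callback, not a transformers/safetensors API.
from torch.distributed.tensor import DTensor  # torch.distributed._tensor on older PyTorch


def save_state_dict_streaming(state_dict, write_tensor):
    for key, value in state_dict.items():
        if isinstance(value, DTensor):
            # full_tensor() all-gathers the shards into a regular tensor; it is
            # a collective, so every rank in the mesh must reach this call.
            value = value.full_tensor()
        write_tensor(key, value.cpu())
        # the gathered copy becomes collectable before the next iteration,
        # keeping peak memory close to the size of the largest single tensor
```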
Note: we might want to do something closer to https://github.com/pytorch/pytorch/blob/1eba9b3aa3c43f86f4a2c807ac8e12c4a7767340/torch/distributed/tensor/_api.py#L572
Yeah, using `full_tensor` will be better, I think.
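For context, here is a rough, self-contained illustration of the difference being discussed, assuming an already-initialized process group (e.g. launched with torchrun) and a tensor sharded along dim 0; this is not code from the PR:

```python
# Rough illustration of to_local() vs full_tensor() for a sharded DTensor.
# Assumes torch.distributed is already initialized (e.g. via torchrun).
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor  # torch.distributed._tensor on older PyTorch

mesh = init_device_mesh("cuda", (dist.get_world_size(),))
dtensor = distribute_tensor(torch.randn(8, 4), mesh, placements=[Shard(0)])

local = dtensor.to_local()    # only this rank's shard, e.g. (8 // world_size, 4); no communication
full = dtensor.full_tensor()  # the complete (8, 4) tensor on every rank; a collective all ranks must call
```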
@bursteratom and I found that `full_tensor` would hang here. Not 100% sure why, but we could investigate more if manually redistributing doesn't work.
@SalmanMohammadi I wonder if it's related: pytorch/pytorch#115310
@muellerzr Should this be in transformers or is the preference that this sort of unsharding is in accelerate?
@winglian We have (or will have) similar stuff in Accelerate for FSDP2, so if we want to support both TP and FSDP2 on the Accelerate side, it would probably need to live in both places. Though I remember `full_tensor()` working for me there; I might take a look at this too.
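For reference, regardless of which library this ends up in, PyTorch ships a helper for this kind of unsharding in `torch.distributed.checkpoint.state_dict` (recent releases). The sketch below shows the general shape of that approach and is not Accelerate's actual implementation; `model` is assumed to be the TP/FSDP2-wrapped module, and the output filename is illustrative.

```python
# Sketch, not Accelerate's actual code: build a full (unsharded), CPU-offloaded
# state dict from a model whose parameters are DTensors, then write it on rank 0.
import torch
import torch.distributed as dist
from torch.distributed.checkpoint.state_dict import StateDictOptions, get_model_state_dict

options = StateDictOptions(full_state_dict=True, cpu_offload=True)
# Collective: every rank must call this, even though only rank 0 writes the file.
full_state_dict = get_model_state_dict(model, options=options)  # `model` is assumed to exist

if dist.get_rank() == 0:
    torch.save(full_state_dict, "pytorch_model.bin")  # or hand it to safetensors / save_pretrained
```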
This would only return the rank-local shard of the tensor if the DTensor has a `Shard` placement, which is highly likely for TP. Wouldn't that mean the state dicts are now different on each rank, and isn't that a problem?
Yes, this is correct. `.to_local()` only returns the local part of the tensor if it was sharded (which it most likely was, since we're talking about TP), so each process ends up with only its own part. A possible reason the other approach hangs is that, IIRC, `full_tensor()` requires communication, and here only the main process is running it.
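If that diagnosis is right, the usual fix is to have every rank participate in the gather and let only the main process touch the filesystem. A hedged sketch of that ordering follows (not the code in this PR); `model` and `save_fn` are placeholders.

```python
# Sketch of the "all ranks gather, rank 0 writes" pattern. Calling full_tensor()
# only on the main process deadlocks, because the other ranks never enter the
# all-gather collective it needs.
import torch.distributed as dist
from torch.distributed.tensor import DTensor  # torch.distributed._tensor on older PyTorch


def gather_full_state_dict(state_dict):
    gathered = {}
    for key, value in state_dict.items():
        if isinstance(value, DTensor):
            value = value.full_tensor()  # collective: must run on every rank
        gathered[key] = value.cpu()
    return gathered


full_sd = gather_full_state_dict(model.state_dict())  # every rank executes this line
if dist.get_rank() == 0:
    save_fn(full_sd)  # only the main process writes; `save_fn` is a placeholder
```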