
Conversation

@tinglvv commented Aug 20, 2025

Comment on lines +22 to +23
def to_python_float(scalar_tensor: torch.Tensor):
    return scalar_tensor.float().item()
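
For context, a minimal standalone sketch of what this helper does, assuming only torch; the half-precision loss value below is illustrative, not from the PR:

import torch

def to_python_float(scalar_tensor: torch.Tensor):
    # Upcast to float32, then copy the 0-dim tensor to the host as a Python float.
    return scalar_tensor.float().item()

loss = torch.tensor(0.5, dtype=torch.float16)  # e.g. a scalar AMP loss
print(to_python_float(loss))                   # 0.5, as a plain Python float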
Collaborator:

Where would this be used?

Author:

Seems it's used here:

# to_python_float incurs a host<->device sync
losses.update(to_python_float(reduced_loss), input.size(0))
top1.update(to_python_float(prec1), input.size(0))
top5.update(to_python_float(prec5), input.size(0))
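
For reference, a minimal sketch of why the .item() call is a host<->device sync, and one common way to amortize it; the loop and values here are hypothetical, not from main_amp.py:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
loss = torch.tensor(0.5, device=device)

# .item() copies the scalar back to the host, blocking until all queued
# GPU work producing it has finished: that is the host<->device sync.
host_value = loss.float().item()

# A common mitigation: accumulate on the device and sync once per interval.
running = torch.zeros((), device=device)
for step in range(100):
    running += loss.detach()        # stays on device, no sync
    if (step + 1) % 50 == 0:
        print(running.item() / 50)  # one sync per 50 steps
        running.zero_()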

Author:

Weird that it was never defined previously.

Author:

These lines were added 6 years ago; my guess is the function definition was removed somewhere in between?
The test related to this main_amp.py may have been failing for more than 3 years: I saw an issue opened in 2022 to deprecate the usage of DDP/AMP in apex.

Collaborator:

It was defined in the removed code for AMP:

def to_python_float(t):
    if hasattr(t, 'item'):
        return t.item()
    else:
        return t[0]

(another reason why doing something like `from apex.fp16_utils import *` is undesirable)
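
A self-contained sketch of that hazard; the module fake_fp16_utils below is invented here to stand in for a library whose newer version dropped a helper:

import sys
import types

# Simulate a newer library version that no longer defines to_python_float.
lib = types.ModuleType("fake_fp16_utils")
lib.network_to_half = lambda m: m  # some surviving export
sys.modules["fake_fp16_utils"] = lib

from fake_fp16_utils import *  # succeeds even though the helper is gone

try:
    to_python_float  # noqa: F821
except NameError as err:
    # The breakage only surfaces later, at the call site.
    print("wildcard import hid the breakage:", err)

try:
    from fake_fp16_utils import to_python_float
except ImportError as err:
    # An explicit import fails fast, at import time, with a clear message.
    print("explicit import fails fast:", err)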

@Aidyn-A left a comment:

These changes look good to me, but we will need to come up with something new in the future.
