Automatic mixed precision (AMP) training is now natively supported and a stable feature. #557

@Lornatang

🚀 Feature

AMP allows users to easily enable automatic mixed precision training, providing higher performance and memory savings of up to 50% on Tensor Core GPUs. The natively supported torch.cuda.amp API provides convenience methods for mixed precision, where some operations use the torch.float32 (float) datatype and other operations use torch.float16 (half). Some ops, such as linear layers and convolutions, are much faster in float16, while others, such as reductions, often require the dynamic range of float32. Mixed precision tries to match each op to its appropriate datatype.
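
The torch.cuda.amp workflow typically pairs an autocast context for the forward pass with a GradScaler for the backward pass. Below is a minimal sketch on synthetic data; the model, optimizer, and data are illustrative placeholders, not code from this repository:

```python
import torch

# Minimal native AMP training loop (PyTorch >= 1.6) on synthetic data.
device = "cuda"
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid float16 gradient underflow

for step in range(100):
    inputs = torch.randn(32, 128, device=device)
    targets = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()

    # autocast runs eligible ops (e.g. linear, conv) in float16 and keeps
    # precision-sensitive ops (e.g. reductions) in float32
    with torch.cuda.amp.autocast():
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)

    scaler.scale(loss).backward()   # backward pass on the scaled loss
    scaler.step(optimizer)          # unscales gradients, then calls optimizer.step()
    scaler.update()                 # adjusts the loss scale for the next iteration
```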

Motivation

PyTorch 1.6 integrates mixed precision natively, so there is no longer any need to install the NVIDIA/apex library.

Pitch

Update the training code to use torch.cuda.amp and remove the apex dependency; a migration sketch is shown below.
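
A sketch of the migration this pitch describes, assuming a typical apex-based training loop; the apex lines shown in comments are the generic apex.amp pattern, not code copied from this repository:

```python
import torch

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Removed (apex):
#   from apex import amp
#   model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
# Added (native):
scaler = torch.cuda.amp.GradScaler()

inputs = torch.randn(32, 128, device="cuda")
targets = torch.randint(0, 10, (32,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():  # added: native autocast around the forward pass
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)

# Removed (apex):
#   with amp.scale_loss(loss, optimizer) as scaled_loss:
#       scaled_loss.backward()
# Added (native):
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```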

Alternatives

Make no changes and keep the existing apex-based implementation.

Additional context

Refer to my recently updated PR.
