torch.distributed.all_reduce does not free memory #2150

@WoosukKwon

Description

I've visualized the memory usage:

  • llama 7B, TP=1
[Screenshot: GPU memory usage over time, llama 7B, TP=1]

The activation memory is reused after every layer.

  • llama-70B, TP=8
[Screenshot: GPU memory usage over time, llama-70B, TP=8]

However, when using TP, the activation memory used for all_reduce is not reused across layers.
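
For reference, here is a minimal sketch of one way such a memory timeline can be captured. Note that `_record_memory_history` and `_dump_snapshot` are private (underscore) APIs in recent PyTorch releases and may change, and `run_one_decoding_step` is a hypothetical stand-in for the model's forward pass, not vLLM code:

```python
import torch

# Begin recording CUDA caching-allocator events.
# Note: private API in recent PyTorch releases; subject to change.
torch.cuda.memory._record_memory_history(max_entries=100_000)

# Hypothetical stand-in for one forward pass of the model (not vLLM code).
run_one_decoding_step()

# Write a snapshot that can be opened at https://pytorch.org/memory_viz
# to produce a timeline like the screenshots above.
torch.cuda.memory._dump_snapshot("memory_snapshot.pickle")
```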

Originally posted by @WoosukKwon in #2031 (comment)
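
For anyone trying to observe this outside of vLLM, a minimal standalone sketch follows (assumptions: 2+ GPUs, NCCL backend, launched with `torchrun --nproc_per_node=2 repro.py`; the tensor shape and layer count are illustrative, not vLLM's actual sizes):

```python
import os
import torch
import torch.distributed as dist

def main():
    # Assumes launch via: torchrun --nproc_per_node=2 repro.py
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    for layer in range(8):
        # A fresh "activation" per layer. Without the all_reduce, the caching
        # allocator reuses this block every iteration and reserved memory
        # stays flat.
        x = torch.randn(4096, 8192, device="cuda")
        dist.all_reduce(x)
        del x
        torch.cuda.synchronize()
        if dist.get_rank() == 0:
            print(f"layer {layer}: reserved = "
                  f"{torch.cuda.memory_reserved() / 2**20:.0f} MiB")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

If the behavior reported above holds, the reserved memory printed per layer keeps growing instead of staying flat after the first iteration.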

Metadata

Assignees

No one assigned

    Labels

    bug (Something isn't working)
