Line 199 in bc48457:
del ckpt, csd
Hi jocher!
Why is 'del' called explicitly here? When I ran the code below with and without 'del ckpt', it printed the same maximum batch size. Does the 'del' actually save memory? If so, can we set a larger batch size?
import torch
import torchvision.models as models


def load_model(path, device):
    ckpt = torch.load(path, map_location=device)
    # Process...
    del ckpt
    return True


if __name__ == "__main__":
    torch.backends.cudnn.benchmark = True
    load_model("./yolox_x.pth", device="cuda")  # dummy work
    model = models.resnet18().cuda()
    print("model is loaded!")

    batch_size = 1
    while True:
        try:
            dummy_input = torch.randn(batch_size, 3, 224, 224).cuda()
            dummy_output = model(dummy_input)
            dummy_output = dummy_output.sum()
            dummy_output.backward()
            batch_size += 1
            ckpt = model.state_dict()
            del ckpt
        except:
            break
    print(f"maximum batch size: {batch_size}")
Weight Link: https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_x.pth
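For reference, here is a minimal sketch (my own test idea, not code from the repo) of how one might observe directly whether 'del ckpt' releases GPU memory, assuming a CUDA device and the yolox_x.pth checkpoint linked above; torch.cuda.memory_allocated() reports the memory currently held by live tensors on the device:

import torch

device = "cuda"

before = torch.cuda.memory_allocated(device)
ckpt = torch.load("./yolox_x.pth", map_location=device)  # checkpoint tensors now live on the GPU
loaded = torch.cuda.memory_allocated(device)

del ckpt                  # drop the last Python reference so the tensors can be garbage-collected
torch.cuda.empty_cache()  # hand cached blocks back to the driver; without this, the caching
                          # allocator keeps the freed memory around for reuse by later allocations
after = torch.cuda.memory_allocated(device)

print(f"before load: {before / 1e6:.1f} MB")
print(f"after load:  {loaded / 1e6:.1f} MB")
print(f"after del:   {after / 1e6:.1f} MB")

If the "after del" number drops back to roughly the "before load" number, the deleted checkpoint tensors were indeed freed; in my batch-size loop above the printed maximum may still come out the same either way, since the local ckpt inside load_model also goes out of scope when the function returns.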