
Conversation

@fxyfxy777 (Contributor)

PR Docs

PR APIs

"torch.cuda.device",
"torch.cuda.is_bf16_supported",
"torch.cuda.manual_seed",
"torch.cuda.max_memory_allocated",
"torch.cuda.reset_peak_memory_stats",
"torch.cuda.set_stream",
"torch.cuda.Event",
"torch.get_device",
"torch.cuda.FloatTensor",
"torch.cuda.DoubleTensor",
"torch.cuda.HalfTensor",
"torch.cuda.BFloat16Tensor",
"torch.cuda.ByteTensor",
"torch.cuda.CharTensor",
"torch.cuda.ShortTensor",
"torch.cuda.IntTensor",
"torch.cuda.LongTensor",
"torch.cuda.BoolTensor",
"torch.cuda.nvtx.range_push",
"torch.cuda.nvtx.range_pop",
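Mappings like the ones listed above are usually driven by a name-lookup table from the torch API to its Paddle counterpart. As an illustration only (the Paddle targets below are assumptions for the sketch, not the mapping rules from this PR), such a table might look like:

```python
# Hypothetical name-mapping table: the Paddle targets below are
# illustrative assumptions only, not taken from this PR.
TORCH_TO_PADDLE = {
    "torch.cuda.manual_seed": "paddle.seed",
    "torch.cuda.set_stream": "paddle.device.set_stream",
}

def map_api(torch_name):
    """Look up the mapped Paddle API name for a torch API name."""
    if torch_name not in TORCH_TO_PADDLE:
        raise KeyError("no mapping registered for " + torch_name)
    return TORCH_TO_PADDLE[torch_name]
```

An unmapped name raises `KeyError`, so a conversion run fails loudly instead of silently emitting unconverted code.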

paddle-bot bot commented Oct 27, 2025

Thanks for your contribution!

obj.run(
    pytorch_code,
    ["result"],
    check_stop_gradient=False,
)
Collaborator


This return value should be an int, so setting `check_stop_gradient` probably isn't needed?

Contributor Author


Yes, I'll remove it.

    rtol=1.0e-6,
    atol=0.0,
):
    assert pytorch_result == paddle_result or isinstance(
Collaborator


The type of the torch return value could also be checked:
assert isinstance(torch_result,
assert isinstance(paddle_result,
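The reviewer's suggestion, combined with the tolerance parameters shown above, can be sketched in plain Python. The helper name `check_results` is hypothetical; only the isinstance-then-compare pattern and the `rtol`/`atol` defaults come from the thread:

```python
import math

def check_results(pytorch_result, paddle_result, rtol=1.0e-6, atol=0.0):
    # Per the review suggestion: confirm both sides are plain ints
    # before comparing their values.
    assert isinstance(pytorch_result, int)
    assert isinstance(paddle_result, int)
    # Exact equality suffices for ints; math.isclose mirrors the
    # rtol/atol tolerance a harness would use for float results.
    assert pytorch_result == paddle_result or math.isclose(
        pytorch_result, paddle_result, rel_tol=rtol, abs_tol=atol
    )
```

Checking the type on both sides catches a mapping that silently changes the return type (e.g. a 0-d tensor instead of an int), which a bare value comparison might miss.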

Contributor Author


OK, will do.

Collaborator

@zhwesky2010 zhwesky2010 left a comment


LGTM

@zhwesky2010 zhwesky2010 merged commit 6f6ccb8 into PaddlePaddle:master Oct 29, 2025
7 of 8 checks passed