
`paddle.max` op does not support computation at float16 #52584

@PommesPeter

Description


Please ask your question

When I use paddle.amp.auto_cast to compute data in float16, it raises an error.

Error below:
[screenshot of the error message]

It seems that PaddlePaddle currently does not support float16 computation for the paddle.max op. I have also checked the documentation for paddle.max and searched existing issues, but could not find any information about it.

Is there any other way to make the paddle.max op support computation at float16?

The environment I'm using:

Is debug build: False
Paddle Version: 2.4.2
CUDA used to build Paddle: 11.2

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31

Python version: 3.8.16 (default, Mar  2 2023, 03:21:46)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-136-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY

Nvidia driver version: 495.29.05
cuDNN version: 8.5.0.96
Is XNNPACK available: True

Metadata

Labels

PFCC (Paddle Framework Contributor Club, https://github.com/PaddlePaddle/community/tree/master/pfcc), good first issue, status/new-issue (new), type/question (user question)
