
On Windows 10, deploying pp_mobileseg with TensorRT 8.5 acceleration raises an error #3902

@jiaerwang0328

Description


Search before asking

  • I have searched the issues and found no related answer.

Please ask your question

The PaddlePaddle version is paddle_inferenceV2.6_msvc2019_cu118_cudnn8.6_trt8.5.

W0424 09:26:13.359738 15364 helper.h:127] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
W0424 09:26:13.359738 15364 helper.h:127] The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
E0424 09:26:13.361732 15364 helper.h:131] 3: [network.cpp::nvinfer1::Network::addPoolingNd::1069] Error Code 3: API Usage Error (Parameter check failed at: network.cpp::nvinfer1::Network::addPoolingNd::1069, condition: allDimsGtEq(windowSize, 1) && volume(windowSize) < MAX_KERNEL_DIMS_PRODUCT(nbSpatialDims))


C++ Traceback (most recent call last):

Not support stack backtrace yet.


Error Message Summary:

FatalError: trt pool layer in converter could not be created.
[Hint: pool_layer should not be null.] (at ..\paddle\fluid\inference\tensorrt\convert\pool2d_op.cc:246)
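Not part of the original report, but a possible workaround sketch: the error comes from Paddle's pool2d-to-TensorRT converter calling `addPoolingNd` with a pooling window larger than TensorRT allows (e.g. a global pooling over a large feature map). Paddle Inference can exclude specific ops from TensorRT conversion so that pool2d falls back to the native Paddle GPU kernel. A minimal C++ configuration sketch, assuming the standard paddle_inference C++ API; the model paths and sizes below are placeholders:

```cpp
#include "paddle/include/paddle_inference_api.h"

int main() {
  paddle_infer::Config config;
  // Placeholder paths: point these at the exported pp_mobileseg model files.
  config.SetModel("model.pdmodel", "model.pdiparams");
  config.EnableUseGpu(500 /* initial GPU memory pool, MB */, 0 /* device id */);

  // Enable the TensorRT subgraph engine as before.
  config.EnableTensorRtEngine(
      1 << 30 /* workspace_size */, 1 /* max_batch_size */,
      3 /* min_subgraph_size */,
      paddle_infer::Config::Precision::kFloat32,
      false /* use_static */, false /* use_calib_mode */);

  // Workaround: keep pool2d out of the TensorRT subgraph so the
  // converter never calls addPoolingNd with the oversized window.
  config.Exp_DisableTensorRtOPs({"pool2d"});

  auto predictor = paddle_infer::CreatePredictor(config);
  return 0;
}
```

Since TensorRT 8.5 also deprecates the implicit-batch mode (see the warning above), configuring dynamic shapes for the TensorRT subgraph may additionally be worth trying, but whether that alone avoids the pooling-window limit is an open question.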
