[Bug] InternVL3_5-8B fails with "out of resource: shared memory" when tp=2 #3992

Description

@kerlion

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

lmdeploy serve api_server /modles/InternVL3_5-8B runs well with tp=1, but fails with the error below when tp=2:

triton.runtime.errors.OutOfResources: out of resource: shared memory, Required: 131088, Hardware limit: 101376. Reducing block sizes or num_stages may help.
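
For context, 101376 bytes is 99 KB, the opt-in shared-memory-per-block limit of sm_86 GPUs such as the RTX A6000, while the kernel asks for roughly 128 KB. A minimal sketch to confirm the limit Triton sees on this machine (it reads Triton's internal driver helper, the same source the error message uses, so it may change between Triton versions):

import torch
from triton.runtime import driver

print(torch.cuda.get_device_capability(0))            # (8, 6) on an RTX A6000
props = driver.active.utils.get_device_properties(0)  # internal Triton helper
print(props["max_shared_mem"])                        # expected: 101376 (99 KB)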

Reproduction

lmdeploy serve api_server \
 --server-port 8080 \
 --tp 2 --backend pytorch \
 --session-len 32768 \
 /modles/InternVL3_5-8B
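
For a server-free reproduction, the same configuration can presumably be driven through the pipeline API as well (a sketch, assuming the pipeline and lmdeploy serve api_server share the same PyTorch-engine code path; the model path is the same local directory as above):

from lmdeploy import pipeline, PytorchEngineConfig

pipe = pipeline(
    '/modles/InternVL3_5-8B',
    backend_config=PytorchEngineConfig(tp=2, session_len=32768),
)
# Text-only prompt; the traceback below shows the failure inside the
# language-model forward, so an image should not be required to hit it.
print(pipe('hello'))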

Environment

sys.platform: linux
Python: 3.10.18 (main, Jun  5 2025, 13:14:17) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5: NVIDIA RTX A6000
CUDA_HOME: /usr/local/cuda-12.6
NVCC: Cuda compilation tools, release 12.6, V12.6.85
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.8.0+cu128
PyTorch compiling details: PyTorch built with:
  - GCC 13.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.8
  - NVCC architecture flags: -gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90;-gencode;arch=compute_100,code=sm_100;-gencode;arch=compute_120,code=sm_120
  - CuDNN 91.0.2  (built against CUDA 12.9)
    - Built with CuDNN 90.8
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=a1cb3cc05d46d198467bebbb6e8fba50a325d4e7, CUDA_VERSION=12.8, CUDNN_VERSION=9.8.0, CXX_COMPILER=/opt/rh/gcc-toolset-13/root/usr/bin/c++, CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -DC10_NODEPRECATED -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-dangling-reference -Wno-error=dangling-reference -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.8.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, USE_XCCL=OFF, USE_XPU=OFF,

TorchVision: 0.23.0+cu128
LMDeploy: 0.10.0+
transformers: 4.55.0
fastapi: 0.116.2
pydantic: 2.11.9
triton: 3.4.0
NVIDIA Topology:
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PHB     PHB     PHB     PHB     PHB     0-99    0-1             N/A
GPU1    PHB      X      PHB     PHB     PHB     PHB     0-99    0-1             N/A
GPU2    PHB     PHB      X      PHB     PHB     PHB     0-99    0-1             N/A
GPU3    PHB     PHB     PHB      X      PHB     PHB     0-99    0-1             N/A
GPU4    PHB     PHB     PHB     PHB      X      PHB     0-99    0-1             N/A
GPU5    PHB     PHB     PHB     PHB     PHB      X      0-99    0-1             N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Error traceback

(RayWorkerWrapper pid=106727) 2025-09-19 08:42:56,732 - lmdeploy - ERROR - model_agent.py:804 - Task <ModelAgentLoop> failed
(RayWorkerWrapper pid=106727) Traceback (most recent call last):
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 799, in _on_finish_callback
(RayWorkerWrapper pid=106727)     task.result()
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 771, in _async_loop_background
(RayWorkerWrapper pid=106727)     await self._async_step_background(**forward_inputs, )
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 701, in _async_step_background
(RayWorkerWrapper pid=106727)     output = await self._async_model_forward(
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 503, in _async_model_forward
(RayWorkerWrapper pid=106727)     ret = await __forward(inputs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 463, in __forward
(RayWorkerWrapper pid=106727)     return await self.async_forward(inputs, swap_in_map=swap_in_map, swap_out_map=swap_out_map)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 976, in async_forward
(RayWorkerWrapper pid=106727)     output = self._forward_impl(inputs, swap_in_map=swap_in_map, swap_out_map=swap_out_map)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 960, in _forward_impl
(RayWorkerWrapper pid=106727)     output = model_forward(
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
(RayWorkerWrapper pid=106727)     return func(*args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/engine/model_agent.py", line 240, in model_forward
(RayWorkerWrapper pid=106727)     output = model(**input_dict)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/backends/cuda/graph_runner.py", line 198, in __call__
(RayWorkerWrapper pid=106727)     return self.model(**kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
(RayWorkerWrapper pid=106727)     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
(RayWorkerWrapper pid=106727)     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/models/internvl3_hf.py", line 607, in forward
(RayWorkerWrapper pid=106727)     outputs = self.language_model.forward(input_ids=input_ids,
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/models/qwen3.py", line 323, in forward
(RayWorkerWrapper pid=106727)     hidden_states = self.model(
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
(RayWorkerWrapper pid=106727)     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
(RayWorkerWrapper pid=106727)     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/models/qwen3.py", line 263, in forward
(RayWorkerWrapper pid=106727)     hidden_states, residual = decoder_layer(
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
(RayWorkerWrapper pid=106727)     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
(RayWorkerWrapper pid=106727)     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/models/qwen3.py", line 205, in forward
(RayWorkerWrapper pid=106727)     hidden_states, residual = self.post_attention_layernorm(hidden_states, residual)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
(RayWorkerWrapper pid=106727)     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
(RayWorkerWrapper pid=106727)     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/nn/norm.py", line 79, in forward
(RayWorkerWrapper pid=106727)     return self.impl.forward(x, self.weight, residual)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/backends/cuda/norm.py", line 22, in forward
(RayWorkerWrapper pid=106727)     x, residual = rms_norm(x, weight, self.eps, residual=residual)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/lmdeploy/pytorch/kernels/cuda/rms_norm.py", line 122, in rms_norm
(RayWorkerWrapper pid=106727)     add_rms_norm_kernel[grid](
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/triton/runtime/jit.py", line 390, in <lambda>
(RayWorkerWrapper pid=106727)     return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/triton/runtime/jit.py", line 617, in run
(RayWorkerWrapper pid=106727)     kernel.run(grid_0, grid_1, grid_2, stream, kernel.function, kernel.packed_metadata, launch_metadata,
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/triton/compiler/compiler.py", line 498, in __getattribute__
(RayWorkerWrapper pid=106727)     self._init_handles()
(RayWorkerWrapper pid=106727)   File "/app/Anaconda3/envs/lmdeploy/lib/python3.10/site-packages/triton/compiler/compiler.py", line 483, in _init_handles
(RayWorkerWrapper pid=106727)     raise OutOfResources(self.metadata.shared, max_shared, "shared memory")
(RayWorkerWrapper pid=106727) triton.runtime.errors.OutOfResources: out of resource: shared memory, Required: 131088, Hardware limit: 101376. Reducing block sizes or `num_stages` may help.
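
To isolate the failing kernel from the engine, the function named in the traceback can also be called directly (a sketch based only on the call shown above; the hidden size is a placeholder, since the exact shape that overflows shared memory is not visible in the log):

import torch
from lmdeploy.pytorch.kernels.cuda.rms_norm import rms_norm

hidden = 4096  # placeholder width; substitute the model's actual hidden size
x = torch.randn(4, hidden, dtype=torch.float16, device='cuda')
residual = torch.randn_like(x)
weight = torch.ones(hidden, dtype=torch.float16, device='cuda')
# Same call pattern as lmdeploy/pytorch/backends/cuda/norm.py in the traceback
out, new_residual = rms_norm(x, weight, 1e-6, residual=residual)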
