
[Bug]: Quantization In MambaMixer2 Not Supported when Tensor Parallel is enabled #14618

@fabianlim

Description

Your current environment

The output of `python collect_env.py`
Environment is not relevant for this issue.

🐛 Describe the bug

The current tensor-parallel (TP) implementation for Mamba2 is complicated for the in_proj, because the gate, projection, state space, and heads are all fused into this one layer. Furthermore, we also need to consider the different possibilities of whether the number of groups divides the number of heads or not, see #13660.
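
To make the fusion concrete, here is a minimal sketch (illustrative, not vLLM's actual code; `fused_in_proj_segments` and `naive_tp_split_is_safe` are hypothetical helpers) of how the fused in_proj output is laid out, and why a TP shard must split every logical segment consistently rather than cutting the fused weight at one contiguous boundary:

```python
# Illustrative layout of the fused Mamba2 in_proj; the dimension names
# follow the usual Mamba2 conventions, but the helpers are hypothetical.

def fused_in_proj_segments(intermediate_size: int, n_groups: int,
                           state_size: int, num_heads: int) -> dict[str, int]:
    # The fused in_proj concatenates five logical outputs along dim 0:
    return {
        "z": intermediate_size,        # gate branch
        "x": intermediate_size,        # SSM input branch
        "B": n_groups * state_size,    # input-dependent B, per group
        "C": n_groups * state_size,    # input-dependent C, per group
        "dt": num_heads,               # per-head timestep
    }

def naive_tp_split_is_safe(segments: dict[str, int], tp_size: int) -> bool:
    # A column-parallel shard must cut *each* segment evenly; slicing the
    # fused weight into tp_size contiguous blocks would mix z/x/B/C/dt.
    # Even this check is not sufficient: the heads placed on a rank must
    # also land together with the group (B/C slice) they read from, which
    # is why num_groups vs. num_heads divisibility matters (#13660).
    return all(size % tp_size == 0 for size in segments.values())

segments = fused_in_proj_segments(intermediate_size=4096, n_groups=8,
                                  state_size=128, num_heads=128)
print(naive_tp_split_is_safe(segments, tp_size=8))  # True
```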

For now, the TP implementation is simplified:

  • it is limited to the case of num_groups == 1 when num_groups does not divide num_heads, see #13660;
  • it does not support TP > 1 if the Mamba2 mixer is quantized, see #14617.

However, for large models it may be useful to support TP > 1 with quantized layers, at least in some special cases of num_heads and num_groups. cc: @tlrmchlsmth
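
As a rough illustration of why quantization interacts badly with this custom sharding (a hedged sketch, not vLLM's loader; `load_fused_shard` and its arguments are hypothetical): quantized linear layers carry auxiliary parameters such as per-output-channel scales, and every narrow() applied to a segment of the fused weight has to be mirrored on each of those parameters, which a loader written for unquantized weights does not do.

```python
import torch

# Hypothetical helper: shard one logical segment of a fused, quantized
# in_proj for TP rank `tp_rank`. With per-output-channel quantization,
# the scale tensor must be sliced with exactly the same offsets as the
# weight rows; skipping this leaves mismatched scales on each rank.
def load_fused_shard(weight: torch.Tensor, scales: torch.Tensor,
                     seg_offset: int, seg_size: int,
                     tp_rank: int, tp_size: int):
    shard = seg_size // tp_size
    start = seg_offset + tp_rank * shard
    w = weight.narrow(0, start, shard)   # rows of this rank's sub-segment
    s = scales.narrow(0, start, shard)   # matching per-channel scales
    return w, s
```

Schemes that quantize in groups along the input dimension would add further alignment constraints for row-parallel shards, since segment boundaries then also need to line up with the quantization group size.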
