Commit 88cbad1

[train] fix multimodal packing & padding_free (#4838)

1 parent c97afe3 commit 88cbad1

File tree

12 files changed: +29 -24 lines changed

docs/source/BestPractices/Qwen3最佳实践.md

Lines changed: 1 addition & 1 deletion

@@ -61,7 +61,7 @@ I am Qwen, a large-scale language model developed by Alibaba Cloud. I am designe
 
 ```bash
 pip install ms-swift -U
-pip install transformers -U
+pip install transformers
 
 pip install deepspeed # for multi-GPU training
 pip install liger-kernel # to save GPU memory resources

docs/source/Instruction/命令行参数.md

Lines changed: 2 additions & 2 deletions

@@ -85,7 +85,7 @@
 - Note: `swift pt` defaults to False and uses the generation template.
 - 🔥padding_free: Flattens the data in a batch to avoid padding, thereby reducing memory usage and accelerating training. Defaults to False. Currently supports CPT/SFT/DPO/GRPO/GKD.
 - Note: Use padding_free together with `--attn_impl flash_attn` and "transformers>=4.44"; see [this PR](https://github.com/huggingface/transformers/pull/31629) for details. (Same as packing.)
-- The supported multimodal models are the same as those supported for multimodal packing. Compared to packing, padding_free consumes no additional time or space.
+- The supported multimodal models are the same as those supported for multimodal packing. Compared to packing, padding_free consumes no additional time or space. Note: Please use "ms-swift>=3.6" and follow [this PR](https://github.com/modelscope/ms-swift/pull/4838).
 - Megatron-SWIFT uses padding_free by default, i.e., `qkv_format='thd'`; no extra setting is needed.
 - padding_side: The padding side when training with `batch_size>=2`. Options are 'left' and 'right'; defaults to 'right'. (For inference with batch_size>=2, only left padding is applied.)
 - Note: PPO and GKD default to 'left'.
@@ -379,7 +379,7 @@ Vera uses the three parameters `target_modules`, `target_regex`, and `modules_to_save`.
 - channels: Set of channels contained in the dataset. Defaults to None. Used together with `--loss_type channel_loss`; see [here](https://github.com/modelscope/ms-swift/blob/main/examples/train/plugins/channel_loss.sh).
 - 🔥packing: Whether to use sequence packing to improve computational efficiency. Defaults to False. Currently supports `swift pt/sft`.
 - Note: Use packing together with `--attn_impl flash_attn` and "transformers>=4.44"; see [this PR](https://github.com/huggingface/transformers/pull/31629) for details.
-- Supported multimodal models reference: https://github.com/modelscope/ms-swift/blob/main/examples/train/packing/qwen2_5_vl.sh
+- Supported multimodal models reference: https://github.com/modelscope/ms-swift/blob/main/examples/train/packing/qwen2_5_vl.sh. Note: Please use "ms-swift>=3.6" and follow [this PR](https://github.com/modelscope/ms-swift/pull/4838).
 - packing_cache: Specifies the packing cache directory. Defaults to `None`, meaning the cache is stored under the path given by the `$MODELSCOPE_CACHE` environment variable. When using packing across nodes, make sure the packing cache path is shared and identical on all nodes; you can do this by setting the `MODELSCOPE_CACHE` environment variable or by adding `--packing_cache <shared_path>` on the command line.
 - 🔥lazy_tokenize: Whether to use lazy_tokenize. If set to False, all dataset samples are tokenized before training (for multimodal models, this includes reading images from disk). Defaults to False for LLM training and True for MLLM training, to save memory.
 - use_logits_to_keep: Pass logits_to_keep in `forward` based on labels to reduce the computation and storage of unnecessary logits, thereby reducing memory usage and accelerating training. Defaults to None, which selects automatically.
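
To make the padding_free flag above concrete, here is a minimal sketch of an SFT launch through ms-swift's Python entry point. It assumes `TrainArguments` accepts the documented CLI flags as keyword arguments; the model and dataset values are placeholders, not part of this commit:

```python
# Sketch only: assumes TrainArguments mirrors the CLI flags documented above.
from swift.llm import TrainArguments, sft_main

args = TrainArguments(
    model='Qwen/Qwen2.5-VL-7B-Instruct',       # placeholder model
    dataset=['AI-ModelScope/LaTeX_OCR#1000'],  # placeholder dataset
    train_type='lora',
    attn_impl='flash_attn',  # required by padding_free (transformers>=4.44)
    padding_free=True,       # flatten the batch instead of padding it
)
sft_main(args)
```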

docs/source_en/BestPractices/Qwen3-Best-Practice.md

Lines changed: 1 addition & 1 deletion

@@ -63,7 +63,7 @@ Before starting training, please ensure that your environment is properly config
 
 ```bash
 pip install ms-swift -U
-pip install transformers -U
+pip install transformers
 
 pip install deepspeed # for multi-GPU training
 pip install liger-kernel # to save GPU memory resources

docs/source_en/Instruction/Command-line-parameters.md

Lines changed: 2 additions & 2 deletions

@@ -86,7 +86,7 @@ Hints:
 - Note: `swift pt` is set to False by default, using the generation template.
 - 🔥padding_free: Flattens the data in a batch to avoid padding, thereby reducing memory usage and accelerating training. Default is False. Currently supported in CPT/SFT/DPO/GRPO/GKD.
 - Note: When using `padding_free`, it should be combined with `--attn_impl flash_attn` and "transformers>=4.44". For details, see [this PR](https://github.com/huggingface/transformers/pull/31629). (Same as packing)
-- The supported multimodal models are the same as those supported for multimodal packing. Compared to packing, padding_free does not consume additional time or space.
+- The supported multimodal models are the same as those supported for multimodal packing. Compared to packing, padding_free does not consume additional time or space. Note: Please use "ms-swift>=3.6" and follow [this PR](https://github.com/modelscope/ms-swift/pull/4838).
 - Megatron-SWIFT uses `padding_free` by default, i.e., `qkv_format='thd'`, and no additional configuration is required.
 - padding_side: Padding side when `batch_size>=2` during training. Options are 'left' and 'right', with 'right' as the default. (For inference with batch_size>=2, only left padding is applied.)
 - Note: PPO and GKD are set to 'left' by default.
@@ -388,7 +388,7 @@ Training arguments include the [base arguments](#base-arguments), [Seq2SeqTraine
 - channels: Set of channels included in the dataset. Defaults to None. Used in conjunction with `--loss_type channel_loss`. Refer to [this example](https://github.com/modelscope/ms-swift/blob/main/examples/train/plugins/channel_loss.sh) for more details.
 - 🔥packing: Whether to use sequence packing to improve computational efficiency. The default value is False. Currently supports `swift pt/sft`.
 - Note: When using packing, please combine it with `--attn_impl flash_attn` and ensure "transformers>=4.44". For details, see [this PR](https://github.com/huggingface/transformers/pull/31629).
-- Supported multimodal models reference: https://github.com/modelscope/ms-swift/blob/main/examples/train/packing/qwen2_5_vl.sh
+- Supported multimodal models reference: https://github.com/modelscope/ms-swift/blob/main/examples/train/packing/qwen2_5_vl.sh. Note: Please use "ms-swift>=3.6" and follow [this PR](https://github.com/modelscope/ms-swift/pull/4838).
 - packing_cache: Specifies the directory for packing cache. The default value is `None`, which means the cache will be stored in the path defined by the environment variable `$MODELSCOPE_CACHE`. When using the packing feature across multiple nodes, ensure that all nodes share the same packing cache directory. You can achieve this by setting the `MODELSCOPE_CACHE` environment variable or by adding the `--packing_cache <shared_path>` argument in the command line.
 - 🔥lazy_tokenize: Whether to use lazy tokenization. If set to False, all dataset samples are tokenized before training (for multimodal models, this includes reading images from disk). This parameter defaults to False for LLM training, and True for MLLM training, to save memory.
 - use_logits_to_keep: Pass `logits_to_keep` in the `forward` method based on labels to reduce the computation and storage of unnecessary logits, thereby reducing memory usage and accelerating training. The default is `None`, which enables automatic selection.
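
As a companion to the packing and packing_cache notes above, a hedged sketch of a multi-node-friendly launch; the shared path, model, and dataset values are placeholders, and the keyword names are assumed to mirror the documented CLI flags:

```python
import os

# All nodes must resolve the packing cache to the same shared path; setting
# MODELSCOPE_CACHE is one of the two documented ways (the other is passing
# --packing_cache <shared_path> on the command line). Set it before the
# swift import so the library picks it up.
os.environ['MODELSCOPE_CACHE'] = '/mnt/shared/modelscope'  # placeholder mount

from swift.llm import TrainArguments, sft_main

args = TrainArguments(
    model='Qwen/Qwen2.5-VL-7B-Instruct',       # placeholder model
    dataset=['AI-ModelScope/LaTeX_OCR#1000'],  # placeholder dataset
    packing=True,            # sequence packing (currently swift pt/sft)
    attn_impl='flash_attn',  # required by packing (transformers>=4.44)
)
sft_main(args)
```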

examples/train/multimodal/omni/sft.sh

Lines changed: 0 additions & 2 deletions

@@ -1,7 +1,5 @@
 # 4*35GB
 # A demo for four modalities that can be run directly
-pip install transformers -U
-
 nproc_per_node=4
 
 CUDA_VISIBLE_DEVICES=0,1,2,3 \

examples/train/packing/qwen2_5_omni.sh

Lines changed: 0 additions & 2 deletions

@@ -2,8 +2,6 @@
 # Multimodal packing currently only supports qwen2_vl, qwen2_5_vl, qwen2_5_omni, internvl2_5/3
 # A demo for four modalities that can be run directly
 # For local datasets, it is recommended to use streaming: `--streaming true` (save memory)
-pip install transformers -U
-
 NPROC_PER_NODE=4 \
 ENABLE_AUDIO_OUTPUT=1 \
 CUDA_VISIBLE_DEVICES=0,1,2,3 \

swift/llm/template/base.py

Lines changed: 1 addition & 1 deletion

@@ -309,7 +309,7 @@ def _extend_tokens(input_ids: List[int], labels: Optional[List[int]], replace_id
            added_tokens_len += token_len - 1
        return input_ids, labels
 
-    def compute_loss_context(self, model, inputs):
+    def training_step_context(self, model, inputs):
        return nullcontext()
 
    @staticmethod
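
The rename from `compute_loss_context` to `training_step_context` widens the hook from wrapping only the loss computation to wrapping the whole training step. A minimal sketch of the pattern with illustrative names (only the `nullcontext()` base hook is taken from the diff; the subclass and patch logic are invented for exposition):

```python
from contextlib import contextmanager, nullcontext


@contextmanager
def _patched_forward(model, position_ids):
    """Illustrative patch: inject position_ids for the duration of the step."""
    original = model.forward

    def forward(*args, **kwargs):
        kwargs.setdefault('position_ids', position_ids)
        return original(*args, **kwargs)

    model.forward = forward
    try:
        yield
    finally:
        model.forward = original  # restore even if the step raises


class Template:
    # Base hook: most templates need no patching, so the context is a no-op.
    def training_step_context(self, model, inputs):
        return nullcontext()


class PackedTemplate(Template):
    # Templates that need a temporary model patch return a real context.
    def training_step_context(self, model, inputs):
        return _patched_forward(model, inputs.get('position_ids'))
```

Because the hook returns a context manager rather than applying the patch eagerly, a trainer can enter it around the full step, as the dpo_trainer.py hunk below does.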

swift/llm/template/template/internvl.py

Lines changed: 2 additions & 2 deletions

@@ -56,14 +56,14 @@ def _encode(self, inputs: StdTemplateInputs) -> Dict[str, Any]:
        encoded['pixel_values'] = pixel_values
        return encoded
 
-    def compute_loss_context(self, model, inputs):
+    def training_step_context(self, model, inputs):
        model_name = model.language_model.__class__.__name__.lower()
        if self._packing and 'internlm2' in model_name:
            position_ids = inputs['position_ids']
            modeling_module = model.language_model.model.layers[0].attention.__class__
            return self._patch_flash_attention_forward(modeling_module, position_ids, use_new_func=True)
        else:
-            return super().compute_loss_context(model, inputs)
+            return super().training_step_context(model, inputs)
 
    def _post_encode(self, model: nn.Module, inputs: Dict[str, Any]) -> Dict[str, Any]:
        embedding = model.get_input_embeddings()
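
Note that the patch target here is a class (`...layers[0].attention.__class__`), not an instance, so a single patch covers every attention layer at once. A hedged sketch of what a class-level forward patch can look like; the names and injection logic are illustrative, and the real implementation lives in ms-swift's `_patch_flash_attention_forward`:

```python
from contextlib import contextmanager


@contextmanager
def patch_attention_forward(attention_cls, position_ids):
    # Swap the class's forward so every layer instance built from it sees
    # the packed-sequence position_ids, then restore the original on exit.
    original_forward = attention_cls.forward

    def patched_forward(self, *args, **kwargs):
        kwargs['position_ids'] = position_ids  # inject packing boundaries
        return original_forward(self, *args, **kwargs)

    attention_cls.forward = patched_forward
    try:
        yield
    finally:
        attention_cls.forward = original_forward
```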

swift/llm/template/template/qwen.py

Lines changed: 2 additions & 2 deletions

@@ -297,9 +297,9 @@ def _get_new_tokens(i):
        encoded['labels'] = labels
        return encoded
 
-    def compute_loss_context(self, model, inputs):
+    def training_step_context(self, model, inputs):
        if 'real_position_ids' not in inputs:
-            return super().compute_loss_context(model, inputs)
+            return super().training_step_context(model, inputs)
        if self.version == 'v2':
            from transformers.models.qwen2_vl import modeling_qwen2_vl as modeling_module
        elif self.version == 'v2_5':
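
The hunk is truncated after the `v2_5` branch; a compressed sketch of the dispatch idea follows, where everything past the visible `v2` branch is an assumption rather than copied from the commit:

```python
def get_modeling_module(version: str):
    # Pick the transformers modeling module whose flash-attention forward
    # should be patched. Only the 'v2' branch appears in the hunk above;
    # the other branches are assumptions for illustration.
    if version == 'v2':
        from transformers.models.qwen2_vl import modeling_qwen2_vl as modeling_module
    elif version == 'v2_5':
        from transformers.models.qwen2_5_vl import modeling_qwen2_5_vl as modeling_module
    else:
        raise ValueError(f'unsupported template version: {version}')
    return modeling_module
```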

swift/trainers/rlhf_trainer/dpo_trainer.py

Lines changed: 9 additions & 3 deletions

@@ -56,9 +56,10 @@ def concatenated_forward(
        batch['output_router_logits'] = True
        if self.is_encoder_decoder:
            batch['labels'] = labels
-        position_ids = batch.get('position_ids')
-        with self.template.compute_loss_context(self.model, batch):
-            outputs = model(**batch, use_cache=False)
+        position_ids = batch.pop('_position_ids', None)
+        if position_ids is None:
+            position_ids = batch.get('position_ids')
+        outputs = model(**batch, use_cache=False)
        all_logits = outputs.logits
 
        if all_logits.shape[1] != labels.shape[1]:
@@ -121,3 +122,8 @@ def get_per_token_logps(
        per_token_logps = selective_log_softmax(logits, labels)
        per_token_logps[~loss_mask] = 0
        return per_token_logps, logits.mean(-1), loss_mask
+
+    def training_step(self, model, inputs, *args, **kwargs):
+        inputs['_position_ids'] = inputs.get('position_ids')
+        with self.template.training_step_context(self.model, inputs):
+            return super().training_step(model, inputs, *args, **kwargs)
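
The DPO fix has two halves: `training_step` stashes `position_ids` under a private `_position_ids` key and enters the template context around the whole step, while `concatenated_forward` pops the stash (falling back to the live key) before calling the model. The pop matters because `model(**batch, ...)` would otherwise receive the unknown `_position_ids` keyword. A distilled standalone sketch of the round trip, with simplified signatures:

```python
def training_step(trainer, model, inputs):
    # Stash before the template patch can consume or rewrite 'position_ids'.
    inputs['_position_ids'] = inputs.get('position_ids')
    with trainer.template.training_step_context(model, inputs):
        return concatenated_forward(model, inputs)


def concatenated_forward(model, batch):
    # Prefer the stashed value; fall back to the live key for code paths
    # that never went through training_step (e.g. evaluation).
    position_ids = batch.pop('_position_ids', None)
    if position_ids is None:
        position_ids = batch.get('position_ids')
    outputs = model(**batch, use_cache=False)  # stash already popped off
    return outputs.logits, position_ids
```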
