
Batch is empty when fine-tuning flan-t5 using LoRA #31357

@MorenoLaQuatra


System Info

  • transformers version: 4.38.2
  • Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
  • Python version: 3.11.9
  • Huggingface_hub version: 0.23.2
  • Safetensors version: 0.4.3
  • Accelerate version: 0.30.1
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.3.0+cu121 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

@muellerzr @SunMarc

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

The issue is reported here: https://discuss.huggingface.co/t/valueerror-the-batch-received-was-empty-your-model-wont-be-able-to-train-on-it-double-check-that-your-training-dataset-contains-keys-expected-by-the-model-args-kwargs-label-ids-label/20200
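For reference, below is a minimal sketch of the kind of setup described in the linked thread. The model name, toy dataset, and hyperparameters are illustrative placeholders, not the exact code from the forum post.

```python
# Minimal sketch, assuming a standard PEFT + Seq2SeqTrainer setup; the
# model name, toy dataset, and hyperparameters are placeholders.
from datasets import Dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Wrap the base model with LoRA adapters via PEFT.
peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=16)
model = get_peft_model(base_model, peft_config)

# Tiny toy dataset tokenized into input_ids / labels.
raw = Dataset.from_dict({"source": ["translate: hello"], "target": ["bonjour"]})

def preprocess(example):
    inputs = tokenizer(example["source"], truncation=True)
    labels = tokenizer(text_target=example["target"], truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

train_dataset = raw.map(preprocess, remove_columns=raw.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="out", max_steps=1),
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)

# With remove_unused_columns=True (the default), the Trainer prunes any
# dataset column not found in the inspected forward signature. When the
# PEFT wrapper hides the base model's signature, input_ids and labels are
# dropped, and training fails with:
#   ValueError: The batch received was empty, your model won't be able to
#   train on it. ...
trainer.train()
```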

Expected behavior

Fine-tuning with LoRA should work correctly. I think the bug can be fixed by adding:

self._signature_columns += list(set(["labels", "input_ids"]))

to the `_set_signature_columns_if_needed` method of `Trainer` (in `transformers/trainer.py`).

This may not be the best way to go, though.
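Until a proper fix lands, here is a hedged sketch of the same idea applied as a user-side monkey-patch, so `transformers` itself does not need to be edited. The patched function name is mine; it simply wraps the private `Trainer._set_signature_columns_if_needed`, whose internals may change between versions.

```python
# Sketch: apply the proposed fix without editing transformers/trainer.py.
from transformers import Trainer

_original = Trainer._set_signature_columns_if_needed

def _patched_set_signature_columns_if_needed(self):
    _original(self)
    # Proposed addition: always keep the columns a seq2seq model needs,
    # even when a wrapper (e.g. a PeftModel) hides the base forward signature.
    if self._signature_columns is not None:
        for column in ("labels", "input_ids"):
            if column not in self._signature_columns:
                self._signature_columns.append(column)

Trainer._set_signature_columns_if_needed = _patched_set_signature_columns_if_needed
```

Setting `remove_unused_columns=False` in the `TrainingArguments` also sidesteps the problem, since the Trainer then never prunes dataset columns at all.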
