
Commit b96e82c

Add image classification script, no trainer (#16727)
* Add first draft
* Improve README and run fixup
* Make script aligned with other scripts, improve README
* Improve script and add test
* Remove print statement
* Apply suggestions from code review
* Add num_labels to make test pass
* Improve README
1 parent db9f189

File tree: 4 files changed, +604 −16 lines changed

examples/pytorch/README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -43,7 +43,7 @@ Coming soon!
 | [**`speech-recognition`**](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition) | TIMIT | ✅ | - |✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)
 | [**`multi-lingual speech-recognition`**](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition) | Common Voice | ✅ | - |✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)
 | [**`audio-classification`**](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) | SUPERB KS | ✅ | - |✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)
-| [**`image-classification`**](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) | CIFAR-10 | ✅ | - |✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)
+| [**`image-classification`**](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) | CIFAR-10 | ✅ | ✅ |✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)
 
 
 ## Running quick tests
````

examples/pytorch/image-classification/README.md

Lines changed: 68 additions & 15 deletions
````diff
@@ -14,20 +14,27 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-# Image classification example
+# Image classification examples
 
-This directory contains a script, `run_image_classification.py`, that showcases how to fine-tune any model supported by the [`AutoModelForImageClassification` API](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForImageClassification) (such as [ViT](https://huggingface.co/docs/transformers/main/en/model_doc/vit), [ConvNeXT](https://huggingface.co/docs/transformers/main/en/model_doc/convnext), [ResNet](https://huggingface.co/docs/transformers/main/en/model_doc/resnet), [Swin Transformer](https://huggingface.co/docs/transformers/main/en/model_doc/swin)...) using PyTorch. It can be used to fine-tune models on both well-known datasets (like [CIFAR-10](https://huggingface.co/datasets/cifar10), [Fashion MNIST](https://huggingface.co/datasets/fashion_mnist), ...) as well as on your own custom data.
+This directory contains 2 scripts that showcase how to fine-tune any model supported by the [`AutoModelForImageClassification` API](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForImageClassification) (such as [ViT](https://huggingface.co/docs/transformers/main/en/model_doc/vit), [ConvNeXT](https://huggingface.co/docs/transformers/main/en/model_doc/convnext), [ResNet](https://huggingface.co/docs/transformers/main/en/model_doc/resnet), [Swin Transformer](https://huggingface.co/docs/transformers/main/en/model_doc/swin)...) using PyTorch. They can be used to fine-tune models on both [datasets from the hub](#using-datasets-from-hub) as well as on [your own custom data](#using-your-own-data).
 
-This page includes 2 sections:
-- [Using datasets from the 🤗 hub](#using-datasets-from-hub)
-- [Using your own data](#using-your-own-data).
+<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/image_classification_inference_widget.png" height="400" />
 
+Try out the inference widget here: https://huggingface.co/google/vit-base-patch16-224
 
-## Using datasets from Hub
+Content:
+- [PyTorch version, Trainer](#pytorch-version-trainer)
+- [PyTorch version, no Trainer](#pytorch-version-no-trainer)
 
-Here we show how to fine-tune a Vision Transformer (`ViT`) on the [beans](https://huggingface.co/datasets/beans) dataset, to classify the disease type of bean leaves.
+## PyTorch version, Trainer
 
-👀 See the results here: [nateraw/vit-base-beans](https://huggingface.co/nateraw/vit-base-beans).
+Based on the script [`run_image_classification.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py).
+
+The script leverages the 🤗 [Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer) to automatically take care of the training for you, running on distributed environments right away.
+
+### Using datasets from Hub
+
+Here we show how to fine-tune a Vision Transformer (`ViT`) on the [beans](https://huggingface.co/datasets/beans) dataset, to classify the disease type of bean leaves.
 
 ```bash
 python run_image_classification.py \
````
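For reference, the kind of `Trainer` setup this script wraps looks roughly like the sketch below. This is illustrative only, not the script's actual code: the checkpoint, dataset, output directory, and preprocessing choices here are example assumptions.

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoFeatureExtractor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

# Illustrative choices, not the script's defaults: any image-classification
# checkpoint and dataset from the hub could be substituted here.
checkpoint = "google/vit-base-patch16-224-in21k"
dataset = load_dataset("beans")
labels = dataset["train"].features["labels"].names

feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint, num_labels=len(labels))

def transforms(examples):
    # Turn PIL images into normalized pixel_values tensors, one per example
    examples["pixel_values"] = [
        feature_extractor(image.convert("RGB"), return_tensors="pt")["pixel_values"][0]
        for image in examples["image"]
    ]
    return examples

# Apply the preprocessing on the fly, at access time
dataset = dataset.with_transform(transforms)

def collate_fn(examples):
    # Stack per-example tensors into a batch the model can consume
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["labels"] for e in examples]),
    }

trainer = Trainer(
    model=model,
    # remove_unused_columns=False keeps the raw "image" column alive
    # until the on-the-fly transform has consumed it
    args=TrainingArguments(output_dir="./vit-beans", remove_unused_columns=False),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    data_collator=collate_fn,
)
trainer.train()
```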
````diff
@@ -51,17 +58,19 @@ python run_image_classification.py \
   --seed 1337
 ```
 
-To fine-tune another model, simply provide the `--model_name_or_path` argument. To train on another dataset, simply set the `--dataset_name` argument.
+👀 See the results here: [nateraw/vit-base-beans](https://huggingface.co/nateraw/vit-base-beans).
+
+Note that you can replace the model and dataset by simply setting the `model_name_or_path` and `dataset_name` arguments respectively, with any model or dataset from the [hub](https://huggingface.co/). For an overview of all possible arguments, we refer to the [docs](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) of the `TrainingArguments`, which can be passed as flags.
 
-## Using your own data
+### Using your own data
 
 To use your own dataset, there are 2 ways:
 - you can either provide your own folders as `--train_dir` and/or `--validation_dir` arguments
 - you can upload your dataset to the hub (possibly as a private repo, if you prefer so), and simply pass the `--dataset_name` argument.
 
 Below, we explain both in more detail.
 
-### Provide them as folders
+#### Provide them as folders
 
 If you provide your own folders with images, the script expects the following directory structure:
 
````
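As the new paragraph above notes, every extra `--flag value` the script accepts beyond its own arguments (such as `--dataset_name`) maps to a `TrainingArguments` field of the same name. A minimal sketch, with illustrative values rather than the command's actual ones:

```python
from transformers import TrainingArguments

# Each `--flag value` passed to run_image_classification.py becomes a
# keyword argument of TrainingArguments with the same name; the values
# below are illustrative only.
args = TrainingArguments(
    output_dir="./beans-outputs",
    do_train=True,
    do_eval=True,
    learning_rate=2e-5,
    num_train_epochs=5,
    per_device_train_batch_size=8,
    seed=1337,
)
```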
````diff
@@ -88,11 +97,11 @@ python run_image_classification.py \
 
 Internally, the script will use the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature which will automatically turn the folders into 🤗 Dataset objects.
 
-#### 💡 The above will split the train dir into training and evaluation sets
+##### 💡 The above will split the train dir into training and evaluation sets
 - To control the split amount, use the `--train_val_split` flag.
 - To provide your own validation split in its own directory, you can pass the `--validation_dir <path-to-val-root>` flag.
 
-### Upload your data to the hub, as a (possibly private) repo
+#### Upload your data to the hub, as a (possibly private) repo
 
 It's very easy (and convenient) to upload your image dataset to the hub using the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature available in 🤗 Datasets. Simply do the following:
 
````
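The loading snippet referred to here is elided by the diff (only its first line survives in the next hunk's header). For orientation, the `ImageFolder` loader in 🤗 Datasets is typically invoked along these lines; the paths below are placeholders, not the README's exact snippet:

```python
from datasets import load_dataset

# Placeholder path: point data_dir at a root folder whose subfolders
# are named after the class labels (see the structure above).
dataset = load_dataset("imagefolder", data_dir="path/to/folder")

# Alternatively, pass explicit files or globs per split:
dataset = load_dataset(
    "imagefolder",
    data_files={"train": "path/to/train/**", "validation": "path/to/val/**"},
)
```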
````diff
@@ -117,17 +126,18 @@ dataset = load_dataset("imagefolder", data_files={"train": ["path/to/file1", "pa
 Next, push it to the hub!
 
 ```python
+# assuming you have run the huggingface-cli login command in a terminal
 dataset.push_to_hub("name_of_your_dataset")
 
 # if you want to push to a private repo, simply pass private=True:
 dataset.push_to_hub("name_of_your_dataset", private=True)
 ```
 
-and that's it! You can now simply train your model simply by setting the `--dataset_name` argument to the name of your dataset on the hub (as explained in [Using datasets from the 🤗 hub](#using-datasets-from-hub)).
+and that's it! You can now train your model by simply setting the `--dataset_name` argument to the name of your dataset on the hub (as explained in [Using datasets from the 🤗 hub](#using-datasets-from-hub)).
 
 More on this can also be found in [this blog post](https://huggingface.co/blog/image-search-datasets).
 
-# Sharing your model on 🤗 Hub
+### Sharing your model on 🤗 Hub
 
 0. If you haven't already, [sign up](https://huggingface.co/join) for a 🤗 account
 
````
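Once pushed, the dataset behaves like any other hub dataset. A minimal sketch of loading it back, assuming a placeholder repo name:

```python
from datasets import load_dataset

# Placeholder repo name: use the name you passed to push_to_hub,
# prefixed with your hub username.
dataset = load_dataset("your-username/name_of_your_dataset")

# For a private repo, pass your auth token (after `huggingface-cli login`):
dataset = load_dataset("your-username/name_of_your_dataset", use_auth_token=True)
```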
````diff
@@ -154,3 +164,46 @@ python run_image_classification.py \
   --push_to_hub_model_id <name-your-model> \
   ...
 ```
+
+## PyTorch version, no Trainer
+
+Based on the script [`run_image_classification_no_trainer.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification_no_trainer.py).
+
+Like `run_image_classification.py`, this script allows you to fine-tune any of the models on the [hub](https://huggingface.co/models) on an image classification task. The main difference is that this script exposes the bare training loop, to allow you to quickly experiment and add any customization you would like.
+
+It offers fewer options than the script with `Trainer` (for instance, you can easily change the options for the optimizer
+or the dataloaders directly in the script), but it still runs in a distributed setup and supports mixed precision by
+means of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library. You can use the script normally
+after installing it:
+
+```bash
+pip install accelerate
+```
+
+You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run
+
+```bash
+accelerate config
+```
+
+and reply to the questions asked. Then
+
+```bash
+accelerate test
+```
+
+which will check that everything is ready for training. Finally, you can launch training with
+
+```bash
+accelerate launch run_image_classification_no_trainer.py
+```
+
+This command is the same and will work for:
+
+- single/multiple CPUs
+- single/multiple GPUs
+- TPUs
+
+Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problems using it.
+
+Regarding using custom data with this script, we refer to [using your own data](#using-your-own-data).
````
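To make "exposes the bare training loop" concrete: the no-trainer scripts follow the standard 🤗 Accelerate pattern, roughly as in the sketch below. The model and data here are dummy stand-ins so the snippet runs on its own; the real script builds them from `AutoModelForImageClassification` and an image dataset.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Dummy stand-ins so the sketch is self-contained; the real script uses
# a transformers image-classification model and a 🤗 Datasets dataloader.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
train_dataloader = DataLoader(dataset, batch_size=8)

accelerator = Accelerator()  # handles device placement, DDP, mixed precision
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

loss_fn = torch.nn.CrossEntropyLoss()
model.train()
for epoch in range(3):
    for inputs, labels in train_dataloader:
        logits = model(inputs)
        loss = loss_fn(logits, labels)
        accelerator.backward(loss)  # replaces the usual loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```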
