See the License for the specific language governing permissions and
limitations under the License.
-->

# Image classification examples
This directory contains 2 scripts that showcase how to fine-tune any model supported by the [`AutoModelForImageClassification` API](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForImageClassification) (such as [ViT](https://huggingface.co/docs/transformers/main/en/model_doc/vit), [ConvNeXT](https://huggingface.co/docs/transformers/main/en/model_doc/convnext), [ResNet](https://huggingface.co/docs/transformers/main/en/model_doc/resnet), [Swin Transformer](https://huggingface.co/docs/transformers/main/en/model_doc/swin)...) using PyTorch. They can be used to fine-tune models both on [datasets from the hub](#using-datasets-from-hub) and on [your own custom data](#using-your-own-data).
## PyTorch version, Trainer
Based on the script [`run_image_classification.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py).

The script leverages the 🤗 [Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer) to automatically take care of the training for you, running on distributed environments right away.
### Using datasets from Hub
Here we show how to fine-tune a Vision Transformer (`ViT`) on the [beans](https://huggingface.co/datasets/beans) dataset, to classify the disease type of bean leaves.
To fine-tune another model, simply provide the `--model_name_or_path` argument. To train on another dataset, simply set the `--dataset_name` argument.
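For example, fine-tuning ViT on beans could look like the following sketch (the model checkpoint, output path, and hyperparameter values here are illustrative, not prescriptive):

```shell
python run_image_classification.py \
    --model_name_or_path google/vit-base-patch16-224-in21k \
    --dataset_name beans \
    --output_dir ./beans_outputs/ \
    --remove_unused_columns False \
    --do_train \
    --do_eval \
    --learning_rate 2e-5 \
    --num_train_epochs 5 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --seed 1337
```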
👀 See the results here: [nateraw/vit-base-beans](https://huggingface.co/nateraw/vit-base-beans).

Note that you can replace the model and dataset by simply setting the `model_name_or_path` and `dataset_name` arguments respectively, with any model or dataset from the [hub](https://huggingface.co/). For an overview of all possible arguments, we refer to the [docs](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) of the `TrainingArguments`, which can be passed as flags.
### Using your own data
To use your own dataset, there are 2 ways:
- you can either provide your own folders as `--train_dir` and/or `--validation_dir` arguments
- you can upload your dataset to the hub (possibly as a private repo, if you prefer so), and simply pass the `--dataset_name` argument.

Below, we explain both in more detail.
#### Provide them as folders
If you provide your own folders with images, the script expects the following directory structure:
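For instance, an ImageFolder-style layout with one subfolder per class (the folder and file names below are purely illustrative):

```
train/dog/golden_retriever.png
train/dog/german_shepherd.png
train/cat/maine_coon.png
train/cat/bengal.png
```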
Internally, the script will use the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature which will automatically turn the folders into 🤗 Dataset objects.
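To make the folder-to-label mapping concrete, here is a minimal standard-library sketch (an illustration only, not the actual 🤗 Datasets implementation) of how class labels can be inferred from subfolder names:

```python
from pathlib import Path

def infer_labels(data_dir: str) -> dict[str, int]:
    """Infer class names from subfolder names and assign integer label ids,
    mimicking how an ImageFolder-style loader derives labels."""
    # Each immediate subdirectory is treated as one class; sort for determinism.
    class_names = sorted(p.name for p in Path(data_dir).iterdir() if p.is_dir())
    return {name: idx for idx, name in enumerate(class_names)}
```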
##### 💡 The above will split the train dir into training and evaluation sets
- To control the split amount, use the `--train_val_split` flag.
- To provide your own validation split in its own directory, you can pass the `--validation_dir <path-to-val-root>` flag.
#### Upload your data to the hub, as a (possibly private) repo
It's very easy (and convenient) to upload your image dataset to the hub using the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature available in 🤗 Datasets. Simply do the following:
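Assuming the 🤗 Datasets `imagefolder` loader, the upload can be sketched as follows (the folder path and repo name are placeholders):

```python
from datasets import load_dataset

# Load the local image folder as a dataset (labels are inferred from subfolders),
# then push it to the Hugging Face Hub under your namespace.
dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
dataset.push_to_hub("your-username/your-dataset-name")  # pass private=True for a private repo
```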
and that's it! You can now train your model by simply setting the `--dataset_name` argument to the name of your dataset on the hub (as explained in [Using datasets from the 🤗 hub](#using-datasets-from-hub)).
More on this can also be found in [this blog post](https://huggingface.co/blog/image-search-datasets).
### Sharing your model on 🤗 Hub
0. If you haven't already, [sign up](https://huggingface.co/join) for a 🤗 account
## PyTorch version, no Trainer

Based on the script [`run_image_classification_no_trainer.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification_no_trainer.py).
Like `run_image_classification.py`, this script allows you to fine-tune any of the models on the [hub](https://huggingface.co/models) on an image classification task. The main difference is that this script exposes the bare training loop, to allow you to quickly experiment and add any customization you would like.
It offers fewer options than the script with `Trainer` (for instance, you can easily change the options for the optimizer or the dataloaders directly in the script), but it can still be run in a distributed setup and supports mixed precision by means of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library. You can use the script normally after installing it:
```bash
pip install accelerate
```
You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run
```bash
accelerate config
```
and reply to the questions asked. Then
```bash
accelerate test
```
that will check everything is ready for training. Finally, you can launch training with
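A launch command might look like the following sketch (the dataset name and output path are illustrative):

```shell
accelerate launch run_image_classification_no_trainer.py \
    --dataset_name beans \
    --output_dir ./beans_outputs/
```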