Releases: open-mmlab/mmdetection
MMDetection V2.24.1 Release
What's Changed
- [Fix] Fix no attribute 'samples_per_gpu' bug in `auto_scale_lr` by @jbwang1997 in #7862
- [Fix] Fix KeyError: 'ori_filename' when using `--show-dir` with centernet_resnet18_dcnv2_140e_coco.py by @jbwang1997 in #7865
- [Fix] Fix the configs of simplecopypaste by @Czm369 in #7864
- [Docs] Update readme by @chhluo in #7867
Full Changelog: v2.24.0...v2.24.1
MMDetection V2.24.0 Release
Highlights
- Support "Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation"
- Support automatically scaling LR according to GPU number and samples per GPU
- Support Class Aware Sampler that improves performance on OpenImages Dataset
New Features
- Support "Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation"; see the example configs (#7501)
- Support Class Aware Sampler. Users can set
  `data=dict(train_dataloader=dict(class_aware_sampler=dict(num_sample_class=1)))`
  in the config to use `ClassAwareSampler`. Examples can be found in the configs of the OpenImages Dataset. (#7436)
- Support automatically scaling LR according to GPU number and samples per GPU (#7482).
  In each config, there is a corresponding auto-scaling LR setting as below:
  `auto_scale_lr = dict(enable=True, base_batch_size=N)`,
  where `N` is the batch size used for the current learning rate in the config (it equals `samples_per_gpu` multiplied by the number of GPUs used to train this config).
  By default, we set `enable=False` so that the original usages are not affected. Users can set `enable=True` in each config or add `--auto-scale-lr` to the command line to enable this feature, and should check the correctness of `base_batch_size` in customized configs. A worked example follows this list.
- Support setting dataloader arguments in the config and add functions to handle config compatibility (#7668).
  The comparison between the old and new usages is shown below.

  Before v2.24.0:

  ```python
  data = dict(
      samples_per_gpu=64, workers_per_gpu=4,
      train=dict(type='xxx', ...),
      val=dict(type='xxx', samples_per_gpu=4, ...),
      test=dict(type='xxx', ...),
  )
  ```

  Since v2.24.0:

  ```python
  # A recommended config that is clear
  data = dict(
      train=dict(type='xxx', ...),
      val=dict(type='xxx', ...),
      test=dict(type='xxx', ...),
      # Use different batch size during inference.
      train_dataloader=dict(samples_per_gpu=64, workers_per_gpu=4),
      val_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
      test_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
  )

  # Old style still works but allows to set more arguments about data loaders
  data = dict(
      samples_per_gpu=64,  # only works for train_dataloader
      workers_per_gpu=4,   # only works for train_dataloader
      train=dict(type='xxx', ...),
      val=dict(type='xxx', ...),
      test=dict(type='xxx', ...),
      # Use different batch size during inference.
      val_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
      test_dataloader=dict(samples_per_gpu=8, workers_per_gpu=2),
  )
  ```
- Support memory profile hook. Users can use it to monitor the memory usage during training as below (#7560):

  ```python
  custom_hooks = [
      dict(type='MemoryProfilerHook', interval=50)
  ]
  ```
- Support running on PyTorch with the MLU chip (#7578)
- Support re-splitting the data batch with a tag (#7641)
- Support the `DiceCost` used by K-Net in `MaskHungarianAssigner` (#7716)
- Support splitting COCO data for semi-supervised object detection (#7431)
- Support Pathlib for `Config.fromfile` (#7685)
- Support using a file client in the OpenImages dataset (#7433)
- Add a probability parameter to the Mosaic transformation (#7371); see the pipeline sketch after this list
- Support specifying the interpolation mode in the `Resize` pipeline (#7585)
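As a worked illustration of the auto-scaling rule described above, here is a minimal sketch; the optimizer value and batch sizes are illustrative rather than taken from any released config, and it assumes the linear scaling rule referenced in #7482.

```python
# Minimal sketch (illustrative numbers): the config declares that its learning rate
# was tuned for a total batch size of 16 (samples_per_gpu * number of GPUs).
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
auto_scale_lr = dict(enable=True, base_batch_size=16)

# With auto-scaling enabled, the effective LR is lr * (actual_batch_size / base_batch_size):
#   8 GPUs x 2 samples_per_gpu = 16 -> LR stays 0.02
#   4 GPUs x 2 samples_per_gpu =  8 -> LR is scaled to 0.01
```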
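The pipeline sketch referenced above shows the two new pipeline arguments, the Mosaic probability and the Resize interpolation mode, in one place. The argument names follow the referenced PRs and should be verified against the installed version; the pipeline itself is illustrative rather than a complete, valid training pipeline.

```python
# Illustrative pipeline entries only; a real Mosaic setup additionally requires
# MultiImageMixDataset and matching post-processing transforms.
train_pipeline = [
    dict(type='Mosaic', img_scale=(640, 640), prob=0.5),  # apply Mosaic with probability 0.5 (#7371)
    dict(type='Resize',
         img_scale=(1333, 800),
         keep_ratio=True,
         interpolation='bicubic'),  # choose the interpolation mode (#7585)
    dict(type='RandomFlip', flip_ratio=0.5),
]
```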
Bug Fixes
- Avoid invalid bbox after deform_sampling (#7567)
- Fix the issue that argument color_theme does not take effect when exporting confusion matrix (#7701)
- Fix `end_level` in Necks, which should be the index of the end input backbone level (#7502)
- Fix the bug that `mix_results` may be None in `MultiImageMixDataset` (#7530)
- Fix the bug in the ResNet plugin when two plugins are used (#7797)
Improvements
- Enhance `load_json_logs` of analyze_logs.py for resumed training logs (#7732)
- Add argument `out_file` in image_demo.py (#7676)
- Allow mixed precision training with `SimOTAAssigner` (#7516)
- Update INF to 100000.0 to be the same as that in the official YOLOX (#7778)
- Add documentation of:
- Release pre-trained models of:
  - Mask2Former (#7595, #7709)
  - RetinaNet with ResNet-18 and release models (#7387)
  - RetinaNet with EfficientNet backbone (#7646)
Contributors
A total of 27 developers contributed to this release.
Thanks @jovialio, @zhangsanfeng2022, @HarryZJ, @jamiechoi1995, @nestiank, @PeterH0323, @RangeKing, @Y-M-Y, @mattcasey02, @weiji14, @Yulv-git, @xiefeifeihu, @FANG-MING, @meng976537406, @nijkah, @sudz123, @CCODING04, @SheffieldCao, @Czm369, @BIGWangYuDong, @zytx121, @jbwang1997, @chhluo, @jshilong, @RangiLyu, @hhaAndroid, @ZwwWayne
New Contributors
- @nestiank made their first contribution in #7591
- @PeterH0323 made their first contribution in #7482
- @mattcasey02 made their first contribution in #7610
- @weiji14 made their first contribution in #7516
- @Yulv-git made their first contribution in #7679
- @xiefeifeihu made their first contribution in #7701
- @SheffieldCao made their first contribution in #7732
- @jovialio made their first contribution in #7778
- @zhangsanfeng2022 made their first contribution in #7578
Full Changelog: v2.23.0...v2.24.0
MMDetection V2.23.0 Release
Highlights
- Support Mask2Former: Masked-attention Mask Transformer for Universal Image Segmentation
- Support EfficientNet ("EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks")
- Support setting the data root through the environment variable `MMDET_DATASETS`, so users no longer have to modify the corresponding paths in config files.
- Find a good recipe for fine-tuning a high-precision ResNet backbone pre-trained by Torchvision.
New Features
- Support Mask2Former (#6938, #7466, #7471)
- Support EfficientNet (#7514)
- Support setting the data root through the environment variable `MMDET_DATASETS`, so users no longer have to modify the corresponding paths in config files (#7386); a usage sketch follows this list
- Support setting different seeds for different ranks (#7432)
- Update `dist_train.sh` so that the script can be used to launch multi-node training on machines without Slurm (#7415)
- Find a good recipe for fine-tuning a high-precision ResNet backbone pre-trained by Torchvision (#7489)
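A minimal sketch of the environment-variable feature above, driven from Python; the dataset path and config file are illustrative, and the equivalent shell usage would simply prefix the training command with `MMDET_DATASETS=/data/datasets`.

```python
# Minimal sketch (illustrative paths): set MMDET_DATASETS so that configs resolve
# their data root from the environment instead of a hard-coded path, then launch training.
import os
import subprocess

env = dict(os.environ, MMDET_DATASETS='/data/datasets/')
subprocess.run(
    ['python', 'tools/train.py', 'configs/retinanet/retinanet_r50_fpn_1x_coco.py'],
    env=env,
    check=True,
)
```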
Bug Fixes
- Fix bug in VOC unit test which removes the data directory (#7270)
- Adjust the order of `get_classes` and `FileClient` (#7276)
- Force the inputs of `get_bboxes` in yolox_head to float32 (#7324)
- Fix misplaced arguments in LoadPanopticAnnotations (#7388)
- Fix `reduction=mean` in CELoss (#7449)
- Update unit test of CrossEntropyCost (#7537)
- Fix a memory leak in panoptic segmentation evaluation (#7538)
- Fix the bug of shape broadcast in YOLOv3 (#7551)
Improvements
- Add Chinese version of onnx2tensorrt.md (#7219)
- Update colab tutorials (#7310)
- Update information about Localization Distillation (#7350)
- Add a Chinese version of `finetune.md` (#7178)
- Update the YOLOX log for non-square input (#7235)
- Add `nproc` in coco_panoptic.py for panoptic quality computing (#7315)
- Allow setting channel_order in LoadImageFromFile (#7258)
- Take point sample related functions out of mask_point_head (#7353)
- Add instance evaluation for coco_panoptic (#7313)
- Enhance the robustness of analyze_logs.py (#7407)
- Supplementary notes of sync_random_seed (#7440)
- Update docstring of cross entropy loss (#7472)
- Update the Pascal VOC result (#7503)
- We create How-to documentation to record any questions about "How to xxx". In this version, we added:
Contributors
A total of 27 developers contributed to this release.
Thanks @ZwwWayne, @haofanwang, @shinya7y, @chhluo, @yangrisheng, @triple-Mu, @jbwang1997, @HikariTJU, @imflash217, @274869388, @zytx121, @matrixgame2018, @jamiechoi1995, @BIGWangYuDong, @JingweiZhang12, @Xiangxu-0103, @hhaAndroid, @jshilong, @osbm, @ceroytres, @bunge-bedstraw-herb, @Youth-Got, @daavoo, @jiangyitong, @RangiLyu, @CCODING04, @yarkable
New Contributors
- @triple-Mu made their first contribution in #7219
- @yangrisheng made their first contribution in #7180
- @imflash217 made their first contribution in #7398
- @osbm made their first contribution in #7498
- @ceroytres made their first contribution in #7510
- @bunge-bedstraw-herb made their first contribution in #7507
- @Youth-Got made their first contribution in #7521
- @CCODING04 made their first contribution in #7386
Full Changelog: v2.22.0...v2.23.0
MMDetection V2.22.0 Release
Breaking Changes
In order to support visualization for panoptic segmentation, `num_classes` cannot be None when using the `get_palette` function to determine whether to use the panoptic palette.
Highlights
- Support MaskFormer: Per-Pixel Classification is Not All You Need for Semantic Segmentation (#7212)
- Support DyHead: Dynamic Head: Unifying Object Detection Heads with Attentions (#6823)
- Release a good recipe for using ResNet backbones pre-trained by ResNet Strikes Back in object detectors, which consistently brings about 3~4 mAP improvement over RetinaNet and Faster/Mask/Cascade Mask R-CNN (#7001)
- Support Open Images Dataset (#6331)
- Support TIMM backbone: PyTorch Image Models (#7020)
New Features
- Support MaskFormer (#7212)
- Support DyHead (#6823)
- Support ResNet Strikes Back (#7001)
- Support OpenImages Dataset (#6331)
- Support TIMM backbone (#7020)
- Support visualization for Panoptic Segmentation (#7041)
Bug Fixes
- Fix a bug where the best checkpoint cannot be saved when `key_score` is None (#7101)
- Fix the failing case when the MixUp transform filters boxes (#7080)
- Add missing properties in SABLHead (#7091)
- Fix bug when NaNs exist in confusion matrix (#7147)
- Fix PALETTE AttributeError in downstream task (#7230)
Improvements
- Speed up SimOTA matching (#7098)
- Add Chinese translation of `docs_zh-CN/tutorials/init_cfg.md` (#7188)
Contributors
A total of 20 developers contributed to this release.
Thanks @ZwwWayne, @hhaAndroid, @RangiLyu, @AronLin, @BIGWangYuDong, @jbwang1997, @zytx121, @chhluo, @shinya7y, @LuooChen, @dvansa, @siatwangmin, @del-zhenwu, @vikashranjan26, @haofanwang, @jamiechoi1995, @HJoonKwon, @yarkable, @zhijian-liu, @RangeKing
New Contributors
- @LuooChen made their first contribution in #7101
- @dvansa made their first contribution in #7080
- @siatwangmin made their first contribution in #6476
- @vikashranjan26 made their first contribution in #7091
- @RangeKing made their first contribution in #7215
- @zhijian-liu made their first contribution in #7175
- @HJoonKwon made their first contribution in #7187
- @yarkable made their first contribution in #7188
Full Changelog: v2.21.0...v2.22.0
MMDetection V2.21.0 Release
Breaking Changes
To standardize the contents of config READMEs and meta files across OpenMMLab projects, the READMEs and meta files in each config directory have been significantly changed. The template will be released in the future; for now, you can refer to the examples of the README for an algorithm, a dataset, and a backbone. To align with the standard, the configs in `dcn` are put into two directories named `dcn` and `dcnv2`.
New Features
- Allow customizing the colors of different classes during visualization (#6716)
- Support CPU training (#7016)
- Add download script of COCO, LVIS, and VOC dataset (#7015)
Bug Fixes
- Fix weight conversion issue of RetinaNet with Swin-S (#6973)
- Update `__repr__` of `Compose` (#6951)
- Fix BadZipFile error when building the docker image (#6966)
- Fix bug in non-distributed multi-gpu training/testing (#7019)
- Fix bbox clamp in PyTorch 1.10 (#7074)
- Relax the requirement of PALETTE in dataset wrappers (#7085)
- Keep the same weights before reassign in the PAA head (#7032)
- Update code demo in doc (#7092)
Improvements
- Speed up training by allowing users to set multi-processing variables (#6974, #7036)
- Add links to the Chinese tutorials in the readme (#6897)
- Disable cv2 multiprocessing by default for acceleration (#6867)
- Deprecate the support for "python setup.py test" (#6998)
- Re-organize metafiles and config readmes (#7051)
- Fix the None grad problem during TOOD training by adding `SigmoidGeometricMean` (#7090)
Contributors
A total of 26 developers contributed to this release.
Thanks @del-zhenwu, @zimoqingfeng, @srishilesh, @imyhxy, @jenhaoyang, @jliu-ac, @kimnamu, @ShengliLiu, @garvan2021, @ciusji, @DIYer22, @kimnamu, @q3394101, @zhouzaida, @gaotongxiao, @topsy404, @AntoAndGar, @jbwang1997, @nijkah, @ZwwWayne, @Czm369, @jshilong, @RangiLyu, @BIGWangYuDong, @hhaAndroid, @AronLin
New Contributors
- @srishilesh made their first contribution in #6936
- @imyhxy made their first contribution in #6867
- @jliu-ac made their first contribution in #6725
- @ShengliLiu made their first contribution in #6973
- @garvan2021 made their first contribution in #6951
- @nijkah made their first contribution in #6956
- @ciusji made their first contribution in #7033
- @DIYer22 made their first contribution in #6966
- @topsy404 made their first contribution in #7124
- @AntoAndGar made their first contribution in #7095
- @zimoqingfeng made their first contribution in #7032
Full Changelog: v2.20.0...v2.21.0
MMDetection V2.20.0 Release
New Features
- Support TOOD: Task-aligned One-stage Object Detection (ICCV 2021 Oral) (#6746)
- Support resuming from the latest checkpoint automatically (#6727)
Bug Fixes
- Fix the wrong bbox `loss_weight` of the PAA head (#6744)
- Fix the padding value of `gt_semantic_seg` in batch collating (#6837)
- Fix the test error of LVIS when using `classwise` (#6845)
- Avoid BC-breaking of `get_local_path` (#6719)
- Fix a bug in `sync_norm_hook` when the BN layer does not exist (#6852)
- Use pycocotools directly regardless of platform (#6838)
Improvements
- Add unit test for SimOTA with no valid bbox (#6770)
- Use precommit to check readme (#6802)
- Support selecting GPU ids during non-distributed testing (#6781)
Contributors
A total of 16 developers contributed to this release.
Thanks @ZwwWayne, @Czm369, @jshilong, @RangiLyu, @BIGWangYuDong, @hhaAndroid, @jamiechoi1995, @AronLin, @Keiku, @gkagkos, @fcakyon, @www516717402, @vansin, @zactodd, @kimnamu, @jenhaoyang
New Contributors
- @jamiechoi1995 made their first contribution in #6795
- @Keiku made their first contribution in #6865
- @gkagkos made their first contribution in #6744
- @vansin made their first contribution in #6858
- @zactodd made their first contribution in #6764
- @kimnamu made their first contribution in #6906
- @jenhaoyang made their first contribution in #6881
Full Changelog: v2.19.1...v2.20.0
MMDetection V2.19.1 Release
- [Fix] Cancel previous runs that are not completed (#6772)
MMDetection V2.19.0 Release
Highlights
- Support Label Assignment Distillation
- Support `persistent_workers` for PyTorch >= 1.7
- Align accuracy to the updated official YOLOX
New Features
- Support Label Assignment Distillation (#6342)
- Support `persistent_workers` for PyTorch >= 1.7 (#6435); a config sketch follows this list
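A minimal sketch of enabling this option in a typical MMDetection 2.x data config; the dataset fields are illustrative placeholders rather than a complete config.

```python
# Minimal sketch (illustrative dataset fields): persistent_workers keeps dataloader
# worker processes alive between epochs instead of respawning them (PyTorch >= 1.7 only).
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    persistent_workers=True,  # added in v2.19.0
    train=dict(
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_train2017.json',
        img_prefix='data/coco/train2017/',
        pipeline=[],  # placeholder; a real config defines the training pipeline here
    ),
)
```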
Bug Fixes
- Fix repeatedly output warning message (#6584)
- Avoid infinite GPU waiting in dist training (#6501)
- Fix SSD512 config error (#6574)
- Fix MMDetection model to ONNX command (#6558)
Improvements
- Refactor configs of FP16 models (#6592)
- Align accuracy to the updated official YOLOX (#6443)
- Speed up training and reduce memory cost when using PhotoMetricDistortion (#6442)
- Make OHEM work with seesaw loss (#6514)
Documents
- Update README.md (#6567)
Contributors
A total of 11 developers contributed to this release.
Thanks @FloydHsiu, @RangiLyu, @ZwwWayne, @AndreaPi, @st9007a, @hachreak, @BIGWangYuDong, @hhaAndroid, @AronLin, @chhluo, @vealocia, @HarborYuan, @st9007a, @jshilong
New Contributors
- @FloydHsiu made their first contribution in #6546
MMDetection V2.18.1 Release
Bug Fixes
- Fix aug test error when the number of prediction bboxes is 0 (#6398)
- Fix SpatialReductionAttention in PVT (#6488)
- Fix wrong use of `trunc_normal_init` in PVT and Swin-Transformer (#6432)
Improvements
- Save the printed AP information of COCO API to logger (#6505)
- Always map location to CPU when loading a checkpoint (#6405)
- Set a random seed when the user does not set a seed (#6457)
Documents
- Chinese version of Corruption Benchmarking (#6375)
- Fix config path in docs (#6396)
- Update GRoIE readme (#6401)
Contributors
A total of 11 developers contributed to this release.
Thanks @st9007a, @hachreak, @HarborYuan, @vealocia, @chhluo, @AndreaPi, @AronLin, @BIGWangYuDong, @hhaAndroid, @RangiLyu, @ZwwWayne
Full Changelog: v2.18.0...v2.18.1
MMDetection V2.18.0 Release
Highlights
- Support QueryInst (#6050)
- Refactor dense heads to decouple the ONNX export logic from `get_bboxes` and speed up inference (#5317, #6003, #6369, #6268, #6315)
Bug Fixes
- Fix `init_weight` in fcn_mask_head (#6378)
- Fix a type error in `imshow_bboxes` of RPN (#6386)
- Fix a broken Colab link in the MMDetection tutorial (#6382)
- Make sure the device and dtype of `scale_factor` are the same as those of the bboxes (#6374)
- Remove hardcoded sampling (#6317)
- Fix the `RandomAffine` bbox coordinate bug (#6293)
- Fix an initialization bug of the final cls/reg layer in the convfc head (#6279)
- Fix `img_shape` being broken in auto_augment (#6259)
- Fix a missing kwargs parameter error in the two-stage detector (#6256)
Improvements
- Unify the interface of stuff head and panoptic head (#6308)
- Polish readme (#6243)
- Add code-spell pre-commit hook and fix a typo (#6306)
- Fix typos (#6245, #6190)
- Fix sampler unit test (#6284)
- Fix `forward_dummy` of YOLACT to enable `get_flops` (#6079)
- Fix link error in the config documentation (#6252)
- Adjust the order to beautify the document (#6195)
Refactors
- Refactor one-stage get_bboxes logic (#5317)
- Refactor ONNX export of One-Stage models (#6003, #6369)
- Refactor dense heads and speed-up (#6268)
- Migrate to use prior_generator in the training of dense heads (#6315)
Contributors
A total of 18 developers contributed to this release.
Thanks @boyden, @onnkeat, @st9007a, @vealocia, @yhcao6, @DapangpangX, @yellowdolphin, @cclauss, @kennymckormick,
@pingguokiller, @collinzrj, @AndreaPi, @AronLin, @BIGWangYuDong, @hhaAndroid, @jshilong, @RangiLyu, @ZwwWayne
New Contributors
- @AndreaPi made their first contribution in #6252
- @pingguokiller made their first contribution in #6256
- @kennymckormick made their first contribution in #6245
- @yellowdolphin made their first contribution in #6259
- @DapangpangX made their first contribution in #6293
- @vealocia made their first contribution in #6050
- @st9007a made their first contribution in #6374
- @onnkeat made their first contribution in #6382
- @boyden made their first contribution in #6378
Full Changelog: v2.17.0...v2.18.0