# PhysAug: A Physical-guided and Frequency-based Data Augmentation for Single-Domain Generalized Object Detection

This repository contains the official implementation of our paper accepted at AAAI 2025:

**"PhysAug: A Physical-guided and Frequency-based Data Augmentation for Single-Domain Generalized Object Detection"**

## 🎯 Abstract

PhysAug is a novel data augmentation technique designed for single-domain generalized object detection. By leveraging physical priors and frequency-based operations, PhysAug enhances the robustness of detection models under various challenging conditions, such as low-light or motion blur, while maintaining computational efficiency. Extensive experiments demonstrate the superior performance of PhysAug over existing methods, particularly in adverse real-world scenarios.

## 📜 Highlights

- **Physical-guided Augmentation**: Simulates real-world conditions using physical priors.
- **Frequency-based Feature Simulation**: Operates in the frequency domain for precise and computationally efficient augmentation (an illustrative sketch follows this list).
- **Improved Robustness**: Enhances model performance in challenging conditions like diverse weather.
- **Single-Domain Generalization**: Outperforms traditional methods without requiring domain adaptation techniques.
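
To make the frequency-domain idea concrete, here is a minimal, illustrative sketch (not the official PhysAug transform): it perturbs the amplitude spectrum of an image while keeping the phase, so spatial structure and box annotations stay valid while global appearance changes. The function name `frequency_perturb` and the multiplicative-noise scheme are illustrative assumptions, not names from this repository.

```python
import numpy as np

def frequency_perturb(image, strength=0.3, rng=None):
    """Toy frequency-domain augmentation (illustrative only, not the PhysAug op).

    Multiplies the amplitude spectrum of each channel by random noise while
    keeping the phase, so object locations are preserved but texture and
    appearance statistics change.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(image.shape, dtype=np.float32)
    for c in range(image.shape[2]):
        spec = np.fft.fft2(image[..., c].astype(np.float32))
        amplitude, phase = np.abs(spec), np.angle(spec)
        noise = 1.0 + strength * rng.standard_normal(amplitude.shape)
        out[..., c] = np.real(np.fft.ifft2(amplitude * noise * np.exp(1j * phase)))
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: augment a dummy RGB image.
img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
aug = frequency_perturb(img, strength=0.3)
```
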
## 🚀 Installation
```bash
git clone https://github.com/startracker0/PhysAug.git
cd PhysAug

# Create and activate the Python environment
conda create -n physaug python=3.8 -y
conda activate physaug

# Install PyTorch with CUDA 11.6 support
pip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu116

# Install the OpenMMLab toolchain and this repository in editable mode
pip install -U openmim
mim install mmengine
mim install "mmcv==2.1.0"
pip install -v -e .

# Additional dependencies
pip install einops==0.3.2
pip install opt-einsum==3.3.0
pip install tensorboard==2.13.0
```
To ensure reproducibility, the detailed environment dependencies are provided in `requirements.txt` and `environment.yaml`.
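
Assuming those files sit at the repository root (the README does not state their exact location), a typical way to use them is:

```bash
# Option A: recreate the full conda environment from the exported file
conda env create -f environment.yaml

# Option B: install the pinned packages into an existing environment
pip install -r requirements.txt
```
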
## 📊 Reproducing Results
Follow the steps below to reproduce the results reported in our AAAI 2025 paper.
### 1. Prepare the Dataset
Download and prepare the datasets required for the experiments, then update the dataset paths in the configuration files.
#### DWD Dataset
You can download the DWD dataset from the following link:
[Download DWD Dataset](https://drive.google.com/drive/folders/1IIUnUrJrvFgPzU8D6KtV0CXa8k1eBV9B)
#### Cityscapes-C Dataset
The Cityscapes dataset can be downloaded from the official website:
[Download Cityscapes Dataset](https://www.cityscapes-dataset.com/)

We generate the Cityscapes-C validation set based on the `cityscapes/leftImg8bit/val` portion of the dataset.
You can create this dataset using the [imagecorruptions](https://github.com/bethgelab/imagecorruptions) library, which provides various corruption functions to simulate adverse conditions such as noise, blur, weather, and digital artifacts.
```bash
git clone https://github.com/bethgelab/imagecorruptions.git
cd imagecorruptions
pip install -v -e .
python gen_cityscapes_c.py
```
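
If you need to regenerate the corrupted images yourself (for example, because `gen_cityscapes_c.py` has to be adapted to your directory layout), the core of the process is a call to `imagecorruptions.corrupt` per image and corruption type. The sketch below is an illustrative assumption, not the repository's actual script; the source path, output layout, and fixed severity are placeholders you may need to change:

```python
import os

import numpy as np
from PIL import Image
from imagecorruptions import corrupt, get_corruption_names

SRC_DIR = "datasets/cityscapes/leftImg8bit/val"  # assumed location of the clean val split
DST_ROOT = "datasets/Cityscapes-c"               # matches the layout shown below
SEVERITY = 3                                     # assumption: pick the severity level you need

for corruption in get_corruption_names():        # 'brightness', 'contrast', ..., 'zoom_blur'
    for root, _, files in os.walk(SRC_DIR):
        for name in files:
            if not name.endswith(".png"):
                continue
            img = np.asarray(Image.open(os.path.join(root, name)).convert("RGB"))
            corrupted = corrupt(img, corruption_name=corruption, severity=SEVERITY)
            out_dir = os.path.join(DST_ROOT, corruption, os.path.relpath(root, SRC_DIR))
            os.makedirs(out_dir, exist_ok=True)
            Image.fromarray(corrupted).save(os.path.join(out_dir, name))
```
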
The datasets should be organized as follows:
```bash
datasets/
├── DWD/
│   ├── daytime_clear/
│   ├── daytime_foggy/
│   ├── dusk_rainy/
│   ├── night_rainy/
│   └── night_sunny/
├── Cityscapes-c/
│   ├── brightness/
│   ├── contrast/
│   ├── defocus_blur/
│   ├── ...
│   └── zoom_blur/
```
### 2. Training the Model
To train the model using PhysAug, follow these steps:
1. Ensure the dataset paths are correctly configured in `configs/_base_/datasets/dwd.py` and `configs/_base_/datasets/cityscapes_detection.py` (a sketch of the typical fields follows the commands below).
2. Run the following command to start training:
```bash
bash train_dwd.sh           # DWD benchmark
bash train_cityscapes_c.sh  # Cityscapes / Cityscapes-C benchmark
```
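
The exact fields to edit depend on the configs shipped with this repository; as a rough guide, MMDetection-style dataset configs expose a `data_root` plus per-split annotation and image paths that should point at the directories prepared above. The snippet below is only a hedged illustration of that pattern; the annotation file names are hypothetical and the real keys in `configs/_base_/datasets/dwd.py` may differ:

```python
# Illustrative excerpt of an MMDetection-style dataset config (assumed structure).
data_root = 'datasets/DWD/'

train_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        ann_file='daytime_clear/ImageSets/Main/train.txt',  # hypothetical annotation list
        data_prefix=dict(sub_data_root='daytime_clear/'),
    )
)

val_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        ann_file='daytime_foggy/ImageSets/Main/test.txt',   # hypothetical annotation list
        data_prefix=dict(sub_data_root='daytime_foggy/'),
    )
)
```
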
### 3. Evaluating the Model
To evaluate the trained model, follow these steps:
1. Specify the dataset to evaluate (e.g., DWD, Cityscapes, or Cityscapes-C).
2. Run the evaluation script with the following command (a hedged manual alternative is sketched after the command):
```bash
bash test.sh
```
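
The README does not show the contents of `test.sh`. If you need to run the evaluation manually, MMDetection-style codebases usually evaluate a checkpoint with `tools/test.py`; the config and checkpoint paths below are placeholders, not files guaranteed to exist in this repository:

```bash
# Hypothetical manual invocation; adjust the placeholder paths to your setup.
CONFIG=configs/physaug/physaug_dwd.py            # placeholder: check the configs/ directory
CHECKPOINT=work_dirs/physaug_dwd/epoch_12.pth    # placeholder: your trained checkpoint
python tools/test.py "$CONFIG" "$CHECKPOINT" --work-dir work_dirs/physaug_eval
```
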
### 4. Pre-trained Models
You can download the pre-trained models, including Physaug_DWD and Physaug_Cityscapes, from [here](https://pan.baidu.com/s/1bSoP0b2Ce4W4_14wwTyxcQ?pwd=6ske).

If the link is no longer accessible, please feel free to contact me.
