Commit 419af45

【Hackathon 7th】Remove parser.add_argument (#3878)
* Update test_wav.py
* Update export.py
* Update test_export.py
* Update model.py
* Update README.md
* Apply suggestions from code review
* Apply suggestions from code review
* Update README.md
* Update README.md
* Update test.py
* Update README.md
1 parent 99d4b70 commit 419af45

File tree

6 files changed: +24 −28 lines

examples/aishell/asr0/README.md

Lines changed: 18 additions & 11 deletions
@@ -103,12 +103,19 @@ If you want to train the model, you can use the script below to execute stage 0
 ```bash
 bash run.sh --stage 0 --stop_stage 1
 ```
-or you can run these scripts in the command line (only use CPU).
+Or you can run these scripts in the command line (only use CPU).
 ```bash
 source path.sh
 bash ./local/data.sh
-CUDA_VISIBLE_DEVICES= ./local/train.sh conf/deepspeech2.yaml deepspeech2
+CUDA_VISIBLE_DEVICES= ./local/train.sh conf/deepspeech2.yaml deepspeech2
 ```
+If you want to use GPU, you can run these scripts in the command line (suppose you have only 1 GPU).
+```bash
+source path.sh
+bash ./local/data.sh
+CUDA_VISIBLE_DEVICES=0 ./local/train.sh conf/deepspeech2.yaml deepspeech2
+```
+
 ## Stage 2: Top-k Models Averaging
 After training the model, we need to get the final model for testing and inference. In every epoch, the model checkpoint is saved, so we can choose the best model from them based on the validation loss or we can sort them and average the parameters of the top-k models to get the final model. We can use stage 2 to do this, and the code is shown below:
 ```bash
@@ -148,7 +155,7 @@ source path.sh
 bash ./local/data.sh
 CUDA_VISIBLE_DEVICES= ./local/train.sh conf/deepspeech2.yaml deepspeech2
 avg.sh best exp/deepspeech2/checkpoints 1
-CUDA_VISIBLE_DEVICES= ./local/test.sh conf/deepspeech2.yaml exp/deepspeech2/checkpoints/avg_1
+CUDA_VISIBLE_DEVICES= ./local/test.sh conf/deepspeech2.yaml conf/tuning/decode.yaml exp/deepspeech2/checkpoints/avg_10
 ```
 ## Pretrained Model
 You can get the pretrained models from [this](../../../docs/source/released_model.md).
@@ -157,14 +164,14 @@ using the `tar` scripts to unpack the model and then you can use the script to t
 
 For example:
 ```
-wget https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_aishell_ckpt_0.1.1.model.tar.gz
-tar xzvf asr0_deepspeech2_aishell_ckpt_0.1.1.model.tar.gz
+wget https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_offline_aishell_ckpt_1.0.1.model.tar.gz
+tar xzvf asr0_deepspeech2_offline_aishell_ckpt_1.0.1.model.tar.gz
 source path.sh
 # If you have process the data and get the manifest file, you can skip the following 2 steps
 bash local/data.sh --stage -1 --stop_stage -1
 bash local/data.sh --stage 2 --stop_stage 2
 
-CUDA_VISIBLE_DEVICES= ./local/test.sh conf/deepspeech2.yaml exp/deepspeech2/checkpoints/avg_1
+CUDA_VISIBLE_DEVICES= ./local/test.sh conf/deepspeech2.yaml exp/deepspeech2/checkpoints/avg_10
 ```
 The performance of the released models are shown in [this](./RESULTS.md)
 ## Stage 4: Static graph model Export
@@ -178,7 +185,7 @@ This stage is to transform dygraph to static graph.
 If you already have a dynamic graph model, you can run this script:
 ```bash
 source path.sh
-./local/export.sh deepspeech2.yaml exp/deepspeech2/checkpoints/avg_1 exp/deepspeech2/checkpoints/avg_1.jit offline
+./local/export.sh conf/deepspeech2.yaml exp/deepspeech2/checkpoints/avg_10 exp/deepspeech2/checkpoints/avg_10.jit
 ```
 ## Stage 5: Static graph Model Testing
 Similar to stage 3, the static graph model can also be tested.
@@ -190,7 +197,7 @@ Similar to stage 3, the static graph model can also be tested.
 ```
 If you already have exported the static graph, you can run this script:
 ```bash
-CUDA_VISIBLE_DEVICES= ./local/test_export.sh conf/deepspeech2.yaml exp/deepspeech2/checkpoints/avg_1.jit offline
+CUDA_VISIBLE_DEVICES= ./local/test_export.sh conf/deepspeech2.yaml conf/tuning/decode.yaml exp/deepspeech2/checkpoints/avg_10.jit
 ```
 ## Stage 6: Single Audio File Inference
 In some situations, you want to use the trained model to do the inference for the single audio file. You can use stage 5. The code is shown below
@@ -202,14 +209,14 @@ if [ ${stage} -le 6 ] && [ ${stop_stage} -ge 6 ]; then
 ```
 you can train the model by yourself, or you can download the pretrained model by the script below:
 ```bash
-wget https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_aishell_ckpt_0.1.1.model.tar.gz
-tar xzvf asr0_deepspeech2_aishell_ckpt_0.1.1.model.tar.gz
+wget https://paddlespeech.bj.bcebos.com/s2t/aishell/asr0/asr0_deepspeech2_offline_aishell_ckpt_1.0.1.model.tar.gz
+tar xzvf asr0_deepspeech2_offline_aishell_ckpt_1.0.1.model.tar.gz
 ```
 You can download the audio demo:
 ```bash
 wget -nc https://paddlespeech.bj.bcebos.com/datasets/single_wav/zh/demo_01_03.wav -P data/
 ```
 You need to prepare an audio file or use the audio demo above, please confirm the sample rate of the audio is 16K. You can get the result of the audio demo by running the script below.
 ```bash
-CUDA_VISIBLE_DEVICES= ./local/test_wav.sh conf/deepspeech2.yaml conf/tuning/decode.yaml exp/deepspeech2/checkpoints/avg_1 data/demo_01_03.wav
+CUDA_VISIBLE_DEVICES= ./local/test_wav.sh conf/deepspeech2.yaml conf/tuning/decode.yaml exp/deepspeech2/checkpoints/avg_10 data/demo_01_03.wav
 ```
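The "top-k models averaging" of Stage 2 (driven by `avg.sh` above) can be sketched in plain Python. This is an illustrative stand-in, not the project's `avg.sh` implementation: checkpoints are modeled as `(validation_loss, params)` pairs with parameters as lists of floats, and the k lowest-loss checkpoints are averaged element-wise.

```python
def average_checkpoints(checkpoints, k):
    """Average the parameters of the k checkpoints with the lowest val loss.

    checkpoints: list of (val_loss, params), params maps name -> list of floats.
    """
    # Sort by validation loss and keep the best k parameter dicts.
    top_k = [params for _, params in sorted(checkpoints, key=lambda c: c[0])[:k]]
    averaged = {}
    for name in top_k[0]:
        # Element-wise mean across the selected checkpoints.
        stacked = [params[name] for params in top_k]
        averaged[name] = [sum(vals) / k for vals in zip(*stacked)]
    return averaged

ckpts = [
    (0.9, {"w": [1.0, 2.0]}),
    (0.5, {"w": [3.0, 4.0]}),
    (0.7, {"w": [5.0, 6.0]}),
]
# Averages the two lowest-loss checkpoints (losses 0.5 and 0.7).
print(average_checkpoints(ckpts, 2))  # → {'w': [4.0, 5.0]}
```

The real script operates on saved Paddle checkpoint tensors, but the selection-then-mean structure is the same idea.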

paddlespeech/s2t/exps/deepspeech2/bin/export.py

Lines changed: 0 additions & 3 deletions
@@ -32,9 +32,6 @@ def main(config, args):
 
 
 if __name__ == "__main__":
     parser = default_argument_parser()
-    # save jit model to
-    parser.add_argument(
-        "--export_path", type=str, help="path of the jit model to save")
     args = parser.parse_args()
     print_arguments(args)
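The removals above work because the scripts keep reading `args.export_path` (and similar attributes) after parsing, so the shared `default_argument_parser()` must now define those options centrally. A hypothetical sketch of that pattern follows; the parser body here is illustrative, not PaddleSpeech's actual implementation, though the option names mirror the diff.

```python
import argparse

def default_argument_parser():
    # Common options defined once, so entry points such as export.py no
    # longer need their own parser.add_argument("--export_path", ...) call.
    parser = argparse.ArgumentParser(description="shared s2t CLI options (sketch)")
    parser.add_argument("--config", type=str, help="config file path")
    parser.add_argument("--checkpoint_path", type=str, help="checkpoint to load")
    parser.add_argument(
        "--export_path", type=str, help="path of the jit model to save")
    return parser

args = default_argument_parser().parse_args(
    ["--config", "conf/deepspeech2.yaml",
     "--export_path", "exp/deepspeech2/checkpoints/avg_10.jit"])
print(args.export_path)  # → exp/deepspeech2/checkpoints/avg_10.jit
```

Centralizing the options keeps the CLI consistent across `export.py`, `test.py`, `test_export.py`, and `test_wav.py`, which is the point of this commit.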

paddlespeech/s2t/exps/deepspeech2/bin/test.py

Lines changed: 0 additions & 3 deletions
@@ -32,9 +32,6 @@ def main(config, args):
 
 
 if __name__ == "__main__":
     parser = default_argument_parser()
-    # save asr result to
-    parser.add_argument(
-        "--result_file", type=str, help="path of save the asr result")
     args = parser.parse_args()
     print_arguments(args, globals())

paddlespeech/s2t/exps/deepspeech2/bin/test_export.py

Lines changed: 0 additions & 6 deletions
@@ -32,12 +32,6 @@ def main(config, args):
 
 
 if __name__ == "__main__":
     parser = default_argument_parser()
-    # save asr result to
-    parser.add_argument(
-        "--result_file", type=str, help="path of save the asr result")
-    # load jit model from
-    parser.add_argument(
-        "--export_path", type=str, help="path of the jit model to save")
     parser.add_argument(
         "--enable-auto-log", action="store_true", help="use auto log")
     args = parser.parse_args()

paddlespeech/s2t/exps/deepspeech2/bin/test_wav.py

Lines changed: 0 additions & 4 deletions
@@ -171,10 +171,6 @@ def main(config, args):
 
 
 if __name__ == "__main__":
     parser = default_argument_parser()
-    parser.add_argument("--audio_file", type=str, help='audio file path')
-    # save asr result to
-    parser.add_argument(
-        "--result_file", type=str, help="path of save the asr result")
     args = parser.parse_args()
     print_arguments(args, globals())
     if not os.path.isfile(args.audio_file):

paddlespeech/s2t/exps/deepspeech2/model.py

Lines changed: 6 additions & 1 deletion
@@ -335,7 +335,12 @@ def export(self):
             self.test_loader, self.config, self.args.checkpoint_path)
         infer_model.eval()
         static_model = infer_model.export()
-        logger.info(f"Export code: {static_model.forward.code}")
+        try:
+            logger.info(f"Export code: {static_model.forward.code}")
+        except Exception:
+            logger.info(
+                "Failed to print export code; "
+                "static_model.forward.code is unavailable.")
         paddle.jit.save(static_model, self.args.export_path)
 
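The `model.py` change treats dumping the traced forward's source as best-effort diagnostics: if introspection fails, export still proceeds to `paddle.jit.save`. The same defensive-logging pattern can be shown without Paddle, using a stand-in object whose `code` property raises (both names here are illustrative assumptions):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("export")

class FakeStaticModel:
    # Stand-in for a traced model: accessing .code may raise on some graphs.
    @property
    def code(self):
        raise RuntimeError("source for traced forward is unavailable")

def log_export_code(static_model):
    # Mirror of the commit's pattern: a failure to print the export code is
    # logged and swallowed so it never aborts the subsequent model save.
    try:
        logger.info("Export code: %s", static_model.code)
        return True
    except Exception:
        logger.info("Failed to print export code; continuing with save.")
        return False

print(log_export_code(FakeStaticModel()))  # → False
```

Catching `Exception` (rather than a bare `except:`) avoids swallowing `KeyboardInterrupt` and `SystemExit` while still isolating the save path from introspection failures.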
