icefall-asr-commonvoice-fr-pruned-transducer-stateless7-streaming-2023-04-02/decoding_results/fast_beam_search/log-decode-epoch-29-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2023-04-03-17-31-21
2023-04-03 17:31:21,345 INFO [decode.py:659] Decoding started
2023-04-03 17:31:21,345 INFO [decode.py:665] Device: cuda:0
2023-04-03 17:31:21,347 INFO [decode.py:675] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '62e404dd3f3a811d73e424199b3408e309c06e1a', 'k2-git-date': 'Mon Jan 30 02:26:16 2023', 'lhotse-version': '1.12.0.dev+git.3ccfeb7.clean', 'torch-version': '1.13.0', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.8', 'icefall-git-branch': 'master', 'icefall-git-sha1': 'd74822d-dirty', 'icefall-git-date': 'Tue Mar 21 21:35:32 2023', 'icefall-path': '/home/lishaojie/icefall', 'k2-path': '/home/lishaojie/.conda/envs/env_lishaojie/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/home/lishaojie/.conda/envs/env_lishaojie/lib/python3.8/site-packages/lhotse/__init__.py', 'hostname': 'cnc533', 'IP address': '127.0.1.1'}, 'epoch': 29, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp1'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 64, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_streaming/exp1/fast_beam_search'), 'suffix': 'epoch-29-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
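[Editor's note] The settings that drive fast_beam_search in the config above are 'beam', 'max_contexts', 'max_states', and 'context_size'. As a hedged illustration only (not the recipe's own code), they map onto k2's RNN-T decoding configuration roughly as below; treating k2.RnntDecodingConfig as the relevant API is an assumption based on the k2 1.23.4 release named in the log.

```python
import k2

# Sketch only: mirrors the logged settings (beam=20.0, max_contexts=8,
# max_states=64, context_size=2, vocab_size=500). The recipe's
# beam_search.py builds an equivalent config internally.
config = k2.RnntDecodingConfig(
    vocab_size=500,          # 'vocab_size' in the log
    decoder_history_len=2,   # 'context_size' in the log
    beam=20.0,               # 'beam'
    max_states=64,           # 'max_states'
    max_contexts=8,          # 'max_contexts'
)
```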
2023-04-03 17:31:21,347 INFO [decode.py:677] About to create model
2023-04-03 17:31:21,749 INFO [zipformer.py:405] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2023-04-03 17:31:21,757 INFO [decode.py:748] Calculating the averaged model over epoch range from 20 (excluded) to 29
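[Editor's note] A simplified sketch of what "averaging over epoch range (20, 29]" amounts to: take the weights of epochs 21 through 29 and average them parameter-wise. icefall's --use-averaged-model actually derives the average from the running model_avg tensors stored in the start/end checkpoints, which weights every training batch in that range rather than just the epoch-end snapshots; the plain epoch-checkpoint averaging and the paths below are illustrative assumptions.

```python
import torch

# Illustrative checkpoint names under the exp dir from the log.
ckpts = [f"pruned_transducer_stateless7_streaming/exp1/epoch-{e}.pt"
         for e in range(21, 30)]          # epochs 21..29, i.e. (20, 29]

avg = None
for path in ckpts:
    state = torch.load(path, map_location="cpu")["model"]
    if avg is None:
        avg = {k: v.detach().clone().float() for k, v in state.items()}
    else:
        for k in avg:
            avg[k] += state[k].float()
avg = {k: v / len(ckpts) for k, v in avg.items()}   # parameter-wise mean
```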
2023-04-03 17:31:23,870 INFO [decode.py:782] Number of model parameters: 70369391
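[Editor's note] The parameter count above is the usual PyTorch tally over all model parameters; a minimal helper reproducing the same number for any nn.Module (the function name is mine, not from decode.py):

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of scalar parameters, matching the figure logged above."""
    return sum(p.numel() for p in model.parameters())
```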
2023-04-03 17:31:23,871 INFO [commonvoice_fr.py:406] About to get test cuts
2023-04-03 17:31:26,743 INFO [decode.py:560] batch 0/?, cuts processed until now is 27
2023-04-03 17:31:31,854 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.8338, 1.6836, 1.5364, 1.7643, 2.1272, 2.0399, 1.7407, 1.5925],
device='cuda:0'), covar=tensor([0.0367, 0.0349, 0.0585, 0.0342, 0.0213, 0.0459, 0.0350, 0.0414],
device='cuda:0'), in_proj_covar=tensor([0.0097, 0.0103, 0.0143, 0.0108, 0.0097, 0.0111, 0.0100, 0.0110],
device='cuda:0'), out_proj_covar=tensor([7.4944e-05, 7.9098e-05, 1.1173e-04, 8.2734e-05, 7.5248e-05, 8.1783e-05,
7.3728e-05, 8.3511e-05], device='cuda:0')
2023-04-03 17:31:36,035 INFO [decode.py:560] batch 20/?, cuts processed until now is 604
2023-04-03 17:31:46,332 INFO [decode.py:560] batch 40/?, cuts processed until now is 1209
2023-04-03 17:31:54,962 INFO [decode.py:560] batch 60/?, cuts processed until now is 1866
2023-04-03 17:32:04,386 INFO [decode.py:560] batch 80/?, cuts processed until now is 2422
2023-04-03 17:32:13,074 INFO [decode.py:560] batch 100/?, cuts processed until now is 3088
2023-04-03 17:32:14,054 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.2042, 1.9050, 2.4510, 1.6668, 2.2104, 2.4274, 1.7766, 2.5439],
device='cuda:0'), covar=tensor([0.1183, 0.2019, 0.1326, 0.1735, 0.0824, 0.1156, 0.2855, 0.0688],
device='cuda:0'), in_proj_covar=tensor([0.0188, 0.0202, 0.0188, 0.0186, 0.0170, 0.0210, 0.0213, 0.0194],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2023-04-03 17:32:22,296 INFO [decode.py:560] batch 120/?, cuts processed until now is 3672
2023-04-03 17:32:28,822 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.4821, 2.5480, 2.1061, 1.0486, 2.2957, 1.9745, 1.9215, 2.3727],
device='cuda:0'), covar=tensor([0.0910, 0.0618, 0.1588, 0.1982, 0.1334, 0.2716, 0.2164, 0.0818],
device='cuda:0'), in_proj_covar=tensor([0.0167, 0.0187, 0.0196, 0.0178, 0.0206, 0.0207, 0.0220, 0.0192],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2023-04-03 17:32:30,833 INFO [decode.py:560] batch 140/?, cuts processed until now is 4348
2023-04-03 17:32:39,389 INFO [decode.py:560] batch 160/?, cuts processed until now is 5035
2023-04-03 17:32:41,458 INFO [zipformer.py:2441] attn_weights_entropy = tensor([0.5151, 1.7409, 1.7151, 0.9089, 1.8593, 1.9981, 2.0410, 1.5445],
device='cuda:0'), covar=tensor([0.0868, 0.0573, 0.0496, 0.0555, 0.0400, 0.0600, 0.0273, 0.0678],
device='cuda:0'), in_proj_covar=tensor([0.0119, 0.0146, 0.0125, 0.0119, 0.0128, 0.0127, 0.0138, 0.0146],
device='cuda:0'), out_proj_covar=tensor([8.7160e-05, 1.0465e-04, 8.8840e-05, 8.3773e-05, 8.9721e-05, 9.0117e-05,
9.8475e-05, 1.0448e-04], device='cuda:0')
2023-04-03 17:32:48,122 INFO [decode.py:560] batch 180/?, cuts processed until now is 5674
2023-04-03 17:32:56,943 INFO [decode.py:560] batch 200/?, cuts processed until now is 6301
2023-04-03 17:33:05,928 INFO [decode.py:560] batch 220/?, cuts processed until now is 6914
2023-04-03 17:33:14,496 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.3674, 2.1843, 2.3572, 1.7864, 2.2139, 2.4677, 2.4855, 1.9260],
device='cuda:0'), covar=tensor([0.0445, 0.0549, 0.0557, 0.0678, 0.1166, 0.0458, 0.0437, 0.0916],
device='cuda:0'), in_proj_covar=tensor([0.0128, 0.0133, 0.0136, 0.0116, 0.0123, 0.0135, 0.0136, 0.0158],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2023-04-03 17:33:14,778 INFO [decode.py:560] batch 240/?, cuts processed until now is 7540
2023-04-03 17:33:23,635 INFO [decode.py:560] batch 260/?, cuts processed until now is 8161
2023-04-03 17:33:27,250 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.4658, 2.5376, 2.1602, 1.2614, 2.3429, 2.0655, 1.9856, 2.4390],
device='cuda:0'), covar=tensor([0.0989, 0.0582, 0.1814, 0.1883, 0.1155, 0.2226, 0.2119, 0.0799],
device='cuda:0'), in_proj_covar=tensor([0.0167, 0.0187, 0.0196, 0.0178, 0.0206, 0.0207, 0.0220, 0.0192],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2023-04-03 17:33:32,012 INFO [decode.py:560] batch 280/?, cuts processed until now is 8857
2023-04-03 17:33:39,857 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.5870, 2.6057, 2.1060, 1.0188, 2.3087, 2.0434, 1.9922, 2.3943],
device='cuda:0'), covar=tensor([0.0943, 0.0679, 0.1608, 0.1959, 0.1304, 0.2489, 0.2117, 0.0787],
device='cuda:0'), in_proj_covar=tensor([0.0167, 0.0187, 0.0196, 0.0178, 0.0206, 0.0207, 0.0220, 0.0192],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2023-04-03 17:33:40,207 INFO [decode.py:560] batch 300/?, cuts processed until now is 9574
2023-04-03 17:33:49,272 INFO [decode.py:560] batch 320/?, cuts processed until now is 10169
2023-04-03 17:33:57,944 INFO [decode.py:560] batch 340/?, cuts processed until now is 10810
2023-04-03 17:34:06,479 INFO [decode.py:560] batch 360/?, cuts processed until now is 11452
2023-04-03 17:34:14,029 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.5451, 2.5973, 2.1309, 1.0323, 2.3093, 2.0920, 1.9761, 2.4208],
device='cuda:0'), covar=tensor([0.0829, 0.0689, 0.1334, 0.1924, 0.1226, 0.2190, 0.2106, 0.0793],
device='cuda:0'), in_proj_covar=tensor([0.0167, 0.0187, 0.0196, 0.0178, 0.0206, 0.0207, 0.0220, 0.0192],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2023-04-03 17:34:14,816 INFO [decode.py:560] batch 380/?, cuts processed until now is 12133
2023-04-03 17:34:24,080 INFO [decode.py:560] batch 400/?, cuts processed until now is 12706
2023-04-03 17:34:33,187 INFO [decode.py:560] batch 420/?, cuts processed until now is 13299
2023-04-03 17:34:42,380 INFO [decode.py:560] batch 440/?, cuts processed until now is 13891
2023-04-03 17:34:51,250 INFO [decode.py:560] batch 460/?, cuts processed until now is 14515
2023-04-03 17:34:59,929 INFO [decode.py:560] batch 480/?, cuts processed until now is 15158
2023-04-03 17:35:08,659 INFO [decode.py:560] batch 500/?, cuts processed until now is 15743
2023-04-03 17:35:11,772 INFO [decode.py:576] The transcripts are stored in pruned_transducer_stateless7_streaming/exp1/fast_beam_search/recogs-test-cv-beam_20.0_max_contexts_8_max_states_64-epoch-29-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-04-03 17:35:12,013 INFO [utils.py:558] [test-cv-beam_20.0_max_contexts_8_max_states_64] %WER 10.25% [16082 / 156915, 1180 ins, 1721 del, 13181 sub ]
2023-04-03 17:35:12,601 INFO [decode.py:589] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp1/fast_beam_search/errs-test-cv-beam_20.0_max_contexts_8_max_states_64-epoch-29-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2023-04-03 17:35:12,601 INFO [decode.py:609]
For test-cv, WER of different settings are:
beam_20.0_max_contexts_8_max_states_64 10.25 best for test-cv
2023-04-03 17:35:12,601 INFO [decode.py:808] Done!
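[Editor's note] As a quick consistency check on the result reported above, the WER follows directly from the logged error counts (insertions + deletions + substitutions over reference words):

```python
# Numbers copied from the test-cv result line in the log.
ins, dels, subs = 1180, 1721, 13181
ref_words = 156915

errors = ins + dels + subs            # 16082
wer = 100.0 * errors / ref_words
print(f"{errors} errors / {ref_words} words = {wer:.2f}% WER")
# -> 16082 errors / 156915 words = 10.25% WER
```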