2022-05-26 15:07:31,886 INFO [train.py:906] (0/4) Training started
2022-05-26 15:07:31,889 INFO [train.py:916] (0/4) Device: cuda:0
2022-05-26 15:07:31,893 INFO [train.py:934] (0/4) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 512, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': 'ecfe7bd6d9189964bf3ff043038918d889a43185', 'k2-git-date': 'Tue May 10 10:57:55 2022', 'lhotse-version': '1.1.0', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'streaming-conformer', 'icefall-git-sha1': '364bccb-dirty', 'icefall-git-date': 'Thu May 26 10:29:08 2022', 'icefall-path': '/ceph-kw/kangwei/code/icefall_reworked2', 'k2-path': '/ceph-kw/kangwei/code/k2/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-hw/kangwei/dev_tools/anaconda3/envs/rnnt2/lib/python3.8/site-packages/lhotse-1.1.0-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-3-0307202051-57dc848959-8tmmp', 'IP address': '10.177.24.138'}, 'world_size': 4, 'master_port': 13498, 'tensorboard': True, 'num_epochs': 50, 'start_epoch': 2, 'start_batch': 0, 'exp_dir': PosixPath('streaming_pruned_transducer_stateless4/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'dynamic_chunk_training': True, 'causal_convolution': True, 'short_chunk_size': 32, 'num_left_chunks': 4, 'delay_penalty': 0.0, 'return_sym_delay': False, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 300, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'blank_id': 0, 'vocab_size': 500}
2022-05-26 15:07:31,893 INFO [train.py:936] (0/4) About to create model
2022-05-26 15:07:32,328 INFO [train.py:940] (0/4) Number of model parameters: 78648040
2022-05-26 15:07:32,565 INFO [checkpoint.py:112] (0/4) Loading checkpoint from streaming_pruned_transducer_stateless4/exp/epoch-1.pt
2022-05-26 15:07:40,046 INFO [checkpoint.py:131] (0/4) Loading averaged model
2022-05-26 15:07:44,407 INFO [train.py:955] (0/4) Using DDP
2022-05-26 15:07:44,729 INFO [train.py:963] (0/4) Loading optimizer state dict
2022-05-26 15:07:45,477 INFO [train.py:971] (0/4) Loading scheduler state dict
2022-05-26 15:07:45,478 INFO [asr_datamodule.py:391] (0/4) About to get train-clean-100 cuts
2022-05-26 15:07:52,348 INFO [asr_datamodule.py:398] (0/4) About to get train-clean-360 cuts
2022-05-26 15:08:18,286 INFO [asr_datamodule.py:405] (0/4) About to get train-other-500 cuts
2022-05-26 15:09:03,999 INFO [asr_datamodule.py:209] (0/4) Enable MUSAN
2022-05-26 15:09:03,999 INFO [asr_datamodule.py:210] (0/4) About to get Musan cuts
2022-05-26 15:09:05,436 INFO [asr_datamodule.py:238] (0/4) Enable SpecAugment
2022-05-26 15:09:05,436 INFO [asr_datamodule.py:239] (0/4) Time warp factor: 80
2022-05-26 15:09:05,437 INFO [asr_datamodule.py:251] (0/4) Num frame mask: 10
2022-05-26 15:09:05,437 INFO [asr_datamodule.py:264] (0/4) About to create train dataset
2022-05-26 15:09:05,437 INFO [asr_datamodule.py:292] (0/4) Using BucketingSampler.
2022-05-26 15:09:10,626 INFO [asr_datamodule.py:308] (0/4) About to create train dataloader
2022-05-26 15:09:10,627 INFO [asr_datamodule.py:412] (0/4) About to get dev-clean cuts
2022-05-26 15:09:10,909 INFO [asr_datamodule.py:417] (0/4) About to get dev-other cuts
2022-05-26 15:09:11,060 INFO [asr_datamodule.py:339] (0/4) About to create dev dataset
2022-05-26 15:09:11,072 INFO [asr_datamodule.py:358] (0/4) About to create dev dataloader
2022-05-26 15:09:11,073 INFO [train.py:1082] (0/4) Sanity check -- see if any of the batches in epoch 1 would cause OOM.
2022-05-26 15:09:21,055 INFO [distributed.py:874] (0/4) Reducer buckets have been rebuilt in this iteration.
2022-05-26 15:09:26,257 INFO [train.py:1023] (0/4) Loading grad scaler state dict
2022-05-26 15:09:38,906 INFO [train.py:842] (0/4) Epoch 2, batch 0, loss[loss=0.4737, simple_loss=0.4318, pruned_loss=0.2578, over 7160.00 frames.], tot_loss[loss=0.4737, simple_loss=0.4318, pruned_loss=0.2578, over 7160.00 frames.], batch size: 26, lr: 2.06e-03
2022-05-26 15:10:18,169 INFO [train.py:842] (0/4) Epoch 2, batch 50, loss[loss=0.3442, simple_loss=0.3908, pruned_loss=0.1488, over 7242.00 frames.], tot_loss[loss=0.3544, simple_loss=0.3867, pruned_loss=0.161, over 310714.08 frames.], batch size: 20, lr: 2.06e-03
2022-05-26 15:10:57,065 INFO [train.py:842] (0/4) Epoch 2, batch 100, loss[loss=0.3202, simple_loss=0.3633, pruned_loss=0.1385, over 7425.00 frames.], tot_loss[loss=0.3481, simple_loss=0.383, pruned_loss=0.1566, over 559080.38 frames.], batch size: 20, lr: 2.05e-03
2022-05-26 15:11:36,013 INFO [train.py:842] (0/4) Epoch 2, batch 150, loss[loss=0.3079, simple_loss=0.3561, pruned_loss=0.1298, over 7320.00 frames.], tot_loss[loss=0.3459, simple_loss=0.3814, pruned_loss=0.1552, over 750064.73 frames.], batch size: 20, lr: 2.05e-03
2022-05-26 15:12:14,710 INFO [train.py:842] (0/4) Epoch 2, batch 200, loss[loss=0.3457, simple_loss=0.3748, pruned_loss=0.1584, over 7163.00 frames.], tot_loss[loss=0.3421, simple_loss=0.3789, pruned_loss=0.1527, over 899600.05 frames.], batch size: 19, lr: 2.04e-03
2022-05-26 15:12:53,677 INFO [train.py:842] (0/4) Epoch 2, batch 250, loss[loss=0.3895, simple_loss=0.4085, pruned_loss=0.1852, over 7390.00 frames.], tot_loss[loss=0.3421, simple_loss=0.3792, pruned_loss=0.1525, over 1014665.10 frames.], batch size: 23, lr: 2.04e-03
2022-05-26 15:13:32,396 INFO [train.py:842] (0/4) Epoch 2, batch 300, loss[loss=0.2864, simple_loss=0.3384, pruned_loss=0.1173, over 7259.00 frames.], tot_loss[loss=0.3414, simple_loss=0.379, pruned_loss=0.1519, over 1103630.59 frames.], batch size: 19, lr: 2.03e-03
2022-05-26 15:14:11,455 INFO [train.py:842] (0/4) Epoch 2, batch 350, loss[loss=0.3421, simple_loss=0.3674, pruned_loss=0.1585, over 7225.00 frames.], tot_loss[loss=0.3354, simple_loss=0.3745, pruned_loss=0.1482, over 1172735.56 frames.], batch size: 21, lr: 2.03e-03
2022-05-26 15:14:50,189 INFO [train.py:842] (0/4) Epoch 2, batch 400, loss[loss=0.3738, simple_loss=0.3944, pruned_loss=0.1766, over 7135.00 frames.], tot_loss[loss=0.3361, simple_loss=0.3745, pruned_loss=0.1489, over 1229478.66 frames.], batch size: 20, lr: 2.03e-03
2022-05-26 15:15:28,836 INFO [train.py:842] (0/4) Epoch 2, batch 450, loss[loss=0.2757,
simple_loss=0.3307, pruned_loss=0.1103, over 7158.00 frames.], tot_loss[loss=0.3361, simple_loss=0.375, pruned_loss=0.1486, over 1274709.63 frames.], batch size: 19, lr: 2.02e-03 2022-05-26 15:16:07,189 INFO [train.py:842] (0/4) Epoch 2, batch 500, loss[loss=0.3031, simple_loss=0.3593, pruned_loss=0.1234, over 7170.00 frames.], tot_loss[loss=0.3351, simple_loss=0.3744, pruned_loss=0.1479, over 1306170.00 frames.], batch size: 18, lr: 2.02e-03 2022-05-26 15:16:46,575 INFO [train.py:842] (0/4) Epoch 2, batch 550, loss[loss=0.2826, simple_loss=0.3361, pruned_loss=0.1145, over 7362.00 frames.], tot_loss[loss=0.3339, simple_loss=0.3735, pruned_loss=0.1472, over 1332006.51 frames.], batch size: 19, lr: 2.01e-03 2022-05-26 15:17:25,157 INFO [train.py:842] (0/4) Epoch 2, batch 600, loss[loss=0.2752, simple_loss=0.3398, pruned_loss=0.1053, over 7370.00 frames.], tot_loss[loss=0.3368, simple_loss=0.3761, pruned_loss=0.1488, over 1353927.69 frames.], batch size: 23, lr: 2.01e-03 2022-05-26 15:18:04,053 INFO [train.py:842] (0/4) Epoch 2, batch 650, loss[loss=0.267, simple_loss=0.327, pruned_loss=0.1036, over 7272.00 frames.], tot_loss[loss=0.3346, simple_loss=0.3744, pruned_loss=0.1474, over 1368032.79 frames.], batch size: 18, lr: 2.01e-03 2022-05-26 15:18:42,824 INFO [train.py:842] (0/4) Epoch 2, batch 700, loss[loss=0.4374, simple_loss=0.435, pruned_loss=0.22, over 5187.00 frames.], tot_loss[loss=0.3319, simple_loss=0.3726, pruned_loss=0.1456, over 1379724.32 frames.], batch size: 52, lr: 2.00e-03 2022-05-26 15:19:21,648 INFO [train.py:842] (0/4) Epoch 2, batch 750, loss[loss=0.314, simple_loss=0.358, pruned_loss=0.135, over 7256.00 frames.], tot_loss[loss=0.3316, simple_loss=0.3723, pruned_loss=0.1455, over 1390727.13 frames.], batch size: 19, lr: 2.00e-03 2022-05-26 15:20:00,304 INFO [train.py:842] (0/4) Epoch 2, batch 800, loss[loss=0.2689, simple_loss=0.3276, pruned_loss=0.1051, over 7065.00 frames.], tot_loss[loss=0.333, simple_loss=0.3733, pruned_loss=0.1463, over 1400198.93 frames.], batch size: 18, lr: 1.99e-03 2022-05-26 15:20:39,201 INFO [train.py:842] (0/4) Epoch 2, batch 850, loss[loss=0.3978, simple_loss=0.405, pruned_loss=0.1954, over 7331.00 frames.], tot_loss[loss=0.3298, simple_loss=0.371, pruned_loss=0.1443, over 1408694.74 frames.], batch size: 20, lr: 1.99e-03 2022-05-26 15:21:17,858 INFO [train.py:842] (0/4) Epoch 2, batch 900, loss[loss=0.2941, simple_loss=0.3499, pruned_loss=0.1192, over 7441.00 frames.], tot_loss[loss=0.3317, simple_loss=0.3726, pruned_loss=0.1454, over 1413752.33 frames.], batch size: 20, lr: 1.99e-03 2022-05-26 15:21:56,746 INFO [train.py:842] (0/4) Epoch 2, batch 950, loss[loss=0.2428, simple_loss=0.3089, pruned_loss=0.08832, over 7253.00 frames.], tot_loss[loss=0.3311, simple_loss=0.372, pruned_loss=0.145, over 1415166.48 frames.], batch size: 19, lr: 1.98e-03 2022-05-26 15:22:35,377 INFO [train.py:842] (0/4) Epoch 2, batch 1000, loss[loss=0.3488, simple_loss=0.3989, pruned_loss=0.1494, over 6805.00 frames.], tot_loss[loss=0.329, simple_loss=0.3704, pruned_loss=0.1438, over 1416430.60 frames.], batch size: 31, lr: 1.98e-03 2022-05-26 15:23:14,143 INFO [train.py:842] (0/4) Epoch 2, batch 1050, loss[loss=0.3206, simple_loss=0.3575, pruned_loss=0.1419, over 7429.00 frames.], tot_loss[loss=0.3294, simple_loss=0.3706, pruned_loss=0.1441, over 1418576.63 frames.], batch size: 20, lr: 1.97e-03 2022-05-26 15:23:52,745 INFO [train.py:842] (0/4) Epoch 2, batch 1100, loss[loss=0.342, simple_loss=0.3792, pruned_loss=0.1525, over 7153.00 frames.], 
tot_loss[loss=0.3325, simple_loss=0.3733, pruned_loss=0.1459, over 1419396.12 frames.], batch size: 18, lr: 1.97e-03 2022-05-26 15:24:32,109 INFO [train.py:842] (0/4) Epoch 2, batch 1150, loss[loss=0.3467, simple_loss=0.3793, pruned_loss=0.157, over 7233.00 frames.], tot_loss[loss=0.3317, simple_loss=0.3727, pruned_loss=0.1453, over 1423282.03 frames.], batch size: 20, lr: 1.97e-03 2022-05-26 15:25:10,577 INFO [train.py:842] (0/4) Epoch 2, batch 1200, loss[loss=0.3793, simple_loss=0.4008, pruned_loss=0.1789, over 7054.00 frames.], tot_loss[loss=0.3303, simple_loss=0.3722, pruned_loss=0.1442, over 1422340.50 frames.], batch size: 28, lr: 1.96e-03 2022-05-26 15:25:49,491 INFO [train.py:842] (0/4) Epoch 2, batch 1250, loss[loss=0.2841, simple_loss=0.3279, pruned_loss=0.1201, over 7282.00 frames.], tot_loss[loss=0.3296, simple_loss=0.372, pruned_loss=0.1436, over 1422723.20 frames.], batch size: 18, lr: 1.96e-03 2022-05-26 15:26:28,033 INFO [train.py:842] (0/4) Epoch 2, batch 1300, loss[loss=0.3372, simple_loss=0.3861, pruned_loss=0.1441, over 7218.00 frames.], tot_loss[loss=0.3306, simple_loss=0.3727, pruned_loss=0.1442, over 1416720.90 frames.], batch size: 21, lr: 1.95e-03 2022-05-26 15:27:06,800 INFO [train.py:842] (0/4) Epoch 2, batch 1350, loss[loss=0.315, simple_loss=0.3422, pruned_loss=0.1439, over 7264.00 frames.], tot_loss[loss=0.3309, simple_loss=0.373, pruned_loss=0.1444, over 1420217.79 frames.], batch size: 17, lr: 1.95e-03 2022-05-26 15:27:45,155 INFO [train.py:842] (0/4) Epoch 2, batch 1400, loss[loss=0.3424, simple_loss=0.396, pruned_loss=0.1444, over 7224.00 frames.], tot_loss[loss=0.3315, simple_loss=0.3733, pruned_loss=0.1448, over 1419234.68 frames.], batch size: 21, lr: 1.95e-03 2022-05-26 15:28:24,275 INFO [train.py:842] (0/4) Epoch 2, batch 1450, loss[loss=0.3556, simple_loss=0.4045, pruned_loss=0.1533, over 7167.00 frames.], tot_loss[loss=0.3335, simple_loss=0.3744, pruned_loss=0.1463, over 1422230.83 frames.], batch size: 26, lr: 1.94e-03 2022-05-26 15:29:02,868 INFO [train.py:842] (0/4) Epoch 2, batch 1500, loss[loss=0.3542, simple_loss=0.3906, pruned_loss=0.1589, over 6488.00 frames.], tot_loss[loss=0.3324, simple_loss=0.3736, pruned_loss=0.1456, over 1421195.28 frames.], batch size: 38, lr: 1.94e-03 2022-05-26 15:29:41,837 INFO [train.py:842] (0/4) Epoch 2, batch 1550, loss[loss=0.3335, simple_loss=0.3749, pruned_loss=0.1461, over 7433.00 frames.], tot_loss[loss=0.3311, simple_loss=0.3726, pruned_loss=0.1448, over 1425137.46 frames.], batch size: 20, lr: 1.94e-03 2022-05-26 15:30:20,570 INFO [train.py:842] (0/4) Epoch 2, batch 1600, loss[loss=0.245, simple_loss=0.3035, pruned_loss=0.09324, over 7156.00 frames.], tot_loss[loss=0.3273, simple_loss=0.3696, pruned_loss=0.1425, over 1424882.25 frames.], batch size: 18, lr: 1.93e-03 2022-05-26 15:30:59,432 INFO [train.py:842] (0/4) Epoch 2, batch 1650, loss[loss=0.2983, simple_loss=0.3464, pruned_loss=0.1251, over 7423.00 frames.], tot_loss[loss=0.3251, simple_loss=0.3679, pruned_loss=0.1412, over 1424865.63 frames.], batch size: 20, lr: 1.93e-03 2022-05-26 15:31:38,107 INFO [train.py:842] (0/4) Epoch 2, batch 1700, loss[loss=0.3806, simple_loss=0.4202, pruned_loss=0.1705, over 7406.00 frames.], tot_loss[loss=0.3237, simple_loss=0.3671, pruned_loss=0.1402, over 1423789.21 frames.], batch size: 21, lr: 1.92e-03 2022-05-26 15:32:16,707 INFO [train.py:842] (0/4) Epoch 2, batch 1750, loss[loss=0.2803, simple_loss=0.3061, pruned_loss=0.1272, over 7278.00 frames.], tot_loss[loss=0.3244, simple_loss=0.3681, pruned_loss=0.1404, 
over 1423108.95 frames.], batch size: 18, lr: 1.92e-03 2022-05-26 15:32:55,310 INFO [train.py:842] (0/4) Epoch 2, batch 1800, loss[loss=0.3059, simple_loss=0.3467, pruned_loss=0.1325, over 7362.00 frames.], tot_loss[loss=0.3262, simple_loss=0.3692, pruned_loss=0.1416, over 1424568.56 frames.], batch size: 19, lr: 1.92e-03 2022-05-26 15:33:34,122 INFO [train.py:842] (0/4) Epoch 2, batch 1850, loss[loss=0.3299, simple_loss=0.3605, pruned_loss=0.1496, over 7328.00 frames.], tot_loss[loss=0.3226, simple_loss=0.3664, pruned_loss=0.1394, over 1424261.38 frames.], batch size: 20, lr: 1.91e-03 2022-05-26 15:34:12,754 INFO [train.py:842] (0/4) Epoch 2, batch 1900, loss[loss=0.3269, simple_loss=0.349, pruned_loss=0.1524, over 6999.00 frames.], tot_loss[loss=0.3228, simple_loss=0.3673, pruned_loss=0.1391, over 1427997.60 frames.], batch size: 16, lr: 1.91e-03 2022-05-26 15:34:51,936 INFO [train.py:842] (0/4) Epoch 2, batch 1950, loss[loss=0.3283, simple_loss=0.3613, pruned_loss=0.1476, over 7258.00 frames.], tot_loss[loss=0.3243, simple_loss=0.3683, pruned_loss=0.1402, over 1428479.21 frames.], batch size: 18, lr: 1.91e-03 2022-05-26 15:35:30,329 INFO [train.py:842] (0/4) Epoch 2, batch 2000, loss[loss=0.2936, simple_loss=0.3539, pruned_loss=0.1167, over 7118.00 frames.], tot_loss[loss=0.3274, simple_loss=0.3706, pruned_loss=0.1421, over 1421923.15 frames.], batch size: 21, lr: 1.90e-03 2022-05-26 15:36:09,247 INFO [train.py:842] (0/4) Epoch 2, batch 2050, loss[loss=0.3144, simple_loss=0.3593, pruned_loss=0.1348, over 7130.00 frames.], tot_loss[loss=0.3269, simple_loss=0.37, pruned_loss=0.1419, over 1423475.31 frames.], batch size: 28, lr: 1.90e-03 2022-05-26 15:36:48,126 INFO [train.py:842] (0/4) Epoch 2, batch 2100, loss[loss=0.3293, simple_loss=0.3578, pruned_loss=0.1504, over 7424.00 frames.], tot_loss[loss=0.3256, simple_loss=0.3691, pruned_loss=0.141, over 1424556.90 frames.], batch size: 18, lr: 1.90e-03 2022-05-26 15:37:27,071 INFO [train.py:842] (0/4) Epoch 2, batch 2150, loss[loss=0.3495, simple_loss=0.4023, pruned_loss=0.1483, over 7418.00 frames.], tot_loss[loss=0.3244, simple_loss=0.3681, pruned_loss=0.1404, over 1423310.72 frames.], batch size: 21, lr: 1.89e-03 2022-05-26 15:38:05,650 INFO [train.py:842] (0/4) Epoch 2, batch 2200, loss[loss=0.3548, simple_loss=0.3897, pruned_loss=0.1599, over 7115.00 frames.], tot_loss[loss=0.3223, simple_loss=0.3667, pruned_loss=0.1389, over 1423088.65 frames.], batch size: 21, lr: 1.89e-03 2022-05-26 15:38:44,472 INFO [train.py:842] (0/4) Epoch 2, batch 2250, loss[loss=0.3067, simple_loss=0.368, pruned_loss=0.1227, over 7224.00 frames.], tot_loss[loss=0.322, simple_loss=0.3666, pruned_loss=0.1387, over 1424413.58 frames.], batch size: 21, lr: 1.89e-03 2022-05-26 15:39:23,296 INFO [train.py:842] (0/4) Epoch 2, batch 2300, loss[loss=0.3285, simple_loss=0.3753, pruned_loss=0.1408, over 7209.00 frames.], tot_loss[loss=0.3232, simple_loss=0.3677, pruned_loss=0.1393, over 1424778.58 frames.], batch size: 22, lr: 1.88e-03 2022-05-26 15:40:02,235 INFO [train.py:842] (0/4) Epoch 2, batch 2350, loss[loss=0.3145, simple_loss=0.3699, pruned_loss=0.1296, over 7224.00 frames.], tot_loss[loss=0.3234, simple_loss=0.368, pruned_loss=0.1394, over 1422216.39 frames.], batch size: 20, lr: 1.88e-03 2022-05-26 15:40:40,745 INFO [train.py:842] (0/4) Epoch 2, batch 2400, loss[loss=0.3443, simple_loss=0.3909, pruned_loss=0.1488, over 7311.00 frames.], tot_loss[loss=0.324, simple_loss=0.3683, pruned_loss=0.1398, over 1422520.67 frames.], batch size: 21, lr: 1.87e-03 2022-05-26 
15:41:19,575 INFO [train.py:842] (0/4) Epoch 2, batch 2450, loss[loss=0.3159, simple_loss=0.364, pruned_loss=0.134, over 7308.00 frames.], tot_loss[loss=0.3252, simple_loss=0.3695, pruned_loss=0.1404, over 1426702.09 frames.], batch size: 21, lr: 1.87e-03 2022-05-26 15:41:58,100 INFO [train.py:842] (0/4) Epoch 2, batch 2500, loss[loss=0.3357, simple_loss=0.3925, pruned_loss=0.1395, over 7133.00 frames.], tot_loss[loss=0.3221, simple_loss=0.3678, pruned_loss=0.1382, over 1427509.35 frames.], batch size: 26, lr: 1.87e-03 2022-05-26 15:42:36,842 INFO [train.py:842] (0/4) Epoch 2, batch 2550, loss[loss=0.2627, simple_loss=0.3005, pruned_loss=0.1125, over 7013.00 frames.], tot_loss[loss=0.3228, simple_loss=0.3679, pruned_loss=0.1389, over 1427511.94 frames.], batch size: 16, lr: 1.86e-03 2022-05-26 15:43:15,429 INFO [train.py:842] (0/4) Epoch 2, batch 2600, loss[loss=0.3292, simple_loss=0.3787, pruned_loss=0.1399, over 7180.00 frames.], tot_loss[loss=0.3202, simple_loss=0.3656, pruned_loss=0.1373, over 1429259.73 frames.], batch size: 26, lr: 1.86e-03 2022-05-26 15:43:54,094 INFO [train.py:842] (0/4) Epoch 2, batch 2650, loss[loss=0.3179, simple_loss=0.3615, pruned_loss=0.1372, over 6317.00 frames.], tot_loss[loss=0.3183, simple_loss=0.3646, pruned_loss=0.136, over 1427139.37 frames.], batch size: 37, lr: 1.86e-03 2022-05-26 15:44:32,725 INFO [train.py:842] (0/4) Epoch 2, batch 2700, loss[loss=0.4014, simple_loss=0.4246, pruned_loss=0.1891, over 6739.00 frames.], tot_loss[loss=0.3184, simple_loss=0.3647, pruned_loss=0.1361, over 1426621.94 frames.], batch size: 31, lr: 1.85e-03 2022-05-26 15:45:11,857 INFO [train.py:842] (0/4) Epoch 2, batch 2750, loss[loss=0.343, simple_loss=0.3857, pruned_loss=0.1501, over 7300.00 frames.], tot_loss[loss=0.3184, simple_loss=0.3646, pruned_loss=0.1361, over 1423536.71 frames.], batch size: 24, lr: 1.85e-03 2022-05-26 15:45:50,269 INFO [train.py:842] (0/4) Epoch 2, batch 2800, loss[loss=0.3227, simple_loss=0.3749, pruned_loss=0.1353, over 7206.00 frames.], tot_loss[loss=0.3179, simple_loss=0.3643, pruned_loss=0.1358, over 1426325.48 frames.], batch size: 23, lr: 1.85e-03 2022-05-26 15:46:29,149 INFO [train.py:842] (0/4) Epoch 2, batch 2850, loss[loss=0.2887, simple_loss=0.3456, pruned_loss=0.1159, over 7282.00 frames.], tot_loss[loss=0.3189, simple_loss=0.3648, pruned_loss=0.1365, over 1426213.49 frames.], batch size: 24, lr: 1.84e-03 2022-05-26 15:47:07,589 INFO [train.py:842] (0/4) Epoch 2, batch 2900, loss[loss=0.3473, simple_loss=0.3822, pruned_loss=0.1562, over 7241.00 frames.], tot_loss[loss=0.3203, simple_loss=0.3659, pruned_loss=0.1373, over 1420931.73 frames.], batch size: 20, lr: 1.84e-03 2022-05-26 15:47:46,390 INFO [train.py:842] (0/4) Epoch 2, batch 2950, loss[loss=0.3227, simple_loss=0.3732, pruned_loss=0.1361, over 7228.00 frames.], tot_loss[loss=0.3199, simple_loss=0.3658, pruned_loss=0.137, over 1422018.71 frames.], batch size: 20, lr: 1.84e-03 2022-05-26 15:48:24,992 INFO [train.py:842] (0/4) Epoch 2, batch 3000, loss[loss=0.2962, simple_loss=0.3401, pruned_loss=0.1262, over 7281.00 frames.], tot_loss[loss=0.3198, simple_loss=0.3657, pruned_loss=0.137, over 1425366.07 frames.], batch size: 17, lr: 1.84e-03 2022-05-26 15:48:24,993 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 15:48:34,574 INFO [train.py:871] (0/4) Epoch 2, validation: loss=0.2365, simple_loss=0.3276, pruned_loss=0.07266, over 868885.00 frames. 
2022-05-26 15:49:14,140 INFO [train.py:842] (0/4) Epoch 2, batch 3050, loss[loss=0.2533, simple_loss=0.3032, pruned_loss=0.1016, over 7277.00 frames.], tot_loss[loss=0.3188, simple_loss=0.3652, pruned_loss=0.1362, over 1421208.21 frames.], batch size: 18, lr: 1.83e-03 2022-05-26 15:49:52,543 INFO [train.py:842] (0/4) Epoch 2, batch 3100, loss[loss=0.4614, simple_loss=0.4685, pruned_loss=0.2272, over 5364.00 frames.], tot_loss[loss=0.3185, simple_loss=0.3654, pruned_loss=0.1358, over 1421456.21 frames.], batch size: 52, lr: 1.83e-03 2022-05-26 15:50:31,850 INFO [train.py:842] (0/4) Epoch 2, batch 3150, loss[loss=0.2472, simple_loss=0.3005, pruned_loss=0.09696, over 6788.00 frames.], tot_loss[loss=0.3163, simple_loss=0.3637, pruned_loss=0.1344, over 1423299.55 frames.], batch size: 15, lr: 1.83e-03 2022-05-26 15:51:10,289 INFO [train.py:842] (0/4) Epoch 2, batch 3200, loss[loss=0.3815, simple_loss=0.4151, pruned_loss=0.174, over 4839.00 frames.], tot_loss[loss=0.3217, simple_loss=0.3677, pruned_loss=0.1378, over 1412342.71 frames.], batch size: 53, lr: 1.82e-03 2022-05-26 15:51:49,106 INFO [train.py:842] (0/4) Epoch 2, batch 3250, loss[loss=0.3049, simple_loss=0.3609, pruned_loss=0.1244, over 7204.00 frames.], tot_loss[loss=0.3226, simple_loss=0.3682, pruned_loss=0.1385, over 1415061.01 frames.], batch size: 23, lr: 1.82e-03 2022-05-26 15:52:27,634 INFO [train.py:842] (0/4) Epoch 2, batch 3300, loss[loss=0.3935, simple_loss=0.4172, pruned_loss=0.185, over 7209.00 frames.], tot_loss[loss=0.3188, simple_loss=0.3651, pruned_loss=0.1363, over 1419727.52 frames.], batch size: 22, lr: 1.82e-03 2022-05-26 15:53:06,344 INFO [train.py:842] (0/4) Epoch 2, batch 3350, loss[loss=0.3411, simple_loss=0.3735, pruned_loss=0.1543, over 7191.00 frames.], tot_loss[loss=0.3195, simple_loss=0.3661, pruned_loss=0.1364, over 1422736.40 frames.], batch size: 26, lr: 1.81e-03 2022-05-26 15:53:45,047 INFO [train.py:842] (0/4) Epoch 2, batch 3400, loss[loss=0.2484, simple_loss=0.3052, pruned_loss=0.09584, over 7156.00 frames.], tot_loss[loss=0.3191, simple_loss=0.3658, pruned_loss=0.1362, over 1424812.81 frames.], batch size: 17, lr: 1.81e-03 2022-05-26 15:54:23,674 INFO [train.py:842] (0/4) Epoch 2, batch 3450, loss[loss=0.3129, simple_loss=0.3709, pruned_loss=0.1275, over 7295.00 frames.], tot_loss[loss=0.3184, simple_loss=0.3657, pruned_loss=0.1355, over 1426655.71 frames.], batch size: 24, lr: 1.81e-03 2022-05-26 15:55:02,062 INFO [train.py:842] (0/4) Epoch 2, batch 3500, loss[loss=0.2722, simple_loss=0.3331, pruned_loss=0.1057, over 6273.00 frames.], tot_loss[loss=0.3187, simple_loss=0.3655, pruned_loss=0.136, over 1423986.62 frames.], batch size: 37, lr: 1.80e-03 2022-05-26 15:55:40,849 INFO [train.py:842] (0/4) Epoch 2, batch 3550, loss[loss=0.3614, simple_loss=0.4097, pruned_loss=0.1565, over 7312.00 frames.], tot_loss[loss=0.3187, simple_loss=0.3659, pruned_loss=0.1357, over 1424415.13 frames.], batch size: 25, lr: 1.80e-03 2022-05-26 15:56:19,286 INFO [train.py:842] (0/4) Epoch 2, batch 3600, loss[loss=0.3613, simple_loss=0.3847, pruned_loss=0.169, over 7230.00 frames.], tot_loss[loss=0.3177, simple_loss=0.3655, pruned_loss=0.135, over 1425592.74 frames.], batch size: 20, lr: 1.80e-03 2022-05-26 15:56:58,175 INFO [train.py:842] (0/4) Epoch 2, batch 3650, loss[loss=0.342, simple_loss=0.367, pruned_loss=0.1584, over 7196.00 frames.], tot_loss[loss=0.3176, simple_loss=0.3652, pruned_loss=0.135, over 1427878.45 frames.], batch size: 16, lr: 1.79e-03 2022-05-26 15:57:36,613 INFO [train.py:842] (0/4) Epoch 2, 
batch 3700, loss[loss=0.2554, simple_loss=0.3194, pruned_loss=0.09565, over 7156.00 frames.], tot_loss[loss=0.3166, simple_loss=0.3655, pruned_loss=0.1338, over 1429803.89 frames.], batch size: 19, lr: 1.79e-03 2022-05-26 15:58:15,382 INFO [train.py:842] (0/4) Epoch 2, batch 3750, loss[loss=0.2904, simple_loss=0.3469, pruned_loss=0.1169, over 7275.00 frames.], tot_loss[loss=0.3154, simple_loss=0.3646, pruned_loss=0.1331, over 1430441.22 frames.], batch size: 24, lr: 1.79e-03 2022-05-26 15:58:54,100 INFO [train.py:842] (0/4) Epoch 2, batch 3800, loss[loss=0.2556, simple_loss=0.3132, pruned_loss=0.09905, over 6983.00 frames.], tot_loss[loss=0.3153, simple_loss=0.3643, pruned_loss=0.1332, over 1431455.57 frames.], batch size: 16, lr: 1.79e-03 2022-05-26 15:59:32,919 INFO [train.py:842] (0/4) Epoch 2, batch 3850, loss[loss=0.3067, simple_loss=0.3572, pruned_loss=0.1281, over 7194.00 frames.], tot_loss[loss=0.3135, simple_loss=0.3632, pruned_loss=0.1319, over 1431685.81 frames.], batch size: 22, lr: 1.78e-03 2022-05-26 16:00:11,544 INFO [train.py:842] (0/4) Epoch 2, batch 3900, loss[loss=0.3512, simple_loss=0.3861, pruned_loss=0.1581, over 6668.00 frames.], tot_loss[loss=0.3125, simple_loss=0.3624, pruned_loss=0.1313, over 1434161.73 frames.], batch size: 38, lr: 1.78e-03 2022-05-26 16:00:50,504 INFO [train.py:842] (0/4) Epoch 2, batch 3950, loss[loss=0.3892, simple_loss=0.4258, pruned_loss=0.1763, over 7311.00 frames.], tot_loss[loss=0.3146, simple_loss=0.3637, pruned_loss=0.1328, over 1432239.67 frames.], batch size: 21, lr: 1.78e-03 2022-05-26 16:01:29,037 INFO [train.py:842] (0/4) Epoch 2, batch 4000, loss[loss=0.3878, simple_loss=0.4092, pruned_loss=0.1832, over 4998.00 frames.], tot_loss[loss=0.314, simple_loss=0.3631, pruned_loss=0.1324, over 1432240.47 frames.], batch size: 52, lr: 1.77e-03 2022-05-26 16:02:07,538 INFO [train.py:842] (0/4) Epoch 2, batch 4050, loss[loss=0.3174, simple_loss=0.3681, pruned_loss=0.1333, over 6865.00 frames.], tot_loss[loss=0.3149, simple_loss=0.3642, pruned_loss=0.1328, over 1427768.06 frames.], batch size: 31, lr: 1.77e-03 2022-05-26 16:02:46,185 INFO [train.py:842] (0/4) Epoch 2, batch 4100, loss[loss=0.3382, simple_loss=0.376, pruned_loss=0.1502, over 7122.00 frames.], tot_loss[loss=0.3171, simple_loss=0.3657, pruned_loss=0.1343, over 1429518.69 frames.], batch size: 28, lr: 1.77e-03 2022-05-26 16:03:25,053 INFO [train.py:842] (0/4) Epoch 2, batch 4150, loss[loss=0.3712, simple_loss=0.4109, pruned_loss=0.1658, over 7134.00 frames.], tot_loss[loss=0.3162, simple_loss=0.3648, pruned_loss=0.1339, over 1427396.59 frames.], batch size: 26, lr: 1.76e-03 2022-05-26 16:04:03,590 INFO [train.py:842] (0/4) Epoch 2, batch 4200, loss[loss=0.2562, simple_loss=0.3181, pruned_loss=0.09712, over 7005.00 frames.], tot_loss[loss=0.3158, simple_loss=0.3648, pruned_loss=0.1334, over 1425642.80 frames.], batch size: 16, lr: 1.76e-03 2022-05-26 16:04:42,388 INFO [train.py:842] (0/4) Epoch 2, batch 4250, loss[loss=0.263, simple_loss=0.324, pruned_loss=0.101, over 7215.00 frames.], tot_loss[loss=0.3156, simple_loss=0.3647, pruned_loss=0.1333, over 1424008.27 frames.], batch size: 22, lr: 1.76e-03 2022-05-26 16:05:21,025 INFO [train.py:842] (0/4) Epoch 2, batch 4300, loss[loss=0.2913, simple_loss=0.3495, pruned_loss=0.1165, over 7329.00 frames.], tot_loss[loss=0.3135, simple_loss=0.3628, pruned_loss=0.1321, over 1426063.30 frames.], batch size: 22, lr: 1.76e-03 2022-05-26 16:05:59,680 INFO [train.py:842] (0/4) Epoch 2, batch 4350, loss[loss=0.3016, simple_loss=0.3504, 
pruned_loss=0.1264, over 7163.00 frames.], tot_loss[loss=0.3121, simple_loss=0.362, pruned_loss=0.1311, over 1422688.45 frames.], batch size: 19, lr: 1.75e-03 2022-05-26 16:06:38,243 INFO [train.py:842] (0/4) Epoch 2, batch 4400, loss[loss=0.3035, simple_loss=0.3536, pruned_loss=0.1267, over 7270.00 frames.], tot_loss[loss=0.3119, simple_loss=0.3617, pruned_loss=0.131, over 1423356.02 frames.], batch size: 24, lr: 1.75e-03 2022-05-26 16:07:17,566 INFO [train.py:842] (0/4) Epoch 2, batch 4450, loss[loss=0.33, simple_loss=0.3632, pruned_loss=0.1484, over 7409.00 frames.], tot_loss[loss=0.3102, simple_loss=0.3606, pruned_loss=0.1299, over 1423661.71 frames.], batch size: 18, lr: 1.75e-03 2022-05-26 16:07:56,067 INFO [train.py:842] (0/4) Epoch 2, batch 4500, loss[loss=0.3198, simple_loss=0.3689, pruned_loss=0.1353, over 7333.00 frames.], tot_loss[loss=0.3125, simple_loss=0.362, pruned_loss=0.1315, over 1425320.65 frames.], batch size: 20, lr: 1.74e-03 2022-05-26 16:08:34,946 INFO [train.py:842] (0/4) Epoch 2, batch 4550, loss[loss=0.3689, simple_loss=0.4184, pruned_loss=0.1597, over 7275.00 frames.], tot_loss[loss=0.3123, simple_loss=0.3621, pruned_loss=0.1313, over 1426037.82 frames.], batch size: 18, lr: 1.74e-03 2022-05-26 16:09:13,336 INFO [train.py:842] (0/4) Epoch 2, batch 4600, loss[loss=0.2808, simple_loss=0.3508, pruned_loss=0.1054, over 7201.00 frames.], tot_loss[loss=0.3139, simple_loss=0.3634, pruned_loss=0.1322, over 1421071.45 frames.], batch size: 22, lr: 1.74e-03 2022-05-26 16:09:52,097 INFO [train.py:842] (0/4) Epoch 2, batch 4650, loss[loss=0.3683, simple_loss=0.411, pruned_loss=0.1627, over 7265.00 frames.], tot_loss[loss=0.3131, simple_loss=0.3631, pruned_loss=0.1316, over 1424222.51 frames.], batch size: 25, lr: 1.74e-03 2022-05-26 16:10:30,647 INFO [train.py:842] (0/4) Epoch 2, batch 4700, loss[loss=0.4107, simple_loss=0.4331, pruned_loss=0.1942, over 7318.00 frames.], tot_loss[loss=0.3122, simple_loss=0.3622, pruned_loss=0.1311, over 1424918.93 frames.], batch size: 21, lr: 1.73e-03 2022-05-26 16:11:09,332 INFO [train.py:842] (0/4) Epoch 2, batch 4750, loss[loss=0.3692, simple_loss=0.399, pruned_loss=0.1697, over 7410.00 frames.], tot_loss[loss=0.3144, simple_loss=0.3633, pruned_loss=0.1328, over 1417559.00 frames.], batch size: 21, lr: 1.73e-03 2022-05-26 16:11:47,766 INFO [train.py:842] (0/4) Epoch 2, batch 4800, loss[loss=0.3205, simple_loss=0.3775, pruned_loss=0.1317, over 7301.00 frames.], tot_loss[loss=0.3149, simple_loss=0.364, pruned_loss=0.1329, over 1415849.02 frames.], batch size: 24, lr: 1.73e-03 2022-05-26 16:12:26,436 INFO [train.py:842] (0/4) Epoch 2, batch 4850, loss[loss=0.2902, simple_loss=0.3348, pruned_loss=0.1228, over 7150.00 frames.], tot_loss[loss=0.3159, simple_loss=0.3646, pruned_loss=0.1336, over 1415946.88 frames.], batch size: 18, lr: 1.73e-03 2022-05-26 16:13:04,910 INFO [train.py:842] (0/4) Epoch 2, batch 4900, loss[loss=0.2347, simple_loss=0.3051, pruned_loss=0.08218, over 7276.00 frames.], tot_loss[loss=0.3138, simple_loss=0.3632, pruned_loss=0.1323, over 1418653.37 frames.], batch size: 17, lr: 1.72e-03 2022-05-26 16:13:43,580 INFO [train.py:842] (0/4) Epoch 2, batch 4950, loss[loss=0.3246, simple_loss=0.389, pruned_loss=0.1301, over 7240.00 frames.], tot_loss[loss=0.3114, simple_loss=0.3614, pruned_loss=0.1308, over 1421182.04 frames.], batch size: 20, lr: 1.72e-03 2022-05-26 16:14:22,291 INFO [train.py:842] (0/4) Epoch 2, batch 5000, loss[loss=0.3436, simple_loss=0.376, pruned_loss=0.1556, over 7285.00 frames.], tot_loss[loss=0.3123, 
simple_loss=0.3619, pruned_loss=0.1314, over 1423476.22 frames.], batch size: 17, lr: 1.72e-03 2022-05-26 16:15:00,733 INFO [train.py:842] (0/4) Epoch 2, batch 5050, loss[loss=0.322, simple_loss=0.379, pruned_loss=0.1325, over 7414.00 frames.], tot_loss[loss=0.3146, simple_loss=0.3638, pruned_loss=0.1327, over 1417393.53 frames.], batch size: 21, lr: 1.71e-03 2022-05-26 16:15:39,316 INFO [train.py:842] (0/4) Epoch 2, batch 5100, loss[loss=0.2525, simple_loss=0.3225, pruned_loss=0.09126, over 7158.00 frames.], tot_loss[loss=0.3107, simple_loss=0.3612, pruned_loss=0.1301, over 1419448.24 frames.], batch size: 19, lr: 1.71e-03 2022-05-26 16:16:18,376 INFO [train.py:842] (0/4) Epoch 2, batch 5150, loss[loss=0.3999, simple_loss=0.4341, pruned_loss=0.1828, over 7222.00 frames.], tot_loss[loss=0.3115, simple_loss=0.3618, pruned_loss=0.1306, over 1421226.94 frames.], batch size: 21, lr: 1.71e-03 2022-05-26 16:16:56,911 INFO [train.py:842] (0/4) Epoch 2, batch 5200, loss[loss=0.3872, simple_loss=0.4115, pruned_loss=0.1815, over 7283.00 frames.], tot_loss[loss=0.3117, simple_loss=0.3621, pruned_loss=0.1306, over 1422589.00 frames.], batch size: 25, lr: 1.71e-03 2022-05-26 16:17:35,687 INFO [train.py:842] (0/4) Epoch 2, batch 5250, loss[loss=0.3569, simple_loss=0.4045, pruned_loss=0.1546, over 6952.00 frames.], tot_loss[loss=0.3111, simple_loss=0.3616, pruned_loss=0.1304, over 1424991.62 frames.], batch size: 32, lr: 1.70e-03 2022-05-26 16:18:14,242 INFO [train.py:842] (0/4) Epoch 2, batch 5300, loss[loss=0.2701, simple_loss=0.3392, pruned_loss=0.1005, over 7373.00 frames.], tot_loss[loss=0.3102, simple_loss=0.3609, pruned_loss=0.1297, over 1421943.71 frames.], batch size: 23, lr: 1.70e-03 2022-05-26 16:18:53,028 INFO [train.py:842] (0/4) Epoch 2, batch 5350, loss[loss=0.2719, simple_loss=0.3225, pruned_loss=0.1107, over 7353.00 frames.], tot_loss[loss=0.307, simple_loss=0.3583, pruned_loss=0.1279, over 1419169.17 frames.], batch size: 19, lr: 1.70e-03 2022-05-26 16:19:31,682 INFO [train.py:842] (0/4) Epoch 2, batch 5400, loss[loss=0.3349, simple_loss=0.3836, pruned_loss=0.143, over 6296.00 frames.], tot_loss[loss=0.3062, simple_loss=0.3576, pruned_loss=0.1273, over 1419700.28 frames.], batch size: 37, lr: 1.70e-03 2022-05-26 16:20:10,826 INFO [train.py:842] (0/4) Epoch 2, batch 5450, loss[loss=0.3306, simple_loss=0.3569, pruned_loss=0.1521, over 6799.00 frames.], tot_loss[loss=0.305, simple_loss=0.3566, pruned_loss=0.1267, over 1421266.45 frames.], batch size: 15, lr: 1.69e-03 2022-05-26 16:20:49,326 INFO [train.py:842] (0/4) Epoch 2, batch 5500, loss[loss=0.255, simple_loss=0.3243, pruned_loss=0.0929, over 7131.00 frames.], tot_loss[loss=0.3066, simple_loss=0.3572, pruned_loss=0.128, over 1422566.08 frames.], batch size: 17, lr: 1.69e-03 2022-05-26 16:21:28,406 INFO [train.py:842] (0/4) Epoch 2, batch 5550, loss[loss=0.2618, simple_loss=0.3118, pruned_loss=0.1059, over 6992.00 frames.], tot_loss[loss=0.3046, simple_loss=0.3558, pruned_loss=0.1267, over 1422978.52 frames.], batch size: 16, lr: 1.69e-03 2022-05-26 16:22:06,833 INFO [train.py:842] (0/4) Epoch 2, batch 5600, loss[loss=0.3054, simple_loss=0.3618, pruned_loss=0.1245, over 7286.00 frames.], tot_loss[loss=0.3047, simple_loss=0.3561, pruned_loss=0.1267, over 1423625.70 frames.], batch size: 24, lr: 1.69e-03 2022-05-26 16:22:45,454 INFO [train.py:842] (0/4) Epoch 2, batch 5650, loss[loss=0.2994, simple_loss=0.3552, pruned_loss=0.1217, over 7227.00 frames.], tot_loss[loss=0.306, simple_loss=0.3573, pruned_loss=0.1273, over 1424675.85 
frames.], batch size: 23, lr: 1.68e-03 2022-05-26 16:23:24,159 INFO [train.py:842] (0/4) Epoch 2, batch 5700, loss[loss=0.2184, simple_loss=0.2904, pruned_loss=0.07324, over 7277.00 frames.], tot_loss[loss=0.3042, simple_loss=0.3559, pruned_loss=0.1262, over 1424371.95 frames.], batch size: 18, lr: 1.68e-03 2022-05-26 16:24:03,273 INFO [train.py:842] (0/4) Epoch 2, batch 5750, loss[loss=0.353, simple_loss=0.3983, pruned_loss=0.1538, over 7306.00 frames.], tot_loss[loss=0.3065, simple_loss=0.358, pruned_loss=0.1275, over 1422667.85 frames.], batch size: 21, lr: 1.68e-03 2022-05-26 16:24:41,859 INFO [train.py:842] (0/4) Epoch 2, batch 5800, loss[loss=0.2748, simple_loss=0.352, pruned_loss=0.09879, over 7145.00 frames.], tot_loss[loss=0.3073, simple_loss=0.3587, pruned_loss=0.1279, over 1426674.19 frames.], batch size: 26, lr: 1.68e-03 2022-05-26 16:25:20,739 INFO [train.py:842] (0/4) Epoch 2, batch 5850, loss[loss=0.3816, simple_loss=0.4117, pruned_loss=0.1757, over 7405.00 frames.], tot_loss[loss=0.3063, simple_loss=0.3581, pruned_loss=0.1273, over 1422870.48 frames.], batch size: 21, lr: 1.67e-03 2022-05-26 16:26:08,846 INFO [train.py:842] (0/4) Epoch 2, batch 5900, loss[loss=0.2944, simple_loss=0.3327, pruned_loss=0.128, over 7282.00 frames.], tot_loss[loss=0.3034, simple_loss=0.3559, pruned_loss=0.1255, over 1424640.98 frames.], batch size: 17, lr: 1.67e-03 2022-05-26 16:26:47,917 INFO [train.py:842] (0/4) Epoch 2, batch 5950, loss[loss=0.3739, simple_loss=0.422, pruned_loss=0.1629, over 7200.00 frames.], tot_loss[loss=0.3029, simple_loss=0.3558, pruned_loss=0.125, over 1423781.91 frames.], batch size: 22, lr: 1.67e-03 2022-05-26 16:27:26,494 INFO [train.py:842] (0/4) Epoch 2, batch 6000, loss[loss=0.2733, simple_loss=0.3381, pruned_loss=0.1043, over 7405.00 frames.], tot_loss[loss=0.3015, simple_loss=0.3549, pruned_loss=0.124, over 1420434.44 frames.], batch size: 21, lr: 1.67e-03 2022-05-26 16:27:26,495 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 16:27:35,906 INFO [train.py:871] (0/4) Epoch 2, validation: loss=0.2259, simple_loss=0.3198, pruned_loss=0.06603, over 868885.00 frames. 
2022-05-26 16:28:15,170 INFO [train.py:842] (0/4) Epoch 2, batch 6050, loss[loss=0.3091, simple_loss=0.362, pruned_loss=0.1281, over 7200.00 frames.], tot_loss[loss=0.3012, simple_loss=0.3546, pruned_loss=0.1239, over 1424145.28 frames.], batch size: 23, lr: 1.66e-03 2022-05-26 16:28:53,622 INFO [train.py:842] (0/4) Epoch 2, batch 6100, loss[loss=0.2641, simple_loss=0.3267, pruned_loss=0.1008, over 7360.00 frames.], tot_loss[loss=0.3012, simple_loss=0.3543, pruned_loss=0.1241, over 1426710.01 frames.], batch size: 23, lr: 1.66e-03 2022-05-26 16:29:42,239 INFO [train.py:842] (0/4) Epoch 2, batch 6150, loss[loss=0.315, simple_loss=0.3679, pruned_loss=0.1311, over 6988.00 frames.], tot_loss[loss=0.3008, simple_loss=0.3543, pruned_loss=0.1236, over 1426576.95 frames.], batch size: 28, lr: 1.66e-03 2022-05-26 16:30:39,575 INFO [train.py:842] (0/4) Epoch 2, batch 6200, loss[loss=0.3499, simple_loss=0.3993, pruned_loss=0.1503, over 6726.00 frames.], tot_loss[loss=0.3032, simple_loss=0.3564, pruned_loss=0.125, over 1424669.59 frames.], batch size: 31, lr: 1.66e-03 2022-05-26 16:31:18,943 INFO [train.py:842] (0/4) Epoch 2, batch 6250, loss[loss=0.2408, simple_loss=0.3186, pruned_loss=0.08147, over 7109.00 frames.], tot_loss[loss=0.3028, simple_loss=0.3555, pruned_loss=0.125, over 1427991.60 frames.], batch size: 21, lr: 1.65e-03 2022-05-26 16:31:57,565 INFO [train.py:842] (0/4) Epoch 2, batch 6300, loss[loss=0.3305, simple_loss=0.3753, pruned_loss=0.1428, over 7149.00 frames.], tot_loss[loss=0.3024, simple_loss=0.3551, pruned_loss=0.1249, over 1432149.10 frames.], batch size: 26, lr: 1.65e-03 2022-05-26 16:32:36,115 INFO [train.py:842] (0/4) Epoch 2, batch 6350, loss[loss=0.3801, simple_loss=0.4143, pruned_loss=0.173, over 6481.00 frames.], tot_loss[loss=0.3059, simple_loss=0.3578, pruned_loss=0.127, over 1430266.23 frames.], batch size: 38, lr: 1.65e-03 2022-05-26 16:33:14,716 INFO [train.py:842] (0/4) Epoch 2, batch 6400, loss[loss=0.3206, simple_loss=0.369, pruned_loss=0.1361, over 6740.00 frames.], tot_loss[loss=0.3061, simple_loss=0.3577, pruned_loss=0.1273, over 1425780.11 frames.], batch size: 31, lr: 1.65e-03 2022-05-26 16:33:53,450 INFO [train.py:842] (0/4) Epoch 2, batch 6450, loss[loss=0.2356, simple_loss=0.2944, pruned_loss=0.08835, over 7417.00 frames.], tot_loss[loss=0.3062, simple_loss=0.3577, pruned_loss=0.1274, over 1425575.17 frames.], batch size: 18, lr: 1.64e-03 2022-05-26 16:34:31,948 INFO [train.py:842] (0/4) Epoch 2, batch 6500, loss[loss=0.2594, simple_loss=0.3363, pruned_loss=0.09118, over 7201.00 frames.], tot_loss[loss=0.3046, simple_loss=0.3563, pruned_loss=0.1265, over 1424325.44 frames.], batch size: 22, lr: 1.64e-03 2022-05-26 16:35:10,758 INFO [train.py:842] (0/4) Epoch 2, batch 6550, loss[loss=0.2739, simple_loss=0.3292, pruned_loss=0.1093, over 7453.00 frames.], tot_loss[loss=0.3048, simple_loss=0.3564, pruned_loss=0.1266, over 1422313.58 frames.], batch size: 19, lr: 1.64e-03 2022-05-26 16:35:49,285 INFO [train.py:842] (0/4) Epoch 2, batch 6600, loss[loss=0.2481, simple_loss=0.3089, pruned_loss=0.09365, over 7268.00 frames.], tot_loss[loss=0.3029, simple_loss=0.3546, pruned_loss=0.1256, over 1421964.36 frames.], batch size: 18, lr: 1.64e-03 2022-05-26 16:36:27,772 INFO [train.py:842] (0/4) Epoch 2, batch 6650, loss[loss=0.2511, simple_loss=0.3231, pruned_loss=0.0895, over 7211.00 frames.], tot_loss[loss=0.3042, simple_loss=0.3557, pruned_loss=0.1264, over 1414573.48 frames.], batch size: 23, lr: 1.63e-03 2022-05-26 16:37:06,262 INFO [train.py:842] (0/4) Epoch 2, 
batch 6700, loss[loss=0.3207, simple_loss=0.3516, pruned_loss=0.1449, over 7276.00 frames.], tot_loss[loss=0.3037, simple_loss=0.3557, pruned_loss=0.1259, over 1419701.40 frames.], batch size: 17, lr: 1.63e-03 2022-05-26 16:37:45,011 INFO [train.py:842] (0/4) Epoch 2, batch 6750, loss[loss=0.3032, simple_loss=0.3578, pruned_loss=0.1243, over 7232.00 frames.], tot_loss[loss=0.3036, simple_loss=0.356, pruned_loss=0.1256, over 1422341.11 frames.], batch size: 20, lr: 1.63e-03 2022-05-26 16:38:23,578 INFO [train.py:842] (0/4) Epoch 2, batch 6800, loss[loss=0.2807, simple_loss=0.3551, pruned_loss=0.1031, over 7107.00 frames.], tot_loss[loss=0.3022, simple_loss=0.3555, pruned_loss=0.1244, over 1424525.78 frames.], batch size: 21, lr: 1.63e-03 2022-05-26 16:38:28,418 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-16000.pt 2022-05-26 16:39:05,222 INFO [train.py:842] (0/4) Epoch 2, batch 6850, loss[loss=0.2835, simple_loss=0.3372, pruned_loss=0.1149, over 7328.00 frames.], tot_loss[loss=0.3033, simple_loss=0.3566, pruned_loss=0.125, over 1421156.74 frames.], batch size: 20, lr: 1.63e-03 2022-05-26 16:39:43,762 INFO [train.py:842] (0/4) Epoch 2, batch 6900, loss[loss=0.3031, simple_loss=0.358, pruned_loss=0.1241, over 7426.00 frames.], tot_loss[loss=0.3029, simple_loss=0.3563, pruned_loss=0.1248, over 1420458.42 frames.], batch size: 20, lr: 1.62e-03 2022-05-26 16:40:22,591 INFO [train.py:842] (0/4) Epoch 2, batch 6950, loss[loss=0.2376, simple_loss=0.2994, pruned_loss=0.08794, over 7288.00 frames.], tot_loss[loss=0.3006, simple_loss=0.3537, pruned_loss=0.1237, over 1420741.03 frames.], batch size: 18, lr: 1.62e-03 2022-05-26 16:41:01,157 INFO [train.py:842] (0/4) Epoch 2, batch 7000, loss[loss=0.2993, simple_loss=0.3441, pruned_loss=0.1272, over 7325.00 frames.], tot_loss[loss=0.3, simple_loss=0.3536, pruned_loss=0.1232, over 1422796.89 frames.], batch size: 21, lr: 1.62e-03 2022-05-26 16:41:40,648 INFO [train.py:842] (0/4) Epoch 2, batch 7050, loss[loss=0.3467, simple_loss=0.3783, pruned_loss=0.1575, over 5363.00 frames.], tot_loss[loss=0.2993, simple_loss=0.353, pruned_loss=0.1228, over 1425503.17 frames.], batch size: 52, lr: 1.62e-03 2022-05-26 16:42:19,188 INFO [train.py:842] (0/4) Epoch 2, batch 7100, loss[loss=0.2843, simple_loss=0.3445, pruned_loss=0.1121, over 7112.00 frames.], tot_loss[loss=0.3012, simple_loss=0.3543, pruned_loss=0.124, over 1425121.55 frames.], batch size: 21, lr: 1.61e-03 2022-05-26 16:42:57,905 INFO [train.py:842] (0/4) Epoch 2, batch 7150, loss[loss=0.2571, simple_loss=0.3189, pruned_loss=0.09761, over 7416.00 frames.], tot_loss[loss=0.3039, simple_loss=0.3562, pruned_loss=0.1258, over 1422132.13 frames.], batch size: 21, lr: 1.61e-03 2022-05-26 16:43:36,579 INFO [train.py:842] (0/4) Epoch 2, batch 7200, loss[loss=0.2786, simple_loss=0.322, pruned_loss=0.1176, over 6991.00 frames.], tot_loss[loss=0.3045, simple_loss=0.3568, pruned_loss=0.1261, over 1419725.14 frames.], batch size: 16, lr: 1.61e-03 2022-05-26 16:44:15,864 INFO [train.py:842] (0/4) Epoch 2, batch 7250, loss[loss=0.2564, simple_loss=0.3302, pruned_loss=0.09134, over 7236.00 frames.], tot_loss[loss=0.3025, simple_loss=0.3554, pruned_loss=0.1247, over 1425049.89 frames.], batch size: 20, lr: 1.61e-03 2022-05-26 16:44:54,471 INFO [train.py:842] (0/4) Epoch 2, batch 7300, loss[loss=0.3488, simple_loss=0.3891, pruned_loss=0.1542, over 7210.00 frames.], tot_loss[loss=0.3028, simple_loss=0.3559, pruned_loss=0.1249, over 1427524.08 frames.], batch size: 
21, lr: 1.60e-03 2022-05-26 16:45:33,710 INFO [train.py:842] (0/4) Epoch 2, batch 7350, loss[loss=0.3916, simple_loss=0.415, pruned_loss=0.1841, over 4822.00 frames.], tot_loss[loss=0.3006, simple_loss=0.3537, pruned_loss=0.1238, over 1423703.50 frames.], batch size: 52, lr: 1.60e-03 2022-05-26 16:46:12,356 INFO [train.py:842] (0/4) Epoch 2, batch 7400, loss[loss=0.2799, simple_loss=0.3277, pruned_loss=0.1161, over 6985.00 frames.], tot_loss[loss=0.3004, simple_loss=0.3537, pruned_loss=0.1235, over 1423095.26 frames.], batch size: 16, lr: 1.60e-03 2022-05-26 16:46:51,029 INFO [train.py:842] (0/4) Epoch 2, batch 7450, loss[loss=0.2701, simple_loss=0.3277, pruned_loss=0.1062, over 7343.00 frames.], tot_loss[loss=0.2993, simple_loss=0.3528, pruned_loss=0.1229, over 1418852.70 frames.], batch size: 19, lr: 1.60e-03 2022-05-26 16:47:29,540 INFO [train.py:842] (0/4) Epoch 2, batch 7500, loss[loss=0.3212, simple_loss=0.382, pruned_loss=0.1302, over 7213.00 frames.], tot_loss[loss=0.3011, simple_loss=0.3543, pruned_loss=0.1239, over 1419843.71 frames.], batch size: 21, lr: 1.60e-03 2022-05-26 16:48:08,282 INFO [train.py:842] (0/4) Epoch 2, batch 7550, loss[loss=0.331, simple_loss=0.3814, pruned_loss=0.1403, over 7404.00 frames.], tot_loss[loss=0.299, simple_loss=0.3531, pruned_loss=0.1224, over 1420963.12 frames.], batch size: 21, lr: 1.59e-03 2022-05-26 16:48:46,895 INFO [train.py:842] (0/4) Epoch 2, batch 7600, loss[loss=0.3567, simple_loss=0.3892, pruned_loss=0.1621, over 5391.00 frames.], tot_loss[loss=0.299, simple_loss=0.3526, pruned_loss=0.1227, over 1421102.83 frames.], batch size: 52, lr: 1.59e-03 2022-05-26 16:49:26,128 INFO [train.py:842] (0/4) Epoch 2, batch 7650, loss[loss=0.2905, simple_loss=0.3521, pruned_loss=0.1145, over 7420.00 frames.], tot_loss[loss=0.299, simple_loss=0.3528, pruned_loss=0.1227, over 1421699.63 frames.], batch size: 21, lr: 1.59e-03 2022-05-26 16:50:04,696 INFO [train.py:842] (0/4) Epoch 2, batch 7700, loss[loss=0.3409, simple_loss=0.3715, pruned_loss=0.1552, over 7341.00 frames.], tot_loss[loss=0.2981, simple_loss=0.3519, pruned_loss=0.1221, over 1423316.38 frames.], batch size: 22, lr: 1.59e-03 2022-05-26 16:50:43,501 INFO [train.py:842] (0/4) Epoch 2, batch 7750, loss[loss=0.3022, simple_loss=0.3663, pruned_loss=0.119, over 7078.00 frames.], tot_loss[loss=0.2974, simple_loss=0.3521, pruned_loss=0.1214, over 1424497.74 frames.], batch size: 28, lr: 1.59e-03 2022-05-26 16:51:21,993 INFO [train.py:842] (0/4) Epoch 2, batch 7800, loss[loss=0.3337, simple_loss=0.3738, pruned_loss=0.1469, over 7141.00 frames.], tot_loss[loss=0.2988, simple_loss=0.3532, pruned_loss=0.1222, over 1423731.93 frames.], batch size: 20, lr: 1.58e-03 2022-05-26 16:52:00,801 INFO [train.py:842] (0/4) Epoch 2, batch 7850, loss[loss=0.2993, simple_loss=0.3531, pruned_loss=0.1228, over 7320.00 frames.], tot_loss[loss=0.2989, simple_loss=0.353, pruned_loss=0.1225, over 1424272.71 frames.], batch size: 21, lr: 1.58e-03 2022-05-26 16:52:39,358 INFO [train.py:842] (0/4) Epoch 2, batch 7900, loss[loss=0.3395, simple_loss=0.3814, pruned_loss=0.1488, over 5202.00 frames.], tot_loss[loss=0.299, simple_loss=0.3534, pruned_loss=0.1224, over 1426656.01 frames.], batch size: 53, lr: 1.58e-03 2022-05-26 16:53:18,138 INFO [train.py:842] (0/4) Epoch 2, batch 7950, loss[loss=0.2545, simple_loss=0.3134, pruned_loss=0.09775, over 7159.00 frames.], tot_loss[loss=0.2965, simple_loss=0.3519, pruned_loss=0.1206, over 1428315.63 frames.], batch size: 18, lr: 1.58e-03 2022-05-26 16:53:56,766 INFO [train.py:842] 
(0/4) Epoch 2, batch 8000, loss[loss=0.3125, simple_loss=0.3684, pruned_loss=0.1283, over 7211.00 frames.], tot_loss[loss=0.2959, simple_loss=0.3517, pruned_loss=0.12, over 1426745.97 frames.], batch size: 21, lr: 1.57e-03 2022-05-26 16:54:35,632 INFO [train.py:842] (0/4) Epoch 2, batch 8050, loss[loss=0.3381, simple_loss=0.3791, pruned_loss=0.1485, over 6473.00 frames.], tot_loss[loss=0.2981, simple_loss=0.3532, pruned_loss=0.1215, over 1424708.70 frames.], batch size: 37, lr: 1.57e-03 2022-05-26 16:55:14,309 INFO [train.py:842] (0/4) Epoch 2, batch 8100, loss[loss=0.2725, simple_loss=0.3391, pruned_loss=0.1029, over 7196.00 frames.], tot_loss[loss=0.2991, simple_loss=0.3538, pruned_loss=0.1222, over 1427660.09 frames.], batch size: 26, lr: 1.57e-03 2022-05-26 16:55:53,019 INFO [train.py:842] (0/4) Epoch 2, batch 8150, loss[loss=0.2486, simple_loss=0.3095, pruned_loss=0.09382, over 7076.00 frames.], tot_loss[loss=0.2964, simple_loss=0.3519, pruned_loss=0.1205, over 1429183.01 frames.], batch size: 18, lr: 1.57e-03 2022-05-26 16:56:31,567 INFO [train.py:842] (0/4) Epoch 2, batch 8200, loss[loss=0.2404, simple_loss=0.3026, pruned_loss=0.08913, over 7271.00 frames.], tot_loss[loss=0.2978, simple_loss=0.3528, pruned_loss=0.1214, over 1424677.95 frames.], batch size: 18, lr: 1.57e-03 2022-05-26 16:57:10,902 INFO [train.py:842] (0/4) Epoch 2, batch 8250, loss[loss=0.2966, simple_loss=0.3519, pruned_loss=0.1207, over 7070.00 frames.], tot_loss[loss=0.2967, simple_loss=0.3517, pruned_loss=0.1208, over 1422250.89 frames.], batch size: 28, lr: 1.56e-03 2022-05-26 16:57:49,464 INFO [train.py:842] (0/4) Epoch 2, batch 8300, loss[loss=0.3092, simple_loss=0.3739, pruned_loss=0.1223, over 7145.00 frames.], tot_loss[loss=0.2992, simple_loss=0.3534, pruned_loss=0.1224, over 1420461.16 frames.], batch size: 20, lr: 1.56e-03 2022-05-26 16:58:28,250 INFO [train.py:842] (0/4) Epoch 2, batch 8350, loss[loss=0.3708, simple_loss=0.4124, pruned_loss=0.1646, over 4922.00 frames.], tot_loss[loss=0.2977, simple_loss=0.3526, pruned_loss=0.1214, over 1420153.56 frames.], batch size: 53, lr: 1.56e-03 2022-05-26 16:59:06,606 INFO [train.py:842] (0/4) Epoch 2, batch 8400, loss[loss=0.2598, simple_loss=0.3115, pruned_loss=0.1041, over 7148.00 frames.], tot_loss[loss=0.2986, simple_loss=0.3531, pruned_loss=0.122, over 1419376.02 frames.], batch size: 17, lr: 1.56e-03 2022-05-26 16:59:45,118 INFO [train.py:842] (0/4) Epoch 2, batch 8450, loss[loss=0.362, simple_loss=0.4044, pruned_loss=0.1598, over 7211.00 frames.], tot_loss[loss=0.2988, simple_loss=0.3536, pruned_loss=0.122, over 1415027.74 frames.], batch size: 22, lr: 1.56e-03 2022-05-26 17:00:23,727 INFO [train.py:842] (0/4) Epoch 2, batch 8500, loss[loss=0.3137, simple_loss=0.3473, pruned_loss=0.14, over 7137.00 frames.], tot_loss[loss=0.2987, simple_loss=0.3536, pruned_loss=0.1219, over 1418660.75 frames.], batch size: 17, lr: 1.55e-03 2022-05-26 17:01:02,429 INFO [train.py:842] (0/4) Epoch 2, batch 8550, loss[loss=0.2393, simple_loss=0.3184, pruned_loss=0.08007, over 7342.00 frames.], tot_loss[loss=0.3002, simple_loss=0.3548, pruned_loss=0.1228, over 1423267.61 frames.], batch size: 19, lr: 1.55e-03 2022-05-26 17:01:41,159 INFO [train.py:842] (0/4) Epoch 2, batch 8600, loss[loss=0.4275, simple_loss=0.433, pruned_loss=0.211, over 6446.00 frames.], tot_loss[loss=0.2973, simple_loss=0.3521, pruned_loss=0.1213, over 1420207.10 frames.], batch size: 37, lr: 1.55e-03 2022-05-26 17:02:19,992 INFO [train.py:842] (0/4) Epoch 2, batch 8650, loss[loss=0.3063, 
simple_loss=0.3721, pruned_loss=0.1202, over 7149.00 frames.], tot_loss[loss=0.297, simple_loss=0.352, pruned_loss=0.121, over 1422166.54 frames.], batch size: 20, lr: 1.55e-03 2022-05-26 17:02:58,639 INFO [train.py:842] (0/4) Epoch 2, batch 8700, loss[loss=0.2297, simple_loss=0.3068, pruned_loss=0.07629, over 7075.00 frames.], tot_loss[loss=0.2931, simple_loss=0.3489, pruned_loss=0.1187, over 1421121.19 frames.], batch size: 18, lr: 1.55e-03 2022-05-26 17:03:37,084 INFO [train.py:842] (0/4) Epoch 2, batch 8750, loss[loss=0.2799, simple_loss=0.3363, pruned_loss=0.1117, over 7160.00 frames.], tot_loss[loss=0.2943, simple_loss=0.3497, pruned_loss=0.1195, over 1420532.75 frames.], batch size: 18, lr: 1.54e-03 2022-05-26 17:04:15,668 INFO [train.py:842] (0/4) Epoch 2, batch 8800, loss[loss=0.3412, simple_loss=0.3901, pruned_loss=0.1461, over 7333.00 frames.], tot_loss[loss=0.2964, simple_loss=0.3507, pruned_loss=0.1211, over 1412882.90 frames.], batch size: 22, lr: 1.54e-03 2022-05-26 17:04:54,408 INFO [train.py:842] (0/4) Epoch 2, batch 8850, loss[loss=0.4076, simple_loss=0.439, pruned_loss=0.1881, over 7267.00 frames.], tot_loss[loss=0.2977, simple_loss=0.3517, pruned_loss=0.1218, over 1411056.68 frames.], batch size: 24, lr: 1.54e-03 2022-05-26 17:05:32,685 INFO [train.py:842] (0/4) Epoch 2, batch 8900, loss[loss=0.3401, simple_loss=0.3944, pruned_loss=0.1429, over 6667.00 frames.], tot_loss[loss=0.2979, simple_loss=0.3522, pruned_loss=0.1218, over 1401951.45 frames.], batch size: 31, lr: 1.54e-03 2022-05-26 17:06:11,174 INFO [train.py:842] (0/4) Epoch 2, batch 8950, loss[loss=0.3027, simple_loss=0.3759, pruned_loss=0.1148, over 7111.00 frames.], tot_loss[loss=0.3001, simple_loss=0.3541, pruned_loss=0.123, over 1402203.66 frames.], batch size: 21, lr: 1.54e-03 2022-05-26 17:06:49,707 INFO [train.py:842] (0/4) Epoch 2, batch 9000, loss[loss=0.3862, simple_loss=0.3991, pruned_loss=0.1867, over 7271.00 frames.], tot_loss[loss=0.3009, simple_loss=0.3546, pruned_loss=0.1235, over 1397438.09 frames.], batch size: 18, lr: 1.53e-03 2022-05-26 17:06:49,708 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 17:06:59,111 INFO [train.py:871] (0/4) Epoch 2, validation: loss=0.2204, simple_loss=0.3177, pruned_loss=0.06159, over 868885.00 frames. 
2022-05-26 17:07:37,648 INFO [train.py:842] (0/4) Epoch 2, batch 9050, loss[loss=0.2367, simple_loss=0.2976, pruned_loss=0.08793, over 7274.00 frames.], tot_loss[loss=0.3023, simple_loss=0.3555, pruned_loss=0.1245, over 1382698.79 frames.], batch size: 18, lr: 1.53e-03 2022-05-26 17:08:15,242 INFO [train.py:842] (0/4) Epoch 2, batch 9100, loss[loss=0.2942, simple_loss=0.3422, pruned_loss=0.1231, over 5105.00 frames.], tot_loss[loss=0.3072, simple_loss=0.3592, pruned_loss=0.1277, over 1329781.40 frames.], batch size: 52, lr: 1.53e-03 2022-05-26 17:08:52,760 INFO [train.py:842] (0/4) Epoch 2, batch 9150, loss[loss=0.3096, simple_loss=0.3611, pruned_loss=0.1291, over 5021.00 frames.], tot_loss[loss=0.315, simple_loss=0.3642, pruned_loss=0.133, over 1258555.56 frames.], batch size: 52, lr: 1.53e-03 2022-05-26 17:09:25,075 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-2.pt 2022-05-26 17:09:46,537 INFO [train.py:842] (0/4) Epoch 3, batch 0, loss[loss=0.2351, simple_loss=0.293, pruned_loss=0.0886, over 7278.00 frames.], tot_loss[loss=0.2351, simple_loss=0.293, pruned_loss=0.0886, over 7278.00 frames.], batch size: 17, lr: 1.50e-03 2022-05-26 17:10:25,877 INFO [train.py:842] (0/4) Epoch 3, batch 50, loss[loss=0.4898, simple_loss=0.4724, pruned_loss=0.2536, over 7297.00 frames.], tot_loss[loss=0.296, simple_loss=0.3485, pruned_loss=0.1217, over 321993.14 frames.], batch size: 25, lr: 1.49e-03 2022-05-26 17:11:04,575 INFO [train.py:842] (0/4) Epoch 3, batch 100, loss[loss=0.2895, simple_loss=0.3244, pruned_loss=0.1273, over 7006.00 frames.], tot_loss[loss=0.2969, simple_loss=0.3507, pruned_loss=0.1216, over 569626.37 frames.], batch size: 16, lr: 1.49e-03 2022-05-26 17:11:43,656 INFO [train.py:842] (0/4) Epoch 3, batch 150, loss[loss=0.2954, simple_loss=0.3617, pruned_loss=0.1146, over 6636.00 frames.], tot_loss[loss=0.2953, simple_loss=0.349, pruned_loss=0.1208, over 761968.24 frames.], batch size: 31, lr: 1.49e-03 2022-05-26 17:12:22,309 INFO [train.py:842] (0/4) Epoch 3, batch 200, loss[loss=0.2942, simple_loss=0.3308, pruned_loss=0.1288, over 7212.00 frames.], tot_loss[loss=0.294, simple_loss=0.3484, pruned_loss=0.1199, over 900401.01 frames.], batch size: 16, lr: 1.49e-03 2022-05-26 17:13:00,908 INFO [train.py:842] (0/4) Epoch 3, batch 250, loss[loss=0.2846, simple_loss=0.3426, pruned_loss=0.1133, over 7355.00 frames.], tot_loss[loss=0.294, simple_loss=0.3493, pruned_loss=0.1194, over 1010393.44 frames.], batch size: 19, lr: 1.49e-03 2022-05-26 17:13:39,431 INFO [train.py:842] (0/4) Epoch 3, batch 300, loss[loss=0.2967, simple_loss=0.3631, pruned_loss=0.1152, over 6782.00 frames.], tot_loss[loss=0.2952, simple_loss=0.351, pruned_loss=0.1197, over 1100189.74 frames.], batch size: 31, lr: 1.49e-03 2022-05-26 17:14:18,344 INFO [train.py:842] (0/4) Epoch 3, batch 350, loss[loss=0.3281, simple_loss=0.372, pruned_loss=0.1421, over 7324.00 frames.], tot_loss[loss=0.2978, simple_loss=0.3533, pruned_loss=0.1212, over 1171089.75 frames.], batch size: 21, lr: 1.48e-03 2022-05-26 17:14:56,928 INFO [train.py:842] (0/4) Epoch 3, batch 400, loss[loss=0.3094, simple_loss=0.3622, pruned_loss=0.1284, over 7271.00 frames.], tot_loss[loss=0.2971, simple_loss=0.3529, pruned_loss=0.1206, over 1222090.58 frames.], batch size: 24, lr: 1.48e-03 2022-05-26 17:15:35,596 INFO [train.py:842] (0/4) Epoch 3, batch 450, loss[loss=0.2791, simple_loss=0.3348, pruned_loss=0.1116, over 7195.00 frames.], tot_loss[loss=0.2978, simple_loss=0.3538, pruned_loss=0.1209, over 
1263197.61 frames.], batch size: 22, lr: 1.48e-03 2022-05-26 17:16:14,226 INFO [train.py:842] (0/4) Epoch 3, batch 500, loss[loss=0.2854, simple_loss=0.3325, pruned_loss=0.1191, over 6991.00 frames.], tot_loss[loss=0.2945, simple_loss=0.3508, pruned_loss=0.1191, over 1301517.78 frames.], batch size: 16, lr: 1.48e-03 2022-05-26 17:16:53,099 INFO [train.py:842] (0/4) Epoch 3, batch 550, loss[loss=0.3139, simple_loss=0.3722, pruned_loss=0.1278, over 7231.00 frames.], tot_loss[loss=0.2943, simple_loss=0.3508, pruned_loss=0.1189, over 1331605.90 frames.], batch size: 21, lr: 1.48e-03 2022-05-26 17:17:31,916 INFO [train.py:842] (0/4) Epoch 3, batch 600, loss[loss=0.2856, simple_loss=0.3566, pruned_loss=0.1073, over 7287.00 frames.], tot_loss[loss=0.292, simple_loss=0.3486, pruned_loss=0.1177, over 1352313.24 frames.], batch size: 25, lr: 1.47e-03 2022-05-26 17:18:10,649 INFO [train.py:842] (0/4) Epoch 3, batch 650, loss[loss=0.361, simple_loss=0.3967, pruned_loss=0.1626, over 7359.00 frames.], tot_loss[loss=0.2932, simple_loss=0.349, pruned_loss=0.1187, over 1366541.01 frames.], batch size: 19, lr: 1.47e-03 2022-05-26 17:18:49,279 INFO [train.py:842] (0/4) Epoch 3, batch 700, loss[loss=0.2587, simple_loss=0.3319, pruned_loss=0.09272, over 7208.00 frames.], tot_loss[loss=0.292, simple_loss=0.3483, pruned_loss=0.1179, over 1376744.20 frames.], batch size: 21, lr: 1.47e-03 2022-05-26 17:19:28,527 INFO [train.py:842] (0/4) Epoch 3, batch 750, loss[loss=0.3047, simple_loss=0.3619, pruned_loss=0.1238, over 7197.00 frames.], tot_loss[loss=0.29, simple_loss=0.3472, pruned_loss=0.1164, over 1390388.48 frames.], batch size: 23, lr: 1.47e-03 2022-05-26 17:20:07,105 INFO [train.py:842] (0/4) Epoch 3, batch 800, loss[loss=0.2539, simple_loss=0.3296, pruned_loss=0.08913, over 7222.00 frames.], tot_loss[loss=0.2919, simple_loss=0.349, pruned_loss=0.1174, over 1401901.24 frames.], batch size: 23, lr: 1.47e-03 2022-05-26 17:20:46,497 INFO [train.py:842] (0/4) Epoch 3, batch 850, loss[loss=0.2973, simple_loss=0.3593, pruned_loss=0.1176, over 7294.00 frames.], tot_loss[loss=0.2895, simple_loss=0.3469, pruned_loss=0.1161, over 1410218.52 frames.], batch size: 25, lr: 1.47e-03 2022-05-26 17:21:24,954 INFO [train.py:842] (0/4) Epoch 3, batch 900, loss[loss=0.2187, simple_loss=0.2824, pruned_loss=0.07751, over 7062.00 frames.], tot_loss[loss=0.2894, simple_loss=0.3473, pruned_loss=0.1158, over 1412867.34 frames.], batch size: 18, lr: 1.46e-03 2022-05-26 17:22:03,733 INFO [train.py:842] (0/4) Epoch 3, batch 950, loss[loss=0.4697, simple_loss=0.4679, pruned_loss=0.2357, over 7139.00 frames.], tot_loss[loss=0.2915, simple_loss=0.3486, pruned_loss=0.1172, over 1417771.01 frames.], batch size: 20, lr: 1.46e-03 2022-05-26 17:22:42,287 INFO [train.py:842] (0/4) Epoch 3, batch 1000, loss[loss=0.3702, simple_loss=0.4024, pruned_loss=0.1691, over 6863.00 frames.], tot_loss[loss=0.2911, simple_loss=0.3485, pruned_loss=0.1168, over 1416854.82 frames.], batch size: 31, lr: 1.46e-03 2022-05-26 17:23:20,997 INFO [train.py:842] (0/4) Epoch 3, batch 1050, loss[loss=0.2444, simple_loss=0.3016, pruned_loss=0.0936, over 7274.00 frames.], tot_loss[loss=0.292, simple_loss=0.349, pruned_loss=0.1175, over 1414097.81 frames.], batch size: 18, lr: 1.46e-03 2022-05-26 17:23:59,535 INFO [train.py:842] (0/4) Epoch 3, batch 1100, loss[loss=0.2898, simple_loss=0.3498, pruned_loss=0.1149, over 7231.00 frames.], tot_loss[loss=0.2915, simple_loss=0.349, pruned_loss=0.117, over 1419518.72 frames.], batch size: 21, lr: 1.46e-03 2022-05-26 17:24:38,370 
INFO [train.py:842] (0/4) Epoch 3, batch 1150, loss[loss=0.3327, simple_loss=0.3744, pruned_loss=0.1455, over 7234.00 frames.], tot_loss[loss=0.289, simple_loss=0.3472, pruned_loss=0.1154, over 1421189.84 frames.], batch size: 20, lr: 1.45e-03 2022-05-26 17:25:16,894 INFO [train.py:842] (0/4) Epoch 3, batch 1200, loss[loss=0.2378, simple_loss=0.3052, pruned_loss=0.08516, over 7434.00 frames.], tot_loss[loss=0.2898, simple_loss=0.3475, pruned_loss=0.1161, over 1425123.00 frames.], batch size: 20, lr: 1.45e-03 2022-05-26 17:25:55,752 INFO [train.py:842] (0/4) Epoch 3, batch 1250, loss[loss=0.2967, simple_loss=0.3482, pruned_loss=0.1226, over 7413.00 frames.], tot_loss[loss=0.2918, simple_loss=0.3482, pruned_loss=0.1177, over 1425815.42 frames.], batch size: 21, lr: 1.45e-03 2022-05-26 17:26:34,357 INFO [train.py:842] (0/4) Epoch 3, batch 1300, loss[loss=0.286, simple_loss=0.3479, pruned_loss=0.1121, over 7320.00 frames.], tot_loss[loss=0.288, simple_loss=0.3458, pruned_loss=0.1152, over 1426976.36 frames.], batch size: 21, lr: 1.45e-03 2022-05-26 17:27:13,050 INFO [train.py:842] (0/4) Epoch 3, batch 1350, loss[loss=0.36, simple_loss=0.3723, pruned_loss=0.1738, over 7437.00 frames.], tot_loss[loss=0.2893, simple_loss=0.3473, pruned_loss=0.1156, over 1426481.44 frames.], batch size: 20, lr: 1.45e-03 2022-05-26 17:27:51,771 INFO [train.py:842] (0/4) Epoch 3, batch 1400, loss[loss=0.3017, simple_loss=0.3574, pruned_loss=0.123, over 7160.00 frames.], tot_loss[loss=0.2898, simple_loss=0.3476, pruned_loss=0.116, over 1424040.52 frames.], batch size: 19, lr: 1.45e-03 2022-05-26 17:28:30,419 INFO [train.py:842] (0/4) Epoch 3, batch 1450, loss[loss=0.267, simple_loss=0.307, pruned_loss=0.1135, over 7149.00 frames.], tot_loss[loss=0.2884, simple_loss=0.3461, pruned_loss=0.1153, over 1421007.93 frames.], batch size: 17, lr: 1.44e-03 2022-05-26 17:29:08,821 INFO [train.py:842] (0/4) Epoch 3, batch 1500, loss[loss=0.3959, simple_loss=0.4265, pruned_loss=0.1827, over 7319.00 frames.], tot_loss[loss=0.2886, simple_loss=0.3464, pruned_loss=0.1154, over 1418663.96 frames.], batch size: 21, lr: 1.44e-03 2022-05-26 17:29:47,552 INFO [train.py:842] (0/4) Epoch 3, batch 1550, loss[loss=0.3165, simple_loss=0.3609, pruned_loss=0.136, over 7162.00 frames.], tot_loss[loss=0.2907, simple_loss=0.3478, pruned_loss=0.1167, over 1422212.95 frames.], batch size: 19, lr: 1.44e-03 2022-05-26 17:30:26,309 INFO [train.py:842] (0/4) Epoch 3, batch 1600, loss[loss=0.2851, simple_loss=0.3394, pruned_loss=0.1154, over 7160.00 frames.], tot_loss[loss=0.2916, simple_loss=0.3482, pruned_loss=0.1175, over 1424201.45 frames.], batch size: 19, lr: 1.44e-03 2022-05-26 17:31:05,606 INFO [train.py:842] (0/4) Epoch 3, batch 1650, loss[loss=0.2418, simple_loss=0.3083, pruned_loss=0.08762, over 7431.00 frames.], tot_loss[loss=0.2898, simple_loss=0.3468, pruned_loss=0.1164, over 1426241.24 frames.], batch size: 20, lr: 1.44e-03 2022-05-26 17:31:43,967 INFO [train.py:842] (0/4) Epoch 3, batch 1700, loss[loss=0.2682, simple_loss=0.3414, pruned_loss=0.09753, over 7154.00 frames.], tot_loss[loss=0.2888, simple_loss=0.3457, pruned_loss=0.1159, over 1417401.28 frames.], batch size: 20, lr: 1.44e-03 2022-05-26 17:32:23,202 INFO [train.py:842] (0/4) Epoch 3, batch 1750, loss[loss=0.3114, simple_loss=0.3659, pruned_loss=0.1284, over 7233.00 frames.], tot_loss[loss=0.2907, simple_loss=0.3473, pruned_loss=0.117, over 1424784.60 frames.], batch size: 20, lr: 1.43e-03 2022-05-26 17:33:01,584 INFO [train.py:842] (0/4) Epoch 3, batch 1800, loss[loss=0.305, 
simple_loss=0.3579, pruned_loss=0.126, over 7125.00 frames.], tot_loss[loss=0.2901, simple_loss=0.3467, pruned_loss=0.1168, over 1418073.06 frames.], batch size: 21, lr: 1.43e-03 2022-05-26 17:33:40,961 INFO [train.py:842] (0/4) Epoch 3, batch 1850, loss[loss=0.2528, simple_loss=0.3213, pruned_loss=0.09221, over 7416.00 frames.], tot_loss[loss=0.2897, simple_loss=0.3463, pruned_loss=0.1165, over 1419200.62 frames.], batch size: 21, lr: 1.43e-03 2022-05-26 17:34:19,482 INFO [train.py:842] (0/4) Epoch 3, batch 1900, loss[loss=0.2901, simple_loss=0.3517, pruned_loss=0.1142, over 7157.00 frames.], tot_loss[loss=0.2896, simple_loss=0.3465, pruned_loss=0.1164, over 1416848.05 frames.], batch size: 18, lr: 1.43e-03 2022-05-26 17:34:58,244 INFO [train.py:842] (0/4) Epoch 3, batch 1950, loss[loss=0.2962, simple_loss=0.3491, pruned_loss=0.1216, over 6784.00 frames.], tot_loss[loss=0.2872, simple_loss=0.345, pruned_loss=0.1147, over 1417870.95 frames.], batch size: 31, lr: 1.43e-03 2022-05-26 17:35:36,883 INFO [train.py:842] (0/4) Epoch 3, batch 2000, loss[loss=0.3075, simple_loss=0.3573, pruned_loss=0.1288, over 7155.00 frames.], tot_loss[loss=0.2858, simple_loss=0.3439, pruned_loss=0.1138, over 1422392.03 frames.], batch size: 19, lr: 1.43e-03 2022-05-26 17:36:15,532 INFO [train.py:842] (0/4) Epoch 3, batch 2050, loss[loss=0.3083, simple_loss=0.3587, pruned_loss=0.1289, over 5062.00 frames.], tot_loss[loss=0.2885, simple_loss=0.3464, pruned_loss=0.1153, over 1422265.43 frames.], batch size: 53, lr: 1.42e-03 2022-05-26 17:36:54,228 INFO [train.py:842] (0/4) Epoch 3, batch 2100, loss[loss=0.3268, simple_loss=0.3762, pruned_loss=0.1387, over 7330.00 frames.], tot_loss[loss=0.2892, simple_loss=0.3465, pruned_loss=0.116, over 1425340.08 frames.], batch size: 21, lr: 1.42e-03 2022-05-26 17:37:33,226 INFO [train.py:842] (0/4) Epoch 3, batch 2150, loss[loss=0.3065, simple_loss=0.36, pruned_loss=0.1265, over 7241.00 frames.], tot_loss[loss=0.2903, simple_loss=0.3472, pruned_loss=0.1167, over 1426565.59 frames.], batch size: 20, lr: 1.42e-03 2022-05-26 17:38:11,809 INFO [train.py:842] (0/4) Epoch 3, batch 2200, loss[loss=0.2515, simple_loss=0.3233, pruned_loss=0.08982, over 7136.00 frames.], tot_loss[loss=0.2888, simple_loss=0.3462, pruned_loss=0.1157, over 1424503.59 frames.], batch size: 20, lr: 1.42e-03 2022-05-26 17:38:50,513 INFO [train.py:842] (0/4) Epoch 3, batch 2250, loss[loss=0.3, simple_loss=0.3531, pruned_loss=0.1234, over 7329.00 frames.], tot_loss[loss=0.2893, simple_loss=0.3468, pruned_loss=0.1159, over 1424787.13 frames.], batch size: 20, lr: 1.42e-03 2022-05-26 17:39:29,084 INFO [train.py:842] (0/4) Epoch 3, batch 2300, loss[loss=0.276, simple_loss=0.3289, pruned_loss=0.1115, over 7365.00 frames.], tot_loss[loss=0.291, simple_loss=0.3479, pruned_loss=0.117, over 1412772.76 frames.], batch size: 19, lr: 1.42e-03 2022-05-26 17:40:07,872 INFO [train.py:842] (0/4) Epoch 3, batch 2350, loss[loss=0.2652, simple_loss=0.3275, pruned_loss=0.1015, over 7255.00 frames.], tot_loss[loss=0.2885, simple_loss=0.346, pruned_loss=0.1155, over 1414434.45 frames.], batch size: 19, lr: 1.41e-03 2022-05-26 17:40:46,289 INFO [train.py:842] (0/4) Epoch 3, batch 2400, loss[loss=0.2397, simple_loss=0.3062, pruned_loss=0.08662, over 7264.00 frames.], tot_loss[loss=0.2853, simple_loss=0.3443, pruned_loss=0.1131, over 1417573.88 frames.], batch size: 19, lr: 1.41e-03 2022-05-26 17:41:25,129 INFO [train.py:842] (0/4) Epoch 3, batch 2450, loss[loss=0.2586, simple_loss=0.3085, pruned_loss=0.1044, over 7228.00 frames.], 
tot_loss[loss=0.2881, simple_loss=0.3462, pruned_loss=0.115, over 1414636.39 frames.], batch size: 20, lr: 1.41e-03 2022-05-26 17:42:03,917 INFO [train.py:842] (0/4) Epoch 3, batch 2500, loss[loss=0.2826, simple_loss=0.3427, pruned_loss=0.1112, over 7154.00 frames.], tot_loss[loss=0.2865, simple_loss=0.3445, pruned_loss=0.1142, over 1412894.46 frames.], batch size: 19, lr: 1.41e-03 2022-05-26 17:42:42,665 INFO [train.py:842] (0/4) Epoch 3, batch 2550, loss[loss=0.3807, simple_loss=0.4152, pruned_loss=0.1731, over 7201.00 frames.], tot_loss[loss=0.2855, simple_loss=0.3436, pruned_loss=0.1137, over 1411900.06 frames.], batch size: 21, lr: 1.41e-03 2022-05-26 17:43:21,436 INFO [train.py:842] (0/4) Epoch 3, batch 2600, loss[loss=0.2734, simple_loss=0.3343, pruned_loss=0.1062, over 7278.00 frames.], tot_loss[loss=0.2873, simple_loss=0.3449, pruned_loss=0.1149, over 1418064.22 frames.], batch size: 18, lr: 1.41e-03 2022-05-26 17:44:00,640 INFO [train.py:842] (0/4) Epoch 3, batch 2650, loss[loss=0.2664, simple_loss=0.331, pruned_loss=0.1009, over 7333.00 frames.], tot_loss[loss=0.2849, simple_loss=0.343, pruned_loss=0.1134, over 1417469.95 frames.], batch size: 20, lr: 1.41e-03 2022-05-26 17:44:39,142 INFO [train.py:842] (0/4) Epoch 3, batch 2700, loss[loss=0.3595, simple_loss=0.38, pruned_loss=0.1695, over 7442.00 frames.], tot_loss[loss=0.2857, simple_loss=0.3436, pruned_loss=0.1139, over 1419151.09 frames.], batch size: 19, lr: 1.40e-03 2022-05-26 17:45:17,845 INFO [train.py:842] (0/4) Epoch 3, batch 2750, loss[loss=0.3072, simple_loss=0.3612, pruned_loss=0.1266, over 7144.00 frames.], tot_loss[loss=0.2832, simple_loss=0.342, pruned_loss=0.1121, over 1418529.03 frames.], batch size: 26, lr: 1.40e-03 2022-05-26 17:45:56,561 INFO [train.py:842] (0/4) Epoch 3, batch 2800, loss[loss=0.3108, simple_loss=0.3537, pruned_loss=0.134, over 5121.00 frames.], tot_loss[loss=0.2838, simple_loss=0.3423, pruned_loss=0.1127, over 1419079.18 frames.], batch size: 52, lr: 1.40e-03 2022-05-26 17:46:35,460 INFO [train.py:842] (0/4) Epoch 3, batch 2850, loss[loss=0.3037, simple_loss=0.3578, pruned_loss=0.1248, over 7225.00 frames.], tot_loss[loss=0.2853, simple_loss=0.3434, pruned_loss=0.1136, over 1421464.75 frames.], batch size: 21, lr: 1.40e-03 2022-05-26 17:47:13,941 INFO [train.py:842] (0/4) Epoch 3, batch 2900, loss[loss=0.2377, simple_loss=0.3103, pruned_loss=0.08254, over 6378.00 frames.], tot_loss[loss=0.284, simple_loss=0.3427, pruned_loss=0.1127, over 1417445.87 frames.], batch size: 37, lr: 1.40e-03 2022-05-26 17:47:52,723 INFO [train.py:842] (0/4) Epoch 3, batch 2950, loss[loss=0.3916, simple_loss=0.422, pruned_loss=0.1806, over 7150.00 frames.], tot_loss[loss=0.2878, simple_loss=0.3452, pruned_loss=0.1152, over 1416534.31 frames.], batch size: 26, lr: 1.40e-03 2022-05-26 17:48:31,449 INFO [train.py:842] (0/4) Epoch 3, batch 3000, loss[loss=0.3164, simple_loss=0.3758, pruned_loss=0.1285, over 7336.00 frames.], tot_loss[loss=0.2865, simple_loss=0.3445, pruned_loss=0.1143, over 1420306.78 frames.], batch size: 22, lr: 1.39e-03 2022-05-26 17:48:31,451 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 17:48:40,685 INFO [train.py:871] (0/4) Epoch 3, validation: loss=0.2137, simple_loss=0.3102, pruned_loss=0.05856, over 868885.00 frames. 
2022-05-26 17:49:19,937 INFO [train.py:842] (0/4) Epoch 3, batch 3050, loss[loss=0.2991, simple_loss=0.3613, pruned_loss=0.1184, over 7410.00 frames.], tot_loss[loss=0.2858, simple_loss=0.3446, pruned_loss=0.1135, over 1426132.00 frames.], batch size: 21, lr: 1.39e-03 2022-05-26 17:49:58,623 INFO [train.py:842] (0/4) Epoch 3, batch 3100, loss[loss=0.2436, simple_loss=0.3066, pruned_loss=0.09031, over 7268.00 frames.], tot_loss[loss=0.2833, simple_loss=0.3426, pruned_loss=0.112, over 1429955.74 frames.], batch size: 18, lr: 1.39e-03 2022-05-26 17:50:37,604 INFO [train.py:842] (0/4) Epoch 3, batch 3150, loss[loss=0.3312, simple_loss=0.388, pruned_loss=0.1372, over 7221.00 frames.], tot_loss[loss=0.2837, simple_loss=0.3425, pruned_loss=0.1125, over 1423600.39 frames.], batch size: 21, lr: 1.39e-03 2022-05-26 17:51:16,011 INFO [train.py:842] (0/4) Epoch 3, batch 3200, loss[loss=0.2921, simple_loss=0.3499, pruned_loss=0.1171, over 7381.00 frames.], tot_loss[loss=0.2826, simple_loss=0.3421, pruned_loss=0.1115, over 1426406.65 frames.], batch size: 23, lr: 1.39e-03 2022-05-26 17:51:54,667 INFO [train.py:842] (0/4) Epoch 3, batch 3250, loss[loss=0.2742, simple_loss=0.3425, pruned_loss=0.1029, over 7167.00 frames.], tot_loss[loss=0.2835, simple_loss=0.3431, pruned_loss=0.1119, over 1427240.72 frames.], batch size: 19, lr: 1.39e-03 2022-05-26 17:52:33,226 INFO [train.py:842] (0/4) Epoch 3, batch 3300, loss[loss=0.2387, simple_loss=0.3077, pruned_loss=0.08484, over 7175.00 frames.], tot_loss[loss=0.284, simple_loss=0.3433, pruned_loss=0.1124, over 1429939.18 frames.], batch size: 26, lr: 1.39e-03 2022-05-26 17:53:11,851 INFO [train.py:842] (0/4) Epoch 3, batch 3350, loss[loss=0.2551, simple_loss=0.3208, pruned_loss=0.09474, over 7269.00 frames.], tot_loss[loss=0.2876, simple_loss=0.3466, pruned_loss=0.1144, over 1426982.45 frames.], batch size: 18, lr: 1.38e-03 2022-05-26 17:53:50,293 INFO [train.py:842] (0/4) Epoch 3, batch 3400, loss[loss=0.2095, simple_loss=0.2864, pruned_loss=0.06636, over 7434.00 frames.], tot_loss[loss=0.2865, simple_loss=0.346, pruned_loss=0.1134, over 1424254.64 frames.], batch size: 18, lr: 1.38e-03 2022-05-26 17:54:29,396 INFO [train.py:842] (0/4) Epoch 3, batch 3450, loss[loss=0.2829, simple_loss=0.3396, pruned_loss=0.1131, over 7263.00 frames.], tot_loss[loss=0.2854, simple_loss=0.3453, pruned_loss=0.1128, over 1421046.06 frames.], batch size: 19, lr: 1.38e-03 2022-05-26 17:55:07,992 INFO [train.py:842] (0/4) Epoch 3, batch 3500, loss[loss=0.3718, simple_loss=0.4207, pruned_loss=0.1614, over 7278.00 frames.], tot_loss[loss=0.286, simple_loss=0.3451, pruned_loss=0.1135, over 1421534.87 frames.], batch size: 25, lr: 1.38e-03 2022-05-26 17:55:46,692 INFO [train.py:842] (0/4) Epoch 3, batch 3550, loss[loss=0.2588, simple_loss=0.3306, pruned_loss=0.09349, over 7219.00 frames.], tot_loss[loss=0.2855, simple_loss=0.3449, pruned_loss=0.113, over 1420129.05 frames.], batch size: 21, lr: 1.38e-03 2022-05-26 17:56:25,356 INFO [train.py:842] (0/4) Epoch 3, batch 3600, loss[loss=0.2678, simple_loss=0.3395, pruned_loss=0.09808, over 7291.00 frames.], tot_loss[loss=0.2841, simple_loss=0.3435, pruned_loss=0.1123, over 1421408.14 frames.], batch size: 24, lr: 1.38e-03 2022-05-26 17:57:04,455 INFO [train.py:842] (0/4) Epoch 3, batch 3650, loss[loss=0.2782, simple_loss=0.3379, pruned_loss=0.1093, over 7369.00 frames.], tot_loss[loss=0.2855, simple_loss=0.3439, pruned_loss=0.1135, over 1421241.13 frames.], batch size: 23, lr: 1.37e-03 2022-05-26 17:57:43,094 INFO [train.py:842] (0/4) Epoch 
3, batch 3700, loss[loss=0.2411, simple_loss=0.3056, pruned_loss=0.08829, over 7410.00 frames.], tot_loss[loss=0.285, simple_loss=0.3436, pruned_loss=0.1132, over 1417327.72 frames.], batch size: 18, lr: 1.37e-03 2022-05-26 17:58:22,042 INFO [train.py:842] (0/4) Epoch 3, batch 3750, loss[loss=0.2073, simple_loss=0.2827, pruned_loss=0.06594, over 7292.00 frames.], tot_loss[loss=0.2825, simple_loss=0.3419, pruned_loss=0.1115, over 1423661.62 frames.], batch size: 18, lr: 1.37e-03 2022-05-26 17:59:00,549 INFO [train.py:842] (0/4) Epoch 3, batch 3800, loss[loss=0.244, simple_loss=0.3162, pruned_loss=0.08588, over 7162.00 frames.], tot_loss[loss=0.2814, simple_loss=0.3412, pruned_loss=0.1108, over 1424293.16 frames.], batch size: 18, lr: 1.37e-03 2022-05-26 17:59:39,242 INFO [train.py:842] (0/4) Epoch 3, batch 3850, loss[loss=0.2531, simple_loss=0.3215, pruned_loss=0.09231, over 7345.00 frames.], tot_loss[loss=0.2792, simple_loss=0.3393, pruned_loss=0.1096, over 1422596.46 frames.], batch size: 22, lr: 1.37e-03 2022-05-26 18:00:17,856 INFO [train.py:842] (0/4) Epoch 3, batch 3900, loss[loss=0.2655, simple_loss=0.3292, pruned_loss=0.1009, over 7334.00 frames.], tot_loss[loss=0.281, simple_loss=0.3406, pruned_loss=0.1107, over 1423764.88 frames.], batch size: 20, lr: 1.37e-03 2022-05-26 18:00:57,115 INFO [train.py:842] (0/4) Epoch 3, batch 3950, loss[loss=0.2918, simple_loss=0.358, pruned_loss=0.1128, over 7311.00 frames.], tot_loss[loss=0.2808, simple_loss=0.3406, pruned_loss=0.1105, over 1421278.78 frames.], batch size: 21, lr: 1.37e-03 2022-05-26 18:01:35,689 INFO [train.py:842] (0/4) Epoch 3, batch 4000, loss[loss=0.344, simple_loss=0.3907, pruned_loss=0.1487, over 7335.00 frames.], tot_loss[loss=0.2795, simple_loss=0.3396, pruned_loss=0.1096, over 1425927.79 frames.], batch size: 22, lr: 1.36e-03 2022-05-26 18:02:15,091 INFO [train.py:842] (0/4) Epoch 3, batch 4050, loss[loss=0.2162, simple_loss=0.3032, pruned_loss=0.06461, over 7440.00 frames.], tot_loss[loss=0.2782, simple_loss=0.3386, pruned_loss=0.1089, over 1425466.74 frames.], batch size: 20, lr: 1.36e-03 2022-05-26 18:02:53,409 INFO [train.py:842] (0/4) Epoch 3, batch 4100, loss[loss=0.2346, simple_loss=0.2979, pruned_loss=0.0857, over 7055.00 frames.], tot_loss[loss=0.2788, simple_loss=0.3392, pruned_loss=0.1092, over 1416690.56 frames.], batch size: 18, lr: 1.36e-03 2022-05-26 18:03:32,194 INFO [train.py:842] (0/4) Epoch 3, batch 4150, loss[loss=0.2682, simple_loss=0.3329, pruned_loss=0.1017, over 7303.00 frames.], tot_loss[loss=0.2773, simple_loss=0.3385, pruned_loss=0.108, over 1422016.10 frames.], batch size: 25, lr: 1.36e-03 2022-05-26 18:04:10,665 INFO [train.py:842] (0/4) Epoch 3, batch 4200, loss[loss=0.2737, simple_loss=0.3376, pruned_loss=0.1049, over 7220.00 frames.], tot_loss[loss=0.2787, simple_loss=0.3395, pruned_loss=0.109, over 1420591.34 frames.], batch size: 22, lr: 1.36e-03 2022-05-26 18:04:49,675 INFO [train.py:842] (0/4) Epoch 3, batch 4250, loss[loss=0.2408, simple_loss=0.3121, pruned_loss=0.08473, over 7254.00 frames.], tot_loss[loss=0.278, simple_loss=0.3393, pruned_loss=0.1083, over 1425585.84 frames.], batch size: 19, lr: 1.36e-03 2022-05-26 18:05:28,270 INFO [train.py:842] (0/4) Epoch 3, batch 4300, loss[loss=0.2745, simple_loss=0.3295, pruned_loss=0.1098, over 6790.00 frames.], tot_loss[loss=0.2779, simple_loss=0.3393, pruned_loss=0.1083, over 1425231.18 frames.], batch size: 15, lr: 1.36e-03 2022-05-26 18:06:07,370 INFO [train.py:842] (0/4) Epoch 3, batch 4350, loss[loss=0.2443, simple_loss=0.3063, 
pruned_loss=0.09122, over 7355.00 frames.], tot_loss[loss=0.2752, simple_loss=0.3372, pruned_loss=0.1067, over 1428097.72 frames.], batch size: 19, lr: 1.35e-03 2022-05-26 18:06:45,881 INFO [train.py:842] (0/4) Epoch 3, batch 4400, loss[loss=0.3328, simple_loss=0.3858, pruned_loss=0.1399, over 7424.00 frames.], tot_loss[loss=0.2783, simple_loss=0.3398, pruned_loss=0.1084, over 1429084.81 frames.], batch size: 20, lr: 1.35e-03 2022-05-26 18:07:24,812 INFO [train.py:842] (0/4) Epoch 3, batch 4450, loss[loss=0.2706, simple_loss=0.323, pruned_loss=0.1091, over 6979.00 frames.], tot_loss[loss=0.279, simple_loss=0.3398, pruned_loss=0.1091, over 1432866.32 frames.], batch size: 16, lr: 1.35e-03 2022-05-26 18:08:03,430 INFO [train.py:842] (0/4) Epoch 3, batch 4500, loss[loss=0.2752, simple_loss=0.3416, pruned_loss=0.1044, over 7206.00 frames.], tot_loss[loss=0.279, simple_loss=0.3396, pruned_loss=0.1092, over 1427276.83 frames.], batch size: 23, lr: 1.35e-03 2022-05-26 18:08:42,197 INFO [train.py:842] (0/4) Epoch 3, batch 4550, loss[loss=0.2691, simple_loss=0.3286, pruned_loss=0.1048, over 7328.00 frames.], tot_loss[loss=0.279, simple_loss=0.3391, pruned_loss=0.1095, over 1426995.58 frames.], batch size: 20, lr: 1.35e-03 2022-05-26 18:09:20,902 INFO [train.py:842] (0/4) Epoch 3, batch 4600, loss[loss=0.2392, simple_loss=0.2969, pruned_loss=0.09072, over 7408.00 frames.], tot_loss[loss=0.2781, simple_loss=0.3387, pruned_loss=0.1087, over 1428120.83 frames.], batch size: 18, lr: 1.35e-03 2022-05-26 18:10:00,052 INFO [train.py:842] (0/4) Epoch 3, batch 4650, loss[loss=0.2633, simple_loss=0.3077, pruned_loss=0.1095, over 7028.00 frames.], tot_loss[loss=0.2798, simple_loss=0.3403, pruned_loss=0.1097, over 1428696.46 frames.], batch size: 16, lr: 1.35e-03 2022-05-26 18:10:38,689 INFO [train.py:842] (0/4) Epoch 3, batch 4700, loss[loss=0.2646, simple_loss=0.3343, pruned_loss=0.0975, over 7251.00 frames.], tot_loss[loss=0.28, simple_loss=0.3404, pruned_loss=0.1099, over 1432981.04 frames.], batch size: 19, lr: 1.34e-03 2022-05-26 18:11:17,506 INFO [train.py:842] (0/4) Epoch 3, batch 4750, loss[loss=0.2765, simple_loss=0.3507, pruned_loss=0.1012, over 7096.00 frames.], tot_loss[loss=0.2795, simple_loss=0.34, pruned_loss=0.1095, over 1433848.92 frames.], batch size: 28, lr: 1.34e-03 2022-05-26 18:11:55,921 INFO [train.py:842] (0/4) Epoch 3, batch 4800, loss[loss=0.2169, simple_loss=0.2796, pruned_loss=0.0771, over 7270.00 frames.], tot_loss[loss=0.2807, simple_loss=0.341, pruned_loss=0.1102, over 1433655.66 frames.], batch size: 17, lr: 1.34e-03 2022-05-26 18:12:34,712 INFO [train.py:842] (0/4) Epoch 3, batch 4850, loss[loss=0.2517, simple_loss=0.3228, pruned_loss=0.09028, over 7320.00 frames.], tot_loss[loss=0.2819, simple_loss=0.3419, pruned_loss=0.1109, over 1430179.57 frames.], batch size: 21, lr: 1.34e-03 2022-05-26 18:13:13,093 INFO [train.py:842] (0/4) Epoch 3, batch 4900, loss[loss=0.27, simple_loss=0.3392, pruned_loss=0.1004, over 7242.00 frames.], tot_loss[loss=0.2807, simple_loss=0.3413, pruned_loss=0.1101, over 1427545.87 frames.], batch size: 20, lr: 1.34e-03 2022-05-26 18:13:51,820 INFO [train.py:842] (0/4) Epoch 3, batch 4950, loss[loss=0.2758, simple_loss=0.3441, pruned_loss=0.1038, over 7123.00 frames.], tot_loss[loss=0.28, simple_loss=0.3411, pruned_loss=0.1095, over 1426185.38 frames.], batch size: 21, lr: 1.34e-03 2022-05-26 18:14:30,290 INFO [train.py:842] (0/4) Epoch 3, batch 5000, loss[loss=0.2819, simple_loss=0.3423, pruned_loss=0.1107, over 7151.00 frames.], tot_loss[loss=0.2786, 
simple_loss=0.3398, pruned_loss=0.1087, over 1420499.00 frames.], batch size: 19, lr: 1.34e-03 2022-05-26 18:15:09,119 INFO [train.py:842] (0/4) Epoch 3, batch 5050, loss[loss=0.2769, simple_loss=0.3443, pruned_loss=0.1048, over 7317.00 frames.], tot_loss[loss=0.2764, simple_loss=0.3378, pruned_loss=0.1075, over 1421939.51 frames.], batch size: 21, lr: 1.33e-03 2022-05-26 18:15:47,520 INFO [train.py:842] (0/4) Epoch 3, batch 5100, loss[loss=0.2409, simple_loss=0.3147, pruned_loss=0.08356, over 7367.00 frames.], tot_loss[loss=0.2775, simple_loss=0.3389, pruned_loss=0.1081, over 1423536.97 frames.], batch size: 19, lr: 1.33e-03 2022-05-26 18:16:26,335 INFO [train.py:842] (0/4) Epoch 3, batch 5150, loss[loss=0.2331, simple_loss=0.298, pruned_loss=0.08406, over 7160.00 frames.], tot_loss[loss=0.2773, simple_loss=0.3384, pruned_loss=0.108, over 1423629.10 frames.], batch size: 18, lr: 1.33e-03 2022-05-26 18:17:04,907 INFO [train.py:842] (0/4) Epoch 3, batch 5200, loss[loss=0.295, simple_loss=0.3435, pruned_loss=0.1232, over 7196.00 frames.], tot_loss[loss=0.2789, simple_loss=0.339, pruned_loss=0.1094, over 1423298.62 frames.], batch size: 22, lr: 1.33e-03 2022-05-26 18:17:43,696 INFO [train.py:842] (0/4) Epoch 3, batch 5250, loss[loss=0.2939, simple_loss=0.3472, pruned_loss=0.1203, over 7326.00 frames.], tot_loss[loss=0.2808, simple_loss=0.3405, pruned_loss=0.1106, over 1423223.41 frames.], batch size: 20, lr: 1.33e-03 2022-05-26 18:18:22,089 INFO [train.py:842] (0/4) Epoch 3, batch 5300, loss[loss=0.2616, simple_loss=0.33, pruned_loss=0.09659, over 7315.00 frames.], tot_loss[loss=0.2826, simple_loss=0.3422, pruned_loss=0.1114, over 1420277.67 frames.], batch size: 25, lr: 1.33e-03 2022-05-26 18:19:00,847 INFO [train.py:842] (0/4) Epoch 3, batch 5350, loss[loss=0.2752, simple_loss=0.3449, pruned_loss=0.1028, over 7408.00 frames.], tot_loss[loss=0.2822, simple_loss=0.3427, pruned_loss=0.1108, over 1416845.46 frames.], batch size: 21, lr: 1.33e-03 2022-05-26 18:19:39,499 INFO [train.py:842] (0/4) Epoch 3, batch 5400, loss[loss=0.3047, simple_loss=0.3666, pruned_loss=0.1214, over 7076.00 frames.], tot_loss[loss=0.2821, simple_loss=0.3426, pruned_loss=0.1108, over 1416047.26 frames.], batch size: 18, lr: 1.33e-03 2022-05-26 18:20:18,486 INFO [train.py:842] (0/4) Epoch 3, batch 5450, loss[loss=0.2999, simple_loss=0.347, pruned_loss=0.1264, over 7158.00 frames.], tot_loss[loss=0.2821, simple_loss=0.3419, pruned_loss=0.1111, over 1417129.85 frames.], batch size: 18, lr: 1.32e-03 2022-05-26 18:20:57,208 INFO [train.py:842] (0/4) Epoch 3, batch 5500, loss[loss=0.3608, simple_loss=0.3954, pruned_loss=0.1631, over 7217.00 frames.], tot_loss[loss=0.2798, simple_loss=0.3401, pruned_loss=0.1098, over 1419300.27 frames.], batch size: 21, lr: 1.32e-03 2022-05-26 18:21:35,993 INFO [train.py:842] (0/4) Epoch 3, batch 5550, loss[loss=0.2845, simple_loss=0.3328, pruned_loss=0.1181, over 7226.00 frames.], tot_loss[loss=0.28, simple_loss=0.3402, pruned_loss=0.1099, over 1418716.34 frames.], batch size: 16, lr: 1.32e-03 2022-05-26 18:22:14,563 INFO [train.py:842] (0/4) Epoch 3, batch 5600, loss[loss=0.3006, simple_loss=0.3544, pruned_loss=0.1234, over 7207.00 frames.], tot_loss[loss=0.2778, simple_loss=0.3387, pruned_loss=0.1085, over 1423169.14 frames.], batch size: 26, lr: 1.32e-03 2022-05-26 18:22:25,708 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-24000.pt 2022-05-26 18:22:55,841 INFO [train.py:842] (0/4) Epoch 3, batch 5650, loss[loss=0.2823, 
simple_loss=0.3362, pruned_loss=0.1142, over 7284.00 frames.], tot_loss[loss=0.2773, simple_loss=0.3384, pruned_loss=0.1081, over 1424256.17 frames.], batch size: 17, lr: 1.32e-03 2022-05-26 18:23:34,208 INFO [train.py:842] (0/4) Epoch 3, batch 5700, loss[loss=0.2932, simple_loss=0.3578, pruned_loss=0.1143, over 6364.00 frames.], tot_loss[loss=0.2767, simple_loss=0.3386, pruned_loss=0.1075, over 1423528.39 frames.], batch size: 37, lr: 1.32e-03 2022-05-26 18:24:13,058 INFO [train.py:842] (0/4) Epoch 3, batch 5750, loss[loss=0.3228, simple_loss=0.3694, pruned_loss=0.1382, over 5218.00 frames.], tot_loss[loss=0.274, simple_loss=0.3367, pruned_loss=0.1056, over 1420212.97 frames.], batch size: 52, lr: 1.32e-03 2022-05-26 18:24:51,637 INFO [train.py:842] (0/4) Epoch 3, batch 5800, loss[loss=0.2729, simple_loss=0.3295, pruned_loss=0.1081, over 7150.00 frames.], tot_loss[loss=0.2731, simple_loss=0.336, pruned_loss=0.1052, over 1424985.21 frames.], batch size: 17, lr: 1.31e-03 2022-05-26 18:25:30,393 INFO [train.py:842] (0/4) Epoch 3, batch 5850, loss[loss=0.3698, simple_loss=0.397, pruned_loss=0.1712, over 4978.00 frames.], tot_loss[loss=0.275, simple_loss=0.3372, pruned_loss=0.1065, over 1423570.64 frames.], batch size: 52, lr: 1.31e-03 2022-05-26 18:26:08,878 INFO [train.py:842] (0/4) Epoch 3, batch 5900, loss[loss=0.2769, simple_loss=0.3379, pruned_loss=0.1079, over 7219.00 frames.], tot_loss[loss=0.2764, simple_loss=0.3384, pruned_loss=0.1072, over 1426121.68 frames.], batch size: 21, lr: 1.31e-03 2022-05-26 18:26:47,686 INFO [train.py:842] (0/4) Epoch 3, batch 5950, loss[loss=0.2876, simple_loss=0.338, pruned_loss=0.1186, over 7377.00 frames.], tot_loss[loss=0.277, simple_loss=0.3391, pruned_loss=0.1075, over 1423564.13 frames.], batch size: 23, lr: 1.31e-03 2022-05-26 18:27:26,257 INFO [train.py:842] (0/4) Epoch 3, batch 6000, loss[loss=0.2162, simple_loss=0.2806, pruned_loss=0.07589, over 7314.00 frames.], tot_loss[loss=0.2759, simple_loss=0.3379, pruned_loss=0.1069, over 1421865.58 frames.], batch size: 17, lr: 1.31e-03 2022-05-26 18:27:26,258 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 18:27:36,053 INFO [train.py:871] (0/4) Epoch 3, validation: loss=0.2084, simple_loss=0.3064, pruned_loss=0.05526, over 868885.00 frames. 
2022-05-26 18:28:14,674 INFO [train.py:842] (0/4) Epoch 3, batch 6050, loss[loss=0.3211, simple_loss=0.3839, pruned_loss=0.1292, over 7144.00 frames.], tot_loss[loss=0.2764, simple_loss=0.3385, pruned_loss=0.1071, over 1421189.13 frames.], batch size: 20, lr: 1.31e-03 2022-05-26 18:28:53,175 INFO [train.py:842] (0/4) Epoch 3, batch 6100, loss[loss=0.3286, simple_loss=0.3726, pruned_loss=0.1423, over 7173.00 frames.], tot_loss[loss=0.2765, simple_loss=0.3382, pruned_loss=0.1074, over 1420007.92 frames.], batch size: 26, lr: 1.31e-03 2022-05-26 18:29:31,953 INFO [train.py:842] (0/4) Epoch 3, batch 6150, loss[loss=0.2756, simple_loss=0.3372, pruned_loss=0.107, over 7418.00 frames.], tot_loss[loss=0.2751, simple_loss=0.3368, pruned_loss=0.1067, over 1418826.20 frames.], batch size: 21, lr: 1.31e-03 2022-05-26 18:30:10,392 INFO [train.py:842] (0/4) Epoch 3, batch 6200, loss[loss=0.3486, simple_loss=0.3861, pruned_loss=0.1556, over 5154.00 frames.], tot_loss[loss=0.2754, simple_loss=0.3368, pruned_loss=0.107, over 1416246.12 frames.], batch size: 52, lr: 1.30e-03 2022-05-26 18:30:49,611 INFO [train.py:842] (0/4) Epoch 3, batch 6250, loss[loss=0.2914, simple_loss=0.3527, pruned_loss=0.1151, over 6711.00 frames.], tot_loss[loss=0.2726, simple_loss=0.3346, pruned_loss=0.1053, over 1418222.28 frames.], batch size: 31, lr: 1.30e-03 2022-05-26 18:31:28,263 INFO [train.py:842] (0/4) Epoch 3, batch 6300, loss[loss=0.3688, simple_loss=0.3837, pruned_loss=0.177, over 7163.00 frames.], tot_loss[loss=0.276, simple_loss=0.3372, pruned_loss=0.1074, over 1413118.73 frames.], batch size: 18, lr: 1.30e-03 2022-05-26 18:32:07,485 INFO [train.py:842] (0/4) Epoch 3, batch 6350, loss[loss=0.2524, simple_loss=0.304, pruned_loss=0.1004, over 7288.00 frames.], tot_loss[loss=0.2747, simple_loss=0.336, pruned_loss=0.1067, over 1418654.03 frames.], batch size: 17, lr: 1.30e-03 2022-05-26 18:32:46,049 INFO [train.py:842] (0/4) Epoch 3, batch 6400, loss[loss=0.3278, simple_loss=0.394, pruned_loss=0.1308, over 7338.00 frames.], tot_loss[loss=0.2769, simple_loss=0.3375, pruned_loss=0.1081, over 1417896.90 frames.], batch size: 25, lr: 1.30e-03 2022-05-26 18:33:24,827 INFO [train.py:842] (0/4) Epoch 3, batch 6450, loss[loss=0.3354, simple_loss=0.3942, pruned_loss=0.1383, over 7120.00 frames.], tot_loss[loss=0.2754, simple_loss=0.3365, pruned_loss=0.1072, over 1418268.11 frames.], batch size: 21, lr: 1.30e-03 2022-05-26 18:34:03,177 INFO [train.py:842] (0/4) Epoch 3, batch 6500, loss[loss=0.2335, simple_loss=0.2892, pruned_loss=0.08895, over 7291.00 frames.], tot_loss[loss=0.2758, simple_loss=0.337, pruned_loss=0.1074, over 1416934.92 frames.], batch size: 18, lr: 1.30e-03 2022-05-26 18:34:42,299 INFO [train.py:842] (0/4) Epoch 3, batch 6550, loss[loss=0.3386, simple_loss=0.3716, pruned_loss=0.1527, over 7150.00 frames.], tot_loss[loss=0.2763, simple_loss=0.3377, pruned_loss=0.1075, over 1421626.52 frames.], batch size: 20, lr: 1.30e-03 2022-05-26 18:35:21,029 INFO [train.py:842] (0/4) Epoch 3, batch 6600, loss[loss=0.3697, simple_loss=0.4026, pruned_loss=0.1684, over 7120.00 frames.], tot_loss[loss=0.2763, simple_loss=0.3376, pruned_loss=0.1075, over 1421725.99 frames.], batch size: 21, lr: 1.29e-03 2022-05-26 18:36:00,215 INFO [train.py:842] (0/4) Epoch 3, batch 6650, loss[loss=0.2913, simple_loss=0.3546, pruned_loss=0.114, over 7113.00 frames.], tot_loss[loss=0.2759, simple_loss=0.3374, pruned_loss=0.1072, over 1421825.38 frames.], batch size: 21, lr: 1.29e-03 2022-05-26 18:36:38,882 INFO [train.py:842] (0/4) Epoch 3, batch 
6700, loss[loss=0.3065, simple_loss=0.369, pruned_loss=0.122, over 7211.00 frames.], tot_loss[loss=0.2762, simple_loss=0.3376, pruned_loss=0.1074, over 1418996.41 frames.], batch size: 21, lr: 1.29e-03 2022-05-26 18:37:17,628 INFO [train.py:842] (0/4) Epoch 3, batch 6750, loss[loss=0.2281, simple_loss=0.2861, pruned_loss=0.08504, over 6850.00 frames.], tot_loss[loss=0.2758, simple_loss=0.3377, pruned_loss=0.1069, over 1420623.48 frames.], batch size: 15, lr: 1.29e-03 2022-05-26 18:37:56,016 INFO [train.py:842] (0/4) Epoch 3, batch 6800, loss[loss=0.3268, simple_loss=0.3693, pruned_loss=0.1421, over 7212.00 frames.], tot_loss[loss=0.2734, simple_loss=0.3358, pruned_loss=0.1055, over 1421502.36 frames.], batch size: 26, lr: 1.29e-03 2022-05-26 18:38:34,911 INFO [train.py:842] (0/4) Epoch 3, batch 6850, loss[loss=0.2403, simple_loss=0.313, pruned_loss=0.08378, over 7362.00 frames.], tot_loss[loss=0.2723, simple_loss=0.3347, pruned_loss=0.105, over 1423128.20 frames.], batch size: 23, lr: 1.29e-03 2022-05-26 18:39:13,429 INFO [train.py:842] (0/4) Epoch 3, batch 6900, loss[loss=0.2902, simple_loss=0.3351, pruned_loss=0.1226, over 7143.00 frames.], tot_loss[loss=0.2735, simple_loss=0.3357, pruned_loss=0.1057, over 1425932.82 frames.], batch size: 17, lr: 1.29e-03 2022-05-26 18:39:52,125 INFO [train.py:842] (0/4) Epoch 3, batch 6950, loss[loss=0.266, simple_loss=0.3375, pruned_loss=0.09721, over 7228.00 frames.], tot_loss[loss=0.2738, simple_loss=0.336, pruned_loss=0.1057, over 1427195.21 frames.], batch size: 21, lr: 1.29e-03 2022-05-26 18:40:30,491 INFO [train.py:842] (0/4) Epoch 3, batch 7000, loss[loss=0.298, simple_loss=0.3556, pruned_loss=0.1202, over 7310.00 frames.], tot_loss[loss=0.2727, simple_loss=0.3355, pruned_loss=0.105, over 1425382.63 frames.], batch size: 21, lr: 1.28e-03 2022-05-26 18:41:09,291 INFO [train.py:842] (0/4) Epoch 3, batch 7050, loss[loss=0.3493, simple_loss=0.3919, pruned_loss=0.1533, over 7206.00 frames.], tot_loss[loss=0.2727, simple_loss=0.3353, pruned_loss=0.1051, over 1423400.99 frames.], batch size: 22, lr: 1.28e-03 2022-05-26 18:41:47,871 INFO [train.py:842] (0/4) Epoch 3, batch 7100, loss[loss=0.2369, simple_loss=0.3082, pruned_loss=0.08282, over 7240.00 frames.], tot_loss[loss=0.2721, simple_loss=0.3348, pruned_loss=0.1047, over 1422737.76 frames.], batch size: 20, lr: 1.28e-03 2022-05-26 18:42:26,617 INFO [train.py:842] (0/4) Epoch 3, batch 7150, loss[loss=0.2979, simple_loss=0.346, pruned_loss=0.125, over 7159.00 frames.], tot_loss[loss=0.274, simple_loss=0.3362, pruned_loss=0.1059, over 1422885.81 frames.], batch size: 19, lr: 1.28e-03 2022-05-26 18:43:05,181 INFO [train.py:842] (0/4) Epoch 3, batch 7200, loss[loss=0.4277, simple_loss=0.4312, pruned_loss=0.2121, over 5067.00 frames.], tot_loss[loss=0.2771, simple_loss=0.3387, pruned_loss=0.1078, over 1416081.65 frames.], batch size: 53, lr: 1.28e-03 2022-05-26 18:43:43,952 INFO [train.py:842] (0/4) Epoch 3, batch 7250, loss[loss=0.2424, simple_loss=0.3078, pruned_loss=0.08855, over 7260.00 frames.], tot_loss[loss=0.2768, simple_loss=0.3386, pruned_loss=0.1075, over 1418165.65 frames.], batch size: 17, lr: 1.28e-03 2022-05-26 18:44:22,283 INFO [train.py:842] (0/4) Epoch 3, batch 7300, loss[loss=0.2863, simple_loss=0.3559, pruned_loss=0.1083, over 7321.00 frames.], tot_loss[loss=0.2768, simple_loss=0.3392, pruned_loss=0.1072, over 1421095.61 frames.], batch size: 21, lr: 1.28e-03 2022-05-26 18:45:01,013 INFO [train.py:842] (0/4) Epoch 3, batch 7350, loss[loss=0.3632, simple_loss=0.3959, pruned_loss=0.1652, 
over 7288.00 frames.], tot_loss[loss=0.2767, simple_loss=0.339, pruned_loss=0.1072, over 1423006.05 frames.], batch size: 24, lr: 1.28e-03 2022-05-26 18:45:39,599 INFO [train.py:842] (0/4) Epoch 3, batch 7400, loss[loss=0.2577, simple_loss=0.3149, pruned_loss=0.1002, over 7264.00 frames.], tot_loss[loss=0.2761, simple_loss=0.3386, pruned_loss=0.1068, over 1426441.23 frames.], batch size: 19, lr: 1.27e-03 2022-05-26 18:46:18,833 INFO [train.py:842] (0/4) Epoch 3, batch 7450, loss[loss=0.2584, simple_loss=0.3446, pruned_loss=0.0861, over 7412.00 frames.], tot_loss[loss=0.2757, simple_loss=0.338, pruned_loss=0.1067, over 1425326.63 frames.], batch size: 21, lr: 1.27e-03 2022-05-26 18:46:57,492 INFO [train.py:842] (0/4) Epoch 3, batch 7500, loss[loss=0.2279, simple_loss=0.2922, pruned_loss=0.08175, over 7138.00 frames.], tot_loss[loss=0.2733, simple_loss=0.3363, pruned_loss=0.1051, over 1429094.17 frames.], batch size: 17, lr: 1.27e-03 2022-05-26 18:47:36,162 INFO [train.py:842] (0/4) Epoch 3, batch 7550, loss[loss=0.2914, simple_loss=0.3524, pruned_loss=0.1152, over 7288.00 frames.], tot_loss[loss=0.2748, simple_loss=0.3373, pruned_loss=0.1061, over 1428204.31 frames.], batch size: 24, lr: 1.27e-03 2022-05-26 18:48:14,670 INFO [train.py:842] (0/4) Epoch 3, batch 7600, loss[loss=0.2432, simple_loss=0.3268, pruned_loss=0.07978, over 7343.00 frames.], tot_loss[loss=0.2739, simple_loss=0.3369, pruned_loss=0.1055, over 1424405.98 frames.], batch size: 22, lr: 1.27e-03 2022-05-26 18:48:53,734 INFO [train.py:842] (0/4) Epoch 3, batch 7650, loss[loss=0.2178, simple_loss=0.286, pruned_loss=0.0748, over 7013.00 frames.], tot_loss[loss=0.2732, simple_loss=0.3355, pruned_loss=0.1055, over 1418964.26 frames.], batch size: 16, lr: 1.27e-03 2022-05-26 18:49:32,388 INFO [train.py:842] (0/4) Epoch 3, batch 7700, loss[loss=0.2509, simple_loss=0.3253, pruned_loss=0.08821, over 7079.00 frames.], tot_loss[loss=0.2726, simple_loss=0.3348, pruned_loss=0.1052, over 1419519.33 frames.], batch size: 28, lr: 1.27e-03 2022-05-26 18:50:11,093 INFO [train.py:842] (0/4) Epoch 3, batch 7750, loss[loss=0.2767, simple_loss=0.326, pruned_loss=0.1137, over 6793.00 frames.], tot_loss[loss=0.2725, simple_loss=0.335, pruned_loss=0.105, over 1424247.61 frames.], batch size: 15, lr: 1.27e-03 2022-05-26 18:50:49,654 INFO [train.py:842] (0/4) Epoch 3, batch 7800, loss[loss=0.2548, simple_loss=0.3301, pruned_loss=0.08975, over 7370.00 frames.], tot_loss[loss=0.2706, simple_loss=0.3337, pruned_loss=0.1037, over 1424540.67 frames.], batch size: 23, lr: 1.27e-03 2022-05-26 18:51:28,477 INFO [train.py:842] (0/4) Epoch 3, batch 7850, loss[loss=0.2596, simple_loss=0.3327, pruned_loss=0.09321, over 7167.00 frames.], tot_loss[loss=0.2693, simple_loss=0.3328, pruned_loss=0.1029, over 1423898.24 frames.], batch size: 26, lr: 1.26e-03 2022-05-26 18:52:07,135 INFO [train.py:842] (0/4) Epoch 3, batch 7900, loss[loss=0.4064, simple_loss=0.4048, pruned_loss=0.204, over 7337.00 frames.], tot_loss[loss=0.2696, simple_loss=0.3326, pruned_loss=0.1034, over 1425204.82 frames.], batch size: 20, lr: 1.26e-03 2022-05-26 18:52:45,862 INFO [train.py:842] (0/4) Epoch 3, batch 7950, loss[loss=0.2714, simple_loss=0.3446, pruned_loss=0.09913, over 7231.00 frames.], tot_loss[loss=0.2684, simple_loss=0.3317, pruned_loss=0.1026, over 1424158.88 frames.], batch size: 20, lr: 1.26e-03 2022-05-26 18:53:24,306 INFO [train.py:842] (0/4) Epoch 3, batch 8000, loss[loss=0.3747, simple_loss=0.4052, pruned_loss=0.1721, over 7135.00 frames.], tot_loss[loss=0.2709, 
simple_loss=0.3334, pruned_loss=0.1042, over 1420526.17 frames.], batch size: 26, lr: 1.26e-03 2022-05-26 18:54:03,062 INFO [train.py:842] (0/4) Epoch 3, batch 8050, loss[loss=0.2963, simple_loss=0.3573, pruned_loss=0.1176, over 7191.00 frames.], tot_loss[loss=0.2705, simple_loss=0.3327, pruned_loss=0.1042, over 1418406.33 frames.], batch size: 26, lr: 1.26e-03 2022-05-26 18:54:41,738 INFO [train.py:842] (0/4) Epoch 3, batch 8100, loss[loss=0.2686, simple_loss=0.3347, pruned_loss=0.1012, over 7204.00 frames.], tot_loss[loss=0.2708, simple_loss=0.333, pruned_loss=0.1043, over 1418655.75 frames.], batch size: 22, lr: 1.26e-03 2022-05-26 18:55:20,571 INFO [train.py:842] (0/4) Epoch 3, batch 8150, loss[loss=0.2951, simple_loss=0.3535, pruned_loss=0.1183, over 7254.00 frames.], tot_loss[loss=0.2726, simple_loss=0.334, pruned_loss=0.1056, over 1412956.66 frames.], batch size: 19, lr: 1.26e-03 2022-05-26 18:55:59,058 INFO [train.py:842] (0/4) Epoch 3, batch 8200, loss[loss=0.2304, simple_loss=0.3183, pruned_loss=0.07127, over 7324.00 frames.], tot_loss[loss=0.2719, simple_loss=0.3337, pruned_loss=0.1051, over 1415606.69 frames.], batch size: 20, lr: 1.26e-03 2022-05-26 18:56:37,962 INFO [train.py:842] (0/4) Epoch 3, batch 8250, loss[loss=0.2849, simple_loss=0.3481, pruned_loss=0.1108, over 7263.00 frames.], tot_loss[loss=0.271, simple_loss=0.3331, pruned_loss=0.1044, over 1419587.76 frames.], batch size: 19, lr: 1.26e-03 2022-05-26 18:57:16,527 INFO [train.py:842] (0/4) Epoch 3, batch 8300, loss[loss=0.2144, simple_loss=0.3013, pruned_loss=0.06374, over 7111.00 frames.], tot_loss[loss=0.2713, simple_loss=0.3337, pruned_loss=0.1044, over 1421142.32 frames.], batch size: 21, lr: 1.25e-03 2022-05-26 18:57:55,085 INFO [train.py:842] (0/4) Epoch 3, batch 8350, loss[loss=0.3162, simple_loss=0.3561, pruned_loss=0.1381, over 4869.00 frames.], tot_loss[loss=0.2747, simple_loss=0.3364, pruned_loss=0.1065, over 1417082.38 frames.], batch size: 52, lr: 1.25e-03 2022-05-26 18:58:33,826 INFO [train.py:842] (0/4) Epoch 3, batch 8400, loss[loss=0.2478, simple_loss=0.3188, pruned_loss=0.0884, over 7253.00 frames.], tot_loss[loss=0.2737, simple_loss=0.336, pruned_loss=0.1058, over 1417781.53 frames.], batch size: 19, lr: 1.25e-03 2022-05-26 18:59:12,589 INFO [train.py:842] (0/4) Epoch 3, batch 8450, loss[loss=0.3981, simple_loss=0.4151, pruned_loss=0.1905, over 7089.00 frames.], tot_loss[loss=0.2758, simple_loss=0.3374, pruned_loss=0.1071, over 1417647.94 frames.], batch size: 28, lr: 1.25e-03 2022-05-26 19:00:01,252 INFO [train.py:842] (0/4) Epoch 3, batch 8500, loss[loss=0.2146, simple_loss=0.2982, pruned_loss=0.0655, over 7139.00 frames.], tot_loss[loss=0.2754, simple_loss=0.3368, pruned_loss=0.107, over 1420189.21 frames.], batch size: 19, lr: 1.25e-03 2022-05-26 19:00:39,772 INFO [train.py:842] (0/4) Epoch 3, batch 8550, loss[loss=0.2709, simple_loss=0.3327, pruned_loss=0.1046, over 6294.00 frames.], tot_loss[loss=0.2752, simple_loss=0.3367, pruned_loss=0.1069, over 1417196.38 frames.], batch size: 38, lr: 1.25e-03 2022-05-26 19:01:18,431 INFO [train.py:842] (0/4) Epoch 3, batch 8600, loss[loss=0.2663, simple_loss=0.3132, pruned_loss=0.1097, over 7277.00 frames.], tot_loss[loss=0.2739, simple_loss=0.3354, pruned_loss=0.1063, over 1418496.71 frames.], batch size: 17, lr: 1.25e-03 2022-05-26 19:01:57,288 INFO [train.py:842] (0/4) Epoch 3, batch 8650, loss[loss=0.2382, simple_loss=0.3054, pruned_loss=0.0855, over 7264.00 frames.], tot_loss[loss=0.2731, simple_loss=0.3349, pruned_loss=0.1057, over 1416780.47 
frames.], batch size: 18, lr: 1.25e-03 2022-05-26 19:02:35,785 INFO [train.py:842] (0/4) Epoch 3, batch 8700, loss[loss=0.2535, simple_loss=0.3311, pruned_loss=0.08793, over 7102.00 frames.], tot_loss[loss=0.2743, simple_loss=0.3358, pruned_loss=0.1064, over 1417744.47 frames.], batch size: 28, lr: 1.24e-03 2022-05-26 19:03:14,501 INFO [train.py:842] (0/4) Epoch 3, batch 8750, loss[loss=0.2755, simple_loss=0.3519, pruned_loss=0.09948, over 6967.00 frames.], tot_loss[loss=0.2728, simple_loss=0.3352, pruned_loss=0.1052, over 1418655.22 frames.], batch size: 28, lr: 1.24e-03 2022-05-26 19:03:53,162 INFO [train.py:842] (0/4) Epoch 3, batch 8800, loss[loss=0.2235, simple_loss=0.2911, pruned_loss=0.07795, over 7283.00 frames.], tot_loss[loss=0.2725, simple_loss=0.3354, pruned_loss=0.1048, over 1418387.04 frames.], batch size: 18, lr: 1.24e-03 2022-05-26 19:04:31,958 INFO [train.py:842] (0/4) Epoch 3, batch 8850, loss[loss=0.2356, simple_loss=0.316, pruned_loss=0.07766, over 7337.00 frames.], tot_loss[loss=0.2707, simple_loss=0.3342, pruned_loss=0.1036, over 1419632.20 frames.], batch size: 22, lr: 1.24e-03 2022-05-26 19:05:10,527 INFO [train.py:842] (0/4) Epoch 3, batch 8900, loss[loss=0.2966, simple_loss=0.3579, pruned_loss=0.1177, over 7018.00 frames.], tot_loss[loss=0.2706, simple_loss=0.3341, pruned_loss=0.1036, over 1418204.22 frames.], batch size: 28, lr: 1.24e-03 2022-05-26 19:05:59,446 INFO [train.py:842] (0/4) Epoch 3, batch 8950, loss[loss=0.2233, simple_loss=0.2859, pruned_loss=0.08041, over 7300.00 frames.], tot_loss[loss=0.2729, simple_loss=0.336, pruned_loss=0.1049, over 1412109.30 frames.], batch size: 17, lr: 1.24e-03 2022-05-26 19:06:48,322 INFO [train.py:842] (0/4) Epoch 3, batch 9000, loss[loss=0.2875, simple_loss=0.3391, pruned_loss=0.118, over 4968.00 frames.], tot_loss[loss=0.2754, simple_loss=0.3377, pruned_loss=0.1066, over 1402127.85 frames.], batch size: 52, lr: 1.24e-03 2022-05-26 19:06:48,323 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 19:07:08,221 INFO [train.py:871] (0/4) Epoch 3, validation: loss=0.2052, simple_loss=0.3029, pruned_loss=0.0537, over 868885.00 frames. 
2022-05-26 19:07:46,535 INFO [train.py:842] (0/4) Epoch 3, batch 9050, loss[loss=0.3311, simple_loss=0.3899, pruned_loss=0.1361, over 7303.00 frames.], tot_loss[loss=0.2794, simple_loss=0.3412, pruned_loss=0.1088, over 1388171.87 frames.], batch size: 25, lr: 1.24e-03 2022-05-26 19:08:24,000 INFO [train.py:842] (0/4) Epoch 3, batch 9100, loss[loss=0.3334, simple_loss=0.3695, pruned_loss=0.1486, over 4814.00 frames.], tot_loss[loss=0.2837, simple_loss=0.3449, pruned_loss=0.1113, over 1357431.91 frames.], batch size: 52, lr: 1.24e-03 2022-05-26 19:09:01,535 INFO [train.py:842] (0/4) Epoch 3, batch 9150, loss[loss=0.3071, simple_loss=0.3608, pruned_loss=0.1267, over 5098.00 frames.], tot_loss[loss=0.2905, simple_loss=0.3492, pruned_loss=0.1159, over 1295633.43 frames.], batch size: 53, lr: 1.24e-03 2022-05-26 19:09:32,879 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-3.pt 2022-05-26 19:09:53,254 INFO [train.py:842] (0/4) Epoch 4, batch 0, loss[loss=0.2731, simple_loss=0.3402, pruned_loss=0.1029, over 7190.00 frames.], tot_loss[loss=0.2731, simple_loss=0.3402, pruned_loss=0.1029, over 7190.00 frames.], batch size: 23, lr: 1.20e-03 2022-05-26 19:10:32,529 INFO [train.py:842] (0/4) Epoch 4, batch 50, loss[loss=0.2343, simple_loss=0.2948, pruned_loss=0.08695, over 7269.00 frames.], tot_loss[loss=0.2684, simple_loss=0.3315, pruned_loss=0.1027, over 317597.34 frames.], batch size: 17, lr: 1.20e-03 2022-05-26 19:11:11,264 INFO [train.py:842] (0/4) Epoch 4, batch 100, loss[loss=0.2105, simple_loss=0.2752, pruned_loss=0.0729, over 7300.00 frames.], tot_loss[loss=0.2695, simple_loss=0.3315, pruned_loss=0.1037, over 565009.18 frames.], batch size: 17, lr: 1.20e-03 2022-05-26 19:11:50,050 INFO [train.py:842] (0/4) Epoch 4, batch 150, loss[loss=0.3044, simple_loss=0.3704, pruned_loss=0.1192, over 7337.00 frames.], tot_loss[loss=0.2659, simple_loss=0.3299, pruned_loss=0.101, over 755439.20 frames.], batch size: 22, lr: 1.20e-03 2022-05-26 19:12:28,772 INFO [train.py:842] (0/4) Epoch 4, batch 200, loss[loss=0.2751, simple_loss=0.3402, pruned_loss=0.105, over 7201.00 frames.], tot_loss[loss=0.2687, simple_loss=0.3326, pruned_loss=0.1024, over 904404.40 frames.], batch size: 23, lr: 1.19e-03 2022-05-26 19:13:07,616 INFO [train.py:842] (0/4) Epoch 4, batch 250, loss[loss=0.28, simple_loss=0.3473, pruned_loss=0.1064, over 7320.00 frames.], tot_loss[loss=0.2685, simple_loss=0.333, pruned_loss=0.102, over 1017883.38 frames.], batch size: 22, lr: 1.19e-03 2022-05-26 19:13:46,326 INFO [train.py:842] (0/4) Epoch 4, batch 300, loss[loss=0.2352, simple_loss=0.3101, pruned_loss=0.08022, over 7367.00 frames.], tot_loss[loss=0.2667, simple_loss=0.3318, pruned_loss=0.1007, over 1111720.36 frames.], batch size: 23, lr: 1.19e-03 2022-05-26 19:14:25,606 INFO [train.py:842] (0/4) Epoch 4, batch 350, loss[loss=0.2075, simple_loss=0.2965, pruned_loss=0.05928, over 7318.00 frames.], tot_loss[loss=0.2657, simple_loss=0.331, pruned_loss=0.1002, over 1183455.34 frames.], batch size: 21, lr: 1.19e-03 2022-05-26 19:15:04,154 INFO [train.py:842] (0/4) Epoch 4, batch 400, loss[loss=0.2421, simple_loss=0.3173, pruned_loss=0.08341, over 7225.00 frames.], tot_loss[loss=0.2658, simple_loss=0.3308, pruned_loss=0.1004, over 1232334.90 frames.], batch size: 20, lr: 1.19e-03 2022-05-26 19:15:42,984 INFO [train.py:842] (0/4) Epoch 4, batch 450, loss[loss=0.25, simple_loss=0.3143, pruned_loss=0.09285, over 7157.00 frames.], tot_loss[loss=0.2647, simple_loss=0.3295, pruned_loss=0.09993, over 
1274321.68 frames.], batch size: 20, lr: 1.19e-03 2022-05-26 19:16:21,286 INFO [train.py:842] (0/4) Epoch 4, batch 500, loss[loss=0.2808, simple_loss=0.3542, pruned_loss=0.1037, over 7165.00 frames.], tot_loss[loss=0.2648, simple_loss=0.3302, pruned_loss=0.09972, over 1303710.18 frames.], batch size: 19, lr: 1.19e-03 2022-05-26 19:17:00,276 INFO [train.py:842] (0/4) Epoch 4, batch 550, loss[loss=0.2204, simple_loss=0.2988, pruned_loss=0.07096, over 7167.00 frames.], tot_loss[loss=0.2647, simple_loss=0.3299, pruned_loss=0.09977, over 1330352.94 frames.], batch size: 18, lr: 1.19e-03 2022-05-26 19:17:38,805 INFO [train.py:842] (0/4) Epoch 4, batch 600, loss[loss=0.3588, simple_loss=0.3983, pruned_loss=0.1596, over 6528.00 frames.], tot_loss[loss=0.2668, simple_loss=0.3311, pruned_loss=0.1013, over 1347908.77 frames.], batch size: 37, lr: 1.19e-03 2022-05-26 19:18:17,917 INFO [train.py:842] (0/4) Epoch 4, batch 650, loss[loss=0.2701, simple_loss=0.3314, pruned_loss=0.1044, over 7421.00 frames.], tot_loss[loss=0.2687, simple_loss=0.3328, pruned_loss=0.1023, over 1367951.41 frames.], batch size: 20, lr: 1.18e-03 2022-05-26 19:18:56,785 INFO [train.py:842] (0/4) Epoch 4, batch 700, loss[loss=0.3388, simple_loss=0.3929, pruned_loss=0.1424, over 7301.00 frames.], tot_loss[loss=0.2657, simple_loss=0.3303, pruned_loss=0.1006, over 1385295.09 frames.], batch size: 24, lr: 1.18e-03 2022-05-26 19:19:35,623 INFO [train.py:842] (0/4) Epoch 4, batch 750, loss[loss=0.262, simple_loss=0.3379, pruned_loss=0.09306, over 7250.00 frames.], tot_loss[loss=0.2654, simple_loss=0.3297, pruned_loss=0.1005, over 1393425.52 frames.], batch size: 24, lr: 1.18e-03 2022-05-26 19:20:14,162 INFO [train.py:842] (0/4) Epoch 4, batch 800, loss[loss=0.2709, simple_loss=0.3279, pruned_loss=0.107, over 7265.00 frames.], tot_loss[loss=0.2655, simple_loss=0.3304, pruned_loss=0.1003, over 1398329.33 frames.], batch size: 19, lr: 1.18e-03 2022-05-26 19:20:53,466 INFO [train.py:842] (0/4) Epoch 4, batch 850, loss[loss=0.216, simple_loss=0.2874, pruned_loss=0.07234, over 7053.00 frames.], tot_loss[loss=0.2649, simple_loss=0.33, pruned_loss=0.09992, over 1408494.97 frames.], batch size: 18, lr: 1.18e-03 2022-05-26 19:21:32,121 INFO [train.py:842] (0/4) Epoch 4, batch 900, loss[loss=0.2801, simple_loss=0.3507, pruned_loss=0.1047, over 7110.00 frames.], tot_loss[loss=0.2633, simple_loss=0.329, pruned_loss=0.09885, over 1415881.04 frames.], batch size: 21, lr: 1.18e-03 2022-05-26 19:22:10,974 INFO [train.py:842] (0/4) Epoch 4, batch 950, loss[loss=0.274, simple_loss=0.3507, pruned_loss=0.09869, over 7206.00 frames.], tot_loss[loss=0.2649, simple_loss=0.3299, pruned_loss=0.09992, over 1420902.75 frames.], batch size: 26, lr: 1.18e-03 2022-05-26 19:22:49,539 INFO [train.py:842] (0/4) Epoch 4, batch 1000, loss[loss=0.2178, simple_loss=0.285, pruned_loss=0.07523, over 7279.00 frames.], tot_loss[loss=0.264, simple_loss=0.3288, pruned_loss=0.09962, over 1421753.56 frames.], batch size: 18, lr: 1.18e-03 2022-05-26 19:23:28,172 INFO [train.py:842] (0/4) Epoch 4, batch 1050, loss[loss=0.2912, simple_loss=0.3538, pruned_loss=0.1142, over 6786.00 frames.], tot_loss[loss=0.2661, simple_loss=0.3303, pruned_loss=0.1009, over 1419589.90 frames.], batch size: 31, lr: 1.18e-03 2022-05-26 19:24:06,791 INFO [train.py:842] (0/4) Epoch 4, batch 1100, loss[loss=0.1963, simple_loss=0.2879, pruned_loss=0.05233, over 7425.00 frames.], tot_loss[loss=0.2655, simple_loss=0.3304, pruned_loss=0.1004, over 1420046.34 frames.], batch size: 21, lr: 1.18e-03 2022-05-26 
19:24:45,510 INFO [train.py:842] (0/4) Epoch 4, batch 1150, loss[loss=0.2791, simple_loss=0.3419, pruned_loss=0.1082, over 7320.00 frames.], tot_loss[loss=0.2695, simple_loss=0.3337, pruned_loss=0.1026, over 1416790.39 frames.], batch size: 21, lr: 1.17e-03 2022-05-26 19:25:24,164 INFO [train.py:842] (0/4) Epoch 4, batch 1200, loss[loss=0.3285, simple_loss=0.383, pruned_loss=0.137, over 7313.00 frames.], tot_loss[loss=0.2694, simple_loss=0.3338, pruned_loss=0.1025, over 1414633.52 frames.], batch size: 21, lr: 1.17e-03 2022-05-26 19:26:03,128 INFO [train.py:842] (0/4) Epoch 4, batch 1250, loss[loss=0.1886, simple_loss=0.2614, pruned_loss=0.05796, over 7219.00 frames.], tot_loss[loss=0.2702, simple_loss=0.334, pruned_loss=0.1032, over 1413565.58 frames.], batch size: 16, lr: 1.17e-03 2022-05-26 19:26:45,096 INFO [train.py:842] (0/4) Epoch 4, batch 1300, loss[loss=0.2449, simple_loss=0.324, pruned_loss=0.08293, over 7209.00 frames.], tot_loss[loss=0.2676, simple_loss=0.3317, pruned_loss=0.1018, over 1416641.90 frames.], batch size: 23, lr: 1.17e-03 2022-05-26 19:27:23,946 INFO [train.py:842] (0/4) Epoch 4, batch 1350, loss[loss=0.2331, simple_loss=0.3096, pruned_loss=0.07825, over 7233.00 frames.], tot_loss[loss=0.2659, simple_loss=0.3305, pruned_loss=0.1007, over 1416514.36 frames.], batch size: 20, lr: 1.17e-03 2022-05-26 19:28:02,845 INFO [train.py:842] (0/4) Epoch 4, batch 1400, loss[loss=0.2733, simple_loss=0.3348, pruned_loss=0.1059, over 7229.00 frames.], tot_loss[loss=0.2645, simple_loss=0.3292, pruned_loss=0.09987, over 1419621.13 frames.], batch size: 22, lr: 1.17e-03 2022-05-26 19:28:42,068 INFO [train.py:842] (0/4) Epoch 4, batch 1450, loss[loss=0.2628, simple_loss=0.3428, pruned_loss=0.09144, over 7289.00 frames.], tot_loss[loss=0.2649, simple_loss=0.3301, pruned_loss=0.09984, over 1421470.79 frames.], batch size: 24, lr: 1.17e-03 2022-05-26 19:29:23,388 INFO [train.py:842] (0/4) Epoch 4, batch 1500, loss[loss=0.2407, simple_loss=0.3233, pruned_loss=0.07904, over 7282.00 frames.], tot_loss[loss=0.2632, simple_loss=0.3287, pruned_loss=0.09887, over 1418717.83 frames.], batch size: 24, lr: 1.17e-03 2022-05-26 19:30:02,816 INFO [train.py:842] (0/4) Epoch 4, batch 1550, loss[loss=0.3148, simple_loss=0.3581, pruned_loss=0.1358, over 4680.00 frames.], tot_loss[loss=0.2627, simple_loss=0.3283, pruned_loss=0.09861, over 1416862.70 frames.], batch size: 52, lr: 1.17e-03 2022-05-26 19:30:42,102 INFO [train.py:842] (0/4) Epoch 4, batch 1600, loss[loss=0.2494, simple_loss=0.335, pruned_loss=0.08191, over 7328.00 frames.], tot_loss[loss=0.2638, simple_loss=0.3297, pruned_loss=0.09902, over 1413983.54 frames.], batch size: 25, lr: 1.17e-03 2022-05-26 19:31:21,038 INFO [train.py:842] (0/4) Epoch 4, batch 1650, loss[loss=0.2671, simple_loss=0.3391, pruned_loss=0.09754, over 7320.00 frames.], tot_loss[loss=0.2653, simple_loss=0.3303, pruned_loss=0.1001, over 1415181.33 frames.], batch size: 20, lr: 1.17e-03 2022-05-26 19:31:59,653 INFO [train.py:842] (0/4) Epoch 4, batch 1700, loss[loss=0.2829, simple_loss=0.3489, pruned_loss=0.1084, over 7144.00 frames.], tot_loss[loss=0.2644, simple_loss=0.3303, pruned_loss=0.09932, over 1418512.73 frames.], batch size: 20, lr: 1.16e-03 2022-05-26 19:32:38,143 INFO [train.py:842] (0/4) Epoch 4, batch 1750, loss[loss=0.2961, simple_loss=0.3601, pruned_loss=0.1161, over 7197.00 frames.], tot_loss[loss=0.2652, simple_loss=0.3308, pruned_loss=0.09981, over 1418535.00 frames.], batch size: 22, lr: 1.16e-03 2022-05-26 19:33:16,553 INFO [train.py:842] (0/4) Epoch 4, 
batch 1800, loss[loss=0.2383, simple_loss=0.3061, pruned_loss=0.08527, over 7221.00 frames.], tot_loss[loss=0.2661, simple_loss=0.3316, pruned_loss=0.1003, over 1420493.03 frames.], batch size: 21, lr: 1.16e-03 2022-05-26 19:33:55,178 INFO [train.py:842] (0/4) Epoch 4, batch 1850, loss[loss=0.2341, simple_loss=0.296, pruned_loss=0.08604, over 7133.00 frames.], tot_loss[loss=0.2664, simple_loss=0.332, pruned_loss=0.1004, over 1419719.81 frames.], batch size: 17, lr: 1.16e-03 2022-05-26 19:34:33,646 INFO [train.py:842] (0/4) Epoch 4, batch 1900, loss[loss=0.2485, simple_loss=0.3267, pruned_loss=0.0851, over 7163.00 frames.], tot_loss[loss=0.2664, simple_loss=0.3322, pruned_loss=0.1003, over 1422902.34 frames.], batch size: 19, lr: 1.16e-03 2022-05-26 19:35:12,540 INFO [train.py:842] (0/4) Epoch 4, batch 1950, loss[loss=0.281, simple_loss=0.3515, pruned_loss=0.1053, over 6410.00 frames.], tot_loss[loss=0.2682, simple_loss=0.334, pruned_loss=0.1012, over 1427589.04 frames.], batch size: 38, lr: 1.16e-03 2022-05-26 19:35:51,015 INFO [train.py:842] (0/4) Epoch 4, batch 2000, loss[loss=0.2736, simple_loss=0.3417, pruned_loss=0.1028, over 7117.00 frames.], tot_loss[loss=0.2701, simple_loss=0.3355, pruned_loss=0.1023, over 1424340.46 frames.], batch size: 21, lr: 1.16e-03 2022-05-26 19:36:29,967 INFO [train.py:842] (0/4) Epoch 4, batch 2050, loss[loss=0.2373, simple_loss=0.3116, pruned_loss=0.08149, over 6831.00 frames.], tot_loss[loss=0.2702, simple_loss=0.335, pruned_loss=0.1026, over 1422410.27 frames.], batch size: 31, lr: 1.16e-03 2022-05-26 19:37:08,654 INFO [train.py:842] (0/4) Epoch 4, batch 2100, loss[loss=0.3055, simple_loss=0.3688, pruned_loss=0.1211, over 7319.00 frames.], tot_loss[loss=0.2699, simple_loss=0.3348, pruned_loss=0.1025, over 1420390.60 frames.], batch size: 21, lr: 1.16e-03 2022-05-26 19:37:47,582 INFO [train.py:842] (0/4) Epoch 4, batch 2150, loss[loss=0.2552, simple_loss=0.3227, pruned_loss=0.09383, over 7332.00 frames.], tot_loss[loss=0.2688, simple_loss=0.3341, pruned_loss=0.1018, over 1422589.45 frames.], batch size: 22, lr: 1.16e-03 2022-05-26 19:38:26,054 INFO [train.py:842] (0/4) Epoch 4, batch 2200, loss[loss=0.2563, simple_loss=0.3315, pruned_loss=0.09059, over 7227.00 frames.], tot_loss[loss=0.267, simple_loss=0.3321, pruned_loss=0.101, over 1425426.26 frames.], batch size: 21, lr: 1.15e-03 2022-05-26 19:39:04,827 INFO [train.py:842] (0/4) Epoch 4, batch 2250, loss[loss=0.3266, simple_loss=0.3762, pruned_loss=0.1385, over 5214.00 frames.], tot_loss[loss=0.2656, simple_loss=0.3314, pruned_loss=0.09984, over 1427236.70 frames.], batch size: 52, lr: 1.15e-03 2022-05-26 19:39:43,447 INFO [train.py:842] (0/4) Epoch 4, batch 2300, loss[loss=0.3085, simple_loss=0.3509, pruned_loss=0.1331, over 7167.00 frames.], tot_loss[loss=0.264, simple_loss=0.3301, pruned_loss=0.09895, over 1430046.48 frames.], batch size: 19, lr: 1.15e-03 2022-05-26 19:40:22,286 INFO [train.py:842] (0/4) Epoch 4, batch 2350, loss[loss=0.2305, simple_loss=0.309, pruned_loss=0.076, over 7328.00 frames.], tot_loss[loss=0.2638, simple_loss=0.3298, pruned_loss=0.09889, over 1430811.98 frames.], batch size: 20, lr: 1.15e-03 2022-05-26 19:41:00,759 INFO [train.py:842] (0/4) Epoch 4, batch 2400, loss[loss=0.254, simple_loss=0.3266, pruned_loss=0.09069, over 7298.00 frames.], tot_loss[loss=0.265, simple_loss=0.3311, pruned_loss=0.09943, over 1433709.09 frames.], batch size: 25, lr: 1.15e-03 2022-05-26 19:41:39,550 INFO [train.py:842] (0/4) Epoch 4, batch 2450, loss[loss=0.2908, simple_loss=0.3561, 
pruned_loss=0.1128, over 7376.00 frames.], tot_loss[loss=0.2637, simple_loss=0.3303, pruned_loss=0.09859, over 1437132.88 frames.], batch size: 23, lr: 1.15e-03 2022-05-26 19:42:18,027 INFO [train.py:842] (0/4) Epoch 4, batch 2500, loss[loss=0.3129, simple_loss=0.3648, pruned_loss=0.1305, over 7154.00 frames.], tot_loss[loss=0.265, simple_loss=0.331, pruned_loss=0.09947, over 1434519.52 frames.], batch size: 19, lr: 1.15e-03 2022-05-26 19:42:56,652 INFO [train.py:842] (0/4) Epoch 4, batch 2550, loss[loss=0.1954, simple_loss=0.2813, pruned_loss=0.0548, over 7415.00 frames.], tot_loss[loss=0.2667, simple_loss=0.3324, pruned_loss=0.1005, over 1425844.34 frames.], batch size: 18, lr: 1.15e-03 2022-05-26 19:43:35,176 INFO [train.py:842] (0/4) Epoch 4, batch 2600, loss[loss=0.2238, simple_loss=0.3056, pruned_loss=0.07097, over 7229.00 frames.], tot_loss[loss=0.2665, simple_loss=0.3324, pruned_loss=0.1003, over 1426072.44 frames.], batch size: 20, lr: 1.15e-03 2022-05-26 19:44:13,868 INFO [train.py:842] (0/4) Epoch 4, batch 2650, loss[loss=0.2086, simple_loss=0.2806, pruned_loss=0.06825, over 7008.00 frames.], tot_loss[loss=0.2678, simple_loss=0.3334, pruned_loss=0.1011, over 1419872.42 frames.], batch size: 16, lr: 1.15e-03 2022-05-26 19:44:52,274 INFO [train.py:842] (0/4) Epoch 4, batch 2700, loss[loss=0.2551, simple_loss=0.3134, pruned_loss=0.09847, over 6807.00 frames.], tot_loss[loss=0.2677, simple_loss=0.3333, pruned_loss=0.101, over 1418120.02 frames.], batch size: 15, lr: 1.15e-03 2022-05-26 19:45:31,376 INFO [train.py:842] (0/4) Epoch 4, batch 2750, loss[loss=0.2337, simple_loss=0.3043, pruned_loss=0.08154, over 7267.00 frames.], tot_loss[loss=0.2676, simple_loss=0.3333, pruned_loss=0.101, over 1421962.88 frames.], batch size: 19, lr: 1.14e-03 2022-05-26 19:46:09,941 INFO [train.py:842] (0/4) Epoch 4, batch 2800, loss[loss=0.2196, simple_loss=0.292, pruned_loss=0.07358, over 7161.00 frames.], tot_loss[loss=0.2653, simple_loss=0.3315, pruned_loss=0.09961, over 1423683.86 frames.], batch size: 19, lr: 1.14e-03 2022-05-26 19:46:48,794 INFO [train.py:842] (0/4) Epoch 4, batch 2850, loss[loss=0.3323, simple_loss=0.3662, pruned_loss=0.1492, over 5166.00 frames.], tot_loss[loss=0.2642, simple_loss=0.3307, pruned_loss=0.09889, over 1423238.01 frames.], batch size: 52, lr: 1.14e-03 2022-05-26 19:47:27,282 INFO [train.py:842] (0/4) Epoch 4, batch 2900, loss[loss=0.2987, simple_loss=0.3566, pruned_loss=0.1204, over 6792.00 frames.], tot_loss[loss=0.2631, simple_loss=0.3297, pruned_loss=0.09828, over 1423393.85 frames.], batch size: 31, lr: 1.14e-03 2022-05-26 19:48:06,163 INFO [train.py:842] (0/4) Epoch 4, batch 2950, loss[loss=0.2622, simple_loss=0.3438, pruned_loss=0.09032, over 7044.00 frames.], tot_loss[loss=0.2642, simple_loss=0.3303, pruned_loss=0.09905, over 1427602.20 frames.], batch size: 28, lr: 1.14e-03 2022-05-26 19:48:45,192 INFO [train.py:842] (0/4) Epoch 4, batch 3000, loss[loss=0.3475, simple_loss=0.3799, pruned_loss=0.1576, over 7143.00 frames.], tot_loss[loss=0.2644, simple_loss=0.3306, pruned_loss=0.09914, over 1426194.42 frames.], batch size: 20, lr: 1.14e-03 2022-05-26 19:48:45,193 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 19:48:54,321 INFO [train.py:871] (0/4) Epoch 4, validation: loss=0.2021, simple_loss=0.3, pruned_loss=0.05209, over 868885.00 frames. 
2022-05-26 19:49:33,150 INFO [train.py:842] (0/4) Epoch 4, batch 3050, loss[loss=0.2238, simple_loss=0.2973, pruned_loss=0.07517, over 7118.00 frames.], tot_loss[loss=0.2659, simple_loss=0.3311, pruned_loss=0.1004, over 1422148.30 frames.], batch size: 21, lr: 1.14e-03 2022-05-26 19:50:11,895 INFO [train.py:842] (0/4) Epoch 4, batch 3100, loss[loss=0.3238, simple_loss=0.3914, pruned_loss=0.1281, over 7308.00 frames.], tot_loss[loss=0.2646, simple_loss=0.33, pruned_loss=0.09962, over 1418351.31 frames.], batch size: 24, lr: 1.14e-03 2022-05-26 19:50:51,351 INFO [train.py:842] (0/4) Epoch 4, batch 3150, loss[loss=0.2847, simple_loss=0.3529, pruned_loss=0.1082, over 7293.00 frames.], tot_loss[loss=0.2632, simple_loss=0.3289, pruned_loss=0.09882, over 1423022.73 frames.], batch size: 25, lr: 1.14e-03 2022-05-26 19:51:30,073 INFO [train.py:842] (0/4) Epoch 4, batch 3200, loss[loss=0.1935, simple_loss=0.2821, pruned_loss=0.05247, over 7071.00 frames.], tot_loss[loss=0.2606, simple_loss=0.3267, pruned_loss=0.09726, over 1424763.18 frames.], batch size: 18, lr: 1.14e-03 2022-05-26 19:52:09,194 INFO [train.py:842] (0/4) Epoch 4, batch 3250, loss[loss=0.2766, simple_loss=0.339, pruned_loss=0.1071, over 7250.00 frames.], tot_loss[loss=0.2615, simple_loss=0.3275, pruned_loss=0.09771, over 1425037.53 frames.], batch size: 19, lr: 1.14e-03 2022-05-26 19:52:47,476 INFO [train.py:842] (0/4) Epoch 4, batch 3300, loss[loss=0.2851, simple_loss=0.333, pruned_loss=0.1186, over 7200.00 frames.], tot_loss[loss=0.2611, simple_loss=0.3276, pruned_loss=0.09733, over 1423000.87 frames.], batch size: 23, lr: 1.13e-03 2022-05-26 19:53:26,305 INFO [train.py:842] (0/4) Epoch 4, batch 3350, loss[loss=0.2514, simple_loss=0.3154, pruned_loss=0.09373, over 6393.00 frames.], tot_loss[loss=0.2584, simple_loss=0.3255, pruned_loss=0.0956, over 1420852.24 frames.], batch size: 37, lr: 1.13e-03 2022-05-26 19:54:04,823 INFO [train.py:842] (0/4) Epoch 4, batch 3400, loss[loss=0.2145, simple_loss=0.2831, pruned_loss=0.0729, over 7000.00 frames.], tot_loss[loss=0.2594, simple_loss=0.3261, pruned_loss=0.09632, over 1421645.53 frames.], batch size: 16, lr: 1.13e-03 2022-05-26 19:54:43,736 INFO [train.py:842] (0/4) Epoch 4, batch 3450, loss[loss=0.2518, simple_loss=0.3258, pruned_loss=0.08889, over 7174.00 frames.], tot_loss[loss=0.258, simple_loss=0.3251, pruned_loss=0.0955, over 1426538.34 frames.], batch size: 18, lr: 1.13e-03 2022-05-26 19:55:22,249 INFO [train.py:842] (0/4) Epoch 4, batch 3500, loss[loss=0.3075, simple_loss=0.3649, pruned_loss=0.125, over 7392.00 frames.], tot_loss[loss=0.2583, simple_loss=0.3248, pruned_loss=0.0959, over 1428445.24 frames.], batch size: 23, lr: 1.13e-03 2022-05-26 19:56:01,282 INFO [train.py:842] (0/4) Epoch 4, batch 3550, loss[loss=0.285, simple_loss=0.3428, pruned_loss=0.1136, over 7305.00 frames.], tot_loss[loss=0.2595, simple_loss=0.3254, pruned_loss=0.09676, over 1429549.20 frames.], batch size: 24, lr: 1.13e-03 2022-05-26 19:56:39,768 INFO [train.py:842] (0/4) Epoch 4, batch 3600, loss[loss=0.1981, simple_loss=0.2717, pruned_loss=0.0622, over 6981.00 frames.], tot_loss[loss=0.2603, simple_loss=0.3267, pruned_loss=0.09694, over 1428435.80 frames.], batch size: 16, lr: 1.13e-03 2022-05-26 19:57:18,390 INFO [train.py:842] (0/4) Epoch 4, batch 3650, loss[loss=0.2626, simple_loss=0.3162, pruned_loss=0.1045, over 7117.00 frames.], tot_loss[loss=0.2602, simple_loss=0.3268, pruned_loss=0.09683, over 1428579.76 frames.], batch size: 17, lr: 1.13e-03 2022-05-26 19:57:56,886 INFO [train.py:842] (0/4) 
Epoch 4, batch 3700, loss[loss=0.2679, simple_loss=0.3239, pruned_loss=0.106, over 7013.00 frames.], tot_loss[loss=0.2606, simple_loss=0.3269, pruned_loss=0.09714, over 1427827.72 frames.], batch size: 16, lr: 1.13e-03 2022-05-26 19:58:35,877 INFO [train.py:842] (0/4) Epoch 4, batch 3750, loss[loss=0.3049, simple_loss=0.3537, pruned_loss=0.1281, over 7428.00 frames.], tot_loss[loss=0.2587, simple_loss=0.3251, pruned_loss=0.09619, over 1426759.03 frames.], batch size: 20, lr: 1.13e-03 2022-05-26 19:59:14,459 INFO [train.py:842] (0/4) Epoch 4, batch 3800, loss[loss=0.3305, simple_loss=0.3737, pruned_loss=0.1437, over 7059.00 frames.], tot_loss[loss=0.2596, simple_loss=0.3258, pruned_loss=0.09672, over 1422593.69 frames.], batch size: 18, lr: 1.13e-03 2022-05-26 19:59:53,433 INFO [train.py:842] (0/4) Epoch 4, batch 3850, loss[loss=0.2192, simple_loss=0.2937, pruned_loss=0.0724, over 7420.00 frames.], tot_loss[loss=0.2584, simple_loss=0.3251, pruned_loss=0.09583, over 1426429.86 frames.], batch size: 18, lr: 1.12e-03 2022-05-26 20:00:32,054 INFO [train.py:842] (0/4) Epoch 4, batch 3900, loss[loss=0.318, simple_loss=0.3722, pruned_loss=0.1318, over 5270.00 frames.], tot_loss[loss=0.2601, simple_loss=0.3265, pruned_loss=0.09684, over 1427661.64 frames.], batch size: 52, lr: 1.12e-03 2022-05-26 20:01:10,952 INFO [train.py:842] (0/4) Epoch 4, batch 3950, loss[loss=0.2211, simple_loss=0.2842, pruned_loss=0.07894, over 7206.00 frames.], tot_loss[loss=0.2608, simple_loss=0.3269, pruned_loss=0.09734, over 1425568.45 frames.], batch size: 16, lr: 1.12e-03 2022-05-26 20:01:49,423 INFO [train.py:842] (0/4) Epoch 4, batch 4000, loss[loss=0.3437, simple_loss=0.3979, pruned_loss=0.1448, over 7212.00 frames.], tot_loss[loss=0.2605, simple_loss=0.3265, pruned_loss=0.09719, over 1417467.36 frames.], batch size: 21, lr: 1.12e-03 2022-05-26 20:02:28,189 INFO [train.py:842] (0/4) Epoch 4, batch 4050, loss[loss=0.1989, simple_loss=0.2842, pruned_loss=0.05675, over 7408.00 frames.], tot_loss[loss=0.26, simple_loss=0.3263, pruned_loss=0.0969, over 1419312.82 frames.], batch size: 21, lr: 1.12e-03 2022-05-26 20:03:06,973 INFO [train.py:842] (0/4) Epoch 4, batch 4100, loss[loss=0.2314, simple_loss=0.2949, pruned_loss=0.08395, over 7421.00 frames.], tot_loss[loss=0.2621, simple_loss=0.3281, pruned_loss=0.09801, over 1424060.39 frames.], batch size: 18, lr: 1.12e-03 2022-05-26 20:03:45,837 INFO [train.py:842] (0/4) Epoch 4, batch 4150, loss[loss=0.2413, simple_loss=0.2994, pruned_loss=0.09163, over 6824.00 frames.], tot_loss[loss=0.2609, simple_loss=0.3271, pruned_loss=0.09732, over 1425682.22 frames.], batch size: 15, lr: 1.12e-03 2022-05-26 20:04:24,275 INFO [train.py:842] (0/4) Epoch 4, batch 4200, loss[loss=0.1858, simple_loss=0.2591, pruned_loss=0.05626, over 7295.00 frames.], tot_loss[loss=0.2607, simple_loss=0.3274, pruned_loss=0.09698, over 1425934.89 frames.], batch size: 17, lr: 1.12e-03 2022-05-26 20:05:03,049 INFO [train.py:842] (0/4) Epoch 4, batch 4250, loss[loss=0.225, simple_loss=0.3117, pruned_loss=0.06919, over 7245.00 frames.], tot_loss[loss=0.262, simple_loss=0.3285, pruned_loss=0.0977, over 1423665.15 frames.], batch size: 20, lr: 1.12e-03 2022-05-26 20:05:41,571 INFO [train.py:842] (0/4) Epoch 4, batch 4300, loss[loss=0.216, simple_loss=0.2914, pruned_loss=0.07028, over 7249.00 frames.], tot_loss[loss=0.2616, simple_loss=0.3284, pruned_loss=0.0974, over 1422446.98 frames.], batch size: 19, lr: 1.12e-03 2022-05-26 20:06:20,305 INFO [train.py:842] (0/4) Epoch 4, batch 4350, loss[loss=0.2575, 
simple_loss=0.3286, pruned_loss=0.09317, over 7197.00 frames.], tot_loss[loss=0.2605, simple_loss=0.3276, pruned_loss=0.0967, over 1420984.65 frames.], batch size: 23, lr: 1.12e-03 2022-05-26 20:06:58,804 INFO [train.py:842] (0/4) Epoch 4, batch 4400, loss[loss=0.2377, simple_loss=0.3229, pruned_loss=0.07621, over 7235.00 frames.], tot_loss[loss=0.2604, simple_loss=0.3278, pruned_loss=0.09648, over 1421226.18 frames.], batch size: 20, lr: 1.12e-03 2022-05-26 20:07:16,833 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-32000.pt 2022-05-26 20:07:40,547 INFO [train.py:842] (0/4) Epoch 4, batch 4450, loss[loss=0.2652, simple_loss=0.3301, pruned_loss=0.1002, over 7352.00 frames.], tot_loss[loss=0.2589, simple_loss=0.3261, pruned_loss=0.09586, over 1423594.68 frames.], batch size: 19, lr: 1.11e-03 2022-05-26 20:08:19,163 INFO [train.py:842] (0/4) Epoch 4, batch 4500, loss[loss=0.2777, simple_loss=0.3392, pruned_loss=0.1081, over 7120.00 frames.], tot_loss[loss=0.2588, simple_loss=0.3256, pruned_loss=0.096, over 1426342.62 frames.], batch size: 21, lr: 1.11e-03 2022-05-26 20:08:58,291 INFO [train.py:842] (0/4) Epoch 4, batch 4550, loss[loss=0.2231, simple_loss=0.2862, pruned_loss=0.07998, over 7412.00 frames.], tot_loss[loss=0.2562, simple_loss=0.3237, pruned_loss=0.09438, over 1424199.81 frames.], batch size: 18, lr: 1.11e-03 2022-05-26 20:09:36,961 INFO [train.py:842] (0/4) Epoch 4, batch 4600, loss[loss=0.2648, simple_loss=0.3308, pruned_loss=0.09941, over 7412.00 frames.], tot_loss[loss=0.257, simple_loss=0.3244, pruned_loss=0.09485, over 1424752.06 frames.], batch size: 21, lr: 1.11e-03 2022-05-26 20:10:15,683 INFO [train.py:842] (0/4) Epoch 4, batch 4650, loss[loss=0.38, simple_loss=0.4186, pruned_loss=0.1707, over 7408.00 frames.], tot_loss[loss=0.2569, simple_loss=0.3248, pruned_loss=0.09449, over 1425267.88 frames.], batch size: 21, lr: 1.11e-03 2022-05-26 20:10:54,172 INFO [train.py:842] (0/4) Epoch 4, batch 4700, loss[loss=0.2629, simple_loss=0.3287, pruned_loss=0.09852, over 6675.00 frames.], tot_loss[loss=0.259, simple_loss=0.3263, pruned_loss=0.09581, over 1424594.38 frames.], batch size: 31, lr: 1.11e-03 2022-05-26 20:11:33,091 INFO [train.py:842] (0/4) Epoch 4, batch 4750, loss[loss=0.2789, simple_loss=0.3542, pruned_loss=0.1018, over 7117.00 frames.], tot_loss[loss=0.2577, simple_loss=0.3253, pruned_loss=0.09501, over 1424654.98 frames.], batch size: 21, lr: 1.11e-03 2022-05-26 20:12:11,650 INFO [train.py:842] (0/4) Epoch 4, batch 4800, loss[loss=0.2317, simple_loss=0.2984, pruned_loss=0.08244, over 7134.00 frames.], tot_loss[loss=0.2584, simple_loss=0.3263, pruned_loss=0.09531, over 1425784.78 frames.], batch size: 17, lr: 1.11e-03 2022-05-26 20:12:51,124 INFO [train.py:842] (0/4) Epoch 4, batch 4850, loss[loss=0.2559, simple_loss=0.3118, pruned_loss=0.09999, over 6824.00 frames.], tot_loss[loss=0.257, simple_loss=0.3248, pruned_loss=0.09457, over 1425702.24 frames.], batch size: 15, lr: 1.11e-03 2022-05-26 20:13:29,712 INFO [train.py:842] (0/4) Epoch 4, batch 4900, loss[loss=0.2745, simple_loss=0.3401, pruned_loss=0.1044, over 7275.00 frames.], tot_loss[loss=0.2556, simple_loss=0.3234, pruned_loss=0.09391, over 1424984.19 frames.], batch size: 24, lr: 1.11e-03 2022-05-26 20:14:08,460 INFO [train.py:842] (0/4) Epoch 4, batch 4950, loss[loss=0.1841, simple_loss=0.2867, pruned_loss=0.04076, over 7122.00 frames.], tot_loss[loss=0.2552, simple_loss=0.3231, pruned_loss=0.09366, over 1424747.77 frames.], batch size: 21, lr: 
1.11e-03 2022-05-26 20:14:46,908 INFO [train.py:842] (0/4) Epoch 4, batch 5000, loss[loss=0.2225, simple_loss=0.3032, pruned_loss=0.07085, over 7321.00 frames.], tot_loss[loss=0.2559, simple_loss=0.3237, pruned_loss=0.09402, over 1423716.26 frames.], batch size: 20, lr: 1.11e-03 2022-05-26 20:15:25,475 INFO [train.py:842] (0/4) Epoch 4, batch 5050, loss[loss=0.301, simple_loss=0.3743, pruned_loss=0.1139, over 7202.00 frames.], tot_loss[loss=0.2591, simple_loss=0.3264, pruned_loss=0.09593, over 1424353.74 frames.], batch size: 26, lr: 1.10e-03 2022-05-26 20:16:04,054 INFO [train.py:842] (0/4) Epoch 4, batch 5100, loss[loss=0.2703, simple_loss=0.3354, pruned_loss=0.1026, over 7068.00 frames.], tot_loss[loss=0.2593, simple_loss=0.3271, pruned_loss=0.09581, over 1423114.19 frames.], batch size: 28, lr: 1.10e-03 2022-05-26 20:16:43,113 INFO [train.py:842] (0/4) Epoch 4, batch 5150, loss[loss=0.2518, simple_loss=0.3125, pruned_loss=0.09551, over 7269.00 frames.], tot_loss[loss=0.2575, simple_loss=0.3258, pruned_loss=0.09463, over 1428015.80 frames.], batch size: 17, lr: 1.10e-03 2022-05-26 20:17:21,940 INFO [train.py:842] (0/4) Epoch 4, batch 5200, loss[loss=0.2792, simple_loss=0.3455, pruned_loss=0.1065, over 7368.00 frames.], tot_loss[loss=0.2574, simple_loss=0.3255, pruned_loss=0.09469, over 1429123.20 frames.], batch size: 19, lr: 1.10e-03 2022-05-26 20:18:00,740 INFO [train.py:842] (0/4) Epoch 4, batch 5250, loss[loss=0.2509, simple_loss=0.3348, pruned_loss=0.08345, over 7107.00 frames.], tot_loss[loss=0.2566, simple_loss=0.3246, pruned_loss=0.09435, over 1427676.17 frames.], batch size: 21, lr: 1.10e-03 2022-05-26 20:18:39,347 INFO [train.py:842] (0/4) Epoch 4, batch 5300, loss[loss=0.2687, simple_loss=0.3299, pruned_loss=0.1037, over 7063.00 frames.], tot_loss[loss=0.2559, simple_loss=0.3238, pruned_loss=0.09401, over 1431274.38 frames.], batch size: 18, lr: 1.10e-03 2022-05-26 20:19:18,344 INFO [train.py:842] (0/4) Epoch 4, batch 5350, loss[loss=0.1961, simple_loss=0.2622, pruned_loss=0.06502, over 7293.00 frames.], tot_loss[loss=0.2556, simple_loss=0.3232, pruned_loss=0.09401, over 1432822.51 frames.], batch size: 17, lr: 1.10e-03 2022-05-26 20:19:56,804 INFO [train.py:842] (0/4) Epoch 4, batch 5400, loss[loss=0.3291, simple_loss=0.3535, pruned_loss=0.1523, over 7287.00 frames.], tot_loss[loss=0.2566, simple_loss=0.3239, pruned_loss=0.09464, over 1432651.34 frames.], batch size: 17, lr: 1.10e-03 2022-05-26 20:20:35,580 INFO [train.py:842] (0/4) Epoch 4, batch 5450, loss[loss=0.2488, simple_loss=0.3268, pruned_loss=0.08541, over 7213.00 frames.], tot_loss[loss=0.2584, simple_loss=0.325, pruned_loss=0.09591, over 1430937.94 frames.], batch size: 23, lr: 1.10e-03 2022-05-26 20:21:14,027 INFO [train.py:842] (0/4) Epoch 4, batch 5500, loss[loss=0.259, simple_loss=0.3229, pruned_loss=0.09755, over 7143.00 frames.], tot_loss[loss=0.2599, simple_loss=0.3265, pruned_loss=0.09663, over 1428866.15 frames.], batch size: 26, lr: 1.10e-03 2022-05-26 20:21:53,006 INFO [train.py:842] (0/4) Epoch 4, batch 5550, loss[loss=0.2866, simple_loss=0.3537, pruned_loss=0.1098, over 6740.00 frames.], tot_loss[loss=0.2578, simple_loss=0.3246, pruned_loss=0.09552, over 1422684.39 frames.], batch size: 31, lr: 1.10e-03 2022-05-26 20:22:31,504 INFO [train.py:842] (0/4) Epoch 4, batch 5600, loss[loss=0.2769, simple_loss=0.3304, pruned_loss=0.1117, over 7277.00 frames.], tot_loss[loss=0.26, simple_loss=0.326, pruned_loss=0.09703, over 1426364.78 frames.], batch size: 18, lr: 1.10e-03 2022-05-26 20:23:10,162 INFO 
[train.py:842] (0/4) Epoch 4, batch 5650, loss[loss=0.2797, simple_loss=0.3481, pruned_loss=0.1057, over 7191.00 frames.], tot_loss[loss=0.2608, simple_loss=0.3267, pruned_loss=0.09748, over 1422021.83 frames.], batch size: 23, lr: 1.09e-03 2022-05-26 20:23:48,736 INFO [train.py:842] (0/4) Epoch 4, batch 5700, loss[loss=0.3013, simple_loss=0.3672, pruned_loss=0.1177, over 7228.00 frames.], tot_loss[loss=0.2606, simple_loss=0.3269, pruned_loss=0.09719, over 1422933.11 frames.], batch size: 20, lr: 1.09e-03 2022-05-26 20:24:27,440 INFO [train.py:842] (0/4) Epoch 4, batch 5750, loss[loss=0.2646, simple_loss=0.3397, pruned_loss=0.09477, over 7263.00 frames.], tot_loss[loss=0.2611, simple_loss=0.3273, pruned_loss=0.09742, over 1421890.13 frames.], batch size: 25, lr: 1.09e-03 2022-05-26 20:25:06,096 INFO [train.py:842] (0/4) Epoch 4, batch 5800, loss[loss=0.2384, simple_loss=0.3228, pruned_loss=0.07702, over 7316.00 frames.], tot_loss[loss=0.2618, simple_loss=0.3282, pruned_loss=0.09772, over 1421705.61 frames.], batch size: 21, lr: 1.09e-03 2022-05-26 20:25:44,933 INFO [train.py:842] (0/4) Epoch 4, batch 5850, loss[loss=0.2418, simple_loss=0.32, pruned_loss=0.08178, over 6540.00 frames.], tot_loss[loss=0.2611, simple_loss=0.3276, pruned_loss=0.09734, over 1417515.23 frames.], batch size: 38, lr: 1.09e-03 2022-05-26 20:26:23,774 INFO [train.py:842] (0/4) Epoch 4, batch 5900, loss[loss=0.235, simple_loss=0.3178, pruned_loss=0.07611, over 7323.00 frames.], tot_loss[loss=0.2598, simple_loss=0.3268, pruned_loss=0.0964, over 1422926.86 frames.], batch size: 21, lr: 1.09e-03 2022-05-26 20:27:02,266 INFO [train.py:842] (0/4) Epoch 4, batch 5950, loss[loss=0.2244, simple_loss=0.3, pruned_loss=0.07438, over 7167.00 frames.], tot_loss[loss=0.2619, simple_loss=0.3279, pruned_loss=0.09796, over 1421662.18 frames.], batch size: 19, lr: 1.09e-03 2022-05-26 20:27:41,334 INFO [train.py:842] (0/4) Epoch 4, batch 6000, loss[loss=0.3037, simple_loss=0.3711, pruned_loss=0.1182, over 7208.00 frames.], tot_loss[loss=0.26, simple_loss=0.3269, pruned_loss=0.09653, over 1421327.51 frames.], batch size: 23, lr: 1.09e-03 2022-05-26 20:27:41,335 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 20:27:50,628 INFO [train.py:871] (0/4) Epoch 4, validation: loss=0.1997, simple_loss=0.2983, pruned_loss=0.05051, over 868885.00 frames. 
2022-05-26 20:28:29,525 INFO [train.py:842] (0/4) Epoch 4, batch 6050, loss[loss=0.3533, simple_loss=0.4075, pruned_loss=0.1496, over 7236.00 frames.], tot_loss[loss=0.2589, simple_loss=0.3261, pruned_loss=0.09583, over 1416999.01 frames.], batch size: 20, lr: 1.09e-03 2022-05-26 20:29:08,227 INFO [train.py:842] (0/4) Epoch 4, batch 6100, loss[loss=0.2791, simple_loss=0.3495, pruned_loss=0.1043, over 7104.00 frames.], tot_loss[loss=0.2597, simple_loss=0.3269, pruned_loss=0.0962, over 1414190.65 frames.], batch size: 21, lr: 1.09e-03 2022-05-26 20:29:47,420 INFO [train.py:842] (0/4) Epoch 4, batch 6150, loss[loss=0.2526, simple_loss=0.3142, pruned_loss=0.09553, over 7302.00 frames.], tot_loss[loss=0.2597, simple_loss=0.3267, pruned_loss=0.09639, over 1418803.55 frames.], batch size: 25, lr: 1.09e-03 2022-05-26 20:30:26,272 INFO [train.py:842] (0/4) Epoch 4, batch 6200, loss[loss=0.1937, simple_loss=0.2657, pruned_loss=0.06086, over 7130.00 frames.], tot_loss[loss=0.257, simple_loss=0.324, pruned_loss=0.09501, over 1420421.62 frames.], batch size: 17, lr: 1.09e-03 2022-05-26 20:31:05,408 INFO [train.py:842] (0/4) Epoch 4, batch 6250, loss[loss=0.2283, simple_loss=0.3002, pruned_loss=0.07816, over 7428.00 frames.], tot_loss[loss=0.2553, simple_loss=0.3228, pruned_loss=0.09395, over 1419412.79 frames.], batch size: 20, lr: 1.08e-03 2022-05-26 20:31:44,038 INFO [train.py:842] (0/4) Epoch 4, batch 6300, loss[loss=0.2577, simple_loss=0.3291, pruned_loss=0.09314, over 7319.00 frames.], tot_loss[loss=0.2565, simple_loss=0.3235, pruned_loss=0.09479, over 1423249.39 frames.], batch size: 21, lr: 1.08e-03 2022-05-26 20:32:22,627 INFO [train.py:842] (0/4) Epoch 4, batch 6350, loss[loss=0.2563, simple_loss=0.3143, pruned_loss=0.09909, over 7433.00 frames.], tot_loss[loss=0.2588, simple_loss=0.3254, pruned_loss=0.09616, over 1419246.17 frames.], batch size: 20, lr: 1.08e-03 2022-05-26 20:33:01,275 INFO [train.py:842] (0/4) Epoch 4, batch 6400, loss[loss=0.2919, simple_loss=0.356, pruned_loss=0.1139, over 7385.00 frames.], tot_loss[loss=0.2594, simple_loss=0.3257, pruned_loss=0.09656, over 1419018.15 frames.], batch size: 23, lr: 1.08e-03 2022-05-26 20:33:40,213 INFO [train.py:842] (0/4) Epoch 4, batch 6450, loss[loss=0.3004, simple_loss=0.3647, pruned_loss=0.1181, over 7321.00 frames.], tot_loss[loss=0.2601, simple_loss=0.3259, pruned_loss=0.09713, over 1421023.23 frames.], batch size: 21, lr: 1.08e-03 2022-05-26 20:34:18,856 INFO [train.py:842] (0/4) Epoch 4, batch 6500, loss[loss=0.2169, simple_loss=0.2818, pruned_loss=0.07603, over 6781.00 frames.], tot_loss[loss=0.2601, simple_loss=0.3261, pruned_loss=0.09706, over 1421617.63 frames.], batch size: 15, lr: 1.08e-03 2022-05-26 20:34:57,615 INFO [train.py:842] (0/4) Epoch 4, batch 6550, loss[loss=0.2118, simple_loss=0.2856, pruned_loss=0.06901, over 7362.00 frames.], tot_loss[loss=0.2594, simple_loss=0.3259, pruned_loss=0.09643, over 1425643.23 frames.], batch size: 19, lr: 1.08e-03 2022-05-26 20:35:36,115 INFO [train.py:842] (0/4) Epoch 4, batch 6600, loss[loss=0.2704, simple_loss=0.3374, pruned_loss=0.1017, over 7191.00 frames.], tot_loss[loss=0.2581, simple_loss=0.3248, pruned_loss=0.09566, over 1421158.44 frames.], batch size: 22, lr: 1.08e-03 2022-05-26 20:36:14,969 INFO [train.py:842] (0/4) Epoch 4, batch 6650, loss[loss=0.2539, simple_loss=0.3232, pruned_loss=0.09227, over 7332.00 frames.], tot_loss[loss=0.2591, simple_loss=0.3257, pruned_loss=0.09628, over 1422968.58 frames.], batch size: 22, lr: 1.08e-03 2022-05-26 20:36:53,546 INFO 
[train.py:842] (0/4) Epoch 4, batch 6700, loss[loss=0.1884, simple_loss=0.2589, pruned_loss=0.05897, over 7138.00 frames.], tot_loss[loss=0.2623, simple_loss=0.3284, pruned_loss=0.0981, over 1421268.39 frames.], batch size: 17, lr: 1.08e-03 2022-05-26 20:37:32,291 INFO [train.py:842] (0/4) Epoch 4, batch 6750, loss[loss=0.2451, simple_loss=0.3308, pruned_loss=0.07976, over 7213.00 frames.], tot_loss[loss=0.2609, simple_loss=0.3274, pruned_loss=0.09716, over 1421228.17 frames.], batch size: 23, lr: 1.08e-03 2022-05-26 20:38:11,165 INFO [train.py:842] (0/4) Epoch 4, batch 6800, loss[loss=0.2855, simple_loss=0.348, pruned_loss=0.1115, over 7420.00 frames.], tot_loss[loss=0.2596, simple_loss=0.3261, pruned_loss=0.09651, over 1424001.63 frames.], batch size: 21, lr: 1.08e-03 2022-05-26 20:38:50,354 INFO [train.py:842] (0/4) Epoch 4, batch 6850, loss[loss=0.2797, simple_loss=0.3474, pruned_loss=0.106, over 7290.00 frames.], tot_loss[loss=0.2602, simple_loss=0.3269, pruned_loss=0.09678, over 1421706.01 frames.], batch size: 25, lr: 1.08e-03 2022-05-26 20:39:29,154 INFO [train.py:842] (0/4) Epoch 4, batch 6900, loss[loss=0.2941, simple_loss=0.3595, pruned_loss=0.1144, over 7214.00 frames.], tot_loss[loss=0.2582, simple_loss=0.3255, pruned_loss=0.09548, over 1422735.03 frames.], batch size: 22, lr: 1.07e-03 2022-05-26 20:40:08,045 INFO [train.py:842] (0/4) Epoch 4, batch 6950, loss[loss=0.2095, simple_loss=0.2879, pruned_loss=0.06557, over 7249.00 frames.], tot_loss[loss=0.2578, simple_loss=0.3249, pruned_loss=0.09534, over 1421520.80 frames.], batch size: 19, lr: 1.07e-03 2022-05-26 20:40:46,525 INFO [train.py:842] (0/4) Epoch 4, batch 7000, loss[loss=0.2523, simple_loss=0.3122, pruned_loss=0.09616, over 7166.00 frames.], tot_loss[loss=0.2589, simple_loss=0.3256, pruned_loss=0.09609, over 1419012.17 frames.], batch size: 19, lr: 1.07e-03 2022-05-26 20:41:25,587 INFO [train.py:842] (0/4) Epoch 4, batch 7050, loss[loss=0.2498, simple_loss=0.3158, pruned_loss=0.09193, over 7322.00 frames.], tot_loss[loss=0.2597, simple_loss=0.3266, pruned_loss=0.09634, over 1416514.94 frames.], batch size: 21, lr: 1.07e-03 2022-05-26 20:42:04,320 INFO [train.py:842] (0/4) Epoch 4, batch 7100, loss[loss=0.2922, simple_loss=0.356, pruned_loss=0.1142, over 7304.00 frames.], tot_loss[loss=0.2573, simple_loss=0.3245, pruned_loss=0.09502, over 1419977.14 frames.], batch size: 21, lr: 1.07e-03 2022-05-26 20:42:43,543 INFO [train.py:842] (0/4) Epoch 4, batch 7150, loss[loss=0.2287, simple_loss=0.3022, pruned_loss=0.07761, over 7142.00 frames.], tot_loss[loss=0.2579, simple_loss=0.3246, pruned_loss=0.09557, over 1418456.07 frames.], batch size: 20, lr: 1.07e-03 2022-05-26 20:43:22,290 INFO [train.py:842] (0/4) Epoch 4, batch 7200, loss[loss=0.2584, simple_loss=0.3233, pruned_loss=0.0968, over 7220.00 frames.], tot_loss[loss=0.2573, simple_loss=0.3243, pruned_loss=0.09512, over 1418103.99 frames.], batch size: 21, lr: 1.07e-03 2022-05-26 20:44:01,296 INFO [train.py:842] (0/4) Epoch 4, batch 7250, loss[loss=0.2362, simple_loss=0.3136, pruned_loss=0.07942, over 7414.00 frames.], tot_loss[loss=0.2556, simple_loss=0.3231, pruned_loss=0.09407, over 1417869.89 frames.], batch size: 21, lr: 1.07e-03 2022-05-26 20:44:39,878 INFO [train.py:842] (0/4) Epoch 4, batch 7300, loss[loss=0.2437, simple_loss=0.3224, pruned_loss=0.0825, over 7355.00 frames.], tot_loss[loss=0.2571, simple_loss=0.3236, pruned_loss=0.09529, over 1420131.26 frames.], batch size: 22, lr: 1.07e-03 2022-05-26 20:45:19,009 INFO [train.py:842] (0/4) Epoch 4, batch 7350, 
loss[loss=0.273, simple_loss=0.3287, pruned_loss=0.1087, over 7318.00 frames.], tot_loss[loss=0.2563, simple_loss=0.3232, pruned_loss=0.09476, over 1419495.58 frames.], batch size: 21, lr: 1.07e-03 2022-05-26 20:45:57,685 INFO [train.py:842] (0/4) Epoch 4, batch 7400, loss[loss=0.1928, simple_loss=0.2714, pruned_loss=0.05709, over 7335.00 frames.], tot_loss[loss=0.2559, simple_loss=0.3233, pruned_loss=0.09428, over 1420405.30 frames.], batch size: 20, lr: 1.07e-03 2022-05-26 20:46:36,586 INFO [train.py:842] (0/4) Epoch 4, batch 7450, loss[loss=0.3002, simple_loss=0.3623, pruned_loss=0.1191, over 7303.00 frames.], tot_loss[loss=0.2565, simple_loss=0.3239, pruned_loss=0.0946, over 1417708.53 frames.], batch size: 25, lr: 1.07e-03 2022-05-26 20:47:15,371 INFO [train.py:842] (0/4) Epoch 4, batch 7500, loss[loss=0.25, simple_loss=0.3127, pruned_loss=0.09365, over 7349.00 frames.], tot_loss[loss=0.2546, simple_loss=0.3225, pruned_loss=0.09337, over 1421029.93 frames.], batch size: 19, lr: 1.07e-03 2022-05-26 20:47:54,187 INFO [train.py:842] (0/4) Epoch 4, batch 7550, loss[loss=0.2753, simple_loss=0.345, pruned_loss=0.1028, over 6382.00 frames.], tot_loss[loss=0.2548, simple_loss=0.3233, pruned_loss=0.09317, over 1424298.04 frames.], batch size: 38, lr: 1.07e-03 2022-05-26 20:48:32,786 INFO [train.py:842] (0/4) Epoch 4, batch 7600, loss[loss=0.2432, simple_loss=0.3041, pruned_loss=0.09112, over 7151.00 frames.], tot_loss[loss=0.2557, simple_loss=0.3236, pruned_loss=0.09392, over 1424022.94 frames.], batch size: 17, lr: 1.06e-03 2022-05-26 20:49:11,711 INFO [train.py:842] (0/4) Epoch 4, batch 7650, loss[loss=0.2411, simple_loss=0.3121, pruned_loss=0.08503, over 7263.00 frames.], tot_loss[loss=0.2545, simple_loss=0.3231, pruned_loss=0.09298, over 1427090.89 frames.], batch size: 19, lr: 1.06e-03 2022-05-26 20:49:50,447 INFO [train.py:842] (0/4) Epoch 4, batch 7700, loss[loss=0.2285, simple_loss=0.2975, pruned_loss=0.07971, over 7172.00 frames.], tot_loss[loss=0.2543, simple_loss=0.3228, pruned_loss=0.0929, over 1427011.03 frames.], batch size: 19, lr: 1.06e-03 2022-05-26 20:50:29,404 INFO [train.py:842] (0/4) Epoch 4, batch 7750, loss[loss=0.281, simple_loss=0.3384, pruned_loss=0.1118, over 6248.00 frames.], tot_loss[loss=0.2537, simple_loss=0.3223, pruned_loss=0.09259, over 1428651.10 frames.], batch size: 37, lr: 1.06e-03 2022-05-26 20:51:08,057 INFO [train.py:842] (0/4) Epoch 4, batch 7800, loss[loss=0.2324, simple_loss=0.3003, pruned_loss=0.08227, over 7332.00 frames.], tot_loss[loss=0.2521, simple_loss=0.3209, pruned_loss=0.09165, over 1426350.41 frames.], batch size: 20, lr: 1.06e-03 2022-05-26 20:51:46,891 INFO [train.py:842] (0/4) Epoch 4, batch 7850, loss[loss=0.2767, simple_loss=0.3435, pruned_loss=0.1049, over 6365.00 frames.], tot_loss[loss=0.2543, simple_loss=0.3224, pruned_loss=0.0931, over 1423537.42 frames.], batch size: 37, lr: 1.06e-03 2022-05-26 20:52:25,498 INFO [train.py:842] (0/4) Epoch 4, batch 7900, loss[loss=0.2429, simple_loss=0.3044, pruned_loss=0.09072, over 7399.00 frames.], tot_loss[loss=0.2519, simple_loss=0.3203, pruned_loss=0.09169, over 1425373.75 frames.], batch size: 18, lr: 1.06e-03 2022-05-26 20:53:04,273 INFO [train.py:842] (0/4) Epoch 4, batch 7950, loss[loss=0.206, simple_loss=0.2855, pruned_loss=0.0632, over 7169.00 frames.], tot_loss[loss=0.2524, simple_loss=0.3204, pruned_loss=0.09224, over 1425441.22 frames.], batch size: 18, lr: 1.06e-03 2022-05-26 20:53:42,673 INFO [train.py:842] (0/4) Epoch 4, batch 8000, loss[loss=0.2873, simple_loss=0.35, 
pruned_loss=0.1123, over 6353.00 frames.], tot_loss[loss=0.2564, simple_loss=0.3233, pruned_loss=0.09474, over 1423208.73 frames.], batch size: 37, lr: 1.06e-03 2022-05-26 20:54:21,447 INFO [train.py:842] (0/4) Epoch 4, batch 8050, loss[loss=0.2661, simple_loss=0.3461, pruned_loss=0.09309, over 7310.00 frames.], tot_loss[loss=0.2556, simple_loss=0.3229, pruned_loss=0.09411, over 1424029.20 frames.], batch size: 21, lr: 1.06e-03 2022-05-26 20:54:59,925 INFO [train.py:842] (0/4) Epoch 4, batch 8100, loss[loss=0.2707, simple_loss=0.3416, pruned_loss=0.09992, over 7052.00 frames.], tot_loss[loss=0.2552, simple_loss=0.3229, pruned_loss=0.0937, over 1425785.67 frames.], batch size: 28, lr: 1.06e-03 2022-05-26 20:55:38,676 INFO [train.py:842] (0/4) Epoch 4, batch 8150, loss[loss=0.2184, simple_loss=0.2985, pruned_loss=0.06911, over 7428.00 frames.], tot_loss[loss=0.2562, simple_loss=0.3239, pruned_loss=0.09426, over 1428094.31 frames.], batch size: 20, lr: 1.06e-03 2022-05-26 20:56:17,360 INFO [train.py:842] (0/4) Epoch 4, batch 8200, loss[loss=0.2755, simple_loss=0.3441, pruned_loss=0.1034, over 7201.00 frames.], tot_loss[loss=0.255, simple_loss=0.3232, pruned_loss=0.09343, over 1430794.99 frames.], batch size: 23, lr: 1.06e-03 2022-05-26 20:56:55,971 INFO [train.py:842] (0/4) Epoch 4, batch 8250, loss[loss=0.2797, simple_loss=0.346, pruned_loss=0.1067, over 7272.00 frames.], tot_loss[loss=0.2572, simple_loss=0.3245, pruned_loss=0.09498, over 1421480.89 frames.], batch size: 25, lr: 1.05e-03 2022-05-26 20:57:34,494 INFO [train.py:842] (0/4) Epoch 4, batch 8300, loss[loss=0.2928, simple_loss=0.3614, pruned_loss=0.1121, over 7166.00 frames.], tot_loss[loss=0.2581, simple_loss=0.3254, pruned_loss=0.09535, over 1422511.48 frames.], batch size: 26, lr: 1.05e-03 2022-05-26 20:58:13,606 INFO [train.py:842] (0/4) Epoch 4, batch 8350, loss[loss=0.255, simple_loss=0.3224, pruned_loss=0.09374, over 7170.00 frames.], tot_loss[loss=0.2575, simple_loss=0.3245, pruned_loss=0.09526, over 1420123.76 frames.], batch size: 26, lr: 1.05e-03 2022-05-26 20:58:52,457 INFO [train.py:842] (0/4) Epoch 4, batch 8400, loss[loss=0.2032, simple_loss=0.2766, pruned_loss=0.06491, over 7082.00 frames.], tot_loss[loss=0.2565, simple_loss=0.3241, pruned_loss=0.09439, over 1421066.46 frames.], batch size: 18, lr: 1.05e-03 2022-05-26 20:59:31,362 INFO [train.py:842] (0/4) Epoch 4, batch 8450, loss[loss=0.2288, simple_loss=0.2956, pruned_loss=0.08103, over 7363.00 frames.], tot_loss[loss=0.2563, simple_loss=0.3236, pruned_loss=0.0945, over 1422333.95 frames.], batch size: 19, lr: 1.05e-03 2022-05-26 21:00:09,907 INFO [train.py:842] (0/4) Epoch 4, batch 8500, loss[loss=0.2915, simple_loss=0.3566, pruned_loss=0.1132, over 7249.00 frames.], tot_loss[loss=0.257, simple_loss=0.3242, pruned_loss=0.09493, over 1422750.93 frames.], batch size: 19, lr: 1.05e-03 2022-05-26 21:00:48,714 INFO [train.py:842] (0/4) Epoch 4, batch 8550, loss[loss=0.2214, simple_loss=0.2846, pruned_loss=0.07911, over 7421.00 frames.], tot_loss[loss=0.2591, simple_loss=0.3256, pruned_loss=0.09633, over 1417995.19 frames.], batch size: 18, lr: 1.05e-03 2022-05-26 21:01:27,362 INFO [train.py:842] (0/4) Epoch 4, batch 8600, loss[loss=0.2443, simple_loss=0.3261, pruned_loss=0.08119, over 7142.00 frames.], tot_loss[loss=0.2572, simple_loss=0.324, pruned_loss=0.09521, over 1419343.87 frames.], batch size: 26, lr: 1.05e-03 2022-05-26 21:02:06,591 INFO [train.py:842] (0/4) Epoch 4, batch 8650, loss[loss=0.2657, simple_loss=0.3346, pruned_loss=0.09838, over 7373.00 frames.], 
tot_loss[loss=0.257, simple_loss=0.3237, pruned_loss=0.09511, over 1417521.89 frames.], batch size: 23, lr: 1.05e-03 2022-05-26 21:02:45,052 INFO [train.py:842] (0/4) Epoch 4, batch 8700, loss[loss=0.2776, simple_loss=0.341, pruned_loss=0.1071, over 7187.00 frames.], tot_loss[loss=0.2548, simple_loss=0.3221, pruned_loss=0.09372, over 1413693.35 frames.], batch size: 26, lr: 1.05e-03 2022-05-26 21:03:23,717 INFO [train.py:842] (0/4) Epoch 4, batch 8750, loss[loss=0.3384, simple_loss=0.374, pruned_loss=0.1514, over 4881.00 frames.], tot_loss[loss=0.2546, simple_loss=0.3217, pruned_loss=0.09369, over 1402712.74 frames.], batch size: 52, lr: 1.05e-03 2022-05-26 21:04:02,246 INFO [train.py:842] (0/4) Epoch 4, batch 8800, loss[loss=0.2404, simple_loss=0.3017, pruned_loss=0.08955, over 6790.00 frames.], tot_loss[loss=0.2539, simple_loss=0.3205, pruned_loss=0.0937, over 1401476.93 frames.], batch size: 31, lr: 1.05e-03 2022-05-26 21:04:40,825 INFO [train.py:842] (0/4) Epoch 4, batch 8850, loss[loss=0.1762, simple_loss=0.2592, pruned_loss=0.04658, over 7159.00 frames.], tot_loss[loss=0.2515, simple_loss=0.3187, pruned_loss=0.09215, over 1395256.33 frames.], batch size: 18, lr: 1.05e-03 2022-05-26 21:05:19,471 INFO [train.py:842] (0/4) Epoch 4, batch 8900, loss[loss=0.2359, simple_loss=0.3049, pruned_loss=0.08344, over 7160.00 frames.], tot_loss[loss=0.2537, simple_loss=0.3198, pruned_loss=0.09382, over 1392047.28 frames.], batch size: 18, lr: 1.05e-03 2022-05-26 21:05:58,249 INFO [train.py:842] (0/4) Epoch 4, batch 8950, loss[loss=0.2252, simple_loss=0.2923, pruned_loss=0.0791, over 7364.00 frames.], tot_loss[loss=0.2559, simple_loss=0.3211, pruned_loss=0.0954, over 1391516.32 frames.], batch size: 19, lr: 1.04e-03 2022-05-26 21:06:36,750 INFO [train.py:842] (0/4) Epoch 4, batch 9000, loss[loss=0.2227, simple_loss=0.3021, pruned_loss=0.07162, over 7157.00 frames.], tot_loss[loss=0.2566, simple_loss=0.3221, pruned_loss=0.0956, over 1376621.23 frames.], batch size: 19, lr: 1.04e-03 2022-05-26 21:06:36,752 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 21:06:46,019 INFO [train.py:871] (0/4) Epoch 4, validation: loss=0.1922, simple_loss=0.292, pruned_loss=0.04621, over 868885.00 frames. 
2022-05-26 21:07:24,190 INFO [train.py:842] (0/4) Epoch 4, batch 9050, loss[loss=0.3228, simple_loss=0.3769, pruned_loss=0.1344, over 4994.00 frames.], tot_loss[loss=0.2577, simple_loss=0.3232, pruned_loss=0.09614, over 1360898.13 frames.], batch size: 52, lr: 1.04e-03 2022-05-26 21:08:01,816 INFO [train.py:842] (0/4) Epoch 4, batch 9100, loss[loss=0.3483, simple_loss=0.392, pruned_loss=0.1524, over 6552.00 frames.], tot_loss[loss=0.261, simple_loss=0.3264, pruned_loss=0.09781, over 1340122.89 frames.], batch size: 38, lr: 1.04e-03 2022-05-26 21:08:39,508 INFO [train.py:842] (0/4) Epoch 4, batch 9150, loss[loss=0.3432, simple_loss=0.3901, pruned_loss=0.1481, over 5160.00 frames.], tot_loss[loss=0.2678, simple_loss=0.331, pruned_loss=0.1022, over 1284989.81 frames.], batch size: 52, lr: 1.04e-03 2022-05-26 21:09:12,595 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-4.pt 2022-05-26 21:09:32,023 INFO [train.py:842] (0/4) Epoch 5, batch 0, loss[loss=0.3114, simple_loss=0.3698, pruned_loss=0.1265, over 7200.00 frames.], tot_loss[loss=0.3114, simple_loss=0.3698, pruned_loss=0.1265, over 7200.00 frames.], batch size: 23, lr: 1.00e-03 2022-05-26 21:10:11,435 INFO [train.py:842] (0/4) Epoch 5, batch 50, loss[loss=0.2816, simple_loss=0.3582, pruned_loss=0.1025, over 7337.00 frames.], tot_loss[loss=0.2534, simple_loss=0.3236, pruned_loss=0.0916, over 320267.14 frames.], batch size: 22, lr: 1.00e-03 2022-05-26 21:10:50,252 INFO [train.py:842] (0/4) Epoch 5, batch 100, loss[loss=0.3436, simple_loss=0.4008, pruned_loss=0.1432, over 7332.00 frames.], tot_loss[loss=0.2576, simple_loss=0.326, pruned_loss=0.09463, over 566561.71 frames.], batch size: 22, lr: 1.00e-03 2022-05-26 21:11:29,044 INFO [train.py:842] (0/4) Epoch 5, batch 150, loss[loss=0.3451, simple_loss=0.3891, pruned_loss=0.1505, over 4743.00 frames.], tot_loss[loss=0.2604, simple_loss=0.3269, pruned_loss=0.09696, over 754480.79 frames.], batch size: 52, lr: 1.00e-03 2022-05-26 21:12:07,500 INFO [train.py:842] (0/4) Epoch 5, batch 200, loss[loss=0.2178, simple_loss=0.2925, pruned_loss=0.0716, over 7163.00 frames.], tot_loss[loss=0.258, simple_loss=0.3259, pruned_loss=0.09507, over 904208.07 frames.], batch size: 19, lr: 1.00e-03 2022-05-26 21:12:46,173 INFO [train.py:842] (0/4) Epoch 5, batch 250, loss[loss=0.2421, simple_loss=0.3209, pruned_loss=0.08167, over 7327.00 frames.], tot_loss[loss=0.2564, simple_loss=0.3255, pruned_loss=0.09365, over 1021499.88 frames.], batch size: 22, lr: 1.00e-03 2022-05-26 21:13:24,985 INFO [train.py:842] (0/4) Epoch 5, batch 300, loss[loss=0.197, simple_loss=0.2729, pruned_loss=0.06053, over 7263.00 frames.], tot_loss[loss=0.2528, simple_loss=0.3221, pruned_loss=0.09179, over 1113416.54 frames.], batch size: 17, lr: 1.00e-03 2022-05-26 21:14:03,869 INFO [train.py:842] (0/4) Epoch 5, batch 350, loss[loss=0.2144, simple_loss=0.2866, pruned_loss=0.07115, over 7159.00 frames.], tot_loss[loss=0.2512, simple_loss=0.3204, pruned_loss=0.09103, over 1181319.67 frames.], batch size: 19, lr: 1.00e-03 2022-05-26 21:14:42,425 INFO [train.py:842] (0/4) Epoch 5, batch 400, loss[loss=0.2661, simple_loss=0.3248, pruned_loss=0.1037, over 7088.00 frames.], tot_loss[loss=0.2535, simple_loss=0.3219, pruned_loss=0.09251, over 1232977.57 frames.], batch size: 28, lr: 9.99e-04 2022-05-26 21:15:21,482 INFO [train.py:842] (0/4) Epoch 5, batch 450, loss[loss=0.237, simple_loss=0.3107, pruned_loss=0.08162, over 7006.00 frames.], tot_loss[loss=0.2538, simple_loss=0.3216, 
pruned_loss=0.093, over 1275187.56 frames.], batch size: 28, lr: 9.99e-04 2022-05-26 21:16:00,451 INFO [train.py:842] (0/4) Epoch 5, batch 500, loss[loss=0.2179, simple_loss=0.3112, pruned_loss=0.06225, over 7315.00 frames.], tot_loss[loss=0.2517, simple_loss=0.3201, pruned_loss=0.09167, over 1310712.44 frames.], batch size: 21, lr: 9.98e-04 2022-05-26 21:16:39,268 INFO [train.py:842] (0/4) Epoch 5, batch 550, loss[loss=0.2746, simple_loss=0.3457, pruned_loss=0.1017, over 6671.00 frames.], tot_loss[loss=0.2497, simple_loss=0.3187, pruned_loss=0.09034, over 1334596.87 frames.], batch size: 31, lr: 9.97e-04 2022-05-26 21:17:17,934 INFO [train.py:842] (0/4) Epoch 5, batch 600, loss[loss=0.2291, simple_loss=0.2865, pruned_loss=0.08579, over 6995.00 frames.], tot_loss[loss=0.2493, simple_loss=0.3186, pruned_loss=0.08995, over 1356480.29 frames.], batch size: 16, lr: 9.97e-04 2022-05-26 21:17:56,775 INFO [train.py:842] (0/4) Epoch 5, batch 650, loss[loss=0.2366, simple_loss=0.307, pruned_loss=0.08313, over 7330.00 frames.], tot_loss[loss=0.2494, simple_loss=0.3188, pruned_loss=0.09007, over 1371434.55 frames.], batch size: 20, lr: 9.96e-04 2022-05-26 21:18:35,149 INFO [train.py:842] (0/4) Epoch 5, batch 700, loss[loss=0.2497, simple_loss=0.3276, pruned_loss=0.08591, over 7319.00 frames.], tot_loss[loss=0.25, simple_loss=0.3197, pruned_loss=0.09013, over 1381222.45 frames.], batch size: 25, lr: 9.95e-04 2022-05-26 21:19:14,041 INFO [train.py:842] (0/4) Epoch 5, batch 750, loss[loss=0.2007, simple_loss=0.2851, pruned_loss=0.0581, over 7453.00 frames.], tot_loss[loss=0.249, simple_loss=0.3185, pruned_loss=0.08978, over 1386199.42 frames.], batch size: 19, lr: 9.95e-04 2022-05-26 21:19:52,738 INFO [train.py:842] (0/4) Epoch 5, batch 800, loss[loss=0.1902, simple_loss=0.2578, pruned_loss=0.0613, over 7068.00 frames.], tot_loss[loss=0.2476, simple_loss=0.3169, pruned_loss=0.08919, over 1397826.87 frames.], batch size: 18, lr: 9.94e-04 2022-05-26 21:20:31,431 INFO [train.py:842] (0/4) Epoch 5, batch 850, loss[loss=0.2426, simple_loss=0.308, pruned_loss=0.08855, over 7069.00 frames.], tot_loss[loss=0.2477, simple_loss=0.3169, pruned_loss=0.0893, over 1396394.40 frames.], batch size: 18, lr: 9.93e-04 2022-05-26 21:21:09,958 INFO [train.py:842] (0/4) Epoch 5, batch 900, loss[loss=0.2685, simple_loss=0.3416, pruned_loss=0.09766, over 7333.00 frames.], tot_loss[loss=0.248, simple_loss=0.3173, pruned_loss=0.0894, over 1403120.34 frames.], batch size: 21, lr: 9.93e-04 2022-05-26 21:21:48,885 INFO [train.py:842] (0/4) Epoch 5, batch 950, loss[loss=0.3013, simple_loss=0.3779, pruned_loss=0.1124, over 7092.00 frames.], tot_loss[loss=0.2494, simple_loss=0.3188, pruned_loss=0.09001, over 1407758.50 frames.], batch size: 28, lr: 9.92e-04 2022-05-26 21:22:27,557 INFO [train.py:842] (0/4) Epoch 5, batch 1000, loss[loss=0.2303, simple_loss=0.302, pruned_loss=0.07928, over 7065.00 frames.], tot_loss[loss=0.2473, simple_loss=0.3171, pruned_loss=0.08874, over 1412149.65 frames.], batch size: 18, lr: 9.91e-04 2022-05-26 21:23:06,447 INFO [train.py:842] (0/4) Epoch 5, batch 1050, loss[loss=0.2151, simple_loss=0.2954, pruned_loss=0.06741, over 7300.00 frames.], tot_loss[loss=0.246, simple_loss=0.3167, pruned_loss=0.08767, over 1417548.51 frames.], batch size: 24, lr: 9.91e-04 2022-05-26 21:23:44,729 INFO [train.py:842] (0/4) Epoch 5, batch 1100, loss[loss=0.2683, simple_loss=0.333, pruned_loss=0.1018, over 6534.00 frames.], tot_loss[loss=0.2467, simple_loss=0.3173, pruned_loss=0.08803, over 1413021.90 frames.], batch size: 
38, lr: 9.90e-04 2022-05-26 21:24:23,623 INFO [train.py:842] (0/4) Epoch 5, batch 1150, loss[loss=0.222, simple_loss=0.3114, pruned_loss=0.06631, over 7405.00 frames.], tot_loss[loss=0.2477, simple_loss=0.3184, pruned_loss=0.08846, over 1415676.25 frames.], batch size: 20, lr: 9.89e-04 2022-05-26 21:25:02,149 INFO [train.py:842] (0/4) Epoch 5, batch 1200, loss[loss=0.2503, simple_loss=0.3266, pruned_loss=0.08697, over 6292.00 frames.], tot_loss[loss=0.2475, simple_loss=0.3178, pruned_loss=0.08861, over 1417498.75 frames.], batch size: 38, lr: 9.89e-04 2022-05-26 21:25:40,991 INFO [train.py:842] (0/4) Epoch 5, batch 1250, loss[loss=0.2576, simple_loss=0.3141, pruned_loss=0.1006, over 7243.00 frames.], tot_loss[loss=0.249, simple_loss=0.3184, pruned_loss=0.08982, over 1412292.61 frames.], batch size: 19, lr: 9.88e-04 2022-05-26 21:26:19,423 INFO [train.py:842] (0/4) Epoch 5, batch 1300, loss[loss=0.2705, simple_loss=0.3437, pruned_loss=0.09865, over 7333.00 frames.], tot_loss[loss=0.2503, simple_loss=0.3201, pruned_loss=0.09029, over 1415748.55 frames.], batch size: 20, lr: 9.87e-04 2022-05-26 21:26:58,319 INFO [train.py:842] (0/4) Epoch 5, batch 1350, loss[loss=0.2055, simple_loss=0.2689, pruned_loss=0.07105, over 7146.00 frames.], tot_loss[loss=0.251, simple_loss=0.3205, pruned_loss=0.09076, over 1422443.76 frames.], batch size: 17, lr: 9.87e-04 2022-05-26 21:27:36,942 INFO [train.py:842] (0/4) Epoch 5, batch 1400, loss[loss=0.2872, simple_loss=0.3503, pruned_loss=0.112, over 7227.00 frames.], tot_loss[loss=0.2526, simple_loss=0.3219, pruned_loss=0.09162, over 1418935.18 frames.], batch size: 20, lr: 9.86e-04 2022-05-26 21:28:15,793 INFO [train.py:842] (0/4) Epoch 5, batch 1450, loss[loss=0.2146, simple_loss=0.2807, pruned_loss=0.07428, over 6992.00 frames.], tot_loss[loss=0.2512, simple_loss=0.3207, pruned_loss=0.09087, over 1420024.20 frames.], batch size: 16, lr: 9.86e-04 2022-05-26 21:28:54,578 INFO [train.py:842] (0/4) Epoch 5, batch 1500, loss[loss=0.2603, simple_loss=0.3197, pruned_loss=0.1005, over 7332.00 frames.], tot_loss[loss=0.2508, simple_loss=0.3199, pruned_loss=0.09083, over 1423665.33 frames.], batch size: 20, lr: 9.85e-04 2022-05-26 21:29:33,921 INFO [train.py:842] (0/4) Epoch 5, batch 1550, loss[loss=0.2504, simple_loss=0.3301, pruned_loss=0.0853, over 7368.00 frames.], tot_loss[loss=0.2486, simple_loss=0.318, pruned_loss=0.08962, over 1425124.83 frames.], batch size: 23, lr: 9.84e-04 2022-05-26 21:30:12,526 INFO [train.py:842] (0/4) Epoch 5, batch 1600, loss[loss=0.3183, simple_loss=0.3772, pruned_loss=0.1297, over 7321.00 frames.], tot_loss[loss=0.2488, simple_loss=0.3184, pruned_loss=0.08962, over 1424157.29 frames.], batch size: 25, lr: 9.84e-04 2022-05-26 21:31:01,955 INFO [train.py:842] (0/4) Epoch 5, batch 1650, loss[loss=0.2609, simple_loss=0.3337, pruned_loss=0.09409, over 7110.00 frames.], tot_loss[loss=0.2483, simple_loss=0.3182, pruned_loss=0.08923, over 1422521.86 frames.], batch size: 21, lr: 9.83e-04 2022-05-26 21:31:40,731 INFO [train.py:842] (0/4) Epoch 5, batch 1700, loss[loss=0.2373, simple_loss=0.321, pruned_loss=0.07681, over 7330.00 frames.], tot_loss[loss=0.2482, simple_loss=0.3178, pruned_loss=0.08932, over 1424690.97 frames.], batch size: 22, lr: 9.82e-04 2022-05-26 21:32:19,655 INFO [train.py:842] (0/4) Epoch 5, batch 1750, loss[loss=0.2559, simple_loss=0.3228, pruned_loss=0.09453, over 7304.00 frames.], tot_loss[loss=0.248, simple_loss=0.3176, pruned_loss=0.08917, over 1423467.42 frames.], batch size: 24, lr: 9.82e-04 2022-05-26 21:32:58,262 
INFO [train.py:842] (0/4) Epoch 5, batch 1800, loss[loss=0.256, simple_loss=0.3417, pruned_loss=0.08513, over 7326.00 frames.], tot_loss[loss=0.2505, simple_loss=0.3203, pruned_loss=0.09036, over 1426136.36 frames.], batch size: 21, lr: 9.81e-04 2022-05-26 21:33:37,264 INFO [train.py:842] (0/4) Epoch 5, batch 1850, loss[loss=0.3224, simple_loss=0.3766, pruned_loss=0.1341, over 6289.00 frames.], tot_loss[loss=0.2511, simple_loss=0.3208, pruned_loss=0.09067, over 1425622.84 frames.], batch size: 37, lr: 9.81e-04 2022-05-26 21:34:15,859 INFO [train.py:842] (0/4) Epoch 5, batch 1900, loss[loss=0.283, simple_loss=0.3574, pruned_loss=0.1043, over 7105.00 frames.], tot_loss[loss=0.2494, simple_loss=0.3198, pruned_loss=0.08949, over 1426921.42 frames.], batch size: 21, lr: 9.80e-04 2022-05-26 21:34:55,056 INFO [train.py:842] (0/4) Epoch 5, batch 1950, loss[loss=0.2095, simple_loss=0.2861, pruned_loss=0.0665, over 7165.00 frames.], tot_loss[loss=0.2501, simple_loss=0.3199, pruned_loss=0.09021, over 1427591.53 frames.], batch size: 18, lr: 9.79e-04 2022-05-26 21:35:33,842 INFO [train.py:842] (0/4) Epoch 5, batch 2000, loss[loss=0.2828, simple_loss=0.3429, pruned_loss=0.1113, over 7291.00 frames.], tot_loss[loss=0.2508, simple_loss=0.3201, pruned_loss=0.09071, over 1424962.53 frames.], batch size: 25, lr: 9.79e-04 2022-05-26 21:36:12,762 INFO [train.py:842] (0/4) Epoch 5, batch 2050, loss[loss=0.2456, simple_loss=0.3155, pruned_loss=0.08789, over 7294.00 frames.], tot_loss[loss=0.2496, simple_loss=0.3191, pruned_loss=0.09005, over 1429569.41 frames.], batch size: 24, lr: 9.78e-04 2022-05-26 21:36:51,525 INFO [train.py:842] (0/4) Epoch 5, batch 2100, loss[loss=0.1901, simple_loss=0.2626, pruned_loss=0.05886, over 7432.00 frames.], tot_loss[loss=0.2515, simple_loss=0.3204, pruned_loss=0.09128, over 1433175.33 frames.], batch size: 18, lr: 9.77e-04 2022-05-26 21:37:30,127 INFO [train.py:842] (0/4) Epoch 5, batch 2150, loss[loss=0.2106, simple_loss=0.2946, pruned_loss=0.06327, over 7059.00 frames.], tot_loss[loss=0.2499, simple_loss=0.32, pruned_loss=0.08996, over 1432055.54 frames.], batch size: 18, lr: 9.77e-04 2022-05-26 21:38:08,834 INFO [train.py:842] (0/4) Epoch 5, batch 2200, loss[loss=0.2759, simple_loss=0.3482, pruned_loss=0.1018, over 7333.00 frames.], tot_loss[loss=0.2479, simple_loss=0.3182, pruned_loss=0.08876, over 1433658.74 frames.], batch size: 22, lr: 9.76e-04 2022-05-26 21:38:47,513 INFO [train.py:842] (0/4) Epoch 5, batch 2250, loss[loss=0.2286, simple_loss=0.3153, pruned_loss=0.071, over 7370.00 frames.], tot_loss[loss=0.2481, simple_loss=0.3184, pruned_loss=0.08889, over 1431091.87 frames.], batch size: 23, lr: 9.76e-04 2022-05-26 21:39:26,220 INFO [train.py:842] (0/4) Epoch 5, batch 2300, loss[loss=0.2736, simple_loss=0.3107, pruned_loss=0.1183, over 7279.00 frames.], tot_loss[loss=0.2482, simple_loss=0.3184, pruned_loss=0.08899, over 1429844.25 frames.], batch size: 17, lr: 9.75e-04 2022-05-26 21:40:05,045 INFO [train.py:842] (0/4) Epoch 5, batch 2350, loss[loss=0.1791, simple_loss=0.2549, pruned_loss=0.0517, over 7412.00 frames.], tot_loss[loss=0.2469, simple_loss=0.3174, pruned_loss=0.08817, over 1433187.72 frames.], batch size: 18, lr: 9.74e-04 2022-05-26 21:40:53,737 INFO [train.py:842] (0/4) Epoch 5, batch 2400, loss[loss=0.2198, simple_loss=0.2996, pruned_loss=0.07001, over 7209.00 frames.], tot_loss[loss=0.2463, simple_loss=0.3166, pruned_loss=0.08795, over 1434406.88 frames.], batch size: 21, lr: 9.74e-04 2022-05-26 21:41:32,642 INFO [train.py:842] (0/4) Epoch 5, batch 
2450, loss[loss=0.2319, simple_loss=0.296, pruned_loss=0.08389, over 7269.00 frames.], tot_loss[loss=0.2469, simple_loss=0.3171, pruned_loss=0.08831, over 1434556.62 frames.], batch size: 18, lr: 9.73e-04 2022-05-26 21:42:21,519 INFO [train.py:842] (0/4) Epoch 5, batch 2500, loss[loss=0.2644, simple_loss=0.3312, pruned_loss=0.09887, over 7208.00 frames.], tot_loss[loss=0.2466, simple_loss=0.317, pruned_loss=0.08813, over 1432873.29 frames.], batch size: 22, lr: 9.73e-04 2022-05-26 21:43:11,060 INFO [train.py:842] (0/4) Epoch 5, batch 2550, loss[loss=0.2406, simple_loss=0.3178, pruned_loss=0.08174, over 7142.00 frames.], tot_loss[loss=0.2466, simple_loss=0.3173, pruned_loss=0.088, over 1433069.58 frames.], batch size: 20, lr: 9.72e-04 2022-05-26 21:43:49,608 INFO [train.py:842] (0/4) Epoch 5, batch 2600, loss[loss=0.2479, simple_loss=0.3283, pruned_loss=0.08375, over 7322.00 frames.], tot_loss[loss=0.2485, simple_loss=0.3187, pruned_loss=0.0892, over 1431263.55 frames.], batch size: 21, lr: 9.71e-04 2022-05-26 21:44:28,550 INFO [train.py:842] (0/4) Epoch 5, batch 2650, loss[loss=0.208, simple_loss=0.2754, pruned_loss=0.07026, over 6996.00 frames.], tot_loss[loss=0.2489, simple_loss=0.3187, pruned_loss=0.08955, over 1430087.90 frames.], batch size: 16, lr: 9.71e-04 2022-05-26 21:45:07,103 INFO [train.py:842] (0/4) Epoch 5, batch 2700, loss[loss=0.2348, simple_loss=0.3073, pruned_loss=0.08122, over 7277.00 frames.], tot_loss[loss=0.2464, simple_loss=0.3169, pruned_loss=0.08796, over 1431876.61 frames.], batch size: 18, lr: 9.70e-04 2022-05-26 21:45:46,201 INFO [train.py:842] (0/4) Epoch 5, batch 2750, loss[loss=0.3006, simple_loss=0.3561, pruned_loss=0.1225, over 7355.00 frames.], tot_loss[loss=0.2471, simple_loss=0.3171, pruned_loss=0.08856, over 1432106.88 frames.], batch size: 19, lr: 9.70e-04 2022-05-26 21:46:24,821 INFO [train.py:842] (0/4) Epoch 5, batch 2800, loss[loss=0.2358, simple_loss=0.3007, pruned_loss=0.08541, over 7119.00 frames.], tot_loss[loss=0.2464, simple_loss=0.3162, pruned_loss=0.08831, over 1432855.26 frames.], batch size: 17, lr: 9.69e-04 2022-05-26 21:47:03,559 INFO [train.py:842] (0/4) Epoch 5, batch 2850, loss[loss=0.2795, simple_loss=0.3392, pruned_loss=0.1099, over 6757.00 frames.], tot_loss[loss=0.2489, simple_loss=0.3186, pruned_loss=0.08956, over 1430224.93 frames.], batch size: 31, lr: 9.68e-04 2022-05-26 21:47:41,771 INFO [train.py:842] (0/4) Epoch 5, batch 2900, loss[loss=0.2585, simple_loss=0.3265, pruned_loss=0.09529, over 7283.00 frames.], tot_loss[loss=0.2487, simple_loss=0.3189, pruned_loss=0.08929, over 1428263.47 frames.], batch size: 24, lr: 9.68e-04 2022-05-26 21:48:20,721 INFO [train.py:842] (0/4) Epoch 5, batch 2950, loss[loss=0.2178, simple_loss=0.3043, pruned_loss=0.06565, over 7336.00 frames.], tot_loss[loss=0.2481, simple_loss=0.3181, pruned_loss=0.08906, over 1429723.92 frames.], batch size: 22, lr: 9.67e-04 2022-05-26 21:48:59,371 INFO [train.py:842] (0/4) Epoch 5, batch 3000, loss[loss=0.2333, simple_loss=0.3118, pruned_loss=0.0774, over 7178.00 frames.], tot_loss[loss=0.2468, simple_loss=0.3167, pruned_loss=0.08841, over 1425897.99 frames.], batch size: 26, lr: 9.66e-04 2022-05-26 21:48:59,373 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 21:49:08,664 INFO [train.py:871] (0/4) Epoch 5, validation: loss=0.1927, simple_loss=0.2922, pruned_loss=0.04663, over 868885.00 frames. 
2022-05-26 21:49:47,976 INFO [train.py:842] (0/4) Epoch 5, batch 3050, loss[loss=0.2313, simple_loss=0.3046, pruned_loss=0.07898, over 7212.00 frames.], tot_loss[loss=0.2478, simple_loss=0.3178, pruned_loss=0.08887, over 1430111.47 frames.], batch size: 22, lr: 9.66e-04 2022-05-26 21:50:26,471 INFO [train.py:842] (0/4) Epoch 5, batch 3100, loss[loss=0.191, simple_loss=0.2702, pruned_loss=0.05589, over 7238.00 frames.], tot_loss[loss=0.2489, simple_loss=0.3189, pruned_loss=0.08944, over 1428721.99 frames.], batch size: 20, lr: 9.65e-04 2022-05-26 21:51:05,267 INFO [train.py:842] (0/4) Epoch 5, batch 3150, loss[loss=0.2598, simple_loss=0.3356, pruned_loss=0.09203, over 7306.00 frames.], tot_loss[loss=0.2471, simple_loss=0.3179, pruned_loss=0.08816, over 1430045.92 frames.], batch size: 25, lr: 9.65e-04 2022-05-26 21:51:43,967 INFO [train.py:842] (0/4) Epoch 5, batch 3200, loss[loss=0.2037, simple_loss=0.2754, pruned_loss=0.06605, over 7352.00 frames.], tot_loss[loss=0.2469, simple_loss=0.3175, pruned_loss=0.08819, over 1430985.78 frames.], batch size: 19, lr: 9.64e-04 2022-05-26 21:52:07,364 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-40000.pt 2022-05-26 21:52:25,510 INFO [train.py:842] (0/4) Epoch 5, batch 3250, loss[loss=0.2295, simple_loss=0.3057, pruned_loss=0.07671, over 7179.00 frames.], tot_loss[loss=0.2467, simple_loss=0.3169, pruned_loss=0.08821, over 1428710.32 frames.], batch size: 18, lr: 9.64e-04 2022-05-26 21:53:04,084 INFO [train.py:842] (0/4) Epoch 5, batch 3300, loss[loss=0.3476, simple_loss=0.3913, pruned_loss=0.1519, over 7174.00 frames.], tot_loss[loss=0.2478, simple_loss=0.3177, pruned_loss=0.0889, over 1423076.38 frames.], batch size: 26, lr: 9.63e-04 2022-05-26 21:53:43,083 INFO [train.py:842] (0/4) Epoch 5, batch 3350, loss[loss=0.2334, simple_loss=0.3142, pruned_loss=0.07631, over 7121.00 frames.], tot_loss[loss=0.2481, simple_loss=0.3185, pruned_loss=0.08886, over 1425583.26 frames.], batch size: 21, lr: 9.62e-04 2022-05-26 21:54:21,685 INFO [train.py:842] (0/4) Epoch 5, batch 3400, loss[loss=0.2215, simple_loss=0.3085, pruned_loss=0.06727, over 7244.00 frames.], tot_loss[loss=0.2481, simple_loss=0.3187, pruned_loss=0.0887, over 1427473.10 frames.], batch size: 20, lr: 9.62e-04 2022-05-26 21:55:00,666 INFO [train.py:842] (0/4) Epoch 5, batch 3450, loss[loss=0.2787, simple_loss=0.3374, pruned_loss=0.1099, over 7206.00 frames.], tot_loss[loss=0.2455, simple_loss=0.3166, pruned_loss=0.0872, over 1426969.97 frames.], batch size: 23, lr: 9.61e-04 2022-05-26 21:55:39,261 INFO [train.py:842] (0/4) Epoch 5, batch 3500, loss[loss=0.2885, simple_loss=0.3519, pruned_loss=0.1126, over 7309.00 frames.], tot_loss[loss=0.2452, simple_loss=0.3167, pruned_loss=0.0868, over 1428955.53 frames.], batch size: 21, lr: 9.61e-04 2022-05-26 21:56:18,052 INFO [train.py:842] (0/4) Epoch 5, batch 3550, loss[loss=0.2141, simple_loss=0.301, pruned_loss=0.0636, over 7334.00 frames.], tot_loss[loss=0.2466, simple_loss=0.3174, pruned_loss=0.0879, over 1424861.35 frames.], batch size: 20, lr: 9.60e-04 2022-05-26 21:56:56,518 INFO [train.py:842] (0/4) Epoch 5, batch 3600, loss[loss=0.2423, simple_loss=0.3167, pruned_loss=0.08398, over 7323.00 frames.], tot_loss[loss=0.2476, simple_loss=0.3182, pruned_loss=0.08852, over 1420169.85 frames.], batch size: 20, lr: 9.59e-04 2022-05-26 21:57:35,223 INFO [train.py:842] (0/4) Epoch 5, batch 3650, loss[loss=0.267, simple_loss=0.3136, pruned_loss=0.1102, over 7064.00 frames.], tot_loss[loss=0.2509, 
simple_loss=0.3208, pruned_loss=0.09047, over 1411750.60 frames.], batch size: 18, lr: 9.59e-04 2022-05-26 21:58:13,856 INFO [train.py:842] (0/4) Epoch 5, batch 3700, loss[loss=0.2347, simple_loss=0.3032, pruned_loss=0.08313, over 7219.00 frames.], tot_loss[loss=0.249, simple_loss=0.3189, pruned_loss=0.08958, over 1417511.69 frames.], batch size: 21, lr: 9.58e-04 2022-05-26 21:58:52,969 INFO [train.py:842] (0/4) Epoch 5, batch 3750, loss[loss=0.4234, simple_loss=0.435, pruned_loss=0.2059, over 5042.00 frames.], tot_loss[loss=0.2491, simple_loss=0.3189, pruned_loss=0.08966, over 1417871.86 frames.], batch size: 52, lr: 9.58e-04 2022-05-26 21:59:31,650 INFO [train.py:842] (0/4) Epoch 5, batch 3800, loss[loss=0.1764, simple_loss=0.2602, pruned_loss=0.04631, over 6803.00 frames.], tot_loss[loss=0.2501, simple_loss=0.3198, pruned_loss=0.09022, over 1418509.68 frames.], batch size: 15, lr: 9.57e-04 2022-05-26 22:00:10,485 INFO [train.py:842] (0/4) Epoch 5, batch 3850, loss[loss=0.2452, simple_loss=0.3085, pruned_loss=0.09092, over 7403.00 frames.], tot_loss[loss=0.2485, simple_loss=0.3185, pruned_loss=0.08922, over 1419521.92 frames.], batch size: 18, lr: 9.56e-04 2022-05-26 22:00:49,035 INFO [train.py:842] (0/4) Epoch 5, batch 3900, loss[loss=0.2013, simple_loss=0.2843, pruned_loss=0.05912, over 7355.00 frames.], tot_loss[loss=0.2496, simple_loss=0.3191, pruned_loss=0.0901, over 1417964.67 frames.], batch size: 19, lr: 9.56e-04 2022-05-26 22:01:28,127 INFO [train.py:842] (0/4) Epoch 5, batch 3950, loss[loss=0.2164, simple_loss=0.2909, pruned_loss=0.07097, over 7256.00 frames.], tot_loss[loss=0.2476, simple_loss=0.3176, pruned_loss=0.08882, over 1415420.66 frames.], batch size: 19, lr: 9.55e-04 2022-05-26 22:02:06,777 INFO [train.py:842] (0/4) Epoch 5, batch 4000, loss[loss=0.2441, simple_loss=0.3204, pruned_loss=0.08394, over 7333.00 frames.], tot_loss[loss=0.2478, simple_loss=0.3174, pruned_loss=0.08908, over 1419366.03 frames.], batch size: 22, lr: 9.55e-04 2022-05-26 22:02:46,091 INFO [train.py:842] (0/4) Epoch 5, batch 4050, loss[loss=0.1678, simple_loss=0.2566, pruned_loss=0.03951, over 7279.00 frames.], tot_loss[loss=0.2467, simple_loss=0.3165, pruned_loss=0.08849, over 1420891.43 frames.], batch size: 18, lr: 9.54e-04 2022-05-26 22:03:24,852 INFO [train.py:842] (0/4) Epoch 5, batch 4100, loss[loss=0.2286, simple_loss=0.3055, pruned_loss=0.07581, over 7125.00 frames.], tot_loss[loss=0.248, simple_loss=0.3174, pruned_loss=0.08932, over 1422243.96 frames.], batch size: 26, lr: 9.54e-04 2022-05-26 22:04:03,580 INFO [train.py:842] (0/4) Epoch 5, batch 4150, loss[loss=0.2276, simple_loss=0.3035, pruned_loss=0.07584, over 7122.00 frames.], tot_loss[loss=0.2489, simple_loss=0.3186, pruned_loss=0.08965, over 1417146.78 frames.], batch size: 26, lr: 9.53e-04 2022-05-26 22:04:42,302 INFO [train.py:842] (0/4) Epoch 5, batch 4200, loss[loss=0.1835, simple_loss=0.2628, pruned_loss=0.0521, over 7265.00 frames.], tot_loss[loss=0.2484, simple_loss=0.3182, pruned_loss=0.08928, over 1421736.71 frames.], batch size: 18, lr: 9.52e-04 2022-05-26 22:05:21,085 INFO [train.py:842] (0/4) Epoch 5, batch 4250, loss[loss=0.2445, simple_loss=0.3222, pruned_loss=0.08336, over 7198.00 frames.], tot_loss[loss=0.2495, simple_loss=0.3188, pruned_loss=0.09012, over 1421602.27 frames.], batch size: 22, lr: 9.52e-04 2022-05-26 22:05:59,621 INFO [train.py:842] (0/4) Epoch 5, batch 4300, loss[loss=0.2892, simple_loss=0.3409, pruned_loss=0.1188, over 7171.00 frames.], tot_loss[loss=0.2481, simple_loss=0.3177, 
pruned_loss=0.0893, over 1421841.13 frames.], batch size: 18, lr: 9.51e-04 2022-05-26 22:06:38,339 INFO [train.py:842] (0/4) Epoch 5, batch 4350, loss[loss=0.2555, simple_loss=0.3292, pruned_loss=0.09086, over 7212.00 frames.], tot_loss[loss=0.2469, simple_loss=0.3164, pruned_loss=0.08868, over 1422941.75 frames.], batch size: 26, lr: 9.51e-04 2022-05-26 22:07:17,015 INFO [train.py:842] (0/4) Epoch 5, batch 4400, loss[loss=0.2537, simple_loss=0.3421, pruned_loss=0.0827, over 7153.00 frames.], tot_loss[loss=0.2462, simple_loss=0.3159, pruned_loss=0.08827, over 1424594.19 frames.], batch size: 20, lr: 9.50e-04 2022-05-26 22:07:55,961 INFO [train.py:842] (0/4) Epoch 5, batch 4450, loss[loss=0.2489, simple_loss=0.3267, pruned_loss=0.08554, over 7183.00 frames.], tot_loss[loss=0.2461, simple_loss=0.3163, pruned_loss=0.08793, over 1426302.90 frames.], batch size: 26, lr: 9.50e-04 2022-05-26 22:08:34,455 INFO [train.py:842] (0/4) Epoch 5, batch 4500, loss[loss=0.2356, simple_loss=0.3092, pruned_loss=0.08099, over 7385.00 frames.], tot_loss[loss=0.2464, simple_loss=0.3168, pruned_loss=0.08796, over 1424898.98 frames.], batch size: 23, lr: 9.49e-04 2022-05-26 22:09:13,378 INFO [train.py:842] (0/4) Epoch 5, batch 4550, loss[loss=0.2236, simple_loss=0.3103, pruned_loss=0.0685, over 6970.00 frames.], tot_loss[loss=0.2469, simple_loss=0.3172, pruned_loss=0.0883, over 1427094.83 frames.], batch size: 28, lr: 9.48e-04 2022-05-26 22:09:51,881 INFO [train.py:842] (0/4) Epoch 5, batch 4600, loss[loss=0.3392, simple_loss=0.3908, pruned_loss=0.1438, over 7216.00 frames.], tot_loss[loss=0.247, simple_loss=0.3173, pruned_loss=0.08837, over 1423357.98 frames.], batch size: 22, lr: 9.48e-04 2022-05-26 22:10:31,332 INFO [train.py:842] (0/4) Epoch 5, batch 4650, loss[loss=0.2825, simple_loss=0.3427, pruned_loss=0.1111, over 7327.00 frames.], tot_loss[loss=0.2476, simple_loss=0.3176, pruned_loss=0.08882, over 1423970.22 frames.], batch size: 21, lr: 9.47e-04 2022-05-26 22:11:09,882 INFO [train.py:842] (0/4) Epoch 5, batch 4700, loss[loss=0.2374, simple_loss=0.3092, pruned_loss=0.08273, over 7321.00 frames.], tot_loss[loss=0.2482, simple_loss=0.3179, pruned_loss=0.08925, over 1423792.96 frames.], batch size: 20, lr: 9.47e-04 2022-05-26 22:11:48,474 INFO [train.py:842] (0/4) Epoch 5, batch 4750, loss[loss=0.2243, simple_loss=0.3135, pruned_loss=0.06756, over 7335.00 frames.], tot_loss[loss=0.2476, simple_loss=0.318, pruned_loss=0.0886, over 1423185.35 frames.], batch size: 20, lr: 9.46e-04 2022-05-26 22:12:26,969 INFO [train.py:842] (0/4) Epoch 5, batch 4800, loss[loss=0.2488, simple_loss=0.3293, pruned_loss=0.08417, over 7326.00 frames.], tot_loss[loss=0.2471, simple_loss=0.3179, pruned_loss=0.08816, over 1421838.95 frames.], batch size: 22, lr: 9.46e-04 2022-05-26 22:13:05,801 INFO [train.py:842] (0/4) Epoch 5, batch 4850, loss[loss=0.2131, simple_loss=0.2839, pruned_loss=0.07117, over 7395.00 frames.], tot_loss[loss=0.2465, simple_loss=0.3174, pruned_loss=0.08778, over 1425312.04 frames.], batch size: 18, lr: 9.45e-04 2022-05-26 22:13:44,301 INFO [train.py:842] (0/4) Epoch 5, batch 4900, loss[loss=0.2457, simple_loss=0.3296, pruned_loss=0.08086, over 7222.00 frames.], tot_loss[loss=0.2468, simple_loss=0.3176, pruned_loss=0.088, over 1425601.12 frames.], batch size: 23, lr: 9.45e-04 2022-05-26 22:14:23,578 INFO [train.py:842] (0/4) Epoch 5, batch 4950, loss[loss=0.2601, simple_loss=0.3434, pruned_loss=0.08839, over 7366.00 frames.], tot_loss[loss=0.2482, simple_loss=0.3185, pruned_loss=0.08895, over 1427608.87 
frames.], batch size: 23, lr: 9.44e-04 2022-05-26 22:15:02,124 INFO [train.py:842] (0/4) Epoch 5, batch 5000, loss[loss=0.3058, simple_loss=0.3773, pruned_loss=0.1171, over 7147.00 frames.], tot_loss[loss=0.25, simple_loss=0.3199, pruned_loss=0.09005, over 1424888.89 frames.], batch size: 28, lr: 9.43e-04 2022-05-26 22:15:41,355 INFO [train.py:842] (0/4) Epoch 5, batch 5050, loss[loss=0.247, simple_loss=0.3262, pruned_loss=0.08392, over 7408.00 frames.], tot_loss[loss=0.2464, simple_loss=0.3172, pruned_loss=0.0878, over 1426425.24 frames.], batch size: 21, lr: 9.43e-04 2022-05-26 22:16:20,156 INFO [train.py:842] (0/4) Epoch 5, batch 5100, loss[loss=0.2201, simple_loss=0.3099, pruned_loss=0.06519, over 7335.00 frames.], tot_loss[loss=0.2456, simple_loss=0.317, pruned_loss=0.08712, over 1421693.38 frames.], batch size: 22, lr: 9.42e-04 2022-05-26 22:16:59,119 INFO [train.py:842] (0/4) Epoch 5, batch 5150, loss[loss=0.2357, simple_loss=0.3009, pruned_loss=0.08527, over 7323.00 frames.], tot_loss[loss=0.2448, simple_loss=0.3161, pruned_loss=0.0868, over 1423364.62 frames.], batch size: 20, lr: 9.42e-04 2022-05-26 22:17:37,806 INFO [train.py:842] (0/4) Epoch 5, batch 5200, loss[loss=0.2306, simple_loss=0.3041, pruned_loss=0.07857, over 7437.00 frames.], tot_loss[loss=0.2475, simple_loss=0.3181, pruned_loss=0.08844, over 1423393.84 frames.], batch size: 20, lr: 9.41e-04 2022-05-26 22:18:16,850 INFO [train.py:842] (0/4) Epoch 5, batch 5250, loss[loss=0.24, simple_loss=0.3101, pruned_loss=0.08495, over 7218.00 frames.], tot_loss[loss=0.2466, simple_loss=0.3177, pruned_loss=0.08775, over 1423704.15 frames.], batch size: 21, lr: 9.41e-04 2022-05-26 22:18:55,413 INFO [train.py:842] (0/4) Epoch 5, batch 5300, loss[loss=0.2098, simple_loss=0.278, pruned_loss=0.07081, over 6816.00 frames.], tot_loss[loss=0.2482, simple_loss=0.319, pruned_loss=0.08868, over 1418878.49 frames.], batch size: 15, lr: 9.40e-04 2022-05-26 22:19:34,407 INFO [train.py:842] (0/4) Epoch 5, batch 5350, loss[loss=0.2269, simple_loss=0.3019, pruned_loss=0.07598, over 7436.00 frames.], tot_loss[loss=0.249, simple_loss=0.3193, pruned_loss=0.08942, over 1422189.81 frames.], batch size: 20, lr: 9.40e-04 2022-05-26 22:20:12,918 INFO [train.py:842] (0/4) Epoch 5, batch 5400, loss[loss=0.2544, simple_loss=0.3128, pruned_loss=0.09798, over 7273.00 frames.], tot_loss[loss=0.2472, simple_loss=0.3176, pruned_loss=0.08838, over 1420769.95 frames.], batch size: 18, lr: 9.39e-04 2022-05-26 22:20:51,933 INFO [train.py:842] (0/4) Epoch 5, batch 5450, loss[loss=0.2571, simple_loss=0.3371, pruned_loss=0.08856, over 7341.00 frames.], tot_loss[loss=0.2466, simple_loss=0.317, pruned_loss=0.08815, over 1425218.27 frames.], batch size: 22, lr: 9.38e-04 2022-05-26 22:21:30,507 INFO [train.py:842] (0/4) Epoch 5, batch 5500, loss[loss=0.3222, simple_loss=0.3753, pruned_loss=0.1346, over 7222.00 frames.], tot_loss[loss=0.2494, simple_loss=0.319, pruned_loss=0.08987, over 1417683.15 frames.], batch size: 20, lr: 9.38e-04 2022-05-26 22:22:09,557 INFO [train.py:842] (0/4) Epoch 5, batch 5550, loss[loss=0.2574, simple_loss=0.3366, pruned_loss=0.08905, over 7303.00 frames.], tot_loss[loss=0.249, simple_loss=0.3186, pruned_loss=0.08969, over 1419878.43 frames.], batch size: 25, lr: 9.37e-04 2022-05-26 22:22:48,036 INFO [train.py:842] (0/4) Epoch 5, batch 5600, loss[loss=0.2324, simple_loss=0.3115, pruned_loss=0.07659, over 7206.00 frames.], tot_loss[loss=0.2503, simple_loss=0.3198, pruned_loss=0.09038, over 1417384.54 frames.], batch size: 22, lr: 9.37e-04 
2022-05-26 22:23:26,828 INFO [train.py:842] (0/4) Epoch 5, batch 5650, loss[loss=0.2163, simple_loss=0.2878, pruned_loss=0.07241, over 7429.00 frames.], tot_loss[loss=0.2492, simple_loss=0.319, pruned_loss=0.08974, over 1416648.80 frames.], batch size: 18, lr: 9.36e-04 2022-05-26 22:24:05,331 INFO [train.py:842] (0/4) Epoch 5, batch 5700, loss[loss=0.2773, simple_loss=0.3416, pruned_loss=0.1065, over 7188.00 frames.], tot_loss[loss=0.2482, simple_loss=0.3181, pruned_loss=0.0891, over 1419150.11 frames.], batch size: 26, lr: 9.36e-04 2022-05-26 22:24:44,528 INFO [train.py:842] (0/4) Epoch 5, batch 5750, loss[loss=0.2393, simple_loss=0.3043, pruned_loss=0.08714, over 7171.00 frames.], tot_loss[loss=0.2475, simple_loss=0.3177, pruned_loss=0.08859, over 1424117.37 frames.], batch size: 18, lr: 9.35e-04 2022-05-26 22:25:23,050 INFO [train.py:842] (0/4) Epoch 5, batch 5800, loss[loss=0.2733, simple_loss=0.3389, pruned_loss=0.1038, over 5036.00 frames.], tot_loss[loss=0.2467, simple_loss=0.317, pruned_loss=0.08827, over 1421748.47 frames.], batch size: 52, lr: 9.35e-04 2022-05-26 22:26:01,746 INFO [train.py:842] (0/4) Epoch 5, batch 5850, loss[loss=0.2246, simple_loss=0.3046, pruned_loss=0.07229, over 7149.00 frames.], tot_loss[loss=0.2472, simple_loss=0.3172, pruned_loss=0.08858, over 1418826.20 frames.], batch size: 20, lr: 9.34e-04 2022-05-26 22:26:40,295 INFO [train.py:842] (0/4) Epoch 5, batch 5900, loss[loss=0.2491, simple_loss=0.3221, pruned_loss=0.08802, over 6771.00 frames.], tot_loss[loss=0.2461, simple_loss=0.3165, pruned_loss=0.08788, over 1420698.15 frames.], batch size: 31, lr: 9.34e-04 2022-05-26 22:27:19,129 INFO [train.py:842] (0/4) Epoch 5, batch 5950, loss[loss=0.2428, simple_loss=0.3026, pruned_loss=0.09144, over 7156.00 frames.], tot_loss[loss=0.2456, simple_loss=0.3159, pruned_loss=0.0876, over 1421900.49 frames.], batch size: 19, lr: 9.33e-04 2022-05-26 22:27:58,406 INFO [train.py:842] (0/4) Epoch 5, batch 6000, loss[loss=0.31, simple_loss=0.3657, pruned_loss=0.1272, over 7239.00 frames.], tot_loss[loss=0.2455, simple_loss=0.316, pruned_loss=0.08756, over 1424118.17 frames.], batch size: 20, lr: 9.32e-04 2022-05-26 22:27:58,407 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 22:28:07,776 INFO [train.py:871] (0/4) Epoch 5, validation: loss=0.1919, simple_loss=0.2921, pruned_loss=0.04582, over 868885.00 frames. 
2022-05-26 22:28:46,653 INFO [train.py:842] (0/4) Epoch 5, batch 6050, loss[loss=0.2385, simple_loss=0.2937, pruned_loss=0.09162, over 7173.00 frames.], tot_loss[loss=0.2444, simple_loss=0.3158, pruned_loss=0.08655, over 1424568.35 frames.], batch size: 18, lr: 9.32e-04 2022-05-26 22:29:25,199 INFO [train.py:842] (0/4) Epoch 5, batch 6100, loss[loss=0.3342, simple_loss=0.3673, pruned_loss=0.1506, over 4776.00 frames.], tot_loss[loss=0.247, simple_loss=0.3179, pruned_loss=0.08809, over 1420578.09 frames.], batch size: 52, lr: 9.31e-04 2022-05-26 22:30:04,151 INFO [train.py:842] (0/4) Epoch 5, batch 6150, loss[loss=0.2458, simple_loss=0.3145, pruned_loss=0.08851, over 7169.00 frames.], tot_loss[loss=0.2472, simple_loss=0.3177, pruned_loss=0.08833, over 1423770.22 frames.], batch size: 18, lr: 9.31e-04 2022-05-26 22:30:42,837 INFO [train.py:842] (0/4) Epoch 5, batch 6200, loss[loss=0.2488, simple_loss=0.317, pruned_loss=0.09034, over 7414.00 frames.], tot_loss[loss=0.2465, simple_loss=0.3173, pruned_loss=0.08786, over 1426541.01 frames.], batch size: 18, lr: 9.30e-04 2022-05-26 22:31:21,500 INFO [train.py:842] (0/4) Epoch 5, batch 6250, loss[loss=0.1771, simple_loss=0.2696, pruned_loss=0.0423, over 7132.00 frames.], tot_loss[loss=0.2477, simple_loss=0.3182, pruned_loss=0.08859, over 1426884.93 frames.], batch size: 21, lr: 9.30e-04 2022-05-26 22:32:00,100 INFO [train.py:842] (0/4) Epoch 5, batch 6300, loss[loss=0.2326, simple_loss=0.3117, pruned_loss=0.07669, over 7376.00 frames.], tot_loss[loss=0.2454, simple_loss=0.3166, pruned_loss=0.08709, over 1427924.83 frames.], batch size: 23, lr: 9.29e-04 2022-05-26 22:32:39,015 INFO [train.py:842] (0/4) Epoch 5, batch 6350, loss[loss=0.2442, simple_loss=0.3075, pruned_loss=0.0905, over 7167.00 frames.], tot_loss[loss=0.2462, simple_loss=0.3176, pruned_loss=0.0874, over 1428969.25 frames.], batch size: 18, lr: 9.29e-04 2022-05-26 22:33:17,681 INFO [train.py:842] (0/4) Epoch 5, batch 6400, loss[loss=0.1985, simple_loss=0.2944, pruned_loss=0.05127, over 7335.00 frames.], tot_loss[loss=0.2456, simple_loss=0.3168, pruned_loss=0.0872, over 1429107.44 frames.], batch size: 20, lr: 9.28e-04 2022-05-26 22:33:56,744 INFO [train.py:842] (0/4) Epoch 5, batch 6450, loss[loss=0.2304, simple_loss=0.2922, pruned_loss=0.08433, over 7221.00 frames.], tot_loss[loss=0.246, simple_loss=0.3168, pruned_loss=0.0876, over 1432350.42 frames.], batch size: 16, lr: 9.28e-04 2022-05-26 22:34:35,281 INFO [train.py:842] (0/4) Epoch 5, batch 6500, loss[loss=0.2131, simple_loss=0.2902, pruned_loss=0.06796, over 7161.00 frames.], tot_loss[loss=0.2462, simple_loss=0.3168, pruned_loss=0.08776, over 1430663.83 frames.], batch size: 18, lr: 9.27e-04 2022-05-26 22:35:13,827 INFO [train.py:842] (0/4) Epoch 5, batch 6550, loss[loss=0.24, simple_loss=0.3213, pruned_loss=0.07939, over 7310.00 frames.], tot_loss[loss=0.2449, simple_loss=0.3158, pruned_loss=0.08698, over 1424530.20 frames.], batch size: 21, lr: 9.27e-04 2022-05-26 22:35:52,473 INFO [train.py:842] (0/4) Epoch 5, batch 6600, loss[loss=0.2278, simple_loss=0.3066, pruned_loss=0.07453, over 6319.00 frames.], tot_loss[loss=0.2453, simple_loss=0.3163, pruned_loss=0.0872, over 1424434.61 frames.], batch size: 38, lr: 9.26e-04 2022-05-26 22:36:31,324 INFO [train.py:842] (0/4) Epoch 5, batch 6650, loss[loss=0.2307, simple_loss=0.3049, pruned_loss=0.07829, over 7146.00 frames.], tot_loss[loss=0.2447, simple_loss=0.3155, pruned_loss=0.08698, over 1423472.15 frames.], batch size: 20, lr: 9.26e-04 2022-05-26 22:37:09,797 INFO [train.py:842] 
(0/4) Epoch 5, batch 6700, loss[loss=0.2939, simple_loss=0.357, pruned_loss=0.1154, over 6782.00 frames.], tot_loss[loss=0.2457, simple_loss=0.3163, pruned_loss=0.08758, over 1423647.20 frames.], batch size: 31, lr: 9.25e-04 2022-05-26 22:37:49,199 INFO [train.py:842] (0/4) Epoch 5, batch 6750, loss[loss=0.2317, simple_loss=0.3003, pruned_loss=0.08155, over 7326.00 frames.], tot_loss[loss=0.2437, simple_loss=0.3149, pruned_loss=0.08626, over 1427311.70 frames.], batch size: 21, lr: 9.25e-04 2022-05-26 22:38:27,820 INFO [train.py:842] (0/4) Epoch 5, batch 6800, loss[loss=0.234, simple_loss=0.3088, pruned_loss=0.07957, over 7295.00 frames.], tot_loss[loss=0.2425, simple_loss=0.3142, pruned_loss=0.08543, over 1425311.49 frames.], batch size: 24, lr: 9.24e-04 2022-05-26 22:39:06,767 INFO [train.py:842] (0/4) Epoch 5, batch 6850, loss[loss=0.2366, simple_loss=0.3064, pruned_loss=0.0834, over 7325.00 frames.], tot_loss[loss=0.2414, simple_loss=0.3136, pruned_loss=0.08463, over 1427446.19 frames.], batch size: 20, lr: 9.23e-04 2022-05-26 22:39:45,098 INFO [train.py:842] (0/4) Epoch 5, batch 6900, loss[loss=0.2521, simple_loss=0.327, pruned_loss=0.08864, over 7187.00 frames.], tot_loss[loss=0.2423, simple_loss=0.3146, pruned_loss=0.08501, over 1427948.17 frames.], batch size: 23, lr: 9.23e-04 2022-05-26 22:40:23,996 INFO [train.py:842] (0/4) Epoch 5, batch 6950, loss[loss=0.2224, simple_loss=0.3047, pruned_loss=0.07001, over 7108.00 frames.], tot_loss[loss=0.2434, simple_loss=0.3156, pruned_loss=0.08556, over 1429884.56 frames.], batch size: 21, lr: 9.22e-04 2022-05-26 22:41:02,650 INFO [train.py:842] (0/4) Epoch 5, batch 7000, loss[loss=0.2537, simple_loss=0.3228, pruned_loss=0.09234, over 7066.00 frames.], tot_loss[loss=0.245, simple_loss=0.3167, pruned_loss=0.0866, over 1432926.03 frames.], batch size: 18, lr: 9.22e-04 2022-05-26 22:41:41,586 INFO [train.py:842] (0/4) Epoch 5, batch 7050, loss[loss=0.2319, simple_loss=0.3065, pruned_loss=0.07868, over 7147.00 frames.], tot_loss[loss=0.2454, simple_loss=0.3163, pruned_loss=0.08721, over 1425089.96 frames.], batch size: 20, lr: 9.21e-04 2022-05-26 22:42:20,187 INFO [train.py:842] (0/4) Epoch 5, batch 7100, loss[loss=0.2959, simple_loss=0.3609, pruned_loss=0.1154, over 7245.00 frames.], tot_loss[loss=0.2439, simple_loss=0.3149, pruned_loss=0.08646, over 1428916.61 frames.], batch size: 20, lr: 9.21e-04 2022-05-26 22:42:59,025 INFO [train.py:842] (0/4) Epoch 5, batch 7150, loss[loss=0.1952, simple_loss=0.2762, pruned_loss=0.05709, over 7285.00 frames.], tot_loss[loss=0.2444, simple_loss=0.3158, pruned_loss=0.08649, over 1428681.12 frames.], batch size: 17, lr: 9.20e-04 2022-05-26 22:43:37,789 INFO [train.py:842] (0/4) Epoch 5, batch 7200, loss[loss=0.2751, simple_loss=0.3131, pruned_loss=0.1185, over 6825.00 frames.], tot_loss[loss=0.2452, simple_loss=0.3168, pruned_loss=0.08682, over 1427499.96 frames.], batch size: 15, lr: 9.20e-04 2022-05-26 22:44:16,570 INFO [train.py:842] (0/4) Epoch 5, batch 7250, loss[loss=0.1815, simple_loss=0.2642, pruned_loss=0.04933, over 7001.00 frames.], tot_loss[loss=0.2447, simple_loss=0.3158, pruned_loss=0.08679, over 1424011.61 frames.], batch size: 16, lr: 9.19e-04 2022-05-26 22:44:55,166 INFO [train.py:842] (0/4) Epoch 5, batch 7300, loss[loss=0.3911, simple_loss=0.4187, pruned_loss=0.1817, over 6732.00 frames.], tot_loss[loss=0.2447, simple_loss=0.3156, pruned_loss=0.08691, over 1421649.81 frames.], batch size: 31, lr: 9.19e-04 2022-05-26 22:45:33,970 INFO [train.py:842] (0/4) Epoch 5, batch 7350, 
loss[loss=0.2697, simple_loss=0.3531, pruned_loss=0.09317, over 7065.00 frames.], tot_loss[loss=0.2461, simple_loss=0.3169, pruned_loss=0.0877, over 1421271.91 frames.], batch size: 28, lr: 9.18e-04 2022-05-26 22:46:12,464 INFO [train.py:842] (0/4) Epoch 5, batch 7400, loss[loss=0.235, simple_loss=0.3101, pruned_loss=0.07993, over 7256.00 frames.], tot_loss[loss=0.246, simple_loss=0.3163, pruned_loss=0.08788, over 1415350.90 frames.], batch size: 19, lr: 9.18e-04 2022-05-26 22:46:51,146 INFO [train.py:842] (0/4) Epoch 5, batch 7450, loss[loss=0.2565, simple_loss=0.3097, pruned_loss=0.1016, over 7430.00 frames.], tot_loss[loss=0.2453, simple_loss=0.3159, pruned_loss=0.08737, over 1417837.61 frames.], batch size: 18, lr: 9.17e-04 2022-05-26 22:47:29,607 INFO [train.py:842] (0/4) Epoch 5, batch 7500, loss[loss=0.2392, simple_loss=0.2992, pruned_loss=0.08957, over 7263.00 frames.], tot_loss[loss=0.2441, simple_loss=0.315, pruned_loss=0.08653, over 1420936.21 frames.], batch size: 18, lr: 9.17e-04 2022-05-26 22:48:08,446 INFO [train.py:842] (0/4) Epoch 5, batch 7550, loss[loss=0.2945, simple_loss=0.3618, pruned_loss=0.1136, over 7336.00 frames.], tot_loss[loss=0.2444, simple_loss=0.3155, pruned_loss=0.08658, over 1419672.80 frames.], batch size: 22, lr: 9.16e-04 2022-05-26 22:48:46,888 INFO [train.py:842] (0/4) Epoch 5, batch 7600, loss[loss=0.2914, simple_loss=0.3577, pruned_loss=0.1125, over 7199.00 frames.], tot_loss[loss=0.2454, simple_loss=0.3163, pruned_loss=0.08727, over 1419204.83 frames.], batch size: 22, lr: 9.16e-04 2022-05-26 22:49:25,696 INFO [train.py:842] (0/4) Epoch 5, batch 7650, loss[loss=0.206, simple_loss=0.2746, pruned_loss=0.06865, over 7435.00 frames.], tot_loss[loss=0.2448, simple_loss=0.3153, pruned_loss=0.0871, over 1419468.52 frames.], batch size: 20, lr: 9.15e-04 2022-05-26 22:50:04,146 INFO [train.py:842] (0/4) Epoch 5, batch 7700, loss[loss=0.2525, simple_loss=0.3286, pruned_loss=0.08817, over 7154.00 frames.], tot_loss[loss=0.244, simple_loss=0.3145, pruned_loss=0.08675, over 1421125.96 frames.], batch size: 20, lr: 9.15e-04 2022-05-26 22:50:43,135 INFO [train.py:842] (0/4) Epoch 5, batch 7750, loss[loss=0.2013, simple_loss=0.2812, pruned_loss=0.06067, over 7407.00 frames.], tot_loss[loss=0.2434, simple_loss=0.3144, pruned_loss=0.0862, over 1423524.39 frames.], batch size: 18, lr: 9.14e-04 2022-05-26 22:51:21,851 INFO [train.py:842] (0/4) Epoch 5, batch 7800, loss[loss=0.2043, simple_loss=0.291, pruned_loss=0.05883, over 7326.00 frames.], tot_loss[loss=0.243, simple_loss=0.314, pruned_loss=0.08603, over 1426460.79 frames.], batch size: 20, lr: 9.14e-04 2022-05-26 22:52:00,601 INFO [train.py:842] (0/4) Epoch 5, batch 7850, loss[loss=0.2389, simple_loss=0.3165, pruned_loss=0.08066, over 7266.00 frames.], tot_loss[loss=0.2436, simple_loss=0.315, pruned_loss=0.08613, over 1428248.67 frames.], batch size: 19, lr: 9.13e-04 2022-05-26 22:52:39,123 INFO [train.py:842] (0/4) Epoch 5, batch 7900, loss[loss=0.2121, simple_loss=0.2821, pruned_loss=0.07104, over 7274.00 frames.], tot_loss[loss=0.2446, simple_loss=0.3155, pruned_loss=0.08685, over 1429331.68 frames.], batch size: 17, lr: 9.13e-04 2022-05-26 22:53:18,009 INFO [train.py:842] (0/4) Epoch 5, batch 7950, loss[loss=0.2365, simple_loss=0.3153, pruned_loss=0.07889, over 7045.00 frames.], tot_loss[loss=0.2437, simple_loss=0.315, pruned_loss=0.08618, over 1428594.23 frames.], batch size: 28, lr: 9.12e-04 2022-05-26 22:53:56,512 INFO [train.py:842] (0/4) Epoch 5, batch 8000, loss[loss=0.18, simple_loss=0.2432, 
pruned_loss=0.05842, over 7144.00 frames.], tot_loss[loss=0.2441, simple_loss=0.3153, pruned_loss=0.0865, over 1428679.97 frames.], batch size: 17, lr: 9.12e-04 2022-05-26 22:54:35,446 INFO [train.py:842] (0/4) Epoch 5, batch 8050, loss[loss=0.2635, simple_loss=0.3288, pruned_loss=0.09911, over 7354.00 frames.], tot_loss[loss=0.2436, simple_loss=0.3148, pruned_loss=0.08617, over 1428655.42 frames.], batch size: 19, lr: 9.11e-04 2022-05-26 22:55:14,037 INFO [train.py:842] (0/4) Epoch 5, batch 8100, loss[loss=0.2348, simple_loss=0.3071, pruned_loss=0.08122, over 7083.00 frames.], tot_loss[loss=0.2434, simple_loss=0.3147, pruned_loss=0.08611, over 1428501.19 frames.], batch size: 28, lr: 9.11e-04 2022-05-26 22:55:52,776 INFO [train.py:842] (0/4) Epoch 5, batch 8150, loss[loss=0.2371, simple_loss=0.3168, pruned_loss=0.07871, over 7177.00 frames.], tot_loss[loss=0.2442, simple_loss=0.3155, pruned_loss=0.08648, over 1422488.81 frames.], batch size: 26, lr: 9.10e-04 2022-05-26 22:56:31,175 INFO [train.py:842] (0/4) Epoch 5, batch 8200, loss[loss=0.2347, simple_loss=0.3179, pruned_loss=0.07572, over 7230.00 frames.], tot_loss[loss=0.2439, simple_loss=0.3154, pruned_loss=0.08623, over 1419920.83 frames.], batch size: 20, lr: 9.10e-04 2022-05-26 22:57:10,079 INFO [train.py:842] (0/4) Epoch 5, batch 8250, loss[loss=0.2179, simple_loss=0.29, pruned_loss=0.07287, over 7274.00 frames.], tot_loss[loss=0.2422, simple_loss=0.314, pruned_loss=0.08519, over 1421672.96 frames.], batch size: 18, lr: 9.09e-04 2022-05-26 22:57:48,642 INFO [train.py:842] (0/4) Epoch 5, batch 8300, loss[loss=0.2331, simple_loss=0.3177, pruned_loss=0.07428, over 7107.00 frames.], tot_loss[loss=0.244, simple_loss=0.3149, pruned_loss=0.08649, over 1425755.12 frames.], batch size: 28, lr: 9.09e-04 2022-05-26 22:58:27,290 INFO [train.py:842] (0/4) Epoch 5, batch 8350, loss[loss=0.2383, simple_loss=0.32, pruned_loss=0.07832, over 7403.00 frames.], tot_loss[loss=0.2452, simple_loss=0.3161, pruned_loss=0.08713, over 1422692.14 frames.], batch size: 21, lr: 9.08e-04 2022-05-26 22:59:05,780 INFO [train.py:842] (0/4) Epoch 5, batch 8400, loss[loss=0.1853, simple_loss=0.2708, pruned_loss=0.04986, over 7230.00 frames.], tot_loss[loss=0.2441, simple_loss=0.3149, pruned_loss=0.08666, over 1423227.07 frames.], batch size: 20, lr: 9.08e-04 2022-05-26 22:59:44,484 INFO [train.py:842] (0/4) Epoch 5, batch 8450, loss[loss=0.1777, simple_loss=0.26, pruned_loss=0.04766, over 7161.00 frames.], tot_loss[loss=0.2447, simple_loss=0.315, pruned_loss=0.08724, over 1416263.99 frames.], batch size: 17, lr: 9.07e-04 2022-05-26 23:00:23,193 INFO [train.py:842] (0/4) Epoch 5, batch 8500, loss[loss=0.2812, simple_loss=0.314, pruned_loss=0.1242, over 7283.00 frames.], tot_loss[loss=0.2451, simple_loss=0.3153, pruned_loss=0.08745, over 1419365.20 frames.], batch size: 17, lr: 9.07e-04 2022-05-26 23:01:02,291 INFO [train.py:842] (0/4) Epoch 5, batch 8550, loss[loss=0.1934, simple_loss=0.2776, pruned_loss=0.05457, over 7252.00 frames.], tot_loss[loss=0.2451, simple_loss=0.3154, pruned_loss=0.0874, over 1421689.93 frames.], batch size: 19, lr: 9.06e-04 2022-05-26 23:01:41,070 INFO [train.py:842] (0/4) Epoch 5, batch 8600, loss[loss=0.1859, simple_loss=0.2697, pruned_loss=0.05108, over 7355.00 frames.], tot_loss[loss=0.244, simple_loss=0.3151, pruned_loss=0.08649, over 1423271.53 frames.], batch size: 19, lr: 9.06e-04 2022-05-26 23:02:19,899 INFO [train.py:842] (0/4) Epoch 5, batch 8650, loss[loss=0.2666, simple_loss=0.331, pruned_loss=0.1011, over 7224.00 frames.], 
tot_loss[loss=0.245, simple_loss=0.3161, pruned_loss=0.08701, over 1418565.94 frames.], batch size: 21, lr: 9.05e-04 2022-05-26 23:02:58,454 INFO [train.py:842] (0/4) Epoch 5, batch 8700, loss[loss=0.255, simple_loss=0.3315, pruned_loss=0.08929, over 7235.00 frames.], tot_loss[loss=0.2444, simple_loss=0.3154, pruned_loss=0.08667, over 1416773.45 frames.], batch size: 20, lr: 9.05e-04 2022-05-26 23:03:37,435 INFO [train.py:842] (0/4) Epoch 5, batch 8750, loss[loss=0.2642, simple_loss=0.333, pruned_loss=0.09769, over 7184.00 frames.], tot_loss[loss=0.2433, simple_loss=0.3149, pruned_loss=0.08583, over 1418002.82 frames.], batch size: 26, lr: 9.04e-04 2022-05-26 23:04:16,080 INFO [train.py:842] (0/4) Epoch 5, batch 8800, loss[loss=0.2822, simple_loss=0.3486, pruned_loss=0.1079, over 7312.00 frames.], tot_loss[loss=0.2446, simple_loss=0.3159, pruned_loss=0.08667, over 1418045.08 frames.], batch size: 24, lr: 9.04e-04 2022-05-26 23:04:55,062 INFO [train.py:842] (0/4) Epoch 5, batch 8850, loss[loss=0.3787, simple_loss=0.415, pruned_loss=0.1712, over 5191.00 frames.], tot_loss[loss=0.2458, simple_loss=0.3163, pruned_loss=0.08769, over 1412720.99 frames.], batch size: 52, lr: 9.03e-04 2022-05-26 23:05:33,476 INFO [train.py:842] (0/4) Epoch 5, batch 8900, loss[loss=0.2668, simple_loss=0.3326, pruned_loss=0.1005, over 6269.00 frames.], tot_loss[loss=0.2452, simple_loss=0.3162, pruned_loss=0.08703, over 1411847.06 frames.], batch size: 37, lr: 9.03e-04 2022-05-26 23:06:11,794 INFO [train.py:842] (0/4) Epoch 5, batch 8950, loss[loss=0.2579, simple_loss=0.3145, pruned_loss=0.1006, over 7190.00 frames.], tot_loss[loss=0.246, simple_loss=0.3171, pruned_loss=0.08743, over 1402394.19 frames.], batch size: 23, lr: 9.02e-04 2022-05-26 23:06:49,934 INFO [train.py:842] (0/4) Epoch 5, batch 9000, loss[loss=0.2687, simple_loss=0.3378, pruned_loss=0.09977, over 6491.00 frames.], tot_loss[loss=0.25, simple_loss=0.3207, pruned_loss=0.08965, over 1395748.27 frames.], batch size: 38, lr: 9.02e-04 2022-05-26 23:06:49,935 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 23:06:59,288 INFO [train.py:871] (0/4) Epoch 5, validation: loss=0.191, simple_loss=0.2904, pruned_loss=0.04585, over 868885.00 frames. 
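Each tot_loss entry above is reported together with the number of frames it covers, so it behaves like a frame-weighted average of recent per-batch losses rather than a plain mean over batches. The sketch below only illustrates that kind of statistic with a cumulative frame-weighted average; the aggregation train.py actually uses may differ in detail (for instance by down-weighting older batches), and all names in the snippet are illustrative.

# Hedged sketch of a frame-weighted running loss average, the kind of
# statistic the tot_loss[... over N frames] entries report. Illustrative
# only; the real aggregation in train.py may window or decay old batches.
class RunningLoss:
    def __init__(self) -> None:
        self.weighted_sum = 0.0  # sum over batches of per-frame loss * frames
        self.frames = 0.0        # total frames accumulated so far

    def update(self, batch_loss: float, batch_frames: float) -> None:
        # batch_loss is taken to be the average loss per frame in the batch
        self.weighted_sum += batch_loss * batch_frames
        self.frames += batch_frames

    def value(self) -> float:
        return self.weighted_sum / max(self.frames, 1.0)

# Example with two batches of roughly the sizes seen in the log above.
tot = RunningLoss()
tot.update(batch_loss=0.2944, batch_frames=5058.0)
tot.update(batch_loss=0.3065, batch_frames=5065.0)
print(f"running loss over {tot.frames:.0f} frames: {tot.value():.4f}")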
2022-05-26 23:07:37,144 INFO [train.py:842] (0/4) Epoch 5, batch 9050, loss[loss=0.2944, simple_loss=0.3574, pruned_loss=0.1157, over 5058.00 frames.], tot_loss[loss=0.2547, simple_loss=0.3246, pruned_loss=0.09246, over 1363825.73 frames.], batch size: 52, lr: 9.01e-04 2022-05-26 23:08:14,671 INFO [train.py:842] (0/4) Epoch 5, batch 9100, loss[loss=0.3065, simple_loss=0.3611, pruned_loss=0.1259, over 5065.00 frames.], tot_loss[loss=0.2614, simple_loss=0.3288, pruned_loss=0.09695, over 1296852.53 frames.], batch size: 52, lr: 9.01e-04 2022-05-26 23:08:52,467 INFO [train.py:842] (0/4) Epoch 5, batch 9150, loss[loss=0.2983, simple_loss=0.3584, pruned_loss=0.119, over 4724.00 frames.], tot_loss[loss=0.267, simple_loss=0.3323, pruned_loss=0.1008, over 1231037.43 frames.], batch size: 52, lr: 9.00e-04 2022-05-26 23:09:24,127 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-5.pt 2022-05-26 23:09:43,994 INFO [train.py:842] (0/4) Epoch 6, batch 0, loss[loss=0.2286, simple_loss=0.302, pruned_loss=0.0776, over 7158.00 frames.], tot_loss[loss=0.2286, simple_loss=0.302, pruned_loss=0.0776, over 7158.00 frames.], batch size: 19, lr: 8.65e-04 2022-05-26 23:10:23,195 INFO [train.py:842] (0/4) Epoch 6, batch 50, loss[loss=0.3468, simple_loss=0.3795, pruned_loss=0.157, over 5243.00 frames.], tot_loss[loss=0.2507, simple_loss=0.321, pruned_loss=0.09026, over 318498.81 frames.], batch size: 52, lr: 8.64e-04 2022-05-26 23:11:01,578 INFO [train.py:842] (0/4) Epoch 6, batch 100, loss[loss=0.2421, simple_loss=0.3079, pruned_loss=0.08815, over 7145.00 frames.], tot_loss[loss=0.2482, simple_loss=0.3197, pruned_loss=0.08839, over 562026.44 frames.], batch size: 20, lr: 8.64e-04 2022-05-26 23:11:40,520 INFO [train.py:842] (0/4) Epoch 6, batch 150, loss[loss=0.2792, simple_loss=0.3351, pruned_loss=0.1117, over 6757.00 frames.], tot_loss[loss=0.2451, simple_loss=0.3173, pruned_loss=0.08645, over 750011.12 frames.], batch size: 31, lr: 8.63e-04 2022-05-26 23:12:19,099 INFO [train.py:842] (0/4) Epoch 6, batch 200, loss[loss=0.2222, simple_loss=0.2993, pruned_loss=0.07255, over 7423.00 frames.], tot_loss[loss=0.2457, simple_loss=0.3175, pruned_loss=0.08693, over 898747.37 frames.], batch size: 18, lr: 8.63e-04 2022-05-26 23:12:57,963 INFO [train.py:842] (0/4) Epoch 6, batch 250, loss[loss=0.2429, simple_loss=0.3238, pruned_loss=0.08096, over 7346.00 frames.], tot_loss[loss=0.2442, simple_loss=0.3164, pruned_loss=0.08597, over 1018965.05 frames.], batch size: 22, lr: 8.62e-04 2022-05-26 23:13:36,475 INFO [train.py:842] (0/4) Epoch 6, batch 300, loss[loss=0.2433, simple_loss=0.3129, pruned_loss=0.08691, over 7231.00 frames.], tot_loss[loss=0.2407, simple_loss=0.3134, pruned_loss=0.08397, over 1111528.07 frames.], batch size: 20, lr: 8.62e-04 2022-05-26 23:14:15,749 INFO [train.py:842] (0/4) Epoch 6, batch 350, loss[loss=0.2081, simple_loss=0.2809, pruned_loss=0.06767, over 7333.00 frames.], tot_loss[loss=0.237, simple_loss=0.3105, pruned_loss=0.08179, over 1185148.89 frames.], batch size: 20, lr: 8.61e-04 2022-05-26 23:14:54,193 INFO [train.py:842] (0/4) Epoch 6, batch 400, loss[loss=0.2702, simple_loss=0.3371, pruned_loss=0.1017, over 7388.00 frames.], tot_loss[loss=0.238, simple_loss=0.3118, pruned_loss=0.08215, over 1237463.26 frames.], batch size: 23, lr: 8.61e-04 2022-05-26 23:15:33,155 INFO [train.py:842] (0/4) Epoch 6, batch 450, loss[loss=0.2119, simple_loss=0.2785, pruned_loss=0.07263, over 6791.00 frames.], tot_loss[loss=0.2406, simple_loss=0.3136, 
pruned_loss=0.08382, over 1279843.24 frames.], batch size: 15, lr: 8.61e-04 2022-05-26 23:16:11,610 INFO [train.py:842] (0/4) Epoch 6, batch 500, loss[loss=0.2762, simple_loss=0.3373, pruned_loss=0.1076, over 4781.00 frames.], tot_loss[loss=0.2407, simple_loss=0.3142, pruned_loss=0.08362, over 1308075.80 frames.], batch size: 52, lr: 8.60e-04 2022-05-26 23:16:50,470 INFO [train.py:842] (0/4) Epoch 6, batch 550, loss[loss=0.2596, simple_loss=0.33, pruned_loss=0.0946, over 6533.00 frames.], tot_loss[loss=0.2405, simple_loss=0.3137, pruned_loss=0.08364, over 1332354.43 frames.], batch size: 39, lr: 8.60e-04 2022-05-26 23:17:29,201 INFO [train.py:842] (0/4) Epoch 6, batch 600, loss[loss=0.2036, simple_loss=0.2809, pruned_loss=0.06317, over 7150.00 frames.], tot_loss[loss=0.2396, simple_loss=0.3126, pruned_loss=0.08336, over 1351707.45 frames.], batch size: 20, lr: 8.59e-04 2022-05-26 23:18:08,046 INFO [train.py:842] (0/4) Epoch 6, batch 650, loss[loss=0.2473, simple_loss=0.3299, pruned_loss=0.08232, over 7406.00 frames.], tot_loss[loss=0.2386, simple_loss=0.3119, pruned_loss=0.0826, over 1365644.75 frames.], batch size: 21, lr: 8.59e-04 2022-05-26 23:18:46,494 INFO [train.py:842] (0/4) Epoch 6, batch 700, loss[loss=0.1942, simple_loss=0.2645, pruned_loss=0.06197, over 7246.00 frames.], tot_loss[loss=0.2403, simple_loss=0.3134, pruned_loss=0.08363, over 1377955.21 frames.], batch size: 16, lr: 8.58e-04 2022-05-26 23:19:25,313 INFO [train.py:842] (0/4) Epoch 6, batch 750, loss[loss=0.2476, simple_loss=0.3169, pruned_loss=0.08919, over 7221.00 frames.], tot_loss[loss=0.2399, simple_loss=0.3129, pruned_loss=0.08345, over 1387676.93 frames.], batch size: 21, lr: 8.58e-04 2022-05-26 23:20:03,923 INFO [train.py:842] (0/4) Epoch 6, batch 800, loss[loss=0.2466, simple_loss=0.3287, pruned_loss=0.08223, over 7222.00 frames.], tot_loss[loss=0.2388, simple_loss=0.3118, pruned_loss=0.08288, over 1398322.87 frames.], batch size: 21, lr: 8.57e-04 2022-05-26 23:20:42,687 INFO [train.py:842] (0/4) Epoch 6, batch 850, loss[loss=0.233, simple_loss=0.3105, pruned_loss=0.07778, over 7228.00 frames.], tot_loss[loss=0.2389, simple_loss=0.312, pruned_loss=0.08286, over 1404008.88 frames.], batch size: 23, lr: 8.57e-04 2022-05-26 23:21:21,222 INFO [train.py:842] (0/4) Epoch 6, batch 900, loss[loss=0.2455, simple_loss=0.3223, pruned_loss=0.08433, over 7418.00 frames.], tot_loss[loss=0.2376, simple_loss=0.311, pruned_loss=0.08208, over 1405092.83 frames.], batch size: 21, lr: 8.56e-04 2022-05-26 23:21:59,936 INFO [train.py:842] (0/4) Epoch 6, batch 950, loss[loss=0.2684, simple_loss=0.32, pruned_loss=0.1084, over 7137.00 frames.], tot_loss[loss=0.2379, simple_loss=0.3112, pruned_loss=0.08235, over 1405674.63 frames.], batch size: 17, lr: 8.56e-04 2022-05-26 23:22:38,497 INFO [train.py:842] (0/4) Epoch 6, batch 1000, loss[loss=0.225, simple_loss=0.3086, pruned_loss=0.07073, over 7417.00 frames.], tot_loss[loss=0.2392, simple_loss=0.3116, pruned_loss=0.08339, over 1407810.27 frames.], batch size: 21, lr: 8.56e-04 2022-05-26 23:23:17,197 INFO [train.py:842] (0/4) Epoch 6, batch 1050, loss[loss=0.1915, simple_loss=0.2891, pruned_loss=0.04699, over 7335.00 frames.], tot_loss[loss=0.2402, simple_loss=0.3126, pruned_loss=0.08387, over 1412917.22 frames.], batch size: 20, lr: 8.55e-04 2022-05-26 23:23:55,788 INFO [train.py:842] (0/4) Epoch 6, batch 1100, loss[loss=0.2657, simple_loss=0.3398, pruned_loss=0.0958, over 7318.00 frames.], tot_loss[loss=0.2424, simple_loss=0.3145, pruned_loss=0.0852, over 1407614.50 frames.], batch 
size: 21, lr: 8.55e-04 2022-05-26 23:24:34,774 INFO [train.py:842] (0/4) Epoch 6, batch 1150, loss[loss=0.2385, simple_loss=0.3255, pruned_loss=0.07574, over 7146.00 frames.], tot_loss[loss=0.243, simple_loss=0.3155, pruned_loss=0.08524, over 1412460.48 frames.], batch size: 20, lr: 8.54e-04 2022-05-26 23:25:13,270 INFO [train.py:842] (0/4) Epoch 6, batch 1200, loss[loss=0.2726, simple_loss=0.3383, pruned_loss=0.1035, over 7168.00 frames.], tot_loss[loss=0.2441, simple_loss=0.3164, pruned_loss=0.08594, over 1413097.17 frames.], batch size: 26, lr: 8.54e-04 2022-05-26 23:25:52,097 INFO [train.py:842] (0/4) Epoch 6, batch 1250, loss[loss=0.2767, simple_loss=0.3368, pruned_loss=0.1083, over 7141.00 frames.], tot_loss[loss=0.243, simple_loss=0.3153, pruned_loss=0.08535, over 1413280.04 frames.], batch size: 20, lr: 8.53e-04 2022-05-26 23:26:30,697 INFO [train.py:842] (0/4) Epoch 6, batch 1300, loss[loss=0.192, simple_loss=0.2765, pruned_loss=0.05379, over 7348.00 frames.], tot_loss[loss=0.2425, simple_loss=0.3148, pruned_loss=0.0851, over 1411753.13 frames.], batch size: 19, lr: 8.53e-04 2022-05-26 23:27:09,693 INFO [train.py:842] (0/4) Epoch 6, batch 1350, loss[loss=0.2676, simple_loss=0.3444, pruned_loss=0.09534, over 7136.00 frames.], tot_loss[loss=0.2401, simple_loss=0.3128, pruned_loss=0.08369, over 1415269.58 frames.], batch size: 28, lr: 8.52e-04 2022-05-26 23:27:48,109 INFO [train.py:842] (0/4) Epoch 6, batch 1400, loss[loss=0.2127, simple_loss=0.297, pruned_loss=0.06419, over 7322.00 frames.], tot_loss[loss=0.241, simple_loss=0.3133, pruned_loss=0.08434, over 1419511.35 frames.], batch size: 20, lr: 8.52e-04 2022-05-26 23:28:26,833 INFO [train.py:842] (0/4) Epoch 6, batch 1450, loss[loss=0.2491, simple_loss=0.3316, pruned_loss=0.08334, over 7435.00 frames.], tot_loss[loss=0.2429, simple_loss=0.315, pruned_loss=0.08543, over 1421396.90 frames.], batch size: 20, lr: 8.52e-04 2022-05-26 23:29:05,423 INFO [train.py:842] (0/4) Epoch 6, batch 1500, loss[loss=0.2679, simple_loss=0.351, pruned_loss=0.09239, over 7143.00 frames.], tot_loss[loss=0.2423, simple_loss=0.3148, pruned_loss=0.08491, over 1421641.65 frames.], batch size: 20, lr: 8.51e-04 2022-05-26 23:29:44,026 INFO [train.py:842] (0/4) Epoch 6, batch 1550, loss[loss=0.212, simple_loss=0.2829, pruned_loss=0.07054, over 7283.00 frames.], tot_loss[loss=0.2414, simple_loss=0.3136, pruned_loss=0.08453, over 1423255.42 frames.], batch size: 17, lr: 8.51e-04 2022-05-26 23:30:22,451 INFO [train.py:842] (0/4) Epoch 6, batch 1600, loss[loss=0.2316, simple_loss=0.3068, pruned_loss=0.07819, over 7427.00 frames.], tot_loss[loss=0.2427, simple_loss=0.3146, pruned_loss=0.08536, over 1417114.46 frames.], batch size: 20, lr: 8.50e-04 2022-05-26 23:31:01,141 INFO [train.py:842] (0/4) Epoch 6, batch 1650, loss[loss=0.2663, simple_loss=0.3435, pruned_loss=0.09458, over 7262.00 frames.], tot_loss[loss=0.2405, simple_loss=0.3129, pruned_loss=0.08407, over 1417167.77 frames.], batch size: 25, lr: 8.50e-04 2022-05-26 23:31:39,551 INFO [train.py:842] (0/4) Epoch 6, batch 1700, loss[loss=0.2354, simple_loss=0.3125, pruned_loss=0.07914, over 7201.00 frames.], tot_loss[loss=0.2411, simple_loss=0.3133, pruned_loss=0.08448, over 1414666.60 frames.], batch size: 22, lr: 8.49e-04 2022-05-26 23:32:18,315 INFO [train.py:842] (0/4) Epoch 6, batch 1750, loss[loss=0.2292, simple_loss=0.3094, pruned_loss=0.07445, over 7272.00 frames.], tot_loss[loss=0.241, simple_loss=0.3133, pruned_loss=0.08431, over 1411272.78 frames.], batch size: 18, lr: 8.49e-04 2022-05-26 
23:32:56,822 INFO [train.py:842] (0/4) Epoch 6, batch 1800, loss[loss=0.2848, simple_loss=0.3313, pruned_loss=0.1191, over 4838.00 frames.], tot_loss[loss=0.2403, simple_loss=0.313, pruned_loss=0.0838, over 1412538.80 frames.], batch size: 52, lr: 8.48e-04 2022-05-26 23:33:35,703 INFO [train.py:842] (0/4) Epoch 6, batch 1850, loss[loss=0.1882, simple_loss=0.2759, pruned_loss=0.05023, over 7180.00 frames.], tot_loss[loss=0.2399, simple_loss=0.3129, pruned_loss=0.08347, over 1415899.76 frames.], batch size: 18, lr: 8.48e-04 2022-05-26 23:34:14,179 INFO [train.py:842] (0/4) Epoch 6, batch 1900, loss[loss=0.2953, simple_loss=0.3397, pruned_loss=0.1254, over 7117.00 frames.], tot_loss[loss=0.241, simple_loss=0.3136, pruned_loss=0.08419, over 1415616.90 frames.], batch size: 17, lr: 8.48e-04 2022-05-26 23:34:52,998 INFO [train.py:842] (0/4) Epoch 6, batch 1950, loss[loss=0.2174, simple_loss=0.2988, pruned_loss=0.06804, over 7113.00 frames.], tot_loss[loss=0.2408, simple_loss=0.3135, pruned_loss=0.08401, over 1419950.41 frames.], batch size: 21, lr: 8.47e-04 2022-05-26 23:35:31,651 INFO [train.py:842] (0/4) Epoch 6, batch 2000, loss[loss=0.2815, simple_loss=0.3397, pruned_loss=0.1117, over 7275.00 frames.], tot_loss[loss=0.2414, simple_loss=0.3139, pruned_loss=0.08442, over 1423283.03 frames.], batch size: 18, lr: 8.47e-04 2022-05-26 23:36:01,878 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-48000.pt 2022-05-26 23:36:13,253 INFO [train.py:842] (0/4) Epoch 6, batch 2050, loss[loss=0.2799, simple_loss=0.3515, pruned_loss=0.1042, over 7101.00 frames.], tot_loss[loss=0.2414, simple_loss=0.314, pruned_loss=0.08442, over 1423353.40 frames.], batch size: 28, lr: 8.46e-04 2022-05-26 23:36:51,807 INFO [train.py:842] (0/4) Epoch 6, batch 2100, loss[loss=0.2399, simple_loss=0.314, pruned_loss=0.08289, over 6489.00 frames.], tot_loss[loss=0.2412, simple_loss=0.3139, pruned_loss=0.08424, over 1424934.11 frames.], batch size: 38, lr: 8.46e-04 2022-05-26 23:37:30,938 INFO [train.py:842] (0/4) Epoch 6, batch 2150, loss[loss=0.2798, simple_loss=0.3606, pruned_loss=0.09947, over 7142.00 frames.], tot_loss[loss=0.2405, simple_loss=0.3136, pruned_loss=0.08369, over 1429998.97 frames.], batch size: 20, lr: 8.45e-04 2022-05-26 23:38:09,301 INFO [train.py:842] (0/4) Epoch 6, batch 2200, loss[loss=0.1992, simple_loss=0.2954, pruned_loss=0.05153, over 7147.00 frames.], tot_loss[loss=0.2407, simple_loss=0.3135, pruned_loss=0.08391, over 1426485.78 frames.], batch size: 20, lr: 8.45e-04 2022-05-26 23:38:48,168 INFO [train.py:842] (0/4) Epoch 6, batch 2250, loss[loss=0.2457, simple_loss=0.3077, pruned_loss=0.09191, over 7361.00 frames.], tot_loss[loss=0.2388, simple_loss=0.3121, pruned_loss=0.08281, over 1424351.83 frames.], batch size: 19, lr: 8.45e-04 2022-05-26 23:39:26,675 INFO [train.py:842] (0/4) Epoch 6, batch 2300, loss[loss=0.2361, simple_loss=0.3185, pruned_loss=0.07685, over 7299.00 frames.], tot_loss[loss=0.2405, simple_loss=0.3131, pruned_loss=0.08397, over 1423146.01 frames.], batch size: 24, lr: 8.44e-04 2022-05-26 23:40:05,770 INFO [train.py:842] (0/4) Epoch 6, batch 2350, loss[loss=0.2246, simple_loss=0.3177, pruned_loss=0.0658, over 7212.00 frames.], tot_loss[loss=0.2409, simple_loss=0.3134, pruned_loss=0.08418, over 1423713.81 frames.], batch size: 21, lr: 8.44e-04 2022-05-26 23:40:44,302 INFO [train.py:842] (0/4) Epoch 6, batch 2400, loss[loss=0.2062, simple_loss=0.2904, pruned_loss=0.06096, over 7330.00 frames.], tot_loss[loss=0.2396, 
simple_loss=0.3119, pruned_loss=0.08367, over 1422817.34 frames.], batch size: 20, lr: 8.43e-04 2022-05-26 23:41:23,355 INFO [train.py:842] (0/4) Epoch 6, batch 2450, loss[loss=0.2005, simple_loss=0.2712, pruned_loss=0.06492, over 6799.00 frames.], tot_loss[loss=0.2392, simple_loss=0.3119, pruned_loss=0.08326, over 1421708.98 frames.], batch size: 15, lr: 8.43e-04 2022-05-26 23:42:01,779 INFO [train.py:842] (0/4) Epoch 6, batch 2500, loss[loss=0.2458, simple_loss=0.3189, pruned_loss=0.08638, over 7345.00 frames.], tot_loss[loss=0.2396, simple_loss=0.3122, pruned_loss=0.08354, over 1420451.90 frames.], batch size: 22, lr: 8.42e-04 2022-05-26 23:42:40,582 INFO [train.py:842] (0/4) Epoch 6, batch 2550, loss[loss=0.207, simple_loss=0.276, pruned_loss=0.06902, over 7240.00 frames.], tot_loss[loss=0.2389, simple_loss=0.3116, pruned_loss=0.08315, over 1422679.61 frames.], batch size: 16, lr: 8.42e-04 2022-05-26 23:43:19,149 INFO [train.py:842] (0/4) Epoch 6, batch 2600, loss[loss=0.2596, simple_loss=0.3415, pruned_loss=0.08885, over 7320.00 frames.], tot_loss[loss=0.239, simple_loss=0.312, pruned_loss=0.08301, over 1425770.51 frames.], batch size: 21, lr: 8.42e-04 2022-05-26 23:43:57,912 INFO [train.py:842] (0/4) Epoch 6, batch 2650, loss[loss=0.2901, simple_loss=0.3542, pruned_loss=0.1131, over 7316.00 frames.], tot_loss[loss=0.2411, simple_loss=0.3134, pruned_loss=0.08442, over 1424686.12 frames.], batch size: 25, lr: 8.41e-04 2022-05-26 23:44:36,533 INFO [train.py:842] (0/4) Epoch 6, batch 2700, loss[loss=0.2258, simple_loss=0.2914, pruned_loss=0.08003, over 6812.00 frames.], tot_loss[loss=0.2406, simple_loss=0.3134, pruned_loss=0.08395, over 1426164.07 frames.], batch size: 15, lr: 8.41e-04 2022-05-26 23:45:15,204 INFO [train.py:842] (0/4) Epoch 6, batch 2750, loss[loss=0.1986, simple_loss=0.2854, pruned_loss=0.05593, over 7218.00 frames.], tot_loss[loss=0.2387, simple_loss=0.3121, pruned_loss=0.08269, over 1423355.04 frames.], batch size: 20, lr: 8.40e-04 2022-05-26 23:45:53,698 INFO [train.py:842] (0/4) Epoch 6, batch 2800, loss[loss=0.1717, simple_loss=0.2505, pruned_loss=0.04648, over 7274.00 frames.], tot_loss[loss=0.2364, simple_loss=0.3103, pruned_loss=0.08125, over 1420406.66 frames.], batch size: 18, lr: 8.40e-04 2022-05-26 23:46:32,464 INFO [train.py:842] (0/4) Epoch 6, batch 2850, loss[loss=0.1958, simple_loss=0.2663, pruned_loss=0.06264, over 7293.00 frames.], tot_loss[loss=0.2363, simple_loss=0.3101, pruned_loss=0.08125, over 1418149.16 frames.], batch size: 17, lr: 8.39e-04 2022-05-26 23:47:10,988 INFO [train.py:842] (0/4) Epoch 6, batch 2900, loss[loss=0.2472, simple_loss=0.3289, pruned_loss=0.08273, over 6828.00 frames.], tot_loss[loss=0.2345, simple_loss=0.3086, pruned_loss=0.08019, over 1421011.26 frames.], batch size: 31, lr: 8.39e-04 2022-05-26 23:47:50,194 INFO [train.py:842] (0/4) Epoch 6, batch 2950, loss[loss=0.1978, simple_loss=0.2895, pruned_loss=0.05302, over 7143.00 frames.], tot_loss[loss=0.2344, simple_loss=0.3085, pruned_loss=0.08014, over 1419865.44 frames.], batch size: 20, lr: 8.39e-04 2022-05-26 23:48:28,891 INFO [train.py:842] (0/4) Epoch 6, batch 3000, loss[loss=0.2037, simple_loss=0.2926, pruned_loss=0.05741, over 7232.00 frames.], tot_loss[loss=0.2348, simple_loss=0.3089, pruned_loss=0.08037, over 1419025.14 frames.], batch size: 20, lr: 8.38e-04 2022-05-26 23:48:28,893 INFO [train.py:862] (0/4) Computing validation loss 2022-05-26 23:48:38,153 INFO [train.py:871] (0/4) Epoch 6, validation: loss=0.1895, simple_loss=0.2892, pruned_loss=0.04494, over 
868885.00 frames. 2022-05-26 23:49:17,573 INFO [train.py:842] (0/4) Epoch 6, batch 3050, loss[loss=0.2105, simple_loss=0.3005, pruned_loss=0.06018, over 7209.00 frames.], tot_loss[loss=0.2354, simple_loss=0.3091, pruned_loss=0.08085, over 1424788.06 frames.], batch size: 23, lr: 8.38e-04 2022-05-26 23:49:56,216 INFO [train.py:842] (0/4) Epoch 6, batch 3100, loss[loss=0.2454, simple_loss=0.3295, pruned_loss=0.08064, over 7342.00 frames.], tot_loss[loss=0.236, simple_loss=0.3091, pruned_loss=0.08148, over 1423420.97 frames.], batch size: 22, lr: 8.37e-04 2022-05-26 23:50:35,001 INFO [train.py:842] (0/4) Epoch 6, batch 3150, loss[loss=0.2471, simple_loss=0.3272, pruned_loss=0.08345, over 7202.00 frames.], tot_loss[loss=0.2363, simple_loss=0.3097, pruned_loss=0.08143, over 1423312.79 frames.], batch size: 23, lr: 8.37e-04 2022-05-26 23:51:13,521 INFO [train.py:842] (0/4) Epoch 6, batch 3200, loss[loss=0.2618, simple_loss=0.3381, pruned_loss=0.09271, over 7224.00 frames.], tot_loss[loss=0.2375, simple_loss=0.3111, pruned_loss=0.082, over 1424582.27 frames.], batch size: 21, lr: 8.36e-04 2022-05-26 23:51:52,211 INFO [train.py:842] (0/4) Epoch 6, batch 3250, loss[loss=0.2322, simple_loss=0.3131, pruned_loss=0.07571, over 7354.00 frames.], tot_loss[loss=0.2387, simple_loss=0.3124, pruned_loss=0.08249, over 1424994.51 frames.], batch size: 19, lr: 8.36e-04 2022-05-26 23:52:30,645 INFO [train.py:842] (0/4) Epoch 6, batch 3300, loss[loss=0.2515, simple_loss=0.3296, pruned_loss=0.08671, over 7197.00 frames.], tot_loss[loss=0.2404, simple_loss=0.3138, pruned_loss=0.08351, over 1421019.42 frames.], batch size: 23, lr: 8.36e-04 2022-05-26 23:53:09,689 INFO [train.py:842] (0/4) Epoch 6, batch 3350, loss[loss=0.1973, simple_loss=0.2792, pruned_loss=0.05771, over 7249.00 frames.], tot_loss[loss=0.2404, simple_loss=0.3136, pruned_loss=0.08365, over 1425895.24 frames.], batch size: 19, lr: 8.35e-04 2022-05-26 23:53:48,172 INFO [train.py:842] (0/4) Epoch 6, batch 3400, loss[loss=0.26, simple_loss=0.3287, pruned_loss=0.09563, over 7284.00 frames.], tot_loss[loss=0.2385, simple_loss=0.3119, pruned_loss=0.08258, over 1426210.30 frames.], batch size: 24, lr: 8.35e-04 2022-05-26 23:54:26,999 INFO [train.py:842] (0/4) Epoch 6, batch 3450, loss[loss=0.2603, simple_loss=0.3387, pruned_loss=0.09099, over 7410.00 frames.], tot_loss[loss=0.2402, simple_loss=0.3135, pruned_loss=0.08348, over 1428030.56 frames.], batch size: 21, lr: 8.34e-04 2022-05-26 23:55:05,838 INFO [train.py:842] (0/4) Epoch 6, batch 3500, loss[loss=0.2104, simple_loss=0.2898, pruned_loss=0.0655, over 7211.00 frames.], tot_loss[loss=0.2388, simple_loss=0.3116, pruned_loss=0.08297, over 1424817.16 frames.], batch size: 22, lr: 8.34e-04 2022-05-26 23:55:44,587 INFO [train.py:842] (0/4) Epoch 6, batch 3550, loss[loss=0.2476, simple_loss=0.3231, pruned_loss=0.086, over 7318.00 frames.], tot_loss[loss=0.2397, simple_loss=0.312, pruned_loss=0.0837, over 1427954.32 frames.], batch size: 21, lr: 8.33e-04 2022-05-26 23:56:22,914 INFO [train.py:842] (0/4) Epoch 6, batch 3600, loss[loss=0.2346, simple_loss=0.309, pruned_loss=0.08016, over 7171.00 frames.], tot_loss[loss=0.2384, simple_loss=0.3113, pruned_loss=0.08272, over 1429420.97 frames.], batch size: 18, lr: 8.33e-04 2022-05-26 23:57:01,909 INFO [train.py:842] (0/4) Epoch 6, batch 3650, loss[loss=0.1933, simple_loss=0.2864, pruned_loss=0.05015, over 7417.00 frames.], tot_loss[loss=0.2377, simple_loss=0.3112, pruned_loss=0.08206, over 1428604.59 frames.], batch size: 21, lr: 8.33e-04 2022-05-26 23:57:40,535 
INFO [train.py:842] (0/4) Epoch 6, batch 3700, loss[loss=0.2523, simple_loss=0.3241, pruned_loss=0.09026, over 7226.00 frames.], tot_loss[loss=0.2372, simple_loss=0.3108, pruned_loss=0.08184, over 1427225.74 frames.], batch size: 20, lr: 8.32e-04 2022-05-26 23:58:19,415 INFO [train.py:842] (0/4) Epoch 6, batch 3750, loss[loss=0.2591, simple_loss=0.345, pruned_loss=0.08657, over 7385.00 frames.], tot_loss[loss=0.24, simple_loss=0.313, pruned_loss=0.08351, over 1425044.23 frames.], batch size: 23, lr: 8.32e-04 2022-05-26 23:58:57,941 INFO [train.py:842] (0/4) Epoch 6, batch 3800, loss[loss=0.2069, simple_loss=0.2868, pruned_loss=0.0635, over 7290.00 frames.], tot_loss[loss=0.2387, simple_loss=0.3118, pruned_loss=0.08285, over 1421195.74 frames.], batch size: 24, lr: 8.31e-04 2022-05-26 23:59:36,777 INFO [train.py:842] (0/4) Epoch 6, batch 3850, loss[loss=0.3028, simple_loss=0.3686, pruned_loss=0.1185, over 7332.00 frames.], tot_loss[loss=0.2388, simple_loss=0.3122, pruned_loss=0.08273, over 1420731.12 frames.], batch size: 22, lr: 8.31e-04 2022-05-27 00:00:15,370 INFO [train.py:842] (0/4) Epoch 6, batch 3900, loss[loss=0.2405, simple_loss=0.3048, pruned_loss=0.08811, over 7275.00 frames.], tot_loss[loss=0.2368, simple_loss=0.3105, pruned_loss=0.08157, over 1424873.94 frames.], batch size: 18, lr: 8.31e-04 2022-05-27 00:00:54,255 INFO [train.py:842] (0/4) Epoch 6, batch 3950, loss[loss=0.3068, simple_loss=0.3764, pruned_loss=0.1186, over 6757.00 frames.], tot_loss[loss=0.2359, simple_loss=0.3096, pruned_loss=0.08111, over 1425163.87 frames.], batch size: 31, lr: 8.30e-04 2022-05-27 00:01:33,003 INFO [train.py:842] (0/4) Epoch 6, batch 4000, loss[loss=0.3518, simple_loss=0.3944, pruned_loss=0.1546, over 7364.00 frames.], tot_loss[loss=0.2371, simple_loss=0.3105, pruned_loss=0.08184, over 1427142.06 frames.], batch size: 23, lr: 8.30e-04 2022-05-27 00:02:11,871 INFO [train.py:842] (0/4) Epoch 6, batch 4050, loss[loss=0.2495, simple_loss=0.3088, pruned_loss=0.09508, over 7166.00 frames.], tot_loss[loss=0.2382, simple_loss=0.3113, pruned_loss=0.08254, over 1429688.56 frames.], batch size: 19, lr: 8.29e-04 2022-05-27 00:02:50,474 INFO [train.py:842] (0/4) Epoch 6, batch 4100, loss[loss=0.3804, simple_loss=0.4129, pruned_loss=0.174, over 7380.00 frames.], tot_loss[loss=0.2401, simple_loss=0.3125, pruned_loss=0.08385, over 1427202.67 frames.], batch size: 23, lr: 8.29e-04 2022-05-27 00:03:29,572 INFO [train.py:842] (0/4) Epoch 6, batch 4150, loss[loss=0.242, simple_loss=0.302, pruned_loss=0.091, over 7129.00 frames.], tot_loss[loss=0.2383, simple_loss=0.3113, pruned_loss=0.08265, over 1426194.04 frames.], batch size: 17, lr: 8.29e-04 2022-05-27 00:04:18,780 INFO [train.py:842] (0/4) Epoch 6, batch 4200, loss[loss=0.1932, simple_loss=0.276, pruned_loss=0.05521, over 7413.00 frames.], tot_loss[loss=0.2367, simple_loss=0.3098, pruned_loss=0.0818, over 1428278.61 frames.], batch size: 18, lr: 8.28e-04 2022-05-27 00:04:57,586 INFO [train.py:842] (0/4) Epoch 6, batch 4250, loss[loss=0.2, simple_loss=0.2845, pruned_loss=0.05771, over 7290.00 frames.], tot_loss[loss=0.2357, simple_loss=0.3091, pruned_loss=0.08113, over 1428047.56 frames.], batch size: 24, lr: 8.28e-04 2022-05-27 00:05:36,220 INFO [train.py:842] (0/4) Epoch 6, batch 4300, loss[loss=0.279, simple_loss=0.3501, pruned_loss=0.104, over 7338.00 frames.], tot_loss[loss=0.2351, simple_loss=0.3086, pruned_loss=0.08077, over 1429735.16 frames.], batch size: 22, lr: 8.27e-04 2022-05-27 00:06:15,215 INFO [train.py:842] (0/4) Epoch 6, batch 4350, 
loss[loss=0.2481, simple_loss=0.3093, pruned_loss=0.09348, over 7057.00 frames.], tot_loss[loss=0.2333, simple_loss=0.3074, pruned_loss=0.07964, over 1430072.88 frames.], batch size: 18, lr: 8.27e-04 2022-05-27 00:06:53,750 INFO [train.py:842] (0/4) Epoch 6, batch 4400, loss[loss=0.1855, simple_loss=0.2735, pruned_loss=0.04881, over 7234.00 frames.], tot_loss[loss=0.2339, simple_loss=0.3079, pruned_loss=0.07994, over 1427797.80 frames.], batch size: 20, lr: 8.26e-04 2022-05-27 00:07:32,833 INFO [train.py:842] (0/4) Epoch 6, batch 4450, loss[loss=0.2137, simple_loss=0.2996, pruned_loss=0.06391, over 7238.00 frames.], tot_loss[loss=0.2352, simple_loss=0.3086, pruned_loss=0.08093, over 1428810.24 frames.], batch size: 20, lr: 8.26e-04 2022-05-27 00:08:11,512 INFO [train.py:842] (0/4) Epoch 6, batch 4500, loss[loss=0.2328, simple_loss=0.2995, pruned_loss=0.08299, over 5286.00 frames.], tot_loss[loss=0.2337, simple_loss=0.3068, pruned_loss=0.0803, over 1428091.47 frames.], batch size: 52, lr: 8.26e-04 2022-05-27 00:08:50,459 INFO [train.py:842] (0/4) Epoch 6, batch 4550, loss[loss=0.1988, simple_loss=0.2795, pruned_loss=0.05904, over 7069.00 frames.], tot_loss[loss=0.2348, simple_loss=0.3076, pruned_loss=0.08096, over 1428516.91 frames.], batch size: 18, lr: 8.25e-04 2022-05-27 00:09:28,904 INFO [train.py:842] (0/4) Epoch 6, batch 4600, loss[loss=0.2485, simple_loss=0.3269, pruned_loss=0.08502, over 7077.00 frames.], tot_loss[loss=0.2341, simple_loss=0.3073, pruned_loss=0.08045, over 1429004.50 frames.], batch size: 28, lr: 8.25e-04 2022-05-27 00:10:07,614 INFO [train.py:842] (0/4) Epoch 6, batch 4650, loss[loss=0.2475, simple_loss=0.3252, pruned_loss=0.08489, over 7118.00 frames.], tot_loss[loss=0.2361, simple_loss=0.3089, pruned_loss=0.08166, over 1419198.81 frames.], batch size: 21, lr: 8.24e-04 2022-05-27 00:10:46,043 INFO [train.py:842] (0/4) Epoch 6, batch 4700, loss[loss=0.2635, simple_loss=0.3182, pruned_loss=0.1044, over 6995.00 frames.], tot_loss[loss=0.2382, simple_loss=0.3109, pruned_loss=0.08276, over 1425544.65 frames.], batch size: 16, lr: 8.24e-04 2022-05-27 00:11:25,083 INFO [train.py:842] (0/4) Epoch 6, batch 4750, loss[loss=0.2022, simple_loss=0.2724, pruned_loss=0.06599, over 7418.00 frames.], tot_loss[loss=0.2371, simple_loss=0.3099, pruned_loss=0.08216, over 1427941.39 frames.], batch size: 18, lr: 8.24e-04 2022-05-27 00:12:03,899 INFO [train.py:842] (0/4) Epoch 6, batch 4800, loss[loss=0.2232, simple_loss=0.2889, pruned_loss=0.07872, over 7252.00 frames.], tot_loss[loss=0.2362, simple_loss=0.3092, pruned_loss=0.08158, over 1426181.35 frames.], batch size: 19, lr: 8.23e-04 2022-05-27 00:12:42,813 INFO [train.py:842] (0/4) Epoch 6, batch 4850, loss[loss=0.2726, simple_loss=0.3473, pruned_loss=0.09895, over 7415.00 frames.], tot_loss[loss=0.2346, simple_loss=0.3081, pruned_loss=0.08061, over 1427100.71 frames.], batch size: 21, lr: 8.23e-04 2022-05-27 00:13:21,323 INFO [train.py:842] (0/4) Epoch 6, batch 4900, loss[loss=0.2384, simple_loss=0.3117, pruned_loss=0.08254, over 7336.00 frames.], tot_loss[loss=0.2359, simple_loss=0.3096, pruned_loss=0.08107, over 1430826.09 frames.], batch size: 22, lr: 8.22e-04 2022-05-27 00:14:00,185 INFO [train.py:842] (0/4) Epoch 6, batch 4950, loss[loss=0.2368, simple_loss=0.3168, pruned_loss=0.07841, over 7099.00 frames.], tot_loss[loss=0.2363, simple_loss=0.3095, pruned_loss=0.08151, over 1428176.63 frames.], batch size: 28, lr: 8.22e-04 2022-05-27 00:14:38,739 INFO [train.py:842] (0/4) Epoch 6, batch 5000, loss[loss=0.2454, 
simple_loss=0.3281, pruned_loss=0.08138, over 6890.00 frames.], tot_loss[loss=0.2364, simple_loss=0.3094, pruned_loss=0.08174, over 1426930.03 frames.], batch size: 32, lr: 8.22e-04 2022-05-27 00:15:17,432 INFO [train.py:842] (0/4) Epoch 6, batch 5050, loss[loss=0.2538, simple_loss=0.3108, pruned_loss=0.0984, over 7141.00 frames.], tot_loss[loss=0.237, simple_loss=0.3101, pruned_loss=0.08191, over 1423791.61 frames.], batch size: 17, lr: 8.21e-04 2022-05-27 00:15:55,964 INFO [train.py:842] (0/4) Epoch 6, batch 5100, loss[loss=0.2118, simple_loss=0.2868, pruned_loss=0.06844, over 7072.00 frames.], tot_loss[loss=0.2372, simple_loss=0.3099, pruned_loss=0.08222, over 1424587.30 frames.], batch size: 18, lr: 8.21e-04 2022-05-27 00:16:34,566 INFO [train.py:842] (0/4) Epoch 6, batch 5150, loss[loss=0.1833, simple_loss=0.2674, pruned_loss=0.04962, over 7282.00 frames.], tot_loss[loss=0.2384, simple_loss=0.3107, pruned_loss=0.08299, over 1423864.44 frames.], batch size: 17, lr: 8.20e-04 2022-05-27 00:17:13,260 INFO [train.py:842] (0/4) Epoch 6, batch 5200, loss[loss=0.2202, simple_loss=0.305, pruned_loss=0.06772, over 7375.00 frames.], tot_loss[loss=0.2373, simple_loss=0.3099, pruned_loss=0.08236, over 1427801.66 frames.], batch size: 23, lr: 8.20e-04 2022-05-27 00:18:02,469 INFO [train.py:842] (0/4) Epoch 6, batch 5250, loss[loss=0.3093, simple_loss=0.3724, pruned_loss=0.1231, over 7329.00 frames.], tot_loss[loss=0.2357, simple_loss=0.309, pruned_loss=0.08126, over 1427444.89 frames.], batch size: 25, lr: 8.20e-04 2022-05-27 00:19:01,429 INFO [train.py:842] (0/4) Epoch 6, batch 5300, loss[loss=0.2299, simple_loss=0.317, pruned_loss=0.0714, over 7116.00 frames.], tot_loss[loss=0.2393, simple_loss=0.3118, pruned_loss=0.0834, over 1417147.31 frames.], batch size: 21, lr: 8.19e-04 2022-05-27 00:19:40,581 INFO [train.py:842] (0/4) Epoch 6, batch 5350, loss[loss=0.2155, simple_loss=0.2999, pruned_loss=0.06558, over 7415.00 frames.], tot_loss[loss=0.237, simple_loss=0.3103, pruned_loss=0.08183, over 1422084.89 frames.], batch size: 21, lr: 8.19e-04 2022-05-27 00:20:19,192 INFO [train.py:842] (0/4) Epoch 6, batch 5400, loss[loss=0.1896, simple_loss=0.2638, pruned_loss=0.05773, over 7290.00 frames.], tot_loss[loss=0.2373, simple_loss=0.3101, pruned_loss=0.08228, over 1420066.81 frames.], batch size: 18, lr: 8.18e-04 2022-05-27 00:20:58,461 INFO [train.py:842] (0/4) Epoch 6, batch 5450, loss[loss=0.2774, simple_loss=0.3425, pruned_loss=0.1062, over 7350.00 frames.], tot_loss[loss=0.2363, simple_loss=0.3092, pruned_loss=0.08167, over 1424555.65 frames.], batch size: 22, lr: 8.18e-04 2022-05-27 00:21:37,387 INFO [train.py:842] (0/4) Epoch 6, batch 5500, loss[loss=0.183, simple_loss=0.2579, pruned_loss=0.05408, over 7174.00 frames.], tot_loss[loss=0.2363, simple_loss=0.3095, pruned_loss=0.08152, over 1420358.47 frames.], batch size: 18, lr: 8.18e-04 2022-05-27 00:22:16,128 INFO [train.py:842] (0/4) Epoch 6, batch 5550, loss[loss=0.2379, simple_loss=0.3148, pruned_loss=0.08049, over 7214.00 frames.], tot_loss[loss=0.2368, simple_loss=0.3099, pruned_loss=0.08183, over 1417059.27 frames.], batch size: 22, lr: 8.17e-04 2022-05-27 00:22:54,567 INFO [train.py:842] (0/4) Epoch 6, batch 5600, loss[loss=0.267, simple_loss=0.3452, pruned_loss=0.09434, over 7124.00 frames.], tot_loss[loss=0.235, simple_loss=0.3085, pruned_loss=0.0807, over 1419431.75 frames.], batch size: 28, lr: 8.17e-04 2022-05-27 00:23:33,351 INFO [train.py:842] (0/4) Epoch 6, batch 5650, loss[loss=0.2701, simple_loss=0.3363, pruned_loss=0.1019, over 
7209.00 frames.], tot_loss[loss=0.2346, simple_loss=0.3079, pruned_loss=0.0806, over 1417054.91 frames.], batch size: 22, lr: 8.17e-04 2022-05-27 00:24:12,118 INFO [train.py:842] (0/4) Epoch 6, batch 5700, loss[loss=0.2299, simple_loss=0.3167, pruned_loss=0.07155, over 7109.00 frames.], tot_loss[loss=0.2344, simple_loss=0.3073, pruned_loss=0.08071, over 1418738.47 frames.], batch size: 21, lr: 8.16e-04 2022-05-27 00:24:50,734 INFO [train.py:842] (0/4) Epoch 6, batch 5750, loss[loss=0.2312, simple_loss=0.3012, pruned_loss=0.08058, over 7166.00 frames.], tot_loss[loss=0.2341, simple_loss=0.3074, pruned_loss=0.08037, over 1418974.80 frames.], batch size: 19, lr: 8.16e-04 2022-05-27 00:25:29,328 INFO [train.py:842] (0/4) Epoch 6, batch 5800, loss[loss=0.1996, simple_loss=0.2799, pruned_loss=0.05962, over 7224.00 frames.], tot_loss[loss=0.2338, simple_loss=0.3071, pruned_loss=0.08025, over 1418676.29 frames.], batch size: 21, lr: 8.15e-04 2022-05-27 00:26:08,344 INFO [train.py:842] (0/4) Epoch 6, batch 5850, loss[loss=0.2087, simple_loss=0.2794, pruned_loss=0.06899, over 7404.00 frames.], tot_loss[loss=0.2333, simple_loss=0.307, pruned_loss=0.07976, over 1423626.99 frames.], batch size: 18, lr: 8.15e-04 2022-05-27 00:26:46,769 INFO [train.py:842] (0/4) Epoch 6, batch 5900, loss[loss=0.3438, simple_loss=0.3894, pruned_loss=0.1491, over 7415.00 frames.], tot_loss[loss=0.2337, simple_loss=0.3071, pruned_loss=0.08012, over 1424152.40 frames.], batch size: 21, lr: 8.15e-04 2022-05-27 00:27:25,964 INFO [train.py:842] (0/4) Epoch 6, batch 5950, loss[loss=0.2349, simple_loss=0.308, pruned_loss=0.08086, over 7357.00 frames.], tot_loss[loss=0.2352, simple_loss=0.3083, pruned_loss=0.08102, over 1425107.57 frames.], batch size: 19, lr: 8.14e-04 2022-05-27 00:28:04,704 INFO [train.py:842] (0/4) Epoch 6, batch 6000, loss[loss=0.2671, simple_loss=0.3536, pruned_loss=0.0903, over 7349.00 frames.], tot_loss[loss=0.2371, simple_loss=0.31, pruned_loss=0.08212, over 1424852.51 frames.], batch size: 22, lr: 8.14e-04 2022-05-27 00:28:04,705 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 00:28:13,973 INFO [train.py:871] (0/4) Epoch 6, validation: loss=0.1847, simple_loss=0.2853, pruned_loss=0.04201, over 868885.00 frames. 
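Note on the loss fields above: in every entry of this log the printed loss is consistent with a fixed-weight combination of the two components, loss ≈ 0.5 * simple_loss + pruned_loss (e.g. 0.5 * 0.3091 + 0.08085 ≈ 0.2354 for epoch 6, batch 3050, and 0.5 * 0.2853 + 0.04201 ≈ 0.1847 for the validation entry just above). A minimal sketch of that bookkeeping follows; the 0.5 weight and the function name are inferred from these numbers as an assumption, not read out of train.py.

# Hedged sketch: reproduce the 'loss' field from 'simple_loss' and 'pruned_loss',
# assuming loss = simple_loss_scale * simple_loss + pruned_loss as the numbers suggest.
def combine_losses(simple_loss: float, pruned_loss: float,
                   simple_loss_scale: float = 0.5) -> float:
    return simple_loss_scale * simple_loss + pruned_loss

# Checks against two entries from this log section.
assert abs(combine_losses(0.3091, 0.08085) - 0.2354) < 5e-4   # epoch 6, batch 3050
assert abs(combine_losses(0.2853, 0.04201) - 0.1847) < 5e-4   # epoch 6 validation, batch 6000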
2022-05-27 00:28:52,837 INFO [train.py:842] (0/4) Epoch 6, batch 6050, loss[loss=0.2052, simple_loss=0.2754, pruned_loss=0.06749, over 7284.00 frames.], tot_loss[loss=0.2344, simple_loss=0.308, pruned_loss=0.08037, over 1428525.97 frames.], batch size: 18, lr: 8.13e-04 2022-05-27 00:29:31,595 INFO [train.py:842] (0/4) Epoch 6, batch 6100, loss[loss=0.2685, simple_loss=0.3124, pruned_loss=0.1122, over 6823.00 frames.], tot_loss[loss=0.2346, simple_loss=0.308, pruned_loss=0.08066, over 1425558.21 frames.], batch size: 15, lr: 8.13e-04 2022-05-27 00:30:10,443 INFO [train.py:842] (0/4) Epoch 6, batch 6150, loss[loss=0.3179, simple_loss=0.3882, pruned_loss=0.1238, over 7114.00 frames.], tot_loss[loss=0.2357, simple_loss=0.3088, pruned_loss=0.08125, over 1418370.86 frames.], batch size: 21, lr: 8.13e-04 2022-05-27 00:30:48,856 INFO [train.py:842] (0/4) Epoch 6, batch 6200, loss[loss=0.2568, simple_loss=0.3353, pruned_loss=0.08911, over 6570.00 frames.], tot_loss[loss=0.2388, simple_loss=0.3112, pruned_loss=0.08319, over 1414430.16 frames.], batch size: 38, lr: 8.12e-04 2022-05-27 00:31:27,493 INFO [train.py:842] (0/4) Epoch 6, batch 6250, loss[loss=0.3803, simple_loss=0.4026, pruned_loss=0.179, over 6212.00 frames.], tot_loss[loss=0.239, simple_loss=0.312, pruned_loss=0.08305, over 1417107.04 frames.], batch size: 37, lr: 8.12e-04 2022-05-27 00:32:06,045 INFO [train.py:842] (0/4) Epoch 6, batch 6300, loss[loss=0.2155, simple_loss=0.2981, pruned_loss=0.06647, over 7336.00 frames.], tot_loss[loss=0.237, simple_loss=0.3103, pruned_loss=0.08181, over 1419580.12 frames.], batch size: 20, lr: 8.11e-04 2022-05-27 00:32:44,931 INFO [train.py:842] (0/4) Epoch 6, batch 6350, loss[loss=0.2081, simple_loss=0.2816, pruned_loss=0.06728, over 7420.00 frames.], tot_loss[loss=0.2373, simple_loss=0.3103, pruned_loss=0.08215, over 1421131.90 frames.], batch size: 18, lr: 8.11e-04 2022-05-27 00:33:23,419 INFO [train.py:842] (0/4) Epoch 6, batch 6400, loss[loss=0.2388, simple_loss=0.315, pruned_loss=0.08128, over 7117.00 frames.], tot_loss[loss=0.2379, simple_loss=0.3107, pruned_loss=0.08252, over 1419442.27 frames.], batch size: 28, lr: 8.11e-04 2022-05-27 00:34:02,225 INFO [train.py:842] (0/4) Epoch 6, batch 6450, loss[loss=0.1864, simple_loss=0.2576, pruned_loss=0.05757, over 7283.00 frames.], tot_loss[loss=0.2365, simple_loss=0.3093, pruned_loss=0.08182, over 1418768.44 frames.], batch size: 17, lr: 8.10e-04 2022-05-27 00:34:40,745 INFO [train.py:842] (0/4) Epoch 6, batch 6500, loss[loss=0.2816, simple_loss=0.3492, pruned_loss=0.107, over 7329.00 frames.], tot_loss[loss=0.2353, simple_loss=0.3083, pruned_loss=0.08115, over 1420613.23 frames.], batch size: 25, lr: 8.10e-04 2022-05-27 00:35:19,800 INFO [train.py:842] (0/4) Epoch 6, batch 6550, loss[loss=0.2895, simple_loss=0.3593, pruned_loss=0.1098, over 7117.00 frames.], tot_loss[loss=0.2367, simple_loss=0.3095, pruned_loss=0.08195, over 1417749.64 frames.], batch size: 21, lr: 8.10e-04 2022-05-27 00:35:58,637 INFO [train.py:842] (0/4) Epoch 6, batch 6600, loss[loss=0.264, simple_loss=0.338, pruned_loss=0.09502, over 7310.00 frames.], tot_loss[loss=0.2366, simple_loss=0.3089, pruned_loss=0.0821, over 1419872.94 frames.], batch size: 24, lr: 8.09e-04 2022-05-27 00:36:37,430 INFO [train.py:842] (0/4) Epoch 6, batch 6650, loss[loss=0.2913, simple_loss=0.3609, pruned_loss=0.1109, over 7143.00 frames.], tot_loss[loss=0.2376, simple_loss=0.3099, pruned_loss=0.08269, over 1420500.89 frames.], batch size: 20, lr: 8.09e-04 2022-05-27 00:37:16,104 INFO [train.py:842] 
(0/4) Epoch 6, batch 6700, loss[loss=0.2373, simple_loss=0.3146, pruned_loss=0.07995, over 7215.00 frames.], tot_loss[loss=0.2365, simple_loss=0.3091, pruned_loss=0.08193, over 1419339.52 frames.], batch size: 23, lr: 8.08e-04 2022-05-27 00:37:54,960 INFO [train.py:842] (0/4) Epoch 6, batch 6750, loss[loss=0.2425, simple_loss=0.3191, pruned_loss=0.08295, over 7360.00 frames.], tot_loss[loss=0.2349, simple_loss=0.3079, pruned_loss=0.08096, over 1417997.38 frames.], batch size: 23, lr: 8.08e-04 2022-05-27 00:38:33,404 INFO [train.py:842] (0/4) Epoch 6, batch 6800, loss[loss=0.2263, simple_loss=0.3093, pruned_loss=0.0716, over 7323.00 frames.], tot_loss[loss=0.2354, simple_loss=0.3084, pruned_loss=0.08126, over 1419310.93 frames.], batch size: 20, lr: 8.08e-04 2022-05-27 00:39:12,299 INFO [train.py:842] (0/4) Epoch 6, batch 6850, loss[loss=0.2331, simple_loss=0.307, pruned_loss=0.07956, over 7064.00 frames.], tot_loss[loss=0.2337, simple_loss=0.3072, pruned_loss=0.08009, over 1419711.15 frames.], batch size: 18, lr: 8.07e-04 2022-05-27 00:39:50,890 INFO [train.py:842] (0/4) Epoch 6, batch 6900, loss[loss=0.2111, simple_loss=0.2968, pruned_loss=0.06273, over 7153.00 frames.], tot_loss[loss=0.2352, simple_loss=0.3085, pruned_loss=0.08094, over 1421135.17 frames.], batch size: 20, lr: 8.07e-04 2022-05-27 00:40:29,629 INFO [train.py:842] (0/4) Epoch 6, batch 6950, loss[loss=0.2637, simple_loss=0.3289, pruned_loss=0.09925, over 7330.00 frames.], tot_loss[loss=0.2373, simple_loss=0.3101, pruned_loss=0.08226, over 1425745.60 frames.], batch size: 20, lr: 8.07e-04 2022-05-27 00:41:08,365 INFO [train.py:842] (0/4) Epoch 6, batch 7000, loss[loss=0.2344, simple_loss=0.3159, pruned_loss=0.07642, over 7230.00 frames.], tot_loss[loss=0.2355, simple_loss=0.3088, pruned_loss=0.08108, over 1428286.38 frames.], batch size: 20, lr: 8.06e-04 2022-05-27 00:41:47,478 INFO [train.py:842] (0/4) Epoch 6, batch 7050, loss[loss=0.235, simple_loss=0.3203, pruned_loss=0.07481, over 7346.00 frames.], tot_loss[loss=0.2346, simple_loss=0.3078, pruned_loss=0.08072, over 1425353.16 frames.], batch size: 22, lr: 8.06e-04 2022-05-27 00:42:26,213 INFO [train.py:842] (0/4) Epoch 6, batch 7100, loss[loss=0.1579, simple_loss=0.2379, pruned_loss=0.03894, over 7269.00 frames.], tot_loss[loss=0.2343, simple_loss=0.3073, pruned_loss=0.08061, over 1423884.37 frames.], batch size: 17, lr: 8.05e-04 2022-05-27 00:43:05,056 INFO [train.py:842] (0/4) Epoch 6, batch 7150, loss[loss=0.249, simple_loss=0.3275, pruned_loss=0.08527, over 7415.00 frames.], tot_loss[loss=0.2344, simple_loss=0.3079, pruned_loss=0.08044, over 1426698.36 frames.], batch size: 21, lr: 8.05e-04 2022-05-27 00:43:43,851 INFO [train.py:842] (0/4) Epoch 6, batch 7200, loss[loss=0.2396, simple_loss=0.3099, pruned_loss=0.08469, over 7286.00 frames.], tot_loss[loss=0.233, simple_loss=0.3067, pruned_loss=0.0797, over 1428919.77 frames.], batch size: 24, lr: 8.05e-04 2022-05-27 00:44:22,615 INFO [train.py:842] (0/4) Epoch 6, batch 7250, loss[loss=0.2513, simple_loss=0.3386, pruned_loss=0.08198, over 7327.00 frames.], tot_loss[loss=0.235, simple_loss=0.3085, pruned_loss=0.08075, over 1424466.97 frames.], batch size: 20, lr: 8.04e-04 2022-05-27 00:45:01,195 INFO [train.py:842] (0/4) Epoch 6, batch 7300, loss[loss=0.2101, simple_loss=0.2959, pruned_loss=0.06212, over 7142.00 frames.], tot_loss[loss=0.2349, simple_loss=0.3082, pruned_loss=0.08081, over 1424853.20 frames.], batch size: 20, lr: 8.04e-04 2022-05-27 00:45:40,186 INFO [train.py:842] (0/4) Epoch 6, batch 7350, 
loss[loss=0.2079, simple_loss=0.2758, pruned_loss=0.07004, over 6844.00 frames.], tot_loss[loss=0.2329, simple_loss=0.3065, pruned_loss=0.07965, over 1425360.25 frames.], batch size: 15, lr: 8.04e-04 2022-05-27 00:46:18,803 INFO [train.py:842] (0/4) Epoch 6, batch 7400, loss[loss=0.2167, simple_loss=0.2925, pruned_loss=0.07042, over 7421.00 frames.], tot_loss[loss=0.2321, simple_loss=0.3061, pruned_loss=0.0791, over 1431064.55 frames.], batch size: 20, lr: 8.03e-04 2022-05-27 00:46:57,745 INFO [train.py:842] (0/4) Epoch 6, batch 7450, loss[loss=0.1723, simple_loss=0.2511, pruned_loss=0.04674, over 7302.00 frames.], tot_loss[loss=0.2341, simple_loss=0.3077, pruned_loss=0.08023, over 1426335.53 frames.], batch size: 17, lr: 8.03e-04 2022-05-27 00:47:36,437 INFO [train.py:842] (0/4) Epoch 6, batch 7500, loss[loss=0.2473, simple_loss=0.3295, pruned_loss=0.08257, over 7177.00 frames.], tot_loss[loss=0.2336, simple_loss=0.3068, pruned_loss=0.08016, over 1422443.48 frames.], batch size: 22, lr: 8.02e-04 2022-05-27 00:48:15,419 INFO [train.py:842] (0/4) Epoch 6, batch 7550, loss[loss=0.2079, simple_loss=0.288, pruned_loss=0.06395, over 7321.00 frames.], tot_loss[loss=0.2351, simple_loss=0.3083, pruned_loss=0.08092, over 1422674.85 frames.], batch size: 20, lr: 8.02e-04 2022-05-27 00:48:53,866 INFO [train.py:842] (0/4) Epoch 6, batch 7600, loss[loss=0.2726, simple_loss=0.3407, pruned_loss=0.1022, over 7411.00 frames.], tot_loss[loss=0.2362, simple_loss=0.3095, pruned_loss=0.08142, over 1419981.68 frames.], batch size: 21, lr: 8.02e-04 2022-05-27 00:49:32,679 INFO [train.py:842] (0/4) Epoch 6, batch 7650, loss[loss=0.2484, simple_loss=0.3287, pruned_loss=0.08403, over 7304.00 frames.], tot_loss[loss=0.2365, simple_loss=0.3099, pruned_loss=0.08156, over 1416976.77 frames.], batch size: 25, lr: 8.01e-04 2022-05-27 00:50:11,215 INFO [train.py:842] (0/4) Epoch 6, batch 7700, loss[loss=0.2198, simple_loss=0.3011, pruned_loss=0.06926, over 7073.00 frames.], tot_loss[loss=0.2366, simple_loss=0.3101, pruned_loss=0.0816, over 1418562.57 frames.], batch size: 18, lr: 8.01e-04 2022-05-27 00:50:49,903 INFO [train.py:842] (0/4) Epoch 6, batch 7750, loss[loss=0.2339, simple_loss=0.3143, pruned_loss=0.07673, over 7176.00 frames.], tot_loss[loss=0.2384, simple_loss=0.3113, pruned_loss=0.08276, over 1413331.95 frames.], batch size: 26, lr: 8.01e-04 2022-05-27 00:51:28,321 INFO [train.py:842] (0/4) Epoch 6, batch 7800, loss[loss=0.3061, simple_loss=0.3423, pruned_loss=0.135, over 7124.00 frames.], tot_loss[loss=0.2381, simple_loss=0.3108, pruned_loss=0.08267, over 1414311.12 frames.], batch size: 17, lr: 8.00e-04 2022-05-27 00:52:07,392 INFO [train.py:842] (0/4) Epoch 6, batch 7850, loss[loss=0.2441, simple_loss=0.332, pruned_loss=0.07815, over 7420.00 frames.], tot_loss[loss=0.2369, simple_loss=0.3097, pruned_loss=0.08212, over 1412459.76 frames.], batch size: 21, lr: 8.00e-04 2022-05-27 00:52:46,286 INFO [train.py:842] (0/4) Epoch 6, batch 7900, loss[loss=0.1864, simple_loss=0.2603, pruned_loss=0.05625, over 7232.00 frames.], tot_loss[loss=0.2358, simple_loss=0.3087, pruned_loss=0.08142, over 1414215.97 frames.], batch size: 16, lr: 7.99e-04 2022-05-27 00:53:25,060 INFO [train.py:842] (0/4) Epoch 6, batch 7950, loss[loss=0.2551, simple_loss=0.3468, pruned_loss=0.08169, over 7325.00 frames.], tot_loss[loss=0.2347, simple_loss=0.3082, pruned_loss=0.08061, over 1419447.47 frames.], batch size: 25, lr: 7.99e-04 2022-05-27 00:54:03,684 INFO [train.py:842] (0/4) Epoch 6, batch 8000, loss[loss=0.2345, simple_loss=0.3173, 
pruned_loss=0.07586, over 7321.00 frames.], tot_loss[loss=0.2341, simple_loss=0.3079, pruned_loss=0.08011, over 1420974.83 frames.], batch size: 21, lr: 7.99e-04 2022-05-27 00:54:42,717 INFO [train.py:842] (0/4) Epoch 6, batch 8050, loss[loss=0.2477, simple_loss=0.3079, pruned_loss=0.09372, over 5152.00 frames.], tot_loss[loss=0.2361, simple_loss=0.3091, pruned_loss=0.08155, over 1417951.62 frames.], batch size: 52, lr: 7.98e-04 2022-05-27 00:55:21,170 INFO [train.py:842] (0/4) Epoch 6, batch 8100, loss[loss=0.2069, simple_loss=0.2891, pruned_loss=0.06238, over 7331.00 frames.], tot_loss[loss=0.2363, simple_loss=0.3098, pruned_loss=0.0814, over 1422016.00 frames.], batch size: 20, lr: 7.98e-04 2022-05-27 00:55:59,914 INFO [train.py:842] (0/4) Epoch 6, batch 8150, loss[loss=0.2713, simple_loss=0.3511, pruned_loss=0.09576, over 7153.00 frames.], tot_loss[loss=0.2367, simple_loss=0.3105, pruned_loss=0.08144, over 1420018.69 frames.], batch size: 26, lr: 7.98e-04 2022-05-27 00:56:38,268 INFO [train.py:842] (0/4) Epoch 6, batch 8200, loss[loss=0.2859, simple_loss=0.354, pruned_loss=0.1089, over 7349.00 frames.], tot_loss[loss=0.2345, simple_loss=0.3091, pruned_loss=0.07993, over 1421058.48 frames.], batch size: 22, lr: 7.97e-04 2022-05-27 00:57:17,215 INFO [train.py:842] (0/4) Epoch 6, batch 8250, loss[loss=0.2266, simple_loss=0.3046, pruned_loss=0.0743, over 6454.00 frames.], tot_loss[loss=0.2333, simple_loss=0.3078, pruned_loss=0.07941, over 1421216.10 frames.], batch size: 38, lr: 7.97e-04 2022-05-27 00:57:55,644 INFO [train.py:842] (0/4) Epoch 6, batch 8300, loss[loss=0.2862, simple_loss=0.3519, pruned_loss=0.1103, over 7398.00 frames.], tot_loss[loss=0.2332, simple_loss=0.3079, pruned_loss=0.07925, over 1424911.09 frames.], batch size: 23, lr: 7.97e-04 2022-05-27 00:58:34,562 INFO [train.py:842] (0/4) Epoch 6, batch 8350, loss[loss=0.2376, simple_loss=0.3187, pruned_loss=0.07827, over 7335.00 frames.], tot_loss[loss=0.2335, simple_loss=0.3082, pruned_loss=0.07942, over 1426404.11 frames.], batch size: 22, lr: 7.96e-04 2022-05-27 00:59:13,014 INFO [train.py:842] (0/4) Epoch 6, batch 8400, loss[loss=0.206, simple_loss=0.2712, pruned_loss=0.07044, over 7277.00 frames.], tot_loss[loss=0.2358, simple_loss=0.3095, pruned_loss=0.08107, over 1423648.04 frames.], batch size: 17, lr: 7.96e-04 2022-05-27 00:59:52,130 INFO [train.py:842] (0/4) Epoch 6, batch 8450, loss[loss=0.2079, simple_loss=0.2813, pruned_loss=0.06721, over 7270.00 frames.], tot_loss[loss=0.2362, simple_loss=0.3096, pruned_loss=0.08136, over 1424678.90 frames.], batch size: 17, lr: 7.95e-04 2022-05-27 01:00:31,031 INFO [train.py:842] (0/4) Epoch 6, batch 8500, loss[loss=0.2191, simple_loss=0.31, pruned_loss=0.06409, over 7106.00 frames.], tot_loss[loss=0.2323, simple_loss=0.3061, pruned_loss=0.07918, over 1423975.65 frames.], batch size: 21, lr: 7.95e-04 2022-05-27 01:01:10,178 INFO [train.py:842] (0/4) Epoch 6, batch 8550, loss[loss=0.2308, simple_loss=0.3121, pruned_loss=0.0747, over 7391.00 frames.], tot_loss[loss=0.2332, simple_loss=0.3067, pruned_loss=0.0798, over 1427168.53 frames.], batch size: 23, lr: 7.95e-04 2022-05-27 01:01:48,814 INFO [train.py:842] (0/4) Epoch 6, batch 8600, loss[loss=0.2752, simple_loss=0.3452, pruned_loss=0.1026, over 7221.00 frames.], tot_loss[loss=0.2358, simple_loss=0.3091, pruned_loss=0.08121, over 1427728.39 frames.], batch size: 21, lr: 7.94e-04 2022-05-27 01:02:27,485 INFO [train.py:842] (0/4) Epoch 6, batch 8650, loss[loss=0.2348, simple_loss=0.3169, pruned_loss=0.07636, over 7321.00 
frames.], tot_loss[loss=0.2358, simple_loss=0.3093, pruned_loss=0.08112, over 1425164.65 frames.], batch size: 20, lr: 7.94e-04 2022-05-27 01:03:06,052 INFO [train.py:842] (0/4) Epoch 6, batch 8700, loss[loss=0.2297, simple_loss=0.2945, pruned_loss=0.0825, over 7123.00 frames.], tot_loss[loss=0.237, simple_loss=0.3102, pruned_loss=0.08191, over 1419366.80 frames.], batch size: 17, lr: 7.94e-04 2022-05-27 01:03:44,889 INFO [train.py:842] (0/4) Epoch 6, batch 8750, loss[loss=0.1893, simple_loss=0.268, pruned_loss=0.05529, over 7148.00 frames.], tot_loss[loss=0.2379, simple_loss=0.3112, pruned_loss=0.08225, over 1417886.34 frames.], batch size: 17, lr: 7.93e-04 2022-05-27 01:04:23,730 INFO [train.py:842] (0/4) Epoch 6, batch 8800, loss[loss=0.1932, simple_loss=0.2674, pruned_loss=0.05952, over 7130.00 frames.], tot_loss[loss=0.2376, simple_loss=0.3112, pruned_loss=0.08198, over 1416458.57 frames.], batch size: 17, lr: 7.93e-04 2022-05-27 01:05:02,660 INFO [train.py:842] (0/4) Epoch 6, batch 8850, loss[loss=0.2334, simple_loss=0.289, pruned_loss=0.08891, over 7272.00 frames.], tot_loss[loss=0.2373, simple_loss=0.3106, pruned_loss=0.08201, over 1417099.57 frames.], batch size: 17, lr: 7.93e-04 2022-05-27 01:05:41,175 INFO [train.py:842] (0/4) Epoch 6, batch 8900, loss[loss=0.2153, simple_loss=0.3066, pruned_loss=0.06203, over 7188.00 frames.], tot_loss[loss=0.2343, simple_loss=0.3078, pruned_loss=0.08045, over 1411924.03 frames.], batch size: 26, lr: 7.92e-04 2022-05-27 01:06:20,553 INFO [train.py:842] (0/4) Epoch 6, batch 8950, loss[loss=0.2153, simple_loss=0.2851, pruned_loss=0.07271, over 7353.00 frames.], tot_loss[loss=0.2348, simple_loss=0.3073, pruned_loss=0.08117, over 1405687.29 frames.], batch size: 19, lr: 7.92e-04 2022-05-27 01:06:58,907 INFO [train.py:842] (0/4) Epoch 6, batch 9000, loss[loss=0.2043, simple_loss=0.28, pruned_loss=0.06428, over 7281.00 frames.], tot_loss[loss=0.2354, simple_loss=0.3077, pruned_loss=0.08158, over 1398390.84 frames.], batch size: 17, lr: 7.91e-04 2022-05-27 01:06:58,908 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 01:07:08,313 INFO [train.py:871] (0/4) Epoch 6, validation: loss=0.186, simple_loss=0.2866, pruned_loss=0.04271, over 868885.00 frames. 
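The validation entries in this epoch (batches 6000 and 9000 above) report their losses over the same 868885.00 frames, i.e. over the full dev set, while each per-batch training loss covers only a few thousand frames and tot_loss covers roughly 1.4 M frames of recent training data. One plausible way to produce such figures is to accumulate loss * frames and frames separately and divide at reporting time; the sketch below shows that bookkeeping under that assumption (the class and method names are illustrative, not the ones in train.py).

# Hedged sketch: frame-weighted loss averaging, assumed to be how the
# 'over N frames' figures in this log are produced.
class LossTracker:
    def __init__(self) -> None:
        self.loss_sum = 0.0   # sum of per-frame loss * number of frames
        self.frames = 0.0     # total number of frames accumulated

    def update(self, loss: float, num_frames: float) -> None:
        """Accumulate one batch; 'loss' is assumed to be normalized per frame."""
        self.loss_sum += loss * num_frames
        self.frames += num_frames

    def average(self) -> float:
        return self.loss_sum / max(self.frames, 1.0)

# Usage: averaging two hypothetical dev batches.
val = LossTracker()
val.update(0.19, 4000.0)
val.update(0.18, 3500.0)
print(f"validation: loss={val.average():.4f}, over {val.frames:.2f} frames")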
2022-05-27 01:07:46,567 INFO [train.py:842] (0/4) Epoch 6, batch 9050, loss[loss=0.1415, simple_loss=0.226, pruned_loss=0.02849, over 7268.00 frames.], tot_loss[loss=0.2387, simple_loss=0.3101, pruned_loss=0.08365, over 1365301.75 frames.], batch size: 17, lr: 7.91e-04 2022-05-27 01:08:24,102 INFO [train.py:842] (0/4) Epoch 6, batch 9100, loss[loss=0.2205, simple_loss=0.2977, pruned_loss=0.0716, over 7219.00 frames.], tot_loss[loss=0.2399, simple_loss=0.3115, pruned_loss=0.08418, over 1338910.16 frames.], batch size: 21, lr: 7.91e-04 2022-05-27 01:09:01,927 INFO [train.py:842] (0/4) Epoch 6, batch 9150, loss[loss=0.287, simple_loss=0.3385, pruned_loss=0.1177, over 5367.00 frames.], tot_loss[loss=0.2481, simple_loss=0.3178, pruned_loss=0.08921, over 1291863.52 frames.], batch size: 52, lr: 7.90e-04 2022-05-27 01:09:34,298 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-6.pt 2022-05-27 01:09:54,896 INFO [train.py:842] (0/4) Epoch 7, batch 0, loss[loss=0.2123, simple_loss=0.2908, pruned_loss=0.06691, over 7407.00 frames.], tot_loss[loss=0.2123, simple_loss=0.2908, pruned_loss=0.06691, over 7407.00 frames.], batch size: 18, lr: 7.58e-04 2022-05-27 01:10:33,935 INFO [train.py:842] (0/4) Epoch 7, batch 50, loss[loss=0.1775, simple_loss=0.2586, pruned_loss=0.04821, over 7390.00 frames.], tot_loss[loss=0.2304, simple_loss=0.3058, pruned_loss=0.07749, over 322457.12 frames.], batch size: 18, lr: 7.58e-04 2022-05-27 01:11:12,619 INFO [train.py:842] (0/4) Epoch 7, batch 100, loss[loss=0.2237, simple_loss=0.3068, pruned_loss=0.07031, over 7171.00 frames.], tot_loss[loss=0.2304, simple_loss=0.305, pruned_loss=0.07792, over 566701.33 frames.], batch size: 19, lr: 7.57e-04 2022-05-27 01:11:51,438 INFO [train.py:842] (0/4) Epoch 7, batch 150, loss[loss=0.2512, simple_loss=0.307, pruned_loss=0.09772, over 7158.00 frames.], tot_loss[loss=0.2325, simple_loss=0.3062, pruned_loss=0.07946, over 756536.77 frames.], batch size: 19, lr: 7.57e-04 2022-05-27 01:12:30,109 INFO [train.py:842] (0/4) Epoch 7, batch 200, loss[loss=0.282, simple_loss=0.3567, pruned_loss=0.1036, over 7386.00 frames.], tot_loss[loss=0.2328, simple_loss=0.3069, pruned_loss=0.0794, over 905695.41 frames.], batch size: 23, lr: 7.57e-04 2022-05-27 01:13:08,988 INFO [train.py:842] (0/4) Epoch 7, batch 250, loss[loss=0.2791, simple_loss=0.3584, pruned_loss=0.09994, over 7153.00 frames.], tot_loss[loss=0.2335, simple_loss=0.3077, pruned_loss=0.07963, over 1019353.95 frames.], batch size: 20, lr: 7.56e-04 2022-05-27 01:13:47,469 INFO [train.py:842] (0/4) Epoch 7, batch 300, loss[loss=0.1731, simple_loss=0.2571, pruned_loss=0.04459, over 7232.00 frames.], tot_loss[loss=0.2315, simple_loss=0.3061, pruned_loss=0.07845, over 1107211.18 frames.], batch size: 16, lr: 7.56e-04 2022-05-27 01:14:26,535 INFO [train.py:842] (0/4) Epoch 7, batch 350, loss[loss=0.2634, simple_loss=0.3303, pruned_loss=0.0983, over 7123.00 frames.], tot_loss[loss=0.2313, simple_loss=0.3061, pruned_loss=0.07823, over 1178872.60 frames.], batch size: 21, lr: 7.56e-04 2022-05-27 01:15:04,811 INFO [train.py:842] (0/4) Epoch 7, batch 400, loss[loss=0.2222, simple_loss=0.3092, pruned_loss=0.06761, over 7170.00 frames.], tot_loss[loss=0.2309, simple_loss=0.3063, pruned_loss=0.07779, over 1231431.98 frames.], batch size: 18, lr: 7.55e-04 2022-05-27 01:15:43,926 INFO [train.py:842] (0/4) Epoch 7, batch 450, loss[loss=0.2286, simple_loss=0.3063, pruned_loss=0.07541, over 7365.00 frames.], tot_loss[loss=0.2313, simple_loss=0.3061, 
pruned_loss=0.07825, over 1276761.25 frames.], batch size: 19, lr: 7.55e-04 2022-05-27 01:16:22,226 INFO [train.py:842] (0/4) Epoch 7, batch 500, loss[loss=0.2265, simple_loss=0.3059, pruned_loss=0.07353, over 6478.00 frames.], tot_loss[loss=0.2313, simple_loss=0.3067, pruned_loss=0.07794, over 1305879.09 frames.], batch size: 38, lr: 7.55e-04 2022-05-27 01:17:01,067 INFO [train.py:842] (0/4) Epoch 7, batch 550, loss[loss=0.2023, simple_loss=0.2889, pruned_loss=0.05782, over 7117.00 frames.], tot_loss[loss=0.2309, simple_loss=0.3057, pruned_loss=0.07812, over 1330823.04 frames.], batch size: 21, lr: 7.54e-04 2022-05-27 01:17:39,723 INFO [train.py:842] (0/4) Epoch 7, batch 600, loss[loss=0.2387, simple_loss=0.3192, pruned_loss=0.07914, over 7021.00 frames.], tot_loss[loss=0.2325, simple_loss=0.3071, pruned_loss=0.07897, over 1348321.39 frames.], batch size: 28, lr: 7.54e-04 2022-05-27 01:18:18,869 INFO [train.py:842] (0/4) Epoch 7, batch 650, loss[loss=0.2911, simple_loss=0.3485, pruned_loss=0.1168, over 5285.00 frames.], tot_loss[loss=0.2327, simple_loss=0.3069, pruned_loss=0.07924, over 1364961.12 frames.], batch size: 52, lr: 7.54e-04 2022-05-27 01:18:57,438 INFO [train.py:842] (0/4) Epoch 7, batch 700, loss[loss=0.1974, simple_loss=0.2811, pruned_loss=0.05685, over 7168.00 frames.], tot_loss[loss=0.2309, simple_loss=0.3054, pruned_loss=0.07823, over 1379363.62 frames.], batch size: 18, lr: 7.53e-04 2022-05-27 01:19:36,354 INFO [train.py:842] (0/4) Epoch 7, batch 750, loss[loss=0.2414, simple_loss=0.3194, pruned_loss=0.08172, over 6754.00 frames.], tot_loss[loss=0.2309, simple_loss=0.3056, pruned_loss=0.07804, over 1391643.34 frames.], batch size: 31, lr: 7.53e-04 2022-05-27 01:20:15,031 INFO [train.py:842] (0/4) Epoch 7, batch 800, loss[loss=0.308, simple_loss=0.3556, pruned_loss=0.1303, over 7319.00 frames.], tot_loss[loss=0.2316, simple_loss=0.3058, pruned_loss=0.07874, over 1391948.80 frames.], batch size: 20, lr: 7.53e-04 2022-05-27 01:20:51,503 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-56000.pt 2022-05-27 01:20:56,566 INFO [train.py:842] (0/4) Epoch 7, batch 850, loss[loss=0.2041, simple_loss=0.2923, pruned_loss=0.05799, over 7287.00 frames.], tot_loss[loss=0.2317, simple_loss=0.3061, pruned_loss=0.07864, over 1399127.96 frames.], batch size: 24, lr: 7.52e-04 2022-05-27 01:21:35,043 INFO [train.py:842] (0/4) Epoch 7, batch 900, loss[loss=0.2227, simple_loss=0.3022, pruned_loss=0.07159, over 7377.00 frames.], tot_loss[loss=0.2314, simple_loss=0.306, pruned_loss=0.07839, over 1404772.61 frames.], batch size: 23, lr: 7.52e-04 2022-05-27 01:22:13,821 INFO [train.py:842] (0/4) Epoch 7, batch 950, loss[loss=0.2252, simple_loss=0.3134, pruned_loss=0.06852, over 7358.00 frames.], tot_loss[loss=0.2317, simple_loss=0.3066, pruned_loss=0.07835, over 1408626.35 frames.], batch size: 23, lr: 7.52e-04 2022-05-27 01:22:52,356 INFO [train.py:842] (0/4) Epoch 7, batch 1000, loss[loss=0.2084, simple_loss=0.294, pruned_loss=0.06143, over 7374.00 frames.], tot_loss[loss=0.2307, simple_loss=0.3054, pruned_loss=0.07804, over 1409060.69 frames.], batch size: 23, lr: 7.51e-04 2022-05-27 01:23:31,621 INFO [train.py:842] (0/4) Epoch 7, batch 1050, loss[loss=0.2241, simple_loss=0.301, pruned_loss=0.07358, over 7153.00 frames.], tot_loss[loss=0.2302, simple_loss=0.3054, pruned_loss=0.07757, over 1415603.05 frames.], batch size: 19, lr: 7.51e-04 2022-05-27 01:24:10,639 INFO [train.py:842] (0/4) Epoch 7, batch 1100, loss[loss=0.2056, 
simple_loss=0.3009, pruned_loss=0.05508, over 7290.00 frames.], tot_loss[loss=0.231, simple_loss=0.3062, pruned_loss=0.07793, over 1419117.72 frames.], batch size: 25, lr: 7.51e-04 2022-05-27 01:24:49,510 INFO [train.py:842] (0/4) Epoch 7, batch 1150, loss[loss=0.204, simple_loss=0.2698, pruned_loss=0.06912, over 7131.00 frames.], tot_loss[loss=0.2302, simple_loss=0.3055, pruned_loss=0.0774, over 1418222.11 frames.], batch size: 17, lr: 7.50e-04 2022-05-27 01:25:28,099 INFO [train.py:842] (0/4) Epoch 7, batch 1200, loss[loss=0.2267, simple_loss=0.2837, pruned_loss=0.08485, over 6806.00 frames.], tot_loss[loss=0.23, simple_loss=0.3056, pruned_loss=0.07722, over 1412839.29 frames.], batch size: 15, lr: 7.50e-04 2022-05-27 01:26:07,113 INFO [train.py:842] (0/4) Epoch 7, batch 1250, loss[loss=0.2115, simple_loss=0.2954, pruned_loss=0.06386, over 7233.00 frames.], tot_loss[loss=0.2312, simple_loss=0.3063, pruned_loss=0.07802, over 1413706.34 frames.], batch size: 20, lr: 7.50e-04 2022-05-27 01:26:45,632 INFO [train.py:842] (0/4) Epoch 7, batch 1300, loss[loss=0.2048, simple_loss=0.2762, pruned_loss=0.06663, over 7267.00 frames.], tot_loss[loss=0.2291, simple_loss=0.3048, pruned_loss=0.07671, over 1415693.28 frames.], batch size: 17, lr: 7.49e-04 2022-05-27 01:27:24,560 INFO [train.py:842] (0/4) Epoch 7, batch 1350, loss[loss=0.2175, simple_loss=0.3038, pruned_loss=0.06556, over 7427.00 frames.], tot_loss[loss=0.2286, simple_loss=0.3042, pruned_loss=0.07643, over 1420820.79 frames.], batch size: 21, lr: 7.49e-04 2022-05-27 01:28:02,937 INFO [train.py:842] (0/4) Epoch 7, batch 1400, loss[loss=0.2061, simple_loss=0.2834, pruned_loss=0.06435, over 7157.00 frames.], tot_loss[loss=0.2304, simple_loss=0.3056, pruned_loss=0.07759, over 1418789.74 frames.], batch size: 19, lr: 7.49e-04 2022-05-27 01:28:41,868 INFO [train.py:842] (0/4) Epoch 7, batch 1450, loss[loss=0.2263, simple_loss=0.3074, pruned_loss=0.07263, over 6794.00 frames.], tot_loss[loss=0.2307, simple_loss=0.3061, pruned_loss=0.07764, over 1418976.76 frames.], batch size: 31, lr: 7.48e-04 2022-05-27 01:29:20,368 INFO [train.py:842] (0/4) Epoch 7, batch 1500, loss[loss=0.2306, simple_loss=0.3098, pruned_loss=0.07571, over 7410.00 frames.], tot_loss[loss=0.2306, simple_loss=0.3059, pruned_loss=0.07765, over 1423401.51 frames.], batch size: 21, lr: 7.48e-04 2022-05-27 01:29:59,203 INFO [train.py:842] (0/4) Epoch 7, batch 1550, loss[loss=0.2416, simple_loss=0.3156, pruned_loss=0.08381, over 7177.00 frames.], tot_loss[loss=0.2326, simple_loss=0.3071, pruned_loss=0.07911, over 1418108.62 frames.], batch size: 26, lr: 7.48e-04 2022-05-27 01:30:37,808 INFO [train.py:842] (0/4) Epoch 7, batch 1600, loss[loss=0.2418, simple_loss=0.3213, pruned_loss=0.0812, over 7121.00 frames.], tot_loss[loss=0.2314, simple_loss=0.3059, pruned_loss=0.07841, over 1424307.49 frames.], batch size: 21, lr: 7.47e-04 2022-05-27 01:31:16,738 INFO [train.py:842] (0/4) Epoch 7, batch 1650, loss[loss=0.2621, simple_loss=0.3126, pruned_loss=0.1058, over 7065.00 frames.], tot_loss[loss=0.2323, simple_loss=0.3063, pruned_loss=0.0791, over 1418039.80 frames.], batch size: 18, lr: 7.47e-04 2022-05-27 01:31:55,643 INFO [train.py:842] (0/4) Epoch 7, batch 1700, loss[loss=0.2512, simple_loss=0.3285, pruned_loss=0.08691, over 7208.00 frames.], tot_loss[loss=0.2313, simple_loss=0.3053, pruned_loss=0.07863, over 1416874.95 frames.], batch size: 22, lr: 7.47e-04 2022-05-27 01:32:34,541 INFO [train.py:842] (0/4) Epoch 7, batch 1750, loss[loss=0.2008, simple_loss=0.2935, pruned_loss=0.0541, 
over 7331.00 frames.], tot_loss[loss=0.2293, simple_loss=0.3038, pruned_loss=0.07742, over 1412450.06 frames.], batch size: 22, lr: 7.46e-04 2022-05-27 01:33:12,916 INFO [train.py:842] (0/4) Epoch 7, batch 1800, loss[loss=0.3061, simple_loss=0.3618, pruned_loss=0.1252, over 7292.00 frames.], tot_loss[loss=0.2311, simple_loss=0.3064, pruned_loss=0.07789, over 1415148.75 frames.], batch size: 25, lr: 7.46e-04 2022-05-27 01:33:51,777 INFO [train.py:842] (0/4) Epoch 7, batch 1850, loss[loss=0.1743, simple_loss=0.2515, pruned_loss=0.04857, over 6998.00 frames.], tot_loss[loss=0.2295, simple_loss=0.3052, pruned_loss=0.0769, over 1417363.61 frames.], batch size: 16, lr: 7.46e-04 2022-05-27 01:34:30,336 INFO [train.py:842] (0/4) Epoch 7, batch 1900, loss[loss=0.2167, simple_loss=0.2946, pruned_loss=0.06939, over 7079.00 frames.], tot_loss[loss=0.2296, simple_loss=0.305, pruned_loss=0.07705, over 1413911.33 frames.], batch size: 18, lr: 7.45e-04 2022-05-27 01:35:09,633 INFO [train.py:842] (0/4) Epoch 7, batch 1950, loss[loss=0.2201, simple_loss=0.2809, pruned_loss=0.07961, over 7279.00 frames.], tot_loss[loss=0.2292, simple_loss=0.304, pruned_loss=0.07721, over 1417269.04 frames.], batch size: 18, lr: 7.45e-04 2022-05-27 01:35:48,201 INFO [train.py:842] (0/4) Epoch 7, batch 2000, loss[loss=0.274, simple_loss=0.3339, pruned_loss=0.107, over 7290.00 frames.], tot_loss[loss=0.2291, simple_loss=0.3039, pruned_loss=0.07721, over 1418458.58 frames.], batch size: 25, lr: 7.45e-04 2022-05-27 01:36:27,077 INFO [train.py:842] (0/4) Epoch 7, batch 2050, loss[loss=0.2264, simple_loss=0.2993, pruned_loss=0.0767, over 7283.00 frames.], tot_loss[loss=0.2309, simple_loss=0.3052, pruned_loss=0.07833, over 1415657.82 frames.], batch size: 24, lr: 7.44e-04 2022-05-27 01:37:05,447 INFO [train.py:842] (0/4) Epoch 7, batch 2100, loss[loss=0.1791, simple_loss=0.2624, pruned_loss=0.04793, over 7418.00 frames.], tot_loss[loss=0.2305, simple_loss=0.3047, pruned_loss=0.07816, over 1419492.46 frames.], batch size: 17, lr: 7.44e-04 2022-05-27 01:37:44,433 INFO [train.py:842] (0/4) Epoch 7, batch 2150, loss[loss=0.2588, simple_loss=0.3293, pruned_loss=0.09416, over 7412.00 frames.], tot_loss[loss=0.2306, simple_loss=0.3051, pruned_loss=0.07811, over 1424893.97 frames.], batch size: 21, lr: 7.44e-04 2022-05-27 01:38:22,878 INFO [train.py:842] (0/4) Epoch 7, batch 2200, loss[loss=0.1765, simple_loss=0.25, pruned_loss=0.0515, over 7129.00 frames.], tot_loss[loss=0.2311, simple_loss=0.3058, pruned_loss=0.07822, over 1422311.97 frames.], batch size: 17, lr: 7.43e-04 2022-05-27 01:39:01,897 INFO [train.py:842] (0/4) Epoch 7, batch 2250, loss[loss=0.2114, simple_loss=0.2872, pruned_loss=0.06774, over 7270.00 frames.], tot_loss[loss=0.2328, simple_loss=0.3071, pruned_loss=0.07926, over 1416675.29 frames.], batch size: 17, lr: 7.43e-04 2022-05-27 01:39:40,359 INFO [train.py:842] (0/4) Epoch 7, batch 2300, loss[loss=0.2254, simple_loss=0.3016, pruned_loss=0.07456, over 7198.00 frames.], tot_loss[loss=0.2325, simple_loss=0.3068, pruned_loss=0.07912, over 1420232.53 frames.], batch size: 23, lr: 7.43e-04 2022-05-27 01:40:19,212 INFO [train.py:842] (0/4) Epoch 7, batch 2350, loss[loss=0.2185, simple_loss=0.3057, pruned_loss=0.06563, over 7423.00 frames.], tot_loss[loss=0.231, simple_loss=0.3057, pruned_loss=0.07817, over 1419015.28 frames.], batch size: 21, lr: 7.42e-04 2022-05-27 01:40:57,829 INFO [train.py:842] (0/4) Epoch 7, batch 2400, loss[loss=0.2104, simple_loss=0.2805, pruned_loss=0.07013, over 7265.00 frames.], 
tot_loss[loss=0.2299, simple_loss=0.3051, pruned_loss=0.07731, over 1422336.84 frames.], batch size: 18, lr: 7.42e-04 2022-05-27 01:41:36,505 INFO [train.py:842] (0/4) Epoch 7, batch 2450, loss[loss=0.2083, simple_loss=0.2958, pruned_loss=0.06041, over 7431.00 frames.], tot_loss[loss=0.2305, simple_loss=0.3061, pruned_loss=0.0775, over 1417462.29 frames.], batch size: 21, lr: 7.42e-04 2022-05-27 01:42:14,974 INFO [train.py:842] (0/4) Epoch 7, batch 2500, loss[loss=0.2362, simple_loss=0.3223, pruned_loss=0.07509, over 7335.00 frames.], tot_loss[loss=0.2315, simple_loss=0.3071, pruned_loss=0.07793, over 1417277.26 frames.], batch size: 21, lr: 7.42e-04 2022-05-27 01:42:53,974 INFO [train.py:842] (0/4) Epoch 7, batch 2550, loss[loss=0.2074, simple_loss=0.295, pruned_loss=0.05991, over 7434.00 frames.], tot_loss[loss=0.2302, simple_loss=0.3061, pruned_loss=0.07714, over 1423486.98 frames.], batch size: 20, lr: 7.41e-04 2022-05-27 01:43:32,349 INFO [train.py:842] (0/4) Epoch 7, batch 2600, loss[loss=0.1938, simple_loss=0.2826, pruned_loss=0.05255, over 7156.00 frames.], tot_loss[loss=0.2305, simple_loss=0.306, pruned_loss=0.07751, over 1416871.70 frames.], batch size: 18, lr: 7.41e-04 2022-05-27 01:44:11,312 INFO [train.py:842] (0/4) Epoch 7, batch 2650, loss[loss=0.235, simple_loss=0.3001, pruned_loss=0.08496, over 7173.00 frames.], tot_loss[loss=0.2297, simple_loss=0.3051, pruned_loss=0.07718, over 1416161.24 frames.], batch size: 18, lr: 7.41e-04 2022-05-27 01:44:49,755 INFO [train.py:842] (0/4) Epoch 7, batch 2700, loss[loss=0.1949, simple_loss=0.275, pruned_loss=0.05744, over 6775.00 frames.], tot_loss[loss=0.2299, simple_loss=0.3056, pruned_loss=0.07709, over 1417951.08 frames.], batch size: 15, lr: 7.40e-04 2022-05-27 01:45:28,482 INFO [train.py:842] (0/4) Epoch 7, batch 2750, loss[loss=0.2197, simple_loss=0.2863, pruned_loss=0.07655, over 7411.00 frames.], tot_loss[loss=0.2309, simple_loss=0.3065, pruned_loss=0.07761, over 1418870.81 frames.], batch size: 18, lr: 7.40e-04 2022-05-27 01:46:06,983 INFO [train.py:842] (0/4) Epoch 7, batch 2800, loss[loss=0.1935, simple_loss=0.2688, pruned_loss=0.0591, over 6985.00 frames.], tot_loss[loss=0.232, simple_loss=0.3069, pruned_loss=0.07856, over 1416688.23 frames.], batch size: 16, lr: 7.40e-04 2022-05-27 01:46:46,090 INFO [train.py:842] (0/4) Epoch 7, batch 2850, loss[loss=0.2061, simple_loss=0.2916, pruned_loss=0.06024, over 7322.00 frames.], tot_loss[loss=0.2307, simple_loss=0.3053, pruned_loss=0.07805, over 1421533.31 frames.], batch size: 21, lr: 7.39e-04 2022-05-27 01:47:24,778 INFO [train.py:842] (0/4) Epoch 7, batch 2900, loss[loss=0.2661, simple_loss=0.3258, pruned_loss=0.1032, over 4903.00 frames.], tot_loss[loss=0.2307, simple_loss=0.3054, pruned_loss=0.07795, over 1424151.37 frames.], batch size: 52, lr: 7.39e-04 2022-05-27 01:48:03,869 INFO [train.py:842] (0/4) Epoch 7, batch 2950, loss[loss=0.2351, simple_loss=0.3146, pruned_loss=0.07777, over 7284.00 frames.], tot_loss[loss=0.2321, simple_loss=0.3067, pruned_loss=0.07874, over 1424530.76 frames.], batch size: 25, lr: 7.39e-04 2022-05-27 01:48:42,426 INFO [train.py:842] (0/4) Epoch 7, batch 3000, loss[loss=0.2372, simple_loss=0.3096, pruned_loss=0.08241, over 7156.00 frames.], tot_loss[loss=0.233, simple_loss=0.3073, pruned_loss=0.07931, over 1426857.62 frames.], batch size: 26, lr: 7.38e-04 2022-05-27 01:48:42,427 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 01:48:51,662 INFO [train.py:871] (0/4) Epoch 7, validation: loss=0.1805, simple_loss=0.2812, 
pruned_loss=0.03987, over 868885.00 frames. 2022-05-27 01:49:30,618 INFO [train.py:842] (0/4) Epoch 7, batch 3050, loss[loss=0.2263, simple_loss=0.3057, pruned_loss=0.07351, over 7155.00 frames.], tot_loss[loss=0.2334, simple_loss=0.3073, pruned_loss=0.07977, over 1426851.52 frames.], batch size: 26, lr: 7.38e-04 2022-05-27 01:50:09,079 INFO [train.py:842] (0/4) Epoch 7, batch 3100, loss[loss=0.2938, simple_loss=0.3759, pruned_loss=0.1058, over 7140.00 frames.], tot_loss[loss=0.2335, simple_loss=0.3075, pruned_loss=0.07968, over 1423063.36 frames.], batch size: 26, lr: 7.38e-04 2022-05-27 01:50:47,929 INFO [train.py:842] (0/4) Epoch 7, batch 3150, loss[loss=0.26, simple_loss=0.3283, pruned_loss=0.09589, over 7134.00 frames.], tot_loss[loss=0.233, simple_loss=0.3077, pruned_loss=0.07919, over 1426400.13 frames.], batch size: 28, lr: 7.37e-04 2022-05-27 01:51:26,322 INFO [train.py:842] (0/4) Epoch 7, batch 3200, loss[loss=0.1973, simple_loss=0.2736, pruned_loss=0.06053, over 7338.00 frames.], tot_loss[loss=0.2331, simple_loss=0.3078, pruned_loss=0.07922, over 1422434.43 frames.], batch size: 22, lr: 7.37e-04 2022-05-27 01:52:05,355 INFO [train.py:842] (0/4) Epoch 7, batch 3250, loss[loss=0.2843, simple_loss=0.355, pruned_loss=0.1069, over 7061.00 frames.], tot_loss[loss=0.2329, simple_loss=0.3074, pruned_loss=0.07922, over 1421083.60 frames.], batch size: 28, lr: 7.37e-04 2022-05-27 01:52:43,675 INFO [train.py:842] (0/4) Epoch 7, batch 3300, loss[loss=0.3514, simple_loss=0.3916, pruned_loss=0.1556, over 7145.00 frames.], tot_loss[loss=0.2349, simple_loss=0.3094, pruned_loss=0.08018, over 1416701.56 frames.], batch size: 20, lr: 7.36e-04 2022-05-27 01:53:22,484 INFO [train.py:842] (0/4) Epoch 7, batch 3350, loss[loss=0.2224, simple_loss=0.309, pruned_loss=0.06788, over 7151.00 frames.], tot_loss[loss=0.2326, simple_loss=0.3078, pruned_loss=0.07869, over 1418763.23 frames.], batch size: 19, lr: 7.36e-04 2022-05-27 01:54:01,074 INFO [train.py:842] (0/4) Epoch 7, batch 3400, loss[loss=0.2497, simple_loss=0.3145, pruned_loss=0.09243, over 7111.00 frames.], tot_loss[loss=0.2327, simple_loss=0.308, pruned_loss=0.07866, over 1421725.36 frames.], batch size: 21, lr: 7.36e-04 2022-05-27 01:54:39,837 INFO [train.py:842] (0/4) Epoch 7, batch 3450, loss[loss=0.2519, simple_loss=0.3223, pruned_loss=0.09075, over 7287.00 frames.], tot_loss[loss=0.2324, simple_loss=0.3076, pruned_loss=0.07857, over 1419983.55 frames.], batch size: 24, lr: 7.36e-04 2022-05-27 01:55:18,276 INFO [train.py:842] (0/4) Epoch 7, batch 3500, loss[loss=0.1927, simple_loss=0.2927, pruned_loss=0.04635, over 7226.00 frames.], tot_loss[loss=0.2328, simple_loss=0.3081, pruned_loss=0.07874, over 1421555.05 frames.], batch size: 21, lr: 7.35e-04 2022-05-27 01:55:57,329 INFO [train.py:842] (0/4) Epoch 7, batch 3550, loss[loss=0.2131, simple_loss=0.2994, pruned_loss=0.06344, over 7385.00 frames.], tot_loss[loss=0.2314, simple_loss=0.3067, pruned_loss=0.0781, over 1423350.39 frames.], batch size: 23, lr: 7.35e-04 2022-05-27 01:56:36,205 INFO [train.py:842] (0/4) Epoch 7, batch 3600, loss[loss=0.2127, simple_loss=0.2927, pruned_loss=0.06639, over 7213.00 frames.], tot_loss[loss=0.2318, simple_loss=0.3069, pruned_loss=0.07841, over 1424745.42 frames.], batch size: 21, lr: 7.35e-04 2022-05-27 01:57:15,151 INFO [train.py:842] (0/4) Epoch 7, batch 3650, loss[loss=0.2428, simple_loss=0.3162, pruned_loss=0.08465, over 7034.00 frames.], tot_loss[loss=0.2317, simple_loss=0.3066, pruned_loss=0.07841, over 1421850.68 frames.], batch size: 28, lr: 
7.34e-04 2022-05-27 01:57:53,765 INFO [train.py:842] (0/4) Epoch 7, batch 3700, loss[loss=0.2208, simple_loss=0.2996, pruned_loss=0.07101, over 7432.00 frames.], tot_loss[loss=0.2289, simple_loss=0.3042, pruned_loss=0.07684, over 1423271.28 frames.], batch size: 20, lr: 7.34e-04 2022-05-27 01:58:32,562 INFO [train.py:842] (0/4) Epoch 7, batch 3750, loss[loss=0.366, simple_loss=0.4175, pruned_loss=0.1573, over 5074.00 frames.], tot_loss[loss=0.2295, simple_loss=0.3046, pruned_loss=0.07715, over 1423416.13 frames.], batch size: 52, lr: 7.34e-04 2022-05-27 01:59:10,980 INFO [train.py:842] (0/4) Epoch 7, batch 3800, loss[loss=0.1833, simple_loss=0.2683, pruned_loss=0.04921, over 7361.00 frames.], tot_loss[loss=0.2308, simple_loss=0.306, pruned_loss=0.07778, over 1420876.33 frames.], batch size: 19, lr: 7.33e-04 2022-05-27 01:59:50,099 INFO [train.py:842] (0/4) Epoch 7, batch 3850, loss[loss=0.1659, simple_loss=0.2543, pruned_loss=0.03875, over 7129.00 frames.], tot_loss[loss=0.229, simple_loss=0.3046, pruned_loss=0.07673, over 1423822.32 frames.], batch size: 17, lr: 7.33e-04 2022-05-27 02:00:28,926 INFO [train.py:842] (0/4) Epoch 7, batch 3900, loss[loss=0.2328, simple_loss=0.3065, pruned_loss=0.07959, over 7425.00 frames.], tot_loss[loss=0.2296, simple_loss=0.3049, pruned_loss=0.07709, over 1424435.89 frames.], batch size: 20, lr: 7.33e-04 2022-05-27 02:01:07,923 INFO [train.py:842] (0/4) Epoch 7, batch 3950, loss[loss=0.22, simple_loss=0.2883, pruned_loss=0.07589, over 7268.00 frames.], tot_loss[loss=0.2296, simple_loss=0.3045, pruned_loss=0.07742, over 1423908.23 frames.], batch size: 18, lr: 7.32e-04 2022-05-27 02:01:46,435 INFO [train.py:842] (0/4) Epoch 7, batch 4000, loss[loss=0.2428, simple_loss=0.3269, pruned_loss=0.0794, over 7336.00 frames.], tot_loss[loss=0.2308, simple_loss=0.3064, pruned_loss=0.07765, over 1430603.75 frames.], batch size: 22, lr: 7.32e-04 2022-05-27 02:02:25,553 INFO [train.py:842] (0/4) Epoch 7, batch 4050, loss[loss=0.2653, simple_loss=0.3338, pruned_loss=0.0984, over 7339.00 frames.], tot_loss[loss=0.2309, simple_loss=0.3062, pruned_loss=0.07781, over 1432624.44 frames.], batch size: 22, lr: 7.32e-04 2022-05-27 02:03:04,004 INFO [train.py:842] (0/4) Epoch 7, batch 4100, loss[loss=0.2286, simple_loss=0.307, pruned_loss=0.07507, over 6788.00 frames.], tot_loss[loss=0.2297, simple_loss=0.3051, pruned_loss=0.07713, over 1427727.51 frames.], batch size: 31, lr: 7.32e-04 2022-05-27 02:03:42,702 INFO [train.py:842] (0/4) Epoch 7, batch 4150, loss[loss=0.2126, simple_loss=0.2847, pruned_loss=0.07021, over 7249.00 frames.], tot_loss[loss=0.229, simple_loss=0.3045, pruned_loss=0.07682, over 1427691.94 frames.], batch size: 19, lr: 7.31e-04 2022-05-27 02:04:21,258 INFO [train.py:842] (0/4) Epoch 7, batch 4200, loss[loss=0.2459, simple_loss=0.3134, pruned_loss=0.08923, over 6540.00 frames.], tot_loss[loss=0.2296, simple_loss=0.3044, pruned_loss=0.07736, over 1429513.88 frames.], batch size: 38, lr: 7.31e-04 2022-05-27 02:05:00,146 INFO [train.py:842] (0/4) Epoch 7, batch 4250, loss[loss=0.331, simple_loss=0.3889, pruned_loss=0.1366, over 7109.00 frames.], tot_loss[loss=0.2288, simple_loss=0.3038, pruned_loss=0.07692, over 1431303.99 frames.], batch size: 21, lr: 7.31e-04 2022-05-27 02:05:38,658 INFO [train.py:842] (0/4) Epoch 7, batch 4300, loss[loss=0.2722, simple_loss=0.3462, pruned_loss=0.0991, over 6751.00 frames.], tot_loss[loss=0.2292, simple_loss=0.3038, pruned_loss=0.07735, over 1426852.79 frames.], batch size: 31, lr: 7.30e-04 2022-05-27 02:06:17,399 INFO 
[train.py:842] (0/4) Epoch 7, batch 4350, loss[loss=0.2143, simple_loss=0.3001, pruned_loss=0.06428, over 7438.00 frames.], tot_loss[loss=0.2284, simple_loss=0.3035, pruned_loss=0.07668, over 1421901.35 frames.], batch size: 20, lr: 7.30e-04 2022-05-27 02:06:55,811 INFO [train.py:842] (0/4) Epoch 7, batch 4400, loss[loss=0.1879, simple_loss=0.2636, pruned_loss=0.05614, over 7393.00 frames.], tot_loss[loss=0.2296, simple_loss=0.3041, pruned_loss=0.07749, over 1415406.11 frames.], batch size: 18, lr: 7.30e-04 2022-05-27 02:07:34,684 INFO [train.py:842] (0/4) Epoch 7, batch 4450, loss[loss=0.2241, simple_loss=0.3171, pruned_loss=0.06554, over 7159.00 frames.], tot_loss[loss=0.2302, simple_loss=0.3048, pruned_loss=0.0778, over 1417502.24 frames.], batch size: 20, lr: 7.29e-04 2022-05-27 02:08:13,172 INFO [train.py:842] (0/4) Epoch 7, batch 4500, loss[loss=0.2715, simple_loss=0.3198, pruned_loss=0.1116, over 7276.00 frames.], tot_loss[loss=0.2295, simple_loss=0.3044, pruned_loss=0.07726, over 1419805.59 frames.], batch size: 17, lr: 7.29e-04 2022-05-27 02:08:52,139 INFO [train.py:842] (0/4) Epoch 7, batch 4550, loss[loss=0.1642, simple_loss=0.2463, pruned_loss=0.04109, over 6806.00 frames.], tot_loss[loss=0.2304, simple_loss=0.3053, pruned_loss=0.0778, over 1418707.56 frames.], batch size: 15, lr: 7.29e-04 2022-05-27 02:09:30,753 INFO [train.py:842] (0/4) Epoch 7, batch 4600, loss[loss=0.2339, simple_loss=0.3095, pruned_loss=0.07912, over 7409.00 frames.], tot_loss[loss=0.2292, simple_loss=0.3038, pruned_loss=0.07723, over 1415971.39 frames.], batch size: 21, lr: 7.28e-04 2022-05-27 02:10:09,713 INFO [train.py:842] (0/4) Epoch 7, batch 4650, loss[loss=0.2209, simple_loss=0.2851, pruned_loss=0.07835, over 7172.00 frames.], tot_loss[loss=0.2288, simple_loss=0.3042, pruned_loss=0.07667, over 1420835.25 frames.], batch size: 18, lr: 7.28e-04 2022-05-27 02:10:48,231 INFO [train.py:842] (0/4) Epoch 7, batch 4700, loss[loss=0.2315, simple_loss=0.3153, pruned_loss=0.07386, over 7295.00 frames.], tot_loss[loss=0.2281, simple_loss=0.3038, pruned_loss=0.07623, over 1422531.15 frames.], batch size: 24, lr: 7.28e-04 2022-05-27 02:11:27,362 INFO [train.py:842] (0/4) Epoch 7, batch 4750, loss[loss=0.2216, simple_loss=0.3006, pruned_loss=0.07127, over 7351.00 frames.], tot_loss[loss=0.2273, simple_loss=0.3032, pruned_loss=0.07574, over 1422927.10 frames.], batch size: 19, lr: 7.28e-04 2022-05-27 02:12:05,924 INFO [train.py:842] (0/4) Epoch 7, batch 4800, loss[loss=0.2022, simple_loss=0.2698, pruned_loss=0.06729, over 7289.00 frames.], tot_loss[loss=0.2263, simple_loss=0.3019, pruned_loss=0.07531, over 1421366.56 frames.], batch size: 18, lr: 7.27e-04 2022-05-27 02:12:44,815 INFO [train.py:842] (0/4) Epoch 7, batch 4850, loss[loss=0.2221, simple_loss=0.3125, pruned_loss=0.06585, over 7415.00 frames.], tot_loss[loss=0.2248, simple_loss=0.3006, pruned_loss=0.07456, over 1419460.99 frames.], batch size: 21, lr: 7.27e-04 2022-05-27 02:13:23,583 INFO [train.py:842] (0/4) Epoch 7, batch 4900, loss[loss=0.2673, simple_loss=0.3311, pruned_loss=0.1018, over 7177.00 frames.], tot_loss[loss=0.2262, simple_loss=0.3021, pruned_loss=0.07518, over 1419225.70 frames.], batch size: 23, lr: 7.27e-04 2022-05-27 02:14:02,741 INFO [train.py:842] (0/4) Epoch 7, batch 4950, loss[loss=0.2622, simple_loss=0.3464, pruned_loss=0.08896, over 7314.00 frames.], tot_loss[loss=0.2253, simple_loss=0.3014, pruned_loss=0.07464, over 1422397.61 frames.], batch size: 21, lr: 7.26e-04 2022-05-27 02:14:41,330 INFO [train.py:842] (0/4) Epoch 7, batch 
5000, loss[loss=0.1918, simple_loss=0.2729, pruned_loss=0.05537, over 7197.00 frames.], tot_loss[loss=0.2263, simple_loss=0.3018, pruned_loss=0.07542, over 1422946.99 frames.], batch size: 23, lr: 7.26e-04 2022-05-27 02:15:20,261 INFO [train.py:842] (0/4) Epoch 7, batch 5050, loss[loss=0.257, simple_loss=0.3232, pruned_loss=0.09534, over 7317.00 frames.], tot_loss[loss=0.2252, simple_loss=0.3007, pruned_loss=0.07488, over 1413398.89 frames.], batch size: 25, lr: 7.26e-04 2022-05-27 02:15:58,648 INFO [train.py:842] (0/4) Epoch 7, batch 5100, loss[loss=0.2105, simple_loss=0.2904, pruned_loss=0.06533, over 7149.00 frames.], tot_loss[loss=0.2255, simple_loss=0.3012, pruned_loss=0.07488, over 1415571.01 frames.], batch size: 18, lr: 7.25e-04 2022-05-27 02:16:37,764 INFO [train.py:842] (0/4) Epoch 7, batch 5150, loss[loss=0.1822, simple_loss=0.2678, pruned_loss=0.04833, over 7411.00 frames.], tot_loss[loss=0.225, simple_loss=0.3007, pruned_loss=0.07469, over 1417218.88 frames.], batch size: 18, lr: 7.25e-04 2022-05-27 02:17:16,415 INFO [train.py:842] (0/4) Epoch 7, batch 5200, loss[loss=0.2961, simple_loss=0.3558, pruned_loss=0.1182, over 7330.00 frames.], tot_loss[loss=0.2274, simple_loss=0.3027, pruned_loss=0.07599, over 1419819.93 frames.], batch size: 20, lr: 7.25e-04 2022-05-27 02:17:55,315 INFO [train.py:842] (0/4) Epoch 7, batch 5250, loss[loss=0.2466, simple_loss=0.328, pruned_loss=0.08255, over 7314.00 frames.], tot_loss[loss=0.2269, simple_loss=0.3025, pruned_loss=0.07571, over 1415330.91 frames.], batch size: 20, lr: 7.25e-04 2022-05-27 02:18:33,920 INFO [train.py:842] (0/4) Epoch 7, batch 5300, loss[loss=0.1797, simple_loss=0.2486, pruned_loss=0.05536, over 6987.00 frames.], tot_loss[loss=0.2268, simple_loss=0.3024, pruned_loss=0.07563, over 1418880.12 frames.], batch size: 16, lr: 7.24e-04 2022-05-27 02:19:12,835 INFO [train.py:842] (0/4) Epoch 7, batch 5350, loss[loss=0.2053, simple_loss=0.2831, pruned_loss=0.06374, over 7237.00 frames.], tot_loss[loss=0.2272, simple_loss=0.3024, pruned_loss=0.07604, over 1421507.56 frames.], batch size: 20, lr: 7.24e-04 2022-05-27 02:19:51,743 INFO [train.py:842] (0/4) Epoch 7, batch 5400, loss[loss=0.2855, simple_loss=0.3489, pruned_loss=0.1111, over 5047.00 frames.], tot_loss[loss=0.2287, simple_loss=0.3037, pruned_loss=0.07681, over 1412879.49 frames.], batch size: 52, lr: 7.24e-04 2022-05-27 02:20:30,643 INFO [train.py:842] (0/4) Epoch 7, batch 5450, loss[loss=0.2647, simple_loss=0.346, pruned_loss=0.09173, over 7305.00 frames.], tot_loss[loss=0.2293, simple_loss=0.3043, pruned_loss=0.07718, over 1413917.93 frames.], batch size: 24, lr: 7.23e-04 2022-05-27 02:21:09,278 INFO [train.py:842] (0/4) Epoch 7, batch 5500, loss[loss=0.1925, simple_loss=0.2825, pruned_loss=0.0513, over 7154.00 frames.], tot_loss[loss=0.2285, simple_loss=0.3033, pruned_loss=0.07681, over 1416210.83 frames.], batch size: 19, lr: 7.23e-04 2022-05-27 02:21:48,178 INFO [train.py:842] (0/4) Epoch 7, batch 5550, loss[loss=0.219, simple_loss=0.3072, pruned_loss=0.06538, over 7290.00 frames.], tot_loss[loss=0.229, simple_loss=0.3042, pruned_loss=0.07689, over 1416743.65 frames.], batch size: 24, lr: 7.23e-04 2022-05-27 02:22:26,584 INFO [train.py:842] (0/4) Epoch 7, batch 5600, loss[loss=0.2132, simple_loss=0.3005, pruned_loss=0.06297, over 7191.00 frames.], tot_loss[loss=0.2284, simple_loss=0.3032, pruned_loss=0.07678, over 1416636.37 frames.], batch size: 23, lr: 7.22e-04 2022-05-27 02:23:05,521 INFO [train.py:842] (0/4) Epoch 7, batch 5650, loss[loss=0.1564, 
simple_loss=0.235, pruned_loss=0.03887, over 7284.00 frames.], tot_loss[loss=0.2293, simple_loss=0.3039, pruned_loss=0.0773, over 1416515.05 frames.], batch size: 17, lr: 7.22e-04 2022-05-27 02:23:44,727 INFO [train.py:842] (0/4) Epoch 7, batch 5700, loss[loss=0.1933, simple_loss=0.2838, pruned_loss=0.05133, over 7150.00 frames.], tot_loss[loss=0.2299, simple_loss=0.3046, pruned_loss=0.07761, over 1420589.56 frames.], batch size: 20, lr: 7.22e-04 2022-05-27 02:24:23,471 INFO [train.py:842] (0/4) Epoch 7, batch 5750, loss[loss=0.2683, simple_loss=0.3433, pruned_loss=0.09665, over 7280.00 frames.], tot_loss[loss=0.2304, simple_loss=0.3048, pruned_loss=0.07799, over 1419327.26 frames.], batch size: 25, lr: 7.22e-04 2022-05-27 02:25:01,791 INFO [train.py:842] (0/4) Epoch 7, batch 5800, loss[loss=0.283, simple_loss=0.3505, pruned_loss=0.1077, over 6553.00 frames.], tot_loss[loss=0.2289, simple_loss=0.3037, pruned_loss=0.07709, over 1417461.47 frames.], batch size: 37, lr: 7.21e-04 2022-05-27 02:25:40,567 INFO [train.py:842] (0/4) Epoch 7, batch 5850, loss[loss=0.2117, simple_loss=0.2735, pruned_loss=0.07494, over 7412.00 frames.], tot_loss[loss=0.2304, simple_loss=0.3049, pruned_loss=0.07798, over 1414657.64 frames.], batch size: 18, lr: 7.21e-04 2022-05-27 02:26:19,072 INFO [train.py:842] (0/4) Epoch 7, batch 5900, loss[loss=0.1908, simple_loss=0.2599, pruned_loss=0.06087, over 7284.00 frames.], tot_loss[loss=0.2298, simple_loss=0.3045, pruned_loss=0.07762, over 1415009.48 frames.], batch size: 17, lr: 7.21e-04 2022-05-27 02:26:58,427 INFO [train.py:842] (0/4) Epoch 7, batch 5950, loss[loss=0.2359, simple_loss=0.3132, pruned_loss=0.07928, over 7440.00 frames.], tot_loss[loss=0.2301, simple_loss=0.3049, pruned_loss=0.07768, over 1419479.14 frames.], batch size: 20, lr: 7.20e-04 2022-05-27 02:27:37,184 INFO [train.py:842] (0/4) Epoch 7, batch 6000, loss[loss=0.1959, simple_loss=0.2674, pruned_loss=0.06222, over 7158.00 frames.], tot_loss[loss=0.2275, simple_loss=0.3027, pruned_loss=0.0762, over 1418123.25 frames.], batch size: 18, lr: 7.20e-04 2022-05-27 02:27:37,185 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 02:27:46,475 INFO [train.py:871] (0/4) Epoch 7, validation: loss=0.1828, simple_loss=0.2835, pruned_loss=0.04102, over 868885.00 frames. 
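Throughout this log the reported loss field is a fixed linear combination of the two components: with the run's simple_loss_scale of 0.5, loss = 0.5 * simple_loss + pruned_loss. The validation entry above checks out as 0.5 * 0.2835 + 0.04102 ≈ 0.1828, and the running tot_loss figures follow the same relation (0.5 * 0.3035 + 0.07668 ≈ 0.2284 at Epoch 7, batch 4350). The snippet below is only a minimal sketch of that bookkeeping; combine_losses is a hypothetical helper name, not the recipe's loss code.

SIMPLE_LOSS_SCALE = 0.5  # simple_loss_scale from the run configuration


def combine_losses(simple_loss: float, pruned_loss: float) -> float:
    # The logged "loss" field is simple_loss_scale * simple_loss + pruned_loss.
    return SIMPLE_LOSS_SCALE * simple_loss + pruned_loss


if __name__ == "__main__":
    # Validation entry above: loss=0.1828, simple_loss=0.2835, pruned_loss=0.04102
    print(round(combine_losses(0.2835, 0.04102), 4))  # 0.1828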
2022-05-27 02:28:25,432 INFO [train.py:842] (0/4) Epoch 7, batch 6050, loss[loss=0.2351, simple_loss=0.3179, pruned_loss=0.07612, over 7317.00 frames.], tot_loss[loss=0.2269, simple_loss=0.3022, pruned_loss=0.07582, over 1421953.09 frames.], batch size: 20, lr: 7.20e-04 2022-05-27 02:29:04,007 INFO [train.py:842] (0/4) Epoch 7, batch 6100, loss[loss=0.2102, simple_loss=0.2796, pruned_loss=0.07045, over 6803.00 frames.], tot_loss[loss=0.2275, simple_loss=0.3027, pruned_loss=0.0761, over 1422344.88 frames.], batch size: 15, lr: 7.20e-04 2022-05-27 02:29:42,980 INFO [train.py:842] (0/4) Epoch 7, batch 6150, loss[loss=0.232, simple_loss=0.306, pruned_loss=0.07898, over 7418.00 frames.], tot_loss[loss=0.2286, simple_loss=0.3033, pruned_loss=0.07697, over 1423755.58 frames.], batch size: 18, lr: 7.19e-04 2022-05-27 02:30:21,588 INFO [train.py:842] (0/4) Epoch 7, batch 6200, loss[loss=0.2132, simple_loss=0.2948, pruned_loss=0.06584, over 7272.00 frames.], tot_loss[loss=0.2284, simple_loss=0.3036, pruned_loss=0.07662, over 1422776.58 frames.], batch size: 18, lr: 7.19e-04 2022-05-27 02:31:00,694 INFO [train.py:842] (0/4) Epoch 7, batch 6250, loss[loss=0.1876, simple_loss=0.264, pruned_loss=0.05561, over 6864.00 frames.], tot_loss[loss=0.2266, simple_loss=0.3022, pruned_loss=0.07552, over 1423066.14 frames.], batch size: 15, lr: 7.19e-04 2022-05-27 02:31:39,595 INFO [train.py:842] (0/4) Epoch 7, batch 6300, loss[loss=0.1859, simple_loss=0.2697, pruned_loss=0.051, over 7371.00 frames.], tot_loss[loss=0.2271, simple_loss=0.3024, pruned_loss=0.07594, over 1414764.93 frames.], batch size: 19, lr: 7.18e-04 2022-05-27 02:32:18,502 INFO [train.py:842] (0/4) Epoch 7, batch 6350, loss[loss=0.1738, simple_loss=0.2618, pruned_loss=0.04287, over 7289.00 frames.], tot_loss[loss=0.2259, simple_loss=0.3014, pruned_loss=0.07525, over 1419618.99 frames.], batch size: 18, lr: 7.18e-04 2022-05-27 02:32:57,028 INFO [train.py:842] (0/4) Epoch 7, batch 6400, loss[loss=0.2981, simple_loss=0.3579, pruned_loss=0.1192, over 5026.00 frames.], tot_loss[loss=0.2284, simple_loss=0.3036, pruned_loss=0.07667, over 1422246.78 frames.], batch size: 52, lr: 7.18e-04 2022-05-27 02:33:36,088 INFO [train.py:842] (0/4) Epoch 7, batch 6450, loss[loss=0.2435, simple_loss=0.331, pruned_loss=0.07802, over 7209.00 frames.], tot_loss[loss=0.2272, simple_loss=0.3023, pruned_loss=0.07599, over 1426649.12 frames.], batch size: 23, lr: 7.18e-04 2022-05-27 02:34:14,473 INFO [train.py:842] (0/4) Epoch 7, batch 6500, loss[loss=0.2094, simple_loss=0.2912, pruned_loss=0.06384, over 7117.00 frames.], tot_loss[loss=0.2259, simple_loss=0.3018, pruned_loss=0.07505, over 1427377.45 frames.], batch size: 28, lr: 7.17e-04 2022-05-27 02:34:53,299 INFO [train.py:842] (0/4) Epoch 7, batch 6550, loss[loss=0.1744, simple_loss=0.2719, pruned_loss=0.03844, over 7296.00 frames.], tot_loss[loss=0.2274, simple_loss=0.303, pruned_loss=0.07589, over 1422743.90 frames.], batch size: 25, lr: 7.17e-04 2022-05-27 02:35:31,818 INFO [train.py:842] (0/4) Epoch 7, batch 6600, loss[loss=0.223, simple_loss=0.3005, pruned_loss=0.07272, over 7418.00 frames.], tot_loss[loss=0.2272, simple_loss=0.3028, pruned_loss=0.07583, over 1422072.12 frames.], batch size: 21, lr: 7.17e-04 2022-05-27 02:36:10,719 INFO [train.py:842] (0/4) Epoch 7, batch 6650, loss[loss=0.2247, simple_loss=0.29, pruned_loss=0.07969, over 7073.00 frames.], tot_loss[loss=0.2274, simple_loss=0.303, pruned_loss=0.0759, over 1421512.32 frames.], batch size: 18, lr: 7.16e-04 2022-05-27 02:36:49,519 INFO [train.py:842] 
(0/4) Epoch 7, batch 6700, loss[loss=0.2501, simple_loss=0.3182, pruned_loss=0.09097, over 7059.00 frames.], tot_loss[loss=0.2274, simple_loss=0.3031, pruned_loss=0.07583, over 1424308.96 frames.], batch size: 18, lr: 7.16e-04 2022-05-27 02:37:28,478 INFO [train.py:842] (0/4) Epoch 7, batch 6750, loss[loss=0.2061, simple_loss=0.2788, pruned_loss=0.06671, over 7154.00 frames.], tot_loss[loss=0.2274, simple_loss=0.3032, pruned_loss=0.07579, over 1426010.62 frames.], batch size: 19, lr: 7.16e-04 2022-05-27 02:38:06,891 INFO [train.py:842] (0/4) Epoch 7, batch 6800, loss[loss=0.2231, simple_loss=0.3001, pruned_loss=0.07302, over 7327.00 frames.], tot_loss[loss=0.2285, simple_loss=0.3039, pruned_loss=0.07657, over 1423408.17 frames.], batch size: 25, lr: 7.16e-04 2022-05-27 02:38:45,869 INFO [train.py:842] (0/4) Epoch 7, batch 6850, loss[loss=0.1857, simple_loss=0.259, pruned_loss=0.05623, over 7227.00 frames.], tot_loss[loss=0.2295, simple_loss=0.3047, pruned_loss=0.07713, over 1420738.68 frames.], batch size: 16, lr: 7.15e-04 2022-05-27 02:39:35,205 INFO [train.py:842] (0/4) Epoch 7, batch 6900, loss[loss=0.2594, simple_loss=0.3381, pruned_loss=0.09037, over 7321.00 frames.], tot_loss[loss=0.2285, simple_loss=0.3036, pruned_loss=0.07666, over 1422005.80 frames.], batch size: 22, lr: 7.15e-04 2022-05-27 02:40:14,039 INFO [train.py:842] (0/4) Epoch 7, batch 6950, loss[loss=0.2574, simple_loss=0.3288, pruned_loss=0.09295, over 7417.00 frames.], tot_loss[loss=0.2283, simple_loss=0.3032, pruned_loss=0.07668, over 1419146.13 frames.], batch size: 21, lr: 7.15e-04 2022-05-27 02:40:52,451 INFO [train.py:842] (0/4) Epoch 7, batch 7000, loss[loss=0.254, simple_loss=0.3248, pruned_loss=0.0916, over 6400.00 frames.], tot_loss[loss=0.2293, simple_loss=0.3045, pruned_loss=0.07709, over 1420594.27 frames.], batch size: 37, lr: 7.14e-04 2022-05-27 02:41:31,355 INFO [train.py:842] (0/4) Epoch 7, batch 7050, loss[loss=0.2351, simple_loss=0.3126, pruned_loss=0.07881, over 7288.00 frames.], tot_loss[loss=0.2277, simple_loss=0.3032, pruned_loss=0.0761, over 1424227.41 frames.], batch size: 25, lr: 7.14e-04 2022-05-27 02:42:09,936 INFO [train.py:842] (0/4) Epoch 7, batch 7100, loss[loss=0.2551, simple_loss=0.3275, pruned_loss=0.09136, over 7187.00 frames.], tot_loss[loss=0.2285, simple_loss=0.304, pruned_loss=0.07651, over 1425750.68 frames.], batch size: 23, lr: 7.14e-04 2022-05-27 02:42:49,027 INFO [train.py:842] (0/4) Epoch 7, batch 7150, loss[loss=0.1706, simple_loss=0.2491, pruned_loss=0.04609, over 7282.00 frames.], tot_loss[loss=0.2296, simple_loss=0.3049, pruned_loss=0.07715, over 1424099.63 frames.], batch size: 17, lr: 7.14e-04 2022-05-27 02:43:27,660 INFO [train.py:842] (0/4) Epoch 7, batch 7200, loss[loss=0.2021, simple_loss=0.2861, pruned_loss=0.05904, over 7364.00 frames.], tot_loss[loss=0.229, simple_loss=0.3045, pruned_loss=0.07674, over 1416543.52 frames.], batch size: 19, lr: 7.13e-04 2022-05-27 02:44:06,731 INFO [train.py:842] (0/4) Epoch 7, batch 7250, loss[loss=0.2495, simple_loss=0.3091, pruned_loss=0.0949, over 7058.00 frames.], tot_loss[loss=0.2302, simple_loss=0.3052, pruned_loss=0.07759, over 1418528.44 frames.], batch size: 28, lr: 7.13e-04 2022-05-27 02:44:45,281 INFO [train.py:842] (0/4) Epoch 7, batch 7300, loss[loss=0.2346, simple_loss=0.3074, pruned_loss=0.08089, over 7119.00 frames.], tot_loss[loss=0.2303, simple_loss=0.305, pruned_loss=0.07776, over 1421322.82 frames.], batch size: 26, lr: 7.13e-04 2022-05-27 02:45:24,091 INFO [train.py:842] (0/4) Epoch 7, batch 7350, 
loss[loss=0.1783, simple_loss=0.235, pruned_loss=0.06085, over 7419.00 frames.], tot_loss[loss=0.231, simple_loss=0.3056, pruned_loss=0.07815, over 1422195.92 frames.], batch size: 17, lr: 7.12e-04 2022-05-27 02:46:02,465 INFO [train.py:842] (0/4) Epoch 7, batch 7400, loss[loss=0.2294, simple_loss=0.3062, pruned_loss=0.07629, over 7065.00 frames.], tot_loss[loss=0.2303, simple_loss=0.3053, pruned_loss=0.07771, over 1418110.30 frames.], batch size: 18, lr: 7.12e-04 2022-05-27 02:46:41,240 INFO [train.py:842] (0/4) Epoch 7, batch 7450, loss[loss=0.1916, simple_loss=0.2662, pruned_loss=0.0585, over 7285.00 frames.], tot_loss[loss=0.2326, simple_loss=0.3072, pruned_loss=0.07898, over 1417284.06 frames.], batch size: 17, lr: 7.12e-04 2022-05-27 02:47:19,876 INFO [train.py:842] (0/4) Epoch 7, batch 7500, loss[loss=0.2122, simple_loss=0.298, pruned_loss=0.06314, over 7151.00 frames.], tot_loss[loss=0.2299, simple_loss=0.3047, pruned_loss=0.07752, over 1418761.96 frames.], batch size: 20, lr: 7.12e-04 2022-05-27 02:47:58,785 INFO [train.py:842] (0/4) Epoch 7, batch 7550, loss[loss=0.2153, simple_loss=0.2954, pruned_loss=0.06762, over 7316.00 frames.], tot_loss[loss=0.2285, simple_loss=0.3038, pruned_loss=0.07664, over 1417425.56 frames.], batch size: 21, lr: 7.11e-04 2022-05-27 02:48:37,261 INFO [train.py:842] (0/4) Epoch 7, batch 7600, loss[loss=0.2235, simple_loss=0.3024, pruned_loss=0.07232, over 7320.00 frames.], tot_loss[loss=0.2285, simple_loss=0.3037, pruned_loss=0.07666, over 1421284.38 frames.], batch size: 22, lr: 7.11e-04 2022-05-27 02:49:16,243 INFO [train.py:842] (0/4) Epoch 7, batch 7650, loss[loss=0.2601, simple_loss=0.3265, pruned_loss=0.09685, over 5065.00 frames.], tot_loss[loss=0.2268, simple_loss=0.3024, pruned_loss=0.07559, over 1423535.74 frames.], batch size: 52, lr: 7.11e-04 2022-05-27 02:49:54,714 INFO [train.py:842] (0/4) Epoch 7, batch 7700, loss[loss=0.2154, simple_loss=0.3061, pruned_loss=0.06231, over 7104.00 frames.], tot_loss[loss=0.2262, simple_loss=0.3022, pruned_loss=0.07512, over 1420160.74 frames.], batch size: 21, lr: 7.10e-04 2022-05-27 02:50:33,578 INFO [train.py:842] (0/4) Epoch 7, batch 7750, loss[loss=0.2208, simple_loss=0.3022, pruned_loss=0.06967, over 7230.00 frames.], tot_loss[loss=0.2259, simple_loss=0.302, pruned_loss=0.07492, over 1425228.83 frames.], batch size: 20, lr: 7.10e-04 2022-05-27 02:51:12,167 INFO [train.py:842] (0/4) Epoch 7, batch 7800, loss[loss=0.2018, simple_loss=0.2921, pruned_loss=0.05574, over 7432.00 frames.], tot_loss[loss=0.2259, simple_loss=0.3023, pruned_loss=0.07473, over 1425647.32 frames.], batch size: 20, lr: 7.10e-04 2022-05-27 02:51:51,069 INFO [train.py:842] (0/4) Epoch 7, batch 7850, loss[loss=0.217, simple_loss=0.2792, pruned_loss=0.07736, over 6757.00 frames.], tot_loss[loss=0.2256, simple_loss=0.302, pruned_loss=0.07462, over 1424440.81 frames.], batch size: 15, lr: 7.10e-04 2022-05-27 02:52:29,409 INFO [train.py:842] (0/4) Epoch 7, batch 7900, loss[loss=0.1789, simple_loss=0.2614, pruned_loss=0.04823, over 7253.00 frames.], tot_loss[loss=0.2272, simple_loss=0.3034, pruned_loss=0.07552, over 1421836.41 frames.], batch size: 19, lr: 7.09e-04 2022-05-27 02:53:07,969 INFO [train.py:842] (0/4) Epoch 7, batch 7950, loss[loss=0.2261, simple_loss=0.3061, pruned_loss=0.07308, over 7197.00 frames.], tot_loss[loss=0.2267, simple_loss=0.3027, pruned_loss=0.07529, over 1415397.66 frames.], batch size: 23, lr: 7.09e-04 2022-05-27 02:53:46,558 INFO [train.py:842] (0/4) Epoch 7, batch 8000, loss[loss=0.2435, simple_loss=0.3185, 
pruned_loss=0.08427, over 5095.00 frames.], tot_loss[loss=0.2281, simple_loss=0.3037, pruned_loss=0.0762, over 1413668.80 frames.], batch size: 52, lr: 7.09e-04 2022-05-27 02:54:25,424 INFO [train.py:842] (0/4) Epoch 7, batch 8050, loss[loss=0.2256, simple_loss=0.3042, pruned_loss=0.07347, over 7421.00 frames.], tot_loss[loss=0.2282, simple_loss=0.304, pruned_loss=0.07622, over 1414017.09 frames.], batch size: 21, lr: 7.08e-04 2022-05-27 02:55:24,479 INFO [train.py:842] (0/4) Epoch 7, batch 8100, loss[loss=0.245, simple_loss=0.3212, pruned_loss=0.08437, over 6810.00 frames.], tot_loss[loss=0.2294, simple_loss=0.3055, pruned_loss=0.07664, over 1414163.98 frames.], batch size: 31, lr: 7.08e-04 2022-05-27 02:56:13,872 INFO [train.py:842] (0/4) Epoch 7, batch 8150, loss[loss=0.2213, simple_loss=0.3035, pruned_loss=0.06961, over 7313.00 frames.], tot_loss[loss=0.229, simple_loss=0.3047, pruned_loss=0.07659, over 1418931.42 frames.], batch size: 21, lr: 7.08e-04 2022-05-27 02:56:52,240 INFO [train.py:842] (0/4) Epoch 7, batch 8200, loss[loss=0.208, simple_loss=0.2845, pruned_loss=0.06573, over 7345.00 frames.], tot_loss[loss=0.2289, simple_loss=0.305, pruned_loss=0.07642, over 1419065.37 frames.], batch size: 19, lr: 7.08e-04 2022-05-27 02:57:31,160 INFO [train.py:842] (0/4) Epoch 7, batch 8250, loss[loss=0.2789, simple_loss=0.3512, pruned_loss=0.1033, over 6779.00 frames.], tot_loss[loss=0.2304, simple_loss=0.3061, pruned_loss=0.07736, over 1420089.03 frames.], batch size: 31, lr: 7.07e-04 2022-05-27 02:58:09,613 INFO [train.py:842] (0/4) Epoch 7, batch 8300, loss[loss=0.2335, simple_loss=0.3182, pruned_loss=0.07434, over 7408.00 frames.], tot_loss[loss=0.2298, simple_loss=0.3053, pruned_loss=0.07712, over 1415457.16 frames.], batch size: 17, lr: 7.07e-04 2022-05-27 02:58:48,414 INFO [train.py:842] (0/4) Epoch 7, batch 8350, loss[loss=0.2354, simple_loss=0.3166, pruned_loss=0.07709, over 7220.00 frames.], tot_loss[loss=0.2285, simple_loss=0.3042, pruned_loss=0.07634, over 1420457.93 frames.], batch size: 21, lr: 7.07e-04 2022-05-27 02:59:26,860 INFO [train.py:842] (0/4) Epoch 7, batch 8400, loss[loss=0.2105, simple_loss=0.2919, pruned_loss=0.06451, over 7237.00 frames.], tot_loss[loss=0.2281, simple_loss=0.3041, pruned_loss=0.07606, over 1423154.90 frames.], batch size: 20, lr: 7.06e-04 2022-05-27 03:00:05,682 INFO [train.py:842] (0/4) Epoch 7, batch 8450, loss[loss=0.2019, simple_loss=0.2974, pruned_loss=0.05317, over 7413.00 frames.], tot_loss[loss=0.2285, simple_loss=0.3049, pruned_loss=0.07607, over 1423661.22 frames.], batch size: 21, lr: 7.06e-04 2022-05-27 03:00:44,407 INFO [train.py:842] (0/4) Epoch 7, batch 8500, loss[loss=0.2363, simple_loss=0.3055, pruned_loss=0.08352, over 7335.00 frames.], tot_loss[loss=0.2288, simple_loss=0.3053, pruned_loss=0.07616, over 1422733.27 frames.], batch size: 22, lr: 7.06e-04 2022-05-27 03:01:22,936 INFO [train.py:842] (0/4) Epoch 7, batch 8550, loss[loss=0.1831, simple_loss=0.2682, pruned_loss=0.04897, over 7153.00 frames.], tot_loss[loss=0.2275, simple_loss=0.3042, pruned_loss=0.07541, over 1416259.93 frames.], batch size: 19, lr: 7.06e-04 2022-05-27 03:02:01,344 INFO [train.py:842] (0/4) Epoch 7, batch 8600, loss[loss=0.2098, simple_loss=0.2852, pruned_loss=0.06725, over 7160.00 frames.], tot_loss[loss=0.2278, simple_loss=0.3046, pruned_loss=0.07553, over 1417307.36 frames.], batch size: 18, lr: 7.05e-04 2022-05-27 03:02:40,394 INFO [train.py:842] (0/4) Epoch 7, batch 8650, loss[loss=0.2205, simple_loss=0.312, pruned_loss=0.06452, over 7336.00 
frames.], tot_loss[loss=0.2257, simple_loss=0.3025, pruned_loss=0.07444, over 1417973.54 frames.], batch size: 22, lr: 7.05e-04 2022-05-27 03:03:18,979 INFO [train.py:842] (0/4) Epoch 7, batch 8700, loss[loss=0.1625, simple_loss=0.2495, pruned_loss=0.03775, over 7422.00 frames.], tot_loss[loss=0.2268, simple_loss=0.3031, pruned_loss=0.07524, over 1420150.00 frames.], batch size: 18, lr: 7.05e-04 2022-05-27 03:03:57,708 INFO [train.py:842] (0/4) Epoch 7, batch 8750, loss[loss=0.2118, simple_loss=0.3086, pruned_loss=0.05754, over 7217.00 frames.], tot_loss[loss=0.2284, simple_loss=0.3052, pruned_loss=0.0758, over 1419362.47 frames.], batch size: 21, lr: 7.05e-04 2022-05-27 03:04:35,969 INFO [train.py:842] (0/4) Epoch 7, batch 8800, loss[loss=0.2967, simple_loss=0.3594, pruned_loss=0.117, over 4616.00 frames.], tot_loss[loss=0.2298, simple_loss=0.3056, pruned_loss=0.07701, over 1414041.79 frames.], batch size: 52, lr: 7.04e-04 2022-05-27 03:05:12,358 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-64000.pt 2022-05-27 03:05:17,312 INFO [train.py:842] (0/4) Epoch 7, batch 8850, loss[loss=0.1853, simple_loss=0.2687, pruned_loss=0.05092, over 7415.00 frames.], tot_loss[loss=0.2279, simple_loss=0.3042, pruned_loss=0.07581, over 1417096.32 frames.], batch size: 18, lr: 7.04e-04 2022-05-27 03:05:55,608 INFO [train.py:842] (0/4) Epoch 7, batch 8900, loss[loss=0.1951, simple_loss=0.2714, pruned_loss=0.05935, over 7158.00 frames.], tot_loss[loss=0.229, simple_loss=0.3051, pruned_loss=0.07648, over 1412492.05 frames.], batch size: 18, lr: 7.04e-04 2022-05-27 03:06:34,503 INFO [train.py:842] (0/4) Epoch 7, batch 8950, loss[loss=0.2224, simple_loss=0.308, pruned_loss=0.06835, over 7334.00 frames.], tot_loss[loss=0.2282, simple_loss=0.3047, pruned_loss=0.07589, over 1408240.02 frames.], batch size: 22, lr: 7.03e-04 2022-05-27 03:07:12,744 INFO [train.py:842] (0/4) Epoch 7, batch 9000, loss[loss=0.2833, simple_loss=0.3541, pruned_loss=0.1063, over 7221.00 frames.], tot_loss[loss=0.2311, simple_loss=0.3067, pruned_loss=0.07774, over 1402663.93 frames.], batch size: 21, lr: 7.03e-04 2022-05-27 03:07:12,745 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 03:07:22,108 INFO [train.py:871] (0/4) Epoch 7, validation: loss=0.1806, simple_loss=0.2819, pruned_loss=0.03969, over 868885.00 frames. 
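The checkpoint-64000.pt save above (and checkpoint-72000.pt later in the log) lines up with save_every_n = 8000 in the run configuration: a batch-count-named checkpoint is written every 8000 training batches, independent of epoch boundaries, with keep_last_k presumably pruning older ones. The sketch below shows only that trigger logic under those assumptions; maybe_save_checkpoint is a hypothetical helper, and the real recipe also stores optimizer, scheduler and sampler state alongside the model.

from pathlib import Path

import torch


def maybe_save_checkpoint(
    model: torch.nn.Module,
    batch_idx_train: int,
    exp_dir: Path,
    save_every_n: int = 8000,
) -> None:
    # Fires at global batch 8000, 16000, ..., 64000, 72000, ... as seen in the log.
    if batch_idx_train == 0 or batch_idx_train % save_every_n != 0:
        return
    ckpt_path = exp_dir / f"checkpoint-{batch_idx_train}.pt"
    torch.save(
        {"model": model.state_dict(), "batch_idx_train": batch_idx_train},
        ckpt_path,
    )
    print(f"Saving checkpoint to {ckpt_path}")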
2022-05-27 03:08:00,671 INFO [train.py:842] (0/4) Epoch 7, batch 9050, loss[loss=0.2846, simple_loss=0.3431, pruned_loss=0.1131, over 5134.00 frames.], tot_loss[loss=0.2326, simple_loss=0.308, pruned_loss=0.07867, over 1396555.27 frames.], batch size: 53, lr: 7.03e-04 2022-05-27 03:08:38,271 INFO [train.py:842] (0/4) Epoch 7, batch 9100, loss[loss=0.2424, simple_loss=0.3224, pruned_loss=0.0812, over 7275.00 frames.], tot_loss[loss=0.2334, simple_loss=0.3092, pruned_loss=0.0788, over 1379651.46 frames.], batch size: 25, lr: 7.03e-04 2022-05-27 03:09:15,873 INFO [train.py:842] (0/4) Epoch 7, batch 9150, loss[loss=0.2547, simple_loss=0.3393, pruned_loss=0.08506, over 6284.00 frames.], tot_loss[loss=0.2357, simple_loss=0.3109, pruned_loss=0.08027, over 1340174.32 frames.], batch size: 37, lr: 7.02e-04 2022-05-27 03:09:48,797 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-7.pt 2022-05-27 03:10:09,046 INFO [train.py:842] (0/4) Epoch 8, batch 0, loss[loss=0.2541, simple_loss=0.3271, pruned_loss=0.09057, over 7331.00 frames.], tot_loss[loss=0.2541, simple_loss=0.3271, pruned_loss=0.09057, over 7331.00 frames.], batch size: 22, lr: 6.74e-04 2022-05-27 03:10:47,711 INFO [train.py:842] (0/4) Epoch 8, batch 50, loss[loss=0.2116, simple_loss=0.281, pruned_loss=0.07113, over 7140.00 frames.], tot_loss[loss=0.2272, simple_loss=0.3035, pruned_loss=0.07542, over 320720.42 frames.], batch size: 17, lr: 6.73e-04 2022-05-27 03:11:26,542 INFO [train.py:842] (0/4) Epoch 8, batch 100, loss[loss=0.2221, simple_loss=0.3017, pruned_loss=0.07129, over 7313.00 frames.], tot_loss[loss=0.2252, simple_loss=0.303, pruned_loss=0.07368, over 569051.68 frames.], batch size: 25, lr: 6.73e-04 2022-05-27 03:12:05,185 INFO [train.py:842] (0/4) Epoch 8, batch 150, loss[loss=0.2351, simple_loss=0.2996, pruned_loss=0.08531, over 7107.00 frames.], tot_loss[loss=0.2217, simple_loss=0.2992, pruned_loss=0.07214, over 758214.35 frames.], batch size: 21, lr: 6.73e-04 2022-05-27 03:12:43,896 INFO [train.py:842] (0/4) Epoch 8, batch 200, loss[loss=0.1883, simple_loss=0.2864, pruned_loss=0.04516, over 7202.00 frames.], tot_loss[loss=0.2236, simple_loss=0.301, pruned_loss=0.07311, over 907222.00 frames.], batch size: 22, lr: 6.73e-04 2022-05-27 03:13:22,346 INFO [train.py:842] (0/4) Epoch 8, batch 250, loss[loss=0.1981, simple_loss=0.2872, pruned_loss=0.05449, over 7115.00 frames.], tot_loss[loss=0.2236, simple_loss=0.3012, pruned_loss=0.07301, over 1020896.38 frames.], batch size: 21, lr: 6.72e-04 2022-05-27 03:14:01,072 INFO [train.py:842] (0/4) Epoch 8, batch 300, loss[loss=0.2928, simple_loss=0.357, pruned_loss=0.1143, over 7080.00 frames.], tot_loss[loss=0.2251, simple_loss=0.3019, pruned_loss=0.0741, over 1106058.46 frames.], batch size: 18, lr: 6.72e-04 2022-05-27 03:14:39,754 INFO [train.py:842] (0/4) Epoch 8, batch 350, loss[loss=0.2102, simple_loss=0.2965, pruned_loss=0.06195, over 7129.00 frames.], tot_loss[loss=0.2235, simple_loss=0.3009, pruned_loss=0.07307, over 1178216.35 frames.], batch size: 21, lr: 6.72e-04 2022-05-27 03:15:18,894 INFO [train.py:842] (0/4) Epoch 8, batch 400, loss[loss=0.249, simple_loss=0.3223, pruned_loss=0.08781, over 4925.00 frames.], tot_loss[loss=0.2231, simple_loss=0.3007, pruned_loss=0.07274, over 1230879.39 frames.], batch size: 52, lr: 6.72e-04 2022-05-27 03:15:57,545 INFO [train.py:842] (0/4) Epoch 8, batch 450, loss[loss=0.2159, simple_loss=0.2697, pruned_loss=0.08105, over 6815.00 frames.], tot_loss[loss=0.2235, simple_loss=0.3004, 
pruned_loss=0.07335, over 1272299.57 frames.], batch size: 15, lr: 6.71e-04 2022-05-27 03:16:36,524 INFO [train.py:842] (0/4) Epoch 8, batch 500, loss[loss=0.2332, simple_loss=0.3185, pruned_loss=0.07392, over 7207.00 frames.], tot_loss[loss=0.2225, simple_loss=0.2996, pruned_loss=0.07271, over 1305293.97 frames.], batch size: 23, lr: 6.71e-04 2022-05-27 03:17:15,302 INFO [train.py:842] (0/4) Epoch 8, batch 550, loss[loss=0.2835, simple_loss=0.3536, pruned_loss=0.1067, over 7192.00 frames.], tot_loss[loss=0.2237, simple_loss=0.3008, pruned_loss=0.07329, over 1333089.26 frames.], batch size: 23, lr: 6.71e-04 2022-05-27 03:17:54,029 INFO [train.py:842] (0/4) Epoch 8, batch 600, loss[loss=0.2192, simple_loss=0.2993, pruned_loss=0.06961, over 7229.00 frames.], tot_loss[loss=0.2246, simple_loss=0.3022, pruned_loss=0.07345, over 1353229.01 frames.], batch size: 21, lr: 6.71e-04 2022-05-27 03:18:32,600 INFO [train.py:842] (0/4) Epoch 8, batch 650, loss[loss=0.173, simple_loss=0.2523, pruned_loss=0.04685, over 7257.00 frames.], tot_loss[loss=0.2239, simple_loss=0.3015, pruned_loss=0.07315, over 1368315.49 frames.], batch size: 19, lr: 6.70e-04 2022-05-27 03:19:11,321 INFO [train.py:842] (0/4) Epoch 8, batch 700, loss[loss=0.2373, simple_loss=0.3036, pruned_loss=0.08556, over 5046.00 frames.], tot_loss[loss=0.2251, simple_loss=0.3023, pruned_loss=0.07392, over 1376516.24 frames.], batch size: 54, lr: 6.70e-04 2022-05-27 03:19:49,824 INFO [train.py:842] (0/4) Epoch 8, batch 750, loss[loss=0.19, simple_loss=0.2727, pruned_loss=0.05363, over 7362.00 frames.], tot_loss[loss=0.2242, simple_loss=0.3018, pruned_loss=0.07326, over 1384658.04 frames.], batch size: 19, lr: 6.70e-04 2022-05-27 03:20:28,431 INFO [train.py:842] (0/4) Epoch 8, batch 800, loss[loss=0.2354, simple_loss=0.3187, pruned_loss=0.07603, over 6390.00 frames.], tot_loss[loss=0.2254, simple_loss=0.3031, pruned_loss=0.07389, over 1390073.84 frames.], batch size: 38, lr: 6.69e-04 2022-05-27 03:21:07,287 INFO [train.py:842] (0/4) Epoch 8, batch 850, loss[loss=0.2089, simple_loss=0.2787, pruned_loss=0.06951, over 7422.00 frames.], tot_loss[loss=0.2231, simple_loss=0.3008, pruned_loss=0.07272, over 1399123.68 frames.], batch size: 18, lr: 6.69e-04 2022-05-27 03:21:46,344 INFO [train.py:842] (0/4) Epoch 8, batch 900, loss[loss=0.2425, simple_loss=0.3212, pruned_loss=0.08193, over 6669.00 frames.], tot_loss[loss=0.223, simple_loss=0.3003, pruned_loss=0.07283, over 1399301.52 frames.], batch size: 31, lr: 6.69e-04 2022-05-27 03:22:24,868 INFO [train.py:842] (0/4) Epoch 8, batch 950, loss[loss=0.208, simple_loss=0.2921, pruned_loss=0.06194, over 7232.00 frames.], tot_loss[loss=0.2237, simple_loss=0.3006, pruned_loss=0.07346, over 1405582.12 frames.], batch size: 20, lr: 6.69e-04 2022-05-27 03:23:03,655 INFO [train.py:842] (0/4) Epoch 8, batch 1000, loss[loss=0.2268, simple_loss=0.3111, pruned_loss=0.07123, over 7220.00 frames.], tot_loss[loss=0.2227, simple_loss=0.2997, pruned_loss=0.07285, over 1409704.30 frames.], batch size: 21, lr: 6.68e-04 2022-05-27 03:23:42,255 INFO [train.py:842] (0/4) Epoch 8, batch 1050, loss[loss=0.1958, simple_loss=0.2692, pruned_loss=0.06121, over 7145.00 frames.], tot_loss[loss=0.223, simple_loss=0.3003, pruned_loss=0.07285, over 1408047.44 frames.], batch size: 17, lr: 6.68e-04 2022-05-27 03:24:21,157 INFO [train.py:842] (0/4) Epoch 8, batch 1100, loss[loss=0.2373, simple_loss=0.3284, pruned_loss=0.07312, over 7199.00 frames.], tot_loss[loss=0.2221, simple_loss=0.2997, pruned_loss=0.07231, over 1411269.32 frames.], 
batch size: 22, lr: 6.68e-04 2022-05-27 03:24:59,655 INFO [train.py:842] (0/4) Epoch 8, batch 1150, loss[loss=0.2586, simple_loss=0.3221, pruned_loss=0.09754, over 5247.00 frames.], tot_loss[loss=0.2226, simple_loss=0.3004, pruned_loss=0.07244, over 1416981.83 frames.], batch size: 52, lr: 6.68e-04 2022-05-27 03:25:38,516 INFO [train.py:842] (0/4) Epoch 8, batch 1200, loss[loss=0.2295, simple_loss=0.3124, pruned_loss=0.07325, over 7151.00 frames.], tot_loss[loss=0.2216, simple_loss=0.2994, pruned_loss=0.07189, over 1420634.78 frames.], batch size: 20, lr: 6.67e-04 2022-05-27 03:26:16,971 INFO [train.py:842] (0/4) Epoch 8, batch 1250, loss[loss=0.2364, simple_loss=0.3062, pruned_loss=0.08332, over 7279.00 frames.], tot_loss[loss=0.2223, simple_loss=0.2995, pruned_loss=0.07251, over 1418717.09 frames.], batch size: 18, lr: 6.67e-04 2022-05-27 03:26:55,516 INFO [train.py:842] (0/4) Epoch 8, batch 1300, loss[loss=0.2457, simple_loss=0.3084, pruned_loss=0.09149, over 7139.00 frames.], tot_loss[loss=0.2235, simple_loss=0.3007, pruned_loss=0.07313, over 1416025.57 frames.], batch size: 20, lr: 6.67e-04 2022-05-27 03:27:34,043 INFO [train.py:842] (0/4) Epoch 8, batch 1350, loss[loss=0.2341, simple_loss=0.2987, pruned_loss=0.08474, over 7167.00 frames.], tot_loss[loss=0.225, simple_loss=0.3018, pruned_loss=0.0741, over 1415100.56 frames.], batch size: 19, lr: 6.67e-04 2022-05-27 03:28:12,691 INFO [train.py:842] (0/4) Epoch 8, batch 1400, loss[loss=0.1577, simple_loss=0.2423, pruned_loss=0.03653, over 7286.00 frames.], tot_loss[loss=0.2241, simple_loss=0.3016, pruned_loss=0.0733, over 1416224.20 frames.], batch size: 18, lr: 6.66e-04 2022-05-27 03:28:51,221 INFO [train.py:842] (0/4) Epoch 8, batch 1450, loss[loss=0.1917, simple_loss=0.2832, pruned_loss=0.05011, over 7162.00 frames.], tot_loss[loss=0.2234, simple_loss=0.3011, pruned_loss=0.07284, over 1415782.29 frames.], batch size: 18, lr: 6.66e-04 2022-05-27 03:29:30,465 INFO [train.py:842] (0/4) Epoch 8, batch 1500, loss[loss=0.1943, simple_loss=0.2773, pruned_loss=0.05561, over 7423.00 frames.], tot_loss[loss=0.2236, simple_loss=0.301, pruned_loss=0.07315, over 1414791.75 frames.], batch size: 18, lr: 6.66e-04 2022-05-27 03:30:08,989 INFO [train.py:842] (0/4) Epoch 8, batch 1550, loss[loss=0.2519, simple_loss=0.3275, pruned_loss=0.08809, over 7212.00 frames.], tot_loss[loss=0.2253, simple_loss=0.3023, pruned_loss=0.07411, over 1420273.27 frames.], batch size: 22, lr: 6.66e-04 2022-05-27 03:30:48,019 INFO [train.py:842] (0/4) Epoch 8, batch 1600, loss[loss=0.1905, simple_loss=0.269, pruned_loss=0.05597, over 6519.00 frames.], tot_loss[loss=0.2263, simple_loss=0.3036, pruned_loss=0.07445, over 1421719.48 frames.], batch size: 38, lr: 6.65e-04 2022-05-27 03:31:26,451 INFO [train.py:842] (0/4) Epoch 8, batch 1650, loss[loss=0.3379, simple_loss=0.3765, pruned_loss=0.1496, over 7293.00 frames.], tot_loss[loss=0.2284, simple_loss=0.305, pruned_loss=0.07593, over 1420149.63 frames.], batch size: 24, lr: 6.65e-04 2022-05-27 03:32:05,103 INFO [train.py:842] (0/4) Epoch 8, batch 1700, loss[loss=0.2506, simple_loss=0.3209, pruned_loss=0.09008, over 7327.00 frames.], tot_loss[loss=0.2281, simple_loss=0.3049, pruned_loss=0.07566, over 1419927.58 frames.], batch size: 21, lr: 6.65e-04 2022-05-27 03:32:43,703 INFO [train.py:842] (0/4) Epoch 8, batch 1750, loss[loss=0.2161, simple_loss=0.2964, pruned_loss=0.06795, over 7339.00 frames.], tot_loss[loss=0.2272, simple_loss=0.3042, pruned_loss=0.07515, over 1419775.64 frames.], batch size: 22, lr: 6.65e-04 
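The learning rate decays smoothly within an epoch (7.30e-04 down to about 7.02e-04 over the course of epoch 7) and then steps down at the epoch boundary (6.74e-04 at Epoch 8, batch 0). Those values are consistent with an Eden-style schedule driven jointly by batch count and epoch count under the configured initial_lr = 0.003, lr_batches = 5000 and lr_epochs = 6. The function below is a reconstruction from the logged numbers, not the scheduler code itself; in this parameterization the epoch argument counts completed epochs.

def eden_lr(
    batch: int,
    epoch: int,
    initial_lr: float = 0.003,
    lr_batches: float = 5000.0,
    lr_epochs: float = 6.0,
) -> float:
    # lr = initial_lr * batch_factor * epoch_factor, both factors in (0, 1].
    batch_factor = ((batch**2 + lr_batches**2) / lr_batches**2) ** -0.25
    epoch_factor = ((epoch**2 + lr_epochs**2) / lr_epochs**2) ** -0.25
    return initial_lr * batch_factor * epoch_factor


if __name__ == "__main__":
    # Around 64000 global batches, near the epoch 7 -> 8 boundary:
    print(f"{eden_lr(64000, 6):.2e}")  # 7.04e-04, matching the lr logged near checkpoint-64000
    print(f"{eden_lr(64000, 7):.2e}")  # 6.75e-04, close to the 6.74e-04 at Epoch 8, batch 0
                                       # (the exact global batch index there is slightly past 64000)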
2022-05-27 03:33:22,862 INFO [train.py:842] (0/4) Epoch 8, batch 1800, loss[loss=0.203, simple_loss=0.2784, pruned_loss=0.06386, over 7333.00 frames.], tot_loss[loss=0.2265, simple_loss=0.3033, pruned_loss=0.07491, over 1421463.71 frames.], batch size: 22, lr: 6.64e-04 2022-05-27 03:34:01,567 INFO [train.py:842] (0/4) Epoch 8, batch 1850, loss[loss=0.2085, simple_loss=0.2913, pruned_loss=0.06288, over 7227.00 frames.], tot_loss[loss=0.2265, simple_loss=0.3035, pruned_loss=0.07472, over 1423696.03 frames.], batch size: 20, lr: 6.64e-04 2022-05-27 03:34:40,352 INFO [train.py:842] (0/4) Epoch 8, batch 1900, loss[loss=0.2571, simple_loss=0.3297, pruned_loss=0.09232, over 7295.00 frames.], tot_loss[loss=0.2253, simple_loss=0.3018, pruned_loss=0.07437, over 1421993.17 frames.], batch size: 25, lr: 6.64e-04 2022-05-27 03:35:19,049 INFO [train.py:842] (0/4) Epoch 8, batch 1950, loss[loss=0.1788, simple_loss=0.2547, pruned_loss=0.05146, over 7014.00 frames.], tot_loss[loss=0.2252, simple_loss=0.3015, pruned_loss=0.07448, over 1426232.47 frames.], batch size: 16, lr: 6.64e-04 2022-05-27 03:35:57,670 INFO [train.py:842] (0/4) Epoch 8, batch 2000, loss[loss=0.2132, simple_loss=0.3033, pruned_loss=0.06153, over 7119.00 frames.], tot_loss[loss=0.2243, simple_loss=0.3009, pruned_loss=0.07388, over 1427063.02 frames.], batch size: 21, lr: 6.63e-04 2022-05-27 03:36:36,057 INFO [train.py:842] (0/4) Epoch 8, batch 2050, loss[loss=0.2917, simple_loss=0.3555, pruned_loss=0.114, over 4801.00 frames.], tot_loss[loss=0.2259, simple_loss=0.3021, pruned_loss=0.07482, over 1420761.68 frames.], batch size: 52, lr: 6.63e-04 2022-05-27 03:37:14,882 INFO [train.py:842] (0/4) Epoch 8, batch 2100, loss[loss=0.2319, simple_loss=0.3248, pruned_loss=0.06954, over 7232.00 frames.], tot_loss[loss=0.2243, simple_loss=0.3014, pruned_loss=0.07359, over 1417881.69 frames.], batch size: 20, lr: 6.63e-04 2022-05-27 03:37:53,567 INFO [train.py:842] (0/4) Epoch 8, batch 2150, loss[loss=0.238, simple_loss=0.311, pruned_loss=0.08252, over 7203.00 frames.], tot_loss[loss=0.2257, simple_loss=0.3025, pruned_loss=0.07444, over 1419339.11 frames.], batch size: 22, lr: 6.63e-04 2022-05-27 03:38:32,391 INFO [train.py:842] (0/4) Epoch 8, batch 2200, loss[loss=0.2338, simple_loss=0.3209, pruned_loss=0.07338, over 7285.00 frames.], tot_loss[loss=0.224, simple_loss=0.301, pruned_loss=0.07347, over 1417257.25 frames.], batch size: 24, lr: 6.62e-04 2022-05-27 03:39:11,113 INFO [train.py:842] (0/4) Epoch 8, batch 2250, loss[loss=0.1962, simple_loss=0.2892, pruned_loss=0.05157, over 7209.00 frames.], tot_loss[loss=0.2231, simple_loss=0.3002, pruned_loss=0.07299, over 1411068.27 frames.], batch size: 23, lr: 6.62e-04 2022-05-27 03:39:49,976 INFO [train.py:842] (0/4) Epoch 8, batch 2300, loss[loss=0.2154, simple_loss=0.2883, pruned_loss=0.07126, over 7421.00 frames.], tot_loss[loss=0.2227, simple_loss=0.2996, pruned_loss=0.0729, over 1411358.77 frames.], batch size: 18, lr: 6.62e-04 2022-05-27 03:40:28,563 INFO [train.py:842] (0/4) Epoch 8, batch 2350, loss[loss=0.2303, simple_loss=0.292, pruned_loss=0.08435, over 7062.00 frames.], tot_loss[loss=0.2227, simple_loss=0.2997, pruned_loss=0.07285, over 1411284.40 frames.], batch size: 18, lr: 6.62e-04 2022-05-27 03:41:07,421 INFO [train.py:842] (0/4) Epoch 8, batch 2400, loss[loss=0.2045, simple_loss=0.2862, pruned_loss=0.06145, over 7252.00 frames.], tot_loss[loss=0.2223, simple_loss=0.2992, pruned_loss=0.07269, over 1414980.16 frames.], batch size: 19, lr: 6.61e-04 2022-05-27 03:41:46,085 INFO 
[train.py:842] (0/4) Epoch 8, batch 2450, loss[loss=0.2487, simple_loss=0.3206, pruned_loss=0.08839, over 7292.00 frames.], tot_loss[loss=0.2219, simple_loss=0.2987, pruned_loss=0.07256, over 1421711.87 frames.], batch size: 24, lr: 6.61e-04 2022-05-27 03:42:24,988 INFO [train.py:842] (0/4) Epoch 8, batch 2500, loss[loss=0.2156, simple_loss=0.3032, pruned_loss=0.06401, over 7308.00 frames.], tot_loss[loss=0.2239, simple_loss=0.3007, pruned_loss=0.07356, over 1420074.29 frames.], batch size: 21, lr: 6.61e-04 2022-05-27 03:43:03,692 INFO [train.py:842] (0/4) Epoch 8, batch 2550, loss[loss=0.2356, simple_loss=0.2988, pruned_loss=0.08616, over 7351.00 frames.], tot_loss[loss=0.2242, simple_loss=0.3005, pruned_loss=0.07393, over 1424564.10 frames.], batch size: 19, lr: 6.61e-04 2022-05-27 03:43:43,040 INFO [train.py:842] (0/4) Epoch 8, batch 2600, loss[loss=0.1886, simple_loss=0.2589, pruned_loss=0.05912, over 6773.00 frames.], tot_loss[loss=0.2237, simple_loss=0.3002, pruned_loss=0.07355, over 1424913.31 frames.], batch size: 15, lr: 6.60e-04 2022-05-27 03:44:21,510 INFO [train.py:842] (0/4) Epoch 8, batch 2650, loss[loss=0.217, simple_loss=0.3015, pruned_loss=0.06629, over 7110.00 frames.], tot_loss[loss=0.2225, simple_loss=0.2995, pruned_loss=0.07278, over 1425305.79 frames.], batch size: 21, lr: 6.60e-04 2022-05-27 03:45:00,449 INFO [train.py:842] (0/4) Epoch 8, batch 2700, loss[loss=0.2003, simple_loss=0.2689, pruned_loss=0.06583, over 7222.00 frames.], tot_loss[loss=0.2217, simple_loss=0.299, pruned_loss=0.0722, over 1428241.98 frames.], batch size: 16, lr: 6.60e-04 2022-05-27 03:45:39,000 INFO [train.py:842] (0/4) Epoch 8, batch 2750, loss[loss=0.1907, simple_loss=0.2597, pruned_loss=0.06085, over 7424.00 frames.], tot_loss[loss=0.2211, simple_loss=0.2981, pruned_loss=0.0721, over 1427131.34 frames.], batch size: 17, lr: 6.60e-04 2022-05-27 03:46:17,903 INFO [train.py:842] (0/4) Epoch 8, batch 2800, loss[loss=0.2336, simple_loss=0.3057, pruned_loss=0.0808, over 7149.00 frames.], tot_loss[loss=0.2218, simple_loss=0.2986, pruned_loss=0.07248, over 1427522.21 frames.], batch size: 20, lr: 6.60e-04 2022-05-27 03:46:56,409 INFO [train.py:842] (0/4) Epoch 8, batch 2850, loss[loss=0.205, simple_loss=0.2884, pruned_loss=0.06081, over 7214.00 frames.], tot_loss[loss=0.2221, simple_loss=0.299, pruned_loss=0.07264, over 1426130.59 frames.], batch size: 22, lr: 6.59e-04 2022-05-27 03:47:35,362 INFO [train.py:842] (0/4) Epoch 8, batch 2900, loss[loss=0.2225, simple_loss=0.2946, pruned_loss=0.07517, over 7128.00 frames.], tot_loss[loss=0.2209, simple_loss=0.2987, pruned_loss=0.07154, over 1425747.29 frames.], batch size: 17, lr: 6.59e-04 2022-05-27 03:48:13,869 INFO [train.py:842] (0/4) Epoch 8, batch 2950, loss[loss=0.1771, simple_loss=0.2725, pruned_loss=0.0408, over 7074.00 frames.], tot_loss[loss=0.2189, simple_loss=0.2968, pruned_loss=0.07053, over 1424704.91 frames.], batch size: 18, lr: 6.59e-04 2022-05-27 03:48:52,480 INFO [train.py:842] (0/4) Epoch 8, batch 3000, loss[loss=0.2356, simple_loss=0.3026, pruned_loss=0.08428, over 5386.00 frames.], tot_loss[loss=0.2206, simple_loss=0.2982, pruned_loss=0.07149, over 1422495.54 frames.], batch size: 52, lr: 6.59e-04 2022-05-27 03:48:52,481 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 03:49:01,707 INFO [train.py:871] (0/4) Epoch 8, validation: loss=0.1787, simple_loss=0.2793, pruned_loss=0.03905, over 868885.00 frames. 
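Each "Computing validation loss" entry appears at a multiple of 3000 batches within the epoch, matching the configured valid_interval, and always reports its result over the same 868885.00 frames: the full dev set loaded at startup is consumed every time, so the frame total is constant, unlike the drifting frame totals of the training tot_loss window. Below is a minimal sketch of that frame-weighted pooling, assuming a compute_loss helper that returns (summed loss, frame count) per batch; both names are placeholders, not the recipe's API, and the real code tracks several loss components the same way.

from typing import Callable, Iterable, Tuple

import torch


def validate(
    model: torch.nn.Module,
    valid_dl: Iterable,
    compute_loss: Callable[[torch.nn.Module, object], Tuple[float, int]],
) -> float:
    # Evaluate the whole dev set without gradient updates and pool by frames.
    model.eval()
    loss_sum, frame_sum = 0.0, 0
    with torch.no_grad():
        for batch in valid_dl:
            batch_loss, batch_frames = compute_loss(model, batch)
            loss_sum += batch_loss
            frame_sum += batch_frames
    model.train()
    # The log prints loss_sum / frame_sum "over 868885.00 frames."
    return loss_sum / frame_sum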
2022-05-27 03:49:40,133 INFO [train.py:842] (0/4) Epoch 8, batch 3050, loss[loss=0.2134, simple_loss=0.296, pruned_loss=0.06538, over 6415.00 frames.], tot_loss[loss=0.2205, simple_loss=0.2975, pruned_loss=0.07175, over 1415360.25 frames.], batch size: 37, lr: 6.58e-04 2022-05-27 03:50:18,978 INFO [train.py:842] (0/4) Epoch 8, batch 3100, loss[loss=0.2309, simple_loss=0.3098, pruned_loss=0.07598, over 7262.00 frames.], tot_loss[loss=0.2206, simple_loss=0.2975, pruned_loss=0.07181, over 1419854.76 frames.], batch size: 19, lr: 6.58e-04 2022-05-27 03:50:57,750 INFO [train.py:842] (0/4) Epoch 8, batch 3150, loss[loss=0.2408, simple_loss=0.3185, pruned_loss=0.08155, over 7444.00 frames.], tot_loss[loss=0.2203, simple_loss=0.2967, pruned_loss=0.07193, over 1420677.96 frames.], batch size: 20, lr: 6.58e-04 2022-05-27 03:51:36,888 INFO [train.py:842] (0/4) Epoch 8, batch 3200, loss[loss=0.1843, simple_loss=0.2687, pruned_loss=0.04997, over 7422.00 frames.], tot_loss[loss=0.2198, simple_loss=0.2965, pruned_loss=0.07152, over 1423987.77 frames.], batch size: 20, lr: 6.58e-04 2022-05-27 03:52:15,376 INFO [train.py:842] (0/4) Epoch 8, batch 3250, loss[loss=0.2404, simple_loss=0.312, pruned_loss=0.08438, over 7035.00 frames.], tot_loss[loss=0.2201, simple_loss=0.2974, pruned_loss=0.07143, over 1422884.99 frames.], batch size: 28, lr: 6.57e-04 2022-05-27 03:52:54,429 INFO [train.py:842] (0/4) Epoch 8, batch 3300, loss[loss=0.222, simple_loss=0.3057, pruned_loss=0.06922, over 6750.00 frames.], tot_loss[loss=0.2204, simple_loss=0.2974, pruned_loss=0.07169, over 1420955.59 frames.], batch size: 31, lr: 6.57e-04 2022-05-27 03:53:33,011 INFO [train.py:842] (0/4) Epoch 8, batch 3350, loss[loss=0.199, simple_loss=0.283, pruned_loss=0.05748, over 7430.00 frames.], tot_loss[loss=0.2216, simple_loss=0.2983, pruned_loss=0.07239, over 1419908.37 frames.], batch size: 20, lr: 6.57e-04 2022-05-27 03:54:11,879 INFO [train.py:842] (0/4) Epoch 8, batch 3400, loss[loss=0.2412, simple_loss=0.3224, pruned_loss=0.08001, over 6752.00 frames.], tot_loss[loss=0.2211, simple_loss=0.2979, pruned_loss=0.07215, over 1417459.39 frames.], batch size: 31, lr: 6.57e-04 2022-05-27 03:54:50,303 INFO [train.py:842] (0/4) Epoch 8, batch 3450, loss[loss=0.2055, simple_loss=0.2741, pruned_loss=0.06841, over 7420.00 frames.], tot_loss[loss=0.2234, simple_loss=0.3004, pruned_loss=0.07314, over 1421161.75 frames.], batch size: 18, lr: 6.56e-04 2022-05-27 03:55:29,171 INFO [train.py:842] (0/4) Epoch 8, batch 3500, loss[loss=0.2164, simple_loss=0.3, pruned_loss=0.06641, over 7385.00 frames.], tot_loss[loss=0.2236, simple_loss=0.3011, pruned_loss=0.07306, over 1420660.77 frames.], batch size: 23, lr: 6.56e-04 2022-05-27 03:56:07,713 INFO [train.py:842] (0/4) Epoch 8, batch 3550, loss[loss=0.2822, simple_loss=0.3424, pruned_loss=0.111, over 7255.00 frames.], tot_loss[loss=0.2227, simple_loss=0.3, pruned_loss=0.07271, over 1421607.79 frames.], batch size: 19, lr: 6.56e-04 2022-05-27 03:56:46,632 INFO [train.py:842] (0/4) Epoch 8, batch 3600, loss[loss=0.1903, simple_loss=0.2634, pruned_loss=0.05857, over 7276.00 frames.], tot_loss[loss=0.2223, simple_loss=0.2992, pruned_loss=0.07273, over 1420147.51 frames.], batch size: 17, lr: 6.56e-04 2022-05-27 03:57:25,064 INFO [train.py:842] (0/4) Epoch 8, batch 3650, loss[loss=0.1998, simple_loss=0.2912, pruned_loss=0.05418, over 7414.00 frames.], tot_loss[loss=0.2235, simple_loss=0.3004, pruned_loss=0.07334, over 1414864.76 frames.], batch size: 21, lr: 6.55e-04 2022-05-27 03:58:03,908 INFO [train.py:842] 
(0/4) Epoch 8, batch 3700, loss[loss=0.2045, simple_loss=0.2913, pruned_loss=0.0589, over 7344.00 frames.], tot_loss[loss=0.2207, simple_loss=0.2985, pruned_loss=0.07146, over 1418694.74 frames.], batch size: 22, lr: 6.55e-04 2022-05-27 03:58:42,481 INFO [train.py:842] (0/4) Epoch 8, batch 3750, loss[loss=0.3494, simple_loss=0.3736, pruned_loss=0.1626, over 7383.00 frames.], tot_loss[loss=0.2207, simple_loss=0.2985, pruned_loss=0.07149, over 1417682.46 frames.], batch size: 23, lr: 6.55e-04 2022-05-27 03:59:21,172 INFO [train.py:842] (0/4) Epoch 8, batch 3800, loss[loss=0.2047, simple_loss=0.2922, pruned_loss=0.05857, over 7203.00 frames.], tot_loss[loss=0.2214, simple_loss=0.2995, pruned_loss=0.07164, over 1418521.94 frames.], batch size: 22, lr: 6.55e-04 2022-05-27 03:59:59,656 INFO [train.py:842] (0/4) Epoch 8, batch 3850, loss[loss=0.2312, simple_loss=0.3096, pruned_loss=0.0764, over 7324.00 frames.], tot_loss[loss=0.2233, simple_loss=0.3012, pruned_loss=0.07274, over 1417288.81 frames.], batch size: 25, lr: 6.54e-04 2022-05-27 04:00:38,573 INFO [train.py:842] (0/4) Epoch 8, batch 3900, loss[loss=0.1632, simple_loss=0.2443, pruned_loss=0.04107, over 7071.00 frames.], tot_loss[loss=0.2221, simple_loss=0.2996, pruned_loss=0.07229, over 1421697.62 frames.], batch size: 18, lr: 6.54e-04 2022-05-27 04:01:17,108 INFO [train.py:842] (0/4) Epoch 8, batch 3950, loss[loss=0.2288, simple_loss=0.3082, pruned_loss=0.07468, over 7198.00 frames.], tot_loss[loss=0.2223, simple_loss=0.3002, pruned_loss=0.07217, over 1424302.66 frames.], batch size: 22, lr: 6.54e-04 2022-05-27 04:01:55,874 INFO [train.py:842] (0/4) Epoch 8, batch 4000, loss[loss=0.2303, simple_loss=0.3178, pruned_loss=0.07138, over 7411.00 frames.], tot_loss[loss=0.2231, simple_loss=0.3008, pruned_loss=0.07272, over 1423109.80 frames.], batch size: 21, lr: 6.54e-04 2022-05-27 04:02:34,545 INFO [train.py:842] (0/4) Epoch 8, batch 4050, loss[loss=0.2125, simple_loss=0.2764, pruned_loss=0.07434, over 7014.00 frames.], tot_loss[loss=0.2235, simple_loss=0.3009, pruned_loss=0.07305, over 1423669.24 frames.], batch size: 16, lr: 6.53e-04 2022-05-27 04:03:13,364 INFO [train.py:842] (0/4) Epoch 8, batch 4100, loss[loss=0.3905, simple_loss=0.4087, pruned_loss=0.1862, over 7400.00 frames.], tot_loss[loss=0.2251, simple_loss=0.3018, pruned_loss=0.07425, over 1417771.57 frames.], batch size: 23, lr: 6.53e-04 2022-05-27 04:03:51,865 INFO [train.py:842] (0/4) Epoch 8, batch 4150, loss[loss=0.2948, simple_loss=0.3584, pruned_loss=0.1156, over 6852.00 frames.], tot_loss[loss=0.2238, simple_loss=0.3009, pruned_loss=0.07336, over 1418206.20 frames.], batch size: 31, lr: 6.53e-04 2022-05-27 04:04:30,524 INFO [train.py:842] (0/4) Epoch 8, batch 4200, loss[loss=0.2371, simple_loss=0.3095, pruned_loss=0.08234, over 7065.00 frames.], tot_loss[loss=0.2247, simple_loss=0.3016, pruned_loss=0.07385, over 1417072.14 frames.], batch size: 18, lr: 6.53e-04 2022-05-27 04:05:09,129 INFO [train.py:842] (0/4) Epoch 8, batch 4250, loss[loss=0.2381, simple_loss=0.3221, pruned_loss=0.07704, over 7137.00 frames.], tot_loss[loss=0.2263, simple_loss=0.3032, pruned_loss=0.07472, over 1416569.95 frames.], batch size: 26, lr: 6.53e-04 2022-05-27 04:05:47,935 INFO [train.py:842] (0/4) Epoch 8, batch 4300, loss[loss=0.2388, simple_loss=0.3151, pruned_loss=0.08126, over 7070.00 frames.], tot_loss[loss=0.2236, simple_loss=0.3012, pruned_loss=0.07306, over 1423759.14 frames.], batch size: 28, lr: 6.52e-04 2022-05-27 04:06:26,323 INFO [train.py:842] (0/4) Epoch 8, batch 4350, 
loss[loss=0.1815, simple_loss=0.2698, pruned_loss=0.04659, over 7208.00 frames.], tot_loss[loss=0.2242, simple_loss=0.3014, pruned_loss=0.07355, over 1423102.81 frames.], batch size: 22, lr: 6.52e-04 2022-05-27 04:07:05,505 INFO [train.py:842] (0/4) Epoch 8, batch 4400, loss[loss=0.2415, simple_loss=0.3063, pruned_loss=0.08834, over 7161.00 frames.], tot_loss[loss=0.222, simple_loss=0.2991, pruned_loss=0.07249, over 1421303.18 frames.], batch size: 19, lr: 6.52e-04 2022-05-27 04:07:43,979 INFO [train.py:842] (0/4) Epoch 8, batch 4450, loss[loss=0.2001, simple_loss=0.2856, pruned_loss=0.0573, over 7343.00 frames.], tot_loss[loss=0.2219, simple_loss=0.2991, pruned_loss=0.0724, over 1421904.90 frames.], batch size: 22, lr: 6.52e-04 2022-05-27 04:08:22,901 INFO [train.py:842] (0/4) Epoch 8, batch 4500, loss[loss=0.1684, simple_loss=0.2482, pruned_loss=0.04425, over 7147.00 frames.], tot_loss[loss=0.2212, simple_loss=0.2988, pruned_loss=0.07182, over 1423477.40 frames.], batch size: 17, lr: 6.51e-04 2022-05-27 04:09:01,452 INFO [train.py:842] (0/4) Epoch 8, batch 4550, loss[loss=0.1913, simple_loss=0.2666, pruned_loss=0.05796, over 7256.00 frames.], tot_loss[loss=0.2221, simple_loss=0.2996, pruned_loss=0.07234, over 1425655.11 frames.], batch size: 19, lr: 6.51e-04 2022-05-27 04:09:40,154 INFO [train.py:842] (0/4) Epoch 8, batch 4600, loss[loss=0.2485, simple_loss=0.3146, pruned_loss=0.09118, over 6701.00 frames.], tot_loss[loss=0.224, simple_loss=0.3015, pruned_loss=0.0733, over 1423410.10 frames.], batch size: 31, lr: 6.51e-04 2022-05-27 04:10:18,840 INFO [train.py:842] (0/4) Epoch 8, batch 4650, loss[loss=0.2116, simple_loss=0.2933, pruned_loss=0.06498, over 6976.00 frames.], tot_loss[loss=0.2231, simple_loss=0.3004, pruned_loss=0.07288, over 1420493.45 frames.], batch size: 28, lr: 6.51e-04 2022-05-27 04:10:57,632 INFO [train.py:842] (0/4) Epoch 8, batch 4700, loss[loss=0.2558, simple_loss=0.3186, pruned_loss=0.09647, over 7279.00 frames.], tot_loss[loss=0.2222, simple_loss=0.2996, pruned_loss=0.0724, over 1422513.64 frames.], batch size: 25, lr: 6.50e-04 2022-05-27 04:11:36,316 INFO [train.py:842] (0/4) Epoch 8, batch 4750, loss[loss=0.2005, simple_loss=0.2781, pruned_loss=0.06148, over 7431.00 frames.], tot_loss[loss=0.2237, simple_loss=0.3007, pruned_loss=0.07336, over 1420032.43 frames.], batch size: 20, lr: 6.50e-04 2022-05-27 04:12:15,130 INFO [train.py:842] (0/4) Epoch 8, batch 4800, loss[loss=0.2381, simple_loss=0.3218, pruned_loss=0.07723, over 7159.00 frames.], tot_loss[loss=0.2245, simple_loss=0.3014, pruned_loss=0.0738, over 1423115.84 frames.], batch size: 26, lr: 6.50e-04 2022-05-27 04:12:53,770 INFO [train.py:842] (0/4) Epoch 8, batch 4850, loss[loss=0.262, simple_loss=0.3242, pruned_loss=0.09987, over 7365.00 frames.], tot_loss[loss=0.2236, simple_loss=0.3007, pruned_loss=0.07321, over 1428727.68 frames.], batch size: 19, lr: 6.50e-04 2022-05-27 04:13:32,360 INFO [train.py:842] (0/4) Epoch 8, batch 4900, loss[loss=0.2901, simple_loss=0.363, pruned_loss=0.1086, over 6724.00 frames.], tot_loss[loss=0.2241, simple_loss=0.3009, pruned_loss=0.07364, over 1426866.18 frames.], batch size: 31, lr: 6.49e-04 2022-05-27 04:14:10,920 INFO [train.py:842] (0/4) Epoch 8, batch 4950, loss[loss=0.204, simple_loss=0.2816, pruned_loss=0.0632, over 7060.00 frames.], tot_loss[loss=0.223, simple_loss=0.3002, pruned_loss=0.07288, over 1426216.45 frames.], batch size: 18, lr: 6.49e-04 2022-05-27 04:14:50,191 INFO [train.py:842] (0/4) Epoch 8, batch 5000, loss[loss=0.2344, simple_loss=0.3089, 
pruned_loss=0.07995, over 7249.00 frames.], tot_loss[loss=0.2228, simple_loss=0.3, pruned_loss=0.07282, over 1421241.93 frames.], batch size: 19, lr: 6.49e-04 2022-05-27 04:15:28,676 INFO [train.py:842] (0/4) Epoch 8, batch 5050, loss[loss=0.1997, simple_loss=0.2865, pruned_loss=0.05647, over 6474.00 frames.], tot_loss[loss=0.2237, simple_loss=0.3008, pruned_loss=0.07329, over 1421072.21 frames.], batch size: 38, lr: 6.49e-04 2022-05-27 04:16:07,693 INFO [train.py:842] (0/4) Epoch 8, batch 5100, loss[loss=0.2527, simple_loss=0.32, pruned_loss=0.09272, over 7259.00 frames.], tot_loss[loss=0.2236, simple_loss=0.3005, pruned_loss=0.07332, over 1426048.67 frames.], batch size: 17, lr: 6.49e-04 2022-05-27 04:16:46,308 INFO [train.py:842] (0/4) Epoch 8, batch 5150, loss[loss=0.1627, simple_loss=0.2541, pruned_loss=0.03566, over 7359.00 frames.], tot_loss[loss=0.2248, simple_loss=0.3018, pruned_loss=0.07395, over 1427936.38 frames.], batch size: 19, lr: 6.48e-04 2022-05-27 04:17:25,420 INFO [train.py:842] (0/4) Epoch 8, batch 5200, loss[loss=0.2421, simple_loss=0.3317, pruned_loss=0.0763, over 7263.00 frames.], tot_loss[loss=0.2232, simple_loss=0.3003, pruned_loss=0.0731, over 1426482.48 frames.], batch size: 19, lr: 6.48e-04 2022-05-27 04:18:03,674 INFO [train.py:842] (0/4) Epoch 8, batch 5250, loss[loss=0.2075, simple_loss=0.2931, pruned_loss=0.06097, over 7151.00 frames.], tot_loss[loss=0.2232, simple_loss=0.3003, pruned_loss=0.07302, over 1419672.16 frames.], batch size: 19, lr: 6.48e-04 2022-05-27 04:18:42,888 INFO [train.py:842] (0/4) Epoch 8, batch 5300, loss[loss=0.2126, simple_loss=0.2898, pruned_loss=0.06769, over 7163.00 frames.], tot_loss[loss=0.223, simple_loss=0.3001, pruned_loss=0.07298, over 1419200.47 frames.], batch size: 19, lr: 6.48e-04 2022-05-27 04:19:21,453 INFO [train.py:842] (0/4) Epoch 8, batch 5350, loss[loss=0.2199, simple_loss=0.2937, pruned_loss=0.07308, over 7154.00 frames.], tot_loss[loss=0.2229, simple_loss=0.2997, pruned_loss=0.07299, over 1419776.74 frames.], batch size: 19, lr: 6.47e-04 2022-05-27 04:20:00,569 INFO [train.py:842] (0/4) Epoch 8, batch 5400, loss[loss=0.1915, simple_loss=0.2914, pruned_loss=0.04583, over 7311.00 frames.], tot_loss[loss=0.2237, simple_loss=0.3003, pruned_loss=0.07359, over 1418326.45 frames.], batch size: 21, lr: 6.47e-04 2022-05-27 04:20:39,240 INFO [train.py:842] (0/4) Epoch 8, batch 5450, loss[loss=0.1709, simple_loss=0.2513, pruned_loss=0.04525, over 7354.00 frames.], tot_loss[loss=0.2223, simple_loss=0.2988, pruned_loss=0.07285, over 1418326.61 frames.], batch size: 19, lr: 6.47e-04 2022-05-27 04:21:18,484 INFO [train.py:842] (0/4) Epoch 8, batch 5500, loss[loss=0.1929, simple_loss=0.2752, pruned_loss=0.05527, over 7360.00 frames.], tot_loss[loss=0.2204, simple_loss=0.2975, pruned_loss=0.07167, over 1419077.74 frames.], batch size: 19, lr: 6.47e-04 2022-05-27 04:21:56,781 INFO [train.py:842] (0/4) Epoch 8, batch 5550, loss[loss=0.2961, simple_loss=0.3654, pruned_loss=0.1134, over 7131.00 frames.], tot_loss[loss=0.2222, simple_loss=0.2994, pruned_loss=0.07255, over 1415997.03 frames.], batch size: 20, lr: 6.46e-04 2022-05-27 04:22:35,526 INFO [train.py:842] (0/4) Epoch 8, batch 5600, loss[loss=0.1931, simple_loss=0.2759, pruned_loss=0.0552, over 7279.00 frames.], tot_loss[loss=0.2213, simple_loss=0.2992, pruned_loss=0.07167, over 1416414.64 frames.], batch size: 18, lr: 6.46e-04 2022-05-27 04:23:14,203 INFO [train.py:842] (0/4) Epoch 8, batch 5650, loss[loss=0.2588, simple_loss=0.3358, pruned_loss=0.09089, over 7343.00 
frames.], tot_loss[loss=0.2206, simple_loss=0.2986, pruned_loss=0.07131, over 1417129.64 frames.], batch size: 22, lr: 6.46e-04 2022-05-27 04:23:53,469 INFO [train.py:842] (0/4) Epoch 8, batch 5700, loss[loss=0.2217, simple_loss=0.3001, pruned_loss=0.07166, over 7248.00 frames.], tot_loss[loss=0.2221, simple_loss=0.2996, pruned_loss=0.07234, over 1421920.92 frames.], batch size: 20, lr: 6.46e-04 2022-05-27 04:24:32,080 INFO [train.py:842] (0/4) Epoch 8, batch 5750, loss[loss=0.176, simple_loss=0.2605, pruned_loss=0.04571, over 7065.00 frames.], tot_loss[loss=0.2204, simple_loss=0.2983, pruned_loss=0.07127, over 1426328.66 frames.], batch size: 18, lr: 6.46e-04 2022-05-27 04:25:10,878 INFO [train.py:842] (0/4) Epoch 8, batch 5800, loss[loss=0.1762, simple_loss=0.2482, pruned_loss=0.05211, over 7307.00 frames.], tot_loss[loss=0.2205, simple_loss=0.2987, pruned_loss=0.07113, over 1424911.77 frames.], batch size: 17, lr: 6.45e-04 2022-05-27 04:25:49,424 INFO [train.py:842] (0/4) Epoch 8, batch 5850, loss[loss=0.2113, simple_loss=0.3073, pruned_loss=0.0576, over 7231.00 frames.], tot_loss[loss=0.2207, simple_loss=0.2989, pruned_loss=0.07126, over 1426902.92 frames.], batch size: 20, lr: 6.45e-04 2022-05-27 04:26:28,993 INFO [train.py:842] (0/4) Epoch 8, batch 5900, loss[loss=0.1986, simple_loss=0.2647, pruned_loss=0.06627, over 7256.00 frames.], tot_loss[loss=0.221, simple_loss=0.2987, pruned_loss=0.07161, over 1425896.89 frames.], batch size: 17, lr: 6.45e-04 2022-05-27 04:27:07,437 INFO [train.py:842] (0/4) Epoch 8, batch 5950, loss[loss=0.1617, simple_loss=0.243, pruned_loss=0.04016, over 7270.00 frames.], tot_loss[loss=0.2216, simple_loss=0.2994, pruned_loss=0.07193, over 1424796.61 frames.], batch size: 17, lr: 6.45e-04 2022-05-27 04:27:46,629 INFO [train.py:842] (0/4) Epoch 8, batch 6000, loss[loss=0.2491, simple_loss=0.3322, pruned_loss=0.08294, over 7322.00 frames.], tot_loss[loss=0.2229, simple_loss=0.3006, pruned_loss=0.0726, over 1425276.31 frames.], batch size: 21, lr: 6.44e-04 2022-05-27 04:27:46,630 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 04:27:55,933 INFO [train.py:871] (0/4) Epoch 8, validation: loss=0.1785, simple_loss=0.2788, pruned_loss=0.03907, over 868885.00 frames. 
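The training tot_loss[...] totals hover around 1.42M frames, roughly reset_interval = 200 recent batches at about 7000 frames each, which suggests the tracker is an exponentially decayed sum of per-batch (loss, frames) pairs with decay 1 - 1/reset_interval, printed as loss-sum divided by frame-sum. The class below implements that reading; it is inferred from the logged numbers (the decay rule and the MovingLoss name are assumptions), not the recipe's tracker class.

class MovingLoss:
    """Exponentially decayed (loss, frames) accumulator."""

    def __init__(self, reset_interval: int = 200) -> None:
        self.decay = 1.0 - 1.0 / reset_interval
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss_sum: float, batch_frames: float) -> None:
        # Older batches are down-weighted geometrically, so the effective
        # window is about reset_interval batches.
        self.loss_sum = self.loss_sum * self.decay + batch_loss_sum
        self.frames = self.frames * self.decay + batch_frames

    @property
    def value(self) -> float:
        # What the log prints as tot_loss[loss=..., over <frames> frames.]
        return self.loss_sum / max(self.frames, 1.0)


if __name__ == "__main__":
    tracker = MovingLoss()
    for _ in range(2000):
        tracker.update(batch_loss_sum=0.22 * 7100, batch_frames=7100.0)
    print(round(tracker.frames))    # ~1420000, cf. the ~1.42M frame totals above
    print(round(tracker.value, 3))  # 0.22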
2022-05-27 04:28:34,539 INFO [train.py:842] (0/4) Epoch 8, batch 6050, loss[loss=0.1903, simple_loss=0.2781, pruned_loss=0.05132, over 7351.00 frames.], tot_loss[loss=0.2226, simple_loss=0.3002, pruned_loss=0.07252, over 1425528.70 frames.], batch size: 19, lr: 6.44e-04 2022-05-27 04:29:13,581 INFO [train.py:842] (0/4) Epoch 8, batch 6100, loss[loss=0.1956, simple_loss=0.2704, pruned_loss=0.06045, over 7161.00 frames.], tot_loss[loss=0.2226, simple_loss=0.3, pruned_loss=0.0726, over 1428225.71 frames.], batch size: 18, lr: 6.44e-04 2022-05-27 04:29:52,011 INFO [train.py:842] (0/4) Epoch 8, batch 6150, loss[loss=0.184, simple_loss=0.254, pruned_loss=0.05705, over 7279.00 frames.], tot_loss[loss=0.2218, simple_loss=0.2996, pruned_loss=0.07198, over 1426993.81 frames.], batch size: 18, lr: 6.44e-04 2022-05-27 04:30:30,792 INFO [train.py:842] (0/4) Epoch 8, batch 6200, loss[loss=0.225, simple_loss=0.3055, pruned_loss=0.07219, over 7302.00 frames.], tot_loss[loss=0.2218, simple_loss=0.2993, pruned_loss=0.07215, over 1427612.50 frames.], batch size: 25, lr: 6.43e-04 2022-05-27 04:31:09,334 INFO [train.py:842] (0/4) Epoch 8, batch 6250, loss[loss=0.2321, simple_loss=0.3118, pruned_loss=0.07622, over 7218.00 frames.], tot_loss[loss=0.2228, simple_loss=0.3, pruned_loss=0.07275, over 1426876.49 frames.], batch size: 22, lr: 6.43e-04 2022-05-27 04:31:47,864 INFO [train.py:842] (0/4) Epoch 8, batch 6300, loss[loss=0.2817, simple_loss=0.3524, pruned_loss=0.1055, over 7212.00 frames.], tot_loss[loss=0.2231, simple_loss=0.3005, pruned_loss=0.07282, over 1427346.67 frames.], batch size: 23, lr: 6.43e-04 2022-05-27 04:32:26,526 INFO [train.py:842] (0/4) Epoch 8, batch 6350, loss[loss=0.2075, simple_loss=0.2785, pruned_loss=0.06825, over 6855.00 frames.], tot_loss[loss=0.2234, simple_loss=0.3008, pruned_loss=0.07301, over 1426587.19 frames.], batch size: 15, lr: 6.43e-04 2022-05-27 04:33:05,971 INFO [train.py:842] (0/4) Epoch 8, batch 6400, loss[loss=0.2248, simple_loss=0.3116, pruned_loss=0.06904, over 7114.00 frames.], tot_loss[loss=0.2231, simple_loss=0.3004, pruned_loss=0.07293, over 1424099.42 frames.], batch size: 21, lr: 6.43e-04 2022-05-27 04:33:44,694 INFO [train.py:842] (0/4) Epoch 8, batch 6450, loss[loss=0.1806, simple_loss=0.2592, pruned_loss=0.05097, over 7271.00 frames.], tot_loss[loss=0.2208, simple_loss=0.298, pruned_loss=0.07183, over 1429135.45 frames.], batch size: 18, lr: 6.42e-04 2022-05-27 04:34:23,914 INFO [train.py:842] (0/4) Epoch 8, batch 6500, loss[loss=0.2353, simple_loss=0.3157, pruned_loss=0.07747, over 7045.00 frames.], tot_loss[loss=0.2204, simple_loss=0.2975, pruned_loss=0.07168, over 1427303.74 frames.], batch size: 28, lr: 6.42e-04 2022-05-27 04:35:02,476 INFO [train.py:842] (0/4) Epoch 8, batch 6550, loss[loss=0.1893, simple_loss=0.2539, pruned_loss=0.06237, over 7001.00 frames.], tot_loss[loss=0.2211, simple_loss=0.2987, pruned_loss=0.07172, over 1429311.41 frames.], batch size: 16, lr: 6.42e-04 2022-05-27 04:35:41,387 INFO [train.py:842] (0/4) Epoch 8, batch 6600, loss[loss=0.2068, simple_loss=0.2818, pruned_loss=0.06593, over 7157.00 frames.], tot_loss[loss=0.2206, simple_loss=0.2977, pruned_loss=0.07174, over 1429284.43 frames.], batch size: 19, lr: 6.42e-04 2022-05-27 04:36:20,212 INFO [train.py:842] (0/4) Epoch 8, batch 6650, loss[loss=0.2377, simple_loss=0.3118, pruned_loss=0.08181, over 7287.00 frames.], tot_loss[loss=0.2222, simple_loss=0.2986, pruned_loss=0.07291, over 1426317.99 frames.], batch size: 24, lr: 6.41e-04 2022-05-27 04:36:59,109 INFO [train.py:842] 
(0/4) Epoch 8, batch 6700, loss[loss=0.2335, simple_loss=0.3044, pruned_loss=0.08131, over 6559.00 frames.], tot_loss[loss=0.2195, simple_loss=0.2966, pruned_loss=0.07125, over 1426854.01 frames.], batch size: 38, lr: 6.41e-04 2022-05-27 04:37:37,749 INFO [train.py:842] (0/4) Epoch 8, batch 6750, loss[loss=0.1957, simple_loss=0.2879, pruned_loss=0.05178, over 7328.00 frames.], tot_loss[loss=0.2181, simple_loss=0.296, pruned_loss=0.07004, over 1429826.70 frames.], batch size: 22, lr: 6.41e-04 2022-05-27 04:38:16,587 INFO [train.py:842] (0/4) Epoch 8, batch 6800, loss[loss=0.1949, simple_loss=0.2838, pruned_loss=0.05301, over 7317.00 frames.], tot_loss[loss=0.2172, simple_loss=0.2954, pruned_loss=0.06951, over 1429289.23 frames.], batch size: 21, lr: 6.41e-04 2022-05-27 04:38:55,341 INFO [train.py:842] (0/4) Epoch 8, batch 6850, loss[loss=0.2377, simple_loss=0.3222, pruned_loss=0.07663, over 7226.00 frames.], tot_loss[loss=0.2171, simple_loss=0.2956, pruned_loss=0.0693, over 1431782.09 frames.], batch size: 20, lr: 6.41e-04 2022-05-27 04:39:34,216 INFO [train.py:842] (0/4) Epoch 8, batch 6900, loss[loss=0.2104, simple_loss=0.2856, pruned_loss=0.06764, over 7278.00 frames.], tot_loss[loss=0.2184, simple_loss=0.2959, pruned_loss=0.07048, over 1431945.41 frames.], batch size: 18, lr: 6.40e-04 2022-05-27 04:40:12,638 INFO [train.py:842] (0/4) Epoch 8, batch 6950, loss[loss=0.2329, simple_loss=0.3004, pruned_loss=0.08271, over 7260.00 frames.], tot_loss[loss=0.219, simple_loss=0.297, pruned_loss=0.07056, over 1428682.75 frames.], batch size: 19, lr: 6.40e-04 2022-05-27 04:40:51,441 INFO [train.py:842] (0/4) Epoch 8, batch 7000, loss[loss=0.2398, simple_loss=0.3136, pruned_loss=0.08303, over 7390.00 frames.], tot_loss[loss=0.2187, simple_loss=0.2967, pruned_loss=0.07041, over 1428455.04 frames.], batch size: 23, lr: 6.40e-04 2022-05-27 04:41:30,072 INFO [train.py:842] (0/4) Epoch 8, batch 7050, loss[loss=0.1958, simple_loss=0.2669, pruned_loss=0.06237, over 7173.00 frames.], tot_loss[loss=0.2189, simple_loss=0.2964, pruned_loss=0.07067, over 1427409.82 frames.], batch size: 18, lr: 6.40e-04 2022-05-27 04:42:09,250 INFO [train.py:842] (0/4) Epoch 8, batch 7100, loss[loss=0.2184, simple_loss=0.2891, pruned_loss=0.07383, over 7403.00 frames.], tot_loss[loss=0.2184, simple_loss=0.2962, pruned_loss=0.07027, over 1424260.26 frames.], batch size: 18, lr: 6.39e-04 2022-05-27 04:42:47,952 INFO [train.py:842] (0/4) Epoch 8, batch 7150, loss[loss=0.2193, simple_loss=0.2975, pruned_loss=0.07058, over 7268.00 frames.], tot_loss[loss=0.2197, simple_loss=0.2972, pruned_loss=0.07113, over 1420791.39 frames.], batch size: 18, lr: 6.39e-04 2022-05-27 04:43:26,578 INFO [train.py:842] (0/4) Epoch 8, batch 7200, loss[loss=0.2076, simple_loss=0.2999, pruned_loss=0.05771, over 7143.00 frames.], tot_loss[loss=0.218, simple_loss=0.2957, pruned_loss=0.07012, over 1421257.49 frames.], batch size: 20, lr: 6.39e-04 2022-05-27 04:44:05,043 INFO [train.py:842] (0/4) Epoch 8, batch 7250, loss[loss=0.1807, simple_loss=0.2596, pruned_loss=0.05088, over 6759.00 frames.], tot_loss[loss=0.2177, simple_loss=0.2951, pruned_loss=0.0701, over 1418535.28 frames.], batch size: 15, lr: 6.39e-04 2022-05-27 04:44:43,929 INFO [train.py:842] (0/4) Epoch 8, batch 7300, loss[loss=0.2982, simple_loss=0.3535, pruned_loss=0.1215, over 7162.00 frames.], tot_loss[loss=0.2181, simple_loss=0.2954, pruned_loss=0.07041, over 1415423.53 frames.], batch size: 19, lr: 6.39e-04 2022-05-27 04:45:22,491 INFO [train.py:842] (0/4) Epoch 8, batch 7350, 
loss[loss=0.2338, simple_loss=0.3153, pruned_loss=0.0761, over 7384.00 frames.], tot_loss[loss=0.2191, simple_loss=0.2965, pruned_loss=0.07084, over 1416705.39 frames.], batch size: 23, lr: 6.38e-04 2022-05-27 04:46:01,493 INFO [train.py:842] (0/4) Epoch 8, batch 7400, loss[loss=0.1791, simple_loss=0.2569, pruned_loss=0.05065, over 7411.00 frames.], tot_loss[loss=0.219, simple_loss=0.2962, pruned_loss=0.07094, over 1414055.68 frames.], batch size: 18, lr: 6.38e-04 2022-05-27 04:46:40,154 INFO [train.py:842] (0/4) Epoch 8, batch 7450, loss[loss=0.1943, simple_loss=0.2821, pruned_loss=0.05322, over 7282.00 frames.], tot_loss[loss=0.2187, simple_loss=0.2957, pruned_loss=0.07088, over 1412957.50 frames.], batch size: 18, lr: 6.38e-04 2022-05-27 04:47:19,053 INFO [train.py:842] (0/4) Epoch 8, batch 7500, loss[loss=0.1887, simple_loss=0.2673, pruned_loss=0.05503, over 7072.00 frames.], tot_loss[loss=0.2183, simple_loss=0.2955, pruned_loss=0.07053, over 1415350.34 frames.], batch size: 18, lr: 6.38e-04 2022-05-27 04:47:57,392 INFO [train.py:842] (0/4) Epoch 8, batch 7550, loss[loss=0.2033, simple_loss=0.2885, pruned_loss=0.05909, over 7256.00 frames.], tot_loss[loss=0.2197, simple_loss=0.2974, pruned_loss=0.07101, over 1414363.65 frames.], batch size: 19, lr: 6.37e-04 2022-05-27 04:48:36,292 INFO [train.py:842] (0/4) Epoch 8, batch 7600, loss[loss=0.1567, simple_loss=0.2314, pruned_loss=0.04102, over 7410.00 frames.], tot_loss[loss=0.2183, simple_loss=0.2962, pruned_loss=0.07018, over 1416531.44 frames.], batch size: 18, lr: 6.37e-04 2022-05-27 04:49:15,201 INFO [train.py:842] (0/4) Epoch 8, batch 7650, loss[loss=0.2248, simple_loss=0.3051, pruned_loss=0.07229, over 7318.00 frames.], tot_loss[loss=0.2193, simple_loss=0.297, pruned_loss=0.07079, over 1418235.48 frames.], batch size: 25, lr: 6.37e-04 2022-05-27 04:49:18,641 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-72000.pt 2022-05-27 04:49:56,804 INFO [train.py:842] (0/4) Epoch 8, batch 7700, loss[loss=0.2359, simple_loss=0.3219, pruned_loss=0.07497, over 7315.00 frames.], tot_loss[loss=0.2203, simple_loss=0.2981, pruned_loss=0.07125, over 1416704.67 frames.], batch size: 21, lr: 6.37e-04 2022-05-27 04:50:35,223 INFO [train.py:842] (0/4) Epoch 8, batch 7750, loss[loss=0.1819, simple_loss=0.2528, pruned_loss=0.05549, over 7425.00 frames.], tot_loss[loss=0.2201, simple_loss=0.2982, pruned_loss=0.07097, over 1418976.81 frames.], batch size: 18, lr: 6.37e-04 2022-05-27 04:51:14,007 INFO [train.py:842] (0/4) Epoch 8, batch 7800, loss[loss=0.3098, simple_loss=0.3609, pruned_loss=0.1293, over 7351.00 frames.], tot_loss[loss=0.2198, simple_loss=0.2979, pruned_loss=0.07088, over 1419388.02 frames.], batch size: 19, lr: 6.36e-04 2022-05-27 04:51:52,488 INFO [train.py:842] (0/4) Epoch 8, batch 7850, loss[loss=0.223, simple_loss=0.3036, pruned_loss=0.07123, over 7203.00 frames.], tot_loss[loss=0.2213, simple_loss=0.2991, pruned_loss=0.0717, over 1417021.99 frames.], batch size: 22, lr: 6.36e-04 2022-05-27 04:52:31,257 INFO [train.py:842] (0/4) Epoch 8, batch 7900, loss[loss=0.2357, simple_loss=0.3151, pruned_loss=0.0782, over 7294.00 frames.], tot_loss[loss=0.2237, simple_loss=0.3011, pruned_loss=0.07311, over 1419071.33 frames.], batch size: 25, lr: 6.36e-04 2022-05-27 04:53:09,837 INFO [train.py:842] (0/4) Epoch 8, batch 7950, loss[loss=0.2029, simple_loss=0.2976, pruned_loss=0.05406, over 7156.00 frames.], tot_loss[loss=0.223, simple_loss=0.3005, pruned_loss=0.07278, over 1420923.62 frames.], 
batch size: 20, lr: 6.36e-04 2022-05-27 04:53:49,254 INFO [train.py:842] (0/4) Epoch 8, batch 8000, loss[loss=0.2199, simple_loss=0.3095, pruned_loss=0.06516, over 7289.00 frames.], tot_loss[loss=0.2206, simple_loss=0.2983, pruned_loss=0.07141, over 1422023.18 frames.], batch size: 25, lr: 6.35e-04 2022-05-27 04:54:27,865 INFO [train.py:842] (0/4) Epoch 8, batch 8050, loss[loss=0.2124, simple_loss=0.2983, pruned_loss=0.06322, over 7315.00 frames.], tot_loss[loss=0.2198, simple_loss=0.2977, pruned_loss=0.07097, over 1425966.04 frames.], batch size: 21, lr: 6.35e-04 2022-05-27 04:55:06,862 INFO [train.py:842] (0/4) Epoch 8, batch 8100, loss[loss=0.239, simple_loss=0.303, pruned_loss=0.0875, over 7277.00 frames.], tot_loss[loss=0.2189, simple_loss=0.2965, pruned_loss=0.07063, over 1426056.34 frames.], batch size: 18, lr: 6.35e-04 2022-05-27 04:55:45,297 INFO [train.py:842] (0/4) Epoch 8, batch 8150, loss[loss=0.1941, simple_loss=0.2673, pruned_loss=0.0605, over 7160.00 frames.], tot_loss[loss=0.2218, simple_loss=0.2994, pruned_loss=0.07212, over 1416493.09 frames.], batch size: 18, lr: 6.35e-04 2022-05-27 04:56:24,244 INFO [train.py:842] (0/4) Epoch 8, batch 8200, loss[loss=0.2185, simple_loss=0.2911, pruned_loss=0.07292, over 7330.00 frames.], tot_loss[loss=0.22, simple_loss=0.2975, pruned_loss=0.07124, over 1418763.29 frames.], batch size: 21, lr: 6.35e-04 2022-05-27 04:57:02,835 INFO [train.py:842] (0/4) Epoch 8, batch 8250, loss[loss=0.2117, simple_loss=0.2909, pruned_loss=0.06624, over 7154.00 frames.], tot_loss[loss=0.222, simple_loss=0.2992, pruned_loss=0.07238, over 1419287.99 frames.], batch size: 18, lr: 6.34e-04 2022-05-27 04:57:41,676 INFO [train.py:842] (0/4) Epoch 8, batch 8300, loss[loss=0.2672, simple_loss=0.3467, pruned_loss=0.09385, over 7151.00 frames.], tot_loss[loss=0.2232, simple_loss=0.3005, pruned_loss=0.07289, over 1419567.45 frames.], batch size: 20, lr: 6.34e-04 2022-05-27 04:58:20,165 INFO [train.py:842] (0/4) Epoch 8, batch 8350, loss[loss=0.2148, simple_loss=0.3009, pruned_loss=0.06439, over 7182.00 frames.], tot_loss[loss=0.2244, simple_loss=0.3015, pruned_loss=0.07366, over 1422320.85 frames.], batch size: 26, lr: 6.34e-04 2022-05-27 04:58:58,958 INFO [train.py:842] (0/4) Epoch 8, batch 8400, loss[loss=0.1729, simple_loss=0.2486, pruned_loss=0.04857, over 7269.00 frames.], tot_loss[loss=0.2229, simple_loss=0.3004, pruned_loss=0.07271, over 1424471.46 frames.], batch size: 18, lr: 6.34e-04 2022-05-27 04:59:37,685 INFO [train.py:842] (0/4) Epoch 8, batch 8450, loss[loss=0.2139, simple_loss=0.3027, pruned_loss=0.06248, over 7149.00 frames.], tot_loss[loss=0.2227, simple_loss=0.3004, pruned_loss=0.07253, over 1420581.58 frames.], batch size: 20, lr: 6.34e-04 2022-05-27 05:00:16,708 INFO [train.py:842] (0/4) Epoch 8, batch 8500, loss[loss=0.2081, simple_loss=0.2903, pruned_loss=0.06299, over 7195.00 frames.], tot_loss[loss=0.222, simple_loss=0.2996, pruned_loss=0.07226, over 1421635.34 frames.], batch size: 22, lr: 6.33e-04 2022-05-27 05:00:55,182 INFO [train.py:842] (0/4) Epoch 8, batch 8550, loss[loss=0.2562, simple_loss=0.3426, pruned_loss=0.08487, over 7148.00 frames.], tot_loss[loss=0.2214, simple_loss=0.299, pruned_loss=0.07188, over 1419502.92 frames.], batch size: 20, lr: 6.33e-04 2022-05-27 05:01:34,185 INFO [train.py:842] (0/4) Epoch 8, batch 8600, loss[loss=0.2474, simple_loss=0.311, pruned_loss=0.09191, over 7278.00 frames.], tot_loss[loss=0.2194, simple_loss=0.2974, pruned_loss=0.07068, over 1419035.08 frames.], batch size: 17, lr: 6.33e-04 2022-05-27 
05:02:12,584 INFO [train.py:842] (0/4) Epoch 8, batch 8650, loss[loss=0.213, simple_loss=0.287, pruned_loss=0.06949, over 7125.00 frames.], tot_loss[loss=0.2182, simple_loss=0.2969, pruned_loss=0.06974, over 1414425.60 frames.], batch size: 21, lr: 6.33e-04 2022-05-27 05:02:51,366 INFO [train.py:842] (0/4) Epoch 8, batch 8700, loss[loss=0.1815, simple_loss=0.2666, pruned_loss=0.04821, over 7266.00 frames.], tot_loss[loss=0.2192, simple_loss=0.2979, pruned_loss=0.07024, over 1417993.66 frames.], batch size: 18, lr: 6.32e-04 2022-05-27 05:03:30,097 INFO [train.py:842] (0/4) Epoch 8, batch 8750, loss[loss=0.2065, simple_loss=0.2815, pruned_loss=0.06577, over 7194.00 frames.], tot_loss[loss=0.2169, simple_loss=0.2956, pruned_loss=0.06913, over 1421811.33 frames.], batch size: 16, lr: 6.32e-04 2022-05-27 05:04:09,297 INFO [train.py:842] (0/4) Epoch 8, batch 8800, loss[loss=0.2597, simple_loss=0.3548, pruned_loss=0.08232, over 7340.00 frames.], tot_loss[loss=0.2188, simple_loss=0.2966, pruned_loss=0.07044, over 1418186.67 frames.], batch size: 22, lr: 6.32e-04 2022-05-27 05:04:47,684 INFO [train.py:842] (0/4) Epoch 8, batch 8850, loss[loss=0.2165, simple_loss=0.2796, pruned_loss=0.0767, over 7286.00 frames.], tot_loss[loss=0.2203, simple_loss=0.2981, pruned_loss=0.07125, over 1416066.61 frames.], batch size: 17, lr: 6.32e-04 2022-05-27 05:05:27,275 INFO [train.py:842] (0/4) Epoch 8, batch 8900, loss[loss=0.2113, simple_loss=0.2875, pruned_loss=0.06757, over 7360.00 frames.], tot_loss[loss=0.2191, simple_loss=0.2965, pruned_loss=0.07084, over 1409414.46 frames.], batch size: 19, lr: 6.32e-04 2022-05-27 05:06:05,755 INFO [train.py:842] (0/4) Epoch 8, batch 8950, loss[loss=0.228, simple_loss=0.3048, pruned_loss=0.0756, over 6561.00 frames.], tot_loss[loss=0.2202, simple_loss=0.2978, pruned_loss=0.07131, over 1408697.64 frames.], batch size: 38, lr: 6.31e-04 2022-05-27 05:06:44,582 INFO [train.py:842] (0/4) Epoch 8, batch 9000, loss[loss=0.2616, simple_loss=0.3282, pruned_loss=0.09752, over 4909.00 frames.], tot_loss[loss=0.222, simple_loss=0.2991, pruned_loss=0.07249, over 1404690.63 frames.], batch size: 52, lr: 6.31e-04 2022-05-27 05:06:44,584 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 05:06:53,926 INFO [train.py:871] (0/4) Epoch 8, validation: loss=0.1786, simple_loss=0.2794, pruned_loss=0.03893, over 868885.00 frames. 
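Each training record above has the same shape: a per-batch loss[...] triple (loss, simple_loss, pruned_loss over that batch's frames), a running tot_loss[...] triple over a much larger, slowly changing frame count, the batch size, and the current learning rate. For post-hoc analysis it can be convenient to pull those numbers back out of a saved log. The sketch below is a minimal, hypothetical helper (the regex, the function name parse_train_records and the log path are my own, not part of train.py), and it assumes one record per line in the file being parsed; the records are run together here only in this excerpt.

    import re

    # Matches the per-batch training records, i.e. lines containing
    # "Epoch E, batch B, loss[...], tot_loss[loss=..., simple_loss=...,
    #  pruned_loss=..., over N frames.], batch size: ..., lr: ..."
    TRAIN_RE = re.compile(
        r"Epoch (?P<epoch>\d+), batch (?P<batch>\d+), .*?"
        r"tot_loss\[loss=(?P<loss>[\d.]+), simple_loss=(?P<simple>[\d.]+), "
        r"pruned_loss=(?P<pruned>[\d.]+), over (?P<frames>[\d.]+) frames\.\]"
    )

    def parse_train_records(path):
        """Yield (epoch, batch, tot_loss, simple_loss, pruned_loss) from a train.py log file."""
        with open(path) as f:
            for line in f:
                m = TRAIN_RE.search(line)
                if m:
                    yield (int(m["epoch"]), int(m["batch"]), float(m["loss"]),
                           float(m["simple"]), float(m["pruned"]))

    # records = list(parse_train_records("exp/log/log-train"))  # path is hypothetical
    # print([r for r in records if r[0] == 8][-3:])             # last running losses of epoch 8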
2022-05-27 05:07:32,105 INFO [train.py:842] (0/4) Epoch 8, batch 9050, loss[loss=0.2661, simple_loss=0.3272, pruned_loss=0.1026, over 4864.00 frames.], tot_loss[loss=0.2232, simple_loss=0.3001, pruned_loss=0.07318, over 1397100.62 frames.], batch size: 52, lr: 6.31e-04 2022-05-27 05:08:11,458 INFO [train.py:842] (0/4) Epoch 8, batch 9100, loss[loss=0.2809, simple_loss=0.3425, pruned_loss=0.1097, over 4927.00 frames.], tot_loss[loss=0.2225, simple_loss=0.2987, pruned_loss=0.07316, over 1382671.03 frames.], batch size: 52, lr: 6.31e-04 2022-05-27 05:08:49,171 INFO [train.py:842] (0/4) Epoch 8, batch 9150, loss[loss=0.2952, simple_loss=0.3396, pruned_loss=0.1253, over 4965.00 frames.], tot_loss[loss=0.229, simple_loss=0.3034, pruned_loss=0.07737, over 1308210.17 frames.], batch size: 53, lr: 6.31e-04 2022-05-27 05:09:21,089 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-8.pt 2022-05-27 05:09:41,211 INFO [train.py:842] (0/4) Epoch 9, batch 0, loss[loss=0.2746, simple_loss=0.342, pruned_loss=0.1036, over 7190.00 frames.], tot_loss[loss=0.2746, simple_loss=0.342, pruned_loss=0.1036, over 7190.00 frames.], batch size: 23, lr: 6.05e-04 2022-05-27 05:10:19,725 INFO [train.py:842] (0/4) Epoch 9, batch 50, loss[loss=0.2693, simple_loss=0.3361, pruned_loss=0.1013, over 7064.00 frames.], tot_loss[loss=0.2251, simple_loss=0.3023, pruned_loss=0.07398, over 319053.34 frames.], batch size: 28, lr: 6.05e-04 2022-05-27 05:10:58,685 INFO [train.py:842] (0/4) Epoch 9, batch 100, loss[loss=0.2033, simple_loss=0.2893, pruned_loss=0.05868, over 7229.00 frames.], tot_loss[loss=0.2181, simple_loss=0.2967, pruned_loss=0.06972, over 566234.38 frames.], batch size: 20, lr: 6.05e-04 2022-05-27 05:11:37,226 INFO [train.py:842] (0/4) Epoch 9, batch 150, loss[loss=0.2421, simple_loss=0.3117, pruned_loss=0.08619, over 4943.00 frames.], tot_loss[loss=0.2169, simple_loss=0.2954, pruned_loss=0.06917, over 753648.64 frames.], batch size: 52, lr: 6.05e-04 2022-05-27 05:12:15,908 INFO [train.py:842] (0/4) Epoch 9, batch 200, loss[loss=0.2326, simple_loss=0.307, pruned_loss=0.07914, over 7197.00 frames.], tot_loss[loss=0.2187, simple_loss=0.2971, pruned_loss=0.07016, over 902937.47 frames.], batch size: 22, lr: 6.04e-04 2022-05-27 05:13:04,719 INFO [train.py:842] (0/4) Epoch 9, batch 250, loss[loss=0.205, simple_loss=0.2904, pruned_loss=0.05979, over 7430.00 frames.], tot_loss[loss=0.2193, simple_loss=0.2978, pruned_loss=0.07044, over 1019385.05 frames.], batch size: 20, lr: 6.04e-04 2022-05-27 05:13:43,498 INFO [train.py:842] (0/4) Epoch 9, batch 300, loss[loss=0.2702, simple_loss=0.3521, pruned_loss=0.09413, over 7332.00 frames.], tot_loss[loss=0.2178, simple_loss=0.2967, pruned_loss=0.06943, over 1104714.97 frames.], batch size: 22, lr: 6.04e-04 2022-05-27 05:14:22,305 INFO [train.py:842] (0/4) Epoch 9, batch 350, loss[loss=0.1585, simple_loss=0.2494, pruned_loss=0.03377, over 7154.00 frames.], tot_loss[loss=0.2155, simple_loss=0.2944, pruned_loss=0.06835, over 1177605.85 frames.], batch size: 19, lr: 6.04e-04 2022-05-27 05:15:01,119 INFO [train.py:842] (0/4) Epoch 9, batch 400, loss[loss=0.265, simple_loss=0.3261, pruned_loss=0.1019, over 7138.00 frames.], tot_loss[loss=0.2147, simple_loss=0.2938, pruned_loss=0.06783, over 1237251.01 frames.], batch size: 17, lr: 6.04e-04 2022-05-27 05:15:39,703 INFO [train.py:842] (0/4) Epoch 9, batch 450, loss[loss=0.1756, simple_loss=0.2581, pruned_loss=0.04655, over 7255.00 frames.], tot_loss[loss=0.2151, simple_loss=0.2939, 
pruned_loss=0.06818, over 1277948.80 frames.], batch size: 19, lr: 6.03e-04 2022-05-27 05:16:18,392 INFO [train.py:842] (0/4) Epoch 9, batch 500, loss[loss=0.1629, simple_loss=0.2513, pruned_loss=0.03725, over 7425.00 frames.], tot_loss[loss=0.2159, simple_loss=0.2946, pruned_loss=0.06866, over 1310897.38 frames.], batch size: 18, lr: 6.03e-04 2022-05-27 05:16:57,095 INFO [train.py:842] (0/4) Epoch 9, batch 550, loss[loss=0.2004, simple_loss=0.2721, pruned_loss=0.06434, over 7058.00 frames.], tot_loss[loss=0.2157, simple_loss=0.2942, pruned_loss=0.06866, over 1339522.96 frames.], batch size: 18, lr: 6.03e-04 2022-05-27 05:17:36,147 INFO [train.py:842] (0/4) Epoch 9, batch 600, loss[loss=0.1941, simple_loss=0.2857, pruned_loss=0.05128, over 7067.00 frames.], tot_loss[loss=0.217, simple_loss=0.2949, pruned_loss=0.06951, over 1361028.83 frames.], batch size: 18, lr: 6.03e-04 2022-05-27 05:18:14,768 INFO [train.py:842] (0/4) Epoch 9, batch 650, loss[loss=0.1805, simple_loss=0.2625, pruned_loss=0.04925, over 7355.00 frames.], tot_loss[loss=0.2171, simple_loss=0.2952, pruned_loss=0.06947, over 1374431.59 frames.], batch size: 19, lr: 6.03e-04 2022-05-27 05:18:53,497 INFO [train.py:842] (0/4) Epoch 9, batch 700, loss[loss=0.2124, simple_loss=0.2809, pruned_loss=0.07197, over 7428.00 frames.], tot_loss[loss=0.2187, simple_loss=0.2967, pruned_loss=0.07031, over 1386734.36 frames.], batch size: 20, lr: 6.02e-04 2022-05-27 05:19:32,050 INFO [train.py:842] (0/4) Epoch 9, batch 750, loss[loss=0.222, simple_loss=0.2934, pruned_loss=0.07527, over 7153.00 frames.], tot_loss[loss=0.2202, simple_loss=0.2978, pruned_loss=0.07129, over 1389597.24 frames.], batch size: 18, lr: 6.02e-04 2022-05-27 05:20:11,005 INFO [train.py:842] (0/4) Epoch 9, batch 800, loss[loss=0.2349, simple_loss=0.3136, pruned_loss=0.07814, over 7386.00 frames.], tot_loss[loss=0.2189, simple_loss=0.2967, pruned_loss=0.07053, over 1395459.46 frames.], batch size: 23, lr: 6.02e-04 2022-05-27 05:20:49,557 INFO [train.py:842] (0/4) Epoch 9, batch 850, loss[loss=0.2229, simple_loss=0.307, pruned_loss=0.06943, over 7323.00 frames.], tot_loss[loss=0.2182, simple_loss=0.2964, pruned_loss=0.06998, over 1400812.30 frames.], batch size: 21, lr: 6.02e-04 2022-05-27 05:21:28,812 INFO [train.py:842] (0/4) Epoch 9, batch 900, loss[loss=0.2247, simple_loss=0.3146, pruned_loss=0.06745, over 7232.00 frames.], tot_loss[loss=0.2172, simple_loss=0.2961, pruned_loss=0.06919, over 1410011.32 frames.], batch size: 21, lr: 6.02e-04 2022-05-27 05:22:07,485 INFO [train.py:842] (0/4) Epoch 9, batch 950, loss[loss=0.1749, simple_loss=0.2673, pruned_loss=0.04127, over 7331.00 frames.], tot_loss[loss=0.2178, simple_loss=0.2962, pruned_loss=0.06967, over 1407965.39 frames.], batch size: 20, lr: 6.01e-04 2022-05-27 05:22:46,346 INFO [train.py:842] (0/4) Epoch 9, batch 1000, loss[loss=0.2183, simple_loss=0.3077, pruned_loss=0.06443, over 7427.00 frames.], tot_loss[loss=0.2163, simple_loss=0.2951, pruned_loss=0.06875, over 1412128.70 frames.], batch size: 20, lr: 6.01e-04 2022-05-27 05:23:24,849 INFO [train.py:842] (0/4) Epoch 9, batch 1050, loss[loss=0.1878, simple_loss=0.2817, pruned_loss=0.04691, over 7255.00 frames.], tot_loss[loss=0.2163, simple_loss=0.2951, pruned_loss=0.06872, over 1416988.79 frames.], batch size: 19, lr: 6.01e-04 2022-05-27 05:24:03,718 INFO [train.py:842] (0/4) Epoch 9, batch 1100, loss[loss=0.1708, simple_loss=0.2436, pruned_loss=0.04903, over 7291.00 frames.], tot_loss[loss=0.2152, simple_loss=0.2949, pruned_loss=0.0678, over 1420316.54 
frames.], batch size: 17, lr: 6.01e-04 2022-05-27 05:24:42,288 INFO [train.py:842] (0/4) Epoch 9, batch 1150, loss[loss=0.2596, simple_loss=0.3355, pruned_loss=0.0918, over 7254.00 frames.], tot_loss[loss=0.2141, simple_loss=0.2937, pruned_loss=0.0673, over 1420145.92 frames.], batch size: 25, lr: 6.01e-04 2022-05-27 05:25:21,230 INFO [train.py:842] (0/4) Epoch 9, batch 1200, loss[loss=0.2035, simple_loss=0.2762, pruned_loss=0.06538, over 7436.00 frames.], tot_loss[loss=0.2157, simple_loss=0.295, pruned_loss=0.06823, over 1420322.76 frames.], batch size: 20, lr: 6.00e-04 2022-05-27 05:25:59,837 INFO [train.py:842] (0/4) Epoch 9, batch 1250, loss[loss=0.2095, simple_loss=0.2773, pruned_loss=0.07082, over 6763.00 frames.], tot_loss[loss=0.2164, simple_loss=0.295, pruned_loss=0.0689, over 1415906.99 frames.], batch size: 15, lr: 6.00e-04 2022-05-27 05:26:38,541 INFO [train.py:842] (0/4) Epoch 9, batch 1300, loss[loss=0.2193, simple_loss=0.309, pruned_loss=0.06479, over 7167.00 frames.], tot_loss[loss=0.2163, simple_loss=0.2949, pruned_loss=0.06881, over 1413102.42 frames.], batch size: 19, lr: 6.00e-04 2022-05-27 05:27:17,251 INFO [train.py:842] (0/4) Epoch 9, batch 1350, loss[loss=0.1866, simple_loss=0.2669, pruned_loss=0.05315, over 7435.00 frames.], tot_loss[loss=0.2175, simple_loss=0.2958, pruned_loss=0.06957, over 1418179.20 frames.], batch size: 20, lr: 6.00e-04 2022-05-27 05:27:56,068 INFO [train.py:842] (0/4) Epoch 9, batch 1400, loss[loss=0.2176, simple_loss=0.3054, pruned_loss=0.06488, over 7212.00 frames.], tot_loss[loss=0.218, simple_loss=0.2958, pruned_loss=0.07011, over 1415263.60 frames.], batch size: 21, lr: 6.00e-04 2022-05-27 05:28:34,843 INFO [train.py:842] (0/4) Epoch 9, batch 1450, loss[loss=0.2954, simple_loss=0.3493, pruned_loss=0.1208, over 7327.00 frames.], tot_loss[loss=0.2177, simple_loss=0.2953, pruned_loss=0.07009, over 1420393.79 frames.], batch size: 21, lr: 5.99e-04 2022-05-27 05:29:13,613 INFO [train.py:842] (0/4) Epoch 9, batch 1500, loss[loss=0.2063, simple_loss=0.298, pruned_loss=0.0573, over 7233.00 frames.], tot_loss[loss=0.2171, simple_loss=0.2954, pruned_loss=0.06947, over 1422793.08 frames.], batch size: 20, lr: 5.99e-04 2022-05-27 05:29:52,238 INFO [train.py:842] (0/4) Epoch 9, batch 1550, loss[loss=0.2339, simple_loss=0.2979, pruned_loss=0.08491, over 7203.00 frames.], tot_loss[loss=0.217, simple_loss=0.2953, pruned_loss=0.06938, over 1421991.45 frames.], batch size: 22, lr: 5.99e-04 2022-05-27 05:30:51,700 INFO [train.py:842] (0/4) Epoch 9, batch 1600, loss[loss=0.2132, simple_loss=0.2881, pruned_loss=0.06916, over 7062.00 frames.], tot_loss[loss=0.2181, simple_loss=0.2966, pruned_loss=0.06983, over 1419827.49 frames.], batch size: 18, lr: 5.99e-04 2022-05-27 05:31:40,488 INFO [train.py:842] (0/4) Epoch 9, batch 1650, loss[loss=0.2275, simple_loss=0.32, pruned_loss=0.06746, over 7113.00 frames.], tot_loss[loss=0.2184, simple_loss=0.297, pruned_loss=0.06993, over 1420934.92 frames.], batch size: 21, lr: 5.99e-04 2022-05-27 05:32:19,069 INFO [train.py:842] (0/4) Epoch 9, batch 1700, loss[loss=0.2107, simple_loss=0.3065, pruned_loss=0.05748, over 7146.00 frames.], tot_loss[loss=0.2189, simple_loss=0.2979, pruned_loss=0.07, over 1420236.86 frames.], batch size: 20, lr: 5.98e-04 2022-05-27 05:32:57,921 INFO [train.py:842] (0/4) Epoch 9, batch 1750, loss[loss=0.2268, simple_loss=0.3053, pruned_loss=0.07417, over 7321.00 frames.], tot_loss[loss=0.2196, simple_loss=0.2981, pruned_loss=0.07054, over 1421207.75 frames.], batch size: 21, lr: 5.98e-04 
2022-05-27 05:33:36,455 INFO [train.py:842] (0/4) Epoch 9, batch 1800, loss[loss=0.2215, simple_loss=0.3072, pruned_loss=0.06792, over 7234.00 frames.], tot_loss[loss=0.2178, simple_loss=0.2967, pruned_loss=0.06947, over 1418583.25 frames.], batch size: 20, lr: 5.98e-04 2022-05-27 05:34:14,939 INFO [train.py:842] (0/4) Epoch 9, batch 1850, loss[loss=0.243, simple_loss=0.3253, pruned_loss=0.0804, over 7239.00 frames.], tot_loss[loss=0.219, simple_loss=0.2982, pruned_loss=0.0699, over 1421401.04 frames.], batch size: 20, lr: 5.98e-04 2022-05-27 05:34:54,128 INFO [train.py:842] (0/4) Epoch 9, batch 1900, loss[loss=0.2507, simple_loss=0.3207, pruned_loss=0.09032, over 7168.00 frames.], tot_loss[loss=0.2193, simple_loss=0.2982, pruned_loss=0.07019, over 1420451.34 frames.], batch size: 19, lr: 5.98e-04 2022-05-27 05:35:32,743 INFO [train.py:842] (0/4) Epoch 9, batch 1950, loss[loss=0.2103, simple_loss=0.299, pruned_loss=0.06076, over 7119.00 frames.], tot_loss[loss=0.2187, simple_loss=0.298, pruned_loss=0.06974, over 1421103.87 frames.], batch size: 21, lr: 5.97e-04 2022-05-27 05:36:11,673 INFO [train.py:842] (0/4) Epoch 9, batch 2000, loss[loss=0.2228, simple_loss=0.3088, pruned_loss=0.06844, over 7293.00 frames.], tot_loss[loss=0.2177, simple_loss=0.297, pruned_loss=0.06917, over 1422277.43 frames.], batch size: 24, lr: 5.97e-04 2022-05-27 05:36:50,196 INFO [train.py:842] (0/4) Epoch 9, batch 2050, loss[loss=0.1893, simple_loss=0.2592, pruned_loss=0.05966, over 7286.00 frames.], tot_loss[loss=0.2169, simple_loss=0.2965, pruned_loss=0.06863, over 1422058.08 frames.], batch size: 17, lr: 5.97e-04 2022-05-27 05:37:28,893 INFO [train.py:842] (0/4) Epoch 9, batch 2100, loss[loss=0.1632, simple_loss=0.2534, pruned_loss=0.03644, over 7256.00 frames.], tot_loss[loss=0.2172, simple_loss=0.2967, pruned_loss=0.0689, over 1423849.76 frames.], batch size: 19, lr: 5.97e-04 2022-05-27 05:38:07,465 INFO [train.py:842] (0/4) Epoch 9, batch 2150, loss[loss=0.1928, simple_loss=0.2705, pruned_loss=0.05758, over 7058.00 frames.], tot_loss[loss=0.2175, simple_loss=0.2969, pruned_loss=0.069, over 1425652.95 frames.], batch size: 18, lr: 5.97e-04 2022-05-27 05:38:46,493 INFO [train.py:842] (0/4) Epoch 9, batch 2200, loss[loss=0.2175, simple_loss=0.2936, pruned_loss=0.0707, over 7271.00 frames.], tot_loss[loss=0.2164, simple_loss=0.2957, pruned_loss=0.06855, over 1423807.77 frames.], batch size: 17, lr: 5.96e-04 2022-05-27 05:39:25,140 INFO [train.py:842] (0/4) Epoch 9, batch 2250, loss[loss=0.2316, simple_loss=0.2952, pruned_loss=0.08396, over 7179.00 frames.], tot_loss[loss=0.2174, simple_loss=0.2958, pruned_loss=0.06949, over 1424655.08 frames.], batch size: 18, lr: 5.96e-04 2022-05-27 05:40:03,861 INFO [train.py:842] (0/4) Epoch 9, batch 2300, loss[loss=0.2266, simple_loss=0.3073, pruned_loss=0.07298, over 7159.00 frames.], tot_loss[loss=0.2162, simple_loss=0.295, pruned_loss=0.06866, over 1425739.95 frames.], batch size: 20, lr: 5.96e-04 2022-05-27 05:40:42,438 INFO [train.py:842] (0/4) Epoch 9, batch 2350, loss[loss=0.2522, simple_loss=0.3371, pruned_loss=0.08364, over 6742.00 frames.], tot_loss[loss=0.2191, simple_loss=0.2973, pruned_loss=0.07047, over 1423841.06 frames.], batch size: 31, lr: 5.96e-04 2022-05-27 05:41:21,195 INFO [train.py:842] (0/4) Epoch 9, batch 2400, loss[loss=0.2187, simple_loss=0.2888, pruned_loss=0.07428, over 7273.00 frames.], tot_loss[loss=0.2206, simple_loss=0.2988, pruned_loss=0.07122, over 1424083.35 frames.], batch size: 18, lr: 5.96e-04 2022-05-27 05:41:59,634 INFO [train.py:842] 
(0/4) Epoch 9, batch 2450, loss[loss=0.1821, simple_loss=0.2597, pruned_loss=0.0523, over 7424.00 frames.], tot_loss[loss=0.2192, simple_loss=0.2978, pruned_loss=0.07033, over 1425596.77 frames.], batch size: 18, lr: 5.95e-04 2022-05-27 05:42:38,357 INFO [train.py:842] (0/4) Epoch 9, batch 2500, loss[loss=0.203, simple_loss=0.2815, pruned_loss=0.0623, over 7214.00 frames.], tot_loss[loss=0.2196, simple_loss=0.2978, pruned_loss=0.07075, over 1424669.97 frames.], batch size: 22, lr: 5.95e-04 2022-05-27 05:43:16,893 INFO [train.py:842] (0/4) Epoch 9, batch 2550, loss[loss=0.1767, simple_loss=0.2558, pruned_loss=0.04878, over 7152.00 frames.], tot_loss[loss=0.2181, simple_loss=0.2965, pruned_loss=0.06986, over 1421566.83 frames.], batch size: 17, lr: 5.95e-04 2022-05-27 05:43:55,664 INFO [train.py:842] (0/4) Epoch 9, batch 2600, loss[loss=0.2565, simple_loss=0.326, pruned_loss=0.09343, over 7376.00 frames.], tot_loss[loss=0.2185, simple_loss=0.2971, pruned_loss=0.06993, over 1419640.90 frames.], batch size: 23, lr: 5.95e-04 2022-05-27 05:44:34,265 INFO [train.py:842] (0/4) Epoch 9, batch 2650, loss[loss=0.2386, simple_loss=0.3045, pruned_loss=0.08633, over 4916.00 frames.], tot_loss[loss=0.2181, simple_loss=0.2969, pruned_loss=0.06967, over 1417924.05 frames.], batch size: 52, lr: 5.95e-04 2022-05-27 05:45:13,028 INFO [train.py:842] (0/4) Epoch 9, batch 2700, loss[loss=0.2153, simple_loss=0.2931, pruned_loss=0.06873, over 7338.00 frames.], tot_loss[loss=0.2176, simple_loss=0.2967, pruned_loss=0.06929, over 1419109.63 frames.], batch size: 22, lr: 5.94e-04 2022-05-27 05:45:51,574 INFO [train.py:842] (0/4) Epoch 9, batch 2750, loss[loss=0.1859, simple_loss=0.2774, pruned_loss=0.04718, over 7341.00 frames.], tot_loss[loss=0.2166, simple_loss=0.2957, pruned_loss=0.06875, over 1423440.43 frames.], batch size: 20, lr: 5.94e-04 2022-05-27 05:46:30,157 INFO [train.py:842] (0/4) Epoch 9, batch 2800, loss[loss=0.2429, simple_loss=0.3079, pruned_loss=0.08892, over 7206.00 frames.], tot_loss[loss=0.2176, simple_loss=0.2967, pruned_loss=0.06926, over 1426012.55 frames.], batch size: 22, lr: 5.94e-04 2022-05-27 05:47:08,786 INFO [train.py:842] (0/4) Epoch 9, batch 2850, loss[loss=0.2408, simple_loss=0.3214, pruned_loss=0.08009, over 7165.00 frames.], tot_loss[loss=0.2165, simple_loss=0.2956, pruned_loss=0.06873, over 1428120.05 frames.], batch size: 19, lr: 5.94e-04 2022-05-27 05:47:47,787 INFO [train.py:842] (0/4) Epoch 9, batch 2900, loss[loss=0.2062, simple_loss=0.2865, pruned_loss=0.06293, over 7320.00 frames.], tot_loss[loss=0.2162, simple_loss=0.2955, pruned_loss=0.06843, over 1427790.84 frames.], batch size: 21, lr: 5.94e-04 2022-05-27 05:48:26,365 INFO [train.py:842] (0/4) Epoch 9, batch 2950, loss[loss=0.2129, simple_loss=0.2877, pruned_loss=0.06903, over 7284.00 frames.], tot_loss[loss=0.2163, simple_loss=0.2955, pruned_loss=0.06861, over 1423634.39 frames.], batch size: 18, lr: 5.94e-04 2022-05-27 05:49:05,571 INFO [train.py:842] (0/4) Epoch 9, batch 3000, loss[loss=0.2003, simple_loss=0.2968, pruned_loss=0.05192, over 7297.00 frames.], tot_loss[loss=0.2165, simple_loss=0.2955, pruned_loss=0.06871, over 1421340.75 frames.], batch size: 24, lr: 5.93e-04 2022-05-27 05:49:05,572 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 05:49:14,866 INFO [train.py:871] (0/4) Epoch 9, validation: loss=0.1779, simple_loss=0.2778, pruned_loss=0.039, over 868885.00 frames. 
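The lr field in these records decays smoothly within an epoch (about 6.05e-04 at epoch 9, batch 0 down to 5.93e-04 by batch 3000) and steps down at the epoch boundary (from roughly 6.31e-04 late in epoch 8 to 6.05e-04 at the start of epoch 9). A small, hypothetical helper for extracting that schedule from a saved log, under the same one-record-per-line assumption as above; lr_schedule and the regex are my own names, not part of train.py.

    import re

    LR_RE = re.compile(r"Epoch (\d+), batch (\d+), .*?lr: ([\d.]+e-\d+)")

    def lr_schedule(path):
        """Collect (epoch, batch, lr) triples from a train.py log, e.g. to plot the schedule."""
        triples = []
        with open(path) as f:
            for line in f:
                m = LR_RE.search(line)
                if m:
                    triples.append((int(m.group(1)), int(m.group(2)), float(m.group(3))))
        return triples

    # lr_schedule("exp/log/log-train")[-1]  # most recent (epoch, batch, lr) seen in the log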
2022-05-27 05:49:53,753 INFO [train.py:842] (0/4) Epoch 9, batch 3050, loss[loss=0.2185, simple_loss=0.2963, pruned_loss=0.07032, over 7336.00 frames.], tot_loss[loss=0.2174, simple_loss=0.2961, pruned_loss=0.06931, over 1418391.61 frames.], batch size: 20, lr: 5.93e-04 2022-05-27 05:50:32,308 INFO [train.py:842] (0/4) Epoch 9, batch 3100, loss[loss=0.2554, simple_loss=0.3227, pruned_loss=0.09402, over 6685.00 frames.], tot_loss[loss=0.2169, simple_loss=0.296, pruned_loss=0.06895, over 1412890.30 frames.], batch size: 31, lr: 5.93e-04 2022-05-27 05:51:10,983 INFO [train.py:842] (0/4) Epoch 9, batch 3150, loss[loss=0.2075, simple_loss=0.2895, pruned_loss=0.06276, over 7154.00 frames.], tot_loss[loss=0.2155, simple_loss=0.2945, pruned_loss=0.06821, over 1416870.91 frames.], batch size: 19, lr: 5.93e-04 2022-05-27 05:51:50,048 INFO [train.py:842] (0/4) Epoch 9, batch 3200, loss[loss=0.2046, simple_loss=0.2913, pruned_loss=0.05896, over 7145.00 frames.], tot_loss[loss=0.2144, simple_loss=0.2937, pruned_loss=0.06759, over 1421423.30 frames.], batch size: 20, lr: 5.93e-04 2022-05-27 05:52:28,396 INFO [train.py:842] (0/4) Epoch 9, batch 3250, loss[loss=0.2871, simple_loss=0.3386, pruned_loss=0.1178, over 5112.00 frames.], tot_loss[loss=0.2164, simple_loss=0.2953, pruned_loss=0.06871, over 1419616.11 frames.], batch size: 53, lr: 5.92e-04 2022-05-27 05:53:07,201 INFO [train.py:842] (0/4) Epoch 9, batch 3300, loss[loss=0.1783, simple_loss=0.2719, pruned_loss=0.04236, over 7213.00 frames.], tot_loss[loss=0.2158, simple_loss=0.2947, pruned_loss=0.06849, over 1420234.34 frames.], batch size: 22, lr: 5.92e-04 2022-05-27 05:53:45,783 INFO [train.py:842] (0/4) Epoch 9, batch 3350, loss[loss=0.2067, simple_loss=0.2864, pruned_loss=0.06346, over 7254.00 frames.], tot_loss[loss=0.2159, simple_loss=0.2946, pruned_loss=0.06863, over 1424274.24 frames.], batch size: 19, lr: 5.92e-04 2022-05-27 05:54:24,436 INFO [train.py:842] (0/4) Epoch 9, batch 3400, loss[loss=0.221, simple_loss=0.3072, pruned_loss=0.06741, over 6694.00 frames.], tot_loss[loss=0.2168, simple_loss=0.2955, pruned_loss=0.06906, over 1421535.68 frames.], batch size: 31, lr: 5.92e-04 2022-05-27 05:55:02,961 INFO [train.py:842] (0/4) Epoch 9, batch 3450, loss[loss=0.1806, simple_loss=0.2549, pruned_loss=0.05315, over 7408.00 frames.], tot_loss[loss=0.2166, simple_loss=0.2955, pruned_loss=0.06883, over 1423896.97 frames.], batch size: 18, lr: 5.92e-04 2022-05-27 05:55:41,913 INFO [train.py:842] (0/4) Epoch 9, batch 3500, loss[loss=0.1995, simple_loss=0.2848, pruned_loss=0.05711, over 7154.00 frames.], tot_loss[loss=0.2166, simple_loss=0.2952, pruned_loss=0.06896, over 1424078.26 frames.], batch size: 19, lr: 5.91e-04 2022-05-27 05:56:20,577 INFO [train.py:842] (0/4) Epoch 9, batch 3550, loss[loss=0.1863, simple_loss=0.2662, pruned_loss=0.05326, over 7149.00 frames.], tot_loss[loss=0.2149, simple_loss=0.294, pruned_loss=0.06792, over 1425677.58 frames.], batch size: 18, lr: 5.91e-04 2022-05-27 05:56:59,459 INFO [train.py:842] (0/4) Epoch 9, batch 3600, loss[loss=0.1803, simple_loss=0.2623, pruned_loss=0.04913, over 7281.00 frames.], tot_loss[loss=0.2155, simple_loss=0.2945, pruned_loss=0.06825, over 1424075.87 frames.], batch size: 18, lr: 5.91e-04 2022-05-27 05:57:38,140 INFO [train.py:842] (0/4) Epoch 9, batch 3650, loss[loss=0.1669, simple_loss=0.2443, pruned_loss=0.04476, over 7121.00 frames.], tot_loss[loss=0.2149, simple_loss=0.2939, pruned_loss=0.06799, over 1425091.82 frames.], batch size: 17, lr: 5.91e-04 2022-05-27 05:58:16,860 INFO 
[train.py:842] (0/4) Epoch 9, batch 3700, loss[loss=0.2221, simple_loss=0.3017, pruned_loss=0.07123, over 7307.00 frames.], tot_loss[loss=0.2163, simple_loss=0.2953, pruned_loss=0.06868, over 1425710.36 frames.], batch size: 25, lr: 5.91e-04 2022-05-27 05:58:55,399 INFO [train.py:842] (0/4) Epoch 9, batch 3750, loss[loss=0.2196, simple_loss=0.2993, pruned_loss=0.06993, over 7432.00 frames.], tot_loss[loss=0.2179, simple_loss=0.2969, pruned_loss=0.06949, over 1424951.90 frames.], batch size: 20, lr: 5.90e-04 2022-05-27 05:59:34,169 INFO [train.py:842] (0/4) Epoch 9, batch 3800, loss[loss=0.1985, simple_loss=0.2911, pruned_loss=0.05294, over 7323.00 frames.], tot_loss[loss=0.2166, simple_loss=0.296, pruned_loss=0.06854, over 1426424.03 frames.], batch size: 21, lr: 5.90e-04 2022-05-27 06:00:12,776 INFO [train.py:842] (0/4) Epoch 9, batch 3850, loss[loss=0.1838, simple_loss=0.2754, pruned_loss=0.04611, over 7440.00 frames.], tot_loss[loss=0.2158, simple_loss=0.2951, pruned_loss=0.06821, over 1428264.46 frames.], batch size: 20, lr: 5.90e-04 2022-05-27 06:00:51,532 INFO [train.py:842] (0/4) Epoch 9, batch 3900, loss[loss=0.2687, simple_loss=0.336, pruned_loss=0.1007, over 7262.00 frames.], tot_loss[loss=0.2164, simple_loss=0.2958, pruned_loss=0.06847, over 1428113.55 frames.], batch size: 19, lr: 5.90e-04 2022-05-27 06:01:30,201 INFO [train.py:842] (0/4) Epoch 9, batch 3950, loss[loss=0.209, simple_loss=0.2777, pruned_loss=0.0701, over 7245.00 frames.], tot_loss[loss=0.216, simple_loss=0.2952, pruned_loss=0.06844, over 1425912.55 frames.], batch size: 19, lr: 5.90e-04 2022-05-27 06:02:08,914 INFO [train.py:842] (0/4) Epoch 9, batch 4000, loss[loss=0.2189, simple_loss=0.295, pruned_loss=0.07142, over 7115.00 frames.], tot_loss[loss=0.2161, simple_loss=0.2956, pruned_loss=0.06828, over 1424139.26 frames.], batch size: 21, lr: 5.89e-04 2022-05-27 06:02:47,510 INFO [train.py:842] (0/4) Epoch 9, batch 4050, loss[loss=0.2571, simple_loss=0.3274, pruned_loss=0.09341, over 7319.00 frames.], tot_loss[loss=0.2163, simple_loss=0.2956, pruned_loss=0.06852, over 1422287.89 frames.], batch size: 20, lr: 5.89e-04 2022-05-27 06:03:26,338 INFO [train.py:842] (0/4) Epoch 9, batch 4100, loss[loss=0.1582, simple_loss=0.2333, pruned_loss=0.0416, over 7276.00 frames.], tot_loss[loss=0.2154, simple_loss=0.2945, pruned_loss=0.06815, over 1425372.94 frames.], batch size: 17, lr: 5.89e-04 2022-05-27 06:04:05,008 INFO [train.py:842] (0/4) Epoch 9, batch 4150, loss[loss=0.244, simple_loss=0.3272, pruned_loss=0.08041, over 7203.00 frames.], tot_loss[loss=0.2156, simple_loss=0.295, pruned_loss=0.06812, over 1428531.79 frames.], batch size: 22, lr: 5.89e-04 2022-05-27 06:04:43,796 INFO [train.py:842] (0/4) Epoch 9, batch 4200, loss[loss=0.2335, simple_loss=0.2894, pruned_loss=0.08883, over 7000.00 frames.], tot_loss[loss=0.2158, simple_loss=0.2951, pruned_loss=0.06822, over 1423489.52 frames.], batch size: 16, lr: 5.89e-04 2022-05-27 06:05:22,357 INFO [train.py:842] (0/4) Epoch 9, batch 4250, loss[loss=0.2124, simple_loss=0.2966, pruned_loss=0.06406, over 7024.00 frames.], tot_loss[loss=0.2154, simple_loss=0.2949, pruned_loss=0.06801, over 1423308.33 frames.], batch size: 28, lr: 5.89e-04 2022-05-27 06:06:01,305 INFO [train.py:842] (0/4) Epoch 9, batch 4300, loss[loss=0.2593, simple_loss=0.3278, pruned_loss=0.0954, over 7412.00 frames.], tot_loss[loss=0.2153, simple_loss=0.2949, pruned_loss=0.06786, over 1424754.19 frames.], batch size: 21, lr: 5.88e-04 2022-05-27 06:06:39,795 INFO [train.py:842] (0/4) Epoch 9, batch 4350, 
loss[loss=0.2072, simple_loss=0.2759, pruned_loss=0.06928, over 7014.00 frames.], tot_loss[loss=0.2157, simple_loss=0.2951, pruned_loss=0.06818, over 1419728.34 frames.], batch size: 16, lr: 5.88e-04 2022-05-27 06:07:18,616 INFO [train.py:842] (0/4) Epoch 9, batch 4400, loss[loss=0.267, simple_loss=0.3303, pruned_loss=0.1018, over 6536.00 frames.], tot_loss[loss=0.2159, simple_loss=0.2951, pruned_loss=0.06837, over 1418426.34 frames.], batch size: 38, lr: 5.88e-04 2022-05-27 06:07:57,179 INFO [train.py:842] (0/4) Epoch 9, batch 4450, loss[loss=0.246, simple_loss=0.3282, pruned_loss=0.08193, over 7364.00 frames.], tot_loss[loss=0.2162, simple_loss=0.2952, pruned_loss=0.06858, over 1420154.92 frames.], batch size: 23, lr: 5.88e-04 2022-05-27 06:08:35,958 INFO [train.py:842] (0/4) Epoch 9, batch 4500, loss[loss=0.2434, simple_loss=0.3155, pruned_loss=0.08561, over 7201.00 frames.], tot_loss[loss=0.2164, simple_loss=0.2954, pruned_loss=0.06871, over 1422280.66 frames.], batch size: 23, lr: 5.88e-04 2022-05-27 06:09:14,590 INFO [train.py:842] (0/4) Epoch 9, batch 4550, loss[loss=0.2125, simple_loss=0.299, pruned_loss=0.06296, over 7176.00 frames.], tot_loss[loss=0.2153, simple_loss=0.2945, pruned_loss=0.06804, over 1423057.61 frames.], batch size: 26, lr: 5.87e-04 2022-05-27 06:09:53,675 INFO [train.py:842] (0/4) Epoch 9, batch 4600, loss[loss=0.1911, simple_loss=0.2676, pruned_loss=0.05731, over 7074.00 frames.], tot_loss[loss=0.214, simple_loss=0.2932, pruned_loss=0.06738, over 1422964.17 frames.], batch size: 18, lr: 5.87e-04 2022-05-27 06:10:32,410 INFO [train.py:842] (0/4) Epoch 9, batch 4650, loss[loss=0.216, simple_loss=0.2935, pruned_loss=0.06927, over 7167.00 frames.], tot_loss[loss=0.2147, simple_loss=0.294, pruned_loss=0.06768, over 1422790.71 frames.], batch size: 19, lr: 5.87e-04 2022-05-27 06:11:11,203 INFO [train.py:842] (0/4) Epoch 9, batch 4700, loss[loss=0.2257, simple_loss=0.3069, pruned_loss=0.07225, over 7335.00 frames.], tot_loss[loss=0.2143, simple_loss=0.2934, pruned_loss=0.06755, over 1421090.85 frames.], batch size: 24, lr: 5.87e-04 2022-05-27 06:11:49,726 INFO [train.py:842] (0/4) Epoch 9, batch 4750, loss[loss=0.2476, simple_loss=0.3187, pruned_loss=0.08819, over 7190.00 frames.], tot_loss[loss=0.2161, simple_loss=0.2949, pruned_loss=0.06867, over 1417352.94 frames.], batch size: 23, lr: 5.87e-04 2022-05-27 06:12:28,733 INFO [train.py:842] (0/4) Epoch 9, batch 4800, loss[loss=0.2933, simple_loss=0.3376, pruned_loss=0.1245, over 7415.00 frames.], tot_loss[loss=0.2163, simple_loss=0.2947, pruned_loss=0.06899, over 1416271.55 frames.], batch size: 18, lr: 5.86e-04 2022-05-27 06:13:07,352 INFO [train.py:842] (0/4) Epoch 9, batch 4850, loss[loss=0.1854, simple_loss=0.2569, pruned_loss=0.05699, over 7280.00 frames.], tot_loss[loss=0.2162, simple_loss=0.2943, pruned_loss=0.06905, over 1419968.43 frames.], batch size: 17, lr: 5.86e-04 2022-05-27 06:13:46,270 INFO [train.py:842] (0/4) Epoch 9, batch 4900, loss[loss=0.2563, simple_loss=0.3163, pruned_loss=0.09813, over 5212.00 frames.], tot_loss[loss=0.2169, simple_loss=0.2948, pruned_loss=0.06951, over 1418217.95 frames.], batch size: 52, lr: 5.86e-04 2022-05-27 06:14:24,884 INFO [train.py:842] (0/4) Epoch 9, batch 4950, loss[loss=0.2632, simple_loss=0.3327, pruned_loss=0.09687, over 7318.00 frames.], tot_loss[loss=0.216, simple_loss=0.2938, pruned_loss=0.06912, over 1419200.95 frames.], batch size: 25, lr: 5.86e-04 2022-05-27 06:15:03,638 INFO [train.py:842] (0/4) Epoch 9, batch 5000, loss[loss=0.2526, simple_loss=0.318, 
pruned_loss=0.0936, over 7274.00 frames.], tot_loss[loss=0.2151, simple_loss=0.293, pruned_loss=0.06863, over 1417429.88 frames.], batch size: 24, lr: 5.86e-04 2022-05-27 06:15:42,258 INFO [train.py:842] (0/4) Epoch 9, batch 5050, loss[loss=0.1698, simple_loss=0.2583, pruned_loss=0.04067, over 7433.00 frames.], tot_loss[loss=0.2146, simple_loss=0.2927, pruned_loss=0.06819, over 1418639.09 frames.], batch size: 20, lr: 5.86e-04 2022-05-27 06:16:21,072 INFO [train.py:842] (0/4) Epoch 9, batch 5100, loss[loss=0.2083, simple_loss=0.2995, pruned_loss=0.05855, over 6285.00 frames.], tot_loss[loss=0.2157, simple_loss=0.294, pruned_loss=0.06872, over 1419893.60 frames.], batch size: 37, lr: 5.85e-04 2022-05-27 06:16:59,713 INFO [train.py:842] (0/4) Epoch 9, batch 5150, loss[loss=0.2054, simple_loss=0.2977, pruned_loss=0.0566, over 7150.00 frames.], tot_loss[loss=0.2163, simple_loss=0.2944, pruned_loss=0.06905, over 1422488.25 frames.], batch size: 20, lr: 5.85e-04 2022-05-27 06:17:38,706 INFO [train.py:842] (0/4) Epoch 9, batch 5200, loss[loss=0.2043, simple_loss=0.2827, pruned_loss=0.06295, over 7337.00 frames.], tot_loss[loss=0.2183, simple_loss=0.2961, pruned_loss=0.07025, over 1423836.32 frames.], batch size: 20, lr: 5.85e-04 2022-05-27 06:18:17,392 INFO [train.py:842] (0/4) Epoch 9, batch 5250, loss[loss=0.2315, simple_loss=0.3127, pruned_loss=0.07509, over 7232.00 frames.], tot_loss[loss=0.2173, simple_loss=0.2956, pruned_loss=0.06953, over 1424600.24 frames.], batch size: 20, lr: 5.85e-04 2022-05-27 06:18:56,024 INFO [train.py:842] (0/4) Epoch 9, batch 5300, loss[loss=0.2043, simple_loss=0.2904, pruned_loss=0.05907, over 7434.00 frames.], tot_loss[loss=0.2162, simple_loss=0.2948, pruned_loss=0.06879, over 1421005.55 frames.], batch size: 20, lr: 5.85e-04 2022-05-27 06:19:34,496 INFO [train.py:842] (0/4) Epoch 9, batch 5350, loss[loss=0.1946, simple_loss=0.2848, pruned_loss=0.05217, over 7300.00 frames.], tot_loss[loss=0.2182, simple_loss=0.2968, pruned_loss=0.06981, over 1422640.60 frames.], batch size: 24, lr: 5.84e-04 2022-05-27 06:20:13,321 INFO [train.py:842] (0/4) Epoch 9, batch 5400, loss[loss=0.252, simple_loss=0.3086, pruned_loss=0.09774, over 5535.00 frames.], tot_loss[loss=0.2191, simple_loss=0.2976, pruned_loss=0.07029, over 1416931.35 frames.], batch size: 52, lr: 5.84e-04 2022-05-27 06:20:51,803 INFO [train.py:842] (0/4) Epoch 9, batch 5450, loss[loss=0.2139, simple_loss=0.2858, pruned_loss=0.07097, over 6630.00 frames.], tot_loss[loss=0.2193, simple_loss=0.2976, pruned_loss=0.07056, over 1418011.69 frames.], batch size: 31, lr: 5.84e-04 2022-05-27 06:21:30,672 INFO [train.py:842] (0/4) Epoch 9, batch 5500, loss[loss=0.2541, simple_loss=0.3269, pruned_loss=0.09062, over 7119.00 frames.], tot_loss[loss=0.2215, simple_loss=0.2993, pruned_loss=0.0719, over 1417695.28 frames.], batch size: 21, lr: 5.84e-04 2022-05-27 06:22:09,226 INFO [train.py:842] (0/4) Epoch 9, batch 5550, loss[loss=0.4185, simple_loss=0.4397, pruned_loss=0.1986, over 5163.00 frames.], tot_loss[loss=0.2226, simple_loss=0.2995, pruned_loss=0.07285, over 1418016.17 frames.], batch size: 52, lr: 5.84e-04 2022-05-27 06:22:47,826 INFO [train.py:842] (0/4) Epoch 9, batch 5600, loss[loss=0.2303, simple_loss=0.3181, pruned_loss=0.07126, over 6780.00 frames.], tot_loss[loss=0.2212, simple_loss=0.2988, pruned_loss=0.0718, over 1418465.22 frames.], batch size: 31, lr: 5.84e-04 2022-05-27 06:23:26,383 INFO [train.py:842] (0/4) Epoch 9, batch 5650, loss[loss=0.2147, simple_loss=0.3056, pruned_loss=0.06196, over 7101.00 
frames.], tot_loss[loss=0.2196, simple_loss=0.2974, pruned_loss=0.07094, over 1420189.51 frames.], batch size: 21, lr: 5.83e-04 2022-05-27 06:24:05,338 INFO [train.py:842] (0/4) Epoch 9, batch 5700, loss[loss=0.2353, simple_loss=0.3057, pruned_loss=0.08249, over 7235.00 frames.], tot_loss[loss=0.2189, simple_loss=0.2966, pruned_loss=0.07064, over 1417995.94 frames.], batch size: 20, lr: 5.83e-04 2022-05-27 06:24:44,090 INFO [train.py:842] (0/4) Epoch 9, batch 5750, loss[loss=0.2013, simple_loss=0.2889, pruned_loss=0.0568, over 7126.00 frames.], tot_loss[loss=0.2181, simple_loss=0.2963, pruned_loss=0.06996, over 1422213.11 frames.], batch size: 21, lr: 5.83e-04 2022-05-27 06:25:23,072 INFO [train.py:842] (0/4) Epoch 9, batch 5800, loss[loss=0.2524, simple_loss=0.3341, pruned_loss=0.0854, over 7315.00 frames.], tot_loss[loss=0.2188, simple_loss=0.2973, pruned_loss=0.07013, over 1421715.71 frames.], batch size: 21, lr: 5.83e-04 2022-05-27 06:26:01,748 INFO [train.py:842] (0/4) Epoch 9, batch 5850, loss[loss=0.2035, simple_loss=0.2827, pruned_loss=0.06216, over 7148.00 frames.], tot_loss[loss=0.2198, simple_loss=0.2977, pruned_loss=0.07098, over 1418378.22 frames.], batch size: 19, lr: 5.83e-04 2022-05-27 06:26:41,214 INFO [train.py:842] (0/4) Epoch 9, batch 5900, loss[loss=0.1683, simple_loss=0.2442, pruned_loss=0.04614, over 7428.00 frames.], tot_loss[loss=0.2177, simple_loss=0.2957, pruned_loss=0.06983, over 1422190.67 frames.], batch size: 18, lr: 5.82e-04 2022-05-27 06:27:19,680 INFO [train.py:842] (0/4) Epoch 9, batch 5950, loss[loss=0.2278, simple_loss=0.3072, pruned_loss=0.07417, over 7293.00 frames.], tot_loss[loss=0.216, simple_loss=0.2945, pruned_loss=0.06876, over 1421982.32 frames.], batch size: 24, lr: 5.82e-04 2022-05-27 06:27:58,478 INFO [train.py:842] (0/4) Epoch 9, batch 6000, loss[loss=0.1525, simple_loss=0.2314, pruned_loss=0.03681, over 7009.00 frames.], tot_loss[loss=0.2139, simple_loss=0.2929, pruned_loss=0.0674, over 1420796.27 frames.], batch size: 16, lr: 5.82e-04 2022-05-27 06:27:58,479 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 06:28:07,748 INFO [train.py:871] (0/4) Epoch 9, validation: loss=0.1769, simple_loss=0.2771, pruned_loss=0.03838, over 868885.00 frames. 
2022-05-27 06:28:46,378 INFO [train.py:842] (0/4) Epoch 9, batch 6050, loss[loss=0.1832, simple_loss=0.2698, pruned_loss=0.04829, over 6781.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2918, pruned_loss=0.06658, over 1420636.15 frames.], batch size: 15, lr: 5.82e-04 2022-05-27 06:29:25,559 INFO [train.py:842] (0/4) Epoch 9, batch 6100, loss[loss=0.1853, simple_loss=0.2574, pruned_loss=0.05656, over 7271.00 frames.], tot_loss[loss=0.2133, simple_loss=0.2927, pruned_loss=0.06697, over 1419770.01 frames.], batch size: 17, lr: 5.82e-04 2022-05-27 06:30:04,195 INFO [train.py:842] (0/4) Epoch 9, batch 6150, loss[loss=0.1755, simple_loss=0.2557, pruned_loss=0.04764, over 7288.00 frames.], tot_loss[loss=0.2142, simple_loss=0.2935, pruned_loss=0.06745, over 1417597.74 frames.], batch size: 18, lr: 5.82e-04 2022-05-27 06:30:42,918 INFO [train.py:842] (0/4) Epoch 9, batch 6200, loss[loss=0.2105, simple_loss=0.2824, pruned_loss=0.0693, over 7131.00 frames.], tot_loss[loss=0.2142, simple_loss=0.2936, pruned_loss=0.06742, over 1419876.59 frames.], batch size: 17, lr: 5.81e-04 2022-05-27 06:31:21,489 INFO [train.py:842] (0/4) Epoch 9, batch 6250, loss[loss=0.2266, simple_loss=0.3094, pruned_loss=0.07184, over 7411.00 frames.], tot_loss[loss=0.2135, simple_loss=0.2933, pruned_loss=0.06682, over 1419762.56 frames.], batch size: 20, lr: 5.81e-04 2022-05-27 06:32:00,456 INFO [train.py:842] (0/4) Epoch 9, batch 6300, loss[loss=0.2888, simple_loss=0.342, pruned_loss=0.1178, over 4702.00 frames.], tot_loss[loss=0.2142, simple_loss=0.2935, pruned_loss=0.06744, over 1418405.75 frames.], batch size: 52, lr: 5.81e-04 2022-05-27 06:32:38,974 INFO [train.py:842] (0/4) Epoch 9, batch 6350, loss[loss=0.1809, simple_loss=0.2766, pruned_loss=0.04253, over 7412.00 frames.], tot_loss[loss=0.2137, simple_loss=0.2928, pruned_loss=0.0673, over 1419867.67 frames.], batch size: 21, lr: 5.81e-04 2022-05-27 06:33:17,797 INFO [train.py:842] (0/4) Epoch 9, batch 6400, loss[loss=0.2275, simple_loss=0.3134, pruned_loss=0.07076, over 7115.00 frames.], tot_loss[loss=0.215, simple_loss=0.2932, pruned_loss=0.06841, over 1420108.18 frames.], batch size: 21, lr: 5.81e-04 2022-05-27 06:33:56,412 INFO [train.py:842] (0/4) Epoch 9, batch 6450, loss[loss=0.1802, simple_loss=0.2647, pruned_loss=0.04789, over 7355.00 frames.], tot_loss[loss=0.2146, simple_loss=0.2931, pruned_loss=0.06805, over 1421754.73 frames.], batch size: 19, lr: 5.80e-04 2022-05-27 06:34:06,826 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-80000.pt 2022-05-27 06:34:38,047 INFO [train.py:842] (0/4) Epoch 9, batch 6500, loss[loss=0.2164, simple_loss=0.31, pruned_loss=0.0614, over 7410.00 frames.], tot_loss[loss=0.2148, simple_loss=0.2934, pruned_loss=0.06813, over 1423376.39 frames.], batch size: 21, lr: 5.80e-04 2022-05-27 06:35:16,522 INFO [train.py:842] (0/4) Epoch 9, batch 6550, loss[loss=0.2893, simple_loss=0.3682, pruned_loss=0.1052, over 7143.00 frames.], tot_loss[loss=0.2156, simple_loss=0.294, pruned_loss=0.06854, over 1423523.90 frames.], batch size: 20, lr: 5.80e-04 2022-05-27 06:35:55,355 INFO [train.py:842] (0/4) Epoch 9, batch 6600, loss[loss=0.2062, simple_loss=0.2836, pruned_loss=0.06439, over 7070.00 frames.], tot_loss[loss=0.2161, simple_loss=0.2949, pruned_loss=0.06866, over 1425369.24 frames.], batch size: 18, lr: 5.80e-04 2022-05-27 06:36:34,078 INFO [train.py:842] (0/4) Epoch 9, batch 6650, loss[loss=0.1917, simple_loss=0.2719, pruned_loss=0.05569, over 7450.00 frames.], tot_loss[loss=0.217, 
simple_loss=0.2952, pruned_loss=0.06941, over 1426529.90 frames.], batch size: 19, lr: 5.80e-04 2022-05-27 06:37:12,640 INFO [train.py:842] (0/4) Epoch 9, batch 6700, loss[loss=0.2025, simple_loss=0.2741, pruned_loss=0.06544, over 7284.00 frames.], tot_loss[loss=0.2174, simple_loss=0.296, pruned_loss=0.06943, over 1423631.98 frames.], batch size: 17, lr: 5.80e-04 2022-05-27 06:37:51,245 INFO [train.py:842] (0/4) Epoch 9, batch 6750, loss[loss=0.1837, simple_loss=0.2703, pruned_loss=0.04849, over 7271.00 frames.], tot_loss[loss=0.2181, simple_loss=0.2969, pruned_loss=0.06962, over 1423753.28 frames.], batch size: 19, lr: 5.79e-04 2022-05-27 06:38:30,277 INFO [train.py:842] (0/4) Epoch 9, batch 6800, loss[loss=0.2634, simple_loss=0.3431, pruned_loss=0.09184, over 7325.00 frames.], tot_loss[loss=0.2172, simple_loss=0.2958, pruned_loss=0.06925, over 1426693.37 frames.], batch size: 22, lr: 5.79e-04 2022-05-27 06:39:08,699 INFO [train.py:842] (0/4) Epoch 9, batch 6850, loss[loss=0.2352, simple_loss=0.3232, pruned_loss=0.07353, over 7110.00 frames.], tot_loss[loss=0.2169, simple_loss=0.2964, pruned_loss=0.06868, over 1426339.20 frames.], batch size: 21, lr: 5.79e-04 2022-05-27 06:39:47,798 INFO [train.py:842] (0/4) Epoch 9, batch 6900, loss[loss=0.23, simple_loss=0.3146, pruned_loss=0.07271, over 7324.00 frames.], tot_loss[loss=0.2167, simple_loss=0.2963, pruned_loss=0.06857, over 1421925.53 frames.], batch size: 20, lr: 5.79e-04 2022-05-27 06:40:26,419 INFO [train.py:842] (0/4) Epoch 9, batch 6950, loss[loss=0.1776, simple_loss=0.2572, pruned_loss=0.04902, over 7358.00 frames.], tot_loss[loss=0.2177, simple_loss=0.2969, pruned_loss=0.06925, over 1419267.14 frames.], batch size: 19, lr: 5.79e-04 2022-05-27 06:41:05,234 INFO [train.py:842] (0/4) Epoch 9, batch 7000, loss[loss=0.2117, simple_loss=0.2994, pruned_loss=0.06196, over 6645.00 frames.], tot_loss[loss=0.2157, simple_loss=0.2953, pruned_loss=0.068, over 1421314.39 frames.], batch size: 31, lr: 5.78e-04 2022-05-27 06:41:43,602 INFO [train.py:842] (0/4) Epoch 9, batch 7050, loss[loss=0.1925, simple_loss=0.2833, pruned_loss=0.05084, over 7115.00 frames.], tot_loss[loss=0.2162, simple_loss=0.2956, pruned_loss=0.06834, over 1419919.04 frames.], batch size: 21, lr: 5.78e-04 2022-05-27 06:42:22,508 INFO [train.py:842] (0/4) Epoch 9, batch 7100, loss[loss=0.1919, simple_loss=0.2708, pruned_loss=0.0565, over 7058.00 frames.], tot_loss[loss=0.2153, simple_loss=0.2948, pruned_loss=0.06785, over 1424398.78 frames.], batch size: 18, lr: 5.78e-04 2022-05-27 06:43:01,075 INFO [train.py:842] (0/4) Epoch 9, batch 7150, loss[loss=0.2569, simple_loss=0.3354, pruned_loss=0.08922, over 7159.00 frames.], tot_loss[loss=0.2159, simple_loss=0.2952, pruned_loss=0.06827, over 1418597.00 frames.], batch size: 26, lr: 5.78e-04 2022-05-27 06:43:40,008 INFO [train.py:842] (0/4) Epoch 9, batch 7200, loss[loss=0.1838, simple_loss=0.2578, pruned_loss=0.05489, over 7361.00 frames.], tot_loss[loss=0.215, simple_loss=0.2944, pruned_loss=0.06786, over 1422254.07 frames.], batch size: 19, lr: 5.78e-04 2022-05-27 06:44:18,604 INFO [train.py:842] (0/4) Epoch 9, batch 7250, loss[loss=0.2279, simple_loss=0.3096, pruned_loss=0.07316, over 6385.00 frames.], tot_loss[loss=0.2138, simple_loss=0.2931, pruned_loss=0.06727, over 1424379.33 frames.], batch size: 38, lr: 5.78e-04 2022-05-27 06:44:57,343 INFO [train.py:842] (0/4) Epoch 9, batch 7300, loss[loss=0.1882, simple_loss=0.2688, pruned_loss=0.05377, over 7067.00 frames.], tot_loss[loss=0.2137, simple_loss=0.2935, 
pruned_loss=0.06697, over 1427113.41 frames.], batch size: 18, lr: 5.77e-04 2022-05-27 06:45:35,776 INFO [train.py:842] (0/4) Epoch 9, batch 7350, loss[loss=0.2405, simple_loss=0.326, pruned_loss=0.07754, over 7210.00 frames.], tot_loss[loss=0.2152, simple_loss=0.2946, pruned_loss=0.06783, over 1426430.83 frames.], batch size: 23, lr: 5.77e-04 2022-05-27 06:46:15,097 INFO [train.py:842] (0/4) Epoch 9, batch 7400, loss[loss=0.1891, simple_loss=0.2591, pruned_loss=0.05949, over 7426.00 frames.], tot_loss[loss=0.2149, simple_loss=0.294, pruned_loss=0.06788, over 1428398.41 frames.], batch size: 18, lr: 5.77e-04 2022-05-27 06:46:53,808 INFO [train.py:842] (0/4) Epoch 9, batch 7450, loss[loss=0.2056, simple_loss=0.2882, pruned_loss=0.06152, over 7311.00 frames.], tot_loss[loss=0.2151, simple_loss=0.294, pruned_loss=0.06803, over 1429704.58 frames.], batch size: 25, lr: 5.77e-04 2022-05-27 06:47:32,690 INFO [train.py:842] (0/4) Epoch 9, batch 7500, loss[loss=0.2855, simple_loss=0.3559, pruned_loss=0.1076, over 5285.00 frames.], tot_loss[loss=0.2148, simple_loss=0.2938, pruned_loss=0.0679, over 1420211.03 frames.], batch size: 52, lr: 5.77e-04 2022-05-27 06:48:11,216 INFO [train.py:842] (0/4) Epoch 9, batch 7550, loss[loss=0.2062, simple_loss=0.2908, pruned_loss=0.0608, over 7205.00 frames.], tot_loss[loss=0.2138, simple_loss=0.2933, pruned_loss=0.06718, over 1418016.82 frames.], batch size: 23, lr: 5.76e-04 2022-05-27 06:48:50,190 INFO [train.py:842] (0/4) Epoch 9, batch 7600, loss[loss=0.2718, simple_loss=0.3228, pruned_loss=0.1104, over 7140.00 frames.], tot_loss[loss=0.2144, simple_loss=0.2934, pruned_loss=0.0677, over 1417803.10 frames.], batch size: 17, lr: 5.76e-04 2022-05-27 06:49:28,755 INFO [train.py:842] (0/4) Epoch 9, batch 7650, loss[loss=0.2154, simple_loss=0.3035, pruned_loss=0.06362, over 7139.00 frames.], tot_loss[loss=0.2141, simple_loss=0.2935, pruned_loss=0.06733, over 1420339.20 frames.], batch size: 20, lr: 5.76e-04 2022-05-27 06:50:07,698 INFO [train.py:842] (0/4) Epoch 9, batch 7700, loss[loss=0.1735, simple_loss=0.2405, pruned_loss=0.0533, over 7414.00 frames.], tot_loss[loss=0.214, simple_loss=0.2933, pruned_loss=0.06737, over 1420446.94 frames.], batch size: 18, lr: 5.76e-04 2022-05-27 06:50:46,211 INFO [train.py:842] (0/4) Epoch 9, batch 7750, loss[loss=0.2655, simple_loss=0.3375, pruned_loss=0.09676, over 6428.00 frames.], tot_loss[loss=0.2148, simple_loss=0.2934, pruned_loss=0.06806, over 1415686.44 frames.], batch size: 38, lr: 5.76e-04 2022-05-27 06:51:25,030 INFO [train.py:842] (0/4) Epoch 9, batch 7800, loss[loss=0.2075, simple_loss=0.2809, pruned_loss=0.06702, over 7354.00 frames.], tot_loss[loss=0.2155, simple_loss=0.2942, pruned_loss=0.06842, over 1422505.76 frames.], batch size: 19, lr: 5.76e-04 2022-05-27 06:52:03,439 INFO [train.py:842] (0/4) Epoch 9, batch 7850, loss[loss=0.2281, simple_loss=0.3086, pruned_loss=0.07375, over 7294.00 frames.], tot_loss[loss=0.2165, simple_loss=0.2954, pruned_loss=0.06882, over 1426919.48 frames.], batch size: 24, lr: 5.75e-04 2022-05-27 06:52:42,325 INFO [train.py:842] (0/4) Epoch 9, batch 7900, loss[loss=0.2499, simple_loss=0.3287, pruned_loss=0.08553, over 7354.00 frames.], tot_loss[loss=0.2156, simple_loss=0.295, pruned_loss=0.06815, over 1430068.34 frames.], batch size: 19, lr: 5.75e-04 2022-05-27 06:53:20,992 INFO [train.py:842] (0/4) Epoch 9, batch 7950, loss[loss=0.2251, simple_loss=0.3035, pruned_loss=0.07333, over 7144.00 frames.], tot_loss[loss=0.2161, simple_loss=0.2953, pruned_loss=0.06841, over 1428456.99 
frames.], batch size: 20, lr: 5.75e-04 2022-05-27 06:53:59,861 INFO [train.py:842] (0/4) Epoch 9, batch 8000, loss[loss=0.2149, simple_loss=0.2847, pruned_loss=0.07255, over 7175.00 frames.], tot_loss[loss=0.2145, simple_loss=0.2939, pruned_loss=0.06749, over 1425423.43 frames.], batch size: 18, lr: 5.75e-04 2022-05-27 06:54:38,287 INFO [train.py:842] (0/4) Epoch 9, batch 8050, loss[loss=0.1793, simple_loss=0.2598, pruned_loss=0.04934, over 7264.00 frames.], tot_loss[loss=0.2161, simple_loss=0.2951, pruned_loss=0.06857, over 1425297.05 frames.], batch size: 19, lr: 5.75e-04 2022-05-27 06:55:17,189 INFO [train.py:842] (0/4) Epoch 9, batch 8100, loss[loss=0.246, simple_loss=0.3091, pruned_loss=0.09146, over 7065.00 frames.], tot_loss[loss=0.2166, simple_loss=0.2952, pruned_loss=0.06899, over 1424734.12 frames.], batch size: 18, lr: 5.75e-04 2022-05-27 06:55:55,749 INFO [train.py:842] (0/4) Epoch 9, batch 8150, loss[loss=0.2271, simple_loss=0.3084, pruned_loss=0.07295, over 6779.00 frames.], tot_loss[loss=0.2155, simple_loss=0.2942, pruned_loss=0.06838, over 1422556.88 frames.], batch size: 31, lr: 5.74e-04 2022-05-27 06:56:34,504 INFO [train.py:842] (0/4) Epoch 9, batch 8200, loss[loss=0.2139, simple_loss=0.2865, pruned_loss=0.07062, over 7104.00 frames.], tot_loss[loss=0.2169, simple_loss=0.2952, pruned_loss=0.06931, over 1415917.48 frames.], batch size: 21, lr: 5.74e-04 2022-05-27 06:57:13,124 INFO [train.py:842] (0/4) Epoch 9, batch 8250, loss[loss=0.1805, simple_loss=0.2557, pruned_loss=0.05262, over 7273.00 frames.], tot_loss[loss=0.2168, simple_loss=0.2952, pruned_loss=0.06915, over 1420803.40 frames.], batch size: 17, lr: 5.74e-04 2022-05-27 06:57:51,714 INFO [train.py:842] (0/4) Epoch 9, batch 8300, loss[loss=0.1828, simple_loss=0.2815, pruned_loss=0.04203, over 7305.00 frames.], tot_loss[loss=0.218, simple_loss=0.2962, pruned_loss=0.06993, over 1409451.96 frames.], batch size: 21, lr: 5.74e-04 2022-05-27 06:58:30,523 INFO [train.py:842] (0/4) Epoch 9, batch 8350, loss[loss=0.2019, simple_loss=0.2959, pruned_loss=0.05395, over 7320.00 frames.], tot_loss[loss=0.2157, simple_loss=0.2942, pruned_loss=0.0686, over 1413697.90 frames.], batch size: 21, lr: 5.74e-04 2022-05-27 06:59:09,802 INFO [train.py:842] (0/4) Epoch 9, batch 8400, loss[loss=0.214, simple_loss=0.3075, pruned_loss=0.06028, over 7167.00 frames.], tot_loss[loss=0.2145, simple_loss=0.2932, pruned_loss=0.06795, over 1421099.23 frames.], batch size: 28, lr: 5.74e-04 2022-05-27 06:59:48,406 INFO [train.py:842] (0/4) Epoch 9, batch 8450, loss[loss=0.2168, simple_loss=0.2973, pruned_loss=0.06811, over 7109.00 frames.], tot_loss[loss=0.2136, simple_loss=0.2926, pruned_loss=0.06733, over 1422857.95 frames.], batch size: 21, lr: 5.73e-04 2022-05-27 07:00:27,399 INFO [train.py:842] (0/4) Epoch 9, batch 8500, loss[loss=0.1922, simple_loss=0.2861, pruned_loss=0.04917, over 7167.00 frames.], tot_loss[loss=0.2136, simple_loss=0.2926, pruned_loss=0.06731, over 1422830.25 frames.], batch size: 19, lr: 5.73e-04 2022-05-27 07:01:05,909 INFO [train.py:842] (0/4) Epoch 9, batch 8550, loss[loss=0.2278, simple_loss=0.3086, pruned_loss=0.07355, over 6253.00 frames.], tot_loss[loss=0.2128, simple_loss=0.2923, pruned_loss=0.06665, over 1419232.19 frames.], batch size: 37, lr: 5.73e-04 2022-05-27 07:01:44,693 INFO [train.py:842] (0/4) Epoch 9, batch 8600, loss[loss=0.2817, simple_loss=0.3473, pruned_loss=0.1081, over 5184.00 frames.], tot_loss[loss=0.2152, simple_loss=0.2943, pruned_loss=0.06806, over 1415789.54 frames.], batch size: 52, lr: 
5.73e-04 2022-05-27 07:02:23,104 INFO [train.py:842] (0/4) Epoch 9, batch 8650, loss[loss=0.1911, simple_loss=0.2796, pruned_loss=0.05129, over 7319.00 frames.], tot_loss[loss=0.2149, simple_loss=0.2944, pruned_loss=0.06766, over 1420001.67 frames.], batch size: 21, lr: 5.73e-04 2022-05-27 07:03:02,351 INFO [train.py:842] (0/4) Epoch 9, batch 8700, loss[loss=0.1608, simple_loss=0.2492, pruned_loss=0.03617, over 7345.00 frames.], tot_loss[loss=0.2126, simple_loss=0.2918, pruned_loss=0.06672, over 1422305.02 frames.], batch size: 19, lr: 5.72e-04 2022-05-27 07:03:40,719 INFO [train.py:842] (0/4) Epoch 9, batch 8750, loss[loss=0.1963, simple_loss=0.2745, pruned_loss=0.05907, over 7149.00 frames.], tot_loss[loss=0.2134, simple_loss=0.2926, pruned_loss=0.06704, over 1417912.84 frames.], batch size: 18, lr: 5.72e-04 2022-05-27 07:04:19,472 INFO [train.py:842] (0/4) Epoch 9, batch 8800, loss[loss=0.2597, simple_loss=0.3437, pruned_loss=0.08782, over 7199.00 frames.], tot_loss[loss=0.2147, simple_loss=0.294, pruned_loss=0.06768, over 1418782.23 frames.], batch size: 23, lr: 5.72e-04 2022-05-27 07:04:57,931 INFO [train.py:842] (0/4) Epoch 9, batch 8850, loss[loss=0.1905, simple_loss=0.2771, pruned_loss=0.05188, over 7273.00 frames.], tot_loss[loss=0.2138, simple_loss=0.2931, pruned_loss=0.06724, over 1410683.96 frames.], batch size: 24, lr: 5.72e-04 2022-05-27 07:05:37,302 INFO [train.py:842] (0/4) Epoch 9, batch 8900, loss[loss=0.257, simple_loss=0.3494, pruned_loss=0.08236, over 7378.00 frames.], tot_loss[loss=0.213, simple_loss=0.2921, pruned_loss=0.06696, over 1405964.27 frames.], batch size: 23, lr: 5.72e-04 2022-05-27 07:06:15,871 INFO [train.py:842] (0/4) Epoch 9, batch 8950, loss[loss=0.2661, simple_loss=0.3308, pruned_loss=0.1007, over 7350.00 frames.], tot_loss[loss=0.2147, simple_loss=0.2933, pruned_loss=0.06811, over 1399323.03 frames.], batch size: 19, lr: 5.72e-04 2022-05-27 07:06:55,094 INFO [train.py:842] (0/4) Epoch 9, batch 9000, loss[loss=0.2472, simple_loss=0.3223, pruned_loss=0.08607, over 6400.00 frames.], tot_loss[loss=0.2149, simple_loss=0.2929, pruned_loss=0.06849, over 1393611.58 frames.], batch size: 38, lr: 5.71e-04 2022-05-27 07:06:55,095 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 07:07:04,565 INFO [train.py:871] (0/4) Epoch 9, validation: loss=0.1768, simple_loss=0.2774, pruned_loss=0.03806, over 868885.00 frames. 
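The periodic "Computing validation loss" records such as the Epoch 9 entry just above (loss=0.1768 over 868885 frames) are the easiest figures to compare across epochs. Below is a minimal, illustrative Python sketch for pulling those records out of a saved copy of this log; it is not part of train.py, the path "train_log.txt" is an assumption, and the regex simply mirrors the validation lines as they appear here.

import re

# Illustrative only -- not part of the training recipe.  Collect the
# "Epoch N, validation: loss=..." records from a saved copy of this log.
# The log path is an assumption; the pattern mirrors the lines above.
VALID_RE = re.compile(
    r"Epoch (?P<epoch>\d+), validation: loss=(?P<loss>[\d.]+), "
    r"simple_loss=(?P<simple>[\d.]+), pruned_loss=(?P<pruned>[\d.]+)"
)

def validation_history(path="train_log.txt"):
    with open(path) as f:
        text = f.read()
    return [(int(m.group("epoch")), float(m.group("loss")))
            for m in VALID_RE.finditer(text)]

For the validation entries visible in this section it would return pairs like (9, 0.1768), (10, 0.1722), (10, 0.173), (10, 0.1756).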
2022-05-27 07:07:43,577 INFO [train.py:842] (0/4) Epoch 9, batch 9050, loss[loss=0.1831, simple_loss=0.2637, pruned_loss=0.05127, over 7275.00 frames.], tot_loss[loss=0.215, simple_loss=0.2928, pruned_loss=0.0686, over 1386937.13 frames.], batch size: 18, lr: 5.71e-04 2022-05-27 07:08:21,633 INFO [train.py:842] (0/4) Epoch 9, batch 9100, loss[loss=0.3121, simple_loss=0.3662, pruned_loss=0.129, over 5178.00 frames.], tot_loss[loss=0.2214, simple_loss=0.2984, pruned_loss=0.07224, over 1351019.85 frames.], batch size: 52, lr: 5.71e-04 2022-05-27 07:08:59,050 INFO [train.py:842] (0/4) Epoch 9, batch 9150, loss[loss=0.2635, simple_loss=0.3167, pruned_loss=0.1051, over 5245.00 frames.], tot_loss[loss=0.2265, simple_loss=0.3027, pruned_loss=0.07516, over 1298705.85 frames.], batch size: 52, lr: 5.71e-04 2022-05-27 07:09:31,633 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-9.pt 2022-05-27 07:09:51,893 INFO [train.py:842] (0/4) Epoch 10, batch 0, loss[loss=0.2149, simple_loss=0.3006, pruned_loss=0.06464, over 7421.00 frames.], tot_loss[loss=0.2149, simple_loss=0.3006, pruned_loss=0.06464, over 7421.00 frames.], batch size: 21, lr: 5.49e-04 2022-05-27 07:10:30,747 INFO [train.py:842] (0/4) Epoch 10, batch 50, loss[loss=0.2425, simple_loss=0.3266, pruned_loss=0.07923, over 7219.00 frames.], tot_loss[loss=0.2176, simple_loss=0.2963, pruned_loss=0.0694, over 322004.41 frames.], batch size: 23, lr: 5.49e-04 2022-05-27 07:11:09,438 INFO [train.py:842] (0/4) Epoch 10, batch 100, loss[loss=0.211, simple_loss=0.288, pruned_loss=0.06704, over 5151.00 frames.], tot_loss[loss=0.2107, simple_loss=0.2902, pruned_loss=0.06559, over 557552.99 frames.], batch size: 52, lr: 5.48e-04 2022-05-27 07:11:48,007 INFO [train.py:842] (0/4) Epoch 10, batch 150, loss[loss=0.2202, simple_loss=0.2978, pruned_loss=0.0713, over 7435.00 frames.], tot_loss[loss=0.2108, simple_loss=0.2906, pruned_loss=0.06552, over 750564.52 frames.], batch size: 20, lr: 5.48e-04 2022-05-27 07:12:26,871 INFO [train.py:842] (0/4) Epoch 10, batch 200, loss[loss=0.1791, simple_loss=0.2628, pruned_loss=0.0477, over 7432.00 frames.], tot_loss[loss=0.2118, simple_loss=0.2915, pruned_loss=0.066, over 898175.92 frames.], batch size: 20, lr: 5.48e-04 2022-05-27 07:13:05,262 INFO [train.py:842] (0/4) Epoch 10, batch 250, loss[loss=0.1896, simple_loss=0.2682, pruned_loss=0.05553, over 7163.00 frames.], tot_loss[loss=0.2155, simple_loss=0.2948, pruned_loss=0.06806, over 1010312.36 frames.], batch size: 18, lr: 5.48e-04 2022-05-27 07:13:44,130 INFO [train.py:842] (0/4) Epoch 10, batch 300, loss[loss=0.2211, simple_loss=0.3108, pruned_loss=0.06572, over 7334.00 frames.], tot_loss[loss=0.2144, simple_loss=0.2936, pruned_loss=0.0676, over 1103741.91 frames.], batch size: 20, lr: 5.48e-04 2022-05-27 07:14:22,633 INFO [train.py:842] (0/4) Epoch 10, batch 350, loss[loss=0.2049, simple_loss=0.2885, pruned_loss=0.06068, over 7201.00 frames.], tot_loss[loss=0.2144, simple_loss=0.294, pruned_loss=0.06745, over 1172506.87 frames.], batch size: 23, lr: 5.48e-04 2022-05-27 07:15:01,363 INFO [train.py:842] (0/4) Epoch 10, batch 400, loss[loss=0.2499, simple_loss=0.3302, pruned_loss=0.08485, over 7143.00 frames.], tot_loss[loss=0.2152, simple_loss=0.295, pruned_loss=0.06773, over 1222078.66 frames.], batch size: 26, lr: 5.47e-04 2022-05-27 07:15:39,889 INFO [train.py:842] (0/4) Epoch 10, batch 450, loss[loss=0.2683, simple_loss=0.3345, pruned_loss=0.1011, over 6535.00 frames.], tot_loss[loss=0.2144, simple_loss=0.2946, 
pruned_loss=0.06715, over 1260061.23 frames.], batch size: 38, lr: 5.47e-04 2022-05-27 07:16:18,859 INFO [train.py:842] (0/4) Epoch 10, batch 500, loss[loss=0.2069, simple_loss=0.2956, pruned_loss=0.05915, over 7153.00 frames.], tot_loss[loss=0.2142, simple_loss=0.2942, pruned_loss=0.06707, over 1295054.61 frames.], batch size: 19, lr: 5.47e-04 2022-05-27 07:16:57,392 INFO [train.py:842] (0/4) Epoch 10, batch 550, loss[loss=0.1466, simple_loss=0.2254, pruned_loss=0.03393, over 7137.00 frames.], tot_loss[loss=0.2141, simple_loss=0.2943, pruned_loss=0.06697, over 1323283.98 frames.], batch size: 17, lr: 5.47e-04 2022-05-27 07:17:36,396 INFO [train.py:842] (0/4) Epoch 10, batch 600, loss[loss=0.1855, simple_loss=0.2603, pruned_loss=0.05531, over 7272.00 frames.], tot_loss[loss=0.2138, simple_loss=0.2937, pruned_loss=0.06693, over 1345108.91 frames.], batch size: 18, lr: 5.47e-04 2022-05-27 07:18:15,048 INFO [train.py:842] (0/4) Epoch 10, batch 650, loss[loss=0.1929, simple_loss=0.2876, pruned_loss=0.0491, over 7120.00 frames.], tot_loss[loss=0.2143, simple_loss=0.2939, pruned_loss=0.06733, over 1360929.01 frames.], batch size: 26, lr: 5.47e-04 2022-05-27 07:18:53,917 INFO [train.py:842] (0/4) Epoch 10, batch 700, loss[loss=0.2033, simple_loss=0.2935, pruned_loss=0.05655, over 7254.00 frames.], tot_loss[loss=0.2132, simple_loss=0.293, pruned_loss=0.06671, over 1375428.65 frames.], batch size: 25, lr: 5.46e-04 2022-05-27 07:19:32,401 INFO [train.py:842] (0/4) Epoch 10, batch 750, loss[loss=0.1933, simple_loss=0.2679, pruned_loss=0.05937, over 7434.00 frames.], tot_loss[loss=0.2123, simple_loss=0.2921, pruned_loss=0.06627, over 1385221.51 frames.], batch size: 20, lr: 5.46e-04 2022-05-27 07:20:11,289 INFO [train.py:842] (0/4) Epoch 10, batch 800, loss[loss=0.2646, simple_loss=0.3397, pruned_loss=0.09476, over 7314.00 frames.], tot_loss[loss=0.2119, simple_loss=0.2916, pruned_loss=0.06606, over 1392922.53 frames.], batch size: 24, lr: 5.46e-04 2022-05-27 07:20:50,035 INFO [train.py:842] (0/4) Epoch 10, batch 850, loss[loss=0.2098, simple_loss=0.2974, pruned_loss=0.06113, over 6371.00 frames.], tot_loss[loss=0.2119, simple_loss=0.2916, pruned_loss=0.06609, over 1396353.69 frames.], batch size: 37, lr: 5.46e-04 2022-05-27 07:21:29,288 INFO [train.py:842] (0/4) Epoch 10, batch 900, loss[loss=0.2106, simple_loss=0.2943, pruned_loss=0.06351, over 7318.00 frames.], tot_loss[loss=0.2124, simple_loss=0.2922, pruned_loss=0.06637, over 1406151.44 frames.], batch size: 21, lr: 5.46e-04 2022-05-27 07:22:07,876 INFO [train.py:842] (0/4) Epoch 10, batch 950, loss[loss=0.1993, simple_loss=0.2819, pruned_loss=0.05832, over 7213.00 frames.], tot_loss[loss=0.2138, simple_loss=0.2934, pruned_loss=0.06708, over 1407115.44 frames.], batch size: 26, lr: 5.46e-04 2022-05-27 07:22:46,802 INFO [train.py:842] (0/4) Epoch 10, batch 1000, loss[loss=0.2581, simple_loss=0.328, pruned_loss=0.09415, over 7327.00 frames.], tot_loss[loss=0.2131, simple_loss=0.2928, pruned_loss=0.06669, over 1414797.48 frames.], batch size: 20, lr: 5.46e-04 2022-05-27 07:23:25,633 INFO [train.py:842] (0/4) Epoch 10, batch 1050, loss[loss=0.2002, simple_loss=0.2821, pruned_loss=0.05912, over 7035.00 frames.], tot_loss[loss=0.2121, simple_loss=0.2919, pruned_loss=0.06622, over 1417007.78 frames.], batch size: 28, lr: 5.45e-04 2022-05-27 07:24:04,250 INFO [train.py:842] (0/4) Epoch 10, batch 1100, loss[loss=0.1832, simple_loss=0.2776, pruned_loss=0.04439, over 7076.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2922, pruned_loss=0.06643, over 
1417660.96 frames.], batch size: 28, lr: 5.45e-04 2022-05-27 07:24:42,896 INFO [train.py:842] (0/4) Epoch 10, batch 1150, loss[loss=0.1873, simple_loss=0.2671, pruned_loss=0.05371, over 7330.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2923, pruned_loss=0.06631, over 1421763.08 frames.], batch size: 20, lr: 5.45e-04 2022-05-27 07:25:21,662 INFO [train.py:842] (0/4) Epoch 10, batch 1200, loss[loss=0.272, simple_loss=0.36, pruned_loss=0.09205, over 7195.00 frames.], tot_loss[loss=0.2133, simple_loss=0.2931, pruned_loss=0.0667, over 1420114.05 frames.], batch size: 23, lr: 5.45e-04 2022-05-27 07:26:00,220 INFO [train.py:842] (0/4) Epoch 10, batch 1250, loss[loss=0.1851, simple_loss=0.2712, pruned_loss=0.0495, over 7266.00 frames.], tot_loss[loss=0.2137, simple_loss=0.2935, pruned_loss=0.06695, over 1418007.98 frames.], batch size: 17, lr: 5.45e-04 2022-05-27 07:26:39,247 INFO [train.py:842] (0/4) Epoch 10, batch 1300, loss[loss=0.1795, simple_loss=0.2484, pruned_loss=0.05534, over 6995.00 frames.], tot_loss[loss=0.2129, simple_loss=0.2924, pruned_loss=0.06672, over 1415540.31 frames.], batch size: 16, lr: 5.45e-04 2022-05-27 07:27:17,669 INFO [train.py:842] (0/4) Epoch 10, batch 1350, loss[loss=0.198, simple_loss=0.2924, pruned_loss=0.05176, over 7322.00 frames.], tot_loss[loss=0.2127, simple_loss=0.2922, pruned_loss=0.06659, over 1414331.47 frames.], batch size: 21, lr: 5.44e-04 2022-05-27 07:27:56,273 INFO [train.py:842] (0/4) Epoch 10, batch 1400, loss[loss=0.2263, simple_loss=0.3115, pruned_loss=0.07048, over 7134.00 frames.], tot_loss[loss=0.2137, simple_loss=0.2933, pruned_loss=0.06708, over 1417352.49 frames.], batch size: 21, lr: 5.44e-04 2022-05-27 07:28:35,048 INFO [train.py:842] (0/4) Epoch 10, batch 1450, loss[loss=0.1994, simple_loss=0.2987, pruned_loss=0.05009, over 7289.00 frames.], tot_loss[loss=0.2123, simple_loss=0.2921, pruned_loss=0.0662, over 1418276.84 frames.], batch size: 25, lr: 5.44e-04 2022-05-27 07:29:13,792 INFO [train.py:842] (0/4) Epoch 10, batch 1500, loss[loss=0.2241, simple_loss=0.2959, pruned_loss=0.0761, over 5347.00 frames.], tot_loss[loss=0.2141, simple_loss=0.2938, pruned_loss=0.06724, over 1414877.67 frames.], batch size: 52, lr: 5.44e-04 2022-05-27 07:29:52,345 INFO [train.py:842] (0/4) Epoch 10, batch 1550, loss[loss=0.2057, simple_loss=0.2956, pruned_loss=0.0579, over 7367.00 frames.], tot_loss[loss=0.2131, simple_loss=0.2934, pruned_loss=0.06635, over 1418700.24 frames.], batch size: 19, lr: 5.44e-04 2022-05-27 07:30:31,267 INFO [train.py:842] (0/4) Epoch 10, batch 1600, loss[loss=0.1868, simple_loss=0.2728, pruned_loss=0.05038, over 7256.00 frames.], tot_loss[loss=0.2126, simple_loss=0.2928, pruned_loss=0.06619, over 1417969.16 frames.], batch size: 19, lr: 5.44e-04 2022-05-27 07:31:09,811 INFO [train.py:842] (0/4) Epoch 10, batch 1650, loss[loss=0.2377, simple_loss=0.3122, pruned_loss=0.08166, over 7407.00 frames.], tot_loss[loss=0.2118, simple_loss=0.2921, pruned_loss=0.0658, over 1416292.51 frames.], batch size: 21, lr: 5.43e-04 2022-05-27 07:31:48,617 INFO [train.py:842] (0/4) Epoch 10, batch 1700, loss[loss=0.2249, simple_loss=0.3084, pruned_loss=0.07072, over 7288.00 frames.], tot_loss[loss=0.2118, simple_loss=0.2922, pruned_loss=0.06572, over 1414673.66 frames.], batch size: 24, lr: 5.43e-04 2022-05-27 07:32:27,177 INFO [train.py:842] (0/4) Epoch 10, batch 1750, loss[loss=0.2017, simple_loss=0.2838, pruned_loss=0.05977, over 6825.00 frames.], tot_loss[loss=0.2122, simple_loss=0.2924, pruned_loss=0.066, over 1406154.83 frames.], batch 
size: 15, lr: 5.43e-04 2022-05-27 07:33:05,909 INFO [train.py:842] (0/4) Epoch 10, batch 1800, loss[loss=0.1724, simple_loss=0.2638, pruned_loss=0.04048, over 7359.00 frames.], tot_loss[loss=0.2133, simple_loss=0.2931, pruned_loss=0.0667, over 1410466.92 frames.], batch size: 19, lr: 5.43e-04 2022-05-27 07:33:44,423 INFO [train.py:842] (0/4) Epoch 10, batch 1850, loss[loss=0.2229, simple_loss=0.2931, pruned_loss=0.07633, over 7349.00 frames.], tot_loss[loss=0.2134, simple_loss=0.2932, pruned_loss=0.06676, over 1410626.17 frames.], batch size: 19, lr: 5.43e-04 2022-05-27 07:34:23,386 INFO [train.py:842] (0/4) Epoch 10, batch 1900, loss[loss=0.214, simple_loss=0.2928, pruned_loss=0.06754, over 7266.00 frames.], tot_loss[loss=0.2129, simple_loss=0.2925, pruned_loss=0.06668, over 1415176.97 frames.], batch size: 18, lr: 5.43e-04 2022-05-27 07:35:02,039 INFO [train.py:842] (0/4) Epoch 10, batch 1950, loss[loss=0.2477, simple_loss=0.3287, pruned_loss=0.08336, over 7192.00 frames.], tot_loss[loss=0.2135, simple_loss=0.2926, pruned_loss=0.06723, over 1413679.95 frames.], batch size: 23, lr: 5.42e-04 2022-05-27 07:35:40,859 INFO [train.py:842] (0/4) Epoch 10, batch 2000, loss[loss=0.2028, simple_loss=0.2912, pruned_loss=0.05724, over 7243.00 frames.], tot_loss[loss=0.2134, simple_loss=0.2924, pruned_loss=0.06718, over 1417127.94 frames.], batch size: 20, lr: 5.42e-04 2022-05-27 07:36:19,532 INFO [train.py:842] (0/4) Epoch 10, batch 2050, loss[loss=0.2157, simple_loss=0.3044, pruned_loss=0.06348, over 7214.00 frames.], tot_loss[loss=0.2132, simple_loss=0.2925, pruned_loss=0.06699, over 1419010.73 frames.], batch size: 23, lr: 5.42e-04 2022-05-27 07:36:58,500 INFO [train.py:842] (0/4) Epoch 10, batch 2100, loss[loss=0.2047, simple_loss=0.2954, pruned_loss=0.05702, over 7149.00 frames.], tot_loss[loss=0.2113, simple_loss=0.291, pruned_loss=0.06577, over 1424014.14 frames.], batch size: 20, lr: 5.42e-04 2022-05-27 07:37:37,344 INFO [train.py:842] (0/4) Epoch 10, batch 2150, loss[loss=0.2515, simple_loss=0.3116, pruned_loss=0.09569, over 7405.00 frames.], tot_loss[loss=0.2107, simple_loss=0.2902, pruned_loss=0.06559, over 1426328.72 frames.], batch size: 18, lr: 5.42e-04 2022-05-27 07:38:16,091 INFO [train.py:842] (0/4) Epoch 10, batch 2200, loss[loss=0.1978, simple_loss=0.2839, pruned_loss=0.05585, over 6321.00 frames.], tot_loss[loss=0.211, simple_loss=0.2909, pruned_loss=0.06557, over 1427246.20 frames.], batch size: 37, lr: 5.42e-04 2022-05-27 07:38:54,671 INFO [train.py:842] (0/4) Epoch 10, batch 2250, loss[loss=0.2054, simple_loss=0.2912, pruned_loss=0.05978, over 7323.00 frames.], tot_loss[loss=0.2095, simple_loss=0.2896, pruned_loss=0.0647, over 1429425.58 frames.], batch size: 21, lr: 5.42e-04 2022-05-27 07:39:33,355 INFO [train.py:842] (0/4) Epoch 10, batch 2300, loss[loss=0.2078, simple_loss=0.3003, pruned_loss=0.05768, over 7142.00 frames.], tot_loss[loss=0.2107, simple_loss=0.2908, pruned_loss=0.06532, over 1426351.42 frames.], batch size: 20, lr: 5.41e-04 2022-05-27 07:40:11,931 INFO [train.py:842] (0/4) Epoch 10, batch 2350, loss[loss=0.2593, simple_loss=0.3363, pruned_loss=0.0911, over 7209.00 frames.], tot_loss[loss=0.2124, simple_loss=0.2918, pruned_loss=0.06646, over 1424520.57 frames.], batch size: 22, lr: 5.41e-04 2022-05-27 07:40:50,794 INFO [train.py:842] (0/4) Epoch 10, batch 2400, loss[loss=0.2208, simple_loss=0.2875, pruned_loss=0.07708, over 7301.00 frames.], tot_loss[loss=0.2118, simple_loss=0.2914, pruned_loss=0.06609, over 1426782.53 frames.], batch size: 18, lr: 5.41e-04 
2022-05-27 07:41:29,386 INFO [train.py:842] (0/4) Epoch 10, batch 2450, loss[loss=0.222, simple_loss=0.2984, pruned_loss=0.0728, over 7073.00 frames.], tot_loss[loss=0.2121, simple_loss=0.2915, pruned_loss=0.0663, over 1429409.12 frames.], batch size: 18, lr: 5.41e-04 2022-05-27 07:42:08,594 INFO [train.py:842] (0/4) Epoch 10, batch 2500, loss[loss=0.2291, simple_loss=0.3062, pruned_loss=0.07595, over 7323.00 frames.], tot_loss[loss=0.2123, simple_loss=0.2918, pruned_loss=0.06642, over 1427404.66 frames.], batch size: 21, lr: 5.41e-04 2022-05-27 07:42:47,178 INFO [train.py:842] (0/4) Epoch 10, batch 2550, loss[loss=0.3202, simple_loss=0.3573, pruned_loss=0.1415, over 7218.00 frames.], tot_loss[loss=0.2128, simple_loss=0.2922, pruned_loss=0.06673, over 1425682.97 frames.], batch size: 21, lr: 5.41e-04 2022-05-27 07:43:26,341 INFO [train.py:842] (0/4) Epoch 10, batch 2600, loss[loss=0.2739, simple_loss=0.344, pruned_loss=0.1019, over 7210.00 frames.], tot_loss[loss=0.213, simple_loss=0.2925, pruned_loss=0.06674, over 1428854.25 frames.], batch size: 26, lr: 5.40e-04 2022-05-27 07:44:04,673 INFO [train.py:842] (0/4) Epoch 10, batch 2650, loss[loss=0.2214, simple_loss=0.3048, pruned_loss=0.06903, over 7355.00 frames.], tot_loss[loss=0.2122, simple_loss=0.292, pruned_loss=0.06623, over 1424481.70 frames.], batch size: 22, lr: 5.40e-04 2022-05-27 07:44:43,514 INFO [train.py:842] (0/4) Epoch 10, batch 2700, loss[loss=0.2372, simple_loss=0.3262, pruned_loss=0.07415, over 6849.00 frames.], tot_loss[loss=0.2129, simple_loss=0.2919, pruned_loss=0.06694, over 1424967.13 frames.], batch size: 31, lr: 5.40e-04 2022-05-27 07:45:22,171 INFO [train.py:842] (0/4) Epoch 10, batch 2750, loss[loss=0.1813, simple_loss=0.2721, pruned_loss=0.04525, over 6694.00 frames.], tot_loss[loss=0.212, simple_loss=0.2912, pruned_loss=0.06642, over 1423480.14 frames.], batch size: 31, lr: 5.40e-04 2022-05-27 07:46:01,453 INFO [train.py:842] (0/4) Epoch 10, batch 2800, loss[loss=0.2323, simple_loss=0.3085, pruned_loss=0.07804, over 7387.00 frames.], tot_loss[loss=0.2122, simple_loss=0.2914, pruned_loss=0.06652, over 1429023.90 frames.], batch size: 23, lr: 5.40e-04 2022-05-27 07:46:50,634 INFO [train.py:842] (0/4) Epoch 10, batch 2850, loss[loss=0.2237, simple_loss=0.3148, pruned_loss=0.06634, over 7346.00 frames.], tot_loss[loss=0.2119, simple_loss=0.2913, pruned_loss=0.06621, over 1426839.21 frames.], batch size: 22, lr: 5.40e-04 2022-05-27 07:47:29,738 INFO [train.py:842] (0/4) Epoch 10, batch 2900, loss[loss=0.1969, simple_loss=0.2819, pruned_loss=0.05592, over 7122.00 frames.], tot_loss[loss=0.2116, simple_loss=0.291, pruned_loss=0.06616, over 1425760.30 frames.], batch size: 21, lr: 5.39e-04 2022-05-27 07:48:08,427 INFO [train.py:842] (0/4) Epoch 10, batch 2950, loss[loss=0.2335, simple_loss=0.2945, pruned_loss=0.08629, over 7272.00 frames.], tot_loss[loss=0.2119, simple_loss=0.2909, pruned_loss=0.06649, over 1425721.35 frames.], batch size: 18, lr: 5.39e-04 2022-05-27 07:48:47,531 INFO [train.py:842] (0/4) Epoch 10, batch 3000, loss[loss=0.1786, simple_loss=0.2623, pruned_loss=0.04749, over 7291.00 frames.], tot_loss[loss=0.2114, simple_loss=0.2902, pruned_loss=0.06625, over 1425261.64 frames.], batch size: 17, lr: 5.39e-04 2022-05-27 07:48:47,533 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 07:48:56,971 INFO [train.py:871] (0/4) Epoch 10, validation: loss=0.1722, simple_loss=0.2729, pruned_loss=0.03575, over 868885.00 frames. 
2022-05-27 07:49:35,665 INFO [train.py:842] (0/4) Epoch 10, batch 3050, loss[loss=0.2598, simple_loss=0.3324, pruned_loss=0.09364, over 7158.00 frames.], tot_loss[loss=0.2112, simple_loss=0.2906, pruned_loss=0.06588, over 1425214.60 frames.], batch size: 19, lr: 5.39e-04 2022-05-27 07:50:14,625 INFO [train.py:842] (0/4) Epoch 10, batch 3100, loss[loss=0.195, simple_loss=0.2809, pruned_loss=0.05453, over 7122.00 frames.], tot_loss[loss=0.2107, simple_loss=0.2906, pruned_loss=0.06534, over 1429224.97 frames.], batch size: 21, lr: 5.39e-04 2022-05-27 07:50:53,117 INFO [train.py:842] (0/4) Epoch 10, batch 3150, loss[loss=0.1863, simple_loss=0.278, pruned_loss=0.04735, over 7320.00 frames.], tot_loss[loss=0.2105, simple_loss=0.2904, pruned_loss=0.06532, over 1425832.49 frames.], batch size: 21, lr: 5.39e-04 2022-05-27 07:51:32,409 INFO [train.py:842] (0/4) Epoch 10, batch 3200, loss[loss=0.1725, simple_loss=0.2609, pruned_loss=0.042, over 7226.00 frames.], tot_loss[loss=0.2109, simple_loss=0.2904, pruned_loss=0.06563, over 1426205.16 frames.], batch size: 20, lr: 5.39e-04 2022-05-27 07:52:11,113 INFO [train.py:842] (0/4) Epoch 10, batch 3250, loss[loss=0.2028, simple_loss=0.2837, pruned_loss=0.06089, over 7411.00 frames.], tot_loss[loss=0.2105, simple_loss=0.2903, pruned_loss=0.06538, over 1426945.80 frames.], batch size: 21, lr: 5.38e-04 2022-05-27 07:52:49,916 INFO [train.py:842] (0/4) Epoch 10, batch 3300, loss[loss=0.2121, simple_loss=0.2869, pruned_loss=0.06864, over 7196.00 frames.], tot_loss[loss=0.2117, simple_loss=0.2913, pruned_loss=0.06608, over 1428243.02 frames.], batch size: 22, lr: 5.38e-04 2022-05-27 07:53:28,327 INFO [train.py:842] (0/4) Epoch 10, batch 3350, loss[loss=0.1903, simple_loss=0.2812, pruned_loss=0.0497, over 7207.00 frames.], tot_loss[loss=0.2123, simple_loss=0.2917, pruned_loss=0.0665, over 1428840.86 frames.], batch size: 23, lr: 5.38e-04 2022-05-27 07:54:07,098 INFO [train.py:842] (0/4) Epoch 10, batch 3400, loss[loss=0.164, simple_loss=0.2427, pruned_loss=0.04261, over 7280.00 frames.], tot_loss[loss=0.2128, simple_loss=0.2919, pruned_loss=0.06691, over 1425111.30 frames.], batch size: 17, lr: 5.38e-04 2022-05-27 07:54:45,621 INFO [train.py:842] (0/4) Epoch 10, batch 3450, loss[loss=0.1808, simple_loss=0.2804, pruned_loss=0.04063, over 7297.00 frames.], tot_loss[loss=0.212, simple_loss=0.2915, pruned_loss=0.06623, over 1424625.49 frames.], batch size: 24, lr: 5.38e-04 2022-05-27 07:55:24,446 INFO [train.py:842] (0/4) Epoch 10, batch 3500, loss[loss=0.197, simple_loss=0.2874, pruned_loss=0.05333, over 7421.00 frames.], tot_loss[loss=0.2108, simple_loss=0.2906, pruned_loss=0.06554, over 1424931.11 frames.], batch size: 21, lr: 5.38e-04 2022-05-27 07:56:03,133 INFO [train.py:842] (0/4) Epoch 10, batch 3550, loss[loss=0.26, simple_loss=0.3183, pruned_loss=0.1009, over 7113.00 frames.], tot_loss[loss=0.2087, simple_loss=0.2888, pruned_loss=0.06433, over 1427758.71 frames.], batch size: 28, lr: 5.37e-04 2022-05-27 07:56:42,275 INFO [train.py:842] (0/4) Epoch 10, batch 3600, loss[loss=0.2921, simple_loss=0.3484, pruned_loss=0.1179, over 7061.00 frames.], tot_loss[loss=0.21, simple_loss=0.2896, pruned_loss=0.06523, over 1427898.34 frames.], batch size: 28, lr: 5.37e-04 2022-05-27 07:57:20,971 INFO [train.py:842] (0/4) Epoch 10, batch 3650, loss[loss=0.2141, simple_loss=0.2982, pruned_loss=0.06504, over 7059.00 frames.], tot_loss[loss=0.2106, simple_loss=0.2902, pruned_loss=0.06555, over 1423704.14 frames.], batch size: 18, lr: 5.37e-04 2022-05-27 07:57:59,623 INFO 
[train.py:842] (0/4) Epoch 10, batch 3700, loss[loss=0.1805, simple_loss=0.254, pruned_loss=0.05348, over 7279.00 frames.], tot_loss[loss=0.2118, simple_loss=0.2912, pruned_loss=0.06615, over 1425413.33 frames.], batch size: 17, lr: 5.37e-04 2022-05-27 07:58:38,203 INFO [train.py:842] (0/4) Epoch 10, batch 3750, loss[loss=0.1724, simple_loss=0.2551, pruned_loss=0.04489, over 7151.00 frames.], tot_loss[loss=0.2118, simple_loss=0.2915, pruned_loss=0.066, over 1427441.23 frames.], batch size: 19, lr: 5.37e-04 2022-05-27 07:59:17,015 INFO [train.py:842] (0/4) Epoch 10, batch 3800, loss[loss=0.1933, simple_loss=0.2862, pruned_loss=0.05026, over 7434.00 frames.], tot_loss[loss=0.2112, simple_loss=0.2913, pruned_loss=0.06549, over 1425282.88 frames.], batch size: 20, lr: 5.37e-04 2022-05-27 07:59:55,527 INFO [train.py:842] (0/4) Epoch 10, batch 3850, loss[loss=0.1642, simple_loss=0.2525, pruned_loss=0.03796, over 7057.00 frames.], tot_loss[loss=0.2127, simple_loss=0.2929, pruned_loss=0.06624, over 1424704.24 frames.], batch size: 18, lr: 5.36e-04 2022-05-27 08:00:34,374 INFO [train.py:842] (0/4) Epoch 10, batch 3900, loss[loss=0.2683, simple_loss=0.3374, pruned_loss=0.09961, over 7143.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2928, pruned_loss=0.06608, over 1425964.60 frames.], batch size: 20, lr: 5.36e-04 2022-05-27 08:01:13,249 INFO [train.py:842] (0/4) Epoch 10, batch 3950, loss[loss=0.1959, simple_loss=0.2734, pruned_loss=0.05918, over 7071.00 frames.], tot_loss[loss=0.2123, simple_loss=0.2924, pruned_loss=0.06614, over 1424423.96 frames.], batch size: 18, lr: 5.36e-04 2022-05-27 08:01:51,972 INFO [train.py:842] (0/4) Epoch 10, batch 4000, loss[loss=0.2039, simple_loss=0.2714, pruned_loss=0.06824, over 7270.00 frames.], tot_loss[loss=0.2131, simple_loss=0.2931, pruned_loss=0.06654, over 1423737.60 frames.], batch size: 17, lr: 5.36e-04 2022-05-27 08:02:30,693 INFO [train.py:842] (0/4) Epoch 10, batch 4050, loss[loss=0.1958, simple_loss=0.2899, pruned_loss=0.05084, over 7222.00 frames.], tot_loss[loss=0.2131, simple_loss=0.2923, pruned_loss=0.06691, over 1423431.47 frames.], batch size: 20, lr: 5.36e-04 2022-05-27 08:03:09,543 INFO [train.py:842] (0/4) Epoch 10, batch 4100, loss[loss=0.1799, simple_loss=0.2701, pruned_loss=0.04488, over 7319.00 frames.], tot_loss[loss=0.2129, simple_loss=0.2922, pruned_loss=0.0668, over 1423230.72 frames.], batch size: 20, lr: 5.36e-04 2022-05-27 08:03:48,065 INFO [train.py:842] (0/4) Epoch 10, batch 4150, loss[loss=0.3582, simple_loss=0.3881, pruned_loss=0.1642, over 7376.00 frames.], tot_loss[loss=0.2134, simple_loss=0.2927, pruned_loss=0.06708, over 1425415.00 frames.], batch size: 23, lr: 5.36e-04 2022-05-27 08:04:26,996 INFO [train.py:842] (0/4) Epoch 10, batch 4200, loss[loss=0.235, simple_loss=0.3068, pruned_loss=0.08165, over 7281.00 frames.], tot_loss[loss=0.2135, simple_loss=0.2931, pruned_loss=0.06695, over 1424866.84 frames.], batch size: 24, lr: 5.35e-04 2022-05-27 08:05:05,675 INFO [train.py:842] (0/4) Epoch 10, batch 4250, loss[loss=0.2281, simple_loss=0.3108, pruned_loss=0.07268, over 6910.00 frames.], tot_loss[loss=0.2119, simple_loss=0.2921, pruned_loss=0.06581, over 1427387.48 frames.], batch size: 31, lr: 5.35e-04 2022-05-27 08:05:44,568 INFO [train.py:842] (0/4) Epoch 10, batch 4300, loss[loss=0.1808, simple_loss=0.2445, pruned_loss=0.05856, over 7277.00 frames.], tot_loss[loss=0.2128, simple_loss=0.2927, pruned_loss=0.0665, over 1427175.16 frames.], batch size: 17, lr: 5.35e-04 2022-05-27 08:06:22,950 INFO [train.py:842] (0/4) 
Epoch 10, batch 4350, loss[loss=0.1897, simple_loss=0.2846, pruned_loss=0.04744, over 7131.00 frames.], tot_loss[loss=0.2156, simple_loss=0.2952, pruned_loss=0.068, over 1419936.14 frames.], batch size: 26, lr: 5.35e-04 2022-05-27 08:07:01,871 INFO [train.py:842] (0/4) Epoch 10, batch 4400, loss[loss=0.2011, simple_loss=0.2815, pruned_loss=0.06032, over 7146.00 frames.], tot_loss[loss=0.2142, simple_loss=0.2938, pruned_loss=0.06728, over 1420319.56 frames.], batch size: 20, lr: 5.35e-04 2022-05-27 08:08:01,212 INFO [train.py:842] (0/4) Epoch 10, batch 4450, loss[loss=0.209, simple_loss=0.2966, pruned_loss=0.06069, over 7337.00 frames.], tot_loss[loss=0.2137, simple_loss=0.2935, pruned_loss=0.06698, over 1419198.04 frames.], batch size: 22, lr: 5.35e-04 2022-05-27 08:08:50,709 INFO [train.py:842] (0/4) Epoch 10, batch 4500, loss[loss=0.2373, simple_loss=0.3214, pruned_loss=0.07662, over 7122.00 frames.], tot_loss[loss=0.2124, simple_loss=0.2922, pruned_loss=0.06627, over 1420531.22 frames.], batch size: 21, lr: 5.35e-04 2022-05-27 08:09:29,231 INFO [train.py:842] (0/4) Epoch 10, batch 4550, loss[loss=0.1813, simple_loss=0.2691, pruned_loss=0.04678, over 7438.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2922, pruned_loss=0.06636, over 1418068.67 frames.], batch size: 20, lr: 5.34e-04 2022-05-27 08:10:07,950 INFO [train.py:842] (0/4) Epoch 10, batch 4600, loss[loss=0.2031, simple_loss=0.288, pruned_loss=0.05912, over 7217.00 frames.], tot_loss[loss=0.2137, simple_loss=0.2937, pruned_loss=0.06687, over 1421988.77 frames.], batch size: 21, lr: 5.34e-04 2022-05-27 08:10:46,411 INFO [train.py:842] (0/4) Epoch 10, batch 4650, loss[loss=0.2306, simple_loss=0.3029, pruned_loss=0.07918, over 7212.00 frames.], tot_loss[loss=0.2132, simple_loss=0.2932, pruned_loss=0.0666, over 1419374.10 frames.], batch size: 23, lr: 5.34e-04 2022-05-27 08:11:25,341 INFO [train.py:842] (0/4) Epoch 10, batch 4700, loss[loss=0.2098, simple_loss=0.299, pruned_loss=0.06033, over 7227.00 frames.], tot_loss[loss=0.2144, simple_loss=0.2937, pruned_loss=0.06756, over 1413407.45 frames.], batch size: 21, lr: 5.34e-04 2022-05-27 08:12:03,839 INFO [train.py:842] (0/4) Epoch 10, batch 4750, loss[loss=0.179, simple_loss=0.2585, pruned_loss=0.04975, over 7009.00 frames.], tot_loss[loss=0.2141, simple_loss=0.2939, pruned_loss=0.06721, over 1416151.70 frames.], batch size: 16, lr: 5.34e-04 2022-05-27 08:12:42,491 INFO [train.py:842] (0/4) Epoch 10, batch 4800, loss[loss=0.1861, simple_loss=0.2725, pruned_loss=0.04989, over 7288.00 frames.], tot_loss[loss=0.2147, simple_loss=0.2942, pruned_loss=0.06761, over 1416977.19 frames.], batch size: 25, lr: 5.34e-04 2022-05-27 08:13:21,006 INFO [train.py:842] (0/4) Epoch 10, batch 4850, loss[loss=0.2086, simple_loss=0.292, pruned_loss=0.06263, over 7119.00 frames.], tot_loss[loss=0.2133, simple_loss=0.2933, pruned_loss=0.06664, over 1419843.97 frames.], batch size: 21, lr: 5.33e-04 2022-05-27 08:14:00,164 INFO [train.py:842] (0/4) Epoch 10, batch 4900, loss[loss=0.1764, simple_loss=0.247, pruned_loss=0.05287, over 7414.00 frames.], tot_loss[loss=0.2121, simple_loss=0.2923, pruned_loss=0.06593, over 1423677.76 frames.], batch size: 18, lr: 5.33e-04 2022-05-27 08:14:38,915 INFO [train.py:842] (0/4) Epoch 10, batch 4950, loss[loss=0.1758, simple_loss=0.2534, pruned_loss=0.04911, over 6788.00 frames.], tot_loss[loss=0.2105, simple_loss=0.2903, pruned_loss=0.06531, over 1423227.36 frames.], batch size: 15, lr: 5.33e-04 2022-05-27 08:15:17,721 INFO [train.py:842] (0/4) Epoch 10, batch 5000, 
loss[loss=0.1697, simple_loss=0.2511, pruned_loss=0.04415, over 7162.00 frames.], tot_loss[loss=0.2105, simple_loss=0.2903, pruned_loss=0.0654, over 1425176.76 frames.], batch size: 18, lr: 5.33e-04 2022-05-27 08:15:56,150 INFO [train.py:842] (0/4) Epoch 10, batch 5050, loss[loss=0.1799, simple_loss=0.2537, pruned_loss=0.05308, over 7282.00 frames.], tot_loss[loss=0.2104, simple_loss=0.2903, pruned_loss=0.06526, over 1422495.35 frames.], batch size: 17, lr: 5.33e-04 2022-05-27 08:16:34,970 INFO [train.py:842] (0/4) Epoch 10, batch 5100, loss[loss=0.1986, simple_loss=0.2911, pruned_loss=0.05305, over 7120.00 frames.], tot_loss[loss=0.2101, simple_loss=0.2903, pruned_loss=0.06498, over 1420453.53 frames.], batch size: 28, lr: 5.33e-04 2022-05-27 08:17:13,470 INFO [train.py:842] (0/4) Epoch 10, batch 5150, loss[loss=0.2029, simple_loss=0.2852, pruned_loss=0.06031, over 7338.00 frames.], tot_loss[loss=0.2101, simple_loss=0.2897, pruned_loss=0.06526, over 1418137.04 frames.], batch size: 22, lr: 5.33e-04 2022-05-27 08:17:52,715 INFO [train.py:842] (0/4) Epoch 10, batch 5200, loss[loss=0.1921, simple_loss=0.2828, pruned_loss=0.05074, over 7156.00 frames.], tot_loss[loss=0.2102, simple_loss=0.2899, pruned_loss=0.06528, over 1423944.39 frames.], batch size: 19, lr: 5.32e-04 2022-05-27 08:18:31,267 INFO [train.py:842] (0/4) Epoch 10, batch 5250, loss[loss=0.2329, simple_loss=0.3187, pruned_loss=0.07361, over 7303.00 frames.], tot_loss[loss=0.2106, simple_loss=0.2902, pruned_loss=0.06547, over 1426064.31 frames.], batch size: 24, lr: 5.32e-04 2022-05-27 08:18:47,795 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-88000.pt 2022-05-27 08:19:12,935 INFO [train.py:842] (0/4) Epoch 10, batch 5300, loss[loss=0.1914, simple_loss=0.272, pruned_loss=0.0554, over 7405.00 frames.], tot_loss[loss=0.2094, simple_loss=0.2893, pruned_loss=0.06469, over 1426695.33 frames.], batch size: 18, lr: 5.32e-04 2022-05-27 08:19:51,540 INFO [train.py:842] (0/4) Epoch 10, batch 5350, loss[loss=0.2221, simple_loss=0.3075, pruned_loss=0.06836, over 7377.00 frames.], tot_loss[loss=0.2128, simple_loss=0.2921, pruned_loss=0.06674, over 1428265.94 frames.], batch size: 23, lr: 5.32e-04 2022-05-27 08:20:30,700 INFO [train.py:842] (0/4) Epoch 10, batch 5400, loss[loss=0.1773, simple_loss=0.2673, pruned_loss=0.04361, over 7274.00 frames.], tot_loss[loss=0.2118, simple_loss=0.2913, pruned_loss=0.06615, over 1432065.02 frames.], batch size: 18, lr: 5.32e-04 2022-05-27 08:21:09,279 INFO [train.py:842] (0/4) Epoch 10, batch 5450, loss[loss=0.1773, simple_loss=0.2675, pruned_loss=0.04357, over 7418.00 frames.], tot_loss[loss=0.2115, simple_loss=0.291, pruned_loss=0.06602, over 1428968.07 frames.], batch size: 21, lr: 5.32e-04 2022-05-27 08:21:47,999 INFO [train.py:842] (0/4) Epoch 10, batch 5500, loss[loss=0.1782, simple_loss=0.255, pruned_loss=0.0507, over 6998.00 frames.], tot_loss[loss=0.2134, simple_loss=0.2924, pruned_loss=0.06717, over 1424988.19 frames.], batch size: 16, lr: 5.31e-04 2022-05-27 08:22:26,510 INFO [train.py:842] (0/4) Epoch 10, batch 5550, loss[loss=0.2108, simple_loss=0.2977, pruned_loss=0.06189, over 7291.00 frames.], tot_loss[loss=0.2132, simple_loss=0.2925, pruned_loss=0.06696, over 1424971.16 frames.], batch size: 24, lr: 5.31e-04 2022-05-27 08:23:05,385 INFO [train.py:842] (0/4) Epoch 10, batch 5600, loss[loss=0.2558, simple_loss=0.3318, pruned_loss=0.08984, over 7143.00 frames.], tot_loss[loss=0.2136, simple_loss=0.2932, pruned_loss=0.06698, over 
1422799.58 frames.], batch size: 20, lr: 5.31e-04 2022-05-27 08:23:44,288 INFO [train.py:842] (0/4) Epoch 10, batch 5650, loss[loss=0.2317, simple_loss=0.2855, pruned_loss=0.08894, over 7066.00 frames.], tot_loss[loss=0.2121, simple_loss=0.2918, pruned_loss=0.06617, over 1428461.32 frames.], batch size: 18, lr: 5.31e-04 2022-05-27 08:24:23,130 INFO [train.py:842] (0/4) Epoch 10, batch 5700, loss[loss=0.2235, simple_loss=0.2927, pruned_loss=0.07715, over 7076.00 frames.], tot_loss[loss=0.2116, simple_loss=0.2918, pruned_loss=0.06571, over 1430491.40 frames.], batch size: 18, lr: 5.31e-04 2022-05-27 08:25:01,707 INFO [train.py:842] (0/4) Epoch 10, batch 5750, loss[loss=0.2096, simple_loss=0.2966, pruned_loss=0.06134, over 6837.00 frames.], tot_loss[loss=0.212, simple_loss=0.2923, pruned_loss=0.06591, over 1424784.02 frames.], batch size: 31, lr: 5.31e-04 2022-05-27 08:25:40,650 INFO [train.py:842] (0/4) Epoch 10, batch 5800, loss[loss=0.1713, simple_loss=0.2459, pruned_loss=0.04842, over 7267.00 frames.], tot_loss[loss=0.2112, simple_loss=0.2915, pruned_loss=0.06546, over 1424830.03 frames.], batch size: 17, lr: 5.31e-04 2022-05-27 08:26:19,921 INFO [train.py:842] (0/4) Epoch 10, batch 5850, loss[loss=0.2235, simple_loss=0.3068, pruned_loss=0.07012, over 7332.00 frames.], tot_loss[loss=0.2114, simple_loss=0.2915, pruned_loss=0.06566, over 1426897.49 frames.], batch size: 22, lr: 5.30e-04 2022-05-27 08:26:58,598 INFO [train.py:842] (0/4) Epoch 10, batch 5900, loss[loss=0.2313, simple_loss=0.317, pruned_loss=0.07284, over 7190.00 frames.], tot_loss[loss=0.2119, simple_loss=0.2922, pruned_loss=0.06581, over 1425154.09 frames.], batch size: 23, lr: 5.30e-04 2022-05-27 08:27:37,129 INFO [train.py:842] (0/4) Epoch 10, batch 5950, loss[loss=0.1763, simple_loss=0.2529, pruned_loss=0.04984, over 7275.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2929, pruned_loss=0.06601, over 1421317.80 frames.], batch size: 18, lr: 5.30e-04 2022-05-27 08:28:15,982 INFO [train.py:842] (0/4) Epoch 10, batch 6000, loss[loss=0.2409, simple_loss=0.3144, pruned_loss=0.08369, over 7148.00 frames.], tot_loss[loss=0.2132, simple_loss=0.2933, pruned_loss=0.06659, over 1423921.98 frames.], batch size: 20, lr: 5.30e-04 2022-05-27 08:28:15,983 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 08:28:25,356 INFO [train.py:871] (0/4) Epoch 10, validation: loss=0.173, simple_loss=0.2732, pruned_loss=0.0364, over 868885.00 frames. 
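The ordinary per-batch records also carry the running averages ("tot_loss[...]"), the batch size, and the current learning rate, so the same log shows tot_loss holding near 0.21 while the lr decays from about 5.77e-04 to 5.21e-04 over these two epochs. The following is another small sketch under the same assumptions (saved log at an assumed path, helper name invented for illustration); the regex is keyed to the "Epoch N, batch M, ... tot_loss[loss=..., ...], batch size: ..., lr: ..." format of the records above.

import re

# Illustrative only -- extract (batch, tot_loss, lr) for one epoch from a
# saved copy of this log.  Path and function name are assumptions; the
# pattern follows the per-batch records shown above.
TRAIN_RE = re.compile(
    r"Epoch (?P<epoch>\d+), batch (?P<batch>\d+), .*?"
    r"tot_loss\[loss=(?P<tot>[\d.]+),.*?lr: (?P<lr>[\d.e-]+)"
)

def training_curve(path="train_log.txt", epoch=10):
    with open(path) as f:
        text = f.read()
    return [(int(m.group("batch")), float(m.group("tot")), float(m.group("lr")))
            for m in TRAIN_RE.finditer(text)
            if int(m.group("epoch")) == epoch]

Plotting the second and third fields of training_curve() against the first reproduces the slow loss plateau and gradual lr decay that are visible by eye in the entries above.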
2022-05-27 08:29:04,387 INFO [train.py:842] (0/4) Epoch 10, batch 6050, loss[loss=0.1936, simple_loss=0.2795, pruned_loss=0.05382, over 7429.00 frames.], tot_loss[loss=0.2137, simple_loss=0.2934, pruned_loss=0.06696, over 1425656.71 frames.], batch size: 20, lr: 5.30e-04 2022-05-27 08:29:43,206 INFO [train.py:842] (0/4) Epoch 10, batch 6100, loss[loss=0.2411, simple_loss=0.3136, pruned_loss=0.08429, over 7366.00 frames.], tot_loss[loss=0.213, simple_loss=0.2928, pruned_loss=0.06657, over 1423090.85 frames.], batch size: 23, lr: 5.30e-04 2022-05-27 08:30:22,062 INFO [train.py:842] (0/4) Epoch 10, batch 6150, loss[loss=0.1866, simple_loss=0.2631, pruned_loss=0.05507, over 7212.00 frames.], tot_loss[loss=0.2122, simple_loss=0.2917, pruned_loss=0.06633, over 1422568.85 frames.], batch size: 16, lr: 5.30e-04 2022-05-27 08:31:00,961 INFO [train.py:842] (0/4) Epoch 10, batch 6200, loss[loss=0.276, simple_loss=0.339, pruned_loss=0.1065, over 7293.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2916, pruned_loss=0.06666, over 1427825.63 frames.], batch size: 24, lr: 5.29e-04 2022-05-27 08:31:39,416 INFO [train.py:842] (0/4) Epoch 10, batch 6250, loss[loss=0.1709, simple_loss=0.2578, pruned_loss=0.042, over 7159.00 frames.], tot_loss[loss=0.2121, simple_loss=0.2912, pruned_loss=0.06647, over 1418349.13 frames.], batch size: 19, lr: 5.29e-04 2022-05-27 08:32:18,076 INFO [train.py:842] (0/4) Epoch 10, batch 6300, loss[loss=0.1693, simple_loss=0.2656, pruned_loss=0.03652, over 7151.00 frames.], tot_loss[loss=0.2128, simple_loss=0.2922, pruned_loss=0.06665, over 1422589.49 frames.], batch size: 20, lr: 5.29e-04 2022-05-27 08:32:56,598 INFO [train.py:842] (0/4) Epoch 10, batch 6350, loss[loss=0.218, simple_loss=0.3012, pruned_loss=0.06745, over 7135.00 frames.], tot_loss[loss=0.2118, simple_loss=0.2915, pruned_loss=0.066, over 1421328.83 frames.], batch size: 20, lr: 5.29e-04 2022-05-27 08:33:35,407 INFO [train.py:842] (0/4) Epoch 10, batch 6400, loss[loss=0.1915, simple_loss=0.2798, pruned_loss=0.05157, over 7416.00 frames.], tot_loss[loss=0.2114, simple_loss=0.2915, pruned_loss=0.06562, over 1423403.48 frames.], batch size: 21, lr: 5.29e-04 2022-05-27 08:34:14,098 INFO [train.py:842] (0/4) Epoch 10, batch 6450, loss[loss=0.1685, simple_loss=0.2435, pruned_loss=0.04674, over 7277.00 frames.], tot_loss[loss=0.2114, simple_loss=0.2912, pruned_loss=0.06583, over 1421811.97 frames.], batch size: 17, lr: 5.29e-04 2022-05-27 08:34:52,931 INFO [train.py:842] (0/4) Epoch 10, batch 6500, loss[loss=0.2022, simple_loss=0.2903, pruned_loss=0.05706, over 6764.00 frames.], tot_loss[loss=0.2094, simple_loss=0.2894, pruned_loss=0.0647, over 1420662.10 frames.], batch size: 31, lr: 5.28e-04 2022-05-27 08:35:31,463 INFO [train.py:842] (0/4) Epoch 10, batch 6550, loss[loss=0.27, simple_loss=0.3543, pruned_loss=0.09285, over 7297.00 frames.], tot_loss[loss=0.2097, simple_loss=0.2897, pruned_loss=0.06482, over 1422586.56 frames.], batch size: 24, lr: 5.28e-04 2022-05-27 08:36:10,163 INFO [train.py:842] (0/4) Epoch 10, batch 6600, loss[loss=0.2661, simple_loss=0.3448, pruned_loss=0.09369, over 7280.00 frames.], tot_loss[loss=0.2118, simple_loss=0.2916, pruned_loss=0.06598, over 1424718.59 frames.], batch size: 25, lr: 5.28e-04 2022-05-27 08:36:48,766 INFO [train.py:842] (0/4) Epoch 10, batch 6650, loss[loss=0.1784, simple_loss=0.2568, pruned_loss=0.05005, over 7399.00 frames.], tot_loss[loss=0.2109, simple_loss=0.2912, pruned_loss=0.06525, over 1426269.22 frames.], batch size: 18, lr: 5.28e-04 2022-05-27 08:37:27,504 INFO 
[train.py:842] (0/4) Epoch 10, batch 6700, loss[loss=0.2203, simple_loss=0.2802, pruned_loss=0.08016, over 7066.00 frames.], tot_loss[loss=0.2126, simple_loss=0.2925, pruned_loss=0.06635, over 1421353.44 frames.], batch size: 18, lr: 5.28e-04 2022-05-27 08:38:06,111 INFO [train.py:842] (0/4) Epoch 10, batch 6750, loss[loss=0.1367, simple_loss=0.2244, pruned_loss=0.02451, over 7399.00 frames.], tot_loss[loss=0.2149, simple_loss=0.2947, pruned_loss=0.06757, over 1423880.15 frames.], batch size: 18, lr: 5.28e-04 2022-05-27 08:38:45,065 INFO [train.py:842] (0/4) Epoch 10, batch 6800, loss[loss=0.19, simple_loss=0.284, pruned_loss=0.04806, over 7102.00 frames.], tot_loss[loss=0.2124, simple_loss=0.2922, pruned_loss=0.06633, over 1421697.71 frames.], batch size: 28, lr: 5.28e-04 2022-05-27 08:39:23,800 INFO [train.py:842] (0/4) Epoch 10, batch 6850, loss[loss=0.2229, simple_loss=0.3071, pruned_loss=0.06934, over 6710.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2924, pruned_loss=0.06632, over 1426392.48 frames.], batch size: 31, lr: 5.27e-04 2022-05-27 08:40:02,431 INFO [train.py:842] (0/4) Epoch 10, batch 6900, loss[loss=0.2263, simple_loss=0.3113, pruned_loss=0.07063, over 7202.00 frames.], tot_loss[loss=0.2138, simple_loss=0.2938, pruned_loss=0.06693, over 1421731.48 frames.], batch size: 23, lr: 5.27e-04 2022-05-27 08:40:40,966 INFO [train.py:842] (0/4) Epoch 10, batch 6950, loss[loss=0.1812, simple_loss=0.2743, pruned_loss=0.04403, over 7380.00 frames.], tot_loss[loss=0.214, simple_loss=0.2942, pruned_loss=0.06692, over 1417832.81 frames.], batch size: 23, lr: 5.27e-04 2022-05-27 08:41:19,720 INFO [train.py:842] (0/4) Epoch 10, batch 7000, loss[loss=0.1936, simple_loss=0.2838, pruned_loss=0.05174, over 7143.00 frames.], tot_loss[loss=0.2136, simple_loss=0.2935, pruned_loss=0.06681, over 1419983.59 frames.], batch size: 20, lr: 5.27e-04 2022-05-27 08:41:58,250 INFO [train.py:842] (0/4) Epoch 10, batch 7050, loss[loss=0.261, simple_loss=0.332, pruned_loss=0.09498, over 7234.00 frames.], tot_loss[loss=0.2137, simple_loss=0.2936, pruned_loss=0.06694, over 1417431.87 frames.], batch size: 20, lr: 5.27e-04 2022-05-27 08:42:37,045 INFO [train.py:842] (0/4) Epoch 10, batch 7100, loss[loss=0.1972, simple_loss=0.2849, pruned_loss=0.0548, over 7336.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2925, pruned_loss=0.06622, over 1419813.46 frames.], batch size: 22, lr: 5.27e-04 2022-05-27 08:43:15,632 INFO [train.py:842] (0/4) Epoch 10, batch 7150, loss[loss=0.2107, simple_loss=0.2846, pruned_loss=0.0684, over 7393.00 frames.], tot_loss[loss=0.2102, simple_loss=0.2905, pruned_loss=0.065, over 1423061.42 frames.], batch size: 23, lr: 5.27e-04 2022-05-27 08:43:54,157 INFO [train.py:842] (0/4) Epoch 10, batch 7200, loss[loss=0.2225, simple_loss=0.3035, pruned_loss=0.07075, over 5147.00 frames.], tot_loss[loss=0.2126, simple_loss=0.2926, pruned_loss=0.06623, over 1416177.13 frames.], batch size: 53, lr: 5.26e-04 2022-05-27 08:44:32,612 INFO [train.py:842] (0/4) Epoch 10, batch 7250, loss[loss=0.2908, simple_loss=0.3488, pruned_loss=0.1164, over 7289.00 frames.], tot_loss[loss=0.2139, simple_loss=0.2941, pruned_loss=0.06681, over 1411420.64 frames.], batch size: 25, lr: 5.26e-04 2022-05-27 08:45:11,715 INFO [train.py:842] (0/4) Epoch 10, batch 7300, loss[loss=0.1785, simple_loss=0.2617, pruned_loss=0.04763, over 7435.00 frames.], tot_loss[loss=0.2124, simple_loss=0.293, pruned_loss=0.06583, over 1416225.96 frames.], batch size: 20, lr: 5.26e-04 2022-05-27 08:45:50,392 INFO [train.py:842] (0/4) Epoch 10, 
batch 7350, loss[loss=0.1855, simple_loss=0.2625, pruned_loss=0.05425, over 7156.00 frames.], tot_loss[loss=0.2125, simple_loss=0.2927, pruned_loss=0.06613, over 1421107.33 frames.], batch size: 17, lr: 5.26e-04 2022-05-27 08:46:29,221 INFO [train.py:842] (0/4) Epoch 10, batch 7400, loss[loss=0.2141, simple_loss=0.2967, pruned_loss=0.06579, over 7408.00 frames.], tot_loss[loss=0.2129, simple_loss=0.2932, pruned_loss=0.06636, over 1421888.35 frames.], batch size: 21, lr: 5.26e-04 2022-05-27 08:47:07,761 INFO [train.py:842] (0/4) Epoch 10, batch 7450, loss[loss=0.2135, simple_loss=0.2833, pruned_loss=0.07186, over 7174.00 frames.], tot_loss[loss=0.2144, simple_loss=0.2942, pruned_loss=0.06732, over 1418840.85 frames.], batch size: 16, lr: 5.26e-04 2022-05-27 08:47:46,346 INFO [train.py:842] (0/4) Epoch 10, batch 7500, loss[loss=0.2058, simple_loss=0.3007, pruned_loss=0.05546, over 7213.00 frames.], tot_loss[loss=0.213, simple_loss=0.2932, pruned_loss=0.06644, over 1419482.49 frames.], batch size: 21, lr: 5.26e-04 2022-05-27 08:48:24,823 INFO [train.py:842] (0/4) Epoch 10, batch 7550, loss[loss=0.2151, simple_loss=0.295, pruned_loss=0.06763, over 7153.00 frames.], tot_loss[loss=0.2132, simple_loss=0.2932, pruned_loss=0.06664, over 1421200.84 frames.], batch size: 20, lr: 5.25e-04 2022-05-27 08:49:03,932 INFO [train.py:842] (0/4) Epoch 10, batch 7600, loss[loss=0.1477, simple_loss=0.2291, pruned_loss=0.03319, over 7264.00 frames.], tot_loss[loss=0.2104, simple_loss=0.2908, pruned_loss=0.06503, over 1422276.01 frames.], batch size: 18, lr: 5.25e-04 2022-05-27 08:49:42,450 INFO [train.py:842] (0/4) Epoch 10, batch 7650, loss[loss=0.176, simple_loss=0.2496, pruned_loss=0.0512, over 6984.00 frames.], tot_loss[loss=0.2101, simple_loss=0.2901, pruned_loss=0.06506, over 1420156.27 frames.], batch size: 16, lr: 5.25e-04 2022-05-27 08:50:21,179 INFO [train.py:842] (0/4) Epoch 10, batch 7700, loss[loss=0.2195, simple_loss=0.3035, pruned_loss=0.06778, over 7340.00 frames.], tot_loss[loss=0.2095, simple_loss=0.2902, pruned_loss=0.06441, over 1418410.28 frames.], batch size: 22, lr: 5.25e-04 2022-05-27 08:50:59,820 INFO [train.py:842] (0/4) Epoch 10, batch 7750, loss[loss=0.2461, simple_loss=0.3283, pruned_loss=0.08195, over 6794.00 frames.], tot_loss[loss=0.2103, simple_loss=0.2907, pruned_loss=0.06501, over 1423621.57 frames.], batch size: 31, lr: 5.25e-04 2022-05-27 08:51:38,615 INFO [train.py:842] (0/4) Epoch 10, batch 7800, loss[loss=0.1901, simple_loss=0.2754, pruned_loss=0.05234, over 7169.00 frames.], tot_loss[loss=0.2106, simple_loss=0.291, pruned_loss=0.06514, over 1426796.43 frames.], batch size: 18, lr: 5.25e-04 2022-05-27 08:52:17,198 INFO [train.py:842] (0/4) Epoch 10, batch 7850, loss[loss=0.2222, simple_loss=0.3045, pruned_loss=0.07, over 7312.00 frames.], tot_loss[loss=0.2095, simple_loss=0.2899, pruned_loss=0.06451, over 1422469.66 frames.], batch size: 21, lr: 5.25e-04 2022-05-27 08:52:56,076 INFO [train.py:842] (0/4) Epoch 10, batch 7900, loss[loss=0.1835, simple_loss=0.2733, pruned_loss=0.0469, over 7144.00 frames.], tot_loss[loss=0.2099, simple_loss=0.29, pruned_loss=0.06491, over 1424056.98 frames.], batch size: 19, lr: 5.24e-04 2022-05-27 08:53:34,561 INFO [train.py:842] (0/4) Epoch 10, batch 7950, loss[loss=0.1773, simple_loss=0.2532, pruned_loss=0.0507, over 7291.00 frames.], tot_loss[loss=0.21, simple_loss=0.2904, pruned_loss=0.06481, over 1426100.01 frames.], batch size: 18, lr: 5.24e-04 2022-05-27 08:54:13,263 INFO [train.py:842] (0/4) Epoch 10, batch 8000, loss[loss=0.206, 
simple_loss=0.2914, pruned_loss=0.06029, over 7063.00 frames.], tot_loss[loss=0.2089, simple_loss=0.2898, pruned_loss=0.06401, over 1426471.02 frames.], batch size: 18, lr: 5.24e-04 2022-05-27 08:54:51,834 INFO [train.py:842] (0/4) Epoch 10, batch 8050, loss[loss=0.2084, simple_loss=0.2855, pruned_loss=0.06564, over 4976.00 frames.], tot_loss[loss=0.2101, simple_loss=0.2904, pruned_loss=0.06491, over 1422047.92 frames.], batch size: 52, lr: 5.24e-04 2022-05-27 08:55:30,392 INFO [train.py:842] (0/4) Epoch 10, batch 8100, loss[loss=0.2018, simple_loss=0.269, pruned_loss=0.06725, over 7257.00 frames.], tot_loss[loss=0.2102, simple_loss=0.2905, pruned_loss=0.06492, over 1418570.59 frames.], batch size: 17, lr: 5.24e-04 2022-05-27 08:56:08,989 INFO [train.py:842] (0/4) Epoch 10, batch 8150, loss[loss=0.2104, simple_loss=0.298, pruned_loss=0.06145, over 7316.00 frames.], tot_loss[loss=0.2108, simple_loss=0.2908, pruned_loss=0.06543, over 1418766.56 frames.], batch size: 25, lr: 5.24e-04 2022-05-27 08:56:47,760 INFO [train.py:842] (0/4) Epoch 10, batch 8200, loss[loss=0.2024, simple_loss=0.2858, pruned_loss=0.05947, over 6761.00 frames.], tot_loss[loss=0.2113, simple_loss=0.2909, pruned_loss=0.06583, over 1418182.67 frames.], batch size: 31, lr: 5.24e-04 2022-05-27 08:57:26,314 INFO [train.py:842] (0/4) Epoch 10, batch 8250, loss[loss=0.1806, simple_loss=0.2588, pruned_loss=0.05125, over 7358.00 frames.], tot_loss[loss=0.2116, simple_loss=0.2914, pruned_loss=0.06589, over 1418202.01 frames.], batch size: 19, lr: 5.23e-04 2022-05-27 08:58:04,877 INFO [train.py:842] (0/4) Epoch 10, batch 8300, loss[loss=0.2023, simple_loss=0.267, pruned_loss=0.06884, over 7149.00 frames.], tot_loss[loss=0.2111, simple_loss=0.2912, pruned_loss=0.06545, over 1414496.26 frames.], batch size: 17, lr: 5.23e-04 2022-05-27 08:58:43,207 INFO [train.py:842] (0/4) Epoch 10, batch 8350, loss[loss=0.2686, simple_loss=0.3241, pruned_loss=0.1065, over 5079.00 frames.], tot_loss[loss=0.21, simple_loss=0.2902, pruned_loss=0.06494, over 1416594.45 frames.], batch size: 52, lr: 5.23e-04 2022-05-27 08:59:21,879 INFO [train.py:842] (0/4) Epoch 10, batch 8400, loss[loss=0.2035, simple_loss=0.2817, pruned_loss=0.06259, over 7284.00 frames.], tot_loss[loss=0.2092, simple_loss=0.2894, pruned_loss=0.06448, over 1416019.26 frames.], batch size: 24, lr: 5.23e-04 2022-05-27 09:00:00,366 INFO [train.py:842] (0/4) Epoch 10, batch 8450, loss[loss=0.2603, simple_loss=0.332, pruned_loss=0.09429, over 6886.00 frames.], tot_loss[loss=0.209, simple_loss=0.2895, pruned_loss=0.06422, over 1416094.93 frames.], batch size: 31, lr: 5.23e-04 2022-05-27 09:00:39,103 INFO [train.py:842] (0/4) Epoch 10, batch 8500, loss[loss=0.1855, simple_loss=0.2719, pruned_loss=0.04952, over 7163.00 frames.], tot_loss[loss=0.2097, simple_loss=0.2903, pruned_loss=0.06457, over 1416256.11 frames.], batch size: 18, lr: 5.23e-04 2022-05-27 09:01:17,516 INFO [train.py:842] (0/4) Epoch 10, batch 8550, loss[loss=0.2248, simple_loss=0.3016, pruned_loss=0.07398, over 7266.00 frames.], tot_loss[loss=0.2096, simple_loss=0.2897, pruned_loss=0.06479, over 1416234.24 frames.], batch size: 24, lr: 5.23e-04 2022-05-27 09:01:56,480 INFO [train.py:842] (0/4) Epoch 10, batch 8600, loss[loss=0.2062, simple_loss=0.2912, pruned_loss=0.06065, over 7123.00 frames.], tot_loss[loss=0.2092, simple_loss=0.2896, pruned_loss=0.06445, over 1419966.55 frames.], batch size: 21, lr: 5.22e-04 2022-05-27 09:02:35,176 INFO [train.py:842] (0/4) Epoch 10, batch 8650, loss[loss=0.1783, simple_loss=0.2631, 
pruned_loss=0.04678, over 7132.00 frames.], tot_loss[loss=0.2094, simple_loss=0.2899, pruned_loss=0.06443, over 1424145.78 frames.], batch size: 17, lr: 5.22e-04 2022-05-27 09:03:14,149 INFO [train.py:842] (0/4) Epoch 10, batch 8700, loss[loss=0.154, simple_loss=0.2295, pruned_loss=0.03928, over 7159.00 frames.], tot_loss[loss=0.2086, simple_loss=0.289, pruned_loss=0.0641, over 1421122.74 frames.], batch size: 18, lr: 5.22e-04 2022-05-27 09:03:52,574 INFO [train.py:842] (0/4) Epoch 10, batch 8750, loss[loss=0.2306, simple_loss=0.3183, pruned_loss=0.07141, over 7206.00 frames.], tot_loss[loss=0.2092, simple_loss=0.2897, pruned_loss=0.06434, over 1416226.30 frames.], batch size: 23, lr: 5.22e-04 2022-05-27 09:04:31,534 INFO [train.py:842] (0/4) Epoch 10, batch 8800, loss[loss=0.2114, simple_loss=0.2914, pruned_loss=0.06573, over 7191.00 frames.], tot_loss[loss=0.2095, simple_loss=0.2892, pruned_loss=0.06496, over 1418317.13 frames.], batch size: 23, lr: 5.22e-04 2022-05-27 09:05:10,747 INFO [train.py:842] (0/4) Epoch 10, batch 8850, loss[loss=0.1983, simple_loss=0.2896, pruned_loss=0.05348, over 7047.00 frames.], tot_loss[loss=0.2097, simple_loss=0.2889, pruned_loss=0.06524, over 1421043.15 frames.], batch size: 28, lr: 5.22e-04 2022-05-27 09:05:49,801 INFO [train.py:842] (0/4) Epoch 10, batch 8900, loss[loss=0.177, simple_loss=0.2556, pruned_loss=0.04921, over 7132.00 frames.], tot_loss[loss=0.2095, simple_loss=0.2889, pruned_loss=0.06512, over 1421974.02 frames.], batch size: 17, lr: 5.22e-04 2022-05-27 09:06:28,594 INFO [train.py:842] (0/4) Epoch 10, batch 8950, loss[loss=0.1799, simple_loss=0.2472, pruned_loss=0.05631, over 7153.00 frames.], tot_loss[loss=0.2084, simple_loss=0.288, pruned_loss=0.06438, over 1424128.55 frames.], batch size: 17, lr: 5.21e-04 2022-05-27 09:07:07,869 INFO [train.py:842] (0/4) Epoch 10, batch 9000, loss[loss=0.1457, simple_loss=0.2259, pruned_loss=0.03279, over 6743.00 frames.], tot_loss[loss=0.2081, simple_loss=0.2874, pruned_loss=0.06435, over 1421133.02 frames.], batch size: 15, lr: 5.21e-04 2022-05-27 09:07:07,870 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 09:07:17,180 INFO [train.py:871] (0/4) Epoch 10, validation: loss=0.1756, simple_loss=0.2759, pruned_loss=0.03767, over 868885.00 frames. 
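Besides the per-epoch files (epoch-9.pt earlier in this log, epoch-10.pt just below), the run also drops numbered checkpoints such as streaming_pruned_transducer_stateless4/exp/checkpoint-88000.pt. A quick, non-invasive way to see what one of them contains is to load it on CPU and list its top-level entries; this sketch assumes the .pt file is an ordinary torch.save dictionary and relies on no particular key names.

import torch

# Inspect a checkpoint written by this run.  Assumes the file is a plain
# torch.save dict; only the top-level keys are listed, nothing else is assumed.
ckpt = torch.load(
    "streaming_pruned_transducer_stateless4/exp/checkpoint-88000.pt",
    map_location="cpu",
)
print(sorted(ckpt.keys()))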
2022-05-27 09:07:56,082 INFO [train.py:842] (0/4) Epoch 10, batch 9050, loss[loss=0.199, simple_loss=0.2856, pruned_loss=0.05619, over 6642.00 frames.], tot_loss[loss=0.2088, simple_loss=0.2878, pruned_loss=0.06486, over 1405868.16 frames.], batch size: 39, lr: 5.21e-04 2022-05-27 09:08:34,928 INFO [train.py:842] (0/4) Epoch 10, batch 9100, loss[loss=0.1841, simple_loss=0.2695, pruned_loss=0.0493, over 7127.00 frames.], tot_loss[loss=0.2132, simple_loss=0.2914, pruned_loss=0.06755, over 1364763.01 frames.], batch size: 17, lr: 5.21e-04 2022-05-27 09:09:12,720 INFO [train.py:842] (0/4) Epoch 10, batch 9150, loss[loss=0.249, simple_loss=0.3217, pruned_loss=0.0882, over 4948.00 frames.], tot_loss[loss=0.2179, simple_loss=0.2948, pruned_loss=0.07053, over 1292757.20 frames.], batch size: 52, lr: 5.21e-04 2022-05-27 09:09:45,422 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-10.pt 2022-05-27 09:10:05,858 INFO [train.py:842] (0/4) Epoch 11, batch 0, loss[loss=0.2701, simple_loss=0.3494, pruned_loss=0.09538, over 7440.00 frames.], tot_loss[loss=0.2701, simple_loss=0.3494, pruned_loss=0.09538, over 7440.00 frames.], batch size: 20, lr: 5.01e-04 2022-05-27 09:10:44,596 INFO [train.py:842] (0/4) Epoch 11, batch 50, loss[loss=0.1889, simple_loss=0.2688, pruned_loss=0.05456, over 7429.00 frames.], tot_loss[loss=0.2126, simple_loss=0.2946, pruned_loss=0.06529, over 322135.54 frames.], batch size: 20, lr: 5.01e-04 2022-05-27 09:11:23,563 INFO [train.py:842] (0/4) Epoch 11, batch 100, loss[loss=0.2042, simple_loss=0.2783, pruned_loss=0.06503, over 7291.00 frames.], tot_loss[loss=0.2094, simple_loss=0.2902, pruned_loss=0.0643, over 566912.87 frames.], batch size: 18, lr: 5.01e-04 2022-05-27 09:12:02,326 INFO [train.py:842] (0/4) Epoch 11, batch 150, loss[loss=0.2372, simple_loss=0.2891, pruned_loss=0.09262, over 6831.00 frames.], tot_loss[loss=0.2131, simple_loss=0.2933, pruned_loss=0.06642, over 759712.02 frames.], batch size: 15, lr: 5.01e-04 2022-05-27 09:12:41,233 INFO [train.py:842] (0/4) Epoch 11, batch 200, loss[loss=0.1958, simple_loss=0.2695, pruned_loss=0.06108, over 7412.00 frames.], tot_loss[loss=0.2142, simple_loss=0.294, pruned_loss=0.06722, over 907226.60 frames.], batch size: 18, lr: 5.01e-04 2022-05-27 09:13:19,980 INFO [train.py:842] (0/4) Epoch 11, batch 250, loss[loss=0.1923, simple_loss=0.2823, pruned_loss=0.05115, over 6385.00 frames.], tot_loss[loss=0.211, simple_loss=0.2913, pruned_loss=0.06534, over 1022466.26 frames.], batch size: 37, lr: 5.01e-04 2022-05-27 09:13:59,408 INFO [train.py:842] (0/4) Epoch 11, batch 300, loss[loss=0.2828, simple_loss=0.338, pruned_loss=0.1138, over 5134.00 frames.], tot_loss[loss=0.2096, simple_loss=0.2903, pruned_loss=0.06446, over 1114542.45 frames.], batch size: 53, lr: 5.01e-04 2022-05-27 09:14:38,174 INFO [train.py:842] (0/4) Epoch 11, batch 350, loss[loss=0.2305, simple_loss=0.303, pruned_loss=0.07902, over 6843.00 frames.], tot_loss[loss=0.2087, simple_loss=0.2898, pruned_loss=0.06379, over 1187790.95 frames.], batch size: 31, lr: 5.01e-04 2022-05-27 09:15:17,118 INFO [train.py:842] (0/4) Epoch 11, batch 400, loss[loss=0.1966, simple_loss=0.2783, pruned_loss=0.0574, over 7420.00 frames.], tot_loss[loss=0.2078, simple_loss=0.2893, pruned_loss=0.06316, over 1241592.46 frames.], batch size: 20, lr: 5.00e-04 2022-05-27 09:15:56,237 INFO [train.py:842] (0/4) Epoch 11, batch 450, loss[loss=0.1942, simple_loss=0.2923, pruned_loss=0.04808, over 7237.00 frames.], tot_loss[loss=0.206, 
simple_loss=0.2872, pruned_loss=0.0624, over 1281557.16 frames.], batch size: 20, lr: 5.00e-04 2022-05-27 09:16:35,456 INFO [train.py:842] (0/4) Epoch 11, batch 500, loss[loss=0.2014, simple_loss=0.2814, pruned_loss=0.06066, over 7335.00 frames.], tot_loss[loss=0.2077, simple_loss=0.2889, pruned_loss=0.06326, over 1316002.54 frames.], batch size: 20, lr: 5.00e-04 2022-05-27 09:17:14,273 INFO [train.py:842] (0/4) Epoch 11, batch 550, loss[loss=0.1956, simple_loss=0.272, pruned_loss=0.05963, over 7069.00 frames.], tot_loss[loss=0.2069, simple_loss=0.2886, pruned_loss=0.06265, over 1341936.22 frames.], batch size: 18, lr: 5.00e-04 2022-05-27 09:17:53,337 INFO [train.py:842] (0/4) Epoch 11, batch 600, loss[loss=0.1749, simple_loss=0.2536, pruned_loss=0.04812, over 7010.00 frames.], tot_loss[loss=0.2069, simple_loss=0.2887, pruned_loss=0.06255, over 1360711.53 frames.], batch size: 16, lr: 5.00e-04 2022-05-27 09:18:32,208 INFO [train.py:842] (0/4) Epoch 11, batch 650, loss[loss=0.1566, simple_loss=0.2413, pruned_loss=0.03596, over 7132.00 frames.], tot_loss[loss=0.2096, simple_loss=0.2907, pruned_loss=0.06422, over 1366125.06 frames.], batch size: 17, lr: 5.00e-04 2022-05-27 09:19:11,358 INFO [train.py:842] (0/4) Epoch 11, batch 700, loss[loss=0.1766, simple_loss=0.2537, pruned_loss=0.04978, over 6791.00 frames.], tot_loss[loss=0.2103, simple_loss=0.2914, pruned_loss=0.06464, over 1376530.86 frames.], batch size: 15, lr: 5.00e-04 2022-05-27 09:19:50,232 INFO [train.py:842] (0/4) Epoch 11, batch 750, loss[loss=0.1866, simple_loss=0.28, pruned_loss=0.04661, over 7131.00 frames.], tot_loss[loss=0.2078, simple_loss=0.2893, pruned_loss=0.06319, over 1383538.14 frames.], batch size: 20, lr: 4.99e-04 2022-05-27 09:20:29,323 INFO [train.py:842] (0/4) Epoch 11, batch 800, loss[loss=0.2868, simple_loss=0.3481, pruned_loss=0.1128, over 7140.00 frames.], tot_loss[loss=0.2097, simple_loss=0.2907, pruned_loss=0.06432, over 1395097.35 frames.], batch size: 26, lr: 4.99e-04 2022-05-27 09:21:08,037 INFO [train.py:842] (0/4) Epoch 11, batch 850, loss[loss=0.2058, simple_loss=0.2927, pruned_loss=0.0594, over 7320.00 frames.], tot_loss[loss=0.2082, simple_loss=0.2895, pruned_loss=0.06342, over 1397984.51 frames.], batch size: 20, lr: 4.99e-04 2022-05-27 09:21:46,922 INFO [train.py:842] (0/4) Epoch 11, batch 900, loss[loss=0.2056, simple_loss=0.2951, pruned_loss=0.05804, over 7437.00 frames.], tot_loss[loss=0.2091, simple_loss=0.2904, pruned_loss=0.06388, over 1406366.56 frames.], batch size: 20, lr: 4.99e-04 2022-05-27 09:22:25,675 INFO [train.py:842] (0/4) Epoch 11, batch 950, loss[loss=0.188, simple_loss=0.2544, pruned_loss=0.06079, over 6995.00 frames.], tot_loss[loss=0.2091, simple_loss=0.2903, pruned_loss=0.06396, over 1408765.49 frames.], batch size: 16, lr: 4.99e-04 2022-05-27 09:23:05,041 INFO [train.py:842] (0/4) Epoch 11, batch 1000, loss[loss=0.2173, simple_loss=0.3014, pruned_loss=0.06659, over 7283.00 frames.], tot_loss[loss=0.2088, simple_loss=0.2907, pruned_loss=0.06343, over 1412989.70 frames.], batch size: 25, lr: 4.99e-04 2022-05-27 09:23:43,644 INFO [train.py:842] (0/4) Epoch 11, batch 1050, loss[loss=0.1893, simple_loss=0.2715, pruned_loss=0.05353, over 7262.00 frames.], tot_loss[loss=0.2104, simple_loss=0.2917, pruned_loss=0.0645, over 1407575.72 frames.], batch size: 19, lr: 4.99e-04 2022-05-27 09:24:22,807 INFO [train.py:842] (0/4) Epoch 11, batch 1100, loss[loss=0.1805, simple_loss=0.2646, pruned_loss=0.04818, over 7158.00 frames.], tot_loss[loss=0.2095, simple_loss=0.2909, 
pruned_loss=0.06405, over 1412063.07 frames.], batch size: 18, lr: 4.99e-04 2022-05-27 09:25:01,978 INFO [train.py:842] (0/4) Epoch 11, batch 1150, loss[loss=0.2064, simple_loss=0.2809, pruned_loss=0.06595, over 7062.00 frames.], tot_loss[loss=0.208, simple_loss=0.2896, pruned_loss=0.06322, over 1416630.82 frames.], batch size: 18, lr: 4.98e-04 2022-05-27 09:25:40,989 INFO [train.py:842] (0/4) Epoch 11, batch 1200, loss[loss=0.1768, simple_loss=0.2406, pruned_loss=0.05651, over 6811.00 frames.], tot_loss[loss=0.2067, simple_loss=0.2875, pruned_loss=0.06299, over 1419378.49 frames.], batch size: 15, lr: 4.98e-04 2022-05-27 09:26:19,778 INFO [train.py:842] (0/4) Epoch 11, batch 1250, loss[loss=0.1868, simple_loss=0.2703, pruned_loss=0.05168, over 7138.00 frames.], tot_loss[loss=0.2063, simple_loss=0.2867, pruned_loss=0.06291, over 1423626.59 frames.], batch size: 17, lr: 4.98e-04 2022-05-27 09:26:58,838 INFO [train.py:842] (0/4) Epoch 11, batch 1300, loss[loss=0.2442, simple_loss=0.3169, pruned_loss=0.08576, over 7318.00 frames.], tot_loss[loss=0.2061, simple_loss=0.2866, pruned_loss=0.06277, over 1420284.94 frames.], batch size: 21, lr: 4.98e-04 2022-05-27 09:27:37,528 INFO [train.py:842] (0/4) Epoch 11, batch 1350, loss[loss=0.2068, simple_loss=0.2936, pruned_loss=0.06001, over 7314.00 frames.], tot_loss[loss=0.2068, simple_loss=0.2876, pruned_loss=0.06304, over 1424246.68 frames.], batch size: 21, lr: 4.98e-04 2022-05-27 09:28:16,452 INFO [train.py:842] (0/4) Epoch 11, batch 1400, loss[loss=0.1739, simple_loss=0.2667, pruned_loss=0.04059, over 7146.00 frames.], tot_loss[loss=0.2055, simple_loss=0.2866, pruned_loss=0.06217, over 1426944.85 frames.], batch size: 19, lr: 4.98e-04 2022-05-27 09:28:55,139 INFO [train.py:842] (0/4) Epoch 11, batch 1450, loss[loss=0.1634, simple_loss=0.2381, pruned_loss=0.04431, over 7277.00 frames.], tot_loss[loss=0.2055, simple_loss=0.2864, pruned_loss=0.06234, over 1427224.52 frames.], batch size: 17, lr: 4.98e-04 2022-05-27 09:29:34,191 INFO [train.py:842] (0/4) Epoch 11, batch 1500, loss[loss=0.1907, simple_loss=0.2816, pruned_loss=0.04992, over 7119.00 frames.], tot_loss[loss=0.2061, simple_loss=0.287, pruned_loss=0.06257, over 1424463.00 frames.], batch size: 28, lr: 4.97e-04 2022-05-27 09:30:12,899 INFO [train.py:842] (0/4) Epoch 11, batch 1550, loss[loss=0.1938, simple_loss=0.2817, pruned_loss=0.05292, over 7421.00 frames.], tot_loss[loss=0.2071, simple_loss=0.2878, pruned_loss=0.06327, over 1423039.90 frames.], batch size: 20, lr: 4.97e-04 2022-05-27 09:30:51,664 INFO [train.py:842] (0/4) Epoch 11, batch 1600, loss[loss=0.2502, simple_loss=0.3337, pruned_loss=0.08331, over 6831.00 frames.], tot_loss[loss=0.2074, simple_loss=0.2879, pruned_loss=0.06346, over 1418400.44 frames.], batch size: 31, lr: 4.97e-04 2022-05-27 09:31:30,332 INFO [train.py:842] (0/4) Epoch 11, batch 1650, loss[loss=0.1724, simple_loss=0.2503, pruned_loss=0.04721, over 7201.00 frames.], tot_loss[loss=0.2073, simple_loss=0.2876, pruned_loss=0.06349, over 1418475.85 frames.], batch size: 16, lr: 4.97e-04 2022-05-27 09:32:09,188 INFO [train.py:842] (0/4) Epoch 11, batch 1700, loss[loss=0.177, simple_loss=0.2556, pruned_loss=0.0492, over 7234.00 frames.], tot_loss[loss=0.2072, simple_loss=0.2879, pruned_loss=0.06324, over 1417972.11 frames.], batch size: 16, lr: 4.97e-04 2022-05-27 09:32:47,720 INFO [train.py:842] (0/4) Epoch 11, batch 1750, loss[loss=0.1662, simple_loss=0.254, pruned_loss=0.03919, over 7118.00 frames.], tot_loss[loss=0.2065, simple_loss=0.2867, pruned_loss=0.06317, 
over 1413946.91 frames.], batch size: 21, lr: 4.97e-04 2022-05-27 09:33:26,550 INFO [train.py:842] (0/4) Epoch 11, batch 1800, loss[loss=0.2417, simple_loss=0.3057, pruned_loss=0.08887, over 4958.00 frames.], tot_loss[loss=0.206, simple_loss=0.2863, pruned_loss=0.06278, over 1413319.97 frames.], batch size: 53, lr: 4.97e-04 2022-05-27 09:34:05,163 INFO [train.py:842] (0/4) Epoch 11, batch 1850, loss[loss=0.3124, simple_loss=0.3797, pruned_loss=0.1225, over 6493.00 frames.], tot_loss[loss=0.206, simple_loss=0.2868, pruned_loss=0.06265, over 1417123.96 frames.], batch size: 38, lr: 4.97e-04 2022-05-27 09:34:44,034 INFO [train.py:842] (0/4) Epoch 11, batch 1900, loss[loss=0.1865, simple_loss=0.273, pruned_loss=0.04997, over 7313.00 frames.], tot_loss[loss=0.2067, simple_loss=0.2875, pruned_loss=0.06298, over 1421463.02 frames.], batch size: 21, lr: 4.96e-04 2022-05-27 09:35:22,633 INFO [train.py:842] (0/4) Epoch 11, batch 1950, loss[loss=0.1835, simple_loss=0.266, pruned_loss=0.05046, over 7350.00 frames.], tot_loss[loss=0.2064, simple_loss=0.2874, pruned_loss=0.06272, over 1421087.48 frames.], batch size: 19, lr: 4.96e-04 2022-05-27 09:36:01,468 INFO [train.py:842] (0/4) Epoch 11, batch 2000, loss[loss=0.1513, simple_loss=0.2325, pruned_loss=0.03504, over 7169.00 frames.], tot_loss[loss=0.2047, simple_loss=0.2861, pruned_loss=0.06166, over 1422799.12 frames.], batch size: 18, lr: 4.96e-04 2022-05-27 09:36:40,062 INFO [train.py:842] (0/4) Epoch 11, batch 2050, loss[loss=0.1789, simple_loss=0.2564, pruned_loss=0.0507, over 7279.00 frames.], tot_loss[loss=0.2062, simple_loss=0.2874, pruned_loss=0.06246, over 1424849.69 frames.], batch size: 17, lr: 4.96e-04 2022-05-27 09:37:19,130 INFO [train.py:842] (0/4) Epoch 11, batch 2100, loss[loss=0.2693, simple_loss=0.3444, pruned_loss=0.09706, over 7372.00 frames.], tot_loss[loss=0.2059, simple_loss=0.2873, pruned_loss=0.0623, over 1425191.24 frames.], batch size: 23, lr: 4.96e-04 2022-05-27 09:37:57,715 INFO [train.py:842] (0/4) Epoch 11, batch 2150, loss[loss=0.214, simple_loss=0.2842, pruned_loss=0.07185, over 7160.00 frames.], tot_loss[loss=0.2067, simple_loss=0.2877, pruned_loss=0.06283, over 1425084.58 frames.], batch size: 18, lr: 4.96e-04 2022-05-27 09:38:36,749 INFO [train.py:842] (0/4) Epoch 11, batch 2200, loss[loss=0.214, simple_loss=0.3058, pruned_loss=0.06112, over 7231.00 frames.], tot_loss[loss=0.2056, simple_loss=0.287, pruned_loss=0.06213, over 1423929.78 frames.], batch size: 20, lr: 4.96e-04 2022-05-27 09:39:15,378 INFO [train.py:842] (0/4) Epoch 11, batch 2250, loss[loss=0.2419, simple_loss=0.3283, pruned_loss=0.07779, over 7334.00 frames.], tot_loss[loss=0.2065, simple_loss=0.2881, pruned_loss=0.06244, over 1427115.93 frames.], batch size: 22, lr: 4.95e-04 2022-05-27 09:39:54,338 INFO [train.py:842] (0/4) Epoch 11, batch 2300, loss[loss=0.1959, simple_loss=0.2849, pruned_loss=0.05339, over 7167.00 frames.], tot_loss[loss=0.2058, simple_loss=0.2876, pruned_loss=0.06202, over 1427106.70 frames.], batch size: 26, lr: 4.95e-04 2022-05-27 09:40:33,042 INFO [train.py:842] (0/4) Epoch 11, batch 2350, loss[loss=0.2723, simple_loss=0.3468, pruned_loss=0.09887, over 6671.00 frames.], tot_loss[loss=0.2061, simple_loss=0.2878, pruned_loss=0.06219, over 1429517.74 frames.], batch size: 31, lr: 4.95e-04 2022-05-27 09:41:11,943 INFO [train.py:842] (0/4) Epoch 11, batch 2400, loss[loss=0.1966, simple_loss=0.2844, pruned_loss=0.05441, over 7326.00 frames.], tot_loss[loss=0.2069, simple_loss=0.2883, pruned_loss=0.06276, over 1424219.09 frames.], 
batch size: 21, lr: 4.95e-04 2022-05-27 09:41:50,638 INFO [train.py:842] (0/4) Epoch 11, batch 2450, loss[loss=0.1694, simple_loss=0.2524, pruned_loss=0.04319, over 6989.00 frames.], tot_loss[loss=0.2052, simple_loss=0.2863, pruned_loss=0.06207, over 1424476.63 frames.], batch size: 16, lr: 4.95e-04 2022-05-27 09:42:29,489 INFO [train.py:842] (0/4) Epoch 11, batch 2500, loss[loss=0.1984, simple_loss=0.2855, pruned_loss=0.05564, over 7160.00 frames.], tot_loss[loss=0.2055, simple_loss=0.2868, pruned_loss=0.06211, over 1423928.67 frames.], batch size: 19, lr: 4.95e-04 2022-05-27 09:43:08,219 INFO [train.py:842] (0/4) Epoch 11, batch 2550, loss[loss=0.1653, simple_loss=0.2448, pruned_loss=0.0429, over 6825.00 frames.], tot_loss[loss=0.2055, simple_loss=0.2866, pruned_loss=0.06218, over 1427561.02 frames.], batch size: 15, lr: 4.95e-04 2022-05-27 09:43:47,152 INFO [train.py:842] (0/4) Epoch 11, batch 2600, loss[loss=0.2415, simple_loss=0.3314, pruned_loss=0.07584, over 7393.00 frames.], tot_loss[loss=0.206, simple_loss=0.287, pruned_loss=0.06248, over 1429290.91 frames.], batch size: 23, lr: 4.95e-04 2022-05-27 09:44:25,677 INFO [train.py:842] (0/4) Epoch 11, batch 2650, loss[loss=0.1738, simple_loss=0.2476, pruned_loss=0.04997, over 7003.00 frames.], tot_loss[loss=0.2066, simple_loss=0.2878, pruned_loss=0.0627, over 1424711.10 frames.], batch size: 16, lr: 4.94e-04 2022-05-27 09:45:04,581 INFO [train.py:842] (0/4) Epoch 11, batch 2700, loss[loss=0.2115, simple_loss=0.3036, pruned_loss=0.05969, over 7420.00 frames.], tot_loss[loss=0.2087, simple_loss=0.2898, pruned_loss=0.06383, over 1427515.00 frames.], batch size: 21, lr: 4.94e-04 2022-05-27 09:45:43,284 INFO [train.py:842] (0/4) Epoch 11, batch 2750, loss[loss=0.1875, simple_loss=0.2695, pruned_loss=0.05273, over 7285.00 frames.], tot_loss[loss=0.2082, simple_loss=0.289, pruned_loss=0.06369, over 1426247.52 frames.], batch size: 18, lr: 4.94e-04 2022-05-27 09:46:22,108 INFO [train.py:842] (0/4) Epoch 11, batch 2800, loss[loss=0.2289, simple_loss=0.3064, pruned_loss=0.07569, over 7156.00 frames.], tot_loss[loss=0.2092, simple_loss=0.2902, pruned_loss=0.06409, over 1424731.49 frames.], batch size: 19, lr: 4.94e-04 2022-05-27 09:47:01,429 INFO [train.py:842] (0/4) Epoch 11, batch 2850, loss[loss=0.2049, simple_loss=0.2874, pruned_loss=0.06118, over 7323.00 frames.], tot_loss[loss=0.2074, simple_loss=0.2889, pruned_loss=0.06299, over 1425767.33 frames.], batch size: 21, lr: 4.94e-04 2022-05-27 09:47:40,642 INFO [train.py:842] (0/4) Epoch 11, batch 2900, loss[loss=0.2064, simple_loss=0.2846, pruned_loss=0.06403, over 7219.00 frames.], tot_loss[loss=0.207, simple_loss=0.2881, pruned_loss=0.06292, over 1428572.47 frames.], batch size: 23, lr: 4.94e-04 2022-05-27 09:48:19,166 INFO [train.py:842] (0/4) Epoch 11, batch 2950, loss[loss=0.197, simple_loss=0.2906, pruned_loss=0.05169, over 7190.00 frames.], tot_loss[loss=0.2081, simple_loss=0.2891, pruned_loss=0.06358, over 1425684.50 frames.], batch size: 22, lr: 4.94e-04 2022-05-27 09:48:58,065 INFO [train.py:842] (0/4) Epoch 11, batch 3000, loss[loss=0.2492, simple_loss=0.3289, pruned_loss=0.08476, over 7147.00 frames.], tot_loss[loss=0.2094, simple_loss=0.2906, pruned_loss=0.06408, over 1424036.26 frames.], batch size: 18, lr: 4.94e-04 2022-05-27 09:48:58,067 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 09:49:07,580 INFO [train.py:871] (0/4) Epoch 11, validation: loss=0.1731, simple_loss=0.2734, pruned_loss=0.03642, over 868885.00 frames. 
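The "Computing validation loss" / "validation:" records interleaved with the training records (after batches 3000 above and 6000 and 9000 further on in this epoch) are the most convenient convergence signal in a log like this. The snippet below is a small sketch of how those records could be pulled out with a regular expression; the file name train.log and the plain print are illustrative assumptions, not part of the recipe.

    import re

    # Extract (epoch, loss, simple_loss, pruned_loss) from validation records
    # formatted like: "Epoch 11, validation: loss=0.1731, simple_loss=0.2734, ..."
    pattern = re.compile(
        r"Epoch (\d+), validation: loss=([\d.]+), "
        r"simple_loss=([\d.]+), pruned_loss=([\d.]+)"
    )
    with open("train.log") as f:  # assumed log file name
        for m in pattern.finditer(f.read()):
            epoch, loss, simple_loss, pruned_loss = m.groups()
            print(epoch, loss, simple_loss, pruned_loss)

Applied to this section it yields 0.1731, 0.1722 and 0.1709 for epoch 11, so the validation loss is still decreasing slowly within the epoch.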
2022-05-27 09:49:46,387 INFO [train.py:842] (0/4) Epoch 11, batch 3050, loss[loss=0.2218, simple_loss=0.317, pruned_loss=0.06328, over 7174.00 frames.], tot_loss[loss=0.2089, simple_loss=0.29, pruned_loss=0.06386, over 1428330.04 frames.], batch size: 26, lr: 4.93e-04 2022-05-27 09:50:25,537 INFO [train.py:842] (0/4) Epoch 11, batch 3100, loss[loss=0.2154, simple_loss=0.279, pruned_loss=0.0759, over 7409.00 frames.], tot_loss[loss=0.2091, simple_loss=0.29, pruned_loss=0.06407, over 1425533.78 frames.], batch size: 18, lr: 4.93e-04 2022-05-27 09:51:04,347 INFO [train.py:842] (0/4) Epoch 11, batch 3150, loss[loss=0.2212, simple_loss=0.2936, pruned_loss=0.0744, over 7265.00 frames.], tot_loss[loss=0.2088, simple_loss=0.2893, pruned_loss=0.06416, over 1427386.46 frames.], batch size: 18, lr: 4.93e-04 2022-05-27 09:51:43,756 INFO [train.py:842] (0/4) Epoch 11, batch 3200, loss[loss=0.1913, simple_loss=0.2758, pruned_loss=0.05341, over 7163.00 frames.], tot_loss[loss=0.2076, simple_loss=0.2883, pruned_loss=0.06344, over 1428842.58 frames.], batch size: 18, lr: 4.93e-04 2022-05-27 09:52:22,394 INFO [train.py:842] (0/4) Epoch 11, batch 3250, loss[loss=0.2839, simple_loss=0.3443, pruned_loss=0.1118, over 7072.00 frames.], tot_loss[loss=0.2069, simple_loss=0.2879, pruned_loss=0.06297, over 1430345.11 frames.], batch size: 18, lr: 4.93e-04 2022-05-27 09:53:01,431 INFO [train.py:842] (0/4) Epoch 11, batch 3300, loss[loss=0.2371, simple_loss=0.3139, pruned_loss=0.08014, over 6522.00 frames.], tot_loss[loss=0.2064, simple_loss=0.2875, pruned_loss=0.06266, over 1430261.82 frames.], batch size: 38, lr: 4.93e-04 2022-05-27 09:53:39,901 INFO [train.py:842] (0/4) Epoch 11, batch 3350, loss[loss=0.2056, simple_loss=0.2968, pruned_loss=0.05718, over 7108.00 frames.], tot_loss[loss=0.2079, simple_loss=0.2885, pruned_loss=0.06361, over 1425017.95 frames.], batch size: 21, lr: 4.93e-04 2022-05-27 09:54:18,841 INFO [train.py:842] (0/4) Epoch 11, batch 3400, loss[loss=0.2657, simple_loss=0.3159, pruned_loss=0.1078, over 7001.00 frames.], tot_loss[loss=0.2097, simple_loss=0.2898, pruned_loss=0.06477, over 1420985.50 frames.], batch size: 16, lr: 4.92e-04 2022-05-27 09:54:57,564 INFO [train.py:842] (0/4) Epoch 11, batch 3450, loss[loss=0.2057, simple_loss=0.2968, pruned_loss=0.05734, over 7101.00 frames.], tot_loss[loss=0.209, simple_loss=0.2896, pruned_loss=0.06416, over 1423616.25 frames.], batch size: 21, lr: 4.92e-04 2022-05-27 09:55:36,306 INFO [train.py:842] (0/4) Epoch 11, batch 3500, loss[loss=0.2082, simple_loss=0.2858, pruned_loss=0.06528, over 7413.00 frames.], tot_loss[loss=0.2087, simple_loss=0.2892, pruned_loss=0.0641, over 1424825.92 frames.], batch size: 18, lr: 4.92e-04 2022-05-27 09:56:14,787 INFO [train.py:842] (0/4) Epoch 11, batch 3550, loss[loss=0.2252, simple_loss=0.3031, pruned_loss=0.07359, over 6492.00 frames.], tot_loss[loss=0.2082, simple_loss=0.2887, pruned_loss=0.06386, over 1424178.33 frames.], batch size: 38, lr: 4.92e-04 2022-05-27 09:56:53,630 INFO [train.py:842] (0/4) Epoch 11, batch 3600, loss[loss=0.2155, simple_loss=0.2934, pruned_loss=0.06878, over 6490.00 frames.], tot_loss[loss=0.2101, simple_loss=0.2908, pruned_loss=0.06466, over 1419897.33 frames.], batch size: 37, lr: 4.92e-04 2022-05-27 09:57:32,182 INFO [train.py:842] (0/4) Epoch 11, batch 3650, loss[loss=0.1996, simple_loss=0.294, pruned_loss=0.05256, over 7112.00 frames.], tot_loss[loss=0.2091, simple_loss=0.2904, pruned_loss=0.06393, over 1422404.76 frames.], batch size: 21, lr: 4.92e-04 2022-05-27 09:58:10,900 INFO 
[train.py:842] (0/4) Epoch 11, batch 3700, loss[loss=0.1995, simple_loss=0.2885, pruned_loss=0.05531, over 7114.00 frames.], tot_loss[loss=0.2105, simple_loss=0.2912, pruned_loss=0.06488, over 1418723.15 frames.], batch size: 21, lr: 4.92e-04 2022-05-27 09:58:49,437 INFO [train.py:842] (0/4) Epoch 11, batch 3750, loss[loss=0.1985, simple_loss=0.2813, pruned_loss=0.05785, over 7429.00 frames.], tot_loss[loss=0.2111, simple_loss=0.2919, pruned_loss=0.06515, over 1424672.43 frames.], batch size: 20, lr: 4.92e-04 2022-05-27 09:59:28,305 INFO [train.py:842] (0/4) Epoch 11, batch 3800, loss[loss=0.2082, simple_loss=0.288, pruned_loss=0.06415, over 7259.00 frames.], tot_loss[loss=0.2124, simple_loss=0.2928, pruned_loss=0.066, over 1423074.36 frames.], batch size: 24, lr: 4.91e-04 2022-05-27 10:00:06,875 INFO [train.py:842] (0/4) Epoch 11, batch 3850, loss[loss=0.2191, simple_loss=0.3017, pruned_loss=0.06822, over 7095.00 frames.], tot_loss[loss=0.2098, simple_loss=0.2904, pruned_loss=0.06458, over 1427492.46 frames.], batch size: 28, lr: 4.91e-04 2022-05-27 10:00:45,732 INFO [train.py:842] (0/4) Epoch 11, batch 3900, loss[loss=0.223, simple_loss=0.3007, pruned_loss=0.07269, over 7347.00 frames.], tot_loss[loss=0.2094, simple_loss=0.2901, pruned_loss=0.06433, over 1427896.16 frames.], batch size: 22, lr: 4.91e-04 2022-05-27 10:01:24,356 INFO [train.py:842] (0/4) Epoch 11, batch 3950, loss[loss=0.1851, simple_loss=0.2785, pruned_loss=0.0458, over 7423.00 frames.], tot_loss[loss=0.2097, simple_loss=0.2905, pruned_loss=0.06443, over 1428064.79 frames.], batch size: 21, lr: 4.91e-04 2022-05-27 10:02:03,400 INFO [train.py:842] (0/4) Epoch 11, batch 4000, loss[loss=0.2085, simple_loss=0.2996, pruned_loss=0.05869, over 7278.00 frames.], tot_loss[loss=0.2096, simple_loss=0.2903, pruned_loss=0.06444, over 1423171.77 frames.], batch size: 25, lr: 4.91e-04 2022-05-27 10:02:42,050 INFO [train.py:842] (0/4) Epoch 11, batch 4050, loss[loss=0.2658, simple_loss=0.3409, pruned_loss=0.09542, over 7246.00 frames.], tot_loss[loss=0.2086, simple_loss=0.2893, pruned_loss=0.06398, over 1422195.13 frames.], batch size: 26, lr: 4.91e-04 2022-05-27 10:03:04,466 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-96000.pt 2022-05-27 10:03:23,529 INFO [train.py:842] (0/4) Epoch 11, batch 4100, loss[loss=0.2012, simple_loss=0.2758, pruned_loss=0.06324, over 7131.00 frames.], tot_loss[loss=0.2084, simple_loss=0.289, pruned_loss=0.06385, over 1419917.74 frames.], batch size: 17, lr: 4.91e-04 2022-05-27 10:04:02,236 INFO [train.py:842] (0/4) Epoch 11, batch 4150, loss[loss=0.1776, simple_loss=0.2722, pruned_loss=0.04143, over 7126.00 frames.], tot_loss[loss=0.2072, simple_loss=0.2879, pruned_loss=0.06325, over 1421343.91 frames.], batch size: 21, lr: 4.91e-04 2022-05-27 10:04:40,828 INFO [train.py:842] (0/4) Epoch 11, batch 4200, loss[loss=0.2768, simple_loss=0.3467, pruned_loss=0.1034, over 7204.00 frames.], tot_loss[loss=0.2087, simple_loss=0.2897, pruned_loss=0.06388, over 1418663.06 frames.], batch size: 23, lr: 4.90e-04 2022-05-27 10:05:19,319 INFO [train.py:842] (0/4) Epoch 11, batch 4250, loss[loss=0.2419, simple_loss=0.3057, pruned_loss=0.08906, over 7072.00 frames.], tot_loss[loss=0.209, simple_loss=0.2902, pruned_loss=0.06392, over 1419108.63 frames.], batch size: 18, lr: 4.90e-04 2022-05-27 10:05:58,288 INFO [train.py:842] (0/4) Epoch 11, batch 4300, loss[loss=0.1956, simple_loss=0.2735, pruned_loss=0.05881, over 7068.00 frames.], tot_loss[loss=0.2092, 
simple_loss=0.2903, pruned_loss=0.06399, over 1424836.37 frames.], batch size: 18, lr: 4.90e-04 2022-05-27 10:06:36,938 INFO [train.py:842] (0/4) Epoch 11, batch 4350, loss[loss=0.2229, simple_loss=0.301, pruned_loss=0.07234, over 7367.00 frames.], tot_loss[loss=0.21, simple_loss=0.291, pruned_loss=0.06452, over 1424557.42 frames.], batch size: 19, lr: 4.90e-04 2022-05-27 10:07:16,072 INFO [train.py:842] (0/4) Epoch 11, batch 4400, loss[loss=0.204, simple_loss=0.2732, pruned_loss=0.06743, over 7235.00 frames.], tot_loss[loss=0.2088, simple_loss=0.2895, pruned_loss=0.06401, over 1423960.35 frames.], batch size: 16, lr: 4.90e-04 2022-05-27 10:07:55,067 INFO [train.py:842] (0/4) Epoch 11, batch 4450, loss[loss=0.2067, simple_loss=0.2857, pruned_loss=0.06385, over 7167.00 frames.], tot_loss[loss=0.2089, simple_loss=0.2894, pruned_loss=0.06421, over 1420323.95 frames.], batch size: 18, lr: 4.90e-04 2022-05-27 10:08:33,886 INFO [train.py:842] (0/4) Epoch 11, batch 4500, loss[loss=0.1865, simple_loss=0.2755, pruned_loss=0.04881, over 7358.00 frames.], tot_loss[loss=0.2099, simple_loss=0.2905, pruned_loss=0.06466, over 1419436.95 frames.], batch size: 19, lr: 4.90e-04 2022-05-27 10:09:12,428 INFO [train.py:842] (0/4) Epoch 11, batch 4550, loss[loss=0.1921, simple_loss=0.2791, pruned_loss=0.0526, over 7350.00 frames.], tot_loss[loss=0.2104, simple_loss=0.2911, pruned_loss=0.06491, over 1422459.75 frames.], batch size: 19, lr: 4.90e-04 2022-05-27 10:09:51,213 INFO [train.py:842] (0/4) Epoch 11, batch 4600, loss[loss=0.2416, simple_loss=0.317, pruned_loss=0.08307, over 7173.00 frames.], tot_loss[loss=0.2098, simple_loss=0.291, pruned_loss=0.06434, over 1426851.17 frames.], batch size: 18, lr: 4.89e-04 2022-05-27 10:10:29,829 INFO [train.py:842] (0/4) Epoch 11, batch 4650, loss[loss=0.1667, simple_loss=0.2421, pruned_loss=0.04559, over 7276.00 frames.], tot_loss[loss=0.2077, simple_loss=0.2891, pruned_loss=0.06314, over 1425881.22 frames.], batch size: 17, lr: 4.89e-04 2022-05-27 10:11:08,714 INFO [train.py:842] (0/4) Epoch 11, batch 4700, loss[loss=0.2113, simple_loss=0.2932, pruned_loss=0.0647, over 7168.00 frames.], tot_loss[loss=0.2076, simple_loss=0.2892, pruned_loss=0.06307, over 1427518.09 frames.], batch size: 19, lr: 4.89e-04 2022-05-27 10:11:47,135 INFO [train.py:842] (0/4) Epoch 11, batch 4750, loss[loss=0.2118, simple_loss=0.2931, pruned_loss=0.06522, over 7323.00 frames.], tot_loss[loss=0.2084, simple_loss=0.2901, pruned_loss=0.06336, over 1427410.92 frames.], batch size: 21, lr: 4.89e-04 2022-05-27 10:12:26,001 INFO [train.py:842] (0/4) Epoch 11, batch 4800, loss[loss=0.2077, simple_loss=0.2867, pruned_loss=0.06434, over 7372.00 frames.], tot_loss[loss=0.2092, simple_loss=0.2902, pruned_loss=0.0641, over 1428237.96 frames.], batch size: 19, lr: 4.89e-04 2022-05-27 10:13:04,715 INFO [train.py:842] (0/4) Epoch 11, batch 4850, loss[loss=0.2251, simple_loss=0.2938, pruned_loss=0.07825, over 7265.00 frames.], tot_loss[loss=0.2107, simple_loss=0.2918, pruned_loss=0.06482, over 1426964.95 frames.], batch size: 18, lr: 4.89e-04 2022-05-27 10:13:44,108 INFO [train.py:842] (0/4) Epoch 11, batch 4900, loss[loss=0.193, simple_loss=0.2653, pruned_loss=0.06029, over 7265.00 frames.], tot_loss[loss=0.2091, simple_loss=0.2903, pruned_loss=0.06395, over 1429797.47 frames.], batch size: 16, lr: 4.89e-04 2022-05-27 10:14:22,592 INFO [train.py:842] (0/4) Epoch 11, batch 4950, loss[loss=0.2661, simple_loss=0.3434, pruned_loss=0.09444, over 7323.00 frames.], tot_loss[loss=0.2101, simple_loss=0.2908, 
pruned_loss=0.06469, over 1428339.71 frames.], batch size: 21, lr: 4.89e-04 2022-05-27 10:15:01,195 INFO [train.py:842] (0/4) Epoch 11, batch 5000, loss[loss=0.2081, simple_loss=0.2916, pruned_loss=0.06234, over 7327.00 frames.], tot_loss[loss=0.2095, simple_loss=0.2903, pruned_loss=0.06435, over 1424416.67 frames.], batch size: 20, lr: 4.88e-04 2022-05-27 10:15:39,736 INFO [train.py:842] (0/4) Epoch 11, batch 5050, loss[loss=0.2144, simple_loss=0.2909, pruned_loss=0.06893, over 7295.00 frames.], tot_loss[loss=0.2086, simple_loss=0.2902, pruned_loss=0.06347, over 1426019.87 frames.], batch size: 24, lr: 4.88e-04 2022-05-27 10:16:18,510 INFO [train.py:842] (0/4) Epoch 11, batch 5100, loss[loss=0.2005, simple_loss=0.2905, pruned_loss=0.05529, over 7169.00 frames.], tot_loss[loss=0.2095, simple_loss=0.2909, pruned_loss=0.06402, over 1427887.60 frames.], batch size: 18, lr: 4.88e-04 2022-05-27 10:16:57,135 INFO [train.py:842] (0/4) Epoch 11, batch 5150, loss[loss=0.191, simple_loss=0.278, pruned_loss=0.05196, over 7142.00 frames.], tot_loss[loss=0.2095, simple_loss=0.2907, pruned_loss=0.06411, over 1421540.86 frames.], batch size: 20, lr: 4.88e-04 2022-05-27 10:17:36,110 INFO [train.py:842] (0/4) Epoch 11, batch 5200, loss[loss=0.2094, simple_loss=0.3038, pruned_loss=0.05751, over 7192.00 frames.], tot_loss[loss=0.2107, simple_loss=0.2913, pruned_loss=0.06503, over 1419671.71 frames.], batch size: 23, lr: 4.88e-04 2022-05-27 10:18:14,668 INFO [train.py:842] (0/4) Epoch 11, batch 5250, loss[loss=0.2154, simple_loss=0.2907, pruned_loss=0.07003, over 7257.00 frames.], tot_loss[loss=0.2123, simple_loss=0.2922, pruned_loss=0.06614, over 1423182.42 frames.], batch size: 19, lr: 4.88e-04 2022-05-27 10:18:53,615 INFO [train.py:842] (0/4) Epoch 11, batch 5300, loss[loss=0.2053, simple_loss=0.2928, pruned_loss=0.05894, over 7379.00 frames.], tot_loss[loss=0.2121, simple_loss=0.2922, pruned_loss=0.06599, over 1424784.32 frames.], batch size: 23, lr: 4.88e-04 2022-05-27 10:19:32,053 INFO [train.py:842] (0/4) Epoch 11, batch 5350, loss[loss=0.1996, simple_loss=0.2841, pruned_loss=0.05751, over 7236.00 frames.], tot_loss[loss=0.21, simple_loss=0.2905, pruned_loss=0.06475, over 1426586.64 frames.], batch size: 20, lr: 4.88e-04 2022-05-27 10:20:10,526 INFO [train.py:842] (0/4) Epoch 11, batch 5400, loss[loss=0.25, simple_loss=0.3339, pruned_loss=0.08304, over 7182.00 frames.], tot_loss[loss=0.2097, simple_loss=0.2906, pruned_loss=0.06439, over 1427369.89 frames.], batch size: 23, lr: 4.87e-04 2022-05-27 10:20:49,099 INFO [train.py:842] (0/4) Epoch 11, batch 5450, loss[loss=0.2387, simple_loss=0.3238, pruned_loss=0.07683, over 7286.00 frames.], tot_loss[loss=0.2089, simple_loss=0.2899, pruned_loss=0.06397, over 1428673.20 frames.], batch size: 24, lr: 4.87e-04 2022-05-27 10:21:28,147 INFO [train.py:842] (0/4) Epoch 11, batch 5500, loss[loss=0.1984, simple_loss=0.2697, pruned_loss=0.06352, over 7286.00 frames.], tot_loss[loss=0.2098, simple_loss=0.2905, pruned_loss=0.06449, over 1427134.96 frames.], batch size: 18, lr: 4.87e-04 2022-05-27 10:22:16,782 INFO [train.py:842] (0/4) Epoch 11, batch 5550, loss[loss=0.2074, simple_loss=0.2833, pruned_loss=0.06582, over 7162.00 frames.], tot_loss[loss=0.2102, simple_loss=0.2909, pruned_loss=0.06478, over 1427504.55 frames.], batch size: 19, lr: 4.87e-04 2022-05-27 10:22:55,819 INFO [train.py:842] (0/4) Epoch 11, batch 5600, loss[loss=0.3291, simple_loss=0.3688, pruned_loss=0.1447, over 5262.00 frames.], tot_loss[loss=0.2102, simple_loss=0.2907, pruned_loss=0.06483, over 
1425949.43 frames.], batch size: 52, lr: 4.87e-04 2022-05-27 10:23:34,186 INFO [train.py:842] (0/4) Epoch 11, batch 5650, loss[loss=0.2033, simple_loss=0.2986, pruned_loss=0.05396, over 7425.00 frames.], tot_loss[loss=0.2104, simple_loss=0.2914, pruned_loss=0.06473, over 1427024.15 frames.], batch size: 21, lr: 4.87e-04 2022-05-27 10:24:13,272 INFO [train.py:842] (0/4) Epoch 11, batch 5700, loss[loss=0.2449, simple_loss=0.323, pruned_loss=0.08338, over 7374.00 frames.], tot_loss[loss=0.2092, simple_loss=0.2904, pruned_loss=0.06401, over 1425564.99 frames.], batch size: 23, lr: 4.87e-04 2022-05-27 10:24:51,693 INFO [train.py:842] (0/4) Epoch 11, batch 5750, loss[loss=0.1891, simple_loss=0.2754, pruned_loss=0.05138, over 7243.00 frames.], tot_loss[loss=0.2099, simple_loss=0.2907, pruned_loss=0.06451, over 1418949.49 frames.], batch size: 20, lr: 4.87e-04 2022-05-27 10:25:30,830 INFO [train.py:842] (0/4) Epoch 11, batch 5800, loss[loss=0.2905, simple_loss=0.3354, pruned_loss=0.1228, over 5322.00 frames.], tot_loss[loss=0.2083, simple_loss=0.2895, pruned_loss=0.0635, over 1423693.92 frames.], batch size: 55, lr: 4.86e-04 2022-05-27 10:26:09,466 INFO [train.py:842] (0/4) Epoch 11, batch 5850, loss[loss=0.2397, simple_loss=0.3205, pruned_loss=0.07942, over 7069.00 frames.], tot_loss[loss=0.2082, simple_loss=0.2894, pruned_loss=0.06348, over 1423145.94 frames.], batch size: 28, lr: 4.86e-04 2022-05-27 10:26:48,394 INFO [train.py:842] (0/4) Epoch 11, batch 5900, loss[loss=0.1862, simple_loss=0.2743, pruned_loss=0.0491, over 7429.00 frames.], tot_loss[loss=0.2074, simple_loss=0.2884, pruned_loss=0.06324, over 1425087.79 frames.], batch size: 20, lr: 4.86e-04 2022-05-27 10:27:26,936 INFO [train.py:842] (0/4) Epoch 11, batch 5950, loss[loss=0.221, simple_loss=0.2975, pruned_loss=0.07222, over 7173.00 frames.], tot_loss[loss=0.2073, simple_loss=0.2881, pruned_loss=0.06328, over 1428401.72 frames.], batch size: 26, lr: 4.86e-04 2022-05-27 10:28:06,054 INFO [train.py:842] (0/4) Epoch 11, batch 6000, loss[loss=0.1803, simple_loss=0.2793, pruned_loss=0.04065, over 7154.00 frames.], tot_loss[loss=0.2076, simple_loss=0.2882, pruned_loss=0.06347, over 1431412.83 frames.], batch size: 20, lr: 4.86e-04 2022-05-27 10:28:06,057 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 10:28:15,409 INFO [train.py:871] (0/4) Epoch 11, validation: loss=0.1722, simple_loss=0.2724, pruned_loss=0.03599, over 868885.00 frames. 
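The learning rate printed at the end of each training record decays smoothly within the epoch (5.01e-04 at epoch 11, batch 0, down to 4.78e-04 by batch 9100) and takes an extra step down at each epoch boundary (4.61e-04 at epoch 12, batch 0). This behaviour is consistent with the Eden schedule used by icefall's pruned-transducer recipes; the sketch below is only an approximation of that schedule, and the parameter values (base_lr=0.003, lr_batches=5000, lr_epochs=6) are assumed recipe defaults rather than values read out of this log.

    # Approximate Eden-style learning-rate schedule (assumption: this is the
    # schedule behind the "lr:" values in the log).  `step` is the global batch
    # index; `epoch` counts completed epochs.
    def eden_lr(step: int, epoch: int,
                base_lr: float = 0.003,      # assumed default
                lr_batches: float = 5000.0,  # assumed default
                lr_epochs: float = 6.0       # assumed default
                ) -> float:
        batch_factor = ((step ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    # eden_lr(96000, 10) is roughly 4.9e-04, close to the lr logged around the
    # checkpoint-96000.pt save above.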
2022-05-27 10:28:54,273 INFO [train.py:842] (0/4) Epoch 11, batch 6050, loss[loss=0.2145, simple_loss=0.3014, pruned_loss=0.0638, over 7273.00 frames.], tot_loss[loss=0.2084, simple_loss=0.289, pruned_loss=0.06387, over 1427634.96 frames.], batch size: 24, lr: 4.86e-04 2022-05-27 10:29:33,424 INFO [train.py:842] (0/4) Epoch 11, batch 6100, loss[loss=0.2119, simple_loss=0.2892, pruned_loss=0.06733, over 7316.00 frames.], tot_loss[loss=0.2077, simple_loss=0.2881, pruned_loss=0.06362, over 1427438.97 frames.], batch size: 24, lr: 4.86e-04 2022-05-27 10:30:11,963 INFO [train.py:842] (0/4) Epoch 11, batch 6150, loss[loss=0.1866, simple_loss=0.2643, pruned_loss=0.05444, over 7168.00 frames.], tot_loss[loss=0.208, simple_loss=0.2886, pruned_loss=0.0637, over 1431530.63 frames.], batch size: 19, lr: 4.86e-04 2022-05-27 10:30:50,915 INFO [train.py:842] (0/4) Epoch 11, batch 6200, loss[loss=0.2376, simple_loss=0.3088, pruned_loss=0.08319, over 7289.00 frames.], tot_loss[loss=0.2069, simple_loss=0.2877, pruned_loss=0.06299, over 1431639.53 frames.], batch size: 24, lr: 4.85e-04 2022-05-27 10:31:29,390 INFO [train.py:842] (0/4) Epoch 11, batch 6250, loss[loss=0.2116, simple_loss=0.2893, pruned_loss=0.06694, over 6873.00 frames.], tot_loss[loss=0.2064, simple_loss=0.2873, pruned_loss=0.06271, over 1434557.44 frames.], batch size: 31, lr: 4.85e-04 2022-05-27 10:32:08,622 INFO [train.py:842] (0/4) Epoch 11, batch 6300, loss[loss=0.1662, simple_loss=0.2452, pruned_loss=0.04361, over 7007.00 frames.], tot_loss[loss=0.2058, simple_loss=0.2867, pruned_loss=0.0625, over 1431378.60 frames.], batch size: 16, lr: 4.85e-04 2022-05-27 10:32:47,162 INFO [train.py:842] (0/4) Epoch 11, batch 6350, loss[loss=0.3023, simple_loss=0.3546, pruned_loss=0.125, over 7223.00 frames.], tot_loss[loss=0.2069, simple_loss=0.2875, pruned_loss=0.06321, over 1427960.08 frames.], batch size: 26, lr: 4.85e-04 2022-05-27 10:33:25,800 INFO [train.py:842] (0/4) Epoch 11, batch 6400, loss[loss=0.207, simple_loss=0.291, pruned_loss=0.06145, over 7159.00 frames.], tot_loss[loss=0.207, simple_loss=0.2876, pruned_loss=0.06326, over 1423554.46 frames.], batch size: 18, lr: 4.85e-04 2022-05-27 10:34:04,318 INFO [train.py:842] (0/4) Epoch 11, batch 6450, loss[loss=0.2074, simple_loss=0.306, pruned_loss=0.05442, over 7329.00 frames.], tot_loss[loss=0.208, simple_loss=0.2882, pruned_loss=0.06384, over 1414912.80 frames.], batch size: 22, lr: 4.85e-04 2022-05-27 10:34:42,916 INFO [train.py:842] (0/4) Epoch 11, batch 6500, loss[loss=0.1833, simple_loss=0.2696, pruned_loss=0.04853, over 7225.00 frames.], tot_loss[loss=0.2087, simple_loss=0.2892, pruned_loss=0.06413, over 1414962.88 frames.], batch size: 20, lr: 4.85e-04 2022-05-27 10:35:21,589 INFO [train.py:842] (0/4) Epoch 11, batch 6550, loss[loss=0.2293, simple_loss=0.311, pruned_loss=0.07382, over 7251.00 frames.], tot_loss[loss=0.2087, simple_loss=0.2889, pruned_loss=0.06422, over 1416033.87 frames.], batch size: 19, lr: 4.85e-04 2022-05-27 10:36:00,473 INFO [train.py:842] (0/4) Epoch 11, batch 6600, loss[loss=0.2286, simple_loss=0.3116, pruned_loss=0.07281, over 7102.00 frames.], tot_loss[loss=0.2084, simple_loss=0.2888, pruned_loss=0.06397, over 1415537.74 frames.], batch size: 28, lr: 4.84e-04 2022-05-27 10:36:39,119 INFO [train.py:842] (0/4) Epoch 11, batch 6650, loss[loss=0.2107, simple_loss=0.2902, pruned_loss=0.06556, over 7279.00 frames.], tot_loss[loss=0.2093, simple_loss=0.2892, pruned_loss=0.06468, over 1413736.20 frames.], batch size: 24, lr: 4.84e-04 2022-05-27 10:37:18,310 INFO 
[train.py:842] (0/4) Epoch 11, batch 6700, loss[loss=0.1746, simple_loss=0.2596, pruned_loss=0.04482, over 7385.00 frames.], tot_loss[loss=0.209, simple_loss=0.2892, pruned_loss=0.06437, over 1416537.19 frames.], batch size: 23, lr: 4.84e-04 2022-05-27 10:37:56,915 INFO [train.py:842] (0/4) Epoch 11, batch 6750, loss[loss=0.182, simple_loss=0.2734, pruned_loss=0.0453, over 7346.00 frames.], tot_loss[loss=0.2085, simple_loss=0.2893, pruned_loss=0.06386, over 1416299.33 frames.], batch size: 22, lr: 4.84e-04 2022-05-27 10:38:35,771 INFO [train.py:842] (0/4) Epoch 11, batch 6800, loss[loss=0.1627, simple_loss=0.2502, pruned_loss=0.03765, over 7208.00 frames.], tot_loss[loss=0.2075, simple_loss=0.2885, pruned_loss=0.06322, over 1420673.41 frames.], batch size: 16, lr: 4.84e-04 2022-05-27 10:39:14,373 INFO [train.py:842] (0/4) Epoch 11, batch 6850, loss[loss=0.1693, simple_loss=0.2469, pruned_loss=0.04586, over 7020.00 frames.], tot_loss[loss=0.2072, simple_loss=0.2881, pruned_loss=0.06317, over 1422626.09 frames.], batch size: 16, lr: 4.84e-04 2022-05-27 10:39:53,313 INFO [train.py:842] (0/4) Epoch 11, batch 6900, loss[loss=0.1814, simple_loss=0.2775, pruned_loss=0.04261, over 7225.00 frames.], tot_loss[loss=0.2062, simple_loss=0.2875, pruned_loss=0.06248, over 1426467.03 frames.], batch size: 21, lr: 4.84e-04 2022-05-27 10:40:31,860 INFO [train.py:842] (0/4) Epoch 11, batch 6950, loss[loss=0.2054, simple_loss=0.2911, pruned_loss=0.05984, over 7149.00 frames.], tot_loss[loss=0.207, simple_loss=0.2883, pruned_loss=0.06288, over 1429541.42 frames.], batch size: 20, lr: 4.84e-04 2022-05-27 10:41:10,695 INFO [train.py:842] (0/4) Epoch 11, batch 7000, loss[loss=0.2003, simple_loss=0.2798, pruned_loss=0.06037, over 6730.00 frames.], tot_loss[loss=0.2062, simple_loss=0.2882, pruned_loss=0.06213, over 1425791.04 frames.], batch size: 31, lr: 4.83e-04 2022-05-27 10:41:49,524 INFO [train.py:842] (0/4) Epoch 11, batch 7050, loss[loss=0.187, simple_loss=0.2582, pruned_loss=0.05791, over 7167.00 frames.], tot_loss[loss=0.2074, simple_loss=0.289, pruned_loss=0.06293, over 1423328.79 frames.], batch size: 18, lr: 4.83e-04 2022-05-27 10:42:28,473 INFO [train.py:842] (0/4) Epoch 11, batch 7100, loss[loss=0.1765, simple_loss=0.2709, pruned_loss=0.04103, over 6782.00 frames.], tot_loss[loss=0.2073, simple_loss=0.2888, pruned_loss=0.06289, over 1423160.19 frames.], batch size: 15, lr: 4.83e-04 2022-05-27 10:43:06,951 INFO [train.py:842] (0/4) Epoch 11, batch 7150, loss[loss=0.2238, simple_loss=0.302, pruned_loss=0.07278, over 7433.00 frames.], tot_loss[loss=0.2073, simple_loss=0.2887, pruned_loss=0.06297, over 1424341.73 frames.], batch size: 20, lr: 4.83e-04 2022-05-27 10:43:45,973 INFO [train.py:842] (0/4) Epoch 11, batch 7200, loss[loss=0.1845, simple_loss=0.27, pruned_loss=0.04943, over 7233.00 frames.], tot_loss[loss=0.2061, simple_loss=0.2874, pruned_loss=0.06243, over 1423521.55 frames.], batch size: 20, lr: 4.83e-04 2022-05-27 10:44:24,350 INFO [train.py:842] (0/4) Epoch 11, batch 7250, loss[loss=0.2001, simple_loss=0.2875, pruned_loss=0.05633, over 7144.00 frames.], tot_loss[loss=0.2072, simple_loss=0.2886, pruned_loss=0.0629, over 1423538.59 frames.], batch size: 20, lr: 4.83e-04 2022-05-27 10:45:23,832 INFO [train.py:842] (0/4) Epoch 11, batch 7300, loss[loss=0.2818, simple_loss=0.3469, pruned_loss=0.1083, over 6825.00 frames.], tot_loss[loss=0.2077, simple_loss=0.2891, pruned_loss=0.06313, over 1423545.60 frames.], batch size: 31, lr: 4.83e-04 2022-05-27 10:46:12,720 INFO [train.py:842] (0/4) Epoch 
11, batch 7350, loss[loss=0.1864, simple_loss=0.2701, pruned_loss=0.05135, over 7063.00 frames.], tot_loss[loss=0.2058, simple_loss=0.2872, pruned_loss=0.06214, over 1426111.55 frames.], batch size: 28, lr: 4.83e-04 2022-05-27 10:46:51,434 INFO [train.py:842] (0/4) Epoch 11, batch 7400, loss[loss=0.257, simple_loss=0.3133, pruned_loss=0.1004, over 7232.00 frames.], tot_loss[loss=0.2069, simple_loss=0.2882, pruned_loss=0.06285, over 1424981.60 frames.], batch size: 20, lr: 4.83e-04 2022-05-27 10:47:30,205 INFO [train.py:842] (0/4) Epoch 11, batch 7450, loss[loss=0.2142, simple_loss=0.2926, pruned_loss=0.06795, over 7347.00 frames.], tot_loss[loss=0.207, simple_loss=0.2882, pruned_loss=0.06291, over 1425695.94 frames.], batch size: 25, lr: 4.82e-04 2022-05-27 10:48:09,494 INFO [train.py:842] (0/4) Epoch 11, batch 7500, loss[loss=0.2115, simple_loss=0.3025, pruned_loss=0.06026, over 7329.00 frames.], tot_loss[loss=0.2068, simple_loss=0.2878, pruned_loss=0.06289, over 1427111.33 frames.], batch size: 20, lr: 4.82e-04 2022-05-27 10:48:48,111 INFO [train.py:842] (0/4) Epoch 11, batch 7550, loss[loss=0.2501, simple_loss=0.3246, pruned_loss=0.08781, over 7340.00 frames.], tot_loss[loss=0.2073, simple_loss=0.2881, pruned_loss=0.06323, over 1425510.94 frames.], batch size: 22, lr: 4.82e-04 2022-05-27 10:49:26,800 INFO [train.py:842] (0/4) Epoch 11, batch 7600, loss[loss=0.208, simple_loss=0.2751, pruned_loss=0.07044, over 7258.00 frames.], tot_loss[loss=0.2064, simple_loss=0.2873, pruned_loss=0.06273, over 1423672.94 frames.], batch size: 19, lr: 4.82e-04 2022-05-27 10:50:05,344 INFO [train.py:842] (0/4) Epoch 11, batch 7650, loss[loss=0.2172, simple_loss=0.2823, pruned_loss=0.07602, over 6779.00 frames.], tot_loss[loss=0.2066, simple_loss=0.2874, pruned_loss=0.06293, over 1417691.17 frames.], batch size: 15, lr: 4.82e-04 2022-05-27 10:50:44,217 INFO [train.py:842] (0/4) Epoch 11, batch 7700, loss[loss=0.1858, simple_loss=0.2785, pruned_loss=0.04652, over 7214.00 frames.], tot_loss[loss=0.2072, simple_loss=0.2884, pruned_loss=0.06298, over 1422255.07 frames.], batch size: 22, lr: 4.82e-04 2022-05-27 10:51:22,649 INFO [train.py:842] (0/4) Epoch 11, batch 7750, loss[loss=0.2178, simple_loss=0.2994, pruned_loss=0.06811, over 7220.00 frames.], tot_loss[loss=0.2075, simple_loss=0.2888, pruned_loss=0.06315, over 1421341.28 frames.], batch size: 21, lr: 4.82e-04 2022-05-27 10:52:01,624 INFO [train.py:842] (0/4) Epoch 11, batch 7800, loss[loss=0.1989, simple_loss=0.2829, pruned_loss=0.05744, over 7067.00 frames.], tot_loss[loss=0.2071, simple_loss=0.2886, pruned_loss=0.06284, over 1421219.28 frames.], batch size: 18, lr: 4.82e-04 2022-05-27 10:52:40,113 INFO [train.py:842] (0/4) Epoch 11, batch 7850, loss[loss=0.1969, simple_loss=0.2988, pruned_loss=0.04746, over 7228.00 frames.], tot_loss[loss=0.2075, simple_loss=0.2892, pruned_loss=0.06293, over 1420667.87 frames.], batch size: 21, lr: 4.81e-04 2022-05-27 10:53:18,866 INFO [train.py:842] (0/4) Epoch 11, batch 7900, loss[loss=0.1692, simple_loss=0.2558, pruned_loss=0.04128, over 7155.00 frames.], tot_loss[loss=0.208, simple_loss=0.2896, pruned_loss=0.06318, over 1422077.86 frames.], batch size: 18, lr: 4.81e-04 2022-05-27 10:53:57,414 INFO [train.py:842] (0/4) Epoch 11, batch 7950, loss[loss=0.2448, simple_loss=0.3209, pruned_loss=0.08435, over 7418.00 frames.], tot_loss[loss=0.2083, simple_loss=0.2898, pruned_loss=0.06344, over 1425354.64 frames.], batch size: 21, lr: 4.81e-04 2022-05-27 10:54:36,318 INFO [train.py:842] (0/4) Epoch 11, batch 8000, 
loss[loss=0.1888, simple_loss=0.2851, pruned_loss=0.04623, over 7413.00 frames.], tot_loss[loss=0.2066, simple_loss=0.288, pruned_loss=0.06259, over 1425060.40 frames.], batch size: 21, lr: 4.81e-04 2022-05-27 10:55:14,886 INFO [train.py:842] (0/4) Epoch 11, batch 8050, loss[loss=0.2106, simple_loss=0.2885, pruned_loss=0.06635, over 7298.00 frames.], tot_loss[loss=0.2071, simple_loss=0.2882, pruned_loss=0.06296, over 1429368.73 frames.], batch size: 25, lr: 4.81e-04 2022-05-27 10:55:53,852 INFO [train.py:842] (0/4) Epoch 11, batch 8100, loss[loss=0.2188, simple_loss=0.3085, pruned_loss=0.06461, over 7286.00 frames.], tot_loss[loss=0.2053, simple_loss=0.2862, pruned_loss=0.06219, over 1429525.76 frames.], batch size: 24, lr: 4.81e-04 2022-05-27 10:56:32,348 INFO [train.py:842] (0/4) Epoch 11, batch 8150, loss[loss=0.281, simple_loss=0.3499, pruned_loss=0.106, over 7370.00 frames.], tot_loss[loss=0.2067, simple_loss=0.2874, pruned_loss=0.06303, over 1430652.91 frames.], batch size: 23, lr: 4.81e-04 2022-05-27 10:57:11,043 INFO [train.py:842] (0/4) Epoch 11, batch 8200, loss[loss=0.2237, simple_loss=0.3101, pruned_loss=0.06869, over 7273.00 frames.], tot_loss[loss=0.2072, simple_loss=0.2882, pruned_loss=0.06312, over 1425529.43 frames.], batch size: 24, lr: 4.81e-04 2022-05-27 10:57:49,601 INFO [train.py:842] (0/4) Epoch 11, batch 8250, loss[loss=0.1889, simple_loss=0.2781, pruned_loss=0.04986, over 7427.00 frames.], tot_loss[loss=0.2073, simple_loss=0.2887, pruned_loss=0.06297, over 1428451.88 frames.], batch size: 20, lr: 4.80e-04 2022-05-27 10:58:28,728 INFO [train.py:842] (0/4) Epoch 11, batch 8300, loss[loss=0.2305, simple_loss=0.3319, pruned_loss=0.06459, over 7215.00 frames.], tot_loss[loss=0.2078, simple_loss=0.2893, pruned_loss=0.06318, over 1430693.86 frames.], batch size: 22, lr: 4.80e-04 2022-05-27 10:59:07,272 INFO [train.py:842] (0/4) Epoch 11, batch 8350, loss[loss=0.1919, simple_loss=0.268, pruned_loss=0.05793, over 7415.00 frames.], tot_loss[loss=0.2083, simple_loss=0.2891, pruned_loss=0.06374, over 1428026.69 frames.], batch size: 18, lr: 4.80e-04 2022-05-27 10:59:46,098 INFO [train.py:842] (0/4) Epoch 11, batch 8400, loss[loss=0.227, simple_loss=0.3101, pruned_loss=0.07193, over 7296.00 frames.], tot_loss[loss=0.2079, simple_loss=0.2886, pruned_loss=0.06363, over 1430025.10 frames.], batch size: 25, lr: 4.80e-04 2022-05-27 11:00:24,531 INFO [train.py:842] (0/4) Epoch 11, batch 8450, loss[loss=0.1941, simple_loss=0.2895, pruned_loss=0.04936, over 7308.00 frames.], tot_loss[loss=0.2066, simple_loss=0.2877, pruned_loss=0.06276, over 1424841.52 frames.], batch size: 24, lr: 4.80e-04 2022-05-27 11:01:03,517 INFO [train.py:842] (0/4) Epoch 11, batch 8500, loss[loss=0.2074, simple_loss=0.289, pruned_loss=0.06287, over 7215.00 frames.], tot_loss[loss=0.2055, simple_loss=0.2865, pruned_loss=0.06226, over 1422391.18 frames.], batch size: 22, lr: 4.80e-04 2022-05-27 11:01:42,108 INFO [train.py:842] (0/4) Epoch 11, batch 8550, loss[loss=0.2099, simple_loss=0.2889, pruned_loss=0.06541, over 7353.00 frames.], tot_loss[loss=0.2045, simple_loss=0.2856, pruned_loss=0.0617, over 1422755.99 frames.], batch size: 19, lr: 4.80e-04 2022-05-27 11:02:21,320 INFO [train.py:842] (0/4) Epoch 11, batch 8600, loss[loss=0.1662, simple_loss=0.2429, pruned_loss=0.04477, over 7147.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2846, pruned_loss=0.06149, over 1420272.09 frames.], batch size: 17, lr: 4.80e-04 2022-05-27 11:02:59,931 INFO [train.py:842] (0/4) Epoch 11, batch 8650, loss[loss=0.1932, 
simple_loss=0.263, pruned_loss=0.06173, over 7415.00 frames.], tot_loss[loss=0.2027, simple_loss=0.2836, pruned_loss=0.06092, over 1423854.81 frames.], batch size: 18, lr: 4.80e-04 2022-05-27 11:03:38,554 INFO [train.py:842] (0/4) Epoch 11, batch 8700, loss[loss=0.1954, simple_loss=0.2705, pruned_loss=0.06018, over 7428.00 frames.], tot_loss[loss=0.2027, simple_loss=0.2837, pruned_loss=0.06084, over 1419485.30 frames.], batch size: 18, lr: 4.79e-04 2022-05-27 11:04:16,853 INFO [train.py:842] (0/4) Epoch 11, batch 8750, loss[loss=0.2158, simple_loss=0.3019, pruned_loss=0.06486, over 7225.00 frames.], tot_loss[loss=0.2046, simple_loss=0.285, pruned_loss=0.06209, over 1418334.19 frames.], batch size: 26, lr: 4.79e-04 2022-05-27 11:04:55,654 INFO [train.py:842] (0/4) Epoch 11, batch 8800, loss[loss=0.1643, simple_loss=0.2467, pruned_loss=0.04099, over 7369.00 frames.], tot_loss[loss=0.2046, simple_loss=0.2854, pruned_loss=0.06194, over 1417501.16 frames.], batch size: 19, lr: 4.79e-04 2022-05-27 11:05:34,675 INFO [train.py:842] (0/4) Epoch 11, batch 8850, loss[loss=0.1803, simple_loss=0.2582, pruned_loss=0.05123, over 7411.00 frames.], tot_loss[loss=0.2071, simple_loss=0.2875, pruned_loss=0.06334, over 1411535.72 frames.], batch size: 18, lr: 4.79e-04 2022-05-27 11:06:13,189 INFO [train.py:842] (0/4) Epoch 11, batch 8900, loss[loss=0.1934, simple_loss=0.2898, pruned_loss=0.04856, over 7208.00 frames.], tot_loss[loss=0.2067, simple_loss=0.2875, pruned_loss=0.06298, over 1412829.70 frames.], batch size: 21, lr: 4.79e-04 2022-05-27 11:06:51,387 INFO [train.py:842] (0/4) Epoch 11, batch 8950, loss[loss=0.1861, simple_loss=0.2816, pruned_loss=0.04534, over 7153.00 frames.], tot_loss[loss=0.2071, simple_loss=0.2877, pruned_loss=0.06326, over 1398943.53 frames.], batch size: 26, lr: 4.79e-04 2022-05-27 11:07:29,530 INFO [train.py:842] (0/4) Epoch 11, batch 9000, loss[loss=0.2894, simple_loss=0.3538, pruned_loss=0.1125, over 6744.00 frames.], tot_loss[loss=0.2097, simple_loss=0.2896, pruned_loss=0.06491, over 1382060.28 frames.], batch size: 31, lr: 4.79e-04 2022-05-27 11:07:29,532 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 11:07:39,052 INFO [train.py:871] (0/4) Epoch 11, validation: loss=0.1709, simple_loss=0.2721, pruned_loss=0.03482, over 868885.00 frames. 
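Checkpointing in this run happens on two cadences: an epoch-<E>.pt at the end of every epoch (epoch-10.pt earlier in this log, epoch-11.pt just below) and a checkpoint-<N>.pt every 8000 global batches (checkpoint-96000.pt above, checkpoint-104000.pt in epoch 12). The helper below is only a sketch of that cadence; the interval of 8000 is inferred from the two file names and the function is not the recipe's actual saving code.

    # Sketch of the checkpoint cadence visible in this log (illustrative only).
    SAVE_EVERY_N = 8000  # inferred from checkpoint-96000.pt / checkpoint-104000.pt

    def checkpoints_due(exp_dir: str, global_batch: int, epoch: int,
                        end_of_epoch: bool) -> list:
        """Return the checkpoint paths the trainer would write at this point."""
        paths = []
        if global_batch > 0 and global_batch % SAVE_EVERY_N == 0:
            paths.append(f"{exp_dir}/checkpoint-{global_batch}.pt")
        if end_of_epoch:
            paths.append(f"{exp_dir}/epoch-{epoch}.pt")
        return paths

    # checkpoints_due("streaming_pruned_transducer_stateless4/exp", 104000, 12, False)
    # -> ["streaming_pruned_transducer_stateless4/exp/checkpoint-104000.pt"]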
2022-05-27 11:08:17,114 INFO [train.py:842] (0/4) Epoch 11, batch 9050, loss[loss=0.2136, simple_loss=0.2998, pruned_loss=0.06371, over 5360.00 frames.], tot_loss[loss=0.2128, simple_loss=0.2921, pruned_loss=0.06676, over 1369604.60 frames.], batch size: 53, lr: 4.79e-04 2022-05-27 11:08:55,132 INFO [train.py:842] (0/4) Epoch 11, batch 9100, loss[loss=0.2547, simple_loss=0.321, pruned_loss=0.09418, over 4941.00 frames.], tot_loss[loss=0.2197, simple_loss=0.2972, pruned_loss=0.07106, over 1295444.98 frames.], batch size: 52, lr: 4.78e-04 2022-05-27 11:09:32,710 INFO [train.py:842] (0/4) Epoch 11, batch 9150, loss[loss=0.22, simple_loss=0.3014, pruned_loss=0.06932, over 5042.00 frames.], tot_loss[loss=0.226, simple_loss=0.3021, pruned_loss=0.07495, over 1231503.87 frames.], batch size: 52, lr: 4.78e-04 2022-05-27 11:10:06,066 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-11.pt 2022-05-27 11:10:24,880 INFO [train.py:842] (0/4) Epoch 12, batch 0, loss[loss=0.2308, simple_loss=0.3196, pruned_loss=0.07103, over 7418.00 frames.], tot_loss[loss=0.2308, simple_loss=0.3196, pruned_loss=0.07103, over 7418.00 frames.], batch size: 21, lr: 4.61e-04 2022-05-27 11:11:03,716 INFO [train.py:842] (0/4) Epoch 12, batch 50, loss[loss=0.2177, simple_loss=0.2881, pruned_loss=0.07368, over 5051.00 frames.], tot_loss[loss=0.2084, simple_loss=0.2888, pruned_loss=0.06397, over 319166.39 frames.], batch size: 54, lr: 4.61e-04 2022-05-27 11:11:42,656 INFO [train.py:842] (0/4) Epoch 12, batch 100, loss[loss=0.2037, simple_loss=0.2988, pruned_loss=0.05435, over 6191.00 frames.], tot_loss[loss=0.211, simple_loss=0.2923, pruned_loss=0.06481, over 558433.88 frames.], batch size: 37, lr: 4.61e-04 2022-05-27 11:12:21,205 INFO [train.py:842] (0/4) Epoch 12, batch 150, loss[loss=0.1738, simple_loss=0.2674, pruned_loss=0.04011, over 7276.00 frames.], tot_loss[loss=0.2072, simple_loss=0.29, pruned_loss=0.06217, over 748568.67 frames.], batch size: 17, lr: 4.61e-04 2022-05-27 11:12:59,948 INFO [train.py:842] (0/4) Epoch 12, batch 200, loss[loss=0.2154, simple_loss=0.2919, pruned_loss=0.06942, over 7215.00 frames.], tot_loss[loss=0.2079, simple_loss=0.2906, pruned_loss=0.06262, over 896055.18 frames.], batch size: 22, lr: 4.61e-04 2022-05-27 11:13:38,460 INFO [train.py:842] (0/4) Epoch 12, batch 250, loss[loss=0.2286, simple_loss=0.3046, pruned_loss=0.07627, over 6646.00 frames.], tot_loss[loss=0.2058, simple_loss=0.2886, pruned_loss=0.06152, over 1013718.14 frames.], batch size: 31, lr: 4.61e-04 2022-05-27 11:14:17,121 INFO [train.py:842] (0/4) Epoch 12, batch 300, loss[loss=0.2439, simple_loss=0.3223, pruned_loss=0.08274, over 7208.00 frames.], tot_loss[loss=0.2069, simple_loss=0.2899, pruned_loss=0.06188, over 1098573.83 frames.], batch size: 22, lr: 4.61e-04 2022-05-27 11:14:55,692 INFO [train.py:842] (0/4) Epoch 12, batch 350, loss[loss=0.2067, simple_loss=0.2877, pruned_loss=0.06289, over 7332.00 frames.], tot_loss[loss=0.2049, simple_loss=0.2876, pruned_loss=0.0611, over 1165307.76 frames.], batch size: 22, lr: 4.61e-04 2022-05-27 11:15:34,514 INFO [train.py:842] (0/4) Epoch 12, batch 400, loss[loss=0.222, simple_loss=0.3091, pruned_loss=0.06746, over 7330.00 frames.], tot_loss[loss=0.205, simple_loss=0.2878, pruned_loss=0.06113, over 1219512.03 frames.], batch size: 22, lr: 4.60e-04 2022-05-27 11:16:13,174 INFO [train.py:842] (0/4) Epoch 12, batch 450, loss[loss=0.1767, simple_loss=0.2581, pruned_loss=0.04766, over 7169.00 frames.], tot_loss[loss=0.2042, 
simple_loss=0.2867, pruned_loss=0.06086, over 1267685.63 frames.], batch size: 19, lr: 4.60e-04 2022-05-27 11:16:52,031 INFO [train.py:842] (0/4) Epoch 12, batch 500, loss[loss=0.2431, simple_loss=0.3092, pruned_loss=0.08853, over 7357.00 frames.], tot_loss[loss=0.205, simple_loss=0.2872, pruned_loss=0.06134, over 1301757.86 frames.], batch size: 23, lr: 4.60e-04 2022-05-27 11:17:30,956 INFO [train.py:842] (0/4) Epoch 12, batch 550, loss[loss=0.1848, simple_loss=0.273, pruned_loss=0.04831, over 7415.00 frames.], tot_loss[loss=0.2041, simple_loss=0.2861, pruned_loss=0.06106, over 1328893.90 frames.], batch size: 21, lr: 4.60e-04 2022-05-27 11:18:10,099 INFO [train.py:842] (0/4) Epoch 12, batch 600, loss[loss=0.1957, simple_loss=0.2852, pruned_loss=0.05311, over 7328.00 frames.], tot_loss[loss=0.2053, simple_loss=0.2868, pruned_loss=0.06188, over 1348706.57 frames.], batch size: 22, lr: 4.60e-04 2022-05-27 11:18:48,941 INFO [train.py:842] (0/4) Epoch 12, batch 650, loss[loss=0.2068, simple_loss=0.2967, pruned_loss=0.0584, over 7379.00 frames.], tot_loss[loss=0.2043, simple_loss=0.2858, pruned_loss=0.06134, over 1369814.83 frames.], batch size: 23, lr: 4.60e-04 2022-05-27 11:19:27,773 INFO [train.py:842] (0/4) Epoch 12, batch 700, loss[loss=0.1995, simple_loss=0.2998, pruned_loss=0.04957, over 7303.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2858, pruned_loss=0.06092, over 1380432.21 frames.], batch size: 24, lr: 4.60e-04 2022-05-27 11:20:06,287 INFO [train.py:842] (0/4) Epoch 12, batch 750, loss[loss=0.1741, simple_loss=0.2634, pruned_loss=0.04238, over 7346.00 frames.], tot_loss[loss=0.2065, simple_loss=0.2879, pruned_loss=0.06254, over 1386364.17 frames.], batch size: 20, lr: 4.60e-04 2022-05-27 11:20:45,310 INFO [train.py:842] (0/4) Epoch 12, batch 800, loss[loss=0.1955, simple_loss=0.2729, pruned_loss=0.05908, over 7423.00 frames.], tot_loss[loss=0.2066, simple_loss=0.2879, pruned_loss=0.06265, over 1398425.05 frames.], batch size: 18, lr: 4.60e-04 2022-05-27 11:21:23,920 INFO [train.py:842] (0/4) Epoch 12, batch 850, loss[loss=0.2608, simple_loss=0.3387, pruned_loss=0.0915, over 7067.00 frames.], tot_loss[loss=0.2058, simple_loss=0.2875, pruned_loss=0.06209, over 1402884.43 frames.], batch size: 32, lr: 4.59e-04 2022-05-27 11:22:02,952 INFO [train.py:842] (0/4) Epoch 12, batch 900, loss[loss=0.2359, simple_loss=0.3146, pruned_loss=0.07864, over 7334.00 frames.], tot_loss[loss=0.2059, simple_loss=0.2879, pruned_loss=0.06196, over 1407341.62 frames.], batch size: 22, lr: 4.59e-04 2022-05-27 11:22:41,559 INFO [train.py:842] (0/4) Epoch 12, batch 950, loss[loss=0.1763, simple_loss=0.2571, pruned_loss=0.04776, over 7436.00 frames.], tot_loss[loss=0.2061, simple_loss=0.2874, pruned_loss=0.06238, over 1412029.79 frames.], batch size: 20, lr: 4.59e-04 2022-05-27 11:23:20,318 INFO [train.py:842] (0/4) Epoch 12, batch 1000, loss[loss=0.2118, simple_loss=0.2978, pruned_loss=0.06292, over 7154.00 frames.], tot_loss[loss=0.2052, simple_loss=0.2868, pruned_loss=0.06177, over 1415117.04 frames.], batch size: 19, lr: 4.59e-04 2022-05-27 11:23:58,896 INFO [train.py:842] (0/4) Epoch 12, batch 1050, loss[loss=0.1699, simple_loss=0.2397, pruned_loss=0.05004, over 7005.00 frames.], tot_loss[loss=0.2064, simple_loss=0.2878, pruned_loss=0.06247, over 1415365.27 frames.], batch size: 16, lr: 4.59e-04 2022-05-27 11:24:37,601 INFO [train.py:842] (0/4) Epoch 12, batch 1100, loss[loss=0.1957, simple_loss=0.2808, pruned_loss=0.05533, over 7153.00 frames.], tot_loss[loss=0.2067, simple_loss=0.2882, 
pruned_loss=0.06261, over 1418865.91 frames.], batch size: 19, lr: 4.59e-04 2022-05-27 11:25:16,157 INFO [train.py:842] (0/4) Epoch 12, batch 1150, loss[loss=0.2717, simple_loss=0.3288, pruned_loss=0.1073, over 5067.00 frames.], tot_loss[loss=0.2066, simple_loss=0.2882, pruned_loss=0.0625, over 1421416.09 frames.], batch size: 52, lr: 4.59e-04 2022-05-27 11:25:55,249 INFO [train.py:842] (0/4) Epoch 12, batch 1200, loss[loss=0.243, simple_loss=0.3228, pruned_loss=0.08165, over 7100.00 frames.], tot_loss[loss=0.206, simple_loss=0.2883, pruned_loss=0.0619, over 1424436.52 frames.], batch size: 21, lr: 4.59e-04 2022-05-27 11:26:33,778 INFO [train.py:842] (0/4) Epoch 12, batch 1250, loss[loss=0.1465, simple_loss=0.2297, pruned_loss=0.03162, over 7015.00 frames.], tot_loss[loss=0.2052, simple_loss=0.287, pruned_loss=0.06165, over 1425976.48 frames.], batch size: 16, lr: 4.59e-04 2022-05-27 11:27:12,547 INFO [train.py:842] (0/4) Epoch 12, batch 1300, loss[loss=0.1764, simple_loss=0.2687, pruned_loss=0.04203, over 7323.00 frames.], tot_loss[loss=0.2029, simple_loss=0.2851, pruned_loss=0.06037, over 1428247.68 frames.], batch size: 20, lr: 4.58e-04 2022-05-27 11:27:51,015 INFO [train.py:842] (0/4) Epoch 12, batch 1350, loss[loss=0.2146, simple_loss=0.3041, pruned_loss=0.06258, over 7326.00 frames.], tot_loss[loss=0.2034, simple_loss=0.2855, pruned_loss=0.06059, over 1424731.61 frames.], batch size: 21, lr: 4.58e-04 2022-05-27 11:28:30,043 INFO [train.py:842] (0/4) Epoch 12, batch 1400, loss[loss=0.2317, simple_loss=0.3149, pruned_loss=0.07426, over 7321.00 frames.], tot_loss[loss=0.2024, simple_loss=0.2846, pruned_loss=0.06015, over 1421230.34 frames.], batch size: 21, lr: 4.58e-04 2022-05-27 11:29:08,573 INFO [train.py:842] (0/4) Epoch 12, batch 1450, loss[loss=0.184, simple_loss=0.2754, pruned_loss=0.04624, over 7069.00 frames.], tot_loss[loss=0.2025, simple_loss=0.2849, pruned_loss=0.06003, over 1420784.51 frames.], batch size: 18, lr: 4.58e-04 2022-05-27 11:29:47,662 INFO [train.py:842] (0/4) Epoch 12, batch 1500, loss[loss=0.2009, simple_loss=0.2948, pruned_loss=0.05353, over 7188.00 frames.], tot_loss[loss=0.2023, simple_loss=0.2845, pruned_loss=0.06006, over 1425064.98 frames.], batch size: 23, lr: 4.58e-04 2022-05-27 11:30:26,254 INFO [train.py:842] (0/4) Epoch 12, batch 1550, loss[loss=0.1697, simple_loss=0.2649, pruned_loss=0.03728, over 7236.00 frames.], tot_loss[loss=0.2016, simple_loss=0.2837, pruned_loss=0.05977, over 1424971.26 frames.], batch size: 20, lr: 4.58e-04 2022-05-27 11:31:04,959 INFO [train.py:842] (0/4) Epoch 12, batch 1600, loss[loss=0.2065, simple_loss=0.2799, pruned_loss=0.06654, over 7346.00 frames.], tot_loss[loss=0.2028, simple_loss=0.2846, pruned_loss=0.06051, over 1426145.71 frames.], batch size: 19, lr: 4.58e-04 2022-05-27 11:31:43,502 INFO [train.py:842] (0/4) Epoch 12, batch 1650, loss[loss=0.1854, simple_loss=0.2786, pruned_loss=0.04613, over 7374.00 frames.], tot_loss[loss=0.2028, simple_loss=0.2847, pruned_loss=0.06041, over 1426793.35 frames.], batch size: 23, lr: 4.58e-04 2022-05-27 11:32:22,285 INFO [train.py:842] (0/4) Epoch 12, batch 1700, loss[loss=0.2195, simple_loss=0.3024, pruned_loss=0.06831, over 7223.00 frames.], tot_loss[loss=0.2036, simple_loss=0.2857, pruned_loss=0.06078, over 1427990.39 frames.], batch size: 21, lr: 4.58e-04 2022-05-27 11:33:00,964 INFO [train.py:842] (0/4) Epoch 12, batch 1750, loss[loss=0.2711, simple_loss=0.3386, pruned_loss=0.1019, over 7163.00 frames.], tot_loss[loss=0.2044, simple_loss=0.2865, pruned_loss=0.06121, over 
1429134.05 frames.], batch size: 26, lr: 4.57e-04 2022-05-27 11:33:40,078 INFO [train.py:842] (0/4) Epoch 12, batch 1800, loss[loss=0.1921, simple_loss=0.263, pruned_loss=0.06054, over 7422.00 frames.], tot_loss[loss=0.2045, simple_loss=0.2861, pruned_loss=0.06147, over 1430100.96 frames.], batch size: 17, lr: 4.57e-04 2022-05-27 11:34:18,687 INFO [train.py:842] (0/4) Epoch 12, batch 1850, loss[loss=0.1968, simple_loss=0.2859, pruned_loss=0.05384, over 7159.00 frames.], tot_loss[loss=0.2033, simple_loss=0.2851, pruned_loss=0.06078, over 1427976.39 frames.], batch size: 26, lr: 4.57e-04 2022-05-27 11:34:57,825 INFO [train.py:842] (0/4) Epoch 12, batch 1900, loss[loss=0.1795, simple_loss=0.2661, pruned_loss=0.04644, over 7422.00 frames.], tot_loss[loss=0.2031, simple_loss=0.2849, pruned_loss=0.06059, over 1430050.79 frames.], batch size: 20, lr: 4.57e-04 2022-05-27 11:35:36,368 INFO [train.py:842] (0/4) Epoch 12, batch 1950, loss[loss=0.1757, simple_loss=0.2578, pruned_loss=0.04684, over 6990.00 frames.], tot_loss[loss=0.2033, simple_loss=0.2851, pruned_loss=0.06075, over 1428887.53 frames.], batch size: 16, lr: 4.57e-04 2022-05-27 11:36:15,178 INFO [train.py:842] (0/4) Epoch 12, batch 2000, loss[loss=0.2269, simple_loss=0.292, pruned_loss=0.08093, over 6297.00 frames.], tot_loss[loss=0.2043, simple_loss=0.2855, pruned_loss=0.06161, over 1426950.57 frames.], batch size: 37, lr: 4.57e-04 2022-05-27 11:36:53,628 INFO [train.py:842] (0/4) Epoch 12, batch 2050, loss[loss=0.2516, simple_loss=0.3299, pruned_loss=0.08665, over 7373.00 frames.], tot_loss[loss=0.2055, simple_loss=0.286, pruned_loss=0.06245, over 1424266.63 frames.], batch size: 23, lr: 4.57e-04 2022-05-27 11:37:32,479 INFO [train.py:842] (0/4) Epoch 12, batch 2100, loss[loss=0.2276, simple_loss=0.312, pruned_loss=0.07159, over 6818.00 frames.], tot_loss[loss=0.2053, simple_loss=0.2862, pruned_loss=0.06215, over 1428205.43 frames.], batch size: 31, lr: 4.57e-04 2022-05-27 11:38:11,264 INFO [train.py:842] (0/4) Epoch 12, batch 2150, loss[loss=0.2118, simple_loss=0.2836, pruned_loss=0.07002, over 6816.00 frames.], tot_loss[loss=0.2054, simple_loss=0.2867, pruned_loss=0.06212, over 1421997.18 frames.], batch size: 15, lr: 4.57e-04 2022-05-27 11:38:50,344 INFO [train.py:842] (0/4) Epoch 12, batch 2200, loss[loss=0.1898, simple_loss=0.2833, pruned_loss=0.0481, over 7437.00 frames.], tot_loss[loss=0.2036, simple_loss=0.2854, pruned_loss=0.06095, over 1426370.55 frames.], batch size: 20, lr: 4.56e-04 2022-05-27 11:39:29,036 INFO [train.py:842] (0/4) Epoch 12, batch 2250, loss[loss=0.1894, simple_loss=0.2639, pruned_loss=0.05746, over 7138.00 frames.], tot_loss[loss=0.2035, simple_loss=0.2851, pruned_loss=0.06096, over 1425736.75 frames.], batch size: 17, lr: 4.56e-04 2022-05-27 11:40:07,789 INFO [train.py:842] (0/4) Epoch 12, batch 2300, loss[loss=0.1752, simple_loss=0.2631, pruned_loss=0.04359, over 7358.00 frames.], tot_loss[loss=0.2034, simple_loss=0.2855, pruned_loss=0.0606, over 1424131.32 frames.], batch size: 19, lr: 4.56e-04 2022-05-27 11:40:46,478 INFO [train.py:842] (0/4) Epoch 12, batch 2350, loss[loss=0.2088, simple_loss=0.302, pruned_loss=0.05777, over 7283.00 frames.], tot_loss[loss=0.2025, simple_loss=0.2845, pruned_loss=0.06023, over 1425642.28 frames.], batch size: 24, lr: 4.56e-04 2022-05-27 11:41:25,319 INFO [train.py:842] (0/4) Epoch 12, batch 2400, loss[loss=0.2342, simple_loss=0.3126, pruned_loss=0.07793, over 7112.00 frames.], tot_loss[loss=0.203, simple_loss=0.2852, pruned_loss=0.06036, over 1427750.68 frames.], batch 
size: 21, lr: 4.56e-04 2022-05-27 11:42:03,646 INFO [train.py:842] (0/4) Epoch 12, batch 2450, loss[loss=0.2064, simple_loss=0.2873, pruned_loss=0.06272, over 7222.00 frames.], tot_loss[loss=0.203, simple_loss=0.2853, pruned_loss=0.06035, over 1425086.91 frames.], batch size: 20, lr: 4.56e-04 2022-05-27 11:42:42,642 INFO [train.py:842] (0/4) Epoch 12, batch 2500, loss[loss=0.1905, simple_loss=0.2682, pruned_loss=0.05645, over 7076.00 frames.], tot_loss[loss=0.2027, simple_loss=0.285, pruned_loss=0.06024, over 1424400.37 frames.], batch size: 18, lr: 4.56e-04 2022-05-27 11:43:21,058 INFO [train.py:842] (0/4) Epoch 12, batch 2550, loss[loss=0.1946, simple_loss=0.2689, pruned_loss=0.06016, over 7264.00 frames.], tot_loss[loss=0.2029, simple_loss=0.2853, pruned_loss=0.06023, over 1427732.99 frames.], batch size: 17, lr: 4.56e-04 2022-05-27 11:43:59,768 INFO [train.py:842] (0/4) Epoch 12, batch 2600, loss[loss=0.1978, simple_loss=0.2828, pruned_loss=0.05635, over 7295.00 frames.], tot_loss[loss=0.2036, simple_loss=0.2857, pruned_loss=0.06079, over 1422518.29 frames.], batch size: 24, lr: 4.56e-04 2022-05-27 11:44:38,251 INFO [train.py:842] (0/4) Epoch 12, batch 2650, loss[loss=0.2238, simple_loss=0.2922, pruned_loss=0.07772, over 7257.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2861, pruned_loss=0.06076, over 1420248.98 frames.], batch size: 19, lr: 4.55e-04 2022-05-27 11:45:17,039 INFO [train.py:842] (0/4) Epoch 12, batch 2700, loss[loss=0.2021, simple_loss=0.291, pruned_loss=0.05656, over 7285.00 frames.], tot_loss[loss=0.2036, simple_loss=0.2858, pruned_loss=0.06069, over 1423915.73 frames.], batch size: 25, lr: 4.55e-04 2022-05-27 11:45:55,567 INFO [train.py:842] (0/4) Epoch 12, batch 2750, loss[loss=0.1927, simple_loss=0.2734, pruned_loss=0.05603, over 7440.00 frames.], tot_loss[loss=0.2054, simple_loss=0.2869, pruned_loss=0.0619, over 1426416.53 frames.], batch size: 20, lr: 4.55e-04 2022-05-27 11:46:34,709 INFO [train.py:842] (0/4) Epoch 12, batch 2800, loss[loss=0.1992, simple_loss=0.2893, pruned_loss=0.0545, over 7106.00 frames.], tot_loss[loss=0.2049, simple_loss=0.2869, pruned_loss=0.06144, over 1426547.53 frames.], batch size: 21, lr: 4.55e-04 2022-05-27 11:47:13,252 INFO [train.py:842] (0/4) Epoch 12, batch 2850, loss[loss=0.1892, simple_loss=0.2671, pruned_loss=0.05567, over 7317.00 frames.], tot_loss[loss=0.2055, simple_loss=0.2872, pruned_loss=0.06194, over 1428410.27 frames.], batch size: 21, lr: 4.55e-04 2022-05-27 11:47:41,149 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-104000.pt 2022-05-27 11:47:54,695 INFO [train.py:842] (0/4) Epoch 12, batch 2900, loss[loss=0.2407, simple_loss=0.3178, pruned_loss=0.08174, over 7266.00 frames.], tot_loss[loss=0.2063, simple_loss=0.2881, pruned_loss=0.0622, over 1424866.19 frames.], batch size: 24, lr: 4.55e-04 2022-05-27 11:48:33,249 INFO [train.py:842] (0/4) Epoch 12, batch 2950, loss[loss=0.2311, simple_loss=0.3125, pruned_loss=0.07484, over 7220.00 frames.], tot_loss[loss=0.2065, simple_loss=0.288, pruned_loss=0.06249, over 1420658.38 frames.], batch size: 21, lr: 4.55e-04 2022-05-27 11:49:12,199 INFO [train.py:842] (0/4) Epoch 12, batch 3000, loss[loss=0.207, simple_loss=0.293, pruned_loss=0.06044, over 7315.00 frames.], tot_loss[loss=0.2053, simple_loss=0.2867, pruned_loss=0.06194, over 1422156.01 frames.], batch size: 25, lr: 4.55e-04 2022-05-27 11:49:12,201 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 11:49:21,566 INFO [train.py:871] (0/4) Epoch 12, 
validation: loss=0.172, simple_loss=0.2724, pruned_loss=0.03584, over 868885.00 frames. 2022-05-27 11:50:00,011 INFO [train.py:842] (0/4) Epoch 12, batch 3050, loss[loss=0.224, simple_loss=0.3085, pruned_loss=0.06971, over 7380.00 frames.], tot_loss[loss=0.2062, simple_loss=0.2877, pruned_loss=0.06238, over 1420077.24 frames.], batch size: 23, lr: 4.55e-04 2022-05-27 11:50:39,210 INFO [train.py:842] (0/4) Epoch 12, batch 3100, loss[loss=0.1977, simple_loss=0.2762, pruned_loss=0.05959, over 7326.00 frames.], tot_loss[loss=0.205, simple_loss=0.2866, pruned_loss=0.06176, over 1422303.86 frames.], batch size: 20, lr: 4.54e-04 2022-05-27 11:51:17,933 INFO [train.py:842] (0/4) Epoch 12, batch 3150, loss[loss=0.2546, simple_loss=0.3206, pruned_loss=0.09424, over 7371.00 frames.], tot_loss[loss=0.2045, simple_loss=0.2864, pruned_loss=0.06135, over 1423990.78 frames.], batch size: 23, lr: 4.54e-04 2022-05-27 11:51:56,829 INFO [train.py:842] (0/4) Epoch 12, batch 3200, loss[loss=0.1992, simple_loss=0.2889, pruned_loss=0.05475, over 7121.00 frames.], tot_loss[loss=0.2043, simple_loss=0.2859, pruned_loss=0.0613, over 1423483.64 frames.], batch size: 21, lr: 4.54e-04 2022-05-27 11:52:35,738 INFO [train.py:842] (0/4) Epoch 12, batch 3250, loss[loss=0.2025, simple_loss=0.2926, pruned_loss=0.05618, over 7407.00 frames.], tot_loss[loss=0.2027, simple_loss=0.285, pruned_loss=0.06022, over 1424371.69 frames.], batch size: 21, lr: 4.54e-04 2022-05-27 11:53:14,318 INFO [train.py:842] (0/4) Epoch 12, batch 3300, loss[loss=0.1707, simple_loss=0.25, pruned_loss=0.04567, over 7005.00 frames.], tot_loss[loss=0.2034, simple_loss=0.2859, pruned_loss=0.06038, over 1424892.13 frames.], batch size: 16, lr: 4.54e-04 2022-05-27 11:53:52,886 INFO [train.py:842] (0/4) Epoch 12, batch 3350, loss[loss=0.1646, simple_loss=0.2464, pruned_loss=0.04144, over 7265.00 frames.], tot_loss[loss=0.2039, simple_loss=0.2864, pruned_loss=0.06073, over 1425583.53 frames.], batch size: 18, lr: 4.54e-04 2022-05-27 11:54:31,817 INFO [train.py:842] (0/4) Epoch 12, batch 3400, loss[loss=0.2149, simple_loss=0.304, pruned_loss=0.06296, over 6517.00 frames.], tot_loss[loss=0.2044, simple_loss=0.2864, pruned_loss=0.06114, over 1420382.56 frames.], batch size: 38, lr: 4.54e-04 2022-05-27 11:55:10,419 INFO [train.py:842] (0/4) Epoch 12, batch 3450, loss[loss=0.2435, simple_loss=0.3193, pruned_loss=0.08386, over 7111.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2858, pruned_loss=0.06095, over 1418237.40 frames.], batch size: 21, lr: 4.54e-04 2022-05-27 11:55:49,186 INFO [train.py:842] (0/4) Epoch 12, batch 3500, loss[loss=0.2027, simple_loss=0.2901, pruned_loss=0.05761, over 7316.00 frames.], tot_loss[loss=0.2034, simple_loss=0.2857, pruned_loss=0.0606, over 1424510.62 frames.], batch size: 21, lr: 4.54e-04 2022-05-27 11:56:27,762 INFO [train.py:842] (0/4) Epoch 12, batch 3550, loss[loss=0.1792, simple_loss=0.2595, pruned_loss=0.04948, over 7440.00 frames.], tot_loss[loss=0.203, simple_loss=0.2851, pruned_loss=0.0604, over 1423587.11 frames.], batch size: 17, lr: 4.53e-04 2022-05-27 11:57:06,217 INFO [train.py:842] (0/4) Epoch 12, batch 3600, loss[loss=0.2297, simple_loss=0.3029, pruned_loss=0.07823, over 7228.00 frames.], tot_loss[loss=0.205, simple_loss=0.2872, pruned_loss=0.0614, over 1425763.80 frames.], batch size: 20, lr: 4.53e-04 2022-05-27 11:57:44,684 INFO [train.py:842] (0/4) Epoch 12, batch 3650, loss[loss=0.1788, simple_loss=0.2638, pruned_loss=0.04695, over 7438.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2863, 
pruned_loss=0.06072, over 1424595.16 frames.], batch size: 20, lr: 4.53e-04 2022-05-27 11:58:23,621 INFO [train.py:842] (0/4) Epoch 12, batch 3700, loss[loss=0.2214, simple_loss=0.3072, pruned_loss=0.06779, over 6887.00 frames.], tot_loss[loss=0.2045, simple_loss=0.2863, pruned_loss=0.0614, over 1422637.67 frames.], batch size: 31, lr: 4.53e-04 2022-05-27 11:59:02,227 INFO [train.py:842] (0/4) Epoch 12, batch 3750, loss[loss=0.2287, simple_loss=0.313, pruned_loss=0.07218, over 7382.00 frames.], tot_loss[loss=0.203, simple_loss=0.285, pruned_loss=0.06048, over 1427185.28 frames.], batch size: 23, lr: 4.53e-04 2022-05-27 11:59:41,086 INFO [train.py:842] (0/4) Epoch 12, batch 3800, loss[loss=0.1898, simple_loss=0.2761, pruned_loss=0.05177, over 7144.00 frames.], tot_loss[loss=0.2043, simple_loss=0.2858, pruned_loss=0.06141, over 1429077.54 frames.], batch size: 26, lr: 4.53e-04 2022-05-27 12:00:19,489 INFO [train.py:842] (0/4) Epoch 12, batch 3850, loss[loss=0.2081, simple_loss=0.2888, pruned_loss=0.06366, over 7063.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2857, pruned_loss=0.06091, over 1429093.20 frames.], batch size: 18, lr: 4.53e-04 2022-05-27 12:00:58,243 INFO [train.py:842] (0/4) Epoch 12, batch 3900, loss[loss=0.2704, simple_loss=0.3337, pruned_loss=0.1036, over 4928.00 frames.], tot_loss[loss=0.2047, simple_loss=0.2867, pruned_loss=0.06139, over 1430287.83 frames.], batch size: 52, lr: 4.53e-04 2022-05-27 12:01:36,965 INFO [train.py:842] (0/4) Epoch 12, batch 3950, loss[loss=0.178, simple_loss=0.2599, pruned_loss=0.04805, over 7254.00 frames.], tot_loss[loss=0.2063, simple_loss=0.2871, pruned_loss=0.0627, over 1430886.95 frames.], batch size: 19, lr: 4.53e-04 2022-05-27 12:02:15,753 INFO [train.py:842] (0/4) Epoch 12, batch 4000, loss[loss=0.1792, simple_loss=0.2605, pruned_loss=0.04891, over 7359.00 frames.], tot_loss[loss=0.2072, simple_loss=0.2878, pruned_loss=0.06325, over 1427586.30 frames.], batch size: 19, lr: 4.53e-04 2022-05-27 12:02:54,552 INFO [train.py:842] (0/4) Epoch 12, batch 4050, loss[loss=0.2644, simple_loss=0.3279, pruned_loss=0.1005, over 7427.00 frames.], tot_loss[loss=0.2065, simple_loss=0.2875, pruned_loss=0.06275, over 1426692.56 frames.], batch size: 18, lr: 4.52e-04 2022-05-27 12:03:33,347 INFO [train.py:842] (0/4) Epoch 12, batch 4100, loss[loss=0.1917, simple_loss=0.2849, pruned_loss=0.04925, over 7110.00 frames.], tot_loss[loss=0.2071, simple_loss=0.2882, pruned_loss=0.06307, over 1422610.04 frames.], batch size: 21, lr: 4.52e-04 2022-05-27 12:04:12,128 INFO [train.py:842] (0/4) Epoch 12, batch 4150, loss[loss=0.2206, simple_loss=0.3198, pruned_loss=0.06072, over 7208.00 frames.], tot_loss[loss=0.2062, simple_loss=0.2877, pruned_loss=0.06238, over 1424282.83 frames.], batch size: 22, lr: 4.52e-04 2022-05-27 12:04:50,967 INFO [train.py:842] (0/4) Epoch 12, batch 4200, loss[loss=0.1948, simple_loss=0.2827, pruned_loss=0.05348, over 7154.00 frames.], tot_loss[loss=0.2067, simple_loss=0.288, pruned_loss=0.06267, over 1426431.15 frames.], batch size: 20, lr: 4.52e-04 2022-05-27 12:05:29,393 INFO [train.py:842] (0/4) Epoch 12, batch 4250, loss[loss=0.1998, simple_loss=0.2802, pruned_loss=0.05966, over 6818.00 frames.], tot_loss[loss=0.2058, simple_loss=0.2872, pruned_loss=0.06222, over 1424715.75 frames.], batch size: 31, lr: 4.52e-04 2022-05-27 12:06:08,255 INFO [train.py:842] (0/4) Epoch 12, batch 4300, loss[loss=0.2574, simple_loss=0.3041, pruned_loss=0.1053, over 7279.00 frames.], tot_loss[loss=0.2057, simple_loss=0.2868, pruned_loss=0.06231, over 
1426187.97 frames.], batch size: 17, lr: 4.52e-04 2022-05-27 12:06:46,750 INFO [train.py:842] (0/4) Epoch 12, batch 4350, loss[loss=0.1677, simple_loss=0.2479, pruned_loss=0.04373, over 7160.00 frames.], tot_loss[loss=0.2064, simple_loss=0.2878, pruned_loss=0.06255, over 1418613.53 frames.], batch size: 18, lr: 4.52e-04 2022-05-27 12:07:25,732 INFO [train.py:842] (0/4) Epoch 12, batch 4400, loss[loss=0.2059, simple_loss=0.2945, pruned_loss=0.05865, over 7120.00 frames.], tot_loss[loss=0.2074, simple_loss=0.2887, pruned_loss=0.06305, over 1422201.48 frames.], batch size: 21, lr: 4.52e-04 2022-05-27 12:08:04,058 INFO [train.py:842] (0/4) Epoch 12, batch 4450, loss[loss=0.1863, simple_loss=0.2598, pruned_loss=0.05635, over 7252.00 frames.], tot_loss[loss=0.206, simple_loss=0.2875, pruned_loss=0.06228, over 1420903.66 frames.], batch size: 19, lr: 4.52e-04 2022-05-27 12:08:43,211 INFO [train.py:842] (0/4) Epoch 12, batch 4500, loss[loss=0.1454, simple_loss=0.2289, pruned_loss=0.03097, over 7404.00 frames.], tot_loss[loss=0.204, simple_loss=0.2858, pruned_loss=0.06113, over 1425179.60 frames.], batch size: 18, lr: 4.51e-04 2022-05-27 12:09:22,023 INFO [train.py:842] (0/4) Epoch 12, batch 4550, loss[loss=0.2181, simple_loss=0.2975, pruned_loss=0.06932, over 7149.00 frames.], tot_loss[loss=0.2039, simple_loss=0.2855, pruned_loss=0.06112, over 1425125.42 frames.], batch size: 20, lr: 4.51e-04 2022-05-27 12:10:00,790 INFO [train.py:842] (0/4) Epoch 12, batch 4600, loss[loss=0.2418, simple_loss=0.3099, pruned_loss=0.08681, over 6998.00 frames.], tot_loss[loss=0.2045, simple_loss=0.2862, pruned_loss=0.06136, over 1420617.10 frames.], batch size: 28, lr: 4.51e-04 2022-05-27 12:10:39,200 INFO [train.py:842] (0/4) Epoch 12, batch 4650, loss[loss=0.1797, simple_loss=0.2692, pruned_loss=0.04513, over 7312.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2859, pruned_loss=0.06087, over 1422765.56 frames.], batch size: 21, lr: 4.51e-04 2022-05-27 12:11:18,044 INFO [train.py:842] (0/4) Epoch 12, batch 4700, loss[loss=0.2247, simple_loss=0.2984, pruned_loss=0.07547, over 4750.00 frames.], tot_loss[loss=0.2034, simple_loss=0.2853, pruned_loss=0.0607, over 1418718.99 frames.], batch size: 52, lr: 4.51e-04 2022-05-27 12:11:56,442 INFO [train.py:842] (0/4) Epoch 12, batch 4750, loss[loss=0.1955, simple_loss=0.2784, pruned_loss=0.05628, over 7255.00 frames.], tot_loss[loss=0.2046, simple_loss=0.2864, pruned_loss=0.0614, over 1421049.85 frames.], batch size: 19, lr: 4.51e-04 2022-05-27 12:12:35,439 INFO [train.py:842] (0/4) Epoch 12, batch 4800, loss[loss=0.1712, simple_loss=0.2572, pruned_loss=0.04256, over 7356.00 frames.], tot_loss[loss=0.2048, simple_loss=0.2869, pruned_loss=0.0614, over 1422742.76 frames.], batch size: 19, lr: 4.51e-04 2022-05-27 12:13:13,853 INFO [train.py:842] (0/4) Epoch 12, batch 4850, loss[loss=0.2326, simple_loss=0.3055, pruned_loss=0.07979, over 7159.00 frames.], tot_loss[loss=0.2033, simple_loss=0.2858, pruned_loss=0.06042, over 1425093.71 frames.], batch size: 18, lr: 4.51e-04 2022-05-27 12:13:52,886 INFO [train.py:842] (0/4) Epoch 12, batch 4900, loss[loss=0.2054, simple_loss=0.2872, pruned_loss=0.06178, over 7407.00 frames.], tot_loss[loss=0.2023, simple_loss=0.2847, pruned_loss=0.06001, over 1425613.07 frames.], batch size: 18, lr: 4.51e-04 2022-05-27 12:14:31,295 INFO [train.py:842] (0/4) Epoch 12, batch 4950, loss[loss=0.2353, simple_loss=0.3127, pruned_loss=0.0789, over 7191.00 frames.], tot_loss[loss=0.2037, simple_loss=0.2859, pruned_loss=0.06075, over 1421979.24 frames.], 
batch size: 26, lr: 4.50e-04 2022-05-27 12:15:10,324 INFO [train.py:842] (0/4) Epoch 12, batch 5000, loss[loss=0.181, simple_loss=0.26, pruned_loss=0.051, over 7403.00 frames.], tot_loss[loss=0.2031, simple_loss=0.285, pruned_loss=0.06058, over 1415391.02 frames.], batch size: 18, lr: 4.50e-04 2022-05-27 12:15:48,905 INFO [train.py:842] (0/4) Epoch 12, batch 5050, loss[loss=0.1898, simple_loss=0.2657, pruned_loss=0.057, over 7072.00 frames.], tot_loss[loss=0.203, simple_loss=0.2847, pruned_loss=0.0606, over 1420381.19 frames.], batch size: 18, lr: 4.50e-04 2022-05-27 12:16:27,508 INFO [train.py:842] (0/4) Epoch 12, batch 5100, loss[loss=0.2322, simple_loss=0.3153, pruned_loss=0.07462, over 7196.00 frames.], tot_loss[loss=0.2039, simple_loss=0.2858, pruned_loss=0.061, over 1415038.17 frames.], batch size: 22, lr: 4.50e-04 2022-05-27 12:17:06,103 INFO [train.py:842] (0/4) Epoch 12, batch 5150, loss[loss=0.2048, simple_loss=0.2873, pruned_loss=0.06113, over 7202.00 frames.], tot_loss[loss=0.2044, simple_loss=0.2862, pruned_loss=0.06127, over 1419193.10 frames.], batch size: 22, lr: 4.50e-04 2022-05-27 12:17:44,835 INFO [train.py:842] (0/4) Epoch 12, batch 5200, loss[loss=0.1826, simple_loss=0.2779, pruned_loss=0.04365, over 7231.00 frames.], tot_loss[loss=0.2037, simple_loss=0.2859, pruned_loss=0.06073, over 1421651.02 frames.], batch size: 20, lr: 4.50e-04 2022-05-27 12:18:23,378 INFO [train.py:842] (0/4) Epoch 12, batch 5250, loss[loss=0.2272, simple_loss=0.3111, pruned_loss=0.07165, over 7315.00 frames.], tot_loss[loss=0.2036, simple_loss=0.2858, pruned_loss=0.06073, over 1421239.12 frames.], batch size: 24, lr: 4.50e-04 2022-05-27 12:19:02,154 INFO [train.py:842] (0/4) Epoch 12, batch 5300, loss[loss=0.1705, simple_loss=0.2484, pruned_loss=0.0463, over 6813.00 frames.], tot_loss[loss=0.2026, simple_loss=0.2848, pruned_loss=0.06017, over 1420192.00 frames.], batch size: 15, lr: 4.50e-04 2022-05-27 12:19:41,019 INFO [train.py:842] (0/4) Epoch 12, batch 5350, loss[loss=0.2098, simple_loss=0.2937, pruned_loss=0.06294, over 6392.00 frames.], tot_loss[loss=0.2031, simple_loss=0.2848, pruned_loss=0.06067, over 1420448.87 frames.], batch size: 38, lr: 4.50e-04 2022-05-27 12:20:19,735 INFO [train.py:842] (0/4) Epoch 12, batch 5400, loss[loss=0.1859, simple_loss=0.2775, pruned_loss=0.04713, over 7207.00 frames.], tot_loss[loss=0.2033, simple_loss=0.2855, pruned_loss=0.06052, over 1420965.51 frames.], batch size: 21, lr: 4.50e-04 2022-05-27 12:20:58,142 INFO [train.py:842] (0/4) Epoch 12, batch 5450, loss[loss=0.2388, simple_loss=0.3069, pruned_loss=0.08538, over 7329.00 frames.], tot_loss[loss=0.2043, simple_loss=0.2867, pruned_loss=0.06094, over 1419425.20 frames.], batch size: 20, lr: 4.49e-04 2022-05-27 12:21:37,201 INFO [train.py:842] (0/4) Epoch 12, batch 5500, loss[loss=0.2013, simple_loss=0.2836, pruned_loss=0.05951, over 7286.00 frames.], tot_loss[loss=0.2054, simple_loss=0.2875, pruned_loss=0.06171, over 1418130.81 frames.], batch size: 18, lr: 4.49e-04 2022-05-27 12:22:15,858 INFO [train.py:842] (0/4) Epoch 12, batch 5550, loss[loss=0.2014, simple_loss=0.2892, pruned_loss=0.05675, over 7319.00 frames.], tot_loss[loss=0.2029, simple_loss=0.2852, pruned_loss=0.06033, over 1423750.41 frames.], batch size: 21, lr: 4.49e-04 2022-05-27 12:22:54,570 INFO [train.py:842] (0/4) Epoch 12, batch 5600, loss[loss=0.1916, simple_loss=0.2791, pruned_loss=0.05205, over 7148.00 frames.], tot_loss[loss=0.2052, simple_loss=0.2873, pruned_loss=0.06158, over 1419347.31 frames.], batch size: 20, lr: 4.49e-04 
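Every per-batch entry above follows one printed shape, "Epoch E, batch B, loss[...], tot_loss[loss=..., simple_loss=..., pruned_loss=..., over N frames.], batch size: S, lr: L", so the running tot_loss and the slowly decaying learning rate (4.57e-04 at batch 1800 down to 4.49e-04 by batch 5600) can be recovered from the raw text for plotting. The sketch below is a standalone helper written only against that printed format; the log filename in the usage comment is hypothetical and nothing is reused from train.py. It scans the whole text rather than single lines, so entries that happen to wrap across physical lines are still captured.

    import re

    # Matches the per-batch entries printed by the trainer, e.g.
    #   Epoch 12, batch 5600, loss[...], tot_loss[loss=0.2052, ...], batch size: 20, lr: 4.49e-04
    # re.S lets an entry span a wrapped physical line.
    ENTRY = re.compile(
        r"Epoch (\d+), batch (\d+), loss\[.*?\], "
        r"tot_loss\[loss=([0-9.]+), .*?\], "
        r"batch size: \d+, lr: ([0-9.e-]+)",
        re.S,
    )

    def parse_train_log(text):
        """Return (epoch, batch, tot_loss, lr) tuples in the order they were logged."""
        return [
            (int(e), int(b), float(tl), float(lr))
            for e, b, tl, lr in ENTRY.findall(text)
        ]

    # Hypothetical usage (path is an assumption, not taken from this log):
    # rows = parse_train_log(open("train.log").read())
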
2022-05-27 12:23:33,083 INFO [train.py:842] (0/4) Epoch 12, batch 5650, loss[loss=0.2245, simple_loss=0.3037, pruned_loss=0.07266, over 7200.00 frames.], tot_loss[loss=0.2051, simple_loss=0.287, pruned_loss=0.06159, over 1423281.86 frames.], batch size: 26, lr: 4.49e-04 2022-05-27 12:24:12,161 INFO [train.py:842] (0/4) Epoch 12, batch 5700, loss[loss=0.2062, simple_loss=0.2855, pruned_loss=0.06341, over 7359.00 frames.], tot_loss[loss=0.2043, simple_loss=0.2862, pruned_loss=0.06123, over 1422156.18 frames.], batch size: 19, lr: 4.49e-04 2022-05-27 12:24:50,779 INFO [train.py:842] (0/4) Epoch 12, batch 5750, loss[loss=0.2099, simple_loss=0.2961, pruned_loss=0.06184, over 7125.00 frames.], tot_loss[loss=0.2052, simple_loss=0.287, pruned_loss=0.06174, over 1424717.62 frames.], batch size: 21, lr: 4.49e-04 2022-05-27 12:25:29,880 INFO [train.py:842] (0/4) Epoch 12, batch 5800, loss[loss=0.1788, simple_loss=0.2683, pruned_loss=0.04466, over 7307.00 frames.], tot_loss[loss=0.2046, simple_loss=0.2866, pruned_loss=0.06126, over 1423196.95 frames.], batch size: 25, lr: 4.49e-04 2022-05-27 12:26:08,571 INFO [train.py:842] (0/4) Epoch 12, batch 5850, loss[loss=0.1666, simple_loss=0.2481, pruned_loss=0.04252, over 7209.00 frames.], tot_loss[loss=0.2044, simple_loss=0.2861, pruned_loss=0.0613, over 1423252.31 frames.], batch size: 16, lr: 4.49e-04 2022-05-27 12:26:47,351 INFO [train.py:842] (0/4) Epoch 12, batch 5900, loss[loss=0.2439, simple_loss=0.3265, pruned_loss=0.08064, over 6958.00 frames.], tot_loss[loss=0.2042, simple_loss=0.2862, pruned_loss=0.06107, over 1426880.80 frames.], batch size: 32, lr: 4.48e-04 2022-05-27 12:27:25,925 INFO [train.py:842] (0/4) Epoch 12, batch 5950, loss[loss=0.1759, simple_loss=0.2625, pruned_loss=0.04461, over 7440.00 frames.], tot_loss[loss=0.2042, simple_loss=0.2861, pruned_loss=0.06116, over 1430388.38 frames.], batch size: 20, lr: 4.48e-04 2022-05-27 12:28:04,951 INFO [train.py:842] (0/4) Epoch 12, batch 6000, loss[loss=0.2202, simple_loss=0.2984, pruned_loss=0.07095, over 7130.00 frames.], tot_loss[loss=0.2036, simple_loss=0.2853, pruned_loss=0.06089, over 1428214.73 frames.], batch size: 28, lr: 4.48e-04 2022-05-27 12:28:04,952 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 12:28:14,266 INFO [train.py:871] (0/4) Epoch 12, validation: loss=0.1696, simple_loss=0.2704, pruned_loss=0.03442, over 868885.00 frames. 
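In both the per-batch tot_loss[...] brackets and the validation line just above, the first number is not independent of the other two: to the printed precision it equals 0.5 * simple_loss + pruned_loss (for the validation pass above, 0.5 * 0.2704 + 0.03442 gives 0.1696). The check below only re-derives that figure from the logged values; the actual weighting, including any warm-up scaling the recipe applies, lives in train.py and is not quoted here, so read the 0.5 factor as inferred from these numbers rather than taken from the code.

    # Re-derive the validation total printed above from its two components.
    # The 0.5 weight on simple_loss is inferred from the logged numbers, not quoted from train.py.
    simple_loss, pruned_loss = 0.2704, 0.03442
    total = 0.5 * simple_loss + pruned_loss
    print(f"{total:.4f}")  # -> 0.1696, matching "validation: loss=0.1696" above

    # The same identity holds for the per-batch entries,
    # e.g. 0.5 * 0.2873 + 0.06158 = 0.2052, the tot_loss at batch 5600 above.
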
2022-05-27 12:28:52,729 INFO [train.py:842] (0/4) Epoch 12, batch 6050, loss[loss=0.1734, simple_loss=0.2575, pruned_loss=0.04468, over 7006.00 frames.], tot_loss[loss=0.203, simple_loss=0.2854, pruned_loss=0.06029, over 1429298.85 frames.], batch size: 16, lr: 4.48e-04 2022-05-27 12:29:32,051 INFO [train.py:842] (0/4) Epoch 12, batch 6100, loss[loss=0.2226, simple_loss=0.3026, pruned_loss=0.07137, over 7173.00 frames.], tot_loss[loss=0.2007, simple_loss=0.2832, pruned_loss=0.05913, over 1432077.12 frames.], batch size: 19, lr: 4.48e-04 2022-05-27 12:30:10,618 INFO [train.py:842] (0/4) Epoch 12, batch 6150, loss[loss=0.2038, simple_loss=0.282, pruned_loss=0.06282, over 7254.00 frames.], tot_loss[loss=0.2025, simple_loss=0.285, pruned_loss=0.06001, over 1430854.33 frames.], batch size: 19, lr: 4.48e-04 2022-05-27 12:30:49,462 INFO [train.py:842] (0/4) Epoch 12, batch 6200, loss[loss=0.1946, simple_loss=0.2899, pruned_loss=0.04963, over 7239.00 frames.], tot_loss[loss=0.2036, simple_loss=0.2856, pruned_loss=0.06085, over 1428632.39 frames.], batch size: 20, lr: 4.48e-04 2022-05-27 12:31:27,912 INFO [train.py:842] (0/4) Epoch 12, batch 6250, loss[loss=0.1786, simple_loss=0.2567, pruned_loss=0.0502, over 7158.00 frames.], tot_loss[loss=0.2032, simple_loss=0.2855, pruned_loss=0.06041, over 1425355.13 frames.], batch size: 18, lr: 4.48e-04 2022-05-27 12:32:06,960 INFO [train.py:842] (0/4) Epoch 12, batch 6300, loss[loss=0.2263, simple_loss=0.3251, pruned_loss=0.06372, over 6716.00 frames.], tot_loss[loss=0.2037, simple_loss=0.286, pruned_loss=0.06072, over 1426419.56 frames.], batch size: 31, lr: 4.48e-04 2022-05-27 12:32:45,631 INFO [train.py:842] (0/4) Epoch 12, batch 6350, loss[loss=0.1812, simple_loss=0.2676, pruned_loss=0.04742, over 7140.00 frames.], tot_loss[loss=0.2045, simple_loss=0.2865, pruned_loss=0.06119, over 1424977.21 frames.], batch size: 20, lr: 4.48e-04 2022-05-27 12:33:24,536 INFO [train.py:842] (0/4) Epoch 12, batch 6400, loss[loss=0.2025, simple_loss=0.2981, pruned_loss=0.05344, over 7128.00 frames.], tot_loss[loss=0.2044, simple_loss=0.2864, pruned_loss=0.0612, over 1417395.53 frames.], batch size: 26, lr: 4.47e-04 2022-05-27 12:34:03,168 INFO [train.py:842] (0/4) Epoch 12, batch 6450, loss[loss=0.1741, simple_loss=0.2587, pruned_loss=0.04473, over 7176.00 frames.], tot_loss[loss=0.2041, simple_loss=0.2853, pruned_loss=0.06144, over 1416687.25 frames.], batch size: 18, lr: 4.47e-04 2022-05-27 12:34:41,817 INFO [train.py:842] (0/4) Epoch 12, batch 6500, loss[loss=0.1903, simple_loss=0.268, pruned_loss=0.05629, over 7210.00 frames.], tot_loss[loss=0.2022, simple_loss=0.2838, pruned_loss=0.06031, over 1420782.49 frames.], batch size: 22, lr: 4.47e-04 2022-05-27 12:35:20,541 INFO [train.py:842] (0/4) Epoch 12, batch 6550, loss[loss=0.1818, simple_loss=0.2566, pruned_loss=0.05354, over 7170.00 frames.], tot_loss[loss=0.2018, simple_loss=0.2831, pruned_loss=0.06024, over 1420692.76 frames.], batch size: 18, lr: 4.47e-04 2022-05-27 12:35:59,566 INFO [train.py:842] (0/4) Epoch 12, batch 6600, loss[loss=0.1822, simple_loss=0.2661, pruned_loss=0.0491, over 7438.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2818, pruned_loss=0.05943, over 1421310.23 frames.], batch size: 20, lr: 4.47e-04 2022-05-27 12:36:38,268 INFO [train.py:842] (0/4) Epoch 12, batch 6650, loss[loss=0.2185, simple_loss=0.2802, pruned_loss=0.0784, over 7264.00 frames.], tot_loss[loss=0.2015, simple_loss=0.2829, pruned_loss=0.06003, over 1422197.54 frames.], batch size: 17, lr: 4.47e-04 2022-05-27 12:37:17,218 
INFO [train.py:842] (0/4) Epoch 12, batch 6700, loss[loss=0.2498, simple_loss=0.3255, pruned_loss=0.0871, over 7146.00 frames.], tot_loss[loss=0.2021, simple_loss=0.2833, pruned_loss=0.06047, over 1419711.17 frames.], batch size: 20, lr: 4.47e-04 2022-05-27 12:37:56,139 INFO [train.py:842] (0/4) Epoch 12, batch 6750, loss[loss=0.1884, simple_loss=0.282, pruned_loss=0.04736, over 7339.00 frames.], tot_loss[loss=0.201, simple_loss=0.2829, pruned_loss=0.05952, over 1421403.24 frames.], batch size: 22, lr: 4.47e-04 2022-05-27 12:38:35,123 INFO [train.py:842] (0/4) Epoch 12, batch 6800, loss[loss=0.1694, simple_loss=0.2558, pruned_loss=0.04152, over 7159.00 frames.], tot_loss[loss=0.2008, simple_loss=0.2825, pruned_loss=0.05956, over 1422823.78 frames.], batch size: 19, lr: 4.47e-04 2022-05-27 12:39:13,695 INFO [train.py:842] (0/4) Epoch 12, batch 6850, loss[loss=0.1823, simple_loss=0.2728, pruned_loss=0.04588, over 7233.00 frames.], tot_loss[loss=0.2001, simple_loss=0.2826, pruned_loss=0.05881, over 1423721.48 frames.], batch size: 20, lr: 4.47e-04 2022-05-27 12:39:52,700 INFO [train.py:842] (0/4) Epoch 12, batch 6900, loss[loss=0.2227, simple_loss=0.2913, pruned_loss=0.077, over 6817.00 frames.], tot_loss[loss=0.202, simple_loss=0.2835, pruned_loss=0.06025, over 1420954.55 frames.], batch size: 31, lr: 4.46e-04 2022-05-27 12:40:31,030 INFO [train.py:842] (0/4) Epoch 12, batch 6950, loss[loss=0.2188, simple_loss=0.3016, pruned_loss=0.06801, over 7111.00 frames.], tot_loss[loss=0.2029, simple_loss=0.2847, pruned_loss=0.06056, over 1416631.98 frames.], batch size: 21, lr: 4.46e-04 2022-05-27 12:41:09,838 INFO [train.py:842] (0/4) Epoch 12, batch 7000, loss[loss=0.1913, simple_loss=0.2886, pruned_loss=0.047, over 7193.00 frames.], tot_loss[loss=0.2038, simple_loss=0.285, pruned_loss=0.06127, over 1415764.94 frames.], batch size: 23, lr: 4.46e-04 2022-05-27 12:41:48,362 INFO [train.py:842] (0/4) Epoch 12, batch 7050, loss[loss=0.2109, simple_loss=0.2973, pruned_loss=0.06219, over 6310.00 frames.], tot_loss[loss=0.2047, simple_loss=0.2863, pruned_loss=0.06154, over 1418663.29 frames.], batch size: 37, lr: 4.46e-04 2022-05-27 12:42:27,202 INFO [train.py:842] (0/4) Epoch 12, batch 7100, loss[loss=0.1799, simple_loss=0.2731, pruned_loss=0.04341, over 7143.00 frames.], tot_loss[loss=0.2041, simple_loss=0.2858, pruned_loss=0.0612, over 1416752.56 frames.], batch size: 20, lr: 4.46e-04 2022-05-27 12:43:05,922 INFO [train.py:842] (0/4) Epoch 12, batch 7150, loss[loss=0.1696, simple_loss=0.2519, pruned_loss=0.04362, over 7432.00 frames.], tot_loss[loss=0.2046, simple_loss=0.286, pruned_loss=0.06158, over 1421006.45 frames.], batch size: 20, lr: 4.46e-04 2022-05-27 12:43:44,467 INFO [train.py:842] (0/4) Epoch 12, batch 7200, loss[loss=0.2161, simple_loss=0.3076, pruned_loss=0.06229, over 7419.00 frames.], tot_loss[loss=0.2063, simple_loss=0.2881, pruned_loss=0.06227, over 1423180.51 frames.], batch size: 21, lr: 4.46e-04 2022-05-27 12:44:23,088 INFO [train.py:842] (0/4) Epoch 12, batch 7250, loss[loss=0.1757, simple_loss=0.2687, pruned_loss=0.04137, over 7119.00 frames.], tot_loss[loss=0.2051, simple_loss=0.2871, pruned_loss=0.06154, over 1419300.16 frames.], batch size: 21, lr: 4.46e-04 2022-05-27 12:45:01,908 INFO [train.py:842] (0/4) Epoch 12, batch 7300, loss[loss=0.1823, simple_loss=0.2606, pruned_loss=0.05203, over 7006.00 frames.], tot_loss[loss=0.2037, simple_loss=0.2858, pruned_loss=0.0608, over 1422256.44 frames.], batch size: 16, lr: 4.46e-04 2022-05-27 12:45:40,533 INFO [train.py:842] (0/4) 
Epoch 12, batch 7350, loss[loss=0.1975, simple_loss=0.2663, pruned_loss=0.0643, over 7133.00 frames.], tot_loss[loss=0.2043, simple_loss=0.2864, pruned_loss=0.0611, over 1423114.25 frames.], batch size: 17, lr: 4.45e-04 2022-05-27 12:46:19,329 INFO [train.py:842] (0/4) Epoch 12, batch 7400, loss[loss=0.1727, simple_loss=0.2455, pruned_loss=0.04999, over 7403.00 frames.], tot_loss[loss=0.2041, simple_loss=0.286, pruned_loss=0.0611, over 1423393.76 frames.], batch size: 18, lr: 4.45e-04 2022-05-27 12:46:57,750 INFO [train.py:842] (0/4) Epoch 12, batch 7450, loss[loss=0.1592, simple_loss=0.25, pruned_loss=0.03423, over 7291.00 frames.], tot_loss[loss=0.2022, simple_loss=0.2846, pruned_loss=0.0599, over 1426043.86 frames.], batch size: 18, lr: 4.45e-04 2022-05-27 12:47:36,474 INFO [train.py:842] (0/4) Epoch 12, batch 7500, loss[loss=0.2029, simple_loss=0.2884, pruned_loss=0.0587, over 6807.00 frames.], tot_loss[loss=0.2024, simple_loss=0.2848, pruned_loss=0.06007, over 1426994.85 frames.], batch size: 31, lr: 4.45e-04 2022-05-27 12:48:15,132 INFO [train.py:842] (0/4) Epoch 12, batch 7550, loss[loss=0.1798, simple_loss=0.2673, pruned_loss=0.04612, over 7150.00 frames.], tot_loss[loss=0.2004, simple_loss=0.2833, pruned_loss=0.05879, over 1427352.97 frames.], batch size: 20, lr: 4.45e-04 2022-05-27 12:48:54,045 INFO [train.py:842] (0/4) Epoch 12, batch 7600, loss[loss=0.2532, simple_loss=0.3132, pruned_loss=0.09662, over 7316.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2841, pruned_loss=0.05904, over 1432654.77 frames.], batch size: 21, lr: 4.45e-04 2022-05-27 12:49:32,594 INFO [train.py:842] (0/4) Epoch 12, batch 7650, loss[loss=0.228, simple_loss=0.3152, pruned_loss=0.0704, over 7134.00 frames.], tot_loss[loss=0.2026, simple_loss=0.2856, pruned_loss=0.05984, over 1424663.72 frames.], batch size: 20, lr: 4.45e-04 2022-05-27 12:50:11,564 INFO [train.py:842] (0/4) Epoch 12, batch 7700, loss[loss=0.162, simple_loss=0.2376, pruned_loss=0.04315, over 7273.00 frames.], tot_loss[loss=0.2039, simple_loss=0.2865, pruned_loss=0.06061, over 1424469.93 frames.], batch size: 17, lr: 4.45e-04 2022-05-27 12:50:50,206 INFO [train.py:842] (0/4) Epoch 12, batch 7750, loss[loss=0.1731, simple_loss=0.2487, pruned_loss=0.04877, over 6837.00 frames.], tot_loss[loss=0.2033, simple_loss=0.2858, pruned_loss=0.06045, over 1422298.62 frames.], batch size: 15, lr: 4.45e-04 2022-05-27 12:51:28,916 INFO [train.py:842] (0/4) Epoch 12, batch 7800, loss[loss=0.1782, simple_loss=0.265, pruned_loss=0.04566, over 7337.00 frames.], tot_loss[loss=0.2039, simple_loss=0.2863, pruned_loss=0.06076, over 1421350.86 frames.], batch size: 20, lr: 4.45e-04 2022-05-27 12:52:07,450 INFO [train.py:842] (0/4) Epoch 12, batch 7850, loss[loss=0.2303, simple_loss=0.3166, pruned_loss=0.07206, over 4816.00 frames.], tot_loss[loss=0.2049, simple_loss=0.2871, pruned_loss=0.06133, over 1417983.48 frames.], batch size: 54, lr: 4.44e-04 2022-05-27 12:52:46,499 INFO [train.py:842] (0/4) Epoch 12, batch 7900, loss[loss=0.2147, simple_loss=0.2944, pruned_loss=0.06748, over 7225.00 frames.], tot_loss[loss=0.2038, simple_loss=0.286, pruned_loss=0.06079, over 1422893.52 frames.], batch size: 23, lr: 4.44e-04 2022-05-27 12:53:25,063 INFO [train.py:842] (0/4) Epoch 12, batch 7950, loss[loss=0.2238, simple_loss=0.3024, pruned_loss=0.07265, over 7274.00 frames.], tot_loss[loss=0.2042, simple_loss=0.2864, pruned_loss=0.06105, over 1425028.87 frames.], batch size: 24, lr: 4.44e-04 2022-05-27 12:54:03,975 INFO [train.py:842] (0/4) Epoch 12, batch 8000, 
loss[loss=0.1786, simple_loss=0.2609, pruned_loss=0.04816, over 7163.00 frames.], tot_loss[loss=0.2037, simple_loss=0.2858, pruned_loss=0.06076, over 1424841.63 frames.], batch size: 18, lr: 4.44e-04 2022-05-27 12:54:42,594 INFO [train.py:842] (0/4) Epoch 12, batch 8050, loss[loss=0.264, simple_loss=0.3393, pruned_loss=0.09432, over 7373.00 frames.], tot_loss[loss=0.2046, simple_loss=0.2865, pruned_loss=0.06141, over 1428518.75 frames.], batch size: 23, lr: 4.44e-04 2022-05-27 12:55:21,334 INFO [train.py:842] (0/4) Epoch 12, batch 8100, loss[loss=0.1894, simple_loss=0.2802, pruned_loss=0.04926, over 7211.00 frames.], tot_loss[loss=0.2063, simple_loss=0.2877, pruned_loss=0.0625, over 1429426.70 frames.], batch size: 21, lr: 4.44e-04 2022-05-27 12:55:59,852 INFO [train.py:842] (0/4) Epoch 12, batch 8150, loss[loss=0.2187, simple_loss=0.3037, pruned_loss=0.06684, over 7141.00 frames.], tot_loss[loss=0.2064, simple_loss=0.2877, pruned_loss=0.06249, over 1425705.07 frames.], batch size: 20, lr: 4.44e-04 2022-05-27 12:56:49,258 INFO [train.py:842] (0/4) Epoch 12, batch 8200, loss[loss=0.1807, simple_loss=0.276, pruned_loss=0.04265, over 7216.00 frames.], tot_loss[loss=0.204, simple_loss=0.2862, pruned_loss=0.06096, over 1426452.13 frames.], batch size: 21, lr: 4.44e-04 2022-05-27 12:57:27,631 INFO [train.py:842] (0/4) Epoch 12, batch 8250, loss[loss=0.2519, simple_loss=0.3204, pruned_loss=0.09169, over 7213.00 frames.], tot_loss[loss=0.2045, simple_loss=0.2863, pruned_loss=0.06132, over 1425036.14 frames.], batch size: 26, lr: 4.44e-04 2022-05-27 12:58:06,336 INFO [train.py:842] (0/4) Epoch 12, batch 8300, loss[loss=0.2007, simple_loss=0.285, pruned_loss=0.05823, over 7200.00 frames.], tot_loss[loss=0.2033, simple_loss=0.2853, pruned_loss=0.06063, over 1424574.17 frames.], batch size: 22, lr: 4.44e-04 2022-05-27 12:58:44,892 INFO [train.py:842] (0/4) Epoch 12, batch 8350, loss[loss=0.239, simple_loss=0.3073, pruned_loss=0.0854, over 5518.00 frames.], tot_loss[loss=0.2022, simple_loss=0.2842, pruned_loss=0.06006, over 1428393.55 frames.], batch size: 53, lr: 4.43e-04 2022-05-27 12:59:23,812 INFO [train.py:842] (0/4) Epoch 12, batch 8400, loss[loss=0.2191, simple_loss=0.303, pruned_loss=0.06764, over 7304.00 frames.], tot_loss[loss=0.2026, simple_loss=0.2842, pruned_loss=0.06046, over 1429497.98 frames.], batch size: 24, lr: 4.43e-04 2022-05-27 13:00:02,282 INFO [train.py:842] (0/4) Epoch 12, batch 8450, loss[loss=0.2717, simple_loss=0.3389, pruned_loss=0.1022, over 6810.00 frames.], tot_loss[loss=0.2026, simple_loss=0.2844, pruned_loss=0.06045, over 1430193.74 frames.], batch size: 31, lr: 4.43e-04 2022-05-27 13:00:41,078 INFO [train.py:842] (0/4) Epoch 12, batch 8500, loss[loss=0.1803, simple_loss=0.2773, pruned_loss=0.04165, over 7159.00 frames.], tot_loss[loss=0.2028, simple_loss=0.2843, pruned_loss=0.0606, over 1429630.02 frames.], batch size: 19, lr: 4.43e-04 2022-05-27 13:01:19,601 INFO [train.py:842] (0/4) Epoch 12, batch 8550, loss[loss=0.2942, simple_loss=0.3351, pruned_loss=0.1267, over 7145.00 frames.], tot_loss[loss=0.2031, simple_loss=0.2844, pruned_loss=0.06095, over 1428310.99 frames.], batch size: 17, lr: 4.43e-04 2022-05-27 13:01:58,551 INFO [train.py:842] (0/4) Epoch 12, batch 8600, loss[loss=0.2095, simple_loss=0.2814, pruned_loss=0.06876, over 7273.00 frames.], tot_loss[loss=0.2044, simple_loss=0.2856, pruned_loss=0.06158, over 1425592.83 frames.], batch size: 18, lr: 4.43e-04 2022-05-27 13:02:36,975 INFO [train.py:842] (0/4) Epoch 12, batch 8650, loss[loss=0.167, 
simple_loss=0.2455, pruned_loss=0.04429, over 7134.00 frames.], tot_loss[loss=0.2048, simple_loss=0.2862, pruned_loss=0.06169, over 1422280.94 frames.], batch size: 17, lr: 4.43e-04 2022-05-27 13:03:15,856 INFO [train.py:842] (0/4) Epoch 12, batch 8700, loss[loss=0.2618, simple_loss=0.3312, pruned_loss=0.09619, over 7017.00 frames.], tot_loss[loss=0.205, simple_loss=0.2862, pruned_loss=0.06196, over 1422192.82 frames.], batch size: 28, lr: 4.43e-04 2022-05-27 13:03:54,263 INFO [train.py:842] (0/4) Epoch 12, batch 8750, loss[loss=0.2169, simple_loss=0.2898, pruned_loss=0.07197, over 5016.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2851, pruned_loss=0.06122, over 1419231.10 frames.], batch size: 52, lr: 4.43e-04 2022-05-27 13:04:33,561 INFO [train.py:842] (0/4) Epoch 12, batch 8800, loss[loss=0.2515, simple_loss=0.3372, pruned_loss=0.0829, over 7140.00 frames.], tot_loss[loss=0.2045, simple_loss=0.2856, pruned_loss=0.0617, over 1415915.18 frames.], batch size: 26, lr: 4.43e-04 2022-05-27 13:05:12,066 INFO [train.py:842] (0/4) Epoch 12, batch 8850, loss[loss=0.1991, simple_loss=0.285, pruned_loss=0.05667, over 7125.00 frames.], tot_loss[loss=0.2024, simple_loss=0.2834, pruned_loss=0.06066, over 1413916.63 frames.], batch size: 21, lr: 4.42e-04 2022-05-27 13:05:50,820 INFO [train.py:842] (0/4) Epoch 12, batch 8900, loss[loss=0.1994, simple_loss=0.2965, pruned_loss=0.05113, over 7311.00 frames.], tot_loss[loss=0.2019, simple_loss=0.2829, pruned_loss=0.06048, over 1410661.33 frames.], batch size: 24, lr: 4.42e-04 2022-05-27 13:06:29,078 INFO [train.py:842] (0/4) Epoch 12, batch 8950, loss[loss=0.2118, simple_loss=0.297, pruned_loss=0.06325, over 6381.00 frames.], tot_loss[loss=0.2023, simple_loss=0.2827, pruned_loss=0.061, over 1398278.55 frames.], batch size: 38, lr: 4.42e-04 2022-05-27 13:07:07,563 INFO [train.py:842] (0/4) Epoch 12, batch 9000, loss[loss=0.2186, simple_loss=0.2905, pruned_loss=0.07336, over 5077.00 frames.], tot_loss[loss=0.2029, simple_loss=0.2829, pruned_loss=0.06144, over 1387608.93 frames.], batch size: 52, lr: 4.42e-04 2022-05-27 13:07:07,564 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 13:07:16,708 INFO [train.py:871] (0/4) Epoch 12, validation: loss=0.1688, simple_loss=0.2696, pruned_loss=0.03404, over 868885.00 frames. 
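Besides these loss lines, the log records two kinds of saves into streaming_pruned_transducer_stateless4/exp: periodic batch-count checkpoints (the checkpoint-NNNNNN.pt files seen in this stretch of the log, such as checkpoint-104000.pt) and a per-epoch checkpoint written when an epoch finishes, which happens right after this validation pass. A minimal, standalone way to peek into one of those files is sketched below; it assumes nothing about the contents beyond being a torch-serialized dict, and the exact key names are not guaranteed here.

    import torch

    # Standalone sketch: open one of the checkpoints named in this log on CPU and
    # list what it contains. The path comes from the log; the key names are not
    # assumed; whatever the recipe saved is simply printed.
    ckpt = torch.load(
        "streaming_pruned_transducer_stateless4/exp/epoch-12.pt",
        map_location="cpu",
    )
    print(sorted(ckpt.keys()))

    # If a "model" state dict is present, tallying its parameter count is a quick
    # sanity check on what was restored.
    state = ckpt.get("model")
    if state is not None:
        print(sum(v.numel() for v in state.values()))
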
2022-05-27 13:07:54,042 INFO [train.py:842] (0/4) Epoch 12, batch 9050, loss[loss=0.1886, simple_loss=0.2653, pruned_loss=0.05588, over 7462.00 frames.], tot_loss[loss=0.2051, simple_loss=0.2846, pruned_loss=0.06284, over 1363576.21 frames.], batch size: 19, lr: 4.42e-04 2022-05-27 13:08:31,793 INFO [train.py:842] (0/4) Epoch 12, batch 9100, loss[loss=0.2623, simple_loss=0.3253, pruned_loss=0.09963, over 4913.00 frames.], tot_loss[loss=0.2075, simple_loss=0.286, pruned_loss=0.06447, over 1334246.75 frames.], batch size: 53, lr: 4.42e-04 2022-05-27 13:09:08,592 INFO [train.py:842] (0/4) Epoch 12, batch 9150, loss[loss=0.2675, simple_loss=0.3276, pruned_loss=0.1037, over 4763.00 frames.], tot_loss[loss=0.2128, simple_loss=0.2904, pruned_loss=0.06763, over 1260115.93 frames.], batch size: 52, lr: 4.42e-04 2022-05-27 13:09:39,531 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-12.pt 2022-05-27 13:09:59,360 INFO [train.py:842] (0/4) Epoch 13, batch 0, loss[loss=0.2015, simple_loss=0.2932, pruned_loss=0.05492, over 7144.00 frames.], tot_loss[loss=0.2015, simple_loss=0.2932, pruned_loss=0.05492, over 7144.00 frames.], batch size: 20, lr: 4.27e-04 2022-05-27 13:10:37,598 INFO [train.py:842] (0/4) Epoch 13, batch 50, loss[loss=0.2023, simple_loss=0.2868, pruned_loss=0.05892, over 7242.00 frames.], tot_loss[loss=0.2028, simple_loss=0.2849, pruned_loss=0.06033, over 318145.51 frames.], batch size: 20, lr: 4.27e-04 2022-05-27 13:11:15,856 INFO [train.py:842] (0/4) Epoch 13, batch 100, loss[loss=0.1981, simple_loss=0.28, pruned_loss=0.05807, over 7179.00 frames.], tot_loss[loss=0.2005, simple_loss=0.2839, pruned_loss=0.05862, over 564441.68 frames.], batch size: 23, lr: 4.27e-04 2022-05-27 13:11:53,596 INFO [train.py:842] (0/4) Epoch 13, batch 150, loss[loss=0.1906, simple_loss=0.2712, pruned_loss=0.05502, over 7147.00 frames.], tot_loss[loss=0.204, simple_loss=0.2867, pruned_loss=0.06067, over 753289.50 frames.], batch size: 20, lr: 4.27e-04 2022-05-27 13:12:31,958 INFO [train.py:842] (0/4) Epoch 13, batch 200, loss[loss=0.2029, simple_loss=0.2875, pruned_loss=0.05919, over 7132.00 frames.], tot_loss[loss=0.2027, simple_loss=0.2856, pruned_loss=0.05992, over 900164.16 frames.], batch size: 20, lr: 4.27e-04 2022-05-27 13:13:09,995 INFO [train.py:842] (0/4) Epoch 13, batch 250, loss[loss=0.2017, simple_loss=0.2632, pruned_loss=0.07007, over 6834.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2863, pruned_loss=0.06063, over 1014050.04 frames.], batch size: 15, lr: 4.26e-04 2022-05-27 13:13:48,335 INFO [train.py:842] (0/4) Epoch 13, batch 300, loss[loss=0.1874, simple_loss=0.2786, pruned_loss=0.04811, over 7143.00 frames.], tot_loss[loss=0.2027, simple_loss=0.2853, pruned_loss=0.06009, over 1104247.76 frames.], batch size: 20, lr: 4.26e-04 2022-05-27 13:14:26,159 INFO [train.py:842] (0/4) Epoch 13, batch 350, loss[loss=0.215, simple_loss=0.3, pruned_loss=0.06495, over 7017.00 frames.], tot_loss[loss=0.2034, simple_loss=0.2861, pruned_loss=0.06035, over 1175896.95 frames.], batch size: 28, lr: 4.26e-04 2022-05-27 13:15:04,461 INFO [train.py:842] (0/4) Epoch 13, batch 400, loss[loss=0.1655, simple_loss=0.2545, pruned_loss=0.03827, over 7357.00 frames.], tot_loss[loss=0.2026, simple_loss=0.2855, pruned_loss=0.05979, over 1233462.22 frames.], batch size: 19, lr: 4.26e-04 2022-05-27 13:15:42,480 INFO [train.py:842] (0/4) Epoch 13, batch 450, loss[loss=0.1864, simple_loss=0.2905, pruned_loss=0.04115, over 7309.00 frames.], tot_loss[loss=0.2025, 
simple_loss=0.2852, pruned_loss=0.05992, over 1277097.12 frames.], batch size: 21, lr: 4.26e-04 2022-05-27 13:16:21,020 INFO [train.py:842] (0/4) Epoch 13, batch 500, loss[loss=0.216, simple_loss=0.3059, pruned_loss=0.06304, over 6292.00 frames.], tot_loss[loss=0.1996, simple_loss=0.2824, pruned_loss=0.05837, over 1309956.84 frames.], batch size: 37, lr: 4.26e-04 2022-05-27 13:16:59,156 INFO [train.py:842] (0/4) Epoch 13, batch 550, loss[loss=0.1901, simple_loss=0.2845, pruned_loss=0.04783, over 7380.00 frames.], tot_loss[loss=0.2005, simple_loss=0.2832, pruned_loss=0.05889, over 1332078.28 frames.], batch size: 23, lr: 4.26e-04 2022-05-27 13:17:37,519 INFO [train.py:842] (0/4) Epoch 13, batch 600, loss[loss=0.1699, simple_loss=0.2525, pruned_loss=0.04369, over 6814.00 frames.], tot_loss[loss=0.2005, simple_loss=0.2828, pruned_loss=0.05905, over 1345116.63 frames.], batch size: 15, lr: 4.26e-04 2022-05-27 13:18:15,526 INFO [train.py:842] (0/4) Epoch 13, batch 650, loss[loss=0.1745, simple_loss=0.2565, pruned_loss=0.04629, over 7261.00 frames.], tot_loss[loss=0.1996, simple_loss=0.2822, pruned_loss=0.05851, over 1364125.73 frames.], batch size: 18, lr: 4.26e-04 2022-05-27 13:19:03,251 INFO [train.py:842] (0/4) Epoch 13, batch 700, loss[loss=0.2042, simple_loss=0.2782, pruned_loss=0.06515, over 7251.00 frames.], tot_loss[loss=0.2003, simple_loss=0.283, pruned_loss=0.0588, over 1381936.21 frames.], batch size: 16, lr: 4.26e-04 2022-05-27 13:19:50,632 INFO [train.py:842] (0/4) Epoch 13, batch 750, loss[loss=0.2055, simple_loss=0.2955, pruned_loss=0.05774, over 7191.00 frames.], tot_loss[loss=0.2013, simple_loss=0.284, pruned_loss=0.05923, over 1394949.27 frames.], batch size: 23, lr: 4.25e-04 2022-05-27 13:20:38,506 INFO [train.py:842] (0/4) Epoch 13, batch 800, loss[loss=0.2105, simple_loss=0.2939, pruned_loss=0.06355, over 7190.00 frames.], tot_loss[loss=0.2017, simple_loss=0.2842, pruned_loss=0.05956, over 1404427.73 frames.], batch size: 22, lr: 4.25e-04 2022-05-27 13:21:16,368 INFO [train.py:842] (0/4) Epoch 13, batch 850, loss[loss=0.1827, simple_loss=0.2641, pruned_loss=0.05062, over 7149.00 frames.], tot_loss[loss=0.2006, simple_loss=0.2833, pruned_loss=0.05893, over 1411485.23 frames.], batch size: 17, lr: 4.25e-04 2022-05-27 13:21:54,755 INFO [train.py:842] (0/4) Epoch 13, batch 900, loss[loss=0.2072, simple_loss=0.2855, pruned_loss=0.06445, over 7335.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2828, pruned_loss=0.0589, over 1414670.99 frames.], batch size: 20, lr: 4.25e-04 2022-05-27 13:22:32,526 INFO [train.py:842] (0/4) Epoch 13, batch 950, loss[loss=0.2025, simple_loss=0.2929, pruned_loss=0.05605, over 7151.00 frames.], tot_loss[loss=0.2028, simple_loss=0.2846, pruned_loss=0.06054, over 1414516.47 frames.], batch size: 26, lr: 4.25e-04 2022-05-27 13:23:10,783 INFO [train.py:842] (0/4) Epoch 13, batch 1000, loss[loss=0.2254, simple_loss=0.3009, pruned_loss=0.07492, over 6362.00 frames.], tot_loss[loss=0.2022, simple_loss=0.284, pruned_loss=0.06014, over 1414473.76 frames.], batch size: 38, lr: 4.25e-04 2022-05-27 13:23:48,820 INFO [train.py:842] (0/4) Epoch 13, batch 1050, loss[loss=0.1942, simple_loss=0.2727, pruned_loss=0.05788, over 7250.00 frames.], tot_loss[loss=0.2027, simple_loss=0.2844, pruned_loss=0.06046, over 1416521.90 frames.], batch size: 19, lr: 4.25e-04 2022-05-27 13:24:27,307 INFO [train.py:842] (0/4) Epoch 13, batch 1100, loss[loss=0.1891, simple_loss=0.2784, pruned_loss=0.0499, over 7377.00 frames.], tot_loss[loss=0.2006, simple_loss=0.2834, 
pruned_loss=0.05887, over 1422720.57 frames.], batch size: 23, lr: 4.25e-04 2022-05-27 13:25:05,353 INFO [train.py:842] (0/4) Epoch 13, batch 1150, loss[loss=0.1939, simple_loss=0.2943, pruned_loss=0.04679, over 7327.00 frames.], tot_loss[loss=0.2018, simple_loss=0.2844, pruned_loss=0.05956, over 1425087.09 frames.], batch size: 20, lr: 4.25e-04 2022-05-27 13:25:43,695 INFO [train.py:842] (0/4) Epoch 13, batch 1200, loss[loss=0.2603, simple_loss=0.3369, pruned_loss=0.09186, over 4906.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2835, pruned_loss=0.05929, over 1421285.19 frames.], batch size: 52, lr: 4.25e-04 2022-05-27 13:26:21,645 INFO [train.py:842] (0/4) Epoch 13, batch 1250, loss[loss=0.1947, simple_loss=0.281, pruned_loss=0.05414, over 7166.00 frames.], tot_loss[loss=0.2008, simple_loss=0.2833, pruned_loss=0.0591, over 1418138.71 frames.], batch size: 19, lr: 4.25e-04 2022-05-27 13:27:00,085 INFO [train.py:842] (0/4) Epoch 13, batch 1300, loss[loss=0.1621, simple_loss=0.2397, pruned_loss=0.04228, over 7072.00 frames.], tot_loss[loss=0.1998, simple_loss=0.2825, pruned_loss=0.05858, over 1418771.42 frames.], batch size: 18, lr: 4.24e-04 2022-05-27 13:27:37,824 INFO [train.py:842] (0/4) Epoch 13, batch 1350, loss[loss=0.2499, simple_loss=0.3142, pruned_loss=0.09279, over 5271.00 frames.], tot_loss[loss=0.2021, simple_loss=0.2846, pruned_loss=0.05979, over 1417036.31 frames.], batch size: 53, lr: 4.24e-04 2022-05-27 13:28:15,968 INFO [train.py:842] (0/4) Epoch 13, batch 1400, loss[loss=0.2266, simple_loss=0.3074, pruned_loss=0.07292, over 7323.00 frames.], tot_loss[loss=0.2016, simple_loss=0.2842, pruned_loss=0.05947, over 1416131.60 frames.], batch size: 25, lr: 4.24e-04 2022-05-27 13:28:53,714 INFO [train.py:842] (0/4) Epoch 13, batch 1450, loss[loss=0.2088, simple_loss=0.2929, pruned_loss=0.06235, over 7308.00 frames.], tot_loss[loss=0.2007, simple_loss=0.2836, pruned_loss=0.05886, over 1413752.95 frames.], batch size: 21, lr: 4.24e-04 2022-05-27 13:29:31,932 INFO [train.py:842] (0/4) Epoch 13, batch 1500, loss[loss=0.196, simple_loss=0.2788, pruned_loss=0.05658, over 7215.00 frames.], tot_loss[loss=0.2, simple_loss=0.2831, pruned_loss=0.05848, over 1417418.89 frames.], batch size: 23, lr: 4.24e-04 2022-05-27 13:30:10,027 INFO [train.py:842] (0/4) Epoch 13, batch 1550, loss[loss=0.2156, simple_loss=0.2997, pruned_loss=0.06572, over 7036.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2841, pruned_loss=0.05904, over 1418877.05 frames.], batch size: 28, lr: 4.24e-04 2022-05-27 13:30:48,256 INFO [train.py:842] (0/4) Epoch 13, batch 1600, loss[loss=0.1914, simple_loss=0.2843, pruned_loss=0.04925, over 7303.00 frames.], tot_loss[loss=0.2002, simple_loss=0.2836, pruned_loss=0.05842, over 1418430.67 frames.], batch size: 25, lr: 4.24e-04 2022-05-27 13:31:26,204 INFO [train.py:842] (0/4) Epoch 13, batch 1650, loss[loss=0.2244, simple_loss=0.3137, pruned_loss=0.06751, over 7302.00 frames.], tot_loss[loss=0.1996, simple_loss=0.2829, pruned_loss=0.05816, over 1421435.34 frames.], batch size: 24, lr: 4.24e-04 2022-05-27 13:32:00,593 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-112000.pt 2022-05-27 13:32:07,090 INFO [train.py:842] (0/4) Epoch 13, batch 1700, loss[loss=0.1508, simple_loss=0.2377, pruned_loss=0.03194, over 7120.00 frames.], tot_loss[loss=0.1994, simple_loss=0.2826, pruned_loss=0.05813, over 1416828.65 frames.], batch size: 17, lr: 4.24e-04 2022-05-27 13:32:45,284 INFO [train.py:842] (0/4) Epoch 13, batch 1750, 
loss[loss=0.219, simple_loss=0.2946, pruned_loss=0.07164, over 7217.00 frames.], tot_loss[loss=0.1982, simple_loss=0.2814, pruned_loss=0.05753, over 1420809.82 frames.], batch size: 26, lr: 4.24e-04 2022-05-27 13:33:23,697 INFO [train.py:842] (0/4) Epoch 13, batch 1800, loss[loss=0.1702, simple_loss=0.2559, pruned_loss=0.04221, over 6993.00 frames.], tot_loss[loss=0.1966, simple_loss=0.2802, pruned_loss=0.05645, over 1426294.05 frames.], batch size: 16, lr: 4.23e-04 2022-05-27 13:34:01,769 INFO [train.py:842] (0/4) Epoch 13, batch 1850, loss[loss=0.182, simple_loss=0.2686, pruned_loss=0.04767, over 7335.00 frames.], tot_loss[loss=0.1987, simple_loss=0.2817, pruned_loss=0.0578, over 1427820.08 frames.], batch size: 22, lr: 4.23e-04 2022-05-27 13:34:40,131 INFO [train.py:842] (0/4) Epoch 13, batch 1900, loss[loss=0.2305, simple_loss=0.311, pruned_loss=0.07501, over 7241.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2832, pruned_loss=0.05869, over 1428616.22 frames.], batch size: 20, lr: 4.23e-04 2022-05-27 13:35:17,974 INFO [train.py:842] (0/4) Epoch 13, batch 1950, loss[loss=0.2154, simple_loss=0.2864, pruned_loss=0.07219, over 7301.00 frames.], tot_loss[loss=0.2015, simple_loss=0.2839, pruned_loss=0.05958, over 1428745.33 frames.], batch size: 17, lr: 4.23e-04 2022-05-27 13:35:56,454 INFO [train.py:842] (0/4) Epoch 13, batch 2000, loss[loss=0.1653, simple_loss=0.2422, pruned_loss=0.04416, over 6990.00 frames.], tot_loss[loss=0.2009, simple_loss=0.2831, pruned_loss=0.05933, over 1427606.81 frames.], batch size: 16, lr: 4.23e-04 2022-05-27 13:36:34,501 INFO [train.py:842] (0/4) Epoch 13, batch 2050, loss[loss=0.145, simple_loss=0.2361, pruned_loss=0.02695, over 7156.00 frames.], tot_loss[loss=0.2, simple_loss=0.2824, pruned_loss=0.0588, over 1420865.49 frames.], batch size: 19, lr: 4.23e-04 2022-05-27 13:37:12,810 INFO [train.py:842] (0/4) Epoch 13, batch 2100, loss[loss=0.169, simple_loss=0.2486, pruned_loss=0.0447, over 7153.00 frames.], tot_loss[loss=0.1998, simple_loss=0.2824, pruned_loss=0.05853, over 1420638.10 frames.], batch size: 19, lr: 4.23e-04 2022-05-27 13:37:50,696 INFO [train.py:842] (0/4) Epoch 13, batch 2150, loss[loss=0.2382, simple_loss=0.299, pruned_loss=0.08867, over 7265.00 frames.], tot_loss[loss=0.2008, simple_loss=0.2834, pruned_loss=0.05916, over 1421396.74 frames.], batch size: 18, lr: 4.23e-04 2022-05-27 13:38:28,988 INFO [train.py:842] (0/4) Epoch 13, batch 2200, loss[loss=0.1953, simple_loss=0.294, pruned_loss=0.04833, over 7336.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2825, pruned_loss=0.05842, over 1422182.96 frames.], batch size: 20, lr: 4.23e-04 2022-05-27 13:39:06,919 INFO [train.py:842] (0/4) Epoch 13, batch 2250, loss[loss=0.1952, simple_loss=0.2816, pruned_loss=0.05435, over 7156.00 frames.], tot_loss[loss=0.199, simple_loss=0.2818, pruned_loss=0.0581, over 1420505.68 frames.], batch size: 28, lr: 4.23e-04 2022-05-27 13:39:45,191 INFO [train.py:842] (0/4) Epoch 13, batch 2300, loss[loss=0.1725, simple_loss=0.254, pruned_loss=0.04548, over 7113.00 frames.], tot_loss[loss=0.1992, simple_loss=0.282, pruned_loss=0.05822, over 1423413.56 frames.], batch size: 21, lr: 4.23e-04 2022-05-27 13:40:23,243 INFO [train.py:842] (0/4) Epoch 13, batch 2350, loss[loss=0.2074, simple_loss=0.2845, pruned_loss=0.06518, over 7153.00 frames.], tot_loss[loss=0.1995, simple_loss=0.2822, pruned_loss=0.05837, over 1423874.81 frames.], batch size: 19, lr: 4.22e-04 2022-05-27 13:41:01,578 INFO [train.py:842] (0/4) Epoch 13, batch 2400, loss[loss=0.1857, 
simple_loss=0.2633, pruned_loss=0.054, over 7144.00 frames.], tot_loss[loss=0.2, simple_loss=0.2825, pruned_loss=0.05871, over 1425283.21 frames.], batch size: 17, lr: 4.22e-04 2022-05-27 13:41:39,474 INFO [train.py:842] (0/4) Epoch 13, batch 2450, loss[loss=0.2897, simple_loss=0.3501, pruned_loss=0.1146, over 7210.00 frames.], tot_loss[loss=0.2013, simple_loss=0.2837, pruned_loss=0.05938, over 1424177.52 frames.], batch size: 21, lr: 4.22e-04 2022-05-27 13:42:17,597 INFO [train.py:842] (0/4) Epoch 13, batch 2500, loss[loss=0.1843, simple_loss=0.2596, pruned_loss=0.05451, over 7265.00 frames.], tot_loss[loss=0.2005, simple_loss=0.2835, pruned_loss=0.05877, over 1425604.56 frames.], batch size: 18, lr: 4.22e-04 2022-05-27 13:42:55,552 INFO [train.py:842] (0/4) Epoch 13, batch 2550, loss[loss=0.1911, simple_loss=0.272, pruned_loss=0.05516, over 7259.00 frames.], tot_loss[loss=0.1993, simple_loss=0.2825, pruned_loss=0.05804, over 1427973.13 frames.], batch size: 16, lr: 4.22e-04 2022-05-27 13:43:33,891 INFO [train.py:842] (0/4) Epoch 13, batch 2600, loss[loss=0.1793, simple_loss=0.2638, pruned_loss=0.04736, over 7251.00 frames.], tot_loss[loss=0.2002, simple_loss=0.2829, pruned_loss=0.0588, over 1424999.41 frames.], batch size: 16, lr: 4.22e-04 2022-05-27 13:44:11,771 INFO [train.py:842] (0/4) Epoch 13, batch 2650, loss[loss=0.2127, simple_loss=0.2843, pruned_loss=0.07053, over 6995.00 frames.], tot_loss[loss=0.2012, simple_loss=0.2839, pruned_loss=0.0592, over 1422852.74 frames.], batch size: 16, lr: 4.22e-04 2022-05-27 13:44:50,112 INFO [train.py:842] (0/4) Epoch 13, batch 2700, loss[loss=0.1708, simple_loss=0.2499, pruned_loss=0.04584, over 7006.00 frames.], tot_loss[loss=0.2005, simple_loss=0.2832, pruned_loss=0.05892, over 1423862.12 frames.], batch size: 16, lr: 4.22e-04 2022-05-27 13:45:27,938 INFO [train.py:842] (0/4) Epoch 13, batch 2750, loss[loss=0.1881, simple_loss=0.2769, pruned_loss=0.04965, over 7114.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2831, pruned_loss=0.05877, over 1420542.05 frames.], batch size: 21, lr: 4.22e-04 2022-05-27 13:46:05,888 INFO [train.py:842] (0/4) Epoch 13, batch 2800, loss[loss=0.1822, simple_loss=0.256, pruned_loss=0.05417, over 7161.00 frames.], tot_loss[loss=0.2014, simple_loss=0.2842, pruned_loss=0.05932, over 1420722.91 frames.], batch size: 17, lr: 4.22e-04 2022-05-27 13:46:44,097 INFO [train.py:842] (0/4) Epoch 13, batch 2850, loss[loss=0.1812, simple_loss=0.2696, pruned_loss=0.04641, over 7368.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2836, pruned_loss=0.05854, over 1426431.00 frames.], batch size: 23, lr: 4.22e-04 2022-05-27 13:47:22,064 INFO [train.py:842] (0/4) Epoch 13, batch 2900, loss[loss=0.19, simple_loss=0.2765, pruned_loss=0.05173, over 7355.00 frames.], tot_loss[loss=0.2012, simple_loss=0.2843, pruned_loss=0.05902, over 1424288.10 frames.], batch size: 19, lr: 4.21e-04 2022-05-27 13:48:00,128 INFO [train.py:842] (0/4) Epoch 13, batch 2950, loss[loss=0.2197, simple_loss=0.3049, pruned_loss=0.06728, over 7114.00 frames.], tot_loss[loss=0.1986, simple_loss=0.2822, pruned_loss=0.05755, over 1426243.44 frames.], batch size: 21, lr: 4.21e-04 2022-05-27 13:48:38,457 INFO [train.py:842] (0/4) Epoch 13, batch 3000, loss[loss=0.1819, simple_loss=0.2616, pruned_loss=0.05112, over 7275.00 frames.], tot_loss[loss=0.2006, simple_loss=0.2835, pruned_loss=0.05884, over 1427059.18 frames.], batch size: 17, lr: 4.21e-04 2022-05-27 13:48:38,458 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 13:48:47,488 INFO [train.py:871] 
(0/4) Epoch 13, validation: loss=0.1706, simple_loss=0.2707, pruned_loss=0.03524, over 868885.00 frames. 2022-05-27 13:49:25,619 INFO [train.py:842] (0/4) Epoch 13, batch 3050, loss[loss=0.1762, simple_loss=0.2529, pruned_loss=0.04976, over 7144.00 frames.], tot_loss[loss=0.1994, simple_loss=0.2826, pruned_loss=0.05813, over 1427829.43 frames.], batch size: 17, lr: 4.21e-04 2022-05-27 13:50:03,999 INFO [train.py:842] (0/4) Epoch 13, batch 3100, loss[loss=0.1969, simple_loss=0.2946, pruned_loss=0.04965, over 7124.00 frames.], tot_loss[loss=0.1995, simple_loss=0.2826, pruned_loss=0.0582, over 1426726.76 frames.], batch size: 21, lr: 4.21e-04 2022-05-27 13:50:41,763 INFO [train.py:842] (0/4) Epoch 13, batch 3150, loss[loss=0.1975, simple_loss=0.284, pruned_loss=0.05552, over 7276.00 frames.], tot_loss[loss=0.2038, simple_loss=0.2862, pruned_loss=0.06067, over 1424383.48 frames.], batch size: 25, lr: 4.21e-04 2022-05-27 13:51:19,882 INFO [train.py:842] (0/4) Epoch 13, batch 3200, loss[loss=0.1997, simple_loss=0.2828, pruned_loss=0.05835, over 4820.00 frames.], tot_loss[loss=0.2042, simple_loss=0.2867, pruned_loss=0.06085, over 1425506.20 frames.], batch size: 52, lr: 4.21e-04 2022-05-27 13:51:58,060 INFO [train.py:842] (0/4) Epoch 13, batch 3250, loss[loss=0.1426, simple_loss=0.2297, pruned_loss=0.0277, over 7275.00 frames.], tot_loss[loss=0.2026, simple_loss=0.2849, pruned_loss=0.06012, over 1428450.46 frames.], batch size: 17, lr: 4.21e-04 2022-05-27 13:52:36,360 INFO [train.py:842] (0/4) Epoch 13, batch 3300, loss[loss=0.1815, simple_loss=0.2662, pruned_loss=0.04842, over 7331.00 frames.], tot_loss[loss=0.2021, simple_loss=0.2847, pruned_loss=0.05975, over 1427412.10 frames.], batch size: 20, lr: 4.21e-04 2022-05-27 13:53:14,260 INFO [train.py:842] (0/4) Epoch 13, batch 3350, loss[loss=0.1695, simple_loss=0.244, pruned_loss=0.04748, over 6993.00 frames.], tot_loss[loss=0.2016, simple_loss=0.2845, pruned_loss=0.05934, over 1420649.69 frames.], batch size: 16, lr: 4.21e-04 2022-05-27 13:53:52,550 INFO [train.py:842] (0/4) Epoch 13, batch 3400, loss[loss=0.2498, simple_loss=0.3362, pruned_loss=0.08171, over 7392.00 frames.], tot_loss[loss=0.2019, simple_loss=0.285, pruned_loss=0.05942, over 1424202.71 frames.], batch size: 23, lr: 4.20e-04 2022-05-27 13:54:30,240 INFO [train.py:842] (0/4) Epoch 13, batch 3450, loss[loss=0.1887, simple_loss=0.2655, pruned_loss=0.05595, over 7415.00 frames.], tot_loss[loss=0.2037, simple_loss=0.2864, pruned_loss=0.06052, over 1414324.09 frames.], batch size: 18, lr: 4.20e-04 2022-05-27 13:55:08,614 INFO [train.py:842] (0/4) Epoch 13, batch 3500, loss[loss=0.2088, simple_loss=0.3018, pruned_loss=0.05789, over 6677.00 frames.], tot_loss[loss=0.2032, simple_loss=0.2859, pruned_loss=0.06026, over 1415814.08 frames.], batch size: 31, lr: 4.20e-04 2022-05-27 13:55:47,221 INFO [train.py:842] (0/4) Epoch 13, batch 3550, loss[loss=0.155, simple_loss=0.2374, pruned_loss=0.03631, over 6992.00 frames.], tot_loss[loss=0.2044, simple_loss=0.2866, pruned_loss=0.06105, over 1421205.16 frames.], batch size: 16, lr: 4.20e-04 2022-05-27 13:56:25,868 INFO [train.py:842] (0/4) Epoch 13, batch 3600, loss[loss=0.1665, simple_loss=0.2461, pruned_loss=0.04342, over 7271.00 frames.], tot_loss[loss=0.2056, simple_loss=0.2877, pruned_loss=0.06169, over 1421351.73 frames.], batch size: 18, lr: 4.20e-04 2022-05-27 13:57:04,081 INFO [train.py:842] (0/4) Epoch 13, batch 3650, loss[loss=0.2193, simple_loss=0.3034, pruned_loss=0.06764, over 7417.00 frames.], tot_loss[loss=0.2045, 
simple_loss=0.2866, pruned_loss=0.06114, over 1424294.63 frames.], batch size: 21, lr: 4.20e-04 2022-05-27 13:57:43,032 INFO [train.py:842] (0/4) Epoch 13, batch 3700, loss[loss=0.1907, simple_loss=0.2698, pruned_loss=0.05583, over 7263.00 frames.], tot_loss[loss=0.2046, simple_loss=0.2863, pruned_loss=0.06148, over 1425127.87 frames.], batch size: 19, lr: 4.20e-04 2022-05-27 13:58:21,603 INFO [train.py:842] (0/4) Epoch 13, batch 3750, loss[loss=0.1991, simple_loss=0.2874, pruned_loss=0.05544, over 7414.00 frames.], tot_loss[loss=0.2034, simple_loss=0.2855, pruned_loss=0.06067, over 1425452.38 frames.], batch size: 21, lr: 4.20e-04 2022-05-27 13:59:00,728 INFO [train.py:842] (0/4) Epoch 13, batch 3800, loss[loss=0.2604, simple_loss=0.3377, pruned_loss=0.09158, over 7023.00 frames.], tot_loss[loss=0.204, simple_loss=0.2858, pruned_loss=0.0611, over 1429374.32 frames.], batch size: 28, lr: 4.20e-04 2022-05-27 13:59:39,266 INFO [train.py:842] (0/4) Epoch 13, batch 3850, loss[loss=0.175, simple_loss=0.2527, pruned_loss=0.04867, over 7200.00 frames.], tot_loss[loss=0.2048, simple_loss=0.2868, pruned_loss=0.0614, over 1426878.42 frames.], batch size: 22, lr: 4.20e-04 2022-05-27 14:00:18,601 INFO [train.py:842] (0/4) Epoch 13, batch 3900, loss[loss=0.2023, simple_loss=0.2916, pruned_loss=0.05652, over 7155.00 frames.], tot_loss[loss=0.2017, simple_loss=0.2842, pruned_loss=0.05961, over 1426122.59 frames.], batch size: 28, lr: 4.20e-04 2022-05-27 14:00:57,293 INFO [train.py:842] (0/4) Epoch 13, batch 3950, loss[loss=0.1882, simple_loss=0.2481, pruned_loss=0.06408, over 7181.00 frames.], tot_loss[loss=0.2021, simple_loss=0.2848, pruned_loss=0.05965, over 1426296.32 frames.], batch size: 16, lr: 4.19e-04 2022-05-27 14:01:36,257 INFO [train.py:842] (0/4) Epoch 13, batch 4000, loss[loss=0.2115, simple_loss=0.2941, pruned_loss=0.06442, over 7058.00 frames.], tot_loss[loss=0.202, simple_loss=0.2843, pruned_loss=0.05979, over 1425085.38 frames.], batch size: 28, lr: 4.19e-04 2022-05-27 14:02:15,238 INFO [train.py:842] (0/4) Epoch 13, batch 4050, loss[loss=0.2517, simple_loss=0.3301, pruned_loss=0.08665, over 7201.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2836, pruned_loss=0.05931, over 1429199.38 frames.], batch size: 22, lr: 4.19e-04 2022-05-27 14:02:54,506 INFO [train.py:842] (0/4) Epoch 13, batch 4100, loss[loss=0.1852, simple_loss=0.2669, pruned_loss=0.05178, over 7166.00 frames.], tot_loss[loss=0.2001, simple_loss=0.2829, pruned_loss=0.0587, over 1430337.53 frames.], batch size: 19, lr: 4.19e-04 2022-05-27 14:03:33,683 INFO [train.py:842] (0/4) Epoch 13, batch 4150, loss[loss=0.1752, simple_loss=0.2491, pruned_loss=0.05068, over 6993.00 frames.], tot_loss[loss=0.2006, simple_loss=0.283, pruned_loss=0.05912, over 1428624.94 frames.], batch size: 16, lr: 4.19e-04 2022-05-27 14:04:12,647 INFO [train.py:842] (0/4) Epoch 13, batch 4200, loss[loss=0.2111, simple_loss=0.2919, pruned_loss=0.06519, over 6441.00 frames.], tot_loss[loss=0.2015, simple_loss=0.2835, pruned_loss=0.05973, over 1423177.40 frames.], batch size: 37, lr: 4.19e-04 2022-05-27 14:04:51,217 INFO [train.py:842] (0/4) Epoch 13, batch 4250, loss[loss=0.2276, simple_loss=0.2984, pruned_loss=0.0784, over 7435.00 frames.], tot_loss[loss=0.2012, simple_loss=0.2835, pruned_loss=0.05947, over 1425641.45 frames.], batch size: 20, lr: 4.19e-04 2022-05-27 14:05:30,463 INFO [train.py:842] (0/4) Epoch 13, batch 4300, loss[loss=0.171, simple_loss=0.2516, pruned_loss=0.0452, over 6751.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2832, 
pruned_loss=0.05952, over 1422449.43 frames.], batch size: 15, lr: 4.19e-04 2022-05-27 14:06:09,107 INFO [train.py:842] (0/4) Epoch 13, batch 4350, loss[loss=0.2499, simple_loss=0.3177, pruned_loss=0.0911, over 5195.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2825, pruned_loss=0.05901, over 1423695.10 frames.], batch size: 52, lr: 4.19e-04 2022-05-27 14:06:48,110 INFO [train.py:842] (0/4) Epoch 13, batch 4400, loss[loss=0.1763, simple_loss=0.2521, pruned_loss=0.05026, over 7153.00 frames.], tot_loss[loss=0.2008, simple_loss=0.2827, pruned_loss=0.0595, over 1423555.83 frames.], batch size: 17, lr: 4.19e-04 2022-05-27 14:07:27,258 INFO [train.py:842] (0/4) Epoch 13, batch 4450, loss[loss=0.2079, simple_loss=0.2728, pruned_loss=0.07149, over 7271.00 frames.], tot_loss[loss=0.2002, simple_loss=0.2821, pruned_loss=0.05915, over 1428064.75 frames.], batch size: 17, lr: 4.19e-04 2022-05-27 14:08:06,600 INFO [train.py:842] (0/4) Epoch 13, batch 4500, loss[loss=0.1918, simple_loss=0.2885, pruned_loss=0.04752, over 7227.00 frames.], tot_loss[loss=0.199, simple_loss=0.2814, pruned_loss=0.05833, over 1427100.09 frames.], batch size: 20, lr: 4.18e-04 2022-05-27 14:08:45,554 INFO [train.py:842] (0/4) Epoch 13, batch 4550, loss[loss=0.2112, simple_loss=0.3031, pruned_loss=0.05962, over 7115.00 frames.], tot_loss[loss=0.1991, simple_loss=0.2818, pruned_loss=0.05826, over 1426345.84 frames.], batch size: 28, lr: 4.18e-04 2022-05-27 14:09:25,122 INFO [train.py:842] (0/4) Epoch 13, batch 4600, loss[loss=0.2113, simple_loss=0.2939, pruned_loss=0.06436, over 7150.00 frames.], tot_loss[loss=0.1989, simple_loss=0.2813, pruned_loss=0.0582, over 1423672.59 frames.], batch size: 20, lr: 4.18e-04 2022-05-27 14:10:04,089 INFO [train.py:842] (0/4) Epoch 13, batch 4650, loss[loss=0.2368, simple_loss=0.2966, pruned_loss=0.08846, over 7052.00 frames.], tot_loss[loss=0.1988, simple_loss=0.2814, pruned_loss=0.05808, over 1421828.67 frames.], batch size: 18, lr: 4.18e-04 2022-05-27 14:10:43,261 INFO [train.py:842] (0/4) Epoch 13, batch 4700, loss[loss=0.2032, simple_loss=0.2917, pruned_loss=0.0573, over 6882.00 frames.], tot_loss[loss=0.1986, simple_loss=0.2817, pruned_loss=0.05779, over 1426831.70 frames.], batch size: 31, lr: 4.18e-04 2022-05-27 14:11:22,087 INFO [train.py:842] (0/4) Epoch 13, batch 4750, loss[loss=0.226, simple_loss=0.2991, pruned_loss=0.07639, over 7199.00 frames.], tot_loss[loss=0.2007, simple_loss=0.2836, pruned_loss=0.05886, over 1423951.81 frames.], batch size: 22, lr: 4.18e-04 2022-05-27 14:12:01,221 INFO [train.py:842] (0/4) Epoch 13, batch 4800, loss[loss=0.215, simple_loss=0.2998, pruned_loss=0.06514, over 7181.00 frames.], tot_loss[loss=0.2017, simple_loss=0.2845, pruned_loss=0.05941, over 1419166.49 frames.], batch size: 26, lr: 4.18e-04 2022-05-27 14:12:40,051 INFO [train.py:842] (0/4) Epoch 13, batch 4850, loss[loss=0.25, simple_loss=0.3228, pruned_loss=0.08857, over 7150.00 frames.], tot_loss[loss=0.2018, simple_loss=0.2843, pruned_loss=0.0597, over 1412985.51 frames.], batch size: 20, lr: 4.18e-04 2022-05-27 14:13:19,442 INFO [train.py:842] (0/4) Epoch 13, batch 4900, loss[loss=0.1861, simple_loss=0.2559, pruned_loss=0.05813, over 7268.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2834, pruned_loss=0.05935, over 1411924.99 frames.], batch size: 18, lr: 4.18e-04 2022-05-27 14:13:58,358 INFO [train.py:842] (0/4) Epoch 13, batch 4950, loss[loss=0.1879, simple_loss=0.2854, pruned_loss=0.0452, over 7227.00 frames.], tot_loss[loss=0.2, simple_loss=0.2827, pruned_loss=0.05868, over 
1408554.03 frames.], batch size: 20, lr: 4.18e-04 2022-05-27 14:14:37,845 INFO [train.py:842] (0/4) Epoch 13, batch 5000, loss[loss=0.1834, simple_loss=0.2785, pruned_loss=0.04419, over 7187.00 frames.], tot_loss[loss=0.2001, simple_loss=0.2827, pruned_loss=0.05876, over 1410511.52 frames.], batch size: 23, lr: 4.18e-04 2022-05-27 14:15:16,554 INFO [train.py:842] (0/4) Epoch 13, batch 5050, loss[loss=0.1955, simple_loss=0.2663, pruned_loss=0.06235, over 7285.00 frames.], tot_loss[loss=0.2013, simple_loss=0.2836, pruned_loss=0.05946, over 1412084.87 frames.], batch size: 17, lr: 4.17e-04 2022-05-27 14:15:55,749 INFO [train.py:842] (0/4) Epoch 13, batch 5100, loss[loss=0.1937, simple_loss=0.2715, pruned_loss=0.05793, over 7264.00 frames.], tot_loss[loss=0.2026, simple_loss=0.285, pruned_loss=0.06005, over 1415544.24 frames.], batch size: 19, lr: 4.17e-04 2022-05-27 14:16:34,507 INFO [train.py:842] (0/4) Epoch 13, batch 5150, loss[loss=0.2143, simple_loss=0.3053, pruned_loss=0.06162, over 7287.00 frames.], tot_loss[loss=0.203, simple_loss=0.2858, pruned_loss=0.06008, over 1416370.46 frames.], batch size: 24, lr: 4.17e-04 2022-05-27 14:17:13,810 INFO [train.py:842] (0/4) Epoch 13, batch 5200, loss[loss=0.2041, simple_loss=0.2933, pruned_loss=0.0575, over 7050.00 frames.], tot_loss[loss=0.2029, simple_loss=0.2855, pruned_loss=0.06017, over 1417523.08 frames.], batch size: 28, lr: 4.17e-04 2022-05-27 14:17:52,706 INFO [train.py:842] (0/4) Epoch 13, batch 5250, loss[loss=0.1744, simple_loss=0.2711, pruned_loss=0.03883, over 7180.00 frames.], tot_loss[loss=0.2022, simple_loss=0.2852, pruned_loss=0.05961, over 1419611.66 frames.], batch size: 23, lr: 4.17e-04 2022-05-27 14:18:31,892 INFO [train.py:842] (0/4) Epoch 13, batch 5300, loss[loss=0.1854, simple_loss=0.2779, pruned_loss=0.04648, over 7234.00 frames.], tot_loss[loss=0.2021, simple_loss=0.2855, pruned_loss=0.05937, over 1424386.28 frames.], batch size: 20, lr: 4.17e-04 2022-05-27 14:19:11,013 INFO [train.py:842] (0/4) Epoch 13, batch 5350, loss[loss=0.2002, simple_loss=0.2554, pruned_loss=0.07246, over 7142.00 frames.], tot_loss[loss=0.2007, simple_loss=0.2837, pruned_loss=0.05879, over 1430082.41 frames.], batch size: 17, lr: 4.17e-04 2022-05-27 14:19:50,000 INFO [train.py:842] (0/4) Epoch 13, batch 5400, loss[loss=0.189, simple_loss=0.2558, pruned_loss=0.06104, over 7135.00 frames.], tot_loss[loss=0.2014, simple_loss=0.2841, pruned_loss=0.05934, over 1428991.19 frames.], batch size: 17, lr: 4.17e-04 2022-05-27 14:20:28,968 INFO [train.py:842] (0/4) Epoch 13, batch 5450, loss[loss=0.1973, simple_loss=0.2799, pruned_loss=0.05736, over 6764.00 frames.], tot_loss[loss=0.2024, simple_loss=0.2848, pruned_loss=0.05998, over 1424999.01 frames.], batch size: 15, lr: 4.17e-04 2022-05-27 14:21:08,149 INFO [train.py:842] (0/4) Epoch 13, batch 5500, loss[loss=0.1742, simple_loss=0.2566, pruned_loss=0.04594, over 7276.00 frames.], tot_loss[loss=0.2005, simple_loss=0.283, pruned_loss=0.05901, over 1422974.94 frames.], batch size: 17, lr: 4.17e-04 2022-05-27 14:21:46,675 INFO [train.py:842] (0/4) Epoch 13, batch 5550, loss[loss=0.1994, simple_loss=0.2807, pruned_loss=0.05911, over 7352.00 frames.], tot_loss[loss=0.2, simple_loss=0.2828, pruned_loss=0.05857, over 1425441.01 frames.], batch size: 19, lr: 4.17e-04 2022-05-27 14:22:26,134 INFO [train.py:842] (0/4) Epoch 13, batch 5600, loss[loss=0.1823, simple_loss=0.2631, pruned_loss=0.05074, over 7333.00 frames.], tot_loss[loss=0.2004, simple_loss=0.2828, pruned_loss=0.05905, over 1424999.88 frames.], batch 
size: 20, lr: 4.16e-04 2022-05-27 14:23:05,042 INFO [train.py:842] (0/4) Epoch 13, batch 5650, loss[loss=0.1832, simple_loss=0.2651, pruned_loss=0.05066, over 7328.00 frames.], tot_loss[loss=0.201, simple_loss=0.2832, pruned_loss=0.05936, over 1423687.60 frames.], batch size: 20, lr: 4.16e-04 2022-05-27 14:23:43,965 INFO [train.py:842] (0/4) Epoch 13, batch 5700, loss[loss=0.1797, simple_loss=0.2612, pruned_loss=0.04913, over 7337.00 frames.], tot_loss[loss=0.2016, simple_loss=0.2837, pruned_loss=0.05979, over 1420432.59 frames.], batch size: 19, lr: 4.16e-04 2022-05-27 14:24:22,946 INFO [train.py:842] (0/4) Epoch 13, batch 5750, loss[loss=0.1895, simple_loss=0.2827, pruned_loss=0.04817, over 7223.00 frames.], tot_loss[loss=0.2023, simple_loss=0.2845, pruned_loss=0.06001, over 1423086.91 frames.], batch size: 21, lr: 4.16e-04 2022-05-27 14:25:02,242 INFO [train.py:842] (0/4) Epoch 13, batch 5800, loss[loss=0.1753, simple_loss=0.265, pruned_loss=0.04278, over 7233.00 frames.], tot_loss[loss=0.2002, simple_loss=0.2826, pruned_loss=0.05893, over 1422846.92 frames.], batch size: 20, lr: 4.16e-04 2022-05-27 14:25:41,231 INFO [train.py:842] (0/4) Epoch 13, batch 5850, loss[loss=0.1476, simple_loss=0.2306, pruned_loss=0.03233, over 7366.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2822, pruned_loss=0.05861, over 1426576.38 frames.], batch size: 19, lr: 4.16e-04 2022-05-27 14:26:20,585 INFO [train.py:842] (0/4) Epoch 13, batch 5900, loss[loss=0.2593, simple_loss=0.3366, pruned_loss=0.09102, over 7145.00 frames.], tot_loss[loss=0.1987, simple_loss=0.2811, pruned_loss=0.05819, over 1426334.24 frames.], batch size: 20, lr: 4.16e-04 2022-05-27 14:26:59,199 INFO [train.py:842] (0/4) Epoch 13, batch 5950, loss[loss=0.1859, simple_loss=0.2771, pruned_loss=0.04733, over 7195.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2819, pruned_loss=0.05875, over 1426377.14 frames.], batch size: 23, lr: 4.16e-04 2022-05-27 14:27:38,222 INFO [train.py:842] (0/4) Epoch 13, batch 6000, loss[loss=0.1952, simple_loss=0.2778, pruned_loss=0.05629, over 7365.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2828, pruned_loss=0.05892, over 1423767.27 frames.], batch size: 19, lr: 4.16e-04 2022-05-27 14:27:38,223 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 14:27:48,003 INFO [train.py:871] (0/4) Epoch 13, validation: loss=0.1712, simple_loss=0.2713, pruned_loss=0.03553, over 868885.00 frames. 
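The per-batch records above all follow one fixed textual pattern (Epoch, batch, loss[...], tot_loss[...], batch size, lr), so they can be scraped for plotting or monitoring with nothing beyond the Python standard library. The sketch below is illustrative only: it assumes the log has been written to a file named log-train.txt inside the experiment directory that appears in the records; that file name, like the variable names, is an assumption rather than something stated in the log.

    import re

    # Assumed location of this log on disk; adjust to wherever it was actually saved.
    LOG_PATH = "streaming_pruned_transducer_stateless4/exp/log-train.txt"

    # One training record reads, e.g.:
    #   Epoch 13, batch 6000, loss[loss=0.1952, ...], tot_loss[loss=0.2003, ...],
    #   batch size: 19, lr: 4.16e-04
    record = re.compile(
        r"Epoch (\d+), batch (\d+), "
        r"loss\[loss=([\d.]+).*?"
        r"tot_loss\[loss=([\d.]+).*?"
        r"batch size: (\d+), lr: ([\d.e-]+)",
        re.S,
    )

    with open(LOG_PATH) as f:
        text = f.read()

    # Keep (epoch, batch, running tot_loss, lr) for each training record.
    points = [
        (int(ep), int(b), float(tot), float(lr))
        for ep, b, _loss, tot, _bs, lr in record.findall(text)
    ]

    # Quick sanity check: print the most recent few running averages.
    for ep, b, tot, lr in points[-5:]:
        print(f"epoch {ep} batch {b}: tot_loss={tot:.4f} lr={lr:.2e}")

The validation records use a slightly different wording and are deliberately not matched by this pattern.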
2022-05-27 14:28:27,125 INFO [train.py:842] (0/4) Epoch 13, batch 6050, loss[loss=0.1973, simple_loss=0.2846, pruned_loss=0.05504, over 7064.00 frames.], tot_loss[loss=0.1993, simple_loss=0.2813, pruned_loss=0.05864, over 1422440.90 frames.], batch size: 18, lr: 4.16e-04 2022-05-27 14:29:06,290 INFO [train.py:842] (0/4) Epoch 13, batch 6100, loss[loss=0.2032, simple_loss=0.2767, pruned_loss=0.0648, over 7110.00 frames.], tot_loss[loss=0.1979, simple_loss=0.2805, pruned_loss=0.05771, over 1426176.48 frames.], batch size: 21, lr: 4.16e-04 2022-05-27 14:29:45,158 INFO [train.py:842] (0/4) Epoch 13, batch 6150, loss[loss=0.213, simple_loss=0.3051, pruned_loss=0.06044, over 7225.00 frames.], tot_loss[loss=0.1995, simple_loss=0.2819, pruned_loss=0.05858, over 1421261.30 frames.], batch size: 20, lr: 4.16e-04 2022-05-27 14:30:23,981 INFO [train.py:842] (0/4) Epoch 13, batch 6200, loss[loss=0.1915, simple_loss=0.2804, pruned_loss=0.05132, over 7193.00 frames.], tot_loss[loss=0.1996, simple_loss=0.2823, pruned_loss=0.05846, over 1424327.00 frames.], batch size: 23, lr: 4.15e-04 2022-05-27 14:31:03,017 INFO [train.py:842] (0/4) Epoch 13, batch 6250, loss[loss=0.1619, simple_loss=0.243, pruned_loss=0.0404, over 6778.00 frames.], tot_loss[loss=0.199, simple_loss=0.282, pruned_loss=0.05799, over 1423430.22 frames.], batch size: 15, lr: 4.15e-04 2022-05-27 14:31:41,871 INFO [train.py:842] (0/4) Epoch 13, batch 6300, loss[loss=0.1967, simple_loss=0.2971, pruned_loss=0.04813, over 6475.00 frames.], tot_loss[loss=0.1992, simple_loss=0.2825, pruned_loss=0.058, over 1421846.93 frames.], batch size: 38, lr: 4.15e-04 2022-05-27 14:32:20,525 INFO [train.py:842] (0/4) Epoch 13, batch 6350, loss[loss=0.2944, simple_loss=0.3591, pruned_loss=0.1148, over 4707.00 frames.], tot_loss[loss=0.1984, simple_loss=0.2818, pruned_loss=0.05751, over 1418140.26 frames.], batch size: 53, lr: 4.15e-04 2022-05-27 14:32:59,608 INFO [train.py:842] (0/4) Epoch 13, batch 6400, loss[loss=0.1972, simple_loss=0.2849, pruned_loss=0.05479, over 7062.00 frames.], tot_loss[loss=0.1992, simple_loss=0.2822, pruned_loss=0.0581, over 1416701.96 frames.], batch size: 18, lr: 4.15e-04 2022-05-27 14:33:38,320 INFO [train.py:842] (0/4) Epoch 13, batch 6450, loss[loss=0.2464, simple_loss=0.3224, pruned_loss=0.08516, over 7185.00 frames.], tot_loss[loss=0.1994, simple_loss=0.2826, pruned_loss=0.05808, over 1417864.73 frames.], batch size: 26, lr: 4.15e-04 2022-05-27 14:34:17,569 INFO [train.py:842] (0/4) Epoch 13, batch 6500, loss[loss=0.1631, simple_loss=0.2431, pruned_loss=0.04154, over 7127.00 frames.], tot_loss[loss=0.1999, simple_loss=0.2825, pruned_loss=0.05865, over 1415889.55 frames.], batch size: 17, lr: 4.15e-04 2022-05-27 14:34:56,665 INFO [train.py:842] (0/4) Epoch 13, batch 6550, loss[loss=0.1773, simple_loss=0.2614, pruned_loss=0.04659, over 7293.00 frames.], tot_loss[loss=0.1992, simple_loss=0.2817, pruned_loss=0.05833, over 1417994.85 frames.], batch size: 18, lr: 4.15e-04 2022-05-27 14:35:36,246 INFO [train.py:842] (0/4) Epoch 13, batch 6600, loss[loss=0.248, simple_loss=0.3206, pruned_loss=0.08775, over 7183.00 frames.], tot_loss[loss=0.198, simple_loss=0.2805, pruned_loss=0.05774, over 1420044.27 frames.], batch size: 26, lr: 4.15e-04 2022-05-27 14:36:15,599 INFO [train.py:842] (0/4) Epoch 13, batch 6650, loss[loss=0.2197, simple_loss=0.306, pruned_loss=0.06667, over 7057.00 frames.], tot_loss[loss=0.1976, simple_loss=0.2804, pruned_loss=0.05738, over 1423066.27 frames.], batch size: 28, lr: 4.15e-04 2022-05-27 14:36:54,867 INFO 
[train.py:842] (0/4) Epoch 13, batch 6700, loss[loss=0.1914, simple_loss=0.2864, pruned_loss=0.04818, over 7237.00 frames.], tot_loss[loss=0.2002, simple_loss=0.2826, pruned_loss=0.05893, over 1421092.71 frames.], batch size: 20, lr: 4.15e-04 2022-05-27 14:37:33,694 INFO [train.py:842] (0/4) Epoch 13, batch 6750, loss[loss=0.1878, simple_loss=0.2853, pruned_loss=0.0452, over 7409.00 frames.], tot_loss[loss=0.1997, simple_loss=0.282, pruned_loss=0.05867, over 1422353.77 frames.], batch size: 21, lr: 4.14e-04 2022-05-27 14:38:12,757 INFO [train.py:842] (0/4) Epoch 13, batch 6800, loss[loss=0.1665, simple_loss=0.2503, pruned_loss=0.04134, over 7415.00 frames.], tot_loss[loss=0.1998, simple_loss=0.2823, pruned_loss=0.05863, over 1423889.15 frames.], batch size: 18, lr: 4.14e-04 2022-05-27 14:38:51,505 INFO [train.py:842] (0/4) Epoch 13, batch 6850, loss[loss=0.2, simple_loss=0.2842, pruned_loss=0.05789, over 7385.00 frames.], tot_loss[loss=0.1993, simple_loss=0.2817, pruned_loss=0.05841, over 1421610.95 frames.], batch size: 23, lr: 4.14e-04 2022-05-27 14:39:30,662 INFO [train.py:842] (0/4) Epoch 13, batch 6900, loss[loss=0.1834, simple_loss=0.2678, pruned_loss=0.04947, over 7437.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2822, pruned_loss=0.05858, over 1422202.56 frames.], batch size: 20, lr: 4.14e-04 2022-05-27 14:40:09,779 INFO [train.py:842] (0/4) Epoch 13, batch 6950, loss[loss=0.2341, simple_loss=0.3308, pruned_loss=0.06868, over 7146.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2822, pruned_loss=0.05863, over 1423491.73 frames.], batch size: 20, lr: 4.14e-04 2022-05-27 14:40:48,912 INFO [train.py:842] (0/4) Epoch 13, batch 7000, loss[loss=0.1716, simple_loss=0.2601, pruned_loss=0.04148, over 7364.00 frames.], tot_loss[loss=0.2, simple_loss=0.2827, pruned_loss=0.05868, over 1421764.43 frames.], batch size: 19, lr: 4.14e-04 2022-05-27 14:41:27,974 INFO [train.py:842] (0/4) Epoch 13, batch 7050, loss[loss=0.1872, simple_loss=0.2632, pruned_loss=0.0556, over 7164.00 frames.], tot_loss[loss=0.2001, simple_loss=0.2826, pruned_loss=0.05882, over 1424163.63 frames.], batch size: 18, lr: 4.14e-04 2022-05-27 14:42:07,632 INFO [train.py:842] (0/4) Epoch 13, batch 7100, loss[loss=0.2361, simple_loss=0.321, pruned_loss=0.0756, over 7334.00 frames.], tot_loss[loss=0.1998, simple_loss=0.2824, pruned_loss=0.0586, over 1425300.36 frames.], batch size: 22, lr: 4.14e-04 2022-05-27 14:42:46,579 INFO [train.py:842] (0/4) Epoch 13, batch 7150, loss[loss=0.191, simple_loss=0.2801, pruned_loss=0.05095, over 7206.00 frames.], tot_loss[loss=0.199, simple_loss=0.2819, pruned_loss=0.05802, over 1424502.25 frames.], batch size: 22, lr: 4.14e-04 2022-05-27 14:43:25,705 INFO [train.py:842] (0/4) Epoch 13, batch 7200, loss[loss=0.1898, simple_loss=0.2714, pruned_loss=0.05405, over 7136.00 frames.], tot_loss[loss=0.2, simple_loss=0.2825, pruned_loss=0.05874, over 1423784.96 frames.], batch size: 17, lr: 4.14e-04 2022-05-27 14:44:04,764 INFO [train.py:842] (0/4) Epoch 13, batch 7250, loss[loss=0.2067, simple_loss=0.2925, pruned_loss=0.06044, over 6295.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2831, pruned_loss=0.05874, over 1419839.75 frames.], batch size: 37, lr: 4.14e-04 2022-05-27 14:44:43,668 INFO [train.py:842] (0/4) Epoch 13, batch 7300, loss[loss=0.1545, simple_loss=0.2444, pruned_loss=0.03227, over 7074.00 frames.], tot_loss[loss=0.201, simple_loss=0.2838, pruned_loss=0.05905, over 1422963.58 frames.], batch size: 18, lr: 4.13e-04 2022-05-27 14:45:22,112 INFO [train.py:842] (0/4) Epoch 13, batch 
7350, loss[loss=0.1706, simple_loss=0.2566, pruned_loss=0.04229, over 7235.00 frames.], tot_loss[loss=0.2002, simple_loss=0.2832, pruned_loss=0.05865, over 1421812.10 frames.], batch size: 20, lr: 4.13e-04 2022-05-27 14:46:01,415 INFO [train.py:842] (0/4) Epoch 13, batch 7400, loss[loss=0.267, simple_loss=0.3371, pruned_loss=0.09844, over 7123.00 frames.], tot_loss[loss=0.2012, simple_loss=0.2836, pruned_loss=0.05939, over 1418852.22 frames.], batch size: 21, lr: 4.13e-04 2022-05-27 14:46:40,471 INFO [train.py:842] (0/4) Epoch 13, batch 7450, loss[loss=0.2887, simple_loss=0.339, pruned_loss=0.1192, over 6766.00 frames.], tot_loss[loss=0.2021, simple_loss=0.2842, pruned_loss=0.06001, over 1422300.19 frames.], batch size: 31, lr: 4.13e-04 2022-05-27 14:47:19,655 INFO [train.py:842] (0/4) Epoch 13, batch 7500, loss[loss=0.2005, simple_loss=0.2911, pruned_loss=0.05495, over 7019.00 frames.], tot_loss[loss=0.2009, simple_loss=0.2834, pruned_loss=0.05923, over 1421118.46 frames.], batch size: 28, lr: 4.13e-04 2022-05-27 14:47:58,780 INFO [train.py:842] (0/4) Epoch 13, batch 7550, loss[loss=0.1701, simple_loss=0.2561, pruned_loss=0.04201, over 7412.00 frames.], tot_loss[loss=0.2006, simple_loss=0.283, pruned_loss=0.05911, over 1423411.53 frames.], batch size: 20, lr: 4.13e-04 2022-05-27 14:48:37,993 INFO [train.py:842] (0/4) Epoch 13, batch 7600, loss[loss=0.1892, simple_loss=0.2741, pruned_loss=0.05211, over 7335.00 frames.], tot_loss[loss=0.2005, simple_loss=0.2831, pruned_loss=0.05892, over 1422543.45 frames.], batch size: 20, lr: 4.13e-04 2022-05-27 14:49:16,785 INFO [train.py:842] (0/4) Epoch 13, batch 7650, loss[loss=0.1709, simple_loss=0.2588, pruned_loss=0.04148, over 7068.00 frames.], tot_loss[loss=0.2005, simple_loss=0.2837, pruned_loss=0.05862, over 1422593.83 frames.], batch size: 18, lr: 4.13e-04 2022-05-27 14:49:56,152 INFO [train.py:842] (0/4) Epoch 13, batch 7700, loss[loss=0.1868, simple_loss=0.2796, pruned_loss=0.04699, over 7207.00 frames.], tot_loss[loss=0.2008, simple_loss=0.2841, pruned_loss=0.05874, over 1424503.90 frames.], batch size: 22, lr: 4.13e-04 2022-05-27 14:50:35,214 INFO [train.py:842] (0/4) Epoch 13, batch 7750, loss[loss=0.2429, simple_loss=0.3229, pruned_loss=0.08143, over 7409.00 frames.], tot_loss[loss=0.2008, simple_loss=0.284, pruned_loss=0.05877, over 1419900.11 frames.], batch size: 21, lr: 4.13e-04 2022-05-27 14:51:14,278 INFO [train.py:842] (0/4) Epoch 13, batch 7800, loss[loss=0.2192, simple_loss=0.2934, pruned_loss=0.07247, over 7060.00 frames.], tot_loss[loss=0.1998, simple_loss=0.2825, pruned_loss=0.05859, over 1421493.25 frames.], batch size: 28, lr: 4.13e-04 2022-05-27 14:51:53,568 INFO [train.py:842] (0/4) Epoch 13, batch 7850, loss[loss=0.215, simple_loss=0.3026, pruned_loss=0.06366, over 6351.00 frames.], tot_loss[loss=0.1993, simple_loss=0.2821, pruned_loss=0.05828, over 1426032.10 frames.], batch size: 37, lr: 4.13e-04 2022-05-27 14:52:33,066 INFO [train.py:842] (0/4) Epoch 13, batch 7900, loss[loss=0.1599, simple_loss=0.2432, pruned_loss=0.03835, over 7404.00 frames.], tot_loss[loss=0.2001, simple_loss=0.2826, pruned_loss=0.05877, over 1426679.25 frames.], batch size: 18, lr: 4.12e-04 2022-05-27 14:53:11,956 INFO [train.py:842] (0/4) Epoch 13, batch 7950, loss[loss=0.1727, simple_loss=0.2651, pruned_loss=0.04017, over 7109.00 frames.], tot_loss[loss=0.1996, simple_loss=0.2821, pruned_loss=0.05849, over 1426020.15 frames.], batch size: 21, lr: 4.12e-04 2022-05-27 14:53:51,102 INFO [train.py:842] (0/4) Epoch 13, batch 8000, loss[loss=0.1563, 
simple_loss=0.2414, pruned_loss=0.03562, over 7296.00 frames.], tot_loss[loss=0.2014, simple_loss=0.2838, pruned_loss=0.05951, over 1428232.10 frames.], batch size: 16, lr: 4.12e-04 2022-05-27 14:54:29,901 INFO [train.py:842] (0/4) Epoch 13, batch 8050, loss[loss=0.1741, simple_loss=0.2534, pruned_loss=0.04743, over 7276.00 frames.], tot_loss[loss=0.2016, simple_loss=0.2837, pruned_loss=0.05972, over 1426842.82 frames.], batch size: 18, lr: 4.12e-04 2022-05-27 14:55:09,279 INFO [train.py:842] (0/4) Epoch 13, batch 8100, loss[loss=0.1701, simple_loss=0.2464, pruned_loss=0.04694, over 7160.00 frames.], tot_loss[loss=0.2013, simple_loss=0.2834, pruned_loss=0.05962, over 1424844.92 frames.], batch size: 19, lr: 4.12e-04 2022-05-27 14:55:47,973 INFO [train.py:842] (0/4) Epoch 13, batch 8150, loss[loss=0.2141, simple_loss=0.2993, pruned_loss=0.06443, over 7278.00 frames.], tot_loss[loss=0.2019, simple_loss=0.284, pruned_loss=0.05988, over 1427332.50 frames.], batch size: 25, lr: 4.12e-04 2022-05-27 14:56:27,297 INFO [train.py:842] (0/4) Epoch 13, batch 8200, loss[loss=0.2001, simple_loss=0.2809, pruned_loss=0.05964, over 7204.00 frames.], tot_loss[loss=0.2023, simple_loss=0.284, pruned_loss=0.06027, over 1429363.48 frames.], batch size: 22, lr: 4.12e-04 2022-05-27 14:57:06,463 INFO [train.py:842] (0/4) Epoch 13, batch 8250, loss[loss=0.1792, simple_loss=0.2665, pruned_loss=0.04592, over 7058.00 frames.], tot_loss[loss=0.2023, simple_loss=0.2842, pruned_loss=0.06027, over 1432777.87 frames.], batch size: 18, lr: 4.12e-04 2022-05-27 14:57:45,592 INFO [train.py:842] (0/4) Epoch 13, batch 8300, loss[loss=0.2052, simple_loss=0.2906, pruned_loss=0.05992, over 6836.00 frames.], tot_loss[loss=0.2021, simple_loss=0.2839, pruned_loss=0.06013, over 1434978.53 frames.], batch size: 31, lr: 4.12e-04 2022-05-27 14:58:24,375 INFO [train.py:842] (0/4) Epoch 13, batch 8350, loss[loss=0.1604, simple_loss=0.2472, pruned_loss=0.03682, over 7266.00 frames.], tot_loss[loss=0.2009, simple_loss=0.283, pruned_loss=0.05941, over 1433710.22 frames.], batch size: 17, lr: 4.12e-04 2022-05-27 14:59:03,765 INFO [train.py:842] (0/4) Epoch 13, batch 8400, loss[loss=0.2197, simple_loss=0.2935, pruned_loss=0.07296, over 7158.00 frames.], tot_loss[loss=0.2002, simple_loss=0.2824, pruned_loss=0.05899, over 1434934.60 frames.], batch size: 18, lr: 4.12e-04 2022-05-27 14:59:42,399 INFO [train.py:842] (0/4) Epoch 13, batch 8450, loss[loss=0.2293, simple_loss=0.3194, pruned_loss=0.06961, over 7102.00 frames.], tot_loss[loss=0.1994, simple_loss=0.2819, pruned_loss=0.05847, over 1428171.83 frames.], batch size: 26, lr: 4.11e-04 2022-05-27 15:00:21,273 INFO [train.py:842] (0/4) Epoch 13, batch 8500, loss[loss=0.2166, simple_loss=0.2828, pruned_loss=0.07518, over 7272.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2824, pruned_loss=0.05851, over 1427858.12 frames.], batch size: 17, lr: 4.11e-04 2022-05-27 15:00:59,971 INFO [train.py:842] (0/4) Epoch 13, batch 8550, loss[loss=0.1907, simple_loss=0.27, pruned_loss=0.05571, over 7194.00 frames.], tot_loss[loss=0.1991, simple_loss=0.2818, pruned_loss=0.05824, over 1424560.97 frames.], batch size: 26, lr: 4.11e-04 2022-05-27 15:01:38,595 INFO [train.py:842] (0/4) Epoch 13, batch 8600, loss[loss=0.2244, simple_loss=0.3058, pruned_loss=0.0715, over 6385.00 frames.], tot_loss[loss=0.1991, simple_loss=0.2822, pruned_loss=0.05802, over 1426639.81 frames.], batch size: 38, lr: 4.11e-04 2022-05-27 15:02:17,368 INFO [train.py:842] (0/4) Epoch 13, batch 8650, loss[loss=0.216, simple_loss=0.2921, 
pruned_loss=0.0699, over 7430.00 frames.], tot_loss[loss=0.1995, simple_loss=0.2823, pruned_loss=0.05835, over 1429310.69 frames.], batch size: 20, lr: 4.11e-04 2022-05-27 15:02:56,149 INFO [train.py:842] (0/4) Epoch 13, batch 8700, loss[loss=0.1601, simple_loss=0.2442, pruned_loss=0.03795, over 7157.00 frames.], tot_loss[loss=0.1996, simple_loss=0.2826, pruned_loss=0.05833, over 1426696.64 frames.], batch size: 18, lr: 4.11e-04 2022-05-27 15:03:34,668 INFO [train.py:842] (0/4) Epoch 13, batch 8750, loss[loss=0.1974, simple_loss=0.301, pruned_loss=0.0469, over 7218.00 frames.], tot_loss[loss=0.1999, simple_loss=0.2833, pruned_loss=0.05822, over 1423894.70 frames.], batch size: 21, lr: 4.11e-04 2022-05-27 15:04:13,437 INFO [train.py:842] (0/4) Epoch 13, batch 8800, loss[loss=0.2109, simple_loss=0.3006, pruned_loss=0.06059, over 7105.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2832, pruned_loss=0.05814, over 1420324.20 frames.], batch size: 21, lr: 4.11e-04 2022-05-27 15:04:52,090 INFO [train.py:842] (0/4) Epoch 13, batch 8850, loss[loss=0.2662, simple_loss=0.3333, pruned_loss=0.09949, over 4866.00 frames.], tot_loss[loss=0.2007, simple_loss=0.2839, pruned_loss=0.05875, over 1417144.10 frames.], batch size: 52, lr: 4.11e-04 2022-05-27 15:05:30,820 INFO [train.py:842] (0/4) Epoch 13, batch 8900, loss[loss=0.2054, simple_loss=0.2794, pruned_loss=0.06575, over 7163.00 frames.], tot_loss[loss=0.2017, simple_loss=0.2846, pruned_loss=0.05942, over 1412462.39 frames.], batch size: 19, lr: 4.11e-04 2022-05-27 15:06:09,667 INFO [train.py:842] (0/4) Epoch 13, batch 8950, loss[loss=0.2327, simple_loss=0.3136, pruned_loss=0.07589, over 7213.00 frames.], tot_loss[loss=0.2018, simple_loss=0.2842, pruned_loss=0.05965, over 1406400.65 frames.], batch size: 26, lr: 4.11e-04 2022-05-27 15:06:48,225 INFO [train.py:842] (0/4) Epoch 13, batch 9000, loss[loss=0.2045, simple_loss=0.287, pruned_loss=0.06098, over 6400.00 frames.], tot_loss[loss=0.2027, simple_loss=0.285, pruned_loss=0.06014, over 1390503.54 frames.], batch size: 37, lr: 4.11e-04 2022-05-27 15:06:48,227 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 15:06:57,716 INFO [train.py:871] (0/4) Epoch 13, validation: loss=0.1693, simple_loss=0.2699, pruned_loss=0.03436, over 868885.00 frames. 
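A few records further down, the log saves streaming_pruned_transducer_stateless4/exp/epoch-13.pt before moving on to Epoch 14. Assuming that file is an ordinary torch.save'd dictionary, which is typical for PyTorch training scripts but not something the log itself confirms, it can be inspected offline without touching the training run; the snippet below is a sketch under that assumption.

    import torch

    # Path copied from the "Saving checkpoint to ..." record below.
    CKPT = "streaming_pruned_transducer_stateless4/exp/epoch-13.pt"

    # Load onto the CPU so no GPU is needed just to look inside the file.
    state = torch.load(CKPT, map_location="cpu")

    # If the checkpoint is a dict (assumed, not verified), list its top-level
    # entries and, for tensor-holding sub-dicts, a rough element count.
    if isinstance(state, dict):
        for key, value in state.items():
            if isinstance(value, dict):
                n = sum(v.numel() for v in value.values() if torch.is_tensor(v))
                print(f"{key}: dict with {len(value)} entries, ~{n} tensor elements")
            else:
                print(f"{key}: {type(value).__name__}")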
2022-05-27 15:07:34,929 INFO [train.py:842] (0/4) Epoch 13, batch 9050, loss[loss=0.1988, simple_loss=0.2855, pruned_loss=0.05601, over 6418.00 frames.], tot_loss[loss=0.2046, simple_loss=0.2869, pruned_loss=0.06114, over 1353031.98 frames.], batch size: 37, lr: 4.10e-04 2022-05-27 15:08:12,254 INFO [train.py:842] (0/4) Epoch 13, batch 9100, loss[loss=0.1793, simple_loss=0.2767, pruned_loss=0.04091, over 6621.00 frames.], tot_loss[loss=0.208, simple_loss=0.2901, pruned_loss=0.06299, over 1310273.51 frames.], batch size: 38, lr: 4.10e-04 2022-05-27 15:08:49,695 INFO [train.py:842] (0/4) Epoch 13, batch 9150, loss[loss=0.2324, simple_loss=0.3073, pruned_loss=0.07876, over 5008.00 frames.], tot_loss[loss=0.2145, simple_loss=0.2949, pruned_loss=0.06706, over 1258298.18 frames.], batch size: 52, lr: 4.10e-04 2022-05-27 15:09:21,129 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-13.pt 2022-05-27 15:09:36,415 INFO [train.py:842] (0/4) Epoch 14, batch 0, loss[loss=0.1999, simple_loss=0.2817, pruned_loss=0.05898, over 7393.00 frames.], tot_loss[loss=0.1999, simple_loss=0.2817, pruned_loss=0.05898, over 7393.00 frames.], batch size: 23, lr: 3.97e-04 2022-05-27 15:10:16,211 INFO [train.py:842] (0/4) Epoch 14, batch 50, loss[loss=0.2006, simple_loss=0.2986, pruned_loss=0.05135, over 7110.00 frames.], tot_loss[loss=0.1951, simple_loss=0.2775, pruned_loss=0.05635, over 322502.77 frames.], batch size: 21, lr: 3.97e-04 2022-05-27 15:10:55,347 INFO [train.py:842] (0/4) Epoch 14, batch 100, loss[loss=0.1682, simple_loss=0.2616, pruned_loss=0.03739, over 7154.00 frames.], tot_loss[loss=0.1958, simple_loss=0.2801, pruned_loss=0.05577, over 572331.59 frames.], batch size: 20, lr: 3.97e-04 2022-05-27 15:11:34,739 INFO [train.py:842] (0/4) Epoch 14, batch 150, loss[loss=0.1973, simple_loss=0.2682, pruned_loss=0.06322, over 7003.00 frames.], tot_loss[loss=0.1955, simple_loss=0.2792, pruned_loss=0.05588, over 762386.72 frames.], batch size: 16, lr: 3.97e-04 2022-05-27 15:12:13,374 INFO [train.py:842] (0/4) Epoch 14, batch 200, loss[loss=0.1935, simple_loss=0.2842, pruned_loss=0.05138, over 7201.00 frames.], tot_loss[loss=0.198, simple_loss=0.2817, pruned_loss=0.05718, over 909487.08 frames.], batch size: 22, lr: 3.97e-04 2022-05-27 15:12:52,351 INFO [train.py:842] (0/4) Epoch 14, batch 250, loss[loss=0.2269, simple_loss=0.3066, pruned_loss=0.07359, over 7216.00 frames.], tot_loss[loss=0.1974, simple_loss=0.2813, pruned_loss=0.05676, over 1025404.04 frames.], batch size: 22, lr: 3.97e-04 2022-05-27 15:13:30,900 INFO [train.py:842] (0/4) Epoch 14, batch 300, loss[loss=0.1722, simple_loss=0.2577, pruned_loss=0.04335, over 7412.00 frames.], tot_loss[loss=0.1983, simple_loss=0.2821, pruned_loss=0.05722, over 1112549.69 frames.], batch size: 21, lr: 3.97e-04 2022-05-27 15:14:09,875 INFO [train.py:842] (0/4) Epoch 14, batch 350, loss[loss=0.1912, simple_loss=0.2671, pruned_loss=0.0577, over 7429.00 frames.], tot_loss[loss=0.1955, simple_loss=0.2794, pruned_loss=0.05577, over 1181051.78 frames.], batch size: 20, lr: 3.96e-04 2022-05-27 15:14:48,792 INFO [train.py:842] (0/4) Epoch 14, batch 400, loss[loss=0.1997, simple_loss=0.2849, pruned_loss=0.0573, over 7105.00 frames.], tot_loss[loss=0.1964, simple_loss=0.2801, pruned_loss=0.05637, over 1231487.05 frames.], batch size: 28, lr: 3.96e-04 2022-05-27 15:15:28,277 INFO [train.py:842] (0/4) Epoch 14, batch 450, loss[loss=0.2092, simple_loss=0.2854, pruned_loss=0.06657, over 6091.00 frames.], tot_loss[loss=0.1965, 
simple_loss=0.2802, pruned_loss=0.05647, over 1273133.52 frames.], batch size: 37, lr: 3.96e-04 2022-05-27 15:16:07,084 INFO [train.py:842] (0/4) Epoch 14, batch 500, loss[loss=0.2342, simple_loss=0.32, pruned_loss=0.07425, over 7139.00 frames.], tot_loss[loss=0.1981, simple_loss=0.2809, pruned_loss=0.05764, over 1300690.05 frames.], batch size: 28, lr: 3.96e-04 2022-05-27 15:16:10,579 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-120000.pt 2022-05-27 15:16:48,807 INFO [train.py:842] (0/4) Epoch 14, batch 550, loss[loss=0.1869, simple_loss=0.2756, pruned_loss=0.04912, over 6281.00 frames.], tot_loss[loss=0.1972, simple_loss=0.2803, pruned_loss=0.057, over 1326535.01 frames.], batch size: 37, lr: 3.96e-04 2022-05-27 15:17:27,822 INFO [train.py:842] (0/4) Epoch 14, batch 600, loss[loss=0.2228, simple_loss=0.3139, pruned_loss=0.06588, over 7324.00 frames.], tot_loss[loss=0.1978, simple_loss=0.281, pruned_loss=0.05731, over 1348233.89 frames.], batch size: 21, lr: 3.96e-04 2022-05-27 15:18:06,509 INFO [train.py:842] (0/4) Epoch 14, batch 650, loss[loss=0.18, simple_loss=0.2751, pruned_loss=0.04243, over 7078.00 frames.], tot_loss[loss=0.1988, simple_loss=0.282, pruned_loss=0.05777, over 1361506.47 frames.], batch size: 18, lr: 3.96e-04 2022-05-27 15:18:45,274 INFO [train.py:842] (0/4) Epoch 14, batch 700, loss[loss=0.2005, simple_loss=0.2812, pruned_loss=0.05992, over 7257.00 frames.], tot_loss[loss=0.1976, simple_loss=0.281, pruned_loss=0.05714, over 1375943.32 frames.], batch size: 18, lr: 3.96e-04 2022-05-27 15:19:24,152 INFO [train.py:842] (0/4) Epoch 14, batch 750, loss[loss=0.1982, simple_loss=0.294, pruned_loss=0.0512, over 7195.00 frames.], tot_loss[loss=0.1986, simple_loss=0.2818, pruned_loss=0.05771, over 1382644.44 frames.], batch size: 23, lr: 3.96e-04 2022-05-27 15:20:03,031 INFO [train.py:842] (0/4) Epoch 14, batch 800, loss[loss=0.199, simple_loss=0.2907, pruned_loss=0.05366, over 7309.00 frames.], tot_loss[loss=0.1988, simple_loss=0.2824, pruned_loss=0.05762, over 1391897.85 frames.], batch size: 25, lr: 3.96e-04 2022-05-27 15:20:42,357 INFO [train.py:842] (0/4) Epoch 14, batch 850, loss[loss=0.2051, simple_loss=0.2978, pruned_loss=0.05622, over 7215.00 frames.], tot_loss[loss=0.1991, simple_loss=0.2828, pruned_loss=0.0577, over 1400299.46 frames.], batch size: 21, lr: 3.96e-04 2022-05-27 15:21:21,187 INFO [train.py:842] (0/4) Epoch 14, batch 900, loss[loss=0.1864, simple_loss=0.2645, pruned_loss=0.0542, over 7161.00 frames.], tot_loss[loss=0.1983, simple_loss=0.282, pruned_loss=0.05733, over 1403215.50 frames.], batch size: 18, lr: 3.96e-04 2022-05-27 15:21:59,897 INFO [train.py:842] (0/4) Epoch 14, batch 950, loss[loss=0.247, simple_loss=0.3181, pruned_loss=0.08794, over 7214.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2839, pruned_loss=0.05918, over 1404174.80 frames.], batch size: 21, lr: 3.96e-04 2022-05-27 15:22:39,005 INFO [train.py:842] (0/4) Epoch 14, batch 1000, loss[loss=0.2019, simple_loss=0.2889, pruned_loss=0.05745, over 7202.00 frames.], tot_loss[loss=0.2002, simple_loss=0.2831, pruned_loss=0.05871, over 1410951.94 frames.], batch size: 22, lr: 3.95e-04 2022-05-27 15:23:18,290 INFO [train.py:842] (0/4) Epoch 14, batch 1050, loss[loss=0.1746, simple_loss=0.2687, pruned_loss=0.0402, over 7411.00 frames.], tot_loss[loss=0.2017, simple_loss=0.2842, pruned_loss=0.05959, over 1411495.57 frames.], batch size: 21, lr: 3.95e-04 2022-05-27 15:23:57,023 INFO [train.py:842] (0/4) Epoch 14, batch 1100, 
loss[loss=0.2234, simple_loss=0.3011, pruned_loss=0.07286, over 6835.00 frames.], tot_loss[loss=0.202, simple_loss=0.2836, pruned_loss=0.06024, over 1411052.13 frames.], batch size: 31, lr: 3.95e-04 2022-05-27 15:24:35,947 INFO [train.py:842] (0/4) Epoch 14, batch 1150, loss[loss=0.2332, simple_loss=0.3116, pruned_loss=0.07745, over 7333.00 frames.], tot_loss[loss=0.2027, simple_loss=0.2846, pruned_loss=0.06038, over 1411203.70 frames.], batch size: 22, lr: 3.95e-04 2022-05-27 15:25:14,707 INFO [train.py:842] (0/4) Epoch 14, batch 1200, loss[loss=0.2631, simple_loss=0.3289, pruned_loss=0.09864, over 5293.00 frames.], tot_loss[loss=0.2021, simple_loss=0.2842, pruned_loss=0.06001, over 1410646.36 frames.], batch size: 54, lr: 3.95e-04 2022-05-27 15:25:53,903 INFO [train.py:842] (0/4) Epoch 14, batch 1250, loss[loss=0.2017, simple_loss=0.2874, pruned_loss=0.05804, over 7429.00 frames.], tot_loss[loss=0.2012, simple_loss=0.2838, pruned_loss=0.05931, over 1414259.43 frames.], batch size: 20, lr: 3.95e-04 2022-05-27 15:26:32,821 INFO [train.py:842] (0/4) Epoch 14, batch 1300, loss[loss=0.2067, simple_loss=0.2967, pruned_loss=0.05839, over 7274.00 frames.], tot_loss[loss=0.2, simple_loss=0.2833, pruned_loss=0.05836, over 1418052.36 frames.], batch size: 19, lr: 3.95e-04 2022-05-27 15:27:22,191 INFO [train.py:842] (0/4) Epoch 14, batch 1350, loss[loss=0.1649, simple_loss=0.2476, pruned_loss=0.04104, over 7297.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2807, pruned_loss=0.05677, over 1421651.99 frames.], batch size: 18, lr: 3.95e-04 2022-05-27 15:28:01,183 INFO [train.py:842] (0/4) Epoch 14, batch 1400, loss[loss=0.1618, simple_loss=0.2487, pruned_loss=0.03747, over 7164.00 frames.], tot_loss[loss=0.1988, simple_loss=0.282, pruned_loss=0.05784, over 1417495.51 frames.], batch size: 18, lr: 3.95e-04 2022-05-27 15:28:40,385 INFO [train.py:842] (0/4) Epoch 14, batch 1450, loss[loss=0.1837, simple_loss=0.265, pruned_loss=0.05113, over 7283.00 frames.], tot_loss[loss=0.198, simple_loss=0.2813, pruned_loss=0.05738, over 1420989.02 frames.], batch size: 17, lr: 3.95e-04 2022-05-27 15:29:19,114 INFO [train.py:842] (0/4) Epoch 14, batch 1500, loss[loss=0.1529, simple_loss=0.2359, pruned_loss=0.03501, over 7268.00 frames.], tot_loss[loss=0.1979, simple_loss=0.281, pruned_loss=0.05737, over 1422545.60 frames.], batch size: 17, lr: 3.95e-04 2022-05-27 15:29:58,057 INFO [train.py:842] (0/4) Epoch 14, batch 1550, loss[loss=0.1781, simple_loss=0.2656, pruned_loss=0.04529, over 6351.00 frames.], tot_loss[loss=0.199, simple_loss=0.2818, pruned_loss=0.05813, over 1418409.42 frames.], batch size: 38, lr: 3.95e-04 2022-05-27 15:30:37,034 INFO [train.py:842] (0/4) Epoch 14, batch 1600, loss[loss=0.1644, simple_loss=0.2597, pruned_loss=0.03454, over 7420.00 frames.], tot_loss[loss=0.2, simple_loss=0.2829, pruned_loss=0.05858, over 1416665.25 frames.], batch size: 21, lr: 3.94e-04 2022-05-27 15:31:16,019 INFO [train.py:842] (0/4) Epoch 14, batch 1650, loss[loss=0.1896, simple_loss=0.2748, pruned_loss=0.05226, over 7231.00 frames.], tot_loss[loss=0.1992, simple_loss=0.2831, pruned_loss=0.05768, over 1418582.48 frames.], batch size: 20, lr: 3.94e-04 2022-05-27 15:31:54,469 INFO [train.py:842] (0/4) Epoch 14, batch 1700, loss[loss=0.2318, simple_loss=0.306, pruned_loss=0.07875, over 6622.00 frames.], tot_loss[loss=0.2015, simple_loss=0.2846, pruned_loss=0.05918, over 1418858.33 frames.], batch size: 39, lr: 3.94e-04 2022-05-27 15:32:33,871 INFO [train.py:842] (0/4) Epoch 14, batch 1750, loss[loss=0.1727, 
simple_loss=0.2494, pruned_loss=0.04802, over 7282.00 frames.], tot_loss[loss=0.2014, simple_loss=0.2841, pruned_loss=0.05939, over 1421888.52 frames.], batch size: 17, lr: 3.94e-04 2022-05-27 15:33:12,969 INFO [train.py:842] (0/4) Epoch 14, batch 1800, loss[loss=0.1925, simple_loss=0.2805, pruned_loss=0.05228, over 7152.00 frames.], tot_loss[loss=0.1999, simple_loss=0.2827, pruned_loss=0.05852, over 1426309.84 frames.], batch size: 20, lr: 3.94e-04 2022-05-27 15:33:51,929 INFO [train.py:842] (0/4) Epoch 14, batch 1850, loss[loss=0.1876, simple_loss=0.2789, pruned_loss=0.04815, over 7320.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2832, pruned_loss=0.05874, over 1426440.63 frames.], batch size: 25, lr: 3.94e-04 2022-05-27 15:34:30,716 INFO [train.py:842] (0/4) Epoch 14, batch 1900, loss[loss=0.2237, simple_loss=0.315, pruned_loss=0.06622, over 6507.00 frames.], tot_loss[loss=0.2008, simple_loss=0.2838, pruned_loss=0.05893, over 1421575.51 frames.], batch size: 38, lr: 3.94e-04 2022-05-27 15:35:09,515 INFO [train.py:842] (0/4) Epoch 14, batch 1950, loss[loss=0.1596, simple_loss=0.249, pruned_loss=0.03509, over 7261.00 frames.], tot_loss[loss=0.201, simple_loss=0.2841, pruned_loss=0.05894, over 1422741.90 frames.], batch size: 19, lr: 3.94e-04 2022-05-27 15:35:48,260 INFO [train.py:842] (0/4) Epoch 14, batch 2000, loss[loss=0.2151, simple_loss=0.3044, pruned_loss=0.06292, over 7330.00 frames.], tot_loss[loss=0.1998, simple_loss=0.2831, pruned_loss=0.05826, over 1423886.71 frames.], batch size: 22, lr: 3.94e-04 2022-05-27 15:36:27,726 INFO [train.py:842] (0/4) Epoch 14, batch 2050, loss[loss=0.1941, simple_loss=0.2817, pruned_loss=0.05323, over 7366.00 frames.], tot_loss[loss=0.1986, simple_loss=0.2819, pruned_loss=0.05766, over 1425687.27 frames.], batch size: 23, lr: 3.94e-04 2022-05-27 15:37:06,299 INFO [train.py:842] (0/4) Epoch 14, batch 2100, loss[loss=0.2001, simple_loss=0.283, pruned_loss=0.05859, over 7222.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2833, pruned_loss=0.05805, over 1425337.85 frames.], batch size: 20, lr: 3.94e-04 2022-05-27 15:37:45,643 INFO [train.py:842] (0/4) Epoch 14, batch 2150, loss[loss=0.1874, simple_loss=0.2797, pruned_loss=0.0476, over 7105.00 frames.], tot_loss[loss=0.1979, simple_loss=0.2818, pruned_loss=0.05704, over 1428301.56 frames.], batch size: 26, lr: 3.94e-04 2022-05-27 15:38:24,813 INFO [train.py:842] (0/4) Epoch 14, batch 2200, loss[loss=0.1977, simple_loss=0.2745, pruned_loss=0.06047, over 7437.00 frames.], tot_loss[loss=0.1993, simple_loss=0.2826, pruned_loss=0.05797, over 1426137.55 frames.], batch size: 20, lr: 3.93e-04 2022-05-27 15:39:03,996 INFO [train.py:842] (0/4) Epoch 14, batch 2250, loss[loss=0.1878, simple_loss=0.2767, pruned_loss=0.0494, over 7234.00 frames.], tot_loss[loss=0.1989, simple_loss=0.2823, pruned_loss=0.05778, over 1427413.38 frames.], batch size: 20, lr: 3.93e-04 2022-05-27 15:39:42,946 INFO [train.py:842] (0/4) Epoch 14, batch 2300, loss[loss=0.2001, simple_loss=0.2816, pruned_loss=0.05935, over 7124.00 frames.], tot_loss[loss=0.1973, simple_loss=0.2809, pruned_loss=0.0568, over 1429093.38 frames.], batch size: 28, lr: 3.93e-04 2022-05-27 15:40:22,155 INFO [train.py:842] (0/4) Epoch 14, batch 2350, loss[loss=0.2116, simple_loss=0.2917, pruned_loss=0.0658, over 4953.00 frames.], tot_loss[loss=0.1982, simple_loss=0.2817, pruned_loss=0.05736, over 1427738.43 frames.], batch size: 52, lr: 3.93e-04 2022-05-27 15:41:00,890 INFO [train.py:842] (0/4) Epoch 14, batch 2400, loss[loss=0.1548, simple_loss=0.2386, 
pruned_loss=0.03548, over 7273.00 frames.], tot_loss[loss=0.1987, simple_loss=0.2823, pruned_loss=0.05753, over 1429317.09 frames.], batch size: 17, lr: 3.93e-04 2022-05-27 15:41:39,986 INFO [train.py:842] (0/4) Epoch 14, batch 2450, loss[loss=0.2191, simple_loss=0.309, pruned_loss=0.06463, over 6737.00 frames.], tot_loss[loss=0.1988, simple_loss=0.2827, pruned_loss=0.05749, over 1431417.17 frames.], batch size: 31, lr: 3.93e-04 2022-05-27 15:42:19,154 INFO [train.py:842] (0/4) Epoch 14, batch 2500, loss[loss=0.2019, simple_loss=0.2658, pruned_loss=0.06897, over 7266.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2838, pruned_loss=0.05838, over 1427948.88 frames.], batch size: 17, lr: 3.93e-04 2022-05-27 15:42:58,110 INFO [train.py:842] (0/4) Epoch 14, batch 2550, loss[loss=0.1778, simple_loss=0.2624, pruned_loss=0.04661, over 7290.00 frames.], tot_loss[loss=0.2012, simple_loss=0.2843, pruned_loss=0.05906, over 1423114.26 frames.], batch size: 25, lr: 3.93e-04 2022-05-27 15:43:37,309 INFO [train.py:842] (0/4) Epoch 14, batch 2600, loss[loss=0.2195, simple_loss=0.3023, pruned_loss=0.06835, over 7426.00 frames.], tot_loss[loss=0.2005, simple_loss=0.2836, pruned_loss=0.05869, over 1419915.33 frames.], batch size: 21, lr: 3.93e-04 2022-05-27 15:44:16,172 INFO [train.py:842] (0/4) Epoch 14, batch 2650, loss[loss=0.1938, simple_loss=0.2844, pruned_loss=0.05164, over 7114.00 frames.], tot_loss[loss=0.2006, simple_loss=0.2838, pruned_loss=0.0587, over 1417908.01 frames.], batch size: 21, lr: 3.93e-04 2022-05-27 15:44:55,673 INFO [train.py:842] (0/4) Epoch 14, batch 2700, loss[loss=0.1852, simple_loss=0.259, pruned_loss=0.0557, over 6981.00 frames.], tot_loss[loss=0.1983, simple_loss=0.2817, pruned_loss=0.05746, over 1422482.11 frames.], batch size: 16, lr: 3.93e-04 2022-05-27 15:45:35,085 INFO [train.py:842] (0/4) Epoch 14, batch 2750, loss[loss=0.2158, simple_loss=0.287, pruned_loss=0.07233, over 7280.00 frames.], tot_loss[loss=0.1974, simple_loss=0.2807, pruned_loss=0.05702, over 1427299.48 frames.], batch size: 24, lr: 3.93e-04 2022-05-27 15:46:13,994 INFO [train.py:842] (0/4) Epoch 14, batch 2800, loss[loss=0.2238, simple_loss=0.2796, pruned_loss=0.08404, over 7131.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2798, pruned_loss=0.05703, over 1425101.70 frames.], batch size: 17, lr: 3.93e-04 2022-05-27 15:46:53,095 INFO [train.py:842] (0/4) Epoch 14, batch 2850, loss[loss=0.1817, simple_loss=0.283, pruned_loss=0.04013, over 7425.00 frames.], tot_loss[loss=0.196, simple_loss=0.2787, pruned_loss=0.05662, over 1425253.37 frames.], batch size: 21, lr: 3.92e-04 2022-05-27 15:47:31,766 INFO [train.py:842] (0/4) Epoch 14, batch 2900, loss[loss=0.1891, simple_loss=0.2905, pruned_loss=0.04385, over 7099.00 frames.], tot_loss[loss=0.1955, simple_loss=0.2789, pruned_loss=0.05603, over 1426991.20 frames.], batch size: 21, lr: 3.92e-04 2022-05-27 15:48:10,957 INFO [train.py:842] (0/4) Epoch 14, batch 2950, loss[loss=0.2206, simple_loss=0.3057, pruned_loss=0.06779, over 7212.00 frames.], tot_loss[loss=0.1959, simple_loss=0.2797, pruned_loss=0.05607, over 1428142.08 frames.], batch size: 23, lr: 3.92e-04 2022-05-27 15:48:50,364 INFO [train.py:842] (0/4) Epoch 14, batch 3000, loss[loss=0.2242, simple_loss=0.3053, pruned_loss=0.07154, over 7315.00 frames.], tot_loss[loss=0.195, simple_loss=0.2787, pruned_loss=0.05567, over 1429047.75 frames.], batch size: 24, lr: 3.92e-04 2022-05-27 15:48:50,366 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 15:48:59,799 INFO [train.py:871] (0/4) Epoch 14, 
validation: loss=0.17, simple_loss=0.2697, pruned_loss=0.03515, over 868885.00 frames. 2022-05-27 15:49:39,100 INFO [train.py:842] (0/4) Epoch 14, batch 3050, loss[loss=0.1744, simple_loss=0.2467, pruned_loss=0.05105, over 7282.00 frames.], tot_loss[loss=0.1946, simple_loss=0.278, pruned_loss=0.05558, over 1429433.26 frames.], batch size: 17, lr: 3.92e-04 2022-05-27 15:50:17,909 INFO [train.py:842] (0/4) Epoch 14, batch 3100, loss[loss=0.1788, simple_loss=0.2655, pruned_loss=0.04606, over 7198.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2787, pruned_loss=0.05606, over 1431181.79 frames.], batch size: 23, lr: 3.92e-04 2022-05-27 15:50:57,000 INFO [train.py:842] (0/4) Epoch 14, batch 3150, loss[loss=0.2389, simple_loss=0.3167, pruned_loss=0.08052, over 4999.00 frames.], tot_loss[loss=0.1948, simple_loss=0.2781, pruned_loss=0.05572, over 1429470.52 frames.], batch size: 53, lr: 3.92e-04 2022-05-27 15:51:35,891 INFO [train.py:842] (0/4) Epoch 14, batch 3200, loss[loss=0.2467, simple_loss=0.3344, pruned_loss=0.07948, over 7338.00 frames.], tot_loss[loss=0.1956, simple_loss=0.2789, pruned_loss=0.05615, over 1429163.35 frames.], batch size: 22, lr: 3.92e-04 2022-05-27 15:52:14,768 INFO [train.py:842] (0/4) Epoch 14, batch 3250, loss[loss=0.1936, simple_loss=0.2892, pruned_loss=0.04903, over 7191.00 frames.], tot_loss[loss=0.195, simple_loss=0.2784, pruned_loss=0.05583, over 1426755.48 frames.], batch size: 26, lr: 3.92e-04 2022-05-27 15:52:53,651 INFO [train.py:842] (0/4) Epoch 14, batch 3300, loss[loss=0.1762, simple_loss=0.2646, pruned_loss=0.0439, over 7168.00 frames.], tot_loss[loss=0.1947, simple_loss=0.278, pruned_loss=0.05572, over 1424212.20 frames.], batch size: 18, lr: 3.92e-04 2022-05-27 15:53:32,476 INFO [train.py:842] (0/4) Epoch 14, batch 3350, loss[loss=0.1718, simple_loss=0.2566, pruned_loss=0.04343, over 7399.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2782, pruned_loss=0.05545, over 1425968.60 frames.], batch size: 18, lr: 3.92e-04 2022-05-27 15:54:11,129 INFO [train.py:842] (0/4) Epoch 14, batch 3400, loss[loss=0.151, simple_loss=0.2405, pruned_loss=0.03073, over 7169.00 frames.], tot_loss[loss=0.1948, simple_loss=0.2785, pruned_loss=0.05552, over 1426410.83 frames.], batch size: 18, lr: 3.92e-04 2022-05-27 15:55:00,323 INFO [train.py:842] (0/4) Epoch 14, batch 3450, loss[loss=0.1995, simple_loss=0.286, pruned_loss=0.05652, over 7120.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2794, pruned_loss=0.05572, over 1425568.35 frames.], batch size: 21, lr: 3.91e-04 2022-05-27 15:55:39,246 INFO [train.py:842] (0/4) Epoch 14, batch 3500, loss[loss=0.256, simple_loss=0.3296, pruned_loss=0.09116, over 7321.00 frames.], tot_loss[loss=0.1961, simple_loss=0.2796, pruned_loss=0.05628, over 1426717.78 frames.], batch size: 22, lr: 3.91e-04 2022-05-27 15:56:28,926 INFO [train.py:842] (0/4) Epoch 14, batch 3550, loss[loss=0.2012, simple_loss=0.2912, pruned_loss=0.05559, over 7318.00 frames.], tot_loss[loss=0.1968, simple_loss=0.2804, pruned_loss=0.05658, over 1426427.69 frames.], batch size: 21, lr: 3.91e-04 2022-05-27 15:57:08,504 INFO [train.py:842] (0/4) Epoch 14, batch 3600, loss[loss=0.1563, simple_loss=0.2359, pruned_loss=0.03836, over 7356.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2789, pruned_loss=0.05595, over 1429163.02 frames.], batch size: 19, lr: 3.91e-04 2022-05-27 15:57:57,864 INFO [train.py:842] (0/4) Epoch 14, batch 3650, loss[loss=0.2383, simple_loss=0.3235, pruned_loss=0.07654, over 7233.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2781, 
pruned_loss=0.05518, over 1428390.63 frames.], batch size: 20, lr: 3.91e-04 2022-05-27 15:58:36,736 INFO [train.py:842] (0/4) Epoch 14, batch 3700, loss[loss=0.2011, simple_loss=0.2952, pruned_loss=0.0535, over 7287.00 frames.], tot_loss[loss=0.196, simple_loss=0.2793, pruned_loss=0.05637, over 1420553.05 frames.], batch size: 24, lr: 3.91e-04 2022-05-27 15:59:15,369 INFO [train.py:842] (0/4) Epoch 14, batch 3750, loss[loss=0.2376, simple_loss=0.3053, pruned_loss=0.0849, over 5050.00 frames.], tot_loss[loss=0.1975, simple_loss=0.2805, pruned_loss=0.05723, over 1419193.09 frames.], batch size: 54, lr: 3.91e-04 2022-05-27 15:59:54,229 INFO [train.py:842] (0/4) Epoch 14, batch 3800, loss[loss=0.25, simple_loss=0.3152, pruned_loss=0.09236, over 7266.00 frames.], tot_loss[loss=0.1976, simple_loss=0.2808, pruned_loss=0.05719, over 1417922.64 frames.], batch size: 19, lr: 3.91e-04 2022-05-27 16:00:33,474 INFO [train.py:842] (0/4) Epoch 14, batch 3850, loss[loss=0.1974, simple_loss=0.2893, pruned_loss=0.05279, over 6415.00 frames.], tot_loss[loss=0.1967, simple_loss=0.2803, pruned_loss=0.05654, over 1418665.12 frames.], batch size: 37, lr: 3.91e-04 2022-05-27 16:01:12,305 INFO [train.py:842] (0/4) Epoch 14, batch 3900, loss[loss=0.1889, simple_loss=0.2665, pruned_loss=0.0557, over 7119.00 frames.], tot_loss[loss=0.198, simple_loss=0.2813, pruned_loss=0.05732, over 1420153.92 frames.], batch size: 21, lr: 3.91e-04 2022-05-27 16:01:51,361 INFO [train.py:842] (0/4) Epoch 14, batch 3950, loss[loss=0.209, simple_loss=0.2782, pruned_loss=0.06996, over 5264.00 frames.], tot_loss[loss=0.1974, simple_loss=0.2806, pruned_loss=0.05713, over 1420767.73 frames.], batch size: 52, lr: 3.91e-04 2022-05-27 16:02:30,312 INFO [train.py:842] (0/4) Epoch 14, batch 4000, loss[loss=0.1817, simple_loss=0.2565, pruned_loss=0.05342, over 7170.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2802, pruned_loss=0.05681, over 1421956.22 frames.], batch size: 18, lr: 3.91e-04 2022-05-27 16:03:09,237 INFO [train.py:842] (0/4) Epoch 14, batch 4050, loss[loss=0.1769, simple_loss=0.2631, pruned_loss=0.04538, over 7197.00 frames.], tot_loss[loss=0.1973, simple_loss=0.2807, pruned_loss=0.05697, over 1424219.18 frames.], batch size: 22, lr: 3.91e-04 2022-05-27 16:03:48,317 INFO [train.py:842] (0/4) Epoch 14, batch 4100, loss[loss=0.2013, simple_loss=0.2846, pruned_loss=0.05903, over 7213.00 frames.], tot_loss[loss=0.1973, simple_loss=0.2804, pruned_loss=0.05706, over 1426068.54 frames.], batch size: 22, lr: 3.90e-04 2022-05-27 16:04:27,755 INFO [train.py:842] (0/4) Epoch 14, batch 4150, loss[loss=0.1835, simple_loss=0.27, pruned_loss=0.04854, over 7317.00 frames.], tot_loss[loss=0.1976, simple_loss=0.281, pruned_loss=0.05713, over 1421391.95 frames.], batch size: 21, lr: 3.90e-04 2022-05-27 16:05:06,391 INFO [train.py:842] (0/4) Epoch 14, batch 4200, loss[loss=0.2011, simple_loss=0.2695, pruned_loss=0.06634, over 7147.00 frames.], tot_loss[loss=0.1983, simple_loss=0.2814, pruned_loss=0.05759, over 1423283.24 frames.], batch size: 17, lr: 3.90e-04 2022-05-27 16:05:45,521 INFO [train.py:842] (0/4) Epoch 14, batch 4250, loss[loss=0.1908, simple_loss=0.2771, pruned_loss=0.05224, over 7430.00 frames.], tot_loss[loss=0.1976, simple_loss=0.2806, pruned_loss=0.05728, over 1419353.70 frames.], batch size: 20, lr: 3.90e-04 2022-05-27 16:06:24,395 INFO [train.py:842] (0/4) Epoch 14, batch 4300, loss[loss=0.2371, simple_loss=0.3142, pruned_loss=0.08002, over 7415.00 frames.], tot_loss[loss=0.1988, simple_loss=0.2819, pruned_loss=0.0579, over 
1415910.57 frames.], batch size: 21, lr: 3.90e-04 2022-05-27 16:07:03,432 INFO [train.py:842] (0/4) Epoch 14, batch 4350, loss[loss=0.184, simple_loss=0.2743, pruned_loss=0.04685, over 7432.00 frames.], tot_loss[loss=0.1979, simple_loss=0.2813, pruned_loss=0.0573, over 1418340.79 frames.], batch size: 20, lr: 3.90e-04 2022-05-27 16:07:42,435 INFO [train.py:842] (0/4) Epoch 14, batch 4400, loss[loss=0.2033, simple_loss=0.2826, pruned_loss=0.06202, over 6743.00 frames.], tot_loss[loss=0.1986, simple_loss=0.282, pruned_loss=0.05757, over 1418556.10 frames.], batch size: 31, lr: 3.90e-04 2022-05-27 16:08:21,597 INFO [train.py:842] (0/4) Epoch 14, batch 4450, loss[loss=0.2148, simple_loss=0.3008, pruned_loss=0.06434, over 7418.00 frames.], tot_loss[loss=0.1985, simple_loss=0.2815, pruned_loss=0.05772, over 1418300.90 frames.], batch size: 21, lr: 3.90e-04 2022-05-27 16:09:00,487 INFO [train.py:842] (0/4) Epoch 14, batch 4500, loss[loss=0.1812, simple_loss=0.2708, pruned_loss=0.04584, over 7221.00 frames.], tot_loss[loss=0.1991, simple_loss=0.2819, pruned_loss=0.05808, over 1418613.35 frames.], batch size: 21, lr: 3.90e-04 2022-05-27 16:09:39,602 INFO [train.py:842] (0/4) Epoch 14, batch 4550, loss[loss=0.2016, simple_loss=0.2907, pruned_loss=0.0562, over 7343.00 frames.], tot_loss[loss=0.1988, simple_loss=0.2819, pruned_loss=0.05781, over 1415168.15 frames.], batch size: 22, lr: 3.90e-04 2022-05-27 16:10:18,166 INFO [train.py:842] (0/4) Epoch 14, batch 4600, loss[loss=0.2276, simple_loss=0.3111, pruned_loss=0.07211, over 6426.00 frames.], tot_loss[loss=0.1986, simple_loss=0.2823, pruned_loss=0.05745, over 1415599.10 frames.], batch size: 38, lr: 3.90e-04 2022-05-27 16:10:57,477 INFO [train.py:842] (0/4) Epoch 14, batch 4650, loss[loss=0.2058, simple_loss=0.2874, pruned_loss=0.06207, over 7373.00 frames.], tot_loss[loss=0.1987, simple_loss=0.2824, pruned_loss=0.05747, over 1415378.80 frames.], batch size: 19, lr: 3.90e-04 2022-05-27 16:11:35,854 INFO [train.py:842] (0/4) Epoch 14, batch 4700, loss[loss=0.2258, simple_loss=0.3196, pruned_loss=0.06603, over 7168.00 frames.], tot_loss[loss=0.1994, simple_loss=0.2828, pruned_loss=0.05797, over 1413001.62 frames.], batch size: 26, lr: 3.90e-04 2022-05-27 16:12:14,906 INFO [train.py:842] (0/4) Epoch 14, batch 4750, loss[loss=0.2426, simple_loss=0.3131, pruned_loss=0.0861, over 7263.00 frames.], tot_loss[loss=0.1991, simple_loss=0.2828, pruned_loss=0.05769, over 1415288.95 frames.], batch size: 19, lr: 3.89e-04 2022-05-27 16:12:53,686 INFO [train.py:842] (0/4) Epoch 14, batch 4800, loss[loss=0.1797, simple_loss=0.273, pruned_loss=0.04318, over 7419.00 frames.], tot_loss[loss=0.1995, simple_loss=0.2829, pruned_loss=0.05806, over 1417713.77 frames.], batch size: 21, lr: 3.89e-04 2022-05-27 16:13:32,500 INFO [train.py:842] (0/4) Epoch 14, batch 4850, loss[loss=0.2313, simple_loss=0.3227, pruned_loss=0.06999, over 7202.00 frames.], tot_loss[loss=0.1987, simple_loss=0.2824, pruned_loss=0.05753, over 1419277.79 frames.], batch size: 22, lr: 3.89e-04 2022-05-27 16:14:11,419 INFO [train.py:842] (0/4) Epoch 14, batch 4900, loss[loss=0.1748, simple_loss=0.2655, pruned_loss=0.04208, over 6719.00 frames.], tot_loss[loss=0.198, simple_loss=0.2819, pruned_loss=0.05707, over 1418945.84 frames.], batch size: 31, lr: 3.89e-04 2022-05-27 16:14:50,464 INFO [train.py:842] (0/4) Epoch 14, batch 4950, loss[loss=0.2155, simple_loss=0.3005, pruned_loss=0.06523, over 7214.00 frames.], tot_loss[loss=0.1973, simple_loss=0.2809, pruned_loss=0.0568, over 1420099.68 frames.], batch 
size: 22, lr: 3.89e-04 2022-05-27 16:15:29,221 INFO [train.py:842] (0/4) Epoch 14, batch 5000, loss[loss=0.1704, simple_loss=0.2577, pruned_loss=0.04156, over 7164.00 frames.], tot_loss[loss=0.1973, simple_loss=0.2808, pruned_loss=0.05691, over 1422742.31 frames.], batch size: 18, lr: 3.89e-04 2022-05-27 16:16:08,261 INFO [train.py:842] (0/4) Epoch 14, batch 5050, loss[loss=0.1807, simple_loss=0.2556, pruned_loss=0.05286, over 6995.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2808, pruned_loss=0.05673, over 1420921.44 frames.], batch size: 16, lr: 3.89e-04 2022-05-27 16:16:47,140 INFO [train.py:842] (0/4) Epoch 14, batch 5100, loss[loss=0.2101, simple_loss=0.2974, pruned_loss=0.06147, over 7258.00 frames.], tot_loss[loss=0.197, simple_loss=0.2805, pruned_loss=0.05674, over 1420067.36 frames.], batch size: 19, lr: 3.89e-04 2022-05-27 16:17:26,322 INFO [train.py:842] (0/4) Epoch 14, batch 5150, loss[loss=0.208, simple_loss=0.2951, pruned_loss=0.06041, over 7309.00 frames.], tot_loss[loss=0.1972, simple_loss=0.2803, pruned_loss=0.05707, over 1422866.71 frames.], batch size: 24, lr: 3.89e-04 2022-05-27 16:18:04,985 INFO [train.py:842] (0/4) Epoch 14, batch 5200, loss[loss=0.2551, simple_loss=0.3449, pruned_loss=0.08266, over 7420.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2803, pruned_loss=0.05671, over 1426167.71 frames.], batch size: 21, lr: 3.89e-04 2022-05-27 16:18:44,061 INFO [train.py:842] (0/4) Epoch 14, batch 5250, loss[loss=0.1965, simple_loss=0.2806, pruned_loss=0.05622, over 7375.00 frames.], tot_loss[loss=0.1974, simple_loss=0.281, pruned_loss=0.0569, over 1428239.12 frames.], batch size: 23, lr: 3.89e-04 2022-05-27 16:19:22,704 INFO [train.py:842] (0/4) Epoch 14, batch 5300, loss[loss=0.2038, simple_loss=0.2873, pruned_loss=0.0602, over 4952.00 frames.], tot_loss[loss=0.1991, simple_loss=0.2826, pruned_loss=0.0578, over 1421651.01 frames.], batch size: 52, lr: 3.89e-04 2022-05-27 16:20:01,788 INFO [train.py:842] (0/4) Epoch 14, batch 5350, loss[loss=0.2069, simple_loss=0.2896, pruned_loss=0.06208, over 7308.00 frames.], tot_loss[loss=0.1982, simple_loss=0.2817, pruned_loss=0.05732, over 1423129.09 frames.], batch size: 24, lr: 3.88e-04 2022-05-27 16:20:40,852 INFO [train.py:842] (0/4) Epoch 14, batch 5400, loss[loss=0.2434, simple_loss=0.3393, pruned_loss=0.07377, over 7326.00 frames.], tot_loss[loss=0.1975, simple_loss=0.2815, pruned_loss=0.05674, over 1426665.80 frames.], batch size: 22, lr: 3.88e-04 2022-05-27 16:21:20,189 INFO [train.py:842] (0/4) Epoch 14, batch 5450, loss[loss=0.2072, simple_loss=0.2897, pruned_loss=0.06235, over 6828.00 frames.], tot_loss[loss=0.1972, simple_loss=0.281, pruned_loss=0.05676, over 1427405.83 frames.], batch size: 31, lr: 3.88e-04 2022-05-27 16:21:59,180 INFO [train.py:842] (0/4) Epoch 14, batch 5500, loss[loss=0.2422, simple_loss=0.3238, pruned_loss=0.08026, over 7212.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2805, pruned_loss=0.05664, over 1429757.66 frames.], batch size: 23, lr: 3.88e-04 2022-05-27 16:22:38,747 INFO [train.py:842] (0/4) Epoch 14, batch 5550, loss[loss=0.2012, simple_loss=0.2981, pruned_loss=0.0521, over 7328.00 frames.], tot_loss[loss=0.1967, simple_loss=0.2804, pruned_loss=0.05649, over 1430993.48 frames.], batch size: 20, lr: 3.88e-04 2022-05-27 16:23:17,931 INFO [train.py:842] (0/4) Epoch 14, batch 5600, loss[loss=0.1461, simple_loss=0.2273, pruned_loss=0.03249, over 7276.00 frames.], tot_loss[loss=0.1957, simple_loss=0.279, pruned_loss=0.05615, over 1429484.04 frames.], batch size: 17, lr: 3.88e-04 
2022-05-27 16:23:57,204 INFO [train.py:842] (0/4) Epoch 14, batch 5650, loss[loss=0.2392, simple_loss=0.3164, pruned_loss=0.08104, over 7273.00 frames.], tot_loss[loss=0.1963, simple_loss=0.2797, pruned_loss=0.05644, over 1429897.36 frames.], batch size: 24, lr: 3.88e-04 2022-05-27 16:24:36,053 INFO [train.py:842] (0/4) Epoch 14, batch 5700, loss[loss=0.174, simple_loss=0.2569, pruned_loss=0.04549, over 7420.00 frames.], tot_loss[loss=0.1969, simple_loss=0.28, pruned_loss=0.05692, over 1432038.65 frames.], batch size: 20, lr: 3.88e-04 2022-05-27 16:25:15,152 INFO [train.py:842] (0/4) Epoch 14, batch 5750, loss[loss=0.2401, simple_loss=0.3192, pruned_loss=0.0805, over 7287.00 frames.], tot_loss[loss=0.1964, simple_loss=0.2796, pruned_loss=0.05657, over 1429729.33 frames.], batch size: 24, lr: 3.88e-04 2022-05-27 16:25:54,353 INFO [train.py:842] (0/4) Epoch 14, batch 5800, loss[loss=0.2854, simple_loss=0.348, pruned_loss=0.1114, over 7307.00 frames.], tot_loss[loss=0.1966, simple_loss=0.2798, pruned_loss=0.05666, over 1429121.01 frames.], batch size: 21, lr: 3.88e-04 2022-05-27 16:26:33,787 INFO [train.py:842] (0/4) Epoch 14, batch 5850, loss[loss=0.1636, simple_loss=0.2618, pruned_loss=0.03269, over 7355.00 frames.], tot_loss[loss=0.1981, simple_loss=0.2812, pruned_loss=0.05748, over 1426909.32 frames.], batch size: 19, lr: 3.88e-04 2022-05-27 16:27:12,631 INFO [train.py:842] (0/4) Epoch 14, batch 5900, loss[loss=0.1908, simple_loss=0.284, pruned_loss=0.04882, over 6396.00 frames.], tot_loss[loss=0.1984, simple_loss=0.2815, pruned_loss=0.0577, over 1421929.18 frames.], batch size: 37, lr: 3.88e-04 2022-05-27 16:27:52,038 INFO [train.py:842] (0/4) Epoch 14, batch 5950, loss[loss=0.1866, simple_loss=0.2648, pruned_loss=0.05422, over 7277.00 frames.], tot_loss[loss=0.1987, simple_loss=0.2813, pruned_loss=0.058, over 1423572.07 frames.], batch size: 17, lr: 3.88e-04 2022-05-27 16:28:31,058 INFO [train.py:842] (0/4) Epoch 14, batch 6000, loss[loss=0.2251, simple_loss=0.3068, pruned_loss=0.07166, over 6412.00 frames.], tot_loss[loss=0.1983, simple_loss=0.2811, pruned_loss=0.05773, over 1419602.13 frames.], batch size: 38, lr: 3.87e-04 2022-05-27 16:28:31,060 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 16:28:40,676 INFO [train.py:871] (0/4) Epoch 14, validation: loss=0.1701, simple_loss=0.2708, pruned_loss=0.03472, over 868885.00 frames. 
2022-05-27 16:29:19,849 INFO [train.py:842] (0/4) Epoch 14, batch 6050, loss[loss=0.2267, simple_loss=0.2989, pruned_loss=0.07724, over 7240.00 frames.], tot_loss[loss=0.1963, simple_loss=0.2792, pruned_loss=0.05667, over 1418947.23 frames.], batch size: 20, lr: 3.87e-04 2022-05-27 16:29:58,684 INFO [train.py:842] (0/4) Epoch 14, batch 6100, loss[loss=0.1783, simple_loss=0.2509, pruned_loss=0.05286, over 7065.00 frames.], tot_loss[loss=0.1968, simple_loss=0.2799, pruned_loss=0.05682, over 1422101.13 frames.], batch size: 18, lr: 3.87e-04 2022-05-27 16:30:38,138 INFO [train.py:842] (0/4) Epoch 14, batch 6150, loss[loss=0.1638, simple_loss=0.2477, pruned_loss=0.03995, over 6807.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2794, pruned_loss=0.0568, over 1422148.52 frames.], batch size: 15, lr: 3.87e-04 2022-05-27 16:31:17,313 INFO [train.py:842] (0/4) Epoch 14, batch 6200, loss[loss=0.1864, simple_loss=0.2692, pruned_loss=0.05181, over 7278.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2796, pruned_loss=0.05663, over 1415984.41 frames.], batch size: 17, lr: 3.87e-04 2022-05-27 16:31:56,494 INFO [train.py:842] (0/4) Epoch 14, batch 6250, loss[loss=0.1966, simple_loss=0.2969, pruned_loss=0.04821, over 7207.00 frames.], tot_loss[loss=0.1972, simple_loss=0.2804, pruned_loss=0.05695, over 1417189.93 frames.], batch size: 22, lr: 3.87e-04 2022-05-27 16:32:35,091 INFO [train.py:842] (0/4) Epoch 14, batch 6300, loss[loss=0.1413, simple_loss=0.2293, pruned_loss=0.02669, over 7282.00 frames.], tot_loss[loss=0.1975, simple_loss=0.2814, pruned_loss=0.05683, over 1417766.21 frames.], batch size: 17, lr: 3.87e-04 2022-05-27 16:33:14,419 INFO [train.py:842] (0/4) Epoch 14, batch 6350, loss[loss=0.1711, simple_loss=0.2675, pruned_loss=0.03737, over 6478.00 frames.], tot_loss[loss=0.1967, simple_loss=0.2804, pruned_loss=0.05653, over 1417169.52 frames.], batch size: 38, lr: 3.87e-04 2022-05-27 16:33:53,374 INFO [train.py:842] (0/4) Epoch 14, batch 6400, loss[loss=0.2024, simple_loss=0.2891, pruned_loss=0.05783, over 7117.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2798, pruned_loss=0.05658, over 1419512.07 frames.], batch size: 21, lr: 3.87e-04 2022-05-27 16:34:32,584 INFO [train.py:842] (0/4) Epoch 14, batch 6450, loss[loss=0.1757, simple_loss=0.2523, pruned_loss=0.04957, over 7297.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2803, pruned_loss=0.05631, over 1422006.01 frames.], batch size: 17, lr: 3.87e-04 2022-05-27 16:35:11,318 INFO [train.py:842] (0/4) Epoch 14, batch 6500, loss[loss=0.2028, simple_loss=0.2947, pruned_loss=0.05547, over 4424.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2833, pruned_loss=0.05805, over 1414975.00 frames.], batch size: 52, lr: 3.87e-04 2022-05-27 16:35:50,515 INFO [train.py:842] (0/4) Epoch 14, batch 6550, loss[loss=0.1838, simple_loss=0.2602, pruned_loss=0.05369, over 7228.00 frames.], tot_loss[loss=0.1993, simple_loss=0.2827, pruned_loss=0.05792, over 1416221.06 frames.], batch size: 21, lr: 3.87e-04 2022-05-27 16:36:29,258 INFO [train.py:842] (0/4) Epoch 14, batch 6600, loss[loss=0.1725, simple_loss=0.2544, pruned_loss=0.04533, over 7327.00 frames.], tot_loss[loss=0.1989, simple_loss=0.2823, pruned_loss=0.05774, over 1414302.12 frames.], batch size: 20, lr: 3.87e-04 2022-05-27 16:37:08,471 INFO [train.py:842] (0/4) Epoch 14, batch 6650, loss[loss=0.1763, simple_loss=0.268, pruned_loss=0.04235, over 7136.00 frames.], tot_loss[loss=0.2, simple_loss=0.2832, pruned_loss=0.05839, over 1414840.42 frames.], batch size: 20, lr: 3.86e-04 2022-05-27 16:37:47,798 
INFO [train.py:842] (0/4) Epoch 14, batch 6700, loss[loss=0.182, simple_loss=0.2643, pruned_loss=0.04986, over 7316.00 frames.], tot_loss[loss=0.199, simple_loss=0.2822, pruned_loss=0.05789, over 1418675.48 frames.], batch size: 20, lr: 3.86e-04 2022-05-27 16:38:27,391 INFO [train.py:842] (0/4) Epoch 14, batch 6750, loss[loss=0.1839, simple_loss=0.2649, pruned_loss=0.05146, over 7258.00 frames.], tot_loss[loss=0.1993, simple_loss=0.2823, pruned_loss=0.05818, over 1417773.25 frames.], batch size: 19, lr: 3.86e-04 2022-05-27 16:39:06,433 INFO [train.py:842] (0/4) Epoch 14, batch 6800, loss[loss=0.1838, simple_loss=0.2639, pruned_loss=0.05184, over 7165.00 frames.], tot_loss[loss=0.1983, simple_loss=0.2811, pruned_loss=0.05775, over 1413376.88 frames.], batch size: 19, lr: 3.86e-04 2022-05-27 16:39:45,772 INFO [train.py:842] (0/4) Epoch 14, batch 6850, loss[loss=0.2545, simple_loss=0.3323, pruned_loss=0.0884, over 7194.00 frames.], tot_loss[loss=0.1981, simple_loss=0.2815, pruned_loss=0.05741, over 1413850.71 frames.], batch size: 22, lr: 3.86e-04 2022-05-27 16:40:24,688 INFO [train.py:842] (0/4) Epoch 14, batch 6900, loss[loss=0.1596, simple_loss=0.2354, pruned_loss=0.04185, over 7133.00 frames.], tot_loss[loss=0.1986, simple_loss=0.2819, pruned_loss=0.0577, over 1414866.93 frames.], batch size: 17, lr: 3.86e-04 2022-05-27 16:41:03,809 INFO [train.py:842] (0/4) Epoch 14, batch 6950, loss[loss=0.1862, simple_loss=0.2754, pruned_loss=0.04847, over 6438.00 frames.], tot_loss[loss=0.1989, simple_loss=0.2819, pruned_loss=0.05789, over 1417691.73 frames.], batch size: 38, lr: 3.86e-04 2022-05-27 16:41:42,267 INFO [train.py:842] (0/4) Epoch 14, batch 7000, loss[loss=0.177, simple_loss=0.248, pruned_loss=0.053, over 7279.00 frames.], tot_loss[loss=0.1985, simple_loss=0.2815, pruned_loss=0.05773, over 1418606.81 frames.], batch size: 18, lr: 3.86e-04 2022-05-27 16:42:21,226 INFO [train.py:842] (0/4) Epoch 14, batch 7050, loss[loss=0.2016, simple_loss=0.2858, pruned_loss=0.05872, over 7063.00 frames.], tot_loss[loss=0.1986, simple_loss=0.2818, pruned_loss=0.05769, over 1417436.42 frames.], batch size: 18, lr: 3.86e-04 2022-05-27 16:43:00,166 INFO [train.py:842] (0/4) Epoch 14, batch 7100, loss[loss=0.2109, simple_loss=0.2866, pruned_loss=0.06766, over 7415.00 frames.], tot_loss[loss=0.1988, simple_loss=0.2818, pruned_loss=0.05794, over 1419297.25 frames.], batch size: 18, lr: 3.86e-04 2022-05-27 16:43:39,115 INFO [train.py:842] (0/4) Epoch 14, batch 7150, loss[loss=0.2088, simple_loss=0.2903, pruned_loss=0.06363, over 7066.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2838, pruned_loss=0.05919, over 1420807.30 frames.], batch size: 18, lr: 3.86e-04 2022-05-27 16:44:18,101 INFO [train.py:842] (0/4) Epoch 14, batch 7200, loss[loss=0.2, simple_loss=0.2969, pruned_loss=0.05155, over 7313.00 frames.], tot_loss[loss=0.1993, simple_loss=0.2824, pruned_loss=0.05816, over 1422438.64 frames.], batch size: 21, lr: 3.86e-04 2022-05-27 16:44:57,219 INFO [train.py:842] (0/4) Epoch 14, batch 7250, loss[loss=0.2006, simple_loss=0.2825, pruned_loss=0.05937, over 7323.00 frames.], tot_loss[loss=0.1987, simple_loss=0.2818, pruned_loss=0.05774, over 1422522.66 frames.], batch size: 25, lr: 3.86e-04 2022-05-27 16:45:35,822 INFO [train.py:842] (0/4) Epoch 14, batch 7300, loss[loss=0.162, simple_loss=0.2525, pruned_loss=0.03574, over 7063.00 frames.], tot_loss[loss=0.1994, simple_loss=0.2831, pruned_loss=0.05785, over 1425533.78 frames.], batch size: 18, lr: 3.85e-04 2022-05-27 16:46:14,801 INFO [train.py:842] (0/4) 
Epoch 14, batch 7350, loss[loss=0.2028, simple_loss=0.2862, pruned_loss=0.05974, over 7128.00 frames.], tot_loss[loss=0.1986, simple_loss=0.2825, pruned_loss=0.05736, over 1428308.01 frames.], batch size: 28, lr: 3.85e-04 2022-05-27 16:46:53,861 INFO [train.py:842] (0/4) Epoch 14, batch 7400, loss[loss=0.155, simple_loss=0.2411, pruned_loss=0.03441, over 7280.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2813, pruned_loss=0.05645, over 1429641.89 frames.], batch size: 17, lr: 3.85e-04 2022-05-27 16:47:33,351 INFO [train.py:842] (0/4) Epoch 14, batch 7450, loss[loss=0.1933, simple_loss=0.2879, pruned_loss=0.04936, over 7378.00 frames.], tot_loss[loss=0.197, simple_loss=0.281, pruned_loss=0.05648, over 1425165.50 frames.], batch size: 23, lr: 3.85e-04 2022-05-27 16:48:12,176 INFO [train.py:842] (0/4) Epoch 14, batch 7500, loss[loss=0.2591, simple_loss=0.3166, pruned_loss=0.1008, over 7170.00 frames.], tot_loss[loss=0.1971, simple_loss=0.281, pruned_loss=0.05664, over 1422209.13 frames.], batch size: 18, lr: 3.85e-04 2022-05-27 16:48:51,392 INFO [train.py:842] (0/4) Epoch 14, batch 7550, loss[loss=0.1609, simple_loss=0.2415, pruned_loss=0.0402, over 7270.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2807, pruned_loss=0.05612, over 1422661.43 frames.], batch size: 17, lr: 3.85e-04 2022-05-27 16:49:30,421 INFO [train.py:842] (0/4) Epoch 14, batch 7600, loss[loss=0.1451, simple_loss=0.2239, pruned_loss=0.03313, over 7245.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2797, pruned_loss=0.05556, over 1424283.93 frames.], batch size: 16, lr: 3.85e-04 2022-05-27 16:50:09,771 INFO [train.py:842] (0/4) Epoch 14, batch 7650, loss[loss=0.1817, simple_loss=0.2772, pruned_loss=0.04309, over 7154.00 frames.], tot_loss[loss=0.1951, simple_loss=0.2795, pruned_loss=0.05539, over 1426143.33 frames.], batch size: 20, lr: 3.85e-04 2022-05-27 16:50:49,105 INFO [train.py:842] (0/4) Epoch 14, batch 7700, loss[loss=0.2246, simple_loss=0.3035, pruned_loss=0.07288, over 7224.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2809, pruned_loss=0.05661, over 1424988.08 frames.], batch size: 16, lr: 3.85e-04 2022-05-27 16:51:28,081 INFO [train.py:842] (0/4) Epoch 14, batch 7750, loss[loss=0.1803, simple_loss=0.2477, pruned_loss=0.05645, over 7281.00 frames.], tot_loss[loss=0.1993, simple_loss=0.2828, pruned_loss=0.0579, over 1419659.97 frames.], batch size: 18, lr: 3.85e-04 2022-05-27 16:52:06,967 INFO [train.py:842] (0/4) Epoch 14, batch 7800, loss[loss=0.1817, simple_loss=0.2623, pruned_loss=0.05055, over 7259.00 frames.], tot_loss[loss=0.1994, simple_loss=0.2826, pruned_loss=0.0581, over 1417655.11 frames.], batch size: 18, lr: 3.85e-04 2022-05-27 16:52:45,709 INFO [train.py:842] (0/4) Epoch 14, batch 7850, loss[loss=0.1928, simple_loss=0.2773, pruned_loss=0.05418, over 7266.00 frames.], tot_loss[loss=0.1979, simple_loss=0.2814, pruned_loss=0.05716, over 1415467.39 frames.], batch size: 18, lr: 3.85e-04 2022-05-27 16:53:24,463 INFO [train.py:842] (0/4) Epoch 14, batch 7900, loss[loss=0.2362, simple_loss=0.3124, pruned_loss=0.08001, over 7132.00 frames.], tot_loss[loss=0.1989, simple_loss=0.2822, pruned_loss=0.05778, over 1416567.61 frames.], batch size: 26, lr: 3.85e-04 2022-05-27 16:54:03,645 INFO [train.py:842] (0/4) Epoch 14, batch 7950, loss[loss=0.1919, simple_loss=0.2802, pruned_loss=0.05185, over 7218.00 frames.], tot_loss[loss=0.1984, simple_loss=0.2819, pruned_loss=0.05744, over 1415572.43 frames.], batch size: 21, lr: 3.85e-04 2022-05-27 16:54:42,303 INFO [train.py:842] (0/4) Epoch 14, batch 8000, 
loss[loss=0.2153, simple_loss=0.3033, pruned_loss=0.06366, over 7142.00 frames.], tot_loss[loss=0.199, simple_loss=0.2823, pruned_loss=0.05787, over 1411323.19 frames.], batch size: 20, lr: 3.84e-04 2022-05-27 16:55:20,886 INFO [train.py:842] (0/4) Epoch 14, batch 8050, loss[loss=0.209, simple_loss=0.2929, pruned_loss=0.06253, over 7417.00 frames.], tot_loss[loss=0.1998, simple_loss=0.2829, pruned_loss=0.0583, over 1409400.80 frames.], batch size: 21, lr: 3.84e-04 2022-05-27 16:55:59,553 INFO [train.py:842] (0/4) Epoch 14, batch 8100, loss[loss=0.2123, simple_loss=0.2897, pruned_loss=0.06745, over 7421.00 frames.], tot_loss[loss=0.1998, simple_loss=0.2831, pruned_loss=0.05824, over 1413593.47 frames.], batch size: 20, lr: 3.84e-04 2022-05-27 16:56:38,987 INFO [train.py:842] (0/4) Epoch 14, batch 8150, loss[loss=0.21, simple_loss=0.3063, pruned_loss=0.05686, over 7329.00 frames.], tot_loss[loss=0.1997, simple_loss=0.2828, pruned_loss=0.05832, over 1410647.46 frames.], batch size: 22, lr: 3.84e-04 2022-05-27 16:57:17,972 INFO [train.py:842] (0/4) Epoch 14, batch 8200, loss[loss=0.1565, simple_loss=0.2395, pruned_loss=0.03669, over 7274.00 frames.], tot_loss[loss=0.1996, simple_loss=0.283, pruned_loss=0.05811, over 1415537.56 frames.], batch size: 17, lr: 3.84e-04 2022-05-27 16:57:56,780 INFO [train.py:842] (0/4) Epoch 14, batch 8250, loss[loss=0.2058, simple_loss=0.2977, pruned_loss=0.05692, over 7218.00 frames.], tot_loss[loss=0.199, simple_loss=0.2827, pruned_loss=0.05766, over 1416449.98 frames.], batch size: 21, lr: 3.84e-04 2022-05-27 16:58:35,690 INFO [train.py:842] (0/4) Epoch 14, batch 8300, loss[loss=0.1783, simple_loss=0.2723, pruned_loss=0.04209, over 7221.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2802, pruned_loss=0.05641, over 1421461.38 frames.], batch size: 21, lr: 3.84e-04 2022-05-27 16:59:14,911 INFO [train.py:842] (0/4) Epoch 14, batch 8350, loss[loss=0.2361, simple_loss=0.3149, pruned_loss=0.07865, over 7310.00 frames.], tot_loss[loss=0.1966, simple_loss=0.2805, pruned_loss=0.05634, over 1420360.79 frames.], batch size: 25, lr: 3.84e-04 2022-05-27 16:59:53,730 INFO [train.py:842] (0/4) Epoch 14, batch 8400, loss[loss=0.1812, simple_loss=0.2648, pruned_loss=0.04883, over 7278.00 frames.], tot_loss[loss=0.1961, simple_loss=0.2796, pruned_loss=0.05627, over 1417482.36 frames.], batch size: 24, lr: 3.84e-04 2022-05-27 17:00:32,569 INFO [train.py:842] (0/4) Epoch 14, batch 8450, loss[loss=0.1863, simple_loss=0.2755, pruned_loss=0.04853, over 7156.00 frames.], tot_loss[loss=0.196, simple_loss=0.28, pruned_loss=0.05605, over 1420000.27 frames.], batch size: 20, lr: 3.84e-04 2022-05-27 17:01:11,195 INFO [train.py:842] (0/4) Epoch 14, batch 8500, loss[loss=0.214, simple_loss=0.2903, pruned_loss=0.0688, over 6799.00 frames.], tot_loss[loss=0.1968, simple_loss=0.2804, pruned_loss=0.05665, over 1420823.21 frames.], batch size: 31, lr: 3.84e-04 2022-05-27 17:01:14,618 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-128000.pt 2022-05-27 17:01:53,166 INFO [train.py:842] (0/4) Epoch 14, batch 8550, loss[loss=0.176, simple_loss=0.2753, pruned_loss=0.03839, over 6517.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2797, pruned_loss=0.05661, over 1415531.53 frames.], batch size: 37, lr: 3.84e-04 2022-05-27 17:02:32,120 INFO [train.py:842] (0/4) Epoch 14, batch 8600, loss[loss=0.1904, simple_loss=0.2749, pruned_loss=0.05292, over 7406.00 frames.], tot_loss[loss=0.1945, simple_loss=0.278, pruned_loss=0.05551, over 1418216.11 
frames.], batch size: 18, lr: 3.84e-04 2022-05-27 17:03:11,220 INFO [train.py:842] (0/4) Epoch 14, batch 8650, loss[loss=0.174, simple_loss=0.2504, pruned_loss=0.04884, over 6814.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2779, pruned_loss=0.05563, over 1420639.49 frames.], batch size: 15, lr: 3.83e-04 2022-05-27 17:03:49,929 INFO [train.py:842] (0/4) Epoch 14, batch 8700, loss[loss=0.1856, simple_loss=0.271, pruned_loss=0.05012, over 7152.00 frames.], tot_loss[loss=0.1963, simple_loss=0.2795, pruned_loss=0.05655, over 1418721.39 frames.], batch size: 19, lr: 3.83e-04 2022-05-27 17:04:29,259 INFO [train.py:842] (0/4) Epoch 14, batch 8750, loss[loss=0.186, simple_loss=0.2795, pruned_loss=0.04627, over 7224.00 frames.], tot_loss[loss=0.1956, simple_loss=0.2791, pruned_loss=0.05604, over 1417069.22 frames.], batch size: 21, lr: 3.83e-04 2022-05-27 17:05:08,080 INFO [train.py:842] (0/4) Epoch 14, batch 8800, loss[loss=0.2034, simple_loss=0.2947, pruned_loss=0.05607, over 7225.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2799, pruned_loss=0.05661, over 1413646.59 frames.], batch size: 21, lr: 3.83e-04 2022-05-27 17:05:47,075 INFO [train.py:842] (0/4) Epoch 14, batch 8850, loss[loss=0.1724, simple_loss=0.2705, pruned_loss=0.03713, over 7066.00 frames.], tot_loss[loss=0.1952, simple_loss=0.2785, pruned_loss=0.05594, over 1404834.61 frames.], batch size: 18, lr: 3.83e-04 2022-05-27 17:06:26,051 INFO [train.py:842] (0/4) Epoch 14, batch 8900, loss[loss=0.2205, simple_loss=0.2928, pruned_loss=0.07411, over 6755.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2769, pruned_loss=0.05579, over 1403502.01 frames.], batch size: 31, lr: 3.83e-04 2022-05-27 17:07:05,162 INFO [train.py:842] (0/4) Epoch 14, batch 8950, loss[loss=0.1734, simple_loss=0.2591, pruned_loss=0.04383, over 7256.00 frames.], tot_loss[loss=0.1958, simple_loss=0.2784, pruned_loss=0.0566, over 1406176.19 frames.], batch size: 19, lr: 3.83e-04 2022-05-27 17:07:44,227 INFO [train.py:842] (0/4) Epoch 14, batch 9000, loss[loss=0.2274, simple_loss=0.3059, pruned_loss=0.07442, over 7112.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2772, pruned_loss=0.05604, over 1407132.50 frames.], batch size: 28, lr: 3.83e-04 2022-05-27 17:07:44,229 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 17:07:53,833 INFO [train.py:871] (0/4) Epoch 14, validation: loss=0.1693, simple_loss=0.2692, pruned_loss=0.03464, over 868885.00 frames. 
2022-05-27 17:08:33,166 INFO [train.py:842] (0/4) Epoch 14, batch 9050, loss[loss=0.2498, simple_loss=0.318, pruned_loss=0.09083, over 5024.00 frames.], tot_loss[loss=0.193, simple_loss=0.2753, pruned_loss=0.05537, over 1397512.23 frames.], batch size: 52, lr: 3.83e-04 2022-05-27 17:09:11,603 INFO [train.py:842] (0/4) Epoch 14, batch 9100, loss[loss=0.2514, simple_loss=0.3215, pruned_loss=0.09065, over 5162.00 frames.], tot_loss[loss=0.1957, simple_loss=0.2776, pruned_loss=0.05686, over 1378537.42 frames.], batch size: 53, lr: 3.83e-04 2022-05-27 17:09:49,562 INFO [train.py:842] (0/4) Epoch 14, batch 9150, loss[loss=0.2146, simple_loss=0.2889, pruned_loss=0.07016, over 4761.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2821, pruned_loss=0.06008, over 1315954.74 frames.], batch size: 53, lr: 3.83e-04 2022-05-27 17:10:21,658 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-14.pt 2022-05-27 17:10:39,680 INFO [train.py:842] (0/4) Epoch 15, batch 0, loss[loss=0.2137, simple_loss=0.3113, pruned_loss=0.05799, over 7004.00 frames.], tot_loss[loss=0.2137, simple_loss=0.3113, pruned_loss=0.05799, over 7004.00 frames.], batch size: 28, lr: 3.71e-04 2022-05-27 17:11:18,908 INFO [train.py:842] (0/4) Epoch 15, batch 50, loss[loss=0.2283, simple_loss=0.3052, pruned_loss=0.07568, over 4906.00 frames.], tot_loss[loss=0.2009, simple_loss=0.2832, pruned_loss=0.05925, over 321514.61 frames.], batch size: 53, lr: 3.71e-04 2022-05-27 17:11:57,686 INFO [train.py:842] (0/4) Epoch 15, batch 100, loss[loss=0.1825, simple_loss=0.2729, pruned_loss=0.04606, over 7151.00 frames.], tot_loss[loss=0.2011, simple_loss=0.2839, pruned_loss=0.05916, over 568297.79 frames.], batch size: 18, lr: 3.71e-04 2022-05-27 17:12:36,382 INFO [train.py:842] (0/4) Epoch 15, batch 150, loss[loss=0.1736, simple_loss=0.2733, pruned_loss=0.0369, over 7114.00 frames.], tot_loss[loss=0.2, simple_loss=0.2841, pruned_loss=0.05798, over 758520.15 frames.], batch size: 21, lr: 3.71e-04 2022-05-27 17:13:15,162 INFO [train.py:842] (0/4) Epoch 15, batch 200, loss[loss=0.1714, simple_loss=0.2587, pruned_loss=0.04203, over 7339.00 frames.], tot_loss[loss=0.2001, simple_loss=0.284, pruned_loss=0.05809, over 902473.49 frames.], batch size: 20, lr: 3.71e-04 2022-05-27 17:13:54,164 INFO [train.py:842] (0/4) Epoch 15, batch 250, loss[loss=0.2193, simple_loss=0.2987, pruned_loss=0.06992, over 6169.00 frames.], tot_loss[loss=0.1975, simple_loss=0.2819, pruned_loss=0.05658, over 1019057.02 frames.], batch size: 37, lr: 3.71e-04 2022-05-27 17:14:33,337 INFO [train.py:842] (0/4) Epoch 15, batch 300, loss[loss=0.196, simple_loss=0.2775, pruned_loss=0.05721, over 7139.00 frames.], tot_loss[loss=0.1969, simple_loss=0.281, pruned_loss=0.05638, over 1109863.70 frames.], batch size: 17, lr: 3.71e-04 2022-05-27 17:15:12,264 INFO [train.py:842] (0/4) Epoch 15, batch 350, loss[loss=0.2255, simple_loss=0.2926, pruned_loss=0.07919, over 7258.00 frames.], tot_loss[loss=0.1988, simple_loss=0.2818, pruned_loss=0.05783, over 1171928.24 frames.], batch size: 16, lr: 3.70e-04 2022-05-27 17:15:51,589 INFO [train.py:842] (0/4) Epoch 15, batch 400, loss[loss=0.2237, simple_loss=0.3094, pruned_loss=0.06895, over 7143.00 frames.], tot_loss[loss=0.1959, simple_loss=0.2797, pruned_loss=0.05604, over 1227313.89 frames.], batch size: 20, lr: 3.70e-04 2022-05-27 17:16:30,696 INFO [train.py:842] (0/4) Epoch 15, batch 450, loss[loss=0.2039, simple_loss=0.2765, pruned_loss=0.06564, over 7161.00 frames.], tot_loss[loss=0.195, 
simple_loss=0.2789, pruned_loss=0.05558, over 1271755.48 frames.], batch size: 19, lr: 3.70e-04 2022-05-27 17:17:09,294 INFO [train.py:842] (0/4) Epoch 15, batch 500, loss[loss=0.1861, simple_loss=0.2707, pruned_loss=0.05074, over 7440.00 frames.], tot_loss[loss=0.1962, simple_loss=0.2805, pruned_loss=0.05592, over 1303154.24 frames.], batch size: 20, lr: 3.70e-04 2022-05-27 17:17:48,646 INFO [train.py:842] (0/4) Epoch 15, batch 550, loss[loss=0.166, simple_loss=0.2403, pruned_loss=0.04588, over 7282.00 frames.], tot_loss[loss=0.1959, simple_loss=0.2802, pruned_loss=0.05575, over 1332579.31 frames.], batch size: 18, lr: 3.70e-04 2022-05-27 17:18:27,557 INFO [train.py:842] (0/4) Epoch 15, batch 600, loss[loss=0.2484, simple_loss=0.329, pruned_loss=0.08394, over 7239.00 frames.], tot_loss[loss=0.1973, simple_loss=0.2813, pruned_loss=0.05668, over 1355183.40 frames.], batch size: 20, lr: 3.70e-04 2022-05-27 17:19:06,384 INFO [train.py:842] (0/4) Epoch 15, batch 650, loss[loss=0.1769, simple_loss=0.2713, pruned_loss=0.04128, over 7342.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2811, pruned_loss=0.05642, over 1369473.47 frames.], batch size: 22, lr: 3.70e-04 2022-05-27 17:19:45,093 INFO [train.py:842] (0/4) Epoch 15, batch 700, loss[loss=0.1922, simple_loss=0.2695, pruned_loss=0.05747, over 7331.00 frames.], tot_loss[loss=0.1962, simple_loss=0.2804, pruned_loss=0.05601, over 1382666.88 frames.], batch size: 20, lr: 3.70e-04 2022-05-27 17:20:24,129 INFO [train.py:842] (0/4) Epoch 15, batch 750, loss[loss=0.2413, simple_loss=0.3174, pruned_loss=0.0826, over 7349.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2803, pruned_loss=0.05636, over 1390672.11 frames.], batch size: 22, lr: 3.70e-04 2022-05-27 17:21:03,156 INFO [train.py:842] (0/4) Epoch 15, batch 800, loss[loss=0.1636, simple_loss=0.2591, pruned_loss=0.034, over 7340.00 frames.], tot_loss[loss=0.1959, simple_loss=0.2801, pruned_loss=0.05592, over 1399386.27 frames.], batch size: 22, lr: 3.70e-04 2022-05-27 17:21:42,294 INFO [train.py:842] (0/4) Epoch 15, batch 850, loss[loss=0.1489, simple_loss=0.23, pruned_loss=0.03386, over 7153.00 frames.], tot_loss[loss=0.1948, simple_loss=0.2793, pruned_loss=0.0552, over 1402555.69 frames.], batch size: 17, lr: 3.70e-04 2022-05-27 17:22:21,034 INFO [train.py:842] (0/4) Epoch 15, batch 900, loss[loss=0.1696, simple_loss=0.2552, pruned_loss=0.042, over 7250.00 frames.], tot_loss[loss=0.1944, simple_loss=0.2788, pruned_loss=0.05496, over 1396986.87 frames.], batch size: 19, lr: 3.70e-04 2022-05-27 17:22:59,781 INFO [train.py:842] (0/4) Epoch 15, batch 950, loss[loss=0.1882, simple_loss=0.2848, pruned_loss=0.04578, over 7348.00 frames.], tot_loss[loss=0.1959, simple_loss=0.2803, pruned_loss=0.05573, over 1406007.25 frames.], batch size: 22, lr: 3.70e-04 2022-05-27 17:23:38,673 INFO [train.py:842] (0/4) Epoch 15, batch 1000, loss[loss=0.2056, simple_loss=0.2992, pruned_loss=0.05601, over 7004.00 frames.], tot_loss[loss=0.197, simple_loss=0.2813, pruned_loss=0.05636, over 1406343.89 frames.], batch size: 28, lr: 3.70e-04 2022-05-27 17:24:17,870 INFO [train.py:842] (0/4) Epoch 15, batch 1050, loss[loss=0.1836, simple_loss=0.2708, pruned_loss=0.04817, over 7279.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2811, pruned_loss=0.05637, over 1411978.22 frames.], batch size: 18, lr: 3.70e-04 2022-05-27 17:24:57,071 INFO [train.py:842] (0/4) Epoch 15, batch 1100, loss[loss=0.1535, simple_loss=0.2251, pruned_loss=0.04098, over 7266.00 frames.], tot_loss[loss=0.1978, simple_loss=0.2814, 
pruned_loss=0.05708, over 1415456.48 frames.], batch size: 17, lr: 3.69e-04 2022-05-27 17:25:36,295 INFO [train.py:842] (0/4) Epoch 15, batch 1150, loss[loss=0.1754, simple_loss=0.2706, pruned_loss=0.04014, over 7418.00 frames.], tot_loss[loss=0.1968, simple_loss=0.2803, pruned_loss=0.05668, over 1420781.82 frames.], batch size: 21, lr: 3.69e-04 2022-05-27 17:26:15,262 INFO [train.py:842] (0/4) Epoch 15, batch 1200, loss[loss=0.1683, simple_loss=0.2561, pruned_loss=0.04027, over 7429.00 frames.], tot_loss[loss=0.1955, simple_loss=0.2791, pruned_loss=0.05596, over 1422605.28 frames.], batch size: 20, lr: 3.69e-04 2022-05-27 17:26:54,524 INFO [train.py:842] (0/4) Epoch 15, batch 1250, loss[loss=0.2351, simple_loss=0.304, pruned_loss=0.08311, over 7350.00 frames.], tot_loss[loss=0.1967, simple_loss=0.2801, pruned_loss=0.05666, over 1426094.81 frames.], batch size: 19, lr: 3.69e-04 2022-05-27 17:27:33,148 INFO [train.py:842] (0/4) Epoch 15, batch 1300, loss[loss=0.2531, simple_loss=0.3291, pruned_loss=0.08856, over 6348.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2801, pruned_loss=0.05701, over 1419307.94 frames.], batch size: 37, lr: 3.69e-04 2022-05-27 17:28:12,409 INFO [train.py:842] (0/4) Epoch 15, batch 1350, loss[loss=0.1735, simple_loss=0.26, pruned_loss=0.04351, over 7011.00 frames.], tot_loss[loss=0.1983, simple_loss=0.2816, pruned_loss=0.05752, over 1421284.96 frames.], batch size: 16, lr: 3.69e-04 2022-05-27 17:28:51,238 INFO [train.py:842] (0/4) Epoch 15, batch 1400, loss[loss=0.1849, simple_loss=0.2733, pruned_loss=0.04826, over 7293.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2806, pruned_loss=0.05683, over 1420829.67 frames.], batch size: 24, lr: 3.69e-04 2022-05-27 17:29:30,186 INFO [train.py:842] (0/4) Epoch 15, batch 1450, loss[loss=0.2056, simple_loss=0.2995, pruned_loss=0.05587, over 7380.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2809, pruned_loss=0.05662, over 1418802.72 frames.], batch size: 23, lr: 3.69e-04 2022-05-27 17:30:08,795 INFO [train.py:842] (0/4) Epoch 15, batch 1500, loss[loss=0.2355, simple_loss=0.3175, pruned_loss=0.07678, over 7137.00 frames.], tot_loss[loss=0.1976, simple_loss=0.2815, pruned_loss=0.05686, over 1413258.57 frames.], batch size: 20, lr: 3.69e-04 2022-05-27 17:30:48,093 INFO [train.py:842] (0/4) Epoch 15, batch 1550, loss[loss=0.2081, simple_loss=0.3004, pruned_loss=0.05783, over 7112.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2807, pruned_loss=0.05661, over 1417618.61 frames.], batch size: 21, lr: 3.69e-04 2022-05-27 17:31:26,953 INFO [train.py:842] (0/4) Epoch 15, batch 1600, loss[loss=0.1921, simple_loss=0.2841, pruned_loss=0.05002, over 7406.00 frames.], tot_loss[loss=0.1959, simple_loss=0.2798, pruned_loss=0.05604, over 1419376.27 frames.], batch size: 21, lr: 3.69e-04 2022-05-27 17:32:05,920 INFO [train.py:842] (0/4) Epoch 15, batch 1650, loss[loss=0.1905, simple_loss=0.2799, pruned_loss=0.05057, over 7201.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2788, pruned_loss=0.05518, over 1424619.23 frames.], batch size: 23, lr: 3.69e-04 2022-05-27 17:32:44,946 INFO [train.py:842] (0/4) Epoch 15, batch 1700, loss[loss=0.2051, simple_loss=0.2974, pruned_loss=0.05639, over 7307.00 frames.], tot_loss[loss=0.1922, simple_loss=0.2767, pruned_loss=0.05388, over 1428153.35 frames.], batch size: 25, lr: 3.69e-04 2022-05-27 17:33:24,382 INFO [train.py:842] (0/4) Epoch 15, batch 1750, loss[loss=0.2805, simple_loss=0.3501, pruned_loss=0.1054, over 7059.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2774, pruned_loss=0.05478, 
over 1430813.39 frames.], batch size: 28, lr: 3.69e-04 2022-05-27 17:34:03,117 INFO [train.py:842] (0/4) Epoch 15, batch 1800, loss[loss=0.1401, simple_loss=0.2196, pruned_loss=0.03029, over 7288.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2774, pruned_loss=0.05476, over 1427371.99 frames.], batch size: 17, lr: 3.68e-04 2022-05-27 17:34:42,351 INFO [train.py:842] (0/4) Epoch 15, batch 1850, loss[loss=0.1727, simple_loss=0.2666, pruned_loss=0.03941, over 7163.00 frames.], tot_loss[loss=0.1956, simple_loss=0.2794, pruned_loss=0.0559, over 1431678.31 frames.], batch size: 18, lr: 3.68e-04 2022-05-27 17:35:21,428 INFO [train.py:842] (0/4) Epoch 15, batch 1900, loss[loss=0.1996, simple_loss=0.2998, pruned_loss=0.04976, over 7105.00 frames.], tot_loss[loss=0.1957, simple_loss=0.2798, pruned_loss=0.05582, over 1431046.75 frames.], batch size: 21, lr: 3.68e-04 2022-05-27 17:36:00,615 INFO [train.py:842] (0/4) Epoch 15, batch 1950, loss[loss=0.1662, simple_loss=0.2471, pruned_loss=0.04264, over 7280.00 frames.], tot_loss[loss=0.1951, simple_loss=0.279, pruned_loss=0.05556, over 1431136.35 frames.], batch size: 18, lr: 3.68e-04 2022-05-27 17:36:39,351 INFO [train.py:842] (0/4) Epoch 15, batch 2000, loss[loss=0.1997, simple_loss=0.286, pruned_loss=0.05673, over 6468.00 frames.], tot_loss[loss=0.1955, simple_loss=0.2792, pruned_loss=0.05586, over 1427298.79 frames.], batch size: 38, lr: 3.68e-04 2022-05-27 17:37:18,697 INFO [train.py:842] (0/4) Epoch 15, batch 2050, loss[loss=0.191, simple_loss=0.2772, pruned_loss=0.05238, over 7277.00 frames.], tot_loss[loss=0.1959, simple_loss=0.2795, pruned_loss=0.05613, over 1428663.34 frames.], batch size: 25, lr: 3.68e-04 2022-05-27 17:37:57,584 INFO [train.py:842] (0/4) Epoch 15, batch 2100, loss[loss=0.176, simple_loss=0.2503, pruned_loss=0.05084, over 7402.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2789, pruned_loss=0.0559, over 1422058.41 frames.], batch size: 18, lr: 3.68e-04 2022-05-27 17:38:36,689 INFO [train.py:842] (0/4) Epoch 15, batch 2150, loss[loss=0.2738, simple_loss=0.3369, pruned_loss=0.1053, over 7201.00 frames.], tot_loss[loss=0.1948, simple_loss=0.2787, pruned_loss=0.05543, over 1420393.27 frames.], batch size: 22, lr: 3.68e-04 2022-05-27 17:39:15,642 INFO [train.py:842] (0/4) Epoch 15, batch 2200, loss[loss=0.2054, simple_loss=0.2831, pruned_loss=0.06384, over 7426.00 frames.], tot_loss[loss=0.1949, simple_loss=0.2789, pruned_loss=0.05548, over 1420989.87 frames.], batch size: 20, lr: 3.68e-04 2022-05-27 17:39:54,618 INFO [train.py:842] (0/4) Epoch 15, batch 2250, loss[loss=0.1903, simple_loss=0.2809, pruned_loss=0.04984, over 7116.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2793, pruned_loss=0.05577, over 1421849.18 frames.], batch size: 28, lr: 3.68e-04 2022-05-27 17:40:33,414 INFO [train.py:842] (0/4) Epoch 15, batch 2300, loss[loss=0.1748, simple_loss=0.246, pruned_loss=0.05176, over 6851.00 frames.], tot_loss[loss=0.1959, simple_loss=0.2794, pruned_loss=0.05616, over 1421554.31 frames.], batch size: 15, lr: 3.68e-04 2022-05-27 17:41:12,436 INFO [train.py:842] (0/4) Epoch 15, batch 2350, loss[loss=0.1546, simple_loss=0.2285, pruned_loss=0.04033, over 7417.00 frames.], tot_loss[loss=0.195, simple_loss=0.2786, pruned_loss=0.0557, over 1424220.00 frames.], batch size: 18, lr: 3.68e-04 2022-05-27 17:41:51,228 INFO [train.py:842] (0/4) Epoch 15, batch 2400, loss[loss=0.1666, simple_loss=0.2465, pruned_loss=0.04331, over 7396.00 frames.], tot_loss[loss=0.196, simple_loss=0.2795, pruned_loss=0.05623, over 1422116.29 frames.], 
batch size: 18, lr: 3.68e-04 2022-05-27 17:42:30,536 INFO [train.py:842] (0/4) Epoch 15, batch 2450, loss[loss=0.1825, simple_loss=0.2673, pruned_loss=0.04883, over 7419.00 frames.], tot_loss[loss=0.1962, simple_loss=0.2795, pruned_loss=0.05642, over 1422829.65 frames.], batch size: 21, lr: 3.68e-04 2022-05-27 17:43:09,561 INFO [train.py:842] (0/4) Epoch 15, batch 2500, loss[loss=0.1841, simple_loss=0.2794, pruned_loss=0.04442, over 7313.00 frames.], tot_loss[loss=0.1958, simple_loss=0.2794, pruned_loss=0.0561, over 1424365.24 frames.], batch size: 21, lr: 3.67e-04 2022-05-27 17:43:48,435 INFO [train.py:842] (0/4) Epoch 15, batch 2550, loss[loss=0.1519, simple_loss=0.2392, pruned_loss=0.03226, over 7165.00 frames.], tot_loss[loss=0.1951, simple_loss=0.2788, pruned_loss=0.05575, over 1428494.34 frames.], batch size: 18, lr: 3.67e-04 2022-05-27 17:44:27,021 INFO [train.py:842] (0/4) Epoch 15, batch 2600, loss[loss=0.1917, simple_loss=0.2755, pruned_loss=0.05395, over 7198.00 frames.], tot_loss[loss=0.1962, simple_loss=0.2797, pruned_loss=0.0563, over 1420665.16 frames.], batch size: 23, lr: 3.67e-04 2022-05-27 17:45:06,243 INFO [train.py:842] (0/4) Epoch 15, batch 2650, loss[loss=0.196, simple_loss=0.2795, pruned_loss=0.05628, over 7320.00 frames.], tot_loss[loss=0.1968, simple_loss=0.2803, pruned_loss=0.05661, over 1421529.13 frames.], batch size: 25, lr: 3.67e-04 2022-05-27 17:45:45,118 INFO [train.py:842] (0/4) Epoch 15, batch 2700, loss[loss=0.1931, simple_loss=0.2917, pruned_loss=0.04724, over 7310.00 frames.], tot_loss[loss=0.1964, simple_loss=0.28, pruned_loss=0.05636, over 1424315.89 frames.], batch size: 21, lr: 3.67e-04 2022-05-27 17:46:24,232 INFO [train.py:842] (0/4) Epoch 15, batch 2750, loss[loss=0.1996, simple_loss=0.2799, pruned_loss=0.05963, over 7257.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2792, pruned_loss=0.05585, over 1424587.28 frames.], batch size: 24, lr: 3.67e-04 2022-05-27 17:47:03,176 INFO [train.py:842] (0/4) Epoch 15, batch 2800, loss[loss=0.1915, simple_loss=0.281, pruned_loss=0.05104, over 7152.00 frames.], tot_loss[loss=0.1944, simple_loss=0.2785, pruned_loss=0.05514, over 1428087.71 frames.], batch size: 20, lr: 3.67e-04 2022-05-27 17:47:42,026 INFO [train.py:842] (0/4) Epoch 15, batch 2850, loss[loss=0.1751, simple_loss=0.2483, pruned_loss=0.05093, over 6811.00 frames.], tot_loss[loss=0.1943, simple_loss=0.2788, pruned_loss=0.05491, over 1428054.51 frames.], batch size: 15, lr: 3.67e-04 2022-05-27 17:48:20,926 INFO [train.py:842] (0/4) Epoch 15, batch 2900, loss[loss=0.2252, simple_loss=0.3133, pruned_loss=0.06854, over 7380.00 frames.], tot_loss[loss=0.1945, simple_loss=0.2788, pruned_loss=0.0551, over 1423658.07 frames.], batch size: 23, lr: 3.67e-04 2022-05-27 17:49:00,330 INFO [train.py:842] (0/4) Epoch 15, batch 2950, loss[loss=0.1491, simple_loss=0.2341, pruned_loss=0.03203, over 7430.00 frames.], tot_loss[loss=0.1945, simple_loss=0.2786, pruned_loss=0.05519, over 1424800.36 frames.], batch size: 20, lr: 3.67e-04 2022-05-27 17:49:39,740 INFO [train.py:842] (0/4) Epoch 15, batch 3000, loss[loss=0.1777, simple_loss=0.2566, pruned_loss=0.04938, over 7162.00 frames.], tot_loss[loss=0.193, simple_loss=0.2774, pruned_loss=0.05429, over 1422744.19 frames.], batch size: 19, lr: 3.67e-04 2022-05-27 17:49:39,741 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 17:49:49,312 INFO [train.py:871] (0/4) Epoch 15, validation: loss=0.1691, simple_loss=0.2695, pruned_loss=0.03437, over 868885.00 frames. 
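Since the numbers of interest (smoothed training loss, validation loss, learning rate) are buried inside long lines like the ones above, it can help to parse them out for plotting. The sketch below extracts the per-batch entries using exactly the format visible in this log; the file name and the assumption that each entry sits on its own line are mine, not something the recipe provides.

```python
import re

# Per-batch training entries, in the format visible in this log, e.g.
#   Epoch 15, batch 3000, loss[...], tot_loss[loss=0.193, ...,
#   over 1422744.19 frames.], batch size: 19, lr: 3.67e-04
TRAIN_RE = re.compile(
    r"Epoch (?P<epoch>\d+), batch (?P<batch>\d+), loss\[.*?\], "
    r"tot_loss\[loss=(?P<tot_loss>[\d.]+),.*?over (?P<frames>[\d.]+) frames\.\], "
    r"batch size: (?P<bsize>\d+), lr: (?P<lr>[\d.e+-]+)"
)

def parse_train_log(path: str):
    """Yield (epoch, batch, smoothed tot_loss, lr) from an icefall-style training log."""
    with open(path) as f:
        for line in f:
            m = TRAIN_RE.search(line)
            if m:
                yield (int(m["epoch"]), int(m["batch"]),
                       float(m["tot_loss"]), float(m["lr"]))

# Usage (hypothetical file name):
# for epoch, batch, tot_loss, lr in parse_train_log("log-train.txt"):
#     print(epoch, batch, tot_loss, lr)
```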
2022-05-27 17:50:28,466 INFO [train.py:842] (0/4) Epoch 15, batch 3050, loss[loss=0.1784, simple_loss=0.2442, pruned_loss=0.05636, over 6806.00 frames.], tot_loss[loss=0.1936, simple_loss=0.2776, pruned_loss=0.05483, over 1425513.04 frames.], batch size: 15, lr: 3.67e-04 2022-05-27 17:51:07,308 INFO [train.py:842] (0/4) Epoch 15, batch 3100, loss[loss=0.1716, simple_loss=0.2667, pruned_loss=0.03827, over 7330.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2777, pruned_loss=0.05461, over 1422170.14 frames.], batch size: 20, lr: 3.67e-04 2022-05-27 17:51:46,476 INFO [train.py:842] (0/4) Epoch 15, batch 3150, loss[loss=0.1548, simple_loss=0.2346, pruned_loss=0.0375, over 7291.00 frames.], tot_loss[loss=0.1938, simple_loss=0.2776, pruned_loss=0.05495, over 1427427.48 frames.], batch size: 17, lr: 3.67e-04 2022-05-27 17:52:25,459 INFO [train.py:842] (0/4) Epoch 15, batch 3200, loss[loss=0.2436, simple_loss=0.3295, pruned_loss=0.07888, over 7100.00 frames.], tot_loss[loss=0.1937, simple_loss=0.2772, pruned_loss=0.05508, over 1427275.58 frames.], batch size: 28, lr: 3.66e-04 2022-05-27 17:53:04,613 INFO [train.py:842] (0/4) Epoch 15, batch 3250, loss[loss=0.1646, simple_loss=0.2475, pruned_loss=0.04086, over 7071.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2771, pruned_loss=0.05489, over 1427856.12 frames.], batch size: 18, lr: 3.66e-04 2022-05-27 17:53:43,714 INFO [train.py:842] (0/4) Epoch 15, batch 3300, loss[loss=0.1651, simple_loss=0.2405, pruned_loss=0.04487, over 7265.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2765, pruned_loss=0.0547, over 1426900.90 frames.], batch size: 17, lr: 3.66e-04 2022-05-27 17:54:22,782 INFO [train.py:842] (0/4) Epoch 15, batch 3350, loss[loss=0.1766, simple_loss=0.2667, pruned_loss=0.04323, over 7210.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2771, pruned_loss=0.05489, over 1426745.79 frames.], batch size: 23, lr: 3.66e-04 2022-05-27 17:55:01,356 INFO [train.py:842] (0/4) Epoch 15, batch 3400, loss[loss=0.2189, simple_loss=0.3, pruned_loss=0.06892, over 7224.00 frames.], tot_loss[loss=0.1944, simple_loss=0.2784, pruned_loss=0.05519, over 1424067.55 frames.], batch size: 21, lr: 3.66e-04 2022-05-27 17:55:40,271 INFO [train.py:842] (0/4) Epoch 15, batch 3450, loss[loss=0.1997, simple_loss=0.2911, pruned_loss=0.05416, over 7063.00 frames.], tot_loss[loss=0.1961, simple_loss=0.28, pruned_loss=0.05605, over 1421854.30 frames.], batch size: 28, lr: 3.66e-04 2022-05-27 17:56:19,156 INFO [train.py:842] (0/4) Epoch 15, batch 3500, loss[loss=0.1908, simple_loss=0.2843, pruned_loss=0.04866, over 7145.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2789, pruned_loss=0.05478, over 1426649.03 frames.], batch size: 26, lr: 3.66e-04 2022-05-27 17:56:58,060 INFO [train.py:842] (0/4) Epoch 15, batch 3550, loss[loss=0.1788, simple_loss=0.2649, pruned_loss=0.0464, over 7225.00 frames.], tot_loss[loss=0.1943, simple_loss=0.2788, pruned_loss=0.05493, over 1427589.41 frames.], batch size: 20, lr: 3.66e-04 2022-05-27 17:57:36,713 INFO [train.py:842] (0/4) Epoch 15, batch 3600, loss[loss=0.2153, simple_loss=0.3079, pruned_loss=0.0614, over 7336.00 frames.], tot_loss[loss=0.1958, simple_loss=0.28, pruned_loss=0.05583, over 1423582.55 frames.], batch size: 21, lr: 3.66e-04 2022-05-27 17:58:15,828 INFO [train.py:842] (0/4) Epoch 15, batch 3650, loss[loss=0.1817, simple_loss=0.2696, pruned_loss=0.04691, over 7269.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2812, pruned_loss=0.0565, over 1424748.90 frames.], batch size: 19, lr: 3.66e-04 2022-05-27 17:58:54,659 INFO 
[train.py:842] (0/4) Epoch 15, batch 3700, loss[loss=0.2581, simple_loss=0.3316, pruned_loss=0.0923, over 7431.00 frames.], tot_loss[loss=0.1967, simple_loss=0.2806, pruned_loss=0.05638, over 1420839.48 frames.], batch size: 20, lr: 3.66e-04 2022-05-27 17:59:34,199 INFO [train.py:842] (0/4) Epoch 15, batch 3750, loss[loss=0.2426, simple_loss=0.3217, pruned_loss=0.08174, over 4627.00 frames.], tot_loss[loss=0.1962, simple_loss=0.2797, pruned_loss=0.05634, over 1422750.25 frames.], batch size: 52, lr: 3.66e-04 2022-05-27 18:00:12,935 INFO [train.py:842] (0/4) Epoch 15, batch 3800, loss[loss=0.1845, simple_loss=0.2619, pruned_loss=0.05359, over 7447.00 frames.], tot_loss[loss=0.1961, simple_loss=0.2799, pruned_loss=0.05612, over 1425745.78 frames.], batch size: 19, lr: 3.66e-04 2022-05-27 18:00:51,697 INFO [train.py:842] (0/4) Epoch 15, batch 3850, loss[loss=0.208, simple_loss=0.289, pruned_loss=0.06353, over 7246.00 frames.], tot_loss[loss=0.1955, simple_loss=0.28, pruned_loss=0.05549, over 1428197.90 frames.], batch size: 20, lr: 3.66e-04 2022-05-27 18:01:30,648 INFO [train.py:842] (0/4) Epoch 15, batch 3900, loss[loss=0.1506, simple_loss=0.2375, pruned_loss=0.03186, over 7254.00 frames.], tot_loss[loss=0.195, simple_loss=0.279, pruned_loss=0.05552, over 1426842.10 frames.], batch size: 19, lr: 3.66e-04 2022-05-27 18:02:19,756 INFO [train.py:842] (0/4) Epoch 15, batch 3950, loss[loss=0.1945, simple_loss=0.2876, pruned_loss=0.05069, over 7139.00 frames.], tot_loss[loss=0.1952, simple_loss=0.2788, pruned_loss=0.05582, over 1422039.32 frames.], batch size: 20, lr: 3.65e-04 2022-05-27 18:02:58,476 INFO [train.py:842] (0/4) Epoch 15, batch 4000, loss[loss=0.1915, simple_loss=0.2627, pruned_loss=0.06019, over 7144.00 frames.], tot_loss[loss=0.1967, simple_loss=0.2803, pruned_loss=0.05657, over 1422502.25 frames.], batch size: 17, lr: 3.65e-04 2022-05-27 18:03:37,543 INFO [train.py:842] (0/4) Epoch 15, batch 4050, loss[loss=0.1962, simple_loss=0.2816, pruned_loss=0.05535, over 6512.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2809, pruned_loss=0.05669, over 1426576.66 frames.], batch size: 38, lr: 3.65e-04 2022-05-27 18:04:16,174 INFO [train.py:842] (0/4) Epoch 15, batch 4100, loss[loss=0.1809, simple_loss=0.2731, pruned_loss=0.04436, over 7409.00 frames.], tot_loss[loss=0.1979, simple_loss=0.2817, pruned_loss=0.05708, over 1421755.56 frames.], batch size: 21, lr: 3.65e-04 2022-05-27 18:04:55,255 INFO [train.py:842] (0/4) Epoch 15, batch 4150, loss[loss=0.155, simple_loss=0.2333, pruned_loss=0.03837, over 7419.00 frames.], tot_loss[loss=0.1976, simple_loss=0.2817, pruned_loss=0.05682, over 1423634.19 frames.], batch size: 18, lr: 3.65e-04 2022-05-27 18:05:33,906 INFO [train.py:842] (0/4) Epoch 15, batch 4200, loss[loss=0.2347, simple_loss=0.3175, pruned_loss=0.07596, over 7387.00 frames.], tot_loss[loss=0.198, simple_loss=0.2818, pruned_loss=0.05709, over 1417532.58 frames.], batch size: 23, lr: 3.65e-04 2022-05-27 18:06:13,503 INFO [train.py:842] (0/4) Epoch 15, batch 4250, loss[loss=0.235, simple_loss=0.3132, pruned_loss=0.07836, over 7290.00 frames.], tot_loss[loss=0.1987, simple_loss=0.2822, pruned_loss=0.05756, over 1417953.23 frames.], batch size: 24, lr: 3.65e-04 2022-05-27 18:06:52,318 INFO [train.py:842] (0/4) Epoch 15, batch 4300, loss[loss=0.2067, simple_loss=0.285, pruned_loss=0.06421, over 7312.00 frames.], tot_loss[loss=0.197, simple_loss=0.2801, pruned_loss=0.05689, over 1415879.30 frames.], batch size: 25, lr: 3.65e-04 2022-05-27 18:07:31,747 INFO [train.py:842] (0/4) Epoch 15, 
batch 4350, loss[loss=0.1924, simple_loss=0.2702, pruned_loss=0.0573, over 7174.00 frames.], tot_loss[loss=0.1963, simple_loss=0.2797, pruned_loss=0.05645, over 1419527.90 frames.], batch size: 18, lr: 3.65e-04 2022-05-27 18:08:10,588 INFO [train.py:842] (0/4) Epoch 15, batch 4400, loss[loss=0.1867, simple_loss=0.2664, pruned_loss=0.05346, over 7275.00 frames.], tot_loss[loss=0.197, simple_loss=0.2806, pruned_loss=0.05672, over 1418801.04 frames.], batch size: 18, lr: 3.65e-04 2022-05-27 18:08:49,793 INFO [train.py:842] (0/4) Epoch 15, batch 4450, loss[loss=0.2063, simple_loss=0.3043, pruned_loss=0.05412, over 7413.00 frames.], tot_loss[loss=0.1966, simple_loss=0.2802, pruned_loss=0.05648, over 1419526.38 frames.], batch size: 21, lr: 3.65e-04 2022-05-27 18:09:28,915 INFO [train.py:842] (0/4) Epoch 15, batch 4500, loss[loss=0.2124, simple_loss=0.2945, pruned_loss=0.06513, over 7289.00 frames.], tot_loss[loss=0.1974, simple_loss=0.2812, pruned_loss=0.05684, over 1423287.76 frames.], batch size: 25, lr: 3.65e-04 2022-05-27 18:10:07,902 INFO [train.py:842] (0/4) Epoch 15, batch 4550, loss[loss=0.1849, simple_loss=0.2626, pruned_loss=0.05356, over 7325.00 frames.], tot_loss[loss=0.1963, simple_loss=0.2809, pruned_loss=0.05584, over 1426020.87 frames.], batch size: 20, lr: 3.65e-04 2022-05-27 18:10:46,889 INFO [train.py:842] (0/4) Epoch 15, batch 4600, loss[loss=0.2191, simple_loss=0.2928, pruned_loss=0.07268, over 7218.00 frames.], tot_loss[loss=0.1968, simple_loss=0.2809, pruned_loss=0.05638, over 1426782.83 frames.], batch size: 21, lr: 3.65e-04 2022-05-27 18:11:26,083 INFO [train.py:842] (0/4) Epoch 15, batch 4650, loss[loss=0.191, simple_loss=0.2833, pruned_loss=0.04934, over 6797.00 frames.], tot_loss[loss=0.1961, simple_loss=0.2802, pruned_loss=0.05605, over 1426457.60 frames.], batch size: 31, lr: 3.64e-04 2022-05-27 18:12:04,839 INFO [train.py:842] (0/4) Epoch 15, batch 4700, loss[loss=0.1914, simple_loss=0.2772, pruned_loss=0.05279, over 7151.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2805, pruned_loss=0.05629, over 1430121.34 frames.], batch size: 20, lr: 3.64e-04 2022-05-27 18:12:44,012 INFO [train.py:842] (0/4) Epoch 15, batch 4750, loss[loss=0.2253, simple_loss=0.2745, pruned_loss=0.08806, over 7283.00 frames.], tot_loss[loss=0.1977, simple_loss=0.2814, pruned_loss=0.05696, over 1430235.83 frames.], batch size: 17, lr: 3.64e-04 2022-05-27 18:13:22,832 INFO [train.py:842] (0/4) Epoch 15, batch 4800, loss[loss=0.2112, simple_loss=0.2931, pruned_loss=0.06468, over 6742.00 frames.], tot_loss[loss=0.196, simple_loss=0.2797, pruned_loss=0.05616, over 1428157.12 frames.], batch size: 31, lr: 3.64e-04 2022-05-27 18:14:02,053 INFO [train.py:842] (0/4) Epoch 15, batch 4850, loss[loss=0.2325, simple_loss=0.3219, pruned_loss=0.07154, over 7134.00 frames.], tot_loss[loss=0.1957, simple_loss=0.2793, pruned_loss=0.05605, over 1427045.86 frames.], batch size: 28, lr: 3.64e-04 2022-05-27 18:14:40,906 INFO [train.py:842] (0/4) Epoch 15, batch 4900, loss[loss=0.2054, simple_loss=0.2829, pruned_loss=0.06394, over 7157.00 frames.], tot_loss[loss=0.1949, simple_loss=0.2788, pruned_loss=0.05554, over 1429067.53 frames.], batch size: 19, lr: 3.64e-04 2022-05-27 18:15:20,253 INFO [train.py:842] (0/4) Epoch 15, batch 4950, loss[loss=0.1867, simple_loss=0.2805, pruned_loss=0.04642, over 7152.00 frames.], tot_loss[loss=0.1941, simple_loss=0.2782, pruned_loss=0.05501, over 1430080.37 frames.], batch size: 19, lr: 3.64e-04 2022-05-27 18:15:58,915 INFO [train.py:842] (0/4) Epoch 15, batch 5000, 
loss[loss=0.2048, simple_loss=0.2915, pruned_loss=0.05904, over 7156.00 frames.], tot_loss[loss=0.1945, simple_loss=0.2791, pruned_loss=0.05497, over 1429228.74 frames.], batch size: 26, lr: 3.64e-04 2022-05-27 18:16:38,101 INFO [train.py:842] (0/4) Epoch 15, batch 5050, loss[loss=0.1887, simple_loss=0.288, pruned_loss=0.0447, over 7233.00 frames.], tot_loss[loss=0.1944, simple_loss=0.279, pruned_loss=0.05485, over 1431022.15 frames.], batch size: 20, lr: 3.64e-04 2022-05-27 18:17:17,305 INFO [train.py:842] (0/4) Epoch 15, batch 5100, loss[loss=0.1708, simple_loss=0.2564, pruned_loss=0.04253, over 7431.00 frames.], tot_loss[loss=0.1927, simple_loss=0.2772, pruned_loss=0.05405, over 1429494.22 frames.], batch size: 20, lr: 3.64e-04 2022-05-27 18:17:56,523 INFO [train.py:842] (0/4) Epoch 15, batch 5150, loss[loss=0.2382, simple_loss=0.3192, pruned_loss=0.07863, over 7377.00 frames.], tot_loss[loss=0.1928, simple_loss=0.2776, pruned_loss=0.054, over 1425531.07 frames.], batch size: 23, lr: 3.64e-04 2022-05-27 18:18:35,554 INFO [train.py:842] (0/4) Epoch 15, batch 5200, loss[loss=0.1602, simple_loss=0.2567, pruned_loss=0.03183, over 6790.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2778, pruned_loss=0.05455, over 1425271.41 frames.], batch size: 31, lr: 3.64e-04 2022-05-27 18:19:14,515 INFO [train.py:842] (0/4) Epoch 15, batch 5250, loss[loss=0.1933, simple_loss=0.2763, pruned_loss=0.05518, over 7153.00 frames.], tot_loss[loss=0.1957, simple_loss=0.2793, pruned_loss=0.05603, over 1423534.47 frames.], batch size: 19, lr: 3.64e-04 2022-05-27 18:19:53,399 INFO [train.py:842] (0/4) Epoch 15, batch 5300, loss[loss=0.2339, simple_loss=0.2979, pruned_loss=0.08497, over 7337.00 frames.], tot_loss[loss=0.196, simple_loss=0.2792, pruned_loss=0.05639, over 1419016.51 frames.], batch size: 20, lr: 3.64e-04 2022-05-27 18:20:32,605 INFO [train.py:842] (0/4) Epoch 15, batch 5350, loss[loss=0.1876, simple_loss=0.2636, pruned_loss=0.05581, over 7262.00 frames.], tot_loss[loss=0.1957, simple_loss=0.2791, pruned_loss=0.05613, over 1420544.86 frames.], batch size: 19, lr: 3.64e-04 2022-05-27 18:21:11,514 INFO [train.py:842] (0/4) Epoch 15, batch 5400, loss[loss=0.1581, simple_loss=0.2475, pruned_loss=0.03434, over 7360.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2787, pruned_loss=0.05606, over 1421840.82 frames.], batch size: 19, lr: 3.63e-04 2022-05-27 18:21:50,497 INFO [train.py:842] (0/4) Epoch 15, batch 5450, loss[loss=0.1763, simple_loss=0.2679, pruned_loss=0.04234, over 7332.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2776, pruned_loss=0.05538, over 1426647.11 frames.], batch size: 22, lr: 3.63e-04 2022-05-27 18:22:29,572 INFO [train.py:842] (0/4) Epoch 15, batch 5500, loss[loss=0.2255, simple_loss=0.3029, pruned_loss=0.07403, over 7202.00 frames.], tot_loss[loss=0.1936, simple_loss=0.2772, pruned_loss=0.05499, over 1426285.83 frames.], batch size: 22, lr: 3.63e-04 2022-05-27 18:23:08,883 INFO [train.py:842] (0/4) Epoch 15, batch 5550, loss[loss=0.1707, simple_loss=0.2603, pruned_loss=0.04051, over 7250.00 frames.], tot_loss[loss=0.1924, simple_loss=0.276, pruned_loss=0.05439, over 1429216.74 frames.], batch size: 19, lr: 3.63e-04 2022-05-27 18:23:47,678 INFO [train.py:842] (0/4) Epoch 15, batch 5600, loss[loss=0.2354, simple_loss=0.3154, pruned_loss=0.07773, over 7193.00 frames.], tot_loss[loss=0.1939, simple_loss=0.2775, pruned_loss=0.05511, over 1425752.18 frames.], batch size: 22, lr: 3.63e-04 2022-05-27 18:24:26,881 INFO [train.py:842] (0/4) Epoch 15, batch 5650, loss[loss=0.2672, 
simple_loss=0.3282, pruned_loss=0.1031, over 7161.00 frames.], tot_loss[loss=0.1933, simple_loss=0.2768, pruned_loss=0.05493, over 1423807.44 frames.], batch size: 18, lr: 3.63e-04 2022-05-27 18:25:05,576 INFO [train.py:842] (0/4) Epoch 15, batch 5700, loss[loss=0.1541, simple_loss=0.2493, pruned_loss=0.02946, over 7423.00 frames.], tot_loss[loss=0.1943, simple_loss=0.2783, pruned_loss=0.05514, over 1421921.18 frames.], batch size: 20, lr: 3.63e-04 2022-05-27 18:25:44,760 INFO [train.py:842] (0/4) Epoch 15, batch 5750, loss[loss=0.1844, simple_loss=0.2755, pruned_loss=0.04667, over 7021.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2783, pruned_loss=0.05511, over 1424878.90 frames.], batch size: 28, lr: 3.63e-04 2022-05-27 18:26:23,925 INFO [train.py:842] (0/4) Epoch 15, batch 5800, loss[loss=0.1855, simple_loss=0.2703, pruned_loss=0.05029, over 7333.00 frames.], tot_loss[loss=0.1937, simple_loss=0.2779, pruned_loss=0.05471, over 1427140.46 frames.], batch size: 20, lr: 3.63e-04 2022-05-27 18:27:03,386 INFO [train.py:842] (0/4) Epoch 15, batch 5850, loss[loss=0.1925, simple_loss=0.2859, pruned_loss=0.04952, over 7356.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2772, pruned_loss=0.05462, over 1426099.90 frames.], batch size: 19, lr: 3.63e-04 2022-05-27 18:27:42,356 INFO [train.py:842] (0/4) Epoch 15, batch 5900, loss[loss=0.1839, simple_loss=0.274, pruned_loss=0.04685, over 7138.00 frames.], tot_loss[loss=0.192, simple_loss=0.2759, pruned_loss=0.05407, over 1424824.03 frames.], batch size: 20, lr: 3.63e-04 2022-05-27 18:28:21,552 INFO [train.py:842] (0/4) Epoch 15, batch 5950, loss[loss=0.2183, simple_loss=0.2982, pruned_loss=0.06924, over 7322.00 frames.], tot_loss[loss=0.1951, simple_loss=0.278, pruned_loss=0.0561, over 1423285.69 frames.], batch size: 20, lr: 3.63e-04 2022-05-27 18:29:00,290 INFO [train.py:842] (0/4) Epoch 15, batch 6000, loss[loss=0.2672, simple_loss=0.3122, pruned_loss=0.1111, over 7273.00 frames.], tot_loss[loss=0.1959, simple_loss=0.2788, pruned_loss=0.05647, over 1421477.17 frames.], batch size: 18, lr: 3.63e-04 2022-05-27 18:29:00,291 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 18:29:10,459 INFO [train.py:871] (0/4) Epoch 15, validation: loss=0.1678, simple_loss=0.2677, pruned_loss=0.03392, over 868885.00 frames. 
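The lr column in this section drifts down slowly with the batch count (from 3.90e-04 near the top to 3.60e-04 by the end) and also steps down at the epoch boundary (3.83e-04 at the end of epoch 14 versus 3.71e-04 at epoch 15, batch 0), i.e. the schedule decays in both a batch and an epoch dimension. The function below sketches an Eden-style schedule of that shape, as used in icefall's pruned-transducer recipes; treat the exact form and any constants you plug in as assumptions rather than a transcript of the recipe's scheduler.

```python
def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float, lr_epochs: float) -> float:
    """Eden-style learning rate: smooth decay in both the global batch count
    and the epoch count. Illustrative sketch; parameter values are up to the
    caller and are not taken from this log."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor
```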
2022-05-27 18:29:49,703 INFO [train.py:842] (0/4) Epoch 15, batch 6050, loss[loss=0.1793, simple_loss=0.2655, pruned_loss=0.04654, over 7010.00 frames.], tot_loss[loss=0.1945, simple_loss=0.2779, pruned_loss=0.05552, over 1425334.29 frames.], batch size: 28, lr: 3.63e-04 2022-05-27 18:30:28,765 INFO [train.py:842] (0/4) Epoch 15, batch 6100, loss[loss=0.1824, simple_loss=0.272, pruned_loss=0.04639, over 7218.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2783, pruned_loss=0.05544, over 1426960.44 frames.], batch size: 21, lr: 3.63e-04 2022-05-27 18:31:08,004 INFO [train.py:842] (0/4) Epoch 15, batch 6150, loss[loss=0.3205, simple_loss=0.3732, pruned_loss=0.1339, over 5222.00 frames.], tot_loss[loss=0.1953, simple_loss=0.279, pruned_loss=0.05584, over 1427742.29 frames.], batch size: 52, lr: 3.62e-04 2022-05-27 18:31:47,192 INFO [train.py:842] (0/4) Epoch 15, batch 6200, loss[loss=0.2002, simple_loss=0.2727, pruned_loss=0.06384, over 7197.00 frames.], tot_loss[loss=0.1945, simple_loss=0.2782, pruned_loss=0.0554, over 1423550.55 frames.], batch size: 23, lr: 3.62e-04 2022-05-27 18:32:26,309 INFO [train.py:842] (0/4) Epoch 15, batch 6250, loss[loss=0.193, simple_loss=0.2789, pruned_loss=0.05357, over 7198.00 frames.], tot_loss[loss=0.1948, simple_loss=0.2784, pruned_loss=0.05557, over 1420482.30 frames.], batch size: 22, lr: 3.62e-04 2022-05-27 18:33:04,924 INFO [train.py:842] (0/4) Epoch 15, batch 6300, loss[loss=0.2012, simple_loss=0.2763, pruned_loss=0.06309, over 7195.00 frames.], tot_loss[loss=0.1945, simple_loss=0.2782, pruned_loss=0.0554, over 1419864.42 frames.], batch size: 26, lr: 3.62e-04 2022-05-27 18:33:54,388 INFO [train.py:842] (0/4) Epoch 15, batch 6350, loss[loss=0.1687, simple_loss=0.2478, pruned_loss=0.04478, over 6789.00 frames.], tot_loss[loss=0.1948, simple_loss=0.2787, pruned_loss=0.05543, over 1421829.20 frames.], batch size: 15, lr: 3.62e-04 2022-05-27 18:34:43,501 INFO [train.py:842] (0/4) Epoch 15, batch 6400, loss[loss=0.1901, simple_loss=0.27, pruned_loss=0.05515, over 7064.00 frames.], tot_loss[loss=0.1941, simple_loss=0.2779, pruned_loss=0.05514, over 1420066.10 frames.], batch size: 18, lr: 3.62e-04 2022-05-27 18:35:22,923 INFO [train.py:842] (0/4) Epoch 15, batch 6450, loss[loss=0.1943, simple_loss=0.2851, pruned_loss=0.05174, over 7112.00 frames.], tot_loss[loss=0.1937, simple_loss=0.2777, pruned_loss=0.05492, over 1423189.35 frames.], batch size: 21, lr: 3.62e-04 2022-05-27 18:36:11,917 INFO [train.py:842] (0/4) Epoch 15, batch 6500, loss[loss=0.1848, simple_loss=0.2793, pruned_loss=0.04511, over 6778.00 frames.], tot_loss[loss=0.1938, simple_loss=0.278, pruned_loss=0.05476, over 1418076.95 frames.], batch size: 31, lr: 3.62e-04 2022-05-27 18:36:50,711 INFO [train.py:842] (0/4) Epoch 15, batch 6550, loss[loss=0.1871, simple_loss=0.2781, pruned_loss=0.04805, over 6776.00 frames.], tot_loss[loss=0.1953, simple_loss=0.2797, pruned_loss=0.05543, over 1421298.93 frames.], batch size: 31, lr: 3.62e-04 2022-05-27 18:37:29,209 INFO [train.py:842] (0/4) Epoch 15, batch 6600, loss[loss=0.2183, simple_loss=0.298, pruned_loss=0.06935, over 7236.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2805, pruned_loss=0.0566, over 1418268.12 frames.], batch size: 20, lr: 3.62e-04 2022-05-27 18:38:08,183 INFO [train.py:842] (0/4) Epoch 15, batch 6650, loss[loss=0.1961, simple_loss=0.285, pruned_loss=0.05358, over 7291.00 frames.], tot_loss[loss=0.1954, simple_loss=0.279, pruned_loss=0.05587, over 1420144.64 frames.], batch size: 24, lr: 3.62e-04 2022-05-27 18:38:47,091 INFO 
[train.py:842] (0/4) Epoch 15, batch 6700, loss[loss=0.1826, simple_loss=0.2689, pruned_loss=0.0482, over 7168.00 frames.], tot_loss[loss=0.1955, simple_loss=0.2795, pruned_loss=0.05574, over 1422186.87 frames.], batch size: 18, lr: 3.62e-04 2022-05-27 18:39:26,053 INFO [train.py:842] (0/4) Epoch 15, batch 6750, loss[loss=0.1836, simple_loss=0.2689, pruned_loss=0.04919, over 7274.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2799, pruned_loss=0.05542, over 1425262.62 frames.], batch size: 18, lr: 3.62e-04 2022-05-27 18:40:05,010 INFO [train.py:842] (0/4) Epoch 15, batch 6800, loss[loss=0.1651, simple_loss=0.2485, pruned_loss=0.04082, over 7277.00 frames.], tot_loss[loss=0.1951, simple_loss=0.2797, pruned_loss=0.0552, over 1427139.18 frames.], batch size: 18, lr: 3.62e-04 2022-05-27 18:40:44,155 INFO [train.py:842] (0/4) Epoch 15, batch 6850, loss[loss=0.2196, simple_loss=0.3033, pruned_loss=0.06791, over 7368.00 frames.], tot_loss[loss=0.1964, simple_loss=0.2804, pruned_loss=0.05621, over 1425667.64 frames.], batch size: 23, lr: 3.62e-04 2022-05-27 18:41:23,583 INFO [train.py:842] (0/4) Epoch 15, batch 6900, loss[loss=0.1818, simple_loss=0.2607, pruned_loss=0.05138, over 7138.00 frames.], tot_loss[loss=0.1961, simple_loss=0.2798, pruned_loss=0.05615, over 1428019.10 frames.], batch size: 17, lr: 3.61e-04 2022-05-27 18:42:02,765 INFO [train.py:842] (0/4) Epoch 15, batch 6950, loss[loss=0.2212, simple_loss=0.3027, pruned_loss=0.06987, over 6754.00 frames.], tot_loss[loss=0.1982, simple_loss=0.2815, pruned_loss=0.0574, over 1426254.07 frames.], batch size: 31, lr: 3.61e-04 2022-05-27 18:42:41,947 INFO [train.py:842] (0/4) Epoch 15, batch 7000, loss[loss=0.1879, simple_loss=0.2742, pruned_loss=0.05076, over 7232.00 frames.], tot_loss[loss=0.197, simple_loss=0.2807, pruned_loss=0.0567, over 1428532.54 frames.], batch size: 20, lr: 3.61e-04 2022-05-27 18:43:21,061 INFO [train.py:842] (0/4) Epoch 15, batch 7050, loss[loss=0.2565, simple_loss=0.3342, pruned_loss=0.08946, over 7206.00 frames.], tot_loss[loss=0.1974, simple_loss=0.2811, pruned_loss=0.05687, over 1427435.45 frames.], batch size: 22, lr: 3.61e-04 2022-05-27 18:43:59,628 INFO [train.py:842] (0/4) Epoch 15, batch 7100, loss[loss=0.1811, simple_loss=0.2627, pruned_loss=0.04976, over 7385.00 frames.], tot_loss[loss=0.198, simple_loss=0.2817, pruned_loss=0.05713, over 1429201.64 frames.], batch size: 23, lr: 3.61e-04 2022-05-27 18:44:38,594 INFO [train.py:842] (0/4) Epoch 15, batch 7150, loss[loss=0.1761, simple_loss=0.2592, pruned_loss=0.0465, over 7230.00 frames.], tot_loss[loss=0.1979, simple_loss=0.2811, pruned_loss=0.05732, over 1425365.65 frames.], batch size: 20, lr: 3.61e-04 2022-05-27 18:45:17,407 INFO [train.py:842] (0/4) Epoch 15, batch 7200, loss[loss=0.1794, simple_loss=0.2647, pruned_loss=0.04699, over 7364.00 frames.], tot_loss[loss=0.1984, simple_loss=0.2817, pruned_loss=0.05751, over 1426049.89 frames.], batch size: 19, lr: 3.61e-04 2022-05-27 18:45:56,682 INFO [train.py:842] (0/4) Epoch 15, batch 7250, loss[loss=0.2236, simple_loss=0.299, pruned_loss=0.07407, over 7213.00 frames.], tot_loss[loss=0.1986, simple_loss=0.2816, pruned_loss=0.0578, over 1425349.74 frames.], batch size: 22, lr: 3.61e-04 2022-05-27 18:46:35,253 INFO [train.py:842] (0/4) Epoch 15, batch 7300, loss[loss=0.2134, simple_loss=0.3182, pruned_loss=0.05427, over 7302.00 frames.], tot_loss[loss=0.1975, simple_loss=0.2809, pruned_loss=0.0571, over 1424501.99 frames.], batch size: 24, lr: 3.61e-04 2022-05-27 18:46:44,745 INFO [checkpoint.py:75] (0/4) 
Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-136000.pt 2022-05-27 18:47:16,979 INFO [train.py:842] (0/4) Epoch 15, batch 7350, loss[loss=0.1983, simple_loss=0.2743, pruned_loss=0.0611, over 7377.00 frames.], tot_loss[loss=0.1973, simple_loss=0.2806, pruned_loss=0.057, over 1426427.93 frames.], batch size: 23, lr: 3.61e-04 2022-05-27 18:47:55,650 INFO [train.py:842] (0/4) Epoch 15, batch 7400, loss[loss=0.2468, simple_loss=0.3186, pruned_loss=0.08747, over 6257.00 frames.], tot_loss[loss=0.1974, simple_loss=0.2809, pruned_loss=0.05693, over 1427396.39 frames.], batch size: 38, lr: 3.61e-04 2022-05-27 18:48:34,722 INFO [train.py:842] (0/4) Epoch 15, batch 7450, loss[loss=0.176, simple_loss=0.268, pruned_loss=0.04203, over 7333.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2806, pruned_loss=0.05662, over 1427400.29 frames.], batch size: 20, lr: 3.61e-04 2022-05-27 18:49:13,763 INFO [train.py:842] (0/4) Epoch 15, batch 7500, loss[loss=0.1683, simple_loss=0.2557, pruned_loss=0.04049, over 7061.00 frames.], tot_loss[loss=0.1963, simple_loss=0.2804, pruned_loss=0.05612, over 1428133.65 frames.], batch size: 18, lr: 3.61e-04 2022-05-27 18:49:52,830 INFO [train.py:842] (0/4) Epoch 15, batch 7550, loss[loss=0.1579, simple_loss=0.2461, pruned_loss=0.03487, over 6771.00 frames.], tot_loss[loss=0.1949, simple_loss=0.279, pruned_loss=0.05538, over 1423376.65 frames.], batch size: 31, lr: 3.61e-04 2022-05-27 18:50:31,964 INFO [train.py:842] (0/4) Epoch 15, batch 7600, loss[loss=0.1513, simple_loss=0.2417, pruned_loss=0.03049, over 7355.00 frames.], tot_loss[loss=0.1957, simple_loss=0.2793, pruned_loss=0.05605, over 1422028.85 frames.], batch size: 19, lr: 3.61e-04 2022-05-27 18:51:10,941 INFO [train.py:842] (0/4) Epoch 15, batch 7650, loss[loss=0.1865, simple_loss=0.276, pruned_loss=0.04846, over 7061.00 frames.], tot_loss[loss=0.1976, simple_loss=0.281, pruned_loss=0.05712, over 1420242.00 frames.], batch size: 28, lr: 3.60e-04 2022-05-27 18:51:49,844 INFO [train.py:842] (0/4) Epoch 15, batch 7700, loss[loss=0.2016, simple_loss=0.2901, pruned_loss=0.05654, over 7096.00 frames.], tot_loss[loss=0.1955, simple_loss=0.2793, pruned_loss=0.05588, over 1420415.57 frames.], batch size: 28, lr: 3.60e-04 2022-05-27 18:52:29,118 INFO [train.py:842] (0/4) Epoch 15, batch 7750, loss[loss=0.2344, simple_loss=0.3126, pruned_loss=0.07814, over 7003.00 frames.], tot_loss[loss=0.1968, simple_loss=0.2806, pruned_loss=0.05651, over 1424951.98 frames.], batch size: 32, lr: 3.60e-04 2022-05-27 18:53:07,845 INFO [train.py:842] (0/4) Epoch 15, batch 7800, loss[loss=0.1479, simple_loss=0.2272, pruned_loss=0.03429, over 7284.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2791, pruned_loss=0.05581, over 1416915.31 frames.], batch size: 17, lr: 3.60e-04 2022-05-27 18:53:47,091 INFO [train.py:842] (0/4) Epoch 15, batch 7850, loss[loss=0.1452, simple_loss=0.24, pruned_loss=0.02521, over 6979.00 frames.], tot_loss[loss=0.1947, simple_loss=0.279, pruned_loss=0.05518, over 1420458.37 frames.], batch size: 16, lr: 3.60e-04 2022-05-27 18:54:25,871 INFO [train.py:842] (0/4) Epoch 15, batch 7900, loss[loss=0.1998, simple_loss=0.2852, pruned_loss=0.05714, over 7370.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2776, pruned_loss=0.05434, over 1423123.41 frames.], batch size: 23, lr: 3.60e-04 2022-05-27 18:55:04,758 INFO [train.py:842] (0/4) Epoch 15, batch 7950, loss[loss=0.1826, simple_loss=0.2641, pruned_loss=0.05053, over 7192.00 frames.], tot_loss[loss=0.1939, simple_loss=0.2781, pruned_loss=0.05483, 
over 1422336.94 frames.], batch size: 23, lr: 3.60e-04 2022-05-27 18:55:44,027 INFO [train.py:842] (0/4) Epoch 15, batch 8000, loss[loss=0.1729, simple_loss=0.2405, pruned_loss=0.05264, over 7236.00 frames.], tot_loss[loss=0.193, simple_loss=0.277, pruned_loss=0.05444, over 1422458.25 frames.], batch size: 16, lr: 3.60e-04 2022-05-27 18:56:22,905 INFO [train.py:842] (0/4) Epoch 15, batch 8050, loss[loss=0.2, simple_loss=0.2793, pruned_loss=0.0604, over 7301.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2773, pruned_loss=0.05453, over 1416052.81 frames.], batch size: 25, lr: 3.60e-04 2022-05-27 18:57:02,119 INFO [train.py:842] (0/4) Epoch 15, batch 8100, loss[loss=0.1719, simple_loss=0.2683, pruned_loss=0.03778, over 7230.00 frames.], tot_loss[loss=0.194, simple_loss=0.2782, pruned_loss=0.0549, over 1422801.02 frames.], batch size: 20, lr: 3.60e-04 2022-05-27 18:57:41,259 INFO [train.py:842] (0/4) Epoch 15, batch 8150, loss[loss=0.1891, simple_loss=0.2781, pruned_loss=0.05007, over 7337.00 frames.], tot_loss[loss=0.1941, simple_loss=0.2781, pruned_loss=0.05506, over 1420486.86 frames.], batch size: 20, lr: 3.60e-04 2022-05-27 18:58:20,264 INFO [train.py:842] (0/4) Epoch 15, batch 8200, loss[loss=0.1737, simple_loss=0.2717, pruned_loss=0.03782, over 7258.00 frames.], tot_loss[loss=0.1928, simple_loss=0.2766, pruned_loss=0.05445, over 1422244.96 frames.], batch size: 19, lr: 3.60e-04 2022-05-27 18:58:59,235 INFO [train.py:842] (0/4) Epoch 15, batch 8250, loss[loss=0.1959, simple_loss=0.2783, pruned_loss=0.05672, over 7261.00 frames.], tot_loss[loss=0.1947, simple_loss=0.278, pruned_loss=0.05567, over 1420584.81 frames.], batch size: 19, lr: 3.60e-04 2022-05-27 18:59:37,863 INFO [train.py:842] (0/4) Epoch 15, batch 8300, loss[loss=0.1588, simple_loss=0.248, pruned_loss=0.03485, over 7334.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2779, pruned_loss=0.05523, over 1422426.90 frames.], batch size: 20, lr: 3.60e-04 2022-05-27 19:00:17,024 INFO [train.py:842] (0/4) Epoch 15, batch 8350, loss[loss=0.199, simple_loss=0.2802, pruned_loss=0.05891, over 7354.00 frames.], tot_loss[loss=0.1941, simple_loss=0.2779, pruned_loss=0.05522, over 1422785.27 frames.], batch size: 19, lr: 3.60e-04 2022-05-27 19:00:56,012 INFO [train.py:842] (0/4) Epoch 15, batch 8400, loss[loss=0.1808, simple_loss=0.2675, pruned_loss=0.04702, over 7188.00 frames.], tot_loss[loss=0.193, simple_loss=0.2769, pruned_loss=0.05461, over 1423273.89 frames.], batch size: 26, lr: 3.59e-04 2022-05-27 19:01:34,960 INFO [train.py:842] (0/4) Epoch 15, batch 8450, loss[loss=0.2313, simple_loss=0.3063, pruned_loss=0.07814, over 7141.00 frames.], tot_loss[loss=0.1955, simple_loss=0.279, pruned_loss=0.05595, over 1422049.86 frames.], batch size: 20, lr: 3.59e-04 2022-05-27 19:02:13,485 INFO [train.py:842] (0/4) Epoch 15, batch 8500, loss[loss=0.2145, simple_loss=0.2928, pruned_loss=0.06816, over 7160.00 frames.], tot_loss[loss=0.1963, simple_loss=0.28, pruned_loss=0.05629, over 1420103.15 frames.], batch size: 18, lr: 3.59e-04 2022-05-27 19:02:52,556 INFO [train.py:842] (0/4) Epoch 15, batch 8550, loss[loss=0.1876, simple_loss=0.2902, pruned_loss=0.04248, over 7107.00 frames.], tot_loss[loss=0.1963, simple_loss=0.2799, pruned_loss=0.05633, over 1421928.02 frames.], batch size: 21, lr: 3.59e-04 2022-05-27 19:03:31,273 INFO [train.py:842] (0/4) Epoch 15, batch 8600, loss[loss=0.183, simple_loss=0.2779, pruned_loss=0.04403, over 7330.00 frames.], tot_loss[loss=0.1969, simple_loss=0.2807, pruned_loss=0.05659, over 1418615.49 frames.], batch 
size: 21, lr: 3.59e-04 2022-05-27 19:04:10,119 INFO [train.py:842] (0/4) Epoch 15, batch 8650, loss[loss=0.2325, simple_loss=0.3021, pruned_loss=0.08145, over 7315.00 frames.], tot_loss[loss=0.197, simple_loss=0.2811, pruned_loss=0.05647, over 1416656.69 frames.], batch size: 21, lr: 3.59e-04 2022-05-27 19:04:49,107 INFO [train.py:842] (0/4) Epoch 15, batch 8700, loss[loss=0.1959, simple_loss=0.2894, pruned_loss=0.05119, over 7215.00 frames.], tot_loss[loss=0.1962, simple_loss=0.2804, pruned_loss=0.05601, over 1421296.88 frames.], batch size: 22, lr: 3.59e-04 2022-05-27 19:05:28,368 INFO [train.py:842] (0/4) Epoch 15, batch 8750, loss[loss=0.1997, simple_loss=0.2783, pruned_loss=0.06053, over 6874.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2788, pruned_loss=0.05524, over 1421189.83 frames.], batch size: 31, lr: 3.59e-04 2022-05-27 19:06:07,388 INFO [train.py:842] (0/4) Epoch 15, batch 8800, loss[loss=0.2643, simple_loss=0.3284, pruned_loss=0.1001, over 7410.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2774, pruned_loss=0.05473, over 1419758.03 frames.], batch size: 21, lr: 3.59e-04 2022-05-27 19:06:46,150 INFO [train.py:842] (0/4) Epoch 15, batch 8850, loss[loss=0.2629, simple_loss=0.3278, pruned_loss=0.09897, over 6756.00 frames.], tot_loss[loss=0.1953, simple_loss=0.2794, pruned_loss=0.05562, over 1417367.32 frames.], batch size: 31, lr: 3.59e-04 2022-05-27 19:07:24,576 INFO [train.py:842] (0/4) Epoch 15, batch 8900, loss[loss=0.2256, simple_loss=0.3226, pruned_loss=0.06431, over 7334.00 frames.], tot_loss[loss=0.1955, simple_loss=0.2793, pruned_loss=0.05588, over 1406670.47 frames.], batch size: 22, lr: 3.59e-04 2022-05-27 19:08:03,434 INFO [train.py:842] (0/4) Epoch 15, batch 8950, loss[loss=0.1725, simple_loss=0.2516, pruned_loss=0.04672, over 6787.00 frames.], tot_loss[loss=0.1951, simple_loss=0.2783, pruned_loss=0.05598, over 1389856.21 frames.], batch size: 15, lr: 3.59e-04 2022-05-27 19:08:41,907 INFO [train.py:842] (0/4) Epoch 15, batch 9000, loss[loss=0.1985, simple_loss=0.2841, pruned_loss=0.05647, over 7203.00 frames.], tot_loss[loss=0.1957, simple_loss=0.2786, pruned_loss=0.05636, over 1384024.25 frames.], batch size: 23, lr: 3.59e-04 2022-05-27 19:08:41,908 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 19:08:52,122 INFO [train.py:871] (0/4) Epoch 15, validation: loss=0.1676, simple_loss=0.2672, pruned_loss=0.03401, over 868885.00 frames. 
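A reading note on the tot_loss[...] fields: their frame counts reset and climb quickly at the start of each epoch (the Epoch 16 entries further down go from roughly 321k frames at batch 50 to 1.28M by batch 450) and then level off around 1.42M instead of growing for the whole epoch, which is consistent with a decayed, frame-weighted running sum rather than a plain cumulative mean. The sketch below illustrates that interpretation only; the 200-batch decay window is an assumption chosen so that, at roughly 7,000 frames per batch, the count settles near the ~1.4M seen above, and none of this is copied from the training script.

```python
# Minimal sketch of a decayed, frame-weighted running average -- one plausible
# reading of the tot_loss / frame-count fields in this log (an assumption,
# not code taken from train.py).
class RunningLoss:
    def __init__(self, window: int = 200):
        self.decay = 1.0 - 1.0 / window
        self.loss_sum = 0.0    # decayed sum of frame-weighted loss
        self.frames = 0.0      # decayed sum of frame counts

    def update(self, loss_per_frame: float, num_frames: float) -> None:
        # decay the old statistics, then add this batch's frame-weighted loss
        self.loss_sum = self.loss_sum * self.decay + loss_per_frame * num_frames
        self.frames = self.frames * self.decay + num_frames

    @property
    def value(self) -> float:
        # reported as an average per frame, like the tot_loss[...] entries
        return self.loss_sum / max(self.frames, 1.0)
```

With a 200-batch window and ~7,000 frames per batch, the steady-state frame count is about 7,000 × 200 ≈ 1.4M, which is roughly where the counts in the entries above plateau.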
2022-05-27 19:09:30,784 INFO [train.py:842] (0/4) Epoch 15, batch 9050, loss[loss=0.1944, simple_loss=0.2883, pruned_loss=0.0502, over 7209.00 frames.], tot_loss[loss=0.1972, simple_loss=0.2798, pruned_loss=0.05726, over 1365877.99 frames.], batch size: 23, lr: 3.59e-04 2022-05-27 19:10:09,048 INFO [train.py:842] (0/4) Epoch 15, batch 9100, loss[loss=0.2299, simple_loss=0.3039, pruned_loss=0.07796, over 5009.00 frames.], tot_loss[loss=0.1995, simple_loss=0.2815, pruned_loss=0.05877, over 1325265.28 frames.], batch size: 52, lr: 3.59e-04 2022-05-27 19:10:46,728 INFO [train.py:842] (0/4) Epoch 15, batch 9150, loss[loss=0.2833, simple_loss=0.3477, pruned_loss=0.1095, over 5216.00 frames.], tot_loss[loss=0.2073, simple_loss=0.2877, pruned_loss=0.0635, over 1254222.59 frames.], batch size: 52, lr: 3.58e-04 2022-05-27 19:11:18,903 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-15.pt 2022-05-27 19:11:37,248 INFO [train.py:842] (0/4) Epoch 16, batch 0, loss[loss=0.204, simple_loss=0.2901, pruned_loss=0.05892, over 7314.00 frames.], tot_loss[loss=0.204, simple_loss=0.2901, pruned_loss=0.05892, over 7314.00 frames.], batch size: 24, lr: 3.48e-04 2022-05-27 19:12:16,180 INFO [train.py:842] (0/4) Epoch 16, batch 50, loss[loss=0.1903, simple_loss=0.2742, pruned_loss=0.05323, over 7411.00 frames.], tot_loss[loss=0.1949, simple_loss=0.2809, pruned_loss=0.05443, over 320894.97 frames.], batch size: 18, lr: 3.48e-04 2022-05-27 19:12:55,623 INFO [train.py:842] (0/4) Epoch 16, batch 100, loss[loss=0.1781, simple_loss=0.2671, pruned_loss=0.04448, over 7334.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2765, pruned_loss=0.0534, over 563625.24 frames.], batch size: 20, lr: 3.48e-04 2022-05-27 19:13:38,124 INFO [train.py:842] (0/4) Epoch 16, batch 150, loss[loss=0.1931, simple_loss=0.2794, pruned_loss=0.05336, over 7141.00 frames.], tot_loss[loss=0.1947, simple_loss=0.2789, pruned_loss=0.05529, over 753491.77 frames.], batch size: 20, lr: 3.48e-04 2022-05-27 19:14:16,660 INFO [train.py:842] (0/4) Epoch 16, batch 200, loss[loss=0.2184, simple_loss=0.3046, pruned_loss=0.06614, over 7115.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2783, pruned_loss=0.0551, over 897355.07 frames.], batch size: 21, lr: 3.48e-04 2022-05-27 19:14:56,290 INFO [train.py:842] (0/4) Epoch 16, batch 250, loss[loss=0.1805, simple_loss=0.2652, pruned_loss=0.04792, over 7141.00 frames.], tot_loss[loss=0.1926, simple_loss=0.277, pruned_loss=0.05414, over 1014154.69 frames.], batch size: 19, lr: 3.48e-04 2022-05-27 19:15:36,397 INFO [train.py:842] (0/4) Epoch 16, batch 300, loss[loss=0.2687, simple_loss=0.3187, pruned_loss=0.1094, over 7159.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2771, pruned_loss=0.05469, over 1108463.15 frames.], batch size: 19, lr: 3.48e-04 2022-05-27 19:16:18,281 INFO [train.py:842] (0/4) Epoch 16, batch 350, loss[loss=0.171, simple_loss=0.2512, pruned_loss=0.0454, over 7273.00 frames.], tot_loss[loss=0.1931, simple_loss=0.2768, pruned_loss=0.05471, over 1180092.11 frames.], batch size: 18, lr: 3.48e-04 2022-05-27 19:16:56,992 INFO [train.py:842] (0/4) Epoch 16, batch 400, loss[loss=0.1926, simple_loss=0.2774, pruned_loss=0.05388, over 7272.00 frames.], tot_loss[loss=0.1933, simple_loss=0.2773, pruned_loss=0.05462, over 1234019.11 frames.], batch size: 19, lr: 3.48e-04 2022-05-27 19:17:36,530 INFO [train.py:842] (0/4) Epoch 16, batch 450, loss[loss=0.1957, simple_loss=0.2798, pruned_loss=0.05579, over 7436.00 frames.], tot_loss[loss=0.1933, 
simple_loss=0.2775, pruned_loss=0.05457, over 1280499.26 frames.], batch size: 20, lr: 3.47e-04 2022-05-27 19:18:15,370 INFO [train.py:842] (0/4) Epoch 16, batch 500, loss[loss=0.1782, simple_loss=0.276, pruned_loss=0.04025, over 7195.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2783, pruned_loss=0.05505, over 1317372.13 frames.], batch size: 23, lr: 3.47e-04 2022-05-27 19:18:54,560 INFO [train.py:842] (0/4) Epoch 16, batch 550, loss[loss=0.1715, simple_loss=0.2438, pruned_loss=0.04962, over 7294.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2759, pruned_loss=0.0537, over 1344569.67 frames.], batch size: 18, lr: 3.47e-04 2022-05-27 19:19:33,242 INFO [train.py:842] (0/4) Epoch 16, batch 600, loss[loss=0.1824, simple_loss=0.2573, pruned_loss=0.05376, over 7162.00 frames.], tot_loss[loss=0.1927, simple_loss=0.2771, pruned_loss=0.05415, over 1360705.37 frames.], batch size: 19, lr: 3.47e-04 2022-05-27 19:20:12,186 INFO [train.py:842] (0/4) Epoch 16, batch 650, loss[loss=0.2054, simple_loss=0.2869, pruned_loss=0.06192, over 6335.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2773, pruned_loss=0.05423, over 1373803.42 frames.], batch size: 37, lr: 3.47e-04 2022-05-27 19:20:51,114 INFO [train.py:842] (0/4) Epoch 16, batch 700, loss[loss=0.2026, simple_loss=0.2933, pruned_loss=0.05591, over 7037.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2777, pruned_loss=0.05435, over 1385526.33 frames.], batch size: 28, lr: 3.47e-04 2022-05-27 19:21:33,889 INFO [train.py:842] (0/4) Epoch 16, batch 750, loss[loss=0.2065, simple_loss=0.2896, pruned_loss=0.06172, over 7148.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2776, pruned_loss=0.05462, over 1394273.18 frames.], batch size: 19, lr: 3.47e-04 2022-05-27 19:22:12,441 INFO [train.py:842] (0/4) Epoch 16, batch 800, loss[loss=0.1755, simple_loss=0.2614, pruned_loss=0.04476, over 7266.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2766, pruned_loss=0.05383, over 1401811.39 frames.], batch size: 19, lr: 3.47e-04 2022-05-27 19:22:51,477 INFO [train.py:842] (0/4) Epoch 16, batch 850, loss[loss=0.1861, simple_loss=0.2768, pruned_loss=0.04773, over 7147.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2778, pruned_loss=0.05455, over 1404140.40 frames.], batch size: 20, lr: 3.47e-04 2022-05-27 19:23:30,172 INFO [train.py:842] (0/4) Epoch 16, batch 900, loss[loss=0.1664, simple_loss=0.2466, pruned_loss=0.04308, over 7359.00 frames.], tot_loss[loss=0.195, simple_loss=0.2792, pruned_loss=0.05539, over 1402659.66 frames.], batch size: 19, lr: 3.47e-04 2022-05-27 19:24:09,377 INFO [train.py:842] (0/4) Epoch 16, batch 950, loss[loss=0.1764, simple_loss=0.2572, pruned_loss=0.04781, over 7426.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2777, pruned_loss=0.05467, over 1406521.35 frames.], batch size: 20, lr: 3.47e-04 2022-05-27 19:24:48,097 INFO [train.py:842] (0/4) Epoch 16, batch 1000, loss[loss=0.2214, simple_loss=0.3097, pruned_loss=0.06655, over 7272.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2768, pruned_loss=0.05418, over 1412659.51 frames.], batch size: 25, lr: 3.47e-04 2022-05-27 19:25:27,125 INFO [train.py:842] (0/4) Epoch 16, batch 1050, loss[loss=0.1876, simple_loss=0.2703, pruned_loss=0.05245, over 7340.00 frames.], tot_loss[loss=0.194, simple_loss=0.2781, pruned_loss=0.05495, over 1417225.73 frames.], batch size: 20, lr: 3.47e-04 2022-05-27 19:26:05,987 INFO [train.py:842] (0/4) Epoch 16, batch 1100, loss[loss=0.1951, simple_loss=0.2678, pruned_loss=0.06126, over 7352.00 frames.], tot_loss[loss=0.1933, simple_loss=0.2775, 
pruned_loss=0.05451, over 1420566.85 frames.], batch size: 19, lr: 3.47e-04 2022-05-27 19:26:45,428 INFO [train.py:842] (0/4) Epoch 16, batch 1150, loss[loss=0.2372, simple_loss=0.3097, pruned_loss=0.08236, over 4639.00 frames.], tot_loss[loss=0.1925, simple_loss=0.2765, pruned_loss=0.05424, over 1420420.88 frames.], batch size: 52, lr: 3.47e-04 2022-05-27 19:27:24,214 INFO [train.py:842] (0/4) Epoch 16, batch 1200, loss[loss=0.1745, simple_loss=0.2544, pruned_loss=0.04735, over 7117.00 frames.], tot_loss[loss=0.1937, simple_loss=0.2772, pruned_loss=0.05511, over 1417739.29 frames.], batch size: 21, lr: 3.47e-04 2022-05-27 19:28:03,577 INFO [train.py:842] (0/4) Epoch 16, batch 1250, loss[loss=0.1942, simple_loss=0.2868, pruned_loss=0.05077, over 6844.00 frames.], tot_loss[loss=0.1938, simple_loss=0.2772, pruned_loss=0.05519, over 1418509.11 frames.], batch size: 15, lr: 3.46e-04 2022-05-27 19:28:42,341 INFO [train.py:842] (0/4) Epoch 16, batch 1300, loss[loss=0.1862, simple_loss=0.2738, pruned_loss=0.04935, over 7192.00 frames.], tot_loss[loss=0.1947, simple_loss=0.2786, pruned_loss=0.05535, over 1424697.27 frames.], batch size: 22, lr: 3.46e-04 2022-05-27 19:29:21,394 INFO [train.py:842] (0/4) Epoch 16, batch 1350, loss[loss=0.1731, simple_loss=0.2676, pruned_loss=0.03927, over 7168.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2771, pruned_loss=0.05433, over 1417984.54 frames.], batch size: 19, lr: 3.46e-04 2022-05-27 19:30:00,113 INFO [train.py:842] (0/4) Epoch 16, batch 1400, loss[loss=0.1856, simple_loss=0.2722, pruned_loss=0.04947, over 7341.00 frames.], tot_loss[loss=0.1926, simple_loss=0.277, pruned_loss=0.05407, over 1416311.86 frames.], batch size: 22, lr: 3.46e-04 2022-05-27 19:30:39,298 INFO [train.py:842] (0/4) Epoch 16, batch 1450, loss[loss=0.1962, simple_loss=0.2836, pruned_loss=0.05441, over 7414.00 frames.], tot_loss[loss=0.1934, simple_loss=0.278, pruned_loss=0.05437, over 1422517.30 frames.], batch size: 21, lr: 3.46e-04 2022-05-27 19:31:18,229 INFO [train.py:842] (0/4) Epoch 16, batch 1500, loss[loss=0.1722, simple_loss=0.2711, pruned_loss=0.03669, over 7193.00 frames.], tot_loss[loss=0.1932, simple_loss=0.278, pruned_loss=0.0542, over 1422524.39 frames.], batch size: 23, lr: 3.46e-04 2022-05-27 19:31:57,500 INFO [train.py:842] (0/4) Epoch 16, batch 1550, loss[loss=0.1758, simple_loss=0.2636, pruned_loss=0.04403, over 6805.00 frames.], tot_loss[loss=0.1928, simple_loss=0.2778, pruned_loss=0.05387, over 1420864.52 frames.], batch size: 15, lr: 3.46e-04 2022-05-27 19:32:36,411 INFO [train.py:842] (0/4) Epoch 16, batch 1600, loss[loss=0.1443, simple_loss=0.2305, pruned_loss=0.02909, over 7259.00 frames.], tot_loss[loss=0.192, simple_loss=0.2772, pruned_loss=0.05341, over 1423241.44 frames.], batch size: 16, lr: 3.46e-04 2022-05-27 19:33:15,591 INFO [train.py:842] (0/4) Epoch 16, batch 1650, loss[loss=0.2372, simple_loss=0.3089, pruned_loss=0.08271, over 7141.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2776, pruned_loss=0.05383, over 1424993.98 frames.], batch size: 20, lr: 3.46e-04 2022-05-27 19:33:54,705 INFO [train.py:842] (0/4) Epoch 16, batch 1700, loss[loss=0.1539, simple_loss=0.2346, pruned_loss=0.03661, over 7416.00 frames.], tot_loss[loss=0.1921, simple_loss=0.277, pruned_loss=0.05366, over 1425645.64 frames.], batch size: 18, lr: 3.46e-04 2022-05-27 19:34:34,153 INFO [train.py:842] (0/4) Epoch 16, batch 1750, loss[loss=0.2329, simple_loss=0.3176, pruned_loss=0.07414, over 7382.00 frames.], tot_loss[loss=0.1943, simple_loss=0.2789, pruned_loss=0.05488, 
over 1424742.55 frames.], batch size: 23, lr: 3.46e-04 2022-05-27 19:35:13,027 INFO [train.py:842] (0/4) Epoch 16, batch 1800, loss[loss=0.1575, simple_loss=0.255, pruned_loss=0.03001, over 7356.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2791, pruned_loss=0.05507, over 1422828.67 frames.], batch size: 19, lr: 3.46e-04 2022-05-27 19:35:52,246 INFO [train.py:842] (0/4) Epoch 16, batch 1850, loss[loss=0.1964, simple_loss=0.2848, pruned_loss=0.05399, over 7142.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2778, pruned_loss=0.05459, over 1424952.35 frames.], batch size: 20, lr: 3.46e-04 2022-05-27 19:36:31,055 INFO [train.py:842] (0/4) Epoch 16, batch 1900, loss[loss=0.188, simple_loss=0.2769, pruned_loss=0.04953, over 7328.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2778, pruned_loss=0.05455, over 1428956.11 frames.], batch size: 25, lr: 3.46e-04 2022-05-27 19:37:10,353 INFO [train.py:842] (0/4) Epoch 16, batch 1950, loss[loss=0.1776, simple_loss=0.2698, pruned_loss=0.04266, over 7180.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2784, pruned_loss=0.05426, over 1429878.67 frames.], batch size: 23, lr: 3.46e-04 2022-05-27 19:37:49,016 INFO [train.py:842] (0/4) Epoch 16, batch 2000, loss[loss=0.2731, simple_loss=0.3418, pruned_loss=0.1022, over 4936.00 frames.], tot_loss[loss=0.1931, simple_loss=0.278, pruned_loss=0.05413, over 1423314.37 frames.], batch size: 52, lr: 3.46e-04 2022-05-27 19:38:28,269 INFO [train.py:842] (0/4) Epoch 16, batch 2050, loss[loss=0.2159, simple_loss=0.3118, pruned_loss=0.06001, over 6480.00 frames.], tot_loss[loss=0.1939, simple_loss=0.2786, pruned_loss=0.05457, over 1422578.03 frames.], batch size: 38, lr: 3.45e-04 2022-05-27 19:39:07,373 INFO [train.py:842] (0/4) Epoch 16, batch 2100, loss[loss=0.1834, simple_loss=0.2729, pruned_loss=0.04699, over 7126.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2782, pruned_loss=0.05446, over 1423191.43 frames.], batch size: 21, lr: 3.45e-04 2022-05-27 19:39:46,894 INFO [train.py:842] (0/4) Epoch 16, batch 2150, loss[loss=0.1849, simple_loss=0.2714, pruned_loss=0.04915, over 7262.00 frames.], tot_loss[loss=0.1945, simple_loss=0.279, pruned_loss=0.055, over 1418644.03 frames.], batch size: 19, lr: 3.45e-04 2022-05-27 19:40:25,537 INFO [train.py:842] (0/4) Epoch 16, batch 2200, loss[loss=0.3079, simple_loss=0.3597, pruned_loss=0.128, over 7215.00 frames.], tot_loss[loss=0.1941, simple_loss=0.2784, pruned_loss=0.05494, over 1416048.18 frames.], batch size: 22, lr: 3.45e-04 2022-05-27 19:41:05,218 INFO [train.py:842] (0/4) Epoch 16, batch 2250, loss[loss=0.2182, simple_loss=0.3033, pruned_loss=0.06652, over 7414.00 frames.], tot_loss[loss=0.1944, simple_loss=0.2786, pruned_loss=0.05514, over 1417447.16 frames.], batch size: 21, lr: 3.45e-04 2022-05-27 19:41:43,727 INFO [train.py:842] (0/4) Epoch 16, batch 2300, loss[loss=0.1864, simple_loss=0.2727, pruned_loss=0.05011, over 7195.00 frames.], tot_loss[loss=0.1939, simple_loss=0.2783, pruned_loss=0.05478, over 1419966.29 frames.], batch size: 23, lr: 3.45e-04 2022-05-27 19:42:23,140 INFO [train.py:842] (0/4) Epoch 16, batch 2350, loss[loss=0.226, simple_loss=0.3019, pruned_loss=0.07504, over 7274.00 frames.], tot_loss[loss=0.193, simple_loss=0.2771, pruned_loss=0.05446, over 1422566.91 frames.], batch size: 25, lr: 3.45e-04 2022-05-27 19:43:02,161 INFO [train.py:842] (0/4) Epoch 16, batch 2400, loss[loss=0.1982, simple_loss=0.2887, pruned_loss=0.0538, over 7305.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2766, pruned_loss=0.05466, over 1425482.73 frames.], 
batch size: 25, lr: 3.45e-04 2022-05-27 19:43:41,237 INFO [train.py:842] (0/4) Epoch 16, batch 2450, loss[loss=0.1733, simple_loss=0.2583, pruned_loss=0.04411, over 6793.00 frames.], tot_loss[loss=0.1936, simple_loss=0.2773, pruned_loss=0.05494, over 1424089.21 frames.], batch size: 31, lr: 3.45e-04 2022-05-27 19:44:20,557 INFO [train.py:842] (0/4) Epoch 16, batch 2500, loss[loss=0.1794, simple_loss=0.2782, pruned_loss=0.0403, over 7210.00 frames.], tot_loss[loss=0.1915, simple_loss=0.2757, pruned_loss=0.0536, over 1426749.03 frames.], batch size: 21, lr: 3.45e-04 2022-05-27 19:44:59,647 INFO [train.py:842] (0/4) Epoch 16, batch 2550, loss[loss=0.2069, simple_loss=0.2874, pruned_loss=0.06316, over 7147.00 frames.], tot_loss[loss=0.192, simple_loss=0.2761, pruned_loss=0.0539, over 1423872.74 frames.], batch size: 20, lr: 3.45e-04 2022-05-27 19:45:38,458 INFO [train.py:842] (0/4) Epoch 16, batch 2600, loss[loss=0.1747, simple_loss=0.2593, pruned_loss=0.04505, over 7348.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2771, pruned_loss=0.05433, over 1422620.46 frames.], batch size: 19, lr: 3.45e-04 2022-05-27 19:46:17,834 INFO [train.py:842] (0/4) Epoch 16, batch 2650, loss[loss=0.2414, simple_loss=0.326, pruned_loss=0.07839, over 7374.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2762, pruned_loss=0.05362, over 1423238.08 frames.], batch size: 23, lr: 3.45e-04 2022-05-27 19:46:56,741 INFO [train.py:842] (0/4) Epoch 16, batch 2700, loss[loss=0.2069, simple_loss=0.2922, pruned_loss=0.06083, over 7190.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2773, pruned_loss=0.0546, over 1420648.45 frames.], batch size: 26, lr: 3.45e-04 2022-05-27 19:47:35,940 INFO [train.py:842] (0/4) Epoch 16, batch 2750, loss[loss=0.2549, simple_loss=0.3124, pruned_loss=0.09871, over 7280.00 frames.], tot_loss[loss=0.1941, simple_loss=0.2783, pruned_loss=0.055, over 1424807.99 frames.], batch size: 18, lr: 3.45e-04 2022-05-27 19:48:14,872 INFO [train.py:842] (0/4) Epoch 16, batch 2800, loss[loss=0.2122, simple_loss=0.2998, pruned_loss=0.06227, over 7221.00 frames.], tot_loss[loss=0.1933, simple_loss=0.2777, pruned_loss=0.05447, over 1426034.53 frames.], batch size: 21, lr: 3.45e-04 2022-05-27 19:48:54,086 INFO [train.py:842] (0/4) Epoch 16, batch 2850, loss[loss=0.1623, simple_loss=0.2427, pruned_loss=0.04095, over 7170.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2788, pruned_loss=0.05519, over 1426300.38 frames.], batch size: 18, lr: 3.45e-04 2022-05-27 19:49:32,811 INFO [train.py:842] (0/4) Epoch 16, batch 2900, loss[loss=0.1657, simple_loss=0.2467, pruned_loss=0.04236, over 7164.00 frames.], tot_loss[loss=0.195, simple_loss=0.2791, pruned_loss=0.05548, over 1428001.23 frames.], batch size: 18, lr: 3.44e-04 2022-05-27 19:50:12,016 INFO [train.py:842] (0/4) Epoch 16, batch 2950, loss[loss=0.2063, simple_loss=0.2961, pruned_loss=0.05824, over 7340.00 frames.], tot_loss[loss=0.1944, simple_loss=0.2786, pruned_loss=0.05512, over 1424542.93 frames.], batch size: 22, lr: 3.44e-04 2022-05-27 19:50:50,697 INFO [train.py:842] (0/4) Epoch 16, batch 3000, loss[loss=0.21, simple_loss=0.3027, pruned_loss=0.05866, over 7414.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2785, pruned_loss=0.05494, over 1428399.85 frames.], batch size: 21, lr: 3.44e-04 2022-05-27 19:50:50,698 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 19:51:00,435 INFO [train.py:871] (0/4) Epoch 16, validation: loss=0.1694, simple_loss=0.2694, pruned_loss=0.03473, over 868885.00 frames. 
2022-05-27 19:51:39,559 INFO [train.py:842] (0/4) Epoch 16, batch 3050, loss[loss=0.1631, simple_loss=0.2475, pruned_loss=0.03941, over 7403.00 frames.], tot_loss[loss=0.1945, simple_loss=0.2784, pruned_loss=0.05533, over 1426279.95 frames.], batch size: 18, lr: 3.44e-04 2022-05-27 19:52:18,311 INFO [train.py:842] (0/4) Epoch 16, batch 3100, loss[loss=0.2338, simple_loss=0.3124, pruned_loss=0.07765, over 7190.00 frames.], tot_loss[loss=0.1938, simple_loss=0.2777, pruned_loss=0.05496, over 1425583.27 frames.], batch size: 23, lr: 3.44e-04 2022-05-27 19:52:57,463 INFO [train.py:842] (0/4) Epoch 16, batch 3150, loss[loss=0.1618, simple_loss=0.2449, pruned_loss=0.03931, over 7164.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2774, pruned_loss=0.05472, over 1422901.55 frames.], batch size: 18, lr: 3.44e-04 2022-05-27 19:53:36,238 INFO [train.py:842] (0/4) Epoch 16, batch 3200, loss[loss=0.2119, simple_loss=0.3042, pruned_loss=0.05982, over 7283.00 frames.], tot_loss[loss=0.1957, simple_loss=0.2791, pruned_loss=0.05612, over 1423425.78 frames.], batch size: 24, lr: 3.44e-04 2022-05-27 19:54:15,775 INFO [train.py:842] (0/4) Epoch 16, batch 3250, loss[loss=0.2204, simple_loss=0.3065, pruned_loss=0.06718, over 7320.00 frames.], tot_loss[loss=0.1943, simple_loss=0.278, pruned_loss=0.0553, over 1424980.87 frames.], batch size: 21, lr: 3.44e-04 2022-05-27 19:54:54,886 INFO [train.py:842] (0/4) Epoch 16, batch 3300, loss[loss=0.253, simple_loss=0.328, pruned_loss=0.08893, over 7290.00 frames.], tot_loss[loss=0.194, simple_loss=0.2783, pruned_loss=0.05483, over 1428866.15 frames.], batch size: 25, lr: 3.44e-04 2022-05-27 19:55:34,112 INFO [train.py:842] (0/4) Epoch 16, batch 3350, loss[loss=0.1848, simple_loss=0.2857, pruned_loss=0.04195, over 7232.00 frames.], tot_loss[loss=0.1944, simple_loss=0.2789, pruned_loss=0.05499, over 1431191.11 frames.], batch size: 20, lr: 3.44e-04 2022-05-27 19:56:12,910 INFO [train.py:842] (0/4) Epoch 16, batch 3400, loss[loss=0.1912, simple_loss=0.2851, pruned_loss=0.04865, over 7129.00 frames.], tot_loss[loss=0.195, simple_loss=0.2791, pruned_loss=0.05546, over 1429380.95 frames.], batch size: 28, lr: 3.44e-04 2022-05-27 19:56:52,462 INFO [train.py:842] (0/4) Epoch 16, batch 3450, loss[loss=0.1843, simple_loss=0.2757, pruned_loss=0.04641, over 7356.00 frames.], tot_loss[loss=0.1952, simple_loss=0.2793, pruned_loss=0.05553, over 1430223.58 frames.], batch size: 19, lr: 3.44e-04 2022-05-27 19:57:31,233 INFO [train.py:842] (0/4) Epoch 16, batch 3500, loss[loss=0.2106, simple_loss=0.3034, pruned_loss=0.05894, over 7327.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2795, pruned_loss=0.05565, over 1428605.66 frames.], batch size: 21, lr: 3.44e-04 2022-05-27 19:58:10,267 INFO [train.py:842] (0/4) Epoch 16, batch 3550, loss[loss=0.212, simple_loss=0.294, pruned_loss=0.06495, over 7167.00 frames.], tot_loss[loss=0.1955, simple_loss=0.2799, pruned_loss=0.05555, over 1425015.53 frames.], batch size: 26, lr: 3.44e-04 2022-05-27 19:58:48,878 INFO [train.py:842] (0/4) Epoch 16, batch 3600, loss[loss=0.2037, simple_loss=0.2846, pruned_loss=0.06137, over 7315.00 frames.], tot_loss[loss=0.1953, simple_loss=0.2795, pruned_loss=0.05553, over 1426433.88 frames.], batch size: 21, lr: 3.44e-04 2022-05-27 19:59:28,035 INFO [train.py:842] (0/4) Epoch 16, batch 3650, loss[loss=0.1947, simple_loss=0.2675, pruned_loss=0.06093, over 7270.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2792, pruned_loss=0.05501, over 1426481.30 frames.], batch size: 18, lr: 3.44e-04 2022-05-27 20:00:07,142 
INFO [train.py:842] (0/4) Epoch 16, batch 3700, loss[loss=0.2085, simple_loss=0.2898, pruned_loss=0.0636, over 6844.00 frames.], tot_loss[loss=0.1953, simple_loss=0.2794, pruned_loss=0.0556, over 1423617.45 frames.], batch size: 15, lr: 3.43e-04 2022-05-27 20:00:46,324 INFO [train.py:842] (0/4) Epoch 16, batch 3750, loss[loss=0.1794, simple_loss=0.2754, pruned_loss=0.04176, over 7295.00 frames.], tot_loss[loss=0.1952, simple_loss=0.2791, pruned_loss=0.05563, over 1421303.35 frames.], batch size: 25, lr: 3.43e-04 2022-05-27 20:01:24,929 INFO [train.py:842] (0/4) Epoch 16, batch 3800, loss[loss=0.1839, simple_loss=0.2735, pruned_loss=0.04716, over 7192.00 frames.], tot_loss[loss=0.1963, simple_loss=0.28, pruned_loss=0.05628, over 1425370.84 frames.], batch size: 22, lr: 3.43e-04 2022-05-27 20:02:03,872 INFO [train.py:842] (0/4) Epoch 16, batch 3850, loss[loss=0.229, simple_loss=0.2906, pruned_loss=0.08375, over 7154.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2809, pruned_loss=0.05666, over 1420322.05 frames.], batch size: 20, lr: 3.43e-04 2022-05-27 20:02:42,890 INFO [train.py:842] (0/4) Epoch 16, batch 3900, loss[loss=0.2152, simple_loss=0.2942, pruned_loss=0.06814, over 7296.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2804, pruned_loss=0.05626, over 1422692.78 frames.], batch size: 24, lr: 3.43e-04 2022-05-27 20:03:21,527 INFO [train.py:842] (0/4) Epoch 16, batch 3950, loss[loss=0.1941, simple_loss=0.2872, pruned_loss=0.05048, over 7215.00 frames.], tot_loss[loss=0.1965, simple_loss=0.2807, pruned_loss=0.05621, over 1420514.86 frames.], batch size: 21, lr: 3.43e-04 2022-05-27 20:04:00,255 INFO [train.py:842] (0/4) Epoch 16, batch 4000, loss[loss=0.1935, simple_loss=0.2798, pruned_loss=0.05358, over 7219.00 frames.], tot_loss[loss=0.1946, simple_loss=0.2795, pruned_loss=0.05482, over 1420382.32 frames.], batch size: 21, lr: 3.43e-04 2022-05-27 20:04:39,436 INFO [train.py:842] (0/4) Epoch 16, batch 4050, loss[loss=0.2606, simple_loss=0.3258, pruned_loss=0.09769, over 7209.00 frames.], tot_loss[loss=0.1938, simple_loss=0.2787, pruned_loss=0.0545, over 1421735.37 frames.], batch size: 22, lr: 3.43e-04 2022-05-27 20:05:18,195 INFO [train.py:842] (0/4) Epoch 16, batch 4100, loss[loss=0.2555, simple_loss=0.314, pruned_loss=0.09851, over 7169.00 frames.], tot_loss[loss=0.1954, simple_loss=0.28, pruned_loss=0.05536, over 1424508.23 frames.], batch size: 18, lr: 3.43e-04 2022-05-27 20:05:57,575 INFO [train.py:842] (0/4) Epoch 16, batch 4150, loss[loss=0.1728, simple_loss=0.2464, pruned_loss=0.04963, over 6996.00 frames.], tot_loss[loss=0.1941, simple_loss=0.2787, pruned_loss=0.05473, over 1424949.50 frames.], batch size: 16, lr: 3.43e-04 2022-05-27 20:06:36,482 INFO [train.py:842] (0/4) Epoch 16, batch 4200, loss[loss=0.1879, simple_loss=0.2791, pruned_loss=0.04831, over 7407.00 frames.], tot_loss[loss=0.194, simple_loss=0.2783, pruned_loss=0.05486, over 1422041.27 frames.], batch size: 21, lr: 3.43e-04 2022-05-27 20:07:15,423 INFO [train.py:842] (0/4) Epoch 16, batch 4250, loss[loss=0.1857, simple_loss=0.2808, pruned_loss=0.04528, over 7292.00 frames.], tot_loss[loss=0.1936, simple_loss=0.2781, pruned_loss=0.05455, over 1421619.49 frames.], batch size: 25, lr: 3.43e-04 2022-05-27 20:07:54,414 INFO [train.py:842] (0/4) Epoch 16, batch 4300, loss[loss=0.1879, simple_loss=0.2679, pruned_loss=0.05392, over 7237.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2778, pruned_loss=0.05401, over 1421153.37 frames.], batch size: 20, lr: 3.43e-04 2022-05-27 20:08:33,201 INFO [train.py:842] (0/4) 
Epoch 16, batch 4350, loss[loss=0.1919, simple_loss=0.2755, pruned_loss=0.05416, over 7198.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2779, pruned_loss=0.05369, over 1423511.73 frames.], batch size: 22, lr: 3.43e-04 2022-05-27 20:09:12,131 INFO [train.py:842] (0/4) Epoch 16, batch 4400, loss[loss=0.1844, simple_loss=0.2755, pruned_loss=0.04666, over 7324.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2779, pruned_loss=0.05449, over 1421290.34 frames.], batch size: 21, lr: 3.43e-04 2022-05-27 20:09:51,436 INFO [train.py:842] (0/4) Epoch 16, batch 4450, loss[loss=0.1685, simple_loss=0.2581, pruned_loss=0.03944, over 7162.00 frames.], tot_loss[loss=0.1942, simple_loss=0.279, pruned_loss=0.05473, over 1424299.27 frames.], batch size: 18, lr: 3.43e-04 2022-05-27 20:10:30,399 INFO [train.py:842] (0/4) Epoch 16, batch 4500, loss[loss=0.2256, simple_loss=0.312, pruned_loss=0.0696, over 7343.00 frames.], tot_loss[loss=0.1939, simple_loss=0.2786, pruned_loss=0.05457, over 1427421.12 frames.], batch size: 22, lr: 3.43e-04 2022-05-27 20:11:09,722 INFO [train.py:842] (0/4) Epoch 16, batch 4550, loss[loss=0.2359, simple_loss=0.3171, pruned_loss=0.07736, over 7192.00 frames.], tot_loss[loss=0.1933, simple_loss=0.2781, pruned_loss=0.05429, over 1428472.57 frames.], batch size: 22, lr: 3.42e-04 2022-05-27 20:11:48,699 INFO [train.py:842] (0/4) Epoch 16, batch 4600, loss[loss=0.1487, simple_loss=0.229, pruned_loss=0.03423, over 7282.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2762, pruned_loss=0.05359, over 1430209.26 frames.], batch size: 18, lr: 3.42e-04 2022-05-27 20:12:27,772 INFO [train.py:842] (0/4) Epoch 16, batch 4650, loss[loss=0.1972, simple_loss=0.2847, pruned_loss=0.05483, over 7224.00 frames.], tot_loss[loss=0.1915, simple_loss=0.276, pruned_loss=0.05351, over 1430921.25 frames.], batch size: 20, lr: 3.42e-04 2022-05-27 20:13:06,629 INFO [train.py:842] (0/4) Epoch 16, batch 4700, loss[loss=0.2252, simple_loss=0.3078, pruned_loss=0.07131, over 7123.00 frames.], tot_loss[loss=0.1919, simple_loss=0.276, pruned_loss=0.05386, over 1432095.43 frames.], batch size: 21, lr: 3.42e-04 2022-05-27 20:13:45,910 INFO [train.py:842] (0/4) Epoch 16, batch 4750, loss[loss=0.1748, simple_loss=0.2521, pruned_loss=0.04873, over 6770.00 frames.], tot_loss[loss=0.1912, simple_loss=0.2754, pruned_loss=0.05346, over 1429481.86 frames.], batch size: 15, lr: 3.42e-04 2022-05-27 20:14:24,754 INFO [train.py:842] (0/4) Epoch 16, batch 4800, loss[loss=0.1506, simple_loss=0.2416, pruned_loss=0.02983, over 7432.00 frames.], tot_loss[loss=0.1915, simple_loss=0.2758, pruned_loss=0.05359, over 1432861.38 frames.], batch size: 20, lr: 3.42e-04 2022-05-27 20:15:03,906 INFO [train.py:842] (0/4) Epoch 16, batch 4850, loss[loss=0.2192, simple_loss=0.2978, pruned_loss=0.07031, over 7146.00 frames.], tot_loss[loss=0.1923, simple_loss=0.2765, pruned_loss=0.05402, over 1427237.90 frames.], batch size: 20, lr: 3.42e-04 2022-05-27 20:15:43,038 INFO [train.py:842] (0/4) Epoch 16, batch 4900, loss[loss=0.1843, simple_loss=0.262, pruned_loss=0.05325, over 7332.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2763, pruned_loss=0.05346, over 1425593.34 frames.], batch size: 20, lr: 3.42e-04 2022-05-27 20:16:22,175 INFO [train.py:842] (0/4) Epoch 16, batch 4950, loss[loss=0.2287, simple_loss=0.3095, pruned_loss=0.07392, over 7015.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2776, pruned_loss=0.05442, over 1426581.03 frames.], batch size: 28, lr: 3.42e-04 2022-05-27 20:17:00,880 INFO [train.py:842] (0/4) Epoch 16, batch 5000, 
loss[loss=0.1716, simple_loss=0.2634, pruned_loss=0.03988, over 7215.00 frames.], tot_loss[loss=0.1928, simple_loss=0.2771, pruned_loss=0.05424, over 1423343.69 frames.], batch size: 23, lr: 3.42e-04 2022-05-27 20:17:40,080 INFO [train.py:842] (0/4) Epoch 16, batch 5050, loss[loss=0.1356, simple_loss=0.2171, pruned_loss=0.0271, over 7130.00 frames.], tot_loss[loss=0.193, simple_loss=0.2771, pruned_loss=0.05444, over 1419945.54 frames.], batch size: 17, lr: 3.42e-04 2022-05-27 20:18:18,485 INFO [train.py:842] (0/4) Epoch 16, batch 5100, loss[loss=0.2027, simple_loss=0.2772, pruned_loss=0.06408, over 7136.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2785, pruned_loss=0.05496, over 1417692.81 frames.], batch size: 26, lr: 3.42e-04 2022-05-27 20:18:57,623 INFO [train.py:842] (0/4) Epoch 16, batch 5150, loss[loss=0.2077, simple_loss=0.3054, pruned_loss=0.05495, over 6490.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2779, pruned_loss=0.05451, over 1421336.78 frames.], batch size: 38, lr: 3.42e-04 2022-05-27 20:19:36,711 INFO [train.py:842] (0/4) Epoch 16, batch 5200, loss[loss=0.1954, simple_loss=0.2758, pruned_loss=0.05754, over 7062.00 frames.], tot_loss[loss=0.193, simple_loss=0.2772, pruned_loss=0.05437, over 1426456.17 frames.], batch size: 18, lr: 3.42e-04 2022-05-27 20:20:15,550 INFO [train.py:842] (0/4) Epoch 16, batch 5250, loss[loss=0.185, simple_loss=0.2725, pruned_loss=0.04879, over 7254.00 frames.], tot_loss[loss=0.1938, simple_loss=0.2782, pruned_loss=0.05469, over 1429015.32 frames.], batch size: 19, lr: 3.42e-04 2022-05-27 20:20:54,284 INFO [train.py:842] (0/4) Epoch 16, batch 5300, loss[loss=0.1577, simple_loss=0.2488, pruned_loss=0.03337, over 7321.00 frames.], tot_loss[loss=0.1936, simple_loss=0.2779, pruned_loss=0.05463, over 1429088.53 frames.], batch size: 21, lr: 3.42e-04 2022-05-27 20:21:33,690 INFO [train.py:842] (0/4) Epoch 16, batch 5350, loss[loss=0.1762, simple_loss=0.2559, pruned_loss=0.04827, over 7253.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2774, pruned_loss=0.05392, over 1430913.75 frames.], batch size: 17, lr: 3.41e-04 2022-05-27 20:22:12,416 INFO [train.py:842] (0/4) Epoch 16, batch 5400, loss[loss=0.1934, simple_loss=0.2774, pruned_loss=0.05467, over 7426.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2781, pruned_loss=0.05446, over 1430982.03 frames.], batch size: 17, lr: 3.41e-04 2022-05-27 20:22:51,671 INFO [train.py:842] (0/4) Epoch 16, batch 5450, loss[loss=0.183, simple_loss=0.2632, pruned_loss=0.05141, over 7261.00 frames.], tot_loss[loss=0.1936, simple_loss=0.2781, pruned_loss=0.05457, over 1431326.28 frames.], batch size: 19, lr: 3.41e-04 2022-05-27 20:23:30,920 INFO [train.py:842] (0/4) Epoch 16, batch 5500, loss[loss=0.2078, simple_loss=0.2916, pruned_loss=0.06197, over 7377.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2778, pruned_loss=0.05457, over 1432136.16 frames.], batch size: 23, lr: 3.41e-04 2022-05-27 20:24:09,993 INFO [train.py:842] (0/4) Epoch 16, batch 5550, loss[loss=0.303, simple_loss=0.3647, pruned_loss=0.1207, over 7209.00 frames.], tot_loss[loss=0.1945, simple_loss=0.2783, pruned_loss=0.0553, over 1430186.39 frames.], batch size: 22, lr: 3.41e-04 2022-05-27 20:24:48,777 INFO [train.py:842] (0/4) Epoch 16, batch 5600, loss[loss=0.2, simple_loss=0.2903, pruned_loss=0.05486, over 6699.00 frames.], tot_loss[loss=0.1952, simple_loss=0.2792, pruned_loss=0.05564, over 1422961.09 frames.], batch size: 31, lr: 3.41e-04 2022-05-27 20:25:28,077 INFO [train.py:842] (0/4) Epoch 16, batch 5650, loss[loss=0.1549, 
simple_loss=0.2357, pruned_loss=0.03704, over 7280.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2796, pruned_loss=0.05554, over 1417242.51 frames.], batch size: 17, lr: 3.41e-04 2022-05-27 20:26:06,924 INFO [train.py:842] (0/4) Epoch 16, batch 5700, loss[loss=0.1875, simple_loss=0.2769, pruned_loss=0.04901, over 6665.00 frames.], tot_loss[loss=0.1958, simple_loss=0.2798, pruned_loss=0.05589, over 1418314.00 frames.], batch size: 31, lr: 3.41e-04 2022-05-27 20:26:45,771 INFO [train.py:842] (0/4) Epoch 16, batch 5750, loss[loss=0.1639, simple_loss=0.253, pruned_loss=0.03742, over 7353.00 frames.], tot_loss[loss=0.1949, simple_loss=0.2791, pruned_loss=0.05536, over 1415656.97 frames.], batch size: 19, lr: 3.41e-04 2022-05-27 20:27:25,117 INFO [train.py:842] (0/4) Epoch 16, batch 5800, loss[loss=0.1974, simple_loss=0.2956, pruned_loss=0.04959, over 7292.00 frames.], tot_loss[loss=0.1943, simple_loss=0.2783, pruned_loss=0.05518, over 1417714.83 frames.], batch size: 25, lr: 3.41e-04 2022-05-27 20:28:04,477 INFO [train.py:842] (0/4) Epoch 16, batch 5850, loss[loss=0.2132, simple_loss=0.289, pruned_loss=0.06874, over 7325.00 frames.], tot_loss[loss=0.194, simple_loss=0.2779, pruned_loss=0.05508, over 1420016.93 frames.], batch size: 21, lr: 3.41e-04 2022-05-27 20:28:43,324 INFO [train.py:842] (0/4) Epoch 16, batch 5900, loss[loss=0.2043, simple_loss=0.2766, pruned_loss=0.06602, over 7204.00 frames.], tot_loss[loss=0.194, simple_loss=0.2781, pruned_loss=0.05496, over 1423685.15 frames.], batch size: 16, lr: 3.41e-04 2022-05-27 20:29:22,141 INFO [train.py:842] (0/4) Epoch 16, batch 5950, loss[loss=0.1608, simple_loss=0.2405, pruned_loss=0.04058, over 7415.00 frames.], tot_loss[loss=0.1939, simple_loss=0.278, pruned_loss=0.05495, over 1419729.99 frames.], batch size: 18, lr: 3.41e-04 2022-05-27 20:30:01,013 INFO [train.py:842] (0/4) Epoch 16, batch 6000, loss[loss=0.1689, simple_loss=0.258, pruned_loss=0.03991, over 7240.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2781, pruned_loss=0.05517, over 1419280.23 frames.], batch size: 20, lr: 3.41e-04 2022-05-27 20:30:01,015 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 20:30:10,685 INFO [train.py:871] (0/4) Epoch 16, validation: loss=0.1676, simple_loss=0.2671, pruned_loss=0.03399, over 868885.00 frames. 
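Since every validation pass here is scored over the same 868885-frame held-out set, the numbers are directly comparable across epochs; in this stretch they move 0.1678 → 0.1676 → 0.1694 → 0.1676, i.e. essentially flat between the end of epoch 15 and the middle of epoch 16. The self-contained sketch below just restates those four points (values copied from the entries above); the "best so far" bookkeeping is illustrative and is not something the training script prints.

```python
# Validation passes visible in this stretch (epoch, batch within epoch, loss),
# copied from the log entries above; a tiny check of whether the held-out
# loss is still improving.
history = [
    (15, 6000, 0.1678),
    (15, 9000, 0.1676),
    (16, 3000, 0.1694),
    (16, 6000, 0.1676),
]
best = float("inf")
for epoch, batch, loss in history:
    if loss < best:
        best, flag = loss, "new best"
    else:
        flag = f"+{loss - best:.4f} vs best"
    print(f"epoch {epoch}, batch {batch}: valid loss {loss:.4f} ({flag})")
```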
2022-05-27 20:30:49,746 INFO [train.py:842] (0/4) Epoch 16, batch 6050, loss[loss=0.1384, simple_loss=0.2246, pruned_loss=0.02613, over 7135.00 frames.], tot_loss[loss=0.194, simple_loss=0.278, pruned_loss=0.05498, over 1421751.82 frames.], batch size: 17, lr: 3.41e-04 2022-05-27 20:31:28,699 INFO [train.py:842] (0/4) Epoch 16, batch 6100, loss[loss=0.1833, simple_loss=0.2705, pruned_loss=0.04801, over 7440.00 frames.], tot_loss[loss=0.1937, simple_loss=0.2783, pruned_loss=0.05453, over 1422737.15 frames.], batch size: 20, lr: 3.41e-04 2022-05-27 20:31:44,562 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-144000.pt 2022-05-27 20:32:11,230 INFO [train.py:842] (0/4) Epoch 16, batch 6150, loss[loss=0.1753, simple_loss=0.2611, pruned_loss=0.04474, over 7435.00 frames.], tot_loss[loss=0.1928, simple_loss=0.2774, pruned_loss=0.0541, over 1421253.95 frames.], batch size: 20, lr: 3.41e-04 2022-05-27 20:32:50,485 INFO [train.py:842] (0/4) Epoch 16, batch 6200, loss[loss=0.1957, simple_loss=0.2845, pruned_loss=0.05347, over 7326.00 frames.], tot_loss[loss=0.1912, simple_loss=0.2758, pruned_loss=0.05328, over 1426857.00 frames.], batch size: 21, lr: 3.40e-04 2022-05-27 20:33:29,290 INFO [train.py:842] (0/4) Epoch 16, batch 6250, loss[loss=0.175, simple_loss=0.2523, pruned_loss=0.04886, over 7327.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2764, pruned_loss=0.05345, over 1425295.01 frames.], batch size: 20, lr: 3.40e-04 2022-05-27 20:34:08,318 INFO [train.py:842] (0/4) Epoch 16, batch 6300, loss[loss=0.167, simple_loss=0.2502, pruned_loss=0.04186, over 7336.00 frames.], tot_loss[loss=0.1918, simple_loss=0.2763, pruned_loss=0.05367, over 1428529.71 frames.], batch size: 22, lr: 3.40e-04 2022-05-27 20:34:47,573 INFO [train.py:842] (0/4) Epoch 16, batch 6350, loss[loss=0.1808, simple_loss=0.2661, pruned_loss=0.04781, over 6449.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2766, pruned_loss=0.05405, over 1423840.37 frames.], batch size: 38, lr: 3.40e-04 2022-05-27 20:35:26,589 INFO [train.py:842] (0/4) Epoch 16, batch 6400, loss[loss=0.1938, simple_loss=0.2944, pruned_loss=0.04663, over 7230.00 frames.], tot_loss[loss=0.1915, simple_loss=0.276, pruned_loss=0.05352, over 1426268.31 frames.], batch size: 21, lr: 3.40e-04 2022-05-27 20:36:05,664 INFO [train.py:842] (0/4) Epoch 16, batch 6450, loss[loss=0.1546, simple_loss=0.2527, pruned_loss=0.02823, over 7308.00 frames.], tot_loss[loss=0.1922, simple_loss=0.2762, pruned_loss=0.05414, over 1424369.37 frames.], batch size: 21, lr: 3.40e-04 2022-05-27 20:36:44,212 INFO [train.py:842] (0/4) Epoch 16, batch 6500, loss[loss=0.2104, simple_loss=0.3062, pruned_loss=0.05731, over 7229.00 frames.], tot_loss[loss=0.1937, simple_loss=0.2778, pruned_loss=0.05478, over 1420644.03 frames.], batch size: 21, lr: 3.40e-04 2022-05-27 20:37:23,390 INFO [train.py:842] (0/4) Epoch 16, batch 6550, loss[loss=0.1992, simple_loss=0.287, pruned_loss=0.05571, over 7213.00 frames.], tot_loss[loss=0.1942, simple_loss=0.278, pruned_loss=0.05521, over 1420780.64 frames.], batch size: 22, lr: 3.40e-04 2022-05-27 20:38:12,085 INFO [train.py:842] (0/4) Epoch 16, batch 6600, loss[loss=0.1705, simple_loss=0.2502, pruned_loss=0.04536, over 7069.00 frames.], tot_loss[loss=0.1928, simple_loss=0.2773, pruned_loss=0.05419, over 1424551.56 frames.], batch size: 18, lr: 3.40e-04 2022-05-27 20:38:51,385 INFO [train.py:842] (0/4) Epoch 16, batch 6650, loss[loss=0.1926, simple_loss=0.2809, pruned_loss=0.05219, over 7058.00 frames.], 
tot_loss[loss=0.1941, simple_loss=0.2786, pruned_loss=0.05478, over 1422757.41 frames.], batch size: 28, lr: 3.40e-04 2022-05-27 20:39:30,103 INFO [train.py:842] (0/4) Epoch 16, batch 6700, loss[loss=0.1732, simple_loss=0.273, pruned_loss=0.03669, over 7329.00 frames.], tot_loss[loss=0.1947, simple_loss=0.2793, pruned_loss=0.05502, over 1423915.25 frames.], batch size: 20, lr: 3.40e-04 2022-05-27 20:40:09,413 INFO [train.py:842] (0/4) Epoch 16, batch 6750, loss[loss=0.1703, simple_loss=0.268, pruned_loss=0.0363, over 7328.00 frames.], tot_loss[loss=0.1952, simple_loss=0.2796, pruned_loss=0.05543, over 1425539.51 frames.], batch size: 20, lr: 3.40e-04 2022-05-27 20:40:48,522 INFO [train.py:842] (0/4) Epoch 16, batch 6800, loss[loss=0.174, simple_loss=0.2654, pruned_loss=0.04133, over 7425.00 frames.], tot_loss[loss=0.195, simple_loss=0.2794, pruned_loss=0.05531, over 1428729.30 frames.], batch size: 20, lr: 3.40e-04 2022-05-27 20:41:27,658 INFO [train.py:842] (0/4) Epoch 16, batch 6850, loss[loss=0.1744, simple_loss=0.2654, pruned_loss=0.0417, over 7256.00 frames.], tot_loss[loss=0.1951, simple_loss=0.2792, pruned_loss=0.05544, over 1428779.97 frames.], batch size: 19, lr: 3.40e-04 2022-05-27 20:42:06,258 INFO [train.py:842] (0/4) Epoch 16, batch 6900, loss[loss=0.2535, simple_loss=0.3397, pruned_loss=0.08364, over 7022.00 frames.], tot_loss[loss=0.1947, simple_loss=0.2791, pruned_loss=0.05512, over 1425702.29 frames.], batch size: 28, lr: 3.40e-04 2022-05-27 20:42:45,901 INFO [train.py:842] (0/4) Epoch 16, batch 6950, loss[loss=0.1791, simple_loss=0.2689, pruned_loss=0.04463, over 7316.00 frames.], tot_loss[loss=0.1931, simple_loss=0.2776, pruned_loss=0.05429, over 1424907.75 frames.], batch size: 24, lr: 3.40e-04 2022-05-27 20:43:25,273 INFO [train.py:842] (0/4) Epoch 16, batch 7000, loss[loss=0.1311, simple_loss=0.218, pruned_loss=0.02205, over 6760.00 frames.], tot_loss[loss=0.1923, simple_loss=0.2765, pruned_loss=0.05408, over 1422289.82 frames.], batch size: 15, lr: 3.40e-04 2022-05-27 20:44:04,347 INFO [train.py:842] (0/4) Epoch 16, batch 7050, loss[loss=0.215, simple_loss=0.2995, pruned_loss=0.06522, over 7138.00 frames.], tot_loss[loss=0.1918, simple_loss=0.2757, pruned_loss=0.05391, over 1424478.56 frames.], batch size: 26, lr: 3.39e-04 2022-05-27 20:44:43,354 INFO [train.py:842] (0/4) Epoch 16, batch 7100, loss[loss=0.1838, simple_loss=0.2743, pruned_loss=0.04663, over 7131.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2771, pruned_loss=0.05436, over 1428561.61 frames.], batch size: 26, lr: 3.39e-04 2022-05-27 20:45:22,598 INFO [train.py:842] (0/4) Epoch 16, batch 7150, loss[loss=0.2417, simple_loss=0.3108, pruned_loss=0.08626, over 4986.00 frames.], tot_loss[loss=0.1934, simple_loss=0.2775, pruned_loss=0.0547, over 1424823.07 frames.], batch size: 52, lr: 3.39e-04 2022-05-27 20:46:01,387 INFO [train.py:842] (0/4) Epoch 16, batch 7200, loss[loss=0.2368, simple_loss=0.3132, pruned_loss=0.08015, over 7371.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2758, pruned_loss=0.05352, over 1426346.20 frames.], batch size: 23, lr: 3.39e-04 2022-05-27 20:46:40,337 INFO [train.py:842] (0/4) Epoch 16, batch 7250, loss[loss=0.2203, simple_loss=0.2943, pruned_loss=0.07316, over 7152.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2771, pruned_loss=0.05438, over 1424582.43 frames.], batch size: 19, lr: 3.39e-04 2022-05-27 20:47:19,331 INFO [train.py:842] (0/4) Epoch 16, batch 7300, loss[loss=0.1607, simple_loss=0.2477, pruned_loss=0.03682, over 7057.00 frames.], tot_loss[loss=0.1945, 
simple_loss=0.2781, pruned_loss=0.05545, over 1420282.18 frames.], batch size: 18, lr: 3.39e-04 2022-05-27 20:47:58,166 INFO [train.py:842] (0/4) Epoch 16, batch 7350, loss[loss=0.1751, simple_loss=0.2488, pruned_loss=0.0507, over 7006.00 frames.], tot_loss[loss=0.1937, simple_loss=0.2777, pruned_loss=0.0549, over 1422037.71 frames.], batch size: 16, lr: 3.39e-04 2022-05-27 20:48:36,988 INFO [train.py:842] (0/4) Epoch 16, batch 7400, loss[loss=0.1729, simple_loss=0.2572, pruned_loss=0.04431, over 6746.00 frames.], tot_loss[loss=0.1933, simple_loss=0.2776, pruned_loss=0.0545, over 1423805.63 frames.], batch size: 15, lr: 3.39e-04 2022-05-27 20:49:15,877 INFO [train.py:842] (0/4) Epoch 16, batch 7450, loss[loss=0.1828, simple_loss=0.2503, pruned_loss=0.05766, over 7211.00 frames.], tot_loss[loss=0.1939, simple_loss=0.2779, pruned_loss=0.05494, over 1420553.22 frames.], batch size: 16, lr: 3.39e-04 2022-05-27 20:49:54,822 INFO [train.py:842] (0/4) Epoch 16, batch 7500, loss[loss=0.165, simple_loss=0.2501, pruned_loss=0.03995, over 7174.00 frames.], tot_loss[loss=0.1926, simple_loss=0.277, pruned_loss=0.05414, over 1419751.50 frames.], batch size: 18, lr: 3.39e-04 2022-05-27 20:50:33,700 INFO [train.py:842] (0/4) Epoch 16, batch 7550, loss[loss=0.1788, simple_loss=0.2597, pruned_loss=0.04893, over 7414.00 frames.], tot_loss[loss=0.1931, simple_loss=0.2776, pruned_loss=0.05424, over 1422379.66 frames.], batch size: 18, lr: 3.39e-04 2022-05-27 20:51:12,600 INFO [train.py:842] (0/4) Epoch 16, batch 7600, loss[loss=0.2385, simple_loss=0.3151, pruned_loss=0.081, over 6580.00 frames.], tot_loss[loss=0.193, simple_loss=0.2775, pruned_loss=0.05425, over 1420028.85 frames.], batch size: 31, lr: 3.39e-04 2022-05-27 20:51:51,600 INFO [train.py:842] (0/4) Epoch 16, batch 7650, loss[loss=0.1555, simple_loss=0.2322, pruned_loss=0.03941, over 7013.00 frames.], tot_loss[loss=0.1927, simple_loss=0.2774, pruned_loss=0.05404, over 1421105.36 frames.], batch size: 16, lr: 3.39e-04 2022-05-27 20:52:30,473 INFO [train.py:842] (0/4) Epoch 16, batch 7700, loss[loss=0.1856, simple_loss=0.273, pruned_loss=0.04905, over 7403.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2774, pruned_loss=0.05388, over 1422228.49 frames.], batch size: 21, lr: 3.39e-04 2022-05-27 20:53:09,722 INFO [train.py:842] (0/4) Epoch 16, batch 7750, loss[loss=0.1743, simple_loss=0.2636, pruned_loss=0.04248, over 7163.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2774, pruned_loss=0.05365, over 1425641.91 frames.], batch size: 18, lr: 3.39e-04 2022-05-27 20:53:48,646 INFO [train.py:842] (0/4) Epoch 16, batch 7800, loss[loss=0.1874, simple_loss=0.2806, pruned_loss=0.04716, over 6764.00 frames.], tot_loss[loss=0.1925, simple_loss=0.2775, pruned_loss=0.05374, over 1427729.27 frames.], batch size: 31, lr: 3.39e-04 2022-05-27 20:54:27,497 INFO [train.py:842] (0/4) Epoch 16, batch 7850, loss[loss=0.1635, simple_loss=0.2422, pruned_loss=0.04235, over 7226.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2772, pruned_loss=0.05308, over 1429151.04 frames.], batch size: 16, lr: 3.39e-04 2022-05-27 20:55:06,435 INFO [train.py:842] (0/4) Epoch 16, batch 7900, loss[loss=0.1402, simple_loss=0.2265, pruned_loss=0.02694, over 7347.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2766, pruned_loss=0.05342, over 1426278.99 frames.], batch size: 19, lr: 3.38e-04 2022-05-27 20:55:45,925 INFO [train.py:842] (0/4) Epoch 16, batch 7950, loss[loss=0.1961, simple_loss=0.2647, pruned_loss=0.06375, over 7141.00 frames.], tot_loss[loss=0.1925, simple_loss=0.2773, 
pruned_loss=0.05385, over 1427795.65 frames.], batch size: 17, lr: 3.38e-04 2022-05-27 20:56:24,796 INFO [train.py:842] (0/4) Epoch 16, batch 8000, loss[loss=0.2048, simple_loss=0.2785, pruned_loss=0.06553, over 7260.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2774, pruned_loss=0.05417, over 1428573.23 frames.], batch size: 18, lr: 3.38e-04 2022-05-27 20:57:04,143 INFO [train.py:842] (0/4) Epoch 16, batch 8050, loss[loss=0.1518, simple_loss=0.2335, pruned_loss=0.03511, over 6818.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2751, pruned_loss=0.05316, over 1427204.99 frames.], batch size: 15, lr: 3.38e-04 2022-05-27 20:57:42,704 INFO [train.py:842] (0/4) Epoch 16, batch 8100, loss[loss=0.1586, simple_loss=0.2422, pruned_loss=0.03752, over 7362.00 frames.], tot_loss[loss=0.1906, simple_loss=0.2754, pruned_loss=0.05292, over 1429965.83 frames.], batch size: 19, lr: 3.38e-04 2022-05-27 20:58:21,838 INFO [train.py:842] (0/4) Epoch 16, batch 8150, loss[loss=0.1918, simple_loss=0.2922, pruned_loss=0.04573, over 7212.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2757, pruned_loss=0.05307, over 1430124.66 frames.], batch size: 22, lr: 3.38e-04 2022-05-27 20:59:00,889 INFO [train.py:842] (0/4) Epoch 16, batch 8200, loss[loss=0.1643, simple_loss=0.2451, pruned_loss=0.04174, over 6759.00 frames.], tot_loss[loss=0.1922, simple_loss=0.2763, pruned_loss=0.05407, over 1427297.98 frames.], batch size: 15, lr: 3.38e-04 2022-05-27 20:59:39,502 INFO [train.py:842] (0/4) Epoch 16, batch 8250, loss[loss=0.1834, simple_loss=0.2755, pruned_loss=0.0456, over 7282.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2761, pruned_loss=0.0541, over 1423939.44 frames.], batch size: 25, lr: 3.38e-04 2022-05-27 21:00:18,859 INFO [train.py:842] (0/4) Epoch 16, batch 8300, loss[loss=0.1928, simple_loss=0.2731, pruned_loss=0.0563, over 7334.00 frames.], tot_loss[loss=0.192, simple_loss=0.2757, pruned_loss=0.05416, over 1422652.88 frames.], batch size: 20, lr: 3.38e-04 2022-05-27 21:00:57,540 INFO [train.py:842] (0/4) Epoch 16, batch 8350, loss[loss=0.2065, simple_loss=0.2906, pruned_loss=0.06123, over 7139.00 frames.], tot_loss[loss=0.192, simple_loss=0.2763, pruned_loss=0.0538, over 1420036.18 frames.], batch size: 26, lr: 3.38e-04 2022-05-27 21:01:36,525 INFO [train.py:842] (0/4) Epoch 16, batch 8400, loss[loss=0.1987, simple_loss=0.2703, pruned_loss=0.06352, over 7209.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2767, pruned_loss=0.05428, over 1417503.51 frames.], batch size: 16, lr: 3.38e-04 2022-05-27 21:02:15,851 INFO [train.py:842] (0/4) Epoch 16, batch 8450, loss[loss=0.1949, simple_loss=0.2782, pruned_loss=0.05578, over 7437.00 frames.], tot_loss[loss=0.1923, simple_loss=0.2768, pruned_loss=0.05394, over 1421049.75 frames.], batch size: 20, lr: 3.38e-04 2022-05-27 21:02:54,709 INFO [train.py:842] (0/4) Epoch 16, batch 8500, loss[loss=0.1754, simple_loss=0.259, pruned_loss=0.04588, over 7170.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2752, pruned_loss=0.05314, over 1421078.05 frames.], batch size: 19, lr: 3.38e-04 2022-05-27 21:03:33,696 INFO [train.py:842] (0/4) Epoch 16, batch 8550, loss[loss=0.2424, simple_loss=0.3122, pruned_loss=0.08631, over 7434.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2754, pruned_loss=0.05364, over 1420230.94 frames.], batch size: 20, lr: 3.38e-04 2022-05-27 21:04:12,817 INFO [train.py:842] (0/4) Epoch 16, batch 8600, loss[loss=0.1701, simple_loss=0.2529, pruned_loss=0.04368, over 7296.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2757, pruned_loss=0.05379, over 
1418597.26 frames.], batch size: 18, lr: 3.38e-04 2022-05-27 21:04:52,305 INFO [train.py:842] (0/4) Epoch 16, batch 8650, loss[loss=0.2135, simple_loss=0.292, pruned_loss=0.06748, over 4996.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2752, pruned_loss=0.05381, over 1413568.54 frames.], batch size: 53, lr: 3.38e-04 2022-05-27 21:05:31,125 INFO [train.py:842] (0/4) Epoch 16, batch 8700, loss[loss=0.1673, simple_loss=0.2578, pruned_loss=0.03844, over 7138.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2741, pruned_loss=0.05309, over 1410512.94 frames.], batch size: 20, lr: 3.38e-04 2022-05-27 21:06:10,261 INFO [train.py:842] (0/4) Epoch 16, batch 8750, loss[loss=0.2253, simple_loss=0.3065, pruned_loss=0.072, over 7053.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2732, pruned_loss=0.05192, over 1412986.43 frames.], batch size: 18, lr: 3.38e-04 2022-05-27 21:06:48,791 INFO [train.py:842] (0/4) Epoch 16, batch 8800, loss[loss=0.2066, simple_loss=0.2909, pruned_loss=0.06113, over 7186.00 frames.], tot_loss[loss=0.1897, simple_loss=0.274, pruned_loss=0.05269, over 1411744.91 frames.], batch size: 22, lr: 3.37e-04 2022-05-27 21:07:27,875 INFO [train.py:842] (0/4) Epoch 16, batch 8850, loss[loss=0.1984, simple_loss=0.286, pruned_loss=0.05535, over 7064.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2757, pruned_loss=0.05386, over 1410272.72 frames.], batch size: 18, lr: 3.37e-04 2022-05-27 21:08:06,056 INFO [train.py:842] (0/4) Epoch 16, batch 8900, loss[loss=0.2062, simple_loss=0.2804, pruned_loss=0.06603, over 5490.00 frames.], tot_loss[loss=0.1944, simple_loss=0.278, pruned_loss=0.05536, over 1400851.51 frames.], batch size: 53, lr: 3.37e-04 2022-05-27 21:08:44,708 INFO [train.py:842] (0/4) Epoch 16, batch 8950, loss[loss=0.169, simple_loss=0.2565, pruned_loss=0.04077, over 7260.00 frames.], tot_loss[loss=0.1939, simple_loss=0.2778, pruned_loss=0.05497, over 1396872.33 frames.], batch size: 19, lr: 3.37e-04 2022-05-27 21:09:22,802 INFO [train.py:842] (0/4) Epoch 16, batch 9000, loss[loss=0.2004, simple_loss=0.2753, pruned_loss=0.06274, over 7068.00 frames.], tot_loss[loss=0.1971, simple_loss=0.2809, pruned_loss=0.05671, over 1383445.58 frames.], batch size: 28, lr: 3.37e-04 2022-05-27 21:09:22,803 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 21:09:32,341 INFO [train.py:871] (0/4) Epoch 16, validation: loss=0.167, simple_loss=0.2669, pruned_loss=0.03357, over 868885.00 frames. 
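The loss fields in these entries are mutually consistent with the scale logged at startup (simple_loss_scale=0.5): once past model_warm_step, the reported loss is approximately 0.5 * simple_loss + pruned_loss, e.g. 0.5 * 0.2669 + 0.03357 ≈ 0.167 for the Epoch 16 validation entry just above. A minimal sketch of that combination (post-warm-up scaling assumed, not something stated in this log):

    def combined_loss(simple_loss: float, pruned_loss: float,
                      simple_loss_scale: float = 0.5) -> float:
        # Post-warm-up combination; reproduces the logged totals,
        # e.g. 0.5 * 0.2669 + 0.03357 ~= 0.167.
        return simple_loss_scale * simple_loss + pruned_loss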
2022-05-27 21:10:10,542 INFO [train.py:842] (0/4) Epoch 16, batch 9050, loss[loss=0.1838, simple_loss=0.2643, pruned_loss=0.05168, over 7254.00 frames.], tot_loss[loss=0.1979, simple_loss=0.282, pruned_loss=0.05689, over 1367734.75 frames.], batch size: 19, lr: 3.37e-04 2022-05-27 21:10:58,395 INFO [train.py:842] (0/4) Epoch 16, batch 9100, loss[loss=0.2237, simple_loss=0.3013, pruned_loss=0.07305, over 5327.00 frames.], tot_loss[loss=0.2022, simple_loss=0.2855, pruned_loss=0.05946, over 1312221.80 frames.], batch size: 54, lr: 3.37e-04 2022-05-27 21:11:46,303 INFO [train.py:842] (0/4) Epoch 16, batch 9150, loss[loss=0.2219, simple_loss=0.2936, pruned_loss=0.07514, over 5071.00 frames.], tot_loss[loss=0.2082, simple_loss=0.2895, pruned_loss=0.06343, over 1243783.30 frames.], batch size: 53, lr: 3.37e-04 2022-05-27 21:12:28,524 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-16.pt 2022-05-27 21:12:47,123 INFO [train.py:842] (0/4) Epoch 17, batch 0, loss[loss=0.2427, simple_loss=0.3207, pruned_loss=0.08235, over 7104.00 frames.], tot_loss[loss=0.2427, simple_loss=0.3207, pruned_loss=0.08235, over 7104.00 frames.], batch size: 21, lr: 3.28e-04 2022-05-27 21:13:26,275 INFO [train.py:842] (0/4) Epoch 17, batch 50, loss[loss=0.1925, simple_loss=0.2806, pruned_loss=0.05213, over 7325.00 frames.], tot_loss[loss=0.1968, simple_loss=0.2818, pruned_loss=0.05587, over 317167.72 frames.], batch size: 21, lr: 3.28e-04 2022-05-27 21:14:04,921 INFO [train.py:842] (0/4) Epoch 17, batch 100, loss[loss=0.1938, simple_loss=0.2805, pruned_loss=0.05358, over 7162.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2756, pruned_loss=0.0519, over 559140.20 frames.], batch size: 20, lr: 3.28e-04 2022-05-27 21:14:43,823 INFO [train.py:842] (0/4) Epoch 17, batch 150, loss[loss=0.1929, simple_loss=0.268, pruned_loss=0.05887, over 7011.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2774, pruned_loss=0.05293, over 747937.80 frames.], batch size: 16, lr: 3.28e-04 2022-05-27 21:15:22,352 INFO [train.py:842] (0/4) Epoch 17, batch 200, loss[loss=0.1585, simple_loss=0.2363, pruned_loss=0.0404, over 7147.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2789, pruned_loss=0.05381, over 897716.99 frames.], batch size: 17, lr: 3.27e-04 2022-05-27 21:16:01,218 INFO [train.py:842] (0/4) Epoch 17, batch 250, loss[loss=0.1863, simple_loss=0.2735, pruned_loss=0.04958, over 7262.00 frames.], tot_loss[loss=0.1906, simple_loss=0.2766, pruned_loss=0.0523, over 1017329.04 frames.], batch size: 19, lr: 3.27e-04 2022-05-27 21:16:39,724 INFO [train.py:842] (0/4) Epoch 17, batch 300, loss[loss=0.1648, simple_loss=0.2447, pruned_loss=0.0425, over 7069.00 frames.], tot_loss[loss=0.1938, simple_loss=0.2795, pruned_loss=0.05406, over 1102413.20 frames.], batch size: 18, lr: 3.27e-04 2022-05-27 21:17:18,934 INFO [train.py:842] (0/4) Epoch 17, batch 350, loss[loss=0.1696, simple_loss=0.2536, pruned_loss=0.04284, over 6836.00 frames.], tot_loss[loss=0.1931, simple_loss=0.2784, pruned_loss=0.05387, over 1172587.37 frames.], batch size: 15, lr: 3.27e-04 2022-05-27 21:17:57,670 INFO [train.py:842] (0/4) Epoch 17, batch 400, loss[loss=0.28, simple_loss=0.3366, pruned_loss=0.1117, over 4918.00 frames.], tot_loss[loss=0.1925, simple_loss=0.2778, pruned_loss=0.0536, over 1228177.26 frames.], batch size: 52, lr: 3.27e-04 2022-05-27 21:18:36,655 INFO [train.py:842] (0/4) Epoch 17, batch 450, loss[loss=0.2005, simple_loss=0.2869, pruned_loss=0.05707, over 7364.00 frames.], tot_loss[loss=0.1908, 
simple_loss=0.2765, pruned_loss=0.05258, over 1268677.36 frames.], batch size: 19, lr: 3.27e-04 2022-05-27 21:19:15,518 INFO [train.py:842] (0/4) Epoch 17, batch 500, loss[loss=0.1886, simple_loss=0.2688, pruned_loss=0.05417, over 7165.00 frames.], tot_loss[loss=0.1903, simple_loss=0.276, pruned_loss=0.0523, over 1302289.18 frames.], batch size: 18, lr: 3.27e-04 2022-05-27 21:19:55,016 INFO [train.py:842] (0/4) Epoch 17, batch 550, loss[loss=0.1339, simple_loss=0.2218, pruned_loss=0.02301, over 7132.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2765, pruned_loss=0.05268, over 1327406.26 frames.], batch size: 17, lr: 3.27e-04 2022-05-27 21:20:33,626 INFO [train.py:842] (0/4) Epoch 17, batch 600, loss[loss=0.1996, simple_loss=0.2882, pruned_loss=0.05549, over 7105.00 frames.], tot_loss[loss=0.192, simple_loss=0.2771, pruned_loss=0.05344, over 1341610.18 frames.], batch size: 28, lr: 3.27e-04 2022-05-27 21:21:12,991 INFO [train.py:842] (0/4) Epoch 17, batch 650, loss[loss=0.1837, simple_loss=0.2663, pruned_loss=0.05058, over 7321.00 frames.], tot_loss[loss=0.1941, simple_loss=0.2788, pruned_loss=0.05468, over 1360303.67 frames.], batch size: 20, lr: 3.27e-04 2022-05-27 21:21:51,610 INFO [train.py:842] (0/4) Epoch 17, batch 700, loss[loss=0.2011, simple_loss=0.2814, pruned_loss=0.06039, over 7255.00 frames.], tot_loss[loss=0.1935, simple_loss=0.2785, pruned_loss=0.05425, over 1367035.34 frames.], batch size: 19, lr: 3.27e-04 2022-05-27 21:22:30,807 INFO [train.py:842] (0/4) Epoch 17, batch 750, loss[loss=0.1976, simple_loss=0.2918, pruned_loss=0.05167, over 7153.00 frames.], tot_loss[loss=0.192, simple_loss=0.2772, pruned_loss=0.05341, over 1375944.41 frames.], batch size: 20, lr: 3.27e-04 2022-05-27 21:23:09,469 INFO [train.py:842] (0/4) Epoch 17, batch 800, loss[loss=0.1805, simple_loss=0.272, pruned_loss=0.04454, over 7157.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2773, pruned_loss=0.0534, over 1386911.11 frames.], batch size: 19, lr: 3.27e-04 2022-05-27 21:23:48,550 INFO [train.py:842] (0/4) Epoch 17, batch 850, loss[loss=0.2238, simple_loss=0.3129, pruned_loss=0.06734, over 6418.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2766, pruned_loss=0.05338, over 1395330.75 frames.], batch size: 38, lr: 3.27e-04 2022-05-27 21:24:27,494 INFO [train.py:842] (0/4) Epoch 17, batch 900, loss[loss=0.229, simple_loss=0.3118, pruned_loss=0.07309, over 7334.00 frames.], tot_loss[loss=0.1919, simple_loss=0.2769, pruned_loss=0.0534, over 1407380.14 frames.], batch size: 20, lr: 3.27e-04 2022-05-27 21:25:06,287 INFO [train.py:842] (0/4) Epoch 17, batch 950, loss[loss=0.1752, simple_loss=0.2454, pruned_loss=0.05254, over 7125.00 frames.], tot_loss[loss=0.1922, simple_loss=0.2771, pruned_loss=0.05368, over 1412402.59 frames.], batch size: 17, lr: 3.27e-04 2022-05-27 21:25:44,915 INFO [train.py:842] (0/4) Epoch 17, batch 1000, loss[loss=0.1673, simple_loss=0.2579, pruned_loss=0.03834, over 7118.00 frames.], tot_loss[loss=0.1919, simple_loss=0.2768, pruned_loss=0.05348, over 1416089.46 frames.], batch size: 21, lr: 3.27e-04 2022-05-27 21:26:24,307 INFO [train.py:842] (0/4) Epoch 17, batch 1050, loss[loss=0.1721, simple_loss=0.2661, pruned_loss=0.03905, over 7335.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2753, pruned_loss=0.05245, over 1421016.74 frames.], batch size: 22, lr: 3.27e-04 2022-05-27 21:27:03,132 INFO [train.py:842] (0/4) Epoch 17, batch 1100, loss[loss=0.182, simple_loss=0.2755, pruned_loss=0.04429, over 7308.00 frames.], tot_loss[loss=0.1913, simple_loss=0.276, 
pruned_loss=0.05328, over 1421810.54 frames.], batch size: 24, lr: 3.26e-04 2022-05-27 21:27:42,202 INFO [train.py:842] (0/4) Epoch 17, batch 1150, loss[loss=0.1906, simple_loss=0.2735, pruned_loss=0.05381, over 7265.00 frames.], tot_loss[loss=0.1923, simple_loss=0.2776, pruned_loss=0.05347, over 1422955.12 frames.], batch size: 24, lr: 3.26e-04 2022-05-27 21:28:20,861 INFO [train.py:842] (0/4) Epoch 17, batch 1200, loss[loss=0.2308, simple_loss=0.3141, pruned_loss=0.07374, over 7291.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2769, pruned_loss=0.05366, over 1420157.67 frames.], batch size: 25, lr: 3.26e-04 2022-05-27 21:28:59,935 INFO [train.py:842] (0/4) Epoch 17, batch 1250, loss[loss=0.1666, simple_loss=0.2545, pruned_loss=0.0394, over 7293.00 frames.], tot_loss[loss=0.1938, simple_loss=0.2783, pruned_loss=0.0546, over 1416300.83 frames.], batch size: 18, lr: 3.26e-04 2022-05-27 21:29:38,919 INFO [train.py:842] (0/4) Epoch 17, batch 1300, loss[loss=0.1687, simple_loss=0.2596, pruned_loss=0.03885, over 7332.00 frames.], tot_loss[loss=0.1923, simple_loss=0.2774, pruned_loss=0.05365, over 1414122.22 frames.], batch size: 22, lr: 3.26e-04 2022-05-27 21:30:18,027 INFO [train.py:842] (0/4) Epoch 17, batch 1350, loss[loss=0.1367, simple_loss=0.2155, pruned_loss=0.02898, over 6993.00 frames.], tot_loss[loss=0.1927, simple_loss=0.2778, pruned_loss=0.05386, over 1419684.01 frames.], batch size: 16, lr: 3.26e-04 2022-05-27 21:30:56,917 INFO [train.py:842] (0/4) Epoch 17, batch 1400, loss[loss=0.2098, simple_loss=0.2988, pruned_loss=0.06036, over 7152.00 frames.], tot_loss[loss=0.1919, simple_loss=0.2769, pruned_loss=0.05346, over 1421034.48 frames.], batch size: 20, lr: 3.26e-04 2022-05-27 21:31:36,140 INFO [train.py:842] (0/4) Epoch 17, batch 1450, loss[loss=0.2221, simple_loss=0.3, pruned_loss=0.07205, over 7348.00 frames.], tot_loss[loss=0.1927, simple_loss=0.2778, pruned_loss=0.05386, over 1420208.91 frames.], batch size: 22, lr: 3.26e-04 2022-05-27 21:32:15,395 INFO [train.py:842] (0/4) Epoch 17, batch 1500, loss[loss=0.1722, simple_loss=0.254, pruned_loss=0.04523, over 7252.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2768, pruned_loss=0.05369, over 1425603.16 frames.], batch size: 19, lr: 3.26e-04 2022-05-27 21:32:54,704 INFO [train.py:842] (0/4) Epoch 17, batch 1550, loss[loss=0.1662, simple_loss=0.2585, pruned_loss=0.03692, over 7214.00 frames.], tot_loss[loss=0.1922, simple_loss=0.277, pruned_loss=0.05374, over 1422916.17 frames.], batch size: 21, lr: 3.26e-04 2022-05-27 21:33:33,695 INFO [train.py:842] (0/4) Epoch 17, batch 1600, loss[loss=0.202, simple_loss=0.284, pruned_loss=0.05995, over 7432.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2757, pruned_loss=0.05298, over 1427092.59 frames.], batch size: 20, lr: 3.26e-04 2022-05-27 21:34:12,935 INFO [train.py:842] (0/4) Epoch 17, batch 1650, loss[loss=0.1893, simple_loss=0.2791, pruned_loss=0.04971, over 7416.00 frames.], tot_loss[loss=0.1921, simple_loss=0.277, pruned_loss=0.05357, over 1428966.28 frames.], batch size: 21, lr: 3.26e-04 2022-05-27 21:34:51,642 INFO [train.py:842] (0/4) Epoch 17, batch 1700, loss[loss=0.2617, simple_loss=0.3325, pruned_loss=0.09544, over 5109.00 frames.], tot_loss[loss=0.1927, simple_loss=0.2775, pruned_loss=0.05399, over 1422121.61 frames.], batch size: 52, lr: 3.26e-04 2022-05-27 21:35:30,463 INFO [train.py:842] (0/4) Epoch 17, batch 1750, loss[loss=0.1921, simple_loss=0.2789, pruned_loss=0.0527, over 7390.00 frames.], tot_loss[loss=0.1925, simple_loss=0.2778, pruned_loss=0.05366, over 
1413827.72 frames.], batch size: 23, lr: 3.26e-04 2022-05-27 21:36:08,991 INFO [train.py:842] (0/4) Epoch 17, batch 1800, loss[loss=0.2062, simple_loss=0.3003, pruned_loss=0.05603, over 7212.00 frames.], tot_loss[loss=0.1933, simple_loss=0.2786, pruned_loss=0.05402, over 1415330.57 frames.], batch size: 23, lr: 3.26e-04 2022-05-27 21:36:48,044 INFO [train.py:842] (0/4) Epoch 17, batch 1850, loss[loss=0.1721, simple_loss=0.259, pruned_loss=0.04259, over 6318.00 frames.], tot_loss[loss=0.1924, simple_loss=0.278, pruned_loss=0.05346, over 1416031.79 frames.], batch size: 37, lr: 3.26e-04 2022-05-27 21:37:26,793 INFO [train.py:842] (0/4) Epoch 17, batch 1900, loss[loss=0.1715, simple_loss=0.262, pruned_loss=0.04052, over 7437.00 frames.], tot_loss[loss=0.1919, simple_loss=0.2774, pruned_loss=0.05326, over 1420402.27 frames.], batch size: 20, lr: 3.26e-04 2022-05-27 21:38:05,889 INFO [train.py:842] (0/4) Epoch 17, batch 1950, loss[loss=0.184, simple_loss=0.2702, pruned_loss=0.04889, over 7323.00 frames.], tot_loss[loss=0.192, simple_loss=0.2771, pruned_loss=0.05341, over 1422970.71 frames.], batch size: 21, lr: 3.26e-04 2022-05-27 21:38:44,448 INFO [train.py:842] (0/4) Epoch 17, batch 2000, loss[loss=0.1515, simple_loss=0.2397, pruned_loss=0.03162, over 7271.00 frames.], tot_loss[loss=0.1918, simple_loss=0.2772, pruned_loss=0.0532, over 1424579.70 frames.], batch size: 19, lr: 3.25e-04 2022-05-27 21:39:23,693 INFO [train.py:842] (0/4) Epoch 17, batch 2050, loss[loss=0.1639, simple_loss=0.2403, pruned_loss=0.04376, over 7391.00 frames.], tot_loss[loss=0.1915, simple_loss=0.2766, pruned_loss=0.0532, over 1427837.69 frames.], batch size: 18, lr: 3.25e-04 2022-05-27 21:40:02,208 INFO [train.py:842] (0/4) Epoch 17, batch 2100, loss[loss=0.1743, simple_loss=0.2602, pruned_loss=0.04416, over 7410.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2769, pruned_loss=0.05315, over 1428874.73 frames.], batch size: 21, lr: 3.25e-04 2022-05-27 21:40:41,275 INFO [train.py:842] (0/4) Epoch 17, batch 2150, loss[loss=0.1953, simple_loss=0.2776, pruned_loss=0.05654, over 7355.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2777, pruned_loss=0.05373, over 1424098.73 frames.], batch size: 19, lr: 3.25e-04 2022-05-27 21:41:20,096 INFO [train.py:842] (0/4) Epoch 17, batch 2200, loss[loss=0.2017, simple_loss=0.29, pruned_loss=0.05671, over 7328.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2773, pruned_loss=0.05339, over 1421207.27 frames.], batch size: 22, lr: 3.25e-04 2022-05-27 21:41:59,488 INFO [train.py:842] (0/4) Epoch 17, batch 2250, loss[loss=0.1876, simple_loss=0.2694, pruned_loss=0.05284, over 7412.00 frames.], tot_loss[loss=0.1942, simple_loss=0.2789, pruned_loss=0.05482, over 1423110.04 frames.], batch size: 21, lr: 3.25e-04 2022-05-27 21:42:38,170 INFO [train.py:842] (0/4) Epoch 17, batch 2300, loss[loss=0.2045, simple_loss=0.2805, pruned_loss=0.06425, over 7285.00 frames.], tot_loss[loss=0.1945, simple_loss=0.279, pruned_loss=0.05496, over 1422414.34 frames.], batch size: 24, lr: 3.25e-04 2022-05-27 21:43:17,576 INFO [train.py:842] (0/4) Epoch 17, batch 2350, loss[loss=0.1853, simple_loss=0.2706, pruned_loss=0.04998, over 7388.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2773, pruned_loss=0.05376, over 1425735.08 frames.], batch size: 23, lr: 3.25e-04 2022-05-27 21:43:56,290 INFO [train.py:842] (0/4) Epoch 17, batch 2400, loss[loss=0.1422, simple_loss=0.2322, pruned_loss=0.02612, over 7002.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2767, pruned_loss=0.05326, over 1423686.51 frames.], batch 
size: 16, lr: 3.25e-04 2022-05-27 21:44:35,702 INFO [train.py:842] (0/4) Epoch 17, batch 2450, loss[loss=0.2027, simple_loss=0.29, pruned_loss=0.05774, over 7331.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2758, pruned_loss=0.05295, over 1422982.15 frames.], batch size: 22, lr: 3.25e-04 2022-05-27 21:45:14,353 INFO [train.py:842] (0/4) Epoch 17, batch 2500, loss[loss=0.194, simple_loss=0.2833, pruned_loss=0.05237, over 7215.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2748, pruned_loss=0.05238, over 1422782.97 frames.], batch size: 21, lr: 3.25e-04 2022-05-27 21:45:53,519 INFO [train.py:842] (0/4) Epoch 17, batch 2550, loss[loss=0.201, simple_loss=0.2998, pruned_loss=0.05111, over 7216.00 frames.], tot_loss[loss=0.19, simple_loss=0.275, pruned_loss=0.05253, over 1418912.63 frames.], batch size: 21, lr: 3.25e-04 2022-05-27 21:46:32,091 INFO [train.py:842] (0/4) Epoch 17, batch 2600, loss[loss=0.1914, simple_loss=0.2809, pruned_loss=0.05097, over 7048.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2749, pruned_loss=0.05228, over 1421208.07 frames.], batch size: 28, lr: 3.25e-04 2022-05-27 21:47:11,549 INFO [train.py:842] (0/4) Epoch 17, batch 2650, loss[loss=0.1869, simple_loss=0.2741, pruned_loss=0.04987, over 7355.00 frames.], tot_loss[loss=0.1922, simple_loss=0.2775, pruned_loss=0.05344, over 1419428.99 frames.], batch size: 19, lr: 3.25e-04 2022-05-27 21:47:50,581 INFO [train.py:842] (0/4) Epoch 17, batch 2700, loss[loss=0.1891, simple_loss=0.2802, pruned_loss=0.04895, over 7346.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2764, pruned_loss=0.05339, over 1422273.54 frames.], batch size: 22, lr: 3.25e-04 2022-05-27 21:48:29,805 INFO [train.py:842] (0/4) Epoch 17, batch 2750, loss[loss=0.167, simple_loss=0.2614, pruned_loss=0.03625, over 7163.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2748, pruned_loss=0.05236, over 1421729.22 frames.], batch size: 19, lr: 3.25e-04 2022-05-27 21:49:09,001 INFO [train.py:842] (0/4) Epoch 17, batch 2800, loss[loss=0.2086, simple_loss=0.2906, pruned_loss=0.06332, over 4933.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2746, pruned_loss=0.05253, over 1421508.13 frames.], batch size: 52, lr: 3.25e-04 2022-05-27 21:49:47,933 INFO [train.py:842] (0/4) Epoch 17, batch 2850, loss[loss=0.217, simple_loss=0.3104, pruned_loss=0.06187, over 7311.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2756, pruned_loss=0.05301, over 1422023.77 frames.], batch size: 21, lr: 3.25e-04 2022-05-27 21:50:27,037 INFO [train.py:842] (0/4) Epoch 17, batch 2900, loss[loss=0.2071, simple_loss=0.283, pruned_loss=0.06561, over 7236.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2747, pruned_loss=0.05255, over 1418702.97 frames.], batch size: 20, lr: 3.24e-04 2022-05-27 21:51:06,312 INFO [train.py:842] (0/4) Epoch 17, batch 2950, loss[loss=0.1599, simple_loss=0.2336, pruned_loss=0.04308, over 7286.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2754, pruned_loss=0.05301, over 1419252.42 frames.], batch size: 18, lr: 3.24e-04 2022-05-27 21:51:45,543 INFO [train.py:842] (0/4) Epoch 17, batch 3000, loss[loss=0.1689, simple_loss=0.2591, pruned_loss=0.03942, over 7143.00 frames.], tot_loss[loss=0.1896, simple_loss=0.2747, pruned_loss=0.05222, over 1423746.20 frames.], batch size: 20, lr: 3.24e-04 2022-05-27 21:51:45,544 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 21:51:55,147 INFO [train.py:871] (0/4) Epoch 17, validation: loss=0.1666, simple_loss=0.2663, pruned_loss=0.03343, over 868885.00 frames. 
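The lr column (3.40e-04 around epoch 16, batch 6700, down to roughly 3.24e-04 here in epoch 17) is consistent with an Eden-style schedule built from the startup hyperparameters initial_lr=0.003, lr_batches=5000, lr_epochs=6. A minimal sketch, assuming that formula and a 0-based epoch index; with roughly 150k optimizer steps taken by this point (estimated from checkpoint-152000 saved later in this epoch) it reproduces the logged rate to within rounding:

    def eden_lr(initial_lr: float, batch: int, epoch: int,
                lr_batches: float = 5000.0, lr_epochs: float = 6.0) -> float:
        # Decay jointly in the batch and the epoch dimension.
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return initial_lr * batch_factor * epoch_factor

    # e.g. eden_lr(0.003, batch=150_000, epoch=16) ~= 3.24e-04 (step count is an estimate).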
2022-05-27 21:52:34,165 INFO [train.py:842] (0/4) Epoch 17, batch 3050, loss[loss=0.1832, simple_loss=0.2723, pruned_loss=0.04704, over 6670.00 frames.], tot_loss[loss=0.1903, simple_loss=0.275, pruned_loss=0.05279, over 1423608.29 frames.], batch size: 38, lr: 3.24e-04 2022-05-27 21:53:12,907 INFO [train.py:842] (0/4) Epoch 17, batch 3100, loss[loss=0.268, simple_loss=0.3376, pruned_loss=0.09917, over 7273.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2755, pruned_loss=0.05293, over 1420130.69 frames.], batch size: 25, lr: 3.24e-04 2022-05-27 21:53:52,112 INFO [train.py:842] (0/4) Epoch 17, batch 3150, loss[loss=0.1686, simple_loss=0.2619, pruned_loss=0.03771, over 7322.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2763, pruned_loss=0.05354, over 1419096.72 frames.], batch size: 20, lr: 3.24e-04 2022-05-27 21:54:30,950 INFO [train.py:842] (0/4) Epoch 17, batch 3200, loss[loss=0.1691, simple_loss=0.2572, pruned_loss=0.04045, over 7352.00 frames.], tot_loss[loss=0.192, simple_loss=0.2767, pruned_loss=0.05362, over 1419611.08 frames.], batch size: 19, lr: 3.24e-04 2022-05-27 21:55:10,327 INFO [train.py:842] (0/4) Epoch 17, batch 3250, loss[loss=0.1542, simple_loss=0.2271, pruned_loss=0.04062, over 7066.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2766, pruned_loss=0.05338, over 1424808.14 frames.], batch size: 18, lr: 3.24e-04 2022-05-27 21:55:49,261 INFO [train.py:842] (0/4) Epoch 17, batch 3300, loss[loss=0.2294, simple_loss=0.2905, pruned_loss=0.08417, over 7164.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2774, pruned_loss=0.05374, over 1426021.36 frames.], batch size: 19, lr: 3.24e-04 2022-05-27 21:56:28,695 INFO [train.py:842] (0/4) Epoch 17, batch 3350, loss[loss=0.1918, simple_loss=0.2808, pruned_loss=0.0514, over 7340.00 frames.], tot_loss[loss=0.1918, simple_loss=0.2771, pruned_loss=0.05328, over 1427045.36 frames.], batch size: 22, lr: 3.24e-04 2022-05-27 21:57:07,333 INFO [train.py:842] (0/4) Epoch 17, batch 3400, loss[loss=0.1865, simple_loss=0.2766, pruned_loss=0.04822, over 7143.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2772, pruned_loss=0.05347, over 1424378.90 frames.], batch size: 20, lr: 3.24e-04 2022-05-27 21:57:46,421 INFO [train.py:842] (0/4) Epoch 17, batch 3450, loss[loss=0.1823, simple_loss=0.2701, pruned_loss=0.04729, over 7338.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2748, pruned_loss=0.05246, over 1425656.07 frames.], batch size: 20, lr: 3.24e-04 2022-05-27 21:58:25,545 INFO [train.py:842] (0/4) Epoch 17, batch 3500, loss[loss=0.1912, simple_loss=0.284, pruned_loss=0.04917, over 7209.00 frames.], tot_loss[loss=0.1906, simple_loss=0.2754, pruned_loss=0.0529, over 1424749.30 frames.], batch size: 22, lr: 3.24e-04 2022-05-27 21:59:04,672 INFO [train.py:842] (0/4) Epoch 17, batch 3550, loss[loss=0.1887, simple_loss=0.2899, pruned_loss=0.04375, over 7122.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2765, pruned_loss=0.05337, over 1427111.54 frames.], batch size: 21, lr: 3.24e-04 2022-05-27 21:59:43,837 INFO [train.py:842] (0/4) Epoch 17, batch 3600, loss[loss=0.1801, simple_loss=0.2574, pruned_loss=0.05142, over 7264.00 frames.], tot_loss[loss=0.1923, simple_loss=0.2775, pruned_loss=0.05356, over 1427914.04 frames.], batch size: 18, lr: 3.24e-04 2022-05-27 22:00:23,008 INFO [train.py:842] (0/4) Epoch 17, batch 3650, loss[loss=0.2066, simple_loss=0.2981, pruned_loss=0.05749, over 7326.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2763, pruned_loss=0.0532, over 1431855.53 frames.], batch size: 21, lr: 3.24e-04 2022-05-27 22:01:01,998 
INFO [train.py:842] (0/4) Epoch 17, batch 3700, loss[loss=0.198, simple_loss=0.2903, pruned_loss=0.05285, over 7144.00 frames.], tot_loss[loss=0.1922, simple_loss=0.2768, pruned_loss=0.05385, over 1431288.57 frames.], batch size: 20, lr: 3.24e-04 2022-05-27 22:01:41,165 INFO [train.py:842] (0/4) Epoch 17, batch 3750, loss[loss=0.2403, simple_loss=0.3122, pruned_loss=0.08417, over 6458.00 frames.], tot_loss[loss=0.1923, simple_loss=0.277, pruned_loss=0.05374, over 1428824.23 frames.], batch size: 38, lr: 3.24e-04 2022-05-27 22:02:19,653 INFO [train.py:842] (0/4) Epoch 17, batch 3800, loss[loss=0.1947, simple_loss=0.2875, pruned_loss=0.05099, over 6313.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2779, pruned_loss=0.05424, over 1427074.60 frames.], batch size: 37, lr: 3.24e-04 2022-05-27 22:02:58,476 INFO [train.py:842] (0/4) Epoch 17, batch 3850, loss[loss=0.1747, simple_loss=0.2542, pruned_loss=0.04762, over 7004.00 frames.], tot_loss[loss=0.1922, simple_loss=0.2774, pruned_loss=0.05352, over 1426213.43 frames.], batch size: 16, lr: 3.23e-04 2022-05-27 22:03:37,704 INFO [train.py:842] (0/4) Epoch 17, batch 3900, loss[loss=0.259, simple_loss=0.3433, pruned_loss=0.08731, over 7222.00 frames.], tot_loss[loss=0.1915, simple_loss=0.2764, pruned_loss=0.05332, over 1428209.45 frames.], batch size: 22, lr: 3.23e-04 2022-05-27 22:04:16,895 INFO [train.py:842] (0/4) Epoch 17, batch 3950, loss[loss=0.2035, simple_loss=0.2878, pruned_loss=0.05965, over 7203.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2773, pruned_loss=0.05344, over 1427374.91 frames.], batch size: 23, lr: 3.23e-04 2022-05-27 22:04:55,807 INFO [train.py:842] (0/4) Epoch 17, batch 4000, loss[loss=0.1836, simple_loss=0.255, pruned_loss=0.05612, over 7270.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2766, pruned_loss=0.05307, over 1428225.00 frames.], batch size: 18, lr: 3.23e-04 2022-05-27 22:05:35,082 INFO [train.py:842] (0/4) Epoch 17, batch 4050, loss[loss=0.2197, simple_loss=0.3029, pruned_loss=0.06827, over 6842.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2751, pruned_loss=0.05269, over 1424096.15 frames.], batch size: 31, lr: 3.23e-04 2022-05-27 22:06:13,936 INFO [train.py:842] (0/4) Epoch 17, batch 4100, loss[loss=0.1943, simple_loss=0.2932, pruned_loss=0.04769, over 6465.00 frames.], tot_loss[loss=0.1919, simple_loss=0.2766, pruned_loss=0.0536, over 1424072.14 frames.], batch size: 37, lr: 3.23e-04 2022-05-27 22:06:52,830 INFO [train.py:842] (0/4) Epoch 17, batch 4150, loss[loss=0.1748, simple_loss=0.2745, pruned_loss=0.03752, over 7324.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2765, pruned_loss=0.05384, over 1422520.26 frames.], batch size: 22, lr: 3.23e-04 2022-05-27 22:07:31,501 INFO [train.py:842] (0/4) Epoch 17, batch 4200, loss[loss=0.1697, simple_loss=0.2477, pruned_loss=0.04588, over 7158.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2759, pruned_loss=0.05334, over 1422722.87 frames.], batch size: 19, lr: 3.23e-04 2022-05-27 22:08:10,853 INFO [train.py:842] (0/4) Epoch 17, batch 4250, loss[loss=0.1546, simple_loss=0.2374, pruned_loss=0.03592, over 7142.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2749, pruned_loss=0.05265, over 1424501.38 frames.], batch size: 17, lr: 3.23e-04 2022-05-27 22:08:49,622 INFO [train.py:842] (0/4) Epoch 17, batch 4300, loss[loss=0.2081, simple_loss=0.2932, pruned_loss=0.0615, over 7319.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2758, pruned_loss=0.05336, over 1421439.72 frames.], batch size: 21, lr: 3.23e-04 2022-05-27 22:09:29,065 INFO [train.py:842] 
(0/4) Epoch 17, batch 4350, loss[loss=0.1989, simple_loss=0.2958, pruned_loss=0.05104, over 6809.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2772, pruned_loss=0.05384, over 1420525.05 frames.], batch size: 31, lr: 3.23e-04 2022-05-27 22:10:08,014 INFO [train.py:842] (0/4) Epoch 17, batch 4400, loss[loss=0.1702, simple_loss=0.2583, pruned_loss=0.04109, over 7258.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2764, pruned_loss=0.05351, over 1422336.47 frames.], batch size: 19, lr: 3.23e-04 2022-05-27 22:10:47,303 INFO [train.py:842] (0/4) Epoch 17, batch 4450, loss[loss=0.2084, simple_loss=0.2869, pruned_loss=0.06497, over 7064.00 frames.], tot_loss[loss=0.1906, simple_loss=0.2753, pruned_loss=0.05298, over 1425701.74 frames.], batch size: 18, lr: 3.23e-04 2022-05-27 22:11:26,012 INFO [train.py:842] (0/4) Epoch 17, batch 4500, loss[loss=0.2455, simple_loss=0.331, pruned_loss=0.08, over 6489.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2753, pruned_loss=0.05315, over 1421603.05 frames.], batch size: 38, lr: 3.23e-04 2022-05-27 22:12:04,902 INFO [train.py:842] (0/4) Epoch 17, batch 4550, loss[loss=0.1739, simple_loss=0.2691, pruned_loss=0.03935, over 7342.00 frames.], tot_loss[loss=0.1911, simple_loss=0.2762, pruned_loss=0.05301, over 1420539.53 frames.], batch size: 22, lr: 3.23e-04 2022-05-27 22:12:43,475 INFO [train.py:842] (0/4) Epoch 17, batch 4600, loss[loss=0.2006, simple_loss=0.299, pruned_loss=0.05109, over 7210.00 frames.], tot_loss[loss=0.1905, simple_loss=0.2758, pruned_loss=0.05259, over 1423110.21 frames.], batch size: 22, lr: 3.23e-04 2022-05-27 22:13:22,505 INFO [train.py:842] (0/4) Epoch 17, batch 4650, loss[loss=0.2084, simple_loss=0.3014, pruned_loss=0.05764, over 7334.00 frames.], tot_loss[loss=0.19, simple_loss=0.2754, pruned_loss=0.05231, over 1426599.71 frames.], batch size: 22, lr: 3.23e-04 2022-05-27 22:14:01,626 INFO [train.py:842] (0/4) Epoch 17, batch 4700, loss[loss=0.1932, simple_loss=0.2807, pruned_loss=0.05286, over 7214.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2754, pruned_loss=0.0532, over 1420454.94 frames.], batch size: 21, lr: 3.23e-04 2022-05-27 22:14:40,345 INFO [train.py:842] (0/4) Epoch 17, batch 4750, loss[loss=0.1655, simple_loss=0.2352, pruned_loss=0.04794, over 7064.00 frames.], tot_loss[loss=0.1912, simple_loss=0.2759, pruned_loss=0.05319, over 1422206.61 frames.], batch size: 18, lr: 3.23e-04 2022-05-27 22:15:19,291 INFO [train.py:842] (0/4) Epoch 17, batch 4800, loss[loss=0.154, simple_loss=0.2299, pruned_loss=0.03911, over 7272.00 frames.], tot_loss[loss=0.1891, simple_loss=0.2743, pruned_loss=0.05194, over 1422458.01 frames.], batch size: 17, lr: 3.22e-04 2022-05-27 22:15:58,775 INFO [train.py:842] (0/4) Epoch 17, batch 4850, loss[loss=0.1923, simple_loss=0.2735, pruned_loss=0.05551, over 7071.00 frames.], tot_loss[loss=0.1895, simple_loss=0.2746, pruned_loss=0.05221, over 1421832.06 frames.], batch size: 18, lr: 3.22e-04 2022-05-27 22:16:37,910 INFO [train.py:842] (0/4) Epoch 17, batch 4900, loss[loss=0.1801, simple_loss=0.2778, pruned_loss=0.04119, over 7172.00 frames.], tot_loss[loss=0.1892, simple_loss=0.2745, pruned_loss=0.05198, over 1422850.75 frames.], batch size: 26, lr: 3.22e-04 2022-05-27 22:17:00,232 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-152000.pt 2022-05-27 22:17:20,299 INFO [train.py:842] (0/4) Epoch 17, batch 4950, loss[loss=0.2164, simple_loss=0.288, pruned_loss=0.07235, over 6754.00 frames.], tot_loss[loss=0.187, simple_loss=0.2719, 
pruned_loss=0.0511, over 1425063.69 frames.], batch size: 15, lr: 3.22e-04 2022-05-27 22:17:59,061 INFO [train.py:842] (0/4) Epoch 17, batch 5000, loss[loss=0.1494, simple_loss=0.2335, pruned_loss=0.03268, over 6997.00 frames.], tot_loss[loss=0.1888, simple_loss=0.2735, pruned_loss=0.05202, over 1421506.77 frames.], batch size: 16, lr: 3.22e-04 2022-05-27 22:18:38,076 INFO [train.py:842] (0/4) Epoch 17, batch 5050, loss[loss=0.1989, simple_loss=0.2782, pruned_loss=0.05986, over 7278.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2737, pruned_loss=0.05166, over 1422141.63 frames.], batch size: 18, lr: 3.22e-04 2022-05-27 22:19:17,090 INFO [train.py:842] (0/4) Epoch 17, batch 5100, loss[loss=0.171, simple_loss=0.2457, pruned_loss=0.04817, over 7004.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2744, pruned_loss=0.05217, over 1421446.80 frames.], batch size: 16, lr: 3.22e-04 2022-05-27 22:19:56,237 INFO [train.py:842] (0/4) Epoch 17, batch 5150, loss[loss=0.1875, simple_loss=0.2827, pruned_loss=0.04614, over 7228.00 frames.], tot_loss[loss=0.1906, simple_loss=0.2753, pruned_loss=0.05293, over 1419148.74 frames.], batch size: 20, lr: 3.22e-04 2022-05-27 22:20:35,396 INFO [train.py:842] (0/4) Epoch 17, batch 5200, loss[loss=0.1967, simple_loss=0.2844, pruned_loss=0.05453, over 7145.00 frames.], tot_loss[loss=0.19, simple_loss=0.2747, pruned_loss=0.05261, over 1423913.99 frames.], batch size: 20, lr: 3.22e-04 2022-05-27 22:21:14,373 INFO [train.py:842] (0/4) Epoch 17, batch 5250, loss[loss=0.1708, simple_loss=0.2764, pruned_loss=0.03262, over 7123.00 frames.], tot_loss[loss=0.19, simple_loss=0.2751, pruned_loss=0.05245, over 1422807.85 frames.], batch size: 21, lr: 3.22e-04 2022-05-27 22:21:53,441 INFO [train.py:842] (0/4) Epoch 17, batch 5300, loss[loss=0.1949, simple_loss=0.2757, pruned_loss=0.057, over 7205.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2748, pruned_loss=0.05241, over 1422556.80 frames.], batch size: 16, lr: 3.22e-04 2022-05-27 22:22:32,681 INFO [train.py:842] (0/4) Epoch 17, batch 5350, loss[loss=0.1796, simple_loss=0.2573, pruned_loss=0.05096, over 7230.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2765, pruned_loss=0.05314, over 1424039.20 frames.], batch size: 16, lr: 3.22e-04 2022-05-27 22:23:11,619 INFO [train.py:842] (0/4) Epoch 17, batch 5400, loss[loss=0.1707, simple_loss=0.2571, pruned_loss=0.04211, over 7238.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2755, pruned_loss=0.05244, over 1425082.34 frames.], batch size: 20, lr: 3.22e-04 2022-05-27 22:23:51,008 INFO [train.py:842] (0/4) Epoch 17, batch 5450, loss[loss=0.1841, simple_loss=0.2802, pruned_loss=0.04404, over 7153.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2763, pruned_loss=0.0527, over 1427785.13 frames.], batch size: 19, lr: 3.22e-04 2022-05-27 22:24:29,918 INFO [train.py:842] (0/4) Epoch 17, batch 5500, loss[loss=0.1713, simple_loss=0.2554, pruned_loss=0.04361, over 7285.00 frames.], tot_loss[loss=0.191, simple_loss=0.2763, pruned_loss=0.05285, over 1428234.65 frames.], batch size: 24, lr: 3.22e-04 2022-05-27 22:25:09,374 INFO [train.py:842] (0/4) Epoch 17, batch 5550, loss[loss=0.1761, simple_loss=0.2604, pruned_loss=0.04588, over 7229.00 frames.], tot_loss[loss=0.1918, simple_loss=0.2767, pruned_loss=0.0535, over 1430492.78 frames.], batch size: 20, lr: 3.22e-04 2022-05-27 22:25:48,399 INFO [train.py:842] (0/4) Epoch 17, batch 5600, loss[loss=0.2344, simple_loss=0.3133, pruned_loss=0.07776, over 7324.00 frames.], tot_loss[loss=0.1906, simple_loss=0.2755, pruned_loss=0.05288, over 
1432852.90 frames.], batch size: 20, lr: 3.22e-04 2022-05-27 22:26:27,329 INFO [train.py:842] (0/4) Epoch 17, batch 5650, loss[loss=0.1992, simple_loss=0.2799, pruned_loss=0.05921, over 7298.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2775, pruned_loss=0.05383, over 1431006.64 frames.], batch size: 24, lr: 3.22e-04 2022-05-27 22:27:05,742 INFO [train.py:842] (0/4) Epoch 17, batch 5700, loss[loss=0.1426, simple_loss=0.2213, pruned_loss=0.03197, over 7432.00 frames.], tot_loss[loss=0.1925, simple_loss=0.2779, pruned_loss=0.05356, over 1429840.04 frames.], batch size: 18, lr: 3.22e-04 2022-05-27 22:27:45,010 INFO [train.py:842] (0/4) Epoch 17, batch 5750, loss[loss=0.1999, simple_loss=0.3037, pruned_loss=0.04807, over 6548.00 frames.], tot_loss[loss=0.1922, simple_loss=0.2773, pruned_loss=0.05358, over 1422713.04 frames.], batch size: 38, lr: 3.21e-04 2022-05-27 22:28:23,771 INFO [train.py:842] (0/4) Epoch 17, batch 5800, loss[loss=0.188, simple_loss=0.276, pruned_loss=0.04999, over 7314.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2768, pruned_loss=0.05321, over 1422559.79 frames.], batch size: 20, lr: 3.21e-04 2022-05-27 22:29:02,961 INFO [train.py:842] (0/4) Epoch 17, batch 5850, loss[loss=0.1965, simple_loss=0.2645, pruned_loss=0.06425, over 7287.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2769, pruned_loss=0.05292, over 1425410.71 frames.], batch size: 18, lr: 3.21e-04 2022-05-27 22:29:41,680 INFO [train.py:842] (0/4) Epoch 17, batch 5900, loss[loss=0.1903, simple_loss=0.2702, pruned_loss=0.05516, over 7374.00 frames.], tot_loss[loss=0.19, simple_loss=0.2754, pruned_loss=0.05231, over 1424814.27 frames.], batch size: 23, lr: 3.21e-04 2022-05-27 22:30:20,734 INFO [train.py:842] (0/4) Epoch 17, batch 5950, loss[loss=0.2032, simple_loss=0.2907, pruned_loss=0.05782, over 7150.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2759, pruned_loss=0.05294, over 1418381.67 frames.], batch size: 26, lr: 3.21e-04 2022-05-27 22:30:59,313 INFO [train.py:842] (0/4) Epoch 17, batch 6000, loss[loss=0.2471, simple_loss=0.3264, pruned_loss=0.08386, over 7331.00 frames.], tot_loss[loss=0.1915, simple_loss=0.2767, pruned_loss=0.05314, over 1418190.47 frames.], batch size: 20, lr: 3.21e-04 2022-05-27 22:30:59,314 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 22:31:09,110 INFO [train.py:871] (0/4) Epoch 17, validation: loss=0.1659, simple_loss=0.2656, pruned_loss=0.03306, over 868885.00 frames. 
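The recurring validation entries (loss=0.167, 0.1666, 0.1659 so far, each computed over the same 868885.00 dev frames) are the easiest way to track progress in this log. A small parsing sketch, assuming the log has been saved to a file (log-train.txt is a hypothetical name):

    import re

    # Matches e.g. "Epoch 17, validation: loss=0.1659, simple_loss=0.2656, pruned_loss=0.03306"
    pattern = re.compile(
        r"Epoch (\d+), validation: loss=([\d.]+), simple_loss=([\d.]+), pruned_loss=([\d.]+)"
    )

    with open("log-train.txt") as f:  # hypothetical filename
        for m in pattern.finditer(f.read()):
            epoch, loss, simple, pruned = m.groups()
            print(f"epoch {epoch}: valid loss {loss} (simple {simple}, pruned {pruned})")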
2022-05-27 22:31:48,656 INFO [train.py:842] (0/4) Epoch 17, batch 6050, loss[loss=0.1634, simple_loss=0.2621, pruned_loss=0.03238, over 7325.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2766, pruned_loss=0.05306, over 1422431.04 frames.], batch size: 21, lr: 3.21e-04 2022-05-27 22:32:27,134 INFO [train.py:842] (0/4) Epoch 17, batch 6100, loss[loss=0.1999, simple_loss=0.2938, pruned_loss=0.05301, over 7201.00 frames.], tot_loss[loss=0.1916, simple_loss=0.277, pruned_loss=0.0531, over 1424458.28 frames.], batch size: 23, lr: 3.21e-04 2022-05-27 22:33:05,948 INFO [train.py:842] (0/4) Epoch 17, batch 6150, loss[loss=0.1583, simple_loss=0.2364, pruned_loss=0.04017, over 6817.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2763, pruned_loss=0.05261, over 1420216.07 frames.], batch size: 15, lr: 3.21e-04 2022-05-27 22:33:44,749 INFO [train.py:842] (0/4) Epoch 17, batch 6200, loss[loss=0.1797, simple_loss=0.2714, pruned_loss=0.04395, over 7421.00 frames.], tot_loss[loss=0.192, simple_loss=0.2772, pruned_loss=0.05341, over 1421804.00 frames.], batch size: 21, lr: 3.21e-04 2022-05-27 22:34:23,879 INFO [train.py:842] (0/4) Epoch 17, batch 6250, loss[loss=0.2028, simple_loss=0.2982, pruned_loss=0.05367, over 6815.00 frames.], tot_loss[loss=0.193, simple_loss=0.2781, pruned_loss=0.05391, over 1420005.08 frames.], batch size: 31, lr: 3.21e-04 2022-05-27 22:35:02,513 INFO [train.py:842] (0/4) Epoch 17, batch 6300, loss[loss=0.2096, simple_loss=0.2954, pruned_loss=0.06193, over 7401.00 frames.], tot_loss[loss=0.192, simple_loss=0.2773, pruned_loss=0.05336, over 1419561.43 frames.], batch size: 21, lr: 3.21e-04 2022-05-27 22:35:41,917 INFO [train.py:842] (0/4) Epoch 17, batch 6350, loss[loss=0.1686, simple_loss=0.2384, pruned_loss=0.04934, over 7302.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2767, pruned_loss=0.05296, over 1423672.50 frames.], batch size: 17, lr: 3.21e-04 2022-05-27 22:36:20,863 INFO [train.py:842] (0/4) Epoch 17, batch 6400, loss[loss=0.1912, simple_loss=0.2914, pruned_loss=0.04544, over 7240.00 frames.], tot_loss[loss=0.1911, simple_loss=0.2766, pruned_loss=0.05284, over 1428900.90 frames.], batch size: 20, lr: 3.21e-04 2022-05-27 22:36:59,919 INFO [train.py:842] (0/4) Epoch 17, batch 6450, loss[loss=0.1693, simple_loss=0.2547, pruned_loss=0.04194, over 7354.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2766, pruned_loss=0.05294, over 1427058.86 frames.], batch size: 19, lr: 3.21e-04 2022-05-27 22:37:38,897 INFO [train.py:842] (0/4) Epoch 17, batch 6500, loss[loss=0.1464, simple_loss=0.237, pruned_loss=0.02786, over 7433.00 frames.], tot_loss[loss=0.1903, simple_loss=0.2758, pruned_loss=0.05237, over 1427863.52 frames.], batch size: 18, lr: 3.21e-04 2022-05-27 22:38:18,074 INFO [train.py:842] (0/4) Epoch 17, batch 6550, loss[loss=0.1997, simple_loss=0.2881, pruned_loss=0.05565, over 7199.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2756, pruned_loss=0.05233, over 1425611.60 frames.], batch size: 23, lr: 3.21e-04 2022-05-27 22:38:57,332 INFO [train.py:842] (0/4) Epoch 17, batch 6600, loss[loss=0.2242, simple_loss=0.3015, pruned_loss=0.07344, over 5233.00 frames.], tot_loss[loss=0.1918, simple_loss=0.2775, pruned_loss=0.05304, over 1425801.30 frames.], batch size: 52, lr: 3.21e-04 2022-05-27 22:39:36,195 INFO [train.py:842] (0/4) Epoch 17, batch 6650, loss[loss=0.1605, simple_loss=0.2427, pruned_loss=0.03917, over 7423.00 frames.], tot_loss[loss=0.1939, simple_loss=0.2789, pruned_loss=0.05445, over 1423711.85 frames.], batch size: 17, lr: 3.21e-04 2022-05-27 22:40:14,931 
INFO [train.py:842] (0/4) Epoch 17, batch 6700, loss[loss=0.2203, simple_loss=0.3076, pruned_loss=0.06647, over 7212.00 frames.], tot_loss[loss=0.1936, simple_loss=0.2788, pruned_loss=0.05418, over 1420056.69 frames.], batch size: 22, lr: 3.20e-04 2022-05-27 22:40:54,221 INFO [train.py:842] (0/4) Epoch 17, batch 6750, loss[loss=0.2405, simple_loss=0.3257, pruned_loss=0.07767, over 7208.00 frames.], tot_loss[loss=0.1928, simple_loss=0.278, pruned_loss=0.05383, over 1415506.54 frames.], batch size: 22, lr: 3.20e-04 2022-05-27 22:41:33,193 INFO [train.py:842] (0/4) Epoch 17, batch 6800, loss[loss=0.1806, simple_loss=0.2548, pruned_loss=0.05322, over 7422.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2764, pruned_loss=0.05309, over 1419108.53 frames.], batch size: 18, lr: 3.20e-04 2022-05-27 22:42:12,522 INFO [train.py:842] (0/4) Epoch 17, batch 6850, loss[loss=0.1637, simple_loss=0.2508, pruned_loss=0.03831, over 7059.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2757, pruned_loss=0.05297, over 1420477.32 frames.], batch size: 18, lr: 3.20e-04 2022-05-27 22:42:51,365 INFO [train.py:842] (0/4) Epoch 17, batch 6900, loss[loss=0.1735, simple_loss=0.2654, pruned_loss=0.04086, over 7221.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2753, pruned_loss=0.05256, over 1422854.75 frames.], batch size: 21, lr: 3.20e-04 2022-05-27 22:43:30,339 INFO [train.py:842] (0/4) Epoch 17, batch 6950, loss[loss=0.2084, simple_loss=0.2913, pruned_loss=0.06279, over 7420.00 frames.], tot_loss[loss=0.1891, simple_loss=0.2744, pruned_loss=0.05185, over 1423313.52 frames.], batch size: 21, lr: 3.20e-04 2022-05-27 22:44:09,754 INFO [train.py:842] (0/4) Epoch 17, batch 7000, loss[loss=0.2132, simple_loss=0.2933, pruned_loss=0.0666, over 7387.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2747, pruned_loss=0.05257, over 1424340.05 frames.], batch size: 23, lr: 3.20e-04 2022-05-27 22:44:49,097 INFO [train.py:842] (0/4) Epoch 17, batch 7050, loss[loss=0.1942, simple_loss=0.272, pruned_loss=0.05817, over 7178.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2752, pruned_loss=0.05315, over 1422892.62 frames.], batch size: 23, lr: 3.20e-04 2022-05-27 22:45:28,479 INFO [train.py:842] (0/4) Epoch 17, batch 7100, loss[loss=0.1684, simple_loss=0.257, pruned_loss=0.03994, over 7332.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2761, pruned_loss=0.05354, over 1425762.05 frames.], batch size: 21, lr: 3.20e-04 2022-05-27 22:46:07,818 INFO [train.py:842] (0/4) Epoch 17, batch 7150, loss[loss=0.2799, simple_loss=0.3458, pruned_loss=0.107, over 7277.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2753, pruned_loss=0.05364, over 1427872.08 frames.], batch size: 24, lr: 3.20e-04 2022-05-27 22:46:46,839 INFO [train.py:842] (0/4) Epoch 17, batch 7200, loss[loss=0.1823, simple_loss=0.2778, pruned_loss=0.04344, over 7207.00 frames.], tot_loss[loss=0.1912, simple_loss=0.2755, pruned_loss=0.05347, over 1426652.35 frames.], batch size: 23, lr: 3.20e-04 2022-05-27 22:47:26,410 INFO [train.py:842] (0/4) Epoch 17, batch 7250, loss[loss=0.1872, simple_loss=0.2915, pruned_loss=0.04144, over 7324.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2762, pruned_loss=0.05396, over 1428611.45 frames.], batch size: 21, lr: 3.20e-04 2022-05-27 22:48:05,623 INFO [train.py:842] (0/4) Epoch 17, batch 7300, loss[loss=0.1653, simple_loss=0.2411, pruned_loss=0.0448, over 7275.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2765, pruned_loss=0.05417, over 1430325.75 frames.], batch size: 17, lr: 3.20e-04 2022-05-27 22:48:44,783 INFO [train.py:842] (0/4) 
Epoch 17, batch 7350, loss[loss=0.1499, simple_loss=0.2448, pruned_loss=0.02752, over 7318.00 frames.], tot_loss[loss=0.1933, simple_loss=0.2771, pruned_loss=0.05472, over 1431849.93 frames.], batch size: 21, lr: 3.20e-04 2022-05-27 22:49:23,627 INFO [train.py:842] (0/4) Epoch 17, batch 7400, loss[loss=0.2458, simple_loss=0.3234, pruned_loss=0.08403, over 5099.00 frames.], tot_loss[loss=0.1936, simple_loss=0.2777, pruned_loss=0.05474, over 1423443.17 frames.], batch size: 52, lr: 3.20e-04 2022-05-27 22:50:02,516 INFO [train.py:842] (0/4) Epoch 17, batch 7450, loss[loss=0.1553, simple_loss=0.2343, pruned_loss=0.03815, over 7268.00 frames.], tot_loss[loss=0.1939, simple_loss=0.2785, pruned_loss=0.05467, over 1427913.46 frames.], batch size: 17, lr: 3.20e-04 2022-05-27 22:50:41,670 INFO [train.py:842] (0/4) Epoch 17, batch 7500, loss[loss=0.2125, simple_loss=0.2884, pruned_loss=0.0683, over 7049.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2771, pruned_loss=0.05434, over 1428375.85 frames.], batch size: 18, lr: 3.20e-04 2022-05-27 22:51:20,600 INFO [train.py:842] (0/4) Epoch 17, batch 7550, loss[loss=0.2567, simple_loss=0.3434, pruned_loss=0.08494, over 7197.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2761, pruned_loss=0.05369, over 1427302.07 frames.], batch size: 23, lr: 3.20e-04 2022-05-27 22:51:59,826 INFO [train.py:842] (0/4) Epoch 17, batch 7600, loss[loss=0.1755, simple_loss=0.2615, pruned_loss=0.0448, over 7292.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2752, pruned_loss=0.05315, over 1429967.56 frames.], batch size: 18, lr: 3.20e-04 2022-05-27 22:52:38,886 INFO [train.py:842] (0/4) Epoch 17, batch 7650, loss[loss=0.1678, simple_loss=0.2474, pruned_loss=0.0441, over 7182.00 frames.], tot_loss[loss=0.1906, simple_loss=0.2749, pruned_loss=0.05315, over 1428735.58 frames.], batch size: 16, lr: 3.19e-04 2022-05-27 22:53:17,423 INFO [train.py:842] (0/4) Epoch 17, batch 7700, loss[loss=0.1769, simple_loss=0.2681, pruned_loss=0.04289, over 7343.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2756, pruned_loss=0.05294, over 1428286.49 frames.], batch size: 22, lr: 3.19e-04 2022-05-27 22:53:56,288 INFO [train.py:842] (0/4) Epoch 17, batch 7750, loss[loss=0.179, simple_loss=0.2687, pruned_loss=0.04466, over 7202.00 frames.], tot_loss[loss=0.19, simple_loss=0.2748, pruned_loss=0.05264, over 1428067.23 frames.], batch size: 22, lr: 3.19e-04 2022-05-27 22:54:34,885 INFO [train.py:842] (0/4) Epoch 17, batch 7800, loss[loss=0.1653, simple_loss=0.2395, pruned_loss=0.0456, over 7005.00 frames.], tot_loss[loss=0.191, simple_loss=0.2758, pruned_loss=0.05307, over 1424128.56 frames.], batch size: 16, lr: 3.19e-04 2022-05-27 22:55:13,928 INFO [train.py:842] (0/4) Epoch 17, batch 7850, loss[loss=0.1471, simple_loss=0.2251, pruned_loss=0.03452, over 7134.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2763, pruned_loss=0.05324, over 1423883.96 frames.], batch size: 17, lr: 3.19e-04 2022-05-27 22:55:52,889 INFO [train.py:842] (0/4) Epoch 17, batch 7900, loss[loss=0.1663, simple_loss=0.2536, pruned_loss=0.03957, over 7254.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2765, pruned_loss=0.05341, over 1425269.02 frames.], batch size: 19, lr: 3.19e-04 2022-05-27 22:56:32,272 INFO [train.py:842] (0/4) Epoch 17, batch 7950, loss[loss=0.1689, simple_loss=0.2488, pruned_loss=0.04444, over 7063.00 frames.], tot_loss[loss=0.192, simple_loss=0.2767, pruned_loss=0.05365, over 1424426.15 frames.], batch size: 18, lr: 3.19e-04 2022-05-27 22:57:10,823 INFO [train.py:842] (0/4) Epoch 17, batch 8000, 
loss[loss=0.1905, simple_loss=0.277, pruned_loss=0.05198, over 7335.00 frames.], tot_loss[loss=0.1927, simple_loss=0.2773, pruned_loss=0.05403, over 1418903.94 frames.], batch size: 20, lr: 3.19e-04 2022-05-27 22:57:50,113 INFO [train.py:842] (0/4) Epoch 17, batch 8050, loss[loss=0.1955, simple_loss=0.2766, pruned_loss=0.05716, over 7165.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2765, pruned_loss=0.05412, over 1414734.43 frames.], batch size: 19, lr: 3.19e-04 2022-05-27 22:58:28,861 INFO [train.py:842] (0/4) Epoch 17, batch 8100, loss[loss=0.1869, simple_loss=0.2811, pruned_loss=0.04639, over 6390.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2766, pruned_loss=0.05411, over 1415056.54 frames.], batch size: 37, lr: 3.19e-04 2022-05-27 22:59:08,269 INFO [train.py:842] (0/4) Epoch 17, batch 8150, loss[loss=0.2053, simple_loss=0.2864, pruned_loss=0.06207, over 7208.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2768, pruned_loss=0.05404, over 1413776.85 frames.], batch size: 22, lr: 3.19e-04 2022-05-27 22:59:46,899 INFO [train.py:842] (0/4) Epoch 17, batch 8200, loss[loss=0.167, simple_loss=0.263, pruned_loss=0.03545, over 7428.00 frames.], tot_loss[loss=0.1919, simple_loss=0.2764, pruned_loss=0.05366, over 1411030.25 frames.], batch size: 20, lr: 3.19e-04 2022-05-27 23:00:26,347 INFO [train.py:842] (0/4) Epoch 17, batch 8250, loss[loss=0.1848, simple_loss=0.2768, pruned_loss=0.04644, over 7333.00 frames.], tot_loss[loss=0.1906, simple_loss=0.2755, pruned_loss=0.05289, over 1417043.99 frames.], batch size: 21, lr: 3.19e-04 2022-05-27 23:01:04,938 INFO [train.py:842] (0/4) Epoch 17, batch 8300, loss[loss=0.1598, simple_loss=0.2614, pruned_loss=0.02907, over 7108.00 frames.], tot_loss[loss=0.1924, simple_loss=0.2774, pruned_loss=0.05367, over 1417316.48 frames.], batch size: 21, lr: 3.19e-04 2022-05-27 23:01:43,878 INFO [train.py:842] (0/4) Epoch 17, batch 8350, loss[loss=0.2173, simple_loss=0.3065, pruned_loss=0.06407, over 7269.00 frames.], tot_loss[loss=0.1917, simple_loss=0.277, pruned_loss=0.05322, over 1420402.18 frames.], batch size: 25, lr: 3.19e-04 2022-05-27 23:02:22,666 INFO [train.py:842] (0/4) Epoch 17, batch 8400, loss[loss=0.2776, simple_loss=0.3497, pruned_loss=0.1028, over 7021.00 frames.], tot_loss[loss=0.191, simple_loss=0.2763, pruned_loss=0.05285, over 1422348.54 frames.], batch size: 28, lr: 3.19e-04 2022-05-27 23:03:01,599 INFO [train.py:842] (0/4) Epoch 17, batch 8450, loss[loss=0.2846, simple_loss=0.3518, pruned_loss=0.1087, over 6468.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2759, pruned_loss=0.0528, over 1420709.97 frames.], batch size: 38, lr: 3.19e-04 2022-05-27 23:03:40,254 INFO [train.py:842] (0/4) Epoch 17, batch 8500, loss[loss=0.2317, simple_loss=0.3117, pruned_loss=0.0758, over 7136.00 frames.], tot_loss[loss=0.1903, simple_loss=0.2754, pruned_loss=0.05263, over 1414185.83 frames.], batch size: 26, lr: 3.19e-04 2022-05-27 23:04:19,017 INFO [train.py:842] (0/4) Epoch 17, batch 8550, loss[loss=0.2029, simple_loss=0.2824, pruned_loss=0.06167, over 6271.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2753, pruned_loss=0.0523, over 1412352.44 frames.], batch size: 38, lr: 3.19e-04 2022-05-27 23:04:57,763 INFO [train.py:842] (0/4) Epoch 17, batch 8600, loss[loss=0.2001, simple_loss=0.2888, pruned_loss=0.05568, over 7337.00 frames.], tot_loss[loss=0.1911, simple_loss=0.276, pruned_loss=0.05307, over 1417757.95 frames.], batch size: 22, lr: 3.19e-04 2022-05-27 23:05:36,630 INFO [train.py:842] (0/4) Epoch 17, batch 8650, loss[loss=0.2107, 
simple_loss=0.2839, pruned_loss=0.06871, over 7291.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2766, pruned_loss=0.05305, over 1420319.92 frames.], batch size: 18, lr: 3.18e-04 2022-05-27 23:06:15,427 INFO [train.py:842] (0/4) Epoch 17, batch 8700, loss[loss=0.2525, simple_loss=0.3268, pruned_loss=0.08906, over 7300.00 frames.], tot_loss[loss=0.192, simple_loss=0.2769, pruned_loss=0.05359, over 1423804.73 frames.], batch size: 25, lr: 3.18e-04 2022-05-27 23:06:54,772 INFO [train.py:842] (0/4) Epoch 17, batch 8750, loss[loss=0.202, simple_loss=0.2833, pruned_loss=0.06037, over 7065.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2764, pruned_loss=0.05323, over 1423708.56 frames.], batch size: 18, lr: 3.18e-04 2022-05-27 23:07:33,415 INFO [train.py:842] (0/4) Epoch 17, batch 8800, loss[loss=0.1874, simple_loss=0.273, pruned_loss=0.05092, over 7063.00 frames.], tot_loss[loss=0.1918, simple_loss=0.2768, pruned_loss=0.05337, over 1418722.76 frames.], batch size: 18, lr: 3.18e-04 2022-05-27 23:08:12,274 INFO [train.py:842] (0/4) Epoch 17, batch 8850, loss[loss=0.2539, simple_loss=0.3176, pruned_loss=0.09512, over 5040.00 frames.], tot_loss[loss=0.1925, simple_loss=0.2775, pruned_loss=0.05376, over 1416363.03 frames.], batch size: 52, lr: 3.18e-04 2022-05-27 23:08:51,332 INFO [train.py:842] (0/4) Epoch 17, batch 8900, loss[loss=0.2153, simple_loss=0.3169, pruned_loss=0.05686, over 7150.00 frames.], tot_loss[loss=0.1916, simple_loss=0.2763, pruned_loss=0.05341, over 1416756.78 frames.], batch size: 20, lr: 3.18e-04 2022-05-27 23:09:30,540 INFO [train.py:842] (0/4) Epoch 17, batch 8950, loss[loss=0.1908, simple_loss=0.2666, pruned_loss=0.05746, over 7283.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2773, pruned_loss=0.05396, over 1409296.06 frames.], batch size: 18, lr: 3.18e-04 2022-05-27 23:10:09,439 INFO [train.py:842] (0/4) Epoch 17, batch 9000, loss[loss=0.2305, simple_loss=0.3133, pruned_loss=0.07381, over 7139.00 frames.], tot_loss[loss=0.193, simple_loss=0.2777, pruned_loss=0.05412, over 1400671.37 frames.], batch size: 20, lr: 3.18e-04 2022-05-27 23:10:09,440 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 23:10:18,901 INFO [train.py:871] (0/4) Epoch 17, validation: loss=0.1657, simple_loss=0.2652, pruned_loss=0.03308, over 868885.00 frames. 
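Alongside the per-epoch files (exp/epoch-16.pt and, just below, exp/epoch-17.pt), a step-based checkpoint was written at checkpoint-152000.pt, matching save_every_n=8000 from the startup config. Such checkpoints are commonly averaged before decoding; a minimal torch-only sketch, assuming each .pt file stores the model parameters under a "model" key (as icefall's checkpoint helper typically does):

    import torch

    def average_models(paths):
        # Average the "model" state_dicts stored in the given checkpoint files.
        avg = None
        for p in paths:
            state = torch.load(p, map_location="cpu")["model"]
            if avg is None:
                avg = {k: v.clone().float() for k, v in state.items()}
            else:
                for k in avg:
                    avg[k] += state[k].float()
        return {k: v / len(paths) for k, v in avg.items()}

    # e.g. average_models(["exp/epoch-16.pt", "exp/epoch-17.pt"])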
2022-05-27 23:10:57,857 INFO [train.py:842] (0/4) Epoch 17, batch 9050, loss[loss=0.1754, simple_loss=0.2694, pruned_loss=0.04077, over 7319.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2779, pruned_loss=0.05392, over 1395152.93 frames.], batch size: 20, lr: 3.18e-04 2022-05-27 23:11:46,167 INFO [train.py:842] (0/4) Epoch 17, batch 9100, loss[loss=0.2298, simple_loss=0.2971, pruned_loss=0.0813, over 5128.00 frames.], tot_loss[loss=0.1954, simple_loss=0.2801, pruned_loss=0.05532, over 1348031.98 frames.], batch size: 52, lr: 3.18e-04 2022-05-27 23:12:24,144 INFO [train.py:842] (0/4) Epoch 17, batch 9150, loss[loss=0.1863, simple_loss=0.2768, pruned_loss=0.04797, over 4933.00 frames.], tot_loss[loss=0.2003, simple_loss=0.2837, pruned_loss=0.05844, over 1272673.37 frames.], batch size: 52, lr: 3.18e-04 2022-05-27 23:12:56,279 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-17.pt 2022-05-27 23:13:16,677 INFO [train.py:842] (0/4) Epoch 18, batch 0, loss[loss=0.2063, simple_loss=0.293, pruned_loss=0.05976, over 7231.00 frames.], tot_loss[loss=0.2063, simple_loss=0.293, pruned_loss=0.05976, over 7231.00 frames.], batch size: 20, lr: 3.10e-04 2022-05-27 23:13:56,054 INFO [train.py:842] (0/4) Epoch 18, batch 50, loss[loss=0.1583, simple_loss=0.2325, pruned_loss=0.04211, over 7005.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2692, pruned_loss=0.05085, over 324416.28 frames.], batch size: 16, lr: 3.09e-04 2022-05-27 23:14:34,693 INFO [train.py:842] (0/4) Epoch 18, batch 100, loss[loss=0.1686, simple_loss=0.2574, pruned_loss=0.03986, over 7162.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2731, pruned_loss=0.0517, over 566202.08 frames.], batch size: 18, lr: 3.09e-04 2022-05-27 23:15:14,074 INFO [train.py:842] (0/4) Epoch 18, batch 150, loss[loss=0.1912, simple_loss=0.2854, pruned_loss=0.04851, over 7144.00 frames.], tot_loss[loss=0.191, simple_loss=0.276, pruned_loss=0.05306, over 753261.40 frames.], batch size: 20, lr: 3.09e-04 2022-05-27 23:15:53,004 INFO [train.py:842] (0/4) Epoch 18, batch 200, loss[loss=0.2572, simple_loss=0.3183, pruned_loss=0.09807, over 7158.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2761, pruned_loss=0.05331, over 904101.37 frames.], batch size: 18, lr: 3.09e-04 2022-05-27 23:16:31,825 INFO [train.py:842] (0/4) Epoch 18, batch 250, loss[loss=0.2123, simple_loss=0.3032, pruned_loss=0.06065, over 6785.00 frames.], tot_loss[loss=0.192, simple_loss=0.2773, pruned_loss=0.05332, over 1021477.76 frames.], batch size: 31, lr: 3.09e-04 2022-05-27 23:17:10,688 INFO [train.py:842] (0/4) Epoch 18, batch 300, loss[loss=0.1794, simple_loss=0.2784, pruned_loss=0.04014, over 7048.00 frames.], tot_loss[loss=0.1926, simple_loss=0.278, pruned_loss=0.05355, over 1104864.90 frames.], batch size: 28, lr: 3.09e-04 2022-05-27 23:17:49,735 INFO [train.py:842] (0/4) Epoch 18, batch 350, loss[loss=0.1556, simple_loss=0.2436, pruned_loss=0.03381, over 7335.00 frames.], tot_loss[loss=0.1906, simple_loss=0.2754, pruned_loss=0.05291, over 1171968.25 frames.], batch size: 22, lr: 3.09e-04 2022-05-27 23:18:28,725 INFO [train.py:842] (0/4) Epoch 18, batch 400, loss[loss=0.1465, simple_loss=0.2285, pruned_loss=0.03225, over 6791.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2752, pruned_loss=0.05254, over 1231977.63 frames.], batch size: 15, lr: 3.09e-04 2022-05-27 23:19:07,681 INFO [train.py:842] (0/4) Epoch 18, batch 450, loss[loss=0.2177, simple_loss=0.2975, pruned_loss=0.06891, over 7204.00 frames.], tot_loss[loss=0.1904, 
simple_loss=0.2757, pruned_loss=0.05253, over 1275896.18 frames.], batch size: 22, lr: 3.09e-04 2022-05-27 23:19:46,700 INFO [train.py:842] (0/4) Epoch 18, batch 500, loss[loss=0.1871, simple_loss=0.2742, pruned_loss=0.05002, over 7340.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2756, pruned_loss=0.05235, over 1312843.92 frames.], batch size: 22, lr: 3.09e-04 2022-05-27 23:20:26,035 INFO [train.py:842] (0/4) Epoch 18, batch 550, loss[loss=0.1796, simple_loss=0.259, pruned_loss=0.05009, over 7136.00 frames.], tot_loss[loss=0.1887, simple_loss=0.2743, pruned_loss=0.05156, over 1339305.77 frames.], batch size: 17, lr: 3.09e-04 2022-05-27 23:21:04,754 INFO [train.py:842] (0/4) Epoch 18, batch 600, loss[loss=0.1974, simple_loss=0.2786, pruned_loss=0.05813, over 6300.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2763, pruned_loss=0.0528, over 1357022.78 frames.], batch size: 37, lr: 3.09e-04 2022-05-27 23:21:43,530 INFO [train.py:842] (0/4) Epoch 18, batch 650, loss[loss=0.2155, simple_loss=0.2832, pruned_loss=0.07391, over 5158.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2763, pruned_loss=0.0527, over 1369413.94 frames.], batch size: 53, lr: 3.09e-04 2022-05-27 23:22:22,376 INFO [train.py:842] (0/4) Epoch 18, batch 700, loss[loss=0.1988, simple_loss=0.2974, pruned_loss=0.05015, over 7320.00 frames.], tot_loss[loss=0.1918, simple_loss=0.277, pruned_loss=0.0533, over 1380549.65 frames.], batch size: 21, lr: 3.09e-04 2022-05-27 23:23:01,858 INFO [train.py:842] (0/4) Epoch 18, batch 750, loss[loss=0.1905, simple_loss=0.2562, pruned_loss=0.06239, over 7417.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2734, pruned_loss=0.05181, over 1391660.30 frames.], batch size: 18, lr: 3.09e-04 2022-05-27 23:23:41,021 INFO [train.py:842] (0/4) Epoch 18, batch 800, loss[loss=0.1681, simple_loss=0.2512, pruned_loss=0.04252, over 7313.00 frames.], tot_loss[loss=0.188, simple_loss=0.2733, pruned_loss=0.05138, over 1403450.23 frames.], batch size: 21, lr: 3.09e-04 2022-05-27 23:24:20,294 INFO [train.py:842] (0/4) Epoch 18, batch 850, loss[loss=0.186, simple_loss=0.2719, pruned_loss=0.05002, over 7428.00 frames.], tot_loss[loss=0.1877, simple_loss=0.2728, pruned_loss=0.05136, over 1406744.26 frames.], batch size: 21, lr: 3.09e-04 2022-05-27 23:24:58,888 INFO [train.py:842] (0/4) Epoch 18, batch 900, loss[loss=0.1618, simple_loss=0.2609, pruned_loss=0.03131, over 7203.00 frames.], tot_loss[loss=0.1893, simple_loss=0.2745, pruned_loss=0.05205, over 1406409.80 frames.], batch size: 22, lr: 3.09e-04 2022-05-27 23:25:37,801 INFO [train.py:842] (0/4) Epoch 18, batch 950, loss[loss=0.1783, simple_loss=0.269, pruned_loss=0.04374, over 7268.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2737, pruned_loss=0.05144, over 1409230.31 frames.], batch size: 19, lr: 3.09e-04 2022-05-27 23:26:16,531 INFO [train.py:842] (0/4) Epoch 18, batch 1000, loss[loss=0.2345, simple_loss=0.3179, pruned_loss=0.07558, over 7276.00 frames.], tot_loss[loss=0.1874, simple_loss=0.2732, pruned_loss=0.05084, over 1414407.82 frames.], batch size: 24, lr: 3.09e-04 2022-05-27 23:26:55,795 INFO [train.py:842] (0/4) Epoch 18, batch 1050, loss[loss=0.1639, simple_loss=0.2459, pruned_loss=0.0409, over 7281.00 frames.], tot_loss[loss=0.1886, simple_loss=0.274, pruned_loss=0.0516, over 1417510.78 frames.], batch size: 17, lr: 3.08e-04 2022-05-27 23:27:34,971 INFO [train.py:842] (0/4) Epoch 18, batch 1100, loss[loss=0.1685, simple_loss=0.2665, pruned_loss=0.03523, over 7274.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2741, 
pruned_loss=0.05156, over 1420795.62 frames.], batch size: 25, lr: 3.08e-04 2022-05-27 23:28:14,176 INFO [train.py:842] (0/4) Epoch 18, batch 1150, loss[loss=0.1702, simple_loss=0.2662, pruned_loss=0.03706, over 7390.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2735, pruned_loss=0.05141, over 1419104.61 frames.], batch size: 23, lr: 3.08e-04 2022-05-27 23:28:53,123 INFO [train.py:842] (0/4) Epoch 18, batch 1200, loss[loss=0.1804, simple_loss=0.2655, pruned_loss=0.04771, over 7270.00 frames.], tot_loss[loss=0.1892, simple_loss=0.2742, pruned_loss=0.05205, over 1416724.13 frames.], batch size: 18, lr: 3.08e-04 2022-05-27 23:29:32,540 INFO [train.py:842] (0/4) Epoch 18, batch 1250, loss[loss=0.201, simple_loss=0.2819, pruned_loss=0.06005, over 7414.00 frames.], tot_loss[loss=0.1888, simple_loss=0.274, pruned_loss=0.05181, over 1417712.11 frames.], batch size: 21, lr: 3.08e-04 2022-05-27 23:30:11,887 INFO [train.py:842] (0/4) Epoch 18, batch 1300, loss[loss=0.1969, simple_loss=0.2868, pruned_loss=0.05349, over 7172.00 frames.], tot_loss[loss=0.189, simple_loss=0.2742, pruned_loss=0.05195, over 1418793.94 frames.], batch size: 26, lr: 3.08e-04 2022-05-27 23:30:51,060 INFO [train.py:842] (0/4) Epoch 18, batch 1350, loss[loss=0.1469, simple_loss=0.2238, pruned_loss=0.035, over 7001.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2737, pruned_loss=0.05136, over 1421305.05 frames.], batch size: 16, lr: 3.08e-04 2022-05-27 23:31:29,675 INFO [train.py:842] (0/4) Epoch 18, batch 1400, loss[loss=0.19, simple_loss=0.2793, pruned_loss=0.05039, over 7117.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2743, pruned_loss=0.05141, over 1423307.30 frames.], batch size: 21, lr: 3.08e-04 2022-05-27 23:32:08,767 INFO [train.py:842] (0/4) Epoch 18, batch 1450, loss[loss=0.1871, simple_loss=0.2774, pruned_loss=0.04834, over 7145.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2752, pruned_loss=0.05181, over 1422085.87 frames.], batch size: 20, lr: 3.08e-04 2022-05-27 23:32:47,814 INFO [train.py:842] (0/4) Epoch 18, batch 1500, loss[loss=0.1761, simple_loss=0.2631, pruned_loss=0.04456, over 7259.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2755, pruned_loss=0.05199, over 1413353.72 frames.], batch size: 25, lr: 3.08e-04 2022-05-27 23:33:26,850 INFO [train.py:842] (0/4) Epoch 18, batch 1550, loss[loss=0.1736, simple_loss=0.2576, pruned_loss=0.04479, over 7157.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2747, pruned_loss=0.05161, over 1420820.97 frames.], batch size: 19, lr: 3.08e-04 2022-05-27 23:34:05,596 INFO [train.py:842] (0/4) Epoch 18, batch 1600, loss[loss=0.1782, simple_loss=0.2571, pruned_loss=0.04966, over 7439.00 frames.], tot_loss[loss=0.1891, simple_loss=0.2747, pruned_loss=0.0518, over 1421442.55 frames.], batch size: 20, lr: 3.08e-04 2022-05-27 23:34:44,622 INFO [train.py:842] (0/4) Epoch 18, batch 1650, loss[loss=0.2044, simple_loss=0.2819, pruned_loss=0.0635, over 7291.00 frames.], tot_loss[loss=0.1895, simple_loss=0.275, pruned_loss=0.05197, over 1421159.10 frames.], batch size: 17, lr: 3.08e-04 2022-05-27 23:35:23,699 INFO [train.py:842] (0/4) Epoch 18, batch 1700, loss[loss=0.1881, simple_loss=0.2691, pruned_loss=0.05353, over 7353.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2753, pruned_loss=0.05248, over 1423790.25 frames.], batch size: 19, lr: 3.08e-04 2022-05-27 23:36:03,296 INFO [train.py:842] (0/4) Epoch 18, batch 1750, loss[loss=0.1645, simple_loss=0.2686, pruned_loss=0.03018, over 7312.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2749, pruned_loss=0.05223, over 
1423894.46 frames.], batch size: 21, lr: 3.08e-04 2022-05-27 23:36:42,472 INFO [train.py:842] (0/4) Epoch 18, batch 1800, loss[loss=0.1787, simple_loss=0.2673, pruned_loss=0.04506, over 7240.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2749, pruned_loss=0.05231, over 1427893.99 frames.], batch size: 20, lr: 3.08e-04 2022-05-27 23:37:22,100 INFO [train.py:842] (0/4) Epoch 18, batch 1850, loss[loss=0.3059, simple_loss=0.3584, pruned_loss=0.1266, over 4879.00 frames.], tot_loss[loss=0.1901, simple_loss=0.275, pruned_loss=0.05257, over 1425835.79 frames.], batch size: 52, lr: 3.08e-04 2022-05-27 23:38:00,789 INFO [train.py:842] (0/4) Epoch 18, batch 1900, loss[loss=0.218, simple_loss=0.3035, pruned_loss=0.06631, over 7317.00 frames.], tot_loss[loss=0.1911, simple_loss=0.2763, pruned_loss=0.05291, over 1426865.37 frames.], batch size: 21, lr: 3.08e-04 2022-05-27 23:38:39,679 INFO [train.py:842] (0/4) Epoch 18, batch 1950, loss[loss=0.2045, simple_loss=0.289, pruned_loss=0.06004, over 7319.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2766, pruned_loss=0.05256, over 1423907.74 frames.], batch size: 21, lr: 3.08e-04 2022-05-27 23:39:18,749 INFO [train.py:842] (0/4) Epoch 18, batch 2000, loss[loss=0.2528, simple_loss=0.3243, pruned_loss=0.09067, over 5010.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2753, pruned_loss=0.05209, over 1424552.80 frames.], batch size: 52, lr: 3.08e-04 2022-05-27 23:39:57,968 INFO [train.py:842] (0/4) Epoch 18, batch 2050, loss[loss=0.1654, simple_loss=0.2595, pruned_loss=0.0356, over 7117.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2753, pruned_loss=0.05258, over 1419858.86 frames.], batch size: 21, lr: 3.07e-04 2022-05-27 23:40:36,493 INFO [train.py:842] (0/4) Epoch 18, batch 2100, loss[loss=0.1982, simple_loss=0.284, pruned_loss=0.05623, over 6816.00 frames.], tot_loss[loss=0.1904, simple_loss=0.2754, pruned_loss=0.05266, over 1415720.70 frames.], batch size: 31, lr: 3.07e-04 2022-05-27 23:41:15,756 INFO [train.py:842] (0/4) Epoch 18, batch 2150, loss[loss=0.1814, simple_loss=0.2691, pruned_loss=0.04689, over 7221.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2752, pruned_loss=0.05254, over 1417569.58 frames.], batch size: 21, lr: 3.07e-04 2022-05-27 23:41:54,564 INFO [train.py:842] (0/4) Epoch 18, batch 2200, loss[loss=0.1686, simple_loss=0.2466, pruned_loss=0.04529, over 6862.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2757, pruned_loss=0.05301, over 1419782.19 frames.], batch size: 15, lr: 3.07e-04 2022-05-27 23:42:33,986 INFO [train.py:842] (0/4) Epoch 18, batch 2250, loss[loss=0.1815, simple_loss=0.2568, pruned_loss=0.05306, over 6999.00 frames.], tot_loss[loss=0.191, simple_loss=0.2759, pruned_loss=0.05304, over 1423919.60 frames.], batch size: 16, lr: 3.07e-04 2022-05-27 23:43:12,897 INFO [train.py:842] (0/4) Epoch 18, batch 2300, loss[loss=0.1899, simple_loss=0.2809, pruned_loss=0.04951, over 7148.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2753, pruned_loss=0.0525, over 1426051.75 frames.], batch size: 20, lr: 3.07e-04 2022-05-27 23:43:52,065 INFO [train.py:842] (0/4) Epoch 18, batch 2350, loss[loss=0.1803, simple_loss=0.268, pruned_loss=0.04627, over 7139.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2749, pruned_loss=0.05235, over 1426179.00 frames.], batch size: 26, lr: 3.07e-04 2022-05-27 23:44:31,192 INFO [train.py:842] (0/4) Epoch 18, batch 2400, loss[loss=0.2293, simple_loss=0.3133, pruned_loss=0.07263, over 6430.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2762, pruned_loss=0.05279, over 1424307.73 frames.], batch 
size: 38, lr: 3.07e-04 2022-05-27 23:45:10,087 INFO [train.py:842] (0/4) Epoch 18, batch 2450, loss[loss=0.1802, simple_loss=0.2727, pruned_loss=0.04387, over 7144.00 frames.], tot_loss[loss=0.1888, simple_loss=0.2742, pruned_loss=0.05175, over 1425505.22 frames.], batch size: 19, lr: 3.07e-04 2022-05-27 23:45:58,779 INFO [train.py:842] (0/4) Epoch 18, batch 2500, loss[loss=0.1546, simple_loss=0.2575, pruned_loss=0.02586, over 7104.00 frames.], tot_loss[loss=0.1892, simple_loss=0.2748, pruned_loss=0.05178, over 1417362.26 frames.], batch size: 21, lr: 3.07e-04 2022-05-27 23:46:37,744 INFO [train.py:842] (0/4) Epoch 18, batch 2550, loss[loss=0.1792, simple_loss=0.2676, pruned_loss=0.04539, over 7334.00 frames.], tot_loss[loss=0.188, simple_loss=0.2737, pruned_loss=0.05113, over 1418525.76 frames.], batch size: 21, lr: 3.07e-04 2022-05-27 23:47:16,433 INFO [train.py:842] (0/4) Epoch 18, batch 2600, loss[loss=0.1608, simple_loss=0.243, pruned_loss=0.03932, over 6757.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2738, pruned_loss=0.05103, over 1417796.57 frames.], batch size: 15, lr: 3.07e-04 2022-05-27 23:48:05,809 INFO [train.py:842] (0/4) Epoch 18, batch 2650, loss[loss=0.1668, simple_loss=0.2477, pruned_loss=0.04298, over 7354.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2749, pruned_loss=0.05226, over 1419058.68 frames.], batch size: 19, lr: 3.07e-04 2022-05-27 23:48:44,815 INFO [train.py:842] (0/4) Epoch 18, batch 2700, loss[loss=0.1791, simple_loss=0.266, pruned_loss=0.04607, over 7286.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2742, pruned_loss=0.05225, over 1418661.96 frames.], batch size: 18, lr: 3.07e-04 2022-05-27 23:49:23,606 INFO [train.py:842] (0/4) Epoch 18, batch 2750, loss[loss=0.2011, simple_loss=0.2963, pruned_loss=0.05294, over 7153.00 frames.], tot_loss[loss=0.1893, simple_loss=0.274, pruned_loss=0.05226, over 1416703.80 frames.], batch size: 20, lr: 3.07e-04 2022-05-27 23:50:12,203 INFO [train.py:842] (0/4) Epoch 18, batch 2800, loss[loss=0.2431, simple_loss=0.3213, pruned_loss=0.08248, over 7321.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2744, pruned_loss=0.05262, over 1416852.28 frames.], batch size: 21, lr: 3.07e-04 2022-05-27 23:50:50,917 INFO [train.py:842] (0/4) Epoch 18, batch 2850, loss[loss=0.232, simple_loss=0.3062, pruned_loss=0.07894, over 7272.00 frames.], tot_loss[loss=0.1901, simple_loss=0.275, pruned_loss=0.05266, over 1419736.03 frames.], batch size: 25, lr: 3.07e-04 2022-05-27 23:51:29,795 INFO [train.py:842] (0/4) Epoch 18, batch 2900, loss[loss=0.2028, simple_loss=0.2953, pruned_loss=0.05518, over 7207.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2761, pruned_loss=0.05331, over 1422613.62 frames.], batch size: 22, lr: 3.07e-04 2022-05-27 23:52:08,754 INFO [train.py:842] (0/4) Epoch 18, batch 2950, loss[loss=0.1773, simple_loss=0.2723, pruned_loss=0.04115, over 6541.00 frames.], tot_loss[loss=0.1915, simple_loss=0.2763, pruned_loss=0.05333, over 1419770.27 frames.], batch size: 39, lr: 3.07e-04 2022-05-27 23:52:47,315 INFO [train.py:842] (0/4) Epoch 18, batch 3000, loss[loss=0.2471, simple_loss=0.318, pruned_loss=0.08813, over 7309.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2781, pruned_loss=0.05412, over 1418335.75 frames.], batch size: 25, lr: 3.07e-04 2022-05-27 23:52:47,316 INFO [train.py:862] (0/4) Computing validation loss 2022-05-27 23:52:57,054 INFO [train.py:871] (0/4) Epoch 18, validation: loss=0.1661, simple_loss=0.2662, pruned_loss=0.03302, over 868885.00 frames. 
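The lr column follows a characteristic pattern: it sat at 3.18e-04 through the end of epoch 17, stepped down to 3.10e-04 at Epoch 18, batch 0, and has been creeping down (3.09e-04 toward 3.06e-04) since. A step at each epoch boundary plus a slow per-batch decay is the shape produced by an Eden-style scheduler, which multiplies the base learning rate by a batch-count factor and an epoch-count factor. The sketch below reproduces that shape only; the parameter values (base_lr, lr_batches, lr_epochs) and the exact batch/epoch indexing are assumptions about the recipe's defaults, not something these log lines state.

# Hedged sketch of an Eden-style learning-rate schedule. base_lr,
# lr_batches and lr_epochs are assumed defaults, and the precise
# batch/epoch indexing is an implementation detail not shown here.
def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float = 5000.0, lr_epochs: float = 6.0) -> float:
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# With a global batch count near 160,000 (cf. the checkpoint-160000.pt save
# a little further down in this epoch), a base_lr of 0.003 and epoch counts
# around 17-18, this evaluates to roughly 3.0e-04 to 3.1e-04, in line with
# the lr values printed in these entries.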
2022-05-27 23:53:35,985 INFO [train.py:842] (0/4) Epoch 18, batch 3050, loss[loss=0.1835, simple_loss=0.2726, pruned_loss=0.04722, over 7113.00 frames.], tot_loss[loss=0.1932, simple_loss=0.2781, pruned_loss=0.05417, over 1417413.84 frames.], batch size: 21, lr: 3.07e-04 2022-05-27 23:54:14,578 INFO [train.py:842] (0/4) Epoch 18, batch 3100, loss[loss=0.1613, simple_loss=0.254, pruned_loss=0.03434, over 7242.00 frames.], tot_loss[loss=0.1927, simple_loss=0.2773, pruned_loss=0.05404, over 1418859.75 frames.], batch size: 20, lr: 3.06e-04 2022-05-27 23:54:53,750 INFO [train.py:842] (0/4) Epoch 18, batch 3150, loss[loss=0.1632, simple_loss=0.255, pruned_loss=0.03571, over 7259.00 frames.], tot_loss[loss=0.1923, simple_loss=0.277, pruned_loss=0.05374, over 1421617.85 frames.], batch size: 19, lr: 3.06e-04 2022-05-27 23:55:32,545 INFO [train.py:842] (0/4) Epoch 18, batch 3200, loss[loss=0.2078, simple_loss=0.2915, pruned_loss=0.06206, over 6722.00 frames.], tot_loss[loss=0.1921, simple_loss=0.2771, pruned_loss=0.0536, over 1418896.37 frames.], batch size: 31, lr: 3.06e-04 2022-05-27 23:56:11,860 INFO [train.py:842] (0/4) Epoch 18, batch 3250, loss[loss=0.1915, simple_loss=0.2735, pruned_loss=0.0548, over 7392.00 frames.], tot_loss[loss=0.1905, simple_loss=0.2759, pruned_loss=0.05255, over 1422631.86 frames.], batch size: 23, lr: 3.06e-04 2022-05-27 23:56:51,357 INFO [train.py:842] (0/4) Epoch 18, batch 3300, loss[loss=0.1646, simple_loss=0.2498, pruned_loss=0.03968, over 7180.00 frames.], tot_loss[loss=0.1896, simple_loss=0.2749, pruned_loss=0.05213, over 1427925.91 frames.], batch size: 18, lr: 3.06e-04 2022-05-27 23:57:30,276 INFO [train.py:842] (0/4) Epoch 18, batch 3350, loss[loss=0.1749, simple_loss=0.245, pruned_loss=0.05236, over 7412.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2753, pruned_loss=0.05227, over 1427234.06 frames.], batch size: 18, lr: 3.06e-04 2022-05-27 23:58:09,187 INFO [train.py:842] (0/4) Epoch 18, batch 3400, loss[loss=0.1907, simple_loss=0.2759, pruned_loss=0.05276, over 7380.00 frames.], tot_loss[loss=0.1882, simple_loss=0.274, pruned_loss=0.05117, over 1430551.46 frames.], batch size: 23, lr: 3.06e-04 2022-05-27 23:58:48,411 INFO [train.py:842] (0/4) Epoch 18, batch 3450, loss[loss=0.1586, simple_loss=0.2377, pruned_loss=0.0397, over 7406.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2743, pruned_loss=0.05115, over 1431141.41 frames.], batch size: 18, lr: 3.06e-04 2022-05-27 23:59:27,892 INFO [train.py:842] (0/4) Epoch 18, batch 3500, loss[loss=0.1802, simple_loss=0.2648, pruned_loss=0.04773, over 6510.00 frames.], tot_loss[loss=0.1884, simple_loss=0.274, pruned_loss=0.05135, over 1433254.79 frames.], batch size: 37, lr: 3.06e-04 2022-05-28 00:00:06,997 INFO [train.py:842] (0/4) Epoch 18, batch 3550, loss[loss=0.1947, simple_loss=0.2899, pruned_loss=0.04977, over 7202.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2767, pruned_loss=0.05299, over 1431171.82 frames.], batch size: 23, lr: 3.06e-04 2022-05-28 00:00:45,905 INFO [train.py:842] (0/4) Epoch 18, batch 3600, loss[loss=0.1856, simple_loss=0.273, pruned_loss=0.04907, over 7213.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2755, pruned_loss=0.05245, over 1432583.15 frames.], batch size: 21, lr: 3.06e-04 2022-05-28 00:01:24,917 INFO [train.py:842] (0/4) Epoch 18, batch 3650, loss[loss=0.19, simple_loss=0.2833, pruned_loss=0.04837, over 7335.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2752, pruned_loss=0.05253, over 1423796.66 frames.], batch size: 22, lr: 3.06e-04 2022-05-28 00:02:03,661 INFO 
[train.py:842] (0/4) Epoch 18, batch 3700, loss[loss=0.1573, simple_loss=0.2382, pruned_loss=0.03816, over 6996.00 frames.], tot_loss[loss=0.1892, simple_loss=0.2747, pruned_loss=0.05185, over 1425158.65 frames.], batch size: 16, lr: 3.06e-04 2022-05-28 00:02:31,635 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-160000.pt 2022-05-28 00:02:45,142 INFO [train.py:842] (0/4) Epoch 18, batch 3750, loss[loss=0.1955, simple_loss=0.2874, pruned_loss=0.05185, over 7253.00 frames.], tot_loss[loss=0.19, simple_loss=0.2761, pruned_loss=0.05196, over 1426727.27 frames.], batch size: 25, lr: 3.06e-04 2022-05-28 00:03:23,940 INFO [train.py:842] (0/4) Epoch 18, batch 3800, loss[loss=0.2255, simple_loss=0.3017, pruned_loss=0.07464, over 7359.00 frames.], tot_loss[loss=0.191, simple_loss=0.2765, pruned_loss=0.05278, over 1427134.59 frames.], batch size: 19, lr: 3.06e-04 2022-05-28 00:04:03,232 INFO [train.py:842] (0/4) Epoch 18, batch 3850, loss[loss=0.1485, simple_loss=0.2384, pruned_loss=0.02926, over 7412.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2745, pruned_loss=0.05167, over 1425506.71 frames.], batch size: 18, lr: 3.06e-04 2022-05-28 00:04:41,994 INFO [train.py:842] (0/4) Epoch 18, batch 3900, loss[loss=0.179, simple_loss=0.2714, pruned_loss=0.04333, over 7115.00 frames.], tot_loss[loss=0.1897, simple_loss=0.275, pruned_loss=0.0522, over 1421735.28 frames.], batch size: 21, lr: 3.06e-04 2022-05-28 00:05:21,339 INFO [train.py:842] (0/4) Epoch 18, batch 3950, loss[loss=0.1832, simple_loss=0.2714, pruned_loss=0.04748, over 7321.00 frames.], tot_loss[loss=0.19, simple_loss=0.2749, pruned_loss=0.05252, over 1424094.86 frames.], batch size: 20, lr: 3.06e-04 2022-05-28 00:06:00,170 INFO [train.py:842] (0/4) Epoch 18, batch 4000, loss[loss=0.2059, simple_loss=0.2865, pruned_loss=0.06265, over 7389.00 frames.], tot_loss[loss=0.1915, simple_loss=0.2765, pruned_loss=0.05324, over 1425340.71 frames.], batch size: 23, lr: 3.06e-04 2022-05-28 00:06:39,309 INFO [train.py:842] (0/4) Epoch 18, batch 4050, loss[loss=0.1231, simple_loss=0.2119, pruned_loss=0.01719, over 7412.00 frames.], tot_loss[loss=0.1905, simple_loss=0.2757, pruned_loss=0.05269, over 1430014.55 frames.], batch size: 18, lr: 3.06e-04 2022-05-28 00:07:18,088 INFO [train.py:842] (0/4) Epoch 18, batch 4100, loss[loss=0.2274, simple_loss=0.3131, pruned_loss=0.07089, over 7059.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2766, pruned_loss=0.05303, over 1429257.44 frames.], batch size: 18, lr: 3.06e-04 2022-05-28 00:07:57,194 INFO [train.py:842] (0/4) Epoch 18, batch 4150, loss[loss=0.2183, simple_loss=0.3129, pruned_loss=0.06188, over 7220.00 frames.], tot_loss[loss=0.1917, simple_loss=0.2771, pruned_loss=0.05316, over 1425717.64 frames.], batch size: 23, lr: 3.05e-04 2022-05-28 00:08:36,162 INFO [train.py:842] (0/4) Epoch 18, batch 4200, loss[loss=0.2209, simple_loss=0.2984, pruned_loss=0.07167, over 7334.00 frames.], tot_loss[loss=0.1905, simple_loss=0.2759, pruned_loss=0.05253, over 1424382.63 frames.], batch size: 22, lr: 3.05e-04 2022-05-28 00:09:15,379 INFO [train.py:842] (0/4) Epoch 18, batch 4250, loss[loss=0.2273, simple_loss=0.3026, pruned_loss=0.07599, over 7299.00 frames.], tot_loss[loss=0.1915, simple_loss=0.2767, pruned_loss=0.05311, over 1422247.87 frames.], batch size: 24, lr: 3.05e-04 2022-05-28 00:09:54,189 INFO [train.py:842] (0/4) Epoch 18, batch 4300, loss[loss=0.2363, simple_loss=0.3098, pruned_loss=0.08138, over 6461.00 frames.], tot_loss[loss=0.1914, 
simple_loss=0.2768, pruned_loss=0.05297, over 1423341.89 frames.], batch size: 38, lr: 3.05e-04 2022-05-28 00:10:33,522 INFO [train.py:842] (0/4) Epoch 18, batch 4350, loss[loss=0.1553, simple_loss=0.2446, pruned_loss=0.03297, over 7400.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2751, pruned_loss=0.05224, over 1425446.67 frames.], batch size: 18, lr: 3.05e-04 2022-05-28 00:11:12,544 INFO [train.py:842] (0/4) Epoch 18, batch 4400, loss[loss=0.1804, simple_loss=0.2671, pruned_loss=0.04688, over 7100.00 frames.], tot_loss[loss=0.1887, simple_loss=0.2736, pruned_loss=0.0519, over 1424170.37 frames.], batch size: 28, lr: 3.05e-04 2022-05-28 00:11:51,888 INFO [train.py:842] (0/4) Epoch 18, batch 4450, loss[loss=0.1982, simple_loss=0.2807, pruned_loss=0.05786, over 7276.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2742, pruned_loss=0.0523, over 1423359.32 frames.], batch size: 24, lr: 3.05e-04 2022-05-28 00:12:30,692 INFO [train.py:842] (0/4) Epoch 18, batch 4500, loss[loss=0.1814, simple_loss=0.2706, pruned_loss=0.0461, over 7259.00 frames.], tot_loss[loss=0.1891, simple_loss=0.2741, pruned_loss=0.05208, over 1424083.24 frames.], batch size: 19, lr: 3.05e-04 2022-05-28 00:13:09,718 INFO [train.py:842] (0/4) Epoch 18, batch 4550, loss[loss=0.2052, simple_loss=0.2983, pruned_loss=0.05609, over 7102.00 frames.], tot_loss[loss=0.1897, simple_loss=0.275, pruned_loss=0.0522, over 1423296.02 frames.], batch size: 28, lr: 3.05e-04 2022-05-28 00:13:48,423 INFO [train.py:842] (0/4) Epoch 18, batch 4600, loss[loss=0.1704, simple_loss=0.2747, pruned_loss=0.03307, over 7205.00 frames.], tot_loss[loss=0.1887, simple_loss=0.2741, pruned_loss=0.05163, over 1422166.64 frames.], batch size: 21, lr: 3.05e-04 2022-05-28 00:14:27,477 INFO [train.py:842] (0/4) Epoch 18, batch 4650, loss[loss=0.1799, simple_loss=0.2732, pruned_loss=0.04332, over 7214.00 frames.], tot_loss[loss=0.1895, simple_loss=0.2749, pruned_loss=0.05202, over 1416714.60 frames.], batch size: 22, lr: 3.05e-04 2022-05-28 00:15:06,692 INFO [train.py:842] (0/4) Epoch 18, batch 4700, loss[loss=0.1713, simple_loss=0.2588, pruned_loss=0.04193, over 7066.00 frames.], tot_loss[loss=0.1903, simple_loss=0.2752, pruned_loss=0.05275, over 1419962.50 frames.], batch size: 18, lr: 3.05e-04 2022-05-28 00:15:46,205 INFO [train.py:842] (0/4) Epoch 18, batch 4750, loss[loss=0.1844, simple_loss=0.2672, pruned_loss=0.05079, over 7359.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2747, pruned_loss=0.05276, over 1420786.06 frames.], batch size: 19, lr: 3.05e-04 2022-05-28 00:16:25,210 INFO [train.py:842] (0/4) Epoch 18, batch 4800, loss[loss=0.1597, simple_loss=0.247, pruned_loss=0.03623, over 7257.00 frames.], tot_loss[loss=0.1905, simple_loss=0.2752, pruned_loss=0.05284, over 1421731.30 frames.], batch size: 19, lr: 3.05e-04 2022-05-28 00:17:04,347 INFO [train.py:842] (0/4) Epoch 18, batch 4850, loss[loss=0.2203, simple_loss=0.2985, pruned_loss=0.07108, over 7058.00 frames.], tot_loss[loss=0.1895, simple_loss=0.2742, pruned_loss=0.05238, over 1425465.90 frames.], batch size: 28, lr: 3.05e-04 2022-05-28 00:17:43,300 INFO [train.py:842] (0/4) Epoch 18, batch 4900, loss[loss=0.1869, simple_loss=0.2748, pruned_loss=0.04951, over 7145.00 frames.], tot_loss[loss=0.1902, simple_loss=0.275, pruned_loss=0.05274, over 1428905.57 frames.], batch size: 20, lr: 3.05e-04 2022-05-28 00:18:22,519 INFO [train.py:842] (0/4) Epoch 18, batch 4950, loss[loss=0.1876, simple_loss=0.279, pruned_loss=0.04807, over 7259.00 frames.], tot_loss[loss=0.1904, simple_loss=0.2755, 
pruned_loss=0.0527, over 1427740.64 frames.], batch size: 19, lr: 3.05e-04 2022-05-28 00:19:01,609 INFO [train.py:842] (0/4) Epoch 18, batch 5000, loss[loss=0.2244, simple_loss=0.3054, pruned_loss=0.0717, over 7223.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2755, pruned_loss=0.0529, over 1428278.20 frames.], batch size: 23, lr: 3.05e-04 2022-05-28 00:19:40,980 INFO [train.py:842] (0/4) Epoch 18, batch 5050, loss[loss=0.2464, simple_loss=0.3128, pruned_loss=0.09, over 7161.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2762, pruned_loss=0.0528, over 1430661.76 frames.], batch size: 18, lr: 3.05e-04 2022-05-28 00:20:19,752 INFO [train.py:842] (0/4) Epoch 18, batch 5100, loss[loss=0.2279, simple_loss=0.3136, pruned_loss=0.07104, over 7185.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2759, pruned_loss=0.05279, over 1428262.16 frames.], batch size: 23, lr: 3.05e-04 2022-05-28 00:20:58,723 INFO [train.py:842] (0/4) Epoch 18, batch 5150, loss[loss=0.1722, simple_loss=0.2508, pruned_loss=0.04686, over 7413.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2745, pruned_loss=0.05164, over 1430685.23 frames.], batch size: 18, lr: 3.05e-04 2022-05-28 00:21:37,380 INFO [train.py:842] (0/4) Epoch 18, batch 5200, loss[loss=0.1822, simple_loss=0.2751, pruned_loss=0.04463, over 7088.00 frames.], tot_loss[loss=0.189, simple_loss=0.2747, pruned_loss=0.05163, over 1432088.04 frames.], batch size: 28, lr: 3.04e-04 2022-05-28 00:22:17,212 INFO [train.py:842] (0/4) Epoch 18, batch 5250, loss[loss=0.1799, simple_loss=0.2803, pruned_loss=0.03976, over 6737.00 frames.], tot_loss[loss=0.188, simple_loss=0.2737, pruned_loss=0.05115, over 1434292.91 frames.], batch size: 31, lr: 3.04e-04 2022-05-28 00:22:55,884 INFO [train.py:842] (0/4) Epoch 18, batch 5300, loss[loss=0.1959, simple_loss=0.2934, pruned_loss=0.04923, over 7306.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2752, pruned_loss=0.05204, over 1432363.92 frames.], batch size: 21, lr: 3.04e-04 2022-05-28 00:23:35,023 INFO [train.py:842] (0/4) Epoch 18, batch 5350, loss[loss=0.1733, simple_loss=0.2483, pruned_loss=0.04913, over 7420.00 frames.], tot_loss[loss=0.1892, simple_loss=0.2746, pruned_loss=0.05186, over 1433285.15 frames.], batch size: 18, lr: 3.04e-04 2022-05-28 00:24:13,948 INFO [train.py:842] (0/4) Epoch 18, batch 5400, loss[loss=0.1857, simple_loss=0.2713, pruned_loss=0.05006, over 7113.00 frames.], tot_loss[loss=0.1881, simple_loss=0.2736, pruned_loss=0.05134, over 1430040.81 frames.], batch size: 21, lr: 3.04e-04 2022-05-28 00:24:53,092 INFO [train.py:842] (0/4) Epoch 18, batch 5450, loss[loss=0.1879, simple_loss=0.2607, pruned_loss=0.05762, over 7361.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2738, pruned_loss=0.05137, over 1431927.53 frames.], batch size: 19, lr: 3.04e-04 2022-05-28 00:25:32,205 INFO [train.py:842] (0/4) Epoch 18, batch 5500, loss[loss=0.2155, simple_loss=0.2907, pruned_loss=0.07018, over 7186.00 frames.], tot_loss[loss=0.1884, simple_loss=0.2737, pruned_loss=0.05149, over 1433484.77 frames.], batch size: 26, lr: 3.04e-04 2022-05-28 00:26:11,396 INFO [train.py:842] (0/4) Epoch 18, batch 5550, loss[loss=0.2249, simple_loss=0.3065, pruned_loss=0.07166, over 7220.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2741, pruned_loss=0.05181, over 1434579.67 frames.], batch size: 22, lr: 3.04e-04 2022-05-28 00:26:50,011 INFO [train.py:842] (0/4) Epoch 18, batch 5600, loss[loss=0.1938, simple_loss=0.2809, pruned_loss=0.05335, over 7277.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2749, pruned_loss=0.05271, over 
1433269.56 frames.], batch size: 24, lr: 3.04e-04 2022-05-28 00:27:28,941 INFO [train.py:842] (0/4) Epoch 18, batch 5650, loss[loss=0.1793, simple_loss=0.2588, pruned_loss=0.04988, over 7065.00 frames.], tot_loss[loss=0.1903, simple_loss=0.2755, pruned_loss=0.05254, over 1429825.02 frames.], batch size: 18, lr: 3.04e-04 2022-05-28 00:28:08,030 INFO [train.py:842] (0/4) Epoch 18, batch 5700, loss[loss=0.2019, simple_loss=0.295, pruned_loss=0.05439, over 7371.00 frames.], tot_loss[loss=0.1915, simple_loss=0.2769, pruned_loss=0.05308, over 1430478.19 frames.], batch size: 23, lr: 3.04e-04 2022-05-28 00:28:47,259 INFO [train.py:842] (0/4) Epoch 18, batch 5750, loss[loss=0.1931, simple_loss=0.2861, pruned_loss=0.05004, over 7120.00 frames.], tot_loss[loss=0.192, simple_loss=0.2776, pruned_loss=0.05317, over 1430319.60 frames.], batch size: 21, lr: 3.04e-04 2022-05-28 00:29:26,193 INFO [train.py:842] (0/4) Epoch 18, batch 5800, loss[loss=0.2279, simple_loss=0.3265, pruned_loss=0.06468, over 7299.00 frames.], tot_loss[loss=0.1912, simple_loss=0.2769, pruned_loss=0.05279, over 1430166.70 frames.], batch size: 25, lr: 3.04e-04 2022-05-28 00:30:05,283 INFO [train.py:842] (0/4) Epoch 18, batch 5850, loss[loss=0.2109, simple_loss=0.3081, pruned_loss=0.05684, over 7209.00 frames.], tot_loss[loss=0.1902, simple_loss=0.276, pruned_loss=0.05215, over 1424672.01 frames.], batch size: 23, lr: 3.04e-04 2022-05-28 00:30:44,179 INFO [train.py:842] (0/4) Epoch 18, batch 5900, loss[loss=0.1545, simple_loss=0.2348, pruned_loss=0.03708, over 7055.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2764, pruned_loss=0.05264, over 1423754.72 frames.], batch size: 18, lr: 3.04e-04 2022-05-28 00:31:22,985 INFO [train.py:842] (0/4) Epoch 18, batch 5950, loss[loss=0.143, simple_loss=0.2321, pruned_loss=0.02693, over 7253.00 frames.], tot_loss[loss=0.19, simple_loss=0.2754, pruned_loss=0.05234, over 1423216.05 frames.], batch size: 19, lr: 3.04e-04 2022-05-28 00:32:01,714 INFO [train.py:842] (0/4) Epoch 18, batch 6000, loss[loss=0.2007, simple_loss=0.2949, pruned_loss=0.0532, over 7285.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2762, pruned_loss=0.05262, over 1427642.01 frames.], batch size: 25, lr: 3.04e-04 2022-05-28 00:32:01,715 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 00:32:11,164 INFO [train.py:871] (0/4) Epoch 18, validation: loss=0.1679, simple_loss=0.2675, pruned_loss=0.03415, over 868885.00 frames. 
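Besides the per-epoch files (epoch-17.pt above, epoch-18.pt further down), the trainer also writes batch-indexed checkpoints such as streaming_pruned_transducer_stateless4/exp/checkpoint-160000.pt, saved earlier in this epoch. A minimal way to inspect such a file offline is sketched below; it assumes the file is an ordinary torch.save() dictionary, as icefall's checkpoint helpers write, and makes no assumption about which keys are present.

# Minimal sketch for inspecting the checkpoint-160000.pt written above.
# Assumes an ordinary torch.save() dictionary; the key set may vary by
# version, so only report what is actually there.
import torch

ckpt_path = "streaming_pruned_transducer_stateless4/exp/checkpoint-160000.pt"
ckpt = torch.load(ckpt_path, map_location="cpu")

for key, value in ckpt.items():
    if isinstance(value, dict):
        print(f"{key}: dict with {len(value)} entries")
    else:
        print(f"{key}: {type(value).__name__}")

# If a "model" entry is present (true for icefall checkpoints of this era,
# but treat that as an assumption), it can typically be loaded into a
# freshly constructed model with model.load_state_dict(ckpt["model"]).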
2022-05-28 00:32:50,444 INFO [train.py:842] (0/4) Epoch 18, batch 6050, loss[loss=0.182, simple_loss=0.2659, pruned_loss=0.04904, over 7404.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2753, pruned_loss=0.05244, over 1426571.41 frames.], batch size: 21, lr: 3.04e-04 2022-05-28 00:33:29,267 INFO [train.py:842] (0/4) Epoch 18, batch 6100, loss[loss=0.1962, simple_loss=0.2695, pruned_loss=0.06146, over 7438.00 frames.], tot_loss[loss=0.1897, simple_loss=0.275, pruned_loss=0.05222, over 1428608.97 frames.], batch size: 20, lr: 3.04e-04 2022-05-28 00:34:08,672 INFO [train.py:842] (0/4) Epoch 18, batch 6150, loss[loss=0.1727, simple_loss=0.2687, pruned_loss=0.03837, over 7118.00 frames.], tot_loss[loss=0.1887, simple_loss=0.2738, pruned_loss=0.05175, over 1431090.50 frames.], batch size: 21, lr: 3.04e-04 2022-05-28 00:34:47,537 INFO [train.py:842] (0/4) Epoch 18, batch 6200, loss[loss=0.1613, simple_loss=0.2635, pruned_loss=0.02953, over 7118.00 frames.], tot_loss[loss=0.188, simple_loss=0.2731, pruned_loss=0.05145, over 1425255.40 frames.], batch size: 21, lr: 3.04e-04 2022-05-28 00:35:26,867 INFO [train.py:842] (0/4) Epoch 18, batch 6250, loss[loss=0.1884, simple_loss=0.2815, pruned_loss=0.04768, over 7222.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2725, pruned_loss=0.05093, over 1424321.91 frames.], batch size: 21, lr: 3.04e-04 2022-05-28 00:36:05,717 INFO [train.py:842] (0/4) Epoch 18, batch 6300, loss[loss=0.263, simple_loss=0.357, pruned_loss=0.08455, over 7143.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2731, pruned_loss=0.05106, over 1422501.19 frames.], batch size: 20, lr: 3.03e-04 2022-05-28 00:36:45,093 INFO [train.py:842] (0/4) Epoch 18, batch 6350, loss[loss=0.2282, simple_loss=0.2985, pruned_loss=0.07893, over 7216.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2732, pruned_loss=0.05167, over 1426201.88 frames.], batch size: 21, lr: 3.03e-04 2022-05-28 00:37:23,932 INFO [train.py:842] (0/4) Epoch 18, batch 6400, loss[loss=0.1678, simple_loss=0.244, pruned_loss=0.04585, over 7398.00 frames.], tot_loss[loss=0.1884, simple_loss=0.2736, pruned_loss=0.05159, over 1424874.21 frames.], batch size: 18, lr: 3.03e-04 2022-05-28 00:38:03,178 INFO [train.py:842] (0/4) Epoch 18, batch 6450, loss[loss=0.1646, simple_loss=0.252, pruned_loss=0.03861, over 7362.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2735, pruned_loss=0.0517, over 1425796.21 frames.], batch size: 19, lr: 3.03e-04 2022-05-28 00:38:41,961 INFO [train.py:842] (0/4) Epoch 18, batch 6500, loss[loss=0.1698, simple_loss=0.2418, pruned_loss=0.0489, over 7143.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2739, pruned_loss=0.05162, over 1424265.65 frames.], batch size: 17, lr: 3.03e-04 2022-05-28 00:39:21,185 INFO [train.py:842] (0/4) Epoch 18, batch 6550, loss[loss=0.219, simple_loss=0.3058, pruned_loss=0.06614, over 7333.00 frames.], tot_loss[loss=0.1881, simple_loss=0.2732, pruned_loss=0.05152, over 1427125.22 frames.], batch size: 20, lr: 3.03e-04 2022-05-28 00:40:00,111 INFO [train.py:842] (0/4) Epoch 18, batch 6600, loss[loss=0.2027, simple_loss=0.2962, pruned_loss=0.05467, over 7201.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2729, pruned_loss=0.05116, over 1426409.99 frames.], batch size: 22, lr: 3.03e-04 2022-05-28 00:40:38,929 INFO [train.py:842] (0/4) Epoch 18, batch 6650, loss[loss=0.1808, simple_loss=0.2744, pruned_loss=0.04359, over 7330.00 frames.], tot_loss[loss=0.189, simple_loss=0.2748, pruned_loss=0.05161, over 1419810.52 frames.], batch size: 22, lr: 3.03e-04 2022-05-28 00:41:17,424 INFO 
[train.py:842] (0/4) Epoch 18, batch 6700, loss[loss=0.2154, simple_loss=0.2943, pruned_loss=0.06825, over 7307.00 frames.], tot_loss[loss=0.1903, simple_loss=0.2764, pruned_loss=0.05213, over 1417283.34 frames.], batch size: 25, lr: 3.03e-04 2022-05-28 00:41:56,393 INFO [train.py:842] (0/4) Epoch 18, batch 6750, loss[loss=0.1996, simple_loss=0.2981, pruned_loss=0.05053, over 7212.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2754, pruned_loss=0.05173, over 1416716.37 frames.], batch size: 22, lr: 3.03e-04 2022-05-28 00:42:35,493 INFO [train.py:842] (0/4) Epoch 18, batch 6800, loss[loss=0.1736, simple_loss=0.2559, pruned_loss=0.04569, over 7270.00 frames.], tot_loss[loss=0.1883, simple_loss=0.274, pruned_loss=0.05125, over 1418078.97 frames.], batch size: 18, lr: 3.03e-04 2022-05-28 00:43:14,480 INFO [train.py:842] (0/4) Epoch 18, batch 6850, loss[loss=0.2246, simple_loss=0.3114, pruned_loss=0.06891, over 7407.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2745, pruned_loss=0.0513, over 1421484.56 frames.], batch size: 23, lr: 3.03e-04 2022-05-28 00:43:53,153 INFO [train.py:842] (0/4) Epoch 18, batch 6900, loss[loss=0.1598, simple_loss=0.2584, pruned_loss=0.0306, over 7153.00 frames.], tot_loss[loss=0.1881, simple_loss=0.2737, pruned_loss=0.05122, over 1421641.52 frames.], batch size: 20, lr: 3.03e-04 2022-05-28 00:44:31,882 INFO [train.py:842] (0/4) Epoch 18, batch 6950, loss[loss=0.2136, simple_loss=0.2968, pruned_loss=0.06524, over 7291.00 frames.], tot_loss[loss=0.1891, simple_loss=0.275, pruned_loss=0.05156, over 1422297.70 frames.], batch size: 24, lr: 3.03e-04 2022-05-28 00:45:09,746 INFO [train.py:842] (0/4) Epoch 18, batch 7000, loss[loss=0.3063, simple_loss=0.3679, pruned_loss=0.1224, over 4921.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2744, pruned_loss=0.05099, over 1422987.36 frames.], batch size: 52, lr: 3.03e-04 2022-05-28 00:45:48,073 INFO [train.py:842] (0/4) Epoch 18, batch 7050, loss[loss=0.1648, simple_loss=0.2539, pruned_loss=0.03784, over 7157.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2724, pruned_loss=0.04997, over 1424214.03 frames.], batch size: 19, lr: 3.03e-04 2022-05-28 00:46:26,160 INFO [train.py:842] (0/4) Epoch 18, batch 7100, loss[loss=0.1809, simple_loss=0.271, pruned_loss=0.0454, over 7220.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2729, pruned_loss=0.0507, over 1424657.22 frames.], batch size: 21, lr: 3.03e-04 2022-05-28 00:47:04,375 INFO [train.py:842] (0/4) Epoch 18, batch 7150, loss[loss=0.1657, simple_loss=0.247, pruned_loss=0.04219, over 7255.00 frames.], tot_loss[loss=0.1858, simple_loss=0.272, pruned_loss=0.04975, over 1427032.79 frames.], batch size: 19, lr: 3.03e-04 2022-05-28 00:47:42,502 INFO [train.py:842] (0/4) Epoch 18, batch 7200, loss[loss=0.2262, simple_loss=0.2996, pruned_loss=0.07636, over 7161.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2741, pruned_loss=0.05085, over 1429116.72 frames.], batch size: 19, lr: 3.03e-04 2022-05-28 00:48:20,743 INFO [train.py:842] (0/4) Epoch 18, batch 7250, loss[loss=0.2547, simple_loss=0.3409, pruned_loss=0.08424, over 7203.00 frames.], tot_loss[loss=0.1887, simple_loss=0.2747, pruned_loss=0.05137, over 1427729.49 frames.], batch size: 23, lr: 3.03e-04 2022-05-28 00:48:58,651 INFO [train.py:842] (0/4) Epoch 18, batch 7300, loss[loss=0.1784, simple_loss=0.2653, pruned_loss=0.04569, over 7241.00 frames.], tot_loss[loss=0.1888, simple_loss=0.2746, pruned_loss=0.05152, over 1425155.50 frames.], batch size: 20, lr: 3.03e-04 2022-05-28 00:49:37,075 INFO [train.py:842] (0/4) Epoch 
18, batch 7350, loss[loss=0.1851, simple_loss=0.2486, pruned_loss=0.06081, over 7156.00 frames.], tot_loss[loss=0.188, simple_loss=0.2738, pruned_loss=0.05112, over 1428731.63 frames.], batch size: 17, lr: 3.02e-04 2022-05-28 00:50:15,178 INFO [train.py:842] (0/4) Epoch 18, batch 7400, loss[loss=0.1723, simple_loss=0.2686, pruned_loss=0.03798, over 7324.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2734, pruned_loss=0.0512, over 1425346.02 frames.], batch size: 21, lr: 3.02e-04 2022-05-28 00:50:53,594 INFO [train.py:842] (0/4) Epoch 18, batch 7450, loss[loss=0.1921, simple_loss=0.2828, pruned_loss=0.05069, over 7408.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2739, pruned_loss=0.05151, over 1422991.79 frames.], batch size: 21, lr: 3.02e-04 2022-05-28 00:51:31,506 INFO [train.py:842] (0/4) Epoch 18, batch 7500, loss[loss=0.161, simple_loss=0.2528, pruned_loss=0.03461, over 7232.00 frames.], tot_loss[loss=0.1884, simple_loss=0.274, pruned_loss=0.05138, over 1421279.56 frames.], batch size: 20, lr: 3.02e-04 2022-05-28 00:52:09,619 INFO [train.py:842] (0/4) Epoch 18, batch 7550, loss[loss=0.219, simple_loss=0.2952, pruned_loss=0.0714, over 5096.00 frames.], tot_loss[loss=0.1887, simple_loss=0.2746, pruned_loss=0.05144, over 1421295.82 frames.], batch size: 53, lr: 3.02e-04 2022-05-28 00:52:47,701 INFO [train.py:842] (0/4) Epoch 18, batch 7600, loss[loss=0.2053, simple_loss=0.2871, pruned_loss=0.06171, over 7170.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2741, pruned_loss=0.05109, over 1425162.79 frames.], batch size: 26, lr: 3.02e-04 2022-05-28 00:53:25,832 INFO [train.py:842] (0/4) Epoch 18, batch 7650, loss[loss=0.2013, simple_loss=0.2697, pruned_loss=0.06648, over 7406.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2734, pruned_loss=0.05118, over 1423489.85 frames.], batch size: 18, lr: 3.02e-04 2022-05-28 00:54:03,764 INFO [train.py:842] (0/4) Epoch 18, batch 7700, loss[loss=0.1959, simple_loss=0.2912, pruned_loss=0.0503, over 6689.00 frames.], tot_loss[loss=0.1894, simple_loss=0.275, pruned_loss=0.05186, over 1423120.45 frames.], batch size: 31, lr: 3.02e-04 2022-05-28 00:54:42,120 INFO [train.py:842] (0/4) Epoch 18, batch 7750, loss[loss=0.1982, simple_loss=0.2734, pruned_loss=0.06152, over 7292.00 frames.], tot_loss[loss=0.1887, simple_loss=0.2744, pruned_loss=0.05154, over 1424300.24 frames.], batch size: 17, lr: 3.02e-04 2022-05-28 00:55:20,294 INFO [train.py:842] (0/4) Epoch 18, batch 7800, loss[loss=0.1744, simple_loss=0.2549, pruned_loss=0.04691, over 7172.00 frames.], tot_loss[loss=0.1875, simple_loss=0.2732, pruned_loss=0.05094, over 1427441.15 frames.], batch size: 18, lr: 3.02e-04 2022-05-28 00:55:58,691 INFO [train.py:842] (0/4) Epoch 18, batch 7850, loss[loss=0.1648, simple_loss=0.2505, pruned_loss=0.03956, over 7360.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2734, pruned_loss=0.05087, over 1429316.19 frames.], batch size: 19, lr: 3.02e-04 2022-05-28 00:56:36,705 INFO [train.py:842] (0/4) Epoch 18, batch 7900, loss[loss=0.2435, simple_loss=0.3224, pruned_loss=0.0823, over 7265.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2732, pruned_loss=0.05057, over 1432365.79 frames.], batch size: 24, lr: 3.02e-04 2022-05-28 00:57:14,986 INFO [train.py:842] (0/4) Epoch 18, batch 7950, loss[loss=0.1959, simple_loss=0.2778, pruned_loss=0.05699, over 5052.00 frames.], tot_loss[loss=0.1875, simple_loss=0.2733, pruned_loss=0.05089, over 1431882.55 frames.], batch size: 54, lr: 3.02e-04 2022-05-28 00:57:52,947 INFO [train.py:842] (0/4) Epoch 18, batch 8000, 
loss[loss=0.1807, simple_loss=0.2713, pruned_loss=0.04505, over 7203.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2729, pruned_loss=0.05072, over 1433145.50 frames.], batch size: 21, lr: 3.02e-04 2022-05-28 00:58:31,215 INFO [train.py:842] (0/4) Epoch 18, batch 8050, loss[loss=0.1912, simple_loss=0.2852, pruned_loss=0.04858, over 7058.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2728, pruned_loss=0.05071, over 1427561.92 frames.], batch size: 28, lr: 3.02e-04 2022-05-28 00:59:09,181 INFO [train.py:842] (0/4) Epoch 18, batch 8100, loss[loss=0.1597, simple_loss=0.24, pruned_loss=0.03973, over 7230.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2735, pruned_loss=0.05084, over 1424272.88 frames.], batch size: 16, lr: 3.02e-04 2022-05-28 00:59:47,549 INFO [train.py:842] (0/4) Epoch 18, batch 8150, loss[loss=0.1957, simple_loss=0.2823, pruned_loss=0.05458, over 7062.00 frames.], tot_loss[loss=0.1884, simple_loss=0.274, pruned_loss=0.05144, over 1426982.94 frames.], batch size: 28, lr: 3.02e-04 2022-05-28 01:00:25,310 INFO [train.py:842] (0/4) Epoch 18, batch 8200, loss[loss=0.1677, simple_loss=0.2511, pruned_loss=0.04216, over 7145.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2742, pruned_loss=0.05122, over 1425559.65 frames.], batch size: 17, lr: 3.02e-04 2022-05-28 01:01:03,702 INFO [train.py:842] (0/4) Epoch 18, batch 8250, loss[loss=0.189, simple_loss=0.2771, pruned_loss=0.05043, over 7205.00 frames.], tot_loss[loss=0.1878, simple_loss=0.2735, pruned_loss=0.05104, over 1426420.80 frames.], batch size: 22, lr: 3.02e-04 2022-05-28 01:01:41,865 INFO [train.py:842] (0/4) Epoch 18, batch 8300, loss[loss=0.1734, simple_loss=0.2509, pruned_loss=0.04793, over 7124.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2726, pruned_loss=0.0508, over 1427287.06 frames.], batch size: 17, lr: 3.02e-04 2022-05-28 01:02:20,124 INFO [train.py:842] (0/4) Epoch 18, batch 8350, loss[loss=0.169, simple_loss=0.2599, pruned_loss=0.03905, over 7108.00 frames.], tot_loss[loss=0.1877, simple_loss=0.2729, pruned_loss=0.05124, over 1426311.95 frames.], batch size: 21, lr: 3.02e-04 2022-05-28 01:02:58,015 INFO [train.py:842] (0/4) Epoch 18, batch 8400, loss[loss=0.1808, simple_loss=0.2752, pruned_loss=0.04323, over 7416.00 frames.], tot_loss[loss=0.1874, simple_loss=0.2726, pruned_loss=0.05107, over 1423043.39 frames.], batch size: 21, lr: 3.02e-04 2022-05-28 01:03:36,353 INFO [train.py:842] (0/4) Epoch 18, batch 8450, loss[loss=0.1938, simple_loss=0.288, pruned_loss=0.04985, over 7171.00 frames.], tot_loss[loss=0.1877, simple_loss=0.2731, pruned_loss=0.05117, over 1424643.47 frames.], batch size: 26, lr: 3.01e-04 2022-05-28 01:04:14,378 INFO [train.py:842] (0/4) Epoch 18, batch 8500, loss[loss=0.1883, simple_loss=0.2677, pruned_loss=0.05442, over 7077.00 frames.], tot_loss[loss=0.188, simple_loss=0.2731, pruned_loss=0.05144, over 1425515.19 frames.], batch size: 18, lr: 3.01e-04 2022-05-28 01:04:52,541 INFO [train.py:842] (0/4) Epoch 18, batch 8550, loss[loss=0.1996, simple_loss=0.2796, pruned_loss=0.05978, over 7420.00 frames.], tot_loss[loss=0.1895, simple_loss=0.2746, pruned_loss=0.05215, over 1427501.23 frames.], batch size: 21, lr: 3.01e-04 2022-05-28 01:05:30,280 INFO [train.py:842] (0/4) Epoch 18, batch 8600, loss[loss=0.1788, simple_loss=0.2412, pruned_loss=0.05818, over 7255.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2753, pruned_loss=0.0521, over 1425222.38 frames.], batch size: 17, lr: 3.01e-04 2022-05-28 01:06:08,302 INFO [train.py:842] (0/4) Epoch 18, batch 8650, loss[loss=0.1695, 
simple_loss=0.2596, pruned_loss=0.03968, over 7249.00 frames.], tot_loss[loss=0.1905, simple_loss=0.2761, pruned_loss=0.05246, over 1418691.54 frames.], batch size: 19, lr: 3.01e-04 2022-05-28 01:06:46,193 INFO [train.py:842] (0/4) Epoch 18, batch 8700, loss[loss=0.1739, simple_loss=0.2688, pruned_loss=0.03952, over 7284.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2755, pruned_loss=0.05216, over 1419434.50 frames.], batch size: 25, lr: 3.01e-04 2022-05-28 01:07:24,553 INFO [train.py:842] (0/4) Epoch 18, batch 8750, loss[loss=0.1801, simple_loss=0.2763, pruned_loss=0.04192, over 7191.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2752, pruned_loss=0.05222, over 1419125.12 frames.], batch size: 23, lr: 3.01e-04 2022-05-28 01:08:02,249 INFO [train.py:842] (0/4) Epoch 18, batch 8800, loss[loss=0.1664, simple_loss=0.2525, pruned_loss=0.04015, over 7128.00 frames.], tot_loss[loss=0.1904, simple_loss=0.2757, pruned_loss=0.05259, over 1408632.10 frames.], batch size: 17, lr: 3.01e-04 2022-05-28 01:08:40,264 INFO [train.py:842] (0/4) Epoch 18, batch 8850, loss[loss=0.1819, simple_loss=0.2726, pruned_loss=0.0456, over 6504.00 frames.], tot_loss[loss=0.1904, simple_loss=0.2756, pruned_loss=0.05265, over 1403250.69 frames.], batch size: 38, lr: 3.01e-04 2022-05-28 01:09:17,562 INFO [train.py:842] (0/4) Epoch 18, batch 8900, loss[loss=0.1708, simple_loss=0.2486, pruned_loss=0.04653, over 7003.00 frames.], tot_loss[loss=0.1904, simple_loss=0.2756, pruned_loss=0.05265, over 1397092.93 frames.], batch size: 16, lr: 3.01e-04 2022-05-28 01:09:55,310 INFO [train.py:842] (0/4) Epoch 18, batch 8950, loss[loss=0.2625, simple_loss=0.3379, pruned_loss=0.09353, over 5513.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2759, pruned_loss=0.0529, over 1387432.30 frames.], batch size: 53, lr: 3.01e-04 2022-05-28 01:10:32,665 INFO [train.py:842] (0/4) Epoch 18, batch 9000, loss[loss=0.1926, simple_loss=0.2753, pruned_loss=0.05496, over 6858.00 frames.], tot_loss[loss=0.1904, simple_loss=0.2759, pruned_loss=0.05242, over 1383679.14 frames.], batch size: 31, lr: 3.01e-04 2022-05-28 01:10:32,666 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 01:10:41,756 INFO [train.py:871] (0/4) Epoch 18, validation: loss=0.1664, simple_loss=0.2661, pruned_loss=0.03336, over 868885.00 frames. 
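The "over N frames" suffix on the tot_loss and validation figures indicates frame-weighted averaging: each batch contributes its per-frame loss weighted by the number of frames it contains, and the printed value is the ratio of the two running sums (the validation figure is reported over the same 868885 frames each time). The sketch below shows that kind of aggregation in isolation; it is a simplification of the metrics tracking in train.py (the class name RunningLoss is made up), and it ignores details such as how often the training-side accumulator is reset and how it is reduced across the four workers.

# Simplified sketch of frame-weighted loss aggregation. RunningLoss is a
# hypothetical name, not a class from train.py; the real tracker also
# handles resets and distributed reduction.
class RunningLoss:
    def __init__(self) -> None:
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, per_frame_loss: float, num_frames: float) -> None:
        # Each batch contributes its loss weighted by its frame count.
        self.loss_sum += per_frame_loss * num_frames
        self.frames += num_frames

    @property
    def value(self) -> float:
        return self.loss_sum / self.frames

# Example with two of the batches logged just above: a 7003-frame batch at
# loss 0.1708 and a 5513-frame batch at loss 0.2625 average to about 0.211,
# weighted toward the larger batch.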
2022-05-28 01:11:19,004 INFO [train.py:842] (0/4) Epoch 18, batch 9050, loss[loss=0.17, simple_loss=0.2631, pruned_loss=0.03843, over 6834.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2767, pruned_loss=0.05297, over 1367373.44 frames.], batch size: 31, lr: 3.01e-04 2022-05-28 01:11:55,783 INFO [train.py:842] (0/4) Epoch 18, batch 9100, loss[loss=0.1879, simple_loss=0.2698, pruned_loss=0.05303, over 5048.00 frames.], tot_loss[loss=0.1987, simple_loss=0.2823, pruned_loss=0.05751, over 1292873.97 frames.], batch size: 53, lr: 3.01e-04 2022-05-28 01:12:32,909 INFO [train.py:842] (0/4) Epoch 18, batch 9150, loss[loss=0.2361, simple_loss=0.3143, pruned_loss=0.07894, over 4932.00 frames.], tot_loss[loss=0.2035, simple_loss=0.2862, pruned_loss=0.06045, over 1231177.41 frames.], batch size: 52, lr: 3.01e-04 2022-05-28 01:13:04,323 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-18.pt 2022-05-28 01:13:18,594 INFO [train.py:842] (0/4) Epoch 19, batch 0, loss[loss=0.2342, simple_loss=0.3147, pruned_loss=0.07686, over 7337.00 frames.], tot_loss[loss=0.2342, simple_loss=0.3147, pruned_loss=0.07686, over 7337.00 frames.], batch size: 25, lr: 2.93e-04 2022-05-28 01:13:57,224 INFO [train.py:842] (0/4) Epoch 19, batch 50, loss[loss=0.1763, simple_loss=0.2709, pruned_loss=0.0409, over 7336.00 frames.], tot_loss[loss=0.1905, simple_loss=0.2762, pruned_loss=0.0524, over 325054.93 frames.], batch size: 22, lr: 2.93e-04 2022-05-28 01:14:35,411 INFO [train.py:842] (0/4) Epoch 19, batch 100, loss[loss=0.1921, simple_loss=0.2767, pruned_loss=0.05378, over 7330.00 frames.], tot_loss[loss=0.187, simple_loss=0.273, pruned_loss=0.05047, over 574322.90 frames.], batch size: 22, lr: 2.93e-04 2022-05-28 01:15:13,704 INFO [train.py:842] (0/4) Epoch 19, batch 150, loss[loss=0.1813, simple_loss=0.2633, pruned_loss=0.04971, over 7221.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2716, pruned_loss=0.04993, over 764100.65 frames.], batch size: 21, lr: 2.93e-04 2022-05-28 01:15:51,846 INFO [train.py:842] (0/4) Epoch 19, batch 200, loss[loss=0.1567, simple_loss=0.2417, pruned_loss=0.03589, over 7288.00 frames.], tot_loss[loss=0.1862, simple_loss=0.272, pruned_loss=0.05026, over 909670.10 frames.], batch size: 17, lr: 2.93e-04 2022-05-28 01:16:30,152 INFO [train.py:842] (0/4) Epoch 19, batch 250, loss[loss=0.1896, simple_loss=0.2887, pruned_loss=0.04524, over 6697.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2723, pruned_loss=0.05049, over 1025756.58 frames.], batch size: 31, lr: 2.93e-04 2022-05-28 01:17:08,197 INFO [train.py:842] (0/4) Epoch 19, batch 300, loss[loss=0.2013, simple_loss=0.2935, pruned_loss=0.05455, over 7233.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2727, pruned_loss=0.05083, over 1116153.15 frames.], batch size: 20, lr: 2.93e-04 2022-05-28 01:17:46,565 INFO [train.py:842] (0/4) Epoch 19, batch 350, loss[loss=0.2103, simple_loss=0.2866, pruned_loss=0.06699, over 6716.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2733, pruned_loss=0.05167, over 1182546.70 frames.], batch size: 31, lr: 2.93e-04 2022-05-28 01:18:24,401 INFO [train.py:842] (0/4) Epoch 19, batch 400, loss[loss=0.1705, simple_loss=0.2593, pruned_loss=0.04081, over 7066.00 frames.], tot_loss[loss=0.1881, simple_loss=0.2737, pruned_loss=0.05129, over 1234067.67 frames.], batch size: 18, lr: 2.93e-04 2022-05-28 01:19:02,586 INFO [train.py:842] (0/4) Epoch 19, batch 450, loss[loss=0.2075, simple_loss=0.2833, pruned_loss=0.06586, over 7333.00 frames.], tot_loss[loss=0.1885, 
simple_loss=0.2741, pruned_loss=0.05144, over 1275514.46 frames.], batch size: 22, lr: 2.93e-04 2022-05-28 01:19:40,374 INFO [train.py:842] (0/4) Epoch 19, batch 500, loss[loss=0.1698, simple_loss=0.2545, pruned_loss=0.04258, over 7134.00 frames.], tot_loss[loss=0.1898, simple_loss=0.275, pruned_loss=0.05227, over 1305628.05 frames.], batch size: 17, lr: 2.93e-04 2022-05-28 01:20:18,785 INFO [train.py:842] (0/4) Epoch 19, batch 550, loss[loss=0.1624, simple_loss=0.2373, pruned_loss=0.04378, over 7295.00 frames.], tot_loss[loss=0.1873, simple_loss=0.2731, pruned_loss=0.05079, over 1335548.28 frames.], batch size: 17, lr: 2.93e-04 2022-05-28 01:20:56,846 INFO [train.py:842] (0/4) Epoch 19, batch 600, loss[loss=0.1624, simple_loss=0.2392, pruned_loss=0.04279, over 7285.00 frames.], tot_loss[loss=0.1885, simple_loss=0.274, pruned_loss=0.05153, over 1355688.42 frames.], batch size: 18, lr: 2.93e-04 2022-05-28 01:21:35,372 INFO [train.py:842] (0/4) Epoch 19, batch 650, loss[loss=0.1924, simple_loss=0.2789, pruned_loss=0.05296, over 7111.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2735, pruned_loss=0.05176, over 1374693.31 frames.], batch size: 21, lr: 2.93e-04 2022-05-28 01:22:13,254 INFO [train.py:842] (0/4) Epoch 19, batch 700, loss[loss=0.1897, simple_loss=0.2686, pruned_loss=0.05542, over 4979.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2746, pruned_loss=0.05205, over 1385378.64 frames.], batch size: 53, lr: 2.93e-04 2022-05-28 01:22:51,649 INFO [train.py:842] (0/4) Epoch 19, batch 750, loss[loss=0.1689, simple_loss=0.2576, pruned_loss=0.04014, over 7157.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2736, pruned_loss=0.05144, over 1393414.19 frames.], batch size: 19, lr: 2.93e-04 2022-05-28 01:23:29,307 INFO [train.py:842] (0/4) Epoch 19, batch 800, loss[loss=0.1666, simple_loss=0.2579, pruned_loss=0.03767, over 6928.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2742, pruned_loss=0.05108, over 1396297.30 frames.], batch size: 31, lr: 2.92e-04 2022-05-28 01:24:07,573 INFO [train.py:842] (0/4) Epoch 19, batch 850, loss[loss=0.2015, simple_loss=0.2765, pruned_loss=0.06325, over 7065.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2747, pruned_loss=0.05125, over 1404046.42 frames.], batch size: 18, lr: 2.92e-04 2022-05-28 01:24:45,426 INFO [train.py:842] (0/4) Epoch 19, batch 900, loss[loss=0.181, simple_loss=0.2603, pruned_loss=0.0509, over 6860.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2748, pruned_loss=0.05116, over 1409510.82 frames.], batch size: 15, lr: 2.92e-04 2022-05-28 01:25:23,694 INFO [train.py:842] (0/4) Epoch 19, batch 950, loss[loss=0.205, simple_loss=0.2927, pruned_loss=0.05869, over 7377.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2744, pruned_loss=0.05127, over 1412735.21 frames.], batch size: 23, lr: 2.92e-04 2022-05-28 01:26:01,655 INFO [train.py:842] (0/4) Epoch 19, batch 1000, loss[loss=0.1732, simple_loss=0.2593, pruned_loss=0.04357, over 7153.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2754, pruned_loss=0.05216, over 1419214.72 frames.], batch size: 20, lr: 2.92e-04 2022-05-28 01:26:39,854 INFO [train.py:842] (0/4) Epoch 19, batch 1050, loss[loss=0.1715, simple_loss=0.2695, pruned_loss=0.03678, over 7308.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2738, pruned_loss=0.05165, over 1417195.36 frames.], batch size: 25, lr: 2.92e-04 2022-05-28 01:27:17,939 INFO [train.py:842] (0/4) Epoch 19, batch 1100, loss[loss=0.1683, simple_loss=0.2513, pruned_loss=0.04259, over 7328.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2729, 
pruned_loss=0.05142, over 1418046.36 frames.], batch size: 20, lr: 2.92e-04 2022-05-28 01:27:56,169 INFO [train.py:842] (0/4) Epoch 19, batch 1150, loss[loss=0.2025, simple_loss=0.2856, pruned_loss=0.05972, over 7278.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2728, pruned_loss=0.05118, over 1418732.57 frames.], batch size: 24, lr: 2.92e-04 2022-05-28 01:28:34,212 INFO [train.py:842] (0/4) Epoch 19, batch 1200, loss[loss=0.247, simple_loss=0.3136, pruned_loss=0.09017, over 5116.00 frames.], tot_loss[loss=0.188, simple_loss=0.273, pruned_loss=0.0515, over 1414561.38 frames.], batch size: 52, lr: 2.92e-04 2022-05-28 01:29:12,404 INFO [train.py:842] (0/4) Epoch 19, batch 1250, loss[loss=0.1731, simple_loss=0.2609, pruned_loss=0.04264, over 7098.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2732, pruned_loss=0.05173, over 1415600.99 frames.], batch size: 21, lr: 2.92e-04 2022-05-28 01:29:50,145 INFO [train.py:842] (0/4) Epoch 19, batch 1300, loss[loss=0.1523, simple_loss=0.2389, pruned_loss=0.03284, over 7154.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2735, pruned_loss=0.05149, over 1415034.41 frames.], batch size: 19, lr: 2.92e-04 2022-05-28 01:30:28,060 INFO [train.py:842] (0/4) Epoch 19, batch 1350, loss[loss=0.2042, simple_loss=0.2882, pruned_loss=0.06014, over 7072.00 frames.], tot_loss[loss=0.1892, simple_loss=0.2746, pruned_loss=0.05196, over 1413072.82 frames.], batch size: 28, lr: 2.92e-04 2022-05-28 01:31:06,124 INFO [train.py:842] (0/4) Epoch 19, batch 1400, loss[loss=0.1697, simple_loss=0.2557, pruned_loss=0.04185, over 7065.00 frames.], tot_loss[loss=0.1891, simple_loss=0.2742, pruned_loss=0.05204, over 1410079.37 frames.], batch size: 18, lr: 2.92e-04 2022-05-28 01:31:44,477 INFO [train.py:842] (0/4) Epoch 19, batch 1450, loss[loss=0.1715, simple_loss=0.2662, pruned_loss=0.03836, over 7326.00 frames.], tot_loss[loss=0.1896, simple_loss=0.2749, pruned_loss=0.05211, over 1417687.87 frames.], batch size: 21, lr: 2.92e-04 2022-05-28 01:32:22,418 INFO [train.py:842] (0/4) Epoch 19, batch 1500, loss[loss=0.1738, simple_loss=0.2623, pruned_loss=0.04267, over 7252.00 frames.], tot_loss[loss=0.1888, simple_loss=0.2747, pruned_loss=0.05143, over 1421405.23 frames.], batch size: 19, lr: 2.92e-04 2022-05-28 01:33:00,824 INFO [train.py:842] (0/4) Epoch 19, batch 1550, loss[loss=0.179, simple_loss=0.2722, pruned_loss=0.04288, over 7404.00 frames.], tot_loss[loss=0.1891, simple_loss=0.2751, pruned_loss=0.05159, over 1424503.19 frames.], batch size: 21, lr: 2.92e-04 2022-05-28 01:33:38,788 INFO [train.py:842] (0/4) Epoch 19, batch 1600, loss[loss=0.2014, simple_loss=0.2901, pruned_loss=0.05633, over 7197.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2739, pruned_loss=0.05093, over 1423754.04 frames.], batch size: 22, lr: 2.92e-04 2022-05-28 01:34:17,151 INFO [train.py:842] (0/4) Epoch 19, batch 1650, loss[loss=0.1694, simple_loss=0.2538, pruned_loss=0.04246, over 7172.00 frames.], tot_loss[loss=0.188, simple_loss=0.2737, pruned_loss=0.05118, over 1421817.63 frames.], batch size: 18, lr: 2.92e-04 2022-05-28 01:34:55,112 INFO [train.py:842] (0/4) Epoch 19, batch 1700, loss[loss=0.176, simple_loss=0.2602, pruned_loss=0.04594, over 7160.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2744, pruned_loss=0.05165, over 1423012.40 frames.], batch size: 18, lr: 2.92e-04 2022-05-28 01:35:32,922 INFO [train.py:842] (0/4) Epoch 19, batch 1750, loss[loss=0.2441, simple_loss=0.3187, pruned_loss=0.08473, over 7150.00 frames.], tot_loss[loss=0.1892, simple_loss=0.275, pruned_loss=0.05169, over 
1416792.63 frames.], batch size: 20, lr: 2.92e-04 2022-05-28 01:36:10,613 INFO [train.py:842] (0/4) Epoch 19, batch 1800, loss[loss=0.1899, simple_loss=0.2844, pruned_loss=0.0477, over 7248.00 frames.], tot_loss[loss=0.19, simple_loss=0.2761, pruned_loss=0.05189, over 1417555.49 frames.], batch size: 19, lr: 2.92e-04 2022-05-28 01:36:48,920 INFO [train.py:842] (0/4) Epoch 19, batch 1850, loss[loss=0.2144, simple_loss=0.3022, pruned_loss=0.06335, over 7268.00 frames.], tot_loss[loss=0.1893, simple_loss=0.2755, pruned_loss=0.05157, over 1422702.06 frames.], batch size: 24, lr: 2.92e-04 2022-05-28 01:37:26,735 INFO [train.py:842] (0/4) Epoch 19, batch 1900, loss[loss=0.1852, simple_loss=0.279, pruned_loss=0.04568, over 7015.00 frames.], tot_loss[loss=0.1892, simple_loss=0.275, pruned_loss=0.05167, over 1419247.19 frames.], batch size: 28, lr: 2.92e-04 2022-05-28 01:38:04,995 INFO [train.py:842] (0/4) Epoch 19, batch 1950, loss[loss=0.1551, simple_loss=0.2367, pruned_loss=0.03673, over 6970.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2742, pruned_loss=0.05153, over 1419715.61 frames.], batch size: 16, lr: 2.91e-04 2022-05-28 01:38:43,025 INFO [train.py:842] (0/4) Epoch 19, batch 2000, loss[loss=0.1914, simple_loss=0.2817, pruned_loss=0.05056, over 7158.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2741, pruned_loss=0.05188, over 1423125.91 frames.], batch size: 20, lr: 2.91e-04 2022-05-28 01:39:21,315 INFO [train.py:842] (0/4) Epoch 19, batch 2050, loss[loss=0.2275, simple_loss=0.3063, pruned_loss=0.07433, over 7290.00 frames.], tot_loss[loss=0.1893, simple_loss=0.2746, pruned_loss=0.05202, over 1423096.35 frames.], batch size: 25, lr: 2.91e-04 2022-05-28 01:39:59,174 INFO [train.py:842] (0/4) Epoch 19, batch 2100, loss[loss=0.2, simple_loss=0.2862, pruned_loss=0.05694, over 7154.00 frames.], tot_loss[loss=0.1896, simple_loss=0.2752, pruned_loss=0.05205, over 1423615.18 frames.], batch size: 19, lr: 2.91e-04 2022-05-28 01:40:37,575 INFO [train.py:842] (0/4) Epoch 19, batch 2150, loss[loss=0.2186, simple_loss=0.293, pruned_loss=0.07213, over 7227.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2743, pruned_loss=0.05175, over 1420905.45 frames.], batch size: 21, lr: 2.91e-04 2022-05-28 01:41:15,623 INFO [train.py:842] (0/4) Epoch 19, batch 2200, loss[loss=0.205, simple_loss=0.2854, pruned_loss=0.06232, over 7135.00 frames.], tot_loss[loss=0.1881, simple_loss=0.2735, pruned_loss=0.05132, over 1424925.76 frames.], batch size: 21, lr: 2.91e-04 2022-05-28 01:41:53,805 INFO [train.py:842] (0/4) Epoch 19, batch 2250, loss[loss=0.1847, simple_loss=0.2759, pruned_loss=0.04677, over 6482.00 frames.], tot_loss[loss=0.1903, simple_loss=0.2755, pruned_loss=0.05249, over 1423120.80 frames.], batch size: 38, lr: 2.91e-04 2022-05-28 01:42:31,838 INFO [train.py:842] (0/4) Epoch 19, batch 2300, loss[loss=0.2519, simple_loss=0.3292, pruned_loss=0.08724, over 7369.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2754, pruned_loss=0.05247, over 1425504.79 frames.], batch size: 23, lr: 2.91e-04 2022-05-28 01:43:09,941 INFO [train.py:842] (0/4) Epoch 19, batch 2350, loss[loss=0.1562, simple_loss=0.2399, pruned_loss=0.03625, over 7306.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2756, pruned_loss=0.052, over 1422145.95 frames.], batch size: 17, lr: 2.91e-04 2022-05-28 01:43:57,327 INFO [train.py:842] (0/4) Epoch 19, batch 2400, loss[loss=0.1503, simple_loss=0.2419, pruned_loss=0.02938, over 7150.00 frames.], tot_loss[loss=0.1892, simple_loss=0.2751, pruned_loss=0.05171, over 1419422.21 frames.], batch 
size: 20, lr: 2.91e-04 2022-05-28 01:44:35,686 INFO [train.py:842] (0/4) Epoch 19, batch 2450, loss[loss=0.2322, simple_loss=0.3121, pruned_loss=0.07618, over 7139.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2756, pruned_loss=0.0523, over 1421928.16 frames.], batch size: 20, lr: 2.91e-04 2022-05-28 01:45:13,667 INFO [train.py:842] (0/4) Epoch 19, batch 2500, loss[loss=0.2069, simple_loss=0.2927, pruned_loss=0.06052, over 7144.00 frames.], tot_loss[loss=0.1905, simple_loss=0.2755, pruned_loss=0.05274, over 1420824.88 frames.], batch size: 26, lr: 2.91e-04 2022-05-28 01:45:47,315 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-168000.pt 2022-05-28 01:45:54,636 INFO [train.py:842] (0/4) Epoch 19, batch 2550, loss[loss=0.1788, simple_loss=0.2645, pruned_loss=0.0465, over 7298.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2749, pruned_loss=0.05239, over 1420282.78 frames.], batch size: 24, lr: 2.91e-04 2022-05-28 01:46:32,537 INFO [train.py:842] (0/4) Epoch 19, batch 2600, loss[loss=0.1645, simple_loss=0.2456, pruned_loss=0.04165, over 6996.00 frames.], tot_loss[loss=0.1884, simple_loss=0.2737, pruned_loss=0.05149, over 1423858.98 frames.], batch size: 16, lr: 2.91e-04 2022-05-28 01:47:10,851 INFO [train.py:842] (0/4) Epoch 19, batch 2650, loss[loss=0.1946, simple_loss=0.2899, pruned_loss=0.04964, over 7288.00 frames.], tot_loss[loss=0.1891, simple_loss=0.2746, pruned_loss=0.05175, over 1425782.84 frames.], batch size: 24, lr: 2.91e-04 2022-05-28 01:47:49,146 INFO [train.py:842] (0/4) Epoch 19, batch 2700, loss[loss=0.1905, simple_loss=0.2798, pruned_loss=0.05062, over 7302.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2743, pruned_loss=0.05148, over 1429439.17 frames.], batch size: 25, lr: 2.91e-04 2022-05-28 01:48:27,303 INFO [train.py:842] (0/4) Epoch 19, batch 2750, loss[loss=0.2091, simple_loss=0.2934, pruned_loss=0.06241, over 7420.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2771, pruned_loss=0.05277, over 1428880.24 frames.], batch size: 21, lr: 2.91e-04 2022-05-28 01:49:05,388 INFO [train.py:842] (0/4) Epoch 19, batch 2800, loss[loss=0.1523, simple_loss=0.2407, pruned_loss=0.03194, over 7061.00 frames.], tot_loss[loss=0.1908, simple_loss=0.2764, pruned_loss=0.05261, over 1429892.55 frames.], batch size: 18, lr: 2.91e-04 2022-05-28 01:49:43,506 INFO [train.py:842] (0/4) Epoch 19, batch 2850, loss[loss=0.1809, simple_loss=0.2712, pruned_loss=0.04525, over 7163.00 frames.], tot_loss[loss=0.1895, simple_loss=0.2753, pruned_loss=0.05189, over 1427049.27 frames.], batch size: 19, lr: 2.91e-04 2022-05-28 01:50:21,401 INFO [train.py:842] (0/4) Epoch 19, batch 2900, loss[loss=0.1993, simple_loss=0.2844, pruned_loss=0.0571, over 7182.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2743, pruned_loss=0.05149, over 1424176.84 frames.], batch size: 26, lr: 2.91e-04 2022-05-28 01:50:59,788 INFO [train.py:842] (0/4) Epoch 19, batch 2950, loss[loss=0.1724, simple_loss=0.2506, pruned_loss=0.04705, over 7279.00 frames.], tot_loss[loss=0.1877, simple_loss=0.2735, pruned_loss=0.05093, over 1430066.55 frames.], batch size: 17, lr: 2.91e-04 2022-05-28 01:51:37,809 INFO [train.py:842] (0/4) Epoch 19, batch 3000, loss[loss=0.2369, simple_loss=0.3076, pruned_loss=0.08312, over 5004.00 frames.], tot_loss[loss=0.1874, simple_loss=0.2732, pruned_loss=0.05084, over 1430220.87 frames.], batch size: 52, lr: 2.91e-04 2022-05-28 01:51:37,810 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 01:51:46,796 INFO [train.py:871] (0/4) Epoch 
19, validation: loss=0.1654, simple_loss=0.2649, pruned_loss=0.03297, over 868885.00 frames. 2022-05-28 01:52:25,070 INFO [train.py:842] (0/4) Epoch 19, batch 3050, loss[loss=0.2228, simple_loss=0.3189, pruned_loss=0.06339, over 7211.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2746, pruned_loss=0.05158, over 1431332.33 frames.], batch size: 23, lr: 2.91e-04 2022-05-28 01:53:03,099 INFO [train.py:842] (0/4) Epoch 19, batch 3100, loss[loss=0.1982, simple_loss=0.2743, pruned_loss=0.06105, over 6461.00 frames.], tot_loss[loss=0.1892, simple_loss=0.2749, pruned_loss=0.05179, over 1432241.41 frames.], batch size: 38, lr: 2.90e-04 2022-05-28 01:53:41,175 INFO [train.py:842] (0/4) Epoch 19, batch 3150, loss[loss=0.1707, simple_loss=0.2493, pruned_loss=0.04611, over 7299.00 frames.], tot_loss[loss=0.1891, simple_loss=0.2749, pruned_loss=0.05163, over 1428873.95 frames.], batch size: 18, lr: 2.90e-04 2022-05-28 01:54:19,120 INFO [train.py:842] (0/4) Epoch 19, batch 3200, loss[loss=0.1695, simple_loss=0.2628, pruned_loss=0.03813, over 7161.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2752, pruned_loss=0.05224, over 1427605.44 frames.], batch size: 19, lr: 2.90e-04 2022-05-28 01:54:57,215 INFO [train.py:842] (0/4) Epoch 19, batch 3250, loss[loss=0.1743, simple_loss=0.2642, pruned_loss=0.04219, over 7355.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2755, pruned_loss=0.05198, over 1424531.38 frames.], batch size: 19, lr: 2.90e-04 2022-05-28 01:55:35,144 INFO [train.py:842] (0/4) Epoch 19, batch 3300, loss[loss=0.1858, simple_loss=0.2796, pruned_loss=0.04595, over 6491.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2759, pruned_loss=0.05171, over 1425109.92 frames.], batch size: 38, lr: 2.90e-04 2022-05-28 01:56:13,605 INFO [train.py:842] (0/4) Epoch 19, batch 3350, loss[loss=0.2006, simple_loss=0.2866, pruned_loss=0.05729, over 7112.00 frames.], tot_loss[loss=0.189, simple_loss=0.2754, pruned_loss=0.0513, over 1424090.11 frames.], batch size: 21, lr: 2.90e-04 2022-05-28 01:56:51,531 INFO [train.py:842] (0/4) Epoch 19, batch 3400, loss[loss=0.2067, simple_loss=0.2785, pruned_loss=0.0674, over 7284.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2759, pruned_loss=0.05194, over 1424744.95 frames.], batch size: 18, lr: 2.90e-04 2022-05-28 01:57:29,942 INFO [train.py:842] (0/4) Epoch 19, batch 3450, loss[loss=0.1455, simple_loss=0.2381, pruned_loss=0.02646, over 7355.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2743, pruned_loss=0.05141, over 1421123.82 frames.], batch size: 19, lr: 2.90e-04 2022-05-28 01:58:07,906 INFO [train.py:842] (0/4) Epoch 19, batch 3500, loss[loss=0.1985, simple_loss=0.2716, pruned_loss=0.06268, over 7276.00 frames.], tot_loss[loss=0.1883, simple_loss=0.274, pruned_loss=0.05123, over 1424121.62 frames.], batch size: 18, lr: 2.90e-04 2022-05-28 01:58:46,247 INFO [train.py:842] (0/4) Epoch 19, batch 3550, loss[loss=0.2617, simple_loss=0.3168, pruned_loss=0.1033, over 7137.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2744, pruned_loss=0.05142, over 1423818.16 frames.], batch size: 17, lr: 2.90e-04 2022-05-28 01:59:24,110 INFO [train.py:842] (0/4) Epoch 19, batch 3600, loss[loss=0.1855, simple_loss=0.2707, pruned_loss=0.0501, over 7214.00 frames.], tot_loss[loss=0.1892, simple_loss=0.2749, pruned_loss=0.05182, over 1420699.51 frames.], batch size: 23, lr: 2.90e-04 2022-05-28 02:00:02,221 INFO [train.py:842] (0/4) Epoch 19, batch 3650, loss[loss=0.186, simple_loss=0.2724, pruned_loss=0.04985, over 7330.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2763, 
pruned_loss=0.05254, over 1413855.92 frames.], batch size: 20, lr: 2.90e-04 2022-05-28 02:00:40,207 INFO [train.py:842] (0/4) Epoch 19, batch 3700, loss[loss=0.1874, simple_loss=0.2656, pruned_loss=0.05462, over 7294.00 frames.], tot_loss[loss=0.1907, simple_loss=0.2765, pruned_loss=0.05246, over 1415981.20 frames.], batch size: 17, lr: 2.90e-04 2022-05-28 02:01:18,449 INFO [train.py:842] (0/4) Epoch 19, batch 3750, loss[loss=0.2285, simple_loss=0.304, pruned_loss=0.07653, over 7339.00 frames.], tot_loss[loss=0.1903, simple_loss=0.2759, pruned_loss=0.05233, over 1410949.84 frames.], batch size: 22, lr: 2.90e-04 2022-05-28 02:01:56,347 INFO [train.py:842] (0/4) Epoch 19, batch 3800, loss[loss=0.1344, simple_loss=0.2237, pruned_loss=0.02255, over 7009.00 frames.], tot_loss[loss=0.1893, simple_loss=0.2751, pruned_loss=0.05179, over 1416300.39 frames.], batch size: 16, lr: 2.90e-04 2022-05-28 02:02:34,510 INFO [train.py:842] (0/4) Epoch 19, batch 3850, loss[loss=0.2594, simple_loss=0.337, pruned_loss=0.09096, over 5234.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2753, pruned_loss=0.0517, over 1414131.57 frames.], batch size: 52, lr: 2.90e-04 2022-05-28 02:03:12,469 INFO [train.py:842] (0/4) Epoch 19, batch 3900, loss[loss=0.2419, simple_loss=0.323, pruned_loss=0.08036, over 7176.00 frames.], tot_loss[loss=0.1893, simple_loss=0.2748, pruned_loss=0.05192, over 1414721.93 frames.], batch size: 26, lr: 2.90e-04 2022-05-28 02:03:50,750 INFO [train.py:842] (0/4) Epoch 19, batch 3950, loss[loss=0.2022, simple_loss=0.2926, pruned_loss=0.05586, over 7234.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2748, pruned_loss=0.05148, over 1416667.65 frames.], batch size: 20, lr: 2.90e-04 2022-05-28 02:04:28,768 INFO [train.py:842] (0/4) Epoch 19, batch 4000, loss[loss=0.1854, simple_loss=0.2813, pruned_loss=0.04475, over 7121.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2728, pruned_loss=0.05028, over 1419290.48 frames.], batch size: 21, lr: 2.90e-04 2022-05-28 02:05:07,110 INFO [train.py:842] (0/4) Epoch 19, batch 4050, loss[loss=0.1612, simple_loss=0.2433, pruned_loss=0.03959, over 7360.00 frames.], tot_loss[loss=0.1858, simple_loss=0.272, pruned_loss=0.04981, over 1419616.98 frames.], batch size: 19, lr: 2.90e-04 2022-05-28 02:05:45,234 INFO [train.py:842] (0/4) Epoch 19, batch 4100, loss[loss=0.1861, simple_loss=0.2846, pruned_loss=0.04381, over 7141.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2717, pruned_loss=0.04986, over 1416980.72 frames.], batch size: 20, lr: 2.90e-04 2022-05-28 02:06:23,171 INFO [train.py:842] (0/4) Epoch 19, batch 4150, loss[loss=0.2158, simple_loss=0.3004, pruned_loss=0.06557, over 7189.00 frames.], tot_loss[loss=0.1878, simple_loss=0.2737, pruned_loss=0.05095, over 1415424.50 frames.], batch size: 22, lr: 2.90e-04 2022-05-28 02:07:01,156 INFO [train.py:842] (0/4) Epoch 19, batch 4200, loss[loss=0.2252, simple_loss=0.3051, pruned_loss=0.07268, over 7320.00 frames.], tot_loss[loss=0.1875, simple_loss=0.2738, pruned_loss=0.05063, over 1422534.93 frames.], batch size: 22, lr: 2.90e-04 2022-05-28 02:07:39,410 INFO [train.py:842] (0/4) Epoch 19, batch 4250, loss[loss=0.1715, simple_loss=0.2577, pruned_loss=0.04268, over 7330.00 frames.], tot_loss[loss=0.188, simple_loss=0.2738, pruned_loss=0.05108, over 1420985.09 frames.], batch size: 20, lr: 2.90e-04 2022-05-28 02:08:17,280 INFO [train.py:842] (0/4) Epoch 19, batch 4300, loss[loss=0.1707, simple_loss=0.2686, pruned_loss=0.03641, over 7210.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2755, pruned_loss=0.05242, 
over 1420090.08 frames.], batch size: 23, lr: 2.89e-04 2022-05-28 02:08:55,439 INFO [train.py:842] (0/4) Epoch 19, batch 4350, loss[loss=0.2124, simple_loss=0.2938, pruned_loss=0.06548, over 6678.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2755, pruned_loss=0.05194, over 1420128.51 frames.], batch size: 31, lr: 2.89e-04 2022-05-28 02:09:33,403 INFO [train.py:842] (0/4) Epoch 19, batch 4400, loss[loss=0.1763, simple_loss=0.263, pruned_loss=0.04475, over 7234.00 frames.], tot_loss[loss=0.1896, simple_loss=0.2755, pruned_loss=0.05187, over 1422299.25 frames.], batch size: 20, lr: 2.89e-04 2022-05-28 02:10:11,527 INFO [train.py:842] (0/4) Epoch 19, batch 4450, loss[loss=0.1732, simple_loss=0.2663, pruned_loss=0.03999, over 7116.00 frames.], tot_loss[loss=0.1901, simple_loss=0.2761, pruned_loss=0.05203, over 1419427.11 frames.], batch size: 21, lr: 2.89e-04 2022-05-28 02:10:49,580 INFO [train.py:842] (0/4) Epoch 19, batch 4500, loss[loss=0.2106, simple_loss=0.2905, pruned_loss=0.06536, over 7274.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2751, pruned_loss=0.05181, over 1419920.79 frames.], batch size: 24, lr: 2.89e-04 2022-05-28 02:11:28,099 INFO [train.py:842] (0/4) Epoch 19, batch 4550, loss[loss=0.2003, simple_loss=0.2936, pruned_loss=0.05352, over 7372.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2737, pruned_loss=0.05108, over 1424763.37 frames.], batch size: 23, lr: 2.89e-04 2022-05-28 02:12:06,111 INFO [train.py:842] (0/4) Epoch 19, batch 4600, loss[loss=0.2099, simple_loss=0.3057, pruned_loss=0.05703, over 7414.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2739, pruned_loss=0.0513, over 1423139.81 frames.], batch size: 21, lr: 2.89e-04 2022-05-28 02:12:44,397 INFO [train.py:842] (0/4) Epoch 19, batch 4650, loss[loss=0.1888, simple_loss=0.2662, pruned_loss=0.05573, over 7351.00 frames.], tot_loss[loss=0.1875, simple_loss=0.273, pruned_loss=0.05102, over 1420672.82 frames.], batch size: 19, lr: 2.89e-04 2022-05-28 02:13:22,316 INFO [train.py:842] (0/4) Epoch 19, batch 4700, loss[loss=0.1523, simple_loss=0.2357, pruned_loss=0.03444, over 7273.00 frames.], tot_loss[loss=0.1873, simple_loss=0.273, pruned_loss=0.05077, over 1422743.68 frames.], batch size: 17, lr: 2.89e-04 2022-05-28 02:14:00,470 INFO [train.py:842] (0/4) Epoch 19, batch 4750, loss[loss=0.1777, simple_loss=0.2607, pruned_loss=0.04739, over 7278.00 frames.], tot_loss[loss=0.1868, simple_loss=0.2728, pruned_loss=0.05043, over 1424804.79 frames.], batch size: 17, lr: 2.89e-04 2022-05-28 02:14:38,596 INFO [train.py:842] (0/4) Epoch 19, batch 4800, loss[loss=0.159, simple_loss=0.2495, pruned_loss=0.03424, over 7254.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2736, pruned_loss=0.0511, over 1419788.25 frames.], batch size: 19, lr: 2.89e-04 2022-05-28 02:15:16,772 INFO [train.py:842] (0/4) Epoch 19, batch 4850, loss[loss=0.1777, simple_loss=0.256, pruned_loss=0.04971, over 7258.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2738, pruned_loss=0.05128, over 1420046.83 frames.], batch size: 16, lr: 2.89e-04 2022-05-28 02:15:54,793 INFO [train.py:842] (0/4) Epoch 19, batch 4900, loss[loss=0.2258, simple_loss=0.2993, pruned_loss=0.07612, over 7219.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2729, pruned_loss=0.05076, over 1420838.39 frames.], batch size: 21, lr: 2.89e-04 2022-05-28 02:16:32,985 INFO [train.py:842] (0/4) Epoch 19, batch 4950, loss[loss=0.2451, simple_loss=0.3149, pruned_loss=0.0877, over 4960.00 frames.], tot_loss[loss=0.187, simple_loss=0.2732, pruned_loss=0.05041, over 1419424.99 frames.], 
batch size: 53, lr: 2.89e-04 2022-05-28 02:17:10,624 INFO [train.py:842] (0/4) Epoch 19, batch 5000, loss[loss=0.1537, simple_loss=0.2411, pruned_loss=0.03311, over 6653.00 frames.], tot_loss[loss=0.1896, simple_loss=0.2757, pruned_loss=0.05176, over 1422128.55 frames.], batch size: 31, lr: 2.89e-04 2022-05-28 02:17:48,864 INFO [train.py:842] (0/4) Epoch 19, batch 5050, loss[loss=0.1489, simple_loss=0.2301, pruned_loss=0.03384, over 6988.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2758, pruned_loss=0.05198, over 1422250.10 frames.], batch size: 16, lr: 2.89e-04 2022-05-28 02:18:26,629 INFO [train.py:842] (0/4) Epoch 19, batch 5100, loss[loss=0.2665, simple_loss=0.3289, pruned_loss=0.1021, over 4975.00 frames.], tot_loss[loss=0.1899, simple_loss=0.2758, pruned_loss=0.05203, over 1420235.74 frames.], batch size: 52, lr: 2.89e-04 2022-05-28 02:19:05,001 INFO [train.py:842] (0/4) Epoch 19, batch 5150, loss[loss=0.231, simple_loss=0.3275, pruned_loss=0.06724, over 7167.00 frames.], tot_loss[loss=0.1873, simple_loss=0.2734, pruned_loss=0.05055, over 1423395.92 frames.], batch size: 19, lr: 2.89e-04 2022-05-28 02:19:43,063 INFO [train.py:842] (0/4) Epoch 19, batch 5200, loss[loss=0.1905, simple_loss=0.2759, pruned_loss=0.05256, over 6633.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2736, pruned_loss=0.05135, over 1425189.06 frames.], batch size: 31, lr: 2.89e-04 2022-05-28 02:20:21,225 INFO [train.py:842] (0/4) Epoch 19, batch 5250, loss[loss=0.174, simple_loss=0.2465, pruned_loss=0.05071, over 7309.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2737, pruned_loss=0.05147, over 1419002.61 frames.], batch size: 17, lr: 2.89e-04 2022-05-28 02:21:08,632 INFO [train.py:842] (0/4) Epoch 19, batch 5300, loss[loss=0.156, simple_loss=0.2515, pruned_loss=0.03029, over 7267.00 frames.], tot_loss[loss=0.1877, simple_loss=0.273, pruned_loss=0.05119, over 1422533.59 frames.], batch size: 18, lr: 2.89e-04 2022-05-28 02:21:47,028 INFO [train.py:842] (0/4) Epoch 19, batch 5350, loss[loss=0.1583, simple_loss=0.2598, pruned_loss=0.02839, over 7212.00 frames.], tot_loss[loss=0.1868, simple_loss=0.272, pruned_loss=0.05084, over 1421280.77 frames.], batch size: 21, lr: 2.89e-04 2022-05-28 02:22:24,935 INFO [train.py:842] (0/4) Epoch 19, batch 5400, loss[loss=0.2034, simple_loss=0.2818, pruned_loss=0.06251, over 7425.00 frames.], tot_loss[loss=0.1878, simple_loss=0.2727, pruned_loss=0.05144, over 1419704.16 frames.], batch size: 20, lr: 2.89e-04 2022-05-28 02:23:12,401 INFO [train.py:842] (0/4) Epoch 19, batch 5450, loss[loss=0.1996, simple_loss=0.2789, pruned_loss=0.06017, over 6802.00 frames.], tot_loss[loss=0.1881, simple_loss=0.2732, pruned_loss=0.05144, over 1419583.81 frames.], batch size: 31, lr: 2.88e-04 2022-05-28 02:23:50,551 INFO [train.py:842] (0/4) Epoch 19, batch 5500, loss[loss=0.173, simple_loss=0.244, pruned_loss=0.051, over 7001.00 frames.], tot_loss[loss=0.1873, simple_loss=0.2724, pruned_loss=0.05107, over 1419803.57 frames.], batch size: 16, lr: 2.88e-04 2022-05-28 02:24:28,420 INFO [train.py:842] (0/4) Epoch 19, batch 5550, loss[loss=0.3689, simple_loss=0.4053, pruned_loss=0.1662, over 5002.00 frames.], tot_loss[loss=0.1895, simple_loss=0.2748, pruned_loss=0.05209, over 1418923.43 frames.], batch size: 52, lr: 2.88e-04 2022-05-28 02:25:06,223 INFO [train.py:842] (0/4) Epoch 19, batch 5600, loss[loss=0.1645, simple_loss=0.2472, pruned_loss=0.0409, over 7162.00 frames.], tot_loss[loss=0.1887, simple_loss=0.2747, pruned_loss=0.05134, over 1421152.57 frames.], batch size: 18, lr: 2.88e-04 
2022-05-28 02:25:44,620 INFO [train.py:842] (0/4) Epoch 19, batch 5650, loss[loss=0.192, simple_loss=0.2787, pruned_loss=0.05268, over 7332.00 frames.], tot_loss[loss=0.1887, simple_loss=0.2746, pruned_loss=0.05138, over 1423829.88 frames.], batch size: 22, lr: 2.88e-04 2022-05-28 02:26:31,921 INFO [train.py:842] (0/4) Epoch 19, batch 5700, loss[loss=0.1819, simple_loss=0.2714, pruned_loss=0.04627, over 7075.00 frames.], tot_loss[loss=0.1888, simple_loss=0.2745, pruned_loss=0.05159, over 1424154.23 frames.], batch size: 28, lr: 2.88e-04 2022-05-28 02:27:10,277 INFO [train.py:842] (0/4) Epoch 19, batch 5750, loss[loss=0.1729, simple_loss=0.2482, pruned_loss=0.04875, over 7142.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2745, pruned_loss=0.05137, over 1430196.97 frames.], batch size: 17, lr: 2.88e-04 2022-05-28 02:27:48,261 INFO [train.py:842] (0/4) Epoch 19, batch 5800, loss[loss=0.2162, simple_loss=0.3053, pruned_loss=0.06357, over 7325.00 frames.], tot_loss[loss=0.1896, simple_loss=0.2754, pruned_loss=0.05191, over 1428254.87 frames.], batch size: 20, lr: 2.88e-04 2022-05-28 02:28:26,598 INFO [train.py:842] (0/4) Epoch 19, batch 5850, loss[loss=0.2774, simple_loss=0.3489, pruned_loss=0.103, over 4951.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2748, pruned_loss=0.05195, over 1426368.10 frames.], batch size: 52, lr: 2.88e-04 2022-05-28 02:29:04,691 INFO [train.py:842] (0/4) Epoch 19, batch 5900, loss[loss=0.171, simple_loss=0.2587, pruned_loss=0.0416, over 7329.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2721, pruned_loss=0.05065, over 1422554.25 frames.], batch size: 20, lr: 2.88e-04 2022-05-28 02:29:43,068 INFO [train.py:842] (0/4) Epoch 19, batch 5950, loss[loss=0.1785, simple_loss=0.2768, pruned_loss=0.04012, over 7316.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2727, pruned_loss=0.05084, over 1426338.17 frames.], batch size: 21, lr: 2.88e-04 2022-05-28 02:30:20,898 INFO [train.py:842] (0/4) Epoch 19, batch 6000, loss[loss=0.2465, simple_loss=0.3209, pruned_loss=0.08608, over 6769.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2737, pruned_loss=0.05142, over 1424521.07 frames.], batch size: 31, lr: 2.88e-04 2022-05-28 02:30:20,899 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 02:30:29,955 INFO [train.py:871] (0/4) Epoch 19, validation: loss=0.1648, simple_loss=0.2646, pruned_loss=0.03256, over 868885.00 frames. 
2022-05-28 02:31:08,288 INFO [train.py:842] (0/4) Epoch 19, batch 6050, loss[loss=0.1985, simple_loss=0.2965, pruned_loss=0.05024, over 7414.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2731, pruned_loss=0.05064, over 1426440.85 frames.], batch size: 21, lr: 2.88e-04 2022-05-28 02:31:45,882 INFO [train.py:842] (0/4) Epoch 19, batch 6100, loss[loss=0.1896, simple_loss=0.2805, pruned_loss=0.04931, over 6699.00 frames.], tot_loss[loss=0.1884, simple_loss=0.2747, pruned_loss=0.0511, over 1424301.53 frames.], batch size: 31, lr: 2.88e-04 2022-05-28 02:32:24,066 INFO [train.py:842] (0/4) Epoch 19, batch 6150, loss[loss=0.1858, simple_loss=0.2761, pruned_loss=0.04769, over 7149.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2743, pruned_loss=0.0505, over 1428711.47 frames.], batch size: 20, lr: 2.88e-04 2022-05-28 02:33:02,028 INFO [train.py:842] (0/4) Epoch 19, batch 6200, loss[loss=0.1795, simple_loss=0.2601, pruned_loss=0.04948, over 7252.00 frames.], tot_loss[loss=0.1884, simple_loss=0.2746, pruned_loss=0.05105, over 1424987.43 frames.], batch size: 19, lr: 2.88e-04 2022-05-28 02:33:40,510 INFO [train.py:842] (0/4) Epoch 19, batch 6250, loss[loss=0.1493, simple_loss=0.2324, pruned_loss=0.03309, over 7405.00 frames.], tot_loss[loss=0.1884, simple_loss=0.2742, pruned_loss=0.05133, over 1429346.92 frames.], batch size: 18, lr: 2.88e-04 2022-05-28 02:34:18,346 INFO [train.py:842] (0/4) Epoch 19, batch 6300, loss[loss=0.1896, simple_loss=0.2825, pruned_loss=0.04836, over 7270.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2739, pruned_loss=0.05096, over 1424445.84 frames.], batch size: 25, lr: 2.88e-04 2022-05-28 02:34:56,845 INFO [train.py:842] (0/4) Epoch 19, batch 6350, loss[loss=0.1646, simple_loss=0.2453, pruned_loss=0.04192, over 7164.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2728, pruned_loss=0.05052, over 1427001.32 frames.], batch size: 18, lr: 2.88e-04 2022-05-28 02:35:34,599 INFO [train.py:842] (0/4) Epoch 19, batch 6400, loss[loss=0.1891, simple_loss=0.2882, pruned_loss=0.04502, over 7090.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2728, pruned_loss=0.05019, over 1426241.42 frames.], batch size: 28, lr: 2.88e-04 2022-05-28 02:36:12,797 INFO [train.py:842] (0/4) Epoch 19, batch 6450, loss[loss=0.1395, simple_loss=0.2307, pruned_loss=0.0241, over 7058.00 frames.], tot_loss[loss=0.1868, simple_loss=0.2728, pruned_loss=0.05036, over 1424416.43 frames.], batch size: 18, lr: 2.88e-04 2022-05-28 02:36:50,726 INFO [train.py:842] (0/4) Epoch 19, batch 6500, loss[loss=0.1561, simple_loss=0.2428, pruned_loss=0.03467, over 6484.00 frames.], tot_loss[loss=0.1868, simple_loss=0.2728, pruned_loss=0.05039, over 1424624.12 frames.], batch size: 38, lr: 2.88e-04 2022-05-28 02:37:28,650 INFO [train.py:842] (0/4) Epoch 19, batch 6550, loss[loss=0.1683, simple_loss=0.2565, pruned_loss=0.04002, over 7125.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2738, pruned_loss=0.05073, over 1422203.64 frames.], batch size: 21, lr: 2.88e-04 2022-05-28 02:38:06,625 INFO [train.py:842] (0/4) Epoch 19, batch 6600, loss[loss=0.1741, simple_loss=0.2641, pruned_loss=0.04201, over 7244.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2742, pruned_loss=0.05116, over 1424624.81 frames.], batch size: 20, lr: 2.88e-04 2022-05-28 02:38:44,826 INFO [train.py:842] (0/4) Epoch 19, batch 6650, loss[loss=0.1552, simple_loss=0.2453, pruned_loss=0.03253, over 7327.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2744, pruned_loss=0.05108, over 1418654.27 frames.], batch size: 20, lr: 2.87e-04 2022-05-28 
02:39:22,752 INFO [train.py:842] (0/4) Epoch 19, batch 6700, loss[loss=0.1981, simple_loss=0.2932, pruned_loss=0.05153, over 7340.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2738, pruned_loss=0.0507, over 1419866.64 frames.], batch size: 22, lr: 2.87e-04 2022-05-28 02:40:00,834 INFO [train.py:842] (0/4) Epoch 19, batch 6750, loss[loss=0.1506, simple_loss=0.2482, pruned_loss=0.02653, over 7113.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2733, pruned_loss=0.05051, over 1419614.99 frames.], batch size: 21, lr: 2.87e-04 2022-05-28 02:40:39,053 INFO [train.py:842] (0/4) Epoch 19, batch 6800, loss[loss=0.2306, simple_loss=0.3146, pruned_loss=0.0733, over 7279.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2734, pruned_loss=0.05019, over 1425157.11 frames.], batch size: 25, lr: 2.87e-04 2022-05-28 02:41:17,206 INFO [train.py:842] (0/4) Epoch 19, batch 6850, loss[loss=0.2367, simple_loss=0.3177, pruned_loss=0.07786, over 7184.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2734, pruned_loss=0.05, over 1425928.56 frames.], batch size: 23, lr: 2.87e-04 2022-05-28 02:41:55,496 INFO [train.py:842] (0/4) Epoch 19, batch 6900, loss[loss=0.2312, simple_loss=0.3253, pruned_loss=0.06858, over 7420.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2735, pruned_loss=0.05048, over 1427942.00 frames.], batch size: 21, lr: 2.87e-04 2022-05-28 02:42:33,704 INFO [train.py:842] (0/4) Epoch 19, batch 6950, loss[loss=0.207, simple_loss=0.2932, pruned_loss=0.06041, over 7139.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2757, pruned_loss=0.05183, over 1426125.37 frames.], batch size: 20, lr: 2.87e-04 2022-05-28 02:43:11,782 INFO [train.py:842] (0/4) Epoch 19, batch 7000, loss[loss=0.1823, simple_loss=0.259, pruned_loss=0.05278, over 7145.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2736, pruned_loss=0.05027, over 1424533.70 frames.], batch size: 17, lr: 2.87e-04 2022-05-28 02:43:50,061 INFO [train.py:842] (0/4) Epoch 19, batch 7050, loss[loss=0.1894, simple_loss=0.2832, pruned_loss=0.04785, over 6816.00 frames.], tot_loss[loss=0.1857, simple_loss=0.272, pruned_loss=0.04967, over 1424194.62 frames.], batch size: 31, lr: 2.87e-04 2022-05-28 02:44:27,961 INFO [train.py:842] (0/4) Epoch 19, batch 7100, loss[loss=0.1872, simple_loss=0.273, pruned_loss=0.05072, over 7207.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2724, pruned_loss=0.05007, over 1425731.27 frames.], batch size: 22, lr: 2.87e-04 2022-05-28 02:45:06,034 INFO [train.py:842] (0/4) Epoch 19, batch 7150, loss[loss=0.1533, simple_loss=0.2304, pruned_loss=0.03807, over 7267.00 frames.], tot_loss[loss=0.187, simple_loss=0.2732, pruned_loss=0.05039, over 1424700.88 frames.], batch size: 17, lr: 2.87e-04 2022-05-28 02:45:44,150 INFO [train.py:842] (0/4) Epoch 19, batch 7200, loss[loss=0.1701, simple_loss=0.2457, pruned_loss=0.04723, over 7270.00 frames.], tot_loss[loss=0.1868, simple_loss=0.272, pruned_loss=0.05082, over 1425590.67 frames.], batch size: 18, lr: 2.87e-04 2022-05-28 02:46:22,503 INFO [train.py:842] (0/4) Epoch 19, batch 7250, loss[loss=0.1914, simple_loss=0.2806, pruned_loss=0.05105, over 7195.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2715, pruned_loss=0.05043, over 1425815.18 frames.], batch size: 23, lr: 2.87e-04 2022-05-28 02:47:00,627 INFO [train.py:842] (0/4) Epoch 19, batch 7300, loss[loss=0.2418, simple_loss=0.3132, pruned_loss=0.0852, over 7329.00 frames.], tot_loss[loss=0.1858, simple_loss=0.2711, pruned_loss=0.05027, over 1424934.22 frames.], batch size: 20, lr: 2.87e-04 2022-05-28 02:47:38,832 INFO 
[train.py:842] (0/4) Epoch 19, batch 7350, loss[loss=0.1475, simple_loss=0.2326, pruned_loss=0.03117, over 7158.00 frames.], tot_loss[loss=0.1864, simple_loss=0.272, pruned_loss=0.05037, over 1424341.82 frames.], batch size: 17, lr: 2.87e-04 2022-05-28 02:48:16,551 INFO [train.py:842] (0/4) Epoch 19, batch 7400, loss[loss=0.1669, simple_loss=0.2541, pruned_loss=0.03986, over 7371.00 frames.], tot_loss[loss=0.1878, simple_loss=0.2735, pruned_loss=0.05106, over 1421744.25 frames.], batch size: 19, lr: 2.87e-04 2022-05-28 02:48:54,935 INFO [train.py:842] (0/4) Epoch 19, batch 7450, loss[loss=0.2083, simple_loss=0.2908, pruned_loss=0.06291, over 7144.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2746, pruned_loss=0.05157, over 1420776.13 frames.], batch size: 19, lr: 2.87e-04 2022-05-28 02:49:33,064 INFO [train.py:842] (0/4) Epoch 19, batch 7500, loss[loss=0.199, simple_loss=0.28, pruned_loss=0.05906, over 7297.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2731, pruned_loss=0.05057, over 1425208.64 frames.], batch size: 17, lr: 2.87e-04 2022-05-28 02:50:11,367 INFO [train.py:842] (0/4) Epoch 19, batch 7550, loss[loss=0.1757, simple_loss=0.2709, pruned_loss=0.04031, over 7409.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2742, pruned_loss=0.05142, over 1427950.00 frames.], batch size: 21, lr: 2.87e-04 2022-05-28 02:50:49,176 INFO [train.py:842] (0/4) Epoch 19, batch 7600, loss[loss=0.236, simple_loss=0.3119, pruned_loss=0.07999, over 7060.00 frames.], tot_loss[loss=0.1881, simple_loss=0.2739, pruned_loss=0.05109, over 1426131.57 frames.], batch size: 28, lr: 2.87e-04 2022-05-28 02:51:27,607 INFO [train.py:842] (0/4) Epoch 19, batch 7650, loss[loss=0.1867, simple_loss=0.273, pruned_loss=0.05015, over 7073.00 frames.], tot_loss[loss=0.1877, simple_loss=0.273, pruned_loss=0.05115, over 1426932.62 frames.], batch size: 28, lr: 2.87e-04 2022-05-28 02:52:05,418 INFO [train.py:842] (0/4) Epoch 19, batch 7700, loss[loss=0.2355, simple_loss=0.3213, pruned_loss=0.07482, over 7292.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2746, pruned_loss=0.05212, over 1423663.58 frames.], batch size: 24, lr: 2.87e-04 2022-05-28 02:52:43,755 INFO [train.py:842] (0/4) Epoch 19, batch 7750, loss[loss=0.1686, simple_loss=0.2514, pruned_loss=0.04292, over 7171.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2734, pruned_loss=0.05149, over 1423969.99 frames.], batch size: 18, lr: 2.87e-04 2022-05-28 02:53:21,935 INFO [train.py:842] (0/4) Epoch 19, batch 7800, loss[loss=0.1555, simple_loss=0.2449, pruned_loss=0.03306, over 7444.00 frames.], tot_loss[loss=0.1865, simple_loss=0.2722, pruned_loss=0.05044, over 1424536.38 frames.], batch size: 20, lr: 2.87e-04 2022-05-28 02:53:59,970 INFO [train.py:842] (0/4) Epoch 19, batch 7850, loss[loss=0.236, simple_loss=0.3244, pruned_loss=0.07382, over 7281.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2713, pruned_loss=0.04973, over 1417735.78 frames.], batch size: 25, lr: 2.86e-04 2022-05-28 02:54:37,946 INFO [train.py:842] (0/4) Epoch 19, batch 7900, loss[loss=0.1792, simple_loss=0.2685, pruned_loss=0.04493, over 7338.00 frames.], tot_loss[loss=0.1864, simple_loss=0.2721, pruned_loss=0.05034, over 1420139.66 frames.], batch size: 22, lr: 2.86e-04 2022-05-28 02:55:16,096 INFO [train.py:842] (0/4) Epoch 19, batch 7950, loss[loss=0.1578, simple_loss=0.2445, pruned_loss=0.03555, over 7365.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2728, pruned_loss=0.05052, over 1422304.13 frames.], batch size: 19, lr: 2.86e-04 2022-05-28 02:55:54,120 INFO [train.py:842] (0/4) Epoch 
19, batch 8000, loss[loss=0.2062, simple_loss=0.2787, pruned_loss=0.06689, over 7423.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2723, pruned_loss=0.0505, over 1422253.48 frames.], batch size: 18, lr: 2.86e-04 2022-05-28 02:56:32,332 INFO [train.py:842] (0/4) Epoch 19, batch 8050, loss[loss=0.1879, simple_loss=0.2691, pruned_loss=0.05334, over 7320.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2729, pruned_loss=0.05044, over 1426017.54 frames.], batch size: 21, lr: 2.86e-04 2022-05-28 02:57:10,365 INFO [train.py:842] (0/4) Epoch 19, batch 8100, loss[loss=0.1837, simple_loss=0.2728, pruned_loss=0.04726, over 7218.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2726, pruned_loss=0.05082, over 1426592.87 frames.], batch size: 21, lr: 2.86e-04 2022-05-28 02:57:48,666 INFO [train.py:842] (0/4) Epoch 19, batch 8150, loss[loss=0.157, simple_loss=0.249, pruned_loss=0.03254, over 7422.00 frames.], tot_loss[loss=0.1865, simple_loss=0.272, pruned_loss=0.05051, over 1422550.83 frames.], batch size: 20, lr: 2.86e-04 2022-05-28 02:58:26,738 INFO [train.py:842] (0/4) Epoch 19, batch 8200, loss[loss=0.1802, simple_loss=0.2571, pruned_loss=0.05164, over 7354.00 frames.], tot_loss[loss=0.187, simple_loss=0.2722, pruned_loss=0.05093, over 1424769.02 frames.], batch size: 19, lr: 2.86e-04 2022-05-28 02:59:05,080 INFO [train.py:842] (0/4) Epoch 19, batch 8250, loss[loss=0.1716, simple_loss=0.2561, pruned_loss=0.04358, over 7162.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2723, pruned_loss=0.05052, over 1427722.57 frames.], batch size: 18, lr: 2.86e-04 2022-05-28 02:59:42,814 INFO [train.py:842] (0/4) Epoch 19, batch 8300, loss[loss=0.172, simple_loss=0.2666, pruned_loss=0.03868, over 7078.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2727, pruned_loss=0.05031, over 1427670.84 frames.], batch size: 18, lr: 2.86e-04 2022-05-28 03:00:21,249 INFO [train.py:842] (0/4) Epoch 19, batch 8350, loss[loss=0.1763, simple_loss=0.2542, pruned_loss=0.04918, over 7002.00 frames.], tot_loss[loss=0.188, simple_loss=0.2736, pruned_loss=0.05118, over 1428219.89 frames.], batch size: 16, lr: 2.86e-04 2022-05-28 03:00:59,125 INFO [train.py:842] (0/4) Epoch 19, batch 8400, loss[loss=0.2539, simple_loss=0.3279, pruned_loss=0.08994, over 7324.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2749, pruned_loss=0.05219, over 1421819.51 frames.], batch size: 21, lr: 2.86e-04 2022-05-28 03:01:37,402 INFO [train.py:842] (0/4) Epoch 19, batch 8450, loss[loss=0.1501, simple_loss=0.2305, pruned_loss=0.03479, over 7268.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2738, pruned_loss=0.05158, over 1421200.55 frames.], batch size: 18, lr: 2.86e-04 2022-05-28 03:02:15,561 INFO [train.py:842] (0/4) Epoch 19, batch 8500, loss[loss=0.1727, simple_loss=0.2609, pruned_loss=0.04225, over 7432.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2714, pruned_loss=0.05045, over 1420837.57 frames.], batch size: 20, lr: 2.86e-04 2022-05-28 03:02:53,773 INFO [train.py:842] (0/4) Epoch 19, batch 8550, loss[loss=0.2092, simple_loss=0.2929, pruned_loss=0.06272, over 7282.00 frames.], tot_loss[loss=0.1875, simple_loss=0.2728, pruned_loss=0.05114, over 1423225.31 frames.], batch size: 24, lr: 2.86e-04 2022-05-28 03:03:31,637 INFO [train.py:842] (0/4) Epoch 19, batch 8600, loss[loss=0.2087, simple_loss=0.2867, pruned_loss=0.06539, over 5096.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2728, pruned_loss=0.05121, over 1420025.50 frames.], batch size: 52, lr: 2.86e-04 2022-05-28 03:04:09,682 INFO [train.py:842] (0/4) Epoch 19, batch 8650, 
loss[loss=0.1586, simple_loss=0.2432, pruned_loss=0.03705, over 7160.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2732, pruned_loss=0.05125, over 1419639.22 frames.], batch size: 18, lr: 2.86e-04 2022-05-28 03:04:47,375 INFO [train.py:842] (0/4) Epoch 19, batch 8700, loss[loss=0.1774, simple_loss=0.2637, pruned_loss=0.04555, over 7353.00 frames.], tot_loss[loss=0.187, simple_loss=0.2728, pruned_loss=0.0506, over 1416312.44 frames.], batch size: 19, lr: 2.86e-04 2022-05-28 03:05:25,583 INFO [train.py:842] (0/4) Epoch 19, batch 8750, loss[loss=0.149, simple_loss=0.2328, pruned_loss=0.03254, over 7255.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2717, pruned_loss=0.04953, over 1416841.64 frames.], batch size: 16, lr: 2.86e-04 2022-05-28 03:06:03,561 INFO [train.py:842] (0/4) Epoch 19, batch 8800, loss[loss=0.1732, simple_loss=0.2624, pruned_loss=0.04204, over 7277.00 frames.], tot_loss[loss=0.1856, simple_loss=0.2716, pruned_loss=0.04975, over 1416603.90 frames.], batch size: 18, lr: 2.86e-04 2022-05-28 03:06:41,767 INFO [train.py:842] (0/4) Epoch 19, batch 8850, loss[loss=0.1848, simple_loss=0.2796, pruned_loss=0.04494, over 7192.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2718, pruned_loss=0.04997, over 1416078.48 frames.], batch size: 23, lr: 2.86e-04 2022-05-28 03:07:19,678 INFO [train.py:842] (0/4) Epoch 19, batch 8900, loss[loss=0.1391, simple_loss=0.228, pruned_loss=0.02507, over 7253.00 frames.], tot_loss[loss=0.1858, simple_loss=0.2716, pruned_loss=0.05, over 1411318.02 frames.], batch size: 19, lr: 2.86e-04 2022-05-28 03:07:57,657 INFO [train.py:842] (0/4) Epoch 19, batch 8950, loss[loss=0.1526, simple_loss=0.2316, pruned_loss=0.03681, over 7290.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2723, pruned_loss=0.05091, over 1404058.58 frames.], batch size: 18, lr: 2.86e-04 2022-05-28 03:08:35,127 INFO [train.py:842] (0/4) Epoch 19, batch 9000, loss[loss=0.2043, simple_loss=0.286, pruned_loss=0.06129, over 7200.00 frames.], tot_loss[loss=0.188, simple_loss=0.273, pruned_loss=0.05148, over 1401894.19 frames.], batch size: 23, lr: 2.86e-04 2022-05-28 03:08:35,128 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 03:08:44,267 INFO [train.py:871] (0/4) Epoch 19, validation: loss=0.1647, simple_loss=0.2641, pruned_loss=0.03266, over 868885.00 frames. 
2022-05-28 03:09:22,098 INFO [train.py:842] (0/4) Epoch 19, batch 9050, loss[loss=0.2063, simple_loss=0.2775, pruned_loss=0.06752, over 4845.00 frames.], tot_loss[loss=0.1873, simple_loss=0.2719, pruned_loss=0.05131, over 1380651.52 frames.], batch size: 53, lr: 2.86e-04 2022-05-28 03:09:59,164 INFO [train.py:842] (0/4) Epoch 19, batch 9100, loss[loss=0.2314, simple_loss=0.2981, pruned_loss=0.08239, over 4958.00 frames.], tot_loss[loss=0.1914, simple_loss=0.2753, pruned_loss=0.05372, over 1333406.70 frames.], batch size: 52, lr: 2.85e-04 2022-05-28 03:10:36,165 INFO [train.py:842] (0/4) Epoch 19, batch 9150, loss[loss=0.2482, simple_loss=0.3235, pruned_loss=0.08638, over 5166.00 frames.], tot_loss[loss=0.1976, simple_loss=0.2803, pruned_loss=0.05743, over 1263307.52 frames.], batch size: 52, lr: 2.85e-04 2022-05-28 03:11:06,827 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-19.pt 2022-05-28 03:11:25,258 INFO [train.py:842] (0/4) Epoch 20, batch 0, loss[loss=0.163, simple_loss=0.248, pruned_loss=0.03898, over 7351.00 frames.], tot_loss[loss=0.163, simple_loss=0.248, pruned_loss=0.03898, over 7351.00 frames.], batch size: 19, lr: 2.78e-04 2022-05-28 03:12:03,122 INFO [train.py:842] (0/4) Epoch 20, batch 50, loss[loss=0.2385, simple_loss=0.3029, pruned_loss=0.08706, over 7273.00 frames.], tot_loss[loss=0.1909, simple_loss=0.2782, pruned_loss=0.05177, over 320590.21 frames.], batch size: 18, lr: 2.78e-04 2022-05-28 03:12:41,511 INFO [train.py:842] (0/4) Epoch 20, batch 100, loss[loss=0.203, simple_loss=0.2889, pruned_loss=0.0585, over 4950.00 frames.], tot_loss[loss=0.1896, simple_loss=0.276, pruned_loss=0.05162, over 565477.52 frames.], batch size: 52, lr: 2.78e-04 2022-05-28 03:13:19,278 INFO [train.py:842] (0/4) Epoch 20, batch 150, loss[loss=0.161, simple_loss=0.2552, pruned_loss=0.03335, over 7321.00 frames.], tot_loss[loss=0.1923, simple_loss=0.2788, pruned_loss=0.05287, over 756077.57 frames.], batch size: 21, lr: 2.78e-04 2022-05-28 03:13:57,452 INFO [train.py:842] (0/4) Epoch 20, batch 200, loss[loss=0.2331, simple_loss=0.3151, pruned_loss=0.07559, over 7315.00 frames.], tot_loss[loss=0.1913, simple_loss=0.2781, pruned_loss=0.05231, over 903569.86 frames.], batch size: 22, lr: 2.78e-04 2022-05-28 03:14:35,700 INFO [train.py:842] (0/4) Epoch 20, batch 250, loss[loss=0.193, simple_loss=0.2938, pruned_loss=0.04607, over 7348.00 frames.], tot_loss[loss=0.1889, simple_loss=0.2756, pruned_loss=0.05114, over 1022456.45 frames.], batch size: 22, lr: 2.78e-04 2022-05-28 03:15:13,679 INFO [train.py:842] (0/4) Epoch 20, batch 300, loss[loss=0.1994, simple_loss=0.2821, pruned_loss=0.0584, over 7202.00 frames.], tot_loss[loss=0.1888, simple_loss=0.2759, pruned_loss=0.05082, over 1111875.24 frames.], batch size: 23, lr: 2.78e-04 2022-05-28 03:15:51,617 INFO [train.py:842] (0/4) Epoch 20, batch 350, loss[loss=0.169, simple_loss=0.2663, pruned_loss=0.03587, over 7143.00 frames.], tot_loss[loss=0.1894, simple_loss=0.2762, pruned_loss=0.05124, over 1184776.39 frames.], batch size: 20, lr: 2.78e-04 2022-05-28 03:16:29,692 INFO [train.py:842] (0/4) Epoch 20, batch 400, loss[loss=0.1622, simple_loss=0.262, pruned_loss=0.03121, over 7143.00 frames.], tot_loss[loss=0.1898, simple_loss=0.2767, pruned_loss=0.05142, over 1237889.39 frames.], batch size: 20, lr: 2.78e-04 2022-05-28 03:17:07,416 INFO [train.py:842] (0/4) Epoch 20, batch 450, loss[loss=0.2116, simple_loss=0.3056, pruned_loss=0.05877, over 7388.00 frames.], tot_loss[loss=0.189, 
simple_loss=0.2764, pruned_loss=0.0508, over 1275330.47 frames.], batch size: 23, lr: 2.78e-04 2022-05-28 03:17:45,569 INFO [train.py:842] (0/4) Epoch 20, batch 500, loss[loss=0.1795, simple_loss=0.2678, pruned_loss=0.04558, over 7212.00 frames.], tot_loss[loss=0.1895, simple_loss=0.2767, pruned_loss=0.05116, over 1307916.64 frames.], batch size: 21, lr: 2.78e-04 2022-05-28 03:18:23,483 INFO [train.py:842] (0/4) Epoch 20, batch 550, loss[loss=0.2045, simple_loss=0.285, pruned_loss=0.06194, over 6696.00 frames.], tot_loss[loss=0.19, simple_loss=0.277, pruned_loss=0.05153, over 1334261.51 frames.], batch size: 31, lr: 2.78e-04 2022-05-28 03:19:02,113 INFO [train.py:842] (0/4) Epoch 20, batch 600, loss[loss=0.1667, simple_loss=0.2596, pruned_loss=0.03691, over 7157.00 frames.], tot_loss[loss=0.1888, simple_loss=0.2751, pruned_loss=0.05124, over 1356213.43 frames.], batch size: 18, lr: 2.78e-04 2022-05-28 03:19:40,163 INFO [train.py:842] (0/4) Epoch 20, batch 650, loss[loss=0.1547, simple_loss=0.2377, pruned_loss=0.03588, over 7163.00 frames.], tot_loss[loss=0.1882, simple_loss=0.2745, pruned_loss=0.05091, over 1369985.16 frames.], batch size: 18, lr: 2.78e-04 2022-05-28 03:20:18,364 INFO [train.py:842] (0/4) Epoch 20, batch 700, loss[loss=0.1631, simple_loss=0.2456, pruned_loss=0.04034, over 7231.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2744, pruned_loss=0.05071, over 1383492.34 frames.], batch size: 20, lr: 2.78e-04 2022-05-28 03:20:56,445 INFO [train.py:842] (0/4) Epoch 20, batch 750, loss[loss=0.2219, simple_loss=0.2997, pruned_loss=0.07203, over 7331.00 frames.], tot_loss[loss=0.187, simple_loss=0.2732, pruned_loss=0.05037, over 1393573.33 frames.], batch size: 25, lr: 2.78e-04 2022-05-28 03:21:34,811 INFO [train.py:842] (0/4) Epoch 20, batch 800, loss[loss=0.1615, simple_loss=0.2548, pruned_loss=0.03406, over 7398.00 frames.], tot_loss[loss=0.1856, simple_loss=0.2721, pruned_loss=0.0495, over 1402814.13 frames.], batch size: 18, lr: 2.78e-04 2022-05-28 03:22:12,805 INFO [train.py:842] (0/4) Epoch 20, batch 850, loss[loss=0.2542, simple_loss=0.3297, pruned_loss=0.08936, over 7122.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2733, pruned_loss=0.05024, over 1410509.04 frames.], batch size: 28, lr: 2.78e-04 2022-05-28 03:22:51,268 INFO [train.py:842] (0/4) Epoch 20, batch 900, loss[loss=0.1684, simple_loss=0.2606, pruned_loss=0.03817, over 7364.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2716, pruned_loss=0.04973, over 1415609.75 frames.], batch size: 19, lr: 2.78e-04 2022-05-28 03:23:29,306 INFO [train.py:842] (0/4) Epoch 20, batch 950, loss[loss=0.1922, simple_loss=0.2907, pruned_loss=0.0468, over 7232.00 frames.], tot_loss[loss=0.1863, simple_loss=0.2724, pruned_loss=0.05004, over 1419214.88 frames.], batch size: 20, lr: 2.78e-04 2022-05-28 03:24:07,516 INFO [train.py:842] (0/4) Epoch 20, batch 1000, loss[loss=0.2107, simple_loss=0.2883, pruned_loss=0.06653, over 7286.00 frames.], tot_loss[loss=0.1865, simple_loss=0.2729, pruned_loss=0.05006, over 1420268.80 frames.], batch size: 24, lr: 2.78e-04 2022-05-28 03:24:45,370 INFO [train.py:842] (0/4) Epoch 20, batch 1050, loss[loss=0.1849, simple_loss=0.2638, pruned_loss=0.053, over 7208.00 frames.], tot_loss[loss=0.1854, simple_loss=0.272, pruned_loss=0.04939, over 1419797.16 frames.], batch size: 22, lr: 2.78e-04 2022-05-28 03:25:23,602 INFO [train.py:842] (0/4) Epoch 20, batch 1100, loss[loss=0.2427, simple_loss=0.3095, pruned_loss=0.08798, over 7213.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2717, pruned_loss=0.0493, 
over 1415977.37 frames.], batch size: 22, lr: 2.78e-04 2022-05-28 03:26:01,382 INFO [train.py:842] (0/4) Epoch 20, batch 1150, loss[loss=0.2064, simple_loss=0.2944, pruned_loss=0.05923, over 7287.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2725, pruned_loss=0.04944, over 1420270.08 frames.], batch size: 24, lr: 2.78e-04 2022-05-28 03:26:39,925 INFO [train.py:842] (0/4) Epoch 20, batch 1200, loss[loss=0.1926, simple_loss=0.2826, pruned_loss=0.05133, over 7343.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2719, pruned_loss=0.04941, over 1425095.39 frames.], batch size: 22, lr: 2.78e-04 2022-05-28 03:27:17,883 INFO [train.py:842] (0/4) Epoch 20, batch 1250, loss[loss=0.1616, simple_loss=0.2404, pruned_loss=0.04138, over 7129.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2724, pruned_loss=0.05003, over 1425686.99 frames.], batch size: 17, lr: 2.78e-04 2022-05-28 03:27:56,122 INFO [train.py:842] (0/4) Epoch 20, batch 1300, loss[loss=0.1794, simple_loss=0.2693, pruned_loss=0.0448, over 7123.00 frames.], tot_loss[loss=0.186, simple_loss=0.2723, pruned_loss=0.04983, over 1427875.84 frames.], batch size: 21, lr: 2.77e-04 2022-05-28 03:28:34,013 INFO [train.py:842] (0/4) Epoch 20, batch 1350, loss[loss=0.1765, simple_loss=0.2682, pruned_loss=0.04241, over 7215.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2725, pruned_loss=0.04962, over 1429882.14 frames.], batch size: 22, lr: 2.77e-04 2022-05-28 03:28:36,593 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-176000.pt 2022-05-28 03:29:15,147 INFO [train.py:842] (0/4) Epoch 20, batch 1400, loss[loss=0.1856, simple_loss=0.2728, pruned_loss=0.04916, over 7182.00 frames.], tot_loss[loss=0.1866, simple_loss=0.273, pruned_loss=0.05006, over 1431080.69 frames.], batch size: 26, lr: 2.77e-04 2022-05-28 03:29:52,999 INFO [train.py:842] (0/4) Epoch 20, batch 1450, loss[loss=0.2166, simple_loss=0.3006, pruned_loss=0.06628, over 7146.00 frames.], tot_loss[loss=0.1865, simple_loss=0.2729, pruned_loss=0.05009, over 1429502.27 frames.], batch size: 26, lr: 2.77e-04 2022-05-28 03:30:31,242 INFO [train.py:842] (0/4) Epoch 20, batch 1500, loss[loss=0.2012, simple_loss=0.2905, pruned_loss=0.05597, over 7390.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2735, pruned_loss=0.0503, over 1427805.74 frames.], batch size: 23, lr: 2.77e-04 2022-05-28 03:31:09,380 INFO [train.py:842] (0/4) Epoch 20, batch 1550, loss[loss=0.1524, simple_loss=0.2425, pruned_loss=0.03115, over 7423.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2732, pruned_loss=0.05033, over 1429572.74 frames.], batch size: 20, lr: 2.77e-04 2022-05-28 03:31:47,531 INFO [train.py:842] (0/4) Epoch 20, batch 1600, loss[loss=0.2055, simple_loss=0.2926, pruned_loss=0.05924, over 7341.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2735, pruned_loss=0.05013, over 1424429.52 frames.], batch size: 22, lr: 2.77e-04 2022-05-28 03:32:25,378 INFO [train.py:842] (0/4) Epoch 20, batch 1650, loss[loss=0.23, simple_loss=0.3152, pruned_loss=0.07237, over 7175.00 frames.], tot_loss[loss=0.1873, simple_loss=0.2737, pruned_loss=0.05046, over 1421031.94 frames.], batch size: 23, lr: 2.77e-04 2022-05-28 03:33:03,575 INFO [train.py:842] (0/4) Epoch 20, batch 1700, loss[loss=0.1634, simple_loss=0.2505, pruned_loss=0.03818, over 7164.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2725, pruned_loss=0.04994, over 1419575.28 frames.], batch size: 19, lr: 2.77e-04 2022-05-28 03:33:41,616 INFO [train.py:842] (0/4) Epoch 20, batch 1750, loss[loss=0.2292, 
simple_loss=0.3154, pruned_loss=0.0715, over 7331.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2724, pruned_loss=0.04969, over 1425759.05 frames.], batch size: 22, lr: 2.77e-04 2022-05-28 03:34:19,912 INFO [train.py:842] (0/4) Epoch 20, batch 1800, loss[loss=0.19, simple_loss=0.2846, pruned_loss=0.04774, over 7318.00 frames.], tot_loss[loss=0.1861, simple_loss=0.2727, pruned_loss=0.04972, over 1425134.55 frames.], batch size: 25, lr: 2.77e-04 2022-05-28 03:34:57,997 INFO [train.py:842] (0/4) Epoch 20, batch 1850, loss[loss=0.1659, simple_loss=0.2557, pruned_loss=0.03802, over 7069.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2716, pruned_loss=0.0487, over 1428590.79 frames.], batch size: 18, lr: 2.77e-04 2022-05-28 03:35:36,122 INFO [train.py:842] (0/4) Epoch 20, batch 1900, loss[loss=0.2051, simple_loss=0.2859, pruned_loss=0.06219, over 7232.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2724, pruned_loss=0.04927, over 1429315.83 frames.], batch size: 20, lr: 2.77e-04 2022-05-28 03:36:14,208 INFO [train.py:842] (0/4) Epoch 20, batch 1950, loss[loss=0.177, simple_loss=0.2633, pruned_loss=0.04535, over 6430.00 frames.], tot_loss[loss=0.1853, simple_loss=0.272, pruned_loss=0.04929, over 1430367.53 frames.], batch size: 37, lr: 2.77e-04 2022-05-28 03:36:52,617 INFO [train.py:842] (0/4) Epoch 20, batch 2000, loss[loss=0.199, simple_loss=0.289, pruned_loss=0.05453, over 7233.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2723, pruned_loss=0.05006, over 1430966.22 frames.], batch size: 20, lr: 2.77e-04 2022-05-28 03:37:30,756 INFO [train.py:842] (0/4) Epoch 20, batch 2050, loss[loss=0.2169, simple_loss=0.2977, pruned_loss=0.06802, over 7227.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2718, pruned_loss=0.05032, over 1429627.73 frames.], batch size: 21, lr: 2.77e-04 2022-05-28 03:38:09,215 INFO [train.py:842] (0/4) Epoch 20, batch 2100, loss[loss=0.1669, simple_loss=0.2537, pruned_loss=0.04007, over 7428.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2703, pruned_loss=0.04933, over 1432608.94 frames.], batch size: 20, lr: 2.77e-04 2022-05-28 03:38:47,117 INFO [train.py:842] (0/4) Epoch 20, batch 2150, loss[loss=0.1849, simple_loss=0.2724, pruned_loss=0.04871, over 7203.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2705, pruned_loss=0.04924, over 1427279.55 frames.], batch size: 22, lr: 2.77e-04 2022-05-28 03:39:25,453 INFO [train.py:842] (0/4) Epoch 20, batch 2200, loss[loss=0.1832, simple_loss=0.258, pruned_loss=0.05418, over 6819.00 frames.], tot_loss[loss=0.1844, simple_loss=0.2705, pruned_loss=0.0492, over 1422481.18 frames.], batch size: 15, lr: 2.77e-04 2022-05-28 03:40:03,669 INFO [train.py:842] (0/4) Epoch 20, batch 2250, loss[loss=0.2071, simple_loss=0.2921, pruned_loss=0.06107, over 7146.00 frames.], tot_loss[loss=0.1839, simple_loss=0.27, pruned_loss=0.04892, over 1425663.23 frames.], batch size: 20, lr: 2.77e-04 2022-05-28 03:40:41,871 INFO [train.py:842] (0/4) Epoch 20, batch 2300, loss[loss=0.226, simple_loss=0.3001, pruned_loss=0.07599, over 7377.00 frames.], tot_loss[loss=0.1852, simple_loss=0.2709, pruned_loss=0.04973, over 1425102.27 frames.], batch size: 23, lr: 2.77e-04 2022-05-28 03:41:19,749 INFO [train.py:842] (0/4) Epoch 20, batch 2350, loss[loss=0.1733, simple_loss=0.2701, pruned_loss=0.03829, over 7319.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2708, pruned_loss=0.04913, over 1422757.11 frames.], batch size: 21, lr: 2.77e-04 2022-05-28 03:41:58,023 INFO [train.py:842] (0/4) Epoch 20, batch 2400, loss[loss=0.1826, simple_loss=0.2539, 
pruned_loss=0.05565, over 7426.00 frames.], tot_loss[loss=0.184, simple_loss=0.2702, pruned_loss=0.04885, over 1424622.68 frames.], batch size: 20, lr: 2.77e-04 2022-05-28 03:42:36,044 INFO [train.py:842] (0/4) Epoch 20, batch 2450, loss[loss=0.163, simple_loss=0.2594, pruned_loss=0.03332, over 6977.00 frames.], tot_loss[loss=0.1835, simple_loss=0.27, pruned_loss=0.04853, over 1427381.16 frames.], batch size: 28, lr: 2.77e-04 2022-05-28 03:43:14,460 INFO [train.py:842] (0/4) Epoch 20, batch 2500, loss[loss=0.1727, simple_loss=0.263, pruned_loss=0.04115, over 7158.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2688, pruned_loss=0.04785, over 1426573.59 frames.], batch size: 26, lr: 2.77e-04 2022-05-28 03:43:52,361 INFO [train.py:842] (0/4) Epoch 20, batch 2550, loss[loss=0.1948, simple_loss=0.2783, pruned_loss=0.05562, over 7329.00 frames.], tot_loss[loss=0.184, simple_loss=0.2701, pruned_loss=0.04889, over 1425265.41 frames.], batch size: 20, lr: 2.76e-04 2022-05-28 03:44:30,388 INFO [train.py:842] (0/4) Epoch 20, batch 2600, loss[loss=0.2017, simple_loss=0.291, pruned_loss=0.05619, over 6757.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2712, pruned_loss=0.04965, over 1426339.06 frames.], batch size: 31, lr: 2.76e-04 2022-05-28 03:45:08,530 INFO [train.py:842] (0/4) Epoch 20, batch 2650, loss[loss=0.1579, simple_loss=0.2408, pruned_loss=0.0375, over 6988.00 frames.], tot_loss[loss=0.185, simple_loss=0.2709, pruned_loss=0.04948, over 1426607.70 frames.], batch size: 16, lr: 2.76e-04 2022-05-28 03:45:46,850 INFO [train.py:842] (0/4) Epoch 20, batch 2700, loss[loss=0.1776, simple_loss=0.2752, pruned_loss=0.04002, over 7367.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2698, pruned_loss=0.04877, over 1427299.05 frames.], batch size: 23, lr: 2.76e-04 2022-05-28 03:46:24,690 INFO [train.py:842] (0/4) Epoch 20, batch 2750, loss[loss=0.2438, simple_loss=0.326, pruned_loss=0.08079, over 7195.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2697, pruned_loss=0.04836, over 1425547.50 frames.], batch size: 23, lr: 2.76e-04 2022-05-28 03:47:03,026 INFO [train.py:842] (0/4) Epoch 20, batch 2800, loss[loss=0.2138, simple_loss=0.2838, pruned_loss=0.07189, over 7165.00 frames.], tot_loss[loss=0.1836, simple_loss=0.27, pruned_loss=0.04855, over 1430100.93 frames.], batch size: 18, lr: 2.76e-04 2022-05-28 03:47:41,133 INFO [train.py:842] (0/4) Epoch 20, batch 2850, loss[loss=0.2054, simple_loss=0.2955, pruned_loss=0.05759, over 7413.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2704, pruned_loss=0.04907, over 1432037.24 frames.], batch size: 21, lr: 2.76e-04 2022-05-28 03:48:19,357 INFO [train.py:842] (0/4) Epoch 20, batch 2900, loss[loss=0.2231, simple_loss=0.3136, pruned_loss=0.06634, over 7194.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2704, pruned_loss=0.04935, over 1427376.72 frames.], batch size: 26, lr: 2.76e-04 2022-05-28 03:48:57,418 INFO [train.py:842] (0/4) Epoch 20, batch 2950, loss[loss=0.1686, simple_loss=0.2645, pruned_loss=0.03636, over 7227.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2705, pruned_loss=0.04882, over 1431593.35 frames.], batch size: 20, lr: 2.76e-04 2022-05-28 03:49:35,438 INFO [train.py:842] (0/4) Epoch 20, batch 3000, loss[loss=0.2014, simple_loss=0.2866, pruned_loss=0.05813, over 7389.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2714, pruned_loss=0.04915, over 1430519.40 frames.], batch size: 23, lr: 2.76e-04 2022-05-28 03:49:35,440 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 03:49:44,759 INFO [train.py:871] (0/4) Epoch 20, 
validation: loss=0.1666, simple_loss=0.2655, pruned_loss=0.0338, over 868885.00 frames. 2022-05-28 03:50:22,734 INFO [train.py:842] (0/4) Epoch 20, batch 3050, loss[loss=0.1894, simple_loss=0.2775, pruned_loss=0.05065, over 7167.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2719, pruned_loss=0.04936, over 1432046.54 frames.], batch size: 19, lr: 2.76e-04 2022-05-28 03:51:00,927 INFO [train.py:842] (0/4) Epoch 20, batch 3100, loss[loss=0.1697, simple_loss=0.2564, pruned_loss=0.04151, over 7124.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2715, pruned_loss=0.04913, over 1431584.82 frames.], batch size: 21, lr: 2.76e-04 2022-05-28 03:51:38,892 INFO [train.py:842] (0/4) Epoch 20, batch 3150, loss[loss=0.1794, simple_loss=0.2611, pruned_loss=0.04881, over 7273.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2717, pruned_loss=0.04905, over 1432399.96 frames.], batch size: 18, lr: 2.76e-04 2022-05-28 03:52:17,377 INFO [train.py:842] (0/4) Epoch 20, batch 3200, loss[loss=0.1733, simple_loss=0.2688, pruned_loss=0.03889, over 6711.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2709, pruned_loss=0.04933, over 1432127.10 frames.], batch size: 31, lr: 2.76e-04 2022-05-28 03:52:55,248 INFO [train.py:842] (0/4) Epoch 20, batch 3250, loss[loss=0.212, simple_loss=0.2799, pruned_loss=0.07208, over 7447.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2718, pruned_loss=0.04958, over 1428207.18 frames.], batch size: 19, lr: 2.76e-04 2022-05-28 03:53:33,556 INFO [train.py:842] (0/4) Epoch 20, batch 3300, loss[loss=0.1833, simple_loss=0.2539, pruned_loss=0.05635, over 7152.00 frames.], tot_loss[loss=0.1861, simple_loss=0.272, pruned_loss=0.05008, over 1426882.73 frames.], batch size: 17, lr: 2.76e-04 2022-05-28 03:54:11,566 INFO [train.py:842] (0/4) Epoch 20, batch 3350, loss[loss=0.174, simple_loss=0.2714, pruned_loss=0.03829, over 7145.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2715, pruned_loss=0.0499, over 1427748.51 frames.], batch size: 20, lr: 2.76e-04 2022-05-28 03:54:49,728 INFO [train.py:842] (0/4) Epoch 20, batch 3400, loss[loss=0.167, simple_loss=0.2504, pruned_loss=0.04179, over 7293.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2728, pruned_loss=0.05016, over 1427233.43 frames.], batch size: 17, lr: 2.76e-04 2022-05-28 03:55:27,650 INFO [train.py:842] (0/4) Epoch 20, batch 3450, loss[loss=0.1865, simple_loss=0.2736, pruned_loss=0.0497, over 7226.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2722, pruned_loss=0.04958, over 1425982.02 frames.], batch size: 20, lr: 2.76e-04 2022-05-28 03:56:05,947 INFO [train.py:842] (0/4) Epoch 20, batch 3500, loss[loss=0.1859, simple_loss=0.2583, pruned_loss=0.05679, over 7263.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2717, pruned_loss=0.04951, over 1424590.29 frames.], batch size: 19, lr: 2.76e-04 2022-05-28 03:56:43,794 INFO [train.py:842] (0/4) Epoch 20, batch 3550, loss[loss=0.1875, simple_loss=0.2827, pruned_loss=0.04618, over 7119.00 frames.], tot_loss[loss=0.1852, simple_loss=0.272, pruned_loss=0.04925, over 1427298.03 frames.], batch size: 21, lr: 2.76e-04 2022-05-28 03:57:22,217 INFO [train.py:842] (0/4) Epoch 20, batch 3600, loss[loss=0.2022, simple_loss=0.2822, pruned_loss=0.06111, over 7432.00 frames.], tot_loss[loss=0.1854, simple_loss=0.272, pruned_loss=0.04943, over 1431954.16 frames.], batch size: 20, lr: 2.76e-04 2022-05-28 03:58:00,175 INFO [train.py:842] (0/4) Epoch 20, batch 3650, loss[loss=0.2037, simple_loss=0.2885, pruned_loss=0.05942, over 7400.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2718, 
pruned_loss=0.04941, over 1432153.09 frames.], batch size: 21, lr: 2.76e-04 2022-05-28 03:58:38,384 INFO [train.py:842] (0/4) Epoch 20, batch 3700, loss[loss=0.1763, simple_loss=0.2653, pruned_loss=0.04365, over 7221.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2731, pruned_loss=0.05009, over 1433119.90 frames.], batch size: 21, lr: 2.76e-04 2022-05-28 03:59:16,259 INFO [train.py:842] (0/4) Epoch 20, batch 3750, loss[loss=0.2118, simple_loss=0.2949, pruned_loss=0.06434, over 7323.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2735, pruned_loss=0.05086, over 1428901.49 frames.], batch size: 21, lr: 2.76e-04 2022-05-28 03:59:54,688 INFO [train.py:842] (0/4) Epoch 20, batch 3800, loss[loss=0.1624, simple_loss=0.2348, pruned_loss=0.04501, over 7271.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2718, pruned_loss=0.05025, over 1428418.25 frames.], batch size: 17, lr: 2.76e-04 2022-05-28 04:00:32,643 INFO [train.py:842] (0/4) Epoch 20, batch 3850, loss[loss=0.1715, simple_loss=0.2653, pruned_loss=0.03884, over 7364.00 frames.], tot_loss[loss=0.186, simple_loss=0.2718, pruned_loss=0.05014, over 1424030.83 frames.], batch size: 19, lr: 2.75e-04 2022-05-28 04:01:10,453 INFO [train.py:842] (0/4) Epoch 20, batch 3900, loss[loss=0.2177, simple_loss=0.3033, pruned_loss=0.06601, over 7319.00 frames.], tot_loss[loss=0.1868, simple_loss=0.2729, pruned_loss=0.0504, over 1420747.48 frames.], batch size: 25, lr: 2.75e-04 2022-05-28 04:01:48,148 INFO [train.py:842] (0/4) Epoch 20, batch 3950, loss[loss=0.1674, simple_loss=0.2734, pruned_loss=0.03073, over 7412.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2729, pruned_loss=0.0502, over 1418255.46 frames.], batch size: 21, lr: 2.75e-04 2022-05-28 04:02:26,107 INFO [train.py:842] (0/4) Epoch 20, batch 4000, loss[loss=0.1883, simple_loss=0.2793, pruned_loss=0.04864, over 7214.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2737, pruned_loss=0.05072, over 1409960.07 frames.], batch size: 21, lr: 2.75e-04 2022-05-28 04:03:04,016 INFO [train.py:842] (0/4) Epoch 20, batch 4050, loss[loss=0.1741, simple_loss=0.2664, pruned_loss=0.04085, over 7211.00 frames.], tot_loss[loss=0.1878, simple_loss=0.2739, pruned_loss=0.05082, over 1411147.42 frames.], batch size: 21, lr: 2.75e-04 2022-05-28 04:03:42,435 INFO [train.py:842] (0/4) Epoch 20, batch 4100, loss[loss=0.1971, simple_loss=0.2811, pruned_loss=0.05649, over 7153.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2731, pruned_loss=0.05064, over 1410472.78 frames.], batch size: 26, lr: 2.75e-04 2022-05-28 04:04:20,454 INFO [train.py:842] (0/4) Epoch 20, batch 4150, loss[loss=0.2443, simple_loss=0.3373, pruned_loss=0.07566, over 7323.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2718, pruned_loss=0.05005, over 1414821.47 frames.], batch size: 20, lr: 2.75e-04 2022-05-28 04:04:58,988 INFO [train.py:842] (0/4) Epoch 20, batch 4200, loss[loss=0.1932, simple_loss=0.2835, pruned_loss=0.05142, over 7355.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2705, pruned_loss=0.04952, over 1417828.33 frames.], batch size: 19, lr: 2.75e-04 2022-05-28 04:05:36,676 INFO [train.py:842] (0/4) Epoch 20, batch 4250, loss[loss=0.1942, simple_loss=0.294, pruned_loss=0.04713, over 7141.00 frames.], tot_loss[loss=0.1881, simple_loss=0.2737, pruned_loss=0.05129, over 1415038.47 frames.], batch size: 20, lr: 2.75e-04 2022-05-28 04:06:14,970 INFO [train.py:842] (0/4) Epoch 20, batch 4300, loss[loss=0.1962, simple_loss=0.2834, pruned_loss=0.05449, over 7136.00 frames.], tot_loss[loss=0.1881, simple_loss=0.2734, pruned_loss=0.05135, 
over 1414468.64 frames.], batch size: 26, lr: 2.75e-04 2022-05-28 04:06:53,167 INFO [train.py:842] (0/4) Epoch 20, batch 4350, loss[loss=0.1836, simple_loss=0.2657, pruned_loss=0.05078, over 7402.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2725, pruned_loss=0.05094, over 1417080.07 frames.], batch size: 18, lr: 2.75e-04 2022-05-28 04:07:31,384 INFO [train.py:842] (0/4) Epoch 20, batch 4400, loss[loss=0.1688, simple_loss=0.2716, pruned_loss=0.03298, over 7306.00 frames.], tot_loss[loss=0.187, simple_loss=0.2726, pruned_loss=0.05071, over 1420639.68 frames.], batch size: 25, lr: 2.75e-04 2022-05-28 04:08:09,369 INFO [train.py:842] (0/4) Epoch 20, batch 4450, loss[loss=0.1822, simple_loss=0.2754, pruned_loss=0.04448, over 7409.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2712, pruned_loss=0.0499, over 1415988.42 frames.], batch size: 21, lr: 2.75e-04 2022-05-28 04:08:47,640 INFO [train.py:842] (0/4) Epoch 20, batch 4500, loss[loss=0.1857, simple_loss=0.2775, pruned_loss=0.04701, over 7242.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2706, pruned_loss=0.04933, over 1420909.38 frames.], batch size: 26, lr: 2.75e-04 2022-05-28 04:09:25,672 INFO [train.py:842] (0/4) Epoch 20, batch 4550, loss[loss=0.2394, simple_loss=0.3001, pruned_loss=0.08937, over 7353.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2716, pruned_loss=0.04958, over 1426448.66 frames.], batch size: 19, lr: 2.75e-04 2022-05-28 04:10:04,095 INFO [train.py:842] (0/4) Epoch 20, batch 4600, loss[loss=0.1994, simple_loss=0.2826, pruned_loss=0.05811, over 7326.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2713, pruned_loss=0.05028, over 1423862.39 frames.], batch size: 20, lr: 2.75e-04 2022-05-28 04:10:42,085 INFO [train.py:842] (0/4) Epoch 20, batch 4650, loss[loss=0.2309, simple_loss=0.3056, pruned_loss=0.07806, over 7144.00 frames.], tot_loss[loss=0.1858, simple_loss=0.2711, pruned_loss=0.05028, over 1426342.73 frames.], batch size: 20, lr: 2.75e-04 2022-05-28 04:11:20,089 INFO [train.py:842] (0/4) Epoch 20, batch 4700, loss[loss=0.2308, simple_loss=0.3092, pruned_loss=0.07618, over 7210.00 frames.], tot_loss[loss=0.1861, simple_loss=0.2716, pruned_loss=0.05029, over 1422634.23 frames.], batch size: 21, lr: 2.75e-04 2022-05-28 04:11:57,874 INFO [train.py:842] (0/4) Epoch 20, batch 4750, loss[loss=0.2447, simple_loss=0.329, pruned_loss=0.08016, over 6437.00 frames.], tot_loss[loss=0.1863, simple_loss=0.2721, pruned_loss=0.05027, over 1421096.03 frames.], batch size: 38, lr: 2.75e-04 2022-05-28 04:12:36,328 INFO [train.py:842] (0/4) Epoch 20, batch 4800, loss[loss=0.2013, simple_loss=0.2836, pruned_loss=0.05952, over 5249.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2721, pruned_loss=0.05063, over 1422232.77 frames.], batch size: 52, lr: 2.75e-04 2022-05-28 04:13:14,128 INFO [train.py:842] (0/4) Epoch 20, batch 4850, loss[loss=0.2053, simple_loss=0.2881, pruned_loss=0.06121, over 7143.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2715, pruned_loss=0.04996, over 1420072.91 frames.], batch size: 20, lr: 2.75e-04 2022-05-28 04:13:52,314 INFO [train.py:842] (0/4) Epoch 20, batch 4900, loss[loss=0.2194, simple_loss=0.2956, pruned_loss=0.07161, over 5039.00 frames.], tot_loss[loss=0.1863, simple_loss=0.272, pruned_loss=0.05034, over 1421535.37 frames.], batch size: 52, lr: 2.75e-04 2022-05-28 04:14:30,423 INFO [train.py:842] (0/4) Epoch 20, batch 4950, loss[loss=0.2082, simple_loss=0.3082, pruned_loss=0.05409, over 7151.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2724, pruned_loss=0.05073, over 1423734.71 
frames.], batch size: 20, lr: 2.75e-04 2022-05-28 04:15:18,008 INFO [train.py:842] (0/4) Epoch 20, batch 5000, loss[loss=0.1828, simple_loss=0.2774, pruned_loss=0.04414, over 7133.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2732, pruned_loss=0.05052, over 1428488.76 frames.], batch size: 26, lr: 2.75e-04 2022-05-28 04:15:55,718 INFO [train.py:842] (0/4) Epoch 20, batch 5050, loss[loss=0.1736, simple_loss=0.2578, pruned_loss=0.0447, over 6836.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2725, pruned_loss=0.04997, over 1419880.53 frames.], batch size: 15, lr: 2.75e-04 2022-05-28 04:16:33,965 INFO [train.py:842] (0/4) Epoch 20, batch 5100, loss[loss=0.1804, simple_loss=0.2682, pruned_loss=0.04636, over 7356.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2725, pruned_loss=0.04972, over 1424366.02 frames.], batch size: 19, lr: 2.75e-04 2022-05-28 04:17:11,879 INFO [train.py:842] (0/4) Epoch 20, batch 5150, loss[loss=0.152, simple_loss=0.2343, pruned_loss=0.03486, over 7278.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2721, pruned_loss=0.04962, over 1425801.97 frames.], batch size: 17, lr: 2.74e-04 2022-05-28 04:17:50,060 INFO [train.py:842] (0/4) Epoch 20, batch 5200, loss[loss=0.1848, simple_loss=0.2772, pruned_loss=0.04621, over 7228.00 frames.], tot_loss[loss=0.1861, simple_loss=0.2729, pruned_loss=0.04965, over 1427931.85 frames.], batch size: 20, lr: 2.74e-04 2022-05-28 04:18:27,915 INFO [train.py:842] (0/4) Epoch 20, batch 5250, loss[loss=0.22, simple_loss=0.2991, pruned_loss=0.07043, over 7333.00 frames.], tot_loss[loss=0.186, simple_loss=0.2725, pruned_loss=0.04974, over 1421956.79 frames.], batch size: 22, lr: 2.74e-04 2022-05-28 04:19:06,159 INFO [train.py:842] (0/4) Epoch 20, batch 5300, loss[loss=0.1978, simple_loss=0.2918, pruned_loss=0.05194, over 7393.00 frames.], tot_loss[loss=0.1867, simple_loss=0.273, pruned_loss=0.05016, over 1419137.50 frames.], batch size: 23, lr: 2.74e-04 2022-05-28 04:19:43,979 INFO [train.py:842] (0/4) Epoch 20, batch 5350, loss[loss=0.2415, simple_loss=0.3306, pruned_loss=0.07617, over 7297.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2725, pruned_loss=0.04967, over 1420952.81 frames.], batch size: 24, lr: 2.74e-04 2022-05-28 04:20:22,320 INFO [train.py:842] (0/4) Epoch 20, batch 5400, loss[loss=0.169, simple_loss=0.2603, pruned_loss=0.03885, over 7237.00 frames.], tot_loss[loss=0.1856, simple_loss=0.2723, pruned_loss=0.04943, over 1420758.45 frames.], batch size: 20, lr: 2.74e-04 2022-05-28 04:21:00,421 INFO [train.py:842] (0/4) Epoch 20, batch 5450, loss[loss=0.2008, simple_loss=0.293, pruned_loss=0.0543, over 7426.00 frames.], tot_loss[loss=0.187, simple_loss=0.2731, pruned_loss=0.05044, over 1420957.14 frames.], batch size: 20, lr: 2.74e-04 2022-05-28 04:21:38,508 INFO [train.py:842] (0/4) Epoch 20, batch 5500, loss[loss=0.1655, simple_loss=0.2581, pruned_loss=0.03639, over 7328.00 frames.], tot_loss[loss=0.1852, simple_loss=0.2717, pruned_loss=0.04936, over 1419448.23 frames.], batch size: 20, lr: 2.74e-04 2022-05-28 04:22:16,444 INFO [train.py:842] (0/4) Epoch 20, batch 5550, loss[loss=0.1896, simple_loss=0.2688, pruned_loss=0.05519, over 7419.00 frames.], tot_loss[loss=0.1864, simple_loss=0.2728, pruned_loss=0.05, over 1421814.73 frames.], batch size: 21, lr: 2.74e-04 2022-05-28 04:22:54,445 INFO [train.py:842] (0/4) Epoch 20, batch 5600, loss[loss=0.1977, simple_loss=0.2803, pruned_loss=0.0575, over 7290.00 frames.], tot_loss[loss=0.1868, simple_loss=0.2738, pruned_loss=0.04991, over 1423617.59 frames.], batch size: 25, lr: 
2.74e-04 2022-05-28 04:23:32,207 INFO [train.py:842] (0/4) Epoch 20, batch 5650, loss[loss=0.192, simple_loss=0.2856, pruned_loss=0.04915, over 7199.00 frames.], tot_loss[loss=0.1879, simple_loss=0.2746, pruned_loss=0.05063, over 1420816.23 frames.], batch size: 22, lr: 2.74e-04 2022-05-28 04:24:10,415 INFO [train.py:842] (0/4) Epoch 20, batch 5700, loss[loss=0.1938, simple_loss=0.2599, pruned_loss=0.06387, over 7248.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2732, pruned_loss=0.04961, over 1420041.41 frames.], batch size: 16, lr: 2.74e-04 2022-05-28 04:24:48,361 INFO [train.py:842] (0/4) Epoch 20, batch 5750, loss[loss=0.1872, simple_loss=0.2875, pruned_loss=0.04345, over 7216.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2735, pruned_loss=0.05041, over 1418545.31 frames.], batch size: 23, lr: 2.74e-04 2022-05-28 04:25:26,841 INFO [train.py:842] (0/4) Epoch 20, batch 5800, loss[loss=0.1988, simple_loss=0.292, pruned_loss=0.0528, over 7139.00 frames.], tot_loss[loss=0.186, simple_loss=0.2725, pruned_loss=0.04974, over 1422207.65 frames.], batch size: 20, lr: 2.74e-04 2022-05-28 04:26:04,818 INFO [train.py:842] (0/4) Epoch 20, batch 5850, loss[loss=0.2144, simple_loss=0.3003, pruned_loss=0.06422, over 6733.00 frames.], tot_loss[loss=0.1863, simple_loss=0.2733, pruned_loss=0.04967, over 1426668.65 frames.], batch size: 31, lr: 2.74e-04 2022-05-28 04:26:42,983 INFO [train.py:842] (0/4) Epoch 20, batch 5900, loss[loss=0.1961, simple_loss=0.277, pruned_loss=0.05764, over 7333.00 frames.], tot_loss[loss=0.186, simple_loss=0.2728, pruned_loss=0.04954, over 1420533.07 frames.], batch size: 20, lr: 2.74e-04 2022-05-28 04:27:20,895 INFO [train.py:842] (0/4) Epoch 20, batch 5950, loss[loss=0.1791, simple_loss=0.2604, pruned_loss=0.04891, over 7333.00 frames.], tot_loss[loss=0.187, simple_loss=0.2732, pruned_loss=0.0504, over 1418304.03 frames.], batch size: 22, lr: 2.74e-04 2022-05-28 04:27:59,345 INFO [train.py:842] (0/4) Epoch 20, batch 6000, loss[loss=0.2207, simple_loss=0.3025, pruned_loss=0.06942, over 7341.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2719, pruned_loss=0.04978, over 1422062.36 frames.], batch size: 22, lr: 2.74e-04 2022-05-28 04:27:59,346 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 04:28:08,359 INFO [train.py:871] (0/4) Epoch 20, validation: loss=0.1665, simple_loss=0.2662, pruned_loss=0.03341, over 868885.00 frames. 
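A quick arithmetic check, offered only as an illustration and not taken from train.py itself: the per-batch and validation summaries in this log are consistent with the reported loss being roughly half of simple_loss plus pruned_loss (for example, the validation entry just above gives 0.5 * 0.2662 + 0.03341 = 0.16651, matching the logged loss=0.1665). A minimal sketch with the values copied from that entry:

    # Illustrative only; values are copied from the Epoch 20, batch 6000
    # validation entry above. The 0.5 weighting is inferred from the logged
    # numbers, not quoted from the training script.
    simple_loss = 0.2662
    pruned_loss = 0.03341
    combined = 0.5 * simple_loss + pruned_loss
    print(f"combined = {combined:.4f}")  # prints 0.1665, matching the logged validation loss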
2022-05-28 04:28:46,199 INFO [train.py:842] (0/4) Epoch 20, batch 6050, loss[loss=0.1738, simple_loss=0.2501, pruned_loss=0.04876, over 7065.00 frames.], tot_loss[loss=0.1864, simple_loss=0.2727, pruned_loss=0.05007, over 1421438.55 frames.], batch size: 18, lr: 2.74e-04 2022-05-28 04:29:24,660 INFO [train.py:842] (0/4) Epoch 20, batch 6100, loss[loss=0.1781, simple_loss=0.2687, pruned_loss=0.0438, over 7425.00 frames.], tot_loss[loss=0.186, simple_loss=0.2721, pruned_loss=0.05001, over 1421319.56 frames.], batch size: 20, lr: 2.74e-04 2022-05-28 04:30:02,605 INFO [train.py:842] (0/4) Epoch 20, batch 6150, loss[loss=0.2376, simple_loss=0.3127, pruned_loss=0.08126, over 7067.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2742, pruned_loss=0.05149, over 1423514.86 frames.], batch size: 18, lr: 2.74e-04 2022-05-28 04:30:41,018 INFO [train.py:842] (0/4) Epoch 20, batch 6200, loss[loss=0.1567, simple_loss=0.254, pruned_loss=0.02966, over 7427.00 frames.], tot_loss[loss=0.1873, simple_loss=0.2731, pruned_loss=0.0507, over 1424144.77 frames.], batch size: 20, lr: 2.74e-04 2022-05-28 04:31:18,802 INFO [train.py:842] (0/4) Epoch 20, batch 6250, loss[loss=0.1803, simple_loss=0.268, pruned_loss=0.04626, over 7356.00 frames.], tot_loss[loss=0.1863, simple_loss=0.2725, pruned_loss=0.05008, over 1423572.12 frames.], batch size: 19, lr: 2.74e-04 2022-05-28 04:31:56,956 INFO [train.py:842] (0/4) Epoch 20, batch 6300, loss[loss=0.2181, simple_loss=0.3003, pruned_loss=0.06794, over 7300.00 frames.], tot_loss[loss=0.1886, simple_loss=0.2744, pruned_loss=0.05136, over 1420985.77 frames.], batch size: 25, lr: 2.74e-04 2022-05-28 04:32:34,969 INFO [train.py:842] (0/4) Epoch 20, batch 6350, loss[loss=0.1853, simple_loss=0.2745, pruned_loss=0.04808, over 7320.00 frames.], tot_loss[loss=0.1864, simple_loss=0.2726, pruned_loss=0.0501, over 1424306.68 frames.], batch size: 21, lr: 2.74e-04 2022-05-28 04:33:13,290 INFO [train.py:842] (0/4) Epoch 20, batch 6400, loss[loss=0.1574, simple_loss=0.2487, pruned_loss=0.03307, over 7322.00 frames.], tot_loss[loss=0.1868, simple_loss=0.2729, pruned_loss=0.05039, over 1424493.00 frames.], batch size: 20, lr: 2.74e-04 2022-05-28 04:33:50,978 INFO [train.py:842] (0/4) Epoch 20, batch 6450, loss[loss=0.1783, simple_loss=0.2612, pruned_loss=0.04771, over 7323.00 frames.], tot_loss[loss=0.1877, simple_loss=0.2737, pruned_loss=0.05083, over 1421405.25 frames.], batch size: 21, lr: 2.73e-04 2022-05-28 04:34:29,309 INFO [train.py:842] (0/4) Epoch 20, batch 6500, loss[loss=0.1523, simple_loss=0.2363, pruned_loss=0.0341, over 7059.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2729, pruned_loss=0.05072, over 1424627.02 frames.], batch size: 18, lr: 2.73e-04 2022-05-28 04:35:07,337 INFO [train.py:842] (0/4) Epoch 20, batch 6550, loss[loss=0.1621, simple_loss=0.2521, pruned_loss=0.03602, over 7159.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2711, pruned_loss=0.05009, over 1423362.33 frames.], batch size: 19, lr: 2.73e-04 2022-05-28 04:35:45,543 INFO [train.py:842] (0/4) Epoch 20, batch 6600, loss[loss=0.1817, simple_loss=0.2636, pruned_loss=0.0499, over 7234.00 frames.], tot_loss[loss=0.1854, simple_loss=0.271, pruned_loss=0.0499, over 1426193.17 frames.], batch size: 16, lr: 2.73e-04 2022-05-28 04:36:23,631 INFO [train.py:842] (0/4) Epoch 20, batch 6650, loss[loss=0.1878, simple_loss=0.2718, pruned_loss=0.05191, over 7239.00 frames.], tot_loss[loss=0.1858, simple_loss=0.2715, pruned_loss=0.05002, over 1425365.66 frames.], batch size: 20, lr: 2.73e-04 2022-05-28 04:37:02,174 INFO 
[train.py:842] (0/4) Epoch 20, batch 6700, loss[loss=0.1696, simple_loss=0.2438, pruned_loss=0.04775, over 7139.00 frames.], tot_loss[loss=0.1863, simple_loss=0.2716, pruned_loss=0.05046, over 1424567.76 frames.], batch size: 17, lr: 2.73e-04 2022-05-28 04:37:40,086 INFO [train.py:842] (0/4) Epoch 20, batch 6750, loss[loss=0.2303, simple_loss=0.3022, pruned_loss=0.07916, over 7163.00 frames.], tot_loss[loss=0.1865, simple_loss=0.272, pruned_loss=0.05054, over 1428834.93 frames.], batch size: 18, lr: 2.73e-04 2022-05-28 04:38:18,249 INFO [train.py:842] (0/4) Epoch 20, batch 6800, loss[loss=0.1707, simple_loss=0.2414, pruned_loss=0.04997, over 7272.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2725, pruned_loss=0.05068, over 1427631.52 frames.], batch size: 17, lr: 2.73e-04 2022-05-28 04:38:56,156 INFO [train.py:842] (0/4) Epoch 20, batch 6850, loss[loss=0.1556, simple_loss=0.2435, pruned_loss=0.03391, over 6981.00 frames.], tot_loss[loss=0.186, simple_loss=0.2718, pruned_loss=0.05007, over 1427990.06 frames.], batch size: 16, lr: 2.73e-04 2022-05-28 04:39:34,134 INFO [train.py:842] (0/4) Epoch 20, batch 6900, loss[loss=0.1825, simple_loss=0.2734, pruned_loss=0.0458, over 7319.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2732, pruned_loss=0.05098, over 1424821.86 frames.], batch size: 21, lr: 2.73e-04 2022-05-28 04:40:11,934 INFO [train.py:842] (0/4) Epoch 20, batch 6950, loss[loss=0.1742, simple_loss=0.2644, pruned_loss=0.04202, over 7218.00 frames.], tot_loss[loss=0.1873, simple_loss=0.2734, pruned_loss=0.05063, over 1426121.86 frames.], batch size: 22, lr: 2.73e-04 2022-05-28 04:40:50,223 INFO [train.py:842] (0/4) Epoch 20, batch 7000, loss[loss=0.1692, simple_loss=0.2478, pruned_loss=0.0453, over 6785.00 frames.], tot_loss[loss=0.1878, simple_loss=0.2734, pruned_loss=0.05114, over 1426827.59 frames.], batch size: 15, lr: 2.73e-04 2022-05-28 04:41:28,300 INFO [train.py:842] (0/4) Epoch 20, batch 7050, loss[loss=0.1913, simple_loss=0.2796, pruned_loss=0.05147, over 7068.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2724, pruned_loss=0.05052, over 1430017.19 frames.], batch size: 28, lr: 2.73e-04 2022-05-28 04:42:06,606 INFO [train.py:842] (0/4) Epoch 20, batch 7100, loss[loss=0.195, simple_loss=0.2882, pruned_loss=0.05088, over 7151.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2723, pruned_loss=0.05058, over 1430528.24 frames.], batch size: 19, lr: 2.73e-04 2022-05-28 04:42:44,636 INFO [train.py:842] (0/4) Epoch 20, batch 7150, loss[loss=0.1689, simple_loss=0.2637, pruned_loss=0.03705, over 7216.00 frames.], tot_loss[loss=0.186, simple_loss=0.2719, pruned_loss=0.05004, over 1432244.02 frames.], batch size: 21, lr: 2.73e-04 2022-05-28 04:43:22,716 INFO [train.py:842] (0/4) Epoch 20, batch 7200, loss[loss=0.1654, simple_loss=0.2602, pruned_loss=0.03531, over 7113.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2716, pruned_loss=0.04995, over 1425384.48 frames.], batch size: 21, lr: 2.73e-04 2022-05-28 04:44:00,672 INFO [train.py:842] (0/4) Epoch 20, batch 7250, loss[loss=0.1754, simple_loss=0.2679, pruned_loss=0.04149, over 7342.00 frames.], tot_loss[loss=0.1864, simple_loss=0.2723, pruned_loss=0.05026, over 1424122.53 frames.], batch size: 22, lr: 2.73e-04 2022-05-28 04:44:38,828 INFO [train.py:842] (0/4) Epoch 20, batch 7300, loss[loss=0.2202, simple_loss=0.2839, pruned_loss=0.0782, over 5232.00 frames.], tot_loss[loss=0.1858, simple_loss=0.2722, pruned_loss=0.04968, over 1420921.32 frames.], batch size: 52, lr: 2.73e-04 2022-05-28 04:45:16,990 INFO [train.py:842] (0/4) 
Epoch 20, batch 7350, loss[loss=0.1668, simple_loss=0.2516, pruned_loss=0.04095, over 7158.00 frames.], tot_loss[loss=0.1858, simple_loss=0.2722, pruned_loss=0.04971, over 1423625.29 frames.], batch size: 18, lr: 2.73e-04 2022-05-28 04:45:55,199 INFO [train.py:842] (0/4) Epoch 20, batch 7400, loss[loss=0.1551, simple_loss=0.2373, pruned_loss=0.0365, over 7132.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2719, pruned_loss=0.04975, over 1426211.56 frames.], batch size: 17, lr: 2.73e-04 2022-05-28 04:46:32,946 INFO [train.py:842] (0/4) Epoch 20, batch 7450, loss[loss=0.1644, simple_loss=0.2569, pruned_loss=0.03602, over 7326.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2729, pruned_loss=0.05027, over 1426524.22 frames.], batch size: 21, lr: 2.73e-04 2022-05-28 04:47:11,133 INFO [train.py:842] (0/4) Epoch 20, batch 7500, loss[loss=0.2251, simple_loss=0.3127, pruned_loss=0.06876, over 7211.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2728, pruned_loss=0.05047, over 1429473.23 frames.], batch size: 21, lr: 2.73e-04 2022-05-28 04:47:48,917 INFO [train.py:842] (0/4) Epoch 20, batch 7550, loss[loss=0.1596, simple_loss=0.2396, pruned_loss=0.03979, over 7278.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2728, pruned_loss=0.05047, over 1424338.99 frames.], batch size: 18, lr: 2.73e-04 2022-05-28 04:48:27,064 INFO [train.py:842] (0/4) Epoch 20, batch 7600, loss[loss=0.2126, simple_loss=0.2902, pruned_loss=0.06756, over 7067.00 frames.], tot_loss[loss=0.1876, simple_loss=0.2737, pruned_loss=0.05071, over 1424218.59 frames.], batch size: 18, lr: 2.73e-04 2022-05-28 04:49:05,137 INFO [train.py:842] (0/4) Epoch 20, batch 7650, loss[loss=0.1714, simple_loss=0.2644, pruned_loss=0.03914, over 7219.00 frames.], tot_loss[loss=0.1866, simple_loss=0.273, pruned_loss=0.05008, over 1425303.72 frames.], batch size: 21, lr: 2.73e-04 2022-05-28 04:49:43,395 INFO [train.py:842] (0/4) Epoch 20, batch 7700, loss[loss=0.1788, simple_loss=0.2675, pruned_loss=0.04507, over 7267.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2732, pruned_loss=0.05015, over 1426911.49 frames.], batch size: 19, lr: 2.73e-04 2022-05-28 04:50:21,409 INFO [train.py:842] (0/4) Epoch 20, batch 7750, loss[loss=0.1968, simple_loss=0.2931, pruned_loss=0.05029, over 7148.00 frames.], tot_loss[loss=0.1872, simple_loss=0.2733, pruned_loss=0.05053, over 1427746.07 frames.], batch size: 20, lr: 2.73e-04 2022-05-28 04:50:59,581 INFO [train.py:842] (0/4) Epoch 20, batch 7800, loss[loss=0.1424, simple_loss=0.2285, pruned_loss=0.02817, over 7160.00 frames.], tot_loss[loss=0.186, simple_loss=0.2727, pruned_loss=0.04961, over 1427118.84 frames.], batch size: 18, lr: 2.72e-04 2022-05-28 04:51:37,534 INFO [train.py:842] (0/4) Epoch 20, batch 7850, loss[loss=0.1509, simple_loss=0.2361, pruned_loss=0.03281, over 7140.00 frames.], tot_loss[loss=0.185, simple_loss=0.2717, pruned_loss=0.04914, over 1426409.47 frames.], batch size: 20, lr: 2.72e-04 2022-05-28 04:52:15,872 INFO [train.py:842] (0/4) Epoch 20, batch 7900, loss[loss=0.2047, simple_loss=0.2883, pruned_loss=0.06059, over 7106.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2703, pruned_loss=0.04909, over 1427749.60 frames.], batch size: 21, lr: 2.72e-04 2022-05-28 04:52:53,895 INFO [train.py:842] (0/4) Epoch 20, batch 7950, loss[loss=0.1763, simple_loss=0.2599, pruned_loss=0.04634, over 7266.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2691, pruned_loss=0.04818, over 1429435.16 frames.], batch size: 25, lr: 2.72e-04 2022-05-28 04:53:32,237 INFO [train.py:842] (0/4) Epoch 20, batch 8000, 
loss[loss=0.1734, simple_loss=0.2622, pruned_loss=0.04229, over 7358.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2691, pruned_loss=0.04771, over 1431475.08 frames.], batch size: 19, lr: 2.72e-04 2022-05-28 04:54:10,489 INFO [train.py:842] (0/4) Epoch 20, batch 8050, loss[loss=0.1892, simple_loss=0.2796, pruned_loss=0.04939, over 7223.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2688, pruned_loss=0.04785, over 1428281.25 frames.], batch size: 21, lr: 2.72e-04 2022-05-28 04:54:48,887 INFO [train.py:842] (0/4) Epoch 20, batch 8100, loss[loss=0.1958, simple_loss=0.2787, pruned_loss=0.05645, over 7423.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2694, pruned_loss=0.0484, over 1432054.68 frames.], batch size: 20, lr: 2.72e-04 2022-05-28 04:55:36,030 INFO [train.py:842] (0/4) Epoch 20, batch 8150, loss[loss=0.1624, simple_loss=0.2491, pruned_loss=0.03788, over 7139.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2703, pruned_loss=0.04869, over 1425177.20 frames.], batch size: 17, lr: 2.72e-04 2022-05-28 04:56:14,316 INFO [train.py:842] (0/4) Epoch 20, batch 8200, loss[loss=0.1482, simple_loss=0.2283, pruned_loss=0.03407, over 7408.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2698, pruned_loss=0.04848, over 1425939.65 frames.], batch size: 18, lr: 2.72e-04 2022-05-28 04:56:52,099 INFO [train.py:842] (0/4) Epoch 20, batch 8250, loss[loss=0.1454, simple_loss=0.2357, pruned_loss=0.02752, over 7274.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2697, pruned_loss=0.04834, over 1425139.19 frames.], batch size: 18, lr: 2.72e-04 2022-05-28 04:57:30,456 INFO [train.py:842] (0/4) Epoch 20, batch 8300, loss[loss=0.2322, simple_loss=0.3119, pruned_loss=0.07626, over 7328.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2706, pruned_loss=0.04888, over 1425182.61 frames.], batch size: 21, lr: 2.72e-04 2022-05-28 04:58:17,702 INFO [train.py:842] (0/4) Epoch 20, batch 8350, loss[loss=0.1951, simple_loss=0.2908, pruned_loss=0.04971, over 7209.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2711, pruned_loss=0.04908, over 1421284.25 frames.], batch size: 23, lr: 2.72e-04 2022-05-28 04:58:55,799 INFO [train.py:842] (0/4) Epoch 20, batch 8400, loss[loss=0.1643, simple_loss=0.258, pruned_loss=0.03531, over 7330.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2711, pruned_loss=0.04874, over 1420688.29 frames.], batch size: 20, lr: 2.72e-04 2022-05-28 04:59:33,815 INFO [train.py:842] (0/4) Epoch 20, batch 8450, loss[loss=0.1877, simple_loss=0.2616, pruned_loss=0.05693, over 7416.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2715, pruned_loss=0.04907, over 1423350.08 frames.], batch size: 18, lr: 2.72e-04 2022-05-28 05:00:21,171 INFO [train.py:842] (0/4) Epoch 20, batch 8500, loss[loss=0.212, simple_loss=0.2979, pruned_loss=0.06307, over 7311.00 frames.], tot_loss[loss=0.185, simple_loss=0.2717, pruned_loss=0.04913, over 1416961.86 frames.], batch size: 24, lr: 2.72e-04 2022-05-28 05:00:59,184 INFO [train.py:842] (0/4) Epoch 20, batch 8550, loss[loss=0.2242, simple_loss=0.2997, pruned_loss=0.07442, over 7256.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2717, pruned_loss=0.04963, over 1420343.64 frames.], batch size: 19, lr: 2.72e-04 2022-05-28 05:01:37,329 INFO [train.py:842] (0/4) Epoch 20, batch 8600, loss[loss=0.1716, simple_loss=0.27, pruned_loss=0.03658, over 7320.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2703, pruned_loss=0.0487, over 1422838.68 frames.], batch size: 21, lr: 2.72e-04 2022-05-28 05:02:15,316 INFO [train.py:842] (0/4) Epoch 20, batch 8650, loss[loss=0.158, 
simple_loss=0.248, pruned_loss=0.03403, over 7228.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2703, pruned_loss=0.04897, over 1420054.84 frames.], batch size: 20, lr: 2.72e-04 2022-05-28 05:02:53,352 INFO [train.py:842] (0/4) Epoch 20, batch 8700, loss[loss=0.1588, simple_loss=0.2416, pruned_loss=0.03803, over 7286.00 frames.], tot_loss[loss=0.1838, simple_loss=0.27, pruned_loss=0.04879, over 1411705.15 frames.], batch size: 18, lr: 2.72e-04 2022-05-28 05:03:31,384 INFO [train.py:842] (0/4) Epoch 20, batch 8750, loss[loss=0.2055, simple_loss=0.2992, pruned_loss=0.05589, over 7218.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2706, pruned_loss=0.04935, over 1414177.78 frames.], batch size: 23, lr: 2.72e-04 2022-05-28 05:04:09,699 INFO [train.py:842] (0/4) Epoch 20, batch 8800, loss[loss=0.1909, simple_loss=0.2873, pruned_loss=0.0473, over 7149.00 frames.], tot_loss[loss=0.184, simple_loss=0.2699, pruned_loss=0.04901, over 1415473.16 frames.], batch size: 19, lr: 2.72e-04 2022-05-28 05:04:47,742 INFO [train.py:842] (0/4) Epoch 20, batch 8850, loss[loss=0.1475, simple_loss=0.2238, pruned_loss=0.03561, over 7003.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2678, pruned_loss=0.0484, over 1409675.13 frames.], batch size: 16, lr: 2.72e-04 2022-05-28 05:05:25,742 INFO [train.py:842] (0/4) Epoch 20, batch 8900, loss[loss=0.1494, simple_loss=0.2452, pruned_loss=0.02676, over 7248.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2673, pruned_loss=0.0483, over 1400452.29 frames.], batch size: 19, lr: 2.72e-04 2022-05-28 05:06:03,643 INFO [train.py:842] (0/4) Epoch 20, batch 8950, loss[loss=0.2127, simple_loss=0.3043, pruned_loss=0.06057, over 7224.00 frames.], tot_loss[loss=0.181, simple_loss=0.2662, pruned_loss=0.04789, over 1391171.29 frames.], batch size: 21, lr: 2.72e-04 2022-05-28 05:06:42,107 INFO [train.py:842] (0/4) Epoch 20, batch 9000, loss[loss=0.3006, simple_loss=0.3697, pruned_loss=0.1157, over 6472.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2657, pruned_loss=0.04872, over 1369163.64 frames.], batch size: 38, lr: 2.72e-04 2022-05-28 05:06:42,108 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 05:06:51,180 INFO [train.py:871] (0/4) Epoch 20, validation: loss=0.164, simple_loss=0.2628, pruned_loss=0.03265, over 868885.00 frames. 
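Just below, the run writes streaming_pruned_transducer_stateless4/exp/epoch-20.pt and checkpoint-184000.pt. As a hedged sketch only (the exact contents of these files are an assumption here, not something this log confirms), a checkpoint saved with ordinary torch serialization can be inspected like so:

    # Sketch: peek inside a saved checkpoint. Assumes ordinary torch.save
    # serialization and that model weights, if present, live under a "model"
    # key; both are assumptions, not facts taken from this log.
    import torch

    ckpt = torch.load("streaming_pruned_transducer_stateless4/exp/epoch-20.pt",
                      map_location="cpu")
    print(sorted(ckpt.keys()))  # see what the checkpoint actually stores
    if "model" in ckpt:
        n_params = sum(t.numel() for t in ckpt["model"].values() if torch.is_tensor(t))
        print(f"model tensors hold {n_params} parameters")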
2022-05-28 05:07:28,673 INFO [train.py:842] (0/4) Epoch 20, batch 9050, loss[loss=0.209, simple_loss=0.2883, pruned_loss=0.06485, over 4899.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2672, pruned_loss=0.05006, over 1331067.28 frames.], batch size: 52, lr: 2.72e-04 2022-05-28 05:08:05,300 INFO [train.py:842] (0/4) Epoch 20, batch 9100, loss[loss=0.2041, simple_loss=0.3008, pruned_loss=0.05373, over 6388.00 frames.], tot_loss[loss=0.1885, simple_loss=0.2722, pruned_loss=0.0524, over 1287390.10 frames.], batch size: 37, lr: 2.72e-04 2022-05-28 05:08:41,899 INFO [train.py:842] (0/4) Epoch 20, batch 9150, loss[loss=0.2277, simple_loss=0.3154, pruned_loss=0.06996, over 4978.00 frames.], tot_loss[loss=0.1938, simple_loss=0.2769, pruned_loss=0.0553, over 1235350.94 frames.], batch size: 52, lr: 2.71e-04 2022-05-28 05:09:14,353 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-20.pt 2022-05-28 05:09:33,866 INFO [train.py:842] (0/4) Epoch 21, batch 0, loss[loss=0.1459, simple_loss=0.231, pruned_loss=0.03038, over 7008.00 frames.], tot_loss[loss=0.1459, simple_loss=0.231, pruned_loss=0.03038, over 7008.00 frames.], batch size: 16, lr: 2.65e-04 2022-05-28 05:10:12,040 INFO [train.py:842] (0/4) Epoch 21, batch 50, loss[loss=0.1649, simple_loss=0.2598, pruned_loss=0.03497, over 6388.00 frames.], tot_loss[loss=0.188, simple_loss=0.2738, pruned_loss=0.05112, over 322822.92 frames.], batch size: 37, lr: 2.65e-04 2022-05-28 05:10:50,329 INFO [train.py:842] (0/4) Epoch 21, batch 100, loss[loss=0.2012, simple_loss=0.2832, pruned_loss=0.05956, over 7215.00 frames.], tot_loss[loss=0.1856, simple_loss=0.2722, pruned_loss=0.04945, over 567463.95 frames.], batch size: 16, lr: 2.65e-04 2022-05-28 05:11:28,266 INFO [train.py:842] (0/4) Epoch 21, batch 150, loss[loss=0.1609, simple_loss=0.2497, pruned_loss=0.03601, over 7169.00 frames.], tot_loss[loss=0.1861, simple_loss=0.2725, pruned_loss=0.04981, over 756232.69 frames.], batch size: 18, lr: 2.65e-04 2022-05-28 05:11:36,073 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-184000.pt 2022-05-28 05:12:09,309 INFO [train.py:842] (0/4) Epoch 21, batch 200, loss[loss=0.1794, simple_loss=0.2707, pruned_loss=0.04407, over 6696.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2727, pruned_loss=0.04938, over 900788.52 frames.], batch size: 31, lr: 2.65e-04 2022-05-28 05:12:47,238 INFO [train.py:842] (0/4) Epoch 21, batch 250, loss[loss=0.1628, simple_loss=0.2452, pruned_loss=0.04022, over 7160.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2729, pruned_loss=0.05021, over 1013373.29 frames.], batch size: 19, lr: 2.65e-04 2022-05-28 05:13:25,533 INFO [train.py:842] (0/4) Epoch 21, batch 300, loss[loss=0.1523, simple_loss=0.2456, pruned_loss=0.02956, over 7279.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2717, pruned_loss=0.04969, over 1102106.77 frames.], batch size: 18, lr: 2.65e-04 2022-05-28 05:14:03,303 INFO [train.py:842] (0/4) Epoch 21, batch 350, loss[loss=0.1875, simple_loss=0.2721, pruned_loss=0.05143, over 7256.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2723, pruned_loss=0.04934, over 1170859.55 frames.], batch size: 19, lr: 2.65e-04 2022-05-28 05:14:41,723 INFO [train.py:842] (0/4) Epoch 21, batch 400, loss[loss=0.1635, simple_loss=0.2467, pruned_loss=0.04019, over 7061.00 frames.], tot_loss[loss=0.1864, simple_loss=0.2723, pruned_loss=0.05021, over 1230067.49 frames.], batch size: 18, lr: 2.65e-04 2022-05-28 05:15:19,616 INFO [train.py:842] 
(0/4) Epoch 21, batch 450, loss[loss=0.1687, simple_loss=0.2564, pruned_loss=0.0405, over 7062.00 frames.], tot_loss[loss=0.1861, simple_loss=0.272, pruned_loss=0.05007, over 1272163.21 frames.], batch size: 18, lr: 2.65e-04 2022-05-28 05:15:57,947 INFO [train.py:842] (0/4) Epoch 21, batch 500, loss[loss=0.1729, simple_loss=0.2733, pruned_loss=0.0363, over 7089.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2712, pruned_loss=0.04969, over 1311046.06 frames.], batch size: 28, lr: 2.65e-04 2022-05-28 05:16:36,036 INFO [train.py:842] (0/4) Epoch 21, batch 550, loss[loss=0.1832, simple_loss=0.2557, pruned_loss=0.05536, over 6846.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2707, pruned_loss=0.04954, over 1337049.81 frames.], batch size: 15, lr: 2.65e-04 2022-05-28 05:17:14,240 INFO [train.py:842] (0/4) Epoch 21, batch 600, loss[loss=0.1872, simple_loss=0.284, pruned_loss=0.04522, over 7214.00 frames.], tot_loss[loss=0.1858, simple_loss=0.2717, pruned_loss=0.04994, over 1356050.56 frames.], batch size: 22, lr: 2.65e-04 2022-05-28 05:17:52,430 INFO [train.py:842] (0/4) Epoch 21, batch 650, loss[loss=0.1529, simple_loss=0.2309, pruned_loss=0.03746, over 7129.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2711, pruned_loss=0.04997, over 1370143.73 frames.], batch size: 17, lr: 2.65e-04 2022-05-28 05:18:30,488 INFO [train.py:842] (0/4) Epoch 21, batch 700, loss[loss=0.1872, simple_loss=0.2679, pruned_loss=0.05326, over 7237.00 frames.], tot_loss[loss=0.1852, simple_loss=0.271, pruned_loss=0.04967, over 1380136.86 frames.], batch size: 20, lr: 2.65e-04 2022-05-28 05:19:08,396 INFO [train.py:842] (0/4) Epoch 21, batch 750, loss[loss=0.1463, simple_loss=0.2313, pruned_loss=0.03064, over 7408.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2704, pruned_loss=0.04906, over 1386025.18 frames.], batch size: 18, lr: 2.65e-04 2022-05-28 05:19:46,506 INFO [train.py:842] (0/4) Epoch 21, batch 800, loss[loss=0.2331, simple_loss=0.3126, pruned_loss=0.07682, over 7234.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2713, pruned_loss=0.04976, over 1384823.11 frames.], batch size: 20, lr: 2.65e-04 2022-05-28 05:20:24,486 INFO [train.py:842] (0/4) Epoch 21, batch 850, loss[loss=0.239, simple_loss=0.317, pruned_loss=0.0805, over 7286.00 frames.], tot_loss[loss=0.1852, simple_loss=0.271, pruned_loss=0.04968, over 1391361.63 frames.], batch size: 25, lr: 2.65e-04 2022-05-28 05:21:02,924 INFO [train.py:842] (0/4) Epoch 21, batch 900, loss[loss=0.1728, simple_loss=0.2672, pruned_loss=0.03924, over 7240.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2698, pruned_loss=0.04884, over 1399675.28 frames.], batch size: 20, lr: 2.65e-04 2022-05-28 05:21:40,794 INFO [train.py:842] (0/4) Epoch 21, batch 950, loss[loss=0.2005, simple_loss=0.2971, pruned_loss=0.0519, over 7333.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2706, pruned_loss=0.04926, over 1405385.49 frames.], batch size: 22, lr: 2.64e-04 2022-05-28 05:22:18,907 INFO [train.py:842] (0/4) Epoch 21, batch 1000, loss[loss=0.2168, simple_loss=0.307, pruned_loss=0.0633, over 7188.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2725, pruned_loss=0.04998, over 1405117.53 frames.], batch size: 23, lr: 2.64e-04 2022-05-28 05:22:56,513 INFO [train.py:842] (0/4) Epoch 21, batch 1050, loss[loss=0.2193, simple_loss=0.3103, pruned_loss=0.06412, over 7416.00 frames.], tot_loss[loss=0.1878, simple_loss=0.2748, pruned_loss=0.05047, over 1406058.36 frames.], batch size: 21, lr: 2.64e-04 2022-05-28 05:23:34,908 INFO [train.py:842] (0/4) Epoch 21, batch 1100, 
loss[loss=0.1484, simple_loss=0.2305, pruned_loss=0.03317, over 7206.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2716, pruned_loss=0.04904, over 1409095.19 frames.], batch size: 16, lr: 2.64e-04 2022-05-28 05:24:12,840 INFO [train.py:842] (0/4) Epoch 21, batch 1150, loss[loss=0.1968, simple_loss=0.2915, pruned_loss=0.05102, over 7278.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2723, pruned_loss=0.04931, over 1414330.50 frames.], batch size: 24, lr: 2.64e-04 2022-05-28 05:24:50,884 INFO [train.py:842] (0/4) Epoch 21, batch 1200, loss[loss=0.1582, simple_loss=0.2391, pruned_loss=0.03866, over 7282.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2728, pruned_loss=0.04947, over 1416998.08 frames.], batch size: 18, lr: 2.64e-04 2022-05-28 05:25:28,908 INFO [train.py:842] (0/4) Epoch 21, batch 1250, loss[loss=0.1912, simple_loss=0.2825, pruned_loss=0.04992, over 7296.00 frames.], tot_loss[loss=0.1844, simple_loss=0.2711, pruned_loss=0.04884, over 1418774.96 frames.], batch size: 24, lr: 2.64e-04 2022-05-28 05:26:07,272 INFO [train.py:842] (0/4) Epoch 21, batch 1300, loss[loss=0.1708, simple_loss=0.2511, pruned_loss=0.04529, over 7070.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2696, pruned_loss=0.04838, over 1417948.01 frames.], batch size: 18, lr: 2.64e-04 2022-05-28 05:26:45,550 INFO [train.py:842] (0/4) Epoch 21, batch 1350, loss[loss=0.1584, simple_loss=0.2487, pruned_loss=0.03411, over 7334.00 frames.], tot_loss[loss=0.1821, simple_loss=0.2686, pruned_loss=0.04778, over 1425262.23 frames.], batch size: 22, lr: 2.64e-04 2022-05-28 05:27:23,882 INFO [train.py:842] (0/4) Epoch 21, batch 1400, loss[loss=0.2206, simple_loss=0.3118, pruned_loss=0.06466, over 7379.00 frames.], tot_loss[loss=0.1835, simple_loss=0.2701, pruned_loss=0.04847, over 1427881.82 frames.], batch size: 23, lr: 2.64e-04 2022-05-28 05:28:01,805 INFO [train.py:842] (0/4) Epoch 21, batch 1450, loss[loss=0.249, simple_loss=0.3094, pruned_loss=0.09429, over 5048.00 frames.], tot_loss[loss=0.1844, simple_loss=0.2706, pruned_loss=0.04914, over 1422461.54 frames.], batch size: 52, lr: 2.64e-04 2022-05-28 05:28:39,858 INFO [train.py:842] (0/4) Epoch 21, batch 1500, loss[loss=0.2108, simple_loss=0.2988, pruned_loss=0.06136, over 7321.00 frames.], tot_loss[loss=0.185, simple_loss=0.2711, pruned_loss=0.04941, over 1419523.77 frames.], batch size: 22, lr: 2.64e-04 2022-05-28 05:29:17,829 INFO [train.py:842] (0/4) Epoch 21, batch 1550, loss[loss=0.2349, simple_loss=0.3148, pruned_loss=0.07748, over 6774.00 frames.], tot_loss[loss=0.1856, simple_loss=0.2714, pruned_loss=0.04994, over 1421544.29 frames.], batch size: 31, lr: 2.64e-04 2022-05-28 05:29:56,065 INFO [train.py:842] (0/4) Epoch 21, batch 1600, loss[loss=0.1781, simple_loss=0.271, pruned_loss=0.04261, over 7327.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2711, pruned_loss=0.04896, over 1422639.26 frames.], batch size: 22, lr: 2.64e-04 2022-05-28 05:30:33,970 INFO [train.py:842] (0/4) Epoch 21, batch 1650, loss[loss=0.1839, simple_loss=0.2783, pruned_loss=0.0448, over 7339.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2716, pruned_loss=0.04901, over 1423607.89 frames.], batch size: 20, lr: 2.64e-04 2022-05-28 05:31:12,220 INFO [train.py:842] (0/4) Epoch 21, batch 1700, loss[loss=0.3082, simple_loss=0.3705, pruned_loss=0.1229, over 7352.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2714, pruned_loss=0.04875, over 1422624.32 frames.], batch size: 22, lr: 2.64e-04 2022-05-28 05:31:50,195 INFO [train.py:842] (0/4) Epoch 21, batch 1750, loss[loss=0.1818, 
simple_loss=0.2689, pruned_loss=0.04734, over 7418.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2719, pruned_loss=0.04897, over 1422514.53 frames.], batch size: 18, lr: 2.64e-04 2022-05-28 05:32:28,365 INFO [train.py:842] (0/4) Epoch 21, batch 1800, loss[loss=0.195, simple_loss=0.2829, pruned_loss=0.05358, over 7204.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2713, pruned_loss=0.04899, over 1424218.48 frames.], batch size: 23, lr: 2.64e-04 2022-05-28 05:33:06,414 INFO [train.py:842] (0/4) Epoch 21, batch 1850, loss[loss=0.1538, simple_loss=0.2376, pruned_loss=0.03501, over 7400.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2705, pruned_loss=0.04906, over 1422344.38 frames.], batch size: 18, lr: 2.64e-04 2022-05-28 05:33:44,675 INFO [train.py:842] (0/4) Epoch 21, batch 1900, loss[loss=0.1719, simple_loss=0.2625, pruned_loss=0.04068, over 7168.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2703, pruned_loss=0.04894, over 1424716.98 frames.], batch size: 19, lr: 2.64e-04 2022-05-28 05:34:22,659 INFO [train.py:842] (0/4) Epoch 21, batch 1950, loss[loss=0.163, simple_loss=0.2519, pruned_loss=0.03699, over 7261.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2696, pruned_loss=0.04846, over 1428486.18 frames.], batch size: 19, lr: 2.64e-04 2022-05-28 05:35:00,906 INFO [train.py:842] (0/4) Epoch 21, batch 2000, loss[loss=0.182, simple_loss=0.2745, pruned_loss=0.04474, over 6858.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2698, pruned_loss=0.04868, over 1424607.61 frames.], batch size: 31, lr: 2.64e-04 2022-05-28 05:35:38,830 INFO [train.py:842] (0/4) Epoch 21, batch 2050, loss[loss=0.2031, simple_loss=0.2969, pruned_loss=0.0546, over 7221.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2711, pruned_loss=0.04957, over 1423930.78 frames.], batch size: 21, lr: 2.64e-04 2022-05-28 05:36:17,019 INFO [train.py:842] (0/4) Epoch 21, batch 2100, loss[loss=0.2388, simple_loss=0.3028, pruned_loss=0.08744, over 7061.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2712, pruned_loss=0.04989, over 1422902.58 frames.], batch size: 18, lr: 2.64e-04 2022-05-28 05:36:54,844 INFO [train.py:842] (0/4) Epoch 21, batch 2150, loss[loss=0.1768, simple_loss=0.2506, pruned_loss=0.05152, over 6769.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2717, pruned_loss=0.04968, over 1421912.57 frames.], batch size: 15, lr: 2.64e-04 2022-05-28 05:37:33,247 INFO [train.py:842] (0/4) Epoch 21, batch 2200, loss[loss=0.1777, simple_loss=0.2656, pruned_loss=0.04487, over 7195.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2704, pruned_loss=0.04884, over 1424214.83 frames.], batch size: 22, lr: 2.64e-04 2022-05-28 05:38:11,122 INFO [train.py:842] (0/4) Epoch 21, batch 2250, loss[loss=0.2024, simple_loss=0.2909, pruned_loss=0.05691, over 7198.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2703, pruned_loss=0.0483, over 1424239.90 frames.], batch size: 22, lr: 2.64e-04 2022-05-28 05:38:49,546 INFO [train.py:842] (0/4) Epoch 21, batch 2300, loss[loss=0.2073, simple_loss=0.2868, pruned_loss=0.06386, over 4981.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2689, pruned_loss=0.04825, over 1422416.16 frames.], batch size: 52, lr: 2.64e-04 2022-05-28 05:39:27,154 INFO [train.py:842] (0/4) Epoch 21, batch 2350, loss[loss=0.2076, simple_loss=0.2913, pruned_loss=0.062, over 7287.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2707, pruned_loss=0.04894, over 1417487.38 frames.], batch size: 24, lr: 2.63e-04 2022-05-28 05:40:05,573 INFO [train.py:842] (0/4) Epoch 21, batch 2400, loss[loss=0.1772, simple_loss=0.2741, 
pruned_loss=0.04017, over 7215.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2697, pruned_loss=0.04858, over 1421447.59 frames.], batch size: 23, lr: 2.63e-04 2022-05-28 05:40:43,635 INFO [train.py:842] (0/4) Epoch 21, batch 2450, loss[loss=0.1991, simple_loss=0.2935, pruned_loss=0.05237, over 7159.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2698, pruned_loss=0.0487, over 1422374.71 frames.], batch size: 19, lr: 2.63e-04 2022-05-28 05:41:21,973 INFO [train.py:842] (0/4) Epoch 21, batch 2500, loss[loss=0.1769, simple_loss=0.2731, pruned_loss=0.04033, over 7408.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2698, pruned_loss=0.04879, over 1423031.82 frames.], batch size: 21, lr: 2.63e-04 2022-05-28 05:41:59,711 INFO [train.py:842] (0/4) Epoch 21, batch 2550, loss[loss=0.208, simple_loss=0.2945, pruned_loss=0.06073, over 4825.00 frames.], tot_loss[loss=0.1849, simple_loss=0.271, pruned_loss=0.04939, over 1420568.43 frames.], batch size: 52, lr: 2.63e-04 2022-05-28 05:42:37,950 INFO [train.py:842] (0/4) Epoch 21, batch 2600, loss[loss=0.1348, simple_loss=0.2193, pruned_loss=0.02519, over 7083.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2709, pruned_loss=0.04911, over 1421318.95 frames.], batch size: 18, lr: 2.63e-04 2022-05-28 05:43:15,787 INFO [train.py:842] (0/4) Epoch 21, batch 2650, loss[loss=0.1659, simple_loss=0.2533, pruned_loss=0.03922, over 7324.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2715, pruned_loss=0.04952, over 1416314.10 frames.], batch size: 20, lr: 2.63e-04 2022-05-28 05:43:53,878 INFO [train.py:842] (0/4) Epoch 21, batch 2700, loss[loss=0.1524, simple_loss=0.2442, pruned_loss=0.03027, over 7410.00 frames.], tot_loss[loss=0.1852, simple_loss=0.2716, pruned_loss=0.04939, over 1420119.41 frames.], batch size: 18, lr: 2.63e-04 2022-05-28 05:44:31,848 INFO [train.py:842] (0/4) Epoch 21, batch 2750, loss[loss=0.206, simple_loss=0.2976, pruned_loss=0.05722, over 7159.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2719, pruned_loss=0.04947, over 1421331.60 frames.], batch size: 18, lr: 2.63e-04 2022-05-28 05:45:10,226 INFO [train.py:842] (0/4) Epoch 21, batch 2800, loss[loss=0.1702, simple_loss=0.2639, pruned_loss=0.03826, over 7391.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2726, pruned_loss=0.05042, over 1424554.06 frames.], batch size: 23, lr: 2.63e-04 2022-05-28 05:45:48,218 INFO [train.py:842] (0/4) Epoch 21, batch 2850, loss[loss=0.1914, simple_loss=0.2911, pruned_loss=0.04587, over 7198.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2718, pruned_loss=0.05001, over 1420325.57 frames.], batch size: 23, lr: 2.63e-04 2022-05-28 05:46:26,344 INFO [train.py:842] (0/4) Epoch 21, batch 2900, loss[loss=0.2006, simple_loss=0.287, pruned_loss=0.0571, over 7072.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2722, pruned_loss=0.0505, over 1415448.39 frames.], batch size: 28, lr: 2.63e-04 2022-05-28 05:47:04,345 INFO [train.py:842] (0/4) Epoch 21, batch 2950, loss[loss=0.1914, simple_loss=0.2703, pruned_loss=0.05627, over 7362.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2723, pruned_loss=0.05005, over 1414731.55 frames.], batch size: 19, lr: 2.63e-04 2022-05-28 05:47:42,618 INFO [train.py:842] (0/4) Epoch 21, batch 3000, loss[loss=0.2093, simple_loss=0.2917, pruned_loss=0.06348, over 6824.00 frames.], tot_loss[loss=0.1862, simple_loss=0.2722, pruned_loss=0.05014, over 1414026.99 frames.], batch size: 31, lr: 2.63e-04 2022-05-28 05:47:42,619 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 05:47:51,726 INFO [train.py:871] (0/4) Epoch 21, 
validation: loss=0.1652, simple_loss=0.2649, pruned_loss=0.0328, over 868885.00 frames. 2022-05-28 05:48:29,677 INFO [train.py:842] (0/4) Epoch 21, batch 3050, loss[loss=0.1462, simple_loss=0.2308, pruned_loss=0.03075, over 7266.00 frames.], tot_loss[loss=0.1861, simple_loss=0.2719, pruned_loss=0.05014, over 1415424.59 frames.], batch size: 18, lr: 2.63e-04 2022-05-28 05:49:07,873 INFO [train.py:842] (0/4) Epoch 21, batch 3100, loss[loss=0.2716, simple_loss=0.3459, pruned_loss=0.09864, over 7387.00 frames.], tot_loss[loss=0.1865, simple_loss=0.2723, pruned_loss=0.05038, over 1413338.66 frames.], batch size: 23, lr: 2.63e-04 2022-05-28 05:49:46,058 INFO [train.py:842] (0/4) Epoch 21, batch 3150, loss[loss=0.1997, simple_loss=0.281, pruned_loss=0.05921, over 7274.00 frames.], tot_loss[loss=0.1844, simple_loss=0.2706, pruned_loss=0.04911, over 1418423.36 frames.], batch size: 24, lr: 2.63e-04 2022-05-28 05:50:24,120 INFO [train.py:842] (0/4) Epoch 21, batch 3200, loss[loss=0.2236, simple_loss=0.3033, pruned_loss=0.07193, over 7316.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2715, pruned_loss=0.04902, over 1423132.75 frames.], batch size: 21, lr: 2.63e-04 2022-05-28 05:51:01,971 INFO [train.py:842] (0/4) Epoch 21, batch 3250, loss[loss=0.172, simple_loss=0.2559, pruned_loss=0.04402, over 7070.00 frames.], tot_loss[loss=0.183, simple_loss=0.2701, pruned_loss=0.04792, over 1422000.14 frames.], batch size: 18, lr: 2.63e-04 2022-05-28 05:51:40,399 INFO [train.py:842] (0/4) Epoch 21, batch 3300, loss[loss=0.1706, simple_loss=0.2518, pruned_loss=0.04467, over 7129.00 frames.], tot_loss[loss=0.1828, simple_loss=0.2699, pruned_loss=0.04783, over 1423034.76 frames.], batch size: 17, lr: 2.63e-04 2022-05-28 05:52:18,323 INFO [train.py:842] (0/4) Epoch 21, batch 3350, loss[loss=0.1827, simple_loss=0.2611, pruned_loss=0.05214, over 7237.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2713, pruned_loss=0.04905, over 1419489.13 frames.], batch size: 20, lr: 2.63e-04 2022-05-28 05:52:56,392 INFO [train.py:842] (0/4) Epoch 21, batch 3400, loss[loss=0.1873, simple_loss=0.273, pruned_loss=0.05077, over 6270.00 frames.], tot_loss[loss=0.185, simple_loss=0.2717, pruned_loss=0.04916, over 1416667.24 frames.], batch size: 37, lr: 2.63e-04 2022-05-28 05:53:34,248 INFO [train.py:842] (0/4) Epoch 21, batch 3450, loss[loss=0.2164, simple_loss=0.3123, pruned_loss=0.06025, over 7305.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2717, pruned_loss=0.04906, over 1415395.09 frames.], batch size: 21, lr: 2.63e-04 2022-05-28 05:54:12,331 INFO [train.py:842] (0/4) Epoch 21, batch 3500, loss[loss=0.2062, simple_loss=0.2877, pruned_loss=0.06235, over 7074.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2719, pruned_loss=0.04929, over 1410052.98 frames.], batch size: 28, lr: 2.63e-04 2022-05-28 05:54:50,557 INFO [train.py:842] (0/4) Epoch 21, batch 3550, loss[loss=0.1707, simple_loss=0.2531, pruned_loss=0.04413, over 7284.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2721, pruned_loss=0.04961, over 1414263.21 frames.], batch size: 17, lr: 2.63e-04 2022-05-28 05:55:28,699 INFO [train.py:842] (0/4) Epoch 21, batch 3600, loss[loss=0.2248, simple_loss=0.3059, pruned_loss=0.07185, over 7370.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2734, pruned_loss=0.05039, over 1412411.64 frames.], batch size: 23, lr: 2.63e-04 2022-05-28 05:56:06,581 INFO [train.py:842] (0/4) Epoch 21, batch 3650, loss[loss=0.175, simple_loss=0.266, pruned_loss=0.04204, over 7147.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2731, 
pruned_loss=0.05002, over 1413782.75 frames.], batch size: 26, lr: 2.63e-04 2022-05-28 05:56:44,822 INFO [train.py:842] (0/4) Epoch 21, batch 3700, loss[loss=0.1982, simple_loss=0.2957, pruned_loss=0.05035, over 7324.00 frames.], tot_loss[loss=0.1864, simple_loss=0.2732, pruned_loss=0.04985, over 1415283.78 frames.], batch size: 21, lr: 2.63e-04 2022-05-28 05:57:22,921 INFO [train.py:842] (0/4) Epoch 21, batch 3750, loss[loss=0.1877, simple_loss=0.2821, pruned_loss=0.04666, over 7305.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2721, pruned_loss=0.04926, over 1418718.32 frames.], batch size: 25, lr: 2.62e-04 2022-05-28 05:58:01,030 INFO [train.py:842] (0/4) Epoch 21, batch 3800, loss[loss=0.2101, simple_loss=0.291, pruned_loss=0.06454, over 7185.00 frames.], tot_loss[loss=0.1851, simple_loss=0.272, pruned_loss=0.04915, over 1418946.47 frames.], batch size: 26, lr: 2.62e-04 2022-05-28 05:58:38,978 INFO [train.py:842] (0/4) Epoch 21, batch 3850, loss[loss=0.1943, simple_loss=0.2848, pruned_loss=0.05189, over 7327.00 frames.], tot_loss[loss=0.1851, simple_loss=0.272, pruned_loss=0.04913, over 1420283.87 frames.], batch size: 20, lr: 2.62e-04 2022-05-28 05:59:17,310 INFO [train.py:842] (0/4) Epoch 21, batch 3900, loss[loss=0.1919, simple_loss=0.2787, pruned_loss=0.05257, over 7263.00 frames.], tot_loss[loss=0.1851, simple_loss=0.272, pruned_loss=0.04912, over 1424093.51 frames.], batch size: 19, lr: 2.62e-04 2022-05-28 05:59:54,986 INFO [train.py:842] (0/4) Epoch 21, batch 3950, loss[loss=0.1581, simple_loss=0.2422, pruned_loss=0.03703, over 7410.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2723, pruned_loss=0.04926, over 1418997.25 frames.], batch size: 18, lr: 2.62e-04 2022-05-28 06:00:33,124 INFO [train.py:842] (0/4) Epoch 21, batch 4000, loss[loss=0.1627, simple_loss=0.2593, pruned_loss=0.03309, over 7362.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2721, pruned_loss=0.04882, over 1422922.37 frames.], batch size: 19, lr: 2.62e-04 2022-05-28 06:01:11,314 INFO [train.py:842] (0/4) Epoch 21, batch 4050, loss[loss=0.1601, simple_loss=0.2547, pruned_loss=0.03278, over 7426.00 frames.], tot_loss[loss=0.184, simple_loss=0.2709, pruned_loss=0.04851, over 1423262.39 frames.], batch size: 20, lr: 2.62e-04 2022-05-28 06:01:49,087 INFO [train.py:842] (0/4) Epoch 21, batch 4100, loss[loss=0.1517, simple_loss=0.2408, pruned_loss=0.03133, over 7129.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2726, pruned_loss=0.04945, over 1414670.74 frames.], batch size: 17, lr: 2.62e-04 2022-05-28 06:02:26,894 INFO [train.py:842] (0/4) Epoch 21, batch 4150, loss[loss=0.2117, simple_loss=0.2962, pruned_loss=0.06359, over 7187.00 frames.], tot_loss[loss=0.1857, simple_loss=0.2726, pruned_loss=0.04937, over 1413029.61 frames.], batch size: 23, lr: 2.62e-04 2022-05-28 06:03:05,082 INFO [train.py:842] (0/4) Epoch 21, batch 4200, loss[loss=0.2069, simple_loss=0.2934, pruned_loss=0.06016, over 5090.00 frames.], tot_loss[loss=0.186, simple_loss=0.2732, pruned_loss=0.04938, over 1417747.65 frames.], batch size: 53, lr: 2.62e-04 2022-05-28 06:03:42,983 INFO [train.py:842] (0/4) Epoch 21, batch 4250, loss[loss=0.1855, simple_loss=0.2833, pruned_loss=0.04381, over 7216.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2725, pruned_loss=0.04924, over 1418001.36 frames.], batch size: 21, lr: 2.62e-04 2022-05-28 06:04:21,416 INFO [train.py:842] (0/4) Epoch 21, batch 4300, loss[loss=0.2209, simple_loss=0.3023, pruned_loss=0.06972, over 7017.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2729, pruned_loss=0.05015, 
over 1420119.29 frames.], batch size: 16, lr: 2.62e-04 2022-05-28 06:04:59,213 INFO [train.py:842] (0/4) Epoch 21, batch 4350, loss[loss=0.1768, simple_loss=0.2645, pruned_loss=0.04453, over 7281.00 frames.], tot_loss[loss=0.1856, simple_loss=0.2721, pruned_loss=0.0496, over 1419687.75 frames.], batch size: 24, lr: 2.62e-04 2022-05-28 06:05:37,554 INFO [train.py:842] (0/4) Epoch 21, batch 4400, loss[loss=0.1804, simple_loss=0.2735, pruned_loss=0.0437, over 6209.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2711, pruned_loss=0.04892, over 1421332.54 frames.], batch size: 37, lr: 2.62e-04 2022-05-28 06:06:15,533 INFO [train.py:842] (0/4) Epoch 21, batch 4450, loss[loss=0.1973, simple_loss=0.287, pruned_loss=0.05379, over 7232.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2703, pruned_loss=0.04873, over 1423330.81 frames.], batch size: 21, lr: 2.62e-04 2022-05-28 06:06:53,882 INFO [train.py:842] (0/4) Epoch 21, batch 4500, loss[loss=0.1863, simple_loss=0.2717, pruned_loss=0.05041, over 7229.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2699, pruned_loss=0.04861, over 1425773.12 frames.], batch size: 20, lr: 2.62e-04 2022-05-28 06:07:31,870 INFO [train.py:842] (0/4) Epoch 21, batch 4550, loss[loss=0.2454, simple_loss=0.3142, pruned_loss=0.08826, over 7098.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2688, pruned_loss=0.0481, over 1426836.29 frames.], batch size: 28, lr: 2.62e-04 2022-05-28 06:08:10,246 INFO [train.py:842] (0/4) Epoch 21, batch 4600, loss[loss=0.1558, simple_loss=0.2349, pruned_loss=0.03835, over 7165.00 frames.], tot_loss[loss=0.183, simple_loss=0.2691, pruned_loss=0.04852, over 1424339.32 frames.], batch size: 18, lr: 2.62e-04 2022-05-28 06:08:48,131 INFO [train.py:842] (0/4) Epoch 21, batch 4650, loss[loss=0.1933, simple_loss=0.2793, pruned_loss=0.05366, over 7234.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2685, pruned_loss=0.04829, over 1424041.06 frames.], batch size: 20, lr: 2.62e-04 2022-05-28 06:09:26,156 INFO [train.py:842] (0/4) Epoch 21, batch 4700, loss[loss=0.1522, simple_loss=0.2444, pruned_loss=0.02998, over 7165.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2692, pruned_loss=0.04829, over 1425560.57 frames.], batch size: 19, lr: 2.62e-04 2022-05-28 06:10:04,161 INFO [train.py:842] (0/4) Epoch 21, batch 4750, loss[loss=0.2387, simple_loss=0.3353, pruned_loss=0.07111, over 7097.00 frames.], tot_loss[loss=0.184, simple_loss=0.27, pruned_loss=0.04898, over 1424249.01 frames.], batch size: 28, lr: 2.62e-04 2022-05-28 06:10:42,354 INFO [train.py:842] (0/4) Epoch 21, batch 4800, loss[loss=0.2301, simple_loss=0.3126, pruned_loss=0.07374, over 7305.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2711, pruned_loss=0.04951, over 1421419.61 frames.], batch size: 24, lr: 2.62e-04 2022-05-28 06:11:20,367 INFO [train.py:842] (0/4) Epoch 21, batch 4850, loss[loss=0.1794, simple_loss=0.2682, pruned_loss=0.04533, over 7335.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2707, pruned_loss=0.04931, over 1418175.35 frames.], batch size: 20, lr: 2.62e-04 2022-05-28 06:11:58,838 INFO [train.py:842] (0/4) Epoch 21, batch 4900, loss[loss=0.1686, simple_loss=0.2631, pruned_loss=0.03704, over 7274.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2702, pruned_loss=0.04858, over 1421197.80 frames.], batch size: 24, lr: 2.62e-04 2022-05-28 06:12:36,419 INFO [train.py:842] (0/4) Epoch 21, batch 4950, loss[loss=0.1656, simple_loss=0.2609, pruned_loss=0.03515, over 7142.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2707, pruned_loss=0.04878, over 1413745.60 frames.], 
batch size: 20, lr: 2.62e-04 2022-05-28 06:13:14,601 INFO [train.py:842] (0/4) Epoch 21, batch 5000, loss[loss=0.1907, simple_loss=0.277, pruned_loss=0.05219, over 7416.00 frames.], tot_loss[loss=0.1852, simple_loss=0.2717, pruned_loss=0.0493, over 1417836.88 frames.], batch size: 20, lr: 2.62e-04 2022-05-28 06:13:52,556 INFO [train.py:842] (0/4) Epoch 21, batch 5050, loss[loss=0.1547, simple_loss=0.2476, pruned_loss=0.03096, over 7430.00 frames.], tot_loss[loss=0.1861, simple_loss=0.272, pruned_loss=0.05008, over 1418990.80 frames.], batch size: 20, lr: 2.62e-04 2022-05-28 06:14:30,639 INFO [train.py:842] (0/4) Epoch 21, batch 5100, loss[loss=0.156, simple_loss=0.2434, pruned_loss=0.03426, over 7170.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2718, pruned_loss=0.05001, over 1421079.01 frames.], batch size: 18, lr: 2.62e-04 2022-05-28 06:15:08,572 INFO [train.py:842] (0/4) Epoch 21, batch 5150, loss[loss=0.1847, simple_loss=0.2734, pruned_loss=0.04802, over 4970.00 frames.], tot_loss[loss=0.1865, simple_loss=0.2722, pruned_loss=0.05039, over 1415352.12 frames.], batch size: 52, lr: 2.62e-04 2022-05-28 06:15:46,841 INFO [train.py:842] (0/4) Epoch 21, batch 5200, loss[loss=0.1685, simple_loss=0.2597, pruned_loss=0.03869, over 6759.00 frames.], tot_loss[loss=0.1858, simple_loss=0.2718, pruned_loss=0.0499, over 1418840.88 frames.], batch size: 31, lr: 2.61e-04 2022-05-28 06:16:24,691 INFO [train.py:842] (0/4) Epoch 21, batch 5250, loss[loss=0.1797, simple_loss=0.2763, pruned_loss=0.04156, over 6335.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2715, pruned_loss=0.04959, over 1419871.11 frames.], batch size: 37, lr: 2.61e-04 2022-05-28 06:17:02,972 INFO [train.py:842] (0/4) Epoch 21, batch 5300, loss[loss=0.1508, simple_loss=0.2375, pruned_loss=0.03205, over 7160.00 frames.], tot_loss[loss=0.1835, simple_loss=0.2704, pruned_loss=0.0483, over 1424036.85 frames.], batch size: 18, lr: 2.61e-04 2022-05-28 06:17:40,967 INFO [train.py:842] (0/4) Epoch 21, batch 5350, loss[loss=0.2097, simple_loss=0.2866, pruned_loss=0.06637, over 7318.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2711, pruned_loss=0.04912, over 1425203.26 frames.], batch size: 25, lr: 2.61e-04 2022-05-28 06:18:19,345 INFO [train.py:842] (0/4) Epoch 21, batch 5400, loss[loss=0.1835, simple_loss=0.2576, pruned_loss=0.05472, over 7274.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2708, pruned_loss=0.04932, over 1421496.90 frames.], batch size: 18, lr: 2.61e-04 2022-05-28 06:18:57,182 INFO [train.py:842] (0/4) Epoch 21, batch 5450, loss[loss=0.2003, simple_loss=0.2836, pruned_loss=0.05846, over 7188.00 frames.], tot_loss[loss=0.185, simple_loss=0.2713, pruned_loss=0.04933, over 1423422.89 frames.], batch size: 23, lr: 2.61e-04 2022-05-28 06:19:35,456 INFO [train.py:842] (0/4) Epoch 21, batch 5500, loss[loss=0.18, simple_loss=0.2703, pruned_loss=0.04487, over 7363.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2715, pruned_loss=0.04914, over 1422377.83 frames.], batch size: 23, lr: 2.61e-04 2022-05-28 06:20:13,582 INFO [train.py:842] (0/4) Epoch 21, batch 5550, loss[loss=0.1591, simple_loss=0.2567, pruned_loss=0.03073, over 7348.00 frames.], tot_loss[loss=0.1858, simple_loss=0.2718, pruned_loss=0.04987, over 1418759.49 frames.], batch size: 22, lr: 2.61e-04 2022-05-28 06:20:51,853 INFO [train.py:842] (0/4) Epoch 21, batch 5600, loss[loss=0.1474, simple_loss=0.225, pruned_loss=0.03491, over 7430.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2712, pruned_loss=0.04968, over 1416605.68 frames.], batch size: 17, lr: 2.61e-04 
2022-05-28 06:21:29,930 INFO [train.py:842] (0/4) Epoch 21, batch 5650, loss[loss=0.1896, simple_loss=0.2785, pruned_loss=0.05041, over 7322.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2703, pruned_loss=0.04941, over 1419777.37 frames.], batch size: 21, lr: 2.61e-04 2022-05-28 06:22:08,294 INFO [train.py:842] (0/4) Epoch 21, batch 5700, loss[loss=0.1754, simple_loss=0.2652, pruned_loss=0.0428, over 7037.00 frames.], tot_loss[loss=0.1856, simple_loss=0.2711, pruned_loss=0.05004, over 1422599.05 frames.], batch size: 28, lr: 2.61e-04 2022-05-28 06:22:46,369 INFO [train.py:842] (0/4) Epoch 21, batch 5750, loss[loss=0.1682, simple_loss=0.2669, pruned_loss=0.03474, over 7341.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2708, pruned_loss=0.04969, over 1425875.78 frames.], batch size: 22, lr: 2.61e-04 2022-05-28 06:23:24,676 INFO [train.py:842] (0/4) Epoch 21, batch 5800, loss[loss=0.2151, simple_loss=0.3006, pruned_loss=0.06475, over 7273.00 frames.], tot_loss[loss=0.1861, simple_loss=0.2715, pruned_loss=0.05038, over 1428480.57 frames.], batch size: 25, lr: 2.61e-04 2022-05-28 06:24:02,708 INFO [train.py:842] (0/4) Epoch 21, batch 5850, loss[loss=0.16, simple_loss=0.2549, pruned_loss=0.03251, over 7441.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2717, pruned_loss=0.05085, over 1423261.24 frames.], batch size: 20, lr: 2.61e-04 2022-05-28 06:24:40,934 INFO [train.py:842] (0/4) Epoch 21, batch 5900, loss[loss=0.1786, simple_loss=0.2744, pruned_loss=0.04139, over 7293.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2723, pruned_loss=0.05075, over 1422113.34 frames.], batch size: 24, lr: 2.61e-04 2022-05-28 06:25:18,656 INFO [train.py:842] (0/4) Epoch 21, batch 5950, loss[loss=0.2359, simple_loss=0.3035, pruned_loss=0.08419, over 6687.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2726, pruned_loss=0.05032, over 1416146.49 frames.], batch size: 31, lr: 2.61e-04 2022-05-28 06:25:56,953 INFO [train.py:842] (0/4) Epoch 21, batch 6000, loss[loss=0.1926, simple_loss=0.2623, pruned_loss=0.06148, over 7174.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2729, pruned_loss=0.05015, over 1418592.15 frames.], batch size: 16, lr: 2.61e-04 2022-05-28 06:25:56,954 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 06:26:05,956 INFO [train.py:871] (0/4) Epoch 21, validation: loss=0.1654, simple_loss=0.2646, pruned_loss=0.03309, over 868885.00 frames. 
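Each "Computing validation loss" entry above is followed by a single "Epoch N, validation:" entry reporting loss, simple_loss, and pruned_loss over the same 868885.00-frame validation set, so validation progress can be tracked straight from this log. A minimal parsing sketch, assuming only the entry layout visible here and a hypothetical log file name train-0.log (this is not part of the training code itself):

```python
import re

# Matches the "Epoch N, validation: ..." entries exactly as they appear in this log.
VAL_RE = re.compile(
    r"Epoch (\d+), validation: loss=([\d.]+), "
    r"simple_loss=([\d.]+), pruned_loss=([\d.]+)"
)

def validation_history(log_text):
    """Return (epoch, loss, simple_loss, pruned_loss) tuples in the order they were logged."""
    return [
        (int(e), float(l), float(s), float(p))
        for e, l, s, p in VAL_RE.findall(log_text)
    ]

if __name__ == "__main__":
    # Hypothetical path; substitute the actual log file.
    with open("train-0.log", encoding="utf-8") as f:
        for epoch, loss, simple, pruned in validation_history(f.read()):
            print(f"epoch {epoch}: loss={loss:.4f} simple={simple:.4f} pruned={pruned:.4f}")
```

Run over the validation entries in this stretch of the log, it would show the epoch-21 and epoch-22 validation loss staying in roughly the 0.163–0.167 range.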
2022-05-28 06:26:43,745 INFO [train.py:842] (0/4) Epoch 21, batch 6050, loss[loss=0.1819, simple_loss=0.2685, pruned_loss=0.04766, over 6424.00 frames.], tot_loss[loss=0.1863, simple_loss=0.2725, pruned_loss=0.05002, over 1415652.56 frames.], batch size: 37, lr: 2.61e-04 2022-05-28 06:27:22,087 INFO [train.py:842] (0/4) Epoch 21, batch 6100, loss[loss=0.1453, simple_loss=0.2242, pruned_loss=0.03325, over 7137.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2711, pruned_loss=0.04916, over 1417652.67 frames.], batch size: 17, lr: 2.61e-04 2022-05-28 06:27:59,903 INFO [train.py:842] (0/4) Epoch 21, batch 6150, loss[loss=0.1756, simple_loss=0.2626, pruned_loss=0.0443, over 7341.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2713, pruned_loss=0.04924, over 1418686.61 frames.], batch size: 22, lr: 2.61e-04 2022-05-28 06:28:38,150 INFO [train.py:842] (0/4) Epoch 21, batch 6200, loss[loss=0.1985, simple_loss=0.2801, pruned_loss=0.05848, over 7183.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2712, pruned_loss=0.04892, over 1422832.59 frames.], batch size: 26, lr: 2.61e-04 2022-05-28 06:29:16,114 INFO [train.py:842] (0/4) Epoch 21, batch 6250, loss[loss=0.2111, simple_loss=0.3009, pruned_loss=0.0606, over 7270.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2714, pruned_loss=0.04888, over 1421950.47 frames.], batch size: 24, lr: 2.61e-04 2022-05-28 06:29:54,352 INFO [train.py:842] (0/4) Epoch 21, batch 6300, loss[loss=0.2232, simple_loss=0.3066, pruned_loss=0.06997, over 7339.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2715, pruned_loss=0.0491, over 1424594.01 frames.], batch size: 22, lr: 2.61e-04 2022-05-28 06:30:32,471 INFO [train.py:842] (0/4) Epoch 21, batch 6350, loss[loss=0.1561, simple_loss=0.2558, pruned_loss=0.02824, over 7327.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2705, pruned_loss=0.04854, over 1427778.21 frames.], batch size: 20, lr: 2.61e-04 2022-05-28 06:31:10,710 INFO [train.py:842] (0/4) Epoch 21, batch 6400, loss[loss=0.2471, simple_loss=0.3149, pruned_loss=0.0897, over 5112.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2692, pruned_loss=0.04814, over 1425988.41 frames.], batch size: 52, lr: 2.61e-04 2022-05-28 06:31:48,568 INFO [train.py:842] (0/4) Epoch 21, batch 6450, loss[loss=0.1713, simple_loss=0.2592, pruned_loss=0.04168, over 7429.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2698, pruned_loss=0.04828, over 1424627.74 frames.], batch size: 20, lr: 2.61e-04 2022-05-28 06:32:26,773 INFO [train.py:842] (0/4) Epoch 21, batch 6500, loss[loss=0.1614, simple_loss=0.2399, pruned_loss=0.04142, over 7069.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2712, pruned_loss=0.04924, over 1426781.21 frames.], batch size: 18, lr: 2.61e-04 2022-05-28 06:33:04,410 INFO [train.py:842] (0/4) Epoch 21, batch 6550, loss[loss=0.1719, simple_loss=0.2596, pruned_loss=0.04214, over 7440.00 frames.], tot_loss[loss=0.1852, simple_loss=0.2716, pruned_loss=0.0494, over 1423670.68 frames.], batch size: 20, lr: 2.61e-04 2022-05-28 06:33:42,753 INFO [train.py:842] (0/4) Epoch 21, batch 6600, loss[loss=0.1564, simple_loss=0.2482, pruned_loss=0.03234, over 7335.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2714, pruned_loss=0.04919, over 1422293.97 frames.], batch size: 22, lr: 2.61e-04 2022-05-28 06:34:20,556 INFO [train.py:842] (0/4) Epoch 21, batch 6650, loss[loss=0.1948, simple_loss=0.2724, pruned_loss=0.05864, over 7419.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2718, pruned_loss=0.04949, over 1417824.68 frames.], batch size: 18, lr: 2.60e-04 2022-05-28 06:34:59,046 
INFO [train.py:842] (0/4) Epoch 21, batch 6700, loss[loss=0.209, simple_loss=0.2962, pruned_loss=0.06091, over 7377.00 frames.], tot_loss[loss=0.185, simple_loss=0.2711, pruned_loss=0.04939, over 1423249.90 frames.], batch size: 23, lr: 2.60e-04 2022-05-28 06:35:36,972 INFO [train.py:842] (0/4) Epoch 21, batch 6750, loss[loss=0.1282, simple_loss=0.2175, pruned_loss=0.01951, over 6994.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2709, pruned_loss=0.04889, over 1425477.30 frames.], batch size: 16, lr: 2.60e-04 2022-05-28 06:36:14,935 INFO [train.py:842] (0/4) Epoch 21, batch 6800, loss[loss=0.2236, simple_loss=0.3028, pruned_loss=0.07225, over 7413.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2718, pruned_loss=0.04945, over 1423427.42 frames.], batch size: 21, lr: 2.60e-04 2022-05-28 06:36:52,969 INFO [train.py:842] (0/4) Epoch 21, batch 6850, loss[loss=0.1681, simple_loss=0.2505, pruned_loss=0.0429, over 7456.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2713, pruned_loss=0.04923, over 1426324.59 frames.], batch size: 19, lr: 2.60e-04 2022-05-28 06:37:31,254 INFO [train.py:842] (0/4) Epoch 21, batch 6900, loss[loss=0.1438, simple_loss=0.234, pruned_loss=0.02684, over 7155.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2717, pruned_loss=0.04955, over 1427270.89 frames.], batch size: 19, lr: 2.60e-04 2022-05-28 06:38:09,061 INFO [train.py:842] (0/4) Epoch 21, batch 6950, loss[loss=0.193, simple_loss=0.2857, pruned_loss=0.05018, over 7208.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2731, pruned_loss=0.05014, over 1427198.86 frames.], batch size: 23, lr: 2.60e-04 2022-05-28 06:38:47,239 INFO [train.py:842] (0/4) Epoch 21, batch 7000, loss[loss=0.1904, simple_loss=0.2836, pruned_loss=0.04864, over 7068.00 frames.], tot_loss[loss=0.187, simple_loss=0.2737, pruned_loss=0.05013, over 1427490.47 frames.], batch size: 18, lr: 2.60e-04 2022-05-28 06:39:25,163 INFO [train.py:842] (0/4) Epoch 21, batch 7050, loss[loss=0.1851, simple_loss=0.2649, pruned_loss=0.05262, over 7205.00 frames.], tot_loss[loss=0.1861, simple_loss=0.2724, pruned_loss=0.04985, over 1428063.67 frames.], batch size: 22, lr: 2.60e-04 2022-05-28 06:40:03,301 INFO [train.py:842] (0/4) Epoch 21, batch 7100, loss[loss=0.1569, simple_loss=0.2343, pruned_loss=0.03974, over 7069.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2717, pruned_loss=0.04928, over 1425104.90 frames.], batch size: 18, lr: 2.60e-04 2022-05-28 06:40:41,323 INFO [train.py:842] (0/4) Epoch 21, batch 7150, loss[loss=0.2214, simple_loss=0.2925, pruned_loss=0.07517, over 7156.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2704, pruned_loss=0.04874, over 1428401.99 frames.], batch size: 19, lr: 2.60e-04 2022-05-28 06:41:19,791 INFO [train.py:842] (0/4) Epoch 21, batch 7200, loss[loss=0.1637, simple_loss=0.2583, pruned_loss=0.03455, over 7336.00 frames.], tot_loss[loss=0.1844, simple_loss=0.2706, pruned_loss=0.04913, over 1430773.75 frames.], batch size: 20, lr: 2.60e-04 2022-05-28 06:41:57,863 INFO [train.py:842] (0/4) Epoch 21, batch 7250, loss[loss=0.164, simple_loss=0.2504, pruned_loss=0.03882, over 7422.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2698, pruned_loss=0.04875, over 1426163.31 frames.], batch size: 20, lr: 2.60e-04 2022-05-28 06:42:35,996 INFO [train.py:842] (0/4) Epoch 21, batch 7300, loss[loss=0.1805, simple_loss=0.2615, pruned_loss=0.04977, over 7147.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2694, pruned_loss=0.04818, over 1426708.32 frames.], batch size: 17, lr: 2.60e-04 2022-05-28 06:43:13,735 INFO [train.py:842] (0/4) 
Epoch 21, batch 7350, loss[loss=0.231, simple_loss=0.3078, pruned_loss=0.07711, over 7282.00 frames.], tot_loss[loss=0.1836, simple_loss=0.27, pruned_loss=0.04861, over 1424794.78 frames.], batch size: 24, lr: 2.60e-04 2022-05-28 06:43:51,811 INFO [train.py:842] (0/4) Epoch 21, batch 7400, loss[loss=0.2098, simple_loss=0.3025, pruned_loss=0.05855, over 7328.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2713, pruned_loss=0.04903, over 1424430.56 frames.], batch size: 20, lr: 2.60e-04 2022-05-28 06:44:29,671 INFO [train.py:842] (0/4) Epoch 21, batch 7450, loss[loss=0.1702, simple_loss=0.2649, pruned_loss=0.03779, over 7258.00 frames.], tot_loss[loss=0.1865, simple_loss=0.273, pruned_loss=0.05002, over 1424053.00 frames.], batch size: 19, lr: 2.60e-04 2022-05-28 06:45:08,001 INFO [train.py:842] (0/4) Epoch 21, batch 7500, loss[loss=0.2651, simple_loss=0.333, pruned_loss=0.09855, over 7269.00 frames.], tot_loss[loss=0.1877, simple_loss=0.274, pruned_loss=0.05071, over 1422937.58 frames.], batch size: 19, lr: 2.60e-04 2022-05-28 06:45:45,996 INFO [train.py:842] (0/4) Epoch 21, batch 7550, loss[loss=0.227, simple_loss=0.3155, pruned_loss=0.06923, over 7149.00 frames.], tot_loss[loss=0.1864, simple_loss=0.2727, pruned_loss=0.05003, over 1422619.28 frames.], batch size: 28, lr: 2.60e-04 2022-05-28 06:46:33,468 INFO [train.py:842] (0/4) Epoch 21, batch 7600, loss[loss=0.1771, simple_loss=0.2772, pruned_loss=0.03854, over 7192.00 frames.], tot_loss[loss=0.1869, simple_loss=0.2727, pruned_loss=0.05055, over 1416952.58 frames.], batch size: 22, lr: 2.60e-04 2022-05-28 06:47:11,534 INFO [train.py:842] (0/4) Epoch 21, batch 7650, loss[loss=0.1839, simple_loss=0.2665, pruned_loss=0.05067, over 7284.00 frames.], tot_loss[loss=0.1853, simple_loss=0.271, pruned_loss=0.04981, over 1418681.74 frames.], batch size: 17, lr: 2.60e-04 2022-05-28 06:47:49,871 INFO [train.py:842] (0/4) Epoch 21, batch 7700, loss[loss=0.1714, simple_loss=0.2689, pruned_loss=0.03699, over 7332.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2709, pruned_loss=0.04936, over 1419060.63 frames.], batch size: 22, lr: 2.60e-04 2022-05-28 06:48:27,629 INFO [train.py:842] (0/4) Epoch 21, batch 7750, loss[loss=0.1625, simple_loss=0.245, pruned_loss=0.04, over 7162.00 frames.], tot_loss[loss=0.1856, simple_loss=0.2715, pruned_loss=0.04988, over 1418332.47 frames.], batch size: 18, lr: 2.60e-04 2022-05-28 06:49:05,967 INFO [train.py:842] (0/4) Epoch 21, batch 7800, loss[loss=0.1644, simple_loss=0.2516, pruned_loss=0.03858, over 7403.00 frames.], tot_loss[loss=0.1864, simple_loss=0.272, pruned_loss=0.05036, over 1423368.16 frames.], batch size: 18, lr: 2.60e-04 2022-05-28 06:49:44,027 INFO [train.py:842] (0/4) Epoch 21, batch 7850, loss[loss=0.1717, simple_loss=0.2626, pruned_loss=0.04044, over 7223.00 frames.], tot_loss[loss=0.185, simple_loss=0.2707, pruned_loss=0.04968, over 1423495.58 frames.], batch size: 21, lr: 2.60e-04 2022-05-28 06:50:22,196 INFO [train.py:842] (0/4) Epoch 21, batch 7900, loss[loss=0.1777, simple_loss=0.2729, pruned_loss=0.04128, over 7321.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2698, pruned_loss=0.04892, over 1424895.26 frames.], batch size: 21, lr: 2.60e-04 2022-05-28 06:51:00,110 INFO [train.py:842] (0/4) Epoch 21, batch 7950, loss[loss=0.1728, simple_loss=0.2604, pruned_loss=0.04259, over 6993.00 frames.], tot_loss[loss=0.184, simple_loss=0.2699, pruned_loss=0.04901, over 1427390.48 frames.], batch size: 16, lr: 2.60e-04 2022-05-28 06:51:38,305 INFO [train.py:842] (0/4) Epoch 21, batch 8000, 
loss[loss=0.1841, simple_loss=0.2705, pruned_loss=0.04879, over 7308.00 frames.], tot_loss[loss=0.1849, simple_loss=0.271, pruned_loss=0.04939, over 1426096.43 frames.], batch size: 25, lr: 2.60e-04 2022-05-28 06:52:16,320 INFO [train.py:842] (0/4) Epoch 21, batch 8050, loss[loss=0.2418, simple_loss=0.3053, pruned_loss=0.08908, over 7382.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2714, pruned_loss=0.04939, over 1428631.18 frames.], batch size: 23, lr: 2.60e-04 2022-05-28 06:52:54,472 INFO [train.py:842] (0/4) Epoch 21, batch 8100, loss[loss=0.1843, simple_loss=0.2765, pruned_loss=0.04601, over 7257.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2713, pruned_loss=0.04965, over 1427791.01 frames.], batch size: 24, lr: 2.60e-04 2022-05-28 06:53:32,524 INFO [train.py:842] (0/4) Epoch 21, batch 8150, loss[loss=0.2074, simple_loss=0.2866, pruned_loss=0.06409, over 7335.00 frames.], tot_loss[loss=0.1846, simple_loss=0.271, pruned_loss=0.04909, over 1427330.59 frames.], batch size: 20, lr: 2.59e-04 2022-05-28 06:53:40,460 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-192000.pt 2022-05-28 06:54:13,534 INFO [train.py:842] (0/4) Epoch 21, batch 8200, loss[loss=0.1825, simple_loss=0.2659, pruned_loss=0.04952, over 7060.00 frames.], tot_loss[loss=0.1851, simple_loss=0.272, pruned_loss=0.04913, over 1428959.04 frames.], batch size: 18, lr: 2.59e-04 2022-05-28 06:54:51,647 INFO [train.py:842] (0/4) Epoch 21, batch 8250, loss[loss=0.2705, simple_loss=0.3499, pruned_loss=0.09552, over 5056.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2718, pruned_loss=0.04888, over 1428865.27 frames.], batch size: 52, lr: 2.59e-04 2022-05-28 06:55:30,094 INFO [train.py:842] (0/4) Epoch 21, batch 8300, loss[loss=0.1866, simple_loss=0.2747, pruned_loss=0.04925, over 7306.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2697, pruned_loss=0.04782, over 1428910.99 frames.], batch size: 25, lr: 2.59e-04 2022-05-28 06:56:07,847 INFO [train.py:842] (0/4) Epoch 21, batch 8350, loss[loss=0.2081, simple_loss=0.3001, pruned_loss=0.05804, over 7421.00 frames.], tot_loss[loss=0.185, simple_loss=0.2716, pruned_loss=0.04919, over 1428087.05 frames.], batch size: 21, lr: 2.59e-04 2022-05-28 06:56:46,223 INFO [train.py:842] (0/4) Epoch 21, batch 8400, loss[loss=0.2007, simple_loss=0.2914, pruned_loss=0.05496, over 7094.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2714, pruned_loss=0.04937, over 1431315.52 frames.], batch size: 26, lr: 2.59e-04 2022-05-28 06:57:24,196 INFO [train.py:842] (0/4) Epoch 21, batch 8450, loss[loss=0.1509, simple_loss=0.2487, pruned_loss=0.02655, over 7138.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2702, pruned_loss=0.04876, over 1426115.19 frames.], batch size: 20, lr: 2.59e-04 2022-05-28 06:58:02,574 INFO [train.py:842] (0/4) Epoch 21, batch 8500, loss[loss=0.163, simple_loss=0.2534, pruned_loss=0.03627, over 7431.00 frames.], tot_loss[loss=0.1852, simple_loss=0.2712, pruned_loss=0.04961, over 1424486.90 frames.], batch size: 20, lr: 2.59e-04 2022-05-28 06:58:40,505 INFO [train.py:842] (0/4) Epoch 21, batch 8550, loss[loss=0.1515, simple_loss=0.2364, pruned_loss=0.03328, over 7258.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2702, pruned_loss=0.04886, over 1425105.48 frames.], batch size: 17, lr: 2.59e-04 2022-05-28 06:59:18,639 INFO [train.py:842] (0/4) Epoch 21, batch 8600, loss[loss=0.2036, simple_loss=0.3004, pruned_loss=0.05335, over 7314.00 frames.], tot_loss[loss=0.1833, simple_loss=0.2702, pruned_loss=0.04823, over 
1422149.28 frames.], batch size: 25, lr: 2.59e-04 2022-05-28 06:59:56,661 INFO [train.py:842] (0/4) Epoch 21, batch 8650, loss[loss=0.2181, simple_loss=0.2887, pruned_loss=0.07377, over 7167.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2704, pruned_loss=0.04913, over 1418787.22 frames.], batch size: 18, lr: 2.59e-04 2022-05-28 07:00:35,009 INFO [train.py:842] (0/4) Epoch 21, batch 8700, loss[loss=0.1742, simple_loss=0.2716, pruned_loss=0.0384, over 7113.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2683, pruned_loss=0.04832, over 1414989.83 frames.], batch size: 21, lr: 2.59e-04 2022-05-28 07:01:12,967 INFO [train.py:842] (0/4) Epoch 21, batch 8750, loss[loss=0.1751, simple_loss=0.2644, pruned_loss=0.04284, over 6728.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2677, pruned_loss=0.04795, over 1417010.05 frames.], batch size: 31, lr: 2.59e-04 2022-05-28 07:01:51,338 INFO [train.py:842] (0/4) Epoch 21, batch 8800, loss[loss=0.1507, simple_loss=0.2249, pruned_loss=0.03822, over 7282.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2674, pruned_loss=0.04759, over 1420749.20 frames.], batch size: 17, lr: 2.59e-04 2022-05-28 07:02:29,285 INFO [train.py:842] (0/4) Epoch 21, batch 8850, loss[loss=0.1867, simple_loss=0.274, pruned_loss=0.04966, over 6294.00 frames.], tot_loss[loss=0.1826, simple_loss=0.269, pruned_loss=0.04805, over 1417682.45 frames.], batch size: 37, lr: 2.59e-04 2022-05-28 07:03:07,721 INFO [train.py:842] (0/4) Epoch 21, batch 8900, loss[loss=0.202, simple_loss=0.28, pruned_loss=0.06205, over 7121.00 frames.], tot_loss[loss=0.1821, simple_loss=0.2684, pruned_loss=0.04789, over 1418862.88 frames.], batch size: 21, lr: 2.59e-04 2022-05-28 07:03:45,611 INFO [train.py:842] (0/4) Epoch 21, batch 8950, loss[loss=0.1724, simple_loss=0.2685, pruned_loss=0.03816, over 7144.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2693, pruned_loss=0.04857, over 1409985.24 frames.], batch size: 20, lr: 2.59e-04 2022-05-28 07:04:23,621 INFO [train.py:842] (0/4) Epoch 21, batch 9000, loss[loss=0.1776, simple_loss=0.2807, pruned_loss=0.03727, over 6309.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2711, pruned_loss=0.04992, over 1396306.40 frames.], batch size: 37, lr: 2.59e-04 2022-05-28 07:04:23,622 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 07:04:32,660 INFO [train.py:871] (0/4) Epoch 21, validation: loss=0.1634, simple_loss=0.2627, pruned_loss=0.03203, over 868885.00 frames. 
2022-05-28 07:05:10,702 INFO [train.py:842] (0/4) Epoch 21, batch 9050, loss[loss=0.2019, simple_loss=0.2687, pruned_loss=0.06755, over 6839.00 frames.], tot_loss[loss=0.1874, simple_loss=0.2722, pruned_loss=0.05132, over 1383826.80 frames.], batch size: 15, lr: 2.59e-04 2022-05-28 07:05:48,353 INFO [train.py:842] (0/4) Epoch 21, batch 9100, loss[loss=0.1949, simple_loss=0.2771, pruned_loss=0.0563, over 5233.00 frames.], tot_loss[loss=0.188, simple_loss=0.2729, pruned_loss=0.05149, over 1355799.62 frames.], batch size: 52, lr: 2.59e-04 2022-05-28 07:06:25,189 INFO [train.py:842] (0/4) Epoch 21, batch 9150, loss[loss=0.2159, simple_loss=0.3008, pruned_loss=0.06549, over 7109.00 frames.], tot_loss[loss=0.1905, simple_loss=0.2759, pruned_loss=0.05259, over 1320676.66 frames.], batch size: 21, lr: 2.59e-04 2022-05-28 07:06:56,223 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-21.pt 2022-05-28 07:07:15,921 INFO [train.py:842] (0/4) Epoch 22, batch 0, loss[loss=0.1897, simple_loss=0.2831, pruned_loss=0.04812, over 7270.00 frames.], tot_loss[loss=0.1897, simple_loss=0.2831, pruned_loss=0.04812, over 7270.00 frames.], batch size: 25, lr: 2.53e-04 2022-05-28 07:07:53,994 INFO [train.py:842] (0/4) Epoch 22, batch 50, loss[loss=0.1448, simple_loss=0.2438, pruned_loss=0.02284, over 7160.00 frames.], tot_loss[loss=0.1854, simple_loss=0.273, pruned_loss=0.04896, over 317953.50 frames.], batch size: 18, lr: 2.53e-04 2022-05-28 07:08:32,412 INFO [train.py:842] (0/4) Epoch 22, batch 100, loss[loss=0.184, simple_loss=0.2813, pruned_loss=0.04331, over 7115.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2701, pruned_loss=0.04835, over 563966.60 frames.], batch size: 21, lr: 2.53e-04 2022-05-28 07:09:10,274 INFO [train.py:842] (0/4) Epoch 22, batch 150, loss[loss=0.2054, simple_loss=0.2914, pruned_loss=0.05967, over 7324.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2708, pruned_loss=0.04835, over 753744.01 frames.], batch size: 21, lr: 2.53e-04 2022-05-28 07:09:48,501 INFO [train.py:842] (0/4) Epoch 22, batch 200, loss[loss=0.167, simple_loss=0.2721, pruned_loss=0.03095, over 7338.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2697, pruned_loss=0.04747, over 901419.13 frames.], batch size: 22, lr: 2.53e-04 2022-05-28 07:10:26,523 INFO [train.py:842] (0/4) Epoch 22, batch 250, loss[loss=0.1644, simple_loss=0.258, pruned_loss=0.0354, over 7263.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2692, pruned_loss=0.04711, over 1015431.94 frames.], batch size: 19, lr: 2.53e-04 2022-05-28 07:11:04,686 INFO [train.py:842] (0/4) Epoch 22, batch 300, loss[loss=0.1727, simple_loss=0.2621, pruned_loss=0.04169, over 7232.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2702, pruned_loss=0.04779, over 1107658.42 frames.], batch size: 20, lr: 2.53e-04 2022-05-28 07:11:42,631 INFO [train.py:842] (0/4) Epoch 22, batch 350, loss[loss=0.1578, simple_loss=0.2572, pruned_loss=0.02922, over 7166.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2692, pruned_loss=0.0479, over 1177985.37 frames.], batch size: 19, lr: 2.53e-04 2022-05-28 07:12:20,795 INFO [train.py:842] (0/4) Epoch 22, batch 400, loss[loss=0.2105, simple_loss=0.2979, pruned_loss=0.06158, over 7214.00 frames.], tot_loss[loss=0.1824, simple_loss=0.2694, pruned_loss=0.0477, over 1230843.40 frames.], batch size: 21, lr: 2.53e-04 2022-05-28 07:12:58,840 INFO [train.py:842] (0/4) Epoch 22, batch 450, loss[loss=0.2166, simple_loss=0.291, pruned_loss=0.07106, over 4953.00 frames.], tot_loss[loss=0.1827, 
simple_loss=0.2694, pruned_loss=0.04801, over 1273540.52 frames.], batch size: 52, lr: 2.53e-04 2022-05-28 07:13:36,979 INFO [train.py:842] (0/4) Epoch 22, batch 500, loss[loss=0.1825, simple_loss=0.2788, pruned_loss=0.04312, over 7298.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2702, pruned_loss=0.04757, over 1308896.18 frames.], batch size: 25, lr: 2.53e-04 2022-05-28 07:14:14,810 INFO [train.py:842] (0/4) Epoch 22, batch 550, loss[loss=0.1566, simple_loss=0.2519, pruned_loss=0.0307, over 7436.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2715, pruned_loss=0.04808, over 1332279.41 frames.], batch size: 20, lr: 2.53e-04 2022-05-28 07:14:53,168 INFO [train.py:842] (0/4) Epoch 22, batch 600, loss[loss=0.2107, simple_loss=0.2964, pruned_loss=0.06249, over 7331.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2705, pruned_loss=0.0481, over 1354428.80 frames.], batch size: 22, lr: 2.53e-04 2022-05-28 07:15:30,880 INFO [train.py:842] (0/4) Epoch 22, batch 650, loss[loss=0.2179, simple_loss=0.3068, pruned_loss=0.06448, over 7332.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2712, pruned_loss=0.04816, over 1369674.96 frames.], batch size: 22, lr: 2.53e-04 2022-05-28 07:16:09,253 INFO [train.py:842] (0/4) Epoch 22, batch 700, loss[loss=0.1744, simple_loss=0.2699, pruned_loss=0.03951, over 7316.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2704, pruned_loss=0.0484, over 1379428.98 frames.], batch size: 25, lr: 2.53e-04 2022-05-28 07:16:47,343 INFO [train.py:842] (0/4) Epoch 22, batch 750, loss[loss=0.1507, simple_loss=0.2383, pruned_loss=0.03156, over 7171.00 frames.], tot_loss[loss=0.1828, simple_loss=0.2698, pruned_loss=0.0479, over 1386930.60 frames.], batch size: 18, lr: 2.53e-04 2022-05-28 07:17:25,593 INFO [train.py:842] (0/4) Epoch 22, batch 800, loss[loss=0.2033, simple_loss=0.2892, pruned_loss=0.05868, over 7287.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2699, pruned_loss=0.04771, over 1399345.33 frames.], batch size: 25, lr: 2.53e-04 2022-05-28 07:18:03,627 INFO [train.py:842] (0/4) Epoch 22, batch 850, loss[loss=0.1652, simple_loss=0.2398, pruned_loss=0.04528, over 7396.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2695, pruned_loss=0.04791, over 1404710.28 frames.], batch size: 18, lr: 2.52e-04 2022-05-28 07:18:41,828 INFO [train.py:842] (0/4) Epoch 22, batch 900, loss[loss=0.1673, simple_loss=0.2558, pruned_loss=0.03938, over 6408.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2692, pruned_loss=0.04768, over 1408014.41 frames.], batch size: 38, lr: 2.52e-04 2022-05-28 07:19:19,850 INFO [train.py:842] (0/4) Epoch 22, batch 950, loss[loss=0.1933, simple_loss=0.2738, pruned_loss=0.05643, over 7281.00 frames.], tot_loss[loss=0.1833, simple_loss=0.27, pruned_loss=0.04832, over 1411150.43 frames.], batch size: 18, lr: 2.52e-04 2022-05-28 07:19:57,840 INFO [train.py:842] (0/4) Epoch 22, batch 1000, loss[loss=0.1486, simple_loss=0.2398, pruned_loss=0.02869, over 7163.00 frames.], tot_loss[loss=0.1852, simple_loss=0.2719, pruned_loss=0.04922, over 1411872.02 frames.], batch size: 19, lr: 2.52e-04 2022-05-28 07:20:36,020 INFO [train.py:842] (0/4) Epoch 22, batch 1050, loss[loss=0.2113, simple_loss=0.3016, pruned_loss=0.06049, over 7345.00 frames.], tot_loss[loss=0.1835, simple_loss=0.2699, pruned_loss=0.04853, over 1415313.90 frames.], batch size: 22, lr: 2.52e-04 2022-05-28 07:21:14,300 INFO [train.py:842] (0/4) Epoch 22, batch 1100, loss[loss=0.2123, simple_loss=0.297, pruned_loss=0.06377, over 6341.00 frames.], tot_loss[loss=0.1835, simple_loss=0.2703, 
pruned_loss=0.04837, over 1418869.70 frames.], batch size: 37, lr: 2.52e-04 2022-05-28 07:21:52,335 INFO [train.py:842] (0/4) Epoch 22, batch 1150, loss[loss=0.1924, simple_loss=0.267, pruned_loss=0.05889, over 7243.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2705, pruned_loss=0.04842, over 1420743.37 frames.], batch size: 19, lr: 2.52e-04 2022-05-28 07:22:30,714 INFO [train.py:842] (0/4) Epoch 22, batch 1200, loss[loss=0.2026, simple_loss=0.2997, pruned_loss=0.05275, over 7261.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2701, pruned_loss=0.04854, over 1421899.33 frames.], batch size: 25, lr: 2.52e-04 2022-05-28 07:23:08,666 INFO [train.py:842] (0/4) Epoch 22, batch 1250, loss[loss=0.1474, simple_loss=0.2239, pruned_loss=0.03544, over 7003.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2702, pruned_loss=0.04831, over 1421573.61 frames.], batch size: 16, lr: 2.52e-04 2022-05-28 07:23:46,945 INFO [train.py:842] (0/4) Epoch 22, batch 1300, loss[loss=0.1585, simple_loss=0.2357, pruned_loss=0.04067, over 7169.00 frames.], tot_loss[loss=0.1833, simple_loss=0.2701, pruned_loss=0.04824, over 1420005.81 frames.], batch size: 19, lr: 2.52e-04 2022-05-28 07:24:25,110 INFO [train.py:842] (0/4) Epoch 22, batch 1350, loss[loss=0.1823, simple_loss=0.2838, pruned_loss=0.04035, over 7395.00 frames.], tot_loss[loss=0.184, simple_loss=0.2705, pruned_loss=0.04877, over 1423957.72 frames.], batch size: 21, lr: 2.52e-04 2022-05-28 07:25:03,515 INFO [train.py:842] (0/4) Epoch 22, batch 1400, loss[loss=0.2099, simple_loss=0.3044, pruned_loss=0.05775, over 7209.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2698, pruned_loss=0.04872, over 1420201.26 frames.], batch size: 22, lr: 2.52e-04 2022-05-28 07:25:41,554 INFO [train.py:842] (0/4) Epoch 22, batch 1450, loss[loss=0.182, simple_loss=0.2748, pruned_loss=0.04465, over 7420.00 frames.], tot_loss[loss=0.184, simple_loss=0.2706, pruned_loss=0.04875, over 1425077.02 frames.], batch size: 20, lr: 2.52e-04 2022-05-28 07:26:19,928 INFO [train.py:842] (0/4) Epoch 22, batch 1500, loss[loss=0.1758, simple_loss=0.2681, pruned_loss=0.04176, over 7233.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2705, pruned_loss=0.0487, over 1427133.60 frames.], batch size: 20, lr: 2.52e-04 2022-05-28 07:26:58,143 INFO [train.py:842] (0/4) Epoch 22, batch 1550, loss[loss=0.1667, simple_loss=0.2562, pruned_loss=0.03862, over 7225.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2696, pruned_loss=0.04844, over 1429069.17 frames.], batch size: 20, lr: 2.52e-04 2022-05-28 07:27:45,576 INFO [train.py:842] (0/4) Epoch 22, batch 1600, loss[loss=0.142, simple_loss=0.2296, pruned_loss=0.0272, over 7181.00 frames.], tot_loss[loss=0.183, simple_loss=0.2697, pruned_loss=0.04819, over 1430574.55 frames.], batch size: 16, lr: 2.52e-04 2022-05-28 07:28:23,577 INFO [train.py:842] (0/4) Epoch 22, batch 1650, loss[loss=0.1732, simple_loss=0.2628, pruned_loss=0.0418, over 6852.00 frames.], tot_loss[loss=0.184, simple_loss=0.2709, pruned_loss=0.04853, over 1432411.76 frames.], batch size: 31, lr: 2.52e-04 2022-05-28 07:29:02,069 INFO [train.py:842] (0/4) Epoch 22, batch 1700, loss[loss=0.1735, simple_loss=0.2704, pruned_loss=0.03828, over 7331.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2695, pruned_loss=0.04788, over 1434044.05 frames.], batch size: 22, lr: 2.52e-04 2022-05-28 07:29:40,020 INFO [train.py:842] (0/4) Epoch 22, batch 1750, loss[loss=0.1717, simple_loss=0.2631, pruned_loss=0.0402, over 7226.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2696, pruned_loss=0.04812, over 
1433360.83 frames.], batch size: 20, lr: 2.52e-04 2022-05-28 07:30:27,696 INFO [train.py:842] (0/4) Epoch 22, batch 1800, loss[loss=0.1788, simple_loss=0.2684, pruned_loss=0.0446, over 7278.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2697, pruned_loss=0.04853, over 1430249.79 frames.], batch size: 17, lr: 2.52e-04 2022-05-28 07:31:05,562 INFO [train.py:842] (0/4) Epoch 22, batch 1850, loss[loss=0.1665, simple_loss=0.2577, pruned_loss=0.03771, over 6427.00 frames.], tot_loss[loss=0.1833, simple_loss=0.2695, pruned_loss=0.04857, over 1426094.52 frames.], batch size: 38, lr: 2.52e-04 2022-05-28 07:31:53,091 INFO [train.py:842] (0/4) Epoch 22, batch 1900, loss[loss=0.2311, simple_loss=0.3193, pruned_loss=0.07139, over 5117.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2705, pruned_loss=0.04897, over 1423908.90 frames.], batch size: 53, lr: 2.52e-04 2022-05-28 07:32:31,068 INFO [train.py:842] (0/4) Epoch 22, batch 1950, loss[loss=0.1512, simple_loss=0.2322, pruned_loss=0.0351, over 7269.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2695, pruned_loss=0.04838, over 1425285.38 frames.], batch size: 17, lr: 2.52e-04 2022-05-28 07:33:09,366 INFO [train.py:842] (0/4) Epoch 22, batch 2000, loss[loss=0.1884, simple_loss=0.2696, pruned_loss=0.05358, over 7339.00 frames.], tot_loss[loss=0.1838, simple_loss=0.27, pruned_loss=0.04881, over 1428272.89 frames.], batch size: 20, lr: 2.52e-04 2022-05-28 07:33:47,294 INFO [train.py:842] (0/4) Epoch 22, batch 2050, loss[loss=0.1821, simple_loss=0.2581, pruned_loss=0.05306, over 7290.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2713, pruned_loss=0.04926, over 1428886.38 frames.], batch size: 17, lr: 2.52e-04 2022-05-28 07:34:25,510 INFO [train.py:842] (0/4) Epoch 22, batch 2100, loss[loss=0.1672, simple_loss=0.2523, pruned_loss=0.04103, over 7400.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2719, pruned_loss=0.04945, over 1427619.85 frames.], batch size: 18, lr: 2.52e-04 2022-05-28 07:35:03,400 INFO [train.py:842] (0/4) Epoch 22, batch 2150, loss[loss=0.1548, simple_loss=0.2409, pruned_loss=0.03442, over 7165.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2715, pruned_loss=0.04936, over 1423953.36 frames.], batch size: 18, lr: 2.52e-04 2022-05-28 07:35:41,675 INFO [train.py:842] (0/4) Epoch 22, batch 2200, loss[loss=0.1779, simple_loss=0.2698, pruned_loss=0.04302, over 7107.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2706, pruned_loss=0.04846, over 1426988.72 frames.], batch size: 21, lr: 2.52e-04 2022-05-28 07:36:19,629 INFO [train.py:842] (0/4) Epoch 22, batch 2250, loss[loss=0.1364, simple_loss=0.221, pruned_loss=0.0259, over 7164.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2697, pruned_loss=0.04765, over 1424354.32 frames.], batch size: 16, lr: 2.52e-04 2022-05-28 07:36:57,801 INFO [train.py:842] (0/4) Epoch 22, batch 2300, loss[loss=0.2198, simple_loss=0.3091, pruned_loss=0.06523, over 4773.00 frames.], tot_loss[loss=0.1815, simple_loss=0.269, pruned_loss=0.04705, over 1425289.42 frames.], batch size: 53, lr: 2.52e-04 2022-05-28 07:37:35,868 INFO [train.py:842] (0/4) Epoch 22, batch 2350, loss[loss=0.1774, simple_loss=0.2725, pruned_loss=0.04114, over 6470.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2684, pruned_loss=0.04641, over 1427632.14 frames.], batch size: 38, lr: 2.52e-04 2022-05-28 07:38:14,394 INFO [train.py:842] (0/4) Epoch 22, batch 2400, loss[loss=0.163, simple_loss=0.2347, pruned_loss=0.04565, over 7119.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2679, pruned_loss=0.04654, over 1427215.88 frames.], batch 
size: 17, lr: 2.51e-04 2022-05-28 07:38:52,298 INFO [train.py:842] (0/4) Epoch 22, batch 2450, loss[loss=0.1634, simple_loss=0.2385, pruned_loss=0.04416, over 7286.00 frames.], tot_loss[loss=0.181, simple_loss=0.2683, pruned_loss=0.04685, over 1426180.07 frames.], batch size: 17, lr: 2.51e-04 2022-05-28 07:39:30,626 INFO [train.py:842] (0/4) Epoch 22, batch 2500, loss[loss=0.1819, simple_loss=0.2734, pruned_loss=0.04515, over 7414.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2674, pruned_loss=0.04615, over 1423505.28 frames.], batch size: 21, lr: 2.51e-04 2022-05-28 07:40:08,484 INFO [train.py:842] (0/4) Epoch 22, batch 2550, loss[loss=0.1514, simple_loss=0.2405, pruned_loss=0.03115, over 7070.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2671, pruned_loss=0.04613, over 1422326.29 frames.], batch size: 18, lr: 2.51e-04 2022-05-28 07:40:46,512 INFO [train.py:842] (0/4) Epoch 22, batch 2600, loss[loss=0.1795, simple_loss=0.264, pruned_loss=0.04751, over 7158.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2691, pruned_loss=0.04722, over 1418803.28 frames.], batch size: 19, lr: 2.51e-04 2022-05-28 07:41:24,749 INFO [train.py:842] (0/4) Epoch 22, batch 2650, loss[loss=0.1638, simple_loss=0.2617, pruned_loss=0.03298, over 7265.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2684, pruned_loss=0.04713, over 1421919.85 frames.], batch size: 19, lr: 2.51e-04 2022-05-28 07:42:02,989 INFO [train.py:842] (0/4) Epoch 22, batch 2700, loss[loss=0.2166, simple_loss=0.3084, pruned_loss=0.06236, over 7171.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2686, pruned_loss=0.0474, over 1421078.61 frames.], batch size: 18, lr: 2.51e-04 2022-05-28 07:42:40,714 INFO [train.py:842] (0/4) Epoch 22, batch 2750, loss[loss=0.1598, simple_loss=0.2442, pruned_loss=0.03769, over 7070.00 frames.], tot_loss[loss=0.1809, simple_loss=0.2679, pruned_loss=0.04696, over 1420756.27 frames.], batch size: 18, lr: 2.51e-04 2022-05-28 07:43:18,873 INFO [train.py:842] (0/4) Epoch 22, batch 2800, loss[loss=0.1554, simple_loss=0.2448, pruned_loss=0.03306, over 7280.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2674, pruned_loss=0.04691, over 1420879.77 frames.], batch size: 18, lr: 2.51e-04 2022-05-28 07:43:56,990 INFO [train.py:842] (0/4) Epoch 22, batch 2850, loss[loss=0.2093, simple_loss=0.3007, pruned_loss=0.05898, over 7162.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2671, pruned_loss=0.04682, over 1419316.94 frames.], batch size: 19, lr: 2.51e-04 2022-05-28 07:44:35,179 INFO [train.py:842] (0/4) Epoch 22, batch 2900, loss[loss=0.1927, simple_loss=0.2807, pruned_loss=0.05237, over 7162.00 frames.], tot_loss[loss=0.181, simple_loss=0.2679, pruned_loss=0.04708, over 1421290.00 frames.], batch size: 19, lr: 2.51e-04 2022-05-28 07:45:13,313 INFO [train.py:842] (0/4) Epoch 22, batch 2950, loss[loss=0.1886, simple_loss=0.2782, pruned_loss=0.04955, over 7421.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2682, pruned_loss=0.04739, over 1421492.60 frames.], batch size: 21, lr: 2.51e-04 2022-05-28 07:45:51,548 INFO [train.py:842] (0/4) Epoch 22, batch 3000, loss[loss=0.1496, simple_loss=0.2427, pruned_loss=0.02829, over 7170.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2675, pruned_loss=0.04698, over 1425505.74 frames.], batch size: 18, lr: 2.51e-04 2022-05-28 07:45:51,549 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 07:46:00,591 INFO [train.py:871] (0/4) Epoch 22, validation: loss=0.1649, simple_loss=0.2645, pruned_loss=0.03269, over 868885.00 frames. 
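The per-batch entries share one fixed layout as well: a loss[...] block for the current batch, a tot_loss[...] block holding the running average over roughly 1.4M frames, then the batch size and the current learning rate. A companion sketch under the same assumptions as above (hypothetical file name, entry layout exactly as shown here) that pulls out the running tot_loss and lr so the training curve can be inspected or plotted:

```python
import re

# One match per "Epoch N, batch M, ..." entry: epoch, batch index, running tot_loss, and lr.
BATCH_RE = re.compile(
    r"Epoch (\d+), batch (\d+), .*?"
    r"tot_loss\[loss=([\d.]+), simple_loss=([\d.]+), pruned_loss=([\d.]+), "
    r"over ([\d.]+) frames\.\], batch size: (\d+), lr: ([\d.e-]+)"
)

def training_curve(log_text):
    """Yield (epoch, batch, running tot_loss, lr) for every per-batch entry, in log order."""
    for m in BATCH_RE.finditer(log_text):
        yield int(m.group(1)), int(m.group(2)), float(m.group(3)), float(m.group(8))

if __name__ == "__main__":
    with open("train-0.log", encoding="utf-8") as f:  # hypothetical path, as above
        for epoch, batch, tot_loss, lr in training_curve(f.read()):
            print(epoch, batch, tot_loss, lr)
```

Reading the whole file into memory keeps the sketch short; for very large logs the same pattern can be applied line by line instead.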
2022-05-28 07:46:38,639 INFO [train.py:842] (0/4) Epoch 22, batch 3050, loss[loss=0.2023, simple_loss=0.2924, pruned_loss=0.05611, over 6992.00 frames.], tot_loss[loss=0.1812, simple_loss=0.268, pruned_loss=0.04726, over 1427471.38 frames.], batch size: 28, lr: 2.51e-04 2022-05-28 07:47:17,289 INFO [train.py:842] (0/4) Epoch 22, batch 3100, loss[loss=0.1976, simple_loss=0.289, pruned_loss=0.0531, over 4773.00 frames.], tot_loss[loss=0.181, simple_loss=0.2677, pruned_loss=0.04721, over 1427468.06 frames.], batch size: 52, lr: 2.51e-04 2022-05-28 07:47:55,457 INFO [train.py:842] (0/4) Epoch 22, batch 3150, loss[loss=0.1577, simple_loss=0.2503, pruned_loss=0.0326, over 7416.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2675, pruned_loss=0.0474, over 1425305.95 frames.], batch size: 21, lr: 2.51e-04 2022-05-28 07:48:33,859 INFO [train.py:842] (0/4) Epoch 22, batch 3200, loss[loss=0.1518, simple_loss=0.2379, pruned_loss=0.03279, over 7061.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2678, pruned_loss=0.04769, over 1426808.59 frames.], batch size: 18, lr: 2.51e-04 2022-05-28 07:49:11,737 INFO [train.py:842] (0/4) Epoch 22, batch 3250, loss[loss=0.1838, simple_loss=0.26, pruned_loss=0.0538, over 7020.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2689, pruned_loss=0.04818, over 1428433.25 frames.], batch size: 16, lr: 2.51e-04 2022-05-28 07:49:49,917 INFO [train.py:842] (0/4) Epoch 22, batch 3300, loss[loss=0.1805, simple_loss=0.2727, pruned_loss=0.04412, over 7431.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2694, pruned_loss=0.04797, over 1431214.64 frames.], batch size: 20, lr: 2.51e-04 2022-05-28 07:50:27,863 INFO [train.py:842] (0/4) Epoch 22, batch 3350, loss[loss=0.1818, simple_loss=0.2592, pruned_loss=0.05213, over 7355.00 frames.], tot_loss[loss=0.1833, simple_loss=0.2701, pruned_loss=0.04821, over 1430016.96 frames.], batch size: 19, lr: 2.51e-04 2022-05-28 07:51:05,905 INFO [train.py:842] (0/4) Epoch 22, batch 3400, loss[loss=0.1978, simple_loss=0.2756, pruned_loss=0.05999, over 7142.00 frames.], tot_loss[loss=0.1819, simple_loss=0.269, pruned_loss=0.04739, over 1425934.62 frames.], batch size: 17, lr: 2.51e-04 2022-05-28 07:51:43,851 INFO [train.py:842] (0/4) Epoch 22, batch 3450, loss[loss=0.2111, simple_loss=0.3003, pruned_loss=0.06094, over 7345.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2692, pruned_loss=0.04758, over 1427519.97 frames.], batch size: 22, lr: 2.51e-04 2022-05-28 07:52:22,331 INFO [train.py:842] (0/4) Epoch 22, batch 3500, loss[loss=0.1913, simple_loss=0.2853, pruned_loss=0.04863, over 7342.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2699, pruned_loss=0.04798, over 1430435.27 frames.], batch size: 22, lr: 2.51e-04 2022-05-28 07:53:00,184 INFO [train.py:842] (0/4) Epoch 22, batch 3550, loss[loss=0.215, simple_loss=0.3045, pruned_loss=0.06274, over 6644.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2696, pruned_loss=0.04769, over 1427939.03 frames.], batch size: 31, lr: 2.51e-04 2022-05-28 07:53:38,511 INFO [train.py:842] (0/4) Epoch 22, batch 3600, loss[loss=0.1743, simple_loss=0.2555, pruned_loss=0.04658, over 7260.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2685, pruned_loss=0.04742, over 1423663.17 frames.], batch size: 17, lr: 2.51e-04 2022-05-28 07:54:16,513 INFO [train.py:842] (0/4) Epoch 22, batch 3650, loss[loss=0.146, simple_loss=0.2417, pruned_loss=0.02516, over 7254.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2683, pruned_loss=0.04708, over 1426238.91 frames.], batch size: 19, lr: 2.51e-04 2022-05-28 07:54:54,674 INFO 
[train.py:842] (0/4) Epoch 22, batch 3700, loss[loss=0.2088, simple_loss=0.2945, pruned_loss=0.06153, over 7139.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2676, pruned_loss=0.0466, over 1427580.58 frames.], batch size: 20, lr: 2.51e-04 2022-05-28 07:55:32,495 INFO [train.py:842] (0/4) Epoch 22, batch 3750, loss[loss=0.1673, simple_loss=0.2609, pruned_loss=0.03684, over 7267.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2675, pruned_loss=0.04663, over 1429954.44 frames.], batch size: 24, lr: 2.51e-04 2022-05-28 07:56:11,026 INFO [train.py:842] (0/4) Epoch 22, batch 3800, loss[loss=0.2605, simple_loss=0.3483, pruned_loss=0.08638, over 5306.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2672, pruned_loss=0.04689, over 1426167.20 frames.], batch size: 53, lr: 2.51e-04 2022-05-28 07:56:49,074 INFO [train.py:842] (0/4) Epoch 22, batch 3850, loss[loss=0.1983, simple_loss=0.2642, pruned_loss=0.06617, over 7276.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2675, pruned_loss=0.04711, over 1426803.25 frames.], batch size: 18, lr: 2.51e-04 2022-05-28 07:57:27,541 INFO [train.py:842] (0/4) Epoch 22, batch 3900, loss[loss=0.2125, simple_loss=0.2962, pruned_loss=0.0644, over 7326.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2681, pruned_loss=0.04753, over 1429724.90 frames.], batch size: 20, lr: 2.51e-04 2022-05-28 07:58:05,556 INFO [train.py:842] (0/4) Epoch 22, batch 3950, loss[loss=0.1895, simple_loss=0.2777, pruned_loss=0.05059, over 7425.00 frames.], tot_loss[loss=0.1824, simple_loss=0.2691, pruned_loss=0.04789, over 1428428.35 frames.], batch size: 21, lr: 2.50e-04 2022-05-28 07:58:43,885 INFO [train.py:842] (0/4) Epoch 22, batch 4000, loss[loss=0.1794, simple_loss=0.2743, pruned_loss=0.04225, over 6882.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2682, pruned_loss=0.04777, over 1428317.11 frames.], batch size: 31, lr: 2.50e-04 2022-05-28 07:59:21,788 INFO [train.py:842] (0/4) Epoch 22, batch 4050, loss[loss=0.1999, simple_loss=0.2937, pruned_loss=0.05302, over 7415.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2694, pruned_loss=0.0482, over 1425384.74 frames.], batch size: 21, lr: 2.50e-04 2022-05-28 08:00:00,257 INFO [train.py:842] (0/4) Epoch 22, batch 4100, loss[loss=0.192, simple_loss=0.2846, pruned_loss=0.04971, over 7347.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2699, pruned_loss=0.04852, over 1423660.32 frames.], batch size: 22, lr: 2.50e-04 2022-05-28 08:00:38,142 INFO [train.py:842] (0/4) Epoch 22, batch 4150, loss[loss=0.1479, simple_loss=0.2505, pruned_loss=0.02267, over 7345.00 frames.], tot_loss[loss=0.1833, simple_loss=0.27, pruned_loss=0.04832, over 1426478.09 frames.], batch size: 22, lr: 2.50e-04 2022-05-28 08:01:16,330 INFO [train.py:842] (0/4) Epoch 22, batch 4200, loss[loss=0.2705, simple_loss=0.344, pruned_loss=0.09849, over 5196.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2693, pruned_loss=0.04847, over 1419420.69 frames.], batch size: 53, lr: 2.50e-04 2022-05-28 08:01:54,108 INFO [train.py:842] (0/4) Epoch 22, batch 4250, loss[loss=0.2063, simple_loss=0.2909, pruned_loss=0.06085, over 4754.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2696, pruned_loss=0.04815, over 1416098.91 frames.], batch size: 52, lr: 2.50e-04 2022-05-28 08:02:32,396 INFO [train.py:842] (0/4) Epoch 22, batch 4300, loss[loss=0.2275, simple_loss=0.2895, pruned_loss=0.08274, over 7409.00 frames.], tot_loss[loss=0.1844, simple_loss=0.2709, pruned_loss=0.04898, over 1419483.61 frames.], batch size: 18, lr: 2.50e-04 2022-05-28 08:03:10,569 INFO [train.py:842] (0/4) 
Epoch 22, batch 4350, loss[loss=0.156, simple_loss=0.2444, pruned_loss=0.0338, over 7265.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2695, pruned_loss=0.04843, over 1420120.35 frames.], batch size: 17, lr: 2.50e-04 2022-05-28 08:03:48,874 INFO [train.py:842] (0/4) Epoch 22, batch 4400, loss[loss=0.1884, simple_loss=0.281, pruned_loss=0.04791, over 7319.00 frames.], tot_loss[loss=0.1831, simple_loss=0.27, pruned_loss=0.04814, over 1421959.02 frames.], batch size: 21, lr: 2.50e-04 2022-05-28 08:04:26,816 INFO [train.py:842] (0/4) Epoch 22, batch 4450, loss[loss=0.18, simple_loss=0.2716, pruned_loss=0.04421, over 7270.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2695, pruned_loss=0.04783, over 1418732.66 frames.], batch size: 24, lr: 2.50e-04 2022-05-28 08:05:05,053 INFO [train.py:842] (0/4) Epoch 22, batch 4500, loss[loss=0.2053, simple_loss=0.2942, pruned_loss=0.0582, over 7373.00 frames.], tot_loss[loss=0.1828, simple_loss=0.2699, pruned_loss=0.04789, over 1421231.45 frames.], batch size: 23, lr: 2.50e-04 2022-05-28 08:05:43,122 INFO [train.py:842] (0/4) Epoch 22, batch 4550, loss[loss=0.1878, simple_loss=0.2744, pruned_loss=0.05057, over 7158.00 frames.], tot_loss[loss=0.183, simple_loss=0.2698, pruned_loss=0.0481, over 1421356.59 frames.], batch size: 18, lr: 2.50e-04 2022-05-28 08:06:21,009 INFO [train.py:842] (0/4) Epoch 22, batch 4600, loss[loss=0.1819, simple_loss=0.2736, pruned_loss=0.04512, over 7230.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2705, pruned_loss=0.04844, over 1420646.19 frames.], batch size: 20, lr: 2.50e-04 2022-05-28 08:06:58,735 INFO [train.py:842] (0/4) Epoch 22, batch 4650, loss[loss=0.1647, simple_loss=0.2529, pruned_loss=0.0383, over 7067.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2711, pruned_loss=0.04908, over 1417851.57 frames.], batch size: 18, lr: 2.50e-04 2022-05-28 08:07:37,051 INFO [train.py:842] (0/4) Epoch 22, batch 4700, loss[loss=0.1673, simple_loss=0.254, pruned_loss=0.04031, over 7359.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2705, pruned_loss=0.04846, over 1419223.22 frames.], batch size: 19, lr: 2.50e-04 2022-05-28 08:08:15,470 INFO [train.py:842] (0/4) Epoch 22, batch 4750, loss[loss=0.1793, simple_loss=0.2563, pruned_loss=0.05118, over 7270.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2678, pruned_loss=0.0473, over 1424447.56 frames.], batch size: 18, lr: 2.50e-04 2022-05-28 08:08:53,581 INFO [train.py:842] (0/4) Epoch 22, batch 4800, loss[loss=0.2114, simple_loss=0.2913, pruned_loss=0.0657, over 5241.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2689, pruned_loss=0.04786, over 1420088.59 frames.], batch size: 53, lr: 2.50e-04 2022-05-28 08:09:31,583 INFO [train.py:842] (0/4) Epoch 22, batch 4850, loss[loss=0.2039, simple_loss=0.2812, pruned_loss=0.06329, over 7121.00 frames.], tot_loss[loss=0.1835, simple_loss=0.2701, pruned_loss=0.04843, over 1423142.74 frames.], batch size: 21, lr: 2.50e-04 2022-05-28 08:10:09,776 INFO [train.py:842] (0/4) Epoch 22, batch 4900, loss[loss=0.2033, simple_loss=0.2936, pruned_loss=0.05649, over 7209.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2705, pruned_loss=0.04835, over 1421525.01 frames.], batch size: 23, lr: 2.50e-04 2022-05-28 08:10:47,868 INFO [train.py:842] (0/4) Epoch 22, batch 4950, loss[loss=0.1535, simple_loss=0.2531, pruned_loss=0.0269, over 7251.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2694, pruned_loss=0.04789, over 1416706.99 frames.], batch size: 19, lr: 2.50e-04 2022-05-28 08:11:25,835 INFO [train.py:842] (0/4) Epoch 22, batch 5000, 
loss[loss=0.2054, simple_loss=0.3008, pruned_loss=0.05499, over 6329.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2702, pruned_loss=0.04863, over 1413626.50 frames.], batch size: 37, lr: 2.50e-04 2022-05-28 08:12:03,910 INFO [train.py:842] (0/4) Epoch 22, batch 5050, loss[loss=0.1823, simple_loss=0.2671, pruned_loss=0.04877, over 7422.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2702, pruned_loss=0.04886, over 1417045.63 frames.], batch size: 18, lr: 2.50e-04 2022-05-28 08:12:42,389 INFO [train.py:842] (0/4) Epoch 22, batch 5100, loss[loss=0.2021, simple_loss=0.2856, pruned_loss=0.05926, over 7328.00 frames.], tot_loss[loss=0.1838, simple_loss=0.27, pruned_loss=0.04886, over 1421584.28 frames.], batch size: 21, lr: 2.50e-04 2022-05-28 08:13:20,353 INFO [train.py:842] (0/4) Epoch 22, batch 5150, loss[loss=0.1619, simple_loss=0.2572, pruned_loss=0.03326, over 7336.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2702, pruned_loss=0.04831, over 1427413.54 frames.], batch size: 22, lr: 2.50e-04 2022-05-28 08:13:58,902 INFO [train.py:842] (0/4) Epoch 22, batch 5200, loss[loss=0.1834, simple_loss=0.2795, pruned_loss=0.04362, over 7335.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2695, pruned_loss=0.04838, over 1426023.55 frames.], batch size: 20, lr: 2.50e-04 2022-05-28 08:14:36,954 INFO [train.py:842] (0/4) Epoch 22, batch 5250, loss[loss=0.1823, simple_loss=0.275, pruned_loss=0.04479, over 7094.00 frames.], tot_loss[loss=0.1824, simple_loss=0.2684, pruned_loss=0.04818, over 1422189.77 frames.], batch size: 28, lr: 2.50e-04 2022-05-28 08:15:14,907 INFO [train.py:842] (0/4) Epoch 22, batch 5300, loss[loss=0.1719, simple_loss=0.2544, pruned_loss=0.04471, over 7345.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2688, pruned_loss=0.04791, over 1423374.20 frames.], batch size: 22, lr: 2.50e-04 2022-05-28 08:15:52,742 INFO [train.py:842] (0/4) Epoch 22, batch 5350, loss[loss=0.2005, simple_loss=0.2839, pruned_loss=0.05858, over 6806.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2699, pruned_loss=0.04798, over 1425174.18 frames.], batch size: 31, lr: 2.50e-04 2022-05-28 08:16:30,988 INFO [train.py:842] (0/4) Epoch 22, batch 5400, loss[loss=0.1787, simple_loss=0.2553, pruned_loss=0.05108, over 7062.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2697, pruned_loss=0.04771, over 1425482.37 frames.], batch size: 18, lr: 2.50e-04 2022-05-28 08:17:09,158 INFO [train.py:842] (0/4) Epoch 22, batch 5450, loss[loss=0.1786, simple_loss=0.2681, pruned_loss=0.04455, over 7422.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2699, pruned_loss=0.04777, over 1425172.10 frames.], batch size: 20, lr: 2.50e-04 2022-05-28 08:17:47,414 INFO [train.py:842] (0/4) Epoch 22, batch 5500, loss[loss=0.1946, simple_loss=0.2872, pruned_loss=0.05096, over 7423.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2697, pruned_loss=0.04777, over 1422665.68 frames.], batch size: 21, lr: 2.49e-04 2022-05-28 08:18:25,334 INFO [train.py:842] (0/4) Epoch 22, batch 5550, loss[loss=0.1416, simple_loss=0.2285, pruned_loss=0.02736, over 7155.00 frames.], tot_loss[loss=0.1818, simple_loss=0.269, pruned_loss=0.0473, over 1420166.11 frames.], batch size: 18, lr: 2.49e-04 2022-05-28 08:19:03,347 INFO [train.py:842] (0/4) Epoch 22, batch 5600, loss[loss=0.178, simple_loss=0.2721, pruned_loss=0.04199, over 7155.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2687, pruned_loss=0.04712, over 1421253.05 frames.], batch size: 19, lr: 2.49e-04 2022-05-28 08:19:41,112 INFO [train.py:842] (0/4) Epoch 22, batch 5650, loss[loss=0.1807, 
simple_loss=0.2607, pruned_loss=0.05034, over 7355.00 frames.], tot_loss[loss=0.182, simple_loss=0.2693, pruned_loss=0.04735, over 1419683.46 frames.], batch size: 19, lr: 2.49e-04 2022-05-28 08:20:19,519 INFO [train.py:842] (0/4) Epoch 22, batch 5700, loss[loss=0.1562, simple_loss=0.2468, pruned_loss=0.03281, over 7336.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2683, pruned_loss=0.04746, over 1425349.92 frames.], batch size: 22, lr: 2.49e-04 2022-05-28 08:20:57,462 INFO [train.py:842] (0/4) Epoch 22, batch 5750, loss[loss=0.1707, simple_loss=0.2566, pruned_loss=0.0424, over 7403.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2695, pruned_loss=0.04782, over 1427516.14 frames.], batch size: 18, lr: 2.49e-04 2022-05-28 08:21:35,773 INFO [train.py:842] (0/4) Epoch 22, batch 5800, loss[loss=0.146, simple_loss=0.2292, pruned_loss=0.03136, over 7145.00 frames.], tot_loss[loss=0.183, simple_loss=0.2692, pruned_loss=0.0484, over 1428471.10 frames.], batch size: 17, lr: 2.49e-04 2022-05-28 08:22:13,660 INFO [train.py:842] (0/4) Epoch 22, batch 5850, loss[loss=0.2542, simple_loss=0.33, pruned_loss=0.08918, over 7305.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2702, pruned_loss=0.04864, over 1429396.32 frames.], batch size: 24, lr: 2.49e-04 2022-05-28 08:22:52,077 INFO [train.py:842] (0/4) Epoch 22, batch 5900, loss[loss=0.1589, simple_loss=0.2508, pruned_loss=0.03352, over 7204.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2693, pruned_loss=0.04855, over 1431621.10 frames.], batch size: 23, lr: 2.49e-04 2022-05-28 08:23:29,851 INFO [train.py:842] (0/4) Epoch 22, batch 5950, loss[loss=0.1712, simple_loss=0.2558, pruned_loss=0.04328, over 7327.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2699, pruned_loss=0.04884, over 1425127.72 frames.], batch size: 22, lr: 2.49e-04 2022-05-28 08:24:08,105 INFO [train.py:842] (0/4) Epoch 22, batch 6000, loss[loss=0.1646, simple_loss=0.2464, pruned_loss=0.04138, over 7419.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2702, pruned_loss=0.04867, over 1427257.51 frames.], batch size: 18, lr: 2.49e-04 2022-05-28 08:24:08,106 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 08:24:17,058 INFO [train.py:871] (0/4) Epoch 22, validation: loss=0.1664, simple_loss=0.2658, pruned_loss=0.03347, over 868885.00 frames. 
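Each training entry in this log carries two bracketed groups: loss[...] holds the metrics for the single batch just processed (so it jumps around from batch to batch), while tot_loss[...] appears to be a running average accumulated over recent batches and normalized by the frame count it reports (which is why that count hovers around 1.4M frames here and the values move slowly). The helper below is a hypothetical parsing sketch, not part of train.py, for pulling those fields out of one entry, e.g. to plot the loss curve; the regular expression simply mirrors the entry format visible above.

import re

# Matches one training entry of the form seen in this log, e.g.
#   "Epoch 22, batch 6000, loss[loss=0.1646, ..., over 7419.00 frames.],
#    tot_loss[loss=0.1838, ..., over 1427257.51 frames.], batch size: 18, lr: 2.49e-04"
ENTRY_RE = re.compile(
    r"Epoch (?P<epoch>\d+), batch (?P<batch>\d+), "
    r"loss\[(?P<batch_metrics>[^\]]*)\], "
    r"tot_loss\[(?P<running_metrics>[^\]]*)\], "
    r"batch size: (?P<batch_size>\d+), lr: (?P<lr>[\d.e+-]+)"
)

def parse_metrics(blob):
    """Turn 'loss=0.1838, simple_loss=0.2702, ..., over 1427257.51 frames.' into a dict."""
    out = {key: float(val) for key, val in re.findall(r"(\w+)=([\d.]+)", blob)}
    frames = re.search(r"over ([\d.]+) frames", blob)
    if frames:
        out["frames"] = float(frames.group(1))
    return out

def parse_entry(line):
    """Return a dict for one training entry, or None if the line is not one."""
    m = ENTRY_RE.search(line)
    if m is None:
        return None
    return {
        "epoch": int(m.group("epoch")),
        "batch": int(m.group("batch")),
        "batch_loss": parse_metrics(m.group("batch_metrics")),
        "running_loss": parse_metrics(m.group("running_metrics")),
        "batch_size": int(m.group("batch_size")),
        "lr": float(m.group("lr")),
    }

For example, feeding the epoch 22, batch 6000 entry through parse_entry gives batch_loss["loss"] = 0.1646 and running_loss["loss"] = 0.1838, matching the numbers printed in the log.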
2022-05-28 08:24:55,051 INFO [train.py:842] (0/4) Epoch 22, batch 6050, loss[loss=0.1736, simple_loss=0.2533, pruned_loss=0.04689, over 7285.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2702, pruned_loss=0.0485, over 1425344.71 frames.], batch size: 17, lr: 2.49e-04 2022-05-28 08:25:33,539 INFO [train.py:842] (0/4) Epoch 22, batch 6100, loss[loss=0.1604, simple_loss=0.2389, pruned_loss=0.04095, over 7157.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2693, pruned_loss=0.04799, over 1426103.27 frames.], batch size: 19, lr: 2.49e-04 2022-05-28 08:26:11,600 INFO [train.py:842] (0/4) Epoch 22, batch 6150, loss[loss=0.1937, simple_loss=0.2752, pruned_loss=0.05613, over 7068.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2703, pruned_loss=0.04904, over 1422474.60 frames.], batch size: 18, lr: 2.49e-04 2022-05-28 08:26:49,985 INFO [train.py:842] (0/4) Epoch 22, batch 6200, loss[loss=0.1741, simple_loss=0.2594, pruned_loss=0.04436, over 7411.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2689, pruned_loss=0.04789, over 1424504.76 frames.], batch size: 21, lr: 2.49e-04 2022-05-28 08:27:27,696 INFO [train.py:842] (0/4) Epoch 22, batch 6250, loss[loss=0.1937, simple_loss=0.282, pruned_loss=0.0527, over 6747.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2694, pruned_loss=0.04803, over 1421214.81 frames.], batch size: 31, lr: 2.49e-04 2022-05-28 08:28:05,771 INFO [train.py:842] (0/4) Epoch 22, batch 6300, loss[loss=0.2154, simple_loss=0.2978, pruned_loss=0.06646, over 7334.00 frames.], tot_loss[loss=0.1824, simple_loss=0.2693, pruned_loss=0.04773, over 1421064.01 frames.], batch size: 22, lr: 2.49e-04 2022-05-28 08:28:43,773 INFO [train.py:842] (0/4) Epoch 22, batch 6350, loss[loss=0.2393, simple_loss=0.3072, pruned_loss=0.08564, over 4978.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2701, pruned_loss=0.04811, over 1422779.32 frames.], batch size: 53, lr: 2.49e-04 2022-05-28 08:29:22,114 INFO [train.py:842] (0/4) Epoch 22, batch 6400, loss[loss=0.164, simple_loss=0.2443, pruned_loss=0.04181, over 7154.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2706, pruned_loss=0.04813, over 1423195.73 frames.], batch size: 19, lr: 2.49e-04 2022-05-28 08:30:00,137 INFO [train.py:842] (0/4) Epoch 22, batch 6450, loss[loss=0.1528, simple_loss=0.2389, pruned_loss=0.03337, over 7268.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2705, pruned_loss=0.04813, over 1422546.07 frames.], batch size: 19, lr: 2.49e-04 2022-05-28 08:30:38,605 INFO [train.py:842] (0/4) Epoch 22, batch 6500, loss[loss=0.1813, simple_loss=0.258, pruned_loss=0.05231, over 7435.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2714, pruned_loss=0.04899, over 1424374.18 frames.], batch size: 17, lr: 2.49e-04 2022-05-28 08:31:16,545 INFO [train.py:842] (0/4) Epoch 22, batch 6550, loss[loss=0.1686, simple_loss=0.2501, pruned_loss=0.04355, over 7056.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2703, pruned_loss=0.04856, over 1420698.49 frames.], batch size: 18, lr: 2.49e-04 2022-05-28 08:31:54,815 INFO [train.py:842] (0/4) Epoch 22, batch 6600, loss[loss=0.1696, simple_loss=0.2591, pruned_loss=0.04005, over 7263.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2713, pruned_loss=0.04923, over 1418040.54 frames.], batch size: 19, lr: 2.49e-04 2022-05-28 08:32:32,830 INFO [train.py:842] (0/4) Epoch 22, batch 6650, loss[loss=0.1586, simple_loss=0.2616, pruned_loss=0.02778, over 7336.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2707, pruned_loss=0.0483, over 1420915.56 frames.], batch size: 22, lr: 2.49e-04 2022-05-28 08:33:11,310 
INFO [train.py:842] (0/4) Epoch 22, batch 6700, loss[loss=0.1767, simple_loss=0.2713, pruned_loss=0.04101, over 7265.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2693, pruned_loss=0.0479, over 1425044.65 frames.], batch size: 19, lr: 2.49e-04 2022-05-28 08:33:49,371 INFO [train.py:842] (0/4) Epoch 22, batch 6750, loss[loss=0.1906, simple_loss=0.2757, pruned_loss=0.05274, over 7231.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2702, pruned_loss=0.0486, over 1423557.99 frames.], batch size: 20, lr: 2.49e-04 2022-05-28 08:34:27,412 INFO [train.py:842] (0/4) Epoch 22, batch 6800, loss[loss=0.163, simple_loss=0.2541, pruned_loss=0.03591, over 6549.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2708, pruned_loss=0.0488, over 1421669.49 frames.], batch size: 38, lr: 2.49e-04 2022-05-28 08:35:05,253 INFO [train.py:842] (0/4) Epoch 22, batch 6850, loss[loss=0.1839, simple_loss=0.2772, pruned_loss=0.04533, over 7302.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2708, pruned_loss=0.04865, over 1420039.49 frames.], batch size: 24, lr: 2.49e-04 2022-05-28 08:35:43,334 INFO [train.py:842] (0/4) Epoch 22, batch 6900, loss[loss=0.1818, simple_loss=0.288, pruned_loss=0.03784, over 7219.00 frames.], tot_loss[loss=0.1844, simple_loss=0.2714, pruned_loss=0.04866, over 1416185.98 frames.], batch size: 21, lr: 2.49e-04 2022-05-28 08:36:21,220 INFO [train.py:842] (0/4) Epoch 22, batch 6950, loss[loss=0.1556, simple_loss=0.2467, pruned_loss=0.03229, over 7418.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2704, pruned_loss=0.04817, over 1413204.29 frames.], batch size: 20, lr: 2.49e-04 2022-05-28 08:36:35,974 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-200000.pt 2022-05-28 08:37:02,015 INFO [train.py:842] (0/4) Epoch 22, batch 7000, loss[loss=0.1803, simple_loss=0.269, pruned_loss=0.04581, over 7320.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2709, pruned_loss=0.04881, over 1415090.22 frames.], batch size: 21, lr: 2.49e-04 2022-05-28 08:37:39,798 INFO [train.py:842] (0/4) Epoch 22, batch 7050, loss[loss=0.237, simple_loss=0.3192, pruned_loss=0.07738, over 7291.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2708, pruned_loss=0.04845, over 1417648.97 frames.], batch size: 24, lr: 2.49e-04 2022-05-28 08:38:17,982 INFO [train.py:842] (0/4) Epoch 22, batch 7100, loss[loss=0.1595, simple_loss=0.2408, pruned_loss=0.03907, over 7413.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2703, pruned_loss=0.04796, over 1419196.04 frames.], batch size: 17, lr: 2.49e-04 2022-05-28 08:38:55,909 INFO [train.py:842] (0/4) Epoch 22, batch 7150, loss[loss=0.1412, simple_loss=0.2265, pruned_loss=0.02795, over 7403.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2699, pruned_loss=0.04777, over 1418364.88 frames.], batch size: 18, lr: 2.48e-04 2022-05-28 08:39:34,201 INFO [train.py:842] (0/4) Epoch 22, batch 7200, loss[loss=0.1512, simple_loss=0.2396, pruned_loss=0.0314, over 7240.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2699, pruned_loss=0.04818, over 1410678.50 frames.], batch size: 20, lr: 2.48e-04 2022-05-28 08:40:12,372 INFO [train.py:842] (0/4) Epoch 22, batch 7250, loss[loss=0.2149, simple_loss=0.3034, pruned_loss=0.06319, over 7206.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2699, pruned_loss=0.048, over 1413407.88 frames.], batch size: 23, lr: 2.48e-04 2022-05-28 08:40:50,457 INFO [train.py:842] (0/4) Epoch 22, batch 7300, loss[loss=0.1518, simple_loss=0.2319, pruned_loss=0.03589, over 7270.00 frames.], tot_loss[loss=0.1829, 
simple_loss=0.27, pruned_loss=0.04796, over 1411180.97 frames.], batch size: 17, lr: 2.48e-04 2022-05-28 08:41:28,162 INFO [train.py:842] (0/4) Epoch 22, batch 7350, loss[loss=0.1703, simple_loss=0.2631, pruned_loss=0.03869, over 7142.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2713, pruned_loss=0.04847, over 1414299.96 frames.], batch size: 20, lr: 2.48e-04 2022-05-28 08:42:06,479 INFO [train.py:842] (0/4) Epoch 22, batch 7400, loss[loss=0.21, simple_loss=0.2924, pruned_loss=0.06378, over 7330.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2711, pruned_loss=0.04897, over 1416455.82 frames.], batch size: 25, lr: 2.48e-04 2022-05-28 08:42:44,748 INFO [train.py:842] (0/4) Epoch 22, batch 7450, loss[loss=0.18, simple_loss=0.262, pruned_loss=0.04904, over 7155.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2704, pruned_loss=0.04886, over 1416345.34 frames.], batch size: 18, lr: 2.48e-04 2022-05-28 08:43:22,936 INFO [train.py:842] (0/4) Epoch 22, batch 7500, loss[loss=0.1654, simple_loss=0.2421, pruned_loss=0.04442, over 7394.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2716, pruned_loss=0.0491, over 1417942.41 frames.], batch size: 18, lr: 2.48e-04 2022-05-28 08:44:01,045 INFO [train.py:842] (0/4) Epoch 22, batch 7550, loss[loss=0.2213, simple_loss=0.3109, pruned_loss=0.06582, over 7208.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2704, pruned_loss=0.04885, over 1416882.05 frames.], batch size: 23, lr: 2.48e-04 2022-05-28 08:44:39,275 INFO [train.py:842] (0/4) Epoch 22, batch 7600, loss[loss=0.1953, simple_loss=0.2952, pruned_loss=0.04769, over 7190.00 frames.], tot_loss[loss=0.184, simple_loss=0.2701, pruned_loss=0.04891, over 1416793.08 frames.], batch size: 22, lr: 2.48e-04 2022-05-28 08:45:17,058 INFO [train.py:842] (0/4) Epoch 22, batch 7650, loss[loss=0.189, simple_loss=0.2854, pruned_loss=0.04633, over 7320.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2712, pruned_loss=0.0491, over 1418714.87 frames.], batch size: 20, lr: 2.48e-04 2022-05-28 08:45:55,430 INFO [train.py:842] (0/4) Epoch 22, batch 7700, loss[loss=0.1688, simple_loss=0.2653, pruned_loss=0.03611, over 7222.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2703, pruned_loss=0.04867, over 1419359.15 frames.], batch size: 21, lr: 2.48e-04 2022-05-28 08:46:33,279 INFO [train.py:842] (0/4) Epoch 22, batch 7750, loss[loss=0.2022, simple_loss=0.2941, pruned_loss=0.05515, over 7145.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2706, pruned_loss=0.0486, over 1422308.81 frames.], batch size: 20, lr: 2.48e-04 2022-05-28 08:47:11,741 INFO [train.py:842] (0/4) Epoch 22, batch 7800, loss[loss=0.2119, simple_loss=0.2986, pruned_loss=0.06256, over 7287.00 frames.], tot_loss[loss=0.184, simple_loss=0.2705, pruned_loss=0.04873, over 1418838.29 frames.], batch size: 25, lr: 2.48e-04 2022-05-28 08:47:49,715 INFO [train.py:842] (0/4) Epoch 22, batch 7850, loss[loss=0.1874, simple_loss=0.2725, pruned_loss=0.05112, over 7291.00 frames.], tot_loss[loss=0.1835, simple_loss=0.2704, pruned_loss=0.04835, over 1419083.08 frames.], batch size: 25, lr: 2.48e-04 2022-05-28 08:48:27,739 INFO [train.py:842] (0/4) Epoch 22, batch 7900, loss[loss=0.1786, simple_loss=0.2597, pruned_loss=0.04869, over 7066.00 frames.], tot_loss[loss=0.1852, simple_loss=0.2718, pruned_loss=0.0493, over 1414619.15 frames.], batch size: 18, lr: 2.48e-04 2022-05-28 08:49:05,722 INFO [train.py:842] (0/4) Epoch 22, batch 7950, loss[loss=0.16, simple_loss=0.2457, pruned_loss=0.03714, over 7279.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2705, 
pruned_loss=0.0489, over 1412875.45 frames.], batch size: 25, lr: 2.48e-04 2022-05-28 08:49:43,859 INFO [train.py:842] (0/4) Epoch 22, batch 8000, loss[loss=0.2177, simple_loss=0.2916, pruned_loss=0.07184, over 7066.00 frames.], tot_loss[loss=0.1844, simple_loss=0.2708, pruned_loss=0.04899, over 1414163.84 frames.], batch size: 28, lr: 2.48e-04 2022-05-28 08:50:21,585 INFO [train.py:842] (0/4) Epoch 22, batch 8050, loss[loss=0.14, simple_loss=0.2223, pruned_loss=0.0289, over 7011.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2714, pruned_loss=0.04915, over 1411757.27 frames.], batch size: 16, lr: 2.48e-04 2022-05-28 08:50:59,773 INFO [train.py:842] (0/4) Epoch 22, batch 8100, loss[loss=0.3361, simple_loss=0.3897, pruned_loss=0.1412, over 7071.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2724, pruned_loss=0.05045, over 1408514.54 frames.], batch size: 18, lr: 2.48e-04 2022-05-28 08:51:37,734 INFO [train.py:842] (0/4) Epoch 22, batch 8150, loss[loss=0.1531, simple_loss=0.2377, pruned_loss=0.03422, over 7269.00 frames.], tot_loss[loss=0.1865, simple_loss=0.2722, pruned_loss=0.05038, over 1413767.78 frames.], batch size: 17, lr: 2.48e-04 2022-05-28 08:52:15,699 INFO [train.py:842] (0/4) Epoch 22, batch 8200, loss[loss=0.1794, simple_loss=0.2729, pruned_loss=0.04298, over 6365.00 frames.], tot_loss[loss=0.1865, simple_loss=0.273, pruned_loss=0.04998, over 1417431.90 frames.], batch size: 37, lr: 2.48e-04 2022-05-28 08:52:53,587 INFO [train.py:842] (0/4) Epoch 22, batch 8250, loss[loss=0.1629, simple_loss=0.2558, pruned_loss=0.03501, over 7007.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2716, pruned_loss=0.04935, over 1418278.21 frames.], batch size: 28, lr: 2.48e-04 2022-05-28 08:53:31,678 INFO [train.py:842] (0/4) Epoch 22, batch 8300, loss[loss=0.2147, simple_loss=0.2985, pruned_loss=0.06543, over 7292.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2709, pruned_loss=0.04847, over 1419522.28 frames.], batch size: 24, lr: 2.48e-04 2022-05-28 08:54:09,568 INFO [train.py:842] (0/4) Epoch 22, batch 8350, loss[loss=0.1732, simple_loss=0.2682, pruned_loss=0.03908, over 7219.00 frames.], tot_loss[loss=0.185, simple_loss=0.2718, pruned_loss=0.04909, over 1421192.29 frames.], batch size: 21, lr: 2.48e-04 2022-05-28 08:54:47,762 INFO [train.py:842] (0/4) Epoch 22, batch 8400, loss[loss=0.1769, simple_loss=0.2675, pruned_loss=0.04318, over 7228.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2715, pruned_loss=0.04891, over 1423093.92 frames.], batch size: 21, lr: 2.48e-04 2022-05-28 08:55:25,639 INFO [train.py:842] (0/4) Epoch 22, batch 8450, loss[loss=0.2177, simple_loss=0.3024, pruned_loss=0.06653, over 7313.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2719, pruned_loss=0.04937, over 1418530.76 frames.], batch size: 20, lr: 2.48e-04 2022-05-28 08:56:03,898 INFO [train.py:842] (0/4) Epoch 22, batch 8500, loss[loss=0.1477, simple_loss=0.2303, pruned_loss=0.03256, over 6984.00 frames.], tot_loss[loss=0.1848, simple_loss=0.2715, pruned_loss=0.04907, over 1421179.95 frames.], batch size: 16, lr: 2.48e-04 2022-05-28 08:56:41,565 INFO [train.py:842] (0/4) Epoch 22, batch 8550, loss[loss=0.2558, simple_loss=0.3433, pruned_loss=0.08414, over 7297.00 frames.], tot_loss[loss=0.1866, simple_loss=0.2733, pruned_loss=0.04991, over 1415922.91 frames.], batch size: 25, lr: 2.48e-04 2022-05-28 08:57:19,533 INFO [train.py:842] (0/4) Epoch 22, batch 8600, loss[loss=0.1727, simple_loss=0.2598, pruned_loss=0.04283, over 7305.00 frames.], tot_loss[loss=0.1865, simple_loss=0.2733, pruned_loss=0.04986, over 
1418377.03 frames.], batch size: 24, lr: 2.48e-04 2022-05-28 08:57:57,114 INFO [train.py:842] (0/4) Epoch 22, batch 8650, loss[loss=0.1593, simple_loss=0.2496, pruned_loss=0.03447, over 7146.00 frames.], tot_loss[loss=0.186, simple_loss=0.2727, pruned_loss=0.04967, over 1410918.93 frames.], batch size: 18, lr: 2.48e-04 2022-05-28 08:58:35,186 INFO [train.py:842] (0/4) Epoch 22, batch 8700, loss[loss=0.162, simple_loss=0.2402, pruned_loss=0.04187, over 7354.00 frames.], tot_loss[loss=0.1855, simple_loss=0.2726, pruned_loss=0.04922, over 1411795.62 frames.], batch size: 19, lr: 2.48e-04 2022-05-28 08:59:12,996 INFO [train.py:842] (0/4) Epoch 22, batch 8750, loss[loss=0.2032, simple_loss=0.3061, pruned_loss=0.05016, over 7347.00 frames.], tot_loss[loss=0.185, simple_loss=0.2719, pruned_loss=0.04905, over 1414579.25 frames.], batch size: 22, lr: 2.47e-04 2022-05-28 08:59:50,895 INFO [train.py:842] (0/4) Epoch 22, batch 8800, loss[loss=0.1533, simple_loss=0.2391, pruned_loss=0.03374, over 7161.00 frames.], tot_loss[loss=0.1838, simple_loss=0.271, pruned_loss=0.04835, over 1411588.82 frames.], batch size: 18, lr: 2.47e-04 2022-05-28 09:00:28,598 INFO [train.py:842] (0/4) Epoch 22, batch 8850, loss[loss=0.1717, simple_loss=0.2563, pruned_loss=0.04355, over 7427.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2711, pruned_loss=0.04868, over 1409378.00 frames.], batch size: 20, lr: 2.47e-04 2022-05-28 09:01:06,625 INFO [train.py:842] (0/4) Epoch 22, batch 8900, loss[loss=0.159, simple_loss=0.2506, pruned_loss=0.03371, over 7233.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2705, pruned_loss=0.04817, over 1411180.77 frames.], batch size: 20, lr: 2.47e-04 2022-05-28 09:01:44,248 INFO [train.py:842] (0/4) Epoch 22, batch 8950, loss[loss=0.1978, simple_loss=0.2855, pruned_loss=0.05504, over 7307.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2714, pruned_loss=0.04847, over 1405094.41 frames.], batch size: 25, lr: 2.47e-04 2022-05-28 09:02:22,231 INFO [train.py:842] (0/4) Epoch 22, batch 9000, loss[loss=0.1713, simple_loss=0.2583, pruned_loss=0.04216, over 6987.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2724, pruned_loss=0.04885, over 1400043.15 frames.], batch size: 16, lr: 2.47e-04 2022-05-28 09:02:22,231 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 09:02:31,315 INFO [train.py:871] (0/4) Epoch 22, validation: loss=0.1647, simple_loss=0.2634, pruned_loss=0.03302, over 868885.00 frames. 
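Twice in this stretch (epoch 22, batches 6000 and 9000) training pauses, train.py logs "Computing validation loss", and a frame-weighted loss is reported over the same 868885-frame dev set each time, which makes successive validation numbers directly comparable. Below is a minimal, generic sketch of such a pass; the loss_fn interface is an assumption made for illustration, not the actual signature used in train.py.

import torch

def compute_validation_loss(model, valid_loader, loss_fn):
    """Frame-weighted validation pass (sketch).

    loss_fn(model, batch) is assumed to return (loss_tensor, num_frames) for one
    batch; the real script computes these quantities inside its own code.
    """
    was_training = model.training
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            loss, num_frames = loss_fn(model, batch)
            tot_loss += loss.item() * num_frames
            tot_frames += num_frames
    if was_training:
        model.train()
    # A line like "validation: loss=..., over 868885.00 frames." corresponds to
    # this kind of frame-weighted average over the whole dev set.
    return tot_loss / max(tot_frames, 1.0)

Because the dev set is fixed, the small change from 0.1664 at batch 6000 to 0.1647 at batch 9000 reflects the model itself rather than any resampling of the evaluation data.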
2022-05-28 09:03:08,927 INFO [train.py:842] (0/4) Epoch 22, batch 9050, loss[loss=0.2118, simple_loss=0.2954, pruned_loss=0.06407, over 5342.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2726, pruned_loss=0.04899, over 1393672.47 frames.], batch size: 53, lr: 2.47e-04 2022-05-28 09:03:46,322 INFO [train.py:842] (0/4) Epoch 22, batch 9100, loss[loss=0.1865, simple_loss=0.2746, pruned_loss=0.04915, over 4959.00 frames.], tot_loss[loss=0.1871, simple_loss=0.2742, pruned_loss=0.05001, over 1370204.19 frames.], batch size: 53, lr: 2.47e-04 2022-05-28 09:04:23,080 INFO [train.py:842] (0/4) Epoch 22, batch 9150, loss[loss=0.2235, simple_loss=0.3131, pruned_loss=0.06699, over 5161.00 frames.], tot_loss[loss=0.1926, simple_loss=0.2782, pruned_loss=0.05351, over 1291299.37 frames.], batch size: 53, lr: 2.47e-04 2022-05-28 09:04:54,699 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-22.pt 2022-05-28 09:05:08,936 INFO [train.py:842] (0/4) Epoch 23, batch 0, loss[loss=0.1492, simple_loss=0.2294, pruned_loss=0.03453, over 6831.00 frames.], tot_loss[loss=0.1492, simple_loss=0.2294, pruned_loss=0.03453, over 6831.00 frames.], batch size: 15, lr: 2.42e-04 2022-05-28 09:05:47,156 INFO [train.py:842] (0/4) Epoch 23, batch 50, loss[loss=0.2379, simple_loss=0.3222, pruned_loss=0.07682, over 7151.00 frames.], tot_loss[loss=0.179, simple_loss=0.2677, pruned_loss=0.04513, over 319192.41 frames.], batch size: 19, lr: 2.42e-04 2022-05-28 09:06:25,601 INFO [train.py:842] (0/4) Epoch 23, batch 100, loss[loss=0.1526, simple_loss=0.2385, pruned_loss=0.03335, over 7275.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2676, pruned_loss=0.04492, over 566500.90 frames.], batch size: 18, lr: 2.42e-04 2022-05-28 09:07:03,393 INFO [train.py:842] (0/4) Epoch 23, batch 150, loss[loss=0.1958, simple_loss=0.2778, pruned_loss=0.05684, over 7278.00 frames.], tot_loss[loss=0.182, simple_loss=0.27, pruned_loss=0.04698, over 754071.45 frames.], batch size: 24, lr: 2.42e-04 2022-05-28 09:07:41,556 INFO [train.py:842] (0/4) Epoch 23, batch 200, loss[loss=0.2589, simple_loss=0.3339, pruned_loss=0.09191, over 6402.00 frames.], tot_loss[loss=0.185, simple_loss=0.2729, pruned_loss=0.04857, over 902360.40 frames.], batch size: 38, lr: 2.42e-04 2022-05-28 09:08:19,368 INFO [train.py:842] (0/4) Epoch 23, batch 250, loss[loss=0.2172, simple_loss=0.306, pruned_loss=0.06417, over 7188.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2725, pruned_loss=0.04823, over 1017499.54 frames.], batch size: 23, lr: 2.42e-04 2022-05-28 09:08:57,664 INFO [train.py:842] (0/4) Epoch 23, batch 300, loss[loss=0.1612, simple_loss=0.2509, pruned_loss=0.03576, over 7151.00 frames.], tot_loss[loss=0.183, simple_loss=0.2708, pruned_loss=0.04761, over 1102473.88 frames.], batch size: 19, lr: 2.42e-04 2022-05-28 09:09:35,669 INFO [train.py:842] (0/4) Epoch 23, batch 350, loss[loss=0.1862, simple_loss=0.2782, pruned_loss=0.04717, over 7346.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2692, pruned_loss=0.04703, over 1176747.26 frames.], batch size: 22, lr: 2.42e-04 2022-05-28 09:10:13,898 INFO [train.py:842] (0/4) Epoch 23, batch 400, loss[loss=0.1885, simple_loss=0.2878, pruned_loss=0.04455, over 7189.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2691, pruned_loss=0.04674, over 1229659.85 frames.], batch size: 23, lr: 2.42e-04 2022-05-28 09:10:51,789 INFO [train.py:842] (0/4) Epoch 23, batch 450, loss[loss=0.2187, simple_loss=0.2996, pruned_loss=0.06887, over 7288.00 frames.], tot_loss[loss=0.1824, 
simple_loss=0.2701, pruned_loss=0.04734, over 1271356.91 frames.], batch size: 24, lr: 2.42e-04 2022-05-28 09:11:30,074 INFO [train.py:842] (0/4) Epoch 23, batch 500, loss[loss=0.1771, simple_loss=0.2509, pruned_loss=0.05169, over 6762.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2695, pruned_loss=0.04706, over 1306190.99 frames.], batch size: 15, lr: 2.42e-04 2022-05-28 09:12:08,359 INFO [train.py:842] (0/4) Epoch 23, batch 550, loss[loss=0.1828, simple_loss=0.2677, pruned_loss=0.04893, over 7275.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2702, pruned_loss=0.048, over 1337069.32 frames.], batch size: 24, lr: 2.42e-04 2022-05-28 09:12:46,589 INFO [train.py:842] (0/4) Epoch 23, batch 600, loss[loss=0.1856, simple_loss=0.2746, pruned_loss=0.04831, over 7111.00 frames.], tot_loss[loss=0.183, simple_loss=0.27, pruned_loss=0.04797, over 1359498.11 frames.], batch size: 21, lr: 2.42e-04 2022-05-28 09:13:24,490 INFO [train.py:842] (0/4) Epoch 23, batch 650, loss[loss=0.1608, simple_loss=0.2533, pruned_loss=0.03411, over 6720.00 frames.], tot_loss[loss=0.183, simple_loss=0.27, pruned_loss=0.048, over 1374403.73 frames.], batch size: 31, lr: 2.42e-04 2022-05-28 09:14:02,707 INFO [train.py:842] (0/4) Epoch 23, batch 700, loss[loss=0.1903, simple_loss=0.2697, pruned_loss=0.05544, over 4748.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2692, pruned_loss=0.04719, over 1380765.19 frames.], batch size: 53, lr: 2.42e-04 2022-05-28 09:14:40,546 INFO [train.py:842] (0/4) Epoch 23, batch 750, loss[loss=0.1751, simple_loss=0.2663, pruned_loss=0.04193, over 7198.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2705, pruned_loss=0.0475, over 1392243.30 frames.], batch size: 23, lr: 2.41e-04 2022-05-28 09:15:18,798 INFO [train.py:842] (0/4) Epoch 23, batch 800, loss[loss=0.1895, simple_loss=0.2799, pruned_loss=0.04954, over 7363.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2704, pruned_loss=0.04737, over 1397058.01 frames.], batch size: 19, lr: 2.41e-04 2022-05-28 09:16:05,926 INFO [train.py:842] (0/4) Epoch 23, batch 850, loss[loss=0.205, simple_loss=0.3019, pruned_loss=0.05406, over 7431.00 frames.], tot_loss[loss=0.1814, simple_loss=0.2694, pruned_loss=0.04671, over 1404556.82 frames.], batch size: 20, lr: 2.41e-04 2022-05-28 09:16:44,107 INFO [train.py:842] (0/4) Epoch 23, batch 900, loss[loss=0.1947, simple_loss=0.2734, pruned_loss=0.058, over 7166.00 frames.], tot_loss[loss=0.183, simple_loss=0.2708, pruned_loss=0.04762, over 1409253.89 frames.], batch size: 19, lr: 2.41e-04 2022-05-28 09:17:21,899 INFO [train.py:842] (0/4) Epoch 23, batch 950, loss[loss=0.1739, simple_loss=0.2669, pruned_loss=0.04046, over 6943.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2718, pruned_loss=0.04827, over 1411068.04 frames.], batch size: 28, lr: 2.41e-04 2022-05-28 09:18:00,182 INFO [train.py:842] (0/4) Epoch 23, batch 1000, loss[loss=0.1654, simple_loss=0.2552, pruned_loss=0.03779, over 7370.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2723, pruned_loss=0.04829, over 1418606.24 frames.], batch size: 19, lr: 2.41e-04 2022-05-28 09:18:38,350 INFO [train.py:842] (0/4) Epoch 23, batch 1050, loss[loss=0.1764, simple_loss=0.2709, pruned_loss=0.04089, over 5025.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2712, pruned_loss=0.04855, over 1419400.81 frames.], batch size: 52, lr: 2.41e-04 2022-05-28 09:19:16,278 INFO [train.py:842] (0/4) Epoch 23, batch 1100, loss[loss=0.1519, simple_loss=0.2302, pruned_loss=0.03675, over 7280.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2714, pruned_loss=0.04847, 
over 1419420.60 frames.], batch size: 17, lr: 2.41e-04 2022-05-28 09:19:54,199 INFO [train.py:842] (0/4) Epoch 23, batch 1150, loss[loss=0.1818, simple_loss=0.2666, pruned_loss=0.04846, over 7425.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2703, pruned_loss=0.04756, over 1423641.04 frames.], batch size: 20, lr: 2.41e-04 2022-05-28 09:20:32,490 INFO [train.py:842] (0/4) Epoch 23, batch 1200, loss[loss=0.1865, simple_loss=0.2715, pruned_loss=0.05078, over 7295.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2701, pruned_loss=0.04765, over 1423156.49 frames.], batch size: 18, lr: 2.41e-04 2022-05-28 09:21:10,632 INFO [train.py:842] (0/4) Epoch 23, batch 1250, loss[loss=0.1813, simple_loss=0.2579, pruned_loss=0.05236, over 6812.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2695, pruned_loss=0.04746, over 1426402.43 frames.], batch size: 15, lr: 2.41e-04 2022-05-28 09:21:48,972 INFO [train.py:842] (0/4) Epoch 23, batch 1300, loss[loss=0.2132, simple_loss=0.3013, pruned_loss=0.06256, over 7219.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2693, pruned_loss=0.04687, over 1428665.98 frames.], batch size: 23, lr: 2.41e-04 2022-05-28 09:22:27,064 INFO [train.py:842] (0/4) Epoch 23, batch 1350, loss[loss=0.1625, simple_loss=0.2419, pruned_loss=0.04156, over 7284.00 frames.], tot_loss[loss=0.1806, simple_loss=0.268, pruned_loss=0.04658, over 1427933.77 frames.], batch size: 18, lr: 2.41e-04 2022-05-28 09:23:05,256 INFO [train.py:842] (0/4) Epoch 23, batch 1400, loss[loss=0.169, simple_loss=0.2579, pruned_loss=0.04009, over 7113.00 frames.], tot_loss[loss=0.1814, simple_loss=0.2687, pruned_loss=0.04706, over 1428303.52 frames.], batch size: 21, lr: 2.41e-04 2022-05-28 09:23:43,274 INFO [train.py:842] (0/4) Epoch 23, batch 1450, loss[loss=0.1754, simple_loss=0.2579, pruned_loss=0.04641, over 7397.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2689, pruned_loss=0.04739, over 1422304.49 frames.], batch size: 18, lr: 2.41e-04 2022-05-28 09:24:21,882 INFO [train.py:842] (0/4) Epoch 23, batch 1500, loss[loss=0.1799, simple_loss=0.2801, pruned_loss=0.03989, over 7075.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2677, pruned_loss=0.04729, over 1423051.18 frames.], batch size: 28, lr: 2.41e-04 2022-05-28 09:24:59,673 INFO [train.py:842] (0/4) Epoch 23, batch 1550, loss[loss=0.1469, simple_loss=0.2301, pruned_loss=0.03189, over 7354.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2689, pruned_loss=0.04808, over 1414307.99 frames.], batch size: 19, lr: 2.41e-04 2022-05-28 09:25:37,902 INFO [train.py:842] (0/4) Epoch 23, batch 1600, loss[loss=0.1622, simple_loss=0.2447, pruned_loss=0.03985, over 7220.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2692, pruned_loss=0.04801, over 1412238.95 frames.], batch size: 21, lr: 2.41e-04 2022-05-28 09:26:16,014 INFO [train.py:842] (0/4) Epoch 23, batch 1650, loss[loss=0.1833, simple_loss=0.2745, pruned_loss=0.04604, over 7397.00 frames.], tot_loss[loss=0.1828, simple_loss=0.2692, pruned_loss=0.04813, over 1414766.27 frames.], batch size: 23, lr: 2.41e-04 2022-05-28 09:26:54,164 INFO [train.py:842] (0/4) Epoch 23, batch 1700, loss[loss=0.1558, simple_loss=0.2401, pruned_loss=0.03576, over 7410.00 frames.], tot_loss[loss=0.1821, simple_loss=0.2687, pruned_loss=0.04771, over 1416195.00 frames.], batch size: 18, lr: 2.41e-04 2022-05-28 09:27:31,881 INFO [train.py:842] (0/4) Epoch 23, batch 1750, loss[loss=0.1951, simple_loss=0.2873, pruned_loss=0.05147, over 7177.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2704, pruned_loss=0.04851, over 1414592.88 
frames.], batch size: 26, lr: 2.41e-04 2022-05-28 09:28:10,167 INFO [train.py:842] (0/4) Epoch 23, batch 1800, loss[loss=0.1861, simple_loss=0.2755, pruned_loss=0.04833, over 5340.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2712, pruned_loss=0.0489, over 1413003.98 frames.], batch size: 53, lr: 2.41e-04 2022-05-28 09:28:48,255 INFO [train.py:842] (0/4) Epoch 23, batch 1850, loss[loss=0.1699, simple_loss=0.2594, pruned_loss=0.04023, over 7433.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2697, pruned_loss=0.0481, over 1418221.19 frames.], batch size: 20, lr: 2.41e-04 2022-05-28 09:29:26,608 INFO [train.py:842] (0/4) Epoch 23, batch 1900, loss[loss=0.1833, simple_loss=0.2788, pruned_loss=0.04388, over 7149.00 frames.], tot_loss[loss=0.1821, simple_loss=0.2689, pruned_loss=0.04767, over 1421938.62 frames.], batch size: 20, lr: 2.41e-04 2022-05-28 09:30:04,581 INFO [train.py:842] (0/4) Epoch 23, batch 1950, loss[loss=0.1585, simple_loss=0.247, pruned_loss=0.03499, over 7145.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2694, pruned_loss=0.04798, over 1418451.17 frames.], batch size: 20, lr: 2.41e-04 2022-05-28 09:30:42,752 INFO [train.py:842] (0/4) Epoch 23, batch 2000, loss[loss=0.1398, simple_loss=0.2253, pruned_loss=0.02708, over 7263.00 frames.], tot_loss[loss=0.1832, simple_loss=0.27, pruned_loss=0.04823, over 1421721.65 frames.], batch size: 19, lr: 2.41e-04 2022-05-28 09:31:20,832 INFO [train.py:842] (0/4) Epoch 23, batch 2050, loss[loss=0.1608, simple_loss=0.2593, pruned_loss=0.03113, over 7236.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2702, pruned_loss=0.04802, over 1425780.26 frames.], batch size: 20, lr: 2.41e-04 2022-05-28 09:31:59,064 INFO [train.py:842] (0/4) Epoch 23, batch 2100, loss[loss=0.1694, simple_loss=0.2619, pruned_loss=0.03847, over 7223.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2702, pruned_loss=0.04806, over 1420262.78 frames.], batch size: 23, lr: 2.41e-04 2022-05-28 09:32:37,113 INFO [train.py:842] (0/4) Epoch 23, batch 2150, loss[loss=0.1629, simple_loss=0.2533, pruned_loss=0.03625, over 7151.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2696, pruned_loss=0.04757, over 1421110.14 frames.], batch size: 19, lr: 2.41e-04 2022-05-28 09:33:15,366 INFO [train.py:842] (0/4) Epoch 23, batch 2200, loss[loss=0.1795, simple_loss=0.2638, pruned_loss=0.04762, over 7144.00 frames.], tot_loss[loss=0.1819, simple_loss=0.269, pruned_loss=0.0474, over 1415992.38 frames.], batch size: 20, lr: 2.41e-04 2022-05-28 09:33:53,110 INFO [train.py:842] (0/4) Epoch 23, batch 2250, loss[loss=0.164, simple_loss=0.2519, pruned_loss=0.03802, over 7166.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2698, pruned_loss=0.04797, over 1411318.17 frames.], batch size: 19, lr: 2.41e-04 2022-05-28 09:34:31,493 INFO [train.py:842] (0/4) Epoch 23, batch 2300, loss[loss=0.161, simple_loss=0.2545, pruned_loss=0.03372, over 7311.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2675, pruned_loss=0.04689, over 1412910.86 frames.], batch size: 21, lr: 2.41e-04 2022-05-28 09:35:09,538 INFO [train.py:842] (0/4) Epoch 23, batch 2350, loss[loss=0.1544, simple_loss=0.2495, pruned_loss=0.0296, over 7335.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2679, pruned_loss=0.04713, over 1415603.45 frames.], batch size: 22, lr: 2.41e-04 2022-05-28 09:35:47,646 INFO [train.py:842] (0/4) Epoch 23, batch 2400, loss[loss=0.1856, simple_loss=0.2736, pruned_loss=0.04883, over 7296.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2687, pruned_loss=0.04717, over 1418124.02 frames.], batch size: 24, 
lr: 2.41e-04 2022-05-28 09:36:25,391 INFO [train.py:842] (0/4) Epoch 23, batch 2450, loss[loss=0.2034, simple_loss=0.2944, pruned_loss=0.05625, over 7199.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2702, pruned_loss=0.04805, over 1421844.18 frames.], batch size: 22, lr: 2.40e-04 2022-05-28 09:37:03,845 INFO [train.py:842] (0/4) Epoch 23, batch 2500, loss[loss=0.1775, simple_loss=0.2685, pruned_loss=0.04329, over 6384.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2689, pruned_loss=0.04751, over 1419978.63 frames.], batch size: 37, lr: 2.40e-04 2022-05-28 09:37:41,685 INFO [train.py:842] (0/4) Epoch 23, batch 2550, loss[loss=0.1625, simple_loss=0.2536, pruned_loss=0.03568, over 7381.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2694, pruned_loss=0.04782, over 1421303.01 frames.], batch size: 23, lr: 2.40e-04 2022-05-28 09:38:20,086 INFO [train.py:842] (0/4) Epoch 23, batch 2600, loss[loss=0.1631, simple_loss=0.2657, pruned_loss=0.03024, over 7344.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2687, pruned_loss=0.0471, over 1425312.19 frames.], batch size: 22, lr: 2.40e-04 2022-05-28 09:38:58,226 INFO [train.py:842] (0/4) Epoch 23, batch 2650, loss[loss=0.1863, simple_loss=0.2856, pruned_loss=0.04347, over 7273.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2678, pruned_loss=0.04687, over 1422190.88 frames.], batch size: 25, lr: 2.40e-04 2022-05-28 09:39:36,474 INFO [train.py:842] (0/4) Epoch 23, batch 2700, loss[loss=0.1623, simple_loss=0.2466, pruned_loss=0.039, over 7159.00 frames.], tot_loss[loss=0.181, simple_loss=0.2681, pruned_loss=0.04696, over 1421392.68 frames.], batch size: 19, lr: 2.40e-04 2022-05-28 09:40:14,500 INFO [train.py:842] (0/4) Epoch 23, batch 2750, loss[loss=0.1661, simple_loss=0.2517, pruned_loss=0.04019, over 7160.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2672, pruned_loss=0.04659, over 1419742.89 frames.], batch size: 18, lr: 2.40e-04 2022-05-28 09:40:52,718 INFO [train.py:842] (0/4) Epoch 23, batch 2800, loss[loss=0.2048, simple_loss=0.2868, pruned_loss=0.0614, over 7166.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2669, pruned_loss=0.04629, over 1419553.93 frames.], batch size: 18, lr: 2.40e-04 2022-05-28 09:41:30,788 INFO [train.py:842] (0/4) Epoch 23, batch 2850, loss[loss=0.2356, simple_loss=0.3186, pruned_loss=0.07626, over 7146.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2666, pruned_loss=0.04602, over 1421363.05 frames.], batch size: 28, lr: 2.40e-04 2022-05-28 09:42:08,983 INFO [train.py:842] (0/4) Epoch 23, batch 2900, loss[loss=0.1781, simple_loss=0.2679, pruned_loss=0.04418, over 7281.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2676, pruned_loss=0.04666, over 1423122.94 frames.], batch size: 25, lr: 2.40e-04 2022-05-28 09:42:46,951 INFO [train.py:842] (0/4) Epoch 23, batch 2950, loss[loss=0.1978, simple_loss=0.2803, pruned_loss=0.05761, over 7203.00 frames.], tot_loss[loss=0.181, simple_loss=0.2683, pruned_loss=0.04684, over 1424592.74 frames.], batch size: 22, lr: 2.40e-04 2022-05-28 09:43:25,056 INFO [train.py:842] (0/4) Epoch 23, batch 3000, loss[loss=0.1229, simple_loss=0.205, pruned_loss=0.02042, over 6991.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2672, pruned_loss=0.04632, over 1424027.52 frames.], batch size: 16, lr: 2.40e-04 2022-05-28 09:43:25,057 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 09:43:34,050 INFO [train.py:871] (0/4) Epoch 23, validation: loss=0.1673, simple_loss=0.2658, pruned_loss=0.03441, over 868885.00 frames. 
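The learning rate printed with every entry decays slowly within an epoch (2.50e-04 down to 2.47e-04 across epoch 22 above) and drops again at each epoch boundary (2.47e-04 to 2.42e-04 when epoch 23 starts, reaching about 2.40e-04 by this point). That pattern is consistent with a schedule that decays with both the global batch count and the epoch number; the function below is one plausible Eden-style form, written as an assumption rather than read from this log, with lr_batches and lr_epochs as placeholder constants the caller must supply.

def eden_style_lr(base_lr, global_batch, epoch, lr_batches, lr_epochs):
    """Hypothetical schedule decaying with both batch index and epoch number.

    The -0.25 exponents and the exact functional form are assumptions; they are
    only meant to illustrate why the lr shrinks a little every few hundred
    batches and takes a visible step down at each new epoch.
    """
    batch_factor = ((global_batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

Plotting such a function over the batch indices in this section would reproduce the gentle within-epoch slope and the per-epoch steps seen in the lr column.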
2022-05-28 09:44:12,028 INFO [train.py:842] (0/4) Epoch 23, batch 3050, loss[loss=0.1475, simple_loss=0.2386, pruned_loss=0.02824, over 7146.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2676, pruned_loss=0.04669, over 1425909.94 frames.], batch size: 19, lr: 2.40e-04 2022-05-28 09:44:50,427 INFO [train.py:842] (0/4) Epoch 23, batch 3100, loss[loss=0.1576, simple_loss=0.2413, pruned_loss=0.03698, over 7240.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2675, pruned_loss=0.04685, over 1425252.15 frames.], batch size: 20, lr: 2.40e-04 2022-05-28 09:45:28,469 INFO [train.py:842] (0/4) Epoch 23, batch 3150, loss[loss=0.1693, simple_loss=0.2712, pruned_loss=0.03368, over 7329.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2677, pruned_loss=0.04728, over 1427164.69 frames.], batch size: 20, lr: 2.40e-04 2022-05-28 09:46:06,786 INFO [train.py:842] (0/4) Epoch 23, batch 3200, loss[loss=0.1945, simple_loss=0.2859, pruned_loss=0.05149, over 7124.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2681, pruned_loss=0.04751, over 1428496.13 frames.], batch size: 21, lr: 2.40e-04 2022-05-28 09:46:44,582 INFO [train.py:842] (0/4) Epoch 23, batch 3250, loss[loss=0.1744, simple_loss=0.2723, pruned_loss=0.0382, over 6451.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2703, pruned_loss=0.04848, over 1423417.87 frames.], batch size: 38, lr: 2.40e-04 2022-05-28 09:47:22,765 INFO [train.py:842] (0/4) Epoch 23, batch 3300, loss[loss=0.2009, simple_loss=0.2909, pruned_loss=0.05547, over 7316.00 frames.], tot_loss[loss=0.182, simple_loss=0.269, pruned_loss=0.04753, over 1423849.54 frames.], batch size: 24, lr: 2.40e-04 2022-05-28 09:48:00,967 INFO [train.py:842] (0/4) Epoch 23, batch 3350, loss[loss=0.1936, simple_loss=0.2924, pruned_loss=0.04739, over 7177.00 frames.], tot_loss[loss=0.1814, simple_loss=0.2683, pruned_loss=0.04727, over 1428524.04 frames.], batch size: 26, lr: 2.40e-04 2022-05-28 09:48:39,179 INFO [train.py:842] (0/4) Epoch 23, batch 3400, loss[loss=0.1818, simple_loss=0.2595, pruned_loss=0.05205, over 7161.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2696, pruned_loss=0.04796, over 1429097.85 frames.], batch size: 19, lr: 2.40e-04 2022-05-28 09:49:17,341 INFO [train.py:842] (0/4) Epoch 23, batch 3450, loss[loss=0.1818, simple_loss=0.2455, pruned_loss=0.05909, over 6767.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2691, pruned_loss=0.04814, over 1430510.35 frames.], batch size: 15, lr: 2.40e-04 2022-05-28 09:49:55,697 INFO [train.py:842] (0/4) Epoch 23, batch 3500, loss[loss=0.1824, simple_loss=0.2725, pruned_loss=0.04611, over 7232.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2683, pruned_loss=0.04737, over 1431405.72 frames.], batch size: 16, lr: 2.40e-04 2022-05-28 09:50:33,623 INFO [train.py:842] (0/4) Epoch 23, batch 3550, loss[loss=0.1348, simple_loss=0.2222, pruned_loss=0.02373, over 7414.00 frames.], tot_loss[loss=0.1809, simple_loss=0.2675, pruned_loss=0.04716, over 1431030.74 frames.], batch size: 18, lr: 2.40e-04 2022-05-28 09:51:11,852 INFO [train.py:842] (0/4) Epoch 23, batch 3600, loss[loss=0.1401, simple_loss=0.2223, pruned_loss=0.02898, over 7297.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2689, pruned_loss=0.0474, over 1431433.81 frames.], batch size: 17, lr: 2.40e-04 2022-05-28 09:51:49,914 INFO [train.py:842] (0/4) Epoch 23, batch 3650, loss[loss=0.2192, simple_loss=0.3002, pruned_loss=0.06905, over 6358.00 frames.], tot_loss[loss=0.1827, simple_loss=0.27, pruned_loss=0.04772, over 1431234.10 frames.], batch size: 38, lr: 2.40e-04 2022-05-28 09:52:28,067 
INFO [train.py:842] (0/4) Epoch 23, batch 3700, loss[loss=0.1913, simple_loss=0.2718, pruned_loss=0.05539, over 7154.00 frames.], tot_loss[loss=0.1833, simple_loss=0.2703, pruned_loss=0.04816, over 1430540.43 frames.], batch size: 19, lr: 2.40e-04 2022-05-28 09:53:05,868 INFO [train.py:842] (0/4) Epoch 23, batch 3750, loss[loss=0.2303, simple_loss=0.2989, pruned_loss=0.08084, over 7283.00 frames.], tot_loss[loss=0.1859, simple_loss=0.2717, pruned_loss=0.05002, over 1428047.92 frames.], batch size: 17, lr: 2.40e-04 2022-05-28 09:53:44,122 INFO [train.py:842] (0/4) Epoch 23, batch 3800, loss[loss=0.1661, simple_loss=0.2638, pruned_loss=0.03426, over 7371.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2711, pruned_loss=0.04957, over 1429917.99 frames.], batch size: 23, lr: 2.40e-04 2022-05-28 09:54:22,214 INFO [train.py:842] (0/4) Epoch 23, batch 3850, loss[loss=0.1761, simple_loss=0.2688, pruned_loss=0.04166, over 7080.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2709, pruned_loss=0.04885, over 1430819.13 frames.], batch size: 28, lr: 2.40e-04 2022-05-28 09:55:00,585 INFO [train.py:842] (0/4) Epoch 23, batch 3900, loss[loss=0.1524, simple_loss=0.2462, pruned_loss=0.02926, over 7112.00 frames.], tot_loss[loss=0.1846, simple_loss=0.2709, pruned_loss=0.04922, over 1430928.17 frames.], batch size: 21, lr: 2.40e-04 2022-05-28 09:55:38,442 INFO [train.py:842] (0/4) Epoch 23, batch 3950, loss[loss=0.1616, simple_loss=0.254, pruned_loss=0.03458, over 7151.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2695, pruned_loss=0.04779, over 1430376.57 frames.], batch size: 19, lr: 2.40e-04 2022-05-28 09:56:16,610 INFO [train.py:842] (0/4) Epoch 23, batch 4000, loss[loss=0.1805, simple_loss=0.264, pruned_loss=0.04849, over 7261.00 frames.], tot_loss[loss=0.1829, simple_loss=0.27, pruned_loss=0.0479, over 1427757.09 frames.], batch size: 17, lr: 2.40e-04 2022-05-28 09:56:54,395 INFO [train.py:842] (0/4) Epoch 23, batch 4050, loss[loss=0.1444, simple_loss=0.2262, pruned_loss=0.03128, over 7192.00 frames.], tot_loss[loss=0.183, simple_loss=0.2703, pruned_loss=0.0479, over 1423118.93 frames.], batch size: 16, lr: 2.40e-04 2022-05-28 09:57:32,626 INFO [train.py:842] (0/4) Epoch 23, batch 4100, loss[loss=0.1511, simple_loss=0.2498, pruned_loss=0.02622, over 7140.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2694, pruned_loss=0.04782, over 1420560.89 frames.], batch size: 20, lr: 2.40e-04 2022-05-28 09:58:10,723 INFO [train.py:842] (0/4) Epoch 23, batch 4150, loss[loss=0.1446, simple_loss=0.2334, pruned_loss=0.02788, over 7061.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2681, pruned_loss=0.04715, over 1420258.61 frames.], batch size: 18, lr: 2.39e-04 2022-05-28 09:58:48,821 INFO [train.py:842] (0/4) Epoch 23, batch 4200, loss[loss=0.1476, simple_loss=0.2475, pruned_loss=0.02384, over 7435.00 frames.], tot_loss[loss=0.1818, simple_loss=0.269, pruned_loss=0.04732, over 1423172.80 frames.], batch size: 20, lr: 2.39e-04 2022-05-28 09:59:26,827 INFO [train.py:842] (0/4) Epoch 23, batch 4250, loss[loss=0.131, simple_loss=0.2105, pruned_loss=0.02573, over 7275.00 frames.], tot_loss[loss=0.1821, simple_loss=0.2694, pruned_loss=0.04735, over 1427576.38 frames.], batch size: 17, lr: 2.39e-04 2022-05-28 10:00:04,964 INFO [train.py:842] (0/4) Epoch 23, batch 4300, loss[loss=0.1682, simple_loss=0.2595, pruned_loss=0.03845, over 6823.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2689, pruned_loss=0.04677, over 1428332.59 frames.], batch size: 31, lr: 2.39e-04 2022-05-28 10:00:42,792 INFO [train.py:842] (0/4) 
Epoch 23, batch 4350, loss[loss=0.1895, simple_loss=0.2812, pruned_loss=0.04884, over 7419.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2692, pruned_loss=0.04691, over 1426562.57 frames.], batch size: 21, lr: 2.39e-04 2022-05-28 10:01:30,388 INFO [train.py:842] (0/4) Epoch 23, batch 4400, loss[loss=0.1671, simple_loss=0.2608, pruned_loss=0.03664, over 6750.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2682, pruned_loss=0.04619, over 1427879.27 frames.], batch size: 31, lr: 2.39e-04 2022-05-28 10:02:08,603 INFO [train.py:842] (0/4) Epoch 23, batch 4450, loss[loss=0.146, simple_loss=0.232, pruned_loss=0.02998, over 7137.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2677, pruned_loss=0.04637, over 1428456.55 frames.], batch size: 17, lr: 2.39e-04 2022-05-28 10:02:46,913 INFO [train.py:842] (0/4) Epoch 23, batch 4500, loss[loss=0.1589, simple_loss=0.2393, pruned_loss=0.03928, over 7433.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2696, pruned_loss=0.04746, over 1427545.69 frames.], batch size: 18, lr: 2.39e-04 2022-05-28 10:03:24,942 INFO [train.py:842] (0/4) Epoch 23, batch 4550, loss[loss=0.2121, simple_loss=0.304, pruned_loss=0.06008, over 7211.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2693, pruned_loss=0.0467, over 1429517.14 frames.], batch size: 22, lr: 2.39e-04 2022-05-28 10:04:12,779 INFO [train.py:842] (0/4) Epoch 23, batch 4600, loss[loss=0.2057, simple_loss=0.2834, pruned_loss=0.06399, over 7376.00 frames.], tot_loss[loss=0.1824, simple_loss=0.2699, pruned_loss=0.04747, over 1424541.43 frames.], batch size: 23, lr: 2.39e-04 2022-05-28 10:04:50,907 INFO [train.py:842] (0/4) Epoch 23, batch 4650, loss[loss=0.1838, simple_loss=0.2723, pruned_loss=0.04767, over 7425.00 frames.], tot_loss[loss=0.1817, simple_loss=0.269, pruned_loss=0.04717, over 1428591.51 frames.], batch size: 20, lr: 2.39e-04 2022-05-28 10:05:38,560 INFO [train.py:842] (0/4) Epoch 23, batch 4700, loss[loss=0.162, simple_loss=0.259, pruned_loss=0.03252, over 7412.00 frames.], tot_loss[loss=0.1808, simple_loss=0.268, pruned_loss=0.04678, over 1430394.17 frames.], batch size: 21, lr: 2.39e-04 2022-05-28 10:06:16,320 INFO [train.py:842] (0/4) Epoch 23, batch 4750, loss[loss=0.1763, simple_loss=0.266, pruned_loss=0.04332, over 7140.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2688, pruned_loss=0.04708, over 1423806.34 frames.], batch size: 20, lr: 2.39e-04 2022-05-28 10:06:54,360 INFO [train.py:842] (0/4) Epoch 23, batch 4800, loss[loss=0.1527, simple_loss=0.2403, pruned_loss=0.03257, over 7057.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2698, pruned_loss=0.04761, over 1421472.15 frames.], batch size: 18, lr: 2.39e-04 2022-05-28 10:07:32,375 INFO [train.py:842] (0/4) Epoch 23, batch 4850, loss[loss=0.1869, simple_loss=0.2615, pruned_loss=0.05616, over 7403.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2693, pruned_loss=0.04728, over 1420379.67 frames.], batch size: 18, lr: 2.39e-04 2022-05-28 10:08:10,710 INFO [train.py:842] (0/4) Epoch 23, batch 4900, loss[loss=0.1769, simple_loss=0.2758, pruned_loss=0.03905, over 7192.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2691, pruned_loss=0.0474, over 1423976.00 frames.], batch size: 22, lr: 2.39e-04 2022-05-28 10:08:48,734 INFO [train.py:842] (0/4) Epoch 23, batch 4950, loss[loss=0.178, simple_loss=0.2631, pruned_loss=0.04644, over 7416.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2689, pruned_loss=0.04717, over 1423925.99 frames.], batch size: 21, lr: 2.39e-04 2022-05-28 10:09:27,010 INFO [train.py:842] (0/4) Epoch 23, batch 5000, 
loss[loss=0.1859, simple_loss=0.2727, pruned_loss=0.04956, over 7433.00 frames.], tot_loss[loss=0.1821, simple_loss=0.2693, pruned_loss=0.04748, over 1422606.21 frames.], batch size: 20, lr: 2.39e-04 2022-05-28 10:10:05,040 INFO [train.py:842] (0/4) Epoch 23, batch 5050, loss[loss=0.1735, simple_loss=0.2597, pruned_loss=0.04366, over 7152.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2681, pruned_loss=0.04677, over 1421424.22 frames.], batch size: 19, lr: 2.39e-04 2022-05-28 10:10:43,282 INFO [train.py:842] (0/4) Epoch 23, batch 5100, loss[loss=0.2301, simple_loss=0.3174, pruned_loss=0.07146, over 7281.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2693, pruned_loss=0.04765, over 1422382.38 frames.], batch size: 24, lr: 2.39e-04 2022-05-28 10:11:21,302 INFO [train.py:842] (0/4) Epoch 23, batch 5150, loss[loss=0.1964, simple_loss=0.2857, pruned_loss=0.05358, over 7402.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2696, pruned_loss=0.0475, over 1427234.39 frames.], batch size: 21, lr: 2.39e-04 2022-05-28 10:11:59,862 INFO [train.py:842] (0/4) Epoch 23, batch 5200, loss[loss=0.18, simple_loss=0.2718, pruned_loss=0.04409, over 7377.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2685, pruned_loss=0.04698, over 1429012.42 frames.], batch size: 23, lr: 2.39e-04 2022-05-28 10:12:37,834 INFO [train.py:842] (0/4) Epoch 23, batch 5250, loss[loss=0.1972, simple_loss=0.281, pruned_loss=0.05671, over 7317.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2699, pruned_loss=0.04731, over 1430847.74 frames.], batch size: 22, lr: 2.39e-04 2022-05-28 10:13:16,068 INFO [train.py:842] (0/4) Epoch 23, batch 5300, loss[loss=0.196, simple_loss=0.2966, pruned_loss=0.04774, over 6462.00 frames.], tot_loss[loss=0.1812, simple_loss=0.269, pruned_loss=0.04669, over 1429527.55 frames.], batch size: 37, lr: 2.39e-04 2022-05-28 10:13:54,135 INFO [train.py:842] (0/4) Epoch 23, batch 5350, loss[loss=0.2031, simple_loss=0.2936, pruned_loss=0.05632, over 7098.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2681, pruned_loss=0.04678, over 1426214.60 frames.], batch size: 21, lr: 2.39e-04 2022-05-28 10:14:32,591 INFO [train.py:842] (0/4) Epoch 23, batch 5400, loss[loss=0.1911, simple_loss=0.2834, pruned_loss=0.04937, over 7335.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2688, pruned_loss=0.04732, over 1429780.19 frames.], batch size: 20, lr: 2.39e-04 2022-05-28 10:15:10,491 INFO [train.py:842] (0/4) Epoch 23, batch 5450, loss[loss=0.1805, simple_loss=0.2689, pruned_loss=0.04604, over 7068.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2696, pruned_loss=0.04744, over 1431359.97 frames.], batch size: 28, lr: 2.39e-04 2022-05-28 10:15:48,464 INFO [train.py:842] (0/4) Epoch 23, batch 5500, loss[loss=0.2033, simple_loss=0.2904, pruned_loss=0.0581, over 7207.00 frames.], tot_loss[loss=0.185, simple_loss=0.2724, pruned_loss=0.04882, over 1425501.55 frames.], batch size: 26, lr: 2.39e-04 2022-05-28 10:16:26,509 INFO [train.py:842] (0/4) Epoch 23, batch 5550, loss[loss=0.1876, simple_loss=0.2802, pruned_loss=0.04752, over 7190.00 frames.], tot_loss[loss=0.1841, simple_loss=0.2712, pruned_loss=0.04854, over 1427327.10 frames.], batch size: 22, lr: 2.39e-04 2022-05-28 10:17:04,851 INFO [train.py:842] (0/4) Epoch 23, batch 5600, loss[loss=0.1847, simple_loss=0.2513, pruned_loss=0.05906, over 6795.00 frames.], tot_loss[loss=0.184, simple_loss=0.2711, pruned_loss=0.04845, over 1427632.78 frames.], batch size: 15, lr: 2.39e-04 2022-05-28 10:17:43,158 INFO [train.py:842] (0/4) Epoch 23, batch 5650, loss[loss=0.1646, 
simple_loss=0.2601, pruned_loss=0.03456, over 7424.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2693, pruned_loss=0.04781, over 1430331.24 frames.], batch size: 20, lr: 2.39e-04 2022-05-28 10:18:21,418 INFO [train.py:842] (0/4) Epoch 23, batch 5700, loss[loss=0.1656, simple_loss=0.2713, pruned_loss=0.02996, over 7144.00 frames.], tot_loss[loss=0.1828, simple_loss=0.2696, pruned_loss=0.04794, over 1426077.40 frames.], batch size: 20, lr: 2.39e-04 2022-05-28 10:18:59,376 INFO [train.py:842] (0/4) Epoch 23, batch 5750, loss[loss=0.1706, simple_loss=0.2388, pruned_loss=0.05117, over 7137.00 frames.], tot_loss[loss=0.182, simple_loss=0.2685, pruned_loss=0.04771, over 1424435.87 frames.], batch size: 17, lr: 2.39e-04 2022-05-28 10:19:20,302 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-208000.pt 2022-05-28 10:19:40,409 INFO [train.py:842] (0/4) Epoch 23, batch 5800, loss[loss=0.1679, simple_loss=0.2559, pruned_loss=0.03998, over 7057.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2659, pruned_loss=0.0464, over 1426977.05 frames.], batch size: 18, lr: 2.39e-04 2022-05-28 10:20:18,493 INFO [train.py:842] (0/4) Epoch 23, batch 5850, loss[loss=0.1852, simple_loss=0.2773, pruned_loss=0.04652, over 7312.00 frames.], tot_loss[loss=0.1786, simple_loss=0.265, pruned_loss=0.04604, over 1429291.06 frames.], batch size: 21, lr: 2.39e-04 2022-05-28 10:20:56,744 INFO [train.py:842] (0/4) Epoch 23, batch 5900, loss[loss=0.1611, simple_loss=0.2501, pruned_loss=0.03605, over 7426.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2661, pruned_loss=0.04631, over 1424957.80 frames.], batch size: 20, lr: 2.38e-04 2022-05-28 10:21:34,801 INFO [train.py:842] (0/4) Epoch 23, batch 5950, loss[loss=0.1556, simple_loss=0.2406, pruned_loss=0.03526, over 7408.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2664, pruned_loss=0.04675, over 1420772.70 frames.], batch size: 17, lr: 2.38e-04 2022-05-28 10:22:13,043 INFO [train.py:842] (0/4) Epoch 23, batch 6000, loss[loss=0.1919, simple_loss=0.2776, pruned_loss=0.05303, over 6806.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2669, pruned_loss=0.04712, over 1420344.86 frames.], batch size: 31, lr: 2.38e-04 2022-05-28 10:22:13,044 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 10:22:22,049 INFO [train.py:871] (0/4) Epoch 23, validation: loss=0.1637, simple_loss=0.2625, pruned_loss=0.03241, over 868885.00 frames. 
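Besides the per-epoch files (exp/epoch-22.pt above), checkpoint.py also writes batch-count checkpoints: checkpoint-200000.pt late in epoch 22 and checkpoint-208000.pt just above, i.e. 8000 training batches apart in this section. The sketch below mimics that behaviour; the trigger condition, the saved fields, and the helper names are assumptions, not the actual contents of checkpoint.py.

from pathlib import Path

import torch

def maybe_save_batch_checkpoint(model, optimizer, exp_dir, batch_idx_train, save_every_n=8000):
    """Save exp_dir/checkpoint-<batch>.pt every save_every_n global batches (sketch)."""
    if batch_idx_train == 0 or batch_idx_train % save_every_n != 0:
        return None
    path = Path(exp_dir) / f"checkpoint-{batch_idx_train}.pt"
    torch.save(
        {
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "batch_idx_train": batch_idx_train,
        },
        path,
    )
    return path

def save_epoch_checkpoint(model, exp_dir, epoch):
    """End-of-epoch save producing names like exp/epoch-22.pt."""
    path = Path(exp_dir) / f"epoch-{epoch}.pt"
    torch.save({"model": model.state_dict(), "epoch": epoch}, path)
    return path

Keeping both kinds of files gives a recovery point at each epoch boundary plus a finer-grained one every few thousand batches.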
2022-05-28 10:22:59,809 INFO [train.py:842] (0/4) Epoch 23, batch 6050, loss[loss=0.1256, simple_loss=0.2075, pruned_loss=0.0219, over 7427.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2678, pruned_loss=0.04753, over 1419648.52 frames.], batch size: 18, lr: 2.38e-04 2022-05-28 10:23:37,967 INFO [train.py:842] (0/4) Epoch 23, batch 6100, loss[loss=0.1815, simple_loss=0.2764, pruned_loss=0.04328, over 6798.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2685, pruned_loss=0.04771, over 1421581.10 frames.], batch size: 31, lr: 2.38e-04 2022-05-28 10:24:16,055 INFO [train.py:842] (0/4) Epoch 23, batch 6150, loss[loss=0.1796, simple_loss=0.2788, pruned_loss=0.04014, over 7302.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2692, pruned_loss=0.04804, over 1422536.00 frames.], batch size: 24, lr: 2.38e-04 2022-05-28 10:24:54,366 INFO [train.py:842] (0/4) Epoch 23, batch 6200, loss[loss=0.1923, simple_loss=0.2802, pruned_loss=0.05216, over 7143.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2698, pruned_loss=0.04831, over 1424529.11 frames.], batch size: 26, lr: 2.38e-04 2022-05-28 10:25:32,197 INFO [train.py:842] (0/4) Epoch 23, batch 6250, loss[loss=0.1899, simple_loss=0.275, pruned_loss=0.05238, over 6671.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2693, pruned_loss=0.04795, over 1422150.01 frames.], batch size: 31, lr: 2.38e-04 2022-05-28 10:26:10,647 INFO [train.py:842] (0/4) Epoch 23, batch 6300, loss[loss=0.1969, simple_loss=0.2813, pruned_loss=0.05624, over 7320.00 frames.], tot_loss[loss=0.1824, simple_loss=0.2687, pruned_loss=0.04808, over 1423148.02 frames.], batch size: 25, lr: 2.38e-04 2022-05-28 10:26:48,651 INFO [train.py:842] (0/4) Epoch 23, batch 6350, loss[loss=0.1833, simple_loss=0.2705, pruned_loss=0.04811, over 7169.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2675, pruned_loss=0.04745, over 1421361.12 frames.], batch size: 26, lr: 2.38e-04 2022-05-28 10:27:27,046 INFO [train.py:842] (0/4) Epoch 23, batch 6400, loss[loss=0.1762, simple_loss=0.2653, pruned_loss=0.04353, over 7122.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2665, pruned_loss=0.0471, over 1424849.74 frames.], batch size: 28, lr: 2.38e-04 2022-05-28 10:28:05,058 INFO [train.py:842] (0/4) Epoch 23, batch 6450, loss[loss=0.233, simple_loss=0.3041, pruned_loss=0.08091, over 7333.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2664, pruned_loss=0.04713, over 1421235.26 frames.], batch size: 20, lr: 2.38e-04 2022-05-28 10:28:43,391 INFO [train.py:842] (0/4) Epoch 23, batch 6500, loss[loss=0.1664, simple_loss=0.2545, pruned_loss=0.03914, over 7161.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2661, pruned_loss=0.04664, over 1421856.64 frames.], batch size: 18, lr: 2.38e-04 2022-05-28 10:29:21,289 INFO [train.py:842] (0/4) Epoch 23, batch 6550, loss[loss=0.1802, simple_loss=0.2658, pruned_loss=0.04735, over 7251.00 frames.], tot_loss[loss=0.181, simple_loss=0.2676, pruned_loss=0.04719, over 1422566.93 frames.], batch size: 19, lr: 2.38e-04 2022-05-28 10:29:59,627 INFO [train.py:842] (0/4) Epoch 23, batch 6600, loss[loss=0.1922, simple_loss=0.2824, pruned_loss=0.05096, over 6814.00 frames.], tot_loss[loss=0.1814, simple_loss=0.2684, pruned_loss=0.04719, over 1426414.73 frames.], batch size: 31, lr: 2.38e-04 2022-05-28 10:30:37,531 INFO [train.py:842] (0/4) Epoch 23, batch 6650, loss[loss=0.2293, simple_loss=0.3066, pruned_loss=0.076, over 7327.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2682, pruned_loss=0.04696, over 1428731.71 frames.], batch size: 21, lr: 2.38e-04 2022-05-28 10:31:15,833 
INFO [train.py:842] (0/4) Epoch 23, batch 6700, loss[loss=0.1693, simple_loss=0.2635, pruned_loss=0.03759, over 7366.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2679, pruned_loss=0.04686, over 1428911.03 frames.], batch size: 19, lr: 2.38e-04 2022-05-28 10:31:54,040 INFO [train.py:842] (0/4) Epoch 23, batch 6750, loss[loss=0.1836, simple_loss=0.2634, pruned_loss=0.0519, over 7418.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2672, pruned_loss=0.04671, over 1431037.03 frames.], batch size: 21, lr: 2.38e-04 2022-05-28 10:32:32,268 INFO [train.py:842] (0/4) Epoch 23, batch 6800, loss[loss=0.16, simple_loss=0.2426, pruned_loss=0.03873, over 7349.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2685, pruned_loss=0.04721, over 1432515.53 frames.], batch size: 19, lr: 2.38e-04 2022-05-28 10:33:10,268 INFO [train.py:842] (0/4) Epoch 23, batch 6850, loss[loss=0.1637, simple_loss=0.2454, pruned_loss=0.04104, over 7284.00 frames.], tot_loss[loss=0.1821, simple_loss=0.269, pruned_loss=0.04764, over 1427230.58 frames.], batch size: 18, lr: 2.38e-04 2022-05-28 10:33:48,535 INFO [train.py:842] (0/4) Epoch 23, batch 6900, loss[loss=0.2079, simple_loss=0.2898, pruned_loss=0.06294, over 7423.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2698, pruned_loss=0.048, over 1426450.93 frames.], batch size: 21, lr: 2.38e-04 2022-05-28 10:34:26,510 INFO [train.py:842] (0/4) Epoch 23, batch 6950, loss[loss=0.1321, simple_loss=0.2125, pruned_loss=0.02586, over 6989.00 frames.], tot_loss[loss=0.183, simple_loss=0.2698, pruned_loss=0.04806, over 1428758.72 frames.], batch size: 16, lr: 2.38e-04 2022-05-28 10:35:04,766 INFO [train.py:842] (0/4) Epoch 23, batch 7000, loss[loss=0.258, simple_loss=0.3217, pruned_loss=0.09712, over 5364.00 frames.], tot_loss[loss=0.1833, simple_loss=0.2701, pruned_loss=0.04824, over 1427912.40 frames.], batch size: 52, lr: 2.38e-04 2022-05-28 10:35:42,811 INFO [train.py:842] (0/4) Epoch 23, batch 7050, loss[loss=0.233, simple_loss=0.3088, pruned_loss=0.07859, over 7224.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2685, pruned_loss=0.04745, over 1427921.61 frames.], batch size: 20, lr: 2.38e-04 2022-05-28 10:36:21,009 INFO [train.py:842] (0/4) Epoch 23, batch 7100, loss[loss=0.182, simple_loss=0.2697, pruned_loss=0.04712, over 7250.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2686, pruned_loss=0.04727, over 1423220.26 frames.], batch size: 24, lr: 2.38e-04 2022-05-28 10:36:58,900 INFO [train.py:842] (0/4) Epoch 23, batch 7150, loss[loss=0.1892, simple_loss=0.2815, pruned_loss=0.04841, over 7283.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2684, pruned_loss=0.04742, over 1425868.93 frames.], batch size: 25, lr: 2.38e-04 2022-05-28 10:37:37,086 INFO [train.py:842] (0/4) Epoch 23, batch 7200, loss[loss=0.1797, simple_loss=0.2624, pruned_loss=0.04847, over 7342.00 frames.], tot_loss[loss=0.181, simple_loss=0.2679, pruned_loss=0.04709, over 1418805.28 frames.], batch size: 20, lr: 2.38e-04 2022-05-28 10:38:15,058 INFO [train.py:842] (0/4) Epoch 23, batch 7250, loss[loss=0.1695, simple_loss=0.2567, pruned_loss=0.04111, over 7161.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2687, pruned_loss=0.04742, over 1417103.11 frames.], batch size: 19, lr: 2.38e-04 2022-05-28 10:38:53,290 INFO [train.py:842] (0/4) Epoch 23, batch 7300, loss[loss=0.1785, simple_loss=0.2676, pruned_loss=0.04472, over 7173.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2683, pruned_loss=0.04744, over 1415811.07 frames.], batch size: 26, lr: 2.38e-04 2022-05-28 10:39:31,502 INFO [train.py:842] (0/4) 
Epoch 23, batch 7350, loss[loss=0.2047, simple_loss=0.2921, pruned_loss=0.05866, over 5187.00 frames.], tot_loss[loss=0.1812, simple_loss=0.268, pruned_loss=0.04719, over 1419006.42 frames.], batch size: 52, lr: 2.38e-04 2022-05-28 10:40:09,506 INFO [train.py:842] (0/4) Epoch 23, batch 7400, loss[loss=0.1806, simple_loss=0.2693, pruned_loss=0.04589, over 7143.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2683, pruned_loss=0.04736, over 1420650.31 frames.], batch size: 20, lr: 2.38e-04 2022-05-28 10:40:47,420 INFO [train.py:842] (0/4) Epoch 23, batch 7450, loss[loss=0.1915, simple_loss=0.2743, pruned_loss=0.05436, over 7155.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2679, pruned_loss=0.04681, over 1422666.55 frames.], batch size: 19, lr: 2.38e-04 2022-05-28 10:41:25,541 INFO [train.py:842] (0/4) Epoch 23, batch 7500, loss[loss=0.2546, simple_loss=0.3514, pruned_loss=0.07891, over 7213.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2689, pruned_loss=0.04733, over 1417026.55 frames.], batch size: 22, lr: 2.38e-04 2022-05-28 10:42:03,561 INFO [train.py:842] (0/4) Epoch 23, batch 7550, loss[loss=0.1923, simple_loss=0.2816, pruned_loss=0.0515, over 7421.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2682, pruned_loss=0.0472, over 1421484.86 frames.], batch size: 21, lr: 2.38e-04 2022-05-28 10:42:41,660 INFO [train.py:842] (0/4) Epoch 23, batch 7600, loss[loss=0.1614, simple_loss=0.2524, pruned_loss=0.03518, over 4748.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2689, pruned_loss=0.04725, over 1417144.22 frames.], batch size: 52, lr: 2.38e-04 2022-05-28 10:43:19,664 INFO [train.py:842] (0/4) Epoch 23, batch 7650, loss[loss=0.1853, simple_loss=0.2699, pruned_loss=0.05032, over 6827.00 frames.], tot_loss[loss=0.1817, simple_loss=0.269, pruned_loss=0.0472, over 1414692.18 frames.], batch size: 31, lr: 2.37e-04 2022-05-28 10:43:57,920 INFO [train.py:842] (0/4) Epoch 23, batch 7700, loss[loss=0.1729, simple_loss=0.2593, pruned_loss=0.0432, over 7148.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2693, pruned_loss=0.04766, over 1414959.43 frames.], batch size: 20, lr: 2.37e-04 2022-05-28 10:44:35,744 INFO [train.py:842] (0/4) Epoch 23, batch 7750, loss[loss=0.1853, simple_loss=0.2816, pruned_loss=0.04452, over 7412.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2686, pruned_loss=0.04732, over 1418117.55 frames.], batch size: 21, lr: 2.37e-04 2022-05-28 10:45:14,277 INFO [train.py:842] (0/4) Epoch 23, batch 7800, loss[loss=0.2429, simple_loss=0.3231, pruned_loss=0.08134, over 7134.00 frames.], tot_loss[loss=0.1814, simple_loss=0.2684, pruned_loss=0.04722, over 1421859.14 frames.], batch size: 26, lr: 2.37e-04 2022-05-28 10:45:52,171 INFO [train.py:842] (0/4) Epoch 23, batch 7850, loss[loss=0.2306, simple_loss=0.3051, pruned_loss=0.07801, over 4862.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2674, pruned_loss=0.04683, over 1416909.06 frames.], batch size: 52, lr: 2.37e-04 2022-05-28 10:46:30,462 INFO [train.py:842] (0/4) Epoch 23, batch 7900, loss[loss=0.1905, simple_loss=0.2707, pruned_loss=0.05519, over 5341.00 frames.], tot_loss[loss=0.1809, simple_loss=0.2678, pruned_loss=0.04703, over 1418336.89 frames.], batch size: 52, lr: 2.37e-04 2022-05-28 10:47:08,306 INFO [train.py:842] (0/4) Epoch 23, batch 7950, loss[loss=0.1806, simple_loss=0.2518, pruned_loss=0.05468, over 7135.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2684, pruned_loss=0.04734, over 1419100.16 frames.], batch size: 17, lr: 2.37e-04 2022-05-28 10:47:46,446 INFO [train.py:842] (0/4) Epoch 23, batch 8000, 
loss[loss=0.1804, simple_loss=0.2723, pruned_loss=0.04424, over 7192.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2689, pruned_loss=0.04726, over 1423401.44 frames.], batch size: 26, lr: 2.37e-04 2022-05-28 10:48:24,463 INFO [train.py:842] (0/4) Epoch 23, batch 8050, loss[loss=0.1805, simple_loss=0.259, pruned_loss=0.05102, over 7165.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2689, pruned_loss=0.04745, over 1422833.44 frames.], batch size: 18, lr: 2.37e-04 2022-05-28 10:49:02,718 INFO [train.py:842] (0/4) Epoch 23, batch 8100, loss[loss=0.1556, simple_loss=0.2411, pruned_loss=0.03509, over 7284.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2674, pruned_loss=0.04662, over 1424036.09 frames.], batch size: 17, lr: 2.37e-04 2022-05-28 10:49:40,299 INFO [train.py:842] (0/4) Epoch 23, batch 8150, loss[loss=0.1754, simple_loss=0.2726, pruned_loss=0.03908, over 7225.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2688, pruned_loss=0.04682, over 1420260.36 frames.], batch size: 21, lr: 2.37e-04 2022-05-28 10:50:18,851 INFO [train.py:842] (0/4) Epoch 23, batch 8200, loss[loss=0.1927, simple_loss=0.2833, pruned_loss=0.05104, over 7321.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2687, pruned_loss=0.04717, over 1421828.07 frames.], batch size: 25, lr: 2.37e-04 2022-05-28 10:50:56,563 INFO [train.py:842] (0/4) Epoch 23, batch 8250, loss[loss=0.1665, simple_loss=0.2596, pruned_loss=0.03668, over 7190.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2688, pruned_loss=0.04705, over 1420726.79 frames.], batch size: 22, lr: 2.37e-04 2022-05-28 10:51:34,714 INFO [train.py:842] (0/4) Epoch 23, batch 8300, loss[loss=0.186, simple_loss=0.2715, pruned_loss=0.05032, over 7072.00 frames.], tot_loss[loss=0.1809, simple_loss=0.2679, pruned_loss=0.04699, over 1415099.67 frames.], batch size: 18, lr: 2.37e-04 2022-05-28 10:52:12,731 INFO [train.py:842] (0/4) Epoch 23, batch 8350, loss[loss=0.2055, simple_loss=0.2957, pruned_loss=0.05766, over 6301.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2682, pruned_loss=0.04714, over 1413415.61 frames.], batch size: 38, lr: 2.37e-04 2022-05-28 10:52:50,725 INFO [train.py:842] (0/4) Epoch 23, batch 8400, loss[loss=0.1937, simple_loss=0.2809, pruned_loss=0.05324, over 7063.00 frames.], tot_loss[loss=0.1828, simple_loss=0.2696, pruned_loss=0.04804, over 1408911.95 frames.], batch size: 18, lr: 2.37e-04 2022-05-28 10:53:28,670 INFO [train.py:842] (0/4) Epoch 23, batch 8450, loss[loss=0.1937, simple_loss=0.2681, pruned_loss=0.05963, over 7008.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2682, pruned_loss=0.04742, over 1408071.06 frames.], batch size: 16, lr: 2.37e-04 2022-05-28 10:54:06,868 INFO [train.py:842] (0/4) Epoch 23, batch 8500, loss[loss=0.1541, simple_loss=0.2422, pruned_loss=0.03301, over 6817.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2683, pruned_loss=0.04751, over 1408981.54 frames.], batch size: 15, lr: 2.37e-04 2022-05-28 10:54:44,742 INFO [train.py:842] (0/4) Epoch 23, batch 8550, loss[loss=0.1833, simple_loss=0.2795, pruned_loss=0.04358, over 7223.00 frames.], tot_loss[loss=0.1836, simple_loss=0.27, pruned_loss=0.04865, over 1409521.41 frames.], batch size: 21, lr: 2.37e-04 2022-05-28 10:55:22,734 INFO [train.py:842] (0/4) Epoch 23, batch 8600, loss[loss=0.2216, simple_loss=0.3016, pruned_loss=0.0708, over 7376.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2706, pruned_loss=0.0486, over 1410034.19 frames.], batch size: 23, lr: 2.37e-04 2022-05-28 10:56:00,905 INFO [train.py:842] (0/4) Epoch 23, batch 8650, loss[loss=0.1246, 
simple_loss=0.206, pruned_loss=0.02161, over 7280.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2692, pruned_loss=0.04766, over 1415680.70 frames.], batch size: 17, lr: 2.37e-04 2022-05-28 10:56:39,171 INFO [train.py:842] (0/4) Epoch 23, batch 8700, loss[loss=0.1524, simple_loss=0.2279, pruned_loss=0.03843, over 6994.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2684, pruned_loss=0.04733, over 1414805.71 frames.], batch size: 16, lr: 2.37e-04 2022-05-28 10:57:17,035 INFO [train.py:842] (0/4) Epoch 23, batch 8750, loss[loss=0.1453, simple_loss=0.2291, pruned_loss=0.03081, over 7128.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2674, pruned_loss=0.04684, over 1412705.03 frames.], batch size: 17, lr: 2.37e-04 2022-05-28 10:57:55,223 INFO [train.py:842] (0/4) Epoch 23, batch 8800, loss[loss=0.1899, simple_loss=0.2817, pruned_loss=0.04907, over 7293.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2683, pruned_loss=0.04766, over 1407884.99 frames.], batch size: 24, lr: 2.37e-04 2022-05-28 10:58:33,254 INFO [train.py:842] (0/4) Epoch 23, batch 8850, loss[loss=0.164, simple_loss=0.2624, pruned_loss=0.0328, over 7136.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2675, pruned_loss=0.04672, over 1410842.22 frames.], batch size: 21, lr: 2.37e-04 2022-05-28 10:59:11,040 INFO [train.py:842] (0/4) Epoch 23, batch 8900, loss[loss=0.2291, simple_loss=0.3134, pruned_loss=0.0724, over 7153.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2685, pruned_loss=0.04736, over 1402891.13 frames.], batch size: 26, lr: 2.37e-04 2022-05-28 10:59:48,797 INFO [train.py:842] (0/4) Epoch 23, batch 8950, loss[loss=0.2003, simple_loss=0.2953, pruned_loss=0.05268, over 6510.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2695, pruned_loss=0.0478, over 1396224.42 frames.], batch size: 38, lr: 2.37e-04 2022-05-28 11:00:26,423 INFO [train.py:842] (0/4) Epoch 23, batch 9000, loss[loss=0.1779, simple_loss=0.2673, pruned_loss=0.04429, over 5259.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2719, pruned_loss=0.04834, over 1392411.93 frames.], batch size: 52, lr: 2.37e-04 2022-05-28 11:00:26,424 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 11:00:35,435 INFO [train.py:871] (0/4) Epoch 23, validation: loss=0.1641, simple_loss=0.2627, pruned_loss=0.03273, over 868885.00 frames. 
2022-05-28 11:01:12,462 INFO [train.py:842] (0/4) Epoch 23, batch 9050, loss[loss=0.1855, simple_loss=0.284, pruned_loss=0.0435, over 6808.00 frames.], tot_loss[loss=0.1867, simple_loss=0.2743, pruned_loss=0.04956, over 1380932.69 frames.], batch size: 31, lr: 2.37e-04 2022-05-28 11:01:49,739 INFO [train.py:842] (0/4) Epoch 23, batch 9100, loss[loss=0.2007, simple_loss=0.2873, pruned_loss=0.05702, over 7154.00 frames.], tot_loss[loss=0.1902, simple_loss=0.2775, pruned_loss=0.05151, over 1345186.92 frames.], batch size: 26, lr: 2.37e-04 2022-05-28 11:02:26,559 INFO [train.py:842] (0/4) Epoch 23, batch 9150, loss[loss=0.2124, simple_loss=0.2897, pruned_loss=0.06755, over 4787.00 frames.], tot_loss[loss=0.1941, simple_loss=0.2802, pruned_loss=0.05396, over 1280425.92 frames.], batch size: 52, lr: 2.37e-04 2022-05-28 11:02:57,566 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-23.pt 2022-05-28 11:03:11,876 INFO [train.py:842] (0/4) Epoch 24, batch 0, loss[loss=0.155, simple_loss=0.2372, pruned_loss=0.03638, over 6826.00 frames.], tot_loss[loss=0.155, simple_loss=0.2372, pruned_loss=0.03638, over 6826.00 frames.], batch size: 15, lr: 2.32e-04 2022-05-28 11:03:49,768 INFO [train.py:842] (0/4) Epoch 24, batch 50, loss[loss=0.1513, simple_loss=0.2318, pruned_loss=0.03536, over 7280.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2633, pruned_loss=0.04398, over 316227.63 frames.], batch size: 17, lr: 2.32e-04 2022-05-28 11:04:28,171 INFO [train.py:842] (0/4) Epoch 24, batch 100, loss[loss=0.1803, simple_loss=0.2639, pruned_loss=0.04837, over 7334.00 frames.], tot_loss[loss=0.1797, simple_loss=0.267, pruned_loss=0.04613, over 567571.31 frames.], batch size: 20, lr: 2.32e-04 2022-05-28 11:05:05,926 INFO [train.py:842] (0/4) Epoch 24, batch 150, loss[loss=0.1909, simple_loss=0.2829, pruned_loss=0.04947, over 7384.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2693, pruned_loss=0.04755, over 752768.21 frames.], batch size: 23, lr: 2.32e-04 2022-05-28 11:05:44,208 INFO [train.py:842] (0/4) Epoch 24, batch 200, loss[loss=0.2183, simple_loss=0.314, pruned_loss=0.0613, over 7204.00 frames.], tot_loss[loss=0.1815, simple_loss=0.269, pruned_loss=0.04695, over 903680.96 frames.], batch size: 22, lr: 2.32e-04 2022-05-28 11:06:22,171 INFO [train.py:842] (0/4) Epoch 24, batch 250, loss[loss=0.1609, simple_loss=0.2583, pruned_loss=0.03174, over 7409.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2689, pruned_loss=0.04673, over 1015965.24 frames.], batch size: 21, lr: 2.32e-04 2022-05-28 11:07:00,465 INFO [train.py:842] (0/4) Epoch 24, batch 300, loss[loss=0.1842, simple_loss=0.2791, pruned_loss=0.04466, over 7144.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2691, pruned_loss=0.0476, over 1107388.29 frames.], batch size: 20, lr: 2.32e-04 2022-05-28 11:07:38,461 INFO [train.py:842] (0/4) Epoch 24, batch 350, loss[loss=0.1928, simple_loss=0.2892, pruned_loss=0.0482, over 7286.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2688, pruned_loss=0.04679, over 1178934.49 frames.], batch size: 25, lr: 2.32e-04 2022-05-28 11:08:16,626 INFO [train.py:842] (0/4) Epoch 24, batch 400, loss[loss=0.2151, simple_loss=0.3017, pruned_loss=0.06426, over 7301.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2683, pruned_loss=0.04648, over 1230311.62 frames.], batch size: 24, lr: 2.32e-04 2022-05-28 11:08:54,680 INFO [train.py:842] (0/4) Epoch 24, batch 450, loss[loss=0.167, simple_loss=0.2678, pruned_loss=0.03314, over 7143.00 frames.], tot_loss[loss=0.1802, 
simple_loss=0.2679, pruned_loss=0.04625, over 1276081.05 frames.], batch size: 20, lr: 2.32e-04 2022-05-28 11:09:32,934 INFO [train.py:842] (0/4) Epoch 24, batch 500, loss[loss=0.1842, simple_loss=0.2665, pruned_loss=0.0509, over 7356.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2676, pruned_loss=0.04658, over 1307628.27 frames.], batch size: 19, lr: 2.31e-04 2022-05-28 11:10:11,036 INFO [train.py:842] (0/4) Epoch 24, batch 550, loss[loss=0.2135, simple_loss=0.3044, pruned_loss=0.0613, over 7206.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2661, pruned_loss=0.04562, over 1336653.61 frames.], batch size: 22, lr: 2.31e-04 2022-05-28 11:10:49,408 INFO [train.py:842] (0/4) Epoch 24, batch 600, loss[loss=0.1761, simple_loss=0.2604, pruned_loss=0.04593, over 7353.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2649, pruned_loss=0.04529, over 1354562.47 frames.], batch size: 19, lr: 2.31e-04 2022-05-28 11:11:27,459 INFO [train.py:842] (0/4) Epoch 24, batch 650, loss[loss=0.1614, simple_loss=0.2541, pruned_loss=0.03439, over 7350.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2648, pruned_loss=0.04548, over 1365379.55 frames.], batch size: 19, lr: 2.31e-04 2022-05-28 11:12:06,160 INFO [train.py:842] (0/4) Epoch 24, batch 700, loss[loss=0.1909, simple_loss=0.2862, pruned_loss=0.04778, over 7180.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2643, pruned_loss=0.04569, over 1382155.82 frames.], batch size: 26, lr: 2.31e-04 2022-05-28 11:12:44,039 INFO [train.py:842] (0/4) Epoch 24, batch 750, loss[loss=0.1937, simple_loss=0.2694, pruned_loss=0.05899, over 7002.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2655, pruned_loss=0.046, over 1393028.65 frames.], batch size: 16, lr: 2.31e-04 2022-05-28 11:13:22,543 INFO [train.py:842] (0/4) Epoch 24, batch 800, loss[loss=0.1489, simple_loss=0.2387, pruned_loss=0.02958, over 7252.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2641, pruned_loss=0.04501, over 1400255.47 frames.], batch size: 19, lr: 2.31e-04 2022-05-28 11:14:00,594 INFO [train.py:842] (0/4) Epoch 24, batch 850, loss[loss=0.1995, simple_loss=0.2802, pruned_loss=0.05938, over 6803.00 frames.], tot_loss[loss=0.178, simple_loss=0.2647, pruned_loss=0.04563, over 1406298.70 frames.], batch size: 31, lr: 2.31e-04 2022-05-28 11:14:38,790 INFO [train.py:842] (0/4) Epoch 24, batch 900, loss[loss=0.1928, simple_loss=0.2757, pruned_loss=0.05497, over 7430.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2656, pruned_loss=0.0457, over 1412374.41 frames.], batch size: 20, lr: 2.31e-04 2022-05-28 11:15:16,791 INFO [train.py:842] (0/4) Epoch 24, batch 950, loss[loss=0.198, simple_loss=0.289, pruned_loss=0.05353, over 6466.00 frames.], tot_loss[loss=0.179, simple_loss=0.2658, pruned_loss=0.04608, over 1417010.15 frames.], batch size: 38, lr: 2.31e-04 2022-05-28 11:15:55,338 INFO [train.py:842] (0/4) Epoch 24, batch 1000, loss[loss=0.2193, simple_loss=0.3018, pruned_loss=0.0684, over 7331.00 frames.], tot_loss[loss=0.1794, simple_loss=0.266, pruned_loss=0.04638, over 1418806.66 frames.], batch size: 21, lr: 2.31e-04 2022-05-28 11:16:33,147 INFO [train.py:842] (0/4) Epoch 24, batch 1050, loss[loss=0.1784, simple_loss=0.2757, pruned_loss=0.04054, over 7244.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2672, pruned_loss=0.04706, over 1413554.93 frames.], batch size: 20, lr: 2.31e-04 2022-05-28 11:17:11,281 INFO [train.py:842] (0/4) Epoch 24, batch 1100, loss[loss=0.1982, simple_loss=0.288, pruned_loss=0.05419, over 7144.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2672, pruned_loss=0.04702, 
over 1412483.26 frames.], batch size: 20, lr: 2.31e-04 2022-05-28 11:17:49,522 INFO [train.py:842] (0/4) Epoch 24, batch 1150, loss[loss=0.162, simple_loss=0.2494, pruned_loss=0.03727, over 6354.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2668, pruned_loss=0.04703, over 1416660.66 frames.], batch size: 38, lr: 2.31e-04 2022-05-28 11:18:27,584 INFO [train.py:842] (0/4) Epoch 24, batch 1200, loss[loss=0.1541, simple_loss=0.2357, pruned_loss=0.03631, over 7173.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2664, pruned_loss=0.04614, over 1418982.86 frames.], batch size: 18, lr: 2.31e-04 2022-05-28 11:19:05,763 INFO [train.py:842] (0/4) Epoch 24, batch 1250, loss[loss=0.1878, simple_loss=0.2752, pruned_loss=0.05018, over 7314.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2666, pruned_loss=0.04621, over 1419104.37 frames.], batch size: 20, lr: 2.31e-04 2022-05-28 11:19:44,005 INFO [train.py:842] (0/4) Epoch 24, batch 1300, loss[loss=0.1819, simple_loss=0.2748, pruned_loss=0.04444, over 6844.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2659, pruned_loss=0.04549, over 1420802.27 frames.], batch size: 31, lr: 2.31e-04 2022-05-28 11:20:21,956 INFO [train.py:842] (0/4) Epoch 24, batch 1350, loss[loss=0.1569, simple_loss=0.242, pruned_loss=0.0359, over 7420.00 frames.], tot_loss[loss=0.179, simple_loss=0.2666, pruned_loss=0.04566, over 1426427.06 frames.], batch size: 18, lr: 2.31e-04 2022-05-28 11:21:00,271 INFO [train.py:842] (0/4) Epoch 24, batch 1400, loss[loss=0.2318, simple_loss=0.3207, pruned_loss=0.0714, over 7200.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2672, pruned_loss=0.04609, over 1425133.15 frames.], batch size: 26, lr: 2.31e-04 2022-05-28 11:21:38,255 INFO [train.py:842] (0/4) Epoch 24, batch 1450, loss[loss=0.1824, simple_loss=0.2776, pruned_loss=0.04365, over 7141.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2678, pruned_loss=0.0467, over 1423141.60 frames.], batch size: 20, lr: 2.31e-04 2022-05-28 11:22:16,538 INFO [train.py:842] (0/4) Epoch 24, batch 1500, loss[loss=0.1813, simple_loss=0.2798, pruned_loss=0.04135, over 7154.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2681, pruned_loss=0.04727, over 1421340.90 frames.], batch size: 20, lr: 2.31e-04 2022-05-28 11:22:54,642 INFO [train.py:842] (0/4) Epoch 24, batch 1550, loss[loss=0.2247, simple_loss=0.3156, pruned_loss=0.06697, over 6811.00 frames.], tot_loss[loss=0.1824, simple_loss=0.269, pruned_loss=0.0479, over 1421704.97 frames.], batch size: 31, lr: 2.31e-04 2022-05-28 11:23:32,821 INFO [train.py:842] (0/4) Epoch 24, batch 1600, loss[loss=0.1567, simple_loss=0.2503, pruned_loss=0.03155, over 7328.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2706, pruned_loss=0.04776, over 1422936.95 frames.], batch size: 20, lr: 2.31e-04 2022-05-28 11:24:10,614 INFO [train.py:842] (0/4) Epoch 24, batch 1650, loss[loss=0.1612, simple_loss=0.2508, pruned_loss=0.0358, over 7204.00 frames.], tot_loss[loss=0.1834, simple_loss=0.271, pruned_loss=0.04793, over 1415223.85 frames.], batch size: 16, lr: 2.31e-04 2022-05-28 11:24:48,818 INFO [train.py:842] (0/4) Epoch 24, batch 1700, loss[loss=0.1815, simple_loss=0.2731, pruned_loss=0.04496, over 7316.00 frames.], tot_loss[loss=0.1819, simple_loss=0.27, pruned_loss=0.0469, over 1418455.58 frames.], batch size: 21, lr: 2.31e-04 2022-05-28 11:25:26,710 INFO [train.py:842] (0/4) Epoch 24, batch 1750, loss[loss=0.1739, simple_loss=0.2512, pruned_loss=0.04831, over 7074.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2693, pruned_loss=0.047, over 1420094.00 frames.], batch 
size: 18, lr: 2.31e-04 2022-05-28 11:26:05,003 INFO [train.py:842] (0/4) Epoch 24, batch 1800, loss[loss=0.2109, simple_loss=0.2918, pruned_loss=0.06505, over 7333.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2691, pruned_loss=0.04714, over 1420226.23 frames.], batch size: 22, lr: 2.31e-04 2022-05-28 11:26:42,916 INFO [train.py:842] (0/4) Epoch 24, batch 1850, loss[loss=0.2455, simple_loss=0.3173, pruned_loss=0.08685, over 7277.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2692, pruned_loss=0.04726, over 1424402.50 frames.], batch size: 24, lr: 2.31e-04 2022-05-28 11:27:21,185 INFO [train.py:842] (0/4) Epoch 24, batch 1900, loss[loss=0.1784, simple_loss=0.2651, pruned_loss=0.04581, over 7094.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2697, pruned_loss=0.04764, over 1422065.33 frames.], batch size: 28, lr: 2.31e-04 2022-05-28 11:27:59,203 INFO [train.py:842] (0/4) Epoch 24, batch 1950, loss[loss=0.1898, simple_loss=0.2829, pruned_loss=0.04831, over 7114.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2687, pruned_loss=0.0474, over 1423638.57 frames.], batch size: 21, lr: 2.31e-04 2022-05-28 11:28:37,365 INFO [train.py:842] (0/4) Epoch 24, batch 2000, loss[loss=0.256, simple_loss=0.3235, pruned_loss=0.09426, over 4902.00 frames.], tot_loss[loss=0.1831, simple_loss=0.27, pruned_loss=0.04812, over 1421278.10 frames.], batch size: 52, lr: 2.31e-04 2022-05-28 11:29:15,370 INFO [train.py:842] (0/4) Epoch 24, batch 2050, loss[loss=0.1903, simple_loss=0.2718, pruned_loss=0.05435, over 7425.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2689, pruned_loss=0.0472, over 1421891.35 frames.], batch size: 20, lr: 2.31e-04 2022-05-28 11:29:53,647 INFO [train.py:842] (0/4) Epoch 24, batch 2100, loss[loss=0.1463, simple_loss=0.2267, pruned_loss=0.03294, over 7012.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2688, pruned_loss=0.04724, over 1422643.26 frames.], batch size: 16, lr: 2.31e-04 2022-05-28 11:30:31,675 INFO [train.py:842] (0/4) Epoch 24, batch 2150, loss[loss=0.2272, simple_loss=0.3139, pruned_loss=0.0703, over 5431.00 frames.], tot_loss[loss=0.181, simple_loss=0.268, pruned_loss=0.04697, over 1420771.20 frames.], batch size: 52, lr: 2.31e-04 2022-05-28 11:31:10,137 INFO [train.py:842] (0/4) Epoch 24, batch 2200, loss[loss=0.1676, simple_loss=0.2494, pruned_loss=0.04295, over 7139.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2674, pruned_loss=0.04654, over 1419921.80 frames.], batch size: 17, lr: 2.31e-04 2022-05-28 11:31:47,811 INFO [train.py:842] (0/4) Epoch 24, batch 2250, loss[loss=0.2061, simple_loss=0.2897, pruned_loss=0.06128, over 7293.00 frames.], tot_loss[loss=0.1819, simple_loss=0.269, pruned_loss=0.04741, over 1408787.47 frames.], batch size: 25, lr: 2.31e-04 2022-05-28 11:32:26,192 INFO [train.py:842] (0/4) Epoch 24, batch 2300, loss[loss=0.1703, simple_loss=0.2545, pruned_loss=0.04303, over 7297.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2676, pruned_loss=0.04662, over 1415932.50 frames.], batch size: 17, lr: 2.31e-04 2022-05-28 11:33:04,223 INFO [train.py:842] (0/4) Epoch 24, batch 2350, loss[loss=0.1995, simple_loss=0.2956, pruned_loss=0.05167, over 7333.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2673, pruned_loss=0.04595, over 1417932.92 frames.], batch size: 22, lr: 2.30e-04 2022-05-28 11:33:42,612 INFO [train.py:842] (0/4) Epoch 24, batch 2400, loss[loss=0.1612, simple_loss=0.2317, pruned_loss=0.04534, over 6814.00 frames.], tot_loss[loss=0.1811, simple_loss=0.269, pruned_loss=0.04655, over 1420770.62 frames.], batch size: 15, lr: 2.30e-04 
2022-05-28 11:34:20,476 INFO [train.py:842] (0/4) Epoch 24, batch 2450, loss[loss=0.1733, simple_loss=0.2663, pruned_loss=0.04016, over 7232.00 frames.], tot_loss[loss=0.1809, simple_loss=0.2686, pruned_loss=0.04656, over 1416991.13 frames.], batch size: 20, lr: 2.30e-04 2022-05-28 11:34:58,778 INFO [train.py:842] (0/4) Epoch 24, batch 2500, loss[loss=0.1773, simple_loss=0.2725, pruned_loss=0.04104, over 7313.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2679, pruned_loss=0.04628, over 1416927.03 frames.], batch size: 21, lr: 2.30e-04 2022-05-28 11:35:36,689 INFO [train.py:842] (0/4) Epoch 24, batch 2550, loss[loss=0.1661, simple_loss=0.2548, pruned_loss=0.03868, over 5051.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2681, pruned_loss=0.04659, over 1412758.68 frames.], batch size: 52, lr: 2.30e-04 2022-05-28 11:36:14,956 INFO [train.py:842] (0/4) Epoch 24, batch 2600, loss[loss=0.1773, simple_loss=0.2573, pruned_loss=0.04862, over 7269.00 frames.], tot_loss[loss=0.181, simple_loss=0.2683, pruned_loss=0.04683, over 1416820.66 frames.], batch size: 18, lr: 2.30e-04 2022-05-28 11:36:52,929 INFO [train.py:842] (0/4) Epoch 24, batch 2650, loss[loss=0.1532, simple_loss=0.2531, pruned_loss=0.02667, over 7325.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2682, pruned_loss=0.04661, over 1416253.82 frames.], batch size: 21, lr: 2.30e-04 2022-05-28 11:37:31,146 INFO [train.py:842] (0/4) Epoch 24, batch 2700, loss[loss=0.1764, simple_loss=0.2756, pruned_loss=0.03858, over 7340.00 frames.], tot_loss[loss=0.18, simple_loss=0.2675, pruned_loss=0.04623, over 1421508.57 frames.], batch size: 22, lr: 2.30e-04 2022-05-28 11:38:09,217 INFO [train.py:842] (0/4) Epoch 24, batch 2750, loss[loss=0.1532, simple_loss=0.2458, pruned_loss=0.03036, over 7412.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2671, pruned_loss=0.04605, over 1424543.26 frames.], batch size: 21, lr: 2.30e-04 2022-05-28 11:38:47,215 INFO [train.py:842] (0/4) Epoch 24, batch 2800, loss[loss=0.1528, simple_loss=0.2405, pruned_loss=0.03254, over 7229.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2678, pruned_loss=0.04641, over 1421178.96 frames.], batch size: 20, lr: 2.30e-04 2022-05-28 11:39:25,096 INFO [train.py:842] (0/4) Epoch 24, batch 2850, loss[loss=0.1854, simple_loss=0.2749, pruned_loss=0.04795, over 7353.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2676, pruned_loss=0.04592, over 1421669.95 frames.], batch size: 19, lr: 2.30e-04 2022-05-28 11:40:03,389 INFO [train.py:842] (0/4) Epoch 24, batch 2900, loss[loss=0.182, simple_loss=0.2687, pruned_loss=0.04764, over 7323.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2685, pruned_loss=0.04633, over 1421254.70 frames.], batch size: 25, lr: 2.30e-04 2022-05-28 11:40:41,530 INFO [train.py:842] (0/4) Epoch 24, batch 2950, loss[loss=0.1776, simple_loss=0.251, pruned_loss=0.05209, over 7273.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2682, pruned_loss=0.04629, over 1425245.31 frames.], batch size: 17, lr: 2.30e-04 2022-05-28 11:41:19,735 INFO [train.py:842] (0/4) Epoch 24, batch 3000, loss[loss=0.1718, simple_loss=0.2652, pruned_loss=0.0392, over 7119.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2691, pruned_loss=0.04702, over 1421990.74 frames.], batch size: 21, lr: 2.30e-04 2022-05-28 11:41:19,736 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 11:41:28,713 INFO [train.py:871] (0/4) Epoch 24, validation: loss=0.1662, simple_loss=0.2647, pruned_loss=0.03391, over 868885.00 frames. 
2022-05-28 11:42:06,675 INFO [train.py:842] (0/4) Epoch 24, batch 3050, loss[loss=0.1464, simple_loss=0.2316, pruned_loss=0.03056, over 7273.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2687, pruned_loss=0.04674, over 1417027.62 frames.], batch size: 18, lr: 2.30e-04 2022-05-28 11:42:45,176 INFO [train.py:842] (0/4) Epoch 24, batch 3100, loss[loss=0.1819, simple_loss=0.2691, pruned_loss=0.04738, over 6702.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2689, pruned_loss=0.04713, over 1419886.86 frames.], batch size: 31, lr: 2.30e-04 2022-05-28 11:43:23,173 INFO [train.py:842] (0/4) Epoch 24, batch 3150, loss[loss=0.1673, simple_loss=0.2578, pruned_loss=0.03836, over 6989.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2677, pruned_loss=0.04646, over 1421986.05 frames.], batch size: 16, lr: 2.30e-04 2022-05-28 11:44:01,729 INFO [train.py:842] (0/4) Epoch 24, batch 3200, loss[loss=0.2143, simple_loss=0.3192, pruned_loss=0.05467, over 7319.00 frames.], tot_loss[loss=0.1803, simple_loss=0.268, pruned_loss=0.04626, over 1425981.93 frames.], batch size: 21, lr: 2.30e-04 2022-05-28 11:44:39,930 INFO [train.py:842] (0/4) Epoch 24, batch 3250, loss[loss=0.1644, simple_loss=0.2497, pruned_loss=0.03959, over 7147.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2687, pruned_loss=0.04676, over 1427512.53 frames.], batch size: 18, lr: 2.30e-04 2022-05-28 11:45:18,238 INFO [train.py:842] (0/4) Epoch 24, batch 3300, loss[loss=0.2314, simple_loss=0.3178, pruned_loss=0.07253, over 7275.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2688, pruned_loss=0.04682, over 1427203.17 frames.], batch size: 24, lr: 2.30e-04 2022-05-28 11:45:56,716 INFO [train.py:842] (0/4) Epoch 24, batch 3350, loss[loss=0.2113, simple_loss=0.2937, pruned_loss=0.0644, over 7307.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2695, pruned_loss=0.04705, over 1423812.06 frames.], batch size: 24, lr: 2.30e-04 2022-05-28 11:46:35,160 INFO [train.py:842] (0/4) Epoch 24, batch 3400, loss[loss=0.1842, simple_loss=0.2749, pruned_loss=0.04675, over 7354.00 frames.], tot_loss[loss=0.1814, simple_loss=0.2688, pruned_loss=0.04693, over 1427678.48 frames.], batch size: 19, lr: 2.30e-04 2022-05-28 11:47:13,091 INFO [train.py:842] (0/4) Epoch 24, batch 3450, loss[loss=0.1927, simple_loss=0.2775, pruned_loss=0.05398, over 7324.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2702, pruned_loss=0.04748, over 1423770.20 frames.], batch size: 22, lr: 2.30e-04 2022-05-28 11:47:51,622 INFO [train.py:842] (0/4) Epoch 24, batch 3500, loss[loss=0.1576, simple_loss=0.2368, pruned_loss=0.03914, over 6817.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2679, pruned_loss=0.0466, over 1422483.53 frames.], batch size: 15, lr: 2.30e-04 2022-05-28 11:48:29,729 INFO [train.py:842] (0/4) Epoch 24, batch 3550, loss[loss=0.2301, simple_loss=0.3255, pruned_loss=0.06738, over 7122.00 frames.], tot_loss[loss=0.18, simple_loss=0.2673, pruned_loss=0.04638, over 1423314.36 frames.], batch size: 21, lr: 2.30e-04 2022-05-28 11:49:17,336 INFO [train.py:842] (0/4) Epoch 24, batch 3600, loss[loss=0.1703, simple_loss=0.2544, pruned_loss=0.04305, over 7056.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2673, pruned_loss=0.04576, over 1422526.90 frames.], batch size: 18, lr: 2.30e-04 2022-05-28 11:49:55,159 INFO [train.py:842] (0/4) Epoch 24, batch 3650, loss[loss=0.1681, simple_loss=0.2506, pruned_loss=0.04279, over 7352.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2685, pruned_loss=0.04614, over 1424278.10 frames.], batch size: 19, lr: 2.30e-04 2022-05-28 11:50:33,269 
INFO [train.py:842] (0/4) Epoch 24, batch 3700, loss[loss=0.1808, simple_loss=0.2719, pruned_loss=0.0449, over 6243.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2684, pruned_loss=0.04598, over 1420960.43 frames.], batch size: 37, lr: 2.30e-04 2022-05-28 11:51:11,247 INFO [train.py:842] (0/4) Epoch 24, batch 3750, loss[loss=0.152, simple_loss=0.2357, pruned_loss=0.03414, over 7290.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2685, pruned_loss=0.0462, over 1422383.14 frames.], batch size: 18, lr: 2.30e-04 2022-05-28 11:51:49,639 INFO [train.py:842] (0/4) Epoch 24, batch 3800, loss[loss=0.1614, simple_loss=0.2573, pruned_loss=0.03273, over 7434.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2683, pruned_loss=0.04609, over 1424322.93 frames.], batch size: 20, lr: 2.30e-04 2022-05-28 11:52:27,526 INFO [train.py:842] (0/4) Epoch 24, batch 3850, loss[loss=0.276, simple_loss=0.3422, pruned_loss=0.1049, over 5082.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2693, pruned_loss=0.04693, over 1420027.35 frames.], batch size: 52, lr: 2.30e-04 2022-05-28 11:53:05,680 INFO [train.py:842] (0/4) Epoch 24, batch 3900, loss[loss=0.1912, simple_loss=0.2836, pruned_loss=0.04947, over 6735.00 frames.], tot_loss[loss=0.183, simple_loss=0.2703, pruned_loss=0.04781, over 1416436.50 frames.], batch size: 31, lr: 2.30e-04 2022-05-28 11:53:43,319 INFO [train.py:842] (0/4) Epoch 24, batch 3950, loss[loss=0.2062, simple_loss=0.2976, pruned_loss=0.05742, over 7329.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2703, pruned_loss=0.04748, over 1416920.30 frames.], batch size: 21, lr: 2.30e-04 2022-05-28 11:54:21,406 INFO [train.py:842] (0/4) Epoch 24, batch 4000, loss[loss=0.1791, simple_loss=0.2657, pruned_loss=0.04623, over 7166.00 frames.], tot_loss[loss=0.1821, simple_loss=0.2698, pruned_loss=0.0472, over 1417760.32 frames.], batch size: 19, lr: 2.30e-04 2022-05-28 11:54:59,369 INFO [train.py:842] (0/4) Epoch 24, batch 4050, loss[loss=0.1576, simple_loss=0.26, pruned_loss=0.02766, over 7415.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2698, pruned_loss=0.04689, over 1421743.68 frames.], batch size: 21, lr: 2.30e-04 2022-05-28 11:55:37,618 INFO [train.py:842] (0/4) Epoch 24, batch 4100, loss[loss=0.1748, simple_loss=0.2568, pruned_loss=0.04641, over 7290.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2694, pruned_loss=0.04713, over 1420924.05 frames.], batch size: 24, lr: 2.30e-04 2022-05-28 11:56:15,699 INFO [train.py:842] (0/4) Epoch 24, batch 4150, loss[loss=0.1844, simple_loss=0.2646, pruned_loss=0.05207, over 7274.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2699, pruned_loss=0.04757, over 1425972.11 frames.], batch size: 19, lr: 2.30e-04 2022-05-28 11:56:53,999 INFO [train.py:842] (0/4) Epoch 24, batch 4200, loss[loss=0.1923, simple_loss=0.2716, pruned_loss=0.05648, over 5090.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2689, pruned_loss=0.04719, over 1424444.59 frames.], batch size: 54, lr: 2.29e-04 2022-05-28 11:57:31,909 INFO [train.py:842] (0/4) Epoch 24, batch 4250, loss[loss=0.1985, simple_loss=0.2644, pruned_loss=0.0663, over 7142.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2694, pruned_loss=0.04788, over 1428201.02 frames.], batch size: 17, lr: 2.29e-04 2022-05-28 11:58:10,161 INFO [train.py:842] (0/4) Epoch 24, batch 4300, loss[loss=0.1797, simple_loss=0.2715, pruned_loss=0.04396, over 7217.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2697, pruned_loss=0.04785, over 1424006.49 frames.], batch size: 21, lr: 2.29e-04 2022-05-28 11:58:48,027 INFO [train.py:842] (0/4) 
Epoch 24, batch 4350, loss[loss=0.1891, simple_loss=0.2886, pruned_loss=0.04484, over 7329.00 frames.], tot_loss[loss=0.1838, simple_loss=0.2707, pruned_loss=0.04849, over 1421796.44 frames.], batch size: 20, lr: 2.29e-04 2022-05-28 11:59:26,287 INFO [train.py:842] (0/4) Epoch 24, batch 4400, loss[loss=0.1731, simple_loss=0.2561, pruned_loss=0.04499, over 7415.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2698, pruned_loss=0.04763, over 1423002.67 frames.], batch size: 21, lr: 2.29e-04 2022-05-28 12:00:04,049 INFO [train.py:842] (0/4) Epoch 24, batch 4450, loss[loss=0.1548, simple_loss=0.2393, pruned_loss=0.03517, over 7144.00 frames.], tot_loss[loss=0.182, simple_loss=0.2699, pruned_loss=0.04706, over 1421402.11 frames.], batch size: 17, lr: 2.29e-04 2022-05-28 12:00:42,349 INFO [train.py:842] (0/4) Epoch 24, batch 4500, loss[loss=0.1477, simple_loss=0.2282, pruned_loss=0.03357, over 7136.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2689, pruned_loss=0.0467, over 1423192.43 frames.], batch size: 17, lr: 2.29e-04 2022-05-28 12:01:20,175 INFO [train.py:842] (0/4) Epoch 24, batch 4550, loss[loss=0.1712, simple_loss=0.2568, pruned_loss=0.04282, over 7364.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2678, pruned_loss=0.04602, over 1422227.26 frames.], batch size: 19, lr: 2.29e-04 2022-05-28 12:01:47,901 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-216000.pt 2022-05-28 12:02:01,222 INFO [train.py:842] (0/4) Epoch 24, batch 4600, loss[loss=0.1669, simple_loss=0.2426, pruned_loss=0.0456, over 7282.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2671, pruned_loss=0.04584, over 1427796.35 frames.], batch size: 17, lr: 2.29e-04 2022-05-28 12:02:38,967 INFO [train.py:842] (0/4) Epoch 24, batch 4650, loss[loss=0.2124, simple_loss=0.2896, pruned_loss=0.06766, over 6752.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2681, pruned_loss=0.0463, over 1425852.54 frames.], batch size: 31, lr: 2.29e-04 2022-05-28 12:03:17,221 INFO [train.py:842] (0/4) Epoch 24, batch 4700, loss[loss=0.2037, simple_loss=0.2721, pruned_loss=0.06771, over 7141.00 frames.], tot_loss[loss=0.1806, simple_loss=0.268, pruned_loss=0.04656, over 1424017.40 frames.], batch size: 17, lr: 2.29e-04 2022-05-28 12:03:55,140 INFO [train.py:842] (0/4) Epoch 24, batch 4750, loss[loss=0.1711, simple_loss=0.2599, pruned_loss=0.04109, over 7161.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2674, pruned_loss=0.04611, over 1424851.79 frames.], batch size: 18, lr: 2.29e-04 2022-05-28 12:04:33,317 INFO [train.py:842] (0/4) Epoch 24, batch 4800, loss[loss=0.1772, simple_loss=0.2675, pruned_loss=0.04347, over 7048.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2672, pruned_loss=0.04604, over 1426124.07 frames.], batch size: 28, lr: 2.29e-04 2022-05-28 12:05:10,963 INFO [train.py:842] (0/4) Epoch 24, batch 4850, loss[loss=0.1661, simple_loss=0.2623, pruned_loss=0.03496, over 6584.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2693, pruned_loss=0.04726, over 1421514.24 frames.], batch size: 38, lr: 2.29e-04 2022-05-28 12:05:49,025 INFO [train.py:842] (0/4) Epoch 24, batch 4900, loss[loss=0.1771, simple_loss=0.2543, pruned_loss=0.04994, over 7428.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2697, pruned_loss=0.04765, over 1420625.76 frames.], batch size: 18, lr: 2.29e-04 2022-05-28 12:06:27,194 INFO [train.py:842] (0/4) Epoch 24, batch 4950, loss[loss=0.1396, simple_loss=0.2251, pruned_loss=0.02705, over 7287.00 frames.], tot_loss[loss=0.1821, simple_loss=0.2692, 
pruned_loss=0.04746, over 1419585.34 frames.], batch size: 17, lr: 2.29e-04 2022-05-28 12:07:05,411 INFO [train.py:842] (0/4) Epoch 24, batch 5000, loss[loss=0.1979, simple_loss=0.2825, pruned_loss=0.05665, over 7228.00 frames.], tot_loss[loss=0.1818, simple_loss=0.269, pruned_loss=0.04729, over 1419235.10 frames.], batch size: 22, lr: 2.29e-04 2022-05-28 12:07:43,335 INFO [train.py:842] (0/4) Epoch 24, batch 5050, loss[loss=0.1442, simple_loss=0.2302, pruned_loss=0.02911, over 7269.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2683, pruned_loss=0.04693, over 1420228.93 frames.], batch size: 16, lr: 2.29e-04 2022-05-28 12:08:21,582 INFO [train.py:842] (0/4) Epoch 24, batch 5100, loss[loss=0.1814, simple_loss=0.2669, pruned_loss=0.04791, over 5053.00 frames.], tot_loss[loss=0.1816, simple_loss=0.269, pruned_loss=0.04715, over 1420382.75 frames.], batch size: 52, lr: 2.29e-04 2022-05-28 12:08:59,664 INFO [train.py:842] (0/4) Epoch 24, batch 5150, loss[loss=0.2101, simple_loss=0.2889, pruned_loss=0.06566, over 7366.00 frames.], tot_loss[loss=0.1825, simple_loss=0.2699, pruned_loss=0.04755, over 1425225.14 frames.], batch size: 19, lr: 2.29e-04 2022-05-28 12:09:37,926 INFO [train.py:842] (0/4) Epoch 24, batch 5200, loss[loss=0.1525, simple_loss=0.2502, pruned_loss=0.02739, over 7348.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2694, pruned_loss=0.04689, over 1427870.33 frames.], batch size: 19, lr: 2.29e-04 2022-05-28 12:10:15,997 INFO [train.py:842] (0/4) Epoch 24, batch 5250, loss[loss=0.2051, simple_loss=0.2891, pruned_loss=0.06054, over 7382.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2687, pruned_loss=0.04687, over 1429486.68 frames.], batch size: 23, lr: 2.29e-04 2022-05-28 12:10:54,302 INFO [train.py:842] (0/4) Epoch 24, batch 5300, loss[loss=0.1914, simple_loss=0.2751, pruned_loss=0.05382, over 7116.00 frames.], tot_loss[loss=0.181, simple_loss=0.2687, pruned_loss=0.0466, over 1430969.79 frames.], batch size: 26, lr: 2.29e-04 2022-05-28 12:11:32,300 INFO [train.py:842] (0/4) Epoch 24, batch 5350, loss[loss=0.1966, simple_loss=0.2948, pruned_loss=0.04924, over 7418.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2679, pruned_loss=0.046, over 1427815.61 frames.], batch size: 21, lr: 2.29e-04 2022-05-28 12:12:10,618 INFO [train.py:842] (0/4) Epoch 24, batch 5400, loss[loss=0.2676, simple_loss=0.3368, pruned_loss=0.09926, over 4993.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2681, pruned_loss=0.04616, over 1427934.66 frames.], batch size: 52, lr: 2.29e-04 2022-05-28 12:12:48,583 INFO [train.py:842] (0/4) Epoch 24, batch 5450, loss[loss=0.1967, simple_loss=0.2811, pruned_loss=0.05613, over 7187.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2673, pruned_loss=0.04593, over 1431485.39 frames.], batch size: 23, lr: 2.29e-04 2022-05-28 12:13:26,954 INFO [train.py:842] (0/4) Epoch 24, batch 5500, loss[loss=0.1896, simple_loss=0.2823, pruned_loss=0.04844, over 7160.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2678, pruned_loss=0.04585, over 1425964.61 frames.], batch size: 26, lr: 2.29e-04 2022-05-28 12:14:04,953 INFO [train.py:842] (0/4) Epoch 24, batch 5550, loss[loss=0.1833, simple_loss=0.2785, pruned_loss=0.04408, over 7294.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2683, pruned_loss=0.0462, over 1425989.57 frames.], batch size: 25, lr: 2.29e-04 2022-05-28 12:14:43,505 INFO [train.py:842] (0/4) Epoch 24, batch 5600, loss[loss=0.1828, simple_loss=0.2538, pruned_loss=0.05589, over 6990.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2667, pruned_loss=0.04574, over 
1423776.65 frames.], batch size: 16, lr: 2.29e-04 2022-05-28 12:15:21,844 INFO [train.py:842] (0/4) Epoch 24, batch 5650, loss[loss=0.1581, simple_loss=0.2517, pruned_loss=0.03218, over 7149.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2672, pruned_loss=0.046, over 1423063.97 frames.], batch size: 20, lr: 2.29e-04 2022-05-28 12:16:00,294 INFO [train.py:842] (0/4) Epoch 24, batch 5700, loss[loss=0.2075, simple_loss=0.2891, pruned_loss=0.06299, over 7282.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2693, pruned_loss=0.04725, over 1424355.64 frames.], batch size: 25, lr: 2.29e-04 2022-05-28 12:16:38,561 INFO [train.py:842] (0/4) Epoch 24, batch 5750, loss[loss=0.1533, simple_loss=0.2418, pruned_loss=0.03247, over 7277.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2705, pruned_loss=0.04736, over 1421738.87 frames.], batch size: 17, lr: 2.29e-04 2022-05-28 12:17:17,295 INFO [train.py:842] (0/4) Epoch 24, batch 5800, loss[loss=0.1698, simple_loss=0.2589, pruned_loss=0.04036, over 7213.00 frames.], tot_loss[loss=0.1824, simple_loss=0.2699, pruned_loss=0.04746, over 1421689.14 frames.], batch size: 22, lr: 2.29e-04 2022-05-28 12:17:55,617 INFO [train.py:842] (0/4) Epoch 24, batch 5850, loss[loss=0.1673, simple_loss=0.2628, pruned_loss=0.03588, over 7211.00 frames.], tot_loss[loss=0.1819, simple_loss=0.269, pruned_loss=0.04745, over 1420762.38 frames.], batch size: 23, lr: 2.29e-04 2022-05-28 12:18:34,481 INFO [train.py:842] (0/4) Epoch 24, batch 5900, loss[loss=0.1732, simple_loss=0.2518, pruned_loss=0.04735, over 6864.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2671, pruned_loss=0.04668, over 1420301.60 frames.], batch size: 15, lr: 2.29e-04 2022-05-28 12:19:12,838 INFO [train.py:842] (0/4) Epoch 24, batch 5950, loss[loss=0.1971, simple_loss=0.2852, pruned_loss=0.05452, over 7170.00 frames.], tot_loss[loss=0.1801, simple_loss=0.267, pruned_loss=0.04656, over 1419779.93 frames.], batch size: 26, lr: 2.29e-04 2022-05-28 12:19:51,435 INFO [train.py:842] (0/4) Epoch 24, batch 6000, loss[loss=0.2187, simple_loss=0.2959, pruned_loss=0.07075, over 5196.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2676, pruned_loss=0.04672, over 1419012.40 frames.], batch size: 52, lr: 2.29e-04 2022-05-28 12:19:51,437 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 12:20:00,718 INFO [train.py:871] (0/4) Epoch 24, validation: loss=0.166, simple_loss=0.2648, pruned_loss=0.03361, over 868885.00 frames. 
2022-05-28 12:20:38,956 INFO [train.py:842] (0/4) Epoch 24, batch 6050, loss[loss=0.1756, simple_loss=0.2651, pruned_loss=0.04303, over 6745.00 frames.], tot_loss[loss=0.1814, simple_loss=0.2685, pruned_loss=0.04712, over 1420870.03 frames.], batch size: 31, lr: 2.29e-04 2022-05-28 12:21:17,781 INFO [train.py:842] (0/4) Epoch 24, batch 6100, loss[loss=0.1849, simple_loss=0.2747, pruned_loss=0.04758, over 7372.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2685, pruned_loss=0.04726, over 1423239.79 frames.], batch size: 23, lr: 2.28e-04 2022-05-28 12:21:56,405 INFO [train.py:842] (0/4) Epoch 24, batch 6150, loss[loss=0.1768, simple_loss=0.2725, pruned_loss=0.04055, over 7421.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2673, pruned_loss=0.04661, over 1423557.01 frames.], batch size: 20, lr: 2.28e-04 2022-05-28 12:22:35,162 INFO [train.py:842] (0/4) Epoch 24, batch 6200, loss[loss=0.175, simple_loss=0.2685, pruned_loss=0.04073, over 7292.00 frames.], tot_loss[loss=0.1805, simple_loss=0.268, pruned_loss=0.04649, over 1423494.08 frames.], batch size: 25, lr: 2.28e-04 2022-05-28 12:23:13,736 INFO [train.py:842] (0/4) Epoch 24, batch 6250, loss[loss=0.1722, simple_loss=0.2645, pruned_loss=0.0399, over 7044.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2672, pruned_loss=0.04592, over 1425661.88 frames.], batch size: 28, lr: 2.28e-04 2022-05-28 12:23:52,505 INFO [train.py:842] (0/4) Epoch 24, batch 6300, loss[loss=0.1972, simple_loss=0.2877, pruned_loss=0.05335, over 7419.00 frames.], tot_loss[loss=0.1789, simple_loss=0.2666, pruned_loss=0.04562, over 1423366.15 frames.], batch size: 21, lr: 2.28e-04 2022-05-28 12:24:30,642 INFO [train.py:842] (0/4) Epoch 24, batch 6350, loss[loss=0.1845, simple_loss=0.2727, pruned_loss=0.04817, over 6752.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2673, pruned_loss=0.04564, over 1419650.12 frames.], batch size: 31, lr: 2.28e-04 2022-05-28 12:25:09,316 INFO [train.py:842] (0/4) Epoch 24, batch 6400, loss[loss=0.167, simple_loss=0.2524, pruned_loss=0.04073, over 6987.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2672, pruned_loss=0.04582, over 1421209.99 frames.], batch size: 16, lr: 2.28e-04 2022-05-28 12:25:47,749 INFO [train.py:842] (0/4) Epoch 24, batch 6450, loss[loss=0.2187, simple_loss=0.3055, pruned_loss=0.06599, over 6875.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2673, pruned_loss=0.04573, over 1421343.13 frames.], batch size: 32, lr: 2.28e-04 2022-05-28 12:26:26,362 INFO [train.py:842] (0/4) Epoch 24, batch 6500, loss[loss=0.1539, simple_loss=0.2414, pruned_loss=0.03325, over 7427.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2657, pruned_loss=0.04469, over 1421942.69 frames.], batch size: 17, lr: 2.28e-04 2022-05-28 12:27:04,756 INFO [train.py:842] (0/4) Epoch 24, batch 6550, loss[loss=0.2219, simple_loss=0.3171, pruned_loss=0.06336, over 7283.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2658, pruned_loss=0.0447, over 1424715.10 frames.], batch size: 24, lr: 2.28e-04 2022-05-28 12:27:43,573 INFO [train.py:842] (0/4) Epoch 24, batch 6600, loss[loss=0.1961, simple_loss=0.2877, pruned_loss=0.05227, over 7070.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2657, pruned_loss=0.0446, over 1426454.81 frames.], batch size: 28, lr: 2.28e-04 2022-05-28 12:28:22,106 INFO [train.py:842] (0/4) Epoch 24, batch 6650, loss[loss=0.1885, simple_loss=0.2799, pruned_loss=0.0486, over 7285.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2658, pruned_loss=0.04479, over 1428819.34 frames.], batch size: 24, lr: 2.28e-04 2022-05-28 12:29:00,905 
INFO [train.py:842] (0/4) Epoch 24, batch 6700, loss[loss=0.1494, simple_loss=0.2275, pruned_loss=0.03564, over 7271.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2657, pruned_loss=0.04505, over 1430763.82 frames.], batch size: 17, lr: 2.28e-04 2022-05-28 12:29:39,678 INFO [train.py:842] (0/4) Epoch 24, batch 6750, loss[loss=0.1674, simple_loss=0.2542, pruned_loss=0.0403, over 7163.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2653, pruned_loss=0.04526, over 1430535.68 frames.], batch size: 19, lr: 2.28e-04 2022-05-28 12:30:18,208 INFO [train.py:842] (0/4) Epoch 24, batch 6800, loss[loss=0.1658, simple_loss=0.2529, pruned_loss=0.03931, over 7151.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2674, pruned_loss=0.04643, over 1427574.03 frames.], batch size: 20, lr: 2.28e-04 2022-05-28 12:30:56,684 INFO [train.py:842] (0/4) Epoch 24, batch 6850, loss[loss=0.2117, simple_loss=0.2974, pruned_loss=0.06301, over 6633.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2675, pruned_loss=0.04646, over 1427747.64 frames.], batch size: 38, lr: 2.28e-04 2022-05-28 12:31:35,006 INFO [train.py:842] (0/4) Epoch 24, batch 6900, loss[loss=0.1922, simple_loss=0.2819, pruned_loss=0.05129, over 7213.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2685, pruned_loss=0.04634, over 1427879.70 frames.], batch size: 23, lr: 2.28e-04 2022-05-28 12:32:13,552 INFO [train.py:842] (0/4) Epoch 24, batch 6950, loss[loss=0.1832, simple_loss=0.2601, pruned_loss=0.05319, over 7217.00 frames.], tot_loss[loss=0.1814, simple_loss=0.269, pruned_loss=0.04693, over 1429028.99 frames.], batch size: 16, lr: 2.28e-04 2022-05-28 12:32:52,256 INFO [train.py:842] (0/4) Epoch 24, batch 7000, loss[loss=0.1661, simple_loss=0.2425, pruned_loss=0.04484, over 7185.00 frames.], tot_loss[loss=0.1819, simple_loss=0.2696, pruned_loss=0.04712, over 1427110.31 frames.], batch size: 16, lr: 2.28e-04 2022-05-28 12:33:30,583 INFO [train.py:842] (0/4) Epoch 24, batch 7050, loss[loss=0.1967, simple_loss=0.2841, pruned_loss=0.05467, over 7164.00 frames.], tot_loss[loss=0.1814, simple_loss=0.269, pruned_loss=0.04688, over 1427645.41 frames.], batch size: 26, lr: 2.28e-04 2022-05-28 12:34:09,116 INFO [train.py:842] (0/4) Epoch 24, batch 7100, loss[loss=0.1855, simple_loss=0.2787, pruned_loss=0.04615, over 7116.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2673, pruned_loss=0.04563, over 1426871.04 frames.], batch size: 28, lr: 2.28e-04 2022-05-28 12:34:47,727 INFO [train.py:842] (0/4) Epoch 24, batch 7150, loss[loss=0.1558, simple_loss=0.2547, pruned_loss=0.02841, over 7160.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2666, pruned_loss=0.04552, over 1426308.71 frames.], batch size: 19, lr: 2.28e-04 2022-05-28 12:35:36,394 INFO [train.py:842] (0/4) Epoch 24, batch 7200, loss[loss=0.2098, simple_loss=0.2966, pruned_loss=0.06153, over 7107.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2673, pruned_loss=0.04564, over 1427362.02 frames.], batch size: 21, lr: 2.28e-04 2022-05-28 12:36:14,992 INFO [train.py:842] (0/4) Epoch 24, batch 7250, loss[loss=0.2122, simple_loss=0.2915, pruned_loss=0.06642, over 7332.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2686, pruned_loss=0.0469, over 1432157.17 frames.], batch size: 22, lr: 2.28e-04 2022-05-28 12:36:53,725 INFO [train.py:842] (0/4) Epoch 24, batch 7300, loss[loss=0.1525, simple_loss=0.2428, pruned_loss=0.03107, over 7073.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2677, pruned_loss=0.04652, over 1430668.32 frames.], batch size: 18, lr: 2.28e-04 2022-05-28 12:37:32,028 INFO [train.py:842] 
(0/4) Epoch 24, batch 7350, loss[loss=0.1861, simple_loss=0.261, pruned_loss=0.05564, over 7005.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2681, pruned_loss=0.04677, over 1432617.90 frames.], batch size: 16, lr: 2.28e-04 2022-05-28 12:38:10,661 INFO [train.py:842] (0/4) Epoch 24, batch 7400, loss[loss=0.1877, simple_loss=0.287, pruned_loss=0.04422, over 7242.00 frames.], tot_loss[loss=0.1809, simple_loss=0.2685, pruned_loss=0.04666, over 1431685.28 frames.], batch size: 20, lr: 2.28e-04 2022-05-28 12:38:59,111 INFO [train.py:842] (0/4) Epoch 24, batch 7450, loss[loss=0.1931, simple_loss=0.29, pruned_loss=0.04811, over 7414.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2689, pruned_loss=0.04705, over 1429955.63 frames.], batch size: 20, lr: 2.28e-04 2022-05-28 12:39:37,696 INFO [train.py:842] (0/4) Epoch 24, batch 7500, loss[loss=0.1827, simple_loss=0.2782, pruned_loss=0.04362, over 7262.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2692, pruned_loss=0.04701, over 1428146.31 frames.], batch size: 19, lr: 2.28e-04 2022-05-28 12:40:26,189 INFO [train.py:842] (0/4) Epoch 24, batch 7550, loss[loss=0.1799, simple_loss=0.2598, pruned_loss=0.04998, over 7344.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2701, pruned_loss=0.04751, over 1427989.48 frames.], batch size: 19, lr: 2.28e-04 2022-05-28 12:41:04,827 INFO [train.py:842] (0/4) Epoch 24, batch 7600, loss[loss=0.2289, simple_loss=0.319, pruned_loss=0.06941, over 7195.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2711, pruned_loss=0.04757, over 1428950.14 frames.], batch size: 26, lr: 2.28e-04 2022-05-28 12:41:43,339 INFO [train.py:842] (0/4) Epoch 24, batch 7650, loss[loss=0.1942, simple_loss=0.2856, pruned_loss=0.05139, over 7319.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2701, pruned_loss=0.04722, over 1431640.73 frames.], batch size: 25, lr: 2.28e-04 2022-05-28 12:42:22,065 INFO [train.py:842] (0/4) Epoch 24, batch 7700, loss[loss=0.1833, simple_loss=0.2589, pruned_loss=0.05382, over 7065.00 frames.], tot_loss[loss=0.1824, simple_loss=0.27, pruned_loss=0.04736, over 1429815.61 frames.], batch size: 28, lr: 2.28e-04 2022-05-28 12:43:00,402 INFO [train.py:842] (0/4) Epoch 24, batch 7750, loss[loss=0.1787, simple_loss=0.2642, pruned_loss=0.04656, over 7350.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2718, pruned_loss=0.04841, over 1430288.94 frames.], batch size: 19, lr: 2.28e-04 2022-05-28 12:43:39,165 INFO [train.py:842] (0/4) Epoch 24, batch 7800, loss[loss=0.1817, simple_loss=0.2706, pruned_loss=0.0464, over 7373.00 frames.], tot_loss[loss=0.1849, simple_loss=0.2724, pruned_loss=0.04869, over 1431112.74 frames.], batch size: 23, lr: 2.28e-04 2022-05-28 12:44:17,731 INFO [train.py:842] (0/4) Epoch 24, batch 7850, loss[loss=0.266, simple_loss=0.3374, pruned_loss=0.09731, over 4984.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2727, pruned_loss=0.04872, over 1426385.17 frames.], batch size: 52, lr: 2.28e-04 2022-05-28 12:44:56,189 INFO [train.py:842] (0/4) Epoch 24, batch 7900, loss[loss=0.1884, simple_loss=0.2851, pruned_loss=0.0458, over 7388.00 frames.], tot_loss[loss=0.1854, simple_loss=0.2732, pruned_loss=0.04882, over 1422657.57 frames.], batch size: 18, lr: 2.28e-04 2022-05-28 12:45:34,546 INFO [train.py:842] (0/4) Epoch 24, batch 7950, loss[loss=0.2097, simple_loss=0.2919, pruned_loss=0.06374, over 6453.00 frames.], tot_loss[loss=0.1854, simple_loss=0.273, pruned_loss=0.04887, over 1425740.73 frames.], batch size: 37, lr: 2.28e-04 2022-05-28 12:46:13,251 INFO [train.py:842] (0/4) Epoch 24, batch 8000, 
loss[loss=0.1836, simple_loss=0.2703, pruned_loss=0.04846, over 7281.00 frames.], tot_loss[loss=0.1845, simple_loss=0.2715, pruned_loss=0.04872, over 1418326.39 frames.], batch size: 24, lr: 2.27e-04 2022-05-28 12:46:51,894 INFO [train.py:842] (0/4) Epoch 24, batch 8050, loss[loss=0.1899, simple_loss=0.2909, pruned_loss=0.04442, over 6746.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2703, pruned_loss=0.04807, over 1418169.14 frames.], batch size: 31, lr: 2.27e-04 2022-05-28 12:47:30,426 INFO [train.py:842] (0/4) Epoch 24, batch 8100, loss[loss=0.2222, simple_loss=0.3088, pruned_loss=0.06778, over 7293.00 frames.], tot_loss[loss=0.1844, simple_loss=0.2716, pruned_loss=0.04863, over 1416517.46 frames.], batch size: 24, lr: 2.27e-04 2022-05-28 12:48:08,866 INFO [train.py:842] (0/4) Epoch 24, batch 8150, loss[loss=0.1749, simple_loss=0.2517, pruned_loss=0.04905, over 7064.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2711, pruned_loss=0.04815, over 1415269.94 frames.], batch size: 18, lr: 2.27e-04 2022-05-28 12:48:47,288 INFO [train.py:842] (0/4) Epoch 24, batch 8200, loss[loss=0.1916, simple_loss=0.2793, pruned_loss=0.05195, over 7154.00 frames.], tot_loss[loss=0.1839, simple_loss=0.2712, pruned_loss=0.04828, over 1419024.77 frames.], batch size: 19, lr: 2.27e-04 2022-05-28 12:49:25,440 INFO [train.py:842] (0/4) Epoch 24, batch 8250, loss[loss=0.1826, simple_loss=0.2758, pruned_loss=0.04471, over 6398.00 frames.], tot_loss[loss=0.1832, simple_loss=0.2704, pruned_loss=0.048, over 1420028.15 frames.], batch size: 37, lr: 2.27e-04 2022-05-28 12:50:04,141 INFO [train.py:842] (0/4) Epoch 24, batch 8300, loss[loss=0.1762, simple_loss=0.2717, pruned_loss=0.04038, over 7316.00 frames.], tot_loss[loss=0.1828, simple_loss=0.2699, pruned_loss=0.0478, over 1422934.30 frames.], batch size: 21, lr: 2.27e-04 2022-05-28 12:50:42,560 INFO [train.py:842] (0/4) Epoch 24, batch 8350, loss[loss=0.198, simple_loss=0.2826, pruned_loss=0.05672, over 7068.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2703, pruned_loss=0.04819, over 1421458.11 frames.], batch size: 28, lr: 2.27e-04 2022-05-28 12:51:21,494 INFO [train.py:842] (0/4) Epoch 24, batch 8400, loss[loss=0.2084, simple_loss=0.2846, pruned_loss=0.06612, over 7409.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2684, pruned_loss=0.04701, over 1426630.61 frames.], batch size: 18, lr: 2.27e-04 2022-05-28 12:51:59,867 INFO [train.py:842] (0/4) Epoch 24, batch 8450, loss[loss=0.1789, simple_loss=0.2692, pruned_loss=0.04435, over 6290.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2684, pruned_loss=0.04702, over 1425436.14 frames.], batch size: 37, lr: 2.27e-04 2022-05-28 12:52:39,130 INFO [train.py:842] (0/4) Epoch 24, batch 8500, loss[loss=0.1999, simple_loss=0.2792, pruned_loss=0.06036, over 7199.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2697, pruned_loss=0.04786, over 1427280.28 frames.], batch size: 22, lr: 2.27e-04 2022-05-28 12:53:18,139 INFO [train.py:842] (0/4) Epoch 24, batch 8550, loss[loss=0.224, simple_loss=0.3039, pruned_loss=0.07208, over 7182.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2673, pruned_loss=0.04679, over 1427810.10 frames.], batch size: 26, lr: 2.27e-04 2022-05-28 12:53:57,147 INFO [train.py:842] (0/4) Epoch 24, batch 8600, loss[loss=0.1449, simple_loss=0.2307, pruned_loss=0.02959, over 7181.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2673, pruned_loss=0.04658, over 1430473.98 frames.], batch size: 18, lr: 2.27e-04 2022-05-28 12:54:35,698 INFO [train.py:842] (0/4) Epoch 24, batch 8650, loss[loss=0.1446, 
simple_loss=0.2437, pruned_loss=0.02274, over 7238.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2687, pruned_loss=0.04716, over 1424456.24 frames.], batch size: 20, lr: 2.27e-04 2022-05-28 12:55:14,296 INFO [train.py:842] (0/4) Epoch 24, batch 8700, loss[loss=0.2121, simple_loss=0.2886, pruned_loss=0.06774, over 7170.00 frames.], tot_loss[loss=0.1815, simple_loss=0.2689, pruned_loss=0.04699, over 1419516.38 frames.], batch size: 18, lr: 2.27e-04 2022-05-28 12:55:52,910 INFO [train.py:842] (0/4) Epoch 24, batch 8750, loss[loss=0.2157, simple_loss=0.299, pruned_loss=0.06622, over 7219.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2681, pruned_loss=0.04669, over 1420895.49 frames.], batch size: 23, lr: 2.27e-04 2022-05-28 12:56:31,821 INFO [train.py:842] (0/4) Epoch 24, batch 8800, loss[loss=0.1389, simple_loss=0.2369, pruned_loss=0.02044, over 7228.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2688, pruned_loss=0.04695, over 1420620.21 frames.], batch size: 21, lr: 2.27e-04 2022-05-28 12:57:10,117 INFO [train.py:842] (0/4) Epoch 24, batch 8850, loss[loss=0.1572, simple_loss=0.2637, pruned_loss=0.02534, over 6390.00 frames.], tot_loss[loss=0.1809, simple_loss=0.2683, pruned_loss=0.04676, over 1411589.45 frames.], batch size: 37, lr: 2.27e-04 2022-05-28 12:57:48,731 INFO [train.py:842] (0/4) Epoch 24, batch 8900, loss[loss=0.1967, simple_loss=0.276, pruned_loss=0.05867, over 7195.00 frames.], tot_loss[loss=0.1828, simple_loss=0.2698, pruned_loss=0.04787, over 1401424.51 frames.], batch size: 22, lr: 2.27e-04 2022-05-28 12:58:27,173 INFO [train.py:842] (0/4) Epoch 24, batch 8950, loss[loss=0.2425, simple_loss=0.317, pruned_loss=0.08399, over 5477.00 frames.], tot_loss[loss=0.1837, simple_loss=0.2703, pruned_loss=0.04855, over 1398149.37 frames.], batch size: 52, lr: 2.27e-04 2022-05-28 12:59:05,934 INFO [train.py:842] (0/4) Epoch 24, batch 9000, loss[loss=0.1359, simple_loss=0.216, pruned_loss=0.02787, over 6790.00 frames.], tot_loss[loss=0.1834, simple_loss=0.2697, pruned_loss=0.04857, over 1391659.86 frames.], batch size: 15, lr: 2.27e-04 2022-05-28 12:59:05,935 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 12:59:15,343 INFO [train.py:871] (0/4) Epoch 24, validation: loss=0.1628, simple_loss=0.2615, pruned_loss=0.03209, over 868885.00 frames. 
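Each per-batch entry above carries the loss on that batch ("loss[...]"), a running aggregate ("tot_loss[...]") over a growing number of frames, the batch size, and the current learning rate. To tabulate or plot that trend, the entries can be pulled out of the saved log with a few lines of Python. The sketch below is not part of the training run itself; it assumes only the "Epoch N, batch M, loss[...], tot_loss[loss=..., ...], batch size: ..., lr: ..." pattern visible in this log, and "train.log" is a placeholder for wherever this output was written.

import re

# Pattern for the per-batch entries visible above; the contents of the two
# bracketed blocks are skipped except for the aggregate loss value.
ENTRY = re.compile(
    r"Epoch (?P<epoch>\d+), batch (?P<batch>\d+), "
    r"loss\[[^\]]*\], "
    r"tot_loss\[loss=(?P<tot_loss>[\d.]+),[^\]]*\], "
    r"batch size: \d+, lr: (?P<lr>[\d.e+-]+)"
)

def parse_train_log(path="train.log"):  # "train.log" is a placeholder path
    """Yield (epoch, batch, tot_loss, lr) for every per-batch entry in the log."""
    with open(path) as f:
        text = f.read()
    for m in ENTRY.finditer(text):
        yield (int(m["epoch"]), int(m["batch"]),
               float(m["tot_loss"]), float(m["lr"]))

if __name__ == "__main__":
    for epoch, batch, tot_loss, lr in parse_train_log():
        print(f"epoch {epoch:2d}  batch {batch:5d}  tot_loss {tot_loss:.4f}  lr {lr:.2e}")

Run over the full log this gives one row per logged batch, which is enough to see the slow decay of the learning rate in the entries here (2.28e-04 early in epoch 24 down to 2.13e-04 by epoch 26) alongside the loss.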
2022-05-28 12:59:53,633 INFO [train.py:842] (0/4) Epoch 24, batch 9050, loss[loss=0.1752, simple_loss=0.2662, pruned_loss=0.04206, over 7385.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2718, pruned_loss=0.04924, over 1380395.54 frames.], batch size: 23, lr: 2.27e-04 2022-05-28 13:00:31,859 INFO [train.py:842] (0/4) Epoch 24, batch 9100, loss[loss=0.1995, simple_loss=0.2825, pruned_loss=0.05821, over 4893.00 frames.], tot_loss[loss=0.1881, simple_loss=0.274, pruned_loss=0.05109, over 1332526.83 frames.], batch size: 52, lr: 2.27e-04 2022-05-28 13:01:09,340 INFO [train.py:842] (0/4) Epoch 24, batch 9150, loss[loss=0.2535, simple_loss=0.3345, pruned_loss=0.08626, over 5110.00 frames.], tot_loss[loss=0.1936, simple_loss=0.2782, pruned_loss=0.05448, over 1261232.20 frames.], batch size: 52, lr: 2.27e-04 2022-05-28 13:01:42,494 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-24.pt 2022-05-28 13:02:00,921 INFO [train.py:842] (0/4) Epoch 25, batch 0, loss[loss=0.1859, simple_loss=0.275, pruned_loss=0.04839, over 7077.00 frames.], tot_loss[loss=0.1859, simple_loss=0.275, pruned_loss=0.04839, over 7077.00 frames.], batch size: 18, lr: 2.22e-04 2022-05-28 13:02:39,742 INFO [train.py:842] (0/4) Epoch 25, batch 50, loss[loss=0.1543, simple_loss=0.2472, pruned_loss=0.0307, over 7253.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2723, pruned_loss=0.04895, over 321390.61 frames.], batch size: 19, lr: 2.22e-04 2022-05-28 13:03:18,844 INFO [train.py:842] (0/4) Epoch 25, batch 100, loss[loss=0.1837, simple_loss=0.2717, pruned_loss=0.04784, over 7340.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2697, pruned_loss=0.04777, over 569367.34 frames.], batch size: 20, lr: 2.22e-04 2022-05-28 13:03:57,396 INFO [train.py:842] (0/4) Epoch 25, batch 150, loss[loss=0.1767, simple_loss=0.2769, pruned_loss=0.03824, over 7320.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2681, pruned_loss=0.04657, over 760664.44 frames.], batch size: 21, lr: 2.22e-04 2022-05-28 13:04:36,560 INFO [train.py:842] (0/4) Epoch 25, batch 200, loss[loss=0.1409, simple_loss=0.2294, pruned_loss=0.02621, over 6777.00 frames.], tot_loss[loss=0.179, simple_loss=0.2666, pruned_loss=0.04565, over 906465.67 frames.], batch size: 15, lr: 2.22e-04 2022-05-28 13:05:14,873 INFO [train.py:842] (0/4) Epoch 25, batch 250, loss[loss=0.1504, simple_loss=0.2371, pruned_loss=0.0319, over 7242.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2667, pruned_loss=0.04603, over 1018336.59 frames.], batch size: 20, lr: 2.22e-04 2022-05-28 13:05:53,575 INFO [train.py:842] (0/4) Epoch 25, batch 300, loss[loss=0.1606, simple_loss=0.2491, pruned_loss=0.03604, over 7147.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2671, pruned_loss=0.04616, over 1111990.89 frames.], batch size: 19, lr: 2.22e-04 2022-05-28 13:06:31,932 INFO [train.py:842] (0/4) Epoch 25, batch 350, loss[loss=0.202, simple_loss=0.2935, pruned_loss=0.05518, over 7192.00 frames.], tot_loss[loss=0.179, simple_loss=0.2667, pruned_loss=0.04567, over 1181608.76 frames.], batch size: 23, lr: 2.22e-04 2022-05-28 13:07:10,743 INFO [train.py:842] (0/4) Epoch 25, batch 400, loss[loss=0.1784, simple_loss=0.2695, pruned_loss=0.04361, over 7229.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2682, pruned_loss=0.04613, over 1236202.46 frames.], batch size: 20, lr: 2.22e-04 2022-05-28 13:07:49,165 INFO [train.py:842] (0/4) Epoch 25, batch 450, loss[loss=0.1961, simple_loss=0.2867, pruned_loss=0.05275, over 6998.00 frames.], tot_loss[loss=0.181, 
simple_loss=0.2683, pruned_loss=0.04688, over 1276734.99 frames.], batch size: 28, lr: 2.22e-04 2022-05-28 13:08:27,848 INFO [train.py:842] (0/4) Epoch 25, batch 500, loss[loss=0.168, simple_loss=0.2501, pruned_loss=0.04297, over 7163.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2672, pruned_loss=0.04623, over 1312213.74 frames.], batch size: 18, lr: 2.22e-04 2022-05-28 13:09:06,206 INFO [train.py:842] (0/4) Epoch 25, batch 550, loss[loss=0.1432, simple_loss=0.2263, pruned_loss=0.03007, over 7156.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2676, pruned_loss=0.0467, over 1339628.22 frames.], batch size: 18, lr: 2.22e-04 2022-05-28 13:09:45,028 INFO [train.py:842] (0/4) Epoch 25, batch 600, loss[loss=0.1776, simple_loss=0.2685, pruned_loss=0.0434, over 7187.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2675, pruned_loss=0.04635, over 1359463.72 frames.], batch size: 23, lr: 2.22e-04 2022-05-28 13:10:23,507 INFO [train.py:842] (0/4) Epoch 25, batch 650, loss[loss=0.1433, simple_loss=0.2204, pruned_loss=0.0331, over 7271.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2663, pruned_loss=0.04611, over 1372253.70 frames.], batch size: 17, lr: 2.22e-04 2022-05-28 13:11:02,298 INFO [train.py:842] (0/4) Epoch 25, batch 700, loss[loss=0.1625, simple_loss=0.239, pruned_loss=0.04301, over 7240.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2662, pruned_loss=0.04571, over 1388490.10 frames.], batch size: 16, lr: 2.22e-04 2022-05-28 13:11:40,746 INFO [train.py:842] (0/4) Epoch 25, batch 750, loss[loss=0.1856, simple_loss=0.2914, pruned_loss=0.03997, over 7233.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2672, pruned_loss=0.04632, over 1399550.12 frames.], batch size: 20, lr: 2.22e-04 2022-05-28 13:12:19,432 INFO [train.py:842] (0/4) Epoch 25, batch 800, loss[loss=0.1561, simple_loss=0.251, pruned_loss=0.03059, over 7410.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2678, pruned_loss=0.04626, over 1405862.34 frames.], batch size: 21, lr: 2.22e-04 2022-05-28 13:12:57,618 INFO [train.py:842] (0/4) Epoch 25, batch 850, loss[loss=0.1934, simple_loss=0.2854, pruned_loss=0.05071, over 7323.00 frames.], tot_loss[loss=0.1792, simple_loss=0.267, pruned_loss=0.04567, over 1407432.82 frames.], batch size: 21, lr: 2.22e-04 2022-05-28 13:13:36,193 INFO [train.py:842] (0/4) Epoch 25, batch 900, loss[loss=0.1712, simple_loss=0.2549, pruned_loss=0.04374, over 7333.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2675, pruned_loss=0.04581, over 1409530.00 frames.], batch size: 25, lr: 2.22e-04 2022-05-28 13:14:14,459 INFO [train.py:842] (0/4) Epoch 25, batch 950, loss[loss=0.1914, simple_loss=0.2744, pruned_loss=0.05415, over 5067.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2683, pruned_loss=0.04644, over 1405651.82 frames.], batch size: 53, lr: 2.22e-04 2022-05-28 13:14:53,222 INFO [train.py:842] (0/4) Epoch 25, batch 1000, loss[loss=0.1823, simple_loss=0.2601, pruned_loss=0.05226, over 7408.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2686, pruned_loss=0.04682, over 1411944.95 frames.], batch size: 21, lr: 2.22e-04 2022-05-28 13:15:31,610 INFO [train.py:842] (0/4) Epoch 25, batch 1050, loss[loss=0.1594, simple_loss=0.2622, pruned_loss=0.02832, over 7327.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2696, pruned_loss=0.04689, over 1418773.70 frames.], batch size: 20, lr: 2.22e-04 2022-05-28 13:16:10,344 INFO [train.py:842] (0/4) Epoch 25, batch 1100, loss[loss=0.1982, simple_loss=0.2854, pruned_loss=0.05551, over 7343.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2688, 
pruned_loss=0.04671, over 1421356.33 frames.], batch size: 22, lr: 2.22e-04 2022-05-28 13:16:48,919 INFO [train.py:842] (0/4) Epoch 25, batch 1150, loss[loss=0.1718, simple_loss=0.2615, pruned_loss=0.04106, over 7192.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2689, pruned_loss=0.04662, over 1424095.20 frames.], batch size: 23, lr: 2.22e-04 2022-05-28 13:17:27,821 INFO [train.py:842] (0/4) Epoch 25, batch 1200, loss[loss=0.1881, simple_loss=0.2703, pruned_loss=0.05293, over 7368.00 frames.], tot_loss[loss=0.181, simple_loss=0.2686, pruned_loss=0.04673, over 1423155.53 frames.], batch size: 23, lr: 2.22e-04 2022-05-28 13:18:06,179 INFO [train.py:842] (0/4) Epoch 25, batch 1250, loss[loss=0.1598, simple_loss=0.2551, pruned_loss=0.03227, over 7140.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2677, pruned_loss=0.0463, over 1421834.75 frames.], batch size: 20, lr: 2.22e-04 2022-05-28 13:18:44,990 INFO [train.py:842] (0/4) Epoch 25, batch 1300, loss[loss=0.1603, simple_loss=0.2383, pruned_loss=0.04118, over 7203.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2681, pruned_loss=0.04653, over 1422367.52 frames.], batch size: 16, lr: 2.22e-04 2022-05-28 13:19:23,291 INFO [train.py:842] (0/4) Epoch 25, batch 1350, loss[loss=0.1699, simple_loss=0.2608, pruned_loss=0.03952, over 6614.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2686, pruned_loss=0.04695, over 1422858.37 frames.], batch size: 38, lr: 2.22e-04 2022-05-28 13:20:01,954 INFO [train.py:842] (0/4) Epoch 25, batch 1400, loss[loss=0.1598, simple_loss=0.2537, pruned_loss=0.03292, over 7272.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2677, pruned_loss=0.04622, over 1427616.30 frames.], batch size: 17, lr: 2.22e-04 2022-05-28 13:20:40,263 INFO [train.py:842] (0/4) Epoch 25, batch 1450, loss[loss=0.1635, simple_loss=0.2547, pruned_loss=0.03616, over 7146.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2679, pruned_loss=0.04657, over 1423361.23 frames.], batch size: 20, lr: 2.22e-04 2022-05-28 13:21:18,922 INFO [train.py:842] (0/4) Epoch 25, batch 1500, loss[loss=0.1779, simple_loss=0.2707, pruned_loss=0.04255, over 6742.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2682, pruned_loss=0.04666, over 1421494.94 frames.], batch size: 31, lr: 2.22e-04 2022-05-28 13:21:57,241 INFO [train.py:842] (0/4) Epoch 25, batch 1550, loss[loss=0.1952, simple_loss=0.2836, pruned_loss=0.05339, over 7275.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2695, pruned_loss=0.04701, over 1423031.25 frames.], batch size: 18, lr: 2.22e-04 2022-05-28 13:22:36,323 INFO [train.py:842] (0/4) Epoch 25, batch 1600, loss[loss=0.1476, simple_loss=0.237, pruned_loss=0.02909, over 6797.00 frames.], tot_loss[loss=0.1814, simple_loss=0.269, pruned_loss=0.04695, over 1420944.64 frames.], batch size: 15, lr: 2.22e-04 2022-05-28 13:23:14,830 INFO [train.py:842] (0/4) Epoch 25, batch 1650, loss[loss=0.2035, simple_loss=0.2846, pruned_loss=0.06124, over 7217.00 frames.], tot_loss[loss=0.1821, simple_loss=0.2695, pruned_loss=0.0474, over 1421948.81 frames.], batch size: 21, lr: 2.22e-04 2022-05-28 13:23:53,567 INFO [train.py:842] (0/4) Epoch 25, batch 1700, loss[loss=0.2045, simple_loss=0.2949, pruned_loss=0.05708, over 7389.00 frames.], tot_loss[loss=0.1818, simple_loss=0.269, pruned_loss=0.04728, over 1420071.21 frames.], batch size: 23, lr: 2.22e-04 2022-05-28 13:24:31,713 INFO [train.py:842] (0/4) Epoch 25, batch 1750, loss[loss=0.1831, simple_loss=0.2597, pruned_loss=0.05325, over 7133.00 frames.], tot_loss[loss=0.182, simple_loss=0.2693, pruned_loss=0.04729, over 
1422408.37 frames.], batch size: 17, lr: 2.22e-04 2022-05-28 13:25:10,180 INFO [train.py:842] (0/4) Epoch 25, batch 1800, loss[loss=0.1467, simple_loss=0.2321, pruned_loss=0.03065, over 6992.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2698, pruned_loss=0.04777, over 1422646.00 frames.], batch size: 16, lr: 2.21e-04 2022-05-28 13:25:48,929 INFO [train.py:842] (0/4) Epoch 25, batch 1850, loss[loss=0.14, simple_loss=0.2192, pruned_loss=0.03042, over 6801.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2696, pruned_loss=0.04787, over 1420043.21 frames.], batch size: 15, lr: 2.21e-04 2022-05-28 13:26:27,662 INFO [train.py:842] (0/4) Epoch 25, batch 1900, loss[loss=0.1552, simple_loss=0.2471, pruned_loss=0.03162, over 7292.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2684, pruned_loss=0.04706, over 1422273.94 frames.], batch size: 25, lr: 2.21e-04 2022-05-28 13:27:06,011 INFO [train.py:842] (0/4) Epoch 25, batch 1950, loss[loss=0.1572, simple_loss=0.2392, pruned_loss=0.03756, over 7248.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2676, pruned_loss=0.04673, over 1423602.33 frames.], batch size: 19, lr: 2.21e-04 2022-05-28 13:27:44,841 INFO [train.py:842] (0/4) Epoch 25, batch 2000, loss[loss=0.1437, simple_loss=0.2363, pruned_loss=0.02556, over 7159.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2675, pruned_loss=0.04676, over 1423684.93 frames.], batch size: 18, lr: 2.21e-04 2022-05-28 13:28:23,557 INFO [train.py:842] (0/4) Epoch 25, batch 2050, loss[loss=0.18, simple_loss=0.2658, pruned_loss=0.04708, over 7320.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2666, pruned_loss=0.04641, over 1426468.29 frames.], batch size: 21, lr: 2.21e-04 2022-05-28 13:29:02,057 INFO [train.py:842] (0/4) Epoch 25, batch 2100, loss[loss=0.16, simple_loss=0.2423, pruned_loss=0.03883, over 7248.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2666, pruned_loss=0.04606, over 1423105.26 frames.], batch size: 19, lr: 2.21e-04 2022-05-28 13:29:40,344 INFO [train.py:842] (0/4) Epoch 25, batch 2150, loss[loss=0.1536, simple_loss=0.2431, pruned_loss=0.03211, over 7421.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2676, pruned_loss=0.04631, over 1421973.44 frames.], batch size: 20, lr: 2.21e-04 2022-05-28 13:30:19,187 INFO [train.py:842] (0/4) Epoch 25, batch 2200, loss[loss=0.1435, simple_loss=0.23, pruned_loss=0.02849, over 7263.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2665, pruned_loss=0.04601, over 1421816.46 frames.], batch size: 16, lr: 2.21e-04 2022-05-28 13:30:57,858 INFO [train.py:842] (0/4) Epoch 25, batch 2250, loss[loss=0.1577, simple_loss=0.2517, pruned_loss=0.03183, over 7063.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2676, pruned_loss=0.04696, over 1418370.72 frames.], batch size: 18, lr: 2.21e-04 2022-05-28 13:31:36,586 INFO [train.py:842] (0/4) Epoch 25, batch 2300, loss[loss=0.1702, simple_loss=0.2478, pruned_loss=0.04629, over 7273.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2661, pruned_loss=0.04609, over 1420190.52 frames.], batch size: 16, lr: 2.21e-04 2022-05-28 13:32:14,937 INFO [train.py:842] (0/4) Epoch 25, batch 2350, loss[loss=0.1926, simple_loss=0.2857, pruned_loss=0.04979, over 7324.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2666, pruned_loss=0.04604, over 1420024.72 frames.], batch size: 21, lr: 2.21e-04 2022-05-28 13:32:53,605 INFO [train.py:842] (0/4) Epoch 25, batch 2400, loss[loss=0.1929, simple_loss=0.2733, pruned_loss=0.05619, over 7358.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2675, pruned_loss=0.04597, over 1424845.60 frames.], batch 
size: 19, lr: 2.21e-04 2022-05-28 13:33:32,094 INFO [train.py:842] (0/4) Epoch 25, batch 2450, loss[loss=0.1516, simple_loss=0.2272, pruned_loss=0.038, over 7134.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2673, pruned_loss=0.04611, over 1424248.95 frames.], batch size: 17, lr: 2.21e-04 2022-05-28 13:34:10,892 INFO [train.py:842] (0/4) Epoch 25, batch 2500, loss[loss=0.2433, simple_loss=0.329, pruned_loss=0.07876, over 7410.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2673, pruned_loss=0.04607, over 1424015.92 frames.], batch size: 21, lr: 2.21e-04 2022-05-28 13:34:49,190 INFO [train.py:842] (0/4) Epoch 25, batch 2550, loss[loss=0.1641, simple_loss=0.2489, pruned_loss=0.03962, over 7434.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2671, pruned_loss=0.04579, over 1424365.84 frames.], batch size: 20, lr: 2.21e-04 2022-05-28 13:35:27,916 INFO [train.py:842] (0/4) Epoch 25, batch 2600, loss[loss=0.1442, simple_loss=0.2207, pruned_loss=0.03382, over 7145.00 frames.], tot_loss[loss=0.1793, simple_loss=0.267, pruned_loss=0.04579, over 1422283.66 frames.], batch size: 17, lr: 2.21e-04 2022-05-28 13:36:06,375 INFO [train.py:842] (0/4) Epoch 25, batch 2650, loss[loss=0.173, simple_loss=0.2632, pruned_loss=0.04147, over 7218.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2667, pruned_loss=0.045, over 1424287.29 frames.], batch size: 22, lr: 2.21e-04 2022-05-28 13:36:45,200 INFO [train.py:842] (0/4) Epoch 25, batch 2700, loss[loss=0.1715, simple_loss=0.2535, pruned_loss=0.04473, over 7063.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2664, pruned_loss=0.04512, over 1426013.49 frames.], batch size: 18, lr: 2.21e-04 2022-05-28 13:37:23,654 INFO [train.py:842] (0/4) Epoch 25, batch 2750, loss[loss=0.1627, simple_loss=0.2584, pruned_loss=0.03347, over 7149.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2664, pruned_loss=0.04527, over 1421730.23 frames.], batch size: 20, lr: 2.21e-04 2022-05-28 13:38:02,432 INFO [train.py:842] (0/4) Epoch 25, batch 2800, loss[loss=0.1425, simple_loss=0.2272, pruned_loss=0.02888, over 7259.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2654, pruned_loss=0.04495, over 1422397.94 frames.], batch size: 19, lr: 2.21e-04 2022-05-28 13:38:40,761 INFO [train.py:842] (0/4) Epoch 25, batch 2850, loss[loss=0.1687, simple_loss=0.2485, pruned_loss=0.04443, over 7430.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2653, pruned_loss=0.04496, over 1420366.11 frames.], batch size: 20, lr: 2.21e-04 2022-05-28 13:39:19,308 INFO [train.py:842] (0/4) Epoch 25, batch 2900, loss[loss=0.2047, simple_loss=0.2932, pruned_loss=0.0581, over 7199.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2662, pruned_loss=0.0451, over 1420665.82 frames.], batch size: 23, lr: 2.21e-04 2022-05-28 13:39:57,703 INFO [train.py:842] (0/4) Epoch 25, batch 2950, loss[loss=0.1778, simple_loss=0.2728, pruned_loss=0.04138, over 7129.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2657, pruned_loss=0.04474, over 1425667.40 frames.], batch size: 21, lr: 2.21e-04 2022-05-28 13:40:36,695 INFO [train.py:842] (0/4) Epoch 25, batch 3000, loss[loss=0.165, simple_loss=0.2623, pruned_loss=0.0339, over 6626.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2643, pruned_loss=0.04418, over 1428505.61 frames.], batch size: 31, lr: 2.21e-04 2022-05-28 13:40:36,697 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 13:40:46,045 INFO [train.py:871] (0/4) Epoch 25, validation: loss=0.1651, simple_loss=0.2634, pruned_loss=0.03341, over 868885.00 frames. 
2022-05-28 13:41:24,592 INFO [train.py:842] (0/4) Epoch 25, batch 3050, loss[loss=0.1812, simple_loss=0.2866, pruned_loss=0.0379, over 7119.00 frames.], tot_loss[loss=0.177, simple_loss=0.2652, pruned_loss=0.04441, over 1428447.77 frames.], batch size: 21, lr: 2.21e-04 2022-05-28 13:42:03,498 INFO [train.py:842] (0/4) Epoch 25, batch 3100, loss[loss=0.224, simple_loss=0.2894, pruned_loss=0.07928, over 6845.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2647, pruned_loss=0.04432, over 1428624.15 frames.], batch size: 15, lr: 2.21e-04 2022-05-28 13:42:41,825 INFO [train.py:842] (0/4) Epoch 25, batch 3150, loss[loss=0.1618, simple_loss=0.2529, pruned_loss=0.03537, over 7268.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2646, pruned_loss=0.04426, over 1429958.26 frames.], batch size: 19, lr: 2.21e-04 2022-05-28 13:43:20,550 INFO [train.py:842] (0/4) Epoch 25, batch 3200, loss[loss=0.2656, simple_loss=0.3358, pruned_loss=0.0977, over 4823.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2646, pruned_loss=0.04463, over 1428328.70 frames.], batch size: 52, lr: 2.21e-04 2022-05-28 13:43:59,044 INFO [train.py:842] (0/4) Epoch 25, batch 3250, loss[loss=0.2334, simple_loss=0.3281, pruned_loss=0.06932, over 7237.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2665, pruned_loss=0.04549, over 1426287.85 frames.], batch size: 20, lr: 2.21e-04 2022-05-28 13:44:37,659 INFO [train.py:842] (0/4) Epoch 25, batch 3300, loss[loss=0.1599, simple_loss=0.254, pruned_loss=0.03289, over 7156.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2673, pruned_loss=0.04604, over 1425541.40 frames.], batch size: 19, lr: 2.21e-04 2022-05-28 13:45:16,119 INFO [train.py:842] (0/4) Epoch 25, batch 3350, loss[loss=0.1795, simple_loss=0.2701, pruned_loss=0.04444, over 7262.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2669, pruned_loss=0.04602, over 1422636.26 frames.], batch size: 19, lr: 2.21e-04 2022-05-28 13:45:49,609 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-224000.pt 2022-05-28 13:45:57,676 INFO [train.py:842] (0/4) Epoch 25, batch 3400, loss[loss=0.141, simple_loss=0.2223, pruned_loss=0.02992, over 7286.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2668, pruned_loss=0.04602, over 1424102.69 frames.], batch size: 17, lr: 2.21e-04 2022-05-28 13:46:36,036 INFO [train.py:842] (0/4) Epoch 25, batch 3450, loss[loss=0.1755, simple_loss=0.2647, pruned_loss=0.04315, over 7218.00 frames.], tot_loss[loss=0.1796, simple_loss=0.267, pruned_loss=0.04607, over 1420488.88 frames.], batch size: 21, lr: 2.21e-04 2022-05-28 13:47:14,709 INFO [train.py:842] (0/4) Epoch 25, batch 3500, loss[loss=0.2072, simple_loss=0.2715, pruned_loss=0.07147, over 7133.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2677, pruned_loss=0.04633, over 1422720.56 frames.], batch size: 17, lr: 2.21e-04 2022-05-28 13:47:52,980 INFO [train.py:842] (0/4) Epoch 25, batch 3550, loss[loss=0.1585, simple_loss=0.2533, pruned_loss=0.03183, over 7318.00 frames.], tot_loss[loss=0.181, simple_loss=0.2685, pruned_loss=0.04676, over 1423025.89 frames.], batch size: 20, lr: 2.21e-04 2022-05-28 13:48:31,599 INFO [train.py:842] (0/4) Epoch 25, batch 3600, loss[loss=0.2191, simple_loss=0.3002, pruned_loss=0.06897, over 7220.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2683, pruned_loss=0.04654, over 1421818.79 frames.], batch size: 23, lr: 2.21e-04 2022-05-28 13:49:09,892 INFO [train.py:842] (0/4) Epoch 25, batch 3650, loss[loss=0.1527, simple_loss=0.2423, pruned_loss=0.03153, over 6203.00 frames.], 
tot_loss[loss=0.1813, simple_loss=0.2687, pruned_loss=0.04698, over 1418516.13 frames.], batch size: 37, lr: 2.21e-04 2022-05-28 13:49:48,740 INFO [train.py:842] (0/4) Epoch 25, batch 3700, loss[loss=0.1646, simple_loss=0.2525, pruned_loss=0.03832, over 7431.00 frames.], tot_loss[loss=0.1811, simple_loss=0.2684, pruned_loss=0.04692, over 1421111.83 frames.], batch size: 20, lr: 2.21e-04 2022-05-28 13:50:27,333 INFO [train.py:842] (0/4) Epoch 25, batch 3750, loss[loss=0.177, simple_loss=0.2686, pruned_loss=0.04265, over 7366.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2673, pruned_loss=0.04627, over 1424001.87 frames.], batch size: 23, lr: 2.21e-04 2022-05-28 13:51:06,204 INFO [train.py:842] (0/4) Epoch 25, batch 3800, loss[loss=0.2094, simple_loss=0.2926, pruned_loss=0.06308, over 5361.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2674, pruned_loss=0.04681, over 1422499.31 frames.], batch size: 53, lr: 2.21e-04 2022-05-28 13:51:44,467 INFO [train.py:842] (0/4) Epoch 25, batch 3850, loss[loss=0.1356, simple_loss=0.2276, pruned_loss=0.02181, over 7275.00 frames.], tot_loss[loss=0.1809, simple_loss=0.268, pruned_loss=0.04689, over 1421965.60 frames.], batch size: 18, lr: 2.20e-04 2022-05-28 13:52:23,072 INFO [train.py:842] (0/4) Epoch 25, batch 3900, loss[loss=0.1765, simple_loss=0.2609, pruned_loss=0.04601, over 7255.00 frames.], tot_loss[loss=0.1813, simple_loss=0.2685, pruned_loss=0.04702, over 1421860.23 frames.], batch size: 19, lr: 2.20e-04 2022-05-28 13:53:01,448 INFO [train.py:842] (0/4) Epoch 25, batch 3950, loss[loss=0.1676, simple_loss=0.249, pruned_loss=0.0431, over 7404.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2677, pruned_loss=0.0463, over 1424064.49 frames.], batch size: 18, lr: 2.20e-04 2022-05-28 13:53:40,179 INFO [train.py:842] (0/4) Epoch 25, batch 4000, loss[loss=0.1654, simple_loss=0.2607, pruned_loss=0.03507, over 7413.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2669, pruned_loss=0.04579, over 1426260.85 frames.], batch size: 21, lr: 2.20e-04 2022-05-28 13:54:18,684 INFO [train.py:842] (0/4) Epoch 25, batch 4050, loss[loss=0.1588, simple_loss=0.2417, pruned_loss=0.03795, over 7120.00 frames.], tot_loss[loss=0.1783, simple_loss=0.266, pruned_loss=0.04533, over 1425078.73 frames.], batch size: 17, lr: 2.20e-04 2022-05-28 13:54:57,370 INFO [train.py:842] (0/4) Epoch 25, batch 4100, loss[loss=0.1688, simple_loss=0.2672, pruned_loss=0.03519, over 7326.00 frames.], tot_loss[loss=0.1791, simple_loss=0.267, pruned_loss=0.04554, over 1429143.45 frames.], batch size: 21, lr: 2.20e-04 2022-05-28 13:55:35,864 INFO [train.py:842] (0/4) Epoch 25, batch 4150, loss[loss=0.1871, simple_loss=0.2803, pruned_loss=0.04698, over 7401.00 frames.], tot_loss[loss=0.1795, simple_loss=0.267, pruned_loss=0.046, over 1425425.72 frames.], batch size: 21, lr: 2.20e-04 2022-05-28 13:56:14,594 INFO [train.py:842] (0/4) Epoch 25, batch 4200, loss[loss=0.1618, simple_loss=0.2545, pruned_loss=0.03452, over 7219.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2678, pruned_loss=0.04636, over 1422186.60 frames.], batch size: 20, lr: 2.20e-04 2022-05-28 13:56:52,839 INFO [train.py:842] (0/4) Epoch 25, batch 4250, loss[loss=0.2201, simple_loss=0.311, pruned_loss=0.0646, over 7371.00 frames.], tot_loss[loss=0.1812, simple_loss=0.269, pruned_loss=0.04673, over 1424368.61 frames.], batch size: 23, lr: 2.20e-04 2022-05-28 13:57:31,630 INFO [train.py:842] (0/4) Epoch 25, batch 4300, loss[loss=0.2169, simple_loss=0.3091, pruned_loss=0.06231, over 7109.00 frames.], tot_loss[loss=0.1818, 
simple_loss=0.2692, pruned_loss=0.04724, over 1426110.12 frames.], batch size: 28, lr: 2.20e-04 2022-05-28 13:58:10,169 INFO [train.py:842] (0/4) Epoch 25, batch 4350, loss[loss=0.1776, simple_loss=0.2639, pruned_loss=0.04561, over 7420.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2677, pruned_loss=0.04646, over 1425865.20 frames.], batch size: 18, lr: 2.20e-04 2022-05-28 13:58:48,803 INFO [train.py:842] (0/4) Epoch 25, batch 4400, loss[loss=0.2227, simple_loss=0.3099, pruned_loss=0.06774, over 7229.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2676, pruned_loss=0.0464, over 1424759.14 frames.], batch size: 20, lr: 2.20e-04 2022-05-28 13:59:27,181 INFO [train.py:842] (0/4) Epoch 25, batch 4450, loss[loss=0.266, simple_loss=0.3355, pruned_loss=0.09828, over 7298.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2681, pruned_loss=0.04639, over 1423411.13 frames.], batch size: 24, lr: 2.20e-04 2022-05-28 14:00:05,956 INFO [train.py:842] (0/4) Epoch 25, batch 4500, loss[loss=0.1713, simple_loss=0.2673, pruned_loss=0.03761, over 7153.00 frames.], tot_loss[loss=0.179, simple_loss=0.2672, pruned_loss=0.04537, over 1424072.22 frames.], batch size: 20, lr: 2.20e-04 2022-05-28 14:00:44,413 INFO [train.py:842] (0/4) Epoch 25, batch 4550, loss[loss=0.1717, simple_loss=0.2587, pruned_loss=0.04234, over 6521.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2673, pruned_loss=0.04561, over 1425362.20 frames.], batch size: 38, lr: 2.20e-04 2022-05-28 14:01:23,193 INFO [train.py:842] (0/4) Epoch 25, batch 4600, loss[loss=0.1633, simple_loss=0.2563, pruned_loss=0.03513, over 6709.00 frames.], tot_loss[loss=0.1791, simple_loss=0.267, pruned_loss=0.04562, over 1424294.73 frames.], batch size: 31, lr: 2.20e-04 2022-05-28 14:02:01,597 INFO [train.py:842] (0/4) Epoch 25, batch 4650, loss[loss=0.2135, simple_loss=0.3019, pruned_loss=0.06254, over 7188.00 frames.], tot_loss[loss=0.1793, simple_loss=0.267, pruned_loss=0.04578, over 1423744.06 frames.], batch size: 26, lr: 2.20e-04 2022-05-28 14:02:40,409 INFO [train.py:842] (0/4) Epoch 25, batch 4700, loss[loss=0.1444, simple_loss=0.2325, pruned_loss=0.02821, over 6992.00 frames.], tot_loss[loss=0.179, simple_loss=0.2664, pruned_loss=0.04582, over 1419372.51 frames.], batch size: 16, lr: 2.20e-04 2022-05-28 14:03:18,770 INFO [train.py:842] (0/4) Epoch 25, batch 4750, loss[loss=0.2475, simple_loss=0.3248, pruned_loss=0.08512, over 7267.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2661, pruned_loss=0.04602, over 1420777.90 frames.], batch size: 24, lr: 2.20e-04 2022-05-28 14:03:57,591 INFO [train.py:842] (0/4) Epoch 25, batch 4800, loss[loss=0.1953, simple_loss=0.2856, pruned_loss=0.05254, over 7213.00 frames.], tot_loss[loss=0.1788, simple_loss=0.266, pruned_loss=0.04578, over 1424300.15 frames.], batch size: 22, lr: 2.20e-04 2022-05-28 14:04:36,159 INFO [train.py:842] (0/4) Epoch 25, batch 4850, loss[loss=0.1846, simple_loss=0.273, pruned_loss=0.04808, over 7320.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2661, pruned_loss=0.04559, over 1429348.11 frames.], batch size: 21, lr: 2.20e-04 2022-05-28 14:05:14,790 INFO [train.py:842] (0/4) Epoch 25, batch 4900, loss[loss=0.1666, simple_loss=0.2575, pruned_loss=0.03786, over 7371.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2657, pruned_loss=0.04546, over 1422710.05 frames.], batch size: 23, lr: 2.20e-04 2022-05-28 14:05:53,357 INFO [train.py:842] (0/4) Epoch 25, batch 4950, loss[loss=0.1911, simple_loss=0.2755, pruned_loss=0.05332, over 4931.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2661, 
pruned_loss=0.04536, over 1420675.67 frames.], batch size: 53, lr: 2.20e-04 2022-05-28 14:06:31,859 INFO [train.py:842] (0/4) Epoch 25, batch 5000, loss[loss=0.165, simple_loss=0.2558, pruned_loss=0.03708, over 7425.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2665, pruned_loss=0.04556, over 1418109.41 frames.], batch size: 20, lr: 2.20e-04 2022-05-28 14:07:10,243 INFO [train.py:842] (0/4) Epoch 25, batch 5050, loss[loss=0.1918, simple_loss=0.2884, pruned_loss=0.04758, over 7311.00 frames.], tot_loss[loss=0.1791, simple_loss=0.267, pruned_loss=0.04557, over 1424494.42 frames.], batch size: 21, lr: 2.20e-04 2022-05-28 14:07:49,138 INFO [train.py:842] (0/4) Epoch 25, batch 5100, loss[loss=0.1741, simple_loss=0.266, pruned_loss=0.04107, over 7149.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2662, pruned_loss=0.04566, over 1426614.58 frames.], batch size: 19, lr: 2.20e-04 2022-05-28 14:08:27,765 INFO [train.py:842] (0/4) Epoch 25, batch 5150, loss[loss=0.181, simple_loss=0.2723, pruned_loss=0.04487, over 7119.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2666, pruned_loss=0.04587, over 1426278.46 frames.], batch size: 21, lr: 2.20e-04 2022-05-28 14:09:06,403 INFO [train.py:842] (0/4) Epoch 25, batch 5200, loss[loss=0.1845, simple_loss=0.2788, pruned_loss=0.04511, over 7114.00 frames.], tot_loss[loss=0.18, simple_loss=0.268, pruned_loss=0.04603, over 1427491.27 frames.], batch size: 26, lr: 2.20e-04 2022-05-28 14:09:44,927 INFO [train.py:842] (0/4) Epoch 25, batch 5250, loss[loss=0.2093, simple_loss=0.2897, pruned_loss=0.06441, over 7358.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2669, pruned_loss=0.04583, over 1424906.68 frames.], batch size: 19, lr: 2.20e-04 2022-05-28 14:10:23,853 INFO [train.py:842] (0/4) Epoch 25, batch 5300, loss[loss=0.1838, simple_loss=0.2579, pruned_loss=0.05486, over 7340.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2676, pruned_loss=0.04666, over 1421620.56 frames.], batch size: 20, lr: 2.20e-04 2022-05-28 14:11:02,326 INFO [train.py:842] (0/4) Epoch 25, batch 5350, loss[loss=0.1805, simple_loss=0.2688, pruned_loss=0.0461, over 7358.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2671, pruned_loss=0.04606, over 1416268.54 frames.], batch size: 19, lr: 2.20e-04 2022-05-28 14:11:41,149 INFO [train.py:842] (0/4) Epoch 25, batch 5400, loss[loss=0.1505, simple_loss=0.2349, pruned_loss=0.03307, over 7075.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2656, pruned_loss=0.04556, over 1421962.60 frames.], batch size: 18, lr: 2.20e-04 2022-05-28 14:12:19,664 INFO [train.py:842] (0/4) Epoch 25, batch 5450, loss[loss=0.146, simple_loss=0.2369, pruned_loss=0.02752, over 7437.00 frames.], tot_loss[loss=0.1785, simple_loss=0.266, pruned_loss=0.04548, over 1418530.39 frames.], batch size: 20, lr: 2.20e-04 2022-05-28 14:12:58,308 INFO [train.py:842] (0/4) Epoch 25, batch 5500, loss[loss=0.1921, simple_loss=0.2796, pruned_loss=0.0523, over 6435.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2661, pruned_loss=0.04558, over 1418988.06 frames.], batch size: 37, lr: 2.20e-04 2022-05-28 14:13:36,721 INFO [train.py:842] (0/4) Epoch 25, batch 5550, loss[loss=0.1605, simple_loss=0.2508, pruned_loss=0.03504, over 7415.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2647, pruned_loss=0.04485, over 1423796.45 frames.], batch size: 21, lr: 2.20e-04 2022-05-28 14:14:15,453 INFO [train.py:842] (0/4) Epoch 25, batch 5600, loss[loss=0.2281, simple_loss=0.3073, pruned_loss=0.07451, over 7226.00 frames.], tot_loss[loss=0.1766, simple_loss=0.264, pruned_loss=0.04459, over 
1427711.48 frames.], batch size: 21, lr: 2.20e-04 2022-05-28 14:14:53,917 INFO [train.py:842] (0/4) Epoch 25, batch 5650, loss[loss=0.1688, simple_loss=0.2649, pruned_loss=0.03636, over 7120.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2646, pruned_loss=0.04437, over 1431091.74 frames.], batch size: 28, lr: 2.20e-04 2022-05-28 14:15:32,493 INFO [train.py:842] (0/4) Epoch 25, batch 5700, loss[loss=0.1807, simple_loss=0.2784, pruned_loss=0.04152, over 7333.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2651, pruned_loss=0.04459, over 1426556.84 frames.], batch size: 22, lr: 2.20e-04 2022-05-28 14:16:11,070 INFO [train.py:842] (0/4) Epoch 25, batch 5750, loss[loss=0.1403, simple_loss=0.2259, pruned_loss=0.02739, over 7137.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2639, pruned_loss=0.04396, over 1428730.56 frames.], batch size: 17, lr: 2.20e-04 2022-05-28 14:16:49,560 INFO [train.py:842] (0/4) Epoch 25, batch 5800, loss[loss=0.2461, simple_loss=0.3352, pruned_loss=0.07848, over 7139.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2655, pruned_loss=0.04463, over 1430576.22 frames.], batch size: 20, lr: 2.20e-04 2022-05-28 14:17:27,564 INFO [train.py:842] (0/4) Epoch 25, batch 5850, loss[loss=0.1721, simple_loss=0.265, pruned_loss=0.03959, over 6560.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2668, pruned_loss=0.04518, over 1425002.17 frames.], batch size: 38, lr: 2.20e-04 2022-05-28 14:18:06,220 INFO [train.py:842] (0/4) Epoch 25, batch 5900, loss[loss=0.1656, simple_loss=0.2645, pruned_loss=0.03328, over 7332.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2667, pruned_loss=0.04503, over 1424396.89 frames.], batch size: 22, lr: 2.19e-04 2022-05-28 14:18:44,327 INFO [train.py:842] (0/4) Epoch 25, batch 5950, loss[loss=0.1999, simple_loss=0.2819, pruned_loss=0.05897, over 7432.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2683, pruned_loss=0.04607, over 1422477.68 frames.], batch size: 20, lr: 2.19e-04 2022-05-28 14:19:23,244 INFO [train.py:842] (0/4) Epoch 25, batch 6000, loss[loss=0.1791, simple_loss=0.2768, pruned_loss=0.04066, over 7336.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2667, pruned_loss=0.0454, over 1423964.68 frames.], batch size: 22, lr: 2.19e-04 2022-05-28 14:19:23,245 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 14:19:32,870 INFO [train.py:871] (0/4) Epoch 25, validation: loss=0.1658, simple_loss=0.2641, pruned_loss=0.03372, over 868885.00 frames. 
2022-05-28 14:20:11,398 INFO [train.py:842] (0/4) Epoch 25, batch 6050, loss[loss=0.2, simple_loss=0.2898, pruned_loss=0.05505, over 7201.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2658, pruned_loss=0.04533, over 1425794.92 frames.], batch size: 23, lr: 2.19e-04 2022-05-28 14:20:50,169 INFO [train.py:842] (0/4) Epoch 25, batch 6100, loss[loss=0.1803, simple_loss=0.2592, pruned_loss=0.05074, over 6992.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2658, pruned_loss=0.04538, over 1427806.92 frames.], batch size: 16, lr: 2.19e-04 2022-05-28 14:21:28,811 INFO [train.py:842] (0/4) Epoch 25, batch 6150, loss[loss=0.2017, simple_loss=0.2965, pruned_loss=0.05342, over 7109.00 frames.], tot_loss[loss=0.1788, simple_loss=0.266, pruned_loss=0.04581, over 1425528.27 frames.], batch size: 21, lr: 2.19e-04 2022-05-28 14:22:07,504 INFO [train.py:842] (0/4) Epoch 25, batch 6200, loss[loss=0.1737, simple_loss=0.2522, pruned_loss=0.04758, over 7323.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2672, pruned_loss=0.04648, over 1422251.74 frames.], batch size: 20, lr: 2.19e-04 2022-05-28 14:22:45,912 INFO [train.py:842] (0/4) Epoch 25, batch 6250, loss[loss=0.1698, simple_loss=0.2771, pruned_loss=0.03129, over 7204.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2674, pruned_loss=0.04656, over 1416909.17 frames.], batch size: 21, lr: 2.19e-04 2022-05-28 14:23:34,548 INFO [train.py:842] (0/4) Epoch 25, batch 6300, loss[loss=0.1706, simple_loss=0.2651, pruned_loss=0.03803, over 7315.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2677, pruned_loss=0.04651, over 1417387.43 frames.], batch size: 22, lr: 2.19e-04 2022-05-28 14:24:12,963 INFO [train.py:842] (0/4) Epoch 25, batch 6350, loss[loss=0.2042, simple_loss=0.2806, pruned_loss=0.06391, over 7411.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2689, pruned_loss=0.04727, over 1416626.85 frames.], batch size: 21, lr: 2.19e-04 2022-05-28 14:24:51,897 INFO [train.py:842] (0/4) Epoch 25, batch 6400, loss[loss=0.162, simple_loss=0.2496, pruned_loss=0.03722, over 7249.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2683, pruned_loss=0.04637, over 1418293.21 frames.], batch size: 19, lr: 2.19e-04 2022-05-28 14:25:30,512 INFO [train.py:842] (0/4) Epoch 25, batch 6450, loss[loss=0.1688, simple_loss=0.2365, pruned_loss=0.05057, over 6991.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2679, pruned_loss=0.04612, over 1422400.28 frames.], batch size: 16, lr: 2.19e-04 2022-05-28 14:26:09,238 INFO [train.py:842] (0/4) Epoch 25, batch 6500, loss[loss=0.1921, simple_loss=0.2746, pruned_loss=0.0548, over 6601.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2683, pruned_loss=0.04655, over 1422265.17 frames.], batch size: 38, lr: 2.19e-04 2022-05-28 14:26:47,621 INFO [train.py:842] (0/4) Epoch 25, batch 6550, loss[loss=0.1521, simple_loss=0.2343, pruned_loss=0.03493, over 7159.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2678, pruned_loss=0.04657, over 1420276.59 frames.], batch size: 19, lr: 2.19e-04 2022-05-28 14:27:26,212 INFO [train.py:842] (0/4) Epoch 25, batch 6600, loss[loss=0.2161, simple_loss=0.288, pruned_loss=0.07207, over 7232.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2684, pruned_loss=0.04638, over 1422785.81 frames.], batch size: 20, lr: 2.19e-04 2022-05-28 14:28:04,735 INFO [train.py:842] (0/4) Epoch 25, batch 6650, loss[loss=0.1988, simple_loss=0.2899, pruned_loss=0.05384, over 6738.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2679, pruned_loss=0.04637, over 1425089.22 frames.], batch size: 31, lr: 2.19e-04 2022-05-28 14:28:43,447 
INFO [train.py:842] (0/4) Epoch 25, batch 6700, loss[loss=0.1746, simple_loss=0.262, pruned_loss=0.0436, over 7160.00 frames.], tot_loss[loss=0.1802, simple_loss=0.268, pruned_loss=0.04627, over 1426408.84 frames.], batch size: 19, lr: 2.19e-04 2022-05-28 14:29:22,074 INFO [train.py:842] (0/4) Epoch 25, batch 6750, loss[loss=0.1975, simple_loss=0.2879, pruned_loss=0.05353, over 7142.00 frames.], tot_loss[loss=0.1804, simple_loss=0.268, pruned_loss=0.04639, over 1426933.89 frames.], batch size: 20, lr: 2.19e-04 2022-05-28 14:30:00,923 INFO [train.py:842] (0/4) Epoch 25, batch 6800, loss[loss=0.1679, simple_loss=0.2617, pruned_loss=0.03704, over 5393.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2668, pruned_loss=0.04573, over 1424260.53 frames.], batch size: 54, lr: 2.19e-04 2022-05-28 14:30:39,282 INFO [train.py:842] (0/4) Epoch 25, batch 6850, loss[loss=0.1729, simple_loss=0.2584, pruned_loss=0.04365, over 7430.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2684, pruned_loss=0.04659, over 1427429.04 frames.], batch size: 20, lr: 2.19e-04 2022-05-28 14:31:18,197 INFO [train.py:842] (0/4) Epoch 25, batch 6900, loss[loss=0.1524, simple_loss=0.236, pruned_loss=0.03436, over 7136.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2675, pruned_loss=0.04619, over 1429982.48 frames.], batch size: 17, lr: 2.19e-04 2022-05-28 14:31:56,680 INFO [train.py:842] (0/4) Epoch 25, batch 6950, loss[loss=0.154, simple_loss=0.2471, pruned_loss=0.03048, over 7329.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2674, pruned_loss=0.04617, over 1430517.17 frames.], batch size: 21, lr: 2.19e-04 2022-05-28 14:32:35,579 INFO [train.py:842] (0/4) Epoch 25, batch 7000, loss[loss=0.1621, simple_loss=0.2443, pruned_loss=0.03992, over 7275.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2664, pruned_loss=0.04553, over 1433283.11 frames.], batch size: 18, lr: 2.19e-04 2022-05-28 14:33:13,742 INFO [train.py:842] (0/4) Epoch 25, batch 7050, loss[loss=0.1518, simple_loss=0.239, pruned_loss=0.03229, over 7257.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2669, pruned_loss=0.04584, over 1428152.69 frames.], batch size: 19, lr: 2.19e-04 2022-05-28 14:33:52,527 INFO [train.py:842] (0/4) Epoch 25, batch 7100, loss[loss=0.1731, simple_loss=0.2664, pruned_loss=0.03994, over 7324.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2665, pruned_loss=0.04592, over 1428850.04 frames.], batch size: 21, lr: 2.19e-04 2022-05-28 14:34:31,051 INFO [train.py:842] (0/4) Epoch 25, batch 7150, loss[loss=0.1557, simple_loss=0.2504, pruned_loss=0.03053, over 7294.00 frames.], tot_loss[loss=0.1789, simple_loss=0.2664, pruned_loss=0.04572, over 1428458.82 frames.], batch size: 17, lr: 2.19e-04 2022-05-28 14:35:09,888 INFO [train.py:842] (0/4) Epoch 25, batch 7200, loss[loss=0.1735, simple_loss=0.2751, pruned_loss=0.03598, over 7317.00 frames.], tot_loss[loss=0.1789, simple_loss=0.2664, pruned_loss=0.04571, over 1429220.25 frames.], batch size: 21, lr: 2.19e-04 2022-05-28 14:35:48,496 INFO [train.py:842] (0/4) Epoch 25, batch 7250, loss[loss=0.2075, simple_loss=0.2999, pruned_loss=0.05755, over 7162.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2657, pruned_loss=0.04525, over 1429157.71 frames.], batch size: 26, lr: 2.19e-04 2022-05-28 14:36:26,923 INFO [train.py:842] (0/4) Epoch 25, batch 7300, loss[loss=0.1824, simple_loss=0.2743, pruned_loss=0.04529, over 7320.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2667, pruned_loss=0.0452, over 1425428.90 frames.], batch size: 21, lr: 2.19e-04 2022-05-28 14:37:05,546 INFO [train.py:842] (0/4) 
Epoch 25, batch 7350, loss[loss=0.1806, simple_loss=0.2821, pruned_loss=0.03954, over 7201.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2664, pruned_loss=0.04503, over 1427480.09 frames.], batch size: 22, lr: 2.19e-04 2022-05-28 14:37:44,482 INFO [train.py:842] (0/4) Epoch 25, batch 7400, loss[loss=0.1852, simple_loss=0.2701, pruned_loss=0.05018, over 7063.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2669, pruned_loss=0.04564, over 1425246.33 frames.], batch size: 18, lr: 2.19e-04 2022-05-28 14:38:22,845 INFO [train.py:842] (0/4) Epoch 25, batch 7450, loss[loss=0.1878, simple_loss=0.2805, pruned_loss=0.04757, over 7258.00 frames.], tot_loss[loss=0.18, simple_loss=0.2679, pruned_loss=0.04604, over 1426639.70 frames.], batch size: 24, lr: 2.19e-04 2022-05-28 14:39:01,267 INFO [train.py:842] (0/4) Epoch 25, batch 7500, loss[loss=0.1685, simple_loss=0.2654, pruned_loss=0.03581, over 7068.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2699, pruned_loss=0.04728, over 1424339.15 frames.], batch size: 18, lr: 2.19e-04 2022-05-28 14:39:39,758 INFO [train.py:842] (0/4) Epoch 25, batch 7550, loss[loss=0.203, simple_loss=0.2895, pruned_loss=0.05827, over 7289.00 frames.], tot_loss[loss=0.181, simple_loss=0.2687, pruned_loss=0.04664, over 1427399.83 frames.], batch size: 25, lr: 2.19e-04 2022-05-28 14:40:18,789 INFO [train.py:842] (0/4) Epoch 25, batch 7600, loss[loss=0.1959, simple_loss=0.291, pruned_loss=0.05041, over 7385.00 frames.], tot_loss[loss=0.18, simple_loss=0.2674, pruned_loss=0.04623, over 1432274.04 frames.], batch size: 23, lr: 2.19e-04 2022-05-28 14:40:57,016 INFO [train.py:842] (0/4) Epoch 25, batch 7650, loss[loss=0.195, simple_loss=0.2959, pruned_loss=0.04708, over 7129.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2683, pruned_loss=0.04639, over 1432286.45 frames.], batch size: 21, lr: 2.19e-04 2022-05-28 14:41:35,617 INFO [train.py:842] (0/4) Epoch 25, batch 7700, loss[loss=0.1749, simple_loss=0.2552, pruned_loss=0.0473, over 7065.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2682, pruned_loss=0.04641, over 1430944.14 frames.], batch size: 18, lr: 2.19e-04 2022-05-28 14:42:14,004 INFO [train.py:842] (0/4) Epoch 25, batch 7750, loss[loss=0.1895, simple_loss=0.273, pruned_loss=0.053, over 5070.00 frames.], tot_loss[loss=0.1809, simple_loss=0.2681, pruned_loss=0.04683, over 1430339.61 frames.], batch size: 52, lr: 2.19e-04 2022-05-28 14:42:52,791 INFO [train.py:842] (0/4) Epoch 25, batch 7800, loss[loss=0.2154, simple_loss=0.3032, pruned_loss=0.06384, over 6804.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2671, pruned_loss=0.04633, over 1426930.23 frames.], batch size: 31, lr: 2.19e-04 2022-05-28 14:43:31,077 INFO [train.py:842] (0/4) Epoch 25, batch 7850, loss[loss=0.1711, simple_loss=0.2744, pruned_loss=0.03386, over 7316.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2674, pruned_loss=0.04655, over 1426584.32 frames.], batch size: 21, lr: 2.19e-04 2022-05-28 14:44:10,152 INFO [train.py:842] (0/4) Epoch 25, batch 7900, loss[loss=0.1581, simple_loss=0.253, pruned_loss=0.03167, over 7278.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2667, pruned_loss=0.0462, over 1426835.67 frames.], batch size: 18, lr: 2.19e-04 2022-05-28 14:44:48,426 INFO [train.py:842] (0/4) Epoch 25, batch 7950, loss[loss=0.1698, simple_loss=0.2549, pruned_loss=0.04233, over 6999.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2666, pruned_loss=0.04595, over 1425382.74 frames.], batch size: 16, lr: 2.18e-04 2022-05-28 14:45:27,119 INFO [train.py:842] (0/4) Epoch 25, batch 8000, 
loss[loss=0.2055, simple_loss=0.2869, pruned_loss=0.06206, over 7331.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2675, pruned_loss=0.0467, over 1421386.53 frames.], batch size: 20, lr: 2.18e-04 2022-05-28 14:46:05,217 INFO [train.py:842] (0/4) Epoch 25, batch 8050, loss[loss=0.1508, simple_loss=0.2356, pruned_loss=0.03304, over 7270.00 frames.], tot_loss[loss=0.18, simple_loss=0.2675, pruned_loss=0.04623, over 1418788.23 frames.], batch size: 18, lr: 2.18e-04 2022-05-28 14:46:43,908 INFO [train.py:842] (0/4) Epoch 25, batch 8100, loss[loss=0.1685, simple_loss=0.259, pruned_loss=0.03895, over 7200.00 frames.], tot_loss[loss=0.18, simple_loss=0.2676, pruned_loss=0.04621, over 1421301.89 frames.], batch size: 22, lr: 2.18e-04 2022-05-28 14:47:22,084 INFO [train.py:842] (0/4) Epoch 25, batch 8150, loss[loss=0.1954, simple_loss=0.2872, pruned_loss=0.05185, over 7240.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2677, pruned_loss=0.04637, over 1416148.93 frames.], batch size: 20, lr: 2.18e-04 2022-05-28 14:48:00,735 INFO [train.py:842] (0/4) Epoch 25, batch 8200, loss[loss=0.1391, simple_loss=0.2202, pruned_loss=0.02898, over 7270.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2666, pruned_loss=0.04633, over 1412833.03 frames.], batch size: 17, lr: 2.18e-04 2022-05-28 14:48:39,304 INFO [train.py:842] (0/4) Epoch 25, batch 8250, loss[loss=0.1412, simple_loss=0.2363, pruned_loss=0.02304, over 7160.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2662, pruned_loss=0.04606, over 1418069.29 frames.], batch size: 19, lr: 2.18e-04 2022-05-28 14:49:18,041 INFO [train.py:842] (0/4) Epoch 25, batch 8300, loss[loss=0.1692, simple_loss=0.2569, pruned_loss=0.04078, over 7454.00 frames.], tot_loss[loss=0.179, simple_loss=0.2669, pruned_loss=0.04558, over 1421336.21 frames.], batch size: 19, lr: 2.18e-04 2022-05-28 14:49:56,272 INFO [train.py:842] (0/4) Epoch 25, batch 8350, loss[loss=0.1631, simple_loss=0.2418, pruned_loss=0.04216, over 7285.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2666, pruned_loss=0.04584, over 1416531.02 frames.], batch size: 17, lr: 2.18e-04 2022-05-28 14:50:35,120 INFO [train.py:842] (0/4) Epoch 25, batch 8400, loss[loss=0.1635, simple_loss=0.2575, pruned_loss=0.03473, over 7219.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2661, pruned_loss=0.04561, over 1415083.49 frames.], batch size: 21, lr: 2.18e-04 2022-05-28 14:51:13,341 INFO [train.py:842] (0/4) Epoch 25, batch 8450, loss[loss=0.1945, simple_loss=0.2966, pruned_loss=0.04623, over 7163.00 frames.], tot_loss[loss=0.18, simple_loss=0.2678, pruned_loss=0.0461, over 1415758.97 frames.], batch size: 26, lr: 2.18e-04 2022-05-28 14:51:51,811 INFO [train.py:842] (0/4) Epoch 25, batch 8500, loss[loss=0.1803, simple_loss=0.263, pruned_loss=0.04881, over 7071.00 frames.], tot_loss[loss=0.181, simple_loss=0.2688, pruned_loss=0.04665, over 1415221.16 frames.], batch size: 18, lr: 2.18e-04 2022-05-28 14:52:29,806 INFO [train.py:842] (0/4) Epoch 25, batch 8550, loss[loss=0.1446, simple_loss=0.2186, pruned_loss=0.03527, over 7425.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2679, pruned_loss=0.04623, over 1412301.44 frames.], batch size: 18, lr: 2.18e-04 2022-05-28 14:53:08,356 INFO [train.py:842] (0/4) Epoch 25, batch 8600, loss[loss=0.207, simple_loss=0.2944, pruned_loss=0.05981, over 7123.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2683, pruned_loss=0.04631, over 1413839.71 frames.], batch size: 21, lr: 2.18e-04 2022-05-28 14:53:46,538 INFO [train.py:842] (0/4) Epoch 25, batch 8650, loss[loss=0.1794, 
simple_loss=0.2717, pruned_loss=0.04359, over 7305.00 frames.], tot_loss[loss=0.1812, simple_loss=0.269, pruned_loss=0.04672, over 1418224.13 frames.], batch size: 24, lr: 2.18e-04 2022-05-28 14:54:25,086 INFO [train.py:842] (0/4) Epoch 25, batch 8700, loss[loss=0.1389, simple_loss=0.2249, pruned_loss=0.02648, over 7272.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2681, pruned_loss=0.04613, over 1419959.51 frames.], batch size: 18, lr: 2.18e-04 2022-05-28 14:55:03,380 INFO [train.py:842] (0/4) Epoch 25, batch 8750, loss[loss=0.1832, simple_loss=0.2846, pruned_loss=0.04093, over 7201.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2689, pruned_loss=0.0464, over 1423122.87 frames.], batch size: 23, lr: 2.18e-04 2022-05-28 14:55:42,038 INFO [train.py:842] (0/4) Epoch 25, batch 8800, loss[loss=0.1543, simple_loss=0.2386, pruned_loss=0.03498, over 7063.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2682, pruned_loss=0.0461, over 1421330.94 frames.], batch size: 18, lr: 2.18e-04 2022-05-28 14:56:20,240 INFO [train.py:842] (0/4) Epoch 25, batch 8850, loss[loss=0.154, simple_loss=0.2593, pruned_loss=0.02437, over 7223.00 frames.], tot_loss[loss=0.1789, simple_loss=0.2673, pruned_loss=0.04527, over 1421398.33 frames.], batch size: 21, lr: 2.18e-04 2022-05-28 14:56:58,669 INFO [train.py:842] (0/4) Epoch 25, batch 8900, loss[loss=0.1888, simple_loss=0.2811, pruned_loss=0.04822, over 7084.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2678, pruned_loss=0.04588, over 1407684.89 frames.], batch size: 28, lr: 2.18e-04 2022-05-28 14:57:36,567 INFO [train.py:842] (0/4) Epoch 25, batch 8950, loss[loss=0.2231, simple_loss=0.298, pruned_loss=0.07408, over 4766.00 frames.], tot_loss[loss=0.182, simple_loss=0.2701, pruned_loss=0.04698, over 1399905.73 frames.], batch size: 52, lr: 2.18e-04 2022-05-28 14:58:14,362 INFO [train.py:842] (0/4) Epoch 25, batch 9000, loss[loss=0.2007, simple_loss=0.2946, pruned_loss=0.05344, over 6284.00 frames.], tot_loss[loss=0.1836, simple_loss=0.2719, pruned_loss=0.04765, over 1386405.27 frames.], batch size: 38, lr: 2.18e-04 2022-05-28 14:58:14,363 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 14:58:23,547 INFO [train.py:871] (0/4) Epoch 25, validation: loss=0.1658, simple_loss=0.265, pruned_loss=0.03327, over 868885.00 frames. 
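A validation pass is logged periodically within each epoch (here at batches 3000, 6000 and 9000 of epoch 25, and at batch 9000 of epoch 24 further above); across those checks the reported validation loss barely moves (0.1628 for epoch 24, then 0.1651, 0.1658, 0.1658 for epoch 25). The short sketch below, again not part of the training run and based only on the "Epoch N, validation: loss=..." pattern shown in this log, collects those entries per epoch so the trend is easier to compare; "train.log" is again a placeholder path.

import re
from collections import defaultdict

# Pattern for the periodic validation entries visible above.
VALID = re.compile(r"Epoch (\d+), validation: loss=([\d.]+)")

def validation_history(path="train.log"):  # "train.log" is a placeholder path
    """Return {epoch: [validation losses in the order they were logged]}."""
    history = defaultdict(list)
    with open(path) as f:
        for m in VALID.finditer(f.read()):
            history[int(m.group(1))].append(float(m.group(2)))
    return dict(history)

if __name__ == "__main__":
    for epoch, losses in sorted(validation_history().items()):
        print(f"epoch {epoch:2d}: " + ", ".join(f"{v:.4f}" for v in losses))

Grouping by epoch keeps the comparison aligned with the checkpoints written at each epoch boundary (epoch-24.pt and epoch-25.pt in the entries above).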
2022-05-28 14:59:00,528 INFO [train.py:842] (0/4) Epoch 25, batch 9050, loss[loss=0.1869, simple_loss=0.2769, pruned_loss=0.04846, over 6352.00 frames.], tot_loss[loss=0.1856, simple_loss=0.274, pruned_loss=0.04861, over 1351197.50 frames.], batch size: 38, lr: 2.18e-04 2022-05-28 14:59:37,684 INFO [train.py:842] (0/4) Epoch 25, batch 9100, loss[loss=0.2095, simple_loss=0.2929, pruned_loss=0.06304, over 5334.00 frames.], tot_loss[loss=0.1883, simple_loss=0.2765, pruned_loss=0.05008, over 1303116.17 frames.], batch size: 52, lr: 2.18e-04 2022-05-28 15:00:14,851 INFO [train.py:842] (0/4) Epoch 25, batch 9150, loss[loss=0.1795, simple_loss=0.2631, pruned_loss=0.04794, over 5341.00 frames.], tot_loss[loss=0.1929, simple_loss=0.2797, pruned_loss=0.05309, over 1247404.24 frames.], batch size: 52, lr: 2.18e-04 2022-05-28 15:00:45,895 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-25.pt 2022-05-28 15:01:00,944 INFO [train.py:842] (0/4) Epoch 26, batch 0, loss[loss=0.1802, simple_loss=0.2688, pruned_loss=0.04583, over 7221.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2688, pruned_loss=0.04583, over 7221.00 frames.], batch size: 21, lr: 2.14e-04 2022-05-28 15:01:39,897 INFO [train.py:842] (0/4) Epoch 26, batch 50, loss[loss=0.1652, simple_loss=0.2563, pruned_loss=0.03708, over 7304.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2613, pruned_loss=0.04263, over 322051.87 frames.], batch size: 21, lr: 2.14e-04 2022-05-28 15:02:18,130 INFO [train.py:842] (0/4) Epoch 26, batch 100, loss[loss=0.2798, simple_loss=0.3472, pruned_loss=0.1062, over 4828.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2655, pruned_loss=0.0446, over 566132.74 frames.], batch size: 52, lr: 2.14e-04 2022-05-28 15:02:56,774 INFO [train.py:842] (0/4) Epoch 26, batch 150, loss[loss=0.1591, simple_loss=0.2397, pruned_loss=0.0393, over 7280.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2661, pruned_loss=0.04473, over 760500.04 frames.], batch size: 17, lr: 2.14e-04 2022-05-28 15:03:35,180 INFO [train.py:842] (0/4) Epoch 26, batch 200, loss[loss=0.2352, simple_loss=0.303, pruned_loss=0.0837, over 7379.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2653, pruned_loss=0.04439, over 907148.77 frames.], batch size: 23, lr: 2.14e-04 2022-05-28 15:04:13,804 INFO [train.py:842] (0/4) Epoch 26, batch 250, loss[loss=0.2061, simple_loss=0.2982, pruned_loss=0.05704, over 7199.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2653, pruned_loss=0.04483, over 1019281.18 frames.], batch size: 22, lr: 2.14e-04 2022-05-28 15:04:51,892 INFO [train.py:842] (0/4) Epoch 26, batch 300, loss[loss=0.1717, simple_loss=0.2578, pruned_loss=0.04279, over 7327.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2663, pruned_loss=0.04552, over 1105779.72 frames.], batch size: 20, lr: 2.14e-04 2022-05-28 15:05:30,449 INFO [train.py:842] (0/4) Epoch 26, batch 350, loss[loss=0.1654, simple_loss=0.2525, pruned_loss=0.03916, over 7171.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2681, pruned_loss=0.04675, over 1176222.42 frames.], batch size: 18, lr: 2.14e-04 2022-05-28 15:06:08,846 INFO [train.py:842] (0/4) Epoch 26, batch 400, loss[loss=0.1407, simple_loss=0.2202, pruned_loss=0.0306, over 7413.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2677, pruned_loss=0.04639, over 1233679.10 frames.], batch size: 18, lr: 2.14e-04 2022-05-28 15:06:47,612 INFO [train.py:842] (0/4) Epoch 26, batch 450, loss[loss=0.1894, simple_loss=0.2729, pruned_loss=0.05295, over 7418.00 frames.], tot_loss[loss=0.1794, 
simple_loss=0.267, pruned_loss=0.0459, over 1274994.52 frames.], batch size: 21, lr: 2.14e-04 2022-05-28 15:07:25,845 INFO [train.py:842] (0/4) Epoch 26, batch 500, loss[loss=0.1929, simple_loss=0.2834, pruned_loss=0.05114, over 7378.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2675, pruned_loss=0.04611, over 1303466.59 frames.], batch size: 23, lr: 2.14e-04 2022-05-28 15:08:04,617 INFO [train.py:842] (0/4) Epoch 26, batch 550, loss[loss=0.2008, simple_loss=0.2943, pruned_loss=0.05358, over 7236.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2673, pruned_loss=0.04585, over 1328946.20 frames.], batch size: 20, lr: 2.13e-04 2022-05-28 15:08:43,125 INFO [train.py:842] (0/4) Epoch 26, batch 600, loss[loss=0.2079, simple_loss=0.2841, pruned_loss=0.06591, over 7083.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2668, pruned_loss=0.04571, over 1346863.75 frames.], batch size: 28, lr: 2.13e-04 2022-05-28 15:09:22,090 INFO [train.py:842] (0/4) Epoch 26, batch 650, loss[loss=0.1441, simple_loss=0.2324, pruned_loss=0.02789, over 7329.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2649, pruned_loss=0.04481, over 1360962.58 frames.], batch size: 20, lr: 2.13e-04 2022-05-28 15:10:10,857 INFO [train.py:842] (0/4) Epoch 26, batch 700, loss[loss=0.2184, simple_loss=0.2917, pruned_loss=0.07252, over 7145.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2641, pruned_loss=0.04426, over 1374849.36 frames.], batch size: 20, lr: 2.13e-04 2022-05-28 15:10:49,685 INFO [train.py:842] (0/4) Epoch 26, batch 750, loss[loss=0.1721, simple_loss=0.2601, pruned_loss=0.04208, over 7428.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2636, pruned_loss=0.04372, over 1389930.58 frames.], batch size: 20, lr: 2.13e-04 2022-05-28 15:11:27,859 INFO [train.py:842] (0/4) Epoch 26, batch 800, loss[loss=0.1888, simple_loss=0.2751, pruned_loss=0.05124, over 6752.00 frames.], tot_loss[loss=0.1757, simple_loss=0.264, pruned_loss=0.04367, over 1395040.13 frames.], batch size: 31, lr: 2.13e-04 2022-05-28 15:12:06,491 INFO [train.py:842] (0/4) Epoch 26, batch 850, loss[loss=0.2408, simple_loss=0.3136, pruned_loss=0.08398, over 7114.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2655, pruned_loss=0.04476, over 1405728.85 frames.], batch size: 21, lr: 2.13e-04 2022-05-28 15:12:54,991 INFO [train.py:842] (0/4) Epoch 26, batch 900, loss[loss=0.1676, simple_loss=0.2423, pruned_loss=0.04643, over 6827.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2669, pruned_loss=0.04568, over 1405621.90 frames.], batch size: 15, lr: 2.13e-04 2022-05-28 15:13:43,641 INFO [train.py:842] (0/4) Epoch 26, batch 950, loss[loss=0.1593, simple_loss=0.2387, pruned_loss=0.03994, over 7282.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2658, pruned_loss=0.04528, over 1412254.15 frames.], batch size: 17, lr: 2.13e-04 2022-05-28 15:14:21,982 INFO [train.py:842] (0/4) Epoch 26, batch 1000, loss[loss=0.1801, simple_loss=0.2719, pruned_loss=0.04416, over 7113.00 frames.], tot_loss[loss=0.179, simple_loss=0.2665, pruned_loss=0.04572, over 1410882.58 frames.], batch size: 21, lr: 2.13e-04 2022-05-28 15:15:00,539 INFO [train.py:842] (0/4) Epoch 26, batch 1050, loss[loss=0.2212, simple_loss=0.293, pruned_loss=0.07468, over 5001.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2671, pruned_loss=0.04565, over 1411531.30 frames.], batch size: 52, lr: 2.13e-04 2022-05-28 15:15:38,919 INFO [train.py:842] (0/4) Epoch 26, batch 1100, loss[loss=0.1903, simple_loss=0.2811, pruned_loss=0.04972, over 7122.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2673, 
pruned_loss=0.04621, over 1412196.38 frames.], batch size: 21, lr: 2.13e-04 2022-05-28 15:16:17,612 INFO [train.py:842] (0/4) Epoch 26, batch 1150, loss[loss=0.1803, simple_loss=0.2686, pruned_loss=0.04607, over 7386.00 frames.], tot_loss[loss=0.179, simple_loss=0.2668, pruned_loss=0.04555, over 1416032.50 frames.], batch size: 23, lr: 2.13e-04 2022-05-28 15:16:56,064 INFO [train.py:842] (0/4) Epoch 26, batch 1200, loss[loss=0.1552, simple_loss=0.237, pruned_loss=0.0367, over 7136.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2659, pruned_loss=0.04548, over 1419972.99 frames.], batch size: 17, lr: 2.13e-04 2022-05-28 15:17:34,815 INFO [train.py:842] (0/4) Epoch 26, batch 1250, loss[loss=0.1846, simple_loss=0.283, pruned_loss=0.04307, over 7308.00 frames.], tot_loss[loss=0.1789, simple_loss=0.2665, pruned_loss=0.04567, over 1422002.10 frames.], batch size: 21, lr: 2.13e-04 2022-05-28 15:18:13,310 INFO [train.py:842] (0/4) Epoch 26, batch 1300, loss[loss=0.1697, simple_loss=0.2629, pruned_loss=0.03822, over 7449.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2678, pruned_loss=0.04628, over 1425348.21 frames.], batch size: 20, lr: 2.13e-04 2022-05-28 15:18:51,816 INFO [train.py:842] (0/4) Epoch 26, batch 1350, loss[loss=0.1818, simple_loss=0.2763, pruned_loss=0.04364, over 7315.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2677, pruned_loss=0.04576, over 1425621.59 frames.], batch size: 21, lr: 2.13e-04 2022-05-28 15:19:30,188 INFO [train.py:842] (0/4) Epoch 26, batch 1400, loss[loss=0.1999, simple_loss=0.2853, pruned_loss=0.05725, over 7331.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2682, pruned_loss=0.04595, over 1426201.77 frames.], batch size: 22, lr: 2.13e-04 2022-05-28 15:20:08,704 INFO [train.py:842] (0/4) Epoch 26, batch 1450, loss[loss=0.1722, simple_loss=0.2557, pruned_loss=0.04433, over 7003.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2674, pruned_loss=0.04557, over 1428154.48 frames.], batch size: 16, lr: 2.13e-04 2022-05-28 15:20:47,185 INFO [train.py:842] (0/4) Epoch 26, batch 1500, loss[loss=0.1693, simple_loss=0.2617, pruned_loss=0.03841, over 7217.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2665, pruned_loss=0.04547, over 1427349.39 frames.], batch size: 21, lr: 2.13e-04 2022-05-28 15:21:25,991 INFO [train.py:842] (0/4) Epoch 26, batch 1550, loss[loss=0.1402, simple_loss=0.2266, pruned_loss=0.02687, over 7133.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2661, pruned_loss=0.0454, over 1426407.90 frames.], batch size: 17, lr: 2.13e-04 2022-05-28 15:22:03,996 INFO [train.py:842] (0/4) Epoch 26, batch 1600, loss[loss=0.1583, simple_loss=0.2518, pruned_loss=0.03237, over 7142.00 frames.], tot_loss[loss=0.1805, simple_loss=0.268, pruned_loss=0.04651, over 1423643.14 frames.], batch size: 20, lr: 2.13e-04 2022-05-28 15:22:43,191 INFO [train.py:842] (0/4) Epoch 26, batch 1650, loss[loss=0.1712, simple_loss=0.2549, pruned_loss=0.04379, over 7013.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2664, pruned_loss=0.04602, over 1424799.88 frames.], batch size: 28, lr: 2.13e-04 2022-05-28 15:23:21,396 INFO [train.py:842] (0/4) Epoch 26, batch 1700, loss[loss=0.1708, simple_loss=0.2699, pruned_loss=0.03583, over 7318.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2674, pruned_loss=0.04633, over 1424797.22 frames.], batch size: 21, lr: 2.13e-04 2022-05-28 15:24:00,052 INFO [train.py:842] (0/4) Epoch 26, batch 1750, loss[loss=0.1733, simple_loss=0.2572, pruned_loss=0.04474, over 7145.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2669, pruned_loss=0.04563, 
over 1424203.45 frames.], batch size: 17, lr: 2.13e-04 2022-05-28 15:24:38,389 INFO [train.py:842] (0/4) Epoch 26, batch 1800, loss[loss=0.1388, simple_loss=0.2432, pruned_loss=0.01716, over 7146.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2661, pruned_loss=0.04535, over 1421133.12 frames.], batch size: 20, lr: 2.13e-04 2022-05-28 15:25:17,107 INFO [train.py:842] (0/4) Epoch 26, batch 1850, loss[loss=0.127, simple_loss=0.2206, pruned_loss=0.01672, over 7439.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2651, pruned_loss=0.04453, over 1421932.50 frames.], batch size: 20, lr: 2.13e-04 2022-05-28 15:25:55,277 INFO [train.py:842] (0/4) Epoch 26, batch 1900, loss[loss=0.1764, simple_loss=0.2548, pruned_loss=0.049, over 7149.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2665, pruned_loss=0.04557, over 1422759.67 frames.], batch size: 17, lr: 2.13e-04 2022-05-28 15:26:33,986 INFO [train.py:842] (0/4) Epoch 26, batch 1950, loss[loss=0.2387, simple_loss=0.3058, pruned_loss=0.0858, over 4771.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2658, pruned_loss=0.04543, over 1420529.39 frames.], batch size: 52, lr: 2.13e-04 2022-05-28 15:27:12,176 INFO [train.py:842] (0/4) Epoch 26, batch 2000, loss[loss=0.1683, simple_loss=0.2535, pruned_loss=0.04161, over 7156.00 frames.], tot_loss[loss=0.1781, simple_loss=0.266, pruned_loss=0.04508, over 1416960.24 frames.], batch size: 19, lr: 2.13e-04 2022-05-28 15:27:51,022 INFO [train.py:842] (0/4) Epoch 26, batch 2050, loss[loss=0.1697, simple_loss=0.2545, pruned_loss=0.04238, over 7332.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2652, pruned_loss=0.04488, over 1418834.16 frames.], batch size: 20, lr: 2.13e-04 2022-05-28 15:28:29,354 INFO [train.py:842] (0/4) Epoch 26, batch 2100, loss[loss=0.1891, simple_loss=0.2796, pruned_loss=0.04931, over 7212.00 frames.], tot_loss[loss=0.179, simple_loss=0.2666, pruned_loss=0.04571, over 1418068.54 frames.], batch size: 22, lr: 2.13e-04 2022-05-28 15:29:07,767 INFO [train.py:842] (0/4) Epoch 26, batch 2150, loss[loss=0.1496, simple_loss=0.2418, pruned_loss=0.02868, over 7165.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2668, pruned_loss=0.04532, over 1419946.69 frames.], batch size: 18, lr: 2.13e-04 2022-05-28 15:29:46,155 INFO [train.py:842] (0/4) Epoch 26, batch 2200, loss[loss=0.1792, simple_loss=0.2702, pruned_loss=0.04415, over 6990.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2666, pruned_loss=0.04522, over 1422486.04 frames.], batch size: 28, lr: 2.13e-04 2022-05-28 15:29:48,006 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-232000.pt 2022-05-28 15:30:27,789 INFO [train.py:842] (0/4) Epoch 26, batch 2250, loss[loss=0.1772, simple_loss=0.2643, pruned_loss=0.04504, over 7373.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2658, pruned_loss=0.04484, over 1424680.00 frames.], batch size: 23, lr: 2.13e-04 2022-05-28 15:31:06,151 INFO [train.py:842] (0/4) Epoch 26, batch 2300, loss[loss=0.1582, simple_loss=0.2548, pruned_loss=0.03074, over 7064.00 frames.], tot_loss[loss=0.178, simple_loss=0.2661, pruned_loss=0.04492, over 1425028.25 frames.], batch size: 18, lr: 2.13e-04 2022-05-28 15:31:45,088 INFO [train.py:842] (0/4) Epoch 26, batch 2350, loss[loss=0.1912, simple_loss=0.2746, pruned_loss=0.05392, over 7273.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2662, pruned_loss=0.04538, over 1425199.28 frames.], batch size: 19, lr: 2.13e-04 2022-05-28 15:32:23,716 INFO [train.py:842] (0/4) Epoch 26, batch 2400, loss[loss=0.1725, 
simple_loss=0.2578, pruned_loss=0.04356, over 7368.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2658, pruned_loss=0.04501, over 1424605.00 frames.], batch size: 23, lr: 2.13e-04 2022-05-28 15:33:02,373 INFO [train.py:842] (0/4) Epoch 26, batch 2450, loss[loss=0.2548, simple_loss=0.3391, pruned_loss=0.08521, over 6750.00 frames.], tot_loss[loss=0.178, simple_loss=0.2659, pruned_loss=0.04498, over 1424093.84 frames.], batch size: 31, lr: 2.13e-04 2022-05-28 15:33:40,952 INFO [train.py:842] (0/4) Epoch 26, batch 2500, loss[loss=0.152, simple_loss=0.2413, pruned_loss=0.03137, over 7363.00 frames.], tot_loss[loss=0.1766, simple_loss=0.265, pruned_loss=0.04411, over 1425243.96 frames.], batch size: 19, lr: 2.13e-04 2022-05-28 15:34:19,815 INFO [train.py:842] (0/4) Epoch 26, batch 2550, loss[loss=0.157, simple_loss=0.2416, pruned_loss=0.03623, over 7414.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2658, pruned_loss=0.04473, over 1427876.64 frames.], batch size: 18, lr: 2.13e-04 2022-05-28 15:34:58,257 INFO [train.py:842] (0/4) Epoch 26, batch 2600, loss[loss=0.1597, simple_loss=0.2426, pruned_loss=0.03834, over 7159.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2675, pruned_loss=0.04584, over 1424732.92 frames.], batch size: 19, lr: 2.13e-04 2022-05-28 15:35:36,565 INFO [train.py:842] (0/4) Epoch 26, batch 2650, loss[loss=0.2156, simple_loss=0.2965, pruned_loss=0.06737, over 7029.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2677, pruned_loss=0.04593, over 1420073.37 frames.], batch size: 28, lr: 2.13e-04 2022-05-28 15:36:14,915 INFO [train.py:842] (0/4) Epoch 26, batch 2700, loss[loss=0.1644, simple_loss=0.2547, pruned_loss=0.03702, over 7267.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2675, pruned_loss=0.04581, over 1420461.39 frames.], batch size: 19, lr: 2.13e-04 2022-05-28 15:36:53,413 INFO [train.py:842] (0/4) Epoch 26, batch 2750, loss[loss=0.179, simple_loss=0.2689, pruned_loss=0.04451, over 7252.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2675, pruned_loss=0.04604, over 1414064.98 frames.], batch size: 25, lr: 2.12e-04 2022-05-28 15:37:32,300 INFO [train.py:842] (0/4) Epoch 26, batch 2800, loss[loss=0.1416, simple_loss=0.2244, pruned_loss=0.0294, over 7266.00 frames.], tot_loss[loss=0.1792, simple_loss=0.267, pruned_loss=0.04576, over 1416389.06 frames.], batch size: 18, lr: 2.12e-04 2022-05-28 15:38:11,408 INFO [train.py:842] (0/4) Epoch 26, batch 2850, loss[loss=0.1734, simple_loss=0.2571, pruned_loss=0.04486, over 7417.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2659, pruned_loss=0.04557, over 1411732.59 frames.], batch size: 21, lr: 2.12e-04 2022-05-28 15:38:50,413 INFO [train.py:842] (0/4) Epoch 26, batch 2900, loss[loss=0.1631, simple_loss=0.2713, pruned_loss=0.02747, over 7145.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2667, pruned_loss=0.0459, over 1417961.15 frames.], batch size: 20, lr: 2.12e-04 2022-05-28 15:39:30,011 INFO [train.py:842] (0/4) Epoch 26, batch 2950, loss[loss=0.1487, simple_loss=0.2399, pruned_loss=0.02871, over 7324.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2677, pruned_loss=0.04643, over 1418120.35 frames.], batch size: 20, lr: 2.12e-04 2022-05-28 15:40:09,201 INFO [train.py:842] (0/4) Epoch 26, batch 3000, loss[loss=0.2215, simple_loss=0.3067, pruned_loss=0.06811, over 6502.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2676, pruned_loss=0.04597, over 1422378.05 frames.], batch size: 38, lr: 2.12e-04 2022-05-28 15:40:09,204 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 15:40:19,618 INFO 
[train.py:871] (0/4) Epoch 26, validation: loss=0.166, simple_loss=0.2639, pruned_loss=0.0341, over 868885.00 frames. 2022-05-28 15:40:59,108 INFO [train.py:842] (0/4) Epoch 26, batch 3050, loss[loss=0.1681, simple_loss=0.2612, pruned_loss=0.03748, over 7335.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2681, pruned_loss=0.04581, over 1421520.16 frames.], batch size: 22, lr: 2.12e-04 2022-05-28 15:41:38,580 INFO [train.py:842] (0/4) Epoch 26, batch 3100, loss[loss=0.152, simple_loss=0.2428, pruned_loss=0.03063, over 7257.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2676, pruned_loss=0.04572, over 1419153.63 frames.], batch size: 19, lr: 2.12e-04 2022-05-28 15:42:18,082 INFO [train.py:842] (0/4) Epoch 26, batch 3150, loss[loss=0.1502, simple_loss=0.235, pruned_loss=0.03267, over 7142.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2668, pruned_loss=0.04534, over 1419392.36 frames.], batch size: 17, lr: 2.12e-04 2022-05-28 15:42:57,580 INFO [train.py:842] (0/4) Epoch 26, batch 3200, loss[loss=0.1888, simple_loss=0.266, pruned_loss=0.05581, over 7166.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2671, pruned_loss=0.04554, over 1422220.95 frames.], batch size: 19, lr: 2.12e-04 2022-05-28 15:43:37,415 INFO [train.py:842] (0/4) Epoch 26, batch 3250, loss[loss=0.1457, simple_loss=0.2307, pruned_loss=0.03036, over 7281.00 frames.], tot_loss[loss=0.1774, simple_loss=0.265, pruned_loss=0.04487, over 1425918.30 frames.], batch size: 18, lr: 2.12e-04 2022-05-28 15:44:16,475 INFO [train.py:842] (0/4) Epoch 26, batch 3300, loss[loss=0.1904, simple_loss=0.2839, pruned_loss=0.04843, over 7183.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2655, pruned_loss=0.0449, over 1418761.06 frames.], batch size: 26, lr: 2.12e-04 2022-05-28 15:44:55,914 INFO [train.py:842] (0/4) Epoch 26, batch 3350, loss[loss=0.1828, simple_loss=0.2727, pruned_loss=0.04643, over 7316.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2651, pruned_loss=0.04464, over 1415264.39 frames.], batch size: 21, lr: 2.12e-04 2022-05-28 15:45:35,667 INFO [train.py:842] (0/4) Epoch 26, batch 3400, loss[loss=0.2082, simple_loss=0.2973, pruned_loss=0.05957, over 6382.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2647, pruned_loss=0.04475, over 1420851.78 frames.], batch size: 38, lr: 2.12e-04 2022-05-28 15:46:15,355 INFO [train.py:842] (0/4) Epoch 26, batch 3450, loss[loss=0.1601, simple_loss=0.2453, pruned_loss=0.03746, over 7152.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2656, pruned_loss=0.04533, over 1421448.24 frames.], batch size: 18, lr: 2.12e-04 2022-05-28 15:46:54,509 INFO [train.py:842] (0/4) Epoch 26, batch 3500, loss[loss=0.205, simple_loss=0.2872, pruned_loss=0.06146, over 7387.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2668, pruned_loss=0.04583, over 1419754.90 frames.], batch size: 23, lr: 2.12e-04 2022-05-28 15:47:33,966 INFO [train.py:842] (0/4) Epoch 26, batch 3550, loss[loss=0.1974, simple_loss=0.2754, pruned_loss=0.05969, over 7417.00 frames.], tot_loss[loss=0.1789, simple_loss=0.2662, pruned_loss=0.04579, over 1422344.99 frames.], batch size: 21, lr: 2.12e-04 2022-05-28 15:48:13,503 INFO [train.py:842] (0/4) Epoch 26, batch 3600, loss[loss=0.1831, simple_loss=0.2627, pruned_loss=0.05178, over 7202.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2657, pruned_loss=0.04572, over 1426943.59 frames.], batch size: 23, lr: 2.12e-04 2022-05-28 15:48:53,145 INFO [train.py:842] (0/4) Epoch 26, batch 3650, loss[loss=0.1556, simple_loss=0.2493, pruned_loss=0.03096, over 7255.00 frames.], tot_loss[loss=0.1782, 
simple_loss=0.2655, pruned_loss=0.04541, over 1427547.68 frames.], batch size: 19, lr: 2.12e-04 2022-05-28 15:49:32,409 INFO [train.py:842] (0/4) Epoch 26, batch 3700, loss[loss=0.1781, simple_loss=0.2554, pruned_loss=0.05037, over 7445.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2647, pruned_loss=0.04532, over 1424603.33 frames.], batch size: 19, lr: 2.12e-04 2022-05-28 15:50:12,023 INFO [train.py:842] (0/4) Epoch 26, batch 3750, loss[loss=0.1917, simple_loss=0.2743, pruned_loss=0.05449, over 7159.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2662, pruned_loss=0.04565, over 1422368.90 frames.], batch size: 19, lr: 2.12e-04 2022-05-28 15:50:51,099 INFO [train.py:842] (0/4) Epoch 26, batch 3800, loss[loss=0.1614, simple_loss=0.2517, pruned_loss=0.03552, over 6509.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2665, pruned_loss=0.04523, over 1420718.63 frames.], batch size: 37, lr: 2.12e-04 2022-05-28 15:51:30,398 INFO [train.py:842] (0/4) Epoch 26, batch 3850, loss[loss=0.1746, simple_loss=0.2689, pruned_loss=0.04012, over 7148.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2663, pruned_loss=0.04511, over 1418374.24 frames.], batch size: 20, lr: 2.12e-04 2022-05-28 15:52:09,597 INFO [train.py:842] (0/4) Epoch 26, batch 3900, loss[loss=0.1738, simple_loss=0.2644, pruned_loss=0.04159, over 7222.00 frames.], tot_loss[loss=0.1805, simple_loss=0.2681, pruned_loss=0.04647, over 1420032.99 frames.], batch size: 20, lr: 2.12e-04 2022-05-28 15:52:49,316 INFO [train.py:842] (0/4) Epoch 26, batch 3950, loss[loss=0.1866, simple_loss=0.2737, pruned_loss=0.04973, over 6824.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2674, pruned_loss=0.0465, over 1425481.53 frames.], batch size: 31, lr: 2.12e-04 2022-05-28 15:53:28,778 INFO [train.py:842] (0/4) Epoch 26, batch 4000, loss[loss=0.1938, simple_loss=0.2779, pruned_loss=0.05488, over 7123.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2671, pruned_loss=0.04658, over 1418481.83 frames.], batch size: 21, lr: 2.12e-04 2022-05-28 15:54:08,326 INFO [train.py:842] (0/4) Epoch 26, batch 4050, loss[loss=0.1917, simple_loss=0.2793, pruned_loss=0.05207, over 6336.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2665, pruned_loss=0.04597, over 1420096.07 frames.], batch size: 37, lr: 2.12e-04 2022-05-28 15:54:48,036 INFO [train.py:842] (0/4) Epoch 26, batch 4100, loss[loss=0.1334, simple_loss=0.2138, pruned_loss=0.02653, over 7278.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2663, pruned_loss=0.04641, over 1418081.11 frames.], batch size: 17, lr: 2.12e-04 2022-05-28 15:55:28,394 INFO [train.py:842] (0/4) Epoch 26, batch 4150, loss[loss=0.1906, simple_loss=0.2717, pruned_loss=0.05477, over 7061.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2656, pruned_loss=0.04598, over 1421925.79 frames.], batch size: 28, lr: 2.12e-04 2022-05-28 15:56:07,744 INFO [train.py:842] (0/4) Epoch 26, batch 4200, loss[loss=0.1733, simple_loss=0.2538, pruned_loss=0.04641, over 7284.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2655, pruned_loss=0.04598, over 1421696.29 frames.], batch size: 18, lr: 2.12e-04 2022-05-28 15:56:47,350 INFO [train.py:842] (0/4) Epoch 26, batch 4250, loss[loss=0.2118, simple_loss=0.3016, pruned_loss=0.06099, over 7224.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2654, pruned_loss=0.04582, over 1423552.62 frames.], batch size: 21, lr: 2.12e-04 2022-05-28 15:57:26,571 INFO [train.py:842] (0/4) Epoch 26, batch 4300, loss[loss=0.2018, simple_loss=0.2855, pruned_loss=0.05906, over 7425.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2664, 
pruned_loss=0.04638, over 1422181.58 frames.], batch size: 20, lr: 2.12e-04 2022-05-28 15:58:06,185 INFO [train.py:842] (0/4) Epoch 26, batch 4350, loss[loss=0.2132, simple_loss=0.294, pruned_loss=0.06625, over 7393.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2657, pruned_loss=0.04601, over 1423194.31 frames.], batch size: 23, lr: 2.12e-04 2022-05-28 15:58:45,519 INFO [train.py:842] (0/4) Epoch 26, batch 4400, loss[loss=0.1763, simple_loss=0.2763, pruned_loss=0.03813, over 7223.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2658, pruned_loss=0.04587, over 1423602.79 frames.], batch size: 21, lr: 2.12e-04 2022-05-28 15:59:24,703 INFO [train.py:842] (0/4) Epoch 26, batch 4450, loss[loss=0.156, simple_loss=0.2349, pruned_loss=0.0385, over 7292.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2664, pruned_loss=0.04617, over 1416884.39 frames.], batch size: 18, lr: 2.12e-04 2022-05-28 16:00:03,667 INFO [train.py:842] (0/4) Epoch 26, batch 4500, loss[loss=0.1885, simple_loss=0.2865, pruned_loss=0.04524, over 6361.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2684, pruned_loss=0.04662, over 1416711.04 frames.], batch size: 38, lr: 2.12e-04 2022-05-28 16:00:44,564 INFO [train.py:842] (0/4) Epoch 26, batch 4550, loss[loss=0.154, simple_loss=0.255, pruned_loss=0.0265, over 7108.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2677, pruned_loss=0.04587, over 1415858.36 frames.], batch size: 21, lr: 2.12e-04 2022-05-28 16:01:23,797 INFO [train.py:842] (0/4) Epoch 26, batch 4600, loss[loss=0.1741, simple_loss=0.256, pruned_loss=0.04607, over 7068.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2672, pruned_loss=0.04594, over 1419199.87 frames.], batch size: 18, lr: 2.12e-04 2022-05-28 16:02:03,300 INFO [train.py:842] (0/4) Epoch 26, batch 4650, loss[loss=0.2301, simple_loss=0.3097, pruned_loss=0.07524, over 6589.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2676, pruned_loss=0.04635, over 1413817.16 frames.], batch size: 38, lr: 2.12e-04 2022-05-28 16:02:42,709 INFO [train.py:842] (0/4) Epoch 26, batch 4700, loss[loss=0.2105, simple_loss=0.2909, pruned_loss=0.06504, over 7207.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2675, pruned_loss=0.04643, over 1416602.21 frames.], batch size: 22, lr: 2.12e-04 2022-05-28 16:03:22,493 INFO [train.py:842] (0/4) Epoch 26, batch 4750, loss[loss=0.2043, simple_loss=0.2882, pruned_loss=0.06018, over 7380.00 frames.], tot_loss[loss=0.18, simple_loss=0.2667, pruned_loss=0.04662, over 1416294.24 frames.], batch size: 23, lr: 2.12e-04 2022-05-28 16:04:03,182 INFO [train.py:842] (0/4) Epoch 26, batch 4800, loss[loss=0.1699, simple_loss=0.2608, pruned_loss=0.03949, over 7309.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2672, pruned_loss=0.04659, over 1419922.13 frames.], batch size: 21, lr: 2.12e-04 2022-05-28 16:04:43,140 INFO [train.py:842] (0/4) Epoch 26, batch 4850, loss[loss=0.1697, simple_loss=0.269, pruned_loss=0.03517, over 7342.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2666, pruned_loss=0.04612, over 1422467.68 frames.], batch size: 22, lr: 2.12e-04 2022-05-28 16:05:22,320 INFO [train.py:842] (0/4) Epoch 26, batch 4900, loss[loss=0.1857, simple_loss=0.274, pruned_loss=0.04875, over 7159.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2674, pruned_loss=0.04599, over 1419464.26 frames.], batch size: 19, lr: 2.12e-04 2022-05-28 16:06:02,523 INFO [train.py:842] (0/4) Epoch 26, batch 4950, loss[loss=0.1893, simple_loss=0.2677, pruned_loss=0.05542, over 7080.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2665, pruned_loss=0.04551, over 
1417944.42 frames.], batch size: 18, lr: 2.11e-04 2022-05-28 16:06:42,105 INFO [train.py:842] (0/4) Epoch 26, batch 5000, loss[loss=0.2101, simple_loss=0.299, pruned_loss=0.06061, over 7119.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2672, pruned_loss=0.04552, over 1419356.07 frames.], batch size: 21, lr: 2.11e-04 2022-05-28 16:07:21,646 INFO [train.py:842] (0/4) Epoch 26, batch 5050, loss[loss=0.1561, simple_loss=0.2342, pruned_loss=0.03903, over 6825.00 frames.], tot_loss[loss=0.1808, simple_loss=0.2688, pruned_loss=0.04644, over 1418660.27 frames.], batch size: 15, lr: 2.11e-04 2022-05-28 16:08:00,686 INFO [train.py:842] (0/4) Epoch 26, batch 5100, loss[loss=0.1831, simple_loss=0.2728, pruned_loss=0.04672, over 5013.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2671, pruned_loss=0.04564, over 1417273.65 frames.], batch size: 52, lr: 2.11e-04 2022-05-28 16:08:40,316 INFO [train.py:842] (0/4) Epoch 26, batch 5150, loss[loss=0.1842, simple_loss=0.2753, pruned_loss=0.04653, over 7338.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2673, pruned_loss=0.04566, over 1419006.83 frames.], batch size: 22, lr: 2.11e-04 2022-05-28 16:09:19,781 INFO [train.py:842] (0/4) Epoch 26, batch 5200, loss[loss=0.1769, simple_loss=0.2745, pruned_loss=0.03966, over 6484.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2665, pruned_loss=0.04547, over 1419765.58 frames.], batch size: 38, lr: 2.11e-04 2022-05-28 16:09:59,234 INFO [train.py:842] (0/4) Epoch 26, batch 5250, loss[loss=0.2083, simple_loss=0.3073, pruned_loss=0.05463, over 7227.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2664, pruned_loss=0.04523, over 1421173.26 frames.], batch size: 20, lr: 2.11e-04 2022-05-28 16:10:38,557 INFO [train.py:842] (0/4) Epoch 26, batch 5300, loss[loss=0.1345, simple_loss=0.2179, pruned_loss=0.02556, over 7282.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2664, pruned_loss=0.04513, over 1421522.56 frames.], batch size: 18, lr: 2.11e-04 2022-05-28 16:11:17,708 INFO [train.py:842] (0/4) Epoch 26, batch 5350, loss[loss=0.1367, simple_loss=0.2327, pruned_loss=0.02028, over 7163.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2653, pruned_loss=0.04426, over 1418563.87 frames.], batch size: 18, lr: 2.11e-04 2022-05-28 16:11:56,898 INFO [train.py:842] (0/4) Epoch 26, batch 5400, loss[loss=0.1633, simple_loss=0.2501, pruned_loss=0.03821, over 7161.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2655, pruned_loss=0.04463, over 1415824.67 frames.], batch size: 19, lr: 2.11e-04 2022-05-28 16:12:36,478 INFO [train.py:842] (0/4) Epoch 26, batch 5450, loss[loss=0.1531, simple_loss=0.2455, pruned_loss=0.03038, over 7152.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2664, pruned_loss=0.04464, over 1417190.68 frames.], batch size: 19, lr: 2.11e-04 2022-05-28 16:13:15,625 INFO [train.py:842] (0/4) Epoch 26, batch 5500, loss[loss=0.1505, simple_loss=0.2278, pruned_loss=0.03658, over 7000.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2648, pruned_loss=0.04391, over 1412252.52 frames.], batch size: 16, lr: 2.11e-04 2022-05-28 16:13:54,899 INFO [train.py:842] (0/4) Epoch 26, batch 5550, loss[loss=0.1643, simple_loss=0.2592, pruned_loss=0.03468, over 7217.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2649, pruned_loss=0.04372, over 1415139.35 frames.], batch size: 21, lr: 2.11e-04 2022-05-28 16:14:34,112 INFO [train.py:842] (0/4) Epoch 26, batch 5600, loss[loss=0.2008, simple_loss=0.2877, pruned_loss=0.05701, over 7199.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2658, pruned_loss=0.04382, over 1417138.22 
frames.], batch size: 22, lr: 2.11e-04 2022-05-28 16:15:15,728 INFO [train.py:842] (0/4) Epoch 26, batch 5650, loss[loss=0.1948, simple_loss=0.2841, pruned_loss=0.05278, over 7202.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2665, pruned_loss=0.04458, over 1418420.46 frames.], batch size: 23, lr: 2.11e-04 2022-05-28 16:15:55,754 INFO [train.py:842] (0/4) Epoch 26, batch 5700, loss[loss=0.1899, simple_loss=0.2868, pruned_loss=0.04654, over 7142.00 frames.], tot_loss[loss=0.1781, simple_loss=0.267, pruned_loss=0.04466, over 1419963.76 frames.], batch size: 26, lr: 2.11e-04 2022-05-28 16:16:35,343 INFO [train.py:842] (0/4) Epoch 26, batch 5750, loss[loss=0.1699, simple_loss=0.2589, pruned_loss=0.04043, over 7163.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2661, pruned_loss=0.04491, over 1419345.81 frames.], batch size: 19, lr: 2.11e-04 2022-05-28 16:17:14,710 INFO [train.py:842] (0/4) Epoch 26, batch 5800, loss[loss=0.2422, simple_loss=0.3153, pruned_loss=0.08453, over 7233.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2666, pruned_loss=0.04525, over 1420441.92 frames.], batch size: 20, lr: 2.11e-04 2022-05-28 16:17:54,323 INFO [train.py:842] (0/4) Epoch 26, batch 5850, loss[loss=0.1673, simple_loss=0.2487, pruned_loss=0.043, over 7270.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2666, pruned_loss=0.04499, over 1420036.49 frames.], batch size: 18, lr: 2.11e-04 2022-05-28 16:18:34,350 INFO [train.py:842] (0/4) Epoch 26, batch 5900, loss[loss=0.176, simple_loss=0.2483, pruned_loss=0.05185, over 7295.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2657, pruned_loss=0.0444, over 1422404.26 frames.], batch size: 17, lr: 2.11e-04 2022-05-28 16:19:16,487 INFO [train.py:842] (0/4) Epoch 26, batch 5950, loss[loss=0.1706, simple_loss=0.2633, pruned_loss=0.03897, over 7225.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2652, pruned_loss=0.04447, over 1423105.42 frames.], batch size: 21, lr: 2.11e-04 2022-05-28 16:19:56,734 INFO [train.py:842] (0/4) Epoch 26, batch 6000, loss[loss=0.181, simple_loss=0.2795, pruned_loss=0.04123, over 6718.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2676, pruned_loss=0.04588, over 1420822.18 frames.], batch size: 31, lr: 2.11e-04 2022-05-28 16:19:56,736 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 16:20:06,474 INFO [train.py:871] (0/4) Epoch 26, validation: loss=0.164, simple_loss=0.2621, pruned_loss=0.03292, over 868885.00 frames. 
2022-05-28 16:20:46,121 INFO [train.py:842] (0/4) Epoch 26, batch 6050, loss[loss=0.2016, simple_loss=0.2898, pruned_loss=0.05671, over 7140.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2682, pruned_loss=0.0461, over 1420768.29 frames.], batch size: 20, lr: 2.11e-04 2022-05-28 16:21:25,158 INFO [train.py:842] (0/4) Epoch 26, batch 6100, loss[loss=0.1605, simple_loss=0.2549, pruned_loss=0.03304, over 7155.00 frames.], tot_loss[loss=0.1787, simple_loss=0.267, pruned_loss=0.04517, over 1421531.03 frames.], batch size: 19, lr: 2.11e-04 2022-05-28 16:22:05,074 INFO [train.py:842] (0/4) Epoch 26, batch 6150, loss[loss=0.211, simple_loss=0.306, pruned_loss=0.05806, over 7288.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2672, pruned_loss=0.04554, over 1425248.32 frames.], batch size: 24, lr: 2.11e-04 2022-05-28 16:22:44,670 INFO [train.py:842] (0/4) Epoch 26, batch 6200, loss[loss=0.1763, simple_loss=0.2588, pruned_loss=0.04691, over 7173.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2663, pruned_loss=0.0455, over 1426100.78 frames.], batch size: 26, lr: 2.11e-04 2022-05-28 16:23:24,214 INFO [train.py:842] (0/4) Epoch 26, batch 6250, loss[loss=0.1743, simple_loss=0.2646, pruned_loss=0.04197, over 7387.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2664, pruned_loss=0.04521, over 1429522.16 frames.], batch size: 23, lr: 2.11e-04 2022-05-28 16:24:03,390 INFO [train.py:842] (0/4) Epoch 26, batch 6300, loss[loss=0.1918, simple_loss=0.2752, pruned_loss=0.05423, over 7271.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2658, pruned_loss=0.04535, over 1425337.09 frames.], batch size: 25, lr: 2.11e-04 2022-05-28 16:24:42,797 INFO [train.py:842] (0/4) Epoch 26, batch 6350, loss[loss=0.1906, simple_loss=0.2698, pruned_loss=0.05566, over 7136.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2662, pruned_loss=0.04568, over 1421703.23 frames.], batch size: 17, lr: 2.11e-04 2022-05-28 16:25:22,038 INFO [train.py:842] (0/4) Epoch 26, batch 6400, loss[loss=0.2, simple_loss=0.2783, pruned_loss=0.06091, over 7422.00 frames.], tot_loss[loss=0.18, simple_loss=0.2674, pruned_loss=0.04625, over 1425256.61 frames.], batch size: 20, lr: 2.11e-04 2022-05-28 16:26:01,606 INFO [train.py:842] (0/4) Epoch 26, batch 6450, loss[loss=0.1599, simple_loss=0.2517, pruned_loss=0.03408, over 7242.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2663, pruned_loss=0.04562, over 1419769.73 frames.], batch size: 19, lr: 2.11e-04 2022-05-28 16:26:41,171 INFO [train.py:842] (0/4) Epoch 26, batch 6500, loss[loss=0.1413, simple_loss=0.2338, pruned_loss=0.02441, over 7078.00 frames.], tot_loss[loss=0.1784, simple_loss=0.266, pruned_loss=0.04537, over 1423334.23 frames.], batch size: 18, lr: 2.11e-04 2022-05-28 16:27:20,705 INFO [train.py:842] (0/4) Epoch 26, batch 6550, loss[loss=0.1946, simple_loss=0.2867, pruned_loss=0.05122, over 7418.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2679, pruned_loss=0.04597, over 1419413.16 frames.], batch size: 20, lr: 2.11e-04 2022-05-28 16:28:00,084 INFO [train.py:842] (0/4) Epoch 26, batch 6600, loss[loss=0.2051, simple_loss=0.2854, pruned_loss=0.06238, over 7192.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2666, pruned_loss=0.04513, over 1422755.47 frames.], batch size: 22, lr: 2.11e-04 2022-05-28 16:28:39,881 INFO [train.py:842] (0/4) Epoch 26, batch 6650, loss[loss=0.1895, simple_loss=0.2802, pruned_loss=0.04937, over 7378.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2659, pruned_loss=0.04495, over 1425564.15 frames.], batch size: 23, lr: 2.11e-04 2022-05-28 16:29:19,229 INFO 
[train.py:842] (0/4) Epoch 26, batch 6700, loss[loss=0.1849, simple_loss=0.2632, pruned_loss=0.05327, over 7270.00 frames.], tot_loss[loss=0.179, simple_loss=0.2668, pruned_loss=0.0456, over 1427499.26 frames.], batch size: 18, lr: 2.11e-04 2022-05-28 16:29:59,195 INFO [train.py:842] (0/4) Epoch 26, batch 6750, loss[loss=0.1792, simple_loss=0.2645, pruned_loss=0.04697, over 7065.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2657, pruned_loss=0.04506, over 1429798.64 frames.], batch size: 18, lr: 2.11e-04 2022-05-28 16:30:38,555 INFO [train.py:842] (0/4) Epoch 26, batch 6800, loss[loss=0.1432, simple_loss=0.2321, pruned_loss=0.02717, over 7315.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2663, pruned_loss=0.04516, over 1431644.57 frames.], batch size: 20, lr: 2.11e-04 2022-05-28 16:31:18,196 INFO [train.py:842] (0/4) Epoch 26, batch 6850, loss[loss=0.1966, simple_loss=0.2819, pruned_loss=0.05569, over 7364.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2655, pruned_loss=0.04468, over 1432019.07 frames.], batch size: 19, lr: 2.11e-04 2022-05-28 16:31:57,854 INFO [train.py:842] (0/4) Epoch 26, batch 6900, loss[loss=0.1586, simple_loss=0.2484, pruned_loss=0.03433, over 7264.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2652, pruned_loss=0.04472, over 1431600.08 frames.], batch size: 19, lr: 2.11e-04 2022-05-28 16:32:37,577 INFO [train.py:842] (0/4) Epoch 26, batch 6950, loss[loss=0.1962, simple_loss=0.2638, pruned_loss=0.06433, over 7141.00 frames.], tot_loss[loss=0.177, simple_loss=0.2647, pruned_loss=0.04463, over 1428466.02 frames.], batch size: 17, lr: 2.11e-04 2022-05-28 16:33:16,721 INFO [train.py:842] (0/4) Epoch 26, batch 7000, loss[loss=0.1572, simple_loss=0.2262, pruned_loss=0.04413, over 7268.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2655, pruned_loss=0.04487, over 1429844.36 frames.], batch size: 17, lr: 2.11e-04 2022-05-28 16:33:56,300 INFO [train.py:842] (0/4) Epoch 26, batch 7050, loss[loss=0.1876, simple_loss=0.2684, pruned_loss=0.05339, over 5000.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2661, pruned_loss=0.04558, over 1425795.84 frames.], batch size: 52, lr: 2.11e-04 2022-05-28 16:34:35,516 INFO [train.py:842] (0/4) Epoch 26, batch 7100, loss[loss=0.1871, simple_loss=0.2769, pruned_loss=0.04864, over 7084.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2663, pruned_loss=0.04568, over 1419901.89 frames.], batch size: 28, lr: 2.11e-04 2022-05-28 16:35:15,055 INFO [train.py:842] (0/4) Epoch 26, batch 7150, loss[loss=0.2418, simple_loss=0.3165, pruned_loss=0.08351, over 7293.00 frames.], tot_loss[loss=0.1805, simple_loss=0.268, pruned_loss=0.04653, over 1422622.62 frames.], batch size: 25, lr: 2.11e-04 2022-05-28 16:35:54,045 INFO [train.py:842] (0/4) Epoch 26, batch 7200, loss[loss=0.1752, simple_loss=0.2706, pruned_loss=0.03991, over 7416.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2684, pruned_loss=0.0465, over 1423083.01 frames.], batch size: 21, lr: 2.10e-04 2022-05-28 16:36:33,702 INFO [train.py:842] (0/4) Epoch 26, batch 7250, loss[loss=0.1746, simple_loss=0.2694, pruned_loss=0.03991, over 7313.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2678, pruned_loss=0.04589, over 1425882.49 frames.], batch size: 24, lr: 2.10e-04 2022-05-28 16:37:12,933 INFO [train.py:842] (0/4) Epoch 26, batch 7300, loss[loss=0.1428, simple_loss=0.2245, pruned_loss=0.03052, over 7278.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2678, pruned_loss=0.04596, over 1424141.39 frames.], batch size: 17, lr: 2.10e-04 2022-05-28 16:37:52,186 INFO [train.py:842] (0/4) 
Epoch 26, batch 7350, loss[loss=0.1587, simple_loss=0.2507, pruned_loss=0.03336, over 7065.00 frames.], tot_loss[loss=0.18, simple_loss=0.268, pruned_loss=0.04595, over 1423900.41 frames.], batch size: 18, lr: 2.10e-04 2022-05-28 16:38:31,867 INFO [train.py:842] (0/4) Epoch 26, batch 7400, loss[loss=0.1499, simple_loss=0.2338, pruned_loss=0.03303, over 7270.00 frames.], tot_loss[loss=0.1803, simple_loss=0.2679, pruned_loss=0.0464, over 1426505.51 frames.], batch size: 18, lr: 2.10e-04 2022-05-28 16:39:11,456 INFO [train.py:842] (0/4) Epoch 26, batch 7450, loss[loss=0.1517, simple_loss=0.237, pruned_loss=0.03317, over 7349.00 frames.], tot_loss[loss=0.1804, simple_loss=0.268, pruned_loss=0.04637, over 1425402.35 frames.], batch size: 19, lr: 2.10e-04 2022-05-28 16:39:50,879 INFO [train.py:842] (0/4) Epoch 26, batch 7500, loss[loss=0.1985, simple_loss=0.2855, pruned_loss=0.05581, over 7160.00 frames.], tot_loss[loss=0.18, simple_loss=0.2678, pruned_loss=0.04611, over 1429766.06 frames.], batch size: 26, lr: 2.10e-04 2022-05-28 16:40:30,584 INFO [train.py:842] (0/4) Epoch 26, batch 7550, loss[loss=0.1531, simple_loss=0.2369, pruned_loss=0.03462, over 7148.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2674, pruned_loss=0.04626, over 1423966.81 frames.], batch size: 19, lr: 2.10e-04 2022-05-28 16:41:09,967 INFO [train.py:842] (0/4) Epoch 26, batch 7600, loss[loss=0.1904, simple_loss=0.2766, pruned_loss=0.05211, over 7144.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2663, pruned_loss=0.04548, over 1425420.06 frames.], batch size: 20, lr: 2.10e-04 2022-05-28 16:41:49,255 INFO [train.py:842] (0/4) Epoch 26, batch 7650, loss[loss=0.1842, simple_loss=0.2717, pruned_loss=0.04831, over 7198.00 frames.], tot_loss[loss=0.179, simple_loss=0.267, pruned_loss=0.04548, over 1426068.05 frames.], batch size: 23, lr: 2.10e-04 2022-05-28 16:42:28,530 INFO [train.py:842] (0/4) Epoch 26, batch 7700, loss[loss=0.1974, simple_loss=0.2827, pruned_loss=0.05602, over 7432.00 frames.], tot_loss[loss=0.1789, simple_loss=0.2666, pruned_loss=0.04555, over 1428003.53 frames.], batch size: 20, lr: 2.10e-04 2022-05-28 16:43:08,345 INFO [train.py:842] (0/4) Epoch 26, batch 7750, loss[loss=0.1501, simple_loss=0.2446, pruned_loss=0.0278, over 7159.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2664, pruned_loss=0.04552, over 1430233.20 frames.], batch size: 18, lr: 2.10e-04 2022-05-28 16:43:47,822 INFO [train.py:842] (0/4) Epoch 26, batch 7800, loss[loss=0.1717, simple_loss=0.2533, pruned_loss=0.04505, over 7000.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2673, pruned_loss=0.04598, over 1429090.21 frames.], batch size: 16, lr: 2.10e-04 2022-05-28 16:44:27,459 INFO [train.py:842] (0/4) Epoch 26, batch 7850, loss[loss=0.2091, simple_loss=0.3018, pruned_loss=0.05826, over 6485.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2665, pruned_loss=0.04542, over 1425107.76 frames.], batch size: 38, lr: 2.10e-04 2022-05-28 16:45:06,417 INFO [train.py:842] (0/4) Epoch 26, batch 7900, loss[loss=0.1608, simple_loss=0.2435, pruned_loss=0.03909, over 7225.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2669, pruned_loss=0.04592, over 1421765.84 frames.], batch size: 16, lr: 2.10e-04 2022-05-28 16:45:46,028 INFO [train.py:842] (0/4) Epoch 26, batch 7950, loss[loss=0.1397, simple_loss=0.2293, pruned_loss=0.02512, over 7158.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2664, pruned_loss=0.04553, over 1420386.17 frames.], batch size: 18, lr: 2.10e-04 2022-05-28 16:46:25,434 INFO [train.py:842] (0/4) Epoch 26, batch 8000, 
loss[loss=0.1935, simple_loss=0.271, pruned_loss=0.05796, over 7170.00 frames.], tot_loss[loss=0.1762, simple_loss=0.264, pruned_loss=0.04418, over 1425607.10 frames.], batch size: 18, lr: 2.10e-04 2022-05-28 16:47:04,823 INFO [train.py:842] (0/4) Epoch 26, batch 8050, loss[loss=0.1709, simple_loss=0.2553, pruned_loss=0.04329, over 7191.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2642, pruned_loss=0.04407, over 1428444.35 frames.], batch size: 16, lr: 2.10e-04 2022-05-28 16:47:43,846 INFO [train.py:842] (0/4) Epoch 26, batch 8100, loss[loss=0.1752, simple_loss=0.2684, pruned_loss=0.04104, over 7446.00 frames.], tot_loss[loss=0.1767, simple_loss=0.265, pruned_loss=0.04426, over 1430662.76 frames.], batch size: 20, lr: 2.10e-04 2022-05-28 16:48:23,577 INFO [train.py:842] (0/4) Epoch 26, batch 8150, loss[loss=0.1666, simple_loss=0.2638, pruned_loss=0.0347, over 7323.00 frames.], tot_loss[loss=0.177, simple_loss=0.2652, pruned_loss=0.04436, over 1432783.05 frames.], batch size: 21, lr: 2.10e-04 2022-05-28 16:49:02,876 INFO [train.py:842] (0/4) Epoch 26, batch 8200, loss[loss=0.158, simple_loss=0.2466, pruned_loss=0.03466, over 7248.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2658, pruned_loss=0.04468, over 1431396.95 frames.], batch size: 19, lr: 2.10e-04 2022-05-28 16:49:42,459 INFO [train.py:842] (0/4) Epoch 26, batch 8250, loss[loss=0.1738, simple_loss=0.2492, pruned_loss=0.04919, over 7387.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2644, pruned_loss=0.04444, over 1430872.04 frames.], batch size: 18, lr: 2.10e-04 2022-05-28 16:50:21,808 INFO [train.py:842] (0/4) Epoch 26, batch 8300, loss[loss=0.1842, simple_loss=0.277, pruned_loss=0.04567, over 7314.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2641, pruned_loss=0.04463, over 1432596.65 frames.], batch size: 25, lr: 2.10e-04 2022-05-28 16:51:01,200 INFO [train.py:842] (0/4) Epoch 26, batch 8350, loss[loss=0.1473, simple_loss=0.2486, pruned_loss=0.02305, over 7345.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2654, pruned_loss=0.04499, over 1423003.94 frames.], batch size: 19, lr: 2.10e-04 2022-05-28 16:51:40,151 INFO [train.py:842] (0/4) Epoch 26, batch 8400, loss[loss=0.1602, simple_loss=0.2418, pruned_loss=0.03926, over 7156.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2663, pruned_loss=0.0453, over 1420848.21 frames.], batch size: 18, lr: 2.10e-04 2022-05-28 16:52:19,773 INFO [train.py:842] (0/4) Epoch 26, batch 8450, loss[loss=0.2047, simple_loss=0.2859, pruned_loss=0.06175, over 5139.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2657, pruned_loss=0.04494, over 1420660.96 frames.], batch size: 52, lr: 2.10e-04 2022-05-28 16:52:58,967 INFO [train.py:842] (0/4) Epoch 26, batch 8500, loss[loss=0.188, simple_loss=0.2712, pruned_loss=0.0524, over 7255.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2664, pruned_loss=0.04559, over 1419218.43 frames.], batch size: 19, lr: 2.10e-04 2022-05-28 16:53:38,728 INFO [train.py:842] (0/4) Epoch 26, batch 8550, loss[loss=0.1848, simple_loss=0.2706, pruned_loss=0.04949, over 7153.00 frames.], tot_loss[loss=0.1792, simple_loss=0.267, pruned_loss=0.04568, over 1420963.43 frames.], batch size: 28, lr: 2.10e-04 2022-05-28 16:54:18,389 INFO [train.py:842] (0/4) Epoch 26, batch 8600, loss[loss=0.1841, simple_loss=0.2614, pruned_loss=0.05336, over 7144.00 frames.], tot_loss[loss=0.1786, simple_loss=0.266, pruned_loss=0.04564, over 1425092.98 frames.], batch size: 17, lr: 2.10e-04 2022-05-28 16:54:58,078 INFO [train.py:842] (0/4) Epoch 26, batch 8650, loss[loss=0.1361, 
simple_loss=0.22, pruned_loss=0.02609, over 7136.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2654, pruned_loss=0.04551, over 1419377.19 frames.], batch size: 17, lr: 2.10e-04 2022-05-28 16:55:37,258 INFO [train.py:842] (0/4) Epoch 26, batch 8700, loss[loss=0.1741, simple_loss=0.2639, pruned_loss=0.04215, over 7328.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2663, pruned_loss=0.04608, over 1416280.79 frames.], batch size: 20, lr: 2.10e-04 2022-05-28 16:56:16,944 INFO [train.py:842] (0/4) Epoch 26, batch 8750, loss[loss=0.253, simple_loss=0.3347, pruned_loss=0.08561, over 7163.00 frames.], tot_loss[loss=0.1798, simple_loss=0.267, pruned_loss=0.04627, over 1421475.20 frames.], batch size: 26, lr: 2.10e-04 2022-05-28 16:56:56,141 INFO [train.py:842] (0/4) Epoch 26, batch 8800, loss[loss=0.2317, simple_loss=0.3123, pruned_loss=0.07559, over 7282.00 frames.], tot_loss[loss=0.1812, simple_loss=0.2686, pruned_loss=0.04692, over 1422175.07 frames.], batch size: 24, lr: 2.10e-04 2022-05-28 16:57:35,760 INFO [train.py:842] (0/4) Epoch 26, batch 8850, loss[loss=0.1673, simple_loss=0.2545, pruned_loss=0.03999, over 7074.00 frames.], tot_loss[loss=0.181, simple_loss=0.2688, pruned_loss=0.04661, over 1419738.87 frames.], batch size: 18, lr: 2.10e-04 2022-05-28 16:58:14,943 INFO [train.py:842] (0/4) Epoch 26, batch 8900, loss[loss=0.1596, simple_loss=0.2556, pruned_loss=0.03182, over 7144.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2684, pruned_loss=0.04635, over 1420156.50 frames.], batch size: 20, lr: 2.10e-04 2022-05-28 16:59:04,917 INFO [train.py:842] (0/4) Epoch 26, batch 8950, loss[loss=0.2223, simple_loss=0.3077, pruned_loss=0.06843, over 7194.00 frames.], tot_loss[loss=0.1806, simple_loss=0.2689, pruned_loss=0.04619, over 1417523.06 frames.], batch size: 26, lr: 2.10e-04 2022-05-28 16:59:43,856 INFO [train.py:842] (0/4) Epoch 26, batch 9000, loss[loss=0.2054, simple_loss=0.2878, pruned_loss=0.06152, over 4888.00 frames.], tot_loss[loss=0.1818, simple_loss=0.2701, pruned_loss=0.04678, over 1413051.24 frames.], batch size: 52, lr: 2.10e-04 2022-05-28 16:59:43,859 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 16:59:53,411 INFO [train.py:871] (0/4) Epoch 26, validation: loss=0.1634, simple_loss=0.2612, pruned_loss=0.03281, over 868885.00 frames. 
2022-05-28 17:00:32,361 INFO [train.py:842] (0/4) Epoch 26, batch 9050, loss[loss=0.1901, simple_loss=0.2769, pruned_loss=0.05169, over 5080.00 frames.], tot_loss[loss=0.1828, simple_loss=0.271, pruned_loss=0.04727, over 1389789.13 frames.], batch size: 52, lr: 2.10e-04 2022-05-28 17:01:10,197 INFO [train.py:842] (0/4) Epoch 26, batch 9100, loss[loss=0.198, simple_loss=0.2847, pruned_loss=0.05563, over 5091.00 frames.], tot_loss[loss=0.1853, simple_loss=0.2733, pruned_loss=0.04861, over 1343234.86 frames.], batch size: 52, lr: 2.10e-04 2022-05-28 17:01:48,421 INFO [train.py:842] (0/4) Epoch 26, batch 9150, loss[loss=0.2398, simple_loss=0.3292, pruned_loss=0.07521, over 4958.00 frames.], tot_loss[loss=0.1896, simple_loss=0.2766, pruned_loss=0.05128, over 1276710.98 frames.], batch size: 52, lr: 2.10e-04 2022-05-28 17:02:20,167 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-26.pt 2022-05-28 17:02:39,075 INFO [train.py:842] (0/4) Epoch 27, batch 0, loss[loss=0.1427, simple_loss=0.2254, pruned_loss=0.03001, over 7161.00 frames.], tot_loss[loss=0.1427, simple_loss=0.2254, pruned_loss=0.03001, over 7161.00 frames.], batch size: 18, lr: 2.06e-04 2022-05-28 17:03:18,939 INFO [train.py:842] (0/4) Epoch 27, batch 50, loss[loss=0.1168, simple_loss=0.1966, pruned_loss=0.01855, over 7286.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2617, pruned_loss=0.04382, over 318959.49 frames.], batch size: 17, lr: 2.06e-04 2022-05-28 17:03:58,045 INFO [train.py:842] (0/4) Epoch 27, batch 100, loss[loss=0.1647, simple_loss=0.2309, pruned_loss=0.04926, over 7279.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2645, pruned_loss=0.04425, over 563103.41 frames.], batch size: 17, lr: 2.06e-04 2022-05-28 17:04:37,587 INFO [train.py:842] (0/4) Epoch 27, batch 150, loss[loss=0.1773, simple_loss=0.2741, pruned_loss=0.04022, over 6300.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2652, pruned_loss=0.04456, over 751377.15 frames.], batch size: 37, lr: 2.06e-04 2022-05-28 17:05:16,864 INFO [train.py:842] (0/4) Epoch 27, batch 200, loss[loss=0.19, simple_loss=0.2733, pruned_loss=0.05338, over 7172.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2659, pruned_loss=0.04552, over 894196.45 frames.], batch size: 26, lr: 2.06e-04 2022-05-28 17:05:56,193 INFO [train.py:842] (0/4) Epoch 27, batch 250, loss[loss=0.1656, simple_loss=0.2629, pruned_loss=0.03411, over 6525.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2668, pruned_loss=0.0458, over 1006558.24 frames.], batch size: 37, lr: 2.06e-04 2022-05-28 17:06:35,390 INFO [train.py:842] (0/4) Epoch 27, batch 300, loss[loss=0.1625, simple_loss=0.2587, pruned_loss=0.03316, over 6485.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2663, pruned_loss=0.04565, over 1100235.35 frames.], batch size: 38, lr: 2.06e-04 2022-05-28 17:07:15,056 INFO [train.py:842] (0/4) Epoch 27, batch 350, loss[loss=0.1836, simple_loss=0.2811, pruned_loss=0.04301, over 6756.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2654, pruned_loss=0.04485, over 1167723.37 frames.], batch size: 31, lr: 2.06e-04 2022-05-28 17:07:54,287 INFO [train.py:842] (0/4) Epoch 27, batch 400, loss[loss=0.1774, simple_loss=0.28, pruned_loss=0.03742, over 7145.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2668, pruned_loss=0.04528, over 1228568.06 frames.], batch size: 20, lr: 2.06e-04 2022-05-28 17:08:33,812 INFO [train.py:842] (0/4) Epoch 27, batch 450, loss[loss=0.2105, simple_loss=0.2845, pruned_loss=0.06828, over 7236.00 frames.], tot_loss[loss=0.178, 
simple_loss=0.2662, pruned_loss=0.04488, over 1276101.29 frames.], batch size: 20, lr: 2.06e-04 2022-05-28 17:09:13,071 INFO [train.py:842] (0/4) Epoch 27, batch 500, loss[loss=0.2276, simple_loss=0.3045, pruned_loss=0.0753, over 4918.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2656, pruned_loss=0.04488, over 1307528.10 frames.], batch size: 52, lr: 2.06e-04 2022-05-28 17:09:52,593 INFO [train.py:842] (0/4) Epoch 27, batch 550, loss[loss=0.189, simple_loss=0.2754, pruned_loss=0.05135, over 7207.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2668, pruned_loss=0.0453, over 1332102.72 frames.], batch size: 22, lr: 2.06e-04 2022-05-28 17:10:31,943 INFO [train.py:842] (0/4) Epoch 27, batch 600, loss[loss=0.1831, simple_loss=0.2736, pruned_loss=0.0463, over 7261.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2668, pruned_loss=0.0454, over 1354869.28 frames.], batch size: 19, lr: 2.05e-04 2022-05-28 17:11:11,817 INFO [train.py:842] (0/4) Epoch 27, batch 650, loss[loss=0.1554, simple_loss=0.2441, pruned_loss=0.03335, over 7271.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2652, pruned_loss=0.04485, over 1371240.82 frames.], batch size: 18, lr: 2.05e-04 2022-05-28 17:11:50,958 INFO [train.py:842] (0/4) Epoch 27, batch 700, loss[loss=0.171, simple_loss=0.2659, pruned_loss=0.03802, over 7112.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2651, pruned_loss=0.04475, over 1379943.97 frames.], batch size: 21, lr: 2.05e-04 2022-05-28 17:12:30,572 INFO [train.py:842] (0/4) Epoch 27, batch 750, loss[loss=0.1852, simple_loss=0.2766, pruned_loss=0.04689, over 7136.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2656, pruned_loss=0.04483, over 1388343.44 frames.], batch size: 20, lr: 2.05e-04 2022-05-28 17:13:09,875 INFO [train.py:842] (0/4) Epoch 27, batch 800, loss[loss=0.1846, simple_loss=0.2741, pruned_loss=0.04758, over 7236.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2662, pruned_loss=0.04535, over 1394781.32 frames.], batch size: 20, lr: 2.05e-04 2022-05-28 17:13:49,273 INFO [train.py:842] (0/4) Epoch 27, batch 850, loss[loss=0.2059, simple_loss=0.2903, pruned_loss=0.06075, over 5131.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2662, pruned_loss=0.04526, over 1397880.62 frames.], batch size: 52, lr: 2.05e-04 2022-05-28 17:14:28,640 INFO [train.py:842] (0/4) Epoch 27, batch 900, loss[loss=0.1685, simple_loss=0.2468, pruned_loss=0.0451, over 7409.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2659, pruned_loss=0.04479, over 1407524.15 frames.], batch size: 18, lr: 2.05e-04 2022-05-28 17:15:08,264 INFO [train.py:842] (0/4) Epoch 27, batch 950, loss[loss=0.1403, simple_loss=0.2216, pruned_loss=0.02951, over 7188.00 frames.], tot_loss[loss=0.1789, simple_loss=0.2669, pruned_loss=0.04541, over 1408716.14 frames.], batch size: 16, lr: 2.05e-04 2022-05-28 17:15:47,431 INFO [train.py:842] (0/4) Epoch 27, batch 1000, loss[loss=0.1534, simple_loss=0.2532, pruned_loss=0.02676, over 7273.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2665, pruned_loss=0.04508, over 1411663.11 frames.], batch size: 24, lr: 2.05e-04 2022-05-28 17:15:56,402 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-240000.pt 2022-05-28 17:16:29,678 INFO [train.py:842] (0/4) Epoch 27, batch 1050, loss[loss=0.1738, simple_loss=0.2643, pruned_loss=0.04166, over 7221.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2662, pruned_loss=0.04483, over 1417238.37 frames.], batch size: 23, lr: 2.05e-04 2022-05-28 17:17:09,235 INFO [train.py:842] (0/4) Epoch 27, batch 1100, 
loss[loss=0.2022, simple_loss=0.2905, pruned_loss=0.05694, over 7193.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2654, pruned_loss=0.04481, over 1421320.88 frames.], batch size: 22, lr: 2.05e-04 2022-05-28 17:17:48,444 INFO [train.py:842] (0/4) Epoch 27, batch 1150, loss[loss=0.1745, simple_loss=0.2544, pruned_loss=0.04731, over 7158.00 frames.], tot_loss[loss=0.177, simple_loss=0.265, pruned_loss=0.04448, over 1422879.24 frames.], batch size: 19, lr: 2.05e-04 2022-05-28 17:18:27,729 INFO [train.py:842] (0/4) Epoch 27, batch 1200, loss[loss=0.1623, simple_loss=0.2622, pruned_loss=0.03116, over 7274.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2655, pruned_loss=0.04448, over 1426669.63 frames.], batch size: 24, lr: 2.05e-04 2022-05-28 17:19:07,423 INFO [train.py:842] (0/4) Epoch 27, batch 1250, loss[loss=0.1788, simple_loss=0.277, pruned_loss=0.04028, over 6355.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2653, pruned_loss=0.04409, over 1427186.23 frames.], batch size: 37, lr: 2.05e-04 2022-05-28 17:19:46,480 INFO [train.py:842] (0/4) Epoch 27, batch 1300, loss[loss=0.1692, simple_loss=0.255, pruned_loss=0.04169, over 7281.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2652, pruned_loss=0.04404, over 1423788.98 frames.], batch size: 18, lr: 2.05e-04 2022-05-28 17:20:26,363 INFO [train.py:842] (0/4) Epoch 27, batch 1350, loss[loss=0.157, simple_loss=0.2452, pruned_loss=0.03442, over 7396.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2636, pruned_loss=0.04368, over 1427557.70 frames.], batch size: 18, lr: 2.05e-04 2022-05-28 17:21:05,313 INFO [train.py:842] (0/4) Epoch 27, batch 1400, loss[loss=0.1926, simple_loss=0.2862, pruned_loss=0.04951, over 7209.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2648, pruned_loss=0.04441, over 1420452.91 frames.], batch size: 23, lr: 2.05e-04 2022-05-28 17:21:44,755 INFO [train.py:842] (0/4) Epoch 27, batch 1450, loss[loss=0.1283, simple_loss=0.2206, pruned_loss=0.01805, over 7283.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2638, pruned_loss=0.04391, over 1422335.86 frames.], batch size: 18, lr: 2.05e-04 2022-05-28 17:22:23,915 INFO [train.py:842] (0/4) Epoch 27, batch 1500, loss[loss=0.2217, simple_loss=0.3037, pruned_loss=0.06984, over 4970.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2636, pruned_loss=0.04346, over 1417245.48 frames.], batch size: 52, lr: 2.05e-04 2022-05-28 17:23:03,655 INFO [train.py:842] (0/4) Epoch 27, batch 1550, loss[loss=0.1628, simple_loss=0.26, pruned_loss=0.03279, over 7127.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2628, pruned_loss=0.04311, over 1420677.85 frames.], batch size: 21, lr: 2.05e-04 2022-05-28 17:23:43,237 INFO [train.py:842] (0/4) Epoch 27, batch 1600, loss[loss=0.1741, simple_loss=0.2611, pruned_loss=0.0435, over 7253.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2633, pruned_loss=0.04377, over 1424407.07 frames.], batch size: 19, lr: 2.05e-04 2022-05-28 17:24:22,835 INFO [train.py:842] (0/4) Epoch 27, batch 1650, loss[loss=0.1703, simple_loss=0.2638, pruned_loss=0.03839, over 7167.00 frames.], tot_loss[loss=0.176, simple_loss=0.2641, pruned_loss=0.044, over 1428496.67 frames.], batch size: 26, lr: 2.05e-04 2022-05-28 17:25:01,989 INFO [train.py:842] (0/4) Epoch 27, batch 1700, loss[loss=0.1742, simple_loss=0.263, pruned_loss=0.04274, over 7344.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2633, pruned_loss=0.04391, over 1430476.73 frames.], batch size: 22, lr: 2.05e-04 2022-05-28 17:25:41,524 INFO [train.py:842] (0/4) Epoch 27, batch 1750, loss[loss=0.1893, 
simple_loss=0.2768, pruned_loss=0.05095, over 7163.00 frames.], tot_loss[loss=0.176, simple_loss=0.2639, pruned_loss=0.04403, over 1430969.10 frames.], batch size: 26, lr: 2.05e-04 2022-05-28 17:26:20,787 INFO [train.py:842] (0/4) Epoch 27, batch 1800, loss[loss=0.1439, simple_loss=0.2473, pruned_loss=0.02024, over 7122.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2633, pruned_loss=0.04384, over 1427985.32 frames.], batch size: 21, lr: 2.05e-04 2022-05-28 17:27:00,481 INFO [train.py:842] (0/4) Epoch 27, batch 1850, loss[loss=0.2128, simple_loss=0.3018, pruned_loss=0.06189, over 5146.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2635, pruned_loss=0.04402, over 1428061.53 frames.], batch size: 53, lr: 2.05e-04 2022-05-28 17:27:40,052 INFO [train.py:842] (0/4) Epoch 27, batch 1900, loss[loss=0.1869, simple_loss=0.2788, pruned_loss=0.04747, over 7351.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2644, pruned_loss=0.04491, over 1426631.00 frames.], batch size: 19, lr: 2.05e-04 2022-05-28 17:28:19,451 INFO [train.py:842] (0/4) Epoch 27, batch 1950, loss[loss=0.1841, simple_loss=0.278, pruned_loss=0.04509, over 6444.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2653, pruned_loss=0.04548, over 1424259.89 frames.], batch size: 37, lr: 2.05e-04 2022-05-28 17:28:58,925 INFO [train.py:842] (0/4) Epoch 27, batch 2000, loss[loss=0.1946, simple_loss=0.2834, pruned_loss=0.05286, over 6821.00 frames.], tot_loss[loss=0.178, simple_loss=0.2652, pruned_loss=0.04536, over 1422477.61 frames.], batch size: 31, lr: 2.05e-04 2022-05-28 17:29:38,272 INFO [train.py:842] (0/4) Epoch 27, batch 2050, loss[loss=0.1892, simple_loss=0.2908, pruned_loss=0.04387, over 7169.00 frames.], tot_loss[loss=0.1782, simple_loss=0.266, pruned_loss=0.04527, over 1426123.87 frames.], batch size: 26, lr: 2.05e-04 2022-05-28 17:30:17,480 INFO [train.py:842] (0/4) Epoch 27, batch 2100, loss[loss=0.2439, simple_loss=0.327, pruned_loss=0.08037, over 7215.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2659, pruned_loss=0.0455, over 1425106.65 frames.], batch size: 22, lr: 2.05e-04 2022-05-28 17:30:56,981 INFO [train.py:842] (0/4) Epoch 27, batch 2150, loss[loss=0.1895, simple_loss=0.2867, pruned_loss=0.0461, over 7300.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2671, pruned_loss=0.04569, over 1428456.78 frames.], batch size: 25, lr: 2.05e-04 2022-05-28 17:31:36,226 INFO [train.py:842] (0/4) Epoch 27, batch 2200, loss[loss=0.1903, simple_loss=0.2769, pruned_loss=0.0519, over 7237.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2676, pruned_loss=0.04602, over 1427053.81 frames.], batch size: 20, lr: 2.05e-04 2022-05-28 17:32:15,916 INFO [train.py:842] (0/4) Epoch 27, batch 2250, loss[loss=0.1554, simple_loss=0.2458, pruned_loss=0.03256, over 7012.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2676, pruned_loss=0.0457, over 1432322.18 frames.], batch size: 16, lr: 2.05e-04 2022-05-28 17:32:55,028 INFO [train.py:842] (0/4) Epoch 27, batch 2300, loss[loss=0.1407, simple_loss=0.2179, pruned_loss=0.03173, over 7133.00 frames.], tot_loss[loss=0.1789, simple_loss=0.2672, pruned_loss=0.04533, over 1433720.50 frames.], batch size: 17, lr: 2.05e-04 2022-05-28 17:33:34,531 INFO [train.py:842] (0/4) Epoch 27, batch 2350, loss[loss=0.1794, simple_loss=0.2661, pruned_loss=0.04633, over 7134.00 frames.], tot_loss[loss=0.1798, simple_loss=0.268, pruned_loss=0.0458, over 1432084.40 frames.], batch size: 20, lr: 2.05e-04 2022-05-28 17:34:13,872 INFO [train.py:842] (0/4) Epoch 27, batch 2400, loss[loss=0.1643, simple_loss=0.26, 
pruned_loss=0.03425, over 7299.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2683, pruned_loss=0.04593, over 1433358.11 frames.], batch size: 24, lr: 2.05e-04 2022-05-28 17:34:53,582 INFO [train.py:842] (0/4) Epoch 27, batch 2450, loss[loss=0.2069, simple_loss=0.3046, pruned_loss=0.05455, over 7231.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2689, pruned_loss=0.04593, over 1436072.40 frames.], batch size: 20, lr: 2.05e-04 2022-05-28 17:35:32,737 INFO [train.py:842] (0/4) Epoch 27, batch 2500, loss[loss=0.1527, simple_loss=0.2515, pruned_loss=0.02689, over 7229.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2678, pruned_loss=0.04563, over 1437520.05 frames.], batch size: 21, lr: 2.05e-04 2022-05-28 17:36:12,411 INFO [train.py:842] (0/4) Epoch 27, batch 2550, loss[loss=0.2067, simple_loss=0.2964, pruned_loss=0.05855, over 6585.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2672, pruned_loss=0.04508, over 1434173.03 frames.], batch size: 31, lr: 2.05e-04 2022-05-28 17:36:51,718 INFO [train.py:842] (0/4) Epoch 27, batch 2600, loss[loss=0.1473, simple_loss=0.2241, pruned_loss=0.03522, over 6801.00 frames.], tot_loss[loss=0.1777, simple_loss=0.266, pruned_loss=0.04475, over 1433896.30 frames.], batch size: 15, lr: 2.05e-04 2022-05-28 17:37:31,347 INFO [train.py:842] (0/4) Epoch 27, batch 2650, loss[loss=0.1878, simple_loss=0.2833, pruned_loss=0.04619, over 7282.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2664, pruned_loss=0.04465, over 1431145.40 frames.], batch size: 24, lr: 2.05e-04 2022-05-28 17:38:10,529 INFO [train.py:842] (0/4) Epoch 27, batch 2700, loss[loss=0.1764, simple_loss=0.277, pruned_loss=0.03792, over 7335.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2669, pruned_loss=0.04518, over 1428864.65 frames.], batch size: 22, lr: 2.05e-04 2022-05-28 17:38:50,266 INFO [train.py:842] (0/4) Epoch 27, batch 2750, loss[loss=0.1531, simple_loss=0.2447, pruned_loss=0.03075, over 7163.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2657, pruned_loss=0.04439, over 1428193.67 frames.], batch size: 19, lr: 2.05e-04 2022-05-28 17:39:29,433 INFO [train.py:842] (0/4) Epoch 27, batch 2800, loss[loss=0.1884, simple_loss=0.281, pruned_loss=0.04786, over 7277.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2666, pruned_loss=0.0448, over 1427226.11 frames.], batch size: 25, lr: 2.05e-04 2022-05-28 17:40:08,875 INFO [train.py:842] (0/4) Epoch 27, batch 2850, loss[loss=0.1734, simple_loss=0.2639, pruned_loss=0.04149, over 7251.00 frames.], tot_loss[loss=0.1786, simple_loss=0.267, pruned_loss=0.04511, over 1426517.23 frames.], batch size: 19, lr: 2.05e-04 2022-05-28 17:40:48,068 INFO [train.py:842] (0/4) Epoch 27, batch 2900, loss[loss=0.1655, simple_loss=0.2565, pruned_loss=0.03722, over 7162.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2661, pruned_loss=0.04483, over 1425202.26 frames.], batch size: 19, lr: 2.05e-04 2022-05-28 17:41:27,480 INFO [train.py:842] (0/4) Epoch 27, batch 2950, loss[loss=0.1639, simple_loss=0.2584, pruned_loss=0.03465, over 7105.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2668, pruned_loss=0.04517, over 1419475.74 frames.], batch size: 21, lr: 2.04e-04 2022-05-28 17:42:06,536 INFO [train.py:842] (0/4) Epoch 27, batch 3000, loss[loss=0.1912, simple_loss=0.3025, pruned_loss=0.03995, over 7404.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2669, pruned_loss=0.04505, over 1418803.87 frames.], batch size: 21, lr: 2.04e-04 2022-05-28 17:42:06,538 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 17:42:16,300 INFO [train.py:871] (0/4) Epoch 
27, validation: loss=0.1636, simple_loss=0.2622, pruned_loss=0.03251, over 868885.00 frames. 2022-05-28 17:42:56,071 INFO [train.py:842] (0/4) Epoch 27, batch 3050, loss[loss=0.1778, simple_loss=0.274, pruned_loss=0.0408, over 7124.00 frames.], tot_loss[loss=0.178, simple_loss=0.266, pruned_loss=0.04498, over 1409593.76 frames.], batch size: 21, lr: 2.04e-04 2022-05-28 17:43:35,205 INFO [train.py:842] (0/4) Epoch 27, batch 3100, loss[loss=0.1441, simple_loss=0.2464, pruned_loss=0.0209, over 7315.00 frames.], tot_loss[loss=0.178, simple_loss=0.2663, pruned_loss=0.04491, over 1415414.17 frames.], batch size: 21, lr: 2.04e-04 2022-05-28 17:44:14,967 INFO [train.py:842] (0/4) Epoch 27, batch 3150, loss[loss=0.2185, simple_loss=0.305, pruned_loss=0.06599, over 7203.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2662, pruned_loss=0.04506, over 1416061.10 frames.], batch size: 22, lr: 2.04e-04 2022-05-28 17:44:54,123 INFO [train.py:842] (0/4) Epoch 27, batch 3200, loss[loss=0.2244, simple_loss=0.3015, pruned_loss=0.07362, over 7205.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2672, pruned_loss=0.04554, over 1418100.25 frames.], batch size: 23, lr: 2.04e-04 2022-05-28 17:45:33,914 INFO [train.py:842] (0/4) Epoch 27, batch 3250, loss[loss=0.2253, simple_loss=0.3052, pruned_loss=0.07265, over 6326.00 frames.], tot_loss[loss=0.179, simple_loss=0.2668, pruned_loss=0.04563, over 1419032.71 frames.], batch size: 37, lr: 2.04e-04 2022-05-28 17:46:13,052 INFO [train.py:842] (0/4) Epoch 27, batch 3300, loss[loss=0.1582, simple_loss=0.2608, pruned_loss=0.02786, over 6649.00 frames.], tot_loss[loss=0.18, simple_loss=0.2678, pruned_loss=0.04611, over 1419159.00 frames.], batch size: 31, lr: 2.04e-04 2022-05-28 17:46:52,294 INFO [train.py:842] (0/4) Epoch 27, batch 3350, loss[loss=0.1952, simple_loss=0.2862, pruned_loss=0.05213, over 7337.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2679, pruned_loss=0.04575, over 1419396.66 frames.], batch size: 22, lr: 2.04e-04 2022-05-28 17:47:31,359 INFO [train.py:842] (0/4) Epoch 27, batch 3400, loss[loss=0.2021, simple_loss=0.3019, pruned_loss=0.05121, over 7142.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2676, pruned_loss=0.04581, over 1416811.25 frames.], batch size: 20, lr: 2.04e-04 2022-05-28 17:48:10,899 INFO [train.py:842] (0/4) Epoch 27, batch 3450, loss[loss=0.1734, simple_loss=0.2821, pruned_loss=0.03231, over 7341.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2668, pruned_loss=0.04509, over 1419759.57 frames.], batch size: 22, lr: 2.04e-04 2022-05-28 17:49:02,122 INFO [train.py:842] (0/4) Epoch 27, batch 3500, loss[loss=0.1713, simple_loss=0.2458, pruned_loss=0.04841, over 6790.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2655, pruned_loss=0.04448, over 1421990.16 frames.], batch size: 15, lr: 2.04e-04 2022-05-28 17:49:41,575 INFO [train.py:842] (0/4) Epoch 27, batch 3550, loss[loss=0.2851, simple_loss=0.3473, pruned_loss=0.1115, over 5197.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2653, pruned_loss=0.04476, over 1416307.78 frames.], batch size: 52, lr: 2.04e-04 2022-05-28 17:50:20,529 INFO [train.py:842] (0/4) Epoch 27, batch 3600, loss[loss=0.1991, simple_loss=0.2828, pruned_loss=0.05772, over 7159.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2658, pruned_loss=0.04473, over 1413403.64 frames.], batch size: 19, lr: 2.04e-04 2022-05-28 17:51:00,298 INFO [train.py:842] (0/4) Epoch 27, batch 3650, loss[loss=0.1562, simple_loss=0.2477, pruned_loss=0.03236, over 7072.00 frames.], tot_loss[loss=0.177, simple_loss=0.2648, 
pruned_loss=0.04458, over 1412545.21 frames.], batch size: 18, lr: 2.04e-04 2022-05-28 17:51:50,565 INFO [train.py:842] (0/4) Epoch 27, batch 3700, loss[loss=0.1693, simple_loss=0.2626, pruned_loss=0.03797, over 7210.00 frames.], tot_loss[loss=0.176, simple_loss=0.2636, pruned_loss=0.04415, over 1411311.78 frames.], batch size: 22, lr: 2.04e-04 2022-05-28 17:52:41,168 INFO [train.py:842] (0/4) Epoch 27, batch 3750, loss[loss=0.1554, simple_loss=0.2299, pruned_loss=0.0405, over 7158.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2633, pruned_loss=0.04414, over 1414873.22 frames.], batch size: 19, lr: 2.04e-04 2022-05-28 17:53:20,686 INFO [train.py:842] (0/4) Epoch 27, batch 3800, loss[loss=0.1523, simple_loss=0.2374, pruned_loss=0.03366, over 7427.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2625, pruned_loss=0.04381, over 1419026.08 frames.], batch size: 18, lr: 2.04e-04 2022-05-28 17:54:00,217 INFO [train.py:842] (0/4) Epoch 27, batch 3850, loss[loss=0.2014, simple_loss=0.2815, pruned_loss=0.06065, over 7218.00 frames.], tot_loss[loss=0.176, simple_loss=0.2635, pruned_loss=0.04423, over 1413734.33 frames.], batch size: 23, lr: 2.04e-04 2022-05-28 17:54:39,430 INFO [train.py:842] (0/4) Epoch 27, batch 3900, loss[loss=0.1623, simple_loss=0.2594, pruned_loss=0.03262, over 7218.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2642, pruned_loss=0.0444, over 1412504.15 frames.], batch size: 22, lr: 2.04e-04 2022-05-28 17:55:18,914 INFO [train.py:842] (0/4) Epoch 27, batch 3950, loss[loss=0.1608, simple_loss=0.257, pruned_loss=0.03228, over 7323.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2639, pruned_loss=0.04472, over 1417241.83 frames.], batch size: 20, lr: 2.04e-04 2022-05-28 17:55:58,101 INFO [train.py:842] (0/4) Epoch 27, batch 4000, loss[loss=0.1575, simple_loss=0.2409, pruned_loss=0.0371, over 7418.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2648, pruned_loss=0.04491, over 1424314.53 frames.], batch size: 18, lr: 2.04e-04 2022-05-28 17:56:37,742 INFO [train.py:842] (0/4) Epoch 27, batch 4050, loss[loss=0.167, simple_loss=0.2601, pruned_loss=0.03699, over 7199.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2642, pruned_loss=0.04419, over 1426595.53 frames.], batch size: 22, lr: 2.04e-04 2022-05-28 17:57:17,337 INFO [train.py:842] (0/4) Epoch 27, batch 4100, loss[loss=0.1382, simple_loss=0.2289, pruned_loss=0.0237, over 7252.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2639, pruned_loss=0.04421, over 1428087.37 frames.], batch size: 19, lr: 2.04e-04 2022-05-28 17:57:56,810 INFO [train.py:842] (0/4) Epoch 27, batch 4150, loss[loss=0.1554, simple_loss=0.2558, pruned_loss=0.02751, over 7334.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2635, pruned_loss=0.04357, over 1422064.28 frames.], batch size: 22, lr: 2.04e-04 2022-05-28 17:58:36,168 INFO [train.py:842] (0/4) Epoch 27, batch 4200, loss[loss=0.1494, simple_loss=0.2344, pruned_loss=0.03221, over 7061.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2637, pruned_loss=0.04384, over 1422506.67 frames.], batch size: 18, lr: 2.04e-04 2022-05-28 17:59:15,948 INFO [train.py:842] (0/4) Epoch 27, batch 4250, loss[loss=0.1833, simple_loss=0.2733, pruned_loss=0.0466, over 7329.00 frames.], tot_loss[loss=0.1764, simple_loss=0.264, pruned_loss=0.04444, over 1424318.16 frames.], batch size: 21, lr: 2.04e-04 2022-05-28 17:59:55,115 INFO [train.py:842] (0/4) Epoch 27, batch 4300, loss[loss=0.2303, simple_loss=0.2943, pruned_loss=0.08318, over 7133.00 frames.], tot_loss[loss=0.1776, simple_loss=0.265, pruned_loss=0.04507, over 
1421693.36 frames.], batch size: 17, lr: 2.04e-04 2022-05-28 18:00:34,752 INFO [train.py:842] (0/4) Epoch 27, batch 4350, loss[loss=0.208, simple_loss=0.2991, pruned_loss=0.05847, over 7326.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2654, pruned_loss=0.04553, over 1420440.35 frames.], batch size: 24, lr: 2.04e-04 2022-05-28 18:01:13,866 INFO [train.py:842] (0/4) Epoch 27, batch 4400, loss[loss=0.1849, simple_loss=0.2713, pruned_loss=0.04923, over 7191.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2664, pruned_loss=0.04586, over 1419748.59 frames.], batch size: 23, lr: 2.04e-04 2022-05-28 18:01:53,426 INFO [train.py:842] (0/4) Epoch 27, batch 4450, loss[loss=0.1925, simple_loss=0.2803, pruned_loss=0.05234, over 7065.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2661, pruned_loss=0.04609, over 1419240.15 frames.], batch size: 28, lr: 2.04e-04 2022-05-28 18:02:32,835 INFO [train.py:842] (0/4) Epoch 27, batch 4500, loss[loss=0.2483, simple_loss=0.3359, pruned_loss=0.08037, over 7408.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2665, pruned_loss=0.04588, over 1425621.63 frames.], batch size: 21, lr: 2.04e-04 2022-05-28 18:03:12,533 INFO [train.py:842] (0/4) Epoch 27, batch 4550, loss[loss=0.2535, simple_loss=0.3393, pruned_loss=0.0839, over 7410.00 frames.], tot_loss[loss=0.1785, simple_loss=0.266, pruned_loss=0.04547, over 1418802.87 frames.], batch size: 21, lr: 2.04e-04 2022-05-28 18:03:51,739 INFO [train.py:842] (0/4) Epoch 27, batch 4600, loss[loss=0.152, simple_loss=0.2325, pruned_loss=0.0357, over 7281.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2654, pruned_loss=0.04503, over 1418739.14 frames.], batch size: 18, lr: 2.04e-04 2022-05-28 18:04:31,170 INFO [train.py:842] (0/4) Epoch 27, batch 4650, loss[loss=0.1586, simple_loss=0.2448, pruned_loss=0.0362, over 6997.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2645, pruned_loss=0.04422, over 1421589.43 frames.], batch size: 16, lr: 2.04e-04 2022-05-28 18:05:10,260 INFO [train.py:842] (0/4) Epoch 27, batch 4700, loss[loss=0.1775, simple_loss=0.2613, pruned_loss=0.04679, over 7259.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2654, pruned_loss=0.04448, over 1421738.14 frames.], batch size: 19, lr: 2.04e-04 2022-05-28 18:05:49,568 INFO [train.py:842] (0/4) Epoch 27, batch 4750, loss[loss=0.1853, simple_loss=0.2725, pruned_loss=0.04899, over 7213.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2666, pruned_loss=0.04526, over 1424939.12 frames.], batch size: 21, lr: 2.04e-04 2022-05-28 18:06:28,912 INFO [train.py:842] (0/4) Epoch 27, batch 4800, loss[loss=0.1422, simple_loss=0.2301, pruned_loss=0.02718, over 7278.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2659, pruned_loss=0.04476, over 1424089.33 frames.], batch size: 18, lr: 2.04e-04 2022-05-28 18:07:08,559 INFO [train.py:842] (0/4) Epoch 27, batch 4850, loss[loss=0.1945, simple_loss=0.291, pruned_loss=0.04899, over 7313.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2653, pruned_loss=0.04477, over 1425687.21 frames.], batch size: 21, lr: 2.04e-04 2022-05-28 18:07:47,910 INFO [train.py:842] (0/4) Epoch 27, batch 4900, loss[loss=0.2401, simple_loss=0.3224, pruned_loss=0.07884, over 7190.00 frames.], tot_loss[loss=0.1767, simple_loss=0.265, pruned_loss=0.04416, over 1426831.23 frames.], batch size: 22, lr: 2.04e-04 2022-05-28 18:08:27,466 INFO [train.py:842] (0/4) Epoch 27, batch 4950, loss[loss=0.179, simple_loss=0.2627, pruned_loss=0.04768, over 7281.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2645, pruned_loss=0.04408, over 1421481.88 frames.], batch 
size: 17, lr: 2.04e-04 2022-05-28 18:09:06,655 INFO [train.py:842] (0/4) Epoch 27, batch 5000, loss[loss=0.1683, simple_loss=0.2647, pruned_loss=0.03601, over 7274.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2653, pruned_loss=0.04418, over 1420759.66 frames.], batch size: 25, lr: 2.04e-04 2022-05-28 18:09:46,194 INFO [train.py:842] (0/4) Epoch 27, batch 5050, loss[loss=0.1783, simple_loss=0.27, pruned_loss=0.04328, over 7232.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2663, pruned_loss=0.04478, over 1424039.91 frames.], batch size: 20, lr: 2.04e-04 2022-05-28 18:10:25,424 INFO [train.py:842] (0/4) Epoch 27, batch 5100, loss[loss=0.1591, simple_loss=0.2458, pruned_loss=0.03623, over 7289.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2663, pruned_loss=0.04506, over 1423456.11 frames.], batch size: 17, lr: 2.04e-04 2022-05-28 18:11:05,129 INFO [train.py:842] (0/4) Epoch 27, batch 5150, loss[loss=0.2169, simple_loss=0.3112, pruned_loss=0.06135, over 7202.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2665, pruned_loss=0.04517, over 1425558.26 frames.], batch size: 23, lr: 2.04e-04 2022-05-28 18:11:44,264 INFO [train.py:842] (0/4) Epoch 27, batch 5200, loss[loss=0.1696, simple_loss=0.2755, pruned_loss=0.03183, over 6382.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2675, pruned_loss=0.04564, over 1425939.95 frames.], batch size: 37, lr: 2.04e-04 2022-05-28 18:12:24,104 INFO [train.py:842] (0/4) Epoch 27, batch 5250, loss[loss=0.1532, simple_loss=0.2426, pruned_loss=0.03192, over 7154.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2664, pruned_loss=0.0451, over 1428417.30 frames.], batch size: 19, lr: 2.04e-04 2022-05-28 18:13:03,669 INFO [train.py:842] (0/4) Epoch 27, batch 5300, loss[loss=0.1733, simple_loss=0.2597, pruned_loss=0.04343, over 7058.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2658, pruned_loss=0.04477, over 1432590.12 frames.], batch size: 18, lr: 2.04e-04 2022-05-28 18:13:43,376 INFO [train.py:842] (0/4) Epoch 27, batch 5350, loss[loss=0.17, simple_loss=0.2648, pruned_loss=0.03754, over 7101.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2661, pruned_loss=0.04465, over 1429804.02 frames.], batch size: 21, lr: 2.03e-04 2022-05-28 18:14:22,790 INFO [train.py:842] (0/4) Epoch 27, batch 5400, loss[loss=0.1797, simple_loss=0.2638, pruned_loss=0.04781, over 7437.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2657, pruned_loss=0.04445, over 1430973.12 frames.], batch size: 20, lr: 2.03e-04 2022-05-28 18:15:02,174 INFO [train.py:842] (0/4) Epoch 27, batch 5450, loss[loss=0.1674, simple_loss=0.2659, pruned_loss=0.03449, over 7074.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2648, pruned_loss=0.04415, over 1428368.37 frames.], batch size: 18, lr: 2.03e-04 2022-05-28 18:15:41,469 INFO [train.py:842] (0/4) Epoch 27, batch 5500, loss[loss=0.1743, simple_loss=0.2628, pruned_loss=0.04285, over 7155.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2649, pruned_loss=0.044, over 1427335.46 frames.], batch size: 26, lr: 2.03e-04 2022-05-28 18:16:21,184 INFO [train.py:842] (0/4) Epoch 27, batch 5550, loss[loss=0.1819, simple_loss=0.275, pruned_loss=0.0444, over 7419.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2648, pruned_loss=0.04432, over 1426334.04 frames.], batch size: 21, lr: 2.03e-04 2022-05-28 18:17:00,330 INFO [train.py:842] (0/4) Epoch 27, batch 5600, loss[loss=0.1656, simple_loss=0.2468, pruned_loss=0.04221, over 7131.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2641, pruned_loss=0.04378, over 1426520.51 frames.], batch size: 17, lr: 2.03e-04 
2022-05-28 18:17:39,875 INFO [train.py:842] (0/4) Epoch 27, batch 5650, loss[loss=0.1758, simple_loss=0.267, pruned_loss=0.04228, over 7223.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2641, pruned_loss=0.04359, over 1426893.64 frames.], batch size: 21, lr: 2.03e-04 2022-05-28 18:18:19,102 INFO [train.py:842] (0/4) Epoch 27, batch 5700, loss[loss=0.1748, simple_loss=0.2635, pruned_loss=0.04304, over 7411.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2647, pruned_loss=0.04422, over 1421075.53 frames.], batch size: 21, lr: 2.03e-04 2022-05-28 18:18:58,794 INFO [train.py:842] (0/4) Epoch 27, batch 5750, loss[loss=0.1717, simple_loss=0.2571, pruned_loss=0.04316, over 7437.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2657, pruned_loss=0.04483, over 1423269.60 frames.], batch size: 20, lr: 2.03e-04 2022-05-28 18:19:38,112 INFO [train.py:842] (0/4) Epoch 27, batch 5800, loss[loss=0.1562, simple_loss=0.2268, pruned_loss=0.0428, over 6996.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2661, pruned_loss=0.04468, over 1419765.74 frames.], batch size: 16, lr: 2.03e-04 2022-05-28 18:20:17,613 INFO [train.py:842] (0/4) Epoch 27, batch 5850, loss[loss=0.1925, simple_loss=0.2715, pruned_loss=0.05677, over 7142.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2667, pruned_loss=0.0451, over 1419217.82 frames.], batch size: 26, lr: 2.03e-04 2022-05-28 18:20:57,001 INFO [train.py:842] (0/4) Epoch 27, batch 5900, loss[loss=0.1719, simple_loss=0.2674, pruned_loss=0.03823, over 7435.00 frames.], tot_loss[loss=0.1777, simple_loss=0.266, pruned_loss=0.04465, over 1423514.39 frames.], batch size: 20, lr: 2.03e-04 2022-05-28 18:21:36,797 INFO [train.py:842] (0/4) Epoch 27, batch 5950, loss[loss=0.1801, simple_loss=0.2698, pruned_loss=0.04518, over 7247.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2653, pruned_loss=0.04493, over 1422484.32 frames.], batch size: 20, lr: 2.03e-04 2022-05-28 18:22:15,967 INFO [train.py:842] (0/4) Epoch 27, batch 6000, loss[loss=0.1829, simple_loss=0.2823, pruned_loss=0.04178, over 7337.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2664, pruned_loss=0.04528, over 1420824.95 frames.], batch size: 22, lr: 2.03e-04 2022-05-28 18:22:15,968 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 18:22:25,576 INFO [train.py:871] (0/4) Epoch 27, validation: loss=0.1638, simple_loss=0.2621, pruned_loss=0.03281, over 868885.00 frames. 
2022-05-28 18:23:04,722 INFO [train.py:842] (0/4) Epoch 27, batch 6050, loss[loss=0.1436, simple_loss=0.2393, pruned_loss=0.02395, over 7358.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2679, pruned_loss=0.04561, over 1414515.24 frames.], batch size: 19, lr: 2.03e-04 2022-05-28 18:23:44,245 INFO [train.py:842] (0/4) Epoch 27, batch 6100, loss[loss=0.1798, simple_loss=0.2795, pruned_loss=0.04005, over 7109.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2662, pruned_loss=0.04516, over 1415279.27 frames.], batch size: 21, lr: 2.03e-04 2022-05-28 18:24:23,918 INFO [train.py:842] (0/4) Epoch 27, batch 6150, loss[loss=0.1797, simple_loss=0.2594, pruned_loss=0.05003, over 7137.00 frames.], tot_loss[loss=0.179, simple_loss=0.2667, pruned_loss=0.04561, over 1421534.24 frames.], batch size: 17, lr: 2.03e-04 2022-05-28 18:25:03,236 INFO [train.py:842] (0/4) Epoch 27, batch 6200, loss[loss=0.1447, simple_loss=0.2263, pruned_loss=0.0316, over 7271.00 frames.], tot_loss[loss=0.177, simple_loss=0.2652, pruned_loss=0.04436, over 1424750.36 frames.], batch size: 18, lr: 2.03e-04 2022-05-28 18:25:42,827 INFO [train.py:842] (0/4) Epoch 27, batch 6250, loss[loss=0.138, simple_loss=0.2413, pruned_loss=0.01734, over 7323.00 frames.], tot_loss[loss=0.1773, simple_loss=0.266, pruned_loss=0.04432, over 1423257.85 frames.], batch size: 20, lr: 2.03e-04 2022-05-28 18:26:22,138 INFO [train.py:842] (0/4) Epoch 27, batch 6300, loss[loss=0.2039, simple_loss=0.2941, pruned_loss=0.05687, over 7333.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2664, pruned_loss=0.04469, over 1419592.44 frames.], batch size: 22, lr: 2.03e-04 2022-05-28 18:27:01,743 INFO [train.py:842] (0/4) Epoch 27, batch 6350, loss[loss=0.1643, simple_loss=0.2557, pruned_loss=0.03648, over 7319.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2661, pruned_loss=0.04473, over 1419802.49 frames.], batch size: 21, lr: 2.03e-04 2022-05-28 18:27:40,913 INFO [train.py:842] (0/4) Epoch 27, batch 6400, loss[loss=0.2137, simple_loss=0.2965, pruned_loss=0.06546, over 7336.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2662, pruned_loss=0.04472, over 1421135.87 frames.], batch size: 22, lr: 2.03e-04 2022-05-28 18:28:20,578 INFO [train.py:842] (0/4) Epoch 27, batch 6450, loss[loss=0.1935, simple_loss=0.2812, pruned_loss=0.05295, over 7078.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2676, pruned_loss=0.04595, over 1416778.66 frames.], batch size: 28, lr: 2.03e-04 2022-05-28 18:28:59,846 INFO [train.py:842] (0/4) Epoch 27, batch 6500, loss[loss=0.1963, simple_loss=0.2851, pruned_loss=0.05378, over 7199.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2677, pruned_loss=0.04591, over 1421645.69 frames.], batch size: 22, lr: 2.03e-04 2022-05-28 18:29:39,281 INFO [train.py:842] (0/4) Epoch 27, batch 6550, loss[loss=0.1722, simple_loss=0.2689, pruned_loss=0.03779, over 7217.00 frames.], tot_loss[loss=0.1805, simple_loss=0.268, pruned_loss=0.04655, over 1423229.93 frames.], batch size: 21, lr: 2.03e-04 2022-05-28 18:30:18,666 INFO [train.py:842] (0/4) Epoch 27, batch 6600, loss[loss=0.1587, simple_loss=0.2435, pruned_loss=0.0369, over 7431.00 frames.], tot_loss[loss=0.1816, simple_loss=0.2686, pruned_loss=0.04735, over 1423643.92 frames.], batch size: 20, lr: 2.03e-04 2022-05-28 18:30:58,057 INFO [train.py:842] (0/4) Epoch 27, batch 6650, loss[loss=0.1602, simple_loss=0.2528, pruned_loss=0.03379, over 7168.00 frames.], tot_loss[loss=0.1822, simple_loss=0.2695, pruned_loss=0.04742, over 1421118.99 frames.], batch size: 26, lr: 2.03e-04 2022-05-28 18:31:37,218 
INFO [train.py:842] (0/4) Epoch 27, batch 6700, loss[loss=0.1945, simple_loss=0.2783, pruned_loss=0.0553, over 7213.00 frames.], tot_loss[loss=0.1823, simple_loss=0.2701, pruned_loss=0.04728, over 1423003.59 frames.], batch size: 26, lr: 2.03e-04 2022-05-28 18:32:16,870 INFO [train.py:842] (0/4) Epoch 27, batch 6750, loss[loss=0.1361, simple_loss=0.2171, pruned_loss=0.02759, over 7182.00 frames.], tot_loss[loss=0.1813, simple_loss=0.269, pruned_loss=0.04679, over 1422110.48 frames.], batch size: 16, lr: 2.03e-04 2022-05-28 18:32:56,067 INFO [train.py:842] (0/4) Epoch 27, batch 6800, loss[loss=0.1433, simple_loss=0.2222, pruned_loss=0.03221, over 7282.00 frames.], tot_loss[loss=0.18, simple_loss=0.2681, pruned_loss=0.04598, over 1423500.89 frames.], batch size: 17, lr: 2.03e-04 2022-05-28 18:33:35,605 INFO [train.py:842] (0/4) Epoch 27, batch 6850, loss[loss=0.1776, simple_loss=0.2592, pruned_loss=0.04798, over 7172.00 frames.], tot_loss[loss=0.1809, simple_loss=0.2687, pruned_loss=0.04657, over 1422347.75 frames.], batch size: 18, lr: 2.03e-04 2022-05-28 18:34:14,901 INFO [train.py:842] (0/4) Epoch 27, batch 6900, loss[loss=0.1932, simple_loss=0.2738, pruned_loss=0.05635, over 7160.00 frames.], tot_loss[loss=0.18, simple_loss=0.2681, pruned_loss=0.04601, over 1425675.49 frames.], batch size: 19, lr: 2.03e-04 2022-05-28 18:34:54,554 INFO [train.py:842] (0/4) Epoch 27, batch 6950, loss[loss=0.1773, simple_loss=0.2781, pruned_loss=0.03821, over 6859.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2682, pruned_loss=0.04634, over 1425994.82 frames.], batch size: 31, lr: 2.03e-04 2022-05-28 18:35:33,881 INFO [train.py:842] (0/4) Epoch 27, batch 7000, loss[loss=0.1393, simple_loss=0.2289, pruned_loss=0.0248, over 7389.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2673, pruned_loss=0.04593, over 1427377.03 frames.], batch size: 18, lr: 2.03e-04 2022-05-28 18:36:13,573 INFO [train.py:842] (0/4) Epoch 27, batch 7050, loss[loss=0.1698, simple_loss=0.2694, pruned_loss=0.03517, over 7326.00 frames.], tot_loss[loss=0.18, simple_loss=0.268, pruned_loss=0.04596, over 1427940.59 frames.], batch size: 20, lr: 2.03e-04 2022-05-28 18:36:52,885 INFO [train.py:842] (0/4) Epoch 27, batch 7100, loss[loss=0.1473, simple_loss=0.2374, pruned_loss=0.02857, over 7250.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2664, pruned_loss=0.04509, over 1422878.18 frames.], batch size: 19, lr: 2.03e-04 2022-05-28 18:37:32,317 INFO [train.py:842] (0/4) Epoch 27, batch 7150, loss[loss=0.1758, simple_loss=0.2753, pruned_loss=0.03809, over 7307.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2662, pruned_loss=0.04481, over 1418522.27 frames.], batch size: 24, lr: 2.03e-04 2022-05-28 18:38:11,636 INFO [train.py:842] (0/4) Epoch 27, batch 7200, loss[loss=0.1336, simple_loss=0.2103, pruned_loss=0.02843, over 7295.00 frames.], tot_loss[loss=0.177, simple_loss=0.2656, pruned_loss=0.04418, over 1422040.38 frames.], batch size: 17, lr: 2.03e-04 2022-05-28 18:38:51,121 INFO [train.py:842] (0/4) Epoch 27, batch 7250, loss[loss=0.1669, simple_loss=0.249, pruned_loss=0.0424, over 7413.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2664, pruned_loss=0.04462, over 1424056.54 frames.], batch size: 18, lr: 2.03e-04 2022-05-28 18:39:30,082 INFO [train.py:842] (0/4) Epoch 27, batch 7300, loss[loss=0.159, simple_loss=0.2567, pruned_loss=0.03066, over 7417.00 frames.], tot_loss[loss=0.1777, simple_loss=0.266, pruned_loss=0.04467, over 1425464.42 frames.], batch size: 21, lr: 2.03e-04 2022-05-28 18:40:09,825 INFO [train.py:842] (0/4) Epoch 
27, batch 7350, loss[loss=0.1808, simple_loss=0.277, pruned_loss=0.04228, over 7148.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2654, pruned_loss=0.04415, over 1425734.93 frames.], batch size: 20, lr: 2.03e-04 2022-05-28 18:40:49,107 INFO [train.py:842] (0/4) Epoch 27, batch 7400, loss[loss=0.1729, simple_loss=0.2612, pruned_loss=0.04237, over 7212.00 frames.], tot_loss[loss=0.177, simple_loss=0.2656, pruned_loss=0.04416, over 1423644.70 frames.], batch size: 23, lr: 2.03e-04 2022-05-28 18:41:28,893 INFO [train.py:842] (0/4) Epoch 27, batch 7450, loss[loss=0.1332, simple_loss=0.2133, pruned_loss=0.02651, over 7274.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2644, pruned_loss=0.04404, over 1427652.49 frames.], batch size: 17, lr: 2.03e-04 2022-05-28 18:42:08,151 INFO [train.py:842] (0/4) Epoch 27, batch 7500, loss[loss=0.2746, simple_loss=0.3447, pruned_loss=0.1022, over 5191.00 frames.], tot_loss[loss=0.178, simple_loss=0.2659, pruned_loss=0.04505, over 1424556.26 frames.], batch size: 52, lr: 2.03e-04 2022-05-28 18:42:47,781 INFO [train.py:842] (0/4) Epoch 27, batch 7550, loss[loss=0.1772, simple_loss=0.2731, pruned_loss=0.04068, over 7333.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2658, pruned_loss=0.04485, over 1426219.65 frames.], batch size: 20, lr: 2.03e-04 2022-05-28 18:43:27,143 INFO [train.py:842] (0/4) Epoch 27, batch 7600, loss[loss=0.1749, simple_loss=0.268, pruned_loss=0.04091, over 7338.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2643, pruned_loss=0.04411, over 1427305.74 frames.], batch size: 22, lr: 2.03e-04 2022-05-28 18:44:07,149 INFO [train.py:842] (0/4) Epoch 27, batch 7650, loss[loss=0.2128, simple_loss=0.2941, pruned_loss=0.06569, over 7287.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2631, pruned_loss=0.04366, over 1431482.79 frames.], batch size: 24, lr: 2.03e-04 2022-05-28 18:44:46,338 INFO [train.py:842] (0/4) Epoch 27, batch 7700, loss[loss=0.2023, simple_loss=0.2924, pruned_loss=0.05608, over 7211.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2632, pruned_loss=0.04381, over 1426930.55 frames.], batch size: 22, lr: 2.03e-04 2022-05-28 18:45:26,182 INFO [train.py:842] (0/4) Epoch 27, batch 7750, loss[loss=0.2149, simple_loss=0.3034, pruned_loss=0.06318, over 7193.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2634, pruned_loss=0.04398, over 1426083.50 frames.], batch size: 22, lr: 2.02e-04 2022-05-28 18:46:05,416 INFO [train.py:842] (0/4) Epoch 27, batch 7800, loss[loss=0.1763, simple_loss=0.2666, pruned_loss=0.04296, over 7306.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2641, pruned_loss=0.04422, over 1428344.76 frames.], batch size: 25, lr: 2.02e-04 2022-05-28 18:46:45,102 INFO [train.py:842] (0/4) Epoch 27, batch 7850, loss[loss=0.1249, simple_loss=0.2047, pruned_loss=0.02256, over 6793.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2631, pruned_loss=0.04396, over 1426788.20 frames.], batch size: 15, lr: 2.02e-04 2022-05-28 18:47:24,273 INFO [train.py:842] (0/4) Epoch 27, batch 7900, loss[loss=0.1586, simple_loss=0.249, pruned_loss=0.03408, over 7324.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2631, pruned_loss=0.04376, over 1425417.52 frames.], batch size: 21, lr: 2.02e-04 2022-05-28 18:48:03,876 INFO [train.py:842] (0/4) Epoch 27, batch 7950, loss[loss=0.1377, simple_loss=0.2267, pruned_loss=0.02434, over 7290.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2641, pruned_loss=0.04437, over 1423319.92 frames.], batch size: 17, lr: 2.02e-04 2022-05-28 18:48:42,909 INFO [train.py:842] (0/4) Epoch 27, batch 8000, 
loss[loss=0.1476, simple_loss=0.2243, pruned_loss=0.03538, over 7143.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2664, pruned_loss=0.04607, over 1415504.33 frames.], batch size: 17, lr: 2.02e-04 2022-05-28 18:49:22,389 INFO [train.py:842] (0/4) Epoch 27, batch 8050, loss[loss=0.2021, simple_loss=0.2922, pruned_loss=0.05602, over 7176.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2662, pruned_loss=0.04598, over 1415256.23 frames.], batch size: 26, lr: 2.02e-04 2022-05-28 18:50:01,716 INFO [train.py:842] (0/4) Epoch 27, batch 8100, loss[loss=0.1744, simple_loss=0.2646, pruned_loss=0.04209, over 7236.00 frames.], tot_loss[loss=0.1792, simple_loss=0.2665, pruned_loss=0.04599, over 1417856.90 frames.], batch size: 20, lr: 2.02e-04 2022-05-28 18:50:41,125 INFO [train.py:842] (0/4) Epoch 27, batch 8150, loss[loss=0.1666, simple_loss=0.251, pruned_loss=0.04114, over 7354.00 frames.], tot_loss[loss=0.1798, simple_loss=0.267, pruned_loss=0.04631, over 1416222.49 frames.], batch size: 19, lr: 2.02e-04 2022-05-28 18:51:20,450 INFO [train.py:842] (0/4) Epoch 27, batch 8200, loss[loss=0.1673, simple_loss=0.248, pruned_loss=0.04327, over 6782.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2667, pruned_loss=0.04598, over 1420103.85 frames.], batch size: 15, lr: 2.02e-04 2022-05-28 18:51:59,944 INFO [train.py:842] (0/4) Epoch 27, batch 8250, loss[loss=0.1902, simple_loss=0.2812, pruned_loss=0.04963, over 6708.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2657, pruned_loss=0.04527, over 1417573.74 frames.], batch size: 31, lr: 2.02e-04 2022-05-28 18:52:39,237 INFO [train.py:842] (0/4) Epoch 27, batch 8300, loss[loss=0.1634, simple_loss=0.2628, pruned_loss=0.03198, over 7454.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2662, pruned_loss=0.04538, over 1421313.02 frames.], batch size: 22, lr: 2.02e-04 2022-05-28 18:53:18,877 INFO [train.py:842] (0/4) Epoch 27, batch 8350, loss[loss=0.1468, simple_loss=0.2367, pruned_loss=0.02844, over 7124.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2649, pruned_loss=0.04433, over 1420904.88 frames.], batch size: 17, lr: 2.02e-04 2022-05-28 18:53:58,057 INFO [train.py:842] (0/4) Epoch 27, batch 8400, loss[loss=0.1797, simple_loss=0.2755, pruned_loss=0.04198, over 7426.00 frames.], tot_loss[loss=0.177, simple_loss=0.2657, pruned_loss=0.04418, over 1423229.78 frames.], batch size: 21, lr: 2.02e-04 2022-05-28 18:54:37,722 INFO [train.py:842] (0/4) Epoch 27, batch 8450, loss[loss=0.1665, simple_loss=0.2519, pruned_loss=0.0405, over 7163.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2652, pruned_loss=0.04401, over 1423140.24 frames.], batch size: 18, lr: 2.02e-04 2022-05-28 18:55:16,677 INFO [train.py:842] (0/4) Epoch 27, batch 8500, loss[loss=0.2067, simple_loss=0.2989, pruned_loss=0.05724, over 7197.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2658, pruned_loss=0.04433, over 1426208.96 frames.], batch size: 22, lr: 2.02e-04 2022-05-28 18:55:56,049 INFO [train.py:842] (0/4) Epoch 27, batch 8550, loss[loss=0.2014, simple_loss=0.2876, pruned_loss=0.05764, over 6774.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2665, pruned_loss=0.04494, over 1424579.24 frames.], batch size: 31, lr: 2.02e-04 2022-05-28 18:56:35,094 INFO [train.py:842] (0/4) Epoch 27, batch 8600, loss[loss=0.1764, simple_loss=0.2698, pruned_loss=0.04156, over 7122.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2664, pruned_loss=0.04473, over 1421672.75 frames.], batch size: 21, lr: 2.02e-04 2022-05-28 18:57:14,400 INFO [train.py:842] (0/4) Epoch 27, batch 8650, loss[loss=0.1917, 
simple_loss=0.2851, pruned_loss=0.0491, over 7228.00 frames.], tot_loss[loss=0.177, simple_loss=0.2656, pruned_loss=0.04421, over 1416071.35 frames.], batch size: 21, lr: 2.02e-04 2022-05-28 18:57:53,606 INFO [train.py:842] (0/4) Epoch 27, batch 8700, loss[loss=0.1664, simple_loss=0.2632, pruned_loss=0.03487, over 7206.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2662, pruned_loss=0.04462, over 1417993.74 frames.], batch size: 22, lr: 2.02e-04 2022-05-28 18:58:33,139 INFO [train.py:842] (0/4) Epoch 27, batch 8750, loss[loss=0.1788, simple_loss=0.2662, pruned_loss=0.04573, over 7206.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2667, pruned_loss=0.04501, over 1414368.03 frames.], batch size: 22, lr: 2.02e-04 2022-05-28 18:59:12,355 INFO [train.py:842] (0/4) Epoch 27, batch 8800, loss[loss=0.134, simple_loss=0.2276, pruned_loss=0.02019, over 7213.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2669, pruned_loss=0.0452, over 1410737.68 frames.], batch size: 16, lr: 2.02e-04 2022-05-28 18:59:51,552 INFO [train.py:842] (0/4) Epoch 27, batch 8850, loss[loss=0.1546, simple_loss=0.2387, pruned_loss=0.03524, over 7071.00 frames.], tot_loss[loss=0.178, simple_loss=0.2662, pruned_loss=0.0449, over 1414017.30 frames.], batch size: 18, lr: 2.02e-04 2022-05-28 19:00:30,272 INFO [train.py:842] (0/4) Epoch 27, batch 8900, loss[loss=0.2515, simple_loss=0.3312, pruned_loss=0.08592, over 7212.00 frames.], tot_loss[loss=0.179, simple_loss=0.2674, pruned_loss=0.0453, over 1409179.81 frames.], batch size: 22, lr: 2.02e-04 2022-05-28 19:01:09,159 INFO [train.py:842] (0/4) Epoch 27, batch 8950, loss[loss=0.1475, simple_loss=0.2341, pruned_loss=0.03047, over 7013.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2682, pruned_loss=0.04598, over 1404510.72 frames.], batch size: 16, lr: 2.02e-04 2022-05-28 19:01:47,928 INFO [train.py:842] (0/4) Epoch 27, batch 9000, loss[loss=0.1595, simple_loss=0.255, pruned_loss=0.03197, over 7089.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2679, pruned_loss=0.04582, over 1397249.66 frames.], batch size: 28, lr: 2.02e-04 2022-05-28 19:01:47,929 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 19:01:58,050 INFO [train.py:871] (0/4) Epoch 27, validation: loss=0.1653, simple_loss=0.2632, pruned_loss=0.03366, over 868885.00 frames. 
2022-05-28 19:02:06,832 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-248000.pt 2022-05-28 19:02:39,558 INFO [train.py:842] (0/4) Epoch 27, batch 9050, loss[loss=0.1713, simple_loss=0.2653, pruned_loss=0.03862, over 6461.00 frames.], tot_loss[loss=0.1807, simple_loss=0.2684, pruned_loss=0.04646, over 1372258.98 frames.], batch size: 38, lr: 2.02e-04 2022-05-28 19:03:17,097 INFO [train.py:842] (0/4) Epoch 27, batch 9100, loss[loss=0.1693, simple_loss=0.2629, pruned_loss=0.03786, over 6800.00 frames.], tot_loss[loss=0.1845, simple_loss=0.272, pruned_loss=0.04849, over 1335413.99 frames.], batch size: 31, lr: 2.02e-04 2022-05-28 19:03:55,319 INFO [train.py:842] (0/4) Epoch 27, batch 9150, loss[loss=0.2216, simple_loss=0.29, pruned_loss=0.07658, over 5341.00 frames.], tot_loss[loss=0.188, simple_loss=0.2744, pruned_loss=0.05086, over 1269088.03 frames.], batch size: 52, lr: 2.02e-04 2022-05-28 19:04:28,709 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-27.pt 2022-05-28 19:04:48,246 INFO [train.py:842] (0/4) Epoch 28, batch 0, loss[loss=0.1728, simple_loss=0.2696, pruned_loss=0.038, over 7259.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2696, pruned_loss=0.038, over 7259.00 frames.], batch size: 19, lr: 1.98e-04 2022-05-28 19:05:27,965 INFO [train.py:842] (0/4) Epoch 28, batch 50, loss[loss=0.1673, simple_loss=0.2641, pruned_loss=0.0353, over 7255.00 frames.], tot_loss[loss=0.176, simple_loss=0.2654, pruned_loss=0.04332, over 321916.04 frames.], batch size: 19, lr: 1.98e-04 2022-05-28 19:06:07,187 INFO [train.py:842] (0/4) Epoch 28, batch 100, loss[loss=0.2408, simple_loss=0.3105, pruned_loss=0.08551, over 7133.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2655, pruned_loss=0.04431, over 565378.06 frames.], batch size: 20, lr: 1.98e-04 2022-05-28 19:06:46,664 INFO [train.py:842] (0/4) Epoch 28, batch 150, loss[loss=0.1743, simple_loss=0.2669, pruned_loss=0.04085, over 6355.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2654, pruned_loss=0.04386, over 753461.76 frames.], batch size: 37, lr: 1.98e-04 2022-05-28 19:07:25,753 INFO [train.py:842] (0/4) Epoch 28, batch 200, loss[loss=0.1856, simple_loss=0.2766, pruned_loss=0.04732, over 7207.00 frames.], tot_loss[loss=0.1753, simple_loss=0.264, pruned_loss=0.04329, over 899592.09 frames.], batch size: 23, lr: 1.98e-04 2022-05-28 19:08:05,243 INFO [train.py:842] (0/4) Epoch 28, batch 250, loss[loss=0.2027, simple_loss=0.2757, pruned_loss=0.06484, over 7306.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2653, pruned_loss=0.04401, over 1016167.82 frames.], batch size: 24, lr: 1.98e-04 2022-05-28 19:08:44,434 INFO [train.py:842] (0/4) Epoch 28, batch 300, loss[loss=0.1908, simple_loss=0.2789, pruned_loss=0.05134, over 6786.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2673, pruned_loss=0.04478, over 1105894.72 frames.], batch size: 31, lr: 1.98e-04 2022-05-28 19:09:23,883 INFO [train.py:842] (0/4) Epoch 28, batch 350, loss[loss=0.1774, simple_loss=0.2611, pruned_loss=0.04687, over 7149.00 frames.], tot_loss[loss=0.1773, simple_loss=0.266, pruned_loss=0.04434, over 1178022.12 frames.], batch size: 19, lr: 1.98e-04 2022-05-28 19:10:03,146 INFO [train.py:842] (0/4) Epoch 28, batch 400, loss[loss=0.1768, simple_loss=0.2514, pruned_loss=0.05107, over 7141.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2657, pruned_loss=0.04444, over 1233673.23 frames.], batch size: 17, lr: 1.98e-04 2022-05-28 19:10:42,635 INFO [train.py:842] (0/4) 
Epoch 28, batch 450, loss[loss=0.1748, simple_loss=0.267, pruned_loss=0.04132, over 7322.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2658, pruned_loss=0.04465, over 1271027.31 frames.], batch size: 25, lr: 1.98e-04 2022-05-28 19:11:21,968 INFO [train.py:842] (0/4) Epoch 28, batch 500, loss[loss=0.1804, simple_loss=0.2793, pruned_loss=0.04072, over 7316.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2661, pruned_loss=0.04443, over 1308293.84 frames.], batch size: 21, lr: 1.98e-04 2022-05-28 19:12:01,645 INFO [train.py:842] (0/4) Epoch 28, batch 550, loss[loss=0.1952, simple_loss=0.2755, pruned_loss=0.05744, over 7066.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2658, pruned_loss=0.04476, over 1329450.77 frames.], batch size: 18, lr: 1.98e-04 2022-05-28 19:12:41,031 INFO [train.py:842] (0/4) Epoch 28, batch 600, loss[loss=0.1784, simple_loss=0.2624, pruned_loss=0.04721, over 7332.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2664, pruned_loss=0.04567, over 1348575.70 frames.], batch size: 20, lr: 1.98e-04 2022-05-28 19:13:20,388 INFO [train.py:842] (0/4) Epoch 28, batch 650, loss[loss=0.1678, simple_loss=0.2618, pruned_loss=0.03685, over 7044.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2663, pruned_loss=0.04547, over 1365974.34 frames.], batch size: 28, lr: 1.98e-04 2022-05-28 19:13:59,789 INFO [train.py:842] (0/4) Epoch 28, batch 700, loss[loss=0.1807, simple_loss=0.267, pruned_loss=0.04718, over 7074.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2657, pruned_loss=0.04505, over 1379961.08 frames.], batch size: 18, lr: 1.98e-04 2022-05-28 19:14:39,579 INFO [train.py:842] (0/4) Epoch 28, batch 750, loss[loss=0.1752, simple_loss=0.2627, pruned_loss=0.04387, over 7224.00 frames.], tot_loss[loss=0.176, simple_loss=0.2637, pruned_loss=0.04418, over 1390708.69 frames.], batch size: 21, lr: 1.98e-04 2022-05-28 19:15:18,796 INFO [train.py:842] (0/4) Epoch 28, batch 800, loss[loss=0.1862, simple_loss=0.2705, pruned_loss=0.05095, over 7049.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2639, pruned_loss=0.04435, over 1397673.73 frames.], batch size: 28, lr: 1.98e-04 2022-05-28 19:15:58,591 INFO [train.py:842] (0/4) Epoch 28, batch 850, loss[loss=0.2108, simple_loss=0.3101, pruned_loss=0.0558, over 7336.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2639, pruned_loss=0.04388, over 1405246.64 frames.], batch size: 25, lr: 1.98e-04 2022-05-28 19:16:37,598 INFO [train.py:842] (0/4) Epoch 28, batch 900, loss[loss=0.1842, simple_loss=0.2561, pruned_loss=0.05618, over 7007.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2645, pruned_loss=0.04402, over 1408086.25 frames.], batch size: 16, lr: 1.98e-04 2022-05-28 19:17:16,999 INFO [train.py:842] (0/4) Epoch 28, batch 950, loss[loss=0.1865, simple_loss=0.2744, pruned_loss=0.04934, over 7148.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2653, pruned_loss=0.04481, over 1409992.05 frames.], batch size: 18, lr: 1.98e-04 2022-05-28 19:17:56,533 INFO [train.py:842] (0/4) Epoch 28, batch 1000, loss[loss=0.1636, simple_loss=0.2567, pruned_loss=0.03529, over 7435.00 frames.], tot_loss[loss=0.1767, simple_loss=0.265, pruned_loss=0.04423, over 1416476.86 frames.], batch size: 20, lr: 1.98e-04 2022-05-28 19:18:36,071 INFO [train.py:842] (0/4) Epoch 28, batch 1050, loss[loss=0.2233, simple_loss=0.3165, pruned_loss=0.06501, over 7418.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2655, pruned_loss=0.04434, over 1416173.97 frames.], batch size: 21, lr: 1.98e-04 2022-05-28 19:19:15,247 INFO [train.py:842] (0/4) Epoch 28, batch 1100, 
loss[loss=0.1728, simple_loss=0.2651, pruned_loss=0.04028, over 7064.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2657, pruned_loss=0.04434, over 1416046.00 frames.], batch size: 18, lr: 1.98e-04 2022-05-28 19:19:55,031 INFO [train.py:842] (0/4) Epoch 28, batch 1150, loss[loss=0.1965, simple_loss=0.2919, pruned_loss=0.05054, over 7196.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2658, pruned_loss=0.04487, over 1421519.94 frames.], batch size: 23, lr: 1.98e-04 2022-05-28 19:20:34,389 INFO [train.py:842] (0/4) Epoch 28, batch 1200, loss[loss=0.1803, simple_loss=0.2578, pruned_loss=0.05139, over 7139.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2658, pruned_loss=0.04493, over 1426229.23 frames.], batch size: 17, lr: 1.98e-04 2022-05-28 19:21:13,767 INFO [train.py:842] (0/4) Epoch 28, batch 1250, loss[loss=0.1688, simple_loss=0.2492, pruned_loss=0.04419, over 7133.00 frames.], tot_loss[loss=0.179, simple_loss=0.2667, pruned_loss=0.04565, over 1423764.09 frames.], batch size: 17, lr: 1.98e-04 2022-05-28 19:21:52,987 INFO [train.py:842] (0/4) Epoch 28, batch 1300, loss[loss=0.1484, simple_loss=0.238, pruned_loss=0.02937, over 7283.00 frames.], tot_loss[loss=0.1792, simple_loss=0.267, pruned_loss=0.04568, over 1419248.38 frames.], batch size: 18, lr: 1.98e-04 2022-05-28 19:22:32,599 INFO [train.py:842] (0/4) Epoch 28, batch 1350, loss[loss=0.1887, simple_loss=0.2739, pruned_loss=0.05175, over 7354.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2665, pruned_loss=0.04541, over 1420372.98 frames.], batch size: 19, lr: 1.98e-04 2022-05-28 19:23:11,740 INFO [train.py:842] (0/4) Epoch 28, batch 1400, loss[loss=0.1804, simple_loss=0.2646, pruned_loss=0.04807, over 7069.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2664, pruned_loss=0.04553, over 1420442.00 frames.], batch size: 18, lr: 1.98e-04 2022-05-28 19:23:51,611 INFO [train.py:842] (0/4) Epoch 28, batch 1450, loss[loss=0.1934, simple_loss=0.2826, pruned_loss=0.05215, over 7325.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2655, pruned_loss=0.04514, over 1422616.82 frames.], batch size: 20, lr: 1.98e-04 2022-05-28 19:24:30,668 INFO [train.py:842] (0/4) Epoch 28, batch 1500, loss[loss=0.1585, simple_loss=0.2611, pruned_loss=0.02793, over 7114.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2663, pruned_loss=0.04504, over 1424337.13 frames.], batch size: 21, lr: 1.98e-04 2022-05-28 19:25:10,185 INFO [train.py:842] (0/4) Epoch 28, batch 1550, loss[loss=0.141, simple_loss=0.2276, pruned_loss=0.02716, over 6812.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2657, pruned_loss=0.04498, over 1421007.33 frames.], batch size: 15, lr: 1.98e-04 2022-05-28 19:25:49,448 INFO [train.py:842] (0/4) Epoch 28, batch 1600, loss[loss=0.1816, simple_loss=0.2692, pruned_loss=0.04698, over 7406.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2643, pruned_loss=0.04411, over 1424556.79 frames.], batch size: 21, lr: 1.98e-04 2022-05-28 19:26:29,077 INFO [train.py:842] (0/4) Epoch 28, batch 1650, loss[loss=0.1671, simple_loss=0.2507, pruned_loss=0.04175, over 7069.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2625, pruned_loss=0.04318, over 1425617.49 frames.], batch size: 18, lr: 1.98e-04 2022-05-28 19:27:08,133 INFO [train.py:842] (0/4) Epoch 28, batch 1700, loss[loss=0.1819, simple_loss=0.2604, pruned_loss=0.05175, over 7355.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2645, pruned_loss=0.04392, over 1426881.98 frames.], batch size: 19, lr: 1.98e-04 2022-05-28 19:27:48,076 INFO [train.py:842] (0/4) Epoch 28, batch 1750, loss[loss=0.1974, 
simple_loss=0.2865, pruned_loss=0.0541, over 6766.00 frames.], tot_loss[loss=0.1749, simple_loss=0.263, pruned_loss=0.04341, over 1428399.39 frames.], batch size: 31, lr: 1.98e-04 2022-05-28 19:28:27,178 INFO [train.py:842] (0/4) Epoch 28, batch 1800, loss[loss=0.1901, simple_loss=0.2702, pruned_loss=0.05507, over 7234.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2641, pruned_loss=0.04408, over 1427887.42 frames.], batch size: 20, lr: 1.98e-04 2022-05-28 19:29:06,916 INFO [train.py:842] (0/4) Epoch 28, batch 1850, loss[loss=0.1615, simple_loss=0.2514, pruned_loss=0.03581, over 7156.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2645, pruned_loss=0.0443, over 1430599.80 frames.], batch size: 19, lr: 1.98e-04 2022-05-28 19:29:46,178 INFO [train.py:842] (0/4) Epoch 28, batch 1900, loss[loss=0.1509, simple_loss=0.236, pruned_loss=0.03294, over 7271.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2647, pruned_loss=0.04402, over 1430061.99 frames.], batch size: 17, lr: 1.98e-04 2022-05-28 19:30:25,853 INFO [train.py:842] (0/4) Epoch 28, batch 1950, loss[loss=0.1862, simple_loss=0.2776, pruned_loss=0.04743, over 6375.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2659, pruned_loss=0.04499, over 1425227.77 frames.], batch size: 37, lr: 1.98e-04 2022-05-28 19:31:05,153 INFO [train.py:842] (0/4) Epoch 28, batch 2000, loss[loss=0.1615, simple_loss=0.262, pruned_loss=0.0305, over 7205.00 frames.], tot_loss[loss=0.1781, simple_loss=0.266, pruned_loss=0.04515, over 1424271.83 frames.], batch size: 21, lr: 1.98e-04 2022-05-28 19:31:44,619 INFO [train.py:842] (0/4) Epoch 28, batch 2050, loss[loss=0.1839, simple_loss=0.2763, pruned_loss=0.04579, over 7216.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2661, pruned_loss=0.04506, over 1422616.32 frames.], batch size: 23, lr: 1.97e-04 2022-05-28 19:32:34,472 INFO [train.py:842] (0/4) Epoch 28, batch 2100, loss[loss=0.1999, simple_loss=0.2773, pruned_loss=0.06122, over 7276.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2661, pruned_loss=0.04507, over 1423427.94 frames.], batch size: 25, lr: 1.97e-04 2022-05-28 19:33:13,894 INFO [train.py:842] (0/4) Epoch 28, batch 2150, loss[loss=0.1607, simple_loss=0.2432, pruned_loss=0.03907, over 7130.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2664, pruned_loss=0.04527, over 1422167.08 frames.], batch size: 17, lr: 1.97e-04 2022-05-28 19:33:53,235 INFO [train.py:842] (0/4) Epoch 28, batch 2200, loss[loss=0.2246, simple_loss=0.3038, pruned_loss=0.07271, over 7303.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2666, pruned_loss=0.04582, over 1420922.72 frames.], batch size: 24, lr: 1.97e-04 2022-05-28 19:34:32,677 INFO [train.py:842] (0/4) Epoch 28, batch 2250, loss[loss=0.1759, simple_loss=0.2674, pruned_loss=0.04222, over 7345.00 frames.], tot_loss[loss=0.1788, simple_loss=0.2664, pruned_loss=0.0456, over 1423628.93 frames.], batch size: 22, lr: 1.97e-04 2022-05-28 19:35:11,815 INFO [train.py:842] (0/4) Epoch 28, batch 2300, loss[loss=0.1985, simple_loss=0.2781, pruned_loss=0.05939, over 7142.00 frames.], tot_loss[loss=0.1793, simple_loss=0.267, pruned_loss=0.04583, over 1421356.79 frames.], batch size: 20, lr: 1.97e-04 2022-05-28 19:35:51,204 INFO [train.py:842] (0/4) Epoch 28, batch 2350, loss[loss=0.1903, simple_loss=0.2708, pruned_loss=0.05491, over 7154.00 frames.], tot_loss[loss=0.1799, simple_loss=0.2674, pruned_loss=0.04618, over 1419639.94 frames.], batch size: 19, lr: 1.97e-04 2022-05-28 19:36:30,495 INFO [train.py:842] (0/4) Epoch 28, batch 2400, loss[loss=0.2102, simple_loss=0.292, 
pruned_loss=0.06422, over 7195.00 frames.], tot_loss[loss=0.1802, simple_loss=0.2679, pruned_loss=0.0462, over 1422827.39 frames.], batch size: 23, lr: 1.97e-04 2022-05-28 19:37:10,204 INFO [train.py:842] (0/4) Epoch 28, batch 2450, loss[loss=0.1484, simple_loss=0.2472, pruned_loss=0.02478, over 6371.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2659, pruned_loss=0.04489, over 1424008.14 frames.], batch size: 38, lr: 1.97e-04 2022-05-28 19:37:49,523 INFO [train.py:842] (0/4) Epoch 28, batch 2500, loss[loss=0.1898, simple_loss=0.2603, pruned_loss=0.05971, over 6832.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2651, pruned_loss=0.04487, over 1421166.32 frames.], batch size: 15, lr: 1.97e-04 2022-05-28 19:38:29,252 INFO [train.py:842] (0/4) Epoch 28, batch 2550, loss[loss=0.1813, simple_loss=0.2715, pruned_loss=0.04559, over 7250.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2658, pruned_loss=0.04521, over 1421553.51 frames.], batch size: 19, lr: 1.97e-04 2022-05-28 19:39:08,424 INFO [train.py:842] (0/4) Epoch 28, batch 2600, loss[loss=0.1787, simple_loss=0.2634, pruned_loss=0.04701, over 7227.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2643, pruned_loss=0.04445, over 1421445.20 frames.], batch size: 20, lr: 1.97e-04 2022-05-28 19:39:48,095 INFO [train.py:842] (0/4) Epoch 28, batch 2650, loss[loss=0.139, simple_loss=0.2211, pruned_loss=0.02847, over 6991.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2653, pruned_loss=0.04448, over 1419763.68 frames.], batch size: 16, lr: 1.97e-04 2022-05-28 19:40:27,357 INFO [train.py:842] (0/4) Epoch 28, batch 2700, loss[loss=0.1815, simple_loss=0.2793, pruned_loss=0.04188, over 7320.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2647, pruned_loss=0.04396, over 1421630.41 frames.], batch size: 21, lr: 1.97e-04 2022-05-28 19:41:06,933 INFO [train.py:842] (0/4) Epoch 28, batch 2750, loss[loss=0.1554, simple_loss=0.2468, pruned_loss=0.03198, over 7249.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2651, pruned_loss=0.0446, over 1419431.62 frames.], batch size: 19, lr: 1.97e-04 2022-05-28 19:41:45,952 INFO [train.py:842] (0/4) Epoch 28, batch 2800, loss[loss=0.21, simple_loss=0.3029, pruned_loss=0.0586, over 7242.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2656, pruned_loss=0.04484, over 1416034.14 frames.], batch size: 20, lr: 1.97e-04 2022-05-28 19:42:25,427 INFO [train.py:842] (0/4) Epoch 28, batch 2850, loss[loss=0.1331, simple_loss=0.2194, pruned_loss=0.02347, over 7125.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2653, pruned_loss=0.04489, over 1421383.19 frames.], batch size: 17, lr: 1.97e-04 2022-05-28 19:43:04,592 INFO [train.py:842] (0/4) Epoch 28, batch 2900, loss[loss=0.1854, simple_loss=0.2699, pruned_loss=0.05047, over 7269.00 frames.], tot_loss[loss=0.1785, simple_loss=0.266, pruned_loss=0.04554, over 1420784.40 frames.], batch size: 25, lr: 1.97e-04 2022-05-28 19:43:43,924 INFO [train.py:842] (0/4) Epoch 28, batch 2950, loss[loss=0.1678, simple_loss=0.264, pruned_loss=0.03585, over 7192.00 frames.], tot_loss[loss=0.179, simple_loss=0.2666, pruned_loss=0.04571, over 1423406.18 frames.], batch size: 23, lr: 1.97e-04 2022-05-28 19:44:22,906 INFO [train.py:842] (0/4) Epoch 28, batch 3000, loss[loss=0.166, simple_loss=0.2512, pruned_loss=0.04041, over 7063.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2655, pruned_loss=0.04469, over 1426116.03 frames.], batch size: 28, lr: 1.97e-04 2022-05-28 19:44:22,907 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 19:44:32,547 INFO [train.py:871] (0/4) Epoch 28, 
validation: loss=0.165, simple_loss=0.263, pruned_loss=0.03349, over 868885.00 frames. 2022-05-28 19:45:12,190 INFO [train.py:842] (0/4) Epoch 28, batch 3050, loss[loss=0.1485, simple_loss=0.2372, pruned_loss=0.02988, over 7124.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2643, pruned_loss=0.04398, over 1427373.32 frames.], batch size: 17, lr: 1.97e-04 2022-05-28 19:45:51,483 INFO [train.py:842] (0/4) Epoch 28, batch 3100, loss[loss=0.2037, simple_loss=0.2913, pruned_loss=0.05808, over 7366.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2642, pruned_loss=0.0444, over 1425849.14 frames.], batch size: 23, lr: 1.97e-04 2022-05-28 19:46:31,134 INFO [train.py:842] (0/4) Epoch 28, batch 3150, loss[loss=0.1385, simple_loss=0.2305, pruned_loss=0.02325, over 7417.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2643, pruned_loss=0.04445, over 1424056.97 frames.], batch size: 18, lr: 1.97e-04 2022-05-28 19:47:10,206 INFO [train.py:842] (0/4) Epoch 28, batch 3200, loss[loss=0.2056, simple_loss=0.2915, pruned_loss=0.05986, over 7317.00 frames.], tot_loss[loss=0.1771, simple_loss=0.265, pruned_loss=0.04464, over 1424262.69 frames.], batch size: 21, lr: 1.97e-04 2022-05-28 19:47:50,132 INFO [train.py:842] (0/4) Epoch 28, batch 3250, loss[loss=0.1602, simple_loss=0.2488, pruned_loss=0.03577, over 7168.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2637, pruned_loss=0.04422, over 1423249.12 frames.], batch size: 18, lr: 1.97e-04 2022-05-28 19:48:29,255 INFO [train.py:842] (0/4) Epoch 28, batch 3300, loss[loss=0.1511, simple_loss=0.2387, pruned_loss=0.03172, over 6989.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2661, pruned_loss=0.04545, over 1423704.79 frames.], batch size: 16, lr: 1.97e-04 2022-05-28 19:49:08,811 INFO [train.py:842] (0/4) Epoch 28, batch 3350, loss[loss=0.2029, simple_loss=0.2977, pruned_loss=0.05405, over 7379.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2652, pruned_loss=0.04475, over 1421178.52 frames.], batch size: 23, lr: 1.97e-04 2022-05-28 19:49:48,136 INFO [train.py:842] (0/4) Epoch 28, batch 3400, loss[loss=0.1746, simple_loss=0.2711, pruned_loss=0.03905, over 7335.00 frames.], tot_loss[loss=0.177, simple_loss=0.265, pruned_loss=0.04446, over 1422994.04 frames.], batch size: 20, lr: 1.97e-04 2022-05-28 19:50:27,845 INFO [train.py:842] (0/4) Epoch 28, batch 3450, loss[loss=0.2468, simple_loss=0.3282, pruned_loss=0.08272, over 7216.00 frames.], tot_loss[loss=0.1769, simple_loss=0.265, pruned_loss=0.04433, over 1423728.07 frames.], batch size: 22, lr: 1.97e-04 2022-05-28 19:51:07,208 INFO [train.py:842] (0/4) Epoch 28, batch 3500, loss[loss=0.2293, simple_loss=0.3074, pruned_loss=0.07559, over 7064.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2655, pruned_loss=0.04435, over 1422446.37 frames.], batch size: 18, lr: 1.97e-04 2022-05-28 19:51:46,794 INFO [train.py:842] (0/4) Epoch 28, batch 3550, loss[loss=0.1529, simple_loss=0.2553, pruned_loss=0.02525, over 7325.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2652, pruned_loss=0.044, over 1423651.32 frames.], batch size: 22, lr: 1.97e-04 2022-05-28 19:52:25,754 INFO [train.py:842] (0/4) Epoch 28, batch 3600, loss[loss=0.168, simple_loss=0.247, pruned_loss=0.04453, over 7074.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2657, pruned_loss=0.04368, over 1421398.48 frames.], batch size: 18, lr: 1.97e-04 2022-05-28 19:53:05,422 INFO [train.py:842] (0/4) Epoch 28, batch 3650, loss[loss=0.1469, simple_loss=0.2314, pruned_loss=0.03118, over 7139.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2643, 
pruned_loss=0.0435, over 1421968.32 frames.], batch size: 17, lr: 1.97e-04 2022-05-28 19:53:44,321 INFO [train.py:842] (0/4) Epoch 28, batch 3700, loss[loss=0.1871, simple_loss=0.2694, pruned_loss=0.05238, over 7426.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2644, pruned_loss=0.04364, over 1421985.17 frames.], batch size: 20, lr: 1.97e-04 2022-05-28 19:54:23,841 INFO [train.py:842] (0/4) Epoch 28, batch 3750, loss[loss=0.1578, simple_loss=0.248, pruned_loss=0.03379, over 7329.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2646, pruned_loss=0.04364, over 1422914.70 frames.], batch size: 20, lr: 1.97e-04 2022-05-28 19:55:02,958 INFO [train.py:842] (0/4) Epoch 28, batch 3800, loss[loss=0.216, simple_loss=0.2938, pruned_loss=0.06911, over 7219.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2657, pruned_loss=0.04407, over 1421947.87 frames.], batch size: 22, lr: 1.97e-04 2022-05-28 19:55:42,608 INFO [train.py:842] (0/4) Epoch 28, batch 3850, loss[loss=0.2045, simple_loss=0.2962, pruned_loss=0.05639, over 7143.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2655, pruned_loss=0.04395, over 1426506.96 frames.], batch size: 20, lr: 1.97e-04 2022-05-28 19:56:21,911 INFO [train.py:842] (0/4) Epoch 28, batch 3900, loss[loss=0.1374, simple_loss=0.2274, pruned_loss=0.02372, over 6994.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2646, pruned_loss=0.04345, over 1425977.68 frames.], batch size: 16, lr: 1.97e-04 2022-05-28 19:57:01,260 INFO [train.py:842] (0/4) Epoch 28, batch 3950, loss[loss=0.15, simple_loss=0.2203, pruned_loss=0.0399, over 6994.00 frames.], tot_loss[loss=0.1762, simple_loss=0.265, pruned_loss=0.04374, over 1426570.02 frames.], batch size: 16, lr: 1.97e-04 2022-05-28 19:57:40,565 INFO [train.py:842] (0/4) Epoch 28, batch 4000, loss[loss=0.1586, simple_loss=0.2597, pruned_loss=0.02871, over 7328.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2652, pruned_loss=0.04314, over 1428006.33 frames.], batch size: 21, lr: 1.97e-04 2022-05-28 19:58:20,481 INFO [train.py:842] (0/4) Epoch 28, batch 4050, loss[loss=0.1887, simple_loss=0.2917, pruned_loss=0.04285, over 7291.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2643, pruned_loss=0.04319, over 1428407.83 frames.], batch size: 24, lr: 1.97e-04 2022-05-28 19:58:59,889 INFO [train.py:842] (0/4) Epoch 28, batch 4100, loss[loss=0.1288, simple_loss=0.2111, pruned_loss=0.0233, over 6863.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2641, pruned_loss=0.04342, over 1424240.83 frames.], batch size: 15, lr: 1.97e-04 2022-05-28 19:59:39,559 INFO [train.py:842] (0/4) Epoch 28, batch 4150, loss[loss=0.1619, simple_loss=0.2606, pruned_loss=0.03159, over 7410.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2643, pruned_loss=0.04376, over 1425045.90 frames.], batch size: 21, lr: 1.97e-04 2022-05-28 20:00:18,992 INFO [train.py:842] (0/4) Epoch 28, batch 4200, loss[loss=0.1468, simple_loss=0.2273, pruned_loss=0.03313, over 7140.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2643, pruned_loss=0.04403, over 1427343.22 frames.], batch size: 17, lr: 1.97e-04 2022-05-28 20:00:58,735 INFO [train.py:842] (0/4) Epoch 28, batch 4250, loss[loss=0.1622, simple_loss=0.2556, pruned_loss=0.03441, over 7222.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2638, pruned_loss=0.04352, over 1428913.41 frames.], batch size: 20, lr: 1.97e-04 2022-05-28 20:01:38,113 INFO [train.py:842] (0/4) Epoch 28, batch 4300, loss[loss=0.163, simple_loss=0.2448, pruned_loss=0.04061, over 7162.00 frames.], tot_loss[loss=0.176, simple_loss=0.2642, pruned_loss=0.04387, over 
1426920.76 frames.], batch size: 19, lr: 1.97e-04 2022-05-28 20:02:17,328 INFO [train.py:842] (0/4) Epoch 28, batch 4350, loss[loss=0.1735, simple_loss=0.2541, pruned_loss=0.04647, over 7238.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2667, pruned_loss=0.0449, over 1420144.31 frames.], batch size: 20, lr: 1.97e-04 2022-05-28 20:02:56,647 INFO [train.py:842] (0/4) Epoch 28, batch 4400, loss[loss=0.1476, simple_loss=0.2315, pruned_loss=0.03185, over 6992.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2664, pruned_loss=0.04458, over 1422081.02 frames.], batch size: 16, lr: 1.97e-04 2022-05-28 20:03:36,344 INFO [train.py:842] (0/4) Epoch 28, batch 4450, loss[loss=0.1584, simple_loss=0.2473, pruned_loss=0.03479, over 7276.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2652, pruned_loss=0.04415, over 1425721.17 frames.], batch size: 18, lr: 1.97e-04 2022-05-28 20:04:15,398 INFO [train.py:842] (0/4) Epoch 28, batch 4500, loss[loss=0.1602, simple_loss=0.256, pruned_loss=0.03221, over 7347.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2666, pruned_loss=0.04462, over 1425243.29 frames.], batch size: 22, lr: 1.97e-04 2022-05-28 20:04:54,901 INFO [train.py:842] (0/4) Epoch 28, batch 4550, loss[loss=0.1647, simple_loss=0.2527, pruned_loss=0.03833, over 7160.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2654, pruned_loss=0.04406, over 1420466.41 frames.], batch size: 18, lr: 1.97e-04 2022-05-28 20:05:33,868 INFO [train.py:842] (0/4) Epoch 28, batch 4600, loss[loss=0.1669, simple_loss=0.2575, pruned_loss=0.03812, over 7141.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2668, pruned_loss=0.04499, over 1423332.07 frames.], batch size: 19, lr: 1.96e-04 2022-05-28 20:06:13,390 INFO [train.py:842] (0/4) Epoch 28, batch 4650, loss[loss=0.2016, simple_loss=0.2899, pruned_loss=0.05668, over 6738.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2663, pruned_loss=0.04479, over 1424137.23 frames.], batch size: 31, lr: 1.96e-04 2022-05-28 20:06:52,735 INFO [train.py:842] (0/4) Epoch 28, batch 4700, loss[loss=0.135, simple_loss=0.2179, pruned_loss=0.02605, over 7271.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2676, pruned_loss=0.0457, over 1423301.76 frames.], batch size: 18, lr: 1.96e-04 2022-05-28 20:07:32,410 INFO [train.py:842] (0/4) Epoch 28, batch 4750, loss[loss=0.2119, simple_loss=0.2958, pruned_loss=0.06403, over 7217.00 frames.], tot_loss[loss=0.18, simple_loss=0.2679, pruned_loss=0.04604, over 1422201.63 frames.], batch size: 22, lr: 1.96e-04 2022-05-28 20:08:11,801 INFO [train.py:842] (0/4) Epoch 28, batch 4800, loss[loss=0.1874, simple_loss=0.2743, pruned_loss=0.05019, over 6745.00 frames.], tot_loss[loss=0.1784, simple_loss=0.266, pruned_loss=0.04544, over 1416765.93 frames.], batch size: 31, lr: 1.96e-04 2022-05-28 20:08:51,417 INFO [train.py:842] (0/4) Epoch 28, batch 4850, loss[loss=0.1488, simple_loss=0.2404, pruned_loss=0.02864, over 7313.00 frames.], tot_loss[loss=0.1783, simple_loss=0.266, pruned_loss=0.04527, over 1420061.40 frames.], batch size: 21, lr: 1.96e-04 2022-05-28 20:09:30,546 INFO [train.py:842] (0/4) Epoch 28, batch 4900, loss[loss=0.1749, simple_loss=0.2601, pruned_loss=0.04487, over 7328.00 frames.], tot_loss[loss=0.179, simple_loss=0.2666, pruned_loss=0.04568, over 1422995.92 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:10:10,253 INFO [train.py:842] (0/4) Epoch 28, batch 4950, loss[loss=0.1827, simple_loss=0.2751, pruned_loss=0.04517, over 7329.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2668, pruned_loss=0.04587, over 1423586.46 frames.], batch 
size: 20, lr: 1.96e-04 2022-05-28 20:10:49,479 INFO [train.py:842] (0/4) Epoch 28, batch 5000, loss[loss=0.1929, simple_loss=0.2864, pruned_loss=0.04966, over 7321.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2658, pruned_loss=0.04497, over 1427987.33 frames.], batch size: 21, lr: 1.96e-04 2022-05-28 20:11:28,937 INFO [train.py:842] (0/4) Epoch 28, batch 5050, loss[loss=0.1741, simple_loss=0.2493, pruned_loss=0.04947, over 7205.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2657, pruned_loss=0.04503, over 1420023.48 frames.], batch size: 16, lr: 1.96e-04 2022-05-28 20:12:08,118 INFO [train.py:842] (0/4) Epoch 28, batch 5100, loss[loss=0.167, simple_loss=0.2716, pruned_loss=0.03119, over 7225.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2666, pruned_loss=0.04524, over 1417608.94 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:12:47,618 INFO [train.py:842] (0/4) Epoch 28, batch 5150, loss[loss=0.1461, simple_loss=0.2335, pruned_loss=0.02932, over 7285.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2643, pruned_loss=0.04406, over 1416743.72 frames.], batch size: 18, lr: 1.96e-04 2022-05-28 20:13:26,658 INFO [train.py:842] (0/4) Epoch 28, batch 5200, loss[loss=0.163, simple_loss=0.2616, pruned_loss=0.03214, over 7316.00 frames.], tot_loss[loss=0.177, simple_loss=0.2649, pruned_loss=0.04458, over 1419241.72 frames.], batch size: 21, lr: 1.96e-04 2022-05-28 20:14:06,374 INFO [train.py:842] (0/4) Epoch 28, batch 5250, loss[loss=0.1458, simple_loss=0.2403, pruned_loss=0.02567, over 7363.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2637, pruned_loss=0.04387, over 1421074.10 frames.], batch size: 19, lr: 1.96e-04 2022-05-28 20:14:45,256 INFO [train.py:842] (0/4) Epoch 28, batch 5300, loss[loss=0.2438, simple_loss=0.328, pruned_loss=0.07982, over 7362.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2641, pruned_loss=0.04412, over 1415251.30 frames.], batch size: 23, lr: 1.96e-04 2022-05-28 20:15:25,205 INFO [train.py:842] (0/4) Epoch 28, batch 5350, loss[loss=0.1767, simple_loss=0.2598, pruned_loss=0.04684, over 7383.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2635, pruned_loss=0.04389, over 1417993.23 frames.], batch size: 23, lr: 1.96e-04 2022-05-28 20:16:04,676 INFO [train.py:842] (0/4) Epoch 28, batch 5400, loss[loss=0.1601, simple_loss=0.2446, pruned_loss=0.03781, over 6822.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2626, pruned_loss=0.04357, over 1421327.79 frames.], batch size: 31, lr: 1.96e-04 2022-05-28 20:16:44,126 INFO [train.py:842] (0/4) Epoch 28, batch 5450, loss[loss=0.146, simple_loss=0.2359, pruned_loss=0.02803, over 7270.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2635, pruned_loss=0.04382, over 1417719.93 frames.], batch size: 19, lr: 1.96e-04 2022-05-28 20:17:23,310 INFO [train.py:842] (0/4) Epoch 28, batch 5500, loss[loss=0.1924, simple_loss=0.2892, pruned_loss=0.04779, over 7290.00 frames.], tot_loss[loss=0.1757, simple_loss=0.264, pruned_loss=0.04373, over 1417328.74 frames.], batch size: 24, lr: 1.96e-04 2022-05-28 20:18:02,997 INFO [train.py:842] (0/4) Epoch 28, batch 5550, loss[loss=0.1671, simple_loss=0.2451, pruned_loss=0.04455, over 7265.00 frames.], tot_loss[loss=0.176, simple_loss=0.2643, pruned_loss=0.04385, over 1421719.10 frames.], batch size: 18, lr: 1.96e-04 2022-05-28 20:18:42,326 INFO [train.py:842] (0/4) Epoch 28, batch 5600, loss[loss=0.2208, simple_loss=0.3079, pruned_loss=0.06688, over 7331.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2641, pruned_loss=0.04413, over 1421238.01 frames.], batch size: 22, lr: 1.96e-04 
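Each entry above pairs a per-batch loss measured "over N frames" with a tot_loss measured over roughly 1.4 million frames, and that tot_loss frame count hovers near 1.4M instead of growing without bound. Below is a minimal bookkeeping sketch that behaves this way, assuming tot_loss is a decayed, frame-weighted average; the exact accounting in train.py may differ.

# Minimal sketch (assumption, not the actual train.py bookkeeping): keep a
# decayed, frame-weighted sum per loss component plus a decayed frame count.
# With a decay factor close to 1 the reported frame count saturates instead
# of growing forever, which is what the tot_loss entries above show.
class DecayedLoss:
    def __init__(self, decay: float = 0.995):
        self.decay = decay
        self.sums: dict = {}   # loss name -> decayed frame-weighted sum
        self.frames = 0.0      # decayed total frame count

    def update(self, batch_losses: dict, num_frames: float) -> None:
        self.frames = self.frames * self.decay + num_frames
        for name, value in batch_losses.items():
            prev = self.sums.get(name, 0.0) * self.decay
            self.sums[name] = prev + value * num_frames

    def averages(self) -> dict:
        return {name: s / self.frames for name, s in self.sums.items()}

# Feeding in the epoch 28, batch 5600 entry above (7331 frames):
tot = DecayedLoss()
tot.update({"loss": 0.2208, "simple_loss": 0.3079, "pruned_loss": 0.06688}, 7331.0)
print(tot.averages())  # after a single update this equals the per-batch values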
2022-05-28 20:19:22,320 INFO [train.py:842] (0/4) Epoch 28, batch 5650, loss[loss=0.1845, simple_loss=0.2802, pruned_loss=0.04442, over 7334.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2633, pruned_loss=0.04322, over 1426599.27 frames.], batch size: 22, lr: 1.96e-04 2022-05-28 20:20:01,601 INFO [train.py:842] (0/4) Epoch 28, batch 5700, loss[loss=0.1583, simple_loss=0.2538, pruned_loss=0.0314, over 7152.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2635, pruned_loss=0.04348, over 1430200.58 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:20:41,243 INFO [train.py:842] (0/4) Epoch 28, batch 5750, loss[loss=0.158, simple_loss=0.2492, pruned_loss=0.03336, over 7303.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2635, pruned_loss=0.04347, over 1427520.42 frames.], batch size: 21, lr: 1.96e-04 2022-05-28 20:21:20,862 INFO [train.py:842] (0/4) Epoch 28, batch 5800, loss[loss=0.1894, simple_loss=0.2766, pruned_loss=0.05112, over 7147.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2641, pruned_loss=0.04373, over 1431548.06 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:22:00,584 INFO [train.py:842] (0/4) Epoch 28, batch 5850, loss[loss=0.1658, simple_loss=0.2617, pruned_loss=0.0349, over 7164.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2628, pruned_loss=0.0432, over 1433804.62 frames.], batch size: 19, lr: 1.96e-04 2022-05-28 20:22:39,827 INFO [train.py:842] (0/4) Epoch 28, batch 5900, loss[loss=0.1625, simple_loss=0.2568, pruned_loss=0.03409, over 7425.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2628, pruned_loss=0.04287, over 1436497.37 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:23:19,384 INFO [train.py:842] (0/4) Epoch 28, batch 5950, loss[loss=0.2509, simple_loss=0.3347, pruned_loss=0.0835, over 7327.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2643, pruned_loss=0.04368, over 1437160.19 frames.], batch size: 21, lr: 1.96e-04 2022-05-28 20:23:58,869 INFO [train.py:842] (0/4) Epoch 28, batch 6000, loss[loss=0.187, simple_loss=0.2777, pruned_loss=0.04812, over 7212.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2633, pruned_loss=0.04366, over 1436661.29 frames.], batch size: 21, lr: 1.96e-04 2022-05-28 20:23:58,870 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 20:24:08,607 INFO [train.py:871] (0/4) Epoch 28, validation: loss=0.1653, simple_loss=0.2626, pruned_loss=0.03405, over 868885.00 frames. 
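A useful sanity check on these entries: the headline loss is a weighted combination of the two logged components. A minimal check, assuming the weight on simple_loss is 0.5 (inferred from the numbers in this log, not read from the run's configuration), applied to the epoch 28, batch 6000 per-batch loss just above:

# loss ≈ 0.5 * simple_loss + pruned_loss (0.5 is an inference from the logged
# numbers; the actual weighting lives in train.py).
simple_loss = 0.2777
pruned_loss = 0.04812
combined = 0.5 * simple_loss + pruned_loss
print(round(combined, 5))  # 0.18697, matching the logged loss=0.187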
2022-05-28 20:24:48,316 INFO [train.py:842] (0/4) Epoch 28, batch 6050, loss[loss=0.2317, simple_loss=0.3064, pruned_loss=0.07844, over 7134.00 frames.], tot_loss[loss=0.1762, simple_loss=0.264, pruned_loss=0.04423, over 1432547.57 frames.], batch size: 28, lr: 1.96e-04 2022-05-28 20:25:27,633 INFO [train.py:842] (0/4) Epoch 28, batch 6100, loss[loss=0.158, simple_loss=0.2435, pruned_loss=0.03625, over 7071.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2635, pruned_loss=0.04388, over 1430173.78 frames.], batch size: 18, lr: 1.96e-04 2022-05-28 20:26:07,226 INFO [train.py:842] (0/4) Epoch 28, batch 6150, loss[loss=0.2166, simple_loss=0.3021, pruned_loss=0.0656, over 7289.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2642, pruned_loss=0.04439, over 1430206.07 frames.], batch size: 25, lr: 1.96e-04 2022-05-28 20:26:46,780 INFO [train.py:842] (0/4) Epoch 28, batch 6200, loss[loss=0.2296, simple_loss=0.3029, pruned_loss=0.07815, over 7192.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2637, pruned_loss=0.04426, over 1427389.75 frames.], batch size: 22, lr: 1.96e-04 2022-05-28 20:27:26,699 INFO [train.py:842] (0/4) Epoch 28, batch 6250, loss[loss=0.1873, simple_loss=0.2757, pruned_loss=0.04943, over 7266.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2642, pruned_loss=0.04457, over 1426616.95 frames.], batch size: 19, lr: 1.96e-04 2022-05-28 20:28:17,016 INFO [train.py:842] (0/4) Epoch 28, batch 6300, loss[loss=0.1787, simple_loss=0.2827, pruned_loss=0.03732, over 7225.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2642, pruned_loss=0.04452, over 1425601.67 frames.], batch size: 21, lr: 1.96e-04 2022-05-28 20:28:56,506 INFO [train.py:842] (0/4) Epoch 28, batch 6350, loss[loss=0.1861, simple_loss=0.273, pruned_loss=0.04962, over 7152.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2655, pruned_loss=0.04508, over 1422043.31 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:29:35,550 INFO [train.py:842] (0/4) Epoch 28, batch 6400, loss[loss=0.2136, simple_loss=0.3085, pruned_loss=0.05934, over 7149.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2642, pruned_loss=0.04428, over 1418631.26 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:30:15,247 INFO [train.py:842] (0/4) Epoch 28, batch 6450, loss[loss=0.167, simple_loss=0.2513, pruned_loss=0.04137, over 7361.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2633, pruned_loss=0.04379, over 1413455.17 frames.], batch size: 19, lr: 1.96e-04 2022-05-28 20:30:54,477 INFO [train.py:842] (0/4) Epoch 28, batch 6500, loss[loss=0.191, simple_loss=0.2873, pruned_loss=0.04736, over 7131.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2655, pruned_loss=0.04485, over 1414991.70 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:31:33,952 INFO [train.py:842] (0/4) Epoch 28, batch 6550, loss[loss=0.2109, simple_loss=0.2931, pruned_loss=0.06432, over 4817.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2661, pruned_loss=0.04521, over 1413454.88 frames.], batch size: 52, lr: 1.96e-04 2022-05-28 20:32:34,887 INFO [train.py:842] (0/4) Epoch 28, batch 6600, loss[loss=0.1541, simple_loss=0.2394, pruned_loss=0.03442, over 7138.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2653, pruned_loss=0.045, over 1411277.12 frames.], batch size: 17, lr: 1.96e-04 2022-05-28 20:33:14,399 INFO [train.py:842] (0/4) Epoch 28, batch 6650, loss[loss=0.1945, simple_loss=0.2897, pruned_loss=0.04966, over 7189.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2647, pruned_loss=0.04435, over 1414894.97 frames.], batch size: 23, lr: 1.96e-04 2022-05-28 20:33:53,575 
INFO [train.py:842] (0/4) Epoch 28, batch 6700, loss[loss=0.2075, simple_loss=0.2922, pruned_loss=0.06143, over 7148.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2637, pruned_loss=0.0437, over 1421077.96 frames.], batch size: 26, lr: 1.96e-04 2022-05-28 20:34:33,398 INFO [train.py:842] (0/4) Epoch 28, batch 6750, loss[loss=0.1649, simple_loss=0.2659, pruned_loss=0.03192, over 7382.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2628, pruned_loss=0.04352, over 1422096.26 frames.], batch size: 23, lr: 1.96e-04 2022-05-28 20:35:12,636 INFO [train.py:842] (0/4) Epoch 28, batch 6800, loss[loss=0.1644, simple_loss=0.2513, pruned_loss=0.03879, over 7291.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2634, pruned_loss=0.04382, over 1423457.04 frames.], batch size: 18, lr: 1.96e-04 2022-05-28 20:35:52,296 INFO [train.py:842] (0/4) Epoch 28, batch 6850, loss[loss=0.1776, simple_loss=0.2719, pruned_loss=0.04168, over 7075.00 frames.], tot_loss[loss=0.1767, simple_loss=0.265, pruned_loss=0.04422, over 1418698.53 frames.], batch size: 28, lr: 1.96e-04 2022-05-28 20:36:31,675 INFO [train.py:842] (0/4) Epoch 28, batch 6900, loss[loss=0.1899, simple_loss=0.2872, pruned_loss=0.0463, over 7124.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2648, pruned_loss=0.04417, over 1419124.05 frames.], batch size: 21, lr: 1.96e-04 2022-05-28 20:37:11,503 INFO [train.py:842] (0/4) Epoch 28, batch 6950, loss[loss=0.1797, simple_loss=0.2766, pruned_loss=0.0414, over 7296.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2644, pruned_loss=0.04394, over 1422009.05 frames.], batch size: 25, lr: 1.96e-04 2022-05-28 20:37:50,731 INFO [train.py:842] (0/4) Epoch 28, batch 7000, loss[loss=0.1995, simple_loss=0.299, pruned_loss=0.05006, over 7151.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2643, pruned_loss=0.04396, over 1423041.16 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:38:30,264 INFO [train.py:842] (0/4) Epoch 28, batch 7050, loss[loss=0.1497, simple_loss=0.236, pruned_loss=0.03172, over 7363.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2638, pruned_loss=0.0435, over 1422834.96 frames.], batch size: 19, lr: 1.96e-04 2022-05-28 20:39:09,606 INFO [train.py:842] (0/4) Epoch 28, batch 7100, loss[loss=0.1916, simple_loss=0.2755, pruned_loss=0.05383, over 7429.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2646, pruned_loss=0.04385, over 1425159.01 frames.], batch size: 20, lr: 1.96e-04 2022-05-28 20:39:49,129 INFO [train.py:842] (0/4) Epoch 28, batch 7150, loss[loss=0.153, simple_loss=0.2496, pruned_loss=0.02822, over 7118.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2658, pruned_loss=0.04436, over 1424738.92 frames.], batch size: 21, lr: 1.96e-04 2022-05-28 20:40:28,338 INFO [train.py:842] (0/4) Epoch 28, batch 7200, loss[loss=0.1535, simple_loss=0.2329, pruned_loss=0.03708, over 7004.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2664, pruned_loss=0.04514, over 1421112.47 frames.], batch size: 16, lr: 1.95e-04 2022-05-28 20:41:07,987 INFO [train.py:842] (0/4) Epoch 28, batch 7250, loss[loss=0.1843, simple_loss=0.2698, pruned_loss=0.04943, over 6830.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2661, pruned_loss=0.0447, over 1419050.02 frames.], batch size: 15, lr: 1.95e-04 2022-05-28 20:41:46,963 INFO [train.py:842] (0/4) Epoch 28, batch 7300, loss[loss=0.1508, simple_loss=0.2408, pruned_loss=0.03036, over 7211.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2655, pruned_loss=0.04387, over 1420202.68 frames.], batch size: 21, lr: 1.95e-04 2022-05-28 20:42:26,414 INFO [train.py:842] (0/4) 
Epoch 28, batch 7350, loss[loss=0.2037, simple_loss=0.2709, pruned_loss=0.06825, over 7166.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2655, pruned_loss=0.04444, over 1421754.39 frames.], batch size: 18, lr: 1.95e-04 2022-05-28 20:43:05,469 INFO [train.py:842] (0/4) Epoch 28, batch 7400, loss[loss=0.1785, simple_loss=0.2655, pruned_loss=0.04579, over 7312.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2659, pruned_loss=0.0448, over 1418165.07 frames.], batch size: 21, lr: 1.95e-04 2022-05-28 20:43:45,207 INFO [train.py:842] (0/4) Epoch 28, batch 7450, loss[loss=0.205, simple_loss=0.2941, pruned_loss=0.05799, over 7436.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2663, pruned_loss=0.04508, over 1423842.48 frames.], batch size: 20, lr: 1.95e-04 2022-05-28 20:44:24,567 INFO [train.py:842] (0/4) Epoch 28, batch 7500, loss[loss=0.194, simple_loss=0.2787, pruned_loss=0.05467, over 7183.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2662, pruned_loss=0.0452, over 1425187.81 frames.], batch size: 26, lr: 1.95e-04 2022-05-28 20:45:04,394 INFO [train.py:842] (0/4) Epoch 28, batch 7550, loss[loss=0.1731, simple_loss=0.2718, pruned_loss=0.03719, over 7340.00 frames.], tot_loss[loss=0.1794, simple_loss=0.267, pruned_loss=0.04594, over 1427290.56 frames.], batch size: 22, lr: 1.95e-04 2022-05-28 20:45:43,615 INFO [train.py:842] (0/4) Epoch 28, batch 7600, loss[loss=0.1547, simple_loss=0.2333, pruned_loss=0.03805, over 7251.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2652, pruned_loss=0.04513, over 1427001.94 frames.], batch size: 16, lr: 1.95e-04 2022-05-28 20:46:23,179 INFO [train.py:842] (0/4) Epoch 28, batch 7650, loss[loss=0.1509, simple_loss=0.2293, pruned_loss=0.0363, over 7220.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2656, pruned_loss=0.04509, over 1427870.98 frames.], batch size: 16, lr: 1.95e-04 2022-05-28 20:47:02,315 INFO [train.py:842] (0/4) Epoch 28, batch 7700, loss[loss=0.2133, simple_loss=0.302, pruned_loss=0.06234, over 7206.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2669, pruned_loss=0.0456, over 1427318.65 frames.], batch size: 22, lr: 1.95e-04 2022-05-28 20:47:42,013 INFO [train.py:842] (0/4) Epoch 28, batch 7750, loss[loss=0.1529, simple_loss=0.2409, pruned_loss=0.03248, over 7159.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2654, pruned_loss=0.04523, over 1422178.33 frames.], batch size: 18, lr: 1.95e-04 2022-05-28 20:48:21,287 INFO [train.py:842] (0/4) Epoch 28, batch 7800, loss[loss=0.1797, simple_loss=0.274, pruned_loss=0.04267, over 7325.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2653, pruned_loss=0.0449, over 1425744.62 frames.], batch size: 21, lr: 1.95e-04 2022-05-28 20:48:35,758 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-256000.pt 2022-05-28 20:49:04,110 INFO [train.py:842] (0/4) Epoch 28, batch 7850, loss[loss=0.1605, simple_loss=0.2496, pruned_loss=0.0357, over 6663.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2655, pruned_loss=0.04569, over 1424563.48 frames.], batch size: 31, lr: 1.95e-04 2022-05-28 20:49:43,326 INFO [train.py:842] (0/4) Epoch 28, batch 7900, loss[loss=0.1681, simple_loss=0.2626, pruned_loss=0.0368, over 7429.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2655, pruned_loss=0.04517, over 1426484.36 frames.], batch size: 20, lr: 1.95e-04 2022-05-28 20:50:22,512 INFO [train.py:842] (0/4) Epoch 28, batch 7950, loss[loss=0.1462, simple_loss=0.2392, pruned_loss=0.02664, over 7317.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2663, 
pruned_loss=0.04507, over 1422150.91 frames.], batch size: 20, lr: 1.95e-04 2022-05-28 20:51:01,509 INFO [train.py:842] (0/4) Epoch 28, batch 8000, loss[loss=0.1507, simple_loss=0.2385, pruned_loss=0.03146, over 7109.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2652, pruned_loss=0.04428, over 1419330.43 frames.], batch size: 21, lr: 1.95e-04 2022-05-28 20:51:40,847 INFO [train.py:842] (0/4) Epoch 28, batch 8050, loss[loss=0.1728, simple_loss=0.253, pruned_loss=0.04627, over 7241.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2646, pruned_loss=0.04431, over 1419949.57 frames.], batch size: 20, lr: 1.95e-04 2022-05-28 20:52:19,871 INFO [train.py:842] (0/4) Epoch 28, batch 8100, loss[loss=0.1919, simple_loss=0.2752, pruned_loss=0.05424, over 7320.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2654, pruned_loss=0.04481, over 1420333.29 frames.], batch size: 24, lr: 1.95e-04 2022-05-28 20:52:59,298 INFO [train.py:842] (0/4) Epoch 28, batch 8150, loss[loss=0.1984, simple_loss=0.2705, pruned_loss=0.06314, over 7402.00 frames.], tot_loss[loss=0.1786, simple_loss=0.2665, pruned_loss=0.04533, over 1414571.29 frames.], batch size: 18, lr: 1.95e-04 2022-05-28 20:53:38,449 INFO [train.py:842] (0/4) Epoch 28, batch 8200, loss[loss=0.1877, simple_loss=0.2806, pruned_loss=0.04744, over 7294.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2657, pruned_loss=0.04462, over 1417415.64 frames.], batch size: 24, lr: 1.95e-04 2022-05-28 20:54:18,208 INFO [train.py:842] (0/4) Epoch 28, batch 8250, loss[loss=0.1984, simple_loss=0.2905, pruned_loss=0.05309, over 7339.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2644, pruned_loss=0.04401, over 1419194.80 frames.], batch size: 22, lr: 1.95e-04 2022-05-28 20:54:57,230 INFO [train.py:842] (0/4) Epoch 28, batch 8300, loss[loss=0.1702, simple_loss=0.2621, pruned_loss=0.03914, over 7224.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2655, pruned_loss=0.04477, over 1421708.99 frames.], batch size: 21, lr: 1.95e-04 2022-05-28 20:55:36,890 INFO [train.py:842] (0/4) Epoch 28, batch 8350, loss[loss=0.1696, simple_loss=0.265, pruned_loss=0.03711, over 6809.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2667, pruned_loss=0.04538, over 1423741.37 frames.], batch size: 15, lr: 1.95e-04 2022-05-28 20:56:16,227 INFO [train.py:842] (0/4) Epoch 28, batch 8400, loss[loss=0.1553, simple_loss=0.2326, pruned_loss=0.03895, over 7142.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2658, pruned_loss=0.04492, over 1423894.22 frames.], batch size: 17, lr: 1.95e-04 2022-05-28 20:56:55,877 INFO [train.py:842] (0/4) Epoch 28, batch 8450, loss[loss=0.1468, simple_loss=0.2334, pruned_loss=0.03014, over 7140.00 frames.], tot_loss[loss=0.1781, simple_loss=0.266, pruned_loss=0.04509, over 1418273.15 frames.], batch size: 17, lr: 1.95e-04 2022-05-28 20:57:34,957 INFO [train.py:842] (0/4) Epoch 28, batch 8500, loss[loss=0.1689, simple_loss=0.2537, pruned_loss=0.04202, over 7373.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2657, pruned_loss=0.04471, over 1417087.79 frames.], batch size: 23, lr: 1.95e-04 2022-05-28 20:58:14,420 INFO [train.py:842] (0/4) Epoch 28, batch 8550, loss[loss=0.168, simple_loss=0.2613, pruned_loss=0.03736, over 7386.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2662, pruned_loss=0.04513, over 1414409.41 frames.], batch size: 23, lr: 1.95e-04 2022-05-28 20:58:53,548 INFO [train.py:842] (0/4) Epoch 28, batch 8600, loss[loss=0.1924, simple_loss=0.2965, pruned_loss=0.04421, over 7390.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2664, pruned_loss=0.04494, 
over 1408668.01 frames.], batch size: 23, lr: 1.95e-04 2022-05-28 20:59:32,590 INFO [train.py:842] (0/4) Epoch 28, batch 8650, loss[loss=0.1922, simple_loss=0.2723, pruned_loss=0.05604, over 7436.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2677, pruned_loss=0.04553, over 1408735.84 frames.], batch size: 20, lr: 1.95e-04 2022-05-28 21:00:11,800 INFO [train.py:842] (0/4) Epoch 28, batch 8700, loss[loss=0.1552, simple_loss=0.2517, pruned_loss=0.02932, over 6440.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2664, pruned_loss=0.04455, over 1411232.13 frames.], batch size: 38, lr: 1.95e-04 2022-05-28 21:00:51,043 INFO [train.py:842] (0/4) Epoch 28, batch 8750, loss[loss=0.2024, simple_loss=0.2798, pruned_loss=0.06256, over 7116.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2661, pruned_loss=0.04488, over 1408862.23 frames.], batch size: 21, lr: 1.95e-04 2022-05-28 21:01:30,136 INFO [train.py:842] (0/4) Epoch 28, batch 8800, loss[loss=0.1856, simple_loss=0.2802, pruned_loss=0.0455, over 7380.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2673, pruned_loss=0.04567, over 1406288.83 frames.], batch size: 23, lr: 1.95e-04 2022-05-28 21:02:09,472 INFO [train.py:842] (0/4) Epoch 28, batch 8850, loss[loss=0.1722, simple_loss=0.2747, pruned_loss=0.03486, over 7220.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2678, pruned_loss=0.04579, over 1405206.35 frames.], batch size: 26, lr: 1.95e-04 2022-05-28 21:02:48,619 INFO [train.py:842] (0/4) Epoch 28, batch 8900, loss[loss=0.1714, simple_loss=0.2584, pruned_loss=0.04222, over 5254.00 frames.], tot_loss[loss=0.1797, simple_loss=0.2678, pruned_loss=0.04575, over 1394188.99 frames.], batch size: 52, lr: 1.95e-04 2022-05-28 21:03:28,073 INFO [train.py:842] (0/4) Epoch 28, batch 8950, loss[loss=0.1445, simple_loss=0.2345, pruned_loss=0.02724, over 7129.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2664, pruned_loss=0.04547, over 1389520.74 frames.], batch size: 21, lr: 1.95e-04 2022-05-28 21:04:07,004 INFO [train.py:842] (0/4) Epoch 28, batch 9000, loss[loss=0.1527, simple_loss=0.2467, pruned_loss=0.02933, over 7165.00 frames.], tot_loss[loss=0.1792, simple_loss=0.267, pruned_loss=0.04575, over 1381882.51 frames.], batch size: 19, lr: 1.95e-04 2022-05-28 21:04:07,005 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 21:04:16,597 INFO [train.py:871] (0/4) Epoch 28, validation: loss=0.1632, simple_loss=0.2611, pruned_loss=0.03268, over 868885.00 frames. 
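Alongside the periodic metrics, the log above records a numbered checkpoint being written (checkpoint-256000.pt under the experiment directory), i.e. a save triggered by the global batch count rather than by an epoch boundary. The helper below is an illustrative sketch of that pattern using a plain torch.save; its name and payload are assumptions, not icefall's actual save_checkpoint API.

# Hedged sketch: write a numbered checkpoint every `save_every_n` training
# batches, mirroring the ".../exp/checkpoint-256000.pt" save seen above.
from pathlib import Path
import torch

def maybe_save_checkpoint(model, optimizer, batch_idx_train: int,
                          save_every_n: int, exp_dir: Path) -> None:
    if batch_idx_train == 0 or batch_idx_train % save_every_n != 0:
        return
    out = exp_dir / f"checkpoint-{batch_idx_train}.pt"
    torch.save(
        {
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "batch_idx_train": batch_idx_train,
        },
        out,
    )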
2022-05-28 21:04:55,658 INFO [train.py:842] (0/4) Epoch 28, batch 9050, loss[loss=0.1949, simple_loss=0.29, pruned_loss=0.04988, over 6355.00 frames.], tot_loss[loss=0.1804, simple_loss=0.2677, pruned_loss=0.04652, over 1360176.52 frames.], batch size: 37, lr: 1.95e-04 2022-05-28 21:05:34,567 INFO [train.py:842] (0/4) Epoch 28, batch 9100, loss[loss=0.2053, simple_loss=0.2831, pruned_loss=0.06373, over 4909.00 frames.], tot_loss[loss=0.1801, simple_loss=0.2668, pruned_loss=0.04668, over 1333322.27 frames.], batch size: 52, lr: 1.95e-04 2022-05-28 21:06:12,618 INFO [train.py:842] (0/4) Epoch 28, batch 9150, loss[loss=0.1709, simple_loss=0.2621, pruned_loss=0.03992, over 6529.00 frames.], tot_loss[loss=0.1826, simple_loss=0.2692, pruned_loss=0.04798, over 1298427.76 frames.], batch size: 38, lr: 1.95e-04 2022-05-28 21:06:45,307 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-28.pt 2022-05-28 21:07:05,004 INFO [train.py:842] (0/4) Epoch 29, batch 0, loss[loss=0.1624, simple_loss=0.2596, pruned_loss=0.03263, over 7067.00 frames.], tot_loss[loss=0.1624, simple_loss=0.2596, pruned_loss=0.03263, over 7067.00 frames.], batch size: 28, lr: 1.91e-04 2022-05-28 21:07:44,646 INFO [train.py:842] (0/4) Epoch 29, batch 50, loss[loss=0.2013, simple_loss=0.2911, pruned_loss=0.05572, over 7306.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2653, pruned_loss=0.043, over 323748.84 frames.], batch size: 24, lr: 1.91e-04 2022-05-28 21:08:24,029 INFO [train.py:842] (0/4) Epoch 29, batch 100, loss[loss=0.1603, simple_loss=0.2481, pruned_loss=0.0362, over 7315.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2654, pruned_loss=0.04396, over 569516.66 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:09:03,635 INFO [train.py:842] (0/4) Epoch 29, batch 150, loss[loss=0.1864, simple_loss=0.2767, pruned_loss=0.04804, over 7227.00 frames.], tot_loss[loss=0.1771, simple_loss=0.266, pruned_loss=0.04413, over 759668.74 frames.], batch size: 20, lr: 1.91e-04 2022-05-28 21:09:43,178 INFO [train.py:842] (0/4) Epoch 29, batch 200, loss[loss=0.1593, simple_loss=0.2503, pruned_loss=0.03419, over 7458.00 frames.], tot_loss[loss=0.176, simple_loss=0.2648, pruned_loss=0.0436, over 909648.91 frames.], batch size: 19, lr: 1.91e-04 2022-05-28 21:10:22,666 INFO [train.py:842] (0/4) Epoch 29, batch 250, loss[loss=0.3023, simple_loss=0.3692, pruned_loss=0.1177, over 4657.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2657, pruned_loss=0.0444, over 1020026.60 frames.], batch size: 52, lr: 1.91e-04 2022-05-28 21:11:01,708 INFO [train.py:842] (0/4) Epoch 29, batch 300, loss[loss=0.1519, simple_loss=0.2381, pruned_loss=0.03289, over 7160.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2655, pruned_loss=0.04361, over 1109707.67 frames.], batch size: 18, lr: 1.91e-04 2022-05-28 21:11:41,207 INFO [train.py:842] (0/4) Epoch 29, batch 350, loss[loss=0.1654, simple_loss=0.2529, pruned_loss=0.03889, over 7453.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2649, pruned_loss=0.04302, over 1181701.98 frames.], batch size: 19, lr: 1.91e-04 2022-05-28 21:12:20,481 INFO [train.py:842] (0/4) Epoch 29, batch 400, loss[loss=0.2316, simple_loss=0.3062, pruned_loss=0.07851, over 7142.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2649, pruned_loss=0.04326, over 1236997.11 frames.], batch size: 20, lr: 1.91e-04 2022-05-28 21:13:00,119 INFO [train.py:842] (0/4) Epoch 29, batch 450, loss[loss=0.21, simple_loss=0.3033, pruned_loss=0.05837, over 7110.00 frames.], tot_loss[loss=0.1758, 
simple_loss=0.2654, pruned_loss=0.0431, over 1282525.82 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:13:39,181 INFO [train.py:842] (0/4) Epoch 29, batch 500, loss[loss=0.1749, simple_loss=0.2742, pruned_loss=0.03779, over 4871.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2647, pruned_loss=0.04338, over 1309998.73 frames.], batch size: 52, lr: 1.91e-04 2022-05-28 21:14:18,723 INFO [train.py:842] (0/4) Epoch 29, batch 550, loss[loss=0.2028, simple_loss=0.296, pruned_loss=0.05476, over 7222.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2643, pruned_loss=0.04328, over 1332044.46 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:14:58,134 INFO [train.py:842] (0/4) Epoch 29, batch 600, loss[loss=0.1803, simple_loss=0.2614, pruned_loss=0.04963, over 7257.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2645, pruned_loss=0.04398, over 1347826.28 frames.], batch size: 19, lr: 1.91e-04 2022-05-28 21:15:37,661 INFO [train.py:842] (0/4) Epoch 29, batch 650, loss[loss=0.1622, simple_loss=0.2585, pruned_loss=0.03299, over 7069.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2642, pruned_loss=0.04362, over 1366492.11 frames.], batch size: 18, lr: 1.91e-04 2022-05-28 21:16:17,139 INFO [train.py:842] (0/4) Epoch 29, batch 700, loss[loss=0.2086, simple_loss=0.2885, pruned_loss=0.06435, over 5488.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2661, pruned_loss=0.04477, over 1374402.40 frames.], batch size: 54, lr: 1.91e-04 2022-05-28 21:16:56,540 INFO [train.py:842] (0/4) Epoch 29, batch 750, loss[loss=0.1747, simple_loss=0.2599, pruned_loss=0.04472, over 7434.00 frames.], tot_loss[loss=0.177, simple_loss=0.2655, pruned_loss=0.04431, over 1381210.32 frames.], batch size: 20, lr: 1.91e-04 2022-05-28 21:17:35,593 INFO [train.py:842] (0/4) Epoch 29, batch 800, loss[loss=0.1847, simple_loss=0.2797, pruned_loss=0.04488, over 7104.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2654, pruned_loss=0.04425, over 1387153.85 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:18:15,134 INFO [train.py:842] (0/4) Epoch 29, batch 850, loss[loss=0.1348, simple_loss=0.2266, pruned_loss=0.02155, over 6262.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2659, pruned_loss=0.04444, over 1391612.20 frames.], batch size: 37, lr: 1.91e-04 2022-05-28 21:18:54,078 INFO [train.py:842] (0/4) Epoch 29, batch 900, loss[loss=0.1788, simple_loss=0.2664, pruned_loss=0.04562, over 6678.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2661, pruned_loss=0.04414, over 1398664.13 frames.], batch size: 31, lr: 1.91e-04 2022-05-28 21:19:33,689 INFO [train.py:842] (0/4) Epoch 29, batch 950, loss[loss=0.1633, simple_loss=0.2528, pruned_loss=0.03689, over 7208.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2654, pruned_loss=0.04387, over 1408054.47 frames.], batch size: 22, lr: 1.91e-04 2022-05-28 21:20:13,129 INFO [train.py:842] (0/4) Epoch 29, batch 1000, loss[loss=0.1688, simple_loss=0.2466, pruned_loss=0.04554, over 7201.00 frames.], tot_loss[loss=0.1766, simple_loss=0.265, pruned_loss=0.04404, over 1414126.89 frames.], batch size: 16, lr: 1.91e-04 2022-05-28 21:20:52,878 INFO [train.py:842] (0/4) Epoch 29, batch 1050, loss[loss=0.1652, simple_loss=0.2556, pruned_loss=0.03737, over 7408.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2663, pruned_loss=0.04409, over 1419505.28 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:21:32,451 INFO [train.py:842] (0/4) Epoch 29, batch 1100, loss[loss=0.1525, simple_loss=0.2311, pruned_loss=0.03697, over 7292.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2659, 
pruned_loss=0.04429, over 1422680.63 frames.], batch size: 17, lr: 1.91e-04 2022-05-28 21:22:11,771 INFO [train.py:842] (0/4) Epoch 29, batch 1150, loss[loss=0.1783, simple_loss=0.2695, pruned_loss=0.04359, over 7091.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2658, pruned_loss=0.04451, over 1421840.65 frames.], batch size: 28, lr: 1.91e-04 2022-05-28 21:22:50,760 INFO [train.py:842] (0/4) Epoch 29, batch 1200, loss[loss=0.1658, simple_loss=0.2615, pruned_loss=0.03508, over 7048.00 frames.], tot_loss[loss=0.1787, simple_loss=0.2674, pruned_loss=0.04503, over 1424072.29 frames.], batch size: 28, lr: 1.91e-04 2022-05-28 21:23:30,566 INFO [train.py:842] (0/4) Epoch 29, batch 1250, loss[loss=0.1953, simple_loss=0.2938, pruned_loss=0.04841, over 7204.00 frames.], tot_loss[loss=0.1791, simple_loss=0.2672, pruned_loss=0.0455, over 1418760.40 frames.], batch size: 22, lr: 1.91e-04 2022-05-28 21:24:09,913 INFO [train.py:842] (0/4) Epoch 29, batch 1300, loss[loss=0.1855, simple_loss=0.2781, pruned_loss=0.04647, over 7134.00 frames.], tot_loss[loss=0.178, simple_loss=0.2661, pruned_loss=0.04493, over 1421355.78 frames.], batch size: 20, lr: 1.91e-04 2022-05-28 21:24:49,611 INFO [train.py:842] (0/4) Epoch 29, batch 1350, loss[loss=0.1844, simple_loss=0.2855, pruned_loss=0.04161, over 7103.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2653, pruned_loss=0.04432, over 1426770.97 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:25:29,113 INFO [train.py:842] (0/4) Epoch 29, batch 1400, loss[loss=0.1263, simple_loss=0.2129, pruned_loss=0.01989, over 7274.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2652, pruned_loss=0.04421, over 1428123.14 frames.], batch size: 17, lr: 1.91e-04 2022-05-28 21:26:08,823 INFO [train.py:842] (0/4) Epoch 29, batch 1450, loss[loss=0.1592, simple_loss=0.255, pruned_loss=0.03175, over 7299.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2642, pruned_loss=0.0434, over 1431885.42 frames.], batch size: 24, lr: 1.91e-04 2022-05-28 21:26:47,897 INFO [train.py:842] (0/4) Epoch 29, batch 1500, loss[loss=0.1897, simple_loss=0.2735, pruned_loss=0.05299, over 7334.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2645, pruned_loss=0.04317, over 1428694.40 frames.], batch size: 20, lr: 1.91e-04 2022-05-28 21:27:27,458 INFO [train.py:842] (0/4) Epoch 29, batch 1550, loss[loss=0.1679, simple_loss=0.2627, pruned_loss=0.03651, over 7211.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2643, pruned_loss=0.04329, over 1430545.84 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:28:06,666 INFO [train.py:842] (0/4) Epoch 29, batch 1600, loss[loss=0.1482, simple_loss=0.2277, pruned_loss=0.03438, over 7265.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2645, pruned_loss=0.04333, over 1427326.64 frames.], batch size: 16, lr: 1.91e-04 2022-05-28 21:28:46,415 INFO [train.py:842] (0/4) Epoch 29, batch 1650, loss[loss=0.141, simple_loss=0.2269, pruned_loss=0.02762, over 6832.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2645, pruned_loss=0.04348, over 1428496.67 frames.], batch size: 15, lr: 1.91e-04 2022-05-28 21:29:25,921 INFO [train.py:842] (0/4) Epoch 29, batch 1700, loss[loss=0.1389, simple_loss=0.234, pruned_loss=0.02189, over 7254.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2633, pruned_loss=0.04307, over 1430840.45 frames.], batch size: 19, lr: 1.91e-04 2022-05-28 21:30:05,641 INFO [train.py:842] (0/4) Epoch 29, batch 1750, loss[loss=0.166, simple_loss=0.2671, pruned_loss=0.03247, over 7115.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2626, pruned_loss=0.04274, over 
1433133.89 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:30:44,933 INFO [train.py:842] (0/4) Epoch 29, batch 1800, loss[loss=0.157, simple_loss=0.2404, pruned_loss=0.0368, over 7011.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2629, pruned_loss=0.04334, over 1422760.02 frames.], batch size: 16, lr: 1.91e-04 2022-05-28 21:31:24,533 INFO [train.py:842] (0/4) Epoch 29, batch 1850, loss[loss=0.1725, simple_loss=0.2544, pruned_loss=0.04528, over 7413.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2642, pruned_loss=0.04415, over 1424718.63 frames.], batch size: 18, lr: 1.91e-04 2022-05-28 21:32:03,673 INFO [train.py:842] (0/4) Epoch 29, batch 1900, loss[loss=0.201, simple_loss=0.3013, pruned_loss=0.05033, over 7189.00 frames.], tot_loss[loss=0.175, simple_loss=0.2634, pruned_loss=0.04326, over 1425435.92 frames.], batch size: 26, lr: 1.91e-04 2022-05-28 21:32:43,258 INFO [train.py:842] (0/4) Epoch 29, batch 1950, loss[loss=0.1789, simple_loss=0.2729, pruned_loss=0.0425, over 7289.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2638, pruned_loss=0.04353, over 1427670.25 frames.], batch size: 25, lr: 1.91e-04 2022-05-28 21:33:22,691 INFO [train.py:842] (0/4) Epoch 29, batch 2000, loss[loss=0.1913, simple_loss=0.2835, pruned_loss=0.04955, over 7195.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2649, pruned_loss=0.04438, over 1430470.99 frames.], batch size: 23, lr: 1.91e-04 2022-05-28 21:34:02,031 INFO [train.py:842] (0/4) Epoch 29, batch 2050, loss[loss=0.1854, simple_loss=0.2746, pruned_loss=0.04816, over 7321.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2655, pruned_loss=0.04454, over 1424186.66 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:34:41,426 INFO [train.py:842] (0/4) Epoch 29, batch 2100, loss[loss=0.2227, simple_loss=0.3015, pruned_loss=0.07193, over 7282.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2647, pruned_loss=0.04441, over 1425592.91 frames.], batch size: 25, lr: 1.91e-04 2022-05-28 21:35:20,997 INFO [train.py:842] (0/4) Epoch 29, batch 2150, loss[loss=0.1764, simple_loss=0.2699, pruned_loss=0.04146, over 7224.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2651, pruned_loss=0.04432, over 1426998.91 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:36:00,002 INFO [train.py:842] (0/4) Epoch 29, batch 2200, loss[loss=0.2058, simple_loss=0.293, pruned_loss=0.05935, over 7277.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2652, pruned_loss=0.04424, over 1421870.28 frames.], batch size: 25, lr: 1.91e-04 2022-05-28 21:36:39,491 INFO [train.py:842] (0/4) Epoch 29, batch 2250, loss[loss=0.1795, simple_loss=0.2689, pruned_loss=0.04505, over 7108.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2645, pruned_loss=0.04411, over 1425651.83 frames.], batch size: 21, lr: 1.91e-04 2022-05-28 21:37:18,702 INFO [train.py:842] (0/4) Epoch 29, batch 2300, loss[loss=0.1946, simple_loss=0.2862, pruned_loss=0.05149, over 7276.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2647, pruned_loss=0.04449, over 1427442.83 frames.], batch size: 24, lr: 1.91e-04 2022-05-28 21:37:58,194 INFO [train.py:842] (0/4) Epoch 29, batch 2350, loss[loss=0.2199, simple_loss=0.2896, pruned_loss=0.0751, over 7069.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2644, pruned_loss=0.04441, over 1425174.50 frames.], batch size: 18, lr: 1.91e-04 2022-05-28 21:38:37,543 INFO [train.py:842] (0/4) Epoch 29, batch 2400, loss[loss=0.1616, simple_loss=0.2553, pruned_loss=0.03399, over 7353.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2639, pruned_loss=0.04421, over 1426465.20 frames.], 
batch size: 19, lr: 1.90e-04 2022-05-28 21:39:16,883 INFO [train.py:842] (0/4) Epoch 29, batch 2450, loss[loss=0.1521, simple_loss=0.2564, pruned_loss=0.02393, over 7117.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2656, pruned_loss=0.04475, over 1416708.73 frames.], batch size: 21, lr: 1.90e-04 2022-05-28 21:39:56,302 INFO [train.py:842] (0/4) Epoch 29, batch 2500, loss[loss=0.1667, simple_loss=0.2466, pruned_loss=0.04342, over 7415.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2657, pruned_loss=0.04508, over 1420064.91 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 21:40:35,883 INFO [train.py:842] (0/4) Epoch 29, batch 2550, loss[loss=0.1486, simple_loss=0.2424, pruned_loss=0.02742, over 7169.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2657, pruned_loss=0.04504, over 1417367.00 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 21:41:14,980 INFO [train.py:842] (0/4) Epoch 29, batch 2600, loss[loss=0.1822, simple_loss=0.2771, pruned_loss=0.04365, over 7210.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2653, pruned_loss=0.04471, over 1415858.90 frames.], batch size: 23, lr: 1.90e-04 2022-05-28 21:41:54,608 INFO [train.py:842] (0/4) Epoch 29, batch 2650, loss[loss=0.1654, simple_loss=0.2498, pruned_loss=0.04055, over 7412.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2643, pruned_loss=0.04422, over 1418664.02 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 21:42:33,892 INFO [train.py:842] (0/4) Epoch 29, batch 2700, loss[loss=0.1839, simple_loss=0.2704, pruned_loss=0.04867, over 5276.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2629, pruned_loss=0.04347, over 1418434.53 frames.], batch size: 52, lr: 1.90e-04 2022-05-28 21:43:13,534 INFO [train.py:842] (0/4) Epoch 29, batch 2750, loss[loss=0.1893, simple_loss=0.2805, pruned_loss=0.04907, over 7310.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2645, pruned_loss=0.04439, over 1413861.55 frames.], batch size: 21, lr: 1.90e-04 2022-05-28 21:43:52,903 INFO [train.py:842] (0/4) Epoch 29, batch 2800, loss[loss=0.1656, simple_loss=0.2655, pruned_loss=0.03289, over 7353.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2646, pruned_loss=0.04354, over 1417473.69 frames.], batch size: 22, lr: 1.90e-04 2022-05-28 21:44:32,574 INFO [train.py:842] (0/4) Epoch 29, batch 2850, loss[loss=0.1721, simple_loss=0.2545, pruned_loss=0.04479, over 7265.00 frames.], tot_loss[loss=0.1743, simple_loss=0.263, pruned_loss=0.04287, over 1417961.93 frames.], batch size: 19, lr: 1.90e-04 2022-05-28 21:45:11,771 INFO [train.py:842] (0/4) Epoch 29, batch 2900, loss[loss=0.1489, simple_loss=0.2368, pruned_loss=0.03046, over 7282.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2629, pruned_loss=0.0429, over 1416663.07 frames.], batch size: 17, lr: 1.90e-04 2022-05-28 21:45:51,683 INFO [train.py:842] (0/4) Epoch 29, batch 2950, loss[loss=0.1488, simple_loss=0.238, pruned_loss=0.02977, over 7119.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2612, pruned_loss=0.0423, over 1416460.32 frames.], batch size: 17, lr: 1.90e-04 2022-05-28 21:46:30,730 INFO [train.py:842] (0/4) Epoch 29, batch 3000, loss[loss=0.1673, simple_loss=0.2529, pruned_loss=0.0409, over 7240.00 frames.], tot_loss[loss=0.1733, simple_loss=0.262, pruned_loss=0.04226, over 1417658.62 frames.], batch size: 20, lr: 1.90e-04 2022-05-28 21:46:30,732 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 21:46:40,406 INFO [train.py:871] (0/4) Epoch 29, validation: loss=0.1638, simple_loss=0.2614, pruned_loss=0.03305, over 868885.00 frames. 
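The lr column changes on two timescales: it creeps down within an epoch as the global batch count grows (1.97e-04 to 1.95e-04 across epoch 28 above) and drops again when a new epoch begins (1.95e-04 to 1.91e-04 at the epoch 28/29 boundary). A schedule with both a batch-dependent and an epoch-dependent factor, such as an Eden-style schedule, behaves this way; the sketch below is an assumption about the form, and its constants are placeholders rather than values read from this run's configuration.

# Hedged sketch of an Eden-style learning-rate schedule: one factor decays
# with the global batch count, another with the epoch count, so the lr both
# shrinks slowly within an epoch and steps down at epoch boundaries.
def eden_like_lr(initial_lr: float, batch: int, epoch: int,
                 lr_batches: float, lr_epochs: float) -> float:
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return initial_lr * batch_factor * epoch_factor

# Placeholder constants only: note the within-epoch decay as `batch` grows
# and the extra drop when `epoch` increments.
for epoch, batch in [(28, 250_000), (28, 256_000), (29, 256_000)]:
    print(epoch, batch, f"{eden_like_lr(3e-3, batch, epoch, 5000.0, 6.0):.2e}")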
2022-05-28 21:47:20,136 INFO [train.py:842] (0/4) Epoch 29, batch 3050, loss[loss=0.1502, simple_loss=0.2378, pruned_loss=0.03132, over 7156.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2613, pruned_loss=0.04219, over 1420278.44 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 21:47:59,328 INFO [train.py:842] (0/4) Epoch 29, batch 3100, loss[loss=0.1818, simple_loss=0.2683, pruned_loss=0.0477, over 7266.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2617, pruned_loss=0.04222, over 1417475.19 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 21:48:38,765 INFO [train.py:842] (0/4) Epoch 29, batch 3150, loss[loss=0.1603, simple_loss=0.2502, pruned_loss=0.03525, over 7211.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2627, pruned_loss=0.04297, over 1421438.45 frames.], batch size: 21, lr: 1.90e-04 2022-05-28 21:49:18,080 INFO [train.py:842] (0/4) Epoch 29, batch 3200, loss[loss=0.1834, simple_loss=0.2716, pruned_loss=0.04759, over 7123.00 frames.], tot_loss[loss=0.1755, simple_loss=0.264, pruned_loss=0.04353, over 1421714.99 frames.], batch size: 21, lr: 1.90e-04 2022-05-28 21:49:57,944 INFO [train.py:842] (0/4) Epoch 29, batch 3250, loss[loss=0.1483, simple_loss=0.2298, pruned_loss=0.03344, over 6792.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2637, pruned_loss=0.04355, over 1420565.63 frames.], batch size: 15, lr: 1.90e-04 2022-05-28 21:50:37,003 INFO [train.py:842] (0/4) Epoch 29, batch 3300, loss[loss=0.1368, simple_loss=0.2376, pruned_loss=0.01799, over 7220.00 frames.], tot_loss[loss=0.175, simple_loss=0.2639, pruned_loss=0.04304, over 1420405.52 frames.], batch size: 21, lr: 1.90e-04 2022-05-28 21:51:16,442 INFO [train.py:842] (0/4) Epoch 29, batch 3350, loss[loss=0.1943, simple_loss=0.2788, pruned_loss=0.05493, over 7052.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2638, pruned_loss=0.04349, over 1418783.83 frames.], batch size: 28, lr: 1.90e-04 2022-05-28 21:51:55,701 INFO [train.py:842] (0/4) Epoch 29, batch 3400, loss[loss=0.193, simple_loss=0.2776, pruned_loss=0.05423, over 7078.00 frames.], tot_loss[loss=0.176, simple_loss=0.2646, pruned_loss=0.04374, over 1417300.43 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 21:52:35,486 INFO [train.py:842] (0/4) Epoch 29, batch 3450, loss[loss=0.18, simple_loss=0.2572, pruned_loss=0.05142, over 7273.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2629, pruned_loss=0.04347, over 1419553.23 frames.], batch size: 17, lr: 1.90e-04 2022-05-28 21:53:14,678 INFO [train.py:842] (0/4) Epoch 29, batch 3500, loss[loss=0.1842, simple_loss=0.2832, pruned_loss=0.04263, over 6845.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2642, pruned_loss=0.04381, over 1419232.11 frames.], batch size: 31, lr: 1.90e-04 2022-05-28 21:53:54,282 INFO [train.py:842] (0/4) Epoch 29, batch 3550, loss[loss=0.1662, simple_loss=0.249, pruned_loss=0.04168, over 7282.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2639, pruned_loss=0.04375, over 1422178.00 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 21:54:33,684 INFO [train.py:842] (0/4) Epoch 29, batch 3600, loss[loss=0.2072, simple_loss=0.2825, pruned_loss=0.06595, over 6807.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2655, pruned_loss=0.04468, over 1422867.21 frames.], batch size: 15, lr: 1.90e-04 2022-05-28 21:55:13,227 INFO [train.py:842] (0/4) Epoch 29, batch 3650, loss[loss=0.199, simple_loss=0.2898, pruned_loss=0.05414, over 7336.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2652, pruned_loss=0.04458, over 1426717.38 frames.], batch size: 22, lr: 1.90e-04 2022-05-28 21:55:52,673 
INFO [train.py:842] (0/4) Epoch 29, batch 3700, loss[loss=0.1929, simple_loss=0.2818, pruned_loss=0.05199, over 7200.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2647, pruned_loss=0.04428, over 1426459.79 frames.], batch size: 23, lr: 1.90e-04 2022-05-28 21:56:32,193 INFO [train.py:842] (0/4) Epoch 29, batch 3750, loss[loss=0.2093, simple_loss=0.2925, pruned_loss=0.06304, over 5453.00 frames.], tot_loss[loss=0.1783, simple_loss=0.2662, pruned_loss=0.0452, over 1426544.25 frames.], batch size: 52, lr: 1.90e-04 2022-05-28 21:57:11,438 INFO [train.py:842] (0/4) Epoch 29, batch 3800, loss[loss=0.1542, simple_loss=0.2432, pruned_loss=0.0326, over 7057.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2662, pruned_loss=0.04499, over 1429038.00 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 21:57:51,286 INFO [train.py:842] (0/4) Epoch 29, batch 3850, loss[loss=0.1548, simple_loss=0.24, pruned_loss=0.03479, over 7240.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2644, pruned_loss=0.0439, over 1427164.21 frames.], batch size: 16, lr: 1.90e-04 2022-05-28 21:58:30,599 INFO [train.py:842] (0/4) Epoch 29, batch 3900, loss[loss=0.1671, simple_loss=0.244, pruned_loss=0.04509, over 7408.00 frames.], tot_loss[loss=0.176, simple_loss=0.2643, pruned_loss=0.04388, over 1429510.34 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 21:59:10,096 INFO [train.py:842] (0/4) Epoch 29, batch 3950, loss[loss=0.1495, simple_loss=0.2483, pruned_loss=0.02536, over 7116.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2649, pruned_loss=0.04377, over 1430548.90 frames.], batch size: 21, lr: 1.90e-04 2022-05-28 21:59:49,004 INFO [train.py:842] (0/4) Epoch 29, batch 4000, loss[loss=0.2, simple_loss=0.2912, pruned_loss=0.05444, over 7110.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2645, pruned_loss=0.04368, over 1429454.34 frames.], batch size: 21, lr: 1.90e-04 2022-05-28 22:00:28,491 INFO [train.py:842] (0/4) Epoch 29, batch 4050, loss[loss=0.1714, simple_loss=0.2659, pruned_loss=0.03847, over 7100.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2646, pruned_loss=0.0439, over 1429014.04 frames.], batch size: 28, lr: 1.90e-04 2022-05-28 22:01:07,527 INFO [train.py:842] (0/4) Epoch 29, batch 4100, loss[loss=0.1822, simple_loss=0.2784, pruned_loss=0.04295, over 7048.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2639, pruned_loss=0.04351, over 1430150.95 frames.], batch size: 28, lr: 1.90e-04 2022-05-28 22:01:47,160 INFO [train.py:842] (0/4) Epoch 29, batch 4150, loss[loss=0.2097, simple_loss=0.3033, pruned_loss=0.05805, over 7233.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2641, pruned_loss=0.04372, over 1432316.85 frames.], batch size: 20, lr: 1.90e-04 2022-05-28 22:02:26,306 INFO [train.py:842] (0/4) Epoch 29, batch 4200, loss[loss=0.1735, simple_loss=0.2804, pruned_loss=0.03334, over 7342.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2648, pruned_loss=0.04391, over 1427893.09 frames.], batch size: 22, lr: 1.90e-04 2022-05-28 22:03:05,988 INFO [train.py:842] (0/4) Epoch 29, batch 4250, loss[loss=0.1532, simple_loss=0.2382, pruned_loss=0.03408, over 7258.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2642, pruned_loss=0.0432, over 1427221.42 frames.], batch size: 19, lr: 1.90e-04 2022-05-28 22:03:45,578 INFO [train.py:842] (0/4) Epoch 29, batch 4300, loss[loss=0.1474, simple_loss=0.234, pruned_loss=0.03038, over 6754.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2648, pruned_loss=0.0433, over 1428918.48 frames.], batch size: 15, lr: 1.90e-04 2022-05-28 22:04:25,114 INFO [train.py:842] (0/4) Epoch 
29, batch 4350, loss[loss=0.2067, simple_loss=0.2974, pruned_loss=0.05794, over 7183.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2649, pruned_loss=0.04323, over 1431341.68 frames.], batch size: 26, lr: 1.90e-04 2022-05-28 22:05:04,058 INFO [train.py:842] (0/4) Epoch 29, batch 4400, loss[loss=0.162, simple_loss=0.2508, pruned_loss=0.03655, over 4884.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2653, pruned_loss=0.04341, over 1426738.04 frames.], batch size: 52, lr: 1.90e-04 2022-05-28 22:05:43,312 INFO [train.py:842] (0/4) Epoch 29, batch 4450, loss[loss=0.1819, simple_loss=0.2743, pruned_loss=0.04477, over 6749.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2654, pruned_loss=0.04357, over 1426327.15 frames.], batch size: 31, lr: 1.90e-04 2022-05-28 22:06:22,577 INFO [train.py:842] (0/4) Epoch 29, batch 4500, loss[loss=0.19, simple_loss=0.2847, pruned_loss=0.04767, over 7347.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2643, pruned_loss=0.04321, over 1428992.96 frames.], batch size: 22, lr: 1.90e-04 2022-05-28 22:07:02,119 INFO [train.py:842] (0/4) Epoch 29, batch 4550, loss[loss=0.15, simple_loss=0.2286, pruned_loss=0.03572, over 6735.00 frames.], tot_loss[loss=0.175, simple_loss=0.2639, pruned_loss=0.04302, over 1428199.27 frames.], batch size: 15, lr: 1.90e-04 2022-05-28 22:07:41,328 INFO [train.py:842] (0/4) Epoch 29, batch 4600, loss[loss=0.1945, simple_loss=0.2827, pruned_loss=0.05318, over 7417.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2636, pruned_loss=0.04288, over 1429077.60 frames.], batch size: 20, lr: 1.90e-04 2022-05-28 22:08:21,007 INFO [train.py:842] (0/4) Epoch 29, batch 4650, loss[loss=0.194, simple_loss=0.2811, pruned_loss=0.05341, over 7205.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2654, pruned_loss=0.04492, over 1428987.98 frames.], batch size: 22, lr: 1.90e-04 2022-05-28 22:09:00,202 INFO [train.py:842] (0/4) Epoch 29, batch 4700, loss[loss=0.1579, simple_loss=0.2578, pruned_loss=0.02895, over 7154.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2653, pruned_loss=0.04457, over 1424054.96 frames.], batch size: 19, lr: 1.90e-04 2022-05-28 22:09:50,584 INFO [train.py:842] (0/4) Epoch 29, batch 4750, loss[loss=0.1844, simple_loss=0.2721, pruned_loss=0.04833, over 7306.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2664, pruned_loss=0.045, over 1423439.45 frames.], batch size: 24, lr: 1.90e-04 2022-05-28 22:10:29,950 INFO [train.py:842] (0/4) Epoch 29, batch 4800, loss[loss=0.162, simple_loss=0.2473, pruned_loss=0.0384, over 7248.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2656, pruned_loss=0.04485, over 1427128.22 frames.], batch size: 19, lr: 1.90e-04 2022-05-28 22:11:09,550 INFO [train.py:842] (0/4) Epoch 29, batch 4850, loss[loss=0.1581, simple_loss=0.2566, pruned_loss=0.02978, over 7227.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2655, pruned_loss=0.04469, over 1427481.67 frames.], batch size: 20, lr: 1.90e-04 2022-05-28 22:11:48,975 INFO [train.py:842] (0/4) Epoch 29, batch 4900, loss[loss=0.1495, simple_loss=0.2399, pruned_loss=0.02957, over 7067.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2645, pruned_loss=0.04395, over 1428335.70 frames.], batch size: 18, lr: 1.90e-04 2022-05-28 22:12:28,547 INFO [train.py:842] (0/4) Epoch 29, batch 4950, loss[loss=0.1903, simple_loss=0.278, pruned_loss=0.05131, over 6444.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2654, pruned_loss=0.0444, over 1427374.59 frames.], batch size: 38, lr: 1.90e-04 2022-05-28 22:13:07,901 INFO [train.py:842] (0/4) Epoch 29, batch 5000, 
loss[loss=0.173, simple_loss=0.2601, pruned_loss=0.04295, over 7381.00 frames.], tot_loss[loss=0.1782, simple_loss=0.2659, pruned_loss=0.04528, over 1422563.64 frames.], batch size: 23, lr: 1.90e-04 2022-05-28 22:13:47,751 INFO [train.py:842] (0/4) Epoch 29, batch 5050, loss[loss=0.1296, simple_loss=0.2142, pruned_loss=0.0225, over 7256.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2651, pruned_loss=0.04529, over 1428858.05 frames.], batch size: 17, lr: 1.90e-04 2022-05-28 22:14:27,100 INFO [train.py:842] (0/4) Epoch 29, batch 5100, loss[loss=0.1532, simple_loss=0.2484, pruned_loss=0.02897, over 7157.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2645, pruned_loss=0.0445, over 1429705.50 frames.], batch size: 20, lr: 1.90e-04 2022-05-28 22:15:06,664 INFO [train.py:842] (0/4) Epoch 29, batch 5150, loss[loss=0.1765, simple_loss=0.2584, pruned_loss=0.04735, over 6259.00 frames.], tot_loss[loss=0.1762, simple_loss=0.264, pruned_loss=0.04417, over 1430961.00 frames.], batch size: 37, lr: 1.89e-04 2022-05-28 22:15:45,868 INFO [train.py:842] (0/4) Epoch 29, batch 5200, loss[loss=0.1659, simple_loss=0.2571, pruned_loss=0.03736, over 7057.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2633, pruned_loss=0.04351, over 1429805.20 frames.], batch size: 18, lr: 1.89e-04 2022-05-28 22:16:25,573 INFO [train.py:842] (0/4) Epoch 29, batch 5250, loss[loss=0.1534, simple_loss=0.2388, pruned_loss=0.03403, over 7160.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2638, pruned_loss=0.04345, over 1430946.29 frames.], batch size: 18, lr: 1.89e-04 2022-05-28 22:17:04,958 INFO [train.py:842] (0/4) Epoch 29, batch 5300, loss[loss=0.2044, simple_loss=0.3052, pruned_loss=0.05181, over 7114.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2649, pruned_loss=0.04368, over 1431424.38 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:17:44,495 INFO [train.py:842] (0/4) Epoch 29, batch 5350, loss[loss=0.169, simple_loss=0.2469, pruned_loss=0.0455, over 7273.00 frames.], tot_loss[loss=0.1755, simple_loss=0.264, pruned_loss=0.04346, over 1428615.23 frames.], batch size: 18, lr: 1.89e-04 2022-05-28 22:18:23,833 INFO [train.py:842] (0/4) Epoch 29, batch 5400, loss[loss=0.1789, simple_loss=0.2726, pruned_loss=0.04256, over 7377.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2641, pruned_loss=0.04331, over 1427809.64 frames.], batch size: 23, lr: 1.89e-04 2022-05-28 22:19:03,670 INFO [train.py:842] (0/4) Epoch 29, batch 5450, loss[loss=0.203, simple_loss=0.288, pruned_loss=0.05901, over 7321.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2649, pruned_loss=0.04386, over 1429451.49 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:19:42,629 INFO [train.py:842] (0/4) Epoch 29, batch 5500, loss[loss=0.1989, simple_loss=0.2873, pruned_loss=0.05526, over 7205.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2664, pruned_loss=0.04461, over 1430179.37 frames.], batch size: 22, lr: 1.89e-04 2022-05-28 22:20:22,036 INFO [train.py:842] (0/4) Epoch 29, batch 5550, loss[loss=0.2033, simple_loss=0.2753, pruned_loss=0.06563, over 4968.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2665, pruned_loss=0.04426, over 1427499.50 frames.], batch size: 52, lr: 1.89e-04 2022-05-28 22:21:01,138 INFO [train.py:842] (0/4) Epoch 29, batch 5600, loss[loss=0.1324, simple_loss=0.2194, pruned_loss=0.02269, over 7267.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2666, pruned_loss=0.04437, over 1428482.28 frames.], batch size: 18, lr: 1.89e-04 2022-05-28 22:21:40,625 INFO [train.py:842] (0/4) Epoch 29, batch 5650, loss[loss=0.1384, 
simple_loss=0.2151, pruned_loss=0.03081, over 7274.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2657, pruned_loss=0.04397, over 1430068.05 frames.], batch size: 17, lr: 1.89e-04 2022-05-28 22:22:19,816 INFO [train.py:842] (0/4) Epoch 29, batch 5700, loss[loss=0.2569, simple_loss=0.3274, pruned_loss=0.09325, over 6763.00 frames.], tot_loss[loss=0.177, simple_loss=0.2655, pruned_loss=0.04423, over 1430051.61 frames.], batch size: 31, lr: 1.89e-04 2022-05-28 22:22:59,376 INFO [train.py:842] (0/4) Epoch 29, batch 5750, loss[loss=0.1394, simple_loss=0.2214, pruned_loss=0.02872, over 7291.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2662, pruned_loss=0.04451, over 1429264.49 frames.], batch size: 17, lr: 1.89e-04 2022-05-28 22:23:38,548 INFO [train.py:842] (0/4) Epoch 29, batch 5800, loss[loss=0.1788, simple_loss=0.2743, pruned_loss=0.04172, over 7149.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2661, pruned_loss=0.04438, over 1425306.54 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:24:18,023 INFO [train.py:842] (0/4) Epoch 29, batch 5850, loss[loss=0.1655, simple_loss=0.2639, pruned_loss=0.03354, over 7418.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2652, pruned_loss=0.044, over 1421477.40 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:24:57,168 INFO [train.py:842] (0/4) Epoch 29, batch 5900, loss[loss=0.201, simple_loss=0.289, pruned_loss=0.05648, over 7138.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2651, pruned_loss=0.04365, over 1424316.24 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:25:36,551 INFO [train.py:842] (0/4) Epoch 29, batch 5950, loss[loss=0.1616, simple_loss=0.2448, pruned_loss=0.03925, over 7234.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2645, pruned_loss=0.04347, over 1419401.76 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:26:15,730 INFO [train.py:842] (0/4) Epoch 29, batch 6000, loss[loss=0.1616, simple_loss=0.2432, pruned_loss=0.04003, over 7129.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2653, pruned_loss=0.04401, over 1419071.63 frames.], batch size: 17, lr: 1.89e-04 2022-05-28 22:26:15,731 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 22:26:26,014 INFO [train.py:871] (0/4) Epoch 29, validation: loss=0.1668, simple_loss=0.2645, pruned_loss=0.03459, over 868885.00 frames. 
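Note on the loss columns: the numbers in these entries are mutually consistent with the reported loss being a weighted sum of the two transducer losses, loss ≈ 0.5 * simple_loss + pruned_loss. For the Epoch 29 validation entry just above, 0.5 * 0.2645 + 0.03459 = 0.1668, exactly the value logged. A minimal check of that relation, assuming the 0.5 weight (the weight is inferred from the logged values themselves, not stated in this part of the log):

    # Sanity check (sketch): the logged `loss` appears to equal
    # 0.5 * simple_loss + pruned_loss for entries in this section.
    # The 0.5 weight is an assumption inferred from the values above.
    def combined_loss(simple_loss, pruned_loss, simple_scale=0.5):
        return simple_scale * simple_loss + pruned_loss

    # Values copied from the two Epoch 29, batch 6000 entries above.
    assert abs(combined_loss(0.2432, 0.04003) - 0.1616) < 1e-3  # training batch
    assert abs(combined_loss(0.2645, 0.03459) - 0.1668) < 1e-3  # validation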
2022-05-28 22:27:05,787 INFO [train.py:842] (0/4) Epoch 29, batch 6050, loss[loss=0.1678, simple_loss=0.2664, pruned_loss=0.03461, over 7214.00 frames.], tot_loss[loss=0.1754, simple_loss=0.264, pruned_loss=0.04345, over 1421276.30 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:27:44,931 INFO [train.py:842] (0/4) Epoch 29, batch 6100, loss[loss=0.249, simple_loss=0.3272, pruned_loss=0.08542, over 7217.00 frames.], tot_loss[loss=0.1756, simple_loss=0.264, pruned_loss=0.04358, over 1421941.23 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:28:24,677 INFO [train.py:842] (0/4) Epoch 29, batch 6150, loss[loss=0.1545, simple_loss=0.2395, pruned_loss=0.03476, over 7287.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2632, pruned_loss=0.04306, over 1423977.91 frames.], batch size: 17, lr: 1.89e-04 2022-05-28 22:29:03,804 INFO [train.py:842] (0/4) Epoch 29, batch 6200, loss[loss=0.1565, simple_loss=0.2494, pruned_loss=0.03183, over 7422.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2628, pruned_loss=0.04269, over 1419894.54 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:29:43,509 INFO [train.py:842] (0/4) Epoch 29, batch 6250, loss[loss=0.1846, simple_loss=0.2822, pruned_loss=0.04345, over 7209.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2639, pruned_loss=0.0434, over 1423929.04 frames.], batch size: 22, lr: 1.89e-04 2022-05-28 22:30:22,879 INFO [train.py:842] (0/4) Epoch 29, batch 6300, loss[loss=0.1809, simple_loss=0.2748, pruned_loss=0.04356, over 7330.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2646, pruned_loss=0.04396, over 1426882.70 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:31:02,427 INFO [train.py:842] (0/4) Epoch 29, batch 6350, loss[loss=0.1895, simple_loss=0.2819, pruned_loss=0.04856, over 7043.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2644, pruned_loss=0.04431, over 1426068.57 frames.], batch size: 28, lr: 1.89e-04 2022-05-28 22:31:41,475 INFO [train.py:842] (0/4) Epoch 29, batch 6400, loss[loss=0.1477, simple_loss=0.2415, pruned_loss=0.02691, over 7426.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2652, pruned_loss=0.04512, over 1422879.90 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:32:20,973 INFO [train.py:842] (0/4) Epoch 29, batch 6450, loss[loss=0.1676, simple_loss=0.2501, pruned_loss=0.04249, over 7161.00 frames.], tot_loss[loss=0.1784, simple_loss=0.266, pruned_loss=0.04547, over 1424858.19 frames.], batch size: 19, lr: 1.89e-04 2022-05-28 22:33:00,143 INFO [train.py:842] (0/4) Epoch 29, batch 6500, loss[loss=0.1858, simple_loss=0.2712, pruned_loss=0.05022, over 7231.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2661, pruned_loss=0.04503, over 1422871.40 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:33:39,825 INFO [train.py:842] (0/4) Epoch 29, batch 6550, loss[loss=0.1924, simple_loss=0.2816, pruned_loss=0.05157, over 7319.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2654, pruned_loss=0.04442, over 1421971.99 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:34:18,913 INFO [train.py:842] (0/4) Epoch 29, batch 6600, loss[loss=0.2238, simple_loss=0.3025, pruned_loss=0.07255, over 7157.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2654, pruned_loss=0.04434, over 1421619.62 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:34:39,519 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-264000.pt 2022-05-28 22:35:01,107 INFO [train.py:842] (0/4) Epoch 29, batch 6650, loss[loss=0.184, simple_loss=0.2743, pruned_loss=0.04682, over 7377.00 frames.], 
tot_loss[loss=0.1763, simple_loss=0.2649, pruned_loss=0.04387, over 1423500.80 frames.], batch size: 23, lr: 1.89e-04 2022-05-28 22:35:40,344 INFO [train.py:842] (0/4) Epoch 29, batch 6700, loss[loss=0.1401, simple_loss=0.232, pruned_loss=0.02407, over 7273.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2636, pruned_loss=0.04364, over 1416998.12 frames.], batch size: 18, lr: 1.89e-04 2022-05-28 22:36:19,868 INFO [train.py:842] (0/4) Epoch 29, batch 6750, loss[loss=0.2003, simple_loss=0.2889, pruned_loss=0.05578, over 7213.00 frames.], tot_loss[loss=0.176, simple_loss=0.2642, pruned_loss=0.04392, over 1416501.39 frames.], batch size: 23, lr: 1.89e-04 2022-05-28 22:36:58,910 INFO [train.py:842] (0/4) Epoch 29, batch 6800, loss[loss=0.2183, simple_loss=0.3021, pruned_loss=0.06721, over 6615.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2655, pruned_loss=0.04434, over 1419145.34 frames.], batch size: 31, lr: 1.89e-04 2022-05-28 22:37:38,460 INFO [train.py:842] (0/4) Epoch 29, batch 6850, loss[loss=0.2359, simple_loss=0.3021, pruned_loss=0.08492, over 5124.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2655, pruned_loss=0.04451, over 1416242.25 frames.], batch size: 53, lr: 1.89e-04 2022-05-28 22:38:17,569 INFO [train.py:842] (0/4) Epoch 29, batch 6900, loss[loss=0.1792, simple_loss=0.2702, pruned_loss=0.04411, over 6815.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2658, pruned_loss=0.04444, over 1418968.18 frames.], batch size: 31, lr: 1.89e-04 2022-05-28 22:38:57,126 INFO [train.py:842] (0/4) Epoch 29, batch 6950, loss[loss=0.1378, simple_loss=0.2208, pruned_loss=0.02736, over 7141.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2656, pruned_loss=0.04435, over 1418059.61 frames.], batch size: 17, lr: 1.89e-04 2022-05-28 22:39:36,332 INFO [train.py:842] (0/4) Epoch 29, batch 7000, loss[loss=0.1692, simple_loss=0.257, pruned_loss=0.04067, over 7159.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2651, pruned_loss=0.04423, over 1418559.29 frames.], batch size: 18, lr: 1.89e-04 2022-05-28 22:40:15,824 INFO [train.py:842] (0/4) Epoch 29, batch 7050, loss[loss=0.1566, simple_loss=0.2551, pruned_loss=0.02906, over 7074.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2654, pruned_loss=0.04418, over 1420661.00 frames.], batch size: 18, lr: 1.89e-04 2022-05-28 22:40:54,893 INFO [train.py:842] (0/4) Epoch 29, batch 7100, loss[loss=0.1906, simple_loss=0.2742, pruned_loss=0.05351, over 7220.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2656, pruned_loss=0.04392, over 1416385.11 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:41:34,460 INFO [train.py:842] (0/4) Epoch 29, batch 7150, loss[loss=0.1394, simple_loss=0.2278, pruned_loss=0.02547, over 7154.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2658, pruned_loss=0.04377, over 1417622.08 frames.], batch size: 19, lr: 1.89e-04 2022-05-28 22:42:14,004 INFO [train.py:842] (0/4) Epoch 29, batch 7200, loss[loss=0.1673, simple_loss=0.2587, pruned_loss=0.03796, over 7305.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2652, pruned_loss=0.04356, over 1421924.79 frames.], batch size: 24, lr: 1.89e-04 2022-05-28 22:42:53,902 INFO [train.py:842] (0/4) Epoch 29, batch 7250, loss[loss=0.1562, simple_loss=0.2467, pruned_loss=0.03284, over 7220.00 frames.], tot_loss[loss=0.175, simple_loss=0.2634, pruned_loss=0.04332, over 1428193.43 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:43:33,113 INFO [train.py:842] (0/4) Epoch 29, batch 7300, loss[loss=0.1988, simple_loss=0.2876, pruned_loss=0.05497, over 6262.00 frames.], tot_loss[loss=0.1758, 
simple_loss=0.264, pruned_loss=0.04382, over 1429449.92 frames.], batch size: 38, lr: 1.89e-04 2022-05-28 22:44:12,674 INFO [train.py:842] (0/4) Epoch 29, batch 7350, loss[loss=0.193, simple_loss=0.282, pruned_loss=0.05199, over 7423.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2638, pruned_loss=0.04361, over 1428868.55 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:44:51,869 INFO [train.py:842] (0/4) Epoch 29, batch 7400, loss[loss=0.2, simple_loss=0.2918, pruned_loss=0.05405, over 6751.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2629, pruned_loss=0.04317, over 1426605.62 frames.], batch size: 31, lr: 1.89e-04 2022-05-28 22:45:31,614 INFO [train.py:842] (0/4) Epoch 29, batch 7450, loss[loss=0.1624, simple_loss=0.2488, pruned_loss=0.03804, over 7165.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2639, pruned_loss=0.04446, over 1424455.82 frames.], batch size: 18, lr: 1.89e-04 2022-05-28 22:46:10,850 INFO [train.py:842] (0/4) Epoch 29, batch 7500, loss[loss=0.1559, simple_loss=0.2316, pruned_loss=0.04014, over 7134.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2632, pruned_loss=0.04387, over 1423693.94 frames.], batch size: 17, lr: 1.89e-04 2022-05-28 22:46:50,599 INFO [train.py:842] (0/4) Epoch 29, batch 7550, loss[loss=0.1509, simple_loss=0.2379, pruned_loss=0.03193, over 7065.00 frames.], tot_loss[loss=0.174, simple_loss=0.2624, pruned_loss=0.04275, over 1426049.70 frames.], batch size: 18, lr: 1.89e-04 2022-05-28 22:47:29,669 INFO [train.py:842] (0/4) Epoch 29, batch 7600, loss[loss=0.1693, simple_loss=0.2652, pruned_loss=0.03666, over 7324.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2624, pruned_loss=0.04267, over 1423867.80 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:48:09,241 INFO [train.py:842] (0/4) Epoch 29, batch 7650, loss[loss=0.178, simple_loss=0.2644, pruned_loss=0.04581, over 7317.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2624, pruned_loss=0.04288, over 1423462.35 frames.], batch size: 21, lr: 1.89e-04 2022-05-28 22:48:48,375 INFO [train.py:842] (0/4) Epoch 29, batch 7700, loss[loss=0.1775, simple_loss=0.2683, pruned_loss=0.04338, over 7149.00 frames.], tot_loss[loss=0.1756, simple_loss=0.264, pruned_loss=0.0436, over 1424853.49 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:49:27,964 INFO [train.py:842] (0/4) Epoch 29, batch 7750, loss[loss=0.1575, simple_loss=0.2468, pruned_loss=0.03409, over 7223.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2643, pruned_loss=0.04362, over 1422895.00 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:50:07,181 INFO [train.py:842] (0/4) Epoch 29, batch 7800, loss[loss=0.1931, simple_loss=0.2904, pruned_loss=0.04789, over 7147.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2646, pruned_loss=0.04409, over 1420937.15 frames.], batch size: 20, lr: 1.89e-04 2022-05-28 22:50:46,757 INFO [train.py:842] (0/4) Epoch 29, batch 7850, loss[loss=0.2823, simple_loss=0.3565, pruned_loss=0.104, over 6173.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2645, pruned_loss=0.04427, over 1420453.16 frames.], batch size: 37, lr: 1.89e-04 2022-05-28 22:51:26,025 INFO [train.py:842] (0/4) Epoch 29, batch 7900, loss[loss=0.1981, simple_loss=0.2909, pruned_loss=0.05262, over 7345.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2643, pruned_loss=0.04449, over 1422308.59 frames.], batch size: 22, lr: 1.89e-04 2022-05-28 22:52:05,541 INFO [train.py:842] (0/4) Epoch 29, batch 7950, loss[loss=0.1526, simple_loss=0.237, pruned_loss=0.03416, over 7302.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2643, 
pruned_loss=0.04455, over 1422092.86 frames.], batch size: 17, lr: 1.88e-04 2022-05-28 22:52:44,469 INFO [train.py:842] (0/4) Epoch 29, batch 8000, loss[loss=0.2033, simple_loss=0.2929, pruned_loss=0.05683, over 7339.00 frames.], tot_loss[loss=0.1774, simple_loss=0.2652, pruned_loss=0.0448, over 1421728.42 frames.], batch size: 22, lr: 1.88e-04 2022-05-28 22:53:24,180 INFO [train.py:842] (0/4) Epoch 29, batch 8050, loss[loss=0.1946, simple_loss=0.2763, pruned_loss=0.05648, over 7440.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2644, pruned_loss=0.04407, over 1427034.96 frames.], batch size: 20, lr: 1.88e-04 2022-05-28 22:54:03,214 INFO [train.py:842] (0/4) Epoch 29, batch 8100, loss[loss=0.1707, simple_loss=0.2636, pruned_loss=0.0389, over 7298.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2665, pruned_loss=0.04519, over 1426098.44 frames.], batch size: 25, lr: 1.88e-04 2022-05-28 22:54:43,043 INFO [train.py:842] (0/4) Epoch 29, batch 8150, loss[loss=0.1531, simple_loss=0.2328, pruned_loss=0.03669, over 7271.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2655, pruned_loss=0.04491, over 1424763.52 frames.], batch size: 17, lr: 1.88e-04 2022-05-28 22:55:22,202 INFO [train.py:842] (0/4) Epoch 29, batch 8200, loss[loss=0.1539, simple_loss=0.2433, pruned_loss=0.03223, over 7247.00 frames.], tot_loss[loss=0.1762, simple_loss=0.264, pruned_loss=0.04417, over 1422579.68 frames.], batch size: 20, lr: 1.88e-04 2022-05-28 22:56:01,676 INFO [train.py:842] (0/4) Epoch 29, batch 8250, loss[loss=0.1699, simple_loss=0.2585, pruned_loss=0.04066, over 7163.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2641, pruned_loss=0.04388, over 1425605.77 frames.], batch size: 19, lr: 1.88e-04 2022-05-28 22:56:40,902 INFO [train.py:842] (0/4) Epoch 29, batch 8300, loss[loss=0.1596, simple_loss=0.2525, pruned_loss=0.03332, over 7333.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2646, pruned_loss=0.04422, over 1425263.13 frames.], batch size: 20, lr: 1.88e-04 2022-05-28 22:57:20,360 INFO [train.py:842] (0/4) Epoch 29, batch 8350, loss[loss=0.1373, simple_loss=0.2253, pruned_loss=0.02465, over 6985.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2643, pruned_loss=0.04393, over 1422352.81 frames.], batch size: 16, lr: 1.88e-04 2022-05-28 22:57:59,577 INFO [train.py:842] (0/4) Epoch 29, batch 8400, loss[loss=0.2302, simple_loss=0.3013, pruned_loss=0.07957, over 7431.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2639, pruned_loss=0.04324, over 1420583.54 frames.], batch size: 20, lr: 1.88e-04 2022-05-28 22:58:38,725 INFO [train.py:842] (0/4) Epoch 29, batch 8450, loss[loss=0.1997, simple_loss=0.2797, pruned_loss=0.05987, over 7200.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2647, pruned_loss=0.044, over 1414408.84 frames.], batch size: 23, lr: 1.88e-04 2022-05-28 22:59:17,799 INFO [train.py:842] (0/4) Epoch 29, batch 8500, loss[loss=0.1534, simple_loss=0.2414, pruned_loss=0.03267, over 7324.00 frames.], tot_loss[loss=0.1764, simple_loss=0.265, pruned_loss=0.04391, over 1417765.55 frames.], batch size: 20, lr: 1.88e-04 2022-05-28 22:59:57,412 INFO [train.py:842] (0/4) Epoch 29, batch 8550, loss[loss=0.1652, simple_loss=0.2478, pruned_loss=0.04128, over 7249.00 frames.], tot_loss[loss=0.1765, simple_loss=0.265, pruned_loss=0.04398, over 1418526.26 frames.], batch size: 19, lr: 1.88e-04 2022-05-28 23:00:36,819 INFO [train.py:842] (0/4) Epoch 29, batch 8600, loss[loss=0.2039, simple_loss=0.2906, pruned_loss=0.05864, over 7306.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2643, pruned_loss=0.04394, over 
1422225.02 frames.], batch size: 21, lr: 1.88e-04 2022-05-28 23:01:16,272 INFO [train.py:842] (0/4) Epoch 29, batch 8650, loss[loss=0.1613, simple_loss=0.2343, pruned_loss=0.04417, over 7236.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2645, pruned_loss=0.04398, over 1422545.37 frames.], batch size: 16, lr: 1.88e-04 2022-05-28 23:01:55,554 INFO [train.py:842] (0/4) Epoch 29, batch 8700, loss[loss=0.1373, simple_loss=0.2161, pruned_loss=0.0292, over 7374.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2643, pruned_loss=0.04397, over 1419262.24 frames.], batch size: 19, lr: 1.88e-04 2022-05-28 23:02:35,317 INFO [train.py:842] (0/4) Epoch 29, batch 8750, loss[loss=0.2151, simple_loss=0.3091, pruned_loss=0.06052, over 7290.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2643, pruned_loss=0.04398, over 1422895.68 frames.], batch size: 25, lr: 1.88e-04 2022-05-28 23:03:14,608 INFO [train.py:842] (0/4) Epoch 29, batch 8800, loss[loss=0.1592, simple_loss=0.2386, pruned_loss=0.03995, over 6988.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2641, pruned_loss=0.04343, over 1424655.84 frames.], batch size: 16, lr: 1.88e-04 2022-05-28 23:03:54,167 INFO [train.py:842] (0/4) Epoch 29, batch 8850, loss[loss=0.2172, simple_loss=0.3117, pruned_loss=0.06137, over 7154.00 frames.], tot_loss[loss=0.175, simple_loss=0.2634, pruned_loss=0.0433, over 1415410.88 frames.], batch size: 19, lr: 1.88e-04 2022-05-28 23:04:33,296 INFO [train.py:842] (0/4) Epoch 29, batch 8900, loss[loss=0.1727, simple_loss=0.2644, pruned_loss=0.04051, over 6753.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2632, pruned_loss=0.04315, over 1413078.82 frames.], batch size: 31, lr: 1.88e-04 2022-05-28 23:05:12,317 INFO [train.py:842] (0/4) Epoch 29, batch 8950, loss[loss=0.2292, simple_loss=0.3049, pruned_loss=0.07675, over 7202.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2642, pruned_loss=0.04365, over 1400551.09 frames.], batch size: 22, lr: 1.88e-04 2022-05-28 23:05:50,685 INFO [train.py:842] (0/4) Epoch 29, batch 9000, loss[loss=0.1632, simple_loss=0.2573, pruned_loss=0.03455, over 6434.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2651, pruned_loss=0.04391, over 1381162.20 frames.], batch size: 37, lr: 1.88e-04 2022-05-28 23:05:50,686 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 23:06:00,408 INFO [train.py:871] (0/4) Epoch 29, validation: loss=0.164, simple_loss=0.2612, pruned_loss=0.03344, over 868885.00 frames. 
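The tot_loss[...] blocks are running, frame-weighted aggregates rather than per-batch values: at the start of an epoch they are reported over only a few thousand frames, and the frame count then climbs and levels off around 1.4M frames for the rest of the epoch. That shape is consistent with an exponentially decaying sum in which each batch's frame-weighted statistics are added after scaling the previous totals by a factor just under 1. A sketch of such an accumulator (the decay factor below is an assumption chosen so the window saturates near the ~1.4M frames seen here; the actual bookkeeping in train.py may differ in detail):

    # Frame-weighted, exponentially decaying loss aggregate (sketch).
    class RunningLoss:
        def __init__(self, decay=1.0 - 1.0 / 200):  # decay is an assumed value
            self.decay = decay
            self.loss_sum = 0.0  # frame-weighted sum of batch losses
            self.frames = 0.0    # effective number of frames in the window

        def update(self, batch_loss, batch_frames):
            self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
            self.frames = self.frames * self.decay + batch_frames

        @property
        def value(self):
            return self.loss_sum / max(self.frames, 1.0)

With roughly 7,000-frame batches this accumulator's frame count levels off near 7,000 * 200 = 1.4M, matching the counts logged above.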
2022-05-28 23:06:39,638 INFO [train.py:842] (0/4) Epoch 29, batch 9050, loss[loss=0.2013, simple_loss=0.2907, pruned_loss=0.05598, over 7025.00 frames.], tot_loss[loss=0.178, simple_loss=0.2661, pruned_loss=0.04493, over 1365421.94 frames.], batch size: 28, lr: 1.88e-04 2022-05-28 23:07:27,901 INFO [train.py:842] (0/4) Epoch 29, batch 9100, loss[loss=0.1835, simple_loss=0.2709, pruned_loss=0.04811, over 5322.00 frames.], tot_loss[loss=0.1798, simple_loss=0.268, pruned_loss=0.04583, over 1311408.00 frames.], batch size: 52, lr: 1.88e-04 2022-05-28 23:08:06,161 INFO [train.py:842] (0/4) Epoch 29, batch 9150, loss[loss=0.2384, simple_loss=0.321, pruned_loss=0.07788, over 5159.00 frames.], tot_loss[loss=0.1842, simple_loss=0.2714, pruned_loss=0.0485, over 1242840.24 frames.], batch size: 52, lr: 1.88e-04 2022-05-28 23:08:37,762 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-29.pt 2022-05-28 23:08:53,898 INFO [train.py:842] (0/4) Epoch 30, batch 0, loss[loss=0.17, simple_loss=0.2558, pruned_loss=0.04214, over 7317.00 frames.], tot_loss[loss=0.17, simple_loss=0.2558, pruned_loss=0.04214, over 7317.00 frames.], batch size: 20, lr: 1.85e-04 2022-05-28 23:09:44,401 INFO [train.py:842] (0/4) Epoch 30, batch 50, loss[loss=0.1374, simple_loss=0.2341, pruned_loss=0.02038, over 7294.00 frames.], tot_loss[loss=0.1722, simple_loss=0.263, pruned_loss=0.04068, over 324569.10 frames.], batch size: 18, lr: 1.85e-04 2022-05-28 23:10:34,878 INFO [train.py:842] (0/4) Epoch 30, batch 100, loss[loss=0.1841, simple_loss=0.2631, pruned_loss=0.05257, over 7292.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2617, pruned_loss=0.0424, over 572808.18 frames.], batch size: 17, lr: 1.85e-04 2022-05-28 23:11:14,537 INFO [train.py:842] (0/4) Epoch 30, batch 150, loss[loss=0.2329, simple_loss=0.3048, pruned_loss=0.0805, over 7274.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2632, pruned_loss=0.04363, over 750450.24 frames.], batch size: 24, lr: 1.85e-04 2022-05-28 23:11:53,899 INFO [train.py:842] (0/4) Epoch 30, batch 200, loss[loss=0.1496, simple_loss=0.2368, pruned_loss=0.03117, over 7359.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2622, pruned_loss=0.04304, over 900020.13 frames.], batch size: 19, lr: 1.85e-04 2022-05-28 23:12:33,256 INFO [train.py:842] (0/4) Epoch 30, batch 250, loss[loss=0.1536, simple_loss=0.2332, pruned_loss=0.03699, over 6781.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2647, pruned_loss=0.04358, over 1015939.68 frames.], batch size: 15, lr: 1.85e-04 2022-05-28 23:13:12,484 INFO [train.py:842] (0/4) Epoch 30, batch 300, loss[loss=0.1806, simple_loss=0.2598, pruned_loss=0.05067, over 7263.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2658, pruned_loss=0.04465, over 1107710.88 frames.], batch size: 18, lr: 1.85e-04 2022-05-28 23:13:52,333 INFO [train.py:842] (0/4) Epoch 30, batch 350, loss[loss=0.1631, simple_loss=0.2548, pruned_loss=0.03572, over 7309.00 frames.], tot_loss[loss=0.1768, simple_loss=0.265, pruned_loss=0.04425, over 1180745.03 frames.], batch size: 20, lr: 1.85e-04 2022-05-28 23:14:31,588 INFO [train.py:842] (0/4) Epoch 30, batch 400, loss[loss=0.2081, simple_loss=0.2939, pruned_loss=0.06116, over 7256.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2661, pruned_loss=0.04474, over 1236279.50 frames.], batch size: 24, lr: 1.85e-04 2022-05-28 23:15:11,124 INFO [train.py:842] (0/4) Epoch 30, batch 450, loss[loss=0.1697, simple_loss=0.2599, pruned_loss=0.03972, over 7420.00 frames.], tot_loss[loss=0.1767, 
simple_loss=0.2646, pruned_loss=0.04437, over 1278679.27 frames.], batch size: 21, lr: 1.85e-04 2022-05-28 23:15:50,242 INFO [train.py:842] (0/4) Epoch 30, batch 500, loss[loss=0.1613, simple_loss=0.2618, pruned_loss=0.03037, over 7319.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2645, pruned_loss=0.04414, over 1307286.73 frames.], batch size: 20, lr: 1.85e-04 2022-05-28 23:16:29,797 INFO [train.py:842] (0/4) Epoch 30, batch 550, loss[loss=0.1754, simple_loss=0.27, pruned_loss=0.04037, over 7296.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2659, pruned_loss=0.04456, over 1335297.70 frames.], batch size: 24, lr: 1.85e-04 2022-05-28 23:17:08,974 INFO [train.py:842] (0/4) Epoch 30, batch 600, loss[loss=0.1584, simple_loss=0.2412, pruned_loss=0.03779, over 7219.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2638, pruned_loss=0.04333, over 1351150.57 frames.], batch size: 22, lr: 1.85e-04 2022-05-28 23:17:48,552 INFO [train.py:842] (0/4) Epoch 30, batch 650, loss[loss=0.1925, simple_loss=0.2707, pruned_loss=0.05717, over 7055.00 frames.], tot_loss[loss=0.1745, simple_loss=0.263, pruned_loss=0.04299, over 1366349.77 frames.], batch size: 18, lr: 1.85e-04 2022-05-28 23:18:27,654 INFO [train.py:842] (0/4) Epoch 30, batch 700, loss[loss=0.1821, simple_loss=0.2698, pruned_loss=0.04722, over 7330.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2623, pruned_loss=0.04239, over 1374153.61 frames.], batch size: 20, lr: 1.85e-04 2022-05-28 23:19:07,242 INFO [train.py:842] (0/4) Epoch 30, batch 750, loss[loss=0.2289, simple_loss=0.3137, pruned_loss=0.07201, over 7225.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2632, pruned_loss=0.04268, over 1380485.56 frames.], batch size: 20, lr: 1.85e-04 2022-05-28 23:19:46,349 INFO [train.py:842] (0/4) Epoch 30, batch 800, loss[loss=0.1717, simple_loss=0.2697, pruned_loss=0.03684, over 7321.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2629, pruned_loss=0.04271, over 1387145.73 frames.], batch size: 22, lr: 1.85e-04 2022-05-28 23:20:25,925 INFO [train.py:842] (0/4) Epoch 30, batch 850, loss[loss=0.1788, simple_loss=0.2613, pruned_loss=0.04815, over 7057.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2615, pruned_loss=0.04234, over 1396343.90 frames.], batch size: 18, lr: 1.85e-04 2022-05-28 23:21:05,181 INFO [train.py:842] (0/4) Epoch 30, batch 900, loss[loss=0.1793, simple_loss=0.2694, pruned_loss=0.04462, over 7205.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2626, pruned_loss=0.04316, over 1400370.32 frames.], batch size: 21, lr: 1.85e-04 2022-05-28 23:21:44,635 INFO [train.py:842] (0/4) Epoch 30, batch 950, loss[loss=0.1902, simple_loss=0.2853, pruned_loss=0.04762, over 7111.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2636, pruned_loss=0.04353, over 1406740.20 frames.], batch size: 21, lr: 1.85e-04 2022-05-28 23:22:23,806 INFO [train.py:842] (0/4) Epoch 30, batch 1000, loss[loss=0.1579, simple_loss=0.2475, pruned_loss=0.03412, over 7141.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2639, pruned_loss=0.04347, over 1410337.50 frames.], batch size: 20, lr: 1.85e-04 2022-05-28 23:23:03,154 INFO [train.py:842] (0/4) Epoch 30, batch 1050, loss[loss=0.1677, simple_loss=0.2504, pruned_loss=0.04249, over 7272.00 frames.], tot_loss[loss=0.176, simple_loss=0.2644, pruned_loss=0.04377, over 1406941.05 frames.], batch size: 18, lr: 1.85e-04 2022-05-28 23:23:42,405 INFO [train.py:842] (0/4) Epoch 30, batch 1100, loss[loss=0.1481, simple_loss=0.2445, pruned_loss=0.02592, over 7318.00 frames.], tot_loss[loss=0.176, simple_loss=0.265, 
pruned_loss=0.04352, over 1416427.12 frames.], batch size: 21, lr: 1.85e-04 2022-05-28 23:24:21,911 INFO [train.py:842] (0/4) Epoch 30, batch 1150, loss[loss=0.1476, simple_loss=0.2287, pruned_loss=0.03323, over 6988.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2647, pruned_loss=0.04309, over 1417490.63 frames.], batch size: 16, lr: 1.85e-04 2022-05-28 23:25:01,226 INFO [train.py:842] (0/4) Epoch 30, batch 1200, loss[loss=0.1473, simple_loss=0.2339, pruned_loss=0.03035, over 7162.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2642, pruned_loss=0.04327, over 1422325.17 frames.], batch size: 19, lr: 1.85e-04 2022-05-28 23:25:40,911 INFO [train.py:842] (0/4) Epoch 30, batch 1250, loss[loss=0.1728, simple_loss=0.2566, pruned_loss=0.04446, over 4963.00 frames.], tot_loss[loss=0.1745, simple_loss=0.263, pruned_loss=0.04301, over 1416886.46 frames.], batch size: 52, lr: 1.84e-04 2022-05-28 23:26:20,212 INFO [train.py:842] (0/4) Epoch 30, batch 1300, loss[loss=0.1577, simple_loss=0.2532, pruned_loss=0.0311, over 7328.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2633, pruned_loss=0.04287, over 1417639.05 frames.], batch size: 22, lr: 1.84e-04 2022-05-28 23:26:59,750 INFO [train.py:842] (0/4) Epoch 30, batch 1350, loss[loss=0.1765, simple_loss=0.2708, pruned_loss=0.04109, over 6500.00 frames.], tot_loss[loss=0.176, simple_loss=0.2646, pruned_loss=0.04369, over 1418742.80 frames.], batch size: 38, lr: 1.84e-04 2022-05-28 23:27:39,235 INFO [train.py:842] (0/4) Epoch 30, batch 1400, loss[loss=0.1413, simple_loss=0.2299, pruned_loss=0.02632, over 6833.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2643, pruned_loss=0.04375, over 1419548.05 frames.], batch size: 15, lr: 1.84e-04 2022-05-28 23:28:18,746 INFO [train.py:842] (0/4) Epoch 30, batch 1450, loss[loss=0.1701, simple_loss=0.2617, pruned_loss=0.03926, over 7112.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2648, pruned_loss=0.04388, over 1418869.94 frames.], batch size: 21, lr: 1.84e-04 2022-05-28 23:28:58,162 INFO [train.py:842] (0/4) Epoch 30, batch 1500, loss[loss=0.1542, simple_loss=0.2427, pruned_loss=0.03286, over 7263.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2648, pruned_loss=0.04417, over 1417825.32 frames.], batch size: 19, lr: 1.84e-04 2022-05-28 23:29:37,432 INFO [train.py:842] (0/4) Epoch 30, batch 1550, loss[loss=0.169, simple_loss=0.2608, pruned_loss=0.03856, over 7203.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2646, pruned_loss=0.04352, over 1418826.06 frames.], batch size: 23, lr: 1.84e-04 2022-05-28 23:30:16,504 INFO [train.py:842] (0/4) Epoch 30, batch 1600, loss[loss=0.1611, simple_loss=0.2575, pruned_loss=0.0323, over 7324.00 frames.], tot_loss[loss=0.1764, simple_loss=0.265, pruned_loss=0.0439, over 1419879.07 frames.], batch size: 21, lr: 1.84e-04 2022-05-28 23:30:56,194 INFO [train.py:842] (0/4) Epoch 30, batch 1650, loss[loss=0.1612, simple_loss=0.2585, pruned_loss=0.03193, over 7124.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2649, pruned_loss=0.04378, over 1423884.68 frames.], batch size: 26, lr: 1.84e-04 2022-05-28 23:31:35,549 INFO [train.py:842] (0/4) Epoch 30, batch 1700, loss[loss=0.1525, simple_loss=0.2272, pruned_loss=0.03889, over 7139.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2641, pruned_loss=0.04341, over 1426194.26 frames.], batch size: 17, lr: 1.84e-04 2022-05-28 23:32:15,269 INFO [train.py:842] (0/4) Epoch 30, batch 1750, loss[loss=0.1801, simple_loss=0.2734, pruned_loss=0.04345, over 7147.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2632, pruned_loss=0.04314, over 
1423049.87 frames.], batch size: 20, lr: 1.84e-04 2022-05-28 23:32:54,396 INFO [train.py:842] (0/4) Epoch 30, batch 1800, loss[loss=0.1983, simple_loss=0.288, pruned_loss=0.05429, over 5139.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2632, pruned_loss=0.04315, over 1420278.53 frames.], batch size: 52, lr: 1.84e-04 2022-05-28 23:33:33,939 INFO [train.py:842] (0/4) Epoch 30, batch 1850, loss[loss=0.2275, simple_loss=0.3147, pruned_loss=0.07015, over 7119.00 frames.], tot_loss[loss=0.175, simple_loss=0.2629, pruned_loss=0.04355, over 1424167.25 frames.], batch size: 21, lr: 1.84e-04 2022-05-28 23:34:13,215 INFO [train.py:842] (0/4) Epoch 30, batch 1900, loss[loss=0.1618, simple_loss=0.245, pruned_loss=0.03932, over 6794.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2633, pruned_loss=0.04389, over 1425971.75 frames.], batch size: 15, lr: 1.84e-04 2022-05-28 23:34:52,929 INFO [train.py:842] (0/4) Epoch 30, batch 1950, loss[loss=0.1695, simple_loss=0.2458, pruned_loss=0.0466, over 7273.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2631, pruned_loss=0.04371, over 1427195.86 frames.], batch size: 17, lr: 1.84e-04 2022-05-28 23:35:32,322 INFO [train.py:842] (0/4) Epoch 30, batch 2000, loss[loss=0.1988, simple_loss=0.2913, pruned_loss=0.05312, over 7331.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2633, pruned_loss=0.04386, over 1428918.99 frames.], batch size: 22, lr: 1.84e-04 2022-05-28 23:36:11,896 INFO [train.py:842] (0/4) Epoch 30, batch 2050, loss[loss=0.2666, simple_loss=0.342, pruned_loss=0.09559, over 7207.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2634, pruned_loss=0.04377, over 1429299.28 frames.], batch size: 23, lr: 1.84e-04 2022-05-28 23:36:50,967 INFO [train.py:842] (0/4) Epoch 30, batch 2100, loss[loss=0.1781, simple_loss=0.2688, pruned_loss=0.04373, over 7153.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2638, pruned_loss=0.04337, over 1429217.81 frames.], batch size: 20, lr: 1.84e-04 2022-05-28 23:37:30,436 INFO [train.py:842] (0/4) Epoch 30, batch 2150, loss[loss=0.1794, simple_loss=0.2554, pruned_loss=0.05167, over 7128.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2636, pruned_loss=0.04333, over 1428519.83 frames.], batch size: 17, lr: 1.84e-04 2022-05-28 23:38:09,567 INFO [train.py:842] (0/4) Epoch 30, batch 2200, loss[loss=0.2017, simple_loss=0.2925, pruned_loss=0.0555, over 7287.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2639, pruned_loss=0.0437, over 1424403.64 frames.], batch size: 24, lr: 1.84e-04 2022-05-28 23:38:48,991 INFO [train.py:842] (0/4) Epoch 30, batch 2250, loss[loss=0.184, simple_loss=0.284, pruned_loss=0.04207, over 7198.00 frames.], tot_loss[loss=0.177, simple_loss=0.2651, pruned_loss=0.04443, over 1423133.95 frames.], batch size: 26, lr: 1.84e-04 2022-05-28 23:39:28,076 INFO [train.py:842] (0/4) Epoch 30, batch 2300, loss[loss=0.1419, simple_loss=0.2364, pruned_loss=0.02374, over 7329.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2644, pruned_loss=0.04356, over 1419377.04 frames.], batch size: 20, lr: 1.84e-04 2022-05-28 23:40:07,917 INFO [train.py:842] (0/4) Epoch 30, batch 2350, loss[loss=0.1797, simple_loss=0.2758, pruned_loss=0.04182, over 7328.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2645, pruned_loss=0.04371, over 1421176.66 frames.], batch size: 22, lr: 1.84e-04 2022-05-28 23:40:47,271 INFO [train.py:842] (0/4) Epoch 30, batch 2400, loss[loss=0.1934, simple_loss=0.2845, pruned_loss=0.05113, over 7268.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2642, pruned_loss=0.04358, over 1422688.96 frames.], batch 
size: 25, lr: 1.84e-04 2022-05-28 23:41:27,073 INFO [train.py:842] (0/4) Epoch 30, batch 2450, loss[loss=0.1867, simple_loss=0.2714, pruned_loss=0.05103, over 7147.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2626, pruned_loss=0.04287, over 1427032.47 frames.], batch size: 20, lr: 1.84e-04 2022-05-28 23:42:06,579 INFO [train.py:842] (0/4) Epoch 30, batch 2500, loss[loss=0.1483, simple_loss=0.2239, pruned_loss=0.03631, over 6801.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2624, pruned_loss=0.04254, over 1430535.13 frames.], batch size: 15, lr: 1.84e-04 2022-05-28 23:42:46,114 INFO [train.py:842] (0/4) Epoch 30, batch 2550, loss[loss=0.3051, simple_loss=0.3447, pruned_loss=0.1328, over 7407.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2625, pruned_loss=0.04314, over 1427795.76 frames.], batch size: 18, lr: 1.84e-04 2022-05-28 23:43:25,298 INFO [train.py:842] (0/4) Epoch 30, batch 2600, loss[loss=0.1769, simple_loss=0.2653, pruned_loss=0.04423, over 7129.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2626, pruned_loss=0.0433, over 1426637.56 frames.], batch size: 21, lr: 1.84e-04 2022-05-28 23:44:04,866 INFO [train.py:842] (0/4) Epoch 30, batch 2650, loss[loss=0.1335, simple_loss=0.2227, pruned_loss=0.0221, over 7135.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2625, pruned_loss=0.04317, over 1428451.80 frames.], batch size: 17, lr: 1.84e-04 2022-05-28 23:44:43,996 INFO [train.py:842] (0/4) Epoch 30, batch 2700, loss[loss=0.1959, simple_loss=0.2978, pruned_loss=0.04701, over 7127.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2631, pruned_loss=0.04289, over 1428996.36 frames.], batch size: 21, lr: 1.84e-04 2022-05-28 23:45:23,626 INFO [train.py:842] (0/4) Epoch 30, batch 2750, loss[loss=0.1725, simple_loss=0.2693, pruned_loss=0.03785, over 7239.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2638, pruned_loss=0.04299, over 1425710.12 frames.], batch size: 20, lr: 1.84e-04 2022-05-28 23:46:02,901 INFO [train.py:842] (0/4) Epoch 30, batch 2800, loss[loss=0.1658, simple_loss=0.2487, pruned_loss=0.04144, over 7343.00 frames.], tot_loss[loss=0.1743, simple_loss=0.263, pruned_loss=0.04277, over 1424992.71 frames.], batch size: 22, lr: 1.84e-04 2022-05-28 23:46:42,635 INFO [train.py:842] (0/4) Epoch 30, batch 2850, loss[loss=0.1634, simple_loss=0.263, pruned_loss=0.03186, over 7230.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2627, pruned_loss=0.04272, over 1418924.29 frames.], batch size: 20, lr: 1.84e-04 2022-05-28 23:47:21,849 INFO [train.py:842] (0/4) Epoch 30, batch 2900, loss[loss=0.1519, simple_loss=0.2321, pruned_loss=0.03588, over 6994.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2632, pruned_loss=0.04332, over 1421773.49 frames.], batch size: 16, lr: 1.84e-04 2022-05-28 23:48:01,440 INFO [train.py:842] (0/4) Epoch 30, batch 2950, loss[loss=0.1635, simple_loss=0.2582, pruned_loss=0.03442, over 6421.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2632, pruned_loss=0.04311, over 1422896.80 frames.], batch size: 37, lr: 1.84e-04 2022-05-28 23:48:40,737 INFO [train.py:842] (0/4) Epoch 30, batch 3000, loss[loss=0.1821, simple_loss=0.2716, pruned_loss=0.04629, over 7119.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2634, pruned_loss=0.04359, over 1424960.56 frames.], batch size: 21, lr: 1.84e-04 2022-05-28 23:48:40,739 INFO [train.py:862] (0/4) Computing validation loss 2022-05-28 23:48:50,532 INFO [train.py:871] (0/4) Epoch 30, validation: loss=0.1622, simple_loss=0.26, pruned_loss=0.03222, over 868885.00 frames. 
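Every progress entry above follows the same fixed layout: timestamp, "Epoch E, batch B", a per-batch loss[...] block, the running tot_loss[...] block, then batch size and lr. That makes it straightforward to recover training curves directly from the log text. A minimal parsing sketch, written only against the format visible in this section (it is not part of the training code; re.DOTALL is used because some entries wrap across lines in this capture):

    import re

    # Capture (epoch, batch, running tot_loss) from entries shaped like
    # "Epoch 30, batch 3000, loss[...], tot_loss[loss=0.1753, ...], batch size: 21, lr: 1.84e-04"
    ENTRY = re.compile(
        r"Epoch (\d+), batch (\d+), loss\[.*?\], tot_loss\[loss=([\d.]+),",
        re.DOTALL,
    )

    def parse_tot_loss(log_text):
        """Return a list of (epoch, batch, tot_loss) triples."""
        return [(int(e), int(b), float(t)) for e, b, t in ENTRY.findall(log_text)]

For example, feeding this section's text through parse_tot_loss yields points such as (30, 3000, 0.1753), which can be plotted to see the training loss flattening out in the high-0.17 range across epochs 29 and 30.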
2022-05-28 23:49:30,136 INFO [train.py:842] (0/4) Epoch 30, batch 3050, loss[loss=0.1607, simple_loss=0.2562, pruned_loss=0.03255, over 7106.00 frames.], tot_loss[loss=0.1754, simple_loss=0.264, pruned_loss=0.04341, over 1426315.53 frames.], batch size: 21, lr: 1.84e-04 2022-05-28 23:50:09,339 INFO [train.py:842] (0/4) Epoch 30, batch 3100, loss[loss=0.1772, simple_loss=0.2716, pruned_loss=0.04141, over 7429.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2648, pruned_loss=0.04392, over 1426321.09 frames.], batch size: 21, lr: 1.84e-04 2022-05-28 23:50:48,879 INFO [train.py:842] (0/4) Epoch 30, batch 3150, loss[loss=0.1434, simple_loss=0.2295, pruned_loss=0.02864, over 7157.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2642, pruned_loss=0.04426, over 1422324.72 frames.], batch size: 18, lr: 1.84e-04 2022-05-28 23:51:28,439 INFO [train.py:842] (0/4) Epoch 30, batch 3200, loss[loss=0.1636, simple_loss=0.2556, pruned_loss=0.03575, over 7255.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2629, pruned_loss=0.04368, over 1425178.77 frames.], batch size: 19, lr: 1.84e-04 2022-05-28 23:52:07,972 INFO [train.py:842] (0/4) Epoch 30, batch 3250, loss[loss=0.1873, simple_loss=0.2779, pruned_loss=0.04833, over 6987.00 frames.], tot_loss[loss=0.1751, simple_loss=0.263, pruned_loss=0.04358, over 1419963.15 frames.], batch size: 28, lr: 1.84e-04 2022-05-28 23:52:47,271 INFO [train.py:842] (0/4) Epoch 30, batch 3300, loss[loss=0.1579, simple_loss=0.2445, pruned_loss=0.03569, over 7332.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2626, pruned_loss=0.04277, over 1422908.67 frames.], batch size: 20, lr: 1.84e-04 2022-05-28 23:53:26,874 INFO [train.py:842] (0/4) Epoch 30, batch 3350, loss[loss=0.18, simple_loss=0.2579, pruned_loss=0.05108, over 7288.00 frames.], tot_loss[loss=0.1749, simple_loss=0.263, pruned_loss=0.04342, over 1427099.44 frames.], batch size: 17, lr: 1.84e-04 2022-05-28 23:54:06,120 INFO [train.py:842] (0/4) Epoch 30, batch 3400, loss[loss=0.2243, simple_loss=0.2955, pruned_loss=0.07652, over 5002.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2634, pruned_loss=0.04382, over 1423798.01 frames.], batch size: 52, lr: 1.84e-04 2022-05-28 23:54:45,911 INFO [train.py:842] (0/4) Epoch 30, batch 3450, loss[loss=0.1918, simple_loss=0.2897, pruned_loss=0.04699, over 7289.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2624, pruned_loss=0.04324, over 1420829.30 frames.], batch size: 24, lr: 1.84e-04 2022-05-28 23:55:25,173 INFO [train.py:842] (0/4) Epoch 30, batch 3500, loss[loss=0.1861, simple_loss=0.2859, pruned_loss=0.04309, over 7176.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2623, pruned_loss=0.04296, over 1422895.76 frames.], batch size: 26, lr: 1.84e-04 2022-05-28 23:56:04,829 INFO [train.py:842] (0/4) Epoch 30, batch 3550, loss[loss=0.1598, simple_loss=0.2407, pruned_loss=0.03949, over 7171.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2634, pruned_loss=0.04352, over 1421448.96 frames.], batch size: 18, lr: 1.84e-04 2022-05-28 23:56:44,252 INFO [train.py:842] (0/4) Epoch 30, batch 3600, loss[loss=0.1913, simple_loss=0.2838, pruned_loss=0.04941, over 7264.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2631, pruned_loss=0.04318, over 1426148.50 frames.], batch size: 19, lr: 1.84e-04 2022-05-28 23:57:23,871 INFO [train.py:842] (0/4) Epoch 30, batch 3650, loss[loss=0.1609, simple_loss=0.2558, pruned_loss=0.03297, over 6850.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2628, pruned_loss=0.04281, over 1427917.40 frames.], batch size: 31, lr: 1.84e-04 2022-05-28 23:58:03,111 
INFO [train.py:842] (0/4) Epoch 30, batch 3700, loss[loss=0.1439, simple_loss=0.2238, pruned_loss=0.03205, over 7272.00 frames.], tot_loss[loss=0.175, simple_loss=0.2633, pruned_loss=0.04331, over 1429308.28 frames.], batch size: 17, lr: 1.84e-04 2022-05-28 23:58:42,914 INFO [train.py:842] (0/4) Epoch 30, batch 3750, loss[loss=0.1896, simple_loss=0.2845, pruned_loss=0.04736, over 7094.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2632, pruned_loss=0.04279, over 1432243.74 frames.], batch size: 28, lr: 1.84e-04 2022-05-28 23:59:21,923 INFO [train.py:842] (0/4) Epoch 30, batch 3800, loss[loss=0.2027, simple_loss=0.2886, pruned_loss=0.05845, over 7197.00 frames.], tot_loss[loss=0.1761, simple_loss=0.265, pruned_loss=0.04359, over 1425159.46 frames.], batch size: 22, lr: 1.84e-04 2022-05-29 00:00:01,397 INFO [train.py:842] (0/4) Epoch 30, batch 3850, loss[loss=0.1852, simple_loss=0.2752, pruned_loss=0.0476, over 7205.00 frames.], tot_loss[loss=0.1761, simple_loss=0.265, pruned_loss=0.04359, over 1427711.32 frames.], batch size: 22, lr: 1.84e-04 2022-05-29 00:00:40,352 INFO [train.py:842] (0/4) Epoch 30, batch 3900, loss[loss=0.1821, simple_loss=0.281, pruned_loss=0.0416, over 7222.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2662, pruned_loss=0.04411, over 1427863.12 frames.], batch size: 21, lr: 1.84e-04 2022-05-29 00:01:19,884 INFO [train.py:842] (0/4) Epoch 30, batch 3950, loss[loss=0.1538, simple_loss=0.247, pruned_loss=0.03032, over 7364.00 frames.], tot_loss[loss=0.1772, simple_loss=0.266, pruned_loss=0.04422, over 1425178.42 frames.], batch size: 19, lr: 1.84e-04 2022-05-29 00:01:59,022 INFO [train.py:842] (0/4) Epoch 30, batch 4000, loss[loss=0.1723, simple_loss=0.2574, pruned_loss=0.04357, over 7161.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2664, pruned_loss=0.04452, over 1422377.81 frames.], batch size: 18, lr: 1.84e-04 2022-05-29 00:02:38,733 INFO [train.py:842] (0/4) Epoch 30, batch 4050, loss[loss=0.173, simple_loss=0.2613, pruned_loss=0.04238, over 7307.00 frames.], tot_loss[loss=0.1779, simple_loss=0.2661, pruned_loss=0.0449, over 1423718.25 frames.], batch size: 24, lr: 1.84e-04 2022-05-29 00:03:17,889 INFO [train.py:842] (0/4) Epoch 30, batch 4100, loss[loss=0.1778, simple_loss=0.2705, pruned_loss=0.04251, over 7231.00 frames.], tot_loss[loss=0.1778, simple_loss=0.2655, pruned_loss=0.04504, over 1424343.94 frames.], batch size: 21, lr: 1.84e-04 2022-05-29 00:03:57,486 INFO [train.py:842] (0/4) Epoch 30, batch 4150, loss[loss=0.1521, simple_loss=0.2421, pruned_loss=0.03106, over 7271.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2652, pruned_loss=0.04476, over 1427751.61 frames.], batch size: 18, lr: 1.84e-04 2022-05-29 00:04:36,799 INFO [train.py:842] (0/4) Epoch 30, batch 4200, loss[loss=0.1966, simple_loss=0.2806, pruned_loss=0.05624, over 7378.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2646, pruned_loss=0.04407, over 1429466.38 frames.], batch size: 23, lr: 1.83e-04 2022-05-29 00:05:16,538 INFO [train.py:842] (0/4) Epoch 30, batch 4250, loss[loss=0.1519, simple_loss=0.2374, pruned_loss=0.03318, over 7146.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2632, pruned_loss=0.04362, over 1429393.51 frames.], batch size: 17, lr: 1.83e-04 2022-05-29 00:05:55,541 INFO [train.py:842] (0/4) Epoch 30, batch 4300, loss[loss=0.1833, simple_loss=0.2717, pruned_loss=0.04741, over 7324.00 frames.], tot_loss[loss=0.178, simple_loss=0.2659, pruned_loss=0.04507, over 1427477.12 frames.], batch size: 25, lr: 1.83e-04 2022-05-29 00:06:35,223 INFO [train.py:842] (0/4) 
Epoch 30, batch 4350, loss[loss=0.189, simple_loss=0.2588, pruned_loss=0.05959, over 6980.00 frames.], tot_loss[loss=0.1779, simple_loss=0.266, pruned_loss=0.04487, over 1425283.81 frames.], batch size: 16, lr: 1.83e-04 2022-05-29 00:07:14,674 INFO [train.py:842] (0/4) Epoch 30, batch 4400, loss[loss=0.1742, simple_loss=0.2676, pruned_loss=0.04038, over 7432.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2659, pruned_loss=0.04458, over 1428312.85 frames.], batch size: 20, lr: 1.83e-04 2022-05-29 00:07:54,176 INFO [train.py:842] (0/4) Epoch 30, batch 4450, loss[loss=0.1714, simple_loss=0.2699, pruned_loss=0.03643, over 7167.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2649, pruned_loss=0.04382, over 1427820.13 frames.], batch size: 26, lr: 1.83e-04 2022-05-29 00:08:33,368 INFO [train.py:842] (0/4) Epoch 30, batch 4500, loss[loss=0.2193, simple_loss=0.3064, pruned_loss=0.0661, over 7320.00 frames.], tot_loss[loss=0.176, simple_loss=0.2645, pruned_loss=0.04372, over 1425597.62 frames.], batch size: 21, lr: 1.83e-04 2022-05-29 00:09:13,151 INFO [train.py:842] (0/4) Epoch 30, batch 4550, loss[loss=0.1408, simple_loss=0.2252, pruned_loss=0.02826, over 7431.00 frames.], tot_loss[loss=0.1747, simple_loss=0.263, pruned_loss=0.04322, over 1428229.75 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:09:52,414 INFO [train.py:842] (0/4) Epoch 30, batch 4600, loss[loss=0.2026, simple_loss=0.2922, pruned_loss=0.0565, over 7201.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2634, pruned_loss=0.04364, over 1424862.11 frames.], batch size: 22, lr: 1.83e-04 2022-05-29 00:10:32,204 INFO [train.py:842] (0/4) Epoch 30, batch 4650, loss[loss=0.1768, simple_loss=0.2733, pruned_loss=0.04014, over 6788.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2632, pruned_loss=0.04334, over 1425833.57 frames.], batch size: 31, lr: 1.83e-04 2022-05-29 00:11:11,540 INFO [train.py:842] (0/4) Epoch 30, batch 4700, loss[loss=0.1935, simple_loss=0.2804, pruned_loss=0.05329, over 7300.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2633, pruned_loss=0.04305, over 1427976.38 frames.], batch size: 24, lr: 1.83e-04 2022-05-29 00:11:51,329 INFO [train.py:842] (0/4) Epoch 30, batch 4750, loss[loss=0.1852, simple_loss=0.264, pruned_loss=0.05324, over 7144.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2642, pruned_loss=0.04327, over 1428271.43 frames.], batch size: 19, lr: 1.83e-04 2022-05-29 00:12:30,822 INFO [train.py:842] (0/4) Epoch 30, batch 4800, loss[loss=0.1884, simple_loss=0.2752, pruned_loss=0.05084, over 7371.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2648, pruned_loss=0.04332, over 1429224.71 frames.], batch size: 23, lr: 1.83e-04 2022-05-29 00:13:10,321 INFO [train.py:842] (0/4) Epoch 30, batch 4850, loss[loss=0.1579, simple_loss=0.2475, pruned_loss=0.03414, over 7222.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2652, pruned_loss=0.04379, over 1427739.22 frames.], batch size: 21, lr: 1.83e-04 2022-05-29 00:13:49,492 INFO [train.py:842] (0/4) Epoch 30, batch 4900, loss[loss=0.1456, simple_loss=0.2319, pruned_loss=0.02967, over 7409.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2652, pruned_loss=0.04371, over 1425599.11 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:14:29,066 INFO [train.py:842] (0/4) Epoch 30, batch 4950, loss[loss=0.1791, simple_loss=0.2624, pruned_loss=0.0479, over 7285.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2651, pruned_loss=0.04381, over 1421993.38 frames.], batch size: 24, lr: 1.83e-04 2022-05-29 00:15:08,304 INFO [train.py:842] (0/4) Epoch 30, batch 5000, 
loss[loss=0.1705, simple_loss=0.2448, pruned_loss=0.04807, over 6788.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2643, pruned_loss=0.04312, over 1422965.66 frames.], batch size: 15, lr: 1.83e-04 2022-05-29 00:15:47,616 INFO [train.py:842] (0/4) Epoch 30, batch 5050, loss[loss=0.2322, simple_loss=0.3261, pruned_loss=0.06917, over 7083.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2657, pruned_loss=0.04382, over 1418226.96 frames.], batch size: 28, lr: 1.83e-04 2022-05-29 00:16:27,098 INFO [train.py:842] (0/4) Epoch 30, batch 5100, loss[loss=0.1577, simple_loss=0.2434, pruned_loss=0.036, over 6774.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2642, pruned_loss=0.04373, over 1416023.00 frames.], batch size: 15, lr: 1.83e-04 2022-05-29 00:17:06,609 INFO [train.py:842] (0/4) Epoch 30, batch 5150, loss[loss=0.1644, simple_loss=0.2491, pruned_loss=0.03985, over 7280.00 frames.], tot_loss[loss=0.1766, simple_loss=0.265, pruned_loss=0.04413, over 1412472.24 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:17:45,846 INFO [train.py:842] (0/4) Epoch 30, batch 5200, loss[loss=0.1664, simple_loss=0.2529, pruned_loss=0.03995, over 7387.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2632, pruned_loss=0.04322, over 1416865.22 frames.], batch size: 23, lr: 1.83e-04 2022-05-29 00:18:25,585 INFO [train.py:842] (0/4) Epoch 30, batch 5250, loss[loss=0.1975, simple_loss=0.2828, pruned_loss=0.05611, over 7321.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2625, pruned_loss=0.04285, over 1420184.79 frames.], batch size: 21, lr: 1.83e-04 2022-05-29 00:19:04,590 INFO [train.py:842] (0/4) Epoch 30, batch 5300, loss[loss=0.2076, simple_loss=0.275, pruned_loss=0.07014, over 7148.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2646, pruned_loss=0.04386, over 1421686.19 frames.], batch size: 17, lr: 1.83e-04 2022-05-29 00:19:44,221 INFO [train.py:842] (0/4) Epoch 30, batch 5350, loss[loss=0.1576, simple_loss=0.2395, pruned_loss=0.03782, over 7156.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2639, pruned_loss=0.04369, over 1423500.44 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:20:23,305 INFO [train.py:842] (0/4) Epoch 30, batch 5400, loss[loss=0.1488, simple_loss=0.2397, pruned_loss=0.02896, over 7140.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2641, pruned_loss=0.04414, over 1423080.69 frames.], batch size: 17, lr: 1.83e-04 2022-05-29 00:20:50,975 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-272000.pt 2022-05-29 00:21:05,452 INFO [train.py:842] (0/4) Epoch 30, batch 5450, loss[loss=0.1674, simple_loss=0.2566, pruned_loss=0.03906, over 7251.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2654, pruned_loss=0.04415, over 1423293.35 frames.], batch size: 19, lr: 1.83e-04 2022-05-29 00:21:44,380 INFO [train.py:842] (0/4) Epoch 30, batch 5500, loss[loss=0.1865, simple_loss=0.2731, pruned_loss=0.04993, over 7410.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2661, pruned_loss=0.04444, over 1422350.87 frames.], batch size: 21, lr: 1.83e-04 2022-05-29 00:22:23,771 INFO [train.py:842] (0/4) Epoch 30, batch 5550, loss[loss=0.2037, simple_loss=0.2891, pruned_loss=0.05915, over 7322.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2655, pruned_loss=0.04408, over 1420273.36 frames.], batch size: 20, lr: 1.83e-04 2022-05-29 00:23:02,853 INFO [train.py:842] (0/4) Epoch 30, batch 5600, loss[loss=0.162, simple_loss=0.2555, pruned_loss=0.03426, over 7370.00 frames.], tot_loss[loss=0.1772, simple_loss=0.2659, pruned_loss=0.04427, over 
1409425.49 frames.], batch size: 19, lr: 1.83e-04 2022-05-29 00:23:42,325 INFO [train.py:842] (0/4) Epoch 30, batch 5650, loss[loss=0.1903, simple_loss=0.2712, pruned_loss=0.05474, over 7351.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2651, pruned_loss=0.04394, over 1410744.40 frames.], batch size: 19, lr: 1.83e-04 2022-05-29 00:24:21,761 INFO [train.py:842] (0/4) Epoch 30, batch 5700, loss[loss=0.1562, simple_loss=0.239, pruned_loss=0.03665, over 7013.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2642, pruned_loss=0.04361, over 1418509.48 frames.], batch size: 16, lr: 1.83e-04 2022-05-29 00:25:01,293 INFO [train.py:842] (0/4) Epoch 30, batch 5750, loss[loss=0.2182, simple_loss=0.3092, pruned_loss=0.06356, over 7274.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2635, pruned_loss=0.04292, over 1421747.19 frames.], batch size: 24, lr: 1.83e-04 2022-05-29 00:25:40,653 INFO [train.py:842] (0/4) Epoch 30, batch 5800, loss[loss=0.1614, simple_loss=0.2446, pruned_loss=0.03909, over 7421.00 frames.], tot_loss[loss=0.174, simple_loss=0.2627, pruned_loss=0.04268, over 1422859.25 frames.], batch size: 20, lr: 1.83e-04 2022-05-29 00:26:19,839 INFO [train.py:842] (0/4) Epoch 30, batch 5850, loss[loss=0.151, simple_loss=0.2435, pruned_loss=0.02924, over 7061.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2641, pruned_loss=0.04329, over 1422475.24 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:26:59,110 INFO [train.py:842] (0/4) Epoch 30, batch 5900, loss[loss=0.2077, simple_loss=0.3028, pruned_loss=0.05626, over 7153.00 frames.], tot_loss[loss=0.1754, simple_loss=0.264, pruned_loss=0.04339, over 1422274.99 frames.], batch size: 20, lr: 1.83e-04 2022-05-29 00:27:38,810 INFO [train.py:842] (0/4) Epoch 30, batch 5950, loss[loss=0.1714, simple_loss=0.262, pruned_loss=0.04039, over 7127.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2642, pruned_loss=0.04317, over 1425909.89 frames.], batch size: 21, lr: 1.83e-04 2022-05-29 00:28:18,036 INFO [train.py:842] (0/4) Epoch 30, batch 6000, loss[loss=0.14, simple_loss=0.2355, pruned_loss=0.02226, over 7419.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2636, pruned_loss=0.04312, over 1425550.62 frames.], batch size: 21, lr: 1.83e-04 2022-05-29 00:28:18,037 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 00:28:27,765 INFO [train.py:871] (0/4) Epoch 30, validation: loss=0.1626, simple_loss=0.2605, pruned_loss=0.03232, over 868885.00 frames. 
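During this stretch the script also writes two periodic checkpoints (checkpoint-264000.pt and checkpoint-272000.pt) and the end-of-epoch file epoch-29.pt under streaming_pruned_transducer_stateless4/exp. Assuming these are ordinary torch.save() dictionaries (the log itself does not show their contents), they can be inspected offline, for instance:

    import torch

    # Sketch: peek at one of the checkpoints referenced in the log above.
    # The key names printed are whatever the training script stored;
    # nothing beyond the file path is taken from this log.
    ckpt = torch.load(
        "streaming_pruned_transducer_stateless4/exp/epoch-29.pt",
        map_location="cpu",
    )
    print(sorted(ckpt.keys()))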
2022-05-29 00:29:07,505 INFO [train.py:842] (0/4) Epoch 30, batch 6050, loss[loss=0.162, simple_loss=0.2445, pruned_loss=0.03972, over 7159.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2643, pruned_loss=0.04376, over 1425836.05 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:29:46,797 INFO [train.py:842] (0/4) Epoch 30, batch 6100, loss[loss=0.1801, simple_loss=0.2708, pruned_loss=0.04468, over 7066.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2646, pruned_loss=0.04348, over 1423928.74 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:30:26,356 INFO [train.py:842] (0/4) Epoch 30, batch 6150, loss[loss=0.1425, simple_loss=0.2259, pruned_loss=0.02954, over 7350.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2644, pruned_loss=0.0432, over 1421079.02 frames.], batch size: 19, lr: 1.83e-04 2022-05-29 00:31:05,516 INFO [train.py:842] (0/4) Epoch 30, batch 6200, loss[loss=0.1579, simple_loss=0.2378, pruned_loss=0.03894, over 7282.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2642, pruned_loss=0.04306, over 1420981.12 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:31:45,357 INFO [train.py:842] (0/4) Epoch 30, batch 6250, loss[loss=0.1738, simple_loss=0.2695, pruned_loss=0.03899, over 7164.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2643, pruned_loss=0.04311, over 1423046.87 frames.], batch size: 19, lr: 1.83e-04 2022-05-29 00:32:24,514 INFO [train.py:842] (0/4) Epoch 30, batch 6300, loss[loss=0.1659, simple_loss=0.2715, pruned_loss=0.03011, over 6750.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2641, pruned_loss=0.04278, over 1427407.08 frames.], batch size: 31, lr: 1.83e-04 2022-05-29 00:33:03,728 INFO [train.py:842] (0/4) Epoch 30, batch 6350, loss[loss=0.1599, simple_loss=0.2468, pruned_loss=0.03646, over 7261.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2647, pruned_loss=0.04301, over 1426752.20 frames.], batch size: 17, lr: 1.83e-04 2022-05-29 00:33:42,868 INFO [train.py:842] (0/4) Epoch 30, batch 6400, loss[loss=0.1329, simple_loss=0.2147, pruned_loss=0.02549, over 7139.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2644, pruned_loss=0.04297, over 1424301.58 frames.], batch size: 17, lr: 1.83e-04 2022-05-29 00:34:22,373 INFO [train.py:842] (0/4) Epoch 30, batch 6450, loss[loss=0.2106, simple_loss=0.294, pruned_loss=0.06357, over 7273.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2653, pruned_loss=0.04406, over 1425679.45 frames.], batch size: 24, lr: 1.83e-04 2022-05-29 00:35:01,375 INFO [train.py:842] (0/4) Epoch 30, batch 6500, loss[loss=0.1935, simple_loss=0.2796, pruned_loss=0.05374, over 7294.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2655, pruned_loss=0.04375, over 1427160.56 frames.], batch size: 24, lr: 1.83e-04 2022-05-29 00:35:41,137 INFO [train.py:842] (0/4) Epoch 30, batch 6550, loss[loss=0.189, simple_loss=0.2852, pruned_loss=0.04636, over 7407.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2644, pruned_loss=0.04342, over 1426145.72 frames.], batch size: 21, lr: 1.83e-04 2022-05-29 00:36:20,413 INFO [train.py:842] (0/4) Epoch 30, batch 6600, loss[loss=0.2108, simple_loss=0.2927, pruned_loss=0.06445, over 7396.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2651, pruned_loss=0.04399, over 1427906.24 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:37:00,148 INFO [train.py:842] (0/4) Epoch 30, batch 6650, loss[loss=0.1802, simple_loss=0.2669, pruned_loss=0.04677, over 7164.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2632, pruned_loss=0.0429, over 1426579.03 frames.], batch size: 19, lr: 1.83e-04 2022-05-29 00:37:39,117 
INFO [train.py:842] (0/4) Epoch 30, batch 6700, loss[loss=0.1586, simple_loss=0.232, pruned_loss=0.04266, over 7277.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2632, pruned_loss=0.04351, over 1423056.37 frames.], batch size: 17, lr: 1.83e-04 2022-05-29 00:38:18,793 INFO [train.py:842] (0/4) Epoch 30, batch 6750, loss[loss=0.1351, simple_loss=0.2193, pruned_loss=0.02546, over 7429.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2622, pruned_loss=0.04284, over 1425483.92 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:38:57,921 INFO [train.py:842] (0/4) Epoch 30, batch 6800, loss[loss=0.1925, simple_loss=0.283, pruned_loss=0.05096, over 7268.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2623, pruned_loss=0.04293, over 1426074.84 frames.], batch size: 25, lr: 1.83e-04 2022-05-29 00:39:37,553 INFO [train.py:842] (0/4) Epoch 30, batch 6850, loss[loss=0.2126, simple_loss=0.2963, pruned_loss=0.06446, over 7207.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2635, pruned_loss=0.0435, over 1425770.60 frames.], batch size: 22, lr: 1.83e-04 2022-05-29 00:40:16,988 INFO [train.py:842] (0/4) Epoch 30, batch 6900, loss[loss=0.1472, simple_loss=0.2198, pruned_loss=0.03732, over 7281.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2627, pruned_loss=0.04302, over 1424392.42 frames.], batch size: 17, lr: 1.83e-04 2022-05-29 00:40:56,486 INFO [train.py:842] (0/4) Epoch 30, batch 6950, loss[loss=0.1652, simple_loss=0.2496, pruned_loss=0.04034, over 7073.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2624, pruned_loss=0.04303, over 1420893.96 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:41:35,794 INFO [train.py:842] (0/4) Epoch 30, batch 7000, loss[loss=0.1979, simple_loss=0.2837, pruned_loss=0.056, over 7254.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2628, pruned_loss=0.04325, over 1420596.12 frames.], batch size: 19, lr: 1.83e-04 2022-05-29 00:42:15,294 INFO [train.py:842] (0/4) Epoch 30, batch 7050, loss[loss=0.1957, simple_loss=0.2928, pruned_loss=0.04928, over 7321.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2638, pruned_loss=0.04371, over 1420546.07 frames.], batch size: 21, lr: 1.83e-04 2022-05-29 00:42:54,718 INFO [train.py:842] (0/4) Epoch 30, batch 7100, loss[loss=0.1466, simple_loss=0.2352, pruned_loss=0.02896, over 7415.00 frames.], tot_loss[loss=0.175, simple_loss=0.2635, pruned_loss=0.04327, over 1417375.04 frames.], batch size: 18, lr: 1.83e-04 2022-05-29 00:43:34,107 INFO [train.py:842] (0/4) Epoch 30, batch 7150, loss[loss=0.1776, simple_loss=0.2699, pruned_loss=0.04265, over 7198.00 frames.], tot_loss[loss=0.176, simple_loss=0.2644, pruned_loss=0.04377, over 1417717.24 frames.], batch size: 22, lr: 1.82e-04 2022-05-29 00:44:13,155 INFO [train.py:842] (0/4) Epoch 30, batch 7200, loss[loss=0.1902, simple_loss=0.2846, pruned_loss=0.04792, over 7101.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2657, pruned_loss=0.04425, over 1416773.91 frames.], batch size: 21, lr: 1.82e-04 2022-05-29 00:44:52,866 INFO [train.py:842] (0/4) Epoch 30, batch 7250, loss[loss=0.1824, simple_loss=0.2825, pruned_loss=0.04117, over 7332.00 frames.], tot_loss[loss=0.176, simple_loss=0.2649, pruned_loss=0.04358, over 1417253.52 frames.], batch size: 22, lr: 1.82e-04 2022-05-29 00:45:32,230 INFO [train.py:842] (0/4) Epoch 30, batch 7300, loss[loss=0.1581, simple_loss=0.2467, pruned_loss=0.03472, over 7058.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2626, pruned_loss=0.04277, over 1420147.35 frames.], batch size: 18, lr: 1.82e-04 2022-05-29 00:46:12,047 INFO [train.py:842] (0/4) 
Epoch 30, batch 7350, loss[loss=0.1819, simple_loss=0.2718, pruned_loss=0.04603, over 7050.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2614, pruned_loss=0.0421, over 1422396.72 frames.], batch size: 28, lr: 1.82e-04 2022-05-29 00:47:02,156 INFO [train.py:842] (0/4) Epoch 30, batch 7400, loss[loss=0.1867, simple_loss=0.2764, pruned_loss=0.04851, over 6683.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2622, pruned_loss=0.04238, over 1420357.10 frames.], batch size: 31, lr: 1.82e-04 2022-05-29 00:47:41,627 INFO [train.py:842] (0/4) Epoch 30, batch 7450, loss[loss=0.1574, simple_loss=0.2563, pruned_loss=0.02925, over 7315.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2629, pruned_loss=0.04279, over 1425406.25 frames.], batch size: 21, lr: 1.82e-04 2022-05-29 00:48:20,885 INFO [train.py:842] (0/4) Epoch 30, batch 7500, loss[loss=0.1619, simple_loss=0.2465, pruned_loss=0.03866, over 7062.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2632, pruned_loss=0.04273, over 1425209.04 frames.], batch size: 18, lr: 1.82e-04 2022-05-29 00:49:00,496 INFO [train.py:842] (0/4) Epoch 30, batch 7550, loss[loss=0.1646, simple_loss=0.2551, pruned_loss=0.03706, over 7147.00 frames.], tot_loss[loss=0.175, simple_loss=0.2637, pruned_loss=0.04315, over 1422811.27 frames.], batch size: 19, lr: 1.82e-04 2022-05-29 00:49:39,638 INFO [train.py:842] (0/4) Epoch 30, batch 7600, loss[loss=0.1468, simple_loss=0.2437, pruned_loss=0.02495, over 7339.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2646, pruned_loss=0.04294, over 1422986.41 frames.], batch size: 20, lr: 1.82e-04 2022-05-29 00:50:19,176 INFO [train.py:842] (0/4) Epoch 30, batch 7650, loss[loss=0.1732, simple_loss=0.2706, pruned_loss=0.03786, over 7231.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2643, pruned_loss=0.04337, over 1421907.34 frames.], batch size: 20, lr: 1.82e-04 2022-05-29 00:50:58,426 INFO [train.py:842] (0/4) Epoch 30, batch 7700, loss[loss=0.2131, simple_loss=0.3125, pruned_loss=0.05682, over 7315.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2651, pruned_loss=0.04355, over 1418785.35 frames.], batch size: 25, lr: 1.82e-04 2022-05-29 00:51:38,124 INFO [train.py:842] (0/4) Epoch 30, batch 7750, loss[loss=0.1648, simple_loss=0.2482, pruned_loss=0.04073, over 7349.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2643, pruned_loss=0.0428, over 1419827.16 frames.], batch size: 19, lr: 1.82e-04 2022-05-29 00:52:17,437 INFO [train.py:842] (0/4) Epoch 30, batch 7800, loss[loss=0.1447, simple_loss=0.2391, pruned_loss=0.0251, over 7060.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2631, pruned_loss=0.04252, over 1421206.54 frames.], batch size: 18, lr: 1.82e-04 2022-05-29 00:52:57,223 INFO [train.py:842] (0/4) Epoch 30, batch 7850, loss[loss=0.1572, simple_loss=0.2308, pruned_loss=0.04179, over 6778.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2634, pruned_loss=0.04271, over 1425419.30 frames.], batch size: 15, lr: 1.82e-04 2022-05-29 00:53:36,747 INFO [train.py:842] (0/4) Epoch 30, batch 7900, loss[loss=0.1918, simple_loss=0.2804, pruned_loss=0.05158, over 7343.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2629, pruned_loss=0.04303, over 1422597.47 frames.], batch size: 20, lr: 1.82e-04 2022-05-29 00:54:16,416 INFO [train.py:842] (0/4) Epoch 30, batch 7950, loss[loss=0.1607, simple_loss=0.2508, pruned_loss=0.03532, over 7149.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2619, pruned_loss=0.0426, over 1421216.26 frames.], batch size: 19, lr: 1.82e-04 2022-05-29 00:54:55,577 INFO [train.py:842] (0/4) Epoch 30, batch 8000, 
loss[loss=0.1952, simple_loss=0.2874, pruned_loss=0.05149, over 7203.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2637, pruned_loss=0.04362, over 1423639.82 frames.], batch size: 22, lr: 1.82e-04 2022-05-29 00:55:35,091 INFO [train.py:842] (0/4) Epoch 30, batch 8050, loss[loss=0.1841, simple_loss=0.2669, pruned_loss=0.05069, over 7241.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2626, pruned_loss=0.04235, over 1426728.41 frames.], batch size: 20, lr: 1.82e-04 2022-05-29 00:56:14,418 INFO [train.py:842] (0/4) Epoch 30, batch 8100, loss[loss=0.1585, simple_loss=0.2513, pruned_loss=0.03287, over 7142.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2636, pruned_loss=0.04303, over 1430920.24 frames.], batch size: 20, lr: 1.82e-04 2022-05-29 00:56:53,806 INFO [train.py:842] (0/4) Epoch 30, batch 8150, loss[loss=0.2065, simple_loss=0.3003, pruned_loss=0.05634, over 7330.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2644, pruned_loss=0.04349, over 1422994.46 frames.], batch size: 22, lr: 1.82e-04 2022-05-29 00:57:33,241 INFO [train.py:842] (0/4) Epoch 30, batch 8200, loss[loss=0.2551, simple_loss=0.3236, pruned_loss=0.09327, over 7135.00 frames.], tot_loss[loss=0.1757, simple_loss=0.264, pruned_loss=0.04375, over 1425230.57 frames.], batch size: 26, lr: 1.82e-04 2022-05-29 00:58:12,717 INFO [train.py:842] (0/4) Epoch 30, batch 8250, loss[loss=0.1316, simple_loss=0.2169, pruned_loss=0.02315, over 7409.00 frames.], tot_loss[loss=0.1747, simple_loss=0.263, pruned_loss=0.04323, over 1423758.09 frames.], batch size: 18, lr: 1.82e-04 2022-05-29 00:58:51,852 INFO [train.py:842] (0/4) Epoch 30, batch 8300, loss[loss=0.1839, simple_loss=0.272, pruned_loss=0.04791, over 7223.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2632, pruned_loss=0.04301, over 1425006.68 frames.], batch size: 21, lr: 1.82e-04 2022-05-29 00:59:31,516 INFO [train.py:842] (0/4) Epoch 30, batch 8350, loss[loss=0.1959, simple_loss=0.2885, pruned_loss=0.05162, over 7029.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2627, pruned_loss=0.04278, over 1429520.96 frames.], batch size: 28, lr: 1.82e-04 2022-05-29 01:00:10,570 INFO [train.py:842] (0/4) Epoch 30, batch 8400, loss[loss=0.1731, simple_loss=0.251, pruned_loss=0.04766, over 7003.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2627, pruned_loss=0.04286, over 1426476.34 frames.], batch size: 16, lr: 1.82e-04 2022-05-29 01:00:49,943 INFO [train.py:842] (0/4) Epoch 30, batch 8450, loss[loss=0.1715, simple_loss=0.2662, pruned_loss=0.03843, over 7201.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2635, pruned_loss=0.04353, over 1423849.32 frames.], batch size: 22, lr: 1.82e-04 2022-05-29 01:01:28,967 INFO [train.py:842] (0/4) Epoch 30, batch 8500, loss[loss=0.1766, simple_loss=0.2682, pruned_loss=0.04248, over 7244.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2639, pruned_loss=0.04369, over 1422777.71 frames.], batch size: 20, lr: 1.82e-04 2022-05-29 01:02:08,166 INFO [train.py:842] (0/4) Epoch 30, batch 8550, loss[loss=0.1398, simple_loss=0.2183, pruned_loss=0.03066, over 6990.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2648, pruned_loss=0.04403, over 1421003.50 frames.], batch size: 16, lr: 1.82e-04 2022-05-29 01:02:47,540 INFO [train.py:842] (0/4) Epoch 30, batch 8600, loss[loss=0.1817, simple_loss=0.2838, pruned_loss=0.03982, over 7215.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2642, pruned_loss=0.04385, over 1417372.56 frames.], batch size: 21, lr: 1.82e-04 2022-05-29 01:03:26,996 INFO [train.py:842] (0/4) Epoch 30, batch 8650, loss[loss=0.1779, 
simple_loss=0.2681, pruned_loss=0.04384, over 7190.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2633, pruned_loss=0.04322, over 1418744.84 frames.], batch size: 23, lr: 1.82e-04 2022-05-29 01:04:06,160 INFO [train.py:842] (0/4) Epoch 30, batch 8700, loss[loss=0.1448, simple_loss=0.2272, pruned_loss=0.03121, over 6851.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2635, pruned_loss=0.04366, over 1416982.43 frames.], batch size: 15, lr: 1.82e-04 2022-05-29 01:04:45,526 INFO [train.py:842] (0/4) Epoch 30, batch 8750, loss[loss=0.1915, simple_loss=0.2741, pruned_loss=0.05442, over 5022.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2638, pruned_loss=0.0435, over 1415093.66 frames.], batch size: 54, lr: 1.82e-04 2022-05-29 01:05:24,836 INFO [train.py:842] (0/4) Epoch 30, batch 8800, loss[loss=0.2207, simple_loss=0.3079, pruned_loss=0.06674, over 7104.00 frames.], tot_loss[loss=0.1744, simple_loss=0.263, pruned_loss=0.04295, over 1417369.51 frames.], batch size: 21, lr: 1.82e-04 2022-05-29 01:06:04,542 INFO [train.py:842] (0/4) Epoch 30, batch 8850, loss[loss=0.1871, simple_loss=0.2754, pruned_loss=0.04941, over 7199.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2623, pruned_loss=0.04252, over 1419982.01 frames.], batch size: 22, lr: 1.82e-04 2022-05-29 01:06:43,637 INFO [train.py:842] (0/4) Epoch 30, batch 8900, loss[loss=0.2333, simple_loss=0.3179, pruned_loss=0.07429, over 7161.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2627, pruned_loss=0.04309, over 1415151.52 frames.], batch size: 18, lr: 1.82e-04 2022-05-29 01:07:22,751 INFO [train.py:842] (0/4) Epoch 30, batch 8950, loss[loss=0.1671, simple_loss=0.2588, pruned_loss=0.03776, over 7399.00 frames.], tot_loss[loss=0.1749, simple_loss=0.263, pruned_loss=0.04335, over 1404721.80 frames.], batch size: 18, lr: 1.82e-04 2022-05-29 01:08:01,596 INFO [train.py:842] (0/4) Epoch 30, batch 9000, loss[loss=0.1655, simple_loss=0.2583, pruned_loss=0.03633, over 6849.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2637, pruned_loss=0.04391, over 1392112.19 frames.], batch size: 31, lr: 1.82e-04 2022-05-29 01:08:01,598 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 01:08:11,201 INFO [train.py:871] (0/4) Epoch 30, validation: loss=0.1653, simple_loss=0.2631, pruned_loss=0.03376, over 868885.00 frames. 
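A consistency check on the numbers above: throughout this section the printed loss agrees with 0.5 * simple_loss + pruned_loss, for both the per-batch loss[...] blocks and the validation entries (for the Epoch 30 validation line, 0.5 * 0.2631 + 0.03376 = 0.1653). The weight 0.5 is inferred from these figures alone, not read out of train.py. A minimal check in Python, using three triples copied from the log:

# Verify that loss == 0.5 * simple_loss + pruned_loss for values taken
# verbatim from the log above. The 0.5 weight is an inference from the
# numbers themselves, not a quote of the training code.
samples = [
    (0.162, 0.2445, 0.03972),    # Epoch 30, batch 6050, per-batch loss
    (0.1757, 0.2637, 0.04391),   # Epoch 30, batch 9000, tot_loss
    (0.1653, 0.2631, 0.03376),   # Epoch 30, validation
]
for loss, simple_loss, pruned_loss in samples:
    recombined = 0.5 * simple_loss + pruned_loss
    assert abs(recombined - loss) < 5e-4, (loss, recombined)
    print(f"loss={loss:.4f}  0.5*simple_loss+pruned_loss={recombined:.4f}")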
2022-05-29 01:08:49,657 INFO [train.py:842] (0/4) Epoch 30, batch 9050, loss[loss=0.1839, simple_loss=0.2677, pruned_loss=0.05002, over 5209.00 frames.], tot_loss[loss=0.177, simple_loss=0.2653, pruned_loss=0.04437, over 1372935.55 frames.], batch size: 54, lr: 1.82e-04 2022-05-29 01:09:27,644 INFO [train.py:842] (0/4) Epoch 30, batch 9100, loss[loss=0.1722, simple_loss=0.2639, pruned_loss=0.04027, over 6483.00 frames.], tot_loss[loss=0.1793, simple_loss=0.2674, pruned_loss=0.04556, over 1330019.86 frames.], batch size: 37, lr: 1.82e-04 2022-05-29 01:10:06,015 INFO [train.py:842] (0/4) Epoch 30, batch 9150, loss[loss=0.1961, simple_loss=0.2851, pruned_loss=0.05355, over 5117.00 frames.], tot_loss[loss=0.1851, simple_loss=0.2727, pruned_loss=0.04882, over 1261911.32 frames.], batch size: 52, lr: 1.82e-04 2022-05-29 01:10:37,919 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-30.pt 2022-05-29 01:10:56,764 INFO [train.py:842] (0/4) Epoch 31, batch 0, loss[loss=0.1577, simple_loss=0.2479, pruned_loss=0.03373, over 7336.00 frames.], tot_loss[loss=0.1577, simple_loss=0.2479, pruned_loss=0.03373, over 7336.00 frames.], batch size: 20, lr: 1.79e-04 2022-05-29 01:11:36,501 INFO [train.py:842] (0/4) Epoch 31, batch 50, loss[loss=0.1649, simple_loss=0.2473, pruned_loss=0.04119, over 7254.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2626, pruned_loss=0.0449, over 316508.74 frames.], batch size: 19, lr: 1.79e-04 2022-05-29 01:12:15,780 INFO [train.py:842] (0/4) Epoch 31, batch 100, loss[loss=0.2049, simple_loss=0.2876, pruned_loss=0.06106, over 7374.00 frames.], tot_loss[loss=0.1784, simple_loss=0.2659, pruned_loss=0.04545, over 560528.34 frames.], batch size: 23, lr: 1.79e-04 2022-05-29 01:12:55,641 INFO [train.py:842] (0/4) Epoch 31, batch 150, loss[loss=0.2204, simple_loss=0.2991, pruned_loss=0.07082, over 7198.00 frames.], tot_loss[loss=0.178, simple_loss=0.2654, pruned_loss=0.0453, over 755755.41 frames.], batch size: 22, lr: 1.79e-04 2022-05-29 01:13:34,915 INFO [train.py:842] (0/4) Epoch 31, batch 200, loss[loss=0.2239, simple_loss=0.3073, pruned_loss=0.07031, over 4979.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2653, pruned_loss=0.04496, over 900135.12 frames.], batch size: 52, lr: 1.79e-04 2022-05-29 01:14:14,357 INFO [train.py:842] (0/4) Epoch 31, batch 250, loss[loss=0.186, simple_loss=0.2811, pruned_loss=0.04549, over 7280.00 frames.], tot_loss[loss=0.1794, simple_loss=0.2684, pruned_loss=0.04522, over 1014508.45 frames.], batch size: 25, lr: 1.79e-04 2022-05-29 01:14:53,613 INFO [train.py:842] (0/4) Epoch 31, batch 300, loss[loss=0.1628, simple_loss=0.2627, pruned_loss=0.03144, over 7321.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2666, pruned_loss=0.04401, over 1106028.67 frames.], batch size: 21, lr: 1.79e-04 2022-05-29 01:15:33,025 INFO [train.py:842] (0/4) Epoch 31, batch 350, loss[loss=0.1424, simple_loss=0.2326, pruned_loss=0.02605, over 7155.00 frames.], tot_loss[loss=0.1768, simple_loss=0.266, pruned_loss=0.04378, over 1173993.34 frames.], batch size: 18, lr: 1.79e-04 2022-05-29 01:16:12,227 INFO [train.py:842] (0/4) Epoch 31, batch 400, loss[loss=0.1841, simple_loss=0.2765, pruned_loss=0.04582, over 7224.00 frames.], tot_loss[loss=0.1773, simple_loss=0.2662, pruned_loss=0.04424, over 1224975.51 frames.], batch size: 21, lr: 1.79e-04 2022-05-29 01:16:51,531 INFO [train.py:842] (0/4) Epoch 31, batch 450, loss[loss=0.1739, simple_loss=0.2717, pruned_loss=0.03802, over 7157.00 frames.], tot_loss[loss=0.1772, 
simple_loss=0.266, pruned_loss=0.04418, over 1266560.82 frames.], batch size: 26, lr: 1.79e-04 2022-05-29 01:17:30,772 INFO [train.py:842] (0/4) Epoch 31, batch 500, loss[loss=0.1251, simple_loss=0.2101, pruned_loss=0.02007, over 7280.00 frames.], tot_loss[loss=0.1762, simple_loss=0.265, pruned_loss=0.04374, over 1301608.59 frames.], batch size: 17, lr: 1.79e-04 2022-05-29 01:18:10,333 INFO [train.py:842] (0/4) Epoch 31, batch 550, loss[loss=0.173, simple_loss=0.2719, pruned_loss=0.03704, over 7416.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2652, pruned_loss=0.04386, over 1328322.62 frames.], batch size: 21, lr: 1.79e-04 2022-05-29 01:18:49,297 INFO [train.py:842] (0/4) Epoch 31, batch 600, loss[loss=0.1676, simple_loss=0.2535, pruned_loss=0.0409, over 7074.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2652, pruned_loss=0.04354, over 1347996.41 frames.], batch size: 18, lr: 1.79e-04 2022-05-29 01:19:29,072 INFO [train.py:842] (0/4) Epoch 31, batch 650, loss[loss=0.2174, simple_loss=0.2995, pruned_loss=0.06761, over 7136.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2645, pruned_loss=0.04353, over 1369915.20 frames.], batch size: 20, lr: 1.79e-04 2022-05-29 01:20:08,345 INFO [train.py:842] (0/4) Epoch 31, batch 700, loss[loss=0.1407, simple_loss=0.2215, pruned_loss=0.02997, over 7220.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2648, pruned_loss=0.04415, over 1380511.65 frames.], batch size: 16, lr: 1.79e-04 2022-05-29 01:20:47,916 INFO [train.py:842] (0/4) Epoch 31, batch 750, loss[loss=0.1657, simple_loss=0.2635, pruned_loss=0.03398, over 7233.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2653, pruned_loss=0.04443, over 1388272.78 frames.], batch size: 20, lr: 1.79e-04 2022-05-29 01:21:27,076 INFO [train.py:842] (0/4) Epoch 31, batch 800, loss[loss=0.1809, simple_loss=0.2677, pruned_loss=0.04708, over 7308.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2644, pruned_loss=0.04399, over 1395889.16 frames.], batch size: 20, lr: 1.79e-04 2022-05-29 01:22:06,668 INFO [train.py:842] (0/4) Epoch 31, batch 850, loss[loss=0.1602, simple_loss=0.254, pruned_loss=0.03318, over 7430.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2643, pruned_loss=0.04419, over 1399793.60 frames.], batch size: 20, lr: 1.79e-04 2022-05-29 01:22:46,069 INFO [train.py:842] (0/4) Epoch 31, batch 900, loss[loss=0.1378, simple_loss=0.2227, pruned_loss=0.02639, over 7216.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2649, pruned_loss=0.04446, over 1404795.47 frames.], batch size: 16, lr: 1.79e-04 2022-05-29 01:23:25,746 INFO [train.py:842] (0/4) Epoch 31, batch 950, loss[loss=0.2233, simple_loss=0.3068, pruned_loss=0.06997, over 7001.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2634, pruned_loss=0.04368, over 1406620.63 frames.], batch size: 28, lr: 1.79e-04 2022-05-29 01:24:04,910 INFO [train.py:842] (0/4) Epoch 31, batch 1000, loss[loss=0.1708, simple_loss=0.263, pruned_loss=0.03928, over 7346.00 frames.], tot_loss[loss=0.176, simple_loss=0.2642, pruned_loss=0.04388, over 1408854.06 frames.], batch size: 22, lr: 1.79e-04 2022-05-29 01:24:44,398 INFO [train.py:842] (0/4) Epoch 31, batch 1050, loss[loss=0.2316, simple_loss=0.3209, pruned_loss=0.0711, over 7085.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2647, pruned_loss=0.04416, over 1411591.32 frames.], batch size: 28, lr: 1.79e-04 2022-05-29 01:25:23,481 INFO [train.py:842] (0/4) Epoch 31, batch 1100, loss[loss=0.161, simple_loss=0.2498, pruned_loss=0.03606, over 7064.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2639, 
pruned_loss=0.04371, over 1415718.47 frames.], batch size: 18, lr: 1.79e-04 2022-05-29 01:26:03,300 INFO [train.py:842] (0/4) Epoch 31, batch 1150, loss[loss=0.1713, simple_loss=0.2598, pruned_loss=0.04139, over 7059.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2634, pruned_loss=0.04336, over 1417428.65 frames.], batch size: 18, lr: 1.79e-04 2022-05-29 01:26:42,661 INFO [train.py:842] (0/4) Epoch 31, batch 1200, loss[loss=0.1724, simple_loss=0.2618, pruned_loss=0.04156, over 7194.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2623, pruned_loss=0.04232, over 1418953.25 frames.], batch size: 22, lr: 1.78e-04 2022-05-29 01:27:22,216 INFO [train.py:842] (0/4) Epoch 31, batch 1250, loss[loss=0.1712, simple_loss=0.2666, pruned_loss=0.03786, over 7416.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2622, pruned_loss=0.04229, over 1418116.01 frames.], batch size: 18, lr: 1.78e-04 2022-05-29 01:28:01,505 INFO [train.py:842] (0/4) Epoch 31, batch 1300, loss[loss=0.1877, simple_loss=0.28, pruned_loss=0.0477, over 7170.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2624, pruned_loss=0.04223, over 1417730.06 frames.], batch size: 26, lr: 1.78e-04 2022-05-29 01:28:40,939 INFO [train.py:842] (0/4) Epoch 31, batch 1350, loss[loss=0.1719, simple_loss=0.2554, pruned_loss=0.04422, over 7129.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2639, pruned_loss=0.04287, over 1415525.65 frames.], batch size: 17, lr: 1.78e-04 2022-05-29 01:29:20,123 INFO [train.py:842] (0/4) Epoch 31, batch 1400, loss[loss=0.1855, simple_loss=0.277, pruned_loss=0.04703, over 7334.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2641, pruned_loss=0.04272, over 1419494.69 frames.], batch size: 22, lr: 1.78e-04 2022-05-29 01:29:59,793 INFO [train.py:842] (0/4) Epoch 31, batch 1450, loss[loss=0.1608, simple_loss=0.2464, pruned_loss=0.03758, over 7145.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2636, pruned_loss=0.0423, over 1420557.41 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 01:30:38,797 INFO [train.py:842] (0/4) Epoch 31, batch 1500, loss[loss=0.1872, simple_loss=0.2708, pruned_loss=0.05183, over 7302.00 frames.], tot_loss[loss=0.175, simple_loss=0.2646, pruned_loss=0.04275, over 1426172.98 frames.], batch size: 25, lr: 1.78e-04 2022-05-29 01:31:18,558 INFO [train.py:842] (0/4) Epoch 31, batch 1550, loss[loss=0.2182, simple_loss=0.3122, pruned_loss=0.06208, over 7289.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2635, pruned_loss=0.0424, over 1427317.94 frames.], batch size: 25, lr: 1.78e-04 2022-05-29 01:31:57,807 INFO [train.py:842] (0/4) Epoch 31, batch 1600, loss[loss=0.1651, simple_loss=0.2532, pruned_loss=0.03847, over 7272.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2641, pruned_loss=0.04284, over 1428831.00 frames.], batch size: 19, lr: 1.78e-04 2022-05-29 01:32:37,045 INFO [train.py:842] (0/4) Epoch 31, batch 1650, loss[loss=0.1647, simple_loss=0.2653, pruned_loss=0.03206, over 7098.00 frames.], tot_loss[loss=0.176, simple_loss=0.2652, pruned_loss=0.04345, over 1428922.20 frames.], batch size: 21, lr: 1.78e-04 2022-05-29 01:33:16,435 INFO [train.py:842] (0/4) Epoch 31, batch 1700, loss[loss=0.169, simple_loss=0.2658, pruned_loss=0.0361, over 7332.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2638, pruned_loss=0.04345, over 1425511.54 frames.], batch size: 24, lr: 1.78e-04 2022-05-29 01:33:55,898 INFO [train.py:842] (0/4) Epoch 31, batch 1750, loss[loss=0.1827, simple_loss=0.2661, pruned_loss=0.04966, over 7398.00 frames.], tot_loss[loss=0.1751, simple_loss=0.264, pruned_loss=0.04313, over 
1427734.19 frames.], batch size: 23, lr: 1.78e-04 2022-05-29 01:34:34,912 INFO [train.py:842] (0/4) Epoch 31, batch 1800, loss[loss=0.1485, simple_loss=0.2341, pruned_loss=0.03144, over 7437.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2628, pruned_loss=0.0428, over 1424692.36 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 01:35:14,393 INFO [train.py:842] (0/4) Epoch 31, batch 1850, loss[loss=0.138, simple_loss=0.2207, pruned_loss=0.02764, over 7151.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2619, pruned_loss=0.04253, over 1422966.40 frames.], batch size: 17, lr: 1.78e-04 2022-05-29 01:35:53,690 INFO [train.py:842] (0/4) Epoch 31, batch 1900, loss[loss=0.1843, simple_loss=0.2722, pruned_loss=0.04819, over 7328.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2619, pruned_loss=0.04225, over 1426157.03 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 01:36:33,287 INFO [train.py:842] (0/4) Epoch 31, batch 1950, loss[loss=0.1848, simple_loss=0.271, pruned_loss=0.04936, over 7382.00 frames.], tot_loss[loss=0.1743, simple_loss=0.263, pruned_loss=0.04283, over 1426227.85 frames.], batch size: 23, lr: 1.78e-04 2022-05-29 01:37:12,631 INFO [train.py:842] (0/4) Epoch 31, batch 2000, loss[loss=0.1503, simple_loss=0.2439, pruned_loss=0.02837, over 7161.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2617, pruned_loss=0.04252, over 1427459.17 frames.], batch size: 18, lr: 1.78e-04 2022-05-29 01:37:52,325 INFO [train.py:842] (0/4) Epoch 31, batch 2050, loss[loss=0.172, simple_loss=0.2587, pruned_loss=0.04262, over 7196.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2605, pruned_loss=0.0419, over 1424702.59 frames.], batch size: 22, lr: 1.78e-04 2022-05-29 01:38:31,294 INFO [train.py:842] (0/4) Epoch 31, batch 2100, loss[loss=0.16, simple_loss=0.2621, pruned_loss=0.02897, over 7165.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2621, pruned_loss=0.04272, over 1422733.20 frames.], batch size: 19, lr: 1.78e-04 2022-05-29 01:39:10,897 INFO [train.py:842] (0/4) Epoch 31, batch 2150, loss[loss=0.1639, simple_loss=0.2614, pruned_loss=0.0332, over 7168.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2615, pruned_loss=0.04236, over 1426931.66 frames.], batch size: 18, lr: 1.78e-04 2022-05-29 01:39:50,103 INFO [train.py:842] (0/4) Epoch 31, batch 2200, loss[loss=0.2129, simple_loss=0.3029, pruned_loss=0.06142, over 7056.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2618, pruned_loss=0.04206, over 1428368.18 frames.], batch size: 18, lr: 1.78e-04 2022-05-29 01:40:29,733 INFO [train.py:842] (0/4) Epoch 31, batch 2250, loss[loss=0.189, simple_loss=0.284, pruned_loss=0.04703, over 7210.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2626, pruned_loss=0.04209, over 1427750.51 frames.], batch size: 23, lr: 1.78e-04 2022-05-29 01:41:09,312 INFO [train.py:842] (0/4) Epoch 31, batch 2300, loss[loss=0.1695, simple_loss=0.257, pruned_loss=0.04099, over 7258.00 frames.], tot_loss[loss=0.173, simple_loss=0.2619, pruned_loss=0.04204, over 1430011.05 frames.], batch size: 19, lr: 1.78e-04 2022-05-29 01:41:48,916 INFO [train.py:842] (0/4) Epoch 31, batch 2350, loss[loss=0.1819, simple_loss=0.2557, pruned_loss=0.05403, over 7073.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2607, pruned_loss=0.04184, over 1431085.88 frames.], batch size: 18, lr: 1.78e-04 2022-05-29 01:42:27,927 INFO [train.py:842] (0/4) Epoch 31, batch 2400, loss[loss=0.1649, simple_loss=0.2593, pruned_loss=0.03524, over 7226.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2615, pruned_loss=0.042, over 1428991.69 frames.], batch size: 
21, lr: 1.78e-04 2022-05-29 01:43:07,263 INFO [train.py:842] (0/4) Epoch 31, batch 2450, loss[loss=0.1874, simple_loss=0.278, pruned_loss=0.04836, over 7229.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2626, pruned_loss=0.04242, over 1425570.77 frames.], batch size: 21, lr: 1.78e-04 2022-05-29 01:43:46,448 INFO [train.py:842] (0/4) Epoch 31, batch 2500, loss[loss=0.1616, simple_loss=0.2594, pruned_loss=0.03194, over 7336.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2607, pruned_loss=0.04118, over 1427940.43 frames.], batch size: 22, lr: 1.78e-04 2022-05-29 01:44:37,197 INFO [train.py:842] (0/4) Epoch 31, batch 2550, loss[loss=0.1839, simple_loss=0.2801, pruned_loss=0.04383, over 7193.00 frames.], tot_loss[loss=0.171, simple_loss=0.2601, pruned_loss=0.04091, over 1430171.60 frames.], batch size: 23, lr: 1.78e-04 2022-05-29 01:45:16,363 INFO [train.py:842] (0/4) Epoch 31, batch 2600, loss[loss=0.1583, simple_loss=0.2444, pruned_loss=0.03609, over 7413.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2602, pruned_loss=0.04118, over 1429030.18 frames.], batch size: 18, lr: 1.78e-04 2022-05-29 01:45:55,625 INFO [train.py:842] (0/4) Epoch 31, batch 2650, loss[loss=0.178, simple_loss=0.272, pruned_loss=0.04203, over 7414.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2622, pruned_loss=0.04234, over 1425361.67 frames.], batch size: 21, lr: 1.78e-04 2022-05-29 01:46:34,536 INFO [train.py:842] (0/4) Epoch 31, batch 2700, loss[loss=0.156, simple_loss=0.2561, pruned_loss=0.02796, over 7272.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2635, pruned_loss=0.04272, over 1419274.46 frames.], batch size: 25, lr: 1.78e-04 2022-05-29 01:47:14,157 INFO [train.py:842] (0/4) Epoch 31, batch 2750, loss[loss=0.1799, simple_loss=0.2704, pruned_loss=0.04472, over 7144.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2635, pruned_loss=0.04283, over 1419250.03 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 01:47:53,460 INFO [train.py:842] (0/4) Epoch 31, batch 2800, loss[loss=0.1874, simple_loss=0.2787, pruned_loss=0.04807, over 7156.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2631, pruned_loss=0.04257, over 1421640.66 frames.], batch size: 18, lr: 1.78e-04 2022-05-29 01:48:43,776 INFO [train.py:842] (0/4) Epoch 31, batch 2850, loss[loss=0.2157, simple_loss=0.2862, pruned_loss=0.07254, over 7187.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2632, pruned_loss=0.04279, over 1419755.82 frames.], batch size: 22, lr: 1.78e-04 2022-05-29 01:49:23,034 INFO [train.py:842] (0/4) Epoch 31, batch 2900, loss[loss=0.1749, simple_loss=0.2706, pruned_loss=0.03958, over 7121.00 frames.], tot_loss[loss=0.175, simple_loss=0.2639, pruned_loss=0.04306, over 1424290.34 frames.], batch size: 21, lr: 1.78e-04 2022-05-29 01:50:02,442 INFO [train.py:842] (0/4) Epoch 31, batch 2950, loss[loss=0.177, simple_loss=0.2675, pruned_loss=0.04323, over 7248.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2636, pruned_loss=0.04291, over 1423146.71 frames.], batch size: 19, lr: 1.78e-04 2022-05-29 01:50:52,141 INFO [train.py:842] (0/4) Epoch 31, batch 3000, loss[loss=0.1509, simple_loss=0.2383, pruned_loss=0.03174, over 7331.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2634, pruned_loss=0.04318, over 1423563.57 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 01:50:52,142 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 01:51:02,034 INFO [train.py:871] (0/4) Epoch 31, validation: loss=0.165, simple_loss=0.262, pruned_loss=0.03402, over 868885.00 frames. 
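Every per-batch record above follows the same template (timestamp, "Epoch E, batch B", a per-batch loss[...] block, a running tot_loss[...] block, the batch size and the current lr), so the training curve can be reconstructed from the text alone. A minimal parsing sketch, assuming the log has been written to a file; the path below is a placeholder:

import re

# Matches the per-batch records shown above and pulls out the epoch, the
# batch index, the running tot_loss value and the current learning rate.
BATCH_RE = re.compile(
    r"Epoch (\d+), batch (\d+), .*?"
    r"tot_loss\[loss=([\d.]+),.*?\], batch size: \d+, lr: ([\d.e-]+)"
)

def parse_log(path):
    """Yield (epoch, batch, tot_loss, lr) from an icefall-style training log."""
    with open(path) as f:
        for line in f:
            # finditer copes with several records ending up on one line
            for m in BATCH_RE.finditer(line):
                yield (int(m.group(1)), int(m.group(2)),
                       float(m.group(3)), float(m.group(4)))

# Placeholder path; point this at the saved log file.
for epoch, batch, tot_loss, lr in parse_log("exp/train-log.txt"):
    if batch % 1000 == 0:
        print(epoch, batch, tot_loss, lr)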
2022-05-29 01:51:41,749 INFO [train.py:842] (0/4) Epoch 31, batch 3050, loss[loss=0.1567, simple_loss=0.237, pruned_loss=0.03824, over 7001.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2631, pruned_loss=0.04292, over 1423040.57 frames.], batch size: 16, lr: 1.78e-04 2022-05-29 01:52:21,213 INFO [train.py:842] (0/4) Epoch 31, batch 3100, loss[loss=0.1694, simple_loss=0.259, pruned_loss=0.03995, over 7298.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2614, pruned_loss=0.04224, over 1426200.64 frames.], batch size: 25, lr: 1.78e-04 2022-05-29 01:53:00,799 INFO [train.py:842] (0/4) Epoch 31, batch 3150, loss[loss=0.1416, simple_loss=0.2244, pruned_loss=0.02943, over 7012.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2606, pruned_loss=0.04203, over 1425915.84 frames.], batch size: 16, lr: 1.78e-04 2022-05-29 01:53:39,943 INFO [train.py:842] (0/4) Epoch 31, batch 3200, loss[loss=0.2137, simple_loss=0.2991, pruned_loss=0.06417, over 7205.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2622, pruned_loss=0.04298, over 1417203.96 frames.], batch size: 23, lr: 1.78e-04 2022-05-29 01:54:19,548 INFO [train.py:842] (0/4) Epoch 31, batch 3250, loss[loss=0.1614, simple_loss=0.2637, pruned_loss=0.02957, over 7145.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2641, pruned_loss=0.04402, over 1416720.13 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 01:54:58,958 INFO [train.py:842] (0/4) Epoch 31, batch 3300, loss[loss=0.1469, simple_loss=0.2232, pruned_loss=0.03531, over 7262.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2633, pruned_loss=0.0438, over 1422971.31 frames.], batch size: 17, lr: 1.78e-04 2022-05-29 01:55:38,530 INFO [train.py:842] (0/4) Epoch 31, batch 3350, loss[loss=0.1617, simple_loss=0.2564, pruned_loss=0.03348, over 7219.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2625, pruned_loss=0.04315, over 1422435.83 frames.], batch size: 21, lr: 1.78e-04 2022-05-29 01:56:17,506 INFO [train.py:842] (0/4) Epoch 31, batch 3400, loss[loss=0.1921, simple_loss=0.2879, pruned_loss=0.04819, over 7291.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2614, pruned_loss=0.04254, over 1421571.40 frames.], batch size: 25, lr: 1.78e-04 2022-05-29 01:56:57,132 INFO [train.py:842] (0/4) Epoch 31, batch 3450, loss[loss=0.1994, simple_loss=0.289, pruned_loss=0.05489, over 6516.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2621, pruned_loss=0.04283, over 1425954.83 frames.], batch size: 38, lr: 1.78e-04 2022-05-29 01:57:36,434 INFO [train.py:842] (0/4) Epoch 31, batch 3500, loss[loss=0.1971, simple_loss=0.2916, pruned_loss=0.05132, over 7368.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2615, pruned_loss=0.04252, over 1426953.27 frames.], batch size: 23, lr: 1.78e-04 2022-05-29 01:58:15,851 INFO [train.py:842] (0/4) Epoch 31, batch 3550, loss[loss=0.1613, simple_loss=0.2506, pruned_loss=0.03601, over 7421.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2623, pruned_loss=0.04316, over 1428355.66 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 01:58:54,902 INFO [train.py:842] (0/4) Epoch 31, batch 3600, loss[loss=0.1644, simple_loss=0.2592, pruned_loss=0.03481, over 7312.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2633, pruned_loss=0.04342, over 1423173.05 frames.], batch size: 24, lr: 1.78e-04 2022-05-29 01:59:34,568 INFO [train.py:842] (0/4) Epoch 31, batch 3650, loss[loss=0.1338, simple_loss=0.2168, pruned_loss=0.0254, over 7137.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2626, pruned_loss=0.04281, over 1422149.21 frames.], batch size: 17, lr: 1.78e-04 2022-05-29 02:00:14,110 
INFO [train.py:842] (0/4) Epoch 31, batch 3700, loss[loss=0.1502, simple_loss=0.2319, pruned_loss=0.0342, over 7271.00 frames.], tot_loss[loss=0.1727, simple_loss=0.261, pruned_loss=0.04219, over 1424742.54 frames.], batch size: 17, lr: 1.78e-04 2022-05-29 02:00:53,413 INFO [train.py:842] (0/4) Epoch 31, batch 3750, loss[loss=0.1679, simple_loss=0.2527, pruned_loss=0.04155, over 7264.00 frames.], tot_loss[loss=0.173, simple_loss=0.2615, pruned_loss=0.04225, over 1422407.46 frames.], batch size: 19, lr: 1.78e-04 2022-05-29 02:01:33,033 INFO [train.py:842] (0/4) Epoch 31, batch 3800, loss[loss=0.1926, simple_loss=0.2725, pruned_loss=0.05642, over 7362.00 frames.], tot_loss[loss=0.175, simple_loss=0.2633, pruned_loss=0.04334, over 1425453.61 frames.], batch size: 23, lr: 1.78e-04 2022-05-29 02:02:12,542 INFO [train.py:842] (0/4) Epoch 31, batch 3850, loss[loss=0.1676, simple_loss=0.2416, pruned_loss=0.04679, over 6984.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2642, pruned_loss=0.04378, over 1424690.82 frames.], batch size: 16, lr: 1.78e-04 2022-05-29 02:02:51,828 INFO [train.py:842] (0/4) Epoch 31, batch 3900, loss[loss=0.1595, simple_loss=0.2652, pruned_loss=0.02686, over 7344.00 frames.], tot_loss[loss=0.1758, simple_loss=0.264, pruned_loss=0.04375, over 1428735.11 frames.], batch size: 22, lr: 1.78e-04 2022-05-29 02:03:31,393 INFO [train.py:842] (0/4) Epoch 31, batch 3950, loss[loss=0.1734, simple_loss=0.2575, pruned_loss=0.04467, over 7297.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2643, pruned_loss=0.04373, over 1429205.85 frames.], batch size: 18, lr: 1.78e-04 2022-05-29 02:04:10,616 INFO [train.py:842] (0/4) Epoch 31, batch 4000, loss[loss=0.1687, simple_loss=0.2601, pruned_loss=0.03867, over 7420.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2639, pruned_loss=0.04333, over 1430997.24 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 02:04:50,019 INFO [train.py:842] (0/4) Epoch 31, batch 4050, loss[loss=0.1356, simple_loss=0.2178, pruned_loss=0.02673, over 6999.00 frames.], tot_loss[loss=0.176, simple_loss=0.2644, pruned_loss=0.04376, over 1427302.31 frames.], batch size: 16, lr: 1.78e-04 2022-05-29 02:05:29,184 INFO [train.py:842] (0/4) Epoch 31, batch 4100, loss[loss=0.1802, simple_loss=0.2564, pruned_loss=0.05196, over 7410.00 frames.], tot_loss[loss=0.1769, simple_loss=0.2656, pruned_loss=0.04411, over 1427115.35 frames.], batch size: 18, lr: 1.78e-04 2022-05-29 02:06:08,805 INFO [train.py:842] (0/4) Epoch 31, batch 4150, loss[loss=0.1749, simple_loss=0.2701, pruned_loss=0.03983, over 7145.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2654, pruned_loss=0.04377, over 1429725.24 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 02:06:47,967 INFO [train.py:842] (0/4) Epoch 31, batch 4200, loss[loss=0.197, simple_loss=0.2742, pruned_loss=0.05994, over 7430.00 frames.], tot_loss[loss=0.176, simple_loss=0.2648, pruned_loss=0.04358, over 1430112.36 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 02:07:23,083 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-280000.pt 2022-05-29 02:07:30,388 INFO [train.py:842] (0/4) Epoch 31, batch 4250, loss[loss=0.1901, simple_loss=0.281, pruned_loss=0.04965, over 7323.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2638, pruned_loss=0.04326, over 1431651.19 frames.], batch size: 22, lr: 1.78e-04 2022-05-29 02:08:09,301 INFO [train.py:842] (0/4) Epoch 31, batch 4300, loss[loss=0.1872, simple_loss=0.2758, pruned_loss=0.04926, over 7317.00 frames.], tot_loss[loss=0.1748, 
simple_loss=0.2639, pruned_loss=0.04286, over 1430648.98 frames.], batch size: 20, lr: 1.78e-04 2022-05-29 02:08:48,977 INFO [train.py:842] (0/4) Epoch 31, batch 4350, loss[loss=0.1933, simple_loss=0.2758, pruned_loss=0.05536, over 7064.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2623, pruned_loss=0.0421, over 1430940.09 frames.], batch size: 18, lr: 1.77e-04 2022-05-29 02:09:28,409 INFO [train.py:842] (0/4) Epoch 31, batch 4400, loss[loss=0.1772, simple_loss=0.2668, pruned_loss=0.04384, over 7052.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2624, pruned_loss=0.04247, over 1433113.40 frames.], batch size: 28, lr: 1.77e-04 2022-05-29 02:10:08,104 INFO [train.py:842] (0/4) Epoch 31, batch 4450, loss[loss=0.2038, simple_loss=0.2842, pruned_loss=0.06167, over 7164.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2625, pruned_loss=0.04253, over 1433688.58 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:10:47,318 INFO [train.py:842] (0/4) Epoch 31, batch 4500, loss[loss=0.1594, simple_loss=0.2531, pruned_loss=0.03288, over 7208.00 frames.], tot_loss[loss=0.174, simple_loss=0.2627, pruned_loss=0.04262, over 1430070.97 frames.], batch size: 22, lr: 1.77e-04 2022-05-29 02:11:26,823 INFO [train.py:842] (0/4) Epoch 31, batch 4550, loss[loss=0.1802, simple_loss=0.2669, pruned_loss=0.04675, over 4784.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2634, pruned_loss=0.04292, over 1421007.57 frames.], batch size: 52, lr: 1.77e-04 2022-05-29 02:12:06,222 INFO [train.py:842] (0/4) Epoch 31, batch 4600, loss[loss=0.1562, simple_loss=0.2598, pruned_loss=0.02626, over 7152.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2626, pruned_loss=0.04275, over 1424064.00 frames.], batch size: 20, lr: 1.77e-04 2022-05-29 02:12:45,678 INFO [train.py:842] (0/4) Epoch 31, batch 4650, loss[loss=0.1868, simple_loss=0.2756, pruned_loss=0.04898, over 7429.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2633, pruned_loss=0.0429, over 1423993.96 frames.], batch size: 20, lr: 1.77e-04 2022-05-29 02:13:24,746 INFO [train.py:842] (0/4) Epoch 31, batch 4700, loss[loss=0.1675, simple_loss=0.2513, pruned_loss=0.04186, over 7263.00 frames.], tot_loss[loss=0.1747, simple_loss=0.263, pruned_loss=0.04319, over 1422927.03 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:14:04,512 INFO [train.py:842] (0/4) Epoch 31, batch 4750, loss[loss=0.1758, simple_loss=0.2618, pruned_loss=0.04491, over 7137.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2627, pruned_loss=0.04334, over 1425953.33 frames.], batch size: 17, lr: 1.77e-04 2022-05-29 02:14:43,773 INFO [train.py:842] (0/4) Epoch 31, batch 4800, loss[loss=0.1996, simple_loss=0.299, pruned_loss=0.05013, over 7159.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2633, pruned_loss=0.0432, over 1426119.92 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:15:23,360 INFO [train.py:842] (0/4) Epoch 31, batch 4850, loss[loss=0.1755, simple_loss=0.2687, pruned_loss=0.04115, over 6409.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2631, pruned_loss=0.04318, over 1420557.40 frames.], batch size: 38, lr: 1.77e-04 2022-05-29 02:16:02,469 INFO [train.py:842] (0/4) Epoch 31, batch 4900, loss[loss=0.1599, simple_loss=0.2567, pruned_loss=0.03155, over 7429.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2642, pruned_loss=0.04344, over 1421788.45 frames.], batch size: 20, lr: 1.77e-04 2022-05-29 02:16:42,075 INFO [train.py:842] (0/4) Epoch 31, batch 4950, loss[loss=0.1834, simple_loss=0.2747, pruned_loss=0.04605, over 7074.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2638, 
pruned_loss=0.04339, over 1418859.98 frames.], batch size: 18, lr: 1.77e-04 2022-05-29 02:17:21,106 INFO [train.py:842] (0/4) Epoch 31, batch 5000, loss[loss=0.1497, simple_loss=0.2478, pruned_loss=0.02575, over 7067.00 frames.], tot_loss[loss=0.176, simple_loss=0.2645, pruned_loss=0.04381, over 1417334.82 frames.], batch size: 18, lr: 1.77e-04 2022-05-29 02:18:00,699 INFO [train.py:842] (0/4) Epoch 31, batch 5050, loss[loss=0.1532, simple_loss=0.2505, pruned_loss=0.02795, over 7172.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2647, pruned_loss=0.04357, over 1417297.17 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:18:39,762 INFO [train.py:842] (0/4) Epoch 31, batch 5100, loss[loss=0.2144, simple_loss=0.291, pruned_loss=0.06887, over 7417.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2648, pruned_loss=0.04369, over 1422419.05 frames.], batch size: 18, lr: 1.77e-04 2022-05-29 02:19:19,591 INFO [train.py:842] (0/4) Epoch 31, batch 5150, loss[loss=0.1466, simple_loss=0.2261, pruned_loss=0.03359, over 7138.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2646, pruned_loss=0.04356, over 1424976.20 frames.], batch size: 17, lr: 1.77e-04 2022-05-29 02:19:59,017 INFO [train.py:842] (0/4) Epoch 31, batch 5200, loss[loss=0.192, simple_loss=0.277, pruned_loss=0.05348, over 7251.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2643, pruned_loss=0.0437, over 1424185.02 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:20:38,725 INFO [train.py:842] (0/4) Epoch 31, batch 5250, loss[loss=0.177, simple_loss=0.2665, pruned_loss=0.04373, over 7104.00 frames.], tot_loss[loss=0.176, simple_loss=0.2641, pruned_loss=0.04393, over 1422088.28 frames.], batch size: 28, lr: 1.77e-04 2022-05-29 02:21:18,155 INFO [train.py:842] (0/4) Epoch 31, batch 5300, loss[loss=0.1841, simple_loss=0.2639, pruned_loss=0.05218, over 7377.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2622, pruned_loss=0.04282, over 1422916.93 frames.], batch size: 23, lr: 1.77e-04 2022-05-29 02:21:57,668 INFO [train.py:842] (0/4) Epoch 31, batch 5350, loss[loss=0.2141, simple_loss=0.3046, pruned_loss=0.0618, over 4941.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2627, pruned_loss=0.04291, over 1418536.94 frames.], batch size: 52, lr: 1.77e-04 2022-05-29 02:22:36,637 INFO [train.py:842] (0/4) Epoch 31, batch 5400, loss[loss=0.1838, simple_loss=0.2754, pruned_loss=0.0461, over 7320.00 frames.], tot_loss[loss=0.176, simple_loss=0.2642, pruned_loss=0.04394, over 1420212.70 frames.], batch size: 21, lr: 1.77e-04 2022-05-29 02:23:16,007 INFO [train.py:842] (0/4) Epoch 31, batch 5450, loss[loss=0.1499, simple_loss=0.237, pruned_loss=0.03139, over 6996.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2644, pruned_loss=0.04398, over 1416314.47 frames.], batch size: 16, lr: 1.77e-04 2022-05-29 02:23:55,247 INFO [train.py:842] (0/4) Epoch 31, batch 5500, loss[loss=0.1616, simple_loss=0.2485, pruned_loss=0.03736, over 7331.00 frames.], tot_loss[loss=0.1766, simple_loss=0.265, pruned_loss=0.04414, over 1418199.77 frames.], batch size: 22, lr: 1.77e-04 2022-05-29 02:24:34,757 INFO [train.py:842] (0/4) Epoch 31, batch 5550, loss[loss=0.1645, simple_loss=0.2566, pruned_loss=0.0362, over 7413.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2648, pruned_loss=0.044, over 1419559.44 frames.], batch size: 21, lr: 1.77e-04 2022-05-29 02:25:13,908 INFO [train.py:842] (0/4) Epoch 31, batch 5600, loss[loss=0.1559, simple_loss=0.2367, pruned_loss=0.03752, over 6983.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2645, pruned_loss=0.04407, over 
1419646.28 frames.], batch size: 16, lr: 1.77e-04 2022-05-29 02:25:53,458 INFO [train.py:842] (0/4) Epoch 31, batch 5650, loss[loss=0.1401, simple_loss=0.2311, pruned_loss=0.02453, over 7404.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2643, pruned_loss=0.04379, over 1420362.89 frames.], batch size: 18, lr: 1.77e-04 2022-05-29 02:26:32,541 INFO [train.py:842] (0/4) Epoch 31, batch 5700, loss[loss=0.1794, simple_loss=0.2701, pruned_loss=0.04434, over 7330.00 frames.], tot_loss[loss=0.1775, simple_loss=0.2656, pruned_loss=0.04467, over 1414909.03 frames.], batch size: 22, lr: 1.77e-04 2022-05-29 02:27:12,152 INFO [train.py:842] (0/4) Epoch 31, batch 5750, loss[loss=0.2129, simple_loss=0.3114, pruned_loss=0.05719, over 7124.00 frames.], tot_loss[loss=0.177, simple_loss=0.2654, pruned_loss=0.04432, over 1419782.62 frames.], batch size: 21, lr: 1.77e-04 2022-05-29 02:27:51,394 INFO [train.py:842] (0/4) Epoch 31, batch 5800, loss[loss=0.1893, simple_loss=0.2781, pruned_loss=0.05028, over 7263.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2662, pruned_loss=0.04461, over 1419682.70 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:28:31,143 INFO [train.py:842] (0/4) Epoch 31, batch 5850, loss[loss=0.1376, simple_loss=0.2188, pruned_loss=0.02822, over 7424.00 frames.], tot_loss[loss=0.176, simple_loss=0.2647, pruned_loss=0.04369, over 1423273.33 frames.], batch size: 18, lr: 1.77e-04 2022-05-29 02:29:10,375 INFO [train.py:842] (0/4) Epoch 31, batch 5900, loss[loss=0.1585, simple_loss=0.2485, pruned_loss=0.03424, over 7328.00 frames.], tot_loss[loss=0.1753, simple_loss=0.264, pruned_loss=0.04328, over 1424692.07 frames.], batch size: 22, lr: 1.77e-04 2022-05-29 02:29:50,382 INFO [train.py:842] (0/4) Epoch 31, batch 5950, loss[loss=0.1516, simple_loss=0.2433, pruned_loss=0.02994, over 7158.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2624, pruned_loss=0.04268, over 1429433.15 frames.], batch size: 18, lr: 1.77e-04 2022-05-29 02:30:29,439 INFO [train.py:842] (0/4) Epoch 31, batch 6000, loss[loss=0.1594, simple_loss=0.2515, pruned_loss=0.03362, over 7387.00 frames.], tot_loss[loss=0.1744, simple_loss=0.263, pruned_loss=0.04293, over 1427448.83 frames.], batch size: 23, lr: 1.77e-04 2022-05-29 02:30:29,441 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 02:30:38,842 INFO [train.py:871] (0/4) Epoch 31, validation: loss=0.1644, simple_loss=0.2618, pruned_loss=0.0335, over 868885.00 frames. 
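In each record the first loss[...] block covers only the current batch (a few thousand frames), while tot_loss[...] is reported over roughly 1.4 million frames, i.e. a decayed running statistic across recent batches rather than a full-epoch average. The sketch below reproduces that behaviour with a frame-weighted running average; it is an illustration only, and the decay of 0.995 is simply chosen so that, at about 7,000 frames per batch, the steady-state frame count lands near the 1.4e6 seen in the log (7000 / (1 - 0.995)), not a value read from train.py.

# Illustration of a frame-weighted running loss like the tot_loss[...] entries:
# each batch contributes loss * frames, and older batches decay away.
class RunningLoss:
    def __init__(self, decay=0.995):
        self.decay = decay
        self.loss_sum = 0.0   # decayed sum of (batch_loss * batch_frames)
        self.frames = 0.0     # decayed sum of batch_frames

    def update(self, batch_loss, batch_frames):
        self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames
        return self.loss_sum / self.frames, self.frames

tracker = RunningLoss()
for _ in range(2000):
    avg, frames = tracker.update(batch_loss=0.17, batch_frames=7000.0)
print(f"avg={avg:.4f} over {frames:.0f} frames")   # ~0.1700 over ~1.4e6 frames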
2022-05-29 02:31:18,332 INFO [train.py:842] (0/4) Epoch 31, batch 6050, loss[loss=0.1544, simple_loss=0.2373, pruned_loss=0.03576, over 7428.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2633, pruned_loss=0.04297, over 1425519.73 frames.], batch size: 20, lr: 1.77e-04 2022-05-29 02:31:57,782 INFO [train.py:842] (0/4) Epoch 31, batch 6100, loss[loss=0.16, simple_loss=0.2389, pruned_loss=0.04061, over 7361.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2626, pruned_loss=0.04283, over 1430080.58 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:32:37,370 INFO [train.py:842] (0/4) Epoch 31, batch 6150, loss[loss=0.1789, simple_loss=0.2682, pruned_loss=0.04479, over 7154.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2625, pruned_loss=0.04259, over 1428368.26 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:33:16,227 INFO [train.py:842] (0/4) Epoch 31, batch 6200, loss[loss=0.1915, simple_loss=0.2859, pruned_loss=0.04855, over 7144.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2638, pruned_loss=0.04273, over 1421437.14 frames.], batch size: 20, lr: 1.77e-04 2022-05-29 02:33:55,498 INFO [train.py:842] (0/4) Epoch 31, batch 6250, loss[loss=0.2127, simple_loss=0.2929, pruned_loss=0.06625, over 6779.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2648, pruned_loss=0.04329, over 1422460.98 frames.], batch size: 31, lr: 1.77e-04 2022-05-29 02:34:34,691 INFO [train.py:842] (0/4) Epoch 31, batch 6300, loss[loss=0.1685, simple_loss=0.2545, pruned_loss=0.0413, over 7347.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2652, pruned_loss=0.04353, over 1420773.50 frames.], batch size: 22, lr: 1.77e-04 2022-05-29 02:35:14,271 INFO [train.py:842] (0/4) Epoch 31, batch 6350, loss[loss=0.1831, simple_loss=0.2657, pruned_loss=0.05022, over 7161.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2652, pruned_loss=0.04374, over 1425926.69 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:35:53,423 INFO [train.py:842] (0/4) Epoch 31, batch 6400, loss[loss=0.2051, simple_loss=0.2917, pruned_loss=0.05929, over 6181.00 frames.], tot_loss[loss=0.178, simple_loss=0.2667, pruned_loss=0.04466, over 1424463.30 frames.], batch size: 37, lr: 1.77e-04 2022-05-29 02:36:33,075 INFO [train.py:842] (0/4) Epoch 31, batch 6450, loss[loss=0.2168, simple_loss=0.2969, pruned_loss=0.0683, over 7443.00 frames.], tot_loss[loss=0.177, simple_loss=0.2655, pruned_loss=0.04423, over 1421049.81 frames.], batch size: 20, lr: 1.77e-04 2022-05-29 02:37:12,496 INFO [train.py:842] (0/4) Epoch 31, batch 6500, loss[loss=0.1663, simple_loss=0.2412, pruned_loss=0.04572, over 7249.00 frames.], tot_loss[loss=0.175, simple_loss=0.2637, pruned_loss=0.04315, over 1427102.78 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:37:52,284 INFO [train.py:842] (0/4) Epoch 31, batch 6550, loss[loss=0.1645, simple_loss=0.252, pruned_loss=0.03848, over 7007.00 frames.], tot_loss[loss=0.1742, simple_loss=0.263, pruned_loss=0.0427, over 1423512.57 frames.], batch size: 16, lr: 1.77e-04 2022-05-29 02:38:31,463 INFO [train.py:842] (0/4) Epoch 31, batch 6600, loss[loss=0.1708, simple_loss=0.274, pruned_loss=0.0338, over 7204.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2635, pruned_loss=0.04298, over 1423029.49 frames.], batch size: 23, lr: 1.77e-04 2022-05-29 02:39:11,134 INFO [train.py:842] (0/4) Epoch 31, batch 6650, loss[loss=0.1645, simple_loss=0.2588, pruned_loss=0.03516, over 7428.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2629, pruned_loss=0.04241, over 1427305.78 frames.], batch size: 20, lr: 1.77e-04 2022-05-29 02:39:50,717 INFO 
[train.py:842] (0/4) Epoch 31, batch 6700, loss[loss=0.2002, simple_loss=0.2852, pruned_loss=0.05765, over 7214.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2619, pruned_loss=0.04218, over 1432473.29 frames.], batch size: 21, lr: 1.77e-04 2022-05-29 02:40:30,501 INFO [train.py:842] (0/4) Epoch 31, batch 6750, loss[loss=0.177, simple_loss=0.2802, pruned_loss=0.03683, over 7290.00 frames.], tot_loss[loss=0.172, simple_loss=0.261, pruned_loss=0.04153, over 1430718.91 frames.], batch size: 24, lr: 1.77e-04 2022-05-29 02:41:09,487 INFO [train.py:842] (0/4) Epoch 31, batch 6800, loss[loss=0.1697, simple_loss=0.2646, pruned_loss=0.03736, over 6283.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2608, pruned_loss=0.04133, over 1430175.87 frames.], batch size: 37, lr: 1.77e-04 2022-05-29 02:41:49,196 INFO [train.py:842] (0/4) Epoch 31, batch 6850, loss[loss=0.1474, simple_loss=0.2332, pruned_loss=0.03074, over 7284.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2613, pruned_loss=0.04167, over 1424953.63 frames.], batch size: 18, lr: 1.77e-04 2022-05-29 02:42:28,717 INFO [train.py:842] (0/4) Epoch 31, batch 6900, loss[loss=0.183, simple_loss=0.2731, pruned_loss=0.04647, over 7118.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2621, pruned_loss=0.0423, over 1424273.56 frames.], batch size: 21, lr: 1.77e-04 2022-05-29 02:43:08,182 INFO [train.py:842] (0/4) Epoch 31, batch 6950, loss[loss=0.1476, simple_loss=0.2435, pruned_loss=0.02587, over 7321.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2637, pruned_loss=0.04302, over 1421480.48 frames.], batch size: 21, lr: 1.77e-04 2022-05-29 02:43:47,648 INFO [train.py:842] (0/4) Epoch 31, batch 7000, loss[loss=0.1656, simple_loss=0.2624, pruned_loss=0.03441, over 7326.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2644, pruned_loss=0.04316, over 1417530.46 frames.], batch size: 20, lr: 1.77e-04 2022-05-29 02:44:27,086 INFO [train.py:842] (0/4) Epoch 31, batch 7050, loss[loss=0.1609, simple_loss=0.2482, pruned_loss=0.03677, over 7255.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2651, pruned_loss=0.04385, over 1406441.06 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:45:06,180 INFO [train.py:842] (0/4) Epoch 31, batch 7100, loss[loss=0.169, simple_loss=0.2709, pruned_loss=0.03354, over 7433.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2649, pruned_loss=0.0438, over 1398605.66 frames.], batch size: 20, lr: 1.77e-04 2022-05-29 02:45:46,083 INFO [train.py:842] (0/4) Epoch 31, batch 7150, loss[loss=0.1607, simple_loss=0.2503, pruned_loss=0.03554, over 7261.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2645, pruned_loss=0.04389, over 1400946.36 frames.], batch size: 19, lr: 1.77e-04 2022-05-29 02:46:25,429 INFO [train.py:842] (0/4) Epoch 31, batch 7200, loss[loss=0.1291, simple_loss=0.2146, pruned_loss=0.02181, over 7236.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2645, pruned_loss=0.0441, over 1399712.21 frames.], batch size: 16, lr: 1.77e-04 2022-05-29 02:47:05,284 INFO [train.py:842] (0/4) Epoch 31, batch 7250, loss[loss=0.1531, simple_loss=0.2382, pruned_loss=0.03402, over 6788.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2644, pruned_loss=0.04409, over 1406651.90 frames.], batch size: 15, lr: 1.77e-04 2022-05-29 02:47:44,430 INFO [train.py:842] (0/4) Epoch 31, batch 7300, loss[loss=0.1709, simple_loss=0.2438, pruned_loss=0.04898, over 7272.00 frames.], tot_loss[loss=0.1759, simple_loss=0.264, pruned_loss=0.04385, over 1408427.64 frames.], batch size: 17, lr: 1.77e-04 2022-05-29 02:48:24,335 INFO [train.py:842] (0/4) Epoch 
31, batch 7350, loss[loss=0.1332, simple_loss=0.2286, pruned_loss=0.01895, over 7265.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2621, pruned_loss=0.04269, over 1412151.10 frames.], batch size: 17, lr: 1.77e-04 2022-05-29 02:49:03,708 INFO [train.py:842] (0/4) Epoch 31, batch 7400, loss[loss=0.1638, simple_loss=0.2536, pruned_loss=0.03697, over 6398.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2633, pruned_loss=0.04358, over 1416904.15 frames.], batch size: 37, lr: 1.77e-04 2022-05-29 02:49:43,375 INFO [train.py:842] (0/4) Epoch 31, batch 7450, loss[loss=0.1563, simple_loss=0.2394, pruned_loss=0.03658, over 6825.00 frames.], tot_loss[loss=0.176, simple_loss=0.2641, pruned_loss=0.04393, over 1417701.34 frames.], batch size: 15, lr: 1.77e-04 2022-05-29 02:50:22,747 INFO [train.py:842] (0/4) Epoch 31, batch 7500, loss[loss=0.1573, simple_loss=0.2442, pruned_loss=0.03524, over 7265.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2632, pruned_loss=0.04357, over 1415500.86 frames.], batch size: 19, lr: 1.76e-04 2022-05-29 02:51:02,342 INFO [train.py:842] (0/4) Epoch 31, batch 7550, loss[loss=0.1649, simple_loss=0.2615, pruned_loss=0.03413, over 7148.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2643, pruned_loss=0.04367, over 1416123.93 frames.], batch size: 20, lr: 1.76e-04 2022-05-29 02:51:41,385 INFO [train.py:842] (0/4) Epoch 31, batch 7600, loss[loss=0.1577, simple_loss=0.2473, pruned_loss=0.03403, over 7446.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2644, pruned_loss=0.04365, over 1415181.73 frames.], batch size: 20, lr: 1.76e-04 2022-05-29 02:52:20,820 INFO [train.py:842] (0/4) Epoch 31, batch 7650, loss[loss=0.1335, simple_loss=0.2261, pruned_loss=0.02044, over 7269.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2645, pruned_loss=0.04361, over 1415318.66 frames.], batch size: 19, lr: 1.76e-04 2022-05-29 02:53:00,147 INFO [train.py:842] (0/4) Epoch 31, batch 7700, loss[loss=0.1602, simple_loss=0.252, pruned_loss=0.03421, over 5232.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2647, pruned_loss=0.04379, over 1417859.70 frames.], batch size: 53, lr: 1.76e-04 2022-05-29 02:53:39,829 INFO [train.py:842] (0/4) Epoch 31, batch 7750, loss[loss=0.1687, simple_loss=0.258, pruned_loss=0.03972, over 7331.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2646, pruned_loss=0.04359, over 1418358.22 frames.], batch size: 20, lr: 1.76e-04 2022-05-29 02:54:19,334 INFO [train.py:842] (0/4) Epoch 31, batch 7800, loss[loss=0.1699, simple_loss=0.2662, pruned_loss=0.03683, over 7328.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2622, pruned_loss=0.04244, over 1419336.39 frames.], batch size: 20, lr: 1.76e-04 2022-05-29 02:54:58,954 INFO [train.py:842] (0/4) Epoch 31, batch 7850, loss[loss=0.1742, simple_loss=0.2691, pruned_loss=0.03965, over 7255.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2627, pruned_loss=0.04299, over 1419052.58 frames.], batch size: 25, lr: 1.76e-04 2022-05-29 02:55:38,162 INFO [train.py:842] (0/4) Epoch 31, batch 7900, loss[loss=0.1357, simple_loss=0.2164, pruned_loss=0.02749, over 7406.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2641, pruned_loss=0.04378, over 1418692.33 frames.], batch size: 18, lr: 1.76e-04 2022-05-29 02:56:17,828 INFO [train.py:842] (0/4) Epoch 31, batch 7950, loss[loss=0.158, simple_loss=0.2433, pruned_loss=0.03638, over 7425.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2634, pruned_loss=0.04345, over 1418946.66 frames.], batch size: 18, lr: 1.76e-04 2022-05-29 02:56:57,329 INFO [train.py:842] (0/4) Epoch 31, batch 8000, 
loss[loss=0.1747, simple_loss=0.2636, pruned_loss=0.04291, over 7444.00 frames.], tot_loss[loss=0.176, simple_loss=0.2641, pruned_loss=0.04398, over 1418628.33 frames.], batch size: 20, lr: 1.76e-04 2022-05-29 02:57:36,905 INFO [train.py:842] (0/4) Epoch 31, batch 8050, loss[loss=0.1746, simple_loss=0.2611, pruned_loss=0.04409, over 7221.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2642, pruned_loss=0.0441, over 1415008.25 frames.], batch size: 21, lr: 1.76e-04 2022-05-29 02:58:16,034 INFO [train.py:842] (0/4) Epoch 31, batch 8100, loss[loss=0.1758, simple_loss=0.2623, pruned_loss=0.04463, over 7338.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2645, pruned_loss=0.04384, over 1414849.56 frames.], batch size: 22, lr: 1.76e-04 2022-05-29 02:58:55,627 INFO [train.py:842] (0/4) Epoch 31, batch 8150, loss[loss=0.1413, simple_loss=0.2237, pruned_loss=0.02947, over 7280.00 frames.], tot_loss[loss=0.177, simple_loss=0.2652, pruned_loss=0.04443, over 1418113.65 frames.], batch size: 17, lr: 1.76e-04 2022-05-29 02:59:34,736 INFO [train.py:842] (0/4) Epoch 31, batch 8200, loss[loss=0.1639, simple_loss=0.2405, pruned_loss=0.0436, over 7131.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2638, pruned_loss=0.04377, over 1418292.56 frames.], batch size: 17, lr: 1.76e-04 2022-05-29 03:00:14,429 INFO [train.py:842] (0/4) Epoch 31, batch 8250, loss[loss=0.1402, simple_loss=0.2271, pruned_loss=0.02662, over 7281.00 frames.], tot_loss[loss=0.176, simple_loss=0.2641, pruned_loss=0.04395, over 1414454.02 frames.], batch size: 17, lr: 1.76e-04 2022-05-29 03:00:53,609 INFO [train.py:842] (0/4) Epoch 31, batch 8300, loss[loss=0.1484, simple_loss=0.2309, pruned_loss=0.03294, over 7292.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2633, pruned_loss=0.04365, over 1412871.06 frames.], batch size: 18, lr: 1.76e-04 2022-05-29 03:01:33,299 INFO [train.py:842] (0/4) Epoch 31, batch 8350, loss[loss=0.1533, simple_loss=0.2492, pruned_loss=0.0287, over 7110.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2624, pruned_loss=0.04296, over 1413600.13 frames.], batch size: 21, lr: 1.76e-04 2022-05-29 03:02:12,670 INFO [train.py:842] (0/4) Epoch 31, batch 8400, loss[loss=0.1965, simple_loss=0.2792, pruned_loss=0.05692, over 7224.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2629, pruned_loss=0.04307, over 1413558.92 frames.], batch size: 21, lr: 1.76e-04 2022-05-29 03:02:52,215 INFO [train.py:842] (0/4) Epoch 31, batch 8450, loss[loss=0.175, simple_loss=0.2746, pruned_loss=0.0377, over 7322.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2635, pruned_loss=0.04308, over 1415426.22 frames.], batch size: 21, lr: 1.76e-04 2022-05-29 03:03:31,488 INFO [train.py:842] (0/4) Epoch 31, batch 8500, loss[loss=0.1966, simple_loss=0.2888, pruned_loss=0.05223, over 7144.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2645, pruned_loss=0.04379, over 1414048.60 frames.], batch size: 20, lr: 1.76e-04 2022-05-29 03:04:11,217 INFO [train.py:842] (0/4) Epoch 31, batch 8550, loss[loss=0.2605, simple_loss=0.3384, pruned_loss=0.09126, over 7403.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2647, pruned_loss=0.04396, over 1413249.71 frames.], batch size: 21, lr: 1.76e-04 2022-05-29 03:04:50,537 INFO [train.py:842] (0/4) Epoch 31, batch 8600, loss[loss=0.1882, simple_loss=0.2786, pruned_loss=0.04883, over 7310.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2639, pruned_loss=0.04365, over 1416653.71 frames.], batch size: 21, lr: 1.76e-04 2022-05-29 03:05:30,272 INFO [train.py:842] (0/4) Epoch 31, batch 8650, loss[loss=0.2115, 
simple_loss=0.295, pruned_loss=0.06394, over 6458.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2636, pruned_loss=0.04369, over 1419408.95 frames.], batch size: 38, lr: 1.76e-04 2022-05-29 03:06:09,458 INFO [train.py:842] (0/4) Epoch 31, batch 8700, loss[loss=0.154, simple_loss=0.2425, pruned_loss=0.03277, over 7003.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2637, pruned_loss=0.04331, over 1419512.06 frames.], batch size: 16, lr: 1.76e-04 2022-05-29 03:06:48,532 INFO [train.py:842] (0/4) Epoch 31, batch 8750, loss[loss=0.1593, simple_loss=0.2504, pruned_loss=0.03414, over 6953.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2642, pruned_loss=0.04367, over 1408937.57 frames.], batch size: 32, lr: 1.76e-04 2022-05-29 03:07:28,052 INFO [train.py:842] (0/4) Epoch 31, batch 8800, loss[loss=0.1476, simple_loss=0.2326, pruned_loss=0.03131, over 7224.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2635, pruned_loss=0.04392, over 1408194.24 frames.], batch size: 16, lr: 1.76e-04 2022-05-29 03:08:07,344 INFO [train.py:842] (0/4) Epoch 31, batch 8850, loss[loss=0.1652, simple_loss=0.2486, pruned_loss=0.04087, over 7417.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2642, pruned_loss=0.044, over 1403834.91 frames.], batch size: 18, lr: 1.76e-04 2022-05-29 03:08:46,461 INFO [train.py:842] (0/4) Epoch 31, batch 8900, loss[loss=0.2315, simple_loss=0.3069, pruned_loss=0.0781, over 4653.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2657, pruned_loss=0.04525, over 1395422.45 frames.], batch size: 52, lr: 1.76e-04 2022-05-29 03:09:25,602 INFO [train.py:842] (0/4) Epoch 31, batch 8950, loss[loss=0.1594, simple_loss=0.2453, pruned_loss=0.03678, over 7335.00 frames.], tot_loss[loss=0.177, simple_loss=0.2652, pruned_loss=0.04446, over 1393021.41 frames.], batch size: 20, lr: 1.76e-04 2022-05-29 03:10:04,390 INFO [train.py:842] (0/4) Epoch 31, batch 9000, loss[loss=0.1814, simple_loss=0.2647, pruned_loss=0.04902, over 7233.00 frames.], tot_loss[loss=0.1766, simple_loss=0.265, pruned_loss=0.04404, over 1390004.25 frames.], batch size: 20, lr: 1.76e-04 2022-05-29 03:10:04,392 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 03:10:13,976 INFO [train.py:871] (0/4) Epoch 31, validation: loss=0.1628, simple_loss=0.2602, pruned_loss=0.03268, over 868885.00 frames. 
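Each training record above reports the current batch's loss next to a running tot_loss accumulated "over N frames": at the first batch of an epoch the two coincide, and the frame count then grows as later batches are folded in. Below is a minimal sketch of one way such a frame-weighted running average could be maintained; the class name and the exact accumulation and reset rules are assumptions for illustration, not the logic in train.py. The two updates reuse values copied from the Epoch 31, batch 8000 and batch 8050 records above.

# Illustrative sketch (not the train.py implementation): keep a
# frame-weighted running average of per-frame losses across batches.
from dataclasses import dataclass, field


@dataclass
class RunningLoss:
    """Reports each per-frame loss averaged "over N frames" seen so far."""
    totals: dict = field(default_factory=dict)  # metric name -> sum(loss * frames)
    frames: float = 0.0

    def update(self, losses: dict, num_frames: float) -> None:
        for name, value in losses.items():
            self.totals[name] = self.totals.get(name, 0.0) + value * num_frames
        self.frames += num_frames

    def mean(self) -> dict:
        return {name: total / self.frames for name, total in self.totals.items()}


if __name__ == "__main__":
    tracker = RunningLoss()
    tracker.update({"loss": 0.1747, "simple_loss": 0.2636, "pruned_loss": 0.04291}, 7444)
    tracker.update({"loss": 0.1746, "simple_loss": 0.2611, "pruned_loss": 0.04409}, 7221)
    print(tracker.mean(), "over", tracker.frames, "frames")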
2022-05-29 03:10:53,014 INFO [train.py:842] (0/4) Epoch 31, batch 9050, loss[loss=0.1536, simple_loss=0.2491, pruned_loss=0.02901, over 6280.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2669, pruned_loss=0.04464, over 1373181.99 frames.], batch size: 38, lr: 1.76e-04 2022-05-29 03:11:31,599 INFO [train.py:842] (0/4) Epoch 31, batch 9100, loss[loss=0.161, simple_loss=0.2505, pruned_loss=0.03575, over 7318.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2666, pruned_loss=0.04439, over 1359669.47 frames.], batch size: 20, lr: 1.76e-04 2022-05-29 03:12:10,261 INFO [train.py:842] (0/4) Epoch 31, batch 9150, loss[loss=0.1903, simple_loss=0.2836, pruned_loss=0.0485, over 6254.00 frames.], tot_loss[loss=0.1798, simple_loss=0.2685, pruned_loss=0.0455, over 1327468.48 frames.], batch size: 37, lr: 1.76e-04 2022-05-29 03:12:43,248 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-31.pt 2022-05-29 03:13:02,479 INFO [train.py:842] (0/4) Epoch 32, batch 0, loss[loss=0.184, simple_loss=0.2862, pruned_loss=0.04095, over 5112.00 frames.], tot_loss[loss=0.184, simple_loss=0.2862, pruned_loss=0.04095, over 5112.00 frames.], batch size: 52, lr: 1.73e-04 2022-05-29 03:13:41,545 INFO [train.py:842] (0/4) Epoch 32, batch 50, loss[loss=0.2193, simple_loss=0.2962, pruned_loss=0.0712, over 6233.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2665, pruned_loss=0.04241, over 319539.59 frames.], batch size: 37, lr: 1.73e-04 2022-05-29 03:14:21,193 INFO [train.py:842] (0/4) Epoch 32, batch 100, loss[loss=0.2835, simple_loss=0.3395, pruned_loss=0.1137, over 7279.00 frames.], tot_loss[loss=0.1781, simple_loss=0.2669, pruned_loss=0.0446, over 566431.66 frames.], batch size: 25, lr: 1.73e-04 2022-05-29 03:15:00,451 INFO [train.py:842] (0/4) Epoch 32, batch 150, loss[loss=0.1552, simple_loss=0.2537, pruned_loss=0.02837, over 7163.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2651, pruned_loss=0.0442, over 758494.64 frames.], batch size: 26, lr: 1.73e-04 2022-05-29 03:15:39,787 INFO [train.py:842] (0/4) Epoch 32, batch 200, loss[loss=0.1299, simple_loss=0.212, pruned_loss=0.02387, over 7026.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2647, pruned_loss=0.04395, over 902478.25 frames.], batch size: 16, lr: 1.73e-04 2022-05-29 03:16:19,166 INFO [train.py:842] (0/4) Epoch 32, batch 250, loss[loss=0.1798, simple_loss=0.2754, pruned_loss=0.04208, over 7280.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2629, pruned_loss=0.04227, over 1022307.36 frames.], batch size: 24, lr: 1.73e-04 2022-05-29 03:16:58,606 INFO [train.py:842] (0/4) Epoch 32, batch 300, loss[loss=0.1652, simple_loss=0.2563, pruned_loss=0.03701, over 7292.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2625, pruned_loss=0.04148, over 1113096.96 frames.], batch size: 24, lr: 1.73e-04 2022-05-29 03:17:37,859 INFO [train.py:842] (0/4) Epoch 32, batch 350, loss[loss=0.1767, simple_loss=0.2778, pruned_loss=0.0378, over 7075.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2628, pruned_loss=0.04192, over 1180082.14 frames.], batch size: 28, lr: 1.73e-04 2022-05-29 03:18:17,598 INFO [train.py:842] (0/4) Epoch 32, batch 400, loss[loss=0.1633, simple_loss=0.2531, pruned_loss=0.03675, over 7156.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2631, pruned_loss=0.0425, over 1235528.41 frames.], batch size: 26, lr: 1.73e-04 2022-05-29 03:18:56,921 INFO [train.py:842] (0/4) Epoch 32, batch 450, loss[loss=0.1398, simple_loss=0.2274, pruned_loss=0.02615, over 7333.00 frames.], tot_loss[loss=0.174, 
simple_loss=0.2634, pruned_loss=0.04235, over 1275924.54 frames.], batch size: 21, lr: 1.73e-04 2022-05-29 03:19:36,525 INFO [train.py:842] (0/4) Epoch 32, batch 500, loss[loss=0.1593, simple_loss=0.2612, pruned_loss=0.02869, over 7329.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2624, pruned_loss=0.04169, over 1312611.49 frames.], batch size: 22, lr: 1.73e-04 2022-05-29 03:20:15,822 INFO [train.py:842] (0/4) Epoch 32, batch 550, loss[loss=0.1889, simple_loss=0.2806, pruned_loss=0.04859, over 7341.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2637, pruned_loss=0.04278, over 1340487.40 frames.], batch size: 22, lr: 1.73e-04 2022-05-29 03:20:55,434 INFO [train.py:842] (0/4) Epoch 32, batch 600, loss[loss=0.1371, simple_loss=0.2208, pruned_loss=0.02665, over 7126.00 frames.], tot_loss[loss=0.1744, simple_loss=0.263, pruned_loss=0.04295, over 1363515.07 frames.], batch size: 17, lr: 1.73e-04 2022-05-29 03:21:45,349 INFO [train.py:842] (0/4) Epoch 32, batch 650, loss[loss=0.1648, simple_loss=0.2519, pruned_loss=0.03887, over 6989.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2636, pruned_loss=0.04334, over 1379158.64 frames.], batch size: 16, lr: 1.73e-04 2022-05-29 03:22:24,802 INFO [train.py:842] (0/4) Epoch 32, batch 700, loss[loss=0.1989, simple_loss=0.2852, pruned_loss=0.05631, over 7204.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2641, pruned_loss=0.04359, over 1387908.68 frames.], batch size: 23, lr: 1.73e-04 2022-05-29 03:23:04,316 INFO [train.py:842] (0/4) Epoch 32, batch 750, loss[loss=0.1686, simple_loss=0.2749, pruned_loss=0.03118, over 7114.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2643, pruned_loss=0.04359, over 1396631.56 frames.], batch size: 21, lr: 1.73e-04 2022-05-29 03:23:43,766 INFO [train.py:842] (0/4) Epoch 32, batch 800, loss[loss=0.1598, simple_loss=0.2439, pruned_loss=0.03782, over 7290.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2642, pruned_loss=0.04338, over 1401379.24 frames.], batch size: 18, lr: 1.73e-04 2022-05-29 03:24:22,937 INFO [train.py:842] (0/4) Epoch 32, batch 850, loss[loss=0.1747, simple_loss=0.2782, pruned_loss=0.03561, over 7295.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2639, pruned_loss=0.04263, over 1408159.42 frames.], batch size: 25, lr: 1.73e-04 2022-05-29 03:25:02,020 INFO [train.py:842] (0/4) Epoch 32, batch 900, loss[loss=0.1697, simple_loss=0.2592, pruned_loss=0.04008, over 7321.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2644, pruned_loss=0.04294, over 1410673.40 frames.], batch size: 22, lr: 1.73e-04 2022-05-29 03:25:41,219 INFO [train.py:842] (0/4) Epoch 32, batch 950, loss[loss=0.1506, simple_loss=0.2368, pruned_loss=0.03216, over 7242.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2629, pruned_loss=0.04265, over 1412799.12 frames.], batch size: 16, lr: 1.73e-04 2022-05-29 03:26:20,879 INFO [train.py:842] (0/4) Epoch 32, batch 1000, loss[loss=0.1465, simple_loss=0.2398, pruned_loss=0.02657, over 7426.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2633, pruned_loss=0.04297, over 1416498.87 frames.], batch size: 20, lr: 1.73e-04 2022-05-29 03:27:00,321 INFO [train.py:842] (0/4) Epoch 32, batch 1050, loss[loss=0.1729, simple_loss=0.2715, pruned_loss=0.03721, over 7234.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2628, pruned_loss=0.04302, over 1419939.46 frames.], batch size: 20, lr: 1.73e-04 2022-05-29 03:27:39,929 INFO [train.py:842] (0/4) Epoch 32, batch 1100, loss[loss=0.1978, simple_loss=0.2899, pruned_loss=0.05283, over 7205.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2628, 
pruned_loss=0.04278, over 1418460.12 frames.], batch size: 22, lr: 1.73e-04 2022-05-29 03:28:19,221 INFO [train.py:842] (0/4) Epoch 32, batch 1150, loss[loss=0.1534, simple_loss=0.2306, pruned_loss=0.03807, over 7143.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2632, pruned_loss=0.04279, over 1422447.20 frames.], batch size: 17, lr: 1.73e-04 2022-05-29 03:28:59,064 INFO [train.py:842] (0/4) Epoch 32, batch 1200, loss[loss=0.1472, simple_loss=0.2442, pruned_loss=0.02508, over 7417.00 frames.], tot_loss[loss=0.1745, simple_loss=0.263, pruned_loss=0.04302, over 1424614.18 frames.], batch size: 21, lr: 1.73e-04 2022-05-29 03:29:38,226 INFO [train.py:842] (0/4) Epoch 32, batch 1250, loss[loss=0.1913, simple_loss=0.2918, pruned_loss=0.0454, over 7197.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2632, pruned_loss=0.04317, over 1418180.87 frames.], batch size: 23, lr: 1.73e-04 2022-05-29 03:30:17,916 INFO [train.py:842] (0/4) Epoch 32, batch 1300, loss[loss=0.1672, simple_loss=0.2678, pruned_loss=0.0333, over 7135.00 frames.], tot_loss[loss=0.1753, simple_loss=0.264, pruned_loss=0.04328, over 1423963.91 frames.], batch size: 20, lr: 1.73e-04 2022-05-29 03:30:57,293 INFO [train.py:842] (0/4) Epoch 32, batch 1350, loss[loss=0.1806, simple_loss=0.263, pruned_loss=0.04914, over 7325.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2632, pruned_loss=0.04306, over 1421844.85 frames.], batch size: 20, lr: 1.73e-04 2022-05-29 03:31:37,042 INFO [train.py:842] (0/4) Epoch 32, batch 1400, loss[loss=0.171, simple_loss=0.2633, pruned_loss=0.0394, over 7222.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2616, pruned_loss=0.04226, over 1422345.72 frames.], batch size: 20, lr: 1.73e-04 2022-05-29 03:32:16,187 INFO [train.py:842] (0/4) Epoch 32, batch 1450, loss[loss=0.1393, simple_loss=0.2318, pruned_loss=0.02334, over 7328.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2627, pruned_loss=0.04295, over 1424061.84 frames.], batch size: 20, lr: 1.73e-04 2022-05-29 03:32:55,627 INFO [train.py:842] (0/4) Epoch 32, batch 1500, loss[loss=0.2066, simple_loss=0.3006, pruned_loss=0.0563, over 5000.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2631, pruned_loss=0.04316, over 1422971.57 frames.], batch size: 53, lr: 1.73e-04 2022-05-29 03:33:35,100 INFO [train.py:842] (0/4) Epoch 32, batch 1550, loss[loss=0.1642, simple_loss=0.2548, pruned_loss=0.03682, over 7426.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2645, pruned_loss=0.04389, over 1421791.46 frames.], batch size: 18, lr: 1.73e-04 2022-05-29 03:34:14,454 INFO [train.py:842] (0/4) Epoch 32, batch 1600, loss[loss=0.198, simple_loss=0.2884, pruned_loss=0.0538, over 7184.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2634, pruned_loss=0.04325, over 1418300.79 frames.], batch size: 23, lr: 1.73e-04 2022-05-29 03:34:53,742 INFO [train.py:842] (0/4) Epoch 32, batch 1650, loss[loss=0.1854, simple_loss=0.278, pruned_loss=0.04638, over 7411.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2642, pruned_loss=0.04361, over 1417266.10 frames.], batch size: 21, lr: 1.73e-04 2022-05-29 03:35:33,283 INFO [train.py:842] (0/4) Epoch 32, batch 1700, loss[loss=0.1722, simple_loss=0.2626, pruned_loss=0.04095, over 7116.00 frames.], tot_loss[loss=0.175, simple_loss=0.2636, pruned_loss=0.04322, over 1412987.61 frames.], batch size: 21, lr: 1.73e-04 2022-05-29 03:36:12,395 INFO [train.py:842] (0/4) Epoch 32, batch 1750, loss[loss=0.2573, simple_loss=0.3194, pruned_loss=0.09754, over 5028.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2645, pruned_loss=0.04352, over 
1410521.62 frames.], batch size: 52, lr: 1.73e-04 2022-05-29 03:36:51,783 INFO [train.py:842] (0/4) Epoch 32, batch 1800, loss[loss=0.1748, simple_loss=0.2625, pruned_loss=0.04358, over 7232.00 frames.], tot_loss[loss=0.1758, simple_loss=0.265, pruned_loss=0.04328, over 1411706.94 frames.], batch size: 20, lr: 1.73e-04 2022-05-29 03:37:30,792 INFO [train.py:842] (0/4) Epoch 32, batch 1850, loss[loss=0.1922, simple_loss=0.2707, pruned_loss=0.05686, over 7017.00 frames.], tot_loss[loss=0.1761, simple_loss=0.2651, pruned_loss=0.04359, over 1405302.23 frames.], batch size: 16, lr: 1.73e-04 2022-05-29 03:38:10,511 INFO [train.py:842] (0/4) Epoch 32, batch 1900, loss[loss=0.1411, simple_loss=0.2253, pruned_loss=0.02842, over 7344.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2631, pruned_loss=0.04256, over 1411214.01 frames.], batch size: 19, lr: 1.73e-04 2022-05-29 03:38:49,854 INFO [train.py:842] (0/4) Epoch 32, batch 1950, loss[loss=0.227, simple_loss=0.2908, pruned_loss=0.08165, over 7352.00 frames.], tot_loss[loss=0.173, simple_loss=0.2618, pruned_loss=0.04212, over 1417508.82 frames.], batch size: 19, lr: 1.73e-04 2022-05-29 03:39:29,622 INFO [train.py:842] (0/4) Epoch 32, batch 2000, loss[loss=0.1374, simple_loss=0.2355, pruned_loss=0.01969, over 7271.00 frames.], tot_loss[loss=0.173, simple_loss=0.2619, pruned_loss=0.04204, over 1418925.68 frames.], batch size: 18, lr: 1.73e-04 2022-05-29 03:40:08,884 INFO [train.py:842] (0/4) Epoch 32, batch 2050, loss[loss=0.1636, simple_loss=0.2614, pruned_loss=0.03291, over 7157.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2613, pruned_loss=0.0417, over 1416312.43 frames.], batch size: 20, lr: 1.73e-04 2022-05-29 03:40:48,259 INFO [train.py:842] (0/4) Epoch 32, batch 2100, loss[loss=0.1733, simple_loss=0.2461, pruned_loss=0.05023, over 7253.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2627, pruned_loss=0.04239, over 1416545.58 frames.], batch size: 16, lr: 1.73e-04 2022-05-29 03:41:27,445 INFO [train.py:842] (0/4) Epoch 32, batch 2150, loss[loss=0.1541, simple_loss=0.2498, pruned_loss=0.02916, over 7218.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2627, pruned_loss=0.04226, over 1420945.15 frames.], batch size: 21, lr: 1.73e-04 2022-05-29 03:42:07,261 INFO [train.py:842] (0/4) Epoch 32, batch 2200, loss[loss=0.1782, simple_loss=0.2735, pruned_loss=0.0414, over 7189.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2626, pruned_loss=0.04219, over 1424064.18 frames.], batch size: 26, lr: 1.73e-04 2022-05-29 03:42:46,572 INFO [train.py:842] (0/4) Epoch 32, batch 2250, loss[loss=0.253, simple_loss=0.3304, pruned_loss=0.08774, over 7074.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2632, pruned_loss=0.04223, over 1425691.09 frames.], batch size: 18, lr: 1.73e-04 2022-05-29 03:43:25,957 INFO [train.py:842] (0/4) Epoch 32, batch 2300, loss[loss=0.1611, simple_loss=0.2664, pruned_loss=0.02789, over 7339.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2632, pruned_loss=0.04272, over 1422378.72 frames.], batch size: 22, lr: 1.73e-04 2022-05-29 03:44:05,253 INFO [train.py:842] (0/4) Epoch 32, batch 2350, loss[loss=0.2062, simple_loss=0.2775, pruned_loss=0.0674, over 7274.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2629, pruned_loss=0.04208, over 1425979.14 frames.], batch size: 17, lr: 1.73e-04 2022-05-29 03:44:44,566 INFO [train.py:842] (0/4) Epoch 32, batch 2400, loss[loss=0.1775, simple_loss=0.2648, pruned_loss=0.04513, over 7321.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2636, pruned_loss=0.04272, over 1421792.24 frames.], batch 
size: 20, lr: 1.72e-04 2022-05-29 03:45:24,021 INFO [train.py:842] (0/4) Epoch 32, batch 2450, loss[loss=0.1712, simple_loss=0.2615, pruned_loss=0.04044, over 7170.00 frames.], tot_loss[loss=0.174, simple_loss=0.2628, pruned_loss=0.04265, over 1422857.27 frames.], batch size: 26, lr: 1.72e-04 2022-05-29 03:46:03,802 INFO [train.py:842] (0/4) Epoch 32, batch 2500, loss[loss=0.1438, simple_loss=0.2257, pruned_loss=0.0309, over 7273.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2618, pruned_loss=0.04176, over 1425099.55 frames.], batch size: 17, lr: 1.72e-04 2022-05-29 03:46:43,177 INFO [train.py:842] (0/4) Epoch 32, batch 2550, loss[loss=0.1477, simple_loss=0.2402, pruned_loss=0.02759, over 7336.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2628, pruned_loss=0.04232, over 1423503.40 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 03:47:22,977 INFO [train.py:842] (0/4) Epoch 32, batch 2600, loss[loss=0.1365, simple_loss=0.221, pruned_loss=0.02601, over 7148.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2625, pruned_loss=0.04243, over 1421628.76 frames.], batch size: 17, lr: 1.72e-04 2022-05-29 03:48:02,166 INFO [train.py:842] (0/4) Epoch 32, batch 2650, loss[loss=0.1867, simple_loss=0.2737, pruned_loss=0.04988, over 7099.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2616, pruned_loss=0.04161, over 1424457.40 frames.], batch size: 26, lr: 1.72e-04 2022-05-29 03:48:41,716 INFO [train.py:842] (0/4) Epoch 32, batch 2700, loss[loss=0.1447, simple_loss=0.2355, pruned_loss=0.02692, over 7333.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2618, pruned_loss=0.04186, over 1423047.31 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 03:49:21,014 INFO [train.py:842] (0/4) Epoch 32, batch 2750, loss[loss=0.1908, simple_loss=0.2852, pruned_loss=0.04817, over 7114.00 frames.], tot_loss[loss=0.1728, simple_loss=0.262, pruned_loss=0.04184, over 1424978.59 frames.], batch size: 28, lr: 1.72e-04 2022-05-29 03:50:00,714 INFO [train.py:842] (0/4) Epoch 32, batch 2800, loss[loss=0.1452, simple_loss=0.2257, pruned_loss=0.03235, over 7418.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2609, pruned_loss=0.04163, over 1424302.16 frames.], batch size: 18, lr: 1.72e-04 2022-05-29 03:50:40,040 INFO [train.py:842] (0/4) Epoch 32, batch 2850, loss[loss=0.1614, simple_loss=0.2565, pruned_loss=0.03314, over 6729.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2611, pruned_loss=0.04198, over 1421650.63 frames.], batch size: 38, lr: 1.72e-04 2022-05-29 03:51:19,907 INFO [train.py:842] (0/4) Epoch 32, batch 2900, loss[loss=0.1571, simple_loss=0.2594, pruned_loss=0.0274, over 7241.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2621, pruned_loss=0.04201, over 1425667.40 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 03:51:58,834 INFO [train.py:842] (0/4) Epoch 32, batch 2950, loss[loss=0.1825, simple_loss=0.2792, pruned_loss=0.04292, over 7201.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2622, pruned_loss=0.04182, over 1418802.82 frames.], batch size: 23, lr: 1.72e-04 2022-05-29 03:52:38,221 INFO [train.py:842] (0/4) Epoch 32, batch 3000, loss[loss=0.2198, simple_loss=0.3138, pruned_loss=0.06294, over 7428.00 frames.], tot_loss[loss=0.1744, simple_loss=0.264, pruned_loss=0.04234, over 1419393.07 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 03:52:38,222 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 03:52:48,107 INFO [train.py:871] (0/4) Epoch 32, validation: loss=0.1619, simple_loss=0.2592, pruned_loss=0.03236, over 868885.00 frames. 
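Across these records the printed loss tracks a fixed combination of the other two terms: 0.5 * simple_loss + pruned_loss reproduces it to rounding precision, e.g. the Epoch 32 validation line just above gives 0.5 * 0.2592 + 0.03236 ≈ 0.1619. The quick check below is read off the logged numbers themselves and is illustrative only, not code from the training recipe.

# Illustrative check: the printed loss in the records above matches
# 0.5 * simple_loss + pruned_loss to within the rounding of the log output.
records = [
    # (loss, simple_loss, pruned_loss) copied from records above
    (0.1619, 0.2592, 0.03236),  # Epoch 32, validation
    (0.1628, 0.2602, 0.03268),  # Epoch 31, validation
    (0.1747, 0.2636, 0.04291),  # Epoch 31, batch 8000
]
for loss, simple, pruned in records:
    combined = 0.5 * simple + pruned
    assert abs(combined - loss) < 5e-4, (loss, combined)
    print(f"loss={loss:.4f}  0.5*simple_loss+pruned_loss={combined:.4f}")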
2022-05-29 03:53:27,710 INFO [train.py:842] (0/4) Epoch 32, batch 3050, loss[loss=0.1951, simple_loss=0.2807, pruned_loss=0.05472, over 7283.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2638, pruned_loss=0.04285, over 1422647.13 frames.], batch size: 25, lr: 1.72e-04 2022-05-29 03:53:29,706 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-288000.pt 2022-05-29 03:54:10,072 INFO [train.py:842] (0/4) Epoch 32, batch 3100, loss[loss=0.1697, simple_loss=0.2639, pruned_loss=0.03778, over 7066.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2643, pruned_loss=0.04316, over 1425682.91 frames.], batch size: 28, lr: 1.72e-04 2022-05-29 03:54:49,460 INFO [train.py:842] (0/4) Epoch 32, batch 3150, loss[loss=0.1593, simple_loss=0.2445, pruned_loss=0.03702, over 7281.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2629, pruned_loss=0.04245, over 1423725.17 frames.], batch size: 17, lr: 1.72e-04 2022-05-29 03:55:28,994 INFO [train.py:842] (0/4) Epoch 32, batch 3200, loss[loss=0.1979, simple_loss=0.2848, pruned_loss=0.05554, over 7109.00 frames.], tot_loss[loss=0.174, simple_loss=0.2632, pruned_loss=0.04236, over 1426420.71 frames.], batch size: 21, lr: 1.72e-04 2022-05-29 03:56:08,325 INFO [train.py:842] (0/4) Epoch 32, batch 3250, loss[loss=0.1875, simple_loss=0.2858, pruned_loss=0.0446, over 7346.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2635, pruned_loss=0.04248, over 1427535.48 frames.], batch size: 22, lr: 1.72e-04 2022-05-29 03:56:47,743 INFO [train.py:842] (0/4) Epoch 32, batch 3300, loss[loss=0.2007, simple_loss=0.2794, pruned_loss=0.06099, over 7424.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2641, pruned_loss=0.04313, over 1422407.06 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 03:57:27,361 INFO [train.py:842] (0/4) Epoch 32, batch 3350, loss[loss=0.1787, simple_loss=0.2739, pruned_loss=0.04177, over 7317.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2628, pruned_loss=0.04303, over 1424263.94 frames.], batch size: 21, lr: 1.72e-04 2022-05-29 03:58:06,801 INFO [train.py:842] (0/4) Epoch 32, batch 3400, loss[loss=0.1619, simple_loss=0.2592, pruned_loss=0.03231, over 7336.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2642, pruned_loss=0.04324, over 1421707.33 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 03:58:45,726 INFO [train.py:842] (0/4) Epoch 32, batch 3450, loss[loss=0.2196, simple_loss=0.3142, pruned_loss=0.06244, over 7223.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2648, pruned_loss=0.04308, over 1424869.50 frames.], batch size: 22, lr: 1.72e-04 2022-05-29 03:59:25,405 INFO [train.py:842] (0/4) Epoch 32, batch 3500, loss[loss=0.1708, simple_loss=0.2641, pruned_loss=0.03872, over 7275.00 frames.], tot_loss[loss=0.176, simple_loss=0.2651, pruned_loss=0.04345, over 1427977.62 frames.], batch size: 24, lr: 1.72e-04 2022-05-29 04:00:04,727 INFO [train.py:842] (0/4) Epoch 32, batch 3550, loss[loss=0.1883, simple_loss=0.2798, pruned_loss=0.04842, over 7392.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2647, pruned_loss=0.04349, over 1431022.94 frames.], batch size: 23, lr: 1.72e-04 2022-05-29 04:00:44,331 INFO [train.py:842] (0/4) Epoch 32, batch 3600, loss[loss=0.1623, simple_loss=0.2575, pruned_loss=0.03358, over 6496.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2636, pruned_loss=0.04262, over 1428523.31 frames.], batch size: 38, lr: 1.72e-04 2022-05-29 04:01:23,436 INFO [train.py:842] (0/4) Epoch 32, batch 3650, loss[loss=0.1476, simple_loss=0.2354, pruned_loss=0.02984, over 7238.00 frames.], 
tot_loss[loss=0.1741, simple_loss=0.2636, pruned_loss=0.04228, over 1427858.25 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:02:03,201 INFO [train.py:842] (0/4) Epoch 32, batch 3700, loss[loss=0.1493, simple_loss=0.2364, pruned_loss=0.03117, over 7149.00 frames.], tot_loss[loss=0.174, simple_loss=0.2634, pruned_loss=0.04236, over 1429959.18 frames.], batch size: 17, lr: 1.72e-04 2022-05-29 04:02:41,988 INFO [train.py:842] (0/4) Epoch 32, batch 3750, loss[loss=0.1838, simple_loss=0.2862, pruned_loss=0.0407, over 7211.00 frames.], tot_loss[loss=0.1747, simple_loss=0.264, pruned_loss=0.04267, over 1424113.00 frames.], batch size: 23, lr: 1.72e-04 2022-05-29 04:03:21,650 INFO [train.py:842] (0/4) Epoch 32, batch 3800, loss[loss=0.1861, simple_loss=0.2656, pruned_loss=0.05329, over 7377.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2629, pruned_loss=0.04205, over 1425412.48 frames.], batch size: 23, lr: 1.72e-04 2022-05-29 04:04:01,020 INFO [train.py:842] (0/4) Epoch 32, batch 3850, loss[loss=0.1759, simple_loss=0.2682, pruned_loss=0.04185, over 7435.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2628, pruned_loss=0.04236, over 1427980.27 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:04:40,460 INFO [train.py:842] (0/4) Epoch 32, batch 3900, loss[loss=0.1734, simple_loss=0.2547, pruned_loss=0.046, over 7163.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2612, pruned_loss=0.04176, over 1429100.36 frames.], batch size: 18, lr: 1.72e-04 2022-05-29 04:05:19,793 INFO [train.py:842] (0/4) Epoch 32, batch 3950, loss[loss=0.1695, simple_loss=0.2617, pruned_loss=0.03861, over 7217.00 frames.], tot_loss[loss=0.173, simple_loss=0.2616, pruned_loss=0.04219, over 1424711.84 frames.], batch size: 21, lr: 1.72e-04 2022-05-29 04:05:59,354 INFO [train.py:842] (0/4) Epoch 32, batch 4000, loss[loss=0.1938, simple_loss=0.2776, pruned_loss=0.05499, over 7406.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2612, pruned_loss=0.04234, over 1421599.13 frames.], batch size: 18, lr: 1.72e-04 2022-05-29 04:06:38,590 INFO [train.py:842] (0/4) Epoch 32, batch 4050, loss[loss=0.1666, simple_loss=0.2591, pruned_loss=0.03701, over 7373.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2616, pruned_loss=0.04258, over 1418786.85 frames.], batch size: 23, lr: 1.72e-04 2022-05-29 04:07:17,986 INFO [train.py:842] (0/4) Epoch 32, batch 4100, loss[loss=0.1682, simple_loss=0.2677, pruned_loss=0.03433, over 7142.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2633, pruned_loss=0.04327, over 1418689.86 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:07:57,025 INFO [train.py:842] (0/4) Epoch 32, batch 4150, loss[loss=0.2056, simple_loss=0.2987, pruned_loss=0.05626, over 6704.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2647, pruned_loss=0.04339, over 1421691.31 frames.], batch size: 31, lr: 1.72e-04 2022-05-29 04:08:36,761 INFO [train.py:842] (0/4) Epoch 32, batch 4200, loss[loss=0.1953, simple_loss=0.291, pruned_loss=0.04984, over 7298.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2634, pruned_loss=0.04298, over 1424900.23 frames.], batch size: 24, lr: 1.72e-04 2022-05-29 04:09:15,945 INFO [train.py:842] (0/4) Epoch 32, batch 4250, loss[loss=0.1604, simple_loss=0.2515, pruned_loss=0.03461, over 7220.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2647, pruned_loss=0.0433, over 1420548.01 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:09:55,655 INFO [train.py:842] (0/4) Epoch 32, batch 4300, loss[loss=0.192, simple_loss=0.2785, pruned_loss=0.05275, over 7150.00 frames.], tot_loss[loss=0.175, 
simple_loss=0.2644, pruned_loss=0.04287, over 1423760.57 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:10:34,778 INFO [train.py:842] (0/4) Epoch 32, batch 4350, loss[loss=0.1682, simple_loss=0.265, pruned_loss=0.03569, over 6163.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2644, pruned_loss=0.04298, over 1425038.81 frames.], batch size: 37, lr: 1.72e-04 2022-05-29 04:11:14,396 INFO [train.py:842] (0/4) Epoch 32, batch 4400, loss[loss=0.1745, simple_loss=0.2745, pruned_loss=0.03725, over 7353.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2637, pruned_loss=0.04293, over 1425831.79 frames.], batch size: 22, lr: 1.72e-04 2022-05-29 04:11:53,983 INFO [train.py:842] (0/4) Epoch 32, batch 4450, loss[loss=0.1575, simple_loss=0.2396, pruned_loss=0.03769, over 7250.00 frames.], tot_loss[loss=0.1737, simple_loss=0.262, pruned_loss=0.0427, over 1429774.84 frames.], batch size: 19, lr: 1.72e-04 2022-05-29 04:12:33,506 INFO [train.py:842] (0/4) Epoch 32, batch 4500, loss[loss=0.192, simple_loss=0.2857, pruned_loss=0.04912, over 7111.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2619, pruned_loss=0.04226, over 1424920.21 frames.], batch size: 21, lr: 1.72e-04 2022-05-29 04:13:12,630 INFO [train.py:842] (0/4) Epoch 32, batch 4550, loss[loss=0.1913, simple_loss=0.2766, pruned_loss=0.05303, over 7340.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2608, pruned_loss=0.04145, over 1416655.05 frames.], batch size: 22, lr: 1.72e-04 2022-05-29 04:13:52,348 INFO [train.py:842] (0/4) Epoch 32, batch 4600, loss[loss=0.137, simple_loss=0.2241, pruned_loss=0.02497, over 7001.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2613, pruned_loss=0.04191, over 1420580.75 frames.], batch size: 16, lr: 1.72e-04 2022-05-29 04:14:31,869 INFO [train.py:842] (0/4) Epoch 32, batch 4650, loss[loss=0.1651, simple_loss=0.2607, pruned_loss=0.03481, over 7223.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2632, pruned_loss=0.04264, over 1425410.78 frames.], batch size: 21, lr: 1.72e-04 2022-05-29 04:15:11,549 INFO [train.py:842] (0/4) Epoch 32, batch 4700, loss[loss=0.1809, simple_loss=0.2662, pruned_loss=0.04785, over 7233.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2634, pruned_loss=0.04284, over 1425922.46 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:15:51,109 INFO [train.py:842] (0/4) Epoch 32, batch 4750, loss[loss=0.175, simple_loss=0.2564, pruned_loss=0.04683, over 7200.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2624, pruned_loss=0.04255, over 1423166.93 frames.], batch size: 22, lr: 1.72e-04 2022-05-29 04:16:30,596 INFO [train.py:842] (0/4) Epoch 32, batch 4800, loss[loss=0.1638, simple_loss=0.26, pruned_loss=0.03379, over 7315.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2633, pruned_loss=0.04273, over 1419451.04 frames.], batch size: 21, lr: 1.72e-04 2022-05-29 04:17:09,875 INFO [train.py:842] (0/4) Epoch 32, batch 4850, loss[loss=0.191, simple_loss=0.2937, pruned_loss=0.04414, over 7231.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2622, pruned_loss=0.04182, over 1419823.71 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:17:49,579 INFO [train.py:842] (0/4) Epoch 32, batch 4900, loss[loss=0.1971, simple_loss=0.2865, pruned_loss=0.05382, over 7273.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2634, pruned_loss=0.04293, over 1422331.17 frames.], batch size: 25, lr: 1.72e-04 2022-05-29 04:18:28,905 INFO [train.py:842] (0/4) Epoch 32, batch 4950, loss[loss=0.1652, simple_loss=0.2539, pruned_loss=0.03823, over 7437.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2626, 
pruned_loss=0.04235, over 1425674.54 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:19:08,239 INFO [train.py:842] (0/4) Epoch 32, batch 5000, loss[loss=0.2037, simple_loss=0.2945, pruned_loss=0.05644, over 6751.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2644, pruned_loss=0.04293, over 1422947.64 frames.], batch size: 31, lr: 1.72e-04 2022-05-29 04:19:47,567 INFO [train.py:842] (0/4) Epoch 32, batch 5050, loss[loss=0.1658, simple_loss=0.2426, pruned_loss=0.04451, over 7291.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2626, pruned_loss=0.04224, over 1423138.02 frames.], batch size: 18, lr: 1.72e-04 2022-05-29 04:20:27,252 INFO [train.py:842] (0/4) Epoch 32, batch 5100, loss[loss=0.1808, simple_loss=0.2744, pruned_loss=0.04357, over 7311.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2626, pruned_loss=0.04265, over 1422352.54 frames.], batch size: 21, lr: 1.72e-04 2022-05-29 04:21:06,324 INFO [train.py:842] (0/4) Epoch 32, batch 5150, loss[loss=0.1709, simple_loss=0.2593, pruned_loss=0.04128, over 7067.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2623, pruned_loss=0.04229, over 1418421.04 frames.], batch size: 18, lr: 1.72e-04 2022-05-29 04:21:45,929 INFO [train.py:842] (0/4) Epoch 32, batch 5200, loss[loss=0.1319, simple_loss=0.2145, pruned_loss=0.02464, over 7283.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2624, pruned_loss=0.04241, over 1419995.67 frames.], batch size: 17, lr: 1.72e-04 2022-05-29 04:22:25,243 INFO [train.py:842] (0/4) Epoch 32, batch 5250, loss[loss=0.2459, simple_loss=0.3023, pruned_loss=0.0947, over 6996.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2619, pruned_loss=0.04216, over 1420055.48 frames.], batch size: 16, lr: 1.72e-04 2022-05-29 04:23:04,884 INFO [train.py:842] (0/4) Epoch 32, batch 5300, loss[loss=0.1626, simple_loss=0.2524, pruned_loss=0.0364, over 7230.00 frames.], tot_loss[loss=0.173, simple_loss=0.2613, pruned_loss=0.04241, over 1422064.90 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:23:43,917 INFO [train.py:842] (0/4) Epoch 32, batch 5350, loss[loss=0.1959, simple_loss=0.2829, pruned_loss=0.05443, over 7368.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2629, pruned_loss=0.04289, over 1424241.78 frames.], batch size: 23, lr: 1.72e-04 2022-05-29 04:24:23,426 INFO [train.py:842] (0/4) Epoch 32, batch 5400, loss[loss=0.1653, simple_loss=0.2558, pruned_loss=0.03739, over 7141.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2639, pruned_loss=0.0433, over 1426299.83 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:25:13,703 INFO [train.py:842] (0/4) Epoch 32, batch 5450, loss[loss=0.1539, simple_loss=0.2317, pruned_loss=0.03809, over 7192.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2632, pruned_loss=0.04307, over 1429220.69 frames.], batch size: 16, lr: 1.72e-04 2022-05-29 04:25:53,343 INFO [train.py:842] (0/4) Epoch 32, batch 5500, loss[loss=0.1758, simple_loss=0.2713, pruned_loss=0.04015, over 7195.00 frames.], tot_loss[loss=0.1752, simple_loss=0.264, pruned_loss=0.0432, over 1426442.20 frames.], batch size: 22, lr: 1.72e-04 2022-05-29 04:26:32,572 INFO [train.py:842] (0/4) Epoch 32, batch 5550, loss[loss=0.1673, simple_loss=0.2634, pruned_loss=0.03562, over 7413.00 frames.], tot_loss[loss=0.1748, simple_loss=0.264, pruned_loss=0.04285, over 1426297.06 frames.], batch size: 21, lr: 1.72e-04 2022-05-29 04:27:12,135 INFO [train.py:842] (0/4) Epoch 32, batch 5600, loss[loss=0.1719, simple_loss=0.2559, pruned_loss=0.04395, over 5138.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2641, pruned_loss=0.04263, over 
1426576.84 frames.], batch size: 52, lr: 1.72e-04 2022-05-29 04:27:51,466 INFO [train.py:842] (0/4) Epoch 32, batch 5650, loss[loss=0.1698, simple_loss=0.2495, pruned_loss=0.04504, over 7334.00 frames.], tot_loss[loss=0.1762, simple_loss=0.265, pruned_loss=0.04375, over 1426271.56 frames.], batch size: 20, lr: 1.72e-04 2022-05-29 04:28:41,654 INFO [train.py:842] (0/4) Epoch 32, batch 5700, loss[loss=0.1854, simple_loss=0.2659, pruned_loss=0.05248, over 7016.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2646, pruned_loss=0.04339, over 1421487.10 frames.], batch size: 16, lr: 1.72e-04 2022-05-29 04:29:21,093 INFO [train.py:842] (0/4) Epoch 32, batch 5750, loss[loss=0.1923, simple_loss=0.2664, pruned_loss=0.05908, over 7059.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2637, pruned_loss=0.04298, over 1423113.03 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 04:30:11,339 INFO [train.py:842] (0/4) Epoch 32, batch 5800, loss[loss=0.165, simple_loss=0.2707, pruned_loss=0.02962, over 7322.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2635, pruned_loss=0.04307, over 1420742.55 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:30:50,471 INFO [train.py:842] (0/4) Epoch 32, batch 5850, loss[loss=0.1739, simple_loss=0.2621, pruned_loss=0.04279, over 7350.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2645, pruned_loss=0.04361, over 1420390.19 frames.], batch size: 19, lr: 1.71e-04 2022-05-29 04:31:30,330 INFO [train.py:842] (0/4) Epoch 32, batch 5900, loss[loss=0.2201, simple_loss=0.2963, pruned_loss=0.07191, over 7279.00 frames.], tot_loss[loss=0.175, simple_loss=0.2635, pruned_loss=0.04326, over 1425449.36 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 04:32:09,701 INFO [train.py:842] (0/4) Epoch 32, batch 5950, loss[loss=0.1581, simple_loss=0.2497, pruned_loss=0.03321, over 7241.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2618, pruned_loss=0.04297, over 1424124.18 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:32:48,990 INFO [train.py:842] (0/4) Epoch 32, batch 6000, loss[loss=0.1572, simple_loss=0.2381, pruned_loss=0.0382, over 7015.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2619, pruned_loss=0.04295, over 1418522.10 frames.], batch size: 16, lr: 1.71e-04 2022-05-29 04:32:48,991 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 04:32:58,750 INFO [train.py:871] (0/4) Epoch 32, validation: loss=0.1638, simple_loss=0.2607, pruned_loss=0.0334, over 868885.00 frames. 
2022-05-29 04:33:38,122 INFO [train.py:842] (0/4) Epoch 32, batch 6050, loss[loss=0.1657, simple_loss=0.2412, pruned_loss=0.04508, over 7268.00 frames.], tot_loss[loss=0.174, simple_loss=0.2622, pruned_loss=0.04286, over 1421260.67 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 04:34:17,905 INFO [train.py:842] (0/4) Epoch 32, batch 6100, loss[loss=0.1757, simple_loss=0.2594, pruned_loss=0.04602, over 7427.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2639, pruned_loss=0.04375, over 1421968.28 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:34:57,127 INFO [train.py:842] (0/4) Epoch 32, batch 6150, loss[loss=0.2239, simple_loss=0.3187, pruned_loss=0.06459, over 7286.00 frames.], tot_loss[loss=0.1767, simple_loss=0.2647, pruned_loss=0.04433, over 1420310.04 frames.], batch size: 25, lr: 1.71e-04 2022-05-29 04:35:36,541 INFO [train.py:842] (0/4) Epoch 32, batch 6200, loss[loss=0.208, simple_loss=0.3103, pruned_loss=0.05282, over 7419.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2643, pruned_loss=0.04377, over 1422607.90 frames.], batch size: 21, lr: 1.71e-04 2022-05-29 04:36:16,131 INFO [train.py:842] (0/4) Epoch 32, batch 6250, loss[loss=0.165, simple_loss=0.2526, pruned_loss=0.03869, over 7410.00 frames.], tot_loss[loss=0.1757, simple_loss=0.264, pruned_loss=0.04369, over 1426457.39 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 04:36:55,654 INFO [train.py:842] (0/4) Epoch 32, batch 6300, loss[loss=0.183, simple_loss=0.2616, pruned_loss=0.05222, over 7057.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2636, pruned_loss=0.04328, over 1423998.72 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 04:37:34,947 INFO [train.py:842] (0/4) Epoch 32, batch 6350, loss[loss=0.1337, simple_loss=0.2239, pruned_loss=0.02171, over 7365.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2629, pruned_loss=0.043, over 1423622.60 frames.], batch size: 19, lr: 1.71e-04 2022-05-29 04:38:14,615 INFO [train.py:842] (0/4) Epoch 32, batch 6400, loss[loss=0.2299, simple_loss=0.312, pruned_loss=0.07391, over 7265.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2634, pruned_loss=0.04285, over 1423360.49 frames.], batch size: 25, lr: 1.71e-04 2022-05-29 04:38:53,815 INFO [train.py:842] (0/4) Epoch 32, batch 6450, loss[loss=0.1641, simple_loss=0.2577, pruned_loss=0.03527, over 7433.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2627, pruned_loss=0.04246, over 1425041.13 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:39:33,322 INFO [train.py:842] (0/4) Epoch 32, batch 6500, loss[loss=0.215, simple_loss=0.2946, pruned_loss=0.06767, over 7295.00 frames.], tot_loss[loss=0.1743, simple_loss=0.263, pruned_loss=0.04279, over 1422940.51 frames.], batch size: 24, lr: 1.71e-04 2022-05-29 04:40:12,612 INFO [train.py:842] (0/4) Epoch 32, batch 6550, loss[loss=0.1595, simple_loss=0.2422, pruned_loss=0.03841, over 7412.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2629, pruned_loss=0.04283, over 1420533.27 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 04:40:52,235 INFO [train.py:842] (0/4) Epoch 32, batch 6600, loss[loss=0.1653, simple_loss=0.2691, pruned_loss=0.03081, over 6745.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2622, pruned_loss=0.04243, over 1422150.35 frames.], batch size: 31, lr: 1.71e-04 2022-05-29 04:41:31,521 INFO [train.py:842] (0/4) Epoch 32, batch 6650, loss[loss=0.1779, simple_loss=0.2706, pruned_loss=0.04265, over 7316.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2612, pruned_loss=0.04185, over 1419466.22 frames.], batch size: 25, lr: 1.71e-04 2022-05-29 04:42:11,091 INFO 
[train.py:842] (0/4) Epoch 32, batch 6700, loss[loss=0.1733, simple_loss=0.2659, pruned_loss=0.04035, over 7154.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2621, pruned_loss=0.04233, over 1418640.18 frames.], batch size: 19, lr: 1.71e-04 2022-05-29 04:42:50,471 INFO [train.py:842] (0/4) Epoch 32, batch 6750, loss[loss=0.1935, simple_loss=0.2952, pruned_loss=0.04592, over 7216.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2627, pruned_loss=0.04247, over 1419624.36 frames.], batch size: 26, lr: 1.71e-04 2022-05-29 04:43:30,246 INFO [train.py:842] (0/4) Epoch 32, batch 6800, loss[loss=0.1875, simple_loss=0.2768, pruned_loss=0.04907, over 7234.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2614, pruned_loss=0.04198, over 1425442.23 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:44:09,481 INFO [train.py:842] (0/4) Epoch 32, batch 6850, loss[loss=0.1564, simple_loss=0.2453, pruned_loss=0.0338, over 7261.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2622, pruned_loss=0.04246, over 1425655.83 frames.], batch size: 19, lr: 1.71e-04 2022-05-29 04:44:48,817 INFO [train.py:842] (0/4) Epoch 32, batch 6900, loss[loss=0.1598, simple_loss=0.2446, pruned_loss=0.03747, over 7231.00 frames.], tot_loss[loss=0.1736, simple_loss=0.262, pruned_loss=0.04256, over 1426084.61 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:45:27,970 INFO [train.py:842] (0/4) Epoch 32, batch 6950, loss[loss=0.1756, simple_loss=0.2675, pruned_loss=0.04191, over 7142.00 frames.], tot_loss[loss=0.174, simple_loss=0.2626, pruned_loss=0.04275, over 1428193.88 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:46:07,514 INFO [train.py:842] (0/4) Epoch 32, batch 7000, loss[loss=0.155, simple_loss=0.2396, pruned_loss=0.03518, over 7158.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2628, pruned_loss=0.04297, over 1426763.42 frames.], batch size: 19, lr: 1.71e-04 2022-05-29 04:46:46,469 INFO [train.py:842] (0/4) Epoch 32, batch 7050, loss[loss=0.1862, simple_loss=0.2702, pruned_loss=0.05116, over 7198.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2632, pruned_loss=0.04283, over 1426540.46 frames.], batch size: 23, lr: 1.71e-04 2022-05-29 04:47:25,870 INFO [train.py:842] (0/4) Epoch 32, batch 7100, loss[loss=0.183, simple_loss=0.2735, pruned_loss=0.04625, over 7219.00 frames.], tot_loss[loss=0.1742, simple_loss=0.263, pruned_loss=0.04273, over 1425207.25 frames.], batch size: 22, lr: 1.71e-04 2022-05-29 04:48:04,934 INFO [train.py:842] (0/4) Epoch 32, batch 7150, loss[loss=0.183, simple_loss=0.2751, pruned_loss=0.04546, over 6728.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2626, pruned_loss=0.04279, over 1421648.53 frames.], batch size: 31, lr: 1.71e-04 2022-05-29 04:48:44,487 INFO [train.py:842] (0/4) Epoch 32, batch 7200, loss[loss=0.1835, simple_loss=0.2695, pruned_loss=0.04875, over 7218.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2623, pruned_loss=0.0427, over 1424148.87 frames.], batch size: 21, lr: 1.71e-04 2022-05-29 04:49:23,960 INFO [train.py:842] (0/4) Epoch 32, batch 7250, loss[loss=0.1563, simple_loss=0.2468, pruned_loss=0.03289, over 7411.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2623, pruned_loss=0.04265, over 1428807.90 frames.], batch size: 21, lr: 1.71e-04 2022-05-29 04:50:03,704 INFO [train.py:842] (0/4) Epoch 32, batch 7300, loss[loss=0.1705, simple_loss=0.27, pruned_loss=0.03546, over 7407.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2619, pruned_loss=0.0426, over 1431587.50 frames.], batch size: 21, lr: 1.71e-04 2022-05-29 04:50:43,042 INFO [train.py:842] (0/4) Epoch 
32, batch 7350, loss[loss=0.1624, simple_loss=0.2512, pruned_loss=0.03687, over 7361.00 frames.], tot_loss[loss=0.173, simple_loss=0.2613, pruned_loss=0.04241, over 1431092.25 frames.], batch size: 19, lr: 1.71e-04 2022-05-29 04:51:22,693 INFO [train.py:842] (0/4) Epoch 32, batch 7400, loss[loss=0.2326, simple_loss=0.3099, pruned_loss=0.07762, over 5199.00 frames.], tot_loss[loss=0.1738, simple_loss=0.262, pruned_loss=0.04281, over 1426838.53 frames.], batch size: 54, lr: 1.71e-04 2022-05-29 04:52:02,076 INFO [train.py:842] (0/4) Epoch 32, batch 7450, loss[loss=0.1669, simple_loss=0.2576, pruned_loss=0.03811, over 7311.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2621, pruned_loss=0.04268, over 1429633.60 frames.], batch size: 24, lr: 1.71e-04 2022-05-29 04:52:41,531 INFO [train.py:842] (0/4) Epoch 32, batch 7500, loss[loss=0.2114, simple_loss=0.2941, pruned_loss=0.06431, over 7330.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2624, pruned_loss=0.04287, over 1428428.20 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:53:20,809 INFO [train.py:842] (0/4) Epoch 32, batch 7550, loss[loss=0.1699, simple_loss=0.2716, pruned_loss=0.03405, over 7267.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2626, pruned_loss=0.04253, over 1427329.84 frames.], batch size: 24, lr: 1.71e-04 2022-05-29 04:54:00,339 INFO [train.py:842] (0/4) Epoch 32, batch 7600, loss[loss=0.1622, simple_loss=0.2373, pruned_loss=0.04356, over 7354.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2618, pruned_loss=0.04203, over 1425559.11 frames.], batch size: 19, lr: 1.71e-04 2022-05-29 04:54:39,790 INFO [train.py:842] (0/4) Epoch 32, batch 7650, loss[loss=0.1966, simple_loss=0.2808, pruned_loss=0.05622, over 7236.00 frames.], tot_loss[loss=0.1729, simple_loss=0.262, pruned_loss=0.04189, over 1427184.51 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:55:19,477 INFO [train.py:842] (0/4) Epoch 32, batch 7700, loss[loss=0.1967, simple_loss=0.2857, pruned_loss=0.05382, over 7293.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2627, pruned_loss=0.04253, over 1428680.73 frames.], batch size: 24, lr: 1.71e-04 2022-05-29 04:55:58,587 INFO [train.py:842] (0/4) Epoch 32, batch 7750, loss[loss=0.1696, simple_loss=0.2674, pruned_loss=0.0359, over 7045.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2621, pruned_loss=0.0424, over 1422961.78 frames.], batch size: 28, lr: 1.71e-04 2022-05-29 04:56:38,191 INFO [train.py:842] (0/4) Epoch 32, batch 7800, loss[loss=0.1785, simple_loss=0.2654, pruned_loss=0.04579, over 6560.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2626, pruned_loss=0.04307, over 1422896.45 frames.], batch size: 38, lr: 1.71e-04 2022-05-29 04:57:17,415 INFO [train.py:842] (0/4) Epoch 32, batch 7850, loss[loss=0.1412, simple_loss=0.2273, pruned_loss=0.0276, over 7131.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2623, pruned_loss=0.04295, over 1422860.76 frames.], batch size: 17, lr: 1.71e-04 2022-05-29 04:57:56,951 INFO [train.py:842] (0/4) Epoch 32, batch 7900, loss[loss=0.1677, simple_loss=0.2534, pruned_loss=0.04098, over 7230.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2625, pruned_loss=0.04298, over 1423473.74 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:58:36,023 INFO [train.py:842] (0/4) Epoch 32, batch 7950, loss[loss=0.1706, simple_loss=0.2673, pruned_loss=0.03691, over 6711.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2636, pruned_loss=0.04315, over 1425095.33 frames.], batch size: 31, lr: 1.71e-04 2022-05-29 04:59:15,394 INFO [train.py:842] (0/4) Epoch 32, batch 8000, 
loss[loss=0.1717, simple_loss=0.2721, pruned_loss=0.03564, over 7322.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2627, pruned_loss=0.04294, over 1424251.78 frames.], batch size: 20, lr: 1.71e-04 2022-05-29 04:59:54,727 INFO [train.py:842] (0/4) Epoch 32, batch 8050, loss[loss=0.1669, simple_loss=0.2511, pruned_loss=0.04137, over 7420.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2634, pruned_loss=0.04348, over 1421902.09 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 05:00:34,445 INFO [train.py:842] (0/4) Epoch 32, batch 8100, loss[loss=0.1383, simple_loss=0.2258, pruned_loss=0.02541, over 6857.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2633, pruned_loss=0.04318, over 1421010.87 frames.], batch size: 15, lr: 1.71e-04 2022-05-29 05:01:13,692 INFO [train.py:842] (0/4) Epoch 32, batch 8150, loss[loss=0.1674, simple_loss=0.2614, pruned_loss=0.03669, over 7237.00 frames.], tot_loss[loss=0.1764, simple_loss=0.2647, pruned_loss=0.04404, over 1422844.57 frames.], batch size: 26, lr: 1.71e-04 2022-05-29 05:01:53,212 INFO [train.py:842] (0/4) Epoch 32, batch 8200, loss[loss=0.1945, simple_loss=0.2939, pruned_loss=0.0476, over 7221.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2643, pruned_loss=0.04369, over 1421205.93 frames.], batch size: 21, lr: 1.71e-04 2022-05-29 05:02:32,246 INFO [train.py:842] (0/4) Epoch 32, batch 8250, loss[loss=0.1533, simple_loss=0.2522, pruned_loss=0.02718, over 6386.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2643, pruned_loss=0.0438, over 1418957.53 frames.], batch size: 38, lr: 1.71e-04 2022-05-29 05:03:11,971 INFO [train.py:842] (0/4) Epoch 32, batch 8300, loss[loss=0.1447, simple_loss=0.2295, pruned_loss=0.02988, over 7066.00 frames.], tot_loss[loss=0.175, simple_loss=0.2632, pruned_loss=0.04341, over 1423038.71 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 05:03:50,994 INFO [train.py:842] (0/4) Epoch 32, batch 8350, loss[loss=0.2548, simple_loss=0.3283, pruned_loss=0.09067, over 7091.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2648, pruned_loss=0.04393, over 1423940.00 frames.], batch size: 28, lr: 1.71e-04 2022-05-29 05:04:30,271 INFO [train.py:842] (0/4) Epoch 32, batch 8400, loss[loss=0.1875, simple_loss=0.2763, pruned_loss=0.04933, over 7162.00 frames.], tot_loss[loss=0.177, simple_loss=0.2654, pruned_loss=0.04431, over 1420418.51 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 05:05:09,497 INFO [train.py:842] (0/4) Epoch 32, batch 8450, loss[loss=0.1441, simple_loss=0.2313, pruned_loss=0.02842, over 7278.00 frames.], tot_loss[loss=0.1762, simple_loss=0.2645, pruned_loss=0.04397, over 1416663.92 frames.], batch size: 17, lr: 1.71e-04 2022-05-29 05:05:49,195 INFO [train.py:842] (0/4) Epoch 32, batch 8500, loss[loss=0.1623, simple_loss=0.2492, pruned_loss=0.03768, over 7070.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2635, pruned_loss=0.04366, over 1417862.11 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 05:06:28,100 INFO [train.py:842] (0/4) Epoch 32, batch 8550, loss[loss=0.1463, simple_loss=0.2364, pruned_loss=0.02811, over 7067.00 frames.], tot_loss[loss=0.1758, simple_loss=0.264, pruned_loss=0.04384, over 1417891.39 frames.], batch size: 28, lr: 1.71e-04 2022-05-29 05:07:07,485 INFO [train.py:842] (0/4) Epoch 32, batch 8600, loss[loss=0.1783, simple_loss=0.2727, pruned_loss=0.0419, over 7226.00 frames.], tot_loss[loss=0.1743, simple_loss=0.263, pruned_loss=0.04284, over 1414556.26 frames.], batch size: 21, lr: 1.71e-04 2022-05-29 05:07:46,504 INFO [train.py:842] (0/4) Epoch 32, batch 8650, loss[loss=0.1301, 
simple_loss=0.2098, pruned_loss=0.02519, over 6793.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2621, pruned_loss=0.04232, over 1419150.89 frames.], batch size: 15, lr: 1.71e-04 2022-05-29 05:08:26,040 INFO [train.py:842] (0/4) Epoch 32, batch 8700, loss[loss=0.1488, simple_loss=0.2334, pruned_loss=0.03213, over 7412.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2626, pruned_loss=0.0426, over 1422950.94 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 05:09:05,077 INFO [train.py:842] (0/4) Epoch 32, batch 8750, loss[loss=0.2308, simple_loss=0.3071, pruned_loss=0.07722, over 5027.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2639, pruned_loss=0.0436, over 1417581.04 frames.], batch size: 53, lr: 1.71e-04 2022-05-29 05:09:44,403 INFO [train.py:842] (0/4) Epoch 32, batch 8800, loss[loss=0.2176, simple_loss=0.3028, pruned_loss=0.06621, over 7289.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2631, pruned_loss=0.04291, over 1412047.69 frames.], batch size: 24, lr: 1.71e-04 2022-05-29 05:10:23,484 INFO [train.py:842] (0/4) Epoch 32, batch 8850, loss[loss=0.1793, simple_loss=0.263, pruned_loss=0.04777, over 7078.00 frames.], tot_loss[loss=0.1755, simple_loss=0.264, pruned_loss=0.04353, over 1411449.44 frames.], batch size: 18, lr: 1.71e-04 2022-05-29 05:11:02,864 INFO [train.py:842] (0/4) Epoch 32, batch 8900, loss[loss=0.1958, simple_loss=0.2955, pruned_loss=0.04802, over 7325.00 frames.], tot_loss[loss=0.1767, simple_loss=0.265, pruned_loss=0.04417, over 1401475.76 frames.], batch size: 21, lr: 1.71e-04 2022-05-29 05:11:41,558 INFO [train.py:842] (0/4) Epoch 32, batch 8950, loss[loss=0.1904, simple_loss=0.2693, pruned_loss=0.05573, over 5137.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2648, pruned_loss=0.04323, over 1397170.45 frames.], batch size: 52, lr: 1.71e-04 2022-05-29 05:12:20,906 INFO [train.py:842] (0/4) Epoch 32, batch 9000, loss[loss=0.1452, simple_loss=0.2364, pruned_loss=0.027, over 6817.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2639, pruned_loss=0.04287, over 1390694.57 frames.], batch size: 15, lr: 1.71e-04 2022-05-29 05:12:20,907 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 05:12:30,815 INFO [train.py:871] (0/4) Epoch 32, validation: loss=0.1637, simple_loss=0.261, pruned_loss=0.03323, over 868885.00 frames. 
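For post-processing this log, for example to plot how tot_loss and the learning rate evolve across the epochs above, the per-batch records follow a regular shape. The small parser sketched below is written against the record format shown here; the regex and the parse helper are assumptions about that format, not an icefall utility.

# Illustrative parser for the per-batch records above (an assumption about
# the record format, not an official tool); extracts fields for plotting.
import re

RECORD = re.compile(
    r"Epoch (?P<epoch>\d+), batch (?P<batch>\d+), "
    r"loss\[.*?\], "
    r"tot_loss\[loss=(?P<tot_loss>[\d.]+), simple_loss=(?P<simple>[\d.]+), "
    r"pruned_loss=(?P<pruned>[\d.]+), over (?P<frames>[\d.]+) frames\.\], "
    r"batch size: (?P<batch_size>\d+), lr: (?P<lr>[\d.e-]+)"
)


def parse(text: str):
    """Yield (epoch, batch, tot_loss, lr) tuples from concatenated log text."""
    for m in RECORD.finditer(text):
        yield (int(m["epoch"]), int(m["batch"]),
               float(m["tot_loss"]), float(m["lr"]))


sample = ("Epoch 32, batch 9000, loss[loss=0.1452, simple_loss=0.2364, "
          "pruned_loss=0.027, over 6817.00 frames.], tot_loss[loss=0.1748, "
          "simple_loss=0.2639, pruned_loss=0.04287, over 1390694.57 frames.], "
          "batch size: 15, lr: 1.71e-04")
print(list(parse(sample)))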
2022-05-29 05:13:09,660 INFO [train.py:842] (0/4) Epoch 32, batch 9050, loss[loss=0.2033, simple_loss=0.2933, pruned_loss=0.05669, over 7095.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2649, pruned_loss=0.04333, over 1379447.45 frames.], batch size: 28, lr: 1.71e-04 2022-05-29 05:13:48,007 INFO [train.py:842] (0/4) Epoch 32, batch 9100, loss[loss=0.1821, simple_loss=0.2815, pruned_loss=0.04141, over 4923.00 frames.], tot_loss[loss=0.1796, simple_loss=0.2681, pruned_loss=0.0455, over 1329939.69 frames.], batch size: 53, lr: 1.71e-04 2022-05-29 05:14:25,942 INFO [train.py:842] (0/4) Epoch 32, batch 9150, loss[loss=0.2421, simple_loss=0.3318, pruned_loss=0.0762, over 5324.00 frames.], tot_loss[loss=0.1843, simple_loss=0.2719, pruned_loss=0.04836, over 1260277.95 frames.], batch size: 52, lr: 1.71e-04 2022-05-29 05:14:59,031 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-32.pt 2022-05-29 05:15:18,077 INFO [train.py:842] (0/4) Epoch 33, batch 0, loss[loss=0.2554, simple_loss=0.3317, pruned_loss=0.08952, over 6741.00 frames.], tot_loss[loss=0.2554, simple_loss=0.3317, pruned_loss=0.08952, over 6741.00 frames.], batch size: 31, lr: 1.68e-04 2022-05-29 05:15:57,432 INFO [train.py:842] (0/4) Epoch 33, batch 50, loss[loss=0.1685, simple_loss=0.2611, pruned_loss=0.03795, over 5276.00 frames.], tot_loss[loss=0.177, simple_loss=0.265, pruned_loss=0.04447, over 314334.46 frames.], batch size: 52, lr: 1.68e-04 2022-05-29 05:16:36,950 INFO [train.py:842] (0/4) Epoch 33, batch 100, loss[loss=0.1834, simple_loss=0.2751, pruned_loss=0.04584, over 6513.00 frames.], tot_loss[loss=0.1768, simple_loss=0.2652, pruned_loss=0.04423, over 558901.11 frames.], batch size: 38, lr: 1.68e-04 2022-05-29 05:17:16,111 INFO [train.py:842] (0/4) Epoch 33, batch 150, loss[loss=0.2067, simple_loss=0.2879, pruned_loss=0.06271, over 7202.00 frames.], tot_loss[loss=0.1763, simple_loss=0.2656, pruned_loss=0.04348, over 751047.14 frames.], batch size: 23, lr: 1.68e-04 2022-05-29 05:17:55,469 INFO [train.py:842] (0/4) Epoch 33, batch 200, loss[loss=0.1421, simple_loss=0.2282, pruned_loss=0.02795, over 7005.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2637, pruned_loss=0.04321, over 895556.20 frames.], batch size: 16, lr: 1.68e-04 2022-05-29 05:18:34,579 INFO [train.py:842] (0/4) Epoch 33, batch 250, loss[loss=0.1644, simple_loss=0.2625, pruned_loss=0.03319, over 7238.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2638, pruned_loss=0.04303, over 1010236.32 frames.], batch size: 20, lr: 1.68e-04 2022-05-29 05:19:13,819 INFO [train.py:842] (0/4) Epoch 33, batch 300, loss[loss=0.167, simple_loss=0.2645, pruned_loss=0.0348, over 6810.00 frames.], tot_loss[loss=0.175, simple_loss=0.2642, pruned_loss=0.0429, over 1093193.12 frames.], batch size: 31, lr: 1.68e-04 2022-05-29 05:19:52,960 INFO [train.py:842] (0/4) Epoch 33, batch 350, loss[loss=0.1599, simple_loss=0.2392, pruned_loss=0.04037, over 7418.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2638, pruned_loss=0.04303, over 1163286.53 frames.], batch size: 18, lr: 1.68e-04 2022-05-29 05:20:32,722 INFO [train.py:842] (0/4) Epoch 33, batch 400, loss[loss=0.209, simple_loss=0.2898, pruned_loss=0.06413, over 7428.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2624, pruned_loss=0.0426, over 1219900.06 frames.], batch size: 20, lr: 1.68e-04 2022-05-29 05:21:12,021 INFO [train.py:842] (0/4) Epoch 33, batch 450, loss[loss=0.1673, simple_loss=0.2643, pruned_loss=0.03514, over 6802.00 frames.], tot_loss[loss=0.1744, 
simple_loss=0.2626, pruned_loss=0.04309, over 1261911.52 frames.], batch size: 31, lr: 1.68e-04 2022-05-29 05:21:51,578 INFO [train.py:842] (0/4) Epoch 33, batch 500, loss[loss=0.1819, simple_loss=0.2755, pruned_loss=0.0441, over 7197.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2636, pruned_loss=0.04351, over 1299692.62 frames.], batch size: 23, lr: 1.68e-04 2022-05-29 05:22:30,648 INFO [train.py:842] (0/4) Epoch 33, batch 550, loss[loss=0.2001, simple_loss=0.2896, pruned_loss=0.05534, over 7323.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2643, pruned_loss=0.04352, over 1328660.76 frames.], batch size: 21, lr: 1.68e-04 2022-05-29 05:23:09,921 INFO [train.py:842] (0/4) Epoch 33, batch 600, loss[loss=0.1939, simple_loss=0.2808, pruned_loss=0.05351, over 7279.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2645, pruned_loss=0.0434, over 1346510.71 frames.], batch size: 24, lr: 1.68e-04 2022-05-29 05:23:49,149 INFO [train.py:842] (0/4) Epoch 33, batch 650, loss[loss=0.1642, simple_loss=0.2607, pruned_loss=0.03384, over 7220.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2647, pruned_loss=0.04318, over 1363414.34 frames.], batch size: 26, lr: 1.68e-04 2022-05-29 05:24:28,760 INFO [train.py:842] (0/4) Epoch 33, batch 700, loss[loss=0.1455, simple_loss=0.2246, pruned_loss=0.03324, over 7139.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2639, pruned_loss=0.04295, over 1373899.28 frames.], batch size: 17, lr: 1.68e-04 2022-05-29 05:25:07,924 INFO [train.py:842] (0/4) Epoch 33, batch 750, loss[loss=0.1514, simple_loss=0.2524, pruned_loss=0.02521, over 7225.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2644, pruned_loss=0.04312, over 1380743.74 frames.], batch size: 21, lr: 1.68e-04 2022-05-29 05:25:47,774 INFO [train.py:842] (0/4) Epoch 33, batch 800, loss[loss=0.1718, simple_loss=0.27, pruned_loss=0.03684, over 7424.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2627, pruned_loss=0.04215, over 1392583.57 frames.], batch size: 20, lr: 1.68e-04 2022-05-29 05:26:26,915 INFO [train.py:842] (0/4) Epoch 33, batch 850, loss[loss=0.194, simple_loss=0.2824, pruned_loss=0.05279, over 7369.00 frames.], tot_loss[loss=0.174, simple_loss=0.2634, pruned_loss=0.04234, over 1399786.70 frames.], batch size: 23, lr: 1.68e-04 2022-05-29 05:27:06,452 INFO [train.py:842] (0/4) Epoch 33, batch 900, loss[loss=0.2035, simple_loss=0.2978, pruned_loss=0.05465, over 7215.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2623, pruned_loss=0.04194, over 1408909.70 frames.], batch size: 23, lr: 1.68e-04 2022-05-29 05:27:45,831 INFO [train.py:842] (0/4) Epoch 33, batch 950, loss[loss=0.1557, simple_loss=0.2487, pruned_loss=0.03131, over 7437.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2621, pruned_loss=0.04165, over 1413776.26 frames.], batch size: 20, lr: 1.68e-04 2022-05-29 05:28:25,606 INFO [train.py:842] (0/4) Epoch 33, batch 1000, loss[loss=0.1629, simple_loss=0.2512, pruned_loss=0.03727, over 7207.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2616, pruned_loss=0.04162, over 1414350.19 frames.], batch size: 23, lr: 1.68e-04 2022-05-29 05:29:04,476 INFO [train.py:842] (0/4) Epoch 33, batch 1050, loss[loss=0.2079, simple_loss=0.2879, pruned_loss=0.06391, over 7108.00 frames.], tot_loss[loss=0.173, simple_loss=0.2622, pruned_loss=0.04187, over 1413068.80 frames.], batch size: 28, lr: 1.68e-04 2022-05-29 05:29:43,852 INFO [train.py:842] (0/4) Epoch 33, batch 1100, loss[loss=0.1708, simple_loss=0.2626, pruned_loss=0.03951, over 7279.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2622, 
pruned_loss=0.04207, over 1418409.56 frames.], batch size: 24, lr: 1.68e-04 2022-05-29 05:30:22,960 INFO [train.py:842] (0/4) Epoch 33, batch 1150, loss[loss=0.1678, simple_loss=0.2654, pruned_loss=0.03513, over 7207.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2634, pruned_loss=0.04276, over 1419518.21 frames.], batch size: 23, lr: 1.68e-04 2022-05-29 05:31:02,279 INFO [train.py:842] (0/4) Epoch 33, batch 1200, loss[loss=0.1939, simple_loss=0.289, pruned_loss=0.04934, over 7154.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2642, pruned_loss=0.04262, over 1421783.76 frames.], batch size: 26, lr: 1.68e-04 2022-05-29 05:31:41,545 INFO [train.py:842] (0/4) Epoch 33, batch 1250, loss[loss=0.1437, simple_loss=0.2398, pruned_loss=0.0238, over 6178.00 frames.], tot_loss[loss=0.174, simple_loss=0.2636, pruned_loss=0.04222, over 1420108.50 frames.], batch size: 37, lr: 1.68e-04 2022-05-29 05:32:21,169 INFO [train.py:842] (0/4) Epoch 33, batch 1300, loss[loss=0.1612, simple_loss=0.2612, pruned_loss=0.03057, over 7226.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2628, pruned_loss=0.04192, over 1420977.62 frames.], batch size: 21, lr: 1.68e-04 2022-05-29 05:33:00,621 INFO [train.py:842] (0/4) Epoch 33, batch 1350, loss[loss=0.161, simple_loss=0.2334, pruned_loss=0.04428, over 7275.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2625, pruned_loss=0.04236, over 1419672.72 frames.], batch size: 17, lr: 1.68e-04 2022-05-29 05:33:40,379 INFO [train.py:842] (0/4) Epoch 33, batch 1400, loss[loss=0.2015, simple_loss=0.2966, pruned_loss=0.05322, over 7151.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2618, pruned_loss=0.04189, over 1421024.71 frames.], batch size: 20, lr: 1.68e-04 2022-05-29 05:34:19,603 INFO [train.py:842] (0/4) Epoch 33, batch 1450, loss[loss=0.1784, simple_loss=0.2748, pruned_loss=0.04098, over 6892.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2626, pruned_loss=0.04211, over 1424385.57 frames.], batch size: 31, lr: 1.67e-04 2022-05-29 05:34:59,081 INFO [train.py:842] (0/4) Epoch 33, batch 1500, loss[loss=0.2189, simple_loss=0.3031, pruned_loss=0.06738, over 5167.00 frames.], tot_loss[loss=0.173, simple_loss=0.2621, pruned_loss=0.0419, over 1422134.28 frames.], batch size: 52, lr: 1.67e-04 2022-05-29 05:35:38,264 INFO [train.py:842] (0/4) Epoch 33, batch 1550, loss[loss=0.1583, simple_loss=0.2549, pruned_loss=0.0309, over 7218.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2635, pruned_loss=0.04249, over 1418351.72 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 05:36:17,786 INFO [train.py:842] (0/4) Epoch 33, batch 1600, loss[loss=0.1767, simple_loss=0.2727, pruned_loss=0.04033, over 7405.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2632, pruned_loss=0.04262, over 1420029.92 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 05:36:57,040 INFO [train.py:842] (0/4) Epoch 33, batch 1650, loss[loss=0.2187, simple_loss=0.3095, pruned_loss=0.06394, over 7221.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2637, pruned_loss=0.04272, over 1420127.43 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 05:37:36,354 INFO [train.py:842] (0/4) Epoch 33, batch 1700, loss[loss=0.2272, simple_loss=0.3197, pruned_loss=0.06731, over 7287.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2639, pruned_loss=0.04278, over 1423065.62 frames.], batch size: 24, lr: 1.67e-04 2022-05-29 05:38:15,183 INFO [train.py:842] (0/4) Epoch 33, batch 1750, loss[loss=0.1624, simple_loss=0.2568, pruned_loss=0.03399, over 7143.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2645, pruned_loss=0.04317, over 
1416277.37 frames.], batch size: 28, lr: 1.67e-04 2022-05-29 05:38:54,945 INFO [train.py:842] (0/4) Epoch 33, batch 1800, loss[loss=0.1572, simple_loss=0.2418, pruned_loss=0.03633, over 7253.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2643, pruned_loss=0.04327, over 1420580.70 frames.], batch size: 19, lr: 1.67e-04 2022-05-29 05:39:34,275 INFO [train.py:842] (0/4) Epoch 33, batch 1850, loss[loss=0.1765, simple_loss=0.2749, pruned_loss=0.03907, over 7311.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2643, pruned_loss=0.0431, over 1423736.79 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 05:39:42,449 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-296000.pt 2022-05-29 05:40:16,630 INFO [train.py:842] (0/4) Epoch 33, batch 1900, loss[loss=0.1684, simple_loss=0.2482, pruned_loss=0.04429, over 7398.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2631, pruned_loss=0.0424, over 1426155.90 frames.], batch size: 23, lr: 1.67e-04 2022-05-29 05:40:55,641 INFO [train.py:842] (0/4) Epoch 33, batch 1950, loss[loss=0.1827, simple_loss=0.2749, pruned_loss=0.04532, over 7318.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2635, pruned_loss=0.0426, over 1425468.38 frames.], batch size: 24, lr: 1.67e-04 2022-05-29 05:41:35,596 INFO [train.py:842] (0/4) Epoch 33, batch 2000, loss[loss=0.1729, simple_loss=0.2702, pruned_loss=0.03785, over 6227.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2626, pruned_loss=0.04221, over 1427145.08 frames.], batch size: 37, lr: 1.67e-04 2022-05-29 05:42:14,864 INFO [train.py:842] (0/4) Epoch 33, batch 2050, loss[loss=0.1547, simple_loss=0.2453, pruned_loss=0.03207, over 7161.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2626, pruned_loss=0.0423, over 1427047.09 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 05:42:54,692 INFO [train.py:842] (0/4) Epoch 33, batch 2100, loss[loss=0.1512, simple_loss=0.2414, pruned_loss=0.03057, over 7158.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2623, pruned_loss=0.0424, over 1427711.77 frames.], batch size: 19, lr: 1.67e-04 2022-05-29 05:43:33,963 INFO [train.py:842] (0/4) Epoch 33, batch 2150, loss[loss=0.1616, simple_loss=0.2428, pruned_loss=0.04023, over 7399.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2622, pruned_loss=0.04219, over 1428843.69 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 05:44:13,533 INFO [train.py:842] (0/4) Epoch 33, batch 2200, loss[loss=0.2129, simple_loss=0.2966, pruned_loss=0.06456, over 5116.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2634, pruned_loss=0.04275, over 1422984.80 frames.], batch size: 52, lr: 1.67e-04 2022-05-29 05:44:52,670 INFO [train.py:842] (0/4) Epoch 33, batch 2250, loss[loss=0.1629, simple_loss=0.2559, pruned_loss=0.03495, over 7179.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2626, pruned_loss=0.04241, over 1420734.66 frames.], batch size: 26, lr: 1.67e-04 2022-05-29 05:45:32,520 INFO [train.py:842] (0/4) Epoch 33, batch 2300, loss[loss=0.163, simple_loss=0.2557, pruned_loss=0.03513, over 7201.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2615, pruned_loss=0.04206, over 1419294.50 frames.], batch size: 22, lr: 1.67e-04 2022-05-29 05:46:11,727 INFO [train.py:842] (0/4) Epoch 33, batch 2350, loss[loss=0.1339, simple_loss=0.2265, pruned_loss=0.02064, over 7236.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2615, pruned_loss=0.04205, over 1422887.05 frames.], batch size: 16, lr: 1.67e-04 2022-05-29 05:46:51,707 INFO [train.py:842] (0/4) Epoch 33, batch 2400, loss[loss=0.1937, 
simple_loss=0.2728, pruned_loss=0.05729, over 7433.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2608, pruned_loss=0.04215, over 1425820.86 frames.], batch size: 20, lr: 1.67e-04 2022-05-29 05:47:31,158 INFO [train.py:842] (0/4) Epoch 33, batch 2450, loss[loss=0.1472, simple_loss=0.2326, pruned_loss=0.03089, over 7257.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2599, pruned_loss=0.0416, over 1426665.37 frames.], batch size: 19, lr: 1.67e-04 2022-05-29 05:48:10,855 INFO [train.py:842] (0/4) Epoch 33, batch 2500, loss[loss=0.178, simple_loss=0.2686, pruned_loss=0.04372, over 7313.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2596, pruned_loss=0.04133, over 1428615.75 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 05:48:50,209 INFO [train.py:842] (0/4) Epoch 33, batch 2550, loss[loss=0.1895, simple_loss=0.2914, pruned_loss=0.04376, over 7385.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2601, pruned_loss=0.0416, over 1428098.16 frames.], batch size: 23, lr: 1.67e-04 2022-05-29 05:49:30,017 INFO [train.py:842] (0/4) Epoch 33, batch 2600, loss[loss=0.2068, simple_loss=0.2913, pruned_loss=0.06112, over 7221.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2603, pruned_loss=0.04157, over 1428735.80 frames.], batch size: 23, lr: 1.67e-04 2022-05-29 05:50:08,982 INFO [train.py:842] (0/4) Epoch 33, batch 2650, loss[loss=0.1639, simple_loss=0.2433, pruned_loss=0.04226, over 6804.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2608, pruned_loss=0.04134, over 1423388.34 frames.], batch size: 15, lr: 1.67e-04 2022-05-29 05:50:48,473 INFO [train.py:842] (0/4) Epoch 33, batch 2700, loss[loss=0.1564, simple_loss=0.2463, pruned_loss=0.03322, over 7421.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2619, pruned_loss=0.04186, over 1424499.19 frames.], batch size: 20, lr: 1.67e-04 2022-05-29 05:51:27,567 INFO [train.py:842] (0/4) Epoch 33, batch 2750, loss[loss=0.1452, simple_loss=0.2288, pruned_loss=0.03084, over 7288.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2619, pruned_loss=0.04152, over 1425563.82 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 05:52:07,117 INFO [train.py:842] (0/4) Epoch 33, batch 2800, loss[loss=0.172, simple_loss=0.2628, pruned_loss=0.04058, over 7218.00 frames.], tot_loss[loss=0.172, simple_loss=0.2613, pruned_loss=0.04135, over 1424349.90 frames.], batch size: 23, lr: 1.67e-04 2022-05-29 05:52:46,420 INFO [train.py:842] (0/4) Epoch 33, batch 2850, loss[loss=0.1621, simple_loss=0.2519, pruned_loss=0.03615, over 7310.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2608, pruned_loss=0.04134, over 1425946.93 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 05:53:25,967 INFO [train.py:842] (0/4) Epoch 33, batch 2900, loss[loss=0.1951, simple_loss=0.2746, pruned_loss=0.0578, over 7299.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2615, pruned_loss=0.04154, over 1425745.18 frames.], batch size: 25, lr: 1.67e-04 2022-05-29 05:54:05,082 INFO [train.py:842] (0/4) Epoch 33, batch 2950, loss[loss=0.1914, simple_loss=0.2803, pruned_loss=0.05123, over 7429.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2623, pruned_loss=0.04201, over 1428469.77 frames.], batch size: 20, lr: 1.67e-04 2022-05-29 05:54:44,594 INFO [train.py:842] (0/4) Epoch 33, batch 3000, loss[loss=0.1775, simple_loss=0.2588, pruned_loss=0.04809, over 7061.00 frames.], tot_loss[loss=0.1734, simple_loss=0.262, pruned_loss=0.04242, over 1427138.94 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 05:54:44,595 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 05:54:54,324 INFO 
[train.py:871] (0/4) Epoch 33, validation: loss=0.1647, simple_loss=0.2614, pruned_loss=0.03398, over 868885.00 frames. 2022-05-29 05:55:33,719 INFO [train.py:842] (0/4) Epoch 33, batch 3050, loss[loss=0.187, simple_loss=0.2796, pruned_loss=0.04723, over 6461.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2615, pruned_loss=0.04208, over 1424207.37 frames.], batch size: 37, lr: 1.67e-04 2022-05-29 05:56:13,279 INFO [train.py:842] (0/4) Epoch 33, batch 3100, loss[loss=0.1834, simple_loss=0.2705, pruned_loss=0.04815, over 7394.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2618, pruned_loss=0.04237, over 1424685.09 frames.], batch size: 23, lr: 1.67e-04 2022-05-29 05:56:52,612 INFO [train.py:842] (0/4) Epoch 33, batch 3150, loss[loss=0.1564, simple_loss=0.2437, pruned_loss=0.03462, over 7064.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2604, pruned_loss=0.04173, over 1421801.47 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 05:57:42,962 INFO [train.py:842] (0/4) Epoch 33, batch 3200, loss[loss=0.1881, simple_loss=0.2609, pruned_loss=0.05768, over 6782.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2606, pruned_loss=0.04149, over 1421540.71 frames.], batch size: 15, lr: 1.67e-04 2022-05-29 05:58:22,105 INFO [train.py:842] (0/4) Epoch 33, batch 3250, loss[loss=0.1339, simple_loss=0.2161, pruned_loss=0.02581, over 7293.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2606, pruned_loss=0.04183, over 1418387.58 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 05:59:02,038 INFO [train.py:842] (0/4) Epoch 33, batch 3300, loss[loss=0.1909, simple_loss=0.3, pruned_loss=0.04088, over 7229.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2609, pruned_loss=0.04191, over 1424366.56 frames.], batch size: 20, lr: 1.67e-04 2022-05-29 05:59:41,360 INFO [train.py:842] (0/4) Epoch 33, batch 3350, loss[loss=0.1834, simple_loss=0.268, pruned_loss=0.04943, over 7309.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2617, pruned_loss=0.04226, over 1428183.45 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 06:00:20,955 INFO [train.py:842] (0/4) Epoch 33, batch 3400, loss[loss=0.1359, simple_loss=0.2158, pruned_loss=0.02806, over 7269.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2609, pruned_loss=0.04196, over 1428009.22 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 06:01:00,223 INFO [train.py:842] (0/4) Epoch 33, batch 3450, loss[loss=0.1617, simple_loss=0.2557, pruned_loss=0.03387, over 7335.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2606, pruned_loss=0.04153, over 1431704.79 frames.], batch size: 20, lr: 1.67e-04 2022-05-29 06:01:39,798 INFO [train.py:842] (0/4) Epoch 33, batch 3500, loss[loss=0.1682, simple_loss=0.2616, pruned_loss=0.03735, over 7372.00 frames.], tot_loss[loss=0.1733, simple_loss=0.262, pruned_loss=0.04231, over 1428527.36 frames.], batch size: 23, lr: 1.67e-04 2022-05-29 06:02:19,061 INFO [train.py:842] (0/4) Epoch 33, batch 3550, loss[loss=0.1924, simple_loss=0.2662, pruned_loss=0.05927, over 7409.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2617, pruned_loss=0.04245, over 1427231.21 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 06:02:58,515 INFO [train.py:842] (0/4) Epoch 33, batch 3600, loss[loss=0.1854, simple_loss=0.2762, pruned_loss=0.0473, over 7324.00 frames.], tot_loss[loss=0.1732, simple_loss=0.262, pruned_loss=0.04222, over 1424140.20 frames.], batch size: 20, lr: 1.67e-04 2022-05-29 06:03:38,022 INFO [train.py:842] (0/4) Epoch 33, batch 3650, loss[loss=0.2146, simple_loss=0.3105, pruned_loss=0.05934, over 7319.00 frames.], tot_loss[loss=0.1737, 
simple_loss=0.2622, pruned_loss=0.04266, over 1423349.38 frames.], batch size: 20, lr: 1.67e-04 2022-05-29 06:04:17,599 INFO [train.py:842] (0/4) Epoch 33, batch 3700, loss[loss=0.1661, simple_loss=0.2422, pruned_loss=0.04498, over 7269.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2633, pruned_loss=0.04287, over 1426366.45 frames.], batch size: 17, lr: 1.67e-04 2022-05-29 06:04:56,705 INFO [train.py:842] (0/4) Epoch 33, batch 3750, loss[loss=0.1885, simple_loss=0.2873, pruned_loss=0.04488, over 7209.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2645, pruned_loss=0.04338, over 1426105.52 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 06:05:36,551 INFO [train.py:842] (0/4) Epoch 33, batch 3800, loss[loss=0.1967, simple_loss=0.2882, pruned_loss=0.05258, over 7226.00 frames.], tot_loss[loss=0.175, simple_loss=0.2639, pruned_loss=0.04301, over 1426917.83 frames.], batch size: 23, lr: 1.67e-04 2022-05-29 06:06:15,674 INFO [train.py:842] (0/4) Epoch 33, batch 3850, loss[loss=0.1708, simple_loss=0.2602, pruned_loss=0.04067, over 7312.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2637, pruned_loss=0.04285, over 1427282.60 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 06:06:55,067 INFO [train.py:842] (0/4) Epoch 33, batch 3900, loss[loss=0.1773, simple_loss=0.2609, pruned_loss=0.04684, over 6815.00 frames.], tot_loss[loss=0.1745, simple_loss=0.264, pruned_loss=0.04252, over 1427854.59 frames.], batch size: 15, lr: 1.67e-04 2022-05-29 06:07:34,368 INFO [train.py:842] (0/4) Epoch 33, batch 3950, loss[loss=0.1341, simple_loss=0.2157, pruned_loss=0.02623, over 6776.00 frames.], tot_loss[loss=0.175, simple_loss=0.2641, pruned_loss=0.043, over 1429159.93 frames.], batch size: 15, lr: 1.67e-04 2022-05-29 06:08:14,171 INFO [train.py:842] (0/4) Epoch 33, batch 4000, loss[loss=0.1929, simple_loss=0.288, pruned_loss=0.0489, over 4877.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2632, pruned_loss=0.04251, over 1430344.44 frames.], batch size: 52, lr: 1.67e-04 2022-05-29 06:08:53,347 INFO [train.py:842] (0/4) Epoch 33, batch 4050, loss[loss=0.164, simple_loss=0.2543, pruned_loss=0.03684, over 7244.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2622, pruned_loss=0.04221, over 1426091.61 frames.], batch size: 19, lr: 1.67e-04 2022-05-29 06:09:32,994 INFO [train.py:842] (0/4) Epoch 33, batch 4100, loss[loss=0.2069, simple_loss=0.2997, pruned_loss=0.057, over 7324.00 frames.], tot_loss[loss=0.173, simple_loss=0.2617, pruned_loss=0.04214, over 1425782.42 frames.], batch size: 25, lr: 1.67e-04 2022-05-29 06:10:12,287 INFO [train.py:842] (0/4) Epoch 33, batch 4150, loss[loss=0.176, simple_loss=0.2596, pruned_loss=0.04618, over 7158.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2617, pruned_loss=0.04199, over 1420787.43 frames.], batch size: 19, lr: 1.67e-04 2022-05-29 06:10:52,067 INFO [train.py:842] (0/4) Epoch 33, batch 4200, loss[loss=0.1766, simple_loss=0.2744, pruned_loss=0.03943, over 7409.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2614, pruned_loss=0.04206, over 1426067.48 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 06:11:31,143 INFO [train.py:842] (0/4) Epoch 33, batch 4250, loss[loss=0.1851, simple_loss=0.2728, pruned_loss=0.04872, over 7338.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2624, pruned_loss=0.0425, over 1423917.57 frames.], batch size: 22, lr: 1.67e-04 2022-05-29 06:12:10,755 INFO [train.py:842] (0/4) Epoch 33, batch 4300, loss[loss=0.182, simple_loss=0.2664, pruned_loss=0.04879, over 7173.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2623, 
pruned_loss=0.04223, over 1423469.96 frames.], batch size: 19, lr: 1.67e-04 2022-05-29 06:12:50,130 INFO [train.py:842] (0/4) Epoch 33, batch 4350, loss[loss=0.1879, simple_loss=0.2822, pruned_loss=0.04674, over 7221.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2616, pruned_loss=0.04182, over 1425605.09 frames.], batch size: 20, lr: 1.67e-04 2022-05-29 06:13:29,858 INFO [train.py:842] (0/4) Epoch 33, batch 4400, loss[loss=0.2178, simple_loss=0.3111, pruned_loss=0.06222, over 6269.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2616, pruned_loss=0.04166, over 1423640.56 frames.], batch size: 37, lr: 1.67e-04 2022-05-29 06:14:09,287 INFO [train.py:842] (0/4) Epoch 33, batch 4450, loss[loss=0.1633, simple_loss=0.2464, pruned_loss=0.04016, over 7414.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2621, pruned_loss=0.04171, over 1423688.18 frames.], batch size: 17, lr: 1.67e-04 2022-05-29 06:14:48,768 INFO [train.py:842] (0/4) Epoch 33, batch 4500, loss[loss=0.1622, simple_loss=0.2494, pruned_loss=0.03751, over 7325.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2623, pruned_loss=0.04143, over 1425142.98 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 06:15:28,052 INFO [train.py:842] (0/4) Epoch 33, batch 4550, loss[loss=0.1619, simple_loss=0.2585, pruned_loss=0.03269, over 7294.00 frames.], tot_loss[loss=0.1717, simple_loss=0.262, pruned_loss=0.04071, over 1423259.41 frames.], batch size: 25, lr: 1.67e-04 2022-05-29 06:16:07,567 INFO [train.py:842] (0/4) Epoch 33, batch 4600, loss[loss=0.1852, simple_loss=0.2781, pruned_loss=0.04613, over 6725.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2622, pruned_loss=0.04126, over 1422259.18 frames.], batch size: 31, lr: 1.67e-04 2022-05-29 06:16:46,698 INFO [train.py:842] (0/4) Epoch 33, batch 4650, loss[loss=0.1502, simple_loss=0.2397, pruned_loss=0.03038, over 7429.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2627, pruned_loss=0.04148, over 1420287.50 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 06:17:26,328 INFO [train.py:842] (0/4) Epoch 33, batch 4700, loss[loss=0.1716, simple_loss=0.2702, pruned_loss=0.03648, over 6190.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2639, pruned_loss=0.04235, over 1422129.89 frames.], batch size: 37, lr: 1.67e-04 2022-05-29 06:18:05,553 INFO [train.py:842] (0/4) Epoch 33, batch 4750, loss[loss=0.1906, simple_loss=0.2669, pruned_loss=0.05713, over 7291.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2629, pruned_loss=0.04205, over 1422511.05 frames.], batch size: 17, lr: 1.67e-04 2022-05-29 06:18:45,207 INFO [train.py:842] (0/4) Epoch 33, batch 4800, loss[loss=0.1787, simple_loss=0.2767, pruned_loss=0.04035, over 7116.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2619, pruned_loss=0.04188, over 1423306.60 frames.], batch size: 21, lr: 1.67e-04 2022-05-29 06:19:24,419 INFO [train.py:842] (0/4) Epoch 33, batch 4850, loss[loss=0.2098, simple_loss=0.3022, pruned_loss=0.05871, over 6303.00 frames.], tot_loss[loss=0.173, simple_loss=0.2618, pruned_loss=0.04211, over 1418949.49 frames.], batch size: 37, lr: 1.67e-04 2022-05-29 06:20:04,027 INFO [train.py:842] (0/4) Epoch 33, batch 4900, loss[loss=0.1811, simple_loss=0.2692, pruned_loss=0.04648, over 7251.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2613, pruned_loss=0.04208, over 1418762.36 frames.], batch size: 19, lr: 1.67e-04 2022-05-29 06:20:43,300 INFO [train.py:842] (0/4) Epoch 33, batch 4950, loss[loss=0.1655, simple_loss=0.2504, pruned_loss=0.04031, over 7058.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2628, pruned_loss=0.04288, 
over 1417350.12 frames.], batch size: 18, lr: 1.67e-04 2022-05-29 06:21:22,690 INFO [train.py:842] (0/4) Epoch 33, batch 5000, loss[loss=0.1476, simple_loss=0.2406, pruned_loss=0.02735, over 6704.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2619, pruned_loss=0.04217, over 1415014.86 frames.], batch size: 31, lr: 1.66e-04 2022-05-29 06:22:02,145 INFO [train.py:842] (0/4) Epoch 33, batch 5050, loss[loss=0.1683, simple_loss=0.2613, pruned_loss=0.03764, over 7180.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2614, pruned_loss=0.04184, over 1415298.91 frames.], batch size: 26, lr: 1.66e-04 2022-05-29 06:22:41,583 INFO [train.py:842] (0/4) Epoch 33, batch 5100, loss[loss=0.1384, simple_loss=0.2249, pruned_loss=0.02593, over 7288.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2626, pruned_loss=0.04256, over 1413591.24 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:23:20,488 INFO [train.py:842] (0/4) Epoch 33, batch 5150, loss[loss=0.1726, simple_loss=0.2722, pruned_loss=0.03649, over 7223.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2636, pruned_loss=0.04278, over 1406637.18 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 06:24:00,113 INFO [train.py:842] (0/4) Epoch 33, batch 5200, loss[loss=0.1357, simple_loss=0.2172, pruned_loss=0.02713, over 6999.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2636, pruned_loss=0.04288, over 1413770.57 frames.], batch size: 16, lr: 1.66e-04 2022-05-29 06:24:38,969 INFO [train.py:842] (0/4) Epoch 33, batch 5250, loss[loss=0.1514, simple_loss=0.2488, pruned_loss=0.02695, over 7152.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2636, pruned_loss=0.04275, over 1415335.50 frames.], batch size: 20, lr: 1.66e-04 2022-05-29 06:25:18,426 INFO [train.py:842] (0/4) Epoch 33, batch 5300, loss[loss=0.1503, simple_loss=0.242, pruned_loss=0.02937, over 7067.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2636, pruned_loss=0.04277, over 1416096.78 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:25:57,780 INFO [train.py:842] (0/4) Epoch 33, batch 5350, loss[loss=0.1726, simple_loss=0.2644, pruned_loss=0.04038, over 7216.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2627, pruned_loss=0.04229, over 1418681.58 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 06:26:37,435 INFO [train.py:842] (0/4) Epoch 33, batch 5400, loss[loss=0.1814, simple_loss=0.2679, pruned_loss=0.04745, over 6935.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2622, pruned_loss=0.04264, over 1420415.90 frames.], batch size: 32, lr: 1.66e-04 2022-05-29 06:27:16,746 INFO [train.py:842] (0/4) Epoch 33, batch 5450, loss[loss=0.1627, simple_loss=0.2537, pruned_loss=0.03585, over 7328.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2616, pruned_loss=0.04229, over 1422399.33 frames.], batch size: 22, lr: 1.66e-04 2022-05-29 06:27:56,398 INFO [train.py:842] (0/4) Epoch 33, batch 5500, loss[loss=0.1489, simple_loss=0.2408, pruned_loss=0.02848, over 7298.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2612, pruned_loss=0.04187, over 1425186.45 frames.], batch size: 17, lr: 1.66e-04 2022-05-29 06:28:35,875 INFO [train.py:842] (0/4) Epoch 33, batch 5550, loss[loss=0.1793, simple_loss=0.2717, pruned_loss=0.04343, over 7430.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2606, pruned_loss=0.04135, over 1426588.88 frames.], batch size: 20, lr: 1.66e-04 2022-05-29 06:29:15,510 INFO [train.py:842] (0/4) Epoch 33, batch 5600, loss[loss=0.1829, simple_loss=0.2745, pruned_loss=0.04564, over 4840.00 frames.], tot_loss[loss=0.172, simple_loss=0.2608, pruned_loss=0.04164, over 1421081.23 
frames.], batch size: 52, lr: 1.66e-04 2022-05-29 06:29:54,948 INFO [train.py:842] (0/4) Epoch 33, batch 5650, loss[loss=0.1985, simple_loss=0.2869, pruned_loss=0.055, over 7374.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2617, pruned_loss=0.04277, over 1424322.34 frames.], batch size: 23, lr: 1.66e-04 2022-05-29 06:30:34,465 INFO [train.py:842] (0/4) Epoch 33, batch 5700, loss[loss=0.1522, simple_loss=0.2485, pruned_loss=0.02794, over 6681.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2622, pruned_loss=0.04282, over 1417437.90 frames.], batch size: 39, lr: 1.66e-04 2022-05-29 06:31:13,649 INFO [train.py:842] (0/4) Epoch 33, batch 5750, loss[loss=0.1344, simple_loss=0.2182, pruned_loss=0.02534, over 7057.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2633, pruned_loss=0.04301, over 1421082.42 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:31:53,527 INFO [train.py:842] (0/4) Epoch 33, batch 5800, loss[loss=0.2051, simple_loss=0.2941, pruned_loss=0.05805, over 7340.00 frames.], tot_loss[loss=0.1746, simple_loss=0.263, pruned_loss=0.04315, over 1422361.17 frames.], batch size: 22, lr: 1.66e-04 2022-05-29 06:32:32,845 INFO [train.py:842] (0/4) Epoch 33, batch 5850, loss[loss=0.142, simple_loss=0.2216, pruned_loss=0.0312, over 7407.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2633, pruned_loss=0.04351, over 1419817.19 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:33:12,580 INFO [train.py:842] (0/4) Epoch 33, batch 5900, loss[loss=0.1628, simple_loss=0.2623, pruned_loss=0.03165, over 7429.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2627, pruned_loss=0.04274, over 1424754.41 frames.], batch size: 20, lr: 1.66e-04 2022-05-29 06:33:51,673 INFO [train.py:842] (0/4) Epoch 33, batch 5950, loss[loss=0.1732, simple_loss=0.2684, pruned_loss=0.03896, over 6276.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2629, pruned_loss=0.04278, over 1427889.53 frames.], batch size: 37, lr: 1.66e-04 2022-05-29 06:34:31,364 INFO [train.py:842] (0/4) Epoch 33, batch 6000, loss[loss=0.1962, simple_loss=0.2967, pruned_loss=0.04789, over 7141.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2634, pruned_loss=0.04311, over 1427730.25 frames.], batch size: 20, lr: 1.66e-04 2022-05-29 06:34:31,365 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 06:34:41,082 INFO [train.py:871] (0/4) Epoch 33, validation: loss=0.1642, simple_loss=0.2614, pruned_loss=0.03347, over 868885.00 frames. 
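The tot_loss[...] figures are reported "over N frames", where N climbs quickly at the start of an epoch and then levels off around 1.42M, while every "validation:" entry (such as the one just above) is computed over the same fixed 868885 frames of held-out data. The sketch below shows the kind of frame-weighted aggregation those lines imply; it is an illustration under that assumption, not the tracker actually used in train.py, which may additionally decay old batches (one way to get a frame count that saturates instead of growing all epoch).

```python
# Hedged sketch of a frame-weighted running aggregate, in the spirit of the
# "tot_loss[... over N frames]" entries. Not the project's metrics tracker.
class RunningLoss:
    def __init__(self) -> None:
        self.frames = 0.0
        self.sums = {"loss": 0.0, "simple_loss": 0.0, "pruned_loss": 0.0}

    def update(self, losses: dict, num_frames: float) -> None:
        # Weight each batch by its frame count so long and short batches
        # contribute proportionally to the reported averages.
        self.frames += num_frames
        for name, value in losses.items():
            self.sums[name] += value * num_frames

    def report(self) -> str:
        avg = {k: v / self.frames for k, v in self.sums.items()}
        return (f"tot_loss[loss={avg['loss']:.4g}, "
                f"simple_loss={avg['simple_loss']:.4g}, "
                f"pruned_loss={avg['pruned_loss']:.4g}, "
                f"over {self.frames:.2f} frames.]")
```

A decayed variant would scale self.frames and self.sums by a factor slightly below 1 before each update, which is one plausible explanation for the saturating frame counts seen in this epoch.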
2022-05-29 06:35:20,424 INFO [train.py:842] (0/4) Epoch 33, batch 6050, loss[loss=0.1729, simple_loss=0.2679, pruned_loss=0.03899, over 7223.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2627, pruned_loss=0.04301, over 1426869.57 frames.], batch size: 22, lr: 1.66e-04 2022-05-29 06:36:00,006 INFO [train.py:842] (0/4) Epoch 33, batch 6100, loss[loss=0.1868, simple_loss=0.2875, pruned_loss=0.04308, over 7333.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2618, pruned_loss=0.04225, over 1423254.99 frames.], batch size: 22, lr: 1.66e-04 2022-05-29 06:36:39,102 INFO [train.py:842] (0/4) Epoch 33, batch 6150, loss[loss=0.1521, simple_loss=0.2346, pruned_loss=0.03476, over 7176.00 frames.], tot_loss[loss=0.1733, simple_loss=0.262, pruned_loss=0.04228, over 1424524.59 frames.], batch size: 16, lr: 1.66e-04 2022-05-29 06:37:18,634 INFO [train.py:842] (0/4) Epoch 33, batch 6200, loss[loss=0.1595, simple_loss=0.2643, pruned_loss=0.02734, over 7406.00 frames.], tot_loss[loss=0.1725, simple_loss=0.261, pruned_loss=0.04203, over 1424179.34 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 06:37:57,971 INFO [train.py:842] (0/4) Epoch 33, batch 6250, loss[loss=0.1357, simple_loss=0.2193, pruned_loss=0.02612, over 7130.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2609, pruned_loss=0.04221, over 1424782.92 frames.], batch size: 17, lr: 1.66e-04 2022-05-29 06:38:37,861 INFO [train.py:842] (0/4) Epoch 33, batch 6300, loss[loss=0.2191, simple_loss=0.3005, pruned_loss=0.06888, over 7281.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2606, pruned_loss=0.04254, over 1424883.71 frames.], batch size: 24, lr: 1.66e-04 2022-05-29 06:39:17,439 INFO [train.py:842] (0/4) Epoch 33, batch 6350, loss[loss=0.1836, simple_loss=0.2687, pruned_loss=0.04928, over 7171.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2609, pruned_loss=0.04277, over 1426506.32 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:39:57,020 INFO [train.py:842] (0/4) Epoch 33, batch 6400, loss[loss=0.1652, simple_loss=0.2586, pruned_loss=0.03588, over 7234.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2606, pruned_loss=0.04226, over 1426711.38 frames.], batch size: 20, lr: 1.66e-04 2022-05-29 06:40:36,393 INFO [train.py:842] (0/4) Epoch 33, batch 6450, loss[loss=0.1506, simple_loss=0.2491, pruned_loss=0.02611, over 7387.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2599, pruned_loss=0.04135, over 1426630.04 frames.], batch size: 23, lr: 1.66e-04 2022-05-29 06:41:16,112 INFO [train.py:842] (0/4) Epoch 33, batch 6500, loss[loss=0.1993, simple_loss=0.2773, pruned_loss=0.0606, over 7416.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2607, pruned_loss=0.04181, over 1428631.85 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 06:41:55,393 INFO [train.py:842] (0/4) Epoch 33, batch 6550, loss[loss=0.1542, simple_loss=0.2367, pruned_loss=0.03591, over 7195.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2612, pruned_loss=0.04184, over 1427891.08 frames.], batch size: 16, lr: 1.66e-04 2022-05-29 06:42:34,837 INFO [train.py:842] (0/4) Epoch 33, batch 6600, loss[loss=0.1778, simple_loss=0.2554, pruned_loss=0.05014, over 7416.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2616, pruned_loss=0.04205, over 1427885.08 frames.], batch size: 17, lr: 1.66e-04 2022-05-29 06:43:14,040 INFO [train.py:842] (0/4) Epoch 33, batch 6650, loss[loss=0.1729, simple_loss=0.273, pruned_loss=0.03643, over 6697.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2611, pruned_loss=0.04182, over 1425098.72 frames.], batch size: 31, lr: 1.66e-04 2022-05-29 
06:43:53,722 INFO [train.py:842] (0/4) Epoch 33, batch 6700, loss[loss=0.1418, simple_loss=0.2293, pruned_loss=0.02715, over 7078.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2615, pruned_loss=0.04232, over 1421359.00 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:44:32,940 INFO [train.py:842] (0/4) Epoch 33, batch 6750, loss[loss=0.176, simple_loss=0.2708, pruned_loss=0.04059, over 7171.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2613, pruned_loss=0.04204, over 1422128.40 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:45:12,726 INFO [train.py:842] (0/4) Epoch 33, batch 6800, loss[loss=0.1638, simple_loss=0.248, pruned_loss=0.03974, over 7199.00 frames.], tot_loss[loss=0.172, simple_loss=0.2605, pruned_loss=0.04171, over 1419135.51 frames.], batch size: 23, lr: 1.66e-04 2022-05-29 06:45:51,925 INFO [train.py:842] (0/4) Epoch 33, batch 6850, loss[loss=0.2009, simple_loss=0.2948, pruned_loss=0.05354, over 7164.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2608, pruned_loss=0.04175, over 1423644.85 frames.], batch size: 26, lr: 1.66e-04 2022-05-29 06:46:31,078 INFO [train.py:842] (0/4) Epoch 33, batch 6900, loss[loss=0.1875, simple_loss=0.2758, pruned_loss=0.04956, over 6821.00 frames.], tot_loss[loss=0.173, simple_loss=0.2614, pruned_loss=0.04228, over 1419879.65 frames.], batch size: 31, lr: 1.66e-04 2022-05-29 06:47:10,485 INFO [train.py:842] (0/4) Epoch 33, batch 6950, loss[loss=0.189, simple_loss=0.271, pruned_loss=0.05348, over 7429.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2619, pruned_loss=0.04213, over 1426489.36 frames.], batch size: 20, lr: 1.66e-04 2022-05-29 06:47:50,211 INFO [train.py:842] (0/4) Epoch 33, batch 7000, loss[loss=0.1595, simple_loss=0.2472, pruned_loss=0.0359, over 7060.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2622, pruned_loss=0.04225, over 1427452.19 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:48:29,615 INFO [train.py:842] (0/4) Epoch 33, batch 7050, loss[loss=0.1482, simple_loss=0.2409, pruned_loss=0.02772, over 7150.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2618, pruned_loss=0.04221, over 1424742.00 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:49:09,087 INFO [train.py:842] (0/4) Epoch 33, batch 7100, loss[loss=0.1891, simple_loss=0.2828, pruned_loss=0.04775, over 7154.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2617, pruned_loss=0.04232, over 1427465.60 frames.], batch size: 26, lr: 1.66e-04 2022-05-29 06:49:48,450 INFO [train.py:842] (0/4) Epoch 33, batch 7150, loss[loss=0.1893, simple_loss=0.2838, pruned_loss=0.04736, over 7190.00 frames.], tot_loss[loss=0.173, simple_loss=0.2618, pruned_loss=0.04214, over 1430250.81 frames.], batch size: 26, lr: 1.66e-04 2022-05-29 06:50:28,359 INFO [train.py:842] (0/4) Epoch 33, batch 7200, loss[loss=0.1686, simple_loss=0.2611, pruned_loss=0.03806, over 7291.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2605, pruned_loss=0.04162, over 1430522.53 frames.], batch size: 24, lr: 1.66e-04 2022-05-29 06:51:07,723 INFO [train.py:842] (0/4) Epoch 33, batch 7250, loss[loss=0.1775, simple_loss=0.2566, pruned_loss=0.04918, over 7256.00 frames.], tot_loss[loss=0.1715, simple_loss=0.26, pruned_loss=0.04143, over 1426932.42 frames.], batch size: 19, lr: 1.66e-04 2022-05-29 06:51:46,994 INFO [train.py:842] (0/4) Epoch 33, batch 7300, loss[loss=0.2041, simple_loss=0.29, pruned_loss=0.05912, over 7165.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2607, pruned_loss=0.04131, over 1427678.48 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:52:26,331 INFO 
[train.py:842] (0/4) Epoch 33, batch 7350, loss[loss=0.213, simple_loss=0.3056, pruned_loss=0.06018, over 7217.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2613, pruned_loss=0.04169, over 1427502.79 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 06:53:05,839 INFO [train.py:842] (0/4) Epoch 33, batch 7400, loss[loss=0.2063, simple_loss=0.2866, pruned_loss=0.06299, over 4852.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2615, pruned_loss=0.04159, over 1423812.78 frames.], batch size: 53, lr: 1.66e-04 2022-05-29 06:53:45,059 INFO [train.py:842] (0/4) Epoch 33, batch 7450, loss[loss=0.1751, simple_loss=0.2667, pruned_loss=0.04178, over 7288.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2612, pruned_loss=0.04155, over 1415390.03 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:54:24,409 INFO [train.py:842] (0/4) Epoch 33, batch 7500, loss[loss=0.1642, simple_loss=0.2656, pruned_loss=0.0314, over 6421.00 frames.], tot_loss[loss=0.172, simple_loss=0.2614, pruned_loss=0.04129, over 1418631.06 frames.], batch size: 38, lr: 1.66e-04 2022-05-29 06:55:03,705 INFO [train.py:842] (0/4) Epoch 33, batch 7550, loss[loss=0.1309, simple_loss=0.2164, pruned_loss=0.02266, over 7392.00 frames.], tot_loss[loss=0.171, simple_loss=0.2602, pruned_loss=0.04091, over 1422354.60 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:55:43,320 INFO [train.py:842] (0/4) Epoch 33, batch 7600, loss[loss=0.1615, simple_loss=0.2569, pruned_loss=0.03306, over 7112.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2611, pruned_loss=0.04087, over 1427291.57 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 06:56:22,532 INFO [train.py:842] (0/4) Epoch 33, batch 7650, loss[loss=0.1807, simple_loss=0.2456, pruned_loss=0.05792, over 7292.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2613, pruned_loss=0.04107, over 1426047.81 frames.], batch size: 18, lr: 1.66e-04 2022-05-29 06:57:02,133 INFO [train.py:842] (0/4) Epoch 33, batch 7700, loss[loss=0.188, simple_loss=0.2662, pruned_loss=0.05491, over 6811.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2606, pruned_loss=0.04066, over 1424942.89 frames.], batch size: 15, lr: 1.66e-04 2022-05-29 06:57:41,097 INFO [train.py:842] (0/4) Epoch 33, batch 7750, loss[loss=0.1711, simple_loss=0.2557, pruned_loss=0.04327, over 7417.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2618, pruned_loss=0.04134, over 1424725.05 frames.], batch size: 20, lr: 1.66e-04 2022-05-29 06:58:20,691 INFO [train.py:842] (0/4) Epoch 33, batch 7800, loss[loss=0.1598, simple_loss=0.2531, pruned_loss=0.03325, over 6972.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2623, pruned_loss=0.04198, over 1425780.38 frames.], batch size: 32, lr: 1.66e-04 2022-05-29 06:58:59,882 INFO [train.py:842] (0/4) Epoch 33, batch 7850, loss[loss=0.1608, simple_loss=0.2498, pruned_loss=0.03591, over 7331.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2617, pruned_loss=0.04138, over 1427060.94 frames.], batch size: 20, lr: 1.66e-04 2022-05-29 06:59:39,469 INFO [train.py:842] (0/4) Epoch 33, batch 7900, loss[loss=0.1842, simple_loss=0.2891, pruned_loss=0.03966, over 7337.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2621, pruned_loss=0.04213, over 1429642.27 frames.], batch size: 22, lr: 1.66e-04 2022-05-29 07:00:18,936 INFO [train.py:842] (0/4) Epoch 33, batch 7950, loss[loss=0.2109, simple_loss=0.2982, pruned_loss=0.06181, over 6180.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2616, pruned_loss=0.04208, over 1427084.72 frames.], batch size: 37, lr: 1.66e-04 2022-05-29 07:00:58,348 INFO [train.py:842] (0/4) 
Epoch 33, batch 8000, loss[loss=0.1592, simple_loss=0.2502, pruned_loss=0.03409, over 7008.00 frames.], tot_loss[loss=0.173, simple_loss=0.262, pruned_loss=0.04194, over 1426150.19 frames.], batch size: 28, lr: 1.66e-04 2022-05-29 07:01:37,402 INFO [train.py:842] (0/4) Epoch 33, batch 8050, loss[loss=0.2016, simple_loss=0.2838, pruned_loss=0.05968, over 7119.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2628, pruned_loss=0.04217, over 1425536.62 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 07:02:17,322 INFO [train.py:842] (0/4) Epoch 33, batch 8100, loss[loss=0.1461, simple_loss=0.2431, pruned_loss=0.02456, over 7216.00 frames.], tot_loss[loss=0.1728, simple_loss=0.262, pruned_loss=0.04177, over 1425299.02 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 07:02:56,288 INFO [train.py:842] (0/4) Epoch 33, batch 8150, loss[loss=0.2116, simple_loss=0.2983, pruned_loss=0.06241, over 7336.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2628, pruned_loss=0.04189, over 1423405.47 frames.], batch size: 22, lr: 1.66e-04 2022-05-29 07:03:35,664 INFO [train.py:842] (0/4) Epoch 33, batch 8200, loss[loss=0.2126, simple_loss=0.3086, pruned_loss=0.05829, over 5212.00 frames.], tot_loss[loss=0.1721, simple_loss=0.262, pruned_loss=0.04114, over 1420792.21 frames.], batch size: 52, lr: 1.66e-04 2022-05-29 07:04:25,758 INFO [train.py:842] (0/4) Epoch 33, batch 8250, loss[loss=0.1344, simple_loss=0.2245, pruned_loss=0.02214, over 6974.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2623, pruned_loss=0.04132, over 1426578.50 frames.], batch size: 16, lr: 1.66e-04 2022-05-29 07:05:05,269 INFO [train.py:842] (0/4) Epoch 33, batch 8300, loss[loss=0.1558, simple_loss=0.2337, pruned_loss=0.03889, over 6972.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2626, pruned_loss=0.04181, over 1424184.70 frames.], batch size: 16, lr: 1.66e-04 2022-05-29 07:05:44,148 INFO [train.py:842] (0/4) Epoch 33, batch 8350, loss[loss=0.1647, simple_loss=0.2577, pruned_loss=0.03583, over 7211.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2627, pruned_loss=0.04188, over 1423681.39 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 07:06:23,451 INFO [train.py:842] (0/4) Epoch 33, batch 8400, loss[loss=0.1857, simple_loss=0.2727, pruned_loss=0.04933, over 7339.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2626, pruned_loss=0.0418, over 1417772.88 frames.], batch size: 22, lr: 1.66e-04 2022-05-29 07:07:02,625 INFO [train.py:842] (0/4) Epoch 33, batch 8450, loss[loss=0.1737, simple_loss=0.2684, pruned_loss=0.0395, over 7055.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2626, pruned_loss=0.04184, over 1421279.80 frames.], batch size: 28, lr: 1.66e-04 2022-05-29 07:07:53,985 INFO [train.py:842] (0/4) Epoch 33, batch 8500, loss[loss=0.2279, simple_loss=0.3097, pruned_loss=0.07304, over 7323.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2626, pruned_loss=0.04194, over 1423106.52 frames.], batch size: 21, lr: 1.66e-04 2022-05-29 07:08:33,130 INFO [train.py:842] (0/4) Epoch 33, batch 8550, loss[loss=0.2017, simple_loss=0.2879, pruned_loss=0.05775, over 6795.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2625, pruned_loss=0.04211, over 1422473.99 frames.], batch size: 31, lr: 1.66e-04 2022-05-29 07:09:23,004 INFO [train.py:842] (0/4) Epoch 33, batch 8600, loss[loss=0.2164, simple_loss=0.3128, pruned_loss=0.06002, over 4863.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2634, pruned_loss=0.04288, over 1415102.67 frames.], batch size: 52, lr: 1.65e-04 2022-05-29 07:10:01,969 INFO [train.py:842] (0/4) Epoch 33, batch 8650, 
loss[loss=0.2324, simple_loss=0.3107, pruned_loss=0.07706, over 7192.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2643, pruned_loss=0.04324, over 1407270.91 frames.], batch size: 26, lr: 1.65e-04 2022-05-29 07:10:41,473 INFO [train.py:842] (0/4) Epoch 33, batch 8700, loss[loss=0.3383, simple_loss=0.3861, pruned_loss=0.1452, over 5057.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2654, pruned_loss=0.04387, over 1403807.36 frames.], batch size: 53, lr: 1.65e-04 2022-05-29 07:11:20,655 INFO [train.py:842] (0/4) Epoch 33, batch 8750, loss[loss=0.2231, simple_loss=0.3137, pruned_loss=0.06621, over 6303.00 frames.], tot_loss[loss=0.178, simple_loss=0.2668, pruned_loss=0.04461, over 1407226.37 frames.], batch size: 37, lr: 1.65e-04 2022-05-29 07:11:59,644 INFO [train.py:842] (0/4) Epoch 33, batch 8800, loss[loss=0.1485, simple_loss=0.2478, pruned_loss=0.0246, over 7143.00 frames.], tot_loss[loss=0.1766, simple_loss=0.2657, pruned_loss=0.04378, over 1403819.91 frames.], batch size: 20, lr: 1.65e-04 2022-05-29 07:12:38,563 INFO [train.py:842] (0/4) Epoch 33, batch 8850, loss[loss=0.1698, simple_loss=0.2717, pruned_loss=0.034, over 7220.00 frames.], tot_loss[loss=0.177, simple_loss=0.2661, pruned_loss=0.04392, over 1389820.05 frames.], batch size: 21, lr: 1.65e-04 2022-05-29 07:13:17,807 INFO [train.py:842] (0/4) Epoch 33, batch 8900, loss[loss=0.1899, simple_loss=0.281, pruned_loss=0.04941, over 7178.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2666, pruned_loss=0.04433, over 1389603.05 frames.], batch size: 26, lr: 1.65e-04 2022-05-29 07:13:56,471 INFO [train.py:842] (0/4) Epoch 33, batch 8950, loss[loss=0.1739, simple_loss=0.2587, pruned_loss=0.04453, over 7265.00 frames.], tot_loss[loss=0.178, simple_loss=0.2668, pruned_loss=0.04459, over 1382211.50 frames.], batch size: 19, lr: 1.65e-04 2022-05-29 07:14:35,677 INFO [train.py:842] (0/4) Epoch 33, batch 9000, loss[loss=0.1982, simple_loss=0.2939, pruned_loss=0.05122, over 7196.00 frames.], tot_loss[loss=0.1776, simple_loss=0.2662, pruned_loss=0.04453, over 1376452.22 frames.], batch size: 23, lr: 1.65e-04 2022-05-29 07:14:35,679 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 07:14:45,195 INFO [train.py:871] (0/4) Epoch 33, validation: loss=0.1641, simple_loss=0.2616, pruned_loss=0.03332, over 868885.00 frames. 
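The lr column in this section moves in two ways: it steps down at each epoch boundary (1.71e-04 in the epoch-32 entries at the top of the section versus 1.68e-04 at the start of epoch 33, with the next epoch opening lower still below) and drifts down slowly within an epoch (1.68e-04 to 1.65e-04 across epoch 33). That behaviour is what a schedule that decays with both the global batch index and the epoch count produces; icefall recipes use an Eden-style scheduler of roughly this shape. The sketch below is an assumption-labelled illustration: neither the exact formula used by this run nor the constants lr_batches and lr_epochs are read from this log.

```python
def eden_like_lr(base_lr: float, batch: int, epoch: int,
                 lr_batches: float, lr_epochs: float) -> float:
    """Hedged sketch of an Eden-style schedule: the learning rate decays
    smoothly with the global batch index and takes an extra step down as
    the epoch count grows. All constants here are illustrative assumptions."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor
```

Independently of the schedule, the log shows two checkpoint streams: epoch-NN.pt written when an epoch finishes (epoch-32.pt above, epoch-33.pt just below) and rolling checkpoint-NNNNNN.pt files keyed by the global batch index (checkpoint-296000.pt above, checkpoint-304000.pt further down).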
2022-05-29 07:15:24,271 INFO [train.py:842] (0/4) Epoch 33, batch 9050, loss[loss=0.172, simple_loss=0.2612, pruned_loss=0.0414, over 7059.00 frames.], tot_loss[loss=0.1771, simple_loss=0.2655, pruned_loss=0.04435, over 1371999.24 frames.], batch size: 28, lr: 1.65e-04 2022-05-29 07:16:03,511 INFO [train.py:842] (0/4) Epoch 33, batch 9100, loss[loss=0.2075, simple_loss=0.2844, pruned_loss=0.06532, over 5086.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2668, pruned_loss=0.04506, over 1355238.89 frames.], batch size: 52, lr: 1.65e-04 2022-05-29 07:16:41,709 INFO [train.py:842] (0/4) Epoch 33, batch 9150, loss[loss=0.2152, simple_loss=0.2942, pruned_loss=0.0681, over 5268.00 frames.], tot_loss[loss=0.1817, simple_loss=0.2697, pruned_loss=0.04678, over 1309234.60 frames.], batch size: 52, lr: 1.65e-04 2022-05-29 07:17:13,822 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-33.pt 2022-05-29 07:17:29,976 INFO [train.py:842] (0/4) Epoch 34, batch 0, loss[loss=0.1665, simple_loss=0.2596, pruned_loss=0.0367, over 7430.00 frames.], tot_loss[loss=0.1665, simple_loss=0.2596, pruned_loss=0.0367, over 7430.00 frames.], batch size: 20, lr: 1.63e-04 2022-05-29 07:18:09,722 INFO [train.py:842] (0/4) Epoch 34, batch 50, loss[loss=0.181, simple_loss=0.2843, pruned_loss=0.03879, over 7119.00 frames.], tot_loss[loss=0.1688, simple_loss=0.2575, pruned_loss=0.04005, over 324819.47 frames.], batch size: 28, lr: 1.63e-04 2022-05-29 07:18:49,365 INFO [train.py:842] (0/4) Epoch 34, batch 100, loss[loss=0.1684, simple_loss=0.2733, pruned_loss=0.03172, over 7116.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2601, pruned_loss=0.04071, over 566020.92 frames.], batch size: 21, lr: 1.63e-04 2022-05-29 07:19:28,866 INFO [train.py:842] (0/4) Epoch 34, batch 150, loss[loss=0.1282, simple_loss=0.2225, pruned_loss=0.01696, over 7070.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2603, pruned_loss=0.04146, over 756503.69 frames.], batch size: 18, lr: 1.63e-04 2022-05-29 07:20:08,767 INFO [train.py:842] (0/4) Epoch 34, batch 200, loss[loss=0.1516, simple_loss=0.2317, pruned_loss=0.03573, over 7288.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2592, pruned_loss=0.04092, over 906367.28 frames.], batch size: 17, lr: 1.63e-04 2022-05-29 07:20:48,026 INFO [train.py:842] (0/4) Epoch 34, batch 250, loss[loss=0.2202, simple_loss=0.3015, pruned_loss=0.06941, over 5188.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2589, pruned_loss=0.04084, over 1012853.54 frames.], batch size: 52, lr: 1.63e-04 2022-05-29 07:21:27,671 INFO [train.py:842] (0/4) Epoch 34, batch 300, loss[loss=0.1814, simple_loss=0.2606, pruned_loss=0.05113, over 7394.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2596, pruned_loss=0.04085, over 1103119.44 frames.], batch size: 23, lr: 1.63e-04 2022-05-29 07:22:06,460 INFO [train.py:842] (0/4) Epoch 34, batch 350, loss[loss=0.1409, simple_loss=0.2211, pruned_loss=0.03032, over 7127.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2625, pruned_loss=0.04212, over 1167597.81 frames.], batch size: 17, lr: 1.63e-04 2022-05-29 07:22:46,334 INFO [train.py:842] (0/4) Epoch 34, batch 400, loss[loss=0.2024, simple_loss=0.2859, pruned_loss=0.05944, over 7422.00 frames.], tot_loss[loss=0.173, simple_loss=0.262, pruned_loss=0.04202, over 1227997.48 frames.], batch size: 21, lr: 1.63e-04 2022-05-29 07:23:25,563 INFO [train.py:842] (0/4) Epoch 34, batch 450, loss[loss=0.2677, simple_loss=0.3191, pruned_loss=0.1081, over 7400.00 frames.], tot_loss[loss=0.176, 
simple_loss=0.2645, pruned_loss=0.0437, over 1272099.80 frames.], batch size: 18, lr: 1.63e-04 2022-05-29 07:24:05,214 INFO [train.py:842] (0/4) Epoch 34, batch 500, loss[loss=0.1891, simple_loss=0.2801, pruned_loss=0.04908, over 7255.00 frames.], tot_loss[loss=0.1751, simple_loss=0.264, pruned_loss=0.04313, over 1304966.11 frames.], batch size: 24, lr: 1.63e-04 2022-05-29 07:24:44,501 INFO [train.py:842] (0/4) Epoch 34, batch 550, loss[loss=0.1485, simple_loss=0.2488, pruned_loss=0.02417, over 6376.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2639, pruned_loss=0.0432, over 1328917.52 frames.], batch size: 37, lr: 1.63e-04 2022-05-29 07:25:24,071 INFO [train.py:842] (0/4) Epoch 34, batch 600, loss[loss=0.1904, simple_loss=0.283, pruned_loss=0.0489, over 7318.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2637, pruned_loss=0.04278, over 1351391.59 frames.], batch size: 25, lr: 1.63e-04 2022-05-29 07:26:03,442 INFO [train.py:842] (0/4) Epoch 34, batch 650, loss[loss=0.1974, simple_loss=0.2706, pruned_loss=0.06215, over 7171.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2649, pruned_loss=0.04349, over 1369857.02 frames.], batch size: 18, lr: 1.63e-04 2022-05-29 07:26:18,816 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-304000.pt 2022-05-29 07:26:46,156 INFO [train.py:842] (0/4) Epoch 34, batch 700, loss[loss=0.152, simple_loss=0.2348, pruned_loss=0.0346, over 7144.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2631, pruned_loss=0.04289, over 1377619.00 frames.], batch size: 17, lr: 1.63e-04 2022-05-29 07:27:25,321 INFO [train.py:842] (0/4) Epoch 34, batch 750, loss[loss=0.2098, simple_loss=0.2981, pruned_loss=0.06076, over 7193.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2633, pruned_loss=0.0428, over 1389029.49 frames.], batch size: 23, lr: 1.63e-04 2022-05-29 07:28:04,901 INFO [train.py:842] (0/4) Epoch 34, batch 800, loss[loss=0.1495, simple_loss=0.2325, pruned_loss=0.03325, over 7269.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2631, pruned_loss=0.04252, over 1394626.37 frames.], batch size: 18, lr: 1.63e-04 2022-05-29 07:28:44,222 INFO [train.py:842] (0/4) Epoch 34, batch 850, loss[loss=0.1742, simple_loss=0.2726, pruned_loss=0.0379, over 6479.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2616, pruned_loss=0.04176, over 1403854.84 frames.], batch size: 38, lr: 1.63e-04 2022-05-29 07:29:23,966 INFO [train.py:842] (0/4) Epoch 34, batch 900, loss[loss=0.1768, simple_loss=0.2632, pruned_loss=0.04522, over 4860.00 frames.], tot_loss[loss=0.1713, simple_loss=0.26, pruned_loss=0.04128, over 1408312.32 frames.], batch size: 53, lr: 1.63e-04 2022-05-29 07:30:03,396 INFO [train.py:842] (0/4) Epoch 34, batch 950, loss[loss=0.1881, simple_loss=0.2623, pruned_loss=0.05691, over 7275.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2608, pruned_loss=0.04177, over 1406893.19 frames.], batch size: 18, lr: 1.63e-04 2022-05-29 07:30:43,016 INFO [train.py:842] (0/4) Epoch 34, batch 1000, loss[loss=0.165, simple_loss=0.2536, pruned_loss=0.03822, over 7424.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2611, pruned_loss=0.04193, over 1408788.44 frames.], batch size: 20, lr: 1.63e-04 2022-05-29 07:31:22,398 INFO [train.py:842] (0/4) Epoch 34, batch 1050, loss[loss=0.1418, simple_loss=0.2376, pruned_loss=0.02302, over 7153.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2607, pruned_loss=0.04149, over 1415153.58 frames.], batch size: 19, lr: 1.63e-04 2022-05-29 07:32:01,623 INFO [train.py:842] (0/4) Epoch 34, batch 1100, 
loss[loss=0.1922, simple_loss=0.2852, pruned_loss=0.04963, over 6546.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2603, pruned_loss=0.04109, over 1413243.07 frames.], batch size: 38, lr: 1.63e-04 2022-05-29 07:32:40,977 INFO [train.py:842] (0/4) Epoch 34, batch 1150, loss[loss=0.1882, simple_loss=0.2756, pruned_loss=0.05034, over 7435.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2613, pruned_loss=0.042, over 1415148.52 frames.], batch size: 20, lr: 1.63e-04 2022-05-29 07:33:20,610 INFO [train.py:842] (0/4) Epoch 34, batch 1200, loss[loss=0.154, simple_loss=0.2404, pruned_loss=0.03387, over 7187.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2615, pruned_loss=0.04206, over 1419743.30 frames.], batch size: 23, lr: 1.63e-04 2022-05-29 07:33:59,724 INFO [train.py:842] (0/4) Epoch 34, batch 1250, loss[loss=0.1684, simple_loss=0.2665, pruned_loss=0.03509, over 7329.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2619, pruned_loss=0.0422, over 1417959.49 frames.], batch size: 22, lr: 1.63e-04 2022-05-29 07:34:39,383 INFO [train.py:842] (0/4) Epoch 34, batch 1300, loss[loss=0.2464, simple_loss=0.3254, pruned_loss=0.08367, over 7204.00 frames.], tot_loss[loss=0.174, simple_loss=0.2623, pruned_loss=0.04286, over 1417925.38 frames.], batch size: 26, lr: 1.63e-04 2022-05-29 07:35:18,846 INFO [train.py:842] (0/4) Epoch 34, batch 1350, loss[loss=0.2126, simple_loss=0.3186, pruned_loss=0.05332, over 7218.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2619, pruned_loss=0.04286, over 1418610.82 frames.], batch size: 21, lr: 1.63e-04 2022-05-29 07:35:58,707 INFO [train.py:842] (0/4) Epoch 34, batch 1400, loss[loss=0.1291, simple_loss=0.2228, pruned_loss=0.01771, over 7241.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2605, pruned_loss=0.04189, over 1422006.65 frames.], batch size: 19, lr: 1.63e-04 2022-05-29 07:36:38,116 INFO [train.py:842] (0/4) Epoch 34, batch 1450, loss[loss=0.1723, simple_loss=0.2674, pruned_loss=0.03863, over 7414.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2612, pruned_loss=0.04192, over 1425817.00 frames.], batch size: 21, lr: 1.63e-04 2022-05-29 07:37:17,572 INFO [train.py:842] (0/4) Epoch 34, batch 1500, loss[loss=0.1661, simple_loss=0.2584, pruned_loss=0.03691, over 7379.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2625, pruned_loss=0.04204, over 1424754.06 frames.], batch size: 23, lr: 1.62e-04 2022-05-29 07:37:56,876 INFO [train.py:842] (0/4) Epoch 34, batch 1550, loss[loss=0.1631, simple_loss=0.2543, pruned_loss=0.03593, over 7308.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2621, pruned_loss=0.04157, over 1422110.21 frames.], batch size: 24, lr: 1.62e-04 2022-05-29 07:38:36,495 INFO [train.py:842] (0/4) Epoch 34, batch 1600, loss[loss=0.1532, simple_loss=0.2413, pruned_loss=0.03252, over 7320.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2621, pruned_loss=0.04164, over 1422981.61 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 07:39:15,594 INFO [train.py:842] (0/4) Epoch 34, batch 1650, loss[loss=0.1873, simple_loss=0.2719, pruned_loss=0.05137, over 7204.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2615, pruned_loss=0.04097, over 1422640.55 frames.], batch size: 22, lr: 1.62e-04 2022-05-29 07:39:54,979 INFO [train.py:842] (0/4) Epoch 34, batch 1700, loss[loss=0.1605, simple_loss=0.2584, pruned_loss=0.03128, over 7381.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2612, pruned_loss=0.04054, over 1426459.48 frames.], batch size: 23, lr: 1.62e-04 2022-05-29 07:40:34,176 INFO [train.py:842] (0/4) Epoch 34, batch 1750, loss[loss=0.1794, 
simple_loss=0.2721, pruned_loss=0.0433, over 7041.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2621, pruned_loss=0.04105, over 1421285.29 frames.], batch size: 28, lr: 1.62e-04 2022-05-29 07:41:13,638 INFO [train.py:842] (0/4) Epoch 34, batch 1800, loss[loss=0.1479, simple_loss=0.2381, pruned_loss=0.02889, over 7267.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2621, pruned_loss=0.04105, over 1422406.51 frames.], batch size: 17, lr: 1.62e-04 2022-05-29 07:41:52,938 INFO [train.py:842] (0/4) Epoch 34, batch 1850, loss[loss=0.1673, simple_loss=0.2606, pruned_loss=0.03702, over 7318.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2622, pruned_loss=0.04146, over 1414156.83 frames.], batch size: 21, lr: 1.62e-04 2022-05-29 07:42:32,379 INFO [train.py:842] (0/4) Epoch 34, batch 1900, loss[loss=0.1952, simple_loss=0.292, pruned_loss=0.04917, over 6789.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2622, pruned_loss=0.04198, over 1410564.87 frames.], batch size: 31, lr: 1.62e-04 2022-05-29 07:43:11,708 INFO [train.py:842] (0/4) Epoch 34, batch 1950, loss[loss=0.1307, simple_loss=0.2107, pruned_loss=0.02532, over 6986.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2624, pruned_loss=0.04187, over 1416673.95 frames.], batch size: 16, lr: 1.62e-04 2022-05-29 07:43:51,615 INFO [train.py:842] (0/4) Epoch 34, batch 2000, loss[loss=0.1735, simple_loss=0.2437, pruned_loss=0.05158, over 7430.00 frames.], tot_loss[loss=0.1728, simple_loss=0.262, pruned_loss=0.04178, over 1421751.52 frames.], batch size: 18, lr: 1.62e-04 2022-05-29 07:44:31,042 INFO [train.py:842] (0/4) Epoch 34, batch 2050, loss[loss=0.19, simple_loss=0.2826, pruned_loss=0.04874, over 7144.00 frames.], tot_loss[loss=0.172, simple_loss=0.2616, pruned_loss=0.04121, over 1421169.29 frames.], batch size: 26, lr: 1.62e-04 2022-05-29 07:45:10,511 INFO [train.py:842] (0/4) Epoch 34, batch 2100, loss[loss=0.2357, simple_loss=0.3164, pruned_loss=0.07753, over 7194.00 frames.], tot_loss[loss=0.174, simple_loss=0.2633, pruned_loss=0.04232, over 1424007.79 frames.], batch size: 23, lr: 1.62e-04 2022-05-29 07:45:49,747 INFO [train.py:842] (0/4) Epoch 34, batch 2150, loss[loss=0.1637, simple_loss=0.2582, pruned_loss=0.03463, over 7303.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2632, pruned_loss=0.04231, over 1423737.70 frames.], batch size: 24, lr: 1.62e-04 2022-05-29 07:46:29,323 INFO [train.py:842] (0/4) Epoch 34, batch 2200, loss[loss=0.1411, simple_loss=0.2395, pruned_loss=0.02134, over 7309.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2621, pruned_loss=0.04162, over 1426456.82 frames.], batch size: 21, lr: 1.62e-04 2022-05-29 07:47:08,691 INFO [train.py:842] (0/4) Epoch 34, batch 2250, loss[loss=0.1536, simple_loss=0.2325, pruned_loss=0.03736, over 7279.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2612, pruned_loss=0.04161, over 1423183.42 frames.], batch size: 18, lr: 1.62e-04 2022-05-29 07:47:47,863 INFO [train.py:842] (0/4) Epoch 34, batch 2300, loss[loss=0.2121, simple_loss=0.3001, pruned_loss=0.06207, over 7164.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2618, pruned_loss=0.04138, over 1424336.76 frames.], batch size: 19, lr: 1.62e-04 2022-05-29 07:48:27,174 INFO [train.py:842] (0/4) Epoch 34, batch 2350, loss[loss=0.178, simple_loss=0.2738, pruned_loss=0.04113, over 7156.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2607, pruned_loss=0.04074, over 1424674.98 frames.], batch size: 19, lr: 1.62e-04 2022-05-29 07:49:06,865 INFO [train.py:842] (0/4) Epoch 34, batch 2400, loss[loss=0.1959, simple_loss=0.2749, 
pruned_loss=0.05842, over 7370.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2607, pruned_loss=0.04075, over 1425680.75 frames.], batch size: 23, lr: 1.62e-04 2022-05-29 07:49:45,765 INFO [train.py:842] (0/4) Epoch 34, batch 2450, loss[loss=0.1784, simple_loss=0.2816, pruned_loss=0.03758, over 7218.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2624, pruned_loss=0.04155, over 1420225.02 frames.], batch size: 21, lr: 1.62e-04 2022-05-29 07:50:25,155 INFO [train.py:842] (0/4) Epoch 34, batch 2500, loss[loss=0.1636, simple_loss=0.243, pruned_loss=0.0421, over 7006.00 frames.], tot_loss[loss=0.173, simple_loss=0.2625, pruned_loss=0.04173, over 1417713.80 frames.], batch size: 16, lr: 1.62e-04 2022-05-29 07:51:04,378 INFO [train.py:842] (0/4) Epoch 34, batch 2550, loss[loss=0.1757, simple_loss=0.2684, pruned_loss=0.04149, over 7343.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2615, pruned_loss=0.04134, over 1419005.34 frames.], batch size: 22, lr: 1.62e-04 2022-05-29 07:51:44,028 INFO [train.py:842] (0/4) Epoch 34, batch 2600, loss[loss=0.1791, simple_loss=0.2699, pruned_loss=0.04415, over 7063.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2623, pruned_loss=0.04168, over 1419410.86 frames.], batch size: 18, lr: 1.62e-04 2022-05-29 07:52:23,462 INFO [train.py:842] (0/4) Epoch 34, batch 2650, loss[loss=0.1854, simple_loss=0.2742, pruned_loss=0.04828, over 7349.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2616, pruned_loss=0.04154, over 1421039.50 frames.], batch size: 22, lr: 1.62e-04 2022-05-29 07:53:03,142 INFO [train.py:842] (0/4) Epoch 34, batch 2700, loss[loss=0.1533, simple_loss=0.2298, pruned_loss=0.03839, over 7276.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2612, pruned_loss=0.04149, over 1425726.69 frames.], batch size: 18, lr: 1.62e-04 2022-05-29 07:53:42,407 INFO [train.py:842] (0/4) Epoch 34, batch 2750, loss[loss=0.1814, simple_loss=0.2835, pruned_loss=0.03967, over 7318.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2611, pruned_loss=0.04128, over 1424930.96 frames.], batch size: 21, lr: 1.62e-04 2022-05-29 07:54:21,988 INFO [train.py:842] (0/4) Epoch 34, batch 2800, loss[loss=0.1861, simple_loss=0.2721, pruned_loss=0.04999, over 7424.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2613, pruned_loss=0.04099, over 1429885.29 frames.], batch size: 18, lr: 1.62e-04 2022-05-29 07:55:01,282 INFO [train.py:842] (0/4) Epoch 34, batch 2850, loss[loss=0.1912, simple_loss=0.2795, pruned_loss=0.05145, over 7219.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2606, pruned_loss=0.04056, over 1430919.19 frames.], batch size: 23, lr: 1.62e-04 2022-05-29 07:55:40,893 INFO [train.py:842] (0/4) Epoch 34, batch 2900, loss[loss=0.1571, simple_loss=0.2535, pruned_loss=0.03034, over 7154.00 frames.], tot_loss[loss=0.1713, simple_loss=0.261, pruned_loss=0.04085, over 1427582.37 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 07:56:20,240 INFO [train.py:842] (0/4) Epoch 34, batch 2950, loss[loss=0.1622, simple_loss=0.2458, pruned_loss=0.03932, over 7148.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2604, pruned_loss=0.04073, over 1427367.61 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 07:56:59,653 INFO [train.py:842] (0/4) Epoch 34, batch 3000, loss[loss=0.1468, simple_loss=0.237, pruned_loss=0.02828, over 7346.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2612, pruned_loss=0.041, over 1427753.40 frames.], batch size: 19, lr: 1.62e-04 2022-05-29 07:56:59,655 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 07:57:09,215 INFO [train.py:871] (0/4) Epoch 34, 
validation: loss=0.165, simple_loss=0.2618, pruned_loss=0.03414, over 868885.00 frames. 2022-05-29 07:57:48,390 INFO [train.py:842] (0/4) Epoch 34, batch 3050, loss[loss=0.1655, simple_loss=0.2494, pruned_loss=0.04078, over 7363.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2619, pruned_loss=0.04084, over 1427525.29 frames.], batch size: 19, lr: 1.62e-04 2022-05-29 07:58:28,171 INFO [train.py:842] (0/4) Epoch 34, batch 3100, loss[loss=0.1596, simple_loss=0.2398, pruned_loss=0.03976, over 6802.00 frames.], tot_loss[loss=0.171, simple_loss=0.2612, pruned_loss=0.04037, over 1428647.95 frames.], batch size: 15, lr: 1.62e-04 2022-05-29 07:59:07,355 INFO [train.py:842] (0/4) Epoch 34, batch 3150, loss[loss=0.1532, simple_loss=0.2312, pruned_loss=0.03758, over 7254.00 frames.], tot_loss[loss=0.1698, simple_loss=0.2597, pruned_loss=0.03989, over 1428463.73 frames.], batch size: 17, lr: 1.62e-04 2022-05-29 07:59:46,843 INFO [train.py:842] (0/4) Epoch 34, batch 3200, loss[loss=0.1803, simple_loss=0.2754, pruned_loss=0.04257, over 4942.00 frames.], tot_loss[loss=0.1693, simple_loss=0.2591, pruned_loss=0.0398, over 1424769.75 frames.], batch size: 53, lr: 1.62e-04 2022-05-29 08:00:26,130 INFO [train.py:842] (0/4) Epoch 34, batch 3250, loss[loss=0.1731, simple_loss=0.2475, pruned_loss=0.04933, over 7123.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2589, pruned_loss=0.0399, over 1421236.94 frames.], batch size: 17, lr: 1.62e-04 2022-05-29 08:01:05,724 INFO [train.py:842] (0/4) Epoch 34, batch 3300, loss[loss=0.1761, simple_loss=0.2811, pruned_loss=0.03556, over 7096.00 frames.], tot_loss[loss=0.1696, simple_loss=0.2592, pruned_loss=0.03998, over 1417581.74 frames.], batch size: 28, lr: 1.62e-04 2022-05-29 08:01:45,231 INFO [train.py:842] (0/4) Epoch 34, batch 3350, loss[loss=0.157, simple_loss=0.2529, pruned_loss=0.0305, over 7150.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2591, pruned_loss=0.04056, over 1421028.43 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:02:24,813 INFO [train.py:842] (0/4) Epoch 34, batch 3400, loss[loss=0.1693, simple_loss=0.2542, pruned_loss=0.04225, over 7215.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2602, pruned_loss=0.04122, over 1421718.22 frames.], batch size: 23, lr: 1.62e-04 2022-05-29 08:03:04,107 INFO [train.py:842] (0/4) Epoch 34, batch 3450, loss[loss=0.1826, simple_loss=0.2598, pruned_loss=0.05267, over 6988.00 frames.], tot_loss[loss=0.172, simple_loss=0.2609, pruned_loss=0.04156, over 1427470.01 frames.], batch size: 16, lr: 1.62e-04 2022-05-29 08:03:43,416 INFO [train.py:842] (0/4) Epoch 34, batch 3500, loss[loss=0.1912, simple_loss=0.2817, pruned_loss=0.05037, over 7180.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2631, pruned_loss=0.04215, over 1429694.54 frames.], batch size: 23, lr: 1.62e-04 2022-05-29 08:04:22,890 INFO [train.py:842] (0/4) Epoch 34, batch 3550, loss[loss=0.1475, simple_loss=0.2269, pruned_loss=0.034, over 7292.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2619, pruned_loss=0.04169, over 1431276.05 frames.], batch size: 17, lr: 1.62e-04 2022-05-29 08:05:02,700 INFO [train.py:842] (0/4) Epoch 34, batch 3600, loss[loss=0.175, simple_loss=0.2693, pruned_loss=0.04037, over 7316.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2633, pruned_loss=0.04243, over 1432495.11 frames.], batch size: 21, lr: 1.62e-04 2022-05-29 08:05:41,978 INFO [train.py:842] (0/4) Epoch 34, batch 3650, loss[loss=0.1826, simple_loss=0.2618, pruned_loss=0.05171, over 7431.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2626, 
pruned_loss=0.04249, over 1429419.46 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:06:21,225 INFO [train.py:842] (0/4) Epoch 34, batch 3700, loss[loss=0.2706, simple_loss=0.35, pruned_loss=0.09559, over 5176.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2627, pruned_loss=0.04322, over 1423092.38 frames.], batch size: 53, lr: 1.62e-04 2022-05-29 08:07:00,404 INFO [train.py:842] (0/4) Epoch 34, batch 3750, loss[loss=0.1995, simple_loss=0.268, pruned_loss=0.06551, over 7132.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2617, pruned_loss=0.04267, over 1421349.59 frames.], batch size: 17, lr: 1.62e-04 2022-05-29 08:07:40,304 INFO [train.py:842] (0/4) Epoch 34, batch 3800, loss[loss=0.2316, simple_loss=0.3212, pruned_loss=0.071, over 7233.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2608, pruned_loss=0.04195, over 1422756.23 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:08:19,521 INFO [train.py:842] (0/4) Epoch 34, batch 3850, loss[loss=0.178, simple_loss=0.2673, pruned_loss=0.0444, over 7116.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2612, pruned_loss=0.04182, over 1425342.01 frames.], batch size: 28, lr: 1.62e-04 2022-05-29 08:08:59,241 INFO [train.py:842] (0/4) Epoch 34, batch 3900, loss[loss=0.1329, simple_loss=0.2226, pruned_loss=0.02161, over 7361.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2605, pruned_loss=0.04142, over 1427951.57 frames.], batch size: 19, lr: 1.62e-04 2022-05-29 08:09:38,321 INFO [train.py:842] (0/4) Epoch 34, batch 3950, loss[loss=0.1686, simple_loss=0.2689, pruned_loss=0.03415, over 7329.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2604, pruned_loss=0.04147, over 1422373.94 frames.], batch size: 22, lr: 1.62e-04 2022-05-29 08:10:18,065 INFO [train.py:842] (0/4) Epoch 34, batch 4000, loss[loss=0.2085, simple_loss=0.2973, pruned_loss=0.05984, over 7171.00 frames.], tot_loss[loss=0.172, simple_loss=0.261, pruned_loss=0.0415, over 1426538.74 frames.], batch size: 26, lr: 1.62e-04 2022-05-29 08:10:57,389 INFO [train.py:842] (0/4) Epoch 34, batch 4050, loss[loss=0.138, simple_loss=0.2319, pruned_loss=0.02205, over 7423.00 frames.], tot_loss[loss=0.1713, simple_loss=0.26, pruned_loss=0.04126, over 1424935.61 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:11:36,855 INFO [train.py:842] (0/4) Epoch 34, batch 4100, loss[loss=0.1932, simple_loss=0.2761, pruned_loss=0.05512, over 7328.00 frames.], tot_loss[loss=0.174, simple_loss=0.2627, pruned_loss=0.04262, over 1422191.95 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:12:16,156 INFO [train.py:842] (0/4) Epoch 34, batch 4150, loss[loss=0.1412, simple_loss=0.2266, pruned_loss=0.02786, over 7421.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2626, pruned_loss=0.04235, over 1424306.64 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:12:55,706 INFO [train.py:842] (0/4) Epoch 34, batch 4200, loss[loss=0.1751, simple_loss=0.2618, pruned_loss=0.04422, over 6755.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2624, pruned_loss=0.04224, over 1421516.81 frames.], batch size: 31, lr: 1.62e-04 2022-05-29 08:13:35,019 INFO [train.py:842] (0/4) Epoch 34, batch 4250, loss[loss=0.1827, simple_loss=0.2743, pruned_loss=0.04557, over 7315.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2626, pruned_loss=0.04222, over 1424042.80 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:14:14,655 INFO [train.py:842] (0/4) Epoch 34, batch 4300, loss[loss=0.1564, simple_loss=0.2481, pruned_loss=0.03239, over 6911.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2621, pruned_loss=0.04203, over 
1425906.66 frames.], batch size: 32, lr: 1.62e-04 2022-05-29 08:14:53,567 INFO [train.py:842] (0/4) Epoch 34, batch 4350, loss[loss=0.1516, simple_loss=0.2471, pruned_loss=0.02802, over 7313.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2622, pruned_loss=0.04197, over 1425085.82 frames.], batch size: 21, lr: 1.62e-04 2022-05-29 08:15:33,259 INFO [train.py:842] (0/4) Epoch 34, batch 4400, loss[loss=0.165, simple_loss=0.2646, pruned_loss=0.03268, over 7231.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2612, pruned_loss=0.04119, over 1428360.46 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:16:12,430 INFO [train.py:842] (0/4) Epoch 34, batch 4450, loss[loss=0.2114, simple_loss=0.2982, pruned_loss=0.06232, over 7201.00 frames.], tot_loss[loss=0.173, simple_loss=0.2622, pruned_loss=0.04193, over 1427637.33 frames.], batch size: 23, lr: 1.62e-04 2022-05-29 08:16:52,065 INFO [train.py:842] (0/4) Epoch 34, batch 4500, loss[loss=0.1496, simple_loss=0.2347, pruned_loss=0.03225, over 7213.00 frames.], tot_loss[loss=0.173, simple_loss=0.2621, pruned_loss=0.04193, over 1427574.74 frames.], batch size: 16, lr: 1.62e-04 2022-05-29 08:17:31,606 INFO [train.py:842] (0/4) Epoch 34, batch 4550, loss[loss=0.1329, simple_loss=0.2079, pruned_loss=0.02898, over 7289.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2616, pruned_loss=0.0419, over 1429133.20 frames.], batch size: 18, lr: 1.62e-04 2022-05-29 08:18:11,486 INFO [train.py:842] (0/4) Epoch 34, batch 4600, loss[loss=0.1789, simple_loss=0.2683, pruned_loss=0.04476, over 6649.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2606, pruned_loss=0.04156, over 1422458.93 frames.], batch size: 38, lr: 1.62e-04 2022-05-29 08:18:50,837 INFO [train.py:842] (0/4) Epoch 34, batch 4650, loss[loss=0.1659, simple_loss=0.2666, pruned_loss=0.03258, over 7419.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2616, pruned_loss=0.04186, over 1423300.53 frames.], batch size: 21, lr: 1.62e-04 2022-05-29 08:19:30,473 INFO [train.py:842] (0/4) Epoch 34, batch 4700, loss[loss=0.1589, simple_loss=0.2403, pruned_loss=0.03876, over 7276.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2608, pruned_loss=0.04148, over 1422259.66 frames.], batch size: 18, lr: 1.62e-04 2022-05-29 08:20:09,790 INFO [train.py:842] (0/4) Epoch 34, batch 4750, loss[loss=0.165, simple_loss=0.2677, pruned_loss=0.03117, over 7146.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2612, pruned_loss=0.04151, over 1422925.55 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:20:49,487 INFO [train.py:842] (0/4) Epoch 34, batch 4800, loss[loss=0.2656, simple_loss=0.3297, pruned_loss=0.1008, over 7412.00 frames.], tot_loss[loss=0.1722, simple_loss=0.261, pruned_loss=0.04164, over 1425405.46 frames.], batch size: 21, lr: 1.62e-04 2022-05-29 08:21:28,538 INFO [train.py:842] (0/4) Epoch 34, batch 4850, loss[loss=0.2037, simple_loss=0.2995, pruned_loss=0.05395, over 7417.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2617, pruned_loss=0.04174, over 1425222.55 frames.], batch size: 21, lr: 1.62e-04 2022-05-29 08:22:08,250 INFO [train.py:842] (0/4) Epoch 34, batch 4900, loss[loss=0.1437, simple_loss=0.2405, pruned_loss=0.02348, over 7330.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2615, pruned_loss=0.04195, over 1424338.33 frames.], batch size: 22, lr: 1.62e-04 2022-05-29 08:22:47,428 INFO [train.py:842] (0/4) Epoch 34, batch 4950, loss[loss=0.1307, simple_loss=0.2165, pruned_loss=0.02251, over 6993.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2615, pruned_loss=0.04174, over 1422367.29 frames.], 
batch size: 16, lr: 1.62e-04 2022-05-29 08:23:27,041 INFO [train.py:842] (0/4) Epoch 34, batch 5000, loss[loss=0.1963, simple_loss=0.2854, pruned_loss=0.05358, over 7144.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2615, pruned_loss=0.04172, over 1419417.19 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:24:06,295 INFO [train.py:842] (0/4) Epoch 34, batch 5050, loss[loss=0.2251, simple_loss=0.3012, pruned_loss=0.07454, over 7294.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2621, pruned_loss=0.04219, over 1422735.31 frames.], batch size: 18, lr: 1.62e-04 2022-05-29 08:24:46,025 INFO [train.py:842] (0/4) Epoch 34, batch 5100, loss[loss=0.1929, simple_loss=0.2862, pruned_loss=0.04982, over 7274.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2622, pruned_loss=0.04222, over 1426506.21 frames.], batch size: 25, lr: 1.62e-04 2022-05-29 08:25:25,116 INFO [train.py:842] (0/4) Epoch 34, batch 5150, loss[loss=0.1748, simple_loss=0.2644, pruned_loss=0.04258, over 6874.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2613, pruned_loss=0.04186, over 1423109.55 frames.], batch size: 31, lr: 1.62e-04 2022-05-29 08:26:04,883 INFO [train.py:842] (0/4) Epoch 34, batch 5200, loss[loss=0.1421, simple_loss=0.2396, pruned_loss=0.0223, over 7417.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2606, pruned_loss=0.04133, over 1424501.88 frames.], batch size: 20, lr: 1.62e-04 2022-05-29 08:26:44,181 INFO [train.py:842] (0/4) Epoch 34, batch 5250, loss[loss=0.2258, simple_loss=0.3085, pruned_loss=0.07157, over 7378.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2613, pruned_loss=0.04154, over 1425896.72 frames.], batch size: 23, lr: 1.62e-04 2022-05-29 08:27:23,813 INFO [train.py:842] (0/4) Epoch 34, batch 5300, loss[loss=0.1273, simple_loss=0.2151, pruned_loss=0.01977, over 7282.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2611, pruned_loss=0.04165, over 1425665.02 frames.], batch size: 17, lr: 1.61e-04 2022-05-29 08:28:03,175 INFO [train.py:842] (0/4) Epoch 34, batch 5350, loss[loss=0.178, simple_loss=0.2517, pruned_loss=0.05217, over 7136.00 frames.], tot_loss[loss=0.1735, simple_loss=0.262, pruned_loss=0.04247, over 1418435.54 frames.], batch size: 17, lr: 1.61e-04 2022-05-29 08:28:42,824 INFO [train.py:842] (0/4) Epoch 34, batch 5400, loss[loss=0.188, simple_loss=0.2763, pruned_loss=0.04986, over 7305.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2628, pruned_loss=0.04277, over 1419775.48 frames.], batch size: 25, lr: 1.61e-04 2022-05-29 08:29:21,700 INFO [train.py:842] (0/4) Epoch 34, batch 5450, loss[loss=0.1569, simple_loss=0.2504, pruned_loss=0.03172, over 6393.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2629, pruned_loss=0.04285, over 1417095.72 frames.], batch size: 38, lr: 1.61e-04 2022-05-29 08:30:01,366 INFO [train.py:842] (0/4) Epoch 34, batch 5500, loss[loss=0.1686, simple_loss=0.2638, pruned_loss=0.03673, over 7205.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2628, pruned_loss=0.04268, over 1420141.28 frames.], batch size: 22, lr: 1.61e-04 2022-05-29 08:30:40,396 INFO [train.py:842] (0/4) Epoch 34, batch 5550, loss[loss=0.1602, simple_loss=0.2578, pruned_loss=0.03129, over 7238.00 frames.], tot_loss[loss=0.1757, simple_loss=0.2646, pruned_loss=0.04342, over 1418515.98 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 08:31:19,884 INFO [train.py:842] (0/4) Epoch 34, batch 5600, loss[loss=0.1635, simple_loss=0.2621, pruned_loss=0.03243, over 7325.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2644, pruned_loss=0.04328, over 1419360.58 frames.], batch size: 20, lr: 
1.61e-04 2022-05-29 08:31:59,178 INFO [train.py:842] (0/4) Epoch 34, batch 5650, loss[loss=0.2054, simple_loss=0.296, pruned_loss=0.05741, over 7182.00 frames.], tot_loss[loss=0.1738, simple_loss=0.263, pruned_loss=0.04233, over 1419488.78 frames.], batch size: 23, lr: 1.61e-04 2022-05-29 08:32:38,759 INFO [train.py:842] (0/4) Epoch 34, batch 5700, loss[loss=0.1696, simple_loss=0.2532, pruned_loss=0.04302, over 7323.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2637, pruned_loss=0.04267, over 1422613.99 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 08:33:17,953 INFO [train.py:842] (0/4) Epoch 34, batch 5750, loss[loss=0.1619, simple_loss=0.2522, pruned_loss=0.0358, over 7349.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2635, pruned_loss=0.04241, over 1424173.59 frames.], batch size: 19, lr: 1.61e-04 2022-05-29 08:33:57,815 INFO [train.py:842] (0/4) Epoch 34, batch 5800, loss[loss=0.1561, simple_loss=0.256, pruned_loss=0.02813, over 7325.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2631, pruned_loss=0.04228, over 1424455.30 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 08:34:37,230 INFO [train.py:842] (0/4) Epoch 34, batch 5850, loss[loss=0.1682, simple_loss=0.2636, pruned_loss=0.0364, over 6411.00 frames.], tot_loss[loss=0.1739, simple_loss=0.263, pruned_loss=0.0424, over 1425919.19 frames.], batch size: 38, lr: 1.61e-04 2022-05-29 08:35:27,456 INFO [train.py:842] (0/4) Epoch 34, batch 5900, loss[loss=0.1691, simple_loss=0.2794, pruned_loss=0.02938, over 7225.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2633, pruned_loss=0.04246, over 1420853.15 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 08:36:06,829 INFO [train.py:842] (0/4) Epoch 34, batch 5950, loss[loss=0.2078, simple_loss=0.2994, pruned_loss=0.05807, over 6435.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2625, pruned_loss=0.04246, over 1422453.11 frames.], batch size: 37, lr: 1.61e-04 2022-05-29 08:36:46,452 INFO [train.py:842] (0/4) Epoch 34, batch 6000, loss[loss=0.1424, simple_loss=0.2248, pruned_loss=0.03004, over 6995.00 frames.], tot_loss[loss=0.175, simple_loss=0.2635, pruned_loss=0.04322, over 1423194.65 frames.], batch size: 16, lr: 1.61e-04 2022-05-29 08:36:46,454 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 08:36:56,770 INFO [train.py:871] (0/4) Epoch 34, validation: loss=0.1642, simple_loss=0.2613, pruned_loss=0.03356, over 868885.00 frames. 
2022-05-29 08:37:36,209 INFO [train.py:842] (0/4) Epoch 34, batch 6050, loss[loss=0.1774, simple_loss=0.268, pruned_loss=0.04337, over 4898.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2638, pruned_loss=0.04324, over 1425634.97 frames.], batch size: 52, lr: 1.61e-04 2022-05-29 08:38:15,773 INFO [train.py:842] (0/4) Epoch 34, batch 6100, loss[loss=0.1883, simple_loss=0.2628, pruned_loss=0.0569, over 7235.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2644, pruned_loss=0.04366, over 1424364.32 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 08:38:55,092 INFO [train.py:842] (0/4) Epoch 34, batch 6150, loss[loss=0.1934, simple_loss=0.28, pruned_loss=0.05343, over 7199.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2633, pruned_loss=0.04298, over 1426072.75 frames.], batch size: 23, lr: 1.61e-04 2022-05-29 08:39:34,504 INFO [train.py:842] (0/4) Epoch 34, batch 6200, loss[loss=0.1739, simple_loss=0.2515, pruned_loss=0.04811, over 7286.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2622, pruned_loss=0.04237, over 1425470.22 frames.], batch size: 18, lr: 1.61e-04 2022-05-29 08:40:13,610 INFO [train.py:842] (0/4) Epoch 34, batch 6250, loss[loss=0.1651, simple_loss=0.2644, pruned_loss=0.03295, over 7226.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2616, pruned_loss=0.04169, over 1428005.97 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 08:40:53,090 INFO [train.py:842] (0/4) Epoch 34, batch 6300, loss[loss=0.1479, simple_loss=0.2333, pruned_loss=0.03124, over 7165.00 frames.], tot_loss[loss=0.1739, simple_loss=0.263, pruned_loss=0.04236, over 1431611.88 frames.], batch size: 18, lr: 1.61e-04 2022-05-29 08:41:32,255 INFO [train.py:842] (0/4) Epoch 34, batch 6350, loss[loss=0.1922, simple_loss=0.2829, pruned_loss=0.05078, over 7212.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2641, pruned_loss=0.04302, over 1428342.38 frames.], batch size: 26, lr: 1.61e-04 2022-05-29 08:42:12,050 INFO [train.py:842] (0/4) Epoch 34, batch 6400, loss[loss=0.1534, simple_loss=0.246, pruned_loss=0.03044, over 7325.00 frames.], tot_loss[loss=0.175, simple_loss=0.2642, pruned_loss=0.04289, over 1431045.46 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 08:42:51,308 INFO [train.py:842] (0/4) Epoch 34, batch 6450, loss[loss=0.1972, simple_loss=0.2819, pruned_loss=0.05624, over 7232.00 frames.], tot_loss[loss=0.174, simple_loss=0.2628, pruned_loss=0.04258, over 1427647.41 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 08:43:31,013 INFO [train.py:842] (0/4) Epoch 34, batch 6500, loss[loss=0.1747, simple_loss=0.2488, pruned_loss=0.05033, over 7413.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2633, pruned_loss=0.04226, over 1429003.08 frames.], batch size: 18, lr: 1.61e-04 2022-05-29 08:44:10,223 INFO [train.py:842] (0/4) Epoch 34, batch 6550, loss[loss=0.1665, simple_loss=0.2602, pruned_loss=0.03637, over 7109.00 frames.], tot_loss[loss=0.1733, simple_loss=0.263, pruned_loss=0.04181, over 1429710.37 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 08:44:50,110 INFO [train.py:842] (0/4) Epoch 34, batch 6600, loss[loss=0.1742, simple_loss=0.2537, pruned_loss=0.04737, over 7205.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2617, pruned_loss=0.04122, over 1432042.83 frames.], batch size: 16, lr: 1.61e-04 2022-05-29 08:45:29,269 INFO [train.py:842] (0/4) Epoch 34, batch 6650, loss[loss=0.1506, simple_loss=0.2293, pruned_loss=0.03592, over 7168.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2613, pruned_loss=0.04121, over 1427370.97 frames.], batch size: 18, lr: 1.61e-04 2022-05-29 08:46:08,632 
INFO [train.py:842] (0/4) Epoch 34, batch 6700, loss[loss=0.1619, simple_loss=0.2497, pruned_loss=0.03698, over 7215.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2628, pruned_loss=0.04193, over 1426318.33 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 08:46:47,745 INFO [train.py:842] (0/4) Epoch 34, batch 6750, loss[loss=0.1692, simple_loss=0.2578, pruned_loss=0.04032, over 7224.00 frames.], tot_loss[loss=0.1758, simple_loss=0.2647, pruned_loss=0.04342, over 1423467.31 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 08:47:27,225 INFO [train.py:842] (0/4) Epoch 34, batch 6800, loss[loss=0.197, simple_loss=0.2862, pruned_loss=0.05389, over 7150.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2644, pruned_loss=0.0434, over 1415340.88 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 08:48:06,411 INFO [train.py:842] (0/4) Epoch 34, batch 6850, loss[loss=0.1752, simple_loss=0.2738, pruned_loss=0.03832, over 6832.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2645, pruned_loss=0.04324, over 1416573.89 frames.], batch size: 31, lr: 1.61e-04 2022-05-29 08:48:45,879 INFO [train.py:842] (0/4) Epoch 34, batch 6900, loss[loss=0.1787, simple_loss=0.2702, pruned_loss=0.0436, over 6647.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2646, pruned_loss=0.04358, over 1415948.85 frames.], batch size: 31, lr: 1.61e-04 2022-05-29 08:49:25,561 INFO [train.py:842] (0/4) Epoch 34, batch 6950, loss[loss=0.2113, simple_loss=0.2986, pruned_loss=0.062, over 7124.00 frames.], tot_loss[loss=0.1759, simple_loss=0.2644, pruned_loss=0.0437, over 1421481.99 frames.], batch size: 26, lr: 1.61e-04 2022-05-29 08:50:05,182 INFO [train.py:842] (0/4) Epoch 34, batch 7000, loss[loss=0.1831, simple_loss=0.2729, pruned_loss=0.04666, over 7189.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2627, pruned_loss=0.04278, over 1421050.45 frames.], batch size: 26, lr: 1.61e-04 2022-05-29 08:50:44,475 INFO [train.py:842] (0/4) Epoch 34, batch 7050, loss[loss=0.1876, simple_loss=0.279, pruned_loss=0.04809, over 7066.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2632, pruned_loss=0.04303, over 1421289.95 frames.], batch size: 18, lr: 1.61e-04 2022-05-29 08:51:24,134 INFO [train.py:842] (0/4) Epoch 34, batch 7100, loss[loss=0.176, simple_loss=0.2712, pruned_loss=0.04042, over 7412.00 frames.], tot_loss[loss=0.1753, simple_loss=0.264, pruned_loss=0.04331, over 1425248.32 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 08:52:03,387 INFO [train.py:842] (0/4) Epoch 34, batch 7150, loss[loss=0.1622, simple_loss=0.2576, pruned_loss=0.03338, over 7424.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2633, pruned_loss=0.04262, over 1425037.26 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 08:52:43,304 INFO [train.py:842] (0/4) Epoch 34, batch 7200, loss[loss=0.1371, simple_loss=0.2241, pruned_loss=0.02505, over 7158.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2615, pruned_loss=0.04162, over 1424301.63 frames.], batch size: 18, lr: 1.61e-04 2022-05-29 08:53:22,459 INFO [train.py:842] (0/4) Epoch 34, batch 7250, loss[loss=0.1455, simple_loss=0.233, pruned_loss=0.02899, over 6824.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2616, pruned_loss=0.04194, over 1422571.10 frames.], batch size: 15, lr: 1.61e-04 2022-05-29 08:54:02,115 INFO [train.py:842] (0/4) Epoch 34, batch 7300, loss[loss=0.1662, simple_loss=0.2495, pruned_loss=0.04139, over 7149.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2612, pruned_loss=0.04191, over 1425972.99 frames.], batch size: 19, lr: 1.61e-04 2022-05-29 08:54:41,348 INFO [train.py:842] (0/4) 
Epoch 34, batch 7350, loss[loss=0.1998, simple_loss=0.2869, pruned_loss=0.05636, over 7112.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2613, pruned_loss=0.04217, over 1421764.91 frames.], batch size: 28, lr: 1.61e-04 2022-05-29 08:55:20,687 INFO [train.py:842] (0/4) Epoch 34, batch 7400, loss[loss=0.18, simple_loss=0.2792, pruned_loss=0.04037, over 7022.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2627, pruned_loss=0.04253, over 1419758.98 frames.], batch size: 28, lr: 1.61e-04 2022-05-29 08:55:59,978 INFO [train.py:842] (0/4) Epoch 34, batch 7450, loss[loss=0.159, simple_loss=0.2586, pruned_loss=0.02965, over 7121.00 frames.], tot_loss[loss=0.1742, simple_loss=0.263, pruned_loss=0.04272, over 1422330.63 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 08:56:39,506 INFO [train.py:842] (0/4) Epoch 34, batch 7500, loss[loss=0.1849, simple_loss=0.2796, pruned_loss=0.04514, over 7332.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2621, pruned_loss=0.04203, over 1424416.95 frames.], batch size: 25, lr: 1.61e-04 2022-05-29 08:57:18,926 INFO [train.py:842] (0/4) Epoch 34, batch 7550, loss[loss=0.152, simple_loss=0.2409, pruned_loss=0.03157, over 7248.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2603, pruned_loss=0.04105, over 1423551.00 frames.], batch size: 16, lr: 1.61e-04 2022-05-29 08:57:58,693 INFO [train.py:842] (0/4) Epoch 34, batch 7600, loss[loss=0.1512, simple_loss=0.2245, pruned_loss=0.03892, over 7223.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2615, pruned_loss=0.04161, over 1428626.11 frames.], batch size: 16, lr: 1.61e-04 2022-05-29 08:58:37,741 INFO [train.py:842] (0/4) Epoch 34, batch 7650, loss[loss=0.1818, simple_loss=0.2657, pruned_loss=0.04897, over 7122.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2609, pruned_loss=0.04112, over 1428380.43 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 08:59:17,467 INFO [train.py:842] (0/4) Epoch 34, batch 7700, loss[loss=0.2092, simple_loss=0.2804, pruned_loss=0.06897, over 7181.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2597, pruned_loss=0.04101, over 1428294.87 frames.], batch size: 26, lr: 1.61e-04 2022-05-29 08:59:56,636 INFO [train.py:842] (0/4) Epoch 34, batch 7750, loss[loss=0.185, simple_loss=0.2672, pruned_loss=0.05141, over 7357.00 frames.], tot_loss[loss=0.1698, simple_loss=0.2586, pruned_loss=0.04044, over 1429428.51 frames.], batch size: 19, lr: 1.61e-04 2022-05-29 09:00:36,149 INFO [train.py:842] (0/4) Epoch 34, batch 7800, loss[loss=0.1447, simple_loss=0.2264, pruned_loss=0.03145, over 7300.00 frames.], tot_loss[loss=0.171, simple_loss=0.2598, pruned_loss=0.04106, over 1426468.64 frames.], batch size: 17, lr: 1.61e-04 2022-05-29 09:01:15,313 INFO [train.py:842] (0/4) Epoch 34, batch 7850, loss[loss=0.2678, simple_loss=0.3502, pruned_loss=0.09269, over 5354.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2613, pruned_loss=0.042, over 1425542.80 frames.], batch size: 53, lr: 1.61e-04 2022-05-29 09:01:54,543 INFO [train.py:842] (0/4) Epoch 34, batch 7900, loss[loss=0.2173, simple_loss=0.2913, pruned_loss=0.07171, over 4935.00 frames.], tot_loss[loss=0.174, simple_loss=0.2626, pruned_loss=0.04269, over 1418597.27 frames.], batch size: 52, lr: 1.61e-04 2022-05-29 09:02:33,831 INFO [train.py:842] (0/4) Epoch 34, batch 7950, loss[loss=0.1779, simple_loss=0.2762, pruned_loss=0.03985, over 7299.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2623, pruned_loss=0.04298, over 1421231.93 frames.], batch size: 24, lr: 1.61e-04 2022-05-29 09:03:13,350 INFO [train.py:842] (0/4) Epoch 34, batch 8000, 
loss[loss=0.2165, simple_loss=0.302, pruned_loss=0.06549, over 7217.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2617, pruned_loss=0.04273, over 1419536.97 frames.], batch size: 23, lr: 1.61e-04 2022-05-29 09:03:52,509 INFO [train.py:842] (0/4) Epoch 34, batch 8050, loss[loss=0.1694, simple_loss=0.2541, pruned_loss=0.04236, over 7167.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2622, pruned_loss=0.04259, over 1416118.51 frames.], batch size: 18, lr: 1.61e-04 2022-05-29 09:04:32,344 INFO [train.py:842] (0/4) Epoch 34, batch 8100, loss[loss=0.1668, simple_loss=0.2602, pruned_loss=0.03673, over 7251.00 frames.], tot_loss[loss=0.173, simple_loss=0.2619, pruned_loss=0.04205, over 1421680.74 frames.], batch size: 19, lr: 1.61e-04 2022-05-29 09:05:11,681 INFO [train.py:842] (0/4) Epoch 34, batch 8150, loss[loss=0.1696, simple_loss=0.2718, pruned_loss=0.03374, over 7213.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2618, pruned_loss=0.04223, over 1423126.44 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 09:05:51,186 INFO [train.py:842] (0/4) Epoch 34, batch 8200, loss[loss=0.1653, simple_loss=0.2679, pruned_loss=0.0314, over 7065.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2621, pruned_loss=0.04221, over 1424051.08 frames.], batch size: 28, lr: 1.61e-04 2022-05-29 09:06:30,361 INFO [train.py:842] (0/4) Epoch 34, batch 8250, loss[loss=0.1813, simple_loss=0.2715, pruned_loss=0.04553, over 7298.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2615, pruned_loss=0.04206, over 1420075.16 frames.], batch size: 25, lr: 1.61e-04 2022-05-29 09:07:09,889 INFO [train.py:842] (0/4) Epoch 34, batch 8300, loss[loss=0.1739, simple_loss=0.2601, pruned_loss=0.0438, over 5185.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2619, pruned_loss=0.04246, over 1421640.63 frames.], batch size: 52, lr: 1.61e-04 2022-05-29 09:07:49,127 INFO [train.py:842] (0/4) Epoch 34, batch 8350, loss[loss=0.1592, simple_loss=0.2498, pruned_loss=0.03431, over 7152.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2619, pruned_loss=0.04285, over 1419836.99 frames.], batch size: 19, lr: 1.61e-04 2022-05-29 09:08:28,585 INFO [train.py:842] (0/4) Epoch 34, batch 8400, loss[loss=0.1511, simple_loss=0.2471, pruned_loss=0.02753, over 7256.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2619, pruned_loss=0.04291, over 1419143.27 frames.], batch size: 19, lr: 1.61e-04 2022-05-29 09:09:07,965 INFO [train.py:842] (0/4) Epoch 34, batch 8450, loss[loss=0.1612, simple_loss=0.2483, pruned_loss=0.03704, over 7132.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2615, pruned_loss=0.04273, over 1420449.54 frames.], batch size: 17, lr: 1.61e-04 2022-05-29 09:09:47,733 INFO [train.py:842] (0/4) Epoch 34, batch 8500, loss[loss=0.1472, simple_loss=0.2336, pruned_loss=0.03041, over 7141.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2618, pruned_loss=0.04303, over 1419962.19 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 09:10:26,689 INFO [train.py:842] (0/4) Epoch 34, batch 8550, loss[loss=0.1823, simple_loss=0.273, pruned_loss=0.04578, over 7204.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2618, pruned_loss=0.04283, over 1418230.41 frames.], batch size: 23, lr: 1.61e-04 2022-05-29 09:11:06,063 INFO [train.py:842] (0/4) Epoch 34, batch 8600, loss[loss=0.1426, simple_loss=0.2245, pruned_loss=0.03032, over 7208.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2631, pruned_loss=0.04324, over 1421752.12 frames.], batch size: 16, lr: 1.61e-04 2022-05-29 09:11:45,249 INFO [train.py:842] (0/4) Epoch 34, batch 8650, loss[loss=0.1491, 
simple_loss=0.2273, pruned_loss=0.03541, over 7290.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2617, pruned_loss=0.0425, over 1418604.85 frames.], batch size: 18, lr: 1.61e-04 2022-05-29 09:12:00,544 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-312000.pt 2022-05-29 09:12:27,529 INFO [train.py:842] (0/4) Epoch 34, batch 8700, loss[loss=0.1823, simple_loss=0.2722, pruned_loss=0.04623, over 7122.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2633, pruned_loss=0.04321, over 1414585.68 frames.], batch size: 26, lr: 1.61e-04 2022-05-29 09:13:06,608 INFO [train.py:842] (0/4) Epoch 34, batch 8750, loss[loss=0.1563, simple_loss=0.2554, pruned_loss=0.02861, over 7320.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2636, pruned_loss=0.04305, over 1414981.23 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 09:13:45,933 INFO [train.py:842] (0/4) Epoch 34, batch 8800, loss[loss=0.1748, simple_loss=0.2633, pruned_loss=0.0432, over 7327.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2631, pruned_loss=0.04279, over 1408369.08 frames.], batch size: 20, lr: 1.61e-04 2022-05-29 09:14:24,678 INFO [train.py:842] (0/4) Epoch 34, batch 8850, loss[loss=0.169, simple_loss=0.2591, pruned_loss=0.03945, over 7406.00 frames.], tot_loss[loss=0.1753, simple_loss=0.2639, pruned_loss=0.04332, over 1406471.56 frames.], batch size: 21, lr: 1.61e-04 2022-05-29 09:15:04,303 INFO [train.py:842] (0/4) Epoch 34, batch 8900, loss[loss=0.2317, simple_loss=0.305, pruned_loss=0.07918, over 6872.00 frames.], tot_loss[loss=0.1753, simple_loss=0.264, pruned_loss=0.04327, over 1406212.56 frames.], batch size: 31, lr: 1.61e-04 2022-05-29 09:15:43,383 INFO [train.py:842] (0/4) Epoch 34, batch 8950, loss[loss=0.1625, simple_loss=0.2521, pruned_loss=0.03643, over 7148.00 frames.], tot_loss[loss=0.175, simple_loss=0.2639, pruned_loss=0.04308, over 1407001.85 frames.], batch size: 19, lr: 1.61e-04 2022-05-29 09:16:22,275 INFO [train.py:842] (0/4) Epoch 34, batch 9000, loss[loss=0.1855, simple_loss=0.2737, pruned_loss=0.04869, over 7207.00 frames.], tot_loss[loss=0.1765, simple_loss=0.2657, pruned_loss=0.04371, over 1395945.47 frames.], batch size: 22, lr: 1.61e-04 2022-05-29 09:16:22,277 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 09:16:31,957 INFO [train.py:871] (0/4) Epoch 34, validation: loss=0.1642, simple_loss=0.2613, pruned_loss=0.03353, over 868885.00 frames. 
2022-05-29 09:17:10,307 INFO [train.py:842] (0/4) Epoch 34, batch 9050, loss[loss=0.1655, simple_loss=0.2693, pruned_loss=0.03086, over 6433.00 frames.], tot_loss[loss=0.177, simple_loss=0.2661, pruned_loss=0.04394, over 1377509.35 frames.], batch size: 38, lr: 1.61e-04 2022-05-29 09:17:48,537 INFO [train.py:842] (0/4) Epoch 34, batch 9100, loss[loss=0.1811, simple_loss=0.2804, pruned_loss=0.04096, over 6462.00 frames.], tot_loss[loss=0.1785, simple_loss=0.2677, pruned_loss=0.04462, over 1339098.02 frames.], batch size: 38, lr: 1.61e-04 2022-05-29 09:18:26,612 INFO [train.py:842] (0/4) Epoch 34, batch 9150, loss[loss=0.1835, simple_loss=0.2717, pruned_loss=0.04769, over 4813.00 frames.], tot_loss[loss=0.1827, simple_loss=0.2711, pruned_loss=0.04718, over 1273925.89 frames.], batch size: 53, lr: 1.60e-04 2022-05-29 09:18:59,449 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-34.pt 2022-05-29 09:19:15,514 INFO [train.py:842] (0/4) Epoch 35, batch 0, loss[loss=0.1847, simple_loss=0.2761, pruned_loss=0.04672, over 7231.00 frames.], tot_loss[loss=0.1847, simple_loss=0.2761, pruned_loss=0.04672, over 7231.00 frames.], batch size: 20, lr: 1.58e-04 2022-05-29 09:19:54,658 INFO [train.py:842] (0/4) Epoch 35, batch 50, loss[loss=0.1693, simple_loss=0.2649, pruned_loss=0.03682, over 7290.00 frames.], tot_loss[loss=0.1831, simple_loss=0.2702, pruned_loss=0.04796, over 318293.39 frames.], batch size: 24, lr: 1.58e-04 2022-05-29 09:20:34,660 INFO [train.py:842] (0/4) Epoch 35, batch 100, loss[loss=0.1846, simple_loss=0.2614, pruned_loss=0.05392, over 7137.00 frames.], tot_loss[loss=0.1776, simple_loss=0.265, pruned_loss=0.04515, over 567956.16 frames.], batch size: 26, lr: 1.58e-04 2022-05-29 09:21:13,988 INFO [train.py:842] (0/4) Epoch 35, batch 150, loss[loss=0.2168, simple_loss=0.305, pruned_loss=0.06433, over 7363.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2643, pruned_loss=0.04331, over 760222.70 frames.], batch size: 23, lr: 1.58e-04 2022-05-29 09:21:53,695 INFO [train.py:842] (0/4) Epoch 35, batch 200, loss[loss=0.1414, simple_loss=0.2224, pruned_loss=0.03022, over 7063.00 frames.], tot_loss[loss=0.1747, simple_loss=0.2634, pruned_loss=0.04297, over 909385.03 frames.], batch size: 18, lr: 1.58e-04 2022-05-29 09:22:33,049 INFO [train.py:842] (0/4) Epoch 35, batch 250, loss[loss=0.1621, simple_loss=0.2552, pruned_loss=0.03453, over 7244.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2622, pruned_loss=0.04281, over 1026683.12 frames.], batch size: 20, lr: 1.58e-04 2022-05-29 09:23:12,663 INFO [train.py:842] (0/4) Epoch 35, batch 300, loss[loss=0.1681, simple_loss=0.2565, pruned_loss=0.03988, over 7160.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2621, pruned_loss=0.04247, over 1113634.08 frames.], batch size: 19, lr: 1.58e-04 2022-05-29 09:23:51,956 INFO [train.py:842] (0/4) Epoch 35, batch 350, loss[loss=0.1963, simple_loss=0.285, pruned_loss=0.05375, over 7187.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2605, pruned_loss=0.04124, over 1186203.45 frames.], batch size: 23, lr: 1.58e-04 2022-05-29 09:24:31,405 INFO [train.py:842] (0/4) Epoch 35, batch 400, loss[loss=0.1688, simple_loss=0.2561, pruned_loss=0.04079, over 7321.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2606, pruned_loss=0.0413, over 1240388.40 frames.], batch size: 20, lr: 1.58e-04 2022-05-29 09:25:10,957 INFO [train.py:842] (0/4) Epoch 35, batch 450, loss[loss=0.1515, simple_loss=0.2446, pruned_loss=0.02924, over 6806.00 frames.], tot_loss[loss=0.1712, 
simple_loss=0.2601, pruned_loss=0.04119, over 1284514.84 frames.], batch size: 31, lr: 1.58e-04 2022-05-29 09:25:50,485 INFO [train.py:842] (0/4) Epoch 35, batch 500, loss[loss=0.2103, simple_loss=0.2922, pruned_loss=0.06423, over 7323.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2598, pruned_loss=0.04099, over 1313414.81 frames.], batch size: 20, lr: 1.58e-04 2022-05-29 09:26:29,909 INFO [train.py:842] (0/4) Epoch 35, batch 550, loss[loss=0.1529, simple_loss=0.2405, pruned_loss=0.03266, over 7071.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2596, pruned_loss=0.04102, over 1333937.94 frames.], batch size: 18, lr: 1.58e-04 2022-05-29 09:27:09,486 INFO [train.py:842] (0/4) Epoch 35, batch 600, loss[loss=0.1512, simple_loss=0.2571, pruned_loss=0.02267, over 7340.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2605, pruned_loss=0.04153, over 1353063.75 frames.], batch size: 22, lr: 1.58e-04 2022-05-29 09:27:48,644 INFO [train.py:842] (0/4) Epoch 35, batch 650, loss[loss=0.164, simple_loss=0.2423, pruned_loss=0.04286, over 7180.00 frames.], tot_loss[loss=0.172, simple_loss=0.2609, pruned_loss=0.04153, over 1372144.19 frames.], batch size: 18, lr: 1.58e-04 2022-05-29 09:28:28,631 INFO [train.py:842] (0/4) Epoch 35, batch 700, loss[loss=0.1481, simple_loss=0.2279, pruned_loss=0.03417, over 7275.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2605, pruned_loss=0.04101, over 1386632.79 frames.], batch size: 17, lr: 1.58e-04 2022-05-29 09:29:08,037 INFO [train.py:842] (0/4) Epoch 35, batch 750, loss[loss=0.1366, simple_loss=0.2294, pruned_loss=0.02187, over 7257.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2597, pruned_loss=0.04042, over 1393538.27 frames.], batch size: 19, lr: 1.58e-04 2022-05-29 09:29:47,597 INFO [train.py:842] (0/4) Epoch 35, batch 800, loss[loss=0.1775, simple_loss=0.286, pruned_loss=0.03446, over 7225.00 frames.], tot_loss[loss=0.171, simple_loss=0.2606, pruned_loss=0.04067, over 1402395.50 frames.], batch size: 21, lr: 1.58e-04 2022-05-29 09:30:26,767 INFO [train.py:842] (0/4) Epoch 35, batch 850, loss[loss=0.1613, simple_loss=0.2491, pruned_loss=0.0368, over 7298.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2615, pruned_loss=0.04087, over 1403104.66 frames.], batch size: 24, lr: 1.58e-04 2022-05-29 09:31:06,377 INFO [train.py:842] (0/4) Epoch 35, batch 900, loss[loss=0.2109, simple_loss=0.2909, pruned_loss=0.06546, over 5175.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2617, pruned_loss=0.04125, over 1406825.29 frames.], batch size: 52, lr: 1.58e-04 2022-05-29 09:31:45,716 INFO [train.py:842] (0/4) Epoch 35, batch 950, loss[loss=0.1712, simple_loss=0.2689, pruned_loss=0.03669, over 7261.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2606, pruned_loss=0.04129, over 1410417.39 frames.], batch size: 19, lr: 1.58e-04 2022-05-29 09:32:25,389 INFO [train.py:842] (0/4) Epoch 35, batch 1000, loss[loss=0.1794, simple_loss=0.2817, pruned_loss=0.03852, over 6780.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2612, pruned_loss=0.04158, over 1411109.01 frames.], batch size: 31, lr: 1.58e-04 2022-05-29 09:33:04,760 INFO [train.py:842] (0/4) Epoch 35, batch 1050, loss[loss=0.1972, simple_loss=0.2912, pruned_loss=0.05161, over 7415.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2596, pruned_loss=0.04101, over 1416231.99 frames.], batch size: 21, lr: 1.58e-04 2022-05-29 09:33:44,554 INFO [train.py:842] (0/4) Epoch 35, batch 1100, loss[loss=0.1744, simple_loss=0.265, pruned_loss=0.04196, over 7352.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2591, 
pruned_loss=0.04031, over 1421109.18 frames.], batch size: 19, lr: 1.58e-04 2022-05-29 09:34:23,657 INFO [train.py:842] (0/4) Epoch 35, batch 1150, loss[loss=0.1648, simple_loss=0.272, pruned_loss=0.02878, over 7193.00 frames.], tot_loss[loss=0.1707, simple_loss=0.26, pruned_loss=0.04068, over 1421800.62 frames.], batch size: 23, lr: 1.58e-04 2022-05-29 09:35:03,454 INFO [train.py:842] (0/4) Epoch 35, batch 1200, loss[loss=0.1554, simple_loss=0.2359, pruned_loss=0.03745, over 7272.00 frames.], tot_loss[loss=0.1689, simple_loss=0.2584, pruned_loss=0.03968, over 1424702.10 frames.], batch size: 18, lr: 1.58e-04 2022-05-29 09:35:42,609 INFO [train.py:842] (0/4) Epoch 35, batch 1250, loss[loss=0.1596, simple_loss=0.2527, pruned_loss=0.03323, over 7332.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2597, pruned_loss=0.0404, over 1423728.79 frames.], batch size: 22, lr: 1.58e-04 2022-05-29 09:36:21,898 INFO [train.py:842] (0/4) Epoch 35, batch 1300, loss[loss=0.1837, simple_loss=0.2729, pruned_loss=0.04726, over 6993.00 frames.], tot_loss[loss=0.172, simple_loss=0.2617, pruned_loss=0.04119, over 1420392.72 frames.], batch size: 28, lr: 1.58e-04 2022-05-29 09:37:00,973 INFO [train.py:842] (0/4) Epoch 35, batch 1350, loss[loss=0.1644, simple_loss=0.2575, pruned_loss=0.03562, over 7103.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2617, pruned_loss=0.04132, over 1423577.12 frames.], batch size: 28, lr: 1.58e-04 2022-05-29 09:37:40,237 INFO [train.py:842] (0/4) Epoch 35, batch 1400, loss[loss=0.1611, simple_loss=0.2472, pruned_loss=0.03743, over 7317.00 frames.], tot_loss[loss=0.172, simple_loss=0.2611, pruned_loss=0.04141, over 1421193.08 frames.], batch size: 20, lr: 1.58e-04 2022-05-29 09:38:19,781 INFO [train.py:842] (0/4) Epoch 35, batch 1450, loss[loss=0.1489, simple_loss=0.243, pruned_loss=0.02742, over 7258.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2602, pruned_loss=0.04127, over 1418715.87 frames.], batch size: 19, lr: 1.58e-04 2022-05-29 09:38:59,556 INFO [train.py:842] (0/4) Epoch 35, batch 1500, loss[loss=0.1766, simple_loss=0.2659, pruned_loss=0.04358, over 7154.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2614, pruned_loss=0.04192, over 1419300.78 frames.], batch size: 17, lr: 1.58e-04 2022-05-29 09:39:38,669 INFO [train.py:842] (0/4) Epoch 35, batch 1550, loss[loss=0.1677, simple_loss=0.2578, pruned_loss=0.03879, over 7220.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2621, pruned_loss=0.04244, over 1419733.54 frames.], batch size: 21, lr: 1.58e-04 2022-05-29 09:40:18,413 INFO [train.py:842] (0/4) Epoch 35, batch 1600, loss[loss=0.1603, simple_loss=0.2547, pruned_loss=0.03291, over 7081.00 frames.], tot_loss[loss=0.1722, simple_loss=0.261, pruned_loss=0.0417, over 1421391.48 frames.], batch size: 28, lr: 1.58e-04 2022-05-29 09:40:57,782 INFO [train.py:842] (0/4) Epoch 35, batch 1650, loss[loss=0.1941, simple_loss=0.2773, pruned_loss=0.05546, over 7415.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2611, pruned_loss=0.04186, over 1426119.21 frames.], batch size: 18, lr: 1.58e-04 2022-05-29 09:41:48,185 INFO [train.py:842] (0/4) Epoch 35, batch 1700, loss[loss=0.2477, simple_loss=0.3193, pruned_loss=0.08811, over 5099.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2613, pruned_loss=0.042, over 1425830.21 frames.], batch size: 52, lr: 1.58e-04 2022-05-29 09:42:27,689 INFO [train.py:842] (0/4) Epoch 35, batch 1750, loss[loss=0.1481, simple_loss=0.2354, pruned_loss=0.03044, over 7158.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2608, pruned_loss=0.04176, over 
1425521.00 frames.], batch size: 18, lr: 1.58e-04 2022-05-29 09:43:07,475 INFO [train.py:842] (0/4) Epoch 35, batch 1800, loss[loss=0.1878, simple_loss=0.2923, pruned_loss=0.04167, over 7302.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2596, pruned_loss=0.04079, over 1429198.42 frames.], batch size: 25, lr: 1.58e-04 2022-05-29 09:43:46,608 INFO [train.py:842] (0/4) Epoch 35, batch 1850, loss[loss=0.1867, simple_loss=0.272, pruned_loss=0.05074, over 7075.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2598, pruned_loss=0.04062, over 1425532.37 frames.], batch size: 18, lr: 1.58e-04 2022-05-29 09:44:26,189 INFO [train.py:842] (0/4) Epoch 35, batch 1900, loss[loss=0.163, simple_loss=0.2604, pruned_loss=0.03275, over 7373.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2601, pruned_loss=0.041, over 1424799.52 frames.], batch size: 23, lr: 1.58e-04 2022-05-29 09:45:05,536 INFO [train.py:842] (0/4) Epoch 35, batch 1950, loss[loss=0.1506, simple_loss=0.2307, pruned_loss=0.03528, over 7161.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2604, pruned_loss=0.04136, over 1424163.24 frames.], batch size: 18, lr: 1.58e-04 2022-05-29 09:45:55,995 INFO [train.py:842] (0/4) Epoch 35, batch 2000, loss[loss=0.1998, simple_loss=0.2913, pruned_loss=0.05417, over 6501.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2605, pruned_loss=0.04136, over 1420105.22 frames.], batch size: 39, lr: 1.58e-04 2022-05-29 09:46:35,087 INFO [train.py:842] (0/4) Epoch 35, batch 2050, loss[loss=0.1827, simple_loss=0.2614, pruned_loss=0.05199, over 7103.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2622, pruned_loss=0.04217, over 1421171.28 frames.], batch size: 21, lr: 1.58e-04 2022-05-29 09:47:25,693 INFO [train.py:842] (0/4) Epoch 35, batch 2100, loss[loss=0.1855, simple_loss=0.2792, pruned_loss=0.04588, over 7417.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2633, pruned_loss=0.04258, over 1423572.43 frames.], batch size: 21, lr: 1.58e-04 2022-05-29 09:48:05,086 INFO [train.py:842] (0/4) Epoch 35, batch 2150, loss[loss=0.2042, simple_loss=0.2829, pruned_loss=0.06276, over 6378.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2632, pruned_loss=0.04285, over 1426726.05 frames.], batch size: 37, lr: 1.58e-04 2022-05-29 09:48:44,639 INFO [train.py:842] (0/4) Epoch 35, batch 2200, loss[loss=0.1727, simple_loss=0.2565, pruned_loss=0.04445, over 7430.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2632, pruned_loss=0.04284, over 1422588.60 frames.], batch size: 20, lr: 1.58e-04 2022-05-29 09:49:23,739 INFO [train.py:842] (0/4) Epoch 35, batch 2250, loss[loss=0.177, simple_loss=0.2658, pruned_loss=0.04411, over 7276.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2626, pruned_loss=0.04206, over 1421263.92 frames.], batch size: 18, lr: 1.58e-04 2022-05-29 09:50:03,202 INFO [train.py:842] (0/4) Epoch 35, batch 2300, loss[loss=0.2432, simple_loss=0.3178, pruned_loss=0.08432, over 7161.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2618, pruned_loss=0.04158, over 1418350.03 frames.], batch size: 26, lr: 1.58e-04 2022-05-29 09:50:42,234 INFO [train.py:842] (0/4) Epoch 35, batch 2350, loss[loss=0.1862, simple_loss=0.2847, pruned_loss=0.04388, over 7131.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2614, pruned_loss=0.04172, over 1417030.73 frames.], batch size: 28, lr: 1.58e-04 2022-05-29 09:51:21,883 INFO [train.py:842] (0/4) Epoch 35, batch 2400, loss[loss=0.1516, simple_loss=0.2281, pruned_loss=0.03758, over 6988.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2609, pruned_loss=0.04129, over 1422958.49 frames.], 
batch size: 16, lr: 1.58e-04 2022-05-29 09:52:01,230 INFO [train.py:842] (0/4) Epoch 35, batch 2450, loss[loss=0.1626, simple_loss=0.2567, pruned_loss=0.03422, over 7431.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2602, pruned_loss=0.04085, over 1423588.12 frames.], batch size: 20, lr: 1.58e-04 2022-05-29 09:52:41,067 INFO [train.py:842] (0/4) Epoch 35, batch 2500, loss[loss=0.1799, simple_loss=0.266, pruned_loss=0.04693, over 6429.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2589, pruned_loss=0.04076, over 1425699.18 frames.], batch size: 38, lr: 1.58e-04 2022-05-29 09:53:20,275 INFO [train.py:842] (0/4) Epoch 35, batch 2550, loss[loss=0.1539, simple_loss=0.2589, pruned_loss=0.02444, over 7117.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2587, pruned_loss=0.04011, over 1424738.56 frames.], batch size: 21, lr: 1.58e-04 2022-05-29 09:53:59,852 INFO [train.py:842] (0/4) Epoch 35, batch 2600, loss[loss=0.1597, simple_loss=0.2613, pruned_loss=0.02903, over 7222.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2591, pruned_loss=0.04077, over 1423586.29 frames.], batch size: 22, lr: 1.58e-04 2022-05-29 09:54:38,884 INFO [train.py:842] (0/4) Epoch 35, batch 2650, loss[loss=0.1709, simple_loss=0.26, pruned_loss=0.04092, over 7196.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2599, pruned_loss=0.04136, over 1422063.97 frames.], batch size: 23, lr: 1.58e-04 2022-05-29 09:55:18,673 INFO [train.py:842] (0/4) Epoch 35, batch 2700, loss[loss=0.202, simple_loss=0.288, pruned_loss=0.05801, over 7125.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2602, pruned_loss=0.04152, over 1424551.15 frames.], batch size: 21, lr: 1.57e-04 2022-05-29 09:55:57,882 INFO [train.py:842] (0/4) Epoch 35, batch 2750, loss[loss=0.1689, simple_loss=0.2621, pruned_loss=0.03783, over 7321.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2596, pruned_loss=0.04107, over 1424281.69 frames.], batch size: 21, lr: 1.57e-04 2022-05-29 09:56:37,373 INFO [train.py:842] (0/4) Epoch 35, batch 2800, loss[loss=0.1638, simple_loss=0.2724, pruned_loss=0.02754, over 7325.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2605, pruned_loss=0.04144, over 1425893.99 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 09:57:16,504 INFO [train.py:842] (0/4) Epoch 35, batch 2850, loss[loss=0.163, simple_loss=0.2552, pruned_loss=0.03543, over 7163.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2609, pruned_loss=0.04131, over 1424029.76 frames.], batch size: 19, lr: 1.57e-04 2022-05-29 09:57:56,066 INFO [train.py:842] (0/4) Epoch 35, batch 2900, loss[loss=0.1994, simple_loss=0.2921, pruned_loss=0.05337, over 6615.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2611, pruned_loss=0.04136, over 1422255.84 frames.], batch size: 38, lr: 1.57e-04 2022-05-29 09:58:34,879 INFO [train.py:842] (0/4) Epoch 35, batch 2950, loss[loss=0.1435, simple_loss=0.2246, pruned_loss=0.03124, over 7233.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2622, pruned_loss=0.04141, over 1415139.51 frames.], batch size: 16, lr: 1.57e-04 2022-05-29 09:59:14,473 INFO [train.py:842] (0/4) Epoch 35, batch 3000, loss[loss=0.1682, simple_loss=0.2582, pruned_loss=0.03904, over 7370.00 frames.], tot_loss[loss=0.1725, simple_loss=0.262, pruned_loss=0.04143, over 1419460.14 frames.], batch size: 23, lr: 1.57e-04 2022-05-29 09:59:14,475 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 09:59:24,201 INFO [train.py:871] (0/4) Epoch 35, validation: loss=0.1643, simple_loss=0.2608, pruned_loss=0.03387, over 868885.00 frames. 
2022-05-29 10:00:03,538 INFO [train.py:842] (0/4) Epoch 35, batch 3050, loss[loss=0.1708, simple_loss=0.272, pruned_loss=0.03479, over 7237.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2629, pruned_loss=0.04209, over 1422630.83 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:00:42,995 INFO [train.py:842] (0/4) Epoch 35, batch 3100, loss[loss=0.1899, simple_loss=0.2753, pruned_loss=0.05228, over 7379.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2629, pruned_loss=0.04186, over 1420234.41 frames.], batch size: 23, lr: 1.57e-04 2022-05-29 10:01:22,490 INFO [train.py:842] (0/4) Epoch 35, batch 3150, loss[loss=0.1892, simple_loss=0.2752, pruned_loss=0.0516, over 7201.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2608, pruned_loss=0.04109, over 1421922.53 frames.], batch size: 22, lr: 1.57e-04 2022-05-29 10:02:01,988 INFO [train.py:842] (0/4) Epoch 35, batch 3200, loss[loss=0.2281, simple_loss=0.3051, pruned_loss=0.07555, over 7204.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2618, pruned_loss=0.04179, over 1426632.28 frames.], batch size: 22, lr: 1.57e-04 2022-05-29 10:02:41,280 INFO [train.py:842] (0/4) Epoch 35, batch 3250, loss[loss=0.1714, simple_loss=0.2651, pruned_loss=0.03888, over 7433.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2614, pruned_loss=0.04165, over 1424770.89 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:03:20,908 INFO [train.py:842] (0/4) Epoch 35, batch 3300, loss[loss=0.1506, simple_loss=0.2494, pruned_loss=0.02593, over 7432.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2613, pruned_loss=0.04162, over 1425591.71 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:04:00,226 INFO [train.py:842] (0/4) Epoch 35, batch 3350, loss[loss=0.1849, simple_loss=0.2925, pruned_loss=0.03868, over 7431.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2608, pruned_loss=0.04132, over 1429288.44 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:04:39,787 INFO [train.py:842] (0/4) Epoch 35, batch 3400, loss[loss=0.1683, simple_loss=0.2553, pruned_loss=0.04061, over 7282.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2604, pruned_loss=0.04129, over 1426088.19 frames.], batch size: 18, lr: 1.57e-04 2022-05-29 10:05:18,879 INFO [train.py:842] (0/4) Epoch 35, batch 3450, loss[loss=0.1458, simple_loss=0.2336, pruned_loss=0.02896, over 6986.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2608, pruned_loss=0.04138, over 1429482.79 frames.], batch size: 16, lr: 1.57e-04 2022-05-29 10:05:58,327 INFO [train.py:842] (0/4) Epoch 35, batch 3500, loss[loss=0.1838, simple_loss=0.2709, pruned_loss=0.04833, over 7324.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2602, pruned_loss=0.04122, over 1428252.11 frames.], batch size: 22, lr: 1.57e-04 2022-05-29 10:06:37,458 INFO [train.py:842] (0/4) Epoch 35, batch 3550, loss[loss=0.1814, simple_loss=0.2673, pruned_loss=0.04771, over 6799.00 frames.], tot_loss[loss=0.172, simple_loss=0.2609, pruned_loss=0.04152, over 1421462.76 frames.], batch size: 31, lr: 1.57e-04 2022-05-29 10:07:17,158 INFO [train.py:842] (0/4) Epoch 35, batch 3600, loss[loss=0.1607, simple_loss=0.2543, pruned_loss=0.03357, over 7205.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2605, pruned_loss=0.04127, over 1419734.87 frames.], batch size: 22, lr: 1.57e-04 2022-05-29 10:07:56,175 INFO [train.py:842] (0/4) Epoch 35, batch 3650, loss[loss=0.1732, simple_loss=0.2638, pruned_loss=0.0413, over 7305.00 frames.], tot_loss[loss=0.1705, simple_loss=0.26, pruned_loss=0.04055, over 1420960.21 frames.], batch size: 25, lr: 1.57e-04 2022-05-29 10:08:35,604 
INFO [train.py:842] (0/4) Epoch 35, batch 3700, loss[loss=0.1576, simple_loss=0.2504, pruned_loss=0.03239, over 6282.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2604, pruned_loss=0.04053, over 1420445.70 frames.], batch size: 37, lr: 1.57e-04 2022-05-29 10:09:14,958 INFO [train.py:842] (0/4) Epoch 35, batch 3750, loss[loss=0.1848, simple_loss=0.2656, pruned_loss=0.05204, over 5012.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2612, pruned_loss=0.04133, over 1417498.83 frames.], batch size: 53, lr: 1.57e-04 2022-05-29 10:09:54,420 INFO [train.py:842] (0/4) Epoch 35, batch 3800, loss[loss=0.1855, simple_loss=0.2829, pruned_loss=0.04404, over 4924.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2618, pruned_loss=0.04156, over 1418061.76 frames.], batch size: 52, lr: 1.57e-04 2022-05-29 10:10:33,447 INFO [train.py:842] (0/4) Epoch 35, batch 3850, loss[loss=0.1788, simple_loss=0.2522, pruned_loss=0.05273, over 7000.00 frames.], tot_loss[loss=0.1737, simple_loss=0.2631, pruned_loss=0.04222, over 1420057.13 frames.], batch size: 16, lr: 1.57e-04 2022-05-29 10:11:12,937 INFO [train.py:842] (0/4) Epoch 35, batch 3900, loss[loss=0.1565, simple_loss=0.2443, pruned_loss=0.0344, over 7274.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2638, pruned_loss=0.04259, over 1417321.81 frames.], batch size: 18, lr: 1.57e-04 2022-05-29 10:11:52,241 INFO [train.py:842] (0/4) Epoch 35, batch 3950, loss[loss=0.1422, simple_loss=0.2309, pruned_loss=0.02671, over 7163.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2629, pruned_loss=0.04238, over 1416713.69 frames.], batch size: 19, lr: 1.57e-04 2022-05-29 10:12:31,911 INFO [train.py:842] (0/4) Epoch 35, batch 4000, loss[loss=0.2437, simple_loss=0.3266, pruned_loss=0.08043, over 7227.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2627, pruned_loss=0.04229, over 1418091.63 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:13:11,416 INFO [train.py:842] (0/4) Epoch 35, batch 4050, loss[loss=0.172, simple_loss=0.2624, pruned_loss=0.04084, over 7205.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2625, pruned_loss=0.0422, over 1422112.62 frames.], batch size: 22, lr: 1.57e-04 2022-05-29 10:13:50,828 INFO [train.py:842] (0/4) Epoch 35, batch 4100, loss[loss=0.2111, simple_loss=0.3024, pruned_loss=0.05993, over 6774.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2634, pruned_loss=0.04244, over 1424638.64 frames.], batch size: 31, lr: 1.57e-04 2022-05-29 10:14:30,197 INFO [train.py:842] (0/4) Epoch 35, batch 4150, loss[loss=0.1547, simple_loss=0.2309, pruned_loss=0.03931, over 7287.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2628, pruned_loss=0.04217, over 1428163.60 frames.], batch size: 17, lr: 1.57e-04 2022-05-29 10:15:09,589 INFO [train.py:842] (0/4) Epoch 35, batch 4200, loss[loss=0.1918, simple_loss=0.283, pruned_loss=0.05027, over 7434.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2624, pruned_loss=0.04203, over 1424974.50 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:15:48,835 INFO [train.py:842] (0/4) Epoch 35, batch 4250, loss[loss=0.1874, simple_loss=0.2715, pruned_loss=0.05166, over 7162.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2619, pruned_loss=0.04189, over 1426690.04 frames.], batch size: 18, lr: 1.57e-04 2022-05-29 10:16:28,502 INFO [train.py:842] (0/4) Epoch 35, batch 4300, loss[loss=0.1332, simple_loss=0.2148, pruned_loss=0.02585, over 7144.00 frames.], tot_loss[loss=0.1736, simple_loss=0.262, pruned_loss=0.04258, over 1427741.67 frames.], batch size: 17, lr: 1.57e-04 2022-05-29 10:17:07,783 INFO [train.py:842] 
(0/4) Epoch 35, batch 4350, loss[loss=0.1494, simple_loss=0.2381, pruned_loss=0.03034, over 7311.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2613, pruned_loss=0.04213, over 1430824.08 frames.], batch size: 21, lr: 1.57e-04 2022-05-29 10:17:47,263 INFO [train.py:842] (0/4) Epoch 35, batch 4400, loss[loss=0.1768, simple_loss=0.2654, pruned_loss=0.04413, over 7206.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2611, pruned_loss=0.04214, over 1423532.24 frames.], batch size: 22, lr: 1.57e-04 2022-05-29 10:18:26,571 INFO [train.py:842] (0/4) Epoch 35, batch 4450, loss[loss=0.2036, simple_loss=0.2848, pruned_loss=0.06118, over 7295.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2598, pruned_loss=0.04165, over 1420697.88 frames.], batch size: 24, lr: 1.57e-04 2022-05-29 10:19:06,137 INFO [train.py:842] (0/4) Epoch 35, batch 4500, loss[loss=0.1531, simple_loss=0.2262, pruned_loss=0.04001, over 6828.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2606, pruned_loss=0.04199, over 1420784.62 frames.], batch size: 15, lr: 1.57e-04 2022-05-29 10:19:45,386 INFO [train.py:842] (0/4) Epoch 35, batch 4550, loss[loss=0.1481, simple_loss=0.2333, pruned_loss=0.03142, over 7199.00 frames.], tot_loss[loss=0.1726, simple_loss=0.261, pruned_loss=0.0421, over 1420295.41 frames.], batch size: 23, lr: 1.57e-04 2022-05-29 10:20:25,343 INFO [train.py:842] (0/4) Epoch 35, batch 4600, loss[loss=0.1407, simple_loss=0.2405, pruned_loss=0.02045, over 7148.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2601, pruned_loss=0.04172, over 1421739.75 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:21:04,529 INFO [train.py:842] (0/4) Epoch 35, batch 4650, loss[loss=0.1964, simple_loss=0.2924, pruned_loss=0.05026, over 7230.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2603, pruned_loss=0.04211, over 1418549.07 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:21:44,084 INFO [train.py:842] (0/4) Epoch 35, batch 4700, loss[loss=0.1796, simple_loss=0.2693, pruned_loss=0.04501, over 7219.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2609, pruned_loss=0.04227, over 1419541.69 frames.], batch size: 21, lr: 1.57e-04 2022-05-29 10:22:23,143 INFO [train.py:842] (0/4) Epoch 35, batch 4750, loss[loss=0.1902, simple_loss=0.2721, pruned_loss=0.05409, over 7201.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2612, pruned_loss=0.04265, over 1420443.80 frames.], batch size: 22, lr: 1.57e-04 2022-05-29 10:23:02,784 INFO [train.py:842] (0/4) Epoch 35, batch 4800, loss[loss=0.1555, simple_loss=0.2445, pruned_loss=0.03325, over 7263.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2605, pruned_loss=0.04201, over 1426176.16 frames.], batch size: 19, lr: 1.57e-04 2022-05-29 10:23:42,025 INFO [train.py:842] (0/4) Epoch 35, batch 4850, loss[loss=0.2231, simple_loss=0.3073, pruned_loss=0.06947, over 5315.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2611, pruned_loss=0.04189, over 1424310.37 frames.], batch size: 52, lr: 1.57e-04 2022-05-29 10:24:21,733 INFO [train.py:842] (0/4) Epoch 35, batch 4900, loss[loss=0.1733, simple_loss=0.2551, pruned_loss=0.04576, over 7274.00 frames.], tot_loss[loss=0.172, simple_loss=0.2608, pruned_loss=0.04159, over 1424896.10 frames.], batch size: 17, lr: 1.57e-04 2022-05-29 10:25:01,012 INFO [train.py:842] (0/4) Epoch 35, batch 4950, loss[loss=0.1454, simple_loss=0.2414, pruned_loss=0.02467, over 7432.00 frames.], tot_loss[loss=0.1711, simple_loss=0.26, pruned_loss=0.04112, over 1428505.66 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:25:40,561 INFO [train.py:842] (0/4) Epoch 35, batch 
5000, loss[loss=0.1978, simple_loss=0.3033, pruned_loss=0.04613, over 7215.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2616, pruned_loss=0.04162, over 1427944.68 frames.], batch size: 21, lr: 1.57e-04 2022-05-29 10:26:19,944 INFO [train.py:842] (0/4) Epoch 35, batch 5050, loss[loss=0.1633, simple_loss=0.2407, pruned_loss=0.04293, over 7162.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2621, pruned_loss=0.04228, over 1431352.85 frames.], batch size: 19, lr: 1.57e-04 2022-05-29 10:26:59,243 INFO [train.py:842] (0/4) Epoch 35, batch 5100, loss[loss=0.1387, simple_loss=0.2233, pruned_loss=0.02702, over 7260.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2629, pruned_loss=0.04246, over 1427598.79 frames.], batch size: 17, lr: 1.57e-04 2022-05-29 10:27:38,560 INFO [train.py:842] (0/4) Epoch 35, batch 5150, loss[loss=0.151, simple_loss=0.2432, pruned_loss=0.02941, over 7324.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2625, pruned_loss=0.04231, over 1427325.72 frames.], batch size: 21, lr: 1.57e-04 2022-05-29 10:28:18,433 INFO [train.py:842] (0/4) Epoch 35, batch 5200, loss[loss=0.1556, simple_loss=0.2332, pruned_loss=0.039, over 7253.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2611, pruned_loss=0.04188, over 1428152.87 frames.], batch size: 16, lr: 1.57e-04 2022-05-29 10:28:57,712 INFO [train.py:842] (0/4) Epoch 35, batch 5250, loss[loss=0.1571, simple_loss=0.2497, pruned_loss=0.03221, over 7362.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2613, pruned_loss=0.04186, over 1428543.33 frames.], batch size: 19, lr: 1.57e-04 2022-05-29 10:29:37,107 INFO [train.py:842] (0/4) Epoch 35, batch 5300, loss[loss=0.1772, simple_loss=0.275, pruned_loss=0.0397, over 7277.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2614, pruned_loss=0.0414, over 1428294.02 frames.], batch size: 24, lr: 1.57e-04 2022-05-29 10:30:16,475 INFO [train.py:842] (0/4) Epoch 35, batch 5350, loss[loss=0.1745, simple_loss=0.2691, pruned_loss=0.03991, over 7330.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2609, pruned_loss=0.04138, over 1427285.07 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:30:56,243 INFO [train.py:842] (0/4) Epoch 35, batch 5400, loss[loss=0.1627, simple_loss=0.2594, pruned_loss=0.03297, over 7333.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2609, pruned_loss=0.04105, over 1431082.06 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:31:35,706 INFO [train.py:842] (0/4) Epoch 35, batch 5450, loss[loss=0.1621, simple_loss=0.241, pruned_loss=0.04154, over 7159.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2607, pruned_loss=0.04116, over 1431877.14 frames.], batch size: 18, lr: 1.57e-04 2022-05-29 10:32:14,944 INFO [train.py:842] (0/4) Epoch 35, batch 5500, loss[loss=0.1601, simple_loss=0.2551, pruned_loss=0.03262, over 7146.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2609, pruned_loss=0.04111, over 1428165.02 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:32:54,337 INFO [train.py:842] (0/4) Epoch 35, batch 5550, loss[loss=0.1313, simple_loss=0.2212, pruned_loss=0.02075, over 7275.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2617, pruned_loss=0.04167, over 1425677.77 frames.], batch size: 17, lr: 1.57e-04 2022-05-29 10:33:33,937 INFO [train.py:842] (0/4) Epoch 35, batch 5600, loss[loss=0.1708, simple_loss=0.2621, pruned_loss=0.03978, over 7236.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2613, pruned_loss=0.04159, over 1424710.21 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:34:13,251 INFO [train.py:842] (0/4) Epoch 35, batch 5650, loss[loss=0.1362, 
simple_loss=0.2215, pruned_loss=0.02548, over 7162.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2615, pruned_loss=0.04173, over 1426944.15 frames.], batch size: 18, lr: 1.57e-04 2022-05-29 10:34:52,885 INFO [train.py:842] (0/4) Epoch 35, batch 5700, loss[loss=0.1958, simple_loss=0.2816, pruned_loss=0.05495, over 7296.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2619, pruned_loss=0.04133, over 1428854.67 frames.], batch size: 24, lr: 1.57e-04 2022-05-29 10:35:32,076 INFO [train.py:842] (0/4) Epoch 35, batch 5750, loss[loss=0.1939, simple_loss=0.2905, pruned_loss=0.0486, over 6746.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2624, pruned_loss=0.04135, over 1431880.57 frames.], batch size: 31, lr: 1.57e-04 2022-05-29 10:36:11,756 INFO [train.py:842] (0/4) Epoch 35, batch 5800, loss[loss=0.1529, simple_loss=0.2585, pruned_loss=0.02364, over 7233.00 frames.], tot_loss[loss=0.171, simple_loss=0.2608, pruned_loss=0.04063, over 1430231.93 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:36:51,137 INFO [train.py:842] (0/4) Epoch 35, batch 5850, loss[loss=0.2258, simple_loss=0.3149, pruned_loss=0.06834, over 7135.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2602, pruned_loss=0.04072, over 1429494.71 frames.], batch size: 28, lr: 1.57e-04 2022-05-29 10:37:30,768 INFO [train.py:842] (0/4) Epoch 35, batch 5900, loss[loss=0.1811, simple_loss=0.2724, pruned_loss=0.04491, over 7204.00 frames.], tot_loss[loss=0.171, simple_loss=0.26, pruned_loss=0.04097, over 1428479.31 frames.], batch size: 23, lr: 1.57e-04 2022-05-29 10:38:10,004 INFO [train.py:842] (0/4) Epoch 35, batch 5950, loss[loss=0.1583, simple_loss=0.2465, pruned_loss=0.03506, over 7257.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2594, pruned_loss=0.04063, over 1428170.02 frames.], batch size: 19, lr: 1.57e-04 2022-05-29 10:38:49,677 INFO [train.py:842] (0/4) Epoch 35, batch 6000, loss[loss=0.187, simple_loss=0.2857, pruned_loss=0.04417, over 7342.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2599, pruned_loss=0.04123, over 1430624.22 frames.], batch size: 22, lr: 1.57e-04 2022-05-29 10:38:49,679 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 10:38:59,233 INFO [train.py:871] (0/4) Epoch 35, validation: loss=0.1626, simple_loss=0.2599, pruned_loss=0.03269, over 868885.00 frames. 
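
Each per-batch entry reports its own loss "over N frames" for a few thousand frames, while tot_loss is reported over roughly 1.4 million frames, which reads as a frame-weighted running average over the recent batches. The log itself does not show that bookkeeping, so the sketch below is only one plausible reading under that assumption; the class name and the reset policy are invented for illustration, not taken from train.py.

class RunningFrameLoss:
    """Assumed bookkeeping behind the tot_loss[... over N frames] figures."""

    def __init__(self):
        self.loss_sum = 0.0  # sum over batches of (per-frame loss * frames)
        self.frames = 0.0    # frames accumulated since the last reset

    def update(self, batch_loss, batch_frames):
        # batch_loss is taken to be the per-frame loss reported for one batch
        self.loss_sum += batch_loss * batch_frames
        self.frames += batch_frames

    def average(self):
        # corresponds to the tot_loss value printed "over <frames> frames"
        return self.loss_sum / max(self.frames, 1.0)

# e.g. feeding update(0.187, 7342.0) for the batch-6000 entry above nudges an
# accumulator sitting near the 0.1712 tot_loss reported alongside it.
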
2022-05-29 10:39:38,672 INFO [train.py:842] (0/4) Epoch 35, batch 6050, loss[loss=0.1744, simple_loss=0.2644, pruned_loss=0.04223, over 6996.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2602, pruned_loss=0.04112, over 1430279.02 frames.], batch size: 16, lr: 1.57e-04 2022-05-29 10:40:18,094 INFO [train.py:842] (0/4) Epoch 35, batch 6100, loss[loss=0.1321, simple_loss=0.2116, pruned_loss=0.02628, over 7002.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2611, pruned_loss=0.04152, over 1424805.96 frames.], batch size: 16, lr: 1.57e-04 2022-05-29 10:40:57,375 INFO [train.py:842] (0/4) Epoch 35, batch 6150, loss[loss=0.1546, simple_loss=0.2543, pruned_loss=0.02748, over 7088.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2618, pruned_loss=0.04155, over 1422788.96 frames.], batch size: 28, lr: 1.57e-04 2022-05-29 10:41:36,710 INFO [train.py:842] (0/4) Epoch 35, batch 6200, loss[loss=0.1575, simple_loss=0.2574, pruned_loss=0.02877, over 7228.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2642, pruned_loss=0.04215, over 1426098.50 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:42:16,067 INFO [train.py:842] (0/4) Epoch 35, batch 6250, loss[loss=0.1683, simple_loss=0.265, pruned_loss=0.03586, over 7411.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2642, pruned_loss=0.04218, over 1429186.56 frames.], batch size: 21, lr: 1.57e-04 2022-05-29 10:42:55,554 INFO [train.py:842] (0/4) Epoch 35, batch 6300, loss[loss=0.1733, simple_loss=0.2652, pruned_loss=0.04072, over 7280.00 frames.], tot_loss[loss=0.1741, simple_loss=0.264, pruned_loss=0.04215, over 1426775.36 frames.], batch size: 18, lr: 1.57e-04 2022-05-29 10:43:34,617 INFO [train.py:842] (0/4) Epoch 35, batch 6350, loss[loss=0.228, simple_loss=0.3087, pruned_loss=0.07366, over 5062.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2627, pruned_loss=0.0418, over 1427308.20 frames.], batch size: 53, lr: 1.57e-04 2022-05-29 10:44:14,151 INFO [train.py:842] (0/4) Epoch 35, batch 6400, loss[loss=0.1788, simple_loss=0.2732, pruned_loss=0.04219, over 7278.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2637, pruned_loss=0.04249, over 1427373.11 frames.], batch size: 24, lr: 1.57e-04 2022-05-29 10:44:53,509 INFO [train.py:842] (0/4) Epoch 35, batch 6450, loss[loss=0.1845, simple_loss=0.2785, pruned_loss=0.04522, over 7138.00 frames.], tot_loss[loss=0.1746, simple_loss=0.2639, pruned_loss=0.04269, over 1428006.67 frames.], batch size: 20, lr: 1.57e-04 2022-05-29 10:45:32,956 INFO [train.py:842] (0/4) Epoch 35, batch 6500, loss[loss=0.1635, simple_loss=0.2578, pruned_loss=0.03457, over 6722.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2632, pruned_loss=0.04219, over 1429137.11 frames.], batch size: 31, lr: 1.57e-04 2022-05-29 10:46:12,204 INFO [train.py:842] (0/4) Epoch 35, batch 6550, loss[loss=0.1523, simple_loss=0.2402, pruned_loss=0.03218, over 7371.00 frames.], tot_loss[loss=0.174, simple_loss=0.263, pruned_loss=0.04246, over 1426778.10 frames.], batch size: 19, lr: 1.57e-04 2022-05-29 10:46:51,876 INFO [train.py:842] (0/4) Epoch 35, batch 6600, loss[loss=0.1711, simple_loss=0.2671, pruned_loss=0.03758, over 7053.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2626, pruned_loss=0.04284, over 1419400.29 frames.], batch size: 28, lr: 1.57e-04 2022-05-29 10:47:31,063 INFO [train.py:842] (0/4) Epoch 35, batch 6650, loss[loss=0.1855, simple_loss=0.2642, pruned_loss=0.05335, over 7133.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2636, pruned_loss=0.04302, over 1420206.34 frames.], batch size: 17, lr: 1.57e-04 2022-05-29 10:48:10,515 
INFO [train.py:842] (0/4) Epoch 35, batch 6700, loss[loss=0.19, simple_loss=0.2648, pruned_loss=0.05754, over 7287.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2625, pruned_loss=0.04254, over 1418480.08 frames.], batch size: 17, lr: 1.57e-04 2022-05-29 10:48:49,686 INFO [train.py:842] (0/4) Epoch 35, batch 6750, loss[loss=0.1264, simple_loss=0.211, pruned_loss=0.02095, over 7161.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2624, pruned_loss=0.04292, over 1415182.10 frames.], batch size: 17, lr: 1.56e-04 2022-05-29 10:49:29,327 INFO [train.py:842] (0/4) Epoch 35, batch 6800, loss[loss=0.2116, simple_loss=0.2965, pruned_loss=0.0633, over 6798.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2621, pruned_loss=0.04254, over 1418902.07 frames.], batch size: 31, lr: 1.56e-04 2022-05-29 10:50:08,368 INFO [train.py:842] (0/4) Epoch 35, batch 6850, loss[loss=0.1634, simple_loss=0.2363, pruned_loss=0.04527, over 6822.00 frames.], tot_loss[loss=0.173, simple_loss=0.2619, pruned_loss=0.04202, over 1419715.35 frames.], batch size: 15, lr: 1.56e-04 2022-05-29 10:50:47,893 INFO [train.py:842] (0/4) Epoch 35, batch 6900, loss[loss=0.1437, simple_loss=0.2445, pruned_loss=0.02144, over 7229.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2612, pruned_loss=0.04197, over 1420673.69 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 10:51:27,294 INFO [train.py:842] (0/4) Epoch 35, batch 6950, loss[loss=0.1783, simple_loss=0.2645, pruned_loss=0.04603, over 7383.00 frames.], tot_loss[loss=0.173, simple_loss=0.2617, pruned_loss=0.04216, over 1422504.06 frames.], batch size: 23, lr: 1.56e-04 2022-05-29 10:52:07,007 INFO [train.py:842] (0/4) Epoch 35, batch 7000, loss[loss=0.1734, simple_loss=0.2566, pruned_loss=0.04515, over 7187.00 frames.], tot_loss[loss=0.172, simple_loss=0.2607, pruned_loss=0.04166, over 1427146.49 frames.], batch size: 22, lr: 1.56e-04 2022-05-29 10:52:46,343 INFO [train.py:842] (0/4) Epoch 35, batch 7050, loss[loss=0.1556, simple_loss=0.2449, pruned_loss=0.03317, over 7325.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2623, pruned_loss=0.04244, over 1427078.79 frames.], batch size: 21, lr: 1.56e-04 2022-05-29 10:53:25,686 INFO [train.py:842] (0/4) Epoch 35, batch 7100, loss[loss=0.1882, simple_loss=0.2758, pruned_loss=0.05036, over 7152.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2617, pruned_loss=0.04185, over 1427770.55 frames.], batch size: 19, lr: 1.56e-04 2022-05-29 10:54:05,033 INFO [train.py:842] (0/4) Epoch 35, batch 7150, loss[loss=0.1329, simple_loss=0.2155, pruned_loss=0.02516, over 7353.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2607, pruned_loss=0.04139, over 1426110.31 frames.], batch size: 19, lr: 1.56e-04 2022-05-29 10:54:44,601 INFO [train.py:842] (0/4) Epoch 35, batch 7200, loss[loss=0.1694, simple_loss=0.2611, pruned_loss=0.03883, over 7313.00 frames.], tot_loss[loss=0.173, simple_loss=0.2623, pruned_loss=0.04185, over 1428970.97 frames.], batch size: 21, lr: 1.56e-04 2022-05-29 10:55:23,834 INFO [train.py:842] (0/4) Epoch 35, batch 7250, loss[loss=0.1839, simple_loss=0.2782, pruned_loss=0.04482, over 7280.00 frames.], tot_loss[loss=0.1726, simple_loss=0.262, pruned_loss=0.0416, over 1429428.21 frames.], batch size: 25, lr: 1.56e-04 2022-05-29 10:56:03,385 INFO [train.py:842] (0/4) Epoch 35, batch 7300, loss[loss=0.1673, simple_loss=0.2536, pruned_loss=0.0405, over 7334.00 frames.], tot_loss[loss=0.173, simple_loss=0.2626, pruned_loss=0.0417, over 1429340.75 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 10:56:42,472 INFO [train.py:842] (0/4) Epoch 
35, batch 7350, loss[loss=0.1542, simple_loss=0.2386, pruned_loss=0.03485, over 6793.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2619, pruned_loss=0.04138, over 1429634.92 frames.], batch size: 15, lr: 1.56e-04 2022-05-29 10:57:22,058 INFO [train.py:842] (0/4) Epoch 35, batch 7400, loss[loss=0.2026, simple_loss=0.3029, pruned_loss=0.05109, over 7283.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2624, pruned_loss=0.04165, over 1431809.96 frames.], batch size: 25, lr: 1.56e-04 2022-05-29 10:58:01,206 INFO [train.py:842] (0/4) Epoch 35, batch 7450, loss[loss=0.2166, simple_loss=0.293, pruned_loss=0.07008, over 6707.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2626, pruned_loss=0.04194, over 1427808.92 frames.], batch size: 31, lr: 1.56e-04 2022-05-29 10:58:22,888 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-320000.pt 2022-05-29 10:58:43,658 INFO [train.py:842] (0/4) Epoch 35, batch 7500, loss[loss=0.1409, simple_loss=0.2336, pruned_loss=0.02411, over 7435.00 frames.], tot_loss[loss=0.172, simple_loss=0.2613, pruned_loss=0.04133, over 1429254.99 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 10:59:22,995 INFO [train.py:842] (0/4) Epoch 35, batch 7550, loss[loss=0.1583, simple_loss=0.2409, pruned_loss=0.03787, over 7354.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2609, pruned_loss=0.04106, over 1431207.22 frames.], batch size: 19, lr: 1.56e-04 2022-05-29 11:00:02,447 INFO [train.py:842] (0/4) Epoch 35, batch 7600, loss[loss=0.1719, simple_loss=0.2631, pruned_loss=0.04033, over 7237.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2614, pruned_loss=0.04177, over 1422450.01 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 11:00:41,966 INFO [train.py:842] (0/4) Epoch 35, batch 7650, loss[loss=0.1434, simple_loss=0.2268, pruned_loss=0.02995, over 7184.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2602, pruned_loss=0.04102, over 1425463.14 frames.], batch size: 16, lr: 1.56e-04 2022-05-29 11:01:21,757 INFO [train.py:842] (0/4) Epoch 35, batch 7700, loss[loss=0.1949, simple_loss=0.2851, pruned_loss=0.05234, over 7076.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2604, pruned_loss=0.04129, over 1425810.90 frames.], batch size: 28, lr: 1.56e-04 2022-05-29 11:02:00,843 INFO [train.py:842] (0/4) Epoch 35, batch 7750, loss[loss=0.1757, simple_loss=0.2608, pruned_loss=0.0453, over 7328.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2623, pruned_loss=0.04169, over 1429971.68 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 11:02:40,421 INFO [train.py:842] (0/4) Epoch 35, batch 7800, loss[loss=0.207, simple_loss=0.2962, pruned_loss=0.05893, over 7202.00 frames.], tot_loss[loss=0.1745, simple_loss=0.2638, pruned_loss=0.04259, over 1428793.57 frames.], batch size: 22, lr: 1.56e-04 2022-05-29 11:03:19,918 INFO [train.py:842] (0/4) Epoch 35, batch 7850, loss[loss=0.1646, simple_loss=0.2535, pruned_loss=0.03789, over 7321.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2637, pruned_loss=0.04237, over 1431855.59 frames.], batch size: 22, lr: 1.56e-04 2022-05-29 11:03:59,723 INFO [train.py:842] (0/4) Epoch 35, batch 7900, loss[loss=0.1606, simple_loss=0.2523, pruned_loss=0.03441, over 7197.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2629, pruned_loss=0.04236, over 1431737.49 frames.], batch size: 22, lr: 1.56e-04 2022-05-29 11:04:38,839 INFO [train.py:842] (0/4) Epoch 35, batch 7950, loss[loss=0.1787, simple_loss=0.2741, pruned_loss=0.04168, over 7297.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2628, 
pruned_loss=0.04201, over 1430416.80 frames.], batch size: 25, lr: 1.56e-04 2022-05-29 11:05:18,264 INFO [train.py:842] (0/4) Epoch 35, batch 8000, loss[loss=0.136, simple_loss=0.2315, pruned_loss=0.02024, over 7299.00 frames.], tot_loss[loss=0.1733, simple_loss=0.263, pruned_loss=0.04185, over 1430305.88 frames.], batch size: 24, lr: 1.56e-04 2022-05-29 11:05:57,341 INFO [train.py:842] (0/4) Epoch 35, batch 8050, loss[loss=0.1741, simple_loss=0.2665, pruned_loss=0.04078, over 7109.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2629, pruned_loss=0.04168, over 1432431.49 frames.], batch size: 28, lr: 1.56e-04 2022-05-29 11:06:36,965 INFO [train.py:842] (0/4) Epoch 35, batch 8100, loss[loss=0.1849, simple_loss=0.2868, pruned_loss=0.04146, over 7295.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2617, pruned_loss=0.04107, over 1432759.98 frames.], batch size: 24, lr: 1.56e-04 2022-05-29 11:07:15,986 INFO [train.py:842] (0/4) Epoch 35, batch 8150, loss[loss=0.158, simple_loss=0.2389, pruned_loss=0.03852, over 7363.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2614, pruned_loss=0.04102, over 1428255.33 frames.], batch size: 19, lr: 1.56e-04 2022-05-29 11:07:55,467 INFO [train.py:842] (0/4) Epoch 35, batch 8200, loss[loss=0.1682, simple_loss=0.2667, pruned_loss=0.03479, over 6742.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2615, pruned_loss=0.04097, over 1428559.81 frames.], batch size: 31, lr: 1.56e-04 2022-05-29 11:08:34,640 INFO [train.py:842] (0/4) Epoch 35, batch 8250, loss[loss=0.1827, simple_loss=0.2688, pruned_loss=0.04833, over 7313.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2606, pruned_loss=0.04081, over 1422701.15 frames.], batch size: 25, lr: 1.56e-04 2022-05-29 11:09:14,299 INFO [train.py:842] (0/4) Epoch 35, batch 8300, loss[loss=0.1473, simple_loss=0.2364, pruned_loss=0.02912, over 7252.00 frames.], tot_loss[loss=0.1704, simple_loss=0.2596, pruned_loss=0.04058, over 1422367.78 frames.], batch size: 19, lr: 1.56e-04 2022-05-29 11:09:53,321 INFO [train.py:842] (0/4) Epoch 35, batch 8350, loss[loss=0.1583, simple_loss=0.245, pruned_loss=0.03578, over 7427.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2612, pruned_loss=0.0411, over 1425782.82 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 11:10:32,671 INFO [train.py:842] (0/4) Epoch 35, batch 8400, loss[loss=0.1823, simple_loss=0.2722, pruned_loss=0.04621, over 7141.00 frames.], tot_loss[loss=0.172, simple_loss=0.2612, pruned_loss=0.04137, over 1418163.28 frames.], batch size: 26, lr: 1.56e-04 2022-05-29 11:11:11,749 INFO [train.py:842] (0/4) Epoch 35, batch 8450, loss[loss=0.1655, simple_loss=0.2606, pruned_loss=0.03518, over 7148.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2613, pruned_loss=0.04155, over 1415051.18 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 11:11:51,464 INFO [train.py:842] (0/4) Epoch 35, batch 8500, loss[loss=0.1332, simple_loss=0.22, pruned_loss=0.02316, over 7063.00 frames.], tot_loss[loss=0.172, simple_loss=0.2607, pruned_loss=0.04159, over 1418338.84 frames.], batch size: 18, lr: 1.56e-04 2022-05-29 11:12:41,675 INFO [train.py:842] (0/4) Epoch 35, batch 8550, loss[loss=0.2254, simple_loss=0.3104, pruned_loss=0.07019, over 6929.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2606, pruned_loss=0.04203, over 1420175.18 frames.], batch size: 32, lr: 1.56e-04 2022-05-29 11:13:21,152 INFO [train.py:842] (0/4) Epoch 35, batch 8600, loss[loss=0.2108, simple_loss=0.2901, pruned_loss=0.06571, over 5164.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2606, pruned_loss=0.04191, over 
1412437.05 frames.], batch size: 52, lr: 1.56e-04 2022-05-29 11:14:00,828 INFO [train.py:842] (0/4) Epoch 35, batch 8650, loss[loss=0.1677, simple_loss=0.2654, pruned_loss=0.03496, over 7212.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2609, pruned_loss=0.04209, over 1419874.40 frames.], batch size: 21, lr: 1.56e-04 2022-05-29 11:14:40,506 INFO [train.py:842] (0/4) Epoch 35, batch 8700, loss[loss=0.1609, simple_loss=0.2449, pruned_loss=0.03848, over 7355.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2609, pruned_loss=0.04204, over 1415617.15 frames.], batch size: 19, lr: 1.56e-04 2022-05-29 11:15:19,847 INFO [train.py:842] (0/4) Epoch 35, batch 8750, loss[loss=0.1553, simple_loss=0.248, pruned_loss=0.03126, over 7422.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2609, pruned_loss=0.04223, over 1415921.05 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 11:15:59,158 INFO [train.py:842] (0/4) Epoch 35, batch 8800, loss[loss=0.1799, simple_loss=0.2677, pruned_loss=0.04607, over 7334.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2606, pruned_loss=0.04147, over 1412818.48 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 11:16:38,455 INFO [train.py:842] (0/4) Epoch 35, batch 8850, loss[loss=0.167, simple_loss=0.255, pruned_loss=0.03951, over 7233.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2597, pruned_loss=0.04108, over 1411046.73 frames.], batch size: 21, lr: 1.56e-04 2022-05-29 11:17:17,814 INFO [train.py:842] (0/4) Epoch 35, batch 8900, loss[loss=0.1633, simple_loss=0.2551, pruned_loss=0.03579, over 7329.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2589, pruned_loss=0.04039, over 1412888.05 frames.], batch size: 20, lr: 1.56e-04 2022-05-29 11:17:56,636 INFO [train.py:842] (0/4) Epoch 35, batch 8950, loss[loss=0.1968, simple_loss=0.2885, pruned_loss=0.0525, over 5191.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2601, pruned_loss=0.04064, over 1403163.63 frames.], batch size: 52, lr: 1.56e-04 2022-05-29 11:18:35,173 INFO [train.py:842] (0/4) Epoch 35, batch 9000, loss[loss=0.1555, simple_loss=0.2433, pruned_loss=0.03387, over 6366.00 frames.], tot_loss[loss=0.1756, simple_loss=0.2646, pruned_loss=0.0433, over 1379521.06 frames.], batch size: 37, lr: 1.56e-04 2022-05-29 11:18:35,174 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 11:18:44,740 INFO [train.py:871] (0/4) Epoch 35, validation: loss=0.1633, simple_loss=0.2602, pruned_loss=0.03324, over 868885.00 frames. 
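
In every entry the headline loss sits between its two components, and the figures are consistent with a fixed weighted sum: for the validation line just above, 0.5 * 0.2602 + 1.0 * 0.03324 = 0.16334, matching the reported 0.1633. The sketch below simply reproduces that arithmetic; the scale values are inferred from the logged numbers rather than read out of the training script, and the function name is illustrative.

def combined_loss(simple_loss, pruned_loss, simple_scale=0.5, pruned_scale=1.0):
    """Weighted sum consistent with the 'loss=' figure reported in this log."""
    return simple_scale * simple_loss + pruned_scale * pruned_loss

# Check against the Epoch 35 validation entry above
# (loss=0.1633, simple_loss=0.2602, pruned_loss=0.03324):
assert abs(combined_loss(0.2602, 0.03324) - 0.1633) < 5e-4
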
2022-05-29 11:19:22,582 INFO [train.py:842] (0/4) Epoch 35, batch 9050, loss[loss=0.1884, simple_loss=0.2868, pruned_loss=0.04501, over 6250.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2684, pruned_loss=0.04525, over 1348544.87 frames.], batch size: 37, lr: 1.56e-04 2022-05-29 11:20:00,767 INFO [train.py:842] (0/4) Epoch 35, batch 9100, loss[loss=0.2145, simple_loss=0.3025, pruned_loss=0.06328, over 5144.00 frames.], tot_loss[loss=0.1829, simple_loss=0.2713, pruned_loss=0.04721, over 1289072.17 frames.], batch size: 52, lr: 1.56e-04 2022-05-29 11:20:38,929 INFO [train.py:842] (0/4) Epoch 35, batch 9150, loss[loss=0.1845, simple_loss=0.2783, pruned_loss=0.04539, over 5006.00 frames.], tot_loss[loss=0.1874, simple_loss=0.2748, pruned_loss=0.05, over 1228431.15 frames.], batch size: 53, lr: 1.56e-04 2022-05-29 11:21:11,045 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-35.pt 2022-05-29 11:21:27,241 INFO [train.py:842] (0/4) Epoch 36, batch 0, loss[loss=0.1777, simple_loss=0.2618, pruned_loss=0.04684, over 7329.00 frames.], tot_loss[loss=0.1777, simple_loss=0.2618, pruned_loss=0.04684, over 7329.00 frames.], batch size: 20, lr: 1.54e-04 2022-05-29 11:22:06,543 INFO [train.py:842] (0/4) Epoch 36, batch 50, loss[loss=0.1627, simple_loss=0.2499, pruned_loss=0.0377, over 7429.00 frames.], tot_loss[loss=0.174, simple_loss=0.2628, pruned_loss=0.04262, over 316806.17 frames.], batch size: 20, lr: 1.54e-04 2022-05-29 11:22:46,035 INFO [train.py:842] (0/4) Epoch 36, batch 100, loss[loss=0.1873, simple_loss=0.2639, pruned_loss=0.05533, over 5356.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2611, pruned_loss=0.04107, over 562896.60 frames.], batch size: 52, lr: 1.54e-04 2022-05-29 11:23:25,184 INFO [train.py:842] (0/4) Epoch 36, batch 150, loss[loss=0.1622, simple_loss=0.2583, pruned_loss=0.03303, over 7232.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2608, pruned_loss=0.04156, over 751682.75 frames.], batch size: 20, lr: 1.54e-04 2022-05-29 11:24:04,882 INFO [train.py:842] (0/4) Epoch 36, batch 200, loss[loss=0.1458, simple_loss=0.2354, pruned_loss=0.02813, over 7326.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2596, pruned_loss=0.04088, over 901263.10 frames.], batch size: 21, lr: 1.54e-04 2022-05-29 11:24:44,266 INFO [train.py:842] (0/4) Epoch 36, batch 250, loss[loss=0.1705, simple_loss=0.2656, pruned_loss=0.03769, over 7162.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2593, pruned_loss=0.04056, over 1020218.99 frames.], batch size: 19, lr: 1.54e-04 2022-05-29 11:25:23,560 INFO [train.py:842] (0/4) Epoch 36, batch 300, loss[loss=0.2081, simple_loss=0.2973, pruned_loss=0.05948, over 7212.00 frames.], tot_loss[loss=0.171, simple_loss=0.2597, pruned_loss=0.04112, over 1105292.41 frames.], batch size: 26, lr: 1.54e-04 2022-05-29 11:26:02,737 INFO [train.py:842] (0/4) Epoch 36, batch 350, loss[loss=0.1655, simple_loss=0.261, pruned_loss=0.035, over 6769.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2602, pruned_loss=0.04074, over 1174142.81 frames.], batch size: 31, lr: 1.54e-04 2022-05-29 11:26:41,956 INFO [train.py:842] (0/4) Epoch 36, batch 400, loss[loss=0.1682, simple_loss=0.2598, pruned_loss=0.0383, over 7200.00 frames.], tot_loss[loss=0.171, simple_loss=0.2608, pruned_loss=0.04063, over 1230172.17 frames.], batch size: 22, lr: 1.54e-04 2022-05-29 11:27:21,345 INFO [train.py:842] (0/4) Epoch 36, batch 450, loss[loss=0.1976, simple_loss=0.2914, pruned_loss=0.05188, over 7178.00 frames.], tot_loss[loss=0.171, 
simple_loss=0.2608, pruned_loss=0.04056, over 1278166.00 frames.], batch size: 26, lr: 1.54e-04 2022-05-29 11:28:00,714 INFO [train.py:842] (0/4) Epoch 36, batch 500, loss[loss=0.1917, simple_loss=0.2784, pruned_loss=0.05254, over 7210.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2613, pruned_loss=0.04068, over 1310089.60 frames.], batch size: 23, lr: 1.54e-04 2022-05-29 11:28:39,908 INFO [train.py:842] (0/4) Epoch 36, batch 550, loss[loss=0.185, simple_loss=0.2824, pruned_loss=0.04379, over 7417.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2614, pruned_loss=0.04043, over 1336612.05 frames.], batch size: 20, lr: 1.54e-04 2022-05-29 11:29:19,641 INFO [train.py:842] (0/4) Epoch 36, batch 600, loss[loss=0.1921, simple_loss=0.2752, pruned_loss=0.05453, over 7178.00 frames.], tot_loss[loss=0.1721, simple_loss=0.262, pruned_loss=0.04112, over 1358592.82 frames.], batch size: 23, lr: 1.54e-04 2022-05-29 11:29:59,115 INFO [train.py:842] (0/4) Epoch 36, batch 650, loss[loss=0.1561, simple_loss=0.2492, pruned_loss=0.03156, over 7165.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2613, pruned_loss=0.04117, over 1373006.90 frames.], batch size: 19, lr: 1.54e-04 2022-05-29 11:30:38,713 INFO [train.py:842] (0/4) Epoch 36, batch 700, loss[loss=0.1427, simple_loss=0.2226, pruned_loss=0.03143, over 7258.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2605, pruned_loss=0.04062, over 1385104.53 frames.], batch size: 19, lr: 1.54e-04 2022-05-29 11:31:17,833 INFO [train.py:842] (0/4) Epoch 36, batch 750, loss[loss=0.1787, simple_loss=0.2705, pruned_loss=0.04342, over 7335.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2608, pruned_loss=0.04089, over 1384817.17 frames.], batch size: 20, lr: 1.54e-04 2022-05-29 11:31:57,479 INFO [train.py:842] (0/4) Epoch 36, batch 800, loss[loss=0.2092, simple_loss=0.3023, pruned_loss=0.05805, over 7420.00 frames.], tot_loss[loss=0.172, simple_loss=0.2617, pruned_loss=0.04112, over 1393015.35 frames.], batch size: 21, lr: 1.54e-04 2022-05-29 11:32:36,715 INFO [train.py:842] (0/4) Epoch 36, batch 850, loss[loss=0.1653, simple_loss=0.2626, pruned_loss=0.03403, over 7215.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2613, pruned_loss=0.04069, over 1394660.29 frames.], batch size: 21, lr: 1.54e-04 2022-05-29 11:33:16,379 INFO [train.py:842] (0/4) Epoch 36, batch 900, loss[loss=0.1642, simple_loss=0.2505, pruned_loss=0.03894, over 6958.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2606, pruned_loss=0.04014, over 1402293.68 frames.], batch size: 32, lr: 1.54e-04 2022-05-29 11:33:55,548 INFO [train.py:842] (0/4) Epoch 36, batch 950, loss[loss=0.1214, simple_loss=0.2123, pruned_loss=0.01523, over 6997.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2614, pruned_loss=0.04096, over 1405858.23 frames.], batch size: 16, lr: 1.53e-04 2022-05-29 11:34:35,011 INFO [train.py:842] (0/4) Epoch 36, batch 1000, loss[loss=0.1439, simple_loss=0.2221, pruned_loss=0.03281, over 7285.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2619, pruned_loss=0.04152, over 1407725.02 frames.], batch size: 17, lr: 1.53e-04 2022-05-29 11:35:14,264 INFO [train.py:842] (0/4) Epoch 36, batch 1050, loss[loss=0.1306, simple_loss=0.2175, pruned_loss=0.02178, over 7364.00 frames.], tot_loss[loss=0.174, simple_loss=0.2633, pruned_loss=0.04234, over 1408475.17 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 11:35:53,830 INFO [train.py:842] (0/4) Epoch 36, batch 1100, loss[loss=0.2328, simple_loss=0.319, pruned_loss=0.0733, over 7201.00 frames.], tot_loss[loss=0.1747, simple_loss=0.264, 
pruned_loss=0.04268, over 1409093.27 frames.], batch size: 22, lr: 1.53e-04 2022-05-29 11:36:33,125 INFO [train.py:842] (0/4) Epoch 36, batch 1150, loss[loss=0.206, simple_loss=0.3016, pruned_loss=0.05518, over 7309.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2635, pruned_loss=0.04261, over 1414116.17 frames.], batch size: 24, lr: 1.53e-04 2022-05-29 11:37:12,444 INFO [train.py:842] (0/4) Epoch 36, batch 1200, loss[loss=0.1439, simple_loss=0.2275, pruned_loss=0.03013, over 7292.00 frames.], tot_loss[loss=0.1755, simple_loss=0.2646, pruned_loss=0.04321, over 1408900.76 frames.], batch size: 17, lr: 1.53e-04 2022-05-29 11:37:51,890 INFO [train.py:842] (0/4) Epoch 36, batch 1250, loss[loss=0.158, simple_loss=0.2408, pruned_loss=0.03756, over 7001.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2637, pruned_loss=0.04303, over 1410339.00 frames.], batch size: 16, lr: 1.53e-04 2022-05-29 11:38:31,210 INFO [train.py:842] (0/4) Epoch 36, batch 1300, loss[loss=0.1588, simple_loss=0.2424, pruned_loss=0.03761, over 7132.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2627, pruned_loss=0.04281, over 1414665.34 frames.], batch size: 17, lr: 1.53e-04 2022-05-29 11:39:10,488 INFO [train.py:842] (0/4) Epoch 36, batch 1350, loss[loss=0.175, simple_loss=0.2672, pruned_loss=0.04143, over 7252.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2615, pruned_loss=0.04208, over 1420003.62 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 11:39:49,831 INFO [train.py:842] (0/4) Epoch 36, batch 1400, loss[loss=0.1305, simple_loss=0.2141, pruned_loss=0.02343, over 7002.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2622, pruned_loss=0.0422, over 1418101.38 frames.], batch size: 16, lr: 1.53e-04 2022-05-29 11:40:29,042 INFO [train.py:842] (0/4) Epoch 36, batch 1450, loss[loss=0.1396, simple_loss=0.2184, pruned_loss=0.03033, over 6856.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2614, pruned_loss=0.04167, over 1415011.48 frames.], batch size: 15, lr: 1.53e-04 2022-05-29 11:41:08,693 INFO [train.py:842] (0/4) Epoch 36, batch 1500, loss[loss=0.1816, simple_loss=0.2733, pruned_loss=0.045, over 7316.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2616, pruned_loss=0.04129, over 1419166.69 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 11:41:47,977 INFO [train.py:842] (0/4) Epoch 36, batch 1550, loss[loss=0.1903, simple_loss=0.2865, pruned_loss=0.04707, over 7235.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2615, pruned_loss=0.0413, over 1420098.73 frames.], batch size: 20, lr: 1.53e-04 2022-05-29 11:42:27,485 INFO [train.py:842] (0/4) Epoch 36, batch 1600, loss[loss=0.1955, simple_loss=0.2817, pruned_loss=0.05462, over 7369.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2609, pruned_loss=0.04129, over 1420517.05 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 11:43:06,844 INFO [train.py:842] (0/4) Epoch 36, batch 1650, loss[loss=0.166, simple_loss=0.2506, pruned_loss=0.0407, over 7156.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2615, pruned_loss=0.04173, over 1421559.85 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 11:43:46,236 INFO [train.py:842] (0/4) Epoch 36, batch 1700, loss[loss=0.2205, simple_loss=0.3149, pruned_loss=0.06308, over 7308.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2611, pruned_loss=0.04119, over 1424274.35 frames.], batch size: 25, lr: 1.53e-04 2022-05-29 11:44:25,377 INFO [train.py:842] (0/4) Epoch 36, batch 1750, loss[loss=0.2015, simple_loss=0.2855, pruned_loss=0.05879, over 7272.00 frames.], tot_loss[loss=0.1725, simple_loss=0.262, pruned_loss=0.04148, over 
1420451.77 frames.], batch size: 18, lr: 1.53e-04 2022-05-29 11:45:04,918 INFO [train.py:842] (0/4) Epoch 36, batch 1800, loss[loss=0.163, simple_loss=0.2621, pruned_loss=0.03197, over 7195.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2627, pruned_loss=0.04211, over 1422387.25 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 11:45:44,139 INFO [train.py:842] (0/4) Epoch 36, batch 1850, loss[loss=0.1732, simple_loss=0.2736, pruned_loss=0.03639, over 7113.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2621, pruned_loss=0.04219, over 1425198.17 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 11:46:23,712 INFO [train.py:842] (0/4) Epoch 36, batch 1900, loss[loss=0.1616, simple_loss=0.2603, pruned_loss=0.03147, over 6709.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2609, pruned_loss=0.0414, over 1426221.73 frames.], batch size: 31, lr: 1.53e-04 2022-05-29 11:47:02,844 INFO [train.py:842] (0/4) Epoch 36, batch 1950, loss[loss=0.1882, simple_loss=0.2756, pruned_loss=0.05042, over 7229.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2613, pruned_loss=0.04172, over 1422516.26 frames.], batch size: 20, lr: 1.53e-04 2022-05-29 11:47:42,068 INFO [train.py:842] (0/4) Epoch 36, batch 2000, loss[loss=0.1373, simple_loss=0.2095, pruned_loss=0.03257, over 7013.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2606, pruned_loss=0.04093, over 1420383.60 frames.], batch size: 16, lr: 1.53e-04 2022-05-29 11:48:21,462 INFO [train.py:842] (0/4) Epoch 36, batch 2050, loss[loss=0.1396, simple_loss=0.2359, pruned_loss=0.0217, over 7330.00 frames.], tot_loss[loss=0.1716, simple_loss=0.261, pruned_loss=0.04113, over 1424507.39 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 11:49:01,223 INFO [train.py:842] (0/4) Epoch 36, batch 2100, loss[loss=0.1928, simple_loss=0.2846, pruned_loss=0.05045, over 7419.00 frames.], tot_loss[loss=0.171, simple_loss=0.2604, pruned_loss=0.04083, over 1423456.06 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 11:49:40,702 INFO [train.py:842] (0/4) Epoch 36, batch 2150, loss[loss=0.128, simple_loss=0.2166, pruned_loss=0.01975, over 7268.00 frames.], tot_loss[loss=0.1704, simple_loss=0.2596, pruned_loss=0.04056, over 1426310.68 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 11:50:20,096 INFO [train.py:842] (0/4) Epoch 36, batch 2200, loss[loss=0.1381, simple_loss=0.2395, pruned_loss=0.01838, over 7427.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2611, pruned_loss=0.04115, over 1425940.83 frames.], batch size: 18, lr: 1.53e-04 2022-05-29 11:50:58,853 INFO [train.py:842] (0/4) Epoch 36, batch 2250, loss[loss=0.1513, simple_loss=0.2476, pruned_loss=0.02752, over 7337.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2625, pruned_loss=0.04155, over 1423099.02 frames.], batch size: 22, lr: 1.53e-04 2022-05-29 11:51:38,535 INFO [train.py:842] (0/4) Epoch 36, batch 2300, loss[loss=0.138, simple_loss=0.2175, pruned_loss=0.02923, over 7156.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2599, pruned_loss=0.04058, over 1426440.33 frames.], batch size: 17, lr: 1.53e-04 2022-05-29 11:52:17,606 INFO [train.py:842] (0/4) Epoch 36, batch 2350, loss[loss=0.2008, simple_loss=0.281, pruned_loss=0.06034, over 4820.00 frames.], tot_loss[loss=0.1709, simple_loss=0.26, pruned_loss=0.04089, over 1424341.90 frames.], batch size: 52, lr: 1.53e-04 2022-05-29 11:52:57,251 INFO [train.py:842] (0/4) Epoch 36, batch 2400, loss[loss=0.1345, simple_loss=0.218, pruned_loss=0.02549, over 7411.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2602, pruned_loss=0.04129, over 1427319.06 frames.], batch 
size: 18, lr: 1.53e-04 2022-05-29 11:53:36,481 INFO [train.py:842] (0/4) Epoch 36, batch 2450, loss[loss=0.1582, simple_loss=0.2333, pruned_loss=0.04154, over 7161.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2605, pruned_loss=0.04139, over 1423441.11 frames.], batch size: 18, lr: 1.53e-04 2022-05-29 11:54:16,221 INFO [train.py:842] (0/4) Epoch 36, batch 2500, loss[loss=0.1619, simple_loss=0.254, pruned_loss=0.03485, over 7150.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2609, pruned_loss=0.04189, over 1427677.90 frames.], batch size: 20, lr: 1.53e-04 2022-05-29 11:54:55,457 INFO [train.py:842] (0/4) Epoch 36, batch 2550, loss[loss=0.1877, simple_loss=0.2778, pruned_loss=0.04878, over 7360.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2611, pruned_loss=0.04211, over 1424513.34 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 11:55:34,913 INFO [train.py:842] (0/4) Epoch 36, batch 2600, loss[loss=0.1655, simple_loss=0.252, pruned_loss=0.03951, over 7164.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2611, pruned_loss=0.04176, over 1424719.87 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 11:56:14,157 INFO [train.py:842] (0/4) Epoch 36, batch 2650, loss[loss=0.1767, simple_loss=0.2663, pruned_loss=0.04351, over 5157.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2604, pruned_loss=0.04122, over 1423713.55 frames.], batch size: 52, lr: 1.53e-04 2022-05-29 11:56:54,018 INFO [train.py:842] (0/4) Epoch 36, batch 2700, loss[loss=0.1696, simple_loss=0.2673, pruned_loss=0.03591, over 7323.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2599, pruned_loss=0.04098, over 1425072.11 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 11:57:33,284 INFO [train.py:842] (0/4) Epoch 36, batch 2750, loss[loss=0.1757, simple_loss=0.2742, pruned_loss=0.03862, over 7115.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2597, pruned_loss=0.04067, over 1426636.74 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 11:58:12,912 INFO [train.py:842] (0/4) Epoch 36, batch 2800, loss[loss=0.2494, simple_loss=0.3443, pruned_loss=0.07727, over 7187.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2597, pruned_loss=0.04084, over 1428656.57 frames.], batch size: 22, lr: 1.53e-04 2022-05-29 11:58:52,401 INFO [train.py:842] (0/4) Epoch 36, batch 2850, loss[loss=0.1224, simple_loss=0.2128, pruned_loss=0.01601, over 7272.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2585, pruned_loss=0.04017, over 1429421.30 frames.], batch size: 17, lr: 1.53e-04 2022-05-29 11:59:32,215 INFO [train.py:842] (0/4) Epoch 36, batch 2900, loss[loss=0.1666, simple_loss=0.2541, pruned_loss=0.03958, over 7263.00 frames.], tot_loss[loss=0.1685, simple_loss=0.2576, pruned_loss=0.03969, over 1429244.47 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 12:00:11,120 INFO [train.py:842] (0/4) Epoch 36, batch 2950, loss[loss=0.1501, simple_loss=0.2369, pruned_loss=0.03166, over 7171.00 frames.], tot_loss[loss=0.1686, simple_loss=0.258, pruned_loss=0.0396, over 1426487.01 frames.], batch size: 18, lr: 1.53e-04 2022-05-29 12:00:50,583 INFO [train.py:842] (0/4) Epoch 36, batch 3000, loss[loss=0.1656, simple_loss=0.2473, pruned_loss=0.04196, over 7169.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2607, pruned_loss=0.04093, over 1422732.08 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 12:00:50,585 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 12:01:00,561 INFO [train.py:871] (0/4) Epoch 36, validation: loss=0.1657, simple_loss=0.263, pruned_loss=0.03426, over 868885.00 frames. 
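
Besides the per-batch entries, checkpoint.py reports periodic saves earlier in this log (checkpoint-320000.pt between batches 7450 and 7500 of epoch 35, and epoch-35.pt at the end of that epoch). A minimal sketch of that kind of batch-count and end-of-epoch checkpointing follows; the save_checkpoint helper and the contents of the saved dictionary are assumptions for illustration, and only the file-name pattern and exp directory are taken from the log.

from pathlib import Path
import torch

def save_checkpoint(model, optimizer, exp_dir, *, epoch=None, batch_idx=None):
    """Write either an end-of-epoch or a batch-count checkpoint, mirroring the
    epoch-35.pt / checkpoint-320000.pt names seen in this log."""
    exp_dir = Path(exp_dir)
    exp_dir.mkdir(parents=True, exist_ok=True)
    name = f"epoch-{epoch}.pt" if epoch is not None else f"checkpoint-{batch_idx}.pt"
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict()},
        exp_dir / name,
    )

# e.g. save_checkpoint(model, optimizer,
#                      "streaming_pruned_transducer_stateless4/exp",
#                      batch_idx=320000)
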
2022-05-29 12:01:39,787 INFO [train.py:842] (0/4) Epoch 36, batch 3050, loss[loss=0.1451, simple_loss=0.2396, pruned_loss=0.02534, over 7306.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2611, pruned_loss=0.04098, over 1424813.37 frames.], batch size: 24, lr: 1.53e-04 2022-05-29 12:02:19,630 INFO [train.py:842] (0/4) Epoch 36, batch 3100, loss[loss=0.1891, simple_loss=0.2884, pruned_loss=0.0449, over 7308.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2609, pruned_loss=0.04111, over 1429033.39 frames.], batch size: 25, lr: 1.53e-04 2022-05-29 12:02:58,621 INFO [train.py:842] (0/4) Epoch 36, batch 3150, loss[loss=0.1723, simple_loss=0.2642, pruned_loss=0.0402, over 7384.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2611, pruned_loss=0.04139, over 1426822.25 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 12:03:38,028 INFO [train.py:842] (0/4) Epoch 36, batch 3200, loss[loss=0.1431, simple_loss=0.2246, pruned_loss=0.03077, over 7145.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2625, pruned_loss=0.0425, over 1420870.11 frames.], batch size: 17, lr: 1.53e-04 2022-05-29 12:04:17,308 INFO [train.py:842] (0/4) Epoch 36, batch 3250, loss[loss=0.1648, simple_loss=0.2529, pruned_loss=0.03832, over 5129.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2622, pruned_loss=0.04208, over 1417887.85 frames.], batch size: 53, lr: 1.53e-04 2022-05-29 12:04:56,916 INFO [train.py:842] (0/4) Epoch 36, batch 3300, loss[loss=0.1826, simple_loss=0.2748, pruned_loss=0.04522, over 7212.00 frames.], tot_loss[loss=0.1726, simple_loss=0.262, pruned_loss=0.04163, over 1421498.22 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 12:05:36,376 INFO [train.py:842] (0/4) Epoch 36, batch 3350, loss[loss=0.1821, simple_loss=0.2724, pruned_loss=0.04593, over 7199.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2631, pruned_loss=0.04274, over 1425721.15 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 12:06:16,117 INFO [train.py:842] (0/4) Epoch 36, batch 3400, loss[loss=0.1986, simple_loss=0.2681, pruned_loss=0.0645, over 7261.00 frames.], tot_loss[loss=0.174, simple_loss=0.2628, pruned_loss=0.04262, over 1425033.26 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 12:06:55,212 INFO [train.py:842] (0/4) Epoch 36, batch 3450, loss[loss=0.13, simple_loss=0.2217, pruned_loss=0.01915, over 7279.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2623, pruned_loss=0.04203, over 1422600.74 frames.], batch size: 17, lr: 1.53e-04 2022-05-29 12:07:34,904 INFO [train.py:842] (0/4) Epoch 36, batch 3500, loss[loss=0.1896, simple_loss=0.2944, pruned_loss=0.04245, over 7402.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2616, pruned_loss=0.04163, over 1418188.82 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 12:08:13,998 INFO [train.py:842] (0/4) Epoch 36, batch 3550, loss[loss=0.1752, simple_loss=0.2727, pruned_loss=0.03891, over 7051.00 frames.], tot_loss[loss=0.172, simple_loss=0.2614, pruned_loss=0.04132, over 1422301.53 frames.], batch size: 28, lr: 1.53e-04 2022-05-29 12:08:53,420 INFO [train.py:842] (0/4) Epoch 36, batch 3600, loss[loss=0.1945, simple_loss=0.2897, pruned_loss=0.04964, over 7315.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2614, pruned_loss=0.04106, over 1421982.32 frames.], batch size: 25, lr: 1.53e-04 2022-05-29 12:09:32,742 INFO [train.py:842] (0/4) Epoch 36, batch 3650, loss[loss=0.2198, simple_loss=0.3034, pruned_loss=0.06808, over 7300.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2606, pruned_loss=0.04083, over 1423626.38 frames.], batch size: 24, lr: 1.53e-04 2022-05-29 12:10:12,403 
INFO [train.py:842] (0/4) Epoch 36, batch 3700, loss[loss=0.1842, simple_loss=0.2771, pruned_loss=0.04569, over 7110.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2616, pruned_loss=0.04147, over 1426393.80 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 12:10:51,872 INFO [train.py:842] (0/4) Epoch 36, batch 3750, loss[loss=0.1702, simple_loss=0.2634, pruned_loss=0.03846, over 7347.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2611, pruned_loss=0.04105, over 1426142.04 frames.], batch size: 22, lr: 1.53e-04 2022-05-29 12:11:31,343 INFO [train.py:842] (0/4) Epoch 36, batch 3800, loss[loss=0.135, simple_loss=0.2252, pruned_loss=0.02237, over 7368.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2616, pruned_loss=0.04097, over 1427997.28 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 12:12:10,354 INFO [train.py:842] (0/4) Epoch 36, batch 3850, loss[loss=0.1467, simple_loss=0.2339, pruned_loss=0.02977, over 6995.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2624, pruned_loss=0.04119, over 1424019.95 frames.], batch size: 16, lr: 1.53e-04 2022-05-29 12:12:50,351 INFO [train.py:842] (0/4) Epoch 36, batch 3900, loss[loss=0.1806, simple_loss=0.274, pruned_loss=0.04361, over 7188.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2624, pruned_loss=0.04194, over 1426137.19 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 12:13:29,168 INFO [train.py:842] (0/4) Epoch 36, batch 3950, loss[loss=0.1778, simple_loss=0.2705, pruned_loss=0.04257, over 6718.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2629, pruned_loss=0.04204, over 1424959.63 frames.], batch size: 31, lr: 1.53e-04 2022-05-29 12:14:08,409 INFO [train.py:842] (0/4) Epoch 36, batch 4000, loss[loss=0.1621, simple_loss=0.2581, pruned_loss=0.03302, over 7032.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2636, pruned_loss=0.04225, over 1424326.52 frames.], batch size: 28, lr: 1.53e-04 2022-05-29 12:14:47,654 INFO [train.py:842] (0/4) Epoch 36, batch 4050, loss[loss=0.1553, simple_loss=0.2525, pruned_loss=0.02903, over 6290.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2642, pruned_loss=0.04229, over 1425497.65 frames.], batch size: 38, lr: 1.53e-04 2022-05-29 12:15:27,252 INFO [train.py:842] (0/4) Epoch 36, batch 4100, loss[loss=0.1922, simple_loss=0.2813, pruned_loss=0.05161, over 7231.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2644, pruned_loss=0.04267, over 1426138.44 frames.], batch size: 20, lr: 1.53e-04 2022-05-29 12:16:06,486 INFO [train.py:842] (0/4) Epoch 36, batch 4150, loss[loss=0.2006, simple_loss=0.2911, pruned_loss=0.05502, over 7338.00 frames.], tot_loss[loss=0.1752, simple_loss=0.2644, pruned_loss=0.043, over 1423469.70 frames.], batch size: 22, lr: 1.53e-04 2022-05-29 12:16:45,818 INFO [train.py:842] (0/4) Epoch 36, batch 4200, loss[loss=0.1521, simple_loss=0.2555, pruned_loss=0.0243, over 7338.00 frames.], tot_loss[loss=0.1754, simple_loss=0.2645, pruned_loss=0.04314, over 1418752.27 frames.], batch size: 22, lr: 1.53e-04 2022-05-29 12:17:25,113 INFO [train.py:842] (0/4) Epoch 36, batch 4250, loss[loss=0.1845, simple_loss=0.279, pruned_loss=0.04502, over 7190.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2635, pruned_loss=0.04259, over 1418692.50 frames.], batch size: 22, lr: 1.53e-04 2022-05-29 12:18:04,938 INFO [train.py:842] (0/4) Epoch 36, batch 4300, loss[loss=0.1651, simple_loss=0.2547, pruned_loss=0.03771, over 7197.00 frames.], tot_loss[loss=0.1739, simple_loss=0.263, pruned_loss=0.04238, over 1418697.45 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 12:18:44,046 INFO [train.py:842] (0/4) 
Epoch 36, batch 4350, loss[loss=0.1831, simple_loss=0.2682, pruned_loss=0.04901, over 7346.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2632, pruned_loss=0.04254, over 1414049.39 frames.], batch size: 19, lr: 1.53e-04 2022-05-29 12:19:23,720 INFO [train.py:842] (0/4) Epoch 36, batch 4400, loss[loss=0.1847, simple_loss=0.2754, pruned_loss=0.04702, over 6676.00 frames.], tot_loss[loss=0.1748, simple_loss=0.2636, pruned_loss=0.04305, over 1417069.69 frames.], batch size: 31, lr: 1.53e-04 2022-05-29 12:20:13,620 INFO [train.py:842] (0/4) Epoch 36, batch 4450, loss[loss=0.176, simple_loss=0.2563, pruned_loss=0.04785, over 7413.00 frames.], tot_loss[loss=0.1751, simple_loss=0.2641, pruned_loss=0.043, over 1417647.36 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 12:20:53,261 INFO [train.py:842] (0/4) Epoch 36, batch 4500, loss[loss=0.1428, simple_loss=0.2332, pruned_loss=0.02622, over 7168.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2631, pruned_loss=0.04227, over 1422200.48 frames.], batch size: 18, lr: 1.53e-04 2022-05-29 12:21:32,470 INFO [train.py:842] (0/4) Epoch 36, batch 4550, loss[loss=0.1859, simple_loss=0.2734, pruned_loss=0.04919, over 7384.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2627, pruned_loss=0.04189, over 1422892.36 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 12:22:11,960 INFO [train.py:842] (0/4) Epoch 36, batch 4600, loss[loss=0.2161, simple_loss=0.3011, pruned_loss=0.06561, over 5128.00 frames.], tot_loss[loss=0.173, simple_loss=0.2626, pruned_loss=0.04175, over 1419919.30 frames.], batch size: 53, lr: 1.53e-04 2022-05-29 12:22:50,842 INFO [train.py:842] (0/4) Epoch 36, batch 4650, loss[loss=0.1355, simple_loss=0.218, pruned_loss=0.02649, over 7295.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2623, pruned_loss=0.04153, over 1417020.71 frames.], batch size: 17, lr: 1.53e-04 2022-05-29 12:23:30,508 INFO [train.py:842] (0/4) Epoch 36, batch 4700, loss[loss=0.1552, simple_loss=0.2576, pruned_loss=0.02643, over 6451.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2609, pruned_loss=0.04094, over 1420281.90 frames.], batch size: 37, lr: 1.53e-04 2022-05-29 12:24:09,549 INFO [train.py:842] (0/4) Epoch 36, batch 4750, loss[loss=0.1949, simple_loss=0.283, pruned_loss=0.05344, over 7103.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2619, pruned_loss=0.04136, over 1415377.43 frames.], batch size: 28, lr: 1.53e-04 2022-05-29 12:24:49,155 INFO [train.py:842] (0/4) Epoch 36, batch 4800, loss[loss=0.1668, simple_loss=0.2635, pruned_loss=0.03507, over 7211.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2614, pruned_loss=0.0415, over 1416362.59 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 12:25:49,874 INFO [train.py:842] (0/4) Epoch 36, batch 4850, loss[loss=0.1397, simple_loss=0.2368, pruned_loss=0.02125, over 7121.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2615, pruned_loss=0.04098, over 1417143.70 frames.], batch size: 21, lr: 1.53e-04 2022-05-29 12:26:29,649 INFO [train.py:842] (0/4) Epoch 36, batch 4900, loss[loss=0.1506, simple_loss=0.2245, pruned_loss=0.03837, over 7277.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2608, pruned_loss=0.04119, over 1421240.44 frames.], batch size: 17, lr: 1.53e-04 2022-05-29 12:27:08,905 INFO [train.py:842] (0/4) Epoch 36, batch 4950, loss[loss=0.1981, simple_loss=0.2903, pruned_loss=0.05295, over 7272.00 frames.], tot_loss[loss=0.1722, simple_loss=0.261, pruned_loss=0.04168, over 1421007.68 frames.], batch size: 25, lr: 1.53e-04 2022-05-29 12:27:48,575 INFO [train.py:842] (0/4) Epoch 36, batch 5000, 
loss[loss=0.2066, simple_loss=0.2916, pruned_loss=0.06083, over 7375.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2607, pruned_loss=0.04146, over 1424437.28 frames.], batch size: 23, lr: 1.53e-04 2022-05-29 12:28:27,749 INFO [train.py:842] (0/4) Epoch 36, batch 5050, loss[loss=0.1867, simple_loss=0.2681, pruned_loss=0.05269, over 5179.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2606, pruned_loss=0.04133, over 1419978.66 frames.], batch size: 52, lr: 1.53e-04 2022-05-29 12:29:07,303 INFO [train.py:842] (0/4) Epoch 36, batch 5100, loss[loss=0.1833, simple_loss=0.2864, pruned_loss=0.04012, over 6985.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2608, pruned_loss=0.04134, over 1421929.19 frames.], batch size: 28, lr: 1.53e-04 2022-05-29 12:29:46,340 INFO [train.py:842] (0/4) Epoch 36, batch 5150, loss[loss=0.1716, simple_loss=0.2631, pruned_loss=0.04008, over 7336.00 frames.], tot_loss[loss=0.1728, simple_loss=0.262, pruned_loss=0.04179, over 1421250.65 frames.], batch size: 22, lr: 1.53e-04 2022-05-29 12:30:26,077 INFO [train.py:842] (0/4) Epoch 36, batch 5200, loss[loss=0.156, simple_loss=0.249, pruned_loss=0.03152, over 7352.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2626, pruned_loss=0.04198, over 1419143.86 frames.], batch size: 19, lr: 1.52e-04 2022-05-29 12:31:05,210 INFO [train.py:842] (0/4) Epoch 36, batch 5250, loss[loss=0.2752, simple_loss=0.3468, pruned_loss=0.1018, over 7122.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2637, pruned_loss=0.04234, over 1423251.39 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 12:31:44,694 INFO [train.py:842] (0/4) Epoch 36, batch 5300, loss[loss=0.1994, simple_loss=0.2766, pruned_loss=0.0611, over 7189.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2632, pruned_loss=0.04178, over 1427140.20 frames.], batch size: 23, lr: 1.52e-04 2022-05-29 12:32:23,860 INFO [train.py:842] (0/4) Epoch 36, batch 5350, loss[loss=0.1938, simple_loss=0.2822, pruned_loss=0.05266, over 7302.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2625, pruned_loss=0.04144, over 1423457.18 frames.], batch size: 24, lr: 1.52e-04 2022-05-29 12:33:03,181 INFO [train.py:842] (0/4) Epoch 36, batch 5400, loss[loss=0.1986, simple_loss=0.2855, pruned_loss=0.05586, over 7064.00 frames.], tot_loss[loss=0.173, simple_loss=0.2621, pruned_loss=0.04193, over 1418213.90 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 12:33:42,352 INFO [train.py:842] (0/4) Epoch 36, batch 5450, loss[loss=0.1465, simple_loss=0.2341, pruned_loss=0.02945, over 7164.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2617, pruned_loss=0.04163, over 1417805.91 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 12:34:21,835 INFO [train.py:842] (0/4) Epoch 36, batch 5500, loss[loss=0.1655, simple_loss=0.2586, pruned_loss=0.03618, over 7222.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2617, pruned_loss=0.0418, over 1419275.77 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 12:35:00,638 INFO [train.py:842] (0/4) Epoch 36, batch 5550, loss[loss=0.159, simple_loss=0.2497, pruned_loss=0.03416, over 7329.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2617, pruned_loss=0.04153, over 1415084.31 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 12:35:40,060 INFO [train.py:842] (0/4) Epoch 36, batch 5600, loss[loss=0.2108, simple_loss=0.3041, pruned_loss=0.05878, over 7342.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2619, pruned_loss=0.04133, over 1416764.55 frames.], batch size: 22, lr: 1.52e-04 2022-05-29 12:36:19,249 INFO [train.py:842] (0/4) Epoch 36, batch 5650, loss[loss=0.2151, 
simple_loss=0.2946, pruned_loss=0.06782, over 7387.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2616, pruned_loss=0.04168, over 1417005.38 frames.], batch size: 23, lr: 1.52e-04 2022-05-29 12:36:58,835 INFO [train.py:842] (0/4) Epoch 36, batch 5700, loss[loss=0.1458, simple_loss=0.2247, pruned_loss=0.03339, over 7407.00 frames.], tot_loss[loss=0.1721, simple_loss=0.261, pruned_loss=0.04161, over 1414398.61 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 12:37:37,889 INFO [train.py:842] (0/4) Epoch 36, batch 5750, loss[loss=0.1945, simple_loss=0.2769, pruned_loss=0.056, over 7294.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2621, pruned_loss=0.04213, over 1411295.76 frames.], batch size: 25, lr: 1.52e-04 2022-05-29 12:38:17,288 INFO [train.py:842] (0/4) Epoch 36, batch 5800, loss[loss=0.2097, simple_loss=0.2904, pruned_loss=0.06447, over 7067.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2613, pruned_loss=0.04171, over 1414500.71 frames.], batch size: 28, lr: 1.52e-04 2022-05-29 12:38:56,579 INFO [train.py:842] (0/4) Epoch 36, batch 5850, loss[loss=0.1688, simple_loss=0.2561, pruned_loss=0.04078, over 7156.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2617, pruned_loss=0.04191, over 1419027.18 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 12:39:36,363 INFO [train.py:842] (0/4) Epoch 36, batch 5900, loss[loss=0.2058, simple_loss=0.2743, pruned_loss=0.06869, over 7394.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2619, pruned_loss=0.04253, over 1420049.69 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 12:40:15,766 INFO [train.py:842] (0/4) Epoch 36, batch 5950, loss[loss=0.1354, simple_loss=0.2223, pruned_loss=0.02431, over 7167.00 frames.], tot_loss[loss=0.1719, simple_loss=0.26, pruned_loss=0.04186, over 1419604.42 frames.], batch size: 19, lr: 1.52e-04 2022-05-29 12:40:55,445 INFO [train.py:842] (0/4) Epoch 36, batch 6000, loss[loss=0.1762, simple_loss=0.2635, pruned_loss=0.04444, over 7182.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2602, pruned_loss=0.04118, over 1420794.40 frames.], batch size: 22, lr: 1.52e-04 2022-05-29 12:40:55,446 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 12:41:05,037 INFO [train.py:871] (0/4) Epoch 36, validation: loss=0.1636, simple_loss=0.2603, pruned_loss=0.03352, over 868885.00 frames. 
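Note on the three loss figures printed in every entry: to within rounding, the logged loss equals 0.5 * simple_loss + pruned_loss. For the validation entry just above, 0.5 * 0.2603 + 0.03352 = 0.16367 against the printed 0.1636. This suggests the reported loss is a fixed-weight combination of a simple (linear) transducer loss and the pruned transducer loss; the 0.5 weight is inferred from the printed values themselves, not read out of the training code. A minimal check in Python:

    # Hedged check: the logged `loss` appears to equal 0.5 * simple_loss + pruned_loss.
    # The 0.5 weight is inferred from the printed values, not from train.py itself.
    entries = [
        # (loss, simple_loss, pruned_loss), copied from entries above
        (0.1842, 0.2771, 0.04569),  # epoch 36, batch 3700, per-batch loss
        (0.1723, 0.2616, 0.04147),  # epoch 36, batch 3700, tot_loss
        (0.1636, 0.2603, 0.03352),  # epoch 36, batch 6000, validation
    ]
    for loss, simple_loss, pruned_loss in entries:
        recombined = 0.5 * simple_loss + pruned_loss
        # Differences stay within ~1e-4, i.e. within rounding of the printed values.
        print(f"logged={loss:.4f}  recombined={recombined:.4f}  diff={abs(loss - recombined):.1e}")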
2022-05-29 12:41:44,502 INFO [train.py:842] (0/4) Epoch 36, batch 6050, loss[loss=0.1336, simple_loss=0.2221, pruned_loss=0.02252, over 7132.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2597, pruned_loss=0.04081, over 1421224.03 frames.], batch size: 17, lr: 1.52e-04 2022-05-29 12:42:24,264 INFO [train.py:842] (0/4) Epoch 36, batch 6100, loss[loss=0.1566, simple_loss=0.2523, pruned_loss=0.03049, over 7173.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2604, pruned_loss=0.04128, over 1422857.72 frames.], batch size: 19, lr: 1.52e-04 2022-05-29 12:43:03,490 INFO [train.py:842] (0/4) Epoch 36, batch 6150, loss[loss=0.1635, simple_loss=0.2558, pruned_loss=0.03566, over 7343.00 frames.], tot_loss[loss=0.171, simple_loss=0.2601, pruned_loss=0.04098, over 1424106.34 frames.], batch size: 22, lr: 1.52e-04 2022-05-29 12:43:43,140 INFO [train.py:842] (0/4) Epoch 36, batch 6200, loss[loss=0.1644, simple_loss=0.2638, pruned_loss=0.03252, over 7079.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2604, pruned_loss=0.041, over 1421384.96 frames.], batch size: 28, lr: 1.52e-04 2022-05-29 12:44:22,462 INFO [train.py:842] (0/4) Epoch 36, batch 6250, loss[loss=0.2145, simple_loss=0.2997, pruned_loss=0.06468, over 7298.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2608, pruned_loss=0.04129, over 1421053.26 frames.], batch size: 25, lr: 1.52e-04 2022-05-29 12:44:51,061 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-328000.pt 2022-05-29 12:45:04,952 INFO [train.py:842] (0/4) Epoch 36, batch 6300, loss[loss=0.1593, simple_loss=0.2478, pruned_loss=0.03543, over 7126.00 frames.], tot_loss[loss=0.1743, simple_loss=0.2627, pruned_loss=0.04291, over 1420092.89 frames.], batch size: 17, lr: 1.52e-04 2022-05-29 12:45:44,240 INFO [train.py:842] (0/4) Epoch 36, batch 6350, loss[loss=0.184, simple_loss=0.2758, pruned_loss=0.04616, over 7310.00 frames.], tot_loss[loss=0.1724, simple_loss=0.2609, pruned_loss=0.04191, over 1419290.79 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 12:46:23,731 INFO [train.py:842] (0/4) Epoch 36, batch 6400, loss[loss=0.1656, simple_loss=0.2529, pruned_loss=0.03916, over 7342.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2609, pruned_loss=0.04144, over 1421776.15 frames.], batch size: 22, lr: 1.52e-04 2022-05-29 12:47:02,920 INFO [train.py:842] (0/4) Epoch 36, batch 6450, loss[loss=0.1746, simple_loss=0.2704, pruned_loss=0.03943, over 7255.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2621, pruned_loss=0.04218, over 1423802.46 frames.], batch size: 19, lr: 1.52e-04 2022-05-29 12:47:42,437 INFO [train.py:842] (0/4) Epoch 36, batch 6500, loss[loss=0.1498, simple_loss=0.2388, pruned_loss=0.03042, over 7170.00 frames.], tot_loss[loss=0.1732, simple_loss=0.262, pruned_loss=0.04221, over 1423441.27 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 12:48:21,585 INFO [train.py:842] (0/4) Epoch 36, batch 6550, loss[loss=0.1678, simple_loss=0.2665, pruned_loss=0.03454, over 7141.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2625, pruned_loss=0.04284, over 1421972.63 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 12:49:01,322 INFO [train.py:842] (0/4) Epoch 36, batch 6600, loss[loss=0.1678, simple_loss=0.2553, pruned_loss=0.04021, over 7165.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2626, pruned_loss=0.04246, over 1423462.05 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 12:49:40,645 INFO [train.py:842] (0/4) Epoch 36, batch 6650, loss[loss=0.1683, simple_loss=0.2597, pruned_loss=0.03842, over 6701.00 frames.], 
tot_loss[loss=0.1736, simple_loss=0.2626, pruned_loss=0.04234, over 1424162.12 frames.], batch size: 31, lr: 1.52e-04 2022-05-29 12:50:20,354 INFO [train.py:842] (0/4) Epoch 36, batch 6700, loss[loss=0.1601, simple_loss=0.2519, pruned_loss=0.03414, over 7231.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2623, pruned_loss=0.04238, over 1426310.53 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 12:50:59,321 INFO [train.py:842] (0/4) Epoch 36, batch 6750, loss[loss=0.1502, simple_loss=0.2455, pruned_loss=0.02745, over 7328.00 frames.], tot_loss[loss=0.174, simple_loss=0.2631, pruned_loss=0.0425, over 1421412.46 frames.], batch size: 22, lr: 1.52e-04 2022-05-29 12:51:38,822 INFO [train.py:842] (0/4) Epoch 36, batch 6800, loss[loss=0.1722, simple_loss=0.2619, pruned_loss=0.04127, over 7347.00 frames.], tot_loss[loss=0.174, simple_loss=0.2635, pruned_loss=0.04228, over 1425316.80 frames.], batch size: 19, lr: 1.52e-04 2022-05-29 12:52:18,239 INFO [train.py:842] (0/4) Epoch 36, batch 6850, loss[loss=0.203, simple_loss=0.2896, pruned_loss=0.05818, over 7058.00 frames.], tot_loss[loss=0.173, simple_loss=0.2623, pruned_loss=0.04183, over 1425344.16 frames.], batch size: 28, lr: 1.52e-04 2022-05-29 12:52:57,986 INFO [train.py:842] (0/4) Epoch 36, batch 6900, loss[loss=0.1821, simple_loss=0.2812, pruned_loss=0.04151, over 7435.00 frames.], tot_loss[loss=0.172, simple_loss=0.2614, pruned_loss=0.04131, over 1426791.75 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 12:53:37,191 INFO [train.py:842] (0/4) Epoch 36, batch 6950, loss[loss=0.1578, simple_loss=0.2455, pruned_loss=0.03501, over 7244.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2619, pruned_loss=0.04156, over 1428824.05 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 12:54:16,961 INFO [train.py:842] (0/4) Epoch 36, batch 7000, loss[loss=0.1973, simple_loss=0.2783, pruned_loss=0.05815, over 7438.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2609, pruned_loss=0.04123, over 1429670.59 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 12:54:56,302 INFO [train.py:842] (0/4) Epoch 36, batch 7050, loss[loss=0.1725, simple_loss=0.2751, pruned_loss=0.03489, over 7425.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2613, pruned_loss=0.04124, over 1426800.33 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 12:55:36,074 INFO [train.py:842] (0/4) Epoch 36, batch 7100, loss[loss=0.1793, simple_loss=0.273, pruned_loss=0.04286, over 7262.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2605, pruned_loss=0.04116, over 1427554.89 frames.], batch size: 19, lr: 1.52e-04 2022-05-29 12:56:15,254 INFO [train.py:842] (0/4) Epoch 36, batch 7150, loss[loss=0.1723, simple_loss=0.272, pruned_loss=0.03629, over 7293.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2611, pruned_loss=0.04128, over 1430903.18 frames.], batch size: 24, lr: 1.52e-04 2022-05-29 12:56:54,951 INFO [train.py:842] (0/4) Epoch 36, batch 7200, loss[loss=0.1346, simple_loss=0.2205, pruned_loss=0.02439, over 7277.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2609, pruned_loss=0.0414, over 1429400.87 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 12:57:34,356 INFO [train.py:842] (0/4) Epoch 36, batch 7250, loss[loss=0.1708, simple_loss=0.2623, pruned_loss=0.03959, over 7180.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2605, pruned_loss=0.0411, over 1428643.08 frames.], batch size: 26, lr: 1.52e-04 2022-05-29 12:58:13,754 INFO [train.py:842] (0/4) Epoch 36, batch 7300, loss[loss=0.1897, simple_loss=0.277, pruned_loss=0.05123, over 7055.00 frames.], tot_loss[loss=0.1719, 
simple_loss=0.2613, pruned_loss=0.04129, over 1426094.78 frames.], batch size: 28, lr: 1.52e-04 2022-05-29 12:58:52,904 INFO [train.py:842] (0/4) Epoch 36, batch 7350, loss[loss=0.2237, simple_loss=0.2933, pruned_loss=0.07702, over 6791.00 frames.], tot_loss[loss=0.172, simple_loss=0.2609, pruned_loss=0.04154, over 1426182.86 frames.], batch size: 15, lr: 1.52e-04 2022-05-29 12:59:32,489 INFO [train.py:842] (0/4) Epoch 36, batch 7400, loss[loss=0.1625, simple_loss=0.2565, pruned_loss=0.03426, over 7427.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2592, pruned_loss=0.04059, over 1424344.19 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 13:00:11,804 INFO [train.py:842] (0/4) Epoch 36, batch 7450, loss[loss=0.148, simple_loss=0.2311, pruned_loss=0.03249, over 7406.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2594, pruned_loss=0.04066, over 1421439.44 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 13:00:51,486 INFO [train.py:842] (0/4) Epoch 36, batch 7500, loss[loss=0.1662, simple_loss=0.2497, pruned_loss=0.0413, over 7163.00 frames.], tot_loss[loss=0.171, simple_loss=0.26, pruned_loss=0.04097, over 1424615.75 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 13:01:30,811 INFO [train.py:842] (0/4) Epoch 36, batch 7550, loss[loss=0.195, simple_loss=0.283, pruned_loss=0.05347, over 7218.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2607, pruned_loss=0.0412, over 1425225.85 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 13:02:10,475 INFO [train.py:842] (0/4) Epoch 36, batch 7600, loss[loss=0.1342, simple_loss=0.216, pruned_loss=0.02615, over 7287.00 frames.], tot_loss[loss=0.172, simple_loss=0.261, pruned_loss=0.04154, over 1421766.59 frames.], batch size: 17, lr: 1.52e-04 2022-05-29 13:02:49,593 INFO [train.py:842] (0/4) Epoch 36, batch 7650, loss[loss=0.2077, simple_loss=0.2978, pruned_loss=0.05874, over 7377.00 frames.], tot_loss[loss=0.173, simple_loss=0.262, pruned_loss=0.04204, over 1421413.27 frames.], batch size: 23, lr: 1.52e-04 2022-05-29 13:03:29,279 INFO [train.py:842] (0/4) Epoch 36, batch 7700, loss[loss=0.1969, simple_loss=0.2904, pruned_loss=0.05169, over 7227.00 frames.], tot_loss[loss=0.172, simple_loss=0.2611, pruned_loss=0.04147, over 1425353.05 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 13:04:08,596 INFO [train.py:842] (0/4) Epoch 36, batch 7750, loss[loss=0.1484, simple_loss=0.2339, pruned_loss=0.03142, over 7170.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2609, pruned_loss=0.04144, over 1425033.24 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 13:04:48,279 INFO [train.py:842] (0/4) Epoch 36, batch 7800, loss[loss=0.1632, simple_loss=0.2605, pruned_loss=0.03291, over 7434.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2611, pruned_loss=0.04136, over 1424836.35 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 13:05:27,423 INFO [train.py:842] (0/4) Epoch 36, batch 7850, loss[loss=0.1503, simple_loss=0.2378, pruned_loss=0.03142, over 7421.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2604, pruned_loss=0.0404, over 1427182.42 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 13:06:06,996 INFO [train.py:842] (0/4) Epoch 36, batch 7900, loss[loss=0.1467, simple_loss=0.2345, pruned_loss=0.02941, over 7070.00 frames.], tot_loss[loss=0.17, simple_loss=0.2597, pruned_loss=0.04012, over 1425351.56 frames.], batch size: 18, lr: 1.52e-04 2022-05-29 13:06:46,346 INFO [train.py:842] (0/4) Epoch 36, batch 7950, loss[loss=0.1916, simple_loss=0.2881, pruned_loss=0.04753, over 7107.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2604, 
pruned_loss=0.04088, over 1424951.42 frames.], batch size: 28, lr: 1.52e-04 2022-05-29 13:07:25,885 INFO [train.py:842] (0/4) Epoch 36, batch 8000, loss[loss=0.1657, simple_loss=0.2518, pruned_loss=0.03978, over 7283.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2609, pruned_loss=0.0409, over 1425011.74 frames.], batch size: 24, lr: 1.52e-04 2022-05-29 13:08:05,188 INFO [train.py:842] (0/4) Epoch 36, batch 8050, loss[loss=0.1767, simple_loss=0.2771, pruned_loss=0.03817, over 6693.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2599, pruned_loss=0.0407, over 1423171.92 frames.], batch size: 31, lr: 1.52e-04 2022-05-29 13:08:44,864 INFO [train.py:842] (0/4) Epoch 36, batch 8100, loss[loss=0.1567, simple_loss=0.2499, pruned_loss=0.03178, over 7362.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2602, pruned_loss=0.04073, over 1423305.13 frames.], batch size: 19, lr: 1.52e-04 2022-05-29 13:09:24,145 INFO [train.py:842] (0/4) Epoch 36, batch 8150, loss[loss=0.1713, simple_loss=0.2622, pruned_loss=0.04025, over 7297.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2614, pruned_loss=0.04148, over 1424219.30 frames.], batch size: 25, lr: 1.52e-04 2022-05-29 13:10:03,954 INFO [train.py:842] (0/4) Epoch 36, batch 8200, loss[loss=0.2238, simple_loss=0.315, pruned_loss=0.06631, over 7144.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2613, pruned_loss=0.0416, over 1427702.86 frames.], batch size: 26, lr: 1.52e-04 2022-05-29 13:10:43,097 INFO [train.py:842] (0/4) Epoch 36, batch 8250, loss[loss=0.1559, simple_loss=0.2512, pruned_loss=0.03033, over 7228.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2616, pruned_loss=0.04177, over 1425144.77 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 13:11:22,815 INFO [train.py:842] (0/4) Epoch 36, batch 8300, loss[loss=0.178, simple_loss=0.2688, pruned_loss=0.0436, over 7149.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2609, pruned_loss=0.04104, over 1417959.27 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 13:12:02,058 INFO [train.py:842] (0/4) Epoch 36, batch 8350, loss[loss=0.1703, simple_loss=0.2557, pruned_loss=0.04248, over 7316.00 frames.], tot_loss[loss=0.172, simple_loss=0.2614, pruned_loss=0.04128, over 1418465.18 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 13:12:41,430 INFO [train.py:842] (0/4) Epoch 36, batch 8400, loss[loss=0.1251, simple_loss=0.2149, pruned_loss=0.01769, over 6970.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2608, pruned_loss=0.04121, over 1418596.15 frames.], batch size: 16, lr: 1.52e-04 2022-05-29 13:13:20,674 INFO [train.py:842] (0/4) Epoch 36, batch 8450, loss[loss=0.1605, simple_loss=0.2533, pruned_loss=0.03378, over 4652.00 frames.], tot_loss[loss=0.1693, simple_loss=0.2582, pruned_loss=0.04013, over 1417855.37 frames.], batch size: 52, lr: 1.52e-04 2022-05-29 13:14:00,557 INFO [train.py:842] (0/4) Epoch 36, batch 8500, loss[loss=0.1853, simple_loss=0.2693, pruned_loss=0.05062, over 7154.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2585, pruned_loss=0.04066, over 1416992.36 frames.], batch size: 17, lr: 1.52e-04 2022-05-29 13:14:39,938 INFO [train.py:842] (0/4) Epoch 36, batch 8550, loss[loss=0.1798, simple_loss=0.2735, pruned_loss=0.04305, over 7245.00 frames.], tot_loss[loss=0.1707, simple_loss=0.259, pruned_loss=0.04114, over 1415311.02 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 13:15:19,708 INFO [train.py:842] (0/4) Epoch 36, batch 8600, loss[loss=0.2175, simple_loss=0.3055, pruned_loss=0.06475, over 7225.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2595, pruned_loss=0.04111, over 
1413471.73 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 13:15:59,157 INFO [train.py:842] (0/4) Epoch 36, batch 8650, loss[loss=0.2209, simple_loss=0.3005, pruned_loss=0.0706, over 5370.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2586, pruned_loss=0.04057, over 1414051.20 frames.], batch size: 53, lr: 1.52e-04 2022-05-29 13:16:38,725 INFO [train.py:842] (0/4) Epoch 36, batch 8700, loss[loss=0.1488, simple_loss=0.2407, pruned_loss=0.02847, over 6503.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2584, pruned_loss=0.04026, over 1411472.70 frames.], batch size: 38, lr: 1.52e-04 2022-05-29 13:17:17,977 INFO [train.py:842] (0/4) Epoch 36, batch 8750, loss[loss=0.161, simple_loss=0.2514, pruned_loss=0.03528, over 7239.00 frames.], tot_loss[loss=0.1704, simple_loss=0.2594, pruned_loss=0.04069, over 1412042.42 frames.], batch size: 20, lr: 1.52e-04 2022-05-29 13:17:57,609 INFO [train.py:842] (0/4) Epoch 36, batch 8800, loss[loss=0.1719, simple_loss=0.266, pruned_loss=0.03891, over 7207.00 frames.], tot_loss[loss=0.1693, simple_loss=0.2585, pruned_loss=0.04011, over 1411464.47 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 13:18:36,661 INFO [train.py:842] (0/4) Epoch 36, batch 8850, loss[loss=0.2003, simple_loss=0.2777, pruned_loss=0.06144, over 4984.00 frames.], tot_loss[loss=0.1688, simple_loss=0.2578, pruned_loss=0.03989, over 1397786.26 frames.], batch size: 52, lr: 1.52e-04 2022-05-29 13:19:16,401 INFO [train.py:842] (0/4) Epoch 36, batch 8900, loss[loss=0.1872, simple_loss=0.2692, pruned_loss=0.05261, over 5089.00 frames.], tot_loss[loss=0.1698, simple_loss=0.2588, pruned_loss=0.04034, over 1399503.62 frames.], batch size: 52, lr: 1.52e-04 2022-05-29 13:19:55,419 INFO [train.py:842] (0/4) Epoch 36, batch 8950, loss[loss=0.1553, simple_loss=0.2486, pruned_loss=0.03105, over 7328.00 frames.], tot_loss[loss=0.1691, simple_loss=0.2582, pruned_loss=0.04004, over 1393043.98 frames.], batch size: 21, lr: 1.52e-04 2022-05-29 13:20:34,561 INFO [train.py:842] (0/4) Epoch 36, batch 9000, loss[loss=0.188, simple_loss=0.2685, pruned_loss=0.05369, over 7255.00 frames.], tot_loss[loss=0.1697, simple_loss=0.2586, pruned_loss=0.04042, over 1385040.33 frames.], batch size: 19, lr: 1.52e-04 2022-05-29 13:20:34,562 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 13:20:44,417 INFO [train.py:871] (0/4) Epoch 36, validation: loss=0.1634, simple_loss=0.2603, pruned_loss=0.03321, over 868885.00 frames. 
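The validation entries above (epoch 36, batches 6000 and 9000) and the ones later in epoch 37 appear every 3000 training batches, and each reports its loss over the same 868885.00 frames, i.e. a fixed held-out set evaluated in full each time. The sketch below shows that kind of periodic, frame-weighted validation pass; the function and argument names (validate, valid_loader, compute_loss) are placeholders for illustration, not the recipe's actual interface.

    import torch

    def validate(model: torch.nn.Module, valid_loader, compute_loss) -> float:
        # Frame-weighted loss over a fixed dev set. `compute_loss(model, batch)`
        # is assumed to return (summed_loss, num_frames) for one batch; that
        # contract is an assumption made for this sketch.
        model.eval()
        total_loss, total_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in valid_loader:
                summed_loss, num_frames = compute_loss(model, batch)
                total_loss += float(summed_loss)
                total_frames += float(num_frames)
        model.train()
        return total_loss / max(total_frames, 1.0)

    # Run on a fixed cadence, roughly every 3000 batches as in this log:
    #     if batch_idx > 0 and batch_idx % 3000 == 0:
    #         valid_loss = validate(model, valid_loader, compute_loss)

Because the dev set never changes, the frame count printed with each validation line is identical, which keeps the validation losses directly comparable from one check to the next.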
2022-05-29 13:21:23,713 INFO [train.py:842] (0/4) Epoch 36, batch 9050, loss[loss=0.1482, simple_loss=0.2359, pruned_loss=0.03022, over 6998.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2587, pruned_loss=0.04077, over 1385076.36 frames.], batch size: 16, lr: 1.52e-04 2022-05-29 13:22:03,048 INFO [train.py:842] (0/4) Epoch 36, batch 9100, loss[loss=0.1664, simple_loss=0.2591, pruned_loss=0.03688, over 4929.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2587, pruned_loss=0.04096, over 1368427.50 frames.], batch size: 52, lr: 1.52e-04 2022-05-29 13:22:41,480 INFO [train.py:842] (0/4) Epoch 36, batch 9150, loss[loss=0.213, simple_loss=0.2947, pruned_loss=0.06559, over 5070.00 frames.], tot_loss[loss=0.1736, simple_loss=0.2613, pruned_loss=0.04292, over 1319982.31 frames.], batch size: 52, lr: 1.52e-04 2022-05-29 13:23:14,287 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-36.pt 2022-05-29 13:23:33,557 INFO [train.py:842] (0/4) Epoch 37, batch 0, loss[loss=0.2439, simple_loss=0.34, pruned_loss=0.07387, over 7342.00 frames.], tot_loss[loss=0.2439, simple_loss=0.34, pruned_loss=0.07387, over 7342.00 frames.], batch size: 22, lr: 1.50e-04 2022-05-29 13:24:13,143 INFO [train.py:842] (0/4) Epoch 37, batch 50, loss[loss=0.1918, simple_loss=0.2789, pruned_loss=0.05236, over 7457.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2606, pruned_loss=0.04046, over 321334.01 frames.], batch size: 19, lr: 1.50e-04 2022-05-29 13:24:52,825 INFO [train.py:842] (0/4) Epoch 37, batch 100, loss[loss=0.1576, simple_loss=0.2554, pruned_loss=0.02985, over 7322.00 frames.], tot_loss[loss=0.172, simple_loss=0.2627, pruned_loss=0.04068, over 566394.83 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:25:32,060 INFO [train.py:842] (0/4) Epoch 37, batch 150, loss[loss=0.1646, simple_loss=0.2643, pruned_loss=0.03248, over 7095.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2619, pruned_loss=0.04094, over 754935.29 frames.], batch size: 28, lr: 1.49e-04 2022-05-29 13:26:11,459 INFO [train.py:842] (0/4) Epoch 37, batch 200, loss[loss=0.1676, simple_loss=0.2654, pruned_loss=0.03494, over 7317.00 frames.], tot_loss[loss=0.1744, simple_loss=0.2648, pruned_loss=0.04197, over 906486.57 frames.], batch size: 21, lr: 1.49e-04 2022-05-29 13:26:50,727 INFO [train.py:842] (0/4) Epoch 37, batch 250, loss[loss=0.1677, simple_loss=0.2616, pruned_loss=0.03687, over 7266.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2631, pruned_loss=0.0416, over 1017885.82 frames.], batch size: 19, lr: 1.49e-04 2022-05-29 13:27:30,279 INFO [train.py:842] (0/4) Epoch 37, batch 300, loss[loss=0.1648, simple_loss=0.2621, pruned_loss=0.03374, over 7325.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2623, pruned_loss=0.0416, over 1103710.02 frames.], batch size: 22, lr: 1.49e-04 2022-05-29 13:28:09,422 INFO [train.py:842] (0/4) Epoch 37, batch 350, loss[loss=0.1697, simple_loss=0.2528, pruned_loss=0.04326, over 7167.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2622, pruned_loss=0.04115, over 1172186.92 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 13:28:49,117 INFO [train.py:842] (0/4) Epoch 37, batch 400, loss[loss=0.1815, simple_loss=0.2721, pruned_loss=0.04544, over 7228.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2625, pruned_loss=0.04148, over 1231516.73 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:29:28,330 INFO [train.py:842] (0/4) Epoch 37, batch 450, loss[loss=0.1721, simple_loss=0.2637, pruned_loss=0.04022, over 7144.00 frames.], tot_loss[loss=0.1741, 
simple_loss=0.2638, pruned_loss=0.04223, over 1275904.19 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:30:07,657 INFO [train.py:842] (0/4) Epoch 37, batch 500, loss[loss=0.1628, simple_loss=0.2529, pruned_loss=0.03633, over 7226.00 frames.], tot_loss[loss=0.174, simple_loss=0.2636, pruned_loss=0.0422, over 1306402.81 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:30:46,584 INFO [train.py:842] (0/4) Epoch 37, batch 550, loss[loss=0.1804, simple_loss=0.2648, pruned_loss=0.04797, over 7062.00 frames.], tot_loss[loss=0.1742, simple_loss=0.2637, pruned_loss=0.04234, over 1322429.22 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 13:31:26,450 INFO [train.py:842] (0/4) Epoch 37, batch 600, loss[loss=0.1725, simple_loss=0.2583, pruned_loss=0.04339, over 7431.00 frames.], tot_loss[loss=0.172, simple_loss=0.2612, pruned_loss=0.04135, over 1347484.93 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:32:06,312 INFO [train.py:842] (0/4) Epoch 37, batch 650, loss[loss=0.1476, simple_loss=0.228, pruned_loss=0.03354, over 7123.00 frames.], tot_loss[loss=0.171, simple_loss=0.2599, pruned_loss=0.041, over 1366384.26 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 13:32:45,846 INFO [train.py:842] (0/4) Epoch 37, batch 700, loss[loss=0.1438, simple_loss=0.2418, pruned_loss=0.02293, over 7228.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2608, pruned_loss=0.04142, over 1379399.42 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:33:25,178 INFO [train.py:842] (0/4) Epoch 37, batch 750, loss[loss=0.1526, simple_loss=0.2419, pruned_loss=0.03164, over 7150.00 frames.], tot_loss[loss=0.172, simple_loss=0.261, pruned_loss=0.04156, over 1388899.27 frames.], batch size: 19, lr: 1.49e-04 2022-05-29 13:34:04,838 INFO [train.py:842] (0/4) Epoch 37, batch 800, loss[loss=0.1539, simple_loss=0.2391, pruned_loss=0.03435, over 7421.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2601, pruned_loss=0.04127, over 1398198.22 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 13:34:43,816 INFO [train.py:842] (0/4) Epoch 37, batch 850, loss[loss=0.1594, simple_loss=0.2469, pruned_loss=0.03594, over 7254.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2607, pruned_loss=0.04127, over 1398005.32 frames.], batch size: 19, lr: 1.49e-04 2022-05-29 13:35:23,647 INFO [train.py:842] (0/4) Epoch 37, batch 900, loss[loss=0.1465, simple_loss=0.2315, pruned_loss=0.03071, over 7081.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2604, pruned_loss=0.04102, over 1406232.79 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 13:36:02,992 INFO [train.py:842] (0/4) Epoch 37, batch 950, loss[loss=0.1792, simple_loss=0.2538, pruned_loss=0.05232, over 7271.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2604, pruned_loss=0.04132, over 1410513.59 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 13:36:42,565 INFO [train.py:842] (0/4) Epoch 37, batch 1000, loss[loss=0.1658, simple_loss=0.2697, pruned_loss=0.031, over 6814.00 frames.], tot_loss[loss=0.1704, simple_loss=0.2592, pruned_loss=0.04075, over 1412739.28 frames.], batch size: 31, lr: 1.49e-04 2022-05-29 13:37:22,110 INFO [train.py:842] (0/4) Epoch 37, batch 1050, loss[loss=0.1823, simple_loss=0.2697, pruned_loss=0.04746, over 7383.00 frames.], tot_loss[loss=0.1704, simple_loss=0.2591, pruned_loss=0.04091, over 1417264.93 frames.], batch size: 23, lr: 1.49e-04 2022-05-29 13:38:01,725 INFO [train.py:842] (0/4) Epoch 37, batch 1100, loss[loss=0.1951, simple_loss=0.3019, pruned_loss=0.04418, over 7227.00 frames.], tot_loss[loss=0.17, simple_loss=0.2587, pruned_loss=0.04067, 
over 1418466.09 frames.], batch size: 21, lr: 1.49e-04 2022-05-29 13:38:41,075 INFO [train.py:842] (0/4) Epoch 37, batch 1150, loss[loss=0.1913, simple_loss=0.2772, pruned_loss=0.05272, over 5191.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2591, pruned_loss=0.04068, over 1417979.80 frames.], batch size: 53, lr: 1.49e-04 2022-05-29 13:39:20,635 INFO [train.py:842] (0/4) Epoch 37, batch 1200, loss[loss=0.2419, simple_loss=0.3147, pruned_loss=0.08458, over 7151.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2607, pruned_loss=0.04154, over 1420758.87 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:39:59,645 INFO [train.py:842] (0/4) Epoch 37, batch 1250, loss[loss=0.1714, simple_loss=0.2709, pruned_loss=0.03598, over 7213.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2611, pruned_loss=0.04159, over 1420264.23 frames.], batch size: 22, lr: 1.49e-04 2022-05-29 13:40:39,208 INFO [train.py:842] (0/4) Epoch 37, batch 1300, loss[loss=0.1516, simple_loss=0.231, pruned_loss=0.03607, over 7138.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2613, pruned_loss=0.04159, over 1421997.86 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 13:41:18,355 INFO [train.py:842] (0/4) Epoch 37, batch 1350, loss[loss=0.1665, simple_loss=0.2582, pruned_loss=0.03743, over 7069.00 frames.], tot_loss[loss=0.172, simple_loss=0.2608, pruned_loss=0.04161, over 1418285.18 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 13:41:58,013 INFO [train.py:842] (0/4) Epoch 37, batch 1400, loss[loss=0.1535, simple_loss=0.2379, pruned_loss=0.03457, over 6990.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2611, pruned_loss=0.04168, over 1418260.30 frames.], batch size: 16, lr: 1.49e-04 2022-05-29 13:42:37,217 INFO [train.py:842] (0/4) Epoch 37, batch 1450, loss[loss=0.1814, simple_loss=0.2691, pruned_loss=0.0469, over 7297.00 frames.], tot_loss[loss=0.173, simple_loss=0.2619, pruned_loss=0.04199, over 1420129.21 frames.], batch size: 24, lr: 1.49e-04 2022-05-29 13:43:16,541 INFO [train.py:842] (0/4) Epoch 37, batch 1500, loss[loss=0.1751, simple_loss=0.2683, pruned_loss=0.0409, over 7314.00 frames.], tot_loss[loss=0.1741, simple_loss=0.2629, pruned_loss=0.04267, over 1416825.85 frames.], batch size: 24, lr: 1.49e-04 2022-05-29 13:43:55,697 INFO [train.py:842] (0/4) Epoch 37, batch 1550, loss[loss=0.1693, simple_loss=0.2627, pruned_loss=0.03794, over 6657.00 frames.], tot_loss[loss=0.1739, simple_loss=0.2625, pruned_loss=0.0426, over 1411258.52 frames.], batch size: 31, lr: 1.49e-04 2022-05-29 13:44:35,491 INFO [train.py:842] (0/4) Epoch 37, batch 1600, loss[loss=0.1875, simple_loss=0.2753, pruned_loss=0.04983, over 7377.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2619, pruned_loss=0.04213, over 1411751.25 frames.], batch size: 23, lr: 1.49e-04 2022-05-29 13:45:14,783 INFO [train.py:842] (0/4) Epoch 37, batch 1650, loss[loss=0.1849, simple_loss=0.2752, pruned_loss=0.04725, over 7191.00 frames.], tot_loss[loss=0.1718, simple_loss=0.261, pruned_loss=0.04131, over 1415276.28 frames.], batch size: 22, lr: 1.49e-04 2022-05-29 13:45:53,931 INFO [train.py:842] (0/4) Epoch 37, batch 1700, loss[loss=0.194, simple_loss=0.2789, pruned_loss=0.05457, over 7178.00 frames.], tot_loss[loss=0.172, simple_loss=0.2612, pruned_loss=0.04143, over 1413644.91 frames.], batch size: 19, lr: 1.49e-04 2022-05-29 13:46:43,574 INFO [train.py:842] (0/4) Epoch 37, batch 1750, loss[loss=0.139, simple_loss=0.2341, pruned_loss=0.022, over 7360.00 frames.], tot_loss[loss=0.172, simple_loss=0.2613, pruned_loss=0.04135, over 1407954.46 frames.], batch 
size: 19, lr: 1.49e-04 2022-05-29 13:47:22,923 INFO [train.py:842] (0/4) Epoch 37, batch 1800, loss[loss=0.1715, simple_loss=0.2515, pruned_loss=0.04574, over 7289.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2617, pruned_loss=0.0414, over 1409810.88 frames.], batch size: 24, lr: 1.49e-04 2022-05-29 13:48:02,159 INFO [train.py:842] (0/4) Epoch 37, batch 1850, loss[loss=0.1731, simple_loss=0.255, pruned_loss=0.04565, over 7261.00 frames.], tot_loss[loss=0.171, simple_loss=0.2602, pruned_loss=0.04087, over 1410933.10 frames.], batch size: 19, lr: 1.49e-04 2022-05-29 13:48:41,598 INFO [train.py:842] (0/4) Epoch 37, batch 1900, loss[loss=0.1641, simple_loss=0.2608, pruned_loss=0.03367, over 6780.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2611, pruned_loss=0.04106, over 1417252.20 frames.], batch size: 31, lr: 1.49e-04 2022-05-29 13:49:20,951 INFO [train.py:842] (0/4) Epoch 37, batch 1950, loss[loss=0.2191, simple_loss=0.31, pruned_loss=0.0641, over 7224.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2604, pruned_loss=0.04094, over 1420785.62 frames.], batch size: 21, lr: 1.49e-04 2022-05-29 13:50:00,393 INFO [train.py:842] (0/4) Epoch 37, batch 2000, loss[loss=0.1549, simple_loss=0.2485, pruned_loss=0.03067, over 7429.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2609, pruned_loss=0.04131, over 1417245.72 frames.], batch size: 21, lr: 1.49e-04 2022-05-29 13:50:39,846 INFO [train.py:842] (0/4) Epoch 37, batch 2050, loss[loss=0.1654, simple_loss=0.2578, pruned_loss=0.03652, over 7227.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2594, pruned_loss=0.04062, over 1420553.51 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:51:19,324 INFO [train.py:842] (0/4) Epoch 37, batch 2100, loss[loss=0.1857, simple_loss=0.2747, pruned_loss=0.04834, over 7149.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2603, pruned_loss=0.04112, over 1419994.81 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:51:58,331 INFO [train.py:842] (0/4) Epoch 37, batch 2150, loss[loss=0.1566, simple_loss=0.2564, pruned_loss=0.02841, over 7417.00 frames.], tot_loss[loss=0.171, simple_loss=0.2603, pruned_loss=0.04088, over 1417214.65 frames.], batch size: 21, lr: 1.49e-04 2022-05-29 13:52:37,790 INFO [train.py:842] (0/4) Epoch 37, batch 2200, loss[loss=0.153, simple_loss=0.2456, pruned_loss=0.03013, over 7266.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2597, pruned_loss=0.04068, over 1418636.79 frames.], batch size: 19, lr: 1.49e-04 2022-05-29 13:53:16,121 INFO [train.py:842] (0/4) Epoch 37, batch 2250, loss[loss=0.191, simple_loss=0.2868, pruned_loss=0.04763, over 7151.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2599, pruned_loss=0.04096, over 1419303.54 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 13:53:54,630 INFO [train.py:842] (0/4) Epoch 37, batch 2300, loss[loss=0.1725, simple_loss=0.2591, pruned_loss=0.04294, over 7191.00 frames.], tot_loss[loss=0.171, simple_loss=0.2601, pruned_loss=0.04092, over 1418764.43 frames.], batch size: 23, lr: 1.49e-04 2022-05-29 13:54:32,869 INFO [train.py:842] (0/4) Epoch 37, batch 2350, loss[loss=0.1375, simple_loss=0.2174, pruned_loss=0.02878, over 7272.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2606, pruned_loss=0.0412, over 1413193.82 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 13:55:11,695 INFO [train.py:842] (0/4) Epoch 37, batch 2400, loss[loss=0.1778, simple_loss=0.2762, pruned_loss=0.03967, over 7281.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2595, pruned_loss=0.04044, over 1419789.75 frames.], batch size: 25, lr: 1.49e-04 
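Each training line carries two loss blocks: a per-batch loss over a few thousand frames and a tot_loss over roughly 1.42e6 frames. The tot_loss frame count restarts with each epoch (7342.00 frames at Epoch 37, batch 0 above, then 321334.01, 566394.83, ...) and then levels off near 1.42e6 instead of growing over the whole epoch, which is what a frame-weighted average with a mild exponential decay produces: the plateau is roughly the per-batch frame count times the effective window, here about 7,000 * 200. The tracker below is a minimal sketch of that behaviour; the decay constant is inferred from the plateau, not taken from the training code.

    class RunningLoss:
        # Frame-weighted running average with exponential decay. With
        # decay = 1 - 1/200 the decayed frame count saturates near
        # 200 * (frames per batch), i.e. around 1.4e6 as in the tot_loss
        # entries above. The 200-batch window is an assumption.

        def __init__(self, decay: float = 1.0 - 1.0 / 200):
            self.decay = decay
            self.loss_sum = 0.0  # decayed sum of (per-frame loss * frames)
            self.frames = 0.0    # decayed sum of frames

        def update(self, batch_loss: float, batch_frames: float) -> None:
            self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
            self.frames = self.decay * self.frames + batch_frames

        def value(self) -> float:
            return self.loss_sum / max(self.frames, 1.0)

    # One update with illustrative figures of the same magnitude as the
    # per-batch entries above:
    tracker = RunningLoss()
    tracker.update(batch_loss=0.18, batch_frames=7000.0)
    print(f"tot_loss={tracker.value():.4f} over {tracker.frames:.2f} frames")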
2022-05-29 13:55:50,271 INFO [train.py:842] (0/4) Epoch 37, batch 2450, loss[loss=0.1601, simple_loss=0.2578, pruned_loss=0.03118, over 7181.00 frames.], tot_loss[loss=0.17, simple_loss=0.2594, pruned_loss=0.04034, over 1425564.59 frames.], batch size: 26, lr: 1.49e-04 2022-05-29 13:56:28,822 INFO [train.py:842] (0/4) Epoch 37, batch 2500, loss[loss=0.1689, simple_loss=0.2616, pruned_loss=0.03812, over 7161.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2599, pruned_loss=0.0408, over 1428039.43 frames.], batch size: 19, lr: 1.49e-04 2022-05-29 13:57:07,132 INFO [train.py:842] (0/4) Epoch 37, batch 2550, loss[loss=0.1766, simple_loss=0.2649, pruned_loss=0.04422, over 7280.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2602, pruned_loss=0.04076, over 1428533.31 frames.], batch size: 24, lr: 1.49e-04 2022-05-29 13:57:45,688 INFO [train.py:842] (0/4) Epoch 37, batch 2600, loss[loss=0.1457, simple_loss=0.2286, pruned_loss=0.03143, over 6820.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2606, pruned_loss=0.04092, over 1425244.18 frames.], batch size: 15, lr: 1.49e-04 2022-05-29 13:58:24,127 INFO [train.py:842] (0/4) Epoch 37, batch 2650, loss[loss=0.175, simple_loss=0.2691, pruned_loss=0.04051, over 7225.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2612, pruned_loss=0.04117, over 1428125.76 frames.], batch size: 22, lr: 1.49e-04 2022-05-29 13:59:02,652 INFO [train.py:842] (0/4) Epoch 37, batch 2700, loss[loss=0.1746, simple_loss=0.2629, pruned_loss=0.04316, over 6580.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2617, pruned_loss=0.04134, over 1424434.79 frames.], batch size: 38, lr: 1.49e-04 2022-05-29 13:59:40,782 INFO [train.py:842] (0/4) Epoch 37, batch 2750, loss[loss=0.1818, simple_loss=0.2776, pruned_loss=0.04295, over 5355.00 frames.], tot_loss[loss=0.1736, simple_loss=0.263, pruned_loss=0.04208, over 1425314.88 frames.], batch size: 52, lr: 1.49e-04 2022-05-29 14:00:19,743 INFO [train.py:842] (0/4) Epoch 37, batch 2800, loss[loss=0.1642, simple_loss=0.2504, pruned_loss=0.039, over 7279.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2625, pruned_loss=0.04184, over 1429594.26 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 14:00:58,246 INFO [train.py:842] (0/4) Epoch 37, batch 2850, loss[loss=0.1601, simple_loss=0.2554, pruned_loss=0.03234, over 6532.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2616, pruned_loss=0.04145, over 1428347.53 frames.], batch size: 39, lr: 1.49e-04 2022-05-29 14:01:37,106 INFO [train.py:842] (0/4) Epoch 37, batch 2900, loss[loss=0.1361, simple_loss=0.2246, pruned_loss=0.02379, over 6998.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2608, pruned_loss=0.04093, over 1428660.40 frames.], batch size: 16, lr: 1.49e-04 2022-05-29 14:02:15,625 INFO [train.py:842] (0/4) Epoch 37, batch 2950, loss[loss=0.154, simple_loss=0.2442, pruned_loss=0.03195, over 7425.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2604, pruned_loss=0.04058, over 1425800.37 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 14:02:54,131 INFO [train.py:842] (0/4) Epoch 37, batch 3000, loss[loss=0.1727, simple_loss=0.2635, pruned_loss=0.04095, over 7221.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2614, pruned_loss=0.04115, over 1421319.01 frames.], batch size: 21, lr: 1.49e-04 2022-05-29 14:02:54,132 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 14:03:03,497 INFO [train.py:871] (0/4) Epoch 37, validation: loss=0.1648, simple_loss=0.2616, pruned_loss=0.03398, over 868885.00 frames. 
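Two kinds of checkpoint are written in this section: batch-numbered ones (checkpoint-328000.pt above and checkpoint-336000.pt a little further down, 8000 global batches apart) and an epoch-NN.pt saved when an epoch finishes (epoch-36.pt just before Epoch 37, batch 0). A minimal sketch of that policy follows; the function signature, the contents of the saved dict, and the save_every_n default are assumptions for illustration, not the recipe's actual interface.

    import torch

    def maybe_save_checkpoints(model, optimizer, exp_dir: str, epoch: int,
                               batch_idx_train: int, end_of_epoch: bool,
                               save_every_n: int = 8000) -> None:
        # Batch-numbered checkpoints every `save_every_n` global batches plus
        # an epoch checkpoint at each epoch boundary, matching the filenames
        # in this log. What actually goes into the saved dict here is assumed.
        state = {
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "epoch": epoch,
            "batch_idx_train": batch_idx_train,
        }
        if batch_idx_train > 0 and batch_idx_train % save_every_n == 0:
            torch.save(state, f"{exp_dir}/checkpoint-{batch_idx_train}.pt")
        if end_of_epoch:
            torch.save(state, f"{exp_dir}/epoch-{epoch}.pt")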
2022-05-29 14:03:41,914 INFO [train.py:842] (0/4) Epoch 37, batch 3050, loss[loss=0.1524, simple_loss=0.2258, pruned_loss=0.03952, over 6795.00 frames.], tot_loss[loss=0.1717, simple_loss=0.261, pruned_loss=0.04117, over 1420291.52 frames.], batch size: 15, lr: 1.49e-04 2022-05-29 14:04:20,732 INFO [train.py:842] (0/4) Epoch 37, batch 3100, loss[loss=0.1783, simple_loss=0.2645, pruned_loss=0.04609, over 7064.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2609, pruned_loss=0.04131, over 1418260.64 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 14:04:59,174 INFO [train.py:842] (0/4) Epoch 37, batch 3150, loss[loss=0.1513, simple_loss=0.228, pruned_loss=0.03733, over 7432.00 frames.], tot_loss[loss=0.1721, simple_loss=0.261, pruned_loss=0.04162, over 1418101.90 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 14:05:37,821 INFO [train.py:842] (0/4) Epoch 37, batch 3200, loss[loss=0.3693, simple_loss=0.4142, pruned_loss=0.1622, over 5040.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2606, pruned_loss=0.04134, over 1418717.80 frames.], batch size: 52, lr: 1.49e-04 2022-05-29 14:06:16,112 INFO [train.py:842] (0/4) Epoch 37, batch 3250, loss[loss=0.1814, simple_loss=0.2679, pruned_loss=0.04747, over 7202.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2603, pruned_loss=0.04089, over 1417815.53 frames.], batch size: 22, lr: 1.49e-04 2022-05-29 14:06:54,705 INFO [train.py:842] (0/4) Epoch 37, batch 3300, loss[loss=0.1683, simple_loss=0.2616, pruned_loss=0.0375, over 7409.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2609, pruned_loss=0.04104, over 1414518.87 frames.], batch size: 21, lr: 1.49e-04 2022-05-29 14:07:32,797 INFO [train.py:842] (0/4) Epoch 37, batch 3350, loss[loss=0.1594, simple_loss=0.248, pruned_loss=0.0354, over 7359.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2618, pruned_loss=0.04122, over 1411097.35 frames.], batch size: 23, lr: 1.49e-04 2022-05-29 14:08:11,576 INFO [train.py:842] (0/4) Epoch 37, batch 3400, loss[loss=0.193, simple_loss=0.27, pruned_loss=0.05804, over 7147.00 frames.], tot_loss[loss=0.1728, simple_loss=0.262, pruned_loss=0.04181, over 1415739.84 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 14:08:50,146 INFO [train.py:842] (0/4) Epoch 37, batch 3450, loss[loss=0.1533, simple_loss=0.2332, pruned_loss=0.0367, over 7284.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2616, pruned_loss=0.04178, over 1418529.50 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 14:09:28,737 INFO [train.py:842] (0/4) Epoch 37, batch 3500, loss[loss=0.1667, simple_loss=0.2495, pruned_loss=0.0419, over 7357.00 frames.], tot_loss[loss=0.172, simple_loss=0.2612, pruned_loss=0.04136, over 1416785.66 frames.], batch size: 19, lr: 1.49e-04 2022-05-29 14:10:06,959 INFO [train.py:842] (0/4) Epoch 37, batch 3550, loss[loss=0.1515, simple_loss=0.2366, pruned_loss=0.03316, over 6791.00 frames.], tot_loss[loss=0.171, simple_loss=0.2605, pruned_loss=0.04076, over 1413560.41 frames.], batch size: 15, lr: 1.49e-04 2022-05-29 14:10:45,910 INFO [train.py:842] (0/4) Epoch 37, batch 3600, loss[loss=0.1573, simple_loss=0.2447, pruned_loss=0.03498, over 6990.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2588, pruned_loss=0.04008, over 1420116.96 frames.], batch size: 16, lr: 1.49e-04 2022-05-29 14:11:24,117 INFO [train.py:842] (0/4) Epoch 37, batch 3650, loss[loss=0.1472, simple_loss=0.2386, pruned_loss=0.02792, over 7154.00 frames.], tot_loss[loss=0.17, simple_loss=0.2592, pruned_loss=0.04035, over 1422422.66 frames.], batch size: 19, lr: 1.49e-04 2022-05-29 14:12:02,944 INFO 
[train.py:842] (0/4) Epoch 37, batch 3700, loss[loss=0.1598, simple_loss=0.2549, pruned_loss=0.03235, over 7252.00 frames.], tot_loss[loss=0.1695, simple_loss=0.259, pruned_loss=0.04002, over 1425620.58 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 14:12:41,229 INFO [train.py:842] (0/4) Epoch 37, batch 3750, loss[loss=0.1838, simple_loss=0.2656, pruned_loss=0.051, over 7300.00 frames.], tot_loss[loss=0.1691, simple_loss=0.2587, pruned_loss=0.03971, over 1423355.85 frames.], batch size: 24, lr: 1.49e-04 2022-05-29 14:13:19,981 INFO [train.py:842] (0/4) Epoch 37, batch 3800, loss[loss=0.1335, simple_loss=0.2194, pruned_loss=0.02376, over 7295.00 frames.], tot_loss[loss=0.1694, simple_loss=0.259, pruned_loss=0.03987, over 1425117.10 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 14:13:58,262 INFO [train.py:842] (0/4) Epoch 37, batch 3850, loss[loss=0.2172, simple_loss=0.2968, pruned_loss=0.0688, over 4974.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2596, pruned_loss=0.04042, over 1423458.29 frames.], batch size: 53, lr: 1.49e-04 2022-05-29 14:14:37,038 INFO [train.py:842] (0/4) Epoch 37, batch 3900, loss[loss=0.1782, simple_loss=0.2749, pruned_loss=0.04075, over 7327.00 frames.], tot_loss[loss=0.17, simple_loss=0.2593, pruned_loss=0.04039, over 1425662.28 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 14:15:15,726 INFO [train.py:842] (0/4) Epoch 37, batch 3950, loss[loss=0.1437, simple_loss=0.236, pruned_loss=0.02573, over 7274.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2593, pruned_loss=0.04058, over 1426896.12 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 14:15:54,288 INFO [train.py:842] (0/4) Epoch 37, batch 4000, loss[loss=0.1718, simple_loss=0.2615, pruned_loss=0.0411, over 7071.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2609, pruned_loss=0.0417, over 1426835.24 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 14:16:32,610 INFO [train.py:842] (0/4) Epoch 37, batch 4050, loss[loss=0.1626, simple_loss=0.2496, pruned_loss=0.03776, over 7296.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2608, pruned_loss=0.0413, over 1427791.11 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 14:17:11,126 INFO [train.py:842] (0/4) Epoch 37, batch 4100, loss[loss=0.1521, simple_loss=0.2489, pruned_loss=0.02762, over 7128.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2613, pruned_loss=0.04097, over 1424140.62 frames.], batch size: 21, lr: 1.49e-04 2022-05-29 14:17:49,511 INFO [train.py:842] (0/4) Epoch 37, batch 4150, loss[loss=0.1669, simple_loss=0.2675, pruned_loss=0.0332, over 7292.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2607, pruned_loss=0.0408, over 1424951.69 frames.], batch size: 24, lr: 1.49e-04 2022-05-29 14:18:28,585 INFO [train.py:842] (0/4) Epoch 37, batch 4200, loss[loss=0.185, simple_loss=0.2506, pruned_loss=0.05968, over 7273.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2599, pruned_loss=0.04083, over 1428802.85 frames.], batch size: 17, lr: 1.49e-04 2022-05-29 14:19:06,939 INFO [train.py:842] (0/4) Epoch 37, batch 4250, loss[loss=0.1628, simple_loss=0.2603, pruned_loss=0.03264, over 7225.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2604, pruned_loss=0.04086, over 1428732.47 frames.], batch size: 20, lr: 1.49e-04 2022-05-29 14:19:45,855 INFO [train.py:842] (0/4) Epoch 37, batch 4300, loss[loss=0.1394, simple_loss=0.2282, pruned_loss=0.02532, over 7403.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2598, pruned_loss=0.0408, over 1431001.70 frames.], batch size: 18, lr: 1.49e-04 2022-05-29 14:20:24,407 INFO [train.py:842] (0/4) Epoch 37, 
batch 4350, loss[loss=0.1642, simple_loss=0.2563, pruned_loss=0.03607, over 7103.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2595, pruned_loss=0.04077, over 1428996.36 frames.], batch size: 28, lr: 1.49e-04 2022-05-29 14:21:03,035 INFO [train.py:842] (0/4) Epoch 37, batch 4400, loss[loss=0.1681, simple_loss=0.2646, pruned_loss=0.03578, over 7330.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2602, pruned_loss=0.04128, over 1427227.47 frames.], batch size: 22, lr: 1.49e-04 2022-05-29 14:21:41,577 INFO [train.py:842] (0/4) Epoch 37, batch 4450, loss[loss=0.2195, simple_loss=0.3124, pruned_loss=0.06325, over 7091.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2601, pruned_loss=0.04123, over 1429675.19 frames.], batch size: 28, lr: 1.49e-04 2022-05-29 14:22:20,381 INFO [train.py:842] (0/4) Epoch 37, batch 4500, loss[loss=0.2105, simple_loss=0.2952, pruned_loss=0.06286, over 7299.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2599, pruned_loss=0.04135, over 1425636.27 frames.], batch size: 24, lr: 1.49e-04 2022-05-29 14:22:58,723 INFO [train.py:842] (0/4) Epoch 37, batch 4550, loss[loss=0.209, simple_loss=0.29, pruned_loss=0.06396, over 7277.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2605, pruned_loss=0.04144, over 1424650.74 frames.], batch size: 25, lr: 1.48e-04 2022-05-29 14:23:37,637 INFO [train.py:842] (0/4) Epoch 37, batch 4600, loss[loss=0.1281, simple_loss=0.2241, pruned_loss=0.01602, over 7159.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2606, pruned_loss=0.04134, over 1424114.18 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 14:24:16,132 INFO [train.py:842] (0/4) Epoch 37, batch 4650, loss[loss=0.1881, simple_loss=0.2804, pruned_loss=0.04789, over 7137.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2594, pruned_loss=0.04066, over 1425533.75 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 14:24:54,742 INFO [train.py:842] (0/4) Epoch 37, batch 4700, loss[loss=0.1459, simple_loss=0.2403, pruned_loss=0.02571, over 6805.00 frames.], tot_loss[loss=0.1697, simple_loss=0.259, pruned_loss=0.04022, over 1423999.34 frames.], batch size: 31, lr: 1.48e-04 2022-05-29 14:25:33,359 INFO [train.py:842] (0/4) Epoch 37, batch 4750, loss[loss=0.1713, simple_loss=0.2559, pruned_loss=0.04338, over 7243.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2591, pruned_loss=0.04037, over 1426795.11 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 14:26:11,876 INFO [train.py:842] (0/4) Epoch 37, batch 4800, loss[loss=0.1696, simple_loss=0.2619, pruned_loss=0.03864, over 7191.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2597, pruned_loss=0.04037, over 1425584.30 frames.], batch size: 26, lr: 1.48e-04 2022-05-29 14:26:50,423 INFO [train.py:842] (0/4) Epoch 37, batch 4850, loss[loss=0.1573, simple_loss=0.2528, pruned_loss=0.03088, over 7101.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2593, pruned_loss=0.04069, over 1429742.08 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 14:27:29,124 INFO [train.py:842] (0/4) Epoch 37, batch 4900, loss[loss=0.1412, simple_loss=0.233, pruned_loss=0.02471, over 6826.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2603, pruned_loss=0.04092, over 1432112.05 frames.], batch size: 15, lr: 1.48e-04 2022-05-29 14:28:07,578 INFO [train.py:842] (0/4) Epoch 37, batch 4950, loss[loss=0.155, simple_loss=0.2506, pruned_loss=0.02968, over 7409.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2603, pruned_loss=0.04114, over 1434969.02 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 14:28:46,390 INFO [train.py:842] (0/4) Epoch 37, batch 5000, 
loss[loss=0.1591, simple_loss=0.2386, pruned_loss=0.03983, over 7273.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2596, pruned_loss=0.04099, over 1431610.75 frames.], batch size: 17, lr: 1.48e-04 2022-05-29 14:29:24,650 INFO [train.py:842] (0/4) Epoch 37, batch 5050, loss[loss=0.1767, simple_loss=0.2619, pruned_loss=0.04571, over 7066.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2597, pruned_loss=0.04045, over 1428136.45 frames.], batch size: 28, lr: 1.48e-04 2022-05-29 14:29:58,761 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-336000.pt 2022-05-29 14:30:06,270 INFO [train.py:842] (0/4) Epoch 37, batch 5100, loss[loss=0.1791, simple_loss=0.2711, pruned_loss=0.04359, over 7228.00 frames.], tot_loss[loss=0.1698, simple_loss=0.2591, pruned_loss=0.04028, over 1420519.92 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 14:30:44,608 INFO [train.py:842] (0/4) Epoch 37, batch 5150, loss[loss=0.1555, simple_loss=0.2478, pruned_loss=0.03154, over 7028.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2604, pruned_loss=0.04093, over 1419360.46 frames.], batch size: 28, lr: 1.48e-04 2022-05-29 14:31:23,406 INFO [train.py:842] (0/4) Epoch 37, batch 5200, loss[loss=0.1605, simple_loss=0.2523, pruned_loss=0.03434, over 7279.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2606, pruned_loss=0.0411, over 1420880.58 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 14:32:01,771 INFO [train.py:842] (0/4) Epoch 37, batch 5250, loss[loss=0.1422, simple_loss=0.2218, pruned_loss=0.03128, over 6841.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2609, pruned_loss=0.04138, over 1422890.72 frames.], batch size: 15, lr: 1.48e-04 2022-05-29 14:32:40,356 INFO [train.py:842] (0/4) Epoch 37, batch 5300, loss[loss=0.1789, simple_loss=0.2716, pruned_loss=0.04306, over 7319.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2611, pruned_loss=0.04082, over 1426528.47 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 14:33:18,728 INFO [train.py:842] (0/4) Epoch 37, batch 5350, loss[loss=0.1655, simple_loss=0.2661, pruned_loss=0.03244, over 7337.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2605, pruned_loss=0.04049, over 1426731.53 frames.], batch size: 22, lr: 1.48e-04 2022-05-29 14:33:57,369 INFO [train.py:842] (0/4) Epoch 37, batch 5400, loss[loss=0.1663, simple_loss=0.2604, pruned_loss=0.03603, over 7285.00 frames.], tot_loss[loss=0.171, simple_loss=0.2605, pruned_loss=0.0408, over 1424213.78 frames.], batch size: 25, lr: 1.48e-04 2022-05-29 14:34:35,833 INFO [train.py:842] (0/4) Epoch 37, batch 5450, loss[loss=0.161, simple_loss=0.2555, pruned_loss=0.03324, over 7372.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2607, pruned_loss=0.04093, over 1426106.18 frames.], batch size: 23, lr: 1.48e-04 2022-05-29 14:35:14,218 INFO [train.py:842] (0/4) Epoch 37, batch 5500, loss[loss=0.1976, simple_loss=0.2774, pruned_loss=0.05891, over 7255.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2613, pruned_loss=0.04143, over 1420643.35 frames.], batch size: 19, lr: 1.48e-04 2022-05-29 14:35:52,476 INFO [train.py:842] (0/4) Epoch 37, batch 5550, loss[loss=0.1745, simple_loss=0.2679, pruned_loss=0.04053, over 7212.00 frames.], tot_loss[loss=0.1732, simple_loss=0.2621, pruned_loss=0.04215, over 1417954.20 frames.], batch size: 22, lr: 1.48e-04 2022-05-29 14:36:31,228 INFO [train.py:842] (0/4) Epoch 37, batch 5600, loss[loss=0.2012, simple_loss=0.2916, pruned_loss=0.05543, over 7062.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2609, pruned_loss=0.04121, over 
1418875.51 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 14:37:09,317 INFO [train.py:842] (0/4) Epoch 37, batch 5650, loss[loss=0.1944, simple_loss=0.2911, pruned_loss=0.04888, over 7315.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2606, pruned_loss=0.04091, over 1416196.24 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 14:37:47,958 INFO [train.py:842] (0/4) Epoch 37, batch 5700, loss[loss=0.1527, simple_loss=0.2421, pruned_loss=0.03166, over 7162.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2607, pruned_loss=0.04097, over 1417917.68 frames.], batch size: 19, lr: 1.48e-04 2022-05-29 14:38:26,353 INFO [train.py:842] (0/4) Epoch 37, batch 5750, loss[loss=0.1587, simple_loss=0.2472, pruned_loss=0.03514, over 7259.00 frames.], tot_loss[loss=0.1707, simple_loss=0.26, pruned_loss=0.04069, over 1419463.59 frames.], batch size: 19, lr: 1.48e-04 2022-05-29 14:39:04,949 INFO [train.py:842] (0/4) Epoch 37, batch 5800, loss[loss=0.1593, simple_loss=0.2564, pruned_loss=0.03113, over 7414.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2612, pruned_loss=0.04078, over 1420095.06 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 14:39:43,218 INFO [train.py:842] (0/4) Epoch 37, batch 5850, loss[loss=0.1508, simple_loss=0.2433, pruned_loss=0.02916, over 7057.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2612, pruned_loss=0.04073, over 1419977.41 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 14:40:21,996 INFO [train.py:842] (0/4) Epoch 37, batch 5900, loss[loss=0.1965, simple_loss=0.294, pruned_loss=0.04951, over 7152.00 frames.], tot_loss[loss=0.1711, simple_loss=0.261, pruned_loss=0.04062, over 1421981.03 frames.], batch size: 26, lr: 1.48e-04 2022-05-29 14:41:00,175 INFO [train.py:842] (0/4) Epoch 37, batch 5950, loss[loss=0.1562, simple_loss=0.2514, pruned_loss=0.03048, over 7072.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2615, pruned_loss=0.04086, over 1421402.21 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 14:41:39,109 INFO [train.py:842] (0/4) Epoch 37, batch 6000, loss[loss=0.1629, simple_loss=0.2476, pruned_loss=0.03914, over 7406.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2604, pruned_loss=0.0405, over 1424755.09 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 14:41:39,110 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 14:41:48,827 INFO [train.py:871] (0/4) Epoch 37, validation: loss=0.1649, simple_loss=0.2613, pruned_loss=0.03429, over 868885.00 frames. 
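The per-batch and validation entries above follow a fixed arithmetic relationship: within rounding, the reported loss equals 0.5 * simple_loss + pruned_loss (for the validation entry, 0.5 * 0.2613 + 0.03429 ≈ 0.1649). A minimal sketch of that weighted combination follows; the 0.5 weight is inferred from the logged numbers themselves, not taken from any particular code path in the training script.

```python
# Minimal sketch: reconstructing the logged "loss" from its two components.
# The 0.5 weight on simple_loss is inferred from the printed values; the real
# training code may form the combination differently.

def combined_loss(simple_loss: float, pruned_loss: float,
                  simple_loss_scale: float = 0.5) -> float:
    """Return the weighted sum that matches the values printed in the log."""
    return simple_loss_scale * simple_loss + pruned_loss

# Example using the validation entry above:
# combined_loss(0.2613, 0.03429) -> 0.164945, printed as 0.1649
assert abs(combined_loss(0.2613, 0.03429) - 0.1649) < 5e-4
```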
2022-05-29 14:42:27,124 INFO [train.py:842] (0/4) Epoch 37, batch 6050, loss[loss=0.1944, simple_loss=0.2879, pruned_loss=0.0505, over 7287.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2605, pruned_loss=0.04062, over 1423112.09 frames.], batch size: 25, lr: 1.48e-04 2022-05-29 14:43:05,755 INFO [train.py:842] (0/4) Epoch 37, batch 6100, loss[loss=0.1366, simple_loss=0.2256, pruned_loss=0.02379, over 7337.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2609, pruned_loss=0.04068, over 1424614.24 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 14:43:43,958 INFO [train.py:842] (0/4) Epoch 37, batch 6150, loss[loss=0.1611, simple_loss=0.231, pruned_loss=0.04559, over 6994.00 frames.], tot_loss[loss=0.1704, simple_loss=0.26, pruned_loss=0.04038, over 1419311.24 frames.], batch size: 16, lr: 1.48e-04 2022-05-29 14:44:22,777 INFO [train.py:842] (0/4) Epoch 37, batch 6200, loss[loss=0.1638, simple_loss=0.258, pruned_loss=0.03476, over 7117.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2611, pruned_loss=0.041, over 1417592.36 frames.], batch size: 28, lr: 1.48e-04 2022-05-29 14:45:01,300 INFO [train.py:842] (0/4) Epoch 37, batch 6250, loss[loss=0.1791, simple_loss=0.2502, pruned_loss=0.05403, over 7118.00 frames.], tot_loss[loss=0.171, simple_loss=0.2604, pruned_loss=0.04078, over 1421649.57 frames.], batch size: 17, lr: 1.48e-04 2022-05-29 14:45:40,233 INFO [train.py:842] (0/4) Epoch 37, batch 6300, loss[loss=0.129, simple_loss=0.2139, pruned_loss=0.02206, over 6975.00 frames.], tot_loss[loss=0.1697, simple_loss=0.2586, pruned_loss=0.04037, over 1422787.11 frames.], batch size: 16, lr: 1.48e-04 2022-05-29 14:46:18,700 INFO [train.py:842] (0/4) Epoch 37, batch 6350, loss[loss=0.1607, simple_loss=0.2561, pruned_loss=0.03266, over 7155.00 frames.], tot_loss[loss=0.1688, simple_loss=0.2577, pruned_loss=0.03996, over 1425456.54 frames.], batch size: 26, lr: 1.48e-04 2022-05-29 14:46:57,216 INFO [train.py:842] (0/4) Epoch 37, batch 6400, loss[loss=0.172, simple_loss=0.2702, pruned_loss=0.03689, over 7322.00 frames.], tot_loss[loss=0.1686, simple_loss=0.2576, pruned_loss=0.03984, over 1423250.56 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 14:47:35,911 INFO [train.py:842] (0/4) Epoch 37, batch 6450, loss[loss=0.1748, simple_loss=0.2678, pruned_loss=0.04086, over 6377.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2589, pruned_loss=0.04067, over 1424921.40 frames.], batch size: 38, lr: 1.48e-04 2022-05-29 14:48:14,936 INFO [train.py:842] (0/4) Epoch 37, batch 6500, loss[loss=0.1555, simple_loss=0.2462, pruned_loss=0.03242, over 7140.00 frames.], tot_loss[loss=0.17, simple_loss=0.259, pruned_loss=0.0405, over 1428880.14 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 14:48:53,195 INFO [train.py:842] (0/4) Epoch 37, batch 6550, loss[loss=0.1694, simple_loss=0.2672, pruned_loss=0.03581, over 7279.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2603, pruned_loss=0.04089, over 1424052.25 frames.], batch size: 24, lr: 1.48e-04 2022-05-29 14:49:31,826 INFO [train.py:842] (0/4) Epoch 37, batch 6600, loss[loss=0.1743, simple_loss=0.265, pruned_loss=0.04182, over 7375.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2605, pruned_loss=0.04104, over 1420631.41 frames.], batch size: 23, lr: 1.48e-04 2022-05-29 14:50:10,221 INFO [train.py:842] (0/4) Epoch 37, batch 6650, loss[loss=0.1566, simple_loss=0.2479, pruned_loss=0.03261, over 7111.00 frames.], tot_loss[loss=0.1709, simple_loss=0.26, pruned_loss=0.04089, over 1418259.15 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 14:50:48,908 INFO 
[train.py:842] (0/4) Epoch 37, batch 6700, loss[loss=0.2183, simple_loss=0.3021, pruned_loss=0.06727, over 7139.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2609, pruned_loss=0.04115, over 1416864.54 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 14:51:27,564 INFO [train.py:842] (0/4) Epoch 37, batch 6750, loss[loss=0.2037, simple_loss=0.2899, pruned_loss=0.05876, over 7411.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2601, pruned_loss=0.04067, over 1420735.08 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 14:52:06,476 INFO [train.py:842] (0/4) Epoch 37, batch 6800, loss[loss=0.1775, simple_loss=0.2776, pruned_loss=0.03871, over 7226.00 frames.], tot_loss[loss=0.17, simple_loss=0.2596, pruned_loss=0.04024, over 1424622.55 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 14:52:44,823 INFO [train.py:842] (0/4) Epoch 37, batch 6850, loss[loss=0.1601, simple_loss=0.2605, pruned_loss=0.02985, over 7195.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2597, pruned_loss=0.04039, over 1419001.04 frames.], batch size: 23, lr: 1.48e-04 2022-05-29 14:53:23,544 INFO [train.py:842] (0/4) Epoch 37, batch 6900, loss[loss=0.1642, simple_loss=0.2562, pruned_loss=0.03616, over 7235.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2595, pruned_loss=0.04029, over 1421639.23 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 14:54:01,938 INFO [train.py:842] (0/4) Epoch 37, batch 6950, loss[loss=0.1507, simple_loss=0.2373, pruned_loss=0.03212, over 7367.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2606, pruned_loss=0.04103, over 1421629.03 frames.], batch size: 19, lr: 1.48e-04 2022-05-29 14:54:40,633 INFO [train.py:842] (0/4) Epoch 37, batch 7000, loss[loss=0.209, simple_loss=0.3059, pruned_loss=0.05605, over 7379.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2606, pruned_loss=0.0412, over 1419839.10 frames.], batch size: 23, lr: 1.48e-04 2022-05-29 14:55:19,147 INFO [train.py:842] (0/4) Epoch 37, batch 7050, loss[loss=0.1804, simple_loss=0.2719, pruned_loss=0.04451, over 7193.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2611, pruned_loss=0.04155, over 1419162.06 frames.], batch size: 22, lr: 1.48e-04 2022-05-29 14:55:57,882 INFO [train.py:842] (0/4) Epoch 37, batch 7100, loss[loss=0.2053, simple_loss=0.287, pruned_loss=0.06183, over 7384.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2607, pruned_loss=0.04152, over 1414498.37 frames.], batch size: 23, lr: 1.48e-04 2022-05-29 14:56:36,268 INFO [train.py:842] (0/4) Epoch 37, batch 7150, loss[loss=0.1716, simple_loss=0.2631, pruned_loss=0.04004, over 6333.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2609, pruned_loss=0.04118, over 1416777.88 frames.], batch size: 37, lr: 1.48e-04 2022-05-29 14:57:14,667 INFO [train.py:842] (0/4) Epoch 37, batch 7200, loss[loss=0.1772, simple_loss=0.2614, pruned_loss=0.04654, over 7260.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2605, pruned_loss=0.04061, over 1414705.66 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 14:58:03,030 INFO [train.py:842] (0/4) Epoch 37, batch 7250, loss[loss=0.2601, simple_loss=0.3362, pruned_loss=0.09198, over 6204.00 frames.], tot_loss[loss=0.1698, simple_loss=0.2593, pruned_loss=0.0402, over 1412551.22 frames.], batch size: 37, lr: 1.48e-04 2022-05-29 14:58:41,377 INFO [train.py:842] (0/4) Epoch 37, batch 7300, loss[loss=0.2163, simple_loss=0.3037, pruned_loss=0.06449, over 7147.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2611, pruned_loss=0.04107, over 1407693.36 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 14:59:20,035 INFO [train.py:842] (0/4) 
Epoch 37, batch 7350, loss[loss=0.184, simple_loss=0.2651, pruned_loss=0.05149, over 7422.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2608, pruned_loss=0.04096, over 1416069.63 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 14:59:58,579 INFO [train.py:842] (0/4) Epoch 37, batch 7400, loss[loss=0.2028, simple_loss=0.2794, pruned_loss=0.06315, over 7189.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2611, pruned_loss=0.04119, over 1411848.94 frames.], batch size: 22, lr: 1.48e-04 2022-05-29 15:00:37,088 INFO [train.py:842] (0/4) Epoch 37, batch 7450, loss[loss=0.2277, simple_loss=0.3077, pruned_loss=0.07383, over 7134.00 frames.], tot_loss[loss=0.1727, simple_loss=0.2616, pruned_loss=0.0419, over 1415876.11 frames.], batch size: 26, lr: 1.48e-04 2022-05-29 15:01:15,778 INFO [train.py:842] (0/4) Epoch 37, batch 7500, loss[loss=0.1719, simple_loss=0.2687, pruned_loss=0.03757, over 7409.00 frames.], tot_loss[loss=0.1735, simple_loss=0.2622, pruned_loss=0.04238, over 1417270.59 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 15:01:54,105 INFO [train.py:842] (0/4) Epoch 37, batch 7550, loss[loss=0.1587, simple_loss=0.2517, pruned_loss=0.03291, over 6790.00 frames.], tot_loss[loss=0.1728, simple_loss=0.262, pruned_loss=0.04181, over 1419760.39 frames.], batch size: 31, lr: 1.48e-04 2022-05-29 15:02:32,820 INFO [train.py:842] (0/4) Epoch 37, batch 7600, loss[loss=0.1665, simple_loss=0.2494, pruned_loss=0.0418, over 5517.00 frames.], tot_loss[loss=0.1734, simple_loss=0.2622, pruned_loss=0.04232, over 1416676.27 frames.], batch size: 52, lr: 1.48e-04 2022-05-29 15:03:20,989 INFO [train.py:842] (0/4) Epoch 37, batch 7650, loss[loss=0.1633, simple_loss=0.2673, pruned_loss=0.02969, over 7420.00 frames.], tot_loss[loss=0.1731, simple_loss=0.2625, pruned_loss=0.04185, over 1422554.85 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 15:04:09,724 INFO [train.py:842] (0/4) Epoch 37, batch 7700, loss[loss=0.1713, simple_loss=0.2582, pruned_loss=0.04227, over 7201.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2617, pruned_loss=0.04149, over 1423277.99 frames.], batch size: 23, lr: 1.48e-04 2022-05-29 15:04:48,025 INFO [train.py:842] (0/4) Epoch 37, batch 7750, loss[loss=0.1642, simple_loss=0.2334, pruned_loss=0.04754, over 7224.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2611, pruned_loss=0.04095, over 1423693.99 frames.], batch size: 16, lr: 1.48e-04 2022-05-29 15:05:26,788 INFO [train.py:842] (0/4) Epoch 37, batch 7800, loss[loss=0.2089, simple_loss=0.2997, pruned_loss=0.05904, over 7336.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2614, pruned_loss=0.04145, over 1423912.01 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 15:06:05,348 INFO [train.py:842] (0/4) Epoch 37, batch 7850, loss[loss=0.1505, simple_loss=0.2369, pruned_loss=0.03203, over 6510.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2591, pruned_loss=0.04066, over 1428556.85 frames.], batch size: 38, lr: 1.48e-04 2022-05-29 15:06:44,242 INFO [train.py:842] (0/4) Epoch 37, batch 7900, loss[loss=0.1537, simple_loss=0.2492, pruned_loss=0.02912, over 7361.00 frames.], tot_loss[loss=0.1698, simple_loss=0.259, pruned_loss=0.04024, over 1432189.61 frames.], batch size: 19, lr: 1.48e-04 2022-05-29 15:07:22,859 INFO [train.py:842] (0/4) Epoch 37, batch 7950, loss[loss=0.2342, simple_loss=0.3161, pruned_loss=0.07616, over 7309.00 frames.], tot_loss[loss=0.17, simple_loss=0.2592, pruned_loss=0.04039, over 1434449.16 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 15:08:01,508 INFO [train.py:842] (0/4) Epoch 37, batch 8000, 
loss[loss=0.1707, simple_loss=0.2418, pruned_loss=0.04974, over 6993.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2593, pruned_loss=0.04043, over 1425694.82 frames.], batch size: 16, lr: 1.48e-04 2022-05-29 15:08:39,684 INFO [train.py:842] (0/4) Epoch 37, batch 8050, loss[loss=0.1695, simple_loss=0.2652, pruned_loss=0.03692, over 7139.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2594, pruned_loss=0.0405, over 1423465.56 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 15:09:18,460 INFO [train.py:842] (0/4) Epoch 37, batch 8100, loss[loss=0.154, simple_loss=0.2594, pruned_loss=0.02431, over 7314.00 frames.], tot_loss[loss=0.1693, simple_loss=0.2587, pruned_loss=0.03995, over 1424771.37 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 15:09:56,853 INFO [train.py:842] (0/4) Epoch 37, batch 8150, loss[loss=0.1716, simple_loss=0.2573, pruned_loss=0.04297, over 7320.00 frames.], tot_loss[loss=0.1696, simple_loss=0.2589, pruned_loss=0.04018, over 1418293.32 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 15:10:35,179 INFO [train.py:842] (0/4) Epoch 37, batch 8200, loss[loss=0.1602, simple_loss=0.2586, pruned_loss=0.03088, over 7134.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2597, pruned_loss=0.04023, over 1419959.07 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 15:11:13,491 INFO [train.py:842] (0/4) Epoch 37, batch 8250, loss[loss=0.1725, simple_loss=0.2761, pruned_loss=0.03445, over 7267.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2598, pruned_loss=0.04036, over 1420196.19 frames.], batch size: 24, lr: 1.48e-04 2022-05-29 15:11:51,976 INFO [train.py:842] (0/4) Epoch 37, batch 8300, loss[loss=0.1925, simple_loss=0.2863, pruned_loss=0.0494, over 7198.00 frames.], tot_loss[loss=0.171, simple_loss=0.261, pruned_loss=0.04056, over 1417354.33 frames.], batch size: 23, lr: 1.48e-04 2022-05-29 15:12:30,164 INFO [train.py:842] (0/4) Epoch 37, batch 8350, loss[loss=0.197, simple_loss=0.2991, pruned_loss=0.04741, over 7335.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2623, pruned_loss=0.04101, over 1420445.15 frames.], batch size: 22, lr: 1.48e-04 2022-05-29 15:13:09,198 INFO [train.py:842] (0/4) Epoch 37, batch 8400, loss[loss=0.1217, simple_loss=0.2013, pruned_loss=0.02106, over 6858.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2606, pruned_loss=0.04052, over 1422300.40 frames.], batch size: 15, lr: 1.48e-04 2022-05-29 15:13:47,728 INFO [train.py:842] (0/4) Epoch 37, batch 8450, loss[loss=0.166, simple_loss=0.2565, pruned_loss=0.03776, over 7079.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2599, pruned_loss=0.04086, over 1422191.35 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 15:14:26,405 INFO [train.py:842] (0/4) Epoch 37, batch 8500, loss[loss=0.1596, simple_loss=0.2395, pruned_loss=0.03986, over 7285.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2591, pruned_loss=0.04058, over 1422131.65 frames.], batch size: 17, lr: 1.48e-04 2022-05-29 15:15:04,511 INFO [train.py:842] (0/4) Epoch 37, batch 8550, loss[loss=0.1927, simple_loss=0.2705, pruned_loss=0.0574, over 7118.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2601, pruned_loss=0.04106, over 1422136.86 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 15:15:43,227 INFO [train.py:842] (0/4) Epoch 37, batch 8600, loss[loss=0.2121, simple_loss=0.3039, pruned_loss=0.06019, over 7075.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2608, pruned_loss=0.04156, over 1417414.04 frames.], batch size: 28, lr: 1.48e-04 2022-05-29 15:16:21,378 INFO [train.py:842] (0/4) Epoch 37, batch 8650, loss[loss=0.1493, 
simple_loss=0.2406, pruned_loss=0.02895, over 7423.00 frames.], tot_loss[loss=0.171, simple_loss=0.2601, pruned_loss=0.04096, over 1416587.71 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 15:16:59,580 INFO [train.py:842] (0/4) Epoch 37, batch 8700, loss[loss=0.175, simple_loss=0.2667, pruned_loss=0.04159, over 7434.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2605, pruned_loss=0.04081, over 1410482.01 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 15:17:37,718 INFO [train.py:842] (0/4) Epoch 37, batch 8750, loss[loss=0.1665, simple_loss=0.2475, pruned_loss=0.04273, over 7146.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2617, pruned_loss=0.04133, over 1410267.40 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 15:18:16,319 INFO [train.py:842] (0/4) Epoch 37, batch 8800, loss[loss=0.1919, simple_loss=0.2785, pruned_loss=0.05271, over 7154.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2613, pruned_loss=0.041, over 1411689.36 frames.], batch size: 20, lr: 1.48e-04 2022-05-29 15:18:54,788 INFO [train.py:842] (0/4) Epoch 37, batch 8850, loss[loss=0.159, simple_loss=0.2453, pruned_loss=0.0364, over 7288.00 frames.], tot_loss[loss=0.1716, simple_loss=0.261, pruned_loss=0.04114, over 1411505.81 frames.], batch size: 18, lr: 1.48e-04 2022-05-29 15:19:33,458 INFO [train.py:842] (0/4) Epoch 37, batch 8900, loss[loss=0.1543, simple_loss=0.2587, pruned_loss=0.02495, over 6464.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2612, pruned_loss=0.0411, over 1412839.71 frames.], batch size: 38, lr: 1.48e-04 2022-05-29 15:20:11,861 INFO [train.py:842] (0/4) Epoch 37, batch 8950, loss[loss=0.1385, simple_loss=0.2257, pruned_loss=0.02563, over 7129.00 frames.], tot_loss[loss=0.1714, simple_loss=0.261, pruned_loss=0.04092, over 1411398.21 frames.], batch size: 17, lr: 1.48e-04 2022-05-29 15:20:50,450 INFO [train.py:842] (0/4) Epoch 37, batch 9000, loss[loss=0.1876, simple_loss=0.2752, pruned_loss=0.05004, over 7121.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2601, pruned_loss=0.04066, over 1407798.76 frames.], batch size: 21, lr: 1.48e-04 2022-05-29 15:20:50,451 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 15:20:59,818 INFO [train.py:871] (0/4) Epoch 37, validation: loss=0.1648, simple_loss=0.2612, pruned_loss=0.03416, over 868885.00 frames. 
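Each entry prints both the current batch's loss over that batch's frames and a tot_loss summarized "over N frames" with N around 1.4 million, which is why tot_loss moves far more smoothly than the per-batch values. A hedged sketch of a frame-weighted running average that would produce this kind of summary is shown below; the class and method names are illustrative only, and the exact windowing or decay used by the training script is not visible in the log.

```python
# Illustrative frame-weighted running average, similar in spirit to the
# "tot_loss ... over N frames" summaries above. Names are hypothetical.

class RunningLoss:
    def __init__(self) -> None:
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> None:
        # Weight each batch by its number of acoustic frames.
        self.loss_sum += batch_loss * batch_frames
        self.frames += batch_frames

    @property
    def average(self) -> float:
        return self.loss_sum / max(self.frames, 1.0)

tracker = RunningLoss()
tracker.update(0.1493, 7423.0)   # values taken from entries above
tracker.update(0.1750, 7434.0)
print(f"tot_loss={tracker.average:.4f} over {tracker.frames:.2f} frames")
```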
2022-05-29 15:21:38,648 INFO [train.py:842] (0/4) Epoch 37, batch 9050, loss[loss=0.1564, simple_loss=0.2376, pruned_loss=0.03764, over 7137.00 frames.], tot_loss[loss=0.1692, simple_loss=0.2581, pruned_loss=0.0402, over 1402619.40 frames.], batch size: 17, lr: 1.48e-04 2022-05-29 15:22:16,606 INFO [train.py:842] (0/4) Epoch 37, batch 9100, loss[loss=0.1799, simple_loss=0.2748, pruned_loss=0.04252, over 6479.00 frames.], tot_loss[loss=0.171, simple_loss=0.2596, pruned_loss=0.04119, over 1369387.72 frames.], batch size: 37, lr: 1.47e-04 2022-05-29 15:22:53,403 INFO [train.py:842] (0/4) Epoch 37, batch 9150, loss[loss=0.1722, simple_loss=0.2557, pruned_loss=0.04433, over 5077.00 frames.], tot_loss[loss=0.1744, simple_loss=0.263, pruned_loss=0.0429, over 1318874.30 frames.], batch size: 52, lr: 1.47e-04 2022-05-29 15:23:24,440 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-37.pt 2022-05-29 15:23:42,244 INFO [train.py:842] (0/4) Epoch 38, batch 0, loss[loss=0.1606, simple_loss=0.2489, pruned_loss=0.03612, over 7360.00 frames.], tot_loss[loss=0.1606, simple_loss=0.2489, pruned_loss=0.03612, over 7360.00 frames.], batch size: 19, lr: 1.46e-04 2022-05-29 15:24:21,097 INFO [train.py:842] (0/4) Epoch 38, batch 50, loss[loss=0.1762, simple_loss=0.2655, pruned_loss=0.04341, over 6470.00 frames.], tot_loss[loss=0.1657, simple_loss=0.2547, pruned_loss=0.03829, over 322884.22 frames.], batch size: 38, lr: 1.46e-04 2022-05-29 15:24:59,384 INFO [train.py:842] (0/4) Epoch 38, batch 100, loss[loss=0.1913, simple_loss=0.2861, pruned_loss=0.04823, over 7260.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2616, pruned_loss=0.04152, over 560229.77 frames.], batch size: 19, lr: 1.46e-04 2022-05-29 15:25:37,979 INFO [train.py:842] (0/4) Epoch 38, batch 150, loss[loss=0.1569, simple_loss=0.258, pruned_loss=0.02786, over 7382.00 frames.], tot_loss[loss=0.1729, simple_loss=0.2634, pruned_loss=0.04121, over 748099.70 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 15:26:16,305 INFO [train.py:842] (0/4) Epoch 38, batch 200, loss[loss=0.1856, simple_loss=0.2924, pruned_loss=0.03947, over 7408.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2617, pruned_loss=0.04049, over 896704.31 frames.], batch size: 21, lr: 1.45e-04 2022-05-29 15:26:54,834 INFO [train.py:842] (0/4) Epoch 38, batch 250, loss[loss=0.15, simple_loss=0.2337, pruned_loss=0.03315, over 7367.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2608, pruned_loss=0.04052, over 1015204.42 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 15:27:33,114 INFO [train.py:842] (0/4) Epoch 38, batch 300, loss[loss=0.1614, simple_loss=0.2676, pruned_loss=0.02759, over 7229.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2604, pruned_loss=0.04036, over 1105426.67 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 15:28:11,829 INFO [train.py:842] (0/4) Epoch 38, batch 350, loss[loss=0.1798, simple_loss=0.2617, pruned_loss=0.04895, over 7251.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2594, pruned_loss=0.03967, over 1172789.95 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 15:28:50,339 INFO [train.py:842] (0/4) Epoch 38, batch 400, loss[loss=0.1566, simple_loss=0.2366, pruned_loss=0.03831, over 7283.00 frames.], tot_loss[loss=0.1688, simple_loss=0.2588, pruned_loss=0.03942, over 1232348.30 frames.], batch size: 17, lr: 1.45e-04 2022-05-29 15:29:28,795 INFO [train.py:842] (0/4) Epoch 38, batch 450, loss[loss=0.177, simple_loss=0.2714, pruned_loss=0.04131, over 7438.00 frames.], tot_loss[loss=0.1693, 
simple_loss=0.2587, pruned_loss=0.03995, over 1276188.74 frames.], batch size: 22, lr: 1.45e-04 2022-05-29 15:30:07,433 INFO [train.py:842] (0/4) Epoch 38, batch 500, loss[loss=0.1999, simple_loss=0.2749, pruned_loss=0.06241, over 7295.00 frames.], tot_loss[loss=0.1688, simple_loss=0.2579, pruned_loss=0.03985, over 1312443.41 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:30:46,038 INFO [train.py:842] (0/4) Epoch 38, batch 550, loss[loss=0.1664, simple_loss=0.2628, pruned_loss=0.03498, over 7322.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2596, pruned_loss=0.04056, over 1336957.46 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 15:31:24,321 INFO [train.py:842] (0/4) Epoch 38, batch 600, loss[loss=0.1571, simple_loss=0.2574, pruned_loss=0.02843, over 7379.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2598, pruned_loss=0.04071, over 1358285.76 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 15:32:03,010 INFO [train.py:842] (0/4) Epoch 38, batch 650, loss[loss=0.1662, simple_loss=0.2588, pruned_loss=0.03676, over 7354.00 frames.], tot_loss[loss=0.171, simple_loss=0.2605, pruned_loss=0.04076, over 1374325.60 frames.], batch size: 22, lr: 1.45e-04 2022-05-29 15:32:41,270 INFO [train.py:842] (0/4) Epoch 38, batch 700, loss[loss=0.1534, simple_loss=0.2434, pruned_loss=0.03169, over 7147.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2605, pruned_loss=0.04056, over 1386923.33 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:33:19,955 INFO [train.py:842] (0/4) Epoch 38, batch 750, loss[loss=0.1569, simple_loss=0.2572, pruned_loss=0.02827, over 7378.00 frames.], tot_loss[loss=0.1693, simple_loss=0.2593, pruned_loss=0.03963, over 1401428.75 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 15:33:58,068 INFO [train.py:842] (0/4) Epoch 38, batch 800, loss[loss=0.139, simple_loss=0.2238, pruned_loss=0.02706, over 7402.00 frames.], tot_loss[loss=0.169, simple_loss=0.2595, pruned_loss=0.03926, over 1409146.47 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:34:36,746 INFO [train.py:842] (0/4) Epoch 38, batch 850, loss[loss=0.152, simple_loss=0.2385, pruned_loss=0.03275, over 7372.00 frames.], tot_loss[loss=0.1691, simple_loss=0.2596, pruned_loss=0.03934, over 1411456.22 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 15:35:15,000 INFO [train.py:842] (0/4) Epoch 38, batch 900, loss[loss=0.1775, simple_loss=0.2666, pruned_loss=0.04424, over 7288.00 frames.], tot_loss[loss=0.1688, simple_loss=0.2596, pruned_loss=0.039, over 1413269.95 frames.], batch size: 24, lr: 1.45e-04 2022-05-29 15:35:53,549 INFO [train.py:842] (0/4) Epoch 38, batch 950, loss[loss=0.1868, simple_loss=0.2712, pruned_loss=0.05123, over 7248.00 frames.], tot_loss[loss=0.1687, simple_loss=0.2592, pruned_loss=0.03907, over 1418871.80 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 15:36:31,940 INFO [train.py:842] (0/4) Epoch 38, batch 1000, loss[loss=0.2029, simple_loss=0.284, pruned_loss=0.06095, over 7204.00 frames.], tot_loss[loss=0.1687, simple_loss=0.2588, pruned_loss=0.0393, over 1422012.23 frames.], batch size: 22, lr: 1.45e-04 2022-05-29 15:37:10,452 INFO [train.py:842] (0/4) Epoch 38, batch 1050, loss[loss=0.1674, simple_loss=0.2579, pruned_loss=0.03841, over 7322.00 frames.], tot_loss[loss=0.1696, simple_loss=0.2598, pruned_loss=0.03974, over 1422076.90 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 15:37:48,973 INFO [train.py:842] (0/4) Epoch 38, batch 1100, loss[loss=0.1409, simple_loss=0.2136, pruned_loss=0.03406, over 7232.00 frames.], tot_loss[loss=0.1708, simple_loss=0.261, 
pruned_loss=0.04026, over 1425014.43 frames.], batch size: 16, lr: 1.45e-04 2022-05-29 15:38:27,633 INFO [train.py:842] (0/4) Epoch 38, batch 1150, loss[loss=0.1447, simple_loss=0.2369, pruned_loss=0.02627, over 7286.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2614, pruned_loss=0.04011, over 1422410.35 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:39:05,886 INFO [train.py:842] (0/4) Epoch 38, batch 1200, loss[loss=0.166, simple_loss=0.2566, pruned_loss=0.03771, over 7154.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2629, pruned_loss=0.04089, over 1424338.06 frames.], batch size: 26, lr: 1.45e-04 2022-05-29 15:39:44,559 INFO [train.py:842] (0/4) Epoch 38, batch 1250, loss[loss=0.1681, simple_loss=0.2703, pruned_loss=0.03298, over 6500.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2621, pruned_loss=0.04036, over 1427479.27 frames.], batch size: 38, lr: 1.45e-04 2022-05-29 15:40:22,863 INFO [train.py:842] (0/4) Epoch 38, batch 1300, loss[loss=0.1209, simple_loss=0.2028, pruned_loss=0.01954, over 7275.00 frames.], tot_loss[loss=0.1717, simple_loss=0.262, pruned_loss=0.04073, over 1426458.14 frames.], batch size: 17, lr: 1.45e-04 2022-05-29 15:41:01,502 INFO [train.py:842] (0/4) Epoch 38, batch 1350, loss[loss=0.1869, simple_loss=0.2839, pruned_loss=0.04494, over 7115.00 frames.], tot_loss[loss=0.1721, simple_loss=0.2623, pruned_loss=0.04095, over 1420065.99 frames.], batch size: 21, lr: 1.45e-04 2022-05-29 15:41:40,068 INFO [train.py:842] (0/4) Epoch 38, batch 1400, loss[loss=0.1791, simple_loss=0.2677, pruned_loss=0.04522, over 7303.00 frames.], tot_loss[loss=0.1713, simple_loss=0.261, pruned_loss=0.04076, over 1420950.62 frames.], batch size: 24, lr: 1.45e-04 2022-05-29 15:42:18,694 INFO [train.py:842] (0/4) Epoch 38, batch 1450, loss[loss=0.2048, simple_loss=0.2892, pruned_loss=0.06021, over 7211.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2607, pruned_loss=0.04092, over 1425098.94 frames.], batch size: 22, lr: 1.45e-04 2022-05-29 15:42:57,037 INFO [train.py:842] (0/4) Epoch 38, batch 1500, loss[loss=0.1759, simple_loss=0.2564, pruned_loss=0.04774, over 7277.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2592, pruned_loss=0.04026, over 1425514.60 frames.], batch size: 25, lr: 1.45e-04 2022-05-29 15:43:35,783 INFO [train.py:842] (0/4) Epoch 38, batch 1550, loss[loss=0.152, simple_loss=0.2424, pruned_loss=0.03081, over 7239.00 frames.], tot_loss[loss=0.17, simple_loss=0.2596, pruned_loss=0.04019, over 1421920.80 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 15:44:14,284 INFO [train.py:842] (0/4) Epoch 38, batch 1600, loss[loss=0.1465, simple_loss=0.2377, pruned_loss=0.02765, over 7262.00 frames.], tot_loss[loss=0.1698, simple_loss=0.2597, pruned_loss=0.04, over 1424861.11 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 15:44:53,031 INFO [train.py:842] (0/4) Epoch 38, batch 1650, loss[loss=0.1609, simple_loss=0.2535, pruned_loss=0.03413, over 7075.00 frames.], tot_loss[loss=0.17, simple_loss=0.2596, pruned_loss=0.04023, over 1424152.50 frames.], batch size: 28, lr: 1.45e-04 2022-05-29 15:45:31,678 INFO [train.py:842] (0/4) Epoch 38, batch 1700, loss[loss=0.152, simple_loss=0.2359, pruned_loss=0.03409, over 7157.00 frames.], tot_loss[loss=0.1694, simple_loss=0.259, pruned_loss=0.03994, over 1423375.09 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:46:10,389 INFO [train.py:842] (0/4) Epoch 38, batch 1750, loss[loss=0.2669, simple_loss=0.3464, pruned_loss=0.09375, over 5097.00 frames.], tot_loss[loss=0.1687, simple_loss=0.2584, pruned_loss=0.03946, over 
1422553.69 frames.], batch size: 52, lr: 1.45e-04 2022-05-29 15:46:48,915 INFO [train.py:842] (0/4) Epoch 38, batch 1800, loss[loss=0.1354, simple_loss=0.2201, pruned_loss=0.02529, over 7324.00 frames.], tot_loss[loss=0.1692, simple_loss=0.2587, pruned_loss=0.03988, over 1420293.39 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 15:47:27,688 INFO [train.py:842] (0/4) Epoch 38, batch 1850, loss[loss=0.1645, simple_loss=0.2475, pruned_loss=0.04073, over 7264.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2589, pruned_loss=0.03994, over 1422371.72 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:48:05,993 INFO [train.py:842] (0/4) Epoch 38, batch 1900, loss[loss=0.1732, simple_loss=0.2528, pruned_loss=0.04681, over 6803.00 frames.], tot_loss[loss=0.1694, simple_loss=0.259, pruned_loss=0.03988, over 1425045.60 frames.], batch size: 15, lr: 1.45e-04 2022-05-29 15:48:44,638 INFO [train.py:842] (0/4) Epoch 38, batch 1950, loss[loss=0.1498, simple_loss=0.2372, pruned_loss=0.03122, over 7266.00 frames.], tot_loss[loss=0.171, simple_loss=0.2603, pruned_loss=0.04087, over 1427356.65 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 15:49:23,244 INFO [train.py:842] (0/4) Epoch 38, batch 2000, loss[loss=0.1669, simple_loss=0.2484, pruned_loss=0.04273, over 7398.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2606, pruned_loss=0.0409, over 1426029.70 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:50:01,697 INFO [train.py:842] (0/4) Epoch 38, batch 2050, loss[loss=0.149, simple_loss=0.2447, pruned_loss=0.0266, over 7264.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2616, pruned_loss=0.0414, over 1422970.62 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 15:50:39,901 INFO [train.py:842] (0/4) Epoch 38, batch 2100, loss[loss=0.1768, simple_loss=0.2706, pruned_loss=0.04152, over 7191.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2622, pruned_loss=0.0414, over 1417092.56 frames.], batch size: 26, lr: 1.45e-04 2022-05-29 15:51:18,604 INFO [train.py:842] (0/4) Epoch 38, batch 2150, loss[loss=0.146, simple_loss=0.2405, pruned_loss=0.02576, over 7066.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2622, pruned_loss=0.04123, over 1417758.91 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:51:56,336 INFO [train.py:842] (0/4) Epoch 38, batch 2200, loss[loss=0.1496, simple_loss=0.2319, pruned_loss=0.03365, over 7072.00 frames.], tot_loss[loss=0.1733, simple_loss=0.2633, pruned_loss=0.04162, over 1419815.48 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:52:34,385 INFO [train.py:842] (0/4) Epoch 38, batch 2250, loss[loss=0.1884, simple_loss=0.284, pruned_loss=0.04637, over 6372.00 frames.], tot_loss[loss=0.1738, simple_loss=0.2636, pruned_loss=0.04197, over 1418457.05 frames.], batch size: 37, lr: 1.45e-04 2022-05-29 15:53:12,413 INFO [train.py:842] (0/4) Epoch 38, batch 2300, loss[loss=0.136, simple_loss=0.2269, pruned_loss=0.0226, over 7061.00 frames.], tot_loss[loss=0.173, simple_loss=0.263, pruned_loss=0.04153, over 1422793.45 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:53:50,465 INFO [train.py:842] (0/4) Epoch 38, batch 2350, loss[loss=0.1593, simple_loss=0.2523, pruned_loss=0.03312, over 7326.00 frames.], tot_loss[loss=0.1728, simple_loss=0.2628, pruned_loss=0.04146, over 1419789.81 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 15:54:28,591 INFO [train.py:842] (0/4) Epoch 38, batch 2400, loss[loss=0.1751, simple_loss=0.2568, pruned_loss=0.04665, over 7415.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2614, pruned_loss=0.04103, over 1425100.85 frames.], batch 
size: 18, lr: 1.45e-04 2022-05-29 15:55:06,812 INFO [train.py:842] (0/4) Epoch 38, batch 2450, loss[loss=0.2052, simple_loss=0.2857, pruned_loss=0.06239, over 7325.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2613, pruned_loss=0.0412, over 1426617.02 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 15:55:44,797 INFO [train.py:842] (0/4) Epoch 38, batch 2500, loss[loss=0.1708, simple_loss=0.2593, pruned_loss=0.04116, over 7158.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2612, pruned_loss=0.04102, over 1426267.03 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:56:22,956 INFO [train.py:842] (0/4) Epoch 38, batch 2550, loss[loss=0.1447, simple_loss=0.233, pruned_loss=0.02821, over 7168.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2615, pruned_loss=0.04104, over 1423545.42 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 15:57:00,957 INFO [train.py:842] (0/4) Epoch 38, batch 2600, loss[loss=0.1664, simple_loss=0.2641, pruned_loss=0.03438, over 7430.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2612, pruned_loss=0.04099, over 1423752.73 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 15:57:39,134 INFO [train.py:842] (0/4) Epoch 38, batch 2650, loss[loss=0.1519, simple_loss=0.2501, pruned_loss=0.0269, over 7204.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2616, pruned_loss=0.04113, over 1424973.63 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 15:58:17,262 INFO [train.py:842] (0/4) Epoch 38, batch 2700, loss[loss=0.1793, simple_loss=0.2617, pruned_loss=0.04849, over 7247.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2607, pruned_loss=0.04094, over 1423836.84 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 15:58:55,518 INFO [train.py:842] (0/4) Epoch 38, batch 2750, loss[loss=0.1845, simple_loss=0.2798, pruned_loss=0.04459, over 7360.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2611, pruned_loss=0.04093, over 1425170.75 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 15:59:33,528 INFO [train.py:842] (0/4) Epoch 38, batch 2800, loss[loss=0.1531, simple_loss=0.2513, pruned_loss=0.02743, over 7309.00 frames.], tot_loss[loss=0.1717, simple_loss=0.261, pruned_loss=0.04122, over 1423707.01 frames.], batch size: 24, lr: 1.45e-04 2022-05-29 16:00:11,768 INFO [train.py:842] (0/4) Epoch 38, batch 2850, loss[loss=0.1681, simple_loss=0.2608, pruned_loss=0.03773, over 7408.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2617, pruned_loss=0.04135, over 1423592.95 frames.], batch size: 21, lr: 1.45e-04 2022-05-29 16:00:49,751 INFO [train.py:842] (0/4) Epoch 38, batch 2900, loss[loss=0.145, simple_loss=0.2294, pruned_loss=0.03031, over 7111.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2603, pruned_loss=0.04071, over 1425075.14 frames.], batch size: 17, lr: 1.45e-04 2022-05-29 16:01:28,077 INFO [train.py:842] (0/4) Epoch 38, batch 2950, loss[loss=0.1562, simple_loss=0.2431, pruned_loss=0.03465, over 7405.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2603, pruned_loss=0.04061, over 1429844.19 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 16:02:05,953 INFO [train.py:842] (0/4) Epoch 38, batch 3000, loss[loss=0.2371, simple_loss=0.3209, pruned_loss=0.07662, over 7214.00 frames.], tot_loss[loss=0.171, simple_loss=0.2608, pruned_loss=0.04065, over 1429089.75 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 16:02:05,955 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 16:02:15,341 INFO [train.py:871] (0/4) Epoch 38, validation: loss=0.1634, simple_loss=0.2602, pruned_loss=0.03326, over 868885.00 frames. 
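The bookkeeping visible in the log runs on fixed batch intervals: validation passes appear at batches 3000, 6000 and 9000 of an epoch, checkpoint-336000.pt is followed by checkpoint-344000.pt exactly 8000 global batches later, and an epoch-NN.pt file is written when an epoch finishes. A minimal sketch of that interval-driven loop follows; the function names are placeholders standing in for whatever the training script actually calls, and the intervals are inferred from the log rather than read from a config.

```python
# Sketch of interval-driven validation and checkpointing, consistent with the
# cadence visible in the log (validation every 3000 batches within an epoch,
# a checkpoint-<global_batch>.pt every 8000 global batches, and epoch-<n>.pt
# at the end of each epoch). Function names are placeholders, not icefall APIs.

VALID_INTERVAL = 3000   # inferred from validation at batches 3000/6000/9000
SAVE_EVERY_N = 8000     # inferred from checkpoint-336000.pt -> checkpoint-344000.pt

def run_epoch(epoch: int, batches, global_batch: int,
              train_step, validate, save_checkpoint) -> int:
    for batch_idx, batch in enumerate(batches):
        train_step(batch)
        global_batch += 1
        if batch_idx > 0 and batch_idx % VALID_INTERVAL == 0:
            validate()                                  # "Computing validation loss"
        if global_batch % SAVE_EVERY_N == 0:
            save_checkpoint(f"checkpoint-{global_batch}.pt")
    save_checkpoint(f"epoch-{epoch}.pt")                # end-of-epoch checkpoint
    return global_batch
```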
2022-05-29 16:02:53,592 INFO [train.py:842] (0/4) Epoch 38, batch 3050, loss[loss=0.1749, simple_loss=0.2562, pruned_loss=0.04676, over 7173.00 frames.], tot_loss[loss=0.1711, simple_loss=0.261, pruned_loss=0.04054, over 1429285.27 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 16:03:31,421 INFO [train.py:842] (0/4) Epoch 38, batch 3100, loss[loss=0.1884, simple_loss=0.28, pruned_loss=0.04836, over 7207.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2607, pruned_loss=0.04039, over 1422577.82 frames.], batch size: 22, lr: 1.45e-04 2022-05-29 16:04:09,670 INFO [train.py:842] (0/4) Epoch 38, batch 3150, loss[loss=0.173, simple_loss=0.2687, pruned_loss=0.03865, over 7394.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2603, pruned_loss=0.04061, over 1420924.16 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 16:04:47,694 INFO [train.py:842] (0/4) Epoch 38, batch 3200, loss[loss=0.1757, simple_loss=0.2756, pruned_loss=0.03788, over 7118.00 frames.], tot_loss[loss=0.1693, simple_loss=0.2591, pruned_loss=0.03971, over 1425257.77 frames.], batch size: 21, lr: 1.45e-04 2022-05-29 16:05:26,135 INFO [train.py:842] (0/4) Epoch 38, batch 3250, loss[loss=0.1516, simple_loss=0.2417, pruned_loss=0.03078, over 7267.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2599, pruned_loss=0.04029, over 1425841.79 frames.], batch size: 18, lr: 1.45e-04 2022-05-29 16:06:04,012 INFO [train.py:842] (0/4) Epoch 38, batch 3300, loss[loss=0.18, simple_loss=0.2727, pruned_loss=0.04367, over 7234.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2597, pruned_loss=0.04001, over 1424739.72 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 16:06:41,992 INFO [train.py:842] (0/4) Epoch 38, batch 3350, loss[loss=0.2059, simple_loss=0.298, pruned_loss=0.05691, over 7219.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2609, pruned_loss=0.04051, over 1425975.60 frames.], batch size: 22, lr: 1.45e-04 2022-05-29 16:07:19,985 INFO [train.py:842] (0/4) Epoch 38, batch 3400, loss[loss=0.1813, simple_loss=0.2713, pruned_loss=0.0456, over 6805.00 frames.], tot_loss[loss=0.1714, simple_loss=0.2613, pruned_loss=0.04078, over 1429686.07 frames.], batch size: 31, lr: 1.45e-04 2022-05-29 16:07:58,232 INFO [train.py:842] (0/4) Epoch 38, batch 3450, loss[loss=0.1526, simple_loss=0.2402, pruned_loss=0.03254, over 7441.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2611, pruned_loss=0.04065, over 1431272.02 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 16:08:36,207 INFO [train.py:842] (0/4) Epoch 38, batch 3500, loss[loss=0.1505, simple_loss=0.242, pruned_loss=0.02955, over 7236.00 frames.], tot_loss[loss=0.171, simple_loss=0.2606, pruned_loss=0.04066, over 1430397.43 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 16:09:14,386 INFO [train.py:842] (0/4) Epoch 38, batch 3550, loss[loss=0.17, simple_loss=0.2677, pruned_loss=0.03613, over 7146.00 frames.], tot_loss[loss=0.1721, simple_loss=0.262, pruned_loss=0.04107, over 1431350.14 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 16:09:52,125 INFO [train.py:842] (0/4) Epoch 38, batch 3600, loss[loss=0.1537, simple_loss=0.2453, pruned_loss=0.03105, over 6854.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2628, pruned_loss=0.04118, over 1429195.39 frames.], batch size: 31, lr: 1.45e-04 2022-05-29 16:10:30,404 INFO [train.py:842] (0/4) Epoch 38, batch 3650, loss[loss=0.1794, simple_loss=0.2755, pruned_loss=0.04166, over 7078.00 frames.], tot_loss[loss=0.1722, simple_loss=0.2626, pruned_loss=0.04094, over 1431493.12 frames.], batch size: 28, lr: 1.45e-04 2022-05-29 16:11:08,252 INFO 
[train.py:842] (0/4) Epoch 38, batch 3700, loss[loss=0.1516, simple_loss=0.2432, pruned_loss=0.03007, over 7288.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2612, pruned_loss=0.04053, over 1423433.35 frames.], batch size: 24, lr: 1.45e-04 2022-05-29 16:11:46,334 INFO [train.py:842] (0/4) Epoch 38, batch 3750, loss[loss=0.143, simple_loss=0.2352, pruned_loss=0.02536, over 7159.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2616, pruned_loss=0.04099, over 1418329.36 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 16:12:24,508 INFO [train.py:842] (0/4) Epoch 38, batch 3800, loss[loss=0.1798, simple_loss=0.2802, pruned_loss=0.03969, over 7382.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2608, pruned_loss=0.04071, over 1418360.21 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 16:13:02,700 INFO [train.py:842] (0/4) Epoch 38, batch 3850, loss[loss=0.1717, simple_loss=0.2602, pruned_loss=0.0416, over 7116.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2607, pruned_loss=0.04073, over 1420371.70 frames.], batch size: 21, lr: 1.45e-04 2022-05-29 16:13:40,941 INFO [train.py:842] (0/4) Epoch 38, batch 3900, loss[loss=0.133, simple_loss=0.2172, pruned_loss=0.02441, over 7334.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2608, pruned_loss=0.0415, over 1422373.32 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 16:13:43,583 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-344000.pt 2022-05-29 16:14:21,828 INFO [train.py:842] (0/4) Epoch 38, batch 3950, loss[loss=0.1483, simple_loss=0.2375, pruned_loss=0.02956, over 6756.00 frames.], tot_loss[loss=0.171, simple_loss=0.2601, pruned_loss=0.04093, over 1419316.98 frames.], batch size: 31, lr: 1.45e-04 2022-05-29 16:14:59,657 INFO [train.py:842] (0/4) Epoch 38, batch 4000, loss[loss=0.1738, simple_loss=0.253, pruned_loss=0.04724, over 7116.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2607, pruned_loss=0.04139, over 1419962.55 frames.], batch size: 17, lr: 1.45e-04 2022-05-29 16:15:37,907 INFO [train.py:842] (0/4) Epoch 38, batch 4050, loss[loss=0.1415, simple_loss=0.2264, pruned_loss=0.02829, over 7008.00 frames.], tot_loss[loss=0.171, simple_loss=0.2604, pruned_loss=0.04082, over 1417940.69 frames.], batch size: 16, lr: 1.45e-04 2022-05-29 16:16:15,906 INFO [train.py:842] (0/4) Epoch 38, batch 4100, loss[loss=0.1579, simple_loss=0.25, pruned_loss=0.03287, over 7149.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2602, pruned_loss=0.04046, over 1417531.68 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 16:16:54,256 INFO [train.py:842] (0/4) Epoch 38, batch 4150, loss[loss=0.1516, simple_loss=0.2339, pruned_loss=0.03464, over 7259.00 frames.], tot_loss[loss=0.1704, simple_loss=0.2599, pruned_loss=0.04045, over 1420339.82 frames.], batch size: 16, lr: 1.45e-04 2022-05-29 16:17:32,184 INFO [train.py:842] (0/4) Epoch 38, batch 4200, loss[loss=0.1836, simple_loss=0.262, pruned_loss=0.0526, over 7365.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2595, pruned_loss=0.04035, over 1419075.14 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 16:18:10,305 INFO [train.py:842] (0/4) Epoch 38, batch 4250, loss[loss=0.1734, simple_loss=0.2664, pruned_loss=0.04014, over 7289.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2607, pruned_loss=0.04071, over 1419897.59 frames.], batch size: 24, lr: 1.45e-04 2022-05-29 16:18:57,692 INFO [train.py:842] (0/4) Epoch 38, batch 4300, loss[loss=0.1497, simple_loss=0.2428, pruned_loss=0.02834, over 7327.00 frames.], tot_loss[loss=0.1707, 
simple_loss=0.2603, pruned_loss=0.04062, over 1423623.00 frames.], batch size: 22, lr: 1.45e-04 2022-05-29 16:19:35,760 INFO [train.py:842] (0/4) Epoch 38, batch 4350, loss[loss=0.1503, simple_loss=0.2308, pruned_loss=0.0349, over 7308.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2598, pruned_loss=0.04067, over 1423211.16 frames.], batch size: 17, lr: 1.45e-04 2022-05-29 16:20:13,802 INFO [train.py:842] (0/4) Epoch 38, batch 4400, loss[loss=0.1547, simple_loss=0.2528, pruned_loss=0.02831, over 7071.00 frames.], tot_loss[loss=0.1685, simple_loss=0.2577, pruned_loss=0.03968, over 1422617.45 frames.], batch size: 28, lr: 1.45e-04 2022-05-29 16:20:52,133 INFO [train.py:842] (0/4) Epoch 38, batch 4450, loss[loss=0.1904, simple_loss=0.2865, pruned_loss=0.04718, over 7327.00 frames.], tot_loss[loss=0.1689, simple_loss=0.2581, pruned_loss=0.03987, over 1424189.59 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 16:21:30,254 INFO [train.py:842] (0/4) Epoch 38, batch 4500, loss[loss=0.1699, simple_loss=0.271, pruned_loss=0.03444, over 7445.00 frames.], tot_loss[loss=0.1691, simple_loss=0.2583, pruned_loss=0.03996, over 1427772.36 frames.], batch size: 22, lr: 1.45e-04 2022-05-29 16:22:08,242 INFO [train.py:842] (0/4) Epoch 38, batch 4550, loss[loss=0.1942, simple_loss=0.2795, pruned_loss=0.0545, over 7257.00 frames.], tot_loss[loss=0.1693, simple_loss=0.259, pruned_loss=0.03979, over 1429587.31 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 16:22:46,091 INFO [train.py:842] (0/4) Epoch 38, batch 4600, loss[loss=0.171, simple_loss=0.2527, pruned_loss=0.04471, over 7384.00 frames.], tot_loss[loss=0.1691, simple_loss=0.2588, pruned_loss=0.03976, over 1427800.10 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 16:23:24,372 INFO [train.py:842] (0/4) Epoch 38, batch 4650, loss[loss=0.1641, simple_loss=0.2589, pruned_loss=0.03461, over 7383.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2596, pruned_loss=0.04025, over 1428351.42 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 16:24:02,468 INFO [train.py:842] (0/4) Epoch 38, batch 4700, loss[loss=0.159, simple_loss=0.2512, pruned_loss=0.03337, over 7208.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2586, pruned_loss=0.0402, over 1424855.01 frames.], batch size: 23, lr: 1.45e-04 2022-05-29 16:24:40,729 INFO [train.py:842] (0/4) Epoch 38, batch 4750, loss[loss=0.1506, simple_loss=0.2395, pruned_loss=0.03088, over 7172.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2584, pruned_loss=0.04029, over 1422340.72 frames.], batch size: 19, lr: 1.45e-04 2022-05-29 16:25:18,455 INFO [train.py:842] (0/4) Epoch 38, batch 4800, loss[loss=0.157, simple_loss=0.2447, pruned_loss=0.0346, over 7144.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2592, pruned_loss=0.04048, over 1423148.92 frames.], batch size: 20, lr: 1.45e-04 2022-05-29 16:25:56,652 INFO [train.py:842] (0/4) Epoch 38, batch 4850, loss[loss=0.1963, simple_loss=0.2963, pruned_loss=0.04818, over 7030.00 frames.], tot_loss[loss=0.1701, simple_loss=0.259, pruned_loss=0.04064, over 1421181.39 frames.], batch size: 28, lr: 1.44e-04 2022-05-29 16:26:34,235 INFO [train.py:842] (0/4) Epoch 38, batch 4900, loss[loss=0.1693, simple_loss=0.2723, pruned_loss=0.03313, over 7224.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2596, pruned_loss=0.04037, over 1415206.43 frames.], batch size: 21, lr: 1.44e-04 2022-05-29 16:27:12,488 INFO [train.py:842] (0/4) Epoch 38, batch 4950, loss[loss=0.1872, simple_loss=0.27, pruned_loss=0.0522, over 7063.00 frames.], tot_loss[loss=0.1696, simple_loss=0.2592, 
pruned_loss=0.04002, over 1418271.56 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:27:50,530 INFO [train.py:842] (0/4) Epoch 38, batch 5000, loss[loss=0.172, simple_loss=0.2608, pruned_loss=0.04156, over 7135.00 frames.], tot_loss[loss=0.1697, simple_loss=0.2595, pruned_loss=0.03999, over 1421432.30 frames.], batch size: 26, lr: 1.44e-04 2022-05-29 16:28:28,902 INFO [train.py:842] (0/4) Epoch 38, batch 5050, loss[loss=0.1644, simple_loss=0.2557, pruned_loss=0.03653, over 6217.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2594, pruned_loss=0.04019, over 1425309.18 frames.], batch size: 37, lr: 1.44e-04 2022-05-29 16:29:07,179 INFO [train.py:842] (0/4) Epoch 38, batch 5100, loss[loss=0.1714, simple_loss=0.2752, pruned_loss=0.03381, over 7285.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2582, pruned_loss=0.04028, over 1426060.09 frames.], batch size: 24, lr: 1.44e-04 2022-05-29 16:29:45,462 INFO [train.py:842] (0/4) Epoch 38, batch 5150, loss[loss=0.1694, simple_loss=0.2602, pruned_loss=0.03932, over 7432.00 frames.], tot_loss[loss=0.1693, simple_loss=0.2585, pruned_loss=0.0401, over 1428596.02 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 16:30:23,530 INFO [train.py:842] (0/4) Epoch 38, batch 5200, loss[loss=0.1808, simple_loss=0.2775, pruned_loss=0.04205, over 7223.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2584, pruned_loss=0.04028, over 1426975.58 frames.], batch size: 21, lr: 1.44e-04 2022-05-29 16:31:01,711 INFO [train.py:842] (0/4) Epoch 38, batch 5250, loss[loss=0.1979, simple_loss=0.2819, pruned_loss=0.05698, over 7322.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2592, pruned_loss=0.04095, over 1424016.24 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 16:31:39,568 INFO [train.py:842] (0/4) Epoch 38, batch 5300, loss[loss=0.1958, simple_loss=0.284, pruned_loss=0.05385, over 5288.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2594, pruned_loss=0.04097, over 1420249.83 frames.], batch size: 52, lr: 1.44e-04 2022-05-29 16:32:17,622 INFO [train.py:842] (0/4) Epoch 38, batch 5350, loss[loss=0.1555, simple_loss=0.2416, pruned_loss=0.03471, over 7274.00 frames.], tot_loss[loss=0.1698, simple_loss=0.2586, pruned_loss=0.04051, over 1412940.77 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:32:55,660 INFO [train.py:842] (0/4) Epoch 38, batch 5400, loss[loss=0.1792, simple_loss=0.2692, pruned_loss=0.04454, over 7339.00 frames.], tot_loss[loss=0.1721, simple_loss=0.261, pruned_loss=0.0416, over 1417033.35 frames.], batch size: 22, lr: 1.44e-04 2022-05-29 16:33:34,070 INFO [train.py:842] (0/4) Epoch 38, batch 5450, loss[loss=0.1603, simple_loss=0.2511, pruned_loss=0.03479, over 7219.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2607, pruned_loss=0.04157, over 1418940.73 frames.], batch size: 21, lr: 1.44e-04 2022-05-29 16:34:11,843 INFO [train.py:842] (0/4) Epoch 38, batch 5500, loss[loss=0.1511, simple_loss=0.2355, pruned_loss=0.03339, over 7407.00 frames.], tot_loss[loss=0.1717, simple_loss=0.2608, pruned_loss=0.04126, over 1421496.72 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:34:50,167 INFO [train.py:842] (0/4) Epoch 38, batch 5550, loss[loss=0.168, simple_loss=0.2568, pruned_loss=0.03963, over 7324.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2592, pruned_loss=0.04066, over 1421019.22 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 16:35:27,883 INFO [train.py:842] (0/4) Epoch 38, batch 5600, loss[loss=0.1855, simple_loss=0.2787, pruned_loss=0.04616, over 7344.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2595, pruned_loss=0.04054, 
over 1418078.34 frames.], batch size: 22, lr: 1.44e-04 2022-05-29 16:36:06,391 INFO [train.py:842] (0/4) Epoch 38, batch 5650, loss[loss=0.1389, simple_loss=0.2233, pruned_loss=0.02725, over 7402.00 frames.], tot_loss[loss=0.1697, simple_loss=0.2589, pruned_loss=0.0403, over 1422712.79 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:36:44,326 INFO [train.py:842] (0/4) Epoch 38, batch 5700, loss[loss=0.1987, simple_loss=0.2852, pruned_loss=0.0561, over 7289.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2597, pruned_loss=0.04065, over 1422561.55 frames.], batch size: 24, lr: 1.44e-04 2022-05-29 16:37:22,670 INFO [train.py:842] (0/4) Epoch 38, batch 5750, loss[loss=0.2327, simple_loss=0.3027, pruned_loss=0.08133, over 7064.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2604, pruned_loss=0.04105, over 1425889.70 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:38:00,879 INFO [train.py:842] (0/4) Epoch 38, batch 5800, loss[loss=0.1631, simple_loss=0.2516, pruned_loss=0.0373, over 7278.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2591, pruned_loss=0.0399, over 1430099.35 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:38:38,996 INFO [train.py:842] (0/4) Epoch 38, batch 5850, loss[loss=0.2078, simple_loss=0.3038, pruned_loss=0.05587, over 6748.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2592, pruned_loss=0.03996, over 1423639.13 frames.], batch size: 31, lr: 1.44e-04 2022-05-29 16:39:16,841 INFO [train.py:842] (0/4) Epoch 38, batch 5900, loss[loss=0.1501, simple_loss=0.2307, pruned_loss=0.0347, over 7210.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2605, pruned_loss=0.0409, over 1422334.01 frames.], batch size: 16, lr: 1.44e-04 2022-05-29 16:39:55,208 INFO [train.py:842] (0/4) Epoch 38, batch 5950, loss[loss=0.1908, simple_loss=0.2878, pruned_loss=0.04689, over 7333.00 frames.], tot_loss[loss=0.1706, simple_loss=0.26, pruned_loss=0.04053, over 1421799.36 frames.], batch size: 25, lr: 1.44e-04 2022-05-29 16:40:33,178 INFO [train.py:842] (0/4) Epoch 38, batch 6000, loss[loss=0.1572, simple_loss=0.2399, pruned_loss=0.03724, over 7156.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2605, pruned_loss=0.04111, over 1419433.59 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 16:40:33,179 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 16:40:42,180 INFO [train.py:871] (0/4) Epoch 38, validation: loss=0.1638, simple_loss=0.2604, pruned_loss=0.03358, over 868885.00 frames. 
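The printed learning rate drifts down very slowly here (1.48e-04 through most of epoch 37, then 1.45e-04 and 1.44e-04 during epoch 38), which is consistent with a schedule that decays as a power law in both the global batch count and the epoch number, in the style of the Eden scheduler used in icefall recipes. A hedged sketch of that kind of schedule is given below; the base_lr, lr_batches and lr_epochs constants are assumptions chosen only to reproduce the order of magnitude seen in the log.

```python
# Eden-style learning-rate schedule: power-law decay in both batch and epoch.
# base_lr, lr_batches and lr_epochs are assumed values for illustration; they
# are not read from this log.

def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float = 5000.0, lr_epochs: float = 6.0) -> float:
    batch_factor = ((batch**2 + lr_batches**2) / lr_batches**2) ** -0.25
    epoch_factor = ((epoch**2 + lr_epochs**2) / lr_epochs**2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# Around global batch ~340000 in epoch 38 this lands in the same range as the
# values printed above:
print(f"{eden_lr(0.003, 340000, 38):.2e}")   # ~1.44e-04
```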
2022-05-29 16:41:20,627 INFO [train.py:842] (0/4) Epoch 38, batch 6050, loss[loss=0.1541, simple_loss=0.2404, pruned_loss=0.03396, over 7254.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2601, pruned_loss=0.04103, over 1416287.16 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 16:41:58,587 INFO [train.py:842] (0/4) Epoch 38, batch 6100, loss[loss=0.1669, simple_loss=0.2643, pruned_loss=0.03478, over 7332.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2601, pruned_loss=0.04102, over 1416533.90 frames.], batch size: 22, lr: 1.44e-04 2022-05-29 16:42:36,998 INFO [train.py:842] (0/4) Epoch 38, batch 6150, loss[loss=0.1536, simple_loss=0.2497, pruned_loss=0.02873, over 6653.00 frames.], tot_loss[loss=0.1704, simple_loss=0.2594, pruned_loss=0.0407, over 1418077.51 frames.], batch size: 31, lr: 1.44e-04 2022-05-29 16:43:15,001 INFO [train.py:842] (0/4) Epoch 38, batch 6200, loss[loss=0.2162, simple_loss=0.3012, pruned_loss=0.06562, over 7342.00 frames.], tot_loss[loss=0.1709, simple_loss=0.26, pruned_loss=0.04089, over 1420260.81 frames.], batch size: 22, lr: 1.44e-04 2022-05-29 16:43:53,223 INFO [train.py:842] (0/4) Epoch 38, batch 6250, loss[loss=0.1629, simple_loss=0.2529, pruned_loss=0.03648, over 7171.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2606, pruned_loss=0.04104, over 1422381.26 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 16:44:31,152 INFO [train.py:842] (0/4) Epoch 38, batch 6300, loss[loss=0.153, simple_loss=0.2338, pruned_loss=0.03606, over 7357.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2599, pruned_loss=0.04064, over 1423864.49 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 16:45:09,389 INFO [train.py:842] (0/4) Epoch 38, batch 6350, loss[loss=0.1971, simple_loss=0.2819, pruned_loss=0.05617, over 7382.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2594, pruned_loss=0.0402, over 1425284.56 frames.], batch size: 23, lr: 1.44e-04 2022-05-29 16:45:47,313 INFO [train.py:842] (0/4) Epoch 38, batch 6400, loss[loss=0.1751, simple_loss=0.2583, pruned_loss=0.04594, over 7264.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2596, pruned_loss=0.04032, over 1424419.60 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 16:46:25,806 INFO [train.py:842] (0/4) Epoch 38, batch 6450, loss[loss=0.1691, simple_loss=0.2603, pruned_loss=0.03894, over 7234.00 frames.], tot_loss[loss=0.1684, simple_loss=0.2575, pruned_loss=0.03964, over 1426288.66 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 16:47:03,926 INFO [train.py:842] (0/4) Epoch 38, batch 6500, loss[loss=0.1877, simple_loss=0.2904, pruned_loss=0.04256, over 7150.00 frames.], tot_loss[loss=0.1675, simple_loss=0.2566, pruned_loss=0.03916, over 1429028.82 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 16:47:42,056 INFO [train.py:842] (0/4) Epoch 38, batch 6550, loss[loss=0.1794, simple_loss=0.2698, pruned_loss=0.04444, over 7143.00 frames.], tot_loss[loss=0.1682, simple_loss=0.2574, pruned_loss=0.03945, over 1429418.76 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 16:48:20,069 INFO [train.py:842] (0/4) Epoch 38, batch 6600, loss[loss=0.1716, simple_loss=0.2712, pruned_loss=0.03596, over 7161.00 frames.], tot_loss[loss=0.168, simple_loss=0.2571, pruned_loss=0.03944, over 1425112.93 frames.], batch size: 26, lr: 1.44e-04 2022-05-29 16:48:58,332 INFO [train.py:842] (0/4) Epoch 38, batch 6650, loss[loss=0.1907, simple_loss=0.2784, pruned_loss=0.05145, over 7365.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2585, pruned_loss=0.04015, over 1424143.52 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 16:49:36,294 
INFO [train.py:842] (0/4) Epoch 38, batch 6700, loss[loss=0.1418, simple_loss=0.2311, pruned_loss=0.02626, over 7220.00 frames.], tot_loss[loss=0.1689, simple_loss=0.2584, pruned_loss=0.03971, over 1426234.86 frames.], batch size: 16, lr: 1.44e-04 2022-05-29 16:50:14,380 INFO [train.py:842] (0/4) Epoch 38, batch 6750, loss[loss=0.1668, simple_loss=0.263, pruned_loss=0.03534, over 7185.00 frames.], tot_loss[loss=0.1692, simple_loss=0.2584, pruned_loss=0.03993, over 1420677.89 frames.], batch size: 23, lr: 1.44e-04 2022-05-29 16:50:52,129 INFO [train.py:842] (0/4) Epoch 38, batch 6800, loss[loss=0.1713, simple_loss=0.2593, pruned_loss=0.0417, over 7322.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2588, pruned_loss=0.03998, over 1418611.21 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 16:51:30,583 INFO [train.py:842] (0/4) Epoch 38, batch 6850, loss[loss=0.1653, simple_loss=0.2467, pruned_loss=0.04199, over 7285.00 frames.], tot_loss[loss=0.1688, simple_loss=0.2582, pruned_loss=0.03966, over 1422398.74 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:52:08,718 INFO [train.py:842] (0/4) Epoch 38, batch 6900, loss[loss=0.179, simple_loss=0.2716, pruned_loss=0.04316, over 7290.00 frames.], tot_loss[loss=0.1692, simple_loss=0.2586, pruned_loss=0.03985, over 1427611.24 frames.], batch size: 24, lr: 1.44e-04 2022-05-29 16:52:46,977 INFO [train.py:842] (0/4) Epoch 38, batch 6950, loss[loss=0.1395, simple_loss=0.2242, pruned_loss=0.02737, over 7397.00 frames.], tot_loss[loss=0.1691, simple_loss=0.2584, pruned_loss=0.03993, over 1427803.45 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:53:24,869 INFO [train.py:842] (0/4) Epoch 38, batch 7000, loss[loss=0.1624, simple_loss=0.242, pruned_loss=0.04143, over 7071.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2591, pruned_loss=0.04037, over 1427918.91 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:54:03,211 INFO [train.py:842] (0/4) Epoch 38, batch 7050, loss[loss=0.1528, simple_loss=0.24, pruned_loss=0.03285, over 7354.00 frames.], tot_loss[loss=0.1697, simple_loss=0.2591, pruned_loss=0.04019, over 1427412.10 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 16:54:41,059 INFO [train.py:842] (0/4) Epoch 38, batch 7100, loss[loss=0.1694, simple_loss=0.2602, pruned_loss=0.03926, over 7109.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2605, pruned_loss=0.04108, over 1424102.00 frames.], batch size: 21, lr: 1.44e-04 2022-05-29 16:55:19,343 INFO [train.py:842] (0/4) Epoch 38, batch 7150, loss[loss=0.1543, simple_loss=0.2511, pruned_loss=0.02874, over 6387.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2602, pruned_loss=0.04071, over 1422936.18 frames.], batch size: 37, lr: 1.44e-04 2022-05-29 16:55:57,115 INFO [train.py:842] (0/4) Epoch 38, batch 7200, loss[loss=0.1759, simple_loss=0.2737, pruned_loss=0.03902, over 7428.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2594, pruned_loss=0.04022, over 1422675.35 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 16:56:35,182 INFO [train.py:842] (0/4) Epoch 38, batch 7250, loss[loss=0.1713, simple_loss=0.264, pruned_loss=0.03936, over 7370.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2608, pruned_loss=0.04067, over 1423253.48 frames.], batch size: 23, lr: 1.44e-04 2022-05-29 16:57:13,239 INFO [train.py:842] (0/4) Epoch 38, batch 7300, loss[loss=0.1342, simple_loss=0.2215, pruned_loss=0.02348, over 7419.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2606, pruned_loss=0.04019, over 1427238.08 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 16:57:51,509 INFO [train.py:842] (0/4) 
Epoch 38, batch 7350, loss[loss=0.1896, simple_loss=0.2693, pruned_loss=0.0549, over 6994.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2602, pruned_loss=0.04011, over 1429621.19 frames.], batch size: 16, lr: 1.44e-04 2022-05-29 16:58:29,500 INFO [train.py:842] (0/4) Epoch 38, batch 7400, loss[loss=0.1817, simple_loss=0.275, pruned_loss=0.04418, over 7413.00 frames.], tot_loss[loss=0.1705, simple_loss=0.2605, pruned_loss=0.04026, over 1430736.84 frames.], batch size: 21, lr: 1.44e-04 2022-05-29 16:59:07,987 INFO [train.py:842] (0/4) Epoch 38, batch 7450, loss[loss=0.1753, simple_loss=0.2718, pruned_loss=0.03938, over 7075.00 frames.], tot_loss[loss=0.1693, simple_loss=0.2594, pruned_loss=0.03963, over 1434329.58 frames.], batch size: 28, lr: 1.44e-04 2022-05-29 16:59:45,794 INFO [train.py:842] (0/4) Epoch 38, batch 7500, loss[loss=0.1656, simple_loss=0.2623, pruned_loss=0.03442, over 7084.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2607, pruned_loss=0.04041, over 1432333.28 frames.], batch size: 26, lr: 1.44e-04 2022-05-29 17:00:23,994 INFO [train.py:842] (0/4) Epoch 38, batch 7550, loss[loss=0.1854, simple_loss=0.275, pruned_loss=0.04793, over 6744.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2606, pruned_loss=0.04024, over 1432446.05 frames.], batch size: 31, lr: 1.44e-04 2022-05-29 17:01:01,992 INFO [train.py:842] (0/4) Epoch 38, batch 7600, loss[loss=0.1627, simple_loss=0.2592, pruned_loss=0.03307, over 7106.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2602, pruned_loss=0.04006, over 1432491.67 frames.], batch size: 21, lr: 1.44e-04 2022-05-29 17:01:40,326 INFO [train.py:842] (0/4) Epoch 38, batch 7650, loss[loss=0.1885, simple_loss=0.2802, pruned_loss=0.04836, over 7233.00 frames.], tot_loss[loss=0.1697, simple_loss=0.2593, pruned_loss=0.04006, over 1432498.01 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 17:02:18,326 INFO [train.py:842] (0/4) Epoch 38, batch 7700, loss[loss=0.1692, simple_loss=0.2523, pruned_loss=0.04302, over 7284.00 frames.], tot_loss[loss=0.1688, simple_loss=0.2581, pruned_loss=0.03975, over 1431084.21 frames.], batch size: 24, lr: 1.44e-04 2022-05-29 17:02:56,562 INFO [train.py:842] (0/4) Epoch 38, batch 7750, loss[loss=0.181, simple_loss=0.2674, pruned_loss=0.04731, over 7214.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2593, pruned_loss=0.04024, over 1429790.82 frames.], batch size: 22, lr: 1.44e-04 2022-05-29 17:03:34,676 INFO [train.py:842] (0/4) Epoch 38, batch 7800, loss[loss=0.1686, simple_loss=0.2664, pruned_loss=0.03537, over 7202.00 frames.], tot_loss[loss=0.1687, simple_loss=0.258, pruned_loss=0.03969, over 1427142.16 frames.], batch size: 23, lr: 1.44e-04 2022-05-29 17:04:13,043 INFO [train.py:842] (0/4) Epoch 38, batch 7850, loss[loss=0.1693, simple_loss=0.2559, pruned_loss=0.04139, over 6842.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2587, pruned_loss=0.04013, over 1429035.52 frames.], batch size: 31, lr: 1.44e-04 2022-05-29 17:04:51,067 INFO [train.py:842] (0/4) Epoch 38, batch 7900, loss[loss=0.3134, simple_loss=0.3865, pruned_loss=0.1201, over 7202.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2597, pruned_loss=0.04082, over 1427864.34 frames.], batch size: 23, lr: 1.44e-04 2022-05-29 17:05:29,264 INFO [train.py:842] (0/4) Epoch 38, batch 7950, loss[loss=0.1765, simple_loss=0.2656, pruned_loss=0.04367, over 6284.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2591, pruned_loss=0.04063, over 1425508.81 frames.], batch size: 37, lr: 1.44e-04 2022-05-29 17:06:07,451 INFO [train.py:842] (0/4) Epoch 38, batch 8000, 
loss[loss=0.1803, simple_loss=0.2749, pruned_loss=0.04285, over 7356.00 frames.], tot_loss[loss=0.1692, simple_loss=0.2582, pruned_loss=0.0401, over 1428697.27 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 17:06:45,754 INFO [train.py:842] (0/4) Epoch 38, batch 8050, loss[loss=0.1643, simple_loss=0.2641, pruned_loss=0.03227, over 7326.00 frames.], tot_loss[loss=0.1691, simple_loss=0.2584, pruned_loss=0.03986, over 1429900.26 frames.], batch size: 21, lr: 1.44e-04 2022-05-29 17:07:23,704 INFO [train.py:842] (0/4) Epoch 38, batch 8100, loss[loss=0.1515, simple_loss=0.2412, pruned_loss=0.03087, over 7195.00 frames.], tot_loss[loss=0.1686, simple_loss=0.2582, pruned_loss=0.03949, over 1427575.85 frames.], batch size: 22, lr: 1.44e-04 2022-05-29 17:08:01,981 INFO [train.py:842] (0/4) Epoch 38, batch 8150, loss[loss=0.2042, simple_loss=0.2994, pruned_loss=0.05454, over 7196.00 frames.], tot_loss[loss=0.1701, simple_loss=0.2597, pruned_loss=0.04026, over 1430052.35 frames.], batch size: 22, lr: 1.44e-04 2022-05-29 17:08:39,922 INFO [train.py:842] (0/4) Epoch 38, batch 8200, loss[loss=0.1513, simple_loss=0.2286, pruned_loss=0.03701, over 6984.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2603, pruned_loss=0.04072, over 1425782.74 frames.], batch size: 16, lr: 1.44e-04 2022-05-29 17:09:18,215 INFO [train.py:842] (0/4) Epoch 38, batch 8250, loss[loss=0.1358, simple_loss=0.2204, pruned_loss=0.02564, over 6999.00 frames.], tot_loss[loss=0.1711, simple_loss=0.2608, pruned_loss=0.04071, over 1425642.99 frames.], batch size: 16, lr: 1.44e-04 2022-05-29 17:09:56,313 INFO [train.py:842] (0/4) Epoch 38, batch 8300, loss[loss=0.2681, simple_loss=0.3476, pruned_loss=0.09432, over 7282.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2599, pruned_loss=0.04033, over 1427024.43 frames.], batch size: 25, lr: 1.44e-04 2022-05-29 17:10:34,654 INFO [train.py:842] (0/4) Epoch 38, batch 8350, loss[loss=0.2426, simple_loss=0.3232, pruned_loss=0.08103, over 7220.00 frames.], tot_loss[loss=0.1705, simple_loss=0.26, pruned_loss=0.0405, over 1427769.41 frames.], batch size: 21, lr: 1.44e-04 2022-05-29 17:11:12,608 INFO [train.py:842] (0/4) Epoch 38, batch 8400, loss[loss=0.1947, simple_loss=0.2803, pruned_loss=0.05451, over 6765.00 frames.], tot_loss[loss=0.1702, simple_loss=0.2596, pruned_loss=0.04043, over 1425264.58 frames.], batch size: 31, lr: 1.44e-04 2022-05-29 17:11:50,859 INFO [train.py:842] (0/4) Epoch 38, batch 8450, loss[loss=0.1597, simple_loss=0.2583, pruned_loss=0.03054, over 6751.00 frames.], tot_loss[loss=0.171, simple_loss=0.2602, pruned_loss=0.04084, over 1424714.03 frames.], batch size: 31, lr: 1.44e-04 2022-05-29 17:12:28,626 INFO [train.py:842] (0/4) Epoch 38, batch 8500, loss[loss=0.1565, simple_loss=0.2594, pruned_loss=0.02682, over 7341.00 frames.], tot_loss[loss=0.1725, simple_loss=0.2618, pruned_loss=0.0416, over 1417673.33 frames.], batch size: 22, lr: 1.44e-04 2022-05-29 17:13:06,632 INFO [train.py:842] (0/4) Epoch 38, batch 8550, loss[loss=0.1333, simple_loss=0.2176, pruned_loss=0.02454, over 7109.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2615, pruned_loss=0.04116, over 1417500.12 frames.], batch size: 17, lr: 1.44e-04 2022-05-29 17:13:44,591 INFO [train.py:842] (0/4) Epoch 38, batch 8600, loss[loss=0.1378, simple_loss=0.2302, pruned_loss=0.02269, over 7157.00 frames.], tot_loss[loss=0.1726, simple_loss=0.2621, pruned_loss=0.04155, over 1417372.32 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 17:14:22,658 INFO [train.py:842] (0/4) Epoch 38, batch 8650, loss[loss=0.1389, 
simple_loss=0.2205, pruned_loss=0.02864, over 7150.00 frames.], tot_loss[loss=0.172, simple_loss=0.2617, pruned_loss=0.04115, over 1415782.82 frames.], batch size: 17, lr: 1.44e-04 2022-05-29 17:15:00,644 INFO [train.py:842] (0/4) Epoch 38, batch 8700, loss[loss=0.1651, simple_loss=0.2582, pruned_loss=0.03602, over 7331.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2611, pruned_loss=0.04126, over 1417727.20 frames.], batch size: 20, lr: 1.44e-04 2022-05-29 17:15:38,994 INFO [train.py:842] (0/4) Epoch 38, batch 8750, loss[loss=0.1462, simple_loss=0.225, pruned_loss=0.03366, over 6854.00 frames.], tot_loss[loss=0.1696, simple_loss=0.2588, pruned_loss=0.04017, over 1414135.39 frames.], batch size: 15, lr: 1.44e-04 2022-05-29 17:16:17,262 INFO [train.py:842] (0/4) Epoch 38, batch 8800, loss[loss=0.1683, simple_loss=0.2514, pruned_loss=0.04259, over 7344.00 frames.], tot_loss[loss=0.1689, simple_loss=0.2579, pruned_loss=0.04, over 1411843.41 frames.], batch size: 19, lr: 1.44e-04 2022-05-29 17:16:55,765 INFO [train.py:842] (0/4) Epoch 38, batch 8850, loss[loss=0.1621, simple_loss=0.2525, pruned_loss=0.0358, over 6978.00 frames.], tot_loss[loss=0.1691, simple_loss=0.2581, pruned_loss=0.04001, over 1411799.56 frames.], batch size: 16, lr: 1.44e-04 2022-05-29 17:17:33,403 INFO [train.py:842] (0/4) Epoch 38, batch 8900, loss[loss=0.1731, simple_loss=0.2743, pruned_loss=0.03593, over 7412.00 frames.], tot_loss[loss=0.1704, simple_loss=0.2594, pruned_loss=0.04071, over 1404223.75 frames.], batch size: 21, lr: 1.44e-04 2022-05-29 17:18:11,734 INFO [train.py:842] (0/4) Epoch 38, batch 8950, loss[loss=0.1594, simple_loss=0.244, pruned_loss=0.03736, over 7281.00 frames.], tot_loss[loss=0.17, simple_loss=0.2589, pruned_loss=0.04059, over 1404042.00 frames.], batch size: 18, lr: 1.44e-04 2022-05-29 17:18:49,633 INFO [train.py:842] (0/4) Epoch 38, batch 9000, loss[loss=0.1542, simple_loss=0.2525, pruned_loss=0.02796, over 6400.00 frames.], tot_loss[loss=0.1685, simple_loss=0.2572, pruned_loss=0.03993, over 1391730.36 frames.], batch size: 38, lr: 1.44e-04 2022-05-29 17:18:49,633 INFO [train.py:862] (0/4) Computing validation loss 2022-05-29 17:18:58,750 INFO [train.py:871] (0/4) Epoch 38, validation: loss=0.165, simple_loss=0.2618, pruned_loss=0.03413, over 868885.00 frames. 
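The two entries immediately above (train.py:862 and train.py:871) record a periodic validation pass: training pauses at batch 9000, the loss is averaged over the dev data, and the result is reported frame-weighted ("over 868885.00 frames"). The sketch below illustrates such a pass under generic PyTorch assumptions only; the names validate, valid_dl and compute_loss are illustrative and are not copied from this recipe's train.py.

import torch

def validate(model, valid_dl, compute_loss, device="cuda:0"):
    # Frame-weighted average of the loss over the dev set, in the spirit of
    # the "validation: loss=... over N frames" entries logged above.
    # `compute_loss` is assumed to return (summed_loss_tensor, num_frames).
    model.eval()
    tot_loss = 0.0
    tot_frames = 0.0
    with torch.no_grad():
        for batch in valid_dl:
            loss, num_frames = compute_loss(model, batch, device)
            tot_loss += float(loss)
            tot_frames += float(num_frames)
    model.train()
    return tot_loss / max(tot_frames, 1.0)

After the averaged validation loss is logged, training resumes from the next batch, which is why the very next entry continues with Epoch 38, batch 9050.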
2022-05-29 17:19:36,242 INFO [train.py:842] (0/4) Epoch 38, batch 9050, loss[loss=0.2274, simple_loss=0.3102, pruned_loss=0.07228, over 4970.00 frames.], tot_loss[loss=0.1719, simple_loss=0.2602, pruned_loss=0.04179, over 1354602.67 frames.], batch size: 52, lr: 1.44e-04 2022-05-29 17:20:12,620 INFO [train.py:842] (0/4) Epoch 38, batch 9100, loss[loss=0.1539, simple_loss=0.242, pruned_loss=0.03291, over 5086.00 frames.], tot_loss[loss=0.1749, simple_loss=0.2632, pruned_loss=0.04329, over 1303855.00 frames.], batch size: 52, lr: 1.44e-04 2022-05-29 17:20:49,730 INFO [train.py:842] (0/4) Epoch 38, batch 9150, loss[loss=0.1985, simple_loss=0.2776, pruned_loss=0.05971, over 4883.00 frames.], tot_loss[loss=0.1795, simple_loss=0.2669, pruned_loss=0.04603, over 1239057.68 frames.], batch size: 52, lr: 1.44e-04 2022-05-29 17:21:21,144 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/epoch-38.pt 2022-05-29 17:21:35,367 INFO [train.py:842] (0/4) Epoch 39, batch 0, loss[loss=0.1609, simple_loss=0.2616, pruned_loss=0.03009, over 7266.00 frames.], tot_loss[loss=0.1609, simple_loss=0.2616, pruned_loss=0.03009, over 7266.00 frames.], batch size: 19, lr: 1.42e-04 2022-05-29 17:22:13,619 INFO [train.py:842] (0/4) Epoch 39, batch 50, loss[loss=0.1882, simple_loss=0.287, pruned_loss=0.04469, over 7140.00 frames.], tot_loss[loss=0.169, simple_loss=0.2606, pruned_loss=0.03867, over 320364.22 frames.], batch size: 20, lr: 1.42e-04 2022-05-29 17:22:51,511 INFO [train.py:842] (0/4) Epoch 39, batch 100, loss[loss=0.1824, simple_loss=0.2739, pruned_loss=0.04544, over 6832.00 frames.], tot_loss[loss=0.1709, simple_loss=0.2616, pruned_loss=0.04003, over 566207.70 frames.], batch size: 31, lr: 1.42e-04 2022-05-29 17:23:29,994 INFO [train.py:842] (0/4) Epoch 39, batch 150, loss[loss=0.1574, simple_loss=0.2427, pruned_loss=0.03609, over 7176.00 frames.], tot_loss[loss=0.1706, simple_loss=0.2598, pruned_loss=0.04069, over 754564.22 frames.], batch size: 18, lr: 1.42e-04 2022-05-29 17:24:07,929 INFO [train.py:842] (0/4) Epoch 39, batch 200, loss[loss=0.1725, simple_loss=0.2652, pruned_loss=0.03995, over 7434.00 frames.], tot_loss[loss=0.1723, simple_loss=0.2616, pruned_loss=0.04153, over 901679.12 frames.], batch size: 20, lr: 1.42e-04 2022-05-29 17:24:45,987 INFO [train.py:842] (0/4) Epoch 39, batch 250, loss[loss=0.1892, simple_loss=0.2792, pruned_loss=0.04963, over 6439.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2611, pruned_loss=0.04097, over 1017408.24 frames.], batch size: 38, lr: 1.42e-04 2022-05-29 17:25:24,156 INFO [train.py:842] (0/4) Epoch 39, batch 300, loss[loss=0.1802, simple_loss=0.2678, pruned_loss=0.04624, over 7428.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2594, pruned_loss=0.03978, over 1112499.50 frames.], batch size: 20, lr: 1.42e-04 2022-05-29 17:26:02,383 INFO [train.py:842] (0/4) Epoch 39, batch 350, loss[loss=0.1688, simple_loss=0.257, pruned_loss=0.04028, over 7263.00 frames.], tot_loss[loss=0.1697, simple_loss=0.2592, pruned_loss=0.04012, over 1179487.42 frames.], batch size: 24, lr: 1.42e-04 2022-05-29 17:26:40,281 INFO [train.py:842] (0/4) Epoch 39, batch 400, loss[loss=0.1751, simple_loss=0.2729, pruned_loss=0.0386, over 7232.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2588, pruned_loss=0.03997, over 1229357.87 frames.], batch size: 21, lr: 1.42e-04 2022-05-29 17:27:18,764 INFO [train.py:842] (0/4) Epoch 39, batch 450, loss[loss=0.1894, simple_loss=0.2849, pruned_loss=0.04696, over 7185.00 frames.], tot_loss[loss=0.1697, 
simple_loss=0.259, pruned_loss=0.04022, over 1274325.29 frames.], batch size: 23, lr: 1.42e-04 2022-05-29 17:27:56,685 INFO [train.py:842] (0/4) Epoch 39, batch 500, loss[loss=0.1594, simple_loss=0.2567, pruned_loss=0.03109, over 7145.00 frames.], tot_loss[loss=0.1689, simple_loss=0.2583, pruned_loss=0.03975, over 1300886.58 frames.], batch size: 20, lr: 1.42e-04 2022-05-29 17:28:35,068 INFO [train.py:842] (0/4) Epoch 39, batch 550, loss[loss=0.2211, simple_loss=0.2954, pruned_loss=0.07345, over 7426.00 frames.], tot_loss[loss=0.1698, simple_loss=0.259, pruned_loss=0.04032, over 1327037.97 frames.], batch size: 20, lr: 1.42e-04 2022-05-29 17:29:12,928 INFO [train.py:842] (0/4) Epoch 39, batch 600, loss[loss=0.1845, simple_loss=0.2704, pruned_loss=0.0493, over 7167.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2597, pruned_loss=0.04044, over 1346063.50 frames.], batch size: 18, lr: 1.42e-04 2022-05-29 17:29:51,400 INFO [train.py:842] (0/4) Epoch 39, batch 650, loss[loss=0.1591, simple_loss=0.2376, pruned_loss=0.04028, over 7295.00 frames.], tot_loss[loss=0.1695, simple_loss=0.2589, pruned_loss=0.04007, over 1365870.00 frames.], batch size: 17, lr: 1.42e-04 2022-05-29 17:30:39,077 INFO [train.py:842] (0/4) Epoch 39, batch 700, loss[loss=0.1621, simple_loss=0.2483, pruned_loss=0.03798, over 7215.00 frames.], tot_loss[loss=0.1682, simple_loss=0.2573, pruned_loss=0.03957, over 1378651.62 frames.], batch size: 16, lr: 1.42e-04 2022-05-29 17:31:17,547 INFO [train.py:842] (0/4) Epoch 39, batch 750, loss[loss=0.1771, simple_loss=0.2778, pruned_loss=0.03816, over 6407.00 frames.], tot_loss[loss=0.169, simple_loss=0.2578, pruned_loss=0.04012, over 1387805.64 frames.], batch size: 37, lr: 1.42e-04 2022-05-29 17:31:55,664 INFO [train.py:842] (0/4) Epoch 39, batch 800, loss[loss=0.1801, simple_loss=0.2695, pruned_loss=0.04529, over 7223.00 frames.], tot_loss[loss=0.1684, simple_loss=0.2575, pruned_loss=0.03966, over 1400121.10 frames.], batch size: 20, lr: 1.42e-04 2022-05-29 17:32:33,965 INFO [train.py:842] (0/4) Epoch 39, batch 850, loss[loss=0.1514, simple_loss=0.2464, pruned_loss=0.02819, over 7053.00 frames.], tot_loss[loss=0.1679, simple_loss=0.2573, pruned_loss=0.03928, over 1406375.16 frames.], batch size: 28, lr: 1.42e-04 2022-05-29 17:33:11,857 INFO [train.py:842] (0/4) Epoch 39, batch 900, loss[loss=0.1767, simple_loss=0.2644, pruned_loss=0.04452, over 7409.00 frames.], tot_loss[loss=0.1676, simple_loss=0.2573, pruned_loss=0.03891, over 1405196.01 frames.], batch size: 21, lr: 1.42e-04 2022-05-29 17:33:49,927 INFO [train.py:842] (0/4) Epoch 39, batch 950, loss[loss=0.1466, simple_loss=0.2325, pruned_loss=0.03035, over 7142.00 frames.], tot_loss[loss=0.1683, simple_loss=0.2582, pruned_loss=0.03921, over 1406178.87 frames.], batch size: 17, lr: 1.42e-04 2022-05-29 17:34:28,091 INFO [train.py:842] (0/4) Epoch 39, batch 1000, loss[loss=0.1792, simple_loss=0.2554, pruned_loss=0.05155, over 7350.00 frames.], tot_loss[loss=0.1682, simple_loss=0.2581, pruned_loss=0.0391, over 1408673.02 frames.], batch size: 19, lr: 1.42e-04 2022-05-29 17:35:06,298 INFO [train.py:842] (0/4) Epoch 39, batch 1050, loss[loss=0.1957, simple_loss=0.2882, pruned_loss=0.05163, over 6770.00 frames.], tot_loss[loss=0.1684, simple_loss=0.2585, pruned_loss=0.03921, over 1411499.01 frames.], batch size: 31, lr: 1.42e-04 2022-05-29 17:35:53,920 INFO [train.py:842] (0/4) Epoch 39, batch 1100, loss[loss=0.145, simple_loss=0.2485, pruned_loss=0.02073, over 7370.00 frames.], tot_loss[loss=0.1677, simple_loss=0.2576, 
pruned_loss=0.03892, over 1417304.24 frames.], batch size: 23, lr: 1.42e-04 2022-05-29 17:36:32,294 INFO [train.py:842] (0/4) Epoch 39, batch 1150, loss[loss=0.1558, simple_loss=0.2412, pruned_loss=0.03516, over 7280.00 frames.], tot_loss[loss=0.1675, simple_loss=0.257, pruned_loss=0.03895, over 1420581.18 frames.], batch size: 18, lr: 1.42e-04 2022-05-29 17:37:19,671 INFO [train.py:842] (0/4) Epoch 39, batch 1200, loss[loss=0.1875, simple_loss=0.2795, pruned_loss=0.04779, over 6670.00 frames.], tot_loss[loss=0.17, simple_loss=0.259, pruned_loss=0.04048, over 1421694.57 frames.], batch size: 31, lr: 1.42e-04 2022-05-29 17:37:57,971 INFO [train.py:842] (0/4) Epoch 39, batch 1250, loss[loss=0.148, simple_loss=0.2478, pruned_loss=0.02411, over 7430.00 frames.], tot_loss[loss=0.1712, simple_loss=0.2607, pruned_loss=0.04084, over 1422875.51 frames.], batch size: 20, lr: 1.41e-04 2022-05-29 17:38:36,164 INFO [train.py:842] (0/4) Epoch 39, batch 1300, loss[loss=0.1573, simple_loss=0.2321, pruned_loss=0.0412, over 7287.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2596, pruned_loss=0.0405, over 1426276.23 frames.], batch size: 17, lr: 1.41e-04 2022-05-29 17:39:14,319 INFO [train.py:842] (0/4) Epoch 39, batch 1350, loss[loss=0.1324, simple_loss=0.2251, pruned_loss=0.01986, over 7332.00 frames.], tot_loss[loss=0.17, simple_loss=0.2597, pruned_loss=0.04017, over 1426301.58 frames.], batch size: 20, lr: 1.41e-04 2022-05-29 17:39:52,321 INFO [train.py:842] (0/4) Epoch 39, batch 1400, loss[loss=0.2125, simple_loss=0.2966, pruned_loss=0.06416, over 7152.00 frames.], tot_loss[loss=0.1692, simple_loss=0.2591, pruned_loss=0.03966, over 1426541.12 frames.], batch size: 19, lr: 1.41e-04 2022-05-29 17:40:30,522 INFO [train.py:842] (0/4) Epoch 39, batch 1450, loss[loss=0.1869, simple_loss=0.2767, pruned_loss=0.0485, over 7284.00 frames.], tot_loss[loss=0.1712, simple_loss=0.261, pruned_loss=0.04067, over 1426274.17 frames.], batch size: 25, lr: 1.41e-04 2022-05-29 17:41:08,503 INFO [train.py:842] (0/4) Epoch 39, batch 1500, loss[loss=0.1792, simple_loss=0.2662, pruned_loss=0.04611, over 7097.00 frames.], tot_loss[loss=0.1716, simple_loss=0.2615, pruned_loss=0.04087, over 1424544.06 frames.], batch size: 21, lr: 1.41e-04 2022-05-29 17:41:46,918 INFO [train.py:842] (0/4) Epoch 39, batch 1550, loss[loss=0.1933, simple_loss=0.283, pruned_loss=0.05179, over 7200.00 frames.], tot_loss[loss=0.1715, simple_loss=0.2611, pruned_loss=0.04093, over 1424361.29 frames.], batch size: 22, lr: 1.41e-04 2022-05-29 17:42:25,012 INFO [train.py:842] (0/4) Epoch 39, batch 1600, loss[loss=0.1694, simple_loss=0.2671, pruned_loss=0.03587, over 6629.00 frames.], tot_loss[loss=0.1696, simple_loss=0.259, pruned_loss=0.04014, over 1426201.36 frames.], batch size: 31, lr: 1.41e-04 2022-05-29 17:43:03,328 INFO [train.py:842] (0/4) Epoch 39, batch 1650, loss[loss=0.1859, simple_loss=0.2811, pruned_loss=0.04533, over 7218.00 frames.], tot_loss[loss=0.1697, simple_loss=0.2589, pruned_loss=0.04022, over 1425250.95 frames.], batch size: 21, lr: 1.41e-04 2022-05-29 17:43:41,229 INFO [train.py:842] (0/4) Epoch 39, batch 1700, loss[loss=0.2099, simple_loss=0.2988, pruned_loss=0.0605, over 7070.00 frames.], tot_loss[loss=0.1697, simple_loss=0.2593, pruned_loss=0.04003, over 1426597.67 frames.], batch size: 28, lr: 1.41e-04 2022-05-29 17:44:19,439 INFO [train.py:842] (0/4) Epoch 39, batch 1750, loss[loss=0.1833, simple_loss=0.2776, pruned_loss=0.04454, over 7431.00 frames.], tot_loss[loss=0.1707, simple_loss=0.2606, pruned_loss=0.04035, over 
1426560.32 frames.], batch size: 20, lr: 1.41e-04 2022-05-29 17:44:57,266 INFO [train.py:842] (0/4) Epoch 39, batch 1800, loss[loss=0.2159, simple_loss=0.3001, pruned_loss=0.06587, over 7200.00 frames.], tot_loss[loss=0.1713, simple_loss=0.2612, pruned_loss=0.04068, over 1424483.55 frames.], batch size: 23, lr: 1.41e-04 2022-05-29 17:45:35,462 INFO [train.py:842] (0/4) Epoch 39, batch 1850, loss[loss=0.1458, simple_loss=0.2344, pruned_loss=0.02862, over 7161.00 frames.], tot_loss[loss=0.1703, simple_loss=0.26, pruned_loss=0.0403, over 1421833.68 frames.], batch size: 19, lr: 1.41e-04 2022-05-29 17:46:13,559 INFO [train.py:842] (0/4) Epoch 39, batch 1900, loss[loss=0.1476, simple_loss=0.2335, pruned_loss=0.03088, over 7289.00 frames.], tot_loss[loss=0.1698, simple_loss=0.2595, pruned_loss=0.04007, over 1424063.89 frames.], batch size: 18, lr: 1.41e-04 2022-05-29 17:46:51,815 INFO [train.py:842] (0/4) Epoch 39, batch 1950, loss[loss=0.1754, simple_loss=0.2726, pruned_loss=0.0391, over 7318.00 frames.], tot_loss[loss=0.1692, simple_loss=0.2591, pruned_loss=0.03966, over 1424088.71 frames.], batch size: 21, lr: 1.41e-04 2022-05-29 17:47:29,742 INFO [train.py:842] (0/4) Epoch 39, batch 2000, loss[loss=0.1546, simple_loss=0.2394, pruned_loss=0.03488, over 7261.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2591, pruned_loss=0.0398, over 1423840.69 frames.], batch size: 19, lr: 1.41e-04 2022-05-29 17:48:07,761 INFO [train.py:842] (0/4) Epoch 39, batch 2050, loss[loss=0.1523, simple_loss=0.2519, pruned_loss=0.0264, over 7331.00 frames.], tot_loss[loss=0.169, simple_loss=0.2586, pruned_loss=0.03968, over 1421576.89 frames.], batch size: 20, lr: 1.41e-04 2022-05-29 17:48:45,823 INFO [train.py:842] (0/4) Epoch 39, batch 2100, loss[loss=0.1415, simple_loss=0.2255, pruned_loss=0.02871, over 6741.00 frames.], tot_loss[loss=0.1689, simple_loss=0.2587, pruned_loss=0.03959, over 1422276.76 frames.], batch size: 15, lr: 1.41e-04 2022-05-29 17:49:24,058 INFO [train.py:842] (0/4) Epoch 39, batch 2150, loss[loss=0.1393, simple_loss=0.2344, pruned_loss=0.02213, over 7264.00 frames.], tot_loss[loss=0.1685, simple_loss=0.2584, pruned_loss=0.03929, over 1420133.80 frames.], batch size: 19, lr: 1.41e-04 2022-05-29 17:50:01,945 INFO [train.py:842] (0/4) Epoch 39, batch 2200, loss[loss=0.1618, simple_loss=0.2555, pruned_loss=0.03411, over 7207.00 frames.], tot_loss[loss=0.169, simple_loss=0.2591, pruned_loss=0.03945, over 1420606.44 frames.], batch size: 22, lr: 1.41e-04 2022-05-29 17:50:40,597 INFO [train.py:842] (0/4) Epoch 39, batch 2250, loss[loss=0.173, simple_loss=0.2654, pruned_loss=0.04031, over 7146.00 frames.], tot_loss[loss=0.1679, simple_loss=0.2578, pruned_loss=0.039, over 1424022.54 frames.], batch size: 20, lr: 1.41e-04 2022-05-29 17:51:18,397 INFO [train.py:842] (0/4) Epoch 39, batch 2300, loss[loss=0.1946, simple_loss=0.2871, pruned_loss=0.05101, over 7160.00 frames.], tot_loss[loss=0.1689, simple_loss=0.2587, pruned_loss=0.0395, over 1423842.19 frames.], batch size: 19, lr: 1.41e-04 2022-05-29 17:51:56,795 INFO [train.py:842] (0/4) Epoch 39, batch 2350, loss[loss=0.1622, simple_loss=0.2533, pruned_loss=0.03554, over 7219.00 frames.], tot_loss[loss=0.1681, simple_loss=0.258, pruned_loss=0.03908, over 1425474.68 frames.], batch size: 20, lr: 1.41e-04 2022-05-29 17:52:34,836 INFO [train.py:842] (0/4) Epoch 39, batch 2400, loss[loss=0.1612, simple_loss=0.2586, pruned_loss=0.03186, over 7148.00 frames.], tot_loss[loss=0.1693, simple_loss=0.2592, pruned_loss=0.03972, over 1428667.47 frames.], batch 
size: 20, lr: 1.41e-04 2022-05-29 17:53:13,350 INFO [train.py:842] (0/4) Epoch 39, batch 2450, loss[loss=0.1312, simple_loss=0.2111, pruned_loss=0.02562, over 7421.00 frames.], tot_loss[loss=0.1686, simple_loss=0.2583, pruned_loss=0.03944, over 1429565.89 frames.], batch size: 18, lr: 1.41e-04 2022-05-29 17:53:51,359 INFO [train.py:842] (0/4) Epoch 39, batch 2500, loss[loss=0.1655, simple_loss=0.2439, pruned_loss=0.04356, over 7401.00 frames.], tot_loss[loss=0.1694, simple_loss=0.2591, pruned_loss=0.03986, over 1427386.53 frames.], batch size: 18, lr: 1.41e-04 2022-05-29 17:54:29,852 INFO [train.py:842] (0/4) Epoch 39, batch 2550, loss[loss=0.1778, simple_loss=0.2773, pruned_loss=0.03919, over 7430.00 frames.], tot_loss[loss=0.1703, simple_loss=0.2601, pruned_loss=0.04027, over 1431647.08 frames.], batch size: 20, lr: 1.41e-04 2022-05-29 17:55:07,763 INFO [train.py:842] (0/4) Epoch 39, batch 2600, loss[loss=0.1816, simple_loss=0.2685, pruned_loss=0.04735, over 7174.00 frames.], tot_loss[loss=0.1718, simple_loss=0.2615, pruned_loss=0.0411, over 1429944.19 frames.], batch size: 26, lr: 1.41e-04 2022-05-29 17:55:46,321 INFO [train.py:842] (0/4) Epoch 39, batch 2650, loss[loss=0.1741, simple_loss=0.2668, pruned_loss=0.04073, over 7109.00 frames.], tot_loss[loss=0.1708, simple_loss=0.2605, pruned_loss=0.04057, over 1430795.57 frames.], batch size: 28, lr: 1.41e-04 2022-05-29 17:56:24,497 INFO [train.py:842] (0/4) Epoch 39, batch 2700, loss[loss=0.1671, simple_loss=0.2544, pruned_loss=0.03989, over 7307.00 frames.], tot_loss[loss=0.1699, simple_loss=0.2597, pruned_loss=0.04009, over 1428964.37 frames.], batch size: 25, lr: 1.41e-04 2022-05-29 17:56:33,312 INFO [checkpoint.py:75] (0/4) Saving checkpoint to streaming_pruned_transducer_stateless4/exp/checkpoint-352000.pt 2022-05-29 17:57:06,505 INFO [train.py:842] (0/4) Epoch 39, batch 2750, loss[loss=0.1396, simple_loss=0.2336, pruned_loss=0.02283, over 7160.00 frames.], tot_loss[loss=0.1689, simple_loss=0.2585, pruned_loss=0.03967, over 1428420.13 frames.], batch size: 19, lr: 1.41e-04 2022-05-29 17:57:44,930 INFO [train.py:842] (0/4) Epoch 39, batch 2800, loss[loss=0.1477, simple_loss=0.2471, pruned_loss=0.02419, over 7333.00 frames.], tot_loss[loss=0.1687, simple_loss=0.2589, pruned_loss=0.03926, over 1425465.02 frames.], batch size: 22, lr: 1.41e-04 2022-05-29 17:58:23,827 INFO [train.py:842] (0/4) Epoch 39, batch 2850, loss[loss=0.1944, simple_loss=0.2859, pruned_loss=0.05142, over 6456.00 frames.], tot_loss[loss=0.17, simple_loss=0.2604, pruned_loss=0.03979, over 1425568.02 frames.], batch size: 38, lr: 1.41e-04
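The log shows two kinds of checkpoint writes from checkpoint.py: an end-of-epoch file (epoch-38.pt, saved before epoch 39 begins) and a mid-epoch batch checkpoint (checkpoint-352000.pt, saved between batches 2700 and 2750 of epoch 39). A minimal sketch of such a dual saving policy is given below; the function name, the save_every_n parameter and the exact contents of the saved dict are assumptions for illustration, not the recipe's actual checkpoint.py.

from pathlib import Path

import torch

def maybe_save_checkpoint(model, optimizer, scheduler, exp_dir, epoch,
                          batch_idx_train, save_every_n, end_of_epoch=False):
    # Write epoch-<epoch>.pt at epoch boundaries and checkpoint-<step>.pt
    # every `save_every_n` training batches; the file-name patterns match
    # the log above, the rest is an illustrative guess at the saved state.
    state = {
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
        "scheduler": scheduler.state_dict(),
        "epoch": epoch,
        "batch_idx_train": batch_idx_train,
    }
    exp_dir = Path(exp_dir)
    if end_of_epoch:
        torch.save(state, exp_dir / f"epoch-{epoch}.pt")
    elif save_every_n > 0 and batch_idx_train % save_every_n == 0:
        torch.save(state, exp_dir / f"checkpoint-{batch_idx_train}.pt")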