icefall-asr-librispeech-pruned-stateless-streaming-conformer-rnnt4-2022-06-10/exp/log/log-train-2022-05-26-10-46-41-3
2022-05-26 10:46:41,638 INFO [train.py:906] (3/4) Training started
2022-05-26 10:46:41,638 INFO [train.py:916] (3/4) Device: cuda:3
2022-05-26 10:46:41,640 INFO [train.py:934] (3/4) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'encoder_dim': 512, 'nhead': 8, 'dim_feedforward': 2048, 'num_encoder_layers': 12, 'decoder_dim': 512, 'joiner_dim': 512, 'model_warm_step': 3000, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': 'ecfe7bd6d9189964bf3ff043038918d889a43185', 'k2-git-date': 'Tue May 10 10:57:55 2022', 'lhotse-version': '1.1.0', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'streaming-conformer', 'icefall-git-sha1': '364bccb-clean', 'icefall-git-date': 'Thu May 26 10:29:08 2022', 'icefall-path': '/ceph-kw/kangwei/code/icefall_reworked2', 'k2-path': '/ceph-kw/kangwei/code/k2/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-hw/kangwei/dev_tools/anaconda3/envs/rnnt2/lib/python3.8/site-packages/lhotse-1.1.0-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-3-0307202051-57dc848959-8tmmp', 'IP address': '10.177.24.138'}, 'world_size': 4, 'master_port': 13498, 'tensorboard': True, 'num_epochs': 50, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('streaming_pruned_transducer_stateless4/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 8000, 'keep_last_k': 20, 'average_period': 100, 'use_fp16': False, 'dynamic_chunk_training': True, 'causal_convolution': True, 'short_chunk_size': 32, 'num_left_chunks': 4, 'delay_penalty': 0.0, 'return_sym_delay': False, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 300, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'blank_id': 0, 'vocab_size': 500}
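The line above dumps the full training configuration as one Python dict literal (model dimensions, pruned-RNN-T loss scales, streaming chunk settings, data-loader options). For offline analysis the values can be pulled back out of the log with a small script. A minimal sketch, assuming the layout shown in this file; the file name and the key list are illustrative and not part of icefall.

```python
import re

def get_param(params_line: str, key: str) -> str:
    """Return the raw value logged for a top-level scalar key in the params dump."""
    m = re.search(rf"'{re.escape(key)}': ([^,}}]+)", params_line)
    return m.group(1).strip() if m else "<not found>"

# Hypothetical local copy of this log file.
with open("log-train-2022-05-26-10-46-41-3") as f:
    params_line = next(line for line in f if "'best_train_loss'" in line)

for key in ("encoder_dim", "num_encoder_layers", "initial_lr",
            "short_chunk_size", "num_left_chunks", "max_duration"):
    print(key, "=", get_param(params_line, key))
```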
2022-05-26 10:46:41,641 INFO [train.py:936] (3/4) About to create model
2022-05-26 10:46:42,071 INFO [train.py:940] (3/4) Number of model parameters: 78648040
2022-05-26 10:46:47,178 INFO [train.py:955] (3/4) Using DDP
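This process is rank 3 of 4 (Device: cuda:3, world_size=4 in the dump above), so the ~78.6 M-parameter model is wrapped in PyTorch DistributedDataParallel. The sketch below shows only the generic DDP pattern, not icefall's actual train.py; the helper name and arguments are placeholders.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp(model: torch.nn.Module, rank: int, world_size: int) -> DDP:
    # Generic PyTorch DDP wrapping (a sketch, not icefall's code).
    # Assumes MASTER_ADDR / MASTER_PORT are set in the environment
    # (the params dump above shows master_port 13498).
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    model = model.to(rank)                 # e.g. rank 3 -> cuda:3
    return DDP(model, device_ids=[rank])   # gradients averaged across the 4 GPUs
```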
2022-05-26 10:46:47,469 INFO [asr_datamodule.py:391] (3/4) About to get train-clean-100 cuts
2022-05-26 10:46:54,133 INFO [asr_datamodule.py:398] (3/4) About to get train-clean-360 cuts
2022-05-26 10:47:21,183 INFO [asr_datamodule.py:405] (3/4) About to get train-other-500 cuts
2022-05-26 10:48:06,644 INFO [asr_datamodule.py:209] (3/4) Enable MUSAN
2022-05-26 10:48:06,644 INFO [asr_datamodule.py:210] (3/4) About to get Musan cuts
2022-05-26 10:48:08,122 INFO [asr_datamodule.py:238] (3/4) Enable SpecAugment
2022-05-26 10:48:08,122 INFO [asr_datamodule.py:239] (3/4) Time warp factor: 80
2022-05-26 10:48:08,123 INFO [asr_datamodule.py:251] (3/4) Num frame mask: 10
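The lines above record the augmentation setup: MUSAN noise/music mixing plus SpecAugment with time-warp factor 80 and 10 frame (time) masks. The snippet below is only a toy illustration of frame/feature masking on an fbank matrix; it is not lhotse's SpecAugment (which is what the recipe actually uses), it omits time warping, and the mask widths other than num_frame_masks=10 are assumptions.

```python
import torch

def toy_spec_augment(feats: torch.Tensor,
                     num_frame_masks: int = 10, frame_mask_width: int = 100,
                     num_feat_masks: int = 2, feat_mask_width: int = 27) -> torch.Tensor:
    """Toy time/frequency masking on a (num_frames, num_features) fbank tensor."""
    T, F = feats.shape
    out = feats.clone()
    for _ in range(num_frame_masks):            # zero out random time spans
        w = int(torch.randint(0, frame_mask_width + 1, ()))
        t0 = int(torch.randint(0, max(T - w, 1), ()))
        out[t0:t0 + w, :] = 0.0
    for _ in range(num_feat_masks):             # zero out random mel bands
        w = int(torch.randint(0, feat_mask_width + 1, ()))
        f0 = int(torch.randint(0, max(F - w, 1), ()))
        out[:, f0:f0 + w] = 0.0
    return out
```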
2022-05-26 10:48:08,123 INFO [asr_datamodule.py:264] (3/4) About to create train dataset
2022-05-26 10:48:08,123 INFO [asr_datamodule.py:292] (3/4) Using BucketingSampler.
2022-05-26 10:48:13,205 INFO [asr_datamodule.py:308] (3/4) About to create train dataloader
2022-05-26 10:48:13,206 INFO [asr_datamodule.py:412] (3/4) About to get dev-clean cuts
2022-05-26 10:48:13,493 INFO [asr_datamodule.py:417] (3/4) About to get dev-other cuts
2022-05-26 10:48:13,628 INFO [asr_datamodule.py:339] (3/4) About to create dev dataset
2022-05-26 10:48:13,639 INFO [asr_datamodule.py:358] (3/4) About to create dev dataloader
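The data module builds bucketing-based train/dev loaders with max_duration=300 s of audio per batch and 30 buckets (see the params dump above). The sketch below shows the duration-bucketing idea in plain Python; it is a simplified illustration, not lhotse's BucketingSampler. It also explains why the logged "batch size" varies so much (e.g. 17 vs 52): a fixed duration budget admits many short utterances but only a few long ones.

```python
from typing import Iterable, List, Tuple

def toy_duration_batches(utts: Iterable[Tuple[str, float]],
                         max_duration: float = 300.0) -> List[List[str]]:
    """Pack (utt_id, duration_sec) pairs into batches of similar length whose
    total duration stays under max_duration.  Simplified illustration only."""
    batches: List[List[str]] = []
    cur: List[str] = []
    cur_dur = 0.0
    for utt_id, dur in sorted(utts, key=lambda x: x[1]):   # similar lengths together
        if cur and cur_dur + dur > max_duration:
            batches.append(cur)                             # close the batch when full
            cur, cur_dur = [], 0.0
        cur.append(utt_id)
        cur_dur += dur
    if cur:
        batches.append(cur)
    return batches
```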
2022-05-26 10:48:13,640 INFO [train.py:1082] (3/4) Sanity check -- see if any of the batches in epoch 1 would cause OOM.
2022-05-26 10:48:23,784 INFO [distributed.py:874] (3/4) Reducer buckets have been rebuilt in this iteration.
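Before real training begins, train.py runs a sanity pass over epoch-1 batches so that an out-of-memory failure shows up immediately rather than hours in (the DDP reducer message above comes from the first forward/backward of that pass). The sketch below illustrates the general pattern of such a check; it is not icefall's implementation, and the compute_loss helper and batch layout are placeholders.

```python
import torch

def scan_batches_for_oom(model, compute_loss, batches, optimizer) -> None:
    """Run forward/backward on a few worst-case batches to surface CUDA OOM early.
    Sketch only; `compute_loss` and the contents of `batches` are hypothetical."""
    for batch in batches:
        optimizer.zero_grad()
        try:
            loss = compute_loss(model, batch)
            loss.backward()                  # memory usually peaks here
        except RuntimeError as e:
            if "out of memory" in str(e):
                print("OOM while checking a worst-case batch:", e)
            raise
        optimizer.zero_grad()                # discard the test gradients
```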
2022-05-26 10:48:40,741 INFO [train.py:842] (3/4) Epoch 1, batch 0, loss[loss=0.7871, simple_loss=1.574, pruned_loss=6.62, over 7286.00 frames.], tot_loss[loss=0.7871, simple_loss=1.574, pruned_loss=6.62, over 7286.00 frames.], batch size: 17, lr: 3.00e-03
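Each train.py:842 line reports three per-frame quantities twice: for the current batch (loss[...]) and as a running tally over recent batches (tot_loss[...]). Here loss is the combined training objective, simple_loss the k2 "simple" RNN-T loss, and pruned_loss the pruned RNN-T loss. The logged numbers are consistent with the combined objective being 0.5 * simple_loss plus a pruned-loss term whose weight is 0 before model_warm_step=3000, 0.1 from batch 3000 to 6000, and 1.0 afterwards; that weighting is inferred from this log rather than quoted from train.py, so the check below is a reading aid under that assumption (agreement is to within the 4-digit display rounding).

```python
# Assumed weighting, inferred from the logged values (not quoted from train.py):
#   loss = 0.5 * simple_loss + pruned_scale * pruned_loss
#   pruned_scale = 0.0 for batch < 3000, 0.1 for 3000 <= batch < 6000, else 1.0
checks = [
    # (batch, simple_loss, pruned_loss, pruned_scale, logged combined loss)
    (0,    1.574,  6.62,   0.0, 0.7871),
    (3050, 0.4823, 1.075,  0.1, 0.3487),
    (6050, 0.3439, 0.1501, 1.0, 0.3221),
]
for batch, simple, pruned, scale, logged in checks:
    combined = 0.5 * simple + scale * pruned
    print(f"batch {batch}: 0.5*{simple} + {scale}*{pruned} = {combined:.4f}"
          f"  (logged {logged})")
```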
2022-05-26 10:49:19,670 INFO [train.py:842] (3/4) Epoch 1, batch 50, loss[loss=0.5062, simple_loss=1.012, pruned_loss=6.972, over 7154.00 frames.], tot_loss[loss=0.5644, simple_loss=1.129, pruned_loss=7.098, over 323510.86 frames.], batch size: 19, lr: 3.00e-03
2022-05-26 10:49:59,120 INFO [train.py:842] (3/4) Epoch 1, batch 100, loss[loss=0.3753, simple_loss=0.7505, pruned_loss=6.697, over 6995.00 frames.], tot_loss[loss=0.5042, simple_loss=1.008, pruned_loss=7.008, over 566288.36 frames.], batch size: 16, lr: 3.00e-03
2022-05-26 10:50:37,746 INFO [train.py:842] (3/4) Epoch 1, batch 150, loss[loss=0.3834, simple_loss=0.7669, pruned_loss=6.754, over 6985.00 frames.], tot_loss[loss=0.4747, simple_loss=0.9495, pruned_loss=6.941, over 757098.60 frames.], batch size: 16, lr: 3.00e-03
2022-05-26 10:51:16,830 INFO [train.py:842] (3/4) Epoch 1, batch 200, loss[loss=0.4142, simple_loss=0.8284, pruned_loss=6.717, over 7280.00 frames.], tot_loss[loss=0.4528, simple_loss=0.9056, pruned_loss=6.886, over 907386.22 frames.], batch size: 25, lr: 3.00e-03
2022-05-26 10:51:55,434 INFO [train.py:842] (3/4) Epoch 1, batch 250, loss[loss=0.4633, simple_loss=0.9265, pruned_loss=6.849, over 7323.00 frames.], tot_loss[loss=0.4381, simple_loss=0.8763, pruned_loss=6.832, over 1015609.46 frames.], batch size: 21, lr: 3.00e-03
2022-05-26 10:52:34,301 INFO [train.py:842] (3/4) Epoch 1, batch 300, loss[loss=0.4215, simple_loss=0.843, pruned_loss=6.724, over 7310.00 frames.], tot_loss[loss=0.4273, simple_loss=0.8546, pruned_loss=6.792, over 1108244.19 frames.], batch size: 25, lr: 3.00e-03
2022-05-26 10:53:13,227 INFO [train.py:842] (3/4) Epoch 1, batch 350, loss[loss=0.4126, simple_loss=0.8253, pruned_loss=6.743, over 7262.00 frames.], tot_loss[loss=0.4187, simple_loss=0.8374, pruned_loss=6.761, over 1177779.62 frames.], batch size: 19, lr: 3.00e-03
2022-05-26 10:53:52,169 INFO [train.py:842] (3/4) Epoch 1, batch 400, loss[loss=0.4145, simple_loss=0.8291, pruned_loss=6.755, over 7414.00 frames.], tot_loss[loss=0.4122, simple_loss=0.8245, pruned_loss=6.744, over 1231059.03 frames.], batch size: 21, lr: 3.00e-03
2022-05-26 10:54:30,769 INFO [train.py:842] (3/4) Epoch 1, batch 450, loss[loss=0.3903, simple_loss=0.7806, pruned_loss=6.674, over 7419.00 frames.], tot_loss[loss=0.4059, simple_loss=0.8118, pruned_loss=6.726, over 1267538.95 frames.], batch size: 21, lr: 2.99e-03
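The learning-rate column decays smoothly from the initial 3.00e-03 (2.99e-03 by batch 450 here, 2.52e-03 by batch 5000, 2.08e-03 by batch 9150). These values are consistent with an Eden-style schedule driven by the lr_batches=5000 and lr_epochs=6 settings in the params dump; the sketch below reproduces a few logged values under that assumed form, with the epoch index counted from 0.

```python
def eden_lr(initial_lr: float, batch: int, epoch: int,
            lr_batches: float = 5000.0, lr_epochs: float = 6.0) -> float:
    """Assumed Eden-style decay, consistent with the values logged in this file:
    lr = initial_lr * ((batch^2 + lr_batches^2) / lr_batches^2) ** -0.25
                    * ((epoch^2 + lr_epochs^2) / lr_epochs^2) ** -0.25
    """
    b = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    e = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return initial_lr * b * e

for batch, logged in [(0, "3.00e-03"), (450, "2.99e-03"),
                      (5000, "2.52e-03"), (9150, "2.08e-03")]:
    print(f"batch {batch}: {eden_lr(3e-3, batch, epoch=0):.2e}  (logged {logged})")
```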
2022-05-26 10:55:09,756 INFO [train.py:842] (3/4) Epoch 1, batch 500, loss[loss=0.3933, simple_loss=0.7865, pruned_loss=6.719, over 7205.00 frames.], tot_loss[loss=0.3994, simple_loss=0.7989, pruned_loss=6.705, over 1303912.65 frames.], batch size: 22, lr: 2.99e-03
2022-05-26 10:55:48,098 INFO [train.py:842] (3/4) Epoch 1, batch 550, loss[loss=0.3899, simple_loss=0.7799, pruned_loss=6.687, over 7341.00 frames.], tot_loss[loss=0.3935, simple_loss=0.787, pruned_loss=6.701, over 1329815.41 frames.], batch size: 22, lr: 2.99e-03
2022-05-26 10:56:27,042 INFO [train.py:842] (3/4) Epoch 1, batch 600, loss[loss=0.3397, simple_loss=0.6794, pruned_loss=6.733, over 7105.00 frames.], tot_loss[loss=0.3829, simple_loss=0.7658, pruned_loss=6.692, over 1351103.07 frames.], batch size: 21, lr: 2.99e-03
2022-05-26 10:57:05,641 INFO [train.py:842] (3/4) Epoch 1, batch 650, loss[loss=0.2631, simple_loss=0.5262, pruned_loss=6.573, over 7023.00 frames.], tot_loss[loss=0.3715, simple_loss=0.7429, pruned_loss=6.697, over 1369431.55 frames.], batch size: 16, lr: 2.99e-03
2022-05-26 10:57:44,563 INFO [train.py:842] (3/4) Epoch 1, batch 700, loss[loss=0.3245, simple_loss=0.6491, pruned_loss=6.792, over 7195.00 frames.], tot_loss[loss=0.3584, simple_loss=0.7168, pruned_loss=6.694, over 1380708.18 frames.], batch size: 23, lr: 2.99e-03
2022-05-26 10:58:23,525 INFO [train.py:842] (3/4) Epoch 1, batch 750, loss[loss=0.259, simple_loss=0.518, pruned_loss=6.628, over 7259.00 frames.], tot_loss[loss=0.3467, simple_loss=0.6934, pruned_loss=6.695, over 1392868.80 frames.], batch size: 17, lr: 2.98e-03
2022-05-26 10:59:02,473 INFO [train.py:842] (3/4) Epoch 1, batch 800, loss[loss=0.347, simple_loss=0.6941, pruned_loss=6.882, over 7121.00 frames.], tot_loss[loss=0.3358, simple_loss=0.6715, pruned_loss=6.703, over 1398277.20 frames.], batch size: 21, lr: 2.98e-03
2022-05-26 10:59:41,348 INFO [train.py:842] (3/4) Epoch 1, batch 850, loss[loss=0.2752, simple_loss=0.5503, pruned_loss=6.813, over 7227.00 frames.], tot_loss[loss=0.3249, simple_loss=0.6499, pruned_loss=6.708, over 1403413.24 frames.], batch size: 21, lr: 2.98e-03
2022-05-26 11:00:20,378 INFO [train.py:842] (3/4) Epoch 1, batch 900, loss[loss=0.2937, simple_loss=0.5874, pruned_loss=6.75, over 7321.00 frames.], tot_loss[loss=0.3153, simple_loss=0.6305, pruned_loss=6.71, over 1407996.48 frames.], batch size: 21, lr: 2.98e-03
2022-05-26 11:00:58,858 INFO [train.py:842] (3/4) Epoch 1, batch 950, loss[loss=0.2455, simple_loss=0.4909, pruned_loss=6.648, over 6996.00 frames.], tot_loss[loss=0.3074, simple_loss=0.6149, pruned_loss=6.714, over 1405094.66 frames.], batch size: 16, lr: 2.97e-03
2022-05-26 11:01:37,558 INFO [train.py:842] (3/4) Epoch 1, batch 1000, loss[loss=0.2379, simple_loss=0.4758, pruned_loss=6.684, over 6997.00 frames.], tot_loss[loss=0.3011, simple_loss=0.6022, pruned_loss=6.721, over 1405616.58 frames.], batch size: 16, lr: 2.97e-03
2022-05-26 11:02:16,202 INFO [train.py:842] (3/4) Epoch 1, batch 1050, loss[loss=0.2262, simple_loss=0.4524, pruned_loss=6.55, over 6989.00 frames.], tot_loss[loss=0.2948, simple_loss=0.5897, pruned_loss=6.726, over 1407733.91 frames.], batch size: 16, lr: 2.97e-03
2022-05-26 11:02:54,897 INFO [train.py:842] (3/4) Epoch 1, batch 1100, loss[loss=0.2619, simple_loss=0.5237, pruned_loss=6.704, over 7208.00 frames.], tot_loss[loss=0.2898, simple_loss=0.5796, pruned_loss=6.732, over 1411516.18 frames.], batch size: 22, lr: 2.96e-03
2022-05-26 11:03:33,470 INFO [train.py:842] (3/4) Epoch 1, batch 1150, loss[loss=0.278, simple_loss=0.5559, pruned_loss=6.806, over 6855.00 frames.], tot_loss[loss=0.2848, simple_loss=0.5696, pruned_loss=6.738, over 1412253.25 frames.], batch size: 31, lr: 2.96e-03
2022-05-26 11:04:12,440 INFO [train.py:842] (3/4) Epoch 1, batch 1200, loss[loss=0.2663, simple_loss=0.5326, pruned_loss=6.723, over 7186.00 frames.], tot_loss[loss=0.2798, simple_loss=0.5595, pruned_loss=6.745, over 1419815.46 frames.], batch size: 26, lr: 2.96e-03
2022-05-26 11:04:50,808 INFO [train.py:842] (3/4) Epoch 1, batch 1250, loss[loss=0.251, simple_loss=0.502, pruned_loss=6.76, over 7376.00 frames.], tot_loss[loss=0.2747, simple_loss=0.5495, pruned_loss=6.748, over 1413895.16 frames.], batch size: 23, lr: 2.95e-03
2022-05-26 11:05:29,856 INFO [train.py:842] (3/4) Epoch 1, batch 1300, loss[loss=0.2511, simple_loss=0.5022, pruned_loss=6.805, over 7273.00 frames.], tot_loss[loss=0.2708, simple_loss=0.5416, pruned_loss=6.757, over 1421257.98 frames.], batch size: 24, lr: 2.95e-03
2022-05-26 11:06:08,507 INFO [train.py:842] (3/4) Epoch 1, batch 1350, loss[loss=0.2487, simple_loss=0.4975, pruned_loss=6.867, over 7146.00 frames.], tot_loss[loss=0.2663, simple_loss=0.5326, pruned_loss=6.762, over 1423403.38 frames.], batch size: 20, lr: 2.95e-03
2022-05-26 11:06:47,140 INFO [train.py:842] (3/4) Epoch 1, batch 1400, loss[loss=0.2965, simple_loss=0.5931, pruned_loss=6.981, over 7284.00 frames.], tot_loss[loss=0.2633, simple_loss=0.5265, pruned_loss=6.766, over 1419027.32 frames.], batch size: 24, lr: 2.94e-03
2022-05-26 11:07:25,757 INFO [train.py:842] (3/4) Epoch 1, batch 1450, loss[loss=0.2065, simple_loss=0.413, pruned_loss=6.575, over 7141.00 frames.], tot_loss[loss=0.26, simple_loss=0.5199, pruned_loss=6.769, over 1419259.51 frames.], batch size: 17, lr: 2.94e-03
2022-05-26 11:08:04,600 INFO [train.py:842] (3/4) Epoch 1, batch 1500, loss[loss=0.2385, simple_loss=0.477, pruned_loss=6.793, over 7308.00 frames.], tot_loss[loss=0.2562, simple_loss=0.5124, pruned_loss=6.773, over 1422822.25 frames.], batch size: 24, lr: 2.94e-03
2022-05-26 11:08:43,074 INFO [train.py:842] (3/4) Epoch 1, batch 1550, loss[loss=0.2557, simple_loss=0.5113, pruned_loss=6.868, over 7110.00 frames.], tot_loss[loss=0.2525, simple_loss=0.505, pruned_loss=6.777, over 1423612.06 frames.], batch size: 21, lr: 2.93e-03
2022-05-26 11:09:22,179 INFO [train.py:842] (3/4) Epoch 1, batch 1600, loss[loss=0.2361, simple_loss=0.4721, pruned_loss=6.854, over 7340.00 frames.], tot_loss[loss=0.2501, simple_loss=0.5002, pruned_loss=6.779, over 1422095.32 frames.], batch size: 20, lr: 2.93e-03
2022-05-26 11:10:01,425 INFO [train.py:842] (3/4) Epoch 1, batch 1650, loss[loss=0.2369, simple_loss=0.4737, pruned_loss=6.828, over 7163.00 frames.], tot_loss[loss=0.247, simple_loss=0.494, pruned_loss=6.784, over 1424533.13 frames.], batch size: 18, lr: 2.92e-03
2022-05-26 11:10:40,840 INFO [train.py:842] (3/4) Epoch 1, batch 1700, loss[loss=0.2461, simple_loss=0.4922, pruned_loss=6.86, over 6483.00 frames.], tot_loss[loss=0.2445, simple_loss=0.489, pruned_loss=6.787, over 1419126.91 frames.], batch size: 38, lr: 2.92e-03
2022-05-26 11:11:19,951 INFO [train.py:842] (3/4) Epoch 1, batch 1750, loss[loss=0.2455, simple_loss=0.491, pruned_loss=6.797, over 6449.00 frames.], tot_loss[loss=0.2417, simple_loss=0.4835, pruned_loss=6.788, over 1419272.93 frames.], batch size: 38, lr: 2.91e-03
2022-05-26 11:12:00,018 INFO [train.py:842] (3/4) Epoch 1, batch 1800, loss[loss=0.2443, simple_loss=0.4887, pruned_loss=6.832, over 7000.00 frames.], tot_loss[loss=0.2403, simple_loss=0.4805, pruned_loss=6.793, over 1419474.48 frames.], batch size: 28, lr: 2.91e-03
2022-05-26 11:12:39,030 INFO [train.py:842] (3/4) Epoch 1, batch 1850, loss[loss=0.2485, simple_loss=0.497, pruned_loss=6.815, over 4924.00 frames.], tot_loss[loss=0.2371, simple_loss=0.4743, pruned_loss=6.794, over 1420368.07 frames.], batch size: 52, lr: 2.91e-03
2022-05-26 11:13:18,090 INFO [train.py:842] (3/4) Epoch 1, batch 1900, loss[loss=0.2081, simple_loss=0.4162, pruned_loss=6.779, over 7258.00 frames.], tot_loss[loss=0.2355, simple_loss=0.4711, pruned_loss=6.796, over 1420309.03 frames.], batch size: 19, lr: 2.90e-03
2022-05-26 11:13:56,910 INFO [train.py:842] (3/4) Epoch 1, batch 1950, loss[loss=0.2187, simple_loss=0.4374, pruned_loss=6.756, over 7318.00 frames.], tot_loss[loss=0.2342, simple_loss=0.4683, pruned_loss=6.793, over 1423011.87 frames.], batch size: 21, lr: 2.90e-03
2022-05-26 11:14:35,960 INFO [train.py:842] (3/4) Epoch 1, batch 2000, loss[loss=0.2378, simple_loss=0.4756, pruned_loss=6.818, over 6790.00 frames.], tot_loss[loss=0.2329, simple_loss=0.4659, pruned_loss=6.793, over 1423369.50 frames.], batch size: 15, lr: 2.89e-03
2022-05-26 11:15:15,115 INFO [train.py:842] (3/4) Epoch 1, batch 2050, loss[loss=0.2384, simple_loss=0.4767, pruned_loss=6.821, over 7167.00 frames.], tot_loss[loss=0.232, simple_loss=0.464, pruned_loss=6.792, over 1421410.40 frames.], batch size: 26, lr: 2.89e-03
2022-05-26 11:15:53,869 INFO [train.py:842] (3/4) Epoch 1, batch 2100, loss[loss=0.2112, simple_loss=0.4223, pruned_loss=6.633, over 7158.00 frames.], tot_loss[loss=0.2312, simple_loss=0.4623, pruned_loss=6.796, over 1418657.25 frames.], batch size: 18, lr: 2.88e-03
2022-05-26 11:16:32,689 INFO [train.py:842] (3/4) Epoch 1, batch 2150, loss[loss=0.2406, simple_loss=0.4811, pruned_loss=6.948, over 7337.00 frames.], tot_loss[loss=0.2294, simple_loss=0.4587, pruned_loss=6.794, over 1422122.74 frames.], batch size: 22, lr: 2.88e-03
2022-05-26 11:17:11,547 INFO [train.py:842] (3/4) Epoch 1, batch 2200, loss[loss=0.2266, simple_loss=0.4532, pruned_loss=6.878, over 7323.00 frames.], tot_loss[loss=0.2281, simple_loss=0.4562, pruned_loss=6.791, over 1421840.75 frames.], batch size: 25, lr: 2.87e-03
2022-05-26 11:17:50,094 INFO [train.py:842] (3/4) Epoch 1, batch 2250, loss[loss=0.2237, simple_loss=0.4473, pruned_loss=6.873, over 7216.00 frames.], tot_loss[loss=0.2269, simple_loss=0.4537, pruned_loss=6.791, over 1420600.97 frames.], batch size: 21, lr: 2.86e-03
2022-05-26 11:18:28,796 INFO [train.py:842] (3/4) Epoch 1, batch 2300, loss[loss=0.2642, simple_loss=0.5284, pruned_loss=6.828, over 7257.00 frames.], tot_loss[loss=0.2261, simple_loss=0.4522, pruned_loss=6.798, over 1415264.71 frames.], batch size: 19, lr: 2.86e-03
2022-05-26 11:19:07,795 INFO [train.py:842] (3/4) Epoch 1, batch 2350, loss[loss=0.2333, simple_loss=0.4665, pruned_loss=6.815, over 5067.00 frames.], tot_loss[loss=0.2254, simple_loss=0.4508, pruned_loss=6.802, over 1415271.83 frames.], batch size: 52, lr: 2.85e-03
2022-05-26 11:19:47,049 INFO [train.py:842] (3/4) Epoch 1, batch 2400, loss[loss=0.1953, simple_loss=0.3907, pruned_loss=6.81, over 7439.00 frames.], tot_loss[loss=0.2248, simple_loss=0.4496, pruned_loss=6.806, over 1411796.56 frames.], batch size: 20, lr: 2.85e-03
2022-05-26 11:20:25,501 INFO [train.py:842] (3/4) Epoch 1, batch 2450, loss[loss=0.2529, simple_loss=0.5059, pruned_loss=6.819, over 4834.00 frames.], tot_loss[loss=0.2234, simple_loss=0.4469, pruned_loss=6.802, over 1411224.60 frames.], batch size: 54, lr: 2.84e-03
2022-05-26 11:21:04,501 INFO [train.py:842] (3/4) Epoch 1, batch 2500, loss[loss=0.2038, simple_loss=0.4075, pruned_loss=6.731, over 7328.00 frames.], tot_loss[loss=0.2225, simple_loss=0.4449, pruned_loss=6.803, over 1417149.80 frames.], batch size: 20, lr: 2.84e-03
2022-05-26 11:21:42,883 INFO [train.py:842] (3/4) Epoch 1, batch 2550, loss[loss=0.2286, simple_loss=0.4573, pruned_loss=6.828, over 7410.00 frames.], tot_loss[loss=0.2225, simple_loss=0.445, pruned_loss=6.805, over 1417654.67 frames.], batch size: 18, lr: 2.83e-03
2022-05-26 11:22:21,930 INFO [train.py:842] (3/4) Epoch 1, batch 2600, loss[loss=0.2089, simple_loss=0.4179, pruned_loss=6.84, over 7228.00 frames.], tot_loss[loss=0.2205, simple_loss=0.4409, pruned_loss=6.797, over 1419827.63 frames.], batch size: 20, lr: 2.83e-03
2022-05-26 11:23:00,466 INFO [train.py:842] (3/4) Epoch 1, batch 2650, loss[loss=0.2107, simple_loss=0.4215, pruned_loss=6.839, over 7236.00 frames.], tot_loss[loss=0.2197, simple_loss=0.4394, pruned_loss=6.792, over 1420883.87 frames.], batch size: 20, lr: 2.82e-03
2022-05-26 11:23:39,471 INFO [train.py:842] (3/4) Epoch 1, batch 2700, loss[loss=0.189, simple_loss=0.3779, pruned_loss=6.747, over 7142.00 frames.], tot_loss[loss=0.2186, simple_loss=0.4371, pruned_loss=6.79, over 1421439.41 frames.], batch size: 20, lr: 2.81e-03
2022-05-26 11:24:17,972 INFO [train.py:842] (3/4) Epoch 1, batch 2750, loss[loss=0.1968, simple_loss=0.3935, pruned_loss=6.833, over 7337.00 frames.], tot_loss[loss=0.2182, simple_loss=0.4364, pruned_loss=6.789, over 1422773.55 frames.], batch size: 20, lr: 2.81e-03
2022-05-26 11:24:56,718 INFO [train.py:842] (3/4) Epoch 1, batch 2800, loss[loss=0.2173, simple_loss=0.4346, pruned_loss=6.843, over 7146.00 frames.], tot_loss[loss=0.2166, simple_loss=0.4333, pruned_loss=6.79, over 1421417.62 frames.], batch size: 20, lr: 2.80e-03
2022-05-26 11:25:35,326 INFO [train.py:842] (3/4) Epoch 1, batch 2850, loss[loss=0.2077, simple_loss=0.4154, pruned_loss=6.696, over 7349.00 frames.], tot_loss[loss=0.2155, simple_loss=0.431, pruned_loss=6.789, over 1424617.57 frames.], batch size: 19, lr: 2.80e-03
2022-05-26 11:26:13,799 INFO [train.py:842] (3/4) Epoch 1, batch 2900, loss[loss=0.2302, simple_loss=0.4603, pruned_loss=6.849, over 7333.00 frames.], tot_loss[loss=0.2162, simple_loss=0.4324, pruned_loss=6.796, over 1420924.00 frames.], batch size: 20, lr: 2.79e-03
2022-05-26 11:26:52,588 INFO [train.py:842] (3/4) Epoch 1, batch 2950, loss[loss=0.223, simple_loss=0.446, pruned_loss=6.867, over 7237.00 frames.], tot_loss[loss=0.2148, simple_loss=0.4296, pruned_loss=6.798, over 1417183.50 frames.], batch size: 26, lr: 2.78e-03
2022-05-26 11:27:31,350 INFO [train.py:842] (3/4) Epoch 1, batch 3000, loss[loss=0.3516, simple_loss=0.3664, pruned_loss=1.684, over 7276.00 frames.], tot_loss[loss=0.249, simple_loss=0.4296, pruned_loss=6.771, over 1421366.26 frames.], batch size: 17, lr: 2.78e-03
2022-05-26 11:27:31,351 INFO [train.py:862] (3/4) Computing validation loss
2022-05-26 11:27:40,551 INFO [train.py:871] (3/4) Epoch 1, validation: loss=2.017, simple_loss=0.4861, pruned_loss=1.774, over 868885.00 frames.
2022-05-26 11:28:19,127 INFO [train.py:842] (3/4) Epoch 1, batch 3050, loss[loss=0.3487, simple_loss=0.4823, pruned_loss=1.075, over 6465.00 frames.], tot_loss[loss=0.2757, simple_loss=0.4396, pruned_loss=5.567, over 1420042.63 frames.], batch size: 38, lr: 2.77e-03
2022-05-26 11:28:58,770 INFO [train.py:842] (3/4) Epoch 1, batch 3100, loss[loss=0.2883, simple_loss=0.4297, pruned_loss=0.7342, over 7416.00 frames.], tot_loss[loss=0.2799, simple_loss=0.4344, pruned_loss=4.507, over 1426222.57 frames.], batch size: 21, lr: 2.77e-03
2022-05-26 11:29:37,551 INFO [train.py:842] (3/4) Epoch 1, batch 3150, loss[loss=0.2644, simple_loss=0.4267, pruned_loss=0.5106, over 7406.00 frames.], tot_loss[loss=0.2786, simple_loss=0.4329, pruned_loss=3.64, over 1427226.23 frames.], batch size: 21, lr: 2.76e-03
2022-05-26 11:30:16,514 INFO [train.py:842] (3/4) Epoch 1, batch 3200, loss[loss=0.2803, simple_loss=0.4668, pruned_loss=0.4693, over 7306.00 frames.], tot_loss[loss=0.2739, simple_loss=0.4314, pruned_loss=2.938, over 1423035.27 frames.], batch size: 24, lr: 2.75e-03
2022-05-26 11:30:54,981 INFO [train.py:842] (3/4) Epoch 1, batch 3250, loss[loss=0.2723, simple_loss=0.4676, pruned_loss=0.3855, over 7129.00 frames.], tot_loss[loss=0.2677, simple_loss=0.429, pruned_loss=2.366, over 1422897.47 frames.], batch size: 20, lr: 2.75e-03
2022-05-26 11:31:34,090 INFO [train.py:842] (3/4) Epoch 1, batch 3300, loss[loss=0.2366, simple_loss=0.418, pruned_loss=0.2759, over 7379.00 frames.], tot_loss[loss=0.2631, simple_loss=0.4288, pruned_loss=1.92, over 1418020.36 frames.], batch size: 23, lr: 2.74e-03
2022-05-26 11:32:12,607 INFO [train.py:842] (3/4) Epoch 1, batch 3350, loss[loss=0.2557, simple_loss=0.4483, pruned_loss=0.3158, over 7312.00 frames.], tot_loss[loss=0.2567, simple_loss=0.425, pruned_loss=1.553, over 1422645.31 frames.], batch size: 24, lr: 2.73e-03
2022-05-26 11:32:51,531 INFO [train.py:842] (3/4) Epoch 1, batch 3400, loss[loss=0.2303, simple_loss=0.4079, pruned_loss=0.2637, over 7263.00 frames.], tot_loss[loss=0.253, simple_loss=0.4244, pruned_loss=1.272, over 1423695.67 frames.], batch size: 19, lr: 2.73e-03
2022-05-26 11:33:30,153 INFO [train.py:842] (3/4) Epoch 1, batch 3450, loss[loss=0.2334, simple_loss=0.4171, pruned_loss=0.2483, over 7260.00 frames.], tot_loss[loss=0.2492, simple_loss=0.4229, pruned_loss=1.05, over 1423534.63 frames.], batch size: 25, lr: 2.72e-03
2022-05-26 11:34:09,044 INFO [train.py:842] (3/4) Epoch 1, batch 3500, loss[loss=0.3027, simple_loss=0.518, pruned_loss=0.4373, over 7145.00 frames.], tot_loss[loss=0.2468, simple_loss=0.4228, pruned_loss=0.8776, over 1421597.34 frames.], batch size: 26, lr: 2.72e-03
2022-05-26 11:34:47,678 INFO [train.py:842] (3/4) Epoch 1, batch 3550, loss[loss=0.2123, simple_loss=0.3822, pruned_loss=0.2117, over 7229.00 frames.], tot_loss[loss=0.2423, simple_loss=0.4186, pruned_loss=0.7374, over 1422564.11 frames.], batch size: 21, lr: 2.71e-03
2022-05-26 11:35:26,432 INFO [train.py:842] (3/4) Epoch 1, batch 3600, loss[loss=0.1898, simple_loss=0.3422, pruned_loss=0.1871, over 7416.00 frames.], tot_loss[loss=0.2389, simple_loss=0.4157, pruned_loss=0.628, over 1421501.17 frames.], batch size: 17, lr: 2.70e-03
2022-05-26 11:36:05,096 INFO [train.py:842] (3/4) Epoch 1, batch 3650, loss[loss=0.2173, simple_loss=0.3929, pruned_loss=0.2084, over 7221.00 frames.], tot_loss[loss=0.2366, simple_loss=0.4143, pruned_loss=0.5416, over 1421755.19 frames.], batch size: 21, lr: 2.70e-03
2022-05-26 11:36:43,948 INFO [train.py:842] (3/4) Epoch 1, batch 3700, loss[loss=0.2266, simple_loss=0.4092, pruned_loss=0.2201, over 6906.00 frames.], tot_loss[loss=0.2348, simple_loss=0.4132, pruned_loss=0.4735, over 1425829.99 frames.], batch size: 31, lr: 2.69e-03
2022-05-26 11:37:22,477 INFO [train.py:842] (3/4) Epoch 1, batch 3750, loss[loss=0.2021, simple_loss=0.3662, pruned_loss=0.19, over 7266.00 frames.], tot_loss[loss=0.2334, simple_loss=0.4125, pruned_loss=0.4221, over 1417517.01 frames.], batch size: 18, lr: 2.68e-03
2022-05-26 11:38:01,292 INFO [train.py:842] (3/4) Epoch 1, batch 3800, loss[loss=0.2488, simple_loss=0.4387, pruned_loss=0.2945, over 7127.00 frames.], tot_loss[loss=0.233, simple_loss=0.4131, pruned_loss=0.3815, over 1417664.66 frames.], batch size: 17, lr: 2.68e-03
2022-05-26 11:38:40,182 INFO [train.py:842] (3/4) Epoch 1, batch 3850, loss[loss=0.2294, simple_loss=0.4091, pruned_loss=0.248, over 7125.00 frames.], tot_loss[loss=0.2322, simple_loss=0.4129, pruned_loss=0.3484, over 1423204.52 frames.], batch size: 17, lr: 2.67e-03
2022-05-26 11:39:18,907 INFO [train.py:842] (3/4) Epoch 1, batch 3900, loss[loss=0.1932, simple_loss=0.3534, pruned_loss=0.165, over 6827.00 frames.], tot_loss[loss=0.2319, simple_loss=0.4132, pruned_loss=0.3233, over 1419254.47 frames.], batch size: 15, lr: 2.66e-03
2022-05-26 11:39:57,460 INFO [train.py:842] (3/4) Epoch 1, batch 3950, loss[loss=0.2405, simple_loss=0.4355, pruned_loss=0.2276, over 6829.00 frames.], tot_loss[loss=0.2298, simple_loss=0.4106, pruned_loss=0.3003, over 1417912.59 frames.], batch size: 31, lr: 2.66e-03
2022-05-26 11:40:36,048 INFO [train.py:842] (3/4) Epoch 1, batch 4000, loss[loss=0.2387, simple_loss=0.4316, pruned_loss=0.2292, over 7153.00 frames.], tot_loss[loss=0.2304, simple_loss=0.4123, pruned_loss=0.285, over 1418648.23 frames.], batch size: 26, lr: 2.65e-03
2022-05-26 11:41:14,461 INFO [train.py:842] (3/4) Epoch 1, batch 4050, loss[loss=0.2472, simple_loss=0.442, pruned_loss=0.2623, over 5447.00 frames.], tot_loss[loss=0.2282, simple_loss=0.4095, pruned_loss=0.2683, over 1421255.77 frames.], batch size: 53, lr: 2.64e-03
2022-05-26 11:41:53,418 INFO [train.py:842] (3/4) Epoch 1, batch 4100, loss[loss=0.2317, simple_loss=0.4183, pruned_loss=0.2258, over 6524.00 frames.], tot_loss[loss=0.2264, simple_loss=0.4068, pruned_loss=0.2558, over 1419561.07 frames.], batch size: 38, lr: 2.64e-03
2022-05-26 11:42:32,081 INFO [train.py:842] (3/4) Epoch 1, batch 4150, loss[loss=0.2009, simple_loss=0.3655, pruned_loss=0.1821, over 7428.00 frames.], tot_loss[loss=0.2262, simple_loss=0.4069, pruned_loss=0.2477, over 1424730.30 frames.], batch size: 20, lr: 2.63e-03
2022-05-26 11:43:10,840 INFO [train.py:842] (3/4) Epoch 1, batch 4200, loss[loss=0.2098, simple_loss=0.3853, pruned_loss=0.1711, over 7318.00 frames.], tot_loss[loss=0.2262, simple_loss=0.4073, pruned_loss=0.2412, over 1428523.61 frames.], batch size: 21, lr: 2.63e-03
2022-05-26 11:43:49,396 INFO [train.py:842] (3/4) Epoch 1, batch 4250, loss[loss=0.2366, simple_loss=0.4247, pruned_loss=0.2425, over 7150.00 frames.], tot_loss[loss=0.2255, simple_loss=0.4064, pruned_loss=0.2353, over 1427213.08 frames.], batch size: 20, lr: 2.62e-03
2022-05-26 11:44:28,196 INFO [train.py:842] (3/4) Epoch 1, batch 4300, loss[loss=0.2429, simple_loss=0.4401, pruned_loss=0.2289, over 7210.00 frames.], tot_loss[loss=0.2246, simple_loss=0.405, pruned_loss=0.2302, over 1425963.86 frames.], batch size: 22, lr: 2.61e-03
2022-05-26 11:45:06,710 INFO [train.py:842] (3/4) Epoch 1, batch 4350, loss[loss=0.1831, simple_loss=0.3376, pruned_loss=0.1432, over 7165.00 frames.], tot_loss[loss=0.2232, simple_loss=0.403, pruned_loss=0.2246, over 1428069.46 frames.], batch size: 19, lr: 2.61e-03
2022-05-26 11:45:45,327 INFO [train.py:842] (3/4) Epoch 1, batch 4400, loss[loss=0.2303, simple_loss=0.419, pruned_loss=0.2076, over 7221.00 frames.], tot_loss[loss=0.2242, simple_loss=0.405, pruned_loss=0.2223, over 1428782.80 frames.], batch size: 21, lr: 2.60e-03
2022-05-26 11:46:23,890 INFO [train.py:842] (3/4) Epoch 1, batch 4450, loss[loss=0.212, simple_loss=0.3872, pruned_loss=0.1843, over 7157.00 frames.], tot_loss[loss=0.2225, simple_loss=0.4024, pruned_loss=0.2174, over 1430519.19 frames.], batch size: 19, lr: 2.59e-03
2022-05-26 11:47:02,780 INFO [train.py:842] (3/4) Epoch 1, batch 4500, loss[loss=0.2305, simple_loss=0.4175, pruned_loss=0.2175, over 7253.00 frames.], tot_loss[loss=0.2225, simple_loss=0.4028, pruned_loss=0.2144, over 1432936.63 frames.], batch size: 19, lr: 2.59e-03
2022-05-26 11:47:41,427 INFO [train.py:842] (3/4) Epoch 1, batch 4550, loss[loss=0.2243, simple_loss=0.4079, pruned_loss=0.2035, over 7067.00 frames.], tot_loss[loss=0.2216, simple_loss=0.4012, pruned_loss=0.2121, over 1430859.47 frames.], batch size: 18, lr: 2.58e-03
2022-05-26 11:48:20,137 INFO [train.py:842] (3/4) Epoch 1, batch 4600, loss[loss=0.2311, simple_loss=0.4195, pruned_loss=0.214, over 7263.00 frames.], tot_loss[loss=0.221, simple_loss=0.4005, pruned_loss=0.2096, over 1429763.17 frames.], batch size: 19, lr: 2.57e-03
2022-05-26 11:48:58,612 INFO [train.py:842] (3/4) Epoch 1, batch 4650, loss[loss=0.2411, simple_loss=0.435, pruned_loss=0.2356, over 7114.00 frames.], tot_loss[loss=0.22, simple_loss=0.3991, pruned_loss=0.2066, over 1430346.19 frames.], batch size: 28, lr: 2.57e-03
2022-05-26 11:49:37,493 INFO [train.py:842] (3/4) Epoch 1, batch 4700, loss[loss=0.2013, simple_loss=0.3645, pruned_loss=0.191, over 7295.00 frames.], tot_loss[loss=0.2198, simple_loss=0.3987, pruned_loss=0.2059, over 1429774.83 frames.], batch size: 17, lr: 2.56e-03
2022-05-26 11:50:15,963 INFO [train.py:842] (3/4) Epoch 1, batch 4750, loss[loss=0.2164, simple_loss=0.3939, pruned_loss=0.1943, over 5366.00 frames.], tot_loss[loss=0.2214, simple_loss=0.4014, pruned_loss=0.208, over 1428151.19 frames.], batch size: 54, lr: 2.55e-03
2022-05-26 11:50:54,734 INFO [train.py:842] (3/4) Epoch 1, batch 4800, loss[loss=0.195, simple_loss=0.3577, pruned_loss=0.1611, over 7427.00 frames.], tot_loss[loss=0.2204, simple_loss=0.4, pruned_loss=0.205, over 1429667.85 frames.], batch size: 20, lr: 2.55e-03
2022-05-26 11:51:33,202 INFO [train.py:842] (3/4) Epoch 1, batch 4850, loss[loss=0.1953, simple_loss=0.3556, pruned_loss=0.1749, over 7267.00 frames.], tot_loss[loss=0.2202, simple_loss=0.3997, pruned_loss=0.2044, over 1427592.80 frames.], batch size: 19, lr: 2.54e-03
2022-05-26 11:52:11,926 INFO [train.py:842] (3/4) Epoch 1, batch 4900, loss[loss=0.1904, simple_loss=0.3478, pruned_loss=0.1648, over 7331.00 frames.], tot_loss[loss=0.2203, simple_loss=0.3999, pruned_loss=0.2038, over 1428065.58 frames.], batch size: 20, lr: 2.54e-03
2022-05-26 11:52:50,289 INFO [train.py:842] (3/4) Epoch 1, batch 4950, loss[loss=0.2174, simple_loss=0.3932, pruned_loss=0.2079, over 7363.00 frames.], tot_loss[loss=0.2204, simple_loss=0.4002, pruned_loss=0.2038, over 1423351.96 frames.], batch size: 19, lr: 2.53e-03
2022-05-26 11:53:29,137 INFO [train.py:842] (3/4) Epoch 1, batch 5000, loss[loss=0.2162, simple_loss=0.3982, pruned_loss=0.1711, over 7342.00 frames.], tot_loss[loss=0.2199, simple_loss=0.3993, pruned_loss=0.2022, over 1423230.45 frames.], batch size: 22, lr: 2.52e-03
2022-05-26 11:54:07,521 INFO [train.py:842] (3/4) Epoch 1, batch 5050, loss[loss=0.2652, simple_loss=0.4691, pruned_loss=0.3071, over 7320.00 frames.], tot_loss[loss=0.2193, simple_loss=0.3984, pruned_loss=0.2011, over 1422400.41 frames.], batch size: 21, lr: 2.52e-03
2022-05-26 11:54:46,105 INFO [train.py:842] (3/4) Epoch 1, batch 5100, loss[loss=0.2005, simple_loss=0.3684, pruned_loss=0.1628, over 7207.00 frames.], tot_loss[loss=0.218, simple_loss=0.3963, pruned_loss=0.1988, over 1420561.21 frames.], batch size: 22, lr: 2.51e-03
2022-05-26 11:55:24,627 INFO [train.py:842] (3/4) Epoch 1, batch 5150, loss[loss=0.2115, simple_loss=0.3864, pruned_loss=0.1836, over 7432.00 frames.], tot_loss[loss=0.2172, simple_loss=0.395, pruned_loss=0.1966, over 1422149.51 frames.], batch size: 20, lr: 2.50e-03
2022-05-26 11:56:03,340 INFO [train.py:842] (3/4) Epoch 1, batch 5200, loss[loss=0.268, simple_loss=0.4783, pruned_loss=0.2887, over 7288.00 frames.], tot_loss[loss=0.2176, simple_loss=0.3958, pruned_loss=0.1972, over 1420971.71 frames.], batch size: 25, lr: 2.50e-03
2022-05-26 11:56:41,778 INFO [train.py:842] (3/4) Epoch 1, batch 5250, loss[loss=0.2349, simple_loss=0.426, pruned_loss=0.2192, over 4993.00 frames.], tot_loss[loss=0.2161, simple_loss=0.3934, pruned_loss=0.1943, over 1418888.63 frames.], batch size: 53, lr: 2.49e-03
2022-05-26 11:57:20,422 INFO [train.py:842] (3/4) Epoch 1, batch 5300, loss[loss=0.2096, simple_loss=0.3814, pruned_loss=0.1893, over 7278.00 frames.], tot_loss[loss=0.2161, simple_loss=0.3934, pruned_loss=0.1937, over 1416489.11 frames.], batch size: 17, lr: 2.49e-03
2022-05-26 11:57:58,740 INFO [train.py:842] (3/4) Epoch 1, batch 5350, loss[loss=0.2242, simple_loss=0.4084, pruned_loss=0.1999, over 7368.00 frames.], tot_loss[loss=0.2155, simple_loss=0.3925, pruned_loss=0.1922, over 1414281.58 frames.], batch size: 23, lr: 2.48e-03
2022-05-26 11:58:37,528 INFO [train.py:842] (3/4) Epoch 1, batch 5400, loss[loss=0.2345, simple_loss=0.4246, pruned_loss=0.2221, over 7153.00 frames.], tot_loss[loss=0.2154, simple_loss=0.3924, pruned_loss=0.1914, over 1420059.07 frames.], batch size: 28, lr: 2.47e-03
2022-05-26 11:59:16,009 INFO [train.py:842] (3/4) Epoch 1, batch 5450, loss[loss=0.2399, simple_loss=0.4354, pruned_loss=0.2216, over 7139.00 frames.], tot_loss[loss=0.2144, simple_loss=0.3908, pruned_loss=0.1895, over 1421044.10 frames.], batch size: 20, lr: 2.47e-03
2022-05-26 11:59:54,857 INFO [train.py:842] (3/4) Epoch 1, batch 5500, loss[loss=0.2377, simple_loss=0.4265, pruned_loss=0.2443, over 4735.00 frames.], tot_loss[loss=0.2138, simple_loss=0.3899, pruned_loss=0.1886, over 1418912.06 frames.], batch size: 52, lr: 2.46e-03
2022-05-26 12:00:33,672 INFO [train.py:842] (3/4) Epoch 1, batch 5550, loss[loss=0.189, simple_loss=0.3469, pruned_loss=0.1555, over 6803.00 frames.], tot_loss[loss=0.2135, simple_loss=0.3892, pruned_loss=0.1892, over 1421003.88 frames.], batch size: 15, lr: 2.45e-03
2022-05-26 12:01:12,572 INFO [train.py:842] (3/4) Epoch 1, batch 5600, loss[loss=0.2454, simple_loss=0.4451, pruned_loss=0.2284, over 6364.00 frames.], tot_loss[loss=0.2137, simple_loss=0.3898, pruned_loss=0.1885, over 1422691.35 frames.], batch size: 37, lr: 2.45e-03
2022-05-26 12:01:51,150 INFO [train.py:842] (3/4) Epoch 1, batch 5650, loss[loss=0.2064, simple_loss=0.3742, pruned_loss=0.1931, over 7289.00 frames.], tot_loss[loss=0.214, simple_loss=0.3903, pruned_loss=0.1889, over 1420877.07 frames.], batch size: 17, lr: 2.44e-03
2022-05-26 12:02:29,832 INFO [train.py:842] (3/4) Epoch 1, batch 5700, loss[loss=0.2071, simple_loss=0.3819, pruned_loss=0.1614, over 7407.00 frames.], tot_loss[loss=0.2144, simple_loss=0.391, pruned_loss=0.189, over 1420449.70 frames.], batch size: 20, lr: 2.44e-03
2022-05-26 12:03:08,176 INFO [train.py:842] (3/4) Epoch 1, batch 5750, loss[loss=0.1849, simple_loss=0.3386, pruned_loss=0.1562, over 7272.00 frames.], tot_loss[loss=0.2141, simple_loss=0.3906, pruned_loss=0.1881, over 1422405.91 frames.], batch size: 18, lr: 2.43e-03
2022-05-26 12:03:47,055 INFO [train.py:842] (3/4) Epoch 1, batch 5800, loss[loss=0.2348, simple_loss=0.4308, pruned_loss=0.1941, over 7192.00 frames.], tot_loss[loss=0.214, simple_loss=0.3906, pruned_loss=0.1867, over 1428219.63 frames.], batch size: 22, lr: 2.42e-03
2022-05-26 12:04:25,396 INFO [train.py:842] (3/4) Epoch 1, batch 5850, loss[loss=0.2152, simple_loss=0.3928, pruned_loss=0.1882, over 7428.00 frames.], tot_loss[loss=0.2146, simple_loss=0.3916, pruned_loss=0.1879, over 1426771.91 frames.], batch size: 20, lr: 2.42e-03
2022-05-26 12:05:04,402 INFO [train.py:842] (3/4) Epoch 1, batch 5900, loss[loss=0.2193, simple_loss=0.4013, pruned_loss=0.1868, over 7316.00 frames.], tot_loss[loss=0.2131, simple_loss=0.389, pruned_loss=0.186, over 1429847.35 frames.], batch size: 21, lr: 2.41e-03
2022-05-26 12:05:43,148 INFO [train.py:842] (3/4) Epoch 1, batch 5950, loss[loss=0.2312, simple_loss=0.4196, pruned_loss=0.2142, over 7176.00 frames.], tot_loss[loss=0.2131, simple_loss=0.389, pruned_loss=0.186, over 1430253.99 frames.], batch size: 19, lr: 2.41e-03
2022-05-26 12:06:21,966 INFO [train.py:842] (3/4) Epoch 1, batch 6000, loss[loss=0.4227, simple_loss=0.3998, pruned_loss=0.2228, over 7177.00 frames.], tot_loss[loss=0.215, simple_loss=0.3891, pruned_loss=0.1857, over 1426150.35 frames.], batch size: 26, lr: 2.40e-03
2022-05-26 12:06:21,966 INFO [train.py:862] (3/4) Computing validation loss
2022-05-26 12:06:31,825 INFO [train.py:871] (3/4) Epoch 1, validation: loss=0.2892, simple_loss=0.3436, pruned_loss=0.1174, over 868885.00 frames.
2022-05-26 12:07:10,559 INFO [train.py:842] (3/4) Epoch 1, batch 6050, loss[loss=0.3221, simple_loss=0.3439, pruned_loss=0.1501, over 6737.00 frames.], tot_loss[loss=0.2596, simple_loss=0.3931, pruned_loss=0.1919, over 1422957.05 frames.], batch size: 15, lr: 2.39e-03
2022-05-26 12:07:49,764 INFO [train.py:842] (3/4) Epoch 1, batch 6100, loss[loss=0.3213, simple_loss=0.3436, pruned_loss=0.1495, over 6769.00 frames.], tot_loss[loss=0.2855, simple_loss=0.3915, pruned_loss=0.1899, over 1424935.38 frames.], batch size: 15, lr: 2.39e-03
2022-05-26 12:08:28,493 INFO [train.py:842] (3/4) Epoch 1, batch 6150, loss[loss=0.3884, simple_loss=0.4114, pruned_loss=0.1827, over 7122.00 frames.], tot_loss[loss=0.3058, simple_loss=0.3914, pruned_loss=0.1879, over 1426594.30 frames.], batch size: 21, lr: 2.38e-03
2022-05-26 12:09:07,165 INFO [train.py:842] (3/4) Epoch 1, batch 6200, loss[loss=0.4401, simple_loss=0.4294, pruned_loss=0.2254, over 7341.00 frames.], tot_loss[loss=0.3216, simple_loss=0.3918, pruned_loss=0.1862, over 1427244.09 frames.], batch size: 22, lr: 2.38e-03
2022-05-26 12:09:45,799 INFO [train.py:842] (3/4) Epoch 1, batch 6250, loss[loss=0.3602, simple_loss=0.3759, pruned_loss=0.1723, over 7380.00 frames.], tot_loss[loss=0.331, simple_loss=0.3902, pruned_loss=0.1831, over 1428384.63 frames.], batch size: 23, lr: 2.37e-03
2022-05-26 12:10:25,219 INFO [train.py:842] (3/4) Epoch 1, batch 6300, loss[loss=0.3679, simple_loss=0.3883, pruned_loss=0.1738, over 7284.00 frames.], tot_loss[loss=0.3395, simple_loss=0.389, pruned_loss=0.1817, over 1425182.07 frames.], batch size: 18, lr: 2.37e-03
2022-05-26 12:11:03,822 INFO [train.py:842] (3/4) Epoch 1, batch 6350, loss[loss=0.404, simple_loss=0.4109, pruned_loss=0.1985, over 7145.00 frames.], tot_loss[loss=0.3451, simple_loss=0.3883, pruned_loss=0.1795, over 1425303.28 frames.], batch size: 20, lr: 2.36e-03
2022-05-26 12:11:42,706 INFO [train.py:842] (3/4) Epoch 1, batch 6400, loss[loss=0.3363, simple_loss=0.3643, pruned_loss=0.1542, over 7357.00 frames.], tot_loss[loss=0.3542, simple_loss=0.3905, pruned_loss=0.1812, over 1425061.76 frames.], batch size: 19, lr: 2.35e-03
2022-05-26 12:12:21,107 INFO [train.py:842] (3/4) Epoch 1, batch 6450, loss[loss=0.3578, simple_loss=0.3957, pruned_loss=0.16, over 7098.00 frames.], tot_loss[loss=0.3579, simple_loss=0.3907, pruned_loss=0.1798, over 1425953.06 frames.], batch size: 21, lr: 2.35e-03
2022-05-26 12:12:59,896 INFO [train.py:842] (3/4) Epoch 1, batch 6500, loss[loss=0.2868, simple_loss=0.3272, pruned_loss=0.1232, over 7129.00 frames.], tot_loss[loss=0.3602, simple_loss=0.3898, pruned_loss=0.1788, over 1421632.79 frames.], batch size: 17, lr: 2.34e-03
2022-05-26 12:13:38,148 INFO [train.py:842] (3/4) Epoch 1, batch 6550, loss[loss=0.3438, simple_loss=0.3916, pruned_loss=0.148, over 7305.00 frames.], tot_loss[loss=0.3626, simple_loss=0.3906, pruned_loss=0.1779, over 1418530.56 frames.], batch size: 21, lr: 2.34e-03
2022-05-26 12:14:17,094 INFO [train.py:842] (3/4) Epoch 1, batch 6600, loss[loss=0.3356, simple_loss=0.3865, pruned_loss=0.1423, over 7151.00 frames.], tot_loss[loss=0.3651, simple_loss=0.3914, pruned_loss=0.1776, over 1424226.78 frames.], batch size: 26, lr: 2.33e-03
2022-05-26 12:14:55,791 INFO [train.py:842] (3/4) Epoch 1, batch 6650, loss[loss=0.3267, simple_loss=0.3524, pruned_loss=0.1505, over 7067.00 frames.], tot_loss[loss=0.369, simple_loss=0.3934, pruned_loss=0.1786, over 1422448.15 frames.], batch size: 18, lr: 2.33e-03
2022-05-26 12:15:34,738 INFO [train.py:842] (3/4) Epoch 1, batch 6700, loss[loss=0.5536, simple_loss=0.509, pruned_loss=0.2991, over 4874.00 frames.], tot_loss[loss=0.3678, simple_loss=0.3924, pruned_loss=0.1766, over 1423798.36 frames.], batch size: 52, lr: 2.32e-03
2022-05-26 12:16:13,284 INFO [train.py:842] (3/4) Epoch 1, batch 6750, loss[loss=0.3527, simple_loss=0.3896, pruned_loss=0.1579, over 7289.00 frames.], tot_loss[loss=0.3658, simple_loss=0.3908, pruned_loss=0.1742, over 1427469.67 frames.], batch size: 24, lr: 2.31e-03
2022-05-26 12:16:52,116 INFO [train.py:842] (3/4) Epoch 1, batch 6800, loss[loss=0.2862, simple_loss=0.3378, pruned_loss=0.1173, over 7418.00 frames.], tot_loss[loss=0.3651, simple_loss=0.3904, pruned_loss=0.1729, over 1429144.47 frames.], batch size: 20, lr: 2.31e-03
2022-05-26 12:17:30,589 INFO [train.py:842] (3/4) Epoch 1, batch 6850, loss[loss=0.3805, simple_loss=0.4079, pruned_loss=0.1766, over 7189.00 frames.], tot_loss[loss=0.3634, simple_loss=0.3892, pruned_loss=0.1711, over 1427928.13 frames.], batch size: 23, lr: 2.30e-03
2022-05-26 12:18:19,079 INFO [train.py:842] (3/4) Epoch 1, batch 6900, loss[loss=0.37, simple_loss=0.3939, pruned_loss=0.1731, over 7414.00 frames.], tot_loss[loss=0.361, simple_loss=0.3868, pruned_loss=0.1694, over 1428166.51 frames.], batch size: 21, lr: 2.30e-03
2022-05-26 12:18:57,551 INFO [train.py:842] (3/4) Epoch 1, batch 6950, loss[loss=0.4072, simple_loss=0.4053, pruned_loss=0.2045, over 7278.00 frames.], tot_loss[loss=0.3603, simple_loss=0.3866, pruned_loss=0.1684, over 1422541.43 frames.], batch size: 18, lr: 2.29e-03
2022-05-26 12:19:36,269 INFO [train.py:842] (3/4) Epoch 1, batch 7000, loss[loss=0.3768, simple_loss=0.3898, pruned_loss=0.1819, over 7168.00 frames.], tot_loss[loss=0.3614, simple_loss=0.3878, pruned_loss=0.1686, over 1420330.96 frames.], batch size: 18, lr: 2.29e-03
2022-05-26 12:20:14,724 INFO [train.py:842] (3/4) Epoch 1, batch 7050, loss[loss=0.3037, simple_loss=0.3486, pruned_loss=0.1294, over 7154.00 frames.], tot_loss[loss=0.3608, simple_loss=0.3871, pruned_loss=0.1681, over 1421393.54 frames.], batch size: 19, lr: 2.28e-03
2022-05-26 12:20:53,594 INFO [train.py:842] (3/4) Epoch 1, batch 7100, loss[loss=0.3241, simple_loss=0.376, pruned_loss=0.1361, over 7333.00 frames.], tot_loss[loss=0.3602, simple_loss=0.3865, pruned_loss=0.1676, over 1424446.24 frames.], batch size: 22, lr: 2.28e-03
2022-05-26 12:21:32,665 INFO [train.py:842] (3/4) Epoch 1, batch 7150, loss[loss=0.3971, simple_loss=0.4197, pruned_loss=0.1873, over 7212.00 frames.], tot_loss[loss=0.3584, simple_loss=0.385, pruned_loss=0.1664, over 1420378.63 frames.], batch size: 22, lr: 2.27e-03
2022-05-26 12:22:11,496 INFO [train.py:842] (3/4) Epoch 1, batch 7200, loss[loss=0.3351, simple_loss=0.3794, pruned_loss=0.1454, over 7343.00 frames.], tot_loss[loss=0.3569, simple_loss=0.3847, pruned_loss=0.165, over 1423534.68 frames.], batch size: 22, lr: 2.27e-03
2022-05-26 12:22:50,068 INFO [train.py:842] (3/4) Epoch 1, batch 7250, loss[loss=0.3564, simple_loss=0.369, pruned_loss=0.1719, over 7073.00 frames.], tot_loss[loss=0.3594, simple_loss=0.3863, pruned_loss=0.1665, over 1417538.21 frames.], batch size: 18, lr: 2.26e-03
2022-05-26 12:23:28,697 INFO [train.py:842] (3/4) Epoch 1, batch 7300, loss[loss=0.3675, simple_loss=0.3981, pruned_loss=0.1684, over 7045.00 frames.], tot_loss[loss=0.3614, simple_loss=0.3877, pruned_loss=0.1678, over 1417053.27 frames.], batch size: 28, lr: 2.26e-03
2022-05-26 12:24:07,153 INFO [train.py:842] (3/4) Epoch 1, batch 7350, loss[loss=0.3269, simple_loss=0.3424, pruned_loss=0.1557, over 6800.00 frames.], tot_loss[loss=0.3594, simple_loss=0.3865, pruned_loss=0.1663, over 1416693.41 frames.], batch size: 15, lr: 2.25e-03
2022-05-26 12:24:45,818 INFO [train.py:842] (3/4) Epoch 1, batch 7400, loss[loss=0.345, simple_loss=0.3706, pruned_loss=0.1597, over 7411.00 frames.], tot_loss[loss=0.3594, simple_loss=0.3871, pruned_loss=0.166, over 1417592.86 frames.], batch size: 18, lr: 2.24e-03
2022-05-26 12:25:24,574 INFO [train.py:842] (3/4) Epoch 1, batch 7450, loss[loss=0.3711, simple_loss=0.3722, pruned_loss=0.1849, over 7419.00 frames.], tot_loss[loss=0.3618, simple_loss=0.3889, pruned_loss=0.1674, over 1425179.29 frames.], batch size: 18, lr: 2.24e-03
2022-05-26 12:26:03,341 INFO [train.py:842] (3/4) Epoch 1, batch 7500, loss[loss=0.4276, simple_loss=0.4282, pruned_loss=0.2136, over 7418.00 frames.], tot_loss[loss=0.3636, simple_loss=0.3899, pruned_loss=0.1688, over 1421842.47 frames.], batch size: 20, lr: 2.23e-03
2022-05-26 12:26:41,956 INFO [train.py:842] (3/4) Epoch 1, batch 7550, loss[loss=0.3387, simple_loss=0.3788, pruned_loss=0.1492, over 7327.00 frames.], tot_loss[loss=0.3615, simple_loss=0.3883, pruned_loss=0.1674, over 1420317.32 frames.], batch size: 20, lr: 2.23e-03
2022-05-26 12:27:21,044 INFO [train.py:842] (3/4) Epoch 1, batch 7600, loss[loss=0.2795, simple_loss=0.343, pruned_loss=0.108, over 7419.00 frames.], tot_loss[loss=0.3585, simple_loss=0.3861, pruned_loss=0.1655, over 1423857.20 frames.], batch size: 21, lr: 2.22e-03
2022-05-26 12:28:28,323 INFO [train.py:842] (3/4) Epoch 1, batch 7650, loss[loss=0.4082, simple_loss=0.4156, pruned_loss=0.2005, over 7323.00 frames.], tot_loss[loss=0.3587, simple_loss=0.3868, pruned_loss=0.1654, over 1427335.46 frames.], batch size: 20, lr: 2.22e-03
2022-05-26 12:29:07,139 INFO [train.py:842] (3/4) Epoch 1, batch 7700, loss[loss=0.3387, simple_loss=0.3843, pruned_loss=0.1465, over 7240.00 frames.], tot_loss[loss=0.359, simple_loss=0.3872, pruned_loss=0.1654, over 1425196.14 frames.], batch size: 20, lr: 2.21e-03
2022-05-26 12:29:46,014 INFO [train.py:842] (3/4) Epoch 1, batch 7750, loss[loss=0.3221, simple_loss=0.3523, pruned_loss=0.146, over 7357.00 frames.], tot_loss[loss=0.3557, simple_loss=0.3849, pruned_loss=0.1632, over 1426045.69 frames.], batch size: 19, lr: 2.21e-03
2022-05-26 12:30:24,829 INFO [train.py:842] (3/4) Epoch 1, batch 7800, loss[loss=0.3813, simple_loss=0.4029, pruned_loss=0.1798, over 7127.00 frames.], tot_loss[loss=0.3547, simple_loss=0.3843, pruned_loss=0.1626, over 1428399.57 frames.], batch size: 28, lr: 2.20e-03
2022-05-26 12:31:03,156 INFO [train.py:842] (3/4) Epoch 1, batch 7850, loss[loss=0.3901, simple_loss=0.4085, pruned_loss=0.1858, over 7284.00 frames.], tot_loss[loss=0.3556, simple_loss=0.3858, pruned_loss=0.1627, over 1430896.11 frames.], batch size: 24, lr: 2.20e-03
2022-05-26 12:31:41,890 INFO [train.py:842] (3/4) Epoch 1, batch 7900, loss[loss=0.3716, simple_loss=0.4033, pruned_loss=0.17, over 7440.00 frames.], tot_loss[loss=0.3565, simple_loss=0.3863, pruned_loss=0.1633, over 1428190.48 frames.], batch size: 20, lr: 2.19e-03
2022-05-26 12:32:20,422 INFO [train.py:842] (3/4) Epoch 1, batch 7950, loss[loss=0.4043, simple_loss=0.428, pruned_loss=0.1903, over 6385.00 frames.], tot_loss[loss=0.3541, simple_loss=0.3849, pruned_loss=0.1616, over 1423044.26 frames.], batch size: 38, lr: 2.19e-03
2022-05-26 12:33:01,914 INFO [train.py:842] (3/4) Epoch 1, batch 8000, loss[loss=0.3228, simple_loss=0.3506, pruned_loss=0.1475, over 7145.00 frames.], tot_loss[loss=0.3544, simple_loss=0.3852, pruned_loss=0.1618, over 1425238.67 frames.], batch size: 17, lr: 2.18e-03
2022-05-26 12:33:40,593 INFO [train.py:842] (3/4) Epoch 1, batch 8050, loss[loss=0.2623, simple_loss=0.3013, pruned_loss=0.1116, over 7114.00 frames.], tot_loss[loss=0.3526, simple_loss=0.3839, pruned_loss=0.1607, over 1429039.89 frames.], batch size: 17, lr: 2.18e-03
2022-05-26 12:34:19,302 INFO [train.py:842] (3/4) Epoch 1, batch 8100, loss[loss=0.3533, simple_loss=0.3873, pruned_loss=0.1596, over 7263.00 frames.], tot_loss[loss=0.352, simple_loss=0.3837, pruned_loss=0.1602, over 1427710.29 frames.], batch size: 19, lr: 2.17e-03
2022-05-26 12:34:57,733 INFO [train.py:842] (3/4) Epoch 1, batch 8150, loss[loss=0.3318, simple_loss=0.3771, pruned_loss=0.1432, over 7215.00 frames.], tot_loss[loss=0.3554, simple_loss=0.386, pruned_loss=0.1624, over 1424043.69 frames.], batch size: 22, lr: 2.17e-03
2022-05-26 12:35:36,466 INFO [train.py:842] (3/4) Epoch 1, batch 8200, loss[loss=0.3297, simple_loss=0.3603, pruned_loss=0.1496, over 7147.00 frames.], tot_loss[loss=0.3538, simple_loss=0.3853, pruned_loss=0.1611, over 1421765.25 frames.], batch size: 18, lr: 2.16e-03
2022-05-26 12:36:15,279 INFO [train.py:842] (3/4) Epoch 1, batch 8250, loss[loss=0.3359, simple_loss=0.3632, pruned_loss=0.1543, over 7261.00 frames.], tot_loss[loss=0.3524, simple_loss=0.3842, pruned_loss=0.1603, over 1423075.61 frames.], batch size: 19, lr: 2.16e-03
2022-05-26 12:36:53,991 INFO [train.py:842] (3/4) Epoch 1, batch 8300, loss[loss=0.4621, simple_loss=0.4722, pruned_loss=0.226, over 6672.00 frames.], tot_loss[loss=0.3515, simple_loss=0.3837, pruned_loss=0.1597, over 1421938.00 frames.], batch size: 31, lr: 2.15e-03
2022-05-26 12:37:32,678 INFO [train.py:842] (3/4) Epoch 1, batch 8350, loss[loss=0.2582, simple_loss=0.3092, pruned_loss=0.1036, over 7267.00 frames.], tot_loss[loss=0.3503, simple_loss=0.3828, pruned_loss=0.1589, over 1425087.95 frames.], batch size: 18, lr: 2.15e-03
2022-05-26 12:38:11,578 INFO [train.py:842] (3/4) Epoch 1, batch 8400, loss[loss=0.4115, simple_loss=0.4319, pruned_loss=0.1956, over 7317.00 frames.], tot_loss[loss=0.3487, simple_loss=0.3825, pruned_loss=0.1575, over 1424096.90 frames.], batch size: 25, lr: 2.15e-03
2022-05-26 12:38:49,958 INFO [train.py:842] (3/4) Epoch 1, batch 8450, loss[loss=0.333, simple_loss=0.3629, pruned_loss=0.1515, over 7102.00 frames.], tot_loss[loss=0.3467, simple_loss=0.3813, pruned_loss=0.156, over 1423236.21 frames.], batch size: 21, lr: 2.14e-03
2022-05-26 12:39:28,678 INFO [train.py:842] (3/4) Epoch 1, batch 8500, loss[loss=0.3163, simple_loss=0.3693, pruned_loss=0.1317, over 7146.00 frames.], tot_loss[loss=0.3482, simple_loss=0.3824, pruned_loss=0.157, over 1422396.53 frames.], batch size: 20, lr: 2.14e-03
2022-05-26 12:40:07,540 INFO [train.py:842] (3/4) Epoch 1, batch 8550, loss[loss=0.392, simple_loss=0.4031, pruned_loss=0.1905, over 7167.00 frames.], tot_loss[loss=0.3469, simple_loss=0.3813, pruned_loss=0.1563, over 1424028.98 frames.], batch size: 18, lr: 2.13e-03
2022-05-26 12:40:46,243 INFO [train.py:842] (3/4) Epoch 1, batch 8600, loss[loss=0.3512, simple_loss=0.3825, pruned_loss=0.1599, over 7074.00 frames.], tot_loss[loss=0.3495, simple_loss=0.3826, pruned_loss=0.1582, over 1420735.62 frames.], batch size: 18, lr: 2.13e-03
2022-05-26 12:41:24,622 INFO [train.py:842] (3/4) Epoch 1, batch 8650, loss[loss=0.3123, simple_loss=0.3648, pruned_loss=0.1299, over 7315.00 frames.], tot_loss[loss=0.3487, simple_loss=0.3825, pruned_loss=0.1575, over 1414669.86 frames.], batch size: 21, lr: 2.12e-03
2022-05-26 12:42:03,388 INFO [train.py:842] (3/4) Epoch 1, batch 8700, loss[loss=0.2852, simple_loss=0.3272, pruned_loss=0.1216, over 7122.00 frames.], tot_loss[loss=0.3489, simple_loss=0.383, pruned_loss=0.1573, over 1411031.49 frames.], batch size: 17, lr: 2.12e-03
2022-05-26 12:42:41,802 INFO [train.py:842] (3/4) Epoch 1, batch 8750, loss[loss=0.3584, simple_loss=0.3976, pruned_loss=0.1596, over 6815.00 frames.], tot_loss[loss=0.3487, simple_loss=0.3828, pruned_loss=0.1572, over 1412396.37 frames.], batch size: 31, lr: 2.11e-03
2022-05-26 12:43:20,376 INFO [train.py:842] (3/4) Epoch 1, batch 8800, loss[loss=0.3967, simple_loss=0.4115, pruned_loss=0.1909, over 6720.00 frames.], tot_loss[loss=0.3472, simple_loss=0.3817, pruned_loss=0.1563, over 1415559.89 frames.], batch size: 31, lr: 2.11e-03
2022-05-26 12:43:58,636 INFO [train.py:842] (3/4) Epoch 1, batch 8850, loss[loss=0.4092, simple_loss=0.412, pruned_loss=0.2033, over 5000.00 frames.], tot_loss[loss=0.3504, simple_loss=0.3839, pruned_loss=0.1584, over 1411346.15 frames.], batch size: 53, lr: 2.10e-03
2022-05-26 12:44:37,292 INFO [train.py:842] (3/4) Epoch 1, batch 8900, loss[loss=0.3185, simple_loss=0.3492, pruned_loss=0.1439, over 7021.00 frames.], tot_loss[loss=0.3479, simple_loss=0.3824, pruned_loss=0.1567, over 1403499.24 frames.], batch size: 16, lr: 2.10e-03
2022-05-26 12:45:15,575 INFO [train.py:842] (3/4) Epoch 1, batch 8950, loss[loss=0.3476, simple_loss=0.3816, pruned_loss=0.1568, over 7311.00 frames.], tot_loss[loss=0.3488, simple_loss=0.3834, pruned_loss=0.1571, over 1405883.72 frames.], batch size: 21, lr: 2.10e-03
2022-05-26 12:45:54,297 INFO [train.py:842] (3/4) Epoch 1, batch 9000, loss[loss=0.3844, simple_loss=0.4038, pruned_loss=0.1825, over 5129.00 frames.], tot_loss[loss=0.3505, simple_loss=0.3855, pruned_loss=0.1577, over 1399392.57 frames.], batch size: 54, lr: 2.09e-03
2022-05-26 12:45:54,299 INFO [train.py:862] (3/4) Computing validation loss
2022-05-26 12:46:03,568 INFO [train.py:871] (3/4) Epoch 1, validation: loss=0.2508, simple_loss=0.3369, pruned_loss=0.08236, over 868885.00 frames.
2022-05-26 12:46:41,382 INFO [train.py:842] (3/4) Epoch 1, batch 9050, loss[loss=0.3797, simple_loss=0.3927, pruned_loss=0.1833, over 4877.00 frames.], tot_loss[loss=0.3545, simple_loss=0.3885, pruned_loss=0.1603, over 1388010.98 frames.], batch size: 53, lr: 2.09e-03
2022-05-26 12:47:18,748 INFO [train.py:842] (3/4) Epoch 1, batch 9100, loss[loss=0.4397, simple_loss=0.4445, pruned_loss=0.2175, over 5046.00 frames.], tot_loss[loss=0.36, simple_loss=0.3924, pruned_loss=0.1638, over 1343756.78 frames.], batch size: 55, lr: 2.08e-03
2022-05-26 12:47:56,202 INFO [train.py:842] (3/4) Epoch 1, batch 9150, loss[loss=0.3835, simple_loss=0.4074, pruned_loss=0.1798, over 5361.00 frames.], tot_loss[loss=0.366, simple_loss=0.3962, pruned_loss=0.1679, over 1284647.04 frames.], batch size: 52, lr: 2.08e-03
2022-05-26 12:48:47,793 INFO [train.py:842] (3/4) Epoch 2, batch 0, loss[loss=0.3949, simple_loss=0.424, pruned_loss=0.1829, over 7129.00 frames.], tot_loss[loss=0.3949, simple_loss=0.424, pruned_loss=0.1829, over 7129.00 frames.], batch size: 26, lr: 2.06e-03
2022-05-26 12:49:27,364 INFO [train.py:842] (3/4) Epoch 2, batch 50, loss[loss=0.3582, simple_loss=0.3932, pruned_loss=0.1616, over 7240.00 frames.], tot_loss[loss=0.3417, simple_loss=0.3786, pruned_loss=0.1524, over 311642.06 frames.], batch size: 20, lr: 2.06e-03
2022-05-26 12:50:06,219 INFO [train.py:842] (3/4) Epoch 2, batch 100, loss[loss=0.3876, simple_loss=0.3986, pruned_loss=0.1883, over 7429.00 frames.], tot_loss[loss=0.3466, simple_loss=0.3812, pruned_loss=0.156, over 559786.86 frames.], batch size: 20, lr: 2.05e-03
2022-05-26 12:50:45,170 INFO [train.py:842] (3/4) Epoch 2, batch 150, loss[loss=0.3175, simple_loss=0.3768, pruned_loss=0.1291, over 7327.00 frames.], tot_loss[loss=0.346, simple_loss=0.3814, pruned_loss=0.1553, over 750853.81 frames.], batch size: 20, lr: 2.05e-03
2022-05-26 12:51:23,764 INFO [train.py:842] (3/4) Epoch 2, batch 200, loss[loss=0.4492, simple_loss=0.4444, pruned_loss=0.227, over 7160.00 frames.], tot_loss[loss=0.3438, simple_loss=0.3799, pruned_loss=0.1538, over 901095.45 frames.], batch size: 19, lr: 2.04e-03
2022-05-26 12:52:03,071 INFO [train.py:842] (3/4) Epoch 2, batch 250, loss[loss=0.3493, simple_loss=0.3926, pruned_loss=0.1531, over 7388.00 frames.], tot_loss[loss=0.3435, simple_loss=0.3791, pruned_loss=0.1539, over 1016036.36 frames.], batch size: 23, lr: 2.04e-03
2022-05-26 12:52:42,061 INFO [train.py:842] (3/4) Epoch 2, batch 300, loss[loss=0.2848, simple_loss=0.3421, pruned_loss=0.1137, over 7259.00 frames.], tot_loss[loss=0.3418, simple_loss=0.379, pruned_loss=0.1523, over 1105768.98 frames.], batch size: 19, lr: 2.03e-03
2022-05-26 12:53:21,158 INFO [train.py:842] (3/4) Epoch 2, batch 350, loss[loss=0.3242, simple_loss=0.3785, pruned_loss=0.135, over 7212.00 frames.], tot_loss[loss=0.3387, simple_loss=0.377, pruned_loss=0.1503, over 1174418.99 frames.], batch size: 21, lr: 2.03e-03
2022-05-26 12:53:59,770 INFO [train.py:842] (3/4) Epoch 2, batch 400, loss[loss=0.3864, simple_loss=0.4205, pruned_loss=0.1762, over 7155.00 frames.], tot_loss[loss=0.3386, simple_loss=0.377, pruned_loss=0.1501, over 1230616.56 frames.], batch size: 20, lr: 2.03e-03
2022-05-26 12:54:38,385 INFO [train.py:842] (3/4) Epoch 2, batch 450, loss[loss=0.3256, simple_loss=0.3568, pruned_loss=0.1472, over 7154.00 frames.], tot_loss[loss=0.3381, simple_loss=0.3772, pruned_loss=0.1495, over 1275352.70 frames.], batch size: 19, lr: 2.02e-03
2022-05-26 12:55:16,805 INFO [train.py:842] (3/4) Epoch 2, batch 500, loss[loss=0.2537, simple_loss=0.3144, pruned_loss=0.09652, over 7162.00 frames.], tot_loss[loss=0.3361, simple_loss=0.3762, pruned_loss=0.148, over 1306865.40 frames.], batch size: 18, lr: 2.02e-03
2022-05-26 12:55:56,050 INFO [train.py:842] (3/4) Epoch 2, batch 550, loss[loss=0.2782, simple_loss=0.3305, pruned_loss=0.113, over 7359.00 frames.], tot_loss[loss=0.3371, simple_loss=0.3767, pruned_loss=0.1488, over 1332673.46 frames.], batch size: 19, lr: 2.01e-03
2022-05-26 12:56:34,280 INFO [train.py:842] (3/4) Epoch 2, batch 600, loss[loss=0.345, simple_loss=0.3735, pruned_loss=0.1582, over 7372.00 frames.], tot_loss[loss=0.3403, simple_loss=0.3791, pruned_loss=0.1507, over 1354293.37 frames.], batch size: 23, lr: 2.01e-03
2022-05-26 12:57:13,134 INFO [train.py:842] (3/4) Epoch 2, batch 650, loss[loss=0.2837, simple_loss=0.3374, pruned_loss=0.115, over 7287.00 frames.], tot_loss[loss=0.3363, simple_loss=0.3763, pruned_loss=0.1482, over 1368629.84 frames.], batch size: 18, lr: 2.01e-03
2022-05-26 12:57:51,842 INFO [train.py:842] (3/4) Epoch 2, batch 700, loss[loss=0.4484, simple_loss=0.4451, pruned_loss=0.2258, over 4778.00 frames.], tot_loss[loss=0.3336, simple_loss=0.3742, pruned_loss=0.1465, over 1380443.18 frames.], batch size: 52, lr: 2.00e-03
2022-05-26 12:58:30,916 INFO [train.py:842] (3/4) Epoch 2, batch 750, loss[loss=0.3367, simple_loss=0.3784, pruned_loss=0.1475, over 7253.00 frames.], tot_loss[loss=0.3368, simple_loss=0.3762, pruned_loss=0.1487, over 1391124.08 frames.], batch size: 19, lr: 2.00e-03
2022-05-26 12:59:09,535 INFO [train.py:842] (3/4) Epoch 2, batch 800, loss[loss=0.2886, simple_loss=0.3383, pruned_loss=0.1194, over 7062.00 frames.], tot_loss[loss=0.3355, simple_loss=0.3749, pruned_loss=0.148, over 1400771.24 frames.], batch size: 18, lr: 1.99e-03
2022-05-26 12:59:48,508 INFO [train.py:842] (3/4) Epoch 2, batch 850, loss[loss=0.4039, simple_loss=0.4087, pruned_loss=0.1995, over 7323.00 frames.], tot_loss[loss=0.3324, simple_loss=0.3727, pruned_loss=0.1461, over 1408962.37 frames.], batch size: 20, lr: 1.99e-03
2022-05-26 13:00:27,148 INFO [train.py:842] (3/4) Epoch 2, batch 900, loss[loss=0.3081, simple_loss=0.3576, pruned_loss=0.1293, over 7422.00 frames.], tot_loss[loss=0.3326, simple_loss=0.3733, pruned_loss=0.1459, over 1413085.54 frames.], batch size: 20, lr: 1.99e-03
2022-05-26 13:01:06,407 INFO [train.py:842] (3/4) Epoch 2, batch 950, loss[loss=0.2797, simple_loss=0.3306, pruned_loss=0.1144, over 7260.00 frames.], tot_loss[loss=0.332, simple_loss=0.3729, pruned_loss=0.1455, over 1414718.19 frames.], batch size: 19, lr: 1.98e-03
2022-05-26 13:01:45,065 INFO [train.py:842] (3/4) Epoch 2, batch 1000, loss[loss=0.3696, simple_loss=0.4065, pruned_loss=0.1663, over 6844.00 frames.], tot_loss[loss=0.3313, simple_loss=0.3726, pruned_loss=0.145, over 1416825.43 frames.], batch size: 31, lr: 1.98e-03
2022-05-26 13:02:24,229 INFO [train.py:842] (3/4) Epoch 2, batch 1050, loss[loss=0.3009, simple_loss=0.3527, pruned_loss=0.1245, over 7429.00 frames.], tot_loss[loss=0.3309, simple_loss=0.372, pruned_loss=0.1449, over 1419355.46 frames.], batch size: 20, lr: 1.97e-03 | |
2022-05-26 13:03:02,459 INFO [train.py:842] (3/4) Epoch 2, batch 1100, loss[loss=0.307, simple_loss=0.3512, pruned_loss=0.1315, over 7154.00 frames.], tot_loss[loss=0.3354, simple_loss=0.3749, pruned_loss=0.148, over 1420261.87 frames.], batch size: 18, lr: 1.97e-03 | |
2022-05-26 13:03:41,569 INFO [train.py:842] (3/4) Epoch 2, batch 1150, loss[loss=0.32, simple_loss=0.3754, pruned_loss=0.1323, over 7235.00 frames.], tot_loss[loss=0.3318, simple_loss=0.3729, pruned_loss=0.1454, over 1423797.12 frames.], batch size: 20, lr: 1.97e-03 | |
2022-05-26 13:04:19,998 INFO [train.py:842] (3/4) Epoch 2, batch 1200, loss[loss=0.3247, simple_loss=0.3731, pruned_loss=0.1381, over 7043.00 frames.], tot_loss[loss=0.3333, simple_loss=0.3739, pruned_loss=0.1464, over 1422813.61 frames.], batch size: 28, lr: 1.96e-03 | |
2022-05-26 13:04:58,728 INFO [train.py:842] (3/4) Epoch 2, batch 1250, loss[loss=0.2684, simple_loss=0.3213, pruned_loss=0.1077, over 7277.00 frames.], tot_loss[loss=0.3329, simple_loss=0.374, pruned_loss=0.1459, over 1422444.80 frames.], batch size: 18, lr: 1.96e-03 | |
2022-05-26 13:05:37,193 INFO [train.py:842] (3/4) Epoch 2, batch 1300, loss[loss=0.3428, simple_loss=0.3882, pruned_loss=0.1487, over 7223.00 frames.], tot_loss[loss=0.3336, simple_loss=0.3747, pruned_loss=0.1463, over 1416952.67 frames.], batch size: 21, lr: 1.95e-03 | |
2022-05-26 13:06:15,946 INFO [train.py:842] (3/4) Epoch 2, batch 1350, loss[loss=0.2912, simple_loss=0.3276, pruned_loss=0.1274, over 7265.00 frames.], tot_loss[loss=0.3325, simple_loss=0.3738, pruned_loss=0.1456, over 1419767.04 frames.], batch size: 17, lr: 1.95e-03 | |
2022-05-26 13:06:54,269 INFO [train.py:842] (3/4) Epoch 2, batch 1400, loss[loss=0.3059, simple_loss=0.3575, pruned_loss=0.1271, over 7221.00 frames.], tot_loss[loss=0.3336, simple_loss=0.3746, pruned_loss=0.1463, over 1418015.29 frames.], batch size: 21, lr: 1.95e-03 | |
2022-05-26 13:07:33,468 INFO [train.py:842] (3/4) Epoch 2, batch 1450, loss[loss=0.3756, simple_loss=0.4137, pruned_loss=0.1687, over 7202.00 frames.], tot_loss[loss=0.3344, simple_loss=0.3754, pruned_loss=0.1467, over 1422041.50 frames.], batch size: 26, lr: 1.94e-03 | |
2022-05-26 13:08:12,030 INFO [train.py:842] (3/4) Epoch 2, batch 1500, loss[loss=0.3256, simple_loss=0.3722, pruned_loss=0.1395, over 6205.00 frames.], tot_loss[loss=0.3346, simple_loss=0.3754, pruned_loss=0.1469, over 1422460.75 frames.], batch size: 37, lr: 1.94e-03 | |
2022-05-26 13:08:50,765 INFO [train.py:842] (3/4) Epoch 2, batch 1550, loss[loss=0.3429, simple_loss=0.3768, pruned_loss=0.1545, over 7430.00 frames.], tot_loss[loss=0.3329, simple_loss=0.3746, pruned_loss=0.1456, over 1425161.51 frames.], batch size: 20, lr: 1.94e-03 | |
2022-05-26 13:09:29,425 INFO [train.py:842] (3/4) Epoch 2, batch 1600, loss[loss=0.3056, simple_loss=0.3584, pruned_loss=0.1264, over 7169.00 frames.], tot_loss[loss=0.3312, simple_loss=0.3726, pruned_loss=0.1448, over 1424715.11 frames.], batch size: 18, lr: 1.93e-03 | |
2022-05-26 13:10:08,266 INFO [train.py:842] (3/4) Epoch 2, batch 1650, loss[loss=0.301, simple_loss=0.3543, pruned_loss=0.1239, over 7437.00 frames.], tot_loss[loss=0.3308, simple_loss=0.3725, pruned_loss=0.1445, over 1424327.32 frames.], batch size: 20, lr: 1.93e-03 | |
2022-05-26 13:10:46,853 INFO [train.py:842] (3/4) Epoch 2, batch 1700, loss[loss=0.3542, simple_loss=0.3909, pruned_loss=0.1587, over 7418.00 frames.], tot_loss[loss=0.3303, simple_loss=0.372, pruned_loss=0.1443, over 1423792.97 frames.], batch size: 21, lr: 1.92e-03 | |
2022-05-26 13:11:25,797 INFO [train.py:842] (3/4) Epoch 2, batch 1750, loss[loss=0.3243, simple_loss=0.3654, pruned_loss=0.1416, over 7279.00 frames.], tot_loss[loss=0.3325, simple_loss=0.3741, pruned_loss=0.1454, over 1423111.62 frames.], batch size: 18, lr: 1.92e-03 | |
2022-05-26 13:12:04,203 INFO [train.py:842] (3/4) Epoch 2, batch 1800, loss[loss=0.2829, simple_loss=0.3345, pruned_loss=0.1157, over 7355.00 frames.], tot_loss[loss=0.3323, simple_loss=0.3741, pruned_loss=0.1452, over 1424344.77 frames.], batch size: 19, lr: 1.92e-03 | |
2022-05-26 13:12:43,081 INFO [train.py:842] (3/4) Epoch 2, batch 1850, loss[loss=0.2702, simple_loss=0.3317, pruned_loss=0.1044, over 7319.00 frames.], tot_loss[loss=0.3285, simple_loss=0.371, pruned_loss=0.143, over 1424592.50 frames.], batch size: 20, lr: 1.91e-03 | |
2022-05-26 13:13:21,387 INFO [train.py:842] (3/4) Epoch 2, batch 1900, loss[loss=0.314, simple_loss=0.3499, pruned_loss=0.139, over 6994.00 frames.], tot_loss[loss=0.3276, simple_loss=0.3709, pruned_loss=0.1421, over 1428527.22 frames.], batch size: 16, lr: 1.91e-03 | |
2022-05-26 13:14:00,103 INFO [train.py:842] (3/4) Epoch 2, batch 1950, loss[loss=0.372, simple_loss=0.3823, pruned_loss=0.1809, over 7288.00 frames.], tot_loss[loss=0.3272, simple_loss=0.3708, pruned_loss=0.1418, over 1429081.33 frames.], batch size: 18, lr: 1.91e-03 | |
2022-05-26 13:14:38,217 INFO [train.py:842] (3/4) Epoch 2, batch 2000, loss[loss=0.3531, simple_loss=0.3826, pruned_loss=0.1618, over 7128.00 frames.], tot_loss[loss=0.3275, simple_loss=0.3711, pruned_loss=0.1419, over 1422890.90 frames.], batch size: 21, lr: 1.90e-03 | |
2022-05-26 13:15:17,087 INFO [train.py:842] (3/4) Epoch 2, batch 2050, loss[loss=0.2838, simple_loss=0.3537, pruned_loss=0.107, over 7011.00 frames.], tot_loss[loss=0.3256, simple_loss=0.3696, pruned_loss=0.1408, over 1424290.30 frames.], batch size: 28, lr: 1.90e-03 | |
2022-05-26 13:15:55,564 INFO [train.py:842] (3/4) Epoch 2, batch 2100, loss[loss=0.2799, simple_loss=0.3183, pruned_loss=0.1207, over 7418.00 frames.], tot_loss[loss=0.3263, simple_loss=0.3697, pruned_loss=0.1415, over 1424669.56 frames.], batch size: 18, lr: 1.90e-03 | |
2022-05-26 13:16:34,433 INFO [train.py:842] (3/4) Epoch 2, batch 2150, loss[loss=0.3476, simple_loss=0.3829, pruned_loss=0.1561, over 7409.00 frames.], tot_loss[loss=0.3269, simple_loss=0.37, pruned_loss=0.1419, over 1423784.11 frames.], batch size: 21, lr: 1.89e-03 | |
2022-05-26 13:17:12,972 INFO [train.py:842] (3/4) Epoch 2, batch 2200, loss[loss=0.3127, simple_loss=0.367, pruned_loss=0.1292, over 7119.00 frames.], tot_loss[loss=0.323, simple_loss=0.3673, pruned_loss=0.1393, over 1422268.73 frames.], batch size: 21, lr: 1.89e-03 | |
2022-05-26 13:17:52,122 INFO [train.py:842] (3/4) Epoch 2, batch 2250, loss[loss=0.2657, simple_loss=0.3334, pruned_loss=0.09905, over 7208.00 frames.], tot_loss[loss=0.3219, simple_loss=0.3666, pruned_loss=0.1386, over 1424052.19 frames.], batch size: 21, lr: 1.89e-03 | |
2022-05-26 13:18:30,739 INFO [train.py:842] (3/4) Epoch 2, batch 2300, loss[loss=0.2864, simple_loss=0.3537, pruned_loss=0.1095, over 7195.00 frames.], tot_loss[loss=0.3241, simple_loss=0.3684, pruned_loss=0.14, over 1425135.08 frames.], batch size: 22, lr: 1.88e-03 | |
2022-05-26 13:19:09,875 INFO [train.py:842] (3/4) Epoch 2, batch 2350, loss[loss=0.2905, simple_loss=0.3459, pruned_loss=0.1175, over 7231.00 frames.], tot_loss[loss=0.3222, simple_loss=0.3668, pruned_loss=0.1388, over 1423815.31 frames.], batch size: 20, lr: 1.88e-03 | |
2022-05-26 13:19:48,359 INFO [train.py:842] (3/4) Epoch 2, batch 2400, loss[loss=0.3519, simple_loss=0.391, pruned_loss=0.1564, over 7313.00 frames.], tot_loss[loss=0.3234, simple_loss=0.3677, pruned_loss=0.1396, over 1423702.20 frames.], batch size: 21, lr: 1.87e-03 | |
2022-05-26 13:20:27,498 INFO [train.py:842] (3/4) Epoch 2, batch 2450, loss[loss=0.315, simple_loss=0.354, pruned_loss=0.1379, over 7315.00 frames.], tot_loss[loss=0.3245, simple_loss=0.3688, pruned_loss=0.1401, over 1427205.21 frames.], batch size: 21, lr: 1.87e-03 | |
2022-05-26 13:21:06,104 INFO [train.py:842] (3/4) Epoch 2, batch 2500, loss[loss=0.3565, simple_loss=0.4019, pruned_loss=0.1556, over 7213.00 frames.], tot_loss[loss=0.3235, simple_loss=0.3684, pruned_loss=0.1393, over 1427664.21 frames.], batch size: 26, lr: 1.87e-03 | |
2022-05-26 13:21:44,941 INFO [train.py:842] (3/4) Epoch 2, batch 2550, loss[loss=0.2857, simple_loss=0.3254, pruned_loss=0.123, over 6975.00 frames.], tot_loss[loss=0.3213, simple_loss=0.3666, pruned_loss=0.138, over 1428149.82 frames.], batch size: 16, lr: 1.86e-03 | |
2022-05-26 13:22:23,549 INFO [train.py:842] (3/4) Epoch 2, batch 2600, loss[loss=0.3629, simple_loss=0.4155, pruned_loss=0.1552, over 7161.00 frames.], tot_loss[loss=0.3219, simple_loss=0.3669, pruned_loss=0.1384, over 1430152.01 frames.], batch size: 26, lr: 1.86e-03 | |
2022-05-26 13:23:02,366 INFO [train.py:842] (3/4) Epoch 2, batch 2650, loss[loss=0.3198, simple_loss=0.371, pruned_loss=0.1343, over 6120.00 frames.], tot_loss[loss=0.3217, simple_loss=0.3673, pruned_loss=0.138, over 1428766.13 frames.], batch size: 37, lr: 1.86e-03 | |
2022-05-26 13:23:41,039 INFO [train.py:842] (3/4) Epoch 2, batch 2700, loss[loss=0.3188, simple_loss=0.3734, pruned_loss=0.1321, over 6670.00 frames.], tot_loss[loss=0.3202, simple_loss=0.3661, pruned_loss=0.1372, over 1428259.79 frames.], batch size: 31, lr: 1.85e-03 | |
2022-05-26 13:24:20,339 INFO [train.py:842] (3/4) Epoch 2, batch 2750, loss[loss=0.3923, simple_loss=0.4296, pruned_loss=0.1775, over 7278.00 frames.], tot_loss[loss=0.3184, simple_loss=0.3646, pruned_loss=0.1361, over 1424457.49 frames.], batch size: 24, lr: 1.85e-03 | |
2022-05-26 13:24:58,804 INFO [train.py:842] (3/4) Epoch 2, batch 2800, loss[loss=0.2883, simple_loss=0.3433, pruned_loss=0.1166, over 7201.00 frames.], tot_loss[loss=0.317, simple_loss=0.3637, pruned_loss=0.1351, over 1426843.39 frames.], batch size: 23, lr: 1.85e-03 | |
2022-05-26 13:25:37,869 INFO [train.py:842] (3/4) Epoch 2, batch 2850, loss[loss=0.2958, simple_loss=0.3583, pruned_loss=0.1167, over 7295.00 frames.], tot_loss[loss=0.3177, simple_loss=0.3644, pruned_loss=0.1356, over 1426603.30 frames.], batch size: 24, lr: 1.84e-03 | |
2022-05-26 13:26:16,461 INFO [train.py:842] (3/4) Epoch 2, batch 2900, loss[loss=0.3693, simple_loss=0.4085, pruned_loss=0.165, over 7234.00 frames.], tot_loss[loss=0.3188, simple_loss=0.3653, pruned_loss=0.1361, over 1421532.66 frames.], batch size: 20, lr: 1.84e-03 | |
2022-05-26 13:26:55,519 INFO [train.py:842] (3/4) Epoch 2, batch 2950, loss[loss=0.2775, simple_loss=0.3539, pruned_loss=0.1005, over 7226.00 frames.], tot_loss[loss=0.3171, simple_loss=0.3644, pruned_loss=0.135, over 1422405.70 frames.], batch size: 20, lr: 1.84e-03 | |
2022-05-26 13:27:34,165 INFO [train.py:842] (3/4) Epoch 2, batch 3000, loss[loss=0.2783, simple_loss=0.3282, pruned_loss=0.1142, over 7270.00 frames.], tot_loss[loss=0.3185, simple_loss=0.365, pruned_loss=0.136, over 1426015.89 frames.], batch size: 17, lr: 1.84e-03 | |
2022-05-26 13:27:34,166 INFO [train.py:862] (3/4) Computing validation loss | |
2022-05-26 13:27:43,281 INFO [train.py:871] (3/4) Epoch 2, validation: loss=0.238, simple_loss=0.3286, pruned_loss=0.07369, over 868885.00 frames. | |
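
[Editor's note on the validation passes] In this excerpt, "Computing validation loss" appears at batch 9000 of Epoch 1 and then at batches 3000, 6000 and 9000 of Epoch 2, i.e. every 3000 training batches, and each pass reports its losses over the same 868885 frames, so successive validation numbers are directly comparable. The snippet below is a self-contained toy of that interleaving pattern on dummy data (generic PyTorch, hypothetical names; it is not the project's train.py):

    import torch
    import torch.nn as nn

    VALID_INTERVAL = 3000          # batches between validation passes, as in this log
    model = nn.Linear(80, 500)     # stand-in for the real encoder/decoder/joiner
    optim = torch.optim.SGD(model.parameters(), lr=1e-3)
    # A fixed validation set, so every pass covers the same data.
    valid_set = [(torch.randn(8, 80), torch.randn(8, 500)) for _ in range(10)]

    @torch.no_grad()
    def validation_loss():
        model.eval()
        losses = [nn.functional.mse_loss(model(x), y) for x, y in valid_set]
        model.train()
        return torch.stack(losses).mean().item()

    for batch_idx in range(9001):
        x, y = torch.randn(8, 80), torch.randn(8, 500)
        loss = nn.functional.mse_loss(model(x), y)
        optim.zero_grad()
        loss.backward()
        optim.step()
        if batch_idx > 0 and batch_idx % VALID_INTERVAL == 0:
            print(f"batch {batch_idx}, validation: loss={validation_loss():.4f}")
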
2022-05-26 13:28:22,611 INFO [train.py:842] (3/4) Epoch 2, batch 3050, loss[loss=0.2339, simple_loss=0.306, pruned_loss=0.08092, over 7274.00 frames.], tot_loss[loss=0.319, simple_loss=0.3646, pruned_loss=0.1367, over 1421869.97 frames.], batch size: 18, lr: 1.83e-03 | |
2022-05-26 13:29:00,958 INFO [train.py:842] (3/4) Epoch 2, batch 3100, loss[loss=0.3821, simple_loss=0.4077, pruned_loss=0.1782, over 5297.00 frames.], tot_loss[loss=0.3191, simple_loss=0.3651, pruned_loss=0.1366, over 1421548.84 frames.], batch size: 53, lr: 1.83e-03 | |
2022-05-26 13:29:39,787 INFO [train.py:842] (3/4) Epoch 2, batch 3150, loss[loss=0.3258, simple_loss=0.3536, pruned_loss=0.149, over 7237.00 frames.], tot_loss[loss=0.3175, simple_loss=0.3638, pruned_loss=0.1356, over 1423991.59 frames.], batch size: 16, lr: 1.83e-03 | |
2022-05-26 13:30:18,096 INFO [train.py:842] (3/4) Epoch 2, batch 3200, loss[loss=0.3463, simple_loss=0.3763, pruned_loss=0.1582, over 5158.00 frames.], tot_loss[loss=0.3199, simple_loss=0.3659, pruned_loss=0.1369, over 1413847.47 frames.], batch size: 52, lr: 1.82e-03 | |
2022-05-26 13:30:56,866 INFO [train.py:842] (3/4) Epoch 2, batch 3250, loss[loss=0.3642, simple_loss=0.4009, pruned_loss=0.1638, over 7196.00 frames.], tot_loss[loss=0.3216, simple_loss=0.3673, pruned_loss=0.138, over 1415862.24 frames.], batch size: 23, lr: 1.82e-03 | |
2022-05-26 13:31:35,503 INFO [train.py:842] (3/4) Epoch 2, batch 3300, loss[loss=0.2849, simple_loss=0.3501, pruned_loss=0.1099, over 7205.00 frames.], tot_loss[loss=0.3188, simple_loss=0.365, pruned_loss=0.1363, over 1420374.31 frames.], batch size: 22, lr: 1.82e-03 | |
2022-05-26 13:32:14,266 INFO [train.py:842] (3/4) Epoch 2, batch 3350, loss[loss=0.324, simple_loss=0.3807, pruned_loss=0.1337, over 7119.00 frames.], tot_loss[loss=0.319, simple_loss=0.3657, pruned_loss=0.1361, over 1422764.85 frames.], batch size: 26, lr: 1.81e-03 | |
2022-05-26 13:32:52,902 INFO [train.py:842] (3/4) Epoch 2, batch 3400, loss[loss=0.2517, simple_loss=0.3047, pruned_loss=0.09936, over 7143.00 frames.], tot_loss[loss=0.3176, simple_loss=0.3649, pruned_loss=0.1352, over 1423831.16 frames.], batch size: 17, lr: 1.81e-03 | |
2022-05-26 13:33:31,553 INFO [train.py:842] (3/4) Epoch 2, batch 3450, loss[loss=0.279, simple_loss=0.3582, pruned_loss=0.09993, over 7274.00 frames.], tot_loss[loss=0.3163, simple_loss=0.3643, pruned_loss=0.1341, over 1425946.39 frames.], batch size: 24, lr: 1.81e-03 | |
2022-05-26 13:34:10,032 INFO [train.py:842] (3/4) Epoch 2, batch 3500, loss[loss=0.2934, simple_loss=0.3542, pruned_loss=0.1162, over 6436.00 frames.], tot_loss[loss=0.3169, simple_loss=0.3644, pruned_loss=0.1347, over 1423212.87 frames.], batch size: 38, lr: 1.80e-03 | |
2022-05-26 13:34:48,900 INFO [train.py:842] (3/4) Epoch 2, batch 3550, loss[loss=0.2958, simple_loss=0.3505, pruned_loss=0.1206, over 7286.00 frames.], tot_loss[loss=0.3156, simple_loss=0.3641, pruned_loss=0.1336, over 1423340.90 frames.], batch size: 25, lr: 1.80e-03 | |
2022-05-26 13:35:27,121 INFO [train.py:842] (3/4) Epoch 2, batch 3600, loss[loss=0.4114, simple_loss=0.4318, pruned_loss=0.1955, over 7241.00 frames.], tot_loss[loss=0.3141, simple_loss=0.3633, pruned_loss=0.1324, over 1425122.63 frames.], batch size: 20, lr: 1.80e-03 | |
2022-05-26 13:36:06,095 INFO [train.py:842] (3/4) Epoch 2, batch 3650, loss[loss=0.3084, simple_loss=0.3504, pruned_loss=0.1332, over 6821.00 frames.], tot_loss[loss=0.3153, simple_loss=0.364, pruned_loss=0.1333, over 1426525.05 frames.], batch size: 15, lr: 1.79e-03 | |
2022-05-26 13:36:44,546 INFO [train.py:842] (3/4) Epoch 2, batch 3700, loss[loss=0.3065, simple_loss=0.3599, pruned_loss=0.1266, over 7171.00 frames.], tot_loss[loss=0.313, simple_loss=0.3629, pruned_loss=0.1315, over 1428877.66 frames.], batch size: 19, lr: 1.79e-03 | |
2022-05-26 13:37:23,448 INFO [train.py:842] (3/4) Epoch 2, batch 3750, loss[loss=0.3296, simple_loss=0.3755, pruned_loss=0.1419, over 7277.00 frames.], tot_loss[loss=0.313, simple_loss=0.363, pruned_loss=0.1315, over 1429752.06 frames.], batch size: 24, lr: 1.79e-03 | |
2022-05-26 13:38:02,069 INFO [train.py:842] (3/4) Epoch 2, batch 3800, loss[loss=0.2989, simple_loss=0.344, pruned_loss=0.127, over 6992.00 frames.], tot_loss[loss=0.313, simple_loss=0.3622, pruned_loss=0.1319, over 1430144.97 frames.], batch size: 16, lr: 1.79e-03 | |
2022-05-26 13:38:40,884 INFO [train.py:842] (3/4) Epoch 2, batch 3850, loss[loss=0.341, simple_loss=0.3767, pruned_loss=0.1526, over 7201.00 frames.], tot_loss[loss=0.3126, simple_loss=0.3618, pruned_loss=0.1317, over 1430331.51 frames.], batch size: 22, lr: 1.78e-03 | |
2022-05-26 13:39:19,562 INFO [train.py:842] (3/4) Epoch 2, batch 3900, loss[loss=0.3508, simple_loss=0.3929, pruned_loss=0.1544, over 6163.00 frames.], tot_loss[loss=0.3123, simple_loss=0.3615, pruned_loss=0.1315, over 1432263.15 frames.], batch size: 37, lr: 1.78e-03 | |
2022-05-26 13:39:58,486 INFO [train.py:842] (3/4) Epoch 2, batch 3950, loss[loss=0.3579, simple_loss=0.4122, pruned_loss=0.1518, over 7311.00 frames.], tot_loss[loss=0.3143, simple_loss=0.3629, pruned_loss=0.1328, over 1429907.05 frames.], batch size: 21, lr: 1.78e-03 | |
2022-05-26 13:40:36,946 INFO [train.py:842] (3/4) Epoch 2, batch 4000, loss[loss=0.3996, simple_loss=0.4176, pruned_loss=0.1909, over 4766.00 frames.], tot_loss[loss=0.315, simple_loss=0.3632, pruned_loss=0.1333, over 1430035.56 frames.], batch size: 52, lr: 1.77e-03 | |
2022-05-26 13:41:15,595 INFO [train.py:842] (3/4) Epoch 2, batch 4050, loss[loss=0.3489, simple_loss=0.3928, pruned_loss=0.1525, over 6809.00 frames.], tot_loss[loss=0.3161, simple_loss=0.3643, pruned_loss=0.1339, over 1426030.23 frames.], batch size: 31, lr: 1.77e-03 | |
2022-05-26 13:41:54,114 INFO [train.py:842] (3/4) Epoch 2, batch 4100, loss[loss=0.3007, simple_loss=0.3645, pruned_loss=0.1184, over 7043.00 frames.], tot_loss[loss=0.3154, simple_loss=0.3637, pruned_loss=0.1335, over 1428306.10 frames.], batch size: 28, lr: 1.77e-03 | |
2022-05-26 13:42:32,942 INFO [train.py:842] (3/4) Epoch 2, batch 4150, loss[loss=0.3483, simple_loss=0.3934, pruned_loss=0.1516, over 7101.00 frames.], tot_loss[loss=0.3143, simple_loss=0.3624, pruned_loss=0.133, over 1424944.61 frames.], batch size: 26, lr: 1.76e-03 | |
2022-05-26 13:43:11,638 INFO [train.py:842] (3/4) Epoch 2, batch 4200, loss[loss=0.2366, simple_loss=0.2981, pruned_loss=0.08754, over 6991.00 frames.], tot_loss[loss=0.3121, simple_loss=0.3615, pruned_loss=0.1314, over 1424063.68 frames.], batch size: 16, lr: 1.76e-03 | |
2022-05-26 13:43:50,292 INFO [train.py:842] (3/4) Epoch 2, batch 4250, loss[loss=0.3563, simple_loss=0.3907, pruned_loss=0.161, over 7205.00 frames.], tot_loss[loss=0.3123, simple_loss=0.3615, pruned_loss=0.1316, over 1422513.76 frames.], batch size: 22, lr: 1.76e-03 | |
2022-05-26 13:44:28,864 INFO [train.py:842] (3/4) Epoch 2, batch 4300, loss[loss=0.3158, simple_loss=0.3828, pruned_loss=0.1244, over 7344.00 frames.], tot_loss[loss=0.3113, simple_loss=0.3606, pruned_loss=0.131, over 1425348.35 frames.], batch size: 22, lr: 1.76e-03 | |
2022-05-26 13:45:07,444 INFO [train.py:842] (3/4) Epoch 2, batch 4350, loss[loss=0.3657, simple_loss=0.3937, pruned_loss=0.1688, over 7158.00 frames.], tot_loss[loss=0.3098, simple_loss=0.3597, pruned_loss=0.1299, over 1422371.50 frames.], batch size: 19, lr: 1.75e-03 | |
2022-05-26 13:45:45,802 INFO [train.py:842] (3/4) Epoch 2, batch 4400, loss[loss=0.2884, simple_loss=0.3492, pruned_loss=0.1139, over 7282.00 frames.], tot_loss[loss=0.3097, simple_loss=0.3596, pruned_loss=0.13, over 1423774.00 frames.], batch size: 24, lr: 1.75e-03 | |
2022-05-26 13:46:25,238 INFO [train.py:842] (3/4) Epoch 2, batch 4450, loss[loss=0.2313, simple_loss=0.2952, pruned_loss=0.08371, over 7419.00 frames.], tot_loss[loss=0.3078, simple_loss=0.3579, pruned_loss=0.1288, over 1424859.33 frames.], batch size: 18, lr: 1.75e-03 | |
2022-05-26 13:47:03,787 INFO [train.py:842] (3/4) Epoch 2, batch 4500, loss[loss=0.3533, simple_loss=0.3899, pruned_loss=0.1583, over 7322.00 frames.], tot_loss[loss=0.3109, simple_loss=0.36, pruned_loss=0.1309, over 1426409.15 frames.], batch size: 20, lr: 1.74e-03 | |
2022-05-26 13:47:42,548 INFO [train.py:842] (3/4) Epoch 2, batch 4550, loss[loss=0.3551, simple_loss=0.3955, pruned_loss=0.1573, over 7274.00 frames.], tot_loss[loss=0.3095, simple_loss=0.3593, pruned_loss=0.1299, over 1426604.46 frames.], batch size: 18, lr: 1.74e-03 | |
2022-05-26 13:48:20,820 INFO [train.py:842] (3/4) Epoch 2, batch 4600, loss[loss=0.3518, simple_loss=0.3994, pruned_loss=0.1521, over 7199.00 frames.], tot_loss[loss=0.3115, simple_loss=0.3606, pruned_loss=0.1312, over 1420687.62 frames.], batch size: 22, lr: 1.74e-03 | |
2022-05-26 13:48:59,685 INFO [train.py:842] (3/4) Epoch 2, batch 4650, loss[loss=0.3778, simple_loss=0.415, pruned_loss=0.1703, over 7324.00 frames.], tot_loss[loss=0.3111, simple_loss=0.3605, pruned_loss=0.1308, over 1423664.59 frames.], batch size: 25, lr: 1.74e-03 | |
2022-05-26 13:49:38,513 INFO [train.py:842] (3/4) Epoch 2, batch 4700, loss[loss=0.277, simple_loss=0.3329, pruned_loss=0.1106, over 7327.00 frames.], tot_loss[loss=0.3103, simple_loss=0.36, pruned_loss=0.1303, over 1425316.62 frames.], batch size: 21, lr: 1.73e-03 | |
2022-05-26 13:50:16,919 INFO [train.py:842] (3/4) Epoch 2, batch 4750, loss[loss=0.3876, simple_loss=0.4294, pruned_loss=0.1729, over 7414.00 frames.], tot_loss[loss=0.3124, simple_loss=0.3615, pruned_loss=0.1317, over 1417238.41 frames.], batch size: 21, lr: 1.73e-03 | |
2022-05-26 13:50:55,342 INFO [train.py:842] (3/4) Epoch 2, batch 4800, loss[loss=0.3295, simple_loss=0.3882, pruned_loss=0.1354, over 7269.00 frames.], tot_loss[loss=0.3136, simple_loss=0.3629, pruned_loss=0.1322, over 1415718.03 frames.], batch size: 24, lr: 1.73e-03 | |
2022-05-26 13:51:34,140 INFO [train.py:842] (3/4) Epoch 2, batch 4850, loss[loss=0.2645, simple_loss=0.3218, pruned_loss=0.1036, over 7160.00 frames.], tot_loss[loss=0.3138, simple_loss=0.3632, pruned_loss=0.1322, over 1416148.44 frames.], batch size: 18, lr: 1.73e-03 | |
2022-05-26 13:52:12,637 INFO [train.py:842] (3/4) Epoch 2, batch 4900, loss[loss=0.2987, simple_loss=0.3336, pruned_loss=0.1319, over 7302.00 frames.], tot_loss[loss=0.3102, simple_loss=0.3608, pruned_loss=0.1299, over 1418752.55 frames.], batch size: 17, lr: 1.72e-03 | |
2022-05-26 13:52:51,834 INFO [train.py:842] (3/4) Epoch 2, batch 4950, loss[loss=0.3224, simple_loss=0.3746, pruned_loss=0.135, over 7235.00 frames.], tot_loss[loss=0.31, simple_loss=0.3603, pruned_loss=0.1298, over 1421007.95 frames.], batch size: 20, lr: 1.72e-03 | |
2022-05-26 13:53:30,426 INFO [train.py:842] (3/4) Epoch 2, batch 5000, loss[loss=0.3086, simple_loss=0.3386, pruned_loss=0.1393, over 7292.00 frames.], tot_loss[loss=0.3107, simple_loss=0.3611, pruned_loss=0.1301, over 1423450.20 frames.], batch size: 17, lr: 1.72e-03 | |
2022-05-26 13:54:08,976 INFO [train.py:842] (3/4) Epoch 2, batch 5050, loss[loss=0.3555, simple_loss=0.3997, pruned_loss=0.1556, over 7410.00 frames.], tot_loss[loss=0.313, simple_loss=0.363, pruned_loss=0.1315, over 1417422.27 frames.], batch size: 21, lr: 1.71e-03 | |
2022-05-26 13:54:47,658 INFO [train.py:842] (3/4) Epoch 2, batch 5100, loss[loss=0.2875, simple_loss=0.3388, pruned_loss=0.1181, over 7165.00 frames.], tot_loss[loss=0.3102, simple_loss=0.3605, pruned_loss=0.1299, over 1420209.52 frames.], batch size: 19, lr: 1.71e-03 | |
2022-05-26 13:55:26,644 INFO [train.py:842] (3/4) Epoch 2, batch 5150, loss[loss=0.3941, simple_loss=0.4229, pruned_loss=0.1826, over 7214.00 frames.], tot_loss[loss=0.3121, simple_loss=0.3619, pruned_loss=0.1312, over 1421585.66 frames.], batch size: 21, lr: 1.71e-03 | |
2022-05-26 13:56:05,018 INFO [train.py:842] (3/4) Epoch 2, batch 5200, loss[loss=0.3678, simple_loss=0.4041, pruned_loss=0.1658, over 7250.00 frames.], tot_loss[loss=0.3109, simple_loss=0.3614, pruned_loss=0.1302, over 1421911.00 frames.], batch size: 25, lr: 1.71e-03 | |
2022-05-26 13:56:43,807 INFO [train.py:842] (3/4) Epoch 2, batch 5250, loss[loss=0.4066, simple_loss=0.434, pruned_loss=0.1896, over 6863.00 frames.], tot_loss[loss=0.3109, simple_loss=0.3616, pruned_loss=0.1301, over 1424294.58 frames.], batch size: 31, lr: 1.70e-03 | |
2022-05-26 13:57:22,564 INFO [train.py:842] (3/4) Epoch 2, batch 5300, loss[loss=0.306, simple_loss=0.362, pruned_loss=0.125, over 7369.00 frames.], tot_loss[loss=0.3104, simple_loss=0.361, pruned_loss=0.1299, over 1420983.75 frames.], batch size: 23, lr: 1.70e-03 | |
2022-05-26 13:58:01,629 INFO [train.py:842] (3/4) Epoch 2, batch 5350, loss[loss=0.2792, simple_loss=0.3361, pruned_loss=0.1112, over 7359.00 frames.], tot_loss[loss=0.3084, simple_loss=0.3591, pruned_loss=0.1288, over 1418496.45 frames.], batch size: 19, lr: 1.70e-03 | |
2022-05-26 13:58:40,176 INFO [train.py:842] (3/4) Epoch 2, batch 5400, loss[loss=0.2953, simple_loss=0.3582, pruned_loss=0.1162, over 6545.00 frames.], tot_loss[loss=0.3066, simple_loss=0.3574, pruned_loss=0.1279, over 1418527.16 frames.], batch size: 38, lr: 1.70e-03 | |
2022-05-26 13:59:19,548 INFO [train.py:842] (3/4) Epoch 2, batch 5450, loss[loss=0.3026, simple_loss=0.3366, pruned_loss=0.1343, over 7216.00 frames.], tot_loss[loss=0.3052, simple_loss=0.3562, pruned_loss=0.1271, over 1420110.43 frames.], batch size: 16, lr: 1.69e-03 | |
2022-05-26 13:59:58,079 INFO [train.py:842] (3/4) Epoch 2, batch 5500, loss[loss=0.249, simple_loss=0.3129, pruned_loss=0.09254, over 7127.00 frames.], tot_loss[loss=0.3046, simple_loss=0.3558, pruned_loss=0.1267, over 1421907.45 frames.], batch size: 17, lr: 1.69e-03 | |
2022-05-26 14:00:37,022 INFO [train.py:842] (3/4) Epoch 2, batch 5550, loss[loss=0.2522, simple_loss=0.3127, pruned_loss=0.09592, over 7004.00 frames.], tot_loss[loss=0.3027, simple_loss=0.3543, pruned_loss=0.1255, over 1422986.50 frames.], batch size: 16, lr: 1.69e-03 | |
2022-05-26 14:01:15,396 INFO [train.py:842] (3/4) Epoch 2, batch 5600, loss[loss=0.3076, simple_loss=0.3679, pruned_loss=0.1236, over 7309.00 frames.], tot_loss[loss=0.3024, simple_loss=0.355, pruned_loss=0.1249, over 1423246.86 frames.], batch size: 24, lr: 1.69e-03 | |
2022-05-26 14:01:54,150 INFO [train.py:842] (3/4) Epoch 2, batch 5650, loss[loss=0.2779, simple_loss=0.3381, pruned_loss=0.1088, over 7201.00 frames.], tot_loss[loss=0.3036, simple_loss=0.3564, pruned_loss=0.1254, over 1424465.80 frames.], batch size: 23, lr: 1.68e-03 | |
2022-05-26 14:02:32,787 INFO [train.py:842] (3/4) Epoch 2, batch 5700, loss[loss=0.2526, simple_loss=0.3059, pruned_loss=0.09966, over 7286.00 frames.], tot_loss[loss=0.3035, simple_loss=0.3556, pruned_loss=0.1257, over 1422926.36 frames.], batch size: 18, lr: 1.68e-03 | |
2022-05-26 14:03:11,579 INFO [train.py:842] (3/4) Epoch 2, batch 5750, loss[loss=0.3933, simple_loss=0.4154, pruned_loss=0.1856, over 7312.00 frames.], tot_loss[loss=0.307, simple_loss=0.3577, pruned_loss=0.1282, over 1421166.56 frames.], batch size: 21, lr: 1.68e-03 | |
2022-05-26 14:03:50,314 INFO [train.py:842] (3/4) Epoch 2, batch 5800, loss[loss=0.3568, simple_loss=0.4046, pruned_loss=0.1545, over 7127.00 frames.], tot_loss[loss=0.3062, simple_loss=0.3577, pruned_loss=0.1273, over 1425041.09 frames.], batch size: 26, lr: 1.68e-03 | |
2022-05-26 14:04:29,351 INFO [train.py:842] (3/4) Epoch 2, batch 5850, loss[loss=0.3669, simple_loss=0.3978, pruned_loss=0.168, over 7417.00 frames.], tot_loss[loss=0.3073, simple_loss=0.3588, pruned_loss=0.1279, over 1420518.68 frames.], batch size: 21, lr: 1.67e-03 | |
2022-05-26 14:05:07,958 INFO [train.py:842] (3/4) Epoch 2, batch 5900, loss[loss=0.3512, simple_loss=0.3696, pruned_loss=0.1664, over 7280.00 frames.], tot_loss[loss=0.3051, simple_loss=0.3572, pruned_loss=0.1265, over 1423287.50 frames.], batch size: 17, lr: 1.67e-03 | |
2022-05-26 14:05:46,681 INFO [train.py:842] (3/4) Epoch 2, batch 5950, loss[loss=0.2839, simple_loss=0.3539, pruned_loss=0.1069, over 7218.00 frames.], tot_loss[loss=0.3056, simple_loss=0.3579, pruned_loss=0.1267, over 1422512.11 frames.], batch size: 22, lr: 1.67e-03 | |
2022-05-26 14:06:25,161 INFO [train.py:842] (3/4) Epoch 2, batch 6000, loss[loss=0.2889, simple_loss=0.3461, pruned_loss=0.1159, over 7415.00 frames.], tot_loss[loss=0.3048, simple_loss=0.3567, pruned_loss=0.1264, over 1418877.91 frames.], batch size: 21, lr: 1.67e-03 | |
2022-05-26 14:06:25,162 INFO [train.py:862] (3/4) Computing validation loss | |
2022-05-26 14:06:34,411 INFO [train.py:871] (3/4) Epoch 2, validation: loss=0.2262, simple_loss=0.3196, pruned_loss=0.0664, over 868885.00 frames. | |
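
[Editor's note on the lr column] The learning rate decays smoothly with the global batch index (2.10e-03 at Epoch 1, batch 8900 down to 1.67e-03 here). The logged values are consistent with an Eden-style schedule of the form lr = base_lr * ((b^2 + B^2) / B^2)^(-1/4) * ((e^2 + E^2) / E^2)^(-1/4), where b is the global batch index and e the epoch counter (apparently 0 during Epoch 1); base_lr ≈ 3e-3, B ≈ 5000 and E ≈ 6 are inferred from the log and should be treated as assumptions. A small numerical check against two Epoch 1 lines near the top of this section, where the global batch index equals the within-epoch batch index:

    def eden_lr(base_lr, batch, epoch, lr_batches=5000.0, lr_epochs=6.0):
        # Assumed functional form; the constants are inferred from the logged lr values.
        return (
            base_lr
            * ((batch**2 + lr_batches**2) / lr_batches**2) ** -0.25
            * ((epoch**2 + lr_epochs**2) / lr_epochs**2) ** -0.25
        )

    assert abs(eden_lr(3e-3, batch=8900, epoch=0) - 2.10e-3) < 5e-6   # logged: 2.10e-03
    assert abs(eden_lr(3e-3, batch=9000, epoch=0) - 2.09e-3) < 5e-6   # logged: 2.09e-03
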
2022-05-26 14:07:13,314 INFO [train.py:842] (3/4) Epoch 2, batch 6050, loss[loss=0.2816, simple_loss=0.3461, pruned_loss=0.1085, over 7194.00 frames.], tot_loss[loss=0.3048, simple_loss=0.3568, pruned_loss=0.1264, over 1423295.48 frames.], batch size: 23, lr: 1.66e-03 | |
2022-05-26 14:07:51,792 INFO [train.py:842] (3/4) Epoch 2, batch 6100, loss[loss=0.2683, simple_loss=0.337, pruned_loss=0.09984, over 7387.00 frames.], tot_loss[loss=0.3046, simple_loss=0.3565, pruned_loss=0.1263, over 1425355.55 frames.], batch size: 23, lr: 1.66e-03 | |
2022-05-26 14:08:30,703 INFO [train.py:842] (3/4) Epoch 2, batch 6150, loss[loss=0.2933, simple_loss=0.357, pruned_loss=0.1148, over 7080.00 frames.], tot_loss[loss=0.3012, simple_loss=0.354, pruned_loss=0.1243, over 1425889.11 frames.], batch size: 28, lr: 1.66e-03 | |
2022-05-26 14:09:09,177 INFO [train.py:842] (3/4) Epoch 2, batch 6200, loss[loss=0.3221, simple_loss=0.3766, pruned_loss=0.1338, over 6739.00 frames.], tot_loss[loss=0.3028, simple_loss=0.3553, pruned_loss=0.1251, over 1423370.45 frames.], batch size: 31, lr: 1.66e-03 | |
2022-05-26 14:09:47,961 INFO [train.py:842] (3/4) Epoch 2, batch 6250, loss[loss=0.2931, simple_loss=0.3574, pruned_loss=0.1144, over 7111.00 frames.], tot_loss[loss=0.3029, simple_loss=0.3553, pruned_loss=0.1252, over 1426520.31 frames.], batch size: 21, lr: 1.65e-03 | |
2022-05-26 14:10:27,080 INFO [train.py:842] (3/4) Epoch 2, batch 6300, loss[loss=0.3365, simple_loss=0.3741, pruned_loss=0.1495, over 7201.00 frames.], tot_loss[loss=0.3011, simple_loss=0.3541, pruned_loss=0.1241, over 1430635.94 frames.], batch size: 26, lr: 1.65e-03 | |
2022-05-26 14:11:05,638 INFO [train.py:842] (3/4) Epoch 2, batch 6350, loss[loss=0.248, simple_loss=0.3223, pruned_loss=0.08682, over 6315.00 frames.], tot_loss[loss=0.3028, simple_loss=0.3559, pruned_loss=0.1248, over 1429864.80 frames.], batch size: 38, lr: 1.65e-03 | |
2022-05-26 14:11:44,206 INFO [train.py:842] (3/4) Epoch 2, batch 6400, loss[loss=0.3179, simple_loss=0.3699, pruned_loss=0.1329, over 6775.00 frames.], tot_loss[loss=0.3049, simple_loss=0.3569, pruned_loss=0.1265, over 1425028.12 frames.], batch size: 31, lr: 1.65e-03 | |
2022-05-26 14:12:23,053 INFO [train.py:842] (3/4) Epoch 2, batch 6450, loss[loss=0.2376, simple_loss=0.2906, pruned_loss=0.09236, over 7422.00 frames.], tot_loss[loss=0.303, simple_loss=0.3556, pruned_loss=0.1252, over 1425054.92 frames.], batch size: 18, lr: 1.64e-03 | |
2022-05-26 14:13:01,667 INFO [train.py:842] (3/4) Epoch 2, batch 6500, loss[loss=0.3421, simple_loss=0.387, pruned_loss=0.1486, over 7227.00 frames.], tot_loss[loss=0.3022, simple_loss=0.3548, pruned_loss=0.1248, over 1424925.62 frames.], batch size: 22, lr: 1.64e-03 | |
2022-05-26 14:13:40,421 INFO [train.py:842] (3/4) Epoch 2, batch 6550, loss[loss=0.2672, simple_loss=0.3292, pruned_loss=0.1026, over 7075.00 frames.], tot_loss[loss=0.3007, simple_loss=0.3539, pruned_loss=0.1238, over 1421260.50 frames.], batch size: 18, lr: 1.64e-03 | |
2022-05-26 14:14:18,985 INFO [train.py:842] (3/4) Epoch 2, batch 6600, loss[loss=0.2596, simple_loss=0.3084, pruned_loss=0.1054, over 7280.00 frames.], tot_loss[loss=0.2991, simple_loss=0.3525, pruned_loss=0.1228, over 1420820.08 frames.], batch size: 18, lr: 1.64e-03 | |
2022-05-26 14:14:57,590 INFO [train.py:842] (3/4) Epoch 2, batch 6650, loss[loss=0.2846, simple_loss=0.3506, pruned_loss=0.1093, over 7209.00 frames.], tot_loss[loss=0.303, simple_loss=0.3552, pruned_loss=0.1254, over 1414368.08 frames.], batch size: 23, lr: 1.63e-03 | |
2022-05-26 14:15:36,115 INFO [train.py:842] (3/4) Epoch 2, batch 6700, loss[loss=0.2426, simple_loss=0.2938, pruned_loss=0.09575, over 7292.00 frames.], tot_loss[loss=0.2996, simple_loss=0.3526, pruned_loss=0.1233, over 1419489.49 frames.], batch size: 17, lr: 1.63e-03 | |
2022-05-26 14:16:14,711 INFO [train.py:842] (3/4) Epoch 2, batch 6750, loss[loss=0.2895, simple_loss=0.3436, pruned_loss=0.1177, over 7230.00 frames.], tot_loss[loss=0.3, simple_loss=0.3537, pruned_loss=0.1231, over 1421944.63 frames.], batch size: 20, lr: 1.63e-03 | |
2022-05-26 14:16:53,063 INFO [train.py:842] (3/4) Epoch 2, batch 6800, loss[loss=0.334, simple_loss=0.3881, pruned_loss=0.1399, over 7117.00 frames.], tot_loss[loss=0.2998, simple_loss=0.3539, pruned_loss=0.1228, over 1424931.19 frames.], batch size: 21, lr: 1.63e-03 | |
2022-05-26 14:17:34,593 INFO [train.py:842] (3/4) Epoch 2, batch 6850, loss[loss=0.2549, simple_loss=0.3217, pruned_loss=0.09407, over 7330.00 frames.], tot_loss[loss=0.3002, simple_loss=0.3547, pruned_loss=0.1228, over 1421837.80 frames.], batch size: 20, lr: 1.63e-03 | |
2022-05-26 14:18:13,249 INFO [train.py:842] (3/4) Epoch 2, batch 6900, loss[loss=0.3825, simple_loss=0.42, pruned_loss=0.1725, over 7424.00 frames.], tot_loss[loss=0.3019, simple_loss=0.3555, pruned_loss=0.1241, over 1422544.10 frames.], batch size: 20, lr: 1.62e-03 | |
2022-05-26 14:18:52,142 INFO [train.py:842] (3/4) Epoch 2, batch 6950, loss[loss=0.2445, simple_loss=0.2978, pruned_loss=0.09559, over 7296.00 frames.], tot_loss[loss=0.3021, simple_loss=0.3552, pruned_loss=0.1245, over 1422019.41 frames.], batch size: 18, lr: 1.62e-03 | |
2022-05-26 14:19:30,571 INFO [train.py:842] (3/4) Epoch 2, batch 7000, loss[loss=0.3702, simple_loss=0.413, pruned_loss=0.1637, over 7334.00 frames.], tot_loss[loss=0.3013, simple_loss=0.3548, pruned_loss=0.1239, over 1424457.52 frames.], batch size: 21, lr: 1.62e-03 | |
2022-05-26 14:20:09,776 INFO [train.py:842] (3/4) Epoch 2, batch 7050, loss[loss=0.3769, simple_loss=0.4087, pruned_loss=0.1726, over 5189.00 frames.], tot_loss[loss=0.2988, simple_loss=0.3532, pruned_loss=0.1222, over 1427301.70 frames.], batch size: 52, lr: 1.62e-03 | |
2022-05-26 14:20:48,378 INFO [train.py:842] (3/4) Epoch 2, batch 7100, loss[loss=0.3244, simple_loss=0.3737, pruned_loss=0.1375, over 7114.00 frames.], tot_loss[loss=0.2986, simple_loss=0.3533, pruned_loss=0.122, over 1426429.00 frames.], batch size: 21, lr: 1.61e-03 | |
2022-05-26 14:21:26,958 INFO [train.py:842] (3/4) Epoch 2, batch 7150, loss[loss=0.2727, simple_loss=0.3568, pruned_loss=0.0943, over 7412.00 frames.], tot_loss[loss=0.3021, simple_loss=0.356, pruned_loss=0.1241, over 1422674.00 frames.], batch size: 21, lr: 1.61e-03 | |
2022-05-26 14:22:05,296 INFO [train.py:842] (3/4) Epoch 2, batch 7200, loss[loss=0.2312, simple_loss=0.2961, pruned_loss=0.08313, over 6990.00 frames.], tot_loss[loss=0.302, simple_loss=0.356, pruned_loss=0.124, over 1420277.64 frames.], batch size: 16, lr: 1.61e-03 | |
2022-05-26 14:22:44,443 INFO [train.py:842] (3/4) Epoch 2, batch 7250, loss[loss=0.2991, simple_loss=0.3675, pruned_loss=0.1153, over 7233.00 frames.], tot_loss[loss=0.3005, simple_loss=0.355, pruned_loss=0.1231, over 1425372.70 frames.], batch size: 20, lr: 1.61e-03 | |
2022-05-26 14:23:22,972 INFO [train.py:842] (3/4) Epoch 2, batch 7300, loss[loss=0.3662, simple_loss=0.3869, pruned_loss=0.1728, over 7228.00 frames.], tot_loss[loss=0.3022, simple_loss=0.3556, pruned_loss=0.1244, over 1427991.91 frames.], batch size: 21, lr: 1.60e-03 | |
2022-05-26 14:24:01,883 INFO [train.py:842] (3/4) Epoch 2, batch 7350, loss[loss=0.3136, simple_loss=0.3638, pruned_loss=0.1317, over 4979.00 frames.], tot_loss[loss=0.3008, simple_loss=0.3541, pruned_loss=0.1238, over 1423924.97 frames.], batch size: 52, lr: 1.60e-03 | |
2022-05-26 14:24:40,484 INFO [train.py:842] (3/4) Epoch 2, batch 7400, loss[loss=0.2948, simple_loss=0.3448, pruned_loss=0.1224, over 6982.00 frames.], tot_loss[loss=0.2999, simple_loss=0.3534, pruned_loss=0.1232, over 1423510.36 frames.], batch size: 16, lr: 1.60e-03 | |
2022-05-26 14:25:19,333 INFO [train.py:842] (3/4) Epoch 2, batch 7450, loss[loss=0.2674, simple_loss=0.3258, pruned_loss=0.1045, over 7353.00 frames.], tot_loss[loss=0.3004, simple_loss=0.3537, pruned_loss=0.1236, over 1418825.07 frames.], batch size: 19, lr: 1.60e-03 | |
2022-05-26 14:25:58,001 INFO [train.py:842] (3/4) Epoch 2, batch 7500, loss[loss=0.2676, simple_loss=0.3351, pruned_loss=0.1001, over 7217.00 frames.], tot_loss[loss=0.3001, simple_loss=0.3537, pruned_loss=0.1232, over 1419837.47 frames.], batch size: 21, lr: 1.60e-03 | |
2022-05-26 14:26:36,920 INFO [train.py:842] (3/4) Epoch 2, batch 7550, loss[loss=0.3969, simple_loss=0.4395, pruned_loss=0.1771, over 7400.00 frames.], tot_loss[loss=0.2978, simple_loss=0.3525, pruned_loss=0.1216, over 1420707.99 frames.], batch size: 21, lr: 1.59e-03 | |
2022-05-26 14:27:15,668 INFO [train.py:842] (3/4) Epoch 2, batch 7600, loss[loss=0.3536, simple_loss=0.3889, pruned_loss=0.1591, over 5214.00 frames.], tot_loss[loss=0.2973, simple_loss=0.3521, pruned_loss=0.1213, over 1420868.39 frames.], batch size: 52, lr: 1.59e-03 | |
2022-05-26 14:27:54,294 INFO [train.py:842] (3/4) Epoch 2, batch 7650, loss[loss=0.4145, simple_loss=0.4341, pruned_loss=0.1974, over 7404.00 frames.], tot_loss[loss=0.2981, simple_loss=0.3523, pruned_loss=0.1219, over 1421445.41 frames.], batch size: 21, lr: 1.59e-03 | |
2022-05-26 14:28:32,789 INFO [train.py:842] (3/4) Epoch 2, batch 7700, loss[loss=0.3837, simple_loss=0.413, pruned_loss=0.1771, over 7335.00 frames.], tot_loss[loss=0.2981, simple_loss=0.352, pruned_loss=0.1221, over 1422074.30 frames.], batch size: 22, lr: 1.59e-03 | |
2022-05-26 14:29:11,472 INFO [train.py:842] (3/4) Epoch 2, batch 7750, loss[loss=0.3012, simple_loss=0.3584, pruned_loss=0.122, over 6970.00 frames.], tot_loss[loss=0.2979, simple_loss=0.3526, pruned_loss=0.1216, over 1424092.93 frames.], batch size: 28, lr: 1.59e-03 | |
2022-05-26 14:29:50,006 INFO [train.py:842] (3/4) Epoch 2, batch 7800, loss[loss=0.3446, simple_loss=0.3872, pruned_loss=0.151, over 7149.00 frames.], tot_loss[loss=0.2985, simple_loss=0.3531, pruned_loss=0.122, over 1423096.84 frames.], batch size: 20, lr: 1.58e-03 | |
2022-05-26 14:30:28,810 INFO [train.py:842] (3/4) Epoch 2, batch 7850, loss[loss=0.3005, simple_loss=0.3642, pruned_loss=0.1183, over 7338.00 frames.], tot_loss[loss=0.2966, simple_loss=0.3515, pruned_loss=0.1208, over 1423634.63 frames.], batch size: 21, lr: 1.58e-03 | |
2022-05-26 14:31:07,204 INFO [train.py:842] (3/4) Epoch 2, batch 7900, loss[loss=0.3798, simple_loss=0.3934, pruned_loss=0.1831, over 5241.00 frames.], tot_loss[loss=0.2973, simple_loss=0.3518, pruned_loss=0.1214, over 1425831.81 frames.], batch size: 52, lr: 1.58e-03 | |
2022-05-26 14:31:46,049 INFO [train.py:842] (3/4) Epoch 2, batch 7950, loss[loss=0.3107, simple_loss=0.3551, pruned_loss=0.1332, over 7180.00 frames.], tot_loss[loss=0.2991, simple_loss=0.353, pruned_loss=0.1226, over 1428328.41 frames.], batch size: 18, lr: 1.58e-03 | |
2022-05-26 14:32:24,261 INFO [train.py:842] (3/4) Epoch 2, batch 8000, loss[loss=0.2525, simple_loss=0.3364, pruned_loss=0.08428, over 7214.00 frames.], tot_loss[loss=0.2973, simple_loss=0.3518, pruned_loss=0.1214, over 1426872.51 frames.], batch size: 21, lr: 1.57e-03 | |
2022-05-26 14:33:02,895 INFO [train.py:842] (3/4) Epoch 2, batch 8050, loss[loss=0.3253, simple_loss=0.3736, pruned_loss=0.1385, over 6374.00 frames.], tot_loss[loss=0.2965, simple_loss=0.3516, pruned_loss=0.1207, over 1425085.17 frames.], batch size: 38, lr: 1.57e-03 | |
2022-05-26 14:33:41,422 INFO [train.py:842] (3/4) Epoch 2, batch 8100, loss[loss=0.2726, simple_loss=0.3429, pruned_loss=0.1012, over 7119.00 frames.], tot_loss[loss=0.2965, simple_loss=0.3517, pruned_loss=0.1207, over 1427266.96 frames.], batch size: 26, lr: 1.57e-03 | |
2022-05-26 14:34:20,549 INFO [train.py:842] (3/4) Epoch 2, batch 8150, loss[loss=0.3108, simple_loss=0.3557, pruned_loss=0.1329, over 7068.00 frames.], tot_loss[loss=0.2958, simple_loss=0.351, pruned_loss=0.1203, over 1429564.72 frames.], batch size: 18, lr: 1.57e-03 | |
2022-05-26 14:34:58,972 INFO [train.py:842] (3/4) Epoch 2, batch 8200, loss[loss=0.2625, simple_loss=0.3232, pruned_loss=0.1009, over 7260.00 frames.], tot_loss[loss=0.2976, simple_loss=0.3523, pruned_loss=0.1214, over 1424123.91 frames.], batch size: 18, lr: 1.57e-03 | |
2022-05-26 14:35:38,135 INFO [train.py:842] (3/4) Epoch 2, batch 8250, loss[loss=0.2708, simple_loss=0.3454, pruned_loss=0.09806, over 6987.00 frames.], tot_loss[loss=0.2961, simple_loss=0.3508, pruned_loss=0.1207, over 1422567.10 frames.], batch size: 28, lr: 1.56e-03 | |
2022-05-26 14:36:16,479 INFO [train.py:842] (3/4) Epoch 2, batch 8300, loss[loss=0.3006, simple_loss=0.3641, pruned_loss=0.1185, over 7134.00 frames.], tot_loss[loss=0.2978, simple_loss=0.3522, pruned_loss=0.1217, over 1420468.30 frames.], batch size: 20, lr: 1.56e-03 | |
2022-05-26 14:36:55,271 INFO [train.py:842] (3/4) Epoch 2, batch 8350, loss[loss=0.4307, simple_loss=0.4366, pruned_loss=0.2124, over 5121.00 frames.], tot_loss[loss=0.299, simple_loss=0.3537, pruned_loss=0.1222, over 1417925.11 frames.], batch size: 52, lr: 1.56e-03 | |
2022-05-26 14:37:33,569 INFO [train.py:842] (3/4) Epoch 2, batch 8400, loss[loss=0.2617, simple_loss=0.3206, pruned_loss=0.1014, over 7127.00 frames.], tot_loss[loss=0.2999, simple_loss=0.3544, pruned_loss=0.1227, over 1417208.74 frames.], batch size: 17, lr: 1.56e-03 | |
2022-05-26 14:38:12,073 INFO [train.py:842] (3/4) Epoch 2, batch 8450, loss[loss=0.3602, simple_loss=0.4093, pruned_loss=0.1556, over 7213.00 frames.], tot_loss[loss=0.2996, simple_loss=0.3546, pruned_loss=0.1223, over 1413238.75 frames.], batch size: 22, lr: 1.56e-03 | |
2022-05-26 14:38:50,517 INFO [train.py:842] (3/4) Epoch 2, batch 8500, loss[loss=0.266, simple_loss=0.3233, pruned_loss=0.1044, over 7123.00 frames.], tot_loss[loss=0.2994, simple_loss=0.3546, pruned_loss=0.1221, over 1417758.02 frames.], batch size: 17, lr: 1.55e-03 | |
2022-05-26 14:39:29,180 INFO [train.py:842] (3/4) Epoch 2, batch 8550, loss[loss=0.2564, simple_loss=0.3184, pruned_loss=0.09718, over 7357.00 frames.], tot_loss[loss=0.2996, simple_loss=0.3549, pruned_loss=0.1222, over 1422680.65 frames.], batch size: 19, lr: 1.55e-03 | |
2022-05-26 14:40:07,844 INFO [train.py:842] (3/4) Epoch 2, batch 8600, loss[loss=0.3222, simple_loss=0.3736, pruned_loss=0.1354, over 6588.00 frames.], tot_loss[loss=0.2965, simple_loss=0.3523, pruned_loss=0.1204, over 1421311.22 frames.], batch size: 38, lr: 1.55e-03 | |
2022-05-26 14:40:46,977 INFO [train.py:842] (3/4) Epoch 2, batch 8650, loss[loss=0.3181, simple_loss=0.3635, pruned_loss=0.1363, over 7153.00 frames.], tot_loss[loss=0.2968, simple_loss=0.3522, pruned_loss=0.1207, over 1423705.99 frames.], batch size: 20, lr: 1.55e-03 | |
2022-05-26 14:41:25,741 INFO [train.py:842] (3/4) Epoch 2, batch 8700, loss[loss=0.2248, simple_loss=0.2966, pruned_loss=0.07652, over 7066.00 frames.], tot_loss[loss=0.2954, simple_loss=0.3509, pruned_loss=0.12, over 1422456.19 frames.], batch size: 18, lr: 1.55e-03 | |
2022-05-26 14:42:04,182 INFO [train.py:842] (3/4) Epoch 2, batch 8750, loss[loss=0.3533, simple_loss=0.3773, pruned_loss=0.1646, over 7171.00 frames.], tot_loss[loss=0.2971, simple_loss=0.352, pruned_loss=0.1211, over 1420891.72 frames.], batch size: 18, lr: 1.54e-03 | |
2022-05-26 14:42:42,561 INFO [train.py:842] (3/4) Epoch 2, batch 8800, loss[loss=0.4379, simple_loss=0.4582, pruned_loss=0.2089, over 7322.00 frames.], tot_loss[loss=0.2984, simple_loss=0.3525, pruned_loss=0.1221, over 1413784.11 frames.], batch size: 22, lr: 1.54e-03 | |
2022-05-26 14:43:21,150 INFO [train.py:842] (3/4) Epoch 2, batch 8850, loss[loss=0.3707, simple_loss=0.4164, pruned_loss=0.1625, over 7288.00 frames.], tot_loss[loss=0.2988, simple_loss=0.3529, pruned_loss=0.1223, over 1412079.63 frames.], batch size: 24, lr: 1.54e-03 | |
2022-05-26 14:43:59,311 INFO [train.py:842] (3/4) Epoch 2, batch 8900, loss[loss=0.4548, simple_loss=0.4508, pruned_loss=0.2295, over 6693.00 frames.], tot_loss[loss=0.3009, simple_loss=0.3546, pruned_loss=0.1236, over 1402096.89 frames.], batch size: 31, lr: 1.54e-03 | |
2022-05-26 14:44:37,788 INFO [train.py:842] (3/4) Epoch 2, batch 8950, loss[loss=0.2906, simple_loss=0.3473, pruned_loss=0.1169, over 7114.00 frames.], tot_loss[loss=0.2993, simple_loss=0.3535, pruned_loss=0.1226, over 1401797.79 frames.], batch size: 21, lr: 1.54e-03 | |
2022-05-26 14:45:16,058 INFO [train.py:842] (3/4) Epoch 2, batch 9000, loss[loss=0.2696, simple_loss=0.3261, pruned_loss=0.1066, over 7273.00 frames.], tot_loss[loss=0.2991, simple_loss=0.3536, pruned_loss=0.1223, over 1397092.03 frames.], batch size: 18, lr: 1.53e-03 | |
2022-05-26 14:45:16,060 INFO [train.py:862] (3/4) Computing validation loss | |
2022-05-26 14:45:25,235 INFO [train.py:871] (3/4) Epoch 2, validation: loss=0.2179, simple_loss=0.3144, pruned_loss=0.06069, over 868885.00 frames. | |
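
[Editor's note on the validation trend] The four validation passes logged in this section show a steady improvement on the same 868885 held-out frames: 0.2508 at Epoch 1, batch 9000, then 0.238, 0.2262 and 0.2179 at Epoch 2, batches 3000, 6000 and 9000. The step-to-step relative change, computed directly from the values copied out of the validation lines above:

    valid_losses = [0.2508, 0.238, 0.2262, 0.2179]   # copied from the validation lines above
    for prev, cur in zip(valid_losses, valid_losses[1:]):
        print(f"{prev:.4f} -> {cur:.4f}  ({100 * (cur - prev) / prev:+.1f}%)")
    # Prints -5.1%, -5.0%, -3.7%: the gain per 3000 batches is gradually shrinking.
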
2022-05-26 14:46:03,595 INFO [train.py:842] (3/4) Epoch 2, batch 9050, loss[loss=0.2418, simple_loss=0.3079, pruned_loss=0.08781, over 7295.00 frames.], tot_loss[loss=0.3012, simple_loss=0.3549, pruned_loss=0.1238, over 1381892.84 frames.], batch size: 18, lr: 1.53e-03 | |
2022-05-26 14:46:40,943 INFO [train.py:842] (3/4) Epoch 2, batch 9100, loss[loss=0.3112, simple_loss=0.3576, pruned_loss=0.1324, over 4951.00 frames.], tot_loss[loss=0.3053, simple_loss=0.3577, pruned_loss=0.1265, over 1328823.15 frames.], batch size: 52, lr: 1.53e-03 | |
2022-05-26 14:47:18,821 INFO [train.py:842] (3/4) Epoch 2, batch 9150, loss[loss=0.3026, simple_loss=0.3562, pruned_loss=0.1245, over 4852.00 frames.], tot_loss[loss=0.3158, simple_loss=0.3646, pruned_loss=0.1335, over 1256508.25 frames.], batch size: 52, lr: 1.53e-03 | |