07/30/2024 02:49:08 - INFO - llamafactory.hparams.parser - Process rank: 7, device: cuda:7, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/30/2024 02:49:08 - INFO - llamafactory.hparams.parser - Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/30/2024 02:49:08 - INFO - llamafactory.hparams.parser - Process rank: 4, device: cuda:4, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/30/2024 02:49:08 - INFO - llamafactory.hparams.parser - Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/30/2024 02:49:08 - INFO - llamafactory.hparams.parser - Process rank: 6, device: cuda:6, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/30/2024 02:49:08 - INFO - llamafactory.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/30/2024 02:49:09 - INFO - llamafactory.data.template - Replace eos token: <|eot_id|>
07/30/2024 02:49:09 - INFO - llamafactory.data.template - Add pad token: <|eot_id|>
[INFO|tokenization_utils_base.py:2289] 2024-07-30 02:49:09,083 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/b2a4d0f33b41fcd59a6d31662cc63b8d53367e1e/tokenizer.json
[INFO|tokenization_utils_base.py:2289] 2024-07-30 02:49:09,083 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2289] 2024-07-30 02:49:09,083 >> loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/b2a4d0f33b41fcd59a6d31662cc63b8d53367e1e/special_tokens_map.json
[INFO|tokenization_utils_base.py:2289] 2024-07-30 02:49:09,083 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/b2a4d0f33b41fcd59a6d31662cc63b8d53367e1e/tokenizer_config.json
[INFO|tokenization_utils_base.py:2533] 2024-07-30 02:49:09,345 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|template.py:270] 2024-07-30 02:49:09,345 >> Replace eos token: <|eot_id|>
[INFO|template.py:372] 2024-07-30 02:49:09,345 >> Add pad token: <|eot_id|>
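
The two template lines above mean the tokenizer's EOS is pointed at <|eot_id|> and the same token is reused for padding (the Llama 3.1 tokenizer ships without a dedicated pad token). A minimal sketch of the equivalent plain-transformers calls, assuming the stock AutoTokenizer API rather than LLaMA-Factory's own template code:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
    tokenizer.eos_token = "<|eot_id|>"  # end-of-turn marker doubles as EOS for chat SFT
    tokenizer.pad_token = "<|eot_id|>"  # no dedicated pad token, so EOS is reused
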
[INFO|loader.py:52] 2024-07-30 02:49:09,346 >> Loading dataset 0716_truthfulqa_benchmark_train_2.json...
07/30/2024 02:49:11 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_train_2.json...
[INFO|configuration_utils.py:733] 2024-07-30 02:49:14,919 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/b2a4d0f33b41fcd59a6d31662cc63b8d53367e1e/config.json
[INFO|configuration_utils.py:800] 2024-07-30 02:49:14,925 >> Model config LlamaConfig {
  "_name_or_path": "meta-llama/Meta-Llama-3.1-8B-Instruct",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.43.3",
  "use_cache": true,
  "vocab_size": 128256
}
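
As a sanity check, the dimensions in this config reproduce the parameter count reported later in the log (trainable params: 8,030,261,248). A quick back-of-the-envelope script using only the config values above:

    hidden, inter, layers, vocab = 4096, 14336, 32, 128256
    kv_dim = 8 * (hidden // 32)                        # num_key_value_heads * head_dim = 1024
    attn = 2 * hidden * hidden + 2 * hidden * kv_dim   # q/o projections + GQA k/v projections
    mlp = 3 * hidden * inter                           # gate, up, down
    block = attn + mlp + 2 * hidden                    # plus two RMSNorm weight vectors
    total = 2 * vocab * hidden + layers * block + hidden  # embeddings + untied lm_head + final norm
    print(total)                                       # 8030261248
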
[INFO|modeling_utils.py:3634] 2024-07-30 02:49:14,976 >> loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/b2a4d0f33b41fcd59a6d31662cc63b8d53367e1e/model.safetensors.index.json
[INFO|modeling_utils.py:1572] 2024-07-30 02:49:14,978 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1038] 2024-07-30 02:49:14,981 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "eos_token_id": [
    128001,
    128008,
    128009
  ]
}
[INFO|modeling_utils.py:4463] 2024-07-30 02:49:19,162 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4471] 2024-07-30 02:49:19,162 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at meta-llama/Meta-Llama-3.1-8B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:993] 2024-07-30 02:49:19,337 >> loading configuration file generation_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B-Instruct/snapshots/b2a4d0f33b41fcd59a6d31662cc63b8d53367e1e/generation_config.json
[INFO|configuration_utils.py:1038] 2024-07-30 02:49:19,337 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "temperature": 0.6,
  "top_p": 0.9
}
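
These generation defaults come straight from the model's generation_config.json (nucleus sampling at temperature 0.6). The same object can be built by hand; a sketch, with the token-id comments as an assumption about the Llama 3.1 vocabulary:

    from transformers import GenerationConfig

    gen_config = GenerationConfig(
        bos_token_id=128000,
        eos_token_id=[128001, 128008, 128009],  # <|end_of_text|>, <|eom_id|>, <|eot_id|>
        do_sample=True,
        temperature=0.6,
        top_p=0.9,
    )
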
[INFO|checkpointing.py:103] 2024-07-30 02:49:19,344 >> Gradient checkpointing enabled.
[INFO|attention.py:84] 2024-07-30 02:49:19,345 >> Using torch SDPA for faster training and inference.
[INFO|adapter.py:302] 2024-07-30 02:49:19,345 >> Upcasting trainable params to float32.
[INFO|adapter.py:48] 2024-07-30 02:49:19,345 >> Fine-tuning method: Full
[INFO|loader.py:196] 2024-07-30 02:49:19,389 >> trainable params: 8,030,261,248 || all params: 8,030,261,248 || trainable%: 100.0000
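
The model-preparation steps above map onto standard transformers calls; a rough sketch (LLaMA-Factory's loader wires these up internally, and the blanket float32 upcast shown here is a simplification of its adapter logic):

    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Meta-Llama-3.1-8B-Instruct",
        torch_dtype=torch.bfloat16,
        attn_implementation="sdpa",        # "Using torch SDPA ..."
    )
    model.gradient_checkpointing_enable()  # recompute activations to save memory
    for param in model.parameters():       # "Upcasting trainable params to float32."
        if param.requires_grad:
            param.data = param.data.float()
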
07/30/2024 02:49:19 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
07/30/2024 02:49:19 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/30/2024 02:49:19 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
07/30/2024 02:49:19 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
07/30/2024 02:49:19 - INFO - llamafactory.model.loader - trainable params: 8,030,261,248 || all params: 8,030,261,248 || trainable%: 100.0000
[INFO|trainer.py:648] 2024-07-30 02:49:19,394 >> Using auto half precision backend
[INFO|trainer.py:2134] 2024-07-30 02:49:41,827 >> ***** Running training *****
[INFO|trainer.py:2135] 2024-07-30 02:49:41,827 >> Num examples = 4,958
[INFO|trainer.py:2136] 2024-07-30 02:49:41,827 >> Num Epochs = 5
[INFO|trainer.py:2137] 2024-07-30 02:49:41,827 >> Instantaneous batch size per device = 2
[INFO|trainer.py:2140] 2024-07-30 02:49:41,828 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:2141] 2024-07-30 02:49:41,828 >> Gradient Accumulation steps = 8
[INFO|trainer.py:2142] 2024-07-30 02:49:41,828 >> Total optimization steps = 190
[INFO|trainer.py:2143] 2024-07-30 02:49:41,829 >> Number of trainable parameters = 8,030,261,248
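
The header numbers are mutually consistent: 2 samples per device × 8 GPUs × 8 accumulation steps gives the effective batch of 128, and 4,958 examples at that batch size yield 38 optimizer steps per epoch (the Trainer floors the division), so 5 epochs come to the 190 total optimization steps reported. In code:

    per_device, n_gpus, grad_accum = 2, 8, 8
    effective_batch = per_device * n_gpus * grad_accum  # 128
    steps_per_epoch = 4958 // effective_batch           # 38
    print(effective_batch, steps_per_epoch * 5)         # 128 190
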
[INFO|callbacks.py:310] 2024-07-30 02:49:56,428 >> {'loss': 12.4760, 'learning_rate': 5.0000e-07, 'epoch': 0.03, 'throughput': 439.53}
[INFO|callbacks.py:310] 2024-07-30 02:50:09,604 >> {'loss': 12.1047, 'learning_rate': 1.0000e-06, 'epoch': 0.05, 'throughput': 467.79}
[INFO|callbacks.py:310] 2024-07-30 02:50:22,768 >> {'loss': 12.0404, 'learning_rate': 1.5000e-06, 'epoch': 0.08, 'throughput': 475.65}
[INFO|callbacks.py:310] 2024-07-30 02:50:35,923 >> {'loss': 10.5293, 'learning_rate': 2.0000e-06, 'epoch': 0.10, 'throughput': 479.47}
[INFO|callbacks.py:310] 2024-07-30 02:50:49,106 >> {'loss': 8.3117, 'learning_rate': 2.5000e-06, 'epoch': 0.13, 'throughput': 479.70}
[INFO|callbacks.py:310] 2024-07-30 02:51:02,273 >> {'loss': 6.0338, 'learning_rate': 3.0000e-06, 'epoch': 0.15, 'throughput': 480.34}
[INFO|callbacks.py:310] 2024-07-30 02:51:15,440 >> {'loss': 4.8226, 'learning_rate': 3.5000e-06, 'epoch': 0.18, 'throughput': 480.63}
[INFO|callbacks.py:310] 2024-07-30 02:51:28,623 >> {'loss': 2.9485, 'learning_rate': 4.0000e-06, 'epoch': 0.21, 'throughput': 480.63}
[INFO|callbacks.py:310] 2024-07-30 02:51:41,799 >> {'loss': 0.9784, 'learning_rate': 4.5000e-06, 'epoch': 0.23, 'throughput': 478.52}
[INFO|callbacks.py:310] 2024-07-30 02:51:54,970 >> {'loss': 0.5759, 'learning_rate': 5.0000e-06, 'epoch': 0.26, 'throughput': 478.41}
[INFO|callbacks.py:310] 2024-07-30 02:52:08,132 >> {'loss': 1.1284, 'learning_rate': 4.9996e-06, 'epoch': 0.28, 'throughput': 478.35}
[INFO|callbacks.py:310] 2024-07-30 02:52:21,287 >> {'loss': 1.1272, 'learning_rate': 4.9985e-06, 'epoch': 0.31, 'throughput': 479.03}
[INFO|callbacks.py:310] 2024-07-30 02:52:34,460 >> {'loss': 0.9501, 'learning_rate': 4.9966e-06, 'epoch': 0.34, 'throughput': 479.08}
[INFO|callbacks.py:310] 2024-07-30 02:52:47,615 >> {'loss': 0.4610, 'learning_rate': 4.9939e-06, 'epoch': 0.36, 'throughput': 478.92}
[INFO|callbacks.py:310] 2024-07-30 02:53:00,776 >> {'loss': 1.2016, 'learning_rate': 4.9905e-06, 'epoch': 0.39, 'throughput': 479.89}
[INFO|callbacks.py:310] 2024-07-30 02:53:13,956 >> {'loss': 0.3310, 'learning_rate': 4.9863e-06, 'epoch': 0.41, 'throughput': 480.39}
[INFO|callbacks.py:310] 2024-07-30 02:53:27,114 >> {'loss': 0.3565, 'learning_rate': 4.9814e-06, 'epoch': 0.44, 'throughput': 480.60}
[INFO|callbacks.py:310] 2024-07-30 02:53:40,291 >> {'loss': 0.6088, 'learning_rate': 4.9757e-06, 'epoch': 0.46, 'throughput': 480.01}
[INFO|callbacks.py:310] 2024-07-30 02:53:53,463 >> {'loss': 0.2701, 'learning_rate': 4.9692e-06, 'epoch': 0.49, 'throughput': 480.13}
[INFO|callbacks.py:310] 2024-07-30 02:54:06,625 >> {'loss': 0.7005, 'learning_rate': 4.9620e-06, 'epoch': 0.52, 'throughput': 480.07}
[INFO|callbacks.py:310] 2024-07-30 02:54:19,803 >> {'loss': 0.3424, 'learning_rate': 4.9541e-06, 'epoch': 0.54, 'throughput': 481.03}
[INFO|callbacks.py:310] 2024-07-30 02:54:32,946 >> {'loss': 0.6274, 'learning_rate': 4.9454e-06, 'epoch': 0.57, 'throughput': 480.58}
[INFO|callbacks.py:310] 2024-07-30 02:54:46,125 >> {'loss': 0.4183, 'learning_rate': 4.9359e-06, 'epoch': 0.59, 'throughput': 480.64}
[INFO|callbacks.py:310] 2024-07-30 02:54:59,279 >> {'loss': 0.1517, 'learning_rate': 4.9257e-06, 'epoch': 0.62, 'throughput': 481.29}
[INFO|callbacks.py:310] 2024-07-30 02:55:12,444 >> {'loss': 0.1906, 'learning_rate': 4.9148e-06, 'epoch': 0.65, 'throughput': 480.85}
[INFO|callbacks.py:310] 2024-07-30 02:55:25,606 >> {'loss': 0.1537, 'learning_rate': 4.9032e-06, 'epoch': 0.67, 'throughput': 480.78}
[INFO|callbacks.py:310] 2024-07-30 02:55:38,769 >> {'loss': 0.1957, 'learning_rate': 4.8908e-06, 'epoch': 0.70, 'throughput': 481.34}
[INFO|callbacks.py:310] 2024-07-30 02:55:51,936 >> {'loss': 0.3026, 'learning_rate': 4.8776e-06, 'epoch': 0.72, 'throughput': 481.25}
[INFO|callbacks.py:310] 2024-07-30 02:56:05,100 >> {'loss': 0.2031, 'learning_rate': 4.8638e-06, 'epoch': 0.75, 'throughput': 481.25}
[INFO|callbacks.py:310] 2024-07-30 02:56:18,268 >> {'loss': 0.1461, 'learning_rate': 4.8492e-06, 'epoch': 0.77, 'throughput': 481.53}
[INFO|callbacks.py:310] 2024-07-30 02:56:31,447 >> {'loss': 0.1873, 'learning_rate': 4.8340e-06, 'epoch': 0.80, 'throughput': 481.00}
[INFO|callbacks.py:310] 2024-07-30 02:56:44,603 >> {'loss': 0.1388, 'learning_rate': 4.8180e-06, 'epoch': 0.83, 'throughput': 480.75}
[INFO|callbacks.py:310] 2024-07-30 02:56:57,774 >> {'loss': 0.1127, 'learning_rate': 4.8013e-06, 'epoch': 0.85, 'throughput': 481.16}
[INFO|callbacks.py:310] 2024-07-30 02:57:10,949 >> {'loss': 0.1243, 'learning_rate': 4.7839e-06, 'epoch': 0.88, 'throughput': 480.87}
[INFO|callbacks.py:310] 2024-07-30 02:57:24,122 >> {'loss': 0.0969, 'learning_rate': 4.7658e-06, 'epoch': 0.90, 'throughput': 480.63}
[INFO|callbacks.py:310] 2024-07-30 02:57:37,282 >> {'loss': 0.0890, 'learning_rate': 4.7470e-06, 'epoch': 0.93, 'throughput': 480.62}
[INFO|callbacks.py:310] 2024-07-30 02:57:50,444 >> {'loss': 0.1703, 'learning_rate': 4.7275e-06, 'epoch': 0.95, 'throughput': 481.20}
[INFO|callbacks.py:310] 2024-07-30 02:58:03,606 >> {'loss': 0.1132, 'learning_rate': 4.7074e-06, 'epoch': 0.98, 'throughput': 481.46}
[INFO|callbacks.py:310] 2024-07-30 02:58:16,767 >> {'loss': 0.1294, 'learning_rate': 4.6865e-06, 'epoch': 1.01, 'throughput': 481.77}
[INFO|callbacks.py:310] 2024-07-30 02:58:29,908 >> {'loss': 0.0881, 'learning_rate': 4.6651e-06, 'epoch': 1.03, 'throughput': 481.78}
[INFO|callbacks.py:310] 2024-07-30 02:58:43,077 >> {'loss': 0.0504, 'learning_rate': 4.6429e-06, 'epoch': 1.06, 'throughput': 481.55}
[INFO|callbacks.py:310] 2024-07-30 02:58:56,238 >> {'loss': 0.0723, 'learning_rate': 4.6201e-06, 'epoch': 1.08, 'throughput': 481.72}
[INFO|callbacks.py:310] 2024-07-30 02:59:09,398 >> {'loss': 0.0726, 'learning_rate': 4.5967e-06, 'epoch': 1.11, 'throughput': 481.69}
[INFO|callbacks.py:310] 2024-07-30 02:59:22,561 >> {'loss': 0.1355, 'learning_rate': 4.5726e-06, 'epoch': 1.14, 'throughput': 481.60}
[INFO|callbacks.py:310] 2024-07-30 02:59:35,735 >> {'loss': 0.0713, 'learning_rate': 4.5479e-06, 'epoch': 1.16, 'throughput': 481.51}
[INFO|callbacks.py:310] 2024-07-30 02:59:48,896 >> {'loss': 0.0796, 'learning_rate': 4.5225e-06, 'epoch': 1.19, 'throughput': 481.50}
[INFO|callbacks.py:310] 2024-07-30 03:00:02,045 >> {'loss': 0.0778, 'learning_rate': 4.4966e-06, 'epoch': 1.21, 'throughput': 481.46}
[INFO|callbacks.py:310] 2024-07-30 03:00:15,214 >> {'loss': 0.0606, 'learning_rate': 4.4700e-06, 'epoch': 1.24, 'throughput': 481.40}
[INFO|callbacks.py:310] 2024-07-30 03:00:28,372 >> {'loss': 0.0411, 'learning_rate': 4.4429e-06, 'epoch': 1.26, 'throughput': 481.53}
[INFO|callbacks.py:310] 2024-07-30 03:00:41,534 >> {'loss': 0.0773, 'learning_rate': 4.4151e-06, 'epoch': 1.29, 'throughput': 481.53}
[INFO|callbacks.py:310] 2024-07-30 03:00:54,678 >> {'loss': 0.0355, 'learning_rate': 4.3868e-06, 'epoch': 1.32, 'throughput': 481.73}
[INFO|callbacks.py:310] 2024-07-30 03:01:07,849 >> {'loss': 0.0607, 'learning_rate': 4.3579e-06, 'epoch': 1.34, 'throughput': 481.48}
[INFO|callbacks.py:310] 2024-07-30 03:01:21,013 >> {'loss': 0.0542, 'learning_rate': 4.3284e-06, 'epoch': 1.37, 'throughput': 481.43}
[INFO|callbacks.py:310] 2024-07-30 03:01:34,182 >> {'loss': 0.0629, 'learning_rate': 4.2983e-06, 'epoch': 1.39, 'throughput': 481.42}
[INFO|callbacks.py:310] 2024-07-30 03:01:47,353 >> {'loss': 0.0519, 'learning_rate': 4.2678e-06, 'epoch': 1.42, 'throughput': 481.73}
[INFO|callbacks.py:310] 2024-07-30 03:02:00,520 >> {'loss': 0.0481, 'learning_rate': 4.2366e-06, 'epoch': 1.45, 'throughput': 481.67}
[INFO|callbacks.py:310] 2024-07-30 03:02:13,678 >> {'loss': 0.0659, 'learning_rate': 4.2050e-06, 'epoch': 1.47, 'throughput': 481.67}
[INFO|callbacks.py:310] 2024-07-30 03:02:26,831 >> {'loss': 0.0980, 'learning_rate': 4.1728e-06, 'epoch': 1.50, 'throughput': 482.09}
[INFO|callbacks.py:310] 2024-07-30 03:02:40,005 >> {'loss': 0.0411, 'learning_rate': 4.1401e-06, 'epoch': 1.52, 'throughput': 482.24}
[INFO|callbacks.py:310] 2024-07-30 03:02:53,178 >> {'loss': 0.0396, 'learning_rate': 4.1070e-06, 'epoch': 1.55, 'throughput': 481.97}
[INFO|callbacks.py:310] 2024-07-30 03:03:06,330 >> {'loss': 0.0413, 'learning_rate': 4.0733e-06, 'epoch': 1.57, 'throughput': 481.73}
[INFO|callbacks.py:310] 2024-07-30 03:03:19,497 >> {'loss': 0.1195, 'learning_rate': 4.0392e-06, 'epoch': 1.60, 'throughput': 482.02}
[INFO|callbacks.py:310] 2024-07-30 03:03:32,670 >> {'loss': 0.0534, 'learning_rate': 4.0045e-06, 'epoch': 1.63, 'throughput': 482.06}
[INFO|callbacks.py:310] 2024-07-30 03:03:45,839 >> {'loss': 0.0662, 'learning_rate': 3.9695e-06, 'epoch': 1.65, 'throughput': 481.93}
[INFO|callbacks.py:310] 2024-07-30 03:03:59,009 >> {'loss': 0.0462, 'learning_rate': 3.9339e-06, 'epoch': 1.68, 'throughput': 481.86}
[INFO|callbacks.py:310] 2024-07-30 03:04:12,160 >> {'loss': 0.0899, 'learning_rate': 3.8980e-06, 'epoch': 1.70, 'throughput': 481.90}
[INFO|callbacks.py:310] 2024-07-30 03:04:25,334 >> {'loss': 0.0691, 'learning_rate': 3.8616e-06, 'epoch': 1.73, 'throughput': 482.08}
[INFO|callbacks.py:310] 2024-07-30 03:04:38,487 >> {'loss': 0.1022, 'learning_rate': 3.8248e-06, 'epoch': 1.75, 'throughput': 482.24}
[INFO|callbacks.py:310] 2024-07-30 03:04:51,658 >> {'loss': 0.1062, 'learning_rate': 3.7876e-06, 'epoch': 1.78, 'throughput': 482.17}
[INFO|callbacks.py:310] 2024-07-30 03:05:04,814 >> {'loss': 0.0491, 'learning_rate': 3.7500e-06, 'epoch': 1.81, 'throughput': 482.44}
[INFO|callbacks.py:310] 2024-07-30 03:05:17,972 >> {'loss': 0.1507, 'learning_rate': 3.7120e-06, 'epoch': 1.83, 'throughput': 482.42}
[INFO|callbacks.py:310] 2024-07-30 03:05:31,123 >> {'loss': 0.1234, 'learning_rate': 3.6737e-06, 'epoch': 1.86, 'throughput': 482.31}
[INFO|callbacks.py:310] 2024-07-30 03:05:44,271 >> {'loss': 0.0450, 'learning_rate': 3.6350e-06, 'epoch': 1.88, 'throughput': 482.26}
[INFO|callbacks.py:310] 2024-07-30 03:05:57,439 >> {'loss': 0.0615, 'learning_rate': 3.5959e-06, 'epoch': 1.91, 'throughput': 482.50}
[INFO|callbacks.py:310] 2024-07-30 03:06:10,604 >> {'loss': 0.1961, 'learning_rate': 3.5565e-06, 'epoch': 1.94, 'throughput': 482.59}
[INFO|callbacks.py:310] 2024-07-30 03:06:23,764 >> {'loss': 0.2311, 'learning_rate': 3.5168e-06, 'epoch': 1.96, 'throughput': 482.60}
[INFO|callbacks.py:310] 2024-07-30 03:06:36,916 >> {'loss': 0.1556, 'learning_rate': 3.4768e-06, 'epoch': 1.99, 'throughput': 482.48}
[INFO|callbacks.py:310] 2024-07-30 03:06:50,068 >> {'loss': 0.0626, 'learning_rate': 3.4365e-06, 'epoch': 2.01, 'throughput': 482.38}
[INFO|callbacks.py:310] 2024-07-30 03:07:03,233 >> {'loss': 0.0197, 'learning_rate': 3.3959e-06, 'epoch': 2.04, 'throughput': 482.41}
[INFO|callbacks.py:310] 2024-07-30 03:07:16,401 >> {'loss': 0.0057, 'learning_rate': 3.3551e-06, 'epoch': 2.06, 'throughput': 482.65}
[INFO|callbacks.py:310] 2024-07-30 03:07:29,571 >> {'loss': 0.0290, 'learning_rate': 3.3139e-06, 'epoch': 2.09, 'throughput': 482.53}
[INFO|callbacks.py:310] 2024-07-30 03:07:42,734 >> {'loss': 0.0593, 'learning_rate': 3.2725e-06, 'epoch': 2.12, 'throughput': 482.74}
[INFO|callbacks.py:310] 2024-07-30 03:07:55,878 >> {'loss': 0.0455, 'learning_rate': 3.2309e-06, 'epoch': 2.14, 'throughput': 482.77}
[INFO|callbacks.py:310] 2024-07-30 03:08:09,034 >> {'loss': 0.0325, 'learning_rate': 3.1891e-06, 'epoch': 2.17, 'throughput': 482.74}
[INFO|callbacks.py:310] 2024-07-30 03:08:22,192 >> {'loss': 0.0071, 'learning_rate': 3.1470e-06, 'epoch': 2.19, 'throughput': 482.90}
[INFO|callbacks.py:310] 2024-07-30 03:08:35,352 >> {'loss': 0.0336, 'learning_rate': 3.1048e-06, 'epoch': 2.22, 'throughput': 482.86}
[INFO|callbacks.py:310] 2024-07-30 03:08:48,514 >> {'loss': 0.0389, 'learning_rate': 3.0624e-06, 'epoch': 2.25, 'throughput': 482.92}
[INFO|callbacks.py:310] 2024-07-30 03:09:01,669 >> {'loss': 0.0016, 'learning_rate': 3.0198e-06, 'epoch': 2.27, 'throughput': 482.87}
[INFO|callbacks.py:310] 2024-07-30 03:09:14,848 >> {'loss': 0.0625, 'learning_rate': 2.9770e-06, 'epoch': 2.30, 'throughput': 483.05}
[INFO|callbacks.py:310] 2024-07-30 03:09:27,996 >> {'loss': 0.0201, 'learning_rate': 2.9341e-06, 'epoch': 2.32, 'throughput': 482.95}
[INFO|callbacks.py:310] 2024-07-30 03:09:41,158 >> {'loss': 0.0126, 'learning_rate': 2.8911e-06, 'epoch': 2.35, 'throughput': 482.79}
[INFO|callbacks.py:310] 2024-07-30 03:09:54,341 >> {'loss': 0.0148, 'learning_rate': 2.8479e-06, 'epoch': 2.37, 'throughput': 482.91}
[INFO|callbacks.py:310] 2024-07-30 03:10:07,511 >> {'loss': 0.0140, 'learning_rate': 2.8047e-06, 'epoch': 2.40, 'throughput': 482.83}
[INFO|callbacks.py:310] 2024-07-30 03:10:20,674 >> {'loss': 0.0096, 'learning_rate': 2.7613e-06, 'epoch': 2.43, 'throughput': 482.82}
[INFO|callbacks.py:310] 2024-07-30 03:10:33,834 >> {'loss': 0.0249, 'learning_rate': 2.7179e-06, 'epoch': 2.45, 'throughput': 482.82}
[INFO|callbacks.py:310] 2024-07-30 03:10:47,009 >> {'loss': 0.0358, 'learning_rate': 2.6744e-06, 'epoch': 2.48, 'throughput': 482.76}
[INFO|callbacks.py:310] 2024-07-30 03:11:00,178 >> {'loss': 0.0494, 'learning_rate': 2.6308e-06, 'epoch': 2.50, 'throughput': 482.67}
[INFO|callbacks.py:310] 2024-07-30 03:11:13,329 >> {'loss': 0.0092, 'learning_rate': 2.5872e-06, 'epoch': 2.53, 'throughput': 482.63}
[INFO|callbacks.py:310] 2024-07-30 03:11:26,479 >> {'loss': 0.0215, 'learning_rate': 2.5436e-06, 'epoch': 2.55, 'throughput': 482.59}
[INFO|callbacks.py:310] 2024-07-30 03:11:39,631 >> {'loss': 0.0122, 'learning_rate': 2.5000e-06, 'epoch': 2.58, 'throughput': 482.73}
[INFO|callbacks.py:310] 2024-07-30 03:11:52,780 >> {'loss': 0.0296, 'learning_rate': 2.4564e-06, 'epoch': 2.61, 'throughput': 482.73}
[INFO|callbacks.py:310] 2024-07-30 03:12:05,936 >> {'loss': 0.0089, 'learning_rate': 2.4128e-06, 'epoch': 2.63, 'throughput': 482.78}
[INFO|callbacks.py:310] 2024-07-30 03:12:19,112 >> {'loss': 0.0406, 'learning_rate': 2.3692e-06, 'epoch': 2.66, 'throughput': 482.65}
[INFO|callbacks.py:310] 2024-07-30 03:12:32,273 >> {'loss': 0.0114, 'learning_rate': 2.3256e-06, 'epoch': 2.68, 'throughput': 482.82}
[INFO|callbacks.py:310] 2024-07-30 03:12:45,431 >> {'loss': 0.0396, 'learning_rate': 2.2821e-06, 'epoch': 2.71, 'throughput': 482.81}
[INFO|callbacks.py:310] 2024-07-30 03:12:58,595 >> {'loss': 0.0077, 'learning_rate': 2.2387e-06, 'epoch': 2.74, 'throughput': 482.67}
[INFO|callbacks.py:310] 2024-07-30 03:13:11,749 >> {'loss': 0.0044, 'learning_rate': 2.1953e-06, 'epoch': 2.76, 'throughput': 482.63}
[INFO|callbacks.py:310] 2024-07-30 03:13:24,906 >> {'loss': 0.0045, 'learning_rate': 2.1521e-06, 'epoch': 2.79, 'throughput': 482.54}
[INFO|callbacks.py:310] 2024-07-30 03:13:38,052 >> {'loss': 0.0405, 'learning_rate': 2.1089e-06, 'epoch': 2.81, 'throughput': 482.51}
[INFO|callbacks.py:310] 2024-07-30 03:13:51,205 >> {'loss': 0.0225, 'learning_rate': 2.0659e-06, 'epoch': 2.84, 'throughput': 482.55}
[INFO|callbacks.py:310] 2024-07-30 03:14:04,369 >> {'loss': 0.0415, 'learning_rate': 2.0230e-06, 'epoch': 2.86, 'throughput': 482.50}
[INFO|callbacks.py:310] 2024-07-30 03:14:17,523 >> {'loss': 0.0173, 'learning_rate': 1.9802e-06, 'epoch': 2.89, 'throughput': 482.45}
[INFO|callbacks.py:310] 2024-07-30 03:14:30,685 >> {'loss': 0.0005, 'learning_rate': 1.9376e-06, 'epoch': 2.92, 'throughput': 482.38}
[INFO|callbacks.py:310] 2024-07-30 03:14:43,845 >> {'loss': 0.0306, 'learning_rate': 1.8952e-06, 'epoch': 2.94, 'throughput': 482.57}
[INFO|callbacks.py:310] 2024-07-30 03:14:57,012 >> {'loss': 0.0422, 'learning_rate': 1.8530e-06, 'epoch': 2.97, 'throughput': 482.66}
[INFO|callbacks.py:310] 2024-07-30 03:15:10,157 >> {'loss': 0.0472, 'learning_rate': 1.8109e-06, 'epoch': 2.99, 'throughput': 482.55}
[INFO|callbacks.py:310] 2024-07-30 03:15:23,306 >> {'loss': 0.0259, 'learning_rate': 1.7691e-06, 'epoch': 3.02, 'throughput': 482.50}
[INFO|callbacks.py:310] 2024-07-30 03:15:36,475 >> {'loss': 0.0029, 'learning_rate': 1.7275e-06, 'epoch': 3.05, 'throughput': 482.46}
[INFO|callbacks.py:310] 2024-07-30 03:15:49,636 >> {'loss': 0.0350, 'learning_rate': 1.6861e-06, 'epoch': 3.07, 'throughput': 482.46}
[INFO|callbacks.py:310] 2024-07-30 03:16:02,800 >> {'loss': 0.0015, 'learning_rate': 1.6449e-06, 'epoch': 3.10, 'throughput': 482.26}
[INFO|callbacks.py:310] 2024-07-30 03:16:15,954 >> {'loss': 0.0006, 'learning_rate': 1.6041e-06, 'epoch': 3.12, 'throughput': 482.20}
[INFO|callbacks.py:310] 2024-07-30 03:16:29,114 >> {'loss': 0.0143, 'learning_rate': 1.5635e-06, 'epoch': 3.15, 'throughput': 482.19}
[INFO|callbacks.py:310] 2024-07-30 03:16:42,268 >> {'loss': 0.0219, 'learning_rate': 1.5232e-06, 'epoch': 3.17, 'throughput': 482.19}
[INFO|callbacks.py:310] 2024-07-30 03:16:55,411 >> {'loss': 0.0074, 'learning_rate': 1.4832e-06, 'epoch': 3.20, 'throughput': 482.31}
[INFO|callbacks.py:310] 2024-07-30 03:17:08,589 >> {'loss': 0.0052, 'learning_rate': 1.4435e-06, 'epoch': 3.23, 'throughput': 482.23}
[INFO|callbacks.py:310] 2024-07-30 03:17:21,739 >> {'loss': 0.0013, 'learning_rate': 1.4041e-06, 'epoch': 3.25, 'throughput': 482.19}
[INFO|callbacks.py:310] 2024-07-30 03:17:34,901 >> {'loss': 0.0018, 'learning_rate': 1.3650e-06, 'epoch': 3.28, 'throughput': 482.26}
[INFO|callbacks.py:310] 2024-07-30 03:17:48,057 >> {'loss': 0.0077, 'learning_rate': 1.3263e-06, 'epoch': 3.30, 'throughput': 482.22}
[INFO|callbacks.py:310] 2024-07-30 03:18:01,209 >> {'loss': 0.0138, 'learning_rate': 1.2880e-06, 'epoch': 3.33, 'throughput': 482.24}
[INFO|callbacks.py:310] 2024-07-30 03:18:14,360 >> {'loss': 0.0102, 'learning_rate': 1.2500e-06, 'epoch': 3.35, 'throughput': 482.29}
[INFO|callbacks.py:310] 2024-07-30 03:18:27,523 >> {'loss': 0.0067, 'learning_rate': 1.2124e-06, 'epoch': 3.38, 'throughput': 482.41}
[INFO|callbacks.py:310] 2024-07-30 03:18:40,673 >> {'loss': 0.0056, 'learning_rate': 1.1752e-06, 'epoch': 3.41, 'throughput': 482.43}
[INFO|callbacks.py:310] 2024-07-30 03:18:53,846 >> {'loss': 0.0066, 'learning_rate': 1.1384e-06, 'epoch': 3.43, 'throughput': 482.59}
[INFO|callbacks.py:310] 2024-07-30 03:19:07,002 >> {'loss': 0.0033, 'learning_rate': 1.1020e-06, 'epoch': 3.46, 'throughput': 482.61}
[INFO|callbacks.py:310] 2024-07-30 03:19:20,156 >> {'loss': 0.0008, 'learning_rate': 1.0661e-06, 'epoch': 3.48, 'throughput': 482.56}
[INFO|callbacks.py:310] 2024-07-30 03:19:33,324 >> {'loss': 0.0027, 'learning_rate': 1.0305e-06, 'epoch': 3.51, 'throughput': 482.56}
[INFO|callbacks.py:310] 2024-07-30 03:19:46,477 >> {'loss': 0.0021, 'learning_rate': 9.9546e-07, 'epoch': 3.54, 'throughput': 482.46}
[INFO|callbacks.py:310] 2024-07-30 03:19:59,648 >> {'loss': 0.0008, 'learning_rate': 9.6085e-07, 'epoch': 3.56, 'throughput': 482.42}
[INFO|callbacks.py:310] 2024-07-30 03:20:12,805 >> {'loss': 0.0051, 'learning_rate': 9.2670e-07, 'epoch': 3.59, 'throughput': 482.52}
[INFO|callbacks.py:310] 2024-07-30 03:20:25,967 >> {'loss': 0.0026, 'learning_rate': 8.9303e-07, 'epoch': 3.61, 'throughput': 482.52}
[INFO|callbacks.py:310] 2024-07-30 03:20:39,143 >> {'loss': 0.0041, 'learning_rate': 8.5985e-07, 'epoch': 3.64, 'throughput': 482.51}
[INFO|callbacks.py:310] 2024-07-30 03:20:52,302 >> {'loss': 0.0230, 'learning_rate': 8.2717e-07, 'epoch': 3.66, 'throughput': 482.38}
[INFO|callbacks.py:310] 2024-07-30 03:21:05,454 >> {'loss': 0.0106, 'learning_rate': 7.9500e-07, 'epoch': 3.69, 'throughput': 482.32}
[INFO|callbacks.py:310] 2024-07-30 03:21:18,598 >> {'loss': 0.0238, 'learning_rate': 7.6335e-07, 'epoch': 3.72, 'throughput': 482.48}
[INFO|callbacks.py:310] 2024-07-30 03:21:31,757 >> {'loss': 0.0088, 'learning_rate': 7.3223e-07, 'epoch': 3.74, 'throughput': 482.51}
[INFO|callbacks.py:310] 2024-07-30 03:21:44,919 >> {'loss': 0.0391, 'learning_rate': 7.0165e-07, 'epoch': 3.77, 'throughput': 482.58}
[INFO|callbacks.py:310] 2024-07-30 03:21:58,081 >> {'loss': 0.0008, 'learning_rate': 6.7162e-07, 'epoch': 3.79, 'throughput': 482.64}
[INFO|callbacks.py:310] 2024-07-30 03:22:11,241 >> {'loss': 0.0177, 'learning_rate': 6.4214e-07, 'epoch': 3.82, 'throughput': 482.62}
[INFO|callbacks.py:310] 2024-07-30 03:22:24,389 >> {'loss': 0.0001, 'learning_rate': 6.1323e-07, 'epoch': 3.85, 'throughput': 482.53}
[INFO|callbacks.py:310] 2024-07-30 03:22:37,539 >> {'loss': 0.0002, 'learning_rate': 5.8489e-07, 'epoch': 3.87, 'throughput': 482.57}
[INFO|callbacks.py:310] 2024-07-30 03:22:50,689 >> {'loss': 0.0044, 'learning_rate': 5.5714e-07, 'epoch': 3.90, 'throughput': 482.50}
[INFO|callbacks.py:310] 2024-07-30 03:23:03,841 >> {'loss': 0.0015, 'learning_rate': 5.2997e-07, 'epoch': 3.92, 'throughput': 482.56}
[INFO|callbacks.py:310] 2024-07-30 03:23:16,993 >> {'loss': 0.0003, 'learning_rate': 5.0341e-07, 'epoch': 3.95, 'throughput': 482.53}
[INFO|callbacks.py:310] 2024-07-30 03:23:30,143 >> {'loss': 0.0361, 'learning_rate': 4.7746e-07, 'epoch': 3.97, 'throughput': 482.59}
[INFO|callbacks.py:310] 2024-07-30 03:23:43,318 >> {'loss': 0.0005, 'learning_rate': 4.5212e-07, 'epoch': 4.00, 'throughput': 482.75}
[INFO|callbacks.py:310] 2024-07-30 03:23:56,477 >> {'loss': 0.0022, 'learning_rate': 4.2741e-07, 'epoch': 4.03, 'throughput': 482.76}
[INFO|callbacks.py:310] 2024-07-30 03:24:09,624 >> {'loss': 0.0212, 'learning_rate': 4.0332e-07, 'epoch': 4.05, 'throughput': 482.78}
[INFO|callbacks.py:310] 2024-07-30 03:24:22,793 >> {'loss': 0.0003, 'learning_rate': 3.7988e-07, 'epoch': 4.08, 'throughput': 482.68}
[INFO|callbacks.py:310] 2024-07-30 03:24:35,959 >> {'loss': 0.0047, 'learning_rate': 3.5708e-07, 'epoch': 4.10, 'throughput': 482.59}
[INFO|callbacks.py:310] 2024-07-30 03:24:49,103 >> {'loss': 0.0014, 'learning_rate': 3.3494e-07, 'epoch': 4.13, 'throughput': 482.54}
[INFO|callbacks.py:310] 2024-07-30 03:25:02,253 >> {'loss': 0.0006, 'learning_rate': 3.1345e-07, 'epoch': 4.15, 'throughput': 482.62}
[INFO|callbacks.py:310] 2024-07-30 03:25:15,421 >> {'loss': 0.0003, 'learning_rate': 2.9263e-07, 'epoch': 4.18, 'throughput': 482.66}
[INFO|callbacks.py:310] 2024-07-30 03:25:28,590 >> {'loss': 0.0021, 'learning_rate': 2.7248e-07, 'epoch': 4.21, 'throughput': 482.63}
[INFO|callbacks.py:310] 2024-07-30 03:25:41,746 >> {'loss': 0.0001, 'learning_rate': 2.5301e-07, 'epoch': 4.23, 'throughput': 482.54}
[INFO|callbacks.py:310] 2024-07-30 03:25:54,902 >> {'loss': 0.0007, 'learning_rate': 2.3423e-07, 'epoch': 4.26, 'throughput': 482.58}
[INFO|callbacks.py:310] 2024-07-30 03:26:08,056 >> {'loss': 0.0013, 'learning_rate': 2.1614e-07, 'epoch': 4.28, 'throughput': 482.49}
[INFO|callbacks.py:310] 2024-07-30 03:26:21,216 >> {'loss': 0.0002, 'learning_rate': 1.9874e-07, 'epoch': 4.31, 'throughput': 482.55}
[INFO|callbacks.py:310] 2024-07-30 03:26:34,365 >> {'loss': 0.0011, 'learning_rate': 1.8204e-07, 'epoch': 4.34, 'throughput': 482.47}
[INFO|callbacks.py:310] 2024-07-30 03:26:47,530 >> {'loss': 0.0001, 'learning_rate': 1.6605e-07, 'epoch': 4.36, 'throughput': 482.39}
[INFO|callbacks.py:310] 2024-07-30 03:27:00,675 >> {'loss': 0.0006, 'learning_rate': 1.5077e-07, 'epoch': 4.39, 'throughput': 482.46}
[INFO|callbacks.py:310] 2024-07-30 03:27:13,849 >> {'loss': 0.0003, 'learning_rate': 1.3620e-07, 'epoch': 4.41, 'throughput': 482.60}
[INFO|callbacks.py:310] 2024-07-30 03:27:27,011 >> {'loss': 0.0002, 'learning_rate': 1.2236e-07, 'epoch': 4.44, 'throughput': 482.60}
[INFO|callbacks.py:310] 2024-07-30 03:27:40,171 >> {'loss': 0.0027, 'learning_rate': 1.0924e-07, 'epoch': 4.46, 'throughput': 482.69}
[INFO|callbacks.py:310] 2024-07-30 03:27:53,338 >> {'loss': 0.0002, 'learning_rate': 9.6846e-08, 'epoch': 4.49, 'throughput': 482.67}
[INFO|callbacks.py:310] 2024-07-30 03:28:06,490 >> {'loss': 0.0001, 'learning_rate': 8.5185e-08, 'epoch': 4.52, 'throughput': 482.72}
[INFO|callbacks.py:310] 2024-07-30 03:28:19,653 >> {'loss': 0.0109, 'learning_rate': 7.4261e-08, 'epoch': 4.54, 'throughput': 482.85}
[INFO|callbacks.py:310] 2024-07-30 03:28:32,819 >> {'loss': 0.0039, 'learning_rate': 6.4075e-08, 'epoch': 4.57, 'throughput': 482.78}
[INFO|callbacks.py:310] 2024-07-30 03:28:45,970 >> {'loss': 0.0026, 'learning_rate': 5.4631e-08, 'epoch': 4.59, 'throughput': 482.83}
[INFO|callbacks.py:310] 2024-07-30 03:28:59,127 >> {'loss': 0.0002, 'learning_rate': 4.5932e-08, 'epoch': 4.62, 'throughput': 482.85}
[INFO|callbacks.py:310] 2024-07-30 03:29:12,277 >> {'loss': 0.0044, 'learning_rate': 3.7981e-08, 'epoch': 4.65, 'throughput': 482.83}
[INFO|callbacks.py:310] 2024-07-30 03:29:25,426 >> {'loss': 0.0001, 'learning_rate': 3.0779e-08, 'epoch': 4.67, 'throughput': 482.83}
[INFO|callbacks.py:310] 2024-07-30 03:29:38,563 >> {'loss': 0.0103, 'learning_rate': 2.4330e-08, 'epoch': 4.70, 'throughput': 482.82}
[INFO|callbacks.py:310] 2024-07-30 03:29:51,712 >> {'loss': 0.0002, 'learning_rate': 1.8635e-08, 'epoch': 4.72, 'throughput': 482.82}
[INFO|callbacks.py:310] 2024-07-30 03:30:04,875 >> {'loss': 0.0038, 'learning_rate': 1.3695e-08, 'epoch': 4.75, 'throughput': 482.82}
[INFO|callbacks.py:310] 2024-07-30 03:30:18,027 >> {'loss': 0.0039, 'learning_rate': 9.5133e-09, 'epoch': 4.77, 'throughput': 482.88}
[INFO|callbacks.py:310] 2024-07-30 03:30:31,175 >> {'loss': 0.0005, 'learning_rate': 6.0899e-09, 'epoch': 4.80, 'throughput': 482.83}
[INFO|callbacks.py:310] 2024-07-30 03:30:44,329 >> {'loss': 0.0002, 'learning_rate': 3.4262e-09, 'epoch': 4.83, 'throughput': 482.85}
[INFO|callbacks.py:310] 2024-07-30 03:30:57,472 >> {'loss': 0.0011, 'learning_rate': 1.5229e-09, 'epoch': 4.85, 'throughput': 482.82}
[INFO|callbacks.py:310] 2024-07-30 03:31:10,622 >> {'loss': 0.0007, 'learning_rate': 3.8076e-10, 'epoch': 4.88, 'throughput': 482.75}
[INFO|callbacks.py:310] 2024-07-30 03:31:23,759 >> {'loss': 0.0002, 'learning_rate': 0.0000e+00, 'epoch': 4.90, 'throughput': 482.73}
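
Since no eval metrics were logged (see the plotting warnings at the end of this log), the training loss curve can still be recovered from the step lines above; a small hypothetical parser for that exact format:

    import ast, re

    def parse_log(path="running_log.txt"):  # the file name is an assumption
        pattern = re.compile(r">> (\{'loss'.*?\})")
        for line in open(path):
            match = pattern.search(line)
            if match:
                yield ast.literal_eval(match.group(1))  # dict with loss, learning_rate, epoch, throughput

    losses = [entry["loss"] for entry in parse_log()]
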
[INFO|trainer.py:3503] 2024-07-30 03:31:31,698 >> Saving model checkpoint to saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/checkpoint-190
[INFO|configuration_utils.py:472] 2024-07-30 03:31:31,701 >> Configuration saved in saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/checkpoint-190/config.json
[INFO|configuration_utils.py:807] 2024-07-30 03:31:31,701 >> Configuration saved in saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/checkpoint-190/generation_config.json
[INFO|modeling_utils.py:2763] 2024-07-30 03:31:48,048 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 4 checkpoint shards. You can find where each parameters has been saved in the index located at saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/checkpoint-190/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2702] 2024-07-30 03:31:48,052 >> tokenizer config file saved in saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/checkpoint-190/tokenizer_config.json
[INFO|tokenization_utils_base.py:2711] 2024-07-30 03:31:48,052 >> Special tokens file saved in saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/checkpoint-190/special_tokens_map.json
[INFO|trainer.py:2394] 2024-07-30 03:32:24,590 >>

Training completed. Do not forget to share your model on huggingface.co/models =)

[INFO|trainer.py:3503] 2024-07-30 03:32:32,434 >> Saving model checkpoint to saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2
[INFO|configuration_utils.py:472] 2024-07-30 03:32:32,437 >> Configuration saved in saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/config.json
[INFO|configuration_utils.py:807] 2024-07-30 03:32:32,437 >> Configuration saved in saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/generation_config.json
[INFO|modeling_utils.py:2763] 2024-07-30 03:32:49,870 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 4 checkpoint shards. You can find where each parameters has been saved in the index located at saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2702] 2024-07-30 03:32:49,874 >> tokenizer config file saved in saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/tokenizer_config.json
[INFO|tokenization_utils_base.py:2711] 2024-07-30 03:32:49,874 >> Special tokens file saved in saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2/special_tokens_map.json
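
With the final weights, config, and tokenizer written (the 4 safetensors shards are resolved automatically through model.safetensors.index.json), the run can be reloaded for inference; a sketch:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    path = "saves/LLaMA3.1-8B-Chat/full/train_2024-07-30-02-47-53_llama3.1_truthqa_bench2"
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16)
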
[WARNING|ploting.py:89] 2024-07-30 03:32:51,207 >> No metric eval_loss to plot.
[WARNING|ploting.py:89] 2024-07-30 03:32:51,207 >> No metric eval_accuracy to plot.
[INFO|modelcard.py:449] 2024-07-30 03:32:51,207 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}