05/30/2024 09:42:38 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--shenzhi-wang--Llama3-8B-Chinese-Chat/snapshots/4754413429ccde4f441fe30e44ee62fd1c93b8be/tokenizer.json
05/30/2024 09:42:38 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
05/30/2024 09:42:38 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--shenzhi-wang--Llama3-8B-Chinese-Chat/snapshots/4754413429ccde4f441fe30e44ee62fd1c93b8be/special_tokens_map.json
05/30/2024 09:42:38 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--shenzhi-wang--Llama3-8B-Chinese-Chat/snapshots/4754413429ccde4f441fe30e44ee62fd1c93b8be/tokenizer_config.json
05/30/2024 09:42:38 - WARNING - transformers.tokenization_utils_base - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
05/30/2024 09:42:38 - INFO - llamafactory.data.template - Replace eos token: <|eot_id|>
05/30/2024 09:42:38 - INFO - llamafactory.data.loader - Loading dataset Central-full.json...
05/30/2024 09:42:39 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--shenzhi-wang--Llama3-8B-Chinese-Chat/snapshots/4754413429ccde4f441fe30e44ee62fd1c93b8be/config.json
05/30/2024 09:42:39 - INFO - transformers.configuration_utils - Model config LlamaConfig {
  "_name_or_path": "shenzhi-wang/Llama3-8B-Chinese-Chat",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128009,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.41.1",
  "use_cache": true,
  "vocab_size": 128256
}
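
For reference, the base checkpoint that these files belong to can also be loaded outside of LLaMA-Factory with plain `transformers`. The sketch below is not part of the run above; it assumes `transformers==4.41.1` (the version recorded in the config) and enough GPU memory for an 8B model. Note that the checkpoint stores `bfloat16` weights, while the trainer below instantiates the model under its default half-precision dtype, `torch.float16`.

```python
# Minimal sketch: load the same base model and tokenizer with plain transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shenzhi-wang/Llama3-8B-Chinese-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # tokenizer.json, special_tokens_map.json, tokenizer_config.json
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint's native dtype per config.json
    device_map="auto",
)

print(model.config.eos_token_id)  # 128009, i.e. <|eot_id|>, matching the template's eos replacement above
```
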
05/30/2024 09:42:39 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--shenzhi-wang--Llama3-8B-Chinese-Chat/snapshots/4754413429ccde4f441fe30e44ee62fd1c93b8be/model.safetensors.index.json
05/30/2024 09:42:39 - INFO - transformers.modeling_utils - Instantiating LlamaForCausalLM model under default dtype torch.float16.
05/30/2024 09:42:39 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 128000,
  "eos_token_id": 128009
}
05/30/2024 09:42:49 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing LlamaForCausalLM.
05/30/2024 09:42:49 - INFO - transformers.modeling_utils - All the weights of LlamaForCausalLM were initialized from the model checkpoint at shenzhi-wang/Llama3-8B-Chinese-Chat.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
05/30/2024 09:42:49 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /root/.cache/huggingface/hub/models--shenzhi-wang--Llama3-8B-Chinese-Chat/snapshots/4754413429ccde4f441fe30e44ee62fd1c93b8be/generation_config.json
05/30/2024 09:42:49 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 128000,
  "eos_token_id": 128009,
  "pad_token_id": 128009
}
05/30/2024 09:42:49 - INFO - llamafactory.model.utils.checkpointing - Gradient checkpointing enabled.
05/30/2024 09:42:49 - INFO - llamafactory.model.utils.attention - Using torch SDPA for faster training and inference.
05/30/2024 09:42:49 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
05/30/2024 09:42:49 - INFO - llamafactory.model.adapter - Fine-tuning method: Freeze
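
"Freeze" tuning trains a few whole decoder blocks (upcast to float32, per the line above) and leaves every other weight untouched; here the last two blocks, layers 30 and 31, are selected, as the next log lines show. The snippet below illustrates the idea in plain PyTorch, continuing from the loading sketch earlier; it is an illustration of the technique, not LLaMA-Factory's actual adapter code.

```python
# Freeze tuning in plain PyTorch: unfreeze only decoder layers 30 and 31 of the model
# loaded in the earlier snippet; embeddings, lm_head and layers 0-29 stay frozen.
TRAINABLE_PREFIXES = ("model.layers.30.", "model.layers.31.")

for name, param in model.named_parameters():
    param.requires_grad = name.startswith(TRAINABLE_PREFIXES)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable} || all params: {total} || trainable%: {100 * trainable / total:.4f}")
```
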
05/30/2024 09:42:49 - INFO - llamafactory.model.adapter - Set trainable layers: 30,31
05/30/2024 09:42:49 - INFO - llamafactory.model.loader - trainable params: 436224000 || all params: 8030261248 || trainable%: 5.4323
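
The reported counts can be checked analytically against the config above: with hidden size 4096, intermediate size 14336 and 8 key/value heads (so 1024-dimensional K/V projections), one decoder block holds 218,112,000 weights, and the two trainable blocks hold 436,224,000, about 5.43% of the 8,030,261,248 total.

```python
# Back-of-the-envelope check of the logged parameter counts, using values from config.json above.
hidden, inter = 4096, 14336
kv_dim = hidden // 32 * 8                          # head_dim * num_key_value_heads = 1024

attn = 2 * hidden * hidden + 2 * hidden * kv_dim   # q_proj, o_proj, k_proj, v_proj
mlp = 3 * hidden * inter                           # gate_proj, up_proj, down_proj
norms = 2 * hidden                                 # input_layernorm, post_attention_layernorm
per_layer = attn + mlp + norms                     # 218,112,000

trainable = 2 * per_layer                          # layers 30 and 31
print(trainable, f"{100 * trainable / 8_030_261_248:.4f}%")   # 436224000 5.4323%
```
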
05/30/2024 09:42:49 - INFO - transformers.trainer - Using auto half precision backend
05/30/2024 09:42:49 - INFO - transformers.trainer - ***** Running training *****
05/30/2024 09:42:49 - INFO - transformers.trainer - Num examples = 766
05/30/2024 09:42:49 - INFO - transformers.trainer - Num Epochs = 3
05/30/2024 09:42:49 - INFO - transformers.trainer - Instantaneous batch size per device = 1
05/30/2024 09:42:49 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 8
05/30/2024 09:42:49 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/30/2024 09:42:49 - INFO - transformers.trainer - Total optimization steps = 285
05/30/2024 09:42:49 - INFO - transformers.trainer - Number of trainable parameters = 436,224,000
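
The step count follows directly from these numbers: one device, a per-device batch size of 1 and 8 gradient-accumulation steps give an effective batch size of 8, so an epoch covers 766 // 8 = 95 optimizer steps and three epochs give 285 (the trailing partial accumulation window is not counted as an extra step here).

```python
# How 285 optimization steps follow from the run configuration above.
num_examples, per_device_bs, grad_accum, epochs = 766, 1, 8, 3

effective_batch = per_device_bs * grad_accum       # 8 on a single device
steps_per_epoch = num_examples // effective_batch  # 95
print(steps_per_epoch * epochs)                    # 285
```
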
05/30/2024 09:42:56 - INFO - llamafactory.extras.callbacks - {'loss': 3.0294, 'learning_rate': 4.9962e-05, 'epoch': 0.05}
05/30/2024 09:43:02 - INFO - llamafactory.extras.callbacks - {'loss': 2.7312, 'learning_rate': 4.9848e-05, 'epoch': 0.10}
05/30/2024 09:43:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.6282, 'learning_rate': 4.9659e-05, 'epoch': 0.16}
05/30/2024 09:43:13 - INFO - llamafactory.extras.callbacks - {'loss': 2.5533, 'learning_rate': 4.9395e-05, 'epoch': 0.21}
05/30/2024 09:43:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.5412, 'learning_rate': 4.9057e-05, 'epoch': 0.26}
05/30/2024 09:43:25 - INFO - llamafactory.extras.callbacks - {'loss': 2.5643, 'learning_rate': 4.8645e-05, 'epoch': 0.31}
05/30/2024 09:43:30 - INFO - llamafactory.extras.callbacks - {'loss': 2.5158, 'learning_rate': 4.8162e-05, 'epoch': 0.37}
05/30/2024 09:43:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.5183, 'learning_rate': 4.7609e-05, 'epoch': 0.42}
05/30/2024 09:43:42 - INFO - llamafactory.extras.callbacks - {'loss': 2.4960, 'learning_rate': 4.6987e-05, 'epoch': 0.47}
05/30/2024 09:43:48 - INFO - llamafactory.extras.callbacks - {'loss': 2.5069, 'learning_rate': 4.6298e-05, 'epoch': 0.52}
05/30/2024 09:43:53 - INFO - llamafactory.extras.callbacks - {'loss': 2.4605, 'learning_rate': 4.5544e-05, 'epoch': 0.57}
05/30/2024 09:43:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.4223, 'learning_rate': 4.4729e-05, 'epoch': 0.63}
05/30/2024 09:44:05 - INFO - llamafactory.extras.callbacks - {'loss': 2.4468, 'learning_rate': 4.3853e-05, 'epoch': 0.68}
05/30/2024 09:44:11 - INFO - llamafactory.extras.callbacks - {'loss': 2.3933, 'learning_rate': 4.2920e-05, 'epoch': 0.73}
05/30/2024 09:44:16 - INFO - llamafactory.extras.callbacks - {'loss': 2.4540, 'learning_rate': 4.1932e-05, 'epoch': 0.78}
05/30/2024 09:44:22 - INFO - llamafactory.extras.callbacks - {'loss': 2.4139, 'learning_rate': 4.0893e-05, 'epoch': 0.84}
05/30/2024 09:44:28 - INFO - llamafactory.extras.callbacks - {'loss': 2.3528, 'learning_rate': 3.9806e-05, 'epoch': 0.89}
05/30/2024 09:44:34 - INFO - llamafactory.extras.callbacks - {'loss': 2.3643, 'learning_rate': 3.8674e-05, 'epoch': 0.94}
05/30/2024 09:44:39 - INFO - llamafactory.extras.callbacks - {'loss': 2.3584, 'learning_rate': 3.7500e-05, 'epoch': 0.99}
05/30/2024 09:44:45 - INFO - llamafactory.extras.callbacks - {'loss': 2.1535, 'learning_rate': 3.6288e-05, 'epoch': 1.04}
05/30/2024 09:44:45 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-100
05/30/2024 09:44:45 - INFO - transformers.configuration_utils - Configuration saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-100/config.json
05/30/2024 09:44:45 - INFO - transformers.generation.configuration_utils - Configuration saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-100/generation_config.json
05/30/2024 09:45:44 - INFO - transformers.modeling_utils - The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 4 checkpoint shards. You can find where each parameters has been saved in the index located at saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-100/model.safetensors.index.json.
05/30/2024 09:45:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-100/tokenizer_config.json
05/30/2024 09:45:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-100/special_tokens_map.json
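
Although only two layers were trained, each checkpoint is a full copy of the model, which is why saving takes about a minute and gets sharded: 8,030,261,248 parameters at 2 bytes each (float16) come to roughly 16 GB, which does not fit under the 5 GB-per-shard limit and is therefore split into 4 shards.

```python
# Why each save is split into 4 shards (numbers from the log above).
import math

total_params, bytes_per_param, max_shard_gb = 8_030_261_248, 2, 5   # float16 weights, 5GB shard limit
size_gb = total_params * bytes_per_param / 1e9
print(round(size_gb, 2), math.ceil(size_gb / max_shard_gb))         # 16.06 4
```
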
05/30/2024 09:46:01 - INFO - llamafactory.extras.callbacks - {'loss': 2.0786, 'learning_rate': 3.5042e-05, 'epoch': 1.10}
05/30/2024 09:46:07 - INFO - llamafactory.extras.callbacks - {'loss': 2.0251, 'learning_rate': 3.3766e-05, 'epoch': 1.15}
05/30/2024 09:46:12 - INFO - llamafactory.extras.callbacks - {'loss': 2.0486, 'learning_rate': 3.2463e-05, 'epoch': 1.20}
05/30/2024 09:46:18 - INFO - llamafactory.extras.callbacks - {'loss': 2.0030, 'learning_rate': 3.1137e-05, 'epoch': 1.25}
05/30/2024 09:46:24 - INFO - llamafactory.extras.callbacks - {'loss': 2.0196, 'learning_rate': 2.9793e-05, 'epoch': 1.31}
05/30/2024 09:46:30 - INFO - llamafactory.extras.callbacks - {'loss': 1.9855, 'learning_rate': 2.8434e-05, 'epoch': 1.36}
05/30/2024 09:46:36 - INFO - llamafactory.extras.callbacks - {'loss': 2.0136, 'learning_rate': 2.7064e-05, 'epoch': 1.41}
05/30/2024 09:46:41 - INFO - llamafactory.extras.callbacks - {'loss': 1.9636, 'learning_rate': 2.5689e-05, 'epoch': 1.46}
05/30/2024 09:46:47 - INFO - llamafactory.extras.callbacks - {'loss': 1.9941, 'learning_rate': 2.4311e-05, 'epoch': 1.51}
05/30/2024 09:46:53 - INFO - llamafactory.extras.callbacks - {'loss': 1.9606, 'learning_rate': 2.2936e-05, 'epoch': 1.57}
05/30/2024 09:46:59 - INFO - llamafactory.extras.callbacks - {'loss': 2.0351, 'learning_rate': 2.1566e-05, 'epoch': 1.62}
05/30/2024 09:47:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.9508, 'learning_rate': 2.0207e-05, 'epoch': 1.67}
05/30/2024 09:47:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.9504, 'learning_rate': 1.8863e-05, 'epoch': 1.72}
05/30/2024 09:47:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.9508, 'learning_rate': 1.7537e-05, 'epoch': 1.78}
05/30/2024 09:47:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.8806, 'learning_rate': 1.6234e-05, 'epoch': 1.83}
05/30/2024 09:47:28 - INFO - llamafactory.extras.callbacks - {'loss': 1.9759, 'learning_rate': 1.4958e-05, 'epoch': 1.88}
05/30/2024 09:47:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.9918, 'learning_rate': 1.3712e-05, 'epoch': 1.93}
05/30/2024 09:47:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.8922, 'learning_rate': 1.2500e-05, 'epoch': 1.98}
05/30/2024 09:47:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.7482, 'learning_rate': 1.1326e-05, 'epoch': 2.04}
05/30/2024 09:47:51 - INFO - llamafactory.extras.callbacks - {'loss': 1.6307, 'learning_rate': 1.0194e-05, 'epoch': 2.09}
05/30/2024 09:47:51 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-200
05/30/2024 09:47:51 - INFO - transformers.configuration_utils - Configuration saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-200/config.json
05/30/2024 09:47:51 - INFO - transformers.generation.configuration_utils - Configuration saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-200/generation_config.json
05/30/2024 09:48:50 - INFO - transformers.modeling_utils - The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 4 checkpoint shards. You can find where each parameters has been saved in the index located at saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-200/model.safetensors.index.json.
05/30/2024 09:48:50 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-200/tokenizer_config.json
05/30/2024 09:48:50 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/checkpoint-200/special_tokens_map.json
05/30/2024 09:49:04 - INFO - llamafactory.extras.callbacks - {'loss': 1.6288, 'learning_rate': 9.1069e-06, 'epoch': 2.14}
05/30/2024 09:49:10 - INFO - llamafactory.extras.callbacks - {'loss': 1.6239, 'learning_rate': 8.0680e-06, 'epoch': 2.19}
05/30/2024 09:49:16 - INFO - llamafactory.extras.callbacks - {'loss': 1.6385, 'learning_rate': 7.0804e-06, 'epoch': 2.25}
05/30/2024 09:49:22 - INFO - llamafactory.extras.callbacks - {'loss': 1.6511, 'learning_rate': 6.1473e-06, 'epoch': 2.30}
05/30/2024 09:49:27 - INFO - llamafactory.extras.callbacks - {'loss': 1.6356, 'learning_rate': 5.2715e-06, 'epoch': 2.35}
05/30/2024 09:49:33 - INFO - llamafactory.extras.callbacks - {'loss': 1.6163, 'learning_rate': 4.4556e-06, 'epoch': 2.40}
05/30/2024 09:49:39 - INFO - llamafactory.extras.callbacks - {'loss': 1.6386, 'learning_rate': 3.7020e-06, 'epoch': 2.45}
05/30/2024 09:49:45 - INFO - llamafactory.extras.callbacks - {'loss': 1.6722, 'learning_rate': 3.0132e-06, 'epoch': 2.51}
05/30/2024 09:49:50 - INFO - llamafactory.extras.callbacks - {'loss': 1.6134, 'learning_rate': 2.3911e-06, 'epoch': 2.56}
05/30/2024 09:49:56 - INFO - llamafactory.extras.callbacks - {'loss': 1.6621, 'learning_rate': 1.8376e-06, 'epoch': 2.61}
05/30/2024 09:50:02 - INFO - llamafactory.extras.callbacks - {'loss': 1.5881, 'learning_rate': 1.3546e-06, 'epoch': 2.66}
05/30/2024 09:50:08 - INFO - llamafactory.extras.callbacks - {'loss': 1.6241, 'learning_rate': 9.4330e-07, 'epoch': 2.72}
05/30/2024 09:50:14 - INFO - llamafactory.extras.callbacks - {'loss': 1.5779, 'learning_rate': 6.0509e-07, 'epoch': 2.77}
05/30/2024 09:50:19 - INFO - llamafactory.extras.callbacks - {'loss': 1.5785, 'learning_rate': 3.4097e-07, 'epoch': 2.82}
05/30/2024 09:50:25 - INFO - llamafactory.extras.callbacks - {'loss': 1.6248, 'learning_rate': 1.5173e-07, 'epoch': 2.87}
05/30/2024 09:50:31 - INFO - llamafactory.extras.callbacks - {'loss': 1.5898, 'learning_rate': 3.7962e-08, 'epoch': 2.92}
05/30/2024 09:50:37 - INFO - llamafactory.extras.callbacks - {'loss': 1.5727, 'learning_rate': 0.0000e+00, 'epoch': 2.98}
05/30/2024 09:50:37 - INFO - transformers.trainer -

Training completed. Do not forget to share your model on huggingface.co/models =)

05/30/2024 09:50:37 - INFO - transformers.trainer - Saving model checkpoint to saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42
05/30/2024 09:50:37 - INFO - transformers.configuration_utils - Configuration saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/config.json
05/30/2024 09:50:37 - INFO - transformers.generation.configuration_utils - Configuration saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/generation_config.json
05/30/2024 09:51:37 - INFO - transformers.modeling_utils - The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 4 checkpoint shards. You can find where each parameters has been saved in the index located at saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/model.safetensors.index.json.
05/30/2024 09:51:37 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/tokenizer_config.json
05/30/2024 09:51:37 - INFO - transformers.tokenization_utils_base - Special tokens file saved in saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42/special_tokens_map.json
05/30/2024 09:51:38 - WARNING - llamafactory.extras.ploting - No metric eval_loss to plot.
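
Only the training loss was logged in this run (no validation split was configured), hence the warning about `eval_loss`. If you want the training curve, one option is to parse the callback lines from this console output. A rough sketch, assuming the output was saved to a file; `train.log` is a hypothetical name:

```python
# Recover the training-loss curve from lines like
# "05/30/2024 09:43:19 - INFO - llamafactory.extras.callbacks - {'loss': 2.5412, 'learning_rate': 4.9057e-05, 'epoch': 0.26}"
import ast
import matplotlib.pyplot as plt

epochs, losses = [], []
with open("train.log", encoding="utf-8") as f:  # adjust to wherever this console output was saved
    for line in f:
        if "llamafactory.extras.callbacks" in line and "'loss'" in line:
            record = ast.literal_eval(line.split(" - ")[-1].strip())
            epochs.append(record["epoch"])
            losses.append(record["loss"])

plt.plot(epochs, losses, marker="o")
plt.xlabel("epoch")
plt.ylabel("training loss")
plt.savefig("training_loss.png")
```
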
05/30/2024 09:51:38 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
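
The final save is a complete standalone checkpoint (config, sharded safetensors weights, tokenizer), so it can be loaded for inference straight from the output directory. A minimal sketch, assuming the directory is available locally and a GPU with enough memory for float16 inference:

```python
# Minimal inference sketch against the freeze-tuned checkpoint saved above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "saves/LLaMA3-8B-Chinese-Chat/freeze/train_2024-05-30-09-37-42"

tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
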
|