05/18/2024 20:30:32 - INFO - transformers.tokenization_utils_base - loading file vocab.json
05/18/2024 20:30:32 - INFO - transformers.tokenization_utils_base - loading file merges.txt
05/18/2024 20:30:32 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json
05/18/2024 20:30:32 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json
05/18/2024 20:30:32 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json
05/18/2024 20:30:32 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json
05/18/2024 20:30:32 - WARNING - transformers.tokenization_utils_base - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
05/18/2024 20:30:32 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/LangGPT_community.jsonl...
05/18/2024 20:30:32 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/18/2024 20:30:33 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_alpaca.jsonl...
05/18/2024 20:30:33 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
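
The two checksum warnings are benign but fixable: llmtuner compares each dataset file against a SHA-1 digest recorded in dataset_info.json, and here none was recorded. A minimal sketch for computing the missing digests is below; the "file_sha1" key name is an assumption based on LLaMA-Factory's dataset_info.json conventions and should be checked against the installed version.

```python
import hashlib

def file_sha1(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large .jsonl files fit in memory."""
    sha1 = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha1.update(chunk)
    return sha1.hexdigest()

# Digests to record under each dataset's (assumed) "file_sha1" field
# in dataset_info.json, which silences the warnings above.
for path in (
    "/datas/wangm/LLM4LangGPT/constructed_datasets/LangGPT_community.jsonl",
    "/datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_alpaca.jsonl",
):
    print(path, file_sha1(path))
```
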
05/18/2024 20:30:34 - INFO - transformers.configuration_utils - loading configuration file /datas/huggingface/Qwen1.5-4B-Chat/config.json
05/18/2024 20:30:34 - INFO - transformers.configuration_utils - Model config Qwen2Config {
  "_name_or_path": "/datas/huggingface/Qwen1.5-4B-Chat",
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 6912,
  "max_position_embeddings": 32768,
  "max_window_layers": 21,
  "model_type": "qwen2",
  "num_attention_heads": 20,
  "num_hidden_layers": 40,
  "num_key_value_heads": 20,
  "rms_norm_eps": 1e-06,
  "rope_theta": 5000000.0,
  "sliding_window": 32768,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.2",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
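
A few shapes implied by the config are worth spelling out: 20 heads over a hidden size of 2560 give a head dimension of 128, and with num_key_value_heads equal to num_attention_heads there is no grouped-query sharing, so the k/v projections are full-width. A small sketch using only the logged values:

```python
# Derived shapes from the Qwen2Config above; pure arithmetic, no model download.
hidden_size = 2560
num_attention_heads = 20
num_key_value_heads = 20   # equal to num_attention_heads -> no GQA sharing
intermediate_size = 6912

head_dim = hidden_size // num_attention_heads
print(head_dim)                        # 128
print(num_key_value_heads * head_dim)  # 2560: k/v projections span the full hidden size
```
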
05/18/2024 20:30:34 - INFO - transformers.modeling_utils - loading weights file /datas/huggingface/Qwen1.5-4B-Chat/model.safetensors.index.json
05/18/2024 20:30:34 - INFO - transformers.modeling_utils - Instantiating Qwen2ForCausalLM model under default dtype torch.float16.
05/18/2024 20:30:34 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "use_cache": false
}
05/18/2024 20:30:37 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing Qwen2ForCausalLM.
05/18/2024 20:30:37 - INFO - transformers.modeling_utils - All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at /datas/huggingface/Qwen1.5-4B-Chat.
If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2ForCausalLM for predictions without further training.
05/18/2024 20:30:37 - INFO - transformers.generation.configuration_utils - loading configuration file /datas/huggingface/Qwen1.5-4B-Chat/generation_config.json
05/18/2024 20:30:37 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "repetition_penalty": 1.1,
  "top_p": 0.8
}
05/18/2024 20:30:37 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled.
05/18/2024 20:30:37 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
05/18/2024 20:30:37 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
05/18/2024 20:30:37 - INFO - llmtuner.model.loader - trainable params: 3276800 || all params: 3953646080 || trainable%: 0.0829
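
Both counts in that line can be reproduced from the config printed earlier. The LoRA settings themselves are not logged; a rank of 8 applied to q_proj and v_proj is inferred here because it reproduces the totals exactly, so treat those settings as an assumption:

```python
# Sketch reproducing the logged parameter counts. LoRA rank 8 on q_proj/v_proj
# is inferred from the totals (8 * (2560 + 2560) * 2 matrices * 40 layers
# = 3,276,800), not stated anywhere in the log.
h, inter, layers, vocab = 2560, 6912, 40, 151936

attn = 4 * h * h + 3 * h   # q/k/v/o weights plus q/k/v biases (Qwen2 layout)
mlp = 3 * h * inter        # gate/up/down projections
norms = 2 * h              # two RMSNorms per layer
base = layers * (attn + mlp + norms) + 2 * vocab * h + h  # untied lm_head + final norm

r = 8
lora = layers * 2 * (r * h + r * h)  # A and B matrices for q_proj and v_proj

print(base + lora)  # 3953646080 -> matches "all params"
print(lora)         # 3276800    -> matches "trainable params"
print(f"{100 * lora / (base + lora):.4f}")  # 0.0829 -> matches "trainable%"
```
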
05/18/2024 20:30:37 - INFO - transformers.trainer - Using auto half precision backend
05/18/2024 20:30:38 - INFO - transformers.trainer - ***** Running training *****
05/18/2024 20:30:38 - INFO - transformers.trainer - Num examples = 5,346
05/18/2024 20:30:38 - INFO - transformers.trainer - Num Epochs = 10
05/18/2024 20:30:38 - INFO - transformers.trainer - Instantaneous batch size per device = 2
05/18/2024 20:30:38 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
05/18/2024 20:30:38 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/18/2024 20:30:38 - INFO - transformers.trainer - Total optimization steps = 3,340
05/18/2024 20:30:38 - INFO - transformers.trainer - Number of trainable parameters = 3,276,800
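
These setup numbers are mutually consistent under a single-GPU run: 2 per device × 8 accumulation steps × 1 device = 16, and 5,346 examples give 2,673 micro-batches per epoch, of which 334 full accumulation cycles survive the floor division, for 3,340 optimization steps over 10 epochs. A sketch of the arithmetic (world size 1 is an assumption the totals support):

```python
# Reproducing the trainer's bookkeeping above.
num_examples, per_device_bs, grad_accum, world_size, num_epochs = 5346, 2, 8, 1, 10

total_batch = per_device_bs * grad_accum * world_size  # 16, as logged
micro_batches = num_examples // per_device_bs          # 2673 micro-batches per epoch
steps_per_epoch = micro_batches // grad_accum          # 334: a trailing partial cycle is floored away
total_steps = steps_per_epoch * num_epochs

print(total_batch, total_steps)  # 16 3340
```
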
05/18/2024 20:31:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.3208, 'learning_rate': 5.0000e-05, 'epoch': 0.01}
05/18/2024 20:32:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.2903, 'learning_rate': 4.9999e-05, 'epoch': 0.03}
05/18/2024 20:32:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2271, 'learning_rate': 4.9998e-05, 'epoch': 0.04}
05/18/2024 20:33:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.1860, 'learning_rate': 4.9996e-05, 'epoch': 0.06}
05/18/2024 20:34:14 - INFO - llmtuner.extras.callbacks - {'loss': 1.1302, 'learning_rate': 4.9993e-05, 'epoch': 0.07}
05/18/2024 20:34:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.1372, 'learning_rate': 4.9990e-05, 'epoch': 0.09}
05/18/2024 20:35:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.0359, 'learning_rate': 4.9986e-05, 'epoch': 0.10}
05/18/2024 20:36:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.0511, 'learning_rate': 4.9982e-05, 'epoch': 0.12}
05/18/2024 20:37:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.9909, 'learning_rate': 4.9978e-05, 'epoch': 0.13}
05/18/2024 20:37:50 - INFO - llmtuner.extras.callbacks - {'loss': 1.0139, 'learning_rate': 4.9972e-05, 'epoch': 0.15}
05/18/2024 20:38:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.0156, 'learning_rate': 4.9967e-05, 'epoch': 0.16}
05/18/2024 20:39:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.9950, 'learning_rate': 4.9960e-05, 'epoch': 0.18}
05/18/2024 20:40:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.9641, 'learning_rate': 4.9953e-05, 'epoch': 0.19}
05/18/2024 20:40:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9934, 'learning_rate': 4.9946e-05, 'epoch': 0.21}
05/18/2024 20:41:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.9460, 'learning_rate': 4.9938e-05, 'epoch': 0.22}
05/18/2024 20:42:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.9100, 'learning_rate': 4.9929e-05, 'epoch': 0.24}
05/18/2024 20:43:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.9612, 'learning_rate': 4.9920e-05, 'epoch': 0.25}
05/18/2024 20:43:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.8687, 'learning_rate': 4.9910e-05, 'epoch': 0.27}
05/18/2024 20:44:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.9038, 'learning_rate': 4.9900e-05, 'epoch': 0.28}
05/18/2024 20:45:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.9063, 'learning_rate': 4.9889e-05, 'epoch': 0.30}
05/18/2024 20:45:23 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-100
05/18/2024 20:45:23 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-100/tokenizer_config.json
05/18/2024 20:45:23 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-100/special_tokens_map.json
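
From here the log settles into a steady rhythm: 20 callback lines per 100 optimization steps (apparently one entry every 5 steps), punctuated by a checkpoint save every 100 steps. To chart the run, the callback dicts can be parsed back out of the saved log; "train.log" below is a hypothetical path for wherever this output was captured.

```python
import ast
import re

# Pull (loss, learning_rate, epoch) out of the callback lines, e.g. to plot
# the loss curve against epochs.
PATTERN = re.compile(r"llmtuner\.extras\.callbacks - (\{.*\})")

records = []
with open("train.log", encoding="utf-8") as f:
    for line in f:
        match = PATTERN.search(line)
        if match:
            # The dicts are Python reprs (single quotes), not JSON.
            records.append(ast.literal_eval(match.group(1)))

print(records[0])  # {'loss': 1.3208, 'learning_rate': 5e-05, 'epoch': 0.01}
```
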
05/18/2024 20:46:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.9065, 'learning_rate': 4.9878e-05, 'epoch': 0.31}
05/18/2024 20:46:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.9492, 'learning_rate': 4.9866e-05, 'epoch': 0.33}
05/18/2024 20:47:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.9085, 'learning_rate': 4.9854e-05, 'epoch': 0.34}
05/18/2024 20:48:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.9125, 'learning_rate': 4.9841e-05, 'epoch': 0.36}
05/18/2024 20:49:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.9019, 'learning_rate': 4.9827e-05, 'epoch': 0.37}
05/18/2024 20:49:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.8188, 'learning_rate': 4.9813e-05, 'epoch': 0.39}
05/18/2024 20:50:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.8757, 'learning_rate': 4.9799e-05, 'epoch': 0.40}
05/18/2024 20:51:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.8711, 'learning_rate': 4.9784e-05, 'epoch': 0.42}
05/18/2024 20:51:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.8371, 'learning_rate': 4.9768e-05, 'epoch': 0.43}
05/18/2024 20:52:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.8282, 'learning_rate': 4.9752e-05, 'epoch': 0.45}
05/18/2024 20:53:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.8702, 'learning_rate': 4.9735e-05, 'epoch': 0.46}
05/18/2024 20:54:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.8779, 'learning_rate': 4.9717e-05, 'epoch': 0.48}
05/18/2024 20:54:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.8855, 'learning_rate': 4.9700e-05, 'epoch': 0.49}
05/18/2024 20:55:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.8211, 'learning_rate': 4.9681e-05, 'epoch': 0.51}
05/18/2024 20:56:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.8400, 'learning_rate': 4.9662e-05, 'epoch': 0.52}
05/18/2024 20:57:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.8529, 'learning_rate': 4.9643e-05, 'epoch': 0.54}
05/18/2024 20:57:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.8891, 'learning_rate': 4.9622e-05, 'epoch': 0.55}
05/18/2024 20:58:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.8705, 'learning_rate': 4.9602e-05, 'epoch': 0.57}
05/18/2024 20:59:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.8671, 'learning_rate': 4.9581e-05, 'epoch': 0.58}
05/18/2024 20:59:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.8379, 'learning_rate': 4.9559e-05, 'epoch': 0.60}
05/18/2024 20:59:58 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-200
05/18/2024 20:59:58 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-200/tokenizer_config.json
05/18/2024 20:59:58 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-200/special_tokens_map.json
05/18/2024 21:00:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.8086, 'learning_rate': 4.9537e-05, 'epoch': 0.61}
05/18/2024 21:01:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.8183, 'learning_rate': 4.9514e-05, 'epoch': 0.63}
05/18/2024 21:02:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.8557, 'learning_rate': 4.9491e-05, 'epoch': 0.64}
05/18/2024 21:03:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.8809, 'learning_rate': 4.9467e-05, 'epoch': 0.66}
05/18/2024 21:03:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.7887, 'learning_rate': 4.9442e-05, 'epoch': 0.67}
05/18/2024 21:04:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.8965, 'learning_rate': 4.9417e-05, 'epoch': 0.69}
05/18/2024 21:05:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.9189, 'learning_rate': 4.9392e-05, 'epoch': 0.70}
05/18/2024 21:06:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.8793, 'learning_rate': 4.9366e-05, 'epoch': 0.72}
05/18/2024 21:06:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.8052, 'learning_rate': 4.9339e-05, 'epoch': 0.73}
05/18/2024 21:07:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.8033, 'learning_rate': 4.9312e-05, 'epoch': 0.75}
05/18/2024 21:08:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.9045, 'learning_rate': 4.9284e-05, 'epoch': 0.76}
05/18/2024 21:09:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.8486, 'learning_rate': 4.9256e-05, 'epoch': 0.78}
05/18/2024 21:09:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.8324, 'learning_rate': 4.9227e-05, 'epoch': 0.79}
05/18/2024 21:10:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.8172, 'learning_rate': 4.9198e-05, 'epoch': 0.81}
05/18/2024 21:11:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.8258, 'learning_rate': 4.9168e-05, 'epoch': 0.82}
05/18/2024 21:11:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.8670, 'learning_rate': 4.9138e-05, 'epoch': 0.84}
05/18/2024 21:12:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.7974, 'learning_rate': 4.9107e-05, 'epoch': 0.85}
05/18/2024 21:13:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.8596, 'learning_rate': 4.9076e-05, 'epoch': 0.87}
05/18/2024 21:14:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.8076, 'learning_rate': 4.9044e-05, 'epoch': 0.88}
05/18/2024 21:14:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.8126, 'learning_rate': 4.9011e-05, 'epoch': 0.90}
05/18/2024 21:14:48 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-300
05/18/2024 21:14:48 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-300/tokenizer_config.json
05/18/2024 21:14:48 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-300/special_tokens_map.json
05/18/2024 21:15:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.8170, 'learning_rate': 4.8978e-05, 'epoch': 0.91}
05/18/2024 21:16:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.9226, 'learning_rate': 4.8945e-05, 'epoch': 0.93}
05/18/2024 21:16:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.7878, 'learning_rate': 4.8911e-05, 'epoch': 0.94}
05/18/2024 21:17:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.8187, 'learning_rate': 4.8876e-05, 'epoch': 0.96}
05/18/2024 21:18:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.8531, 'learning_rate': 4.8841e-05, 'epoch': 0.97}
05/18/2024 21:19:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.7615, 'learning_rate': 4.8805e-05, 'epoch': 0.99}
05/18/2024 21:19:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.8132, 'learning_rate': 4.8769e-05, 'epoch': 1.00}
05/18/2024 21:20:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.7877, 'learning_rate': 4.8732e-05, 'epoch': 1.02}
05/18/2024 21:21:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7892, 'learning_rate': 4.8695e-05, 'epoch': 1.03}
05/18/2024 21:22:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.8192, 'learning_rate': 4.8657e-05, 'epoch': 1.05}
05/18/2024 21:22:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.7530, 'learning_rate': 4.8619e-05, 'epoch': 1.06}
05/18/2024 21:23:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.8113, 'learning_rate': 4.8580e-05, 'epoch': 1.08}
05/18/2024 21:24:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.8731, 'learning_rate': 4.8541e-05, 'epoch': 1.09}
05/18/2024 21:24:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.8649, 'learning_rate': 4.8501e-05, 'epoch': 1.11}
05/18/2024 21:25:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.8200, 'learning_rate': 4.8461e-05, 'epoch': 1.12}
05/18/2024 21:26:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.8236, 'learning_rate': 4.8420e-05, 'epoch': 1.14}
05/18/2024 21:27:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.7584, 'learning_rate': 4.8379e-05, 'epoch': 1.15}
05/18/2024 21:28:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.7847, 'learning_rate': 4.8337e-05, 'epoch': 1.17}
05/18/2024 21:28:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.7597, 'learning_rate': 4.8294e-05, 'epoch': 1.18}
05/18/2024 21:29:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.7330, 'learning_rate': 4.8251e-05, 'epoch': 1.20}
05/18/2024 21:29:32 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-400
05/18/2024 21:29:32 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-400/tokenizer_config.json
05/18/2024 21:29:32 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-400/special_tokens_map.json
05/18/2024 21:30:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.7729, 'learning_rate': 4.8208e-05, 'epoch': 1.21}
05/18/2024 21:30:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7690, 'learning_rate': 4.8164e-05, 'epoch': 1.23}
05/18/2024 21:31:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.8017, 'learning_rate': 4.8119e-05, 'epoch': 1.24}
05/18/2024 21:32:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.7916, 'learning_rate': 4.8074e-05, 'epoch': 1.26}
05/18/2024 21:33:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.8058, 'learning_rate': 4.8029e-05, 'epoch': 1.27}
05/18/2024 21:33:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.8267, 'learning_rate': 4.7983e-05, 'epoch': 1.29}
05/18/2024 21:34:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.8316, 'learning_rate': 4.7936e-05, 'epoch': 1.30}
05/18/2024 21:35:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.7874, 'learning_rate': 4.7889e-05, 'epoch': 1.32}
05/18/2024 21:36:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.8268, 'learning_rate': 4.7842e-05, 'epoch': 1.33}
05/18/2024 21:36:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.8366, 'learning_rate': 4.7794e-05, 'epoch': 1.35}
05/18/2024 21:37:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.8279, 'learning_rate': 4.7745e-05, 'epoch': 1.36}
05/18/2024 21:38:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.8070, 'learning_rate': 4.7696e-05, 'epoch': 1.38}
05/18/2024 21:39:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.8259, 'learning_rate': 4.7647e-05, 'epoch': 1.39}
05/18/2024 21:39:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.8344, 'learning_rate': 4.7597e-05, 'epoch': 1.41}
05/18/2024 21:40:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.7565, 'learning_rate': 4.7546e-05, 'epoch': 1.42}
05/18/2024 21:41:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.7771, 'learning_rate': 4.7495e-05, 'epoch': 1.44}
05/18/2024 21:41:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.7803, 'learning_rate': 4.7443e-05, 'epoch': 1.45}
05/18/2024 21:42:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.8118, 'learning_rate': 4.7391e-05, 'epoch': 1.47}
05/18/2024 21:43:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.8490, 'learning_rate': 4.7339e-05, 'epoch': 1.48}
05/18/2024 21:44:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.7255, 'learning_rate': 4.7286e-05, 'epoch': 1.50}
05/18/2024 21:44:07 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-500
05/18/2024 21:44:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-500/tokenizer_config.json
05/18/2024 21:44:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-500/special_tokens_map.json
05/18/2024 21:44:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.7413, 'learning_rate': 4.7232e-05, 'epoch': 1.51}
05/18/2024 21:45:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.7648, 'learning_rate': 4.7178e-05, 'epoch': 1.53}
05/18/2024 21:46:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.7203, 'learning_rate': 4.7124e-05, 'epoch': 1.54}
05/18/2024 21:47:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.7813, 'learning_rate': 4.7069e-05, 'epoch': 1.56}
05/18/2024 21:47:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.8098, 'learning_rate': 4.7013e-05, 'epoch': 1.57}
05/18/2024 21:48:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.7752, 'learning_rate': 4.6957e-05, 'epoch': 1.59}
05/18/2024 21:49:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.8124, 'learning_rate': 4.6901e-05, 'epoch': 1.60}
05/18/2024 21:49:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.8453, 'learning_rate': 4.6844e-05, 'epoch': 1.62}
05/18/2024 21:50:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.7736, 'learning_rate': 4.6786e-05, 'epoch': 1.63}
05/18/2024 21:51:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.8248, 'learning_rate': 4.6729e-05, 'epoch': 1.65}
05/18/2024 21:52:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.7903, 'learning_rate': 4.6670e-05, 'epoch': 1.66}
05/18/2024 21:52:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7495, 'learning_rate': 4.6611e-05, 'epoch': 1.68}
05/18/2024 21:53:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.7600, 'learning_rate': 4.6552e-05, 'epoch': 1.69}
05/18/2024 21:54:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.7618, 'learning_rate': 4.6492e-05, 'epoch': 1.71}
05/18/2024 21:55:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.8169, 'learning_rate': 4.6432e-05, 'epoch': 1.72}
05/18/2024 21:55:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.8384, 'learning_rate': 4.6371e-05, 'epoch': 1.74}
05/18/2024 21:56:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.7570, 'learning_rate': 4.6310e-05, 'epoch': 1.75}
05/18/2024 21:57:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.8712, 'learning_rate': 4.6248e-05, 'epoch': 1.77}
05/18/2024 21:58:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.8009, 'learning_rate': 4.6186e-05, 'epoch': 1.78}
05/18/2024 21:58:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.8323, 'learning_rate': 4.6123e-05, 'epoch': 1.80}
05/18/2024 21:58:54 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-600
05/18/2024 21:58:54 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-600/tokenizer_config.json
05/18/2024 21:58:54 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-600/special_tokens_map.json
05/18/2024 21:59:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.7373, 'learning_rate': 4.6060e-05, 'epoch': 1.81}
05/18/2024 22:00:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.7533, 'learning_rate': 4.5997e-05, 'epoch': 1.83}
05/18/2024 22:01:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.7831, 'learning_rate': 4.5933e-05, 'epoch': 1.84}
05/18/2024 22:01:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.7733, 'learning_rate': 4.5868e-05, 'epoch': 1.86}
05/18/2024 22:02:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.8178, 'learning_rate': 4.5803e-05, 'epoch': 1.87}
05/18/2024 22:03:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.8028, 'learning_rate': 4.5738e-05, 'epoch': 1.89}
05/18/2024 22:04:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.7499, 'learning_rate': 4.5672e-05, 'epoch': 1.90}
05/18/2024 22:04:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7516, 'learning_rate': 4.5605e-05, 'epoch': 1.92}
05/18/2024 22:05:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.7598, 'learning_rate': 4.5539e-05, 'epoch': 1.93}
05/18/2024 22:06:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.8039, 'learning_rate': 4.5471e-05, 'epoch': 1.95}
05/18/2024 22:07:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.7574, 'learning_rate': 4.5404e-05, 'epoch': 1.96}
05/18/2024 22:07:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.7594, 'learning_rate': 4.5335e-05, 'epoch': 1.98}
05/18/2024 22:08:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.7489, 'learning_rate': 4.5267e-05, 'epoch': 1.99}
05/18/2024 22:09:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.7376, 'learning_rate': 4.5198e-05, 'epoch': 2.01}
05/18/2024 22:09:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.7275, 'learning_rate': 4.5128e-05, 'epoch': 2.02}
05/18/2024 22:10:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.7490, 'learning_rate': 4.5058e-05, 'epoch': 2.04}
05/18/2024 22:11:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7563, 'learning_rate': 4.4988e-05, 'epoch': 2.05}
05/18/2024 22:12:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.7426, 'learning_rate': 4.4917e-05, 'epoch': 2.07}
05/18/2024 22:12:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.7323, 'learning_rate': 4.4846e-05, 'epoch': 2.08}
05/18/2024 22:13:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.7474, 'learning_rate': 4.4774e-05, 'epoch': 2.10}
05/18/2024 22:13:33 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-700
05/18/2024 22:13:33 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-700/tokenizer_config.json
05/18/2024 22:13:33 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-700/special_tokens_map.json
05/18/2024 22:14:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.7660, 'learning_rate': 4.4702e-05, 'epoch': 2.11}
05/18/2024 22:15:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.8314, 'learning_rate': 4.4629e-05, 'epoch': 2.12}
05/18/2024 22:15:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.7388, 'learning_rate': 4.4556e-05, 'epoch': 2.14}
05/18/2024 22:16:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.7698, 'learning_rate': 4.4483e-05, 'epoch': 2.15}
05/18/2024 22:17:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.7692, 'learning_rate': 4.4409e-05, 'epoch': 2.17}
05/18/2024 22:17:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.7261, 'learning_rate': 4.4335e-05, 'epoch': 2.18}
05/18/2024 22:18:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.8001, 'learning_rate': 4.4260e-05, 'epoch': 2.20}
05/18/2024 22:19:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.8036, 'learning_rate': 4.4185e-05, 'epoch': 2.21}
05/18/2024 22:20:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.7775, 'learning_rate': 4.4109e-05, 'epoch': 2.23}
05/18/2024 22:20:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7449, 'learning_rate': 4.4033e-05, 'epoch': 2.24}
05/18/2024 22:21:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.7197, 'learning_rate': 4.3957e-05, 'epoch': 2.26}
05/18/2024 22:22:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.7691, 'learning_rate': 4.3880e-05, 'epoch': 2.27}
05/18/2024 22:23:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.7501, 'learning_rate': 4.3802e-05, 'epoch': 2.29}
05/18/2024 22:23:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.7606, 'learning_rate': 4.3725e-05, 'epoch': 2.30}
05/18/2024 22:24:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.7605, 'learning_rate': 4.3647e-05, 'epoch': 2.32}
05/18/2024 22:25:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.7186, 'learning_rate': 4.3568e-05, 'epoch': 2.33}
05/18/2024 22:25:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.7614, 'learning_rate': 4.3489e-05, 'epoch': 2.35}
05/18/2024 22:26:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.7537, 'learning_rate': 4.3410e-05, 'epoch': 2.36}
05/18/2024 22:27:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.7246, 'learning_rate': 4.3330e-05, 'epoch': 2.38}
05/18/2024 22:28:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.7561, 'learning_rate': 4.3250e-05, 'epoch': 2.39}
05/18/2024 22:28:10 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-800
05/18/2024 22:28:10 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-800/tokenizer_config.json
05/18/2024 22:28:10 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-800/special_tokens_map.json
05/18/2024 22:28:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.7418, 'learning_rate': 4.3169e-05, 'epoch': 2.41}
05/18/2024 22:29:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.6956, 'learning_rate': 4.3088e-05, 'epoch': 2.42}
05/18/2024 22:30:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.7715, 'learning_rate': 4.3007e-05, 'epoch': 2.44}
05/18/2024 22:31:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.7782, 'learning_rate': 4.2925e-05, 'epoch': 2.45}
05/18/2024 22:31:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.7125, 'learning_rate': 4.2843e-05, 'epoch': 2.47}
05/18/2024 22:32:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.7647, 'learning_rate': 4.2761e-05, 'epoch': 2.48}
05/18/2024 22:33:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.7327, 'learning_rate': 4.2678e-05, 'epoch': 2.50}
05/18/2024 22:34:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.7553, 'learning_rate': 4.2594e-05, 'epoch': 2.51}
05/18/2024 22:34:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.7685, 'learning_rate': 4.2511e-05, 'epoch': 2.53}
05/18/2024 22:35:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.7207, 'learning_rate': 4.2427e-05, 'epoch': 2.54}
05/18/2024 22:36:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7253, 'learning_rate': 4.2342e-05, 'epoch': 2.56}
05/18/2024 22:37:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.8055, 'learning_rate': 4.2257e-05, 'epoch': 2.57}
05/18/2024 22:37:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.7747, 'learning_rate': 4.2172e-05, 'epoch': 2.59}
05/18/2024 22:38:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.7525, 'learning_rate': 4.2086e-05, 'epoch': 2.60}
05/18/2024 22:39:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.7259, 'learning_rate': 4.2000e-05, 'epoch': 2.62}
05/18/2024 22:39:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.7267, 'learning_rate': 4.1914e-05, 'epoch': 2.63}
05/18/2024 22:40:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.7862, 'learning_rate': 4.1827e-05, 'epoch': 2.65}
05/18/2024 22:41:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.8079, 'learning_rate': 4.1740e-05, 'epoch': 2.66}
05/18/2024 22:42:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.7319, 'learning_rate': 4.1652e-05, 'epoch': 2.68}
05/18/2024 22:42:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.7478, 'learning_rate': 4.1565e-05, 'epoch': 2.69}
05/18/2024 22:42:51 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-900
05/18/2024 22:42:51 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-900/tokenizer_config.json
05/18/2024 22:42:51 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-900/special_tokens_map.json
05/18/2024 22:43:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.8038, 'learning_rate': 4.1476e-05, 'epoch': 2.71}
05/18/2024 22:44:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.7596, 'learning_rate': 4.1388e-05, 'epoch': 2.72}
05/18/2024 22:45:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6991, 'learning_rate': 4.1299e-05, 'epoch': 2.74}
05/18/2024 22:45:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.7014, 'learning_rate': 4.1209e-05, 'epoch': 2.75}
05/18/2024 22:46:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.8249, 'learning_rate': 4.1120e-05, 'epoch': 2.77}
05/18/2024 22:47:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.7091, 'learning_rate': 4.1030e-05, 'epoch': 2.78}
05/18/2024 22:48:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.8038, 'learning_rate': 4.0939e-05, 'epoch': 2.80}
05/18/2024 22:48:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7707, 'learning_rate': 4.0848e-05, 'epoch': 2.81}
05/18/2024 22:49:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.7483, 'learning_rate': 4.0757e-05, 'epoch': 2.83}
05/18/2024 22:50:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.7838, 'learning_rate': 4.0666e-05, 'epoch': 2.84}
05/18/2024 22:51:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.7758, 'learning_rate': 4.0574e-05, 'epoch': 2.86}
05/18/2024 22:51:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.7591, 'learning_rate': 4.0482e-05, 'epoch': 2.87}
05/18/2024 22:52:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.7012, 'learning_rate': 4.0389e-05, 'epoch': 2.89}
05/18/2024 22:53:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.8008, 'learning_rate': 4.0297e-05, 'epoch': 2.90}
05/18/2024 22:53:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.7492, 'learning_rate': 4.0203e-05, 'epoch': 2.92}
05/18/2024 22:54:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.7736, 'learning_rate': 4.0110e-05, 'epoch': 2.93}
05/18/2024 22:55:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.7198, 'learning_rate': 4.0016e-05, 'epoch': 2.95}
05/18/2024 22:56:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.7513, 'learning_rate': 3.9922e-05, 'epoch': 2.96}
05/18/2024 22:56:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.8040, 'learning_rate': 3.9827e-05, 'epoch': 2.98}
05/18/2024 22:57:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.6987, 'learning_rate': 3.9733e-05, 'epoch': 2.99}
05/18/2024 22:57:34 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1000
05/18/2024 22:57:34 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1000/tokenizer_config.json
05/18/2024 22:57:34 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1000/special_tokens_map.json
05/18/2024 22:58:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.8132, 'learning_rate': 3.9638e-05, 'epoch': 3.01}
05/18/2024 22:59:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.7291, 'learning_rate': 3.9542e-05, 'epoch': 3.02}
05/18/2024 22:59:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.7181, 'learning_rate': 3.9446e-05, 'epoch': 3.04}
05/18/2024 23:00:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.7320, 'learning_rate': 3.9350e-05, 'epoch': 3.05}
05/18/2024 23:01:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.8267, 'learning_rate': 3.9254e-05, 'epoch': 3.07}
05/18/2024 23:02:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6677, 'learning_rate': 3.9157e-05, 'epoch': 3.08}
05/18/2024 23:02:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.7701, 'learning_rate': 3.9060e-05, 'epoch': 3.10}
05/18/2024 23:03:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6776, 'learning_rate': 3.8962e-05, 'epoch': 3.11}
05/18/2024 23:04:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.7917, 'learning_rate': 3.8865e-05, 'epoch': 3.13}
05/18/2024 23:05:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.7032, 'learning_rate': 3.8767e-05, 'epoch': 3.14}
05/18/2024 23:05:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.7442, 'learning_rate': 3.8669e-05, 'epoch': 3.16}
05/18/2024 23:06:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6668, 'learning_rate': 3.8570e-05, 'epoch': 3.17}
05/18/2024 23:07:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.7408, 'learning_rate': 3.8471e-05, 'epoch': 3.19}
05/18/2024 23:08:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.7615, 'learning_rate': 3.8372e-05, 'epoch': 3.20}
05/18/2024 23:08:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6853, 'learning_rate': 3.8272e-05, 'epoch': 3.22}
05/18/2024 23:09:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6563, 'learning_rate': 3.8173e-05, 'epoch': 3.23}
05/18/2024 23:10:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.7744, 'learning_rate': 3.8072e-05, 'epoch': 3.25}
05/18/2024 23:10:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.7257, 'learning_rate': 3.7972e-05, 'epoch': 3.26}
05/18/2024 23:11:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.7711, 'learning_rate': 3.7871e-05, 'epoch': 3.28}
05/18/2024 23:12:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.7528, 'learning_rate': 3.7771e-05, 'epoch': 3.29}
05/18/2024 23:12:19 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1100
05/18/2024 23:12:19 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1100/tokenizer_config.json
05/18/2024 23:12:19 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1100/special_tokens_map.json
05/18/2024 23:13:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.7642, 'learning_rate': 3.7669e-05, 'epoch': 3.31}
05/18/2024 23:13:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.7566, 'learning_rate': 3.7568e-05, 'epoch': 3.32}
05/18/2024 23:14:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6818, 'learning_rate': 3.7466e-05, 'epoch': 3.34}
05/18/2024 23:15:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.7248, 'learning_rate': 3.7364e-05, 'epoch': 3.35}
05/18/2024 23:15:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7569, 'learning_rate': 3.7262e-05, 'epoch': 3.37}
05/18/2024 23:16:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.7775, 'learning_rate': 3.7159e-05, 'epoch': 3.38}
05/18/2024 23:17:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.7637, 'learning_rate': 3.7056e-05, 'epoch': 3.40}
05/18/2024 23:18:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.7701, 'learning_rate': 3.6953e-05, 'epoch': 3.41}
05/18/2024 23:18:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7202, 'learning_rate': 3.6850e-05, 'epoch': 3.43}
05/18/2024 23:19:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.7360, 'learning_rate': 3.6746e-05, 'epoch': 3.44}
05/18/2024 23:20:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.7361, 'learning_rate': 3.6642e-05, 'epoch': 3.46}
05/18/2024 23:21:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.7012, 'learning_rate': 3.6538e-05, 'epoch': 3.47}
05/18/2024 23:21:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6902, 'learning_rate': 3.6433e-05, 'epoch': 3.49}
05/18/2024 23:22:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.7524, 'learning_rate': 3.6329e-05, 'epoch': 3.50}
05/18/2024 23:23:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.7270, 'learning_rate': 3.6224e-05, 'epoch': 3.52}
05/18/2024 23:24:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.7811, 'learning_rate': 3.6119e-05, 'epoch': 3.53}
05/18/2024 23:24:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.7164, 'learning_rate': 3.6013e-05, 'epoch': 3.55}
05/18/2024 23:25:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.7238, 'learning_rate': 3.5908e-05, 'epoch': 3.56}
05/18/2024 23:26:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.7886, 'learning_rate': 3.5802e-05, 'epoch': 3.58}
05/18/2024 23:26:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7491, 'learning_rate': 3.5696e-05, 'epoch': 3.59}
05/18/2024 23:26:57 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1200
05/18/2024 23:26:58 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1200/tokenizer_config.json
05/18/2024 23:26:58 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1200/special_tokens_map.json
05/18/2024 23:27:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.7499, 'learning_rate': 3.5589e-05, 'epoch': 3.61}
05/18/2024 23:28:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6803, 'learning_rate': 3.5483e-05, 'epoch': 3.62}
05/18/2024 23:29:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.7302, 'learning_rate': 3.5376e-05, 'epoch': 3.64}
05/18/2024 23:29:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.7106, 'learning_rate': 3.5269e-05, 'epoch': 3.65}
05/18/2024 23:30:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7930, 'learning_rate': 3.5161e-05, 'epoch': 3.67}
05/18/2024 23:31:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.8239, 'learning_rate': 3.5054e-05, 'epoch': 3.68}
05/18/2024 23:32:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6556, 'learning_rate': 3.4946e-05, 'epoch': 3.70}
05/18/2024 23:33:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.7934, 'learning_rate': 3.4838e-05, 'epoch': 3.71}
05/18/2024 23:33:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7730, 'learning_rate': 3.4730e-05, 'epoch': 3.73}
05/18/2024 23:34:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.7345, 'learning_rate': 3.4621e-05, 'epoch': 3.74}
05/18/2024 23:35:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.7036, 'learning_rate': 3.4513e-05, 'epoch': 3.76}
05/18/2024 23:36:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.8012, 'learning_rate': 3.4404e-05, 'epoch': 3.77}
05/18/2024 23:36:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.7567, 'learning_rate': 3.4295e-05, 'epoch': 3.79}
05/18/2024 23:37:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.7258, 'learning_rate': 3.4186e-05, 'epoch': 3.80}
05/18/2024 23:38:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.7048, 'learning_rate': 3.4076e-05, 'epoch': 3.82}
05/18/2024 23:38:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.7275, 'learning_rate': 3.3967e-05, 'epoch': 3.83}
05/18/2024 23:39:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.6935, 'learning_rate': 3.3857e-05, 'epoch': 3.85}
05/18/2024 23:40:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.7128, 'learning_rate': 3.3747e-05, 'epoch': 3.86}
05/18/2024 23:41:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6997, 'learning_rate': 3.3636e-05, 'epoch': 3.88}
05/18/2024 23:41:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6989, 'learning_rate': 3.3526e-05, 'epoch': 3.89}
05/18/2024 23:41:43 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1300
05/18/2024 23:41:43 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1300/tokenizer_config.json
05/18/2024 23:41:43 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1300/special_tokens_map.json
05/18/2024 23:42:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.7412, 'learning_rate': 3.3415e-05, 'epoch': 3.91}
05/18/2024 23:43:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.7208, 'learning_rate': 3.3305e-05, 'epoch': 3.92}
05/18/2024 23:43:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.7097, 'learning_rate': 3.3194e-05, 'epoch': 3.94}
05/18/2024 23:44:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.7063, 'learning_rate': 3.3082e-05, 'epoch': 3.95}
05/18/2024 23:45:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.7256, 'learning_rate': 3.2971e-05, 'epoch': 3.97}
05/18/2024 23:46:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.7000, 'learning_rate': 3.2859e-05, 'epoch': 3.98}
05/18/2024 23:46:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.7179, 'learning_rate': 3.2748e-05, 'epoch': 4.00}
05/18/2024 23:47:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.7768, 'learning_rate': 3.2636e-05, 'epoch': 4.01}
05/18/2024 23:48:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.6737, 'learning_rate': 3.2524e-05, 'epoch': 4.03}
05/18/2024 23:49:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6480, 'learning_rate': 3.2412e-05, 'epoch': 4.04}
05/18/2024 23:49:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.7591, 'learning_rate': 3.2299e-05, 'epoch': 4.06}
05/18/2024 23:50:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6756, 'learning_rate': 3.2187e-05, 'epoch': 4.07}
05/18/2024 23:51:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.7229, 'learning_rate': 3.2074e-05, 'epoch': 4.09}
05/18/2024 23:52:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.7267, 'learning_rate': 3.1961e-05, 'epoch': 4.10}
05/18/2024 23:52:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.7363, 'learning_rate': 3.1848e-05, 'epoch': 4.12}
05/18/2024 23:53:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.7083, 'learning_rate': 3.1735e-05, 'epoch': 4.13}
05/18/2024 23:54:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.6997, 'learning_rate': 3.1622e-05, 'epoch': 4.15}
05/18/2024 23:55:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.7047, 'learning_rate': 3.1508e-05, 'epoch': 4.16}
05/18/2024 23:55:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6965, 'learning_rate': 3.1395e-05, 'epoch': 4.18}
05/18/2024 23:56:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.7139, 'learning_rate': 3.1281e-05, 'epoch': 4.19}
05/18/2024 23:56:33 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1400
05/18/2024 23:56:33 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1400/tokenizer_config.json
05/18/2024 23:56:33 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1400/special_tokens_map.json
05/18/2024 23:57:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.7125, 'learning_rate': 3.1167e-05, 'epoch': 4.21}
05/18/2024 23:58:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.7247, 'learning_rate': 3.1053e-05, 'epoch': 4.22}
05/18/2024 23:58:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6983, 'learning_rate': 3.0939e-05, 'epoch': 4.23}
05/18/2024 23:59:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.7491, 'learning_rate': 3.0825e-05, 'epoch': 4.25}
05/19/2024 00:00:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6763, 'learning_rate': 3.0710e-05, 'epoch': 4.26}
05/19/2024 00:00:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.7401, 'learning_rate': 3.0596e-05, 'epoch': 4.28}
05/19/2024 00:01:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6849, 'learning_rate': 3.0481e-05, 'epoch': 4.29}
05/19/2024 00:02:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7617, 'learning_rate': 3.0366e-05, 'epoch': 4.31}
05/19/2024 00:03:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.7097, 'learning_rate': 3.0251e-05, 'epoch': 4.32}
05/19/2024 00:03:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6899, 'learning_rate': 3.0136e-05, 'epoch': 4.34}
05/19/2024 00:04:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.7034, 'learning_rate': 3.0021e-05, 'epoch': 4.35}
05/19/2024 00:05:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.7223, 'learning_rate': 2.9906e-05, 'epoch': 4.37}
05/19/2024 00:05:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7026, 'learning_rate': 2.9791e-05, 'epoch': 4.38}
05/19/2024 00:06:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.7340, 'learning_rate': 2.9675e-05, 'epoch': 4.40}
05/19/2024 00:07:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.7691, 'learning_rate': 2.9560e-05, 'epoch': 4.41}
05/19/2024 00:08:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.7691, 'learning_rate': 2.9444e-05, 'epoch': 4.43}
05/19/2024 00:08:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.7064, 'learning_rate': 2.9328e-05, 'epoch': 4.44}
05/19/2024 00:09:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6983, 'learning_rate': 2.9212e-05, 'epoch': 4.46}
05/19/2024 00:10:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7616, 'learning_rate': 2.9097e-05, 'epoch': 4.47}
05/19/2024 00:11:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.7386, 'learning_rate': 2.8981e-05, 'epoch': 4.49}
05/19/2024 00:11:04 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1500
05/19/2024 00:11:04 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1500/tokenizer_config.json
05/19/2024 00:11:04 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1500/special_tokens_map.json
05/19/2024 00:11:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6959, 'learning_rate': 2.8864e-05, 'epoch': 4.50}
05/19/2024 00:12:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6960, 'learning_rate': 2.8748e-05, 'epoch': 4.52}
05/19/2024 00:13:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.7367, 'learning_rate': 2.8632e-05, 'epoch': 4.53}
05/19/2024 00:14:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.7221, 'learning_rate': 2.8516e-05, 'epoch': 4.55}
05/19/2024 00:14:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.7485, 'learning_rate': 2.8399e-05, 'epoch': 4.56}
05/19/2024 00:15:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.8012, 'learning_rate': 2.8283e-05, 'epoch': 4.58}
05/19/2024 00:16:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6885, 'learning_rate': 2.8166e-05, 'epoch': 4.59}
05/19/2024 00:17:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.7332, 'learning_rate': 2.8049e-05, 'epoch': 4.61}
05/19/2024 00:17:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.7952, 'learning_rate': 2.7933e-05, 'epoch': 4.62}
05/19/2024 00:18:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.7742, 'learning_rate': 2.7816e-05, 'epoch': 4.64}
05/19/2024 00:19:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.7715, 'learning_rate': 2.7699e-05, 'epoch': 4.65}
05/19/2024 00:19:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7624, 'learning_rate': 2.7582e-05, 'epoch': 4.67}
05/19/2024 00:20:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6833, 'learning_rate': 2.7465e-05, 'epoch': 4.68}
05/19/2024 00:21:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6924, 'learning_rate': 2.7348e-05, 'epoch': 4.70}
05/19/2024 00:22:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6909, 'learning_rate': 2.7231e-05, 'epoch': 4.71}
05/19/2024 00:22:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.7844, 'learning_rate': 2.7114e-05, 'epoch': 4.73}
05/19/2024 00:23:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.7149, 'learning_rate': 2.6997e-05, 'epoch': 4.74}
05/19/2024 00:24:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.6882, 'learning_rate': 2.6879e-05, 'epoch': 4.76}
05/19/2024 00:25:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.7442, 'learning_rate': 2.6762e-05, 'epoch': 4.77}
05/19/2024 00:25:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.7078, 'learning_rate': 2.6645e-05, 'epoch': 4.79}
05/19/2024 00:25:45 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1600
05/19/2024 00:25:45 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1600/tokenizer_config.json
05/19/2024 00:25:45 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1600/special_tokens_map.json
05/19/2024 00:26:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.6789, 'learning_rate': 2.6528e-05, 'epoch': 4.80}
05/19/2024 00:27:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6703, 'learning_rate': 2.6410e-05, 'epoch': 4.82}
05/19/2024 00:27:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6671, 'learning_rate': 2.6293e-05, 'epoch': 4.83}
05/19/2024 00:28:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6936, 'learning_rate': 2.6175e-05, 'epoch': 4.85}
05/19/2024 00:29:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6755, 'learning_rate': 2.6058e-05, 'epoch': 4.86}
05/19/2024 00:30:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6515, 'learning_rate': 2.5940e-05, 'epoch': 4.88}
05/19/2024 00:30:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.7108, 'learning_rate': 2.5823e-05, 'epoch': 4.89}
05/19/2024 00:31:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.7038, 'learning_rate': 2.5705e-05, 'epoch': 4.91}
05/19/2024 00:32:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6804, 'learning_rate': 2.5588e-05, 'epoch': 4.92}
05/19/2024 00:33:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.7491, 'learning_rate': 2.5470e-05, 'epoch': 4.94}
05/19/2024 00:33:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.7297, 'learning_rate': 2.5353e-05, 'epoch': 4.95}
05/19/2024 00:34:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6731, 'learning_rate': 2.5235e-05, 'epoch': 4.97}
05/19/2024 00:35:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.7156, 'learning_rate': 2.5118e-05, 'epoch': 4.98}
05/19/2024 00:36:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.7455, 'learning_rate': 2.5000e-05, 'epoch': 5.00}
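
At this midpoint entry (epoch 5.00 is step 1,670 of 3,340) the learning rate sits at exactly half its 5.0000e-05 peak, and the earliest entries decayed only in the fifth decimal place; both behaviors match a cosine schedule rather than a linear one. The launch arguments are not part of this excerpt, so the schedule type is inferred, but the sketch below reproduces the logged values:

```python
import math

# Cosine decay from the 5e-05 peak over 3,340 steps (schedule type inferred
# from the logged values, not shown in this excerpt).
peak, total_steps = 5e-05, 3340

def cosine_lr(step: int) -> float:
    return 0.5 * (1.0 + math.cos(math.pi * step / total_steps)) * peak

print(f"{cosine_lr(100):.4e}")   # 4.9889e-05, as logged at checkpoint-100
print(f"{cosine_lr(1670):.4e}")  # 2.5000e-05, as logged here at epoch 5.00
```
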
05/19/2024 00:36:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.7206, 'learning_rate': 2.4882e-05, 'epoch': 5.01}
05/19/2024 00:37:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6685, 'learning_rate': 2.4765e-05, 'epoch': 5.03}
05/19/2024 00:38:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6722, 'learning_rate': 2.4647e-05, 'epoch': 5.04}
05/19/2024 00:38:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.7271, 'learning_rate': 2.4530e-05, 'epoch': 5.06}
05/19/2024 00:39:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6775, 'learning_rate': 2.4412e-05, 'epoch': 5.07}
05/19/2024 00:40:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.7324, 'learning_rate': 2.4295e-05, 'epoch': 5.09}
05/19/2024 00:40:22 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1700
05/19/2024 00:40:22 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1700/tokenizer_config.json
05/19/2024 00:40:22 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1700/special_tokens_map.json
05/19/2024 00:41:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.7041, 'learning_rate': 2.4177e-05, 'epoch': 5.10}
05/19/2024 00:41:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6902, 'learning_rate': 2.4060e-05, 'epoch': 5.12}
05/19/2024 00:42:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.7462, 'learning_rate': 2.3942e-05, 'epoch': 5.13}
05/19/2024 00:43:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.7482, 'learning_rate': 2.3825e-05, 'epoch': 5.15}
05/19/2024 00:43:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.7026, 'learning_rate': 2.3707e-05, 'epoch': 5.16}
05/19/2024 00:44:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6810, 'learning_rate': 2.3590e-05, 'epoch': 5.18}
05/19/2024 00:45:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.7101, 'learning_rate': 2.3472e-05, 'epoch': 5.19}
05/19/2024 00:46:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6857, 'learning_rate': 2.3355e-05, 'epoch': 5.21}
05/19/2024 00:46:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.7127, 'learning_rate': 2.3238e-05, 'epoch': 5.22}
05/19/2024 00:47:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6823, 'learning_rate': 2.3121e-05, 'epoch': 5.24}
05/19/2024 00:48:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.7118, 'learning_rate': 2.3003e-05, 'epoch': 5.25}
05/19/2024 00:49:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.7521, 'learning_rate': 2.2886e-05, 'epoch': 5.27}
05/19/2024 00:49:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7331, 'learning_rate': 2.2769e-05, 'epoch': 5.28}
05/19/2024 00:50:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.7031, 'learning_rate': 2.2652e-05, 'epoch': 5.30}
05/19/2024 00:51:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6622, 'learning_rate': 2.2535e-05, 'epoch': 5.31}
05/19/2024 00:52:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6662, 'learning_rate': 2.2418e-05, 'epoch': 5.33}
05/19/2024 00:52:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6913, 'learning_rate': 2.2301e-05, 'epoch': 5.34}
05/19/2024 00:53:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6777, 'learning_rate': 2.2184e-05, 'epoch': 5.36}
05/19/2024 00:54:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.6608, 'learning_rate': 2.2067e-05, 'epoch': 5.37}
05/19/2024 00:55:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6696, 'learning_rate': 2.1951e-05, 'epoch': 5.39}
05/19/2024 00:55:01 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1800
05/19/2024 00:55:01 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1800/tokenizer_config.json
|
|
|
05/19/2024 00:55:01 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1800/special_tokens_map.json |
|
|
|
05/19/2024 00:55:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.7117, 'learning_rate': 2.1834e-05, 'epoch': 5.40} |
|
|
|
05/19/2024 00:56:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6851, 'learning_rate': 2.1717e-05, 'epoch': 5.42} |
|
|
|
05/19/2024 00:57:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.7386, 'learning_rate': 2.1601e-05, 'epoch': 5.43} |
|
|
|
05/19/2024 00:58:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6995, 'learning_rate': 2.1484e-05, 'epoch': 5.45} |
|
|
|
05/19/2024 00:58:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6936, 'learning_rate': 2.1368e-05, 'epoch': 5.46} |
|
|
|
05/19/2024 00:59:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.7228, 'learning_rate': 2.1252e-05, 'epoch': 5.48} |
|
|
|
05/19/2024 01:00:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.7700, 'learning_rate': 2.1136e-05, 'epoch': 5.49} |
|
|
|
05/19/2024 01:00:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6844, 'learning_rate': 2.1019e-05, 'epoch': 5.51} |
|
|
|
05/19/2024 01:01:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.7025, 'learning_rate': 2.0903e-05, 'epoch': 5.52} |
|
|
|
05/19/2024 01:02:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.6874, 'learning_rate': 2.0788e-05, 'epoch': 5.54} |
|
|
|
05/19/2024 01:03:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.7446, 'learning_rate': 2.0672e-05, 'epoch': 5.55} |
|
|
|
05/19/2024 01:03:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.7386, 'learning_rate': 2.0556e-05, 'epoch': 5.57} |
|
|
|
05/19/2024 01:04:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.7292, 'learning_rate': 2.0440e-05, 'epoch': 5.58} |
|
|
|
05/19/2024 01:05:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.7167, 'learning_rate': 2.0325e-05, 'epoch': 5.60} |
|
|
|
05/19/2024 01:06:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.7249, 'learning_rate': 2.0209e-05, 'epoch': 5.61} |
|
|
|
05/19/2024 01:06:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7436, 'learning_rate': 2.0094e-05, 'epoch': 5.63} |
|
|
|
05/19/2024 01:07:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6922, 'learning_rate': 1.9979e-05, 'epoch': 5.64} |
|
|
|
05/19/2024 01:08:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.7689, 'learning_rate': 1.9864e-05, 'epoch': 5.66} |
|
|
|
05/19/2024 01:09:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.7074, 'learning_rate': 1.9749e-05, 'epoch': 5.67} |
|
|
|
05/19/2024 01:09:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.7115, 'learning_rate': 1.9634e-05, 'epoch': 5.69} |
|
|
|
05/19/2024 01:09:52 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1900 |
|
|
|
05/19/2024 01:09:52 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1900/tokenizer_config.json |
|
|
|
05/19/2024 01:09:52 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-1900/special_tokens_map.json |
|
|
|
05/19/2024 01:10:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.7230, 'learning_rate': 1.9519e-05, 'epoch': 5.70} |
|
|
|
05/19/2024 01:11:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.6543, 'learning_rate': 1.9404e-05, 'epoch': 5.72} |
|
|
|
05/19/2024 01:12:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.7399, 'learning_rate': 1.9290e-05, 'epoch': 5.73} |
|
|
|
05/19/2024 01:12:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.6795, 'learning_rate': 1.9175e-05, 'epoch': 5.75} |
|
|
|
05/19/2024 01:13:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.7210, 'learning_rate': 1.9061e-05, 'epoch': 5.76} |
|
|
|
05/19/2024 01:14:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6991, 'learning_rate': 1.8947e-05, 'epoch': 5.78} |
|
|
|
05/19/2024 01:14:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.7136, 'learning_rate': 1.8833e-05, 'epoch': 5.79} |
|
|
|
05/19/2024 01:15:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6695, 'learning_rate': 1.8719e-05, 'epoch': 5.81} |
|
|
|
05/19/2024 01:16:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6473, 'learning_rate': 1.8605e-05, 'epoch': 5.82} |
|
|
|
05/19/2024 01:17:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6412, 'learning_rate': 1.8492e-05, 'epoch': 5.84} |
|
|
|
05/19/2024 01:17:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.6956, 'learning_rate': 1.8378e-05, 'epoch': 5.85} |
|
|
|
05/19/2024 01:18:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.7798, 'learning_rate': 1.8265e-05, 'epoch': 5.87} |
|
|
|
05/19/2024 01:19:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.7011, 'learning_rate': 1.8152e-05, 'epoch': 5.88} |
|
|
|
05/19/2024 01:20:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.7240, 'learning_rate': 1.8039e-05, 'epoch': 5.90} |
|
|
|
05/19/2024 01:20:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6702, 'learning_rate': 1.7926e-05, 'epoch': 5.91} |
|
|
|
05/19/2024 01:21:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.7470, 'learning_rate': 1.7813e-05, 'epoch': 5.93} |
|
|
|
05/19/2024 01:22:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6690, 'learning_rate': 1.7701e-05, 'epoch': 5.94} |
|
|
|
05/19/2024 01:22:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.7339, 'learning_rate': 1.7588e-05, 'epoch': 5.96} |
|
|
|
05/19/2024 01:23:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6925, 'learning_rate': 1.7476e-05, 'epoch': 5.97} |
|
|
|
05/19/2024 01:24:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6798, 'learning_rate': 1.7364e-05, 'epoch': 5.99} |
|
|
|
05/19/2024 01:24:24 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2000 |
|
|
|
05/19/2024 01:24:24 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2000/tokenizer_config.json |
|
|
|
05/19/2024 01:24:24 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2000/special_tokens_map.json |
|
|
|
05/19/2024 01:25:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.7133, 'learning_rate': 1.7252e-05, 'epoch': 6.00} |
|
|
|
05/19/2024 01:25:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6953, 'learning_rate': 1.7141e-05, 'epoch': 6.02} |
|
|
|
05/19/2024 01:26:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6381, 'learning_rate': 1.7029e-05, 'epoch': 6.03} |
|
|
|
05/19/2024 01:27:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.7127, 'learning_rate': 1.6918e-05, 'epoch': 6.05} |
|
|
|
05/19/2024 01:28:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.7081, 'learning_rate': 1.6806e-05, 'epoch': 6.06} |
|
|
|
05/19/2024 01:28:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6766, 'learning_rate': 1.6695e-05, 'epoch': 6.08} |
|
|
|
05/19/2024 01:29:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6857, 'learning_rate': 1.6585e-05, 'epoch': 6.09} |
|
|
|
05/19/2024 01:30:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.7685, 'learning_rate': 1.6474e-05, 'epoch': 6.11} |
|
|
|
05/19/2024 01:31:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7044, 'learning_rate': 1.6364e-05, 'epoch': 6.12} |
|
|
|
05/19/2024 01:32:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6415, 'learning_rate': 1.6253e-05, 'epoch': 6.14} |
|
|
|
05/19/2024 01:32:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.7047, 'learning_rate': 1.6143e-05, 'epoch': 6.15} |
|
|
|
05/19/2024 01:33:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.7164, 'learning_rate': 1.6033e-05, 'epoch': 6.17} |
|
|
|
05/19/2024 01:34:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6773, 'learning_rate': 1.5924e-05, 'epoch': 6.18} |
|
|
|
05/19/2024 01:34:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6457, 'learning_rate': 1.5814e-05, 'epoch': 6.20} |
|
|
|
05/19/2024 01:35:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.7297, 'learning_rate': 1.5705e-05, 'epoch': 6.21} |
|
|
|
05/19/2024 01:36:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.7535, 'learning_rate': 1.5596e-05, 'epoch': 6.23} |
|
|
|
05/19/2024 01:37:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.7605, 'learning_rate': 1.5487e-05, 'epoch': 6.24} |
|
|
|
05/19/2024 01:38:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.7251, 'learning_rate': 1.5379e-05, 'epoch': 6.26} |
|
|
|
05/19/2024 01:38:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6835, 'learning_rate': 1.5270e-05, 'epoch': 6.27} |
|
|
|
05/19/2024 01:39:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6598, 'learning_rate': 1.5162e-05, 'epoch': 6.29} |
|
|
|
05/19/2024 01:39:29 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2100 |
|
|
|
05/19/2024 01:39:29 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2100/tokenizer_config.json |
|
|
|
05/19/2024 01:39:29 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2100/special_tokens_map.json |
|
|
|
05/19/2024 01:40:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6683, 'learning_rate': 1.5054e-05, 'epoch': 6.30} |
|
|
|
05/19/2024 01:40:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6762, 'learning_rate': 1.4946e-05, 'epoch': 6.32} |
|
|
|
05/19/2024 01:41:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6821, 'learning_rate': 1.4839e-05, 'epoch': 6.33} |
|
|
|
05/19/2024 01:42:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6303, 'learning_rate': 1.4731e-05, 'epoch': 6.34} |
|
|
|
05/19/2024 01:43:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6306, 'learning_rate': 1.4624e-05, 'epoch': 6.36} |
|
|
|
05/19/2024 01:43:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.7352, 'learning_rate': 1.4517e-05, 'epoch': 6.37} |
|
|
|
05/19/2024 01:44:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.6911, 'learning_rate': 1.4411e-05, 'epoch': 6.39} |
|
|
|
05/19/2024 01:45:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6651, 'learning_rate': 1.4304e-05, 'epoch': 6.40} |
|
|
|
05/19/2024 01:46:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6779, 'learning_rate': 1.4198e-05, 'epoch': 6.42} |
|
|
|
05/19/2024 01:46:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.7289, 'learning_rate': 1.4092e-05, 'epoch': 6.43} |
|
|
|
05/19/2024 01:47:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6656, 'learning_rate': 1.3987e-05, 'epoch': 6.45} |
|
|
|
05/19/2024 01:48:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6836, 'learning_rate': 1.3881e-05, 'epoch': 6.46} |
|
|
|
05/19/2024 01:49:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.7178, 'learning_rate': 1.3776e-05, 'epoch': 6.48} |
|
|
|
05/19/2024 01:49:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.6895, 'learning_rate': 1.3671e-05, 'epoch': 6.49} |
|
|
|
05/19/2024 01:50:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.7292, 'learning_rate': 1.3567e-05, 'epoch': 6.51} |
|
|
|
05/19/2024 01:51:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7560, 'learning_rate': 1.3462e-05, 'epoch': 6.52} |
|
|
|
05/19/2024 01:52:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6959, 'learning_rate': 1.3358e-05, 'epoch': 6.54} |
|
|
|
05/19/2024 01:52:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7918, 'learning_rate': 1.3254e-05, 'epoch': 6.55} |
|
|
|
05/19/2024 01:53:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6790, 'learning_rate': 1.3150e-05, 'epoch': 6.57} |
|
|
|
05/19/2024 01:54:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.8020, 'learning_rate': 1.3047e-05, 'epoch': 6.58} |
|
|
|
05/19/2024 01:54:21 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2200 |
|
|
|
05/19/2024 01:54:21 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2200/tokenizer_config.json |
|
|
|
05/19/2024 01:54:21 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2200/special_tokens_map.json |
|
|
|
05/19/2024 01:55:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.7470, 'learning_rate': 1.2944e-05, 'epoch': 6.60} |
|
|
|
05/19/2024 01:55:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6693, 'learning_rate': 1.2841e-05, 'epoch': 6.61} |
|
|
|
05/19/2024 01:56:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.7011, 'learning_rate': 1.2738e-05, 'epoch': 6.63} |
|
|
|
05/19/2024 01:57:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.7627, 'learning_rate': 1.2636e-05, 'epoch': 6.64} |
|
|
|
05/19/2024 01:57:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6859, 'learning_rate': 1.2534e-05, 'epoch': 6.66} |
|
|
|
05/19/2024 01:58:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.7204, 'learning_rate': 1.2432e-05, 'epoch': 6.67} |
|
|
|
05/19/2024 01:59:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6392, 'learning_rate': 1.2331e-05, 'epoch': 6.69} |
|
|
|
05/19/2024 02:00:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.7128, 'learning_rate': 1.2229e-05, 'epoch': 6.70} |
|
|
|
05/19/2024 02:00:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.7145, 'learning_rate': 1.2129e-05, 'epoch': 6.72} |
|
|
|
05/19/2024 02:01:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6784, 'learning_rate': 1.2028e-05, 'epoch': 6.73} |
|
|
|
05/19/2024 02:02:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6929, 'learning_rate': 1.1928e-05, 'epoch': 6.75} |
|
|
|
05/19/2024 02:03:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.7516, 'learning_rate': 1.1827e-05, 'epoch': 6.76} |
|
|
|
05/19/2024 02:03:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6990, 'learning_rate': 1.1728e-05, 'epoch': 6.78} |
|
|
|
05/19/2024 02:04:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.7607, 'learning_rate': 1.1628e-05, 'epoch': 6.79} |
|
|
|
05/19/2024 02:05:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6541, 'learning_rate': 1.1529e-05, 'epoch': 6.81} |
|
|
|
05/19/2024 02:05:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6840, 'learning_rate': 1.1430e-05, 'epoch': 6.82} |
|
|
|
05/19/2024 02:06:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.7134, 'learning_rate': 1.1331e-05, 'epoch': 6.84} |
|
|
|
05/19/2024 02:07:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.7275, 'learning_rate': 1.1233e-05, 'epoch': 6.85} |
|
|
|
05/19/2024 02:08:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6796, 'learning_rate': 1.1135e-05, 'epoch': 6.87} |
|
|
|
05/19/2024 02:08:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6740, 'learning_rate': 1.1038e-05, 'epoch': 6.88} |
|
|
|
05/19/2024 02:08:45 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2300 |
|
|
|
05/19/2024 02:08:45 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2300/tokenizer_config.json |
|
|
|
05/19/2024 02:08:45 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2300/special_tokens_map.json |
|
|
|
05/19/2024 02:09:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6678, 'learning_rate': 1.0940e-05, 'epoch': 6.90} |
|
|
|
05/19/2024 02:10:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6954, 'learning_rate': 1.0843e-05, 'epoch': 6.91} |
|
|
|
05/19/2024 02:10:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.6517, 'learning_rate': 1.0746e-05, 'epoch': 6.93} |
|
|
|
05/19/2024 02:11:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6424, 'learning_rate': 1.0650e-05, 'epoch': 6.94} |
|
|
|
05/19/2024 02:12:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.7396, 'learning_rate': 1.0554e-05, 'epoch': 6.96} |
|
|
|
05/19/2024 02:13:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.6806, 'learning_rate': 1.0458e-05, 'epoch': 6.97} |
|
|
|
05/19/2024 02:13:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.6835, 'learning_rate': 1.0362e-05, 'epoch': 6.99} |
|
|
|
05/19/2024 02:14:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6672, 'learning_rate': 1.0267e-05, 'epoch': 7.00} |
|
|
|
05/19/2024 02:15:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6341, 'learning_rate': 1.0173e-05, 'epoch': 7.02} |
|
|
|
05/19/2024 02:15:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6665, 'learning_rate': 1.0078e-05, 'epoch': 7.03} |
|
|
|
05/19/2024 02:16:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6903, 'learning_rate': 9.9839e-06, 'epoch': 7.05} |
|
|
|
05/19/2024 02:17:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.7205, 'learning_rate': 9.8900e-06, 'epoch': 7.06} |
|
|
|
05/19/2024 02:18:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6929, 'learning_rate': 9.7965e-06, 'epoch': 7.08} |
|
|
|
05/19/2024 02:18:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6346, 'learning_rate': 9.7033e-06, 'epoch': 7.09} |
|
|
|
05/19/2024 02:19:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6626, 'learning_rate': 9.6105e-06, 'epoch': 7.11} |
|
|
|
05/19/2024 02:20:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6738, 'learning_rate': 9.5180e-06, 'epoch': 7.12} |
|
|
|
05/19/2024 02:20:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6570, 'learning_rate': 9.4259e-06, 'epoch': 7.14} |
|
|
|
05/19/2024 02:21:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.7120, 'learning_rate': 9.3341e-06, 'epoch': 7.15} |
|
|
|
05/19/2024 02:22:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.7623, 'learning_rate': 9.2426e-06, 'epoch': 7.17} |
|
|
|
05/19/2024 02:23:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.7891, 'learning_rate': 9.1515e-06, 'epoch': 7.18} |
|
|
|
05/19/2024 02:23:16 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2400 |
|
|
|
05/19/2024 02:23:16 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2400/tokenizer_config.json |
|
|
|
05/19/2024 02:23:16 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2400/special_tokens_map.json |
|
|
|
05/19/2024 02:23:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6479, 'learning_rate': 9.0608e-06, 'epoch': 7.20} |
|
|
|
05/19/2024 02:24:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.7534, 'learning_rate': 8.9704e-06, 'epoch': 7.21} |
|
|
|
05/19/2024 02:25:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6790, 'learning_rate': 8.8803e-06, 'epoch': 7.23} |
|
|
|
05/19/2024 02:26:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.6811, 'learning_rate': 8.7906e-06, 'epoch': 7.24} |
|
|
|
05/19/2024 02:26:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6518, 'learning_rate': 8.7013e-06, 'epoch': 7.26} |
|
|
|
05/19/2024 02:27:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.6485, 'learning_rate': 8.6123e-06, 'epoch': 7.27} |
|
|
|
05/19/2024 02:28:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6528, 'learning_rate': 8.5237e-06, 'epoch': 7.29} |
|
|
|
05/19/2024 02:29:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6834, 'learning_rate': 8.4355e-06, 'epoch': 7.30} |
|
|
|
05/19/2024 02:29:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6883, 'learning_rate': 8.3476e-06, 'epoch': 7.32} |
|
|
|
05/19/2024 02:30:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6906, 'learning_rate': 8.2601e-06, 'epoch': 7.33} |
|
|
|
05/19/2024 02:31:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.7172, 'learning_rate': 8.1729e-06, 'epoch': 7.35} |
|
|
|
05/19/2024 02:32:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6523, 'learning_rate': 8.0862e-06, 'epoch': 7.36} |
|
|
|
05/19/2024 02:32:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.7196, 'learning_rate': 7.9998e-06, 'epoch': 7.38} |
|
|
|
05/19/2024 02:33:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6475, 'learning_rate': 7.9138e-06, 'epoch': 7.39} |
|
|
|
05/19/2024 02:34:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6756, 'learning_rate': 7.8281e-06, 'epoch': 7.41} |
|
|
|
05/19/2024 02:34:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6886, 'learning_rate': 7.7429e-06, 'epoch': 7.42} |
|
|
|
05/19/2024 02:35:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6894, 'learning_rate': 7.6580e-06, 'epoch': 7.44} |
|
|
|
05/19/2024 02:36:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6653, 'learning_rate': 7.5735e-06, 'epoch': 7.45} |
|
|
|
05/19/2024 02:37:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.7941, 'learning_rate': 7.4894e-06, 'epoch': 7.47} |
|
|
|
05/19/2024 02:37:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.6890, 'learning_rate': 7.4057e-06, 'epoch': 7.48} |
|
|
|
05/19/2024 02:37:53 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2500 |
|
|
|
05/19/2024 02:37:53 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2500/tokenizer_config.json |
|
|
|
05/19/2024 02:37:53 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2500/special_tokens_map.json |
|
|
|
05/19/2024 02:38:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6626, 'learning_rate': 7.3223e-06, 'epoch': 7.50} |
|
|
|
05/19/2024 02:39:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.7061, 'learning_rate': 7.2394e-06, 'epoch': 7.51} |
|
|
|
05/19/2024 02:40:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6958, 'learning_rate': 7.1568e-06, 'epoch': 7.53} |
|
|
|
05/19/2024 02:40:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.7661, 'learning_rate': 7.0747e-06, 'epoch': 7.54} |
|
|
|
05/19/2024 02:41:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.6989, 'learning_rate': 6.9929e-06, 'epoch': 7.56} |
|
|
|
05/19/2024 02:42:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.7610, 'learning_rate': 6.9116e-06, 'epoch': 7.57} |
|
|
|
05/19/2024 02:43:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6834, 'learning_rate': 6.8306e-06, 'epoch': 7.59} |
|
|
|
05/19/2024 02:43:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6939, 'learning_rate': 6.7500e-06, 'epoch': 7.60} |
|
|
|
05/19/2024 02:44:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6789, 'learning_rate': 6.6699e-06, 'epoch': 7.62} |
|
|
|
05/19/2024 02:45:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6925, 'learning_rate': 6.5901e-06, 'epoch': 7.63} |
|
|
|
05/19/2024 02:45:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6714, 'learning_rate': 6.5108e-06, 'epoch': 7.65} |
|
|
|
05/19/2024 02:46:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6973, 'learning_rate': 6.4319e-06, 'epoch': 7.66} |
|
|
|
05/19/2024 02:47:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6669, 'learning_rate': 6.3534e-06, 'epoch': 7.68} |
|
|
|
05/19/2024 02:48:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.7157, 'learning_rate': 6.2752e-06, 'epoch': 7.69} |
|
|
|
05/19/2024 02:48:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.7474, 'learning_rate': 6.1975e-06, 'epoch': 7.71} |
|
|
|
05/19/2024 02:49:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6442, 'learning_rate': 6.1203e-06, 'epoch': 7.72} |
|
|
|
05/19/2024 02:50:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6729, 'learning_rate': 6.0434e-06, 'epoch': 7.74} |
|
|
|
05/19/2024 02:51:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.7085, 'learning_rate': 5.9670e-06, 'epoch': 7.75} |
|
|
|
05/19/2024 02:51:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.6983, 'learning_rate': 5.8909e-06, 'epoch': 7.77} |
|
|
|
05/19/2024 02:52:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.6758, 'learning_rate': 5.8153e-06, 'epoch': 7.78} |
|
|
|
05/19/2024 02:52:45 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2600 |
|
|
|
05/19/2024 02:52:45 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2600/tokenizer_config.json |
|
|
|
05/19/2024 02:52:45 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2600/special_tokens_map.json |
|
|
|
05/19/2024 02:53:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6605, 'learning_rate': 5.7402e-06, 'epoch': 7.80} |
|
|
|
05/19/2024 02:54:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6348, 'learning_rate': 5.6654e-06, 'epoch': 7.81} |
|
|
|
05/19/2024 02:54:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.7460, 'learning_rate': 5.5911e-06, 'epoch': 7.83} |
|
|
|
05/19/2024 02:55:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6511, 'learning_rate': 5.5172e-06, 'epoch': 7.84} |
|
|
|
05/19/2024 02:56:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6767, 'learning_rate': 5.4437e-06, 'epoch': 7.86} |
|
|
|
05/19/2024 02:57:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6918, 'learning_rate': 5.3707e-06, 'epoch': 7.87} |
|
|
|
05/19/2024 02:57:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.7017, 'learning_rate': 5.2981e-06, 'epoch': 7.89} |
|
|
|
05/19/2024 02:58:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.7181, 'learning_rate': 5.2260e-06, 'epoch': 7.90} |
|
|
|
05/19/2024 02:59:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.7121, 'learning_rate': 5.1542e-06, 'epoch': 7.92} |
|
|
|
05/19/2024 03:00:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6642, 'learning_rate': 5.0830e-06, 'epoch': 7.93} |
|
|
|
05/19/2024 03:00:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.7676, 'learning_rate': 5.0121e-06, 'epoch': 7.95} |
|
|
|
05/19/2024 03:01:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.6912, 'learning_rate': 4.9417e-06, 'epoch': 7.96} |
|
|
|
05/19/2024 03:02:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6582, 'learning_rate': 4.8718e-06, 'epoch': 7.98} |
|
|
|
05/19/2024 03:03:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6806, 'learning_rate': 4.8023e-06, 'epoch': 7.99} |
|
|
|
05/19/2024 03:03:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.6896, 'learning_rate': 4.7332e-06, 'epoch': 8.01} |
|
|
|
05/19/2024 03:04:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6491, 'learning_rate': 4.6646e-06, 'epoch': 8.02} |
|
|
|
05/19/2024 03:05:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.6457, 'learning_rate': 4.5964e-06, 'epoch': 8.04} |
|
|
|
05/19/2024 03:05:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.6730, 'learning_rate': 4.5287e-06, 'epoch': 8.05} |
|
|
|
05/19/2024 03:06:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6610, 'learning_rate': 4.4614e-06, 'epoch': 8.07} |
|
|
|
05/19/2024 03:07:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.7065, 'learning_rate': 4.3946e-06, 'epoch': 8.08} |
|
|
|
05/19/2024 03:07:25 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2700 |
|
|
|
05/19/2024 03:07:25 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2700/tokenizer_config.json |
|
|
|
05/19/2024 03:07:25 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2700/special_tokens_map.json |
|
|
|
05/19/2024 03:08:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.7074, 'learning_rate': 4.3283e-06, 'epoch': 8.10} |
|
|
|
05/19/2024 03:08:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.6660, 'learning_rate': 4.2624e-06, 'epoch': 8.11} |
|
|
|
05/19/2024 03:09:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6469, 'learning_rate': 4.1969e-06, 'epoch': 8.13} |
|
|
|
05/19/2024 03:10:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6642, 'learning_rate': 4.1320e-06, 'epoch': 8.14} |
|
|
|
05/19/2024 03:11:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.6602, 'learning_rate': 4.0675e-06, 'epoch': 8.16} |
|
|
|
05/19/2024 03:11:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6465, 'learning_rate': 4.0034e-06, 'epoch': 8.17} |
|
|
|
05/19/2024 03:12:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6823, 'learning_rate': 3.9398e-06, 'epoch': 8.19} |
|
|
|
05/19/2024 03:13:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.7163, 'learning_rate': 3.8767e-06, 'epoch': 8.20} |
|
|
|
05/19/2024 03:14:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.7558, 'learning_rate': 3.8140e-06, 'epoch': 8.22} |
|
|
|
05/19/2024 03:14:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.7451, 'learning_rate': 3.7519e-06, 'epoch': 8.23} |
|
|
|
05/19/2024 03:15:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6998, 'learning_rate': 3.6901e-06, 'epoch': 8.25} |
|
|
|
05/19/2024 03:16:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6885, 'learning_rate': 3.6289e-06, 'epoch': 8.26} |
|
|
|
05/19/2024 03:17:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6851, 'learning_rate': 3.5681e-06, 'epoch': 8.28} |
|
|
|
05/19/2024 03:17:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.7307, 'learning_rate': 3.5078e-06, 'epoch': 8.29} |
|
|
|
05/19/2024 03:18:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6878, 'learning_rate': 3.4480e-06, 'epoch': 8.31} |
|
|
|
05/19/2024 03:19:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6968, 'learning_rate': 3.3887e-06, 'epoch': 8.32} |
|
|
|
05/19/2024 03:19:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.7097, 'learning_rate': 3.3298e-06, 'epoch': 8.34} |
|
|
|
05/19/2024 03:20:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6899, 'learning_rate': 3.2714e-06, 'epoch': 8.35} |
|
|
|
05/19/2024 03:21:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.6110, 'learning_rate': 3.2135e-06, 'epoch': 8.37} |
|
|
|
05/19/2024 03:22:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.6673, 'learning_rate': 3.1561e-06, 'epoch': 8.38} |
|
|
|
05/19/2024 03:22:13 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2800 |
|
|
|
05/19/2024 03:22:13 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2800/tokenizer_config.json |
|
|
|
05/19/2024 03:22:13 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2800/special_tokens_map.json |
|
|
|
05/19/2024 03:22:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7485, 'learning_rate': 3.0991e-06, 'epoch': 8.40} |
|
|
|
05/19/2024 03:23:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6873, 'learning_rate': 3.0427e-06, 'epoch': 8.41} |
|
|
|
05/19/2024 03:24:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.7064, 'learning_rate': 2.9867e-06, 'epoch': 8.42} |
|
|
|
05/19/2024 03:25:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6871, 'learning_rate': 2.9312e-06, 'epoch': 8.44} |
|
|
|
05/19/2024 03:25:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.7363, 'learning_rate': 2.8762e-06, 'epoch': 8.45} |
|
|
|
05/19/2024 03:26:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.6719, 'learning_rate': 2.8217e-06, 'epoch': 8.47} |
|
|
|
05/19/2024 03:27:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6715, 'learning_rate': 2.7677e-06, 'epoch': 8.48} |
|
|
|
05/19/2024 03:28:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6697, 'learning_rate': 2.7142e-06, 'epoch': 8.50} |
|
|
|
05/19/2024 03:28:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.7199, 'learning_rate': 2.6611e-06, 'epoch': 8.51} |
|
|
|
05/19/2024 03:29:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.6810, 'learning_rate': 2.6086e-06, 'epoch': 8.53} |
|
|
|
05/19/2024 03:30:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.6461, 'learning_rate': 2.5566e-06, 'epoch': 8.54} |
|
|
|
05/19/2024 03:30:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7230, 'learning_rate': 2.5050e-06, 'epoch': 8.56} |
|
|
|
05/19/2024 03:31:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6481, 'learning_rate': 2.4540e-06, 'epoch': 8.57} |
|
|
|
05/19/2024 03:32:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.7229, 'learning_rate': 2.4034e-06, 'epoch': 8.59} |
|
|
|
05/19/2024 03:33:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6762, 'learning_rate': 2.3534e-06, 'epoch': 8.60} |
|
|
|
05/19/2024 03:33:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6379, 'learning_rate': 2.3038e-06, 'epoch': 8.62} |
|
|
|
05/19/2024 03:34:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6753, 'learning_rate': 2.2548e-06, 'epoch': 8.63} |
|
|
|
05/19/2024 03:35:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.7395, 'learning_rate': 2.2062e-06, 'epoch': 8.65} |
|
|
|
05/19/2024 03:36:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.7564, 'learning_rate': 2.1582e-06, 'epoch': 8.66} |
|
|
|
05/19/2024 03:36:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.6862, 'learning_rate': 2.1106e-06, 'epoch': 8.68} |
|
|
|
05/19/2024 03:36:53 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2900 |
|
|
|
05/19/2024 03:36:53 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2900/tokenizer_config.json |
|
|
|
05/19/2024 03:36:53 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-2900/special_tokens_map.json |
|
|
|
05/19/2024 03:37:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.7351, 'learning_rate': 2.0636e-06, 'epoch': 8.69} |
|
|
|
05/19/2024 03:38:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6748, 'learning_rate': 2.0171e-06, 'epoch': 8.71} |
|
|
|
05/19/2024 03:39:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6935, 'learning_rate': 1.9711e-06, 'epoch': 8.72} |
|
|
|
05/19/2024 03:39:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6629, 'learning_rate': 1.9256e-06, 'epoch': 8.74} |
|
|
|
05/19/2024 03:40:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6644, 'learning_rate': 1.8806e-06, 'epoch': 8.75} |
|
|
|
05/19/2024 03:41:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6469, 'learning_rate': 1.8361e-06, 'epoch': 8.77} |
|
|
|
05/19/2024 03:41:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6460, 'learning_rate': 1.7921e-06, 'epoch': 8.78} |
|
|
|
05/19/2024 03:42:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.7487, 'learning_rate': 1.7487e-06, 'epoch': 8.80} |
|
|
|
05/19/2024 03:43:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.7020, 'learning_rate': 1.7057e-06, 'epoch': 8.81} |
|
|
|
05/19/2024 03:44:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6807, 'learning_rate': 1.6633e-06, 'epoch': 8.83} |
|
|
|
05/19/2024 03:45:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6846, 'learning_rate': 1.6214e-06, 'epoch': 8.84} |
|
|
|
05/19/2024 03:45:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.7325, 'learning_rate': 1.5800e-06, 'epoch': 8.86} |
|
|
|
05/19/2024 03:46:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6899, 'learning_rate': 1.5391e-06, 'epoch': 8.87} |
|
|
|
05/19/2024 03:47:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6596, 'learning_rate': 1.4988e-06, 'epoch': 8.89} |
|
|
|
05/19/2024 03:47:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.7111, 'learning_rate': 1.4589e-06, 'epoch': 8.90} |
|
|
|
05/19/2024 03:48:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.7168, 'learning_rate': 1.4196e-06, 'epoch': 8.92} |
|
|
|
05/19/2024 03:49:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.7182, 'learning_rate': 1.3808e-06, 'epoch': 8.93} |
|
|
|
05/19/2024 03:50:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6455, 'learning_rate': 1.3425e-06, 'epoch': 8.95} |
|
|
|
05/19/2024 03:50:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.7042, 'learning_rate': 1.3048e-06, 'epoch': 8.96} |
|
|
|
05/19/2024 03:51:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6742, 'learning_rate': 1.2676e-06, 'epoch': 8.98} |
|
|
|
05/19/2024 03:51:33 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3000 |
|
|
|
05/19/2024 03:51:33 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3000/tokenizer_config.json |
|
|
|
05/19/2024 03:51:33 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3000/special_tokens_map.json |
|
|
|
05/19/2024 03:52:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.6838, 'learning_rate': 1.2309e-06, 'epoch': 8.99} |
|
|
|
05/19/2024 03:52:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.6411, 'learning_rate': 1.1947e-06, 'epoch': 9.01} |
|
|
|
05/19/2024 03:53:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.7136, 'learning_rate': 1.1590e-06, 'epoch': 9.02} |
|
|
|
05/19/2024 03:54:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.7436, 'learning_rate': 1.1239e-06, 'epoch': 9.04} |
|
|
|
05/19/2024 03:55:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.7149, 'learning_rate': 1.0893e-06, 'epoch': 9.05} |
|
|
|
05/19/2024 03:55:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.7207, 'learning_rate': 1.0553e-06, 'epoch': 9.07} |
|
|
|
05/19/2024 03:56:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6602, 'learning_rate': 1.0217e-06, 'epoch': 9.08} |
|
|
|
05/19/2024 03:57:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.7312, 'learning_rate': 9.8873e-07, 'epoch': 9.10} |
|
|
|
05/19/2024 03:58:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.7044, 'learning_rate': 9.5625e-07, 'epoch': 9.11} |
|
|
|
05/19/2024 03:58:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.7221, 'learning_rate': 9.2431e-07, 'epoch': 9.13} |
|
|
|
05/19/2024 03:59:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.6504, 'learning_rate': 8.9290e-07, 'epoch': 9.14} |
|
|
|
05/19/2024 04:00:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6788, 'learning_rate': 8.6203e-07, 'epoch': 9.16} |
|
|
|
05/19/2024 04:01:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6803, 'learning_rate': 8.3169e-07, 'epoch': 9.17} |
|
|
|
05/19/2024 04:01:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6904, 'learning_rate': 8.0188e-07, 'epoch': 9.19} |
|
|
|
05/19/2024 04:02:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.6777, 'learning_rate': 7.7261e-07, 'epoch': 9.20} |
|
|
|
05/19/2024 04:03:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.6850, 'learning_rate': 7.4387e-07, 'epoch': 9.22} |
|
|
|
05/19/2024 04:04:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.6712, 'learning_rate': 7.1567e-07, 'epoch': 9.23} |
|
|
|
05/19/2024 04:04:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.7124, 'learning_rate': 6.8801e-07, 'epoch': 9.25} |
|
|
|
05/19/2024 04:05:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.6788, 'learning_rate': 6.6089e-07, 'epoch': 9.26} |
|
|
|
05/19/2024 04:06:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.6900, 'learning_rate': 6.3430e-07, 'epoch': 9.28} |
|
|
|
05/19/2024 04:06:15 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3100 |
|
|
|
05/19/2024 04:06:15 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3100/tokenizer_config.json |
|
|
|
05/19/2024 04:06:15 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3100/special_tokens_map.json |
|
|
|
05/19/2024 04:06:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.7124, 'learning_rate': 6.0825e-07, 'epoch': 9.29} |
|
|
|
05/19/2024 04:07:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6702, 'learning_rate': 5.8274e-07, 'epoch': 9.31} |
|
|
|
05/19/2024 04:08:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6630, 'learning_rate': 5.5778e-07, 'epoch': 9.32} |
|
|
|
05/19/2024 04:09:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6927, 'learning_rate': 5.3335e-07, 'epoch': 9.34} |
|
|
|
05/19/2024 04:09:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.6629, 'learning_rate': 5.0946e-07, 'epoch': 9.35} |
|
|
|
05/19/2024 04:10:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.7342, 'learning_rate': 4.8612e-07, 'epoch': 9.37} |
|
|
|
05/19/2024 04:11:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.6597, 'learning_rate': 4.6332e-07, 'epoch': 9.38} |
|
|
|
05/19/2024 04:12:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.6654, 'learning_rate': 4.4106e-07, 'epoch': 9.40} |
|
|
|
05/19/2024 04:13:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.7134, 'learning_rate': 4.1934e-07, 'epoch': 9.41} |
|
|
|
05/19/2024 04:13:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.6154, 'learning_rate': 3.9817e-07, 'epoch': 9.43} |
|
|
|
05/19/2024 04:14:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6862, 'learning_rate': 3.7754e-07, 'epoch': 9.44} |
|
|
|
05/19/2024 04:15:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6778, 'learning_rate': 3.5746e-07, 'epoch': 9.46} |
|
|
|
05/19/2024 04:15:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.6891, 'learning_rate': 3.3792e-07, 'epoch': 9.47} |
|
|
|
05/19/2024 04:16:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.6576, 'learning_rate': 3.1893e-07, 'epoch': 9.49} |
|
|
|
05/19/2024 04:17:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.6533, 'learning_rate': 3.0048e-07, 'epoch': 9.50} |
|
|
|
05/19/2024 04:18:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.7639, 'learning_rate': 2.8258e-07, 'epoch': 9.52} |
|
|
|
05/19/2024 04:18:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.6856, 'learning_rate': 2.6522e-07, 'epoch': 9.53} |
|
|
|
05/19/2024 04:19:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6486, 'learning_rate': 2.4842e-07, 'epoch': 9.55} |
|
|
|
05/19/2024 04:20:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.6377, 'learning_rate': 2.3216e-07, 'epoch': 9.56} |
|
|
|
05/19/2024 04:20:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.6703, 'learning_rate': 2.1644e-07, 'epoch': 9.58} |
|
|
|
05/19/2024 04:20:58 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3200 |
|
|
|
05/19/2024 04:20:58 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3200/tokenizer_config.json |
|
|
|
05/19/2024 04:20:58 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3200/special_tokens_map.json |
|
|
|
05/19/2024 04:21:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.6743, 'learning_rate': 2.0128e-07, 'epoch': 9.59} |
|
|
|
05/19/2024 04:22:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.7117, 'learning_rate': 1.8666e-07, 'epoch': 9.61} |
|
|
|
05/19/2024 04:23:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6817, 'learning_rate': 1.7260e-07, 'epoch': 9.62} |
|
|
|
05/19/2024 04:23:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.7122, 'learning_rate': 1.5908e-07, 'epoch': 9.64} |
|
|
|
05/19/2024 04:24:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.6386, 'learning_rate': 1.4611e-07, 'epoch': 9.65} |
|
|
|
05/19/2024 04:25:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.6882, 'learning_rate': 1.3369e-07, 'epoch': 9.67} |
|
|
|
05/19/2024 04:26:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.6826, 'learning_rate': 1.2183e-07, 'epoch': 9.68} |
|
|
|
05/19/2024 04:26:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.7251, 'learning_rate': 1.1051e-07, 'epoch': 9.70} |
|
|
|
05/19/2024 04:27:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.7072, 'learning_rate': 9.9741e-08, 'epoch': 9.71} |
|
|
|
05/19/2024 04:28:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.6555, 'learning_rate': 8.9525e-08, 'epoch': 9.73} |
|
|
|
05/19/2024 04:29:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.6642, 'learning_rate': 7.9859e-08, 'epoch': 9.74} |
|
|
|
05/19/2024 04:29:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6987, 'learning_rate': 7.0744e-08, 'epoch': 9.76} |
|
|
|
05/19/2024 04:30:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.7699, 'learning_rate': 6.2181e-08, 'epoch': 9.77} |
|
|
|
05/19/2024 04:31:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.6654, 'learning_rate': 5.4170e-08, 'epoch': 9.79} |
|
|
|
05/19/2024 04:32:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.6681, 'learning_rate': 4.6710e-08, 'epoch': 9.80} |
|
|
|
05/19/2024 04:32:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.7033, 'learning_rate': 3.9802e-08, 'epoch': 9.82} |
|
|
|
05/19/2024 04:33:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.6899, 'learning_rate': 3.3446e-08, 'epoch': 9.83} |
|
|
|
05/19/2024 04:34:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.6837, 'learning_rate': 2.7642e-08, 'epoch': 9.85} |
|
|
|
05/19/2024 04:34:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.6578, 'learning_rate': 2.2391e-08, 'epoch': 9.86} |
|
|
|
05/19/2024 04:35:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.6546, 'learning_rate': 1.7692e-08, 'epoch': 9.88} |
|
|
|
05/19/2024 04:35:33 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3300 |
|
|
|
05/19/2024 04:35:33 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3300/tokenizer_config.json |
|
|
|
05/19/2024 04:35:33 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/checkpoint-3300/special_tokens_map.json |
|
|
|
05/19/2024 04:36:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.6437, 'learning_rate': 1.3546e-08, 'epoch': 9.89} |
|
|
|
05/19/2024 04:37:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.7260, 'learning_rate': 9.9525e-09, 'epoch': 9.91} |
|
|
|
05/19/2024 04:37:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.6458, 'learning_rate': 6.9116e-09, 'epoch': 9.92} |
|
|
|
05/19/2024 04:38:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.6936, 'learning_rate': 4.4235e-09, 'epoch': 9.94} |
|
|
|
05/19/2024 04:39:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.7048, 'learning_rate': 2.4882e-09, 'epoch': 9.95} |
|
|
|
05/19/2024 04:40:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.6899, 'learning_rate': 1.1059e-09, 'epoch': 9.97} |
|
|
|
05/19/2024 04:40:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.7091, 'learning_rate': 2.7648e-10, 'epoch': 9.98} |
|
|
|
05/19/2024 04:41:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.6954, 'learning_rate': 0.0000e+00, 'epoch': 10.00} |
|
|
|
05/19/2024 04:41:28 - INFO - transformers.trainer - |
|
|
|
Training completed. Do not forget to share your model on huggingface.co/models =) |
|
|
|
05/19/2024 04:41:28 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca |
|
|
|
05/19/2024 04:41:28 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/tokenizer_config.json |
|
|
|
05/19/2024 04:41:28 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca/special_tokens_map.json |
|
|
|
05/19/2024 04:41:28 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields: |
|
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}} |
|
|
|
|
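The run above ends with the final full-parameter model written to /datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca. A minimal sketch of loading that output for inference with the transformers API follows (an editor's illustration, not part of the run): the checkpoint path and bfloat16 dtype are taken from the log and the model config, while the prompt text and generation settings are assumptions for demonstration only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path copied from the "Saving model checkpoint" lines above.
ckpt = "/datas/wangm/LLM4LangGPT/output/Qwen1.5-4B-Chat/full_alpaca"

tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(
    ckpt, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical prompt, for illustration only.
messages = [{"role": "user", "content": "Write a structured role prompt for a travel planner."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))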