bloomz-7b1-sa-v0.1 / running_log.txt
05/30/2024 10:42:29 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json
05/30/2024 10:42:29 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json
05/30/2024 10:42:29 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json
05/30/2024 10:42:29 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json
05/30/2024 10:42:30 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/LangGPT_community.jsonl...
05/30/2024 10:42:30 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/30/2024 10:42:31 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_alpaca.jsonl...
05/30/2024 10:42:31 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
05/30/2024 10:42:32 - INFO - llmtuner.data.loader - Loading dataset /datas/wangm/LLM4LangGPT/constructed_datasets/langgpt_seed.jsonl...
05/30/2024 10:42:32 - WARNING - llmtuner.data.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
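
The three "Checksum failed" warnings above are benign: llmtuner only skips verification because dataset_info.json carries no SHA-1 entry for these files. A minimal sketch of how such an entry could be generated, assuming the LLaMA-Factory-style "file_sha1" field (the field name is an assumption, not confirmed by this log):

import hashlib
import json

def file_sha1(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-1 so large JSONL corpora fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

path = "langgpt_alpaca.jsonl"  # hypothetical local copy of one dataset
entry = {"langgpt_alpaca": {"file_name": path, "file_sha1": file_sha1(path)}}  # "file_sha1" assumed
print(json.dumps(entry, indent=2))
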
05/30/2024 10:42:50 - INFO - transformers.configuration_utils - loading configuration file /datas/huggingface/bloomz-7b1/config.json
05/30/2024 10:42:50 - INFO - transformers.configuration_utils - Model config BloomConfig {
"_name_or_path": "/datas/huggingface/bloomz-7b1",
"apply_residual_connection_post_layernorm": false,
"architectures": [
"BloomForCausalLM"
],
"attention_dropout": 0.0,
"attention_softmax_in_fp32": true,
"bias_dropout_fusion": true,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_dropout": 0.0,
"hidden_size": 4096,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"masked_softmax_fusion": true,
"model_type": "bloom",
"n_head": 32,
"n_inner": null,
"n_layer": 30,
"offset_alibi": 100,
"pad_token_id": 3,
"pretraining_tp": 4,
"seq_length": 2048,
"skip_bias_add": true,
"skip_bias_add_qkv": false,
"slow_but_exact": false,
"transformers_version": "4.40.2",
"unk_token_id": 0,
"use_cache": true,
"vocab_size": 250880
}
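
The BloomConfig dump above can be reproduced with the public transformers API; a minimal sketch using the path from this log:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("/datas/huggingface/bloomz-7b1")
# These attributes match the dump above: bloom, 30 layers, hidden 4096, vocab 250880.
print(config.model_type, config.n_layer, config.hidden_size, config.vocab_size)
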
05/30/2024 10:42:50 - INFO - transformers.modeling_utils - loading weights file /datas/huggingface/bloomz-7b1/pytorch_model.bin
05/30/2024 10:42:50 - INFO - transformers.modeling_utils - Instantiating BloomForCausalLM model under default dtype torch.float16.
05/30/2024 10:42:50 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
"bos_token_id": 1,
"eos_token_id": 2,
"pad_token_id": 3
}
05/30/2024 10:43:05 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing BloomForCausalLM.
05/30/2024 10:43:05 - INFO - transformers.modeling_utils - All the weights of BloomForCausalLM were initialized from the model checkpoint at /datas/huggingface/bloomz-7b1.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BloomForCausalLM for predictions without further training.
05/30/2024 10:43:05 - INFO - transformers.modeling_utils - Generation config file not found, using a generation config created from the model config.
05/30/2024 10:43:05 - INFO - llmtuner.model.utils.checkpointing - Gradient checkpointing enabled.
05/30/2024 10:43:05 - INFO - llmtuner.model.utils.attention - Using vanilla Attention implementation.
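
The loading sequence above (weights read from pytorch_model.bin, model instantiated under torch.float16, gradient checkpointing switched on, default attention kept) corresponds to roughly the following, a sketch rather than llmtuner's exact internals:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "/datas/huggingface/bloomz-7b1",
    torch_dtype=torch.float16,  # "default dtype torch.float16" in the log
)
model.gradient_checkpointing_enable()  # "Gradient checkpointing enabled."
# No flash-attention flag is passed, so the vanilla implementation is used.
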
05/30/2024 10:43:05 - INFO - llmtuner.model.adapter - Fine-tuning method: LoRA
05/30/2024 10:43:05 - INFO - llmtuner.model.loader - trainable params: 3932160 || all params: 7072948224 || trainable%: 0.0556
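
The trainable-parameter count is consistent with rank-8 LoRA applied only to BLOOM's fused query_key_value projection (llmtuner's default target module for BLOOM; the rank and target are inferred from the numbers, not stated in the log): per layer lora_A is 8x4096 and lora_B is 12288x8, so 30 layers x 8 x (4096 + 12288) = 3,932,160.

# Hedged arithmetic: back out the LoRA shape from the logged counts.
hidden, n_layer, rank = 4096, 30, 8           # rank 8 is an assumption
qkv_out = 3 * hidden                          # BLOOM fuses Q, K, V into one matmul
per_layer = rank * (hidden + qkv_out)         # lora_A (r x in) + lora_B (out x r)
trainable = n_layer * per_layer
assert trainable == 3_932_160                 # "trainable params" above
print(f"trainable% = {100 * trainable / 7_072_948_224:.4f}")  # -> 0.0556
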
05/30/2024 10:43:05 - INFO - transformers.trainer - Using auto half precision backend
05/30/2024 10:43:06 - INFO - transformers.trainer - ***** Running training *****
05/30/2024 10:43:06 - INFO - transformers.trainer - Num examples = 8,531
05/30/2024 10:43:06 - INFO - transformers.trainer - Num Epochs = 5
05/30/2024 10:43:06 - INFO - transformers.trainer - Instantaneous batch size per device = 2
05/30/2024 10:43:06 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
05/30/2024 10:43:06 - INFO - transformers.trainer - Gradient Accumulation steps = 8
05/30/2024 10:43:06 - INFO - transformers.trainer - Total optimization steps = 2,665
05/30/2024 10:43:06 - INFO - transformers.trainer - Number of trainable parameters = 3,932,160
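
The step bookkeeping above follows from the batch arithmetic, assuming a single device (the log never states the world size): effective batch 2 x 8 = 16, the 8,531 examples give ceil(8531/2) = 4,266 micro-batches per epoch, 4266 // 8 = 533 optimizer updates per epoch, and 533 x 5 = 2,665 total steps.

import math

num_examples, per_device, accum, epochs = 8_531, 2, 8, 5
total_batch = per_device * accum                      # 16, as logged
micro_batches = math.ceil(num_examples / per_device)  # dataloader length: 4266
steps_per_epoch = micro_batches // accum              # 533
assert steps_per_epoch * epochs == 2_665              # total optimization steps
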
05/30/2024 10:44:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.7027, 'learning_rate': 5.0000e-05, 'epoch': 0.01}
05/30/2024 10:45:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.6394, 'learning_rate': 4.9998e-05, 'epoch': 0.02}
05/30/2024 10:46:54 - INFO - llmtuner.extras.callbacks - {'loss': 1.5473, 'learning_rate': 4.9996e-05, 'epoch': 0.03}
05/30/2024 10:48:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.5474, 'learning_rate': 4.9993e-05, 'epoch': 0.04}
05/30/2024 10:49:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.4599, 'learning_rate': 4.9989e-05, 'epoch': 0.05}
05/30/2024 10:50:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.3353, 'learning_rate': 4.9984e-05, 'epoch': 0.06}
05/30/2024 10:51:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.3729, 'learning_rate': 4.9979e-05, 'epoch': 0.07}
05/30/2024 10:52:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.3635, 'learning_rate': 4.9972e-05, 'epoch': 0.08}
05/30/2024 10:54:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.3403, 'learning_rate': 4.9965e-05, 'epoch': 0.08}
05/30/2024 10:55:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.2804, 'learning_rate': 4.9957e-05, 'epoch': 0.09}
05/30/2024 10:56:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.2542, 'learning_rate': 4.9947e-05, 'epoch': 0.10}
05/30/2024 10:57:49 - INFO - llmtuner.extras.callbacks - {'loss': 1.2351, 'learning_rate': 4.9937e-05, 'epoch': 0.11}
05/30/2024 10:59:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.2151, 'learning_rate': 4.9927e-05, 'epoch': 0.12}
05/30/2024 11:00:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.1700, 'learning_rate': 4.9915e-05, 'epoch': 0.13}
05/30/2024 11:01:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.2411, 'learning_rate': 4.9902e-05, 'epoch': 0.14}
05/30/2024 11:02:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.1342, 'learning_rate': 4.9889e-05, 'epoch': 0.15}
05/30/2024 11:04:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.1589, 'learning_rate': 4.9875e-05, 'epoch': 0.16}
05/30/2024 11:05:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.1653, 'learning_rate': 4.9859e-05, 'epoch': 0.17}
05/30/2024 11:06:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.1219, 'learning_rate': 4.9843e-05, 'epoch': 0.18}
05/30/2024 11:07:39 - INFO - llmtuner.extras.callbacks - {'loss': 1.0841, 'learning_rate': 4.9826e-05, 'epoch': 0.19}
05/30/2024 11:07:39 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-100
05/30/2024 11:07:39 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-100/tokenizer_config.json
05/30/2024 11:07:39 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-100/special_tokens_map.json
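
The logged learning rates trace a cosine decay from 5e-5 over the 2,665 total steps with no apparent warmup; the scheduler type is inferred from the values, not stated in the log. A quick check against the entry just before checkpoint-100 (step 100, 4.9826e-05):

import math

def lr_at(step: int, base_lr: float = 5e-5, total_steps: int = 2_665) -> float:
    # Standard cosine decay to zero, assumed from the logged values.
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))

print(f"{lr_at(100):.4e}")  # -> 4.9826e-05, matching the step-100 entry
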
05/30/2024 11:08:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1562, 'learning_rate': 4.9809e-05, 'epoch': 0.20}
05/30/2024 11:10:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.1180, 'learning_rate': 4.9790e-05, 'epoch': 0.21}
05/30/2024 11:11:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.1179, 'learning_rate': 4.9771e-05, 'epoch': 0.22}
05/30/2024 11:12:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.1630, 'learning_rate': 4.9750e-05, 'epoch': 0.23}
05/30/2024 11:13:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.1339, 'learning_rate': 4.9729e-05, 'epoch': 0.23}
05/30/2024 11:15:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.1412, 'learning_rate': 4.9707e-05, 'epoch': 0.24}
05/30/2024 11:16:23 - INFO - llmtuner.extras.callbacks - {'loss': 1.1259, 'learning_rate': 4.9684e-05, 'epoch': 0.25}
05/30/2024 11:17:35 - INFO - llmtuner.extras.callbacks - {'loss': 1.0775, 'learning_rate': 4.9660e-05, 'epoch': 0.26}
05/30/2024 11:18:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.0885, 'learning_rate': 4.9636e-05, 'epoch': 0.27}
05/30/2024 11:19:59 - INFO - llmtuner.extras.callbacks - {'loss': 1.1239, 'learning_rate': 4.9610e-05, 'epoch': 0.28}
05/30/2024 11:21:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.0638, 'learning_rate': 4.9584e-05, 'epoch': 0.29}
05/30/2024 11:22:26 - INFO - llmtuner.extras.callbacks - {'loss': 1.0941, 'learning_rate': 4.9557e-05, 'epoch': 0.30}
05/30/2024 11:23:37 - INFO - llmtuner.extras.callbacks - {'loss': 1.1336, 'learning_rate': 4.9529e-05, 'epoch': 0.31}
05/30/2024 11:24:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0971, 'learning_rate': 4.9500e-05, 'epoch': 0.32}
05/30/2024 11:26:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1161, 'learning_rate': 4.9470e-05, 'epoch': 0.33}
05/30/2024 11:27:28 - INFO - llmtuner.extras.callbacks - {'loss': 1.0917, 'learning_rate': 4.9439e-05, 'epoch': 0.34}
05/30/2024 11:28:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.0964, 'learning_rate': 4.9408e-05, 'epoch': 0.35}
05/30/2024 11:29:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.0621, 'learning_rate': 4.9376e-05, 'epoch': 0.36}
05/30/2024 11:31:12 - INFO - llmtuner.extras.callbacks - {'loss': 1.1547, 'learning_rate': 4.9342e-05, 'epoch': 0.37}
05/30/2024 11:32:25 - INFO - llmtuner.extras.callbacks - {'loss': 1.1364, 'learning_rate': 4.9308e-05, 'epoch': 0.38}
05/30/2024 11:32:25 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-200
05/30/2024 11:32:25 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-200/tokenizer_config.json
05/30/2024 11:32:25 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-200/special_tokens_map.json
05/30/2024 11:33:48 - INFO - llmtuner.extras.callbacks - {'loss': 1.0841, 'learning_rate': 4.9274e-05, 'epoch': 0.38}
05/30/2024 11:35:01 - INFO - llmtuner.extras.callbacks - {'loss': 1.0684, 'learning_rate': 4.9238e-05, 'epoch': 0.39}
05/30/2024 11:36:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0385, 'learning_rate': 4.9201e-05, 'epoch': 0.40}
05/30/2024 11:37:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.0618, 'learning_rate': 4.9164e-05, 'epoch': 0.41}
05/30/2024 11:38:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.0139, 'learning_rate': 4.9126e-05, 'epoch': 0.42}
05/30/2024 11:40:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.0323, 'learning_rate': 4.9087e-05, 'epoch': 0.43}
05/30/2024 11:41:19 - INFO - llmtuner.extras.callbacks - {'loss': 1.1061, 'learning_rate': 4.9047e-05, 'epoch': 0.44}
05/30/2024 11:42:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.0454, 'learning_rate': 4.9006e-05, 'epoch': 0.45}
05/30/2024 11:43:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.0813, 'learning_rate': 4.8965e-05, 'epoch': 0.46}
05/30/2024 11:45:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.0461, 'learning_rate': 4.8922e-05, 'epoch': 0.47}
05/30/2024 11:46:18 - INFO - llmtuner.extras.callbacks - {'loss': 1.0858, 'learning_rate': 4.8879e-05, 'epoch': 0.48}
05/30/2024 11:47:29 - INFO - llmtuner.extras.callbacks - {'loss': 1.0054, 'learning_rate': 4.8835e-05, 'epoch': 0.49}
05/30/2024 11:48:41 - INFO - llmtuner.extras.callbacks - {'loss': 1.0296, 'learning_rate': 4.8790e-05, 'epoch': 0.50}
05/30/2024 11:49:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0272, 'learning_rate': 4.8744e-05, 'epoch': 0.51}
05/30/2024 11:51:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.9981, 'learning_rate': 4.8698e-05, 'epoch': 0.52}
05/30/2024 11:52:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.0496, 'learning_rate': 4.8650e-05, 'epoch': 0.53}
05/30/2024 11:53:34 - INFO - llmtuner.extras.callbacks - {'loss': 1.0160, 'learning_rate': 4.8602e-05, 'epoch': 0.53}
05/30/2024 11:54:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.0128, 'learning_rate': 4.8553e-05, 'epoch': 0.54}
05/30/2024 11:55:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.0705, 'learning_rate': 4.8503e-05, 'epoch': 0.55}
05/30/2024 11:57:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.0587, 'learning_rate': 4.8453e-05, 'epoch': 0.56}
05/30/2024 11:57:11 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-300
05/30/2024 11:57:11 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-300/tokenizer_config.json
05/30/2024 11:57:11 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-300/special_tokens_map.json
05/30/2024 11:58:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.0140, 'learning_rate': 4.8401e-05, 'epoch': 0.57}
05/30/2024 11:59:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.0169, 'learning_rate': 4.8349e-05, 'epoch': 0.58}
05/30/2024 12:00:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9951, 'learning_rate': 4.8296e-05, 'epoch': 0.59}
05/30/2024 12:02:02 - INFO - llmtuner.extras.callbacks - {'loss': 1.0865, 'learning_rate': 4.8242e-05, 'epoch': 0.60}
05/30/2024 12:03:15 - INFO - llmtuner.extras.callbacks - {'loss': 1.0631, 'learning_rate': 4.8188e-05, 'epoch': 0.61}
05/30/2024 12:04:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.9968, 'learning_rate': 4.8132e-05, 'epoch': 0.62}
05/30/2024 12:05:43 - INFO - llmtuner.extras.callbacks - {'loss': 1.0817, 'learning_rate': 4.8076e-05, 'epoch': 0.63}
05/30/2024 12:06:56 - INFO - llmtuner.extras.callbacks - {'loss': 1.0022, 'learning_rate': 4.8019e-05, 'epoch': 0.64}
05/30/2024 12:08:10 - INFO - llmtuner.extras.callbacks - {'loss': 1.0603, 'learning_rate': 4.7961e-05, 'epoch': 0.65}
05/30/2024 12:09:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.0975, 'learning_rate': 4.7902e-05, 'epoch': 0.66}
05/30/2024 12:10:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.0030, 'learning_rate': 4.7843e-05, 'epoch': 0.67}
05/30/2024 12:11:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0548, 'learning_rate': 4.7782e-05, 'epoch': 0.68}
05/30/2024 12:13:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.9659, 'learning_rate': 4.7721e-05, 'epoch': 0.68}
05/30/2024 12:14:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.9866, 'learning_rate': 4.7659e-05, 'epoch': 0.69}
05/30/2024 12:15:33 - INFO - llmtuner.extras.callbacks - {'loss': 1.0477, 'learning_rate': 4.7597e-05, 'epoch': 0.70}
05/30/2024 12:16:47 - INFO - llmtuner.extras.callbacks - {'loss': 1.0367, 'learning_rate': 4.7533e-05, 'epoch': 0.71}
05/30/2024 12:18:00 - INFO - llmtuner.extras.callbacks - {'loss': 1.0003, 'learning_rate': 4.7469e-05, 'epoch': 0.72}
05/30/2024 12:19:11 - INFO - llmtuner.extras.callbacks - {'loss': 1.0319, 'learning_rate': 4.7404e-05, 'epoch': 0.73}
05/30/2024 12:20:22 - INFO - llmtuner.extras.callbacks - {'loss': 1.0411, 'learning_rate': 4.7338e-05, 'epoch': 0.74}
05/30/2024 12:21:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.0621, 'learning_rate': 4.7272e-05, 'epoch': 0.75}
05/30/2024 12:21:36 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-400
05/30/2024 12:21:36 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-400/tokenizer_config.json
05/30/2024 12:21:36 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-400/special_tokens_map.json
05/30/2024 12:22:51 - INFO - llmtuner.extras.callbacks - {'loss': 1.0058, 'learning_rate': 4.7204e-05, 'epoch': 0.76}
05/30/2024 12:24:06 - INFO - llmtuner.extras.callbacks - {'loss': 1.0058, 'learning_rate': 4.7136e-05, 'epoch': 0.77}
05/30/2024 12:25:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.0262, 'learning_rate': 4.7068e-05, 'epoch': 0.78}
05/30/2024 12:26:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.0481, 'learning_rate': 4.6998e-05, 'epoch': 0.79}
05/30/2024 12:27:57 - INFO - llmtuner.extras.callbacks - {'loss': 1.0294, 'learning_rate': 4.6928e-05, 'epoch': 0.80}
05/30/2024 12:29:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.9968, 'learning_rate': 4.6856e-05, 'epoch': 0.81}
05/30/2024 12:30:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.9212, 'learning_rate': 4.6784e-05, 'epoch': 0.82}
05/30/2024 12:31:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.9670, 'learning_rate': 4.6712e-05, 'epoch': 0.83}
05/30/2024 12:32:53 - INFO - llmtuner.extras.callbacks - {'loss': 1.0146, 'learning_rate': 4.6638e-05, 'epoch': 0.83}
05/30/2024 12:34:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.9864, 'learning_rate': 4.6564e-05, 'epoch': 0.84}
05/30/2024 12:35:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.9832, 'learning_rate': 4.6489e-05, 'epoch': 0.85}
05/30/2024 12:36:31 - INFO - llmtuner.extras.callbacks - {'loss': 1.0270, 'learning_rate': 4.6414e-05, 'epoch': 0.86}
05/30/2024 12:37:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.9880, 'learning_rate': 4.6337e-05, 'epoch': 0.87}
05/30/2024 12:38:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.0056, 'learning_rate': 4.6260e-05, 'epoch': 0.88}
05/30/2024 12:40:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.0289, 'learning_rate': 4.6182e-05, 'epoch': 0.89}
05/30/2024 12:41:21 - INFO - llmtuner.extras.callbacks - {'loss': 1.0112, 'learning_rate': 4.6103e-05, 'epoch': 0.90}
05/30/2024 12:42:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.9772, 'learning_rate': 4.6024e-05, 'epoch': 0.91}
05/30/2024 12:43:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.9950, 'learning_rate': 4.5944e-05, 'epoch': 0.92}
05/30/2024 12:45:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.1026, 'learning_rate': 4.5863e-05, 'epoch': 0.93}
05/30/2024 12:46:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9985, 'learning_rate': 4.5782e-05, 'epoch': 0.94}
05/30/2024 12:46:14 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-500
05/30/2024 12:46:14 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-500/tokenizer_config.json
05/30/2024 12:46:14 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-500/special_tokens_map.json
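
Each checkpoint-N directory written here holds the LoRA adapter weights alongside the tokenizer files, so an intermediate checkpoint can be evaluated before the run finishes. A minimal sketch, assuming the standard PEFT adapter layout that llmtuner saves:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "/datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-500"
base = AutoModelForCausalLM.from_pretrained(
    "/datas/huggingface/bloomz-7b1", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, ckpt)  # attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(ckpt)
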
05/30/2024 12:47:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.0109, 'learning_rate': 4.5699e-05, 'epoch': 0.95}
05/30/2024 12:48:45 - INFO - llmtuner.extras.callbacks - {'loss': 1.0503, 'learning_rate': 4.5616e-05, 'epoch': 0.96}
05/30/2024 12:50:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.9864, 'learning_rate': 4.5533e-05, 'epoch': 0.97}
05/30/2024 12:51:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.9893, 'learning_rate': 4.5448e-05, 'epoch': 0.98}
05/30/2024 12:52:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0072, 'learning_rate': 4.5363e-05, 'epoch': 0.98}
05/30/2024 12:53:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.9989, 'learning_rate': 4.5277e-05, 'epoch': 0.99}
05/30/2024 12:55:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.9566, 'learning_rate': 4.5191e-05, 'epoch': 1.00}
05/30/2024 12:56:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9350, 'learning_rate': 4.5103e-05, 'epoch': 1.01}
05/30/2024 12:57:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.9329, 'learning_rate': 4.5016e-05, 'epoch': 1.02}
05/30/2024 12:59:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.9975, 'learning_rate': 4.4927e-05, 'epoch': 1.03}
05/30/2024 13:00:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.9970, 'learning_rate': 4.4838e-05, 'epoch': 1.04}
05/30/2024 13:01:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.9210, 'learning_rate': 4.4748e-05, 'epoch': 1.05}
05/30/2024 13:02:52 - INFO - llmtuner.extras.callbacks - {'loss': 1.0122, 'learning_rate': 4.4657e-05, 'epoch': 1.06}
05/30/2024 13:04:04 - INFO - llmtuner.extras.callbacks - {'loss': 1.0006, 'learning_rate': 4.4565e-05, 'epoch': 1.07}
05/30/2024 13:05:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9699, 'learning_rate': 4.4473e-05, 'epoch': 1.08}
05/30/2024 13:06:24 - INFO - llmtuner.extras.callbacks - {'loss': 1.0238, 'learning_rate': 4.4381e-05, 'epoch': 1.09}
05/30/2024 13:07:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.9836, 'learning_rate': 4.4287e-05, 'epoch': 1.10}
05/30/2024 13:08:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9578, 'learning_rate': 4.4193e-05, 'epoch': 1.11}
05/30/2024 13:10:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.9906, 'learning_rate': 4.4098e-05, 'epoch': 1.12}
05/30/2024 13:11:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.9712, 'learning_rate': 4.4003e-05, 'epoch': 1.13}
05/30/2024 13:11:26 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-600
05/30/2024 13:11:26 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-600/tokenizer_config.json
05/30/2024 13:11:26 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-600/special_tokens_map.json
05/30/2024 13:12:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.9854, 'learning_rate': 4.3907e-05, 'epoch': 1.13}
05/30/2024 13:13:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.9407, 'learning_rate': 4.3810e-05, 'epoch': 1.14}
05/30/2024 13:15:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.9269, 'learning_rate': 4.3713e-05, 'epoch': 1.15}
05/30/2024 13:16:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.9230, 'learning_rate': 4.3615e-05, 'epoch': 1.16}
05/30/2024 13:17:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.9533, 'learning_rate': 4.3516e-05, 'epoch': 1.17}
05/30/2024 13:18:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.9878, 'learning_rate': 4.3417e-05, 'epoch': 1.18}
05/30/2024 13:20:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.9606, 'learning_rate': 4.3317e-05, 'epoch': 1.19}
05/30/2024 13:21:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.9418, 'learning_rate': 4.3216e-05, 'epoch': 1.20}
05/30/2024 13:22:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.9300, 'learning_rate': 4.3115e-05, 'epoch': 1.21}
05/30/2024 13:23:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.9960, 'learning_rate': 4.3013e-05, 'epoch': 1.22}
05/30/2024 13:25:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.9998, 'learning_rate': 4.2911e-05, 'epoch': 1.23}
05/30/2024 13:26:13 - INFO - llmtuner.extras.callbacks - {'loss': 1.0008, 'learning_rate': 4.2807e-05, 'epoch': 1.24}
05/30/2024 13:27:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.9660, 'learning_rate': 4.2704e-05, 'epoch': 1.25}
05/30/2024 13:28:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0695, 'learning_rate': 4.2599e-05, 'epoch': 1.26}
05/30/2024 13:29:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.0130, 'learning_rate': 4.2494e-05, 'epoch': 1.27}
05/30/2024 13:31:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.9362, 'learning_rate': 4.2389e-05, 'epoch': 1.28}
05/30/2024 13:32:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.9940, 'learning_rate': 4.2283e-05, 'epoch': 1.28}
05/30/2024 13:33:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.9997, 'learning_rate': 4.2176e-05, 'epoch': 1.29}
05/30/2024 13:34:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.9748, 'learning_rate': 4.2069e-05, 'epoch': 1.30}
05/30/2024 13:36:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.9516, 'learning_rate': 4.1961e-05, 'epoch': 1.31}
05/30/2024 13:36:02 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-700
05/30/2024 13:36:02 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-700/tokenizer_config.json
05/30/2024 13:36:02 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-700/special_tokens_map.json
05/30/2024 13:37:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.9625, 'learning_rate': 4.1852e-05, 'epoch': 1.32}
05/30/2024 13:38:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.9782, 'learning_rate': 4.1743e-05, 'epoch': 1.33}
05/30/2024 13:39:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.9163, 'learning_rate': 4.1633e-05, 'epoch': 1.34}
05/30/2024 13:40:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.9832, 'learning_rate': 4.1523e-05, 'epoch': 1.35}
05/30/2024 13:42:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.9748, 'learning_rate': 4.1412e-05, 'epoch': 1.36}
05/30/2024 13:43:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.9930, 'learning_rate': 4.1301e-05, 'epoch': 1.37}
05/30/2024 13:44:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.9433, 'learning_rate': 4.1189e-05, 'epoch': 1.38}
05/30/2024 13:45:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.9330, 'learning_rate': 4.1076e-05, 'epoch': 1.39}
05/30/2024 13:46:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.9937, 'learning_rate': 4.0963e-05, 'epoch': 1.40}
05/30/2024 13:48:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.9602, 'learning_rate': 4.0849e-05, 'epoch': 1.41}
05/30/2024 13:49:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.9654, 'learning_rate': 4.0735e-05, 'epoch': 1.42}
05/30/2024 13:50:40 - INFO - llmtuner.extras.callbacks - {'loss': 1.0112, 'learning_rate': 4.0620e-05, 'epoch': 1.43}
05/30/2024 13:51:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.9699, 'learning_rate': 4.0505e-05, 'epoch': 1.43}
05/30/2024 13:53:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.9482, 'learning_rate': 4.0389e-05, 'epoch': 1.44}
05/30/2024 13:54:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.9445, 'learning_rate': 4.0273e-05, 'epoch': 1.45}
05/30/2024 13:55:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.9534, 'learning_rate': 4.0156e-05, 'epoch': 1.46}
05/30/2024 13:56:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.9643, 'learning_rate': 4.0038e-05, 'epoch': 1.47}
05/30/2024 13:58:03 - INFO - llmtuner.extras.callbacks - {'loss': 1.0389, 'learning_rate': 3.9920e-05, 'epoch': 1.48}
05/30/2024 13:59:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.9738, 'learning_rate': 3.9802e-05, 'epoch': 1.49}
05/30/2024 14:00:27 - INFO - llmtuner.extras.callbacks - {'loss': 1.0068, 'learning_rate': 3.9683e-05, 'epoch': 1.50}
05/30/2024 14:00:27 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-800
05/30/2024 14:00:27 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-800/tokenizer_config.json
05/30/2024 14:00:27 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-800/special_tokens_map.json
05/30/2024 14:01:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.9412, 'learning_rate': 3.9563e-05, 'epoch': 1.51}
05/30/2024 14:02:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.9319, 'learning_rate': 3.9443e-05, 'epoch': 1.52}
05/30/2024 14:03:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.9222, 'learning_rate': 3.9323e-05, 'epoch': 1.53}
05/30/2024 14:05:16 - INFO - llmtuner.extras.callbacks - {'loss': 1.0032, 'learning_rate': 3.9202e-05, 'epoch': 1.54}
05/30/2024 14:06:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.9756, 'learning_rate': 3.9080e-05, 'epoch': 1.55}
05/30/2024 14:07:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.9822, 'learning_rate': 3.8958e-05, 'epoch': 1.56}
05/30/2024 14:08:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.8832, 'learning_rate': 3.8836e-05, 'epoch': 1.57}
05/30/2024 14:10:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.9586, 'learning_rate': 3.8713e-05, 'epoch': 1.58}
05/30/2024 14:11:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.9309, 'learning_rate': 3.8589e-05, 'epoch': 1.58}
05/30/2024 14:12:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.9778, 'learning_rate': 3.8465e-05, 'epoch': 1.59}
05/30/2024 14:13:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.9883, 'learning_rate': 3.8341e-05, 'epoch': 1.60}
05/30/2024 14:15:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.9687, 'learning_rate': 3.8216e-05, 'epoch': 1.61}
05/30/2024 14:16:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.9135, 'learning_rate': 3.8091e-05, 'epoch': 1.62}
05/30/2024 14:17:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.9763, 'learning_rate': 3.7965e-05, 'epoch': 1.63}
05/30/2024 14:18:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.9217, 'learning_rate': 3.7839e-05, 'epoch': 1.64}
05/30/2024 14:19:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.9196, 'learning_rate': 3.7712e-05, 'epoch': 1.65}
05/30/2024 14:21:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.9939, 'learning_rate': 3.7585e-05, 'epoch': 1.66}
05/30/2024 14:22:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.9145, 'learning_rate': 3.7457e-05, 'epoch': 1.67}
05/30/2024 14:23:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.9603, 'learning_rate': 3.7329e-05, 'epoch': 1.68}
05/30/2024 14:24:55 - INFO - llmtuner.extras.callbacks - {'loss': 1.0102, 'learning_rate': 3.7201e-05, 'epoch': 1.69}
05/30/2024 14:24:55 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-900
05/30/2024 14:24:55 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-900/tokenizer_config.json
05/30/2024 14:24:55 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-900/special_tokens_map.json
05/30/2024 14:26:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.9842, 'learning_rate': 3.7072e-05, 'epoch': 1.70}
05/30/2024 14:27:30 - INFO - llmtuner.extras.callbacks - {'loss': 1.0078, 'learning_rate': 3.6943e-05, 'epoch': 1.71}
05/30/2024 14:28:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.9644, 'learning_rate': 3.6813e-05, 'epoch': 1.72}
05/30/2024 14:30:07 - INFO - llmtuner.extras.callbacks - {'loss': 1.0845, 'learning_rate': 3.6683e-05, 'epoch': 1.73}
05/30/2024 14:31:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.9390, 'learning_rate': 3.6553e-05, 'epoch': 1.73}
05/30/2024 14:32:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.9760, 'learning_rate': 3.6422e-05, 'epoch': 1.74}
05/30/2024 14:33:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.9645, 'learning_rate': 3.6291e-05, 'epoch': 1.75}
05/30/2024 14:34:58 - INFO - llmtuner.extras.callbacks - {'loss': 1.0190, 'learning_rate': 3.6159e-05, 'epoch': 1.76}
05/30/2024 14:36:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.9370, 'learning_rate': 3.6027e-05, 'epoch': 1.77}
05/30/2024 14:37:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.9758, 'learning_rate': 3.5894e-05, 'epoch': 1.78}
05/30/2024 14:38:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.9291, 'learning_rate': 3.5762e-05, 'epoch': 1.79}
05/30/2024 14:40:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.0252, 'learning_rate': 3.5628e-05, 'epoch': 1.80}
05/30/2024 14:41:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.9532, 'learning_rate': 3.5495e-05, 'epoch': 1.81}
05/30/2024 14:42:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.9651, 'learning_rate': 3.5361e-05, 'epoch': 1.82}
05/30/2024 14:43:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9145, 'learning_rate': 3.5227e-05, 'epoch': 1.83}
05/30/2024 14:45:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.9166, 'learning_rate': 3.5092e-05, 'epoch': 1.84}
05/30/2024 14:46:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.9265, 'learning_rate': 3.4957e-05, 'epoch': 1.85}
05/30/2024 14:47:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.9177, 'learning_rate': 3.4822e-05, 'epoch': 1.86}
05/30/2024 14:48:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9404, 'learning_rate': 3.4686e-05, 'epoch': 1.87}
05/30/2024 14:49:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.9299, 'learning_rate': 3.4550e-05, 'epoch': 1.88}
05/30/2024 14:49:59 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1000
05/30/2024 14:49:59 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1000/tokenizer_config.json
05/30/2024 14:49:59 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1000/special_tokens_map.json
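
For inspecting the run, the callback lines in this file are machine-readable Python dict literals; a small parser (a sketch, assuming the file is read as plain text) can recover the loss curve:

import ast
import re

PATTERN = re.compile(r" - llmtuner\.extras\.callbacks - (\{.*\})$")

def parse_log(path: str) -> list[dict]:
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = PATTERN.search(line.rstrip())
            if m:
                records.append(ast.literal_eval(m.group(1)))
    return records

# usage: parse_log("running_log.txt")[0] -> {'loss': 1.7027, 'learning_rate': 5e-05, 'epoch': 0.01}
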
05/30/2024 14:51:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.8892, 'learning_rate': 3.4414e-05, 'epoch': 1.88}
05/30/2024 14:52:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.9277, 'learning_rate': 3.4277e-05, 'epoch': 1.89}
05/30/2024 14:53:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.9632, 'learning_rate': 3.4140e-05, 'epoch': 1.90}
05/30/2024 14:54:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9616, 'learning_rate': 3.4003e-05, 'epoch': 1.91}
05/30/2024 14:56:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.9027, 'learning_rate': 3.3865e-05, 'epoch': 1.92}
05/30/2024 14:57:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9408, 'learning_rate': 3.3727e-05, 'epoch': 1.93}
05/30/2024 14:58:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.9275, 'learning_rate': 3.3589e-05, 'epoch': 1.94}
05/30/2024 14:59:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9469, 'learning_rate': 3.3450e-05, 'epoch': 1.95}
05/30/2024 15:00:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.9531, 'learning_rate': 3.3312e-05, 'epoch': 1.96}
05/30/2024 15:02:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.9059, 'learning_rate': 3.3172e-05, 'epoch': 1.97}
05/30/2024 15:03:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.9192, 'learning_rate': 3.3033e-05, 'epoch': 1.98}
05/30/2024 15:04:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.9132, 'learning_rate': 3.2893e-05, 'epoch': 1.99}
05/30/2024 15:05:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.9327, 'learning_rate': 3.2753e-05, 'epoch': 2.00}
05/30/2024 15:07:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.8606, 'learning_rate': 3.2613e-05, 'epoch': 2.01}
05/30/2024 15:08:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.8882, 'learning_rate': 3.2473e-05, 'epoch': 2.02}
05/30/2024 15:09:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9651, 'learning_rate': 3.2332e-05, 'epoch': 2.03}
05/30/2024 15:11:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.9254, 'learning_rate': 3.2191e-05, 'epoch': 2.03}
05/30/2024 15:12:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.9382, 'learning_rate': 3.2050e-05, 'epoch': 2.04}
05/30/2024 15:13:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.9736, 'learning_rate': 3.1908e-05, 'epoch': 2.05}
05/30/2024 15:14:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.9202, 'learning_rate': 3.1767e-05, 'epoch': 2.06}
05/30/2024 15:14:46 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1100
05/30/2024 15:14:46 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1100/tokenizer_config.json
05/30/2024 15:14:46 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1100/special_tokens_map.json
05/30/2024 15:16:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.8651, 'learning_rate': 3.1625e-05, 'epoch': 2.07}
05/30/2024 15:17:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.8944, 'learning_rate': 3.1482e-05, 'epoch': 2.08}
05/30/2024 15:18:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.9092, 'learning_rate': 3.1340e-05, 'epoch': 2.09}
05/30/2024 15:19:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9258, 'learning_rate': 3.1197e-05, 'epoch': 2.10}
05/30/2024 15:20:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.9232, 'learning_rate': 3.1054e-05, 'epoch': 2.11}
05/30/2024 15:22:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.9486, 'learning_rate': 3.0911e-05, 'epoch': 2.12}
05/30/2024 15:23:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.9340, 'learning_rate': 3.0768e-05, 'epoch': 2.13}
05/30/2024 15:24:38 - INFO - llmtuner.extras.callbacks - {'loss': 1.0254, 'learning_rate': 3.0625e-05, 'epoch': 2.14}
05/30/2024 15:25:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.8991, 'learning_rate': 3.0481e-05, 'epoch': 2.15}
05/30/2024 15:27:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.9025, 'learning_rate': 3.0337e-05, 'epoch': 2.16}
05/30/2024 15:28:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.9131, 'learning_rate': 3.0193e-05, 'epoch': 2.17}
05/30/2024 15:29:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.9235, 'learning_rate': 3.0049e-05, 'epoch': 2.18}
05/30/2024 15:30:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.9250, 'learning_rate': 2.9904e-05, 'epoch': 2.18}
05/30/2024 15:32:05 - INFO - llmtuner.extras.callbacks - {'loss': 1.0009, 'learning_rate': 2.9760e-05, 'epoch': 2.19}
05/30/2024 15:33:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.9432, 'learning_rate': 2.9615e-05, 'epoch': 2.20}
05/30/2024 15:34:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.9375, 'learning_rate': 2.9470e-05, 'epoch': 2.21}
05/30/2024 15:35:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.9316, 'learning_rate': 2.9325e-05, 'epoch': 2.22}
05/30/2024 15:37:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.8937, 'learning_rate': 2.9180e-05, 'epoch': 2.23}
05/30/2024 15:38:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.8951, 'learning_rate': 2.9035e-05, 'epoch': 2.24}
05/30/2024 15:39:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.9487, 'learning_rate': 2.8889e-05, 'epoch': 2.25}
05/30/2024 15:39:38 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1200
05/30/2024 15:39:38 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1200/tokenizer_config.json
05/30/2024 15:39:38 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1200/special_tokens_map.json
05/30/2024 15:40:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.9214, 'learning_rate': 2.8743e-05, 'epoch': 2.26}
05/30/2024 15:42:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.9936, 'learning_rate': 2.8598e-05, 'epoch': 2.27}
05/30/2024 15:43:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.9019, 'learning_rate': 2.8452e-05, 'epoch': 2.28}
05/30/2024 15:44:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.8895, 'learning_rate': 2.8306e-05, 'epoch': 2.29}
05/30/2024 15:45:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.9255, 'learning_rate': 2.8160e-05, 'epoch': 2.30}
05/30/2024 15:47:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.8846, 'learning_rate': 2.8013e-05, 'epoch': 2.31}
05/30/2024 15:48:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.9341, 'learning_rate': 2.7867e-05, 'epoch': 2.32}
05/30/2024 15:49:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.9270, 'learning_rate': 2.7721e-05, 'epoch': 2.33}
05/30/2024 15:50:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9358, 'learning_rate': 2.7574e-05, 'epoch': 2.33}
05/30/2024 15:51:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.9501, 'learning_rate': 2.7428e-05, 'epoch': 2.34}
05/30/2024 15:53:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.9466, 'learning_rate': 2.7281e-05, 'epoch': 2.35}
05/30/2024 15:54:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.9051, 'learning_rate': 2.7134e-05, 'epoch': 2.36}
05/30/2024 15:55:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.9231, 'learning_rate': 2.6987e-05, 'epoch': 2.37}
05/30/2024 15:56:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.9898, 'learning_rate': 2.6840e-05, 'epoch': 2.38}
05/30/2024 15:58:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.8959, 'learning_rate': 2.6693e-05, 'epoch': 2.39}
05/30/2024 15:59:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.9697, 'learning_rate': 2.6546e-05, 'epoch': 2.40}
05/30/2024 16:00:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.9519, 'learning_rate': 2.6399e-05, 'epoch': 2.41}
05/30/2024 16:01:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.9182, 'learning_rate': 2.6252e-05, 'epoch': 2.42}
05/30/2024 16:03:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.8680, 'learning_rate': 2.6105e-05, 'epoch': 2.43}
05/30/2024 16:04:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.9526, 'learning_rate': 2.5958e-05, 'epoch': 2.44}
05/30/2024 16:04:18 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1300
05/30/2024 16:04:18 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1300/tokenizer_config.json
05/30/2024 16:04:18 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1300/special_tokens_map.json
05/30/2024 16:05:36 - INFO - llmtuner.extras.callbacks - {'loss': 1.0140, 'learning_rate': 2.5810e-05, 'epoch': 2.45}
05/30/2024 16:06:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.9385, 'learning_rate': 2.5663e-05, 'epoch': 2.46}
05/30/2024 16:08:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.9132, 'learning_rate': 2.5516e-05, 'epoch': 2.47}
05/30/2024 16:09:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.8749, 'learning_rate': 2.5368e-05, 'epoch': 2.48}
05/30/2024 16:10:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.9267, 'learning_rate': 2.5221e-05, 'epoch': 2.48}
05/30/2024 16:11:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9000, 'learning_rate': 2.5074e-05, 'epoch': 2.49}
05/30/2024 16:12:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.8971, 'learning_rate': 2.4926e-05, 'epoch': 2.50}
05/30/2024 16:14:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.9519, 'learning_rate': 2.4779e-05, 'epoch': 2.51}
05/30/2024 16:15:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.8930, 'learning_rate': 2.4632e-05, 'epoch': 2.52}
05/30/2024 16:16:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.8910, 'learning_rate': 2.4484e-05, 'epoch': 2.53}
05/30/2024 16:17:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9476, 'learning_rate': 2.4337e-05, 'epoch': 2.54}
05/30/2024 16:19:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.9589, 'learning_rate': 2.4190e-05, 'epoch': 2.55}
05/30/2024 16:20:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.8797, 'learning_rate': 2.4042e-05, 'epoch': 2.56}
05/30/2024 16:21:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.8877, 'learning_rate': 2.3895e-05, 'epoch': 2.57}
05/30/2024 16:22:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.9501, 'learning_rate': 2.3748e-05, 'epoch': 2.58}
05/30/2024 16:24:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.9101, 'learning_rate': 2.3601e-05, 'epoch': 2.59}
05/30/2024 16:25:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.8970, 'learning_rate': 2.3454e-05, 'epoch': 2.60}
05/30/2024 16:26:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.8630, 'learning_rate': 2.3307e-05, 'epoch': 2.61}
05/30/2024 16:27:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.9754, 'learning_rate': 2.3160e-05, 'epoch': 2.62}
05/30/2024 16:29:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.9069, 'learning_rate': 2.3013e-05, 'epoch': 2.63}
05/30/2024 16:29:04 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1400
05/30/2024 16:29:04 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1400/tokenizer_config.json
05/30/2024 16:29:04 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1400/special_tokens_map.json
05/30/2024 16:30:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.9034, 'learning_rate': 2.2866e-05, 'epoch': 2.63}
05/30/2024 16:31:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.8772, 'learning_rate': 2.2719e-05, 'epoch': 2.64}
05/30/2024 16:32:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.8777, 'learning_rate': 2.2572e-05, 'epoch': 2.65}
05/30/2024 16:33:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9086, 'learning_rate': 2.2426e-05, 'epoch': 2.66}
05/30/2024 16:35:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.9158, 'learning_rate': 2.2279e-05, 'epoch': 2.67}
05/30/2024 16:36:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9377, 'learning_rate': 2.2133e-05, 'epoch': 2.68}
05/30/2024 16:37:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.9212, 'learning_rate': 2.1987e-05, 'epoch': 2.69}
05/30/2024 16:38:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.9168, 'learning_rate': 2.1840e-05, 'epoch': 2.70}
05/30/2024 16:39:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.9544, 'learning_rate': 2.1694e-05, 'epoch': 2.71}
05/30/2024 16:41:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.9239, 'learning_rate': 2.1548e-05, 'epoch': 2.72}
05/30/2024 16:42:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.9053, 'learning_rate': 2.1402e-05, 'epoch': 2.73}
05/30/2024 16:43:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.9093, 'learning_rate': 2.1257e-05, 'epoch': 2.74}
05/30/2024 16:44:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.9245, 'learning_rate': 2.1111e-05, 'epoch': 2.75}
05/30/2024 16:45:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.9336, 'learning_rate': 2.0965e-05, 'epoch': 2.76}
05/30/2024 16:47:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.9212, 'learning_rate': 2.0820e-05, 'epoch': 2.77}
05/30/2024 16:48:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.8969, 'learning_rate': 2.0675e-05, 'epoch': 2.78}
05/30/2024 16:49:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.9375, 'learning_rate': 2.0530e-05, 'epoch': 2.78}
05/30/2024 16:50:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.9377, 'learning_rate': 2.0385e-05, 'epoch': 2.79}
05/30/2024 16:52:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.9511, 'learning_rate': 2.0240e-05, 'epoch': 2.80}
05/30/2024 16:53:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.8959, 'learning_rate': 2.0096e-05, 'epoch': 2.81}
05/30/2024 16:53:18 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1500
05/30/2024 16:53:18 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1500/tokenizer_config.json
05/30/2024 16:53:18 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1500/special_tokens_map.json
05/30/2024 16:54:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.8659, 'learning_rate': 1.9951e-05, 'epoch': 2.82}
05/30/2024 16:55:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.8930, 'learning_rate': 1.9807e-05, 'epoch': 2.83}
05/30/2024 16:56:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.9602, 'learning_rate': 1.9663e-05, 'epoch': 2.84}
05/30/2024 16:58:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.8979, 'learning_rate': 1.9519e-05, 'epoch': 2.85}
05/30/2024 16:59:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.9401, 'learning_rate': 1.9375e-05, 'epoch': 2.86}
05/30/2024 17:00:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.8771, 'learning_rate': 1.9232e-05, 'epoch': 2.87}
05/30/2024 17:01:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.9263, 'learning_rate': 1.9089e-05, 'epoch': 2.88}
05/30/2024 17:03:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.8654, 'learning_rate': 1.8946e-05, 'epoch': 2.89}
05/30/2024 17:04:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.9671, 'learning_rate': 1.8803e-05, 'epoch': 2.90}
05/30/2024 17:05:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.9001, 'learning_rate': 1.8660e-05, 'epoch': 2.91}
05/30/2024 17:06:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.9013, 'learning_rate': 1.8518e-05, 'epoch': 2.92}
05/30/2024 17:08:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.9181, 'learning_rate': 1.8375e-05, 'epoch': 2.93}
05/30/2024 17:09:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.9420, 'learning_rate': 1.8233e-05, 'epoch': 2.93}
05/30/2024 17:10:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.9301, 'learning_rate': 1.8092e-05, 'epoch': 2.94}
05/30/2024 17:11:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9402, 'learning_rate': 1.7950e-05, 'epoch': 2.95}
05/30/2024 17:12:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.9272, 'learning_rate': 1.7809e-05, 'epoch': 2.96}
05/30/2024 17:14:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.8882, 'learning_rate': 1.7668e-05, 'epoch': 2.97}
05/30/2024 17:15:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.9638, 'learning_rate': 1.7527e-05, 'epoch': 2.98}
05/30/2024 17:16:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.9586, 'learning_rate': 1.7387e-05, 'epoch': 2.99}
05/30/2024 17:17:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.9006, 'learning_rate': 1.7247e-05, 'epoch': 3.00}
05/30/2024 17:17:44 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1600
05/30/2024 17:17:44 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1600/tokenizer_config.json
05/30/2024 17:17:44 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1600/special_tokens_map.json
05/30/2024 17:19:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.9363, 'learning_rate': 1.7107e-05, 'epoch': 3.01}
05/30/2024 17:20:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.9020, 'learning_rate': 1.6967e-05, 'epoch': 3.02}
05/30/2024 17:21:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.9303, 'learning_rate': 1.6828e-05, 'epoch': 3.03}
05/30/2024 17:22:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.9281, 'learning_rate': 1.6688e-05, 'epoch': 3.04}
05/30/2024 17:23:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.9317, 'learning_rate': 1.6550e-05, 'epoch': 3.05}
05/30/2024 17:25:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.8836, 'learning_rate': 1.6411e-05, 'epoch': 3.06}
05/30/2024 17:26:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.8471, 'learning_rate': 1.6273e-05, 'epoch': 3.07}
05/30/2024 17:27:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.9477, 'learning_rate': 1.6135e-05, 'epoch': 3.08}
05/30/2024 17:28:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.8276, 'learning_rate': 1.5997e-05, 'epoch': 3.08}
05/30/2024 17:30:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.9156, 'learning_rate': 1.5860e-05, 'epoch': 3.09}
05/30/2024 17:31:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.9534, 'learning_rate': 1.5723e-05, 'epoch': 3.10}
05/30/2024 17:32:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.8857, 'learning_rate': 1.5586e-05, 'epoch': 3.11}
05/30/2024 17:33:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.9076, 'learning_rate': 1.5450e-05, 'epoch': 3.12}
05/30/2024 17:34:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.8764, 'learning_rate': 1.5314e-05, 'epoch': 3.13}
05/30/2024 17:36:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.8489, 'learning_rate': 1.5178e-05, 'epoch': 3.14}
05/30/2024 17:37:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9567, 'learning_rate': 1.5043e-05, 'epoch': 3.15}
05/30/2024 17:38:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.8767, 'learning_rate': 1.4908e-05, 'epoch': 3.16}
05/30/2024 17:39:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9065, 'learning_rate': 1.4773e-05, 'epoch': 3.17}
05/30/2024 17:40:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.8593, 'learning_rate': 1.4639e-05, 'epoch': 3.18}
05/30/2024 17:42:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.8936, 'learning_rate': 1.4505e-05, 'epoch': 3.19}
05/30/2024 17:42:05 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1700
05/30/2024 17:42:05 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1700/tokenizer_config.json
05/30/2024 17:42:05 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1700/special_tokens_map.json
05/30/2024 17:43:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.8548, 'learning_rate': 1.4372e-05, 'epoch': 3.20}
05/30/2024 17:44:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.8998, 'learning_rate': 1.4238e-05, 'epoch': 3.21}
05/30/2024 17:45:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.9286, 'learning_rate': 1.4106e-05, 'epoch': 3.22}
05/30/2024 17:46:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.9571, 'learning_rate': 1.3973e-05, 'epoch': 3.23}
05/30/2024 17:48:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.8892, 'learning_rate': 1.3841e-05, 'epoch': 3.23}
05/30/2024 17:49:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.8617, 'learning_rate': 1.3709e-05, 'epoch': 3.24}
05/30/2024 17:50:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.8228, 'learning_rate': 1.3578e-05, 'epoch': 3.25}
05/30/2024 17:51:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.9472, 'learning_rate': 1.3447e-05, 'epoch': 3.26}
05/30/2024 17:53:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.9054, 'learning_rate': 1.3317e-05, 'epoch': 3.27}
05/30/2024 17:54:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.8655, 'learning_rate': 1.3187e-05, 'epoch': 3.28}
05/30/2024 17:55:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.9144, 'learning_rate': 1.3057e-05, 'epoch': 3.29}
05/30/2024 17:56:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.8433, 'learning_rate': 1.2928e-05, 'epoch': 3.30}
05/30/2024 17:58:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.8647, 'learning_rate': 1.2799e-05, 'epoch': 3.31}
05/30/2024 17:59:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.8638, 'learning_rate': 1.2671e-05, 'epoch': 3.32}
05/30/2024 18:00:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.8999, 'learning_rate': 1.2543e-05, 'epoch': 3.33}
05/30/2024 18:01:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.9433, 'learning_rate': 1.2415e-05, 'epoch': 3.34}
05/30/2024 18:02:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.8708, 'learning_rate': 1.2288e-05, 'epoch': 3.35}
05/30/2024 18:04:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.9198, 'learning_rate': 1.2161e-05, 'epoch': 3.36}
05/30/2024 18:05:21 - INFO - llmtuner.extras.callbacks - {'loss': 0.9337, 'learning_rate': 1.2035e-05, 'epoch': 3.37}
05/30/2024 18:06:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.9156, 'learning_rate': 1.1909e-05, 'epoch': 3.38}
05/30/2024 18:06:35 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1800
05/30/2024 18:06:35 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1800/tokenizer_config.json
05/30/2024 18:06:35 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1800/special_tokens_map.json
05/30/2024 18:07:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9195, 'learning_rate': 1.1784e-05, 'epoch': 3.38}
05/30/2024 18:09:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.9246, 'learning_rate': 1.1659e-05, 'epoch': 3.39}
05/30/2024 18:10:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9024, 'learning_rate': 1.1535e-05, 'epoch': 3.40}
05/30/2024 18:11:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.9600, 'learning_rate': 1.1411e-05, 'epoch': 3.41}
05/30/2024 18:12:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.8981, 'learning_rate': 1.1287e-05, 'epoch': 3.42}
05/30/2024 18:13:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.8665, 'learning_rate': 1.1164e-05, 'epoch': 3.43}
05/30/2024 18:15:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.9277, 'learning_rate': 1.1042e-05, 'epoch': 3.44}
05/30/2024 18:16:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.8436, 'learning_rate': 1.0920e-05, 'epoch': 3.45}
05/30/2024 18:17:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.8536, 'learning_rate': 1.0798e-05, 'epoch': 3.46}
05/30/2024 18:19:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.9152, 'learning_rate': 1.0677e-05, 'epoch': 3.47}
05/30/2024 18:20:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.8880, 'learning_rate': 1.0557e-05, 'epoch': 3.48}
05/30/2024 18:21:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.8762, 'learning_rate': 1.0437e-05, 'epoch': 3.49}
05/30/2024 18:22:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.8495, 'learning_rate': 1.0317e-05, 'epoch': 3.50}
05/30/2024 18:24:02 - INFO - llmtuner.extras.callbacks - {'loss': 0.8993, 'learning_rate': 1.0198e-05, 'epoch': 3.51}
05/30/2024 18:25:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.8906, 'learning_rate': 1.0080e-05, 'epoch': 3.52}
05/30/2024 18:26:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.8754, 'learning_rate': 9.9618e-06, 'epoch': 3.53}
05/30/2024 18:27:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9499, 'learning_rate': 9.8444e-06, 'epoch': 3.53}
05/30/2024 18:29:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.8777, 'learning_rate': 9.7274e-06, 'epoch': 3.54}
05/30/2024 18:30:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.9424, 'learning_rate': 9.6110e-06, 'epoch': 3.55}
05/30/2024 18:31:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.9045, 'learning_rate': 9.4952e-06, 'epoch': 3.56}
05/30/2024 18:31:32 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1900
05/30/2024 18:31:32 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1900/tokenizer_config.json
05/30/2024 18:31:32 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-1900/special_tokens_map.json
05/30/2024 18:32:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.9204, 'learning_rate': 9.3799e-06, 'epoch': 3.57}
05/30/2024 18:33:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.9251, 'learning_rate': 9.2651e-06, 'epoch': 3.58}
05/30/2024 18:35:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.8666, 'learning_rate': 9.1508e-06, 'epoch': 3.59}
05/30/2024 18:36:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.8977, 'learning_rate': 9.0372e-06, 'epoch': 3.60}
05/30/2024 18:37:37 - INFO - llmtuner.extras.callbacks - {'loss': 0.8685, 'learning_rate': 8.9240e-06, 'epoch': 3.61}
05/30/2024 18:38:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.9547, 'learning_rate': 8.8115e-06, 'epoch': 3.62}
05/30/2024 18:40:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.9702, 'learning_rate': 8.6995e-06, 'epoch': 3.63}
05/30/2024 18:41:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.9219, 'learning_rate': 8.5880e-06, 'epoch': 3.64}
05/30/2024 18:42:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.9426, 'learning_rate': 8.4772e-06, 'epoch': 3.65}
05/30/2024 18:43:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.9388, 'learning_rate': 8.3669e-06, 'epoch': 3.66}
05/30/2024 18:45:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.9110, 'learning_rate': 8.2571e-06, 'epoch': 3.67}
05/30/2024 18:46:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.9256, 'learning_rate': 8.1480e-06, 'epoch': 3.68}
05/30/2024 18:47:30 - INFO - llmtuner.extras.callbacks - {'loss': 0.8783, 'learning_rate': 8.0395e-06, 'epoch': 3.68}
05/30/2024 18:48:46 - INFO - llmtuner.extras.callbacks - {'loss': 1.0136, 'learning_rate': 7.9315e-06, 'epoch': 3.69}
05/30/2024 18:50:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.8919, 'learning_rate': 7.8241e-06, 'epoch': 3.70}
05/30/2024 18:51:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.8764, 'learning_rate': 7.7173e-06, 'epoch': 3.71}
05/30/2024 18:52:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.9047, 'learning_rate': 7.6112e-06, 'epoch': 3.72}
05/30/2024 18:53:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.8827, 'learning_rate': 7.5056e-06, 'epoch': 3.73}
05/30/2024 18:55:23 - INFO - llmtuner.extras.callbacks - {'loss': 0.9144, 'learning_rate': 7.4006e-06, 'epoch': 3.74}
05/30/2024 18:56:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.9430, 'learning_rate': 7.2963e-06, 'epoch': 3.75}
05/30/2024 18:56:38 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2000
05/30/2024 18:56:38 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2000/tokenizer_config.json
05/30/2024 18:56:38 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2000/special_tokens_map.json
05/30/2024 18:57:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.9759, 'learning_rate': 7.1926e-06, 'epoch': 3.76}
05/30/2024 18:59:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.8259, 'learning_rate': 7.0895e-06, 'epoch': 3.77}
05/30/2024 19:00:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.8540, 'learning_rate': 6.9870e-06, 'epoch': 3.78}
05/30/2024 19:01:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.8934, 'learning_rate': 6.8851e-06, 'epoch': 3.79}
05/30/2024 19:02:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.9174, 'learning_rate': 6.7839e-06, 'epoch': 3.80}
05/30/2024 19:04:09 - INFO - llmtuner.extras.callbacks - {'loss': 0.8880, 'learning_rate': 6.6833e-06, 'epoch': 3.81}
05/30/2024 19:05:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.9307, 'learning_rate': 6.5833e-06, 'epoch': 3.82}
05/30/2024 19:06:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.8861, 'learning_rate': 6.4840e-06, 'epoch': 3.83}
05/30/2024 19:07:47 - INFO - llmtuner.extras.callbacks - {'loss': 0.9151, 'learning_rate': 6.3853e-06, 'epoch': 3.83}
05/30/2024 19:08:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.8311, 'learning_rate': 6.2872e-06, 'epoch': 3.84}
05/30/2024 19:10:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.9491, 'learning_rate': 6.1898e-06, 'epoch': 3.85}
05/30/2024 19:11:25 - INFO - llmtuner.extras.callbacks - {'loss': 0.8991, 'learning_rate': 6.0931e-06, 'epoch': 3.86}
05/30/2024 19:12:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.8685, 'learning_rate': 5.9970e-06, 'epoch': 3.87}
05/30/2024 19:13:48 - INFO - llmtuner.extras.callbacks - {'loss': 0.8965, 'learning_rate': 5.9016e-06, 'epoch': 3.88}
05/30/2024 19:15:05 - INFO - llmtuner.extras.callbacks - {'loss': 0.9744, 'learning_rate': 5.8069e-06, 'epoch': 3.89}
05/30/2024 19:16:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.9073, 'learning_rate': 5.7128e-06, 'epoch': 3.90}
05/30/2024 19:17:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.8553, 'learning_rate': 5.6194e-06, 'epoch': 3.91}
05/30/2024 19:18:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.8700, 'learning_rate': 5.5266e-06, 'epoch': 3.92}
05/30/2024 19:19:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.9418, 'learning_rate': 5.4345e-06, 'epoch': 3.93}
05/30/2024 19:21:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.9319, 'learning_rate': 5.3432e-06, 'epoch': 3.94}
05/30/2024 19:21:07 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2100
05/30/2024 19:21:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2100/tokenizer_config.json
05/30/2024 19:21:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2100/special_tokens_map.json
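Since every callback line embeds a Python-literal dict, the loss curve can be recovered directly from this file. A small parsing sketch (it assumes the log is saved as running_log.txt next to the script):

import ast
import re

records = []
with open("running_log.txt", encoding="utf-8") as f:
    for line in f:
        # Grab the {'loss': ..., 'learning_rate': ..., 'epoch': ...} payload.
        m = re.search(r"llmtuner\.extras\.callbacks - (\{.*\})", line)
        if m:
            records.append(ast.literal_eval(m.group(1)))

mean_loss = sum(r["loss"] for r in records) / len(records)
print(f"{len(records)} logged steps, mean loss {mean_loss:.4f}")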
05/30/2024 19:22:31 - INFO - llmtuner.extras.callbacks - {'loss': 0.8899, 'learning_rate': 5.2524e-06, 'epoch': 3.95}
05/30/2024 19:23:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.9009, 'learning_rate': 5.1624e-06, 'epoch': 3.96}
05/30/2024 19:24:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.8726, 'learning_rate': 5.0731e-06, 'epoch': 3.97}
05/30/2024 19:26:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.9409, 'learning_rate': 4.9845e-06, 'epoch': 3.98}
05/30/2024 19:27:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.8981, 'learning_rate': 4.8965e-06, 'epoch': 3.98}
05/30/2024 19:28:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.8947, 'learning_rate': 4.8093e-06, 'epoch': 3.99}
05/30/2024 19:29:46 - INFO - llmtuner.extras.callbacks - {'loss': 0.8818, 'learning_rate': 4.7227e-06, 'epoch': 4.00}
05/30/2024 19:31:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.9535, 'learning_rate': 4.6369e-06, 'epoch': 4.01}
05/30/2024 19:32:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.8730, 'learning_rate': 4.5518e-06, 'epoch': 4.02}
05/30/2024 19:33:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.9041, 'learning_rate': 4.4673e-06, 'epoch': 4.03}
05/30/2024 19:34:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.8957, 'learning_rate': 4.3836e-06, 'epoch': 4.04}
05/30/2024 19:35:52 - INFO - llmtuner.extras.callbacks - {'loss': 0.8928, 'learning_rate': 4.3006e-06, 'epoch': 4.05}
05/30/2024 19:37:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.9154, 'learning_rate': 4.2184e-06, 'epoch': 4.06}
05/30/2024 19:38:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.9108, 'learning_rate': 4.1368e-06, 'epoch': 4.07}
05/30/2024 19:39:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.8975, 'learning_rate': 4.0560e-06, 'epoch': 4.08}
05/30/2024 19:40:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.8395, 'learning_rate': 3.9759e-06, 'epoch': 4.09}
05/30/2024 19:41:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.8536, 'learning_rate': 3.8965e-06, 'epoch': 4.10}
05/30/2024 19:43:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.9231, 'learning_rate': 3.8179e-06, 'epoch': 4.11}
05/30/2024 19:44:19 - INFO - llmtuner.extras.callbacks - {'loss': 0.8989, 'learning_rate': 3.7400e-06, 'epoch': 4.12}
05/30/2024 19:45:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.8504, 'learning_rate': 3.6629e-06, 'epoch': 4.13}
05/30/2024 19:45:41 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2200
05/30/2024 19:45:41 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2200/tokenizer_config.json
05/30/2024 19:45:41 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2200/special_tokens_map.json
05/30/2024 19:46:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.8720, 'learning_rate': 3.5864e-06, 'epoch': 4.14}
05/30/2024 19:48:08 - INFO - llmtuner.extras.callbacks - {'loss': 0.8666, 'learning_rate': 3.5108e-06, 'epoch': 4.14}
05/30/2024 19:49:18 - INFO - llmtuner.extras.callbacks - {'loss': 0.8860, 'learning_rate': 3.4358e-06, 'epoch': 4.15}
05/30/2024 19:50:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.8748, 'learning_rate': 3.3617e-06, 'epoch': 4.16}
05/30/2024 19:51:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.8654, 'learning_rate': 3.2882e-06, 'epoch': 4.17}
05/30/2024 19:52:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.9251, 'learning_rate': 3.2156e-06, 'epoch': 4.18}
05/30/2024 19:54:15 - INFO - llmtuner.extras.callbacks - {'loss': 0.9080, 'learning_rate': 3.1436e-06, 'epoch': 4.19}
05/30/2024 19:55:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.9194, 'learning_rate': 3.0725e-06, 'epoch': 4.20}
05/30/2024 19:56:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.8530, 'learning_rate': 3.0021e-06, 'epoch': 4.21}
05/30/2024 19:57:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.9359, 'learning_rate': 2.9325e-06, 'epoch': 4.22}
05/30/2024 19:59:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.8605, 'learning_rate': 2.8636e-06, 'epoch': 4.23}
05/30/2024 20:00:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.8714, 'learning_rate': 2.7955e-06, 'epoch': 4.24}
05/30/2024 20:01:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.8910, 'learning_rate': 2.7282e-06, 'epoch': 4.25}
05/30/2024 20:02:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9735, 'learning_rate': 2.6616e-06, 'epoch': 4.26}
05/30/2024 20:04:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.8840, 'learning_rate': 2.5959e-06, 'epoch': 4.27}
05/30/2024 20:05:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.8861, 'learning_rate': 2.5309e-06, 'epoch': 4.28}
05/30/2024 20:06:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.8910, 'learning_rate': 2.4667e-06, 'epoch': 4.29}
05/30/2024 20:07:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.9195, 'learning_rate': 2.4032e-06, 'epoch': 4.29}
05/30/2024 20:08:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.8517, 'learning_rate': 2.3406e-06, 'epoch': 4.30}
05/30/2024 20:10:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.8951, 'learning_rate': 2.2787e-06, 'epoch': 4.31}
05/30/2024 20:10:07 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2300
05/30/2024 20:10:07 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2300/tokenizer_config.json
05/30/2024 20:10:07 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2300/special_tokens_map.json
05/30/2024 20:11:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.8348, 'learning_rate': 2.2176e-06, 'epoch': 4.32}
05/30/2024 20:12:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.8935, 'learning_rate': 2.1574e-06, 'epoch': 4.33}
05/30/2024 20:13:43 - INFO - llmtuner.extras.callbacks - {'loss': 0.9058, 'learning_rate': 2.0979e-06, 'epoch': 4.34}
05/30/2024 20:14:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.8384, 'learning_rate': 2.0392e-06, 'epoch': 4.35}
05/30/2024 20:16:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.8563, 'learning_rate': 1.9813e-06, 'epoch': 4.36}
05/30/2024 20:17:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.8629, 'learning_rate': 1.9242e-06, 'epoch': 4.37}
05/30/2024 20:18:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.8800, 'learning_rate': 1.8679e-06, 'epoch': 4.38}
05/30/2024 20:20:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.8356, 'learning_rate': 1.8124e-06, 'epoch': 4.39}
05/30/2024 20:21:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.8987, 'learning_rate': 1.7578e-06, 'epoch': 4.40}
05/30/2024 20:22:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.8965, 'learning_rate': 1.7039e-06, 'epoch': 4.41}
05/30/2024 20:23:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.9386, 'learning_rate': 1.6508e-06, 'epoch': 4.42}
05/30/2024 20:25:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.8898, 'learning_rate': 1.5986e-06, 'epoch': 4.43}
05/30/2024 20:26:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9369, 'learning_rate': 1.5471e-06, 'epoch': 4.44}
05/30/2024 20:27:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.8385, 'learning_rate': 1.4965e-06, 'epoch': 4.44}
05/30/2024 20:28:38 - INFO - llmtuner.extras.callbacks - {'loss': 0.8916, 'learning_rate': 1.4467e-06, 'epoch': 4.45}
05/30/2024 20:29:53 - INFO - llmtuner.extras.callbacks - {'loss': 0.8874, 'learning_rate': 1.3977e-06, 'epoch': 4.46}
05/30/2024 20:31:07 - INFO - llmtuner.extras.callbacks - {'loss': 0.8804, 'learning_rate': 1.3495e-06, 'epoch': 4.47}
05/30/2024 20:32:20 - INFO - llmtuner.extras.callbacks - {'loss': 0.8431, 'learning_rate': 1.3022e-06, 'epoch': 4.48}
05/30/2024 20:33:40 - INFO - llmtuner.extras.callbacks - {'loss': 0.9226, 'learning_rate': 1.2557e-06, 'epoch': 4.49}
05/30/2024 20:34:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.8676, 'learning_rate': 1.2100e-06, 'epoch': 4.50}
05/30/2024 20:34:51 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2400
05/30/2024 20:34:51 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2400/tokenizer_config.json
05/30/2024 20:34:51 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2400/special_tokens_map.json
05/30/2024 20:36:01 - INFO - llmtuner.extras.callbacks - {'loss': 0.8533, 'learning_rate': 1.1651e-06, 'epoch': 4.51}
05/30/2024 20:37:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.9981, 'learning_rate': 1.1210e-06, 'epoch': 4.52}
05/30/2024 20:38:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.8777, 'learning_rate': 1.0778e-06, 'epoch': 4.53}
05/30/2024 20:39:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.8736, 'learning_rate': 1.0354e-06, 'epoch': 4.54}
05/30/2024 20:40:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9003, 'learning_rate': 9.9389e-07, 'epoch': 4.55}
05/30/2024 20:42:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.9017, 'learning_rate': 9.5317e-07, 'epoch': 4.56}
05/30/2024 20:43:13 - INFO - llmtuner.extras.callbacks - {'loss': 0.9836, 'learning_rate': 9.1329e-07, 'epoch': 4.57}
05/30/2024 20:44:29 - INFO - llmtuner.extras.callbacks - {'loss': 0.8772, 'learning_rate': 8.7424e-07, 'epoch': 4.58}
05/30/2024 20:45:41 - INFO - llmtuner.extras.callbacks - {'loss': 0.8810, 'learning_rate': 8.3604e-07, 'epoch': 4.59}
05/30/2024 20:46:54 - INFO - llmtuner.extras.callbacks - {'loss': 0.9520, 'learning_rate': 7.9867e-07, 'epoch': 4.59}
05/30/2024 20:48:10 - INFO - llmtuner.extras.callbacks - {'loss': 0.9020, 'learning_rate': 7.6214e-07, 'epoch': 4.60}
05/30/2024 20:49:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.8522, 'learning_rate': 7.2645e-07, 'epoch': 4.61}
05/30/2024 20:50:34 - INFO - llmtuner.extras.callbacks - {'loss': 0.8857, 'learning_rate': 6.9161e-07, 'epoch': 4.62}
05/30/2024 20:51:51 - INFO - llmtuner.extras.callbacks - {'loss': 0.9644, 'learning_rate': 6.5761e-07, 'epoch': 4.63}
05/30/2024 20:53:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.8806, 'learning_rate': 6.2446e-07, 'epoch': 4.64}
05/30/2024 20:54:28 - INFO - llmtuner.extras.callbacks - {'loss': 0.9009, 'learning_rate': 5.9216e-07, 'epoch': 4.65}
05/30/2024 20:55:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.9075, 'learning_rate': 5.6070e-07, 'epoch': 4.66}
05/30/2024 20:56:57 - INFO - llmtuner.extras.callbacks - {'loss': 0.9156, 'learning_rate': 5.3009e-07, 'epoch': 4.67}
05/30/2024 20:58:12 - INFO - llmtuner.extras.callbacks - {'loss': 0.8865, 'learning_rate': 5.0033e-07, 'epoch': 4.68}
05/30/2024 20:59:24 - INFO - llmtuner.extras.callbacks - {'loss': 0.9218, 'learning_rate': 4.7143e-07, 'epoch': 4.69}
05/30/2024 20:59:24 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2500
05/30/2024 20:59:24 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2500/tokenizer_config.json
05/30/2024 20:59:24 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2500/special_tokens_map.json
05/30/2024 21:00:36 - INFO - llmtuner.extras.callbacks - {'loss': 0.8799, 'learning_rate': 4.4337e-07, 'epoch': 4.70}
05/30/2024 21:01:49 - INFO - llmtuner.extras.callbacks - {'loss': 0.9703, 'learning_rate': 4.1617e-07, 'epoch': 4.71}
05/30/2024 21:03:03 - INFO - llmtuner.extras.callbacks - {'loss': 0.8682, 'learning_rate': 3.8982e-07, 'epoch': 4.72}
05/30/2024 21:04:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.9274, 'learning_rate': 3.6433e-07, 'epoch': 4.73}
05/30/2024 21:05:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.8791, 'learning_rate': 3.3969e-07, 'epoch': 4.74}
05/30/2024 21:06:44 - INFO - llmtuner.extras.callbacks - {'loss': 0.8867, 'learning_rate': 3.1591e-07, 'epoch': 4.74}
05/30/2024 21:07:55 - INFO - llmtuner.extras.callbacks - {'loss': 0.8912, 'learning_rate': 2.9299e-07, 'epoch': 4.75}
05/30/2024 21:09:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.8595, 'learning_rate': 2.7093e-07, 'epoch': 4.76}
05/30/2024 21:10:32 - INFO - llmtuner.extras.callbacks - {'loss': 0.9578, 'learning_rate': 2.4972e-07, 'epoch': 4.77}
05/30/2024 21:11:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.8869, 'learning_rate': 2.2937e-07, 'epoch': 4.78}
05/30/2024 21:12:59 - INFO - llmtuner.extras.callbacks - {'loss': 0.9197, 'learning_rate': 2.0989e-07, 'epoch': 4.79}
05/30/2024 21:14:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.9270, 'learning_rate': 1.9127e-07, 'epoch': 4.80}
05/30/2024 21:15:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.8589, 'learning_rate': 1.7351e-07, 'epoch': 4.81}
05/30/2024 21:16:39 - INFO - llmtuner.extras.callbacks - {'loss': 0.8852, 'learning_rate': 1.5661e-07, 'epoch': 4.82}
05/30/2024 21:17:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.9177, 'learning_rate': 1.4057e-07, 'epoch': 4.83}
05/30/2024 21:19:11 - INFO - llmtuner.extras.callbacks - {'loss': 0.9058, 'learning_rate': 1.2540e-07, 'epoch': 4.84}
05/30/2024 21:20:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.8251, 'learning_rate': 1.1109e-07, 'epoch': 4.85}
05/30/2024 21:21:45 - INFO - llmtuner.extras.callbacks - {'loss': 0.8398, 'learning_rate': 9.7646e-08, 'epoch': 4.86}
05/30/2024 21:22:56 - INFO - llmtuner.extras.callbacks - {'loss': 0.8930, 'learning_rate': 8.5068e-08, 'epoch': 4.87}
05/30/2024 21:24:06 - INFO - llmtuner.extras.callbacks - {'loss': 0.8928, 'learning_rate': 7.3355e-08, 'epoch': 4.88}
05/30/2024 21:24:06 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2600
05/30/2024 21:24:06 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2600/tokenizer_config.json
05/30/2024 21:24:06 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/checkpoint-2600/special_tokens_map.json
05/30/2024 21:25:22 - INFO - llmtuner.extras.callbacks - {'loss': 0.9486, 'learning_rate': 6.2508e-08, 'epoch': 4.89}
05/30/2024 21:26:35 - INFO - llmtuner.extras.callbacks - {'loss': 0.8662, 'learning_rate': 5.2528e-08, 'epoch': 4.89}
05/30/2024 21:27:50 - INFO - llmtuner.extras.callbacks - {'loss': 0.9009, 'learning_rate': 4.3414e-08, 'epoch': 4.90}
05/30/2024 21:29:04 - INFO - llmtuner.extras.callbacks - {'loss': 0.8491, 'learning_rate': 3.5167e-08, 'epoch': 4.91}
05/30/2024 21:30:17 - INFO - llmtuner.extras.callbacks - {'loss': 0.8408, 'learning_rate': 2.7788e-08, 'epoch': 4.92}
05/30/2024 21:31:33 - INFO - llmtuner.extras.callbacks - {'loss': 0.9309, 'learning_rate': 2.1276e-08, 'epoch': 4.93}
05/30/2024 21:32:58 - INFO - llmtuner.extras.callbacks - {'loss': 0.9281, 'learning_rate': 1.5632e-08, 'epoch': 4.94}
05/30/2024 21:34:16 - INFO - llmtuner.extras.callbacks - {'loss': 0.8968, 'learning_rate': 1.0856e-08, 'epoch': 4.95}
05/30/2024 21:35:27 - INFO - llmtuner.extras.callbacks - {'loss': 0.8543, 'learning_rate': 6.9479e-09, 'epoch': 4.96}
05/30/2024 21:36:42 - INFO - llmtuner.extras.callbacks - {'loss': 0.8648, 'learning_rate': 3.9083e-09, 'epoch': 4.97}
05/30/2024 21:38:00 - INFO - llmtuner.extras.callbacks - {'loss': 0.9323, 'learning_rate': 1.7370e-09, 'epoch': 4.98}
05/30/2024 21:39:14 - INFO - llmtuner.extras.callbacks - {'loss': 0.8781, 'learning_rate': 4.3426e-10, 'epoch': 4.99}
05/30/2024 21:40:26 - INFO - llmtuner.extras.callbacks - {'loss': 0.9100, 'learning_rate': 0.0000e+00, 'epoch': 5.00}
05/30/2024 21:40:26 - INFO - transformers.trainer -
Training completed. Do not forget to share your model on huggingface.co/models =)
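The logged learning rates reach exactly 0.0000e+00 at epoch 5.00 and track cosine decay closely (for instance, 1.2100e-06 near epoch 4.50 is what a cosine curve predicts at 90% of training). A sketch of that schedule; the peak rate of 5e-5 and the total step count are inferred from the logged values, not stated in this excerpt:

import math

def cosine_lr(step: int, total_steps: int, peak_lr: float = 5e-5) -> float:
    # Decays smoothly from peak_lr at step 0 to exactly 0 at total_steps.
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * step / total_steps))

# ~2660 optimizer steps over 5 epochs (checkpoint-2600 lands near epoch 4.88);
# at 90% of training this gives ~1.22e-06, in line with the log at epoch 4.50.
print(cosine_lr(2394, 2660))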
05/30/2024 21:40:26 - INFO - transformers.trainer - Saving model checkpoint to /datas/wangm/LLM4LangGPT/output/bloomz-7b1
05/30/2024 21:40:26 - INFO - transformers.tokenization_utils_base - tokenizer config file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/tokenizer_config.json
05/30/2024 21:40:26 - INFO - transformers.tokenization_utils_base - Special tokens file saved in /datas/wangm/LLM4LangGPT/output/bloomz-7b1/special_tokens_map.json
05/30/2024 21:40:26 - INFO - transformers.modelcard - Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
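The final save writes the tokenizer and fine-tuned weights to /datas/wangm/LLM4LangGPT/output/bloomz-7b1. A minimal loading sketch for inference, assuming the output directory contains a standard peft LoRA adapter (consistent with the "Fine-tuning method: LoRA" setup of this run):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "/datas/huggingface/bloomz-7b1", torch_dtype="auto"
)
# Attach the LoRA adapter saved at the end of training.
model = PeftModel.from_pretrained(base, "/datas/wangm/LLM4LangGPT/output/bloomz-7b1")
tokenizer = AutoTokenizer.from_pretrained("/datas/wangm/LLM4LangGPT/output/bloomz-7b1")

inputs = tokenizer("Write a structured prompt for a translation assistant.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))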