Update README.md
README.md
CHANGED
@@ -103,7 +103,7 @@ Introducing xinchen9/Llama3.1_8B_Instruct_CoT, an advanced language model compri
 
 The llama3-b8 model was fine-tuned on the [CoT_Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection) dataset.
 
-The training step is
+The number of training steps is 46,000, with a per-device batch size of 8 across 5 GPUs in total.
 Learning Rate: 0.0003
 
 ### 2. How to Use
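For readers who want to reproduce a similar setup, the numbers in the hunk above (46,000 steps, per-device batch size 8, 5 GPUs, learning rate 0.0003) map onto a Hugging Face `TrainingArguments` configuration roughly as sketched below. This is only a sketch: the actual training script is not part of this commit, and everything other than those four numbers (output path, precision, launch command) is an assumption.

```python
# Hypothetical reconstruction of the fine-tuning configuration described in the diff.
# Only max_steps, per_device_train_batch_size, learning_rate, and the GPU count
# come from the README; all other settings are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3.1-8b-instruct-cot",  # assumed output directory
    max_steps=46_000,                       # 46,000 training steps
    per_device_train_batch_size=8,          # batch size 8 on each device
    learning_rate=3e-4,                     # 0.0003
    bf16=True,                              # assumption: bfloat16 mixed precision
    logging_steps=100,                      # assumption
)

# Launched across 5 GPUs, e.g.:
#   torchrun --nproc_per_node=5 train.py
```

With 8 sequences per device on 5 GPUs, the effective batch size works out to 40 sequences per optimizer step.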
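The body of the "### 2. How to Use" section falls outside this diff hunk, so as an illustration only, here is a minimal sketch of loading `xinchen9/Llama3.1_8B_Instruct_CoT` with the standard `transformers` API; the prompt and generation settings are assumptions, not taken from the model card.

```python
# Minimal usage sketch (assumes transformers and accelerate are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xinchen9/Llama3.1_8B_Instruct_CoT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative chain-of-thought style prompt (not from the model card).
prompt = "Q: A train travels 120 km in 1.5 hours. What is its average speed? Let's think step by step.\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```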