xinchen9 committed
Commit: 6c6d127
Parent: c5ed4b8

[Update] Update model name

Files changed (1): README.md (+2 −2)
README.md CHANGED

@@ -6,7 +6,7 @@ license: apache-2.0
 Introducing xinchen9/llama3-b8-ft, an advanced language model comprising 8 billion parameters. It has been fine-trained based on
 [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B).
 
-The llama3-b8 model was fine-tuning on dataset [CoT_ollection](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
+The llama3-b8 model was fine-tuning on dataset [CoT_Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
 
 The training step is 30,000. The batch of each device is 8 and toal GPU is 5.
 
@@ -17,7 +17,7 @@ Here give some examples of how to use our model.
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
 
-model_name = "xinchen9/Llama3.1_CoT"
+model_name = "xinchen9/Llama3.1_CoT_V1"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
 model.generation_config = GenerationConfig.from_pretrained(model_name)
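As a sanity check on the training figures quoted in the README (30,000 steps, batch size 8 per device, 5 GPUs), the implied effective batch size and total examples processed can be computed. This is a minimal sketch that assumes no gradient accumulation, which the README does not mention:

```python
# Training figures quoted in the README diff above.
steps = 30_000          # total training steps
per_device_batch = 8    # batch size on each device
num_gpus = 5            # total number of GPUs

# Effective (global) batch size, assuming no gradient accumulation.
effective_batch = per_device_batch * num_gpus
print(effective_batch)   # 40

# Total training examples processed over the full run.
total_examples = effective_batch * steps
print(total_examples)    # 1200000
```

If gradient accumulation were used, the effective batch size would be multiplied by the number of accumulation steps, so these figures are a lower bound under that assumption.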