we1kkk committed on
Commit 46a40f6
1 parent: f4955e2

Update README.md

Files changed (1):
  1. README.md +17 -14
README.md CHANGED
@@ -4,21 +4,24 @@
 This repo contains the tokenizer, Chinese-Alpaca "merged" weights and configs for Chinese-LLaMA-Alpaca
 Directly load `merge` weight for chinese-llama-alpaca-plus-lora-7b
 
 
- model_name_or_path = 'we1kkk/chinese-llama-alpaca-plus-lora-7b'
-
- config = LlamaConfig.from_pretrained(
-     model_name_or_path,
-     # trust_remote_code=True
- )
- tokenizer = LlamaTokenizer.from_pretrained(
-     model_name_or_path,
-     # trust_remote_code=True
- )
- model = LlamaForCausalLM.from_pretrained(
-     model_name_or_path,
-     config=config,
- ).half().cuda()
+ ```python
+ model_name_or_path = 'we1kkk/chinese-llama-alpaca-plus-lora-7b'
+
+ config = LlamaConfig.from_pretrained(
+     model_name_or_path,
+     # trust_remote_code=True
+ )
+ tokenizer = LlamaTokenizer.from_pretrained(
+     model_name_or_path,
+     # trust_remote_code=True
+ )
+ model = LlamaForCausalLM.from_pretrained(
+     model_name_or_path,
+     config=config,
+ ).half().cuda()
+ ```
+
 
 
 ## Citation
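
For reference, the snippet added in this commit omits its imports. A minimal end-to-end sketch, assuming the standard `transformers` classes, a CUDA-capable GPU, and an illustrative prompt that is not part of the commit, might look like:

```python
# Minimal sketch: load the merged Chinese-LLaMA-Alpaca weights and run a short
# generation. The imports, the prompt, and the generate() call are assumptions
# about typical usage, not part of the committed README snippet.
from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer

model_name_or_path = 'we1kkk/chinese-llama-alpaca-plus-lora-7b'

config = LlamaConfig.from_pretrained(model_name_or_path)
tokenizer = LlamaTokenizer.from_pretrained(model_name_or_path)
model = LlamaForCausalLM.from_pretrained(
    model_name_or_path,
    config=config,
).half().cuda()

prompt = "请介绍一下北京的历史。"  # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

`.half().cuda()` keeps the weights in fp16 on the GPU; on a CPU-only machine the `.cuda()` call would need to be dropped.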