---
title: chinese-alpaca-plus-7b-merged
emoji: 📚
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
---

A Chinese Alpaca-plus model, obtained by adding a Chinese vocabulary to LLaMA, continuing pre-training of the Chinese embeddings, and then fine-tuning on instruction datasets on top of that.

For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v3.0

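To see the effect of the extended Chinese vocabulary, you can compare the tokenizer with the original LLaMA vocabulary of 32,000 tokens. A minimal sketch (it assumes the packages from the usage section below are already installed):

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-plus-7b-merged')
print(len(tokenizer))                  # larger than the original LLaMA vocabulary of 32,000
print(tokenizer.tokenize('人工智能'))   # Chinese text should need fewer tokens than with the original tokenizer
```
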
### Usage
1. Install the required packages
```bash
pip install sentencepiece
pip install "transformers>=4.28.0"
```

2. Generate text
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM


def generate_prompt(text):
    # Wrap the user input in the Alpaca-style instruction template.
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""


tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-plus-7b-merged')
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-plus-7b-merged').half().to('cuda')
model.eval()

text = '第一个登上月球的人是谁?'  # "Who was the first person to land on the Moon?"
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        do_sample=True,          # enable sampling so temperature/top_k/top_p take effect
        max_new_tokens=128,
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    )
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
# The generated sequence includes the prompt, so strip it to keep only the response.
print(output.replace(prompt, '').strip())
```
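If the fp16 weights do not fit on your GPU, one option is to load the model in 8-bit instead. This is a minimal sketch rather than part of the original instructions; it assumes `bitsandbytes` and `accelerate` are also installed:

```python
from transformers import LlamaForCausalLM

# Assumption: bitsandbytes and accelerate are installed (e.g. pip install bitsandbytes accelerate).
model = LlamaForCausalLM.from_pretrained(
    'minlik/chinese-alpaca-plus-7b-merged',
    load_in_8bit=True,   # quantize the linear layers to int8 at load time
    device_map='auto',   # let accelerate place layers on the available device(s)
)
model.eval()
```

The generation code above works unchanged, except that the input tensors should be moved to the same device as the model (for example with `input_ids.to(model.device)`).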