Update README.md
Low-rank adapters (r=8) finetuned over 1.6m new tokens of a FLAN task mixture, w
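The rank-8 adapters mentioned above train only two thin matrices per adapted weight matrix while the base model stays frozen. A minimal numpy sketch of the idea (the hidden size and initialization scale here are hypothetical, not OPT-6.7b's actual config):

```python
import numpy as np

d, r = 64, 8                            # hidden size (hypothetical) and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

x = rng.standard_normal(d)
y = W @ x + B @ (A @ x)                 # adapted forward pass

# Zero-initialized B makes the adapter a no-op before training,
# and the update B @ A can never exceed rank r
assert np.allclose(y, W @ x)
assert np.linalg.matrix_rank(B @ A) <= r
```

Only `A` and `B` (about 2·d·r values per matrix here, versus d² for `W`) receive gradients during finetuning.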
The model reaches a train ppl of 4.36 and an eval ppl of 4.32.
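For reference, perplexity is just the exponential of the mean token-level cross-entropy loss, so the figures above map directly back to a loss value:

```python
import math

def ppl_from_loss(mean_nll: float) -> float:
    # Perplexity is exp of the mean negative log-likelihood (nats per token)
    return math.exp(mean_nll)

# The eval ppl of 4.32 corresponds to a mean loss of ~1.46 nats/token
print(round(math.log(4.32), 2))        # 1.46
print(round(ppl_from_loss(1.4633), 2)) # 4.32
```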

### Inference Example (Chain-of-Thought prompt)

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "crumb/FLAN-OPT-6.7b-LoRA"

# Load the frozen base model in 8-bit, then attach the LoRA adapters
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_8bit=True,
    low_cpu_mem_usage=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Few-shot chain-of-thought prompt; the model completes the final "A:"
prompt = """
Q: Answer the following yes/no question by reasoning step-by-step. Could a dandelion suffer from hepatitis?
A: Hepatitis only affects organisms with livers. Dandelions don’t have a liver. The answer is no.

Q: Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?
A: A haiku is a Japanese three-line poem. That is short enough to fit in 280 characters. The answer is yes.

Q: Answer the following yes/no question by reasoning step-by-step. Can you reach space with a Cessna?
A:
""".strip()
inputs = tokenizer([prompt], return_tensors="pt")

with torch.autocast("cuda", dtype=torch.float16):
    outputs = model.generate(
        input_ids=inputs.input_ids.cuda(),
        attention_mask=inputs.attention_mask.cuda(),
        max_new_tokens=32,
        top_p=0.95,
        temperature=0.5,
        do_sample=True,
    )

# Print the prompt plus only the first generated answer line
print("\n".join(tokenizer.decode(outputs[0]).split("\n")[:prompt.count("\n") + 1]))
# A Cessna is a small plane. A small plane can't get into space. The answer is no.
```
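The final `print` in the example above trims the decoded output to the prompt's line count plus one, so only the first generated answer line survives and any extra Q/A turns the model invents are dropped. The slicing in isolation, with stand-in strings instead of model output:

```python
prompt = "Q: first question\nA: first answer\n\nQ: last question\nA:"
# Pretend the model completed the answer, then hallucinated another turn
decoded = prompt + " The answer is no.\nQ: made-up question\nA: noise"

# Keep the prompt's lines plus the single line the model completed
kept = "\n".join(decoded.split("\n")[:prompt.count("\n") + 1])
print(kept.splitlines()[-1])  # A: The answer is no.
```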