Update README.md
README.md
CHANGED
@@ -16,6 +16,16 @@ Our model hasn't been fine-tuned through reinforcement learning from human feedb
## Intended Uses

Below is example code for loading phi-2. We support two modes of execution for the model (a minimal end-to-end usage sketch follows the two snippets):

1. Loading in FP16 format with flash-attention support:

```python
model = AutoModelForCausalLM.from_pretrained('microsoft/phi-2', torch_dtype='auto', flash_attn=True, flash_rotary=True, fused_dense=True, trust_remote_code=True)
```

2. Loading in FP16 format without flash-attention:

```python
model = AutoModelForCausalLM.from_pretrained('microsoft/phi-2', torch_dtype='auto', trust_remote_code=True)
```
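For context, here is a minimal end-to-end sketch that pairs the second loading snippet with a tokenizer and a generation call. This is a sketch under assumptions, not part of the original snippets: the use of `AutoTokenizer`, the example prompt string, and the `max_new_tokens` value are illustrative choices.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load phi-2 in FP16 without flash-attention (mode 2 above).
model = AutoModelForCausalLM.from_pretrained('microsoft/phi-2', torch_dtype='auto', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('microsoft/phi-2', trust_remote_code=True)

# Illustrative QA-style prompt; encode it and generate a short completion.
inputs = tokenizer('Instruct: What is the capital of France?\nOutput:', return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(outputs)[0])
```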
Phi-2 is intended for research purposes only. Given the nature of the training data, the phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
#### QA format: