Update README.md
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the …
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## Intended Uses
Phi-2 is intended for research purposes only. Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
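The three prompt styles above can be sketched as plain string templates. This is a minimal illustration only: the exact templates (an `Instruct:`/`Output:` pair for QA, named speakers for chat, and a signature-plus-docstring stub for code completion) and the helper names are assumptions for this sketch, not an official API.

```python
# Illustrative prompt builders for the three formats mentioned above.
# Templates are assumptions based on common usage, not an official spec.

def qa_prompt(question: str) -> str:
    """QA format: an instruction followed by an 'Output:' cue for the model."""
    return f"Instruct: {question}\nOutput:"

def chat_prompt(turns: list[tuple[str, str]], next_speaker: str) -> str:
    """Chat format: alternating named speakers, ending with the next speaker's cue."""
    history = "\n".join(f"{speaker}: {text}" for speaker, text in turns)
    return f"{history}\n{next_speaker}:"

def code_prompt(signature: str, docstring: str) -> str:
    """Code format: a function signature plus docstring for the model to complete."""
    return f'{signature}\n    """{docstring}"""\n'

print(qa_prompt("Explain why the sky is blue."))
```

The model then continues the text after the trailing cue (`Output:`, the next speaker's name, or the open function body), so each template ends exactly where generation should begin.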