Pankaj Mathur committed
Commit 9543074 • 1 Parent(s): 94cd0cd
Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ An Open_LLaMA-13B model trained on custom explain tuned datasets, created using

 # Dataset

-We trained [OpenLLaMa-
+We trained [OpenLLaMa-3B model](https://github.com/openlm-research/open_llama) on custom explain tuned [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly) created using approaches from [Orca Research Paper](https://arxiv.org/abs/2306.02707).

 We leverage all of the 15 system instructions provided in Orca Research Paper to generate custom datasets, in contrast to vanilla instruction tuning approaches used by original datasets.
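The updated README describes building explain tuned data by prepending Orca-style system instructions to existing instruction-tuning records. The sketch below illustrates that idea only: the system string, prompt layout, and `build_example` helper are hypothetical assumptions for illustration, not the author's pipeline and not the exact wording of the paper's 15 system instructions.

```python
# Hypothetical sketch: combine an Orca-style system instruction with an
# instruction-tuning record (e.g., from WizardLM, Alpaca, or Dolly-V2) to
# form a single prompt, in contrast to vanilla instruction tuning, which
# keeps only the (instruction, response) pair without a system message.

# Placeholder standing in for one of the 15 system instructions described
# in the Orca Research Paper (exact wording is defined in the paper).
SYSTEM_INSTRUCTION = (
    "You are a helpful assistant. Think step by step and justify your answer."
)


def build_example(instruction: str, input_text: str = "") -> dict:
    """Assemble one illustrative explain tuned prompt record as a dict."""
    prompt = f"### System:\n{SYSTEM_INSTRUCTION}\n\n### User:\n{instruction}"
    if input_text:
        prompt += f"\n\n### Input:\n{input_text}"
    prompt += "\n\n### Response:\n"
    return {"prompt": prompt}


if __name__ == "__main__":
    example = build_example("Explain why the sky appears blue.")
    print(example["prompt"])
```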