Pankaj Mathur committed on
Commit • 21ef508
Parent(s): 373e60f
Update README.md

README.md CHANGED
A LLama2-7b model trained on Orca Style datasets.

<br>

![orca-mini](https://huggingface.co/psmathur/orca_mini_v3_7b/resolve/main/orca_minis_small.jpeg)

<br>

How good is orca-mini-v3-7b? Do the evaluation results from the HuggingFace Open LLM leaderboard translate to real-world use cases?

Now you can figure it out for yourself!

Introducing the orca-mini chatbot, powered by the orca-mini-v3-7b model. Dive in and see how this open-source 7b model stacks up in the world of massive language models.

Hurry up before I run out of GPU credits!

Check it out here:

[https://huggingface.co/spaces/psmathur/psmathur-orca_mini_v3_7b](https://huggingface.co/spaces/psmathur/psmathur-orca_mini_v3_7b)

<br>

**P.S. I am actively seeking sponsorship and partnership opportunities. If you're interested, please connect with me at www.linkedin.com/in/pankajam.**

<br>

### quantized versions

Big thanks to [@TheBloke](https://huggingface.co/TheBloke)

2) https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ

<br>

#### license disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.

<br>

## evaluation

Here are the results on metrics used by the HuggingFaceH4 Open LLM Leaderboard:

|**Total Average**|-|**0.59865**||

<br>

## example usage

```
...
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
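The usage snippet above is truncated, ending at the final `tokenizer.decode(...)` call. A minimal self-contained sketch of what such a script might look like, assuming the standard `transformers` API and an Orca-style System/User/Assistant prompt template (the template and the `build_prompt`/`generate` helper names are illustrative assumptions, not taken from this model card):

```python
from typing import Optional


def build_prompt(user_message: str, system_message: Optional[str] = None) -> str:
    """Assemble an Orca-style prompt (assumed template, not confirmed by this card)."""
    system = system_message or "You are a helpful assistant."
    return f"### System:\n{system}\n\n### User:\n{user_message}\n\n### Assistant:\n"


def generate(prompt: str, model_name: str = "psmathur/orca_mini_v3_7b") -> str:
    """Load the model with transformers and generate a completion.

    Requires a GPU with enough memory for a 7B model (fp16).
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    # Same decode call as the snippet above.
    return tokenizer.decode(output[0], skip_special_tokens=True)


# Usage (downloads ~13 GB of weights on first run):
#   text = generate(build_prompt("Tell me about orcas."))
#   print(text)
```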

<br>

#### limitations & biases:

Despite diligent efforts in refining the pretraining data, there remains a possibility…

Exercise caution and cross-check information when necessary.

<br>

### citation: