Update README.md
README.md
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!

⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# …
pipeline = transformers.pipeline(
# …
    trust_remote_code=True,
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
# …
```
```python
pipeline = transformers.pipeline(
# …
    device_map="auto",
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
# …
```
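The `do_sample=True, top_k=10` arguments in both snippets restrict each sampling step to the 10 highest-probability tokens. As an aside, here is a minimal, library-free sketch of what top-k sampling does (illustrative only — `transformers` implements this internally as a logits processor):

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Sample an index from `logits`, restricted to the k largest entries."""
    # Indices of the k highest logits.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Unnormalized softmax weights over the kept logits (the rest get probability 0).
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    # Inverse-CDF sampling over the renormalized distribution.
    r = rng.random() * total
    acc = 0.0
    for i, e in zip(top, exps):
        acc += e
        if r <= acc:
            return i
    return top[-1]
```

With `k=1` this reduces to greedy decoding (always the argmax); larger `k` trades determinism for diversity.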
The model training took roughly two months.

## Evaluation

| English Benchmark | **Value** |
|--------------------|------------|
| ARC-Challenge-25shots | 59.73 |
| HellaSwag-10shots | 82.91 |
| MMLU-5shots | 58.37 |
| Winogrande-5shots | 78.30 |
| TruthfulQA-0shot | 52.56 |
| GSM8k-5shots | 53.83 |
| ARC-Challenge-0shot | 50.17 |
| ARC-Easy-0shot | 77.78 |
| HellaSwag-0shot | 82.07 |

We thank the leaderboard team from HuggingFace for providing an official evaluation of our model on the leaderboard tasks.
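Assuming the first six rows correspond to the six Open LLM Leaderboard tasks (the remaining rows are extra 0-shot variants), the leaderboard-style macro-average can be recomputed directly from the table:

```python
# Scores copied from the table above (the six assumed leaderboard tasks).
scores = {
    "ARC-Challenge-25shots": 59.73,
    "HellaSwag-10shots": 82.91,
    "MMLU-5shots": 58.37,
    "Winogrande-5shots": 78.30,
    "TruthfulQA-0shot": 52.56,
    "GSM8k-5shots": 53.83,
}

# Unweighted mean over the six tasks.
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 64.28
```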