InferenceIllusionist committed on
Commit
d5f6c8b
1 Parent(s): db0e615

Update README.md

Files changed (1): README.md (+26 -0)
@@ -59,3 +59,29 @@ merge_method: model_stock
 base_model: models/Mixtral-8x7B-v0.1-Instruct
 dtype: float16
 ```
+
+
+ ## Appendix - Llama.cpp MMLU Benchmark Results*
+
+ <i>These results were calculated using perplexity.exe from llama.cpp with the following params:</i>
+
+ `.\perplexity -m .\models\TeTO-8x7b-MS-v0.03\TeTO-MS-8x7b-Q6_K.gguf -bf .\evaluations\mmlu-test.bin --multiple-choice -c 8192 -t 23 -ngl 200`
+
+
+ ```
+ * V0.01 (4 model / Mixtral Base):
+ Final result: 43.3049 +/- 0.4196
+ Random chance: 25.0000 +/- 0.3667
+
+
+ * V0.02 (3 model / Tess Mixtral Base):
+ Final result: 43.8356 +/- 0.4202
+ Random chance: 25.0000 +/- 0.3667
+
+
+ * V0.03 (4 model / Mixtral Instruct Base):
+ Final result: 45.7004 +/- 0.4219
+ Random chance: 25.0000 +/- 0.3667
+ ```
+
+ *Please be advised that the metrics above are not representative of final HF benchmark scores, for reasons given [here](https://github.com/ggerganov/llama.cpp/pull/5047).
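The `+/-` figures llama.cpp reports alongside each score can be read as binomial standard errors of the measured accuracy. As a minimal sketch of where those numbers come from (the question count `N = 14042`, the size of the full MMLU test split, is an assumption here, not something stated in the output above):

```python
import math

def mmlu_stderr(accuracy_pct: float, n_questions: int) -> float:
    """Binomial standard error of an accuracy score, in percentage points."""
    p = accuracy_pct / 100.0
    return math.sqrt(p * (1.0 - p) / n_questions) * 100.0

# Assumed question count: the full MMLU test split (~14,042 questions).
N = 14042

# At random chance (25%) this lands near the reported +/- 0.3667.
se_chance = mmlu_stderr(25.0, N)

# At the V0.03 score (45.7004%) this lands near the reported +/- 0.4219.
se_v003 = mmlu_stderr(45.7004, N)

print(f"chance: +/- {se_chance:.4f}")
print(f"V0.03:  +/- {se_v003:.4f}")
```

Note the error grows as accuracy approaches 50%, which is why the stronger V0.03 merge also carries the widest error bar.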