brittlewis12 committed
Commit fcb5f02 • 1 Parent(s): 96b5999
Update README.md
README.md
CHANGED
@@ -86,6 +86,7 @@ or Tiny CoT:
 | [OpenLLaMA 7B v2 open-instruct](http://hf.co/VMware/open-llama-7b-v2-open-instruct) | **7B** | **Human** (nearly: ecqa is an exception) | SFT | 8.64% | 23.21% | 29.84% |
 | [StableLM Zephyr 3B](https://hf.co/stabilityai/stablelm-zephyr-3b) | 3B | GPT | DPO | possibly contaminated (45.72%) | **33.31%** | 0.91% |
 | [**Memphis-CoT 3B**](https://hf.co/euclaise/memphis-cot-3b) | 3B | **Human** | Self-teaching | **13.8%** | *26.24%* | **38.24%** |
+
 *5-shot, as performed automatically by LM Evaluation Harness bbh_cot_fewshot even with num_fewshot=0
 
 > Memphis outperforms other primarily-human-data models that are over twice its size, along with SFT models of its size, and trades with the Zephyr DPO model. That said, Zephyr uses synthetic data, and *much* more of it.
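
The footnote in the changed README describes a quirk of the harness: the `bbh_cot_fewshot` task family carries its chain-of-thought exemplars inside its own prompt templates, so scores come out 5-shot even when `num_fewshot=0` is passed. A minimal sketch of reproducing such a run via the harness's Python entry point (assuming lm-eval v0.4+; the model and task names are taken from the table and footnote above):

```python
import lm_eval

# Evaluate Memphis-CoT 3B on the chain-of-thought BBH tasks.
# Passing num_fewshot=0 still yields 5-shot behavior here, because
# the bbh_cot_fewshot prompt templates embed their own exemplars.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=euclaise/memphis-cot-3b",
    tasks=["bbh_cot_fewshot"],
    num_fewshot=0,
)
print(results["results"])
```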