Update README.md
README.md (changed)
```diff
@@ -21,8 +21,9 @@ license: apache-2.0
 
 **🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖**
 
-<div
-<img src="https://raw.githubusercontent.com/imoneoi/openchat/
+<div style="display: flex; justify-content: center; align-items: center">
+<img src="https://raw.githubusercontent.com/imoneoi/openchat/imoneoi-add-grok-baseline/assets/openchat.png" style="width: 45%;">
+<img src="https://raw.githubusercontent.com/imoneoi/openchat/imoneoi-add-grok-baseline/assets/openchat_grok.png" style="width: 45%;">
 </div>
 
 OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.
```