alpayariyak committed • Commit 7e65595 • Parent: a11af6b

Update README.md

README.md CHANGED
@@ -36,10 +36,9 @@ pipeline_tag: text-generation
 
 **🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖**
 
-<div style="
-<img src="https://
-
-</div>
+<div align="center" style="justify-content: center; align-items: center; "'>
+<img src="https://github.com/alpayariyak/openchat/blob/master/assets/Untitled%20design-17.png?raw=true" style="width: 100%; border-radius: 0.5em">
+</div>
 
 OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.
 
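The C-RLFT paragraph above can be sketched in miniature: the core idea is to tag each training example with its data-source class (instead of preference labels) and scale its loss by a coarse per-class reward. The class names, reward values, and function names below are illustrative assumptions, not the actual OpenChat implementation.

```python
import math

# Illustrative per-class rewards: "expert" data (e.g. strong-model outputs)
# counts fully, "suboptimal" data contributes with a small weight.
# These names and values are hypothetical, chosen only for the sketch.
CLASS_REWARDS = {"expert": 1.0, "suboptimal": 0.1}


def conditioned_prompt(source_class: str, user_msg: str) -> str:
    """Prefix the prompt with a class tag so the model can condition
    on the quality of the data source during fine-tuning."""
    return f"<{source_class}> {user_msg}"


def reward_weighted_loss(token_nlls: list[float], source_class: str) -> float:
    """Mean per-token negative log-likelihood, scaled by the coarse
    reward of the example's source class."""
    mean_nll = sum(token_nlls) / len(token_nlls)
    return CLASS_REWARDS[source_class] * mean_nll
```

Under this scheme no pairwise preference labels are needed: mixed-quality data is used directly, and lower-quality sources simply contribute less gradient signal.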