---
license: llama3
datasets:
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
- llama3
pipeline_tag: text-generation
---

# TenyxChat: Language Model Alignment using Tenyx Fine-tuning

Introducing Llama-3-TenyxChat-70B, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
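
For readers new to DPO, the sketch below illustrates the preference loss that the DPO framework optimizes on chosen/rejected completion pairs such as those in UltraFeedback. It is a minimal illustration of the published objective, not Tenyx's training code; `beta` and the log-probabilities are placeholder values.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (Rafailov et al., 2023).

    Each argument is a tensor of per-example sequence log-probabilities,
    log pi(y | x), summed over the response tokens.
    """
    # Margin between chosen and rejected under the policy and the frozen reference model.
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # -log sigmoid(beta * (policy margin - reference margin)), averaged over the batch.
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()

# Toy usage with placeholder log-probabilities for two preference pairs.
loss = dpo_loss(torch.tensor([-12.3, -8.1]), torch.tensor([-15.0, -9.4]),
                torch.tensor([-13.0, -8.5]), torch.tensor([-14.2, -9.0]))
print(loss)
```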

We fine-tune [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)), which improves the model's [MT-Bench](https://arxiv.org/abs/2306.05685)* score without a drop in performance on other benchmarks. Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution. Llama-3-TenyxChat-70B was trained using eight A100 (80 GB) GPUs for fifteen hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).

*The MT-Bench evaluation we perform follows the latest eval upgrade as PR'd [here](https://github.com/lm-sys/FastChat/pull/3158). This PR upgrades the evaluation from `GPT-4-0613` to `GPT-4-preview-0125` (the latest version) and corrects and improves the quality of the reference answers for a subset of questions. These changes are required to correct erroneous ratings during evaluation.

**Model Developers** [Tenyx Research](https://www.tenyx.com/research)

# Model details

- Model type: Fine-tuned 70B Llama 3 model for chat.
- License: Meta Llama 3 Community License
- Base model: [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- Demo: Coming Soon!

## Usage

Our model uses the same chat template as [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
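
For reference, the Llama 3 Instruct template renders a system and user turn into roughly the following prompt string (shown for illustration only; `apply_chat_template` in the example below builds it for you):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```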

### Hugging Face Example

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="tenyx/Llama3-TenyxChat-70B", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
    {"role": "user", "content": "Hi. I would like to make a hotel booking."},
]

prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
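
Assuming the default `return_full_text=True` behavior of the `text-generation` pipeline, the output contains the prompt followed by the completion, so the assistant's reply can be recovered with:

```python
# Strip the prompt prefix to keep only the newly generated assistant reply.
print(outputs[0]["generated_text"][len(prompt):])
```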

# Performance

At the time of release (April 2024), Llama3-TenyxChat-70B is the highest-ranked open-source model available for download on the MT-Bench evaluation.

## MT-Bench

MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.

| Model-name                  | GPT4-0125-preview MT Bench | Chat Arena Elo |
|-----------------------------|----------------------------|----------------|
| GPT-4-1106                  | 8.79                       | 1251           |
| Claude 3 Opus (20240229)    | 8.57                       | 1247           |
| **Llama3-TenyxChat-70B**    | 8.15                       | NA             |
| *Llama3-70B-Instruct*       | 7.97                       | 1207           |
| Claude 3 Sonnet (20240229)  | 7.82                       | 1190           |
| GPT-4-0314                  | 7.96                       | 1185           |
| Mixtral                     | 7.38                       | 1114           |
| gpt-3.5-turbo-0613          | 7.37                       | 1113           |
| Yi-34B                      | 6.46                       | 1099           |
| gpt-3.5-turbo-0125          | 7.52                       | 1096           |
| Llama 2 70B                 | 6.01                       | 1082           |
| NV-Llama2-70B-SteerLM-Chat  | 6.57                       | 1076           |

![hexplot.png](assets/hexplot_llama3-tenyxchat-70b.png)

# Limitations

Llama3-TenyxChat-70B, like other language models, has its own set of limitations. We haven't fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.

# License

Llama3-TenyxChat-70B is distributed under the Meta Llama 3 Community License.

# Citation

If you use Llama3-TenyxChat-70B for your research, cite us as

```
@misc{tenyxchat2024,
      title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
      author={Tenyx},
      year={2024},
}
```