---
library_name: transformers
license: mit
language:
- fr
- en
tags:
- french
- chocolatine
datasets:
- jpacifico/french-orca-dpo-pairs-revised
pipeline_tag: text-generation
---
17
+
18
+ ![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
19
+
20
+ # QuantFactory/Chocolatine-3B-Instruct-DPO-v1.2-GGUF
21
+ This is quantized version of [jpacifico/Chocolatine-3B-Instruct-DPO-v1.2](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-v1.2) created using llama.cpp
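Once you have downloaded one of the GGUF files from this repo, it can be run directly with the llama.cpp CLI. The filename below is a placeholder; substitute the quantization variant you actually downloaded:

```shell
# Hypothetical filename; use the GGUF file you downloaded from this repo.
./llama-cli -m Chocolatine-3B-Instruct-DPO-v1.2.Q4_K_M.gguf \
    -p "Qu'est-ce qu'un grand modèle de langage ?" -n 256
```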

# Original Model Card

### Chocolatine-3B-Instruct-DPO-v1.2

A DPO fine-tune of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) (3.82B parameters)
trained on the [jpacifico/french-orca-dpo-pairs-revised](https://huggingface.co/datasets/jpacifico/french-orca-dpo-pairs-revised) RLHF dataset.
Training in French also improves the model's English capabilities, surpassing the performance of its base model.
*The model supports a 128K context length.*

### OpenLLM Leaderboard

TBD.

### MT-Bench-French

Chocolatine-3B-Instruct-DPO-v1.2 outperforms Phi-3-medium-4k-instruct (14B) and its base model Phi-3.5-mini-instruct on [MT-Bench-French](https://huggingface.co/datasets/bofenghuang/mt-bench-french), evaluated with [multilingual-mt-bench](https://github.com/Peter-Devine/multilingual_mt_bench) and GPT-4-Turbo as the LLM judge.

```
########## First turn ##########
                                           score
model                                turn
gpt-4o-mini                          1     9.2875
Chocolatine-14B-Instruct-4k-DPO      1     8.6375
Chocolatine-14B-Instruct-DPO-v1.2    1     8.6125
Phi-3.5-mini-instruct                1     8.5250
Chocolatine-3B-Instruct-DPO-v1.2     1     8.3750
Phi-3-medium-4k-instruct             1     8.2250
gpt-3.5-turbo                        1     8.1375
Chocolatine-3B-Instruct-DPO-Revised  1     7.9875
Daredevil-8B                         1     7.8875
Meta-Llama-3.1-8B-Instruct           1     7.0500
vigostral-7b-chat                    1     6.7875
Mistral-7B-Instruct-v0.3             1     6.7500
gemma-2-2b-it                        1     6.4500
French-Alpaca-7B-Instruct_beta       1     5.6875
vigogne-2-7b-chat                    1     5.6625

########## Second turn ##########
                                           score
model                                turn
gpt-4o-mini                          2     8.912500
Chocolatine-14B-Instruct-DPO-v1.2    2     8.337500
Chocolatine-3B-Instruct-DPO-Revised  2     7.937500
Chocolatine-3B-Instruct-DPO-v1.2     2     7.862500
Phi-3-medium-4k-instruct             2     7.750000
Chocolatine-14B-Instruct-4k-DPO      2     7.737500
gpt-3.5-turbo                        2     7.679167
Phi-3.5-mini-instruct                2     7.575000
Daredevil-8B                         2     7.087500
Meta-Llama-3.1-8B-Instruct           2     6.787500
Mistral-7B-Instruct-v0.3             2     6.500000
vigostral-7b-chat                    2     6.162500
gemma-2-2b-it                        2     6.100000
French-Alpaca-7B-Instruct_beta       2     5.487395
vigogne-2-7b-chat                    2     2.775000

########## Average ##########
                                           score
model
gpt-4o-mini                                9.100000
Chocolatine-14B-Instruct-DPO-v1.2          8.475000
Chocolatine-14B-Instruct-4k-DPO            8.187500
Chocolatine-3B-Instruct-DPO-v1.2           8.118750
Phi-3.5-mini-instruct                      8.050000
Phi-3-medium-4k-instruct                   7.987500
Chocolatine-3B-Instruct-DPO-Revised        7.962500
gpt-3.5-turbo                              7.908333
Daredevil-8B                               7.487500
Meta-Llama-3.1-8B-Instruct                 6.918750
Mistral-7B-Instruct-v0.3                   6.625000
vigostral-7b-chat                          6.475000
gemma-2-2b-it                              6.275000
French-Alpaca-7B-Instruct_beta             5.587866
vigogne-2-7b-chat                          4.218750
```

### Usage

You can run this model using my [Colab notebook](https://github.com/jpacifico/Chocolatine-LLM/blob/main/Chocolatine_3B_inference_test_colab.ipynb).

You can also run Chocolatine with the following code:

```python
import transformers
from transformers import AutoTokenizer

model_name = "jpacifico/Chocolatine-3B-Instruct-DPO-v1.2"

# Format prompt
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
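For reference, `apply_chat_template` flattens the message list into a single prompt string using the template stored in the tokenizer files. The sketch below is only a hypothetical illustration of the Phi-3-style layout such a template produces; the authoritative template is the tokenizer's own `chat_template`:

```python
# Hypothetical sketch of a Phi-3-style chat layout, for illustration only;
# the real template is defined by the tokenizer and may differ in detail.
def format_phi3_style(messages, add_generation_prompt=True):
    parts = []
    for msg in messages:
        # Each message becomes <|role|>\n<content><|end|>\n
        parts.append(f"<|{msg['role']}|>\n{msg['content']}<|end|>\n")
    if add_generation_prompt:
        # Cue the model to answer as the assistant.
        parts.append("<|assistant|>\n")
    return "".join(parts)

message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"},
]
print(format_phi3_style(message))
```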

* **4-bit quantized version** is available here: [jpacifico/Chocolatine-3B-Instruct-DPO-v1.2-Q4_K_M-GGUF](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-v1.2-Q4_K_M-GGUF)

### Limitations

The Chocolatine model is a quick demonstration that a base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanism.

- **Developed by:** Jonathan Pacifico, 2024
- **Model type:** LLM
- **Language(s) (NLP):** French, English
- **License:** MIT