---
license: other
license_name: qwen
language:
- th
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- openthaigpt
- qwen
---

# 🇹🇭 OpenThaiGPT 7b 1.5.0 Chat
![OpenThaiGPT](https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Fb8eiMDaqiEQL6ahbAY0h%2Fimage.png?alt=media&token=6fce78fd-2cca-4c0a-9648-bd5518e644ce)
[More Info](https://openthaigpt.aieat.or.th/)

🇹🇭 **OpenThaiGPT 7b Version 1.5.0**, released on September 30, 2024, is an advanced 7-billion-parameter Thai language chat model based on Qwen 2.5. It has been fine-tuned on over 2,000,000 Thai instruction pairs and can answer Thai-specific domain questions.

## Highlights
- **State-of-the-art Thai language LLM**, achieving the highest average scores across various Thai language exams compared to other open-source Thai LLMs.
- **Multi-turn conversation support** for extended dialogues.
- **Retrieval Augmented Generation (RAG) compatibility** for enhanced response generation.
- **Impressive context handling**: processes up to 131,072 input tokens and generates up to 8,192 tokens, enabling detailed and complex interactions.

## Benchmark on [OpenThaiGPT Eval](https://huggingface.co/datasets/openthaigpt/openthaigpt_eval)
**Note:** The table below compares 70b/72b-class models. Please see ``openthaigpt/openthaigpt1.5-7b-instruct`` for this model's own evaluation results.
| **Exam names** | **scb10x/llama-3-typhoon-v1.5x-70b-instruct** | **meta-llama/Llama-3.1-70B-Instruct** | **Qwen/Qwen2.5-72B-Instruct** | **openthaigpt/openthaigpt1.5-72b-instruct** |
|:------------------------------:|:---------------------------------------------:|:-------------------------------------:|:-----------------------------:|:----------------------------------:|
| **01_a_level** | 59.17% | 61.67% | 75.00% | 76.67% |
| **02_tgat** | 46.00% | 40.00% | 48.00% | 46.00% |
| **03_tpat1** | 52.50% | 50.00% | 55.00% | 55.00% |
| **04_investment_consult** | 60.00% | 52.00% | 80.00% | 72.00% |
| **05_facebook_beleble_th_200** | 87.50% | 88.00% | 90.00% | 90.00% |
| **06_xcopa_th_200** | 84.50% | 85.50% | 90.00% | 90.50% |
| **07_xnli2.0_th_200** | 62.50% | 63.00% | 65.50% | 70.50% |
| **08_onet_m3_thai** | 76.00% | 56.00% | 76.00% | 84.00% |
| **09_onet_m3_social** | 95.00% | 95.00% | 90.00% | 95.00% |
| **10_onet_m3_math** | 43.75% | 25.00% | 37.50% | 37.50% |
| **11_onet_m3_science** | 53.85% | 61.54% | 65.38% | 73.08% |
| **12_onet_m3_english** | 93.33% | 93.33% | 96.67% | 96.67% |
| **13_onet_m6_thai** | 55.38% | 60.00% | 60.00% | 56.92% |
| **14_onet_m6_math** | 41.18% | 58.82% | 23.53% | 41.18% |
| **15_onet_m6_social** | 67.27% | 76.36% | 63.64% | 65.45% |
| **16_onet_m6_science** | 50.00% | 57.14% | 64.29% | 67.86% |
| **17_onet_m6_english** | 73.08% | 82.69% | 86.54% | 90.38% |
| **Micro Average** | 69.97% | 71.09% | 75.02% | <b style="color:blue">76.73%</b> |

Evaluation setup: Thai-language multiple-choice exams, evaluated zero-shot on an unseen test set. Benchmark source code and exam information: https://github.com/OpenThaiGPT/openthaigpt_eval

(Updated on: 30 September 2024)

## Benchmark on [scb10x/thai_exam](https://huggingface.co/datasets/scb10x/thai_exam)

| Models | **Thai Exam (Acc)** |
|:----------------------------------------------------------:|:-------------------:|
| **api/claude-3-5-sonnet-20240620** | 69.2 |
| <b style="color:blue">**openthaigpt/openthaigpt1.5-72b-instruct***</b> | <b style="color:blue">64.07</b> |
| **api/gpt-4o-2024-05-13** | 63.89 |
| **hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4** | 63.54 |
| **scb10x/llama-3-typhoon-v1.5x-70b-instruct** | 58.76 |
| **Qwen/Qwen2-72B-Instruct** | 58.23 |
| **meta-llama/Meta-Llama-3.1-70B-Instruct** | 58.23 |
| **Qwen/Qwen2.5-14B-Instruct** | 57.35 |
| **api/gpt-4o-mini-2024-07-18** | 54.51 |
| <b style="color:blue">**openthaigpt/openthaigpt1.5-7b-instruct***</b> | <b style="color:blue">52.04</b> |
| **SeaLLMs/SeaLLMs-v3-7B-Chat** | 51.33 |
| **openthaigpt/openthaigpt-1.0.0-70b-chat** | 50.09 |

\* Evaluated by the OpenThaiGPT team using SCBx's Thai Exam.

## Licenses
* Built with Qwen
* Qwen License: Allows **research** and **commercial use**, but if your user base exceeds 100 million monthly active users, you must negotiate a separate commercial license. Please see the LICENSE file for more information.

## Sponsors
<img src="https://cdn-uploads.huggingface.co/production/uploads/5fcd9c426d942eaf4d1ebd30/3kjN6kuTzXDXQ6o1RFvHX.png" width="600px">

## Support
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- Discord server for discussion and support: [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]

## Prompt Format
The prompt format follows the ChatML convention used by Qwen models:
```
<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n
```
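
In practice you rarely need to build this string by hand. Below is a minimal sketch (assuming the model's tokenizer ships the matching chat template, as Qwen-based models typically do) of rendering the same prompt with `apply_chat_template`:
```python
# Sketch: render the ChatML prompt via the tokenizer's chat template
# instead of concatenating the special tokens manually.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openthaigpt/openthaigpt1.5-7b-instruct")
messages = [
    {"role": "system", "content": "คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์"},
    {"role": "user", "content": "สวัสดีครับ"},
]
# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n"
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)  # matches the format shown above
```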

### System prompt:
```
คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์
```
(English: "You are a smart and honest question-answering assistant.")

### Examples

#### Single Turn Conversation Example
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\n
```

#### Single Turn Conversation with Context (RAG) Example
(The user provides a passage about Bangkok, then asks for its area.)
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nกรุงเทพมหานคร เป็นเมืองหลวง นครและมหานครที่มีประชากรมากที่สุดของประเทศไทย กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 8 ล้านคน\nกรุงเทพมหานครมีพื้นที่เท่าไร่<|im_end|>\n<|im_start|>assistant\n
```

#### Multi Turn Conversation Example

##### First turn
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\n
```

##### Second turn
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\nสวัสดีครับ ยินดีต้อนรับครับ คุณต้องการให้ฉันช่วยอะไรครับ?<|im_end|>\n<|im_start|>user\nกรุงเทพมหานคร ชื่อเต็มยาวๆคืออะไร<|im_end|>\n<|im_start|>assistant\n
```

The model replies with Bangkok's full ceremonial name:

ชื่อเต็มของกรุงเทพมหานครคือ \"กรุงเทพมหานคร อมรรัตนโกสินทร์ มหินทรายุธยา มหาดิลกภพ นพรัตนราชธานีบูรีรมย์ อุดมราชนิเวศน์มหาสถาน อมรพิมานอวตารสถิต สักกะทัตติยวิษณุกรรมประสิทธิ์\"

##### Result
```
<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\nสวัสดีครับ ยินดีต้อนรับครับ คุณต้องการให้ฉันช่วยอะไรครับ?<|im_end|>\n<|im_start|>user\nกรุงเทพมหานคร ชื่อเต็มยาวๆคืออะไร<|im_end|>\n<|im_start|>assistant\nชื่อเต็มของกรุงเทพมหานครคือ \"กรุงเทพมหานคร อมรรัตนโกสินทร์ มหินทรายุธยา มหาดิลกภพ นพรัตนราชธานีบูรีรมย์ อุดมราชนิเวศน์มหาสถาน อมรพิมานอวตารสถิต สักกะทัตติยวิษณุกรรมประสิทธิ์\"
```
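
Equivalently, when using the chat template, multi-turn context is carried by appending each assistant reply to the message list before the next user turn. A minimal sketch (not from the original card):
```python
# Sketch: carry multi-turn context by appending each assistant reply
# before adding the next user turn.
messages = [
    {"role": "system", "content": "คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์"},
    {"role": "user", "content": "สวัสดีครับ"},
]
# ...generate the first reply, then extend the history...
messages += [
    {"role": "assistant", "content": "สวัสดีครับ ยินดีต้อนรับครับ คุณต้องการให้ฉันช่วยอะไรครับ?"},
    {"role": "user", "content": "กรุงเทพมหานคร ชื่อเต็มยาวๆคืออะไร"},
]
# tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# now renders the "Result" prompt shown above.
```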

## How to use

### Hugging Face Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openthaigpt/openthaigpt1.5-7b-instruct"  # this card's model

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "ประเทศไทยคืออะไร"  # "What is Thailand?"
messages = [
    {"role": "system", "content": "คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Drop the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

### vLLM

1. Install vLLM (https://github.com/vllm-project/vllm)

2. Run the server (for the larger 72b model, add `--tensor-parallel-size 4` to shard across 4 GPUs):
```bash
vllm serve openthaigpt/openthaigpt1.5-7b-instruct
```
3. Run inference (curl example):
```bash
curl -X POST 'http://127.0.0.1:8000/v1/completions' \
-H 'Content-Type: application/json' \
-d '{
  "model": "openthaigpt/openthaigpt1.5-7b-instruct",
  "prompt": "<|im_start|>system\nคุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์<|im_end|>\n<|im_start|>user\nสวัสดีครับ<|im_end|>\n<|im_start|>assistant\n",
  "max_tokens": 512,
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 40,
  "stop": ["<|im_end|>"]
}'
```
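
The same server also exposes an OpenAI-compatible chat endpoint, which applies the chat template server-side. A sketch using the `openai` Python client (an illustration, not from the original card), assuming the server from step 2 is running:
```python
# Sketch: call the vLLM server's OpenAI-compatible chat endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="openthaigpt/openthaigpt1.5-7b-instruct",
    messages=[
        {"role": "system", "content": "คุณคือผู้ช่วยตอบคำถามที่ฉลาดและซื่อสัตย์"},
        {"role": "user", "content": "สวัสดีครับ"},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(response.choices[0].message.content)
```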

### Processing Long Texts
The current `config.json` is set for a context length of up to 32,768 tokens.
To handle inputs exceeding 32,768 tokens, we use [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring strong performance on lengthy texts.

For supported frameworks, you can add the following to `config.json` to enable YaRN:
```json
{
  ...
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```

### GPU Memory Requirements
| **Number of Parameters** | **FP16** | **8-bit (Quantized)** | **4-bit (Quantized)** | **Example GPU for 4-bit** |
|------------------|----------------|------------------------|------------------------|---------------------------------------------|
| **7b** | 24 GB | 12 GB | 6 GB | Nvidia RTX 4060 8GB |
| **13b** | 48 GB | 24 GB | 12 GB | Nvidia RTX 4070 16GB |
| **72b** | 192 GB | 96 GB | 48 GB | Nvidia RTX 4090 24GB x 2 cards |

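For example, to stay within the 4-bit budget above, the model can be loaded with bitsandbytes quantization. A minimal sketch (assuming `bitsandbytes` is installed; the config values are illustrative, not from the original card):
```python
# Sketch: load the 7b model in 4-bit to fit the memory budget in the table.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "openthaigpt/openthaigpt1.5-7b-instruct"
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # illustrative choice
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
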
### Authors
* Sumeth Yuenyong ([email protected])
* Kobkrit Viriyayudhakorn ([email protected])
* Apivadee Piyatumrong ([email protected])
* Jillaphat Jaroenkantasima ([email protected])
* Thaweewat Rugsujarit ([email protected])
* Norapat Buppodom ([email protected])
* Koravich Sangkaew ([email protected])
* Peerawat Rojratchadakorn ([email protected])
* Surapon Nonesung ([email protected])
* Chanon Utupon ([email protected])
* Sadhis Wongprayoon ([email protected])
* Nucharee Thongthungwong ([email protected])
* Chawakorn Phiantham ([email protected])
* Patteera Triamamornwooth ([email protected])
* Nattarika Juntarapaoraya ([email protected])
* Kriangkrai Saetan ([email protected])
* Pitikorn Khlaisamniang ([email protected])

<i>Disclaimer: Provided responses are not guaranteed.</i>