Taishi-N324 committed
Commit 5b1d1b1
1 Parent(s): 80db994

Upload README.md

Files changed (1):
  README.md +28 -184

README.md CHANGED
@@ -10,19 +10,20 @@ model_type: llama
 
 # Swallow
 
- Our Swallow model has undergone continuous pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT).
 Links to other models can be found in the index.
 
 # Model Release Updates
 
 We are excited to share the release schedule for our latest models:
- - **April 25, 2024**: Released version 1.0 of our enhanced instruction-tuned models: [Swallow-7b-instruct-v1.0](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v1.0), [Swallow-13b-instruct-v1.0](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v1.0), and [Swallow-70b-instruct-v1.0](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v1.0).
 - **March 2, 2024**: Released the [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf).
 - **February 4, 2024**: Released the [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf).
 - **January 26, 2024**: Released the [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf).
 - **December 19, 2023**: Released the [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf).
 
 ## Swallow Model Index
- |Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v1.0|
 |---|---|---|---|
 |7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|[Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v1.0)|
 |7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A |
@@ -39,13 +40,11 @@ We are excited to share the release schedule for our latest models:
 ![logo](./logo.png)
 
 This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
- Read our [blog post](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) or our [paper](https://www.anlp.jp/proceedings/annual_meeting/2024/pdf_dir/A8-5.pdf)
 
 ## Model Details
 
 * **Model type**: Please refer to the LLaMA-2 technical report for details on the model architecture.
 * **Language(s)**: Japanese and English
- * **Library**: [Megatron-LM](https://github.com/rioyokotalab/Megatron-Llama2)
 * **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process.
 * **Contact**: swallow[at]nlp.c.titech.ac.jp
 
@@ -53,66 +52,29 @@ Read our [blog post](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) or our [paper](https://www.anlp.jp/proceedings/annual_meeting/2024/pdf_dir/A8-5.pdf)
 
 ### MT-Bench JA
 
- TODO
-
- ## Base Model Performance
-
- ### Japanese tasks
 
- |Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
 |---|---|---|---|---|---|---|---|---|---|
- | | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
- | Llama 2 | 7B | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 |
- | Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
- | Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
- | Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
- | Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
- | Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
- | Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
- | Llama 2 | 70B | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.3560 | 0.2643 | **0.2398** |
- | Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
- | Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |
-
- ### English tasks
-
- |Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
- |---|---|---|---|---|---|---|---|
- | | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
- | Llama 2 | 7B | 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 |
- | Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
- | Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
- | Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
- | Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
- | Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
- | Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
- | Llama 2 | 70B | **0.4280** | **0.8239** | **0.6742** | **0.3770** | **0.9290** | **0.5284** |
- | Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
- | Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |
 
 ## Evaluation Benchmarks
 
- ### Japanese evaluation benchmarks
-
- We used llm-jp-eval (v1.0.0) and the JP Language Model Evaluation Harness (commit #9b42d41). The details are as follows:
-
- - Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
- - Open-ended question answering (JEMHopQA [Ishii+, 2023])
- - Open-ended question answering (NIILC [Sekine, 2003])
- - Machine reading comprehension (JSQuAD [Kurihara+, 2022])
- - Automatic summarization (XL-Sum [Hasan+, 2021])
- - Machine translation (WMT2020 ja-en [Barrault+, 2020])
- - Machine translation (WMT2020 en-ja [Barrault+, 2020])
- - Mathematical reasoning (MGSM [Shi+, 2023])
-
- ### English evaluation benchmarks
-
- We used the Language Model Evaluation Harness (v0.3.0). The details are as follows:
-
- - Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018])
- - Open-ended question answering (TriviaQA [Joshi+, 2017])
- - Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018])
- - Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021])
- - Natural language inference (HellaSwag [Zellers+, 2019])
- - Mathematical reasoning (GSM8K [Cobbe+, 2021])
 
 ## Usage
@@ -123,7 +85,7 @@ First install additional dependencies in [requirements.txt](./requirements.txt):
 ```
 pip install -r requirements.txt
 ```
 
- ### Instruction format Ver1.0
 This format must be adhered to strictly, as deviations may result in suboptimal outputs from the model.
 
 The template used to construct a prompt for the Instruct model is specified as follows:
@@ -132,15 +94,18 @@ The template used to construct a prompt for the Instruct model is specified as follows:
 ```
 <s>[INST] <<SYS>>\n{Instruction}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1} </s>[INST] {USER_MESSAGE_2}[/INST]
 ```
 
- Please be aware that ``<s> `` and ``</s> `` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while [INST] and [/INST] are considered regular strings.
 
- ### Use the instruct model Ver1.0
 
 ```python
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
- model_name = "tokyotech-llm/Swallow-70b-instruct-v1.0"
 model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 
@@ -161,135 +126,14 @@ decoded = tokenizer.batch_decode(generated_ids)
 print(decoded[0])
 ```
 
- ### Use the instruct model
-
- **Note:** Please be aware that the inference example is based on a model version older than 1.0.
-
- ```python
- import torch
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- model_name = "tokyotech-llm/Swallow-7b-instruct-hf"
-
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto")
-
- PROMPT_DICT = {
-     "prompt_input": (
-         "以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。"
-         "リクエストを適切に完了するための回答を記述してください。\n\n"
-         "### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:"
-     ),
-     "prompt_no_input": (
-         "以下に、あるタスクを説明する指示があります。"
-         "リクエストを適切に完了するための回答を記述してください。\n\n"
-         "### 指示:\n{instruction}\n\n### 応答:"
-     ),
- }
-
- def create_prompt(instruction, input=None):
-     """
-     Generates a prompt based on the given instruction and an optional input.
-     If input is provided, it uses the 'prompt_input' template from PROMPT_DICT.
-     If no input is provided, it uses the 'prompt_no_input' template.
-
-     Args:
-         instruction (str): The instruction describing the task.
-         input (str, optional): Additional input providing context for the task. Default is None.
-
-     Returns:
-         str: The generated prompt.
-     """
-     if input:
-         # Use the 'prompt_input' template when additional input is provided
-         return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
-     else:
-         # Use the 'prompt_no_input' template when no additional input is provided
-         return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
-
- # Example usage
- instruction_example = "以下のトピックに関する詳細な情報を提供してください。"
- input_example = "東京工業大学の主なキャンパスについて教えてください"
- prompt = create_prompt(instruction_example, input_example)
-
- input_ids = tokenizer.encode(
-     prompt,
-     add_special_tokens=False,
-     return_tensors="pt"
- )
-
- tokens = model.generate(
-     input_ids.to(device=model.device),
-     max_new_tokens=128,
-     temperature=0.99,
-     top_p=0.95,
-     do_sample=True,
- )
-
- out = tokenizer.decode(tokens[0], skip_special_tokens=True)
- print(out)
- ```
-
- ### Use the base model
-
- ```python
- import torch
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- model_name = "tokyotech-llm/Swallow-7b-hf"
-
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
-
- prompt = "東京工業大学の主なキャンパスは、"
- input_ids = tokenizer.encode(
-     prompt,
-     add_special_tokens=False,
-     return_tensors="pt"
- )
- tokens = model.generate(
-     input_ids.to(device=model.device),
-     max_new_tokens=128,
-     temperature=0.99,
-     top_p=0.95,
-     do_sample=True,
- )
-
- out = tokenizer.decode(tokens[0], skip_special_tokens=True)
- print(out)
- ```
-
 ## Training Datasets
 
- ### Continual Pre-Training
- The following datasets were used for continual pre-training.
-
- - [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- - [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- - Swallow Corpus
- - [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
-
- ### Instruction Tuning
-
- #### Ver1.0
 
 The following datasets were used for instruction tuning.
 
 - [OpenAssistant Conversations Dataset EN top-1 thread](https://huggingface.co/datasets/OpenAssistant/oasst2)
 - [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja): only the human utterances were used; the assistant responses were regenerated with the [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model.
-
- #### Old
-
- The following datasets were used for instruction tuning.
-
- - [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- - [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- - [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
 
 ## Risks and Limitations
 
 
 # Swallow
 
+ Our Swallow model has undergone continual pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT).
 Links to other models can be found in the index.
 
 # Model Release Updates
 
 We are excited to share the release schedule for our latest models:
+ - **April 26, 2024**: Released preview version 0.1 of our enhanced instruction-tuned models: [Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), [Swallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1), and [Swallow-70b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1).
 - **March 2, 2024**: Released the [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf).
 - **February 4, 2024**: Released the [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf).
 - **January 26, 2024**: Released the [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf).
 - **December 19, 2023**: Released the [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf).
+
 ## Swallow Model Index
+ |Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v0.1|
 |---|---|---|---|
 |7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|[Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1)|
 |7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A |
 
 ![logo](./logo.png)
 
 This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
 
 ## Model Details
 
 * **Model type**: Please refer to the LLaMA-2 technical report for details on the model architecture.
 * **Language(s)**: Japanese and English
 * **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process; see the sketch below.
 * **Contact**: swallow[at]nlp.c.titech.ac.jp
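
To make the vocabulary effect concrete, here is a minimal sketch comparing token counts for the same Japanese sentence under the Swallow tokenizer and the original Llama 2 tokenizer. The example sentence is ours, `meta-llama/Llama-2-7b-hf` is a gated repository that requires approved access, and the exact counts will vary with the input text.

```python
from transformers import AutoTokenizer

# Illustrative comparison: fewer tokens for the same text means shorter
# sequences and therefore faster generation.
text = "東京工業大学の主なキャンパスについて教えてください。"

swallow = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-7b-hf")
llama2 = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated; requires access approval

print("Swallow tokens:", len(swallow.encode(text, add_special_tokens=False)))
print("Llama 2 tokens:", len(llama2.encode(text, add_special_tokens=False)))
```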
 
 ### MT-Bench JA
 
+ * Note that models with the `v0.1` suffix are newer versions than their original counterparts with the `hf` suffix.
+ * We will update the score of `Swallow-70b-instruct-hf` soon.
 
+ |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities|
 |---|---|---|---|---|---|---|---|---|---|
+ | Swallow-7b-instruct-v0.1 |0.3435|0.4450|0.4720|0.1853|0.1920|0.2204|0.3015|0.4594|0.4720|
+ | Swallow-7b-instruct-hf |0.1833|0.2205|0.1975|0.1593|0.1045|0.1282|0.2672|0.1908|0.1980|
+ | Swallow-13b-instruct-v0.1 |0.3669|0.4816|0.5562|0.2769|0.1020|0.1505|0.4179|0.4347|0.5150|
+ | Swallow-13b-instruct-hf |0.2004|0.1932|0.2552|0.1507|0.1184|0.1285|0.2641|0.2434|0.2500|
+ | Swallow-70b-instruct-v0.1 |0.4513|0.4822|0.5353|0.3497|0.3492|0.2668|0.5553|0.4955|0.5767|
+ | Swallow-70b-instruct-hf |N/A|N/A|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
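
For reference, the Average column appears to be the unweighted mean of the eight category scores; checking the Swallow-7b-instruct-v0.1 row:

```python
# Category scores for Swallow-7b-instruct-v0.1, in the order of the table.
scores = [0.4450, 0.4720, 0.1853, 0.1920, 0.2204, 0.3015, 0.4594, 0.4720]
mean = sum(scores) / len(scores)
print(mean)  # 0.34345..., which rounds to the reported Average of 0.3435
```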
 
 ## Evaluation Benchmarks
 
+ ### MT-Bench JA
+
+ We used the [Japanese MT-Bench](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question) with the following artifacts:
+
+ - Implementation: FastChat [Zheng+, 2023] (commit #e86e70d0)
+ - Question: [Nejumi LLM-Leaderboard NEO, mtbench_ja_question_v3](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question/v3)
+ - Reference Answer: [Nejumi LLM-Leaderboard NEO, mtbench_ja_referenceanswer_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_referenceanswer/v1)
+ - Prompt for Judge: [Nejumi LLM-Leaderboard NEO, mtbench_ja_prompt_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_prompt/v1)
 
 ## Usage
 
 ```
 pip install -r requirements.txt
 ```
 
+ ### Instruction format Ver0.1
 This format must be adhered to strictly, as deviations may result in suboptimal outputs from the model.
 
 The template used to construct a prompt for the Instruct model is specified as follows:
 
 ```
 <s>[INST] <<SYS>>\n{Instruction}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1} </s>[INST] {USER_MESSAGE_2}[/INST]
 ```
 
+ Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while [INST] and [/INST] are considered regular strings.
+
+ For the "{Instruction}" part, we recommend using "あなたは誠実で優秀な日本人のアシスタントです。" ("You are a sincere and excellent Japanese assistant.")
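
As a concrete illustration, the sketch below assembles a first-turn prompt following this template. The helper name `build_prompt` is ours, not from this card; if the tokenizer ships a chat template, `tokenizer.apply_chat_template` would be the more robust route.

```python
# A minimal sketch of the Ver0.1 first-turn format (hypothetical helper).
DEFAULT_SYSTEM = "あなたは誠実で優秀な日本人のアシスタントです。"  # the recommended {Instruction}

def build_prompt(user_message: str, system: str = DEFAULT_SYSTEM) -> str:
    # <s> is the BOS special token; Hugging Face tokenizers match it in the text,
    # so encode the result with add_special_tokens=False to avoid a duplicate BOS.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"

print(build_prompt("東京工業大学の主なキャンパスについて教えてください。"))
```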
 
+ ### Use the instruct model Ver0.1
 
 ```python
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
+ model_name = "tokyotech-llm/Swallow-70b-instruct-v0.1"
 model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 
 print(decoded[0])
 ```
 
 ## Training Datasets
 
+ ### Instruction Tuning Ver0.1
 
 The following datasets were used for instruction tuning.
 
 - [OpenAssistant Conversations Dataset EN top-1 thread](https://huggingface.co/datasets/OpenAssistant/oasst2)
 - [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja): only the human utterances were used; the assistant responses were regenerated with the [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model (see the sketch below).
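
A sketch of that regeneration step, assuming plain `transformers` generation with Mixtral's chat template; the actual pipeline, prompts, and decoding settings are not specified in this card, and the example utterance is ours, not from the dataset.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical regeneration step: keep each human utterance from oasst1-21k-ja
# and produce a fresh assistant response with Mixtral-8x7B-Instruct-v0.1.
model_name = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

human_utterance = "日本の四季について教えてください。"  # illustrative example
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": human_utterance}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and keep only the generated response.
response = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```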
 
 ## Risks and Limitations