---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---
# Swallow

Our Swallow model has undergone continual pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The instruction-tuned versions were built with supervised fine-tuning (SFT).

Links to the other models can be found in the Swallow Model Index below.
# Model Release Updates

We are excited to share the release schedule for our latest models:

- **April 26, 2024**: Released version 0.1 of our enhanced instruction-tuned models as preview versions: [Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), [Swallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1), and [Swallow-70b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1).
- **March 2, 2024**: Released [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf).
- **February 4, 2024**: Released [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf).
- **January 26, 2024**: Released [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf).
- **December 19, 2023**: Released [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf).
## Swallow Model Index

|Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v0.1|
|---|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1) |
|7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A |
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1) |
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) |
## Swallow Model Index NVE (No Vocabulary Expansion)

|Model|Swallow-NVE-hf|Swallow-NVE-instruct-hf|
|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf) |
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf) | N/A |
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf) |

![logo](./logo.png)

This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
## Model Details

* **Model type**: Please refer to the [Llama 2 technical report](https://arxiv.org/abs/2307.09288) for details on the model architecture.
* **Language(s)**: Japanese, English
* **Tokenizer**: This model employs a tokenizer whose vocabulary has been expanded with Japanese data. It represents Japanese text with fewer tokens, which in turn speeds up inference (see the sketch after this list).
* **Contact**: swallow[at]nlp.c.titech.ac.jp
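
A quick way to see the effect of the expanded vocabulary is to compare token counts for the same Japanese sentence under the Swallow tokenizer and the original Llama 2 tokenizer. The snippet below is a minimal sketch, not part of this repository; the sample sentence is arbitrary and exact counts will vary.

```python
from transformers import AutoTokenizer

# Arbitrary Japanese sample sentence (illustration only).
text = "東京工業大学は日本の国立大学です。"

swallow_tok = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-7b-hf")
# Loading the Llama 2 tokenizer requires access to the gated meta-llama repository.
llama2_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

print("Swallow tokens:", len(swallow_tok.tokenize(text)))
print("Llama 2 tokens:", len(llama2_tok.tokenize(text)))
# Fewer tokens for the same text means fewer decoding steps, hence faster generation.
```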
## Instruct Model Performance

### MT-Bench JA

* Note that the models with the `v0.1` suffix are newer versions of their counterparts with the `hf` suffix.
* We will add the scores of `Swallow-70b-instruct-hf` and existing models soon.

|Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities|
|---|---|---|---|---|---|---|---|---|---|
| Swallow-7b-instruct-v0.1 |0.3435|0.4450|0.4720|0.1853|0.1920|0.2204|0.3015|0.4594|0.4720|
| Swallow-7b-instruct-hf |0.1833|0.2205|0.1975|0.1593|0.1045|0.1282|0.2672|0.1908|0.1980|
| Swallow-13b-instruct-v0.1 |0.3669|0.4816|0.5562|0.2769|0.1020|0.1505|0.4179|0.4347|0.5150|
| Swallow-13b-instruct-hf |0.2004|0.1932|0.2552|0.1507|0.1184|0.1285|0.2641|0.2434|0.2500|
| Swallow-70b-instruct-v0.1 |0.4513|0.4822|0.5353|0.3497|0.3492|0.2668|0.5553|0.4955|0.5767|
| Swallow-70b-instruct-hf |N/A|N/A|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
## Evaluation Benchmarks

### MT-Bench JA

We used [Japanese MT-Bench](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question) to assess the instruction-following capabilities of the models.
We utilized the following artifacts:

- Implementation: FastChat [Zheng+, 2023] (commit #e86e70d0)
- Question: [Nejumi LLM-Leaderboard NEO, mtbench_ja_question_v3](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question/v3)
- Reference Answer: [Nejumi LLM-Leaderboard NEO, mtbench_ja_referenceanswer_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_referenceanswer/v1)
- Prompt for Judge: [Nejumi LLM-Leaderboard NEO, mtbench_ja_prompt_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_prompt/v1)
## Usage

First, install the additional dependencies listed in [requirements.txt](./requirements.txt):

```sh
pip install -r requirements.txt
```
### Instruction format Ver0.1

This format must be followed strictly; deviating from it may degrade the model's outputs.

The template used to construct a prompt for the Instruct model is specified as follows:

```
<s>[INST] <<SYS>>\n{Instruction}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1} </s>[INST] {USER_MESSAGE_2} [/INST]
```

Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while ``[INST]`` and ``[/INST]`` are treated as regular strings.

For the "{Instruction}" part, we recommend using "あなたは誠実で優秀な日本人のアシスタントです。" ("You are a sincere and excellent Japanese assistant.").
### Use the instruct model Ver0.1

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "tokyotech-llm/Swallow-70b-instruct-v0.1"

# device_map="auto" places the model on the available GPU(s),
# so an explicit model.to(device) call is not needed afterwards.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
    {"role": "user", "content": "東京工業大学の主なキャンパスについて教えてください"}
]

# apply_chat_template renders the messages into the instruction format described above.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

generated_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
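
The base (non-instruct) models in the index above are plain text-completion models, so they can be used without the chat template. Below is a minimal sketch under that assumption; the chosen checkpoint and the Japanese prompt are only examples.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Any Swallow base checkpoint from the index should work the same way.
model_name = "tokyotech-llm/Swallow-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Plain text completion: the model simply continues the prompt.
prompt = "東京工業大学の主なキャンパスは、"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```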
## Training Datasets

### Instruction Tuning Ver0.1

The following datasets were used for the instruction tuning:

- [OpenAssistant Conversations Dataset EN top-1 thread](https://huggingface.co/datasets/OpenAssistant/oasst2)
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja): the human utterances were used, but the original assistant responses were not. Instead, new responses were generated with the [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model.
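
If you want to inspect these datasets yourself, they can be loaded with the `datasets` library. This is only an illustrative sketch; split names and fields follow the respective dataset cards, not this repository.

```python
from datasets import load_dataset

# Load the OpenAssistant Conversations Dataset (oasst2) and print one record.
# The Japanese variant can be inspected the same way by swapping in "llm-jp/oasst1-21k-ja".
oasst2 = load_dataset("OpenAssistant/oasst2", split="train")

print(oasst2)     # dataset summary: number of rows and column names
print(oasst2[0])  # first record as a plain Python dict
```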
## Risks and Limitations

The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements

We thank Meta Research for releasing Llama 2 under an open license for others to build on.

Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Authors

Here are the team members:

- From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
  - [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
  - [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
  - [Hiroki Iida](https://meshidenn.github.io/)
  - [Mengsay Loem](https://loem-ms.github.io/)
  - [Shota Hirai](https://huggingface.co/Kotemo428)
  - [Kakeru Hattori](https://aya-se.vercel.app/)
  - [Masanari Ohi](https://twitter.com/stjohn2007)
- From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
  - [Rio Yokota](https://twitter.com/rioyokota)
  - [Kazuki Fujii](https://twitter.com/okoge_kaz)
  - [Taishi Nakamura](https://twitter.com/Setuna7777_2)
  - [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto)
  - [Ishida Shigeki](https://www.wantedly.com/id/reborn27)