|
--- |
|
language: |
|
- en |
|
- ko |
|
license: llama3 |
|
library_name: transformers |
|
tags: |
|
- llama-cpp |
|
- gguf-my-repo |
|
base_model: |
|
- meta-llama/Meta-Llama-3-8B |
|
- jeiku/Average_Test_v1 |
|
- MLP-KTLim/llama-3-Korean-Bllossom-8B |
|
--- |
|
|
|
|
|
<a href="https://github.com/MLP-Lab/Bllossom"> |
|
<img src="https://github.com/teddysum/bllossom/blob/main//bllossom_icon.png?raw=true" width="40%" height="50%"> |
|
</a> |
|
|
|
# Bllossom | [Demo]() | [Homepage](https://www.bllossom.ai/) | [Github](https://github.com/MLP-Lab/Bllossom) |
|
|
|
- This model runs on CPU; for faster inference, this quantized build can run on a GPU with 8GB of memory! [Colab example](https://colab.research.google.com/drive/129ZNVg5R2NPghUEFHKF0BRdxsZxinQcJ?usp=drive_link)
|
|
|
```text
Our Bllossom team has released Bllossom, a Korean-English bilingual language model!
With support from the Seoultech Supercomputing Center, the entire model was fully fine-tuned
on more than 100GB of Korean data, making it a Korean-enhanced bilingual model!
Have you been looking for a model that is strong in Korean?
 - A first for Korean: vocabulary expanded with more than 30,000 Korean tokens
 - Handles Korean context roughly 25% longer than Llama3
 - Korean-English knowledge linking using a Korean-English parallel corpus (pretraining)
 - Fine-tuning on data crafted by linguists with Korean culture and language in mind
 - Reinforcement learning

All of this is applied at once, and Bllossom is available for commercial use.
Build your own model with it!
This model runs on CPU; for faster inference, this quantized build can run on a GPU with 6GB of memory!

1. Bllossom-8B is a practicality-oriented language model built in collaboration with linguists
   from Seoultech, Teddysum, and the Yonsei University language resource lab!
   We will keep maintaining it through continuous updates, so please make good use of it 🙂
2. We also have the ultra-powerful Advanced-Bllossom 8B and 70B models, as well as
   vision-language models! (Contact us individually if you are interested!)
3. Bllossom has been accepted for presentation at NAACL 2024 and LREC-COLING 2024 (oral).
4. We will keep releasing updated language models! Anyone interested in joint research to
   strengthen Korean (especially papers) is always welcome! In particular, teams that can lend
   even a small amount of GPU time, please contact us anytime. We will help you build what you
   want to make.
```
|
|
|
The Bllossom language model is a Korean-English bilingual model based on the open-source Llama3. It strengthens the connection between Korean and English knowledge, and has the following features:
|
|
|
* **Knowledge Linking**: Linking Korean and English knowledge through additional training |
|
* **Vocabulary Expansion**: Expansion of Korean vocabulary to enhance Korean expressiveness
|
* **Instruction Tuning**: Tuning using custom-made instruction following data specialized for Korean language and Korean culture |
|
* **Human Feedback**: DPO has been applied |
|
* **Vision-Language Alignment**: Aligning the vision transformer with this language model |
|
|
|
**This model was developed by [MLPLab at Seoultech](http://mlp.seoultech.ac.kr), [Teddysum](http://teddysum.ai/), and [Yonsei Univ](https://sites.google.com/view/hansaemkim/hansaem-kim).**
|
**This model was converted to GGUF format from [`MLP-KTLim/llama-3-Korean-Bllossom-8B`](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
|
Refer to the [original model card](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) for more details on the model.** |
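As a rough back-of-envelope check on why this quantized file fits on a small GPU (an illustrative approximation only; the exact file size depends on the quantization mix llama.cpp assigns per layer):

```python
# Rough size estimate for an 8B-parameter model quantized to ~4.8 bits per
# weight (an assumed average for Q4_K_M; not an exact figure for this file).
params = 8.0e9          # ~8 billion weights
bits_per_weight = 4.8   # assumed average effective bit width
size_gib = params * bits_per_weight / 8 / 2**30
print(f"~{size_gib:.1f} GiB")  # roughly 4-5 GiB, hence the 6-8 GB VRAM guidance
```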
|
|
|
|
|
## Demo Video |
|
|
|
<div style="display: flex; justify-content: space-between;"> |
|
<!-- First column -->
|
<div style="width: 49%;"> |
|
<a> |
|
<img src="https://github.com/lhsstn/lhsstn/blob/main/x-llava_dem.gif?raw=true" style="width: 100%; height: auto;"> |
|
</a> |
|
<p style="text-align: center;">Bllossom-V Demo</p> |
|
</div> |
|
|
|
<!-- Second column (if needed) -->
|
<div style="width: 49%;"> |
|
<a> |
|
<img src="https://github.com/lhsstn/lhsstn/blob/main/bllossom_demo_kakao.gif?raw=true" style="width: 70%; height: auto;"> |
|
</a> |
|
<p style="text-align: center;">Bllossom Demo (Kakao)</p>
|
</div> |
|
</div> |
|
|
|
|
|
|
|
## NEWS |
|
* [2024.05.08] Vocab Expansion Model Update.

* [2024.04.25] We released Bllossom v2.0, based on llama-3.

* [2023.12] We released Bllossom-Vision v1.0, based on Bllossom.

* [2023.08] We released Bllossom v1.0, based on llama-2.

* [2023.07] We released Bllossom v0.7, based on polyglot-ko.
|
|
|
|
|
## Example code |
|
```python |
|
# Install llama-cpp-python with CUDA support and download the GGUF file
# (the `!` prefix is for notebook/Colab cells; drop it in a plain shell):
!CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
!huggingface-cli download MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M --local-dir='YOUR-LOCAL-FOLDER-PATH'
|
|
|
from llama_cpp import Llama |
|
from transformers import AutoTokenizer |
|
|
|
model_id = 'MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M' |
|
tokenizer = AutoTokenizer.from_pretrained(model_id) |
|
model = Llama( |
|
model_path='YOUR-LOCAL-FOLDER-PATH/llama-3-Korean-Bllossom-8B-Q4_K_M.gguf', |
|
n_ctx=512, |
|
n_gpu_layers=-1 # Number of model layers to offload to GPU |
|
) |
|
|
|
PROMPT = \
'''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.
You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.'''
|
|
|
instruction = 'Your Instruction' |
|
|
|
messages = [ |
|
{"role": "system", "content": f"{PROMPT}"}, |
|
{"role": "user", "content": f"{instruction}"} |
|
] |
|
|
|
prompt = tokenizer.apply_chat_template( |
|
messages, |
|
tokenize = False, |
|
add_generation_prompt=True |
|
) |
|
|
|
generation_kwargs = { |
|
"max_tokens":512, |
|
"stop":["<|eot_id|>"], |
|
"top_p":0.9, |
|
"temperature":0.6, |
|
"echo":True, # Echo the prompt in the output |
|
} |
|
|
|
response_msg = model(prompt, **generation_kwargs)

print(response_msg['choices'][0]['text'][len(prompt):])
|
``` |
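For reference, the string that `apply_chat_template` produces for the messages above follows the Llama 3 chat format, which can be sketched by hand (a minimal illustration; the template bundled with the tokenizer is authoritative):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Hand-written sketch of the Llama 3 chat format that
    tokenizer.apply_chat_template(..., add_generation_prompt=True) emits."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        # The trailing assistant header cues the model to start its reply;
        # generation then stops at the next <|eot_id|> token.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful AI assistant.", "Hello!"))
```

This also shows why `stop=["<|eot_id|>"]` is passed to the model: `<|eot_id|>` marks the end of each turn.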
|
|
|
|
|
|
|
## Citation |
|
**Language Model** |
|
```text |
|
@misc{bllossom, |
|
author = {ChangSu Choi, Yongbin Jeong, Seoyoon Park, InHo Won, HyeonSeok Lim, SangMin Kim, Yejee Kang, Chanhyuk Yoon, Jaewan Park, Yiseul Lee, HyeJin Lee, Younggyun Hahm, Hansaem Kim, KyungTae Lim}, |
|
title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean}, |
|
year = {2024}, |
|
journal = {LREC-COLING 2024}, |
|
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}}
}
|
``` |
|
|
|
**Vision-Language Model** |
|
```text |
|
@misc{bllossom-V, |
|
author = {Dongjae Shin, Hyunseok Lim, Inho Won, Changsu Choi, Minjun Kim, Seungwoo Song, Hangyeol Yoo, Sangmin Kim, Kyungtae Lim}, |
|
title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment}, |
|
year = {2024}, |
|
publisher = {GitHub}, |
|
journal = {NAACL 2024 findings}, |
|
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}}
}
|
``` |
|
|
|
## Contact |
|
- KyungTae Lim, Professor at Seoultech. `[email protected]`

- Younggyun Hahm, CEO of Teddysum. `[email protected]`

- Hansaem Kim, Professor at Yonsei. `[email protected]`
|
|
|
## Contributor |
|
- ChangSu Choi, [email protected]

- Sangmin Kim, [email protected]

- Inho Won, [email protected]

- Minjun Kim, [email protected]

- Seungwoo Song, [email protected]

- Dongjae Shin, [email protected]

- Hyeonseok Lim, [email protected]

- Jeonghun Yuk, [email protected]

- Hangyeol Yoo, [email protected]

- Seohyun Song, [email protected]