---
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
language:
- ko
- en
tags:
- meta
- llama
- llama-3
- akallama
library_name: transformers
---
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
  <img src="https://github.com/0110tpwls/project/blob/master/image_720.png?raw=true" width="40%"/>
</a>


# AKALLAMA

AkaLlama is a series of Korean language models designed for practical usability across a wide range of tasks.
The initial model, AkaLlama-v0.1, is a fine-tuned version of Meta-Llama-3-70b-Instruct. It has been trained on a custom mix of publicly available datasets curated by the MIR Lab.
Our goal is to explore cost-effective ways to adapt high-performing LLMs for specific use cases, such as different languages (e.g., Korean) or domains (e.g., organization-specific chatbots).

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub.

- **Developed by:** [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/)
- **Language(s) (NLP):** Korean, English
- **License:** llama3
- **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)

## How to use

This repo provides full model weight files for AkaLlama-70B-v0.1.

### Use with transformers

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "mirlab/AkaLlama-llama3-70b-v0.1"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="auto",
)

system_prompt = """
"""

messages = [
    {"role": "system", "content": "system_prompt"},
    {"role": "user", "content": "네 이름은 뭐야?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
        messages, 
        tokenize=False, 
        add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
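
The bf16 pipeline above needs roughly 140 GB of GPU memory for the full 70B weights. If that does not fit your hardware, one common option is 4-bit quantization with bitsandbytes; the sketch below is not part of the original card, and its quantization settings are ordinary defaults rather than tuned recommendations.

```python
# A rough sketch (not from the original card) of loading AkaLlama in 4-bit with
# bitsandbytes when the full bf16 weights do not fit in memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mirlab/AkaLlama-llama3-70b-v0.1"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # normalized-float 4-bit quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                     # spread layers across available GPUs
)
```

From here you can build a `text-generation` pipeline on top of `model` and `tokenizer` and reuse the chat-template code above unchanged.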

## Training Details
### Training Procedure

We trained AkaLlama using a preference learning alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691).
Our training pipeline is almost identical to that of [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1), aside from minor hyperparameter changes.
Please check out Hugging Face's [alignment handbook](https://github.com/huggingface/alignment-handbook?tab=readme-ov-file) for further details, including the chat template.
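
For readers who want to reproduce a similar setup, the sketch below shows what an ORPO run looks like with TRL's `ORPOTrainer`. The dataset name, hyperparameters, and output path are placeholders, not AkaLlama's actual recipe.

```python
# A minimal ORPO sketch with TRL. All names and hyperparameters below are
# illustrative placeholders; a real 70B run also needs a multi-GPU launcher
# (e.g. accelerate/DeepSpeed as in the alignment handbook).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Meta-Llama-3-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# ORPO trains on preference pairs: "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("your-org/your-preference-dataset", split="train")  # placeholder

args = ORPOConfig(
    output_dir="akallama-orpo",        # placeholder
    beta=0.1,                          # weight of the odds-ratio term (lambda in the ORPO paper)
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,               # named processing_class in newer TRL releases
)
trainer.train()
```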

### Training Data

Detailed descriptions regarding training data will be announced later.

### Examples

<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
  <img src="https://github.com/0110tpwls/project/blob/master/image (8).png?raw=true" width="60%"/>
</a>

<details>
<summary><b>Math Solving [CLICK TO EXPAND]</b></summary>
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
  <img src="https://github.com/0110tpwls/project/blob/master/image (9).png?raw=true" width="60%"/>
</a>
</details>

<details>
<summary><b>Writing [CLICK TO EXPAND]</b></summary>
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
  <img src="https://github.com/0110tpwls/project/blob/master/image (13).png?raw=true" width="60%"/>
</a>

<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
  <img src="https://github.com/0110tpwls/project/blob/master/image (7).png?raw=true" width="60%"/>
</a>
</details>

<details>
<summary><b>Logical Reasoning [CLICK TO EXPAND]</b></summary>
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
  <img src="https://github.com/0110tpwls/project/blob/master/image (15).png?raw=true" width="60%"/>
</a>
</details>

<details>
<summary><b>Coding [CLICK TO EXPAND]</b></summary>
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
  <img src="https://github.com/0110tpwls/project/blob/master/image (11).png?raw=true" width="60%"/>
</a>
</details>

You can find more examples at [our project page](https://yonsei-mir.github.io/AkaLLaMA-page).

## Special Thanks

- Data Center of the Department of Artificial Intelligence at Yonsei University, for providing the computational resources