---
license: llama3
base_model: catallama/CataLlama-v0.1-Base
tags:
- llama
- llama-3
- Catalan
model-index:
- name: CataLlama-v0.1-Instruct-SFT
  results: []
datasets:
- catallama/Catalan-Instruct
language:
- ca
- en
pipeline_tag: text-generation
---

# NOTE: [CataLlama-v0.2](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged) is out. Please use that one instead.


![](https://huggingface.co/catallama/CataLlama-v0.1-Instruct-DPO/resolve/main/CataLlama-v0.1.png)


# CataLlama-v0.1-Instruct-SFT

**CataLlama-v0.1-Instruct-SFT** is an instruct fine-tune of [catallama/CataLlama-v0.1-Base](https://huggingface.co/catallama/CataLlama-v0.1-Base) on the [catallama/Catalan-Instruct](https://huggingface.co/datasets/catallama/Catalan-Instruct) dataset.

CataLlama was trained on roughly **445 million new tokens** in three separate stages. This model is the result of the second of those stages.

The model shows improved proficiency with the Catalan language.

**This is an instruction fine-tuned model proficient at the following tasks in Catalan:**

- *Information extraction (suitable for RAG)*
- *Named Entity Recognition (NER)*
- *Translation from English to Catalan and Catalan to English*
- *Summarization - both short form and long form*
- *Sentiment analysis*
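
For illustration, prompts for these tasks can be written as plain chat messages. The examples below are hypothetical (not taken from the training data) and can be passed as `messages` to the generation snippet shown later in this card.

```python
# Hypothetical task prompts, purely illustrative.
# Each list can be used as `messages` in the pipeline example below.

translation_request = [
    # "Translate into Catalan: ..."
    {"role": "user", "content": "Tradueix al català: The weather in Barcelona is lovely today."},
]

ner_request = [
    # "Extract the named entities from the following text: ..."
    {"role": "user", "content": (
        "Extreu les entitats anomenades del text següent: "
        "La Maria treballa a Barcelona per a la Generalitat de Catalunya."
    )},
]
```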

The model reaches a validation loss of 0.8528 after two epochs.

**NOTE:** The model was initially trained for one epoch on the `train` split of the dataset; after manual evaluation, I decided to train for a second epoch.

The first epoch was logged every 100 steps and the second every 200 steps; the train and eval losses for both epochs are included below.

*The `train` split of the dataset was shuffled before the second epoch. The `test` split is identical in both epochs and was not shuffled.*
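
A minimal sketch of that shuffling with the `datasets` library (the seed here is an arbitrary assumption, not the one used in training):

```python
from datasets import load_dataset

dataset = load_dataset("catallama/Catalan-Instruct")

# Reshuffle the train split before the second epoch;
# the test split is left untouched.
dataset["train"] = dataset["train"].shuffle(seed=42)  # hypothetical seed
```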


**Model developers** [Laurentiu Petrea](https://www.linkedin.com/in/laurentiupetrea/) based on Llama-3 from Meta.

**Model Architecture** CataLlama is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and direct preference optimisation (DPO) to align with human preferences for helpfulness and safety.

**License** The model uses the llama-3 license available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)


## Benchmarks

| Benchmark          | Value  |
| ------------------ | ------ |
| MMLU 5 shot        | 55.28  |
| GSM8K cot 8 shot   | 51.63  |


### Use with transformers

See the snippet below for usage with Transformers:

**The model follows the same prompt template as Llama-3 Instruct**

```python
import transformers
import torch

model_id = "catallama/CataLlama-v0.1-Instruct-SFT"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Ei com estàs avui?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages, 
    tokenize=False, 
    add_generation_prompt=True
)

outputs = pipeline(
    prompt,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

print(outputs[0]["generated_text"][len(prompt):])
```
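
With Llama-3-style chat templates, generation is typically stopped on the end-of-turn token `<|eot_id|>` in addition to the regular EOS token. If generations run past the end of the answer, passing both as terminators can help. This is a sketch following Meta's Llama-3 usage example; whether it is needed here depends on this model's tokenizer configuration.

```python
# Stop on both the standard EOS token and Llama-3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=1024,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
```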

## Training procedure

The model was trained **with the same prompt template as Llama-3 Instruct**.

The model was trained for two epochs on **6x A100 80GB GPUs using DeepSpeed ZeRO Stage-3** without CPU offloading.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- distributed_type: multi-GPU
- num_devices: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
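
As a rough illustration, these hyperparameters might map onto Hugging Face `TrainingArguments` with a DeepSpeed ZeRO Stage-3 config as sketched below. The output path, batch size, and bf16 flag are assumptions, not values reported in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./catallama-sft",        # hypothetical path
    learning_rate=1e-5,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    warmup_steps=100,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    per_device_train_batch_size=2,       # assumed, not reported
    bf16=True,                           # assumed to match the bfloat16 inference above
    deepspeed={                          # ZeRO Stage-3, no CPU offloading
        "zero_optimization": {"stage": 3},
        "bf16": {"enabled": True},
        "train_micro_batch_size_per_gpu": "auto",
    },
)
```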

### Training results

**Epoch 1**

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0938        | 0.11  | 100  | 1.0779          |
| 1.0186        | 0.22  | 200  | 1.0209          |
| 1.0157        | 0.32  | 300  | 0.9808          |
| 0.9588        | 0.43  | 400  | 0.9489          |
| 0.9039        | 0.54  | 500  | 0.9244          |
| 0.9111        | 0.65  | 600  | 0.9086          |
| 0.8918        | 0.75  | 700  | 0.8961          |
| 0.8971        | 0.86  | 800  | 0.8886          |
| 0.8631        | 0.97  | 900  | 0.8846          |


**Epoch 2**

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8002        | 0.22  | 200  | 0.8989          |
| 0.8068        | 0.43  | 400  | 0.8835          |
| 0.7722        | 0.65  | 600  | 0.8654          |
| 0.7805        | 0.86  | 800  | 0.8528          |


## Intended Use

**Note:** This model is not intended to beat benchmarks, but to demonstrate techniques for augmenting LLMs with new languages and to help preserve rare languages as part of our world heritage.

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.

**Note:** Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.