---
license: llama3.1
---
This model is based on Meta-Llama-3.1-8B-Instruct and is governed by the Meta Llama 3.1 license agreement:
https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct

An Indonesian continued-pretraining and Formax instruct tune of Llama 3.1. It excels in Bahasa Indonesia while retaining Formax's instruct characteristics, making it well suited for Indonesian-language text tasks.

Training:
- 8192-token sequence length
- Roughly 6 days on 2x RTX 3090 Ti
- 1 epoch over a large dataset
- LoRA at rank 64 and alpha 128, yielding ~2% trainable weights (see the sketch below)
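
For reference, a minimal sketch of a comparable LoRA setup using the Hugging Face `peft` library. The rank and alpha match the card; the target modules and dropout below are assumptions, not the authors' exact configuration:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct"
)

lora_config = LoraConfig(
    r=64,                                 # rank, as stated above
    lora_alpha=128,                       # alpha, as stated above
    lora_dropout=0.05,                    # assumption: a common default
    target_modules=["q_proj", "k_proj",   # assumption: attention projections
                    "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # should report roughly 2% trainable weights
```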

Quants:

BF16: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Indo-Formax-v1.0

GGUF: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Indo-Formax-v1.0-GGUF
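
A minimal sketch of loading the BF16 weights with `transformers`; the dtype and `device_map` settings here are ordinary defaults, not requirements from the repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ArliAI/Llama-3.1-8B-ArliAI-Indo-Formax-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the repo ships BF16 weights
    device_map="auto",           # place layers across available devices
)
```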


Suggested prompting strategy:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a [give it a role]. You are tasked with [give it a task]. Reply in the following format: [requested format of reply]<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
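
This template does not need to be assembled by hand. A minimal sketch using the tokenizer's bundled chat template, assuming the repo ships the standard Llama 3.1 template and reusing `tokenizer` and `model` from the loading snippet above:

```python
messages = [
    {"role": "system", "content": (
        "You are a helpful assistant. You are tasked with answering "
        "questions in Bahasa Indonesia. Reply in plain paragraphs."
    )},
    {"role": "user", "content": "Apa ibu kota Indonesia?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```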