State-Of-The-Art Korean-RAG LM
Markr AI's RAG LLM (based on Ko-Mixtral)
By MarkrAI (AI Researchers)
Base model: DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2, fine-tuned with QLoRA.

Training configuration (see the configuration sketch after this list):
- Quantization: 4-bit
- lora_r: 64
- lora_alpha: 64
- lora_dropout: 0.05
- lora_target_modules: [embed_tokens, q_proj, k_proj, v_proj, o_proj, gate, w1, w2, w3, lm_head]
- Epochs: 3
- Batch size: 64
- Learning rate: 1e-5
- LR scheduler: linear
- Warmup ratio: 0.06

Datasets:
- Private dataset: HumanF-MarkrAI/Korean-RAG-ver2 (built using AIHub datasets).
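As a rough guide, the hyperparameters above can be expressed with standard `peft`/`transformers` configuration objects. This is a minimal sketch only: the 4-bit quantization details (quant type, compute dtype), the output directory, and whether the batch size of 64 is per-device or global are assumptions not stated on the card.

```python
# Sketch only: maps the listed hyperparameters onto standard peft/transformers
# config objects. Quant type (nf4), compute dtype, output_dir, and per-device
# vs. global batch size are assumptions; the card does not specify them.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # "4-bit quantization"
    bnb_4bit_quant_type="nf4",              # assumed
    bnb_4bit_compute_dtype=torch.float16,   # assumed
)

lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=[
        "embed_tokens", "q_proj", "k_proj", "v_proj", "o_proj",
        "gate", "w1", "w2", "w3", "lm_head",
    ],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="rag-ko-mixtral-qlora",      # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=64,         # card only says "Batch size: 64"
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
)
```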
### Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "MarkrAI/RAG-KO-Mixtral-7Bx2-v1.15"

# Load the model in half precision and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
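Once loaded, the model can be used for generation as with any causal LM. The snippet below is a minimal usage sketch: the prompt wording and generation settings are illustrative assumptions, not the card's documented prompt format.

```python
# Hypothetical usage sketch: prompt wording and generation settings are
# illustrative assumptions, not the model's documented prompt format.
prompt = "Answer the question using the following document.\nDocument: ...\nQuestion: ..."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,   # assumed answer-length limit
        do_sample=False,      # greedy decoding for reproducibility
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```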