
MarkrAI/RAG-KO-Mixtral-7Bx2-v2.0

Model Details

Model Developers

MarkrAI - AI Researchers

Base Model

DopeorNope/Ko-Mixtral-v1.4-MoE-7Bx2.

Instruction Tuning Method

Instruction tuning was performed with QLoRA, using the settings below (a configuration sketch follows the list).

4-bit quantization
Lora_r: 64
Lora_alpha: 64
Lora_dropout: 0.05
Lora_target_modules: [embed_tokens, q_proj, k_proj, v_proj, o_proj, gate, w1, w2, w3, lm_head]
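
For reference, a minimal sketch of how these settings could be expressed with the peft and bitsandbytes libraries is shown below. The NF4 quantization type, double quantization, and fp16 compute dtype are assumptions; the card only states that 4-bit quantization was used.

import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization settings (NF4, double quantization, and fp16 compute are assumed).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter settings mirroring the values listed above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["embed_tokens", "q_proj", "k_proj", "v_proj",
                    "o_proj", "gate", "w1", "w2", "w3", "lm_head"],
    task_type="CAUSAL_LM",
)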

Hyperparameters

Epoch: 5
Batch size: 64
Learning_rate: 1e-5
Learning scheduler: linear
Warmup_ratio: 0.06
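
As a rough illustration, these hyperparameters could be passed to transformers' TrainingArguments as below. How the batch size of 64 splits into per-device batch size and gradient accumulation steps is an assumption; the card does not specify it.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rag-ko-mixtral-qlora",  # hypothetical output directory
    num_train_epochs=5,
    per_device_train_batch_size=4,      # 4 x 16 accumulation = effective batch size 64 (assumed split)
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    fp16=True,
)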

Datasets

Private datasets: HumanF-MarkrAI/Korean-RAG-ver2

Created using AIHub datasets.
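
The dataset is private, so the sketch below only works for accounts that have been granted access; the split name is an assumption.

from datasets import load_dataset

# Requires access to the private repository and a logged-in Hugging Face token.
rag_dataset = load_dataset("HumanF-MarkrAI/Korean-RAG-ver2", split="train", token=True)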

Implementation Code

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "MarkrAI/RAG-KO-Mixtral-7Bx2-v2.0"

# Load the model in half precision and place it automatically across available devices.
markrAI_RAG = AutoModelForCausalLM.from_pretrained(
        repo,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map='auto'
)

# Load the matching tokenizer from the same repository.
markrAI_RAG_tokenizer = AutoTokenizer.from_pretrained(repo)
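
A brief generation example with the loaded model follows. The prompt is only an illustrative placeholder; the card does not specify the prompt template expected for RAG-style inputs.

# Illustrative Korean prompt; replace with the document-grounded prompt format you use.
prompt = "์ฃผ์–ด์ง„ ๋ฌธ์„œ๋ฅผ ์ฐธ๊ณ ํ•˜์—ฌ ์งˆ๋ฌธ์— ๋‹ตํ•˜์„ธ์š”."
inputs = markrAI_RAG_tokenizer(prompt, return_tensors="pt").to(markrAI_RAG.device)

output_ids = markrAI_RAG.generate(**inputs, max_new_tokens=256, do_sample=False)
print(markrAI_RAG_tokenizer.decode(output_ids[0], skip_special_tokens=True))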

Model Benchmark

  • Coming soon...