[Curiosity MARS model logo]

MARS is the first iteration of Curiosity Technology models, based on Llama 3 8B.

We trained MARS on an in-house Turkish dataset, as well as on several open-source datasets and their Turkish translations. We intend to release these Turkish translations in the near future so the community can work with them.

MARS was trained for 3 days on 4x A100 GPUs.

Model Details

  • Base Model: Meta Llama 3 8B Instruct
  • Training Dataset: In-house & Translated Open Source Turkish Datasets
  • Training Method: LoRA Fine Tuning (see the configuration sketch below)
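
The card does not publish the exact LoRA hyperparameters. As a rough illustration of how a LoRA fine-tune of Llama 3 8B is typically set up with the peft library, here is a minimal sketch; the rank, alpha, dropout, and target modules below are assumptions chosen for illustration, not the values used to train MARS.

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# All hyperparameters here are illustrative assumptions, not the MARS training config.
base_id = "meta-llama/Meta-Llama-3-8B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach low-rank adapters to the attention projections; only these are trained,
# the 8B base weights stay frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapter weights are a small fraction of the total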

How to use

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the generate() function. Let's see examples of both.

Transformers pipeline

import transformers
import torch

model_id = "curiositytech/MARS"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
    {"role": "user", "content": "Sen kimsin?"},
]

# Stop generation on either the default EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# With chat-style input, "generated_text" holds the whole conversation;
# the last element is the newly generated assistant message.
print(outputs[0]["generated_text"][-1])

Transformers AutoModelForCausalLM

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "curiositytech/MARS"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
    {"role": "user", "content": "Sen kimsin?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Same stopping criteria as above: default EOS plus Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Slice off the prompt tokens and decode only the newly generated reply.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
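
The snippets above return the reply only after generation finishes. If you prefer to see tokens as they are produced, transformers ships a TextStreamer that can be passed to generate(). The sketch below is an optional convenience, not part of the original card; it reuses the model, tokenizer, input_ids, and terminators from the previous example.

from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt hides the input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    streamer=streamer,
)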