
Model Card for AmanMussa/llama2-kazakh-7b

LLaMA 2 model for the Kazakh language

Model Details

This model is a parameter-efficient fine-tune of Meta's LLaMA 2 (7B) for the Kazakh language.
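
The snippet below is a minimal usage sketch, assuming the merged FP16 weights are hosted under AmanMussa/llama2-kazakh-7b and that the standard transformers text-generation API applies; the Kazakh prompt is only an illustration.

```python
# Minimal usage sketch; model repo name is taken from this card,
# the prompt and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AmanMussa/llama2-kazakh-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the repository ships FP16 safetensors
    device_map="auto",
)

# Ask a question in Kazakh ("Which city is the capital of Kazakhstan?").
prompt = "Қазақстанның астанасы қай қала?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```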

Model Description

  • Developed by: Mussa Aman
  • Model type: Question answering
  • Language(s) (NLP): Kazakh
  • License: MIT
  • Finetuned from model: Meta LLaMA 2 (7B)
  • Model size: 6.74B parameters (FP16, safetensors)


Out-of-Scope Use

The model still makes occasional mistakes at inference time, so its answers should be verified before being relied on in any critical application.

Bias, Risks, and Limitations

The model is limited by its 7B parameter count, and the training dataset still needs further cleaning and optimization.

Training Data

The model was fine-tuned on AmanMussa/instructions_kaz_version_1, a set of self-instruct instruction–response pairs in Kazakh (https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1).
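
A minimal sketch of inspecting the data with the Hugging Face datasets library; the split name is an assumption about the dataset layout.

```python
# Load and inspect the training data; the "train" split name is assumed.
from datasets import load_dataset

ds = load_dataset("AmanMussa/instructions_kaz_version_1")
print(ds)              # available splits and their sizes
print(ds["train"][0])  # one instruction/response pair in Kazakh
```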

Evaluation

Run summary:

  • train/epoch: 1.0
  • train/global_step: 3263
  • train/learning_rate: 0.0
  • train/loss: 0.975
  • train/total_flos: 5.1749473473500774e+17
  • train/train_loss: 0.38281
  • train/train_runtime: 13086.8735 s (~3.6 hours)
  • train/train_samples_per_second: 3.989
  • train/train_steps_per_second: 0.249
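
A run summary like the one above is what the transformers Trainer logs when training finishes. The sketch below outlines a parameter-efficient (LoRA) fine-tune in that spirit; the hyperparameters, LoRA target modules, and dataset column names are illustrative assumptions, not the exact recipe behind this card.

```python
# Hypothetical LoRA fine-tuning sketch with transformers + peft;
# all hyperparameters below are assumptions, not this card's recipe.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "meta-llama/Llama-2-7b-hf"  # gated; requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"  # bf16 suits the A100
)

# Wrap the frozen base model with small trainable LoRA adapters.
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

data = load_dataset("AmanMussa/instructions_kaz_version_1")

def tokenize(batch):
    # Column names are assumptions about the dataset schema.
    text = [f"{i}\n{o}" for i, o in zip(batch["instruction"], batch["output"])]
    return tokenizer(text, truncation=True, max_length=512)

train = data["train"].map(tokenize, batched=True,
                          remove_columns=data["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-kazakh-7b", num_train_epochs=1,
                           per_device_train_batch_size=4, bf16=True,
                           logging_steps=50),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # logs a final run summary similar to the one above
```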

Environment

  • Hardware Type: NVIDIA A100 40GB
  • Hours used: 10
  • Cloud Provider: Google Colab

Citation

BibTeX:

@misc{aman_2023,
  author       = {Aman Mussa},
  title        = {Self-instruct data pairs for Kazakh language},
  year         = {2023},
  howpublished = {\url{https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1}}
}

APA:

Aman, M. (2023). Self-instruct data pairs for Kazakh language. Retrieved from https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1

Model Card Contact

Please contact via email: [email protected]
