
Model Details

This is meta-llama/Meta-Llama-3.1-8B-Instruct quantized with AutoRound (asymmetric quantization) and serialized with the GPTQ format in 4-bit. The model has been created, tested, and evaluated by The Kaitchup.

Details on the quantization process, the evaluation, and how to use the model are available here: The Best Quantization Methods to Run Llama 3.1 on Your GPU

  • Developed by: The Kaitchup
  • Language(s) (NLP): English
  • License: cc-by-4.0
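
A minimal usage sketch with the standard transformers API, which handles GPTQ-serialized checkpoints via optimum. This is illustrative, not the exact code from the linked article; the prompt and generation settings are arbitrary, and a CUDA GPU is assumed for the 4-bit kernels:

```python
# Illustrative sketch: load the 4-bit GPTQ checkpoint with transformers.
# Assumes transformers, optimum, and a GPTQ backend (e.g. auto-gptq) are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "kaitchup/Meta-Llama-3.1-8B-Instruct-autoround-gptq-4bit-asym"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" places the quantized weights on the available GPU(s)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("What is 4-bit quantization?"))
```

Since the model is stored as an instruct checkpoint, prompts formatted with the Llama 3.1 chat template (via `tokenizer.apply_chat_template`) will generally give better results than raw text.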
  • Format: Safetensors
  • Model size: 1.99B params (4-bit packed weights)
  • Tensor types: FP16, I32

Collection including kaitchup/Meta-Llama-3.1-8B-Instruct-autoround-gptq-4bit-asym