---
license: other
license_name: llama3
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- safetensors
- llama
- text-generation
- facebook
- meta
- pytorch
- llama-3
- conversational
- en
- license:other
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: Meta-Llama-3-70B-Instruct-GPTQ
base_model: meta-llama/Meta-Llama-3-70B-Instruct
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/Meta-Llama-3-70B-Instruct-GPTQ](https://huggingface.co/MaziyarPanahi/Meta-Llama-3-70B-Instruct-GPTQ) is a 4-bit GPTQ-quantized version of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
## How to use
### Install the necessary packages
```sh
pip install --upgrade accelerate auto-gptq transformers
```
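Before loading a 70B model it is worth confirming the environment: the 4-bit weights alone occupy roughly 35-40 GB, so a single high-memory GPU (or a multi-GPU setup) is assumed. A quick sanity check, not part of the original card, might look like this:

```python
# Optional sanity check: confirm CUDA is visible and the packages installed cleanly.
import torch
import transformers
from importlib.metadata import version

print(f"torch {torch.__version__} | CUDA available: {torch.cuda.is_available()}")
print(f"transformers {transformers.__version__} | auto-gptq {version('auto-gptq')}")
```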
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch

model_id = "MaziyarPanahi/Meta-Llama-3-70B-Instruct-GPTQ"

# Quantization settings matching the published 4-bit GPTQ weights.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False
)

# Load the pre-quantized safetensors weights directly onto the GPU.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    use_safetensors=True,
    device="cuda:0",
    quantize_config=quantize_config
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,  # sampling must be enabled for temperature/top_p to apply
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1
)

outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
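Since this is an Instruct model, results are generally better when the prompt is formatted with the Llama 3 chat template rather than passed as raw text. A minimal sketch, reusing `tokenizer` and `pipe` from the example above (the message contents are illustrative):

```python
# Build a chat-formatted prompt via the tokenizer's built-in template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a large language model?"},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant header so the model replies as assistant
)

outputs = pipe(prompt)
print(outputs[0]["generated_text"])
```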