---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- safetensors
- llama
- text-generation
- dataset:ai2_arc
- dataset:unalignment/spicy-3.1
- dataset:codeparrot/apps
- dataset:facebook/belebele
- dataset:boolq
- dataset:jondurbin/cinematika-v0.1
- dataset:drop
- dataset:lmsys/lmsys-chat-1m
- dataset:TIGER-Lab/MathInstruct
- dataset:cais/mmlu
- dataset:Muennighoff/natural-instructions
- dataset:openbookqa
- dataset:piqa
- dataset:Vezora/Tested-22k-Python-Alpaca
- dataset:cakiki/rosetta-code
- dataset:Open-Orca/SlimOrca
- dataset:spider
- dataset:squad_v2
- dataset:migtissera/Synthia-v1.3
- dataset:datasets/winogrande
- dataset:nvidia/HelpSteer
- dataset:Intel/orca_dpo_pairs
- dataset:unalignment/toxic-dpo-v0.1
- dataset:jondurbin/truthy-dpo-v0.1
- dataset:allenai/ultrafeedback_binarized_cleaned
- dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned
- dataset:LDJnr/Capybara
- dataset:JULIELab/EmoBank
- dataset:kingbri/PIPPA-shareGPT
- license:other
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- has_space
model_name: UNA-34Beagles-32K-bf16-v1-GPTQ
base_model: one-man-army/UNA-34Beagles-32K-bf16-v1
inference: false
model_creator: one-man-army
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description

[MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ](https://huggingface.co/MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ) is a quantized (GPTQ) version of [one-man-army/UNA-34Beagles-32K-bf16-v1](https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1).
## How to use

### Install the necessary packages

```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code

```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ"

# Quantization settings matching this checkpoint (4-bit, group size 128).
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False
)

# Load the pre-quantized safetensors weights onto the first GPU.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    use_safetensors=True,
    device="cuda:0",
    quantize_config=quantize_config
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Wrap the model and tokenizer in a standard text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1
)

outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
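
### Loading with transformers directly (alternative)

Recent `transformers` releases (4.32+) can also load GPTQ checkpoints without calling `auto_gptq` directly, provided `optimum` is installed alongside the packages above (`pip install optimum`) and the repo's `config.json` carries the GPTQ quantization settings, as is usual for GPTQ uploads. The following is a minimal sketch under those assumptions; the generation parameters mirror the pipeline example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ"

# device_map="auto" places the quantized weights on the available GPU(s);
# the GPTQ config is picked up from the checkpoint itself.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```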