---
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
language:
- en
tags:
- llama3
- comedy
- comedian
- fun
- funny
- llama38b
- laugh
- sarcasm
- roleplay
quantized_by: bartowski
pipeline_tag: text-generation
---

## 4-bit GEMM AWQ Quantizations of Llama-3-8B-LexiFun-Uncensored-V1

Using <a href="https://github.com/casper-hansen/AutoAWQ/">AutoAWQ</a> release <a href="https://github.com/casper-hansen/AutoAWQ/releases/tag/v0.2.4">v0.2.4</a> for quantization.

Original model: https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1

## Prompt format

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>


```
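
Rather than assembling this string by hand, you can render it from the tokenizer's bundled chat template. A minimal sketch, assuming the original repo's tokenizer config is reachable:

```
from transformers import AutoTokenizer

# The tokenizer shipped with the original model carries the Llama 3 chat template
tokenizer = AutoTokenizer.from_pretrained("Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1")

chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a joke."},
]

# tokenize=False returns the formatted prompt string rather than token IDs;
# add_generation_prompt=True appends the trailing assistant header
print(tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True))
```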

## AWQ Parameters

 - q_group_size: 128
 - w_bit: 4
 - zero_point: True
 - version: GEMM

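These parameters map directly to the quant_config dictionary that AutoAWQ accepts at quantization time. Below is a minimal sketch of how a quant with these settings is typically produced (paths are illustrative placeholders, not the exact invocation used for this repo):

```
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1"
quant_path = "Llama-3-8B-LexiFun-Uncensored-V1-AWQ"  # placeholder output directory

# Mirrors the parameters listed above
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the fp16 model and tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run AWQ calibration (uses AutoAWQ's default calibration dataset) and save
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```
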
## How to run

The example below is adapted from the generation script in the AutoAWQ repo [here](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py).

First, install the autoawq package from PyPI:

```
pip install autoawq
```

Then run the following:

```
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer


quant_path = "models/Llama-3-8B-LexiFun-Uncensored-V1-AWQ"  # local path to the downloaded quant

# Load the quantized model; fuse_layers=True fuses modules for faster inference
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "You're standing on the surface of the Earth. "\
        "You walk one mile south, one mile west and one mile north. "\
        "You end up exactly where you started. Where are you?"

chat = [
    {"role": "system", "content": "You are a concise assistant that helps answer questions."},
    {"role": "user", "content": prompt},
]

# Llama 3 ends each turn with <|eot_id|>, so stop on it as well as the default EOS
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

tokens = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,  # append the assistant header so the model answers in turn
    return_tensors="pt"
).cuda()

# Generate output
generation_output = model.generate(
    tokens, 
    streamer=streamer,
    max_new_tokens=64,
    eos_token_id=terminators
)
```
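
The same quant should also load in other AWQ-aware runtimes. A minimal sketch with vLLM, which supports 4-bit GEMM AWQ checkpoints (untested against this exact repo; in practice you would format prompts with the chat template shown above):

```
from vllm import LLM, SamplingParams

# quantization="awq" selects vLLM's AWQ kernels for the 4-bit GEMM weights
llm = LLM(model="models/Llama-3-8B-LexiFun-Uncensored-V1-AWQ", quantization="awq")

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Tell me a joke about llamas."], params)
print(outputs[0].outputs[0].text)
```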

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski