---
base_model: BEE-spoke-data/Mixtral-GQA-400m-v2
inference: false
language:
- en
license: apache-2.0
model_creator: BEE-spoke-data
model_name: Mixtral-GQA-400m-v2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# BEE-spoke-data/Mixtral-GQA-400m-v2-GGUF

Quantized GGUF model files for [Mixtral-GQA-400m-v2](https://huggingface.co/BEE-spoke-data/Mixtral-GQA-400m-v2) from [BEE-spoke-data](https://huggingface.co/BEE-spoke-data).


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mixtral-gqa-400m-v2.fp16.gguf](https://huggingface.co/afrideva/Mixtral-GQA-400m-v2-GGUF/resolve/main/mixtral-gqa-400m-v2.fp16.gguf) | fp16 | 4.01 GB  |
| [mixtral-gqa-400m-v2.q2_k.gguf](https://huggingface.co/afrideva/Mixtral-GQA-400m-v2-GGUF/resolve/main/mixtral-gqa-400m-v2.q2_k.gguf) | q2_k | 703.28 MB  |
| [mixtral-gqa-400m-v2.q3_k_m.gguf](https://huggingface.co/afrideva/Mixtral-GQA-400m-v2-GGUF/resolve/main/mixtral-gqa-400m-v2.q3_k_m.gguf) | q3_k_m | 899.86 MB  |
| [mixtral-gqa-400m-v2.q4_k_m.gguf](https://huggingface.co/afrideva/Mixtral-GQA-400m-v2-GGUF/resolve/main/mixtral-gqa-400m-v2.q4_k_m.gguf) | q4_k_m | 1.15 GB  |
| [mixtral-gqa-400m-v2.q5_k_m.gguf](https://huggingface.co/afrideva/Mixtral-GQA-400m-v2-GGUF/resolve/main/mixtral-gqa-400m-v2.q5_k_m.gguf) | q5_k_m | 1.39 GB  |
| [mixtral-gqa-400m-v2.q6_k.gguf](https://huggingface.co/afrideva/Mixtral-GQA-400m-v2-GGUF/resolve/main/mixtral-gqa-400m-v2.q6_k.gguf) | q6_k | 1.65 GB  |
| [mixtral-gqa-400m-v2.q8_0.gguf](https://huggingface.co/afrideva/Mixtral-GQA-400m-v2-GGUF/resolve/main/mixtral-gqa-400m-v2.q8_0.gguf) | q8_0 | 2.13 GB  |
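
As a quick way to try one of these files, here is a minimal sketch that downloads the `q4_k_m` quant with `huggingface_hub` and runs it through [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The context size and sampling settings are illustrative assumptions, not values taken from the original card.

```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# fetch a single quantized file from this repo (cached locally)
model_path = hf_hub_download(
    repo_id="afrideva/Mixtral-GQA-400m-v2-GGUF",
    filename="mixtral-gqa-400m-v2.q4_k_m.gguf",
)

# n_ctx and the sampling parameters below are assumptions for illustration
llm = Llama(model_path=model_path, n_ctx=2048)

out = llm(
    "My favorite movie is Godfather because",
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```

Any of the other filenames from the table above can be substituted for `filename` to trade quality against memory use.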



## Original Model Card:
# BEE-spoke-data/Mixtral-GQA-400m-v2

## Testing code

```python
# !pip install -U -q transformers datasets accelerate sentencepiece
import pprint as pp
from transformers import pipeline

# load the model into a text-generation pipeline; device_map="auto" lets
# accelerate place it on the available device(s)
pipe = pipeline(
    "text-generation",
    model="BEE-spoke-data/Mixtral-GQA-400m-v2",
    device_map="auto",
)
# the model has no dedicated pad token, so reuse the EOS token
pipe.model.config.pad_token_id = pipe.model.config.eos_token_id

prompt = "My favorite movie is Godfather because"

# contrastive-search decoding (top_k + penalty_alpha) with light
# repetition penalties
res = pipe(
    prompt,
    max_new_tokens=256,
    top_k=4,
    penalty_alpha=0.6,
    use_cache=True,
    no_repeat_ngram_size=4,
    repetition_penalty=1.1,
    renormalize_logits=True,
)
pp.pprint(res[0])
```