---
license: apache-2.0
language:
- en
---
"With that naughty impish grin of hers, so damn sly it could have ensnared the devil himself, and that impish glare in her eyes, sharper than of a succubus fang, she chuckled impishly with such mischief that even the moon might’ve blushed. I needed no witch's hex to divine her nature—she was, without a doubt, a naughty little imp."
Model Details
Censorship level: Low
PENDING / 10 (10 = completely uncensored)
Intended use: Role-Play, General tasks.
Impish_LLAMA_3B is available in the following quantizations (a short loading sketch follows the list):
- Original: FP16
- GGUF: Static Quants | iMatrix_GGUF
- EXL2: 3.0 bpw | 4.0 bpw | 5.0 bpw | 6.0 bpw | 7.0 bpw | 8.0 bpw
- Specialized: FP8
- Mobile (ARM): Q4_0_X_X
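If you want to try the original FP16 weights with Hugging Face Transformers, a minimal sketch along the following lines should work. The model id below is a placeholder, not a confirmed repository path; point it at whichever quantization you actually downloaded.

```python
# Minimal sketch: load the FP16 weights with Hugging Face Transformers.
# "path/or/repo-id/of/Impish_LLAMA_3B" is a placeholder -- replace it with the
# actual repo id or a local directory containing the checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/or/repo-id/of/Impish_LLAMA_3B"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16, matching the "Original" entry above
    device_map="auto",          # requires the accelerate package
)
```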
Model instruction template: Llama-3-Instruct
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
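For reference, here is a small sketch of how the template above can be assembled in Python for a single turn. The whitespace follows the usual Llama-3-Instruct convention of a blank line after each header, and the system/user strings are placeholders.

```python
# Assemble a single-turn Llama-3-Instruct prompt from the template above.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "You are a helpful role-play assistant.",  # placeholder system prompt
    "Describe the imp's grin.",                # placeholder user message
)
```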
Recommended generation presets (a sketch showing how to apply one of them follows the list):
Midnight Enigma
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
min_p
max_new_tokens: 512
temperature: 1
top_p: 1
top_k: 0
typical_p: 1
min_p: 0.05
repetition_penalty: 1
do_sample: True
Divine Intellect
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
simple-1
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
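As a rough illustration, the "Midnight Enigma" preset maps onto the Transformers `generate()` sampling arguments as sketched below. This assumes the `model`, `tokenizer`, and `prompt` objects from the earlier sketches; it is one way to apply the preset, not the only one (the same values can be entered in any frontend that exposes these samplers).

```python
# Sketch: sampling with the "Midnight Enigma" preset via Transformers generate().
midnight_enigma = dict(
    max_new_tokens=512,
    temperature=0.98,
    top_p=0.37,
    top_k=100,
    typical_p=1.0,
    min_p=0.0,              # min_p needs a reasonably recent Transformers release
    repetition_penalty=1.18,
    do_sample=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, **midnight_enigma)

# Decode only the newly generated tokens, dropping the prompt.
new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```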
Support
- My Ko-fi page: All donations go toward research resources and compute; every bit is appreciated 🙏🏻
Other stuff
- Blog and updates: Some updates, some rambles; sort of a mix between a diary and a blog.
- SLOP_Detector: Nuke GPTisms with the SLOP detector.
- LLAMA-3_8B_Unaligned: The grand project that started it all.