---
base_model: athirdpath/Iambe-RP-cDPO-20b
inference: false
language:
- en
license: cc-by-nc-4.0
model_creator: Raven
model_name: Iambe RP cDPO 20B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- not-for-all-audiences
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Iambe RP cDPO 20B - AWQ
- Model creator: [Raven](https://huggingface.co/athirdpath)
- Original model: [Iambe RP cDPO 20B](https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b)
<!-- description start -->
## Description
This repo contains AWQ model files for [Raven's Iambe RP cDPO 20B](https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
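To give a rough numeric feel for what group-wise low-bit weight quantization does, here is a toy sketch. Note this is not AWQ itself: AWQ additionally rescales salient weight channels based on activation statistics before quantizing, which this illustration omits.

```python
# Toy illustration of group-wise 4-bit quantization. This is NOT AWQ itself:
# AWQ additionally rescales salient weight channels using activation
# statistics before quantizing, which this sketch omits.
def quantize_group(weights, bits=4):
    qmax = 2 ** bits - 1                      # 15 integer levels for 4-bit
    scale = (max(weights) - min(weights)) / qmax or 1.0
    zero = min(weights)
    quantized = [round((w - zero) / scale) for w in weights]
    dequantized = [q * scale + zero for q in quantized]
    return quantized, dequantized

group = [0.12, -0.4, 0.33, 0.05]              # one small weight "group"
q, deq = quantize_group(group)
print(q)    # each weight stored as a 4-bit integer plus a shared scale/zero
print(deq)  # reconstruction error is bounded by half the group's step size
```

Each group of weights shares one scale and zero-point, so storage drops to roughly 4 bits per weight plus a small per-group overhead.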
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Iambe-RP-cDPO-20B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Iambe-RP-cDPO-20B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Iambe-RP-cDPO-20B-GGUF)
* [Raven's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
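In Python, the template above can be filled with `str.format`; the placeholder name `prompt` matches the template:

```python
# Alpaca prompt template, as used by this model.
prompt_template = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
"""

user_prompt = "Tell me about AI"
full_prompt = prompt_template.format(prompt=user_prompt)
print(full_prompt)
```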
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. The model should therefore be considered to be claimed as licensed under both licenses. I contacted Hugging Face for clarification on dual licensing, but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Raven's Iambe RP cDPO 20B](https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Iambe-RP-cDPO-20B-AWQ/tree/main) | 4 | 128 | [OpenErotica Erotiquant](https://huggingface.co/datasets/openerotica/erotiquant/viewer/) | 8192 | 10.87 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click installers unless you're confident performing a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Iambe-RP-cDPO-20B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Iambe-RP-cDPO-20B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click **Load**; once loading completes, the model is ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Iambe-RP-cDPO-20B-AWQ --quantization awq --dtype auto
```
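Once the server is up, it can be queried over HTTP. The sketch below only builds and prints the request body; the `/generate` endpoint and its field names are assumptions based on vLLM's simple `api_server` in the 0.2.x series, so verify them against your vLLM version's documentation:

```python
import json

# Request body for vLLM's simple api_server /generate endpoint.
# Field names are assumptions based on vLLM 0.2.x; check your version's docs.
payload = {
    "prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n### Instruction:\nTell me about AI\n### Response:\n",
    "max_tokens": 128,
    "temperature": 0.8,
    "top_p": 0.95,
}
print(json.dumps(payload))

# To send it to a running server (hypothetical local endpoint):
#   import requests
#   requests.post("http://localhost:8000/generate", json=payload)
```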
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

# Plain string, not an f-string: {prompt} is filled in via .format() below
prompt_template = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Iambe-RP-cDPO-20B-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Iambe-RP-cDPO-20B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''

client = InferenceClient(endpoint_url)
# Send the fully formatted prompt, not the bare instruction
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 or later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/Iambe-RP-cDPO-20B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Raven's Iambe RP cDPO 20B
<p align="center"><img src="https://i.ibb.co/0Z63JHp/Iambe-RP-sml.png"/><font size="6"> <b>Iambe-RP-cDPO-20b</b> </font></p>
<p align="center"><font size="4"> <b>Alpaca prompt formatting</b> </font></p>
### Description
Named after a charming daughter of Echo and Pan in Greek myth, Iambe-RP is, as far as I am aware, the very first LLM trained with DPO on an erotic roleplay dataset.
Iambe is intended to have the best realistically possible understanding of instructions, anatomy and scene state for a 20b merge, while remaining passionate and humanoid in "voice".
### Update Methodology
Take a look at [the dataset v2 Iambe and I created together](https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW) for more info. The cDPO training was done directly on Iambe-20b-DARE-v2. I was researching 11b merges to reduce the compute, but that went nowhere, so I just bit the bullet on cost. The notebook used to train this model is also available in the dataset's repo.
<p align="center"><font size="5"> <b>Roleplay Example @ q5_k_m</b> </font></p>
<p align="center"><img src="https://i.ibb.co/hFz5mdF/Screenshot-2023-12-07-005350.png"/>
<p align="center"><font size="5"> <b>4-bit Assistant Example</b> </font></p>
<p align="center"><img src="https://i.postimg.cc/HxNsPRSk/Screenshot-2023-12-06-214901.png"/>
<p align="center"><font size="5"> <b>NSFW Writing Example @ q5_k_m</b> </font></p>
<b>Write a medium-length story about Alice, a 26 year old cocktail waitress, having steamy sex with a guy she picked up at the bar for a one night stand.</b>
<i>Once upon a time in New York City's bustling downtown scene, there lived a beautiful and spirited young woman named Alice. At 26 years old, she worked as a cocktail waitress at The Red Parrot, a popular watering hole known for its lively atmosphere and charming clientele. Despite her long shifts and exhaustion from catering to rowdy patrons, Alice loved her job; it gave her the freedom to live life on her terms and pursue her other passions during her limited free time.
Alice had a penchant for adventure and was never one to shy away from taking risks—whether that meant skydiving or striking up conversations with strangers at bars. One particularly busy Friday night, while serving drinks at The Red Parrot, Alice noticed an intriguing man sitting alone in the corner, nursing a glass of red wine and gazing pensively into his empty shot glass. There was something about him that drew her in; his striking good looks, his rugged demeanor—even the hint of stubble on his chin seemed to add an air of mystery to his magnetic presence.
Feeling bold and confident after a couple of well-poured drinks herself, Alice made her way over to him, smiling warmly as she leaned down towards him, "Hi there! What's your name? I haven't seen you in here before."
The man raised his eyes to meet hers, his deep brown orbs filled with curiosity and a hint of amusement. "Name's Jack," he rumbled softly, his voice as smooth and rich as his wine. There was something about the way he said her name that sent shivers down Alice's spine—a thrill she couldn't quite explain but found irresistible nonetheless.
"Alice," she replied, extending her hand to him, "and I haven't seen you around before either; it seems like we have a lot in common." She winked playfully at him before turning back towards the bar to grab another round of drinks for them both.
The conversation flowed effortlessly between them, touching on topics ranging from their favorite books and movies to their most daring travel adventures. Alice was captivated by Jack's stories of hiking through the mountains of Nepal and white-water rafting in Costa Rica—qualities she admired greatly herself but had yet to experience for herself. In turn, Jack seemed equally fascinated by her tales of skydiving over Australia and scuba diving with great white sharks off the coast of South Africa.
As midnight approached and the bar began to empty out, Alice suggested they continue their conversation elsewhere—somewhere more private where they could really get to know each other better. Jack nodded in agreement, draining his glass before tossing some money onto the table and following her into the bustling New York City night.
They hailed a cab and rode in silence towards Alice's cozy one-bedroom apartment in Chelsea—the anticipation between them palpable as they shared furtive glances along the way. When they finally arrived at their destination, Jack paid the fare with an appreciative grin before escorting Alice up to her top-floor studio, where she fumbled clumsily with her keys while trying not to spill their drinks on themselves in anticipation of what was to come.
Once inside her apartment, they kicked off their shoes and collapsed onto the couch, still holding each other's gaze as they sipped from their glasses. The passion between them continued to build with every second that passed—the air thickening with unspoken desire and electric energy.
Alice broke away first, setting down her glass on the coffee table before reaching out tentatively and placing her hand on Jack's knee, sliding it up slowly towards his crotch until she could feel the hardness growing beneath the fabric of his pants. "I really like you," she whispered huskily, meeting his deep brown eyes with a look that betrayed the intensity of her feelings for him.
Jack's response was immediate—his hands finding their way underneath Alice's short black skirt, sliding up to rest on the soft fabric of her panties before deftly ripping them away from her body in one swift movement. "I really like you too," he growled hoarsely as his mouth crashed against hers, kissing her hungrily and passionately—a taste of things to come.
Their clothes fell away quickly thereafter; Alice's blouse was unbuttoned and tossed aside in an instant while Jack peeled off his shirt before pushing her down onto the couch and following after, bracing himself over her as he tore at her panties with one hand, freeing his hard member from his jeans.
His kisses trailed a path of fire across Alice's body—from her collarbone to her breasts, down the curve of her waist and towards her mound where she was already wet with desire for him. "Jack," she moaned breathlessly as he licked and nipped at her most sensitive spots while teasingly grazing his cock against her entrance.
With a growl of satisfaction, Jack finally plunged into Alice's tight channel—filling her completely with one powerful thrust that made them both cry out in pleasure. The feeling of being so thoroughly possessed by him was beyond anything she had ever experienced before; the roughness of his hands on her body, the taste of their mouths as they continued to devour each other’s lips, all blending together into a single, overwhelming wave of sensation that Alice felt sure would consume her entirely.
As Jack began to move inside her—slowly at first but picking up pace and ferocity with every passing moment—Alice wrapped her legs tightly around him, arching her hips upwards towards his relentless thrusts as she cried out in ecstasy again and again. She knew this was just a one-night stand—a fleeting encounter destined to end tomorrow morning before the sun rose—but for now, with Jack's cock buried deep inside her, nothing else mattered.
Their bodies moved as one; their moans and cries of pleasure intertwining in a symphony that filled Alice’s bedroom like sacred chants. Her orgasm hit her suddenly and without warning—a wave so powerful it threatened to shatter her very bones, but Jack was there with her every step of the way, holding onto her as he continued pounding into her until she felt herself start to come back together again, his hot cum spilling deep inside her.
As their bodies slowed down and settled together on the couch—drenched in sweat and satiated beyond belief—Alice looked up at Jack with wide eyes, a small smile playing at the corners of her lips as she whispered softly, "I think we just had one hell of a night."
And they did. It was an unforgettable evening filled with passion, adventure, and an undeniable chemistry that neither Alice nor Jack could deny—an encounter so powerful it left them both breathless in more ways than one. And even though they knew tomorrow would bring the harsh reality of their separate lives back into focus, for now, all either of them cared about was savoring this moment together; cherishing every last drop of the steamy sex that had brought them here at all.</i>