license: other
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
tags:
- uncensored
inference: false
WizardLM 30B uncensored:
These files are GGML format model files for Eric Hartford's 'uncensored' 30B version of WizardLM.
GGML files are for CPU inference using llama.cpp.
Other repositories available
- 4-bit GPTQ model for GPU inference
- 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference
- Eric's unquantised model in fp16 HF format
THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit 2d5db48 or later) to use them.
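If you need to rebuild, a minimal sketch (assuming a Linux or macOS machine with git and make installed) is shown below; the `git merge-base` check simply verifies that commit 2d5db48 is in your checkout's history:

```
# Clone and build llama.cpp from source (plain CPU build)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Verify the new quantisation format (commit 2d5db48) is included in this checkout
git merge-base --is-ancestor 2d5db48 HEAD && echo "quantisation update present"

make
```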
Provided files
Name | Quant method | Bits | Size | RAM required | Use case |
---|---|---|---|---|---|
WizardLM-30B-Uncensored.ggmlv3.q4_0.bin | q4_0 | 4-bit | 18.3GB | 20GB | 4-bit. |
WizardLM-30B-Uncensored.ggmlv3.q4_1.bin | q4_1 | 4-bit | 20.3GB | 23GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
WizardLM-30B-Uncensored.ggmlv3.q5_0.bin | q5_0 | 5-bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage, slower inference. |
WizardLM-30B-Uncensored.ggmlv3.q5_1.bin | q5_1 | 5-bit | 24.4GB | 27GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
WizardLM-30B-Uncensored.ggmlv3.q8_0.bin | q8_0 | 8-bit | 34.6GB | 38GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
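If you only want one of these files, you can fetch it directly rather than cloning the whole repository. The sketch below is an assumption-laden example: it uses the standard Hugging Face `resolve/main` download path with `<this-repo>` as a placeholder for this repository's path, and `free -h` to confirm you have enough memory for your chosen quantisation:

```
# Download a single quantised file (replace <this-repo> with this repository's path)
wget "https://huggingface.co/<this-repo>/resolve/main/WizardLM-30B-Uncensored.ggmlv3.q4_0.bin"

# Check available RAM against the table above (q4_0 needs roughly 20GB)
free -h
```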
How to run in llama.cpp
I use the following command line; adjust for your tastes and needs:
```
./main -t 12 -m WizardLM-30B-Uncensored.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```
Change `-t 12` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as in the sketch below.
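For example, a chat-style invocation might look like this (an illustrative sketch based on the command above; the thread count and sampling settings are just examples):

```
# Interactive, instruction-following chat session
./main -t 8 -m WizardLM-30B-Uncensored.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```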
How to run in text-generation-webui
Further instructions here: text-generation-webui/docs/llama.cpp-models.md.
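As a rough sketch only (directory layout and flags vary between text-generation-webui versions, so treat the linked document as authoritative): place the .bin file under the webui's models/ directory and start the server pointing at it.

```
# Assumed layout: text-generation-webui checked out locally; flags may differ by version
cp WizardLM-30B-Uncensored.ggmlv3.q4_0.bin text-generation-webui/models/
cd text-generation-webui
python server.py --model WizardLM-30B-Uncensored.ggmlv3.q4_0.bin --threads 8
```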
Discord
For further support, and discussions on these models and AI in general, join us at:
Thanks, and how to contribute.
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Patreon special mentions: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donors!
WizardLM-30B-Uncensored original model card
This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.