---
license: llama2
language:
- en
---

### Daddy Dave's stamp of approval 👍

4-bit GPTQ quants of the 1.10 version of [Sao10K](https://huggingface.co/Sao10K)'s fantastic [Stheno (collection link)](https://huggingface.co/collections/Sao10K/stheno-6536a20823c9d18c09288fb1) model.

The main branch contains 4-bit quants with a groupsize of 128 and no act order. The other branches contain groupsizes of 128, 64, and 32 with act order.

Original card below!

***

My GGUF Quants: https://huggingface.co/Sao10K/Stheno-1.10-L2-13B-GGUF

***

Oh, you thought there'd be a 2.0? Nope, not yet.

This is a recreation of Stheno with updated versions of the same models and merging values. It feels more coherent, and it is uncensored (zero context), at least according to my tests. It is somewhat smarter, I think? At least it passes 4/5 times in my own test suites.

Feel free to try it out; I'd appreciate feedback.

Most formats could work, but all of my tests have been done in Alpaca format, and it works well.

```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write 's next reply in a chat between and . Write a single reply only.

### Response:
```

Support me [here](https://ko-fi.com/sao10k) :)

Once again, thanks to [Chargoddard](https://huggingface.co/chargoddard) for his amazing and simple [mergekit](https://github.com/cg123/mergekit) script. Thanks to the original model creators too!
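If you are driving the model from code, the Alpaca template above can be assembled with a small helper. This is only a sketch; the `build_prompt` function and its argument name are hypothetical and not part of the original card:

```python
def build_prompt(instruction: str) -> str:
    # Wrap a user instruction in the Alpaca format recommended above.
    # (Hypothetical helper; any roleplay-specific phrasing from the card
    # would go inside the instruction text itself.)
    return (
        "### Instruction:\n"
        f"{instruction}\n"
        "\n"
        "### Response:\n"
    )

print(build_prompt("Summarize the plot of Hamlet in one sentence."))
```

The generated string ends at the `### Response:` header so the model's completion continues directly from it.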