
Quantization made by Richard Erkhov.

- GitHub
- Discord
- Request more models

flammen19X-mistral-7B - GGUF

Original model description:

library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen18X-mistral-7B
datasets:
- ResplendentAI/NSFW_RP_Format_NoQuote
tags:
- nsfw
- not-for-all-audiences


flammen19X-mistral-7B

A Mistral 7B LLM built by merging pretrained models and fine-tuning on ResplendentAI/NSFW_RP_Format_NoQuote. Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.

Method

Finetuned using an A100 on Google Colab.

Reference: "Fine-tune Mistral-7b with SFT + TRL" by Maxime Labonne.
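
The card does not publish the exact training script, only that the approach is supervised fine-tuning with TRL on an A100. Below is a minimal sketch, assuming TRL's `SFTTrainer`; the hyperparameters, dataset split, and text column name are placeholders, not the actual recipe, and argument names vary across TRL versions.

```python
# Sketch of SFT with TRL on the dataset named in the card metadata.
# All hyperparameters are illustrative assumptions, not the settings used for flammen19X.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "flammenai/flammen18X-mistral-7B"  # base model listed in the card metadata
dataset = load_dataset("ResplendentAI/NSFW_RP_Format_NoQuote", split="train")  # split is an assumption

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

args = TrainingArguments(
    output_dir="flammen19X-sft",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name; check the dataset schema
    max_seq_length=2048,
)
trainer.train()
```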

Format: GGUF
Model size: 7.24B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
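
Since this repo ships GGUF quantizations, one way to run a file locally is llama-cpp-python. A minimal sketch follows; the quant filename is an assumption, so substitute whichever file you download from this repo.

```python
# Load a GGUF quant with llama-cpp-python and run a single chat completion.
from llama_cpp import Llama

llm = Llama(
    model_path="flammen19X-mistral-7B.Q4_K_M.gguf",  # assumed filename; use the file you downloaded
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```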
