
This repository hosts GGUF-Imatrix quantizations for Test157t/Eris-Daturamix-7b.

Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)

```python
quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q6_K",
    "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
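The Base ⇢ F16 ⇢ Imatrix ⇢ quants flow can be sketched with llama.cpp's tools. This is a sketch under assumptions, not the exact commands used for this repo: file paths are placeholders, and the binary names vary between llama.cpp releases (newer builds ship them as `llama-imatrix` / `llama-quantize`).

```shell
# Sketch of the GGUF-Imatrix pipeline, assuming a llama.cpp checkout.
# All paths and file names below are placeholders.

# 1. Convert the base HF model to an F16 GGUF.
python convert-hf-to-gguf.py ./Eris-Daturamix-7b \
    --outtype f16 --outfile eris-daturamix-7b-f16.gguf

# 2. Generate importance-matrix data from the calibration text
#    (groups_merged.txt with added roleplay chats, per this card).
./imatrix -m eris-daturamix-7b-f16.gguf -f calibration.txt -o imatrix.dat

# 3. Produce each listed quant, guided by the imatrix.
for q in Q4_K_M IQ4_XS Q5_K_M Q6_K Q8_0 IQ3_M IQ3_S IQ3_XXS; do
    ./quantize --imatrix imatrix.dat \
        eris-daturamix-7b-f16.gguf "eris-daturamix-7b-${q}.gguf" "$q"
done
```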

Relevant information:

For imatrix data generation, kalomaze's groups_merged.txt with added roleplay chats was used; you can find it here.

There's already a v2 of this model here.

Original model information:


The following models were included in the merge:

- Test157t/Eris-Floramix-7b
- ResplendentAI/Datura_7B

Configuration

```yaml
slices:
  - sources:
      - model: Test157t/Eris-Floramix-7b
        layer_range: [0, 32]
      - model: ResplendentAI/Datura_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: Test157t/Eris-Floramix-7b
parameters:
  t:
    - filter: self_attn
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
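The slerp merge interpolates each tensor between the two parent models along the arc of the unit hypersphere rather than along a straight line; in this config, `t` for the `self_attn` tensors is interpolated across the layers from the list `[1, 0.5, 0.7, 0.3, 0]`, with `0.5` used everywhere else. A minimal sketch of spherical linear interpolation (this is an illustration, not mergekit's actual implementation; the function name and the near-parallel fallback are assumptions):

```python
import numpy as np

def slerp(a, b, t, eps=1e-8):
    """Spherical linear interpolation between two flattened weight vectors.

    t=0 returns a, t=1 returns b; intermediate t follows the great-circle
    arc between the two (normalized) directions.
    """
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two directions
    if theta < eps:                 # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

At `t=0` the weights come entirely from the first model, at `t=1` entirely from the second, which is why the per-layer list above shifts the blend gradually from one parent to the other through the stack.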
Format: GGUF · Model size: 7.24B params · Architecture: llama

3-bit

4-bit

5-bit

6-bit

8-bit

16-bit

Inference API
Inference API (serverless) has been turned off for this model.
