---
base_model:
  - Riiid/sheep-duck-llama-2-13b
  - IkariDev/Athena-v4
  - TheBloke/Llama-2-13B-fp16
  - KoboldAI/LLaMA2-13B-Psyfighter2
  - KoboldAI/LLaMA2-13B-Erebus-v3
  - Henk717/echidna-tiefigther-25
  - Undi95/Unholy-v2-13B
  - ddh0/EstopianOrcaMaid-13b
tags:
  - mergekit
  - merge
  - not-for-all-audiences
  - ERP
  - RP
  - Roleplay
  - uncensored
license: llama2
language:
  - en
---

## Model

This is the GGUF version of SnowyRP, and the first public release of a model in the SnowyRP series!

Available formats:

- BF16
- GPTQ
- GGUF

Any future quantizations I am made aware of will be added.

## Merge Details

I just used highly ranked models to try to get a better result. I also made sure that model incest would not be a BIG problem by merging models that are relatively pure.

These models CAN and WILL produce X-rated or harmful content, because they are heavily uncensored in an attempt not to limit or degrade the model.

This model has a very good knowledge base and understands anatomy decently. It is also VERY versatile: it is great for general assistant work, RP and ERP, RPG-style RPs, and much more.

## Model Use

This model is very good... WITH THE RIGHT SETTINGS. I personally use Mirostat mixed with dynamic temperature, along with epsilon cutoff and eta cutoff.

Optimal settings (so far):

```
Mirostat Mode: 2
  tau: 2.95
  eta: 0.05

Dynamic Temp
  min: 0.25
  max: 1.8

Cutoffs
  epsilon: 3
  eta: 3
```

Go to the BF16 repo for more usage settings.
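For API use, the settings above can be written out as a request payload. This is just a sketch: the parameter names below (`mirostat_mode`, `dynatemp_low`, `epsilon_cutoff`, etc.) are assumptions based on common backends such as text-generation-webui, so check your frontend's documentation for the exact keys and units.

```python
# Hypothetical sampler payload mirroring the recommended settings above.
# Key names are assumptions; values are taken directly from this card.
payload = {
    "mirostat_mode": 2,    # Mirostat v2
    "mirostat_tau": 2.95,
    "mirostat_eta": 0.05,
    "dynatemp_low": 0.25,  # dynamic temperature range
    "dynatemp_high": 1.8,
    "epsilon_cutoff": 3,   # many frontends scale cutoffs by 1e-4
    "eta_cutoff": 3,
}
print(payload)
```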

## Merge Method

This model was merged using the TIES merge method, with TheBloke/Llama-2-13B-fp16 as the base.
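To give an intuition for what TIES does, here is a minimal toy sketch of the idea (trim each task vector to its largest-magnitude entries, elect a sign per parameter, then average the agreeing values). This is a simplification on flat lists, not mergekit's actual tensor implementation.

```python
# Simplified TIES merge over flat parameter lists.
def ties_merge(base, finetuned_models, density=0.5):
    # 1. Task vectors: difference between each finetuned model and the base.
    task_vectors = [[f - b for f, b in zip(ft, base)] for ft in finetuned_models]

    # 2. Trim: keep only the top `density` fraction of entries by magnitude.
    trimmed = []
    for tv in task_vectors:
        k = max(1, int(len(tv) * density))
        threshold = sorted((abs(v) for v in tv), reverse=True)[k - 1]
        trimmed.append([v if abs(v) >= threshold else 0.0 for v in tv])

    # 3. Sign election: majority sign (by summed magnitude) per parameter.
    merged = []
    for i, b in enumerate(base):
        total = sum(tv[i] for tv in trimmed)
        sign = 1.0 if total >= 0 else -1.0
        # 4. Disjoint mean: average only values agreeing with the elected sign.
        agreeing = [tv[i] for tv in trimmed if tv[i] * sign > 0]
        delta = sum(agreeing) / len(agreeing) if agreeing else 0.0
        merged.append(b + delta)
    return merged

# Two toy "models" that disagree on the second parameter:
base = [0.0, 0.0, 0.0, 0.0]
m1 = [1.0, 0.5, 0.0, 0.0]
m2 = [1.0, -2.0, 0.0, 0.0]
print(ties_merge(base, [m1, m2], density=0.5))
```

Note how the second parameter resolves to m2's value: its larger magnitude wins the sign election, and m1's conflicting value is dropped instead of averaged in.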

## Models Merged

The following models were included in the merge:

- Undi95/Unholy-v2-13B
- Henk717/echidna-tiefigther-25
- KoboldAI/LLaMA2-13B-Erebus-v3
- KoboldAI/LLaMA2-13B-Psyfighter2
- Riiid/sheep-duck-llama-2-13b
- IkariDev/Athena-v4
- ddh0/EstopianOrcaMaid-13b

## Configuration

The following YAML configurations were used to produce this model:

For P1:

```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: Undi95/Unholy-v2-13B
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Henk717/echidna-tiefigther-25
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Erebus-v3
    parameters:
      weight: 0.33
```
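The `task_arithmetic` method used for P1 and P2 is conceptually simple: each finetuned model contributes its difference from the base, scaled by its weight. A toy sketch on 1-D parameter lists (not real model tensors):

```python
# merged = base + sum_i w_i * (model_i - base)
def task_arithmetic(base, weighted_models):
    merged = list(base)
    for weight, model in weighted_models:
        for i, (m, b) in enumerate(zip(model, base)):
            merged[i] += weight * (m - b)
    return merged

# Toy numbers, echoing P1's weights of 1.0 and 0.45:
base = [1.0, 1.0]
model_a = [2.0, 1.0]  # weight 1.0
model_b = [1.0, 3.0]  # weight 0.45
print(task_arithmetic(base, [(1.0, model_a), (0.45, model_b)]))
```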

For P2:

```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Psyfighter2
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Riiid/sheep-duck-llama-2-13b
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: IkariDev/Athena-v4
    parameters:
      weight: 0.33
```

For the final merge:

```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: ddh0/EstopianOrcaMaid-13b
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp1
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp2
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
```
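List-valued parameters in the final merge, like `density: [1.0, 0.7, 0.1]` and `weight: [0.0, 0.3, 0.7, 1.0]`, are gradients: mergekit expands them into one value per layer across the layer range. The sketch below assumes piecewise-linear interpolation between the anchor values; check mergekit's documentation for the exact scheme.

```python
# Expand gradient anchors into per-layer values (assumed linear interpolation).
def expand_gradient(anchors, num_layers):
    if len(anchors) == 1:
        return anchors * num_layers
    values = []
    for layer in range(num_layers):
        # Position of this layer in [0, 1], then in anchor-segment space.
        pos = layer / (num_layers - 1) * (len(anchors) - 1)
        lo = min(int(pos), len(anchors) - 2)
        frac = pos - lo
        values.append(anchors[lo] * (1 - frac) + anchors[lo + 1] * frac)
    return values

# snowyrpp1's weight gradient over this model's 40 layers: weight ramps
# from 0.0 at the first layer to 1.0 at the last.
weights = expand_gradient([0.0, 0.3, 0.7, 1.0], 40)
print(weights[0], weights[-1])
```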