# RP-Stew-v2.5-34B

An attempt to make ParasiticRogue's model a tad better on longer contexts. I just ran the script; all credit for the original merge goes to my friend.


EXL2 quants are already being uploaded by him: https://huggingface.co/ParasiticRogue/RP-Stew-v2.5-34B-exl2-4.65

Also, here are my samplers, instruct, and story string (prompt) for the model (updated, new format):

Samplers: https://files.catbox.moe/8ficm1.json

Instruct: https://files.catbox.moe/nlflxw.json

Story String: https://files.catbox.moe/6bk6gj.json
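
The JSONs above are presets meant to be imported into SillyTavern. If you just want a quick starting point outside of it, below is a minimal sketch of loading the model with Hugging Face transformers and passing a few common sampler values to `generate()`. The repo ID, prompt, and all numbers are placeholders of mine, not the recommended settings; the real ones live in the linked files.

```python
# Minimal sketch, not the recommended preset: loading the model with
# transformers and sampling with generic values. Repo ID, prompt, and all
# numbers below are placeholders -- use the linked JSONs for the real settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MarinaraSpaghetti/RP-Stew-v2.5-34B"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the merge itself was done in bfloat16
    device_map="auto",
)

prompt = "You are a narrator for an interactive story.\n\nUser: Describe the tavern.\nNarrator:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.9,           # placeholder sampler values
    top_p=0.9,
    repetition_penalty=1.05,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The prompt here is generic; the actual instruct and story-string templates are the ones in the files above.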

If you're using Vectors (I use them for summaries of different story arcs):

(Screenshot of my Vectors settings.)

Alternative samplers, plus "old" versions of the prompt and instruct (classic format):

Samplers: https://files.catbox.moe/ef67mj.json

Instruct: https://files.catbox.moe/uba6o1.json

Story String: https://files.catbox.moe/t5gfun.json

I'm unsure which set works better, so test both and pick whichever you prefer.

```yaml
models:
  - model: F:\Merge\ParasiticRogue_Nontoxic-PiVoT-Bagel-RP-34b
    parameters:
      weight: 0.16
      density: 0.42
  - model: F:\Merge\ParasiticRogue_Nyakura-CausalLM-RP-34B
    parameters:
      weight: 0.22
      density: 0.54
  - model: F:\Merge\migtissera_Tess-34B-v1.5b
    parameters:
      weight: 0.28
      density: 0.66
  - model: F:\Merge\brucethemoose_Capybara-Fixed-Temp
    parameters:
      weight: 0.34
      density: 0.78
merge_method: dare_ties
base_model: F:\Merge\chargoddard_Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```
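
For reference, a rough sketch of how a recipe like the one above can be run, assuming mergekit is installed (`pip install mergekit`) and the config is saved to a local file. The file and output names are placeholders; the `F:\Merge\` model paths would need to point at your own local copies (or be swapped for the corresponding Hugging Face repo IDs).

```python
# Rough sketch: calling mergekit's command-line entry point from Python.
# Assumes mergekit is installed; file names and output paths are placeholders.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",        # mergekit's CLI for YAML merge recipes
        "rp-stew-v2.5.yml",     # the recipe above, saved to a file
        "./RP-Stew-v2.5-34B",   # output directory for the merged model
        "--copy-tokenizer",     # copy the base model's tokenizer into the output
        "--cuda",               # do the tensor math on GPU (drop for CPU-only)
    ],
    check=True,
)
```

The same command can of course be run directly from a shell instead of through Python.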