---
pipeline_tag: text-generation
library_name: transformers
language:
  - en
license: llama3
tags:
  - mergekit
  - merge
  - multi-step merge
  - rp
  - roleplay
  - role-play
  - chain-of-thoughts
  - summarization
  - emotion classification
  - biology
  - psychology
base_model:
  - nothingiisreal/L3-8B-Celeste-v1
  - Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
  - Sao10K/L3-8B-Stheno-v3.2
  - ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
  - Sao10K/L3-8B-Lunaris-v1
  - turboderp/llama3-turbcat-instruct-8b
  - ChaoticNeutrals/Domain-Fusion-L3-8B
  - migtissera/Llama-3-8B-Synthia-v3.5
  - TheDrummer/Llama-3SOME-8B-v2
  - ChaoticNeutrals/Hathor_RP-v.01-L3-8B
  - TheSkullery/llama-3-cat-8b-instruct-v1
  - FPHam/L3-8B-Everything-COT
  - Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
  - OEvortex/Emotional-llama-8B
  - lighteternal/Llama3-merge-biomed-8b
  - Casual-Autopsy/Llama3-merge-psychotherapy-8b
---
Image generated by mayonays_on_toast.



L3-Super-Nova-RP-8B



Presets

Neither I nor anyone else has found a good Textgen preset for this model yet, so here's the starting-point preset I use instead. It should get you by for now.

Top K: 50
Top P: 0.85
Repetition Penalty: 1.01
# Don't make this higher; DRY handles the bulk of squashing repetition.
# This is just to lightly nudge the bot to move the plot forward.
Rep Pen Range: 2048 # Don't make this higher either.
Presence Penalty: 0.03 # Minor encouragement to use synonyms.
Smoothing Factor: 0.3

DRY Repetition Penalty:
  Multiplier: 0.8
  Base: 1.75
  Allowed Length: 2
  Penalty Range: 4096

Dynamic Temperature:
  Min Temp: 0.5
  Max Temp: 1.25
  Exponent: 0.85
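
Most of these samplers (DRY, Dynamic Temperature, Smoothing Factor) are front-end/backend features and aren't exposed by vanilla transformers. If you just want to smoke-test the model from Python, here is a minimal sketch that applies only the samplers transformers' generate() does support (Top K, Top P, repetition penalty); the repo id and the chat messages are assumptions for illustration, and the remaining values above should be set in SillyTavern or your loader.

```python
# Minimal smoke-test sketch. Only Top K, Top P, and repetition penalty map
# directly onto transformers' generate(); DRY, dynamic temperature, and
# smoothing factor must be configured in the backend/front-end instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Casual-Autopsy/L3-Super-Nova-RP-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [  # illustrative prompt only
    {"role": "system", "content": "You are Nova, a playful sci-fi roleplay partner."},
    {"role": "user", "content": "The airlock hisses open. What do you do?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,          # stand-in; dynamic temperature isn't available here
    top_k=50,
    top_p=0.85,
    repetition_penalty=1.01,
)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```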


Usage Info

Some of the INT models were chosen with some of SillyTavern's features in mind, such as emotion-based sprites, dynamic music, and pretty much any feature, extension, or STscript that uses summarization. With that said, it's recommended to use SillyTavern as your front-end.

While not required, I'd recommend building the story string prompt with Lorebooks rather than using the Advanced Formatting menu. The only thing you really need in the Story String prompt within Advanced Formatting is the system prompt (typically just the {{system}} macro, with the character card info moved into Lorebook entries). Doing it this way tends to keep the character more consistent as the RP goes on, because all character card info is locked to a certain depth rather than drifting further and further away within the context.



Quants



Merge Info

The merge methods used were TIES, DARE TIES, Breadcrumbs TIES, SLERP, and Task Arithmetic.

The model was finished off with both Merge Densification and Negative Weighting techniques to boost creativity.

All merging steps had the merge calculations done in float32 and were output as bfloat16.
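
The actual recipes are in the Secret Sauce configs below. For readers unfamiliar with these methods, here is a rough, self-contained sketch of the core ideas on toy tensors: task vectors (fine-tune minus base), a density/trim step in the spirit of TIES/DARE, TIES-style sign election, a negative weight used to subtract a direction rather than add it, and float32 math written out as bfloat16. All values and weights here are made up for illustration and are not the model's recipe.

```python
# Toy illustration of the merge ideas named above (not the actual recipe).
# Real merges operate per-parameter across full checkpoints via mergekit.
import torch

torch.manual_seed(0)

base = torch.randn(8)                      # pretend base-model weights
finetunes = [base + 0.1 * torch.randn(8) for _ in range(3)]
weights = torch.tensor([0.6, 0.5, -0.1])   # note the negative weight

# 1) Task Arithmetic: work with deltas ("task vectors") from the base.
deltas = torch.stack([ft - base for ft in finetunes])

# 2) Density / trim: TIES-style merges keep only a fraction ("density") of
#    each task vector's entries, here the largest-magnitude ones.
def trim(delta: torch.Tensor, density: float = 0.5) -> torch.Tensor:
    k = max(1, int(density * delta.numel()))
    threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
    return torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))

deltas = torch.stack([trim(d) for d in deltas])

# 3) TIES sign election: per parameter, keep only contributions that agree
#    with the dominant (weighted) sign, so models don't cancel each other out.
weighted = weights[:, None] * deltas
elected_sign = torch.sign(weighted.sum(dim=0))
agree = torch.sign(weighted) == elected_sign
merged_delta = torch.where(agree, weighted, torch.zeros_like(weighted)).sum(dim=0)

# 4) Apply the merged delta: compute in float32, store as bfloat16
#    (mirroring the float32-calculation / bfloat16-output note above).
merged = (base.float() + merged_delta.float()).to(torch.bfloat16)
print(merged)
```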


Models Merged

The following models were used to make this merge:

- nothingiisreal/L3-8B-Celeste-v1
- Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- Sao10K/L3-8B-Stheno-v3.2
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Sao10K/L3-8B-Lunaris-v1
- turboderp/llama3-turbcat-instruct-8b
- ChaoticNeutrals/Domain-Fusion-L3-8B
- migtissera/Llama-3-8B-Synthia-v3.5
- TheDrummer/Llama-3SOME-8B-v2
- ChaoticNeutrals/Hathor_RP-v.01-L3-8B
- TheSkullery/llama-3-cat-8b-instruct-v1
- FPHam/L3-8B-Everything-COT
- Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
- OEvortex/Emotional-llama-8B
- lighteternal/Llama3-merge-biomed-8b
- Casual-Autopsy/Llama3-merge-psychotherapy-8b



Evaluation Results


Open LLM Leaderboard

Detailed results can be found here

Explanation for AI RP newbies: IFEval is the most important evaluation for RP models, as it measures how well the model can follow OOC instructions, Lorebooks, and, most importantly, character cards. The rest don't matter; at least, not nearly as much as IFEval.

| Metric               | Value |
|----------------------|-------|
| Avg.                 | N/A   |
| IFEval (0-Shot)      | N/A   |
| BBH (3-Shot)         | N/A   |
| MATH Lvl 5 (4-Shot)  | N/A   |
| GPQA (0-shot)        | N/A   |
| MuSR (0-shot)        | N/A   |
| MMLU-PRO (5-shot)    | N/A   |

UGI Leaderboard

Information about the metrics can be found at the bottom of the UGI Leaderboard in the respective tabs.

| Metric (UGI-Leaderboard) | Value | Value | Metric (Writing Style) |
|--------------------------|-------|-------|------------------------|
| UGI (Avg.)               | N/A   | N/A   | RegV1                  |
| W/10                     | N/A   | N/A   | RegV2                  |
| Unruly                   | N/A   | N/A   | MyScore                |
| Internet                 | N/A   | N/A   | ASSS                   |
| Stats                    | N/A   | N/A   | SMOG                   |
| Writing                  | N/A   | N/A   | Yule                   |
| PolContro                | N/A   |       |                        |


Secret Sauce

The following YAML configs were used to make this merge.


Super-Nova-CRE_pt.1



Super-Nova-CRE_pt.2



Super-Nova-UNC_pt.1



Super-Nova-UNC_pt.2



Super-Nova-INT_pt.1



Super-Nova-INT_pt.2



Super-Nova-CRE



Super-Nova-UNC



Super-Nova-INT



Super-Nova-RP_pt.1



Super-Nova-RP_pt.2



L3-Super-Nova-RP-8B