
PsyMedLewd: a merge of two of my favourite models for sci-fi stories, Undi95/PsyMedRP-v1-20B and Undi95/MXLewd-L2-20B.

Warning: Cute alien girls inside!

Fourth iteration. Seems more quirky and creative but also perhaps a tad unstable and tipsy. Handle with care!

RECOMMENDED SETTINGS FOR ALL PsyMedLewd VERSIONS (based on KoboldCPP):

- Temperature: 1.3
- Max Ctx. Tokens: 4096
- Top p Sampling: 0.99
- Repetition Penalty: 1.09
- Amount to Gen.: 512
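If you drive KoboldCPP over its HTTP API instead of the web UI, the settings above map onto the generate request body. A minimal sketch, assuming a local KoboldCPP instance and its KoboldAI-compatible `/api/v1/generate` endpoint (field names may vary between builds, so check your instance's API docs):

```python
import json
from urllib import request

def build_payload(prompt: str) -> dict:
    # The recommended PsyMedLewd sampler settings as an API payload.
    return {
        "prompt": prompt,
        "temperature": 1.3,          # Temperature
        "top_p": 0.99,               # Top p Sampling
        "rep_pen": 1.09,             # Repetition Penalty
        "max_context_length": 4096,  # Max Ctx. Tokens
        "max_length": 512,           # Amount to Gen.
    }

def generate(prompt: str, host: str = "http://localhost:5001") -> str:
    # POST the payload and return the generated continuation.
    req = request.Request(
        f"{host}/api/v1/generate",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```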

Prompt template: Alpaca or ChatML
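For reference, these are the two prompt formats in their usual shapes (the standard Alpaca instruction block and ChatML turn markers), sketched as small helper functions:

```python
def alpaca_prompt(instruction: str) -> str:
    # Standard Alpaca instruction format.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

def chatml_prompt(system: str, user: str) -> str:
    # ChatML format: each turn is wrapped in <|im_start|>role ... <|im_end|>.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```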

##################################################################################################

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP merge method.
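SLERP (spherical linear interpolation) blends two weight tensors along the arc between their directions rather than along a straight line, which tends to preserve the magnitude structure of the weights better than plain averaging. A minimal NumPy sketch of the idea (mergekit's actual implementation handles more edge cases and operates per-tensor):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors at fraction t."""
    a, b = v0.ravel(), v1.ravel()
    # Angle between the two weight directions (normalized before the dot product)
    dot = np.clip(np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b)), -1.0, 1.0)
    omega = np.arccos(dot)
    so = np.sin(omega)
    if so < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    out = (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
    return out.reshape(v0.shape)
```

At t=0 this returns the first tensor exactly, at t=1 the second; the per-layer `t` values in the configuration below control how much each layer leans toward each parent model.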

Models Merged

The following models were included in the merge:

- Undi95/PsyMedRP-v1-20B
- Undi95/MXLewd-L2-20B

Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Undi95/PsyMedRP-v1-20B
        layer_range: [0, 62]  # PsyMedRP has 62 layers
      - model: Undi95/MXLewd-L2-20B
        layer_range: [0, 62]  # MXLewd has 62 layers
merge_method: slerp  # Changing to SLERP method
base_model: Undi95/PsyMedRP-v1-20B  # Focus on reasoning from PsyMedRP
parameters:
  t:
    - filter: self_attn
      value: [.3, .6, .9, .6, .3]  # smooth gradient of focus
    - filter: mlp
      value: [.3, .6, .9, .6, .3]  # consistent level of creativity and abstract reasoning
    - value: 0.639
dtype: bfloat16  # Use preferred dtype
```
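To reproduce a merge like this, save the YAML above as e.g. `config.yml` and run mergekit's CLI (command name per the mergekit README; exact flags may differ across versions):

```shell
# Install mergekit, then run the merge described by the YAML config.
pip install mergekit
mergekit-yaml config.yml ./PsyMedLewd_v4 --cuda  # --cuda is optional; needs a GPU
```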
Model size: 20B params · Tensor type: BF16 · Safetensors

Model tree for Elfrino/PsyMedLewd_v4
