---
license: other
tags:
  - merge
  - mergekit
  - lazymergekit
  - microsoft/Orca-2-13b
  - KoboldAI/LLaMA2-13B-Psyfighter2
license_name: microsoft-research-license
base_model:
  - KoboldAI/LLaMA2-13B-Psyfighter2
  - microsoft/Orca-2-13b
---

# Psyfighter2-Orca2-ties

Psyfighter2-Orca2-ties is a merge of the following models using mergekit:

- KoboldAI/LLaMA2-13B-Psyfighter2
- microsoft/Orca-2-13b

This is the very first merge I have ever attempted. The motivation behind it is to create a 13B version of jebcarter/psyonic-cetacean-20B. I don't have a good GPU (a GTX 1660 with 6 GB of VRAM), so although I can merge the model, I cannot actually run it. However, the Open LLM Leaderboard ranks this merge at 63.48 average points, higher than both KoboldAI/LLaMA2-13B-Psyfighter2 and jebcarter/psyonic-cetacean-20B, so I must have done something right. The next step is to quantize this merge to GGUF so I can actually run it with KoboldCpp.
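Once a GGUF quantization of the merge exists, any llama.cpp-based runtime can load it. Below is a minimal sketch using the llama-cpp-python bindings; the file name, quantization level, and GPU offload count are hypothetical, and KoboldCpp would consume the same GGUF file through its own interface instead.

```python
# Sketch only: the GGUF file name and settings below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="psyfighter2-orca2-ties.Q4_K_M.gguf",  # hypothetical quantized file
    n_ctx=4096,       # context window
    n_gpu_layers=20,  # offload as many layers as a 6 GB GPU allows; 0 for CPU-only
)

result = llm(
    "Write a short scene about a psychic whale.",
    max_tokens=200,
    temperature=0.8,
)
print(result["choices"][0]["text"])
```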

## 🧩 Configuration

```yaml
models:
  - model: KoboldAI/LLaMA2-13B-Psyfighter2
  - model: microsoft/Orca-2-13b
    parameters:
      density: 0.40
      weight: [0, 0.3, 0.7, 1]
merge_method: ties
base_model: KoboldAI/LLaMA2-13B-Psyfighter2
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
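
## 💻 Usage

For reference, here is a minimal usage sketch with 🤗 Transformers, assuming the merged weights are published under a hypothetical repo id (replace it with the actual one):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Psyfighter2-Orca2-ties"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's dtype
    device_map="auto",
)

prompt = "Describe a psychic whale in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```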