---
base_model: []
tags:
  - mergekit
  - merge
---

# friend-8x7B-hf

*friends*

A multi-stage merge this time. Actually decent, from my testing.

Use ChatML or Alpaca; both seemed to work, though I liked the outputs from ChatML more.
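For reference, a minimal sketch of the ChatML template (the system/user text here is illustrative, not taken from this card or the repo's tokenizer config):

```python
# Minimal sketch of the ChatML prompt format recommended above.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Write me a short story."))
```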

### temp-output-base

```yaml
models:
  - model: mistralai/Mixtral-8x7B-v0.1+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
    parameters:
      weight: 0.65
  - model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters:
      weight: 0.25
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-v0.1
dtype: float16
```
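For intuition, `task_arithmetic` adds each fine-tune's weighted delta against the base model. A minimal NumPy sketch of that update rule, with toy tensors standing in for real weights:

```python
import numpy as np

# merged = base + sum_i(w_i * (model_i - base)); applied per weight tensor.
def task_arithmetic(base, models, weights):
    merged = base.copy()
    for m, w in zip(models, weights):
        merged += w * (m - base)  # scaled "task vector" of each fine-tune
    return merged

base = np.zeros((4, 4))
daybreak = np.ones((4, 4))          # stands in for the daybreak-peft weights
case_briefs = np.full((4, 4), 2.0)  # stands in for the case-briefs weights
merged = task_arithmetic(base, [daybreak, case_briefs], [0.65, 0.25])
print(merged[0, 0])  # 0.65 * 1 + 0.25 * 2 = 1.15
```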

### temp-output-instruct

```yaml
models:
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1+SeanWu25/Mixtral_8x7b_Medicine
    parameters:
      weight: 0.33
  - model: NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
    parameters:
      weight: 0.15
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
dtype: float16
```
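The `+` in the `model:` fields is mergekit's syntax for applying a LoRA adapter on top of a base model before merging. Roughly the same step spelled out with `peft`, as a sketch (the identifiers are the ones from the config above):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# "base+adapter" from the config, spelled out: load the base, apply the
# LoRA, then fold the adapter into the dense weights before merging.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
model = PeftModel.from_pretrained(base, "SeanWu25/Mixtral_8x7b_Medicine")
model = model.merge_and_unload()
```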

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
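SLERP interpolates along the arc between two weight vectors rather than the straight line between them. A minimal sketch of the formula, applied per flattened tensor (toy vectors, not real model weights):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Interpolate along the arc between two (flattened) weight tensors.
    n0, n1 = v0 / np.linalg.norm(v0), v1 / np.linalg.norm(v1)
    omega = np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0))  # angle between them
    if omega < eps:  # nearly parallel: plain linear interpolation is fine
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

a, b = np.random.randn(8), np.random.randn(8)
halfway = slerp(0.5, a, b)  # t: 0.5, matching the config below
```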

### Models Merged

The following models were included in the merge:

* ./temp-output-base
* ./temp-output-instruct

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./temp-output-base
  - model: ./temp-output-instruct
merge_method: slerp
base_model: ./temp-output-base
parameters:
  t:
    - value: 0.5
dtype: float16
```
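To reproduce the final stage, the config above can be saved to a file and run through mergekit (e.g. `mergekit-yaml config.yml ./friend-8x7B-hf`); the result loads like any Mixtral checkpoint. A minimal loading sketch, assuming the repo id matches the card title:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "rAIfle/friend-8x7B-hf"  # repo id assumed from the card title
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

prompt = "<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```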