---
base_model:
- InferenceIllusionist/Excalibur-7b-DPO
- yam-peleg/Experiment26-7B
- MTSAIR/multi_verse_model
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
library_name: transformers
tags:
- mergekit
- merge
---
# hapsburg-merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [InferenceIllusionist/Excalibur-7b-DPO](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO) as the base.
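DARE TIES combines two ideas: DARE sparsifies each model's task vector (its delta from the base) by randomly dropping parameters and rescaling the survivors, and TIES resolves sign conflicts between the sparsified deltas before summing them into the base. The following is a minimal PyTorch sketch of that idea using the densities and weights from the configuration below; the toy tensors are illustrative and the `normalize: true` step is omitted, so this is not mergekit's actual implementation:

```python
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor, density: float) -> torch.Tensor:
    """DARE: drop each element of the task vector (tuned - base) with
    probability 1 - density, then rescale survivors by 1/density so the
    expected magnitude of the delta is preserved."""
    delta = tuned - base
    keep = torch.rand_like(delta) < density  # Bernoulli keep-mask
    return delta * keep / density

torch.manual_seed(0)

# Toy stand-ins for one weight matrix of the base and of each merged model,
# paired with the weights from the config below.
base = torch.randn(4, 4)
tuned = [(torch.randn(4, 4), 0.4),   # multi_verse_model
         (torch.randn(4, 4), 0.3),   # Experiment26-7B
         (torch.randn(4, 4), 0.3)]   # T3Q-Mistral-Orca-Math-DPO

deltas = torch.stack([w * dare_delta(base, t, density=0.53) for t, w in tuned])

# TIES-style sign election: per parameter, keep only contributions whose
# sign agrees with the majority sign of the summed deltas, then merge.
elected = torch.sign(deltas.sum(dim=0))
agree = torch.sign(deltas) == elected
merged = base + (deltas * agree).sum(dim=0)
print(merged.shape)  # torch.Size([4, 4])
```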
### Models Merged

The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model)
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: InferenceIllusionist/Excalibur-7b-DPO
  - model: MTSAIR/multi_verse_model
    parameters:
      density: 0.53
      weight: 0.4
  - model: yam-peleg/Experiment26-7B
    parameters:
      density: 0.53
      weight: 0.3
  - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: InferenceIllusionist/Excalibur-7b-DPO
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
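A config like the one above is typically applied with mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./hapsburg-merge`), and the output is a standard Mistral-architecture checkpoint. Below is a minimal loading sketch with `transformers`, assuming the merged weights sit in a local `./hapsburg-merge` directory; the path and prompt are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./hapsburg-merge"  # placeholder: local merge output or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge's `dtype: bfloat16`
    device_map="auto",           # requires the `accelerate` package
)

prompt = "Solve step by step: what is 17 * 23?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```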