# StarFuse-7B-DARE
StarFuse-7B-DARE is a DARE-TIES merge of the following models using [mergekit](https://github.com/arcee-ai/mergekit), with [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) as the base model:

- [FuseAI/FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM)
- [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
- [Weyaxi/Newton-7B](https://huggingface.co/Weyaxi/Newton-7B)
## 🧩 Configuration
```yaml
models:
  - model: openchat/openchat-3.5-0106
    # No parameters necessary for base model
  - model: FuseAI/FuseChat-7B-VaRM
    parameters:
      density: 0.5
      weight: 0.5
  - model: Nexusflow/Starling-LM-7B-beta
    parameters:
      density: 0.5
      weight: 0.5
  - model: Weyaxi/Newton-7B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: openchat/openchat-3.5-0106
parameters:
  int8_mask: true
dtype: bfloat16
```
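To give an intuition for what `dare_ties` with `density: 0.5` does, here is a toy, pure-Python sketch of the idea; it is illustrative only and not mergekit's actual implementation, and all function names are hypothetical. DARE sparsifies each model's parameter deltas (finetuned minus base) by random dropout and rescaling, and TIES then elects a majority sign per parameter and averages only the agreeing deltas:

```python
import random

def dare_sparsify(delta, density, rng):
    """DARE (Drop And REscale): keep each delta entry with probability
    `density`, rescale survivors by 1/density to preserve the expectation."""
    return [d / density if rng.random() < density else 0.0 for d in delta]

def dare_ties_merge(base, finetuned, density=0.5, weights=None, seed=0):
    """Toy DARE-TIES merge of flat parameter lists (not real mergekit code)."""
    rng = random.Random(seed)
    weights = weights or [1.0] * len(finetuned)
    # Deltas relative to the base model, sparsified by DARE.
    deltas = [dare_sparsify([f - b for f, b in zip(ft, base)], density, rng)
              for ft in finetuned]
    merged = []
    for i, b in enumerate(base):
        vals = [w * d[i] for w, d in zip(weights, deltas)]
        # TIES sign election: the sign of the weighted delta sum wins.
        sign = 1.0 if sum(vals) >= 0 else -1.0
        # Keep only deltas that agree with the elected sign, then average.
        agree = [v for v in vals if v * sign > 0]
        merged.append(b + (sum(agree) / len(agree) if agree else 0.0))
    return merged
```

With `density: 1.0` nothing is dropped and a single-model merge reduces to that model's weights; at `density: 0.5`, as in the configuration above, roughly half of each model's deltas survive and are rescaled by 2.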