This is a merge of pre-trained language models created using mergekit.
GGUF version: https://huggingface.co/Alsebay/Nutopia-7B-GGUF

Want more, or not what you're looking for? The imatrix and the remaining quants were made by mradermacher. Thanks to mradermacher for the help! :) https://huggingface.co/mradermacher/Nutopia-7B-GGUF
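If you are new to GGUF, the quantized files run under llama.cpp and its bindings. A minimal sketch with llama-cpp-python (the quant filename below is hypothetical; use whichever file you download from the repos above):

```python
# Sketch: running a GGUF quant of Nutopia-7B with llama-cpp-python.
# Assumes `pip install llama-cpp-python`; the filename is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="Nutopia-7B.Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=4096,                           # context window to allocate
)
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```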
This model was merged using the SLERP merge method.
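SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than along the straight line, which keeps the interpolated weights at a sensible scale. A minimal NumPy sketch of the core operation (illustrative only; mergekit's actual implementation also applies the per-filter t schedule shown in the config below):

```python
# Minimal SLERP sketch for two weight tensors of the same shape.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    shape = v0.shape
    v0, v1 = v0.ravel(), v1.ravel()
    # Angle between the tensors, computed from normalized copies.
    dot = np.clip(
        np.dot(v0 / (np.linalg.norm(v0) + eps), v1 / (np.linalg.norm(v1) + eps)),
        -1.0, 1.0,
    )
    if abs(dot) > 0.9995:  # nearly colinear: plain LERP is numerically safer
        out = (1 - t) * v0 + t * v1
    else:
        theta = np.arccos(dot)  # arc angle between the two weight vectors
        out = (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
    return out.reshape(shape)
```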
The following models were included in the merge:

* NurtureAI/neural-chat-7b-v3-1-16k
* NousResearch/Hermes-2-Pro-Mistral-7B
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: NurtureAI/neural-chat-7b-v3-1-16k
        layer_range: [0, 32]
      - model: NousResearch/Hermes-2-Pro-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: NurtureAI/neural-chat-7b-v3-1-16k
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
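In the t schedule, each list of five values is spread as a gradient across the 32 layers: self-attention tensors start near the neural-chat base (t = 0) and end near Hermes-2-Pro (t = 1), the MLP tensors use the mirrored gradient, and every other tensor is blended evenly at t = 0.5.

To reproduce the merge, save the YAML above to a file and run it through mergekit. Below is a sketch using mergekit's documented Python API (names may vary between versions; the config and output paths are hypothetical, and the mergekit-yaml CLI is an equivalent one-liner):

```python
# Sketch: reproducing this merge with mergekit's Python API.
# Assumes `pip install mergekit`; paths are hypothetical.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("nutopia-slerp.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Nutopia-7B",  # output directory for the merged model
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```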