---
base_model:
- migtissera/Tess-v2.5-Phi-3-medium-128k-14B
- jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
# QuantFactory/Ph3della3-14B-GGUF

This is a quantized version of [allknowingroger/Ph3della3-14B](https://huggingface.co/allknowingroger/Ph3della3-14B), created using [llama.cpp](https://github.com/ggerganov/llama.cpp).
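One way to run the GGUF locally is through the `llama-cpp-python` bindings. A minimal sketch, assuming you have already downloaded one of the quantized files (the filename below is hypothetical; use the quantization level you actually downloaded):

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model filename is hypothetical; point it at the GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Ph3della3-14B.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,  # context window; raise if you need longer prompts
)

output = llm(
    "Explain what a model merge is in one paragraph.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```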
# Original Model Card

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with [jpacifico/Chocolatine-14B-Instruct-DPO-v1.2](https://huggingface.co/jpacifico/Chocolatine-14B-Instruct-DPO-v1.2) as the base model.
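In rough terms, della_linear forms a task vector (the delta from the base model) for each source model, randomly drops delta entries with magnitude-dependent probabilities (the keep rate is set by `density`, and `epsilon` controls how much the drop probability varies with magnitude), rescales the surviving entries, and takes a weighted linear combination. A sketch of the update, not mergekit's exact implementation:

$$
\theta_{\text{merged}} = \theta_{\text{base}} + \lambda \sum_i w_i \,\frac{m_i \odot (\theta_i - \theta_{\text{base}})}{1 - p_i}
$$

Here $w_i$ is each model's `weight`, $m_i$ is the per-parameter keep mask with drop probabilities $p_i$, and $\lambda$ is the `lambda` scaling factor from the configuration below.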
### Models Merged

The following models were included in the merge:
* [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B)

### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
    parameters:
      weight: 0.5
      density: 0.8
  - model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
    parameters:
      weight: 0.5
      density: 0.8
merge_method: della_linear
base_model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
parameters:
  epsilon: 0.05
  lambda: 1
  int8_mask: true
dtype: bfloat16
tokenizer_source: union
```
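To reproduce the merge, save the YAML above as `config.yaml` and run mergekit, either via its `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./output-dir`) or through its Python API. A minimal sketch following the mergekit README; treat the exact option names as assumptions to check against your installed version:

```python
# Sketch of re-running the merge with mergekit's Python API
# (pip install mergekit; call shapes follow the mergekit README,
# so verify them against your installed version).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:  # the YAML above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Ph3della3-14B",  # hypothetical output directory
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # write tokenizer files into the output directory
    ),
)
```

Note that a 14B-parameter merge in bfloat16 needs enough disk and RAM to hold both source models, so expect the run to be I/O- and memory-heavy.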