---
base_model:
- grimjim/llama-3-merge-virt-req-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
---
# Llama-3-8B-Irene-v0.2

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
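SLERP interpolates between two weight tensors along the arc of a hypersphere rather than along a straight line, which preserves the magnitude characteristics of the blended weights better than a plain weighted average. A minimal sketch of the interpolation itself, using NumPy on flat vectors (this is illustrative, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between vectors v0 and v1.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the two directions.
    """
    # Angle between the two vectors, from their normalized dot product
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)

    # Nearly parallel vectors: fall back to linear interpolation
    if theta < eps:
        return (1 - t) * v0 + t * v1

    # Standard SLERP weights
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1

# Example: halfway between two orthogonal unit vectors
print(slerp(0.5, np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```

In the merge above, this interpolation is applied layer by layer between the two source models, with the per-layer factor `t` taken from the configuration below.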
### Models Merged

The following models were included in the merge:

* Mergekit/llama3-SOVL-v1
* [grimjim/llama-3-merge-virt-req-8B](https://huggingface.co/grimjim/llama-3-merge-virt-req-8B)

### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: grimjim/llama-3-merge-virt-req-8B
        layer_range: [0, 32]
      - model: Mergekit/llama3-SOVL-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: grimjim/llama-3-merge-virt-req-8B
parameters:
  t:
    - value: [0.5, 0.35, 0.55, 0.35, 0.75, 0.35, 0.90, 0.35, 0.75, 0.35, 0.55, 0.35, 0.5]
dtype: bfloat16
```
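The `t` list is a mergekit gradient: the anchor values are spread evenly across the layer range and linearly interpolated to give each of the 32 layers its own interpolation factor, so the blend varies with depth (peaking at 0.90 near the middle layers). A small sketch of how such a gradient expands to per-layer values, assuming mergekit's documented linear interpolation between evenly spaced anchors:

```python
import numpy as np

# Anchor values copied from the config above
anchors = [0.5, 0.35, 0.55, 0.35, 0.75, 0.35, 0.90, 0.35, 0.75, 0.35, 0.55, 0.35, 0.5]
n_layers = 32

# Place the anchors and the layers on a shared 0..1 axis,
# then linearly interpolate an anchor value for each layer.
anchor_pos = np.linspace(0.0, 1.0, len(anchors))
layer_pos = np.linspace(0.0, 1.0, n_layers)
per_layer_t = np.interp(layer_pos, anchor_pos, anchors)

print(np.round(per_layer_t, 3))
```

The resulting schedule keeps the first and last layers close to an even 0.5 blend while the interior oscillates between the two models, a common way to mix attention- and MLP-heavy depths differently.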