---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# vulca2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with merge/reshape as the base model.
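
For intuition, the sketch below illustrates DARE-style linear merging on a single tensor: each model's task vector (its delta from the base) is randomly dropped, the surviving entries are rescaled, and the results are added to the base with per-model weights. This is a simplified illustration, not mergekit's implementation; the function name and the `density` value are assumptions for the example.

```python
# Minimal sketch of a DARE linear merge on one tensor (illustrative only).
import torch

def dare_linear_merge(base: torch.Tensor,
                      tuned: list[torch.Tensor],
                      weights: list[float],
                      density: float = 0.5) -> torch.Tensor:
    """Drop-And-REscale each task vector, then add a weighted sum to the base."""
    merged = base.clone()
    for t, w in zip(tuned, weights):
        delta = t - base                          # task vector relative to the base
        keep = torch.rand_like(delta) < density   # randomly retain ~`density` of entries
        delta = torch.where(keep, delta / density, torch.zeros_like(delta))  # rescale survivors
        merged += w * delta                       # weighted linear combination (no normalization)
    return merged
```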
### Models Merged
The following models were included in the merge:
* merge/apollobulk
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: merge/reshape
dtype: bfloat16
merge_method: dare_linear
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- sources:
  - layer_range: [0, 32]
    model: merge/reshape
    parameters:
      weight: [0.1, 0.9]
  - layer_range: [0, 32]
    model: merge/apollobulk
    parameters:
      weight: [0.9, 0.1]
tokenizer_source: base
```
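
As a usage sketch: with mergekit installed, a configuration like the one above is typically run with `mergekit-yaml config.yml ./output-dir`, and the merged weights can then be loaded through transformers. The `./vulca2` path below is an assumed local output directory, not a published repository.

```python
# Hypothetical example of loading the merged model; the ./vulca2 path is assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("./vulca2", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("./vulca2")

inputs = tokenizer("Hello,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```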