Update README.md
README.md CHANGED
@@ -28,11 +28,18 @@ license_link: LICENSE
# Llama-3-8B-Irene-v0.2

Mergin' o' models, ye say? Well, that be a task fit fer a clever gnome like meself! When combinin' similar models, I like to use model stock tae bring 'em together. And when I'm slerpin', I makes sure tae use a gradient that tapers off at both ends. That way, the model stays mostly uncensored, ye see.
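
That "gradient that tapers off at both ends" translates directly into mergekit's YAML: in a SLERP config, `t` can be given as a list of values that is interpolated across the layer stack. The sketch below is a minimal illustration of that shape only, not the actual recipe behind this merge; the model names, layer ranges, and `t` values are placeholders. (The "model stock" mentioned first is mergekit's separate `model_stock` merge method.) A config of this shape is what `mergekit-yaml config.yml ./output-dir` consumes.

```yaml
# Illustrative sketch only -- placeholder models, not this merge's real recipe.
slices:
  - sources:
      - model: placeholder/llama-3-8b-finetune-a
        layer_range: [0, 32]
      - model: placeholder/llama-3-8b-finetune-b
        layer_range: [0, 32]
merge_method: slerp
base_model: placeholder/llama-3-8b-finetune-a
parameters:
  t:
    # Tapers off at both ends: t=0 keeps the base model, so the first and
    # last layers stay close to finetune-a while finetune-b peaks mid-stack.
    - value: [0.0, 0.3, 0.5, 0.3, 0.0]
dtype: bfloat16
```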
Now, if I'm mergin' two uncensored models with Slerp, I just favors the one I want more o'! But when it comes tae makin' the gradient, I likes tae get wild and fluctuate between low and high values, ye know what I mean? It's like addin' a bit o' magic tae the mix, helps keep the results from gettin' too boring.
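
Again a sketch, not the real recipe: in mergekit's SLERP, `t: 0` returns the base model and `t: 1` the other, so favoring the one ye want more of means biasing the curve toward that side while the values bounce between low and high. All names and numbers below are placeholders.

```yaml
# Illustrative sketch only -- placeholder models and made-up t values.
slices:
  - sources:
      - model: placeholder/uncensored-a
        layer_range: [0, 32]
      - model: placeholder/uncensored-b
        layer_range: [0, 32]
merge_method: slerp
base_model: placeholder/uncensored-a
parameters:
  t:
    # Fluctuates between low and high rather than ramping smoothly, and
    # sits mostly above 0.5 so uncensored-b is favored overall.
    - value: [0.4, 0.9, 0.5, 0.8, 0.6]
dtype: bfloat16
```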
Course, this be just one gnome's way o' doin' things. I'm sure there be other clever methods out there.
## Merge Details

### Merge Method

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged using the SLERP merge method.
### Models Merged