---
license: apache-2.0
language:
- en
tags:
- merge
---

![image/png](https://i.ibb.co/Qr4BYgc/1.png)

Test merge. An attempt at a model that is good at RP, ERP, and general tasks with 128k context. Every model in this merge uses `Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context` instead of the regular MistralYarn 128k. The reason is that I believe Epiculous merged it with Mistral Instruct v0.2 to keep the first 32k of context as good as possible; if not, that's a shame.


Here is the "family tree" of this model, im not writing full model names cause they long af
### NeuralKunoichi-EroSumika 4x7B
```
* NeuralKunoichi-EroSumika 4x7B
	*(1) Kunocchini-7b-128k
	|
	*(2) Mistral-Instruct-v0.2-128k
		* Mistral-7B-Instruct-v0.2
		|
		* Fett-128k
	|
	*(3) Erosumika-128k
		* Erosumika 7B
		|
		* Fett-128k
	|
	*(4) Mistral-NeuralHuman-128k
		* Fett-128k
		|
		* Mistral-NeuralHuman
			* Mistral_MoreHuman
			|
			* Mistral-Neural-Story
```
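For reference, a 4x7B merge like this is typically assembled with mergekit's MoE mode. Below is a minimal sketch of what such a config could look like, assuming the standard mergekit-moe YAML format; the model paths are the shortened names from the tree above (substitute the full Hugging Face repo names), and the base model and prompts are illustrative assumptions, not the actual config used for this model.

```yaml
# Hypothetical mergekit-moe config sketch -- not the actual config for this merge.
# Model paths are shortened placeholders; use the full repo names in practice.
base_model: Mistral-Instruct-v0.2-128k   # assumed shared backbone for the router
gate_mode: hidden                        # initialize routing from hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: Kunocchini-7b-128k
    positive_prompts:
      - "roleplay"
  - source_model: Mistral-Instruct-v0.2-128k
    positive_prompts:
      - "general assistant tasks"
  - source_model: Erosumika-128k
    positive_prompts:
      - "erp"
  - source_model: Mistral-NeuralHuman-128k
    positive_prompts:
      - "creative writing"
```

With a config like this, `mergekit-moe config.yml ./output-dir` would build the 4x7B model, with each expert biased toward the kind of prompts listed for it.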