xxx777xxxASD committed on
Commit
8167bc3
1 Parent(s): 9914062

Update README.md

Files changed (1)
  1. README.md +10 -1
README.md CHANGED
@@ -7,10 +7,19 @@ tags:
 ---
 
 Experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.
+I'm not sure, but it should be better than the [first version](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B).
 
 ### Llama 3 ChaoticSoliloquy-v1.5-4x8B
 ```
-
+base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1
+gate_mode: random
+dtype: bfloat16
+experts_per_token: 2
+experts:
+  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.7-L3-8B
+  - source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1
+  - source_model: openlynn_Llama-3-Soliloquy-8B
+  - source_model: Sao10K_L3-Solana-8B-v1
 ```
 
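The `experts_per_token: 2` line in the added config means that at inference time a gate scores all four experts for each token and only the two highest-scoring experts run. A minimal sketch of that top-k routing step in plain Python (the function name, gate scores, and selection logic here are illustrative, not the merge tool's actual internals):

```python
# Sketch of MoE top-k routing: with experts_per_token = 2, each token
# is processed by only the 2 of 4 experts the gate scores highest.

def route(gate_scores, experts_per_token=2):
    """Return indices of the top-k experts for one token, best first."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return ranked[:experts_per_token]

# The four source models from the config above, in order.
experts = [
    "ChaoticNeutrals_Poppy_Porpoise-v0.7-L3-8B",
    "NeverSleep_Llama-3-Lumimaid-8B-v0.1",
    "openlynn_Llama-3-Soliloquy-8B",
    "Sao10K_L3-Solana-8B-v1",
]

# Hypothetical gate scores for a single token.
scores = [0.1, 0.7, 0.05, 0.4]
active = [experts[i] for i in route(scores)]
```

Note that `gate_mode: random` in the config initializes these gate weights randomly rather than deriving them from prompts, so which experts fire for a given token is not hand-tuned.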