grimjim committed
Commit e407f8d
1 Parent(s): 8d7114c

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -16,6 +16,8 @@ The Interwoven Depth Up-Scaling merge formula was adapted from [Sanji Watsuki's
 
 I consider this to be a negative result, but perhaps an interesting one. I've tested casually with temperature 0.7-1.2 and minP 0.01-0.03, with both Alpaca and ChatML prompts. The resulting text is interesting for RP due to its chaotic variation: output mostly stays grammatically correct, but it easily veers too far into chaos (e.g., abruptly switching languages) and has difficulty tracking details. Given that, the inherited 8K context length is of dubious benefit.
 
+Additional training might smooth the transition between models, but that hypothesis is untested.
+
 ## Merge Details
 ### Merge Method
 
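
For context, here is a minimal sketch of the kind of casual test the paragraph above describes: sampling with temperature in the 0.7-1.2 range and minP in the 0.01-0.03 range under an Alpaca-style prompt. It assumes a recent Hugging Face transformers release (which supports `min_p` sampling), and the model id `grimjim/example-merge` is a placeholder, not this repository's actual name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id; substitute the actual merged model repo.
model_id = "grimjim/example-merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt; a ChatML test would instead wrap each turn in
# <|im_start|> ... <|im_end|> tags.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a rainy harbor town.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling settings from the README paragraph: temperature 0.7-1.2, minP 0.01-0.03.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,
    min_p=0.02,
    max_new_tokens=256,
)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```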