---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- Model stock
---
# Merge_XL_model_Stock
Of course, the model is still fully focused on uncensored, long-context roleplay and storytelling. This is by far the best iteration so far.

This version switches to Smaug Instruct 32K as the base model, expanded with Giraffe and Gradient to keep a robust long-context window. Higgs and Cat cover most of the story and RP aspects, while Hermes and Chinese Chat contribute overall intelligence and understanding.
## Merge Details

### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with \Smaug-Llama-3-70B-Instruct-32K as the base.
### Models Merged
The following models were included in the merge:
- \Llama-3-Giraffe-70B-Instruct
- \Llama-3-70B-Instruct-Gradient-262k
- \Hermes-2-Theta-Llama-3-70B
- \Higgs-Llama-3-70B
- \Llama3-70B-Chinese-Chat
- \Meta-LLama-3-Cat-A-LLama-70b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: \Smaug-Llama-3-70B-Instruct-32K
  - model: \Llama-3-70B-Instruct-Gradient-262k
  - model: \Llama-3-Giraffe-70B-Instruct
  - model: \Higgs-Llama-3-70B
  - model: \Llama3-70B-Chinese-Chat
  - model: \Meta-LLama-3-Cat-A-LLama-70b
  - model: \Hermes-2-Theta-Llama-3-70B
merge_method: model_stock
base_model: \Smaug-Llama-3-70B-Instruct-32K
dtype: bfloat16
```
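For reference, here is a minimal reproduction sketch using mergekit's Python API. It assumes mergekit is installed (`pip install mergekit`), the YAML above is saved as `config.yml`, and the listed model paths exist locally; the output directory name is a placeholder.

```python
# Minimal sketch: run the Model Stock merge from the YAML config above.
# Assumptions: mergekit installed, config saved as config.yml, model
# paths available locally. Output directory is a placeholder.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Merge_XL_model_Stock",   # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```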
Any suggestions are very welcome. My personal sampling settings are:

```json
{
  "temp": 1,
  "temperature_last": true,
  "top_p": 1,
  "top_k": 0,
  "top_a": 0,
  "tfs": 1,
  "typical_p": 1,
  "min_p": 0.05,
  "rep_pen": 1.05,
  "rep_pen_range": 4096,
  "rep_pen_decay": 0,
  "rep_pen_slope": 1
}
```
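These are SillyTavern-style sampler keys, and not every one has a Hugging Face transformers equivalent. As a rough sketch, the subset that transformers does support could be applied like this (the model path is a placeholder, and `min_p` requires a recent transformers release):

```python
# Hypothetical usage sketch: maps the subset of the sampler settings above
# that transformers supports (top_a, tfs, temperature_last, and
# rep_pen_range have no direct equivalent in generate()).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./Merge_XL_model_Stock"  # placeholder: path to the merged model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Once upon a time,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,          # "temp": 1
    top_p=1.0,                # "top_p": 1 (effectively disabled)
    top_k=0,                  # "top_k": 0 disables top-k filtering
    typical_p=1.0,            # "typical_p": 1 (effectively disabled)
    min_p=0.05,               # "min_p": 0.05 (needs a recent transformers)
    repetition_penalty=1.05,  # "rep_pen": 1.05
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```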