---
base_model:
- princeton-nlp/gemma-2-9b-it-SimPO
- TheDrummer/Gemmasutra-9B-v1
tags:
- mergekit
- merge
- roleplay
- sillytavern
- gemma2
language:
- en
---

All quants were made using the imatrix option with the dataset provided by bartowski [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).

## SillyTavern

### Text Completion presets

```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```

A rough `transformers` approximation of these settings is sketched at the end of this card.

### Advanced Formatting

Context & Instruct presets for Gemma are available [here](https://huggingface.co/tannedbum/ST-Presets/tree/main).

IMPORTANT: Instruct Mode must be Enabled.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
* [TheDrummer/Gemmasutra-9B-v1](https://huggingface.co/TheDrummer/Gemmasutra-9B-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: TheDrummer/Gemmasutra-9B-v1
    layer_range: [0, 42]
  - model: princeton-nlp/gemma-2-9b-it-SimPO
    layer_range: [0, 42]
merge_method: slerp
base_model: TheDrummer/Gemmasutra-9B-v1
parameters:
  t:
  - filter: self_attn
    value: [0.2, 0.4, 0.6, 0.2, 0.4]
  - filter: mlp
    value: [0.8, 0.6, 0.4, 0.8, 0.6]
  - value: 0.4
dtype: bfloat16
```
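For intuition, the `t` schedule above controls how far each layer is rotated away from the base model (`t = 0`, Gemmasutra) toward the other endpoint (`t = 1`, SimPO), with separate curves for self-attention and MLP weights and `0.4` as the fallback for everything else. Below is a minimal, illustrative sketch of SLERP on a single pair of weight tensors; mergekit's actual implementation is more involved (per-layer interpolation of the `t` values, dtype handling, edge cases), so treat this as intuition rather than the exact code.

```python
# Minimal sketch of spherical linear interpolation (SLERP) between
# two weight tensors. Illustrative only; not mergekit's implementation.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Interpolate from tensor `a` (t=0) toward tensor `b` (t=1) along the arc between them."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.acos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```

Unlike a straight weighted average, SLERP preserves the magnitude-independent geometry of the two weight vectors, which is why it is a popular choice for two-model merges like this one.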
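If you are not running the model through SillyTavern, the Text Completion preset above can be approximated directly with `transformers`. This is a hedged sketch: the repo id is a placeholder (substitute the actual model id), `min_p` requires a recent `transformers` release, and `smooth_factor`/`smooth_curve` (SillyTavern's smooth sampling) have no direct `generate()` equivalent, so they are omitted.

```python
# Hedged usage sketch: approximating the recommended sampler settings
# with transformers. Repo id below is a placeholder, not the real model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-merged-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Gemma-2 chat templates accept user/assistant turns (no system role).
messages = [{"role": "user", "content": "Write a short scene introduction."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.9,        # temp
    top_k=30,
    top_p=0.75,
    min_p=0.2,
    repetition_penalty=1.1, # rep_pen
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```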