GGUF / IQ / Imatrix for Infinite-Laymons-9B
Why Importance Matrix?
The importance matrix, at least based on my testing, has been shown to improve output quality for "IQ"-type quantizations, where the compression becomes quite heavy. The imatrix performs a calibration pass over a provided dataset, and testing suggests that semi-randomized calibration data helps preserve the more important weight segments as compression is applied.
Related discussions on GitHub: [1] [2]
The imatrix.txt file that I used contains general, semi-random data, with some custom kink.
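Roughly, the workflow looks like the sketch below, based on llama.cpp's imatrix and quantize tools. Binary names differ between builds (older releases ship imatrix and quantize rather than llama-imatrix and llama-quantize), and the file names here are illustrative:

```sh
# Build an importance matrix by running calibration text through the
# full-precision model (imatrix.txt stands in for the data described above).
./llama-imatrix -m model-f16.gguf -f imatrix.txt -o imatrix.dat

# Quantize with the imatrix so heavily compressed IQ types (e.g. IQ3_XXS)
# keep more precision in the weights the calibration marked as important.
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-IQ3_XXS.gguf IQ3_XXS
```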
Infinite-Laymons-9B
Infinite-Laymons-9B is intended for fictional role-play and storytelling.
The focus is on original responses and the elimination, or at least reduction, of refusals.
Merge Details
This is a merge of pre-trained language models created using mergekit.
Merge Method
This model was merged using the passthrough merge method. Passthrough stacks the specified layer ranges from the source models without interpolating weights; here, two 20-layer slices (overlapping on layers 12-19) are concatenated into a 40-layer, roughly 9B-parameter model.
Models Merged
The following models were included in the merge:
- Nitral-AI/Infinitely-Laydiculous-7B
- ABX-AI/Infinite-Laymons-7B
Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Nitral-AI/Infinitely-Laydiculous-7B
        layer_range: [0, 20]
  - sources:
      - model: ABX-AI/Infinite-Laymons-7B
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
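As a minimal sketch, a merge like this can be reproduced with mergekit's mergekit-yaml entry point (assuming the configuration above is saved as config.yaml; the output path is illustrative):

```sh
pip install mergekit
# mergekit-yaml reads the slice configuration and writes the merged model
mergekit-yaml config.yaml ./Infinite-Laymons-9B
```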
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 67.29 |
| AI2 Reasoning Challenge (25-shot) | 65.61 |
| HellaSwag (10-shot) | 84.14 |
| MMLU (5-shot) | 64.53 |
| TruthfulQA (0-shot) | 54.87 |
| Winogrande (5-shot) | 80.82 |
| GSM8k (5-shot) | 53.75 |