---
license: cc-by-nc-4.0
---

# MXLewd-L2-20B

Merge:

```yaml
layer_slices:
  - model: ./MXLewd-L2-20B-part2
    start: 0
    end: 16
  - model: ./MXLewd-L2-20B-part1
    start: 8
    end: 20
  - model: ./MXLewd-L2-20B-part2
    start: 17
    end: 32
  - model: ./MXLewd-L2-20B-part1
    start: 21
    end: 40
```

Part 2 is ReMM (0.33) merged with Xwin (0.66).

Part 1 is Xwin (0.33) merged with MLewd (0.66).
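As a rough illustration, the slice bounds above appear to use Python-style end-exclusive ranges: the four slices contribute 16 + 12 + 15 + 19 = 62 layers, the usual depth for a Llama-2 20B stack built from two 40-layer 13B parts. A minimal sketch of that arithmetic (the slice list is copied from the config; the end-exclusive assumption is mine, not stated in this card):

```python
# Sketch: compute the layer layout of the merged model, assuming
# end-exclusive slice bounds (an assumption; the merge tool's exact
# semantics are not documented here).
slices = [
    ("part2", 0, 16),
    ("part1", 8, 20),
    ("part2", 17, 32),
    ("part1", 21, 40),
]

layout = []
for source, start, end in slices:
    layout.extend((source, layer) for layer in range(start, end))

print(len(layout))   # 62 layers total
print(layout[:3])    # [('part2', 0), ('part2', 1), ('part2', 2)]
```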

Models used:

- Undi95/MLewd-L2-13B-v2-3
- Undi95/ReMM-v2.1-L2-13B
- Xwin-LM/Xwin-LM-13B-V0.1

Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
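A short usage sketch with the Hugging Face transformers library, assuming the model is published under the repo id `Undi95/MXLewd-L2-20B` (an assumption based on this card's title):

```python
# Sketch: format an Alpaca-style prompt and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

model_id = "Undi95/MXLewd-L2-20B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    ALPACA_TEMPLATE.format(prompt="Write a haiku about merging models."),
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```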

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 51.29 |
| ARC (25-shot)         | 63.23 |
| HellaSwag (10-shot)   | 85.33 |
| MMLU (5-shot)         | 57.36 |
| TruthfulQA (0-shot)   | 51.65 |
| Winogrande (5-shot)   | 76.09 |
| GSM8K (5-shot)        | 10.92 |
| DROP (3-shot)         | 14.46 |
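The Avg. row appears to be the plain mean of the seven benchmark scores, which a quick check reproduces:

```python
# Sketch: verify that Avg. is the mean of the seven benchmark scores.
scores = [63.23, 85.33, 57.36, 51.65, 76.09, 10.92, 14.46]
print(round(sum(scores) / len(scores), 2))  # 51.29
```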