---
library_name: transformers
tags:
  - mergekit
  - merge
base_model: []
model-index:
  - name: Nemomix-v4.0-12B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 55.75
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MarinaraSpaghetti/Nemomix-v4.0-12B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 32.88
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MarinaraSpaghetti/Nemomix-v4.0-12B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 9.21
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MarinaraSpaghetti/Nemomix-v4.0-12B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 5.59
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MarinaraSpaghetti/Nemomix-v4.0-12B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 12.76
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MarinaraSpaghetti/Nemomix-v4.0-12B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 29.03
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MarinaraSpaghetti/Nemomix-v4.0-12B
          name: Open LLM Leaderboard
---

The best one so far out of all the Nemomixes. Use this one.

# Information

## Description

My main goal is to merge the smartness of the base Instruct Nemo with the improved prose of the various roleplaying fine-tunes. This one seems to be the best of the bunch so far. All credits and thanks go to Intervitens, Mistralai, Invisietch, and NeverSleep for providing the amazing models used in the merge.

## Instruct

Mistral Instruct.

```
<s>[INST] {system} [/INST] {assistant}</s>[INST] {user} [/INST]
```
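If you drive the model through `transformers` rather than a frontend, the tokenizer's built-in chat template should render this format for you. A minimal sketch, assuming the repo ships a chat template that accepts a system turn (as the format above suggests):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MarinaraSpaghetti/Nemomix-v4.0-12B")

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Hello!"},
]

# Renders the conversation into the <s>[INST] ... [/INST] format shown above,
# leaving the prompt open for the assistant's reply.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```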

## Settings

A lower Temperature of 0.35 is recommended, although I also had luck with Temperatures above one (1.0-1.2) when cranking up the Min P (0.01-0.1). Run with base DRY of 0.8/1.75/2/0 (multiplier/base/allowed length/penalty range) and you're good to go.
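For a plain `transformers` run, the Temperature and Min P recommendations map directly onto `generate()` sampling arguments (`min_p` requires a recent transformers release); DRY is a frontend-side sampler (e.g. in SillyTavern or koboldcpp) with no direct equivalent here, so this sketch omits it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MarinaraSpaghetti/Nemomix-v4.0-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.35,  # the recommended lower temperature
    min_p=0.05,        # inside the suggested 0.01-0.1 band
    max_new_tokens=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```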

## Presets

You can use my custom context/instruct/parameters presets for the model from here:

https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main

## GGUF

https://huggingface.co/MarinaraSpaghetti/Nemomix-v4.0-12B-GGUF

## Other Versions

V1: https://huggingface.co/MarinaraSpaghetti/Nemomix-v1.0-12B

V2: https://huggingface.co/MarinaraSpaghetti/Nemomix-v2.0-12B

V3: https://huggingface.co/MarinaraSpaghetti/Nemomix-v3.0-12B

V4: https://huggingface.co/MarinaraSpaghetti/Nemomix-v4.0-12B

# Nemomix-v4.0-12B

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with `F:\mergekit\mistralaiMistral-Nemo-Base-2407` as the base.

### Models Merged

The following models were included in the merge:

* `F:\mergekit\intervitens_mini-magnum-12b-v1.1`
* `F:\mergekit\mistralaiMistral-Nemo-Instruct-2407`
* `F:\mergekit\invisietch_Atlantis-v0.1-12B`
* `F:\mergekit\NeverSleepHistorical_lumi-nemo-e2.0`

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: F:\mergekit\invisietch_Atlantis-v0.1-12B
    parameters:
      weight: 0.16
      density: 0.4
  - model: F:\mergekit\mistralaiMistral-Nemo-Instruct-2407
    parameters:
      weight: 0.23
      density: 0.5
  - model: F:\mergekit\NeverSleepHistorical_lumi-nemo-e2.0
    parameters:
      weight: 0.27
      density: 0.6
  - model: F:\mergekit\intervitens_mini-magnum-12b-v1.1
    parameters:
      weight: 0.34
      density: 0.8
merge_method: della_linear
base_model: F:\mergekit\mistralaiMistral-Nemo-Base-2407
parameters:
  epsilon: 0.05
  lambda: 1
  int8_mask: true
dtype: bfloat16
```
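To reproduce a merge like this one, the config can be fed to mergekit via its `mergekit-yaml` CLI or from Python. A minimal sketch of the Python route, following mergekit's documented API; the filename is hypothetical and the `F:\mergekit\...` paths stand in for wherever you keep the source models:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML shown above (saved locally as nemomix-v4.yml, a
# hypothetical filename) and validate it into a merge configuration.
with open("nemomix-v4.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the della_linear merge and write the result to out_path.
run_merge(
    merge_config,
    out_path="./Nemomix-v4.0-12B",
    options=MergeOptions(cuda=True, copy_tokenizer=True, lazy_unpickle=True),
)
```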

# Ko-fi

Enjoying what I do? Consider donating here, thank you!

https://ko-fi.com/spicy_marinara

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 24.20 |
| IFEval (0-Shot)     | 55.75 |
| BBH (3-Shot)        | 32.88 |
| MATH Lvl 5 (4-Shot) |  9.21 |
| GPQA (0-shot)       |  5.59 |
| MuSR (0-shot)       | 12.76 |
| MMLU-PRO (5-shot)   | 29.03 |
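For reference, the Avg. row is consistent with a plain mean of the six benchmark scores; a quick check (the averaging rule is an assumption about how the leaderboard aggregates):

```python
scores = [55.75, 32.88, 9.21, 5.59, 12.76, 29.03]

# Mean of the six benchmark scores from the table above.
print(f"{sum(scores) / len(scores):.2f}")  # -> 24.20
```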