---
language:
  - en
license: apache-2.0
datasets:
  - HuggingFaceH4/ultrachat_200k
model-index:
  - name: Mixtral-8x7b-v0.1-sft
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 66.55
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vistagi/Mixtral-8x7b-v0.1-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 86.4
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vistagi/Mixtral-8x7b-v0.1-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 71.65
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vistagi/Mixtral-8x7b-v0.1-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 46.74
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vistagi/Mixtral-8x7b-v0.1-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 81.53
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vistagi/Mixtral-8x7b-v0.1-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 56.18
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vistagi/Mixtral-8x7b-v0.1-sft
          name: Open LLM Leaderboard
---

Introduction

This model, vistagi/Mixtral-8x7b-v0.1-sft, was trained on the HuggingFaceH4/ultrachat_200k dataset via supervised fine-tuning (SFT), using Mixtral-8x7b-v0.1 as the base model. Training was performed in bfloat16 precision using LoRA.
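
For reference, below is a minimal usage sketch with the transformers library. The prompt and generation settings are illustrative only and are not taken from the model's training or evaluation setup.

```python
# Illustrative inference sketch: loads the model in bfloat16,
# matching the precision used during fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vistagi/Mixtral-8x7b-v0.1-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain the difference between supervised fine-tuning and pretraining."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```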

Details

Used Libraries

  • torch
  • deepspeed
  • pytorch lightning
  • transformers
  • peft
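
To illustrate how these libraries fit together, here is a hedged sketch of a LoRA SFT setup in bfloat16 with peft and transformers. The LoRA hyperparameters (r, lora_alpha, target_modules) and the base-model hub id are assumptions for illustration, not the configuration actually used to train this model.

```python
# Illustrative LoRA SFT setup with peft + transformers in bfloat16.
# LoRA hyperparameters below are assumed, not this model's actual config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed hub id for the base model named in the card

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                      # assumed rank
    lora_alpha=32,             # assumed scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# The training loop itself (e.g. a transformers Trainer or pytorch lightning,
# optionally with deepspeed, on HuggingFaceH4/ultrachat_200k) is omitted here.
```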

Open LLM Leaderboard Evaluation Results

Detailed results can be found here: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vistagi/Mixtral-8x7b-v0.1-sft

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 68.18 |
| AI2 Reasoning Challenge (25-Shot) | 66.55 |
| HellaSwag (10-Shot)               | 86.40 |
| MMLU (5-Shot)                     | 71.65 |
| TruthfulQA (0-shot)               | 46.74 |
| Winogrande (5-shot)               | 81.53 |
| GSM8k (5-shot)                    | 56.18 |
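
The reported average is the arithmetic mean of the six benchmark scores, which the snippet below reproduces from the values in the table.

```python
# The leaderboard average is the plain mean of the six benchmark scores.
scores = {
    "ARC (25-shot)": 66.55,
    "HellaSwag (10-shot)": 86.40,
    "MMLU (5-shot)": 71.65,
    "TruthfulQA (0-shot)": 46.74,
    "Winogrande (5-shot)": 81.53,
    "GSM8k (5-shot)": 56.18,
}
avg = sum(scores.values()) / len(scores)
print(avg)  # ≈ 68.175, reported as 68.18 on the leaderboard
```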