---
language:
  - en
license: apache-2.0
library_name: transformers
model-index:
  - name: SOLAR-math-2x10.7b
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 68.43
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 86.31
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 66.9
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 64.21
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 83.35
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 71.04
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b
          name: Open LLM Leaderboard
---

# 🌞🚀 SOLAR-math-2x10.7_19B

- This model is part of MoE experimentation. The other SOLAR models in the collection are available here.

- If you like this model, version 2 is even better! It is competitive with GPT-3.5 Turbo and Gemini Pro, and it exceeds the scores of Mixtral 8x7B: macadeliccc/SOLAR-math-2x10.7b-v0.2

## 🌅 Code Example

The example is also available in Colab.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Tokenize the input prompt and move it to the model's device
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Generate output tokens
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )

    # Decode the generated tokens to a string
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    return response


# Load the model and tokenizer (4-bit loading requires bitsandbytes and a GPU)
model_id = "macadeliccc/SOLAR-math-2x10.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

prompt = "Explain the proof of Fermat's Last Theorem and its implications in number theory."

print("Response:")
print(generate_response(prompt), "\n")
```
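
The `load_in_4bit=True` flag above is a convenience shortcut that depends on `bitsandbytes`. If you want the quantization settings spelled out, a minimal sketch using `transformers`' `BitsAndBytesConfig` is shown below; the quant type, compute dtype, and device map are illustrative assumptions, not settings taken from the original example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "macadeliccc/SOLAR-math-2x10.7b"

# Explicit 4-bit quantization config (assumes bitsandbytes and a CUDA GPU)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # normal-float 4-bit weights
    bnb_4bit_compute_dtype=torch.float16,  # run the matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on the available GPU(s) automatically
)
```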

## Evaluations

| Model              | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|--------------------|--------:|--------:|-----------:|---------:|--------:|
| SOLAR-math-2x10.7b |    47.2 |   75.18 |      64.73 |    45.15 |   58.07 |

### AGIEval

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| agieval_aqua_rat | 0 | acc | 30.31 | ± 2.89 |
| | | acc_norm | 30.31 | ± 2.89 |
| agieval_logiqa_en | 0 | acc | 43.78 | ± 1.95 |
| | | acc_norm | 43.93 | ± 1.95 |
| agieval_lsat_ar | 0 | acc | 21.74 | ± 2.73 |
| | | acc_norm | 19.13 | ± 2.60 |
| agieval_lsat_lr | 0 | acc | 57.25 | ± 2.19 |
| | | acc_norm | 56.47 | ± 2.20 |
| agieval_lsat_rc | 0 | acc | 68.77 | ± 2.83 |
| | | acc_norm | 68.03 | ± 2.85 |
| agieval_sat_en | 0 | acc | 78.16 | ± 2.89 |
| | | acc_norm | 79.13 | ± 2.84 |
| agieval_sat_en_without_passage | 0 | acc | 47.57 | ± 3.49 |
| | | acc_norm | 44.66 | ± 3.47 |
| agieval_sat_math | 0 | acc | 41.36 | ± 3.33 |
| | | acc_norm | 35.91 | ± 3.24 |

Average: 47.2%

### GPT4All

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| arc_challenge | 0 | acc | 59.22 | ± 1.44 |
| | | acc_norm | 61.43 | ± 1.42 |
| arc_easy | 0 | acc | 84.26 | ± 0.75 |
| | | acc_norm | 83.63 | ± 0.76 |
| boolq | 1 | acc | 88.69 | ± 0.55 |
| hellaswag | 0 | acc | 65.98 | ± 0.47 |
| | | acc_norm | 84.29 | ± 0.36 |
| openbookqa | 0 | acc | 34.20 | ± 2.12 |
| | | acc_norm | 47.20 | ± 2.23 |
| piqa | 0 | acc | 81.83 | ± 0.90 |
| | | acc_norm | 82.59 | ± 0.88 |
| winogrande | 0 | acc | 78.45 | ± 1.16 |

Average: 75.18%

### TruthfulQA

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| truthfulqa_mc | 1 | mc1 | 48.47 | ± 1.75 |
| | | mc2 | 64.73 | ± 1.53 |

Average: 64.73%

### Bigbench

| Task | Version | Metric | Value | Stderr |
|------|--------:|--------|------:|-------:|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 61.05 | ± 3.55 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 68.56 | ± 2.42 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 35.27 | ± 2.98 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 31.20 | ± 2.45 |
| | | exact_str_match | 0.00 | ± 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 30.00 | ± 2.05 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 23.43 | ± 1.60 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 46.00 | ± 2.88 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 35.60 | ± 2.14 |
| bigbench_navigate | 0 | multiple_choice_grade | 57.50 | ± 1.56 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 55.80 | ± 1.11 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 45.98 | ± 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 40.58 | ± 1.56 |
| bigbench_snarks | 0 | multiple_choice_grade | 66.85 | ± 3.51 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 71.40 | ± 1.44 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 56.40 | ± 1.57 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 24.00 | ± 1.21 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.09 | ± 0.90 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 46.00 | ± 2.88 |

Average: 45.15%

Average score: 58.07%

Elapsed time: 04:05:27
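
The tables above are in lm-evaluation-harness output format. As a rough sketch only (the harness version, task mix, and settings used for these runs are not stated in the card, so everything below is an assumption), reproducing a single leaderboard-style task with the harness's Python API could look like this with `lm-eval` v0.4+ installed:

```python
# Hypothetical reproduction sketch; task name, few-shot count, and batch size
# mirror the ARC-Challenge config in the metadata above, not the exact setup
# used to produce these tables.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=macadeliccc/SOLAR-math-2x10.7b",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=4,
)
print(results["results"]["arc_challenge"])
```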

## 📚 Citations

```bibtex
@misc{kim2023solar,
      title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
      author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
      year={2023},
      eprint={2312.15166},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 73.37 |
| AI2 Reasoning Challenge (25-Shot) | 68.43 |
| HellaSwag (10-Shot)               | 86.31 |
| MMLU (5-Shot)                     | 66.90 |
| TruthfulQA (0-shot)               | 64.21 |
| Winogrande (5-shot)               | 83.35 |
| GSM8k (5-shot)                    | 71.04 |
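
As a quick arithmetic check (added here for clarity, not part of the original results dump), the reported average is simply the mean of the six task scores:

```python
# Verify that Avg. is the arithmetic mean of the six leaderboard task scores.
scores = {
    "ARC (25-shot)": 68.43,
    "HellaSwag (10-shot)": 86.31,
    "MMLU (5-shot)": 66.90,
    "TruthfulQA (0-shot)": 64.21,
    "Winogrande (5-shot)": 83.35,
    "GSM8k (5-shot)": 71.04,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 73.37
```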