Can you provide more details about the model evaluation?

#9
by EurekaWu123 - opened

For example, on the GSM8K benchmark, the reported accuracy for Llama-2 and Mistral 7B is approximately 5 points higher than the 8-shot CoT score I replicated.

Microsoft org

Hello @EurekaWu123

I am not able to share the full GSM8K evaluation because it relies on some internal imports, but this snippet should help you reproduce the code-based part of the evaluation:

import contextlib
import io
import signal
from typing import Any

def _timeout_handler(signum: int, frame: Any) -> None:
    # Raised when the SIGALRM timer fires so long-running completions are aborted.
    raise TimeoutError("Completion execution timed out.")

def _validate_completion(completion: str, label: str) -> bool:
    # Extract the generated code that follows the "TA:" marker in the completion.
    completion_lines = completion.split("TA:")[1].strip().split("\n")
    # If the first line contains a colon, treat it as a header line and drop it.
    completion_code = "\n".join(completion_lines[1:] if ":" in completion_lines[0] else completion_lines)

    try:
        # Abort any completion that runs for longer than two seconds.
        # Note: SIGALRM is only available on Unix-like platforms.
        signal.signal(signal.SIGALRM, _timeout_handler)
        signal.alarm(2)

        try:
            # Suppress anything the generated code prints to stdout.
            stdout = io.StringIO()
            with contextlib.redirect_stdout(stdout):
                # Run the code with common public imports pre-loaded, strip
                # thousands separators from string results, and assert the
                # final `result` against the ground-truth label.
                exec(
                    "import math\nfrom math import *\nimport numpy as np\nimport hashlib\n"
                    + completion_code
                    + "\n\n"
                    + "if type(result) == str:\n\tresult = result.replace(',', '')\n"
                    + f"assert(int(result) == {label})",
                    {},
                )
            signal.alarm(0)
            prediction = True
        except Exception:
            prediction = False
        finally:
            signal.alarm(0)

    except Exception:
        prediction = False

    return prediction

The overall idea is to execute the code generated by the model and assert that its output equals the ground-truth label. We also pre-load some public imports (math, numpy, hashlib) so that completions do not fail simply because a module is missing.
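For reference, here is a minimal sketch of how the helper above might be invoked; the sample question, the "TA:" marker, and the convention that the generated code stores its answer in `result` are assumptions inferred from the snippet, not the exact prompt used internally:

# Hypothetical model completion for a GSM8K-style question. The "TA:"
# prefix and the `result` variable are assumptions based on
# _validate_completion above.
completion = (
    "Q: Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?\n"
    "TA:\n"
    "clips_april = 48\n"
    "clips_may = clips_april // 2\n"
    "result = clips_april + clips_may"
)

# The label is compared via int(result) == label inside the exec'd code.
assert _validate_completion(completion, "72")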

gugarosa changed discussion status to closed
