# EEVE-Instruct-Math-10.8B
EEVE-Math
This project covers:
- Translation of Orca-Math-200k (Orca-Math: Unlocking the potential of SLMs in Grade School Math)
- Translation of gsm8k and evaluation with lm_eval
- dare-ties merging with Mergekit (DARE)
This model is a merge of EEVE-Math and EEVE-Instruct produced with dare-ties. The project is a proof of concept showing that, through this process, a specialized model can retain most of EEVE-Math's performance while preserving the usability of the Instruct model.
| Model | gsm8k-ko (pass@1) |
|---|---|
| EEVE (Base) | 0.4049 |
| EEVE-Math (epoch 1) | 0.5080 |
| EEVE-Math (epoch 2) | 0.5390 |
| EEVE-Instruct | 0.4511 |
| EEVE-Instruct + Math | 0.4845 |
## Merge Details
This model was merged using the DARE TIES merge method, with yanolja/EEVE-Korean-10.8B-v1.0 as the base model.
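DARE TIES combines two ideas: DARE randomly drops a fraction of each task vector (keeping `density` of its entries) and rescales the survivors, while TIES elects a per-parameter sign across models and merges only the deltas that agree with it. A minimal numpy sketch of that procedure on toy vectors (illustrative only, not the mergekit implementation; all names here are made up):

```python
import numpy as np

def dare_sparsify(delta, density, rng):
    """DARE: randomly drop (1 - density) of a task vector, rescale the rest."""
    mask = rng.random(delta.shape) < density
    return delta * mask / density

def dare_ties_merge(base, models, densities, weights, rng):
    """Toy DARE-TIES: sparsify task vectors, elect signs, merge agreeing deltas."""
    deltas = np.stack([dare_sparsify(m - base, d, rng)
                       for m, d in zip(models, densities)])
    w = np.asarray(weights)[:, None]
    weighted = w * deltas
    elected = np.sign(weighted.sum(axis=0))   # TIES sign election per parameter
    agree = np.sign(deltas) == elected        # keep only agreeing contributions
    num = np.where(agree, weighted, 0.0).sum(axis=0)
    den = np.where(agree, w, 0.0).sum(axis=0)
    merged_delta = np.divide(num, den, out=np.zeros_like(num), where=den != 0)
    return base + merged_delta

rng = np.random.default_rng(0)
base = np.zeros(6)
instruct = np.array([0.2, -0.1, 0.0, 0.3, -0.2, 0.1])
math = np.array([0.1, 0.2, -0.1, 0.3, 0.2, -0.1])
merged = dare_ties_merge(base, [instruct, math],
                         densities=[0.53, 0.53], weights=[0.6, 0.4], rng=rng)
```

The `density=0.53` and `0.6`/`0.4` weights mirror the configuration below in spirit; a real merge applies this per weight tensor across the whole model.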
### Models Merged

The following models were included in the merge:
- yanolja/EEVE-Korean-Instruct-10.8B-v1.0
- kuotient/EEVE-Math-10.8B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: yanolja/EEVE-Korean-10.8B-v1.0
    # no parameters necessary for base model
  - model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
    parameters:
      density: 0.53
      weight: 0.6
  - model: kuotient/EEVE-Math-10.8B
    parameters:
      density: 0.53
      weight: 0.4
merge_method: dare_ties
base_model: yanolja/EEVE-Korean-10.8B-v1.0
parameters:
  int8_mask: true
dtype: bfloat16
```
## Evaluation

Benchmarks: gsm8k-ko, kobest
```shell
git clone https://github.com/kuotient/lm-evaluation-harness.git
cd lm-evaluation-harness
pip install -e .

lm_eval --model hf \
    --model_args pretrained=yanolja/EEVE-Korean-Instruct-2.8B-v1.0 \
    --tasks gsm8k-ko \
    --device cuda:0 \
    --batch_size auto:4
```
| Model | gsm8k-ko (pass@1) | boolq (acc) | copa (acc) | hellaswag (acc) | Overall |
|---|---|---|---|---|---|
| yanolja/EEVE-Korean-10.8B-v1.0 | 0.4049 | - | - | - | - |
| yanolja/EEVE-Korean-Instruct-10.8B-v1.0 | 0.4511 | 0.8668 | 0.7450 | 0.4940 | 0.6392 |
| EEVE-Math-10.8B | 0.5390 | 0.8027 | 0.7260 | 0.4760 | 0.6359 |
| EEVE-Instruct-Math-10.8B | 0.4845 | 0.8519 | 0.7410 | 0.4980 | 0.6439 |
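The Overall column is consistent with the unweighted mean of the four task scores; a quick sanity check with the values copied from the table:

```python
# Overall = unweighted mean of (gsm8k-ko, boolq, copa, hellaswag);
# (task scores, reported Overall) pairs copied from the table above.
scores = {
    "yanolja/EEVE-Korean-Instruct-10.8B-v1.0":
        ([0.4511, 0.8668, 0.7450, 0.4940], 0.6392),
    "EEVE-Math-10.8B":
        ([0.5390, 0.8027, 0.7260, 0.4760], 0.6359),
    "EEVE-Instruct-Math-10.8B":
        ([0.4845, 0.8519, 0.7410, 0.4980], 0.6439),
}
for name, (tasks, overall) in scores.items():
    mean = sum(tasks) / len(tasks)
    assert abs(mean - overall) < 1e-4, name
    print(f"{name}: {mean:.4f}")
```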