
Reproduced Japanese Stable LM Instruct Gamma 7B

Model Description

This repository, ohwi/japanese-stablelm-instruct-gamma-7b-repro, is a reproduction of Japanese Stable LM Instruct Gamma 7B, a 7B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets on top of the base model Japanese Stable LM Base Gamma 7B.

This model was trained with the notus codebase.

If you are looking for the official model, please see Japanese Stable LM Instruct Gamma 7B.
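
A minimal inference sketch with Hugging Face Transformers follows. The repository id is this model's; the prompt template and generation settings are assumptions carried over from the official instruct model's card and should be verified against it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id of this reproduction.
model_id = "ohwi/japanese-stablelm-instruct-gamma-7b-repro"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    device_map="auto",
)

# ASSUMPTION: this prompt format mirrors the official Japanese Stable LM
# Instruct Gamma 7B card; confirm it matches what this reproduction
# was actually trained with.
prompt = (
    "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n"
    "### 指示: \n日本の首都はどこですか？\n\n"
    "### 応答: \n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
    )

# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```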

Model Details

Training Datasets

Benchmarks

The results were evaluated with Nejumi-leaderboard Neo; a sanity check of the score aggregation appears after the list.

  • llm-jp-eval:

    Category scores:

    | AVG  | EL  | FA   | MC   | MR  | NLI   | QA     | RC     |
    |------|-----|------|------|-----|-------|--------|--------|
    | 0.26 | 0.0 | 0.14 | 0.27 | 0.1 | 0.302 | 0.2619 | 0.7464 |

    Per-task scores:

    | chabsa | jamp | janli | jcommonsenseqa | jemhopqa | jnli | jsem | jsick | jsquad | mawps | niilc | wiki_coreference | wiki_dependency | wiki_ner | wiki_pas | wiki_reading |
    |--------|------|-------|----------------|----------|------|------|-------|--------|-------|-------|------------------|-----------------|----------|----------|--------------|
    | 0.0    | 0.15 | 0.5   | 0.27           | 0.2528   | 0.04 | 0.67 | 0.15  | 0.7464 | 0.1   | 0.271 | 0.0              | 0.0             | 0.0      | 0.0      | 0.7          |
  • Japanese MT-Bench:

    | coding | extraction | humanities | math | reasoning | roleplay | stem | writing |
    |--------|------------|------------|------|-----------|----------|------|---------|
    | 1.3    | 1.75       | 2.35       | 1.45 | 3.4       | 5.8      | 4.3  | 3.1     |
  • Overall Average: 0.283125
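
As a sanity check on the tables above, the llm-jp-eval category columns are plain means of the per-task scores, and AVG is the mean of the seven category scores. The short sketch below reproduces those numbers; the task-to-category grouping is taken from llm-jp-eval's standard categories (an assumption here), and how the leaderboard folds MT-Bench into the reported overall average is not derived.

```python
# Per-task llm-jp-eval scores, copied from the table above.
tasks = {
    "chabsa": 0.0, "jamp": 0.15, "janli": 0.5, "jcommonsenseqa": 0.27,
    "jemhopqa": 0.2528, "jnli": 0.04, "jsem": 0.67, "jsick": 0.15,
    "jsquad": 0.7464, "mawps": 0.1, "niilc": 0.271,
    "wiki_coreference": 0.0, "wiki_dependency": 0.0, "wiki_ner": 0.0,
    "wiki_pas": 0.0, "wiki_reading": 0.7,
}

# Task-to-category grouping (assumed from llm-jp-eval's standard
# categories; it reproduces every category column in the table above).
categories = {
    "EL": ["chabsa"],
    "FA": ["wiki_coreference", "wiki_dependency", "wiki_ner",
           "wiki_pas", "wiki_reading"],
    "MC": ["jcommonsenseqa"],
    "MR": ["mawps"],
    "NLI": ["jamp", "janli", "jnli", "jsem", "jsick"],
    "QA": ["jemhopqa", "niilc"],
    "RC": ["jsquad"],
}

# Each category score is the mean of its tasks; AVG is the mean of the
# seven category scores.
cat_scores = {c: sum(tasks[t] for t in ts) / len(ts) for c, ts in categories.items()}
avg = sum(cat_scores.values()) / len(cat_scores)
print({c: round(s, 4) for c, s in cat_scores.items()})  # NLI -> 0.302, QA -> 0.2619, ...
print(round(avg, 2))  # 0.26, matching the AVG column

# Japanese MT-Bench categories average to 2.93125 on a 1-10 scale.
mt_bench = [1.3, 1.75, 2.35, 1.45, 3.4, 5.8, 4.3, 3.1]
print(sum(mt_bench) / len(mt_bench))
```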

Credits

The training was carried out by Hwigeon Oh and Fujiki Nakamura.

