---
base_model:
  - tokyotech-llm/Swallow-7b-instruct-hf
  - allenai/tulu-2-dpo-7b
tags:
  - mergekit
  - merge
language:
  - en
  - ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---

# Superswallow

**Important Notice:**

This model partially utilizes the parameters of Tulu V2 DPO, which was fine-tuned from Llama 2, so it may inherit the AI2 ImpACT license. Please use the model keeping in mind that the license may change if AI2 contacts me.

The AI2 ImpACT license covers data artifacts and model artifacts, but it does not address the case of directly applying parts of the LLM parameters of a model artifact to other models. However, I respect their research and great work, so I will change the license immediately if AI2 contacts me.

## Description

This is a merge of pre-trained language models created using mergekit. The model was created by injecting the ability to recognize user intent from Tulu 2 DPO into the Swallow instruct model.

This model is a proof of concept for merging LLMs trained in different languages, with close attention paid to preserving the linguistic capabilities of the merge's base model.

As far as I know, Swallow is the Llama 2 model family available in the full set of sizes (7B, 13B, 70B) that outputs the most natural Japanese, so I used it as the base model for this merge. Thank you to the Swallow team for their wonderful work.

## Prompt template: Swallow (Alpaca format)

ไปฅไธ‹ใซใ€ใ‚ใ‚‹ใ‚ฟใ‚นใ‚ฏใ‚’่ชฌๆ˜Žใ™ใ‚‹ๆŒ‡็คบใŒใ‚ใ‚Šใ€ใใ‚Œใซไป˜้šใ™ใ‚‹ๅ…ฅๅŠ›ใŒๆ›ดใชใ‚‹ๆ–‡่„ˆใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’้ฉๅˆ‡ใซๅฎŒไบ†ใ™ใ‚‹ใŸใ‚ใฎๅ›ž็ญ”ใ‚’่จ˜่ฟฐใ—ใฆใใ ใ•ใ„ใ€‚

### ๆŒ‡็คบ:
{instruction}

### ๅฟœ็ญ”:
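As a minimal sketch of applying this template with ๐Ÿค— Transformers (the repository id `nitky/Superswallow-7b` below is a placeholder assumption; substitute the actual model path):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual model path.
model_id = "nitky/Superswallow-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Swallow's Alpaca-format prompt, as shown above.
PROMPT_TEMPLATE = (
    "ไปฅไธ‹ใซใ€ใ‚ใ‚‹ใ‚ฟใ‚นใ‚ฏใ‚’่ชฌๆ˜Žใ™ใ‚‹ๆŒ‡็คบใŒใ‚ใ‚Šใ€ใใ‚Œใซไป˜้šใ™ใ‚‹ๅ…ฅๅŠ›ใŒๆ›ดใชใ‚‹ๆ–‡่„ˆใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚"
    "ใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’้ฉๅˆ‡ใซๅฎŒไบ†ใ™ใ‚‹ใŸใ‚ใฎๅ›ž็ญ”ใ‚’่จ˜่ฟฐใ—ใฆใใ ใ•ใ„ใ€‚\n\n"
    "### ๆŒ‡็คบ:\n{instruction}\n\n### ๅฟœ็ญ”:\n"
)

prompt = PROMPT_TEMPLATE.format(instruction="ๆฑไบฌใฎ่ฆณๅ…‰ๅๆ‰€ใ‚’3ใคๆ•™ใˆใฆใใ ใ•ใ„ใ€‚")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```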

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [tokyotech-llm/Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf) as the base.
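Conceptually, DARE TIES works on each fine-tuned model's *delta* from the base: DARE randomly drops a fraction `1 - density` of the delta entries and rescales the survivors, then TIES elects a sign per parameter and keeps only delta entries agreeing with it before adding the result back to the base. The sketch below illustrates this idea for a single tensor; it is a simplification, not mergekit's actual implementation. (Note that with `density: 1`, as in the configuration below, the drop step is a no-op.)

```python
import numpy as np

def dare_ties_merge(base, finetuned, densities, weights, rng=None):
    """Conceptual sketch of DARE TIES for one weight tensor (not mergekit's code)."""
    rng = rng or np.random.default_rng(0)
    deltas = []
    for ft, density, w in zip(finetuned, densities, weights):
        delta = ft - base
        # DARE: randomly drop (1 - density) of the delta, rescale the rest.
        mask = rng.random(delta.shape) < density
        delta = np.where(mask, delta / density, 0.0)
        deltas.append(w * delta)

    stacked = np.stack(deltas)
    # TIES: elect a per-parameter sign by (weighted) majority ...
    elected_sign = np.sign(stacked.sum(axis=0))
    # ... and drop delta entries that conflict with the elected sign.
    agree = np.sign(stacked) == elected_sign
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    return base + merged_delta
```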

### Models Merged

The following models were included in the merge:

* [allenai/tulu-2-dpo-7b](https://huggingface.co/allenai/tulu-2-dpo-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: tokyotech-llm/Swallow-7b-instruct-hf
    # no parameters necessary for base model
  - model: allenai/tulu-2-dpo-7b # for following user intent
    parameters:
      density: 1
      weight:
      - filter: mlp.down_proj
        value: [0.3, 0.25, 0.25, 0.15, 0.1]
      - filter: mlp.gate_proj
        value: [0.7, 0.25, 0.5, 0.45, 0.4]
      - filter: mlp.up_proj
        value: [0.7, 0.25, 0.5, 0.45, 0.4]
      - filter: self_attn
        value: [0.7, 0.25, 0.5, 0.45, 0.4]
      - value: 0 # fallback for rest of tensors.
merge_method: dare_ties
base_model: tokyotech-llm/Swallow-7b-instruct-hf
dtype: bfloat16
tokenizer_source: union
```
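In mergekit, a list under `value` defines a gradient: the anchor values are spread evenly across the model's layers and linearly interpolated, so here the earlier layers take more from tulu-2-dpo-7b than the later ones. A minimal sketch of that interpolation, assuming the 32 layers of a 7B Llama 2 model:

```python
import numpy as np

def gradient_weights(anchors, num_layers=32):
    """Expand a mergekit-style gradient list into one weight per layer
    by linear interpolation (conceptual sketch, not mergekit's code)."""
    anchor_pos = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), anchor_pos, anchors)

# Per-layer weights applied to tulu-2-dpo-7b's self_attn deltas.
print(gradient_weights([0.7, 0.25, 0.5, 0.45, 0.4]).round(3))
```

To reproduce the merge, the configuration above can be saved as `config.yml` and run with mergekit's CLI, e.g. `mergekit-yaml config.yml ./output-model`.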