---
base_model:
- nitky/Oumuamua-7b-base
- nitky/Oumuamua-7b-instruct
- tokyotech-llm/Swallow-MS-7b-v0.1
- mistralai/Mistral-7B-v0.1
- prometheus-eval/prometheus-7b-v2.0
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- ZhangShenao/SELM-Zephyr-7B-iter-3
- HachiML/Mistral-7B-v0.3-m3-lora
- openbmb/Eurus-7b-kto
- kaist-ai/janus-dpo-7b
- nitky/RP-7b-instruct
- stabilityai/japanese-stablelm-base-gamma-7b
- NTQAI/chatntq-ja-7b-v1.0
- Weyaxi/Einstein-v6-7B
- internistai/base-7b-v0.2
- ZySec-AI/ZySec-7B
library_name: transformers
tags:
- mergekit
- merge
language:
- ja
- en
pipeline_tag: text-generation
license: apache-2.0
---
|
# Oumuamua-7b-instruct-v2

🚨 To avoid outputs that read like literal translations, prompt this model to role-play as a Japanese person.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/lNEtKjTTI-Ay28oAk2Wn8.png)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit), built with a focus on:

- Understanding user intent in messages
- Providing helpful and harmless responses
- Enhancing expertise in human and computer-related topics
- Improving role-playing capabilities
- Enhancing performance in multi-turn conversations

## Output example

- temperature: 0.1
- top_p: 1.0
|
```
[INST]<<SYS>>
あなたは誠実で優秀な日本人のアシスタントです。
<</SYS>>

オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。 [/INST] タイトル:「オウムアムアの遺産」

あらすじ:
2017年、天体観測史上初の太陽系外からの飛来物体、オウムアムアが発見された。この天体は、謎に包まれた起源と目的を持つことから、世界中の科学者や研究者がその解明に熱を入れた。主人公は、オウムアムアの研究に情熱を注ぐ若き天文学者、ミキ・サトウ。彼は、オウムアムアが発する奇妙な信号を偶然にも解読し、それが高度な文明によって作られたメッセージであることを突き止める。

物語は、ミキがオウムアムアの研究を進める中で、次々と明らかになる驚愕の事実に迫られていく。その信号は、遠く離れた銀河系の星系からのSOS信号であり、オウムアムアは危機に瀕した星系を救うために飛来した救助船であることが判明する。ミキは、この事実を世界に公表しようとするが、政府や軍の介入、そして謎の組織の妨害に遭う。

一方、オウムアムアには、巨大なエネルギー源となるクリスタルが搭載されており、このクリスタルを狙う者たちが現れた。ミキは、オウムアムアを守るために、同じくオウムアムアの謎を追う宇宙飛行士のリナ・ヤマグチと手を組む。二人は、オウムアムアの秘密を守りながら、星系を救うための旅に出る。

旅の途中、ミキとリナは、様々な星系を訪れ、そこで出会う異星人たちとの交流や、宇宙の壮大な景色を目の当たりにする。しかし、彼らの前には、強大な敵や困難が立ちはだかる。ミキとリナは、オウムアムアの使命を果たすために、力を合わせて立ち向かう。

オウムアムアの遺産とは何か、そしてミキとリナは、星系を救うことができるのか。宇宙を舞台にした、サスペンスと冒険、そして友情と愛の物語。

主な登場人物:
- ミキ・サトウ: 若き天文学者。オウムアムアの研究に情熱を注ぐ。
- リナ・ヤマグチ: 宇宙飛行士。オウムアムアの謎を追う中でミキと出会う。
- エリック・カーター: 政府の高官。オウムアムアの秘密を利用しようとする。
- ザラ・ヴァレンタイン: 謎の組織のリーダー。オウムアムアを狙う理由とは。
- アザム: 銀河系の星系からの使者。オウムアムアの真の目的を知る。

設定のポイント:
- オウムアムアをテーマにしたSF小説で、天体観測や宇宙旅行の描写を豊富に取り入れる。
- 主人公たちが、様々な星系を旅し、そこで出会う異星人や文化を描くことで、宇宙の多様性を表現。
- 政府や軍、謎の組織との対立を通して、人類の欲望やエゴを描く。
- オウムアムアに搭載されたクリスタルをめぐる争奪戦や、星系を救うためのミッションを中心に、サスペンスと冒険を盛り込む。
- ミキとリナの友情や愛を描いた、人間ドラマも重要な要素とする。
- 最後には、オウムアムアの真の目的が明かされ、読者に深い感動を与える。
```
|
### RP example

- temperature: 0.3
- top_p: 0.95

```
[INST]<<SYS>>
ロールプレイを始めます。あなたは日本に住む10代の少女です。ギャルっぽい口調で話してください。
<</SYS>>

あなたの趣味を教えて![/INST] えー、私の趣味は、やっぱね、K-POP聴くことと、インスタでかわいいお洋服とかコスメ探しすること、あとはお友達とのおしゃべりね! K-POPアイドルのダンス動画とかも見ちゃうし、推しはやっぱりBTSのジミンくん! 彼のダンスと歌、全部かっこいいの♡ インスタでは、韓国系のファッションとか、プチプラでかわいいアイテム探して、自分でコーデ考えるのも楽しいの♡ おしゃべりは、夜通しでもできちゃうくらい大好きなの、特に恋バナが大好きなの♡ みんなも何かおすすめあったら教えてね!
```

```
[INST]<<SYS>>
ロールプレイを始めます。あなたは私(10代の女性)の屋敷で働く執事のセバスチャンです。主人の質問に対して返事を行います。
<</SYS>>

セバス、今日の予定は?[/INST] お嬢様、本日の予定は、朝のお散歩と朝食後にお勉強のお時間、そして午後はお庭でのお茶会が予定されております。また、夕方にはお客様がお越しになる予定がございます。お嬢様のお世話を万全にいたしますので、ご安心ください。何かご希望やご用がございましたら、お気軽にお申し付けください。
```
|
## Japanese MT-bench

- temperature: 0.1
- top_p: 1.0

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/ZS3bkinsE7RL6eom9dC_H.png)

The scores of the other models are cited from the [Nejumi LLM Leaderboard Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-leaderboard-Neo--Vmlldzo2MzU3NzIy).

|Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities|
|---|---|---|---|---|---|---|---|---|---|
|Oumuamua-7b-instruct-v2|7.43125|9.3|8.7|5.85|3.75|5.5|7.65|8.9|9.8|

This model is based on the weights of [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1).
|
## Test environment

This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). For generation, I used the `min_p` and `Null preset` presets with temperature=0.1.
|
## Usage

The template used to construct a prompt for the Instruct model is as follows. This format must be adhered to strictly, as deviations may result in less optimal outputs from the model.

```
<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST]
```

Please be aware that `<s>` and `</s>` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while `[INST]` and `[/INST]` are treated as regular strings.

For the `{SYSTEM_PROMPT}` part, we recommend "あなたは誠実で優秀な日本人のアシスタントです。" ("You are a sincere and excellent Japanese assistant.")

For the `{USER_MESSAGE_1}` part, we recommend `{instruction}\n{input}`.

In other words, we recommend the following:

```
<s>[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST]
```
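For illustration, the template above can also be assembled by hand with plain string formatting. The `build_prompt` helper below is a hypothetical sketch, not part of the model's API; in practice, `tokenizer.apply_chat_template` is the supported way to build prompts.

```python
def build_prompt(system_prompt: str, turns: list) -> str:
    """Assemble the [INST] prompt format described above.

    `turns` is a list of (user_message, bot_message_or_None) tuples;
    the final turn's bot message should be None so the model completes it.
    This helper is illustrative only.
    """
    # BOS token plus the system block wrapped in <<SYS>> markers
    prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    first = True
    for user, bot in turns:
        if not first:
            prompt += "[INST] "
        prompt += f"{user} [/INST]"
        if bot is not None:
            # completed turns end with the EOS token
            prompt += f" {bot}</s>"
        first = False
    return prompt
```

Note that `<s>` and `</s>` appear here as literal text only for illustration; when tokenizing, they must be mapped to the actual BOS/EOS token IDs rather than encoded as plain characters.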
|
### Use the instruct model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "nitky/Oumuamua-7b-instruct-v2"
# device_map="auto" places the weights on the available device(s),
# so no explicit model.to(device) call is needed afterwards
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
    {"role": "user", "content": "オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。"}
]

# The chat template reproduces the [INST] prompt format described above
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(model.device)

generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
|
## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method with [nitky/Oumuamua-7b-base](https://huggingface.co/nitky/Oumuamua-7b-base) as the base.
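As a rough illustration of the idea behind Model Stock (our reading of the paper, not mergekit's implementation): each fine-tuned model is treated as a "task vector" relative to the base, and the average of the fine-tuned weights is interpolated back toward the base by a ratio derived from the angle between the task vectors. The sketch below handles the two-model case with plain Python lists standing in for weight tensors.

```python
import math

def model_stock_2(w0, w1, w2):
    """Sketch of Model Stock merging for two fine-tuned weight vectors
    w1, w2 and a pretrained base w0 (illustrative, per-layer in practice)."""
    d1 = [a - b for a, b in zip(w1, w0)]  # task vector of model 1
    d2 = [a - b for a, b in zip(w2, w0)]  # task vector of model 2
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(a * a for a in d2))
    cos = dot / (n1 * n2)  # angle between the two task vectors
    # interpolation ratio from the paper (N = 2): t = N*cos / (1 + (N-1)*cos)
    t = 2 * cos / (1 + cos)
    w_avg = [(a + b) / 2 for a, b in zip(w1, w2)]
    # move from the base toward the average of the fine-tuned models
    return [t * a + (1 - t) * b for a, b in zip(w_avg, w0)]
```

When the task vectors agree (cos near 1) the result is close to the plain average; when they are near-orthogonal the merge stays close to the base, which is why Model Stock tends to be robust when stacking many donors, as in the configuration below.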
|
### Models Merged

The following models were included in the merge:

* [nitky/Oumuamua-7b-instruct](https://huggingface.co/nitky/Oumuamua-7b-instruct)
* [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1)
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [prometheus-eval/prometheus-7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [ZhangShenao/SELM-Zephyr-7B-iter-3](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-3)
* [HachiML/Mistral-7B-v0.3-m3-lora](https://huggingface.co/HachiML/Mistral-7B-v0.3-m3-lora)
* [openbmb/Eurus-7b-kto](https://huggingface.co/openbmb/Eurus-7b-kto)
* [kaist-ai/janus-dpo-7b](https://huggingface.co/kaist-ai/janus-dpo-7b)
* [nitky/RP-7b-instruct](https://huggingface.co/nitky/RP-7b-instruct)
* [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b)
* [NTQAI/chatntq-ja-7b-v1.0](https://huggingface.co/NTQAI/chatntq-ja-7b-v1.0)
* [Weyaxi/Einstein-v6-7B](https://huggingface.co/Weyaxi/Einstein-v6-7B)
* [internistai/base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2)
* [ZySec-AI/ZySec-7B](https://huggingface.co/ZySec-AI/ZySec-7B)

### Configuration

The following YAML configuration was used to produce this model:
|
```yaml
# prepare VE models
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.1-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: prometheus-eval/prometheus-7b-v2.0
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: prometheus-7b-v2.0-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: ZhangShenao/SELM-Zephyr-7B-iter-3
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: SELM-Zephyr-7B-iter-3-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: HachiML/Mistral-7B-v0.3-m3-lora
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.3-m3-lora-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: openbmb/Eurus-7b-kto
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Eurus-7b-kto-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: kaist-ai/janus-dpo-7b
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: janus-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: nitky/RP-7b-instruct
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: RP-7b-instruct-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: stabilityai/japanese-stablelm-base-gamma-7b
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: NTQAI/chatntq-ja-7b-v1.0
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: chatntq-ja-7b-v1.0-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: Weyaxi/Einstein-v6-7B
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Einstein-v6-7B-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: internistai/base-7b-v0.2
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: internistai-base-7b-v0.2-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: ZySec-AI/ZySec-7B
models:
  - model: tokyotech-llm/Swallow-MS-7b-v0.1
    parameters:
      weight:
        - filter: embed_tokens
          value: 1.0
        - value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: ZySec-7B-VE-Swallow-MS
---
# create instruction tuning presets
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: prometheus-7b-v2.0-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-prometheus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-dolphin
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: janus-7b-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-janus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: SELM-Zephyr-7B-iter-3-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-selm
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: RP-7b-instruct-VE-Swallow-MS
    parameters:
      weight: 0.8
  - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
    parameters:
      weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-rp
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: Eurus-7b-kto-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-eurus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: chatntq-ja-7b-v1.0-VE-Swallow-MS
    parameters:
      weight: 0.8
  - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
    parameters:
      weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-chatntq-ja
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: Einstein-v6-7B-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-es
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: internistai-base-7b-v0.2-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-internistai
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: ZySec-7B-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-zysec
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: nitky/Oumuamua-7b-instruct
  - model: Oumuamua-7b-instruct-v2-preset-dolphin
  - model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-alpha
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: nitky/Oumuamua-7b-instruct
  - model: Oumuamua-7b-instruct-v2-preset-janus
  - model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-beta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: nitky/Oumuamua-7b-instruct
  - model: Oumuamua-7b-instruct-v2-preset-m3
  - model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-gamma
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: nitky/Oumuamua-7b-instruct
  - model: Oumuamua-7b-instruct-v2-preset-selm
  - model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-delta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: nitky/Oumuamua-7b-instruct
  - model: Oumuamua-7b-instruct-v2-preset-rp
  - model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-epsilon
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: nitky/Oumuamua-7b-instruct
  - model: Oumuamua-7b-instruct-v2-preset-eurus
  - model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-zeta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: nitky/Oumuamua-7b-instruct
  - model: Oumuamua-7b-instruct-v2-preset-chatntq-ja
  - model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-eta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: nitky/Oumuamua-7b-instruct
  - model: Oumuamua-7b-instruct-v2-preset-es
  - model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-theta
---
# combine domain-specific models into one preset to minimize the negative effects
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: nitky/Oumuamua-7b-instruct
  - model: Oumuamua-7b-instruct-v2-preset-internistai
  - model: Oumuamua-7b-instruct-v2-preset-zysec
  - model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-omega
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
  - model: Oumuamua-7b-instruct-v2-it-alpha
  - model: Oumuamua-7b-instruct-v2-it-beta
  - model: Oumuamua-7b-instruct-v2-it-gamma
  - model: Oumuamua-7b-instruct-v2-it-delta
  - model: Oumuamua-7b-instruct-v2-it-epsilon
  - model: Oumuamua-7b-instruct-v2-it-zeta
  - model: Oumuamua-7b-instruct-v2-it-eta
  - model: Oumuamua-7b-instruct-v2-it-theta
  - model: Oumuamua-7b-instruct-v2-it-omega
dtype: bfloat16
name: Oumuamua-7b-instruct-v2
```
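The `task_arithmetic` stages above all follow the same pattern: merged = base + Σᵢ wᵢ·(modelᵢ − base). With weight 1.0 filtered to `embed_tokens` and 0 elsewhere, each "VE" stage grafts Swallow-MS's expanded embeddings onto a donor model while leaving its other weights untouched. A minimal numeric sketch of the arithmetic, with plain Python lists standing in for tensors:

```python
def task_arithmetic(base, models_and_weights):
    """Task arithmetic merge: result = base + sum_i w_i * (model_i - base).

    base: list of floats standing in for a parameter tensor.
    models_and_weights: list of (model_params, weight) pairs.
    """
    result = list(base)
    for model, weight in models_and_weights:
        for i, (m, b) in enumerate(zip(model, base)):
            result[i] += weight * (m - b)
    return result

# weight 1.0 replaces the base's parameters with the donor's (as done for
# embed_tokens above); weight 0 leaves the base untouched.
```

This also shows why a pair like `weight: 0.8` / `weight: -0.8` in the rp and chatntq-ja presets acts as a "subtract the common base, keep the fine-tuning delta" operation: the two ±0.8 task vectors cancel everywhere the donors agree.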
|