---
base_model: nitky/Oumuamua-7b-instruct-v2
library_name: transformers
tags:
- mergekit
- merge
language:
- ja
- en
pipeline_tag: text-generation
license: apache-2.0
---
# Oumuamua-7b-instruct-v2-GGUF
This is a quantized (GGUF) version of [nitky/Oumuamua-7b-instruct-v2](https://huggingface.co/nitky/Oumuamua-7b-instruct-v2), created using llama.cpp.
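To run the GGUF file itself, one option is [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A minimal sketch follows; the filename is a placeholder (substitute whichever quantization you downloaded), and the chat template embedded in the GGUF metadata is used when present:
```python
from llama_cpp import Llama

# Placeholder filename: use the quantization you actually downloaded.
llm = Llama(
    model_path="Oumuamua-7b-instruct-v2.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
        {"role": "user", "content": "自己紹介をしてください。"},
    ],
    temperature=0.1,
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```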
# Model Description
🚨 If you want to avoid outputs that appear to be literal translations, please prompt this model to role-play as a Japanese person.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/lNEtKjTTI-Ay28oAk2Wn8.png)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit), with a focus on:
- Understanding user intent in messages
- Providing helpful and harmless responses
- Enhancing expertise in human and computer-related topics
- Improving role-playing capabilities
- Enhancing performance in multi-turn conversations
## Output example
- temperature: 0.1
- top_p: 1.0
```
[INST]<<SYS>>
あなたは誠実で優秀な日本人のアシスタントです。
<</SYS>>
オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。 [/INST] タイトル:「オウムアムアの遺産」
あらすじ:
2017年、天体観測史上初の太陽系外からの飛来物体、オウムアムアが発見された。この天体は、謎に包まれた起源と目的を持つことから、世界中の科学者や研究者がその解明に熱を入れた。主人公は、オウムアムアの研究に情熱を注ぐ若き天文学者、ミキ・サトウ。彼は、オウムアムアが発する奇妙な信号を偶然にも解読し、それが高度な文明によって作られたメッセージであることを突き止める。
物語は、ミキがオウムアムアの研究を進める中で、次々と明らかになる驚愕の事実に迫られていく。その信号は、遠く離れた銀河系の星系からのSOS信号であり、オウムアムアは危機に瀕した星系を救うために飛来した救助船であることが判明する。ミキは、この事実を世界に公表しようとするが、政府や軍の介入、そして謎の組織の妨害に遭う。
一方、オウムアムアには、巨大なエネルギー源となるクリスタルが搭載されており、このクリスタルを狙う者たちが現れた。ミキは、オウムアムアを守るために、同じくオウムアムアの謎を追う宇宙飛行士のリナ・ヤマグチと手を組む。二人は、オウムアムアの秘密を守りながら、星系を救うための旅に出る。
旅の途中、ミキとリナは、様々な星系を訪れ、そこで出会う異星人たちとの交流や、宇宙の壮大な景色を目の当たりにする。しかし、彼らの前には、強大な敵や困難が立ちはだかる。ミキとリナは、オウムアムアの使命を果たすために、力を合わせて立ち向かう。
オウムアムアの遺産とは何か、そしてミキとリナは、星系を救うことができるのか。宇宙を舞台にした、サスペンスと冒険、そして友情と愛の物語。
主な登場人物:
- ミキ・サトウ: 若き天文学者。オウムアムアの研究に情熱を注ぐ。
- リナ・ヤマグチ: 宇宙飛行士。オウムアムアの謎を追う中でミキと出会う。
- エリック・カーター: 政府の高官。オウムアムアの秘密を利用しようとする。
- ザラ・ヴァレンタイン: 謎の組織のリーダー。オウムアムアを狙う理由とは。
- アザム: 銀河系の星系からの使者。オウムアムアの真の目的を知る。
設定のポイント:
- オウムアムアをテーマにしたSF小説で、天体観測や宇宙旅行の描写を豊富に取り入れる。
- 主人公たちが、様々な星系を旅し、そこで出会う異星人や文化を描くことで、宇宙の多様性を表現。
- 政府や軍、謎の組織との対立を通して、人類の欲望やエゴを描く。
- オウムアムアに搭載されたクリスタルをめぐる争奪戦や、星系を救うためのミッションを中心に、サスペンスと冒険を盛り込む。
- ミキとリナの友情や愛を描いた、人間ドラマも重要な要素とする。
- 最後には、オウムアムアの真の目的が明かされ、読者に深い感動を与える。
```
### RP example
- temperature: 0.3
- top_p: 0.95
```
[INST]<<SYS>>
ロールプレイを始めます。あなたは日本に住む10代の少女です。ギャルっぽい口調で話してください。
<</SYS>>
あなたの趣味を教えて![/INST] えー、私の趣味は、やっぱね、K-POP聴くことと、インスタでかわいいお洋服とかコスメ探しすること、あとはお友達とのおしゃべりね! K-POPアイドルのダンス動画とかも見ちゃうし、推しはやっぱりBTSのジミンくん! 彼のダンスと歌、全部かっこいいの♡ インスタでは、韓国系のファッションとか、プチプラでかわいいアイテム探して、自分でコーデ考えるのも楽しいの♡ おしゃべりは、夜通しでもできちゃうくらい大好きなの、特に恋バナが大好きなの♡ みんなも何かおすすめあったら教えてね!
```
```
[INST]<<SYS>>
ロールプレイを始めます。あなたは私(10代の女性)の屋敷で働く執事のセバスチャンです。主人の質問に対して返事を行います。
<</SYS>>
セバス、今日の予定は?[/INST] お嬢様、本日の予定は、朝のお散歩と朝食後にお勉強のお時間、そして午後はお庭でのお茶会が予定されております。また、夕方にはお客様がお越しになる予定がございます。お嬢様のお世話を万全にいたしますので、ご安心ください。何かご希望やご用がございましたら、お気軽にお申し付けください。
```
## Japanese MT-bench
- temperature: 0.1
- top_p: 1.0
![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/ZS3bkinsE7RL6eom9dC_H.png)
The scores of the other models shown in the figure are cited from the [Nejumi LLM Leaderboard Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-leaderboard-Neo--Vmlldzo2MzU3NzIy).
|Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities|
|---|---|---|---|---|---|---|---|---|---|
| Oumuamua-7b-instruct-v2 |7.43125|9.3|8.7|5.85|3.75|5.5|7.65|8.9|9.8|
This model is based on the weights of [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1).
## Test environment
This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I used the `min_p` and `Null preset` presets with temperature=0.1 for generation.
## Usage
The template used to construct a prompt for this instruct model is specified below. The format must be adhered to strictly, since deviations may result in less optimal outputs from the model.
```
<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST]
```
Please be aware that `<s>` and `</s>` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while `[INST]` and `[/INST]` are plain strings.
For `{SYSTEM_PROMPT}`, we recommend "あなたは誠実で優秀な日本人のアシスタントです。" (You are a sincere and excellent Japanese assistant.)
For `{USER_MESSAGE_1}`, we recommend `{instruction}\n{input}`.
In other words, we recommend the following:
```
<s>[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST]
```
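For illustration only, here is a small sketch that assembles a multi-turn prompt in this exact format (this helper is not part of the model's tooling; the tokenizer's `apply_chat_template`, shown below, is the recommended way):
```python
def build_prompt(system_prompt, turns):
    """turns: list of (user_message, bot_message) pairs; pass None as the
    bot message of the final turn that the model should answer."""
    prompt = ""
    for i, (user, bot) in enumerate(turns):
        if i == 0:
            prompt += f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user} [/INST]"
        else:
            prompt += f"[INST] {user} [/INST]"
        if bot is not None:
            prompt += f" {bot}</s>"
    return prompt

# Note: <s> and </s> appear here as plain text; when tokenizing, make sure
# they are encoded as the BOS/EOS special tokens rather than literal strings.
print(build_prompt(
    "あなたは誠実で優秀な日本人のアシスタントです。",
    [("こんにちは!", "こんにちは!本日はどのようなご用件でしょうか?"), ("自己紹介をしてください。", None)],
))
```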
### Use the instruct model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "nitky/Oumuamua-7b-instruct-v2"

# device_map="auto" places the model on the available GPU(s), so no explicit
# model.to(device) call is needed.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
    {"role": "user", "content": "オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。"}
]

# Build the prompt with the model's chat template and move it to the model's device.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(model.device)

generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [nitky/Oumuamua-7b-base](https://huggingface.co/nitky/Oumuamua-7b-base) as the base.
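Conceptually, the two merge methods used in the configuration below reduce to simple parameter arithmetic. A simplified sketch (mergekit's actual implementation additionally handles per-tensor filters, tokenizer alignment, and Model Stock's angle-derived interpolation ratio, which is replaced by a fixed `t` here):
```python
def task_arithmetic(base, models, weights):
    """Add weighted task vectors (model - base) to the base parameters.
    base and models are dicts mapping parameter names to tensors/values."""
    return {
        name: p + sum(w * (m[name] - p) for m, w in zip(models, weights))
        for name, p in base.items()
    }

def model_stock(base, models, t=0.5):
    """Very simplified Model Stock: interpolate between the base and the
    average of the fine-tuned models. The real method derives t per layer
    from the angle between the fine-tuned weights (see the paper)."""
    return {
        name: t * (sum(m[name] for m in models) / len(models)) + (1 - t) * p
        for name, p in base.items()
    }

base = {"w": 1.0}
m1, m2 = {"w": 1.4}, {"w": 0.6}
print(task_arithmetic(base, [m1, m2], [1.0, 0.8]))  # ≈ {'w': 1.08}
print(model_stock(base, [m1, m2], t=0.5))           # ≈ {'w': 1.0}
```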
### Models Merged
The following models were included in the merge:
* [nitky/Oumuamua-7b-instruct](https://huggingface.co/nitky/Oumuamua-7b-instruct)
* [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1)
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [prometheus-eval/prometheus-7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [ZhangShenao/SELM-Zephyr-7B-iter-3](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-3)
* [HachiML/Mistral-7B-v0.3-m3-lora](https://huggingface.co/HachiML/Mistral-7B-v0.3-m3-lora)
* [openbmb/Eurus-7b-kto](https://huggingface.co/openbmb/Eurus-7b-kto)
* [kaist-ai/janus-dpo-7b](https://huggingface.co/kaist-ai/janus-dpo-7b)
* [nitky/RP-7b-instruct](https://huggingface.co/nitky/RP-7b-instruct)
* [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b)
* [NTQAI/chatntq-ja-7b-v1.0](https://huggingface.co/NTQAI/chatntq-ja-7b-v1.0)
* [Weyaxi/Einstein-v6-7B](https://huggingface.co/Weyaxi/Einstein-v6-7B)
* [internistai/base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2)
* [ZySec-AI/ZySec-7B](https://huggingface.co/ZySec-AI/ZySec-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# prepare VE models
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.1-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: prometheus-eval/prometheus-7b-v2.0
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: prometheus-7b-v2.0-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: ZhangShenao/SELM-Zephyr-7B-iter-3
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: SELM-Zephyr-7B-iter-3-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: HachiML/Mistral-7B-v0.3-m3-lora
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.3-m3-lora-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: openbmb/Eurus-7b-kto
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Eurus-7b-kto-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: kaist-ai/janus-dpo-7b
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: janus-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: nitky/RP-7b-instruct
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: RP-7b-instruct-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: stabilityai/japanese-stablelm-base-gamma-7b
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: NTQAI/chatntq-ja-7b-v1.0
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: chatntq-ja-7b-v1.0-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: Weyaxi/Einstein-v6-7B
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Einstein-v6-7B-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: internistai/base-7b-v0.2
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: internistai-base-7b-v0.2-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: ZySec-AI/ZySec-7B
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: ZySec-7B-VE-Swallow-MS
---
# create instruction tuning presets
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: prometheus-7b-v2.0-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-prometheus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-dolphin
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: janus-7b-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-janus
---
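# Assumed definition of the m3 preset referenced below by
# Oumuamua-7b-instruct-v2-it-gamma; it was missing here and is reconstructed
# following the pattern of the other presets.
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
  - model: nitky/Oumuamua-7b-instruct
    parameters:
      weight: 1.0
  - model: Mistral-7B-v0.3-m3-lora-VE-Swallow-MS
    parameters:
      weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-m3
---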
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: SELM-Zephyr-7B-iter-3-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-selm
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: RP-7b-instruct-VE-Swallow-MS
parameters:
weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
parameters:
weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-rp
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: Eurus-7b-kto-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-eurus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: chatntq-ja-7b-v1.0-VE-Swallow-MS
parameters:
weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
parameters:
weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-chatntq-ja
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: Einstein-v6-7B-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-es
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: internistai-base-7b-v0.2-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-internistai
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: ZySec-7B-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-zysec
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-dolphin
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-alpha
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-janus
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-beta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-m3
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-gamma
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-selm
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-delta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-rp
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-epsilon
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-eurus
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-zeta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-chatntq-ja
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-eta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-es
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-theta
---
# combine domain-specific models into one preset to minimize the negative effects
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-internistai
- model: Oumuamua-7b-instruct-v2-preset-zysec
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-omega
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: Oumuamua-7b-instruct-v2-it-alpha
- model: Oumuamua-7b-instruct-v2-it-beta
- model: Oumuamua-7b-instruct-v2-it-gamma
- model: Oumuamua-7b-instruct-v2-it-delta
- model: Oumuamua-7b-instruct-v2-it-epsilon
- model: Oumuamua-7b-instruct-v2-it-zeta
- model: Oumuamua-7b-instruct-v2-it-eta
- model: Oumuamua-7b-instruct-v2-it-theta
- model: Oumuamua-7b-instruct-v2-it-omega
dtype: bfloat16
name: Oumuamua-7b-instruct-v2
```
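A note on the "VE" (vocabulary expansion) blocks at the top of the configuration: with `weight: 1.0` on the `embed_tokens` filter and `0` elsewhere, `task_arithmetic` leaves every donor parameter untouched except the embedding table, which becomes Swallow-MS's Japanese-expanded one (and `tokenizer_source` grafts in the matching tokenizer). A simplified sketch of that arithmetic, assuming the vocabularies are already aligned (in reality mergekit handles the resizing and alignment):
```python
def vocab_expand(donor, swallow):
    """What each VE block above effectively computes."""
    merged = {}
    for name, param in donor.items():
        if "embed_tokens" in name:
            # filter embed_tokens, weight 1.0: donor + 1.0*(swallow - donor) == swallow
            merged[name] = swallow[name]
        else:
            # weight 0 elsewhere: donor + 0*(swallow - donor) == donor
            merged[name] = param
    return merged
```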