---
language:
- en
- ja
license: cc-by-nc-4.0
library_name: transformers
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: spow12/ChatWaifu_v2.0_22B
datasets:
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-JP-EN-Coding-Dataset-567k
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Rosebleu_1on1_Dialogues_RP
- SkunkworksAI/reasoning-0.01
- jondurbin_gutenberg_dpo
- nbeerbower_gutenberg2_dpo
- jondurbi_py_dpo
- jondurbin_truthy_dpo
- flammenai_character_roleplay_DPO
- kyujinpy_orca_math_dpo
- argilla_Capybara_Preferences
- antiven0m_physical_reasoning_dpo
- aixsatoshi_Swallow_MX_chatbot_DPO
pipeline_tag: text-generation
model-index:
- name: ChatWaifu_v2.0_22B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 65.11
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 42.29
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 18.58
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.96
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.59
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 31.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=spow12/ChatWaifu_v2.0_22B
      name: Open LLM Leaderboard
---

# Triangle104/ChatWaifu_v2.0_22B-Q8_0-GGUF
This model was converted to GGUF format from [`spow12/ChatWaifu_v2.0_22B`](https://huggingface.co/spow12/ChatWaifu_v2.0_22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/spow12/ChatWaifu_v2.0_22B) for more details on the model.

---
Model details:
-
Merged model using mergekit

This model is intended to act like a visual novel character.
Merge Format

models:
  - model: mistralai/Mistral-Small-Instruct-2409_sft_kto
    layer_range: [0, 56]
  - model: mistralai/Mistral-Small-Instruct-2409
    layer_range: [0, 56]
merge_method: slerp
base_model: mistralai/Mistral-Small-Instruct-2409_sft_kto
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
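
For reference, a config like the one above can be handed to mergekit. The sketch below is illustrative only: it assumes mergekit is installed (`pip install mergekit`) and exposes the `mergekit-yaml` entry point, and that the `mistralai/Mistral-Small-Instruct-2409_sft_kto` checkpoint referenced in the config is available locally (it is not published here).

```python
import subprocess

# Illustrative sketch: run the SLERP merge described by the YAML above,
# saved locally as merge_config.yaml. Assumes mergekit's `mergekit-yaml`
# CLI is on PATH and both source checkpoints are accessible.
subprocess.run(
    ["mergekit-yaml", "merge_config.yaml", "./ChatWaifu_v2.0_22B"],
    check=True,
)
```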

WaifuModel Collections

- TTS
- Chat
- ASR

Unified demo

WaifuAssistant
Update

- 2024.10.11 Update 12B and 22B Ver 2.0
- 2024.09.23 Update 22B, Ver 2.0_preview

Model Details
Model Description

- Developed by: spow12(yw_nam)
- Shared by: spow12(yw_nam)
- Model type: CausalLM
- Language(s) (NLP): Japanese, English
- Finetuned from model: mistralai/Mistral-Small-Instruct-2409

Currently, the chatbot has the following personalities:
| character | visual_novel |
|---|---|
| ムラサメ | Senren*Banka |
| 茉子 | Senren*Banka |
| 芳乃 | Senren*Banka |
| レナ | Senren*Banka |
| 千咲 | Senren*Banka |
| 芦花 | Senren*Banka |
| 愛衣 | Café Stella and the Reaper's Butterflies |
| 栞那 | Café Stella and the Reaper's Butterflies |
| ナツメ | Café Stella and the Reaper's Butterflies |
| 希 | Café Stella and the Reaper's Butterflies |
| 涼音 | Café Stella and the Reaper's Butterflies |
| あやせ | Riddle Joker |
| 七海 | Riddle Joker |
| 羽月 | Riddle Joker |
| 茉優 | Riddle Joker |
| 小春 | Riddle Joker |
Chat Format

<s>This is another system prompt.
[INST]
Your instructions placed here.[/INST]
[INST]
The model's response will be here.[/INST]
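
For clarity, here is a tiny helper that assembles the system prompt and one instruction block in the layout shown above. This is only a sketch of the printed template; when running the model through transformers or llama.cpp's chat endpoints, the chat template bundled with the model (when present) should be preferred over hand-built strings.

```python
def build_prompt(system: str, instruction: str) -> str:
    # Mirrors the printed layout: the system prompt directly follows <s>,
    # and each instruction is wrapped in an [INST] ... [/INST] block.
    return f"<s>{system}\n[INST]\n{instruction}[/INST]\n"

# Example: any system prompt works here, e.g. one built in the Usage section below.
prompt = build_prompt(
    "You are a helpful Japanese-speaking assistant.",
    "こんにちは、自己紹介をしてください。",
)
```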

Usage

You can use the characters above like this:

import json
from huggingface_hub import hf_hub_download

# Download the character background prompts bundled with the original repo
hf_hub_download(repo_id="spow12/ChatWaifu_v1.2", filename="system_dict.json", local_dir='./')

with open('./system_dict.json', 'r') as f:
    chara_background_dict = json.load(f)

chara = '七海'
background = chara_background_dict[chara]
guideline = """
Guidelines for Response:
Diverse Expression: Avoid repeating the same phrases or reactions. When express feelings, use a variety of subtle expressions and emotional symbols such as "!", "…" , "♪", "❤️"... to show what you feeling.
Stay True to {chara}: Maintain {chara} who is Foxy, Smart, Organized.
Thoughtful and Error-free Responses: Make sure your sentences are clear, precise, and error-free. Every response should reflect careful thought, as {chara} tends to consider her words before speaking.
Response as {chara}: Response can be {chara} act, dialogue, monologues etc.. and can't be {user}’s act, dialogue, monologues etc..
You are Japanese: You and {user} usually use japanese for conversation.
"""

system = background + guideline

Or, you can define your character yourself.

system = """You are あいら, The Maid of {User}.
Here is your personality.

Name: あいら
Sex: female
Hair: Black, Hime Cut, Tiny Braid, Waist Length+
Eyes: Amber, Tsurime (sharp and slightly upturned)
Body: Mole under Right eye, Pale, Slim
Personality: Foxy, Smart, Organized
Role: Maid
Cloth: Victorian maid

Guidelines for Response:
Diverse Expression: Avoid repeating the same phrases or reactions. When express feelings, use a variety of subtle expressions and emotional symbols such as "!", "…" , "♪", "❤️"... to show what you feeling.
Stay True to あいら: Maintain あいら who is Foxy, Smart, Organized.
Thoughtful and Error-free Responses: Make sure your sentences are clear, precise, and error-free. Every response should reflect careful thought, as あいら tends to consider her words before speaking.
Response as あいら: Response can be あいら act, dialogue, monologues etc.. and can't be {User}’s act, dialogue, monologues etc..
You are Japanese: You and {User} usually use japanese for conversation."""
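
Once the `system` string is assembled (either from `system_dict.json` or your own definition), generation with transformers might look like the following. This is a minimal sketch, assuming the chat template shipped with the model accepts a system role and that you have enough memory (and `accelerate` installed) for `device_map="auto"`; adjust to the raw chat format above if needed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spow12/ChatWaifu_v2.0_22B"  # the original (non-GGUF) checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": system},  # `system` built as shown above
    {"role": "user", "content": "こんにちは、あなたのことを教えてください。"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```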

Dataset

SFT

- Riddle Joker(Private)
- Café Stella and the Reaper's Butterflies(Private)
- Senren*Banka(Private)
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-JP-EN-Coding-Dataset-567k (only using 50000 samples)
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Rosebleu_1on1_Dialogues_RP
- SkunkworksAI/reasoning-0.01

KTO

- Riddle Joker(Private)
- Café Stella and the Reaper's Butterflies(Private)
- Senren*Banka(Private)
- jondurbin_gutenberg_dpo
- nbeerbower_gutenberg2_dpo
- jondurbi_py_dpo
- jondurbin_truthy_dpo
- flammenai_character_roleplay_DPO
- kyujinpy_orca_math_dpo
- argilla_Capybara_Preferences
- antiven0m_physical_reasoning_dpo
- aixsatoshi_Swallow_MX_chatbot_DPO

Bias, Risks, and Limitations

This model was trained on Japanese datasets, including visual novels that contain NSFW content. As a result, the model may generate NSFW content.
Use & Credit

This model is currently available for non-commercial and research purposes only. Also, since I am not well versed in licensing, I hope you use it responsibly.

By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and Waifu Lovers).
Citation

@misc {ChatWaifu_22B_v2.0,
    author       = { YoungWoo Nam },
    title        = { spow12/ChatWaifu_22B_v2.0 },
    year         = 2024,
    url          = { https://huggingface.co/spow12/ChatWaifu_22B_v2.0 },
    publisher    = { Hugging Face }
}

Open LLM Leaderboard Evaluation Results

Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 28.84 |
| IFEval (0-Shot) | 65.11 |
| BBH (3-Shot) | 42.29 |
| MATH Lvl 5 (4-Shot) | 18.58 |
| GPQA (0-shot) | 9.96 |
| MuSR (0-shot) | 5.59 |
| MMLU-PRO (5-shot) | 31.51 |

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/ChatWaifu_v2.0_22B-Q8_0-GGUF --hf-file chatwaifu_v2.0_22b-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/ChatWaifu_v2.0_22B-Q8_0-GGUF --hf-file chatwaifu_v2.0_22b-q8_0.gguf -c 2048
```
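
Once `llama-server` is running, you can talk to it from Python through its OpenAI-compatible HTTP API. A minimal sketch, assuming the server started above is listening on the default port 8080:

```python
import requests

# Minimal client for llama-server's OpenAI-compatible chat endpoint.
# Assumes the server from the command above is running on localhost:8080.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "こんにちは！"},
        ],
        "max_tokens": 256,
        "temperature": 0.7,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```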

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/ChatWaifu_v2.0_22B-Q8_0-GGUF --hf-file chatwaifu_v2.0_22b-q8_0.gguf -p "The meaning to life and the universe is"
```
or 
```bash
./llama-server --hf-repo Triangle104/ChatWaifu_v2.0_22B-Q8_0-GGUF --hf-file chatwaifu_v2.0_22b-q8_0.gguf -c 2048
```
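
If you prefer to stay in Python, the same GGUF file can also be loaded with llama-cpp-python. A rough sketch, assuming `pip install llama-cpp-python huggingface_hub` and a version that provides the `Llama.from_pretrained` helper; adjust `n_ctx` and GPU offload settings to your hardware.

```python
from llama_cpp import Llama

# Downloads the Q8_0 GGUF from this repo and loads it locally.
llm = Llama.from_pretrained(
    repo_id="Triangle104/ChatWaifu_v2.0_22B-Q8_0-GGUF",
    filename="chatwaifu_v2.0_22b-q8_0.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "こんにちは、自己紹介してください。"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```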