---
tags:
- gguf
- iMat
- conversational
- storywriting
license: cc-by-nc-4.0
---

<h3> Model Card for Fimbulvetr-11B-v2-iMat-GGUF</h3>

* Model creator: [Sao10K](https://huggingface.co/Sao10K/)
* Original model: [Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)

<b>Update 3/4/24:</b> The newest I-quant format, <b>[IQ4_XS](https://huggingface.co/InferenceIllusionist/Fimbulvetr-11B-v2-iMat-GGUF/blob/main/Fimbulvetr-11B-v2-iMat-IQ4_XS.gguf)</b>, outperforms previous I-quants at just 4.25 bpw in [benchmarks](https://github.com/ggerganov/llama.cpp/pull/5747).

Tested on the latest llama.cpp and koboldcpp v1.60.

<h4>This model fits a whole lot into its size! Its understanding of other languages is impressive.</h4>
<img src="https://huggingface.co/InferenceIllusionist/Fimbulvetr-11B-v2-iMat-GGUF/resolve/main/Fimbulvetr-11B-v2%20IQ4_XS.JPG" width="850"/>

<b>Tip: Select the biggest size you can fit in VRAM while still leaving some room for context.</b>


All credit to Sao10K for the original model. This is just a quick test of the new quantization types, such as IQ3_S, in an attempt to further reduce VRAM requirements.



Quantized from fp16 with love. Importance matrix file [Fimbulvetr-11B-v2-imatrix.dat](https://huggingface.co/InferenceIllusionist/Fimbulvetr-11B-v2-iMat-GGUF/blob/main/Fimbulvetr-11B-v2-imatrix.dat) was calculated using Q8_0. 


<i>Looking for Q3/Q4/Q5 quants? See the link in the original model card below.</i>

--- 


![Fox1](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2/resolve/main/cute1.jpg)

*Cute girl to catch your attention.*

**https://huggingface.co/Sao10K/Fimbulvetr-11B-v2-GGUF <------ GGUF**

# Fimbulvetr-v2 - A Solar-Based Model

Prompt Formats - Alpaca or Vicuna. Either one works fine.
Recommended SillyTavern Presets - Universal Light 

Alpaca:
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```

Vicuna:
```
System: <Prompt>

User: <Input>

Assistant:
```
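The two templates above can be assembled programmatically. Below is a minimal sketch (the helper names `alpaca_prompt` and `vicuna_prompt` are my own, not part of the model card or any library) that builds each format as a plain string:

```python
def alpaca_prompt(instruction: str, context: str = "") -> str:
    """Build an Alpaca-style prompt; the Input section is optional."""
    parts = ["### Instruction:", instruction]
    if context:
        parts += ["### Input:", context]
    parts.append("### Response:")
    return "\n".join(parts)


def vicuna_prompt(system: str, user: str) -> str:
    """Build a Vicuna-style prompt ending at the Assistant turn."""
    return f"System: {system}\n\nUser: {user}\n\nAssistant:"


# Example usage
print(alpaca_prompt("Summarize the text.", "Fimbulvetr is a Solar-based model."))
print(vicuna_prompt("You are a helpful storyteller.", "Write an opening line."))
```

Either string can then be passed verbatim as the prompt to llama.cpp, koboldcpp, or any other GGUF-compatible runtime.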


****

Changelogs:

25/2 - repo renamed to remove test, model card redone. Model's officially out.
<br>15/2 - Heavy testing complete. Good feedback.

***

<details><summary>Rant - Kept For Historical Reasons</summary>

Ramble to meet minimum length requirements:

Tbh i wonder if this shit is even worth doing. Like im just some broke guy lmao I've spent so much. And for what? I guess creds. Feels good when a model gets good feedback, but it seems like im invisible sometimes. I should probably be advertising myself and my models in other places but I rarely have the time to. Probably just internal jealousy sparking up here and now. Whatever I guess.

Anyway cool EMT vocation I'm doing is cool except it pays peanuts, damn bruh 1.1k per month lmao. Government too broke to pay for shit. Pays the bills I suppose.

Anyway cool beans, I'm either going to continue the Solar Train or go to Mixtral / Yi when I get paid.

You still here?
</details><br>