---
base_model: stabilityai/stablelm-2-zephyr-1_6b
datasets:
- HuggingFaceH4/ultrachat_200k
- allenai/ultrafeedback_binarized_cleaned
- meta-math/MetaMathQA
- WizardLM/WizardLM_evol_instruct_V2_196k
- openchat/openchat_sharegpt4_dataset
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- hkust-nlp/deita-10k-v0
license: other
license_link: https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b/blob/main/LICENSE
language:
- en
model_creator: stabilityai
model_name: stablelm-2-zephyr-1_6b
model_type: stablelm_epoch
inference: false
tags:
- causal-lm
- stablelm_epoch
pipeline_tag: text-generation
prompt_template: |
  <|system|>
  {{system_message}}<|endoftext|>
  <|user|>
  {{prompt}}<|endoftext|>
  <|assistant|>

quantized_by: brittlewis12
---

# StableLM 2 Zephyr 1.6B GGUF

Original model: [StableLM 2 Zephyr 1.6B](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b)
Model creator: [Stability AI](https://huggingface.co/stabilityai)

This repo contains GGUF format model files for Stability AI's StableLM 2 Zephyr 1.6B.

> Stable LM 2 Zephyr 1.6B is a 1.6 billion parameter instruction tuned language model inspired by HuggingFaceH4's Zephyr 7B training pipeline. The model is trained on a mix of publicly available datasets and synthetic datasets, utilizing Direct Preference Optimization (DPO).

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.

Converted using a proposed version of llama.cpp ([PR #5052](https://github.com/ggerganov/llama.cpp/pull/5052)).

### Prompt template: Zephyr

```
<|system|>
{{system_message}}<|endoftext|>
<|user|>
{{prompt}}<|endoftext|>
<|assistant|>
```
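As a minimal sketch of applying the template above, the helper below (not part of the original card; the function name is illustrative) assembles the Zephyr prompt string from a system message and a user prompt:

```python
def format_zephyr_prompt(system_message: str, prompt: str) -> str:
    """Build a Zephyr-style prompt string matching the template above.

    Each turn is opened by its role tag (<|system|>, <|user|>) and closed
    with <|endoftext|>; the string ends with <|assistant|> so the model
    generates the assistant's reply next.
    """
    return (
        f"<|system|>\n{system_message}<|endoftext|>\n"
        f"<|user|>\n{prompt}<|endoftext|>\n"
        f"<|assistant|>\n"
    )

if __name__ == "__main__":
    print(format_zephyr_prompt("You are a helpful assistant.", "What is GGUF?"))
```

The same string can be passed as the prompt to any GGUF-capable runtime (e.g. llama.cpp) when the runtime is not applying a chat template itself.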

---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [TestFlight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---

## Original Model Evaluations

![MT-Bench](https://cdn-uploads.huggingface.co/production/uploads/61b2bf4f5b1f7cad1799cfbb/QH00HVM3lg-5f17U_py4K.png)

| Model | Size | MT-Bench |
|-------------------------|------|----------|
| Mistral-7B-Instruct-v0.2| 7B | 7.61 |
| Llama2-Chat | 70B | 6.86 |
| stablelm-zephyr-3b | 3B | 6.64 |
| MPT-30B-Chat | 30B | 6.39 |
| **stablelm-2-zephyr-1.6b** | 1.6B | 5.42 |
| Falcon-40B-Instruct | 40B | 5.17 |
| Qwen-1.8B-Chat | 1.8B | 4.95 |
| dolphin-2.6-phi-2 | 2.7B | 4.93 |
| phi-2 | 2.7B | 4.29 |
| TinyLlama-1.1B-Chat-v1.0| 1.1B | 3.46 |

### OpenLLM Leaderboard

| Model | Size | Average | ARC Challenge (acc_norm) | HellaSwag (acc_norm) | MMLU (acc_norm) | TruthfulQA (mc2) | Winogrande (acc) | Gsm8k (acc) |
|----------------------------------------|------|---------|-------------------------|----------------------|-----------------|------------------|------------------|-------------|
| microsoft/phi-2 | 2.7B | 61.32% | 61.09% | 75.11% | 58.11% | 44.47% | 74.35% | 54.81% |
| **stabilityai/stablelm-2-zephyr-1_6b** | 1.6B | 49.89% | 43.69% | 69.34% | 41.85% | 45.21% | 64.09% | 35.18% |
| microsoft/phi-1_5 | 1.3B | 47.69% | 52.90% | 63.79% | 43.89% | 40.89% | 72.22% | 12.43% |
| stabilityai/stablelm-2-1_6b | 1.6B | 45.54% | 43.43% | 70.49% | 38.93% | 36.65% | 65.90% | 17.82% |
| KnutJaegersberg/Qwen-1_8B-Llamaified* | 1.8B | 44.75% | 37.71% | 58.87% | 46.37% | 39.41% | 61.72% | 24.41% |
| mosaicml/mpt-7b | 7B | 44.28% | 47.70% | 77.57% | 30.80% | 33.40% | 72.14% | 4.02% |
| openlm-research/open_llama_3b_v2 | 3B | 40.28% | 40.27% | 71.60% | 27.12% | 34.78% | 67.01% | 0.91% |
| tiiuae/falcon-rw-1b | 1B | 37.07% | 35.07% | 63.56% | 25.28% | 35.96% | 62.04% | 0.53% |
| TinyLlama/TinyLlama-1.1B-3T | 1.1B | 36.40% | 33.79% | 60.31% | 26.04% | 37.32% | 59.51% | 1.44% |
+