---
base_model: stabilityai/stablelm-2-zephyr-1_6b
datasets:
  - HuggingFaceH4/ultrachat_200k
  - allenai/ultrafeedback_binarized_cleaned
  - meta-math/MetaMathQA
  - WizardLM/WizardLM_evol_instruct_V2_196k
  - openchat/openchat_sharegpt4_dataset
  - LDJnr/Capybara
  - Intel/orca_dpo_pairs
  - hkust-nlp/deita-10k-v0
license: other
license_link: https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b/blob/main/LICENSE
language:
  - en
model_creator: stabilityai
model_name: stablelm-2-zephyr-1_6b
model_type: stablelm_epoch
inference: false
tags:
  - causal-lm
  - stablelm_epoch
pipeline_tag: text-generation
prompt_template: |
  <|system|>
  {{system_message}}<|endoftext|>
  <|user|>
  {{prompt}}<|endoftext|>
  <|assistant|>
quantized_by: brittlewis12
---

# StableLM-2-Zephyr-1.6B GGUF

**Original model**: [StableLM 2 Zephyr 1.6B](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b)

**Model creator**: [Stability AI](https://huggingface.co/stabilityai)

This repo contains GGUF format model files for Stability AI’s StableLM 2 Zephyr 1.6B.

Stable LM 2 Zephyr 1.6B is a 1.6 billion parameter instruction-tuned language model inspired by HuggingFaceH4's Zephyr 7B training pipeline. The model is trained on a mix of publicly available and synthetic datasets, utilizing Direct Preference Optimization (DPO).
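
As a quick sketch of fetching one of the GGUF files in this repo programmatically, the snippet below uses `huggingface_hub`; the `repo_id` and `filename` shown are assumed placeholders, so substitute this repo's actual id and whichever quantization you want.

```python
# Sketch: download a GGUF file from this repo with huggingface_hub.
# repo_id and filename are assumed placeholders; substitute the real values
# from this repo's "Files and versions" tab.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="brittlewis12/stablelm-2-zephyr-1_6b-GGUF",  # assumed repo id
    filename="stablelm-2-zephyr-1_6b.Q4_K_M.gguf",       # assumed quant filename
)
print(gguf_path)  # local cache path of the downloaded file
```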

## What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023, and is a replacement for GGML, which is no longer supported by llama.cpp. These files were converted using a proposed version of llama.cpp ([PR #5052](https://github.com/ggerganov/llama.cpp/pull/5052)).
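
As a minimal, non-authoritative sketch, the GGUF files here can be loaded with the `llama-cpp-python` bindings (or any other llama.cpp-based runtime); the `model_path` filename below is an assumed example.

```python
# Sketch: load a GGUF file with llama-cpp-python and run a raw completion.
# The model_path is an assumed example filename; point it at the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="stablelm-2-zephyr-1_6b.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window; adjust to taste
    n_gpu_layers=-1,   # offload all layers to GPU/Metal where available
    verbose=False,
)

# Raw completion, without the chat template (see the prompt template below).
out = llm("GGUF is a file format for", max_tokens=32)
print(out["choices"][0]["text"])
```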

## Prompt template: Zephyr

```
<|system|>
{{system_message}}<|endoftext|>
<|user|>
{{prompt}}<|endoftext|>
<|assistant|>
```
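
For example, a small helper like this (a sketch, not part of the repo) fills in the template before passing the string to a llama.cpp-based runtime; the default system message is an assumption:

```python
# Sketch: build a Zephyr-style prompt string matching the template above.
def format_zephyr_prompt(prompt: str,
                         system_message: str = "You are a helpful assistant.") -> str:
    # The default system_message is an assumed placeholder; override as needed.
    return (
        f"<|system|>\n{system_message}<|endoftext|>\n"
        f"<|user|>\n{prompt}<|endoftext|>\n"
        f"<|assistant|>\n"
    )

# Example usage with the `llm` instance from the loading sketch above:
# out = llm(
#     format_zephyr_prompt("What is the GGUF file format?"),
#     max_tokens=256,
#     stop=["<|endoftext|>"],  # stop at the model's end-of-turn token
# )
# print(out["choices"][0]["text"])
```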

## Download & run with cnvrs on iPhone, iPad, and Mac!

[cnvrs.ai](https://cnvrs.ai)

cnvrs is the best app for private, local AI on your device:

- create & save Characters with custom system prompts & temperature settings
- download and experiment with any GGUF model you can find on HuggingFace!
- make it your own with custom Theme colors
- powered by Metal ⚡️ & llama.cpp, with haptics during response streaming!
- try it out yourself today, on TestFlight!
- follow cnvrs on Twitter to stay up to date

## Original Model Evaluations

### MT-Bench

| Model | Size | MT-Bench |
| --- | --- | --- |
| Mistral-7B-Instruct-v0.2 | 7B | 7.61 |
| Llama2-Chat | 70B | 6.86 |
| stablelm-zephyr-3b | 3B | 6.64 |
| MPT-30B-Chat | 30B | 6.39 |
| stablelm-2-zephyr-1.6b | 1.6B | 5.42 |
| Falcon-40B-Instruct | 40B | 5.17 |
| Qwen-1.8B-Chat | 1.8B | 4.95 |
| dolphin-2.6-phi-2 | 2.7B | 4.93 |
| phi-2 | 2.7B | 4.29 |
| TinyLlama-1.1B-Chat-v1.0 | 1.1B | 3.46 |

### OpenLLM Leaderboard

| Model | Size | Average | ARC Challenge (acc_norm) | HellaSwag (acc_norm) | MMLU (acc_norm) | TruthfulQA (mc2) | Winogrande (acc) | Gsm8k (acc) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| microsoft/phi-2 | 2.7B | 61.32% | 61.09% | 75.11% | 58.11% | 44.47% | 74.35% | 54.81% |
| stabilityai/stablelm-2-zephyr-1_6b | 1.6B | 49.89% | 43.69% | 69.34% | 41.85% | 45.21% | 64.09% | 35.18% |
| microsoft/phi-1_5 | 1.3B | 47.69% | 52.90% | 63.79% | 43.89% | 40.89% | 72.22% | 12.43% |
| stabilityai/stablelm-2-1_6b | 1.6B | 45.54% | 43.43% | 70.49% | 38.93% | 36.65% | 65.90% | 17.82% |
| mosaicml/mpt-7b | 7B | 44.28% | 47.70% | 77.57% | 30.80% | 33.40% | 72.14% | 4.02% |
| KnutJaegersberg/Qwen-1_8B-Llamaified* | 1.8B | 44.75% | 37.71% | 58.87% | 46.37% | 39.41% | 61.72% | 24.41% |
| openlm-research/open_llama_3b_v2 | 3B | 40.28% | 40.27% | 71.60% | 27.12% | 34.78% | 67.01% | 0.91% |
| tiiuae/falcon-rw-1b | 1B | 37.07% | 35.07% | 63.56% | 25.28% | 35.96% | 62.04% | 0.53% |
| TinyLlama/TinyLlama-1.1B-3T | 1.1B | 36.40% | 33.79% | 60.31% | 26.04% | 37.32% | 59.51% | 1.44% |