alexmarques committed on
Commit
e5f20a3
1 Parent(s): d5dd100

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -19,8 +19,8 @@ pipeline_tag: text-generation
  - **Model Developers:** Neural Magic
 
  Compressed version of [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) specialized for text-generation.
- This model was obtained by fine-tuning the Sparse Foundational model [Sparse-Llama-2-7b-pruned_50.2of4](https://huggingface.co/nm-testing/SparseLlama-2-7b-pruned_50.2of4) on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset, using [SquareHead distillation](https://arxiv.org/abs/2310.06927) and [Llama-2-7b-ultrachat200k](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat200k) as teacher.
- It achieves a win rate of 64.9% on the [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval) benchmark (version 1.0) when using [Llama-2-70b-chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) as evaluator, whereas the dense [Llama-2-7b-ultrachat200k](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat200k) model achieves a 57.6% win rate.
+ This model was obtained by fine-tuning the Sparse Foundational model [Sparse-Llama-2-7b-pruned_50.2of4](https://huggingface.co/nm-testing/SparseLlama-2-7b-pruned_50.2of4) on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
+ It achieves a win rate of 62.1% on the [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval) benchmark (version 1.0) when using [Llama-2-70b-chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) as evaluator, whereas the dense [Llama-2-7b-ultrachat200k](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat200k) model achieves a 57.6% win rate.
 
  This model was produced as part of Neural Magic's Sparse Foundational Models initiative and demonstrates the capability of Sparse Foundational Models to transfer to the text-generation domain.
 
@@ -41,4 +41,4 @@ This model was evaluated in the [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval)
  | :----- | :--------: | :--------: |
  | [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) | 3.7% | -- |
  | [Llama-2-7b-ultrachat200k](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat200k) | 57.6% | -- |
- | SparseLlama-2-7b-ultrachat_200k-pruned_50.2of4 | 64.9% | 113% |
+ | SparseLlama-2-7b-ultrachat_200k-pruned_50.2of4 | 62.1% | 108% |
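
For reference, the Recovery column in the updated table appears to be the sparse model's win rate expressed as a percentage of the dense fine-tuned baseline's win rate; the sketch below shows that arithmetic under this assumed definition (the definition itself is not stated in the diff).

```python
# Minimal sketch, assuming Recovery = sparse win rate / dense baseline win rate.
# Win rates are the values reported in the updated table.

dense_win_rate = 57.6   # Llama-2-7b-ultrachat200k (dense fine-tuned baseline)
sparse_win_rate = 62.1  # SparseLlama-2-7b-ultrachat_200k-pruned_50.2of4

recovery = 100 * sparse_win_rate / dense_win_rate
print(f"Recovery: {recovery:.0f}%")  # Recovery: 108%
```

The same formula reproduces the previous revision's figure (64.9 / 57.6 ≈ 113%), which is consistent with the assumed definition.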