# OpenHermes-2.5-Mistral-7B-pruned2.4
This repo contains model files for [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) optimized for NM-vLLM, a high-throughput serving engine for compressed LLMs.

This model was pruned to 2:4 semi-structured (50%) sparsity with SparseGPT, using SparseML.
## Inference
Install NM-vLLM for fast inference and low memory usage:

```bash
pip install nm-vllm[sparse]
```
Run in a Python pipeline for local inference:

```python
from vllm import LLM, SamplingParams

model = LLM("nm-testing/OpenHermes-2.5-Mistral-7B-pruned2.4", sparsity="semi_structured_sparse_w16a16")

prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"

sampling_params = SamplingParams(max_tokens=100)
outputs = model.generate(formatted_prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)

"""
In order to make banana bread, you will need to follow these steps:
1. Prepare the ingredients: You will need flour, sugar, eggs, and bananas.
2. Prepare your ingredients: Prepare your bananas, flour, sugar, and eggs by preparing them in their respective bowls, ready to prepare the banana bread.
3. Make the batter: You will prepare batter by combining the flour, sugar, eggs and bananas. This
"""
```
## Prompt template

```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
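This is the ChatML format: each turn is wrapped in `<|im_start|>{role}` and `<|im_end|>` markers, and the prompt ends with an open assistant turn for the model to complete. A small illustrative helper (hypothetical, not part of this repo) for building multi-turn prompts:

```python
def format_chatml(messages: list[dict]) -> str:
    # messages: [{"role": "system" | "user" | "assistant", "content": "..."}]
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    # Leave an open assistant turn for the model to complete
    prompt += "<|im_start|>assistant"
    return prompt

print(format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to make banana bread?"},
]))
```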
## Sparsification

For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
Install SparseML:

```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
```
Replace the recipe as you like and run this one-shot compression script to apply SparseGPT:

```python
import sparseml.transformers

original_model_name = "teknium/OpenHermes-2.5-Mistral-7B"
calibration_dataset = "open_platypus"
output_directory = "output/"

# SparseGPT recipe: 50% sparsity in a 2:4 semi-structured pattern,
# applied to every transformer decoder layer
recipe = """
test_stage:
  obcq_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      sequential_update: true
      mask_structure: '2:4'
      targets: ['re:model.layers.\d*$']
"""

# Apply SparseGPT to the model in one shot using the calibration dataset
sparseml.transformers.oneshot(
    model=original_model_name,
    dataset=calibration_dataset,
    recipe=recipe,
    output_dir=output_directory,
)
```
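As a sanity check after one-shot pruning, you can measure the fraction of zeroed weights in the exported checkpoint; the 2:4 pattern should give roughly 50% sparsity in the targeted layers. A minimal sketch, assuming `torch` and `transformers` are installed and the model was saved to `output/`:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the pruned checkpoint produced by the one-shot script above
model = AutoModelForCausalLM.from_pretrained("output/", torch_dtype=torch.float16)

total, zeros = 0, 0
for name, param in model.named_parameters():
    # Only the 2D weight matrices inside the decoder layers were targeted
    if "layers" in name and param.dim() == 2:
        total += param.numel()
        zeros += (param == 0).sum().item()

print(f"Sparsity over targeted layers: {zeros / total:.2%}")  # expect ~50%
```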
## Slack

For further support, and for discussion of these models and AI in general, join Neural Magic's Slack community.
## Model tree

- Base model: mistralai/Mistral-7B-v0.1
- Fine-tuned: teknium/OpenHermes-2.5-Mistral-7B
- This model: neuralmagic/OpenHermes-2.5-Mistral-7B-pruned2.4