---
base_model: "NousResearch/Hermes-2-Pro-Mistral-7B"
language:
- en
tags:
- transformers
- safetensors
- mistral
- text-generation
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- conversational
- en
- dataset:teknium/OpenHermes-2.5
- base_model:mistralai/Mistral-7B-v0.1
- base_model:finetune:mistralai/Mistral-7B-v0.1
- license:apache-2.0
- autotrain_compatible
- text-generation-inference
- endpoints_compatible
- region:us
license: "apache-2.0"
inference: false
datasets:
- teknium/OpenHermes-2.5
quantized_by: pbatralx
---
# Hermes-2-Pro-Mistral-7B

This repository contains GGUF-quantized versions of the original model: [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B).

| Name | Quantization Method | Size (GB) |
|------|---------------------|-----------|
| hermes-2-pro-mistral-7b.Q8_0.gguf | q8_0 | 7.17 |
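
As a minimal usage sketch (not part of the upstream model's official instructions), the GGUF file above can be run locally with llama-cpp-python. The local file path, context size, and sampling parameters below are assumptions for illustration; the ChatML prompt format follows the `chatml` tag in the metadata.

```python
# Minimal sketch: run the Q8_0 GGUF with llama-cpp-python (pip install llama-cpp-python).
# Assumes hermes-2-pro-mistral-7b.Q8_0.gguf has already been downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="./hermes-2-pro-mistral-7b.Q8_0.gguf",  # file from the table above
    n_ctx=4096,            # assumed context window
    chat_format="chatml",  # Hermes 2 Pro uses ChatML-style prompts
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GGUF quantization in one sentence."},
    ],
    max_tokens=128,
    temperature=0.7,  # illustrative sampling settings
)
print(response["choices"][0]["message"]["content"])
```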