---
base_model: "NousResearch/Hermes-2-Pro-Mistral-7B"
language:
  - en
tags:
  - transformers
  - safetensors
  - mistral
  - text-generation
  - Mistral
  - instruct
  - finetune
  - chatml
  - DPO
  - RLHF
  - gpt4
  - synthetic data
  - distillation
  - function calling
  - json mode
  - conversational
license: "apache-2.0"
inference: false
datasets:
  - teknium/OpenHermes-2.5
quantized_by: pbatralx
---

# Hermes-2-Pro-Mistral-7B

This repository contains quantized GGUF versions of the original model: [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B).

| Name | Quantization Method | Size (GB) |
|------|---------------------|-----------|
| hermes-2-pro-mistral-7b.Q8_0.gguf | q8_0 | 7.17 |
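
## Usage

Below is a minimal sketch of downloading and running the Q8_0 file with `huggingface_hub` and `llama-cpp-python`. The repository id is a placeholder, and the context size and generation settings are assumptions rather than tested values.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the quantized file from the Hub; replace the placeholder with this
# repository's actual id.
model_path = hf_hub_download(
    repo_id="<this-repo-id>",
    filename="hermes-2-pro-mistral-7b.Q8_0.gguf",
)

# Load the GGUF file. Hermes 2 Pro uses the ChatML prompt format, which
# llama-cpp-python supports via its built-in "chatml" chat format.
llm = Llama(
    model_path=model_path,
    n_ctx=4096,            # context window; lower it to reduce memory use
    chat_format="chatml",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what Q8_0 quantization does."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```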