add quantized versions
README.md CHANGED
@@ -53,6 +53,23 @@ parameters:
 dtype: bfloat16
 ```

+# Quantized versions
+
+Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
+
+##### GPTQ
+
+- [TheBloke/OpenHermes-2.5-neural-chat-v3-3-Slerp-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-v3-3-Slerp-GPTQ)
+
+##### GGUF
+
+- [TheBloke/OpenHermes-2.5-neural-chat-v3-3-Slerp-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-v3-3-Slerp-GGUF)
+
+##### AWQ
+
+- [TheBloke/OpenHermes-2.5-neural-chat-v3-3-Slerp-AWQ](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-v3-3-Slerp-AWQ)
+
+
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__OpenHermes-2.5-neural-chat-v3-3-Slerp)
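As a quick-start, the sketch below shows one way to load the GPTQ repository linked in the added section with Hugging Face Transformers. It is not part of the model card: it assumes `transformers`, `optimum`, and `auto-gptq` are installed and a CUDA GPU is available, and the ChatML-style prompt is an assumed format for OpenHermes 2.5, not something stated in this diff.

```python
# Minimal usage sketch (assumption, not from the model card): load TheBloke's
# GPTQ quant with Transformers' GPTQ integration (requires optimum + auto-gptq).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/OpenHermes-2.5-neural-chat-v3-3-Slerp-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# ChatML-style prompt (assumed format for OpenHermes 2.5 derivatives).
prompt = "<|im_start|>user\nWhat is SLERP model merging?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The GGUF and AWQ repositories follow the same naming scheme and are intended for llama.cpp-compatible runtimes and AWQ-aware loaders respectively; check each repository's README for the exact file names and loader requirements.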