TheBloke / Nous-Hermes-Llama2-70B-GGUF (26 likes)
Tags: Transformers, GGUF, English, llama, llama-2, self-instruct, distillation, synthetic instruction
License: mit
1 contributor, 27 commits.
Latest commit eaa134e (TheBloke, about 1 year ago): GGUF model commit (made with llama.cpp commit 9912b9e)
File                                            Size       Last commit
.gitattributes                                  2.39 kB    Add q6_K and q8_0 as split files due to 50GB limit
LICENSE.txt                                     7.02 kB    Add Llama 2 license files
Notice                                          112 Bytes  Add Llama 2 license files
README.md                                       21.3 kB    Update README.md
USE_POLICY.md                                   4.77 kB    Add Llama 2 license files
config.json                                     29 Bytes   Initial GGUF model commit
nous-hermes-llama2-70b.Q2_K.gguf (LFS)          29.3 GB    GGUF model commit (made with llama.cpp commit 9912b9e)
nous-hermes-llama2-70b.Q3_K_L.gguf (LFS)        36.1 GB    GGUF model commit (made with llama.cpp commit 9912b9e)
nous-hermes-llama2-70b.Q3_K_M.gguf (LFS)        33.2 GB    GGUF model commit (made with llama.cpp commit 9912b9e)
nous-hermes-llama2-70b.Q3_K_S.gguf (LFS)        29.9 GB    GGUF model commit (made with llama.cpp commit 9912b9e)
nous-hermes-llama2-70b.Q4_K_M.gguf (LFS)        41.7 GB    Initial GGUF model commit
nous-hermes-llama2-70b.Q4_K_S.gguf (LFS)        39.3 GB    Initial GGUF model commit
nous-hermes-llama2-70b.Q5_K_M.gguf (LFS)        49 GB      Initial GGUF model commit
nous-hermes-llama2-70b.Q5_K_S.gguf (LFS)        47.7 GB    Initial GGUF model commit
nous-hermes-llama2-70b.Q6_K.gguf-split-a (LFS)  36.7 GB    Add q6_K and q8_0 as split files due to 50GB limit
nous-hermes-llama2-70b.Q6_K.gguf-split-b (LFS)  20.1 GB    Add q6_K and q8_0 as split files due to 50GB limit
nous-hermes-llama2-70b.Q8_0.gguf-split-a (LFS)  36.7 GB    Add q6_K and q8_0 as split files due to 50GB limit
nous-hermes-llama2-70b.Q8_0.gguf-split-b (LFS)  36.6 GB    Add q6_K and q8_0 as split files due to 50GB limit

All files were last committed about 1 year ago.