Text Generation
Transformers
GGUF
Inference Endpoints
imatrix
bartowski committed
Commit 962e423
1 Parent(s): 0bd9b0b

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -13,6 +13,8 @@ pipeline_tag: text-generation
 
 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> pull request with <a href="https://github.com/ggerganov/llama.cpp/pull/7402">Smaug support</a> for quantization.
 
+ This model can be run as of release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3001">b3001</a>
+
 Original model: https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct
 
 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
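
Since the added line points readers at llama.cpp release b3001, here is a minimal invocation sketch for running one of these quants with the `main` binary from that release or newer. The quant filename, prompt, and generation settings below are illustrative assumptions, not values taken from this repo.

```bash
# Sketch: run a downloaded GGUF quant with llama.cpp (release b3001 or newer).
# The filename and generation settings are assumptions for illustration only.
./main \
  -m Smaug-Llama-3-70B-Instruct-Q4_K_M.gguf \
  -p "Write a haiku about quantization." \
  -n 128 \
  -c 4096
```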
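
For context on "All quants made using imatrix option", a rough sketch of the llama.cpp imatrix workflow, assuming an f16 GGUF conversion and a local copy of the calibration dataset. The filenames `Smaug-Llama-3-70B-Instruct-f16.gguf`, `calibration.txt`, `imatrix.dat`, and the Q4_K_M type are assumptions, not the exact commands used for these quants.

```bash
# Sketch of an imatrix-based quantization flow (filenames are assumptions).
# 1) Compute an importance matrix from a calibration text file.
./imatrix -m Smaug-Llama-3-70B-Instruct-f16.gguf -f calibration.txt -o imatrix.dat

# 2) Quantize using that importance matrix (Q4_K_M chosen as an example type).
./quantize --imatrix imatrix.dat \
  Smaug-Llama-3-70B-Instruct-f16.gguf \
  Smaug-Llama-3-70B-Instruct-Q4_K_M.gguf \
  Q4_K_M
```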