Update and fix test example
README.md
@@ -107,9 +107,9 @@ To test the model against Hugging Face, you can use the following command:
 
 ```sh
 # Example command for testing against Hugging Face
-python convert-hf-to-gguf
+python convert-hf-to-gguf.py models/smallcloudai/Refact-1_6B-fim
 
-./main -m ./Refact-1_6B-fim/ggml-model-f16.gguf
+./main --color -e -s 1 -c 256 -n 256 -m ./models/smallcloudai/Refact-1_6B-fim/ggml-model-f16.gguf -p "def multiply(a: int, b: int) -> int:"
 ```
 
 This resolves llama.cpp issue [#3061](https://github.com/ggerganov/llama.cpp/issues/3061).
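For context, the `-p` prompt in the updated test command is a fill-in seed for the Refact FIM model: a correct completion would produce the trivial function body, which is what to look for when eyeballing the test output. A sketch of the expected result (the body is my assumption of a correct completion, not captured model output):

```python
# Prompt passed via -p in the test command above:
#   "def multiply(a: int, b: int) -> int:"
# A correct completion fills in the obvious body:
def multiply(a: int, b: int) -> int:
    return a * b
```

Any completion equivalent to `return a * b` indicates the converted GGUF model is generating sensibly.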