Update README.md
README.md CHANGED
@@ -33,6 +33,15 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp)
 * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Samantha-7B-GGML)
 * [Eric's original unquantised fp16 model in HF format](https://huggingface.co/ehartford/samantha-7b)
 
+## Prompt template
+
+```
+<system prompt>
+
+USER: <prompt>
+ASSISTANT:
+```
+
 ## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
 llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
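
Since the new prompt template is the substance of this change, here is a minimal sketch of how one might assemble it and hand it to llama.cpp's `main` binary. The model filename, system prompt text, and generation length below are placeholder assumptions, not values from this repo; `-m`, `-p`, and `-n` are standard llama.cpp options, and the binary must be built from commit 2d5db48 or later to read the files in the main branch.

```python
import subprocess

# Placeholder values (assumptions, not taken from this repo): use the
# quantised file you downloaded and whatever system prompt you prefer.
MODEL_PATH = "samantha-7b.ggmlv3.q4_0.bin"
SYSTEM_PROMPT = "You are Samantha, a helpful AI companion."
USER_PROMPT = "Hello, who are you?"

# Assemble the prompt exactly as the template above specifies:
#   <system prompt>
#   (blank line)
#   USER: <prompt>
#   ASSISTANT:
prompt = f"{SYSTEM_PROMPT}\n\nUSER: {USER_PROMPT}\nASSISTANT:"

# Invoke llama.cpp's main binary (built from commit 2d5db48 or later).
subprocess.run([
    "./main",
    "-m", MODEL_PATH,   # path to the GGML model file
    "-p", prompt,       # the formatted prompt
    "-n", "256",        # number of tokens to generate
])
```

The model then continues generating from `ASSISTANT:`, so everything it emits is the assistant's reply in the template's turn structure.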