TheBloke committed on
Commit
f4e7275
1 Parent(s): 3fe0eae

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -28,7 +28,7 @@ Please note that these GGMLs are **not compatible with llama.cpp, or currently w
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/MPT-30B-Dolphin-v2-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/manojpreveen/mpt-30b-dolphin-v2)
 
-## Prompt template: orca
+## Prompt template: custom
 
 ```
 <system>: You are a helpful assistant
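The hunk above shows only the first line of the renamed prompt template (`<system>: You are a helpful assistant`); the rest lies outside the diff context. As a minimal sketch of applying such a template, the helper below formats a single-turn prompt. The `<human>:`/`<bot>:` turn markers are assumptions for illustration, not taken from this diff.

```python
# Sketch of formatting a prompt in the "<system>:" template style shown above.
# NOTE: the "<human>:" and "<bot>:" markers are hypothetical; the full template
# is not visible in this hunk.

def build_prompt(user_message: str,
                 system_message: str = "You are a helpful assistant") -> str:
    """Format a single-turn prompt with a system line followed by the
    user's turn and an empty bot turn for the model to complete."""
    return (
        f"<system>: {system_message}\n"
        f"<human>: {user_message}\n"
        f"<bot>:"
    )

print(build_prompt("What quantisations are available for this model?"))
```

In practice the exact turn markers and stop sequences must match what the model was fine-tuned on, so the real template in the README should be used verbatim.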