TheBloke committed
Commit 95b7df5
1 parent: 9c9014e

Upload README.md

Files changed (1): README.md (+30, -22)
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+base_model: https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v11.1-bf16
 inference: false
 language:
 - zh
@@ -12,7 +13,6 @@ language:
 library_name: transformers
 license: llama2
 model_creator: OpenBuddy
-model_link: https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v11.1-bf16
 model_name: OpenBuddy Llama2 13B v11.1
 model_type: llama
 pipeline_tag: text-generation
@@ -40,23 +40,25 @@ quantized_by: TheBloke
 - Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
 - Original model: [OpenBuddy Llama2 13B v11.1](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v11.1-bf16)
 
+<!-- description start -->
 ## Description
 
 This repo contains GGUF format model files for [OpenBuddy's OpenBuddy Llama2 13B v11.1](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v11.1-bf16).
 
+<!-- description end -->
 <!-- README_GGUF.md-about-gguf start -->
 ### About GGUF
 
-GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
-
-The key benefit of GGUF is that it is a extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
-
-Here are a list of clients and libraries that are known to support GGUF:
-* [llama.cpp](https://github.com/ggerganov/llama.cpp).
-* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions.
-* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for story telling.
-* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
+GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
 * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
 * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
 * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
 * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
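
As a quick sketch of the Python routes in the list above (illustrative only, not part of the diffed README): loading one of this repo's GGUF files with [ctransformers](https://github.com/marella/ctransformers) might look like the following. The filename follows the example command later in the README; `gpu_layers=32` is an assumption to tune for your hardware.

```python
# Sketch: load a GGUF file from this repo with ctransformers
# (pip install ctransformers). gpu_layers is an assumption - set it to 0
# for CPU-only, or higher to offload more layers to the GPU.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "openbuddy-llama2-13b-v11.1.q4_K_M.gguf",  # local path to the downloaded file
    model_type="llama",  # OpenBuddy Llama2 uses the Llama architecture
    gpu_layers=32,
)

print(llm("User: Hello, who are you?\nAssistant:", max_new_tokens=128))
```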
@@ -72,23 +74,30 @@ Here are a list of clients and libraries that are known to support GGUF:
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
-## Prompt template: Vicuna-Short
+## Prompt template: OpenBuddy
 
 ```
-You are a helpful AI assistant.
+You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.
+Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
+If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
+You like to use emojis. You can speak fluently in many languages, for example: English, Chinese.
+You cannot access the internet, but you have vast knowledge, cutoff: 2021-09.
+You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.
 
-USER: {prompt}
-ASSISTANT:
+User: {prompt}
+Assistant:
 
 ```
 
 <!-- prompt-template end -->
+
+
 <!-- compatibility_gguf start -->
 ## Compatibility
 
-These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
+These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
 
-They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.
+They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
 
 ## Explanation of quantisation methods
 <details>
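
The OpenBuddy template added in the hunk above is plain text with a single `{prompt}` slot. As an illustrative sketch (the `build_prompt` helper is a hypothetical name, not from the README), filling it in from Python might look like this:

```python
# Sketch: fill the {prompt} slot of the OpenBuddy template shown above.
# SYSTEM_TEXT is copied verbatim from the README's prompt template;
# build_prompt() is a hypothetical helper, not part of the README.
SYSTEM_TEXT = """You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
You like to use emojis. You can speak fluently in many languages, for example: English, Chinese.
You cannot access the internet, but you have vast knowledge, cutoff: 2021-09.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI."""


def build_prompt(user_message: str) -> str:
    """Return the full prompt: system text, then one User/Assistant turn."""
    return f"{SYSTEM_TEXT}\n\nUser: {user_message}\nAssistant:"


print(build_prompt("What is GGUF?"))
```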
@@ -132,18 +141,15 @@ Refer to the Provided Files table below to see what files use which methods, and
 <!-- README_GGUF.md-how-to-run start -->
 ## Example `llama.cpp` command
 
-Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
-
-For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
+Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
-```
-./main -t 10 -ngl 32 -m openbuddy-llama2-13b-v11.1.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
-```
-Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.
+```shell
+./main -ngl 32 -m openbuddy-llama2-13b-v11.1.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.\nAlways answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\nYou like to use emojis. You can speak fluently in many languages, for example: English, Chinese.\nYou cannot access the internet, but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser: {prompt}\nAssistant:"
+```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
-Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
+Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
 
 If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
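
For the Python bindings listed earlier, a rough [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) equivalent of the command above might look like the following sketch (not from the README); the keyword arguments mirror the CLI flags and should be adjusted the same way.

```python
# Sketch: the README's llama.cpp invocation via llama-cpp-python
# (pip install llama-cpp-python). Mirrors -c 4096 (n_ctx), -ngl 32
# (n_gpu_layers), --temp 0.7 and --repeat_penalty 1.1.
from llama_cpp import Llama

llm = Llama(
    model_path="openbuddy-llama2-13b-v11.1.q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 if you don't have GPU acceleration
)

# In practice, prepend the full OpenBuddy system text from the prompt template.
prompt = "User: What is GGUF?\nAssistant:"

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.7,
    repeat_penalty=1.1,
    stop=["User:"],  # stop before the model starts a new user turn
)
print(output["choices"][0]["text"])
```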
@@ -200,10 +206,12 @@ For further support, and discussions on these models and AI in general, join us
 
 [TheBloke AI's Discord server](https://discord.gg/theblokeai)
 
-## Thanks, and how to contribute.
+## Thanks, and how to contribute
 
 Thanks to the [chirper.ai](https://chirper.ai) team!
 
+Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
+
 I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
 
 If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
 