---
license: cc
inference: false
language:
- en
- fa
tags:
- llama
- text-generation-inference
---

# UniversityOfTehran's PersianMind-v1.0 GGUF

These files are GGUF format model files for [UniversityOfTehran's PersianMind-v1.0](https://huggingface.co/universitytehran/PersianMind-v1.0).

GGUF files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
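
For a quick programmatic test with llama-cpp-python from the list above, here is a minimal sketch (an illustration, not part of the upstream model card; it assumes `pip install llama-cpp-python` and that the `q4_K_M` file has been downloaded into the working directory):

```python
from llama_cpp import Llama

# Load the quantized model; parameters mirror the llama.cpp flags used below.
llm = Llama(
    model_path="PersianMind-v1.0.q4_K_M.gguf",  # any of the quant files works
    n_ctx=2048,        # context length, like -c 2048
    n_gpu_layers=32,   # like -ngl 32; set to 0 for CPU-only inference
    n_threads=2,       # like -t 2; use your physical core count
)

# Single-turn completion using the "You:"/"PersianMind:" turn markers.
output = llm(
    "You: Explain artificial intelligence.\nPersianMind: ",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.2,
    stop=["You:"],  # stop before the model invents the next user turn
)
print(output["choices"][0]["text"])
```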

## How to run in `llama.cpp`

I use the following command line; adjust it to your own tastes and needs:

```
./main -t 2 -ngl 32 -m PersianMind-v1.0.q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.2 -n -1 -e -p "This is a conversation with PersianMind. It is an artificial intelligence model designed by a team of NLP experts at the University of Tehran to help you with various tasks such as answering questions, providing recommendations, and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information.\nYou: در مورد هوش مصنوعی توضیح بده.\nPersianMind: "
```

The Persian user turn at the end of the prompt asks the model to explain artificial intelligence.

Change `-t 2` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`; you can use `--interactive-first` to start in interactive mode.
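
Outside of llama.cpp, a similar chat-style interaction can be approximated with llama-cpp-python. A hedged sketch (the system prompt and turn markers are taken from the command above; the rest is illustrative, with no context-window trimming, for brevity):

```python
from llama_cpp import Llama

llm = Llama(model_path="PersianMind-v1.0.q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

# System prompt taken from the llama.cpp command above.
history = ("This is a conversation with PersianMind. It is an artificial "
           "intelligence model designed by a team of NLP experts at the "
           "University of Tehran to help you with various tasks such as "
           "answering questions, providing recommendations, and helping "
           "with decision making.\n")

while True:
    user = input("You: ")
    history += f"You: {user}\nPersianMind: "
    out = llm(history, max_tokens=256, temperature=0.7,
              repeat_penalty=1.2, stop=["You:"])
    reply = out["choices"][0]["text"].strip()
    print(f"PersianMind: {reply}")
    history += reply + "\n"
```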

## Compatibility

I have uploaded both the original llama.cpp quant methods (`q4_0, q4_1, q5_0, q5_1, q8_0`) as well as the k-quant methods (`q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`).

Please refer to [llama.cpp](https://github.com/ggerganov/llama.cpp) and [TheBloke](https://huggingface.co/TheBloke)'s GGUF models for further explanation.
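
If you prefer to fetch a single quant file programmatically, huggingface_hub can do it. A sketch, assuming the files live in a Hub repository (the repo id below is a hypothetical placeholder, not a confirmed location):

```python
from huggingface_hub import hf_hub_download

# REPO_ID is a hypothetical placeholder for the repository hosting these GGUF files.
path = hf_hub_download(
    repo_id="REPO_ID",
    filename="PersianMind-v1.0.q4_K_M.gguf",  # pick whichever quant suits your hardware
)
print(path)  # local cache path of the downloaded file
```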

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

## Thanks

Thanks to [Pedram Rostami, Ali Salemi, and Mohammad Javad Dousti](https://huggingface.co/universitytehran) for providing checkpoints of the model.

Thanks to [Georgi Gerganov](https://github.com/ggerganov) and all of the awesome people in the AI community.