duyntnet committed c199ce8 (parent: 4d9904c)

Upload README.md

---
license: other
language:
- en
pipeline_tag: text-generation
tags:
- gguf
- imatrix
- mistralai
- Mistral-7B-Instruct-v0.1
---
Quantizations of https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1

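Not covered in the original readme: GGUF files like these are typically run with llama.cpp-compatible tooling. Below is a minimal sketch using the `llama-cpp-python` bindings; the model file name is illustrative — substitute whichever quant you actually download from this repo.

```python
# Sketch: running one of the GGUF quantizations with llama-cpp-python
# (pip install llama-cpp-python). The file name below is illustrative.
PROMPT = "<s>[INST] What is your favourite condiment? [/INST]"

def run_quant(model_path: str, max_tokens: int = 128) -> str:
    # Import lazily so the snippet can be read without the bindings installed.
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=4096)  # n_ctx = context window
    out = llm(PROMPT, max_tokens=max_tokens, stop=["</s>"])
    return out["choices"][0]["text"]

# Example call (requires a downloaded quant):
# run_quant("Mistral-7B-Instruct-v0.1.Q4_K_M.gguf")
```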
# From original readme

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
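For illustration only (this helper is not from the original readme), the multi-turn string above can be assembled programmatically; spacing follows the example verbatim. In practice, prefer `apply_chat_template()`, which is the authoritative implementation of the format.

```python
# Illustrative helper that assembles the multi-turn prompt string exactly
# as shown in the example above (not part of the original readme).
def build_prompt(messages):
    """messages: dicts with "role" ("user"/"assistant") and "content"."""
    text = "<s>"  # beginning-of-sentence id appears only once, at the start
    for m in messages:
        if m["role"] == "user":
            text += f"[INST] {m['content']} [/INST]"
        else:
            # assistant turns are closed by the end-of-sentence token
            text += f"{m['content']}</s> "
    return text

build_prompt([{"role": "user", "content": "What is your favourite condiment?"}])
# -> '<s>[INST] What is your favourite condiment? [/INST]'
```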

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

# apply_chat_template wraps the conversation in [INST] ... [/INST] tokens
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
# batch_decode returns the prompt plus the newly generated tokens
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "", line 1, in
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```

This should not be required after transformers-v4.33.4.
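A quick way to tell whether the source install is needed at all is to compare the installed transformers version against that threshold. The helper below is an illustrative sketch, not part of the original readme.

```python
# Illustrative helper: per the note above, Mistral support shipped in
# releases after 4.33.4, so a source install is only needed on 4.33.4
# or earlier.
def parse_version(v: str) -> tuple:
    # "4.33.4" -> (4, 33, 4); stops at non-numeric parts like "dev0"
    parts = []
    for piece in v.split("."):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts)

def needs_source_install(installed: str) -> bool:
    return parse_version(installed) <= (4, 33, 4)

# Example:
# import transformers
# needs_source_install(transformers.__version__)
```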