Update README.md
README.md
CHANGED
@@ -7,14 +7,13 @@ tags:
 
 # Model Card for Mistral-7B-Instruct-v0.2
 
-The
+The Model is an instruct fine-tuned version of the Mistral-7B-v0.2.
 
 Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1
 - 32k context window (vs 8k context in v0.1)
 - Rope-theta = 1e6
 - No Sliding-Window Attention
 
-For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
 
 ## Instruction format
 
@@ -51,33 +50,4 @@ model.to(device)
 generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
 decoded = tokenizer.batch_decode(generated_ids)
 print(decoded[0])
-```
-
-## Troubleshooting
-- If you see the following error:
-```
-Traceback (most recent call last):
-  File "<stdin>", line 1, in <module>
-  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
-    config, kwargs = AutoConfig.from_pretrained(
-  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
-    config_class = CONFIG_MAPPING[config_dict["model_type"]]
-  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
-    raise KeyError(key)
-KeyError: 'mistral'
-```
-
-Installing transformers from source should solve the issue
-pip install git+https://github.com/huggingface/transformers
-
-This should not be required after transformers-v4.33.4.
-
-## Limitations
-
-The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
-It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
-make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
-
-## The Mistral AI Team
-
-Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
+```
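The three v0.2 changes listed in the first hunk map directly onto fields of the model's configuration. A minimal sketch for checking them locally, assuming the `transformers` `AutoConfig` API and the hub id `mistralai/Mistral-7B-Instruct-v0.2`; the expected values in the comments restate the card's claims rather than verified output:

```python
from transformers import AutoConfig

# Fetches only config.json, not the model weights.
config = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

print(config.max_position_embeddings)  # expected 32768 -> the 32k context window
print(config.rope_theta)               # expected 1000000.0 -> rope-theta = 1e6
print(config.sliding_window)           # expected None -> no sliding-window attention
```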
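The `## Instruction format` heading survives as context in the first hunk; per the card, the section it introduces surrounds instructions with `[INST]` and `[/INST]` tokens after a leading BOS token. A sketch of rendering that format through the tokenizer's bundled chat template (the message contents are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "A good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# apply_chat_template renders "<s>[INST] ... [/INST] ... </s>[INST] ... [/INST]"
token_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
print(tokenizer.decode(token_ids[0]))
```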
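The second hunk's context lines are the tail of the card's Python generation example (its header also shows `model.to(device)`). A self-contained reconstruction, under the assumption that the unshown earlier lines load the model and tokenizer and build `model_inputs` from a chat prompt:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # assumption: a CUDA GPU is available; use "cpu" otherwise

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [{"role": "user", "content": "Do you have mayonnaise recipes?"}]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
model.to(device)

# The three context lines from the hunk above:
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```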
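The removed Troubleshooting section addressed a `KeyError: 'mistral'` raised when the installed `transformers` predates Mistral support; it recommended installing from source and noted the fix "should not be required after transformers-v4.33.4". A small check along those lines (the version cutoff is taken from that sentence, not re-verified):

```python
import transformers
from packaging import version

# Per the removed Troubleshooting text, KeyError: 'mistral' from AutoConfig
# means the installed transformers does not know model_type "mistral" yet.
if version.parse(transformers.__version__) <= version.parse("4.33.4"):
    print("Too old: run  pip install git+https://github.com/huggingface/transformers")
else:
    print(f"transformers {transformers.__version__} should load Mistral configs")
```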