---
base_model:
- mistralai/Ministral-8B-Instruct-2410
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
inference: false
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---
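
Each quant lives in its own revision (branch) of the quant repository. As a minimal sketch of pulling one quant down with `huggingface_hub` (both the `repo_id` and the `revision` below are placeholders, not confirmed names; check the repository's branch list on the Hub for the quants that actually exist):

```python
from huggingface_hub import snapshot_download

# Download a single quant revision into the local Hub cache.
# NOTE: repo_id and revision are hypothetical placeholders; browse the
# repository's branches on the Hub to see which quants are available.
path = snapshot_download(
    repo_id="<user>/Ministral-8B-Instruct-2410-HF-exl2",
    revision="6.0bpw",
)
print(path)  # local directory holding that quant's files
```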

# Ministral-8B-Instruct-2410-HF

## Model Description

Ministral-8B-Instruct-2410-HF is the Hugging Face Transformers-compatible version of Ministral-8B-Instruct-2410 by Mistral AI: a multilingual, instruction-tuned causal language model based on the Mistral architecture, designed for a broad range of natural language processing tasks with a focus on chat-style interactions.

## Installation

To use this model, install the required packages:

```bash
pip install -U transformers torch
```
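
Note: the exl2 quants referenced at the top of this card are not loaded through `transformers`; they target the ExLlamaV2 runtime instead. Assuming a CUDA-capable environment, a typical install of that runtime is:

```bash
pip install exllamav2
```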
40
+
41
+ ## Usage Example
42
+
43
+ Here's a Python script demonstrating how to use the model for chat completion:
44
+
45
+ ```python
46
+ from transformers import AutoModelForCausalLM, AutoTokenizer
47
+
48
+ # Model setup
49
+ model_name = "prince-canuma/Ministral-8B-Instruct-2410-HF"
50
+ model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
51
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
52
+
53
+ # Chat interaction
54
+ prompt = "Tell me a short story about a robot learning to paint."
55
+ messages = [{"role": "user", "content": prompt}]
56
+ text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
57
+ input_ids = tokenizer(text, return_tensors="pt").to(model.device)
58
+
59
+ # Generate response
60
+ output = model.generate(**input_ids, max_new_tokens=500, temperature=0.7, do_sample=True)
61
+ response = tokenizer.decode(output[0][input_ids.input_ids.shape[1]:])
62
+
63
+ print("User:", prompt)
64
+ print("Model:", response)
65
+ ```
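
Because the model is instruction-tuned for chat in ten languages, multi-turn and non-English conversations use exactly the same template. A minimal sketch reusing `model` and `tokenizer` from the script above (the example messages are purely illustrative):

```python
# Multi-turn, multilingual chat: prior assistant turns are simply
# appended to the message list before re-applying the template.
messages = [
    {"role": "user", "content": "Bonjour ! Peux-tu m'écrire un haïku sur la mer ?"},
    {"role": "assistant", "content": "Vagues sous la lune / le sel parfume la nuit / l'horizon respire"},
    {"role": "user", "content": "Now translate your haiku into English."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```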

## Model Details

- **Developed by:** Mistral AI
- **Model type:** Causal Language Model
- **Language(s):** English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Russian, Korean
- **License:** [MRL](https://mistral.ai/licenses/MRL-0.1.md)
- **Resources for more information:**
  - [Model Repository](https://huggingface.co/prince-canuma/Ministral-8B-Instruct-2410-HF)
  - [Mistral AI GitHub](https://github.com/mistralai)