mcysqrd committed
Commit
955dece
1 parent: 7656c2e

Update README.md

Files changed (1): README.md +33 -1
README.md CHANGED
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference:
  parameters:
    temperature: 0.1
---

A [Mistral-7B-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) model, finetuned using QLoRA on the docs available at https://docs.modular.com/mojo/.
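
For context, here is a minimal sketch of what a QLoRA fine-tune of this kind typically looks like with `transformers`, `peft`, and `bitsandbytes`: the base model is loaded in 4-bit and only low-rank adapters are trained. This is an illustration under assumed hyperparameters, not the exact training script behind this checkpoint, and the data pipeline for the Mojo docs is omitted.

```python
# Illustrative QLoRA setup: 4-bit quantized base model + trainable LoRA adapters.
# Hyperparameters here are common defaults, not the ones used for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "mistralai/Mistral-7B-Instruct-v0.1"

# Quantize the frozen base weights to 4-bit NF4 (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the attention projections; only these weights train
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a small fraction of the 7B parameters

# From here, train with a standard causal-LM trainer on text chunks
# scraped from https://docs.modular.com/mojo/ (data pipeline omitted).
```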

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained using a variety of publicly available conversation datasets.

For full details of the base model, please read the Mistral [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Instruction format

To leverage instruction fine-tuning, the prompt has to be wrapped in `[INST]` and `[/INST]` tokens; `tokenizer.apply_chat_template` applies this formatting automatically:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mcysqrd/MODULARMOJO_Mistral-V1")
tokenizer = AutoTokenizer.from_pretrained("mcysqrd/MODULARMOJO_Mistral-V1")

# apply_chat_template expects a list of {"role", "content"} messages, not a bare string
messages = [
    {
        "role": "user",
        "content": "What can you tell me about MODULAR_MOJO mojo_roadmap Scoping and mutability of statement variables?",
    }
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1650, do_sample=True, temperature=0.01)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
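
For reference, the prompt string the chat template produces can be inspected by rendering it as text instead of token ids. The `[INST] ... [/INST]` wrapping shown in the comment assumes the tokenizer ships Mistral's default chat template:

```python
# Render the chat template to a plain string rather than token ids
prompt_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt_text)
# Expected shape (assuming Mistral's default template):
# <s>[INST] What can you tell me about ... [/INST]
```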