---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference:
  parameters:
    temperature: 0.01
---

A Mistral-7B-Instruct (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
fine-tune, trained with QLoRA on the docs available at https://docs.modular.com/mojo/.
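
The exact training configuration is not included in this repo. As a rough illustration, a typical QLoRA setup with `peft` and `bitsandbytes` looks like the sketch below; the LoRA rank, alpha, dropout, and target modules shown are illustrative assumptions, not the values used for this model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA: quantize the frozen base model to 4-bit NF4
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Train small low-rank adapters on top of the quantized weights
# (rank/alpha/targets below are assumptions for illustration)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```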

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained on a variety of publicly available conversation datasets.

For full details of the base model, please read the Mistral [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Instruction format
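Mistral instruct models expect each prompt to be wrapped in `[INST]` and `[/INST]` tokens; `tokenizer.apply_chat_template` applies this formatting for you: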
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mcysqrd/MODULARMOJO_Mistral-V1")
tokenizer = AutoTokenizer.from_pretrained("mcysqrd/MODULARMOJO_Mistral-V1")

# apply_chat_template expects a list of chat messages, not a bare string
messages = [
    {"role": "user", "content": "What can you tell me about MODULAR_MOJO mojo_roadmap Scoping and mutability of statement variables?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1650, do_sample=True, temperature=0.01)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
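
Note that with `temperature=0.01`, sampling is effectively deterministic; passing `do_sample=False` for greedy decoding would give much the same behavior.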