cosimoiaia committed • Commit c57454b • 1 Parent(s): 5f0081e
Create README.md

README.md ADDED
---
license: apache-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
tags:
- Italian
- Qlora
- finetuning
- Text Generation
pipeline_tag: text-generation
---
Model Card for Loquace-Wizard-13B [(Italian version, translated by Loquace)](https://huggingface.co/cosimoiaia/Loquace-7B-Mistral/blob/main/Readme-ITA.md)

# 🇮🇹 Loquace-Wizard-13B 🇮🇹

Loquace is an Italian-speaking, instruction-finetuned Large Language Model. 🇮🇹

Loquace-Wizard-13B's key features:

- The first 13B model specifically finetuned for Italian.
- It is pretty good at following instructions in Italian.
- It responds well to prompt engineering.
- It works well in a RAG (Retrieval Augmented Generation) setup; see the sketch after the inference example below.
- It has been trained on a relatively raw dataset, [Loquace-102K](https://huggingface.co/datasets/cosimoiaia/Loquace-102k), using QLoRA with Mistral-7B-Instruct as the base model.
- Training took only 8 hours on a single RTX 3090, costing a little more than <b>2 euros</b>, on a [Genesis Cloud](https://gnsiscld.co/26qhlf) GPU!
- It is <b><i>truly open source</i></b>: the model, the dataset and the code to replicate the results are fully released.
- Created in a garage in the south of Italy.

The Loquace Italian LLM models are created with the goal of democratizing AI and LLMs in the Italian landscape.

<b>No more need for expensive GPUs, large funding, big corporations or ivory-tower institutions: just download the code and train on your own dataset, on your own PC (or on a cheap and reliable cloud provider like [Genesis Cloud](https://gnsiscld.co/26qhlf)).</b>

### Fine-tuning Instructions:

The related code can be found at:
https://github.com/cosimoiaia/Loquace

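For orientation, here is a minimal, hypothetical sketch of the QLoRA setup described above (4-bit NF4 quantization of a frozen base model plus small trainable low-rank adapters). The actual training script and hyperparameters are the ones in the repository; the base checkpoint name and the LoRA values below are assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed base checkpoint

# 4-bit NF4 quantization of the frozen base weights: the core of QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Trainable low-rank adapters on the attention projections
# (illustrative values, not the repository's exact configuration).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of weights train
```
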
## Inference:

```python
import torch  # needed for torch.bfloat16 below
from transformers import LlamaForCausalLM, AutoTokenizer


def generate_prompt(instruction):
    # Alpaca-style "### Instruction / ### Response" template used during finetuning.
    prompt = f"""### Instruction: {instruction}

### Response:
"""
    return prompt


model_name = "."  # loads from the current directory; use the Hub repo id to download instead

model = LlamaForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

model.config.use_cache = True

tokenizer = AutoTokenizer.from_pretrained(model_name, add_eos_token=False)

prompt = generate_prompt("Chi era Dante Alighieri?")
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, do_sample=True, num_beams=2, top_k=50,
                         top_p=0.95, max_new_tokens=2046, early_stopping=True)
# Keep only the text generated after the "### Response:" marker.
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Response:")[1].strip())
```
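
Since the model works well in RAG setups, here is a minimal, hypothetical sketch of feeding retrieved context through the same prompt template, reusing `model`, `tokenizer` and `generate_prompt()` from the example above; the context string below is a placeholder for whatever your retriever returns.

```python
# Hypothetical RAG-style usage: prepend retrieved passages to the instruction.
retrieved_context = (
    "Dante Alighieri (1265-1321) è stato un poeta, scrittore e politico "
    "italiano, autore della Divina Commedia."
)  # placeholder; in a real setup this comes from your retriever

instruction = (
    f"Usando il seguente contesto:\n{retrieved_context}\n\n"
    "Rispondi alla domanda: Chi era Dante Alighieri?"
)
prompt = generate_prompt(instruction)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, do_sample=True, top_p=0.95, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Response:")[1].strip())
```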

## Model Author:
Cosimo Iaia <[email protected]>