---
license: mit
language:
- tr
pipeline_tag: text-generation
library_name: peft
base_model: ytu-ce-cosmos/turkish-gpt2-large
tags:
- Turkish
- turkish
- gpt2
- instruction-tuning
- alpaca
---

# turkish-gpt2-large-750m-instruct-v0.1

----------

<div style="text-align:center;">
<img src="./model_cover.png" width="400px"/>
</div>

----------

Derived from ytu-ce-cosmos/turkish-gpt2-large, this model is a Turkish large language model (LLM) fine-tuned on a dataset of 35K instructions.
Because its training data spans diverse sources, including websites, books, and other texts, the model can exhibit biases and generate incorrect answers. Users should be aware of these limitations and use the model responsibly.
## Quickstart

```python
import torch
from transformers import AutoTokenizer, GPT2LMHeadModel, pipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = GPT2LMHeadModel.from_pretrained("ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-gpt2-large-750m-instruct-v0.1")

# max_new_tokens bounds the length of the generated completion.
text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)

def get_model_response(instruction):
    # Wrap the instruction in the chat template the model was trained on.
    instruction_prompt = f"### Kullanıcı:\n{instruction}\n### Asistan:\n"
    result = text_generator(instruction_prompt)
    generated_response = result[0]['generated_text']
    # The pipeline returns prompt + completion; return only the completion.
    return generated_response[len(instruction_prompt):]

print(get_model_response("Evde egzersiz yapmanın avantajlarını açıkla."))
"""
Evde egzersiz yapmak, gelişmiş fiziksel ve zihinsel sağlık için harika bir yoldur. Düzenli egzersizin, artan enerji seviyeleri, gelişmiş kas gücü ve esnekliği, gelişmiş uyku kalitesi ve daha iyi genel esenlik dahil olmak üzere birçok faydası vardır. Evde egzersiz yapmak ayrıca stresi azaltmaya, kas gücünü artırmaya ve genel sağlığı iyileştirmeye yardımcı olabilir.
"""
```
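The prompt template and response-stripping logic above can also be factored into small standalone helpers. A minimal sketch follows; the helper names `build_prompt` and `extract_response` are illustrative, not part of the model's API:

```python
# Sketch of the chat template used in the Quickstart.
# build_prompt / extract_response are hypothetical helper names.
def build_prompt(instruction: str) -> str:
    # The model expects "### Kullanıcı:" / "### Asistan:" role markers.
    return f"### Kullanıcı:\n{instruction}\n### Asistan:\n"

def extract_response(generated_text: str, prompt: str) -> str:
    # text-generation pipelines return prompt + completion;
    # keep only the completion.
    return generated_text[len(prompt):]

prompt = build_prompt("Merhaba")
assert prompt.endswith("### Asistan:\n")
assert extract_response(prompt + "Selam!", prompt) == "Selam!"
```

Keeping the template in one place makes it easy to reuse for batched generation or evaluation scripts.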

----------

### Training Details

- The model was fine-tuned on a 35,000-instruction Turkish dataset to improve its precision and adaptability.
- Fine-tuning was performed with LoRA (Low-Rank Adaptation).
- **LoRA** config:
  * rank = 256
  * lora_alpha = 512
  * lora_dropout = 0.05
  * bias = "none"
  * task_type = "CAUSAL_LM"
- In addition to monitoring loss, ROUGE scores were tracked as an evaluation metric.
- A separate model was used to cleanse the training data.

*Further details will be provided in the forthcoming paper.*
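For intuition, the low-rank update that LoRA applies can be sketched in plain NumPy. This is an illustration of the general technique, not the model's training code; the dimensions below are toy values, and only the scaling rule `lora_alpha / r` mirrors the config above:

```python
import numpy as np

# LoRA replaces a frozen weight W with W + (lora_alpha / r) * B @ A,
# where A and B are small trainable low-rank matrices.
rng = np.random.default_rng(0)
d_out, d_in, r, lora_alpha = 8, 8, 2, 4  # toy dimensions, not the model's

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    scaling = lora_alpha / r
    return W @ x + scaling * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer exactly matches the base layer,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B are updated during fine-tuning, which is why LoRA needs far fewer trainable parameters than full fine-tuning.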

----------

### Model Description
- **Developed by:** cosmos-ytuce
- **Finetuned from model:** `ytu-ce-cosmos/turkish-gpt2-large`

# Acknowledgments
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗

----------
### Citation
Paper coming soon 😊

----------

### Framework versions

- PEFT 0.9.0