# Turkish GPT-2 Model

In this repository I release a GPT-2 model that was trained on a variety of Turkish texts.

The model is meant to be an entry point for fine-tuning on other texts.

# Training corpora

I used a Turkish corpus taken from the OSCAR corpus.

Using Hugging Face's Tokenizers library, I created a 52K byte-level BPE vocabulary based on the training corpus, as sketched below.
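
Here is a minimal sketch of how such a vocabulary can be built with the Tokenizers library; the corpus file path and the exact special tokens are placeholder assumptions, not the precise setup used for this model.

``` python
from tokenizers import ByteLevelBPETokenizer

# Train a byte-level BPE tokenizer on the raw corpus.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["turkish_corpus.txt"],  # placeholder path to the OSCAR text dump
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],  # GPT-2-style end-of-text token (assumed)
)

# Writes vocab.json and merges.txt into the target directory.
tokenizer.save_model("gpt2-turkish-cased")
```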

After creating the vocabulary, I trained the Turkish GPT-2 model on two RTX 2080 Ti GPUs over the complete training corpus for five epochs.
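
For reference, a training run like this could be set up with the Transformers Trainer roughly as follows; the corpus path, block size, and batch size are illustrative assumptions, not the exact hyperparameters used.

``` python
from transformers import (GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast,
                          TextDataset, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

# Load the byte-level BPE vocabulary created above.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-turkish-cased")

# Initialize a fresh GPT-2 model sized to the 52K vocabulary.
model = GPT2LMHeadModel(GPT2Config(vocab_size=tokenizer.vocab_size))

# Chunk the corpus into fixed-length training examples.
dataset = TextDataset(tokenizer=tokenizer,
                      file_path="turkish_corpus.txt",  # placeholder path
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-turkish-cased",
                         num_train_epochs=5,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=dataset).train()
```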

# Using the model

The model itself can be used in this way:

``` python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub.
# AutoModelForCausalLM replaces the deprecated AutoModelWithLMHead.
tokenizer = AutoTokenizer.from_pretrained("redrussianarmy/gpt2-turkish-cased")
model = AutoModelForCausalLM.from_pretrained("redrussianarmy/gpt2-turkish-cased")
```

Here's an example that shows how to use the Transformers text-generation pipeline with this model:

``` python
from transformers import pipeline

# Build a text-generation pipeline backed by the Turkish GPT-2 model.
pipe = pipeline("text-generation",
                model="redrussianarmy/gpt2-turkish-cased",
                tokenizer="redrussianarmy/gpt2-turkish-cased")

# Generate a continuation of a Turkish prompt ("While walking down the
# road in the late afternoon, ..."); max_length is passed directly as a
# generation argument rather than through a config dict.
text = pipe("Akşamüstü yolda ilerlerken, ", max_length=800)[0]["generated_text"]
print(text)
```