With the Tokenizers library, I created a 52K byte-level BPE vocab based on the training corpus.
After creating the vocab, I was able to train the GPT-2 model for Turkish on two 2080 Ti GPUs over the complete training corpus (five epochs).
## Model weights
Both PyTorch and TensorFlow compatible weights are available.
| Model | Downloads |
| --------------------------------- | --------------------------------------------------------------------------------------------------------------- |
| `redrussianarmy/gpt2-turkish-cased` | [`config.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/blob/main/config.json) • [`merges.txt`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/blob/main/merges.txt) • [`pytorch_model.bin`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/blob/main/pytorch_model.bin) • [`special_tokens_map.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/blob/main/special_tokens_map.json) • [`tf_model.h5`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/blob/main/tf_model.h5) • [`tokenizer_config.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/blob/main/tokenizer_config.json) • [`training_args.bin`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/blob/main/training_args.bin) • [`vocab.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/blob/main/vocab.json) |
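Because both checkpoints live in the same repository, either framework can load the model directly. A minimal sketch (the model id is taken from the download links above; this snippet is not part of the original README):

```python
from transformers import GPT2LMHeadModel, TFGPT2LMHeadModel

# PyTorch: resolves and loads pytorch_model.bin from the repository
pt_model = GPT2LMHeadModel.from_pretrained("redrussianarmy/gpt2-turkish-cased")

# TensorFlow: resolves and loads tf_model.h5 from the same repository
tf_model = TFGPT2LMHeadModel.from_pretrained("redrussianarmy/gpt2-turkish-cased")
```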
## Using the model
The model itself can be used in this way:
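The original usage snippet is cut off in this diff, so here is a minimal sketch assuming the standard Transformers text-generation API (the prompt and sampling parameters are illustrative, not from the original README):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("redrussianarmy/gpt2-turkish-cased")
model = AutoModelForCausalLM.from_pretrained("redrussianarmy/gpt2-turkish-cased")

# Encode a Turkish prompt and sample a continuation
inputs = tokenizer("Teknolojinin geleceği", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```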