ctoraman committed on
Commit
8354c33
1 Parent(s): 1bdef69

initial commit

Files changed (4)
  1. README.md +38 -0
  2. config.json +27 -0
  3. pytorch_model.bin +3 -0
  4. tokenizer.json +0 -0
README.md CHANGED
@@ -1,3 +1,41 @@
  ---
+ language:
+ - tr
+ tags:
+ - roberta
  license: cc-by-nc-sa-4.0
+ datasets:
+ - oscar
  ---
+
+ # RoBERTa Turkish medium WordPiece 44k (uncased)
+
+ Pretrained model on the Turkish language using a masked language modeling (MLM) objective. The model is uncased.
+ The pretraining corpus is the Turkish split of OSCAR, further filtered and cleaned.
+
+ The model architecture is similar to bert-medium (8 layers, 8 attention heads, and a hidden size of 512). The tokenization algorithm is WordPiece, and the vocabulary size is 44.5k.
+
+ The details can be found in this paper:
+ https://arxiv.org/...
+
+ The following code can be used for model loading and tokenization; the example max length (514) can be changed:
+ ```python
+ from transformers import AutoModel, PreTrainedTokenizerFast
+
+ model = AutoModel.from_pretrained([model_path])
+ # for sequence classification:
+ # model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
+
+ # load the fast tokenizer from tokenizer.json and register the special tokens
+ tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
+ tokenizer.mask_token = "[MASK]"
+ tokenizer.cls_token = "[CLS]"
+ tokenizer.sep_token = "[SEP]"
+ tokenizer.pad_token = "[PAD]"
+ tokenizer.unk_token = "[UNK]"
+ tokenizer.bos_token = "[CLS]"
+ tokenizer.eos_token = "[SEP]"
+ tokenizer.model_max_length = 514
+ ```
+
+ ### BibTeX entry and citation info
+ ```bibtex
+ @article{}
+ ```
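
The README's snippet stops at loading, so a minimal fill-mask sketch of how the pieces fit together follows. The local paths are hypothetical placeholders for a downloaded copy of this repository, and the Turkish example sentence is illustrative only:

```python
import torch
from transformers import AutoModelForMaskedLM, PreTrainedTokenizerFast

# hypothetical paths to a local checkout of this repository
model = AutoModelForMaskedLM.from_pretrained("./roberta-turkish-medium-wp-44k")
tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="./roberta-turkish-medium-wp-44k/tokenizer.json"
)
tokenizer.mask_token = "[MASK]"
tokenizer.pad_token = "[PAD]"

# score candidates for the masked position in a short Turkish sentence
inputs = tokenizer("Bugün hava çok [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# locate the [MASK] position and print the five most likely tokens
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```

Note that `AutoModelForMaskedLM` is used here rather than the plain `AutoModel` from the README, since the masked-LM head is needed to produce vocabulary logits.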
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "architectures": [
+     "RobertaForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 512,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 516,
+   "model_type": "roberta",
+   "num_attention_heads": 8,
+   "num_hidden_layers": 8,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.10.0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 44500
+ }
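
The same hyperparameters can be mirrored in code if one wants an untrained model of identical shape, e.g. for reproduction experiments. A minimal sketch, assuming only that `transformers` is installed (training from scratch is not something this commit itself covers):

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# mirror the key fields of the config.json above
config = RobertaConfig(
    vocab_size=44500,
    hidden_size=512,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=3072,
    max_position_embeddings=516,
    type_vocab_size=1,
    bos_token_id=0,
    pad_token_id=1,
    eos_token_id=2,
)
model = RobertaForMaskedLM(config)  # randomly initialized weights
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```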
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1d9880e6be99bdf2b44232c244dba8c6f4af59f3faf7584db018d12c18d49e1
+ size 227949074
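
The pointer's `oid` and `size` fields are enough to check the integrity of a downloaded copy of the weights. A small sketch, assuming the resolved `pytorch_model.bin` is in the working directory:

```python
import hashlib

# values copied from the git-lfs pointer above
EXPECTED_SHA256 = "f1d9880e6be99bdf2b44232c244dba8c6f4af59f3faf7584db018d12c18d49e1"
EXPECTED_SIZE = 227949074

digest = hashlib.sha256()
size = 0
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        digest.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: got {size}"
assert digest.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
print("pytorch_model.bin matches the LFS pointer")
```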
tokenizer.json ADDED
The diff for this file is too large to render.