sayakpaul committed
Commit e4da0e2
1 Parent(s): bff2a88

Update README.md (#1)


- Update README.md (aabff72c68cdc18ac5b0494567da7b45f92247e7)

Files changed (1)
  1. README.md +6 -4
README.md CHANGED
@@ -7,7 +7,9 @@ model-index:
 - name: tf-tpu/roberta-base-epochs-500-no-wd
   results: []
 widget:
-- text: Goal of my life is to [MASK].
+- text: Goal of my life is to [MASK].
+datasets:
+- wikitext
 ---
 
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
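The `widget` entry above is the fill-mask prompt the Hub's hosted widget runs. A minimal sketch of reproducing it locally with the `transformers` pipeline API, assuming the checkpoint is public; note that RoBERTa checkpoints use `<mask>` as the mask token, which the widget's `[MASK]` placeholder stands in for:

```python
# Minimal sketch: run the widget's fill-mask prompt against the checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tf-tpu/roberta-base-epochs-500-no-wd")

# RoBERTa's mask token is `<mask>`; the card's widget text writes it as [MASK].
for pred in fill_mask("Goal of my life is to <mask>."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```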
@@ -25,7 +27,7 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+The model was trained on the [WikiText dataset](https://huggingface.co/datasets/wikitext) (v1). Training details can be found [here](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/tpu/language-modeling).
 
 ## Intended uses & limitations
 
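With `datasets: - wikitext` now declared in the metadata, the training data can be pulled with the `datasets` library. A sketch under one loud assumption: the card only says "WikiText (v1)", so the `wikitext-103-raw-v1` configuration below is a guess, not something this diff confirms:

```python
# Minimal sketch: load the WikiText data the card now references.
from datasets import load_dataset

# ASSUMPTION: the card does not name a configuration; wikitext-103-raw-v1
# is one of several ("wikitext-2-v1", "wikitext-103-v1", ...).
wikitext = load_dataset("wikitext", "wikitext-103-raw-v1")

print(wikitext)                       # DatasetDict with train/validation/test
print(wikitext["train"][10]["text"])  # raw text rows; some rows are empty
```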
@@ -33,7 +35,7 @@ More information needed
 
 ## Training and evaluation data
 
-More information needed
+[WikiText (v1)](https://huggingface.co/datasets/wikitext)
 
 ## Training procedure
 
@@ -553,4 +555,4 @@ The following hyperparameters were used during training:
 
 - Transformers 4.27.0.dev0
 - TensorFlow 2.9.1
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2
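The pinned framework versions at the end of the card can be sanity-checked in a target environment; a small sketch (the module names are the real packages, the expected strings come straight from the card):

```python
# Minimal sketch: compare the local environment against the card's versions.
import tensorflow
import tokenizers
import transformers

print("transformers:", transformers.__version__)  # card: 4.27.0.dev0
print("tensorflow: ", tensorflow.__version__)     # card: 2.9.1
print("tokenizers: ", tokenizers.__version__)     # card: 0.13.2
```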