---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: run_opt
  results: []
---

# run_opt

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0165

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.562         | 0.55  | 50   | 6.9697          |
| 6.63          | 1.1   | 100  | 6.3436          |
| 5.938         | 1.65  | 150  | 5.1110          |
| 3.0597        | 2.19  | 200  | 1.4150          |
| 0.7989        | 2.74  | 250  | 0.3477          |
| 0.2227        | 3.29  | 300  | 0.1284          |
| 0.0925        | 3.84  | 350  | 0.0640          |
| 0.0475        | 4.39  | 400  | 0.0412          |
| 0.0314        | 4.94  | 450  | 0.0304          |
| 0.0217        | 5.49  | 500  | 0.0246          |
| 0.0181        | 6.04  | 550  | 0.0215          |
| 0.0146        | 6.58  | 600  | 0.0194          |
| 0.0132        | 7.13  | 650  | 0.0182          |
| 0.012         | 7.68  | 700  | 0.0174          |
| 0.0114        | 8.23  | 750  | 0.0169          |
| 0.011         | 8.78  | 800  | 0.0167          |
| 0.0108        | 9.33  | 850  | 0.0166          |
| 0.0106        | 9.88  | 900  | 0.0165          |

### Framework versions

- Transformers 4.33.1
- Pytorch 1.12.1
- Datasets 2.14.6
- Tokenizers 0.13.3
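
The hyperparameters listed above can be expressed as a 🤗 Transformers `TrainingArguments` configuration. The snippet below is a minimal sketch, not the original training script (which is not included in this card); the evaluation and logging cadence is inferred from the 50-step intervals in the results table, and the `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters mapped onto TrainingArguments.
# 64 per-device train batch size * 8 gradient accumulation steps
# = 512 total train batch size, matching the card.
training_args = TrainingArguments(
    output_dir="run_opt",            # placeholder output directory
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=10,
    evaluation_strategy="steps",     # assumed: results table logs every 50 steps
    eval_steps=50,
    logging_steps=50,
)
```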
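
The card does not state the training objective. Since this is a BERT checkpoint fine-tuned on wikitext, masked language modeling is a plausible default, so the loading sketch below assumes an MLM head and a hypothetical local checkpoint path `./run_opt`; swap the head class and path if the objective or location differs.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# "./run_opt" is a hypothetical local checkpoint path; replace it with the
# actual model directory or Hub repo id.
tokenizer = AutoTokenizer.from_pretrained("./run_opt")
model = AutoModelForMaskedLM.from_pretrained("./run_opt")

# Quick sanity check with the fill-mask pipeline (assumes an MLM head).
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("The capital of France is [MASK]."))
```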