
fine-tuned-distilroberta-nosql-injection

This model is a fine-tuned version of distilroberta-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0000

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 75
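
As a sanity check on the schedule above, the snippet below sketches how a linear scheduler with warmup computes the learning rate at a given optimizer step (this mirrors the behavior of `get_linear_schedule_with_warmup` in transformers; the helper name and the total-step count of 5925, taken from the final row of the results table, are assumptions for illustration):

```python
# Sketch (assumption): linear warmup to the base LR over warmup_steps,
# then linear decay to 0 over the remaining steps.
def lr_at_step(step, base_lr=2e-5, warmup_steps=100, total_steps=5925):
    """Return the learning rate at a given optimizer step."""
    if step < warmup_steps:
        # warmup: LR ramps linearly from 0 to base_lr
        return base_lr * step / warmup_steps
    # decay: LR falls linearly from base_lr to 0
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(lr_at_step(50))    # halfway through warmup -> 1e-05
print(lr_at_step(100))   # end of warmup -> 2e-05
print(lr_at_step(5925))  # final step -> 0.0
```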

Training results

Training Loss Epoch Step Validation Loss
No log 1.0 79 0.3011
1.3847 2.0 158 0.1010
0.2018 3.0 237 0.0340
0.0999 4.0 316 0.0556
0.0999 5.0 395 0.0001
0.0509 6.0 474 0.0817
0.072 7.0 553 0.0001
0.0425 8.0 632 0.0395
0.0559 9.0 711 0.1041
0.0559 10.0 790 0.0014
0.0336 11.0 869 0.0284
0.0326 12.0 948 0.0251
0.0204 13.0 1027 0.0046
0.0136 14.0 1106 0.0293
0.0136 15.0 1185 0.0495
0.0318 16.0 1264 0.0011
0.0278 17.0 1343 0.0003
0.0201 18.0 1422 0.0004
0.0202 19.0 1501 0.0194
0.0202 20.0 1580 0.0202
0.0282 21.0 1659 0.0008
0.0277 22.0 1738 0.0401
0.0121 23.0 1817 0.0405
0.0121 24.0 1896 0.0230
0.0278 25.0 1975 0.0067
0.023 26.0 2054 0.0879
0.0247 27.0 2133 0.0168
0.0348 28.0 2212 0.0373
0.0348 29.0 2291 0.0466
0.0157 30.0 2370 0.0376
0.023 31.0 2449 0.0470
0.0101 32.0 2528 0.0522
0.0123 33.0 2607 0.0000
0.0123 34.0 2686 0.0245
0.0128 35.0 2765 0.0090
0.0089 36.0 2844 0.0000
0.0219 37.0 2923 0.0000
0.0082 38.0 3002 0.0001
0.0082 39.0 3081 0.0504
0.0123 40.0 3160 0.0000
0.0078 41.0 3239 0.0096
0.0217 42.0 3318 0.0019
0.0217 43.0 3397 0.0535
0.0117 44.0 3476 0.0253
0.0218 45.0 3555 0.0330
0.0171 46.0 3634 0.0000
0.0056 47.0 3713 0.0002
0.0056 48.0 3792 0.0025
0.0111 49.0 3871 0.0162
0.0051 50.0 3950 0.0010
0.0138 51.0 4029 0.0000
0.0041 52.0 4108 0.0000
0.0041 53.0 4187 0.0186
0.0103 54.0 4266 0.0001
0.0154 55.0 4345 0.0006
0.01 56.0 4424 0.0064
0.0076 57.0 4503 0.0044
0.0076 58.0 4582 0.0000
0.0155 59.0 4661 0.0000
0.0114 60.0 4740 0.0089
0.0176 61.0 4819 0.0000
0.0176 62.0 4898 0.0414
0.0037 63.0 4977 0.0000
0.0035 64.0 5056 0.0697
0.0086 65.0 5135 0.0206
0.0056 66.0 5214 0.0222
0.0056 67.0 5293 0.0365
0.0077 68.0 5372 0.0000
0.0109 69.0 5451 0.0004
0.0076 70.0 5530 0.0002
0.0022 71.0 5609 0.0001
0.0022 72.0 5688 0.0000
0.0078 73.0 5767 0.0011
0.0128 74.0 5846 0.0005
0.0047 75.0 5925 0.0000
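
Although the dataset itself is not documented, the logged steps let one bound its size: 79 optimizer steps per epoch at a batch size of 8 implies roughly 625 to 632 training examples, since only the last batch per epoch may be partial. A quick back-of-envelope check (this arithmetic is an inference from the table, not a figure stated by the author):

```python
# Infer the approximate training-set size from the logged steps:
# 5925 total steps over 75 epochs, batch size 8.
steps_per_epoch = 5925 // 75               # -> 79, matching the table
batch_size = 8
low = (steps_per_epoch - 1) * batch_size + 1  # last batch holds at least 1 example
high = steps_per_epoch * batch_size           # all 79 batches full
print(steps_per_epoch, low, high)             # 79 625 632
```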

Framework versions

  • Transformers 4.31.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.11.0

Model tree for ankush-003/fine-tuned-distilroberta-nosql-injection

Finetuned from distilroberta-base