ashabrawy committed
Commit: 97a9310
Parent: 2b1d580

NLP702-bert-large-uncased_peft-distillation_hs768-nh32-nl12

Files changed (5)
  1. README.md +6 -6
  2. best/config.json +2 -2
  3. best/model.safetensors +2 -2
  4. config.json +2 -2
  5. model.safetensors +2 -2
README.md CHANGED
@@ -15,8 +15,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4827
-- Accuracy: 0.8386
+- Loss: 0.5089
+- Accuracy: 0.8399
 
 ## Model description
 
@@ -50,10 +50,10 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.8213        | 1.39  | 500  | 0.7941          | 0.7605   |
-| 0.5261        | 2.78  | 1000 | 0.5432          | 0.8249   |
-| 0.2471        | 4.17  | 1500 | 0.4678          | 0.8377   |
-| 0.1381        | 5.56  | 2000 | 0.4495          | 0.8515   |
+| 1.9429        | 1.39  | 500  | 0.9548          | 0.7118   |
+| 0.6755        | 2.78  | 1000 | 0.6468          | 0.8023   |
+| 0.3549        | 4.17  | 1500 | 0.5087          | 0.8411   |
+| 0.1878        | 5.56  | 2000 | 0.4805          | 0.8441   |
 
 
 ### Framework versions
best/config.json CHANGED
@@ -136,8 +136,8 @@
   "layer_norm_eps": 1e-12,
   "max_position_embeddings": 512,
   "model_type": "bert",
-  "num_attention_heads": 16,
-  "num_hidden_layers": 8,
+  "num_attention_heads": 32,
+  "num_hidden_layers": 12,
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
   "torch_dtype": "float32",
best/model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5a3ecea4e085f266ba9db26e9ca222babbfb45dd0585fedf6d9b0be19169d726
-size 324723536
+oid sha256:539a36891c29872ca3da1633309b0258c3384693228439c9c1bcd36bb29dac89
+size 438137056
config.json CHANGED
@@ -136,8 +136,8 @@
   "layer_norm_eps": 1e-12,
   "max_position_embeddings": 512,
   "model_type": "bert",
-  "num_attention_heads": 16,
-  "num_hidden_layers": 8,
+  "num_attention_heads": 32,
+  "num_hidden_layers": 12,
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
   "torch_dtype": "float32",
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5a3ecea4e085f266ba9db26e9ca222babbfb45dd0585fedf6d9b0be19169d726
-size 324723536
+oid sha256:539a36891c29872ca3da1633309b0258c3384693228439c9c1bcd36bb29dac89
+size 438137056
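Reading the diffs together: the checkpoint grows from 324,723,536 to 438,137,056 bytes as the config moves from 16 heads / 8 layers to 32 heads / 12 layers. Since `config.json` declares `"torch_dtype": "float32"` (4 bytes per weight), the file size gives a rough parameter count. A minimal sketch of that arithmetic, assuming the safetensors files hold only float32 tensors and ignoring the small metadata header; the per-head figure assumes hidden size 768 from the `hs768` in the commit message:

```python
# Rough parameter-count estimate from the safetensors sizes in this commit.
# Assumption: every tensor is stored as float32 (4 bytes each, matching
# "torch_dtype": "float32" in config.json); the safetensors header adds a
# small overhead that is ignored here, so the figures are approximate.

BYTES_PER_FP32 = 4

old_size = 324_723_536  # model.safetensors before this commit (16 heads, 8 layers)
new_size = 438_137_056  # model.safetensors after this commit (32 heads, 12 layers)

old_params = old_size // BYTES_PER_FP32  # roughly 81.2M parameters
new_params = new_size // BYTES_PER_FP32  # roughly 109.5M parameters

# With hidden size 768 and the new 32 attention heads, each head
# attends over 768 // 32 = 24 dimensions.
head_dim = 768 // 32

print(old_params, new_params, head_dim)
```

So the hs768-nh32-nl12 student is in the same size class as bert-base (~110M parameters), which is consistent with a distillation setup from the bert-large-uncased teacher named in the commit message.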