lapp0 committed
Commit 400c54b
1 Parent(s): 0c01d12

End of training

README.md ADDED
@@ -0,0 +1,96 @@
---
base_model: roneneldan/TinyStories-33M
library_name: Distily
tags:
- generated_from_trainer
model-index:
- name: distily_bench_obj_cross_v2.4
  results: []
---

# distily_bench_obj_cross_v2.4

This student model is distilled from the teacher model [roneneldan/TinyStories-33M](https://huggingface.co/roneneldan/TinyStories-33M) using the dataset (unspecified).

The [Distily](https://github.com/lapp0/distily) library was used for this distillation.

It achieves the following results on the evaluation set:
- eval_enwikippl: 177.0560
- eval_frwikippl: 49273.4414
- eval_zhwikippl: 358984.9062
- eval_tinystoriesppl: 10.8770
- eval_loss: 1.3021
- eval_runtime: 13.0099
- eval_samples_per_second: 76.864
- eval_steps_per_second: 9.608

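As a quick sanity check, the student can be loaded and sampled like any other `transformers` causal LM. The sketch below assumes the model is published under the repo id `lapp0/distily_bench_obj_cross_v2.4`; the actual repo id is not stated in this card.

```python
# Minimal usage sketch; the repo id below is an assumption, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lapp0/distily_bench_obj_cross_v2.4"  # adjust to the actual repo id if different
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short TinyStories-style continuation from the distilled student.
inputs = tokenizer("Once upon a time there was a little robot", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
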
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
-->

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the logits-distillation objective follows this list):
- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=0, loss_fn=None, layer_mapper=None, projector=None))
- train_embeddings: True
- learning_rate: 0.004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0

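The objective above puts all of its weight on a KL-divergence loss between student and teacher logits; the hidden-state and attention components have weight 0 and are effectively disabled. Below is a minimal sketch of such a logits-only KL objective in plain PyTorch, for illustration only; it is not Distily's actual `DistillationObjective` implementation.

```python
# Illustrative logits-only KL distillation loss (not Distily's internal code).
import torch
import torch.nn.functional as F

def kl_logits_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL(teacher || student), averaged per token position."""
    # Flatten (batch, seq, vocab) -> (batch*seq, vocab) so "batchmean" averages over tokens.
    s = F.log_softmax(student_logits, dim=-1).flatten(0, 1)
    t = F.log_softmax(teacher_logits, dim=-1).flatten(0, 1)
    return F.kl_div(s, t, log_target=True, reduction="batchmean")

# Sketch of one training step with a frozen teacher:
# with torch.no_grad():
#     teacher_logits = teacher(input_ids).logits
# student_logits = student(input_ids).logits
# loss = 1.0 * kl_logits_loss(student_logits, teacher_logits)  # logits weight = 1
# loss.backward()
```
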
### Resource Usage
Peak GPU Memory: 8.0557 GB

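The card does not state how this figure was measured; one common way to read peak GPU memory in PyTorch (shown only as a plausible approach, not necessarily what Distily does) is:

```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run the training loop ...
peak_gb = torch.cuda.max_memory_allocated() / 1024**3  # bytes -> GB (1024**3 convention assumed)
print(f"Peak GPU Memory: {peak_gb:.4f} GB")
```
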
### Eval-Phase Metrics
| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **teacher eval** | | 169.9865 | 47377.9414 | | | | | 3.9789 | 4998.1294 |
| 0 | 0 | 44977.6328 | 67315.0312 | 6.3148 | 12.9905 | 76.98 | 9.622 | 34474.9258 | 69136.0938 |
| 500 | 0.0404 | 212.9930 | 83456.9766 | 1.4082 | 12.9916 | 76.973 | 9.622 | 11.5074 | 683021.9375 |
| 1000 | 0.0808 | 193.5980 | 67809.9375 | 1.3147 | 12.9792 | 77.046 | 9.631 | 11.0045 | 547494.6875 |
| 1500 | 0.1212 | 180.8893 | 53193.7773 | 1.3036 | 12.9608 | 77.156 | 9.644 | 10.8586 | 375341.0625 |
| 2000 | 0.1616 | 178.9728 | 48776.2656 | 1.3026 | 12.951 | 77.214 | 9.652 | 11.0127 | 347488.4688 |
| 2500 | 0.2020 | 175.6217 | 50042.8477 | 1.3024 | 12.9598 | 77.162 | 9.645 | 10.7327 | 357837.4688 |
| 3000 | 0.2424 | 178.4675 | 49537.9062 | 1.3019 | 12.9914 | 76.974 | 9.622 | 10.9836 | 354417.0938 |
| 3500 | 0.2828 | 179.1670 | 50258.3086 | 1.3021 | 12.9813 | 77.034 | 9.629 | 10.9991 | 362933.2812 |
| 4000 | 0.3232 | 179.5004 | 49852.8828 | 1.3021 | 13.1031 | 76.318 | 9.54 | 11.0359 | 352154.6875 |
| 4500 | 0.3636 | 177.3718 | 50290.1914 | 1.3019 | 12.9341 | 77.315 | 9.664 | 10.8523 | 364485.8125 |
| 5000 | 0.4040 | 178.2464 | 49523.9727 | 1.3023 | 12.9459 | 77.245 | 9.656 | 10.9723 | 356503.25 |
| 5500 | 0.4444 | 178.4675 | 49796.7656 | 1.3021 | 12.9348 | 77.311 | 9.664 | 10.9573 | 355174.3438 |
| 6000 | 0.4848 | 177.2550 | 50817.1406 | 1.3019 | 12.9706 | 77.097 | 9.637 | 10.8093 | 363903.0312 |
| 6500 | 0.5253 | 178.4122 | 49384.6523 | 1.3021 | 12.9524 | 77.206 | 9.651 | 10.9446 | 358410.5625 |
| 7000 | 0.5657 | 177.5298 | 49551.8477 | 1.3021 | 12.9742 | 77.076 | 9.635 | 10.8900 | 361001.9062 |
| 7500 | 0.6061 | 176.6176 | 49329.0156 | 1.3020 | 13.0748 | 76.483 | 9.56 | 10.8483 | 358028.2812 |
| 8000 | 0.6465 | 178.5366 | 49740.6641 | 1.3019 | 12.9527 | 77.204 | 9.65 | 10.9094 | 364680.5312 |
| 8500 | 0.6869 | 177.1932 | 49916.1562 | 1.3018 | 12.9514 | 77.212 | 9.651 | 10.8523 | 361676.625 |
| 9000 | 0.7273 | 177.8464 | 49273.4414 | 1.3021 | 12.9672 | 77.117 | 9.64 | 10.8990 | 356123.0 |
| 9500 | 0.7677 | 177.0560 | 49273.4414 | 1.3021 | 13.0099 | 76.864 | 9.608 | 10.8770 | 358984.9062 |
| 10000 | 0.8081 | 177.3580 | 49440.3047 | 1.3019 | 12.9592 | 77.165 | 9.646 | 10.8882 | 363127.1562 |
| 10500 | 0.8485 | 177.8533 | 49496.0664 | 1.3021 | 13.0232 | 76.786 | 9.598 | 10.8797 | 363708.7188 |
| 11000 | 0.8889 | 177.3443 | 49356.8281 | 1.3019 | 12.9737 | 77.079 | 9.635 | 10.8689 | 363320.7812 |
| 11500 | 0.9293 | 177.1795 | 49217.9766 | 1.3019 | 12.9527 | 77.204 | 9.65 | 10.8792 | 361483.875 |
| 12000 | 0.9697 | 177.1109 | 49273.4414 | 1.3021 | 12.9676 | 77.115 | 9.639 | 10.8783 | 361773.2188 |
| 12375 | 1.0 | 177.1932 | 49273.4414 | 1.3020 | 12.9826 | 77.026 | 9.628 | 10.8792 | 361773.2188 |

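The `*ppl` columns are perplexities on the corresponding corpora (English, French, and Chinese Wikipedia, plus TinyStories); lower is better, and the **teacher eval** row gives the teacher's values for comparison. As a rough reference for how such numbers are obtained, perplexity for a causal LM is the exponential of the mean token-level cross-entropy. A sketch follows; the exact datasets and evaluation windows used for this card are not specified here.

```python
# Perplexity sketch: exp of mean token-level cross-entropy of a causal LM on a text sample.
import math
import torch

def perplexity(model, tokenizer, text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean next-token cross-entropy
    return math.exp(out.loss.item())

# Example with a hypothetical snippet of evaluation text:
# print(perplexity(model, tokenizer, "Once upon a time there was a little robot."))
```
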
### Framework versions
- Distily 0.2.0
- Transformers 4.44.0
- Pytorch 2.3.0
- Datasets 2.21.0

generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": 50256,
  "transformers_version": "4.44.0"
}
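These generation defaults were exported from the model config (`_from_model_config: true`): `bos_token_id` and `eos_token_id` are both 50256, the GPT-2-style `<|endoftext|>` token used by the TinyStories tokenizer, and `generate()` picks them up automatically. They can also be inspected directly; a small sketch, again assuming the repo id used above:

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("lapp0/distily_bench_obj_cross_v2.4")  # assumed repo id
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id)  # expected: 50256 50256
```
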
logs/learning_rate=0.004, reinitialize_weights=xavier/events.out.tfevents.1723850606.5f530b1cf724 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:603d3253108ed44372268e2ffb5b39199d954f67478cdff327e218de15244d63
size 307