lombardata committed
Commit 0da5aab
1 Parent(s): 64a05dd

Upload README.md

Files changed (1)
  1. README.md +126 -51
README.md CHANGED
@@ -1,75 +1,150 @@
 
  ---
- license: apache-2.0
- base_model: microsoft/resnet-50
  tags:
  - generated_from_trainer
- metrics:
- - accuracy
  model-index:
  - name: resnet-50-2024_09_13-batch-size32_epochs150_freeze
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # resnet-50-2024_09_13-batch-size32_epochs150_freeze
-
- This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
- It achieves the following results on the evaluation set:
  - Loss: nan
  - F1 Micro: 0.0002
  - F1 Macro: 0.0002
  - Roc Auc: 0.4995
  - Accuracy: 0.0003
- - Learning Rate: 0.0001

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.001
- - train_batch_size: 32
- - eval_batch_size: 32
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 150
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Roc Auc | Accuracy | Rate |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-------:|:--------:|:------:|
- | No log | 1.0 | 273 | nan | 0.0 | 0.0 | 0.4995 | 0.0 | 0.001 |
- | 0.0 | 2.0 | 546 | nan | 0.0003 | 0.0004 | 0.4993 | 0.0007 | 0.001 |
- | 0.0 | 3.0 | 819 | nan | 0.0008 | 0.0010 | 0.4994 | 0.0017 | 0.001 |
- | 0.0 | 4.0 | 1092 | nan | 0.0 | 0.0 | 0.4991 | 0.0 | 0.001 |
- | 0.0 | 5.0 | 1365 | nan | 0.0005 | 0.0006 | 0.4994 | 0.0010 | 0.001 |
- | 0.0 | 6.0 | 1638 | nan | 0.0002 | 0.0002 | 0.4993 | 0.0003 | 0.001 |
- | 0.0 | 7.0 | 1911 | nan | 0.0 | 0.0 | 0.4993 | 0.0 | 0.0001 |
- | 0.0 | 8.0 | 2184 | nan | 0.0002 | 0.0002 | 0.4993 | 0.0003 | 0.0001 |
- | 0.0 | 9.0 | 2457 | nan | 0.0 | 0.0 | 0.4994 | 0.0 | 0.0001 |
- | 0.0 | 10.0 | 2730 | nan | 0.0003 | 0.0004 | 0.4994 | 0.0007 | 0.0001 |
- | 0.0 | 11.0 | 3003 | nan | 0.0 | 0.0 | 0.4994 | 0.0 | 0.0001 |
-
- ### Framework versions
-
- - Transformers 4.41.1
- - Pytorch 2.3.0+cu121
- - Datasets 2.19.1
- - Tokenizers 0.19.1
+
  ---
+ language:
+ - eng
+ license: wtfpl
  tags:
+ - multilabel-image-classification
+ - multilabel
  - generated_from_trainer
+ base_model: microsoft/resnet-50
  model-index:
  - name: resnet-50-2024_09_13-batch-size32_epochs150_freeze
  results: []
  ---

+ DinoVd'eau is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50). It achieves the following results on the test set (metric computation is sketched after the list):

  - Loss: nan
  - F1 Micro: 0.0002
  - F1 Macro: 0.0002
  - Roc Auc: 0.4995
  - Accuracy: 0.0003
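These are standard multilabel metrics. As a hedged illustration of how they can be computed (an assumption about the evaluation code, which this card does not show; note that scikit-learn's `accuracy_score` on multilabel data is exact-match accuracy):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# toy multilabel data: rows are samples, columns are labels
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_prob = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.3]])  # sigmoid outputs
y_pred = (y_prob > 0.5).astype(int)  # assumption: 0.5 decision threshold

print("F1 Micro:", f1_score(y_true, y_pred, average="micro"))
print("F1 Macro:", f1_score(y_true, y_pred, average="macro"))
print("Roc Auc:", roc_auc_score(y_true, y_prob, average="micro"))
print("Accuracy:", accuracy_score(y_true, y_pred))  # exact-match ratio
```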
 

+ ---
+
+ # Model description
+ DinoVd'eau is a model for underwater multilabel image classification, built here on top of the [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) backbone. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers.

+ The source code for training the model can be found in this [Git repository](https://github.com/SeatizenDOI/DinoVdeau).
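A rough sketch of that architecture (an illustration only, not the repository's actual code: the hidden width, dropout rate, and the 31 output labels taken from the class table below are assumptions):

```python
import torch.nn as nn
from transformers import ResNetModel

NUM_LABELS = 31   # assumption: one output per class in the table below
HIDDEN_DIM = 512  # assumption: the card does not record the hidden width

class MultilabelResNet(nn.Module):
    """Frozen ResNet-50 backbone with a linear/ReLU/batch-norm/dropout head."""

    def __init__(self):
        super().__init__()
        self.backbone = ResNetModel.from_pretrained("microsoft/resnet-50")
        for p in self.backbone.parameters():  # the "freeze" variant
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(2048, HIDDEN_DIM),  # 2048 = ResNet-50 pooled feature size
            nn.ReLU(),
            nn.BatchNorm1d(HIDDEN_DIM),
            nn.Dropout(p=0.5),            # assumption: dropout rate undocumented
            nn.Linear(HIDDEN_DIM, NUM_LABELS),
        )

    def forward(self, pixel_values):
        pooled = self.backbone(pixel_values).pooler_output  # (B, 2048, 1, 1)
        return self.head(pooled.flatten(1))  # one logit per label
```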

+ - **Developed by:** [lombardata](https://huggingface.co/lombardata), credits to [César Leblanc](https://huggingface.co/CesarLeblanc) and [Victor Illien](https://huggingface.co/groderg)

+ ---
+
+ # Intended uses & limitations
+ You can use the raw model to classify diverse marine species, including coral morphotype classes from the Global Coral Reef Monitoring Network (GCRMN), habitat classes, and seagrass species.
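A minimal inference sketch (the repository id is inferred from this card, and the 0.5 decision threshold is an assumption):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "lombardata/resnet-50-2024_09_13-batch-size32_epochs150_freeze"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("reef_photo.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# multilabel classification: one sigmoid per class, then threshold
probs = torch.sigmoid(logits)[0]
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(labels)
```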
+
+ ---

+ # Training and evaluation data
+ Details on the number of images for each class are given in the following table:
+ | Class | train | val | test | Total |
+ |:-------------------------|--------:|------:|-------:|--------:|
+ | Acropore_branched | 1469 | 464 | 475 | 2408 |
+ | Acropore_digitised | 568 | 160 | 160 | 888 |
+ | Acropore_sub_massive | 150 | 50 | 43 | 243 |
+ | Acropore_tabular | 999 | 297 | 293 | 1589 |
+ | Algae_assembly | 2546 | 847 | 845 | 4238 |
+ | Algae_drawn_up | 367 | 126 | 127 | 620 |
+ | Algae_limestone | 1652 | 557 | 563 | 2772 |
+ | Algae_sodding | 3148 | 984 | 985 | 5117 |
+ | Atra/Leucospilota | 1084 | 348 | 360 | 1792 |
+ | Bleached_coral | 219 | 71 | 70 | 360 |
+ | Blurred | 191 | 67 | 62 | 320 |
+ | Dead_coral | 1979 | 642 | 643 | 3264 |
+ | Fish | 2018 | 656 | 647 | 3321 |
+ | Homo_sapiens | 161 | 62 | 59 | 282 |
+ | Human_object | 157 | 58 | 55 | 270 |
+ | Living_coral | 406 | 154 | 141 | 701 |
+ | Millepore | 385 | 127 | 125 | 637 |
+ | No_acropore_encrusting | 441 | 130 | 154 | 725 |
+ | No_acropore_foliaceous | 204 | 36 | 46 | 286 |
+ | No_acropore_massive | 1031 | 336 | 338 | 1705 |
+ | No_acropore_solitary | 202 | 53 | 48 | 303 |
+ | No_acropore_sub_massive | 1401 | 433 | 422 | 2256 |
+ | Rock | 4489 | 1495 | 1473 | 7457 |
+ | Rubble | 3092 | 1030 | 1001 | 5123 |
+ | Sand | 5842 | 1939 | 1938 | 9719 |
+ | Sea_cucumber | 1408 | 439 | 447 | 2294 |
+ | Sea_urchins | 327 | 107 | 111 | 545 |
+ | Sponge | 269 | 96 | 105 | 470 |
+ | Syringodium_isoetifolium | 1212 | 392 | 391 | 1995 |
+ | Thalassodendron_ciliatum | 782 | 261 | 260 | 1303 |
+ | Useless | 579 | 193 | 193 | 965 |

+ ---

+ # Training procedure

+ ## Training hyperparameters

  The following hyperparameters were used during training:
+
+ - **Number of Epochs**: 150
+ - **Learning Rate**: 0.001
+ - **Train Batch Size**: 32
+ - **Eval Batch Size**: 32
+ - **Optimizer**: Adam
+ - **LR Scheduler Type**: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1 (see the sketch after this list)
+ - **Freeze Encoder**: Yes
+ - **Data Augmentation**: Yes
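A minimal PyTorch sketch of this optimizer and scheduler configuration (`model.head` and the helper functions are hypothetical names, following the sketch in the model description):

```python
import torch

# with the encoder frozen, only the classification head trains
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)

# divide the learning rate by 10 after 5 epochs without
# improvement in validation loss
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5
)

for epoch in range(150):
    train_one_epoch(model, optimizer)  # hypothetical helper
    val_loss = evaluate(model)         # hypothetical helper
    scheduler.step(val_loss)           # the scheduler watches validation loss
```

This matches the learning-rate column in the results table below, where the rate drops from 0.001 to 0.0001 at epoch 7.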
+
+ ## Data Augmentation
+ Data were augmented using the following transformations (sketched in code after the lists):
+
+ Train Transforms
+ - **PreProcess**: No additional parameters
+ - **Resize**: probability=1.00
+ - **RandomHorizontalFlip**: probability=0.25
+ - **RandomVerticalFlip**: probability=0.25
+ - **ColorJiggle**: probability=0.25
+ - **RandomPerspective**: probability=0.25
+ - **Normalize**: probability=1.00
+
+ Val Transforms
+ - **PreProcess**: No additional parameters
+ - **Resize**: probability=1.00
+ - **Normalize**: probability=1.00
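The transform names (e.g. `ColorJiggle`) match Kornia's augmentation API, so an equivalent pipeline could be sketched as follows (the image size, jitter strengths, and normalization statistics are assumptions; the card only records the probabilities):

```python
import torch
import kornia.augmentation as K

MEAN = torch.tensor([0.485, 0.456, 0.406])  # assumption: ImageNet statistics
STD = torch.tensor([0.229, 0.224, 0.225])

train_transforms = K.AugmentationSequential(
    K.Resize((224, 224)),            # assumption: target size
    K.RandomHorizontalFlip(p=0.25),
    K.RandomVerticalFlip(p=0.25),
    K.ColorJiggle(brightness=0.2, contrast=0.2,
                  saturation=0.2, hue=0.1, p=0.25),  # assumed jitter strengths
    K.RandomPerspective(distortion_scale=0.5, p=0.25),
    K.Normalize(mean=MEAN, std=STD),
)

# validation: resize and normalize only
val_transforms = K.AugmentationSequential(
    K.Resize((224, 224)),
    K.Normalize(mean=MEAN, std=STD),
)
```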
+
+ ## Training results
+
+ | Epoch | Validation Loss | Accuracy | F1 Macro | F1 Micro | Learning Rate |
+ |:-----:|:---------------:|:--------:|:--------:|:--------:|:-------------:|
+ | 1 | nan | 0.0 | 0.0 | 0.0 | 0.001 |
+ | 2 | nan | 0.0007 | 0.0003 | 0.0004 | 0.001 |
+ | 3 | nan | 0.0017 | 0.0008 | 0.0010 | 0.001 |
+ | 4 | nan | 0.0 | 0.0 | 0.0 | 0.001 |
+ | 5 | nan | 0.0010 | 0.0005 | 0.0006 | 0.001 |
+ | 6 | nan | 0.0003 | 0.0002 | 0.0002 | 0.001 |
+ | 7 | nan | 0.0 | 0.0 | 0.0 | 0.0001 |
+ | 8 | nan | 0.0003 | 0.0002 | 0.0002 | 0.0001 |
+ | 9 | nan | 0.0 | 0.0 | 0.0 | 0.0001 |
+ | 10 | nan | 0.0007 | 0.0003 | 0.0004 | 0.0001 |
+ | 11 | nan | 0.0 | 0.0 | 0.0 | 0.0001 |
+
+ ---
+
+ # CO2 Emissions
+
+ The estimated CO2 emissions for training this model are documented below (a minimal measurement sketch follows the list):
+
+ - **Emissions**: 0.1228 grams of CO2
+ - **Source**: Code Carbon
+ - **Training Type**: fine-tuning
+ - **Geographical Location**: Brest, France
+ - **Hardware Used**: NVIDIA Tesla V100 PCIe 32 GB
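"Code Carbon" refers to the [codecarbon](https://github.com/mlco2/codecarbon) Python package; a minimal sketch of how such a figure is typically measured (not the project's actual tracking code):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # writes an emissions.csv in the working directory
tracker.start()
try:
    train_model()  # hypothetical training entry point
finally:
    emissions_kg = tracker.stop()  # returns kg of CO2-equivalent
    print(f"{emissions_kg * 1000:.4f} g CO2-eq")
```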
+
+ ---
+
+ # Framework Versions
+
+ - **Transformers**: 4.41.1
+ - **Pytorch**: 2.3.0+cu121
+ - **Datasets**: 2.19.1
+ - **Tokenizers**: 0.19.1