---
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: emotion-classification
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.875
---

# emotion-classification

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.8208
- Accuracy: 0.875

## Model description

This model is google/vit-base-patch16-224-in21k fine-tuned on the FastJobs/Visual_Emotional_Analysis dataset.
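As a sketch (not the exact training code), the base checkpoint can be loaded for fine-tuning with the `transformers` library. `num_labels=8` is an assumption based on the eight emotion classes of FastJobs/Visual_Emotional_Analysis; note it is also consistent with the initial validation loss of about 2.08 in the training log below (ln 8 ≈ 2.079).

```python
from transformers import ViTForImageClassification, ViTImageProcessor

# Assumption: FastJobs/Visual_Emotional_Analysis has 8 emotion classes.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=8,
)
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
```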

### Preprocessing

- **Resize**: resizes the image to 224x224 pixels using bilinear interpolation, so all images have consistent dimensions when fed into the model.
- **RandomHorizontalFlip**: randomly flips the image horizontally with a 50% probability, helping the model recognize objects across horizontal orientations.
- **RandomVerticalFlip**: randomly flips the image vertically with a 50% probability, helping the model recognize objects across vertical orientations.
- **ColorJitter**: randomly alters the brightness, contrast, saturation, and hue of the image, simulating variations in lighting and color so the model learns from a wider range of conditions.
- **ToTensor**: converts the image into a PyTorch tensor, the input format PyTorch models require.
- **Normalize**: normalizes each pixel by subtracting the mean (0.5) and dividing by the standard deviation (0.5), which helps stabilize training and improve convergence.
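The steps above can be sketched as a `torchvision` pipeline. The exact `ColorJitter` strengths were not recorded in this card, so the values below are illustrative assumptions, not the ones used in training:

```python
from torchvision import transforms

train_transforms = transforms.Compose([
    # Resize to the 224x224 input ViT expects, using bilinear interpolation.
    transforms.Resize((224, 224), interpolation=transforms.InterpolationMode.BILINEAR),
    # 50% chance each of a horizontal / vertical flip.
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    # Illustrative jitter strengths (assumption, not from the training run).
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
    # PIL image -> float tensor with values in [0, 1].
    transforms.ToTensor(),
    # (x - 0.5) / 0.5 maps pixel values from [0, 1] to [-1, 1].
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```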

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
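The `total_train_batch_size` above is derived rather than set directly: the optimizer only steps once every `gradient_accumulation_steps` forward/backward passes, so the effective batch size is the per-step batch size times the accumulation steps. A quick check of the arithmetic:

```python
# Values copied from the hyperparameter list above.
train_batch_size = 32
gradient_accumulation_steps = 4

# Effective batch size = per-step batch size x gradient-accumulation steps.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128
```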

### Training results

| Epoch | Training Loss | Validation Loss | Accuracy |
|:-----:|:-------------:|:---------------:|:--------:|
| 1 | 2.084400 | 2.090225 | 0.137500 |
| 2 | 2.089700 | 2.086973 | 0.118750 |
| 3 | 2.079400 | 2.086899 | 0.100000 |
| 4 | 2.098200 | 2.086151 | 0.125000 |
| 5 | 2.093100 | 2.082829 | 0.137500 |
| 6 | 2.083900 | 2.081236 | 0.137500 |
| 7 | 2.082900 | 2.086800 | 0.081250 |
| 8 | 2.060600 | 2.077514 | 0.162500 |
| 9 | 2.085300 | 2.068546 | 0.143750 |
| 10 | 2.071300 | 2.076601 | 0.131250 |
| 11 | 2.059300 | 2.063258 | 0.175000 |
| 12 | 2.054800 | 2.067919 | 0.125000 |
| 13 | 2.063700 | 2.059906 | 0.150000 |
| 14 | 2.044800 | 2.059610 | 0.206250 |
| 15 | 2.042200 | 2.055763 | 0.181250 |
| 16 | 2.029200 | 2.058503 | 0.193750 |
| 17 | 2.033300 | 2.042262 | 0.206250 |
| 18 | 2.003300 | 2.043147 | 0.206250 |
| 19 | 1.987800 | 2.035327 | 0.218750 |
| 20 | 1.987600 | 2.015316 | 0.206250 |
| 21 | 1.991800 | 2.010191 | 0.231250 |
| 22 | 1.973300 | 1.999294 | 0.250000 |
| 23 | 1.950500 | 1.980282 | 0.331250 |
| 24 | 1.930900 | 1.963615 | 0.281250 |
| 25 | 1.887600 | 1.942629 | 0.325000 |
| 26 | 1.870200 | 1.901906 | 0.381250 |
| 27 | 1.836300 | 1.867780 | 0.387500 |
| 28 | 1.804300 | 1.846487 | 0.393750 |
| 29 | 1.752700 | 1.806786 | 0.431250 |
| 30 | 1.681600 | 1.756060 | 0.437500 |
| 31 | 1.660400 | 1.708304 | 0.475000 |
| 32 | 1.624700 | 1.659365 | 0.493750 |
| 33 | 1.567100 | 1.620911 | 0.481250 |
| 34 | 1.503100 | 1.585212 | 0.506250 |
| 35 | 1.482700 | 1.546996 | 0.518750 |
| 36 | 1.468600 | 1.519542 | 0.562500 |
| 37 | 1.423800 | 1.493005 | 0.575000 |
| 38 | 1.393500 | 1.469010 | 0.531250 |
| 39 | 1.297800 | 1.446551 | 0.550000 |
| 40 | 1.322000 | 1.407961 | 0.556250 |
| 41 | 1.322300 | 1.385930 | 0.562500 |
| 42 | 1.254800 | 1.374024 | 0.562500 |
| 43 | 1.183200 | 1.338247 | 0.531250 |
| 44 | 1.173300 | 1.316369 | 0.575000 |
| 45 | 1.100100 | 1.283046 | 0.593750 |
| 46 | 1.069300 | 1.298898 | 0.575000 |
| 47 | 1.045900 | 1.297686 | 0.587500 |
| 48 | 1.032000 | 1.269446 | 0.600000 |
| 49 | 0.962800 | 1.252569 | 0.606250 |
| 50 | 0.929700 | 1.248749 | 0.587500 |
| 51 | 0.938900 | 1.213704 | 0.618750 |
| 52 | 0.887200 | 1.219889 | 0.581250 |
| 53 | 0.797000 | 1.228908 | 0.575000 |
| 54 | 0.736100 | 1.185892 | 0.631250 |
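A sanity check on the early epochs, assuming the eight emotion classes of FastJobs/Visual_Emotional_Analysis: a model guessing uniformly at random has cross-entropy ln(8) ≈ 2.079 and expected accuracy 1/8 = 0.125, which matches the first rows of the log, so training genuinely starts from chance level:

```python
import math

num_classes = 8  # assumption: 8 emotion labels in the dataset

random_guess_loss = math.log(num_classes)  # cross-entropy of a uniform prediction
random_guess_accuracy = 1 / num_classes    # expected accuracy of random guessing

print(round(random_guess_loss, 3))  # 2.079
print(random_guess_accuracy)        # 0.125
```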

Training was interrupted by a `KeyboardInterrupt` when the disk filled up, then resumed from checkpoint 270 (epoch 54):

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8456 | 1.0 | 5 | 0.8537 | 0.8562 |
| 0.7982 | 2.0 | 10 | 0.8021 | 0.8875 |
| 0.8028 | 3.0 | 15 | 0.8028 | 0.8438 |

Final evaluation result: `{'eval_loss': 0.8208037614822388, 'eval_accuracy': 0.875, 'eval_runtime': 5.3137, 'eval_samples_per_second': 30.111, 'eval_steps_per_second': 0.941, 'epoch': 3.0}`

### Framework versions

- Transformers 4.41.2
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1