Pre-trained models for the CrackenPy package for crack segmentation on building material specimens
The repository contains pre-trained models built with the segmentation-models-pytorch package to segment RGB images of 416x416 pixels.
The resulting classes are "background", "matrix", "crack", and "pore". The models are intended for segmentation of test specimens made from
building materials such as cement, alkali-activated materials, or geopolymers.
Model Description
- Model type: semantic segmentation
- Language(s): Python
- License: BSD v2
- Finetuned from model: resnet101
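As a minimal sketch, such a model can be instantiated with the segmentation-models-pytorch API using a resnet101 encoder and four output classes. The decoder architecture (Unet here) and the checkpoint file name are assumptions for illustration; they are not specified in this card.

```python
import torch
import segmentation_models_pytorch as smp

# Build a 4-class segmentation model with a resnet101 encoder.
model = smp.Unet(
    encoder_name="resnet101",    # encoder the models were finetuned from
    encoder_weights="imagenet",  # ImageNet initialisation before finetuning
    in_channels=3,               # RGB input
    classes=4,                   # background, matrix, crack, pore
)

# Hypothetical checkpoint file name; replace with the weights file from this repository.
state_dict = torch.load("crackenpy_resnet101.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```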
Uses
The models are intended to segment cracks on test specimens, or on images fully filled with a binder matrix containing cracks. The background should be darker than the specimen itself. The segmentation is aimed at fine cracks with widths from approximately 20 µm up to 10 mm.
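A minimal inference sketch is shown below, assuming a loaded 4-class model as in the sketch above. The image file name, the class index used for "crack", and the plain 0-1 input scaling are illustrative assumptions only; any normalisation used during training should be reproduced at inference time.

```python
import numpy as np
import torch
from PIL import Image

# Load an RGB image of a specimen and resize it to the model input size.
image = Image.open("specimen.png").convert("RGB").resize((416, 416))
x = torch.from_numpy(np.asarray(image)).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    logits = model(x.unsqueeze(0))          # shape: (1, 4, 416, 416)
    mask = logits.argmax(dim=1).squeeze(0)  # per-pixel class indices

# Assumed class index for "crack"; check the class order shipped with the weights.
crack_pixels = (mask == 2)
print(f"Crack area fraction: {crack_pixels.float().mean():.4f}")
```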
Bias, Risks, and Limitations
The background and matrix classes may sometimes be confused with each other if the texture of the specimen is too dark or smudged; it is therefore important to perform segmentation on specimens that are as clean as possible. The models in the current version have not been trained on exterior images and may produce poor segmentation in such conditions. Pores are usually circular in shape, but a crack may occasionally be found on the edge of a pore. It is therefore recommended to avoid using the models on highly porous materials.
Training Details
The models are based on the https://github.com/qubvel-org/segmentation_models.pytorch library and were retrained on the crackenpy_dataset dataset.
Training Data
The training dataset can be downloaded from Brno University of Technology upon filling in a request form. The dataset is free for use in research and education under the BSD v2 license. The dataset was created under research project No. 22-02098S of the Grant Agency of the Czech Republic, titled "Experimental analysis of the shrinkage, creep and cracking mechanism of the materials based on the alkali-activated slag".
Training Procedure
The training was done using the PyTorch library, with CrossEntropyLoss() as the loss function and AdamW as the optimizer.
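A minimal sketch of this training setup is shown below. Only the loss and optimizer choices come from this card; the learning rate and the `train_loader` data loader are illustrative assumptions.

```python
import torch
from torch import nn

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # learning rate assumed

model.train()
for images, masks in train_loader:      # masks: (N, 416, 416) with class indices 0-3
    optimizer.zero_grad()
    logits = model(images)              # (N, 4, 416, 416)
    loss = criterion(logits, masks.long())
    loss.backward()
    optimizer.step()
```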
Results & Metrics
The dataset contains 1207 images at a resolution of 416x416 pixels, together with 1207 corresponding masks. The overall training accuracy across all classes reaches 98%, and the mean intersection over union (mIoU) reaches 73%.
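The exact evaluation code is not part of this card; the following sketch illustrates how overall pixel accuracy and mean intersection over union across the four classes can be computed from integer class masks.

```python
import torch

def pixel_accuracy(pred: torch.Tensor, target: torch.Tensor) -> float:
    # Fraction of pixels whose predicted class matches the ground truth.
    return (pred == target).float().mean().item()

def mean_iou(pred: torch.Tensor, target: torch.Tensor, num_classes: int = 4) -> float:
    # Average the per-class intersection over union, skipping absent classes.
    ious = []
    for c in range(num_classes):
        intersection = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:
            ious.append(intersection / union)
    return sum(ious) / len(ious)
```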
Hardware
The training was done on an NVIDIA Quadro P4000 GPU with CUDA support.
Software
The models were trained using PyTorch in Python; the segmentation masks and dataset preparation were created using the LabKit plugin in the FIJI software.
Authors of dataset
Richard Dvorak, Brno University of Technology, Faculty of Civil Engineering, Institute of building testing
Rostislav Krc, Brno University of Technology, Faculty of Civil Engineering, Institute of building testing
Vlastimil Bilek, Brno University of Technology, Faculty of Chemistry, Institute of material chemistry
Barbara Kucharczyková, Brno University of Technology, Faculty of Civil Engineering, Institute of building testing
Model Card Contact
The contact person is Richard Dvorak, Ph.D., [email protected], tel.: +420 777 678 613, employee of Brno University of Technology, Faculty of Civil Engineering, Institute of Building Testing.