First full model card by @Jrglmn
README.md
---
tags:
- keras
- image-to-image
- pixelwise-segmentation
datasets:
- DIBCO
- H-DIBCO
license: apache-2.0
---

# Model Card for sbb_binarization

<!-- Provide a quick summary of what the model is/does. [Optional] -->
This is a pixelwise segmentation model for document image binarization.
The model is a hybrid CNN-Transformer encoder-decoder model (ResNet50-Unet) developed by the Berlin State Library (SBB) in the [QURATOR](https://staatsbibliothek-berlin.de/die-staatsbibliothek/projekte/project-id-1060-2018) project. It can be used to convert all pixels in a color or grayscale document image to only black or white pixels.
The main aim is to improve the contrast between foreground (text) and background (paper) for purposes of Optical Character Recognition (OCR).

# Table of Contents

- [Model Card for sbb_binarization](#model-card-for-sbb_binarization)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
  - [Model Description](#model-description)
- [Uses](#uses)
  - [Direct Use](#direct-use)
  - [Downstream Use](#downstream-use)
  - [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
  - [Recommendations](#recommendations)
- [Training Details](#training-details)
  - [Training Data](#training-data)
  - [Training Procedure](#training-procedure)
    - [Preprocessing](#preprocessing)
    - [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
  - [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
    - [Testing Data](#testing-data)
    - [Factors](#factors)
    - [Metrics](#metrics)
  - [Results](#results)
- [Model Examination](#model-examination)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
  - [Model Architecture and Objective](#model-architecture-and-objective)
  - [Compute Infrastructure](#compute-infrastructure)
    - [Hardware](#hardware)
    - [Software](#software)
- [Citation](#citation)
- [Glossary [optional]](#glossary-optional)
- [More Information [optional]](#more-information-optional)
- [Model Card Authors](#model-card-authors)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

<!-- Provide a longer summary of what this model is/does. -->
Document image binarization is one of the main pre-processing steps for text recognition in document image analysis.
Noise, faint characters, bad scanning conditions, uneven light exposure or paper aging can cause artifacts that negatively impact text recognition algorithms.
The task of binarization is to segment the foreground (text) from these degradations in order to improve Optical Character Recognition (OCR) results.
Convolutional neural networks (CNNs) are one popular method for binarization, while Vision Transformers are becoming increasingly competitive.
The sbb_binarization model therefore applies a hybrid CNN-Transformer encoder-decoder architecture.

- **Developed by:** Vahid Rezanezhad ([email protected])
- **Shared by [Optional]:** [Staatsbibliothek zu Berlin / Berlin State Library](https://huggingface.co/SBB)
- **Model type:** Neural Network
- **Language(s) (NLP):** Irrelevant; works on all languages
- **License:** apache-2.0
- **Parent Model:** [ResNet-50, see the paper by He et al.](https://arxiv.org/abs/1512.03385)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/qurator-spk/sbb_binarization)
  - Associated Paper 1: [Time-Quality Binarization Competition](https://dib.cin.ufpe.br/docs/DocEng21_bin_competition_report.pdf)
  - Associated Paper 2: [Time-Quality Document Image Binarization](https://dib.cin.ufpe.br/docs/papers/ICDAR2021-TQDIB_final_published.pdf)
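For intuition about what binarization means, the classical baseline below uses a single global Otsu threshold. This is *not* how sbb_binarization works (the model performs a learned pixelwise segmentation); it is only a minimal sketch of the task itself, and shows why one global decision struggles with uneven lighting or paper aging:

```python
import numpy as np

def otsu_binarize(gray: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image with a single global Otsu threshold.

    Classical baseline only: one threshold for the whole page, chosen
    to maximize between-class variance of the gray-level histogram.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                    # pixel count below threshold
    cum_m = np.cumsum(hist * np.arange(256))   # gray-level mass below threshold
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_w[t - 1], total - cum_w[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_m[t - 1] / w0
        mu1 = (cum_m[-1] - cum_m[t - 1]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    # foreground (text) -> black (0), background (paper) -> white (255)
    return np.where(gray < best_t, 0, 255).astype(np.uint8)
```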

# Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

Document image binarization is the main use case of this model. The architecture of this model, together with training techniques like model weight ensembling, can reach or outperform state-of-the-art results on standard Document Binarization Competition (DIBCO) datasets for both machine-printed and handwritten documents.

## Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The intended use is the binarization of document images, particularly of historical documents, understood as one of the main pre-processing steps for text recognition.

## Downstream Use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

A possible downstream use of this model might lie with the binarization of illustrative elements contained in document images such as digitized newspapers, magazines or books. In such cases, binarization might support analysis of creator attribution, artistic style (e.g., in line drawings), or analysis of image similarity. Furthermore, the model can also be used or fine-tuned for other image enhancement use cases.

## Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

This model does **NOT** perform any Optical Character Recognition (OCR); it is an image-to-image model only.

# Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The aim of the development of this model was to improve document image binarization as a necessary pre-processing step. Since the content of the document images is not touched, ethical challenges cannot be identified. The endeavor of developing the model was not undertaken for profit; though a product based on this model might be developed in the future, it will always remain openly accessible without any commercial interest.
This algorithm performs a pixelwise segmentation which is done in patches. One technical limitation of this model is therefore that it cannot capture dependencies with a longer range than the patch size.
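The patch limitation can be illustrated with a small sketch of independent patch-wise inference. Here `predict_patch` is a hypothetical stand-in for the trained network (not the actual sbb_binarization API): because every tile is processed in isolation, no information flows between tiles, which is exactly why long-range dependencies are lost.

```python
import numpy as np

def predict_in_patches(image: np.ndarray, predict_patch, patch: int = 256) -> np.ndarray:
    """Apply a pixelwise model tile by tile and stitch the outputs.

    `predict_patch` maps a 2-D tile to a same-shaped prediction; each
    tile is predicted independently of all others.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[y:y + patch, x:x + patch]
            # edge tiles may be smaller than `patch`; write back in place
            out[y:y + tile.shape[0], x:x + tile.shape[1]] = predict_patch(tile)
    return out
```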

## Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

The application of machine learning models to convert a document image into a binary output is a process which can still be improved. We have used many pseudo-labeled images to train our model, so any improvement or extension of the ground truth would probably lead to better results.

# Training Details

## Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The dataset used for training is a combination of training sets from previous [DIBCO](https://dib.cin.ufpe.br/#!/datasets) binarization competitions, together with the [Palm Leaf dataset](https://ieeexplore.ieee.org/abstract/document/7814130) and the Persian Heritage Image Binarization Competition [PHIBC](https://arxiv.org/abs/1306.6263) dataset, with additional pseudo-labeled images from the Berlin State Library (SBB; datasets to be published). Furthermore, a dataset of very dark and very bright images has been produced for training.

## Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

We used a batch size of 8 with a learning rate of 1e-4 for 20 epochs. Soft Dice was applied as the loss function. During training we took advantage of dataset augmentation, which includes flipping, scaling and blurring. The best model weights were chosen based on some problematic documents from the SBB dataset. The final model results from an ensemble of the best weights.
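The soft Dice loss mentioned above is a standard formulation; the actual training code may differ in details (smoothing constant, per-channel averaging), but a minimal NumPy sketch looks like this:

```python
import numpy as np

def soft_dice_loss(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss between a binary ground-truth mask and predicted
    foreground probabilities of the same shape.

    Returns 0.0 for a perfect prediction and approaches 1.0 when the
    prediction and the mask do not overlap; `eps` guards empty masks.
    """
    intersection = float(np.sum(y_true * y_pred))
    denom = float(np.sum(y_true) + np.sum(y_pred))
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

Being differentiable in the predicted probabilities, this loss directly optimizes region overlap, which suits the heavily imbalanced foreground/background ratio of document images better than plain pixel accuracy.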

### Preprocessing

In order to use this model for binarization, no preprocessing of the input image is needed.

### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

More information needed.

### Training hyperparameters

In the training process, the hyperparameters were patch size, learning rate, number of epochs and the depth of the encoder part.

### Training results

See the two papers listed below in the evaluation section.

# Evaluation

In the DocEng’2021 [Time-Quality Binarization Competition](https://dib.cin.ufpe.br/docs/DocEng21_bin_competition_report.pdf), the model ranked among the top 8 of 63 methods twelve times, winning 2 tasks.

In the ICDAR 2021 Competition on [Time-Quality Document Image Binarization](https://dib.cin.ufpe.br/docs/papers/ICDAR2021-TQDIB_final_published.pdf), the model ranked among the top 20 of 61 methods twice, winning 1 task.

<!-- This section describes the evaluation protocols and provides the results. -->

## Testing Data, Factors & Metrics

### Testing Data

<!-- This should link to a Data Card if possible. -->

The testing data are the datasets used in the [Time-Quality Binarization Competition](https://dib.cin.ufpe.br/docs/DocEng21_bin_competition_report.pdf) and listed in the paper on [Time-Quality Document Image Binarization](https://dib.cin.ufpe.br/docs/papers/ICDAR2021-TQDIB_final_published.pdf).

### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

More information needed.

### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

The model has been evaluated both on OCR and on pixelwise segmentation results. The metrics used for visual (pixelwise) evaluation are pixel proportion error and Cohen's Kappa, and Levenshtein distance error in the case of OCR.
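For readers unfamiliar with these metrics, the sketch below gives illustrative reference implementations of two of them (Cohen's Kappa on binary masks and Levenshtein distance on OCR text); the exact competition protocols are defined in the papers cited above, not by this code.

```python
import numpy as np

def cohens_kappa(a: np.ndarray, b: np.ndarray) -> float:
    """Chance-corrected agreement between two binary masks."""
    a, b = a.ravel(), b.ravel()
    po = float(np.mean(a == b))                    # observed agreement
    pa, pb = float(np.mean(a)), float(np.mean(b))  # foreground rates
    pe = pa * pb + (1.0 - pa) * (1.0 - pb)         # agreement expected by chance
    return 1.0 if pe == 1.0 else (po - pe) / (1.0 - pe)

def levenshtein(s: str, t: str) -> int:
    """Edit distance between OCR output and ground-truth text."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]
```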

## Results

See the two papers listed above in the evaluation section.

# Model Examination

More information needed.

# Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Nvidia 2080
- **Hours used:** Two days
- **Cloud Provider:** No cloud
- **Compute Region:** Germany
- **Carbon Emitted:** More information needed

# Technical Specifications

## Model Architecture and Objective

The model is a hybrid CNN-Transformer encoder-decoder model. The encoder consists of a ResNet-50 convolutional network and is responsible for extracting as many features as possible from the input image. After the input image has passed through this CNN part, the output goes through upsampling convolutional layers in the decoder until the size of the input image is reached again.

## Compute Infrastructure

Training has been performed on a single Nvidia 2080 GPU.

### Hardware

See above.

### Software

See the code published on [GitHub](https://github.com/qurator-spk/sbb_binarization).

# Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

Coming soon.

**BibTeX:**

More information needed.

**APA:**

More information needed.

# Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

More information needed.

# More Information [optional]

More information needed.

# Model Card Authors

<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->

Vahid Rezanezhad ([email protected]), [Clemens Neudecker](https://huggingface.co/cneud), Konstantin Baierer ([email protected]) and Jörg Lehmann ([email protected])

# Model Card Contact

Questions and comments about the model can be directed to Clemens Neudecker at [email protected]; questions and comments about the model card can be directed to Jörg Lehmann at [email protected].

# How to Get Started with the Model

Use the code below to get started with the model.

```shell
sbb_binarize \
  -m <from_pretrained_keras("sbb_binarization")> \
  <input image> \
  <output image>
```

<details>
How to get started with this model is explained in the ReadMe file of the GitHub repository [over here](https://github.com/qurator-spk/sbb_binarization).
</details>