CDM: A Reliable Metric for Fair and Accurate Formula Recognition Evaluation
Abstract
Formula recognition presents significant challenges due to the complicated structure and varied notation of mathematical expressions. Despite continuous advancements in formula recognition models, the evaluation metrics they employ, such as BLEU and Edit Distance, still exhibit notable limitations. These metrics overlook the fact that the same formula has diverse representations, and they are highly sensitive to the distribution of the training data, resulting in unfair formula recognition evaluation. To this end, we propose the Character Detection Matching (CDM) metric, which ensures evaluation objectivity by computing an image-level rather than LaTeX-level score. Specifically, CDM renders both the model-predicted LaTeX and the ground-truth LaTeX into formula images, then employs visual feature extraction and localization techniques for precise character-level matching that incorporates spatial position information. Such a spatially aware, character-matching method offers a more accurate and equitable evaluation than the previous BLEU and Edit Distance metrics, which rely solely on text-based character matching. Experimentally, we evaluated various formula recognition models using the CDM, BLEU, and ExpRate metrics. The results demonstrate that CDM aligns more closely with human evaluation standards and provides a fairer comparison across different models by eliminating discrepancies caused by diverse formula representations.
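For intuition, here is a minimal, hedged sketch of the image-level idea: render each LaTeX string to an image, detect character-like blobs, and match them by spatial position. This is a toy approximation, not the authors' implementation; matplotlib's mathtext stands in for a full LaTeX renderer, connected-component analysis for the paper's character detector, and Hungarian matching on normalized blob centers for its feature-plus-position matching.

```python
# Toy sketch of a CDM-style image-level score (NOT the official implementation).
# Assumptions: matplotlib mathtext as the renderer, connected components as the
# character detector, Hungarian matching on normalized centers as the matcher.
import io

import matplotlib.pyplot as plt
import numpy as np
from scipy import ndimage
from scipy.optimize import linear_sum_assignment


def render_formula(latex: str, dpi: int = 200) -> np.ndarray:
    """Render a formula to a binary ink mask (True = ink pixel)."""
    fig = plt.figure()
    fig.text(0.5, 0.5, f"${latex}$", ha="center", va="center", fontsize=20)
    buf = io.BytesIO()
    fig.savefig(buf, format="png", dpi=dpi, bbox_inches="tight")
    plt.close(fig)
    buf.seek(0)
    rgba = plt.imread(buf)                      # floats in [0, 1]
    return rgba[..., :3].mean(axis=-1) < 0.5    # dark pixels count as ink


def blob_centers(mask: np.ndarray) -> np.ndarray:
    """Detect character-like blobs; return centers normalized to [0, 1]."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros((0, 2))
    centers = np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))
    return centers / np.array(mask.shape)       # (row, col) -> normalized


def cdm_like_score(pred_latex: str, gt_latex: str, tol: float = 0.05) -> float:
    """F1-style score over spatially matched blobs. The real CDM also
    compares visual features of each character, not just positions."""
    p = blob_centers(render_formula(pred_latex))
    g = blob_centers(render_formula(gt_latex))
    if len(p) == 0 or len(g) == 0:
        return 0.0
    cost = np.linalg.norm(p[:, None, :] - g[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)    # one-to-one assignment
    matched = int((cost[rows, cols] < tol).sum())
    return 2 * matched / (len(p) + len(g))


# Two renderings of the same formula score high; a wrong formula scores low.
print(cdm_like_score(r"x^{2}+y^{2}", r"x^2+y^2"))   # ~1.0
print(cdm_like_score(r"x^{2}-y^{2}", r"a+b"))       # low
```

Matching rendered images rather than LaTeX strings is what lets semantically equivalent sources (e.g., `x^{2}` vs. `x^2`) receive identical scores, which text-based BLEU and Edit Distance cannot guarantee.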
Community
BLEU and Edit Distance are commonly used metrics in fields such as machine translation and text recognition. While these metrics have also been applied to formula recognition, they fall short due to the non-unique representation of LaTeX formulas, leading to inaccurate assessments and unfair comparisons. The proposed Character Detection Matching (CDM) metric addresses these limitations by employing an image-level character detection and matching approach. This method ensures accurate and fair evaluation of formula recognition capabilities, which is crucial for advancing the field.
Project page here: https://github.com/opendatalab/UniMERNet/tree/main/cdm
Hi @wanderkid, congrats on this work!
I see you created a demo Space, I opened a PR to link it to the paper: https://huggingface.co/spaces/opendatalab/CDM-Demo/discussions/1
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- A Novel Evaluation Framework for Image2Text Generation (2024)
- PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer (2024)
- HICEScore: A Hierarchical Metric for Image Captioning Evaluation (2024)
- TAMER: Tree-Aware Transformer for Handwritten Mathematical Expression Recognition (2024)
- Prompt Recovery for Image Generation Models: A Comparative Study of Discrete Optimizers (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend