---
license: cc-by-nc-sa-4.0
language: ja
tags:
- advertisement
task_categories:
- text2text-generation
- image-to-text
size_categories:
- 10K<n<100K
---

### Dataset Structure

| Name | Description |
| ---- | ---- |
| asset_id | ID (associated with the LP image) |
| kw | search keyword |
| lp_meta_description | meta description extracted from the LP (i.e., LP text) |
| title_org | ad text (original gold reference) |
| title_ne{1-3} | ad text (additional gold references for multi-reference evaluation) |
| domain | industry domain (HR, EC, Fin, Edu) for industry-wise evaluation |
| parsed_full_text_annotation | OCR result for the LP image |
| lp_image | LP image |

## Citation

```
@inproceedings{mita-etal-2024-striking,
    title = "Striking Gold in Advertising: Standardization and Exploration of Ad Text Generation",
    author = "Mita, Masato and Murakami, Soichiro and Kato, Akihiko and Zhang, Peinan",
    editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand and virtual meeting",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.54",
    pages = "955--972",
    abstract = "In response to the limitations of manual ad creation, significant research has been conducted in the field of automatic ad text generation (ATG). However, the lack of comprehensive benchmarks and well-defined problem sets has made comparing different methods challenging. To tackle these challenges, we standardize the task of ATG and propose a first benchmark dataset, CAMERA, carefully designed and enabling the utilization of multi-modal information and facilitating industry-wise evaluations. Our extensive experiments with a variety of nine baselines, from classical methods to state-of-the-art models including large language models (LLMs), show the current state and the remaining challenges.
We also explore how existing metrics in ATG and an LLM-based evaluator align with human evaluations.",
}
```
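To illustrate how the multi-reference fields above are intended to be used, the following is a minimal sketch of multi-reference evaluation: a generated ad text is scored against `title_org` and `title_ne1`–`title_ne3`, taking the best match over all gold references. The record shown is a made-up placeholder (not an actual CAMERA entry), and the character-bigram overlap is a stand-in similarity, not one of the metrics studied in the paper.

```python
def char_ngrams(text, n=2):
    """Set of character n-grams of `text`."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}


def overlap_score(hypothesis, reference, n=2):
    """Jaccard overlap of character n-grams (illustrative similarity only)."""
    hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
    if not hyp or not ref:
        return 0.0
    return len(hyp & ref) / len(hyp | ref)


def multi_reference_score(hypothesis, references, n=2):
    # Multi-reference evaluation: score against every gold reference
    # and keep the best match.
    return max(overlap_score(hypothesis, r, n) for r in references)


# Placeholder record following the schema in the table above.
record = {
    "title_org": "express delivery service",
    "title_ne1": "fast shipping nationwide",
    "title_ne2": "next-day delivery available",
    "title_ne3": "speedy parcel delivery",
}
references = [record["title_org"], record["title_ne1"],
              record["title_ne2"], record["title_ne3"]]

print(round(multi_reference_score("fast nationwide shipping", references), 3))
```

In practice the actual dataset rows (including `lp_image` and `parsed_full_text_annotation`) would come from the released data rather than an inline dictionary; the scoring structure stays the same.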