Dataset Card for Arc2Face
This is the dataset used in Arc2Face: A Foundation Model of Human Faces.
Dataset Summary
This dataset consists of approximately 21M facial images from 1M identities at a resolution of 448×448. It was produced by upsampling 50% of the images from the WebFace42M database (originally at 112×112 resolution) using a state-of-the-art blind face restoration network. This dataset was used to train the identity-conditioned generative face model presented in Arc2Face.
Tasks
The Arc2Face model is based on Stable Diffusion v1.5 and is designed for generating images at 512×512 pixels. To accommodate the requirements of large diffusion models, Arc2Face introduces a refined version of the WebFace42M dataset. Although the original database is intended for Face Recognition (FR) training, the restored dataset provided here is designed for training generative models. Its large number of IDs and considerable intra-class variability make it particularly well-suited for ID-conditioned generation.
Please note that the original WebFace42M dataset contains images captured under extreme conditions, intended to promote FR robustness. Despite post-restoration filtering, the restored dataset may still include some low-quality 448×448 images. Moreover, all images are limited to tightly cropped facial areas. Therefore, it is recommended to combine this dataset with other high-quality datasets (e.g., FFHQ) when training face models, as described in the paper; a minimal sketch of such a combination is shown below.
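For illustration only, the following is a minimal sketch of mixing the restored images with FFHQ for training, using PyTorch. The local paths ./Arc2Face_data and ./FFHQ, the 512×512 preprocessing, and the simple concatenation strategy are assumptions for this example, not part of the release.

```python
# Hypothetical sketch: mix the restored WebFace42M images with FFHQ.
# Paths and preprocessing are illustrative assumptions, not part of this dataset.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

# Resize both sources to the 512x512 resolution used by Arc2Face (SD v1.5).
transform = transforms.Compose([
    transforms.Resize(512),
    transforms.CenterCrop(512),
    transforms.ToTensor(),
])

arc2face_data = ImageFolder("./Arc2Face_data", transform=transform)  # assumes one subfolder of images per ID
ffhq_data = ImageFolder("./FFHQ", transform=transform)               # assumed local FFHQ copy

combined = ConcatDataset([arc2face_data, ffhq_data])
loader = DataLoader(combined, batch_size=8, shuffle=True, num_workers=4)
```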
Dataset Structure
The dataset consists of 35 zip files split into 5 groups (7 zip files per group). Each zip file is approximately 30GB in size.
You can download the zip files from this repository manually or using Python (e.g., for the first zip):
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="0/0_0.zip", local_dir="./Arc2Face_data", repo_type="dataset")
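If the remaining archives follow the same naming pattern as the example above (group folders 0–4, each containing files named {group}_{index}.zip with index 0–6), a loop such as the following could fetch all 35 archives. The pattern is an assumption inferred from the "0/0_0.zip" example, so verify the actual filenames in the repository first.

```python
from huggingface_hub import hf_hub_download

# Assumed naming pattern (5 groups x 7 zips), inferred from the "0/0_0.zip" example.
for group in range(5):
    for index in range(7):
        hf_hub_download(
            repo_id="FoivosPar/Arc2Face",
            filename=f"{group}/{group}_{index}.zip",
            local_dir="./Arc2Face_data",
            repo_type="dataset",
        )
```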
After unzipping, the dataset structure will be:
Arc2Face_data
└── IDs
    └── images
Please note that due to the large dataset size, downloading and unzipping may take many hours to complete.
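For reference, here is a minimal sketch of extracting the downloaded archives and iterating over the per-identity folders. It assumes the archives were saved under ./Arc2Face_data and that extraction yields one subfolder of images per identity, as in the layout above; all paths are illustrative.

```python
import zipfile
from pathlib import Path

data_dir = Path("./Arc2Face_data")

# Extract every downloaded archive into the dataset directory.
for zip_path in sorted(data_dir.rglob("*.zip")):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(data_dir)

# Count images per identity (assumes one subfolder of images per ID).
for id_dir in sorted(p for p in data_dir.iterdir() if p.is_dir()):
    n_images = sum(1 for f in id_dir.iterdir() if f.suffix.lower() in {".jpg", ".png"})
    print(id_dir.name, n_images)
```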
Citation
If you use this dataset, please cite our paper:
@misc{paraperas2024arc2face,
title={Arc2Face: A Foundation Model of Human Faces},
author={Foivos Paraperas Papantoniou and Alexandros Lattas and Stylianos Moschoglou and Jiankang Deng and Bernhard Kainz and Stefanos Zafeiriou},
year={2024},
eprint={2403.11641},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
as well as the original dataset:
@inproceedings{zhu2021webface260m,
title={WebFace260M: A Benchmark Unveiling the Power of Million-scale Deep Face Recognition},
author={Zheng Zhu and Guan Huang and Jiankang Deng and Yun Ye and Junjie Huang and Xinze Chen and Jiagang Zhu and Tian Yang and Jiwen Lu and Dalong Du and Jie Zhou},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021}
}