FoivosPar committed on
Commit f309022
1 Parent(s): 3682b34

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -16,9 +16,9 @@ This is the dataset used in [Arc2Face: A Foundation Model of Human Faces](https:
 This dataset consists of approximately 21M facial images from 1M identities at a resolution of 448×448. It was produced by upsampling 50% of the images from the [WebFace42M database](https://openaccess.thecvf.com/content/CVPR2021/html/Zhu_WebFace260M_A_Benchmark_Unveiling_the_Power_of_Million-Scale_Deep_Face_CVPR_2021_paper.html) (originally at 112×112 resolution) using a state-of-the-art blind face restoration [network](https://github.com/TencentARC/GFPGAN). This dataset was used to train the identity-conditioned generative face model presented in [Arc2Face](https://arxiv.org/abs/2403.11641).
 
 ## Tasks
-The Arc2Face model is based on Stable Diffusion v1.5 and is designed for generating images at 512×512 pixels. To accommodate the requirements of large diffusion models, Arc2Face introduces a refined version of the WebFace42M dataset. Although the original database is intended for Face Recognition (FR) training, the restored dataset provided here is designed for training generative face models. Its large number of IDs and considerable intra-class variability make it particularly helpful for ID-conditioned generation.
+The Arc2Face model is based on Stable Diffusion v1.5 and is designed for generating images at 512×512 pixels. To accommodate the requirements of large diffusion models, Arc2Face introduces a refined version of the WebFace42M dataset. Although the original database is intended for Face Recognition (FR) training, the restored dataset provided here is designed for training generative models. Its large number of IDs and considerable intra-class variability make it particularly helpful for ID-conditioned generation.
 
-Please that the original WebFace42M dataset contains images tailored to extreme conditions for FR robustness. Despite post-restoration filtering, the restored dataset may still include some poor quality 448×448 images. Moreover, all images are limited to tightly cropped facial areas. Therefore, it is suggested to use this dataset in combination with other high-quality datasets (e.g., FFHQ) when training face models, as described in the [paper](https://arxiv.org/abs/2403.11641).
+Please note that the original WebFace42M dataset contains images tailored to extreme conditions for FR robustness. Despite post-restoration filtering, the restored dataset may still include some poor quality 448×448 images. Moreover, all images are limited to tightly cropped facial areas. Therefore, it is suggested to use this dataset in combination with other high-quality datasets (e.g., FFHQ) when training face models, as described in the [paper](https://arxiv.org/abs/2403.11641).
 
 ## Dataset Structure
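The README describes upsampling WebFace42M crops from 112×112 to 448×448 with GFPGAN. A minimal sketch of that step is below; the helper name, model checkpoint, and `GFPGANer` settings are assumptions (the commit does not specify the exact restoration configuration), so treat this as illustrative rather than the authors' pipeline.

```python
def restoration_upscale(src: int = 112, dst: int = 448) -> int:
    """Integer upscale factor needed to go from the original WebFace42M
    crop size (112x112) to the released resolution (448x448)."""
    if dst % src != 0:
        raise ValueError("destination must be an integer multiple of source")
    return dst // src  # 448 // 112 == 4


# Hypothetical use of GFPGAN's blind face restoration (requires the gfpgan
# package and downloaded weights; checkpoint name is an assumption):
#
# from gfpgan import GFPGANer
# restorer = GFPGANer(model_path="GFPGANv1.3.pth",
#                     upscale=restoration_upscale())
# _, restored_faces, _ = restorer.enhance(img_bgr, has_aligned=True)
```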