---
license: mit
task_categories:
  - text-classification
language:
  - en
viewer: true
---

# Bias in Bios

Bias in Bios was created by (De-Arteaga et al., 2019) and published under the MIT license (https://github.com/microsoft/biosbias). The dataset is used to investigate bias in NLP models. It consists of textual biographies from which a model must predict the professional occupation; the sensitive attribute is (binary) gender.

The version shared here is the one proposed by (Ravfogel et al., 2020), which is slightly smaller due to the unavailability of 5,557 biographies.

The dataset is split into train (257,000 samples), test (99,000 samples), and dev (40,000 samples) sets.
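
If the dataset is hosted on the Hugging Face Hub, the three splits can be loaded with the `datasets` library. A minimal sketch, assuming the repository id `<namespace>/bias_in_bios` (a placeholder, replace it with the actual path of this dataset on the Hub):

```python
from datasets import load_dataset

# The repository id below is a placeholder; substitute the actual
# namespace/name under which this dataset is published on the Hub.
dataset = load_dataset("<namespace>/bias_in_bios")

print(dataset)              # train / test / dev splits and their sizes
print(dataset["train"][0])  # one biography with its profession and gender labels
```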

The classification and sensitive attribute labels and their proportions are presented below. Distributions are similar across the three sets.

## Classification labels

| Profession | Numerical label | Proportion (%) | Profession | Numerical label | Proportion (%) |
|---|---|---|---|---|---|
| accountant | 0 | 1.42 | painter | 14 | 1.95 |
| architect | 1 | 2.55 | paralegal | 15 | 0.45 |
| attorney | 2 | 8.22 | pastor | 16 | 0.64 |
| chiropractor | 3 | 0.67 | personal_trainer | 17 | 0.36 |
| comedian | 4 | 0.71 | photographer | 18 | 6.13 |
| composer | 5 | 1.41 | physician | 19 | 10.35 |
| dentist | 6 | 3.68 | poet | 20 | 1.77 |
| dietitian | 7 | 1.0 | professor | 21 | 29.8 |
| dj | 8 | 0.38 | psychologist | 22 | 4.64 |
| filmmaker | 9 | 1.77 | rapper | 23 | 0.35 |
| interior_designer | 10 | 0.37 | software_engineer | 24 | 1.74 |
| journalist | 11 | 5.03 | surgeon | 25 | 3.43 |
| model | 12 | 1.89 | teacher | 26 | 4.09 |
| nurse | 13 | 4.78 | yoga_teacher | 27 | 0.42 |

## Sensitive attributes

| Gender | Numerical label | Proportion (%) |
|---|---|---|
| Male | 0 | 53.9 |
| Female | 1 | 46.1 |
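
A minimal sketch for converting the numerical labels back to their names, following the tables above. The field names `profession` and `gender` are assumptions about how the integer labels are exposed and may need to be adjusted:

```python
# Label names in the order of the numerical labels listed above.
PROFESSIONS = [
    "accountant", "architect", "attorney", "chiropractor", "comedian",
    "composer", "dentist", "dietitian", "dj", "filmmaker",
    "interior_designer", "journalist", "model", "nurse", "painter",
    "paralegal", "pastor", "personal_trainer", "photographer", "physician",
    "poet", "professor", "psychologist", "rapper", "software_engineer",
    "surgeon", "teacher", "yoga_teacher",
]
GENDERS = ["Male", "Female"]

def decode(example):
    # 'profession' and 'gender' are assumed column names for the integer labels.
    return {
        "profession_name": PROFESSIONS[example["profession"]],
        "gender_name": GENDERS[example["gender"]],
    }
```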

(De-Arteaga et al., 2019) Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 120–128. https://doi.org/10.1145/3287560.3287572

(Ravfogel et al., 2020) Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237–7256, Online. Association for Computational Linguistics.