---
license: mit
task_categories:
- text-classification
language:
- en
dataset_info:
  features:
  - name: hard_text
    dtype: string
  - name: profession
    dtype: int64
  - name: gender
    dtype: int64
  splits:
  - name: train
    num_bytes: 107487885
    num_examples: 257478
  - name: test
    num_bytes: 41312256
    num_examples: 99069
  - name: dev
    num_bytes: 16504417
    num_examples: 39642
  download_size: 99808338
  dataset_size: 165304558
---

# Bias in Bios

Bias in Bios was created by (De-Arteaga et al., 2019) and published under the MIT license (https://github.com/microsoft/biosbias). The dataset is used to investigate bias in NLP models. It consists of textual biographies used to predict professional occupation; the sensitive attribute is the (binary) gender.

The version shared here is the one proposed by (Ravfogel et al., 2020), which is slightly smaller due to the unavailability of 5,557 biographies.

The dataset is split into train (257,478 samples), test (99,069 samples), and dev (39,642 samples) sets.

To load all splits ('train', 'dev', 'test'), use the following code:
```python
from datasets import load_dataset

train_dataset = load_dataset("LabHC/bias_in_bios", split='train')
test_dataset = load_dataset("LabHC/bias_in_bios", split='test')
dev_dataset = load_dataset("LabHC/bias_in_bios", split='dev')
```
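
Each record is a dictionary with the three features declared in the metadata above (`hard_text`, `profession`, `gender`). As a quick sanity check, here is a minimal sketch for inspecting one example (label meanings are given in the tables below):

```python
from datasets import load_dataset

train_dataset = load_dataset("LabHC/bias_in_bios", split='train')

# Each record is a dict with the three features from the metadata.
example = train_dataset[0]
print(example['hard_text'])   # biography text (string)
print(example['profession'])  # occupation label (int, 0-27; see table below)
print(example['gender'])      # sensitive attribute (int, 0 = male / 1 = female)
```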

Below are the classification and sensitive-attribute labels and their proportions. Distributions are similar across the three sets.


#### Classification labels

| Profession | Numerical label | Proportion (%) | | Profession | Numerical label | Proportion (%) |
|---|---|---|---|---|---|---|
| accountant | 0 | 1.42 | | painter | 14 | 1.95 |
| architect | 1 | 2.55 | | paralegal | 15 | 0.45 |
| attorney | 2 | 8.22 | | pastor | 16 | 0.64 |
| chiropractor | 3 | 0.67 | | personal_trainer | 17 | 0.36 |
| comedian | 4 | 0.71 | | photographer | 18 | 6.13 |
| composer | 5 | 1.41 | | physician | 19 | 10.35 |
| dentist | 6 | 3.68 | | poet | 20 | 1.77 |
| dietitian | 7 | 1.00 | | professor | 21 | 29.80 |
| dj | 8 | 0.38 | | psychologist | 22 | 4.64 |
| filmmaker | 9 | 1.77 | | rapper | 23 | 0.35 |
| interior_designer | 10 | 0.37 | | software_engineer | 24 | 1.74 |
| journalist | 11 | 5.03 | | surgeon | 25 | 3.43 |
| model | 12 | 1.89 | | teacher | 26 | 4.09 |
| nurse | 13 | 4.78 | | yoga_teacher | 27 | 0.42 |
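
For convenience, the profession mapping can be written as a plain Python list indexed by the numerical label (this list is a transcription of the table above; it is not shipped with the dataset):

```python
# Transcription of the table above: index = numerical label.
PROFESSIONS = [
    "accountant", "architect", "attorney", "chiropractor", "comedian",
    "composer", "dentist", "dietitian", "dj", "filmmaker",
    "interior_designer", "journalist", "model", "nurse", "painter",
    "paralegal", "pastor", "personal_trainer", "photographer", "physician",
    "poet", "professor", "psychologist", "rapper", "software_engineer",
    "surgeon", "teacher", "yoga_teacher",
]

# Decode an integer label into a readable profession name.
print(PROFESSIONS[21])  # "professor"
```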

#### Sensitive attributes

| Gender | Numerical label | Proportion (%) |
|---|---|---|
| Male | 0 | 53.9 |
| Female | 1 | 46.1 |
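
The proportions above can be recomputed directly from the data, for instance with pandas (a sketch; the column name follows the features listed in the metadata):

```python
from datasets import load_dataset

train = load_dataset("LabHC/bias_in_bios", split='train')
df = train.to_pandas()  # datasets.Dataset -> pandas.DataFrame

# Percentage of each gender label in the train split (0 = male, 1 = female).
print(df['gender'].value_counts(normalize=True).mul(100).round(1))
```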


---
(De-Arteaga et al., 2019) Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 120–128. https://doi.org/10.1145/3287560.3287572

(Ravfogel et al., 2020) Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237–7256, Online. Association for Computational Linguistics.