ep9io committed on
Commit
49b4bc8
1 Parent(s): f0fced6

Update README.md

Files changed (1)
  1. README.md +5 -13
README.md CHANGED
@@ -10,19 +10,11 @@ license: apache-2.0
 short_description: LLMs for Detecting Bias in Job Descriptions.
 ---
 
-Abstract—This study explores the application of large language
-models (LLMs) for detecting implicit bias in job descriptions,
-an important concern in human resources that shapes applicant
-pools and influences employer perception. We compare different
-LLM architectures—encoder, encoder-decoder, and decoder models—focusing on seven specific bias types. The research questions
-address the capability of foundation LLMs to detect implicit
-bias and the effectiveness of domain adaptation via fine-tuning
-versus prompt tuning. Results indicate that fine-tuned models
-outperform non-fine-tuned models in detecting biases, with Flan-T5-XL emerging as the top performer, surpassing the zero-shot
-prompting of the GPT-4o model. A labelled dataset comprising gold,
-silver, and bronze-standard data was created for this purpose
-and open-sourced to advance the field and serve as a valuable
-resource for future studies.
+**Abstract**—This study explores the application of large language models (LLMs) for detecting implicit bias in job descriptions, an important concern in human resources that shapes applicant pools and influences employer perception.
+We compare different LLM architectures—encoder, encoder-decoder, and decoder models—focusing on seven specific bias types.
+The research questions address the capability of foundation LLMs to detect implicit bias and the effectiveness of domain adaptation via fine-tuning versus prompt-tuning.
+Results indicate that fine-tuned models are more effective than non-fine-tuned models in detecting biases, with Flan-T5-XL emerging as the top performer, surpassing zero-shot prompting of the GPT-4o model.
+A labelled dataset consisting of verified gold-standard, silver-standard, and unverified bronze-standard data was created for this purpose and [open-sourced](https://huggingface.co/datasets/2024-mcm-everitt-ryan/benchmark) to advance the field and serve as a valuable resource for future research.
 
 # Short Introduction
 Introduction—In human resources, bias affects both employers and employees in explicit and implicit forms. Explicit bias is