---
library_name: transformers
license: mit
datasets:
- ahmedheakl/resume-atlas
language:
- en
metrics:
- accuracy
- f1
- recall
- precision
pipeline_tag: text-classification
---

# How to use

In this example, we run inference on a sample from our dataset (_ResumeAtlas_). You can increase `max_length` for more accurate predictions.

```python
# Install the dataset library first (e.g., in a notebook):
!pip install datasets

import numpy as np
import torch
from transformers import BertForSequenceClassification, BertTokenizer
from datasets import load_dataset
from sklearn import preprocessing

dataset_id = 'ahmedheakl/resume-atlas'
model_id = 'ahmedheakl/bert-resume-classification'
label_column = 'Category'
num_labels = 43
output_attentions = False
output_hidden_states = False
do_lower_case = True
add_special_tokens = True
max_length = 512
padding = 'max_length'
return_attention_mask = True
truncation = True

device = 'cuda' if torch.cuda.is_available() else 'cpu'

ds = load_dataset(dataset_id, trust_remote_code=True)

# Fit a label encoder on the training categories so predicted
# class ids can be mapped back to category names.
le = preprocessing.LabelEncoder()
le.fit(ds['train'][label_column])

tokenizer = BertTokenizer.from_pretrained(model_id, do_lower_case=do_lower_case)
model = BertForSequenceClassification.from_pretrained(
    model_id,
    num_labels=num_labels,
    output_attentions=output_attentions,
    output_hidden_states=output_hidden_states,
)
model = model.to(device).eval()

sent = ds['train'][0]['Text']

encoded_dict = tokenizer.encode_plus(
    sent,
    add_special_tokens=add_special_tokens,
    max_length=max_length,
    padding=padding,
    return_attention_mask=return_attention_mask,
    return_tensors='pt',
    truncation=truncation,
)
input_ids = encoded_dict['input_ids'].to(device)
attention_mask = encoded_dict['attention_mask'].to(device)

with torch.no_grad():
    outputs = model(
        input_ids,
        token_type_ids=None,
        attention_mask=attention_mask,
    )

label_id = np.argmax(outputs['logits'].cpu().numpy(), axis=1)
print(f'Predicted: {le.inverse_transform(label_id)[0]} | Ground: {ds["train"][0][label_column]}')
```
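
To classify your own resume text, you can wrap the same steps in a small helper. This is a minimal sketch reusing the objects loaded above; the `classify_resume` helper and the sample string are ours, not part of the released code.

```python
def classify_resume(text: str) -> str:
    # Hypothetical helper (not part of the released code): tokenize a raw
    # resume string and map the predicted class id back to a category name.
    enc = tokenizer(
        text,
        add_special_tokens=add_special_tokens,
        max_length=max_length,
        padding=padding,
        truncation=truncation,
        return_attention_mask=return_attention_mask,
        return_tensors='pt',
    )
    with torch.no_grad():
        logits = model(
            enc['input_ids'].to(device),
            attention_mask=enc['attention_mask'].to(device),
        ).logits
    return le.inverse_transform(logits.argmax(dim=1).cpu().numpy())[0]

print(classify_resume('Senior Java developer with eight years of backend experience.'))
```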

# Model Card

**Please see paper & code for more information:**
- https://github.com/noran-mohamed/Resume-Classification-Dataset
- https://arxiv.org/abs/2406.18125
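
The card metadata lists accuracy, F1, recall, and precision. If you want to compute such metrics yourself, here is a minimal evaluation sketch that reuses the objects from the usage example above; the split choice, batch size of 32, and macro averaging are our assumptions, not necessarily the paper's exact protocol.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Assumption: evaluate on a held-out split if the dataset provides one.
split = 'test' if 'test' in ds else 'train'
texts = ds[split]['Text']
golds = le.transform(ds[split][label_column])

preds = []
batch_size = 32
for i in range(0, len(texts), batch_size):
    enc = tokenizer(
        texts[i:i + batch_size],
        max_length=max_length,
        padding=padding,
        truncation=truncation,
        return_tensors='pt',
    )
    with torch.no_grad():
        logits = model(
            enc['input_ids'].to(device),
            attention_mask=enc['attention_mask'].to(device),
        ).logits
    preds.extend(logits.argmax(dim=1).cpu().tolist())

print('accuracy :', accuracy_score(golds, preds))
print('precision:', precision_score(golds, preds, average='macro'))
print('recall   :', recall_score(golds, preds, average='macro'))
print('f1       :', f1_score(golds, preds, average='macro'))
```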


## Citation

**BibTeX:**
```
@article{heakl2024resumeatlas,
  title={ResumeAtlas: Revisiting Resume Classification with Large-Scale Datasets and Large Language Models},
  author={Heakl, Ahmed and Mohamed, Youssef and Mohamed, Noran and Sharkaway, Ali and Zaky, Ahmed},
  journal={arXiv preprint arXiv:2406.18125},
  year={2024}
}
```

**APA:**
```
Heakl, A., Mohamed, Y., Mohamed, N., Sharkaway, A., & Zaky, A. (2024). ResumeAtlas: Revisiting Resume Classification with Large-Scale Datasets and Large Language Models. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2406.18125
```


## Model Card Authors

- Email: [email protected]
- LinkedIn: https://linkedin.com/in/ahmed-heakl