Commit 61ef443 by Henry Kenlay (parent 9839296): Upload README.md
---
tags:
- antibody language model
- antibody
base_model: Exscientia/IgBert_unpaired
license: mit
---

# IgBert

Pretrained model on protein and antibody sequences using a masked language modelling (MLM) objective. It was introduced in the paper [Large scale paired antibody language models](https://arxiv.org/abs/2403.17889).

The model is finetuned from IgBert-unpaired using paired antibody sequences from the paired Observed Antibody Space (OAS).
# Use

The model and tokeniser can be loaded using the `transformers` library:

```python
from transformers import BertModel, BertTokenizer

tokeniser = BertTokenizer.from_pretrained("Exscientia/IgBert", do_lower_case=False)
model = BertModel.from_pretrained("Exscientia/IgBert", add_pooling_layer=False)
```
The tokeniser is used to prepare batch inputs:

```python
# heavy chain sequences
sequences_heavy = [
    "VQLAQSGSELRKPGASVKVSCDTSGHSFTSNAIHWVRQAPGQGLEWMGWINTDTGTPTYAQGFTGRFVFSLDTSARTAYLQISSLKADDTAVFYCARERDYSDYFFDYWGQGTLVTVSS",
    "QVQLVESGGGVVQPGRSLRLSCAASGFTFSNYAMYWVRQAPGKGLEWVAVISYDGSNKYYADSVKGRFTISRDNSKNTLYLQMNSLRTEDTAVYYCASGSDYGDYLLVYWGQGTLVTVSS"
]

# light chain sequences
sequences_light = [
    "EVVMTQSPASLSVSPGERATLSCRARASLGISTDLAWYQQRPGQAPRLLIYGASTRATGIPARFSGSGSGTEFTLTISSLQSEDSAVYYCQQYSNWPLTFGGGTKVEIK",
    "ALTQPASVSGSPGQSITISCTGTSSDVGGYNYVSWYQQHPGKAPKLMIYDVSKRPSGVSNRFSGSKSGNTASLTISGLQSEDEADYYCNSLTSISTWVFGGGTKLTVL"
]

# The tokeniser expects input of the form ["V Q ... S S [SEP] E V ... I K", ...]
paired_sequences = []
for sequence_heavy, sequence_light in zip(sequences_heavy, sequences_light):
    paired_sequences.append(' '.join(sequence_heavy) + ' [SEP] ' + ' '.join(sequence_light))

tokens = tokeniser.batch_encode_plus(
    paired_sequences,
    add_special_tokens=True,
    padding=True,
    return_tensors="pt",
    return_special_tokens_mask=True
)
```
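The pairing step is plain Python and can be sanity-checked without loading the model. A minimal sketch, using short dummy chains (not real antibody sequences), shows the space-separated format the tokeniser expects:

```python
# Dummy heavy/light chains for illustration only (not real antibody sequences)
sequences_heavy = ["VQLAQ", "QVQLV"]
sequences_light = ["EVVMT", "ALTQP"]

# Join residues with spaces and separate the two chains with the [SEP] marker,
# matching the format ["V Q ... [SEP] E V ...", ...] described above
paired_sequences = [
    ' '.join(heavy) + ' [SEP] ' + ' '.join(light)
    for heavy, light in zip(sequences_heavy, sequences_light)
]

print(paired_sequences[0])  # V Q L A Q [SEP] E V V M T
```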
Note that the tokeniser adds a `[CLS]` token at the beginning of each paired sequence, a `[SEP]` token at the end of each paired sequence, and pads using the `[PAD]` token. For example, a batch containing the sequences `V Q L [SEP] E V V` and `Q V [SEP] A L` will be tokenised to `[CLS] V Q L [SEP] E V V [SEP]` and `[CLS] Q V [SEP] A L [SEP] [PAD] [PAD]`.
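To make this behaviour concrete, here is a toy stand-in (a hypothetical helper, not part of `transformers`) that reproduces the example above:

```python
def toy_tokenise(batch):
    """Mimic the tokeniser's special-token handling: wrap each paired
    sequence in [CLS] ... [SEP] and pad shorter entries with [PAD]."""
    wrapped = [['[CLS]'] + seq.split() + ['[SEP]'] for seq in batch]
    max_len = max(len(tokens) for tokens in wrapped)
    return [tokens + ['[PAD]'] * (max_len - len(tokens)) for tokens in wrapped]

batch = toy_tokenise(["V Q L [SEP] E V V", "Q V [SEP] A L"])
# batch[1] -> ['[CLS]', 'Q', 'V', '[SEP]', 'A', 'L', '[SEP]', '[PAD]', '[PAD]']
```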
Sequence embeddings are generated by feeding the tokens through the model:

```python
output = model(
    input_ids=tokens['input_ids'],
    attention_mask=tokens['attention_mask']
)

residue_embeddings = output.last_hidden_state
```
To obtain a sequence representation, the residue embeddings can be averaged over the sequence, excluding special tokens:

```python
import torch

# mask special tokens before summing over embeddings
residue_embeddings[tokens["special_tokens_mask"] == 1] = 0
sequence_embeddings_sum = residue_embeddings.sum(1)

# average by dividing the sum by the number of residue tokens
sequence_lengths = torch.sum(tokens["special_tokens_mask"] == 0, dim=1)
sequence_embeddings = sequence_embeddings_sum / sequence_lengths.unsqueeze(1)
```
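The masked averaging can be verified with toy numbers in plain Python, independent of `torch` (the values below are made up for illustration):

```python
# Toy residue embeddings for one sequence: 4 positions, 2 dims each;
# positions 0 and 3 are special tokens ([CLS]/[SEP]) and must be excluded
residue_embeddings = [[9.0, 9.0], [1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
special_tokens_mask = [1, 0, 0, 1]

# Zero out special-token positions, sum over positions, then divide by the
# number of real residue tokens -- the same steps as the torch version above
kept = [
    emb if mask == 0 else [0.0] * len(emb)
    for emb, mask in zip(residue_embeddings, special_tokens_mask)
]
summed = [sum(column) for column in zip(*kept)]        # [4.0, 6.0]
n_residues = special_tokens_mask.count(0)              # 2
sequence_embedding = [s / n_residues for s in summed]  # [2.0, 3.0]
```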
For sequence-level fine-tuning, the model can be loaded with a pooling head by setting `add_pooling_layer=True` and using `output.pooler_output` in the downstream task.