ZhiyuanChen committed
Commit a531855
1 Parent(s): 68c22b6
Upload folder using huggingface_hub

- README.md +311 -0
- config.json +81 -0
- model.safetensors +3 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +12 -0
- tokenizer_config.json +68 -0
- vocab.txt +26 -0
README.md
ADDED
@@ -0,0 +1,311 @@
---
language: rna
tags:
  - Biology
  - RNA
license: agpl-3.0
datasets:
  - multimolecule/ensembl-genome-browser
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
  - example_title: "microRNA-21"
    text: "UAGC<mask>UAUCAGACUGAUGUUGA"
    output:
      - label: "*"
        score: 0.08083827048540115
      - label: "<null>"
        score: 0.07966958731412888
      - label: "A"
        score: 0.0771222859621048
      - label: "N"
        score: 0.06853719055652618
      - label: "."
        score: 0.06666938215494156
---

# UTR-LM

Pre-trained model on 5’ untranslated region (5’UTR) using masked language modeling (MLM), Secondary Structure (SS), and Minimum Free Energy (MFE) objectives.

## Statement

_A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions_ is published in [Nature Machine Intelligence](https://doi.org/10.1038/s42256-024-00823-9), which is a Closed Access / Author-Fee journal.

> Machine learning has been at the forefront of the movement for free and open access to research.
>
> We see no role for closed access or author-fee publication in the future of machine learning research and believe the adoption of these journals as an outlet of record for the machine learning community would be a retrograde step.

The MultiMolecule team is committed to the principles of open access and open science.

We do NOT endorse the publication of manuscripts in Closed Access / Author-Fee journals and encourage the community to support Open Access journals and conferences.

Please consider signing the [Statement on Nature Machine Intelligence](https://openaccess.engineering.oregonstate.edu).

## Disclaimer

This is an UNOFFICIAL implementation of [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1101/2023.10.11.561938) by Yanyi Chu, Dan Yu, et al.

The OFFICIAL repository of UTR-LM is at [a96123155/UTR-LM](https://github.com/a96123155/UTR-LM).

> [!WARNING]
> The MultiMolecule team is unable to confirm that the provided model and checkpoints produce the same intermediate representations as the original implementation, because the proposed method is published in a Closed Access / Author-Fee journal.

**The team releasing UTR-LM did not write this model card, so it has been written by the MultiMolecule team.**

## Model Details

UTR-LM is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of 5’ untranslated regions (5’UTRs) in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.

### Variations

- **[`multimolecule/utrlm.te_el`](https://huggingface.co/multimolecule/utrlm.te_el)**: The UTR-LM model for Translation Efficiency of transcripts and mRNA Expression Level.
- **[`multimolecule/utrlm.mrl`](https://huggingface.co/multimolecule/utrlm.mrl)**: The UTR-LM model for Mean Ribosome Loading.

### Model Specification

<table>
<thead>
  <tr>
    <th>Variants</th>
    <th>Num Layers</th>
    <th>Hidden Size</th>
    <th>Num Heads</th>
    <th>Intermediate Size</th>
    <th>Num Parameters (M)</th>
    <th>FLOPs (G)</th>
    <th>MACs (G)</th>
    <th>Max Num Tokens</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td>UTR-LM MRL</td>
    <td rowspan="2">6</td>
    <td rowspan="2">128</td>
    <td rowspan="2">16</td>
    <td rowspan="2">512</td>
    <td rowspan="2">1.21</td>
    <td rowspan="2">0.35</td>
    <td rowspan="2">0.18</td>
    <td rowspan="2">1022</td>
  </tr>
  <tr>
    <td>UTR-LM TE_EL</td>
  </tr>
</tbody>
</table>

### Links

- **Code**: [multimolecule.utrlm](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/utrlm)
- **Data**:
  - [Ensembl Genome Browser](https://ensembl.org)
  - [Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)
  - [High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1101/2021.10.14.464013)
- **Paper**: [A 5’ UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions](https://doi.org/10.1038/s42256-024-00823-9)
- **Developed by**: Yanyi Chu, Dan Yu, Yupeng Li, Kaixuan Huang, Yue Shen, Le Cong, Jason Zhang, Mengdi Wang
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ESM](https://huggingface.co/facebook/esm2_t48_15B_UR50D)
- **Original Repository**: [https://github.com/a96123155/UTR-LM](https://github.com/a96123155/UTR-LM)

## Usage

The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:

```bash
pip install multimolecule
```

### Direct Use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> import multimolecule  # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/utrlm.te_el')
>>> unmasker("uagc<mask>uaucagacugauguuga")

[{'score': 0.08083827048540115,
  'token': 23,
  'token_str': '*',
  'sequence': 'U A G C * U A U C A G A C U G A U G U U G A'},
 {'score': 0.07966958731412888,
  'token': 5,
  'token_str': '<null>',
  'sequence': 'U A G C U A U C A G A C U G A U G U U G A'},
 {'score': 0.0771222859621048,
  'token': 6,
  'token_str': 'A',
  'sequence': 'U A G C A U A U C A G A C U G A U G U U G A'},
 {'score': 0.06853719055652618,
  'token': 10,
  'token_str': 'N',
  'sequence': 'U A G C N U A U C A G A C U G A U G U U G A'},
 {'score': 0.06666938215494156,
  'token': 21,
  'token_str': '.',
  'sequence': 'U A G C. U A U C A G A C U G A U G U U G A'}]
```

### Downstream Use

#### Extract Features

Here is how to use this model to get the features of a given sequence in PyTorch:

```python
from multimolecule import RnaTokenizer, UtrLmModel


tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el')
model = UtrLmModel.from_pretrained('multimolecule/utrlm.te_el')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')

output = model(**input)
```

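The snippet below is a small follow-up, not part of the original card: it assumes the model returns a standard Hugging Face `ModelOutput`, so per-nucleotide embeddings can be read from `output.last_hidden_state` (hidden size 128, per `config.json`).

```python
# Follow-up sketch (an assumption, not from the original card): read the
# per-nucleotide embeddings from the standard Hugging Face output object.
embeddings = output.last_hidden_state  # expected shape: (1, sequence length + special tokens, 128)
print(embeddings.shape)
```
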
#### Sequence Classification / Regression

**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.

Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, UtrLmForSequencePrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el')
model = UtrLmForSequencePrediction.from_pretrained('multimolecule/utrlm.te_el')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])

output = model(**input, labels=label)
```

#### Nucleotide Classification / Regression

**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.

Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, UtrLmForNucleotidePrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el')
model = UtrLmForNucleotidePrediction.from_pretrained('multimolecule/utrlm.te_el')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text),))

output = model(**input, labels=label)
```

#### Contact Classification / Regression

**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, UtrLmForContactPrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el')
model = UtrLmForContactPrediction.from_pretrained('multimolecule/utrlm.te_el')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))

output = model(**input, labels=label)
```

## Training Details

UTR-LM used a mixed training strategy with one self-supervised task and two supervised tasks, where the labels of both supervised tasks are calculated using [ViennaRNA](https://viennarna.readthedocs.io).

1. **Masked Language Modeling (MLM)**: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
2. **Secondary Structure (SS)**: predicting the secondary structure of the `<mask>` token in the MLM task.
3. **Minimum Free Energy (MFE)**: predicting the minimum free energy of the 5’ UTR sequence.

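As a rough illustration of how the SS and MFE labels could be derived with ViennaRNA's Python bindings (a minimal sketch, not the authors' pipeline; the three-class structure encoding is an assumption chosen to match `num_labels: 3` of `ss_head` in `config.json`):

```python
# Minimal sketch, not the authors' code: derive SS and MFE labels with ViennaRNA.
import RNA  # ViennaRNA Python bindings

sequence = "UAGCUUAUCAGACUGAUGUUGA"

# RNA.fold returns the minimum-free-energy structure (dot-bracket string)
# together with its free energy in kcal/mol.
structure, mfe = RNA.fold(sequence)

# Assumed per-nucleotide encoding: unpaired ".", opening "(" and closing ")".
ss_labels = [{".": 0, "(": 1, ")": 2}[c] for c in structure]

print(structure, mfe, ss_labels)
```
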
### Training Data

The UTR-LM model was pre-trained on 5’ UTR sequences from three sources:

- **[Ensembl Genome Browser](https://ensembl.org)**: Ensembl is a genome browser for vertebrate genomes that supports research in comparative genomics, evolution, sequence variation and transcriptional regulation. UTR-LM used 5’ UTR sequences from 5 species: human, rat, mouse, chicken, and zebrafish, since these species have high-quality and manual gene annotations.
- **[Human 5′ UTR design and variant effect prediction from a massively parallel translation assay](https://doi.org/10.1038/s41587-019-0164-5)**: Sample et al. proposed 8 distinct 5' UTR libraries, each containing random 50-nucleotide sequences, to evaluate translation rules using mean ribosome loading (MRL) measurements.
- **[High-Throughput 5’ UTR Engineering for Enhanced Protein Production in Non-Viral Gene Therapies](https://doi.org/10.1038/s41467-021-24436-7)**: Cao et al. analyzed endogenous human 5’ UTRs, including data from 3 distinct cell lines/tissues: human embryonic kidney 293T (HEK), human prostate cancer cell (PC3), and human muscle tissue (Muscle).

UTR-LM preprocessed the 5’ UTR sequences in a 4-step pipeline (a toy sketch of steps 2 and 3 follows the list):

1. removed all coding sequence (CDS) and non-5' UTR fragments from the raw sequences
2. identified and removed duplicate sequences
3. truncated the sequences to fit within a range of 30 to 1022 bp
4. filtered out incorrect and low-quality sequences

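A toy sketch of the deduplication and length handling described in steps 2 and 3 (an illustration only; the 30/1022 bounds come from the description above, and keeping the 3' end when truncating is an assumption):

```python
# Toy sketch of steps 2 and 3 above; not the authors' preprocessing code.
def dedup_and_clip(utrs, min_len=30, max_len=1022):
    seen, kept = set(), []
    for seq in utrs:
        seq = seq.upper().replace("T", "U")
        if seq in seen:           # step 2: drop exact duplicates
            continue
        seen.add(seq)
        if len(seq) > max_len:    # step 3: clip to at most 1022 nt
            seq = seq[-max_len:]  # assumption: keep the 3' end adjacent to the CDS
        if len(seq) >= min_len:   # keep sequences of at least 30 nt
            kept.append(seq)
    return kept
```
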
Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.

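For example (a minimal sketch; the `replace_T_with_U` flag is the one recorded in `tokenizer_config.json`, and the exact token output is not taken from the original card):

```python
# Minimal sketch of the T-to-U behaviour; not from the original card.
from multimolecule import RnaTokenizer

tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el')
print(tokenizer.tokenize("TAGCTTATCAGACTGATGTTGA"))  # "T"s converted to "U"s

tokenizer = RnaTokenizer.from_pretrained('multimolecule/utrlm.te_el', replace_T_with_U=False)
print(tokenizer.tokenize("TAGCTTATCAGACTGATGTTGA"))  # "T"s kept as-is
```
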
### Training Procedure

#### Preprocessing

UTR-LM used masked language modeling (MLM) as one of the pre-training objectives. The masking procedure is similar to the one used in BERT (a minimal sketch follows the list):

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

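A minimal sketch of this 80/10/10 masking scheme (not the authors' code; `mask_token_id=4` and `vocab_size=26` are taken from `config.json`, and special-token handling is omitted):

```python
# Minimal sketch of BERT-style 80/10/10 masking; not the authors' code.
import torch

def mask_tokens(input_ids, mask_token_id=4, vocab_size=26, mlm_probability=0.15):
    labels = input_ids.clone()
    # pick 15% of positions as prediction targets
    masked = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked] = -100  # loss is only computed on masked positions

    # 80% of the picked positions become <mask>
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_token_id

    # half of the remainder (10% overall) become a random token
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replaced
    input_ids[randomized] = torch.randint(vocab_size, labels.shape)[randomized]

    # the remaining 10% are left unchanged
    return input_ids, labels
```
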
#### PreTraining

The model was trained on two clusters:

1. 4 NVIDIA V100 GPUs, each with 16GiB of memory.
2. 4 NVIDIA P100 GPUs, each with 32GiB of memory.

## Citation

**BibTeX**:

```bibtex
@article {chu2023a,
    author = {Chu, Yanyi and Yu, Dan and Li, Yupeng and Huang, Kaixuan and Shen, Yue and Cong, Le and Zhang, Jason and Wang, Mengdi},
    title = {A 5{\textquoteright} UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions},
    elocation-id = {2023.10.11.561938},
    year = {2023},
    doi = {10.1101/2023.10.11.561938},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {The 5{\textquoteright} UTR, a regulatory region at the beginning of an mRNA molecule, plays a crucial role in regulating the translation process and impacts the protein expression level. Language models have showcased their effectiveness in decoding the functions of protein and genome sequences. Here, we introduced a language model for 5{\textquoteright} UTR, which we refer to as the UTR-LM. The UTR-LM is pre-trained on endogenous 5{\textquoteright} UTRs from multiple species and is further augmented with supervised information including secondary structure and minimum free energy. We fine-tuned the UTR-LM in a variety of downstream tasks. The model outperformed the best-known benchmark by up to 42\% for predicting the Mean Ribosome Loading, and by up to 60\% for predicting the Translation Efficiency and the mRNA Expression Level. The model also applies to identifying unannotated Internal Ribosome Entry Sites within the untranslated region and improves the AUPR from 0.37 to 0.52 compared to the best baseline. Further, we designed a library of 211 novel 5{\textquoteright} UTRs with high predicted values of translation efficiency and evaluated them via a wet-lab assay. Experiment results confirmed that our top designs achieved a 32.5\% increase in protein production level relative to well-established 5{\textquoteright} UTR optimized for therapeutics.Competing Interest StatementThe authors have declared no competing interest.},
    URL = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938},
    eprint = {https://www.biorxiv.org/content/early/2023/10/14/2023.10.11.561938.full.pdf},
    journal = {bioRxiv}
}
```

## Contact

Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.

Please contact the authors of the [UTR-LM paper](https://doi.org/10.1101/2023.10.11.561938) for questions or comments on the paper/model.

## License

This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).

```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
config.json
ADDED
@@ -0,0 +1,81 @@
{
  "architectures": [
    "UtrLmForPreTraining"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "emb_layer_norm_before": false,
  "eos_token_id": 2,
  "head": {
    "act": null,
    "bias": true,
    "dropout": 0.0,
    "hidden_size": 128,
    "layer_norm_eps": 1e-12,
    "num_labels": 1,
    "output_name": null,
    "problem_type": null,
    "transform": null,
    "transform_act": "gelu"
  },
  "hidden_act": "gelu",
  "hidden_dropout": 0.1,
  "hidden_size": 128,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 512,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "lm_head": {
    "act": null,
    "bias": true,
    "dropout": 0.0,
    "hidden_size": 128,
    "layer_norm_eps": 1e-12,
    "output_name": null,
    "transform": "nonlinear",
    "transform_act": "gelu"
  },
  "mask_token_id": 4,
  "max_position_embeddings": 1026,
  "mfe_head": {
    "act": null,
    "bias": true,
    "dropout": 0.0,
    "hidden_size": 128,
    "layer_norm_eps": 1e-12,
    "num_labels": 1,
    "output_name": null,
    "problem_type": null,
    "transform": null,
    "transform_act": "gelu"
  },
  "model_type": "utrlm",
  "null_token_id": 5,
  "num_attention_heads": 16,
  "num_hidden_layers": 6,
  "pad_token_id": 0,
  "position_embedding_type": "rotary",
  "ss_head": {
    "act": null,
    "bias": true,
    "dropout": 0.0,
    "hidden_size": 128,
    "layer_norm_eps": 1e-12,
    "num_labels": 3,
    "output_name": null,
    "problem_type": null,
    "transform": null,
    "transform_act": "gelu"
  },
  "token_dropout": false,
  "torch_dtype": "float32",
  "transformers_version": "4.44.0",
  "unk_token_id": 3,
  "use_cache": true,
  "vocab_size": 26
}
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3db88ba15da57eb0ed3113d0c003e3699482ff63f468aac5f8dfb317a767c948
size 4936620
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b17e3ef1d0b7e35f26387a93ed9cd773708dc59833807095b20edc1620bf05a
size 4962802
special_tokens_map.json
ADDED
@@ -0,0 +1,12 @@
{
  "additional_special_tokens": [
    "<null>"
  ],
  "bos_token": "<cls>",
  "cls_token": "<cls>",
  "eos_token": "<eos>",
  "mask_token": "<mask>",
  "pad_token": "<pad>",
  "sep_token": "<eos>",
  "unk_token": "<unk>"
}
tokenizer_config.json
ADDED
@@ -0,0 +1,68 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<cls>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "<eos>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "<mask>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "5": {
      "content": "<null>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<null>"
  ],
  "bos_token": "<cls>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<cls>",
  "codon": false,
  "eos_token": "<eos>",
  "mask_token": "<mask>",
  "model_max_length": 1026,
  "nmers": 1,
  "pad_token": "<pad>",
  "replace_T_with_U": true,
  "sep_token": "<eos>",
  "tokenizer_class": "RnaTokenizer",
  "unk_token": "<unk>"
}
vocab.txt
ADDED
@@ -0,0 +1,26 @@
<pad>
<cls>
<eos>
<unk>
<mask>
<null>
A
C
G
U
N
R
Y
S
W
K
M
B
D
H
V
.
X
*
-
I