---
language:
- en
tags:
- text reranking
license: apache-2.0
datasets:
- MS MARCO document ranking
---

# BERT Reranker for MS-MARCO Document Ranking

## Model description

A text reranker trained to rerank the candidates of a BM25 retriever on the MS MARCO document ranking dataset.

## Intended uses & limitations
The model can be used with other retrievers, but it works best with the aligned BM25 retriever described below.

We used the Anserini toolkit's BM25 implementation and indexed with tuned parameters (k1=3.8, b=0.87) following [these instructions](https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-doc.md).

#### How to use
See our [project repo page](https://github.com/luyug/Reranker).
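Detailed usage lives in the repo linked above; as a rough sketch of cross-encoder reranking (assuming the checkpoint loads as a standard sequence-classification model via Hugging Face `transformers` — the `model_id` argument is a placeholder for this model's actual Hub id, and `rank_by_score` is a helper introduced here for illustration):

```python
def rank_by_score(scores, docs):
    """Order documents by descending relevance score."""
    return [d for _, d in sorted(zip(scores, docs), key=lambda p: p[0], reverse=True)]


def rerank(model_id, query, docs):
    """Score (query, doc) pairs with a cross-encoder and return docs ranked best-first.

    Hypothetical sketch: assumes a standard sequence-classification checkpoint;
    model_id stands in for this model's actual Hub id.
    """
    import torch  # heavy dependencies imported lazily so rank_by_score stays standalone
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()
    with torch.no_grad():
        inputs = tokenizer(
            [query] * len(docs), docs,
            padding=True, truncation=True, max_length=512, return_tensors="pt",
        )
        # take the last logit as the relevance score (covers 1- and 2-label heads)
        scores = model(**inputs).logits[:, -1].tolist()
    return rank_by_score(scores, docs)
```

In a pipeline, BM25 retrieves a candidate list per query and `rerank` reorders it before evaluation.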

## Eval results
MRR@10: 0.423 on the MS MARCO document ranking dev set.
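For reference, MRR@10 averages the reciprocal rank of the first relevant document per query, counting only the top 10 results. A minimal sketch (the function name and input format are our own, not taken from the official evaluation scripts):

```python
def mrr_at_10(ranked_relevance):
    """Mean reciprocal rank at cutoff 10.

    ranked_relevance: one list per query of 0/1 relevance labels in ranked
    order (index 0 = top-ranked document). Queries with no relevant document
    in the top 10 contribute 0.
    """
    total = 0.0
    for labels in ranked_relevance:
        for rank, rel in enumerate(labels[:10], start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)
```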

### BibTeX entry and citation info

```bibtex
@inproceedings{gao2021lce,
  title={Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline},
  author={Luyu Gao and Zhuyun Dai and Jamie Callan},
  year={2021},
  booktitle={The 43rd European Conference On Information Retrieval (ECIR)},
}
```