monot5-3b-inpars-v2-hotpotqa-promptagator is a monoT5-3B model fine-tuned on synthetic HotpotQA data generated with [InPars](https://github.com/zetaalphavector/inPars).

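Below is a minimal usage sketch. The Hugging Face repo path is a placeholder assumption (replace it with the actual checkpoint location), and the scoring convention assumed here is the standard monoT5 one: the model reads `Query: ... Document: ... Relevant:` and the relevance score is the probability it assigns to generating `true` rather than `false` as the first decoded token.

```python
# Hedged sketch: scoring a query-document pair with a monoT5-style reranker.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Placeholder repo path; substitute the actual Hub location of this checkpoint.
model_name = "zeta-alpha-ai/monot5-3b-inpars-v2-hotpotqa-promptagator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).eval()

query = "What government position was held by the woman who portrayed Corliss Archer?"
document = "Shirley Temple Black was an American actress and diplomat ..."

# Standard monoT5 input format: ask whether the document is relevant to the query.
prompt = f"Query: {query} Document: {document} Relevant:"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)

# SentencePiece ids for the single-token targets "true" and "false".
true_id = tokenizer("true", add_special_tokens=False).input_ids[0]
false_id = tokenizer("false", add_special_tokens=False).input_ids[0]

with torch.no_grad():
    # Feed only the decoder start token and read the logits of the first
    # generated position; softmax over {false, true} gives the relevance score.
    decoder_input_ids = torch.full(
        (inputs.input_ids.size(0), 1), model.config.decoder_start_token_id
    )
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[:, 0, :]
    score = torch.softmax(logits[:, [false_id, true_id]], dim=-1)[:, 1]

print(f"relevance score: {score.item():.4f}")
```

In a retrieval pipeline, each candidate document from a first-stage retriever (e.g. BM25) would be scored this way and the list reranked by descending score.
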
If you use this model, please cite the original [InPars paper published at SIGIR](https://dl.acm.org/doi/10.1145/3477495.3531863) and/or [InPars-v2](https://arxiv.org/abs/2301.01820).

```
@inproceedings{inpars,
  author = {Bonifacio, Luiz and Abonizio, Hugo and Fadaee, Marzieh and Nogueira, Rodrigo},
  title = {{InPars}: Unsupervised Dataset Generation for Information Retrieval},
  year = {2022},
  isbn = {9781450387323},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3477495.3531863},
  doi = {10.1145/3477495.3531863},
  booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages = {2387--2392},
  numpages = {6},
  keywords = {generative models, large language models, question generation, synthetic datasets, few-shot models, multi-stage ranking},
  location = {Madrid, Spain},
  series = {SIGIR '22}
}
```

```
@misc{inparsv2,
  doi = {10.48550/ARXIV.2301.01820},
  url = {https://arxiv.org/abs/2301.01820},
  author = {Jeronymo, Vitor and Bonifacio, Luiz and Abonizio, Hugo and Fadaee, Marzieh and Lotufo, Roberto and Zavrel, Jakub and Nogueira, Rodrigo},
  title = {{InPars-v2}: Large Language Models as Efficient Dataset Generators for Information Retrieval},
  publisher = {arXiv},
  year = {2023},
  copyright = {Creative Commons Attribution 4.0 International}
}
```