---
library_name: xpmir
---
# SPLADE_DistilMSE: SPLADEv2 trained with distilled triplets
Training data from: https://github.com/sebastian-hofstaetter/neural-ranking-kd

Thibault Formal, Carlos Lassance, Benjamin Piwowarski, Stéphane Clinchant.
*From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models
More Effective*. 2022. https://arxiv.org/abs/2205.04733
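SPLADE represents each query and document as a sparse vector of term weights over the vocabulary, and scores relevance as the dot product of the two vectors. A toy illustration of that scoring scheme (plain Python dicts standing in for the model's sparse outputs, not the xpmir API):

```python
def splade_score(query_vec: dict, doc_vec: dict) -> float:
    """Dot product of two sparse term-weight vectors.

    In SPLADE, query_vec and doc_vec would be the (mostly zero)
    vocabulary-sized weight vectors produced by the encoder; here we
    keep only the nonzero entries in dicts.
    """
    # Iterate over the smaller vector; only shared terms contribute.
    if len(doc_vec) < len(query_vec):
        query_vec, doc_vec = doc_vec, query_vec
    return sum(w * doc_vec.get(term, 0.0) for term, w in query_vec.items())

# Hypothetical expansion weights for illustration only.
query = {"walgreens": 1.2, "store": 0.4, "sales": 0.9, "average": 0.7}
doc = {"average": 0.8, "walgreens": 1.1, "salary": 1.0, "ranges": 0.3}
score = splade_score(query, doc)  # 1.2*1.1 + 0.7*0.8
```

Only terms present in both sparse vectors contribute to the score, which is what makes inverted-index retrieval with SPLADE efficient.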
## Using the model
The model can be loaded with [experimaestro
IR](https://experimaestro-ir.readthedocs.io/en/latest/)
```py
from xpmir.models import AutoModel
# Model that can be re-used in experiments
model, init_tasks = AutoModel.load_from_hf_hub("xpmir/SPLADE_DistilMSE")
# Use this if you want to actually use the model
model = AutoModel.load_from_hf_hub("xpmir/SPLADE_DistilMSE", as_instance=True)
model.rsv("walgreens store sales average", "The average Walgreens salary ranges...")
```
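The `rsv` scores can be used to rerank a list of candidate documents for a query. A minimal sketch, using a stand-in scorer with the same `rsv(query, doc)` interface (substitute the model instance loaded above; the stub's overlap score is purely illustrative):

```python
class StubScorer:
    """Stand-in for the loaded model: exposes rsv(query, doc) -> float."""

    def rsv(self, query: str, doc: str) -> float:
        # Toy score: count of query terms appearing in the document.
        q_terms = set(query.lower().split())
        return float(sum(1 for w in doc.lower().split() if w in q_terms))

def rerank(scorer, query: str, docs: list) -> list:
    """Sort candidate documents by descending relevance score."""
    return sorted(docs, key=lambda d: scorer.rsv(query, d), reverse=True)

docs = [
    "Unrelated text about something else",
    "The average Walgreens salary ranges...",
]
ranked = rerank(StubScorer(), "walgreens store sales average", docs)
```

In practice the candidates would come from a first-stage retriever (e.g. an inverted index over the SPLADE document vectors), with `rsv` providing the final ordering.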
## Results
| Dataset | AP | P@20 | RR | RR@10 | nDCG | nDCG@10 | nDCG@20 |
|----| ---|------|------|------|------|------|------|
| msmarco_dev | 0.3642 | 0.0382 | 0.3693 | 0.3582 | 0.4879 | 0.4222 | 0.4458 |
| trec2019 | 0.4896 | 0.7209 | 0.9496 | 0.9496 | 0.7253 | 0.7055 | 0.6926 |
| trec2020 | 0.5026 | 0.6315 | 0.9483 | 0.9475 | 0.7273 | 0.6868 | 0.6627 |