|
Naming pattern: |
|
|
|
1. `GPL/${dataset}-msmarco-distilbert-gpl`: Model trained in the order (1) MarginMSE on MSMARCO -> (2) GPL on ${dataset};
|
2. `GPL/${dataset}-tsdae-msmarco-distilbert-gpl`: Model trained in the order (1) TSDAE on ${dataset} -> (2) MarginMSE on MSMARCO -> (3) GPL on ${dataset};
|
3. `GPL/msmarco-distilbert-margin-mse`: Model trained on MSMARCO with MarginMSE; |
|
4. `GPL/${dataset}-tsdae-msmarco-distilbert-margin-mse`: Model trained in the order (1) TSDAE on ${dataset} -> (2) MarginMSE on MSMARCO;
|
5. `GPL/${dataset}-distilbert-tas-b-gpl-self_miner`: Starting from the [tas-b model](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b), the model was trained with GPL on the target corpus ${dataset}, using the base model itself as the negative miner (denoted here as "self_miner").
|
|
|
Note that the models in 1. and 2. are built on top of those in 3. and 4., respectively.
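
As a quick illustration, the checkpoints can be loaded like any other Sentence Transformers model from the Hugging Face Hub. This is a minimal sketch assuming the models are published as `sentence-transformers`-compatible checkpoints; `fiqa` is used here only as an example value of ${dataset}, and dot-product scoring is assumed since these are MSMARCO DistilBERT-based dense retrievers.

```python
from sentence_transformers import SentenceTransformer

# Example: load a GPL-adapted checkpoint; replace "fiqa" with your target corpus name.
model = SentenceTransformer("GPL/fiqa-msmarco-distilbert-gpl")

queries = ["How do expense ratios affect index fund returns?"]
passages = [
    "An expense ratio is the annual fee charged by a fund, expressed as a "
    "percentage of assets under management."
]

# Encode queries and passages into dense vectors.
query_emb = model.encode(queries, convert_to_tensor=True)
passage_emb = model.encode(passages, convert_to_tensor=True)

# Score query-passage pairs with dot product (assumed similarity for these retrievers).
scores = query_emb @ passage_emb.T
print(scores)
```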
|
|
|
|
|
|
|
|