DrDominikDellermann bwang0911 committed
Commit
f1f2f25
0 Parent(s):

Duplicate from jinaai/jina-embeddings-v2-base-en


Co-authored-by: bowang <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
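The patterns above route large binary artifacts (weights, archives, tensors) through Git LFS while small text files stay in plain Git. As a rough illustration of which paths they capture, here is a minimal Python sketch using `fnmatch`; note this is only an approximation, since Git's wildmatch semantics (especially for `**` and `/`) differ slightly from `fnmatch`.

```python
from fnmatch import fnmatch

# A few of the LFS patterns from the .gitattributes above.
LFS_PATTERNS = ["*.7z", "*.bin", "*.h5", "*.onnx", "*.safetensors",
                "*.tar.*", "*tfevents*", "saved_model/**/*"]

def tracked_by_lfs(path: str) -> bool:
    """Rough check: does any LFS pattern match the path or its basename?

    Approximation only: Git's wildmatch treats '**' and '/' differently
    from Python's fnmatch.
    """
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, p) or fnmatch(path, p) for p in LFS_PATTERNS)

print(tracked_by_lfs("pytorch_model.bin"))  # large weight file -> LFS
print(tracked_by_lfs("config.json"))        # small text file -> plain git
```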
1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "word_embedding_dimension": 512,
+     "pooling_mode_cls_token": false,
+     "pooling_mode_mean_tokens": true,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false
+ }
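This pooling config enables mean pooling only (`pooling_mode_mean_tokens: true`): a sentence embedding is the average of the token embeddings, counting only non-padding tokens via the attention mask. A pure-Python sketch of that operation (a stand-in for the tensor version in the sentence-transformers `Pooling` module, for illustration only):

```python
def mean_pool(token_embeddings, attention_mask):
    """Masked mean pooling over token embeddings.

    token_embeddings: list of [dim] vectors, one per token position.
    attention_mask:   list of 0/1 flags, 1 = real token, 0 = padding.
    """
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    n_real = sum(attention_mask)  # number of non-padding tokens
    for vec, keep in zip(token_embeddings, attention_mask):
        if keep:
            for i, x in enumerate(vec):
                total[i] += x
    return [t / n_real for t in total]

# Two real tokens and one padding token; padding is ignored.
emb = mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0])
print(emb)  # [2.0, 3.0]
```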
README.md ADDED
@@ -0,0 +1,2727 @@
1
+ ---
2
+ tags:
3
+ - finetuner
4
+ - mteb
5
+ - sentence-transformers
6
+ - feature-extraction
7
+ - sentence-similarity
8
+ - alibi
9
+ datasets:
10
+ - allenai/c4
11
+ language: en
12
+ inference: false
13
+ license: apache-2.0
14
+ model-index:
15
+ - name: jina-embedding-b-en-v2
16
+ results:
17
+ - task:
18
+ type: Classification
19
+ dataset:
20
+ type: mteb/amazon_counterfactual
21
+ name: MTEB AmazonCounterfactualClassification (en)
22
+ config: en
23
+ split: test
24
+ revision: e8379541af4e31359cca9fbcf4b00f2671dba205
25
+ metrics:
26
+ - type: accuracy
27
+ value: 74.73134328358209
28
+ - type: ap
29
+ value: 37.765427081831035
30
+ - type: f1
31
+ value: 68.79367444339518
32
+ - task:
33
+ type: Classification
34
+ dataset:
35
+ type: mteb/amazon_polarity
36
+ name: MTEB AmazonPolarityClassification
37
+ config: default
38
+ split: test
39
+ revision: e2d317d38cd51312af73b3d32a06d1a08b442046
40
+ metrics:
41
+ - type: accuracy
42
+ value: 88.544275
43
+ - type: ap
44
+ value: 84.61328675662887
45
+ - type: f1
46
+ value: 88.51879035862375
47
+ - task:
48
+ type: Classification
49
+ dataset:
50
+ type: mteb/amazon_reviews_multi
51
+ name: MTEB AmazonReviewsClassification (en)
52
+ config: en
53
+ split: test
54
+ revision: 1399c76144fd37290681b995c656ef9b2e06e26d
55
+ metrics:
56
+ - type: accuracy
57
+ value: 45.263999999999996
58
+ - type: f1
59
+ value: 43.778759656699435
60
+ - task:
61
+ type: Retrieval
62
+ dataset:
63
+ type: arguana
64
+ name: MTEB ArguAna
65
+ config: default
66
+ split: test
67
+ revision: None
68
+ metrics:
69
+ - type: map_at_1
70
+ value: 21.693
71
+ - type: map_at_10
72
+ value: 35.487
73
+ - type: map_at_100
74
+ value: 36.862
75
+ - type: map_at_1000
76
+ value: 36.872
77
+ - type: map_at_3
78
+ value: 30.049999999999997
79
+ - type: map_at_5
80
+ value: 32.966
81
+ - type: mrr_at_1
82
+ value: 21.977
83
+ - type: mrr_at_10
84
+ value: 35.565999999999995
85
+ - type: mrr_at_100
86
+ value: 36.948
87
+ - type: mrr_at_1000
88
+ value: 36.958
89
+ - type: mrr_at_3
90
+ value: 30.121
91
+ - type: mrr_at_5
92
+ value: 33.051
93
+ - type: ndcg_at_1
94
+ value: 21.693
95
+ - type: ndcg_at_10
96
+ value: 44.181
97
+ - type: ndcg_at_100
98
+ value: 49.982
99
+ - type: ndcg_at_1000
100
+ value: 50.233000000000004
101
+ - type: ndcg_at_3
102
+ value: 32.830999999999996
103
+ - type: ndcg_at_5
104
+ value: 38.080000000000005
105
+ - type: precision_at_1
106
+ value: 21.693
107
+ - type: precision_at_10
108
+ value: 7.248
109
+ - type: precision_at_100
110
+ value: 0.9769999999999999
111
+ - type: precision_at_1000
112
+ value: 0.1
113
+ - type: precision_at_3
114
+ value: 13.632
115
+ - type: precision_at_5
116
+ value: 10.725
117
+ - type: recall_at_1
118
+ value: 21.693
119
+ - type: recall_at_10
120
+ value: 72.475
121
+ - type: recall_at_100
122
+ value: 97.653
123
+ - type: recall_at_1000
124
+ value: 99.57300000000001
125
+ - type: recall_at_3
126
+ value: 40.896
127
+ - type: recall_at_5
128
+ value: 53.627
129
+ - task:
130
+ type: Clustering
131
+ dataset:
132
+ type: mteb/arxiv-clustering-p2p
133
+ name: MTEB ArxivClusteringP2P
134
+ config: default
135
+ split: test
136
+ revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
137
+ metrics:
138
+ - type: v_measure
139
+ value: 45.39242428696777
140
+ - task:
141
+ type: Clustering
142
+ dataset:
143
+ type: mteb/arxiv-clustering-s2s
144
+ name: MTEB ArxivClusteringS2S
145
+ config: default
146
+ split: test
147
+ revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
148
+ metrics:
149
+ - type: v_measure
150
+ value: 36.675626784714
151
+ - task:
152
+ type: Reranking
153
+ dataset:
154
+ type: mteb/askubuntudupquestions-reranking
155
+ name: MTEB AskUbuntuDupQuestions
156
+ config: default
157
+ split: test
158
+ revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
159
+ metrics:
160
+ - type: map
161
+ value: 62.247725694904034
162
+ - type: mrr
163
+ value: 74.91359978894604
164
+ - task:
165
+ type: STS
166
+ dataset:
167
+ type: mteb/biosses-sts
168
+ name: MTEB BIOSSES
169
+ config: default
170
+ split: test
171
+ revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
172
+ metrics:
173
+ - type: cos_sim_pearson
174
+ value: 82.68003802970496
175
+ - type: cos_sim_spearman
176
+ value: 81.23438110096286
177
+ - type: euclidean_pearson
178
+ value: 81.87462986142582
179
+ - type: euclidean_spearman
180
+ value: 81.23438110096286
181
+ - type: manhattan_pearson
182
+ value: 81.61162566600755
183
+ - type: manhattan_spearman
184
+ value: 81.11329400456184
185
+ - task:
186
+ type: Classification
187
+ dataset:
188
+ type: mteb/banking77
189
+ name: MTEB Banking77Classification
190
+ config: default
191
+ split: test
192
+ revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
193
+ metrics:
194
+ - type: accuracy
195
+ value: 84.01298701298701
196
+ - type: f1
197
+ value: 83.31690714969382
198
+ - task:
199
+ type: Clustering
200
+ dataset:
201
+ type: mteb/biorxiv-clustering-p2p
202
+ name: MTEB BiorxivClusteringP2P
203
+ config: default
204
+ split: test
205
+ revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
206
+ metrics:
207
+ - type: v_measure
208
+ value: 37.050108150972086
209
+ - task:
210
+ type: Clustering
211
+ dataset:
212
+ type: mteb/biorxiv-clustering-s2s
213
+ name: MTEB BiorxivClusteringS2S
214
+ config: default
215
+ split: test
216
+ revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
217
+ metrics:
218
+ - type: v_measure
219
+ value: 30.15731442819715
220
+ - task:
221
+ type: Retrieval
222
+ dataset:
223
+ type: BeIR/cqadupstack
224
+ name: MTEB CQADupstackAndroidRetrieval
225
+ config: default
226
+ split: test
227
+ revision: None
228
+ metrics:
229
+ - type: map_at_1
230
+ value: 31.391999999999996
231
+ - type: map_at_10
232
+ value: 42.597
233
+ - type: map_at_100
234
+ value: 44.07
235
+ - type: map_at_1000
236
+ value: 44.198
237
+ - type: map_at_3
238
+ value: 38.957
239
+ - type: map_at_5
240
+ value: 40.961
241
+ - type: mrr_at_1
242
+ value: 37.196
243
+ - type: mrr_at_10
244
+ value: 48.152
245
+ - type: mrr_at_100
246
+ value: 48.928
247
+ - type: mrr_at_1000
248
+ value: 48.964999999999996
249
+ - type: mrr_at_3
250
+ value: 45.446
251
+ - type: mrr_at_5
252
+ value: 47.205999999999996
253
+ - type: ndcg_at_1
254
+ value: 37.196
255
+ - type: ndcg_at_10
256
+ value: 49.089
257
+ - type: ndcg_at_100
258
+ value: 54.471000000000004
259
+ - type: ndcg_at_1000
260
+ value: 56.385
261
+ - type: ndcg_at_3
262
+ value: 43.699
263
+ - type: ndcg_at_5
264
+ value: 46.22
265
+ - type: precision_at_1
266
+ value: 37.196
267
+ - type: precision_at_10
268
+ value: 9.313
269
+ - type: precision_at_100
270
+ value: 1.478
271
+ - type: precision_at_1000
272
+ value: 0.198
273
+ - type: precision_at_3
274
+ value: 20.839
275
+ - type: precision_at_5
276
+ value: 14.936
277
+ - type: recall_at_1
278
+ value: 31.391999999999996
279
+ - type: recall_at_10
280
+ value: 61.876
281
+ - type: recall_at_100
282
+ value: 84.214
283
+ - type: recall_at_1000
284
+ value: 95.985
285
+ - type: recall_at_3
286
+ value: 46.6
287
+ - type: recall_at_5
288
+ value: 53.588
289
+ - task:
290
+ type: Retrieval
291
+ dataset:
292
+ type: BeIR/cqadupstack
293
+ name: MTEB CQADupstackEnglishRetrieval
294
+ config: default
295
+ split: test
296
+ revision: None
297
+ metrics:
298
+ - type: map_at_1
299
+ value: 29.083
300
+ - type: map_at_10
301
+ value: 38.812999999999995
302
+ - type: map_at_100
303
+ value: 40.053
304
+ - type: map_at_1000
305
+ value: 40.188
306
+ - type: map_at_3
307
+ value: 36.111
308
+ - type: map_at_5
309
+ value: 37.519000000000005
310
+ - type: mrr_at_1
311
+ value: 36.497
312
+ - type: mrr_at_10
313
+ value: 44.85
314
+ - type: mrr_at_100
315
+ value: 45.546
316
+ - type: mrr_at_1000
317
+ value: 45.593
318
+ - type: mrr_at_3
319
+ value: 42.686
320
+ - type: mrr_at_5
321
+ value: 43.909
322
+ - type: ndcg_at_1
323
+ value: 36.497
324
+ - type: ndcg_at_10
325
+ value: 44.443
326
+ - type: ndcg_at_100
327
+ value: 48.979
328
+ - type: ndcg_at_1000
329
+ value: 51.154999999999994
330
+ - type: ndcg_at_3
331
+ value: 40.660000000000004
332
+ - type: ndcg_at_5
333
+ value: 42.193000000000005
334
+ - type: precision_at_1
335
+ value: 36.497
336
+ - type: precision_at_10
337
+ value: 8.433
338
+ - type: precision_at_100
339
+ value: 1.369
340
+ - type: precision_at_1000
341
+ value: 0.185
342
+ - type: precision_at_3
343
+ value: 19.894000000000002
344
+ - type: precision_at_5
345
+ value: 13.873
346
+ - type: recall_at_1
347
+ value: 29.083
348
+ - type: recall_at_10
349
+ value: 54.313
350
+ - type: recall_at_100
351
+ value: 73.792
352
+ - type: recall_at_1000
353
+ value: 87.629
354
+ - type: recall_at_3
355
+ value: 42.257
356
+ - type: recall_at_5
357
+ value: 47.066
358
+ - task:
359
+ type: Retrieval
360
+ dataset:
361
+ type: BeIR/cqadupstack
362
+ name: MTEB CQADupstackGamingRetrieval
363
+ config: default
364
+ split: test
365
+ revision: None
366
+ metrics:
367
+ - type: map_at_1
368
+ value: 38.556000000000004
369
+ - type: map_at_10
370
+ value: 50.698
371
+ - type: map_at_100
372
+ value: 51.705
373
+ - type: map_at_1000
374
+ value: 51.768
375
+ - type: map_at_3
376
+ value: 47.848
377
+ - type: map_at_5
378
+ value: 49.358000000000004
379
+ - type: mrr_at_1
380
+ value: 43.95
381
+ - type: mrr_at_10
382
+ value: 54.191
383
+ - type: mrr_at_100
384
+ value: 54.852999999999994
385
+ - type: mrr_at_1000
386
+ value: 54.885
387
+ - type: mrr_at_3
388
+ value: 51.954
389
+ - type: mrr_at_5
390
+ value: 53.13
391
+ - type: ndcg_at_1
392
+ value: 43.95
393
+ - type: ndcg_at_10
394
+ value: 56.516
395
+ - type: ndcg_at_100
396
+ value: 60.477000000000004
397
+ - type: ndcg_at_1000
398
+ value: 61.746
399
+ - type: ndcg_at_3
400
+ value: 51.601
401
+ - type: ndcg_at_5
402
+ value: 53.795
403
+ - type: precision_at_1
404
+ value: 43.95
405
+ - type: precision_at_10
406
+ value: 9.009
407
+ - type: precision_at_100
408
+ value: 1.189
409
+ - type: precision_at_1000
410
+ value: 0.135
411
+ - type: precision_at_3
412
+ value: 22.989
413
+ - type: precision_at_5
414
+ value: 15.473
415
+ - type: recall_at_1
416
+ value: 38.556000000000004
417
+ - type: recall_at_10
418
+ value: 70.159
419
+ - type: recall_at_100
420
+ value: 87.132
421
+ - type: recall_at_1000
422
+ value: 96.16
423
+ - type: recall_at_3
424
+ value: 56.906
425
+ - type: recall_at_5
426
+ value: 62.332
427
+ - task:
428
+ type: Retrieval
429
+ dataset:
430
+ type: BeIR/cqadupstack
431
+ name: MTEB CQADupstackGisRetrieval
432
+ config: default
433
+ split: test
434
+ revision: None
435
+ metrics:
436
+ - type: map_at_1
437
+ value: 24.238
438
+ - type: map_at_10
439
+ value: 32.5
440
+ - type: map_at_100
441
+ value: 33.637
442
+ - type: map_at_1000
443
+ value: 33.719
444
+ - type: map_at_3
445
+ value: 30.026999999999997
446
+ - type: map_at_5
447
+ value: 31.555
448
+ - type: mrr_at_1
449
+ value: 26.328000000000003
450
+ - type: mrr_at_10
451
+ value: 34.44
452
+ - type: mrr_at_100
453
+ value: 35.455999999999996
454
+ - type: mrr_at_1000
455
+ value: 35.521
456
+ - type: mrr_at_3
457
+ value: 32.034
458
+ - type: mrr_at_5
459
+ value: 33.565
460
+ - type: ndcg_at_1
461
+ value: 26.328000000000003
462
+ - type: ndcg_at_10
463
+ value: 37.202
464
+ - type: ndcg_at_100
465
+ value: 42.728
466
+ - type: ndcg_at_1000
467
+ value: 44.792
468
+ - type: ndcg_at_3
469
+ value: 32.368
470
+ - type: ndcg_at_5
471
+ value: 35.008
472
+ - type: precision_at_1
473
+ value: 26.328000000000003
474
+ - type: precision_at_10
475
+ value: 5.7059999999999995
476
+ - type: precision_at_100
477
+ value: 0.8880000000000001
478
+ - type: precision_at_1000
479
+ value: 0.11100000000000002
480
+ - type: precision_at_3
481
+ value: 13.672
482
+ - type: precision_at_5
483
+ value: 9.74
484
+ - type: recall_at_1
485
+ value: 24.238
486
+ - type: recall_at_10
487
+ value: 49.829
488
+ - type: recall_at_100
489
+ value: 75.21
490
+ - type: recall_at_1000
491
+ value: 90.521
492
+ - type: recall_at_3
493
+ value: 36.867
494
+ - type: recall_at_5
495
+ value: 43.241
496
+ - task:
497
+ type: Retrieval
498
+ dataset:
499
+ type: BeIR/cqadupstack
500
+ name: MTEB CQADupstackMathematicaRetrieval
501
+ config: default
502
+ split: test
503
+ revision: None
504
+ metrics:
505
+ - type: map_at_1
506
+ value: 15.378
507
+ - type: map_at_10
508
+ value: 22.817999999999998
509
+ - type: map_at_100
510
+ value: 23.977999999999998
511
+ - type: map_at_1000
512
+ value: 24.108
513
+ - type: map_at_3
514
+ value: 20.719
515
+ - type: map_at_5
516
+ value: 21.889
517
+ - type: mrr_at_1
518
+ value: 19.03
519
+ - type: mrr_at_10
520
+ value: 27.022000000000002
521
+ - type: mrr_at_100
522
+ value: 28.011999999999997
523
+ - type: mrr_at_1000
524
+ value: 28.096
525
+ - type: mrr_at_3
526
+ value: 24.855
527
+ - type: mrr_at_5
528
+ value: 26.029999999999998
529
+ - type: ndcg_at_1
530
+ value: 19.03
531
+ - type: ndcg_at_10
532
+ value: 27.526
533
+ - type: ndcg_at_100
534
+ value: 33.040000000000006
535
+ - type: ndcg_at_1000
536
+ value: 36.187000000000005
537
+ - type: ndcg_at_3
538
+ value: 23.497
539
+ - type: ndcg_at_5
540
+ value: 25.334
541
+ - type: precision_at_1
542
+ value: 19.03
543
+ - type: precision_at_10
544
+ value: 4.963
545
+ - type: precision_at_100
546
+ value: 0.893
547
+ - type: precision_at_1000
548
+ value: 0.13
549
+ - type: precision_at_3
550
+ value: 11.360000000000001
551
+ - type: precision_at_5
552
+ value: 8.134
553
+ - type: recall_at_1
554
+ value: 15.378
555
+ - type: recall_at_10
556
+ value: 38.061
557
+ - type: recall_at_100
558
+ value: 61.754
559
+ - type: recall_at_1000
560
+ value: 84.259
561
+ - type: recall_at_3
562
+ value: 26.788
563
+ - type: recall_at_5
564
+ value: 31.326999999999998
565
+ - task:
566
+ type: Retrieval
567
+ dataset:
568
+ type: BeIR/cqadupstack
569
+ name: MTEB CQADupstackPhysicsRetrieval
570
+ config: default
571
+ split: test
572
+ revision: None
573
+ metrics:
574
+ - type: map_at_1
575
+ value: 27.511999999999997
576
+ - type: map_at_10
577
+ value: 37.429
578
+ - type: map_at_100
579
+ value: 38.818000000000005
580
+ - type: map_at_1000
581
+ value: 38.924
582
+ - type: map_at_3
583
+ value: 34.625
584
+ - type: map_at_5
585
+ value: 36.064
586
+ - type: mrr_at_1
587
+ value: 33.300999999999995
588
+ - type: mrr_at_10
589
+ value: 43.036
590
+ - type: mrr_at_100
591
+ value: 43.894
592
+ - type: mrr_at_1000
593
+ value: 43.936
594
+ - type: mrr_at_3
595
+ value: 40.825
596
+ - type: mrr_at_5
597
+ value: 42.028
598
+ - type: ndcg_at_1
599
+ value: 33.300999999999995
600
+ - type: ndcg_at_10
601
+ value: 43.229
602
+ - type: ndcg_at_100
603
+ value: 48.992000000000004
604
+ - type: ndcg_at_1000
605
+ value: 51.02100000000001
606
+ - type: ndcg_at_3
607
+ value: 38.794000000000004
608
+ - type: ndcg_at_5
609
+ value: 40.65
610
+ - type: precision_at_1
611
+ value: 33.300999999999995
612
+ - type: precision_at_10
613
+ value: 7.777000000000001
614
+ - type: precision_at_100
615
+ value: 1.269
616
+ - type: precision_at_1000
617
+ value: 0.163
618
+ - type: precision_at_3
619
+ value: 18.351
620
+ - type: precision_at_5
621
+ value: 12.762
622
+ - type: recall_at_1
623
+ value: 27.511999999999997
624
+ - type: recall_at_10
625
+ value: 54.788000000000004
626
+ - type: recall_at_100
627
+ value: 79.105
628
+ - type: recall_at_1000
629
+ value: 92.49199999999999
630
+ - type: recall_at_3
631
+ value: 41.924
632
+ - type: recall_at_5
633
+ value: 47.026
634
+ - task:
635
+ type: Retrieval
636
+ dataset:
637
+ type: BeIR/cqadupstack
638
+ name: MTEB CQADupstackProgrammersRetrieval
639
+ config: default
640
+ split: test
641
+ revision: None
642
+ metrics:
643
+ - type: map_at_1
644
+ value: 24.117
645
+ - type: map_at_10
646
+ value: 33.32
647
+ - type: map_at_100
648
+ value: 34.677
649
+ - type: map_at_1000
650
+ value: 34.78
651
+ - type: map_at_3
652
+ value: 30.233999999999998
653
+ - type: map_at_5
654
+ value: 31.668000000000003
655
+ - type: mrr_at_1
656
+ value: 29.566
657
+ - type: mrr_at_10
658
+ value: 38.244
659
+ - type: mrr_at_100
660
+ value: 39.245000000000005
661
+ - type: mrr_at_1000
662
+ value: 39.296
663
+ - type: mrr_at_3
664
+ value: 35.864000000000004
665
+ - type: mrr_at_5
666
+ value: 36.919999999999995
667
+ - type: ndcg_at_1
668
+ value: 29.566
669
+ - type: ndcg_at_10
670
+ value: 39.127
671
+ - type: ndcg_at_100
672
+ value: 44.989000000000004
673
+ - type: ndcg_at_1000
674
+ value: 47.189
675
+ - type: ndcg_at_3
676
+ value: 34.039
677
+ - type: ndcg_at_5
678
+ value: 35.744
679
+ - type: precision_at_1
680
+ value: 29.566
681
+ - type: precision_at_10
682
+ value: 7.385999999999999
683
+ - type: precision_at_100
684
+ value: 1.204
685
+ - type: precision_at_1000
686
+ value: 0.158
687
+ - type: precision_at_3
688
+ value: 16.286
689
+ - type: precision_at_5
690
+ value: 11.484
691
+ - type: recall_at_1
692
+ value: 24.117
693
+ - type: recall_at_10
694
+ value: 51.559999999999995
695
+ - type: recall_at_100
696
+ value: 77.104
697
+ - type: recall_at_1000
698
+ value: 91.79899999999999
699
+ - type: recall_at_3
700
+ value: 36.82
701
+ - type: recall_at_5
702
+ value: 41.453
703
+ - task:
704
+ type: Retrieval
705
+ dataset:
706
+ type: BeIR/cqadupstack
707
+ name: MTEB CQADupstackRetrieval
708
+ config: default
709
+ split: test
710
+ revision: None
711
+ metrics:
712
+ - type: map_at_1
713
+ value: 25.17625
714
+ - type: map_at_10
715
+ value: 34.063916666666664
716
+ - type: map_at_100
717
+ value: 35.255500000000005
718
+ - type: map_at_1000
719
+ value: 35.37275
720
+ - type: map_at_3
721
+ value: 31.351666666666667
722
+ - type: map_at_5
723
+ value: 32.80608333333333
724
+ - type: mrr_at_1
725
+ value: 29.59783333333333
726
+ - type: mrr_at_10
727
+ value: 38.0925
728
+ - type: mrr_at_100
729
+ value: 38.957249999999995
730
+ - type: mrr_at_1000
731
+ value: 39.01608333333333
732
+ - type: mrr_at_3
733
+ value: 35.77625
734
+ - type: mrr_at_5
735
+ value: 37.04991666666667
736
+ - type: ndcg_at_1
737
+ value: 29.59783333333333
738
+ - type: ndcg_at_10
739
+ value: 39.343666666666664
740
+ - type: ndcg_at_100
741
+ value: 44.488249999999994
742
+ - type: ndcg_at_1000
743
+ value: 46.83358333333334
744
+ - type: ndcg_at_3
745
+ value: 34.69708333333333
746
+ - type: ndcg_at_5
747
+ value: 36.75075
748
+ - type: precision_at_1
749
+ value: 29.59783333333333
750
+ - type: precision_at_10
751
+ value: 6.884083333333332
752
+ - type: precision_at_100
753
+ value: 1.114
754
+ - type: precision_at_1000
755
+ value: 0.15108333333333332
756
+ - type: precision_at_3
757
+ value: 15.965250000000003
758
+ - type: precision_at_5
759
+ value: 11.246500000000001
760
+ - type: recall_at_1
761
+ value: 25.17625
762
+ - type: recall_at_10
763
+ value: 51.015999999999984
764
+ - type: recall_at_100
765
+ value: 73.60174999999998
766
+ - type: recall_at_1000
767
+ value: 89.849
768
+ - type: recall_at_3
769
+ value: 37.88399999999999
770
+ - type: recall_at_5
771
+ value: 43.24541666666666
772
+ - task:
773
+ type: Retrieval
774
+ dataset:
775
+ type: BeIR/cqadupstack
776
+ name: MTEB CQADupstackStatsRetrieval
777
+ config: default
778
+ split: test
779
+ revision: None
780
+ metrics:
781
+ - type: map_at_1
782
+ value: 24.537
783
+ - type: map_at_10
784
+ value: 31.081999999999997
785
+ - type: map_at_100
786
+ value: 32.042
787
+ - type: map_at_1000
788
+ value: 32.141
789
+ - type: map_at_3
790
+ value: 29.137
791
+ - type: map_at_5
792
+ value: 30.079
793
+ - type: mrr_at_1
794
+ value: 27.454
795
+ - type: mrr_at_10
796
+ value: 33.694
797
+ - type: mrr_at_100
798
+ value: 34.579
799
+ - type: mrr_at_1000
800
+ value: 34.649
801
+ - type: mrr_at_3
802
+ value: 32.004
803
+ - type: mrr_at_5
804
+ value: 32.794000000000004
805
+ - type: ndcg_at_1
806
+ value: 27.454
807
+ - type: ndcg_at_10
808
+ value: 34.915
809
+ - type: ndcg_at_100
810
+ value: 39.641
811
+ - type: ndcg_at_1000
812
+ value: 42.105
813
+ - type: ndcg_at_3
814
+ value: 31.276
815
+ - type: ndcg_at_5
816
+ value: 32.65
817
+ - type: precision_at_1
818
+ value: 27.454
819
+ - type: precision_at_10
820
+ value: 5.337
821
+ - type: precision_at_100
822
+ value: 0.8250000000000001
823
+ - type: precision_at_1000
824
+ value: 0.11199999999999999
825
+ - type: precision_at_3
826
+ value: 13.241
827
+ - type: precision_at_5
828
+ value: 8.895999999999999
829
+ - type: recall_at_1
830
+ value: 24.537
831
+ - type: recall_at_10
832
+ value: 44.324999999999996
833
+ - type: recall_at_100
834
+ value: 65.949
835
+ - type: recall_at_1000
836
+ value: 84.017
837
+ - type: recall_at_3
838
+ value: 33.857
839
+ - type: recall_at_5
840
+ value: 37.316
841
+ - task:
842
+ type: Retrieval
843
+ dataset:
844
+ type: BeIR/cqadupstack
845
+ name: MTEB CQADupstackTexRetrieval
846
+ config: default
847
+ split: test
848
+ revision: None
849
+ metrics:
850
+ - type: map_at_1
851
+ value: 17.122
852
+ - type: map_at_10
853
+ value: 24.32
854
+ - type: map_at_100
855
+ value: 25.338
856
+ - type: map_at_1000
857
+ value: 25.462
858
+ - type: map_at_3
859
+ value: 22.064
860
+ - type: map_at_5
861
+ value: 23.322000000000003
862
+ - type: mrr_at_1
863
+ value: 20.647
864
+ - type: mrr_at_10
865
+ value: 27.858
866
+ - type: mrr_at_100
867
+ value: 28.743999999999996
868
+ - type: mrr_at_1000
869
+ value: 28.819
870
+ - type: mrr_at_3
871
+ value: 25.769
872
+ - type: mrr_at_5
873
+ value: 26.964
874
+ - type: ndcg_at_1
875
+ value: 20.647
876
+ - type: ndcg_at_10
877
+ value: 28.849999999999998
878
+ - type: ndcg_at_100
879
+ value: 33.849000000000004
880
+ - type: ndcg_at_1000
881
+ value: 36.802
882
+ - type: ndcg_at_3
883
+ value: 24.799
884
+ - type: ndcg_at_5
885
+ value: 26.682
886
+ - type: precision_at_1
887
+ value: 20.647
888
+ - type: precision_at_10
889
+ value: 5.2170000000000005
890
+ - type: precision_at_100
891
+ value: 0.906
892
+ - type: precision_at_1000
893
+ value: 0.134
894
+ - type: precision_at_3
895
+ value: 11.769
896
+ - type: precision_at_5
897
+ value: 8.486
898
+ - type: recall_at_1
899
+ value: 17.122
900
+ - type: recall_at_10
901
+ value: 38.999
902
+ - type: recall_at_100
903
+ value: 61.467000000000006
904
+ - type: recall_at_1000
905
+ value: 82.716
906
+ - type: recall_at_3
907
+ value: 27.601
908
+ - type: recall_at_5
909
+ value: 32.471
910
+ - task:
911
+ type: Retrieval
912
+ dataset:
913
+ type: BeIR/cqadupstack
914
+ name: MTEB CQADupstackUnixRetrieval
915
+ config: default
916
+ split: test
917
+ revision: None
918
+ metrics:
919
+ - type: map_at_1
920
+ value: 24.396
921
+ - type: map_at_10
922
+ value: 33.415
923
+ - type: map_at_100
924
+ value: 34.521
925
+ - type: map_at_1000
926
+ value: 34.631
927
+ - type: map_at_3
928
+ value: 30.703999999999997
929
+ - type: map_at_5
930
+ value: 32.166
931
+ - type: mrr_at_1
932
+ value: 28.825
933
+ - type: mrr_at_10
934
+ value: 37.397000000000006
935
+ - type: mrr_at_100
936
+ value: 38.286
937
+ - type: mrr_at_1000
938
+ value: 38.346000000000004
939
+ - type: mrr_at_3
940
+ value: 35.028
941
+ - type: mrr_at_5
942
+ value: 36.32
943
+ - type: ndcg_at_1
944
+ value: 28.825
945
+ - type: ndcg_at_10
946
+ value: 38.656
947
+ - type: ndcg_at_100
948
+ value: 43.856
949
+ - type: ndcg_at_1000
950
+ value: 46.31
951
+ - type: ndcg_at_3
952
+ value: 33.793
953
+ - type: ndcg_at_5
954
+ value: 35.909
955
+ - type: precision_at_1
956
+ value: 28.825
957
+ - type: precision_at_10
958
+ value: 6.567
959
+ - type: precision_at_100
960
+ value: 1.0330000000000001
961
+ - type: precision_at_1000
962
+ value: 0.135
963
+ - type: precision_at_3
964
+ value: 15.516
965
+ - type: precision_at_5
966
+ value: 10.914
967
+ - type: recall_at_1
968
+ value: 24.396
969
+ - type: recall_at_10
970
+ value: 50.747
971
+ - type: recall_at_100
972
+ value: 73.477
973
+ - type: recall_at_1000
974
+ value: 90.801
975
+ - type: recall_at_3
976
+ value: 37.1
977
+ - type: recall_at_5
978
+ value: 42.589
979
+ - task:
+ type: Retrieval
+ dataset:
+ type: BeIR/cqadupstack
+ name: MTEB CQADupstackWebmastersRetrieval
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 25.072
+ - type: map_at_10
+ value: 34.307
+ - type: map_at_100
+ value: 35.725
+ - type: map_at_1000
+ value: 35.943999999999996
+ - type: map_at_3
+ value: 30.906
+ - type: map_at_5
+ value: 32.818000000000005
+ - type: mrr_at_1
+ value: 29.644
+ - type: mrr_at_10
+ value: 38.673
+ - type: mrr_at_100
+ value: 39.459
+ - type: mrr_at_1000
+ value: 39.527
+ - type: mrr_at_3
+ value: 35.771
+ - type: mrr_at_5
+ value: 37.332
+ - type: ndcg_at_1
+ value: 29.644
+ - type: ndcg_at_10
+ value: 40.548
+ - type: ndcg_at_100
+ value: 45.678999999999995
+ - type: ndcg_at_1000
+ value: 48.488
+ - type: ndcg_at_3
+ value: 34.887
+ - type: ndcg_at_5
+ value: 37.543
+ - type: precision_at_1
+ value: 29.644
+ - type: precision_at_10
+ value: 7.688000000000001
+ - type: precision_at_100
+ value: 1.482
+ - type: precision_at_1000
+ value: 0.23600000000000002
+ - type: precision_at_3
+ value: 16.206
+ - type: precision_at_5
+ value: 12.016
+ - type: recall_at_1
+ value: 25.072
+ - type: recall_at_10
+ value: 53.478
+ - type: recall_at_100
+ value: 76.07300000000001
+ - type: recall_at_1000
+ value: 93.884
+ - type: recall_at_3
+ value: 37.583
+ - type: recall_at_5
+ value: 44.464
+ - task:
+ type: Retrieval
+ dataset:
+ type: BeIR/cqadupstack
+ name: MTEB CQADupstackWordpressRetrieval
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 20.712
+ - type: map_at_10
+ value: 27.467999999999996
+ - type: map_at_100
+ value: 28.502
+ - type: map_at_1000
+ value: 28.610000000000003
+ - type: map_at_3
+ value: 24.887999999999998
+ - type: map_at_5
+ value: 26.273999999999997
+ - type: mrr_at_1
+ value: 22.736
+ - type: mrr_at_10
+ value: 29.553
+ - type: mrr_at_100
+ value: 30.485
+ - type: mrr_at_1000
+ value: 30.56
+ - type: mrr_at_3
+ value: 27.078999999999997
+ - type: mrr_at_5
+ value: 28.401
+ - type: ndcg_at_1
+ value: 22.736
+ - type: ndcg_at_10
+ value: 32.023
+ - type: ndcg_at_100
+ value: 37.158
+ - type: ndcg_at_1000
+ value: 39.823
+ - type: ndcg_at_3
+ value: 26.951999999999998
+ - type: ndcg_at_5
+ value: 29.281000000000002
+ - type: precision_at_1
+ value: 22.736
+ - type: precision_at_10
+ value: 5.213
+ - type: precision_at_100
+ value: 0.832
+ - type: precision_at_1000
+ value: 0.116
+ - type: precision_at_3
+ value: 11.459999999999999
+ - type: precision_at_5
+ value: 8.244
+ - type: recall_at_1
+ value: 20.712
+ - type: recall_at_10
+ value: 44.057
+ - type: recall_at_100
+ value: 67.944
+ - type: recall_at_1000
+ value: 87.925
+ - type: recall_at_3
+ value: 30.305
+ - type: recall_at_5
+ value: 36.071999999999996
+ - task:
+ type: Retrieval
+ dataset:
+ type: climate-fever
+ name: MTEB ClimateFEVER
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 10.181999999999999
+ - type: map_at_10
+ value: 16.66
+ - type: map_at_100
+ value: 18.273
+ - type: map_at_1000
+ value: 18.45
+ - type: map_at_3
+ value: 14.141
+ - type: map_at_5
+ value: 15.455
+ - type: mrr_at_1
+ value: 22.15
+ - type: mrr_at_10
+ value: 32.062000000000005
+ - type: mrr_at_100
+ value: 33.116
+ - type: mrr_at_1000
+ value: 33.168
+ - type: mrr_at_3
+ value: 28.827
+ - type: mrr_at_5
+ value: 30.892999999999997
+ - type: ndcg_at_1
+ value: 22.15
+ - type: ndcg_at_10
+ value: 23.532
+ - type: ndcg_at_100
+ value: 30.358
+ - type: ndcg_at_1000
+ value: 33.783
+ - type: ndcg_at_3
+ value: 19.222
+ - type: ndcg_at_5
+ value: 20.919999999999998
+ - type: precision_at_1
+ value: 22.15
+ - type: precision_at_10
+ value: 7.185999999999999
+ - type: precision_at_100
+ value: 1.433
+ - type: precision_at_1000
+ value: 0.207
+ - type: precision_at_3
+ value: 13.941
+ - type: precision_at_5
+ value: 10.906
+ - type: recall_at_1
+ value: 10.181999999999999
+ - type: recall_at_10
+ value: 28.104000000000003
+ - type: recall_at_100
+ value: 51.998999999999995
+ - type: recall_at_1000
+ value: 71.311
+ - type: recall_at_3
+ value: 17.698
+ - type: recall_at_5
+ value: 22.262999999999998
+ - task:
+ type: Retrieval
+ dataset:
+ type: dbpedia-entity
+ name: MTEB DBPedia
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 6.669
+ - type: map_at_10
+ value: 15.552
+ - type: map_at_100
+ value: 21.865000000000002
+ - type: map_at_1000
+ value: 23.268
+ - type: map_at_3
+ value: 11.309
+ - type: map_at_5
+ value: 13.084000000000001
+ - type: mrr_at_1
+ value: 55.50000000000001
+ - type: mrr_at_10
+ value: 66.46600000000001
+ - type: mrr_at_100
+ value: 66.944
+ - type: mrr_at_1000
+ value: 66.956
+ - type: mrr_at_3
+ value: 64.542
+ - type: mrr_at_5
+ value: 65.717
+ - type: ndcg_at_1
+ value: 44.75
+ - type: ndcg_at_10
+ value: 35.049
+ - type: ndcg_at_100
+ value: 39.073
+ - type: ndcg_at_1000
+ value: 46.208
+ - type: ndcg_at_3
+ value: 39.525
+ - type: ndcg_at_5
+ value: 37.156
+ - type: precision_at_1
+ value: 55.50000000000001
+ - type: precision_at_10
+ value: 27.800000000000004
+ - type: precision_at_100
+ value: 9.013
+ - type: precision_at_1000
+ value: 1.8800000000000001
+ - type: precision_at_3
+ value: 42.667
+ - type: precision_at_5
+ value: 36.0
+ - type: recall_at_1
+ value: 6.669
+ - type: recall_at_10
+ value: 21.811
+ - type: recall_at_100
+ value: 45.112
+ - type: recall_at_1000
+ value: 67.806
+ - type: recall_at_3
+ value: 13.373
+ - type: recall_at_5
+ value: 16.615
+ - task:
+ type: Classification
+ dataset:
+ type: mteb/emotion
+ name: MTEB EmotionClassification
+ config: default
+ split: test
+ revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
+ metrics:
+ - type: accuracy
+ value: 48.769999999999996
+ - type: f1
+ value: 42.91448356376592
+ - task:
+ type: Retrieval
+ dataset:
+ type: fever
+ name: MTEB FEVER
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 54.013
+ - type: map_at_10
+ value: 66.239
+ - type: map_at_100
+ value: 66.62599999999999
+ - type: map_at_1000
+ value: 66.644
+ - type: map_at_3
+ value: 63.965
+ - type: map_at_5
+ value: 65.45400000000001
+ - type: mrr_at_1
+ value: 58.221000000000004
+ - type: mrr_at_10
+ value: 70.43700000000001
+ - type: mrr_at_100
+ value: 70.744
+ - type: mrr_at_1000
+ value: 70.75099999999999
+ - type: mrr_at_3
+ value: 68.284
+ - type: mrr_at_5
+ value: 69.721
+ - type: ndcg_at_1
+ value: 58.221000000000004
+ - type: ndcg_at_10
+ value: 72.327
+ - type: ndcg_at_100
+ value: 73.953
+ - type: ndcg_at_1000
+ value: 74.312
+ - type: ndcg_at_3
+ value: 68.062
+ - type: ndcg_at_5
+ value: 70.56400000000001
+ - type: precision_at_1
+ value: 58.221000000000004
+ - type: precision_at_10
+ value: 9.521
+ - type: precision_at_100
+ value: 1.045
+ - type: precision_at_1000
+ value: 0.109
+ - type: precision_at_3
+ value: 27.348
+ - type: precision_at_5
+ value: 17.794999999999998
+ - type: recall_at_1
+ value: 54.013
+ - type: recall_at_10
+ value: 86.957
+ - type: recall_at_100
+ value: 93.911
+ - type: recall_at_1000
+ value: 96.38
+ - type: recall_at_3
+ value: 75.555
+ - type: recall_at_5
+ value: 81.671
+ - task:
+ type: Retrieval
+ dataset:
+ type: fiqa
+ name: MTEB FiQA2018
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 21.254
+ - type: map_at_10
+ value: 33.723
+ - type: map_at_100
+ value: 35.574
+ - type: map_at_1000
+ value: 35.730000000000004
+ - type: map_at_3
+ value: 29.473
+ - type: map_at_5
+ value: 31.543
+ - type: mrr_at_1
+ value: 41.358
+ - type: mrr_at_10
+ value: 49.498
+ - type: mrr_at_100
+ value: 50.275999999999996
+ - type: mrr_at_1000
+ value: 50.308
+ - type: mrr_at_3
+ value: 47.016000000000005
+ - type: mrr_at_5
+ value: 48.336
+ - type: ndcg_at_1
+ value: 41.358
+ - type: ndcg_at_10
+ value: 41.579
+ - type: ndcg_at_100
+ value: 48.455
+ - type: ndcg_at_1000
+ value: 51.165000000000006
+ - type: ndcg_at_3
+ value: 37.681
+ - type: ndcg_at_5
+ value: 38.49
+ - type: precision_at_1
+ value: 41.358
+ - type: precision_at_10
+ value: 11.543000000000001
+ - type: precision_at_100
+ value: 1.87
+ - type: precision_at_1000
+ value: 0.23600000000000002
+ - type: precision_at_3
+ value: 24.743000000000002
+ - type: precision_at_5
+ value: 17.994
+ - type: recall_at_1
+ value: 21.254
+ - type: recall_at_10
+ value: 48.698
+ - type: recall_at_100
+ value: 74.588
+ - type: recall_at_1000
+ value: 91.00200000000001
+ - type: recall_at_3
+ value: 33.939
+ - type: recall_at_5
+ value: 39.367000000000004
+ - task:
+ type: Retrieval
+ dataset:
+ type: hotpotqa
+ name: MTEB HotpotQA
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 35.922
+ - type: map_at_10
+ value: 52.32599999999999
+ - type: map_at_100
+ value: 53.18000000000001
+ - type: map_at_1000
+ value: 53.245
+ - type: map_at_3
+ value: 49.294
+ - type: map_at_5
+ value: 51.202999999999996
+ - type: mrr_at_1
+ value: 71.843
+ - type: mrr_at_10
+ value: 78.24600000000001
+ - type: mrr_at_100
+ value: 78.515
+ - type: mrr_at_1000
+ value: 78.527
+ - type: mrr_at_3
+ value: 77.17500000000001
+ - type: mrr_at_5
+ value: 77.852
+ - type: ndcg_at_1
+ value: 71.843
+ - type: ndcg_at_10
+ value: 61.379
+ - type: ndcg_at_100
+ value: 64.535
+ - type: ndcg_at_1000
+ value: 65.888
+ - type: ndcg_at_3
+ value: 56.958
+ - type: ndcg_at_5
+ value: 59.434
+ - type: precision_at_1
+ value: 71.843
+ - type: precision_at_10
+ value: 12.686
+ - type: precision_at_100
+ value: 1.517
+ - type: precision_at_1000
+ value: 0.16999999999999998
+ - type: precision_at_3
+ value: 35.778
+ - type: precision_at_5
+ value: 23.422
+ - type: recall_at_1
+ value: 35.922
+ - type: recall_at_10
+ value: 63.43
+ - type: recall_at_100
+ value: 75.868
+ - type: recall_at_1000
+ value: 84.88900000000001
+ - type: recall_at_3
+ value: 53.666000000000004
+ - type: recall_at_5
+ value: 58.555
+ - task:
+ type: Classification
+ dataset:
+ type: mteb/imdb
+ name: MTEB ImdbClassification
+ config: default
+ split: test
+ revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
+ metrics:
+ - type: accuracy
+ value: 79.4408
+ - type: ap
+ value: 73.52820871620366
+ - type: f1
+ value: 79.36240238685001
+ - task:
+ type: Retrieval
+ dataset:
+ type: msmarco
+ name: MTEB MSMARCO
+ config: default
+ split: dev
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 21.826999999999998
+ - type: map_at_10
+ value: 34.04
+ - type: map_at_100
+ value: 35.226
+ - type: map_at_1000
+ value: 35.275
+ - type: map_at_3
+ value: 30.165999999999997
+ - type: map_at_5
+ value: 32.318000000000005
+ - type: mrr_at_1
+ value: 22.464000000000002
+ - type: mrr_at_10
+ value: 34.631
+ - type: mrr_at_100
+ value: 35.752
+ - type: mrr_at_1000
+ value: 35.795
+ - type: mrr_at_3
+ value: 30.798
+ - type: mrr_at_5
+ value: 32.946999999999996
+ - type: ndcg_at_1
+ value: 22.464000000000002
+ - type: ndcg_at_10
+ value: 40.919
+ - type: ndcg_at_100
+ value: 46.632
+ - type: ndcg_at_1000
+ value: 47.833
+ - type: ndcg_at_3
+ value: 32.992
+ - type: ndcg_at_5
+ value: 36.834
+ - type: precision_at_1
+ value: 22.464000000000002
+ - type: precision_at_10
+ value: 6.494
+ - type: precision_at_100
+ value: 0.9369999999999999
+ - type: precision_at_1000
+ value: 0.104
+ - type: precision_at_3
+ value: 14.021
+ - type: precision_at_5
+ value: 10.347000000000001
+ - type: recall_at_1
+ value: 21.826999999999998
+ - type: recall_at_10
+ value: 62.132
+ - type: recall_at_100
+ value: 88.55199999999999
+ - type: recall_at_1000
+ value: 97.707
+ - type: recall_at_3
+ value: 40.541
+ - type: recall_at_5
+ value: 49.739
+ - task:
+ type: Classification
+ dataset:
+ type: mteb/mtop_domain
+ name: MTEB MTOPDomainClassification (en)
+ config: en
+ split: test
+ revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
+ metrics:
+ - type: accuracy
+ value: 95.68399452804377
+ - type: f1
+ value: 95.25490609832268
+ - task:
+ type: Classification
+ dataset:
+ type: mteb/mtop_intent
+ name: MTEB MTOPIntentClassification (en)
+ config: en
+ split: test
+ revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
+ metrics:
+ - type: accuracy
+ value: 83.15321477428182
+ - type: f1
+ value: 60.35476439087966
+ - task:
+ type: Classification
+ dataset:
+ type: mteb/amazon_massive_intent
+ name: MTEB MassiveIntentClassification (en)
+ config: en
+ split: test
+ revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
+ metrics:
+ - type: accuracy
+ value: 71.92669804976462
+ - type: f1
+ value: 69.22815107207565
+ - task:
+ type: Classification
+ dataset:
+ type: mteb/amazon_massive_scenario
+ name: MTEB MassiveScenarioClassification (en)
+ config: en
+ split: test
+ revision: 7d571f92784cd94a019292a1f45445077d0ef634
+ metrics:
+ - type: accuracy
+ value: 74.4855413584398
+ - type: f1
+ value: 72.92107516103387
+ - task:
+ type: Clustering
+ dataset:
+ type: mteb/medrxiv-clustering-p2p
+ name: MTEB MedrxivClusteringP2P
+ config: default
+ split: test
+ revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
+ metrics:
+ - type: v_measure
+ value: 32.412679360205544
+ - task:
+ type: Clustering
+ dataset:
+ type: mteb/medrxiv-clustering-s2s
+ name: MTEB MedrxivClusteringS2S
+ config: default
+ split: test
+ revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
+ metrics:
+ - type: v_measure
+ value: 28.09211869875204
+ - task:
+ type: Reranking
+ dataset:
+ type: mteb/mind_small
+ name: MTEB MindSmallReranking
+ config: default
+ split: test
+ revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
+ metrics:
+ - type: map
+ value: 30.540919056982545
+ - type: mrr
+ value: 31.529904607063536
+ - task:
+ type: Retrieval
+ dataset:
+ type: nfcorpus
+ name: MTEB NFCorpus
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 5.745
+ - type: map_at_10
+ value: 12.013
+ - type: map_at_100
+ value: 15.040000000000001
+ - type: map_at_1000
+ value: 16.427
+ - type: map_at_3
+ value: 8.841000000000001
+ - type: map_at_5
+ value: 10.289
+ - type: mrr_at_1
+ value: 45.201
+ - type: mrr_at_10
+ value: 53.483999999999995
+ - type: mrr_at_100
+ value: 54.20700000000001
+ - type: mrr_at_1000
+ value: 54.252
+ - type: mrr_at_3
+ value: 51.29
+ - type: mrr_at_5
+ value: 52.73
+ - type: ndcg_at_1
+ value: 43.808
+ - type: ndcg_at_10
+ value: 32.445
+ - type: ndcg_at_100
+ value: 30.031000000000002
+ - type: ndcg_at_1000
+ value: 39.007
+ - type: ndcg_at_3
+ value: 37.204
+ - type: ndcg_at_5
+ value: 35.07
+ - type: precision_at_1
+ value: 45.201
+ - type: precision_at_10
+ value: 23.684
+ - type: precision_at_100
+ value: 7.600999999999999
+ - type: precision_at_1000
+ value: 2.043
+ - type: precision_at_3
+ value: 33.953
+ - type: precision_at_5
+ value: 29.412
+ - type: recall_at_1
+ value: 5.745
+ - type: recall_at_10
+ value: 16.168
+ - type: recall_at_100
+ value: 30.875999999999998
+ - type: recall_at_1000
+ value: 62.686
+ - type: recall_at_3
+ value: 9.75
+ - type: recall_at_5
+ value: 12.413
+ - task:
+ type: Retrieval
+ dataset:
+ type: nq
+ name: MTEB NQ
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 37.828
+ - type: map_at_10
+ value: 53.239000000000004
+ - type: map_at_100
+ value: 54.035999999999994
+ - type: map_at_1000
+ value: 54.067
+ - type: map_at_3
+ value: 49.289
+ - type: map_at_5
+ value: 51.784
+ - type: mrr_at_1
+ value: 42.497
+ - type: mrr_at_10
+ value: 55.916999999999994
+ - type: mrr_at_100
+ value: 56.495
+ - type: mrr_at_1000
+ value: 56.516999999999996
+ - type: mrr_at_3
+ value: 52.800000000000004
+ - type: mrr_at_5
+ value: 54.722
+ - type: ndcg_at_1
+ value: 42.468
+ - type: ndcg_at_10
+ value: 60.437
+ - type: ndcg_at_100
+ value: 63.731
+ - type: ndcg_at_1000
+ value: 64.41799999999999
+ - type: ndcg_at_3
+ value: 53.230999999999995
+ - type: ndcg_at_5
+ value: 57.26
+ - type: precision_at_1
+ value: 42.468
+ - type: precision_at_10
+ value: 9.47
+ - type: precision_at_100
+ value: 1.1360000000000001
+ - type: precision_at_1000
+ value: 0.12
+ - type: precision_at_3
+ value: 23.724999999999998
+ - type: precision_at_5
+ value: 16.593
+ - type: recall_at_1
+ value: 37.828
+ - type: recall_at_10
+ value: 79.538
+ - type: recall_at_100
+ value: 93.646
+ - type: recall_at_1000
+ value: 98.72999999999999
+ - type: recall_at_3
+ value: 61.134
+ - type: recall_at_5
+ value: 70.377
+ - task:
+ type: Retrieval
+ dataset:
+ type: quora
+ name: MTEB QuoraRetrieval
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 70.548
+ - type: map_at_10
+ value: 84.466
+ - type: map_at_100
+ value: 85.10600000000001
+ - type: map_at_1000
+ value: 85.123
+ - type: map_at_3
+ value: 81.57600000000001
+ - type: map_at_5
+ value: 83.399
+ - type: mrr_at_1
+ value: 81.24
+ - type: mrr_at_10
+ value: 87.457
+ - type: mrr_at_100
+ value: 87.574
+ - type: mrr_at_1000
+ value: 87.575
+ - type: mrr_at_3
+ value: 86.507
+ - type: mrr_at_5
+ value: 87.205
+ - type: ndcg_at_1
+ value: 81.25
+ - type: ndcg_at_10
+ value: 88.203
+ - type: ndcg_at_100
+ value: 89.457
+ - type: ndcg_at_1000
+ value: 89.563
+ - type: ndcg_at_3
+ value: 85.465
+ - type: ndcg_at_5
+ value: 87.007
+ - type: precision_at_1
+ value: 81.25
+ - type: precision_at_10
+ value: 13.373
+ - type: precision_at_100
+ value: 1.5270000000000001
+ - type: precision_at_1000
+ value: 0.157
+ - type: precision_at_3
+ value: 37.417
+ - type: precision_at_5
+ value: 24.556
+ - type: recall_at_1
+ value: 70.548
+ - type: recall_at_10
+ value: 95.208
+ - type: recall_at_100
+ value: 99.514
+ - type: recall_at_1000
+ value: 99.988
+ - type: recall_at_3
+ value: 87.214
+ - type: recall_at_5
+ value: 91.696
+ - task:
+ type: Clustering
+ dataset:
+ type: mteb/reddit-clustering
+ name: MTEB RedditClustering
+ config: default
+ split: test
+ revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
+ metrics:
+ - type: v_measure
+ value: 53.04822095496839
+ - task:
+ type: Clustering
+ dataset:
+ type: mteb/reddit-clustering-p2p
+ name: MTEB RedditClusteringP2P
+ config: default
+ split: test
+ revision: 282350215ef01743dc01b456c7f5241fa8937f16
+ metrics:
+ - type: v_measure
+ value: 60.30778476474675
+ - task:
+ type: Retrieval
+ dataset:
+ type: scidocs
+ name: MTEB SCIDOCS
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 4.692
+ - type: map_at_10
+ value: 11.766
+ - type: map_at_100
+ value: 13.904
+ - type: map_at_1000
+ value: 14.216999999999999
+ - type: map_at_3
+ value: 8.245
+ - type: map_at_5
+ value: 9.92
+ - type: mrr_at_1
+ value: 23.0
+ - type: mrr_at_10
+ value: 33.78
+ - type: mrr_at_100
+ value: 34.922
+ - type: mrr_at_1000
+ value: 34.973
+ - type: mrr_at_3
+ value: 30.2
+ - type: mrr_at_5
+ value: 32.565
+ - type: ndcg_at_1
+ value: 23.0
+ - type: ndcg_at_10
+ value: 19.863
+ - type: ndcg_at_100
+ value: 28.141
+ - type: ndcg_at_1000
+ value: 33.549
+ - type: ndcg_at_3
+ value: 18.434
+ - type: ndcg_at_5
+ value: 16.384
+ - type: precision_at_1
+ value: 23.0
+ - type: precision_at_10
+ value: 10.39
+ - type: precision_at_100
+ value: 2.235
+ - type: precision_at_1000
+ value: 0.35300000000000004
+ - type: precision_at_3
+ value: 17.133000000000003
+ - type: precision_at_5
+ value: 14.44
+ - type: recall_at_1
+ value: 4.692
+ - type: recall_at_10
+ value: 21.025
+ - type: recall_at_100
+ value: 45.324999999999996
+ - type: recall_at_1000
+ value: 71.675
+ - type: recall_at_3
+ value: 10.440000000000001
+ - type: recall_at_5
+ value: 14.64
+ - task:
+ type: STS
+ dataset:
+ type: mteb/sickr-sts
+ name: MTEB SICK-R
+ config: default
+ split: test
+ revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
+ metrics:
+ - type: cos_sim_pearson
+ value: 84.96178184892842
+ - type: cos_sim_spearman
+ value: 79.6487740813199
+ - type: euclidean_pearson
+ value: 82.06661161625023
+ - type: euclidean_spearman
+ value: 79.64876769031183
+ - type: manhattan_pearson
+ value: 82.07061164575131
+ - type: manhattan_spearman
+ value: 79.65197039464537
+ - task:
+ type: STS
+ dataset:
+ type: mteb/sts12-sts
+ name: MTEB STS12
+ config: default
+ split: test
+ revision: a0d554a64d88156834ff5ae9920b964011b16384
+ metrics:
+ - type: cos_sim_pearson
+ value: 84.15305604100027
+ - type: cos_sim_spearman
+ value: 74.27447427941591
+ - type: euclidean_pearson
+ value: 80.52737337565307
+ - type: euclidean_spearman
+ value: 74.27416077132192
+ - type: manhattan_pearson
+ value: 80.53728571140387
+ - type: manhattan_spearman
+ value: 74.28853605753457
+ - task:
+ type: STS
+ dataset:
+ type: mteb/sts13-sts
+ name: MTEB STS13
+ config: default
+ split: test
+ revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
+ metrics:
+ - type: cos_sim_pearson
+ value: 83.44386080639279
+ - type: cos_sim_spearman
+ value: 84.17947648159536
+ - type: euclidean_pearson
+ value: 83.34145388129387
+ - type: euclidean_spearman
+ value: 84.17947648159536
+ - type: manhattan_pearson
+ value: 83.30699061927966
+ - type: manhattan_spearman
+ value: 84.18125737380451
+ - task:
+ type: STS
+ dataset:
+ type: mteb/sts14-sts
+ name: MTEB STS14
+ config: default
+ split: test
+ revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
+ metrics:
+ - type: cos_sim_pearson
+ value: 81.57392220985612
+ - type: cos_sim_spearman
+ value: 78.80745014464101
+ - type: euclidean_pearson
+ value: 80.01660371487199
+ - type: euclidean_spearman
+ value: 78.80741240102256
+ - type: manhattan_pearson
+ value: 79.96810779507953
+ - type: manhattan_spearman
+ value: 78.75600400119448
+ - task:
+ type: STS
+ dataset:
+ type: mteb/sts15-sts
+ name: MTEB STS15
+ config: default
+ split: test
+ revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
+ metrics:
+ - type: cos_sim_pearson
+ value: 86.85421063026625
+ - type: cos_sim_spearman
+ value: 87.55320285299192
+ - type: euclidean_pearson
+ value: 86.69750143323517
+ - type: euclidean_spearman
+ value: 87.55320284326378
+ - type: manhattan_pearson
+ value: 86.63379169960379
+ - type: manhattan_spearman
+ value: 87.4815029877984
+ - task:
+ type: STS
+ dataset:
+ type: mteb/sts16-sts
+ name: MTEB STS16
+ config: default
+ split: test
+ revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
+ metrics:
+ - type: cos_sim_pearson
+ value: 84.31314130411842
+ - type: cos_sim_spearman
+ value: 85.3489588181433
+ - type: euclidean_pearson
+ value: 84.13240933463535
+ - type: euclidean_spearman
+ value: 85.34902871403281
+ - type: manhattan_pearson
+ value: 84.01183086503559
+ - type: manhattan_spearman
+ value: 85.19316703166102
+ - task:
+ type: STS
+ dataset:
+ type: mteb/sts17-crosslingual-sts
+ name: MTEB STS17 (en-en)
+ config: en-en
+ split: test
+ revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
+ metrics:
+ - type: cos_sim_pearson
+ value: 89.09979781689536
+ - type: cos_sim_spearman
+ value: 88.87813323759015
+ - type: euclidean_pearson
+ value: 88.65413031123792
+ - type: euclidean_spearman
+ value: 88.87813323759015
+ - type: manhattan_pearson
+ value: 88.61818758256024
+ - type: manhattan_spearman
+ value: 88.81044100494604
+ - task:
+ type: STS
+ dataset:
+ type: mteb/sts22-crosslingual-sts
+ name: MTEB STS22 (en)
+ config: en
+ split: test
+ revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
+ metrics:
+ - type: cos_sim_pearson
+ value: 62.30693258111531
+ - type: cos_sim_spearman
+ value: 62.195516523251946
+ - type: euclidean_pearson
+ value: 62.951283701049476
+ - type: euclidean_spearman
+ value: 62.195516523251946
+ - type: manhattan_pearson
+ value: 63.068322281439535
+ - type: manhattan_spearman
+ value: 62.10621171028406
+ - task:
+ type: STS
+ dataset:
+ type: mteb/stsbenchmark-sts
+ name: MTEB STSBenchmark
+ config: default
+ split: test
+ revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
+ metrics:
+ - type: cos_sim_pearson
+ value: 84.27092833763909
+ - type: cos_sim_spearman
+ value: 84.84429717949759
+ - type: euclidean_pearson
+ value: 84.8516966060792
+ - type: euclidean_spearman
+ value: 84.84429717949759
+ - type: manhattan_pearson
+ value: 84.82203139242881
+ - type: manhattan_spearman
+ value: 84.8358503952945
+ - task:
+ type: Reranking
+ dataset:
+ type: mteb/scidocs-reranking
+ name: MTEB SciDocsRR
+ config: default
+ split: test
+ revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
+ metrics:
+ - type: map
+ value: 83.10290863981409
+ - type: mrr
+ value: 95.31168450286097
+ - task:
+ type: Retrieval
+ dataset:
+ type: scifact
+ name: MTEB SciFact
+ config: default
+ split: test
+ revision: None
+ metrics:
+ - type: map_at_1
+ value: 52.161
+ - type: map_at_10
+ value: 62.138000000000005
+ - type: map_at_100
+ value: 62.769
+ - type: map_at_1000
+ value: 62.812
+ - type: map_at_3
+ value: 59.111000000000004
+ - type: map_at_5
+ value: 60.995999999999995
+ - type: mrr_at_1
+ value: 55.333
+ - type: mrr_at_10
+ value: 63.504000000000005
+ - type: mrr_at_100
+ value: 64.036
+ - type: mrr_at_1000
+ value: 64.08
+ - type: mrr_at_3
+ value: 61.278
+ - type: mrr_at_5
+ value: 62.778
+ - type: ndcg_at_1
+ value: 55.333
+ - type: ndcg_at_10
+ value: 66.678
+ - type: ndcg_at_100
+ value: 69.415
+ - type: ndcg_at_1000
+ value: 70.453
+ - type: ndcg_at_3
+ value: 61.755
+ - type: ndcg_at_5
+ value: 64.546
+ - type: precision_at_1
+ value: 55.333
+ - type: precision_at_10
+ value: 9.033
+ - type: precision_at_100
+ value: 1.043
+ - type: precision_at_1000
+ value: 0.11199999999999999
+ - type: precision_at_3
+ value: 24.221999999999998
+ - type: precision_at_5
+ value: 16.333000000000002
+ - type: recall_at_1
+ value: 52.161
+ - type: recall_at_10
+ value: 79.156
+ - type: recall_at_100
+ value: 91.333
+ - type: recall_at_1000
+ value: 99.333
+ - type: recall_at_3
+ value: 66.43299999999999
+ - type: recall_at_5
+ value: 73.272
+ - task:
+ type: PairClassification
+ dataset:
+ type: mteb/sprintduplicatequestions-pairclassification
+ name: MTEB SprintDuplicateQuestions
+ config: default
+ split: test
+ revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
+ metrics:
+ - type: cos_sim_accuracy
+ value: 99.81287128712871
+ - type: cos_sim_ap
+ value: 95.30034785910676
+ - type: cos_sim_f1
+ value: 90.28629856850716
+ - type: cos_sim_precision
+ value: 92.36401673640168
+ - type: cos_sim_recall
+ value: 88.3
+ - type: dot_accuracy
+ value: 99.81287128712871
+ - type: dot_ap
+ value: 95.30034785910676
+ - type: dot_f1
+ value: 90.28629856850716
+ - type: dot_precision
+ value: 92.36401673640168
+ - type: dot_recall
+ value: 88.3
+ - type: euclidean_accuracy
+ value: 99.81287128712871
+ - type: euclidean_ap
+ value: 95.30034785910676
+ - type: euclidean_f1
+ value: 90.28629856850716
+ - type: euclidean_precision
+ value: 92.36401673640168
+ - type: euclidean_recall
+ value: 88.3
+ - type: manhattan_accuracy
+ value: 99.80990099009901
+ - type: manhattan_ap
+ value: 95.26880751950654
+ - type: manhattan_f1
+ value: 90.22177419354838
+ - type: manhattan_precision
+ value: 90.95528455284553
+ - type: manhattan_recall
+ value: 89.5
+ - type: max_accuracy
+ value: 99.81287128712871
+ - type: max_ap
+ value: 95.30034785910676
+ - type: max_f1
+ value: 90.28629856850716
+ - task:
2271
+ type: Clustering
2272
+ dataset:
2273
+ type: mteb/stackexchange-clustering
2274
+ name: MTEB StackExchangeClustering
2275
+ config: default
2276
+ split: test
2277
+ revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
2278
+ metrics:
2279
+ - type: v_measure
2280
+ value: 58.518662504351184
2281
+ - task:
2282
+ type: Clustering
2283
+ dataset:
2284
+ type: mteb/stackexchange-clustering-p2p
2285
+ name: MTEB StackExchangeClusteringP2P
2286
+ config: default
2287
+ split: test
2288
+ revision: 815ca46b2622cec33ccafc3735d572c266efdb44
2289
+ metrics:
2290
+ - type: v_measure
2291
+ value: 34.96168178378587
2292
+ - task:
2293
+ type: Reranking
2294
+ dataset:
2295
+ type: mteb/stackoverflowdupquestions-reranking
2296
+ name: MTEB StackOverflowDupQuestions
2297
+ config: default
2298
+ split: test
2299
+ revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
2300
+ metrics:
2301
+ - type: map
2302
+ value: 52.04862593471896
2303
+ - type: mrr
2304
+ value: 52.97238402936932
2305
+ - task:
2306
+ type: Summarization
2307
+ dataset:
2308
+ type: mteb/summeval
2309
+ name: MTEB SummEval
2310
+ config: default
2311
+ split: test
2312
+ revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
2313
+ metrics:
2314
+ - type: cos_sim_pearson
2315
+ value: 30.092545236479946
2316
+ - type: cos_sim_spearman
2317
+ value: 31.599851000175498
2318
+ - type: dot_pearson
2319
+ value: 30.092542723901676
2320
+ - type: dot_spearman
2321
+ value: 31.599851000175498
2322
+   - task:
+       type: Retrieval
+     dataset:
+       type: trec-covid
+       name: MTEB TRECCOVID
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 0.189
+     - type: map_at_10
+       value: 1.662
+     - type: map_at_100
+       value: 9.384
+     - type: map_at_1000
+       value: 22.669
+     - type: map_at_3
+       value: 0.5559999999999999
+     - type: map_at_5
+       value: 0.9039999999999999
+     - type: mrr_at_1
+       value: 68.0
+     - type: mrr_at_10
+       value: 81.01899999999999
+     - type: mrr_at_100
+       value: 81.01899999999999
+     - type: mrr_at_1000
+       value: 81.01899999999999
+     - type: mrr_at_3
+       value: 79.333
+     - type: mrr_at_5
+       value: 80.733
+     - type: ndcg_at_1
+       value: 63.0
+     - type: ndcg_at_10
+       value: 65.913
+     - type: ndcg_at_100
+       value: 51.895
+     - type: ndcg_at_1000
+       value: 46.967
+     - type: ndcg_at_3
+       value: 65.49199999999999
+     - type: ndcg_at_5
+       value: 66.69699999999999
+     - type: precision_at_1
+       value: 68.0
+     - type: precision_at_10
+       value: 71.6
+     - type: precision_at_100
+       value: 53.66
+     - type: precision_at_1000
+       value: 21.124000000000002
+     - type: precision_at_3
+       value: 72.667
+     - type: precision_at_5
+       value: 74.0
+     - type: recall_at_1
+       value: 0.189
+     - type: recall_at_10
+       value: 1.913
+     - type: recall_at_100
+       value: 12.601999999999999
+     - type: recall_at_1000
+       value: 44.296
+     - type: recall_at_3
+       value: 0.605
+     - type: recall_at_5
+       value: 1.018
+   - task:
+       type: Retrieval
+     dataset:
+       type: webis-touche2020
+       name: MTEB Touche2020
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map_at_1
+       value: 2.701
+     - type: map_at_10
+       value: 10.445
+     - type: map_at_100
+       value: 17.324
+     - type: map_at_1000
+       value: 19.161
+     - type: map_at_3
+       value: 5.497
+     - type: map_at_5
+       value: 7.278
+     - type: mrr_at_1
+       value: 30.612000000000002
+     - type: mrr_at_10
+       value: 45.534
+     - type: mrr_at_100
+       value: 45.792
+     - type: mrr_at_1000
+       value: 45.806999999999995
+     - type: mrr_at_3
+       value: 37.755
+     - type: mrr_at_5
+       value: 43.469
+     - type: ndcg_at_1
+       value: 26.531
+     - type: ndcg_at_10
+       value: 26.235000000000003
+     - type: ndcg_at_100
+       value: 39.17
+     - type: ndcg_at_1000
+       value: 51.038
+     - type: ndcg_at_3
+       value: 23.625
+     - type: ndcg_at_5
+       value: 24.338
+     - type: precision_at_1
+       value: 30.612000000000002
+     - type: precision_at_10
+       value: 24.285999999999998
+     - type: precision_at_100
+       value: 8.224
+     - type: precision_at_1000
+       value: 1.6179999999999999
+     - type: precision_at_3
+       value: 24.490000000000002
+     - type: precision_at_5
+       value: 24.898
+     - type: recall_at_1
+       value: 2.701
+     - type: recall_at_10
+       value: 17.997
+     - type: recall_at_100
+       value: 51.766999999999996
+     - type: recall_at_1000
+       value: 87.863
+     - type: recall_at_3
+       value: 6.295000000000001
+     - type: recall_at_5
+       value: 9.993
+   - task:
+       type: Classification
+     dataset:
+       type: mteb/toxic_conversations_50k
+       name: MTEB ToxicConversationsClassification
+       config: default
+       split: test
+       revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
+     metrics:
+     - type: accuracy
+       value: 73.3474
+     - type: ap
+       value: 15.393431414459924
+     - type: f1
+       value: 56.466681887882416
+   - task:
+       type: Classification
+     dataset:
+       type: mteb/tweet_sentiment_extraction
+       name: MTEB TweetSentimentExtractionClassification
+       config: default
+       split: test
+       revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
+     metrics:
+     - type: accuracy
+       value: 62.062818336163
+     - type: f1
+       value: 62.11230840463252
+   - task:
+       type: Clustering
+     dataset:
+       type: mteb/twentynewsgroups-clustering
+       name: MTEB TwentyNewsgroupsClustering
+       config: default
+       split: test
+       revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
+     metrics:
+     - type: v_measure
+       value: 42.464892820845115
+   - task:
+       type: PairClassification
+     dataset:
+       type: mteb/twittersemeval2015-pairclassification
+       name: MTEB TwitterSemEval2015
+       config: default
+       split: test
+       revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
+     metrics:
+     - type: cos_sim_accuracy
+       value: 86.15962329379508
+     - type: cos_sim_ap
+       value: 74.73674057919256
+     - type: cos_sim_f1
+       value: 68.81245642574947
+     - type: cos_sim_precision
+       value: 61.48255813953488
+     - type: cos_sim_recall
+       value: 78.12664907651715
+     - type: dot_accuracy
+       value: 86.15962329379508
+     - type: dot_ap
+       value: 74.7367634988281
+     - type: dot_f1
+       value: 68.81245642574947
+     - type: dot_precision
+       value: 61.48255813953488
+     - type: dot_recall
+       value: 78.12664907651715
+     - type: euclidean_accuracy
+       value: 86.15962329379508
+     - type: euclidean_ap
+       value: 74.7367761466634
+     - type: euclidean_f1
+       value: 68.81245642574947
+     - type: euclidean_precision
+       value: 61.48255813953488
+     - type: euclidean_recall
+       value: 78.12664907651715
+     - type: manhattan_accuracy
+       value: 86.21326816474935
+     - type: manhattan_ap
+       value: 74.64416473733951
+     - type: manhattan_f1
+       value: 68.80924855491331
+     - type: manhattan_precision
+       value: 61.23456790123457
+     - type: manhattan_recall
+       value: 78.52242744063325
+     - type: max_accuracy
+       value: 86.21326816474935
+     - type: max_ap
+       value: 74.7367761466634
+     - type: max_f1
+       value: 68.81245642574947
+   - task:
+       type: PairClassification
+     dataset:
+       type: mteb/twitterurlcorpus-pairclassification
+       name: MTEB TwitterURLCorpus
+       config: default
+       split: test
+       revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
+     metrics:
+     - type: cos_sim_accuracy
+       value: 88.97620988085536
+     - type: cos_sim_ap
+       value: 86.08680845745758
+     - type: cos_sim_f1
+       value: 78.02793637114438
+     - type: cos_sim_precision
+       value: 73.11082699683736
+     - type: cos_sim_recall
+       value: 83.65414228518632
+     - type: dot_accuracy
+       value: 88.97620988085536
+     - type: dot_ap
+       value: 86.08681149437946
+     - type: dot_f1
+       value: 78.02793637114438
+     - type: dot_precision
+       value: 73.11082699683736
+     - type: dot_recall
+       value: 83.65414228518632
+     - type: euclidean_accuracy
+       value: 88.97620988085536
+     - type: euclidean_ap
+       value: 86.08681215460771
+     - type: euclidean_f1
+       value: 78.02793637114438
+     - type: euclidean_precision
+       value: 73.11082699683736
+     - type: euclidean_recall
+       value: 83.65414228518632
+     - type: manhattan_accuracy
+       value: 88.88888888888889
+     - type: manhattan_ap
+       value: 86.02916327562438
+     - type: manhattan_f1
+       value: 78.02063045516843
+     - type: manhattan_precision
+       value: 73.38851947346994
+     - type: manhattan_recall
+       value: 83.2768709578072
+     - type: max_accuracy
+       value: 88.97620988085536
+     - type: max_ap
+       value: 86.08681215460771
+     - type: max_f1
+       value: 78.02793637114438
+ ---
+ <br><br>
+
+ <p align="center">
+ <img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
+ </p>
+
+
+ <p align="center">
+ <b>The text embedding model suite trained by the <a href="https://jina.ai/"><b>Jina AI</b></a> <a href="https://github.com/jina-ai/finetuner"><b>Finetuner</b></a> team.</b>
+ </p>
+
+
2623
+ ## Intended Usage & Model Info
+
+ `jina-embeddings-v2-base-en` is an English, monolingual **embedding model** supporting an **8192-token sequence length**.
+ It is based on a BERT architecture (JinaBert) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence lengths.
+ The backbone `jina-bert-v2-base-en` is pretrained on the C4 dataset.
+ The model is further trained on Jina AI's collection of more than 400 million sentence pairs and hard negatives.
+ These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
+
+ The embedding model was trained with a sequence length of 512, but extrapolates to 8k tokens (or even longer) thanks to ALiBi.
+ This makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, and RAG and LLM-based generative search.
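The ALiBi mechanism mentioned above replaces learned positional embeddings with a distance-proportional penalty added to the attention logits; in the symmetric bidirectional variant, the penalty depends only on the absolute distance `|i - j|`, which is why the model can extrapolate past its 512-token training length. A minimal plain-Python sketch (the slope schedule follows the ALiBi paper for power-of-two head counts; the exact bias construction inside JinaBert may differ):

```python
def alibi_slopes(num_heads):
    # Geometric slope schedule from the ALiBi paper: 2^(-8/n), 2^(-16/n), ...
    # (this simple form assumes num_heads is a power of two).
    return [2 ** (-8.0 * (k + 1) / num_heads) for k in range(num_heads)]

def symmetric_alibi_bias(seq_len, slope):
    # Bias added to attention logits: -slope * |i - j|.
    # Symmetric, so each token is penalized equally for left and right context.
    return [[-slope * abs(i - j) for j in range(seq_len)] for i in range(seq_len)]

slopes = alibi_slopes(8)                      # e.g. 8 attention heads
bias = symmetric_alibi_bias(4, slopes[0])
# bias[i][i] == 0 and the penalty grows linearly with distance, so no
# position ever sees an out-of-range embedding at longer inference lengths.
```

Because the bias is defined for any distance, inference-time sequences longer than those seen in training simply receive larger penalties rather than undefined positions.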
+
+ With a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference.
+ Additionally, we provide the following embedding models:
+
+ **V1 (Based on T5, 512 Seq)**
+
+ - [`jina-embeddings-v1-small-en`](https://huggingface.co/jinaai/jina-embedding-s-en-v1): 35 million parameters.
+ - [`jina-embeddings-v1-base-en`](https://huggingface.co/jinaai/jina-embedding-b-en-v1): 110 million parameters.
+ - [`jina-embeddings-v1-large-en`](https://huggingface.co/jinaai/jina-embedding-l-en-v1): 330 million parameters.
+
+ **V2 (Based on JinaBert, 8k Seq)**
+
+ - [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
+ - [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters **(you are here)**.
+ - [`jina-embeddings-v2-large-en`](): 435 million parameters (releasing soon).
+
+ ## Data & Parameters
+
+ Please refer to the Jina Embeddings V2 [technical report](https://arxiv.org/abs/2310.19923) for details on the training data and parameters.
+
+ ## Usage
+
+ You can use the Jina Embeddings models directly from the `transformers` package:
+ ```python
+ !pip install transformers
+ from transformers import AutoModel
+ from numpy.linalg import norm
+
+ cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))
+ model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)  # trust_remote_code is needed to use the encode method
+ embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
+ print(cos_sim(embeddings[0], embeddings[1]))
+ ```
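For context, the `encode` convenience method pools the token embeddings into one sentence vector by mean pooling (this repository's `config.json` sets `"emb_pooler": "mean"`). A plain-Python sketch of attention-mask-aware mean pooling over made-up toy numbers (the real implementation operates on PyTorch tensors):

```python
def mean_pool(token_embeddings, attention_mask):
    # Average the token vectors, counting only non-padding positions
    # (positions where the attention mask is 1).
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            total = [t + v for t, v in zip(total, vec)]
            count += 1
    return [t / count for t in total]

# Three toy token vectors; the last one is padding and must be ignored.
tokens = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # [2.0, 3.0]
```

Mean pooling keeps the output dimensionality fixed at the hidden size (768 for this model) regardless of input length.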
+
+ If you only want to handle shorter sequences, such as 2k tokens, pass the `max_length` parameter to the `encode` function:
+
+ ```python
+ embeddings = model.encode(
+     ['Very long ... document'],
+     max_length=2048
+ )
+ ```
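For documents that exceed even the 8k limit, a common model-agnostic workaround is to split the text, embed each piece, and average the chunk embeddings. A sketch with a hypothetical whitespace-based chunker (a real pipeline should count tokenizer tokens rather than words, and `encode` here stands in for any batch embedding function such as `model.encode`):

```python
def chunk_words(text, chunk_size):
    # Hypothetical helper: split on whitespace into fixed-size word chunks.
    words = text.split()
    return [' '.join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

def embed_long(text, encode, chunk_size=1000):
    # encode: callable mapping a list of strings to a list of vectors.
    # The chunk embeddings are averaged into a single document vector.
    chunks = chunk_words(text, chunk_size)
    vectors = encode(chunks)
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]

# Toy stand-in for model.encode: embeds each chunk as [its word count].
fake_encode = lambda chunks: [[float(len(c.split()))] for c in chunks]
result = embed_long('one two three four five', fake_encode, chunk_size=2)
```

Averaging is the simplest aggregation; depending on the retrieval task, keeping per-chunk vectors and taking the maximum similarity at query time can work better.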
+
+ *Alternatively, you can use Jina AI's [Embedding platform](https://jina.ai/embeddings/) for fully-managed access to the Jina Embeddings models.*
+
+ ## Fine-tuning
+
+ To adapt the model to your own data, please consider using [Finetuner](https://github.com/jina-ai/finetuner).
+
+ ## Plans
+
+ The development of new bilingual models is currently underway, mainly targeting German and Spanish.
+ The upcoming models will be called `jina-embeddings-v2-base-de/es`.
+
+ ## Contact
+
+ Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
+
+ ## Citation
+
+ If you find Jina Embeddings useful in your research, please cite the following paper:
+
+ ```bibtex
+ @misc{günther2023jina,
+       title={Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents},
+       author={Michael Günther and Jackmin Ong and Isabelle Mohr and Alaeddine Abdessalem and Tanguy Abel and Mohammad Kalim Akram and Susana Guzman and Georgios Mastrapas and Saba Sturua and Bo Wang and Maximilian Werk and Nan Wang and Han Xiao},
+       year={2023},
+       eprint={2310.19923},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ ```
+
config.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "_name_or_path": "jinaai/jina-bert-implementation",
+   "model_max_length": 8192,
+   "architectures": [
+     "JinaBertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.0,
+   "auto_map": {
+     "AutoConfig": "jinaai/jina-bert-implementation--configuration_bert.JinaBertConfig",
+     "AutoModelForMaskedLM": "jinaai/jina-bert-implementation--modeling_bert.JinaBertForMaskedLM",
+     "AutoModel": "jinaai/jina-bert-implementation--modeling_bert.JinaBertModel",
+     "AutoModelForSequenceClassification": "jinaai/jina-bert-implementation--modeling_bert.JinaBertForSequenceClassification"
+   },
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 8192,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "alibi",
+   "torch_dtype": "float32",
+   "transformers_version": "4.26.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30528,
+   "feed_forward_type": "geglu",
+   "emb_pooler": "mean"
+ }
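The architecture numbers in this config can be sanity-checked programmatically; a small sketch that parses the key fields (the JSON literal below is an excerpt of the config shown above) and derives the per-head dimension:

```python
import json

# Excerpt of the config.json fields relevant to the attention layout.
config = json.loads("""
{
  "hidden_size": 768,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "max_position_embeddings": 8192,
  "position_embedding_type": "alibi"
}
""")

# Each of the 12 heads attends in a hidden_size / num_heads subspace.
head_dim = config["hidden_size"] // config["num_attention_heads"]
print(head_dim)  # 64

# "alibi" means there is no learned position table to outgrow,
# which is what permits length extrapolation beyond training.
assert config["position_embedding_type"] == "alibi"
```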
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.2.2",
+     "transformers": "4.31.0",
+     "pytorch": "2.0.1"
+   }
+ }
coreml/float32_model.mlpackage/Data/com.apple.CoreML/model.mlmodel ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:106c1ee920a9ea9d8d30523206bf862f2469a7d3e5e9c90c109bfde3df898060
+ size 135698
coreml/float32_model.mlpackage/Data/com.apple.CoreML/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8c81b424167aab6b88182cc404fb7d7517084597121ca63c10a9423e26186e2
+ size 550683456
coreml/float32_model.mlpackage/Manifest.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "fileFormatVersion": "1.0.0",
+   "itemInfoEntries": {
+     "4BDEEC23-5067-410C-8A8F-A649FD4360D9": {
+       "author": "com.apple.CoreML",
+       "description": "CoreML Model Specification",
+       "name": "model.mlmodel",
+       "path": "com.apple.CoreML/model.mlmodel"
+     },
+     "B7AAB529-A51F-4EB3-B2CD-4BDA80250E6F": {
+       "author": "com.apple.CoreML",
+       "description": "CoreML Model Weights",
+       "name": "weights",
+       "path": "com.apple.CoreML/weights"
+     }
+   },
+   "rootModelIdentifier": "4BDEEC23-5067-410C-8A8F-A649FD4360D9"
+ }
generation_config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "_from_model_config": true,
+   "pad_token_id": 0,
+   "transformers_version": "4.26.0"
+ }
model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6bccce798906f269ee6990d35b8a516390a9593cde824de2e6b9d087b07fa4d
+ size 547390322
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b70f1386f05b9703ea4edf7f1550a8925399f9580e4cc754cc099efc1e736d8
+ size 274757256
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6cd5a65131aa1db04c4146f474bdf68fac06417cba56789f4e6aaabd190e2818
+ size 274773117
sentence_bert_config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "max_seq_length": 8192,
+   "do_lower_case": false,
+   "model_args": {"trust_remote_code": true}
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 2147483648,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff