Are you open to a PR adding an ONNX model artifact of the tensor weights?
#3 opened by bergum
For accelerated inference it would be great to also save an ONNX model file, exported with:

optimum-cli export onnx --task fill-mask --model naver/splade-cocondenser-ensembledistil onnx