---
license: apache-2.0
datasets:
- fine-tuned/QuoraRetrieval-256-24-gpt-4o-2024-05-13-80208
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Datasets
- Benchmark
- Retrieval
- Research
- Evaluation
---
This model is a fine-tuned version of BAAI/bge-large-en-v1.5 designed for the following use case:

information retrieval benchmarking
## How to Use
This model produces sentence embeddings and can be integrated into your NLP pipeline for tasks such as semantic search, sentence similarity, and information retrieval. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the fine-tuned model; trust_remote_code is needed if the repo ships custom code
model = SentenceTransformer(
    'fine-tuned/QuoraRetrieval-256-24-gpt-4o-2024-05-13-80208',
    trust_remote_code=True
)

# Encode the texts into dense embedding vectors
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])

# Compute the cosine similarity between the two embeddings
print(cos_sim(embeddings[0], embeddings[1]))
```
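
Since the model is tuned for retrieval, a typical workflow embeds a set of queries against a larger corpus and ranks the corpus per query. The sketch below uses `sentence_transformers.util.semantic_search` for that ranking; the corpus, queries, and `top_k` value are illustrative, and normalizing embeddings is an assumption carried over from the base BAAI/bge-large-en-v1.5 model rather than something stated in this card.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import semantic_search

model = SentenceTransformer(
    'fine-tuned/QuoraRetrieval-256-24-gpt-4o-2024-05-13-80208',
    trust_remote_code=True
)

# Illustrative corpus and queries; replace with your own data.
corpus = [
    'How do I improve my programming skills?',
    'What is the best way to learn a new language?',
    'How can I start investing with little money?',
]
queries = [
    'Tips for becoming a better programmer',
]

# normalize_embeddings=True is an assumption inherited from the base bge model;
# with normalized vectors, dot product equals cosine similarity.
corpus_embeddings = model.encode(corpus, normalize_embeddings=True, convert_to_tensor=True)
query_embeddings = model.encode(queries, normalize_embeddings=True, convert_to_tensor=True)

# Rank the corpus for each query and keep the top 2 hits.
hits = semantic_search(query_embeddings, corpus_embeddings, top_k=2)
for query, query_hits in zip(queries, hits):
    print(query)
    for hit in query_hits:
        print(f"  {corpus[hit['corpus_id']]} (score: {hit['score']:.4f})")
```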