
SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Model Size: ~118M parameters (F32 weights)

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
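
In other words, token embeddings from the 128-token BERT encoder are mean-pooled into a single 384-dimensional sentence vector. As a rough illustration of what that module stack means (a sketch built from the base model, not code shipped with this repository), the same architecture can be assembled explicitly:

from sentence_transformers import SentenceTransformer, models

# Token encoder: multilingual MiniLM backbone, inputs truncated at 128 tokens
word_embedding_model = models.Transformer(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    max_seq_length=128,
)

# Mean pooling over token embeddings -> one 384-dimensional sentence vector
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])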

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2")
# Run inference
sentences = [
    'Reportando a Mánager ventasLograr un crecimiento sostenible de los ingresos mediante la negociación, cierre, implementación y cumplimiento de acuerdos con los diferentes clientes.Encargado de realizar la búsqueda y apertura de nuevos clientes a nivel LATAM . Entender requerimientos y saber asesorar de la mejor manera para un buen cierre de negocio. Alto conocimiento en la línea de flotas y Camiones.',
    'Introduction to Data Science and scikit-learn in Python.Data Science.Data Analysis.Employ artificial intelligence techniques to test hypothesis in Python. Apply a machine learning model combining Numpy, Pandas, and Scikit-Learn',
    'Planejamento de projetos: Como reunir tudo.Data Science.Leadership and Management.Descrever os componentes da fase de planejamento e a significância deles.. Identificar ferramentas e práticas recomendadas para criar um plano de projeto e um plano de gestão de riscos. . Descrever como estimar, acompanhar e manter um orçamento.. Elaborar um plano de comunicação e explicar como gerenciá-lo.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
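
The training samples in the next section pair what looks like a professional profile (sentence1) with course descriptions (sentence2), so a natural follow-up is ranking courses against a profile. A minimal sketch, assuming your own profile and course texts (the strings below are made-up placeholders):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2")

# Placeholder inputs: one profile and a small catalogue of course descriptions
profile = "Sales manager focused on opening and growing B2B accounts across LATAM."
courses = [
    "Introduction to Data Science and scikit-learn in Python.",
    "Negotiation and key account management for B2B sales teams.",
    "Core Java for software development.",
]

profile_embedding = model.encode(profile)
course_embeddings = model.encode(courses)

# Cosine similarity between the profile and every course, shape [1, len(courses)]
scores = model.similarity(profile_embedding, course_embeddings)

# Print courses from most to least similar to the profile
for idx in scores[0].argsort(descending=True).tolist():
    print(f"{scores[0][idx].item():.3f}  {courses[idx]}")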

Training Details

Training Dataset

Unnamed Dataset

  • Size: 500 training samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence1 (string): min 85 tokens, mean 85.0 tokens, max 85 tokens
    • sentence2 (string): min 13 tokens, mean 65.22 tokens, max 128 tokens
    • label (int): 0: ~10.80%, 1: ~13.20%, 2: ~76.00%
  • Samples (each sample is a sentence1/sentence2 pair with a label; the three samples below share the same sentence1):
    • sentence1 (shared): Reportando a Mánager ventasLograr un crecimiento sostenible de los ingresos mediante la negociación, cierre, implementación y cumplimiento de acuerdos con los diferentes clientes.Encargado de realizar la búsqueda y apertura de nuevos clientes a nivel LATAM . Entender requerimientos y saber asesorar de la mejor manera para un buen cierre de negocio. Alto conocimiento en la línea de flotas y Camiones.
    • sentence2: Launching Your Music Career.Data Science.Music and Art.Articulate your Unique Selling Proposition.. Use the Business Model Canvas to determine the core functions required to effectively manage your portfolio career.. Complete a comprehensive growth and recruitment plan for your teaching studio and identify the competitive landscape.. Seek out and book performance opportunities in a variety of settings. (label: 2)
    • sentence2: Robotics.Data Science.Electrical Engineering.Motion Planning. Matlab. Estimation (label: 2)
    • sentence2: Core Java.Data Science.Software Development.Learn the basic syntax and functions of the Java programming language. Apply object-oriented programming techniques to building classes, creating objects, and understanding how solutions are packaged in Java.. Learn how to implement inheritance and polymorphism in Java.. Use selected parts of the vast Java SE class library to enhance your Java programming techniques. (label: 2)
  • Loss: SoftmaxLoss
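
SoftmaxLoss trains a small classifier head on top of the concatenated pair embeddings; given the statistics above, this model uses three label classes (0, 1, 2) over 384-dimensional embeddings. A minimal sketch of how such a loss could be set up (the two placeholder rows below are not the actual training data):

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import SoftmaxLoss

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Placeholder rows in the same (sentence1, sentence2, label) format as above
train_dataset = Dataset.from_dict({
    "sentence1": ["Profile text A", "Profile text A"],
    "sentence2": ["Course description X", "Course description Y"],
    "label": [2, 0],
})

# Classifier over concatenated pair embeddings: 384-dim vectors, 3 classes
loss = SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)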

Training Hyperparameters

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3.0
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
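
Most of the values above are transformers defaults; the settings that actually shape this run are per_device_train_batch_size=8, learning_rate=5e-05, num_train_epochs=3, and seed=42. A rough sketch (not the author's actual script) of how they map onto the Sentence Transformers 3.x trainer, reusing the model, train_dataset, and loss from the sketch above; output_dir is a placeholder:

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/GURU-paraphrase-multilingual-MiniLM-L12-v2",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    seed=42,
)

trainer = SentenceTransformerTrainer(
    model=model,              # from the SoftmaxLoss sketch above
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()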

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1
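
To reproduce this environment, the listed versions can be pinned directly (a convenience command, not part of the original card; CUDA builds of PyTorch may require the appropriate index URL):

pip install sentence-transformers==3.0.1 transformers==4.41.2 torch==2.3.1 accelerate==0.31.0 datasets==2.20.0 tokenizers==0.19.1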

Citation

BibTeX

Sentence Transformers and SoftmaxLoss

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}