---
base_model: indobenchmark/indobert-base-p1
datasets: []
language: []
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:12000
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Awalnya merupakan singkatan dari John's Macintosh Project.
sentences:
- >-
Sebuah formasi yang terdiri dari sekitar 50 petugas Polisi Baltimore
akhirnya menempatkan diri mereka di antara para perusuh dan milisi,
memungkinkan Massachusetts ke-6 untuk melanjutkan ke Stasiun Camden.
- Mengecat luka dapat melindungi dari jamur dan hama.
- Dulunya merupakan singkatan dari John's Macintosh Project.
- source_sentence: Boueiz berprofesi sebagai pengacara.
sentences:
- Mereka juga gagal mengembangkan Water Cooperation Quotient yang baru.
- >-
Pada Pemilu 1970, ia ikut serta dari Partai Persatuan Nasional namun
dikalahkan.
- Seorang pengacara berprofesi sebagai Boueiz.
- source_sentence: Fakultas Studi Oriental memiliki seorang profesor.
sentences:
- >-
Di tempat lain di New Mexico, LAHS terkadang dianggap sebagai sekolah
untuk orang kaya.
- >-
Laporan lain juga menunjukkan kandungannya lebih rendah dari 0,1% di
Australia.
- Profesor tersebut merupakan bagian dari Fakultas Studi Oriental.
- source_sentence: >-
Hal ini terjadi di sejumlah negara, termasuk Ethiopia, Republik Demokratik
Kongo, dan Afrika Selatan.
sentences:
- >-
Hal ini diketahui terjadi di Eritrea, Ethiopia, Kongo, Tanzania, Namibia
dan Afrika Selatan.
- Gugus amil digantikan oleh gugus pentil.
- Dan saya beritahu Anda sesuatu, itu tidak adil.
- source_sentence: Ini adalah wilayah sosial-ekonomi yang lebih rendah.
sentences:
- >-
Ini adalah bengkel perbaikan mobil terbaru yang masih beroperasi di
kota.
- >-
Zelinsky hanya berteori bahwa tidak ada tiga bilangan bulat berurutan
yang semuanya dapat difaktorkan ulang.
- Ini adalah wilayah sosial-ekonomi yang lebih tinggi.
model-index:
- name: SentenceTransformer based on indobenchmark/indobert-base-p1
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: str dev
type: str-dev
metrics:
- type: pearson_cosine
value: 0.4564569322733096
name: Pearson Cosine
- type: spearman_cosine
value: 0.48195228779003385
name: Spearman Cosine
- type: pearson_manhattan
value: 0.5026090402544289
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.4959933098737397
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.5039005057105697
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.4974503970711054
name: Spearman Euclidean
- type: pearson_dot
value: 0.30898798759416635
name: Pearson Dot
- type: spearman_dot
value: 0.2877933490149207
name: Spearman Dot
- type: pearson_max
value: 0.5039005057105697
name: Pearson Max
- type: spearman_max
value: 0.4974503970711054
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: str test
type: str-test
metrics:
- type: pearson_cosine
value: 0.47784323630714065
name: Pearson Cosine
- type: spearman_cosine
value: 0.5031401179671358
name: Spearman Cosine
- type: pearson_manhattan
value: 0.5002126701994709
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.49583761101885343
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.5003980651640989
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.49610725867890976
name: Spearman Euclidean
- type: pearson_dot
value: 0.3399664664461248
name: Pearson Dot
- type: spearman_dot
value: 0.3339252012184323
name: Spearman Dot
- type: pearson_max
value: 0.5003980651640989
name: Pearson Max
- type: spearman_max
value: 0.5031401179671358
name: Spearman Max
---

# SentenceTransformer based on indobenchmark/indobert-base-p1
This is a sentence-transformers model finetuned from indobenchmark/indobert-base-p1. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1)
- **Maximum Sequence Length:** 32 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
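The `Pooling` module above performs plain mean pooling over the token embeddings. As a minimal sketch of the same computation using `transformers` directly (this mirrors the standard mean-pooling recipe, not the exact internal code path):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the base checkpoint; the finetuned weights would behave the same way.
tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-base-p1")

batch = tokenizer(["Contoh kalimat."], padding=True, truncation=True,
                  max_length=32, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq, 768)

# Average the token embeddings, ignoring padding via the attention mask.
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```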
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("damand2061/negasibert-mnrl")
# Run inference
sentences = [
    'Ini adalah wilayah sosial-ekonomi yang lebih rendah.',
    'Ini adalah wilayah sosial-ekonomi yang lebih tinggi.',
    'Zelinsky hanya berteori bahwa tidak ada tiga bilangan bulat berurutan yang semuanya dapat difaktorkan ulang.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
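Building on the snippet above, the similarity matrix can be used directly, for example to rank the other sentences against the first (anchor) sentence:

```python
# Rank all sentences by similarity to the first sentence.
# `similarities` is the 3x3 tensor computed above.
anchor_scores = similarities[0]
for idx in anchor_scores.argsort(descending=True):
    print(f"{anchor_scores[idx]:.4f}  {sentences[idx]}")
```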
## Evaluation

### Metrics

#### Semantic Similarity

- Dataset: `str-dev`
- Evaluated with `EmbeddingSimilarityEvaluator`
| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.4565 |
| spearman_cosine    | 0.482  |
| pearson_manhattan  | 0.5026 |
| spearman_manhattan | 0.496  |
| pearson_euclidean  | 0.5039 |
| spearman_euclidean | 0.4975 |
| pearson_dot        | 0.309  |
| spearman_dot       | 0.2878 |
| pearson_max        | 0.5039 |
| spearman_max       | 0.4975 |
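These scores are what `EmbeddingSimilarityEvaluator` reports: Pearson and Spearman correlations between gold similarity scores and similarities computed from the embeddings (cosine, Manhattan, Euclidean, and dot product). A minimal sketch of invoking it, with placeholder pairs and gold scores since the actual str-dev data is not part of this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("damand2061/negasibert-mnrl")

# Placeholder evaluation pairs with illustrative gold scores in [0, 1];
# the real str-dev split would be substituted here.
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["Boueiz berprofesi sebagai pengacara.",
                "Ini adalah wilayah sosial-ekonomi yang lebih rendah."],
    sentences2=["Seorang pengacara berprofesi sebagai Boueiz.",
                "Ini adalah wilayah sosial-ekonomi yang lebih tinggi."],
    scores=[1.0, 0.0],
    name="str-dev",
)
results = evaluator(model)  # dict of pearson/spearman metrics as in the table
print(results)
```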
#### Semantic Similarity

- Dataset: `str-test`
- Evaluated with `EmbeddingSimilarityEvaluator`
| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.4778 |
| spearman_cosine    | 0.5031 |
| pearson_manhattan  | 0.5002 |
| spearman_manhattan | 0.4958 |
| pearson_euclidean  | 0.5004 |
| spearman_euclidean | 0.4961 |
| pearson_dot        | 0.34   |
| spearman_dot       | 0.3339 |
| pearson_max        | 0.5004 |
| spearman_max       | 0.5031 |
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 12,000 training samples
- Columns: `sentence_0` and `sentence_1`
- Approximate statistics based on the first 1000 samples:

  |         | sentence_0                                        | sentence_1                                        |
  |:--------|:--------------------------------------------------|:--------------------------------------------------|
  | type    | string                                            | string                                            |
  | details | min: 5 tokens, mean: 14.84 tokens, max: 32 tokens | min: 5 tokens, mean: 14.83 tokens, max: 32 tokens |
- Samples:

  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | Pusat Peringatan Topan Gabungan (JTWC) juga mengeluarkan peringatan dalam kapasitas tidak resmi. | Pusat Peringatan Topan Gabungan (JTWC) hanya mengeluarkan peringatan dalam kapasitas yang tidak resmi. |
  | DNP komersial digunakan sebagai antiseptik dan pestisida bioakumulasi non-selektif. | DNP komersial tidak dapat digunakan sebagai antiseptik atau pestisida bioakumulasi non-selektif. |
  | Kuncian tulang belakang dan kuncian serviks diperbolehkan dan wajib dalam kompetisi jiu-jitsu Brasil IBJJF. | Kuncian tulang belakang dan kuncian serviks dilarang dalam kompetisi jiu-jitsu Brasil IBJJF. |
- Loss: `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
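A rough sketch of training with this loss in Sentence Transformers v3, using stand-in pairs in place of the unnamed 12,000-pair dataset (column names match the dataset description above):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("indobenchmark/indobert-base-p1")

# Stand-in for the unnamed 12,000-pair training dataset.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Boueiz berprofesi sebagai pengacara.",
                   "Fakultas Studi Oriental memiliki seorang profesor."],
    "sentence_1": ["Seorang pengacara berprofesi sebagai Boueiz.",
                   "Profesor tersebut merupakan bagian dari Fakultas Studi Oriental."],
})

# scale=20.0 with cosine similarity matches the parameters listed above;
# the other in-batch sentences serve as negatives for each pair.
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```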
### Training Hyperparameters

#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
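These map directly onto `SentenceTransformerTrainingArguments`; a minimal sketch, where the `output_dir` value is illustrative:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Only the non-default values listed above; everything else stays at its default.
args = SentenceTransformerTrainingArguments(
    output_dir="negasibert-mnrl",  # illustrative output path
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=5,
    multi_dataset_batch_sampler="round_robin",
)
```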
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
### Training Logs

| Epoch  | Step | Training Loss | str-dev_spearman_max | str-test_spearman_max |
|:------:|:----:|:-------------:|:--------------------:|:---------------------:|
| 1.0    | 188  | -             | 0.4906               | 0.5067                |
| 2.0    | 376  | -             | 0.4941               | 0.5060                |
| 2.6596 | 500  | 0.0995        | -                    | -                     |
| 3.0    | 564  | -             | 0.4935               | 0.5055                |
| 4.0    | 752  | -             | 0.4959               | 0.5016                |
| 5.0    | 940  | -             | 0.4975               | 0.5031                |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.44.0
- PyTorch: 2.4.0
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```