---
base_model: indobenchmark/indobert-base-p1
datasets: []
language: []
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:12000
- loss:MegaBatchMarginLoss
widget:
- source_sentence: Awalnya merupakan singkatan dari John's Macintosh Project.
  sentences:
  - >-
    Sebuah formasi yang terdiri dari sekitar 50 petugas Polisi Baltimore
    akhirnya menempatkan diri mereka di antara para perusuh dan milisi,
    memungkinkan Massachusetts ke-6 untuk melanjutkan ke Stasiun Camden.
  - Mengecat luka dapat melindungi dari jamur dan hama.
  - Dulunya merupakan singkatan dari John's Macintosh Project.
- source_sentence: Boueiz berprofesi sebagai pengacara.
  sentences:
  - Mereka juga gagal mengembangkan Water Cooperation Quotient yang baru.
  - >-
    Pada Pemilu 1970, ia ikut serta dari Partai Persatuan Nasional namun
    dikalahkan.
  - Seorang pengacara berprofesi sebagai Boueiz.
- source_sentence: Fakultas Studi Oriental memiliki seorang profesor.
  sentences:
  - >-
    Di tempat lain di New Mexico, LAHS terkadang dianggap sebagai sekolah
    untuk orang kaya.
  - >-
    Laporan lain juga menunjukkan kandungannya lebih rendah dari 0,1% di
    Australia.
  - Profesor tersebut merupakan bagian dari Fakultas Studi Oriental.
- source_sentence: >-
    Hal ini terjadi di sejumlah negara, termasuk Ethiopia, Republik Demokratik
    Kongo, dan Afrika Selatan.
  sentences:
  - >-
    Hal ini diketahui terjadi di Eritrea, Ethiopia, Kongo, Tanzania, Namibia
    dan Afrika Selatan.
  - Gugus amil digantikan oleh gugus pentil.
  - Dan saya beritahu Anda sesuatu, itu tidak adil.
- source_sentence: Ini adalah wilayah sosial-ekonomi yang lebih rendah.
  sentences:
  - >-
    Ini adalah bengkel perbaikan mobil terbaru yang masih beroperasi di
    kota.
  - >-
    Zelinsky hanya berteori bahwa tidak ada tiga bilangan bulat berurutan
    yang semuanya dapat difaktorkan ulang.
  - Ini adalah wilayah sosial-ekonomi yang lebih tinggi.
model-index:
- name: SentenceTransformer based on indobenchmark/indobert-base-p1
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: str dev
      type: str-dev
    metrics:
    - type: pearson_cosine
      value: 0.45499177580198114
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.47824954773877343
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.5063760846250573
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.49835693711719375
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.5062153453050553
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.4982327535492364
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.27097056415300647
      name: Pearson Dot
    - type: spearman_dot
      value: 0.25179460239023077
      name: Spearman Dot
    - type: pearson_max
      value: 0.5063760846250573
      name: Pearson Max
    - type: spearman_max
      value: 0.49835693711719375
      name: Spearman Max
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: str test
      type: str-test
    metrics:
    - type: pearson_cosine
      value: 0.47495139518851825
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.5059515739122313
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.50154011084872
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.5058071904463332
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.5028237271275693
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.5061159996491946
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.3250041946830172
      name: Pearson Dot
    - type: spearman_dot
      value: 0.31627719040314917
      name: Spearman Dot
    - type: pearson_max
      value: 0.5028237271275693
      name: Pearson Max
    - type: spearman_max
      value: 0.5061159996491946
      name: Spearman Max
---
# SentenceTransformer based on indobenchmark/indobert-base-p1
This is a sentence-transformers model finetuned from indobenchmark/indobert-base-p1. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- Model Type: Sentence Transformer
- Base model: indobenchmark/indobert-base-p1
- Maximum Sequence Length: 32 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
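
Both limits can be confirmed programmatically. A minimal sketch, assuming the checkpoint loads as published:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("damand2061/negasibert-mbm")
print(model.max_seq_length)                      # 32; longer inputs are truncated
print(model.get_sentence_embedding_dimension())  # 768
```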
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
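
The Pooling module above averages token embeddings (`pooling_mode_mean_tokens: True`). For clarity, a minimal sketch of the equivalent mean pooling with the plain `transformers` API, assuming the repository's weights load as a vanilla BERT encoder:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("damand2061/negasibert-mbm")
encoder = AutoModel.from_pretrained("damand2061/negasibert-mbm")

inputs = tokenizer(
    ["Ini adalah contoh kalimat."],
    padding=True, truncation=True, max_length=32, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**inputs).last_hidden_state  # (batch, seq, 768)

# Mean pooling: average the token vectors, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```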
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("damand2061/negasibert-mbm")
# Run inference
sentences = [
    'Ini adalah wilayah sosial-ekonomi yang lebih rendah.',
    'Ini adalah wilayah sosial-ekonomi yang lebih tinggi.',
    'Zelinsky hanya berteori bahwa tidak ada tiga bilangan bulat berurutan yang semuanya dapat difaktorkan ulang.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
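
The introduction lists paraphrase mining among the supported tasks. A minimal sketch using the library's `paraphrase_mining` utility; the corpus below simply reuses example sentences from the widget:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import paraphrase_mining

model = SentenceTransformer("damand2061/negasibert-mbm")
corpus = [
    "Awalnya merupakan singkatan dari John's Macintosh Project.",
    "Dulunya merupakan singkatan dari John's Macintosh Project.",
    "Mengecat luka dapat melindungi dari jamur dan hama.",
]
# Returns (score, i, j) triples, sorted by decreasing cosine similarity.
for score, i, j in paraphrase_mining(model, corpus):
    print(f"{score:.3f}  {corpus[i]!r} <-> {corpus[j]!r}")
```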
## Evaluation

### Metrics

#### Semantic Similarity

- Dataset: `str-dev`
- Evaluated with `EmbeddingSimilarityEvaluator`

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.455  |
| spearman_cosine    | 0.4782 |
| pearson_manhattan  | 0.5064 |
| spearman_manhattan | 0.4984 |
| pearson_euclidean  | 0.5062 |
| spearman_euclidean | 0.4982 |
| pearson_dot        | 0.271  |
| spearman_dot       | 0.2518 |
| pearson_max        | 0.5064 |
| spearman_max       | 0.4984 |
#### Semantic Similarity

- Dataset: `str-test`
- Evaluated with `EmbeddingSimilarityEvaluator`

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.475  |
| spearman_cosine    | 0.506  |
| pearson_manhattan  | 0.5015 |
| spearman_manhattan | 0.5058 |
| pearson_euclidean  | 0.5028 |
| spearman_euclidean | 0.5061 |
| pearson_dot        | 0.325  |
| spearman_dot       | 0.3163 |
| pearson_max        | 0.5028 |
| spearman_max       | 0.5061 |
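
Both tables come from `EmbeddingSimilarityEvaluator`, which correlates the model's similarity scores with gold labels. A minimal sketch of how such an evaluator is wired up; the pairs and scores below are hypothetical placeholders, not the actual str-dev split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("damand2061/negasibert-mbm")

# Hypothetical placeholder data; the real evaluation uses the str-dev split.
sentences1 = [
    "Ini adalah wilayah sosial-ekonomi yang lebih rendah.",
    "Boueiz berprofesi sebagai pengacara.",
    "Gugus amil digantikan oleh gugus pentil.",
]
sentences2 = [
    "Ini adalah wilayah sosial-ekonomi yang lebih tinggi.",
    "Seorang pengacara berprofesi sebagai Boueiz.",
    "Dan saya beritahu Anda sesuatu, itu tidak adil.",
]
gold_scores = [0.2, 0.9, 0.1]  # similarity labels, normalized to [0, 1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="str-dev")
print(evaluator(model))  # Pearson/Spearman correlations per similarity function
```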
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 12,000 training samples
- Columns: `sentence_0` and `sentence_1`
- Approximate statistics based on the first 1000 samples:

|         | sentence_0                                        | sentence_1                                        |
|:--------|:--------------------------------------------------|:--------------------------------------------------|
| type    | string                                            | string                                            |
| details | min: 5 tokens, mean: 14.84 tokens, max: 32 tokens | min: 5 tokens, mean: 14.83 tokens, max: 32 tokens |
- Samples:

| sentence_0 | sentence_1 |
|:-----------|:-----------|
| Pusat Peringatan Topan Gabungan (JTWC) juga mengeluarkan peringatan dalam kapasitas tidak resmi. | Pusat Peringatan Topan Gabungan (JTWC) hanya mengeluarkan peringatan dalam kapasitas yang tidak resmi. |
| DNP komersial digunakan sebagai antiseptik dan pestisida bioakumulasi non-selektif. | DNP komersial tidak dapat digunakan sebagai antiseptik atau pestisida bioakumulasi non-selektif. |
| Kuncian tulang belakang dan kuncian serviks diperbolehkan dan wajib dalam kompetisi jiu-jitsu Brasil IBJJF. | Kuncian tulang belakang dan kuncian serviks dilarang dalam kompetisi jiu-jitsu Brasil IBJJF. |

- Loss: `MegaBatchMarginLoss` (sketched below)
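
A minimal sketch of constructing this loss, assuming the base checkpoint as the starting point:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MegaBatchMarginLoss

model = SentenceTransformer("indobenchmark/indobert-base-p1")

# For each anchor, MegaBatchMarginLoss mines the hardest in-batch negative from
# a large "mega-batch", then enforces a margin between the positive pair and
# that negative; this is why it benefits from very large batch sizes.
loss = MegaBatchMarginLoss(model)
```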
### Training Hyperparameters

#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 500
- `per_device_eval_batch_size`: 500
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
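
A minimal sketch of reproducing these settings with the Sentence Transformers 3.x trainer API; `output_dir` and the one-pair dataset are hypothetical placeholders:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MegaBatchMarginLoss

model = SentenceTransformer("indobenchmark/indobert-base-p1")

# Hypothetical two-column paraphrase pairs (sentence_0, sentence_1).
train_dataset = Dataset.from_dict({
    "sentence_0": ["Boueiz berprofesi sebagai pengacara."],
    "sentence_1": ["Seorang pengacara berprofesi sebagai Boueiz."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="negasibert-mbm",  # placeholder
    per_device_train_batch_size=500,
    per_device_eval_batch_size=500,
    num_train_epochs=5,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MegaBatchMarginLoss(model),
)
trainer.train()
```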
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 500
- `per_device_eval_batch_size`: 500
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
### Training Logs

| Epoch | Step | str-dev_spearman_max | str-test_spearman_max |
|:-----:|:----:|:--------------------:|:---------------------:|
| 1.0   | 24   | 0.4904               | 0.5030                |
| 2.0   | 48   | 0.4905               | 0.5036                |
| 3.0   | 72   | 0.4947               | 0.5041                |
| 4.0   | 96   | 0.4963               | 0.5061                |
| 5.0   | 120  | 0.4984               | 0.5061                |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.44.0
- PyTorch: 2.4.0
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MegaBatchMarginLoss

```bibtex
@inproceedings{wieting-gimpel-2018-paranmt,
    title = "{P}ara{NMT}-50{M}: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations",
    author = "Wieting, John and Gimpel, Kevin",
    editor = "Gurevych, Iryna and Miyao, Yusuke",
    booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P18-1042",
    doi = "10.18653/v1/P18-1042",
    pages = "451--462",
}
```