SentenceTransformer based on LazarusNLP/all-indo-e5-small-v4

This is a sentence-transformers model finetuned from LazarusNLP/all-indo-e5-small-v4 on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: LazarusNLP/all-indo-e5-small-v4
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: id

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
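The Pooling module above averages the transformer's token embeddings into one 384-dimensional sentence vector (pooling_mode_mean_tokens). A minimal numpy sketch of that step, with random arrays standing in for real token outputs:

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Mask out padding positions, then average the remaining token vectors."""
    mask = attention_mask[:, None].astype(token_embeddings.dtype)  # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)
    return summed / mask.sum()

# Random stand-ins: 128 token vectors of width 384, last 28 positions are padding
rng = np.random.default_rng(0)
tokens = rng.normal(size=(128, 384))
mask = np.array([1] * 100 + [0] * 28)

sentence_embedding = mean_pool(tokens, mask)
print(sentence_embedding.shape)  # (384,)
```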

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("galang006/all-indo-e5-small-v4-matryoshka-v2")
# Run inference
sentences = [
    'Isi: Sebelum memangku jabatannya, Panitera, Wakil Panitera, dan Panitera Pengganti harus bersumpah atau berjanji menurut agama atau kepercayaannya, yang berbunyi sebagai berikut: "Saya bersumpah/berjanji dengan sungguh-sungguh bahwa saya, untuk memangku jabatan saya ini, langsung atau tidak langsung, dengan menggunakan nama atau apa pun juga, tidak memberikan atau menjanjikan barang sesuatu kepada siapa pun"; "Saya bersumpah/berjanji bahwa saya, untuk melakukan atau tidak',
    '2. Apa isu yang dibicarakan dalam sumpah atau janji tersebut?',
    '1. Apa yang harus dilakukan jika saksi diambil sumpah atau janji tetapi terbanding atau tergugat tidak hadir?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4375, 0.3659],
#         [0.4375, 1.0000, 0.4640],
#         [0.3659, 0.4640, 1.0000]])
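Because the model was trained with MatryoshkaLoss over nested dimensions (384, 256, 128, 64), embeddings can also be truncated to a smaller prefix and re-normalized, trading a little quality for memory and speed. A sketch of that truncation in numpy, with random arrays standing in for `model.encode(...)` output:

```python
import numpy as np

def truncate_normalize(emb, dim):
    """Keep the first `dim` Matryoshka dimensions and re-normalize to unit length."""
    cut = emb[..., :dim]
    return cut / np.linalg.norm(cut, axis=-1, keepdims=True)

# Random stand-ins for three 384-dim embeddings from model.encode(sentences)
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 384)).astype(np.float32)

for dim in (384, 256, 128, 64):  # the trained matryoshka_dims
    small = truncate_normalize(embeddings, dim)
    sims = small @ small.T  # cosine similarity, since rows are unit length
    print(dim, small.shape, sims.shape)  # similarity matrix stays (3, 3)
```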

Evaluation

Metrics

Information Retrieval (dim 384)

Metric Value
cosine_accuracy@1 0.496
cosine_accuracy@3 0.4978
cosine_accuracy@5 0.6544
cosine_accuracy@10 0.7678
cosine_precision@1 0.496
cosine_precision@3 0.4957
cosine_precision@5 0.3606
cosine_precision@10 0.2215
cosine_recall@1 0.1656
cosine_recall@3 0.4963
cosine_recall@5 0.6016
cosine_recall@10 0.7394
cosine_ndcg@10 0.6205
cosine_mrr@10 0.5501
cosine_map@100 0.5939

Information Retrieval (dim 256)

Metric Value
cosine_accuracy@1 0.4872
cosine_accuracy@3 0.489
cosine_accuracy@5 0.6341
cosine_accuracy@10 0.7564
cosine_precision@1 0.4872
cosine_precision@3 0.4872
cosine_precision@5 0.3506
cosine_precision@10 0.2157
cosine_recall@1 0.1627
cosine_recall@3 0.4878
cosine_recall@5 0.5849
cosine_recall@10 0.7199
cosine_ndcg@10 0.6057
cosine_mrr@10 0.5393
cosine_map@100 0.5836

Information Retrieval (dim 128)

Metric Value
cosine_accuracy@1 0.4653
cosine_accuracy@3 0.467
cosine_accuracy@5 0.613
cosine_accuracy@10 0.7159
cosine_precision@1 0.4653
cosine_precision@3 0.4653
cosine_precision@5 0.3379
cosine_precision@10 0.205
cosine_recall@1 0.1552
cosine_recall@3 0.4656
cosine_recall@5 0.5641
cosine_recall@10 0.6844
cosine_ndcg@10 0.5777
cosine_mrr@10 0.515
cosine_map@100 0.5589

Information Retrieval (dim 64)

Metric Value
cosine_accuracy@1 0.423
cosine_accuracy@3 0.4257
cosine_accuracy@5 0.5383
cosine_accuracy@10 0.6456
cosine_precision@1 0.423
cosine_precision@3 0.4225
cosine_precision@5 0.2999
cosine_precision@10 0.1852
cosine_recall@1 0.1413
cosine_recall@3 0.423
cosine_recall@5 0.5004
cosine_recall@10 0.6183
cosine_ndcg@10 0.5216
cosine_mrr@10 0.4657
cosine_map@100 0.5067
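The @k numbers above follow the standard information-retrieval definitions. A small self-contained sketch of how accuracy@k, recall@k, and reciprocal rank could be computed for one query (toy document IDs, not the actual evaluation data):

```python
def ir_metrics(ranked_ids, relevant_ids, k=10):
    """Accuracy@k, recall@k, and reciprocal rank for a single query."""
    top_k = ranked_ids[:k]
    hits = [i for i, doc in enumerate(top_k) if doc in relevant_ids]
    accuracy = 1.0 if hits else 0.0             # any relevant doc in the top k?
    recall = len(hits) / len(relevant_ids)      # fraction of relevant docs retrieved
    rr = 1.0 / (hits[0] + 1) if hits else 0.0   # 1 / rank of first relevant doc
    return accuracy, recall, rr

# Toy query: relevant docs d1 and d3 are ranked 2nd and 5th
acc, rec, rr = ir_metrics(["d9", "d1", "d7", "d8", "d3"], {"d1", "d3"}, k=5)
print(acc, rec, rr)  # 1.0 1.0 0.5
```

mrr@10 and map@100 are such per-query values averaged over all evaluation queries.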

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 10,233 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string, min 14 / mean 68.11 / max 128 tokens
    • anchor: string, min 8 / mean 17.21 / max 51 tokens
  • Samples:
    • positive: atas keberatan tersebut, berarti akan diperoleh suatu kepastian hukum bagi Wajib Pajak selain terlaksananya administrasi perpajakan.
      anchor: 2. Bagaimana peranan administrasi perpajakan dalam kasus keberatan?
    • positive: Isi: Barang impor yang dikirim melalui pos atau jasa titipan hanya dapat dikeluarkan atas persetujuan pejabat bea dan cukai. Penjelasan: Yang dimaksud dengan persetujuan pejabat bea dan cukai yaitu penetapan pejabat bea dan cukai yang menyatakan bahwa barang tersebut telah dipenuhi kewajiban pabean berdasarkan Undang-Undang ini.
      anchor: 2. Bagaimana cara mendapatkan persetujuan pejabat bea dan cukai untuk mengeluarkan barang impor?
    • positive: Isi: Pengusaha kecil yang memilih untuk dikukuhkan sebagai Pengusaha Kena Pajak wajib melaksanakan ketentuan sebagaimana dimaksud pada ayat (1). (Sumber : UU 42 Tahun 2009, Tanggal Berlaku : 1 Jan 2010) Penjelasan: Pengusaha kecil diperkenankan untuk memilih dikukuhkan menjadi Pengusaha Kena Pajak. Apabila pengusaha kecil memilih menjadi Pengusaha Kena Pajak, Undang-Undang ini berlaku sepenuhnya bagi pengusaha kecil tersebut.
      anchor: 3. Bagaimana Undang-Undang ini berlaku terhadap pengusaha kecil yang memilih menjadi Pengusaha Kena Pajak?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
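The configuration above wraps MultipleNegativesRankingLoss in MatryoshkaLoss: the in-batch ranking loss is computed on each truncated prefix of the embeddings and the weighted results are summed. A toy numpy sketch of that computation (illustrative only, not the library implementation; scale=20 mirrors the sentence-transformers default):

```python
import numpy as np

def mnrl(anchors, positives, scale=20.0):
    """In-batch MultipleNegativesRankingLoss: for each anchor, its own positive
    is the target and every other positive in the batch acts as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                           # (batch, batch) cosine scores
    scores = scores - scores.max(axis=1, keepdims=True)  # stable log-softmax
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # cross-entropy on the diagonal

def matryoshka_loss(anchors, positives, dims=(384, 256, 128, 64), weights=(1, 1, 1, 1)):
    """Weighted sum of the base loss over nested embedding prefixes."""
    return sum(w * mnrl(anchors[:, :d], positives[:, :d]) for d, w in zip(dims, weights))

rng = np.random.default_rng(0)
a = rng.normal(size=(16, 384))
p = a + 0.1 * rng.normal(size=(16, 384))  # positives close to their anchors
print(matryoshka_loss(a, p))
```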
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • gradient_accumulation_steps: 32
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 32
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss dim_384_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
0.5 10 1.8579 - - - -
1.0 20 0.9668 0.6007 0.5868 0.5516 0.4836
1.5 30 0.8129 - - - -
2.0 40 0.7398 0.6136 0.5992 0.5718 0.5055
2.5 50 0.6296 - - - -
3.0 60 0.6527 0.6142 0.6008 0.5733 0.5136
3.5 70 0.5721 - - - -
4.0 80 0.5759 0.6198 0.6051 0.5775 0.5214
4.5 90 0.5399 - - - -
5.0 100 0.5365 0.6205 0.6057 0.5777 0.5216
  • The epoch 5.0 row (step 100) corresponds to the saved checkpoint; its ndcg@10 values match the metrics tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 5.1.0
  • Transformers: 4.57.1
  • PyTorch: 2.8.0+cu128
  • Accelerate: 1.10.1
  • Datasets: 4.2.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
Model tree

  • This model, galang006/all-indo-e5-small-v4-matryoshka-v2, is finetuned from LazarusNLP/all-indo-e5-small-v4.