ModernBERT Embed Base MIRIAD

This is a sentence-transformers model fine-tuned from nomic-ai/modernbert-embed-base on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/modernbert-embed-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: json (8,095 samples)
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
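
For reference, the three modules above amount to mean pooling over non-padding token embeddings followed by L2 normalization. Below is a minimal sketch of the same computation with plain transformers; it assumes only the repository name used in this card, and small numerical differences from model.encode are possible.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("digo-prayudha/test-modernbert-embed-base-miriad")
model = AutoModel.from_pretrained("digo-prayudha/test-modernbert-embed-base-miriad")

sentences = ["How does statin use affect the mortality rates of CDI patients?"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True above)
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# L2 normalization (the Normalize() module above)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])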

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("digo-prayudha/test-modernbert-embed-base-miriad")
# Run inference
sentences = [
    'Three had personal and family issues to attend to. The peer counsellors presented their reports which were then discussed with the supervisors.\n\n At the beginning of the training some peer counsellors were hoping to be trained as health workers while others wanted to learn how to improve breastfeeding of their babies. Some suggested that they receive uniforms to identify them in the community. The peer counsellors expressed a strong wish to be given bicycles to ease their mobility around the villages and a monthly allowance equivalent to US$10. Transportation was the most "felt need" identified by the peer counsellors. One peer counsellor said,\n\n Another peer counsellor said,\n\n The peer counsellors were each given a bicycle for ease of movement during peer counselling visits.\n\n Lessons learnt from this study are summarised in Table 3 .\n\n This study showed that rural Ugandan women with modest formal education can be trained in breastfeeding counselling successfully. On returning to their communities, they were able to provide help and support to breastfeeding mothers to improve their breastfeeding technique and breastfeed exclusively. This is in agreement with what other studies have found [20] [21] [22] .\n\n The peer counsellors expressed a desire to learn more about breastfeeding at the beginning of the course. This was despite breastfeeding being culturally accepted and widely practiced in the community. The peer counsellors believed that breast milk alone was not enough for a baby up to the age of six months. A similar belief was also perceived at the lactation clinic of Mulago hospital in Uganda [31] . The training curriculum covered all the questions asked by the peer counsellors at the beginning of the course. This gave the peer counsellors the confidence that they would be able to answer questions posed by their peers. Since we did not administer pre-and post-test during training, our assessment of the knowledge they gained from the training is limited.\n\n We also found that there are cultural and traditional beliefs and practices regarding breastfeeding which may influence the practice of exclusive breastfeeding negatively. Beliefs and practices related to expressing breast milk, use of colostrum together with understanding and managing breast conditions during breastfeeding may not be supportive of exclusive breastfeeding. Other studies have also highlighted traditional and cultural beliefs and practices related to breastfeeding that may negatively influence the practice of exclusive breastfeeding [7] [8] [9] .\n\n At the beginning of the training for health workers, they were asked what they expected to learn from the training course. A list of their expectations was made and it was interesting to note that most of the expectations of the health workers were similar to those of the peer counsellors at the beginning of training. This suggests that community women could perform as well as, or even better than the health workers in supporting mothers to exclusively breast feed their babies. However, we did not compare the performance of the two groups in this study.\n\n The peer counsellors were also able to identify common breastfeeding problems in their communities. They appreciated the fact that the training they received had empowered them with skills to help the mothers overcome these problems. The commonly identified breastfeeding problems included "not enough breast milk", sore nipples and mastitis as well as identifying poor positioning of a baby at the breast. This was also reported in a previous hospital based study in Uganda [31] .\n\n We further observed that follow-up of the peer counsellors in their communities helped to motivate them so that they neither failed nor lost their confidence. Follow up supervision served as a way of addressing the challenges the peer counsellors met in their work and this was appreciated. It provided a mechanism for continued training for them as well sharing their experiences with each other and their supervisors. They were able to consult where they encountered difficulties. This interaction provided an avenue for the supervisors to re-enforce some information and skills which were observed to be deficient while observing the peer counsellors at work. Often the peer counsellors were able to suggest solutions during meetings which boosted their confidence further. This also added to their credibility with the mothers. This is similar The Intervention • Training rural women as peer counsellors for support of exclusive breastfeeding is feasible • Introducing an activity in a community can be a long process requiring multiple visits starting with the district down to the lowest level to ensure community involvement.',
    'How did the follow-up supervision of the peer counsellors contribute to their success in supporting breastfeeding mothers?\n',
    'How does statin use affect the mortality rates of CDI patients?\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.7550, -0.0372],
#         [ 0.7550,  1.0000, -0.0165],
#         [-0.0372, -0.0165,  1.0000]])
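
Since the model was trained on anchor questions paired with supporting passages, a natural use is question-to-passage retrieval. Below is a short, hedged sketch using util.semantic_search from the same library; the two-passage corpus is purely illustrative and not taken from the training data.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("digo-prayudha/test-modernbert-embed-base-miriad")

# Illustrative passages, not from the actual training set
corpus = [
    "Follow-up supervision gave peer counsellors a forum to share experiences and keep their confidence.",
    "Several studies have examined statin use in relation to mortality among CDI patients.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "How did follow-up supervision help the peer counsellors?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Top-k cosine-similarity search over the corpus
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}", corpus[hit["corpus_id"]])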

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.6655
cosine_accuracy@3 0.9045
cosine_accuracy@5 0.9455
cosine_accuracy@10 0.9695
cosine_precision@1 0.6655
cosine_precision@3 0.3015
cosine_precision@5 0.1891
cosine_precision@10 0.097
cosine_recall@1 0.6655
cosine_recall@3 0.9045
cosine_recall@5 0.9455
cosine_recall@10 0.9695
cosine_ndcg@10 0.8327
cosine_mrr@10 0.7871
cosine_map@100 0.7884

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.668
cosine_accuracy@3 0.9
cosine_accuracy@5 0.943
cosine_accuracy@10 0.9675
cosine_precision@1 0.668
cosine_precision@3 0.3
cosine_precision@5 0.1886
cosine_precision@10 0.0968
cosine_recall@1 0.668
cosine_recall@3 0.9
cosine_recall@5 0.943
cosine_recall@10 0.9675
cosine_ndcg@10 0.831
cosine_mrr@10 0.7855
cosine_map@100 0.7869

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.6435
cosine_accuracy@3 0.891
cosine_accuracy@5 0.933
cosine_accuracy@10 0.964
cosine_precision@1 0.6435
cosine_precision@3 0.297
cosine_precision@5 0.1866
cosine_precision@10 0.0964
cosine_recall@1 0.6435
cosine_recall@3 0.891
cosine_recall@5 0.933
cosine_recall@10 0.964
cosine_ndcg@10 0.8178
cosine_mrr@10 0.7693
cosine_map@100 0.7707

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.637
cosine_accuracy@3 0.8665
cosine_accuracy@5 0.9105
cosine_accuracy@10 0.946
cosine_precision@1 0.637
cosine_precision@3 0.2888
cosine_precision@5 0.1821
cosine_precision@10 0.0946
cosine_recall@1 0.637
cosine_recall@3 0.8665
cosine_recall@5 0.9105
cosine_recall@10 0.946
cosine_ndcg@10 0.8028
cosine_mrr@10 0.7556
cosine_map@100 0.7576

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.568
cosine_accuracy@3 0.8155
cosine_accuracy@5 0.865
cosine_accuracy@10 0.9165
cosine_precision@1 0.568
cosine_precision@3 0.2718
cosine_precision@5 0.173
cosine_precision@10 0.0917
cosine_recall@1 0.568
cosine_recall@3 0.8155
cosine_recall@5 0.865
cosine_recall@10 0.9165
cosine_ndcg@10 0.7516
cosine_mrr@10 0.6977
cosine_map@100 0.7007
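
The five tables above report the same retrieval evaluation at the five Matryoshka dimensions used during training (768, 512, 256, 128, 64; see the loss configuration under Training Details), so retrieval quality degrades gracefully as the embeddings are truncated. If you want the smaller embeddings at inference time, here is a sketch using the standard truncate_dim argument:

from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to one of the trained Matryoshka dimensions
model = SentenceTransformer(
    "digo-prayudha/test-modernbert-embed-base-miriad",
    truncate_dim=256,
)
embeddings = model.encode(["How does statin use affect the mortality rates of CDI patients?"])
print(embeddings.shape)  # (1, 256)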

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 8,095 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string, min 467 / mean 944.9 / max 1460 tokens
    • anchor: string, min 9 / mean 19.7 / max 61 tokens
  • Samples:
    positive: 24 If the dot is in a fixed location, it is called a laser Doppler flow meter. If the beam scans the skin, a large skin area can be scanned and a laser Doppler image of the skin surface reflecting blood flow can be seen. [25] [26] [27] These devices are called laser Doppler imagers. Another technique is laser speckle flow imagers. They project a constant speckle laser pattern on the skin to obtain rapid pictures of flow, typically 25 per second compared with 2 min using a laser Doppler imager. 28 The speed is a sacrifice for depth of penetration, which is less than 1/2 dermal thickness. Depending on laser frequency and power, all techniques have different areas they cover and different penetration into tissue.

    There are numerous pros and cons to this technique. First, skin blood flow varies continuously because of vasomotor rhythm and respiration. Blood flow increases slightly during exhaling and is reduced slightly during inhalation. If flow is sampled too quickly, it may be high or...
    anchor: How does the heated thermistor pair technique measure skin blood flow?

    positive: 126 -128 Furthermore, NGF is locally up-regulated in humans presenting with chronic pain, such as arthritis, migraine/headache, fibromyalgia, or peripheral nerve injury. 129 -132 These observations suggest that in humans, as in preclinical animal models, the ongoing production of NGF may be involved in chronic pain and changes in sensitization. Indeed, there are at least three major pharmacologic strategies under development that target NGF-TrkA signaling for the treatment of chronic pain and that have produced effective reduction in hypersensitivity in preclinical models. These are sequestration of NGF or inhibiting its binding to TrkA, 61, 133 antagonizing TrkA so as to block NGF from binding to TrkA, 134 -136 and blocking TrkA kinase activity. 137 Among the first such molecules to be investigated preclinically were a TrkA-IgG fusion protein, 138 MNAC13, 134 and PD90780, 136 which act by inhibiting the binding of NGF to TrkA and ALE0540, 135 which appears to act by modulating the int...
    anchor: How do humanized anti-NGF monoclonal antibodies exert their analgesic effect?

    positive: It was not possible to correct the estimates for withinindividual variation in levels of the liver enzymes over time which may have underestimated the associations, because data involving repeat measurements were not reported by all the contributing studies. There are data to suggest that the levels of these enzymes in individuals can fluctuate considerably over time 61 ; hence, the associations demonstrated may be even stronger. Studies are therefore needed with serial measurements of these liver enzymes to be able to adjust for regression dilution bias.

    There was substantial heterogeneity among the available prospective studies. Given this, it was debatable whether pooled estimates should be presented rather than reporting estimates in relevant subgroups, as the presence of heterogeneity makes pooling of risk estimates data somewhat controversial. We however systematically explored and identified the possible sources of heterogeneity using stratified analyses, meta-regression and s...
    anchor: Are there geographical variations in the association between ALT levels and all-cause mortality?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
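
For reference, a sketch of how this configuration maps onto the sentence-transformers API; model here is assumed to be the SentenceTransformer being fine-tuned.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# MultipleNegativesRankingLoss treats the other in-batch samples as negatives;
# MatryoshkaLoss applies it at each truncated dimension with equal weight.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)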
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • gradient_accumulation_steps: 32
  • learning_rate: 2e-05
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch
  • batch_sampler: no_duplicates
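
Note that per_device_train_batch_size 8 combined with gradient_accumulation_steps 32 gives an effective batch size of 256 per optimizer step (though MultipleNegativesRankingLoss still draws its in-batch negatives from each per-device batch of 8). Below is a hedged sketch of a matching trainer setup; the data file name is illustrative, as the json dataset itself is not published with this card.

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Hypothetical local file with "positive" and "anchor" columns
train_dataset = load_dataset("json", data_files="train.json", split="train")

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base-miriad",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=32,  # effective batch size: 8 * 32 = 256
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    # no_duplicates avoids repeating a sample within a batch, which would act
    # as a false negative for MultipleNegativesRankingLoss
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    # The card also used eval_strategy="epoch" and load_best_model_at_end=True,
    # which additionally require an eval_dataset or evaluator (omitted here).
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()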

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 32
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step   Training Loss   dim_768_cosine_ndcg@10   dim_512_cosine_ndcg@10   dim_256_cosine_ndcg@10   dim_128_cosine_ndcg@10   dim_64_cosine_ndcg@10
1.0      32     -               0.8271                   0.8235                   0.8137                   0.7953                   0.7404
1.5692   50     0.1536          -                        -                        -                        -                        -
2.0      64     -               0.8328                   0.8312                   0.8169                   0.8022                   0.7519
3.0      96     -               0.8327                   0.831                    0.8178                   0.8028                   0.7516
  • The epoch 3 (step 96) row corresponds to the saved checkpoint; its scores match the evaluation tables above.

Framework Versions

  • Python: 3.12.11
  • Sentence Transformers: 5.1.0
  • Transformers: 4.56.2
  • PyTorch: 2.8.0+cu126
  • Accelerate: 1.10.1
  • Datasets: 4.1.1
  • Tokenizers: 0.22.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}