SentenceTransformer based on jhu-clsp/mmBERT-small

This is a sentence-transformers model finetuned from jhu-clsp/mmBERT-small on the msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1 dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: jhu-clsp/mmBERT-small
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'transformer_task': 'feature-extraction', 'modality_config': {'text': {'method': 'forward', 'method_output_name': 'last_hidden_state'}}, 'module_output_name': 'token_embeddings', 'message_format': 'auto', 'architecture': 'ModernBertModel'})
  (1): Pooling({'embedding_dimension': 384, 'pooling_mode': 'mean', 'include_prompt': True})
)
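
In other words, the model is a ModernBERT encoder whose token embeddings are mean-pooled into a single 384-dimensional sentence embedding. The snippet below is a rough sketch of what these two modules do, assuming the transformer weights and tokenizer of this repository load directly with AutoModel/AutoTokenizer; the packaged SentenceTransformer usage in the next section is the recommended path.

# Sketch of module (0) Transformer + module (1) mean Pooling.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "tomaarsen/mmBERT-small-msmarco-fa2-flattened-mnrl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

batch = tokenizer(["do bond funds pay dividends"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state   # [batch, seq_len, 384]

# Mean pooling: average token embeddings, ignoring padding positions.
mask = batch["attention_mask"].unsqueeze(-1).float()        # [batch, seq_len, 1]
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_embedding.shape)                              # torch.Size([1, 384])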

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/mmBERT-small-msmarco-fa2-flattened-mnrl")
# Run inference
sentences = [
    'do bond funds pay dividends',
    "A bond fund or debt fund is a fund that invests in bonds, or other debt securities. Bond funds can be contrasted with stock funds and money funds. Bond funds typically pay periodic dividends that include interest payments on the fund's underlying securities plus periodic realized capital appreciation. Bond funds typically pay higher dividends than CDs and money market accounts. Most bond funds pay out dividends more frequently than individual bonds.",
    'You would have $71,200 paying out $1,687 in annual dividends. That is about $4.62 for working up in the morning. Interestingly enough, that 2.37% yield is at a low point because The Wellington Fund is a “balanced fund” meaning that it holds a combination of stocks and bonds.',
    "If a cavity is causing the toothache, your dentist will fill the cavity or possibly extract the tooth, if necessary. A root canal might be needed if the cause of the toothache is determined to be an infection of the tooth's nerve. Bacteria that have worked their way into the inner aspects of the tooth cause such an infection. An antibiotic may be prescribed if there is fever or swelling of the jaw.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [4, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# A 4x4 tensor of pairwise cosine similarity scores (torch.Size([4, 4]))
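
For retrieval-style use, the same embeddings can rank passages against a query. The following is a small sketch using util.semantic_search, reusing the sentences and embeddings from the example above (treating the first sentence as the query and the rest as a tiny corpus):

from sentence_transformers import util

query_embedding = embeddings[:1]     # the query "do bond funds pay dividends"
corpus_embeddings = embeddings[1:]   # the remaining three passages

# Returns one ranked hit list per query; each hit is a dict with "corpus_id" and "score".
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)
for hit in hits[0]:
    print(sentences[1 + hit["corpus_id"]][:60], round(hit["score"], 4))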

Evaluation

Metrics

Triplet

  • cosine_accuracy: 0.84
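
cosine_accuracy is the fraction of (anchor, positive, negative) triplets for which the anchor embedding is closer to the positive than to the negative under cosine similarity. Below is a minimal sketch of running this kind of evaluation with TripletEvaluator; the triplets are illustrative, while the reported 0.84 comes from the 1,000-sample evaluation split described further down.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("tomaarsen/mmBERT-small-msmarco-fa2-flattened-mnrl")

# Illustrative triplets; the evaluator name matches the column in the training logs below.
evaluator = TripletEvaluator(
    anchors=["do bond funds pay dividends"],
    positives=["Bond funds typically pay periodic dividends that include interest payments."],
    negatives=["A root canal might be needed if the tooth's nerve is infected."],
    name="msmarco-co-condenser-eval-triplet",
)
results = evaluator(model)
print(results)  # includes "msmarco-co-condenser-eval-triplet_cosine_accuracy"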

Training Details

Training Dataset

msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1

  • Dataset: msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1 at 84ed2d3
  • Size: 300,000 training samples
  • Columns: query, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • query: string; min: 11, mean: 32.46, max: 140 characters
    • positive: string; min: 67, mean: 336.71, max: 864 characters
    • negative: string; min: 56, mean: 339.96, max: 900 characters
  • Samples:
    • query: what is the meaning of menu planning
      positive: Menu planning is the selection of a menu for an event. Such as picking out the dinner for your wedding or even a meal at a Birthday Party. Menu planning is when you are preparing a calendar of meals and you have to sit down and decide what meat and veggies you want to serve on each certain day.
      negative: Menu Costs. In economics, a menu cost is the cost to a firm resulting from changing its prices. The name stems from the cost of restaurants literally printing new menus, but economists use it to refer to the costs of changing nominal prices in general.
    • query: how old is brett butler
      positive: Brett Butler is 59 years old. To be more precise (and nerdy), the current age as of right now is 21564 days or (even more geeky) 517536 hours. That's a lot of hours!
      negative: Passed in: St. John's, Newfoundland and Labrador, Canada. Passed on: 16/07/2016. Published in the St. John's Telegram. Passed away suddenly at the Health Sciences Centre surrounded by his loving family, on July 16, 2016 Robert (Bobby) Joseph Butler, age 52 years. Predeceased by his special aunt Geri Murrin and uncle Mike Mchugh; grandparents Joe and Margaret Murrin and Jack and Theresa Butler.
    • query: when was the last navajo treaty sign?
      positive: In Executive Session, Senate of the United States, July 25, 1868. Resolved, (two-thirds of the senators present concurring,) That the Senate advise and consent to the ratification of the treaty between the United States and the Navajo Indians, concluded at Fort Sumner, New Mexico, on the first day of June, 1868.
      negative: Share Treaty of Greenville. The Treaty of Greenville was signed August 3, 1795, between the United States, represented by Gen. Anthony Wayne, and chiefs of the Indian tribes located in the Northwest Territory, including the Wyandots, Delawares, Shawnees, Ottawas, Miamis, and others.
  • Loss: MultipleNegativesRankingLoss with these parameters (a conceptual sketch of the loss follows this list):
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false,
        "directions": [
            "query_to_doc"
        ],
        "partition_mode": "joint",
        "hardness_mode": null,
        "hardness_strength": 0.0
    }
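
Conceptually, MultipleNegativesRankingLoss scores each query against every passage in the batch (its own positive, the other queries' positives, and any provided negatives) and applies cross-entropy with the matching positive as the target, here with cosine similarity scaled by 20. A toy sketch with random embeddings:

import torch
import torch.nn.functional as F

batch_size, dim, scale = 4, 384, 20.0
queries  = F.normalize(torch.randn(batch_size, dim), dim=-1)      # query embeddings
passages = F.normalize(torch.randn(2 * batch_size, dim), dim=-1)  # positives followed by negatives

scores = scale * queries @ passages.T   # scaled cosine similarities, shape [4, 8]
labels = torch.arange(batch_size)       # the positive for query i sits at column i
loss = F.cross_entropy(scores, labels)  # all other passages act as in-batch negatives
print(loss)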
    

Evaluation Dataset

msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1

  • Dataset: msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1 at 84ed2d3
  • Size: 1,000 evaluation samples
  • Columns: query, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • query: string; min: 10, mean: 32.61, max: 110 characters
    • positive: string; min: 81, mean: 344.28, max: 908 characters
    • negative: string; min: 97, mean: 342.31, max: 963 characters
  • Samples:
    • query: what county is holly springs nc in
      positive: Holly Springs, North Carolina. Holly Springs is a town in Wake County, North Carolina, United States. As of the 2010 census, the town population was 24,661, over 2½ times its population in 2000. Contents.
      negative: The Mt. Holly Springs Park & Resort. One of the numerous trolley routes that carried people around the county at the turn of the century was the Carlisle & Mt. Holly Railway Company. The “Holly Trolley” as it came to be known was put into service by Patricio Russo and made its first run on May 14, 1901.
    • query: how long does nyquil stay in your system
      positive: In order to understand exactly how long Nyquil lasts, it is absolutely vital to learn about the various ingredients in the drug. One of the ingredients found in Nyquil is Doxylamine, which is an antihistamine. This specific medication has a biological half-life or 6 to 12 hours. With this in mind, it is possible for the drug to remain in the system for a period of 12 to 24 hours. It should be known that the specifics will depend on a wide variety of different factors, including your age and metabolism.
      negative: I confirmed that NyQuil is about 10% alcohol, a higher content than most domestic beers. When I asked about the relatively high proof, I was told that the alcohol dilutes the active ingredients. The alcohol free version is there for customers with addiction issues.. also found that in that version there is twice the amount of DXM. When I asked if I could speak to a chemist or scientist, I was told they didn't have anyone who fit that description there. It’s been eight years since I kicked NyQuil. I've been sober from alcohol for four years.
    • query: what are mineral water
      positive: 1 Mineral water – water from a mineral spring that contains various minerals, such as salts and sulfur compounds. 2 It comes from a source tapped at one or more bore holes or spring, and originates from a geologically and physically protected underground water source. Mineral water – water from a mineral spring that contains various minerals, such as salts and sulfur compounds. 2 It comes from a source tapped at one or more bore holes or spring, and originates from a geologically and physically protected underground water source.
      negative: Minerals for Your Body. Drinking mineral water is beneficial to health and well-being. But it is not only the amount of water you drink that is important-what the water contains is even more essential.inerals for Your Body. Drinking mineral water is beneficial to health and well-being. But it is not only the amount of water you drink that is important-what the water contains is even more essential.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false,
        "directions": [
            "query_to_doc"
        ],
        "partition_mode": "joint",
        "hardness_mode": null,
        "hardness_strength": 0.0
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 128
  • num_train_epochs: 1
  • learning_rate: 8e-05
  • warmup_steps: 0.05
  • bf16: True
  • eval_strategy: steps
  • per_device_eval_batch_size: 128
  • batch_sampler: no_duplicates
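
Put together, a training run with these settings would look roughly like the sketch below. The output directory, the tiny in-memory dataset, the reuse of the training data for evaluation, and the warmup_ratio reading of the logged 0.05 value are assumptions; the real run used the 300,000-triplet MS MARCO set and the 1,000-sample evaluation split described above.

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

model = SentenceTransformer("jhu-clsp/mmBERT-small")

# Tiny illustrative (query, positive, negative) dataset standing in for the real MS MARCO triplets.
train_dataset = Dataset.from_dict({
    "query": ["what is the meaning of menu planning"],
    "positive": ["Menu planning is the selection of a menu for an event."],
    "negative": ["Menu Costs. In economics, a menu cost is the cost to a firm resulting from changing its prices."],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

args = SentenceTransformerTrainingArguments(
    output_dir="models/mmBERT-small-msmarco",    # illustrative path
    num_train_epochs=1,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=8e-5,
    warmup_ratio=0.05,                           # assumption: the logged warmup value 0.05 reads as a ratio
    bf16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,   # keeps duplicate texts out of a batch, which matters for in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,                  # illustrative; the real run used a held-out 1,000-sample split
    loss=loss,
)
trainer.train()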

All Hyperparameters

  • per_device_train_batch_size: 128
  • num_train_epochs: 1
  • max_steps: -1
  • learning_rate: 8e-05
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: None
  • warmup_steps: 0.05
  • optim: adamw_torch_fused
  • optim_args: None
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • optim_target_modules: None
  • gradient_accumulation_steps: 1
  • average_tokens_across_devices: True
  • max_grad_norm: 1.0
  • label_smoothing_factor: 0.0
  • bf16: True
  • fp16: False
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • use_liger_kernel: False
  • liger_kernel_config: None
  • use_cache: False
  • neftune_noise_alpha: None
  • torch_empty_cache_steps: None
  • auto_find_batch_size: False
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • include_num_input_tokens_seen: no
  • log_level: passive
  • log_level_replica: warning
  • disable_tqdm: False
  • project: huggingface
  • trackio_space_id: trackio
  • eval_strategy: steps
  • per_device_eval_batch_size: 128
  • prediction_loss_only: True
  • eval_on_start: False
  • eval_do_concat_batches: True
  • eval_use_gather_object: False
  • eval_accumulation_steps: None
  • include_for_metrics: []
  • batch_eval_metrics: False
  • save_only_model: False
  • save_on_each_node: False
  • enable_jit_checkpoint: False
  • push_to_hub: False
  • hub_private_repo: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_always_push: False
  • hub_revision: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • restore_callback_states_from_checkpoint: False
  • full_determinism: False
  • seed: 42
  • data_seed: None
  • use_cpu: False
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • dataloader_prefetch_factor: None
  • remove_unused_columns: True
  • label_names: None
  • train_sampling_strategy: random
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • ddp_backend: None
  • ddp_timeout: 1800
  • fsdp: []
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • deepspeed: None
  • debug: []
  • skip_memory_metrics: True
  • do_predict: False
  • resume_from_checkpoint: None
  • warmup_ratio: None
  • local_rank: -1
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss Validation Loss msmarco-co-condenser-eval-triplet_cosine_accuracy
-1 -1 - - 0.5340
0.0102 24 5.4808 - -
0.0205 48 5.2376 - -
0.0307 72 4.2629 - -
0.0410 96 3.0983 - -
0.0512 120 2.1074 - -
0.0614 144 1.5078 - -
0.0717 168 1.3295 - -
0.0819 192 1.0376 - -
0.0922 216 1.0018 - -
0.1003 235 - 1.1467 0.7610
0.1024 240 0.7672 - -
0.1126 264 0.7724 - -
0.1229 288 0.7220 - -
0.1331 312 0.6774 - -
0.1433 336 0.6984 - -
0.1536 360 0.6782 - -
0.1638 384 0.6620 - -
0.1741 408 0.5325 - -
0.1843 432 0.5893 - -
0.1945 456 0.4944 - -
0.2005 470 - 0.7816 0.7590
0.2048 480 0.5062 - -
0.2150 504 0.4083 - -
0.2253 528 0.4436 - -
0.2355 552 0.5312 - -
0.2457 576 0.4251 - -
0.2560 600 0.4881 - -
0.2662 624 0.4719 - -
0.2765 648 0.4556 - -
0.2867 672 0.3566 - -
0.2969 696 0.3735 - -
0.3008 705 - 0.4058 0.8410
0.3072 720 0.5212 - -
0.3174 744 0.4097 - -
0.3276 768 0.3429 - -
0.3379 792 0.3341 - -
0.3481 816 0.3187 - -
0.3584 840 0.3550 - -
0.3686 864 0.3735 - -
0.3788 888 0.3620 - -
0.3891 912 0.2912 - -
0.3993 936 0.2705 - -
0.4010 940 - 0.3238 0.8210
0.4096 960 0.2694 - -
0.4198 984 0.2875 - -
0.4300 1008 0.3122 - -
0.4403 1032 0.2973 - -
0.4505 1056 0.3642 - -
0.4608 1080 0.2962 - -
0.4710 1104 0.2805 - -
0.4812 1128 0.2454 - -
0.4915 1152 0.2715 - -
0.5013 1175 - 0.2668 0.8350
0.5017 1176 0.3350 - -
0.5119 1200 0.3056 - -
0.5222 1224 0.2710 - -
0.5324 1248 0.3994 - -
0.5427 1272 0.3313 - -
0.5529 1296 0.3144 - -
0.5631 1320 0.2923 - -
0.5734 1344 0.2912 - -
0.5836 1368 0.2360 - -
0.5939 1392 0.2508 - -
0.6015 1410 - 0.3140 0.8310
0.6041 1416 0.2394 - -
0.6143 1440 0.3832 - -
0.6246 1464 0.2197 - -
0.6348 1488 0.2956 - -
0.6451 1512 0.2050 - -
0.6553 1536 0.3045 - -
0.6655 1560 0.2322 - -
0.6758 1584 0.2233 - -
0.6860 1608 0.2498 - -
0.6962 1632 0.2225 - -
0.7018 1645 - 0.3156 0.8520
0.7065 1656 0.3178 - -
0.7167 1680 0.3205 - -
0.7270 1704 0.2739 - -
0.7372 1728 0.2206 - -
0.7474 1752 0.2258 - -
0.7577 1776 0.3038 - -
0.7679 1800 0.2068 - -
0.7782 1824 0.2092 - -
0.7884 1848 0.1976 - -
0.7986 1872 0.2832 - -
0.8020 1880 - 0.2351 0.8420
0.8089 1896 0.2208 - -
0.8191 1920 0.2190 - -
0.8294 1944 0.2076 - -
0.8396 1968 0.1939 - -
0.8498 1992 0.2083 - -
0.8601 2016 0.2059 - -
0.8703 2040 0.3119 - -
0.8805 2064 0.1962 - -
0.8908 2088 0.2442 - -
0.9010 2112 0.1859 - -
0.9023 2115 - 0.2889 0.8390
0.9113 2136 0.1995 - -
0.9215 2160 0.2216 - -
0.9317 2184 0.2370 - -
0.9420 2208 0.2155 - -
0.9522 2232 0.1852 - -
0.9625 2256 0.2151 - -
0.9727 2280 0.2082 - -
0.9829 2304 0.2741 - -
0.9932 2328 0.2627 - -
-1 -1 - - 0.8400

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.213 kWh
  • Carbon Emitted: 0.057 kg of CO2
  • Hours Used: 0.738 hours
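
For reference, CodeCarbon measures these figures by sampling CPU, GPU, and RAM power draw while the tracked code runs. A minimal sketch of wrapping a training run (tracker defaults are illustrative):

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()      # samples hardware power draw in the background
tracker.start()
# ... run trainer.train() or any other workload here ...
emissions_kg = tracker.stop()     # estimated kg of CO2-equivalent emitted
print(f"{emissions_kg:.3f} kg CO2eq")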

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 5.4.0.dev0
  • Transformers: 5.3.0.dev0
  • PyTorch: 2.10.0+cu128
  • Accelerate: 1.13.0.dev0
  • Datasets: 4.3.0
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{oord2019representationlearningcontrastivepredictive,
      title={Representation Learning with Contrastive Predictive Coding},
      author={Aaron van den Oord and Yazhe Li and Oriol Vinyals},
      year={2019},
      eprint={1807.03748},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/1807.03748},
}