---
pretty_name: Singaporean Judicial Keywords
task_categories:
  - text-retrieval
  - summarization
  - text-ranking
tags:
  - legal
  - law
  - judicial
  - singapore
source_datasets:
  - Singapore Courts
language:
  - en
language_details: en-SG, en-GB
annotations_creators:
  - found
language_creators:
  - found
license: cc-by-4.0
size_categories:
  - n<1K
dataset_info:
  - config_name: default
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: score
        dtype: float64
    splits:
      - name: test
        num_examples: 500
  - config_name: corpus
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: corpus
        num_examples: 500
  - config_name: queries
    features:
      - name: _id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: queries
        num_examples: 500
configs:
  - config_name: default
    data_files:
      - split: test
        path: default.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl
---

# Singaporean Judicial Keywords 🏛️

Singaporean Judicial Keywords by Isaacus is a challenging legal information retrieval evaluation dataset consisting of 500 catchword-judgment pairs sourced from the Singapore Judiciary.

Uniquely, the keywords in this dataset are real-world annotations created by subject matter experts, namely, Singaporean law reporters, as opposed to being constructed ex post facto by third parties.

Additionally, unlike standard keyword queries, judicial catchwords are meant to capture the concepts and principles most essential and relevant to a case, even where those elements are never explicitly mentioned in it.

Such features make this dataset especially useful for the robust evaluation of the legal conceptual understanding and overall knowledge of information retrieval models.

This dataset forms part of the Massive Legal Embeddings Benchmark (MLEB), the largest, most diverse, and most comprehensive benchmark for legal text embedding models.

## Structure 🗂️

As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus`, and `queries`.

The `default` split pairs catchwords (`query-id`) with judgments (`corpus-id`), each pair having a `score` of 1.

The `corpus` split contains Singaporean court judgments (excluding catchwords and preceding metadata), with the text of each judgment stored under the `text` key and its ID stored under the `_id` key. There is also a `title` column that is deliberately set to an empty string in all cases for compatibility with the `mteb` library.

The `queries` split contains catchwords, with the text of each set of catchwords stored under the `text` key and its ID stored under the `_id` key.
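The relationship between the three splits can be sketched with illustrative records. Note that the IDs and texts below are invented placeholders that merely mirror the schema described above, not actual entries from the dataset:

```python
# Invented records mirroring the dataset's schema (not real dataset entries).
corpus = [{"_id": "judgment-1", "title": "", "text": "The appellant entered into a contract with the respondent."}]
queries = [{"_id": "catchwords-1", "text": "Contract law - Breach - Damages"}]
default = [{"query-id": "catchwords-1", "corpus-id": "judgment-1", "score": 1.0}]

# Index the corpus and queries by ID, then resolve each qrel pair.
corpus_by_id = {doc["_id"]: doc for doc in corpus}
queries_by_id = {q["_id"]: q for q in queries}

for pair in default:
    query = queries_by_id[pair["query-id"]]
    judgment = corpus_by_id[pair["corpus-id"]]
    print(query["text"], "->", judgment["text"][:30])
```

Each `default` row is thus a relevance judgment linking one set of catchwords to the judgment they were written for.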

## Methodology 🧪

This dataset was constructed as follows:

1. All publicly available Singaporean court judgments were collected.
2. The judgments were converted into plain text with Inscriptis.
3. The judgments were cleaned, and near duplicates were removed with the simhash algorithm.
4. Multiple complex regex patterns were used to extract catchwords from the judgments.
5. The catchwords, and everything preceding them, were removed from the judgments, in order to force models to focus on representing the core semantics of judgments' texts rather than their metadata-rich cover sheets.
6. Finally, 500 catchword-judgment pairs were randomly selected for inclusion in this dataset.
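The near-duplicate removal step can be illustrated with a minimal simhash sketch. This is a toy implementation for illustration only: the hash width, tokenisation, and Hamming distance threshold below are assumptions, not the actual parameters used to build this dataset:

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """Compute a simhash fingerprint from whitespace-delimited tokens."""
    weights = [0] * bits
    for token in text.lower().split():
        # Derive a 64-bit hash per token (MD5 is an arbitrary choice here).
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if weights[i] > 0)

def hamming(a: int, b: int) -> int:
    """Count the bit positions at which two fingerprints differ."""
    return bin(a ^ b).count("1")

def dedupe(texts: list[str], threshold: int = 3) -> list[str]:
    """Keep only texts whose fingerprints differ from all kept ones."""
    kept, fingerprints = [], []
    for text in texts:
        fp = simhash(text)
        if all(hamming(fp, other) > threshold for other in fingerprints):
            kept.append(text)
            fingerprints.append(fp)
    return kept
```

Because near-identical documents share most of their tokens, their fingerprints differ in only a few bits, so the second copy falls within the threshold and is dropped.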

## License 📜

This dataset is licensed under CC BY 4.0, which permits both non-commercial and commercial use of this dataset as long as appropriate attribution is given.

## Citation 🔖

If you use this dataset, please cite the Massive Legal Embeddings Benchmark (MLEB):

```bibtex
@misc{butler2025massivelegalembeddingbenchmark,
      title={The Massive Legal Embedding Benchmark (MLEB)},
      author={Umar Butler and Abdur-Rahman Butler and Adrian Lucas Malec},
      year={2025},
      eprint={2510.19365},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.19365},
}
```