---
pretty_name: Australian Tax Guidance Retrieval
task_categories:
  - text-retrieval
  - question-answering
  - text-ranking
tags:
  - legal
  - law
  - tax
  - australia
  - markdown
source_datasets:
  - ATO Community
language:
  - en
language_details: en-AU
annotations_creators:
  - expert-generated
language_creators:
  - found
license: cc-by-4.0
size_categories:
  - n<1K
dataset_info:
  - config_name: default
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: score
        dtype: float64
    splits:
      - name: test
        num_examples: 112
  - config_name: corpus
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: corpus
        num_examples: 105
  - config_name: queries
    features:
      - name: _id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: queries
        num_examples: 112
configs:
  - config_name: default
    data_files:
      - split: test
        path: default.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl
---

# Australian Tax Guidance Retrieval 🏦

Australian Tax Guidance Retrieval by Isaacus is a novel, diverse, and challenging legal information retrieval evaluation dataset consisting of 112 real-life Australian tax law questions paired with expert-annotated, relevant Australian Government tax guidance and policies.

Uniquely, this dataset sources its real-life tax questions from the posts of everyday Australian taxpayers on the ATO Community forum, with relevant Australian Government guidance and policy in turn being sourced from the answers of tax professionals and ATO employees.

Because its questions center on substantive and often complex tax problems, broadly representative of those faced by everyday Australian taxpayers, this dataset is particularly valuable for robustly evaluating the legal retrieval capabilities and tax domain understanding of information retrieval models.

This dataset forms part of the Massive Legal Embedding Benchmark (MLEB), the largest, most diverse, and most comprehensive benchmark for legal text embedding models.

## Structure 🗂️

As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus`, and `queries`.

The `default` split pairs questions (`query-id`) with relevant materials (`corpus-id`), each pair having a score of 1.

The `corpus` split contains Markdown-formatted Australian Government guidance and policies, with the text of each document stored in the `text` key and its ID stored in the `_id` key. There is also a `title` column, which is deliberately set to an empty string in all cases for compatibility with the `mteb` library.

The `queries` split contains Markdown-formatted questions, with the text of a question being stored in the `text` key and its ID being stored in the `_id` key.
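
For illustration, the three splits can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming a placeholder repository ID (substitute this dataset's actual Hub ID):

```python
from datasets import load_dataset

# NOTE: placeholder repository ID; replace with this dataset's actual Hub ID.
REPO_ID = "isaacus/australian-tax-guidance-retrieval"

# Relevance judgements: (query-id, corpus-id, score) triples.
qrels = load_dataset(REPO_ID, "default", split="test")

# Guidance and policy documents: (_id, title, text).
corpus = load_dataset(REPO_ID, "corpus", split="corpus")

# Tax law questions: (_id, text).
queries = load_dataset(REPO_ID, "queries", split="queries")

print(len(queries), len(corpus), len(qrels))  # 112 105 112
print(qrels[0])  # e.g. {'query-id': ..., 'corpus-id': ..., 'score': 1.0}
```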

## Methodology 🧪

This dataset was constructed by:

  1. For each of the 14 sub-topics of the ATO Community forum that did not fall under the parent topics 'Online Services' and 'Tax Professionals' (which were found to consist almost exclusively of practical questions about the use of ATO services rather than substantive tax law queries), selecting 8 questions that:
    1. Had at least one answer containing at least one hyperlink (where there were multiple competing answers, the answer marked by the asker as best was used; otherwise, the answers of ATO employees were preferred over those of tax professionals).
    2. Were about a substantive tax law problem rather than a merely practical question about, for example, the use of ATO services or how to file tax returns.
  2. For each sampled question, visiting the hyperlink in the selected answer that appeared most relevant to the question and copying as much text from the linked page as appeared relevant, ranging from a single paragraph to the entire document.
  3. Using a purpose-built Chrome extension to extract questions and relevant passages directly to Markdown to preserve the semantics of added markup.
  4. Lightly cleaning queries and passages by replacing consecutive sequences of two or more newlines with exactly two newlines and removing leading and trailing whitespace (see the sketch below this list).
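
For reference, the cleaning described in step 4 amounts to a single normalisation pass. A minimal sketch in Python (an illustration, not the exact script used):

```python
import re

def clean(text: str) -> str:
    # Collapse runs of two or more newlines into exactly two newlines,
    # then strip leading and trailing whitespace.
    return re.sub(r"\n{2,}", "\n\n", text).strip()

assert clean("  Foo\n\n\n\nBar\n") == "Foo\n\nBar"
```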

## License 📜

This dataset is licensed under CC BY 4.0, which allows for both non-commercial and commercial use as long as appropriate attribution is given.

## Citation 🔖

If you use this dataset, please cite the Massive Legal Embedding Benchmark (MLEB):

```bibtex
@misc{butler2025massivelegalembeddingbenchmark,
      title={The Massive Legal Embedding Benchmark (MLEB)},
      author={Umar Butler and Abdur-Rahman Butler and Adrian Lucas Malec},
      year={2025},
      eprint={2510.19365},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.19365},
}
```