---
pretty_name: License TL;DR Retrieval
task_categories:
- text-retrieval
- summarization
tags:
- legal
- law
- contractual
- licenses
language:
- en
annotations_creators:
- found
language_creators:
- found
license: cc-by-3.0
size_categories:
- n<1K
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_examples: 65
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_examples: 65
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_examples: 65
configs:
- config_name: default
data_files:
- split: test
path: default.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
---
# License TL;DR Retrieval πŸ“‘
**License TL;DR Retrieval** by [Isaacus](https://isaacus.com/) is a challenging legal information retrieval evaluation dataset consisting of 65 summary-license pairs sourced from [tl;drLegal](https://www.tldrlegal.com/).
It is intended to stress-test the ability of information retrieval models to match open source licenses with summaries of their terms.
The dataset forms part of the [Massive Legal Embeddings Benchmark (MLEB)](https://isaacus.com/mleb), the largest, most diverse, and most comprehensive benchmark for legal text embedding models.
## Structure πŸ—‚οΈ
As per the MTEB information retrieval dataset format, this dataset comprises three subsets: `default`, `corpus`, and `queries`.
The `default` subset pairs summaries (`query-id`) with licenses (`corpus-id`), with each pair having a `score` of 1.
The `corpus` subset contains licenses from tl;drLegal, with their full texts stored in the `text` key and their ids stored in the `_id` key. There is also a `title` column, which is deliberately set to an empty string in all cases for compatibility with the [`mteb`](https://github.com/embeddings-benchmark/mteb) library.
The `queries` subset contains summaries of licenses, with the text of each summary stored in the `text` key and its id stored in the `_id` key.
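The relationship between the three subsets can be sketched with hypothetical records that mirror the schema above (the ids and texts below are illustrative placeholders, not actual records from the dataset):

```python
# Illustrative records following the dataset's schema. The real data lives in
# corpus.jsonl, queries.jsonl, and default.jsonl respectively.
corpus = [
    {"_id": "license-1", "title": "", "text": "Permission is hereby granted, free of charge..."},
]
queries = [
    {"_id": "summary-1", "text": "You may use, copy, and modify the software freely."},
]
qrels = [
    {"query-id": "summary-1", "corpus-id": "license-1", "score": 1.0},
]

# Index licenses and summaries by id, then resolve each relevance judgment
# in the default subset to its (summary, license) text pair.
corpus_by_id = {doc["_id"]: doc for doc in corpus}
queries_by_id = {query["_id"]: query for query in queries}

pairs = [
    (queries_by_id[rel["query-id"]]["text"], corpus_by_id[rel["corpus-id"]]["text"])
    for rel in qrels
]
```

A retrieval model is evaluated on its ability to rank each summary's paired license above all other licenses in the corpus.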
## Methodology πŸ§ͺ
This dataset was constructed by collecting all licenses available on [tl;drLegal](https://www.tldrlegal.com/) and pairing their human-created summaries with their full texts.
## License πŸ“œ
This dataset is licensed under [CC BY 3.0](https://creativecommons.org/licenses/by/3.0/legalcode), which permits both non-commercial and commercial use of the dataset provided that appropriate attribution is given.
## Citation πŸ”–
If you use this dataset, please cite the [Massive Legal Embeddings Benchmark (MLEB)](https://arxiv.org/abs/2510.19365):
```bibtex
@misc{butler2025massivelegalembeddingbenchmark,
  title={The Massive Legal Embedding Benchmark (MLEB)},
  author={Umar Butler and Abdur-Rahman Butler and Adrian Lucas Malec},
  year={2025},
  eprint={2510.19365},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2510.19365},
}
```