---
pretty_name: UK Legislation
tags:
- uk
- law
- legislation
license: other
language: en
annotations_creators:
- no-annotation
language_creators:
- found
size_categories:
- 100K<n<1M
task_categories:
- text-generation
source_datasets:
- original
---
# UK Legislation Dataset
This directory packages scraped UK legislation into a layout that can be ingested directly with the Hugging Face `datasets` library. All documents live in JSON Lines format inside `data/`, with one piece of legislation per line. The schema captures both plain-text and XML renderings, along with document-level metadata and section breakdowns.
## Repository Layout
- `data/train.jsonl` – full corpus of 175,515 documents ready for `load_dataset` (see the snippet below).
- `meta/` – auxiliary files that describe the scrape workflow and provenance.
- `dataset_config.json` – declarative configuration that mirrors the `load_dataset` arguments used during training.
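For a quick sanity check of the layout, the sketch below (assuming it is run from the repository root) reads the first line of `data/train.jsonl` and prints its top-level keys without loading the whole corpus:

```python
import json
from pathlib import Path

# Peek at the first record; each line of train.jsonl is one piece of legislation.
with Path("data/train.jsonl").open(encoding="utf-8") as handle:
    record = json.loads(next(handle))

print(sorted(record.keys()))
print(record["id"], "-", record["title"])
```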
## Data Fields
Every record is a JSON object with the following keys:
- `id` (string): canonical legislation identifier, e.g. `ukpga_2025_23`.
- `title` (string): preferred title.
- `type` (string): legislation class (`ukpga`, `uksi`, etc.).
- `year` (int): legislation year.
- `number` (int): act or statutory instrument number within the year.
- `text_content` (string): consolidated plain-text content.
- `xml_content` (string): raw XML body as served by legislation.gov.uk.
- `publication_date`, `enacted_date`, `coming_into_force_date` (string): ISO dates when known.
- `extent` (array[string]): geographic extent codes.
- `status` (string): legislation status (`In Force`, `Prospective`, etc.).
- `sections` (array[object]): ordered section data with `id`, `number`, `title`, `content` fields per section.
- `metadata` (object): scraped metadata such as `classification` and `title`.
- `url` (string): source URL.
- `checksum` (string): SHA1 checksum of the XML payload (see the sketch after this list).
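As an illustration of how the fields fit together, the sketch below recomputes a record's `checksum`. It assumes the checksum is the hex-encoded SHA1 of the UTF-8 bytes of `xml_content`; treat it as a starting point rather than a definitive verification routine:

```python
import hashlib
import json
from pathlib import Path

with Path("data/train.jsonl").open(encoding="utf-8") as handle:
    record = json.loads(next(handle))

# Assumption: checksum is the hex digest of the XML payload as stored in xml_content.
digest = hashlib.sha1(record["xml_content"].encode("utf-8")).hexdigest()
print(record["id"], digest == record["checksum"])
```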
## Loading With `datasets`
```python
from datasets import Features, Sequence, Value, load_dataset

dataset = load_dataset(
    "json",
    data_files={"train": "data/train.jsonl"},
    features=Features({
        "id": Value("string"),
        "title": Value("string"),
        "type": Value("string"),
        "year": Value("int32"),
        "number": Value("int32"),
        "text_content": Value("string"),
        "xml_content": Value("string"),
        "publication_date": Value("string"),
        "enacted_date": Value("string"),
        "coming_into_force_date": Value("string"),
        "extent": Sequence(Value("string")),
        "status": Value("string"),
        # A list wrapping a feature dict declares a list of structs.
        "sections": [{
            "id": Value("string"),
            "number": Value("string"),
            "title": Value("string"),
            "content": Value("string"),
        }],
        "metadata": {
            "classification": Value("string"),
            "title": Value("string"),
        },
        "url": Value("string"),
        "checksum": Value("string"),
    }),
)
```
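Once loaded, the usual `datasets` operations apply. A small, illustrative inspection (the `ukpga` filter value comes from the `type` field described above):

```python
train = dataset["train"]
print(train)  # schema and row count
print(train[0]["id"], train[0]["title"])

# Illustrative filter: keep only UK Public General Acts.
ukpga = train.filter(lambda example: example["type"] == "ukpga")
print(f"{len(ukpga)} ukpga documents")
```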
You can further split the dataset with the usual `train_test_split` utilities or by sharding `train.jsonl` into additional files.
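For example, a held-out evaluation split can be carved out in memory (a sketch; the 5% test size and seed are arbitrary choices, not part of the dataset):

```python
# Create a reproducible 95/5 split from the single training file.
splits = dataset["train"].train_test_split(test_size=0.05, seed=42)
print(len(splits["train"]), len(splits["test"]))
```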
## Provenance & Licensing
The dataset originates from legislation.gov.uk. Legislative content is generally covered by the UK Open Government Licence v3.0, but confirm applicability to your downstream use-case.