T5-Large Fine-tuned on the combined XSum + CNN/DailyMail Datasets
Task: Abstractive Summarization (English)
Base Model: google-t5/t5-large
License: MIT
Overview
This model is a T5-Large checkpoint fine-tuned jointly on the XSum and CNN/DailyMail datasets. It produces concise, abstractive summaries and has been used as a baseline in published summarization research (see Papers Using This Model below).
Performance (XSum test set)
| Metric | Score |
|---|---|
| ROUGE-1 | 36.77 |
| ROUGE-2 | 14.69 |
| ROUGE-L | 30.06 |
| Loss | 1.64 |
| Avg. Length | 18.6 tokens |
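These scores can be checked approximately with the evaluate and datasets libraries. The snippet below is a minimal sketch under assumed settings (the decoding parameters, batch size, and dataset-loading details are not taken from the original evaluation and may yield slightly different numbers):

# Sketch of a ROUGE check on XSum (assumed settings, not the original evaluation script)
from datasets import load_dataset
from transformers import pipeline
import evaluate

summarizer = pipeline("summarization", model="sysresearch101/t5-large-finetuned-xsum-cnn")

# Depending on your datasets version, loading XSum may require trust_remote_code=True
xsum_test = load_dataset("EdinburghNLP/xsum", split="test[:100]")  # small slice for a quick check

predictions = [
    out["summary_text"]
    for out in summarizer(xsum_test["document"], max_length=80, min_length=20,
                          truncation=True, do_sample=False, batch_size=8)
]

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=xsum_test["summary"])
print({k: round(v * 100, 2) for k, v in scores.items()})  # evaluate reports fractions; scale to 0-100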
Usage
Quick Start
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="sysresearch101/t5-large-finetuned-xsum-cnn")

article = "Your article text here..."
summary = summarizer(article, max_length=80, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
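The pipeline also accepts a list of documents; a short batched-usage sketch (the batch size is an assumption, adjust it to your hardware):

# Batched summarization with the pipeline created above
articles = ["First article text...", "Second article text..."]
for result in summarizer(articles, max_length=80, min_length=20, truncation=True, batch_size=4):
    print(result["summary_text"])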
Advanced Usage
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sysresearch101/t5-large-finetuned-xsum-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("sysresearch101/t5-large-finetuned-xsum-cnn")

article = "Your article text here..."
# T5 expects a task prefix, so prepend "summarize: " before tokenizing
inputs = tokenizer("summarize: " + article, return_tensors="pt", max_length=512, truncation=True)
# Generation with beam search plus sampling; tune these decoding parameters as needed
outputs = model.generate(
    **inputs,
    max_length=80,
    min_length=20,
    num_beams=4,
    no_repeat_ngram_size=2,
    length_penalty=1.0,
    repetition_penalty=2.5,
    use_cache=True,
    early_stopping=True,
    do_sample=True,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)
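The settings above layer sampling on top of beam search, so outputs vary between runs. If reproducible summaries are preferred, a purely deterministic beam-search call is a common alternative (a sketch, not the settings used to produce the scores above):

# Deterministic beam search (no sampling), for reproducible outputs
outputs = model.generate(
    **inputs,
    max_length=80,
    min_length=20,
    num_beams=4,
    no_repeat_ngram_size=3,
    length_penalty=1.0,
    early_stopping=True,
    do_sample=False,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))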
Training Data
- XSum: BBC articles with single-sentence summaries
- CNN/DailyMail: News articles with multi-sentence summaries (a sketch of combining the two corpora follows below)
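The exact preprocessing used to build the joint training set is not documented here. As an illustration only, the two corpora could be combined with the datasets library after aligning their column names (the renaming and shuffling below are assumptions, not a description of the actual training setup):

# Illustrative combination of XSum and CNN/DailyMail (assumed preprocessing)
from datasets import load_dataset, concatenate_datasets

xsum = load_dataset("EdinburghNLP/xsum", split="train")
cnndm = load_dataset("cnn_dailymail", "3.0.0", split="train")

# Align column names: XSum uses document/summary, CNN/DailyMail uses article/highlights
cnndm = cnndm.rename_columns({"article": "document", "highlights": "summary"})
cnndm = cnndm.remove_columns([c for c in cnndm.column_names if c not in ("document", "summary")])
xsum = xsum.remove_columns([c for c in xsum.column_names if c not in ("document", "summary")])

combined = concatenate_datasets([xsum, cnndm]).shuffle(seed=42)
print(combined)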
Intended Use
- Primary: Summarization.
- Secondary: Educational demonstrations, reproducible baselines, research benchmarking, and academic studies on summarization
Limitations
- Optimized for English news text; performance may vary on other domains
- Tends to produce very concise summaries (18-20 tokens on average); see the sketch after this list for generation settings that encourage longer outputs
- No built-in fact-checking or content filtering
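If longer summaries are needed despite the model's bias toward brevity, raising min_length and length_penalty at generation time usually helps; the values below are illustrative, not tuned:

# Sketch: nudging generation toward longer summaries (illustrative values)
outputs = model.generate(
    **inputs,
    max_length=150,
    min_length=60,        # force a longer minimum
    length_penalty=2.0,   # values above 1.0 favor longer sequences under beam search
    num_beams=4,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))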
Citation
@misc{stept2023_t5_large_xsum_cnn_summarization,
  author    = {Shlomo Stept (sysresearch101)},
  title     = {T5-Large Fine-tuned on XSum + CNN/DailyMail for Abstractive Summarization},
  year      = {2023},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/sysresearch101/t5-large-finetuned-xsum-cnn}
}
Papers Using This Model
- Zhu et al. (2023). Annotating and Detecting Fine-grained Factual Errors for Dialogue Summarization. ACL 2023 (Long).
- European Food Safety Authority. (2023). Implementing AI Vertical use cases – Scenario 1. EFSA Journal, Special Publication EN-8223. https://doi.org/10.2903/sp.efsa.2023.EN-8223
- (Forthcoming) Budget-Constrained Learning to Defer for Autoregressive Generation (under review, ICLR 2025)
Contact
Created by Shlomo Stept (ORCID: 0009-0009-3185-589X), DARMIS AI
- Website: shlomostept.com
- LinkedIn: linkedin.com/in/shlomo-stept