🇵🇹 BERTimbau fine-tuned on ClaimPT (Claim Extraction)

This model is a fine-tuned version of neuralmind/bert-base-portuguese-cased on the ClaimPT dataset for claim and non-claim detection in Portuguese news articles.
It classifies each token as part of a Claim or Non-Claim span, following the guidelines described below. For more information, visit our GitHub repository.


🧠 Model Details

Model type: Transformer-based encoder (BERT)
Base model: neuralmind/bert-base-portuguese-cased
Fine-tuning objective: Token classification
Task: Claim Extraction
Language: Portuguese (pt)
Framework: 🤗 Transformers
License: CC BY-NC 4.0 (non-commercial use)
Authors: Ricardo Campos, Raquel Sequeira, Sara Nerea, Inês Cantante, Diogo Folques, Luís Filipe Cunha, João Canavilhas, António Branco, Alípio Jorge, Sérgio Nunes, Nuno Guimarães, Purificação Silvano

Institution(s): INESC TEC, University of Beira Interior, University of Porto, University of Lisbon


📘 Dataset

Dataset: ClaimPT
Authors: Ricardo Campos, Raquel Sequeira, Sara Nerea, Inês Cantante, Diogo Folques, Luís Filipe Cunha, João Canavilhas, António Branco, Alípio Jorge, Sérgio Nunes, Nuno Guimarães, Purificação Silvano

ClaimPT is a dataset of European Portuguese news articles annotated for factual claims, comprising 1,308 articles and 6,875 individual annotations.


⚙️ Training Details

  • Task formulation: Token classification with labels
    {B-Claim, I-Claim, B-Non-Claim, I-Non-Claim, O}
  • Loss: Cross-entropy
  • Optimizer: AdamW
  • Learning rate: 2e-5
  • Batch size: 16
  • Max sequence length: 512
  • Truncation strategy: Chunking with 128-token overlap (stride); see the sketch below
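
The chunking strategy can be reproduced with standard 🤗 tokenizer options; a minimal sketch (the text variable is a stand-in for a full article):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased")

long_text = "O governo vai reduzir o IVA dos alimentos. " * 200  # stand-in for a full article

encoding = tokenizer(
    long_text,
    max_length=512,
    truncation=True,
    stride=128,                       # 128-token overlap between consecutive chunks
    return_overflowing_tokens=True,   # return every chunk, not only the first
    padding="max_length",
    return_tensors="pt",
)

# One row per 512-token chunk; each chunk is classified independently and
# the overlapping predictions are merged afterwards.
print(encoding["input_ids"].shape)  # (num_chunks, 512)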

📊 Evaluation

| Model      | Label     | Precision (%) | Recall (%) | F1 (%) |
|------------|-----------|---------------|------------|--------|
| BERT-Chunk | Claim     | 40.38         | 22.58      | 28.97  |
| BERT-Chunk | Non-Claim | 55.96         | 68.71      | 61.68  |
| BERT-Chunk | Micro Avg | 55.24         | 64.31      | 59.43  |
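
Span-level metrics of this kind are typically computed with an entity-level scorer; a minimal sketch using the seqeval library (an assumption; the exact evaluation script is not bundled with this card):

from seqeval.metrics import classification_report

# Gold and predicted BIO tag sequences, one inner list per sentence or chunk
y_true = [["B-Claim", "I-Claim", "O", "B-Non-Claim", "I-Non-Claim"]]
y_pred = [["B-Claim", "I-Claim", "O", "B-Non-Claim", "O"]]

# Entity-level precision, recall, and F1 per label, plus micro average
print(classification_report(y_true, y_pred, digits=2))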

🧩 Usage

from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("lfcc/bertimbau-claimpt-sent")
model = AutoModelForTokenClassification.from_pretrained("lfcc/bertimbau-claimpt-sent")

text = '"O governo vai reduzir o IVA dos alimentos", disse o ministro da economia.'
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits  # shape: (1, sequence_length, num_labels)

# Map each token's highest-scoring label id to its BIO tag
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id.item()])
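
Alternatively, the token-classification pipeline groups contiguous B-/I- tags into labelled spans. A minimal sketch (aggregation_strategy="simple" is a standard 🤗 option, not something prescribed by the authors):

from transformers import pipeline

claim_tagger = pipeline(
    "token-classification",
    model="lfcc/bertimbau-claimpt-sent",
    aggregation_strategy="simple",  # merge contiguous tokens with the same tag
)

for span in claim_tagger('"O governo vai reduzir o IVA dos alimentos", disse o ministro da economia.'):
    print(span["entity_group"], round(span["score"], 3), span["word"])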

Annotation Guidelines

Detailed annotation instructions, including procedures, quality-control measures, and schema definitions, are available in the document:

📄 ClaimPT Annotation Manual (PDF)

This manual describes:

  • The annotation process and methodology
  • The annotation scheme and entity structures
  • The definition of a claim
  • Metadata and label taxonomy
  • Examples and boundary cases

Researchers interested in replicating the annotation or training models should refer to this guide.


Citation

If you use this dataset, please cite:

@dataset{claimpt2025,
  author       = {Ricardo Campos and Raquel Sequeira and Sara Nerea and Inês Cantante and Diogo Folques and Luís Filipe Cunha and João Canavilhas and António Branco and Alípio Jorge and Sérgio Nunes and Nuno Guimarães and Purificação Silvano},
  title        = {ClaimPT: A Portuguese Dataset of Annotated Claims in News Articles},
  year         = {2025},
  url          = {https://rdm.inesctec.pt/dataset/cs-2025-008},
  institution  = {INESC TEC}
}

Credits and Acknowledgements

This dataset was developed by INESC TEC – Institute for Systems and Computer Engineering, Technology and Science, specifically by the NLP Group within the LIAAD – Laboratory of Artificial Intelligence and Decision Support research center.

Acknowledgements

This work was carried out as part of the project Accelerat.AI (Ref. C644865762-00000008), financed by IAPMEI and the European Union through the Next Generation EU Fund, within the scope of call for proposals no. 02/C05-i01/2022 (submission of final proposals for project development) under the Mobilizing Agendas for Business Innovation of the Recovery and Resilience Plan. Ricardo Campos, Alípio Jorge, and Nuno Guimarães also acknowledge support from the StorySense project (Ref. 2022.09312.PTDC, DOI: 10.54499/2022.09312.PTDC).
