Deprecated

Please visit https://huggingface.co/shorecode/t5-efficient-tiny-summarizer-general-purpose-v3 for an improved version.

This model was built to shorten text injected into LLM prompts, reducing API call costs.

Very high compression (7x): the text sent to your LLM provider is roughly one-seventh of its original size.
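As a rough illustration of what the 7x figure means, a compression ratio can be computed like this (a minimal sketch; the model card does not specify whether the ratio is measured in characters or tokens, so character counts are assumed here):

```python
def compression_ratio(original: str, summary: str) -> float:
    """Ratio of original text length to summary length (character-based)."""
    if not summary:
        raise ValueError("summary must be non-empty")
    return len(original) / len(summary)
```

At a 7x ratio, a 3,500-character prompt would shrink to about 500 characters before being sent to the provider.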

Model training

https://api.wandb.ai/links/shorecode-shorecode-llc/nqr415rk

Downloads last month: 846
Model size: 15.6M params (F32, Safetensors)

This model is not deployed by any Inference Provider.

Model tree for shorecode/t5-efficient-tiny-summarizer-general-purpose-v2: 2 quantized versions of this model.

Dataset used to train shorecode/t5-efficient-tiny-summarizer-general-purpose-v2: shorecode/summary-collection-60k-rows

Evaluation results

All metrics are self-reported on shorecode/summary-collection-60k-rows:

  • F1 score: 0.323
  • Faithfulness (facebook/bart-large-cnn): 2.560
  • Summarization compression: 6.910
  • Summarization coverage: 0.890
  • Summarization density: 4.950
  • ROUGE-L precision: 0.510
  • ROUGE-L recall: 0.130
  • ROUGE-L f-measure: 0.200
  • ROUGE-1 precision: 0.710
  • ROUGE-1 recall: 0.180
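As a consistency check, the ROUGE-L f-measure reported above is the harmonic mean of the reported precision and recall. Since the card lists rounded values, the check below uses a small tolerance rather than exact equality:

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 formula used by ROUGE)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# ROUGE-L values reported on the model card: precision 0.510, recall 0.130.
# The harmonic mean is about 0.207, matching the reported 0.200 up to rounding.
rouge_l_f = f_measure(0.510, 0.130)
```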