- Improving Hugging Face Training Efficiency Through Packing with Flash Attention 2 (Aug 21, 2024)
- Efficient LLM Pretraining: Packed Sequences and Masked Attention (Oct 7, 2024)
- Good answers are not necessarily factual answers: an analysis of hallucination in leading LLMs (May 7)
- Saving Memory Using Padding-Free Transformer Layers during Finetuning (Jun 11, 2024)