- D2E: Scaling Vision-Action Pretraining on Desktop Data for Transfer to Embodied AI. arXiv:2510.05684, Oct 2025.
- Exploring Fine-Tuning of Large Audio Language Models for Spoken Language Understanding under Limited Speech Data. arXiv:2509.15389, Sep 18, 2025.
- KOFFVQA: An Objectively Evaluated Free-form VQA Benchmark for Large Vision-Language Models in the Korean Language. arXiv:2503.23730, Mar 31, 2025.
- HerO at AVeriTeC: The Herd of Open Large Language Models for Verifying Real-World Claims. arXiv:2410.12377, Oct 16, 2024.
- CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction. arXiv:2410.01273, Oct 2, 2024.
- EnCLAP++: Analyzing the EnCLAP Framework for Optimizing Automated Audio Captioning Performance. arXiv:2409.01201, Sep 2, 2024.
- EnCLAP: Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning. arXiv:2401.17690, Jan 31, 2024.