- Mixture-of-Depths: Dynamically allocating compute in transformer-based language models (arXiv:2404.02258)
- Jamba: A Hybrid Transformer-Mamba Language Model (arXiv:2403.19887)
- EfficientVMamba: Atrous Selective Scan for Light Weight Visual Mamba (arXiv:2403.09977)
- SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time series (arXiv:2403.15360)
Ceshine Lee (ceshine)
AI & ML interests: None yet

Recent Activity
- liked the model kotoba-tech/kotoba-whisper-v2.0 (8 days ago)
- liked the model kotoba-tech/kotoba-whisper-v2.0-ggml (8 days ago)
- upvoted the paper "LoRA: Low-Rank Adaptation of Large Language Models" (about 1 month ago)