- Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming (arXiv:2301.12187, published Jan 28, 2023)
- Q-Palette: Fractional-Bit Quantizers Toward Optimal Bit Allocation for Efficient LLM Deployment (arXiv:2509.20214, published Sep 24, 2025)
- GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance (arXiv:2505.07004, published May 11, 2025)
- KVzip: Query-Agnostic KV Cache Compression with Context Reconstruction (arXiv:2505.23416, published May 29, 2025)