Collections including paper arxiv:2408.05147

- LLM-Pruner: On the Structural Pruning of Large Language Models
  Paper • 2305.11627 • Published • 3
- Adapt-Pruner: Adaptive Structural Pruning for Efficient Small Language Model Training
  Paper • 2502.03460 • Published
- Pruning as a Domain-specific LLM Extractor
  Paper • 2405.06275 • Published • 1
- KnowTuning: Knowledge-aware Fine-tuning for Large Language Models
  Paper • 2402.11176 • Published • 2

- Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models
  Paper • 2411.14257 • Published • 14
- Scaling and evaluating sparse autoencoders
  Paper • 2406.04093 • Published • 3
- Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2
  Paper • 2408.05147 • Published • 40
- Disentangling Dense Embeddings with Sparse Autoencoders
  Paper • 2408.00657 • Published • 1

- RLHF Workflow: From Reward Modeling to Online RLHF
  Paper • 2405.07863 • Published • 71
- Chameleon: Mixed-Modal Early-Fusion Foundation Models
  Paper • 2405.09818 • Published • 131
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
  Paper • 2405.15574 • Published • 55
- An Introduction to Vision-Language Modeling
  Paper • 2405.17247 • Published • 90

- Latent Reasoning in LLMs as a Vocabulary-Space Superposition
  Paper • 2510.15522 • Published • 1
- Language Models are Injective and Hence Invertible
  Paper • 2510.15511 • Published • 61
- Eliciting Secret Knowledge from Language Models
  Paper • 2510.01070 • Published • 4
- Interpreting Language Models Through Concept Descriptions: A Survey
  Paper • 2510.01048 • Published • 2

- MADLAD-400: A Multilingual And Document-Level Large Audited Dataset
  Paper • 2309.04662 • Published • 24
- Neurons in Large Language Models: Dead, N-gram, Positional
  Paper • 2309.04827 • Published • 17
- Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs
  Paper • 2309.05516 • Published • 10
- DrugChat: Towards Enabling ChatGPT-Like Capabilities on Drug Molecule Graphs
  Paper • 2309.03907 • Published • 12

- Resa: Transparent Reasoning Models via SAEs
  Paper • 2506.09967 • Published • 21
- Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2
  Paper • 2408.05147 • Published • 40
- Train Sparse Autoencoders Efficiently by Utilizing Features Correlation
  Paper • 2505.22255 • Published • 24
- I Have Covered All the Bases Here: Interpreting Reasoning Features in Large Language Models via Sparse Autoencoders
  Paper • 2503.18878 • Published • 119

- Sparse Autoencoders Find Highly Interpretable Features in Language Models
  Paper • 2309.08600 • Published • 15
- Scaling and evaluating sparse autoencoders
  Paper • 2406.04093 • Published • 3
- Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models
  Paper • 2403.19647 • Published • 4
- Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2
  Paper • 2408.05147 • Published • 40

- MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs
  Paper • 2402.15627 • Published • 38
- Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts
  Paper • 2402.16822 • Published • 18
- FuseChat: Knowledge Fusion of Chat Models
  Paper • 2402.16107 • Published • 40
- Multi-LoRA Composition for Image Generation
  Paper • 2402.16843 • Published • 32

- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 151
- Orion-14B: Open-source Multilingual Large Language Models
  Paper • 2401.12246 • Published • 14
- MambaByte: Token-free Selective State Space Model
  Paper • 2401.13660 • Published • 60
- MM-LLMs: Recent Advances in MultiModal Large Language Models
  Paper • 2401.13601 • Published • 48