Collections
Discover the best community collections!
Collections including paper arxiv:2205.05638

- LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery
  Paper • 2310.18356 • Published • 24
- LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models
  Paper • 2310.08659 • Published • 28
- ModuLoRA: Finetuning 3-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
  Paper • 2309.16119 • Published • 1
- QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
  Paper • 2309.14717 • Published • 45

- Diversity of Thought Improves Reasoning Abilities of Large Language Models
  Paper • 2310.07088 • Published • 5
- Reverse Chain: A Generic-Rule for LLMs to Master Multi-API Planning
  Paper • 2310.04474 • Published • 2
- Promptor: A Conversational and Autonomous Prompt Generation Agent for Intelligent Text Entry Techniques
  Paper • 2310.08101 • Published • 2
- Instance Needs More Care: Rewriting Prompts for Instances Yields Better Zero-Shot Performance
  Paper • 2310.02107 • Published • 3

- Combining Modular Skills in Multitask Learning
  Paper • 2202.13914 • Published • 4
- The Power of Scale for Parameter-Efficient Prompt Tuning
  Paper • 2104.08691 • Published • 10
- Prefix-Tuning: Optimizing Continuous Prompts for Generation
  Paper • 2101.00190 • Published • 6
- GPT Understands, Too
  Paper • 2103.10385 • Published • 10

- Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
  Paper • 2303.10512 • Published • 2
- Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
  Paper • 2205.05638 • Published • 5
- LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
  Paper • 2303.16199 • Published • 4
- FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning
  Paper • 2108.06098 • Published • 2

- LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models
  Paper • 2310.08659 • Published • 28
- QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
  Paper • 2309.14717 • Published • 45
- ModuLoRA: Finetuning 3-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
  Paper • 2309.16119 • Published • 1
- LoRA ensembles for large language model fine-tuning
  Paper • 2310.00035 • Published • 2

- Dissecting In-Context Learning of Translations in GPTs
  Paper • 2310.15987 • Published • 6
- In-Context Learning Creates Task Vectors
  Paper • 2310.15916 • Published • 43
- ZeroGen: Efficient Zero-shot Learning via Dataset Generation
  Paper • 2202.07922 • Published • 1
- Promptor: A Conversational and Autonomous Prompt Generation Agent for Intelligent Text Entry Techniques
  Paper • 2310.08101 • Published • 2