CorDA Collection
Models and data for CorDA.
The LLaMA-2-7b model fine-tuned on the Math task using CorDA in IPA mode with MetaMath.
| Method | TriviaQA | NQ open | GSM8k | Math | 
|---|---|---|---|---|
| LoRA | 44.17 | 1.91 | 42.68 | 5.92 | 
| CorDA (KPA with nqopen) | 45.23 | 10.44 | 45.64 | 6.94 | 
| CorDA (IPA with MetaMath) | - | - | 54.59 | 8.54 | 
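For reference, below is a minimal inference sketch with 🤗 Transformers. It assumes the CorDA adapter has already been merged back into a standard LLaMA checkpoint (see the note at the end of this card), and `path/to/merged-corda-llama2-7b` is a placeholder path, not an official repo id.

```python
# Minimal inference sketch (assumption: the CorDA adapter has been merged back
# into a standard LLaMA-2-7b checkpoint; the model path below is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/merged-corda-llama2-7b"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Question: Natalia sold clips to 48 of her friends in April, and then she "
    "sold half as many clips in May. How many clips did she sell in total?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```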
You can evaluate the model's performance by following Step 3 in the CorDA GitHub repo.
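As a rough illustration of the kind of scoring that step performs on GSM8k, the snippet below extracts the final number from a generated solution and compares it with the reference answer (GSM8k stores the gold result after `####`). This is a simplified sketch, not the repo's actual evaluation code.

```python
# Simplified GSM8k-style scoring sketch (illustrative only).
import re

def extract_number(text: str) -> str | None:
    # Use the last number in the generation as the predicted answer.
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return matches[-1] if matches else None

def is_correct(generation: str, gsm8k_answer: str) -> bool:
    # GSM8k gold answers place the final result after "####".
    gold = gsm8k_answer.split("####")[-1].strip().replace(",", "")
    pred = extract_number(generation)
    return pred is not None and pred == gold
```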
Note: the model trained with the CorDA adapter is based on customized code. If you want to restore the original LLaMA architecture, run merge_adapter_for_corda.py in the CorDA GitHub repo.
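For intuition only: merging a low-rank adapter back into its frozen counterpart amounts to adding the low-rank product onto the residual weight, so the checkpoint again matches the original LLaMA layout. The sketch below illustrates this idea with hypothetical tensor names; it is not a substitute for merge_adapter_for_corda.py.

```python
# Conceptual illustration of adapter merging (hypothetical names, not the
# actual merge_adapter_for_corda.py API).
import torch

def merge_low_rank(weight_residual: torch.Tensor,
                   lora_A: torch.Tensor,
                   lora_B: torch.Tensor) -> torch.Tensor:
    # W_merged = W_residual + B @ A restores a plain linear weight with the
    # original shape, so standard LLaMA loading code can consume it.
    return weight_residual + lora_B @ lora_A
```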