---
license: apache-2.0
task_categories:
- image-classification
language:
- en
tags:
- medical
- biology
---

### Model Checkpoints and Logs

| Name | Few-Shot | Base-to-Novel |
|------|:--------:|:-------------:|
| [**BiomedCoOp**](https://github.com/HealthX-Lab/BiomedCoOp/blob/main/trainers/BiomedCoOp/biomedcoop_biomedclip.py) | [link](https://huggingface.co/TahaKoleilat/BiomedCoOp/tree/main/few_shot) | [link](https://huggingface.co/TahaKoleilat/BiomedCoOp/tree/main/base2new) |

### Reproducing Results

Run the following scripts to evaluate the checkpoints and reproduce the reported test results. Each script automatically downloads the required model weights:

##### (1) Few-Shot Evaluation

```bash
CUDA_VISIBLE_DEVICES= bash scripts/biomedcoop/eval_fewshot.sh

# Example on BTMRI using 16 shots and the BiomedCLIP model on GPU 0
CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/eval_fewshot.sh data btmri 16
```

##### (2) Base-to-Novel Generalization

```bash
CUDA_VISIBLE_DEVICES= bash scripts/biomedcoop/eval_base2new.sh

# Example on BTMRI using 16 shots and the BiomedCLIP model on GPU 0
CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/eval_base2new.sh data btmri 16
```

### Citation

If you use our work, please consider citing:

```bibtex
@article{koleilat2024biomedcoop,
  title={BiomedCoOp: Learning to Prompt for Biomedical Vision-Language Models},
  author={Koleilat, Taha and Asgariandehkordi, Hojat and Rivaz, Hassan and Xiao, Yiming},
  journal={arXiv preprint arXiv:2411.15232},
  year={2024}
}
```
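For reference, the checkpoints linked above can also be fetched outside the evaluation scripts. A minimal sketch using the `huggingface_hub` library is shown below; the repo id comes from the checkpoint links, and the `few_shot`/`base2new` subfolder names mirror the repository layout. The `checkpoint_patterns` helper is a hypothetical convenience added here for illustration, not part of the BiomedCoOp codebase.

```python
# Sketch: downloading BiomedCoOp checkpoints directly from the Hugging Face Hub.
# Assumes the `huggingface_hub` package is installed (`pip install huggingface_hub`).
REPO_ID = "TahaKoleilat/BiomedCoOp"  # repo id taken from the links in the table above


def checkpoint_patterns(setting: str) -> str:
    """Return an allow-pattern selecting one evaluation setting's files.

    The two settings correspond to the `few_shot` and `base2new`
    subfolders visible in the checkpoint links above.
    """
    if setting not in {"few_shot", "base2new"}:
        raise ValueError(f"unknown setting: {setting!r}")
    return f"{setting}/*"


if __name__ == "__main__":
    # Optional dependency; imported lazily so the helper is usable without it.
    from huggingface_hub import snapshot_download

    # Download only the few-shot checkpoints into the local HF cache.
    local_dir = snapshot_download(REPO_ID, allow_patterns=checkpoint_patterns("few_shot"))
    print(local_dir)
```

This only mirrors the files locally; the evaluation scripts above remain the supported way to run the checkpoints.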