---
license: bsd-3-clause
---
Mirror of the base ProGen2-small model (with a slightly modified configuration and forward pass) by [Nijkamp et al.](https://arxiv.org/abs/2206.13517)

See also my GitHub [repo](https://github.com/hugohrban/ProGen2-finetuning/tree/main) for an example of fine-tuning this model.
Example usage:
```python
from transformers import AutoModelForCausalLM
from tokenizers import Tokenizer
import torch
import torch.nn.functional as F

# load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-small", trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-small")
tokenizer.no_padding()

# prepare input; the leading "1" token marks the N-terminus of the sequence
prompt = "1MEVVIVTGMSGAGK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).to(model.device)

# forward pass
logits = model(input_ids).logits

# print the probability distribution over the next token
next_token_logits = logits[-1, :]
next_token_probs = F.softmax(next_token_logits, dim=-1)
for i in range(tokenizer.get_vocab_size(with_added_tokens=False)):
    print(f"{tokenizer.id_to_token(i)}: {100 * next_token_probs[i].item():.2f} %")
```
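
To generate a continuation of the prompt rather than inspect a single distribution, the same forward pass can be applied autoregressively. The sketch below is illustrative and not part of the original card: the `temperature` and `max_new_tokens` values are arbitrary choices, and it assumes the `"2"` token marks the C-terminus (as in ProGen2) and can be used as a stop condition.

```python
# minimal autoregressive sampling sketch (illustrative; assumes the "2"
# token marks the C-terminus, as in ProGen2)
temperature = 0.8      # arbitrary choice
max_new_tokens = 50    # arbitrary choice
ids = input_ids

with torch.no_grad():
    for _ in range(max_new_tokens):
        logits = model(ids).logits
        # scale last-position logits by temperature and sample a token
        probs = F.softmax(logits[-1, :] / temperature, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id])
        # stop once the model emits the C-terminus token
        if tokenizer.id_to_token(next_id.item()) == "2":
            break

print(tokenizer.decode(ids.tolist()))
```

Lowering the temperature makes the sampling more conservative (closer to greedy decoding), while higher values produce more diverse sequences.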