10 million examples randomly sampled from the UniRef50 representative sequences (October 2023 release), each paired with a computed SELFIES string. The strings are stored as input ids produced by a custom SELFIES tokenizer; a BERT tokenizer using this vocabulary is included among the dataset files. A sketch of decoding the stored ids back into SELFIES strings follows the tokenizer example below.

You can access the tokenizer like this:

import os
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

repo_path = 'Synthyra/ProteinSelfies'
local_path = 'ProteinSelfies'
files = ['special_tokens_map.json', 'tokenizer_config.json', 'vocab.txt']
os.makedirs(local_path, exist_ok=True)

# Download the tokenizer files from the dataset repository.
for file in files:
    hf_hub_download(
        repo_id=repo_path,
        filename=file,
        repo_type='dataset',
        local_dir=local_path
    )

# Load the SELFIES tokenizer from the downloaded files.
tokenizer = AutoTokenizer.from_pretrained(local_path)
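
With the tokenizer loaded, the stored input ids can be mapped back to SELFIES strings. The following is a minimal sketch that assumes a 'train' split, that the examples expose an 'input_ids' column (per the description above), and that the dataset can be streamed; adjust the column name if the actual schema differs.

from datasets import load_dataset

# Stream a single record rather than downloading the full 10M-example dataset.
ds = load_dataset('Synthyra/ProteinSelfies', split='train', streaming=True)
example = next(iter(ds))

# Map the stored ids back to SELFIES tokens and join them into one string,
# dropping special tokens such as [CLS], [SEP], and [PAD].
tokens = tokenizer.convert_ids_to_tokens(example['input_ids'])
selfies_string = ''.join(t for t in tokens if t not in tokenizer.all_special_tokens)
print(selfies_string)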

Intended for atom-wise protein language modeling.
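
Because the examples already carry tokenized ids, the intended masked-language-modeling use can be sketched with a standard Transformers data collator. The snippet below continues from the code above (reusing tokenizer and the streamed ds); the default BertConfig and the truncation length are illustrative placeholders, not settings used to produce this dataset.

from itertools import islice
from transformers import BertConfig, BertForMaskedLM, DataCollatorForLanguageModeling

# Placeholder model; only vocab_size is tied to the SELFIES tokenizer above.
config = BertConfig(vocab_size=tokenizer.vocab_size)
model = BertForMaskedLM(config)

# Standard MLM masking using this tokenizer's [MASK] and [PAD] tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

# Build a tiny batch from streamed examples, truncating to the default 512-position limit.
batch = collator([{'input_ids': ex['input_ids'][:512]} for ex in islice(ds, 4)])
outputs = model(**batch)
print(outputs.loss)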
