---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
    - name: aa_seqs
      dtype: string
  splits:
    - name: train
      num_bytes: 61101706188
      num_examples: 9920628
  download_size: 5540646354
  dataset_size: 61101706188
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

10 million random examples drawn from UniRef50 representative sequences (October 2023), each paired with a computed SELFIES string. The SELFIES strings are stored as input IDs from a custom SELFIES tokenizer. A BERT-style tokenizer with this vocabulary has been uploaded to this dataset repository under Files.
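
To inspect the data, the standard `datasets` API should work. A minimal sketch, assuming streaming mode to avoid downloading the full ~5.5 GB archive up front:

```python
from datasets import load_dataset

# Stream the train split so nothing is downloaded up front
ds = load_dataset('Synthyra/ProteinSelfies', split='train', streaming=True)

row = next(iter(ds))
print(list(row.keys()))       # ['input_ids', 'aa_seqs']
print(row['aa_seqs'][:60])    # amino-acid sequence
print(row['input_ids'][:10])  # tokenized SELFIES string
```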

You can access the tokenizer like this:

```python
import os
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

repo_path = 'Synthyra/ProteinSelfies'
local_path = 'ProteinSelfies'
files = ['special_tokens_map.json', 'tokenizer_config.json', 'vocab.txt']
os.makedirs(local_path, exist_ok=True)

# Download the tokenizer files from the dataset repo into a local directory
for file in files:
    hf_hub_download(
        repo_id=repo_path,
        filename=file,
        repo_type='dataset',
        local_dir=local_path
    )

# Load the tokenizer from the downloaded files
tokenizer = AutoTokenizer.from_pretrained(local_path)
```
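
With the tokenizer loaded, the stored `input_ids` can be mapped back to SELFIES strings. A minimal sketch, reusing the streaming load shown earlier (whether `decode` reproduces the original string exactly depends on the tokenizer's special-token handling):

```python
from datasets import load_dataset

ds = load_dataset('Synthyra/ProteinSelfies', split='train', streaming=True)
row = next(iter(ds))

# Turn the stored input IDs back into a SELFIES string
selfies_string = tokenizer.decode(row['input_ids'], skip_special_tokens=True)
print(selfies_string[:120])
```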

This dataset is intended for atom-wise protein language modeling.
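
For context on how a SELFIES string relates to a protein: an amino-acid sequence can be converted to a molecule with RDKit and then encoded atom-wise with the `selfies` package. The sketch below illustrates that idea under those assumptions; it is not necessarily the pipeline used to build this dataset:

```python
from rdkit import Chem
import selfies as sf

# Hypothetical short peptide in one-letter amino-acid codes
aa_seq = 'MKT'

# RDKit builds the full atom-level molecule from the sequence
mol = Chem.MolFromSequence(aa_seq)
smiles = Chem.MolToSmiles(mol)

# Encode the SMILES string as SELFIES (one symbol per atom/branch/ring token)
selfies_string = sf.encoder(smiles)
print(selfies_string)
```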