binary-tokenizer-001-64k

A cross-platform BPE tokenizer for binary executables and machine code. Trained on 13 GB of diverse binaries spanning Linux, Windows, macOS, and Android platforms.

  • 🔗 Model: mjbommar/binary-tokenizer-001-64k
  • 📊 Dataset: mjbommar/binary-30k-tokenized
  • 📄 Paper: Binary BPE: Cross-Platform Tokenization for Binary Analysis (arXiv preprint coming soon)

Overview

  • Vocabulary Size: 65,536 tokens (2^16)
  • Token Composition: 256 base bytes + 65,273 learned merges + 7 special tokens
  • Average Token Length: 4.173 bytes
  • 3-byte Instructions: 17.9% of vocabulary (11,729 tokens)
  • Compression Ratio: ~3.0 bytes/token on typical binaries
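
As a quick sanity check of the figures above, the tokenizer can be loaded from the Hub and run on a few raw bytes. This is a minimal sketch using the tokenizers library and the latin-1 byte convention from the Usage section below; the input bytes are illustrative only:

from tokenizers import Tokenizer

# Minimal sketch: tokenize a few raw bytes via the latin-1 convention used below.
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")

data = bytes.fromhex("7f454c4602010100")  # illustrative input: start of an ELF header
encoding = tokenizer.encode(data.decode("latin-1"))
print(encoding.ids)                                       # token ids
print(len(data) / len(encoding.ids), "bytes/token on this tiny input")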

Training Configuration

Training Corpus:

  • Source: mjbommar/binary-30k-tokenized
  • Size: ~13 GB
  • Files: 30,738 binary files
  • Platforms: Linux (ELF), Windows (PE), macOS (Mach-O), Android (APK)
  • Architectures: x86-64, x86, ARM64, ARM, MIPS, RISC-V

Training Parameters:

  • Vocabulary size: 65,536 (including 7 special tokens)
  • Min frequency: 10
  • Chunk size: 8,192 bytes
  • Allowed lengths: DEFAULT (1-16 bytes)
  • Training duration: ~12-14 hours
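
The model was trained with the bbpe trainer (see the CLI commands under Usage). For readers who want a rough equivalent in the Hugging Face tokenizers library, the parameters above map approximately onto a BpeTrainer as sketched below. This is an assumption-laden approximation, not the actual training pipeline, and the corpus path is a placeholder:

from tokenizers import Tokenizer, models, trainers

# Rough, hypothetical approximation of the configuration above using the
# Hugging Face `tokenizers` BPE trainer (NOT the bbpe pipeline actually used).
tokenizer = Tokenizer(models.BPE())
trainer = trainers.BpeTrainer(
    vocab_size=65_536,
    min_frequency=10,
    max_token_length=16,  # mirrors the 1-16 byte allowed-length setting
    initial_alphabet=[chr(i) for i in range(256)],  # all 256 byte values
    special_tokens=["<|start|>", "<|end|>", "<|pad|>", "<|unk|>",
                    "<|cls|>", "<|sep|>", "<|mask|>"],
)

def chunks(paths, chunk_size=8_192):
    # Stream binaries in 8,192-byte chunks, decoded as latin-1 so every byte
    # value maps to exactly one character.
    for path in paths:
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                yield chunk.decode("latin-1")

tokenizer.train_from_iterator(chunks(["/usr/bin/ls"]), trainer=trainer)  # placeholder corpus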

Vocabulary Statistics

Composition:

  • Base bytes (0-255): 256 tokens
  • Learned merges: 65,273 tokens
  • Special tokens: 7 tokens (<|start|>, <|end|>, <|pad|>, <|unk|>, <|cls|>, <|sep|>, <|mask|>)
  • Total: 65,536 tokens

Quality Metrics:

  • All tokens reachable: ✓ Yes
  • Valid merges: 65,273 / 65,273
  • Power-of-2 size: ✓ Yes (2^16)
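
These composition and quality figures can be spot-checked directly against the published tokenizer. The sketch below relies only on standard tokenizers APIs and assumes token strings round-trip through latin-1, as in the Usage example:

from tokenizers import Tokenizer

# Spot-check the composition and quality figures above.
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")
assert tokenizer.get_vocab_size() == 65_536  # 2^16

vocab = tokenizer.get_vocab()
specials = ["<|start|>", "<|end|>", "<|pad|>", "<|unk|>",
            "<|cls|>", "<|sep|>", "<|mask|>"]
assert all(tokenizer.token_to_id(s) is not None for s in specials)

# 256 single-byte base tokens (token strings round-trip through latin-1)
base = sum(1 for t in vocab if t not in specials and len(t.encode("latin-1")) == 1)
assert base == 256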

Token Length Distribution

Length     Count     Percentage   Description
1 byte        256         0.4%    Base bytes
2 bytes    24,943        38.1%    Byte pairs (most common)
3 bytes    11,729        17.9%    Complete x86-64 instructions
4 bytes    13,189        20.1%    Instructions with operands
5 bytes     3,737         5.7%    Complex patterns
6 bytes     3,109         4.7%    Complex patterns
7 bytes     1,564         2.4%    Complex patterns
8 bytes     2,498         3.8%    Multi-byte sequences
9+ bytes    3,302         5.0%    Long patterns

Average Token Length: 4.173 bytes
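
The distribution above can be recomputed from the vocabulary itself. The sketch below assumes token strings round-trip through latin-1, as in the Usage example, and excludes the 7 special tokens:

from collections import Counter
from tokenizers import Tokenizer

# Recompute the token length distribution from the vocabulary.
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")
specials = {"<|start|>", "<|end|>", "<|pad|>", "<|unk|>",
            "<|cls|>", "<|sep|>", "<|mask|>"}

lengths = Counter(
    len(t.encode("latin-1")) for t in tokenizer.get_vocab() if t not in specials
)
for n in sorted(lengths):
    print(f"{n:>2} bytes: {lengths[n]:>6} tokens")

total = sum(lengths.values())
avg = sum(n * c for n, c in lengths.items()) / total
print(f"average token length: {avg:.3f} bytes")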


Byte Content Analysis

Content Categories:

  • Contains NULL byte (0x00): 16,988 tokens (25.9%)
  • ASCII printable (0x20-0x7E): 11,552 tokens (17.6%)
  • All ASCII (<0x80): 24,706 tokens (37.7%)
  • High bytes (≥0x80): 40,821 tokens (62.3%)

Most Common Bytes in Tokens:

  • 0x00 (NULL): 42,537 occurrences - Padding and alignment
  • 0xFF: 7,204 occurrences - Sentinel values
  • 0x48 (REX.W): 6,105 occurrences - x86-64 REX prefix
  • 0x20 (space): 4,087 occurrences - ASCII strings
  • 0x8B (MOV): 4,016 occurrences - x86-64 MOV opcode
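
Both the content categories and the per-byte counts can be reproduced by decoding each vocabulary entry back to bytes. This is another latin-1 round-trip sketch; special tokens are excluded here, which may shift the counts slightly relative to the figures above:

from collections import Counter
from tokenizers import Tokenizer

# Reproduce the byte-content statistics from the vocabulary.
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")
specials = {"<|start|>", "<|end|>", "<|pad|>", "<|unk|>",
            "<|cls|>", "<|sep|>", "<|mask|>"}

byte_counts = Counter()
with_null = printable = 0
for t in tokenizer.get_vocab():
    if t in specials:
        continue
    b = t.encode("latin-1")
    byte_counts.update(b)
    with_null += 0x00 in b
    printable += all(0x20 <= x <= 0x7E for x in b)

print("tokens containing 0x00:", with_null)
print("all-printable-ASCII tokens:", printable)
print("most common bytes:", [(hex(b), c) for b, c in byte_counts.most_common(5)])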

Sequence Coverage

N-byte Sequence Diversity:

Length   Learned Tokens   Possible Sequences   Coverage
1-byte   256              256                  100.00%
2-byte   24,943           65,536               38.06%
3-byte   11,729           16,777,216           0.070%
4-byte   13,189           4,294,967,296        0.00031%
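
The coverage column is simply the number of learned tokens of a given length divided by the 256^n possible byte sequences of that length, e.g. 11,729 / 16,777,216 ≈ 0.070% for 3-byte tokens:

# Coverage of the n-byte sequence space, reproducing the table above.
learned = {1: 256, 2: 24_943, 3: 11_729, 4: 13_189}
for n, count in learned.items():
    possible = 256 ** n
    print(f"{n}-byte: {count:>6} / {possible:>13,} = {count / possible:.5%}")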

Files

  • tokenizer-65536.json - Trained tokenizer model (5.0 MB)
  • analysis_results.json - Detailed analysis statistics
  • training.log - Training output log (if available)
  • training_stats.txt - Training summary (if available)

Usage

Load from HuggingFace Hub:

from tokenizers import Tokenizer

# Load directly from HuggingFace
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")

Use with the bbpe CLI (local tokenizer file):

# With bbpe CLI
bbpe encode --tokenizer tokenizer-65536.json /path/to/binary
bbpe info tokenizer-65536.json

Complete Python Example:

from tokenizers import Tokenizer

# Load from HuggingFace or local file
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")
# OR: tokenizer = Tokenizer.from_file("tokenizer-65536.json")

# Read binary file and decode as latin-1 (preserves all byte values 0-255)
with open("/usr/bin/ls", "rb") as f:
    data = f.read()
    data_str = data.decode("latin-1")

# Encode the binary data
encoding = tokenizer.encode(data_str)
print(f"File size: {len(data)} bytes")
print(f"Total tokens: {len(encoding.ids)}")
print(f"Compression: {len(data) / len(encoding.ids):.3f} bytes/token")

# First 10 tokens
for i, (token_id, token) in enumerate(zip(encoding.ids[:10], encoding.tokens[:10])):
    token_bytes = token.encode("latin-1")
    print(f"  Token {i}: ID={token_id:5d} hex={token_bytes.hex():20s} ({len(token_bytes)} bytes)")

# Decode tokens back to bytes
decoded_str = tokenizer.decode(encoding.ids)
decoded_bytes = decoded_str.encode("latin-1")
assert decoded_bytes == data  # Perfect reconstruction

Example output for /usr/bin/ls (142,312 bytes):

File size: 142312 bytes
Total tokens: 47993
Compression: 2.965 bytes/token

First 10 tokens:
  Token 0: ID=45813 hex=7f454c46020101       (7 bytes)
  Token 1: ID=  662 hex=000000000000000000   (9 bytes)
  Token 2: ID=  265 hex=0300                 (2 bytes)
  Token 3: ID= 1369 hex=3e00                 (2 bytes)
  Token 4: ID=  279 hex=01000000             (4 bytes)
  Token 5: ID=41250 hex=306d                 (2 bytes)
  Token 6: ID=  288 hex=000000000000         (6 bytes)
  Token 7: ID= 5908 hex=4000000000000000     (8 bytes)
  Token 8: ID= 8377 hex=2824                 (2 bytes)
  Token 9: ID=14325 hex=02000000000000000000 (10 bytes)

Decoded: 7f454c4602010100000000000000000003003e0001000000306d...
(ELF header: 7f 45 4c 46 = ELF magic bytes)
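
For downstream model training, the seven special tokens listed above can be attached around a sequence of token ids, with the ids looked up via token_to_id. This is a brief, hypothetical framing sketch; the card does not prescribe a particular sequence format:

from tokenizers import Tokenizer

# Hypothetical framing sketch: wrap a binary's token ids with <|start|> / <|end|>.
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")
with open("/usr/bin/ls", "rb") as f:
    ids = tokenizer.encode(f.read().decode("latin-1")).ids

start_id = tokenizer.token_to_id("<|start|>")
end_id = tokenizer.token_to_id("<|end|>")
framed = [start_id] + ids + [end_id]
print(framed[:5], "...", framed[-3:])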

Citation

If you use this tokenizer in your research, please cite:

@article{bommarito2025binarybpe,
  title={Binary BPE: Cross-Platform Tokenization for Binary Analysis},
  author={Bommarito II, Michael J.},
  journal={arXiv preprint},
  year={2025},
  note={Preprint coming soon}
}

Author: Michael J. Bommarito II ([email protected])


Generated: November 13, 2025
Training Script: train_tokenizers.sh
Analysis Script: analyze_tokenizer.py
