---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: tars/train/*.tar
  - split: fine_tune
    path: tars/fine_tune/*.tar
language:
- en
---
# Accessing the font-square-v2 Dataset on Hugging Face
The font-square-v2 dataset is hosted on Hugging Face at [blowing-up-groundhogs/font-square-v2](https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2). It is stored in WebDataset format, with its tar files split into two folders:

- `tars/train/`: contains tar shards named `{000..499}.tar` for the main training split.
- `tars/fine_tune/`: contains tar shards named `{000..049}.tar` for optional fine-tuning.
Each tar file includes multiple samples; each sample consists of:

- An RGB image file (`.rgb.png`)
- A black-and-white image file (`.bw.png`)
- A JSON file (`.json`) with metadata (such as text, writer ID, etc.)
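A quick way to see how these files are grouped inside a shard is Python's built-in `tarfile` module. This is only an inspection sketch; the sample key shown in the comments is illustrative, not the actual naming:

```python
import tarfile

# List the first few members of one training shard.
with tarfile.open("tars/train/000.tar") as tf:
    for member in tf.getmembers()[:6]:
        print(member.name)

# WebDataset groups files by shared prefix, e.g. (key is illustrative):
#   <sample-key>.rgb.png
#   <sample-key>.bw.png
#   <sample-key>.json
```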
You can access the dataset either by downloading it locally or by streaming it directly over HTTP.
## Table of Contents

- [Dataset Creation](#dataset-creation)
- [Downloading the Dataset Locally](#downloading-the-dataset-locally)
- [Streaming the Dataset Directly Over HTTP](#streaming-the-dataset-directly-over-http)
- [Additional Considerations](#additional-considerations)
## Dataset Creation

### Text Content Sampling
This synthetic dataset comprises images of text lines superimposed on diverse backgrounds. First, text lines are generated by sampling sentences from multiple English corpora available via the NLTK library (for example, abc, brown, genesis, inaugural, state_union, and webtext).
To better represent both common and rare characters, a rarity-based weighting strategy is used. Each word is assigned a weight based on the frequency of its individual characters (unigrams) and pairs of characters (bigrams). Words containing less frequent character patterns receive higher weights, so that they are sampled more often during dataset creation.
Below is an example Python function used to compute the weight of a word:

```python
from itertools import pairwise  # Python 3.10+

def word_weight(word):
    # u_counts and b_counts are dictionaries storing
    # unigram and bigram counts over the entire corpus.

    # Compute the unigram score
    u_score = 0
    for c in word:
        u_score += u_counts[c]
    u_score /= len(word)

    # Compute the bigram score
    bigrams = list(pairwise(' ' + word + ' '))  # materialize so len() works
    b_score = 0
    for b in bigrams:
        b_score += b_counts[''.join(b)]
    b_score /= len(bigrams)

    # Return the average of the two scores
    return (u_score + b_score) / 2
```
By sampling words using these computed weights, the resulting text exhibits a balanced distribution of characters, allowing the model to learn from both common and rare patterns.
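As a self-contained illustration, the sketch below builds toy unigram and bigram counts, reuses `word_weight` from above, and samples with `random.choices`. The toy corpus and the inversion of the frequency score into a sampling weight (so that rarer patterns are favored) are assumptions made for illustration; the card does not show the actual sampling code:

```python
import random
from collections import Counter
from itertools import pairwise

# Toy stand-ins for the corpus-wide statistics referenced by word_weight.
corpus_words = ["the", "the", "and", "quick", "brown", "fox", "jazz", "zephyr"]
u_counts = Counter(c for w in corpus_words for c in w)
b_counts = Counter("".join(b) for w in corpus_words for b in pairwise(" " + w + " "))

vocabulary = sorted(set(corpus_words))
# Assumed inversion: lower frequency score -> higher sampling weight.
weights = [1.0 / word_weight(w) for w in vocabulary]

print(random.choices(vocabulary, weights=weights, k=10))
```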
### Text Line Rendering
After a sentence is sampled, the image is created in two main steps:
1. **Text Image Generation:**
   - A font is selected from a list of 100,000 fonts.
   - The text is rendered on a white background to produce an initial grayscale text image.

2. **Background and Final Image Creation:**
   - A background image is chosen from a collection of realistic textures (such as paper, wood, or walls).
   - A transparency value is randomly selected between 0.5 and 1.
   - Random transformations (including rotation, warping, Gaussian blur, dilation, and color jitter) are applied to the text image.
   - Minor transformations (such as dilation, color jitter, and random inversion) are applied to the background image.
   - Finally, the processed text image is superimposed onto the background image using the chosen transparency, resulting in the final synthetic RGB image (sketched in code below).
In total, this process yields approximately 2.2 million text images. Each image has a fixed height of 64 pixels, while the width varies in proportion to the length of the text. Metadata for each image is stored in the accompanying JSON file.
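The superimposition step can be sketched with PIL as follows. This is a minimal sketch under stated assumptions (multiplicative "ink" blending and a shared image size); it is not the actual generation code:

```python
import random

from PIL import Image, ImageChops

def composite_text_on_background(text_img: Image.Image,
                                 bg_img: Image.Image) -> Image.Image:
    """Blend a grayscale text image (dark ink on white) onto a texture."""
    bg = bg_img.convert("RGB").resize(text_img.size)
    text_rgb = text_img.convert("RGB")

    # Transparency value randomly selected between 0.5 and 1, as described.
    alpha = random.uniform(0.5, 1.0)

    # Multiplying darkens the background where the ink is and leaves it
    # unchanged where the text image is white; blend controls the opacity.
    inked = ImageChops.multiply(bg, text_rgb)
    return Image.blend(bg, inked, alpha)
```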
## Downloading the Dataset Locally

You can download the dataset locally using either Git LFS or the `huggingface_hub` Python library.
### Using Git LFS

Clone the repository (ensure you have Git LFS installed):

```bash
git lfs clone https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2
```

This will create a local directory named `font-square-v2` that contains the `tars/` folder with the subdirectories `train/` and `fine_tune/`.
### Using the huggingface_hub Python Library

Alternatively, download a snapshot of the dataset with:

```python
from huggingface_hub import snapshot_download

# Download the repository; the local path is returned
local_dir = snapshot_download(repo_id="blowing-up-groundhogs/font-square-v2", repo_type="dataset")
print("Dataset downloaded to:", local_dir)
```
After downloading, you will find the tar shards inside:

- `local_dir/tars/train/{000..499}.tar`
- `local_dir/tars/fine_tune/{000..049}.tar`
### Using WebDataset with the Local Files

Once downloaded, you can load the dataset with WebDataset. For example, to load the training split:

```python
import os

import webdataset as wds

local_dir = "path/to/font-square-v2"  # Update the path if necessary

# Load all training shards (000-499)
train_pattern = os.path.join(local_dir, "tars", "train", "{000..499}.tar")
train_dataset = wds.WebDataset(train_pattern).decode("pil")

for sample in train_dataset:
    rgb_image = sample["rgb.png"]  # PIL image
    bw_image = sample["bw.png"]    # PIL image
    metadata = sample["json"]
    print("Training sample metadata:", metadata)
    break
```
Similarly, load the fine-tune split with:

```python
fine_tune_pattern = os.path.join(local_dir, "tars", "fine_tune", "{000..049}.tar")
fine_tune_dataset = wds.WebDataset(fine_tune_pattern).decode("pil")
```
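For training, the WebDataset pipeline plugs directly into a PyTorch `DataLoader`. The shuffle buffer, batch size, and worker count below are illustrative choices, not recommendations from the dataset authors:

```python
import torchvision.transforms as transforms
import webdataset as wds
from torch.utils.data import DataLoader

to_tensor = transforms.ToTensor()

def to_sample(sample):
    # Convert the RGB image to a tensor and keep the metadata dict.
    return to_tensor(sample["rgb.png"]), sample["json"]

pipeline = (
    wds.WebDataset(train_pattern)
    .shuffle(1000)      # in-memory shuffle buffer
    .decode("pil")
    .map(to_sample)
)

# Images share a height of 64 px but vary in width, so batch_size=1 is
# used here; real training code would pad or bucket samples by width.
loader = DataLoader(pipeline, batch_size=1, num_workers=4)

for rgb, meta in loader:
    print(rgb.shape)  # e.g. torch.Size([1, 3, 64, W])
    break
```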
## Streaming the Dataset Directly Over HTTP

If you prefer not to download all shards, you can stream them directly over HTTP from the Hugging Face CDN, provided the tar files are public.

For example, the training tar shards are available at:

```
https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/train/000.tar
...
https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/train/499.tar
```

and you can stream them as follows:
```python
import webdataset as wds

url_pattern = (
    "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main"
    "/tars/train/{000..499}.tar"
)
dataset = wds.WebDataset(url_pattern).decode("pil")

for sample in dataset:
    rgb_image = sample["rgb.png"]
    bw_image = sample["bw.png"]
    metadata = sample["json"]
    print("Sample metadata:", metadata)
    break
```
(Adjust the shard range accordingly for the fine-tune split.)
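For instance, a sketch for streaming the fine-tune split, assuming the same naming scheme:

```python
fine_tune_url = (
    "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main"
    "/tars/fine_tune/{000..049}.tar"
)
fine_tune_stream = wds.WebDataset(fine_tune_url).decode("pil")
```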
## Additional Considerations

### Decoding

The `.decode("pil")` method in WebDataset converts image bytes into PIL images. If you prefer PyTorch tensors, you can add a transformation step:

```python
import torchvision.transforms as transforms

transform = transforms.ToTensor()
dataset = (
    wds.WebDataset(train_pattern)
    .decode("pil")
    .map(lambda sample: {
        "rgb": transform(sample["rgb.png"]),
        "bw": transform(sample["bw.png"]),
        "metadata": sample["json"]
    })
)
```

### Shard Naming

The naming convention is now:

```
tars/
├── train/
│   └── {000..499}.tar
└── fine_tune/
    └── {000..049}.tar
```

Ensure that your WebDataset pattern matches this folder structure and tar file naming.
By following these instructions, you can easily integrate the font-square-v2 dataset into your project for training and fine-tuning with a diverse set of synthetic text-line images.