---
license: mit
---


# Accessing the `font-square-v2` Dataset on Hugging Face

The `font-square-v2` dataset is hosted on Hugging Face at [blowing-up-groundhogs/font-square-v2](https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2). The dataset is stored in WebDataset format, with all tar files located in the `tars/` folder of the repository. Each tar file contains multiple samples; each sample includes:
- An RGB image file (with the extension `.rgb.png`)
- A black-and-white image file (with the extension `.bw.png`)
- A JSON file (`.json`) with metadata (e.g. text and writer ID)
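
For a quick look at a single sample before downloading anything, you can stream one shard and inspect its keys. This is a minimal sketch; it assumes the first shard is named `000000.tar`, as listed in the streaming section below:

```python
import webdataset as wds

# Stream a single shard and pull out the first sample
url = "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/000000.tar"
sample = next(iter(wds.WebDataset(url).decode("pil")))
print(sorted(sample.keys()))  # expect 'bw.png', 'json', 'rgb.png' plus WebDataset's '__key__'/'__url__'
```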

You can access the dataset in one of two ways: by downloading it locally or by streaming it directly over HTTP.

---

## 1. Downloading the Dataset Locally

You can download the dataset locally using either Git LFS or the [huggingface_hub](https://huggingface.co/docs/huggingface_hub) Python library.

### **Using Git LFS**

Clone the repository (make sure [Git LFS](https://git-lfs.github.com/) is installed):

```bash
git lfs clone https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2
```

This will create a local directory named `font-square-v2` that contains the `tars/` folder with all the tar shards.

### **Using the huggingface_hub Python Library**

Alternatively, you can download a snapshot of the dataset in your Python code:

```python
from huggingface_hub import snapshot_download

# Download the repository; the local path is returned
local_dir = snapshot_download(repo_id="blowing-up-groundhogs/font-square-v2", repo_type="dataset")
print("Dataset downloaded to:", local_dir)
```

After downloading, the tar shards are available in the `tars/` subdirectory of `local_dir`.
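
If you would rather not spell out the shard range by hand, you can also glob the downloaded shards and pass the resulting list directly to WebDataset. This is a small sketch that builds on the `local_dir` returned above:

```python
import glob
import os

import webdataset as wds

# Collect every downloaded tar shard; WebDataset accepts a list of shard paths
shards = sorted(glob.glob(os.path.join(local_dir, "tars", "*.tar")))
dataset = wds.WebDataset(shards).decode("pil")
```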



### **Using WebDataset with the Local Files**

Once the dataset is downloaded locally, you can load it with [WebDataset](https://github.com/webdataset/webdataset). For example:

```python
import os

import webdataset as wds

# Assuming the dataset was downloaded to `local_dir` and the tar shards are in the 'tars' folder
local_dir = "path/to/font-square-v2"  # Update this path if necessary
tar_pattern = os.path.join(local_dir, "tars", "{000000..000499}.tar")  # Adjust the range to match your tar shard naming

# Create a WebDataset and decode images to PIL
dataset = wds.WebDataset(tar_pattern).decode("pil")

# Example: iterate over a few samples
for sample in dataset:
    rgb_image = sample["rgb.png"]  # RGB image (PIL image)
    bw_image = sample["bw.png"]    # BW image (PIL image)
    metadata = sample["json"]      # Metadata (e.g. text and writer ID)

    print("Sample metadata:", metadata)

    # Process the images as needed...
    break
```



---



## 2. Streaming the Dataset Directly Over HTTP



If you prefer to stream the dataset directly from Hugging Face (without downloading the entire dataset), you can use the HTTP URLs provided by the Hugging Face CDN, as long as the tar files are publicly accessible.



For example, if the tar shards are available at:



```
https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/000000.tar
https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/000001.tar
...
```



you can set up your WebDataset as follows:



```python
import webdataset as wds

# Define the URL pattern to stream the tar shards directly from Hugging Face.
# Adjust the shard range (here 000000 to 000499) to cover all available tar shards.
url_pattern = "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/{000000..000499}.tar"

# Create a WebDataset that streams data
dataset = wds.WebDataset(url_pattern).decode("pil")

# Iterate over a few samples from the streamed dataset
for sample in dataset:
    rgb_image = sample["rgb.png"]
    bw_image = sample["bw.png"]
    metadata = sample["json"]

    print("Sample metadata:", metadata)

    # Process the sample as needed...
    break
```
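
For training directly from the streamed shards, you will usually want some randomization. The snippet below is an illustrative sketch using standard WebDataset options (shard shuffling plus an in-memory shuffle buffer), not something required by this dataset:

```python
import webdataset as wds

url_pattern = "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/{000000..000499}.tar"

# Shuffle the shard order and keep a 1000-sample shuffle buffer;
# both are standard WebDataset features shown here as an example.
dataset = (
    wds.WebDataset(url_pattern, shardshuffle=True)
    .shuffle(1000)
    .decode("pil")
)
```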



**Note:** Streaming performance depends on your network connection and the Hugging Face CDN. If you experience any slowdowns, consider downloading the dataset locally instead.

---

## Additional Considerations

- **Decoding:**  
  The `.decode("pil")` method in the WebDataset pipeline converts image bytes into PIL images. If you prefer PyTorch tensors, you can add a transformation:

  ```python
  import torchvision.transforms as transforms

  transform = transforms.ToTensor()

  dataset = (
      wds.WebDataset(url_pattern)
      .decode("pil")
      .map(lambda sample: {
          "rgb": transform(sample["rgb.png"]),
          "bw": transform(sample["bw.png"]),
          "metadata": sample["json"],
      })
  )
  ```
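
  If you train with PyTorch, a common follow-up (sketched here as an assumption about your setup, not part of the dataset) is to batch inside the pipeline and wrap it in a `DataLoader` with `batch_size=None`, since the WebDataset pipeline already yields batches:

  ```python
  import torchvision.transforms as transforms
  import webdataset as wds
  from torch.utils.data import DataLoader

  transform = transforms.ToTensor()
  url_pattern = "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/{000000..000499}.tar"

  # `to_tuple` keeps only the listed fields; `batched` collates them
  # (stacking assumes the images in a batch share the same size).
  dataset = (
      wds.WebDataset(url_pattern)
      .decode("pil")
      .to_tuple("rgb.png", "bw.png")
      .map_tuple(transform, transform)
      .batched(16)
  )

  # batch_size=None because batching already happened in the pipeline
  loader = DataLoader(dataset, batch_size=None, num_workers=2)
  ```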

- **Shard Naming:**  
  Ensure that the naming convention in your `tars/` folder matches the URL pattern used above. Adjust the pattern `{000000..000499}` accordingly if your tar files have a different naming scheme or if there are more shards.
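
  If you prefer not to hard-code the range at all, one option is to list the repository files with `huggingface_hub` and build the shard URLs programmatically (a sketch using the public `list_repo_files` API):

  ```python
  import webdataset as wds
  from huggingface_hub import list_repo_files

  repo_id = "blowing-up-groundhogs/font-square-v2"
  files = list_repo_files(repo_id, repo_type="dataset")

  # Keep only the tar shards and turn them into streamable URLs
  shard_urls = [
      f"https://huggingface.co/datasets/{repo_id}/resolve/main/{name}"
      for name in sorted(files)
      if name.startswith("tars/") and name.endswith(".tar")
  ]

  # WebDataset also accepts an explicit list of shard URLs
  dataset = wds.WebDataset(shard_urls).decode("pil")
  ```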

---

By following these instructions, you can easily load and work with the `font-square-v2` dataset from Hugging Face in your Python projects using WebDataset.