*(Dataset preview: `image` / `label` rows; the visible preview rows all show class `0`, `cardboard_box`.)*
Abstract
This dataset consists of images scraped from Bing Images using an iCrawler bot. Additional processing and cleanup were applied to remove duplicates, irrelevant images, watermark banners, and watermarks, followed by a final screening with a VLM to decide which images could still be used as test data and which had to be thrown out because they would 'poison' the dataset. The scrape started at 75k images; 15k of those were duplicates (identified by comparing MD5 hashes) and a further 20k were deemed too low quality to keep, leaving roughly 40k images.
Usage
It is recommended to download the files with wget rather than cloning the entire repository:
For the dataset .zip file:
wget https://huggingface.co/datasets/lreal/BingRecycle40k/resolve/main/BingRecycle40k_39class_rev1.zip
Or if you also need the dataset classes.txt file:
wget https://huggingface.co/datasets/lreal/BingRecycle40k/resolve/main/BingRecycle40k_39class_rev1_classes.txt
If you would like to create your own split, please refer to the YOLO Conversion Repo I made to automate this. You will also need to download both .zip files in the pre-split-dataset directory.
Problems with web scraped datasets
Some of the problems that had to be addressed in this web-scraped image dataset (common to most web-scraped datasets):
- Watermarks (many stock photo houses use ones with symbols and lines that are difficult to detect)
- Banner watermarks (usually at the bottom of the image)
- Incorrect images (sometimes a completely different object is shown)
- Confusing backgrounds (crowded with people, other objects, patterns like the PNG transparency checkerboard, etc.)
- Text on the image (e.g. product photos with a text description)
- Images that are not photorealistic (clipart, illustrations, drawings, etc.)
- Duplicate images (can show up across similar queries)
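The duplicate problem above is handled with MD5 hash comparison (described in the design process below). A minimal sketch, assuming a flat list of file paths:

```python
import hashlib


def md5_of_file(path, chunk_size=65536):
    """Compute the MD5 digest of a file's bytes, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def find_duplicates(paths):
    """Return paths whose MD5 matches an earlier file (exact byte duplicates)."""
    seen = {}
    duplicates = []
    for path in paths:
        digest = md5_of_file(path)
        if digest in seen:
            duplicates.append(path)  # same bytes as seen[digest]
        else:
            seen[digest] = path
    return duplicates
```

Note this only catches byte-identical copies; re-encoded or resized copies of the same picture hash differently and would need perceptual hashing instead.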
Pipeline diagram
Below is a diagram of the entire pipeline used to gather this data:

Github Repositories
If you would like to reuse the same workflow, each repository is listed below:
- iCrawler scraper: scrapes images from the web using queries.txt; includes a script to remove duplicates
- Banner watermark removal: crops the image from the bottom until it finds a sharp colour difference, using greyscale darkness values
- Watermark mask generation: generates masks for watermarks with YOLO detect inference, plus OWLv2 for quick screening
- Watermark removal: adapted from the IOPaint pip package; lets the CLI (iopaint run) handle large datasets (recursive directories and batching), using the LaMa model
- Ollama VLM screening: VLM classification of images against several criteria via a local Ollama server; decides which images are salvageable and which should be removed
- Final YOLO cls conversion: converts the dataset into YOLO classify format, using the JSON outputs from the screening step to decide whether to keep or discard each image
Design process (Each step explained)
- Web scraping: For this dataset, I used Bing Images because it tends to be less restrictive than Google Images (and is supported by iCrawler). A future revision could add Google Image results for a larger set of images.
- Identifying duplicates: MD5 hash comparison is a natural choice here; because the digest is derived from the file's bytes, it identifies exact duplicates with very high reliability.
- Removing watermark banners: This went through a few design iterations, from looking for large areas of colour difference in OpenCV to using text detection. The final algorithm converts the image to greyscale and checks whether the bottom row of pixels exceeds a set average darkness value; if so, it computes the average darkness of each row above, and once a row falls below a percentage of the bottom row's darkness, it crops to that height. A maximum crop height is also enforced for safety.
- Watermark detection: This was inspired by a Hugging Face repo and adapted to work on large datasets, with performance optimized for CUDA-enabled devices. I also implemented batching for parallel processing, speeding up inference many times over on datasets like this one.
- Watermark removal: This reuses the LaMa-based IOPaint project; I modified the source so the CLI command (iopaint run) works with large datasets, supports nested directories with the directory layout preserved in the output, and uses parallel computing for quicker inpainting across the entire dataset.
- VLM Screening: This was not originally planned, but the methods above did not clean the dataset up enough on their own. Since this is very lengthy inference (about 30 hours in total on this dataset), it runs at the very end, after the quicker algorithms and models have made the VLM's job easier. I was already familiar with Ollama and its strong API, and had it running locally, so this was easy to implement. The first model I tried was LLaVA, but the results were lackluster: it did not seem to follow prompts and was either completely wrong or hesitant. Switching to qwen2.5vl gave much better results and resolved those issues. Memory usage was also a concern as more and more images were base64-encoded, so I added several garbage-collection calls and deleted unused variables; in the end the run stayed under 4 GB after 60k images processed.
- Converting to YOLO: This last step was straightforward. I originally planned a regular train/test/val split, but because the VLM screening can salvage some images that are still usable for testing, I kept those exclusively for the test split. After filtering out images with incorrect items or a clipart style, any image the VLM flagged as having a watermark or a wrong background is routed to the test split. This may introduce some bias into test results, but since these images are kept out of the val split, model quality is unaffected: the hyperparameters do not depend on them.
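The banner-crop step above can be sketched in pure Python, treating the image as a list of greyscale pixel rows. The thresholds (minimum banner darkness, drop ratio, maximum crop fraction) are hypothetical placeholders, not the repo's actual values:

```python
def row_darkness(row):
    """Mean darkness of one row of greyscale pixels (0 = white, 255 = black)."""
    return sum(255 - v for v in row) / len(row)


def banner_crop_height(gray, min_banner_darkness=100.0, drop_ratio=0.6, max_crop=0.25):
    """Return how many rows to keep after removing a dark banner at the bottom.

    A banner is assumed if the bottom row's darkness exceeds min_banner_darkness.
    Rows are then scanned upward until one falls below drop_ratio times the
    bottom row's darkness; a maximum crop fraction caps how much can be removed.
    """
    height = len(gray)
    bottom = row_darkness(gray[-1])
    if bottom <= min_banner_darkness:
        return height  # no banner detected, keep the whole image
    keep_at_least = int(height * (1 - max_crop))  # safety: never crop past this
    for y in range(height - 2, -1, -1):
        if row_darkness(gray[y]) < drop_ratio * bottom:
            return max(y + 1, keep_at_least)  # crop below row y
    return keep_at_least
```

With a real image you would crop via `img.crop((0, 0, width, banner_crop_height(gray)))` in Pillow or array slicing in OpenCV.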
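The per-image request for the Ollama screening step can be sketched as follows. The JSON shape matches Ollama's documented /api/generate endpoint (`model`, `prompt`, `images` as base64 strings, `stream`); the prompt itself is a placeholder, not the repo's actual screening criteria:

```python
import base64
import json


def build_screening_request(image_path, prompt, model="qwen2.5vl"):
    """Build the JSON body for a POST to Ollama's /api/generate endpoint.

    The image is read from disk and base64-encoded, as the screening step
    does for each image before sending it to the local Ollama server.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({
        "model": model,
        "prompt": prompt,            # e.g. "Is this photo watermarked? ..."
        "images": [encoded],
        "stream": False,             # one complete JSON response per image
    })
```

The body would then be sent with e.g. `requests.post("http://localhost:11434/api/generate", data=body)`. Since each encoded image lingers in memory, dropping references and calling `gc.collect()` periodically (as described above) keeps a long run bounded.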
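The split-routing rule in the conversion step can be sketched as below. The verdict field names are hypothetical stand-ins for the screening step's actual JSON output:

```python
import random


def route_image(verdict, train_frac=0.8, val_frac=0.1, rng=random):
    """Decide the split for one image from its VLM screening verdict.

    Returns 'discard', 'test', 'train', or 'val'. Flawed-but-salvageable
    images go only to the test split, never to train or val.
    """
    if verdict.get("incorrect_item") or verdict.get("clipart"):
        return "discard"  # would poison the dataset
    if verdict.get("watermark") or verdict.get("bad_background"):
        return "test"     # salvageable but flawed: test split only
    r = rng.random()      # clean images get a regular random split
    if r < train_frac:
        return "train"
    if r < train_frac + val_frac:
        return "val"
    return "test"
```

Keeping the flawed images out of val is what preserves hyperparameter tuning: val quality stays high while test still exercises realistic, imperfect inputs.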
What can be improved
- A lot of the watermarks (especially the more difficult ones like lines and symbols) are not detected well or properly removed; training a custom segmentation model on synthetic data might prove successful.
- Fewer images than expected: I was aiming for 50k, but many were removed as duplicates or incorrect. The next revision could expand the scrape queries, using synonyms to cover the widest possible range of images, or explore different image search engines and websites.
- Optimization, of course: this is a rough first release just to get things working, and the pipeline can definitely be made to run faster.