Publication

#2
by albertzeyer - opened

Very nice that you prepared this cleaned-up version of OpenWebText2.

Do you have any publication on this where you write about it, or make use of it?

Thanks for your attention! I'm really flattered. Currently this is just a personal side project I did in my spare time. I might look into it further in the future if I have the time. Feel free to reach out if you have any thoughts about these datasets! I'm open to collaboration🤗.

I was looking for OpenWebText2 on HF, but it seems there is no official repo for it, nor actually any repo containing the original data. I only found subsets, like this: https://huggingface.co/datasets/RaiBP/openwebtext2-first-30-chunks-ablation-full or this: https://huggingface.co/datasets/maxtli/OpenWebText-2M

Or your repo.

But then you also already did some further filtering, which is actually useful, as I specifically need English-only text for my current use case.

I'm working on speech recognition (automatic speech recognition, ASR). I recently did some research on an English ASR model using the Loquacious dataset (https://huggingface.co/datasets/speechbrain/LoquaciousSet). In ASR, it is common to combine the acoustic model with a language model, but there is no official text corpus for Loquacious. (LibriSpeech is a very popular example where there is both the ASR dataset and a separate text-only corpus.)

So I'm searching for a text corpus that would be a good fit for Loquacious. Ideally its size (in number of words) should be about 10-100 times that of the ASR transcriptions, so I need around 25B words, or at least that order of magnitude (anything >=5B words is maybe also already fine). But then, it should ideally also match in text style (spoken transcriptions vs. written text) and domains (topics). There is probably not a perfect match.
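For reference, here is a rough way I would estimate the size of a candidate corpus without downloading it fully; just a minimal sketch using the `datasets` library, where the repo name is a hypothetical placeholder and I assume the text column is called `text`:

```python
import itertools
from datasets import load_dataset

# Hypothetical repo name -- substitute the candidate text corpus.
repo = "example-user/openwebtext2-cleaned"

# Stream a small sample instead of downloading everything.
ds = load_dataset(repo, split="train", streaming=True)
sample = list(itertools.islice(ds, 1000))

# Average words per row, extrapolated with the total row count
# (e.g. taken from the dataset card).
avg_words = sum(len(ex["text"].split()) for ex in sample) / len(sample)
n_rows = 17_103_059  # example value; use the candidate repo's actual row count
print(f"~{avg_words:.0f} words/row -> ~{avg_words * n_rows / 1e9:.1f}B words "
      f"(target: >=5B, ideally ~25B)")
```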

My version is filtered from vietgpt/the_pile_openwebtext2. I think that's an exact copy of the original version, since the number of rows (17,103,059) matches the dataset card description in defunct-datasets/the_pile_openwebtext2.
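A quick way to verify the row count without downloading any data; a sketch assuming the repo publishes its split metadata:

```python
from datasets import load_dataset_builder

# Reads only the repo's split metadata, no data download
# (assuming the "train" split metadata is available).
builder = load_dataset_builder("vietgpt/the_pile_openwebtext2")
n_rows = builder.info.splits["train"].num_examples
print(n_rows)  # expected: 17103059, matching defunct-datasets/the_pile_openwebtext2
```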

I cleaned many large-scale English-only datasets earlier this year (OpenWebText2, C4, Books3, CC-News, etc.), but none of them focuses on spoken-style text. If the style is important, I think the best thing to do would be to annotate some samples with an LLM, train a small classifier on these annotations, and apply the classifier to the entire dataset to filter for spoken-style text. Maybe I can help with that later this month🤔️.
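Roughly, the pipeline could look like the following; just a sketch where a TF-IDF + logistic regression classifier stands in for the small classifier, and the annotation file name and label convention are hypothetical:

```python
import json
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical file with LLM annotations, one JSON object per line:
# {"text": "...", "label": 1}  where 1 = spoken-style, 0 = written-style.
with open("llm_annotations.jsonl") as f:
    annotated = [json.loads(line) for line in f]
texts = [ex["text"] for ex in annotated]
labels = [ex["label"] for ex in annotated]

# Small, cheap classifier trained on the LLM annotations.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2, max_features=200_000),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Apply it to the full corpus (streamed) and keep spoken-style documents.
# In practice this should be batched; per-example calls are just for clarity.
ds = load_dataset("vietgpt/the_pile_openwebtext2", split="train", streaming=True)
spoken_style = (ex for ex in ds if clf.predict_proba([ex["text"]])[0, 1] > 0.5)
```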

Oh ok. I first found https://huggingface.co/datasets/Bingsu/openwebtext_20p and saw that it has 33M rows, which is only 20% of OpenWebText (1), so I thought that OpenWebText2 must have even more than that. So does that mean the segmentation is different: now it's whole documents, while before it was on the sentence level or so?

I don't know how relevant the text style is. Maybe it still works OK. I'm not sure if anyone has really studied this.

For speech recognition, we know that there is a strong correlation between the perplexity (PPL) of the language model and the word error rate (WER), where the PPL is measured on the transcriptions of some dev sets of the speech corpora.
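As an illustration, the PPL on dev-set transcriptions could be measured roughly like this; a sketch using GPT-2 purely as a stand-in (in practice the LM would be trained on the candidate text corpus, and ASR work usually reports word-level PPL of an n-gram or neural LM rather than subword-token-level PPL):

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Placeholder dev-set transcriptions; in practice, load them from the ASR corpus.
dev_transcriptions = [
    "hello how are you doing today",
    "we will meet again tomorrow morning",
]

nll_sum, n_tokens = 0.0, 0
with torch.no_grad():
    for text in dev_transcriptions:
        enc = tok(text, return_tensors="pt")
        out = model(**enc, labels=enc["input_ids"])
        n_pred = enc["input_ids"].shape[1] - 1  # loss is averaged over predicted tokens
        nll_sum += out.loss.item() * n_pred
        n_tokens += n_pred

print(f"token-level PPL: {math.exp(nll_sum / n_tokens):.1f}")
```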

What you propose would indeed be interesting. Maybe let's continue the discussion via mail: [email protected]
