Update README.md

---
license: mit
---

# Romanian Wikipedia - Processed Dataset

This is a processed version of the Romanian Wikipedia subset from the [FineWiki dataset](https://huggingface.co/datasets/HuggingFaceFW/finewiki), optimized for language model training and analysis. The dataset has been filtered to essential fields and enriched with token counts using tiktoken's cl100k_base encoding.

## Dataset Overview

- **Total Pages**: 493,462
- **Total Size**: ~669 MB (compressed parquet)
- **Language**: Romanian (ro)
- **Total Tokens**: ~395 million tokens (cl100k_base encoding)
- **Source**: FineWiki, an updated version of the wikimedia/wikipedia dataset (originally released in 2023) with improved text extraction

## Dataset Structure

### Data Instances

Example from the Romanian subset (values truncated for readability):

```json
{
  "id": "rowiki/3217840",
  "title": "Melba (film)",
  "url": "https://ro.wikipedia.org/wiki/Melba_(film)",
  "date_modified": "2023-08-15T10:22:31Z",
  "text": "# Melba (film)\nMelba este un film biografic muzical britanic din 1953 regizat de Lewis Milestone...",
  "token_count": 2145
}
```

### Data Fields

- **id** (string): dataset-unique identifier; format: `rowiki/<page_id>`
- **title** (string): article title
- **url** (string): canonical article URL
- **date_modified** (string): ISO-8601 timestamp of the last page revision
- **text** (string): cleaned article text in Markdown, preserving headings, lists, code/pre blocks, tables, and math
- **token_count** (int64): number of tokens in the `text` field, calculated using tiktoken's cl100k_base encoding (used by GPT-4 and similar models)
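
A quick way to inspect these fields with the `datasets` library (a minimal sketch; the repo id below is a placeholder for this dataset's actual path on the Hub):

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual dataset path on the Hub.
ds = load_dataset("Yxanul/finewiki-ro-processed", split="train")

row = ds[0]
for field in ("id", "title", "url", "date_modified", "token_count"):
    print(f"{field}: {row[field]}")
print(row["text"][:200])  # Markdown-formatted article body
```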
## Tokenization

Token counts are computed using [tiktoken](https://github.com/openai/tiktoken) with the **cl100k_base** encoding, which is the same tokenizer used by:

- GPT-4
- GPT-3.5-turbo
- text-embedding-ada-002

This makes the dataset particularly useful for training or fine-tuning models compatible with OpenAI's tokenization scheme.
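
The stored counts should be reproducible along these lines (a minimal sketch; the sample string is the truncated text from the example above):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "# Melba (film)\nMelba este un film biografic muzical britanic din 1953..."
# disallowed_special=() keeps encode() from raising if an article contains
# a literal special-token string such as "<|endoftext|>".
print(len(enc.encode(text, disallowed_special=())))  # number of cl100k_base tokens
```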
## Processing Details

This dataset was created from the original FineWiki Romanian subset by:

1. Filtering from 14 columns down to 6 essential fields
2. Computing token counts for each article's text using tiktoken (cl100k_base)
3. Processing in batches of 10,000 rows for efficient computation
4. Saving as compressed parquet files with snappy compression
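
A minimal sketch of this pipeline, assuming a local copy of the source FineWiki shard (the input file name and source schema are assumptions):

```python
import pandas as pd
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
KEEP = ["id", "title", "url", "date_modified", "text"]  # token_count is added below
BATCH = 10_000

# 1. Keep only the essential columns from the source shard.
df = pd.read_parquet("finewiki_ro_source.parquet", columns=KEEP)

# 2 + 3. Compute token counts, processing 10,000 rows at a time.
counts = []
for start in range(0, len(df), BATCH):
    texts = df["text"].iloc[start:start + BATCH]
    counts.extend(len(enc.encode(t, disallowed_special=())) for t in texts)
df["token_count"] = counts

# 4. Write a snappy-compressed parquet file.
df.to_parquet("000_00000_processed.parquet", compression="snappy")
```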
## Files

- `000_00000_processed.parquet`: 249,533 articles (~327 MB)
- `000_00001_processed.parquet`: 243,929 articles (~342 MB)
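
Both shards can be read directly, for example with pandas:

```python
import pandas as pd

shards = ["000_00000_processed.parquet", "000_00001_processed.parquet"]
df = pd.concat((pd.read_parquet(p) for p in shards), ignore_index=True)

print(len(df))                  # expected: 493,462 pages
print(df["token_count"].sum())  # expected: roughly 395 million tokens
```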
## Use Cases

- Training or fine-tuning Romanian language models
- Token budget analysis and dataset planning (see the sketch after this list)
- Information retrieval and semantic search
- Question answering systems
- Text summarization and generation tasks
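
As an illustration of budget planning, a toy selection under an arbitrary 100M-token budget:

```python
import pandas as pd

df = pd.read_parquet("000_00000_processed.parquet")

BUDGET = 100_000_000  # arbitrary target, in cl100k_base tokens
subset = df[df["token_count"].cumsum() <= BUDGET]
print(f"{len(subset)} articles fit, {subset['token_count'].sum():,} tokens used")
```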
## Citation Information

```bibtex
@dataset{penedo2025finewiki,
  author    = {Guilherme Penedo},
  title     = {FineWiki},
  year      = {2025},
  publisher = {Hugging Face Datasets},
  url       = {https://huggingface.co/datasets/HuggingFaceFW/finewiki},
  urldate   = {2025-10-20},
  note      = {Source: Wikimedia Enterprise Snapshot API (https://api.enterprise.wikimedia.com/v2/snapshots). Text licensed under CC BY-SA 4.0 with attribution to Wikipedia contributors.}
}
```

## License
The text content is licensed under **CC BY-SA 4.0** with attribution to Wikipedia contributors, as per the original Wikipedia content license.
## Dataset Creator

Processed and uploaded by [Yxanul](https://huggingface.co/Yxanul).
|