---
license: mit
language:
- ro
tags:
- legal
- finance
- biology
- chemistry
- medical
- code
- climate
size_categories:
- 100K<n<1M
---

# Romanian Wikipedia - Processed Dataset

This is a processed version of the Romanian Wikipedia subset from the [FineWiki dataset](https://huggingface.co/datasets/HuggingFaceFW/finewiki), optimized for language model training and analysis. The dataset has been filtered to essential fields and enriched with token counts using tiktoken's cl100k_base encoding.

## Dataset Overview

- **Total Pages**: 493,462
- **Total Size**: ~669 MB (compressed parquet)
- **Language**: Romanian (ro)
- **Total Tokens**: ~395 million tokens (cl100k_base encoding)
- **Source**: FineWiki, an updated and better-extracted version of the wikimedia/wikipedia dataset originally released in 2023

## Dataset Structure

### Data Instances

Example from the Romanian subset (values truncated for readability):

```json
{
  "id": "rowiki/3217840",
  "title": "Melba (film)",
  "url": "https://ro.wikipedia.org/wiki/Melba_(film)",
  "date_modified": "2023-08-15T10:22:31Z",
  "text": "# Melba (film)\nMelba este un film biografic muzical britanic din 1953 regizat de Lewis Milestone...",
  "token_count": 2145
}
```

### Data Fields

- **id** (string): dataset-unique identifier; format: `rowiki/<page_id>`
- **title** (string): article title
- **url** (string): canonical article URL
- **date_modified** (string): ISO-8601 timestamp of the last page revision
- **text** (string): cleaned article text in markdown, preserving headings, lists, code/pre blocks, tables, and math
- **token_count** (int64): number of tokens in the `text` field, computed with tiktoken's cl100k_base encoding (used by GPT-4 and similar models)

## Tokenization

Token counts are computed using [tiktoken](https://github.com/openai/tiktoken) with the **cl100k_base** encoding, which is the same tokenizer used by:

- GPT-4
- GPT-3.5-turbo
- text-embedding-ada-002

This makes the dataset particularly useful for training or fine-tuning models compatible with OpenAI's tokenization scheme.
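
The token count of any article can be reproduced in a few lines. A minimal sketch, assuming `tiktoken` is installed (`pip install tiktoken`); the sample string is illustrative:

```python
import tiktoken

# Load the same encoding used to produce the token_count field.
enc = tiktoken.get_encoding("cl100k_base")

text = "# Melba (film)\nMelba este un film biografic muzical britanic din 1953..."
print(len(enc.encode(text)))  # number of cl100k_base tokens in `text`
```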

## Processing Details

This dataset was created from the original FineWiki Romanian subset by (a sketch of the pipeline follows the list):

1. Filtering from 14 columns down to 6 essential fields
2. Computing token counts for each article's text using tiktoken (cl100k_base)
3. Processing in batches of 10,000 rows for efficient computation
4. Saving as compressed parquet files with snappy compression
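
The exact processing script is not published; the following is a hedged Python sketch of the steps above, assuming pandas and tiktoken are available and using an illustrative input file name:

```python
import pandas as pd
import tiktoken

# Five of the six documented fields come from the source; token_count is added below.
COLUMNS = ["id", "title", "url", "date_modified", "text"]
enc = tiktoken.get_encoding("cl100k_base")

def process(in_path: str, out_path: str, batch_size: int = 10_000) -> None:
    df = pd.read_parquet(in_path, columns=COLUMNS)
    counts = []
    # Tokenize in batches of 10,000 rows; encode_batch runs across threads.
    for start in range(0, len(df), batch_size):
        batch = df["text"].iloc[start:start + batch_size].tolist()
        counts.extend(len(ids) for ids in enc.encode_batch(batch))
    df["token_count"] = counts
    df.to_parquet(out_path, compression="snappy")

process("000_00000.parquet", "000_00000_processed.parquet")  # illustrative input name
```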

## Files

- `000_00000_processed.parquet`: 249,533 articles (~327 MB)
- `000_00001_processed.parquet`: 243,929 articles (~342 MB)
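
For example, both files can be loaded together with pandas (a minimal sketch; paths assume the files have been downloaded locally):

```python
import pandas as pd

# Concatenate the two shards into one frame.
df = pd.concat(
    pd.read_parquet(name)
    for name in ["000_00000_processed.parquet", "000_00001_processed.parquet"]
)
print(len(df))                  # 493,462 articles
print(df["token_count"].sum())  # ~395 million tokens
```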

## Use Cases

- Training or fine-tuning Romanian language models
- Token budget analysis and dataset planning (see the sketch after this list)
- Information retrieval and semantic search
- Question answering systems
- Text summarization and generation tasks
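
As an example of token budget planning, the precomputed `token_count` field makes it cheap to draw a fixed-size sample. A sketch reusing `df` from the loading example above; the 100M budget is illustrative:

```python
# Shuffle, then keep articles until the cumulative token count reaches the budget.
budget = 100_000_000  # illustrative 100M-token budget
sample = df.sample(frac=1.0, random_state=0)
subset = sample[sample["token_count"].cumsum() <= budget]
print(len(subset), subset["token_count"].sum())
```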

## Citation Information

```bibtex
@dataset{penedo2025finewiki,
  author    = {Guilherme Penedo},
  title     = {FineWiki},
  year      = {2025},
  publisher = {Hugging Face Datasets},
  url       = {https://huggingface.co/datasets/HuggingFaceFW/finewiki},
  urldate   = {2025-10-20},
  note      = {Source: Wikimedia Enterprise Snapshot API (https://api.enterprise.wikimedia.com/v2/snapshots). Text licensed under CC BY-SA 4.0 with attribution to Wikipedia contributors.}
}
```

## License

The text content is licensed under **CC BY-SA 4.0** with attribution to Wikipedia contributors, as per the original Wikipedia content license.

## Dataset Creator

Processed and uploaded by [Yxanul](https://huggingface.co/Yxanul).