---
pretty_name: Wikipedia
language:
- da
license: cc0-1.0
license_name: CC-0
size_categories:
- 100k-1M
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
source_datasets:
- danish-foundation-models/danish-gigaword
domains:
- Encyclopedic
---

# Dataset Card for Wikipedia

The Danish subsection of [Wikipedia](https://en.wikipedia.org/wiki/Main_Page). You can read more about Wikipedia on their [about](https://en.wikipedia.org/wiki/Wikipedia:About) page.

## Dataset Description

- **Number of samples**: 264.43K
- **Number of tokens (Llama 3)**: 122.00M
- **Average document length in tokens (min, max)**: 461.37 (3, 83.12K)

## Dataset Structure

An example from the dataset looks as follows.

```py
{
  "id": "wiki_366127",
  "text": "Vimoutiers er en kommune i departementet Orne i Basse-Normandie regionen i det nordvestlige Frankrig[...]",
  "source": "wiki",
  "added": "2021-03-28",
  "created": "2019-01-01, 2021-01-01",
  "token_count": 126
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): The date range within which the document was originally created.
- `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 tokenizer.

### Dataset Statistics
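The headline figures listed under Dataset Description can be recomputed from the `token_count` field once the data is loaded. The sketch below assumes the collection is available through the Hugging Face `datasets` library; the repository id used here is a placeholder and should be replaced with the actual Hub location of this dataset.

```py
from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hub location of this collection.
ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")

# Keep only the Wikipedia subset described in this card (see the `source` field above).
wiki = ds.filter(lambda row: row["source"] == "wiki")

token_counts = wiki["token_count"]
n_samples = len(token_counts)
n_tokens = sum(token_counts)

print(f"Number of samples: {n_samples:,}")
print(f"Number of tokens:  {n_tokens:,}")
print(
    f"Average document length in tokens (min, max): "
    f"{n_tokens / n_samples:.2f} ({min(token_counts)}, {max(token_counts)})"
)
```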