---
dataset_info:
  features:
    - name: title
      dtype: string
    - name: text
      dtype: string
    - name: url
      dtype: string
  splits:
    - name: train
      num_bytes: 138186439
      num_examples: 2759
  download_size: 79585645
  dataset_size: 138186439
license: cc-by-sa-3.0
task_categories:
  - text-generation
language:
  - de
pretty_name: wikitext german
size_categories:
  - 1K<n<10K
---

# Dataset Card for "wikitext-18-de"

## Dataset Summary

This dataset is a German variation of the wikitext dataset, comprising approximately 18 million tokens. It follows the same approach of extracting text from the "Good" and "Featured" articles on Wikipedia, but for German articles. The dataset is available under the Creative Commons Attribution-ShareAlike License.

The German version contains 2759 articles (retrieved 27.06.2023). Despite the smaller number of articles compared to wikitext, the dataset contains about 18 million whitespace-separated tokens, probably due to longer article lengths and differences between the languages.
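The token count above refers to a simple whitespace split over the `text` field of each article. A minimal sketch of how such a count could be computed (the function name and the sample strings below are illustrative, not taken from the dataset):

```python
def count_whitespace_tokens(texts):
    """Count whitespace-separated tokens across a list of article texts."""
    return sum(len(t.split()) for t in texts)

# Illustrative stand-ins for the dataset's "text" field.
articles = [
    "Der Artikel enthält mehrere Wörter .",
    "Zeilenumbrüche\nund Satzzeichen bleiben erhalten .",
]
print(count_whitespace_tokens(articles))  # → 12
```

`str.split()` with no arguments splits on any run of whitespace, including the newlines that the dataset preserves, so punctuation attached to a word counts as part of that token.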

The dataset retains the original casing, punctuation, numbers, and newlines, but excludes images, tables, and other non-text data.

More Information needed