Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown. Automatic dataset generation for the viewer fails with a DatasetGenerationError, caused by pyarrow.lib.ArrowInvalid: Failed to parse string: '-' as a scalar of type double.

Columns (name and dtype):

No. (string), TOKEN (string), NE-TAG (string), NE-EMB (string), ID (string), url_id (float64), left (float64), right (float64), top (float64), bottom (float64)

The preview begins with a comment row whose first field holds the IIIF URL template of the page image (all other fields are null):

# https://content.staatsbibliothek-berlin.de/zefys/SNP24340492-18990613-2-1-0-0/left,top,width,height/full/0/default.jpg

No. | TOKEN | NE-TAG | NE-EMB | ID | url_id | left | right | top | bottom
0 | Dien | O | O | - | 0 | 154 | 514 | 193 | 255
1 | ag | O | O | - | 0 | 154 | 514 | 193 | 255
2 | , | O | O | - | 0 | 154 | 514 | 193 | 255
3 | 13 | O | O | - | 0 | 154 | 514 | 193 | 255
4 | . | O | O | - | 0 | 154 | 514 | 193 | 255
5 | Junf | O | O | - | 0 | 154 | 514 | 193 | 255
6 | ! | O | O | - | 0 | 154 | 514 | 193 | 255
7 | r | O | O | - | 0 | 90 | 592 | 274 | 307
8 | Benllu | O | O | - | 0 | 90 | 592 | 274 | 307
9 | in | O | O | - | 0 | 90 | 592 | 274 | 307
10 | der | O | O | - | 0 | 90 | 592 | 274 | 307
11 | Crpeöiſlen | O | O | - | 0 | 90 | 592 | 274 | 307
12 | , | O | O | - | 0 | 90 | 592 | 274 | 307
13 | Berlin | B-LOC | O | Q64 | 0 | 90 | 592 | 274 | 307
14 | Mauerſte | B-LOC | O | Q1911025 | 0 | 89 | 591 | 298 | 329
15 | . | I-LOC | O | Q1911025 | 0 | 89 | 591 | 298 | 329
16 | 86 | I-LOC | O | Q1911025 | 0 | 89 | 591 | 298 | 329
17 |  | I-LOC | O | Q1911025 | 0 | 89 | 591 | 298 | 329
18 | 88 | I-LOC | O | Q1911025 | 0 | 89 | 591 | 298 | 329
19 | , | O | O | - | 0 | 89 | 591 | 298 | 329
20 | Und | O | O | - | 0 | 89 | 591 | 298 | 329
21 | den | O | O | - | 0 | 89 | 591 | 298 | 329
22 | Verbellungeſtener | O | O | - | 0 | 89 | 591 | 298 | 329
23 | vierteljährl | O | O | - | 0 | 88 | 591 | 321 | 351
24 | . | O | O | - | 0 | 88 | 591 | 321 | 351
25 | 6 | O | O | - | 0 | 88 | 591 | 321 | 351
26 | , | O | O | - | 0 | 88 | 591 | 321 | 351
27 | 4 | O | O | - | 0 | 88 | 591 | 321 | 351
28 | 75 | O | O | - | 0 | 88 | 591 | 321 | 351
29 | 3 | O | O | - | 0 | 88 | 591 | 321 | 351
30 | , | O | O | - | 0 | 88 | 591 | 321 | 351
31 | bei | O | O | - | 0 | 88 | 591 | 321 | 351
32 | altey | O | O | - | 0 | 88 | 591 | 321 | 351
33 | Zeitungsſpedifenren | O | O | - | 0 | 88 | 591 | 321 | 351
34 | it | O | O | - | 0 | 91 | 592 | 344 | 375
35 | Dotentohn | O | O | - | 0 | 91 | 592 | 344 | 375
36 | vierteljährlich | O | O | - | 0 | 91 | 592 | 344 | 375
37 | ) | O | O | - | 0 | 91 | 592 | 344 | 375
38 | 8 | O | O | - | 0 | 91 | 592 | 344 | 375
39 | . | O | O | - | 0 | 91 | 592 | 344 | 375
40 | 4 | O | O | - | 0 | 91 | 592 | 344 | 375
41 | 25 | O | O | - | 0 | 91 | 592 | 344 | 375
42 | 3 | O | O | - | 0 | 91 | 592 | 344 | 375
43 | , | O | O | - | 0 | 91 | 592 | 344 | 375
44 | mona | O | O | - | 0 | 91 | 592 | 344 | 375
45 | 1 | O | O | - | 0 | 91 | 592 | 344 | 375
46 | e | O | O | - | 0 | 89 | 592 | 368 | 398
47 | Peſauaen | O | O | - | 0 | 89 | 592 | 368 | 398
48 | ür | O | O | - | 0 | 89 | 592 | 368 | 398
49 | Berlin | B-LOC | O | Q64 | 0 | 89 | 592 | 368 | 398
50 | . | O | O | - | 0 | 89 | 592 | 368 | 398
51 | as | O | O | - | 0 | 89 | 592 | 368 | 398
52 | deutſche | B-LOC | O | Q1206012 | 0 | 90 | 590 | 390 | 422
53 | Reich | I-LOC | O | Q1206012 | 0 | 90 | 590 | 390 | 422
54 | und | O | O | - | 0 | 90 | 590 | 390 | 422
55 | Oeſterreich | B-LOC | O | Q28513 | 0 | 90 | 590 | 390 | 422
56 | - | I-LOC | O | Q28513 | 0 | 90 | 590 | 390 | 422
57 | Ungarn | I-LOC | O | Q28513 | 0 | 90 | 590 | 390 | 422
58 | ereahriic | O | O | - | 0 | 90 | 590 | 390 | 422
59 | 3 | O | O | - | 0 | 186 | 491 | 414 | 442
60 | wt | O | O | - | 0 | 186 | 491 | 414 | 442
61 | , | O | O | - | 0 | 186 | 491 | 414 | 442
62 | frei | O | O | - | 0 | 186 | 491 | 414 | 442
63 | ins | O | O | - | 0 | 186 | 491 | 414 | 442
64 | Haus | O | O | - | 0 | 186 | 491 | 414 | 442
65 | 9 | O | O | - | 0 | 186 | 491 | 414 | 442
66 | 4 | O | O | - | 0 | 186 | 491 | 414 | 442
67 | 50 | O | O | - | 0 | 186 | 491 | 414 | 442
68 | 5 | O | O | - | 0 | 186 | 491 | 414 | 442
69 | % | O | O | - | 0 | 186 | 491 | 414 | 442
70 | dSerantwortlicher | O | O | - | 0 | 110 | 568 | 449 | 477
71 | Nedakteur | O | O | - | 0 | 110 | 568 | 449 | 477
72 | : | O | O | - | 0 | 110 | 568 | 449 | 477
73 | S | B-PER | O | NIL | 0 | 110 | 568 | 449 | 477
74 | . | I-PER | O | NIL | 0 | 110 | 568 | 449 | 477
75 | E | I-PER | O | NIL | 0 | 110 | 568 | 449 | 477
76 | . | I-PER | O | NIL | 0 | 110 | 568 | 449 | 477
77 | Köbner | I-PER | O | NIL | 0 | 110 | 568 | 449 | 477
78 | in | O | O | - | 0 | 287 | 390 | 475 | 495
79 | Berlin | B-LOC | O | Q64 | 0 | 287 | 390 | 475 | 495
80 | Drutär | O | O | - | 0 | 133 | 538 | 506 | 536
81 | und | O | O | - | 0 | 133 | 538 | 506 | 536
82 | Verlag | O | O | - | 0 | 133 | 538 | 506 | 536
83 | der | O | O | - | 0 | 133 | 538 | 506 | 536
84 | Aktiengeſellſchaft | B-ORG | O | NIL | 0 | 133 | 538 | 506 | 536
85 | Nationalzeitung | I-ORG | O | NIL | 0 | 223 | 448 | 531 | 557
86 |  | O | O | - | 0 | 223 | 448 | 531 | 557
87 | Dritte | O | O | - | 0 | 790 | 1507 | 231 | 281
88 | ( | O | O | - | 0 | 790 | 1507 | 231 | 281
89 | Parlali | O | O | - | 0 | 790 | 1507 | 231 | 281
90 | Ausgabe | O | O | - | 0 | 790 | 1507 | 231 | 281
91 | IN | O | O | - | 0 | 1878 | 2050 | 191 | 231
92 | 8051 | O | O | - | 0 | 1878 | 2050 | 191 | 231
93 | 19392 | O | O | - | 0 | 1746 | 2181 | 237 | 295
94 | . | O | O | - | 0 | 1746 | 2181 | 237 | 295
95 | Jahrgang | O | O | - | 0 | 1746 | 2181 | 237 | 295
96 | Aeiinach | O | O | - | 0 | 1710 | 2214 | 302 | 329
97 | Schriſſanlen | O | O | - | 0 | 1710 | 2214 | 302 | 329
98 | laut | O | O | - | 0 | 1710 | 2214 | 302 | 329

End of preview.
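
The comment rows carry an IIIF Image API URL template in which the region is specified as left,top,width,height. A minimal Python sketch of how the coordinate columns could be mapped onto such a crop URL; the conversion of the right/bottom columns into width/height is an assumption, not something this datasheet specifies:

def iiif_crop_url(doc_id, left, right, top, bottom):
    # IIIF regions are given as left,top,width,height; width and height are
    # derived here from the right/bottom columns, assumed to be absolute
    # pixel coordinates of the token's bounding box
    width, height = right - left, bottom - top
    return (f"https://content.staatsbibliothek-berlin.de/zefys/{doc_id}/"
            f"{left},{top},{width},{height}/full/0/default.jpg")

# Example using the first row of the preview above
print(iiif_crop_url("SNP24340492-18990613-2-1-0-0", 154, 514, 193, 255))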

Title

ZEFYS2025: A German Dataset for Named Entity Recognition and Entity Linking for Historical Newspapers

Description

Historical newspaper collections were amongst the first materials to be scanned in order to preserve them for the future. To expand the ways in which specific types of information from digitised newspapers can be searched, explored and analysed, appropriate technologies need to be developed. Named entity recognition (NER) and entity linking (EL) are such information extraction techniques, aiming at recognising, classifying, disambiguating and linking entities that carry a name, in particular proper names. However, large annotated datasets for historical newspapers are still rare. In order to enable the training of machine learning models capable of correctly identifying named entities and linking them to authority files such as Wikidata, we provide a corpus of 100 German-language newspaper pages published between 1837 and 1940. The machine learning task for which this dataset was collected falls into the domain of token classification and, more generally, of natural language processing.

The dataset was compiled by collaborators in the research project "Mensch.Maschine.Kultur – Künstliche Intelligenz für das Digitale Kulturelle Erbe" ("Human.Machine.Culture – Artificial Intelligence for Digital Cultural Heritage") at the Staatsbibliothek zu Berlin – Berlin State Library (SBB). The research project was funded by the Federal Government Commissioner for Culture and the Media (BKM), project grant no. 2522DIG002.

Homepage

IDM 4 Data Science Named Entity Recognition Data

Publisher

Staatsbibliothek zu Berlin – Berlin State Library

Dataset Curators

Ulrike Förstel, project collaborator in the "Human.Machine.Culture" project at Staatsbibliothek zu Berlin – Berlin State Library, [email protected], ORCID: 0009-0009-5472-2336. Ulrike Förstel has studied library and information science. She participated in annotating the dataset as well as in its systematic cross-checking and multiple reviews.

Dr. Kai Labusch, project collaborator in the "Human.Machine.Culture" project at Staatsbibliothek zu Berlin – Berlin State Library, [email protected], ORCID: 0000-0002-7275-5483. Kai Labusch has studied computer science. He preprocessed the dataset with a BERT-based entity recognition and linking system. He also worked on the Named Entity Annotation tool that was used to review and correct the dataset.

Dr. Jörg Lehmann, project collaborator in the "Human.Machine.Culture" project at Staatsbibliothek zu Berlin – Berlin State Library, [email protected], ORCID: 0000-0003-1334-9693. Jörg Lehmann has studied history and comparative literature. He was responsible for annotating the dataset, preparing the data for publication and drafting the datasheet.

Clemens Neudecker, head of unit IDM4 Data Science and lead of the "Human.Machine.Culture" project, Staatsbibliothek zu Berlin – Berlin State Library, [email protected], ORCID: 0000-0001-5293-8322. Clemens Neudecker has studied philosophy, computer science and political science. He was responsible for data selection and participated in the annotation of the dataset.

Sophie Schneider, project collaborator in the "Human.Machine.Culture" project at Staatsbibliothek zu Berlin – Berlin State Library, [email protected], ORCID: 0000-0002-8303-1798. Sophie Schneider has studied library and information science. She participated in annotating the dataset and was responsible for data analysis with respect to data cleaning and descriptive statistics.

Other Contributors

Markus Bierkoch (annotations), Knut Lohse (provision of ground truth), both Staatsbibliothek zu Berlin – Berlin State Library

Point of Contact

Clemens Neudecker, Staatsbibliothek zu Berlin – Berlin State Library, [email protected], ORCID: 0000-0001-5293-8322.

Papers and/or Other References

Ehrmann, M., Watter, C., Romanello, M., Clematide, S., & Flückiger, A. (2020). Impresso Named Entity Annotation Guidelines. https://doi.org/10.5281/zenodo.3585749

Labusch, K., Neudecker, C., & Zellhöfer, D. (2019). BERT for Named Entity Recognition in Contemporary and Historical German. Proceedings of the 15th Conference on Natural Language Processing (KONVENS 2019). https://konvens.org/proceedings/2019/papers/KONVENS2019_paper_4.pdf

Labusch, K., & Neudecker, C. (2020). Named Entity Disambiguation and Linking Historic Newspaper OCR with BERT. Conference and Labs of the Evaluation Forum. https://ceur-ws.org/Vol-2696/paper_163.pdf

Labusch, K., & Neudecker, C. (2022). Entity Linking in Multilingual Newspapers and Classical Commentaries with BERT. Conference and Labs of the Evaluation Forum, 1079–1089. https://ceur-ws.org/Vol-3180/paper-85.pdf

Menzel, S., Zinck, J., & Petras, V. (2020). Guidelines for Full Text Annotations in the SoNAR (IDH) Corpus. https://doi.org/10.5281/zenodo.5115932

Named Entity Annotation Tool NEAT: https://github.com/qurator-spk/neat/

Supported Tasks and Shared Tasks

The task for which this dataset was established is the training of machine learning models capable of correctly identifying named entities and linking them to Wikidata entries. This dataset was not part of a shared task; however, a subset of it (the SoNAR data, 19 pages) was used in the HIPE 2022 shared task.

AI Category

Natural Language Processing, Knowledge Representation and Reasoning, Machine Learning

Type of Cultural Heritage Application

Natural Language Processing

(Cultural Heritage) Application Example

Named Entity Recognition, Entity Linking

Distribution

Data Access URL

https://doi.org/10.5281/zenodo.15771823

Licensing Information

Creative Commons Attribution 4.0 International (CC BY 4.0).

File Format

text/tsv, UTF-8

Citation Information

@dataset{schneider_2025_15771823,
  author       = {Schneider, Sophie and
                  Förstel, Ulrike and
                  Labusch, Kai and
                  Lehmann, Jörg and
                  Neudecker, Clemens},
  title        = {ZEFYS2025: A German Dataset for Named Entity
                   Recognition and Entity Linking for Historical
                   Newspapers
                  },
  month        = sep,
  year         = 2025,
  publisher    = {Staatsbibliothek zu Berlin - Berlin State Library},
  version      = 1,
  doi          = {10.5281/zenodo.15771823},
  url          = {https://doi.org/10.5281/zenodo.15771823},
}

Composition

Data Category

Content

Media Category

Text

Object Type

OCRed historical newspaper pages.

Dataset Structure

The complete training dataset consists of 100 OCRed historical newspaper pages whose full text has been converted into .tsv format. The first column indicates the token position and sentence splits; the beginning of a new sentence is indicated by a 0. The second column contains the tokens, one token per row (a token may be split across multiple rows due to OCR or segmentation errors). Two columns hold named entity tags at two different annotation layers: the first holds the surface tag, the second is used for embedded entities. The column ID contains Wikidata QIDs where available, and NIL otherwise. Four columns are dedicated to the page image coordinates, where available.
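
A minimal Python sketch for reading one such .tsv file into sentences; the file name page.tsv is a placeholder, and the column layout follows the preview above:

import csv

def read_sentences(path):
    """Group TSV rows into sentences; token position 0 marks a sentence start."""
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            # skip blank lines, comment lines (image URLs) and a possible header
            if len(row) < 5 or row[0].startswith("#") or not row[0].isdigit():
                continue
            no, token, ne_tag, ne_emb, ent_id = row[:5]
            if no == "0" and current:  # position 0 begins a new sentence
                sentences.append(current)
                current = []
            current.append({"token": token, "ne_tag": ne_tag,
                            "ne_emb": ne_emb, "id": ent_id})
    if current:
        sentences.append(current)
    return sentences

sentences = read_sentences("page.tsv")
print(len(sentences), sentences[0][:5])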

Data Instances

For the annotation of tokens, the IOB chunking scheme was used: each token receives one of the labels Beginning, Inside or Outside. Tokens that do not belong to an entity are tagged O (outside). B-ent (beginning-of-entity) marks the first token of an entity; I-ent (inside-of-entity) marks a token that is part of an entity but does not form its beginning. Three entity types are annotated: locations (LOC), persons (PER) and organisations (ORG). Embedded entities were annotated in the same way. Entities that can be found on Wikidata are represented by their QID, which can be prefixed with https://www.wikidata.org/wiki/ to obtain the full link to the entity. If an entity was searched for but could not be found on Wikidata, NIL was inserted.
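
As an illustration, the following sketch collapses the IOB tags of one parsed sentence (in the row format of the reader above) into entity spans and expands QIDs into full Wikidata links; joining token fragments with spaces is a simplification:

def extract_entities(sentence):
    """Collapse the IOB-tagged tokens of one sentence into entity spans."""
    entities, current = [], None
    for tok in sentence:
        tag = tok["ne_tag"]
        if tag.startswith("B-"):
            if current:
                entities.append(current)
            current = {"tokens": [tok["token"]], "type": tag[2:], "id": tok["id"]}
        elif tag.startswith("I-") and current:
            current["tokens"].append(tok["token"])
        else:  # an O tag closes any open entity
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    for ent in entities:
        # fragments of one word may occupy several rows due to OCR errors,
        # so joining with spaces is only an approximation of the surface form
        ent["surface"] = " ".join(ent["tokens"])
        # QIDs expand to full Wikidata URLs; NIL marks entities that were
        # searched for but not found on Wikidata
        ent["link"] = ("https://www.wikidata.org/wiki/" + ent["id"]
                       if ent["id"].startswith("Q") else None)
    return entities

# Example built from rows of the preview above
sentence = [
    {"token": "Berlin", "ne_tag": "B-LOC", "ne_emb": "O", "id": "Q64"},
    {"token": "Mauerſte", "ne_tag": "B-LOC", "ne_emb": "O", "id": "Q1911025"},
    {"token": ".", "ne_tag": "I-LOC", "ne_emb": "O", "id": "Q1911025"},
]
print(extract_entities(sentence))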

Data Fields

Not applicable.

Compliance with Standard(s)

The annotations in the .tsv files of the dataset conform to the ‘one word per line’ standard described in Tjong Kim Sang, E., & Meulder, F.D. (2003). Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, 142–147. https://aclanthology.org/W03-0419.pdf

The tags are encoded according to the IOB standard as described in Ramshaw, L. A., & Marcus, M. P. (1995). Text Chunking Using Transformation-Based Learning. In S. Armstrong, K. Church, P. Isabelle, S. Manzi, E. Tzoukermann, & D. Yarowsky (Eds.), Natural Language Processing Using Very Large Corpora (Vol. 11, 157–176). https://doi.org/10.1007/978-94-017-2390-9_10

Data Splits

Not applicable.

Languages

deu (ISO 639-2), de-DE (BCP-47 code)

Descriptive Statistics

The temporal span of the 100 historical newspaper pages ranges from 1837 to 1940. The whole dataset comprises 348,307 tokens. Altogether, 14,072 entities were annotated: 4,389 PER, 6,049 LOC and 3,223 ORG entities. These absolute numbers were determined by counting the B-tags. The number of distinct entities amounts to about 8,000. The dataset comprises 10,341 links to Wikidata. This German-language dataset is characterised by the historical context of the period from which it was collected; entities like "Berlin", "Paris", "London", "Deutschland", "England", "Preußen" and "Rußland" are therefore amongst the most frequent LOC entities, "Beethoven", "Bismarck" and "Poincaré" amongst the most frequent PER entities, and "Reichstag", "Reichsregierung" and the German political party "Zentrum" amongst the most frequent ORG entities. Though the newspapers were selected to represent a broad range of types and content, there is an emphasis on financial papers due to correspondingly themed digitisation projects: 21 pages, about one fifth of the whole corpus, were taken from the Berliner Börsen-Zeitung (published between 1872 and 1930), and 17 pages from the Berliner Tageblatt und Handels-Zeitung (published between 1877 and 1934). This emphasis on financial papers clearly constitutes a bias. The file size of ZEFYS2025.zip is 2 MB (2,064,384 bytes).
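
These counts can be reproduced by counting the B-tags over all files; a small sketch, assuming the unpacked .tsv files reside in a directory named ZEFYS2025:

import csv
from collections import Counter
from pathlib import Path

counts = Counter()
for path in sorted(Path("ZEFYS2025").glob("*.tsv")):  # assumed unpack location
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) > 2 and row[2].startswith("B-"):  # NE-TAG column
                counts[row[2][2:]] += 1                   # PER / LOC / ORG
print(counts)  # should reproduce the PER/LOC/ORG figures above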

Data Collection Process

Curation Rationale

The creation of this dataset was motivated by the intention to provide a large corpus (> 100k tokens) of German-language historical sources, including named entity tags as well as links to corresponding knowledge base entries (where applicable), for the purpose of historical NER/EL. Alternative datasets with similar characteristics are the CoNLL-2003, GermEval 2014 and NewsEye 2021 datasets. See

Tjong Kim Sang, E., & Meulder, F.D. (2003). Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. Conference on Computational Natural Language Learning (CoNLL 2003). https://doi.org/10.3115/1119176.1119195.

Benikova, D., Biemann, C., & Reznicek, M. (2014). NoSta-D Named Entity Annotation for German: Guidelines and Dataset. Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14). http://www.lrec-conf.org/proceedings/lrec2014/pdf/276_Paper.pdf.

Hamdi, A., Linhares Pontes, E., Boros, E., Nguyen, T.T.H., Hackl, G., Moreno, J.G., & Doucet, A. (2021). A Multilingual Dataset for Named Entity Recognition, Entity Linking and Stance Detection in Historical Newspapers. Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '21). https://doi.org/10.1145/3404835.3463255.

Source Data

Initial Data Collection

The 100 historical newspaper pages were selected from ZEFYS (short for ZEitungsinFormationssYStem), the newspaper information system of Berlin State Library, with the aim of providing a sufficiently large and homogeneously annotated dataset. For ZEFYS2025, pre-existing named entity datasets from two distinct projects were combined with additional newly annotated newspaper pages into a single coherent resource: Europeana Newspapers (cf. Neudecker, C. (2016). An Open Corpus for Named Entity Recognition in Historic Newspapers. Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), 4348–4352, Portorož, Slovenia. European Language Resources Association (ELRA). https://aclanthology.org/L16-1689) and SoNAR (cf. Menzel, S., Schnaitter, H., Zinck, J., Petras, V., Neudecker, C., Labusch, K., Leitner, E., & Rehm, G. (2021). Named Entity Linking mit Wikidata und GND – Das Potenzial handkuratierter und strukturierter Datenquellen für die semantische Anreicherung von Volltexten. In M. Franke-Maier, A. Kasprzik, A. Ledl, & H. Schürmann (Eds.), Qualität in der Inhaltserschließung, Berlin, Boston: De Gruyter Saur, 229–258. https://doi.org/10.1515/9783110691597-012). Of the 100 pages, 84 were produced automatically by OCR. No normalisation or modernisation of the OCR output was performed; however, post-OCR correction was applied to the named entities in the dataset in order to present them in a normalised, consistent form. The remaining 16 pages were annotated on the basis of manually transcribed ground truth.

Source Data Producers

The source data were produced by Staatsbibliothek zu Berlin – Berlin State Library within the continuous process of digitising historical newspapers and providing full texts of selected newspapers.

Digitisation Pipeline

All of the newspapers were digitised for presentation in ZEFYS, the newspaper information system of Berlin State Library. In most cases, conservation and preservation motivated the digitisation. Full texts are provided for only a part of the newspapers presented in ZEFYS. The 100 newspaper pages that form the dataset are therefore a selection from this subset of ZEFYS newspapers with available full text.

Preprocessing and Cleaning

The output of the OCR process performed for the provision of the newspapers in ZEFYS was transformed into .tsv format using the page2tsv tool (https://github.com/qurator-spk/page2tsv); tokenisation and sentence splitting were performed with SoMaJo (https://github.com/tsproisl/SoMaJo). For initial entity recognition and linking, the tool described in Labusch, K., & Neudecker, C. (2022). Entity Linking in Multilingual Newspapers and Classical Commentaries with BERT (https://ceur-ws.org/Vol-3180/paper-85.pdf) was used.
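
A minimal sketch of the tokenisation and sentence-splitting step with SoMaJo; the concrete settings used for ZEFYS2025 are not documented here, so the parameters below are assumptions:

from somajo import SoMaJo

# German tokenizer with sentence splitting; "de_CMC" is SoMaJo's German model
tokenizer = SoMaJo("de_CMC", split_sentences=True)
paragraphs = ["Druck und Verlag der Aktiengesellschaft Nationalzeitung in Berlin."]
for sentence in tokenizer.tokenize_text(paragraphs):
    print([token.text for token in sentence])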

Annotations

Annotation Process

After the preprocessing described above, human annotators added annotations and entity links or corrected the suggestions of the automatic entity recognition and linking system. The annotation was done with neat (named entity annotation tool), developed by one of the project collaborators. Three entity types (persons, locations, organisations) were annotated. Nested named entities were allowed up to a depth of one, but nested entities are not considered in the entity linking annotations. Annotation mainly followed the guidelines developed for the Impresso project; see Ehrmann, M., Watter, C., Romanello, M., Clematide, S., & Flückiger, A. (2020). Impresso Named Entity Annotation Guidelines. https://doi.org/10.5281/zenodo.3585749.

Using the annotation tool, all automatically generated annotations were intellectually checked and revised by a group of German native speakers. This included verifying the previously assigned NE tags as well as adding or deleting tags where needed. For the NEs confirmed in this step, existing links were checked for the most precise and correct linking option, and missing links to entities were inserted where available. Ambiguous or uncertain cases arising during annotation were flagged for later discussion with an extra NE-TAG class TODO. Consensus on challenging cases was reached in regular discussion meetings, and the set of instructions was expanded iteratively, adding further rules or examples when deemed necessary.

Since the revision was carried out by only one expert per page, it was not possible to calculate inter-annotator agreement or similar measures of consistency between multiple annotators. Instead, we employed computational methods to help localise and reduce inconsistencies within the extensively annotated dataset. These automated analyses identified recurring tokens with a high divergence in how they were transcribed, tagged or linked. In additional correction loops, the flagged inconsistencies were systematically reviewed once more by the annotators.
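
A sketch of such a consistency check with pandas; the project's actual analysis scripts are not published with this datasheet, so the file name and thresholds below are purely illustrative:

import pandas as pd

# Concatenated rows of all pages; column names follow the preview above
df = pd.read_csv("all_pages.tsv", sep="\t", comment="#", dtype=str,
                 names=["No.", "TOKEN", "NE-TAG", "NE-EMB", "ID",
                        "url_id", "left", "right", "top", "bottom"])

# For every recurring token, count how many distinct tags and links it
# received; high divergence flags candidates for another correction loop
divergence = (
    df.groupby("TOKEN")
      .agg(freq=("TOKEN", "size"),
           n_tags=("NE-TAG", "nunique"),
           n_links=("ID", "nunique"))
      .query("freq >= 10 and (n_tags > 1 or n_links > 1)")
      .sort_values("freq", ascending=False)
)
print(divergence.head(20))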

Annotators

During the time when the annotations took place, all annotators were employed at Staatsbibliothek zu Berlin – Berlin State Library. Socio-demographic information on the annotators is not available, but see the information provided above in the section "Dataset curators".

Crowd Labour

Not applicable.

Data Provenance

All 100 newspaper pages were selected from ZEFYS, the newspaper information system of Berlin State Library. This portal provides free access to historical newspapers held and digitised by the Berlin State Library up to 1945 under a public domain license (Public Domain Mark, PDM). The newspapers of the "DDR-Presse" project are an exception, as specific rights and access restrictions apply to them; therefore, no newspapers from the DDR-Presse project were included in this dataset.

Use of Linked Open Data, Controlled Vocabulary, Multilingual Ontologies/Taxonomies

Where possible, named entities were linked to Wikidata entries. Beyond this knowledge graph, no other controlled vocabularies or multilingual ontologies were used during the creation of the dataset.

Version Information

There is no previous version of this dataset.

Release Date

2025-09-10

Date of Modification

Not applicable.

Checksums

MD5 checksum of the ZEFYS2025.zip:

76f0948086b3923ef751e1fd3a1805a1

SHA256 checksum of the ZEFYS2025.zip:

f1a0d931f4ba4fc3c3330df26bcde33354df3c223d2e0af8d633c912c7a996ea
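
A downloaded copy can be verified against these checksums, for example in Python:

import hashlib

def checksum(path, algo):
    """Compute the hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

assert checksum("ZEFYS2025.zip", "md5") == "76f0948086b3923ef751e1fd3a1805a1"
assert checksum("ZEFYS2025.zip", "sha256") == (
    "f1a0d931f4ba4fc3c3330df26bcde33354df3c223d2e0af8d633c912c7a996ea")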

Maintenance Plan

Maintenance Level

Actively Maintained – This dataset will be actively maintained, including but not limited to updates to the data.

Update Periodicity

Only a part of the 100 .tsv files (about one third) contains coordinates locating the tokens on the page facsimiles. We plan to add these coordinates for all 100 newspaper pages in future work.

Examples and Considerations for Using the Data

This dataset was established for the training of machine learning models capable of correctly identifying named entities and linking them to Wikidata entries. Named entity recognition and entity linking are conceived here as information extraction techniques for historical newspapers in particular; however, models trained on this dataset might also work well on other digital assets from a comparable time span.

Ethical Considerations

Personal and Other Sensitive Information

The dataset does not contain personal or sensitive information beyond what is publicly available on Wikidata. It does not contain any sensitive data in the sense of contemporary privacy laws.

Discussion of Biases

As the 100 newspaper pages were more or less randomly selected from the digitised newspaper collections of Berlin State Library, the emphasis of this digital collection can itself be regarded as a bias: the focus is on newspapers published in Prussia. The historical worldviews and preferences reflected in these newspapers between 1837 and 1940 can clearly be understood as biases mirroring the role of Prussia as a great European power and, after 1870/71, the core of the German Empire. Given the long time frame, however, the dataset also reflects linguistic change as well as shifting preferences as to what is newsworthy and should therefore be reported in newspapers. This linguistic change and shift of focus is especially evident in the newspapers printed during the Weimar Republic.

Potential Societal Impact of Using the Dataset

This dataset identifies and links persons, locations and organisations mentioned in German-language newspapers published before 1940. The societal impact of the dataset is therefore most probably very low. However, advancing information extraction techniques facilitates the creation of knowledge and furthers research as well as the discovery of new sources.

Examples of Datasets, Publications and Models that (re-)use the Dataset

The dataset has been used to fine-tune and evaluate various models on the NER downstream task (see https://github.com/qurator-spk/sbb_ner_hf). The data selection and annotation process as well as the results of model training and evaluation have been published as a contribution to the KONVENS 2025 Conference on Natural Language Processing.

Known Non-Ethical Limitations

In general, models pretrained on historical German data perform better on historical datasets, whereas models pretrained on contemporary data perform better on contemporary datasets.

Unanticipated Uses made of this Dataset

Not applicable.

Datasheet as of September 10th, 2025
