JohanHeinsen committed · Commit 66204ae · 1 Parent(s): 6aab189

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -14,6 +14,7 @@ pretty_name: Enevældens Nyheder Online
  ENO is a dataset of texts from Danish and Norwegian newspapers during the period of constitutional absolutism in Denmark (1660–1849). In the course of the eighteenth century, newspapers became an everyday medium. They informed a relatively large reading public about everything from high politics to the mundanities of local markets.
  The dataset was created by re-processing over 550,000 digital images scanned from microfilm and held in the Royal Danish Library's collection. They had initially been OCR-processed, but the results were generally unreadable. ENO re-processed the images using tailored PyLaia models in Transkribus. The OCR quality is generally high, despite the difficult state of the original images.
  The newspaper editions have been segmented into individual texts using a model designed by the project team. These texts are the base entity of the dataset. They comprise mainly two genres: news items and advertisements.
+
  The dataset is made up of 4.8 million texts amounting to about 465 million words.

  ## Dataset Details