JohanHeinsen committed on
Commit edb3857 · verified · 1 Parent(s): 09a71b6

Update README.md


Updated to reflect the state of 1.0

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -15,7 +15,7 @@ ENO is a dataset of texts from Danish and Norwegian newspapers during the period
15
  The dataset was created by re-processing over 550.000 digital images scanned from microfilm and held in the Danish Royal Library's collection. They had initially been OCR-processed, but the results were generally unreadable. ENO re-processed the images using tailored pylaia-models in Transkribus. The OCR-quality is generally high, despite the difficult state of the original images.
16
  The newspapers editions have been segmented into individual texts using a model designed by the project team. Such texts are the base entity of the dataset. They include mainly two genres: news items and advertisements.
17
 
18
- The dataset is made up of 4.8 million texts amounting to about 465 million words.
19
 
20
  ## Dataset Details
21
 
@@ -66,13 +66,13 @@ As a rule of thumb, publications have been digitised in total – as they exist
66
 
67
  Most publications contain title page marginalia (date, title etc.). Because these were set with large ornamental types, they are typically recognised with much less accuracy than the regular text. We are currently working on implementing a step in the workflow to identify and filter out these elements.
68
 
69
- The coverage of the newspapers included can be seen here:
70
 
71
- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/67eacf9aaeab4ce97db916d9/55C_06HgNc4h1NCAzfs57.jpeg)
72
 
73
  The distribution of texts pr. year is as follows:
74
 
75
- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/67eacf9aaeab4ce97db916d9/bfc9G7IjMGnF7diM-UXGL.jpeg)
76
 
77
  ### Data Collection and Processing.
78
 
 
15
  The dataset was created by re-processing over 550.000 digital images scanned from microfilm and held in the Danish Royal Library's collection. They had initially been OCR-processed, but the results were generally unreadable. ENO re-processed the images using tailored pylaia-models in Transkribus. The OCR-quality is generally high, despite the difficult state of the original images.
16
  The newspapers editions have been segmented into individual texts using a model designed by the project team. Such texts are the base entity of the dataset. They include mainly two genres: news items and advertisements.
17
 
18
+ The dataset is made up of 4.9 million texts amounting to about 474 million words.
19
 
20
  ## Dataset Details
21
 
 
66
 
67
  Most publications contain title page marginalia (date, title etc.). Because these were set with large ornamental types, they are typically recognised with much less accuracy than the regular text. We are currently working on implementing a step in the workflow to identify and filter out these elements.
68
 
69
+ The coverage of the newspapers included at present can be seen here:
70
 
71
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/67eacf9aaeab4ce97db916d9/DdDJStUyCp_McPS7Kwbdk.jpeg)
72
 
73
  The distribution of texts pr. year is as follows:
74
 
75
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/67eacf9aaeab4ce97db916d9/SKJwusapbU534xbUKa6LL.jpeg)
76
 
77
  ### Data Collection and Processing.
78
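
As a quick sanity check of the updated headline figures (4.9 million texts, about 474 million words), the implied average text length can be computed directly; the figures below are taken from the commit, and the average is an illustrative derivation, not a number stated in the dataset card:

```python
# Figures from the 1.0 dataset card; the average is derived for illustration.
n_texts = 4_900_000        # "4.9 million texts"
n_words = 474_000_000      # "about 474 million words"

avg_words_per_text = n_words / n_texts
print(f"average words per text: {avg_words_per_text:.1f}")  # ≈ 96.7
```

This is consistent with the dataset's base entities being short items such as news notices and advertisements.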