# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).


## [v1.2.9] - 2025-08-05

### Docs

- Average document length is now reported in tokens instead of characters
- Added a visualization for checking document length in sub-datasets
- Changes to `*/descriptive_stats.json`:
  - The object no longer includes the revision.
  - It now includes character-level metrics along with minimum and maximum length. Removed average document length, as it is computable from the existing metrics.
- Removed per-dataset histograms from the main readme. The goal is to avoid loading the entire dataset when updating the readme, which should make contributing easier.
- Simplified the PR workflow in `contributing.md`

### CI
- Fixed a bug that caused `make update-descriptive-stats` to fail on a non-linear commit history. The script now skips a dataset update based on revision, but only if the `descriptive_stats.json` file does not exist. To ensure that the main readme is always up to date, the make command now always updates it.

## [v1.2.8] - 2025-08-05

### Added

- Added dataset: Enevældens Nyheder Online (`enevaeldens_nyheder`). This brings us to >5B tokens!

## [v1.2.7] - 2025-07-22

### Added

- Added dataset: Grundtvig's Works (`grundtvig`)
- Added bias and risk section to the README

## [v1.2.6] - 2025-07-21

### Added

- Added two tables giving an overview of the data by license and by domain

### Changed

- The dataset overview table now appears in a dropdown menu

## [v1.2.5] - 2025-07-08

### Added

- Added the `domsdatabasen` dataset.

## [v1.2.4] - 2025-07-08

### Added

- Added a plot of tokens over time to show how the dataset develops
- Minor documentation improvements in the main readme

### Changed

- Renamed `scrape_hovedstaden` to `health_hovedstaden` to avoid confusion with its pretty name

## [v1.2.3] - 2025-06-30

### Added

- Added a `create.py` script for the `retsinformationdk` dataset.
  - This resulted in a boost in tokens and documents

### Changed

- Performed a full stats update on the datasets, resulting in minor changes in a few datasheets

## [v1.2.2] - 2025-06-26

### Added

- Added the new `scrape_hovedstaden` dataset.
- Added a new domain type `Medical`.

## [v1.2.1] - 2025-06-24

### Fixed

- Updated the danske-taler dataset. This version fixes a problem where texts from the API contained no newlines; where a newline should have appeared, there is now a space between words and punctuation.

## [v1.2.0] - 2025-06-23

### Fixed

- Updated the memo dataset. This second version fixes previous [issues](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67) with the download and processing of the Danish Memo, which cut off the text and led to notably smaller documents.

## [v1.1.1] - 2025-06-16

### Added

- Added tests to ensure that single-token documents do not appear in the data. This filtered out 0 documents in total.

## [v1.1.0] - 2025-04-29

### Added

- Added multiple quality controls
  - Removed all empty strings
  - Removed duplicates within datasets
- Restructured datasets
  - Removed columns from the dataset to make the structure more lightweight; these include domain, metadata, and license. These have been moved to the individual datasheets. It is still possible to filter by license using the dataset name
  - Added a column for the number of tokens
- For developers
  - Substantially restructured the CI codebase
    - Added `DataSheet` to make CI more convenient
    - Factored out plots and tables

### Docs

- Sorted overview table
- Minor changes to dataset documentation


## [v1.0.12] - 2025-05-08

### Added

- Added new datasets
  - Norwegian Colossal Corpus (newspapers) (~191.08K tokens)
  - Norwegian Colossal Corpus (books) (~531.97M tokens)
  - Norwegian Colossal Corpus (maalfrid) (~29.26M tokens)
  - Norwegian Colossal Corpus (parliament) (~338.87M tokens)

## [v1.0.11] - 2025-03-29

### Added

- Added new datasets (more than 1B tokens 🎉)
  - AI Aktindsigt
  - Cellar
  - Danske Taler
  - Miljøportalen
  - EUR-Lex SUM
  - Finansministeriets Udgivelser

### Docs

- Sorted main table in readme
- Added Changelog
- Minor changes to dataset documentation