## Working with the dataset locally

A Hugging Face datasets repository is a git repository like any other. You can simply clone it like so:

```bash
git clone https://huggingface.co/datasets/danish-foundation-models/danish-dynaword
cd danish-dynaword
git lfs pull # download large files to ensure that the tests work
```

You can then work with the dataset locally like so:

```py
from datasets import load_dataset

name = "../."  # instead of "danish-foundation-models/danish-dynaword"
dataset = load_dataset("../.", split="train")
# make transformations here
```

> Note: Even when the dataset is loaded locally, Hugging Face still uses a cache. After making changes, you might therefore need to reset the cache to see that they work correctly. You can do this by deleting the cached files, which you can locate using `dataset.cache_files`.
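A minimal sketch of clearing the cache from Python, using the `datasets` library's `cleanup_cache_files` method:

```py
from datasets import load_dataset

dataset = load_dataset("../.", split="train")
print(dataset.cache_files)  # locate the cached Arrow files backing this dataset

# delete the cache files so the next load_dataset call rebuilds the dataset
n_removed = dataset.cleanup_cache_files()
print(f"Removed {n_removed} cache file(s)")
```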

## Adding a new dataset

To add a new dataset you will have to create a folder under `data/{dataset_name}/`, which should look as follows:

```
  data/dataset_name
  |- dataset_name.md
  |- dataset_name.parquet
  |- create.py               # optional
```

The `create.py` file is an optional Python script that allows you to recreate the dataset from its source. This typically makes it possible to reproduce the dataset with fixes or to update it to the latest version using an API.
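As a rough sketch (the column names and the inline data below are assumptions for illustration; match the schema used by the existing datasets in this repository), a `create.py` could look like this, with its dependencies declared inline as required by the checklist below:

```py
# /// script
# requires-python = ">=3.12"
# dependencies = ["pandas", "pyarrow"]
# ///
"""A hypothetical create.py: fetch the source data and write it as parquet."""
import pandas as pd


def main() -> None:
    # Replace this with the real data source (e.g. an API call or a download).
    rows = [
        {"id": "dataset_name_1", "text": "Et eksempel på dansk tekst.", "source": "dataset_name"},
    ]
    pd.DataFrame(rows).to_parquet("dataset_name.parquet", index=False)


if __name__ == "__main__":
    main()
```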

## Installing dependencies

This repo comes with a few dependencies you need to install to make it run. It uses a [makefile](https://opensource.com/article/18/8/what-how-makefile) to run commands and [uv](https://docs.astral.sh/uv/) for package management. Once you have uv installed, you can install the dependencies using:

```bash
make install
```

## Running dataset tests

This dataset is special in that it comes with a test suite, e.g. testing that the IDs are unique and that the format is consistent. You can run the suite using:

```bash
make test
```
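As an illustration of the kind of check the suite performs, a uniqueness test might look roughly like this (a sketch only; the column name `id` is an assumption, and the real tests live in the repository's test files):

```py
from datasets import load_dataset


def test_ids_are_unique():
    # assumes the repository root as the dataset path and an "id" column
    ds = load_dataset("../.", split="train")
    assert len(set(ds["id"])) == len(ds)
```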

## Submitting a PR

Creating a PR on Hugging Face is a bit different from creating one on GitHub.

1) Go to the community tab on Hugging Face, press *New pull request*, and choose *On your machine*. Specify the title of your PR. Then you can simply:

```bash
git checkout -b {new branch name}
# make your changes here 

# push to hub
# you might need to first login:
# huggingface-cli login
git push origin HEAD:refs/pr/{PR NUMBER}
```
where `HEAD` refers to the current branch.

Before you submit the PR, be sure that you have completed the checklist below.

### Making changes to an existing PR

As a contributor you might need to develop on an existing branch. To do so, fetch and check out the existing PR branch:
```bash
# fetch and checkout existing branch:
git fetch origin refs/pr/{PR NUMBER}:pr/{PR NUMBER}
git checkout pr/{PR NUMBER}
# make your changes here

# push changes back to the PR
git push origin HEAD:refs/pr/{PR NUMBER}
```

### Checklist

- [ ] I have run the test suite using `make test` and all tests pass
- [ ] I have added/changed a dataset:
  - [ ] I have updated descriptive statistics using `make update-descriptive-statistics`
  - [ ] I have bumped the version using `make bump-version`
- [ ] If I have added a `create.py` script I have added the [script dependencies](https://docs.astral.sh/uv/guides/scripts/#declaring-script-dependencies) required to run that script.
- [ ] I have updated the CHANGELOG.md if appropriate 


### Examples of Previous PRs
To see example PRs, take a look at the following:

- [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/11)
- [Adding a new dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/15)
- [Updating the dataset description and metadata](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/20)

## Frequently asked questions

### Do you accept synthetic datasets

Yes, we generally accept synthetic datasets, as they are likely a promising research direction for low- to mid-resource languages.
However, you should be aware that a synthetic dataset will probably require a more detailed examination and description.
For instance, we will examine the quality of the synthetic subset and whether the model used for its creation permits resharing of the synthetic data under permissible licenses.

### Do you accept non-Danish data

Generally, this repository is intended for Danish text, though defined quite broadly. For instance, we do accept data containing [code-switching](https://www.google.com/search?client=safari&rls=en&q=code+switching&ie=UTF-8&oe=UTF-8) and historical Danish text.