---
task_categories:
- fill-mask
language:
- en
tags:
- medical
pretty_name: MentalReddit
size_categories:
- 1M<n<10M
---

# MentalReddit

This dataset, `dlb/mentalreddit`, was created by the DeepLearningBrasil team to pre-train their `MentalBERTa` model, which took first place in the [DepSign-LT-EDI@RANLP-2023 shared task](https://aclanthology.org/2023.ltedi-1.42/) on classifying social media texts into three levels of depression.

## Dataset Description

The MentalReddit dataset is a large collection of English-language comments sourced from Reddit, curated as a resource for studying mental health discourse alongside general language patterns. The dataset is composed of two main parts:

* **Mental health-related subreddits:** 3.4 million comments from communities focused on mental health topics.
* **General subreddits:** 3.2 million comments from a variety of non-depression-related subreddits, providing a broad base of general language.

In total, the dataset contains approximately 7.31 million comments and occupies about 1.4 GB of disk space.

## Data Fields

Each example consists of the following fields:

* `body`: the text content of the Reddit comment.
* `subreddit`: the name of the subreddit from which the comment was sourced.
* `id`: a unique identifier for the comment.

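As a quick illustration of this schema, here is a minimal sketch using `pandas` with a few synthetic rows (the subreddit names, comment texts, and IDs below are made up for the example; real rows would come from the dataset's CSV files):

```python
import pandas as pd

# Synthetic rows mirroring the dataset schema (id, subreddit, body);
# these values are illustrative only, not taken from the real data.
df = pd.DataFrame(
    {
        "id": ["c1", "c2", "c3"],
        "subreddit": ["depression", "askreddit", "anxiety"],
        "body": ["first comment", "second comment", "third comment"],
    }
)

# Example task: keep only comments from a chosen set of
# mental health-related subreddits.
mental_health = {"depression", "anxiety"}
subset = df[df["subreddit"].isin(mental_health)]
print(len(subset))  # 2
```

The same filtering pattern applies to the full dataset once the CSV files are loaded, e.g. to separate the mental health-related portion from the general-subreddit portion.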
## Usage

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("dlb/mentalreddit")
```