Dataset Viewer
Auto-converted to Parquet
paper_url
stringlengths
42
44
paper_id
stringlengths
10
12
arxiv_link
stringlengths
32
32
reviews
list
latex
stringlengths
15.5k
101k
https://openreview.net/forum?id=VvRbhkiAwR
VvRbhkiAwR
https://arxiv.org/abs/2008.12172
[ { "cdate": 1594023893979, "content": { "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "nominate_for_a_reproducibility_award": null, "rating": "6: Marginally above acceptance threshold", "review": "The authors present an interesting, important and relevant trend analysis of sentiment across languages in several locales during the Covid-19 pandemic, using geo-tagged European Twitter data and pre-trained cross-lingual embeddings within a neural model.\n\nThe main contributions of the paper are: 1) the geo-tagged European Twitter dataset of 4.6 million tweets between Dec 2019 and Apr 2020, where some of these contain Covid19-specific keywords (it would be nice to see some percentage breakdown stats by language here), and 2) the important trends by country in terms of dip and recovery of sentiment over this period, including the overall trends across the board.\n\nIn terms of sentiment modeling, they use a pre-trained neural model trained on the Sentiment140 dataset of Go et al, which is English-only, hence they freeze the weights to prevent over-adapting to English. They use cross-lingual MUSE embeddings to train this network to better generalize sentiment prediction to multi-lingual data for each country. There is no novelty in the modeling approach itself, which works for the purposes of trend analysis being performed. However, there is no comparison being presented of results of experimentation with different approaches, to corroborate or contrast their current trends results. E.g. a simple baseline approach could have been to run Average and Polarity sentiment values using a standard python text processing package such as `textblob` to obtain sentiment predictions. Other experiments could have been done to use different pre-trained embeddings such regular GloVE or Multi-lingual BERT to provide a comparison or take the average of the approaches to get a more generalized picture of sentiment trends. Also the authors should make it clear that the model has really been used in perhaps inference mode only to obtain the final sentiment predictions for each tweet.\n\nThe treemap visualization gives a good overall picture of tweet stats, but a table providing the individual dataset statistics including keywords chosen by locale would be really helpful.\n\nSome notable trends are how the sentiment generally dips in all locales right around the time of lockdown announcements, and recovers relatively soon after, except for Germany where it dips at the same time as neighboring countries despite lockdown being started here much later, and UK, where sentiment stays low. It is also interesting to note the spikes and fluctuations in Covid19-related sentiment for Spain, and the overall trend for average sentiment by country for \"all\" tweets (including Covid19-related ones) tracking similarly over the time period considered.\n\nHowever, one trend it would be good to see some discussion on is how the histogram of keywords correlate with the sentiment for the keyworded tweets, as it appears interesting that heightened use of Covid-19 keywords in tweets tracks with more positive sentiment in most of the plots. Perhaps it would be helpful to have a separate discussion section for the overall trend analysis at the end.\n\nOverall the paper is well-motivated and in its current form provides perhaps the intended insights, and presents lot of scope to perform useful extended analyses with more meaningful comparisons for additional time spans and across countries where governmental and societal response were different than in Europe. Perhaps the authors could consider a more interpretable predictive sentiment model in future with some hand-crafted features such as geotag metadata, unigram and bi-gram features, binary features for government measures, and Covid19-specific keyword features by locale, which could provide more insight into why sentiment predictions trend a certain way during a specific period for a given locale.\n\n\n\n", "reviews_visibility": null, "title": "Review of \"Cross-language sentiment analysis of European Twitter messages\" -- interesting trends analysis but some more approach comparisons and tables for the data would be good." }, "ddate": null, "forum": "VvRbhkiAwR", "id": "1cp_MEsz_cI", "invitation": "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper25/-/Official_Review", "mdate": null, "nonreaders": [], "number": 3, "original": null, "readers": [ "everyone" ], "replyto": "VvRbhkiAwR", "signatures": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper25/AnonReviewer2" ], "tcdate": 1594023893979, "tddate": null, "tmdate": 1594023893979, "writers": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper25/AnonReviewer2" ] }, { "cdate": 1593834884476, "content": { "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "nominate_for_a_reproducibility_award": null, "rating": "6: Marginally above acceptance threshold", "review": "The authors carried on a deep learning pipeline to analyze the sentiment of Twitter texts, and propose a complete research. The presentation and language part of this submission is good.\n\n However, the research mainly use the routine DL methodology and the analysis method is not contributive. In general, the novelty and contribution of this research do not reach the level of publication as a ACL workshop paper. Here comes some comments and suggestions.\n\n1. The data statistics is missing. Though we found a rough number list in Figure 1, they are not quite clear. Data with time series info are also welcomed. Furthermore, several python packages help to draw Europe Map, and might make this part more vivid.\n2. It is better to provide a figure to explain the structure of the network. The authors surely already gave some details in page 2, including the input layer, activation function info. The hyper parameter of the network could also be provided.\n3. It is lacking of comparison of the current NN with some other NN structure. How would one single experiment derive convincing result without baseline methods or intrinsic evaluation? This is a core question I would like to raise here for this research.\n4. I am thinking of a possibility of splitting the Twitter data in terms of weeks, and take time series consideration into the current research paradigm. A sentiment-time curve plot might lead to some instructive hypothesis, if the research take a more sophisticated experiment design.\n", "reviews_visibility": null, "title": "Review on \"Cross-language sentiment analysis of European Twitter messages during the COVID-19 pandemic\"" }, "ddate": null, "forum": "VvRbhkiAwR", "id": "sr6Pz8LrNmZ", "invitation": "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper25/-/Official_Review", "mdate": null, "nonreaders": [], "number": 2, "original": null, "readers": [ "everyone" ], "replyto": "VvRbhkiAwR", "signatures": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper25/AnonReviewer1" ], "tcdate": 1593834884476, "tddate": null, "tmdate": 1593834884476, "writers": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper25/AnonReviewer1" ] }, { "cdate": 1593827509247, "content": { "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "nominate_for_a_reproducibility_award": null, "rating": "6: Marginally above acceptance threshold", "review": "This is a mostly well-written overview of an exercise to assign a sentiment label to the European-country generated tweets during the period December’19-May’20. \n\nThe authors describe how they differentiate and identify the country, how they assign the sentiment level (positive, neutral, negative), how they use emojis, and how they use the deep learning neural model which presumably can adjust this label assignment regardless of what language the tweet is originally written. The authors report a 0.82 accuracy of their system. The rest of the paper is a recognition of the limitations, and a description and plotting of the sentiment level for various European countries. \n\nUnfortunately, these results do not contribute to adding new knowledge. The study could use more work. \n\nSuggestions: \n\nCould the authors provide a breakdown by language of the tweets that they process? Are we to assume that all tweet originated from Italy are in Italian and those originating in Germany are in German? \n\nIs this data publicly available?\n\nHas the 0.82 accuracy been manually validated? Is there a difference in accuracy depending on the language? The authors claim that one of the contributions of their study is this tagged dataset (geotagged, and sentiment-tagged). It seems there is no further evaluation on how well the tagging has been applied. \n\nAnd while it is visibly clear that we see a global fall in sentiment that correlates with governments issuing lock-down protective measures, and this result could be a start that this labelling of the data is good, is there anything else we can say, is there any other way we can analyze this data and identify common topics in the similar sentiment groups? Something that can be actually useful to the COVID-19 researchers…\n", "reviews_visibility": null, "title": "Review" }, "ddate": null, "forum": "VvRbhkiAwR", "id": "kAPKEJYRLfM", "invitation": "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper25/-/Official_Review", "mdate": null, "nonreaders": [], "number": 1, "original": null, "readers": [ "everyone" ], "replyto": "VvRbhkiAwR", "signatures": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper25/AnonReviewer3" ], "tcdate": 1593827509247, "tddate": null, "tmdate": 1593827702431, "writers": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper25/AnonReviewer3" ] } ]
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2020} \usepackage{times} \usepackage{latexsym} \renewcommand{\UrlFont}{\ttfamily\small} \usepackage{graphicx} \usepackage{caption} \usepackage{url} \usepackage[utf8]{inputenc} \usepackage{microtype} \aclfinalcopy % \setlength\titlebox{10cm} \newcommand\BibTeX{B\textsc{ib}\TeX} \title{Cross-language sentiment analysis of European Twitter messages during the COVID-19 pandemic} \author{Anna Kruspe \\ German Aerospace Center (DLR) \\ Institute of Data Science \\ Jena, Germany \\ \texttt{[email protected]} \\\And Matthias H\"aberle \\ Technical University of Munich (TUM) \\ Signal Processing in Earth Observation (SiPEO) \\ Munich, Germany \\ \texttt{[email protected]} \\\AND Iona Kuhn \\ German Aerospace Center (DLR) \\ Institute of Data Science \\ Jena, Germany \\ \texttt{[email protected]} \\\And Xiao Xiang Zhu \\ German Aerospace Center (DLR) \\ Remote Sensing Technology Institute (IMF) \\ Oberpfaffenhofen, Germany \\ \texttt{[email protected]}} \date{} \begin{document} \maketitle \begin{abstract} Social media data can be a very salient source of information during crises. User-generated messages provide a window into people's minds during such times, allowing us insights about their moods and opinions. Due to the vast amounts of such messages, a large-scale analysis of population-wide developments becomes possible.\\ In this paper, we analyze Twitter messages (tweets) collected during the first months of the COVID-19 pandemic in Europe with regard to their sentiment. This is implemented with a neural network for sentiment analysis using multilingual sentence embeddings. We separate the results by country of origin, and correlate their temporal development with events in those countries. This allows us to study the effect of the situation on people's moods. We see, for example, that lockdown announcements correlate with a deterioration of mood in almost all surveyed countries, which recovers within a short time span. \end{abstract} \section{Introduction} The COVID-19 pandemic has led to a worldwide situation with a large number of unknowns. Many heretofore unseen events occurred within a short time span, and governments have had to make quick decisions for containing the spread of the disease. Due to the extreme novelty of the situation, the outcomes of many of these events have not been studied well so far. This is true with regards to their medical effect, as well as the effect on people's perceptions and moods.\\ First studies about the effect the pandemic has on people's lives are being published at the moment \citep[e.g.][]{uni_erfurt}, mainly focusing on surveys and polls. Naturally, such studies are limited to relatively small numbers of participants and focus on specific regions (e.g. countries).\\ In contrast, social media provides a large amount of user-created messages reflective of those users' moods and opinions. The issue with this data source is the difficulty of analysis - social media messages are extremely noisy and idiosyncratic, and the amount of incoming data is much too large to analyze manually. We therefore need automatic methods to extract meaningful insights.\\ In this paper, we describe a data set collected from Twitter during the months of December 2019 through April 2020, and present an automatic method for determining the sentiments contained in these messages. We then calculate the development of these sentiments over time, segment the results by country, and correlate them with events that took place in each country during those five months. \vspace{-5pt} \section{Related work} Since the pandemic outbreak and lockdown measures, numerous studies have been published to investigate the impact of the corona pandemic on Twitter. \citet{feng2020working} analyzed tweets from the US on a state and county level. First, they could detect differences in temporal tweeting patterns and found that people tweeting more about COVID-19 during working hours as the pandemic progressed. Furthermore, they conducted a sentiment analysis over time including an event specific subtask reporting negative sentiment when the 1000th death was announced and positive when the lockdown measures were eased in the states. \citet{lyu2020sense} looked into US-tweets which contained the terms "Chinese-virus" or "Wuhan-virus" referring to the COVID-19 pandemic to perform a user characterization. They compared the results to users that did not make use of such controversial vocabulary. The findings suggest that there are noticeable differences in age group, geo-location, or followed politicians. \citet{chen2020eyes} focused on sentiment analysis and topic modelling on COVID-19 tweets containing the term "Chinese-virus" (controversial) and contrasted them against tweets without such terms (non-controversial). Tweets containing "Chinese-virus" discussing more topics which are related to China whereas tweets without such words stressing how to defend the virus. The sentiment analysis revealed for both groups negative sentiment, yet with a slightly more positive and analytical tone for the non-controversial tweets. Furthermore, they accent more the future and what the group itself can do to fight the disease. In contrast, the controversial group aiming more on the past and concentrate on what others should do. \begin{figure*}[htbp] \centerline{\includegraphics[width=.8\textwidth]{fig/treemap_countries.pdf}} \caption{Treemap of Twitter activity in Europe during the time period of December 2019 to April 2020.} \label{fig:treemap_countries} \end{figure*} \section{Data collection}\label{sec:data_collection} For our study, we used the freely available Twitter API to collect the tweets from December 2019 to April 2020. The free API allows streaming of 1\% of the total tweet amount. To cover the largest possible area, we used a bounding box which includes the entire world. From this data, we sub-sampled 4,683,226 geo-referenced tweets in 60 languages located in the Europe. To create the Europe sample, we downloaded a shapefile of the earth\footnote{\url{https://www.naturalearthdata.com/downloads/10m-cultural-vectors/10m-admin-0-countries/}}, then we filtered by country performing a point in polygon test using the Python package \textit{Shapely}\footnote{\url{https://pypi.org/project/Shapely/}}. Figure \ref{fig:treemap_countries} depicts the Europe Twitter activity in total numbers. Most tweets come from the U.K. Tweets are not filtered by topic, i.e. many of them are going to be about other topics than COVID-19. This is by design. As we will describe later, we also apply a simple keyword filter to detect tweets that are probably COVID-19-related for further analysis. \begin{figure}[htbp] \centerline{\includegraphics[width=.4\textwidth]{fig/model.png}} \caption{Architecture of the sentiment analysis model.} \label{fig:model} \end{figure} \section{Analysis method} We now describe how the automatic sentiment analysis was performed, and the considerations involved in this method. \begin{figure}[htbp] \centerline{\includegraphics[width=.5\textwidth]{fig/embedding_comp.png}} \caption{MSE for different models on the \textit{Sentiment140} test dataset.} \label{fig:embedding_comp} \end{figure} \subsection{Sentiment modeling} In order to analyze these large amounts of data, we focus on an automatic method for sentiment analysis. We train a neural network for sentiment analysis on tweets. The text input layer of the network is followed by a pre-trained word or sentence embedding. The resulting embedding vectors are fed into a 128-dimensional fully-connected ReLU layer with 50\% dropout, followed by a regression output layer with sigmoid activation. Mean squared error is used as loss. The model is visualized in figure \ref{fig:model}.\\ This network is trained on the \textit{Sentiment140} dataset \cite{go}. This dataset contains around 1.5 million tweets collected through keyword search, and then annotated automatically by detecting emoticons. Tweets are determined to have positive, neutral, or negative sentiment. We map these sentiments to the values 1.0, 0.5, and 0.0 for the regression. Sentiment for unseen tweets is then represented on a continuous scale at the output.\\ We test variants of the model using the following pre-trained word- and sentence-level embeddings: \begin{itemize} \item A skip-gram version of \textit{word2vec} \citep{mikolov} trained on the English-language Wikipedia\footnote{\url{https://tfhub.dev/google/Wiki-words-250/2}} \item A multilingual version of BERT \citep{bert} trained on Wikipedia data\footnote{\url{https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2}} \item A multilingual version of BERT trained on 160 million tweets containing COVID-19 keywords\footnote{\url{https://tfhub.dev/digitalepidemiologylab/covid-twitter-bert/1}} \citep{covidtwitterbert} \item An ELMO model \cite{elmo} trained on the 1 Billion Word Benchmark dataset\footnote{\url{https://tfhub.dev/google/elmo/3}} \item The Multilingual Universal Sentence Encoder (MUSE)\footnote{\url{https://tfhub.dev/google/universal-sentence-encoder-multilingual/3}} \citep{yang} \end{itemize} We train each sentiment analysis model on the \textit{Sentiment140} dataset for 10 epochs. Mean squared error results on the unseen test portion of the same dataset are shown in figure \ref{fig:embedding_comp}. For comparison, we also include an analysis conducted by VADER which is a rule-based sentiment reasoner designed for social media messages \cite{vader}.\\ % Interestingly, most neural network results are in the range of the rule-based approach. BERT delivers better results than the \textit{word2vec} model, with ELMO and the COVID-19-specific version also leading to improvements. However, the best result is achieved with the pre-trained multilingual USE model, which can embed whole sentences rather than (contextualized) words. We therefore perform the subsequent sentiment analysis with the MUSE-based model.\\ An interesting side note here is that the dataset only contains English-language tweets, but the sentence embedding is multilingual (for 16 languages). We freeze the embedding weights to prevent them from over-adapting to English. Due to the cross-lingual semantic representation capabilities of the pre-trained embedding, we expect the model to be able to detect sentiment in other languages just as well.\\ With the created model, we perform sentiment analysis on the 4.6 million tweets collected from December to April, and then aggregate the results over time. This provides us with a representation of the development of Twitter messages' average sentiment over time. We specifically consider all collected tweets rather than just those determined to be topically related to COVID-19 because we are interested in the effect on people's moods in general, not just with regards to the pandemic. Additionally, we also filter the tweets by COVID-19-associated keywords, and analyze their sentiments as well. % The chosen keywords are listed in figure \ref{fig:keywords}.\\ \subsection{Considerations} There are some assumptions implicit in this analysis method that we want to address here. First of all, we only consider tweets containing a geolocation. This applies to less than 1\% of the whole tweet stream, but according to \citet{sloan}, the amount of geolocated tweets closely follows the geographic population distribution. According to \citet{graham}, there probably are factors determining which users share their locations and which ones do not, but there is no systematic study of these.\\ Other assumptions arise from the analysis method itself. For one, we assume that the model is able to extract meaningful sentiment values from the data. However, sentiment is subjective, and the model may be failing for certain constructs (e.g. negations, sarcasm). Additionally, modeling sentiment on a binary scale does not tell the whole story. ``Positive'' sentiment encompasses, for example, happy or hopeful tweets, ``negative'' angry or sad tweets, and ``neutral'' tweets can be news tweets, for example. A more finegrained analysis would be of interest in the future.\\ We also assume a somewhat similar perception of sentiment across languages. Finally, we assume that the detected sentiments as a whole are reflective of the mood within the community; on the other hand, mood is not quantifiable in the first place. All of these assumptions can be called into question. Nevertheless, while they may not be applicable for every single tweet, we hope to detect interesting effects on a large scale. When analyzing thousands of tweets within each time frame, random fluctuations become less likely. We believe that this analysis can provide useful insights into people's thoughts, and form an interesting basis for future studies from psychological or sociological perspectives. \begin{figure}[htbp] \centerline{\includegraphics[width=.4\textwidth]{fig/keywords.png}} \caption{Keywords used for filtering the tweets (not case sensitive).} \label{fig:keywords} \end{figure} \section{Results} In the following, we present the detected sentiment developments over time over-all and for select countries, and correlate them with events that took place within these months. Results for some other countries would have been interesting as well, but were not included because the main spoken language is not covered by MUSE (e.g. Sweden, Denmark). Others were excluded because there was not enough material available; we only analyze countries with at least 300,000 recorded tweets. As described in section \ref{sec:data_collection}, tweets are filtered geographically, not by language (i.e. Italian tweets may also be in other languages than Italian). \subsection{Over-all}\label{subsec:res_overall} In total, we analyzed around 4.6 million tweets, of which around 79,000 contained at least one COVID-19 keyword. Figure \ref{fig:sentiment_kw_count_all} shows the development of the sentiment over time for all tweets and for those with keywords, as well as the development of the number of keyworded tweets. The sentiment results are smoothed on a weekly basis (otherwise, we would be seeing a lot of movement during the week, e.g. an increase on the weekends). For the average over all tweets, we see a slight decrease in sentiment over time, indicating possibly that users' moods deteriorated over the last few months. There are some side effects that need to be considered here. For example, the curve rises slightly for holidays like Christmas and Easter (April 12). Interestingly, we see a clear dip around mid-March. Most European countries started implementing strong social distancing measures around this time. We will talk about this in more detail in the next sections.\\ We see that keywords were used very rarely before mid-January, and only saw a massive increase in usage around the beginning of March. Lately, usage has been decreasing again, indicating a loss of interest over time. Consequently, the sentiment analysis for keyword tweets is not expressive in the beginning. Starting with the more frequent usage in February, the associated sentiment drops massively, indicating that these tweets are now used in relation with the pandemic. Interestingly, the sentiment recovers with the increased use in March - it is possible that users were starting to think about the risks and handling of the situation in a more relaxed way over time. Still, the sentiment curve for keyword tweets lies significantly below the average one, which is to be expected for this all-around rather negative topic. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_all.png}} \caption{Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_all} \end{figure*} \subsection{Analysis by country} We next aggregated the tweets by country as described in section \ref{sec:data_collection} and performed the same analysis by country. The country-wise curves are shown jointly in figure \ref{fig:sentiment_by_country}. Comparing the absolute average sentiment values between countries is difficult as they may be influenced by the languages or cultural factors. However, the relative development is interesting. We see that all curves progress in a relatively similar fashion, with peaks around Christmas and Easter, a strong dip in the middle of March, and a general slow decrease in sentiment. In the following, we will have a closer look at each country's development. (Note that the keyword-only curves are cut of in the beginning for some countries due to a low number of keyword tweets). \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_by_country.png}} \caption{Development of average sentiment over time by country (all tweets).} \label{fig:sentiment_by_country} \end{figure*} \subsubsection{Italy} Figure \ref{fig:sentiment_kw_count_italy} shows the average sentiment for all Italian tweets and all Italian keyword tweets, as well as the development of keyword tweets in Italy. In total, around 400,000 Italian tweets are contained in the data set, of which around 12,000 have a keyword. Similar to the over-all curves described in section \ref{subsec:res_overall}, the sentiment curve slowly decreases over time, keywords are not used frequently before the end of January, when the first cases in Italy were confirmed. Sentiment in the keyword tweets starts out very negative and then increases again. Interestingly, we see a dip in sentiment on March 9, which is exactly when the Italian lockdown was announced. Keywords were also used most frequently during that week. The dip is not visible in the keyword-only sentiment curve, suggesting that the negative sentiment was actually caused by the higher prevalence of coronavirus-related tweets. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_italy_mod.png}} \caption{Italy: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_italy} \end{figure*} \subsubsection{Spain} For Spain, around 780,000 tweets were collected in total with around 14,000 keyword tweets. The curves are shown in figure \ref{fig:sentiment_kw_count_spain}. The heavier usage of keywords starts around the same time as in Italy, where the first domestic cases were publicized at the same time. The spike in keyword-only sentiment in mid-February is actually an artifact of the low number of keyworded tweets in combination with the fact that ``corona'' is a word with other meanings in Spanish (in contrast to the other languages). With more keyword mentions, the sentiment drops as in other countries.\\ From there onwards, the virus progressed somewhat slower in Spain, which is reflected in the curves as well. A lockdown was announced in Spain on March 14, corresponding to a dip in the sentiment curve. As with the Italian data, this dip is not present in the keyword-only sentiments. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_spain.png}} \caption{Spain: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_spain} \end{figure*} \subsubsection{France} Analyses for the data from France are shown in figure \ref{fig:sentiment_kw_count_france}. For France, around 309,000 tweets and around 4,600 keyword tweets were collected. Due to the lower number of data points, the curves are somewhat less smooth. Despite the first European COVID-19 case being detected in France in January, cases did not increase significantly until the end of February, which once again is also seen in the start of increased keyword usage here. The French lockdown was announced on March 16 and extended on April 13, both reflected in dips in the sentiment curve. Towards the end of the considered period, keyword-only sentiment actually starts to increase, which is also seen in Italy and Germany. This could indicate a shift to a more hopeful outlook with regards to the pandemic. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_france_mod.png}} \caption{France: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_france} \end{figure*} \subsubsection{Germany} For Germany, around 415,000 tweets and around 5,900 keyword tweets were collected. The analysis results are shown in figure \ref{fig:sentiment_kw_count_germany}. After very few first cases at the end of January, Germany's case count did not increase significantly until early March, which is again when keyword usage increased. The decrease in the sentiment curve actually arrives around the same time as in France and Spain, which is a little surprising because social distancing measures were not introduced by the government until March 22 (extended on March 29). German users were likely influenced by the situation in their neighboring countries here. In general, the curve is flatter than in other countries. One possible reason for this might be the lower severity of measures in Germany, e.g. there were no strict curfews.\\ In contrast to all other considered countries, the keyword-only sentiment curve is not significantly below the sentiment curve for all tweets in Germany after the beginning of March. There are some possible explanations for this. For one, governmental response to the situation was generally applauded in Germany \cite{uni_erfurt}, and, as mentioned above, was not as strict as in other countries, possibly not impacting people as much. On the other hand, the over-all German curve is lower than its counterparts from other countries, i.e. German tweets have lower average sentiment values in general, possibly caused by cultural factors. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_germany_mod.png}} \caption{Germany: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_germany} \end{figure*} \subsubsection{United Kingdom} Curves for the United Kingdom are shown in figure \ref{fig:sentiment_kw_count_uk}, calculated on around 1,380,000 tweets including around 22,000 keyword tweets. Higher keyword usage starts somewhat earlier here than expected in February, whereas a significant increase in cases did not occur until March. Once again, keyword-only sentiment starts out very negative and then increases over time.\\ The British government handled the situation somewhat differently. In early March, only recommendations were given, and a lockdown was explicitly avoided to prevent economic consequences. This may be a cause for the sentiment peak seen at this time. However, the curve falls until mid-March, when other European countries did implement lockdowns. The government finally did announce a lockdown starting on March 26. This did not lead to a significant change in average sentiment anymore, but in contrast with other countries, the curve does not swing back to a significantly more positive level in the considered period, and actually decreases towards the end. \begin{figure*}[htbp] \centerline{\includegraphics[width=.9\textwidth]{fig/sentiment_kw_count_uk_mod.png}} \caption{United Kingdom: Development of average sentiment for all tweets and for tweets containing COVID-19 keywords, and development of number of tweets containing COVID-19 keywords.} \label{fig:sentiment_kw_count_uk} \end{figure*} \section{Conclusion} \vspace{-5pt} In this paper, we presented the results of a sentiment analysis of 4.6 million geotagged Twitter messages collected during the months of December 2019 through April 2020. This analysis was performed with a neural network trained on an unrelated Twitter sentiment data set. The tweets were then tagged with sentiment on a scale from 0 to 1 using this network. The results were aggregated by country, and averaged over time. Additionally, the sentiments of tweets containing COVID-19-related keywords were aggregated separately.\\ We find several interesting results in the data. First of all, there is a general downward trend in sentiment in the last few months corresponding to the COVID-19 pandemic, with clear dips at times of lockdown announcements and a slow recovery in the following weeks in most countries. COVID-19 keywords were used rarely before February, and correlate with a rise in cases in each country. The sentiment of keyworded tweets starts out very negative at the beginning of increased keyword usage, and becomes more positive over time. However, it remains significantly below the average sentiment in all countries except Germany. Interestingly, there is a slight upward development in sentiment in most countries towards the end of the considered period.\\ \vspace{-10pt} \section{Future work} \vspace{-5pt} We will continue this study by also analyzing the development in the weeks since May 1st and the coming months. More countries will also be added. It will be very interesting to compare the shown European results to those of countries like China, South Korea, Japan, New Zealand, or even individual US states, which were impacted by the pandemic at different times and in different ways, and where the governmental and societal response was different from that of Europe.\\ There are also many other interesting research questions that could be answered on a large scale with this data - for example, regarding people's trust in published COVID-19 information, their concrete opinions on containment measures, or their situation during an infection. Other data sets have also been published in the meantime, including ones that contains hundreds of millions of tweets at the time of writing \cite[e.g.][]{geocov,banda_juan_m_2020_3757272}. These data sets are much larger because collection was not restricted to geotagged tweets. In \citet{geocov}, geolocations were instead completed from outside sources.\\ These studies could also be extended to elucidate more detailed factors in each country. One possibility here is an analysis of Twitter usage and tweet content by country. Another, as mentioned above, lies in moving from the binary sentiment scale to a more complex model. \newpage \bibliography{anthology,acl2020} \bibliographystyle{acl_natbib} \appendix \end{document}
https://openreview.net/forum?id=0gLzHrE_t3z
0gLzHrE_t3z
https://arxiv.org/abs/2004.10706
[ { "cdate": 1593935756291, "content": { "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "nominate_for_a_reproducibility_award": null, "rating": "9: Top 15% of accepted papers, strong accept", "review": "This manuscript describes an exemplary effort to address COVID-19 by bringing together much of the relevant literature into one corpus, CORD-19, and increasing its accessibility by providing a harmonized and standardized format convenient for use by automated tools. CORD-19 has been - and is likely to continue being - a critical resource for the scientific community to address COVID-19, and this manuscript not only reflects that importance, but also gives insight into the approach used, the design decisions taken, challenges encountered, use cases, shared tasks, and various discussion points. The manuscript is well-organized and readable, and (overall) an excellent case study in corpus creation. This manuscript is not only important for understanding the CORD-19 corpus and its enabling effect on current COVID-19 efforts, but is possibly also a historically important example of joint scientific efforts to address COVID-19.\n\nDespite the critical importance of this dataset, there are several questions left unanswered by this manuscript, and it would be unfortunate to not address these before publication.\n\nIt would be useful to have a very clear statement of the purpose for CORD-19. The inclusion of SARS and MERS makes intuitive sense, but it is less clear why other coronaviruses that infect humans (e.g. HCoV-OC43) are not explicitly included - I am not a virologist, but neither will be most of the audience for this manuscript. While many of the articles that discuss these lesser known cornaviruses would be included anyway because they would also mention \"coronavirus\", this is not guaranteed. \n\nWhile it seems appropriate for document inclusion to be query-based, it is important to consider the coverage of the query. The number of name variants in the literature for COVID-19 or SARS-CoV-2 is rather large, and not all of these documents will include other terms that will match, such as \"coronavirus\". For example, how would a document that mentions \"SARS CoV-2\" but none of the query terms listed be handled? This is not a theoretical case: the title and abstract for PMID 32584542 have this issue, and I was unable to locate this document in CORD-19. In addition to minor variations such as this, there are many examples of significant variations such as \"HCoV-19\", \"nCoV-19\" or even \"COIVD\". Are these cases worth considering? If not, can we quantify how much is lost? And if we can't quantify it, this is a limitation.\n\nHow is the following situation handled: querying source A returns a document (e.g. the source has full text and that matches), but the version in source B does not return it (e.g. the source only has title & abstract, and they do not match). From the description, I would assume that the version from source A is used and the version from source B is ignored; is any reasonably useful data lost by not explicitly querying source B for its version?\n\nThere are other efforts to provide a repository of scientific articles related to COVID-19, and it would be appropriate to mention these, if only to indicate why CORD-19 has unique value. I am aware of LitCovid (Chen Q, Allot A, Lu Z. Keep up with the latest coronavirus research. Nature. 2020;579(7798):193), are there others?\n\nThere are also non-COVID-19 efforts to provide a large percentage of the literature in formats appropriate for text mining or other processing. One is (Comeau, Donald C., et al. \"PMC text mining subset in BioC: about three million full-text articles and growing.\" Bioinformatics 35.18 (2019): 3533-3535.), which not only provides the full text of a large percentage of the articles in PubMed Central, but it is also kept up-to-date and converts all documents into a straightforward standardized XML format appropriate for text mining. While this effort is single-source, it specifically addresses some of the issues encountered in the creation of CORD-19 and the representation aspect of the \"Call to Action\".\n", "reviews_visibility": null, "title": "Excellent description of a critical COVID-19 dataset, some questions remaining" }, "ddate": null, "forum": "0gLzHrE_t3z", "id": "awuj7QVR2cj", "invitation": "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper24/-/Official_Review", "mdate": null, "nonreaders": [], "number": 3, "original": null, "readers": [ "everyone" ], "replyto": "0gLzHrE_t3z", "signatures": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper24/AnonReviewer1" ], "tcdate": 1593935756291, "tddate": null, "tmdate": 1593935756291, "writers": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper24/AnonReviewer1" ] }, { "cdate": 1593869977289, "content": { "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "nominate_for_a_reproducibility_award": null, "rating": "9: Top 15% of accepted papers, strong accept", "review": "This is a paper that describes an important research dataset that has been produced during the Covid-19 epidemic. The CORD-19 collection is used for much research and some challenge evaluations. Even though this paper does not report any research results per se, and the paper is posted on the ArXiv preprint server, this version will give a citable description of the collection that will likely be widely referenced.\n\nThe authors describe well the process of dealing not only with the technical issues of processing heterogeneous scientific papers but also the non-technical issues, such as copyright and licensing.\n\nThe authors do not make any unreasonable claims, although I do question the value of this collection for non-computational researchers and clinicians. As the authors note, the collection is not complete, which is essential for clinical researchers and certainly for clinicians (who do not typically read primary research papers anyways, and tend to focus more on summations). But the dataset is of tremendous value to computational and informatics researchers, and that should be pronounced.\n\nI appreciate the Discussion that points out the limitations of how scientific information is currently published, and how it could be improved. One other concern that could be addressed is how long the Allen Institute for AI, which is to be commended for this work, will continue to maintain this tremendously valuable resource.\n", "reviews_visibility": null, "title": "Overview of a highly important Covid-19 dataset" }, "ddate": null, "forum": "0gLzHrE_t3z", "id": "9nXCz8yAKmY", "invitation": "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper24/-/Official_Review", "mdate": null, "nonreaders": [], "number": 2, "original": null, "readers": [ "everyone" ], "replyto": "0gLzHrE_t3z", "signatures": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper24/AnonReviewer3" ], "tcdate": 1593869977289, "tddate": null, "tmdate": 1593869977289, "writers": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper24/AnonReviewer3" ] }, { "cdate": 1593851275369, "content": { "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "nominate_for_a_reproducibility_award": null, "rating": "9: Top 15% of accepted papers, strong accept", "review": "The authors present the CORD-19 data set and describe how it has been developed and continues to be developed. The CORD-19 data set is a valuable resource that provides access to the latest literature about COVID-19 and coronaviruses and it is updated daily with over 200k downloads. The generation of the CORD-19 requires a coordinated integration and processing effort that is significant. The contribution of this corpus is of high significance and will have a strong impact on the biomedical domain and support the development, for instance, of COVID-19 vaccines. The manuscript is clearly written and it is easy to understand.\n\nThe effort in providing a version of the latest literature in formats that can be processed by text analytics methods is excellent, using the latest of the available technology to do so. In the paper, it is mentioned in the manuscript that there are some problems in turning tables into structured format and the authors provide examples of issues that they have found. Table processing is done by IBM, who has as well a method for table processing that seems to be resilient to the problems mentioned and would be relevant to consider it for table processing (https://arxiv.org/abs/1911.10683).\n\nThe authors give an example of conflict from which it can be inferred that the same DOI might be linked to two different PubMed identifiers, the reviewer is curious why this might be the case and if an example could be provided.\n\nWhen you mention “Classification of CORD-19 papers to Microsoft Academic Graph”, is this classification done by a method provided by the authors? is this classification provided as meta-data?\n\nDuring my review the only typo I could find is:\n* “other research activity.” —> “other research activities.”?\n* “by not allowing republication of **an** paper”, an —> a\n\nPlease consider the following guideline for NLM trademarks: https://www.nlm.nih.gov/about/trademarks.html", "reviews_visibility": null, "title": "CORD-19 is an excellent resource with an impressive integration work for the research community to fight COVID-19." }, "ddate": null, "forum": "0gLzHrE_t3z", "id": "QSKmsC8qzKu", "invitation": "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper24/-/Official_Review", "mdate": null, "nonreaders": [], "number": 1, "original": null, "readers": [ "everyone" ], "replyto": "0gLzHrE_t3z", "signatures": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper24/AnonReviewer2" ], "tcdate": 1593851275369, "tddate": null, "tmdate": 1593851275369, "writers": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper24/AnonReviewer2" ] } ]
\documentclass[11pt,a4paper]{article} \PassOptionsToPackage{hyphens}{url}\usepackage{hyperref} % \usepackage[hyperref]{acl2020} \usepackage{times} \usepackage{latexsym} \usepackage{enumitem} \usepackage{graphicx} \usepackage{booktabs} \usepackage{tabularx} \renewcommand{\UrlFont}{\ttfamily\small} \usepackage{xspace} % \usepackage{microtype} \aclfinalcopy % \setlength\titlebox{8cm} \newcommand\BibTeX{B\textsc{ib}\TeX} \newcommand{\covid}{\textsc{Covid-19}\xspace} \newcommand{\cord}{\textsc{CORD-19}\xspace} \newcommand{\sars}{\textsc{SARS}\xspace} \newcommand{\mers}{\textsc{MERS}\xspace} \newcommand{\swine}{\textsc{H1N1}\xspace} \newcommand{\trec}{\textsc{TREC-COVID}\xspace} \newcommand\kyle[1]{{\color{red}\{\textit{#1}\}$_{KL}$}} \newcommand\lucy[1]{{\color{orange}\{\textit{#1}\}$_{LLW}$}} \newcommand\todoit[1]{{\color{red}\{TODO: \textit{#1}\}}} \newcommand\todo{{\color{red}{TODO}}\xspace} \title{\cord: The \covid Open Research Dataset} \author{ Lucy Lu Wang$^{1,}$\Thanks{ denotes equal contribution} \quad Kyle Lo$^{1,}$\footnotemark[1] \quad Yoganand Chandrasekhar$^1$ \quad Russell Reas$^1$ \quad \\ {\bf Jiangjiang Yang$^1$ \quad Douglas Burdick$^2$ \quad Darrin Eide$^3$ \quad Kathryn Funk$^4$ \quad } \\ {\bf Yannis Katsis$^2$ \quad Rodney Kinney$^1$ \quad Yunyao Li$^2$ \quad Ziyang Liu$^6$ \quad } \\ {\bf William Merrill$^1$ \quad Paul Mooney$^5$ \quad Dewey Murdick$^7$ \quad Devvret Rishi$^5$ \quad } \\ {\bf Jerry Sheehan$^4$ \quad Zhihong Shen$^3$ \quad Brandon Stilson$^1$ \quad Alex D. Wade$^6$ \quad } \\ {\bf Kuansan Wang$^3$ \quad Nancy Xin Ru Wang $^2$ \quad Chris Wilhelm$^1$ \quad Boya Xie$^3$ \quad } \\ {\bf Douglas Raymond$^1$ \quad Daniel S. Weld$^{1,8}$ \quad Oren Etzioni$^1$ \quad Sebastian Kohlmeier$^1$ \quad } \\ [2mm] $^1$Allen Institute for AI \quad $^2$ IBM Research \quad $^3$Microsoft Research \\ $^4$National Library of Medicine \quad $^5$Kaggle \quad $^6$Chan Zuckerberg Initiative \\ $^7$Georgetown University \quad $^8$University of Washington \\ {\tt\small \{lucyw, kylel\}@allenai.org} } \date{} \begin{document} \maketitle \begin{abstract} The \covid Open Research Dataset (\cord) is a growing\footnote{The dataset continues to be updated daily with papers from new sources and the latest publications. Statistics reported in this article are up-to-date as of version \textsc{2020-06-14}.} resource of scientific papers on \covid and related historical coronavirus research. \cord is designed to facilitate the development of text mining and information retrieval systems over its rich collection of metadata and structured full text papers. Since its release, \cord has been downloaded\footnote{\href{https://www.semanticscholar.org/cord19}{https://www.semanticscholar.org/cord19}} over 200K times and has served as the basis of many \covid text mining and discovery systems. In this article, we describe the mechanics of dataset construction, highlighting challenges and key design decisions, provide an overview of how \cord has been used, and describe several shared tasks built around the dataset. We hope this resource will continue to bring together the computing community, biomedical experts, and policy makers in the search for effective treatments and management policies for \covid. \end{abstract} \section{Introduction} On March 16, 2020, the Allen Institute for AI (AI2), in collaboration with our partners at The White House Office of Science and Technology Policy (OSTP), the National Library of Medicine (NLM), the Chan Zuckerburg Initiative (CZI), Microsoft Research, and Kaggle, coordinated by Georgetown University's Center for Security and Emerging Technology (CSET), released the first version of \cord. This resource is a large and growing collection of publications and preprints on \covid and related historical coronaviruses such as \sars and \mers. The initial release consisted of 28K papers, and the collection has grown to more than 140K papers over the subsequent weeks. Papers and preprints from several archives are collected and ingested through the Semantic Scholar literature search engine,\footnote{\href{https://semanticscholar.org/}{https://semanticscholar.org/}} metadata are harmonized and deduplicated, and paper documents are processed through the pipeline established in \citet{lo-wang-2020-s2orc} to extract full text (more than 50\% of papers in \cord have full text). We commit to providing regular updates to the dataset until an end to the \covid crisis is foreseeable. \begin{figure}[tbp!] \centering \includegraphics[width=\columnwidth]{cord19_dset.png} \caption{Papers and preprints are collected from different sources through Semantic Scholar. Released as part of \cord are the harmonized and deduplicated metadata and full text JSON.} \label{fig:dataset} \end{figure} \cord aims to connect the machine learning community with biomedical domain experts and policy makers in the race to identify effective treatments and management policies for \covid. The goal is to harness these diverse and complementary pools of expertise to discover relevant information more quickly from the literature. Users of the dataset have leveraged AI-based techniques in information retrieval and natural language processing to extract useful information. Responses to \cord have been overwhelmingly positive, with the dataset being downloaded over 200K times in the three months since its release. The dataset has been used by clinicians and clinical researchers to conduct systematic reviews, has been leveraged by data scientists and machine learning practitioners to construct search and extraction tools, and is being used as the foundation for several successful shared tasks. We summarize research and shared tasks in Section~\ref{sec:research_directions}. In this article, we briefly describe: \begin{enumerate}[noitemsep] \item The content and creation of \cord, \item Design decisions and challenges around creating the dataset, \item Research conducted on the dataset, and how shared tasks have facilitated this research, and \item A roadmap for \cord going forward. \end{enumerate} \section{Dataset} \label{sec:dataset} \cord integrates papers and preprints from several sources (Figure~\ref{fig:dataset}), where a paper is defined as the base unit of published knowledge, and a preprint as an unpublished but publicly available counterpart of a paper. Throughout the rest of Section~\ref{sec:dataset}, we discuss papers, though the same processing steps are adopted for preprints. First, we ingest into Semantic Scholar paper metadata and documents from each source. Each paper is associated with bibliographic metadata, like title, authors, publication venue, etc, as well as unique identifiers such as a DOI, PubMed Central ID, PubMed ID, the WHO Covidence \#,\footnote{\label{footnote:who}\href{https://www.who.int/emergencies/diseases/novel-coronavirus-2019/global-research-on-novel-coronavirus-2019-ncov}{https://www.who.int/emergencies/diseases/novel-coronavirus-2019/global-research-on-novel-coronavirus-2019-ncov}} MAG identifier \citep{Shen2018AWS}, and others. Some papers are associated with documents, the physical artifacts containing paper content; these are the familiar PDFs, XMLs, or physical print-outs we read. For the \cord effort, we generate harmonized and deduplicated metadata as well as structured full text parses of paper documents as output. We provide full text parses in cases where we have access to the paper documents, and where the documents are available under an open access license (e.g. Creative Commons (CC),\footnote{\href{https://creativecommons.org/}{https://creativecommons.org/}} publisher-specific \covid licenses,\footnote{\label{footnote:pmc_covid}\href{https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/}{https://www.ncbi.nlm.nih.gov/pmc/about/covid-19/}} or identified as open access through DOI lookup in the Unpaywall\footnote{\href{https://unpaywall.org/}{https://unpaywall.org/}} database). \subsection{Sources of papers} Papers in \cord are sourced from PubMed Central (PMC), PubMed, the World Health Organization's Covid-19 Database,\textsuperscript{\ref{footnote:who}} and preprint servers bioRxiv, medRxiv, and arXiv. The PMC Public Health Emergency Covid-19 Initiative\textsuperscript{\ref{footnote:pmc_covid}} expanded access to \covid literature by working with publishers to make coronavirus-related papers discoverable and accessible through PMC under open access license terms that allow for reuse and secondary analysis. BioRxiv and medRxiv preprints were initially provided by CZI, and are now ingested through Semantic Scholar along with all other included sources. We also work directly with publishers such as Elsevier\footnote{\label{footnote:elsevier}\href{https://www.elsevier.com/connect/coronavirus-information-center}{https://www.elsevier.com/connect/coronavirus-information-center}} and Springer Nature,\footnote{\href{https://www.springernature.com/gp/researchers/campaigns/coronavirus}{https://www.springernature.com/gp/researchers/\\campaigns/coronavirus}} to provide full text coverage of relevant papers available in their back catalog. All papers are retrieved given the query\footnote{Adapted from the Elsevier COVID-19 site\textsuperscript{\ref{footnote:elsevier}}}: \begin{quote} \footnotesize\texttt{"COVID" OR "COVID-19" OR "Coronavirus" OR "Corona virus" OR "2019-nCoV" OR "SARS-CoV" OR "MERS-CoV" OR "Severe Acute Respiratory Syndrome" OR "Middle East Respiratory Syndrome"} \end{quote} \noindent Papers that match on these keywords in their title, abstract, or body text are included in the dataset. Query expansion is performed by PMC on these search terms, affecting the subset of papers in \cord retrieved from PMC. \subsection{Processing metadata} \label{sec:metadata_processing} The initial collection of sourced papers suffers from duplication and incomplete or conflicting metadata. We perform the following operations to harmonize and deduplicate all metadata: \begin{enumerate}[noitemsep] \item Cluster papers using paper identifiers \item Select canonical metadata for each cluster \item Filter clusters to remove unwanted entries \end{enumerate} \paragraph{Clustering papers} We cluster papers if they overlap on any of the following identifiers: \emph{\{doi, pmc\_id, pubmed\_id, arxiv\_id, who\_covidence\_id, mag\_id\}}. If two papers from different sources have an identifier in common and no other identifier conflicts between them, we assign them to the same cluster. Each cluster is assigned a unique identifier \textbf{\textsc{cord\_uid}}, which persists between dataset releases. No existing identifier, such as DOI or PMC ID, is sufficient as the primary \cord identifier. Some papers in PMC do not have DOIs; some papers from the WHO, publishers, or preprint servers like arXiv do not have PMC IDs or DOIs. Occasionally, conflicts occur. For example, a paper $c$ with $(doi, pmc\_id, pubmed\_id)$ identifiers $(x, null, z')$ might share identifier $x$ with a cluster of papers $\{a, b\}$ that has identifiers $(x, y, z)$, but has a conflict $z' \neq z$. In this case, we choose to create a new cluster $\{c\}$, containing only paper $c$.\footnote{This is a conservative clustering policy in which any metadata conflict prohibits clustering. An alternative policy would be to cluster if any identifier matches, under which $a$, $b$, and $c$ would form one cluster with identifiers $(x, y, [z, z'])$.} \paragraph{Selecting canonical metadata} Among each cluster, the canonical entry is selected to prioritize the availability of document files and the most permissive license. For example, between two papers with PDFs, one available under a CC license and one under a more restrictive \covid-specific copyright license, we select the CC-licensed paper entry as canonical. If any metadata in the canonical entry are missing, values from other members of the cluster are promoted to fill in the blanks. \paragraph{Cluster filtering} Some entries harvested from sources are not papers, and instead correspond to materials like tables of contents, indices, or informational documents. These entries are identified in an ad hoc manner and removed from the dataset. \subsection{Processing full text} Most papers are associated with one or more PDFs.\footnote{PMC papers can have multiple associated PDFs per paper, separating the main text from supplementary materials.} To extract full text and bibliographies from each PDF, we use the PDF parsing pipeline created for the S2ORC dataset \cite{lo-wang-2020-s2orc}.\footnote{One major difference in full text parsing for \cord is that we do not use ScienceParse,\footnotemark~as we always derive this metadata from the sources directly.}\footnotetext{\href{https://github.com/allenai/science-parse}{https://github.com/allenai/science-parse}} In \cite{lo-wang-2020-s2orc}, we introduce the S2ORC JSON format for representing scientific paper full text, which is used as the target output for paper full text in \cord. The pipeline involves: \begin{enumerate}[noitemsep] \item Parse all PDFs to TEI XML files using GROBID\footnote{\href{https://github.com/kermitt2/grobid}{https://github.com/kermitt2/grobid}} \cite{Lopez2009GROBIDCA} \item Parse all TEI XML files to S2ORC JSON \item Postprocess to clean up links between inline citations and bibliography entries. \end{enumerate} \noindent We additionally parse JATS XML\footnote{\href{https://jats.nlm.nih.gov/}{https://jats.nlm.nih.gov/}} files available for PMC papers using a custom parser, generating the same target S2ORC JSON format. This creates two sets of full text JSON parses associated with the papers in the collection, one set originating from PDFs (available from more sources), and one set originating from JATS XML (available only for PMC papers). Each PDF parse has an associated SHA, the 40-digit SHA-1 of the associated PDF file, while each XML parse is named using its associated PMC ID. Around 48\% of \cord papers have an associated PDF parse, and around 37\% have an XML parse, with the latter nearly a subset of the former. Most PDFs ($>$90\%) are successfully parsed. Around 2.6\% of \cord papers are associated with multiple PDF SHA, due to a combination of paper clustering and the existence of supplementary PDF files. \subsection{Table parsing} Since the May 12, 2020 release of \cord, we also release selected HTML table parses. Tables contain important numeric and descriptive information such as sample sizes and results, which are the targets of many information extraction systems. A separate PDF table processing pipeline is used, consisting of table extraction and table understanding. \emph{Table extraction} is based on the Smart Document Understanding (SDU) capability included in IBM Watson Discovery.\footnote{\href{https://www.ibm.com/cloud/watson-discovery}{https://www.ibm.com/cloud/watson-discovery}} SDU converts a given PDF document from its native binary representation into a text-based representation like HTML which includes both identified document structures (e.g., tables, section headings, lists) and formatting information (e.g. positions for extracted text). \emph{Table understanding} (also part of Watson Discovery) then annotates the extracted tables with additional semantic information, such as column and row headers and table captions. We leverage the Global Table Extractor (GTE)~\cite{Zheng2020GlobalTE}, which uses a specialized object detection and clustering technique to extract table bounding boxes and structures. All PDFs are processed through this table extraction and understanding pipeline. If the Jaccard similarity of the table captions from the table parses and \cord parses is above 0.9, we insert the HTML of the matched table into the full text JSON. We extract 188K tables from 54K documents, of which 33K tables are successfully matched to tables in 19K (around 25\%) full text documents in \cord. Based on preliminary error analysis, we find that match failures are primarily due to caption mismatches between the two parse schemes. Thus, we plan to explore alternate matching functions, potentially leveraging table content and document location as additional features. See Appendix \ref{app:tables} for example table parses. \subsection{Dataset contents} \begin{figure}[tbp!] \centering \includegraphics[width=\columnwidth]{papers_per_year.png} \caption{The distribution of papers per year in \cord. A spike in publications occurs in 2020 in response to \covid.} \label{fig:year} \end{figure} \cord has grown rapidly, now consisting of over 140K papers with over 72K full texts. Over 47K papers and 7K preprints on \covid and coronaviruses have been released since the start of 2020, comprising nearly 40\% of papers in the dataset. \begin{table}[tbp!] \setlength{\tabcolsep}{.25em} \footnotesize \centering \begin{tabular}{p{34mm}p{15mm}p{17mm}} \toprule Subfield & Count & \% of corpus \\ \midrule Virology & 29567 & 25.5\% \\ Immunology & 15954 & 13.8\% \\ Surgery & 15667 & 13.5\% \\ Internal medicine & 12045 & 10.4\% \\ Intensive care medicine & 10624 & 9.2\% \\ Molecular biology & 7268 & 6.3\% \\ Pathology & 6611 & 5.7\% \\ Genetics & 5231 & 4.5\% \\ Other & 12997 & 11.2\% \\ \bottomrule \end{tabular} \caption{MAG subfield of study for \cord papers.} \label{tab:fos} \end{table} Classification of \cord papers to Microsoft Academic Graph (MAG) \citep{msr:mag1, msr:mag2} fields of study \citep{Shen2018AWS} indicate that the dataset consists predominantly of papers in Medicine (55\%), Biology (31\%), and Chemistry (3\%), which together constitute almost 90\% of the corpus.\footnote{MAG identifier mappings are provided as a supplement on the \cord landing page.} A breakdown of the most common MAG subfields (L1 fields of study) represented in \cord is given in Table~\ref{tab:fos}. Figure~\ref{fig:year} shows the distribution of \cord papers by date of publication. Coronavirus publications increased during and following the SARS and MERS epidemics, but the number of papers published in the early months of 2020 exploded in response to the \covid epidemic. Using author affiliations in MAG, we identify the countries from which the research in CORD-19 is conducted. Large proportions of \cord papers are associated with institutions based in the Americas (around 48K papers), Europe (over 35K papers), and Asia (over 30K papers). \section{Design decision \& challenges} A number of challenges come into play in the creation of \cord. We summarize the primary design requirements of the dataset, along with challenges implicit within each requirement: \paragraph{Up-to-date} Hundreds of new publications on \covid are released every day, and a dataset like \cord can quickly become irrelevant without regular updates. \cord has been updated daily since May 26. A processing pipeline that produces consistent results day to day is vital to maintaining a changing dataset. That is, the metadata and full text parsing results must be reproducible, identifiers must be persistent between releases, and changes or new features should ideally be compatible with previous versions of the dataset. \paragraph{Handles data from multiple sources} Papers from different sources must be integrated and harmonized. Each source has its own metadata format, which must be converted to the \cord format, while addressing any missing or extraneous fields. The processing pipeline must also be flexible to adding new sources. \paragraph{Clean canonical metadata} Because of the diversity of paper sources, duplication is unavoidable. Once paper metadata from each source is cleaned and organized into \cord format, we apply the deduplication logic described in Section \ref{sec:metadata_processing} to identify similar paper entries from different sources. We apply a conservative clustering algorithm, combining papers only when they have shared identifiers but no conflicts between any particular class of identifiers. We justify this because it is less harmful to retain a few duplicate papers than to remove a document that is potentially unique and useful. \paragraph{Machine readable full text} To provide accessible and canonical structured full text, we parse content from PDFs and associated paper documents. The full text is represented in S2ORC JSON format \citep{lo-wang-2020-s2orc}, a schema designed to preserve most relevant paper structures such as paragraph breaks, section headers, inline references, and citations. S2ORC JSON is simple to use for many NLP tasks, where character-level indices are often employed for annotation of relevant entities or spans. The text and annotation representations in S2ORC share similarities with BioC \citep{Comeau2019PMCTM}, a JSON schema introduced by the BioCreative community for shareable annotations, with both formats leveraging the flexibility of character-based span annotations. However, S2ORC JSON also provides a schema for representing other components of a paper, such as its metadata fields, bibliography entries, and reference objects for figures, tables, and equations. We leverage this flexible and somewhat complete representation of S2ORC JSON for \cord. We recognize that converting between PDF or XML to JSON is lossy. However, the benefits of a standard structured format, and the ability to reuse and share annotations made on top of that format have been critical to the success of \cord. \paragraph{Observes copyright restrictions} Papers in \cord and academic papers more broadly are made available under a variety of copyright licenses. These licenses can restrict or limit the abilities of organizations such as AI2 from redistributing their content freely. Although much of the \covid literature has been made open access by publishers, the provisions on these open access licenses differ greatly across papers. Additionally, many open access licenses grant the ability to read, or ``consume'' the paper, but may be restrictive in other ways, for example, by not allowing republication of a paper or its redistribution for commercial purposes. The curator of a dataset like \cord must pass on best-to-our-knowledge licensing information to the end user. \section{Research directions} \label{sec:research_directions} \begin{figure}[tbp!] \centering \includegraphics[width=\columnwidth]{cord19_tasks.png} \caption{An example information retrieval and extraction system using \cord: Given an input query, the system identifies relevant papers (yellow highlighted rows) and extracts text snippets from the full text JSONs as supporting evidence.} \label{fig:tasks} \end{figure} We provide a survey of various ways researchers have made use of \cord. We organize these into four categories: \emph{(i)} direct usage by clinicians and clinical researchers (\S\ref{sec:by_clinical_experts}), \emph{(ii)} tools and systems to assist clinicians (\S\ref{sec:for_clinical_experts}), \emph{(iii)} research to support further text mining and NLP research (\S\ref{sec:for_nlp_researchers}), and \emph{(iv)} shared tasks and competitions (\S\ref{sec:shared_tasks}). \subsection{Usage by clinical researchers} \label{sec:by_clinical_experts} \cord has been used by medical experts as a paper collection for conducting systematic reviews. These reviews address questions about \covid include infection and mortality rates in different demographics \cite{Han2020.who-is-more-susceptible}, symptoms of the disease \citep{Parasa2020PrevalenceOG}, identifying suitable drugs for repurposing \cite{sadegh2020exploring}, management policies \cite{Yaacoube-bmj-safe-management-bodies}, and interactions with other diseases \cite{Crisan-Dabija-tuberculosis-covid19, Popa-inflammatory-bowel-diseases}. \subsection{Tools for clinicians} \label{sec:for_clinical_experts} Challenges for clinicians and clinical researchers during the current epidemic include \textit{(i)} keeping up to to date with recent papers about \covid, \textit{(ii)} identifying useful papers from historical coronavirus literature, \textit{(iii)} extracting useful information from the literature, and \textit{(iv)} synthesizing knowledge from the literature. To facilitate solutions to these challenges, dozens of tools and systems over \cord have already been developed. Most combine elements of text-based information retrieval and extraction, as illustrated in Figure~\ref{fig:tasks}. We have compiled a list of these efforts on the \cord public GitHub repository\footnote{\href{https://github.com/allenai/cord19}{https://github.com/allenai/cord19}} and highlight some systems in Table \ref{tab:other_tasks}.\footnote{There are many Search and QA systems to survey. We have chosen to highlight the systems that were made publicly-available within a few weeks of the \cord initial release.} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}p{#1}} \begin{table*}[tbh!] \small \begin{tabularx}{\textwidth}{L{20mm}p{20mm}p{40mm}X} \toprule \textbf{Task} & \textbf{Project} & \textbf{Link} & \textbf{Description} \\ \midrule \textbf{Search and \newline discovery} & \textsc{Neural Covidex} & \href{https://covidex.ai/}{https://covidex.ai/} & Uses a T5-base \cite{raffel2019exploring} unsupervised reranker on BM25 \cite{Jones2000APM} \\ \cline{2-4} & \textsc{CovidScholar} & \href{https://covidscholar.org}{https://covidscholar.org/} & Adapts \citet{Weston2019} system for entity-centric queries \\ \cline{2-4} & \textsc{KDCovid} & \href{http://kdcovid.nl/about.html}{http://kdcovid.nl/about.html} & Uses BioSentVec \cite{biosentvec} similarity to identify relevant sentences \\ \cline{2-4} & \textsc{Spike-Cord} & \href{https://spike.covid-19.apps.allenai.org}{https://spike.covid-19.apps.allenai.org} & Enables users to define ``regular expression''-like queries to directly search over full text \\ \midrule \textbf{Question answering} & \textsc{covidask} & \href{https://covidask.korea.ac.kr/}{https://covidask.korea.ac.kr/} & Adapts \citet{seo-etal-2019-real} using BioASQ challenge (Task B) dataset \citep{Tsatsaronis2015AnOO} \\ \cline{2-4} & \textsc{aueb} & \href{http://cslab241.cs.aueb.gr:5000/}{http://cslab241.cs.aueb.gr:5000/} & Adapts \citet{mcdonald2018deep} using \citet{Tsatsaronis2015AnOO} \\ \midrule \textbf{Summariz-ation} & Vespa & \href{https://cord19.vespa.ai/}{https://cord19.vespa.ai/} & Generates summaries of paper abstracts using T5 \citep{raffel2019exploring} \\ \midrule \textbf{Recommend-ation} & Vespa & \href{https://cord19.vespa.ai/}{https://cord19.vespa.ai/} & Recommends ``similar papers'' using Sentence-BERT \cite{reimers-gurevych-2019-sentence} and SPECTER embeddings \cite{specter2020cohan} \\ \midrule \textbf{Entailment} & COVID papers browser & \href{https://github.com/gsarti/covid-papers-browser}{https://github.com/gsarti/covid-papers-browser} & Similar to \textsc{KDCovid}, but uses embeddings from BERT models trained on NLI datasets \\ \midrule \textbf{Claim \newline verification} & SciFact & \href{https://scifact.apps.allenai.org}{https://scifact.apps.allenai.org} & Uses RoBERTa-large \cite{liu2019roberta} to find Support/Refute evidence for \covid claims \\ \midrule \textbf{Assistive lit. review} & ASReview & \href{https://github.com/asreview/asreview-covid19}{https://github.com/asreview/ asreview-covid19} & Active learning system with a \cord plugin for identifying papers for literature reviews \\ \midrule \textbf{Augmented reading} & Sinequa & \href{https://covidsearch.sinequa.com/app/covid-search/}{https://covidsearch.sinequa.com/ app/covid-search/} & In-browser paper reader with entity highlighting on PDFs \\ \midrule \textbf{Visualization} & SciSight & \href{https://scisight.apps.allenai.org}{https://scisight.apps.allenai.org} & Network visualizations for browsing research groups working on \covid \\ \bottomrule \end{tabularx} \caption{Publicly-available tools and systems for medical experts using \cord.} \label{tab:other_tasks} \end{table*} \subsection{Text mining and NLP research} \label{sec:for_nlp_researchers} The following is a summary of resources released by the NLP community on top of \cord to support other research activities. \paragraph{Information extraction} To support extractive systems, NER and entity linking of biomedical entities can be useful. NER and linking can be performed using NLP toolkits like ScispaCy \cite{neumann-etal-2019-scispacy} or language models like BioBERT-base \cite{Lee2019BioBERTAP} and SciBERT-base \cite{beltagy-etal-2019-scibert} finetuned on biomedical NER datasets. \citet{Wang2020ComprehensiveNE} augments \cord full text with entity mentions predicted from several techniques, including weak supervision using the NLM's Unified Medical Language System (UMLS) Metathesaurus \cite{Bodenreider2004TheUM}. \paragraph{Text classification} Some efforts focus on extracting sentences or passages of interest. For example, \citet{Liang2020IdentifyingRF} uses BERT \cite{devlin-etal-2019-bert} to extract sentences from \cord that contain \covid-related radiological findings. \paragraph{Pretrained model weights} BioBERT and SciBERT have been popular pretrained LMs for \covid-related tasks. DeepSet has released a BERT-base model pretrained on \cord.\footnote{\href{https://huggingface.co/deepset/covid_bert_base}{https://huggingface.co/deepset/covid\_bert\_base}} SPECTER \cite{specter2020cohan} paper embeddings computed using paper titles and abstracts are being released with each \cord update. SeVeN relation embeddings \cite{espinosa-anke-schockaert-2018-seven} between word pairs have also been made available for \cord.\footnote{\href{https://github.com/luisespinosaanke/cord-19-seven}{https://github.com/luisespinosaanke/cord-19-seven}} \paragraph{Knowledge graphs} The Covid Graph project\footnote{\href{https://covidgraph.org/}{https://covidgraph.org/}} releases a \covid knowledge graph built from mining several public data sources, including \cord, and is perhaps the largest current initiative in this space. \citet{Ahamed2020InformationMF} rely on entity co-occurrences in \cord to construct a graph that enables centrality-based ranking of drugs, pathogens, and biomolecules. \subsection{Competitions and Shared Tasks} \label{sec:shared_tasks} The adoption of \cord and the proliferation of text mining and NLP systems built on top of the dataset are supported by several \covid-related competitions and shared tasks. \subsubsection{Kaggle} \label{sec:kaggle} Kaggle hosts the \cord Research Challenge,\footnote{\href{https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge}{https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge}} a text-mining challenge that tasks participants with extracting answers to key scientific questions about \covid from the papers in the \cord dataset. Round 1 was initiated with a set of open-ended questions, e.g., \textit{What is known about transmission, incubation, and environmental stability?} and \textit{What do we know about \covid risk factors?} More than 500 teams participated in Round 1 of the Kaggle competition. Feedback from medical experts during Round 1 identified that the most useful contributions took the form of article summary tables. Round 2 subsequently focused on this task of table completion, and resulted in 100 additional submissions. A unique tabular schema is defined for each question, and answers are collected from across different automated extractions. For example, extractions for risk factors should include disease severity and fatality metrics, while extractions for incubation should include time ranges. Sufficient knowledge of COVID-19 is necessary to define these schema, to understand which fields are important to include (and exclude), and also to perform error-checking and manual curation. \subsubsection{TREC} The \trec\footnote{\href{https://ir.nist.gov/covidSubmit/index.html}{https://ir.nist.gov/covidSubmit/index.html}} shared task \cite{trec-covid-jamia,voorhees2020treccovid} assesses systems on their ability to rank papers in \cord based on their relevance to \covid-related topics. Topics are sourced from MedlinePlus searches, Twitter conversations, library searches at OHSU, as well as from direct conversations with researchers, reflecting actual queries made by the community. To emulate real-world surge in publications and rapidly-changing information needs, the shared task is organized in multiple rounds. Each round uses a specific version of \cord, has newly added topics, and gives participants one week to submit per-topic document rankings for judgment. Round 1 topics included more general questions such as \emph{What is the origin of COVID-19?}~and \emph{What are the initial symptoms of COVID-19?}~while Round 3 topics have become more focused, e.g., \emph{What are the observed mutations in the SARS-CoV-2 genome?}~and \emph{What are the longer-term complications of those who recover from COVID-19?} Around 60 medical domain experts, including indexers from NLM and medical students from OHSU and UTHealth, are involved in providing gold rankings for evaluation. \trec opened using the April 1st \cord version and received submissions from over 55 participating teams. \section{Discussion} \label{sec:discussion} Several hundred new papers on \covid are now being published every day. Automated methods are needed to analyze and synthesize information over this large quantity of content. The computing community has risen to the occasion, but it is clear that there is a critical need for better infrastructure to incorporate human judgments in the loop. Extractions need expert vetting, and search engines and systems must be designed to serve users. Successful engagement and usage of \cord speaks to our ability to bridge computing and biomedical communities over a common, global cause. From early results of the Kaggle challenge, we have learned which formats are conducive to collaboration, and which questions are the most urgent to answer. However, there is significant work that remains for determining \textit{(i)} which methods are best to assist textual discovery over the literature, \textit{(ii)} how best to involve expert curators in the pipeline, and \textit{(iii)} which extracted results convert to successful \covid treatments and management policies. Shared tasks and challenges, as well as continued analysis and synthesis of feedback will hopefully provide answers to these outstanding questions. Since the initial release of \cord, we have implemented several new features based on community feedback, such as the inclusion of unique identifiers for papers, table parses, more sources, and daily updates. Most substantial outlying features requests have been implemented or addressed at this time. We will continue to update the dataset with more sources of papers and newly published literature as resources permit. \subsection{Limitations} Though we aim to be comprehensive, \cord does not cover many relevant scientific documents on \covid. We have restricted ourselves to research papers and preprints, and do not incorporate other types of documents, such as technical reports, white papers, informational publications by governmental bodies, and more. Including these documents is outside the current scope of \cord, but we encourage other groups to curate and publish such datasets. Within the scope of scientific papers, \cord is also incomplete, though we continue to prioritize the addition of new sources. This has motivated the creation of other corpora supporting \covid NLP, such as LitCovid \citep{Chen2020KeepUW}, which provide complementary materials to \cord derived from PubMed. Though we have since added PubMed as a source of papers in \cord, there are other domains such as the social sciences that are not currently represented, and we hope to incorporate these works as part of future work. We also note the shortage of foreign language papers in \cord, especially Chinese language papers produced during the early stages of the epidemic. These papers may be useful to many researchers, and we are working with collaborators to provide them as supplementary data. However, challenges in both sourcing and licensing these papers for re-publication are additional hurdles. \subsection{Call to action} Though the full text of many scientific papers are available to researchers through \cord, a number of challenges prevent easy application of NLP and text mining techniques to these papers. First, the primary distribution format of scientific papers -- PDF -- is not amenable to text processing. The PDF file format is designed to share electronic documents rendered faithfully for reading and printing, and mixes visual with semantic information. Significant effort is needed to coerce PDF into a format more amenable to text mining, such as JATS XML,\footnote{\label{footnote:jats}\href{https://www.niso.org/publications/z3996-2019-jats}{https://www.niso.org/publications/z3996-2019-jats}} BioC \citep{Comeau2019PMCTM}, or S2ORC JSON \citep{lo-wang-2020-s2orc}, which is used in \cord. Though there is substantial work in this domain, we can still benefit from better PDF parsing tools for scientific documents. As a complement, scientific papers should also be made available in a structured format like JSON, XML, or HTML. Second, there is a clear need for more scientific content to be made accessible to researchers. Some publishers have made \covid papers openly available during this time, but both the duration and scope of these epidemic-specific licenses are unclear. Papers describing research in related areas (e.g., on other infectious diseases, or relevant biological pathways) have also not been made open access, and are therefore unavailable in \cord or otherwise. Securing release rights for papers not yet in \cord but relevant for \covid research is a significant portion of future work, led by the PMC \covid Initiative.\textsuperscript{\ref{footnote:pmc_covid}} Lastly, there is no standard format for representing paper metadata. Existing schemas like the JATS XML NISO standard\textsuperscript{\ref{footnote:jats}} or library science standards like \textsc{bibframe}\footnote{\href{https://www.loc.gov/bibframe/}{https://www.loc.gov/bibframe/}} or Dublin Core\footnote{\href{https://www.dublincore.org/specifications/dublin-core/dces/}{https://www.dublincore.org/specifications/dublin-core/dces/}} have been adopted to represent paper metadata. However, these standards can be too coarse-grained to capture all necessary paper metadata elements, or may lack a strict schema, causing representations to vary greatly across publishers who use them. To improve metadata coherence across sources, the community must define and agree upon an appropriate standard of representation. \subsection*{Summary} This project offers a paradigm of how the community can use machine learning to advance scientific research. By allowing computational access to the papers in \cord, we increase our ability to perform discovery over these texts. We hope the dataset and projects built on the dataset will serve as a template for future work in this area. We also believe there are substantial improvements that can be made in the ways we publish, share, and work with scientific papers. We offer a few suggestions that could dramatically increase community productivity, reduce redundant effort, and result in better discovery and understanding of the scientific literature. Through \cord, we have learned the importance of bringing together different communities around the same scientific cause. It is clearer than ever that automated text analysis is not the solution, but rather one tool among many that can be directed to combat the \covid epidemic. Crucially, the systems and tools we build must be designed to serve a use case, whether that's improving information retrieval for clinicians and medical professionals, summarizing the conclusions of the latest observational research or clinical trials, or converting these learnings to a format that is easily digestible by healthcare consumers. \section*{Acknowledgments} This work was supported in part by NSF Convergence Accelerator award 1936940, ONR grant N00014-18-1-2193, and the University of Washington WRF/Cable Professorship. We thank The White House Office of Science and Technology Policy, the National Library of Medicine at the National Institutes of Health, Microsoft Research, Chan Zuckerberg Initiative, and Georgetown University's Center for Security and Emerging Technology for co-organizing the \cord initiative. We thank Michael Kratsios, the Chief Technology Officer of the United States, and The White House Office of Science and Technology Policy for providing the initial seed set of questions for the Kaggle \cord research challenge. We thank Kaggle for coordinating the \cord research challenge. In particular, we acknowledge Anthony Goldbloom for providing feedback on \cord and for involving us in discussions around the Kaggle literature review tables project. We thank the National Institute of Standards and Technology (NIST), National Library of Medicine (NLM), Oregon Health and Science University (OHSU), and University of Texas Health Science Center at Houston (UTHealth) for co-organizing the \trec shared task. In particular, we thank our co-organizers -- Steven Bedrick (OHSU), Aaron Cohen (OHSU), Dina Demner-Fushman (NLM), William Hersh (OHSU), Kirk Roberts (UTHealth), Ian Soboroff (NIST), and Ellen Voorhees (NIST) -- for feedback on the design of \cord. We acknowledge our partners at Elsevier and Springer Nature for providing additional full text coverage of papers included in the corpus. We thank Bryan Newbold from the Internet Archive for providing feedback on data quality and helpful comments on early drafts of the manuscript. We thank Rok Jun Lee, Hrishikesh Sathe, Dhaval Sonawane and Sudarshan Thitte from IBM Watson AI for their help in table parsing. We also acknowledge and thank our collaborators from AI2: Paul Sayre and Sam Skjonsberg for providing front-end support for \cord and \trec, Michael Schmitz for setting up the \cord Discourse community forums, Adriana Dunn for creating webpage content and marketing, Linda Wagner for collecting community feedback, Jonathan Borchardt, Doug Downey, Tom Hope, Daniel King, and Gabriel Stanovsky for contributing supplemental data to the \cord effort, Alex Schokking for his work on the Semantic Scholar \covid Research Feed, Darrell Plessas for technical support, and Carissa Schoenick for help with public relations. \bibliography{cord19} \bibliographystyle{acl_natbib} \appendix \section{Table parsing results} \label{app:tables} \begin{table*}[th!] \centering \small \begin{tabular}{llL{40mm}} \toprule \textbf{PDF Representation} & \textbf{HTML Table Parse} & \textbf{Source \& Description} \\ \midrule \raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf1.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse1.png}} & From \citet{Hothorn2020RelativeCD}: Exact Structure; Minimal row rules \\ [2.0cm] \raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf2.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse2.png}} & From \citet{LpezFando2020ManagementOF}: Exact Structure; Colored rows \\ [1.4cm] \raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf3.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse3.png}} & From \citet{Stringhini2020SeroprevalenceOA}: Minor span errors; Partially colored background with minimal row rules \\ [2.0cm] \raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf4.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse4.png}} & From \citet{Fathi2020PROGNOSTICVO}: Overmerge and span errors; Some section headers have row rules \\ [2.2cm] \raisebox{-0.6\totalheight}{\includegraphics[width=0.32\textwidth]{tables/pdf5.png}} & \raisebox{-0.6\totalheight}{\includegraphics[width=0.35\textwidth]{tables/parse5.png}} & From \citet{Kaushik2020MultisystemIS}: Over-splitting errors; Full row and column rules with large vertical spacing in cells \\ \bottomrule \end{tabular} \caption{A sample of table parses. Though most table structure is preserved accurately, the diversity of table representations results in some errors.} \label{tab:table_parses} \end{table*} There is high variance in the representation of tables across different paper PDFs. The goal of table parsing is to extract all tables from PDFs and represent them in HTML table format, along with associated titles and headings. In Table \ref{tab:table_parses}, we provide several example table parses, showing the high diversity of table representations across documents, the structure of resulting parses, and some common parse errors. \end{document}
https://openreview.net/forum?id=mlmwkAdIeK
mlmwkAdIeK
https://arxiv.org/abs/2008.05713
[ { "cdate": 1593931332484, "content": { "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "nominate_for_a_reproducibility_award": null, "rating": "7: Good paper, accept", "review": "This paper explores gender differences in linguistic productions between two groups of Redditors who self-identify as either \"male\" or \"female\". It examines a corpus of covid-19 pandemic threads in relation to two areas: emotion analysis (employing a VAD lexicon and word embedding representations); and topic analysis (employing the tool MALLET).\n\nThe paper's novelty is in the application of an established method to a new corpus that the authors have developed pertaining to covid-19 threads. As expected, the language usage for covid-19 posts had a lower Valence when compared to language used in a baseline corpus. There is also a general trend for the language used in the female sub-corpus scoring slightly higher in the Valence scale than male sub-corpus. The trends are reversed when Arousal and Dominance were examined: overall higher for men, and when comparing baseline to the covid-19 posts, the baselines score slightly lower for both male and female data.\n\nTo compare and contrast the different topics covered between the male and female authored posts, topic modelling was applied to each sub-corpus and the topics with the highest coherence scores were presented. However, applying topic modelling to the corpus as a whole and analysing the topic allocation of the male and female posts would give a better indication of similarities and differences of the topics covered in the sub-corpora. However, two different topic models for each sub-corpus were developed and the most cohesive topics were presented.\n\nIn general the VAD study is interesting, although unsurprsing. The goal of discovering if different or similar topics were covered in the two sub-corpora may be best approached by discovering the topics covered by the corpus as a whole and analysing the topic allocation of the sub-corpora.\n", "reviews_visibility": null, "title": "nice application to new data set to be made available" }, "ddate": null, "forum": "mlmwkAdIeK", "id": "2Aa131i9-Qi", "invitation": "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper19/-/Official_Review", "mdate": null, "nonreaders": [], "number": 3, "original": null, "readers": [ "everyone" ], "replyto": "mlmwkAdIeK", "signatures": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper19/AnonReviewer1" ], "tcdate": 1593931332484, "tddate": null, "tmdate": 1593931332484, "writers": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper19/AnonReviewer1" ] }, { "cdate": 1593797932016, "content": { "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "nominate_for_a_reproducibility_award": null, "rating": "4: Ok but not good enough - rejection", "review": "This paper aims to understand the difference between male and female discourse in social media looking at a manually annotated set of Reddit threads related to Covid-19 compared to a baseline set. They confirm existing results about male and female discourse on the VAD scale.\n\nThe paper is clear and well-written and seems to be an interesting analysis, but fails to provide the significance of the work. Further the only novelty of the work is the application to Covid-19, otherwise all methods are utilizing previous work. This is not to say the authors should re-invent the wheel. \n\n\nPros:\n- An interesting exploration of gender differences that confirms previous results.\n- A good use of previous work on a new corpus. \n\nCons:\n- Missing the overall significance for researchers, clinicians, epidemiologists, etc.\n- It is unclear why Reddit specifically is used. They mention it is the 19th most visited site world. What about the other ones that are more visited? Is Reddit truly representative of the population at large? A description of the basic characteristics of Reddit users and posts would be helpful. \n- There is also a large imbalance between male and female posts (2:1 ratio). \n- This is very heteronormative.\n- The dataset is pulled from 15 weeks starting from Feb 1 to June 1, which was a rapidly changing time. The paper would benefit from a discussion of the different topics discussed over that time in comparison to the topics pulled out by the models. Currently we are in a new \"normal\" and I think that would be reflected during the different weeks.\n- The baseline is pulled from the same time period of Covid-19. An explanation of why the baseline should be the same time frame would be helpful to understand why the baseline is not from before Covid-19 when males and females were posting \"normal\" stuff. \n- The overall results in table 1 are confusing in what is being compared and what is statistically significant. The difference between males and females for the VAD criteria may be statistically significant but it is a minor increase (< 0.2). It is unclear how important this is and what implications it has.\n- A more in depth discussion on the relevance of the most coherent topics for males and females would be helpful. ", "reviews_visibility": null, "title": "Overall the paper is okay but fails to provide the significance of the work." }, "ddate": null, "forum": "mlmwkAdIeK", "id": "Z6sH6FO5-Ai", "invitation": "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper19/-/Official_Review", "mdate": null, "nonreaders": [], "number": 2, "original": null, "readers": [ "everyone" ], "replyto": "mlmwkAdIeK", "signatures": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper19/AnonReviewer3" ], "tcdate": 1593797932016, "tddate": null, "tmdate": 1593797932016, "writers": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper19/AnonReviewer3" ] }, { "cdate": 1593146492366, "content": { "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper", "nominate_for_a_reproducibility_award": null, "rating": "7: Good paper, accept", "review": "Quality:\nOverall the paper is well written, contains re-usable data, and describes clear results.\n\nClarity:\nAuthors aims of analysis were clearly stated, as were the methods employed. Results are elucidated clearly. The paper is well written, concise, and easy to follow logically.\n\nOriginality:\nGiven the findings corroborate already established patterns of F / M speech, the exact findings that those patterns persist in covid-related speech is not particularly original. However, within the context of studying phenomena amidst a completely novel world event, covid, the findings regarding how people talk about said event are original. Combination of methodologies to perform analysis are somewhat original.\n\nSignificance:\nMohammed's VAD showed low inter-annotator agreement for A & D types. This may reduce the impact of any findings, distinctions, or variances (even if statistically significant) between the genders in these categories. Even if statistically significant, the cohen-d effect sizes between F & M are still very small (< .2 in all categories). What is the _human significance_ (not mathematical significance) of the analyzed differences? \n\nEditing suggestions: \n Clarify Fig1 caption by including \"re covid\" or something to that effect\n Typo in sentence, missing \"of\" : \"COVID-related data trends (Figure 2) show comparatively low scores for valence and high scores\nfor arousal in the early weeks [OF] our analysis (February to mid-March)\". \n This sentence comes off as sexist: \"women tend to use more positive language, while men score higher on arousal and dominance.\" Use similar terms to describe characteristics for both genders instead of saying what women do and what men score, e.g, \"Women score higher in use of positive language, while men score ...\"\n\npros\n Straightforward, solid results that established F / M speech patterns persist in novel corpus.\n Probably a decent baseline paper to use in further research on gender differences re covid speech or other domains.\n \ncons\n Statistical significance does not explain human importance of findings. \n ", "reviews_visibility": null, "title": "Overall the paper is well written, contains re-usable data, and describes clear results." }, "ddate": null, "forum": "mlmwkAdIeK", "id": "2sY3by5pJW", "invitation": "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper19/-/Official_Review", "mdate": null, "nonreaders": [], "number": 1, "original": null, "readers": [ "everyone" ], "replyto": "mlmwkAdIeK", "signatures": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper19/AnonReviewer2" ], "tcdate": 1593146492366, "tddate": null, "tmdate": 1593146492366, "writers": [ "aclweb.org/ACL/2020/Workshop/NLP-COVID/Paper19/AnonReviewer2" ] } ]
\documentclass[11pt,a4paper]{article} \usepackage[hyperref]{acl2020} \usepackage{latexsym} \usepackage{times} \usepackage{subcaption} \usepackage{graphicx} \usepackage{comment} \usepackage{color} \usepackage{booktabs} \usepackage{amsmath} \usepackage{amssymb} \usepackage{dblfloatfix} \usepackage{pbox} \usepackage{array} \usepackage{url} \renewcommand{\UrlFont}{\ttfamily\small} \usepackage{microtype} \aclfinalcopy % \newcommand\BibTeX{B\textsc{ib}\TeX} \newcommand{\Table}[1]{Tab.~\ref{#1}} \newcommand{\Algorithm}[1]{Algorithm~\ref{#1}} \newcommand{\Section}[1]{Sec.~\textit{\nameref{#1}}} \newcommand{\Example}[1]{Ex.~\ref{#1}} \newcommand{\Figure}[1]{Fig.~\ref{#1}} \newcommand{\Equation}[1]{Eqn.~(\ref{#1})} \newcommand{\EquationNP}[1]{Eqn.~\ref{#1}} \newcommand{\Sectref}[1]{Section~\ref{#1}} \newcommand{\Page}[1]{page~\pageref{#1}} \newcommand{\ella}[1]{{\color{blue}{#1}}} \newcommand{\jai}[1]{{\color{orange}{#1}}} \newcommand{\sxs}[1]{{\color{magenta}{SS: #1}}} \newcommand{\todo}[1]{{\color{red}{#1}}} \newcommand{\tocheck}[1]{{\color{purple}{#1}}} \title{Exploration of Gender Differences in COVID-19 Discourse on Reddit} \author{ Jai Aggarwal \hspace{2.8cm} Ella Rabinovich \hspace{2.8cm} Suzanne Stevenson \vspace{0.2cm} \\ Department of Computer Science, University of Toronto \vspace{0.1cm} \\ \texttt{\{jai,ella,suzanne\}@cs.toronto.edu} } \date{} \begin{document} \maketitle \begin{abstract} Decades of research on differences in the language of men and women have established postulates about preferences in lexical, topical, and emotional expression between the two genders, along with their sociological underpinnings. Using a novel dataset of male and female linguistic productions collected from the Reddit discussion platform, we further confirm existing assumptions about gender-linked affective distinctions, and demonstrate that these distinctions are amplified in social media postings involving emotionally-charged discourse related to COVID-19. Our analysis also confirms considerable differences in topical preferences between male and female authors in spontaneous pandemic-related discussions. \end{abstract} \section{Introduction} Research on gender differences in language has a long history spanning psychology, gender studies, sociolinguistics, and, more recently, computational linguistics. A considerable body of linguistic studies highlights the differences between the language of men and women in topical, lexical, and syntactic aspects \citep{lakoff1973language, labov1990intersection}, and such differences have proven to be accurately detectable by automatic classification tools \citep{koppel2002automatically,schler2006effects, schwartz2013personality}. Here, we study the differences in male (M) and female (F) language in discussions of COVID-19\footnote{We refer to COVID-19 by `COVID' hereafter.} on the Reddit\footnote{\url{https://www.reddit.com/}} discussion platform. Responses to the virus on social media have been heavily emotionally-charged, accompanied by feelings of anxiety, grief, and fear, and have discussed far-ranging concerns regarding personal and public health, the economy, and social aspects of life. In this work, we explore how established emotional and topical cross-gender differences are carried over into this pandemic-related discourse. Insights regrading these distinctions will advance our understanding of gender-linked linguistic traits, and may further help to inform public policy and communications around the pandemic. Research has considered the emotional content of social media on the topic of the COVID pandemic \citep[e.g.,][]{LwinEtAl2020, StellaEtAl2020}, but little work has looked specifically at the impact of gender on affective expression \citep{vandervegt2020women}. Gender-linked linguistic distinctions across emotional dimensions have been a subject of prolific research \citep{burriss2007psychophysiological, hoffman2008empathy, thelwall2010data}, with findings suggesting that women are more likely than men to express positive emotions, while men exhibit higher tendency to dominance, engagement, and control (although see \citet{park2016women} for an alternative finding). \citet{vandervegt2020women} compared the self-reported emotional state of male vs.\ female crowdsourced workers who contributed to the Real World Worry Dataset \citep[RWWD,][]{RWWD2020}, in which they were also asked to write about their feelings around COVID. However, because \citet{vandervegt2020women} restricted the affective analysis to the workers’ emotional ratings, it remains an open question regarding whether, and how, the natural linguistic productions of males and females about COVID will exhibit detectably different patterns of emotion. Topical analysis of social media during the pandemic has also been a focus of recent work \citep[e.g.,][]{liu_health_2020, abd-alrazaq_top_2020}, again with few studies devoted to gender differences \citep{thelwall_covid-19_2020, vandervegt2020women}. Much prior work has found distinctions in topical preferences in spontaneous productions of the two genders \citep[e.g.,][]{mulac2001empirical, mulac2006gender, newman2008gender}, showing that men were more likely to discuss money- and occupation-related topics, focused on objects and impersonal matters, while women preferred discussion on family and social life, topics related to psychological and social processes. In the recent context, \citet{thelwall_covid-19_2020} found these observations persisted in COVID-19 tweets, with a male focus on sports and politics, and female focus on family and caring. In the prompted texts of the RWWD, \citet{vandervegt2020women} also found the expected M vs.\ F topical differences, with men talking more about the international impact of the pandemic, as well as governmental policy, and women more commonly discussing social aspects -- family, friends, and solidarity. Moreover, \citet{vandervegt2020women} further found differences between the elicited short (tweet-sized) and longer essays, revealing the impact of the goal and size of the text on such analyses. Again, an open question remains concerning the topical distinctions between M and F authors in spontaneous productions without artificial restrictions on length. % Here, we aim to address the above gaps in the literature, by performing a comprehensive analysis of the similarities and differences between male and female language collected from the Reddit discussion platform. Our main corpus is a large collection of spontaneous COVID-related utterances by (self-reported) M and F authors. Importantly, we also collect productions on a wide variety of topics by the same set of authors as a `baseline' dataset. First, using a multidimensional affective framework from psychology \citep{bradley1994measuring}, we draw on a recently-released dataset of human affective ratings of words \citet{mohammad2018obtaining} to support the emotional assessment of male and female posts in our datasets. Through this approach, we corroborate existing assumptions on differences in the emotional aspects of linguistic productions of men and women in the COVID corpus. Moreover, our use of a baseline dataset enables us to further show that these distinctions are amplified in the emotionally-intensive setting of COVID discussions compared to productions on other topics. Second, we take a topic modeling approach to demonstrate detectable distinctions in the range of topics discussed by the two genders in our COVID corpus, reinforcing (to some extent) assumptions on gender-related topical preferences, in this natural discourse in an emotionally-charged context.\footnote{All data and code is available at \url{https://github.com/ellarabi/covid19-demography}.} \section{Datasets} As noted above, our goal is to analyze emotions and topics in spontaneous utterances that are relatively unconstrained by length. To that end, our main dataset comprises a large collection of spontaneous, COVID-related English utterances by male and female authors from the Reddit discussion platforms. As of May 2020, Reddit had over $430$M active users, $1.2$M topical threads (subreddits), and over $70$\% of its user base coming from English-speaking countries. Subreddits often encourage their subscribers to specify a meta-property (called a `flair', a textual tag), projecting a small glimpse about themselves (e.g., political association, country of origin, age), thereby customizing their presence within a subreddit. We identified a set of subreddits, such as `r/askmen' and `r/askwomen', where authors commonly self-report their gender, and extracted a set of unique user-ids of authors who specified male or female gender as a flair.\footnote{Although gender can be viewed as a continuum rather than binary, we limit this study to the two most prominent gender markers in our corpus: male and female.} This process yielded the user-ids for $10,421$ males and $5,630$ females (as self-reported). Using this extracted set of ids, we collected COVID-related submissions and comments\footnote{For convenience, we refer to both initial submissions and comments to submissions as `posts' hereafter.} from across the Reddit discussion platform for a period of 15 weeks, from February 1st through June 1st. COVID-related posts were identified as those containing one or more of a set of predefined keywords: `covid', `covid-19', `covid19', `corona', `coronavirus', `the virus', `pandemic'. This process resulted in over $70$K male and $35$K female posts spanning $7,583$ topical threads; the male subcorpus contains $5.3$M tokens and the female subcorpus $2.8$M tokens. Figure~\ref{fig:weekly-counts} presents the weekly amount of COVID-related posts in the combined corpus, showing a peak in early-mid March (weeks $5$--$6$). \begin{figure}[hbt] \centering \includegraphics[width=7cm]{gender-counts-plot.png} \caption{Weekly COVID-related posts by gender.} \label{fig:weekly-counts} \end{figure} Aiming at a comparative analysis between virus-related and `neutral' (baseline) linguistic productions by men and women, we collected an additional dataset comprising a randomly sampled $10$K posts per week by the same set of authors, totalling $150$K posts for each gender. The baseline dataset contains $6.8$M tokens in the male subcorpus and $5.3$M tokens in the female subcorpus. We use our COVID and baseline datasets for analysis of emotional differences as well as topical preferences in spontaneous productions by male and female authors on Reddit. The ample size of the corpora facilitates analysis of distinctions in these two aspects between the two genders in their discourse on the pandemic, and as compared to non-COVID discussion. \section{Analysis of Emotional Dimensions} \subsection{Methods} \begin{table*} \resizebox{\textwidth}{!}{ \begin{tabular}{l|rr|rr|r||rr|rr|r} \multicolumn{1}{c}{} & \multicolumn{5}{c||}{COVID-related posts} & \multicolumn{5}{c}{Baseline posts} \\ & mean(M) & std(M) & mean(F) & std(F) & eff. size & mean(M) & std(M) & mean(F) & std(F) & eff. size \\ \hline V & 0.375 & 0.12 & \textbf{0.388} & 0.11 & -0.120 & 0.453 & 0.14 & \textbf{0.459} & 0.14 & -0.043 \\ A & \textbf{0.579} & 0.09 & 0.567 & 0.08 & 0.144 & \textbf{0.570} & 0.10 & 0.559 & 0.09 & 0.109 \\ D & \textbf{0.490} & 0.08 & 0.476 & 0.07 & 0.183 & \textbf{0.486} & 0.09 & 0.469 & 0.09 & 0.185 \\ \end{tabular} } \caption{\label{tbl:vad-values} Means of M and F posts for each affective dimension, and effect size of differences within each corpus. All differences significant at p\textless$0.001$. Highest mean score for each of V, A, D, in COVID and baseline, is boldfaced.} \end{table*} \begin{figure*}[ht!] \begin{subfigure}[t]{0.1\textwidth} \includegraphics[scale=0.4]{gender-v-plot.png} \end{subfigure} \qquad \qquad \quad \qquad \qquad \quad \begin{subfigure}[t]{0.1\textwidth} \includegraphics[scale=0.4]{gender-a-plot.png} \end{subfigure} \qquad \qquad \quad \qquad \qquad \quad \begin{subfigure}[t]{0.1\textwidth} \includegraphics[scale=0.4]{gender-d-plot.png} \end{subfigure} \caption{\label{fig:vad-diachronic}Diachronic analysis of valence (left), arousal (middle), and dominance (right) scores for Reddit data.} \end{figure*} A common way to study emotions in psycholinguistics uses an approach that groups affective states into a few major dimensions, such as the Valence-Arousal-Dominance (VAD) affect representation, where \textit{valence} refers to the degree of positiveness of the affect, \textit{arousal} to the degree of its intensity, and \textit{dominance} represents the level of control \citep{bradley1994measuring}. Computational studies applying this approach to emotion analysis have been relatively scarce due to the limited availability of a comprehensive resource of VAD rankings, with (to the best of our knowledge) no large-scale study on cross-gender language. Here we make use of the recently-released NRC-VAD Lexicon, a large dataset of human ratings of $20,000$ English words \citep{mohammad2018obtaining}, in which each word is assigned V, A, and D values, each in the range $[0\text{--}1]$. For example, the word `fabulous' is rated high on the valence dimension, while `deceptive' is rated low. % In this study we aim at estimating the VAD values of posts (typically comprising multiple sentences), rather than individual words; we do so by inferring the affective ratings of sentences using those of individual words, as follows. Word embedding spaces have been shown to capture variability in emotional dimensions closely corresponding to valence, arousal, and dominance \citep{Hollis2016}, implying that such semantic representations carry over information useful for the task of emotional affect assessment. Therefore, we exploit affective dimension ratings assigned to individual words for supervision in extracting ratings of sentences. We use the model introduced by \citet{ReimersSBERT} for producing word- and sentence-embeddings using Siamese BERT-Networks,\footnote{We used the \texttt{bert-large-nli-mean-tokens} model, obtaining highest scores on a the STS benchmark.} thereby obtaining semantic representations for the $20,000$ words in \citet{mohammad2018obtaining} as well as for sentences in our datasets. This model performs significantly better than alternatives (such as averaging over a sentence's individual word embeddings and using BERT encoding \citep{ReimersSBERT}) on the SentEval toolkit, a popular evaluation toolkit for sentence embeddings \citep{Conneau2018SentEval}. Next, we trained beta regression models\footnote{An alternative to linear regression in cases where the dependent variable is a proportion (in 0\text{--}1 range).} \citep{zeileis2010beta} to predict VAD scores (dependent variables) of words from their embeddings (independent predictors), yielding Pearson's correlations of $0.85$, $0.78$, and $0.81$ on a $1000$-word held-out set for V, A, and D, respectively. The trained models were then used to infer VAD values for each sentence within a post using the sentence embeddings.\footnote{We excluded sentences shorter than 5 tokens.} A post's final score was computed as the average of the predicted scores for each of its constituent sentences. As an example, the post \textit{`most countries handled the covid-19 situation appropriately'} was assigned a low arousal score of 0.274, whereas a high arousal score of $0.882$ was assigned to \textit{`gonna shoot the virus to death!'}. \subsection{Results and Discussion} We compared V, A, and D scores of male posts to those of female posts, in each of the COVID and baseline datasets, using Wilcoxon rank-sum tests. All differences were significant, and Cohen's~$d$ \citep{cohen2013statistical} was used to find the effect size of these differences; see Table~\ref{tbl:vad-values}. We also compared the scores for each gender in the COVID dataset to their respective scores in the baseline dataset (discussed below). We further show, in Figure~\ref{fig:vad-diachronic}, the diachronic trends in VAD for M and F authors in the two sub-corpora: COVID and baseline. First, Table~\ref{tbl:vad-values} shows considerable differences between M and F authors in the baseline dataset for all three emotional dimensions (albeit a tiny effect size in valence), in line with established assumptions in this field \citep{burriss2007psychophysiological, hoffman2008empathy, thelwall2010data}: women score higher in use of positive language, while men score higher on arousal and dominance. Interestingly, the cross-gender differences in V and A are amplified between baseline and COVID data, with an increase in effect size from $0.043$ to $0.120$ for V and $0.109$ to $0.144$ for A. By comparison, virtually no difference was detected in D between M and F authors in baseline vs.\ virus-related discussions. Thus we find that men seem to use more negative and emotionally-charged language when discussing COVID than women do -- and to a greater degree than in non-COVID discussion -- presumably indicating a grimmer outlook towards the pandemic. This finding is particularly interesting, given that \citet{vandervegt2020women} find that women self-report more negative emotion in reaction to the pandemic, and underlies the importance of analysis of implicit indications of affective state in spontaneous text. COVID-related data trends (Figure~\ref{fig:vad-diachronic}) show comparatively low scores for valence and high scores for arousal in the early weeks of our analysis (February to mid-March). We attribute these findings to an increased level of alarm and uncertainty about the pandemic in its early stages, which gradually attenuated as the population learned more about the virus. As expected, both genders exhibit lower V scores in COVID discussions compared to baseline: Cohen's $d$ effect size of $-0.617$ for M and $-0.554$ for F authors. Smaller, yet considerable, differences between the two sub-corpora exist also for A and D ($0.095$ and $0.047$ for M, and $0.083$ and $0.085$, for F). These affective divergences from baseline show how emotionally-intensive is COVID-related discourse. \section{Analysis of Topical Distinctions} \begin{table*}[h!] \centering \small \begin{tabular}{ >{\centering\arraybackslash}p{1.5cm} >{\centering\arraybackslash}p{1.5cm} >{\centering\arraybackslash}p{1.5cm} >{\centering\arraybackslash}p{1.5cm}| >{\centering\arraybackslash}p{1.5cm} >{\centering\arraybackslash}p{1.5cm} >{\centering\arraybackslash}p{1.5cm} >{\centering\arraybackslash}p{1.5cm} } \textbf{M-1} & \textbf{M-2} & \textbf{M-3} & \textbf{M-4} & \textbf{F-1} & \textbf{F-2} & \textbf{F-3} & \textbf{F-4}\\ money & week & case & fuck & virus & feel & mask & week \\ economy & health & rate & mask & make & thing & hand & test \\ business & close & spread & claim & good & good & wear & hospital \\ market & food & hospital & news & thing & friend & woman & sick \\ crisis & open & week & post & vaccine & talk & food & patient \\ make & travel & month & comment & point & make & face & symptom \\ economic & supply & testing & call & happen & love & call & doctor \\ pandemic & store & social & article & human & parent & store & positive \\ lose & stay & lockdown & chinese & body & anxiety & close & start \\ vote & plan & measure & medium & study & read & stay & care \\ \end{tabular} \caption{Most coherent topics identified in male (\textbf{M-1}--\textbf{M-4}) and female (\textbf{F-1}--\textbf{F-4}) COVID-related posts.} \label{tbl:topic-modeling} \end{table*} \begin{table*} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|l|l|c|c} & \multicolumn{1}{c|}{Topic} & \multicolumn{1}{c|}{Keywords} & \multicolumn{1}{c|}{Male} & \multicolumn{1}{c}{Female} \\ \hline \textbf{1} & \textbf{Economy} & {money, business, make, month, food, economy, market, supply, store, cost} & \textbf{0.17} & \textbf{0.10} \\ \hline \textbf{2} & \textbf{Social} & {feel, thing, live, good, make, friend, talk, love, hard, start} & \textbf{0.07} & \textbf{0.26} \\ \hline 3 & Distancing & close, social, health, open, plan, stay, travel, week, continue, risk & 0.09 & 0.11 \\ \hline 4 & Virus & virus, kill, human, disease, study, body, spread, effect, similar, immune & 0.11 & 0.07 \\ \hline 5 & Health (1) & mask, hand, stop, make, call, good, wear, face, person, woman & 0.07 & 0.08 \\ \hline 6 & Health (2) & case, test, hospital, rate, spread, patient, risk, care, sick, testing & 0.17 & 0.14 \\ \hline \textbf{7} & \textbf{Politics} & {problem, issue, change, response, vote, policy, support, power, action, agree} & \textbf{0.17} & \textbf{0.07} \\ \hline 8 & Media & point, make, question, post, news, read, fact, information, understand, article & 0.08 & 0.07 \\ \hline 9 & Misc. & good, start, thing, make, hour, stuff, play, pretty, find, easy & 0.08 & 0.10 \\ \end{tabular} } \caption{\label{tbl:topic-dist} Distribution of dominant topics in the COVID corpus. Entries in columns M(ale) and F(emale) represent the ratio of posts with the topic in that row as their main topic. Ratios are calculated for M and F posts separately (each of columns M and F sum to $1$). Bolded topics indicate those with substantial differences between M and F.} \end{table*} We study topical distinctions in male vs.\ female COVID-related discussions with two complementary analyses: (1) comparison of topics found by topic modelling over each of the M and F subcorpora separately, and (2) comparison of the distribution of dominant topics in M vs.\ F posts as derived from a topic model over the entire M+F dataset. For each analysis, we used a publicly-available topic modeling tool \citep[MALLET,][]{McCallumMALLET}. Each topic is represented by a probability distribution over the entire vocabulary, where terms more characteristic of a topic are assigned a higher probability.\footnote{Prior to topic modeling we applied a preprocessing step including lemmatization of a post's text and filtering out stopwords (the $300$ most frequent words in the corpus).} A common way to evaluate a topic learned from a set of documents is by computing its \textit{coherence score} -- a measure reflecting its overall quality \cite{newman2010automatic}. We assess the quality of a learned model by averaging the scores of its individual topics -- the \textit{model} coherence score. \textbf{Analysis of Cross-gender Topics.} Here we explore topical aspects of the productions of the two genders by comparing two topic models: one created using M posts, and another using F posts, in the COVID dataset. We selected the optimal number of topics for each set of posts by maximizing its model coherence score, resulting in $8$ topics for male and $7$ topics for female posts (coherence scores of $0.48$ and $0.46$). We examined the similarities and the differences across the two topical distributions by extracting the top $4$ topics -- those with the highest individual coherence scores -- in each of the M and F models. Table~\ref{tbl:topic-modeling} presents the $10$ words with highest likelihood for these topics in each model; topics within each are ordered by decreasing coherence score (left to right). We can see that both genders are occupied with health-related issues (topics \textbf{M\text{-}3}, \textbf{F\text{-}1}, \textbf{F\text{-}4}), and the implications on consumption habits (topics \textbf{M\text{-}2}, \textbf{F\text{-}3}). However, clear distinctions in topical preference are also revealed by our analysis: men discuss economy/market and media-related topics (\textbf{M\text{-}1}, \textbf{M\text{-}4}), while women focus more on family and social aspects (\textbf{F\text{-}2}). Collectively these results show that the established postulates regarding gender-linked topical preferences are evident in spontaneous COVID-related discourse on Reddit. \textbf{Analysis of Dominance of Topics across Genders.} We next performed a complementary analysis, creating a topic model over the combined male and female sub-corpora, yielding $9$ topics.\footnote{We used the model with the 2nd-best number of topics (9, coherence score 0.432) as inspection revealed it to be more descriptive than the optimal number of topics (2, score 0.450).} We calculate, for the two sets of M and F posts, the distribution of dominant topics -- that is, for each of topics $1$--$9$, what proportion of M (respectively F) posts had that topic as its first-ranked topic. Table~\ref{tbl:topic-dist} reports the results; e.g., row 1 shows that the economy is the main topic of 17\% of male posts, but only 10\% of female posts. We see that males tend to focus more on economic and political topics than females (rows $1$ and $7$); conversely, females focus far more on social topics than did males (row $2$). Once again, these findings highlight cross-gender topical distinctions in COVID discussions on Reddit in support of prior results. \section{Conclusions} A large body of studies spanning a range of disciplines has suggested (and corroborated) assumptions regarding the differences in linguistic productions of male and female speakers. Using a large dataset of COVID-related utterances by men and women on the Reddit discussion platforms, we show clear distinctions along emotional dimensions between the two genders, and demonstrate that these differences are amplified in emotionally-intensive discourse on the pandemic. Our analysis of topic modeling further highlights distinctions in topical preferences between men and women. \section*{Acknowledgments} This research was supported by NSERC grant RGPIN-2017-06506 to Suzanne Stevenson, and by an NSERC USRA to Jai Aggarwal. \bibliographystyle{acl_natbib} \bibliography{anthology,main} \end{document}
https://openreview.net/forum?id=qd51R0JNLl
qd51R0JNLl
https://arxiv.org/abs/2005.12522
[{"cdate":1593448164581,"content":{"confidence":"4: The reviewer is confident but not absolutely cer(...TRUNCATED)
"\\pdfoutput=1\n\n\\documentclass[11pt,a4paper]{article}\n\\usepackage[hyperref]{acl2020}\n\\usepack(...TRUNCATED)
https://openreview.net/forum?id=JQCYcdHfXyJ
JQCYcdHfXyJ
https://arxiv.org/abs/2004.04225
[{"cdate":1588604168441,"content":{"confidence":"5: The reviewer is absolutely certain that the eval(...TRUNCATED)
"\\pdfoutput=1\n\\documentclass[11pt,a4paper]{article}\n\\PassOptionsToPackage{breaklinks}{hyperref}(...TRUNCATED)
https://openreview.net/forum?id=ub9_2iAo3D
ub9_2iAo3D
https://arxiv.org/abs/2006.03202
[{"cdate":1594069497050,"content":{"confidence":"4: The reviewer is confident but not absolutely cer(...TRUNCATED)
"\n\\documentclass[11pt,a4paper]{article}\n\\usepackage[hyperref]{acl2020}\n\\usepackage{times}\n\\u(...TRUNCATED)
https://openreview.net/forum?id=PlUA_mgGaPq
PlUA_mgGaPq
https://arxiv.org/abs/2004.05125
[{"cdate":1587548155603,"content":{"confidence":"5: The reviewer is absolutely certain that the eval(...TRUNCATED)
"\\documentclass[11pt,a4paper]{article}\n\\usepackage[hyperref]{acl2020}\n\\usepackage{times}\n\\use(...TRUNCATED)
https://openreview.net/forum?id=p4SrFydwO5
p4SrFydwO5
https://arxiv.org/abs/2207.03574
[{"cdate":1638240968734,"content":{"confidence":"4: The reviewer is confident but not absolutely cer(...TRUNCATED)
"\n\\documentclass[nohyperref]{article}\n\n\\usepackage{microtype}\n\\usepackage{graphicx}\n\\usepac(...TRUNCATED)
https://openreview.net/forum?id=u_lOumlm7mu
u_lOumlm7mu
https://arxiv.org/abs/2203.14126
[{"cdate":1638168751302,"content":{"confidence":"3: The reviewer is fairly confident that the evalua(...TRUNCATED)
"\n\\documentclass[sigconf]{aamas} \n\\usepackage{balance} %\n\\usepackage{packages}\n\\usepackage{c(...TRUNCATED)
https://openreview.net/forum?id=LGlhzn1ZJl
LGlhzn1ZJl
https://arxiv.org/abs/2111.07035
[{"cdate":1638171182680,"content":{"confidence":"4: The reviewer is confident but not absolutely cer(...TRUNCATED)
"\n\\def\\year{2022}\\relax\n\n\\documentclass[letterpaper]{article} %\n\\usepackage{aaai22} (...TRUNCATED)
End of preview. Expand in Data Studio
README.md exists but content is empty.
Downloads last month
6