Fighting fake news in the COVID-19 era: policy insights from an equilibrium model


Abstract

The COVID-19 crisis has revealed structural failures in governance and coordination on a global scale. With related policy interventions dependent on verifiable evidence, pandemics require governments to not only consider the input of experts but also ensure that science is translated for public understanding. However, misinformation and fake news, including content shared through social media, compromise the efficacy of evidence-based policy interventions and undermine the credibility of scientific expertise with potentially longer-term consequences. We introduce a formal mathematical model to understand factors influencing the behavior of social media users when encountering fake news. The model illustrates that direct efforts by social media platforms and governments, along with informal pressure from social networks, can reduce the likelihood that users who encounter fake news embrace and further circulate it. This study has implications at a practical level for crisis response in politically fractious settings and at a theoretical level for research about post-truth and the construction of fact.

Keywords: Fake news, Policy sciences, Equilibrium model, COVID-19

Introduction

‘This is a free country. Land of the free. Go to China if you want communism’ yelled an American protester at a nurse counter-protesting the resumption of commercial activity 5 weeks into the country’s COVID-19 crisis (Armus and Hassan 2020). Like many policy challenges, the COVID-19 crisis is exposing deep-seated political and epistemological divisions, fueled in part by contestation over scientific evidence and ideological tribalism stoked in online communities. The proliferation of social media has democratized access to information with evident benefits, but also raises concerns about the difficulty users face in distinguishing between truth and falsehood. The perils of ‘fake news’—false information masquerading as verifiable truth, often disseminated online—are acutely apparent during public health crises, when false equivalence is drawn between scientific evidence and uninformed opinion.

In an illustrative episode from April 2020, the scientific community’s broad consensus about the need for social distancing to limit the spread of COVID-19 was challenged by protesters in the American states of Minnesota, Michigan, and Texas, who demanded in rallies that governors immediately relax social distancing protocols and re-open shuttered businesses. Populist skepticism about COVID-19 response in the USA had arguably been growing since President Donald Trump’s early dismissals of the severity of the virus (US White House 2020) and his call for protestors to ‘liberate’ states undertaking containment measures (Shear and Mervosh 2020). These actions were seen by some as evidence of the presidential administration’s willingness to politicize virus response; indeed, critical language from some politicians and commentators cast experts and political opponents as unnecessarily panicky and politically motivated to overstate the need for lock-downs and business closures. Despite the salience of this recent phenomenon, anti-science populism has an arguably long history, not only for issues related to public health (e.g., virus response and, prior to COVID-19, vaccinations) but also for climate change (Fischer 2020; Huber 2020; Lejano and Dodge 2017; Lewandowsky et al. 2015). Anti-science skepticism, often lacking a broad audience and attention from mainstream media, is left to peddle scientifically unsubstantiated claims in online communities, where such content remains widely accessible and largely unregulated (Edis 2020; Szabados 2019). As such, the issue of fake news deserves closer scrutiny with the world facing its greatest public health crisis in a century.

There is no consensus definition of fake news (Shu et al. 2017). Based on a survey of articles published between 2003 and 2017, Tandoc et al. (2018) propose a typology for how the concept can be operationalized: satire, parody, fabrication, manipulation, propaganda, and advertising. Waszak et al. (2018) propose a similar typology (with the overlapping categories of fabricated news, manipulated news, and advertising news) but add ‘irrelevant news’ to capture the cooptation of health terms and topics to support unrelated arguments. Shu et al. (2017) cite verifiable lack of authenticity and intent to deceive as general characteristics of fake news. Making a distinction between fake news and brazen falsehoods, which has implications for this study’s focus on the behavior of the individual information consumer, Tandoc et al. (2018) argue ‘while news is constructed by journalists, it seems that fake news is co-constructed by the audience, for its fakeness depends a lot on whether the audience perceives the fake as real. Without this complete process of deception, fake news remains a work of fiction’ (p. 148).

Amidst the COVID-19 crisis, during which trust in government is not merely an idle theoretical topic but has substantial implications for public health, deeper scholarly understandings about the power and allure of fake news are needed. According to Porumbescu (2018), ‘the evolution of online mass media is anything but irrelevant to citizens’ evaluations of government, with discussions of “fake news,” “alternative facts,” “the deep state,” and growing political polarization rampant’ (p. 234). With the increasing level of global digital integration comes the growing difficulty of controlling the dissemination of misinformation. Efforts by social media platforms (understood here as the underlying organizational structures and operations; hereafter SMPs) and governments have targeted putative sources of misinformation, but engagement (i.e., sharing and promoting links) with fake news by individual users is an additional realm in which the problem of fake news can be addressed.

Examining the motivations driving an individual’s engagement with fake news, this study introduces a formal mathematical model that illustrates the cost to an individual of making low- or high-level efforts to resist fake news. The intent is to reveal mechanisms by which SMPs and governments can intervene at individual and broader scales to contain the spread of willful misinformation. This article continues with a literature review focusing on fake news in social media and policy efforts to address it. This is followed by the presentation of the model, with a subsequent section focusing on policy insights and recommendations that connect the findings of the model to practical implications. The conclusion reflects more broadly on the ‘post-truth’ phenomenon as it relates to policymaking and issues a call for continued research around epistemic contestation.

Literature review

A canvassing of literature about fake news could draw from an array of disciplines including communications, sociology, psychology, and economics. We focus on the treatment of fake news by the public policy literature—an angle that engages discussions about cross-cutting issues like misinformation, politicization of fact, and the use of knowledge in policymaking. The review is in two parts. The first focuses on the intersection of fake news, social media, and pandemics (in particular the COVID-19 crisis), and the second on policy efforts to address individual reactions to fake news.

Fake news, social media, and pandemics

Fake news and social media as topics of analysis are closely intertwined, as the latter is considered a principal conduit through which the former spreads; indeed, Shu et al. (2017) call social media ‘a powerful source for fake news dissemination’ (p. 23). The aftermath of the 2016 US presidential election sent scholars scrambling to the topic of fake news, misinformation, and populism; as such, information-filtering through political and cognitive bias is a topic now enjoying a spirited revival in the literature (Fang et al. 2019; Polletta and Callahan 2019; Cohen 2018; Allcott and Gentzkow 2017; DiFranzo and Gloria-Garcia 2017; Flaxman et al. 2016; Zuiderveen Borgesius et al. 2016). A popular heuristic for conceptualizing the phenomenon of social media-enabled fake news is the notion of the ‘echo chamber’ effect (Shu et al. 2017; Barberá et al. 2015; Agustín 2014; Jones et al. 2005), in which information consumers intentionally self-expose only to content and communities that confirm their beliefs and perceptions while avoiding those that challenge them. The effect leads to the development of ideologically homogenous social networks whose members derive collective satisfaction from frequently repeated narratives (a process McPherson et al. (2001) label ‘homophily’). This phenomenon leads to ‘filter bubbles’ (Spohr 2017) in which ‘algorithmic curation and personalization systems […] decreases [users’] likelihood of encountering ideologically cross-cutting news content’ (p. 150). The filtering mechanism is both self-imposed and externally imposed based on the algorithmic efforts of SMPs to circulate content that maintains user interest (Tufekci 2015). For example, in a Singapore-based study of the motivations behind social media users’ efforts to publicly confront fake news, Tandoc et al. (2020) find that users are driven most by the relevance of the issue covered, their interpersonal relationships, and their ability to convincingly refute the misinformation; according to the authors, ‘participants were willing to correct when they felt that the fake news post touches on an issue that is important to them or has consequences to their loved ones and close friends’ (p. 393). As the issue of misinformation has gained further salience during the COVID-19 episode, this review continues by exploring scholarship about fake news in the context of pandemics.

Research has shown that fake news and misinformation can have detrimental effects on public health. In the context of pandemics, fake news operates by ‘masking healthy behaviors and promoting erroneous practices that increase the spread of the virus and ultimately result in poor physical and mental health outcomes’ (Tasnim et al. 2020; n.p.), by limiting the dissemination of ‘clear, accurate, and timely transmission of information from trusted sources’ (Wong et al. 2020; p. 1244), and by compromising short-term containment efforts and longer-term recovery efforts (Shaw et al. 2020). First used by World Health Organization Director-General Tedros Adhanom Ghebreyesus in February 2020 to describe the rapid global spread of misinformation about COVID-19 through social media (Zarocostas 2020), the term ‘infodemic’ has recently gained popularity in pandemic studies (Hu et al. 2020; Hua and Shaw 2020; Medford et al. 2020; Pulido et al. 2020). Similar terms are ‘pandemic populism’ (Boberg et al. 2020) and the punchy albeit casual ‘covidiocy’ (Hogan 2020). Predictably, misinformation has proliferated with the rising salience of COVID-19 (Cinelli et al. 2020; Frenkel et al. 2020; Hanafiah and Wan 2020; Pennycook et al. 2020; Rodríguez et al. 2020; Singh et al. 2020). In an April 2020 press conference, US President Donald Trump made ambiguous reference to the possible value of ingesting disinfectants to treat the virus (New York Times 2020), an utterance that elicited both concern and ridicule.

Scholarly efforts to understand misinformation in the COVID-19 pandemic contribute to an existing body of similar research in other contexts, including the spread of fake news during outbreaks of Zika (Sommariva et al. 2018), Ebola (Spinney 2019; Fung et al. 2016), and SARS (Taylor 2003). Research about misinformation and COVID-19 draws also on existing research about online information-sharing behaviors more generally. For example, in a study about the role of fake news in public health, Waszak et al. (2018) find that among the most shared links on common social media, 40 percent contained fallacious content (with vaccination having the highest incidence, at 90 percent). Taking a still broader view, understandings about the politicization of public health information draw from research about science denialism more generally, including the process by which climate denial narratives as ‘alternative facts’ are socio-culturally constructed to protect ideological imaginaries (see Fischer (2019) for a similar discussion related to climate change). On the other hand, there is also evidence that the use of social media as a conduit for information dissemination by public health authorities and governments has been useful, including communicating the need for social distancing, indicating support for healthcare workers, and providing emotional encouragement during lock-down (Thelwall and Thelwall 2020). As such, it is crucial to distinguish productive uses of social media from unproductive ones, with the operative characteristic being the effect on the safety and wellbeing of information consumers and the broader public.

Policy efforts to address individual reactions to fake news

The second part of this review explores literature about policy efforts to address fake news, insofar as it is understood as a policy problem. The phenomenon of fake news can be considered an individual-level issue, and this is the perspective adopted by this review and study. Within a ‘marketplace’ of information exchange, consumers encounter information and must decide whether to engage with it or to discredit and dismiss it. As such, many policy interventions targeting fake news focus on verification, helping to equip social media users with the tools to identify and confront fake news (Torres et al. 2018). Nevertheless, the efficacy of such policy tools depends on their calibration to individual cognitive and emotional characteristics. For example, Lazer et al. (2018) outline cognitive biases that determine the allure of fake news, including self-selection (limiting one’s consumption only to affirming content), confirmation (giving greater credibility to affirming content), and desirability (accepting only affirming content). To this list Rini (2017) adds ‘credibility excess’ as a way of ascribing ‘inappropriately high testimonial credibility [to a news item] on the basis of [the source’s] demography’ (p. E-53).

A well-developed literature also indicates that cognitive efforts and characteristics determine an individual’s willingness to engage with fake news. In a study about individual behaviors in response to the COVID-19 crisis in the USA, Stanley et al. (2020) find that ‘individuals less willing to engage effortful, deliberative, and reflective cognitive processes were more likely to believe the pandemic was a hoax, and less likely to have recently engaged in social-distancing and hand-washing’ (n.p.). The individual cognitive perspective is utilized also by Castellacci and Tveito (2018) in a review of literature about the impact of internet use on individual wellbeing: ‘the effects of Internet on wellbeing are mediated by a set of personal characteristics that are specific to each individual: psychological functioning, capabilities, and framing conditions’ (p. 308). Ideological orientation has likewise been found to be associated with perceptions about and reactions to fake news; for example, Guess et al. (2019) find in a study of Facebook activity during the 2016 US presidential election that self-identifying political conservatives (the ‘right’) were more likely than political liberals (the ‘left’) to share fake news and that the user group aged 65 and older (controlling for ideology) shared over six times more fake news articles than did the youngest user group. A similar age-related effect on psychological responses to social media rumors is observed by He et al. (2019) in a study of usage patterns for the messaging application WeChat; older users who are new to the application struggle more to manage their own rumor-induced anxiety. Network type also plays a role in determining fake news engagement; circulation of fake news and misinformation was found to be higher among anonymous and informal (individual and group) social media accounts than among official and formal institutional accounts (Kouzy et al. 2020).

The analytical value of connecting individual behavior with public policy interventions has prompted studies about the conduits through which policies influence social media consumers. According to Rini (2017), the problem of fake news ‘will not be solved by focusing on individual epistemic virtue. Rather, we must treat fake news as a tragedy of the epistemic commons, and its solution as a coordination problem’ (p. E-44). This claim makes reference to a scale and topic – the actions of government – that are underexplored in studies about the failure to limit the spread of fake news. Venturing as well into the realm of interpersonal dynamics, Rini continues by arguing that the development of unambiguous norms can enhance individual accountability, particularly around the transmission of fake news through social media sharing as a ‘testimonial endorsement’ (p. E-55). Extending the conversation about external influences on individual behavior, Lazer et al. (2018) classify ‘political interventions’ into two categories: (1) empowerment of individuals to evaluate fake news (e.g., training, fact-checking websites, and verification mechanisms embedded within social media posts to evaluate information source authenticity) and (2) SMP-based controls on dissemination of fake news (e.g., identification of media-active ‘bots’ and ‘cyborgs’ through algorithms). The authors also advocate wider applicability of tort lawsuits related to the harm caused by individuals sharing fake news. From a cognitive perspective, Van Bavel et al. (2020) add ‘prebunking’ as a form of psychological inoculation that exposes users to a modest amount of fake news with the purpose of helping them develop an ability to recognize it; the authors cite support in similar studies by van der Linden et al. (2017) and McGuire (1964). These interventions, in addition to crowd-sourced verification mechanisms whereby users rate the perceived accuracy of social media posts by other users, have the goal of conditioning and nudging users to reflect more deeply on the accuracy of the news they encounter.

Given that the model introduced in the following section concerns the issue of fake news from the perspective of individual behavior and that topics addressed by fake news are often ideologically contentious, it is appropriate to acknowledge the literature related to cognitive bias, beliefs, and ideologies with reference to political behavior. In a study about narrative framing, Mullainathan and Shleifer (2005) explore how the media, whether professional or otherwise, seeks to satisfy the confirmation biases of targeted or segmented viewer groups; when extended to the current environment of online media, narrative targeting increases the likelihood that fake news will be shared due to its attractiveness to particular audiences. In turn, this narrative targeting perpetuates the process by which information consumers construct their own narratives about political issues in a way that comports with their ideologies (Kim and Fording 1998; Minar 1961) and personalities (Duckitt and Sibley 2016; Caprara and Vecchione 2013). This tendency is shown to be strongly influenced not only by (selectively observed) reality but also by individual perceptions that Kinder (1978; p. 867) labels ‘wishful thinking.’ Further, the cognitive tendency to classify reality through sweeping categorizations (e.g., ‘liberals’ vs. ‘conservatives,’ a common polarity in American politics) compels individuals to associate more strongly with one side and distance further from the other (Vegetti and Širinic 2019; Devine 2015) and thereby biases an individual’s cognitive processing of information (Van Bavel and Pereira 2018). This observation is arguably relevant to the current online discourses and content of fake news, which often reflect extreme party-political rivalries and left–right partisanship (Spohr 2017; Gaughan 2016). These issues are relevant as they bear strongly on the choices of individuals about effort levels related to their interactions with and subjective judgments of fake news.

Finally, few attempts have been made to apply formal mathematical modeling to understand the behavior of individuals with respect to the consumption of fake news on social media. Notable exceptions include Shu et al. (2017), who model the ability of algorithms to detect fake news; Tong et al. (2018), who model the ‘multi-cascade’ diffusion of fake news; and Papanastasiou (2020), who uses a formal mathematical model to illustrate the behavior of SMPs in response to users’ sharing of fake news. To this limited body of research we contribute a formal mathematical model addressing the motivations of users to engage with or dismiss fake news when encountered.

The model

The model presented in this section examines the behavior of a hypothetical digital citizen (DC) who encounters fake news while using social media. The vulnerability of the DC in this encounter depends on the DC’s level of effort in resisting fake news. To illustrate this dynamic, we adopt the modeling approach used by Lin et al. (2019) and Hartley et al. (2019) that considers the equilibrium choices of a rational decision-maker as determined by individual attributes and external factors. The advantage of this model is its incorporation of factors related to ethical standards in addition to cost–benefit considerations in decision-making. This type of model has been widely used in studies related to taxpayer behavior (Eisenhauer 2006, 2008; Yaniv 1994; Beck and Jung 1989; Srinivasan 1973), in which the tension between ethics and perceived benefits of acting unethically is comparable to that faced by a DC.

The model’s parameters are intended to aid thinking about issues within the ambit of public policy, making clearer the assumptions about individual behaviors and consequences of those behaviors as addressable by government interventions. While the model examines DC behavior, it is not intended to be meaningful for research about psychology and individual or group behavior more generally; mentions of behavior and its motivations and effects are made in service only to arguments about the policy implications of governing behavior in free societies. The remainder of this section specifies the model, which aims to formally and systematically observe dynamics among effort levels, standards, and utility for consuming and sharing fake news.

Choice of effort level

The model assumes that the DC is a rational decision-maker; that is, in reacting to fake news she maximizes her utility as determined by two factors: consumer utility (the benefits accruing to the DC by engaging in a particular way with fake news) and ethical standards. Regarding consumer utility, the DC makes an implicit cost–benefit analysis; for ethical standards, she behaves consistently with her personally held ethical norms. For simplicity, we assume that the DC’s overall utility function U takes the following form:

U(e) = α(e) · W(e) + (1 − α(e)) · S    (1)

e is the DC’s effort level in reacting to fake news. For simplicity of exposition, the DC is assumed to choose from two levels of effort: low (e = e_L) and high (e = e_H). If the DC chooses low effort, she increases her consumption of fake news; with high effort, she reduces her consumption. Defining q = e_H/e_L, we assume that q > 1 and that a higher q represents a higher cost to the DC because it reflects a higher level of effort.

α(e) is the weight the DC gives to her consumer utility and 1 − α(e) is the weight she gives to her utility from ethical behavior. We assume that α is a function of the effort level as follows: α(e_H) = α (with 0 < α < 1) and α(e_L) = 1. That is, by choosing the high effort level, the DC derives her utility not only from consumption (α > 0) but also from ethical behavior (1 − α > 0). By choosing the low effort level, the DC derives her utility only from consumption (α = 1).

W(e) is the consumer utility that the DC gains from engagement with social media, which is assumed to depend on her effort level as follows: W(e_H) = W and W(e_L) = W/ω, where ω > 1. That is, if the DC chooses the low effort level, she derives less consumer utility due to the negative effects of engaging with fake news (as described in the literature review).

S represents the DC’s utility from observing her own standards (beyond cost–benefit considerations) for guiding online behavior. The assumption is that the DC gains utility by aligning her behavior choice (regarding whether to engage with fake news) with ethical norms and her own beliefs.
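To make these definitions concrete, the model can be sketched numerically. The following Python snippet is our own illustration rather than code from the article: it assumes the utility form U(e) = α(e)·W(e) + (1 − α(e))·S as reconstructed from the definitions above, and all parameter values are arbitrary choices for demonstration.

```python
# Illustrative sketch (ours, not from the article) of the DC's utility
# under low and high effort. Parameter values are arbitrary assumptions.

def utility(effort: str, alpha: float, W: float, omega: float, S: float) -> float:
    """Return U(e) for a digital citizen (DC).

    effort: "high" or "low"
    alpha:  weight on consumer utility at high effort, 0 < alpha < 1
    W:      consumer utility from social media at high effort
    omega:  discount on consumer utility at low effort, omega > 1
    S:      utility from observing the DC's own ethical standards
    """
    if effort == "high":
        # alpha(e_H) = alpha: utility mixes consumption and ethical behavior
        return alpha * W + (1 - alpha) * S
    if effort == "low":
        # alpha(e_L) = 1 and W(e_L) = W/omega: consumption only, discounted
        return W / omega
    raise ValueError("effort must be 'high' or 'low'")

# Example: a DC who weights consumption at alpha = 0.6, with W = 10, omega = 2, S = 8.
u_high = utility("high", alpha=0.6, W=10.0, omega=2.0, S=8.0)
u_low = utility("low", alpha=0.6, W=10.0, omega=2.0, S=8.0)
assert u_high > u_low  # with these values, high effort yields more utility
```

With these illustrative values, α·W + (1 − α)·S = 9.2 exceeds W/ω = 5, so the DC prefers high effort; as ω approaches 1 (little utility loss from engaging with fake news) or as S falls, low effort becomes relatively more attractive.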

Given the utility function in Eq. (1), the DC exerts high effort in reacting to fake news if and only if: