Stage 1 Registered Report: Testing the Causal Impact of Social Media Reduction Around the Globe
Rathje, Steve, Nejla Asimovic, Tiago Ventura, Sarah Mughal, Hannah Karsting, Claire Robertson, Christopher Barrie, Joshua A. Tucker, and Jay Van Bavel
· Nature
· Stage 1 Registered Report Accepted in Principle
· Forthcoming
A global field experiment across 23 countries measuring how reducing social media usage for two weeks affects news knowledge, exposure to online hostility, intergroup attitudes, and well-being. Stage 1 peer review is complete; data collection is scheduled throughout 2026.
Online Information Environment · Political Polarization · Media Consumption
Elite & Mass Political Behavior · Data Science Methodology
Labeling Social Media Posts: Does Showing Coders Multimodal Content Produce Better Human Annotation, and a Better Machine Classifier?
Chen, Haohan, James Bisbee, Joshua A. Tucker, and Jonathan Nagler
· Political Science Research and Methods
· Conditionally accepted
· Forthcoming
The increasing multimodality of social media data presents opportunities and challenges. The authors develop five measures and an experimental framework to assist with decisions about whether to collect and label only the textual content of social media data or their full multimodal content, illustrated with a tweet labeling experiment.
Data Science Methodology · Online Information Environment
Allcott, Hunt, Matthew Gentzkow, Ro'ee Levy, et al., and Joshua A. Tucker
· Nature Human Behaviour
· 2026
We study the effects of social media political advertising by randomizing subsets of 36,906 Facebook users and 25,925 Instagram users to have political ads removed from their news feeds for 6 weeks before the 2020 US presidential election. We show that most presidential ads were targeted towards parties’ own supporters and that fundraising ads were the most common. On both Facebook and Instagram, we found no detectable effects of removing political ads on political knowledge, polarization, perceived legitimacy of the election, political participation (including campaign contributions), candidate favourability and turnout. This was true overall and for both Democrats and Republicans separately.
Allcott, Hunt, Matthew Gentzkow, Benjamin Wittenbrink, et al., and Joshua A. Tucker
· NBER Working Paper 33697
· 2025
Two randomized experiments conducted before the 2020 U.S. election examined social media deactivation effects. Deactivating Facebook for six weeks before the election produced a 0.060 standard deviation improvement in an index of happiness, depression, and anxiety. Instagram deactivation showed similar but smaller gains (0.041 SD). Effects varied by demographics, with Facebook benefits concentrated among those 35+, while Instagram benefits concentrated among women under 25.
Erlich, Aaron, Kevin Aslett, Sarah Graham, and Joshua A. Tucker
· Journal of Experimental Political Science
· 1-15
· 2025
Investigates whether bilinguals in multilingual contexts evaluate misinformation differently depending on the language in which it is presented. A 10-week survey experiment in Ukraine (May-July 2024) found that Ukrainian-preferring respondents reading stories in Russian reduced their belief in misinformation significantly, while Russian-preferring respondents showed opposite patterns. Language preference affected belief in both true and false stories equally, yielding null effects on overall discernment.
Macdonald, Maggie, Megan Brown, Joshua A. Tucker, and Jonathan Nagler
· Electoral Studies
· 95:102907
· 2025
Examined whether candidates shift ideologically during election cycles by analyzing messaging ideology from 2020 congressional campaigns before and after primaries using a homophily-based measure of domains shared on Twitter. Key finding: incumbents in safe seats moved towards the extreme before their primaries and back towards the center for the general election, but only when threatened by a well-funded primary challenge.
News sharing on social media: mapping the ideology of news media, politicians, and the mass public
Eady, Gregory, Richard Bonneau, Joshua A. Tucker, and Jonathan Nagler
· Political Analysis
· 33(2): 73-90
· 2025
Examines information sharing behavior of U.S. politicians and the mass public by mapping the ideological sharing space of political news on social media. The methodology employs web links as data to jointly measure ideology across news organizations, politicians, and citizens. Politicians sharing highly polarized content account for the most political news distribution, and electoral competitiveness inversely correlates with polarized sharing.
Media Consumption · Online Information Environment · Partisanship
Lühiste, Maarja, Stiene Praet, Sebastian Adrian Popa, Yannis Theocharis, Pablo Barberá, Zoltán Fazekas, and Joshua A. Tucker
· Politics & Gender
· 1-28
· 2025
Past research alerts to the increasingly unpleasant climate surrounding public debate on social media. Female politicians, in particular, are reporting serious attacks targeted at them. Yet, research offers inconclusive insights regarding the gender gap in online incivility. This paper aims to address this gap by comparing politicians with varying levels of prominence and public status in different institutional contexts. Using a machine learning approach for analyzing over 23 million tweets addressed to politicians in Germany, Spain, the United Kingdom, and the United States, we find little consistent evidence of a gender gap in the proportion of incivility. However, more prominent politicians are considerably and consistently more likely than others to receive uncivil attacks. While prominence influences US male and female politicians’ probability to receive uncivil tweets the same way, women in our European sample receive incivility regardless of their status. Most importantly, the incivility varies in quality and across contexts, with women, especially in more plurality contexts, receiving more identity-based attacks than other politicians.
Zilinsky, Jan, Joshua A. Tucker, and Jonathan Nagler
· American Politics Research
· 53(2): 91-102
· 2025
Research in political science suggests campaigns have a minimal effect on voters’ attitudes and vote choice. We evaluate the effectiveness of the 2016 Trump and Clinton campaigns at informing voters by giving respondents an opportunity to name policy positions of candidates that they felt would make them better off. The relatively high rates of respondents’ ability to name a Trump policy that would make them better off suggests that the success of his campaign can be partly attributed to its ability to communicate memorable information. Our evidence also suggests that cable television informed voters: respondents exposed to higher levels of liberal news were more likely to be able to name Clinton policies, and voters exposed to higher levels of conservative news were more likely to name Trump policies; these effects hold even conditioning on respondents’ ideology and exposure to mainstream media. Our results demonstrate the advantages of using novel survey questions and provide additional insights into the 2016 campaign that challenge one part of the conventional narrative about the presumed non-importance of operational ideology.
Waight, Hannah, Solomon Messing, Anton Shirikov, Margaret E. Roberts, Jonathan Nagler, Jason Greenfield, Megan A. Brown, Kevin Aslett, and Joshua A. Tucker
· Sociological Methods & Research
· 54(3): 933-983
· 2025
How can one understand the spread of ideas across text data? This is a key measurement problem in sociological inquiry, from the study of how interest groups shape media discourse, to the spread of policy across institutions, to the diffusion of organizational structures and institutions themselves. To study how ideas and narratives diffuse across text, we must first develop a method to identify whether texts share the same information and narratives, rather than the same broad themes or exact features. We propose a novel approach to measure this quantity of interest, which we call “narrative similarity,” by using large language models to distill texts to their core ideas and then compare the similarity of claims rather than of words, phrases, or sentences. The result is an estimand much closer to narrative similarity than what is possible with past relevant alternatives, including exact text reuse, which returns lexically similar documents; topic modeling, which returns topically similar documents; or an array of alternative approaches. We devise an approach to providing out-of-sample measures of performance (precision, recall, F1) and show that our approach outperforms relevant alternatives by a large margin. We apply our approach to an important case study: the spread of Russian claims about the development of a Ukrainian bioweapons program in U.S. mainstream and fringe news websites. While we focus on news in this application, our approach can be applied more broadly to the study of propaganda, misinformation, and the diffusion of policy and cultural objects, among other topics.
Ventura, Tiago, Rajeshwari Majumdar, Jonathan Nagler, and Joshua A. Tucker
· The Journal of Politics
· 2025
· 2024 Paul Lazarsfeld Best Paper Award, APSA Political Communication; 2024 Best Paper Award, APSA Information Technology and Politics; 2024 Best Article in Political Behavior, Brazilian Political Science Association
In most advanced democracies, concerns about the spread of misinformation are typically associated with feed-based social media platforms like Twitter and Facebook. This study conducted a multimedia deactivation experiment during Brazil's 2022 presidential election with 700+ WhatsApp users. Disabling multimedia downloads significantly reduced participants' recall of false rumors but produced no significant changes in belief accuracy, political polarization, or well-being.
Abrajano, Marisa, Marianna Garcia, Aaron Pope, Edwin Kamau, Robert Vidigal, Joshua A. Tucker, and Jonathan Nagler
· Political Research Quarterly
· 78(2): 635-650
· 2025
Social media is used by millions of Americans to access news and politics. Yet there are no studies, to date, examining whether these behaviors systematically vary for those whose political incorporation process is distinct from those in the majority. We fill this void by examining how Latino online political activity compares to that of white Americans and the role of language in Latinos’ online political engagement. We hypothesize that Latino online political activity is comparable to that of whites. Moreover, given media reports suggesting that greater quantities of political misinformation are circulating on Spanish versus English-language social media, we expect reliance on Spanish-language social media for news to predict beliefs in inaccurate political narratives. Our survey findings, which we believe to be the largest original survey of the online political activity of Latinos and whites, reveal support for these expectations. Latino social media political activity, as measured by sharing/viewing news, talking about politics, and following politicians, is comparable to that of whites, both in self-reported and digital trace data. Latinos also turned to social media for news about COVID-19 more often than did whites. Finally, Latinos’ reliance on Spanish-language social media for news predicts beliefs in election fraud in the 2020 U.S. Presidential election.
Elite & Mass Political Behavior · Media Consumption · Online Information Environment · Elections & Voting · Partisanship
Platform-independent experiments on social media
Allen, Jennifer and Joshua A. Tucker
· Science
· 390(6776): 883-884
· 2025
Discusses a methodological approach to studying social media's effects without platform collaboration. The authors highlight research that uses large language models and browser extensions to rerank social media feeds, enabling external researchers to test algorithmic impacts on partisan sentiment. Demonstrates that altering the visibility of polarizing content can affect how people perceive opposing political groups.
Sanderson, Zeve, Wei Zhong, and Joshua A. Tucker
· OSF Preprints / SocArXiv
· 2025
Recent advancements in the availability and sophistication of generative artificial intelligence have been accompanied by widespread concerns regarding the public's ability to navigate the digital information environment, especially during social and political events. A growing consensus across policymakers, academics, and industry leaders has emerged around the need for applying labels communicating whether AI was used in content creation. Labeling as a media literacy strategy and policy intervention has gained momentum, but what impacts do synthetic content labels have on the public? We run two online experiments to measure the effect of content labeling on perceptions of political images—their perceived provenance, veracity, and engagement intention. We find that AI labels effectively communicate content provenance, significantly reducing perceived human involvement in image creation regardless of whether images were actually AI-generated.
Wu, Patrick Y., Jonathan Nagler, Joshua A. Tucker, and Sol Messing
· IEEE International Conference on Big Data
· pp. 7232-7241
· 2024
Presents a framework using generative LLMs for text scoring that addresses limitations of existing methods. The approach, called concept-guided chain-of-thought (CGCoT), employs researcher-designed prompts with an LLM to generate a concept-specific breakdown for each text. Applied to measuring political party aversion on Twitter, the approach achieves stronger performance than unsupervised methods like Wordfish, with results comparable to supervised fine-tuned models while requiring minimal hand-labeled data.
Abrajano, Marisa A., Marianna Garcia, Aaron Pope, Robert Vidigal, Joshua A. Tucker, and Jonathan Nagler
· PNAS Nexus
· 3(11)
· 2024
Conducted a large-scale survey examining social media usage and misinformation beliefs among Latino populations around the 2022 midterm elections. Latinos who use Spanish-language social media have a higher probability of believing in false political narratives, compared with Latinos using English-language social media. Latinos face heightened misinformation vulnerability due to greater reliance on social media, weaker Spanish-language fact-checking algorithms, and less platform enforcement against Spanish-language falsehoods.
Ruggeri, Kai, Friederike Stock, S. Alexander Haslam, Valerio Capraro, Paulo Boggio, Joshua A. Tucker, Naomi Ellemers, et al.
· Nature
· 625(7993): 134-147
· 2024
Scientific evidence regularly guides policy decisions1, with behavioural science increasingly part of this process2. In April 2020, an influential paper3 proposed 19 policy recommendations (‘claims’) detailing how evidence from behavioural science could contribute to efforts to reduce impacts and end the COVID-19 pandemic. Here we assess 747 pandemic-related research articles that empirically investigated those claims. We report the scale of evidence and whether evidence supports them to indicate applicability for policymaking. Two independent teams, involving 72 reviewers, found evidence for 18 of 19 claims, with both teams finding evidence supporting 16 (89%) of those 18 claims. The strongest evidence supported claims that anticipated culture, polarization and misinformation would be associated with policy effectiveness. Claims suggesting trusted leaders and positive social norms increased adherence to behavioural interventions also had strong empirical support, as did appealing to social consensus or bipartisan agreement. Targeted language in messaging yielded mixed effects and there were no effects for highlighting individual benefits or protecting others. No available evidence existed to assess any distinct differences in effects between using the terms ‘physical distancing’ and ‘social distancing’. Analysis of 463 papers containing data showed generally large samples; 418 involved human participants with a mean of 16,848 (median of 1,699). That statistical power underscored improved suitability of behavioural science research for informing policy decisions. Furthermore, by implementing a standardized approach to evidence selection and synthesis, we amplify broader implications for advancing scientific evidence in policy formulation and prioritization.
Lai, Angela, Megan Brown, James Bisbee, Joshua A. Tucker, Jonathan Nagler, and Richard Bonneau
· Political Analysis
· 32(3): 345-60
· 2024
We present a method for estimating the ideology of political YouTube videos. The subfield of estimating ideology as a latent variable has often focused on traditional actors such as legislators, while more recent work has used social media data to estimate the ideology of ordinary users, political elites, and media sources. We build on this work to estimate the ideology of a political YouTube video. First, we start with a matrix of political Reddit posts linking to YouTube videos and apply correspondence analysis to place those videos in an ideological space. Second, we train a language model with those estimated ideologies as training labels, enabling us to estimate the ideologies of videos not posted on Reddit. These predicted ideologies are then validated against human labels. We demonstrate the utility of this method by applying it to the watch histories of survey respondents to evaluate the prevalence of echo chambers on YouTube in addition to the association between video ideology and viewer engagement. Our approach gives video-level scores based only on supplied text metadata, is scalable, and can be easily adjusted to account for changes in the ideological landscape.
Allcott, Hunt, Matthew Gentzkow, Winter Mason, Arjun Wilkins, Pablo Barberá, Taylor Brown, Juan Carlos Cisneros, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Sandra González-Bailón, Andrew M. Guess, Young Mie Kim, David Lazer, Neil Malhotra, Devra Moehler, Sameer Nair-Desai, Houda Nait El Barj, Brendan Nyhan, Ana Carolina Paixao de Queiroz, Jennifer Pan, Jaime Settle, Emily Thorson, Rebekah Tromble, Carlos Velasco Rivera, Benjamin Wittenbrink, Magdalena Wojcieszak, Saam Zahedian, Annie Franco, Chad Kiewiet de Jonge, Natalie Jomini Stroud, and Joshua A. Tucker
· Proceedings of the National Academy of Sciences
· 121(21)
· 2024
We study the effect of Facebook and Instagram access on political beliefs, attitudes, and behavior by randomizing a subset of 19,857 Facebook users and 15,585 Instagram users to deactivate their accounts for 6 wk before the 2020 U.S. election. We report four key findings. First, both Facebook and Instagram deactivation reduced an index of political participation (driven mainly by reduced participation online). Second, Facebook deactivation had no significant effect on an index of knowledge, but secondary analyses suggest that it reduced knowledge of general news while possibly also decreasing belief in misinformation circulating online. Third, Facebook deactivation may have reduced self-reported net votes for Trump, though this effect does not meet our preregistered significance threshold. Finally, the effects of both Facebook and Instagram deactivation on affective and issue polarization, perceived legitimacy of the election, candidate favorability, and voter turnout were all precisely estimated and close to zero.
Aslett, Kevin, William Godel, Zeve Sanderson, Nathaniel Persily, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker
· Nature
· 625: 548-556
· 2024
Considerable scholarly attention has been paid to understanding belief in online misinformation1,2, with a particular focus on social networks. However, the dominant role of search engines in the information environment remains underexplored, even though the use of online search to evaluate the veracity of information is a central component of media literacy interventions3–5. Although conventional wisdom suggests that searching online when evaluating misinformation would reduce belief in it, there is little empirical evidence to evaluate this claim. Here, across five experiments, we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them. To shed light on this relationship, we combine survey data with digital trace data collected using a custom browser extension. We find that the search effect is concentrated among individuals for whom search engines return lower-quality information. Our results indicate that those who search online to evaluate misinformation risk falling into data voids, or informational spaces in which there is corroborating evidence from low-quality sources. We also find consistent evidence that searching online to evaluate news increases belief in true news from low-quality sources, but inconsistent evidence that it increases belief in true news from mainstream sources. Our findings highlight the need for media literacy programmes to ground their recommendations in empirically tested strategies and for search engines to invest in solutions to the challenges identified here.
Tokita, Chris, Kevin Aslett, William Godel, Zeve Sanderson, Joshua A. Tucker, Nathaniel Persily, Jonathan Nagler, and Richard Bonneau
· PNAS Nexus
· 3(10)
· 2024
Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and find that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
Brown, Megan, Zeve Sanderson, Sarah Graham, Minjoo Kim, Joshua A. Tucker, and Solomon Messing
· Journal of Quantitative Description: Digital Media
· 4(May)
· 2024
There is scant quantitative research describing Nextdoor, the world's largest and most important hyperlocal social media network. Due to its localized structure, Nextdoor data are notoriously difficult to collect and work with. We build multiple datasets that allow us to generate descriptive analyses of the platform's offline contexts and online content. We first create a comprehensive dataset of all Nextdoor neighborhoods joined with U.S. Census data, which we analyze at the community level (block-group). Our findings suggest that Nextdoor is primarily used in communities where the populations are whiter, more educated, more likely to own a home, and with higher levels of average income, potentially impacting the platform's ability to create new opportunities for social capital formation and citizen engagement. At the same time, Nextdoor neighborhoods are more likely to have active government agency accounts---and law enforcement agencies in particular---where offline communities are more urban, have larger nonwhite populations, greater income inequality, and higher average home values. We then build a convenience sample of 30 Nextdoor neighborhoods, for which we collect daily posts and comments appearing in the feed (115,716 posts and 163,903 comments), as well as associated metadata. Among the accounts for which we collected posts and comments, posts seeking or offering services were the most frequent, while those reporting potentially suspicious people or activities received the highest average number of comments. Taken together, our study describes the ecosystem of and discussion on Nextdoor, as well as introduces data for quantitatively studying the platform.
The Diffusion and Reach of (Mis)Information on Facebook During the US 2020 Election
González-Bailón, Sandra, David Lazer, Pablo Barberá, William Godel, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Deen Freelon, Matthew Gentzkow, Andrew Guess, Shanto Iyengar, Young Mie Kim, Neil Malhotra, Devra Moehler, Brendan Nyhan, Jennifer Pan, Carlos Velasco Rivera, Jaime Settle, Emily Thorson, Rebekah Tromble, Arjun Wilkins, Magdalena Wojcieszak, Chad Kiewiet de Jonge, Annie Franco, Winter Mason, Natalie Jomini Stroud, and Joshua A. Tucker
· Sociological Science
· 11:1124-1146
· 2024
Examined approximately 1 billion reshared posts on Facebook from July 2020 to February 2021. Misinformation diffused more slowly, relying on a small number of active users, while legitimate content spread through platform pages. Content moderation efforts before the election corresponded with dramatic drops in the spread and reach of misinformation.
Tucker, Joshua A.
· Handbook of Computational Social Science for Policy (Springer)
· pp. 381-403
· 2023
Examines how computational social science addresses democratic concerns in the digital era. Reviews research methodologies and tools across four domains: measuring public opinion online, combating online hate speech, addressing misinformation, and countering foreign influence operations.
Buntain, Cody L., Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker
· Proceedings of the International AAAI Conference on Web and Social Media
· 17(1): 72-83
· 2023
Demonstrates the use of differentially private hyperlink-level engagement data for measuring the ideologies of audiences for web domains. Examines metrics for ideological positioning and tests their robustness when privacy-preserving noise is introduced. Using Facebook engagement data, the authors validate their approach against prior work, achieving correlations exceeding 0.87. Viewing audiences tend toward more moderate positions than sharing audiences.
Asimovic, Nejla, Jonathan Nagler, and Joshua A. Tucker
· Research & Politics
· 2023
Replicated a 2021 Facebook deactivation study in Cyprus to examine social media's impact on interethnic polarization. While they could not reproduce the main effect found in Bosnia and Herzegovina, they identified a significant interaction pattern. The study also confirmed that Facebook deactivation leads to a reduction in anxiety levels and provided suggestive evidence of reduced news knowledge.
Aslett, Kevin, Zeve Sanderson, William Godel, Nathaniel Persily, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker
· Journal of Experimental Political Science
· 2023
Conducted pre-registered experiments to measure how additional information affects people's ability to identify accurate versus false news in real time. Access to the full article (rather than the headline/lede alone), as well as access to source information, improves an individual's ability to correctly discern the veracity of news. Encouraging online searches, however, increased belief in both true and false content.
Kates, Sean, Sidak Yntiso, Tine Paulsen, and Joshua A. Tucker
· Political Analysis
· 31(4): 642-50
· 2023
Many large survey courses rely on multiple professors or teaching assistants to judge student responses to open-ended questions. Even following best practices, students with similar levels of conceptual understanding can receive widely varying assessments from different graders. We detail how this can occur and argue that it is an example of differential item functioning (or interpersonal incomparability), where graders interpret the same possible grading range differently. Using both actual assessment data from a large survey course in Comparative Politics and simulation methods, we show that the bias can be corrected by a small number of “bridging” observations across graders. We conclude by offering best practices for fair assessment in large survey courses.
Eady, Gregory, Tom Paskhalis, Jan Zilinsky, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker
· Nature Communications
· 14(62)
· 2023
There is widespread concern that foreign actors are using social media to interfere in elections worldwide. Yet data have been unavailable to investigate links between exposure to foreign influence campaigns and political behavior. Using longitudinal survey data from US respondents linked to their Twitter feeds, we quantify the relationship between exposure to the Russian foreign influence campaign and attitudes and voting behavior in the 2016 US election. We demonstrate, first, that exposure to Russian disinformation accounts was heavily concentrated: only 1% of users accounted for 70% of exposures. Second, exposure was concentrated among users who strongly identified as Republicans. Third, exposure to the Russian influence campaign was eclipsed by content from domestic news media and politicians. Finally, we find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior. The results have implications for understanding the limits of election interference campaigns on social media.
Elite & Mass Political Behavior · Foreign Influence Campaigns · Online Information Environment · Political Polarization · Public Opinion · Post-Communist Politics · Elections & Voting
Guess, Andrew, Neil Malhotra, Jennifer Pan, Pablo Barberá, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Matthew Gentzkow, Sandra González-Bailón, Edward Kennedy, Young Mie Kim, David Lazer, Devra Moehler, Brendan Nyhan, Carlos Velasco Rivera, Jaime Settle, Daniel Robert Thomas, Emily Thorson, Rebekah Tromble, Arjun Wilkins, Magdalena Wojcieszak, Beixian Xiong, Chad Kiewiet de Jonge, Annie Franco, Winter Mason, Natalie Jomini Stroud, and Joshua A. Tucker
· Science
· 381(6656): 404-408
· 2023
We studied the effects of exposure to reshared content on Facebook during the 2020 US election by assigning a random set of consenting, US-based users to feeds that did not contain any reshares over a 3-month period. We find that removing reshared content substantially decreases the amount of political news, including content from untrustworthy sources, to which users are exposed; decreases overall clicks and reactions; and reduces partisan news clicks. Further, we observe that removing reshared content produces clear decreases in news knowledge within the sample, although there is some uncertainty about how this would generalize to all users. Contrary to expectations, the treatment does not significantly affect political polarization or any measure of individual-level political attitudes.
Nyhan, Brendan, Jaime Settle, Emily Thorson, Magdalena Wojcieszak, Pablo Barberá, Annie Y. Chen, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Matthew Gentzkow, Sandra González-Bailón, Andrew M. Guess, Edward Kennedy, Young Mie Kim, David Lazer, Neil Malhotra, Devra Moehler, Jennifer Pan, Daniel Robert Thomas, Rebekah Tromble, Carlos Velasco Rivera, Arjun Wilkins, Beixian Xiong, Chad Kiewiet de Jonge, Annie Franco, Winter Mason, Natalie Jomini Stroud, and Joshua A. Tucker
· Nature
· 620(7972): 137-144
· 2023
Many critics raise concerns about the prevalence of ‘echo chambers’ on social media and their potential role in increasing political polarization. However, the lack of available data and the challenges of conducting large-scale field experiments have made it difficult to assess the scope of the problem. Here we present data from 2020 for the entire population of active adult Facebook users in the USA showing that content from ‘like-minded’ sources constitutes the majority of what people see on the platform, although political information and news represent only a small fraction of these exposures. To evaluate a potential response to concerns about the effects of echo chambers, we conducted a multi-wave field experiment on Facebook among 23,377 users for whom we reduced exposure to content from like-minded sources during the 2020 US presidential election by about one-third. We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims. These precisely estimated results suggest that although exposure to content from like-minded sources on social media is common, reducing its prevalence during the 2020 US presidential election did not correspondingly reduce polarization in beliefs or attitudes.
Journal Article
How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?
Guess, Andrew, Neil Malhotra, Jennifer Pan, Pablo Barberá, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Matthew Gentzkow, Sandra González-Bailón, Edward Kennedy, Young Mie Kim, David Lazer, Devra Moehler, Brendan Nyhan, Carlos Velasco Rivera, Jaime Settle, Daniel Robert Thomas, Emily Thorson, Rebekah Tromble, Arjun Wilkins, Magdalena Wojcieszak, Beixian Xiong, Chad Kiewiet de Jonge, Annie Franco, Winter Mason, Natalie Jomini Stroud, and Joshua A. Tucker
· Science
· 381(6656): 398-404
· 2023
We investigated the effects of Facebook’s and Instagram’s feed algorithms during the 2020 US election. We assigned a sample of consenting users to reverse-chronologically-ordered feeds instead of the default algorithms. Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity. The chronological feed also affected exposure to content: The amount of political and untrustworthy content they saw increased on both platforms, the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and sources with ideologically mixed audiences they saw increased on Facebook. Despite these substantial changes in users’ on-platform experience, the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.
Journal Article
Asymmetric Ideological Segregation in Exposure to Political News on Facebook
González-Bailón, Sandra, David Lazer, Pablo Barberá, Meiqing Zhang, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Deen Freelon, Matthew Gentzkow, Andrew M. Guess, Shanto Iyengar, Young Mie Kim, Neil Malhotra, Devra Moehler, Brendan Nyhan, Jennifer Pan, Carlos Velasco Rivera, Jaime Settle, Emily Thorson, Rebekah Tromble, Arjun Wilkins, Magdalena Wojcieszak, Chad Kiewiet de Jonge, Annie Franco, Winter Mason, Natalie Jomini Stroud, and Joshua A. Tucker
· Science
· 381(6656): 392-398
· 2023
2024 Political Ties Award, APSA Political Networks
Does Facebook enable ideological segregation in political news consumption? We analyzed exposure to news during the US 2020 election using aggregated data for 208 million US Facebook users. We compared the inventory of all political news that users could have seen in their feeds with the information that they saw (after algorithmic curation) and the information with which they engaged. We show that (i) ideological segregation is high and increases as we shift from potential exposure to actual exposure to engagement; (ii) there is an asymmetry between conservative and liberal audiences, with a substantial corner of the news ecosystem consumed exclusively by conservatives; and (iii) most misinformation, as identified by Meta’s Third-Party Fact-Checking Program, exists within this homogeneously conservative corner, which has no equivalent on the liberal side. Sources favored by conservative audiences were more prevalent on Facebook’s news ecosystem than those favored by liberals.
Journal Article
Why Botter: How Pro-Government Bots Fight Opposition in Russia
Stukal, Denis, Sergey Sanovich, Richard Bonneau, and Joshua A. Tucker
· American Political Science Review
· 116(3): 843-857
· 2022
There is abundant anecdotal evidence that nondemocratic regimes harness new digital technologies known as social media bots to facilitate policy goals. The study develops theoretical frameworks predicting bot deployment in response to either offline or online protest activities. Using Twitter data from Russian pro-government bots (2015-2018), the authors find that online opposition activities produce stronger reactions from bots than offline protests do: bots amplify pro-government messaging through increased posting and retweeting during opposition mobilization.
Foreign Influence CampaignsPolitics of AuthoritarianismPost-Communist PoliticsData Science Methodology
Journal Article
SARS-CoV-2 RNA concentrations in wastewater foreshadow dynamics and clinical presentation of new COVID-19 cases
Wu, Fuqing, Amy Xiao, Jianbo Zhang, Katya Moniz, Noriko Endo, Federica Armas, Richard Bonneau, Megan A Brown, Mary Bushman, Peter R Chai, Claire Duvallet, Timothy B Erickson, Katelyn Foppe, Newsha Ghaeli, Xiaoqiong Gu, William P Hanage, Katherine H Huang, Wei Lin Lee, Mariana Matus, Kyle A McElroy, Jonathan Nagler, Steven F Rhode, Mauricio Santillana, Joshua A Tucker, Stefan Wuertz, Shijie Zhao, Janelle Thompson, and Eric Alm
· Science of the Total Environment
· 805:150121
· 2022
Tracked SARS-CoV-2 viral concentrations in Massachusetts wastewater from January through May 2020. SARS-CoV-2 RNA concentrations in wastewater correlated with clinically diagnosed new COVID-19 cases, with the trends appearing 4-10 days earlier in wastewater than in clinical data. Demonstrates that wastewater monitoring can identify disease transmission trends ahead of clinical reporting.
Chen, Zhouhan, Haohan Chen, Joshua A. Tucker, Juliana Freire, and Jonathan Nagler
· Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media
· 2022
Examined traffic patterns to The Gateway Pundit, a far-right news site, using 68 million visits collected over one month. Search engines and social media sites were the main drivers of traffic. Visitors to TGP were more likely to be from areas that voted for Donald Trump in the 2020 election. Content about election fraud and the Capitol riot generated the highest engagement.
Journal Article
Political Knowledge and Misinformation in the Era of Social Media: Evidence From the 2015 UK Election
Munger, Kevin, Patrick J. Egan, Jonathan Nagler, Jonathan Ronen, and Joshua A. Tucker
· British Journal of Political Science
· 52(1): 107-127
· 2022
The study examines whether social media educates or misleads voters by analyzing Twitter exposure and political knowledge changes during the 2015 UK election. Twitter use increased knowledge about politics and public affairs, with news media improving factual knowledge and party messages increasing understanding of party platforms. However, party messaging also shifted voter assessments of the economy and immigration in directions favorable to those parties' positions, sometimes leaving voters with beliefs further from the truth than at the campaign's start.
Journal Article
News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions
Aslett, Kevin, Andrew Guess, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker
· Science Advances
· 8(18): 1-10
· 2022
As the primary arena for viral misinformation shifts toward transnational threats, the search continues for scalable countermeasures compatible with principles of transparency and free expression. We conducted a randomized field experiment evaluating the impact of source credibility labels embedded in users’ social feeds and search results pages. By combining representative surveys (n = 3337) and digital trace data (n = 968) from a subset of respondents, we provide a rare ecologically valid test of such an intervention on both attitudes and behavior. On average across the sample, we are unable to detect changes in real-world consumption of news from low-quality sources after 3 weeks. We can also rule out small effects on perceived accuracy of popular misinformation spread about the Black Lives Matter movement and coronavirus disease 2019. However, we present suggestive evidence of a substantively meaningful increase in news diet quality among the heaviest consumers of misinformation. We discuss the implications of our findings for scholars and practitioners.
Luca, Mario, Kevin Munger, Jonathan Nagler, and Joshua A. Tucker
· Journal of Experimental Political Science
· 9(2): 267-277
· 2022
'Clickbait' media has long been decried as an unfortunate consequence of the rise of digital journalism. We conducted a survey experiment in Italy, offering respondents monetary incentives for correct answers in order to manipulate the salience of accuracy motivation. We found that older and less educated subjects became even more likely to select clickbait headlines when incentivized for factual accuracy. Our model suggests that a politically relevant subset of the population prefers clickbait media because they trust it more.
Journal Article
Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden
Bisbee, James, Megan Brown, Angela Lai, Richard Bonneau, Joshua A. Tucker, and Jonathan Nagler
· Journal of Online Trust and Safety
· 1(3)
· 2022
Skepticism about the outcome of the 2020 presidential election in the United States led to a historic attack on the Capitol on January 6th, 2021 and represents one of the greatest challenges to America's democratic institutions in over a century. Narratives of fraud and conspiracy theories proliferated over the fall of 2020, finding fertile ground across online social networks, although little is known about the extent and drivers of this spread. In this article, we show that users who were more skeptical of the election's legitimacy were more likely to be recommended content that featured narratives about the legitimacy of the election. Our findings underscore the tension between an "effective" recommendation system that provides users with the content they want, and a dangerous mechanism by which misinformation, disinformation, and conspiracies can find their way to those most likely to believe them.
Wu, Patrick Y, Richard Bonneau, Joshua A. Tucker, and Jonathan Nagler
· Proceedings of the Conference on Empirical Methods in Natural Language Processing
· 2022
Introduces an approach that leverages specialized dictionaries when fine-tuning pretrained language models. Replaces dictionary terms with standardized tokens, then applies a contrastive learning objective to draw same-class embeddings closer while separating different classes. Testing showed improvements in few-shot learning and social science text classification tasks.
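The dictionary-token substitution step described above can be sketched in a few lines (a minimal illustration: the dictionary entries and bracketed token names here are hypothetical, not the authors' actual lexicon; the contrastive fine-tuning objective would be applied to the model after this preprocessing):

```python
import re

# Hypothetical lexicon for illustration; the paper relies on specialized
# dictionaries, and these class tokens are invented names.
DICTIONARY = {
    "outrage": "[MORAL]",
    "disgust": "[MORAL]",
    "happy": "[EMO_POS]",
}

# One alternation over all dictionary terms, matched case-insensitively
# on word boundaries.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, DICTIONARY)) + r")\b", re.IGNORECASE
)

def replace_dictionary_terms(text: str) -> str:
    """Substitute each dictionary term with its standardized token."""
    return PATTERN.sub(lambda m: DICTIONARY[m.group(0).lower()], text)
```

After substitution, examples of the same class share surface tokens, which a contrastive objective can then exploit by pulling their embeddings together while pushing different classes apart.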
Wojcieszak, Magdalena, Andreu Casas, Xudong Yu, Jonathan Nagler, and Joshua A. Tucker
· Science Advances
· 8(39): eabn9418
· 2022
We offer comprehensive evidence of preferences for ideological congruity when people engage with politicians, pundits, and news organizations on social media. Using 4 years of data (2016–2019) from a random sample of 1.5 million Twitter users, we examine three behaviors studied separately to date: (i) following of in-group versus out-group elites, (ii) sharing in-group versus out-group information (retweeting), and (iii) commenting on the shared information (quote tweeting). We find that the majority of users (60%) do not follow any political elites. Those who do follow in-group elite accounts at much higher rates than out-group accounts (90 versus 10%), share information from in-group elites 13 times more frequently than from out-group elites, and often add negative comments to the shared out-group information. Conservatives are twice as likely as liberals to share in-group versus out-group content. These patterns are robust, emerge across issues and political elites, and exist regardless of users’ ideological extremity.
Payson, Julia, Andreu Casas, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker
· State Politics and Policy Quarterly
· 22(4): 371-395
· 2022
State governments are tasked with making important policy decisions in the United States. How do state legislators use their public communications—particularly social media—to engage with policy debates? Due to previous data limitations, we lack systematic information about whether and how state legislators publicly discuss policy and how this behavior varies across contexts. Using Twitter data and state-of-the-art topic modeling techniques, we introduce a method to study state legislator policy priorities and apply the method to 15 US states in 2018. We show that we are able to accurately capture the policy issues discussed by state legislators with substantially more accuracy than existing methods. We then present initial findings that validate the method and speak to debates in the literature. The paper concludes by discussing promising avenues for future state politics research using this new approach.
Journal Article
Trumping Hate on Twitter? Online Hate Speech in the 2016 U.S. Election Campaign and its Aftermath
Siegel, Alexandra, Evgenii Nikitin, Pablo Barberá, Joanna Sterling, Bethany Pullen, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker
· Quarterly Journal of Political Science
· 16(1): 71-104
· 2021
To what extent did online hate speech and white nationalist rhetoric on Twitter increase over the course of Donald Trump's 2016 presidential election campaign and its immediate aftermath? We analyzed over 750 million tweets related to the 2016 election, plus approximately 400 million tweets from a random sample of American Twitter users. Using machine-learning methods, we found no persistent increase in hate speech or white nationalist language either over the course of the campaign or in the six months following Trump's election. While specific campaign events caused temporary spikes in hateful language, these bursts quickly dissipated.
Journal Article
Political Psychology in the Digital (Mis)Information Age: A Model of News Belief and Sharing
Van Bavel, Jay J., Elizabeth Harris, Steve Rathje, Kimberly C. Doell, and Joshua A. Tucker
· Social Issues and Policy Review
· 15(1): 84-113
· 2021
The spread of misinformation, including 'fake news,' propaganda, and conspiracy theories, represents a serious threat to society, as it has the potential to alter beliefs, behavior, and policy. The authors propose an integrative theoretical framework examining social, political, and cognitive psychology factors underlying misinformation dissemination. They emphasize that exposure, belief, and sharing represent distinct processes, and outline strategies for mitigation.
Journal Article
What's Not to Like? Facebook Page Likes Reveal Limited Polarization in Lifestyle Preferences
Praet, Stiene, Andy Guess, Joshua A. Tucker, Richard Bonneau, and Jonathan Nagler
· Political Communication
· 39(3): 311-338
· 2021
Examined whether political polarization extends to lifestyle preferences by analyzing Facebook "likes" from more than 1,200 respondents. Polarization is present in page categories somewhat related to politics, such as news and identity topics, but minimal in domains like sports, food, and music. Non-political lifestyle activities thus continue to offer cross-cutting spaces where Americans share common interests across partisan lines.
Laruelle, Marlene, Mikhail Alexseev, Cynthia Buckley, Ralph S. Clem, J. Paul Goode, Ivan Gomza, Henry E. Hale, Erik Herron, Andrey Makarychev, Madeline McCann, Mariya Omelicheva, Gulnaz Sharafutdinova, Regina Smyth, Sarah Wilson Sokhey, Mikhail Troitskiy, Joshua A. Tucker, Judyth Twigg, and Elizabeth Wishnick
· Problems of Post-Communism
· 68(1): 1-16
· 2021
Politics of AuthoritarianismPost-Communist Politics
Journal Article
An automatic framework to continuously monitor multi-platform information spread
Chen, Zhouhan, Kevin Aslett, Jen Rosiere Reynolds, Juliana Freire, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau
· CEUR Workshop Proceedings
· 2890
· 2021
Documents methodology for an automatic framework to continuously monitor information spread across multiple social media platforms.
Data Science MethodologyOnline Information Environment
Journal Article
Tweeting Beyond Tahrir: Ideological Diversity and Political Intolerance in Egyptian Twitter Networks
Siegel, Alexandra A., Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker
· World Politics
· 73(2): 243-274
· 2021
Do online social networks affect political tolerance in the highly polarized climate of postcoup Egypt? Taking advantage of the real-time networked structure of Twitter data, the authors find that not only is greater network diversity associated with lower levels of intolerance, but also that longer exposure to a diverse network is linked to less expression of intolerance over time. The research presents evidence that social norms in online networks may shape individuals' propensity to publicly express intolerant attitudes.
Journal Article
Twitter Flagged Donald Trump's Tweets with Election Misinformation: They Continued to Spread Both On and Off the Platform
Sanderson, Zeve, Meagan A. Brown, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker
· Harvard Kennedy School Misinformation Review
· 2(4)
· 2021
We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, being posted more often and garnering more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.
Munger, Kevin, Ishita Gopal, Jonathan Nagler, and Joshua A. Tucker
· Research and Politics
· 8(2)
· 2021
An emerging empirical regularity suggests that older people use and respond to social media very differently than younger people. Older people are the fastest-growing population of Internet and social media users in the US, and this heterogeneity will soon become central to online politics. However, many important experiments in this field have been conducted on online samples that do not contain enough older people to be useful to generalize to the current population of Internet users; this issue is more pronounced for studies that are even a few years old. In this paper, we report the results of replicating two experiments involving social media (specifically, Facebook) conducted on one such sample lacking older users (Amazon’s Mechanical Turk) using a source of online subjects which does contain sufficient variation in subject age. We add a standard battery of questions designed to explicitly measure digital literacy. We find evidence of significant treatment effect heterogeneity in subject age and digital literacy in the replication of one of the two experiments. This result is an example of limitations to generalizability of research conducted on samples where selection is related to treatment effect heterogeneity; specifically, this result indicates that Mechanical Turk should not be used to recruit subjects when researchers suspect treatment effect heterogeneity in age or digital literacy, as we argue should be the case for research on digital media effects.
Buntain, Cody, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker
· Proceedings of the ACM on Human-Computer Interaction
· 11
· 2021
The paper examines whether YouTube's January 2019 content quality initiative successfully reduced harmful content spread. Using interrupted time series models, researchers analyzed video sharing on Twitter and Reddit across eight months surrounding the announcement. They tracked three content categories: conspiracy videos with reduced recommendations, videos from conspiracy-oriented channels, and alternative influence network videos, with mainstream news as a control. Key findings indicate that conspiracy-labeled and AIN videos experienced a significant decreasing trend in sharing on both platforms, yet conspiracy-channel sharing in Reddit showed a significant increase, suggesting mixed cross-platform effects.
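The interrupted time series logic described above can be sketched as a generic segmented regression (an illustrative model in plain Python, not the authors' exact specification; `t0` stands in for the date of YouTube's announcement, and the four coefficients are baseline level, baseline trend, level shift at the interruption, and trend shift after it):

```python
# Segmented regression: y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t,
# where post_t = 1 for observations at or after the interruption t0.
def fit_its(y, t0):
    """Least-squares fit via the normal equations (X'X) b = X'y."""
    n, k = len(y), 4
    X = [[1.0, float(t), float(t >= t0), (t - t0) * float(t >= t0)]
         for t in range(n)]
    # Build X'X and X'y.
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution.
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, k))) / xtx[r][r]
    return b  # [level, trend, level shift at t0, trend shift after t0]
```

A negative trend-shift coefficient after `t0` would correspond to the decreasing sharing trend the paper reports for conspiracy-labeled videos.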
Guess, Andy, Kevin Aslett, Joshua A. Tucker, Richard Bonneau, and Jonathan Nagler
· Journal of Quantitative Description: Digital Media
· 1
· 2021
In this study, we analyze for the first time newly available engagement data covering millions of web links shared on Facebook to describe how and by which categories of U.S. users different types of news are seen and shared on the platform. We examine articles from low-credibility publishers, credible news sources, clickbait content, and political news. Key findings indicate that older users and conservatives shared more fake news, with credible news sources receiving substantially more engagement -- shared 5.5 times and viewed 7.5 times more frequently than low-credibility sources.
Kates, Sean, Joshua A. Tucker, Jonathan Nagler, and Richard Bonneau
· Journal of Quantitative Description: Digital Media
· 1
· 2021
This paper uses geolocated Twitter histories from approximately 25,000 individuals in 6 different time zones and 3 different countries to construct a proper time-zone dependent hourly baseline for social media activity studies. We establish that, across multiple regions and time periods, interaction with social media is strongly conditioned by traditional bio-rhythmic or 'Circadian' patterns, and that in the United States, this pattern is itself further conditioned by the ideological bent of the user. Using a time series of these histories around the 2016 U.S. Presidential election, we show that external events of great significance can disrupt traditional social media activity patterns, and that this disruption can be significant.
Klašnja, Marko, Noam Lupu, and Joshua A. Tucker
· Journal of Experimental Political Science
· 8(2): 161-71
· 2021
A growing body of research explores the factors that affect when corrupt politicians are held accountable by voters. Most studies, however, focus on one or a few factors in isolation, leaving incomplete our understanding of whether they condition each other. To address this, we embedded rich conjoint candidate-choice experiments into surveys in Argentina, Chile, and Uruguay. We test the importance of two contextual factors thought to mitigate voters’ punishment of corrupt politicians: how widespread corruption is and whether it brings side benefits. Like other scholars, we find that corruption decreases candidate support substantially. But we also find that information that corruption is widespread does not lessen the sanction applied against corruption, whereas information about the side benefits from corruption does, and does so to a similar degree as the mitigating role of permissive attitudes toward bribery. Moreover, those who stand to gain from these side benefits are less likely to sanction corruption.
Online Information EnvironmentPublic OpinionElections & Voting
Journal Article
Moderating with the Mob: Evaluating the Efficacy of Real Time Crowdsourced Fact Checking
Godel, William, Kevin Aslett, Zeve Sanderson, Nathaniel Persily, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker
· Journal of Online Trust and Safety
· 1(1)
· 2021
Examined whether aggregating ordinary users' assessments can effectively combat misinformation. Evaluated 135 news articles within 72 hours of publication through both crowd evaluations and professional fact-checkers, generating 12,883 assessments. Crowdsourced systems perform better when limited to politically knowledgeable respondents rather than representative samples.
Data Science MethodologyOnline Information Environment
Journal Article
Short of Suspension: How Suspension Warnings Can Reduce Hate Speech on Twitter
Yildirim, Mustafa Mikdat, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker
· Perspectives on Politics
· 1-13
· 2021
Debates around the effectiveness of high-profile Twitter account suspensions and similar bans on abusive users across social media platforms abound. Yet we know little about the effectiveness of warning a user about the possibility of suspending their account as opposed to outright suspensions in reducing hate speech. With a pre-registered experiment, we provide causal evidence that a warning message can reduce the use of hateful language on Twitter, at least in the short term. We design our messages based on the literature on deterrence, and test versions that emphasize the legitimacy of the sender, the credibility of the message, and the costliness of being suspended. We find that the act of warning a user of the potential consequences of their behavior can significantly reduce their hateful language for one week. We also find that warning messages that aim to appear legitimate in the eyes of the target user seem to be the most effective. In light of these findings, we consider the policy implications of platforms adopting a more aggressive approach to warning users that their accounts may be suspended as a tool for reducing hateful speech online.
Journal Article
Testing the Effects of Facebook Usage in an Ethnically Polarized Setting
Asimovic, Nejla, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker
· Proceedings of the National Academy of Sciences
· 118(25)
· 2021
2022 Best Article Award, APSA Information Technology and Politics
Amid growing belief that social media exacerbates polarization, little is known about the causal effects of social media on ethnic outgroup attitudes. Through an experiment in Bosnia and Herzegovina in which users refrained from Facebook usage during one week of heightened identity salience, we find that—counter to expectations—people who deactivated their accounts reported lower outgroup regard than the group that remained active, though this effect was likely conditional on the level of ethnic heterogeneity of respondents’ residence. Additionally, we replicate findings from a study of US users: deactivation led to a decrease in news knowledge and suggestive improvements in subjective well-being. Our findings bring nuance to popular beliefs, frequently dichotomous and simplistic, about social media’s impact on societal dynamics.
Eady, Gregory, Jan Zilinsky, Richard Bonneau, Joshua A. Tucker, and Jonathan Nagler
· Working Paper
· 2020
The role of the media in influencing people's attitudes and opinions is difficult to demonstrate because media consumption by survey respondents is usually unobserved in datasets containing information on attitudes and vote choice. This paper leverages behavioral data combined with responses from a multi-wave panel to test whether Democrats who see more stories from liberal news sources on Twitter develop more liberal positions over time and, conversely, whether Republicans are more likely to revise their views in a conservative direction if they are exposed to more news on Twitter from conservative media sources. We find evidence that exposure to ideologically framed information and arguments changes voters' own positions, but has a limited impact on perceptions of where the candidates stand on the issues.
Terechshenko, Zhanna, Fridolin Linder, Vishakh Padmakumar, Fengyuan Liu, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau
· Working Paper
· 2020
Automated text classification has rapidly become an important tool for political analysis. Recent advances in NLP, enabled by deep learning, now achieve state-of-the-art results on many standard tasks in the field. However, these methods require large amounts of both computing resources and text data to learn the characteristics of the language, resources that are not always accessible to political scientists. One solution is a transfer learning approach. We investigate the performance of these models in political science by comparing multiple text classification methods. We find that RoBERTa and XLNet, language models that rely on the Transformer architecture, require fewer computing resources and less training data to perform on par with -- or outperform -- several existing political science text classification methods.
Journal Article
Don't Republicans Tweet Too? Using Twitter to Assess the Consequences of Political Endorsements by Celebrities
Zilinsky, Jan, Cristian Vaccari, Jonathan Nagler, and Joshua A. Tucker
· Perspectives on Politics
· 18(1): 144-60
· 2020
Michael Jordan supposedly justified his decision to stay out of politics by noting that Republicans buy sneakers too. We analyze approximately 220,000 tweets from 83 celebrities who endorsed a 2016 presidential candidate. We find that followers of opinionated celebrities do not withhold engagement when entertainers become politically mobilized, even when those entertainers go negative. Political content from celebrities sometimes generated more engagement than typical lifestyle tweets, suggesting that the cost of political speech on Twitter for celebrities is relatively low.
Journal Article
Using Social and Behavioural Science to Support COVID-19 Pandemic Response
Van Bavel, Jay J., Katherine Baicker, Paulo S. Boggio, Valerio Capraro, Aleksandra Cichocka, Mina Cikara, Molly J. Crockett, Alia J. Crum, Karen M. Douglas, James N. Druckman, John Drury, Oeindrila Dube, Naomi Ellemers, Eli J. Finkel, James H. Fowler, Michele Gelfand, Shihui Han, S. Alexander Haslam, Jolanda Jetten, Shinobu Kitayama, Dean Mobbs, Lucy E. Napper, Dominic J. Packer, Gordon Pennycook, Ellen Peters, Richard E. Petty, David G. Rand, Stephen D. Reicher, Simone Schnall, Azim Shariff, Linda J. Skitka, Sandra Susan Smith, Cass R. Sunstein, Nassim Tabri, Joshua A. Tucker, Sander van der Linden, Paul van Lange, Kim A. Weeden, Michael J. A. Wohl, Jamil Zaki, Sean R. Zion, and Robb Willer
· Nature Human Behaviour
· 4: 460-471
· 2020
Top 15 Public Relations Insight publications of 2020, Institute for Public Relations
The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. The paper examines research on threat navigation, social influences, science communication, moral decision-making, leadership, and stress management.
Journal Article
The (Null) Effects of Clickbait Headlines on Polarization, Trust, and Learning
Munger, Kevin, Mario Luca, Jonathan Nagler, and Joshua A. Tucker
· Public Opinion Quarterly
· 84(1): 49-73
· 2020
'Clickbait' headlines designed to entice people to click are frequently used by both legitimate and less-than-legitimate news sources. Contemporary clickbait headlines tend to use emotional partisan appeals, raising concerns about their impact on consumers of online news. This article reports the results of a pair of experiments with different sets of subject pools. Findings show that older people and non-Democrats have a higher 'preference for clickbait,' but reading clickbait headlines does not drive affective polarization, information retention, or trust in media.
Journal Article
Cross-Platform State Propaganda: Russian Trolls on Twitter and YouTube During the 2016 U.S. Presidential Election
Golovchenko, Yevgeniy, Cody Buntain, Gregory Eady, Megan A. Brown, and Joshua A. Tucker
· International Journal of Press and Politics
· 25(3): 357-89
· 2020
This paper investigates online propaganda strategies of the Internet Research Agency (IRA) -- Russian 'trolls' -- during the 2016 U.S. presidential election. We assess claims that the IRA sought either to (1) support Donald Trump or (2) sow discord among the U.S. public by analyzing hyperlinks contained in 108,781 IRA tweets. Our results show that although IRA accounts promoted links to both sides of the ideological spectrum, 'conservative' trolls were more active than 'liberal' ones. The IRA also shared content across social media platforms, particularly YouTube -- the second-most linked destination among IRA tweets.
Brader, Ted, Lorenzo De Sio, Aldo Paparo, and Joshua A. Tucker
· Political Psychology
· 41(4): 795-821
· 2020
The ability of parties to not only reflect, but actually shape, citizens' preferences on policy issues has long been debated, as it corresponds to a fundamental prediction of classic party identification theory. While most research draws on data from the United States or studies of low‐salience issues, we exploit the unique opportunity presented by the 2013 Italian election, with the four major parties of a clear multiparty setting holding distinct positions on crucial issues of the campaign. Based on an experimental design, we test the impact of party cues on citizens' preferences on high‐salience issues. The results are surprising: despite a party system in flux (with relevant new parties) and a weakening of traditional party identities, we find large, significant partisan‐cueing effects on all three experimental issues, and for voters of all the major Italian parties—both old and new, governmental and opposition, ideologically clear or ambiguous.
Journal Article
Content-Based Features Predict Social Media Influence Operations
Alizadeh, Meysam, Jacob N. Shapiro, Cody Buntain, and Joshua A. Tucker
· Science Advances
· 6(30): 1-13
· 2020
The study examines how to distinguish influence operations from organic social media activity using machine learning. We evaluate Twitter data on Chinese, Russian, and Venezuelan troll activity targeting the United States, plus Reddit data on Russian influence efforts. We develop classifiers using content-based features such as timing, word count, and post relationships. Key findings indicate that industrialized production of influence campaign content leaves a distinctive signal in user-generated content, with Russia proving most sophisticated and difficult to track, while Venezuela was easiest to identify.
Pop-Eleches, Grigore and Joshua A. Tucker
· Comparative Political Studies
· 53(12): 1861-89
· 2020
Communist regimes were avowedly leftist authoritarian regimes, a relative rarity among autocracies. The growing literature on regime legacies would lead us to expect that postcommunist citizens would be more likely to exhibit "left-authoritarian" attitudes than their counterparts elsewhere. Finding that this is the case, we rely on 157 surveys from 88 countries to test whether a "living through Communism" legacy model can account for this surplus of left-authoritarian attitudes. Employing both aggregate and micro-level analyses, we find strong support for the predictions of this model. Moving beyond previous legacy studies, we then test a variety of hypothesized mechanisms to explain how exposure to communist rule could have led to these regime-congruent left-authoritarian attitudes. Of the mechanisms tested, greater state penetration of society is associated with a strong socialization effect, and religious attendance, in particular attendance at Catholic religious services, is associated with weaker socialization effects.
Finkel, Eli J., Christopher A. Bail, Mina Cikara, Peter H. Ditto, Shanto Iyengar, Samara Klar, Lilliana Mason, Mary C. McGrath, Brendan Nyhan, David G. Rand, Linda J. Skitka, Joshua A. Tucker, Jay J. Van Bavel, Cynthia S. Wang, and James N. Druckman
· Science
· 370(6516): 533-6
· 2020
Political polarization, a concern in many countries, is especially acrimonious in the United States. The paper distinguishes between ideological polarization and a second type focused on dominating opponents rather than championing ideas. The authors introduce 'political sectarianism' as an overarching construct with three core elements: othering (categorizing groups as 'us' versus 'them'), aversion, and moralization. The work examines causes and democratic consequences of this sectarianism, ultimately proposing interventions to reduce its harmful effects.
Stukal, Denis, Joshua A. Tucker, Sergey Sanovich, and Richard Bonneau
· PONARS Eurasia Policy Memo No. 564
· 2019
How do regimes respond to online opposition and shape the online conversation? Our research, using a collection of tweets about Russian politics from bots, finds overall that bots are usually used as amplifiers for political messages.
Munger, Kevin, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker
· Political Science Research and Methods
· 7(4): 815-834
· 2019
As non-democratic regimes have adapted to the proliferation of social media, they have begun actively engaging with Twitter to enhance regime resilience. We analyze Venezuelan legislators' tweets during 2014 protests, employing topic modeling and hashtag analysis. We argue the regime's optimal response to existential threats involved promoting multiple competing narratives addressing unrelated issues rather than directly countering opposition claims.
Guess, Andrew M., Kevin Munger, Jonathan Nagler, and Joshua A. Tucker
· Political Communication
· 36(2): 241-258
· 2019
How accurate are survey-based measures of social media use, in particular about political topics? We answer this question by linking original survey data collected during the U.S. 2016 election campaign with respondents' observed social media activity. We employ machine learning to classify whether Twitter and Facebook content was politics-related, then compare self-reported survey responses against actual posting behavior. We discover that aggregate survey measures generally align with observed activity, though individual responses show significant inconsistencies.
Kates, Sean and Joshua A. Tucker
· Social Science Quarterly
· 100(2): 494-523
· 2019
Objective: Macro-level studies have consistently found a connection between economic crises and support for far-right parties. However, research on the micro foundations of this electoral support has generally found little or no correlation between an individual's economic environment and far-right voting. We test one possible explanation for this seeming paradox, namely, that determinants of far-right identification differ across time, particularly in times of crisis. Methods: Utilizing traditional representative data from Eurobarometer surveys in a manner that strips away confounding issues generally found in the extant literature, we directly test whether individuals concerned about their personal economic situation, or that of their country, are more likely to identify with far-right ideological beliefs during economic crises. Results: Ultimately, we find little evidence to support the claim that the Great Recession of 2007-2009 and its aftermath shifted the determinants of support for far-right ideology, though prospective pocketbook concerns do increase the likelihood of identifying with the far right. Conclusions: We discuss the implications of these findings and offer additional avenues for future research.
Guess, Andrew M., Jonathan Nagler, and Joshua A. Tucker
· Science Advances
· 5(1): eaau4586
· 2019
So-called 'fake news' has renewed concerns about the prevalence and effects of misinformation in political campaigns. Given the potential for widespread dissemination of this material, we examine the individual-level characteristics associated with sharing false articles during the 2016 U.S. presidential campaign. To do so, we uniquely link an original survey with respondents' sharing activity as recorded in Facebook profile data. First and foremost, we find that sharing this content was a relatively rare activity. Conservatives were more likely to share articles from fake news domains, which in 2016 were largely pro-Trump in orientation, than liberals or moderates. We also find a strong age effect, which persists after controlling for partisanship and ideology: On average, users over 65 shared nearly seven times as many articles from fake news domains as the youngest age group.
Stukal, Denis, Sergey Sanovich, Joshua A. Tucker, and Richard Bonneau
· SAGE Open
· 9(1)
· 2019
Computational propaganda and the use of automated accounts in social media have recently become the focus of public attention, with alleged Russian government activities abroad provoking particularly widespread interest. However, even in the Russian domestic context, where anecdotal evidence of state activity online goes back almost a decade, no public systematic attempt has been made to dissect the population of Russian social media bots by their political orientation. We address this gap by developing a deep neural network classifier that separates pro-regime, anti-regime, and neutral Russian Twitter bots. Our method relies on supervised machine learning and a new large set of labeled accounts, rather than externally obtained account affiliations or orientation of elites. We also illustrate the use of our method by applying it to bots operating in Russian political Twitter from 2015 to 2017 and show that both pro- and anti-Kremlin bots had a substantial presence on Twitter.
Data Science MethodologyElite & Mass Political BehaviorForeign Influence CampaignsOnline Information EnvironmentPolitics of AuthoritarianismPost-Communist Politics
Eady, Gregory, Jonathan Nagler, Andrew M. Guess, Jan Zilinsky, and Joshua A. Tucker
· SAGE Open
· 9(3)
· 2019
We linked survey data from 1,496 Americans with their Twitter accounts and analyzed 642,345 followed accounts containing approximately 1.2 billion tweets. More than a third of respondents do not follow any media sources, but among those who do, we find a substantial amount of overlap (51%) in the ideological distributions of accounts followed by users on opposite ends of the political spectrum. We also find asymmetries in individuals' willingness to venture into cross-cutting spaces, with conservatives more likely to follow media and political accounts classified as left-leaning than the reverse. Overall we find no evidence for a strict characterization of echo chambers.
Larson, Jennifer, Jonathan Nagler, Jonathan Ronen, and Joshua A. Tucker
· American Journal of Political Science
· 63(3): 690-705
· 2019
Pinning down the role of social ties in the decision to protest has been notoriously elusive largely due to data limitations. We examine Twitter activity during the 2015 Charlie Hebdo protests in Paris to test whether social network structure influences real-world protest participation. We find that protesters are significantly more connected to one another via direct, indirect, triadic, and reciprocated ties than comparable nonprotesters, providing empirical support for social theories linking network position to protest participation.
Barberá, Pablo, Andreu Casas, Jonathan Nagler, Patrick J. Egan, Richard Bonneau, John T. Jost, and Joshua A. Tucker
· American Political Science Review
· 113(4): 883-901
· 2019
Are legislators responsive to the priorities of the public? Research demonstrates a strong correspondence between the issues about which the public cares and the issues addressed by politicians, but conclusive evidence about who leads whom in setting the political agenda has yet to be uncovered. We answer this question with fine-grained temporal analyses of Twitter messages by legislators and the public during the 113th U.S. Congress. After employing an unsupervised method that classifies tweets sent by legislators and citizens into topics, we use vector autoregression models to explore whose priorities more strongly predict the relationship between citizens and politicians. We find that legislators are more likely to follow, than to lead, discussion of public issues, results that hold even after controlling for the agenda-setting effects of the media. We also find, however, that legislators are more responsive to their supporters than to the general public.
Klašnja, Marko, Pablo Barberá, Nick Beauchamp, Jonathan Nagler, and Joshua A. Tucker
· The Oxford Handbook of Polling and Survey Methods
· 2018
This chapter examines the use of social networking sites such as Twitter in measuring public opinion. It first considers the opportunities and challenges that are involved in conducting public opinion surveys using social media data. Three challenges are discussed: identifying political opinion, representativeness of social media users, and aggregating from individual responses to public opinion. The chapter outlines some of the strategies for overcoming these challenges and proceeds by highlighting some of the novel uses for social media that have fewer direct analogs in traditional survey work.
Tucker, Joshua A., Andrew M. Guess, Pablo Barberá, Cristian Vaccari, Alexandra A. Siegel, Sergey Sanovich, Denis Stukal, and Brendan Nyhan
· Hewlett Foundation Report
· 2018
This report provides an overview of the current state of the literature on the relationship between social media; political polarization; and political 'disinformation,' a term used to encompass so-called fake news, rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and hyperpartisan news. The review is organized into six sections examining online political conversations, exposure consequences, disinformation producers, spreading strategies, content polarization effects, and impacts on American democracy.
Jones, Kevin L., Sharareh Noorbaloochi, John T. Jost, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker
· Political Psychology
· 39(2): 423-443
· 2018
Past research using self-report questionnaires administered to ordinary citizens demonstrates that value priorities differ as a function of one's political ideology, but it is unclear whether this conclusion applies to political elites, who are presumably seeking to appeal to very broad constituencies. We used quantitative methods of textual analysis to investigate value-laden language in a collection of 577,555 messages sent from the public Twitter accounts of over 400 members of the U.S. Congress between 2012 and 2014. Consistent with theoretical expectations, we observed that Republican and conservative legislators stressed values of tradition, conformity, and national security (as well as self-direction), whereas Democratic and liberal legislators stressed values of benevolence, universalism, hedonism, and social/economic security (as well as achievement).
Siegel, Alexandra A. and Joshua A. Tucker
· Journal of Language and Politics
· 17(2): 258-280
· 2018
How successful is the Islamic State's online strategy? To what extent does the organization achieve its goals of attracting a global audience, broadcasting its military successes, and marketing the Caliphate? We analyze Twitter and YouTube data from 2015-2016, examining 16,364 suspected ISIS accounts and over 70 million tweets. We find that while ISIS maintained linguistically diverse narratives, touting battlefield victories and depicting utopian life in the Caliphate, pro-ISIS content was substantially less prevalent than counter-messaging.
Jost, John T., Pablo Barberá, Richard Bonneau, Melanie Langer, Megan MacDuffee Metzger, Jonathan Nagler, Joanna Sterling, and Joshua A. Tucker
· Advances in Political Psychology
· 39(S1): 85-118
· 2018
It is often claimed that social media platforms such as Facebook and Twitter are profoundly shaping political participation, especially when it comes to protest behavior. We examine protest movements in the United States, Spain, Turkey, and Ukraine finding that: (1) social media enables the exchange of information vital to the coordination of protest activities; (2) these platforms facilitate the exchange of emotional and motivational content; and (3) online network structures shape who and how many people are exposed to information and organizational efforts.
Klašnja, Marko, Andrew Little, and Joshua A. Tucker
· Political Science Research and Methods
· 6(3): 413-428
· 2018
Academics and policymakers recognize that there are serious costs associated with systemic corruption. Stubbornly, many countries or regions remain stuck in a high-corruption equilibrium, a "corruption trap." Most existing theories concentrate on mutually reinforcing expectations of corrupt behavior among a fixed set of bureaucrats or politicians, implying that changing such expectations can lead to lower corruption. We develop models that more fully characterize the political nature of corruption traps by also analyzing the behavior of voters and entrants to politics, as well as their interaction with incumbent politicians. We show that corruption traps can arise through the strategic behavior of each set of actors, as well as through their interrelations. By linking politician, voter, and entrant behavior, we provide an explanation for why simply trying to change expectations among one set of actors is likely insufficient for eliminating corruption traps.
Sanovich, Sergey, Denis Stukal, and Joshua A. Tucker
· Comparative Politics
· 50(3): 435-482
· 2018
We introduce a novel classification of strategies employed by autocrats to combat online opposition generally, and opposition on social media in particular. The research distinguishes between offline responses, technical content restrictions, and online engagement tactics. We document Russia's Internet policy evolution since 2000 and develop methods for detecting political bots on Twitter, analyzing over 14 million Russian-language tweets from approximately 1.3 million accounts.
Brader, Ted and Joshua A. Tucker
· Advances in Political Psychology
· 39(S1): 137-157
· 2018
What factors enable and motivate citizens to form partisan identities? Popular accounts, as well as several major theoretical approaches, attribute a central role to policy and ideological concerns in shaping the partisan orientations of voters. Information about the policy aims of parties should therefore, on average, make it easier for an individual to find a party that best fits her views, especially if she had previously been less familiar with the parties. The evidence for this is mixed, however. Plenty of studies find a robust correlation between policy views and partisanship. Yet there is mounting evidence that citizens look to parties to decide where to stand on policy issues, suggesting that partisan identification precedes policy preferences. We bring new evidence to bear by investigating directly the impact of substantive policy information on the partisan identities of ordinary citizens. To do this, we carry out a pair of original experiments across six countries, five of which have relatively young or unstable party systems. One experiment informs citizens about the policy goals of the major parties; we find little to no evidence that such information affects levels of partisanship. The other experiment tests the impact of inviting citizens to evaluate and compare their own position to the positions of the major parties, a more direct test of the sort of reasoning posited in some theoretical accounts. We find that this reflective task in fact depresses levels of partisanship, perhaps especially among those who knew less about politics and parties from the outset. This suggests that thinking about policy differences and proximities pushes citizens away from partisan attachments they form in the ordinary course of life, perhaps because such thinking generates fresh doubts or focuses attention on facets of partisan choice that matter less in typical processes of preference formation.
Metzger, Megan MacDuffee and Joshua A. Tucker
· Slavic Review
· 76(1): 169-191
· 2017
This paper examines how social media facilitated the 2013 EuroMaidan protests in Ukraine. It describes how journalist Mustafa Nayem's Facebook post of November 21, 2013, calling for protesters to gather at Independence Square if the post received 1,000 comments, would have a much larger impact on subsequent political developments than most posts that had preceded it. The protests eventually became the largest since Ukrainian independence, ultimately leading to government resignation, presidential exile, and the Crimean conflict.
Brady, William, Julian Willis, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel
· Proceedings of the National Academy of Sciences
· 114(28): 7313-7318
· 2017
Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call 'moral contagion.' Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them.
Tucker, Joshua A., Yannis Theocharis, Margaret E. Roberts, and Pablo Barberá
· Journal of Democracy
· 28(4): 46-59
· 2017
We present a framework addressing how social media simultaneously enables liberation in authoritarian contexts while being weaponized for repression and exploited by anti-system actors in democracies. Our core argument: (1) social media gives voice to people excluded from political discussion by traditional media; (2) social media platforms function as neutral tools rather than inherently democratic or authoritarian systems, capable of serving liberal or illiberal purposes.
Stukal, Denis, Sergey Sanovich, Richard Bonneau, and Joshua A. Tucker
· Big Data
· 5(4): 310-324
· 2017
Automated and semiautomated Twitter accounts, or bots, have recently gained significant public attention due to their potential interference in the political realm. We develop detection methodology using multiple classifiers to study bot activity in Russian political discussions from February 2014 to December 2015. We discover that bots produced over 50% of tweets about Russian politics during this period, and were primarily used for sharing news headlines and promoting media outlets, suggesting their main purpose was manipulating search rankings.
Tucker, Joshua A., Jonathan Nagler, Megan MacDuffee Metzger, Pablo Barberá, Duncan Penfold-Brown, and Richard Bonneau
· Computational Social Science: Discovery and Prediction (Cambridge University Press)
· p. 199-224
· 2016
The paper examines how social media influences offline political behavior, specifically protest participation. Using data from the 2013 Turkish Gezi Park protests and 2013-14 Ukrainian EuroMaidan protests, the authors analyze a wealth of evidence suggesting a potentially important role for social media in affecting protest behavior and development in both Turkey and Ukraine.
Vaccari, Cristian, Augusto Valeriani, Pablo Barberá, Richard Bonneau, John T. Jost, Jonathan Nagler, and Joshua A. Tucker
· Social Media and Society
· 2(3)
· 2016
Scholars have debated whether social media platforms, by allowing users to select the information to which they are exposed, may lead people to isolate themselves from viewpoints with which they disagree, thereby serving as political 'echo chambers.' We examine Twitter users during the 2013 German and Italian elections, finding that exposure to different political viewpoints depends on offline conversation patterns and habits of social media political engagement.
Klašnja, Marko, Kevin Deegan-Krause, and Joshua A. Tucker
· British Journal of Political Science
· 46(1): 67-94
· 2016
The article examines the relationship between corruption and voting behavior by defining two distinct channels: pocketbook corruption voting, i.e., how personal experiences with corruption affect voting behavior; and sociotropic corruption voting, i.e., how perceptions of corruption in society do so. Individual and aggregate data from Slovakia fail to support hypotheses that corruption is an undifferentiated valence issue, that it depends on the presence of a viable anti-corruption party, or that voters tolerate (or even prefer) corruption. The data support the hypothesis that the importance of each channel depends on the salience of each source of corruption, and that pocketbook corruption voting prevails unless a credible anti-corruption party shifts media coverage of corruption and activates sociotropic corruption voting. Previous studies may have underestimated the prevalence of corruption voting by not accounting for both channels.
Metzger, Megan MacDuffee, Joshua A. Tucker, Jonathan Nagler, and Richard Bonneau
· Journal of Comparative Economics
· 44(1): 16-40
· 2016
Why and when do group identities become salient? Existing scholarship has suggested that insecurity and competition over political and economic resources as well as increased perceptions of threat from the out-group tend to increase the salience of ethnic identities. We examine Twitter usage patterns in Ukraine from late 2013 through 2014 during the Euromaidan crisis and find no evidence that major political events triggered language preference reversals. Notably, both Ukrainian and Russian speakers increased their Russian-language usage after Crimea's annexation.
Vaccari, Cristian, Augusto Valeriani, Pablo Barberá, Richard Bonneau, John T. Jost, Jonathan Nagler, and Joshua A. Tucker
· Journal of Computer-Mediated Communication
· 20(2): 221-239
· 2015
Barberá, Pablo, John T. Jost, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau
· Psychological Science
· 26(10): 1531-1542
· 2015
We estimated ideological preferences of 3.8 million Twitter users and, using a data set of nearly 150 million tweets concerning 12 political and nonpolitical issues, explored whether online communication resembles an 'echo chamber.' We found that information exchange was primarily ideological for political topics but not for other current events, concluding that previous research may have overestimated how much echo chambers affect social media usage.
Barberá, Pablo, Ning Wang, Richard Bonneau, John T. Jost, Jonathan Nagler, Joshua A. Tucker, and Sandra González-Bailón
· PLOS ONE
· 10(11): e0143611
· 2015
Social media have provided instrumental means of communication in many recent political protests. The efficiency of online networks in disseminating timely information has been praised by many commentators; at the same time, users are often derided as 'slacktivists' because of the shallow commitment involved in clicking a forwarding button. We find that peripheral participants are critical in increasing the reach of protest messages and generating online content at levels that are comparable to core participants.
Powell, Eleanor and Joshua A. Tucker
· British Journal of Political Science
· 44(1): 123-147
· 2014
This article provides a detailed set of coding rules for disaggregating electoral volatility into two components: volatility caused by new party entry and old party exit, and volatility caused by vote switching across existing parties. After providing an overview of both types of volatility in post-communist countries, the causes of volatility are analysed using a larger dataset than those used in previous studies. The results are startling: most findings based on elections in post-communist countries included in previous studies disappear. Instead, entry and exit volatility is found to be largely a function of long-term economic recovery, and it becomes clear that very little is known about what causes ‘party switching’ volatility. As a robustness test of this latter result, the authors demonstrate that systematic explanations for party-switching volatility in Western Europe can indeed be found.
Pop-Eleches, Grigore and Joshua A. Tucker
· East European Politics and Society
· 27(1): 45-68
· 2013
In this article, we test the effect of communist-era legacies on the large and temporally resilient deficit in civic participation in post-communist countries. To do so, we analyze data from 157 surveys conducted between 1990 and 2009 in twenty-four post-communist countries and forty-two non-post-communist countries. The specific hypotheses we test are drawn from a comprehensive theoretical framework of the effects of communist legacies on political behavior in post-communist countries that we have previously developed. Our analysis suggests that three mechanisms were particularly salient in explaining this deficit. First, the demographic profile (including lower religiosity levels) of post-communist countries is less conducive to civic participation than elsewhere. Second, the magnitude of the deficit increases with the number of years an individual spent under communism, with effects particularly strong for people socialized in the post-totalitarian years and for those who experienced communism in their early formative years (between ages six and seventeen). Finally, we also find that civic participation suffered in countries that experienced weaker economic performance in the post-communist period, though differences in post-communist democratic trajectories had a negligible impact on participation. Taken together, we are left with a potentially optimistic picture of civic society in post-communist countries, as the evidence we present suggests eventual convergence toward norms in non-post-communist countries.
Politics of AuthoritarianismPost-Communist Politics
Journal Article
Social Media and Political Communication: A Survey of Twitter Users during the 2013 Italian General Election
Vaccari, Cristian, Augusto Valeriani, Pablo Barberá, Richard Bonneau, John T. Jost, Jonathan Nagler, and Joshua A. Tucker
· Italian Political Science Review
· 43(3): 381-410
· 2013
Online Information EnvironmentElections & Voting
Journal Article
The Economy, Corruption, and the Vote: Evidence from Experiments in Sweden and Moldova
Klašnja, Marko and Joshua A. Tucker
· Electoral Studies
· 32(3): 536-543
· 2013
Meirowitz, Adam and Joshua A. Tucker
· American Journal of Political Science
· 57(2): 478-490
· 2013
In the aftermath of the Arab Spring, a crucial question is whether popular protest is now likely to be a permanent part of Middle Eastern politics or if the protests that have taken place over the past two years are more likely to be a “one‐shot deal.” We consider this question from a theoretical perspective, focusing on the relationship between the consequences of protests in one period and the incentives to protest in the future. The model provides numerous predictions for why we might observe a phenomenon that we call the “one‐shot deal”: when protest occurs at one time but not in the future despite an intervening period of bad governance. The analysis focuses on the learning process of citizens. We suggest that citizens may not only be discovering the type or quality of their new government—as most previous models of adverse selection assume—but rather citizens may also be learning about the universe of potential governments in their country. In this way, bad performance by one government induces some pessimism about possible replacements. This modeling approach expands the formal literature on adverse selection in elections in two ways: it takes seriously the fact that removing governments can be costly, and it explores the relevance of allowing the citizen/principal to face uncertainty about the underlying distribution from which possible government/agent types are drawn.
Brader, Ted, Joshua A. Tucker, and Dominik Duell
· Comparative Political Studies
· 46(11): 1485-1517
· 2013
Political parties not only aggregate the policy preferences of their supporters, but also have the ability to shape those preferences. Experimental evidence demonstrates that, when parties stake out positions on policy issues, partisans become more likely to adopt these positions, whether out of blind loyalty or because they infer that party endorsements signal options consistent with their interests or values. It is equally clear, however, that partisans do not always follow their party’s lead. The authors investigate the impact of three party-level traits on partisan cue taking: longevity, incumbency, and ideological clarity. As parties age, voters may become more certain of both the party’s reputation and their own allegiance. Governing parties must take action and respond to events, increasing the likelihood of compromise and failure, and therefore may dilute their reputation and disappoint followers. Incumbency aside, some parties exhibit greater ambiguity in their ideological position than other parties, undermining voter certainty about the meaning of cues. The authors test these hypotheses with experiments conducted in three multiparty democracies (Poland, Hungary, and Great Britain). They find that partisans more strongly follow their party’s lead when that party is older, in the opposition, or has developed a more consistent ideological image. However, the impact of longevity vanishes when the other factors are taken into account. Underscoring the importance of voter (un)certainty, ideologically coherent opposition parties have the greatest capacity to shape the policy views of followers.
Markowski, Radoslaw and Joshua A. Tucker
· Party Politics
· 16(4): 532-548
· 2010
One of the most interesting features of the 2003 Polish referendum on European Union (EU) membership was the strong link between voting behaviour in the 2003 referendum and voting behaviour in the 2001 Polish parliamentary election. In this article, we test two competing mechanisms that could account for this finding: a responsible party model, whereby citizens’ attitudes towards EU membership would have been driven by their preferred party’s position on the issue, and a more Downsian model, whereby the existence of an unrepresented Polish Eurosceptic electorate could have driven the success of two new Eurosceptic parties in the 2001 parliamentary elections. Drawing upon data from the 1997, 2001 and 2005 Polish National Election Studies, we find much stronger empirical support for the Downsian approach. Far from being led to their Euroscepticism by party leaders as the 2003 referendum on Polish EU membership approached, voters for Poland’s Eurosceptic parties in 2001 already possessed healthy degrees of Euroscepticism, especially when compared to supporters of other parties and even to non-voters.
Brader, Ted and Joshua A. Tucker
· Politics and Policy
· 37(4): 843-868
· 2009
We consider the question of whether Russia's greatly weakened political parties might continue to exert an influence on public opinion in twenty‐first century Russia. To do so, we carried out a series of survey‐based experiments in Moscow in the spring of 2006. We present evidence showing that partisan cues increase support for public policy proposals and make it more likely that respondents will adopt a position on an issue that mirrors their party's preferred position (“opinion taking”), as well as increase the likelihood that respondents will adopt a position on a given issue at all (“opinion giving”). We also present evidence that party cues can sway the opinions of nonpartisans, though such influence may be limited to cases when the position of a party constitutes an unusually informative or credible signal. The findings should be of interest to those concerned with Russia's post‐communist political development, those interested more broadly in the effects of partisan cues on political behavior, as well as to scholars trying to characterize the nature of “competitive authoritarian” regimes.