As many as forty percent of social media users have been harassed online, yet there is scarce causal evidence of how toxic content affects user engagement and whether toxicity is contagious. In a pre-registered field experiment, we recruited participants to install a browser extension and randomly assigned them either to a treatment group, in which the extension automatically hides toxic text content on Facebook, Twitter, and YouTube, or to a control group without hiding. In the first stage, the extension, which relies on state-of-the-art toxicity detection tools, classified 6.6% of the content displayed to users as toxic and hid it from the treatment group over a six-week period. Lowering exposure to toxicity reduced content consumption on Facebook by 23% relative to the mean, beyond the mechanical effect of our intervention. We also report a 9.2% drop (relative to the mean) in ad consumption on Twitter, the platform where this metric is available. Additionally, the intervention reduced the average toxicity of content posted by users on Facebook and Twitter, evidence that toxicity is contagious. Taken together, our results suggest a trade-off faced by platforms: they can curb users' toxicity, but at the expense of content consumption.
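The abstract does not describe the extension's implementation; purely as an illustrative sketch, a content script for such an extension might score the text of each post with a toxicity classifier and hide posts above a cutoff. Everything named below is a hypothetical placeholder: `classifyToxicity`, the `https://example.invalid/toxicity` endpoint, the `[data-post]` selector, and the 0.5 threshold are assumptions for illustration, not the authors' tooling (the paper says only that state-of-the-art toxicity detection tools were used).

```typescript
// Illustrative sketch only: hide posts whose text a toxicity classifier
// flags. Not the authors' implementation.

const TOXICITY_THRESHOLD = 0.5; // hypothetical cutoff, not from the paper

// Hypothetical classifier: returns a toxicity score in [0, 1] for a text.
// A real extension would call an actual toxicity-scoring model or API here.
async function classifyToxicity(text: string): Promise<number> {
  const response = await fetch("https://example.invalid/toxicity", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  const { score } = (await response.json()) as { score: number };
  return score;
}

// Scan candidate post elements and hide those classified as toxic.
async function hideToxicPosts(postSelector: string): Promise<void> {
  const posts = document.querySelectorAll<HTMLElement>(postSelector);
  for (const post of posts) {
    const text = post.innerText.trim();
    if (!text) continue;
    const score = await classifyToxicity(text);
    if (score >= TOXICITY_THRESHOLD) {
      post.style.display = "none"; // hide, rather than delete, the content
    }
  }
}

// Feeds load posts as the user scrolls, so re-scan on DOM changes.
const observer = new MutationObserver(() => {
  // "[data-post]" is a placeholder; real selectors are site-specific.
  void hideToxicPosts("[data-post]");
});
observer.observe(document.body, { childList: true, subtree: true });
```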