Summary: Twitter's Algorithm Amplifying Anger and Animosity (arxiv.org)
6,412 words - PDF document
One Line
Twitter's ranking algorithm amplifies anger, animosity, and affective polarization, particularly in political tweets; incorporating user surveys and other explicit indicators of value into the recommender system could reduce the amplification of problematic content.
Key Points
- Twitter's algorithm amplifies anger, animosity, and affective polarization, particularly in political tweets.
- Algorithm-selected tweets are preferred by readers but can lead to worse perceptions of political out-groups.
- The algorithm slightly increases the ratio of out-group to in-group content seen by users.
- Social media ranking algorithms prioritize users' revealed preferences, leading to the amplification of sensationalized and clickbaity content.
- The algorithm needs to better incorporate user surveys and other explicit indicators of value to reduce the amplification of problematic content.
- The study highlights the need for algorithmic transparency and better recommendations to address issues of polarization, anger, and toxicity in online discourse.
Summaries
277-word summary
A study by researchers at the University of California, Berkeley and Cornell Tech analyzed Twitter's ranking algorithm and found that it amplifies anger, animosity, and affective polarization in users' timelines, particularly in political tweets. The research measured reader and author emotions, expression of out-group animosity, and the partisanship of tweets. The algorithm tends to amplify emotionally charged content: for political tweets it primarily amplifies anger among both authors and readers, and sadness and anxiety are also significantly amplified. It slightly increases the ratio of out-group to in-group tweets, worsens perceptions of the political out-group, and slightly improves perceptions of the in-group. Readers generally prefer algorithm-selected tweets but are less likely to perceive their political out-group positively after reading them. Because social media ranking algorithms optimize for users' revealed preferences rather than their stated preferences, they amplify sensationalized and clickbaity content; the algorithm thus appears to fall short in optimizing users' stated preferences and may contribute to greater affective polarization. Related work has found that exposure to opposing views can increase political polarization and that social media algorithms shape media usage. The authors suggest that the algorithm should better incorporate user surveys and other explicit indicators of value into the recommender system to reduce the amplification of problematic content. The study also raises questions about external validity and about the algorithm's effect on tweets about charged topics such as gun control and climate change. 
The study provides important insights into the impact of social media ranking algorithms on public discourse and democratic engagement.
630-word summary
A study conducted by the University of California, Berkeley and Cornell Tech found that Twitter's algorithm amplifies anger, animosity, and affective polarization. The algorithm tends to amplify emotionally charged content, particularly political tweets, which exhibit greater partisanship and out-group animosity. Readers generally prefer algorithm-selected tweets but are less likely to perceive their political out-group positively. The study provides important insights into the impact of social media ranking algorithms on public discourse and democratic engagement.
The study found that the algorithm slightly increases the ratio of out-group to in-group tweets, and that users see about twice as many tweets from their political in-group as from their out-group. The algorithm also worsened perceptions of the out-group for both left- and right-leaning users but slightly improved perceptions of the in-group. The study was conducted before Twitter changed its policy to grant verified status only to paid subscribers.
The study contributes to the evaluation and understanding of Twitter's ranking algorithm, which appears to fall short in optimizing users' stated preferences, potentially contributing to greater affective polarization. The recently passed EU Digital Services Act mandates that large online platforms offer a non-algorithmic way of viewing content, which may make it possible to replicate the study across multiple platforms. The study found that the algorithm amplifies emotionally charged content, particularly anger and out-group animosity, and that it leads to lower user value compared to the chronological timeline, although political tweets recommended by the algorithm are rated slightly higher. For political tweets, the algorithm primarily amplifies anger among authors and readers, while sadness and anxiety are also significantly amplified. The study raises questions about external validity and about the algorithm's effect on tweets about charged topics such as gun control and climate change.
Social media ranking algorithms optimize for users' revealed preferences rather than their stated preferences, leading to the amplification of sensationalized and clickbaity content. Users show a slight preference for algorithm-selected tweets over tweets in their reverse-chronological timeline, but it remains important to evaluate whether users actually want to see algorithm-selected tweets. The algorithm may contribute to affective polarization by selectively highlighting divisive tweets and by slightly increasing the ratio of out-group to in-group content seen by users.
The study was conducted using a Chrome extension and found evidence of a positive feedback loop in which the algorithm amplifies anger and animosity by incentivizing certain types of content over others. By changing what type of content is rewarded, the algorithm affects what is available even in the chronological timeline and what users see on their own timelines. The authors suggest that the algorithm should better incorporate user surveys and other explicit indicators of value into the recommender system to reduce the amplification of problematic content. The research measured outcomes related to emotions, expression of out-group animosity, and the partisanship of tweets, including reader and author emotions such as anger, sadness, happiness, and anxiety, as well as treatment effects on animosity and out-group perception. It also collected the demographics of Twitter users, their primary reasons for using the platform, and participants' political party leaning. Drawing on prior work showing that exposure to opposing views can increase political polarization and that social media algorithms shape media usage, the study argues that algorithmic amplification of negative emotions can have negative welfare effects, and it highlights the need for algorithmic transparency and better recommendations to address polarization, anger, and toxicity in online discourse.
1,524-word summary
The study analyzed the effect of Twitter's algorithm on political tweets. Table 4 shows the effect sizes, p-values, and adjusted p-values for all tested outcomes. The study found that Twitter's algorithm amplifies anger and animosity, especially in political tweets. The emotional effects of political tweets were measured through reader value and reader and author anger, sadness, happiness, and anxiety. The study also examined the treatment effects of Twitter's algorithm on animosity, the overall share of out-group tweets, out-group perception overall, in-group and out-group perception for right-leaning users, in-group perception for left-leaning users, and partisanship. Demographic data was collected from 1,730 study responses from 806 unique participants, including participants' primary reason for using Twitter and their political party leaning. The findings align with prior work showing that social media platforms can amplify divisive content, that out-group animosity drives engagement on social media, and that algorithmic recommendation systems raise concerns about user agency on online platforms. The study highlights the need to build human values into recommendation systems and questions whether algorithmic recommendation can be good for democracy. Related research discussed includes findings that self-selection and exposure to incivility can fuel online comment toxicity, that the sharing of misinformation is habitual rather than merely lazy or biased, and that overperception of moral outrage in online social networks inflates beliefs about intergroup hostility. 
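Standardized effect sizes like those reported in Table 4 are conventionally computed as a mean difference divided by a pooled standard deviation. A minimal sketch of that convention (Cohen's d; the paper's exact standardization, e.g. for within-subject comparisons, may differ) looks like this:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using a pooled SD.

    Generic illustration only; the paper's exact standardization
    (e.g. for paired, within-subject designs) may differ.
    """
    na, nb = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical example: reader-anger ratings under the personalized
# vs. the chronological timeline (values are illustrative).
d = cohens_d([3, 4, 5, 4], [2, 3, 2, 3])
```

Dividing by the pooled standard deviation puts outcomes measured on different scales (emotion ratings, value ratings) into comparable units.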
Algorithm-mediated social learning has also been found to amplify the diffusion of moralized content in social networks, and the literature emphasizes the need for algorithmic transparency and better recommendations to address these issues. Related work has applied false discovery rate corrections to estimates of the effect size of moral contagion in online networks. Prior research finds that exposure to opposing views can increase political polarization, that social media algorithms impact media usage, and compares the effects of face-to-face and online interactions on political polarization; together with the present results, this suggests that algorithmic amplification of negative emotions can have negative welfare effects. The study was financially supported by the UC Berkeley Center for Human-Compatible AI and received feedback from various individuals. Data collection was stopped due to budget constraints, and recruitment was limited because the study had to be flagged as requiring a software download. Standardized effect sizes were calculated for author emotion, reader emotion, and explicit value for political tweets. The authors opted to use CloudResearch Connect instead of Mechanical Turk for participant recruitment, and the hypotheses, outcome measures, and statistical analyses were pre-registered on the Open Science Framework. False discovery rate (FDR)-adjusted p-values were computed using the Benjamini-Krieger-Yekutieli two-stage method to account for multiple tests. The study measured outcomes related to author and reader emotions, authors' expression of out-group animosity, and the partisanship of tweets. The final dataset consisted of 1,730 responses from 806 unique participants; participants who passed the pre-screen but did not complete the full survey were excluded. 
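The Benjamini-Krieger-Yekutieli two-stage procedure controls the false discovery rate by first estimating the number of true null hypotheses with a Benjamini-Hochberg pass at a deflated level, then rerunning the step-up test at a correspondingly inflated level. A simplified sketch of the decision rule (an illustration, not the paper's analysis code) might look like:

```python
def bh_rejections(pvals, alpha):
    """Benjamini-Hochberg step-up: count of hypotheses rejected at level alpha.

    Rejects the k smallest p-values, where k is the largest index i
    (1-based, over sorted p-values) with p_(i) <= alpha * i / m.
    """
    m = len(pvals)
    k = 0
    for i, p in enumerate(sorted(pvals), start=1):
        if p <= alpha * i / m:
            k = i
    return k

def bky_two_stage(pvals, alpha=0.05):
    """Benjamini-Krieger-Yekutieli two-stage FDR procedure (simplified).

    Returns a list of booleans, True where the hypothesis is rejected.
    """
    m = len(pvals)
    a1 = alpha / (1.0 + alpha)          # stage 1: deflated BH level
    r1 = bh_rejections(pvals, a1)
    if r1 == 0:
        return [False] * m              # no rejections at stage 1
    if r1 == m:
        return [True] * m               # everything rejected at stage 1
    a2 = a1 * m / (m - r1)              # stage 2: level inflated by estimated
    k = bh_rejections(pvals, a2)        # number of true nulls (m - r1)
    thresh = sorted(pvals)[k - 1] if k else -1.0
    return [p <= thresh for p in pvals]
```

The two-stage version is less conservative than plain Benjamini-Hochberg when many hypotheses are truly non-null, which matters in a study testing this many outcomes at once.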
Attention checks were included, and participants were asked questions about each tweet from the personalized and chronological timelines in a random order. The study used a Chrome extension to collect the top ten tweets from participants' personalized Twitter timelines. The study period was broken into four intervals, and up to 150 eligible participants were recruited daily. The study found evidence of a positive feedback loop in which the algorithm amplifies anger and animosity by incentivizing certain types of content over others: people produce more of what the algorithm favors through at least three mechanisms, namely intentional strategic adaptation, observational learning, and reinforcement. In this way the algorithm changes what type of content is available even in the chronological timeline and affects what users see on their own timelines. The study suggests that the algorithm needs to better incorporate user surveys and other explicit indicators of value into the recommender system to reduce the amplification of problematic content. Social media ranking algorithms, including Twitter's, tend to optimize for users' revealed preferences rather than their stated preferences, which can lead to the amplification of sensationalized and clickbaity content that users often explicitly say they do not want to see. Twitter's algorithm predicts and serves users the content they are most likely to engage with, but the authors' measurement of users' stated preferences highlights a blind spot in machine learning research on recommendation systems. Users have a very slight preference for algorithm-selected tweets over tweets in their reverse-chronological timeline, but it is important to evaluate whether users actually want to see algorithm-selected tweets. 
The algorithm may contribute to affective polarization, i.e., the tendency for partisans to dislike and distrust those from the other side, not by limiting exposure to the other side but by selectively highlighting divisive tweets: it actually slightly increases the ratio of out-group to in-group content seen by users. Prior studies have found little evidence for the existence of filter bubbles, in which users receive limited exposure to their political out-group, and subsequent studies have found that exposure to counter-attitudinal views can actually increase rather than decrease polarization. Most prior studies focus on the effect of social media in general on polarization rather than specifically on the effect of the ranking algorithm. The present randomized experiment provides causal evidence that Twitter's algorithm amplifies emotionally charged content, particularly tweets expressing anger and out-group animosity. The algorithm also leads to lower user value compared to the chronological timeline, although political tweets recommended by the algorithm are rated slightly higher. For political tweets, the algorithm primarily amplifies anger among authors and readers, while sadness and anxiety are also significantly amplified; this raises questions about external validity and about the algorithm's effect on tweets about charged topics such as gun control and climate change. By amplifying tweets containing negative emotions, particularly anger, along with out-group animosity and partisan political content, the algorithm may contribute to increased polarization. Participants reported feeling worse about their political out-group after reading tweets from the personalized algorithm. 
The algorithm worsened perceptions of the out-group for both left- and right-leaning users but slightly improved perceptions of the in-group. The study was conducted before Twitter changed its policy to grant verified status only to paid subscribers. Participants were asked to assess whether tweets were about political or social issues, and the algorithm was found to slightly increase the ratio of out-group to in-group tweets. Users see about twice as many tweets from their political in-group relative to their out-group, and personalized timelines have a higher representation of likes, retweets, and followers, although the algorithm does not necessarily favor the most popular accounts. There is a higher representation of Democrats in the sample than in the general population; a prior survey found that 42% of Twitter users have at least a Bachelor's degree, and the majority of the study's sample leans Democrat and is male. The study surveyed participants about their own timelines, presenting them with both personalized algorithm-selected tweets and a reverse-chronological feed. The algorithm is designed to boost user engagement, but users are less likely to prefer algorithm-selected political tweets. The study contributes to the evaluation and understanding of Twitter's ranking algorithm, which appears to fall short in optimizing users' stated preferences, potentially contributing to greater affective polarization. The recently passed EU Digital Services Act mandates that large online platforms offer a non-algorithmic way of viewing content, which may make it possible to replicate the study across multiple platforms. A controlled experiment was conducted to understand the impact of Twitter's ranking algorithm on emotional content and affective polarization. 
The algorithm tends to amplify emotionally charged content and may play a role in increasing affective polarization. Political tweets shown by the algorithm exhibit greater partisanship and out-group animosity, leading to increased emotional responses, especially anger. Interestingly, readers generally prefer algorithm-selected tweets but are less likely to perceive their political out-group positively. To experiment on Twitter's algorithm without internal access, the researchers recruited a group of active Twitter users, collected the tweets served by their personalized algorithmic timelines, and compared them to the latest tweets from the accounts they follow, in order to understand the impact of machine learning algorithms that filter and curate content. Understanding the impact of social media on public opinion is crucial as it continues to exert significant influence. The study, conducted by the University of California, Berkeley and Cornell Tech, provides important insights into the impact of social media ranking algorithms on public discourse and democratic engagement.