Two Ways to Promote Positivity and Disrupt Echo Chambers

Social media algorithms are the unseen forces modifying our minds and swaying our societies. Most of us have no idea how they work; we just accept their results. Only a few programmers, product managers, and executives know the full details, and even fewer can change these systems. Yet they have an immense influence on our world: in 2019, around 72% of adults in the United States used at least one social media site (Pew Research). These algorithms especially shape our individual mental lives, determining who and what we are exposed to. Should these algorithms take a stand on which emotions are better than others? Should they, for example, promote joy and compassion above anger? I think the answer is yes. I’ll also argue that these algorithms should disrupt echo chambers by introducing semi-random content into newsfeeds.

Weighing Love above Anger?

Reactions on Facebook (anything beyond a ‘like’) are weighted more heavily by the newsfeed algorithm. If you react to a post with an emotion, the algorithm is more likely to show you similar content than if you simply like it. But right now – as far as we know – the algorithm weights all of the different emotions equally (Hootsuite). That should change: anger’s weight should be reduced, and positive emotions should carry more weight.

What would this look like in practice? The algorithm could weight love, laughter, and care highest, then surprise, then sadness, with anger lowest among the emotions. Mere likes would remain below all of them, as they represent the weakest engagement. This would make newsfeeds more positive, promote better thinking, and perhaps reduce divisiveness. It might counteract the dominance of anger on social media.
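To make this concrete, here is a minimal sketch in Python of what emotion-weighted ranking could look like. The specific weights, the reaction names, and the score_post function are all my own illustrative assumptions; Facebook has not published its actual values.

```python
# A minimal sketch of emotion-weighted post ranking.
# All weights below are hypothetical illustrations, not Facebook's real values.

REACTION_WEIGHTS = {
    "love": 2.0,      # positive emotions weighted highest
    "laughter": 2.0,
    "care": 2.0,
    "surprise": 1.5,
    "sadness": 1.2,
    "anger": 0.8,     # anger deliberately down-weighted
    "like": 0.5,      # a mere like signals the weakest engagement
}

def score_post(reactions):
    """Score a post by summing its reaction counts, weighted by emotion."""
    return sum(REACTION_WEIGHTS.get(name, 0.0) * count
               for name, count in reactions.items())

# Under equal weighting, the angrier post would win on raw engagement (120 vs 70
# reactions). Under emotion weighting, the joyful post outranks it.
angry_post = {"anger": 100, "like": 20}     # 100*0.8 + 20*0.5 = 90.0
joyful_post = {"love": 60, "laughter": 10}  # 60*2.0 + 10*2.0 = 140.0
print(score_post(angry_post), score_post(joyful_post))
```

The exact numbers don’t matter; the point is that any choice of weights – including equal ones – is a value judgment.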

Social media is already filled with emotional contagion. Social media networks tend to create clusters of people who experience synchronized waves of similar emotions (Coviello et al 2014). Do we have to let anger spread like an unfettered pandemic? Or can we encourage more positive emotions instead?

Right now, anger-producing content spreads most quickly on social media. One study found that angry content is more likely to go viral, followed by joy, while sad or disgust-provoking content results in the most subdued reactions (Fan et al 2013). This rewards fringe content, ‘fake news,’ or stuff that makes people mad – and might explain why Fox News is the top publisher on Facebook by total engagement.

A social media network colored by emotions (from Fan et al 2013). Notice the density and clustering of the red (anger) networks, while green (joy) is more scattered and noisy. The graph also includes black (disgust) and blue (sadness).

Our psychology encourages us to react rapidly to anger and fear, and to share bad news. This has evolutionary advantages: information about potential dangers circulates swiftly. But it also arguably hurts our society and encourages reactive, angry, tribalist, and even hateful thinking. Anger-producing content activates Kahneman’s System 1 – the fast, instinctive, emotional side of the mind – and suppresses System 2, the more deliberate, slower, and more careful side. For example, research finds that angry people are more likely to stereotype, rely more on simple heuristics, and base their judgments more on who delivers a message than on its actual content (Bodenhausen et al 1994). Anger clouds thinking.

“Angry people are not always wise.”

― Jane Austen, Pride and Prejudice

On the other hand, positive emotions make our thinking more creative, integrative, flexible, and open to information (Fredrickson 2003). And these emotions speak for themselves: they just feel better. This also translates into health benefits. People who experience more positive emotions tend to live longer (Abel & Kruger 2010). Meanwhile, anger increases blood pressure and leads to a host of other harmful physiological effects associated with stress (Groër et al 1994). Positive emotions like compassion enhance the immune system, while anger weakens and inhibits immune reactions (Rein et al 1995). Laughter alone improves immune responses (Brod et al 2014). Promoting more positive emotions on social media would not only improve our thinking and reduce waves of anger-fueled divisiveness and misinformation; on a population level, it could also promote health and longevity. It might even slightly strengthen immune systems and enhance our resilience to the current pandemic.

Social media as it exists currently amplifies the voices of the angriest. It doesn’t need to be that way.

Objections to this Change

I hold this idea tentatively; there are many complexities to take into account. Any algorithm change would likely have countless unknown and unintended consequences, and it’s hard to know what externalities this one would create. For example, if sadness is given a lower weight, important news about sad events around the world (e.g. the genocide in Myanmar) would become even more obscure and hidden. Favoring positive emotions may produce ignorance of a different kind: as a positivity-favoring algorithm glosses over or suppresses information associated with negative emotions, we may become less aware of problems in the world.

My response to this is simple. First, yes, changing the algorithm will be hard and complex. But that is not an argument against the change; it just means the developers and decision-makers who implement the new algorithm need to be careful and forward-looking. The change should be tested thoroughly before it’s rolled out globally. Data scientists should examine the effects of the new algorithm and look for subtle unintended consequences. Facebook should also be transparent about the changes and pay close attention to user feedback. My assumption in this post is that Facebook will not botch the changes and will take all of these sensible precautions and more.

Some people might argue this change would be too paternalistic: Facebook should not intervene to promote our supposed best interests, and social media networks should not take a position on which human emotions are ‘better.’ However, Facebook is already taking an implicit stance. The structure of social media already favors anger, and accepting the default is taking a tacit stand in favor of an anger-promoting system. It would be impossible for Facebook to be completely neutral on this issue; any algorithm will inevitably favor some emotions over others. So why not take a stand for more positive emotions? This would not infringe on anyone’s freedoms. If anything, it would liberate us from the restrictive, rationality-undermining, mind-consuming effects of anger.

This intervention is better understood as a ‘nudge.’ In their book Nudge, the economist Richard Thaler and legal scholar Cass Sunstein argue that there are many societal changes we can make that don’t reduce anyone’s freedom but promote better choices. For example, managers of school cafeterias can put healthier foods at eye level while placing junk food in spots that are harder for kids to reach. This doesn’t restrict choice – the kids can still get the junk food – but it influences choice in a positive direction. The authors use the term ‘choice architecture’ for any system that shapes our decisions. Choice architectures inevitably favor some choices over others: if the junk food were placed at eye level instead, that would encourage more unhealthy choices.

On social media, the choice architecture is the social media feed that presents a range of choices (posts) to interact with. No architecture is neutral, and right now, Facebook’s algorithm favors more anger-promoting choices. Modifying the architecture to favor positive emotions like love and empathy would not infringe on freedoms. It would only nudge our choices in a better direction by presenting more positivity-promoting content. It would even enhance our freedom by preventing our brains from being hijacked by anger.

Archipelagos of Echo Chambers

Online social networks are made up of countless small islands of thought, often called echo chambers. The more extreme the thoughts, the wider the gulf that separates an island from the rest of the world, and the more insular the island becomes. Research shows that political groupings which are further apart ideologically interact less, and that individuals at the extreme ends of the ideological scale are especially likely to form echo chambers (Bright 2017). On climate change, for example, most people are segregated into “skeptic” or “activist” groups (Williams et al 2015). People within chambers tend to accept & spread clearly false information if it confirms the group’s beliefs (Bessi et al 2015). Social media has almost certainly contributed to today’s extreme ideological polarization.

The borders between chambers are often hard to see from within a chamber. But these borders are psychologically enforced. People engage less with cross-cutting content from outside their echo chamber (Garimella et al 2018). This same study found that people who create more bipartisan, cross-cutting content pay the price of less engagement. And people tend to interact positively with people within their group, while interactions with outsiders are more negative. The people who build rafts & attempt to sail over to neighboring islands are met with either silence or a flurry of arrows. Bridging the gaps between echo chambers is not easy.

Randomness to Disrupt Echoes

I have a very simple suggestion for breaking up echo chambers: the algorithm should introduce some randomness – content chosen without input from the normal newsfeed algorithm. It might be semi-randomly selected from people we follow, or even from beyond our limited social networks. This increases novelty and introduces us to content we wouldn’t expect. It reduces echo chambers by exposing us to content outside our well-curated worlds, and it encourages more open and critical thinking. Plus, Facebook’s machine learning algorithms may learn more from our reactions to this novel information and offer better content in the future.
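As a sketch of how this might work: reserve a small fraction of feed slots for ‘wildcard’ posts drawn from outside the user’s usual bubble. Everything below is an illustrative assumption on my part – the function names, the 10% injection rate, and the idea of a separate wildcard pool are hypothetical, not a description of any real system.

```python
import random

def build_feed(ranked_posts, wildcard_pool, epsilon=0.1, feed_size=20):
    """Fill a feed mostly from the normal ranking, but give each slot an
    epsilon chance of holding a semi-random 'wildcard' post drawn from
    outside the user's curated network (e.g. friends-of-friends,
    unfollowed pages, other communities)."""
    ranked = list(ranked_posts)   # posts ordered by the usual algorithm
    pool = list(wildcard_pool)    # candidate posts from outside the bubble
    feed = []
    for _ in range(feed_size):
        # Take a wildcard with probability epsilon (or when ranking runs dry).
        take_wildcard = pool and (not ranked or random.random() < epsilon)
        if take_wildcard:
            feed.append(pool.pop(random.randrange(len(pool))))
        elif ranked:
            feed.append(ranked.pop(0))
    return feed
```

Readers familiar with reinforcement learning will recognize this as an epsilon-greedy exploration strategy applied to a newsfeed: mostly exploit the ranking, occasionally explore. A side benefit is that users’ reactions to wildcard posts give the recommender exactly the out-of-bubble feedback its own suggestions never generate.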

Randomness would help bridge the divides between information islands on social media. Right now, the only thing that interrupts the insularity of these islands is the people who dare to cross the seas between them – and that crossing is disincentivized, since people who bridge echo chambers are often spurned or ignored, while people who stay on their islands are rewarded with engagement and shares. Adding a random element to the newsfeed is like adding a spaceship to the social media archipelago, picking up information from one island and dropping it onto another. Like the San people in the South African comedy The Gods Must Be Crazy, who must deal with a Coca-Cola bottle that falls from the sky, we’ll encounter unpredictable content that disrupts our comfortable and restrictive echo chambers.
