- cross-posted to:
- technology@lemmy.world
cross-posted from: https://feddit.org/post/341702
Once upon a time, newly minted graduates dreamt of creating online social media that would bring people closer together.
That dream is now all but a distant memory. In 2024, there aren’t many ills social networks don’t stand accused of: the platforms are singled out for spreading “fake news”, for serving as Russian and Chinese vehicles to destabilise democracies, as well as for capturing our attention and selling it to shadowy merchants through micro-targeting. The popular success of documentaries and essays on the allegedly huge social costs of social media illustrates this.
Studies suggest that if individuals regularly clash over political issues online, this is partly due to psychological and socioeconomic factors independent of digital platforms.
In economically unequal and less democratic countries, individuals are more often the victims of online hostility on social media (insults, threats, harassment, etc.), a phenomenon that seems to derive from the frustrations generated by more repressive social environments and political regimes.
Individuals who indulge most in online hostility are also those who score higher in status-driven risk taking. This personality trait corresponds to an orientation towards dominance, i.e., a propensity to seek to bend others to one’s will, for instance through intimidation. According to our cross-cultural data, individuals with this type of dominant personality are more numerous in unequal and non-democratic countries.
Similarly, independent analyses show that dominance is a key element in the psychology of political conflict, as it also predicts more sharing of “fake news” mocking or insulting political opponents, and more attraction to offline political conflict, in particular.
In summary, online political hostility appears to be largely the product of the interplay between particular personalities and social contexts repressing individual aspirations. It is the frustrations associated with social inequality that have made these people more aggressive, activating tendencies to see the world in terms of “us” vs “them”.
On a policy level, if we are to bring about a more harmonious Internet (and civil society), we will likely have to tackle wealth inequality and make our political institutions more democratic.
Recent analyses also remind us that social networks operate less as a mirror than as a distorting prism for the diversity of opinions in society. Outraged and potentially insulting political posts are generally written by people who are more motivated to express themselves and more radical than the average person, whether it’s to signal their commitments, express anger, or mobilise others to join political causes.
Even when they represent a relatively small proportion of the written output on the networks, moralistic and hostile posts tend to be promoted by algorithms programmed to push forward content capable of attracting attention and triggering responses, of which divisive political messages are an important part.
On the other hand, the majority of users, who are more moderate and less dogmatic, are more reluctant to get involved in political discussions that rarely reward good faith in argumentation and often escalate into outbursts of hatred.
Social media use seems to contribute to increasing political hostility and polarisation through at least one mechanism: exposure to caricatural versions of the political convictions of one’s rivals.
The way in which most people express their political convictions – both on social media and at the coffee machine – is rather lacking in nuance and tactfulness. It tends to reduce opposing positions to demonised caricatures, and is less concerned with persuading the other side than with signaling devotion to particular groups or causes, galvanising people who already agree with you, and maintaining connections with like-minded friends.
Wow what a thought provoking question that surely nobody has ever considered. Is this article from 2008?
It’s a scientific study. It’s useful because it provides additional evidence for something that we all suspect is occurring, and they don’t find exactly the results that you might expect.
And that nobody… Cambridge Analytica!
Yes. I thought that was obvious.
I am going to need your 50 point summary of those obvious points in the longest form possible by this afternoon so I can be completely convinced that I have already made up my mind in the correct way. Thanks.
I went on Facebook as an experiment for a couple of weeks, to try it out again and even take part.
The algorithm quickly caught on to some of my interests: transit, trains, Taylor Swift, and EVs.
It was fine for a while, made a few comments, engaged with a few people, both who agreed and not.
All of a sudden over the last week I’m seeing just pure propaganda - BS “headlines” like “50% of Americans regret buying their EV”. Absolutely unproven horseshit, but there it is.
Facebook is absolutely culpable in this mess. They straight up promote it, and even though I was pro all of that stuff, it switched on me.
On a policy level, if we are to bring about a more harmonious Internet (and civil society), we will likely have to tackle wealth inequality and make our political institutions more democratic.
Really burying the lede there.
Another big factor is a lack of moderation, as well as the things that get platformed. People sit and stew in a broth of violent, hateful rhetoric all day long because platforms not only allow that kind of content, but massively profit from it. We seriously need social media companies, podcast apps, YouTube and other video hosting sites, etc. to step the fuck up and deplatform misinformation, disinformation, bigotry, hatemongering, and ragebait. Do not give it light. Do not give it oxygen. Smother it. Ban it. And be extremely aggressive about it. You wanna cry about censorship? Go ahead. But the First Amendment doesn’t apply to private companies, and you are also free to set up your own website/service/app/whatever and spew your own bile there.
We further need social media companies to change their algorithms to prevent them from rewarding inflammatory posts. People want to have millions of followers and be big stars on the web, and ragebait, lies, and misinformation are perfect ways to do that. It gets you to the top of the heap because there’s no such thing as a “bad” click. You still watched it. You still replied (even if to refute it). And that gave it a boost. That shit has to stop. The entire social landscape is built on top of this inflammatory foundation.
If the headline is a question, the answer is always no.
Tackling wealth inequality is why we keep getting further polarized. Everything is either about maintaining the status quo, or challenging it. It’s very clearly not the best thing, but I don’t think we can stop at this point.
I’m sorry, are you suggesting that allowing wealth inequality is the best course of action, simply because it’s more harmonious than combating it?
and the only voting options are maintaining the status quo or making it worse
I highly recommend you visit your local library and request/check-out a copy of the book Polarization by Nolan McCarty. Read that.
Interesting article. The results of the author’s research are consistent with my understanding of the social media landscape in countries like the Philippines, which I believe are extremely toxic and partisan. However, I’m not sure the additional studies linked support the argument that social media does not increase polarisation. I’ve only read the abstracts of each, so it’s quite possible I’m overlooking something, but they immediately seem flawed due to their reliance on consenting participants. I would have thought that anyone who agrees to take part in such a study is clearly an outlier within society and therefore not a reliable test subject. Polarisation via algorithm relies on people being unwittingly exposed to content; if they’re switched on enough to deactivate their social media accounts or disable re-shares as part of a study into political polarisation, they are clearly not representative of society at large.
Nope, not really. People were already mad, but it’s a lot easier to get mad publicly on the internet than in person. I’m sure the same people could get just as angry watching biased news channels; they just can’t start arguments with anyone in that context.
And also, don’t forget Betteridge’s Law of Headlines.
The answer is yes and with significant effect. I just barely skimmed this article but this doesn’t seem to be focusing on the important factor: Algorithmic content feeds.
Modern-day social media platforms (things like Facebook, Reddit, YouTube, X, etc.) are all set up with one goal in mind: make as much money as possible. This in itself isn’t a problem depending on who you ask, but let’s pick one social media platform as an example and see why this eventually causes political polarization.
For this demonstration, I will pick Facebook, but this could just as easily be done with any free, ad-supported website/app.
Okay, so to reach their goal of getting as much money as possible, Facebook shows ads between posts. Companies pay Facebook to show those ads to people, with the promise that they will be shown to people that fit a demographic that would be interested in the product. When the ad is viewed by enough people, Facebook will stop running the ad unless the company pays again.
Now that we know how they make money, let’s look at how they ensure they get as many people to view as many ads as possible. This mostly boils down to a few metrics.
- Time spent on the platform
- Engagement (views, link clicks, comments, likes, messages, posts, etc.)
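To make the incentive concrete, here’s a toy sketch (entirely my own invention, not Facebook’s actual code; the weights and post fields are made up) of a feed that ranks purely on predicted engagement. Notice that nothing in the score penalizes a post for being misleading; an angry reply counts exactly as much as an approving one.

```python
# Toy engagement-ranked feed. All weights and field names are hypothetical,
# chosen only to illustrate the incentive structure described above.

def engagement_score(post):
    """Sum weighted interactions. There is no term for accuracy or civility:
    a furious rebuttal in the comments boosts a post as much as praise."""
    weights = {
        "views": 1,
        "clicks": 5,
        "comments": 10,  # replies (even refutations) score highest
        "likes": 3,
        "shares": 8,
    }
    return sum(weights[k] * post.get(k, 0) for k in weights)

def rank_feed(posts):
    # Show the most engagement-heavy posts first.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm",     "views": 100, "likes": 10},
    {"id": "ragebait", "views": 100, "likes": 5, "comments": 40, "shares": 20},
]
print([p["id"] for p in rank_feed(posts)])  # the ragebait post wins
```

Under a scoring rule like this, the “no such thing as a bad click” problem falls out immediately: the only way a moderate post beats an inflammatory one is by out-engaging it.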
If you spend more time on Facebook, you will see more ads. To maximize time spent on the platform, Facebook keeps track of everything you do, both on their site and off. I won’t go into specifics here, but they utilize web cookies to keep track of your browsing history and things like app permissions to keep track of your location and what you do on your phone. From this data, and potentially other data on you that they purchase from data brokers, they build a pretty good profile on what you would be interested in seeing. They show you relevant ads and relevant posts to hopefully keep you on their site.
Keeping engagement high means you are more likely to click on an ad, which pays out more than a view for an ad. To ensure you are fully engaging with content, as discussed above, Facebook keeps track of what you like to view and interact with, and puts that in front of you. However, Facebook also knows what type of content garners more interaction.
This is where the whole system leads to political polarization. There are two types of content that bring the most engagement: controversy and content designed to make you angry. So what does Facebook do? It throws the most controversial, rage-baity article that makes your political opponents seem like absolute monsters in front of you. Oftentimes, these posts are really misleading and full of either deliberate or non-malicious misinformation. These posts get people riled up, and so they are very likely to engage with the post. And because Facebook knows that you are less likely to stay on the site if it shows you something that you don’t engage with, it avoids showing you posts that show the other side of the story, so you are caught in an echo chamber of your own ideas and the misinformation of the outrage-inducing posts you have seen.
Facebook won’t show you posts that are on situations where you and your political opponents actually agree, because if it doesn’t get you worked up, you aren’t likely to engage with it. They also won’t show you posts that have a majority of engagement from your political opponents, since it’s likely not something that the data profile they have on you suggests you’d like.
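The filtering step described above can be sketched the same way. This is a hypothetical model (no platform publishes its real logic; the field names and threshold are assumptions for illustration): drop any post whose engagement comes mostly from the other political camp.

```python
# Toy echo-chamber filter: a post only survives if enough of its engagement
# came from users who lean the same way you do. Purely illustrative.

def personalize(posts, user_leaning, threshold=0.5):
    """Keep posts where at least `threshold` of engagement shares
    the user's leaning; out-group content is silently dropped."""
    feed = []
    for post in posts:
        same = post["engagement_by_leaning"].get(user_leaning, 0)
        total = sum(post["engagement_by_leaning"].values())
        if total and same / total >= threshold:
            feed.append(post)
    return feed

posts = [
    {"id": "in-group outrage",  "engagement_by_leaning": {"left": 90, "right": 10}},
    {"id": "common ground",     "engagement_by_leaning": {"left": 50, "right": 50}},
    {"id": "out-group outrage", "engagement_by_leaning": {"left": 5,  "right": 95}},
]
print([p["id"] for p in personalize(posts, "left")])
# out-group content never reaches this user
```

Note that in this toy model the “common ground” post survives the filter, but paired with engagement-based ranking like the earlier sketch, it still loses out to in-group outrage, which is the point of the paragraphs above.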
News content that shows both sides agreeing is already hard to find, considering that the news sites also know that rage-inducing content gets more views and more eyes on their ads, so they primarily focus on making controversial content anyway.
Enough of this over time will make you think that everyone on the Internet agrees with you, since Facebook doesn’t show you content that those who oppose your ideas are engaging with. This type of situation supports an us-vs-them mentality and breeds pockets of social media where either left-leaning or right-leaning content is all that’s being shown, which breeds political polarization.
Thanks for coming to my TED Talk, sorry it was so long.
tl;dr: Social media exists to make its owning companies money, politically polarizing content gets them more money, thus in a way social media exists to make politically polarizing content.
I agree with you.
And you’re right that the article doesn’t focus on the algorithmic hate factory, which to me is the main difference between social media and traditional media. For instance, and this is just anecdotal, my grandma, who had nothing besides an analog telephone and broadcast TV, became just as polarized and angry as someone with social media just by reading and watching Fox News (and eventually OAN and Newsmax) all day. I can’t imagine that Facebook would have made it any worse.
The algorithm is probably accelerating the polarization pipeline, but I guess my point was that social media isn’t necessarily doing anything new or distinct. It’s doing the same thing Rush Limbaugh was doing on the radio 25 years ago, just on a new frontier.
The 24-hour news cycle was already throwing up sensational, controversial stories and speculating wildly, if not outright lying, to hold on to eyeballs. The longer you watch, the more commercials you see. Etc., etc.
I would love to see a study of social media vs traditional media to see whether the mean time to full polarization changes and if so, how significantly.
Good Ted talk!