If we’re serious about social cohesion, these big tech hate machines must be reined in


Opinion

Founder of Teach Us Consent

In a California court, tech billionaires have been struggling to refute claims that their social media platforms were designed to be addictive, in the face of growing evidence and the lived experience of millions. Specifically, there are concerns about "infinite scroll", "autoplay" and "dangerous and addictive algorithms".

As we await this landmark decision, we should also consider how these products have become a threat to democracy, social cohesion and freedom of information, and begin to weigh remedies.

Mark Zuckerberg arrives at Los Angeles Superior Court on Wednesday to give evidence in a landmark trial over social media addiction. Credit: Bloomberg

At first, social media challenged the oligarchical, profit-motivated, self-censored legacy media system by providing an alternative and equal platform for voices. But now the biggest social media companies are likewise owned by a handful of tech billionaires, and they control the digital infrastructure through which we communicate, organise and access information to an unprecedented degree.

While the algorithm may give the illusion of freedom, the conditions it operates on distort access to information and inhibit free speech. Content may not be removed completely, but private entities now mediate public and interpersonal discourse through their own set of rules that prioritise, withhold, promote and censor, based on the desires of big tech. These desires may morph with ideologies or profit incentives, but never with transparency.


This is a threat to both democracy and social cohesion. Echo chambers have been established, and while the algorithm largely keeps us within them, it tokenistically shows snippets of content from the extreme other side of the spectrum to intentionally inflame, and to reinforce that the chamber we are in is the "correct" one. Mis- and disinformation are commonplace on our screens, and those paying attention fear this has led us into a post-truth era, where emotional resonance and personal conviction outweigh empirical evidence in shaping beliefs and decisions.

The (highly lucrative) technical infrastructure curating what we see on our screens is designed to prioritise content that provokes the strongest emotional reactions. Outrage, lies, jealousy, fear and anger all drive higher engagement, almost always on the basis of “us” and “them”. This creates an environment where hateful content doesn’t merely exist – it succeeds and dominates, at the expense of social cohesion.

A sketch of Zuckerberg as he took the stand in the bellwether trial against Meta. Credit: AP

The content people are fed doesn't start with overt calls to violence; rather, it starts with grievance, and the argument that something they're entitled to has been taken from them. It also tends to begin with misogyny: research has found that it takes only 23 minutes for social media to show boys and men misogynistic content upon sign-up, regardless of their viewing preferences.

Hatred of all kinds – misogyny, xenophobia, Islamophobia, antisemitism, racism – follows the same pattern: it intensifies during periods of rapid social change, blames another group for one’s problems, and thrives on harmful stereotypes and false narratives. These forms of hate don’t exist in isolation. A growing body of research shows the structural similarities between misogynistic extremism and violent extremism more broadly. This is unsurprising when you consider both as forms of male violence. Disenfranchised men are persistently served content that identifies the “other” as the source of their problems, whoever that may be – women, Muslims, Jewish people, refugees. The list is endless, and it frequently intersects.


Research has shown that Australians who hold hostile attitudes towards women are much more likely to support various forms of violent extremism. Academics at the University of Melbourne have warned that racial and gendered bias as a driver of radicalisation and violent extremism is "a significant, but overlooked security concern for Australia".

A key finding of the Royal Commission of Inquiry into the Attack on Christchurch Mosques was that the Australian white supremacist was radicalised on YouTube and in other algorithm-driven online spaces. We also know, from recent investigations, that neo-Nazi and other extremist groups recruit through misogynistic online spaces and forums, and that recruitment is at a record high.

The most dangerous radicalising force is no longer always a "group" we can monitor and intercept. Instead, it can now be intangible, shapeshifting, invisible and omnipresent all at once. Algorithms do not recruit in the traditional sense; instead, they recommend, reinforce and escalate. They do this simultaneously, at scale and in private, and their tendency to promote hate, alongside mis- and disinformation, is one of the biggest threats to democracy and social cohesion in Australia today.

In the mid-2000s, Australian mobile phone plans began offering free text contracts, changing the way we were able to communicate – without cost. This seemingly small technological lever allowed the chain text "get down to Cronulla for w*g bashing day" to spread at what was (in 2005) unprecedented speed. Compared with technological developments today, this was a snail's pace. Now hate – misogyny, Islamophobia, antisemitism, white supremacist material – can be force-fed to us all day, every day, on our digital devices. If this doesn't match what comes across your own screen, that speaks to a more alarming feature of the algorithm: hateful content is intentionally and systematically targeted at the demographics most likely to engage with it and absorb the ideology.


Big tech companies claim to be champions of free speech, yet the reality is engineered amplification and censorship dictated by the interests of an elite few. In an economic system where human focus is treated as a scarce commodity, our screen time fills big tech’s pockets, and the more outraged we are, the more time we spend on their addictive products. The “us” and “them” mentality that has enabled dehumanisation across groups for millennia has been weaponised for clicks and profits, allowing division to scale and spread at an entirely unprecedented rate. Public discourse is subtly curated by systems designed not for democratic health, but for profit, at the cost of social cohesion and increasing instances of radicalisation.

Regardless of the outcome of the social media trial in California, Australians should have the option to turn off this addictive engagement-driven recommendation system, and return to chronological feeds if they wish to do so. This would redistribute autonomy to the people, and give individuals control over their information environment and media diet, rather than leaving that power concentrated in the hands of big tech.

Chanel Contos is the founder and chief executive of Teach Us Consent. She has a masters in public policy from Oxford University and a masters in gender, education and international development from University College London.





Publisher: www.smh.com.au