The great scroll-off: Inside Australia’s world-first teen social media ban

Julie Inman Grant was at Roblox’s headquarters when she asked the question that crystallised, in her mind, everything that’s wrong with children being on social media. The company, whose primary users are aged five to 13, had just launched a virtual dating feature.

Why? Inman Grant inquired.

“They said, ‘Well, we need to keep them on longer than 13’,” the eSafety commissioner recalls. “I thought, oh my god.”

Grieving parents, media campaigns and political opportunism have collided to create the world’s most ambitious – and contentious – experiment in digital regulation.

Tech companies build these so-called sticky features to hold users’ attention and occupy their minds so thoroughly that they keep coming back. That’s how they generate maximum value for their platforms, despite the risks.

The answer gave Inman Grant further impetus to pursue the process that this week led to Australia becoming the first country on Earth to ban under 16s from holding social media accounts. Surprisingly, Roblox is exempt from the ban.

This tension – between bold action and glaring loopholes – defines Australia’s world-first experiment in digital regulation. Platforms including Instagram, TikTok, Snapchat, YouTube and Facebook now face fines of up to $49.5 million if they fail to prevent under 16s from holding accounts.

But behind the triumphant photo ops lies a more complicated story. It’s one of policy shortcuts, grieving parents and unanswered questions about whether Australia has chosen the hard road or merely the most visible one.

The architecture of a ban

The path to December 10 was extraordinarily short. Just 178 days passed between the advocacy group 36 Months creating an online petition and Prime Minister Anthony Albanese announcing legislation. By Canberra standards, this was hypersonic.

The origin story has several authors. There’s Jonathan Haidt’s bestseller The Anxious Generation, which gave concerned parents an intellectual framework for their unease. There’s South Australian Premier Peter Malinauskas, whose wife reportedly handed him Haidt’s book and urged action.

There was also the bipartisan political consensus that emerged when then-opposition leader Peter Dutton pledged to ban social media for under 16s within 100 days of being elected, forcing Labor’s hand. Plus the sustained media campaigns – News Corp’s “Let Them Be Kids” prominent among them – that amplified the voices of grieving parents with the kind of focus that turns political weather into climate.

“I can’t think of any example in recent Australian parliamentary history where a policy that is so profound in its impact has been rushed through with such little consultation,” one Canberra lobbyist told The Australian Financial Review, speaking anonymously.

But for Charlotte Mortlock, executive director of Hilma’s Network and a long-time advocate for algorithm reform, the speed was necessary. “Things were at such a dire point, and the impact of social media was so detrimental, particularly to children, that we didn’t have enough time to try different things,” she tells this masthead.

“The algorithm had got to a point where it was so addictive and so sinister that I just don’t think we had any other options.”

Facebook whistleblower Frances Haugen supports the social media ban.

The Frances Haugen revelations – the so-called “Facebook Files” that exposed how Meta knew its platforms harmed teenagers – changed the calculus. Haugen herself offers cautious support for Australia’s approach.

“I’m excited to see Australia take the lead on standing up to big tech,” she tells this masthead.

“I believe it is possible to design social platforms that are healthy for young teens, but until the big platforms take seriously the public health concerns they’re inflicting on the world’s children, taking a step back and choosing to age-gate social media is a practical way to pressure for change.”

Emma Mason, whose 15-year-old daughter Tilly died by suicide after online bullying, has told her story perhaps 500 or 600 times now. To prime ministers, bureaucrats, journalists from the US, Denmark, Britain, Japan and Germany. In September, she addressed the United Nations. “I’m bloody proud to be an Australian,” she told News Corp, “to be in the country which has said, ‘It is over, you’re not going to monetise our children’.”

As The Australian Financial Review put it this week: “Messages don’t get much more powerful than when delivered by parents holding the ashes of their child.”

The woman in the hot seat

eSafety Commissioner Julie Inman Grant didn’t design this policy, but she’s been handed the impossible task of making it work.

“I didn’t expect to become a public figure,” she says in a candid interview days before the ban takes effect. “I took this job as an opportunity to really make a change.”

The commissioner spent 22 years in the tech industry, including at Twitter (now called X) during the heady post-Arab Spring years.

“I joined Twitter in 2014 because I truly believed in the power of social media to speak truth to power, as a great leveller,” she recalls. But disillusionment came fast.

“After two years, I could no longer defend their safety record because I saw how terribly vulnerable communities and women were being targeted.”

Now she finds herself enforcing a law that even her own children oppose. During her video call with this masthead, Inman Grant’s 13-year-old twin daughters had a few things to say.

“It’s stupid,” one says. “What’s it actually helping? How are we supposed to meet people? Where am I supposed to watch my celebrity crush?”

Inman Grant admits the questions keep her up at night. She’s conscious that some kids – particularly those from diverse, disabled or LGBTQI+ communities – feel more themselves online than in the real world.

“It’s a way to find their tribe and really be themselves,” she acknowledges.

“The idea is not to cut kids off from communication and creation and connection, but to keep them away from the harmful and deceptive design features that are built in to tether them.”

eSafety Commissioner Julie Inman Grant says the ban is imperfect but vital.

New research from Monash University found that almost four out of five Australian adults supported the ban, with support rising with age. Research team lead Professor Mark Andrejevic said that, regardless of the law’s shortcomings, people recognised that big tech was exploiting children for its own gain.

“This ban targets a handful of powerful, overseas platforms that profit from tracking young users to capture their attention and pepper them with ads,” he said.

“They are using increasingly powerful algorithms to determine how best to capture and exploit young people’s attention. It’s a timely intervention in an increasingly unregulated digital environment.”

Circuit-breaker or blunt instrument?

The ban’s supporters frame it as a “giant circuit breaker” – the term Albanese commonly deploys – that will give developing brains a reprieve from algorithms designed to exploit their vulnerabilities. Inman Grant describes platforms using “harmful and deceptive design features” that create “powerful forces that kids can’t see, let alone fight against”.

She points to evidence that is genuinely alarming: an eSafety survey found 96 per cent of 10- to 15-year-olds use social media, seven in 10 have seen harmful content online, half have been cyberbullied, and about a quarter have been sexually harassed.

But critics argue that Australia is treating the symptom, not the disease. Dr Taliah Prince, a postdoctoral researcher in adolescent brain development at the University of the Sunshine Coast’s Thompson Institute, uses a tree analogy. “Is it cutting off the branches while leaving the roots?”

More than 140 academics who specialise in child welfare and technology signed an open letter opposing the ban, calling it “too blunt an instrument”.

“A ban does not function to improve the products children will be allowed to use,” the letter says. Children will still access YouTube in a logged-out state, still play on Roblox, still use WhatsApp, Discord and whatever new platform emerges next week.

The YouTube situation captures one of the ban’s cruellest ironies. Before today, responsible parents could link child accounts to their own, choosing between the sanitised YouTube Kids or full YouTube with restrictions. They could monitor viewing habits, limit screen time and pay to remove ads. Under the ban, kids can still watch YouTube; they just can’t have accounts. The parental oversight is gone. The restrictions are gone. Children who keep watching will see unfiltered, untargeted, untracked content. With ads.

Inman Grant argues that it’s on YouTube to create a safe experience, even if it can’t monetise millions of underage users. But the example cuts to a key criticism of the ban: that it targets services by content delivery type rather than potential harm. Platforms primarily for gaming or messaging are exempt, even ones such as Roblox that would seem to share many of Facebook’s problems. Yet YouTube and Reddit are included, despite how differently teens use them compared with TikTok.

David McKinney, a Sydney IT consultant and father of four, said this aspect of the social media ban felt like a punishment, despite the fact that he had carefully set up monitored accounts that let his kids enjoy ad-free soccer highlights, Minecraft videos and audiobooks. The accounts were cut off overnight.

“I try to be responsible and manage what the kids watch, and the government has taken away my ability to do that,” he said.

Inman Grant says she pushed for a harm-based framework that would have let her differentiate between platforms, but she was overruled. Instead, she was given a blunt “sole and significant purpose” test. “It’s an age bill,” she says. “It’s not an omnibus safety bill.”

The Teach Us Consent campaign has been pushing a different solution: requiring platforms to offer an “opt-in” feature for algorithms, giving users the autonomy to turn recommendation engines on and off at will. “Fix our feeds,” they argue, rather than banning children from the dinner table while leaving the poison in the soup.

Where to from here

The next few months will be critical. Inman Grant has already issued information notices to platforms, demanding baseline data on underage accounts. Independent evaluations involving 11 academics will track outcomes. Legal challenges also loom: the Digital Freedom Project has already launched a High Court action arguing that the law violates the implied constitutional freedom of political communication, and Reddit has filed its own High Court challenge.

Meanwhile, thousands of young users have fled mainstream social media sites for less prominent ones such as Lemon8 and Yope, some of which have since self-assessed as covered by the ban and will begin blocking under 16s. VPNs and other tools are being marketed to kids as ways to get around the ban, and the world is watching to see how effectively the government can keep on top of it all.

The risk, some critics warn, is that the government declares victory and moves on, leaving the algorithmic machinery that radicalises adults and children alike entirely untouched. Mortlock is cautiously optimistic this won’t happen.

“I actually hope it’s almost the opposite; that we’ve been a world leader in the social media space, and we will want to continue that,” she says.

“We shouldn’t stop here because social media is so manipulative that it impacts adult brains, not just children.”

But what happens in the space where social media used to be? Do kids simply trade Snapchat for WhatsApp and TikTok for Roblox? Do we move to less algorithmic platforms that still have to find a way to pay the bills? The honest truth is that nobody knows whether this will work. Australia is running a massive social experiment on a million teenagers, with the world watching.

Inman Grant frames it as “creating a giant circuit breaker and putting friction in the system”. The bet is on fragmentation: as kids scatter to smaller platforms, they won’t achieve the critical mass that makes TikTok and Instagram so powerful, and so dangerous.

Haugen, the Facebook whistleblower, says we shouldn’t let perfect get in the way of good.

“The government is setting reasonable expectations that this won’t be perfect on day one,” she said.

It definitely will not be perfect. It already isn’t. If the ban doesn’t work, the platforms’ attention-driven business models will persist unchallenged. But if it does, Australia will have proven something profound: that democracies can still regulate technology giants, even when those companies have market capitalisations larger than our GDP.

Either way, we’re at the beginning of an answer.

If you or someone you know needs support, contact Lifeline on 13 11 14 or Beyond Blue on 1300 22 4636.
