Australia is preparing to enforce one of the world’s toughest social media regulations, as a new law banning children under 16 from using major platforms comes into effect on 10 December. The move has left global tech companies scrambling and has reignited debate about online safety, children’s mental health, and the role of governments in regulating digital platforms.
Former Facebook Australia chief Stephen Scheeler, once optimistic about social media’s potential for good, now believes the industry has allowed too many harmful elements to grow unchecked. “There’s lots of good things about these platforms, but there’s just too much bad stuff,” he told the BBC.
A First-of-its-Kind Ban
The law requires platforms to take “reasonable steps” to block under-16 users from creating accounts. Unlike similar rules elsewhere, it does not allow parental consent as a workaround, making the ban the strictest in the world and a possible blueprint for other countries.
Tech companies and lobby groups have strongly opposed the move. NetChoice’s Paul Taske accused Australia of “blanket censorship,” while others argued that the policy may make children less safe by pushing them to unregulated online spaces.
A Global Industry Under Scrutiny
The social media sector is already facing mounting whistleblower claims, lawsuits, and allegations that it prioritised profits over user well-being. Meta, TikTok, Snapchat, and YouTube have all faced legal challenges related to teen harm, addictive design, and inadequate protections.
Several whistleblowers—including Frances Haugen and Arturo Béjar—have testified that platforms ignored or suppressed safety improvements. One US trial, beginning in January, brings together hundreds of claims from parents and school districts who say social media contributed to mental health crises among teens.
Tech Companies Push Back
As Australia crafted its policy, platforms remained publicly quiet but lobbied heavily behind the scenes. Snapchat co-founder Evan Spiegel held private talks with the government, and YouTube reportedly brought children’s entertainers The Wiggles into the discussions.
Many companies argue that responsibility should sit with app-store operators such as Apple and Google rather than with the platforms themselves. They insist parents, not governments, should decide what is appropriate for their children.
Australia’s Communications Minister Anika Wells rejected that argument, noting that social media companies have had “15, 20 years” to act voluntarily. She said other jurisdictions, including Denmark, Norway, Singapore, Fiji, and the EU, are already exploring similar laws.
Rushed Safety Features and Compliance Concerns
With the ban approaching, companies have rolled out new age-verification tools and teen-specific features:
YouTube introduced AI-based age estimation
Snapchat expanded child-focused safety settings
Meta launched Instagram “Teen Accounts” with increased safeguards
But critics say these tools fall short. Béjar’s research found that more than half of Meta’s new teen safety features were ineffective at reducing teens’ exposure to harmful content.
Will the Ban Work?
Experts say big tech has an incentive to comply just enough to avoid legal penalties, while stopping short of making the ban look so effective that other countries are encouraged to adopt similar rules.
Even with fines of up to A$49.5 million for violations, analysts believe the penalties are small compared with the massive profits at stake.
Scheeler, however, argues that this step—while imperfect—may represent a turning point.
“This feels like a seatbelt moment,” he said. “Maybe it will work, maybe it won’t, but at least we’re trying something.”