Australia’s online watchdog has accused the world’s biggest social platforms of failing to properly enforce the country’s prohibition on under-16s accessing their services, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance by Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and inadequate safeguards against the creation of new accounts. In its first compliance assessment since the prohibition came into force, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Compliance Failures Revealed in First Large-Scale Review
Australia’s eSafety Commissioner has detailed a troubling pattern of non-compliance amongst the world’s most prominent social media platforms in her inaugural review since the ban came into effect on 10 December. The report reveals that Meta, Snap, TikTok and YouTube have collectively failed to implement adequate safeguards to prevent minors from accessing their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification processes, highlighting that some platforms have permitted children who initially declared themselves under 16 to later assert they were older, thereby undermining the law’s intent.
The findings represent a significant escalation in regulatory action, with the eSafety Commissioner moving beyond monitoring to direct enforcement. The regulator has made clear that merely demonstrating some children still maintain accounts is inadequate; rather, platforms must provide concrete evidence that they have established robust systems and processes intended to stop under-16s from opening accounts in the first place. This shift demonstrates the government’s determination to hold tech giants accountable, with possible sanctions looming for companies that fail to meet the legal requirements. The shortcomings identified include:
- Permitting previously banned users to re-verify their age and regain account access
- Allowing repeated attempts at the same age assurance method with no repercussions
- Inadequate systems to prevent under-16s from opening new accounts
- Insufficient reporting tools for families and the wider community
- Lack of transparent data about enforcement measures and account removals
The Extent of the Issue
The considerable scale of social media activity amongst young Australians highlights the regulatory challenge facing both the government and the platforms in question. With millions of accounts already restricted or removed since the ban took effect, the figures provide evidence of widespread initial non-compliance. The eSafety Commissioner’s conclusions indicate that the operational and technical barriers to implementing age restrictions have proven far more complex than expected, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities grappling with the core question of whether current age verification technologies are fit for purpose.
Beyond the technical obstacles lies a broader concern about the willingness of platforms to prioritise compliance over user growth. Social media companies have consistently opposed stringent age verification measures, citing data protection worries and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms might not be demonstrating adequate commitment to deploy the legally mandated infrastructure. The shift towards active enforcement represents a pivotal moment: either platforms will significantly enhance their compliance infrastructure, or they risk substantial penalties that could reshape their business models in Australia and possibly affect regulatory approaches internationally.
What the Numbers Reveal
In the first month after the ban’s launch, Australian officials indicated that 4.7 million accounts had been restricted or deleted. Whilst this statistic initially seemed to show regulatory success, further investigation reveals a more layered picture. The substantial number of account takedowns implies that many under-16s had managed to establish accounts in the first place, demonstrating that preventative measures were insufficient. Furthermore, the data casts doubt on whether suspended accounts represent authentic compliance or simply users deleting their profiles voluntarily in light of the new rules.
The limited transparency concerning these figures has troubled independent observers attempting to evaluate the ban’s actual effectiveness. Platforms have provided scant details about their implementation approaches, performance indicators, or the profile of removed accounts. This opacity makes it difficult for regulators and the general public to assess whether the ban is functioning as designed or whether younger users are simply finding other ways to access social media. The Commissioner’s push for comprehensive proof of systematic compliance measures reflects growing frustration with platforms’ unwillingness to share full information.
Sector Reaction and Pushback
The social media giants have responded to the regulator’s enforcement action with a combination of compliance assurances and scepticism about the practical feasibility of the ban. Meta, which operates Facebook and Instagram, stressed its dedication to adhering to Australian law whilst simultaneously arguing that precise age verification continues to be a significant industry-wide challenge. The company has called for an alternative strategy, proposing that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than platform-level enforcement. This position reflects wider concerns across the industry that the current regulatory framework puts an impractical burden on individual platforms.
Snap, the creator of Snapchat, has taken a more proactive public stance, stating that it had locked 450,000 accounts following the ban’s implementation and asserting it continues to suspend additional accounts each day. However, sector analysts question whether such figures demonstrate genuine compliance or simply represent reactive account management. The core conflict between platforms’ business models—which traditionally depended on maximising user engagement and expansion—and the regulatory requirement to systematically remove an entire age demographic remains unresolved. Companies have long resisted rigorous age verification methods, citing privacy issues and technical constraints, creating a standoff between regulators and platforms over who bears responsibility for implementation.
- Meta contends age verification ought to take place at app store level instead of on individual platforms
- Snap claims to have locked 450,000 accounts since the ban’s implementation in December
- Industry groups cite privacy concerns and technical obstacles as impediments to effective age verification
- Platforms assert they are doing their best whilst challenging the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 online platform ban moves into its implementation stage, fundamental questions remain about whether the law will accomplish its intended goals or merely push young users towards unregulated platforms. The regulator’s initial compliance assessment reveals that despite months of implementation, significant loopholes remain—children continue finding ways to circumvent age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s success depends not merely on regulatory vigilance but on whether young people will truly leave major social networks or simply migrate to other platforms, encrypted messaging applications, or VPNs that conceal their location.
The ban’s global implications contribute further complexity to assessments of its success. Governments including the United Kingdom, Canada and several European states are observing Australia’s initiative closely, exploring similar laws for their respective populations. If the ban does not successfully reduce children’s online activity or cannot protect them from harmful content, it could weaken the case for comparable regulations elsewhere. Conversely, if implementation proves sufficiently strict to effectively limit underage participation, it may encourage other governments to adopt comparable measures. The outcome will likely influence worldwide regulatory patterns for many years ahead, ensuring Australia’s implementation efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses Out
Mental health advocates and child safety organisations have endorsed the ban as an essential measure to counter algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms built to maximise engagement could reduce anxiety, improve sleep quality, and limit exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, lending credibility to these concerns. However, the ban also removes legitimate uses of social media for young people—maintaining friendships, obtaining educational material, and participating in online communities around shared interests. The regulatory approach assumes harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s real-world effects reach beyond individual users to affect content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now confront legal barriers to participation. Small Australian businesses that depend on social media marketing are cut off from younger demographic audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unintentionally favours large technology companies with the resources to build age verification infrastructure, possibly reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s impact extends far beyond the simple goal of child protection.
What Lies Ahead for Enforcement
Australia’s eSafety Commissioner has announced a marked shift from passive monitoring to proactive action, marking a pivotal moment in the implementation of the youth access prohibition. The regulator will now compile information to establish whether companies have failed to take “reasonable steps” to prevent underage access, a regulatory requirement that goes beyond simply recording that young people remain on these platforms. This strategy requires concrete evidence that platforms have implemented suitable mechanisms and procedures designed to keep minors out. The regulator has indicated it will conduct enquiries systematically, building cases that could lead to considerable sanctions for non-compliance. This transition from observation to intervention reflects growing frustration with the platforms’ existing measures and signals that voluntary cooperation alone will no longer suffice.
The rollout phase raises significant questions about the adequacy of sanctions and the concrete procedures for holding companies accountable. Australia’s legislation offers enforcement instruments, but their success depends on the eSafety Commissioner’s readiness to undertake formal action and the platforms’ capacity to adapt substantively. Overseas authorities, especially regulators in Britain and Europe, will keenly observe Australia’s enforcement strategy and results. An effective regulatory push could establish a template for other countries considering similar bans, whilst inadequate results might undermine the overall legislative framework. The coming months will be critical in determining whether Australia’s groundbreaking legislation delivers real safeguards for children or remains largely symbolic in its effect.
