Australia’s online safety watchdog has accused the world’s biggest social media companies of failing to adequately implement the country’s ban on under-16s, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including permitting prohibited users to make repeated attempts at age verification and insufficient measures to prevent new accounts being created. In its first compliance assessment since the ban came into force, the regulator identified multiple shortcomings and has now shifted from observation to active enforcement, cautioning that platforms must demonstrate they have implemented “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Issues Uncovered in First Major Review
Australia’s eSafety Commissioner has outlined a concerning pattern of non-compliance among the world’s most prominent social media platforms in her first formal review since the ban came into effect on 10 December. The report finds that Meta’s Facebook and Instagram, Snapchat, TikTok and YouTube have collectively failed to implement appropriate safeguards to prevent minors from using their services. Julie Inman Grant expressed particular concern about structural gaps in age verification systems, highlighting that some platforms have allowed children who initially declared themselves under 16 to later assert they were older, effectively circumventing the law’s intent.
The findings mark a notable intensification of regulatory action, with the eSafety Commissioner moving beyond monitoring to direct enforcement. The regulator has made clear that merely demonstrating some children still maintain accounts is inadequate; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to prevent under-16s from opening accounts in the first place. This shift reflects the government’s commitment to holding tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations.
- Permitting previously prohibited users to re-verify their age and regain account access
- Allowing repeated attempts at the same verification process without consequences
- Insufficient mechanisms to prevent new under-16 accounts from being opened
- Inadequate notification systems for families and the wider community
- Absence of transparent data about compliance actions and account removals
The Extent of the Issue
The sheer scale of social media activity amongst young Australians underscores the regulatory challenge confronting both the government and the platforms themselves. With numerous accounts already restricted or removed since the ban’s implementation, the figures point to widespread initial non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to enforcing age restrictions have proven far more complex than expected, with platforms struggling to differentiate authentic age confirmations from false claims. This complexity has left enforcement authorities grappling with the fundamental question of whether existing age verification systems are fit for purpose.
Beyond the technical obstacles lies a broader concern about platforms’ willingness to prioritise compliance over user growth. Social media companies have long resisted stringent age verification measures, citing data protection concerns and the genuine difficulty of confirming age online. However, the regulatory report suggests that some platforms may not be demonstrating adequate commitment to implementing the systems required by law. The shift towards active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance systems, or they stand to incur significant fines that could reshape their business models in Australia and potentially influence regulatory approaches internationally.
What the Figures Indicate
In the first month following the ban’s introduction, Australian regulators stated that 4.7 million accounts had been restricted or deleted. Whilst this figure initially appeared to demonstrate regulatory success, subsequent analysis reveals a more nuanced picture. The sheer volume of account removals implies that many under-16s had been able to set up accounts in the first place, indicating that preventive controls were inadequate. Moreover, the data raises questions about whether deleted profiles represent genuine enforcement or merely users closing their accounts voluntarily in light of the new restrictions.
The limited transparency surrounding these figures has frustrated independent observers attempting to evaluate the ban’s true effectiveness. Platforms have provided little data about their implementation approaches, performance indicators, or the nature of removed accounts. This opacity makes it difficult for regulators and the public to assess whether the ban is working as intended or whether teenagers are simply finding other ways to use social media. The Commissioner’s insistence on detailed proof of consistent enforcement practices reflects growing frustration with platforms’ reluctance to disclose comprehensive data.
Sector Reaction and Opposition
The major tech platforms have responded to the regulator’s enforcement action with a combination of assurances of compliance and scepticism about the ban’s practical feasibility. Meta, which operates Facebook and Instagram, stressed its commitment to adhering to Australian law whilst simultaneously arguing that precise age verification remains a major challenge across the industry. The company has advocated for a different approach, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This stance reflects broader industry concerns that the current regulatory framework places an impractical burden on individual platforms.
Snap, the maker of Snapchat, has taken a more proactive public stance, stating that it had locked 450,000 accounts following the ban’s implementation and claiming to continue locking more daily. However, sector analysts question whether such figures reflect genuine compliance or simply represent reactive account management. The fundamental tension between platforms’ commercial models, which have traditionally depended on maximising user engagement and growth, and the statutory obligation to systematically remove an entire age demographic remains unresolved. Companies have long resisted rigorous age verification methods, citing privacy concerns and technical limitations, creating a standoff between authorities and platforms over who carries responsibility for implementation.
- Meta contends age verification should occur at app store level instead of on individual platforms
- Snap claims to have locked 450,000 accounts following the ban’s implementation in December
- Industry groups point to privacy concerns and technical challenges as impediments to effective age verification
- Platforms maintain they are doing their best whilst challenging the ban’s overall effectiveness
Wider Considerations Regarding the Ban’s Effectiveness
As Australia’s under-16 online platform ban moves into its implementation stage, fundamental questions remain about whether the legislation will achieve its stated objectives or merely drive young users towards less regulated platforms. The regulatory authority’s initial compliance assessment reveals that following implementation, substantial gaps remain—children keep discovering ways to bypass age verification systems, and platforms have struggled to stop new underage accounts from being established. Critics argue that the ban’s success depends not merely on regulatory vigilance but on whether young people will truly leave mainstream platforms or simply migrate to alternative services, encrypted messaging applications, or virtual private networks designed to conceal their age and location.
The ban’s global implications add further complexity to assessments of its impact. Countries including the United Kingdom, Canada, and various European states are observing Australia’s approach closely, considering similar regulatory measures for their own populations. If the ban proves ineffective at reducing children’s social media usage or fails to protect them from dangerous online content, it could undermine the case for equivalent legislation elsewhere. Conversely, if implementation proves sufficiently strict to genuinely restrict underage access, it may inspire other governments to adopt similar strategies. The result could shape international regulatory direction for years to come, ensuring Australia’s implementation efforts are scrutinised far beyond its borders.
Who Gains and Who Is Disadvantaged
Mental health advocates and organisations focused on child safety have championed the ban as an essential measure to counter algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms designed to maximise engagement could reduce anxiety, improve sleep patterns, and decrease exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people: keeping friendships alive, accessing educational content, and participating in online communities around shared interests. The regulatory approach assumes harm exceeds benefit, a calculation that some young people and their families challenge.
The ban’s practical impact extends beyond individual users to content creators, small businesses, and community organisations reliant on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing lose access to younger audiences. Community groups, charities, and educational organisations struggle to connect with young people through channels they previously used effectively. Meanwhile, the ban unexpectedly advantages large technology companies with the resources to build age verification infrastructure, possibly reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects reach well beyond the simple goal of child protection.
What Lies Ahead for Regulatory Action
Australia’s eSafety Commissioner has announced a marked shift from passive oversight to direct intervention, marking a critical turning point in the enforcement of the age restriction. The regulator will now gather evidence to determine whether platforms have failed to take “reasonable steps” to prevent underage access, a statutory benchmark that goes beyond simply documenting that minors continue using these services. This approach requires demonstrable proof that companies have introduced appropriate systems and processes designed to exclude minors. The enforcement team has indicated it will launch investigations systematically, building cases that could result in substantial penalties for non-compliance. This move from monitoring to enforcement reflects mounting dissatisfaction with the platforms’ current efforts and suggests that voluntary engagement alone will no longer suffice.
The enforcement phase raises important questions about the adequacy of fines and the practical mechanisms for holding tech giants accountable. Australia’s regulatory framework provides enforcement instruments, but their success depends on the eSafety Commissioner’s willingness to initiate formal action and the platforms’ capacity to adapt effectively. International observers, especially regulators in the United Kingdom and European Union, will closely track Australia’s regulatory approach and results. An effective enforcement push could establish a blueprint for other countries contemplating similar bans, whilst failure might weaken the entire regulatory framework. The coming period will be critical in determining whether Australia’s groundbreaking legislation translates into substantive protection for young people or remains largely symbolic in its impact.
