Ruv Draba:

Tumithak, thank you for this take on age-related access governance. Here's a summary of the Australian experience to date.

We passed an Online Safety Act in 2021, which was amended last year. Phase 2 took effect early this month. It extends age verification to pornography, high-impact violence, self-harm content, AI chatbots, app stores, and gaming, with penalties of up to A$49.5 million for systemic non-compliance by platforms. A mandatory review is required within two years.

Ten platforms are presently nominated by the eSafety Commissioner for age restriction: Facebook, Instagram, Snapchat, Threads, TikTok, Twitch, X (formerly Twitter), YouTube, Kick, and Reddit. Yubo, Lemon8, Wizz, and BigoLive were subsequently added via notification or self-assessment.

As at end 2025, the following are not age-restricted: Discord, GitHub, Google Classroom, Messenger, Steam, Steam Chat, WhatsApp, and YouTube Kids. (The list sits at Ministerial discretion, so it can be amended without new legislation.)

YouTube was initially expected to be exempt due to educational content, but was included in June 2025 after the eSafety Commissioner cited it as the most frequently reported source of harmful content for 10–15-year-olds.

Platforms were required to identify and deactivate or remove accounts held by under-16s. By mid-January 2026, the government reported approximately 4.7 million accounts had been deactivated, removed, or restricted across the ten platforms. Meta alone removed nearly 550,000 accounts within the first day. Researchers caution this figure likely overstates the number of individual children affected, as many teenagers hold multiple accounts across platforms. There is no provision allowing existing under-16 account holders to retain accounts. The law applies to "creating or keeping" an account, capturing both prospective and retrospective cases.

The legislation deliberately does not mandate a specific age verification technology. Instead, platforms must take "reasonable steps" to prevent under-16s from creating or holding accounts. The eSafety Commissioner published regulatory guidance in September 2025 setting out principles for what constitutes reasonable steps. Unlike AB 1043's self-attestation model, the Australian guidance explicitly states that relying solely on self-declaration (entering a birthday) does **not** constitute a reasonable step. Nor does simply letting accounts run for a set period before attempting to detect underage users.

After an Age Assurance Technology Trial, the government found that:

- No single technology works in all situations; a "waterfall" (layered) approach is recommended: start with the lightest-touch method and escalate only if results are uncertain. (A minimal sketch of this flow follows the list.)

- Age estimation via facial analysis commonly carries a margin of error of around ±18 months, and is less accurate for girls, First Nations people, darker skin tones, lower socioeconomic groups, and the 16–20 age bracket specifically.

- Some third-party providers were found to be over-collecting and retaining biometric and document data beyond what compliance requires, in anticipation of future regulatory demands.

- There is no requirement to establish age "to a very high certainty"; proportionality applies.
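
For readers who want the mechanics, here is a minimal sketch of the waterfall logic. Everything in it — the function names, the signal types, and the 1.5-year buffer — is my own illustration of the idea, not anything specified in the trial report or the eSafety guidance:

```python
from dataclasses import dataclass
from typing import Optional

MINIMUM_AGE = 16
# Illustrative buffer reflecting the trial's finding that facial age
# estimation commonly carries a margin of error of around +/- 18 months.
ESTIMATE_BUFFER_YEARS = 1.5

@dataclass
class AgeCheckResult:
    passed: bool  # True if the user cleared the age gate
    method: str   # which rung of the waterfall decided it

def waterfall_age_check(declared_age: Optional[int] = None,
                        account_signal_age: Optional[float] = None,
                        facial_age_estimate: Optional[float] = None,
                        id_verified_age: Optional[int] = None) -> AgeCheckResult:
    """Start with the lightest-touch signal and escalate only when the
    current signal cannot give a confident answer."""
    # Rung 1: self-declaration. On its own this is NOT a reasonable step
    # under the guidance, but a declared age below 16 can short-circuit.
    if declared_age is not None and declared_age < MINIMUM_AGE:
        return AgeCheckResult(False, "self-declaration")

    # Rung 2: inference from existing account data (the TikTok-style
    # approach), trusted only when clearly outside the uncertainty band.
    if account_signal_age is not None:
        if account_signal_age >= MINIMUM_AGE + ESTIMATE_BUFFER_YEARS:
            return AgeCheckResult(True, "account signals")

    # Rung 3: facial age estimation (the Meta-style approach).
    if facial_age_estimate is not None:
        if facial_age_estimate >= MINIMUM_AGE + ESTIMATE_BUFFER_YEARS:
            return AgeCheckResult(True, "facial estimation")
        if facial_age_estimate < MINIMUM_AGE - ESTIMATE_BUFFER_YEARS:
            return AgeCheckResult(False, "facial estimation")
        # Inside the +/- 18 month band: escalate rather than decide.

    # Rung 4: strongest and most intrusive evidence, e.g. verified ID.
    # (Platforms cannot make this the ONLY option they offer.)
    if id_verified_age is not None:
        return AgeCheckResult(id_verified_age >= MINIMUM_AGE, "verified ID")

    # Nothing conclusive: fail closed pending further assurance.
    return AgeCheckResult(False, "undetermined")
```

The proportionality principle falls out of the ordering: most users clear on cheap, low-intrusion signals, and only the uncertain band around 16 gets escalated to more intrusive evidence.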

This has resulted in a range of responses:

- **Meta (Facebook, Instagram, Threads):** Selfie-based facial age estimation.

- **TikTok:** Uses existing account data to estimate age.

- **Snapchat, YouTube, others:** User-declared age data (but note: this alone is insufficient under the guidance, so presumably supplemented with other signals).

- **Third-party services:** At least one Singapore-based age assurance service offers government ID, bank SMS confirmation, and selfie checks in a layered approach.

Platforms cannot compel government-issued ID as the only verification option. Data collected for age assurance must be technically segregated from other business data and destroyed after verification.
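
As a rough sketch of that segregation-and-destruction requirement — entirely my own illustration, with a hypothetical stand-in for the estimator — the shape is: only the yes/no outcome ever leaves the verification scope, and the raw evidence is destroyed on exit:

```python
from dataclasses import dataclass

def estimate_age_from_selfie(selfie_jpeg: bytes) -> float:
    """Hypothetical stand-in for a real facial age-estimation model."""
    return 21.0  # dummy value so the sketch runs

@dataclass
class AgeAssuranceEvidence:
    """Raw biometric evidence. It exists only for the duration of the
    check and is never written to general business data stores."""
    selfie_jpeg: bytes

def assure_age(evidence: AgeAssuranceEvidence, minimum_age: int = 16) -> bool:
    """Return only the yes/no outcome; the raw evidence is destroyed
    once verification completes, whatever the result."""
    try:
        return estimate_age_from_selfie(evidence.selfie_jpeg) >= minimum_age
    finally:
        evidence.selfie_jpeg = b""  # destroy after verification
```

Only the boolean survives the call; everything the regulator would class as age assurance data dies inside it.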

The OAIC (Office of the Australian Information Commissioner) co-regulates the scheme. Platforms face a dual bind: they can be fined by eSafety for failing to verify age, and fined by the OAIC for being too intrusive in how they verify.

There are currently two High Court challenges to the scheme, both grounded in what Australia has instead of a constitutional freedom of expression: the implied freedom of political communication.

The most dramatic response is that the porn sector has largely geoblocked the whole of Australia.

Immediately after the implementation date, we saw a surge in downloads of apps not covered by the ban.

And we've had a complaint from US House Judiciary Committee Chair Jim Jordan saying that Australia is 'harassing American companies'.

Public response is broadly supportive among adults, with 55–77% in favour across polls. Around 70–75% of children are opposed, and kids are sharing plenty of tricks for defeating facial age estimation.

It's early days, but qualitatively, parents are split on efficacy: some report less screen time; others call it security theatre.

Equity is also a concern. LGBTQIA+ youth, neurodiverse children, children with disabilities, children in rural/remote areas, and children in lower socioeconomic circumstances are identified as disproportionately affected. For these groups, social media platforms provided accessible socialisation, peer mental health support, and community connection that is harder to replace through physical alternatives.

Australia seems to be leading the pack globally on this. As of March:

- **France:** National Assembly approved (Jan 2026, 116–23) a ban on social media for under-15s, planning implementation by September 2026. France is pursuing EU-wide harmonisation and piloting an EU-wide age verification app (with Denmark, Greece, Italy, Spain).

- **UK:** Online Safety Act 2023 took effect July 2025 with age verification duties. House of Lords voted in favour of an Australian-style ban. Aylo blocked UK users in January 2026.

- **Denmark:** Announced agreement (Nov 2025) to ban social media for under-15s, potentially using national electronic ID.

- **Norway:** Introduced a bill for minimum age of 15 (consultation deadline Oct 2025).

- **Portugal:** Parliament approved a ban for under-16s, with access from 13 allowed with parental consent.

- **Spain, Italy, Germany, Greece, Finland, Ireland:** All considering or advancing similar restrictions.

- **Malaysia:** Ban on under-16s effective 1 January 2026 with eKYC verification.

- **Brazil:** Law passed (Sep 2025) requiring age verification and parental linkage for under-16s; effective March 2026.

- **EU level:** European Parliament proposed (Nov 2025) raising social media minimum age to 16 with mandatory privacy-preserving age verification. EU Digital Identity Wallet initiative expected to mature by end of 2026.

My conclusion: you're probably right that AB 1043 functions as a compliance shield. Global initiatives identify three models: a self-attestation / age-signal model (California), a principles-based reasonable-steps model (Australia), and a design-based duty-of-care model (UK). All have strengths and weaknesses.

Our experience: we're trying to make the age gate work. It's a more honest approach, but not yet a more effective one. Court challenges and user workarounds keep ratcheting up. Instead of a compliance shield we have enforcement complexity, privacy trade-offs, constitutional challenges, platform defiance, child circumvention, and equity concerns. The child is still not protected, but the system is no longer pretending.

Tumithak of the Corridors:

Ruv, thanks for laying all of this out. Good to see the Australian picture collected in one place like this.

This piece is the first half of a two-parter. The second essay should be up in the next few days. It follows the money side of the equation: who's selling the compliance tools, how the vendor ecosystem works, and what the track record actually looks like in production. I think you'll find it connects to some of what you're seeing play out in Australia.

Thanks for reading.