Designed to Be Lied To, Designed to Be Relied On
How a Child Safety Law Became a Liability Shield
The Wrong Conversation
Recently, California passed a law requiring computer operating systems to check a user’s age: the Digital Age Assurance Act, AB 1043.
The tech press piled on. Tom’s Hardware and PC Gamer put Linux in the headline. Windows Central ran a piece about Windows. TechRadar called it controversial across the board. The coverage treated the new law as a privacy crisis touching every device: Windows, macOS, Android, iOS, even SteamOS.
Within days, an informal brainstorm on the Ubuntu developer mailing list was being reported as if Canonical, the company behind Ubuntu, had announced a real plan. Canonical had to step in and correct the record. MidnightBSD, a smaller free software project, said it might block California users entirely.
That became the story. A surveillance nightmare. An age gate on every operating system. A law so broad it could sweep up projects run by volunteers.
Here’s what the law actually requires. During device setup, the user enters a birthday. It’s self-reported. No ID. No facial scan. No biometrics. Just a date in a box, taken at face value by the system and passed along as if it proves anything.
A twelve-year-old can type 1990 and get around age restrictions.
Everybody knows that. The lawmakers knew it when they wrote the bill, and they knew it when they voted for it. It passed unanimously, which means this wasn’t some narrow drafting mistake that slipped through while nobody was looking.
And once you strip that away, the real question gets harder, not easier. If the system can be defeated by a child in seconds, and the people who wrote the law knew that, then child safety can’t be the whole story. A law this easy to evade was never going to do the job it claimed to do.
It was built to do something else.
The Age Signal
Google, Apple, and Microsoft were already collecting birthday information at account setup. They were already using it to sort users by age, manage child accounts, restrict content, and apply age-based rules across their platforms.
So the infrastructure was already there. So was the data.
And it was already failing.
A report from the Canadian Centre for Child Protection found that Apple and Google already had age-linked account data and still left serious gaps in app-store enforcement. Apple’s system still allowed youth users to access age-inappropriate apps. Google’s Play Store showed similar problems.
AB 1043 turns an existing signal into a standardized legal input.
The law requires operating system providers to convert account-level age data into one of four brackets: under 13, 13 to under 16, 16 to under 18, and 18 and older. Any app developer can request that signal through a standardized interface built for that purpose.
Then the legal effect kicks in.
If a developer receives the signal, the statute says the developer is deemed to have actual knowledge of the user’s age range. The signal must be treated as the primary indicator of age. And if the operating system provider or app store made a good faith effort, it gets shielded from liability when the signal turns out to be wrong.
The whole thing still runs on self-reported data that anyone can fake in three seconds. The law knows that. And the law still says that if you receive the signal and act on it, you’ve done your job.
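To make the mechanics concrete, here is a minimal sketch of that flow in Python. The statute defines the four brackets and requires a developer-accessible interface, but it does not define an API, so the function names, the AgeSignal structure, and the bracket labels below are illustrative assumptions, not the real interface any operating system ships.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgeSignal:
    bracket: str          # "under_13" | "13_to_15" | "16_to_17" | "18_plus"
    self_attested: bool   # True: a birthday typed at setup, never verified

def bracket_from_birthday(birthday: date, today: date) -> str:
    """Collapse a self-reported birthday into the statute's four brackets."""
    age = today.year - birthday.year - (
        (today.month, today.day) < (birthday.month, birthday.day)
    )
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_to_15"
    if age < 18:
        return "16_to_17"
    return "18_plus"

def request_age_signal(birthday: date) -> AgeSignal:
    """What a developer receives on request; under AB 1043, receiving this
    is treated as actual knowledge of the user's age range."""
    return AgeSignal(bracket=bracket_from_birthday(birthday, date.today()),
                     self_attested=True)

# A twelve-year-old who typed 1990 at setup comes through as an adult:
print(request_age_signal(date(1990, 1, 1)).bracket)  # -> "18_plus"
```

The last line is the whole problem in one call: a birthday typed at setup comes out the other end as a legally meaningful 18+ signal.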
No Front Door
AB 1043 doesn’t give families a private right of action. The only official who can enforce it is the California Attorney General.
COPPA, the federal children’s online privacy law, works the same way. Enforcement runs through the Federal Trade Commission and state attorneys general, not through families filing their own claim under the statute.
So you’ve got two laws passed in the name of child safety, and neither one lets the family sue under the statute.
Now run that forward.
A twelve-year-old types 2004 during device setup. She downloads a social app. An adult starts talking to her.
You can see where this goes.
It escalates in the pattern child safety researchers have documented for years: flattery, isolation, escalation, exploitation. By the time her parents find out, the damage is already done, and the family goes looking for someone to hold accountable.
What they find is procedure.
The developer points to the age signal and says it checked. The operating system provider points to the setup screen and says it asked. She said she was an adult. The signal came through as 18+. The paperwork is clean. The compliance story is complete.
The harm is real, but once the case reaches that layer, accountability starts dissolving into process.
There is a workaround, at least on paper. In Jones v. Google, the Ninth Circuit held that COPPA doesn’t wipe out state privacy and consumer protection claims, even when the same conduct could also violate COPPA. That means families can still try to sue under state law for invasion of privacy, unfair business practices, or negligence.
That sounds promising until you picture the courtroom.
The compliance chain from the grooming scenario becomes the developer’s best exhibit. The developer received the signal, treated it as the primary indicator of age, and followed the statute exactly. Then the defense walks in with a paper trail showing compliance with a system California built, endorsed, and told them to rely on.
That won’t automatically end the case, but it gives the defense a brutal advantage.
And that’s before you get to the way real households actually work.
The law assumes tidy boundaries: one device, one user, one honest setup. Real life is messier. Kids use their parents’ tablets. Parents click through setup screens half-awake. A thirteen-year-old borrows a laptop where somebody entered “1985” six months ago.
A shared device carries one age signal that might belong to anyone in the house, and the statute turns that ordinary confusion into protection. The signal says 18+. The app relied on it. The law says that’s enough.
Governor Gavin Newsom flagged exactly this problem in his signing statement. He pointed to multi-user accounts shared by family members and user profiles used across multiple devices. He urged the legislature to fix the law before it takes effect in 2027.
He signed it anyway.
The text says it does not impose liability on an operating system provider, a covered application store, or a developer when a device or application is used by someone other than the user tied to the signal.
The shared-device loophole is written right into the statute.
That’s the trick.
The law keeps families away from the front door and strengthens the defense they’ll face if they find a window.
The Fragmentation Problem
There’s a real case for the bill, and it deserves to be stated plainly.
Right now, apps handle age checks in wildly different ways. Some ask for a birthday. Some make you tick a box. Some build full verification systems. Some do nothing at all. A standardized signal at the operating system level does reduce that chaos by giving developers one shared method and one age signal to work with.
Self-attestation was also a deliberate privacy choice.
California saw what happened elsewhere. Texas and Utah used “commercially reasonable” verification, which pushes toward government ID. The UK tried mandatory age checks for pornography sites, triggered a privacy backlash, and had to retreat. California looked at that mess and chose the least invasive option on the table: no ID, no biometrics, no facial scans, just a birthday field.
The “actual knowledge” provision goes after a real loophole too.
For years, companies ducked child safety obligations under COPPA and CCPA by claiming they didn’t know their users were minors. The age signal is meant to shut that down. If you receive the signal, you know. You don’t get to shrug and say the child was invisible to you.
The bill also had real support behind it. Common Sense Media, Children Now, and The Source LGBT+ Center all backed it. These are organizations with an established stake in child safety work, and they saw this law as a meaningful step.
Compared with what’s already on the books in Texas, Utah, Louisiana, and Australia, California’s version asks for less data, creates less friction, and carries fewer privacy risks. If some form of age assurance is coming either way, there’s a serious argument that California chose the least bad version.
That’s the polished version of the argument. But it still leaves the child in the wreckage.
The Trade
TechNet and Chamber of Progress, two industry lobby groups, opposed the bill early on. Then the hard verification requirements disappeared. Self-attestation took their place.
The opposition disappeared too.
Meta’s vice president of state policy, Dan Sachs, publicly endorsed the bill. He said Meta supports centralizing age verification at the operating system and app store level.
Think about what that means in practice.
Meta no longer has to build and defend its own age-checking system. The liability moves upstream. Meta receives the signal, acts on it, and points back to the process.
Google’s senior director of government affairs, Kareem Ghanem, called AB 1043 “one of the most thoughtful approaches we’ve seen thus far.”
Of course he did.
This law turns something Google was already doing into a legal standard and gives it statutory cover for continuing to do it. That’s what “thoughtful” means here.
Apple never publicly backed the bill, and that makes sense too. Apple already collects birthdays at account setup, already runs Family Sharing, and already gates child accounts. The collection itself isn’t the pressure point.
What’s new is the requirement to send that data outward to other developers through a uniform age signal they can request. Apple built a large part of its brand on controlling what leaves its ecosystem. It fought the FBI rather than building a backdoor. That’s the privacy posture it sells, and there’s a difference between a locked filing cabinet and a pipe that pushes data out on request.
Meta and Google cheer because they sit on the receiving end of that signal. Apple hesitates because it’s being drafted into sending it.
The bill’s own committee analysis gives the game away. It says AB 1043 “potentially removes the argument from the technology industry that they have no definitive way of knowing the age of their users, thus allowing them to avoid responsibility.”
Read that closely.
The old defense was ignorance: we didn’t know the user was fourteen. The new defense is compliance: we knew, and we followed the process California told us to follow.
That’s the trade. The companies gave up one shield and got a better one.
And it doesn’t just help the giants. It squeezes everybody below them. The “actual knowledge” provision triggers COPPA obligations for any developer who receives the signal. If you know a user is under 13, federal law kicks in. If you know the user is under 16, CCPA consent requirements apply.
A two-person app studio gets pulled into the same compliance logic as Meta, except Meta has a legal department and the indie developer has a laptop and a deadline.
That’s how the law hardens the advantage of firms big enough to survive compliance.
Building the Pipe
In 2022, Assemblymember Buffy Wicks carried the California Age-Appropriate Design Code Act. The courts blocked it. Industry groups sued to block similar laws in Texas and Utah. The harder version kept running into the same wall.
So when Wicks came back in 2025 with AB 1043, the bill was slimmer: self-attestation only, no ID, no biometrics, no parental consent requirement. Industry dropped its opposition, and the measure passed 77-0 in the Assembly and 38-0 in the Senate.
Unanimous bills always deserve a closer look.
When everybody in the room says yes, you should ask who it was designed to protect.
That political history matters, but the deeper point is what the bill leaves behind. Standardized systems tend to stick. Once app developers build the age signal into their code, it becomes part of the product. Once lawyers start citing “actual knowledge” in briefs, it starts showing up in case law. Once compliance teams build workflows around the four age brackets, the whole thing stops feeling temporary and starts feeling normal.
That matters more than the text of any one bill. Laws can be amended. Infrastructure gets reused. A future legislature won’t have to fight over whether to create an age-signaling system. That part will already be done. The only argument left will be about what gets sent through a pipe that already exists.
The hard part is building the pipe.
After that, everybody just argues over the settings.
Three States, One Direction
California passed AB 1043. Self-attestation was the foundation.
Colorado introduced SB 26-051, explicitly modeled on California’s law. Senator Matt Ball, the bill’s sponsor, said the quiet part out loud: “One of the reasons for bringing SB 51 was that the tech industry is already complying with AB 1043, so there’s minimal added burden.”
That’s the pattern.
Colorado tried stronger approaches and failed, then came back with the California model. The harder version stalls. The nerfed version goes through.
That’s how the ratchet works. The signal gets normalized first. The fight over how aggressive it should become comes after.
New York’s S8102A takes the next step. It skips the softer version entirely. The bill forbids self-reporting and requires “commercially reasonable” age assurance, with the details left to regulations written by the Attorney General. Penalties go up to $10,000 per violation.
So the direction of travel is already clear. California lays the foundation. Colorado copies it. New York pushes it further.
And while all of this moves through statehouses, Mark Zuckerberg has already asked for the same thing under oath. During a trial over Meta’s own age-verification failures, Zuckerberg testified that age verification is difficult for app developers and said the responsibility should sit with device makers like Apple and Google.
The chief executive of the company with the most to gain from shifting liability upstream went into court and asked, on the record, for the exact kind of system these laws are building.
Who’s Left Standing
There are two doors here, and both open onto something ugly.
Behind the first is the weak version. Self-attestation lets kids lie, the system fails at its stated purpose, and companies keep their compliance shield anyway. The age signal is useless for protection and excellent for compliance. That’s where California is right now.
Behind the second is the stronger version. Real ID, biometrics, hard verification. The system starts checking age for real, and your identity gets tied to every app you open, every site you visit, and every piece of content you try to access. An identity-verification system run by companies you’ve never heard of gets wedged into ordinary online life.
Either way, the same institutions come out ahead. One version gives them a paper shield. The other gives them a paper shield and your name.
Any law that tries to gate the internet by age ends here. It either fails because people lie, or it works by building an identity system that should never exist.
And when the harm arrives, the people left standing are the same ones who were exposed from the beginning: a parent, a kid who got hurt, a family trying to hold somebody accountable. The app followed the rules. The operating system asked the question. The law says everyone in the chain did what they were supposed to do. And once they did, that was enough.
The system worked exactly as designed.
The child is the reason the law exists, and the last person it protects.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.



Tumithak, thank you for this take on age-related access governance. Here's a summary of the Australian experience to date.
We had an Online Safety Act in 2021, but it was amended last year. Phase 2 took effect early this month. It extends age verification to pornography, high-impact violence, self-harm, AI chatbots, app stores, and gaming, and allows penalties of up to AUD 49.5 million for systemic non-compliance by platforms. A mandatory review is required within two years.
Ten platforms are presently nominated by the eSafety Commissioner for age restriction: Facebook, Instagram, Snapchat, Threads, TikTok, Twitch, X (formerly Twitter), YouTube, Kick, and Reddit. Additional platforms subsequently notified or self-assessed include Yubo, Lemon8, Wizz, and BigoLive.
As at the end of 2025, the following are not age-restricted: Discord, GitHub, Google Classroom, Messenger, Steam, Steam Chat, WhatsApp, and YouTube Kids. (The list is at Ministerial discretion, so it can be amended without new legislation.)
YouTube was initially expected to be exempt due to educational content, but was included in June 2025 after the eSafety Commissioner cited it as the most frequently reported source of harmful content for 10–15-year-olds.
Platforms were required to identify and deactivate or remove accounts held by under-16s. By mid-January 2026, the government reported approximately 4.7 million accounts had been deactivated, removed, or restricted across the ten platforms. Meta alone removed nearly 550,000 accounts within the first day. Researchers caution this figure likely overstates the number of individual children affected, as many teenagers hold multiple accounts across platforms. There is no provision allowing existing under-16 account holders to retain accounts. The law applies to "creating or keeping" an account, capturing both prospective and retrospective cases.
The legislation deliberately does not mandate a specific age verification technology. Instead, platforms must take "reasonable steps" to prevent under-16s from creating or holding accounts. The eSafety Commissioner published regulatory guidance in September 2025 setting out principles for what constitutes reasonable steps. Unlike AB 1043's self-attestation model, the Australian guidance explicitly states that relying solely on self-declaration (entering a birthday) does **not** constitute a reasonable step. Nor does simply holding an account for a set period before detecting underage users.
After an Age Assurance Technology Trial, the government found that:
- No single technology works in all situations; a "waterfall" (layered) approach is recommended: start with the lightest-touch method and escalate only if results are uncertain (a rough sketch of this follows the list).
- Age estimation via facial analysis carries a margin of error commonly of ±18 months, and is less accurate for girls, First Nations people, darker skin tones, lower socioeconomic groups, and the 16–20 age bracket specifically.
- Some third-party providers were found to be over-collecting and retaining biometric and document data beyond what compliance requires — anticipating future regulatory demands.
- No requirement to establish age "to a very high certainty" — proportionality applies.
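As a rough illustration of that "waterfall" recommendation, here is a minimal sketch under invented assumptions: the layer names, confidence scores, and thresholds below are made up, since the guidance recommends the escalation pattern but does not prescribe specific technologies or numbers.

```python
from typing import Callable, Optional

# Each layer returns (estimated_age, confidence in [0, 1]), or None if unavailable.
AgeCheck = Callable[[str], Optional[tuple[float, float]]]

def waterfall_check(user_id: str,
                    layers: list[tuple[str, AgeCheck]],
                    min_confidence: float = 0.9,
                    cutoff_age: int = 16) -> str:
    """Run layers from lightest-touch to most intrusive, stopping at the first
    confident result (proportionality: escalate only when uncertain)."""
    for _name, check in layers:
        result = check(user_id)
        if result is None:
            continue                      # method unavailable for this user
        age, confidence = result
        if confidence < min_confidence:
            continue                      # uncertain -> escalate to the next layer
        return "restricted" if age < cutoff_age else "allowed"
    return "restricted"                   # no confident signal: fail closed

# Hypothetical ordering, lightest first; the numbers are invented for illustration.
layers = [
    ("existing account signals", lambda uid: (17.0, 0.60)),  # too uncertain, escalate
    ("facial age estimation",    lambda uid: (19.5, 0.95)),  # confident enough, stop
    ("document / ID check",      lambda uid: (20.0, 0.99)),  # never reached here
]
print(waterfall_check("example-user", layers))  # -> "allowed"
```

Whether that ordering protects anyone is a separate question; the point is that the guidance mandates the escalation pattern, not any single technology.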
The trial findings and the guidance have resulted in a range of responses:
- **Meta (Facebook, Instagram, Threads):** Selfie-based facial age estimation.
- **TikTok:** Existing account data to estimate age.
- **Snapchat, YouTube, others:** User-declared age data (but note: this alone is insufficient under the guidance, so presumably supplemented with other signals).
- **Third-party services:** At least one Singapore-based age assurance service combining government ID, bank SMS confirmation, or selfies in a layered approach.
Platforms cannot compel government-issued ID as the only verification option. Data collected for age assurance must be technically segregated from other business data and destroyed after verification.
The OAIC (Office of the Australian Information Commissioner) co-regulates the scheme. Platforms face a dual bind: fined by eSafety for failing to verify age, and fined by OAIC for being too intrusive in how they verify.
There are currently two High Court challenges to the scheme, both based on what Australia has instead of a constitutional right to freedom of expression: the implied freedom of political communication.
The most dramatic response is that the porn sector has largely geoblocked the whole of Australia.
We've seen a surge of downloads of apps not covered by the ban immediately following the implementation date.
And we've had a complaint from US House Judiciary Committee Chair Jim Jordan saying that Australia is 'harassing American companies'.
Public response among adults is broadly supportive (55-77% in favour); around 70-75% of children are opposed. Among kids, plenty of tricks for evading facial age estimation are being shared.
It's early days, but qualitatively, parents are split on efficacy: less screen time vs. security theatre.
Equity is also a concern. LGBTQIA+ youth, neurodiverse children, children with disabilities, children in rural/remote areas, and children in lower socioeconomic circumstances are identified as disproportionately affected. For these groups, social media platforms provided accessible socialisation, peer mental health support, and community connection that is harder to replace through physical alternatives.
Australia seems to be leading the pack globally on this. As of March:
- **France:** National Assembly approved (Jan 2026, 116–23) a ban on social media for under-15s, planning implementation by September 2026. France is pursuing EU-wide harmonisation and piloting an EU-wide age verification app (with Denmark, Greece, Italy, Spain).
- **UK:** Online Safety Act 2023 took effect July 2025 with age verification duties. House of Lords voted in favour of an Australian-style ban. Aylo blocked UK users in January 2026.
- **Denmark:** Announced agreement (Nov 2025) to ban social media for under-15s, potentially using national electronic ID.
- **Norway:** Introduced a bill for minimum age of 15 (consultation deadline Oct 2025).
- **Portugal:** Parliament approved a ban for under-16s with parental consent from 13.
- **Spain, Italy, Germany, Greece, Finland, Ireland:** All considering or advancing similar restrictions.
- **Malaysia:** Ban on under-16s effective 1 January 2026 with eKYC verification.
- **Brazil:** Law passed (Sep 2025) requiring age verification and parental linkage for under-16s; effective March 2026.
- **EU level:** European Parliament proposed (Nov 2025) raising social media minimum age to 16 with mandatory privacy-preserving age verification. EU Digital Identity Wallet initiative expected to mature by end of 2026.
My conclusion: you're probably right that AB 1043 functions as a compliance shield. Global initiatives fall into three models: the self-attestation / age-signal model (California), the principles-based reasonable-steps model (Australia), and the design-based / duty-of-care model (UK). All have strengths and weaknesses.
Our experience: we're trying to make the age gate actually work. It's a more honest approach, but not yet a more effective one. We're accumulating court challenges and user workarounds. Instead of a compliance shield we have enforcement complexity, privacy trade-offs, constitutional challenges, platform defiance, child circumvention, and equity concerns. The child is still not protected, but the system is no longer pretending.