Sam Altman Wants to Scan Your Eyes So Advertisers Know You're Real
World ID and the price of being real.
The Closed Loop
Sam Altman is the CEO of OpenAI, the company that made AI-generated text and images indistinguishable from the real thing. Over a billion images have been created with ChatGPT’s image tools alone. The models are so good at mimicking human writing that GPT-4.5 was judged human 73% of the time in a controlled Turing test.
The internet’s basic assumption used to be simple: the person on the other end is a person. OpenAI broke that assumption. It’s now trivially cheap to generate a convincing dating profile, a product review, a Reddit comment, a customer service exchange, an op-ed, a headshot. At scale, instantly, across every platform.
And so Altman’s other company sells the fix.
He’s also the chairman of Tools for Humanity, the company that operates a project called World, formerly Worldcoin. The pitch for World is simple. You visit a device called the Orb. It scans your iris. You receive a digital pass proving you’re human.
One company made the tools that helped accelerate the problem. The other sells the solution.
On April 16, 2026, World published a blog post titled “The Revenue Potential from World ID.” It identified thirteen industries where proof-of-humanity has commercial value, starting with the $411 billion advertising market.
The next day, April 17, World launched partnerships with Tinder, Zoom, DocuSign, Shopify, and Okta, embedding iris verification into dating, video calls, legal documents, commerce, and workplace logins.
On April 21, OpenAI released ChatGPT Images 2.0, its most advanced image model yet, designed to produce images that “feel less AI-generated.”
Same founder. Same month.
Eighteen million people have scanned their irises at an Orb. In country after country, people scanned because the money was hard to refuse.
Where the money came from, and where the data went, is a longer story.
Building the Database
Kenya was the template for how you recruit eighteen million people to scan their eyes.
Worldcoin launched there in July 2023 with at least eighteen Orb locations across Nairobi. The payment for scanning was about $54, paid in the project’s own cryptocurrency. That’s roughly half a month’s pay for a low-wage Kenyan worker. Thousands lined up at the Kenyatta International Convention Centre on the first day. Informal brokers set up shop outside, offering to buy tokens on the spot. Around 350,000 Kenyans scanned their irises in the first week.
Willis Okach was a college student in Nairobi. He got his iris scanned, then was recruited to work as an Orb operator. His job was to bring other students to the device and get them to scan. He was paid 50 Kenyan shillings per signup. That’s about 44 cents.
His read on the arrangement was simple. Worldcoin, he said, “feels that students don’t have a lot of money so they will sign up.”
His fellow operator, Bryan Mtembei, signed up between 150 and 200 people at the same rate. He said he was given little information about the project but was encouraged to “bring more people in to get yourself more money.”
MIT Technology Review investigated Worldcoin’s early recruitment across six countries. They interviewed more than 35 people in Indonesia, Kenya, Sudan, Ghana, Chile, and Norway. Their findings: deceptive marketing practices, data collection beyond what was disclosed, and failure to obtain meaningful informed consent. Pete Howson, a researcher at Northumbria University, called it “crypto-colonialism.”
Argentina came next. By early 2024, half a million people had scanned their irises amid 288% annual inflation and a 45% poverty rate in greater Buenos Aires. The going rate was about $50 per scan. Intermediaries recruited at nightclubs, bars, cellphone shops, and theaters, paid per head.
Olga de León was 57 when Rest of World interviewed her. She’s a pensioner living on $95 a month. She scanned her iris. “No one told me what they’ll do with my eye,” she told Rest of World. “But I did this out of need.”
In Brazil, iris scans were going for about $122 in eastern São Paulo. In Indonesia, the range was $18 to $48, and regulators later discovered that the company had been collecting biometric data since 2021 under a different company’s government license. Indonesia’s communications ministry found that more than 500,000 people had been scanned before the operation was suspended.
In Colombia, nearly two million iris scans were collected with consent forms provided only in English.
Eighteen million irises. A hundred and sixty countries. The original goal was one billion users by the end of 2023. Reaching a billion from here would require scanning roughly 2,734 people per day at every active Orb for two straight years.
They’re behind schedule.
The World Tried to Stop It
Kenya’s Ministry of Interior suspended Worldcoin on August 2, 2023, barely a week after launch. The government cited concerns about the security of the data collected and what the collectors intended to do with it. The Data Protection Authority found that the consent process “did not meet the requirements,” with many participants from economically disadvantaged communities given no clear explanation of what scanning their iris actually meant.
In May 2025, a Kenyan High Court judge ruled the operations illegal and ordered deletion of all collected biometric data within seven days.
Kenya wasn’t alone. Over the next two years, regulators in Spain, Portugal, Germany, Hong Kong, South Korea, Brazil, Indonesia, Colombia, and Argentina investigated, fined, suspended, or banned the project. The findings were remarkably consistent.
Spain and Portugal both cited the scanning of children. Hong Kong raided six offices and called the data collection “unnecessary and excessive.” South Korea found no Korean-language consent form had existed until months after scanning began. Indonesia discovered the company had been collecting biometric data under a different company’s government license. Brazil’s data protection authority ruled that paying people for biometrics constitutes “undue interference with the autonomous will of the data subject.”
Colombia ordered a permanent shutdown in October 2025 after finding nearly two million scans collected with consent forms provided only in English. The regulator’s language was the plainest of any jurisdiction: the financial incentives had “conditioned the will” of data subjects.
A dozen countries. Same findings. The consent wasn’t informed, the data practices weren’t disclosed, and the payments undermined any meaningful choice. The corporate structure, split across Delaware, the Cayman Islands, and the British Virgin Islands, made local accountability nearly impossible.
In every case, regulators acted after the bulk of the scanning had already happened. The operations were suspended. The iris data had already been collected.
The Embed
The database was built. The next step was making it useful.
On April 17, 2026, World held an event in San Francisco called Lift Off. The company announced World ID 4.0 and a roster of partnerships that moved iris verification out of the crypto world and into the platforms ordinary people use every day.
Tinder piloted World ID verification in Japan last year and is now rolling out “verified human” badges globally, starting with the United States. Verified users get five free profile boosts. Match Group, Tinder’s parent company, is the largest dating company in the world.
Zoom built a feature called Deep Face. It matches a user’s live video feed against their iris-scanned profile, and displays a “Verified Human” badge next to their name in meetings. The pitch came with a case study: a finance employee at Arup was tricked into transferring $25 million by deepfake video of senior executives on a Zoom call. Deep Face is the answer to that problem. The answer requires your iris on file.
DocuSign is adding proof-of-human checks to digital signatures. The distinction matters as AI agents start executing agreements on behalf of people. Proof of identity answers who is signing. Proof of human answers whether a live person is behind the signature.
Shopify lets merchants gate promotions, discounts, and limited-edition releases behind iris verification. One person, one redemption.
Reddit is in talks to use World ID for user verification, according to Semafor reporting from June 2025.
Concert Kit reserves ticket pools exclusively for iris-verified fans, integrated with Ticketmaster and AXS. Bruno Mars, Anderson .Paak, and Thirty Seconds to Mars have signed on. The selling point is scalper bots. A bot can buy a ticket in less than a second. Concert Kit’s fix is requiring proof that a human is behind the purchase.
The system runs on three tiers. A selfie check. A government-issued ID. And an in-person iris scan at an Orb. Each platform decides which level it requires.
Dating. Video calls. Legal signatures. Commerce. Social media. Concert tickets. Each one sounds like a feature. Together they’re a tollbooth.
Two Internets
Every one of those integrations draws the same line. Verified on one side. Unverified on the other.
On Tinder, verified profiles get boosted. That means unverified profiles get buried. The algorithm doesn’t need to ban anyone. It just stops showing them. On Zoom, an unverified participant sits in a meeting next to colleagues with a “Verified Human” badge next to their name. No one has to say anything. The absence of the badge says it for them.
On Reddit, if moderators can require World ID to post in their communities, the platform splits. Verified users participate freely. Unverified users get locked out of the conversations that matter most.
Verification determines what you can access.
Tiago Sada, World’s chief product officer, told the press that verification is “something that should be optional” and that partners use it to “boost the experience” rather than gate access. A boost for the verified is a penalty for everyone else. You don’t have to lock anyone out. You just make the verified experience measurably better, and the gap does the work.
It’s happened before. India’s Aadhaar biometric system launched as voluntary. It collected fingerprints and iris scans from over 1.2 billion people. Over time it became effectively required for welfare payments, banking, mobile phone service, and tax filing. The Supreme Court scaled it back in 2018, but the gravitational pull remained. Once enough services treat a credential as default, optional stops meaning what it used to.
The critical difference is that Aadhaar is a government program, subject to constitutional review and parliamentary oversight. World ID is controlled by a for-profit company incorporated in the Cayman Islands, funded by venture capital.
Billy Perrigo spent months reporting on World for TIME. He interviewed ten Tools for Humanity executives and reviewed hundreds of pages of company documents. His conclusion: if the Orb becomes internet infrastructure, Altman could end up with significant influence over a leading defense mechanism against AI-generated content. People might have no choice but to participate in order to access social media or online services.
The Product Is You
The verification system has a revenue model. World published it.
Bot traffic on the web now exceeds human traffic. More than half the clicks, views, and impressions on the internet are generated by automated systems. Ad fraud costs the industry an estimated $100 billion a year. The bots have gotten good enough to mimic scrolling, mouse movement, and reading time. They fake engagement well enough to pass for a person looking at an ad.
If you’re an advertiser spending money to reach human beings, that’s a problem. You’re paying for eyeballs, and half of them don’t exist. So you turn to the company selling proof that the eyeballs are real.
World’s own blog post, published April 16, 2026, lays out the solution in plain language. The post is titled “The Revenue Potential from World ID.” It identifies the advertising industry as a $411 billion annual market with six billion users. It says platforms can “charge higher CPM through credibly lower bot traffic or a ‘verified human’ offering.”
Read that again. Higher CPM. That’s cost per thousand impressions. The price an advertiser pays to show you an ad. World is telling platforms they can charge more for ads served to iris-verified users.
This is already being tested. Hakuhodo, Japan’s second-largest advertising agency, ran a pilot with Tools for Humanity and LG Electronics. Over 3,500 participants. More than ten advertisers. The result: ads served to iris-verified users got clicked 50% more often than ads served to unverified ones.
That 50% gap is the price of a confirmed human. It’s what your iris scan is worth to an advertiser.
World’s blog post goes further. It models a hypothetical platform with 100 million monthly active users and $50 average revenue per user. It assumes half those users are bots. It calculates the corrected revenue after removing the fake accounts. Then it proposes a monthly fee of $0.40 per verified user, with 20% of the increased revenue flowing back to World.
They published the price sheet. Forty cents a month per verified human, paid by the platform, collected by World.
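The post's own numbers are easy to run. The sketch below uses only the figures from World's hypothetical (100 million users, half of them bots, forty cents per verified human per month) and simplifies the mechanics to the flat per-user fee, ignoring the separate 20% revenue share:

```python
# Back-of-the-envelope check of the numbers in World's blog post.
# Every input is from the post's own hypothetical; nothing here is audited,
# and the fee mechanics are simplified to the flat per-user charge.

MAU = 100_000_000       # hypothetical platform's monthly active users
BOT_SHARE = 0.5         # the post assumes half of those accounts are bots
FEE_CENTS = 40          # World's proposed monthly fee per verified human

verified_humans = int(MAU * (1 - BOT_SHARE))
# Annual fee stream to World from this one platform, in whole dollars.
annual_fee_to_world = verified_humans * FEE_CENTS * 12 // 100

print(f"Verified humans: {verified_humans:,}")           # 50,000,000
print(f"Annual fee to World: ${annual_fee_to_world:,}")  # $240,000,000
```

Two hundred forty million dollars a year, from one platform of that size, before the 20% cut of increased revenue the post also proposes.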
OpenAI launched ads in ChatGPT on January 16, 2026. Altman had previously called ads “uniquely unsettling” and “a last resort.” Internal documents projected OpenAI would lose $14 billion by end of 2026. The Financial Times called it “an era-defining money furnace.”
Altman’s January statement on the ad launch: “It is clear to us that a lot of people want to use a lot of AI and don’t want to pay, so we are hopeful a business model like this can work.”
An ad-supported business model only holds value if the audience is verifiably human. A proof-of-humanity credential supplies that audience. The same founder sits on both sides of that exchange.
The Lock
The whole pitch for World ID is proving you’re human. The next product built on top of it does the opposite.
In March 2026, World and Coinbase released AgentKit. It’s a developer toolkit that lets AI agents carry digital proof they’re backed by a verified human. The agent gets its own digital identity and its own wallet. It can make payments, use online services, and execute contracts on its own. The platform on the other end can verify that a real person authorized the action, without ever seeing that person’s identity. The same credential that proves you’re human also authorizes AI to act in your place.
This matters because the agentic web is already here. ChatGPT’s agent mode browses the web, fills out spreadsheets, and completes multi-step workflows without human supervision. These are AI systems acting on your behalf across any service that will accept them. McKinsey projects agentic commerce will reach $3 to $5 trillion globally by 2030. Bain estimates AI agents could account for a quarter of all U.S. e-commerce by the end of the decade.
Somebody has to verify that there’s a real person behind each agent. AgentKit makes the iris scan that credential. Proof of humanity becomes a license for artificial agents.
Okta is building the enterprise layer on top of it. The product is called Human Principal. It’s in beta. An AI agent gets registered in Okta’s directory alongside human identities, with an assigned human owner for governance and accountability. The system controls what each agent is allowed to do and how often, tied to the verified human who owns it.
Think about what that means in practice. Your World ID, obtained through an iris scan at an Orb in a shopping mall or a nightclub parking lot or a convention center in Nairobi, becomes the root credential for every AI agent acting in your name. Every payment it makes. Every service it connects to. Every contract it executes. All of it traces back to your iris.
The people who scanned their irises in Nairobi and Buenos Aires signed up for a cryptocurrency payment. Nobody mentioned AI agents, advertising verification, or root credentials for autonomous commerce. The use case changed. The consent didn’t.
Nobody in the public conversation is asking the obvious questions. What happens when an agent authenticated by your iris does something you didn’t authorize? Who’s liable? What’s the audit trail? Can you take back permission after the fact? What happens when you can’t?
You can change a password, cancel a credit card, deactivate a login. You can’t change your iris. If the root credential is compromised, by a hack or a misbehaving agent or a breakdown in the chain between you and the software acting for you, there’s no reset.
India’s Aadhaar biometric system, the closest precedent, was breached in 2018 when access to over a billion records was sold for less than seven dollars. A breach at World wouldn’t just expose personal information. It could let attackers spin up autonomous agents under stolen identities, with the original iris holder on the hook for whatever those agents do.
The obvious response is to cancel the compromised credential. That solves the liability problem and creates a new one. The iris has already been used. You’ve got two. Neither of them changes. Once both are compromised, there’s no reset. You’re locked out of every service that requires verification, permanently, for a breach you didn’t cause.
The legal framework doesn’t exist yet, and the infrastructure is already in production.
The Toll Road
Add it all up.
Dating. Video calls. Legal signatures. Commerce. Social media. Concert tickets. Enterprise identity. AI agents. Advertising verification. Each integration makes the next one harder to refuse. Each refusal narrows the slice of the internet available to you.
Tools for Humanity is a Delaware corporation. The foundation that controls its cryptocurrency is incorporated in the Cayman Islands. Its asset-holding subsidiary is registered in the British Virgin Islands. It’s raised $240 million from Andreessen Horowitz, Bain Capital Crypto, Blockchain Capital, and Khosla Ventures.
One of its early investors was Sam Bankman-Fried.
The chairman of that company is the CEO of OpenAI.
Eighteen million people have already scanned in. Most of them live in countries where fifty dollars is hard to turn down. The regulators who tried to stop it arrived after the database was built. The partnerships that make the database valuable were announced after the regulators had already fallen behind.
Edward Snowden looked at the Orb in 2021 and said: “Don’t catalogue eyeballs.”
They catalogued eighteen million of them.
World’s co-founder and CEO, Alex Blania, told TIME he’s “really excited to make a lot of money.” Sam Altman waved away a question about the influence he and investors stand to gain. “What I think would be bad is if an early crew had a lot of control over the protocol,” he told TIME. “And that’s where I think the commitment to decentralization is so cool.”
The protocol is controlled by a foundation whose sole director is a British Virgin Islands company. The tokens are split 75% to the community and 25% to Tools for Humanity’s investors and staff, including Blania and Altman. The commitment to decentralization is a promise written on paper filed offshore.
The Orb started as a crypto curiosity that privacy advocates mocked. Five years later, it’s a candidate for the identity layer of the internet. The verified human is becoming the default, and the unverified human is becoming a second-class citizen of a web they used to navigate freely.
Your iris is worth 44 cents in Nairobi. It’s worth 40 cents a month to the platform serving you ads. And it’s worth whatever the agentic economy grows into by the time you realize you can’t opt out.