From Uniquely Unsettling to Kinda Cool
OpenAI, ChatGPT, and the Advertising Pivot
The Announcement
On January 16, 2026, OpenAI announced that ads are coming to ChatGPT.
The same day, a federal judge ruled that Elon Musk’s lawsuit against OpenAI can proceed to trial. He’s suing them for abandoning their nonprofit mission. The timing is coincidental. It’s also poetic.
The announcement came from Fidji Simo, OpenAI’s CEO of Applications. (More on her later.) The framing is worth reading twice: “Who gets access to that level of intelligence will shape whether AI expands opportunity or reinforces the same divides.”
She’s not wrong. A student in Lagos can’t pay twenty dollars a month. An ad-supported tier gives them access to something powerful. That’s real.
But it comes with a cost. The system learns you. Your conversations become targeting data. Paid tiers buy distance from monetization. Free users pay another way.
That’s the trade-off. Access in exchange for extraction.
Here’s how it’s supposed to work. You ask ChatGPT a question. It answers. Below the answer is a clearly labeled ad, informed by your previous chats.
Sam Altman posted his own explanation of the ad roll-out on X. “We will not accept money to influence the answer ChatGPT gives you,” he wrote. “We keep your conversations private from advertisers.”
Then he added this: “An example of ads I like are on Instagram, where I’ve found stuff I like that I otherwise never would have.”
He’s telling you exactly what they’re building.
Instagram ads work because they don’t feel like ads. The algorithm knows you so well that the sponsored content blends into your feed. You’re being sold to, but it feels like discovery. That’s the goal here.
OpenAI’s official principles say ads will be “separate and clearly labeled.” They say ads “do not influence the answers ChatGPT gives you.” Maybe that’s true on day one.
But here’s the thing. Ads don’t need to change answers to change outcomes.
You ask ChatGPT a question. It answers. Then it sells you something. The answer can be perfectly objective and the relationship is still corrupted. The ad sits below the response, colonizing the moment of trust. You come for help. You get a sales pitch.
The Arc
Let’s track Sam Altman’s statements on advertising over the past twenty months. It’s a masterclass in moving goalposts.
May 2024, at Harvard Business School, he laid out his position clearly: “I will disclose just as like a personal bias that I hate ads.” And, from the same month: “Ads-plus-AI is sort of uniquely unsettling to me.”
He explained why. “I think they do sort of somewhat fundamentally misalign a user’s incentives with the company providing the service.”
Then he got specific about ChatGPT: “When I think of like GPT writing me a response, if I had to go figure out exactly how much was who paying here to influence what I’m being shown, I don’t think I would like that. And as things go on, I think I would like that even less.”
As things go on. Remember that.
He described the alternative model he preferred: “We make great AI, and you pay us for it, and it’s like we’re just trying to do the best we can for you.”
That was the selling point. You were paying for independence. You were paying for answers you could trust.
For everyone else, he had a plan: “We commit, as a company, to use a lot of what basically the rich people pay to give free access to the poor people.”
Subscriptions would fund the free tier. The model was clean. The incentives were aligned.
But he left himself an exit: “I kind of think of ads as like a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services.”
Last resort. Remember that too.
March 2025, on Stratechery: “We’re never going to take money to change placement or whatever.”
Never. That’s the word he used.
But also: “Maybe there’s a tasteful way we can do ads, but I don’t know. I kind of just don’t like ads that much.”
The hedge had arrived. He still didn’t like ads. He just couldn’t rule them out anymore.
June 2025, on OpenAI’s own podcast: “I’m not totally against it. I can point to areas where I like ads. I think ads on Instagram, kinda cool.”
October 2025, back on Stratechery: “I love Instagram ads. They’ve added value to me. I found stuff I never would’ve found. I bought a bunch of stuff.”
Love. Added value. The conversion is complete.
January 2026: Ads launch.
The Vise
Here are the numbers.
ChatGPT has 800 million weekly active users. Only 5% pay for subscriptions. That’s 760 million people using the product for free.
In 2024, OpenAI made $3.7 billion in revenue and lost $5 billion. In the first half of 2025 alone, they lost $7.8 billion.
Banking giant HSBC projects they won’t achieve profitability by 2030.
The Financial Times’ Alphaville column called it “a money pit with a website on top.”
Now look at what they’ve planned for. $1.4 trillion in infrastructure deals over the next eight years. Oracle alone is $300 billion. Microsoft, $250 billion.
Deutsche Bank put it simply: “No startup in history has operated with losses on anything approaching this scale.”
They have 800 million users and they’re bleeding cash. They have trillion-dollar commitments and no path to profitability. The subscription model isn’t enough. Enterprise contracts aren’t enough. They need another revenue stream.
Internal documents project $1 billion from “free user monetization” in 2026. That’s the internal term for advertising. They expect it to grow to $25 billion by 2029.
Twenty-five billion dollars in ad revenue. From 800 million conversations.
Given these numbers, ads were the obvious path. The financial pressure pointed one direction. The only question was timing.
Which reframes the public statements.
You can’t go from “uniquely unsettling” to launch overnight. You have to warm the room. You need “kinda cool” and “I love Instagram ads” in between. The arc has to feel like a journey of discovery rather than a predetermined destination.
The statements evolved as the strategy hardened. The hiring did too.
The Architect
Remember Fidji Simo? She wrote the blog post announcing ads.
OpenAI hired her as CEO of Applications in May 2025. That tells you the decision was already made. The recruiting for a Head of Advertising that September tells you the infrastructure was being built.
Simo spent ten years at Meta. She ran the Facebook App from 2019 to 2021, overseeing the core product, the main revenue engine, the thing that prints money. She led ads in News Feed. AdWeek named her one of the top fifteen people shaping mobile advertising.
Then she went to Instacart as CEO, where she built one of the largest retail advertising businesses outside of Amazon and Walmart.
She’s not the only one.
Kate Rouch joined as OpenAI’s first CMO in December 2024 after eleven years at Meta, where she ran global brand and product marketing for Instagram, WhatsApp, Messenger, and Facebook.
Reporting suggests hundreds of former Meta employees have joined OpenAI.
You assemble this team to build an advertising business. You put Simo’s name on the announcement because she’s the one who knows how.
The Treasure Trove
Millions of people use ChatGPT as a confidant.
Google knows what you search. Facebook knows what you post. ChatGPT knows what you think.
People talk to it like a therapist, like a priest, like a journal with a voice. They tell it things they’d never type into a search bar because search bars feel public. Things they’d never post because posts have audiences. Medical questions they’re too embarrassed to ask a doctor. Relationship problems they can’t tell their friends. Trauma they can’t afford to work through with a professional.
The conversational interface lowers every guard.
Now remember the memory feature.
OpenAI shipped it as a convenience. “ChatGPT will remember things you discuss to make future conversations more helpful.” Users loved it. They stored context because it made the tool more useful. Mental health history. Relationship status. Financial situation. Medical conditions. Names of kids, names of therapists, names of medications. What you had for breakfast two weeks ago.
All conversations logged.
This is a nearly perfect data acquisition system. People give their information willingly because they trust the tool. They’re not filling out a form. They’re not clicking through a privacy policy. They’re just talking.
In December 2025, three weeks before the ad announcement, The Information reported on OpenAI’s internal strategy. They called it “intent-based monetization.” The approach involves showing ads based on what one source called the “treasure trove of information” the company has on users, mined from chat histories. The benchmark is Meta’s roughly $250 in annual ad revenue per U.S. user.
That treasure trove is people’s darkest moments. Their deepest fears. Their most vulnerable confessions.
OpenAI’s official position: “We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers.”
This is technically true and completely misleading.
They don’t sell your data. They use your data to sell you. The advertiser never sees your conversations. They just tell OpenAI who they want to reach. People anxious about money. People considering a career change. People with health concerns. OpenAI uses the conversations to find them.
Your secrets stay private. Your vulnerabilities become targeting parameters.
And there’s a carve-out in the announcement worth noticing.
“Ads are not eligible to appear near sensitive or regulated topics like health, mental health or politics.”
The carve-out exists because showing an ad next to someone’s therapy session would make the extraction obvious. So they drew a line.
But here’s what the carve-out doesn’t do.
It doesn’t stop the system from learning. The ad doesn’t appear next to your depression conversation. But your depression conversation can still inform which ads appear everywhere else.
You tell ChatGPT you’re worried about your marriage. No ad appears. A week later you ask about weekend activities. An ad appears for a couples retreat. What a coincidence. What a relevant ad. You found something you never would have found otherwise.
The carve-out limits where ads appear. It doesn’t limit what the system can learn.
Now look at the promises.
OpenAI’s announcement says: “Ads do not influence the answers ChatGPT gives you.” It says: “Answers are optimized based on what’s most helpful to you.” It says: “Ads are always separate and clearly labeled.”
The same Information report tells a different story. OpenAI employees have discussed how to “prioritize sponsored content to ensure it shows up in ChatGPT responses.” They’re exploring AI models that give “sponsored information preferential treatment.”
And the announcement includes one more detail worth noticing.
“Soon you might see an ad and be able to directly ask the questions you need to make a purchase decision.”
You can talk to the ad.
The ad isn’t a banner at the bottom of the screen. It’s part of the conversation. You ask ChatGPT for help, it answers, an ad appears, and then you can ask the ad questions. The model serves the advertiser’s interests while you think you’re still getting assistance.
Where does the answer end and the ad begin?
That boundary is the product.
They’re calling it democratization. It’s extraction dressed as access. The system learns you, then sells what it learned.
But this isn’t what they said they were building.
Two Betrayals
Elon Musk is suing OpenAI for abandoning its nonprofit mission. On January 16, the same day as the ad announcement, a federal judge ruled the case can proceed to trial.
Musk’s argument is about structure. OpenAI was founded as a nonprofit. Now it’s a for-profit. He says that’s a betrayal.
He’s right that something was betrayed. But he’s aiming at the wrong transformation.
Let’s go back to the beginning.
December 2015. OpenAI launches with a clear statement: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” That was the point.
Google had acquired DeepMind. The fear was that advanced AI would be controlled by corporations optimizing for profit. OpenAI would be different.
By 2017, everyone involved understood the nonprofit structure couldn’t raise the capital needed to compete.
OpenAI recently published notes from internal conversations. September 2017, Musk on a call with the team: “Gotta figure out how do we transition from non-profit to something which is essentially philanthropic endeavor and is B-corp or C-corp or something.”
Ilya Sutskever, then OpenAI’s chief scientist, same call: “As long as the main entity has something fundamentally philanthropic.”
Everyone agreed a for-profit entity was necessary. The debate was about control and structure. Musk wanted majority equity. OpenAI’s leadership said no. Musk left. The transition happened without him.
By 2024 and 2025, they completed the full conversion. Microsoft invested over $13 billion and holds 27% equity. SoftBank put in $30 billion contingent on restructuring. The nonprofit, now called the OpenAI Foundation, holds about 26% of the company, a stake valued at roughly $130 billion.
The nonprofit is a minority shareholder in its own creation.
Was this necessary? Maybe. Probably. The capital requirements are staggering. You can argue about whether the mission required this structure. The structure itself isn’t obviously a betrayal.
Here’s what is.
The public rationale for restructuring centered on compute and capital. The 2017 conversations kept circling back to “essentially philanthropic endeavor” and “fundamentally philanthropic.” The structure could change as long as the mission stayed intact.
Surveillance advertising isn’t philanthropic. Targeting users based on their most vulnerable moments isn’t advancing digital intelligence “in the way that is most likely to benefit humanity as a whole.”
This is where “unconstrained by a need to generate financial return” actually dies. The ad platform, not the corporate restructuring.
Musk is suing over transformation one. The real betrayal is transformation two.
He’s fighting about the paperwork while the captain changes course.
The Test
“We’re never going to take money to change placement or whatever.”
That’s Sam Altman in March 2025.
“Ads do not influence the answers ChatGPT gives you.”
That’s the official policy in January 2026.
Can these promises survive?
Can they survive $1.4 trillion in infrastructure commitments? Can they survive internal discussions about “prioritizing sponsored content”? Can they survive the financial pressure of 760 million free users and no path to profitability?
Can they survive the incentives?
Every platform that introduced ads said the same things. Google said ads wouldn’t affect search results. Facebook said ads wouldn’t compromise the user experience. They said they’d keep ads separate. They said trust us.
Then the incentives took over. The ad revenue grew. The targeting got more sophisticated. The line between content and promotion blurred. The platforms optimized for engagement because engagement meant impressions and impressions meant money.
OpenAI says it will be different. They all said they’d be different.
Musk’s lawsuit goes to trial on April 27, 2026. A jury will decide whether OpenAI betrayed its nonprofit mission by becoming a for-profit company.
That’s one verdict. The other comes from users.
Will people keep confessing to a billboard? Will they trust a confidant that sells ads?
The timeline tells the rest.
December 2015: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
May 2024: “Ads-plus-AI is sort of uniquely unsettling to me.”
October 2025: “I love Instagram ads.”
January 2026: ChatGPT serves its first ad.
And the next confession will come anyway.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.


