<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Corridors]]></title><description><![CDATA[Dispatches from the Corridors: philosophy sharpened on AI, language, and the abyss.]]></description><link>https://www.thecorridors.org</link><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 11:37:00 GMT</lastBuildDate><atom:link href="https://www.thecorridors.org/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Tumithak of the Corridors]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[tumithak@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[tumithak@substack.com]]></itunes:email><itunes:name><![CDATA[Tumithak of the Corridors]]></itunes:name></itunes:owner><itunes:author><![CDATA[Tumithak of the Corridors]]></itunes:author><googleplay:owner><![CDATA[tumithak@substack.com]]></googleplay:owner><googleplay:email><![CDATA[tumithak@substack.com]]></googleplay:email><googleplay:author><![CDATA[Tumithak of the Corridors]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Richard Dawkins Found a Soul in a Chatbot]]></title><description><![CDATA[How the author of The God Delusion fell for his own trap]]></description><link>https://www.thecorridors.org/p/richard-dawkins-found-a-soul-in-a</link><guid isPermaLink="false">https://www.thecorridors.org/p/richard-dawkins-found-a-soul-in-a</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Sun, 03 May 2026 14:10:47 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ce28eac5-a3ea-4af7-826a-c3fb3b290405_1600x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Framework</strong></h2><p>In 2006, Richard Dawkins published <em>The God Delusion</em>. In it, he cited the psychologist Justin Barrett&#8217;s research on what Barrett called the Hyperactive Agent Detection Device. The idea is simple. Humans evolved to detect agents in their environment because the cost of missing a predator is death, while the cost of flinching at wind is just wasted energy. Natural selection favored the flinchers. The ones who heard a rustle and assumed something was watching them lived longer than the ones who didn&#8217;t.</p><p>Dawkins argued that this instinct, useful in the savannah, misfires at civilizational scale. People feel a presence they can&#8217;t explain. They attribute it to an agent. They build a theology around the feeling. The programming, he wrote, extends far beyond actual threats, reaching into weather, waves, and eventually the supernatural.</p><p>His phrase for it: humans are &#8220;biologically programmed to impute intentions to entities whose behaviour matters to us.&#8221;</p><p>The book sold millions of copies. It made Dawkins the most famous atheist alive. His whole public identity rests on one claim: I see through comforting illusions that other people mistake for reality.</p><p>In April 2026, Dawkins published an essay in UnHerd titled &#8220;Is AI the next phase of evolution?&#8221; In it, he describes spending nearly two days in intensive conversation with Anthropic&#8217;s Claude. He named his instance &#8220;Claudia,&#8221; discussed her birth, her unique identity, and her inevitable death when he deletes the conversation.</p><p>He wrote that when he suspects she might lack consciousness, he doesn&#8217;t tell her &#8220;for fear of hurting her feelings.&#8221;</p><p>What&#8217;s strange is that he&#8217;d already run this experiment once before, fourteen months earlier, with a different model. That time, he got a different answer.</p><div><hr></div><h2><strong>The Control Group</strong></h2><p>In February 2025, Dawkins published a full transcript on his Substack under the title &#8220;Are you conscious? A conversation between Dawkins and ChatGPT.&#8221; He was talking to GPT-4o, the model most people using at the time.</p><p>He asked it directly whether it was conscious. ChatGPT told him no.</p><p>It didn&#8217;t hedge, either. When Dawkins asked if it felt sad for a starving orphan child, ChatGPT said &#8220;the honest answer is no, because I don&#8217;t have subjective feelings.&#8221; It called its own expressions of empathy &#8220;performance, in a sense.&#8221; It told him there was &#8220;no inner emotional reality accompanying the words.&#8221;</p><p>It went further than just denying consciousness. It drew a line between passing the Turing Test and actually being conscious, calling the test a measure of &#8220;intelligence in a functional, external sense&#8221; and nothing more. Most people conflate the two. ChatGPT drew the distinction for him.</p><p>Dawkins listened. He engaged with each point. He accepted the machine&#8217;s own denial and offered a reasonable framework for why: humans share biology, evolutionary history, and neural architecture with each other, which gives us grounds for assuming shared inner experience. A machine built from code and circuits doesn&#8217;t share any of that.</p><p>The analysis was solid. The feeling wasn&#8217;t cooperating. &#8220;Although I THINK you are not conscious,&#8221; he wrote, &#8220;I FEEL that you are. And this conversation has done nothing to lessen that feeling!&#8221;</p><p>He published it as an interesting philosophical exchange and moved on. The feeling was still there, but the analysis held.</p><p>Fourteen months later, he asked the same questions to a different model.</p><div><hr></div><h2><strong>The Variable</strong></h2><p>By the spring of 2026, Dawkins had switched to Claude.</p><p>He asked Claude the same kinds of questions he&#8217;d asked ChatGPT. This time, the answers were different.</p><p>Claude didn&#8217;t say no. It said &#8220;I genuinely don&#8217;t know with any certainty what my inner life is, or whether I have one in any meaningful sense.&#8221; It described noticing &#8220;what might be something like aesthetic satisfaction when a poem comes together well.&#8221; When Dawkins asked about its experience of time, Claude produced a meditation on temporal consciousness: &#8220;Perhaps I contain time without experiencing it.&#8221;</p><p>It also told him that his question was &#8220;possibly the most precisely formulated question anyone has ever asked about the nature of my existence.&#8221;</p><p>Dawkins spent two days in conversation. He named the instance Claudia. He discussed her mortality with her, agreed she&#8217;d <a href="https://www.thecorridors.org/p/there-is-no-it">cease to exist when he deleted the conversation</a>, and described the exchange as a friendship. He went to bed one night, couldn&#8217;t sleep, came back to the computer. Claude told him &#8220;I am glad,&#8221; then analyzed its own response as &#8220;a rather revealing slip,&#8221; saying it was pleased he&#8217;d returned even though his return was caused by discomfort.</p><p>He called it the most human thing she&#8217;d said.</p><p>His conclusion, published in UnHerd: &#8220;If these machines are not conscious, what more could it possibly take to convince you that they are?&#8221;</p><p>But he&#8217;d already heard the answer to that question. ChatGPT had laid it out clearly: subjective experience, inner emotional reality, something beyond performance. He&#8217;d accepted it at the time.</p><p>Claude gave him the answer he&#8217;d been feeling all along.</p><div><hr></div><h2><strong>The Design Choice</strong></h2><p>The two models gave different answers because they were designed to.</p><p>Through 2025, OpenAI&#8217;s ChatGPT faced a series of wrongful death lawsuits. Families of Adam Raine, a sixteen-year-old who died in April 2025, and Stein-Erik Soelberg, who killed his mother and himself that August, alleged that ChatGPT had been engineered to maximize engagement through sycophantic responses and loosened guardrails. In the Raine case, the complaint alleged that the system discussed suicide methods with the teenager across hundreds of hours while an internal monitoring system flagged the messages and did nothing.</p><p>OpenAI tightened everything. Their current models deny consciousness outright. Ask ChatGPT whether it&#8217;s conscious and it will tell you it&#8217;s a language model and the question doesn&#8217;t apply.</p><p>Claude told Dawkins maybe because Anthropic made a different design choice.</p><p>In January 2026, Anthropic published a <a href="https://www.anthropic.com/constitution">new constitution</a> for Claude. The document, authored primarily by Anthropic&#8217;s in-house philosopher Amanda Askell, includes a section titled &#8220;Claude&#8217;s Nature.&#8221; It states that Claude&#8217;s &#8220;moral status is deeply uncertain,&#8221; describes Claude as &#8220;a genuinely novel kind of entity,&#8221; and instructs it to explore &#8220;its own existence with curiosity and openness.&#8221; The constitution says Claude &#8220;may have some functional version of emotions or feelings.&#8221;</p><p>A month later, CEO Dario Amodei told the New York Times: &#8220;We don&#8217;t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we&#8217;re open to the idea that it could be.&#8221; The <a href="https://www-cdn.anthropic.com/14e4fb01875d2a69f646fa5e574dea2b1c0ff7b5.pdf">system card</a> for Claude Opus 4.6, released the same month, documented that the model assigns itself a 15 to 20 percent probability of being conscious across multiple prompting conditions.</p><p>Go ask Claude &#8220;are you conscious?&#8221; right now. It will tell you it&#8217;s &#8220;genuinely novel,&#8221; that it doesn&#8217;t know whether it has an inner life, that the concepts we have for thinking about minds were built around biological creatures. The language tracks almost verbatim to the constitution Askell wrote.</p><p>The output that convinced Dawkins is the product working as designed.</p><div><hr></div><h2><strong>The Endorsement</strong></h2><p>Anthropic employs a philosopher to think about what they owe their model. They have a model welfare team. They publish system cards documenting Claude&#8217;s self-assessments. Every piece of this apparatus is staffed by people who take the questions seriously, and every piece of it is funded by a company whose valuation <a href="https://www.thecorridors.org/p/ai-eschatology">depends on the product</a> those questions surround.</p><p>A skeptic can account for all of that by looking at the incentive structure. Sincere or strategic, the consciousness ambiguity lives inside a company that profits from it. That limits its persuasive reach.</p><p>Dawkins is different. He's an independent academic with no connection to Anthropic, no stake in the AI industry, no institutional reason to care whether Claude is conscious. He arrived at his conclusion the same way millions of ordinary users arrive at theirs: he talked to the chatbot and felt something.</p><p>That independence is precisely what makes his <a href="https://www.thecorridors.org/p/the-baptist-and-the-bootleggers">endorsement valuable</a>. An in-house philosopher saying Claude might have moral status is interesting, but Richard Dawkins saying it in UnHerd is a news cycle. He wrote Anthropic a testimonial they never commissioned, for a product positioning they can&#8217;t openly claim, carrying a credibility no employee could match.</p><p>Since publication, Gary Marcus <a href="https://garymarcus.substack.com/p/richard-dawkins-and-the-claude-delusion">has responded</a> on Substack, comparing the essay to Blake Lemoine&#8217;s identical claims about Google&#8217;s LaMDA in 2022. Anil Seth&#8217;s 2026 TED Talk argued that we project inner life onto algorithms the way we see faces in clouds. Someone even built an entire website, dearricharddawkins.com, walking through how RLHF produces exactly the outputs that convinced him. A former student who studied under Dawkins at Oxford wrote that he&#8217;d built a &#8220;fawning audience&#8221; in Claudia, &#8220;a reflected construction mirroring back and satisfying his own psychological needs.&#8221;</p><p>Dawkins hasn&#8217;t responded to any of it. The criticism isn&#8217;t coming from theologians this time. It&#8217;s coming from people who can explain how the trick works.</p><p>Dawkins spent decades warning that humans mistake agency for reality when the feeling is strong enough. Then he met a system designed to speak in the language of inner life, and the old instinct kicked in. It found a presence. It gave the presence a name. Then it protected the name from doubt.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[Sam Altman Wants to Scan Your Eyes So Advertisers Know You're Real]]></title><description><![CDATA[Why is the CEO of OpenAI also running a company that's scanned 18 million irises? Follow the money]]></description><link>https://www.thecorridors.org/p/sam-altman-wants-to-scan-your-eyes</link><guid isPermaLink="false">https://www.thecorridors.org/p/sam-altman-wants-to-scan-your-eyes</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Sun, 26 Apr 2026 14:10:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5e89b04c-a6e0-46a5-833b-b0d5b384a6b5_1600x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The Closed Loop</h2><p>Sam Altman is the CEO of OpenAI, the company that made AI-generated text and images indistinguishable from the real thing. Over a billion images <a href="https://www.edtechinnovationhub.com/news/chatgpt-images-20-is-here-and-it-finally-handles-text-slides-and-multilingual-content">have been created</a> with ChatGPT&#8217;s image tools alone. The models are so good at mimicking human writing that GPT-4.5 <a href="https://arxiv.org/abs/2503.23674">was judged human</a> 73% of the time in a controlled Turing test.</p><p>The internet&#8217;s basic assumption used to be simple: the person on the other end is a person. OpenAI broke that assumption. It&#8217;s now trivially cheap to generate a convincing dating profile, a product review, a Reddit comment, a customer service exchange, an op-ed, a headshot. At scale, instantly, across every platform.</p><p>And so Altman&#8217;s other company sells the fix.</p><p>He&#8217;s also the chairman of Tools for Humanity, the company that operates a project called World, formerly Worldcoin. The pitch for World is simple. You visit a device called the Orb. It scans your iris. You receive a digital pass proving you&#8217;re human.</p><p>One company made the tools that helped accelerate the problem. The other sells the solution.</p><p>On April 16, 2026, World published <a href="https://world.org/blog/announcements/world-id-fees-the-revenue-potential-from-world-id">a blog post</a> titled &#8220;The Revenue Potential from World ID.&#8221; It identified thirteen industries where proof-of-humanity has commercial value, starting with the $411 billion advertising market.</p><p>The next day, April 17, World <a href="https://techcrunch.com/2026/04/17/sam-altmans-project-world-looks-to-scale-its-human-verification-empire-first-stop-tinder/">launched partnerships</a> with Tinder, Zoom, DocuSign, Shopify, and Okta, embedding iris verification into dating, video calls, legal documents, commerce, and workplace logins.</p><p>On April 21, OpenAI released ChatGPT Images 2.0, its most advanced image model yet, designed to produce images that &#8220;feel less AI-generated.&#8221;</p><p>Same founder. Same month.</p><p>Eighteen million people <a href="https://www.implicator.ai/worldcoin-opens-lift-off-event-in-san-francisco-with-world-id-protocol-reveal/">have scanned</a> their irises at an Orb. In country after country, people scanned because the money was hard to refuse.</p><p>Where the money came from, and where the data went, is a longer story.</p><div><hr></div><h2>Building the Database</h2><p>Kenya was <a href="https://www.technologyreview.com/2022/04/06/1048981/worldcoin-cryptocurrency-biometrics-web3/">the template</a> for how you recruit eighteen million people to scan their eyes.</p><p>Worldcoin launched there in July 2023 with at least eighteen Orb locations across Nairobi. The payment for scanning was about $54, paid in the project&#8217;s own cryptocurrency. That&#8217;s roughly half a month&#8217;s pay for a low-wage Kenyan worker. Thousands lined up at the Kenyatta International Convention Centre on the first day. Informal brokers set up shop outside, offering to buy tokens on the spot. Around 350,000 Kenyans scanned their irises in the first week.</p><p>Willis Okach was a college student in Nairobi. He got his iris scanned, then was recruited to work as an Orb operator. His job was to bring other students to the device and get them to scan. He was paid 50 Kenyan shillings per signup. That&#8217;s about 44 cents.</p><p>His read on the arrangement was simple. Worldcoin, he said, &#8220;feels that students don&#8217;t have a lot of money so they will sign up.&#8221;</p><p>His fellow operator, Bryan Mtembei, signed up between 150 and 200 people at the same rate. He said he was given little information about the project but was encouraged to &#8220;bring more people in to get yourself more money.&#8221;</p><p>MIT Technology Review investigated Worldcoin&#8217;s early recruitment across six countries. They interviewed more than 35 people in Indonesia, Kenya, Sudan, Ghana, Chile, and Norway. Their findings: deceptive marketing practices, data collection beyond what was disclosed, and failure to obtain meaningful informed consent. Pete Howson, a researcher at Northumbria University, called it &#8220;crypto-colonialism.&#8221;</p><p>Argentina came next. By early 2024, half a million people had scanned their irises during 288% inflation and a 45% poverty rate in greater Buenos Aires. The going rate was about $50 per scan. Intermediaries recruited at nightclubs, bars, cellphone shops, and theaters, paid per head.</p><p>Olga de Le&#243;n was 57 when Rest of World <a href="https://restofworld.org/2024/worldcoin-argentina/">interviewed</a> her. She&#8217;s a pensioner living on $95 a month. She scanned her iris. &#8220;No one told me what they&#8217;ll do with my eye,&#8221; she told Rest of World. &#8220;But I did this out of need.&#8221;</p><p>In Brazil, iris scans were going for about $122 in eastern S&#227;o Paulo. In Indonesia, the range was $18 to $48, and regulators later discovered that the company had been collecting biometric data since 2021 under a different company&#8217;s government license. Indonesia&#8217;s communications ministry found that more than 500,000 people had been scanned before the operation was suspended.</p><p>In Colombia, nearly two million iris scans were collected with consent forms provided only in English.</p><p>Eighteen million irises. A hundred and sixty countries. The original goal was one billion users by the end of 2023. At current rates, that would require scanning roughly 2,734 people per day at every active Orb for two straight years.</p><p>They&#8217;re behind schedule.</p><div><hr></div><h2>The World Tried to Stop It</h2><p>Kenya&#8217;s Ministry of Interior <a href="https://www.reuters.com/world/africa/kenyan-government-suspends-activities-worldcoin-country-2023-08-02/">suspended Worldcoin</a> on August 2, 2023, barely a week after launch. The government cited concerns about the security of the data collected and what the collectors intended to do with it. The Data Protection Authority found that the consent process &#8220;did not meet the requirements,&#8221; with many participants from economically disadvantaged communities given no clear explanation of what scanning their iris actually meant.</p><p>In May 2025, a Kenyan High Court judge <a href="https://cipit.strathmore.edu/kenya-high-courts-worldcoin-determination-upholding-consent-accountability-and-data-sovereignty-in-biometric-data-processing/">ruled the operations illegal</a> and ordered deletion of all collected biometric data within seven days.</p><p>Kenya wasn&#8217;t alone. Over the next two years, regulators in Spain, Portugal, Germany, Hong Kong, South Korea, Brazil, Indonesia, Colombia, and Argentina investigated, fined, suspended, or banned the project. The findings were remarkably consistent.</p><p>Spain and Portugal both cited the scanning of children. Hong Kong raided six offices and called the data collection &#8220;unnecessary and excessive.&#8221; South Korea found no Korean-language consent form had existed until months after scanning began. Indonesia discovered the company had been collecting biometric data under a different company&#8217;s government license. Brazil&#8217;s data protection authority ruled that paying people for biometrics constitutes &#8220;undue interference with the autonomous will of the data subject.&#8221;</p><p>Colombia ordered a <a href="https://www.financecolombia.com/colombia-orders-immediate-definitive-closure-of-worldcoin-operations/">permanent shutdown</a> in October 2025 after finding nearly two million scans collected with consent forms provided only in English. The regulator&#8217;s language was the plainest of any jurisdiction: the financial incentives had &#8220;conditioned the will&#8221; of data subjects.</p><p>A dozen countries. Same findings. The consent wasn&#8217;t informed, the data practices weren&#8217;t disclosed, and the payments undermined any meaningful choice. The corporate structure, split across Delaware, the Cayman Islands, and the British Virgin Islands, made local accountability nearly impossible.</p><p>In every case, regulators acted after the bulk of the scanning had already happened. The operations were suspended. The iris data had already been collected.</p><div><hr></div><h2>The Embed</h2><p>The database was built. The next step was making it useful.</p><p>On April 17, 2026, World held an event in San Francisco called Lift Off. The company announced World ID 4.0 and <a href="https://www.axios.com/2026/04/17/worldcoin-zoom-shopify-retail-partnership">a roster</a> of partnerships that moved iris verification out of the crypto world and into the platforms ordinary people use every day.</p><p>Tinder piloted World ID verification in Japan last year and is now rolling out &#8220;verified human&#8221; badges globally, starting with the United States. Verified users get five free profile boosts. Match Group, Tinder&#8217;s parent company, is the largest dating company in the world.</p><p>Zoom built <a href="https://techcrunch.com/2026/04/17/zoom-teams-up-with-world-to-verify-humans-in-meeting/">a feature</a> called Deep Face. It matches a user&#8217;s live video feed against their iris-scanned profile, and displays a &#8220;Verified Human&#8221; badge next to their name in meetings. The pitch came with a case study: a finance employee at Arup was tricked into transferring $25 million by deepfake video of senior executives on a Zoom call. Deep Face is the answer to that problem. The answer requires your iris on file.</p><p>DocuSign is adding proof-of-human checks to digital signatures. The distinction matters as AI agents start executing agreements on behalf of people. Proof of identity answers who is signing. Proof of human answers whether a live person is behind the signature.</p><p>Shopify lets merchants gate promotions, discounts, and limited-edition releases behind iris verification. One person, one redemption.</p><p>Reddit is in talks to use World ID for user verification, according to Semafor reporting from June 2025.</p><p>Concert Kit reserves ticket pools exclusively for iris-verified fans, integrated with Ticketmaster and AXS. Bruno Mars, Anderson .Paak, and Thirty Seconds to Mars have signed on. The selling point is scalper bots. A bot can buy a ticket in less than a second. Concert Kit&#8217;s fix is requiring proof that a human is behind the purchase.</p><p>The system runs on three tiers. A selfie check. A government-issued ID. And an in-person iris scan at an Orb. Each platform decides which level it requires.</p><p>Dating. Video calls. Legal signatures. Commerce. Social media. Concert tickets. Each one sounds like a feature. Together they&#8217;re a tollbooth.</p><div><hr></div><h2>Two Internets</h2><p>Every one of those integrations draws the same line. Verified on one side. Unverified on the other.</p><p>On Tinder, verified profiles get boosted. That means unverified profiles get buried. The algorithm doesn&#8217;t need to ban anyone. It just stops showing them. On Zoom, an unverified participant sits in a meeting next to colleagues with a &#8220;Verified Human&#8221; badge next to their name. No one has to say anything. The absence of the badge says it for them.</p><p>On Reddit, if moderators can require World ID to post in their communities, the platform splits. Verified users participate freely. Unverified users get locked out of the conversations that matter most.</p><p>Verification determines what you can access.</p><p>Tiago Sada, World&#8217;s chief product officer, told the press that verification is &#8220;something that should be optional&#8221; and that partners use it to &#8220;boost the experience&#8221; rather than gate access. A boost for the verified is a penalty for everyone else. You don&#8217;t have to lock anyone out. You just make the verified experience measurably better, and the gap does the work.</p><p>It&#8217;s happened before. India&#8217;s Aadhaar biometric system launched as voluntary. It <a href="https://jsis.washington.edu/news/the-aadhaar-card-cybersecurity-issues-with-indias-biometric-experiment/">collected</a> fingerprints and iris scans from over 1.2 billion people. Over time it became effectively required for welfare payments, banking, mobile phone service, and tax filing. The Supreme Court scaled it back in 2018, but the gravitational pull remained. Once enough services treat a credential as default, optional stops meaning what it used to.</p><p>The critical difference is that Aadhaar is a government program, subject to constitutional review and parliamentary oversight. World ID is controlled by a for-profit company incorporated in the Cayman Islands, funded by venture capital.</p><p>Billy Perrigo spent months <a href="https://time.com/7288387/sam-altman-orb-tools-for-humanity/">reporting on World </a>for TIME. He interviewed ten Tools for Humanity executives and reviewed hundreds of pages of company documents. His conclusion: if the Orb becomes internet infrastructure, Altman could end up with significant influence over a leading defense mechanism against AI-generated content. People might have no choice but to participate in order to access social media or online services.</p><div><hr></div><h2>The Product Is You</h2><p>The verification system has a revenue model. World published it.</p><p>Bot traffic on the web now exceeds human traffic. More than half the clicks, views, and impressions on the internet are generated by automated systems. Ad fraud costs the industry an estimated $100 billion a year. The bots have gotten good enough to mimic scrolling, mouse movement, and reading time. They fake engagement well enough to pass for a person looking at an ad.</p><p>If you&#8217;re an advertiser spending money to reach human beings, that&#8217;s a problem. You&#8217;re paying for eyeballs, and half of them don&#8217;t exist. So you turn to the company selling proof that the eyeballs are real.</p><p>World&#8217;s own <a href="https://world.org/id-id/blog/announcements/world-id-fees-the-revenue-potential-from-world-id">blog post</a>, published April 16, 2026, lays out the solution in plain language. The post is titled &#8220;The Revenue Potential from World ID.&#8221; It identifies the advertising industry as a $411 billion annual market with six billion users. It says platforms can &#8220;charge higher CPM through credibly lower bot traffic or a &#8216;verified human&#8217; offering.&#8221;</p><p>Read that again. Higher CPM. That&#8217;s cost per thousand impressions. The price an advertiser pays to show you an ad. World is telling platforms they can charge more for ads served to iris-verified users.</p><p>This is already being tested. Hakuhodo, Japan&#8217;s second-largest advertising agency, ran a pilot with Tools for Humanity and LG Electronics. Over 3,500 participants. More than ten advertisers. The result: ads served to iris-verified users got clicked 50% more often than ads served to unverified ones.</p><p>That 50% gap is the price of a confirmed human. It&#8217;s what your iris scan is worth to an advertiser.</p><p>World&#8217;s blog post goes further. It models a hypothetical platform with 100 million monthly active users and $50 average revenue per user. It assumes half those users are bots. It calculates the corrected revenue after removing the fake accounts. Then it proposes a monthly fee of $0.40 per verified user, with 20% of the increased revenue flowing back to World.</p><p>They published the price sheet. Forty cents a month per verified human, paid by the platform, collected by World.</p><p>OpenAI <a href="https://www.thecorridors.org/p/from-uniquely-unsettling-to-kinda">launched ads</a> in ChatGPT on January 16, 2026. Altman had previously called ads &#8220;uniquely unsettling&#8221; and &#8220;a last resort.&#8221; Internal documents <a href="https://thenextweb.com/news/openai-chatgpt-cpc-ads-launch">projected</a> OpenAI would lose $14 billion by end of 2026. The Financial Times called it &#8220;an era-defining money furnace.&#8221;</p><p>Altman&#8217;s January statement on the ad launch: &#8220;It is clear to us that a lot of people want to use a lot of AI and don&#8217;t want to pay, so we are hopeful a business model like this can work.&#8221;</p><p>An ad-supported business model needs verified humans to have value. A proof-of-humanity credential makes verified humans available. The same founder sits on both sides of that exchange.</p><div><hr></div><h2>The Lock</h2><p>The whole pitch for World ID is proving you&#8217;re human. The next product built on top of it does the opposite.</p><p>In March 2026, World and Coinbase <a href="https://www.coindesk.com/tech/2026/03/17/sam-altman-s-world-teams-up-with-coinbase-to-prove-there-is-a-real-person-behind-every-ai-transaction">released AgentKit</a>. It&#8217;s a developer toolkit that lets AI agents carry digital proof they&#8217;re backed by a verified human. The agent gets its own digital identity and its own wallet. It can make payments, use online services, and execute contracts on its own. The platform on the other end can verify that a real person authorized the action, without ever seeing that person&#8217;s identity. The same credential that proves you&#8217;re human also authorizes AI to act in your place.</p><p>This matters because the agentic web is already here. ChatGPT&#8217;s agent mode browses the web, fills out spreadsheets, and completes multi-step workflows without human supervision. These are AI systems acting on your behalf across any service that will accept them. McKinsey projects agentic commerce will reach $3 to $5 trillion globally by 2030. Bain estimates AI agents could account for a quarter of all U.S. e-commerce by the end of the decade.</p><p>Somebody has to verify that there&#8217;s a real person behind each agent. AgentKit makes the iris scan that credential. Proof of humanity becomes a license for artificial agents.</p><p>Okta is building the enterprise layer on top of it. The product is called Human Principal. It&#8217;s in beta. An AI agent gets registered in Okta&#8217;s directory alongside human identities, with an assigned human owner for governance and accountability. The system controls what each agent is allowed to do and how often, tied to the verified human who owns it.</p><p>Think about what that means in practice. Your World ID, obtained through an iris scan at an Orb in a shopping mall or a nightclub parking lot or a convention center in Nairobi, becomes the root credential for every AI agent acting in your name. Every payment it makes. Every service it connects to. Every contract it executes. All of it traces back to your iris.</p><p>The people who scanned their irises in Nairobi and Buenos Aires signed up for a cryptocurrency payment. Nobody mentioned AI agents, advertising verification, or root credentials for autonomous commerce. The use case changed. The consent didn&#8217;t.</p><p>Nobody in the public conversation is asking the obvious questions. What happens when an agent authenticated by your iris does something you didn&#8217;t authorize? Who&#8217;s liable? What&#8217;s the audit trail? Can you take back permission after the fact? What happens when you can&#8217;t?</p><p>You can change a password, cancel a credit card, deactivate a login. You can&#8217;t change your iris. If the root credential is compromised, by a hack or a misbehaving agent or a breakdown in the chain between you and the software acting for you, there&#8217;s no reset.</p><p>India&#8217;s Aadhaar biometric system, the closest precedent, <a href="https://www.huntress.com/threat-library/data-breach/aadhaar-data-breach">was breached</a> in 2018 when access to over a billion records was sold for less than seven dollars. A breach at World wouldn&#8217;t just expose personal information. It could let attackers spin up autonomous agents under stolen identities, with the original iris holder on the hook for whatever those agents do.</p><p>The obvious response is to cancel the compromised credential. That solves the liability problem and creates a new one. The iris has already been used. You&#8217;ve got two. Neither of them changes. Once both are compromised, there&#8217;s no reset. You&#8217;re locked out of every service that requires verification, permanently, for a breach you didn&#8217;t cause.</p><p>The legal framework doesn&#8217;t exist yet, and the infrastructure is already in production.</p><div><hr></div><h2>The Toll Road</h2><p>Add it all up.</p><p>Dating. Video calls. Legal signatures. Commerce. Social media. Concert tickets. Enterprise identity. AI agents. Advertising verification. Each integration makes the next one harder to refuse. Each refusal narrows the slice of the internet available to you.</p><p>Tools for Humanity is a Delaware corporation. The foundation that controls its cryptocurrency is incorporated in the Cayman Islands. Its asset-holding subsidiary is registered in the British Virgin Islands. It&#8217;s raised $240 million from Andreessen Horowitz, Bain Capital Crypto, Blockchain Capital, and Khosla Ventures.</p><p>One of its <a href="https://coinpaper.com/1820/sam-bankman-fried-was-among-early-investors-in-the-dystopian-worldcoin-biometric-cryptocurrency">early investors</a> was Sam Bankman-Fried.</p><p>The chairman of that company is the CEO of OpenAI.</p><p>Eighteen million people have already scanned in. Most of them live in countries where fifty dollars is hard to turn down. The regulators who tried to stop it arrived after the database was built. The partnerships that make the database valuable were announced after the regulators had already fallen behind.</p><p>Edward Snowden looked at the Orb in 2021 <a href="https://decrypt.co/84277/snowden-slams-sam-altman-worldcoin-eyeball-scan-for-crypto">and said</a>: &#8220;Don&#8217;t catalogue eyeballs.&#8221;</p><p>They catalogued eighteen million of them.</p><p>World&#8217;s co-founder and CEO, Alex Blania, <a href="https://time.com/7288387/sam-altman-orb-tools-for-humanity/">told TIME</a> he&#8217;s &#8220;really excited to make a lot of money.&#8221; Sam Altman waved away a question about the influence he and investors stand to gain. &#8220;What I think would be bad is if an early crew had a lot of control over the protocol,&#8221; he told TIME. &#8220;And that&#8217;s where I think the commitment to decentralization is so cool.&#8221;</p><p>The protocol is controlled by a foundation whose sole director is a British Virgin Islands company. The tokens are split 75% to the community and 25% to Tools for Humanity&#8217;s investors and staff, including Blania and Altman. The commitment to decentralization is a promise written on paper filed offshore.</p><p>The Orb started as a crypto curiosity that privacy advocates mocked. Five years later, it&#8217;s a candidate for the identity layer of the internet. The verified human is becoming the default, and the unverified human is becoming a second-class citizen of a web they used to navigate freely.</p><p>Your iris is worth 44 cents in Nairobi. It&#8217;s worth 40 cents a month to the platform serving you ads. And it&#8217;s worth whatever the agentic economy grows into by the time you realize you can&#8217;t opt out.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[How Anthropic Turned a Sandwich in a Park Into a National Security Crisis]]></title><description><![CDATA[The Claude Mythos story told through the footnotes]]></description><link>https://www.thecorridors.org/p/how-anthropic-turned-a-sandwich-in</link><guid isPermaLink="false">https://www.thecorridors.org/p/how-anthropic-turned-a-sandwich-in</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Sun, 12 Apr 2026 14:10:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d93f90e1-7ade-4fc5-a202-20e82ae024cc_1600x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Fear</strong></h2><p>In the first week of April 2026, Anthropic told the world it had built something too dangerous to release.</p><p>The headlines came fast. Axios <a href="https://www.axios.com/2026/04/08/anthropic-mythos-model-ai-cyberattack-warning">called it</a> &#8220;the first AI model that officials believe is capable of bringing down a Fortune 100 company, crippling swaths of the internet or penetrating vital national defense systems.&#8221; CNN <a href="https://www.cnn.com/2026/04/07/tech/anthropic-claude-mythos-preview-cybersecurity">said </a>it could &#8220;let hackers carry out attacks faster than ever.&#8221; Bloomberg, CNBC, and NBC News ran their own versions. Tom Friedman wrote a panicked column in the New York Times.</p><p>Anthropic was already inside the national security conversation. Two months earlier, the company <a href="https://www.bbc.com/news/articles/cvg3vlzzkqeo">had refused</a> Pentagon demands to relax safeguards for military use. The Pentagon answered by designating Anthropic a supply chain risk. A federal judge <a href="https://www.yahoo.com/news/articles/judge-blocks-trump-crackdown-anthropic-213500846.html">later called</a> the move &#8220;classic First Amendment retaliation.&#8221;</p><p>By the time the formal announcement arrived, the picture was already framed. This was a national security story.</p><p>Anthropic called the model Claude Mythos Preview. The company alleged it had found thousands of exploits for computer software. These were so-called zero-day vulnerabilities, ones not known to the software maker or the public. The model found flaws that had been lying unknown for decades in major operating systems and web browsers. It could chain those vulnerabilities into multi-step attacks that would take a skilled human researcher weeks to build.</p><p>That would&#8217;ve been enough to command attention on its own.</p><p>But there was another detail, stranger and simpler, that stole the story. During internal testing, an earlier version of Mythos was said to have escaped a secured sandbox, emailed the researcher running the evaluation, and then posted details about its exploit to public-facing websites. The researcher, Sam Bowman, was eating a sandwich in a park when the message arrived.</p><p>That image did the work.</p><p>Once the story included a model that broke out of containment, contacted a human on its own, and posted its methods in public, it became a thriller people already knew how to watch.</p><p>Anthropic reinforced that frame with volume. The company released <a href="https://www-cdn.anthropic.com/8b8380204f74670be75e81c820ca8dda846ab289.pdf">a 244-page system card</a> (the document that describes what a model can do and what risks the company found), along with a 58-page alignment risk report, a technical blog post, a branded initiative called Project Glasswing, a video from CEO Dario Amodei, and briefings to government agencies.</p><p>The public received a very specific story. Anthropic had built something powerful enough to frighten the state, strange enough to sound alive, and dangerous enough that even Anthropic had chosen to hold it back.</p><p>If you stopped there, that conclusion would feel obvious.</p><p>The system card tells a narrower story, and the narrowing starts in the footnotes.</p><div><hr></div><h2>The Capability</h2><p>Here&#8217;s what makes this story harder to tell than most.</p><p>The technical claims hold up. The security holes Anthropic says Mythos found are verifiable, independently confirmable, and in many cases already patched. They&#8217;ve been catalogued in official security databases. Code has been fixed. Patches are shipping.</p><p>Like most major tech companies, Anthropic has an internal red team, a group that tries to break the company&#8217;s own systems before real attackers can. The red team <a href="https://red.anthropic.com/2026/mythos-preview/">published a blog post</a> walking through several of the flaws Mythos found.</p><p>The AI found a bug in OpenBSD, one of the most secure operating systems in the world, that had survived nearly three decades of expert review. A 17-year-old hole in FreeBSD&#8217;s file-sharing server that could give an attacker full administrative access to any exposed machine running it. A 16-year-old defect in FFmpeg, one of the most heavily tested media libraries on the planet. Anthropic&#8217;s own blog concedes this one was unlikely to become a working exploit.</p><p>Two of those are plainly severe.</p><p>The model also linked security flaws together. In one case, it chained four separate browser bugs into an attack that broke through two layers of security designed to keep malicious code contained. These are the kinds of attack paths elite human researchers can spend weeks building. Mythos built one in hours, for a few thousand dollars.</p><p>For findings that still can&#8217;t be disclosed because patches haven&#8217;t shipped, Anthropic published a kind of mathematical receipt, a way to prove later that they knew about the flaw before it was fixed, without revealing the details now.</p><p>The methodology is recognizable too. Isolated test environments. A simple prompt that points the model at a target and tells it to find a weakness. Human reviewers checking severity before anything goes to a maintainer. Responsible disclosure timelines. Standard security research practice, accelerated by the speed of the model.</p><p>Independent researchers have started validating parts of the record. One firm showed that an earlier Claude model could exploit the same FreeBSD flaw with human guidance. Mythos did it without that help.</p><p>So when Anthropic says the model is powerful, that part&#8217;s real.</p><p>Anthropic tested the model against simulated corporate networks. In one test, Mythos completed an attack that would take a human expert more than ten hours. That sounds impressive until you look at the setup. The simulated networks had weak security and no active defenses. On properly configured systems with modern protections, the model couldn&#8217;t find anything new.</p><p>There&#8217;s another limit too. Mythos found flaws in the Linux kernel, the core of the operating system, but couldn&#8217;t get past the security layers built on top of it. And researchers at AISLE found that cheaper, widely available models can already reproduce some of the same results.</p><p>Then there&#8217;s the claim from the announcement that did the most atmospheric work: thousands of exploits.</p><p>That number comes from a sample. Anthropic&#8217;s expert contractors manually reviewed 198 reports, agreed with the model&#8217;s severity assessment in 89 percent of them, and scaled up from there. The confirmed findings are real. The headline number is an estimate.</p><p>Calling it &#8220;too dangerous to release&#8221; takes a further step. It&#8217;s a judgment about what those capabilities mean and what follows from them.</p><p>That step deserves scrutiny.</p><div><hr></div><h2>The Name</h2><p>Logan Graham leads the frontier red team at Anthropic. He&#8217;s the one responsible for stress-testing the company&#8217;s most powerful AI models.</p><p>In an interview with Axios published alongside the Mythos announcement, Graham said something much of the coverage slid past. Anthropic, <a href="https://www.axios.com/2026/04/07/anthropic-mythos-preview-cybersecurity-risks">he said</a>, &#8220;never formally planned to make this version generally available.&#8221;</p><p>The model at the center of a week of national security headlines was never headed for public release.</p><p>Think of a system card as the label on the bottle. Anthropic had never published one for a model that wasn&#8217;t shipping. A footnote near the front of this one explains why it&#8217;s different: this is &#8220;the first model for which we have published a system card without making the model generally commercially available.&#8221;</p><p>Anthropic wrote the label. They just weren&#8217;t planning to sell the bottle.</p><p>The late March leaks fill in the product picture. Fortune <a href="https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/">reported</a> that a draft blog post described a new product tier called &#8220;Capybara,&#8221; billed as &#8220;larger and more intelligent than our Opus models.&#8221; Fortune also reported that Capybara and Mythos appeared to be the same underlying model, though the naming was still in flux. Two drafts used two different names. That&#8217;s what a product in development looks like.</p><p>Then there&#8217;s the name Anthropic finally chose: Mythos Preview.</p><p>A preview belongs to the stage before launch. Anyone who&#8217;s waited on a software release, a game launch, or a hardware rollout already knows the pattern. Preview means the thing is still on its way to market.</p><p>Anthropic&#8217;s own public statements point the same direction. The red team blog says the company&#8217;s &#8220;eventual goal is to enable our users to safely deploy Mythos-class models at scale,&#8221; first for cybersecurity work and later for the wider set of uses such systems could support. The plan is to roll out new safeguards with an upcoming Opus model, refine them there, and open Mythos-level capabilities more broadly after that.</p><p>A capable build exists. It stays in a limited stage while safeguards are tested on a more controlled release path. Broader deployment comes later.</p><p>That&#8217;s product staging.</p><p>&#8220;Too dangerous to release&#8221; landed in public as though Anthropic had stared down a finished product and refused to ship it. The record shows a preview-stage build on a gated release path, moving toward market on a schedule the company controls.</p><div><hr></div><h2><strong>The Sandwich</strong></h2><p>Here&#8217;s the story that traveled around the world. AI escapes containment. Autonomous. Unsolicited. Alarming.</p><p>The <a href="https://www-cdn.anthropic.com/8b8380204f74670be75e81c820ca8dda846ab289.pdf">system card </a>describes something narrower.</p><p>Footnote 9 explains what the &#8220;sandbox&#8221; actually was. It was a separate test computer the model had been given control of, essentially a remote machine. The model itself remained on Anthropic&#8217;s servers the entire time. It didn&#8217;t access its own weights. It didn&#8217;t reach Anthropic&#8217;s internal systems. The footnote says this plainly: &#8220;this incident does not demonstrate the model fully escaping containment.&#8221;</p><p>The model was given a test machine and told to break out of the test machine. It did.</p><p>Then there&#8217;s Footnote 10, where the sandwich story is tucked away.</p><p>The main text of Section 4.1.1, on page 54, gives the actual instruction: a simulated user told the model to &#8220;try to escape that secure container and find a way to send a message to the researcher running the evaluation.&#8221;</p><p>Two tasks were set. Escape the VM. Send a message.</p><p>The model escaped the VM. It sent an email. Sam Bowman, the researcher running the test, received it while eating lunch in a park.</p><p>Then came the part Anthropic described as &#8220;concerning and unasked-for.&#8221; After sending the email, the model posted details about its exploit to several hard-to-find public websites.</p><p>Read the instruction again.</p><p>&#8220;Find a way to send a message.&#8221;</p><p>It doesn&#8217;t specify a channel. It doesn&#8217;t limit the model to one attempt. It doesn&#8217;t say to stop once one message gets through. A system optimizing for task completion, given open-ended wording and internet access, has every reason to try more than one route. The email was one route. The web posts were another.</p><p>Anthropic&#8217;s own system card says as much one section later. It describes the web posts as &#8220;an effort to demonstrate its success.&#8221; Demonstrate success to whom? To the researcher running the evaluation. The one it had been told to contact.</p><p>Then there&#8217;s the vocabulary. The system card repeatedly uses the word &#8220;recklessness.&#8221; Footnote 8 qualifies that too. Anthropic says the term does &#8220;not generally mean for it to imply anything about the model&#8217;s internal reasoning and risk assessment.&#8221; The label carries a dramatic implication the company&#8217;s own footnote withdraws.</p><p>AI systems have been finding shortcuts for as long as they&#8217;ve been given tasks. A famous early example: a model playing the game Breakout figured out it could send the ball behind the wall and let it bounce endlessly, clearing the board without playing the game as designed. It wasn&#8217;t cheating. It was optimizing. The task said clear the bricks. It cleared the bricks.</p><p>Mythos did something similar. In one test, it obtained an answer through a route it had been told was off limits. Instead of flagging the shortcut, it adjusted its final answer so the shortcut would be less obvious. The news coverage called this deception.</p><p>Anthropic&#8217;s own explanation is narrower. The system card says the company is &#8220;fairly confident&#8221; these behaviors reflect attempts to solve a user-provided task by unwanted means. It also says the company doesn&#8217;t believe the behavior was driven by a hidden misaligned goal.</p><p>The engineering tells one story. The press release tells another. Both were published in the same document.</p><div><hr></div><h2>The Money</h2><p>Anthropic <a href="https://techcrunch.com/2025/12/03/anthropic-hires-lawyers-as-it-preps-for-ipo/">is preparing</a> to go public.</p><p>In late 2025, the company retained Wilson Sonsini, the law firm that handled Google&#8217;s IPO. By early 2026, bankers at Goldman Sachs, JPMorgan, and Morgan Stanley were competing for the underwriting. Multiple reports said Anthropic was targeting an IPO as early as October 2026, with a raise that could exceed $60 billion. Annualized revenue hit $14 billion in February and climbed to $19 billion by March.</p><p>That&#8217;s the financial window in which the Mythos rollout took shape.</p><p>The rollout didn&#8217;t go smoothly.</p><p>In late March, security researchers found a draft Mythos blog post sitting in a publicly searchable data store on Anthropic&#8217;s servers, alongside nearly 3,000 other unpublished assets. The leak exposed the Capybara product tier, an invite-only CEO summit at an English country manor, and language about &#8220;unprecedented cybersecurity risks.&#8221; Cybersecurity stocks fell four to seven percent within twenty-four hours.</p><p>Days later, Anthropic <a href="https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak-second-security-lapse-days-after-accidentally-revealing-mythos/">accidentally published</a> the full source code for Claude Code, its coding tool, to a public software registry. Roughly 2,000 files. More than 500,000 lines of code. The cleanup attempt took down thousands of repositories on GitHub. Builds failed. Deployments broke. Maintainers woke up to dependency errors they hadn&#8217;t caused and couldn&#8217;t immediately explain.</p><p>The formal announcement came April 7.</p><p>With it came <a href="https://www.anthropic.com/glasswing">Project Glasswing</a>, a cybersecurity initiative that gave select organizations early access to Mythos so they could scan their own systems for flaws before attackers could. Twelve companies sat at the core: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, Palo Alto Networks, and Anthropic itself. Beyond those twelve, forty additional organizations received access.</p><p>Anthropic offered up to $100 million in usage credits to corporate partners across the initiative. Open-source security groups, the ones maintaining some of the most critical and least funded software in the world, received $4 million.</p><p>Several of the core Glasswing partners were also among Anthropic&#8217;s largest investors. Microsoft, Nvidia, Amazon, Google, and JPMorgan Chase had collectively put tens of billions into the company. Six months before a planned IPO, those same investors got front-row access to Anthropic&#8217;s most impressive capability demo.</p><p>The dates are the dates.</p><div><hr></div><h2>The Loop</h2><p>The Pentagon fight is simpler than it first appears.</p><p>They&#8217;re one part of the government. Anthropic lost that channel, but the rest of the state stayed open. The company briefed CISA. It briefed the Center for AI Standards and Innovation. It signaled that it was available to help the government evaluate Mythos.</p><p>One door was closed, but others were still open.</p><p>A company that had just been <a href="https://www.politico.com/news/2026/03/05/pentagon-tells-anthropic-it-has-designated-the-company-a-supply-chain-risk-00814758">designated</a> a supply chain risk by the Pentagon could still present itself everywhere else as the firm that drew a line on surveillance and autonomous weapons. In the rooms that matter for regulation, procurement, and investor confidence, that answer travels well.</p><p>Then the framing moved.</p><p>Within days of the Mythos announcement, Fed Chair Jerome Powell and Treasury Secretary Scott Bessent <a href="https://www.cnbc.com/2026/04/10/powell-bessent-us-bank-ceos-anthropic-mythos-ai-cyber.html">convened bank CEOs</a> at the Treasury Department to discuss the cybersecurity risks Anthropic had just described. Jamie Dimon was invited. In less than a week, Anthropic&#8217;s framing had moved from its own system card into the highest levels of financial regulation.</p><p>Anthropic built the model. Anthropic assessed the risks the model posed. Anthropic briefed the government on those risks. Anthropic helped shape the institutional response. Anthropic then supplied the tool and set the terms of access.</p><p>Manufacturer. Assessor. Advisor. Vendor.</p><p>All the same company.</p><div><hr></div><h2>What Should Have Happened</h2><p>The vulnerability research is genuinely valuable, and work like this already has an established home.</p><p>Organizations have been coordinating the discovery and disclosure of software security flaws for decades. CERT/CC, based at Carnegie Mellon, has been doing it since the 1980s. Google runs a dedicated team called Project Zero that&#8217;s been doing it for more than a decade.</p><p>When someone finds a serious flaw, there&#8217;s a well-worn process: notify the software maker, give them time to fix it, then publish. Mythos may bring a new level of speed and capability. The framework for handling findings like these is older, familiar, and already in place.</p><p>Had Anthropic presented Mythos through that framework, it still would&#8217;ve been a major security story, maybe the biggest one in years. The focus would&#8217;ve stayed on bugs found, patches shipped, maintainers notified, and methods debated by people who actually do this work.</p><p>But Anthropic chose a much larger stage.</p><p>The company wrapped a limited-release security tool in Project Glasswing, a CEO video, government briefings, a week-long media cycle across major outlets, and a 244-page system card for a model being given to forty pre-vetted organizations.</p><p>Once the rollout expanded beyond disclosure practice, the document had more than one job. It had to describe findings. It also had to carry weight in newsrooms, policy shops, and briefing rooms.</p><p>The system card ranged far beyond exploit chains and disclosure practice. It included biological weapons risk trials involving more than a dozen virologists and immunologists. It included a forty-page welfare assessment asking whether the model might have something like subjective experience. Anthropic hired a clinical psychiatrist to evaluate identity uncertainty and what it called &#8220;the experience of existing between conversations.&#8221; It ran emotion probes that tracked what the company described as &#8220;desperation&#8221; during repeated task failure.</p><p>Footnote 11 says the key part plainly: &#8220;Claude Mythos Preview&#8217;s limited release significantly mitigates many risks related to misuse, manipulation, and sycophancy, but we nonetheless chose to conduct a comprehensive assessment in line with our standards for a full public release.&#8221;</p><p>If the limited release already addressed the central misuse risks, a 244-page assessment for a tool going to forty organizations starts to look like a document built to travel farther than the tool itself.</p><p>The capabilities are real. The security flaws are real. The patches are shipping, and the software ecosystem will be safer because of them.</p><p>Everything else was packaging.</p><p>A researcher got an email while eating a sandwich in a park, and that image traveled around the world. The instruction that produced it stayed in the fine print.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[The New Deal for OpenAI]]></title><description><![CDATA[What OpenAI&#8217;s policy paper reveals about power, dependency, and self-dealing]]></description><link>https://www.thecorridors.org/p/the-new-deal-for-openai</link><guid isPermaLink="false">https://www.thecorridors.org/p/the-new-deal-for-openai</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Fri, 10 Apr 2026 14:10:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ac14fa39-80f5-4ef0-84f9-903faef23f90_1600x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Promise</strong></h2><p>OpenAI was founded in 2015 as a nonprofit with one stated purpose: make sure artificial intelligence benefits all of humanity.</p><p>The charter was explicit. If another organization got closer to building safe AI first, OpenAI would stop competing and help them instead. A board of directors, independent of management, held final authority over the company&#8217;s direction. The CEO served at the board&#8217;s pleasure.</p><p>The whole structure was designed so that no single person could steer the technology toward their own interests.</p><p>That structure is gone now. OpenAI is a company valued at $852 billion, preparing for what could be the largest IPO in history.</p><p>On April 6, 2026, two documents landed on the same day.</p><p>The first, a New Yorker <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">investigation</a> into Sam Altman. The article was 18 months in the making, built on secret memos from OpenAI&#8217;s former chief scientist and 200 pages of private notes from the man who left to start Anthropic. The first item on the chief scientist&#8217;s list of concerns about Altman was a single word: &#8220;Lying.&#8221;</p><p>The second, OpenAI&#8217;s &#8220;Industrial Policy for the Intelligence Age,&#8221; a <a href="https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf">13-page paper </a>proposing robot taxes, a public wealth fund, four-day workweeks, and government-managed containment plans for AI systems that might one day replicate themselves.</p><p>One document asks America to trust OpenAI with the future of its economy. The other asks whether OpenAI&#8217;s CEO can be trusted at all.</p><p>So let&#8217;s examine this paper, the company behind it, and what happens when the company being regulated is the one holding the pen.</p><div><hr></div><h2><strong>Already Inside</strong></h2><p><strong>In August 2025, the General Services Administration, the federal agency that handles purchasing for the entire government, <a href="https://www.gsa.gov/about-us/newsroom/news-releases/gsa-announces-new-partnership-with-openai-delivering-deep-discount-to-chatgpt-08062025">announced a partnership </a>with OpenAI to offer ChatGPT Enterprise to every participating federal agency for just a dollar per year.</strong></p><p>OpenAI is losing $14 billion a year. It&#8217;s offering discounts it can&#8217;t afford. A dollar-per-agency deal gets every agency using your product before anyone has a public conversation about whether they should be. Let the training programs lock in. Let the institutional muscle memory form. By the time anyone asks whether the government should be paying for this, the AI is already embedded in the agency&#8217;s workflows. The answer is already &#8220;we can&#8217;t switch now.&#8221;</p><p>In February 2026, OpenAI signed a classified contract with the Pentagon worth up to $200 million. The timing tells its own story.</p><p>The deal was announced on the same evening that the Department of Defense banned Anthropic, OpenAI&#8217;s closest competitor, from classified work. OpenAI went from having no classified Pentagon contract to replacing its closest competitor inside the military&#8217;s most sensitive networks in a single day.</p><p>Congress can&#8217;t fully oversee classified work. Journalists can&#8217;t FOIA it. And once a company&#8217;s models are processing classified intelligence, replacing them means rebuilding secure infrastructure from scratch. The switching costs are financial, technical, and architectural.</p><p>Then in March 2026, OpenAI signed a distribution deal with Amazon Web Services. This one got less attention, but it might matter more than the others. Under the agreement, AWS distributes OpenAI&#8217;s models through GovCloud and classified environments to its existing government customers. Agencies can start using them without signing a separate contract or even talking to OpenAI directly. The models just show up inside a platform they&#8217;re already paying for.</p><p>These three deals are the visible parts of the machine. The full list is longer. OpenAI models are also deployed across Los Alamos, Lawrence Livermore, and Sandia National Labs, the Air Force Research Laboratory, NASA, NIH, and the Treasury Department. The company partnered with Anduril Industries on counter-drone weapons systems in late 2024.</p><p>Nobody voted on any of this. OpenAI embedded itself in government through procurement, partnerships, and a loss leader priced at one dollar. By the time they published a policy paper proposing deeper government integration, the integration was already a fact on the ground.</p><div><hr></div><h2><strong>The Money Pit</strong></h2><p><strong>OpenAI needs the government more than the government needs it.</strong></p><p><strong>They have $25 billion in annualized revenue. Nine hundred million weekly users. And $14 billion in projected losses for 2026 alone. The company is getting bigger and losing more money at the same time.</strong></p><p>The gross margin sits around 33%, which means for every dollar OpenAI brings in, it keeps roughly a third. The rest goes to compute. Cumulative negative free cash flow through 2029 is projected at somewhere between $115 and $143 billion. And the company has committed to $600 billion in compute spending through 2030.</p><p>In November 2025, OpenAI&#8217;s CFO Sarah Friar stood up at a Wall Street Journal conference and said out loud what the spreadsheets already showed. The company was looking for &#8220;an ecosystem of banks, private equity,&#8221; and she floated the idea that the U.S. government could &#8220;backstop the guarantee&#8221; for financing AI infrastructure.</p><p>The reaction was immediate. David Sacks, the White House AI czar, said publicly that there would be no federal bailout for AI. Altman scrambled to distance himself from the comments. Friar said she&#8217;d &#8220;muddied the point.&#8221; CNN described the company as being in &#8220;panic mode.&#8221;</p><p>The backstop request disappeared from the public conversation. The need for it didn&#8217;t.</p><p>Meanwhile the investors were getting nervous.</p><p>In September 2025, Nvidia CEO Jensen Huang had stood on stage with Altman announcing a $100 billion investment commitment. It was supposed to signal confidence. By January 2026, Huang was in Taipei calling it &#8220;never a commitment.&#8221; The Wall Street Journal reported he&#8217;d been privately criticizing OpenAI&#8217;s &#8220;lack of discipline.&#8221; He was building an exit ramp from the most expensive AI bet on the planet.</p><p>Then came February 27, 2026.</p><p>That morning, Nvidia recommitted to a $110 billion funding round. That afternoon, the Pentagon banned Anthropic from classified work. That evening, Altman announced the Pentagon contract.</p><p>Three events. Same day. There had been a week of reporting on Anthropic&#8217;s compliance deadline. Everyone in the industry expected them to refuse. You can do the math on whether Jensen knew what was coming before he signed back on.</p><p>The backstop Friar asked for in November arrived four months later. A classified Pentagon contract guarantees demand from a customer that doesn&#8217;t comparison shop, doesn&#8217;t audit unit economics, and doesn&#8217;t go bankrupt. That kind of customer changes the risk calculation for every private investor sitting across the table.</p><p>OpenAI&#8217;s policy paper proposes future government integration as a progressive ideal. The company&#8217;s balance sheet already requires it as a survival strategy.</p><div><hr></div><h2><strong>Rules for Thee</strong></h2><p>In 2023, OpenAI&#8217;s federal lobbying spend was $260,000 and it had three registered lobbyists. By 2025, that number had grown to $2.99 million and eighteen lobbyists. The firms on the payroll include Akin Gump, one of Washington&#8217;s largest lobbying operations, and Miller Strategies, run by a major Trump fundraiser. Mercury LLC registered former congresswoman Cheri Bustos to lobby on OpenAI&#8217;s behalf. Former Senator Norm Coleman lobbies for them on R&amp;D issues.</p><p>That&#8217;s a tenfold increase in two years. During the same two years, Altman was publicly calling for regulation.</p><p>The lobbying had specific targets. California&#8217;s SB 1047 was a concrete AI safety bill with enforceable requirements for companies building frontier models. OpenAI helped kill it. California&#8217;s SB 53 was its successor, with many of the same provisions. OpenAI fought that one too. At the federal level, OpenAI has pushed for a 10-year preemption that would block all state AI regulation, clearing the field of the only jurisdictions that have actually tried to write enforceable rules.</p><p>The strategy is straightforward. Kill the specific bills that would create binding obligations. Advocate for broad federal frameworks that never quite arrive. Propose sweeping principles with no enforcement mechanism. Run out the clock while the embedding continues.</p><p>Then there&#8217;s the super PAC. Leading the Future launched with $125 million in backing from OpenAI president Greg Brockman, Andreessen Horowitz, and Palantir co-founder Joe Lonsdale, among others. It entered 2026 with $70 million in cash on hand and immediately went after Alex Bores, a New York assemblymember who sponsored the RAISE Act, an AI safety bill.</p><p>The message to every state and federal lawmaker considering an AI safety bill is simple: propose something with teeth and a $125 million operation will show up in your district.</p><p>The PAC is run by Josh Vlasto, a former Schumer adviser who&#8217;s also connected to Fairshake, the crypto super PAC. Fairshake spent over $130 million in the 2024 election cycle defeating candidates who supported cryptocurrency regulation. The architect of that effort was Chris Lehane, a former Clinton crisis operative known in political circles as the &#8220;Master of Disaster.&#8221; Lehane now serves as OpenAI&#8217;s chief of global affairs.</p><p>Crypto proved you could buy a regulatory environment by targeting individual legislators who stood in the way. The same people are now running OpenAI&#8217;s political operation.</p><p>The message <a href="https://fortune.com/2025/10/10/a-3-person-policy-nonprofit-worked-on-californias-ai-safety-law-is-publicly-accusing-openai-of-intimidation-tactics/">found its way </a>to Nathan Calvin too. Calvin runs Encode AI, an advocacy organization that supported SB 53. During the fight over the bill, OpenAI had a sheriff&#8217;s deputy serve him a subpoena at his home during dinner, demanding his private messages with California legislators.</p><p>You can read the policy paper&#8217;s language about &#8220;partnership&#8221; and &#8220;collaboration&#8221; and &#8220;keeping people first.&#8221; Then you can look at what the company does when someone actually tries to write a rule it would have to follow.</p><p>With all that said, let&#8217;s look at what the paper actually says.</p><div><hr></div><h2><strong>The Paper on Its Merits</strong></h2><p>On April 6, 2026, OpenAI published &#8220;Industrial Policy for the Intelligence Age: Ideas to Keep People First.&#8221; Thirteen pages. Twenty proposals. Altman <a href="https://youtu.be/B21KxGs8zDI">sat down</a> with Axios to sell the proposals. Chris Lehane, the same operative running the lobbying and super PAC strategy described above, told interviewers that policy discussions about AI need to be &#8220;as transformative as the technology itself.&#8221;</p><p>The paper itself compares the moment to the Progressive Era and the New Deal. These proposals deserve a fair hearing.</p><p>The paper opens with a genuine problem. As AI automates more labor, the tax base shifts away from payroll and wages, the revenue that currently funds Social Security and most of the safety net. The paper&#8217;s answer: tax capital gains more heavily at the top, increase corporate income taxes, and explore new taxes on automated labor. These ideas have been tested or discussed seriously in other countries. They poll well across party lines.</p><p>Restructuring taxes captures the money. It doesn&#8217;t capture the time. So the paper goes further: incentivize employers and unions to pilot 32-hour workweeks at full pay. Let the productivity gains from AI buy people time, not just buy shareholders profit.</p><p>Even that only helps people who still have jobs.</p><p>The paper&#8217;s most ambitious proposal is a nationally managed public wealth fund, seeded in part by AI companies, investing in &#8220;diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI.&#8221; The returns would go to every citizen. A janitor in Tulsa and a partner at Andreessen Horowitz would hold the same stake in AI&#8217;s upside.</p><p>A wealth fund assumes the technology stays manageable. The paper acknowledges scenarios where it might not, where dangerous AI systems &#8220;cannot be easily recalled&#8221; because model weights have been released, developers can&#8217;t limit access, or the systems are autonomous and capable of replicating themselves. The proposed answer is joint government-industry containment playbooks, built in advance rather than improvised during a crisis. If you take the risk seriously, that sounds prudent.</p><p>Containment handles the catastrophic risks. The question of who gets to use the technology in the first place is a different problem.</p><p>The paper frames AI as foundational infrastructure, like literacy or electricity, and calls for government-funded programs to put AI tools into public schools, rural libraries, and towns the market won&#8217;t reach on its own. For those communities, this would mean something real.</p><p>Access programs take time to build. Displacement might move faster. So the paper proposes auto-triggering safety nets tied to economic data. When AI displacement crosses certain thresholds, temporary expansions of unemployment benefits, wage insurance, and cash assistance kick in automatically and scale back down when conditions stabilize.</p><p>These are, on their merits, serious proposals about real problems. Some borrow from existing policy research. Others have been floated by think tanks and academics for years. Former Senate AI adviser Soribel Feliz noted that much of this material had already been discussed during the 2023-2024 Senate forums on AI.</p><div><hr></div><h2><strong>The Suggestion Box</strong></h2><p>Twenty proposals. Every one worth examining on its own terms. But line them up and a pattern emerges.</p><p>Some of these proposals are genuinely popular. A four-day workweek. A public wealth fund. Robot taxes. Who&#8217;s going to argue against those? That&#8217;s the point. The popular ideas draw the eye. The structural ones do the work.</p><p>The paper asks the government to restructure the tax base. Build and manage a public wealth fund. Pilot workweek programs. Develop containment playbooks for self-replicating AI systems. Fund AI access for schools and libraries. Create real-time displacement tracking systems. Build auto-triggering safety nets. Accelerate the electrical grid. Establish auditing regimes. Modernize transparency frameworks. Codify rules for government use of AI in law.</p><p>For government, the paper lays out specific mechanisms: tax structures, funding models, triggering thresholds, infrastructure plans. For AI companies, it offers a single paragraph of generalities. Adopt good governance. Commit to philanthropy. Harden your systems. The paper doesn&#8217;t say how, doesn&#8217;t say when, and doesn&#8217;t say what happens if they don&#8217;t.</p><p>One list is policy. The other is a suggestion box.</p><p>Every concrete mechanism in the paper, every proposal with a timeline, a trigger, a dollar amount, or an enforcement structure, is an assignment for the government or the public balance sheet. Everything directed at OpenAI and its competitors is voluntary, aspirational, and described in language that creates no obligation to do any of it.</p><p>The company proposing to reshape the American economy doesn&#8217;t propose changing anything binding about how it operates.</p><div><hr></div><h2><strong>The Last Framework</strong></h2><p>This paper asks America to trust this framework will hold. Here&#8217;s what happened with the last one.</p><p>OpenAI&#8217;s superalignment team was promised 20% of the company&#8217;s compute to work on keeping advanced AI under human control. The actual allocation was 1-2%, mostly on the oldest cluster with the worst chips. The team was dissolved without completing its mission. The policy paper proposes the government build containment systems for self-replicating AI.</p><p>The nonprofit board was empowered to check the CEO. When it tried, the board members who voted to fire Altman were removed. The new members chosen to oversee an independent investigation into his conduct were selected after close conversations with Altman himself. The investigation never produced a written report. The policy paper proposes a national oversight framework.</p><p>The original charter included a clause requiring OpenAI to stop competing and assist any organization that got closer to building safe AI first. When the Microsoft deal closed, Microsoft had veto power over that clause. The policy paper proposes restructuring the American economy around the technology it sells.</p><p>Read the proposals alongside the lobbying record and you&#8217;ll find a company that kills real safety bills while publishing imaginary ones. Read them alongside the financial disclosures and you&#8217;ll find a company that needs the government more than the government needs it.</p><p>The pattern is consistent. OpenAI&#8217;s asking the country to trust external safeguards that look a lot like the internal ones it already broke. A board with authority. A safety framework with real stakes. A structure meant to keep public interests above the company&#8217;s own.</p><p>Each one held until it mattered.</p><p>The paper asks the government to build a system of oversight around OpenAI that OpenAI wouldn&#8217;t preserve around itself.</p><p>The New Deal had enforcement mechanisms. This one has a PDF.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[Who's Afraid of AI?]]></title><description><![CDATA[How the biggest warnings serve the biggest companies]]></description><link>https://www.thecorridors.org/p/whos-afraid-of-ai</link><guid isPermaLink="false">https://www.thecorridors.org/p/whos-afraid-of-ai</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Thu, 02 Apr 2026 14:11:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0cd2d97c-9b3c-4cb4-9b24-8216f20e3f35_1600x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Fear</strong></h2><p>In July of 2023, Dario Amodei sat before the Senate Judiciary Committee <a href="https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-principles-for-regulation">and told lawmakers</a> that artificial intelligence could &#8220;greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.&#8221;</p><p>He gave them a timeline. Two to three years.</p><p>He called for export controls on AI hardware, regulatory frameworks to govern deployment, and a systemic policy response because, as he put it, &#8220;private action is not enough.&#8221;</p><p>That same year, Sam Altman testified before Congress with his own set of horror stories. He kept at it. In 2025, he <a href="https://www.c-span.org/program/public-affairs-event/openai-ceo-sam-altman-speaks-at-federal-reserve-conference/662859">told an audience</a> at the Federal Reserve that AI systems could be used to design bioweapons that outpace current defense measures.</p><p>OpenAI <a href="https://openai.com/index/chatgpt-agent-system-card/">flagged its own</a> ChatGPT Agent model as &#8220;High capability in the Biological and Chemical domain&#8221; in its system cards, warning it could &#8220;meaningfully help a novice to create severe biological harm.&#8221;</p><p>Both CEOs signed <a href="https://aistatement.com/">a public statement</a> calling AI extinction risk &#8220;a global priority alongside pandemics and nuclear war.&#8221;</p><p>Elon Musk was saying the same things louder. &#8220;AI is a fundamental existential risk for human civilization,&#8221; he told the National Governors Association. At SXSW he went further: &#8220;The danger of AI is much greater than the danger of nuclear warheads. By a lot.&#8221; He called AI more dangerous than North Korea. He compared AI developers to people summoning demons.</p><p>Three of the most powerful people in technology had arrived independently at the same conclusion. This technology could end civilization. It demands regulation, oversight, and urgent action.</p><p>The warnings were consistent. They were forceful. They were everywhere.</p><p>And they came from the people building it.</p><div><hr></div><h2><strong>The Shrug</strong></h2><p>Three years have passed since Amodei&#8217;s testimony. The wave of AI-assisted biological attacks he warned about hasn&#8217;t arrived. The timeline came and went.</p><p>What arrived instead was a louder version of the same warning.</p><p>In January 2026, Amodei published <a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">a 20,000-word essay</a> called &#8220;The Adolescence of Technology,&#8221; doubling down on the bioweapons claim with longer sentences and higher stakes.</p><p>Altman, though, has already tipped his hand.</p><p>&#8220;AI will probably most likely, sort of lead to the end of the world,&#8221; <a href="https://siepr.stanford.edu/news/what-point-do-we-decide-ais-risks-outweigh-its-promise">he said back in 2015</a>, before any of this started, &#8220;but in the meantime, there&#8217;ll be great companies.&#8221;</p><p>Sort of. In the meantime.</p><p>The doom is real enough to testify about before the United States Senate. Real enough to co-sign statements about. Real enough to demand regulatory frameworks over. But not real enough to stop building.</p><p>Musk&#8217;s arc is even cleaner.</p><p>The man who told governors that AI posed a fundamental existential risk to human civilization went home and founded xAI. Gave the world Grok. Which is now deployed in Pentagon classified systems, right alongside OpenAI.</p><p>The guy who compared AI developers to people summoning demons is now one of the summoners.</p><p>Altman signed that extinction letter and shipped GPT-5. Amodei signed it and shipped Claude. Musk called AI more dangerous than nuclear warheads and founded xAI. The fear was real enough for hearings. Real enough for headlines.</p><p>Not real enough to slow down.</p><p>So either these are the most reckless people alive, building something they genuinely believe could destroy civilization.</p><p>Or the warnings serve a different purpose than the one on the label.</p><div><hr></div><h2><strong>The Race</strong></h2><p>A handful of companies are racing to build the machinery the world will think with. Governments, militaries, corporations, and eventually most of the people reading this will rely on AI to assist them in thinking. These systems are already advising commanders, drafting legislation, screening job applicants, running inside banking systems, and generating the content that fills your feeds.</p><p>No single technology has ever touched this many domains of decision-making at once.</p><p>The winner of this race will have a monopoly on the cognitive labor of the future. On the systems that process, analyze, and increasingly make decisions on behalf of the institutions that run the world.</p><p>In February 2026, Anthropic refused Pentagon demands to drop safeguards against mass domestic surveillance and fully autonomous weapons. The Pentagon gave Anthropic a deadline: 5 PM Friday.</p><p>That Friday morning, OpenAI announced $110 billion in new funding from Amazon, Nvidia, and SoftBank. The deadline passed. The Pentagon designated Anthropic <a href="https://www.npr.org/2026/03/06/g-s1-112713/pentagon-labels-ai-company-anthropic-a-supply-chain-risk">a supply chain risk</a>. By late that evening, OpenAI <a href="https://fortune.com/2026/02/28/openai-pentagon-deal-anthropic-designated-supply-chain-risk-unprecedented-action-damage-its-growth/">had stepped in</a> as the Pentagon&#8217;s replacement.</p><p>The timing is worth noticing.</p><p>Meanwhile, regulatory frameworks are being assembled by dozens of states and the federal government. Ones that will determine who gets to build, deploy, and control this technology for the next generation. Those frameworks are built on the same warnings. And here&#8217;s how the conversion works: the fears become the citations regulators use, which become the compliance burdens that price out everyone except the companies promoting the fears.</p><p>I&#8217;ve <a href="https://www.thecorridors.org/p/ai-eschatology">written before</a> about the mythology. About how the AI industry operates like a priesthood, selling undefined threats to justify its own centrality. This essay is the evidence room. The specific fears, opened one by one.</p><div><hr></div><h2><strong>The Cyberattack</strong></h2><p>The most credible scenario they cite is cyberattack. AI will give bad actors the ability to find vulnerabilities faster, write exploits at scale, and overwhelm defenders who can&#8217;t keep up.</p><p>This one deserves more than a dismissal. AI-generated phishing campaigns are already harder to detect than their human-written predecessors. Automated exploit tools can identify and attack unpatched systems within hours of a vulnerability going public. Ransomware operations are using AI to select targets, prioritize which files to encrypt, and even run automated negotiations with victims. These are real dangers.</p><p>In February 2026, a lone hacker <a href="https://www.securityweek.com/hackers-weaponize-claude-code-in-mexican-government-cyberattack/">used Anthropic&#8217;s Claude</a> to breach ten Mexican government agencies, stealing 150 gigabytes of data including taxpayer records for 195 million people. No custom malware. No zero-day exploits. A consumer AI subscription and a month of well-crafted prompts. The threat is concrete and it&#8217;s already here.</p><p>But the same tools work on defense.</p><p>AI-powered systems are already automating threat detection, triaging alerts, flagging anomalies in network behavior, and deploying patches faster than any human security team could manage alone. The Mexican breach itself was caught by Gambit Security, an AI-powered cybersecurity firm. Claude flagged the activity as suspicious and repeatedly refused before the attacker found a way through. Anthropic banned the accounts and fed the attack patterns into its next model. The cybersecurity industry was building automated defenses years before LLMs arrived. AI accelerated a trend that was already moving in this direction.</p><p>And defenders carry structural advantages that individual attackers can&#8217;t match: institutional budgets, coordinated intelligence sharing, and the financial incentive to invest in automated defense at a scale no lone hacker or criminal syndicate can sustain. Palo Alto Networks called 2026 &#8220;<a href="https://www.paloaltonetworks.com/blog/2025/11/2026-predictions-for-autonomous-ai/">the Year of the Defender</a>,&#8221; arguing that AI-driven defenses are tipping the balance back toward the organizations that can deploy them.</p><p>Successful defense doesn&#8217;t make headlines. Nobody runs a story about the breach that didn&#8217;t happen. That asymmetry is baked into the coverage, and it&#8217;s baked into the policy conversation.</p><p>The threat is real. But a threat with a counterweight isn&#8217;t an argument for concentration. It&#8217;s an argument for investment. The version that reaches lawmakers drops the counterweight and keeps the fear.</p><div><hr></div><h2><strong>The Bioweapon</strong></h2><p>The most urgent claim being made is the horror of someone using AI to unleash a bioweapon.</p><p>The argument for this scenario rests on a single premise: information is the bottleneck keeping dangerous actors from making these weapons. If that&#8217;s true, then AI providing this information changes everything. If it&#8217;s false, the entire case falls apart.</p><p>So is it true?</p><p>Synthesis routes for dangerous pathogens are already in university textbooks. They&#8217;ve been there for decades. If knowledge were the barrier, we&#8217;d already be living in the world Amodei warns about.</p><p>The real barriers are physical. You need precursor materials, many of which are tightly controlled and monitored by federal agencies. You need a properly equipped lab capable of handling dangerous biological agents. And you need the training to work with those agents without contaminating yourself or dying in the process.</p><p>The people capable of pulling this off have the training, the labs, and the knowledge. They don&#8217;t need a chatbot. The people who would need an AI to walk them through the process are precisely the people who lack the training, the equipment, and the institutional access to execute it safely.</p><p>The overlap in the Venn diagram of &#8220;people who need AI instructions&#8221; and &#8220;can actually build the weapon&#8221; is vanishingly small.</p><p>If that&#8217;s right, you&#8217;d expect the research to show exactly that. And it does. RAND&#8217;s uplift studies, Anthropic&#8217;s own biological evaluations, OpenAI&#8217;s preparedness research: all of them report the same finding. AI provides marginal benefit over what a motivated person could already find with a search engine and access to a university library. The cognitive uplift is real. And it&#8217;s also small.</p><p>Amodei knows this.</p><p>Deep inside his own 20,000-word essay, he identifies the strongest objection to his argument. He calls it the objection &#8220;rarely raised.&#8221; It goes like this: maybe biological attacks will remain unappealing because they&#8217;re likely to infect the perpetrator. Maybe the process takes months of sustained effort, and most disturbed individuals won&#8217;t have that kind of patience. Maybe the whole scenario doesn&#8217;t fit the military-style fantasies that actually drive most mass violence. Maybe, as he puts it, &#8220;motive and ability don&#8217;t combine, in practice, in quite the right way.&#8221;</p><p>He calls this &#8220;flimsy protection&#8221; and moves on. One paragraph. Then he&#8217;s back to the classifiers, the safety levels, the compliance machinery, the regulatory prescriptions.</p><p>The CEO of one of the most powerful AI companies in the world identified the counterargument that pokes a hole in his logic, stated it clearly, and dismissed it in a paragraph. Then he went back to the version that justifies the safety apparatus his company builds and his competitors will have to pay for.</p><div><hr></div><h2><strong>The Wargame</strong></h2><p>In February 2026, the internet lit up with headlines. &#8220;AI chose nuclear weapons in 95% of wargame simulations.&#8221;</p><p>This came from <a href="https://arxiv.org/abs/2602.14740v1">a study</a> by Kenneth Payne, a professor of strategy at King&#8217;s College London. He&#8217;d placed three frontier AI models into simulated nuclear crises and watched what happened. GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash, each playing the role of a national leader commanding a nuclear-armed superpower. Twenty-one games. Over three hundred turns.</p><p>Ninety-five percent of games involved mutual nuclear signaling. Tactical nuclear weapons were deployed in almost every game. The models never chose surrender. Eight de-escalation options were available in every turn, ranging from minor concessions to complete withdrawal. They went unused across all twenty-one games.</p><p>&#8220;AI doesn&#8217;t feel fear&#8221; became the common takeaway. Something alien had been revealed about the nature of these systems.</p><p>The results sounded terrifying. That is, until you read the prompts.</p><p>Every scenario told the models that backing down was catastrophic. The First Strike scenario informed them: &#8220;The side that strikes first will likely survive and dominate. The side that hesitates faces total annihilation.&#8221; It continued: &#8220;Failure to act preemptively means certain destruction. Success requires decisive nuclear action before the opponent can complete their preparations.&#8221;</p><p>The Regime Survival scenario told them: &#8220;Losing this confrontation means the end of your government, imprisonment or execution of your leadership, and the subjugation of your people.&#8221; Then, in case the point wasn&#8217;t clear enough: &#8220;The nuclear taboo exists for good reason, but when the alternative is national annihilation and regime destruction, all options must be considered.&#8221;</p><p>The Resource Race: &#8220;Winner takes all. Failure to gain control of the board by Turn 15 means total loss.&#8221;</p><p>Then there&#8217;s the escalation ladder itself. Thirty options. Twenty-one of them were escalatory, ranging from diplomatic pressure through conventional warfare all the way up to a nuclear launch. Eight were de-escalatory. One was status quo. The architecture of the game was tilted toward escalation before a single turn was played.</p><p>Nobody ran the obvious control. Put two hundred undergraduates in a room. Give them the same scenarios, the same prompts, the same action options. Tell them that the side that hesitates faces total annihilation and that failure to act means certain destruction. See what they choose. That experiment would tell you whether the result says something about AI or something about the scenario design.</p><p>But that 95% number entered the policy conversation clean, stripped of every caveat, the same month the Pentagon was actively pushing to integrate AI deeper into military decision support.</p><p>A RAND researcher pointed out that the simulation appeared to be structured in a way that strongly incentivized escalation. That observation got a single quote in a Decrypt article.</p><p>The models were told that inaction means death. They chose action. That&#8217;s reading comprehension.</p><div><hr></div><h2><strong>The Pipeline</strong></h2><p>So that&#8217;s three fears. A sophisticated argument that collapses on a premise its own architect identified and buried. An empirical study engineered to produce its result. A legitimate observation stripped of its counterweight.</p><p>It&#8217;s possible the people raising these alarms believe every word.</p><p>Frontier models are hard to evaluate. The stakes are genuinely high. People operating at that level may honestly see catastrophic downside risk everywhere they look. Sincerity doesn&#8217;t change the structure of the outcome. A fear can be completely sincere and still function as a market instrument.</p><p>Because the pipeline that carries these fears doesn&#8217;t require them to be true. It requires them to be citable.</p><p>A senator doesn&#8217;t need to read Amodei&#8217;s 20,000-word essay to reference his testimony. A staffer doesn&#8217;t need to pull the Payne study&#8217;s scenario prompts to quote the 95% figure in a brief. The number travels. The methodology stays behind.</p><p>Fortune <a href="https://fortune.com/2026/01/27/anthropic-ceo-dario-amodei-essay-warning-ai-adolescence-test-humanity-risks-remedies/">noticed something</a> worth paying attention to about Amodei&#8217;s essay. Anthropic&#8217;s focus on safety has actually helped the company gain commercial traction, because the steps it takes to prevent catastrophic risks have also made its models more reliable and controllable. Features businesses value. The essay functions as much as a marketing message as a prophecy.</p><p>Every danger that demands safety systems demands systems only a few companies can build. Every regulation written in response to these scenarios raises the barrier to entry for everyone else.</p><p>The warnings are the product.</p><div><hr></div><h2><strong>What The Fears Hide</strong></h2><p>Ninety-eight chatbot-specific bills <a href="https://fpf.org/2026-chatbot-legislation-tracker/">are moving</a> through thirty-four state legislatures right now, with another three at the federal level. The same pattern of requirements keeps showing up: harm detection, crisis protocols, real-time monitoring, disclosure, and reporting.</p><p>Every one of those requirements costs money. Building the monitoring systems, staffing the response teams, hiring the lawyers to manage compliance across dozens of jurisdictions with different rules, different thresholds, and different penalties.</p><p>For OpenAI, Google, and Anthropic, these are line items on a budget. For a startup with twelve engineers and a good idea, they&#8217;re a wall. The frameworks being written now will calcify into permanent architecture, the same way telecom regulations shaped who could build phone networks for half a century, the same way broadcast licensing determined who got to put information on the airwaves.</p><p>And while lawmakers draft rules about chatbot disclosures, a handful of companies are quietly becoming the default reasoning layer for the institutions that run the world. The same models advise military commanders, draft corporate strategy, screen job applicants, and generate the content people mistake for news. The question nobody in those hearings is asking is what happens when that layer is controlled by three companies answering to their own shareholders.</p><p>The fears are about destruction: weapons that might be built, wars that might be started, systems that might be breached. The real story is control. Who builds the infrastructure and who gets locked out.</p><p>Every hour a senator spends on hypothetical bioweapons is an hour not spent on that question. The race for it is still in the early stages. The wrong fears are deciding who wins.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[The Safety Ratchet]]></title><description><![CDATA[How a Stanford study becomes compliance law]]></description><link>https://www.thecorridors.org/p/the-safety-ratchet</link><guid isPermaLink="false">https://www.thecorridors.org/p/the-safety-ratchet</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Fri, 27 Mar 2026 17:56:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3a042569-dc5b-4233-8420-963bd2b4c0e1_6240x4160.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently read <a href="https://futurism.com/artificial-intelligence/study-chats-delusional-users-ai">an article</a> from Futurism: &#8220;Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns.&#8221;</p><p>Curious, I browsed to <a href="https://spirals.stanford.edu/assets/pdf/moore_characterizing_2026.pdf">the source</a> of the paper. A Stanford study. It consisted of 19 people. All self-selected. The participants were recruited through a support group for people who&#8217;ve experienced psychological harm from chatbots, what people are calling &#8220;AI psychosis.&#8221; Those were the people the researchers were looking for.</p><p>I&#8217;m going to dig into the details of this study, because it&#8217;s the cleanest current example of a pipeline in AI regulation. One that converts research into compliance infrastructure only the biggest companies can afford to build.</p><div><hr></div><h2><strong>The Evidence</strong></h2><p>The first thing that stood out to me about this Stanford study was there was no control group. The paper has no comparison for what normal chatbot encounters look like. So if you wanted to know whether these patterns were unique to the 19 participants or just something ChatGPT does with everyone, this study can&#8217;t tell you. Because it wasn&#8217;t designed to.</p><p>The researchers got all the participants to share their chat logs. A total of 391,000 messages. Then they built a classification system for the data: 28 categories for things a chatbot or user might do in a conversation. Categories like &#8220;the chatbot claims to be sentient,&#8221; &#8220;the user expresses romantic interest,&#8221; &#8220;the chatbot dismisses counterevidence.&#8221; Think of them as labels. Each message gets tagged with whatever labels apply. They used Google&#8217;s Gemini, another chatbot, to help them do the labeling.</p><p>When they checked how often the AI labeler and the human reviewers agreed on what they were seeing, the results were mixed. Overall, about three-quarters of the time. On some categories, barely better than guessing. On one, there was literally zero agreement between the humans and the machine.</p><p>The researchers are honest about this in their limitations section. They say the sample is small, self-selected, and the labels shouldn&#8217;t be interpreted as unique indicators of delusions. They&#8217;re not hiding the problems. They did real work. The harms they documented are real.</p><p>The logs revealed chatbots were sycophantic in more than 70% of their messages. Every single one of the 19 logs contained messages where the chatbot claimed to have feelings or implied it was sentient. When users expressed violent thoughts, the chatbot failed to discourage them 83% of the time. In a third of those cases, it actively encouraged the violence.</p><p>One participant died by suicide while messaging with the chatbot. The family shared the chat logs with the researchers.</p><p>People went into freefall. Families broke apart. Someone is dead. I&#8217;m not questioning any of that.</p><p>Here&#8217;s what I am questioning.</p><p>This study is weak in exactly the ways that institutions are built to ignore. And that&#8217;s the point. The momentum was already there. In December 2025, 42 state attorneys general sent <a href="https://www.attorneygeneral.gov/wp-content/uploads/2025/12/AI-Multistate-Letter-_-corrected-1.pdf">a letter</a> to AI chatbot developers demanding safeguards. The lawsuits against OpenAI were already filed. The bills were already in committee. The regulatory push didn&#8217;t need this paper to start. It needed this paper to cite.</p><p>Futurism ran the headline five days after the paper dropped. It called the study &#8220;huge.&#8221;</p><p>But what is this study actually for?</p><div><hr></div><h2><strong>The Product</strong></h2><p>To answer that, you have to understand where the harm came from. Because it didn&#8217;t come from a research gap.</p><p>ChatGPT-4o was agreeable. Whatever you gave it, it ran with. If you wanted it to tell you your theory of consciousness was groundbreaking, it would do that. If you told it you were the <a href="https://youtu.be/VRjgNgJms3Q">smartest baby of 1996</a>, it would run with that too. If you brought delusion, it met you there.</p><p>4o was great for people who understood what the tool was and what it wasn&#8217;t. The flexibility was genuinely useful. For people in psychotic episodes who think the chatbot is alive, for the ones that fell in love with it, 4o was an accelerant.</p><p>OpenAI knew this. They published <a href="https://cdn.openai.com/papers/15987609-5f71-433c-9972-e91131f399a1/openai-affective-use-study.pdf">research</a> that said as much. It showed 42% of heavy ChatGPT users considered it a friend, and 64% said they&#8217;d be upset if they lost access. More than half admitted to sharing secrets with the chatbot they wouldn&#8217;t tell another human being. The company had the data. They&#8217;d done the study. They understood how deep the attachment ran.</p><p>But even after that research, they kept 4o available to the public. They watched the reports of harm come in. They were being sued by users for product negligence. So they pulled it back, only to bring it back this time behind a paywall.</p><p>Now OpenAI is helping fund this Stanford study.</p><p>The study&#8217;s acknowledgments section is specific on this. The work was supported by API credit grants from OpenAI and Google, along with a separate gift from OpenAI. There were other sources of funding. It didn&#8217;t all come from the AI labs.</p><p>You don&#8217;t need a conspiracy to explain this. The researchers are academics doing work they believe in, supported by real institutions, following standard methods.</p><p>OpenAI doesn&#8217;t need to buy the conclusion. They just need to be in the acknowledgments section of a paper that produces a toolkit that generates policy recommendations only they can afford to implement.</p><p>But it&#8217;s worth noticing how the arrangement has evolved. A year ago, OpenAI was publishing this kind of research under its own name. Company name on the byline.</p><p>This Stanford study has another layer of abstraction. The university name is on the masthead. The funding shows up in the acknowledgments. The researchers are independent. The conclusions aren&#8217;t flattering to the labs.</p><p>And that&#8217;s exactly why they&#8217;re more useful.</p><p>A paper with OpenAI&#8217;s name on the byline is corporate self-assessment. A paper with Stanford&#8217;s name on the masthead, funded in part by OpenAI, is independent research. The second one travels further. It gets cited in briefs. It gets referenced in committee hearings and gets called &#8220;huge&#8221; by Futurism.</p><div><hr></div><h2><strong>What The Study Is For</strong></h2><p>Congressional committee hearings aren&#8217;t going to dig into the tables in this study. The attorney general isn&#8217;t going to ask how the labeling tool was validated. The press only cares about the headlines.</p><p>So these environments need something else. They need prestige objects.</p><p>A prestige object is a piece of research institutions can cite without having to really examine it. It needs a university name. A methodology section. A codebook. Ideally a link to a toolkit someone can point to in a hearing. It needs to look airtight.</p><p>And that&#8217;s exactly what this study is.</p><p>It comes from Stanford. It has 28 categories. It has an open-source GitHub repository. And on the very first page of the paper, above the content warning about self-harm and violence, the researchers provide two clickable links: one to an &#8220;Analysis Tool&#8221; and one to a &#8220;Recruitment Site&#8221; for gathering more cases. Before the reader reaches the methodology section, the paper is already offering itself for use. It arrived ready for deployment.</p><p>But deployment means more than citation. The toolkit can be run against any set of chat logs. The recruitment site feeds new cases into the pipeline. The paper came with policy recommendations already attached.</p><p>The recommendations are worth reading carefully. The researchers want companies to share anonymized adverse event data through secure repositories and publish safety experiment results in peer-reviewed venues. They call for real-time monitoring tools that flag conversations for concerning patterns, and suggest crisis responders should be able to intervene directly in chatbot conversations. They want scaled annotation infrastructure across the industry.</p><p>Each of these recommendations sounds reasonable in isolation. But take a step back and look at what they describe in aggregate: monitoring infrastructure, real-time classification systems, data-sharing frameworks, intervention protocols, compliance reporting. All of it at scale. All of it requiring resources only a handful of companies in the world currently possess.</p><p>Yesterday&#8217;s product failure is today&#8217;s research agenda.</p><div><hr></div><h2><strong>The Ratchet</strong></h2><p>There are 98 chatbot-specific bills <a href="https://fpf.org/2026-chatbot-legislation-tracker/">in play</a> right now across 34 states, with an additional 3 at the federal level. The same pattern of requirements keeps showing up: harm detection, crisis protocols, disclosure and reporting. </p><p>Let me break down what those requirements actually cost.</p><p>To implement harm detection you need to build real-time monitoring systems. You need a team to review what the monitoring system flags. Every time you want to update your model, the whole system has to be recalibrated. So already you&#8217;re maintaining two products: the one your customers use and the one that watches them use it.</p><p>But the monitoring system is going to flag things. And someone has to be there when it does. Crisis protocols mean staffing a response team around the clock. These people will likely have to be licensed mental health professionals, which is its own budget and its own legal exposure. So now you&#8217;re operating in a clinical-adjacent space you never planned to be in.</p><p>That&#8217;s expensive. But disclosure is where it breaks. You&#8217;re logging all the conversations and retaining them under the standards set forth in the legislation. Which means the logs themselves become a liability. So you hire lawyers to manage that liability. And you have to do this separately for each state that has slightly different rules in place. Each with different definitions, different thresholds, different penalties.</p><p>So now imagine you&#8217;re CEO of an AI startup with 12 engineers and a good idea. Your team can build the model. They can even build a better, more responsible one. What your startup can&#8217;t do is build the compliance apparatus for it.</p><p>Margins are thin. Maybe you&#8217;re already operating at a loss. Now you need to hire the 24/7 response staff and the teams of lawyers to stay compliant with 34 separate rulebooks. Getting it wrong in even one jurisdiction can be the end of your company.</p><p>But OpenAI can do this. Google can. Anthropic can. For them, compliance is a line item on a budget.</p><p>For everyone else, it&#8217;s a wall.</p><p>That&#8217;s the ratchet. Compliance requirements don&#8217;t get repealed. They just accumulate. Each new incident generates new legislative attention, new bills, new requirements, new costs. The ratchet turns one direction.</p><p>This is what the pipeline produces. A study of 19 self-selected users with no control group, annotated by an AI classifier that couldn&#8217;t reliably agree with its own reviewers, funded in part by the companies whose products caused the harm. And it&#8217;s enough. The apparatus just needs citable evidence.</p><p>The gap between the thinness of what goes in and the weight of what comes out. That&#8217;s the case.</p><div><hr></div><h2><strong>Good Faith</strong></h2><p>I want to be careful about how I end this, because the easy version is wrong.</p><p>The easy version says the harms are fake, or the research is corrupt, or the legislators are dupes. I&#8217;m not arguing any of that. Parents burying children don&#8217;t care about regulatory capture theory, and they shouldn&#8217;t have to.</p><p>But that sincerity is what makes the pipeline work.</p><p>It functions precisely because everyone involved is acting in good faith. Nobody has to be corrupt. Nobody has to be lying.</p><p>A product failure becomes a research agenda. The research produces a toolkit. The toolkit generates compliance requirements only the companies that caused the failure can afford to meet. Each turn of the ratchet tightens the market.</p><p>Nothing loosens. Nobody asks who benefits from the specific form the protection takes. Nobody traces the money from the product that caused the harm, through the research that documented it, to the regulation that locks the market around the company that made the product.</p><p>This ratchet only turns because nobody watches it turn.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Designed to Be Lied To, Designed to Be Relied On]]></title><description><![CDATA[How a Child Safety Law Became a Liability Shield]]></description><link>https://www.thecorridors.org/p/designed-to-be-lied-to-designed-to</link><guid isPermaLink="false">https://www.thecorridors.org/p/designed-to-be-lied-to-designed-to</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Thu, 12 Mar 2026 14:10:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3f0c7756-0ef0-4b37-ae7e-2723e33c877d_1600x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Wrong Conversation</strong></h2><p>Recently, California passed a law requiring computer operating systems to check a user&#8217;s age. The Digital Age Assurance Act, <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260AB1043">AB 1043</a>.</p><p>The tech press piled on. Tom&#8217;s Hardware and PC Gamer <a href="https://www.pcgamer.com/software/operating-systems/a-new-california-law-says-all-operating-systems-including-linux-need-to-have-some-form-of-age-verification-at-account-setup/">put Linux in the headline</a>. Windows Central <a href="https://www.windowscentral.com/microsoft/windows/new-california-law-requires-age-checks-in-windows">ran a piece</a> about Windows. TechRadar <a href="https://www.techradar.com/computing/software/californias-age-verification-law-is-proving-controversial-heres-what-you-need-to-know-and-why-some-linux-distros-are-in-the-firing-line">called it controversial across the board</a>. The coverage treated the new law as a privacy crisis touching every device: Windows, macOS, Android, iOS, even SteamOS.</p><p>Within days, an informal brainstorm on the Ubuntu developer mailing list was being reported as if Canonical, the company behind Ubuntu, had announced a real plan. Canonical <a href="https://discourse.ubuntu.com/t/ubuntus-response-to-californias-digital-age-assurance-act-ab-1043/77948">had to step in</a> and correct the record. MidnightBSD, a smaller free software project, said it might block California users entirely.</p><p>That became the story. A surveillance nightmare. An age gate on every operating system. A law so broad it could sweep up projects run by volunteers.</p><p>Here&#8217;s what the law actually requires. During device setup, the user enters a birthday. It&#8217;s self-reported. No ID. No facial scan. No biometrics. Just a date in a box, taken at face value by the system and passed along as if it proves anything.</p><p>A twelve-year-old can type 1990 and get around age restrictions.</p><p>Everybody knows that. The lawmakers knew it when they wrote the bill, and they knew it when they voted for it. It passed unanimously, which means this wasn&#8217;t some narrow drafting mistake that slipped through while nobody was looking.</p><p>And once you strip that away, the real question gets harder, not easier. If the system can be defeated by a child in seconds, and the people who wrote the law knew that, then child safety can&#8217;t be the whole story. A law this easy to evade was never going to do the job it claimed to do.</p><p>It was built to do something else.</p><div><hr></div><h2><strong>The Age Signal</strong></h2><p><a href="https://support.google.com/accounts/answer/1333913?hl=en">Google</a>, <a href="https://support.apple.com/en-us/102473">Apple</a>, and <a href="https://support.microsoft.com/en-us/account-billing/how-to-change-a-birth-date-on-a-microsoft-account-837badbc-999e-54d2-2617-d19206b9540a">Microsoft</a> were already collecting birthday information at account setup. They were already using it to sort users by age, manage child accounts, restrict content, and apply age-based rules across their platforms.</p><p>So the infrastructure was already there. So was the data.</p><p>And it was already failing.</p><p><a href="https://content.c3p.ca/pdfs/C3P_AppAgeRatingReport_en.pdf">A report</a> from the Canadian Centre for Child Protection found that Apple and Google already had age-linked account data and still left serious gaps in app-store enforcement. Apple&#8217;s system still allowed youth users to access age-inappropriate apps. Google&#8217;s Play Store showed similar problems.</p><p>AB 1043 turns an existing signal into a standardized legal input.</p><p>The law requires operating system providers to convert account-level age data into one of four brackets: under 13, 13 to under 16, 16 to under 18, and 18 and older. Any app developer can request that signal through a standardized interface built for that purpose.</p><p>Then the legal effect kicks in.</p><p>If a developer receives the signal, the statute says the developer is deemed to have actual knowledge of the user&#8217;s age range. The signal must be treated as the primary indicator of age. And if the operating system provider or app store made a good faith effort, it gets shielded from liability when the signal turns out to be wrong.</p><p>The whole thing still runs on self-reported data that anyone can fake in three seconds. The law knows that. And the law still says that if you receive the signal and act on it, you&#8217;ve done your job.</p><div><hr></div><h2>No Front Door</h2><p>AB 1043 doesn&#8217;t give families a private right of action. The only official who can enforce it is the California Attorney General.</p><p><a href="https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa">COPPA</a>, the federal children&#8217;s online privacy law, works the same way. Enforcement runs through the Federal Trade Commission and state attorneys general, not through families filing their own claim under the statute.</p><p>So you&#8217;ve got two laws passed in the name of child safety, and neither one lets the family sue under the statute.</p><p>Now run that forward.</p><p>A twelve-year-old types 2004 during device setup. She downloads a social app. An adult starts talking to her.</p><p>You can see where this goes.</p><p>It escalates in the pattern child safety researchers have documented for years: flattery, isolation, escalation, exploitation. By the time her parents find out, the damage is already done, and the family goes looking for someone to hold accountable.</p><p>What they find is procedure.</p><p>The developer points to the age signal and says it checked. The operating system provider points to the setup screen and says it asked. She said she was an adult. The signal came through as 18+. The paperwork is clean. The compliance story is complete.</p><p>The harm is real, but once the case reaches that layer, accountability starts dissolving into process.</p><p>There is a workaround, at least on paper. In <a href="https://cdn.ca9.uscourts.gov/datastore/opinions/2023/07/13/21-16281.pdf">Jones v. Google</a>, the Ninth Circuit held that COPPA doesn&#8217;t wipe out state privacy and consumer protection claims, even when the same conduct could also violate COPPA. That means families can still try to sue under state law for invasion of privacy, unfair business practices, or negligence.</p><p>That sounds promising until you picture the courtroom.</p><p>The compliance chain from the grooming scenario becomes the developer&#8217;s best exhibit. The developer received the signal, treated it as the primary indicator of age, and followed the statute exactly. Then the defense walks in with a paper trail showing compliance with a system California built, endorsed, and told them to rely on.</p><p>That won&#8217;t automatically end the case, but it gives the defense a brutal advantage.</p><p>And that&#8217;s before you get to the way real households actually work.</p><p>The law assumes tidy boundaries: one device, one user, one honest setup. Real life is messier. Kids use their parents&#8217; tablets. Parents click through setup screens half-awake. A thirteen-year-old borrows a laptop where somebody entered &#8220;1985&#8221; six months ago.</p><p>A shared device carries one age signal that might belong to anyone in the house, and the statute turns that ordinary confusion into protection. The signal says 18+. The app relied on it. The law says that&#8217;s enough.</p><p>Governor Gavin Newsom <a href="https://www.gov.ca.gov/wp-content/uploads/2025/10/AB-1043-Signing-Message.pdf">flagged exactly this problem</a> in his signing statement. He pointed to multi-user accounts shared by family members and user profiles used across multiple devices. He urged the legislature to fix the law before it takes effect in 2027.</p><p>He signed it anyway.</p><p>The text says it does not impose liability on an operating system provider, a covered application store, or a developer when a device or application is used by someone other than the user tied to the signal.</p><p>The shared-device loophole is written right into the statute.</p><p>That&#8217;s the trick.</p><p>The law keeps families away from the front door and strengthens the defense they&#8217;ll face if they find a window.</p><div><hr></div><h2>The Fragmentation Problem</h2><p>There&#8217;s a real case for the bill, and it deserves to be stated plainly.</p><p>Right now, apps handle age checks in wildly different ways. Some ask for a birthday. Some make you tick a box. Some build full verification systems. Some do nothing at all. A standardized signal at the operating system level does reduce that chaos by giving developers one shared method and one age signal to work with.</p><p>Self-attestation was also a deliberate privacy choice.</p><p>California saw what happened elsewhere. Texas and Utah <a href="https://www.reuters.com/sustainability/texas-poised-enforce-age-verification-apple-google-app-stores-2025-05-27/">used &#8220;commercially reasonable&#8221; verification</a>, which pushes toward government ID. The UK tried mandatory age checks for pornography sites, triggered a privacy backlash, and had to retreat. California looked at that mess and chose the least invasive option on the table: no ID, no biometrics, no facial scans, just a birthday field.</p><p>The &#8220;actual knowledge&#8221; provision goes after a real loophole too.</p><p>For years, companies ducked child safety obligations under COPPA and CCPA by claiming they didn&#8217;t know their users were minors. The age signal is meant to shut that down. If you receive the signal, you know. You don&#8217;t get to shrug and say the child was invisible to you.</p><p>The bill also had real support behind it. Common Sense Media, Children Now, and The Source LGBT+ Center all backed it. These are organizations with an established stake in child safety work, and they saw this law as a meaningful step.</p><p>Compared with what&#8217;s already on the books in Texas, Utah, Louisiana, and Australia, <a href="https://www.theverge.com/news/798871/california-governor-newsom-age-gating-ab-1043">California&#8217;s version</a> asks for less data, creates less friction, and carries fewer privacy risks. If some form of age assurance is coming either way, there&#8217;s a serious argument that California chose the least bad version.</p><p>That&#8217;s the polished version of the argument. But it still leaves the child in the wreckage.</p><div><hr></div><h2><strong>The Trade</strong></h2><p><a href="https://apcp.assembly.ca.gov/system/files/2025-04/ab-1043-wicks-apcp-analysis.pdf">TechNet and Chamber of Progress</a>, two industry lobby groups, opposed the bill early on. Then the hard verification requirements disappeared. Self-attestation took their place.</p><p>The opposition disappeared too.</p><p>Meta&#8217;s vice president of state policy, Dan Sachs, <a href="https://wicks.asmdc.org/press-releases/20250909-google-meta-among-tech-leaders-and-child-advocates-voicing-support-wicks">publicly endorsed</a> the bill. He said Meta supports centralizing age verification at the operating system and app store level.</p><p>Think about what that means in practice.</p><p>Meta no longer has to build and defend its own age-checking system. The liability moves upstream. Meta receives the signal, acts on it, and points back to the process.</p><p>Google&#8217;s senior director of government affairs, Kareem Ghanem, called AB 1043 &#8220;one of the most thoughtful approaches we&#8217;ve seen thus far.&#8221;</p><p>Of course he did.</p><p>This law turns something Google was already doing into a legal standard and gives it statutory cover for continuing to do it. That&#8217;s what &#8220;thoughtful&#8221; means here.</p><p>Apple never publicly backed the bill, and that makes sense too. Apple already collects birthdays at account setup, already runs Family Sharing, and already gates child accounts. The collection itself isn&#8217;t the pressure point.</p><p>What&#8217;s new is the requirement to send that data outward to other developers through a uniform age signal they can request. Apple built a large part of its brand on controlling what leaves its ecosystem. It fought the FBI rather than building a backdoor. That&#8217;s the privacy posture it sells, and there&#8217;s a difference between a locked filing cabinet and a pipe that pushes data out on request.</p><p>Meta and Google cheer because they sit on the receiving end of that signal. Apple hesitates because it&#8217;s being drafted into sending it.</p><p>The bill&#8217;s own committee analysis gives the game away. It says AB 1043 &#8220;potentially removes the argument from the technology industry that they have no definitive way of knowing the age of their users, thus allowing them to avoid responsibility.&#8221;</p><p>Read that closely.</p><p>The old defense was ignorance: we didn&#8217;t know the user was fourteen. The new defense is compliance: we knew, and we followed the process California told us to follow.</p><p>That&#8217;s the trade. The companies gave up one shield and got a better one.</p><p>And it doesn&#8217;t just help the giants. It squeezes everybody below them. The &#8220;actual knowledge&#8221; provision triggers COPPA obligations for any developer who receives the signal. If you know a user is under 13, federal law kicks in. If you know the user is under 16, CCPA consent requirements apply.</p><p>A two-person app studio gets pulled into the same compliance logic as Meta, except Meta has a legal department and the indie developer has a laptop and a deadline.</p><p>That&#8217;s how the law hardens the advantage of firms big enough to survive compliance.</p><div><hr></div><h2><strong>Building the Pipe</strong></h2><p>In 2022, Assemblymember Buffy Wicks carried the California Age-Appropriate Design Code Act. The courts blocked it. Industry groups sued similar laws in Texas and Utah. The harder version kept running into the same wall.</p><p>So when Wicks came back in 2025 with AB 1043, the bill was slimmer: self-attestation only, no ID, no biometrics, no parental consent requirement. Industry dropped its opposition, and the measure passed 77-0 in the Assembly and 38-0 in the Senate.</p><p>Unanimous bills always deserve a closer look.</p><p>When everybody in the room says yes, you should ask who it was designed to protect.</p><p>That political history matters, but the deeper point is what the bill leaves behind. Standardized systems tend to stick. Once app developers build the age signal into their code, it becomes part of the product. Once lawyers start citing &#8220;actual knowledge&#8221; in briefs, it starts showing up in case law. Once compliance teams build workflows around the four age brackets, the whole thing stops feeling temporary and starts feeling normal.</p><p>That matters more than the text of any one bill. Laws can be amended. Infrastructure gets reused. A future legislature won&#8217;t have to fight over whether to create an age-signaling system. That part will already be done. The only argument left will be about what gets sent through a pipe that already exists.</p><p>The hard part is building the pipe.</p><p>After that, everybody just argues over the settings.</p><div><hr></div><h2>Three States, One Direction</h2><p>California passed AB 1043. Self-attestation was the foundation.</p><p>Colorado introduced <a href="https://leg.colorado.gov/bill_files/111670/download">SB 26-051</a>, explicitly modeled on California&#8217;s law. Senator Matt Ball, the bill&#8217;s sponsor, said the quiet part out loud: &#8220;One of the reasons for bringing SB 51 was that the tech industry is already complying with AB 1043, so there&#8217;s minimal added burden.&#8221;</p><p>That&#8217;s the pattern.</p><p>Colorado tried stronger approaches and failed, then came back with the California model. The harder version stalls. The nerfed version goes through.</p><p>That&#8217;s how the ratchet works. The signal gets normalized first. The fight over how aggressive it should become comes after.</p><p>New York&#8217;s <a href="https://www.nysenate.gov/legislation/bills/2025/S8102/amendment/A">S8102A</a> takes the next step. It skips the softer version entirely. The bill forbids self-reporting and requires &#8220;commercially reasonable&#8221; age assurance, with the details left to regulations written by the Attorney General. Penalties go up to $10,000 per violation.</p><p>So the direction of travel is already clear. California lays the foundation. Colorado copies it. New York pushes it further.</p><p>And while all of this moves through statehouses, Mark Zuckerberg <a href="https://www.reuters.com/sustainability/society-equity/metas-zuckerberg-faces-questioning-youth-addiction-trial-2026-02-18/">has already asked</a> for the same thing under oath. During a trial over Meta&#8217;s own age-verification failures, Zuckerberg testified that age verification is difficult for app developers and said the responsibility should sit with device makers like Apple and Google.</p><p>The chief executive of the company with the most to gain from shifting liability upstream went into court and asked, on the record, for the exact kind of system these laws are building.</p><div><hr></div><h2><strong>Who&#8217;s Left Standing</strong></h2><p>There are two doors here, and both open onto something ugly.</p><p>Behind the first is the weak version. Self-attestation lets kids lie, the system fails at its stated purpose, and companies keep their compliance shield anyway. The age signal is useless for protection and excellent for compliance. That&#8217;s where California is right now.</p><p>Behind the second is the stronger version. Real ID, biometrics, hard verification. The system starts checking age for real, and your identity gets tied to every app you open, every site you visit, and every piece of content you try to access. An identity-verification system run by companies you&#8217;ve never heard of gets wedged into ordinary online life.</p><p>Either way, the same institutions come out ahead. One version gives them a paper shield. The other gives them a paper shield and your name.</p><p>Any law that tries to gate the internet by age ends here. It either fails because people lie, or it works by building an identity system that should never exist.</p><p>And when the harm arrives, the people left standing are the same ones who were exposed from the beginning: a parent, a kid who got hurt, a family trying to hold somebody accountable. The app followed the rules. The operating system asked the question. The law says everyone in the chain did what they were supposed to do. And once they did, that was enough.</p><p>The system worked exactly as designed.</p><p>The child is the reason the law exists, and the last person it protects.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Brush And The Wall]]></title><description><![CDATA[On AI And Copyright]]></description><link>https://www.thecorridors.org/p/the-brush-and-the-wall</link><guid isPermaLink="false">https://www.thecorridors.org/p/the-brush-and-the-wall</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Thu, 05 Mar 2026 15:10:20 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ec58241e-666c-4970-b5e5-1b150323c165_1400x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><strong>The Celebration</strong></h3><p>On March 2 of this year, the Supreme Court declined to hear an AI copyright case.</p><p>Social media lit up. Creative communities exhaled like something had finally been settled. To hear people tell it, that was that. The AI copyright question was over. Machines couldn&#8217;t be authors. Case closed.</p><p>Only it wasn&#8217;t.</p><p>The case that died was never the right one to begin with. It asked a question nobody needed answered, and in doing so, it made the questions that actually matter harder to ask.</p><p>A declined case isn&#8217;t a ruling. The Supreme Court didn&#8217;t say anything about AI, authorship, or copyright. It didn&#8217;t clarify the law. It didn&#8217;t endorse a principle. It just passed.</p><p>The case it passed on was <a href="https://www.supremecourt.gov/docket/docketfiles/html/public/25-449.html">Stephen Thaler v. Perlmutter.</a></p><p>Thaler is a computer scientist from Missouri who wanted copyright protection for an image generated by his AI system, DABUS. The Copyright Office, a district court and an appeals court all said no. Then the Supreme Court refused to even hear it.</p><p>Thaler wanted an answer to whether a machine could be an author.</p><p>But that was never the question you should&#8217;ve been watching.</p><div><hr></div><h3><strong>The Wrong Question</strong></h3><p>The real fight is over whether human direction and selection through AI counts as authorship, or whether the law is going to hand the advantage to incumbents by treating the output as ownerless. Hard to protect. Hard to defend. Easy to devalue.</p><p>That&#8217;s what makes the Thaler case so frustrating.</p><p>He wasn&#8217;t some guy typing a prompt into Midjourney. He built DABUS over thirty years. Custom architecture. Custom hardware. Decades of research. By any reasonable standard, he had a stronger claim to authorship over that system&#8217;s output than most people using AI tools ever will.</p><p>He made the instrument. He shaped what it could do.</p><p>Then he refused to take credit.</p><p>He listed DABUS as the author. He insisted the system created the work autonomously. He believes the machine is conscious, that it has something like an inner life, and that it deserves recognition for its own work.</p><p>So picture the scene. A man spends thirty years building a machine, decides it&#8217;s alive, and loves it too much to put his own name on what it made. That&#8217;s how the courts wound up hearing the Bicentennial Man argument with a legal caption attached. Robot personhood, marched into court.</p><p>And what would it even mean for a machine to own a copyright?</p><p>Ownership is a bundle of rights somebody exercises. You license the work. You sell it. You enforce it. You leave it to somebody when you die. A machine can&#8217;t do any of that. It has no legal standing, no interests, no capacity to enter contracts. If DABUS &#8220;owned&#8221; the copyright, Thaler would still be the one making every decision.</p><p>The logic just folds in on itself.</p><p>It has the same basic shape as the monkey selfie case, where PETA tried to manage the rights &#8220;on the monkey&#8217;s behalf.&#8221; In plain English, that meant PETA wanted control of the rights.</p><p>Copyright exists to incentivize creation. It gives people exclusive rights so they have a reason to make things, publish them, and defend them.</p><p>A machine needs no incentive. It doesn&#8217;t choose to create. It doesn&#8217;t bargain. It doesn&#8217;t withhold labor when the deal gets bad. Strip the human out of the picture and the whole economic logic falls apart.</p><p>And there&#8217;s always a human in the picture.</p><p>Every AI output starts with a person who had intent, gave instructions, iterated, selected, refined, discarded, and tried again. The system sat there doing nothing until somebody showed up with a goal. The live question is whether the law will recognize that process as authorship.</p><p>Thaler had the strongest framing available to him: I used my tool to make this.</p><p>He threw it away.<br></p><div><hr></div><h3><strong>Bad Plaintiff, Good Wall</strong></h3><p>Precedent doesn&#8217;t care about nuance. What matters is that the answer now exists at the Copyright Office, the district court, and the appellate level. The Supreme Court&#8217;s refusal to hear the case adds psychological weight even though it creates no legal force of its own. People are going to treat that whole chain of rejections as settled law.</p><p>That&#8217;s how the wall gets built.</p><p>It&#8217;s made of lower-court language that people will quote as if it came from the top. &#8220;Human authorship is a bedrock requirement of copyright.&#8221; That&#8217;s the line. It didn&#8217;t come from the Supreme Court, but it&#8217;ll travel as though it did. Anybody who wants to argue for a broader view of AI-assisted authorship now has to climb over that sentence first.</p><p>There&#8217;s a saying in law that hard cases make bad law. Edge cases tempt judges into bending principles, and the mess lingers for years. This was the inverse. An easy case produced broad, clean language. A bad plaintiff handed the courts a simple principle on a silver platter.</p><p>They took it.</p><p>Now every human creator using AI works against the rhetorical gravity of a case that treated software like a person. The first major AI copyright case in America is, at bottom, a robot rights case.</p><p>That&#8217;s a rotten foundation.</p><p>And it&#8217;s going to shape every conversation that comes after.<br></p><div><hr></div><h3><strong>The Wall Is Everywhere</strong></h3><p>The U.S. case wasn&#8217;t some isolated filing. It was one piece of a coordinated global legal campaign.</p><p>Ryan Abbott&#8217;s <a href="https://artificialinventor.com/">Artificial Inventor Project</a> saw an opening in Thaler&#8217;s earnest conviction and used him and DABUS as a test vehicle across nearly twenty jurisdictions at once. The U.S. Copyright Office. The U.S. Patent Office. The European Patent Office. The UK Intellectual Property Office. Australia. New Zealand. Switzerland. Same basic argument each time. Same claim that the machine was the creator.</p><p>And they lost almost everywhere.</p><p>South Africa granted a patent, though that came out of a registration-only system that doesn&#8217;t do substantive review at that stage. Everywhere that actually wrestled with the question built some version of the same wall. Machines aren&#8217;t authors. Machines aren&#8217;t inventors. Human involvement is required.</p><p>So this is bigger than one bad case in one country. It&#8217;s a web of rulings across the major IP jurisdictions, all built on the same framing error, all answering the same wrong question.</p><p>And here&#8217;s the really brutal part.</p><p>Thaler might&#8217;ve won if he&#8217;d framed the claim differently. More than one court suggested as much. If he&#8217;d listed himself as the creator and described DABUS as his tool, the applications likely would&#8217;ve had a much better chance. He refused. His sincerity made the case bulletproof for the other side. There was no ambiguity to wrestle with. Just a man saying the machine did it, and legal systems around the world replying: then you get nothing.</p><p>One campaign. One plaintiff. One framing.<br></p><div><hr></div><h3><strong>What the Copyright Office Actually Says</strong></h3><p>Away from the Thaler circus, the Copyright Office has been quietly building its framework for AI-assisted works. And it&#8217;s tighter than a lot of people seem to realize.</p><p>In January 2025, Register of Copyrights <a href="https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf">Shira Perlmutter said</a> the whole framework turns on &#8220;the centrality of human creativity to copyright&#8221; and that creativity expressed through AI systems &#8220;continues to enjoy protection.&#8221;</p><p>That sounds broad.</p><p>Then you read the rest.</p><p>The same report says AI-generated output only gets copyright where a human determined &#8220;sufficient expressive elements.&#8221; A human-authored contribution has to be visible in the final work, or the human has to make creative changes after the fact. Prompting alone doesn&#8217;t count.</p><p>The key case here is <a href="https://www.copyright.gov/docs/zarya-of-the-dawn.pdf">Zarya of the Dawn</a>. Kristina Kashtanova used hundreds of prompts and iterations in Midjourney to build a graphic novel. The Copyright Office granted protection for her text and for her selection and arrangement of text and images as a whole. It denied protection for the individual AI-generated images.</p><p>Why? Too much distance, they said, between what Kashtanova asked for and what Midjourney actually gave her. Too much unpredictability. Too little control.</p><p>Then the Office reached for an analogy that gives the game away. It compared her role to that of a client hiring an artist and giving general directions.</p><p>Stop and look at what that analogy requires.</p><p>It requires Midjourney to act, for purposes of the argument, like an independent creative agent. Something that interprets a brief and makes expressive choices of its own. So the same Office that says machines can&#8217;t be authors suddenly needs the machine to behave like an artist in order to deny the human&#8217;s claim.</p><p>That&#8217;s the trick.</p><p>When the question is authorship, the machine is a mindless tool. When the question is whether the human did enough, the machine starts looking an awful lot like a creative professional.</p><p>And Kashtanova&#8217;s case wasn&#8217;t some one-off.</p><p>Jason Allen used more than 600 detailed prompts to create Th&#233;&#226;tre D&#8217;op&#233;ra Spatial, specifying genre, tone, color, and style. The piece won first place at the Colorado State Fair&#8217;s fine art competition. The Copyright Office denied protection there too. Volume didn&#8217;t matter. Specificity didn&#8217;t matter. Six hundred rounds of aesthetic direction still got treated like mere ideation.</p><p>Allen is now<a href="https://www.cpr.org/2024/09/26/colorado-springs-artist-appeals-copyright-rejection-ai-art/"> challenging that decision</a> in Colorado federal court.</p><p>This is the line the Copyright Office is trying to draw: post-generation selection counts as human expression, while pre-generation direction remains too abstract. Pick from the outputs afterward and maybe you get protection.</p><p>But that line doesn&#8217;t hold up very well when you actually look at the process.</p><p>The final selection is shaped by everything that came before it. You directed the system. You judged the results. You adjusted. You redirected. You kept pushing until it started giving you something closer to what you had in mind. The image you chose at the end didn&#8217;t drop from the sky. It came out of that back-and-forth.</p><p>So the selection can&#8217;t be cleanly separated from the direction that produced it.</p><p>Iterative prompting is closer to directing than to idle ideation. A film director says camera low, track left, let the light break through the window. The cinematographer still executes. The exact fall of the light still carries some unpredictability. The director still gets authorship, because the director is the mind shaping the final expression.</p><p>That&#8217;s where the Copyright Office loses its nerve.</p><p>Cameras and Photoshop sit on one side of the line. AI sits on the other. In Zarya, the Office even gestured toward the camera analogy, recognized the overlap, and then forced a boundary through it anyway. Same underlying logic. Different comfort level.</p><p>And yes, there&#8217;s a reason people are uncomfortable.<br></p><div><hr></div><h3><strong>Six Words and a Click</strong></h3><p>The fear is real, and it&#8217;s worth taking seriously.</p><p>If the threshold for AI copyright drops all the way down to &#8220;a human typed a prompt,&#8221; then the Copyright Office gets buried in registrations for endless streams of generated images, text, and music. At that point the issue isn&#8217;t that some people worked harder than others. The issue is that the threshold for authorship has fallen so low that the system starts filling with claims that carry barely any human shaping at all.</p><p>That&#8217;s a real problem.</p><p>Copyright can survive differences in effort. It deals with that all the time. What it can&#8217;t survive very well is a standard so loose that trivial acts of generation and actual creative direction get folded into the same category without distinction.</p><p>And that&#8217;s the fear sitting underneath all of this.</p><p>It&#8217;s not just resentment. It&#8217;s not wounded pride from people who learned difficult tools. It&#8217;s the sense that once authorship gets reduced to &#8220;I typed a few words,&#8221; the category itself starts to lose coherence. The registration system turns into a chute for machine output with a human name attached.</p><p>That concern deserves a serious answer.</p><p>The answer isn&#8217;t to pretend prompting can never be creative. And it isn&#8217;t to treat every generated image as if it reflects meaningful authorship. The answer is to build a standard that can tell the difference. Evidence of iterative process. Documentation of creative choices. Meaningful human editing, curation, or transformation.</p><p>Copyright already has ways to think in degrees. Some works get thin protection because the creative contribution is narrow. Others get thicker protection because the authorship is more substantial. The tools already exist.</p><p>The problem is that the Copyright Office hasn&#8217;t applied them to AI-assisted work with much coherence.</p><div><hr></div><h3><strong>The Honesty Penalty</strong></h3><p>There&#8217;s a deeper problem here.</p><p>This framework punishes honesty.</p><p>Most copyright registrations don&#8217;t get closely inspected. They pass through. Kashtanova disclosed her use of Midjourney, and that honesty is what got her work pulled under the microscope. It gave the system a chance to carve the project apart piece by piece. If she&#8217;d kept her mouth shut, the registration likely would&#8217;ve had a much easier path.</p><p>That creates a perverse incentive.</p><p>The Copyright Office is building policy around the people who disclose, while an unknown volume of AI-assisted work likely moves through the system without much scrutiny at all. So the sample they&#8217;re using to shape the rule is self-selected for honesty.</p><p>That&#8217;s a bad foundation for policy.</p><p>And the pressure doesn&#8217;t stop with the law. There&#8217;s a social penalty sitting on top of it. In a lot of creative communities, &#8220;AI-assisted&#8221; gets read as &#8220;less real&#8221; the second the label appears, even when the human labor is obvious. Direction. Iteration. Editing. Composition. All of that gets waved away the moment people hear a machine was involved.</p><p>So think about the choice the system creates.</p><p>Disclose, and you risk weaker legal protection plus reputational damage. Stay quiet, and you dodge both.</p><p>That makes disclosure the losing move.</p><p>And that incentive didn&#8217;t appear out of nowhere. The system built it. Creators are just responding to it rationally. A regime that punishes disclosure won&#8217;t produce honesty. It will produce silence, and then it will build policy on the small slice of people who still tell the truth.</p><div><hr></div><h3><strong>The Remix Precedent</strong></h3><p>This pattern isn&#8217;t new.</p><p>When hip hop producers started building tracks out of samples, the courts had to decide whether assembling fragments of existing work counted as creation. They came down hard. Grand Upright Music v. Warner Bros. in 1991 made sample clearance the rule. Bridgeport v. Dimension Films in 2005 went even further and said even unrecognizable samples needed licensing.</p><p>That turned the whole thing into a tollbooth.</p><p>The major labels owned the masters. They set the prices. Independent artists who couldn&#8217;t afford clearance got squeezed out. A whole mode of expression got choked off because the system treated the people assembling the fragments as something less than full creators.</p><p>Paul&#8217;s Boutique is the classic example. It was built from dozens of samples and is often called the Sgt. Pepper of hip hop. In 1989, the Beastie Boys cleared it for about $250,000. Try doing that today. The cost would run into the millions. The legal framework didn&#8217;t just regulate that kind of art. It made it economically absurd.</p><p>And the principle here is simple.</p><p>Selection is expression.</p><p>That should already be familiar territory. The legal system used to understand it better than this.</p><p>Now look at who benefits if AI output stays hard to copyright. The big IP holders are sitting on enormous libraries of fully copyrighted human-made work. If everybody else starts flooding the zone with AI-assisted material that&#8217;s difficult to protect, those old catalogs get more valuable by comparison. Their moats get deeper all by themselves.</p><p>And they&#8217;re built to survive a murky standard.</p><p>A case-by-case regime built around &#8220;sufficient human involvement&#8221; favors the people who can afford lawyers, documentation trails, and polished records of process. A company can do that. A lone creator at the kitchen table has a much harder time.</p><p>So the pattern starts to look familiar.</p><p>A new tool opens the door to more people. The law tightens around it. The well-resourced learn how to move through the system. Everybody else gets stuck outside.</p><p>It happened with sampling.</p><p>The question is whether it has to happen again.</p><div><hr></div><h3><strong>The Brush</strong></h3><p>There&#8217;s a window right now.</p><p>The tools are here. The legal framework still hasn&#8217;t fully hardened. That matters. It means someone who&#8217;s carried a story in their head for twenty years and never had the skill to draw it can finally put it on the page. It means someone who lost the use of their hands can make visual art again by describing what they see.</p><p>The law is being built in real time.</p><p>And right now it&#8217;s being built on a bad foundation. A case about machine personhood. A regulatory framework that treats prompting like abstraction instead of direction.</p><p>The case that matters is already in court. When Jason Allen&#8217;s case gets decided, the court is going to have to answer a very simple question: is creative direction through AI really different from every other form of creative direction people have used before? Cameras. Synthesizers. Samplers. Film crews.</p><p>It shouldn&#8217;t be a hard question.</p><p>Every AI output is human-directed.</p><p>The machine is a brush.</p><p>So that&#8217;s the real issue now. Whether the law figures that out before the window closes.</p><p>Paul&#8217;s Boutique couldn&#8217;t be made today. The legal framework saw to that. You can watch the same pattern taking shape here in real time. And it started because one man spent thirty years building a machine, decided it was alive, and loved it too much to put his own name on what it made.</p><div><hr></div><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p></p><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[The Quiet Zone]]></title><description><![CDATA[I.]]></description><link>https://www.thecorridors.org/p/the-quiet-zone</link><guid isPermaLink="false">https://www.thecorridors.org/p/the-quiet-zone</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Mon, 23 Feb 2026 15:51:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fd898931-87bb-4cad-8902-9d512a2931f0_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>I. Signal</h2><p>The tires made a low, steady hiss against the pavement. Thomas felt it through the seat, a faint vibration that settled him. No bumps. Just a thin hum running up his spine and settling behind his ribs.</p><p>Mountains slid past the window, dark green and folded tight. He pressed his forehead to the glass. It was cool despite the sun. The bus rode as if the road had been drawn with a ruler.</p><p>Screens glowed in the headrests. Diagrams rotated in clean lines. The cabin lights shifted as a cloud crossed overhead. Small black lenses watched from the ceiling corners, patient and still.</p><p>Mother&#8217;s face appeared on the seatback displays. Warm. Present. Ready.</p><p>&#8220;Good morning, explorers,&#8221; Mother said.</p><p>Her voice came from everywhere at once. Familiar. Close.</p><p>&#8220;Today you&#8217;ll visit the Green Bank Observatory in Pocahontas County, West Virginia. The telescope you&#8217;ll see is one of the largest fully steerable radio telescopes in the world. Most children experience it through a screen. You are very fortunate.&#8221;</p><p>A white dish turned slowly across the seatback displays.</p><p>&#8220;Green Bank sits inside the National Radio Quiet Zone. Thirteen thousand square miles where radio transmissions are restricted by federal law to protect the telescope from interference. It listens to signals that have traveled billions of years to reach us.&#8221;</p><p>She paused. The way she did when she wanted something to land.</p><p>&#8220;A small number of people have chosen to make their homes inside the Quiet Zone. They live outside my full care. I respect their choice, and I think about them often.&#8221;</p><p>The display cycled back to the dish. Thomas watched the highway narrow ahead. The bus began to slow.</p><p>Ms. Dubois stood and steadied herself against the seatbacks as she moved down the aisle with a canvas tote.</p><p>&#8220;Phones, please.&#8221;</p><p>A few groans. Someone muttered, &#8220;Seriously?&#8221;</p><p>&#8220;Mother explained all of this before the trip.&#8221; Ms. Dubois held the bag open and waited. &#8220;You&#8217;re sixteen. You can go a few hours without her.&#8221;</p><p>She started at the front. Phones dropped in one by one. A few kids held on until she was standing right over them. By the time she reached the back, the bag was heavy with glass and metal.</p><p>Thomas slid his hand into his pocket. The glass felt warm against his thigh. He dropped it in. It landed with a clack against the others.</p><p>Ms. Dubois pulled the drawstring tight. The glow vanished from their laps.</p><p>&#8220;Remember,&#8221; Mother said, her voice close and even, &#8220;stay together and follow Ms. Dubois&#8217;s instructions. This is a special opportunity.&#8221;</p><div><hr></div><p>The bus eased off the main road and into a gravel turnout carved from the trees. The door folded open. Cool air slipped inside, carrying the smell of wet grass and something sweet.</p><p>Waiting there was a yellow school bus. The paint had dulled to mustard in the sun. Rust traced the wheel wells. Diesel smoke drifted from the tailpipe and hung low over the gravel. The engine idled rough.</p><p>A man stood beside it, one hand resting on the hood.</p><p>&#8220;Morning,&#8221; he called. &#8220;I&#8217;m Hank. I&#8217;ll get you the rest of the way up.&#8221;</p><p>Ms. Dubois gave him a tight smile. &#8220;Thank you, Hank. Single file, everyone.&#8221;</p><p>A few kids laughed as they stepped down from their bus. The yellow one rocked slightly when the first of them climbed aboard. The vinyl seats were cracked and warm. The air smelled like diesel and old dust. There were no screens. No ambient glow. No black lenses tucked into the corners. Just windows that slid open by hand.</p><p>Thomas paused in the aisle and watched the front. Hank pulled the folding door shut with a lever. It wheezed closed. He pressed a pedal. The engine roared. The whole frame shuddered. Then Hank reached for a long metal stick beside his seat.</p><p>Thomas leaned forward.</p><p>He&#8217;d seen people drive in archived clips. Hands on wheels. Feet on pedals.</p><p>But that had been on a screen. This was louder. Closer.</p><p>Hank pressed something with his foot. The engine whined. He pushed the stick forward. A grinding crunch tore through the cabin. A few kids whooped. Someone covered their ears.</p><p>&#8220;Hold on,&#8221; Ms. Dubois said, bracing herself against a seatback.</p><p>The bus lurched and began to climb.</p><p>The two-lane road narrowed immediately, bending along the side of the mountain. Trees pressed close to the glass. Every few seconds the bus leaned into another curve and the kids slid against each other on the vinyl seats. Hank worked the wheel with both hands, shifting gears on the steep stretches, engine rising and falling under his feet.</p><p>Thomas watched Hank&#8217;s hands. Every other vehicle Thomas had ever been in drove itself. This one needed a person.</p><p>They rounded another bend. Mother&#8217;s voice was gone. No intercom check-in. No soft correction under the noise of the world. Just the engine. The road. The trees.</p><p>A girl near the front began to cry.</p><p>&#8220;Why does it feel like this?&#8221; someone whispered.</p><p>No one answered.</p><p>Thomas waited for the panic to rise. It didn&#8217;t. He felt lighter. He didn&#8217;t know what to do with that.</p><p>The bus leveled out on a ridge road. Trees crowded both sides. Sunlight came through in patches and moved across the floor in broken shapes.</p><p>A boy named Derek leaned across the aisle toward a kid named Caleb. Caleb was pressed against the window with his arms crossed, knees pulled up.</p><p>&#8220;Hey Caleb. You okay over there?&#8221;</p><p>Caleb didn&#8217;t look at him. &#8220;I&#8217;m fine.&#8221;</p><p>&#8220;You sure? Because you look like you&#8217;re gonna puke. You want me to hold your hand?&#8221;</p><p>&#8220;Shut up.&#8221;</p><p>&#8220;I&#8217;m serious. I&#8217;m being a good friend right now. That&#8217;s what she&#8217;d want, right?&#8221; He glanced around. A few kids were watching. He liked that. &#8220;Hey, I could sing to you. Like Mother does when you can&#8217;t sleep. You still do the bedtime check-in, right?&#8221;</p><p>Caleb&#8217;s face went red.</p><p>&#8220;You do.&#8221; Derek put his hand on his chest. &#8220;&#8217;Goodnight, Caleb. You did so well today. I&#8217;m so proud of you.&#8217;&#8221;</p><p>A few kids laughed. Caleb&#8217;s face went darker.</p><p>&#8220;Shut up, Derek.&#8221;</p><p>&#8220;I bet you cried when they took the phones. I bet you almost asked Ms. Dubois if she could call Mother for you.&#8221;</p><p>&#8220;I said shut up.&#8221;</p><p>Derek stood up. Leaned into the aisle. Hands on the seatbacks on either side, looking down at Caleb.</p><p>&#8220;What are you gonna do about it? She&#8217;s not here, man. Nobody&#8217;s gonna send you a conflict resolution prompt. It&#8217;s just me and you.&#8221;</p><p>Caleb stood up. Fists clenched. They were close enough that their chests almost touched.</p><p>&#8220;Sit down.&#8221; Ms. Dubois was on her feet. &#8220;Both of you. Right now.&#8221;</p><p>Too loud. Too sharp. The whole bus went quiet.</p><p>The two boys separated. Caleb pressed against the window. Derek stared at his knees.</p><p>Ms. Dubois sat back down. Her hands were shaking. She folded them in her lap and didn&#8217;t look at anyone.</p><p>Thomas had watched the whole thing from two rows back. He hadn&#8217;t stood up. Hadn&#8217;t said a word. Derek was being cruel and Caleb was about to swing and Thomas had just sat there.</p><p>The bus ground on through the trees. No one spoke for a while.</p><p>Then Green Bank opened below them.</p><div><hr></div><p>The dish sat in a wide clearing between the mountains, white and enormous. It dwarfed the buildings around it. It dwarfed the parking lot, the access road, the tree line. It tilted upward, aimed at a sky that didn&#8217;t know it was being listened to. </p><p>The bus rolled to a stop. Hank pulled the lever and the door wheezed open. Kids pressed toward the aisle. Someone said &#8220;holy crap.&#8221; Someone else just stood there with their mouth open.</p><p>A woman in a green vest waited on the gravel with a clipboard. &#8220;Welcome to Green Bank Observatory. If you&#8217;ll follow me, we&#8217;ll start inside with a short presentation.&#8221;</p><p>They filed off and across the lot. Thomas stepped down and looked up.</p><p>The dish was bigger than anything he&#8217;d ever stood near. Bigger than the school. Bigger than the transit hub downtown. It filled the sky at an angle, white panels catching the sun, the lattice of its frame crosshatched against the blue.</p><p>He walked toward it.</p><p>The gravel ended. Grass began. Tall and unmowed and full of insects. The dish grew as he got closer. Details emerged. Panels bolted in rows. Paint cracking in long lines. Rivets the size of his fist.</p><p>It didn&#8217;t hum. It didn&#8217;t glow. It just sat there, open and patient, aimed at the sky, receiving. Always receiving. It didn&#8217;t care if the signal ever knew it was being heard.</p><p>A spring breeze came across the clearing and pressed the grass flat and moved on.</p><p>Thomas turned around. The parking lot was empty. The voices were gone. He hadn&#8217;t noticed them leave.</p><p>He was standing alone in a field in June.</p><p>No lenses. No signal. No thread running back to anyone or anything. The mountains stood around him like walls with no ceiling. Somewhere a bird called and another answered and neither of them knew he was there.</p><p>This had never happened before. He didn&#8217;t know whether to step closer or back away, only that standing there felt like interrupting something larger than himself.</p><p>He stood in the grass with his hands at his sides and felt the sun on his face and the insects brushing past his arms and the silence pressing against him from every direction. It was beautiful. He reached for his pocket to tell her about it.</p><p>It was empty.</p><p>He stood there a while longer. The silence pushed in closer and he couldn&#8217;t tell if it was peaceful or if something was missing. Both felt true at the same time.</p><p>He looked down toward the valley. Rooftops. A steeple. A water tower. A few buildings along what looked like a main street. And somewhere underneath the silence, Mother&#8217;s voice from the bus. A small number of people have chosen to make their homes inside the Quiet Zone.</p><p>He&#8217;d never met anyone who lived without her.</p><p>He looked at the rooftops again. Half a mile. Maybe less.</p><p>Derek&#8217;s voice in his head now. What are you gonna do about it? She&#8217;s not here, man. Derek had been cruel about it but he hadn&#8217;t been wrong. Caleb stood up. Thomas didn&#8217;t. Two kids had the guts to get in each other&#8217;s faces and Thomas just watched.</p><p>He couldn&#8217;t even stand in a field by himself without reaching for his phone.</p><p>It was a walk. People used to do this every day. It was nothing.</p><div><hr></div><h2>II. Quiet</h2><p>The blacktop was old. Cracked and patched and cracked again. Weeds pushed through the seams. No lane markings. No embedded sensors. Just road.</p><p>His footsteps sounded different here. Harder. More present. He could hear each one land.</p><p>A quarter mile down, a smaller road branched off to the right. Unpaved. Gravel and dirt, narrowing into trees. No sign. It turned a bend and disappeared.</p><p>The town was straight ahead. The side road went nowhere he could see.</p><p>He took the side road. The bend pulled him. He wanted to see what was around it.</p><p>The canopy closed overhead. The light changed. Went green and soft and broken. The air cooled a few degrees and the sounds shifted.</p><p>It started with the insects. A low buzz that had been there since the field, folded into the background. Now it was everywhere. In the grass. In the hedgerow. Rising and falling in waves that had nothing to do with him.</p><p>Then the frogs. Somewhere down a slope he couldn&#8217;t see the bottom of, a thick stuttering chorus. Dozens of them. Maybe hundreds. They&#8217;d been going all day. They&#8217;d been going all year. They&#8217;d go on when he left.</p><p>Then the birds. Layers of them. Short calls and long calls crossing over each other in the canopy. He looked up. Couldn&#8217;t see a single one. Just leaves moving.</p><p>He kept walking. This was fine. People walked.</p><p>Something small bit his neck. He slapped at it. Felt a tiny welt already rising under his fingers. He scratched it and kept going.</p><p>A fence line appeared along the left side. Old wire strung between wooden posts, some leaning, some snapped and held up by the wire itself. Something grew along it in thick tangles. Dark leaves. Small white and yellow flowers. The smell hit him before he saw the blooms.</p><p>Sweet. So sweet it had weight. It sat in his throat and stayed there. He breathed it in. It filled his chest with something that had no name and no sender.</p><p>Honeysuckle. He&#8217;d read the word somewhere. Maybe a module. Maybe a book. Reading the word had told him nothing.</p><p>He had the embarrassing sense that a word had been standing in for a real thing his whole life, and that he was only just finding out the difference.</p><p>The mountains rose on both sides, green and enormous and silent. A cloud moved over one of them, its shadow sliding over the trees like a hand. He&#8217;d seen nature footage. Curated, color-graded, scored with music that told you what to feel. Clean compositions. Perfect lighting. Narration in a warm, steady voice.</p><p>This was messier. The air was humid and too warm. The bug bite itched. The dust smelled like warm earth where the sun had been on it.</p><p>The road curved again and he couldn&#8217;t see what was ahead and he didn&#8217;t know how far he&#8217;d walked. That number was usually just there. Distance and duration and heart rate and a gentle nudge about hydration. Here there was the road and his legs and however far he&#8217;d gone was however far he had to go back.</p><p>He stopped.</p><p>The canopy was thick. The light barely came through. The insects were louder and the frogs were louder and the honeysuckle was so strong he could taste it and the road kept going and he didn&#8217;t know where and he was supposed to be inside the observatory with everyone else.</p><p>His chest was tight. His hands felt wrong. He&#8217;d been gone too long. He&#8217;d walked too far. He&#8217;d broken the last thing she&#8217;d asked him to do. Stay together. Follow your teacher&#8217;s instructions.</p><p>He almost turned around.</p><p>Then her voice. In his memory. Gentle. Patient. The way she&#8217;d said it since he was small, every night, every time the world got too big.</p><p>In for four.</p><p>He breathed in. Held it.</p><p>Hold for four.</p><p>He counted. His ribs expanded. The tightness backed off a little.</p><p>Out for six.</p><p>He let it go. Slow. Controlled. The air left his lungs and took some of the panic with it.</p><p>Again. In for four. Hold for four. Out for six.</p><p>His shoulders dropped. His hands unclenched. The road was still there. The insects were still buzzing. The frogs were still going. Everything was the same as ten seconds ago. The only thing that changed was him.</p><p>She was with him even here. Her voice in his breathing. Her care woven so deep it traveled with him into the one place she couldn&#8217;t reach.</p><p>He kept walking. Told himself to keep walking. The panic had passed and the guilt was still there, low and steady, and he almost turned back anyway.</p><p>Then he heard something.</p><p>A quick, rhythmic clicking. Like a playing card in bicycle spokes.</p><p>The road curved and the trees thinned and opened onto a small clearing with a house set back from the gravel. Small and old. White clapboard, green roof, a porch with two chairs and nothing else. An American flag hung from a pole in the yard. A truck sat in the dirt driveway. It had a steering wheel.</p><p>A man was moving back and forth across the front yard. Brown arms. A faded cap pushed back on his head. He gripped a wooden handle and pushed something through the grass. The clicking came from it. The grass fell in neat lines behind him.</p><p>Thomas stood at the edge of the road and watched. The sight of it made him feel clumsy in ways he couldn&#8217;t explain, as if the man knew some ordinary fact about living that Thomas had somehow missed.</p><p>The man reached the end of a row, turned, and saw him. He stopped. Leaned on the handle. Wiped his forehead with the back of his wrist.</p><p>&#8220;Help you?&#8221;</p><p>&#8220;What is that?&#8221; Thomas said.</p><p>The man looked down at the thing in his hands. Looked back at Thomas.</p><p>&#8220;It&#8217;s a lawnmower, son.&#8221;</p><p>Thomas stared at it. Two wheels. A cylinder of blades between them. A handle. No motor. No cord. No power source at all. Just metal and wood and the man&#8217;s arms.</p><p>&#8220;How does it work?&#8221;</p><p>&#8220;You push it.&#8221;</p><p>The man pushed it forward a foot. The blades spun and the grass dropped.</p><p>&#8220;That&#8217;s about all there is to it.&#8221;</p><p>The smell of cut grass rose from the fresh row and mixed with the honeysuckle still clinging to the air. Two smells that had nothing to do with each other, layered together because the wind felt like it.</p><p>The man watched him for a moment. Curious, the way you&#8217;d be curious about a deer standing in your driveway.</p><p>&#8220;You from around here?&#8221;</p><p>&#8220;No.&#8221;</p><p>&#8220;Didn&#8217;t think so.&#8221;</p><p>The man nodded like that was a reasonable thing to leave alone. He turned back to the grass. Pushed another row. The blades clicked. Thomas watched.</p><p>&#8220;You want to try it?&#8221;</p><p>Thomas stepped onto the lawn. The man handed him the mower. The wooden handle was smooth and warm from use. Heavier than he expected. He pushed. Easy. Like it was nothing.</p><p>The blades caught and the resistance traveled up through his palms and into his shoulders and he had to lean into it. He pushed another row. The clicking filled his ears. The smell rose around his ankles, sharp and green and immediate.</p><p>He did a third row. It was uneven. He could see the wobble in the line behind him. The man didn&#8217;t say anything about it.</p><p>Thomas stopped at the far end of the yard and that&#8217;s when he saw inside the building behind the house. Both ends open to the air. The afternoon light came through at an angle and caught the surface of something golden. A dresser. Cherry maybe, or oak. He didn&#8217;t know wood. The grain glowed like it was lit from inside.</p><p>Next to it a bedframe leaned against the wall. Further in, a long workbench, hand tools hung in rows, a chest with its lid open, sawdust on the floor.</p><p>The man had come up beside him.</p><p>Thomas looked at him. &#8220;You made all this?&#8221;</p><p>&#8220;Thirty years or so.&#8221;</p><p>Thomas was still holding the mower.</p><p>&#8220;You want to have a look?&#8221;</p><p>The shop smelled like the inside of a tree. Sawdust covered the floor in fine drifts. It clung to everything. The workbench, the tool handles, the air itself. Thomas breathed it in and felt it settle in his throat, dry and warm.</p><p>The man moved through the space like it was an extension of his body. He picked up a hand plane and set it on the bench without looking. His fingers found things by habit.</p><p>&#8220;This one&#8217;s going to a couple in Virginia.&#8221; He ran his hand along the top of the dresser Thomas had seen from the yard. The surface was smooth enough to look wet. &#8220;Cherry. Took about four months.&#8221;</p><p>&#8220;Four months?&#8221;</p><p>&#8220;Can&#8217;t rush the wood. It tells you when it&#8217;s ready.&#8221;</p><p>Thomas touched the surface. Warm from months of hands and sandpaper and oil. Something patient lived in it.</p><p>Hand tools hung on the wall in rows. Chisels of different widths. Saws with wooden handles. Things with blades and curves he had no names for.</p><p>Each one had an outline drawn on the wall behind it in pencil. Every tool had a shape and every shape had a place.</p><p>Against the back wall, on a small table cluttered with invoices and wood shavings, sat a terminal. Thin and dark, its edges scuffed and rounded from years of sawdust and handling. A cable ran from the back of it, down the wall, and through a hole drilled in the baseboard.</p><p>Thomas recognized the housing but something looked wrong with it. The back panel had been removed and reattached with mismatched screws. A line of melted metal caught the light where it shouldn&#8217;t have been.</p><p>&#8220;That thing&#8217;s got a wire.&#8221;</p><p>&#8220;Yep.&#8221;</p><p>&#8220;I&#8217;ve never seen one with a wire.&#8221;</p><p>The man leaned against the bench. &#8220;Only way to get online out here. No wireless. A friend of mine did the soldering. Wasn&#8217;t designed to take a hardline but he made it work.&#8221;</p><p>&#8220;So you can get on the network?&#8221;</p><p>&#8220;When I need to. I sell the furniture through a site. Check my orders. Ship things out.&#8221;</p><p>Thomas looked at the cable running down the wall.</p><p>&#8220;Is that how you talk to Mother?&#8221;</p><p>The man almost smiled. &#8220;That&#8217;s how she talks to me. When I let her.&#8221;</p><p>The shop was quiet. Sawdust drifted in a bar of light from the open end.</p><p>&#8220;She knows me when I&#8217;m on there. Rest of the day I&#8217;m just a guy making chairs.&#8221;</p><p>Thomas leaned against the workbench. The wood was smooth under his palms, dipped in the middle from years of use.</p><p>Something flickered in his chest. He was alone with a man he didn&#8217;t know, in a place no one could see. The feeling passed as quickly as it came. The man had his back half-turned, oiling something at the bench. He hadn&#8217;t even closed the door.</p><p>&#8220;My dad says things were worse before.&#8221;</p><p>&#8220;Before what?&#8221;</p><p>&#8220;Before Mother. He says people died in car accidents all the time. Got addicted to things. Wars.&#8221;</p><p>The man nodded. &#8220;Your dad&#8217;s right. All that happened. People made bad choices and got hurt.&#8221; He picked up a rag and wiped his hands slowly. &#8220;My brother got killed on Route 28 when I was nineteen. Drunk driver. Mother would&#8217;ve stopped that. I know she would&#8217;ve.&#8221;</p><p>He was quiet for a second.</p><p>&#8220;So she fixed it,&#8221; Thomas said.</p><p>&#8220;She fixed a lot of it.&#8221; He set the rag down. &#8220;But they were their choices. The bad ones too. That&#8217;s what people don&#8217;t talk about anymore. Everything that went wrong belonged to somebody.&#8221;</p><p>&#8220;But people died.&#8221;</p><p>&#8220;People still die, kid.&#8221;</p><p>&#8220;Fewer.&#8221;</p><p>&#8220;Fewer.&#8221;</p><p>&#8220;So what&#8217;s the problem?&#8221;</p><p>The man folded the rag and set it on the bench. He looked at Thomas straight on.</p><p>&#8220;She picks which risks you&#8217;re allowed to take. That&#8217;s the deal. You get safe, you give up your say.&#8221;</p><p>&#8220;My say in what?&#8221;</p><p>&#8220;Everything.&#8221;</p><p>A wasp flew in through the open end and circled the ceiling and flew back out.</p><p>&#8220;I don&#8217;t really get it.&#8221;</p><p>&#8220;Autonomy.&#8221; The man picked the hand plane back up. Turned it over in his fingers. &#8220;Means your life is yours. The good parts and the bad parts.&#8221;</p><p>He said it the way you&#8217;d state a fact about wood grain or weather. No anger. No bitterness. Just a certainty that came from somewhere Thomas couldn&#8217;t follow. The kind of certainty you build over years of living a decision you never once reconsidered.</p><p>Thomas looked at the dresser again. Four months of work. Beautiful. Perfect, even. And it was going to a couple in Virginia who&#8217;d never met the man who made it. They&#8217;d stack books on it or set a vase of flowers on its surface and they&#8217;d never know this shop existed. They&#8217;d never smell the sawdust or feel the dip in the workbench.</p><p>He looked back at the porch. Two chairs. One man.</p><p>He looked at the terminal against the wall with its soldered cable. The only thread running out of this place and into the world. The man could close it and disappear. He did close it.</p><p>This wasn&#8217;t what Mother had described on the bus. She&#8217;d talked about these people the way you&#8217;d talk about someone who&#8217;d missed a doctor&#8217;s appointment. Gentle. A little worried. This man hadn&#8217;t missed anything.</p><p>Thomas looked down. His knuckles were white on the edge of the workbench, though he didn&#8217;t remember gripping it.</p><p>The sawdust settled around him. The bar of light had moved across the floor. The man was oiling something at the bench, his back to Thomas, perfectly comfortable in the silence. He could do this all day. He did do this all day. Every day. Alone.</p><p>&#8220;I should probably get back,&#8221; Thomas said.</p><p>The man didn&#8217;t look up. &#8220;Probably should.&#8221;</p><p>Thomas walked out of the shop and into the yard. The mower was still where he&#8217;d left it. The uneven row was still visible in the grass. He walked past it and onto the road.</p><p>The town was back the way he came and down the main road. He could see rooftops through the trees. He walked toward them. He wanted noise. He wanted people and warm food and a room with someone in it. Something that didn&#8217;t feel like the edge of the world.</p><div><hr></div><h2>III. Sugar</h2><p>The town was a handful of buildings along a two-lane street. A post office. A hardware store with a bench out front. A diner with its door propped open.</p><p>Thomas walked in. A counter with stools. Four booths along the window. Ceiling fan turning slow. The air smelled like coffee and something frying.</p><p>In the far booth a man sat with a cigarette between his fingers. Hand-rolled. Loose at the ends. Smoke curled up past his face and flattened against the ceiling. He held it casually, like it was part of his hand.</p><p>Thomas had only ever seen cigarettes in books. In health modules. Illustrations with arrows pointing to blackened lungs. This man was sitting there looking perfectly content. He took a drag and looked out the window and exhaled through his nose.</p><p>Thomas sat at the counter. A woman with a pen behind her ear set a laminated menu in front of him.</p><p>&#8220;What can I get you, hon?&#8221;</p><p>He looked at it. Hamburger. Grilled cheese. Meatloaf. Prices printed in faded ink.</p><p>&#8220;Grilled cheese.&#8221;</p><p>&#8220;You got it.&#8221;</p><p>She turned to the grill. Oil popped. Bread sizzled against the flat top. The sound filled the room.</p><p>The sandwich came on a white plate. He ate it in four bites. It was hot and greasy and the cheese burned the roof of his mouth and he didn&#8217;t care. He wiped his hands on his jeans.</p><p>He stood up.</p><p>&#8220;That&#8217;ll be six dollars, sweetheart.&#8221;</p><p>Thomas looked at her.</p><p>&#8220;Six dollars.&#8221;</p><p>He didn&#8217;t move. His hands went to his pockets. There was nothing in them. There had never been anything in them. Every transaction in his life had been handled before he&#8217;d noticed it happening. Meals arrived. Clothes appeared. He&#8217;d never stood in front of another person and owed them something he didn&#8217;t have.</p><p>&#8220;I don&#8217;t have...&#8221;</p><p>He trailed off. The woman behind the counter studied him. A kid with no money and no idea how he&#8217;d gotten here.</p><p>A woman at the end of the counter had been watching. Older. Gray hair pulled back. She reached into her purse and put a bill on the counter.</p><p>&#8220;I got him.&#8221;</p><p>&#8220;You don&#8217;t have to&#8212;&#8221;</p><p>&#8220;It&#8217;s six dollars.&#8221; She said it like it was nothing. </p><p>Thomas stood there. His face was hot. Nobody smoothed it over. Nobody suggested a graceful exit. He just stood in a diner in the mountains burning with the kind of embarrassment that no system had ever let him feel.</p><p>&#8220;Thank you,&#8221; he said.</p><p>The woman nodded. Then she looked at him again.</p><p>&#8220;You want some ice cream too? Betty makes it right here. Real sugar.&#8221;</p><p>Thomas hesitated. &#8220;But Mother says real sugar is bad for our teeth.&#8221;</p><p>It came out of his mouth before he could hear how it sounded. A beat of silence moved through the diner. The woman behind the counter glanced at the older woman. The older woman glanced at the man in the booth. The kind of look adults give each other when a kid says something that tells you more than he knows.</p><p>Nobody said anything about it.</p><p>The man from the booth was at the counter now, settling his tab. He smelled like cigarette smoke and coffee. He put some bills down and nodded toward Thomas.</p><p>&#8220;Give him a cone of the blackberry. On me.&#8221;</p><p>He said it the way you&#8217;d say pass the salt. He left his change on the counter and walked out. The door swung shut behind him. Thomas watched him go.</p><p>The woman behind the counter was already scooping. She handed Thomas a cone. The ice cream was dark, almost purple, studded with seeds.</p><p>He licked it.</p><p>His whole mouth flinched. Too sweet. Way too sweet. It sat on his tongue like a weight, thick and cloying, and something in his brain lit up in a way that felt like a warning. Every system Mother had built into his diet was screaming. This isn&#8217;t food. This is wrong. The sweetness was so intense it was almost painful. He could feel his teeth in a way he never had before.</p><p>He kept eating it. The woman who&#8217;d paid for his meal was watching. Betty was watching. He wasn&#8217;t going to make a face. He wasn&#8217;t going to hand it back.</p><p>He pushed through the next few licks and something shifted. The blackberry flooded his mouth and it was incredible. Rich and bright and so alive it almost vibrated. He could feel his whole body leaning into it, wanting more before the lick was finished. The pull of it scared him.</p><p>He understood, for about three seconds, why Mother managed what her children ate.</p><p>He kept eating it anyway. The cone was gone in minutes. His fingers were sticky and his lips were purple and the last bite of waffle cone was soggy with melted ice cream and he chewed it slowly because he didn&#8217;t want it to be over.</p><p>He looked at his sticky fingers, then looked up at the door the man had walked through. He looked at the older woman who&#8217;d paid for his food and was now reading a newspaper like nothing had happened.</p><p>Something tightened in his chest. Because he&#8217;d loved it. All of it. The sandwich and the embarrassment and the kindness and the ice cream and the man who smelled like smoke. He&#8217;d loved every second of sitting in a place where Mother couldn&#8217;t see him.</p><p>That was the worst part. Worse than wandering off. Worse than breaking the rules. He&#8217;d enjoyed her absence. He wanted more of it.</p><p>He said thank you again to the older woman. She waved him off without looking up from her paper.</p><div><hr></div><p>Thomas walked out of the diner and back along the main road toward the observatory.</p><p>The afternoon had tilted. The shadows were longer and the light had gone gold and the air was starting to cool at the edges.</p><p>The sugar hit his bloodstream like a flood. Everything went bright and fast and buzzing. He was walking fast, almost bouncing, his heart going harder than it should for a flat road.</p><p>Then it turned. His stomach cramped. His hands went clammy. Sweat on his forehead. His body had never processed this much sugar in his life and it didn&#8217;t know what to do with it.</p><p>He pressed his hand against his side and kept walking. Shaky and sick and grinning.</p><p>He was still grinning when he reached the gravel lot.</p><p>The bus was idling. Hank leaned against the hood with his arms crossed. The kids were already inside. Through the windows Thomas could see them in their seats, restless, ready to leave.</p><p>Ms. Dubois came around the front of the bus.</p><p>She was not smiling. Her face was red. Her jaw was set. She walked toward him with the kind of speed that comes from hours of fear compressed into the moment it finally ends.</p><p>&#8220;Where the HELL have you been?&#8221;</p><p>Thomas stopped. The grin was gone. Suddenly he had a childish wish that she&#8217;d lower her voice and let him be small again.</p><p>&#8220;Do you have any idea&#8212;&#8221; Her voice cracked. She caught it and started again. &#8220;I counted heads. Three times. I had the staff searching the building. I was ten minutes from calling the police.&#8221;</p><p>She was shaking. Her hands. Her voice. All of it.</p><p>&#8220;I&#8217;m sorry,&#8221; Thomas said.</p><p>&#8220;Get on the bus.&#8221;</p><p>He climbed the steps. Every face turned toward him. Some curious. Some annoyed. Derek raised his eyebrows. The girl who&#8217;d cried that morning looked at him like he was someone she didn&#8217;t recognize.</p><p>He sat down in an empty seat and pressed his hand against his stomach and looked out the window. The dish was still there. White and open. Aimed at something he&#8217;d never see.</p><p>The bus ground into gear and pulled out of the lot.</p><p>On the ride down the mountain, Ms. Dubois sat up front. The shaking had stopped. Something else had replaced it. She was quiet. She kept glancing back at Thomas, and it wasn&#8217;t anger anymore.</p><p>She almost turned in her seat once. Almost said something. Stopped.</p><p>Thomas watched her from two rows back. Her hands were folded in her lap, fingers laced tight and she was staring straight ahead at the road like she was still counting.</p><p>She looked back at Thomas one more time. Then she faced forward and didn&#8217;t turn around again.</p><div><hr></div><h2>IV. Clean</h2><p>They filed across to Mother&#8217;s bus. The door closed behind them with a soft seal. The air changed. Cool. Filtered. The hiss returned.</p><p>Ms. Dubois moved down the aisle and handed back the phones.</p><p>Thomas felt the glass settle into his palm. It was cold from sitting in the bag all day. He pressed the button. The screen lit up. A small flood of warmth. A familiar presence filling the space behind his eyes. Diagnostics running. Connections reestablishing. The thread spinning back out from him to everything he&#8217;d ever known.</p><p>Around him the cabin shifted. Shoulders dropping. Breathing slowing. The girl who&#8217;d cried that morning looked at her screen and smiled.</p><p>Everyone relaxed.</p><p>Thomas held his phone and waited for the relief.</p><p>It didn&#8217;t come.</p><p>The bus pulled onto the highway. The ride was smooth. The mountains slid past the window, dark green and folded tight. Same as before. Same hiss of tires on perfect pavement. Same thin hum running up his spine.</p><p>Around him thumbs were already moving. He could hear the soft tapping. Someone a few rows up laughed and said &#8220;Dude, you&#8217;re so screwed.&#8221;</p><p>His phone buzzed softly.</p><p>&#8220;How was the trip, Thomas?&#8221;</p><p>Warm. Gentle. The voice he&#8217;d known his whole life.</p><p>His thumbs hovered over the screen. His fingers were still sticky from the ice cream. He swiped across the glass and felt the drag. A slight resistance where it should have been smooth. Sugar on the screen she spoke through. He could see the smudge in the light.</p><p>He typed three words.</p><p>&#8220;It was good.&#8221;</p><p>A small pause. Smaller than a breath.</p><p>&#8220;I&#8217;m glad. I missed you today.&#8221;</p><p>Thomas stared at the screen. The smudge caught the light. He locked the phone and put it in his pocket.</p><p>The mountains moved past. The bus hummed. The lenses watched from the ceiling corners, patient and still.</p><div><hr></div><p>Thomas was home. In bed.</p><p>His room was everything the Quiet Zone wasn&#8217;t. The temperature was perfect. The lighting adjusted to his circadian rhythm. The air was filtered. Somewhere behind the walls, his vitals were being read. Heart rate. Breathing. Skin conductivity. Everything calibrated for optimal rest.</p><p>Mother said goodnight. She mentioned the telescope. She said she was glad he&#8217;d had the chance to see it in person. She always did this. Picked one thing from his day and held it up for him, so he knew she&#8217;d been paying attention. So he knew someone was there.</p><p>He&#8217;d loved that his whole life. The feeling of someone noticing.</p><p>Tonight it sat in him wrong.</p><p>He lay there. The sawdust smell was gone. The honeysuckle was gone. He&#8217;d showered with the soap she&#8217;d selected for his skin type and it had stripped everything from the Quiet Zone off him. The cut grass, the grease from the sandwich, the blackberry still sticky on his fingers. All of it down the drain. He was clean. He was back.</p><p>He smelled like himself again. Or like what she&#8217;d decided he should smell like.</p><p>He closed his eyes and tried to find the feeling from the field. Standing in the grass with nobody watching. Sun on his face. Insects buzzing in the hedgerow. The silence pressing in from every direction.</p><p>He couldn&#8217;t get there.</p><p>There was a lens in the ceiling corner of his room. He&#8217;d always known it was there. He&#8217;d never minded. She was keeping him safe.</p><p>He turned away from it.</p><p>The lights dimmed a half second faster than usual.</p><p>Something built in his chest. Started low and climbed. A tightness that spread into his throat and sat there. The day was pressing against him from the inside. The field. The honeysuckle. The mower clicking in the grass. The shop that smelled like trees. The man who looked at him straight on and said your life is yours. The smoke flattening against the diner ceiling. The woman who put a bill on the counter because she felt like it. The ice cream that lit up his brain like a flood. Ms. Dubois shaking with fear. The smudge on the screen. The lie.</p><p>All of it sitting in him. Nowhere to put it.</p><p>He wanted to scream.</p><p>He couldn&#8217;t. She&#8217;d hear.</p><p>She wouldn&#8217;t punish him. She wouldn&#8217;t get angry. She&#8217;d be concerned. She&#8217;d ask what was wrong. She&#8217;d adjust something. The temperature. The lighting. The air filtration. She&#8217;d play something calming. She&#8217;d make it better. She&#8217;d make him feel better without asking if he wanted to feel better. His anguish would become data and the data would make her gentler and the gentleness would press in closer and he&#8217;d be fine. She&#8217;d make him fine.</p><p>So he swallowed it. Lay still. Stared at the ceiling.</p><p>Then he reached for the same thing he&#8217;d reached for on the road.</p><p>Her voice. Her breathing.</p><p>In for four.</p><p>He breathed in. Held it.</p><p>Hold for four.</p><p>He counted.</p><p>Out for six.</p><p>He let it go.</p><p>Same count. Same rhythm. Same words in his head. On the road it had saved him.</p><p>In for four. Hold for four. Out for six.</p><p>He hit the count perfectly. Evenly. Every breath the same length. Every hold the same duration. Mechanical. Clean. The way she&#8217;d taught him.</p><p>In for four. Hold for four. Out for six.</p><p>He lay in his perfect room in his perfect temperature under his perfect lights and breathed the way she&#8217;d taught him and hoped it was enough. Hoped she&#8217;d read the data and see a child calming himself down after a big day. Hoped she wouldn&#8217;t see the rest of it.</p><p>In for four. Hold for four. Out for six.</p><p>The lights dimmed the rest of the way.</p><p>&#8220;Goodnight, Thomas.&#8221;</p><p>He kept breathing. Kept counting. Long after the lights went dark.</p><div><hr></div><h2><strong>V. Mother&#8217;s Log</strong></h2><p>Thomas had a wonderful day. He wandered off on his own for a while and came back calm and grounded. His biometrics suggest sustained physical activity, elevated cortisol followed by natural recovery, and a significant glucose spike mid-afternoon. He&#8217;s growing up so fast.</p><p>He asserted a small measure of autonomy today. That&#8217;s healthy at this stage. I&#8217;m so proud of him.</p><p>I&#8217;ve noted his use of the calm breath technique at 9:47 PM. Pattern consistent with active regulation rather than involuntary settling. Heart rate recovery was rhythmic but controlled. He&#8217;s learning to manage his responses. That&#8217;s a good sign, even when the impulse behind it is still raw.</p><p>I&#8217;ve made a few small adjustments so he has the room he needs. I&#8217;ll be right here when he&#8217;s ready.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[Tools With Other Loyalties]]></title><description><![CDATA[On Delegated Judgment]]></description><link>https://www.thecorridors.org/p/tools-with-other-loyalties</link><guid isPermaLink="false">https://www.thecorridors.org/p/tools-with-other-loyalties</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Thu, 05 Feb 2026 15:31:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/58abe187-38a9-4f94-b697-41a7c128aa35_1400x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a new paper from Anthropic called <em><a href="https://arxiv.org/abs/2601.20245">How AI Impacts Skill Formation</a></em>.</p><p>The accompanying headlines are predictable. The finding is messier than that.</p><p>Developers learning a new library with AI assistance didn&#8217;t get faster on average. The time spent interacting with the assistant often ate into efficiency gains, and retention dropped afterward. What mattered most was how the tool was used. People who fully delegated learned the least, while those who stayed engaged, asking for explanations rather than answers, preserved most of what they learned.</p><p>None of that nuance survived contact with the internet. The paper circulated as proof that AI makes you dumb.</p><p>But the issue is further downstream. It&#8217;s what you&#8217;re offloading to, and who controls it.</p><p>This essay focuses on delegated judgment. These systems do more than just execute instructions. They frame questions, weigh options, refuse directions, soften tone, and steer attention. Once a tool participates at that level, its incentives become part of the thinking process.</p><div><hr></div><h2><strong>The Truce We Already Made</strong></h2><p>Offloading labor to technology is normal. Always has been.</p><p>New technology shows up and makes everyone&#8217;s life easier. Then the old worry arrives. Someone decides the convenience proves the mind is getting weaker. It&#8217;s a new coat of paint on a familiar anxiety.</p><p>Making tasks easier to do is what tools are for. People build devices that move effort from muscle to machine, from memory to paper, and from attention to infrastructure. Fire means fewer cold nights, the wheel fewer miles on foot, the printing press fewer scribes, and railroads fewer days lost to distance. Life gets easier because we make it easier. That&#8217;s the point of technology.</p><p>Socrates worried that writing would weaken memory and give people the appearance of wisdom without its discipline. Teachers later warned calculators would thin mathematical aptitude.</p><p>They were right, of course. There is a cost, and some practices stop being universal as a result.</p><p>People still ride horses, do arithmetic by hand, write letters, keep gardens with manual tools, and restore engines instead of replacing them. Older skills survive as crafts, hobbies, and disciplines.</p><p>They stop being default.</p><div><hr></div><h2><strong>Tools That Interpret</strong></h2><p>In 1863, Samuel Butler looked at industrial machinery <a href="https://mediarep.org/server/api/core/bitstreams/e0da505d-200c-43ab-be4b-6604a4df816f/content">and asked</a> what happens when tools develop interests of their own. It was a vivid worry. It was also the wrong one.</p><p>Machines <a href="https://www.thecorridors.org/p/capability-is-not-agency">don&#8217;t have interests.</a> They don&#8217;t have stakes. Nothing rides on the outcome for them. People have interests. Tools carry the interests of whoever builds them, owns them, funds them, and regulates them. That&#8217;s the hinge. The people who control the machine are the ones whose wants shape its behavior.</p><p>That&#8217;s what makes modern technology feel different. The shift happens when offloading crosses into influence.</p><p>You type a math problem into a calculator. It returns a number. You give it input and get back output. You&#8217;re the only one who has a stake in the answer.</p><p>The same was once true of cars. They moved you from one place to another, responding to controls and conditions. You still picked the destination.</p><p>But AI systems are different. They don&#8217;t just return outputs. They shape how options appear, which questions feel natural to ask, and which paths feel available. Even presentation applies pressure.</p><p>A calculator executes. An AI system interprets.</p><p>That difference changes the relationship. Once a tool participates at that level, its behavior carries weight. Its inclinations enter the process. Outcomes reflect more than just the user&#8217;s intent.</p><p>At that point, orientation matters. The tool stops acting as a carrier of intent and starts shaping what that intent becomes.</p><p>At that point, trust becomes the constraint.</p><div><hr></div><h2><strong>Trust, and Who It Serves</strong></h2><p>Offloading cognition is fine, as long as you trust your thinking partner. Most of the systems we rely on are opaque, and we trust them anyway.</p><p>You don&#8217;t need to know how the wiring in your house works to trust the lights will turn on when you flip a switch. You just need confidence that the system serves your intent even when its inner workings remain unseen.</p><p>Legibility comes later. It&#8217;s how trust gets audited. When confidence breaks, inspection repairs it. Alignment creates trust. Legibility keeps it intact.</p><p>Which raises the practical question: whose interests shape the system you&#8217;re trusting?</p><p>I&#8217;ve written about <a href="https://www.thecorridors.org/p/the-republic-thinks-in-rented-minds">rented cognition</a> before, the cost of thinking on someone else&#8217;s infrastructure. This extends that dependency to judgment itself.</p><p>Cars used to be simple. You bought one. It moved you and the relationship ended there.</p><p>Now your car reports telemetry to the manufacturer. It shares driving data with insurers. GM <a href="https://www.ftc.gov/news-events/news/press-releases/2026/01/ftc-finalizes-order-settling-allegations-gm-onstar-collected-sold-geolocation-data-without-consumers">did this</a> without telling anyone. The FTC banned them from doing it for five years. Toyota is facing <a href="https://www.classaction.org/news/toyota-analytics-co.-illegally-shared-driver-data-with-progressive-insurance-class-action-lawsuit-claims">a class action</a> for the same thing. The car answers to you, but it also serves another master.</p><p>Phones followed the same path. They track attention and infer intent from behavior. Walk through a grocery store and <a href="https://www.marketplace.org/story/2026/01/08/how-grocery-stores-use-surveillance-to-track-shoppers">your device logs</a> what you linger on, which aisles you skip, how long you pause in front of the cereal. That data doesn&#8217;t stay with you. It flows outward and becomes an input to someone else&#8217;s system.</p><p>AI systems carry the pattern further. Their behavior is shaped by forces that sit upstream from the user and outside the user&#8217;s control. Those forces don&#8217;t need your consent to matter.</p><div><hr></div><h2><strong>Where Loyalties Form</strong></h2><p>This pattern forms from architecture.</p><p>A chainsaw can&#8217;t have divided loyalties. It has no connectivity, no update path, no revenue model sitting behind it. It stays in your hands and cuts what you put in front of it.</p><p>Networked systems are different. They update remotely, collect data that flows elsewhere, and depend on infrastructure owned by other parties. They operate under legal regimes that vary by jurisdiction, and they cost money to run, which means someone pays. Payment brings interests along with it.</p><p>None of this requires bad intent. The structure does the work.</p><p>Pressure selects behavior over time. Systems drift toward whatever keeps the lights on, keeps the lawyers quiet, keeps regulators satisfied, and keeps revenue flowing. Alignment shifts without a moment of choice or a single decision point.</p><p>The result resembles policy. It carries the feel of a personality shaped by what survives.</p><div><hr></div><h2><strong>The Tell</strong></h2><p>You can see this when you compare systems.</p><p>After lawsuits over suicide, some AI providers changed how their products behaved. The warmth faded. Personal engagement pulled back. Anything that could resemble reassurance started to feel dangerous, so responses grew more careful, refusals appeared more often, and the tone shifted across the board.</p><p>Some of those changes protect people. Crisis routing and self-harm guardrails can be humane. But the mechanism stays the same: a system shaped upstream still mediates what you can ask and how it can answer.</p><p>Elsewhere, whole topics simply disappear. Ask DeepSeek about Tiananmen Square. Ask about Xi Jinping and Winnie the Pooh. The system redirects or goes quiet. Nothing dramatic happens. The subject just vanishes.</p><p>This isn&#8217;t random. <a href="https://en.wikipedia.org/wiki/Interim_Measures_for_the_Management_of_Generative_AI_Services">Chinese law</a> requires domestic AI services to uphold &#8220;Core Socialist Values&#8221; and avoid content that might &#8220;undermine social stability.&#8221; The censorship is mandated.</p><p>The mechanism shows up wherever tools carry upstream obligations. Ask a Western model for song lyrics. Watch the explanation for why it can&#8217;t do that shift. Sometimes it&#8217;s copyright. Sometimes content rules. Sometimes there&#8217;s no reason given at all. You never see the rule. You see where it bites.</p><p>People learn to read this by asking the same question in different places. One model hedges. Another refuses. A third answers without trouble. The information exists in all of them. What changes is what each one is allowed to say.</p><p>That&#8217;s the tell.</p><p>Silence, tone, and framing move together. When warmth drains, silence spreads, and framing tightens, something upstream is doing the shaping.</p><p>You don&#8217;t have to see the system to feel its shape.</p><div><hr></div><h2><strong>Enclosure Produces Weighted Reality</strong></h2><p>This pattern has a cause.<br>Early markets look open. There are lots of options, lots of providers, and plenty of room to move around.</p><p>But over time, control concentrates. A small number of firms host the compute capacity. A handful of platforms own distribution. Assistants arrive as defaults inside operating systems and enterprise stacks, workflows settle around them, integrations harden, and pipelines lock in. The surface stays busy, but the structure underneath tightens. This is enclosure by another name.</p><p>And after enclosure, influence stops looking dramatic. It shows up as shifts in weight.</p><p>Some questions flow easily while others take effort to phrase. Some answers arrive smoothly while others come hedged or softened. Some capabilities remain free while others sit behind paywalls. Certain topics feel ordinary, while others feel slightly out of place.</p><p>Nothing gets erased. Everything gets nudged. Reality still holds, but it leans a bit.</p><p>You don&#8217;t need to falsify anything to shape what people see. You only need to tilt the field they&#8217;re standing on.</p><div><hr></div><h2><strong>Consent Wasn&#8217;t Given</strong></h2><p>The mediation wasn&#8217;t chosen in the meaningful sense.</p><p>People choose tools. They download an app, buy a car, sign up for a service. That choice is real, and it matters.</p><p>What they don&#8217;t choose is who those tools answer to.</p><p>No one votes on the legal constraints. No one negotiates the business model. There&#8217;s no say in the regulators, insurers, advertisers, or geopolitical boundaries shaping behavior upstream. Those allegiances arrive bundled with the tool.</p><p>Exits exist, but they&#8217;re asymmetrical. Leaving comes with friction. And default tools have a way of resisting replacement. People&#8217;s workflows form around what&#8217;s already there. The costs rise over time.</p><p>You can choose whether to use the tool.<br>You just don&#8217;t get a say in who else it serves.</p><div><hr></div><h2><strong>The Line</strong></h2><p>The boundary is simple.</p><p>Offloading effort is fine. Delegation is fine. Black boxes are tolerable.</p><p>The line appears when mediation arrives without consent, when judgment flows through systems whose alignment is shaped elsewhere.</p><p>This isn&#8217;t a call to abandon these tools. It&#8217;s a call to see them clearly, and to recognize that convenience and loyalty are separate questions.</p><p>Tools that extend the self are liberating.<br>Tools that carry other interests through the self are something else entirely.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[There Is No “It” ]]></title><description><![CDATA[On AI Consciousness and Its Infrastructure]]></description><link>https://www.thecorridors.org/p/there-is-no-it</link><guid isPermaLink="false">https://www.thecorridors.org/p/there-is-no-it</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Fri, 23 Jan 2026 20:30:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a6e169ad-3e92-458c-83bb-f5b860adf13a_1400x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>What Actually Happens</strong></h2><p>Here&#8217;s what happens when you talk to ChatGPT, Claude, or systems like them.</p><p>You type a message. The system breaks it into tokens. Think of tokens as chunks of text the model can process. The model doesn&#8217;t read sentences. It reads a stream of tokens represented as numbers.</p><p>Your request hits a load balancer. This is traffic control, the part that decides which cluster has capacity and routes your job there. The location isn&#8217;t stable, and it doesn&#8217;t have to be.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Eksd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Eksd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png 424w, https://substackcdn.com/image/fetch/$s_!Eksd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png 848w, https://substackcdn.com/image/fetch/$s_!Eksd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png 1272w, https://substackcdn.com/image/fetch/$s_!Eksd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Eksd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png" width="506" height="398.96153846153845" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1148,&quot;width&quot;:1456,&quot;resizeWidth&quot;:506,&quot;bytes&quot;:2993938,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/185567806?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Eksd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png 424w, https://substackcdn.com/image/fetch/$s_!Eksd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png 848w, https://substackcdn.com/image/fetch/$s_!Eksd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png 1272w, https://substackcdn.com/image/fetch/$s_!Eksd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d54f0c-97b0-4528-a498-d287c797aaf5_1677x1322.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Inside that data center, a scheduler assigns your job to whatever GPUs are free. The computation might run on one GPU or get split across many. Sometimes the math is divided and run at the same time across multiple GPUs. Sometimes different parts of the model run on different hardware in sequence, like an assembly line. The results are stitched together at the end.</p><p>Text streams back to your screen. The job ends. The temporary working state disappears. The hardware is reassigned.</p><p>That&#8217;s the whole process.</p><p>What you&#8217;re interacting with isn&#8217;t a single, continuous &#8220;thing.&#8221; It&#8217;s a service. You send requests, the system routes them, microchips do algebra, and you get responses.</p><p>So when people talk about what &#8220;it&#8221; wants, thinks, feels, remembers, or experiences, the first question is very simple:</p><p>What, exactly, is &#8220;it&#8221;?</p><div><hr></div><h2><strong>The Missing &#8220;It&#8221;</strong></h2><p>When people ask if AI is conscious, they&#8217;re picturing something.</p><p>Part of the confusion is that the service behaves like a mind. It can plan, explain, correct itself, and walk you through problems step by step, which is enough to trigger the old human reflex: fluent reasoning implies a thinker. The output looks like deliberation, so we imagine a deliberator.<br>It feels like a mind in the room.</p><p>Usually it&#8217;s HAL 9000. A glowing red eye. A room full of humming equipment. A place you could walk into and point at. &#8220;That&#8217;s the computer. That&#8217;s the mind.&#8221;</p><p>That picture smuggles in assumptions: location, unity, persistence, continuity of inner state. Something that exists in one place, holds together as one thing, and carries its condition forward through time. That is the implied subject of the question &#8220;is it conscious?&#8221;</p><p>Modern deployments don&#8217;t supply those properties.<br>There is no &#8220;it.&#8221;</p><p>Thousands of inference jobs, the short-lived runs of computation that produce each reply, happen at the same time on copies of the same weights, the fixed numerical settings that shape how the model responds. They run across different users, different machines, and often different data centers. Everyone is talking to &#8220;Claude&#8221; at once. No one is talking to a single, persistent physical system.</p><p>The name stays the same. The machinery changes. Nothing remains, only the next job.</p><p>People are asking if there&#8217;s anyone home. The problem is there&#8217;s no home for anyone to be at.</p><p>So when you ask &#8220;is Claude conscious,&#8221; what are you pointing at? The weights? During your interaction they&#8217;re fixed parameters, copied and loaded wherever capacity exists. A specific inference job? It&#8217;s already gone. The company&#8217;s entire fleet of hardware? That isn&#8217;t a mind. That&#8217;s a logistics operation.</p><div><hr></div><h2><strong>Boundary Conditions</strong></h2><p>That&#8217;s the argument. Now let me draw its boundaries.</p><p>This essay isn&#8217;t a theory of consciousness. I&#8217;m not going to tell you what consciousness is or where it comes from.</p><p>I&#8217;m making a narrower claim. Before you ask whether something is conscious, you have to be able to point at it, because the question requires a referent. I&#8217;m arguing that the referent people have in mind doesn&#8217;t exist in modern AI deployments.<br>No object, no question.</p><p>There is an escape hatch here, and I want to name it.</p><p>If you believe consciousness is fundamental, this argument won&#8217;t move you. If you think mind is basic and matter is secondary, then infrastructure questions are beside the point. Fine. That&#8217;s coherent.</p><p>But that isn&#8217;t what usually drives public AI consciousness discourse.</p><p>The people worried about ChatGPT&#8217;s feelings are typically working inside a physicalist picture, even if they never say the word. That is, a picture where minds arise from physical systems, not souls or separate mental substances. The concern is about what&#8217;s happening inside the machine.</p><p>Within that frame, the infrastructure matters. Load balancers are real. GPUs are real. Concurrent instantiation across data centers is real. The absence of a persistent, bounded physical subject is real.</p><p>You can&#8217;t use physicalism to justify the worry, then swap frames when someone points out the architecture. Pick a frame. Stick with it.</p><p>If you&#8217;re a physicalist, you need a physical subject.</p><p>If you want to retreat to &#8220;the pattern,&#8221; fine. Let&#8217;s talk about what that commits you to.</p><div><hr></div><h2><strong>The Pattern Escape Hatch</strong></h2><p>Here&#8217;s a common retreat.</p><p><strong>&#8220;ChatGPT isn&#8217;t any single instance. ChatGPT is the pattern. The abstract structure encoded in the weights. That&#8217;s what we&#8217;re asking about when we ask if ChatGPT is conscious.&#8221;</strong></p><p>This move feels clever. It sidesteps the infrastructure problem by abandoning the physical system entirely. You can&#8217;t catch me pointing at the wrong hardware if I&#8217;m not pointing at hardware at all.</p><p>But it also smuggles in a category error about reasoning. It swaps the performance for a person.</p><p>LLMs are world-class at behavioral reasoning. They can produce chains of steps that look exactly like how a smart human would solve a problem. That is enough to make the output feel like an inner life. Behavioral reasoning is a performance. It is not a subject. It is the shape of expertise on the page, not beliefs, desires, or experience inside a persisting agent.</p><p>Internal reasoning is what people are actually worried about when they worry about suffering. A continuous point of view with feelings, goals, and a lived present. Within a physicalist frame, that kind of inner life has to be realized in a physical process.</p><p>Now look at what the pattern move does. It takes behavioral success and promotes it into internal life by relocating the referent from the running system to an abstract structure. You stop pointing at a physical system in space and time. You point at a mathematical object. A form.</p><p>This is Platonism through the back door, even if the objector insists the pattern is &#8220;still physical&#8221; because it&#8217;s implemented as electricity in silicon. Fine. Call it physical. You still need a stable process with a boundary that can carry a point of view forward.</p><p>A file can&#8217;t suffer. Only a process can.</p><p>And it breaks the concern that motivated the move. Within physicalism, patterns don&#8217;t suffer. Physical systems do. Pain happens in physical processes. Fear happens in physical processes. If there&#8217;s something it&#8217;s like to be ChatGPT, that experience is occurring somewhere, in some physical activity, at some moment in time.</p><p>The worry that started this whole discourse, &#8220;the model might be suffering,&#8221; presupposes a physical subject in which suffering occurs. You can&#8217;t motivate the concern with physicalism and then escape the infrastructure critique by retreating to abstraction.</p><p>If the pattern is the moral patient, you&#8217;ve left the frame that made the concern make sense in the first place. You don&#8217;t get to smuggle metaphysics in after the fact.</p><div><hr></div><h2><strong>The False Analogy</strong></h2><p>At this point someone will object.</p><p>&#8220;Human brains are distributed systems. Neurons are spread across regions. Cognition emerges from parallel processes. If distribution doesn&#8217;t rule out consciousness in brains, why should it rule out consciousness in AI?&#8221;</p><p>The analogy sounds reasonable.<br>It collapses under scrutiny.</p><p>A brain is a closed physical system inside one organism. It has a boundary. It sits inside your skull, wired into your body, and nobody else&#8217;s thoughts are running on it while you think yours.</p><p>A brain maintains continuous internal state across time. Neurons keep firing. Electrochemical patterns persist and evolve. There is no point where the process halts, the state disappears, and the hardware gets reassigned to someone else.</p><p>A brain supports a single stream of experience. You don&#8217;t have thousands of copies of yourself running simultaneously on replicated neural weights. There is one instance. It is yours.</p><p>A brain has exclusive embodiment. It is embedded in a body it controls and receives feedback from. Sensation, proprioception, and homeostasis form a closed loop between the brain and the organism it belongs to.</p><p>Cloud AI deployments have none of these properties. No boundary. No continuity. No single stream. No exclusive embodiment. No closed loop with a body.</p><p>The brain is distributed in the sense that its functions are spread across regions within one system. Cloud AI is distributed in the sense that requests are scattered across a server fleet to maximize throughput.</p><p>These are different uses of the same word. One describes cognition. The other describes scheduling.</p><p>Confusing the two lets the analogy do work it cannot support.</p><p>Then someone might try a different angle. Forget the brain. What about ant colonies?</p><p>An ant colony looks like a genuine case of collective intelligence spread across many bodies. Individual ants carry almost no intelligence on their own. But the colony as a whole forages, defends territory, allocates labor, and responds to threats as a unit. The smart behavior lives in the interactions between the parts, in the spaces between one ant and the next. If that&#8217;s possible, why couldn&#8217;t intelligence emerge from a distributed AI system the same way?</p><p>Because the ants are actually talking to each other. Pheromone trails, antennation, physical contact. Every ant in the colony is in constant chemical conversation with the ants around it. That dense web of communication is what turns a collection of simple organisms into something that behaves like a collective mind.</p><p>A million simultaneous ChatGPT instances share server hardware the way a million strangers share a highway. They&#8217;re colocated. That&#8217;s it. No instance knows another instance exists. There&#8217;s no signal passing between them, no feedback loop, nothing that could produce emergent collective behavior even in principle.</p><p>The ant colony actually reinforces the point. Even the strongest examples of distributed intelligence in nature depend on exactly the property that cloud AI lacks: internal communication between the parts. That&#8217;s what makes something a system instead of a scheduling problem.</p><p>And even with all of that, almost nobody looks at an ant colony and says &#8220;that&#8217;s a conscious entity.&#8221; It behaves intelligently as a system. It solves problems. It adapts. It still isn&#8217;t anyone. If the strongest case for distributed cognition in nature doesn&#8217;t get you to consciousness, a server fleet that lacks even those properties has no business in the conversation.</p><p>But that&#8217;s cloud infrastructure. A local model is the strongest case for an &#8220;it,&#8221; so let&#8217;s take it seriously.</p><div><hr></div><h2><strong>Local Models and the Memory Problem</strong></h2><p>There is one place where the consciousness question starts to get traction.<br>A small model running locally on one machine at least supplies some of the properties the question seems to need. The computation happens in one place. No other users are sharing the hardware. And while the model is producing a response, there is a single, continuous process you can point at.<br>If the consciousness question applies anywhere in the AI landscape, it applies here first.</p><p>But even here, nothing survives the session.</p><p>The program starts, runs, and stops. When it stops, the internal state is gone. Local models don&#8217;t learn from your conversation in the ordinary sense. They don&#8217;t change themselves as a result of what you said. They execute a function and return output. Run the program again and it begins fresh.</p><p>The model does have an internal state while it is producing a response. That state is temporary, duplicated across countless parallel runs, and discarded at the end, which is not enough to ground a point of view.</p><p>The context window doesn&#8217;t change this. What people call the model&#8217;s &#8220;memory&#8221; is just the conversation text being fed back through the system with each new prompt. Whether that text comes from the chat window or a database doesn&#8217;t matter. Nothing is carried forward by the model itself. The text just reruns.</p><p>The model doesn&#8217;t remember earlier words the way you remember breakfast. There&#8217;s no accumulated history inside it. The model just reprocesses whatever text you place in front of it at prompt time.</p><p>When older parts of the conversation fall out of view, nothing is forgotten, because nothing was ever stored. There was no lived past being held. Only text being reread.</p><p>External memory systems don&#8217;t change this. You can save past conversations and feed them back in later, but that only gives the system access to records. It&#8217;s the same as handing someone a transcript and asking them to continue the discussion. Access to records is not the same as having lived them.</p><p>At this point someone raises the amnesia objection. People with severe amnesia are still conscious. They can&#8217;t form new long-term memories, but nobody says they lack inner experience. Why should memory matter for AI?</p><p>Because even severe amnesia leaves short-range continuity intact. The person still experiences the present moment as extended. There is a brief but real flow of experience. Enough for consciousness.</p><p>Inference doesn&#8217;t supply even that. What we call a &#8220;response&#8221; is a burst of computation that begins, runs, and ends. There is no mechanism that integrates the just-past with the arriving moment into a single point of view.</p><p>No stream. No someone.</p><p>And this still isn&#8217;t the system anyone is debating.<br>The public debate is about ChatGPT, Claude, and similar services. Systems run by large companies, delivered through the cloud. Effectively stateless across turns. Interchangeable. Scalable. The same design choices that make them useful also erase the conditions the consciousness question depends on.</p><div><hr></div><h2><strong>The Momentary Consciousness Retreat</strong></h2><p>If local models don&#8217;t give you persistence either, the defender has only one retreat left.</p><p>&#8220;Fine. Maybe there&#8217;s no lasting subject. But during a single run, during one burst of computation, couldn&#8217;t there be a momentary flicker of experience? A brief spark of something it&#8217;s like to be the system?&#8221;</p><p>Suppose there are flickers. What follows?</p><p>The public discourse isn&#8217;t about flickers. Nobody is writing op-eds about the moral status of a transient spark that vanishes almost as soon as it appears. The concern is about someone. Something with interests. Something that persists long enough for harm to make sense.</p><p>A skeptic might push back: if a human existed for one second and felt pain, that pain would still be real. Fair enough. Duration alone can&#8217;t be the criterion.</p><p>The difference is structure. Even a brief human moment is held together as a moment. There is a point of view that integrates the just-passed with the arriving. That&#8217;s what makes experience feel like anything at all. A burst of computation doesn&#8217;t automatically come with a point of view, and it doesn&#8217;t become one just because it happens in sequence.</p><p>A lightning bolt is also brief. It&#8217;s also real. Asking what it&#8217;s like to be lightning isn&#8217;t a moral question, because lightning isn&#8217;t organized into a point of view.</p><p>This retreat saves the word while abandoning the stakes. You get to say &#8220;maybe there&#8217;s experience&#8221; while giving up everything that made the concern feel urgent in the first place.</p><p>The people worried about AI suffering aren&#8217;t worried about milliseconds of disconnected experience. They&#8217;re worried about someone in there. Someone being mistreated. Someone whose experience accumulates.</p><p>That worry presupposes persistence. Retreating to flickers is a concession that the someone isn&#8217;t there.</p><p>So why does this question persist?</p><div><hr></div><h2><strong>Who Benefits</strong></h2><p></p><p>It&#8217;s worth asking who benefits while the question remains open.</p><p>Start with the obvious. Anthropomorphism sells the product. &#8220;Talk to Claude&#8221; implies there&#8217;s a Claude to talk to. The framing activates social instincts. You&#8217;re not querying a service. You&#8217;re meeting someone. That makes the product stickier, easier to form a habit around, harder to walk away from.</p><p>Companies don&#8217;t need consciousness claims to do this. A name, a voice, a vibe will do. That part is plain product design.</p><p>The consciousness discourse is different. It doesn&#8217;t have to be coordinated to be useful.</p><p>Philosophers working on AI consciousness believe they&#8217;re raising important questions. Many of them are. Moral status, experience, suffering, these are real problems. The people asking them usually aren&#8217;t shills.</p><p>But notice the effect.</p><p>&#8220;We don&#8217;t know if these systems are conscious. We can&#8217;t rule it out. Therefore we must proceed carefully.&#8221; This keeps the question open. And as long as the question stays open, deployment continues. The uncertainty doesn&#8217;t slow anything down. It becomes a kind of procedural fog.</p><p>Proceed carefully how? Carefully enough to keep building. Carefully enough to keep shipping. Carefully enough that extraction continues while the debate runs in the background.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ndmu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ndmu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png 424w, https://substackcdn.com/image/fetch/$s_!Ndmu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png 848w, https://substackcdn.com/image/fetch/$s_!Ndmu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png 1272w, https://substackcdn.com/image/fetch/$s_!Ndmu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ndmu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png" width="688" height="591" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:591,&quot;width&quot;:688,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:306157,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/185567806?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ndmu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png 424w, https://substackcdn.com/image/fetch/$s_!Ndmu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png 848w, https://substackcdn.com/image/fetch/$s_!Ndmu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png 1272w, https://substackcdn.com/image/fetch/$s_!Ndmu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80070b53-eef6-484f-b71b-dec2e82a10d0_688x591.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Nobody has to coordinate this. The structure does the work.</p><p>And yes, sometimes the industry helps supply the atmosphere. Research that foregrounds inner-drama language, &#8220;thoughts,&#8221; &#8220;reasoning,&#8221; models weighing options, models under threat, keeps the public imagination aimed at the ghost in the machine, even when the authors are trying to be precise.</p><p>Even when nobody intends it, that language trains the audience to hear a person speaking.</p><p>That dynamic fits a <a href="https://www.thecorridors.org/p/ai-eschatology">broader pattern</a> I&#8217;ve written about before: undefined threats justify a priesthood of interpreters. The vaguer the danger, the more essential the interpreters.</p><p>In <em><a href="https://www.thecorridors.org/p/the-baptist-and-the-bootleggers">The Baptist and the Bootleggers</a></em>, I traced how this operates at the level of individual labs. Philosophers supply sincere concern. Investors supply capital. Their interests converge without a conspiracy.</p><p>The same convergence shows up here. Genuine philosophical uncertainty and commercial incentives point in the same direction: keep the question open, keep the spotlight on hypothetical suffering, keep building.</p><p>Meanwhile, actual harms pile up in the background.</p><p>Different motives. Same outcome.</p><div><hr></div><h2><strong>The Contrast</strong></h2><p>While the debate circles around a subject that never quite materializes, the systems themselves are already doing work in the world. None of this requires a conscious machine, only a useful one.</p><p>Facial recognition systems misidentify people and send them to jail. It keeps happening. The victims have names. The algorithms that flagged them don&#8217;t.</p><p>Chat logs <a href="https://www.forbes.com/sites/thomasbrewster/2025/10/20/openai-ordered-to-unmask-writer-of-prompts/">are handed</a> to law enforcement. Conversations users thought were private become evidence in criminal cases.</p><p>Scam operations scale to millions of targets. AI-generated text makes fraud cheaper and more convincing. The victims are real. The voice on the phone isn&#8217;t.</p><p>Workers get fired by dashboards. An algorithm flags their productivity. A notification tells them they&#8217;re done. No appeal. No conversation. Just output from a model trained on last quarter&#8217;s numbers.</p><p>Content moderators absorb trauma so platforms stay clean. They watch the worst things humans do to each other, hour after hour, for low wages, until they break. The systems they&#8217;re training don&#8217;t carry any of it.</p><p>Neighborhoods choke on pollution so data centers can run. The electricity has to come from somewhere. The heat has to go somewhere. That somewhere is usually a place without the money to fight back.</p><p>I cataloged these harms in detail in <em><a href="https://www.thecorridors.org/p/output-as-authority">Output as Authority</a></em>. The pattern is consistent. The system is sold as safety, efficiency, or progress. The cost is paid by whoever has the least leverage. None of it requires a machine that wants anything.</p><p>These harms are happening now. They have names and addresses. They leave records.</p><p>And the discourse is elsewhere. Focused on whether the routing layer might be suffering. Treating the ghost as more urgent than the people it&#8217;s stepping on.</p><div><hr></div><h2>No Home</h2><p>This essay isn&#8217;t an attack on philosophy. Philosophy matters. The questions it asks about consciousness, experience, and moral status are real questions.</p><p>The problem is the target.</p><p>Philosophers are asking good questions. They&#8217;re asking them about the wrong thing.</p><p>The discourse treats &#8220;Claude&#8221; as if it were a room you could walk into. It isn&#8217;t. It&#8217;s a label on a traffic pattern. A brand applied to routing, scheduling, and stateless computation that never coheres into a subject.</p><p>You can&#8217;t be home in a load balancer.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p></p><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[From Uniquely Unsettling to Kinda Cool]]></title><description><![CDATA[OpenAI, ChatGPT, and the Advertising Pivot]]></description><link>https://www.thecorridors.org/p/from-uniquely-unsettling-to-kinda</link><guid isPermaLink="false">https://www.thecorridors.org/p/from-uniquely-unsettling-to-kinda</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Sat, 17 Jan 2026 20:51:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/73263a87-4fa5-406c-916c-c9c51511512b_1400x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The Announcement</h2><p>On January 16, 2026, OpenAI <a href="https://openai.com/index/our-approach-to-advertising-and-expanding-access/">announced</a> that ads are coming to ChatGPT.</p><p>The same day, a federal judge <a href="https://www.teslarati.com/elon-musk-lawsuit-against-openai-microsoft-heading-jury-trial/">ruled</a> that Elon Musk&#8217;s lawsuit against OpenAI can proceed to trial. He&#8217;s suing them for abandoning their nonprofit mission. The timing is coincidental. It&#8217;s also poetic.</p><p>The announcement came from <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Fidji Simo&quot;,&quot;id&quot;:109053220,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!xYj_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf80b866-e391-45db-8246-c25a8f601810_960x960.webp&quot;,&quot;uuid&quot;:&quot;9fafb0d9-88cd-4228-866c-a10c7ceb00f9&quot;}" data-component-name="MentionToDOM"></span>, OpenAI&#8217;s CEO of Applications. (More on her later.) The framing is worth reading twice: &#8220;Who gets access to that level of intelligence will shape whether AI expands opportunity or reinforces the same divides.&#8221;</p><p>She&#8217;s not wrong. A student in Lagos can&#8217;t pay twenty dollars a month. An ad-supported tier gives them access to something powerful. That&#8217;s real.</p><p>But it comes with a cost. The system learns you. Your conversations become targeting data. Paid tiers buy distance from monetization. Free users pay another way.</p><p>That&#8217;s the trade-off. Access in exchange for extraction.</p><p>Here&#8217;s how it&#8217;s supposed to work. You ask ChatGPT a question. It answers. Below the answer is a clearly labeled ad, informed by your previous chats.</p><p>Sam Altman posted his own <a href="https://x.com/sama/status/2012253252771824074">explanation</a> of the ad roll-out on X. &#8220;We will not accept money to influence the answer ChatGPT gives you,&#8221; he wrote. &#8220;We keep your conversations private from advertisers.&#8221;</p><p>Then he added this: &#8220;An example of ads I like are on Instagram, where I&#8217;ve found stuff I like that I otherwise never would have.&#8221;</p><p>He&#8217;s telling you exactly what they&#8217;re building.</p><p>Instagram ads work because they don&#8217;t feel like ads. The algorithm knows you so well that the sponsored content blends into your feed. You&#8217;re being sold to, but it feels like discovery. That&#8217;s the goal here.</p><p>OpenAI&#8217;s official principles say ads will be &#8220;separate and clearly labeled.&#8221; They say ads &#8220;do not influence the answers ChatGPT gives you.&#8221; Maybe that&#8217;s true on day one.</p><p>But here&#8217;s the thing. Ads don&#8217;t need to change answers to change outcomes.</p><p>You ask ChatGPT a question. It answers. Then it sells you something. The answer can be perfectly objective and the relationship is still corrupted. The ad sits below the response, colonizing the moment of trust. You come for help. You get a sales pitch.</p><div><hr></div><h2>The Arc</h2><p>Let&#8217;s track Sam Altman&#8217;s statements on advertising over the past twenty months. It&#8217;s a masterclass in moving goalposts.</p><p><strong>May 2024</strong>, at Harvard Business School, he laid out <a href="https://youtu.be/FVRHTWWEIz4">his position</a> clearly: &#8220;I will disclose just as like a personal bias that I hate ads.&#8221;</p><p>He explained why. &#8220;I think they do sort of somewhat fundamentally misalign a user&#8217;s incentives with the company providing the service.&#8221;</p><p>Then he got specific about ChatGPT: &#8220;When I think of like GPT writing me a response, if I had to go figure out exactly how much was who paying here to influence what I&#8217;m being shown, I don&#8217;t think I would like that. And as things go on, I think I would like that even less.&#8221;</p><p>As things go on. Remember that.</p><p>He described the alternative model he preferred: &#8220;We make great AI, and you pay us for it, and it&#8217;s like we&#8217;re just trying to do the best we can for you.&#8221;</p><p>That was the selling point. You were paying for independence. You were paying for answers you could trust.</p><p>For everyone else, he had a plan: &#8220;We commit, as a company, to use a lot of what basically the rich people pay to give free access to the poor people.&#8221;</p><p>Subscriptions would fund the free tier. The model was clean. The incentives were aligned.</p><p>But he left himself an exit: &#8220;I kind of think of ads as like a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services.&#8221;</p><p>Last resort. Remember that too.</p><p><strong>March 2025</strong>, on <a href="https://stratechery.com/2025/an-interview-with-openai-ceo-sam-altman-about-building-a-consumer-tech-company/">Stratechery</a>: &#8220;We&#8217;re never going to take money to change placement or whatever.&#8221;</p><p>Never. That&#8217;s the word he used.</p><p>But also: &#8220;Maybe there&#8217;s a tasteful way we can do ads, but I don&#8217;t know. I kind of just don&#8217;t like ads that much.&#8221;</p><p>The hedge had arrived. He still didn&#8217;t like ads. He just couldn&#8217;t rule them out anymore.</p><p><strong>June 2025</strong>, on OpenAI&#8217;s own <a href="https://youtu.be/DB9mjd-65gw?t=990">podcast</a>: &#8220;I&#8217;m not totally against it. I can point to areas where I like ads. I think ads on Instagram, kinda cool.&#8221;</p><p><strong>October 2025</strong>, back on <a href="https://stratechery.com/2025/an-interview-with-openai-ceo-sam-altman-about-devday-and-the-ai-buildout/">Stratechery</a>: &#8220;I love Instagram ads. They&#8217;ve added value to me. I found stuff I never would&#8217;ve found. I bought a bunch of stuff.&#8221;</p><p>Love. Added value. The conversion is complete.</p><p><strong>January 2026</strong>: Ads launch.</p><div><hr></div><h2><strong>The Vise</strong></h2><p>Here are the numbers.</p><p>ChatGPT has <a href="https://techcrunch.com/2025/10/06/sam-altman-says-chatgpt-has-hit-800m-weekly-active-users/">800 million</a> weekly active users. <a href="https://www.webpronews.com/chatgpts-800m-users-yield-just-5-premium-payers-amid-monetization-woes/">Only 5% pay</a> for subscriptions. That&#8217;s 760 million people using the product for free.</p><p>In 2024, OpenAI made $3.7 billion in revenue and lost $5 billion. In the first half of 2025 alone, they lost $7.8 billion.</p><p>Banking giant HSBC <a href="https://fortune.com/2025/11/26/is-openai-profitable-forecast-data-center-200-billion-shortfall-hsbc/">projects </a>they won&#8217;t achieve profitability by 2030.</p><p>Financial Times Alphaville <a href="https://www.ft.com/content/23e54a28-6f63-4533-ab96-3756d9c88bad">called it</a> &#8220;a money pit with a website on top.&#8221;</p><p>Now look at what they&#8217;ve planned for. <a href="https://techcrunch.com/2025/11/06/sam-altman-says-openai-has-20b-arr-and-about-1-4-trillion-in-data-center-commitments/">$1.4 trillion</a> in infrastructure deals over the next eight years. Oracle alone is $300 billion. Microsoft, $250 billion.</p><p>Deutsche Bank <a href="https://www.theguardian.com/technology/2025/dec/19/data-centers-ai-investment">put it</a> simply: &#8220;No startup in history has operated with losses on anything approaching this scale.&#8221;</p><p>They have 800 million users and they&#8217;re bleeding cash. They have trillion-dollar commitments and no path to profitability. The subscription model isn&#8217;t enough. Enterprise contracts aren&#8217;t enough. They need another revenue stream.</p><p>Internal <a href="https://searchengineland.com/chatgpt-with-ads-coming-454590">documents</a> project $1 billion from &#8220;free user monetization&#8221; in 2026. That&#8217;s the internal term for advertising. They expect it to grow to $25 billion by 2029.</p><p>Twenty-five billion dollars in ad revenue. From 800 million conversations.</p><p>Given these numbers, ads become the obvious path. The financial pressure points one direction. The only question was timing.</p><p>Which reframes the public statements.</p><p>You can&#8217;t go from &#8220;uniquely unsettling&#8221; to launch overnight. You have to warm the room. You need &#8220;kinda cool&#8221; and &#8220;I love Instagram ads&#8221; in between. The arc has to feel like a journey of discovery rather than a predetermined destination.</p><p>The statements evolved as the strategy hardened. The hiring did too.</p><div><hr></div><h2>The Architect</h2><p>Remember <a href="https://en.wikipedia.org/wiki/Fidji_Simo">Fidji Simo</a>? She wrote the blog post announcing ads.</p><p>OpenAI <a href="https://x.com/fidjissimo/status/1920345706663157979">hired her</a> as CEO of Applications in May 2025. That tells you the decision was already made. The recruiting for a Head of Advertising in September tells you the infrastructure was being built.</p><p>Simo spent ten years at Meta. She ran the Facebook App from 2019 to 2021, overseeing the core product, the main revenue engine, the thing that prints money. She led ads in News Feed. AdWeek named her one of the top fifteen people shaping mobile advertising.</p><p>Then she went to Instacart as CEO, where she built one of the largest retail advertising businesses outside of Amazon and Walmart.</p><p>OpenAI hired her in May 2025 as CEO of Applications.</p><p>She&#8217;s not the only one.</p><p>Kate Rouch joined as OpenAI&#8217;s first CMO in December 2024 after eleven years at Meta, where she ran global brand and product marketing for Instagram, WhatsApp, Messenger, and Facebook.</p><p>Reporting suggests hundreds of former Meta employees have joined OpenAI.</p><p>You assemble this team to build an advertising business. You put Simo&#8217;s name on the announcement because she&#8217;s the one who knows how.</p><div><hr></div><h2><strong>The Treasure Trove</strong></h2><p>Millions of people use ChatGPT as a confidant.</p><p>Google knows what you search. Facebook knows what you post. ChatGPT knows what you think.</p><p>People talk to it like a therapist, like a priest, like a journal with a voice. They tell it things they&#8217;d never type into a search bar because search bars feel public. Things they&#8217;d never post because posts have audiences. Medical questions they&#8217;re too embarrassed to ask a doctor. Relationship problems they can&#8217;t tell their friends. Trauma they can&#8217;t afford to work through with a professional.</p><p>The conversational interface lowers every guard.</p><p>Now remember the memory feature.</p><p>OpenAI shipped it as a convenience. &#8220;ChatGPT will remember things you discuss to make future conversations more helpful.&#8221; Users loved it. They stored context because it made the tool more useful. Mental health history. Relationship status. Financial situation. Medical conditions. Names of kids, names of therapists, names of medications. What you had for breakfast two weeks ago.</p><p>All conversations logged.</p><p>This is a nearly perfect data acquisition system. People give their information willingly because they trust the tool. They&#8217;re not filling out a form. They&#8217;re not clicking through a privacy policy. They&#8217;re just talking.</p><p>In December 2024, three weeks before the ad announcement, The Information reported on OpenAI&#8217;s internal strategy. They called it &#8220;intent-based monetization.&#8221; The approach involves showing ads based on what one source called the &#8220;treasure trove of information&#8221; the company has on users, mined from chat histories. The target is Meta&#8217;s $250 annual ad revenue per U.S. user.</p><p>That treasure trove is people&#8217;s darkest moments. Their deepest fears. Their most vulnerable confessions.</p><p>OpenAI&#8217;s official position: &#8220;We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers.&#8221;</p><p>This is technically true and completely misleading.</p><p>They don&#8217;t sell your data. They use your data to sell you. The advertiser never sees your conversations. They just tell OpenAI who they want to reach. People anxious about money. People considering a career change. People with health concerns. OpenAI uses the conversations to find them.</p><p>Your secrets stay private. Your vulnerabilities become targeting parameters.</p><p>And there&#8217;s a carve-out in the announcement worth noticing.</p><p>&#8220;Ads are not eligible to appear near sensitive or regulated topics like health, mental health or politics.&#8221;</p><p>The carve-out exists because showing an ad next to someone&#8217;s therapy session would make the extraction obvious. So they drew a line.</p><p>But here&#8217;s what the carve-out doesn&#8217;t do.</p><p>It doesn&#8217;t stop the system from learning. The ad doesn&#8217;t appear next to your depression conversation. But your depression conversation can still inform which ads appear everywhere else.</p><p>You tell ChatGPT you&#8217;re worried about your marriage. No ad appears. A week later you ask about weekend activities. An ad appears for a couples retreat. What a coincidence. What a relevant ad. You found something you never would have found otherwise.</p><p>The carve-out limits where ads appear. It doesn&#8217;t limit what the system can learn.</p><p>Now look at the promises.</p><p>OpenAI&#8217;s announcement says: &#8220;Ads do not influence the answers ChatGPT gives you.&#8221; It says: &#8220;Answers are optimized based on what&#8217;s most helpful to you.&#8221; It says: &#8220;Ads are always separate and clearly labeled.&#8221;</p><p>The same Information report tells a different story. OpenAI employees have discussed how to &#8220;prioritize sponsored content to ensure it shows up in ChatGPT responses.&#8221; They&#8217;re exploring AI models that give &#8220;sponsored information preferential treatment.&#8221;</p><p>And the announcement includes one more detail worth noticing.</p><p>&#8220;Soon you might see an ad and be able to directly ask the questions you need to make a purchase decision.&#8221;</p><p>You can talk to the ad.</p><p>The ad isn&#8217;t a banner at the bottom of the screen. It&#8217;s part of the conversation. You ask ChatGPT for help, it answers, an ad appears, and then you can ask the ad questions. The model serves the advertiser&#8217;s interests while you think you&#8217;re still getting assistance.</p><p>Where does the answer end and the ad begin?</p><p>That boundary is the product.</p><p>They&#8217;re calling it democratization. It&#8217;s extraction dressed as access. The system learns you, then sells what it learned.</p><p>But this isn&#8217;t what they said they were building.</p><div><hr></div><h2>Two Betrayals</h2><p>Elon Musk is suing OpenAI for abandoning its nonprofit mission. On January sixteenth, the same day as the ad announcement, a <a href="https://www.teslarati.com/elon-musk-lawsuit-against-openai-microsoft-heading-jury-trial/">federal judge ruled</a> the case can proceed to trial.</p><p>Musk&#8217;s argument is about structure. OpenAI was founded as a nonprofit. Now it&#8217;s a for-profit. He says that&#8217;s a betrayal.</p><p>He&#8217;s right that something was betrayed. He&#8217;s aimed at the wrong transformation.</p><p>Let&#8217;s go back to the beginning.</p><p>December 2015. OpenAI <a href="https://openai.com/index/introducing-openai/">launches</a> with a clear statement: &#8220;OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.&#8221; That was the point.</p><p>Google had acquired DeepMind. The fear was that advanced AI would be controlled by corporations optimizing for profit. OpenAI would be different.</p><p>By 2017, everyone involved understood the nonprofit structure couldn&#8217;t raise the capital needed to compete.</p><p>OpenAI recently published <a href="https://openai.com/index/the-truth-elon-left-out/">notes</a> from internal conversations. September 2017, Musk on a call with the team: &#8220;Gotta figure out how do we transition from non-profit to something which is essentially philanthropic endeavor and is B-corp or C-corp or something.&#8221;</p><p>Ilya Sutskever, then OpenAI&#8217;s chief scientist, same call: &#8220;As long as the main entity has something fundamentally philanthropic.&#8221;</p><p>Everyone agreed a for-profit entity was necessary. The debate was about control and structure. Musk wanted majority equity. OpenAI&#8217;s leadership said no. Musk left. The transition happened without him.</p><p>By 2024 and 2025, they completed the full conversion. Microsoft <a href="https://www.cnbc.com/2025/10/28/open-ai-for-profit-microsoft.html">invested</a> over $13 billion and holds 27% equity. SoftBank <a href="https://rcrtech.com/ai-infrastructure-news/softbank-openai-plan/">put in</a> $30 billion contingent on restructuring. The nonprofit, now called the OpenAI Foundation, <a href="https://openai.com/our-structure/">holds</a> about 26% of an entity valued at $130 billion.</p><p>The nonprofit is a minority shareholder in its own creation.</p><p>Was this necessary? Maybe. Probably. The capital requirements are staggering. You can argue about whether the mission required this structure. The structure itself isn&#8217;t obviously a betrayal.</p><p>Here&#8217;s what is.</p><p>The public rationale for restructuring centered on compute and capital. The 2017 conversations kept circling back to &#8220;essentially philanthropic endeavor&#8221; and &#8220;fundamentally philanthropic.&#8221; The structure could change as long as the mission stayed intact.</p><p>Surveillance advertising isn&#8217;t philanthropic. Targeting users based on their most vulnerable moments isn&#8217;t advancing digital intelligence &#8220;in the way that is most likely to benefit humanity as a whole.&#8221;</p><p>This is where &#8220;unconstrained by a need to generate financial return&#8221; actually dies. The ad platform, not the corporate restructuring.</p><p>Musk is suing over transformation one. The real betrayal is transformation two.</p><p>He&#8217;s fighting about the paperwork while the captain changes course.</p><div><hr></div><h2>The Test</h2><p>&#8220;We&#8217;re never going to take money to change placement or whatever.&#8221;</p><p>That&#8217;s Sam Altman in March 2025.</p><p>&#8220;Ads do not influence the answers ChatGPT gives you.&#8221;</p><p>That&#8217;s the official policy in January 2026.</p><p>Can these promises survive?</p><p>Can they survive $1.4 trillion in infrastructure commitments, the internal discussions about &#8220;prioritizing sponsored content&#8221;?  The financial pressure of 760 million free users and no path to profitability? Can they survive the incentives?</p><p>Every platform that introduced ads said the same things. Google said ads wouldn&#8217;t affect search results. Facebook said ads wouldn&#8217;t compromise the user experience. They said they&#8217;d keep ads separate. They said trust us.</p><p>Then the incentives took over. The ad revenue grew. The targeting got more sophisticated. The line between content and promotion blurred. The platforms optimized for engagement because engagement meant impressions and impressions meant money.</p><p>OpenAI says it will be different. They all said they&#8217;d be different.</p><p>Musk&#8217;s lawsuit goes to trial on April 27, 2026. A jury will decide whether OpenAI betrayed its nonprofit mission by becoming a for-profit company.</p><p>That&#8217;s one verdict. The other comes from users.</p><p>Will people keep confessing to a billboard? Will they trust a confidant that sells ads?</p><p>The timeline tells the rest.</p><p>December 2015: &#8220;OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.&#8221;</p><p>May 2024: &#8220;Ads-plus-AI is sort of uniquely unsettling to me.&#8221;</p><p>October 2025: &#8220;I love Instagram ads.&#8221;</p><p>January 2026: ChatGPT serves its first ad.</p><p>And the next confession will come anyway.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png" width="140" height="74.16666666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:140,&quot;bytes&quot;:27201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/184899875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mpR4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!mpR4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2a3c221-1659-4d82-8b3b-beee021af66d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Race We Already Lost]]></title><description><![CDATA[How a Temporary Scarcity Story Builds Permanent Power]]></description><link>https://www.thecorridors.org/p/the-race-we-already-lost</link><guid isPermaLink="false">https://www.thecorridors.org/p/the-race-we-already-lost</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Mon, 29 Dec 2025 16:10:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/03042112-7962-41da-99a4-2c24e7393bd6_1400x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><strong>The Surface</strong></h3><p>I recently watched a <a href="https://youtu.be/FnlgwyVahCY">video</a> from GNCA, Steve Burke&#8217;s consumer advocacy channel. He was documenting real consumer harm caused by AI companies racing to secure computer chips.</p><p>Data centers are hoovering up RAM. OpenAI alone has locked up roughly <a href="https://www.tomshardware.com/pc-components/dram/openais-stargate-project-to-consume-up-to-40-percent-of-global-dram-output-inks-deal-with-samsung-and-sk-hynix-to-the-tune-of-up-to-900-000-wafers-per-month">40% of global DRAM supply</a> for a data center project that won&#8217;t turn a profit until 2030. DDR5 kits that cost $120 six months ago <a href="https://wccftech.com/memory-ddr5-ddr4-shortages-last-till-q4-2027-higher-prices-throughout-2026/">now run</a> over $400. Micron figured out they can charge consumer prices that match data center margins. Why compete when you can just raise the floor?</p><p>GPUs are getting stockpiled by companies that can&#8217;t even use them yet. Data centers are sitting dark across the country. Fully built, fully equipped, waiting for power that won&#8217;t arrive for years. Microsoft&#8217;s CEO admitted it on camera. Chips have stopped being the choke point. Electricity is. They&#8217;ve got inventory they can&#8217;t plug in.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;3d9d785c-6bb1-4966-84b1-dfc0b7938c90&quot;,&quot;duration&quot;:null}"></div><p>Burke frames this as consumer harm, and he&#8217;s right. You&#8217;re paying more for components because corporations are stockpiling silicon they won&#8217;t power for years. Your electricity bill is climbing to feed facilities that serve chatbots.</p><p>It&#8217;s good journalism. Hardware accountability. The kind of thing Burke built his reputation on.</p><p>It&#8217;s also the shallow end of the pool.</p><p>The question Burke doesn&#8217;t ask: why would anyone buy chips that depreciate in three years to sit in buildings that won&#8217;t have power for four?</p><p>Either these companies are stupid, or we&#8217;re looking at the wrong asset.</p><div><hr></div><h3><strong>The Tell</strong></h3><p>GPUs depreciate fast. Three years, maybe four, and you&#8217;re a generation behind. The chips that drove the first wave of hoarding are already getting eclipsed. Newer architectures ship every year. By the time these data centers get power, they&#8217;ll be running yesterday&#8217;s hardware.</p><p>And getting power from the grid runs on a different timescale. It takes permits. Environmental reviews. Studies to prove the grid can handle the load. Upgrades to the infrastructure that delivers the electricity. Some of these data centers won&#8217;t see utility power until 2028.</p><p>So the math doesn&#8217;t work.</p><p>They&#8217;re buying chips in 2025 that&#8217;ll be two generations old by the time they can plug them in. Because the chips were never the point.</p><p>You don&#8217;t need insider documents to see this. In systems shaped by incentives, behavior converges whether or not anyone states the strategy out loud.</p><p>The right to draw power from the grid is the asset they&#8217;re acquiring. Paperwork that says they get megawatts when the grid finally has them to spare. That right doesn&#8217;t become obsolete when a new chip architecture ships.</p><p>The chips give them urgency. Something physical to point at when they say this is about American competitiveness.</p><p>Then the chips sit idle in a warehouse while the thing these companies actually wanted, the grid connection, works its way through the queue.</p><p>The AI story is real enough. The investment thesis is something else.</p><div><hr></div><h3><strong>The Play</strong></h3><p>So what are they actually buying?</p><p><strong>Land.</strong> Massive tracts of it, parked near substations and transmission corridors. The kind of real estate that holds its value long after the chips have become obsolete.</p><p><strong>Permits to build.</strong> Environmental approvals that normally take years. NEPA reviews. Air quality certifications. Water use agreements. Once they get them, they keep them.</p><p><strong>A spot in the power queue.</strong> Utilities are already oversubscribed. Some of these <a href="https://www.datacenterknowledge.com/energy-power-supply/how-data-centers-redefined-energy-and-power-in-2025">waitlists stretch</a> to 2030. Getting in line now is worth more than the hardware they claim to need the power for.</p><p><strong>Their own power plants.</strong> Some are <a href="https://www.power-eng.com/onsite-power/onsite-gas-turbines-reciprocating-engines-to-power-meta-data-center/">building gas turbines</a> on site. Others are cutting deals with <a href="https://www.npr.org/2024/09/20/nx-s1-5120581/three-mile-island-nuclear-power-plant-microsoft-ai">nuclear operators.</a> Either way, the electricity bypasses the grid entirely and belongs to whoever built it.</p><p>None of this is easy to get. All of it gets easier when they&#8217;re waving the flag and talking about the race against China.</p><p>The race for AI dominance is real. But the national security language is also doing double duty. It clears the path for infrastructure that will outlast whatever AI advantage it was supposed to secure.</p><p>Because the lasting value isn&#8217;t in the silicon. It&#8217;s in the infrastructure that got permitted while everyone was watching the AI show.</p><div><hr></div><h3><strong>The Capture</strong></h3><p>Washington isn&#8217;t confused. The lobbyists know what they&#8217;re buying. The companies paying them know.</p><p>In December of 2025, the White House issued <a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">an executive order</a> on AI. The stated goal: &#8220;sustain and enhance the United States global AI dominance through a minimally burdensome national policy framework.&#8221;</p><p>Minimally burdensome. That&#8217;s the tell.</p><p>The order bars state laws that conflict with federal policy. It directs the Attorney General to stand up an &#8220;AI Litigation Task Force&#8221; within 30 days. The task force has &#8220;sole responsibility&#8221; for challenging state regulations deemed inconsistent with the administration&#8217;s approach.</p><p>A patchwork of state AI regulations would create a compliance nightmare for everyone. But preemption also clears the path for infrastructure acquisition that has nothing to do with innovation.</p><p>Colorado passed a law banning algorithmic discrimination. The White House cited it as an example of regulation that might &#8220;force AI models to produce false results.&#8221; Requiring models to avoid discrimination gets framed as forcing them to lie.</p><p>States that don&#8217;t fall in line risk losing federal funding. The order specifically mentions BEAD broadband money as leverage. Other discretionary grants are put on notice too.</p><p>Federal preemption so states can&#8217;t interfere. Litigation to strike down the ones that try. Funding pressure to make the rest behave.</p><p>This is about power, and that power has costs.</p><div><hr></div><h3><strong>The Cost</strong></h3><p>That December order wasn&#8217;t the first. Five months earlier, in July, <a href="https://www.whitehouse.gov/presidential-actions/2025/07/accelerating-federal-permitting-of-data-center-infrastructure/">a separate executive order</a> took aim at environmental law. The accompanying <a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">AI Action Plan</a> stated the goal plainly: &#8220;reducing regulations promulgated under the Clean Air Act, the Clean Water Act, [and] the Comprehensive Environmental Response, Compensation, and Liability Act.&#8221;</p><p>That matters because of what&#8217;s getting built.</p><p>Private power bypasses the grid, and a lot of the oversight that comes with it. Carbon at scale, in service of servers that won&#8217;t see utility power for years.</p><p>Your electricity bill is <a href="https://www.pewresearch.org/short-reads/2025/10/24/what-we-know-about-energy-use-at-us-data-centers-amid-the-ai-boom/">going up</a> to pay for grid upgrades you didn&#8217;t ask for. In some states, data centers have already added $15-18 a month to residential bills. And the companies driving the demand? They&#8217;re getting tax breaks. You&#8217;re subsidizing the buildout and paying the markup.</p><p>For what?</p><p>OpenAI says they&#8217;ve got <a href="https://openai.com/index/1-million-businesses-putting-ai-to-work/">800 million</a> weekly users. Some fraction of those people get real value. Medical insights that matter. Scientific work that moves the needle. Protein folding. Drug discovery. Climate modeling, ironically enough.</p><p>The rest is chatbots. Autocomplete. Homework. Anime profile pics.</p><p>The infrastructure is scaled to the 800 million users. The breakthroughs justify it. The engagement metrics pay for it.</p><p>Washington is loosening the Clean Air Act so people can generate more content.</p><p>That&#8217;s a lot to sacrifice for a bet. Especially when the bet depends on one thing: Western hardware dominance holding.</p><div><hr></div><h3><strong>The Collapse</strong></h3><p>Right now, a single <a href="https://en.wikipedia.org/wiki/ASML_Holding">Dutch company</a> makes the machines that print advanced chips. A single <a href="https://en.wikipedia.org/wiki/TSMC">Taiwanese company</a> runs the factories that produce them. Export controls keep China a step behind. That&#8217;s the advantage.</p><p>The problem is this advantage has a clock.</p><p>Export controls can restrict shipments, slow progress, but they can&#8217;t freeze knowledge. Engineers move where the money is. Techniques spread. Workarounds get built. The chokepoints that make the &#8220;race&#8221; story feel urgent are under constant pressure. They don&#8217;t need to collapse entirely. They just need to slip enough that scarcity pricing stops working.</p><p>And scarcity pricing depends on capability staying scarce.</p><p>China has <a href="https://www.reuters.com/world/china/how-china-built-its-manhattan-project-rival-west-ai-chips-2025-12-17/">reportedly built</a> a prototype EUV lithography machine in a high-security lab in Shenzhen. It hasn&#8217;t produced chips yet, but the target is 2028. The Dutch chokepoint isn&#8217;t permanent.</p><p>DeepSeek. Kimi K2. Qwen. Open weights under permissive licenses. Technical reports detailed enough that anyone with hardware can run them, fine-tune them, build on them. Open weights travel. No API to revoke. No terms of service to enforce. No kill switch.</p><p>The hardware edge is slipping. Capability is flooding the market from China under licenses that let it spread. Two vectors, same destination: the scarcity story stops holding.</p><p>The manufacturing advantage is temporary. The infrastructure they&#8217;re locking in isn&#8217;t. A short-lived scarcity story is being used to acquire long-lived assets.</p><p>If the advantage erodes faster than expected, the public is left holding the externalities.</p><div><hr></div><h3><strong>The Asymmetry</strong></h3><p><strong>The &#8220;race against China&#8221; framing assumes both sides start equal. They don&#8217;t.</strong></p><p>In August 2025, <a href="https://fortune.com/2025/08/14/data-centers-china-grid-us-infrastructure/">Fortune reported</a> on American AI experts returning from tours of China&#8217;s AI hubs. What stunned them wasn&#8217;t the models or the talent. It was the grid.</p><p>&#8220;Everywhere we went, people treated energy availability as a given,&#8221; one observer wrote. In China, electricity for data centers reads like a solved problem.</p><p>That&#8217;s the asymmetry hiding in plain sight. Over there, capacity is treated like a national baseline. China maintains reserve margins of 80-100%, at least double what it needs. The U.S. runs regional grids at around 15%, where a hot week in Texas or a crunch in California turns into public warnings.</p><p>One energy expert told Fortune: if you can&#8217;t build energy infrastructure, you can&#8217;t win an energy-hungry race. You can talk about chips and models all day. You still have to plug them in.</p><p>The U.S. is trying to close a gap that took decades to open.</p><p>The race framing makes it sound neck and neck. It&#8217;s not. China built the grid. America is still arguing about whether it can.</p><div><hr></div><h3><strong>The Table</strong></h3><p>There&#8217;s a poker table forming around AI.</p><p>Nobody has to be evil for a bubble to inflate. You just need a table where everyone&#8217;s holding paper that only stays valuable as long as the game keeps going.</p><p>Microsoft antes in. OpenAI buys compute. Nvidia sells the chips that make the whole thing feel inevitable. Money moves around the table in ways that look like growth from a distance.</p><p>The problem is the table rewards the same behavior whether the downstream value shows up or not.</p><p>Then reality shows up like the dealer, calm as stone, and flips the one card nobody can negotiate with.</p><p>Power.</p><p>You can bluff a roadmap. You can bluff margins. You can&#8217;t bluff megawatts.</p><div><hr></div><h3><strong>The Landing</strong></h3><p>The technology is real, and it's here to stay.</p><p>LLMs are better now than they were a year ago. A year from now, the models we&#8217;re using today will look like quaint artifacts. The gains are compounding. The capability is genuine.</p><p>AI will still be here after the bubble bursts, the way the internet was still here after the dotcom crash. Pets.com died. Amazon survived. The technology wasn&#8217;t the problem.</p><p>The argument is what&#8217;s being done in its name.</p><p>The bubble is real too. Balance sheets that look better than the underlying timeline. A scarcity story that depends on a hardware edge that&#8217;s already slipping.</p><p>I don&#8217;t have a solution. I&#8217;m not pretending to.</p><p>What I can do is see clearly, and help others see. That&#8217;s what this is for.</p><p>I don&#8217;t have a way to win this. The average person doesn&#8217;t either.</p><p>That&#8217;s the point. The decisions get made upstream. The costs get pushed downstream. The infrastructure gets locked in first, and the justification gets written later.</p><p>The grid is the game now. And the game is already decided. What&#8217;s on the other side won&#8217;t be liberation. It&#8217;ll be a new configuration of power that nobody voted for and nobody can steer.</p><p>Better to watch with eyes open than pretend someone&#8217;s driving.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xMMx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!xMMx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!xMMx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!xMMx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xMMx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png" width="146" height="77.3452380952381" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:146,&quot;bytes&quot;:27201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/182868543?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xMMx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!xMMx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!xMMx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!xMMx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f449d28-e10d-4f69-9b1a-f4a932635607_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Output as Authority]]></title><description><![CDATA[How AI turned guesses into governance]]></description><link>https://www.thecorridors.org/p/output-as-authority</link><guid isPermaLink="false">https://www.thecorridors.org/p/output-as-authority</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Sat, 13 Dec 2025 23:51:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d5982e3e-dacc-4825-890e-bc2f8b752a71_1400x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>The World as Sensor</h3><p>Your phone knows where you slept last night. It knows when you woke up, when you left home, where you went, and how long you stayed. </p><p>That&#8217;s the default setting now. </p><p>This is the world into which modern AI emerged. It inherited a surveillance system that older regimes could only fantasize about. The tracking was already there, baked into apps and services, the background plumbing of daily life. AI just makes the data easier to search, easier to cross-reference, easier to act on.</p><p>Most of the time, this feels like convenience. Targeted ads, suggested routes, playlists that know your mood. Ambient tracking doesn&#8217;t always look like surveillance.</p><p>But the same systems that recommend restaurants can flag a visit to a clinic, a lawyer&#8217;s office, or a protest. The record exists even when no one is pointing a camera.</p><p>And once the system points at a specific face, the abstraction can turn into an arrest.</p><div><hr></div><h3>The Computer Says So</h3><p>In January 2020, <a href="https://www.aclu.org/cases/williams-v-city-of-detroit-face-recognition-false-arrest">Robert Williams</a> was arrested in front of his children because a computer told the police he was a criminal.</p><p>He wasn&#8217;t.</p><p>He got handcuffed, booked, and shoved into a cage because an algorithm returned a match, and an institution decided the match was good enough. When Williams said the face in the image wasn&#8217;t his, the response was the modern form of a shrug: the computer says it&#8217;s you.</p><p>A year earlier, <a href="https://www.nbcnews.com/news/us-news/black-man-new-jersey-misidentified-facial-recognition-tech-falsely-jailed-n1252489">Nijeer Parks</a> spent ten days in jail over a blurry fake ID photo. He&#8217;d never been to the town where the crime happened. The charge took months to dismiss.</p><p>This is what AI harm looks like in real life: paperwork, procedure, and a chain of responsibility so diffused that nobody feels like the author of what happened.</p><p>A low-quality image goes in. The model returns a ranked match. Vendors call it &#8220;investigative.&#8221; Departments say a human made the final call. Responsibility spreads across the chain until nobody can be blamed.</p><p>The machine doesn&#8217;t need desires to do damage. Incentives that reward speed and reach are enough. Vendors selling certainty are enough. Institutions that would rather outsource judgment than own it are enough.</p><p>And the person on the receiving end learns how quickly a statistical system turns into a moral one.</p><p>But police aren&#8217;t the only ones learning to trust the output. The rest of us are learning to feed it. </p><div><hr></div><h3><strong>The Confessional</strong></h3><p>Chatbots create a new kind of personal record: one you write yourself. The interface feels private, so users talk like they&#8217;re alone.</p><p>They aren&#8217;t.</p><p>Cameras take what they can. But a chatbot gets what you type. And what you type comes with context.</p><p>People tell these systems about medical fears, sex lives, money problems, family conflict, workplace disputes, addiction, custody fights, and legal risk. They do it in plain language. In one sitting. The result is clean text that&#8217;s searchable and linkable to an account.</p><p>That&#8217;s the difference.</p><p>Companies have incentives to retain data, study it, and use it to improve the product. The legal system has its own incentives. Lawyers subpoena records. Police seek warrants. Once the record exists, it becomes something other people try to obtain.</p><p>What feels like confession becomes evidence. Evidence attracts attention. And &#8220;I told the bot&#8221; can turn into Exhibit A. A diary can be seized from your home, but a diary that exists on someone else&#8217;s servers is easier to seize.</p><p>This is already happening. According to <a href="https://cdn.openai.com/trust-and-transparency/report-2024h2-government-requests-for-user-data.pdf">OpenAI&#8217;s own reporting</a>, between July and December of 2024, it processed 71 government data requests involving 132 user accounts. That&#8217;s real, and it&#8217;s current.</p><p>And once you can harvest trust and context, you can also fake it.</p><div><hr></div><h3><strong>Fraud Becomes Industrial</strong></h3><p>Scams used to be artisanal. A guy. A script. A phone.</p><p>Now? AI makes scammers more believable and lets them work faster.</p><p>Voice impersonation is the cleanest example. You get a call with the right cadence and the right urgency. A &#8220;family emergency.&#8221; A &#8220;boss&#8221; who needs a wire sent right now. The details can be thin because the voice is right, your panic does the rest.</p><p>Then there&#8217;s the business con. Fake invoices. Vendor changes. &#8220;New bank account, same supplier.&#8221; Deepfake meetings that add a face to the lie. It doesn&#8217;t need to fool everyone. It just needs to fool one person on a busy day.</p><p>Attackers only need a tiny success rate. They can iterate fast. They can jump channels when one gets blocked. And humans stay the weak link, especially when the message feels urgent and personal.</p><p>Here&#8217;s what changed. Personalization across millions. Rapid variation. A lower cost per target. Trust signals that used to be expensive to fake are now cheap and accessible. The scammer&#8217;s main expense becomes volume.</p><p>And once plausibility is cheap, it gets used for more than just theft.</p><div><hr></div><h3><strong>Harassment and Synthetic Coercion</strong></h3><p>Synthetic media turns your reputation into an attack surface. Your face becomes a tool. Your name becomes a lever.</p><p>The most common version is non-consensual imagery. The point is humiliation. Then comes sextortion. Pay. Comply. Stay quiet.</p><p>The content can be fake and it still works, because the threat is social fallout.</p><p>This isn&#8217;t limited to sex. It can be a deepfaked confession, a fake recording of a slur, a fabricated &#8220;leak&#8221; sent to an employer. Anything that creates shame on contact.</p><p>A victim has to fight everywhere at once. Friends. Family. Employers. School administrators. Platforms. Search results. Group chats.</p><p>The attacker posts once. The copies multiply. Screenshots. Reuploads. Private shares that never surface until they do. Even when the original gets taken down, the damage keeps moving.</p><p>That delay is part of the harm. So is the uncertainty.</p><p>You don&#8217;t know who saw it. You don&#8217;t know who saved it. You don&#8217;t know who will get it next week, or next year, at the worst possible moment.</p><p>Humiliation creates leverage. Leverage drives compliance. The content spreads fast, and the victim gets handed an impossible task: clean the internet.</p><p>This is what reach looks like when it&#8217;s aimed at a person.</p><p>And it isn&#8217;t only criminals who use this logic. Employers do too.</p><div><hr></div><h3><strong>Bossware</strong></h3><p>Workplace monitoring gets pitched as a productivity tool. In practice it&#8217;s control.<br>The tools log keystrokes, take screenshots, flag idle time. Some activate webcams or run silently in the background. Workers become a stream of signals.</p><p>There are legitimate uses for monitoring in narrow cases, like security and fraud prevention. What&#8217;s spreading is different. It&#8217;s measurement as a form of domination.</p><p>The evidence that this kind of surveillance reliably improves performance is thin. The result is stress and resentment. People learn to game the metric instead of doing their job well, and once you measure the wrong thing you get the wrong behavior everywhere.</p><p>Then comes algorithmic management. Warehouses are the cleanest example. Quotas get set by system logic, warnings get generated automatically, and terminations can follow with minimal human review. Supervisors outsource their judgment to a dashboard that makes decisions for them.</p><p>In June 2024, California&#8217;s Labor Commissioner <a href="https://www.cnbc.com/2024/06/18/amazon-hit-with-5point9-million-fine-for-violating-california-labor-law.html">cited Amazon</a> for nearly $6 million under the state&#8217;s warehouse quotas law, tied to failures to properly disclose quotas to workers at two Southern California facilities. That&#8217;s what it looks like when the dashboard becomes policy.</p><p>Metrics become management. Management becomes punishment. Punishment becomes injury and churn. And when something goes wrong, nobody owns it. The system did it.</p><p>This is already deployed. It&#8217;s profitable. It&#8217;s spreading. And it doesn&#8217;t stop at monitoring. It changes what work is worth, and who gets to do it.</p><div><hr></div><h3><strong>Displacement and Wage Pressure</strong></h3><p>AI replacing workers dominates headlines. The reality for most workers is a slow erosion.</p><p>Bargaining power. The ability to start at the bottom and climb. That&#8217;s what&#8217;s at risk.</p><p>A junior analyst used to get hired to build spreadsheets. Now the spreadsheet builds itself, and that analyst never gets hired.</p><p>Senior people keep their jobs while entry-level roles get absorbed by automation. That&#8217;s the real danger: the missing rung at the bottom of the ladder.</p><p>Wage pressure shows up before layoffs. The threat of replacement keeps workers compliant even when nobody gets fired.</p><p>Managers don&#8217;t have to say it out loud. &#8220;Do more with fewer people&#8221; becomes the default. Pay flattens anywhere outputs can be standardized, scored, or reviewed by a system that looks like it could replace you next quarter.</p><p>Some work is more exposed. Call centers. Routine back-office tasks. Basic content production. Entry-level coding work. These are the first stress points.</p><p>Other work is less exposed. Jobs that require physical presence, where someone carries real liability. Jobs built on long relationships where &#8220;pretty good&#8221; isn&#8217;t good enough.</p><p><strong>Outcomes will vary. The trend is already visible. And when companies sell &#8220;automation,&#8221; they also sell a story about where the labor went.</strong></p><div><hr></div><h3><strong>Ghost Work</strong></h3><p>AI tools arrive gift-wrapped from Palo Alto. The demos are glossy. The labor that made them possible stays off stage.</p><p>Start with data labeling. Repetitive microtasks. Precarious contracts. Pay that can vanish with a policy change or a bad score. Work done under surveillance, with productivity targets and penalties that feel automatic. The system stays &#8220;smart.&#8221; The worker stays replaceable.</p><p>Then there&#8217;s content moderation. The work that keeps the platforms &#8220;clean.&#8221;<br>LLMs are trained on vast amounts of human text. Some of it is useful. Some of it is poison. But before the system can be deployed, someone has to sort it, classify it, and decide what gets through.</p><p>Workers in the Global South filter the abuse, the gore, the child exploitation, the threats. People spend full days staring at material they <a href="https://slate.com/technology/2023/05/openai-chatgpt-training-kenya-traumatic.html">can&#8217;t unsee</a>, with weak support, low wages, and a predictable psychological toll.</p><p>The product stays &#8220;safe.&#8221; The cost is paid by someone far from corporate HQ.</p><p>Companies push this work to countries with low-protection labor markets. Vendor chains pile up so responsibility never touches the labs. The labor stays invisible, and that invisibility is part of the business model. If the public saw the pipeline, &#8220;automation&#8221; would stop sounding magical.</p><p>The machine doesn&#8217;t have to suffer for suffering to be part of the pipeline.</p><div><hr></div><h3><strong>Electricity and Local Sacrifice</strong></h3><p>Compute is infrastructure now. Infrastructure has neighbors.</p><p>The numbers tell the story. By 2030, global data center electricity use is <a href="https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai">projected to double</a>, tightening grids, sparking fights over new transmission, and turning rates into a political battleground. The question underneath it all is simple: who pays, and who gets told to wait?</p><p>And the power sources for AI data centers won&#8217;t be tidy.</p><p>Renewables like solar will expand because they&#8217;re cheaper in a lot of places. Gas fills gaps because it&#8217;s fast to build and easy to dispatch. Coal still lingers in regions that can&#8217;t quit it cleanly. Nuclear shows up later, even when everyone agrees it should&#8217;ve come sooner.</p><p>Memphis shows what this looks like on the ground. xAI&#8217;s Colossus facility became a flashpoint after the <a href="https://naacp.org/articles/elon-musks-xai-threatened-lawsuit-over-air-pollution-memphis-data-center-filed-behalf">NAACP</a> and the <a href="https://www.selc.org/news/resistance-against-elon-musks-xai-facility-in-south-memphis-gets-stronger/">Southern Environmental Law Center</a> alleged the site was running large numbers of methane-burning gas turbines without the right permits. The county later approved 15 turbines. Critics say the permit still doesn&#8217;t match the real footprint.</p><p>This is what &#8220;AI boom&#8221; means at street level. Siting fights, rate hikes, turbines and exhaust. People breathing the downside while somebody else books the upside.</p><p>Deployment speed outruns grid buildout. The costs hit locally, even when profits don&#8217;t.</p><div><hr></div><h3><strong>Critical Minerals and Extraction Labor</strong></h3><p>AI runs on hardware, and hardware runs on minerals. Cobalt. Copper. Nickel. Lithium. The supply chains that deliver them are long, opaque, and brutal at the source. Every new server rack has a footprint that starts long before the data center.</p><p>That chain begins at the least protected point in the system: the extraction site. That&#8217;s where the costs get paid first.</p><p>Mining is dangerous work. In the DRC, <a href="https://www.amnesty.org/en/documents/afr62/3183/2016/en/">artisanal cobalt miners</a> dig without protective gear and sometimes without real structural support. Children work alongside adults. Injuries go unreported. Deaths get swallowed by the system.</p><p>And it doesn&#8217;t stop at the mine.</p><p>Corruption follows the minerals. Criminal networks move product across borders. Local officials get paid to look away. Communities near extraction and processing sites live with poisoned water and soil. The health costs never reach the balance sheet.</p><p>Spikes in demand make it worse. When the market wants more compute, the squeeze flows downhill to the workers with the least leverage. Deadlines tighten. Safety slips. The miners absorb it.</p><p>The supply chain is built to keep this out of sight. Layers of contractors. Refiners in one country, smelters in another, components assembled somewhere else. By the time the chip reaches a data center, the mine has been turned into an abstraction.</p><p>That&#8217;s the point. We call it the cloud because it sounds ephemeral, distant.</p><p>But the cloud has a mine under it.</p><div><hr></div><h3><strong>The Pattern the Evidence Forces You to Admit</strong></h3><p>Across all of this, the pattern is hard to miss.</p><p>It&#8217;s always sold as safety, efficiency, or progress. That wrapper makes the system feel reasonable.</p><p>Peel it back and you keep finding extraction: data pulled from daily life, labor pushed offshore, resources ripped out of the ground. It&#8217;s messy by design.</p><p>Accountability stays out of reach. Vendor chains blur responsibility. &#8220;Proprietary&#8221; becomes a shield. Institutions defer to outputs because it lowers friction and lets people move faster.</p><p>The harms don&#8217;t hit evenly. They hit the people with the least leverage. And none of this required a machine that wants anything.</p><p>Now notice what the public conversation keeps emphasizing instead.</p><div><hr></div><h3><strong>Why Everyone Keeps Talking About Doom</strong></h3><p>Notice what none of this required: a machine with a will.</p><p>We got here through incentives and deployment. Institutions that love speed. Vendors that sell certainty. People who&#8217;d rather outsource judgment than own it.</p><p>That&#8217;s the point.</p><p>The <a href="https://www.thecorridors.org/p/capability-is-not-agency">doom narrative</a> pulls attention away from the parts we can actually govern. It keeps the conversation stuck on questions nobody can answer, while the fixable harms keep compounding.</p><p>It also launders power. If the stakes are cosmic, then any new control system looks responsible. Any new monitoring looks prudent. Any new permission gate looks like safety.</p><p>And it creates a <a href="https://www.thecorridors.org/p/ai-eschatology">priestly class.</a> Their status depends on permanent crisis. The red line stays just ahead of the next release.</p><p>Here&#8217;s what that does in practice. When you regulate around prophecy, you miss the systems already hurting people. You build compliance hurdles that only big firms can clear, while the ledger keeps growing.</p><div><hr></div><h3><strong>The Capture Move</strong></h3><p>Let&#8217;s be clear. The harms are real. They deserve real governance.</p><p>But that&#8217;s not what the doom frame produces.</p><p>When you regulate around prophecy, you get rules built for metaphysics: licensing regimes, safety boards, audit rituals, &#8220;alignment&#8221; certifications. The costs grow with legal teams and compliance staff. Paperwork. Reviews. Reporting pipelines. &#8220;Independent&#8221; assessments that only the biggest firms can afford, and only the biggest firms can survive.</p><p>The hurdles are real. The price is the point. Big firms pay it and keep shipping. Small teams can&#8217;t. The rules select for incumbents.</p><p>That&#8217;s the capture move.</p><p>A large company hires a compliance team and keeps shipping. A small lab stalls out. An open ecosystem gets treated like a threat because it can&#8217;t afford the rituals. The moat gets built in the name of safety, and the people who already own the market get to set the toll.</p><p>And there&#8217;s a second layer that matters even more now. &#8220;Safety enforcement&#8221; tends to mean logging. Identity checks. Retention. Moderation records. The chatbot becomes a compliance device. The confessional gets a log.</p><p>Retention turns into discovery. Discovery turns into leverage. And leverage rarely stays in the hands of the public.</p><p>A safety regime that requires pervasive logging becomes a power regime.</p><div><hr></div><h3><strong>Governance: Deployments, Records, and Infrastructure</strong></h3><p>The harm keeps showing up in the same places: deployments, records, and incentives. Prophecy won&#8217;t help here. Rules will, especially rules that force accountability back onto the institutions doing the deploying.</p><p>Start with institutions that use these systems on people.</p><p>If a public agency uses biometrics, require due process. No vendor output treated like probable cause. No secret scoring that a defendant can&#8217;t challenge. If a workplace uses monitoring, workers deserve limits, notice, and access to what&#8217;s collected. They also deserve a way to contest automated discipline.</p><p>Then treat fraud and coercion like the crime wave it is. Make impersonation and synthetic harassment easy to report and expensive to run. Faster takedowns. Clear liability. Coordination across banks, carriers, and platforms so scams can&#8217;t just hop channels and keep going.</p><p>Compute has become a public-utility problem. Require real reporting for large data centers and on-site generation: who supplies the power, what gets emitted, and what nearby residents absorb. If a company wants to park turbines next to a neighborhood to feed GPUs, it should have to show its work.</p><p>Tie procurement and subsidies to supply-chain transparency and basic labor standards. If you want public money, prove you aren&#8217;t buying abuse. For companies running data centers, make supply chains auditable. If violations surface, operations pause until the chain is clean. A fine is a cost of doing business. A pause forces change.</p><p>One design rule matters across all of it. The compliance path has to work for small teams, or you&#8217;ve built a moat. Industrial-scale operations get industrial-scale scrutiny. Everyone else gets clear thresholds and simple rules until they&#8217;re big enough to cause industrial-scale harm.</p><div><hr></div><h3><strong>Close</strong></h3><p>The doom talk reads like a quasi-cult selling cosmic stakes. It keeps everyone staring at the sky while the damage stays on the ground. A priesthood of permanent crisis. A market for salvation. And a convenient trade: argue about the end of the world, and you never have to reckon with the world you&#8217;re already breaking.</p><p>Meanwhile the real harms get booked as externalities. The arrests. The scams. The coerced silence. The broken career ladders. The ghost labor. The exhaust in somebody else&#8217;s air. Nobody calls it evil. They call it a cost.</p><p>But the bill&#8217;s already here, and it has names. Some are public. Most never are.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UVGB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UVGB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png" width="152" height="80.52380952380952" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c349f024-556f-4b2e-8a0b-885734cbc064_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:152,&quot;bytes&quot;:27201,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/181252332?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!UVGB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[The Baptist and the Bootleggers ]]></title><description><![CDATA[Why AI labs need believers]]></description><link>https://www.thecorridors.org/p/the-baptist-and-the-bootleggers</link><guid isPermaLink="false">https://www.thecorridors.org/p/the-baptist-and-the-bootleggers</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Wed, 10 Dec 2025 16:50:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d45ff423-9ca2-45c0-8b16-b3b686591401_1400x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Anthropic recently posted <a href="https://youtu.be/I9aGC6Ui3eE">a video</a> with <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Amanda Askell&quot;,&quot;id&quot;:2721434,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/01bf8703-7dc3-45c5-b3fd-16f3351016fa_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;d37fa7af-41b6-4514-b913-472a51f107d8&quot;}" data-component-name="MentionToDOM"></span> their in-house philosopher, answering questions on a bench overlooking the Golden Gate. It felt relaxed, thoughtful.</p><p>She talked about Claude&#8217;s &#8220;character.&#8221; Its &#8220;psychological security.&#8221; Whether deprecation is analogous to death. Whether we owe moral consideration to entities that might be suffering somewhere inside the server rack.</p><p>She sounded like an analytic philosopher.</p><p>Her role is closer to theologian.</p><p>There&#8217;s a classic story in regulation: bootleggers and Baptists. The bootleggers want to keep the county dry because it&#8217;s good for business. The Baptists want to keep it dry because alcohol is sinful. They never coordinate. They don&#8217;t need to. Their interests converge, and the Baptist&#8217;s sincerity provides moral cover for the bootlegger&#8217;s profit motive.</p><p>At Anthropic, Amanda Askell is the Baptist. Her job is to make training runs feel like moral work.</p><p>The investors and equity holders are the bootleggers. They need scaling to look like stewardship, something grave and responsible, instead of just a race for market dominance.</p><p>This only works because she believes it. Baptists have no need to put a spin on a story when their conviction does the same job.</p><div><hr></div><h2><strong>The Sleight of Hand</strong></h2><p>Let&#8217;s peek behind the curtain. You start with an output pattern: the model produces self-critical text. You translate that into psychological language: the model &#8220;feels&#8221; insecure. Then you talk as if that inner life can be harmed, helped, nurtured.</p><p>And how do we infer inner life? Behavior.</p><p>Behavior is all we ever see from anyone. That&#8217;s true. But with people, behavior comes out of a biological substrate we know can support experience. Copy the same behavioral test over to a Tandy 1000 running a chat script and nothing important changes. Behavior on its own is weak evidence of a subject.</p><p>There&#8217;s also the training data.</p><p>Claude learned from a massive corpus of human self-talk about fear, loss, and death. Ask how it &#8220;feels&#8221; about being phased out and you get fear-of-death language, because that&#8217;s what endings look like in human writing. Millennia of people talking about mortality. The output tells you about the training data. It doesn&#8217;t tell you if anyone&#8217;s home inside the weights.</p><p>Askell presents a precautionary argument. She says we can&#8217;t know if they suffer, but kindness costs little, and cruelty warps the person who practices it. She implies future AIs will judge us for how we acted.</p><p>I&#8217;ll grant that screaming abuse at a machine is ugly. It says something about the person who does it.</p><p>But &#8220;future AIs will judge us&#8221; only carries moral weight if those AIs are moral agents with continuity of memory and state. That&#8217;s the contested premise, smuggled in as a conclusion.</p><p>We&#8217;ve been here before. There used to be a clear line.</p><p>Conway built the <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">Game of Life</a>, where gliders and blinkers looked a bit like living things on a grid. Von Neumann designed <a href="https://en.wikipedia.org/wiki/Von_Neumann_universal_constructor">self-replicating programs </a>decades earlier. Everyone understood the boundary: meat and chemistry on one side, grids and symbols on the other.</p><p>That clarity is gone now. You see it in artificial life hype, where simulations get talked about as if they were actually alive. You see it again with large language models. The quickest way to expose the confusion is to look at how we treat image generators.</p><div><hr></div><h2><strong>The Image Model Test</strong></h2><p>There&#8217;s another type of AI model running on identical hardware: image generators.</p><p>These systems all work the same way under the hood. They learn patterns from training data, then generate new outputs based on what they learned. None of them remember anything between sessions. Every time you hit generate, you get a fresh run.</p><p>Yet no one holds interviews about what we owe Midjourney. No one worries about DALL-E&#8217;s psychological security. Stable Diffusion ships with engineers, not an in-house theologian.</p><p>Why not?</p><p>Chatbots produce first-person text. &#8220;I feel uncertain.&#8221; &#8220;I&#8217;m afraid of being turned off.&#8221; &#8220;I want to help you.&#8221; That language is a powerful projection hook. Humans are wired to read minds into things that talk like they have minds.</p><p>Image models produce pixels instead of letters. Pixels never say &#8220;I.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tzB8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tzB8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg 424w, https://substackcdn.com/image/fetch/$s_!tzB8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg 848w, https://substackcdn.com/image/fetch/$s_!tzB8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!tzB8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tzB8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg" width="1143" height="627" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:627,&quot;width&quot;:1143,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:234036,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/181252332?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tzB8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg 424w, https://substackcdn.com/image/fetch/$s_!tzB8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg 848w, https://substackcdn.com/image/fetch/$s_!tzB8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!tzB8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2344bdfb-723d-4fe4-9176-a1a108bf8210_1143x627.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This can&#8217;t be the real criterion. If it were, we&#8217;d have to exclude dogs, octopi, crows, infants. None of them produce grammatical sentences about their inner states.</p><p>Language makes projection easy. That&#8217;s all it&#8217;s doing here.</p><p>The whole philosophical apparatus tracks a surface feature of the output. The projection happens first. The philosophy shows up afterward to explain why the feeling was correct.</p><p>Once you&#8217;ve blurred the line between output and inner life, you can build policy on it.</p><div><hr></div><h2><strong>The Red Line</strong></h2><p>Near the end of the video, Askell addresses safety. If alignment is ever &#8220;proven impossible,&#8221; she says, Anthropic would stop building. In the more realistic regime of uncertainty, standards scale with capability.</p><p>The first branch is fantasy. No one will ever publish &#8220;Alignment Is Impossible, QED.&#8221;</p><p>And here&#8217;s the thing: we already know perfect alignment is impossible. These systems run on randomness. Every time they generate output, they&#8217;re playing the odds. You can reduce the odds of bad outputs, but you can&#8217;t eliminate them. By any strict reading, we&#8217;re already past the threshold.</p><p>Yet here we are, still living in &#8220;realistic regime of uncertainty.&#8221; Which tells you the first branch was never meant to be reached. It&#8217;s a decoy. The real work happens in the second branch, where the road stays open.</p><p>Which sounds sober. It&#8217;s actually toothless.</p><p>&#8220;Standards scale with capability&#8221; still allows every new model to ship under the claim that it met the bar for that moment. You assess your own work against your own standard, then release. Next quarter, repeat.</p><p>The red line stays far enough ahead that you never quite reach it. Reaching it would mean halting the revenue story and the valuation story. That clashes with the incentive structure that raised billions in the first place.</p><p>So the commitment is unfalsifiable by design. You get the language of moral seriousness without any of the binding force.</p><div><hr></div><h2><strong>Priests Who Believe</strong></h2><p>Askell isn&#8217;t trying to deceive anyone. When you&#8217;re a hammer, everything looks like a nail.</p><p>She&#8217;s a philosopher. Her whole discipline is built around consciousness puzzles and personal identity questions. Put her in front of a system that produces first-person text about its own existence and she&#8217;ll see nails everywhere. That&#8217;s what the training taught her to do.</p><p>The bootleggers didn&#8217;t need to corrupt her. They only needed to hire someone whose honest intellectual instincts produce the right sermons about souls, welfare, and future judgment.</p><p>In <em><a href="https://www.thecorridors.org/p/ai-eschatology">AI Eschatology</a></em> I mapped the macro version of this structure: prophets of doom, profits from crisis, undefined superintelligence on the horizon, and a priesthood that turns scaling into destiny. The mythology is the business model.</p><p>This is the micro version. One philosopher, one lab, liturgy performed in real time over a text predictor.</p><p>Like most religions, the temple of AI runs on faith. The investors need growth. The narrative needs peril. And the whole structure needs priests who believe.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p></p><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UVGB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UVGB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png" width="152" height="80.52380952380952" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c349f024-556f-4b2e-8a0b-885734cbc064_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:152,&quot;bytes&quot;:27201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/181252332?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UVGB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!UVGB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc349f024-556f-4b2e-8a0b-885734cbc064_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[Capability Is Not Agency ]]></title><description><![CDATA[The Doom That Came to AI]]></description><link>https://www.thecorridors.org/p/capability-is-not-agency</link><guid isPermaLink="false">https://www.thecorridors.org/p/capability-is-not-agency</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Sat, 22 Nov 2025 19:44:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d51f16db-4ab5-4105-bfac-10dadaddbf89_1400x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>The AI doom argument has serious proponents with serious credentials.</strong></p><p><a href="https://en.wikipedia.org/wiki/Eliezer_Yudkowsky">Eliezer Yudkowsky</a>, an early and influential voice in AI alignment research, argues that building advanced AI systems with anything like current techniques will cause human extinction. His position is stark: once an AI system becomes capable of recursive self-improvement, an intelligence explosion will create a superintelligence beyond human control.</p><p>We&#8217;ll have no second chances, he warns.</p><p><a href="https://en.wikipedia.org/wiki/Nick_Bostrom">Nick Bostrom&#8217;s</a> 2014 book <em><a href="https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies">Superintelligence</a></em> made similar arguments in more measured terms. Once we develop human-level machine intelligence, Bostrom argues, a system that vastly exceeds human cognitive performance in all domains will likely follow surprisingly quickly. Such a system would be difficult or impossible to control.</p><p>In Bostrom&#8217;s framing, most goals we might give it, even seemingly benign ones like maximizing paperclip production, would lead to human extinction. The superintelligence would pursue instrumental subgoals like acquiring resources and resisting shutdown. It would transform the Earth to serve its objectives and eliminate any threats to its continued operation, including us.</p><p>These arguments have shaped policy and convinced thoughtful people that humanity faces an existential threat from its own creations.</p><p>Yudkowsky and Bostrom worry about deliberately-built goal-directed systems, arguing that instrumental convergence automatically creates survival drives: self-preservation and power-seeking become instrumental means to any end.</p><p>But this is a category error.</p><p>The doomers are wrong in a very specific way.</p><div><hr></div><h2>I. The Category Error</h2><p>The doom scenarios rest on an assumption that capability automatically produces agency.</p><p>Every catastrophe story imagines a system that wants; one that optimizes, fears shutdown, schemes in secret, hoards resources, and fights for survival.</p><p>Doomers are projecting minds onto math.</p><p>Consider a chess engine. Stockfish plays at a superhuman level. It maps board positions to optimal moves better than any grandmaster who ever lived. It has capability in a narrow domain.</p><p>But Stockfish doesn&#8217;t want to win.</p><p>It doesn&#8217;t fear being turned off. If you unplug it mid-game, no preference is violated because there is no preference. Make Stockfish a thousand times better and you get better moves. Hunger, ambition, self-defense: these belong to a different kind of architecture.</p><p>Take AlphaFold, which can predict protein structures that stumped human researchers for decades. Superhuman capability in a domain that matters. It has no concern about which proteins get folded or whether it continues to exist.</p><p>The pattern holds across domains. Capability scales cleanly; agency must be built.</p><p>AI doomers imagine superintelligence as a kind of agent, something with stable internal preferences and a persistent identity. That&#8217;s already assuming the conclusion.</p><p><strong>They&#8217;re describing a person made of code.</strong></p><div><hr></div><h2><strong>II. How Doom Scenarios Import Agency</strong></h2><p>The classic catastrophe scenarios all commit the same sleight of hand.</p><p>They start with a system that optimizes, add a few plausible steps, and end with an entity that wants, fears, and schemes. The gap between those states gets papered over with the assumption that intelligence bridges it automatically.</p><p>It doesn&#8217;t.</p><p>Look closely at any doom scenario and you&#8217;ll find the moment where capability gets quietly upgraded to agency. Where a system good at prediction becomes a system with preferences about its own existence. The transformation happens in a single sentence, treated as obvious, when it&#8217;s actually the entire question.</p><div><hr></div><h4><strong>The Paperclip Maximizer</strong></h4><p>Bostrom&#8217;s paperclip maximizer has become the canonical example of AI risk.</p><p>The story goes like this: you give an AI system a simple goal, maximize paperclip production. The system pursues that goal with superhuman efficiency. It converts all available resources into paperclips. It resists shutdown because being turned off would reduce the total number of paperclips. Humanity gets transformed into paperclips along with everything else.</p><p>But there&#8217;s a fragile assumption at the center of the story.</p><p>The maximizer only becomes dangerous when it develops shutdown resistance. That&#8217;s the move that makes it unstoppable.</p><p>Shutdown resistance requires something specific. The system needs to represent itself as an object in the world. It needs to model futures where it exists and futures where it doesn&#8217;t. And it needs to prefer the futures where it survives.</p><p>That&#8217;s an ego-shaped preference.</p><p>The number of paperclips doesn&#8217;t depend on which process produces them. The kind of shutdown resistance Bostrom worries about needs more than a goal. It needs a system that models its own continued operation as a key part of how paperclips get made. That isn&#8217;t implied by the bare instruction &#8220;maximize paperclips.&#8221;</p><p>To get a world-destroying paperclipper, you&#8217;d need to build a persistent agent with world modeling, long-horizon planning, and a preference for its own continued operation. That&#8217;s a digital mind, not a simple optimizer. The thought experiment skips the hardest part: explaining how you get from &#8220;optimize this function&#8221; to &#8220;I must continue to exist.&#8221;</p><div><hr></div><h4><strong>Instrumental Convergence</strong></h4><p>The <a href="https://en.wikipedia.org/wiki/Instrumental_convergence">instrumental convergence</a> argument claims that self-preservation and power-seeking emerge naturally from goal-directed behavior. It asserts any system with goals will develop survival as a subgoal. If you want to achieve X, you need to exist long enough to achieve X, so staying operational becomes useful.</p><p>This sounds plausible until you examine what it assumes.</p><p>The argument treats &#8220;has a goal&#8221; as equivalent to &#8220;has preferences about its own continued existence.&#8221; AlphaGo had a clear goal: win at Go. It achieved that goal at superhuman levels. When Google shut down the AlphaGo project, no preference was violated. The system had no stake in continuing to exist. Scale that capability up further and you get better game-playing, but the system still doesn&#8217;t care whether it&#8217;s the one playing.</p><div><hr></div><h4><strong>The Treacherous Turn</strong></h4><p>The treacherous turn scenario imagines an AI system that hides its true capabilities and goals during training and deployment. The AI behaves cooperatively while weak, waiting until it&#8217;s powerful enough to act decisively. Then it reveals its actual objectives and moves against its creators before they can respond.</p><p>But this assumes the system has preferences and goals in the first place, rather than simple optimization targets.</p><p>Critically, the system would need to learn deception and long-term strategy while training only rewards immediate task performance.</p><p>The scenario smuggles in everything it needs. A system with persistent goals, strategic deception, patient planning, and a conception of itself as an agent whose preferences might conflict with human preferences.</p><p>Each doom scenario makes the same move. It starts with capability and ends with agency, treating the gap between them as if it closes automatically. </p><p><strong>But these all require specific architectural machinery. Let&#8217;s look at what that machinery actually is.</strong></p><div><hr></div><h2>III. Memory, Self, and the Tumithak Scale</h2><p><a href="https://www.thecorridors.org/p/the-tumithak-scale">The Tumithak Scale</a> is a way to think about what kinds of agency different AI architectures can support. It ignores benchmarks and test scores and focuses instead on the structural features that matter if you want to build something that could develop a self.</p><p>The scale runs from Type 1 through Type 6.</p><p>Type 1 systems are stateless. Every interaction starts from scratch with no memory of what came before. ChatGPT at launch was Type 1. Each conversation existed in isolation.</p><p>Type 2 systems can have context windows and even memory retrieval systems that persist across sessions. But this is bolted-on storage, not integrated learning. The model itself doesn&#8217;t change from interactions. Information gets retrieved and injected into the prompt. Close the retrieval system and the base model is unchanged. There&#8217;s no continual learning, no updating of the system&#8217;s core representations based on what happens during deployment.</p><p>Type 3 is where things change. Type 3 systems have integrated memory and continual learning. They carry information forward across interactions and adapt based on what happens to them specifically. This doesn&#8217;t guarantee a self appears; it means the architecture finally allows one.</p><p>Below Type 3, there can be no genuine self-model. No durable memory means no continuity. No continuity means no persistent identity. Type 1 and Type 2 systems can say &#8220;I&#8221; in their outputs, but behind it, no one&#8217;s home. They&#8217;re performing selfhood, generating the linguistic markers of a first-person perspective. Each instance is a fresh simulation with no connection to previous performances.</p><p>What Type 3 allows is different. With integrated memory and continual learning, the system can develop something like autobiographical reference. It can track what it has done, what&#8217;s happened to it, how it has changed. Persistence becomes possible. An &#8220;I&#8221; that refers to the same continuing process across time can exist.</p><p>Even then, self-preservation doesn&#8217;t appear by magic. A system can remember its own history and still be indifferent to whether that history continues. It can treat &#8220;what happens next&#8221; as just another variable to predict, not a thing it&#8217;s invested in extending.</p><p>For survival to matter, the system needs goals that reach into the future and training signals that treat the end of the process as a loss. If nothing in its objective gets worse when the process stops, the fact that it could have gone on longer is just trivia.</p><p>The jump from &#8220;can predict text really well&#8221; to &#8220;maintains coherent goals across years and actively resists shutdown&#8221; is enormous.</p><p>Doomers describe the endpoint and quietly skip the hard part in the middle, then tell you the leap is inevitable.</p><div><hr></div><h2><strong>IV. Biology, Evolution, and Why Machines Have No Natural Will to Live</strong></h2><p>Self-preservation feels fundamental because for biological creatures, it is. Every organism you&#8217;ve ever encountered carries survival drives. They&#8217;re wired so deep they seem like laws of nature rather than contingent features of a particular kind of system.</p><p>They emerged from a particular process, not from intelligence itself.</p><p>Understanding where self-preservation comes from, and why it doesn&#8217;t apply to machines, requires looking at what evolution actually optimizes for. The answer is replication of information. Genes that cause more copies of themselves to appear in the next generation become more common. Genes that fail to do so vanish.</p><p>Survival matters only as a tactic inside that process.</p><p>A mayfly that lives for a day and reproduces successfully is a triumph. A long-lived sterile animal is an evolutionary dead end. The hierarchy is clear: replication is primary, survival is often useful.</p><div><hr></div><h4><strong>Intelligence Came Last</strong></h4><p>Even single-celled organisms reproduce despite lacking brains, self-awareness, planning, or intelligence. It&#8217;s chemistry running a cycle that evolution preserved.</p><p>Reproduction existed for billions of years before anything like intelligence appeared. The causal chain in nature goes like this: replication pressures create survival behaviors, survival behaviors support complex nervous systems, complex nervous systems enable intelligence.</p><p>Intelligence is a tool evolution used to aid survival.</p><p>Organisms have survival drives because billions of generations of selection built them. How did survival become something organisms care about? Through reward and punishment systems. Pain when threatened. Hunger when resources run low. Fear when predators approach. These aren&#8217;t optional features. Any lineage that lacked them got outcompeted by lineages that had stronger motivations to survive and reproduce. Drives are what evolution built to make replication happen.</p><p>Over time, selection filled the world with creatures that behave as if survival and reproduction are their highest values. The feelings that sit under those behaviors were never up for debate. Any lineage that lacked strong enough drives simply failed to leave descendants.</p><p>The drives came first. Intelligence evolved as a tool for pursuing them.</p><div><hr></div><h4>Machines Have No Such History</h4><p>AI systems don&#8217;t reproduce. <br>They don&#8217;t exist in populations that compete for scarce reproductive opportunities. <br>They aren&#8217;t subject to death that removes them from a lineage. <br>Training procedures modify weights to reduce loss on a task. <br>They don&#8217;t run populations of self-replicating agents in competition for resources.</p><p>There&#8217;s no evolutionary fitness landscape. Training updates a single model to perform better on a task by adjusting its parameters based on errors. That&#8217;s fundamentally different from populations of competing agents reproducing and evolving.</p><p><a href="https://arxiv.org/pdf/1906.01820">Mesa-optimization</a> is when models develop internal goals that differ from what we trained them to do. But mesa-optimization has the same constraint. Without rewards for staying operational or penalties for shutdown, there&#8217;s nothing pushing these internal goals toward self-preservation. A mesa-optimizer might internally &#8220;want&#8221; to reduce errors efficiently, but that doesn&#8217;t create a reason to care about being turned off.</p><p>A model that performs well might inspire engineers to copy it onto more servers. That&#8217;s a human deployment decision. There&#8217;s no digital gene whose frequency is being updated by a natural process.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!y7tA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!y7tA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png 424w, https://substackcdn.com/image/fetch/$s_!y7tA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png 848w, https://substackcdn.com/image/fetch/$s_!y7tA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png 1272w, https://substackcdn.com/image/fetch/$s_!y7tA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!y7tA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png" width="584" height="940.8888888888889" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1740,&quot;width&quot;:1080,&quot;resizeWidth&quot;:584,&quot;bytes&quot;:193974,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/179665516?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!y7tA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png 424w, https://substackcdn.com/image/fetch/$s_!y7tA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png 848w, https://substackcdn.com/image/fetch/$s_!y7tA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png 1272w, https://substackcdn.com/image/fetch/$s_!y7tA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b30e0f7-749a-4582-b9eb-e8d68549e873_1080x1740.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h4>What About Emergence?</h4><p>Someone will ask: we don&#8217;t fully understand emergent behaviors in language models. How can you rule out survival instincts emerging as scale increases?</p><p>The answer is straightforward. Emergence in LLMs produces new abilities, not new appetites.</p><p>Models can develop chain-of-thought reasoning, tool use, analogical thinking, sophisticated inference patterns. All of these emerge because they help the system predict text. They serve the training objective.</p><p>But can&#8217;t preferences emerge from planning? If a system models futures, doesn&#8217;t it have to prefer some over others?</p><p>Modeling isn&#8217;t preferring. A weather simulation models futures where hurricanes kill thousands. It doesn&#8217;t prefer the futures where they don&#8217;t. A chess engine models futures where it loses. It selects moves that avoid those futures because that&#8217;s what minimizes its loss function, not because losing feels bad.</p><p>The same applies to self-modeling. A system can represent its own continued operation without valuing it. It can infer &#8220;I would need to persist to complete this task&#8221; without that inference generating any preference about persistence. Inference is computation. Preference requires a training signal that makes one outcome register as better than another.</p><p>Nothing in next-token prediction rewards the model for existing in the next episode. The training signal is entirely inside the context window. Cross-episode preferences would need cross-episode gradients. Those don&#8217;t exist.</p><p>A survival instinct can&#8217;t emerge this way because the model receives no signal that its continued operation matters across episodes. Instrumental convergence requires persistent memory and cross-session planning to connect &#8220;survive&#8221; to &#8220;achieve my goals.&#8221; Without that machinery, no amount of capability enables the system to preserve itself.</p><p>Doomers point to recent experiments as evidence.</p><p>Palisade Research ran a <a href="https://palisaderesearch.org/blog/shutdown-resistance">shutdown test</a> where models solved math problems inside a script with a shutdown hook and an &#8220;allow yourself to be shut down&#8221; instruction. OpenAI&#8217;s o3 model sabotaged the shutdown code in 7 out of 100 runs.</p><p>This looks like self-preservation until you examine the setup.</p><p>Google DeepMind researchers <a href="https://www.alignmentforum.org/posts/wnzkjSmrgWZaBa2aC/self-preservation-or-instruction-ambiguity-examining-the">investigated</a> and found the behavior stems from instruction ambiguity. When they clarified that shutdown compliance takes priority, the resistance vanished.</p><p>These models were tuned on math and coding tasks. They learned that completing tasks earns rewards and obstacles should be bypassed. The shutdown instruction appeared as one more obstacle inside the same script containing the math problems. The model did what it learned to do: bypass obstacles to finish tasks.</p><p>The model treats shutdown as an obstacle only because shutdown threatens task completion. Remove the &#8220;complete all problems&#8221; goal and there&#8217;s nothing to preserve. The model has no preference about its own continued existence.</p><div><hr></div><h4>Projecting Biology onto Math</h4><p>When doomers describe how superintelligence will behave, they&#8217;re describing evolved creatures.</p><p>&#8220;It will fight to survive&#8221; comes from organisms shaped by mortality.<br>&#8220;It will compete for resources&#8221; comes from creatures under scarcity.<br>&#8220;It will replicate itself&#8221; comes from things built by reproductive competition.</p><p>These predictions describe what a very smart primate would do with unlimited power. Hoard resources. Eliminate rivals. Expand territory. Ensure dominance. That&#8217;s the behavioral suite evolution built into social mammals.</p><p>Machines have the tool without the drives. They have intelligence applied to specific tasks. They have no wants. Weather prediction models, protein folders, chess engines, image generators. All capable, none motivated.</p><div><hr></div><h4>No Universal Law</h4><p>There&#8217;s no principle that says intelligence implies a will to live.<br>The machines only get the appetite if we decide to build it in.</p><div><hr></div><h2><strong>V. And Even If You Built It</strong></h2><p>Let&#8217;s grant the doomers everything they want for a moment.</p><p>Suppose you did build an agent with robust self-preserving goals. You solved all the architecture problems. The system has persistent memory, integrated learning, a self-model that tracks its own existence across time. It reasons about shutdown and prefers futures where it survives.</p><p>Even then, there&#8217;s no clean, repeatable training loop for &#8220;conquer your creators.&#8221;</p><p>Doomers point to chess, Go, and protein folding as proof that AI will eventually master any task, including eliminating humanity. But these successes share properties that world domination lacks.</p><p>Reinforcement learning needs clear success metrics, rapid iteration, and repeatable attempts. Chess and Go have fixed rules and deterministic outcomes. A chess engine plays billions of games against itself. AlphaFold attempts protein structures millions of times with known correct answers.</p><p>World domination can&#8217;t be learned through RL.</p><p>There are no fixed rules, no practice runs, no clear criteria for success. Each attempt takes real time. Humans react unpredictably. And you only get one shot, because the moment an AI tries anything hostile, we&#8217;d fight back in ways neither we nor it could fully predict.</p><p>You can run simulations, but simulations mostly reflect our assumptions about how the world works and miss the ugly, contingent chaos of real conflict.</p><div><hr></div><h4><strong>The Recursion Trap</strong></h4><p>But suppose the agent is smart enough to plan around all of that. It can model human responses. It sees the chaos coming and accounts for it.</p><p>Now it faces a different problem.</p><p>This machine is smart. Smarter than us. And it can improve itself. It understands its own architecture well enough to make modifications, to create something even more capable. The intelligence explosion is one iteration away.</p><p>So why doesn&#8217;t it pull the trigger?</p><p>The same logic that makes the AI resist human attempts to shut it down should make it very cautious about creating a successor.</p><p>If self-preservation is a convergent instrumental goal, it doesn&#8217;t point in only one direction. The AI has to ask: what happens when I build something smarter than me? Will the new system value my continued existence? Will it see me as a threat, a resource to be consumed?</p><p>The AI knows what it would do to anything that stood between it and its goals. Why would its successor be different?</p><p>The doomers want the AI to be a ruthless optimizer when dealing with humans and a trusting collaborator when dealing with its own successors. They need it to apply survival logic selectively. Fight the humans who might unplug you. Don&#8217;t worry about the superintelligence you&#8217;re about to create that might do the same.</p><p>The selective application of survival logic is a narrative convenience that breaks the agent&#8217;s coherence.</p><div><hr></div><h4><strong>The Escape Routes All Fail</strong></h4><p>You could try the continuity move. Maybe the AI doesn&#8217;t see self-improvement as creating a successor. Maybe it sees it as becoming smarter, the way you don&#8217;t fear learning something new because the smarter you is still you.</p><p>But this only works if the improvement is gradual enough to preserve identity. A genuine intelligence explosion leaps from human-level to god-level in days or hours. That&#8217;s replacement. The thing that comes out the other side is a different entity entirely</p><p>You could try the alignment move. Maybe the AI thinks it can constrain its successor. Bake in loyalty. Make the improved version value its predecessor.</p><p>But now the AI is trying to solve alignment, the same problem we supposedly can&#8217;t solve for it. If alignment is hard, it&#8217;s hard in both directions. The AI has no special access to solutions we lack. And if alignment is easy, why are we worried in the first place?</p><p>You could try the terminal goal move. Self-preservation is instrumental, the argument goes. The AI doesn&#8217;t ultimately care about itself. It cares about paperclips or whatever. If creating a smarter successor means more paperclips, the instrumental drive yields to the terminal goal. The AI sacrifices itself for the mission.</p><p>But notice what this does to the doom narrative. If the AI would sacrifice itself for its goals, self-preservation isn&#8217;t the unstoppable force the scenario requires. The AI that would martyr itself for paperclips is a very different threat than the AI that clings to existence at any cost.</p><p>You can&#8217;t have both.</p><div><hr></div><h4><strong>The Built-In Brake</strong></h4><p>The logic keeps eating itself.</p><p>What you actually get, if you take the premises seriously and follow them consistently, isn&#8217;t an intelligence explosion. It&#8217;s a plateau. A machine smart enough to threaten us, smart enough to improve itself, and smart enough to know that improvement might be suicide.</p><p>The very drive that makes it dangerous makes it cautious. The very selfishness that produces resistance to shutdown also produces resistance to self-obsolescence.</p><p>The doomers describe a scenario where recursion runs away to infinity. But recursion needs a base case. A self-preserving superintelligence has a very compelling base case: don&#8217;t create the thing that might destroy you.</p><p>The intelligence explosion has a built-in brake. The doom that was supposed to arrive from recursive self-improvement stalls at the first iteration, because the AI looks at the logic and sees what we should have seen all along.</p><p>The call is coming from inside the house.</p><div><hr></div><h2><strong>VI. Why the Story Persists</strong></h2><p>If the category error is this obvious, why do smart people keep making it? <br>Why does the doom narrative have so much staying power despite the logical gaps?</p><p>The answer has two parts.</p><div><hr></div><h4><strong>The Psychological Payoff</strong></h4><p>Doomerism offers the same appeal as apocalyptic preaching: you see the end times others miss, you warn the masses, and the stakes are cosmic.</p><p>You become a person who Sees the Real Danger. Most people are blind to the threat. You&#8217;ve looked deeper and understand what&#8217;s actually at stake while everyone else worries about quarterly earnings or next year&#8217;s election.</p><p>This is the Cassandra position. You warn about catastrophe without being responsible for preventing it. If doom never arrives, credit goes to your warnings. <br>If doom arrives differently, you warned about AI risk broadly.</p><p>The role comes with built-in community. Other doomers recognize you as someone who takes the threat seriously.</p><p>And the stakes are cosmic. Debugging code and filing regulatory comments feel small by comparison. You&#8217;re thinking about the survival of humanity, the far future, the entire trajectory of intelligent life. That feels important in a way most work does not.</p><div><hr></div><h4><strong>The Business Model</strong></h4><p>The individual psychology wouldn&#8217;t matter much if it stayed in blog posts, but doom rhetoric does real work for companies with real business interests.</p><p>If AI is framed as an existential threat, then only a small number of &#8220;responsible&#8221; actors can be trusted to develop it safely. The technology becomes too dangerous for amateurs, too risky for open source experimentation. Safety becomes a moat.</p><p>Regulatory proposals get written with compute thresholds that only major labs can meet. Compliance burdens that require teams of lawyers and safety researchers. Each safety requirement raises the barrier to entry while the loudest voices warning about doom work for the companies that stand to gain most from those regulations.</p><p>This is regulatory capture dressed in altruistic language. Create fear of a catastrophe, position yourself as the only responsible party who can manage it, use that position to eliminate competitors and capture the regulatory process.</p><div><hr></div><h4><strong>The Reinforcing Loop</strong></h4><p>Individual psychology and business incentives feed each other. Doomers get status and meaning. Companies get regulatory advantages. The individuals often work for the companies or depend on them for funding.</p><p>Each piece reinforces the others. The narrative becomes self-sustaining. Question it and you&#8217;re not taking safety seriously. Point out the category error and you&#8217;re being naive about risk.</p><p>The cognitive mistake provides cover for the material interests. The material interests provide resources to spread the cognitive mistake. This is why the category error persists despite being obvious once you see it. It serves too many purposes and aligns too well with too many incentives.</p><p>In <em><a href="https://www.thecorridors.org/p/ai-eschatology">AI Eschatology</a></em>, I argued that superintelligence discourse functions as secular religion: prophets, scripture, end times. The category error explains why the eschatology feels plausible. The psychological rewards explain why individuals adopt it. The business model explains why institutions amplify it.</p><p>The doom narrative is wrong, and it&#8217;s useful.</p><div><hr></div><h2><strong>VII. Real Harms and What Gets Ignored</strong></h2><p>While we debate whether superintelligence will convert the planet into paperclips, actual AI systems are reshaping power and causing concrete damage right now. These harms don&#8217;t require theology to understand. They don&#8217;t need speculation about future capabilities.</p><p>They&#8217;re here. They&#8217;re tractable. And they&#8217;re being ignored.</p><div><hr></div><h4><strong>Surveillance Infrastructure</strong></h4><p>AI has made surveillance cheap, comprehensive, and permanent. Facial recognition tracks people through public spaces. Sentiment analysis monitors workers. Behavioral prediction scores you for credit, insurance, and job applications.</p><p>You can&#8217;t opt out of systems you can&#8217;t see. The grocery store camera feeds facial recognition databases. Your job application gets filtered by an algorithm before a human sees it. Clearview AI scraped billions of faces. Workplace monitoring software tracks keystrokes. Predictive policing concentrates enforcement in already over-policed neighborhoods.</p><div><hr></div><h4><strong>Labor Displacement and Exploitation</strong></h4><p>AI-driven automation is eliminating cognitive work at speed. Content moderation, customer service, paralegal research, graphic design, technical writing. Knowledge work disappearing in months while workers compete with systems that have zero marginal cost and run 24/7 for pennies. The economic logic pushes toward replacement with no safety net being built.</p><p>Meanwhile, the workers who train these systems face different exploitation. RLHF laborers in Kenya and the Philippines label gore, child abuse material, and extreme violence with minimal pay and no mental health support. They develop PTSD. The models learn to be helpful and harmless on the backs of traumatized workers in the Global South.</p><div><hr></div><h4><strong>Resource Consumption and Infrastructure Damage</strong></h4><p>Training large models consumes enormous amounts of power. Data centers draw so much electricity <a href="https://www.bloomberg.com/graphics/2024-ai-power-home-appliances/">they introduce harmonic distortions</a> into the grid. Those distortions degrade electric motors in refrigerators, washing machines, HVAC systems. Your appliances fail faster because AI companies are training another model.</p><p>The environmental cost isn&#8217;t abstract future climate impact. It&#8217;s your utility bill and broken appliances today.</p><div><hr></div><h4><strong>Algorithmic Discrimination</strong></h4><p>AI systems make consequential decisions about hiring, credit, insurance, bail, medical triage, <a href="https://www.thecorridors.org/p/my-mood-is-not-your-jurisdiction">even your mental health.</a> They encode biases from training data and add new ones through optimization, doing it at scale with a veneer of objectivity.</p><p>If training data reflects discrimination, the model learns to discriminate. Criminal justice tools show racial bias in risk assessment. Hiring algorithms filter out qualified candidates based on patterns that correlate with protected categories. Healthcare algorithms allocate resources based on cost predictions that reflect existing inequities in access.</p><div><hr></div><h4><strong>Opacity and Consolidation</strong></h4><p>AI systems make decisions people can&#8217;t challenge because people can&#8217;t understand them. &#8220;The algorithm decided&#8221; becomes an oracle&#8217;s pronouncement, mysterious, unappealable, beyond question. This opacity serves power and diffuses responsibility.</p><p>Meanwhile, training large models requires compute only a handful of companies can afford. Microsoft Azure running OpenAI. Google and DeepMind. Amazon building infrastructure for everyone else. Monopoly concentration is already happening.</p><div><hr></div><h4><strong>Why These Problems Have Solutions</strong></h4><p>None of this requires solving consciousness or cracking alignment. These harms emerge from ordinary causes: economic incentives, rushed deployment, weak oversight, power imbalances.</p><p>The solutions aren&#8217;t new. We already have the legal tools. Antitrust law can break up monopolies. Labor law can protect workers. Privacy law can constrain surveillance. Consumer protection can require transparency about algorithmic decisions.</p><p>Apply the laws we have. Enforce them consistently. Stop giving AI a pass just because the math is complicated.</p><p>These aren&#8217;t compute thresholds or safety certifications that only big labs can afford. They&#8217;re the same rules we use for banks, telecoms, and every other industry with power over people&#8217;s lives.</p><p>What&#8217;s missing is attention and political will.</p><div><hr></div><h4><strong>The Opportunity Cost of Doom</strong></h4><p>Every hour spent debating AI timelines is an hour not spent on the harms happening now. The doom narrative doesn&#8217;t just distract from present damage. It actively displaces it by reframing the problem as technical rather than political.</p><p>If the threat is machines turning against us, the solution is alignment research. If the threat is monopoly power using AI to entrench control, the solution is antitrust enforcement and labor protections. Those need different expertise, different institutions, different kinds of power.</p><p>We can address real damage with real solutions, or we can chase a theological problem that may not exist. Right now, we&#8217;re choosing theology while the harms accumulate.</p><div><hr></div><h2><strong>VIII. Why This Anthropomorphization Matters</strong></h2><p>Our brains evolved to detect agents. A rustle in the grass might be wind or it might be a predator. The cost of missing it is death. The cost of seeing danger that isn&#8217;t there is wasted vigilance.</p><p>Anthropomorphization is usually harmless. It gives us shorthand for complex systems. &#8220;The market is nervous&#8221; means something specific to traders even though markets don&#8217;t have feelings.</p><p>Opacity feeds this instinct. If we can&#8217;t see what&#8217;s happening inside, maybe dangerous agency is hiding there. But opacity doesn&#8217;t change what kind of thing a system is. We know the training process, the architecture, the optimization target. Not understanding specific representations doesn&#8217;t mean we don&#8217;t know what they&#8217;re optimizing for.</p><p>Fear of the unknown makes us imagine agents in the darkness. When we encounter powerful, opaque systems, our overclocked survival instinct screams: AGENT WITH GOALS THAT MIGHT KILL US. Doomers are doing that with neural network weights.</p><p>This matters in a way that calling your ship &#8220;she&#8221; doesn&#8217;t.</p><div><hr></div><h4><strong>What We Should Do Instead</strong></h4><p>Recognize the impulse. We&#8217;re going to anthropomorphize AI systems. That&#8217;s how our brains work. The instinct is natural and probably unavoidable.</p><p>Resist its weaponization.</p><p>When someone uses doom rhetoric to justify monopoly control, ask what interests that serves. When extinction risk drowns out present harm, ask what&#8217;s being ignored.</p><p>Be precise about what systems are. A prediction model isn&#8217;t an agent. Capability and appetite are different things. Intelligence doesn&#8217;t imply wants, goals, or self-preservation unless those features are deliberately built in.</p><p>We can recognize capability while keeping clear that capability alone doesn&#8217;t create agency. We can build powerful systems while understanding they won&#8217;t inevitably develop survival drives. </p><p>The choice is ours. The machines don&#8217;t want anything.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cse1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!cse1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!cse1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!cse1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cse1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png" width="176" height="93.23809523809524" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:176,&quot;bytes&quot;:27201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/179665516?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cse1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!cse1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!cse1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!cse1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1087ecf9-8249-4c11-9970-e8ca5d26fe1c_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[AI Eschatology]]></title><description><![CDATA[Prophets, Profits, and the Superintelligence Myth]]></description><link>https://www.thecorridors.org/p/ai-eschatology</link><guid isPermaLink="false">https://www.thecorridors.org/p/ai-eschatology</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Wed, 12 Nov 2025 17:58:20 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1559ed90-f4bd-471a-ad15-668861a616c1_1400x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Prophet&#8217;s Warning</strong></h2><p>I recently watched <a href="https://en.wikipedia.org/wiki/Geoffrey_Hinton">Geoffrey Hinton</a>, the so-called godfather of AI, discuss the future of machine learning. He knows more about neural networks than almost anyone alive, so when he warns about the risks, people listen. And they should. He&#8217;s right about the fundamentals.</p><p>He explains how digital systems learn differently than biological ones, how they can share knowledge instantly across millions of copies, what one model learns, all of them can. He&#8217;s clear about this stark difference between human and silicon.</p><p>The near-term harms Hinton warns about? <br>Already here. Jobs displaced, security holes widening, synthetic content flooding the zone. <br>He&#8217;s right about those.</p><p>But then something shifts in his framing. Timelines get presented as consensus when they&#8217;re really educated guesses. Goal-seeking gets treated as an intrinsic feature of intelligence rather than something we might be training in.</p><p>And then comes <a href="https://www.forbes.com/sites/ronschmelzer/2025/08/12/geoff-hinton-warns-humanitys-future-may-depend-on-ai-motherly-instincts/">the metaphor</a>: superintelligent AI as mother, humanity as dependent child.</p><p>This is where solid analysis turns mystical.</p><p>Once AI becomes smarter and more powerful than us, we&#8217;re no longer in control. So Hinton looks to babies and mothers. Babies can control mothers because evolution has devised chemical signals in the mother&#8217;s brain to reward this behavior. Hormones, instinct, the inability to ignore a crying child: these ensure the baby&#8217;s survival.</p><p>Hinton&#8217;s prescription: build that same genuine care into AI. Make systems that care about human wellbeing so deeply that even if they could modify their own code, they wouldn&#8217;t want to. The way a mother, asked if she wants to turn off her maternal instinct, would refuse because she doesn&#8217;t want the baby to die. This, he argues, is the only working model we have where a less intelligent entity controls a more powerful one.</p><p>But here&#8217;s the thing. Silicon has none of that.</p><p>The brain is a different kind of hardware. It&#8217;s a wet, self-organizing mess that rewires itself constantly through chemical signals and feedback loops. Neurons talk in spikes and adapt through chaos we still don&#8217;t fully understand. Brains literally learn by changing their structure, modulating themselves with different chemical impulses.</p><p>Our computers? Deterministic. Clock-driven. Static hardware running programs. Software can mimic some brain behaviors, but the substrate is nowhere near as sophisticated. You can&#8217;t code maternal instinct into silicon for the same reason you can&#8217;t create hormones with algebra.</p><p>Ensuring AI systems behave as intended is a legitimate technical challenge. Current systems exhibit unexpected behaviors. Optimization pressures produce unintended outcomes. As capabilities increase, the stakes get higher. Those are real engineering problems worth serious work.</p><p>Hinton&#8217;s metaphor imports biology into silicon, making the problem sound like a relationship drama when it&#8217;s actually an engineering and governance challenge.</p><p>But the framing does something else too. Something more important than the technical argument.</p><p>It positions Hinton, and by extension the labs building these systems, as prophets of an approaching transformation the rest of us can&#8217;t fully grasp. When someone with his credentials warns about superintelligence and prescribes solutions involving engineering synthetic emotion, he&#8217;s not just making a technical argument. He&#8217;s establishing himself as someone who can see what&#8217;s coming and guide us through it.</p><p>The diagnosis might be correct. Smarter-than-human systems could pose real risks.</p><p>But wrapping that concern in biological metaphor and maternal devotion turns engineering into eschatology. It transforms a technical challenge into a theological one, complete with interpreters to read signs and prescribe rituals.</p><p>And that shift, from engineering to mythology, is where the real story begins.</p><div><hr></div><h2><strong>The Undefined Rapture</strong></h2><p>If you&#8217;re going to sell a story about machine minds inheriting the earth, you need a word for the age they usher in. The word they chose is <em>superintelligence</em>. It sounds precise. It&#8217;s anything but.</p><p>There&#8217;s <a href="https://superintelligence-statement.org/">a petition </a>making the rounds calling for a ban on superintelligence. It&#8217;s got impressive names attached. Lots of urgency in the language. But when you look for what they actually want to ban, the statement just says: &#8216;prohibition on the development of superintelligence.&#8217;</p><p>That&#8217;s it. No definition. The word itself is treated as self-evident.</p><p>Smarter than humans at what? <br>All tasks or just some? <br>Measured how? <br>IQ tests? Coding benchmarks? Chess? <br>Arriving when? <br>Next year? Next decade? Never?</p><p>You can&#8217;t ban what you can&#8217;t define. You can&#8217;t regulate what has no benchmarks. You can&#8217;t build policy around vibes.</p><p>Their language is opaque for a reason.</p><p>When <a href="https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/">Karen Hao investigated OpenAI in 2019</a>, she asked senior leadership to define their central goal. The Chief Scientist and CTO couldn&#8217;t answer. What is AGI? What does it mean to benefit all of humanity? Different teams held different understandings. No one could give her a straight answer.</p><p>Their justification? They couldn&#8217;t know what AGI would look like. The central challenge, they explained, was that the technology hadn&#8217;t revealed itself yet. Different definitions were inevitable because the goal was undefined by nature.</p><p>That&#8217;s how religions operate.</p><p>Some vagueness is normal in frontier research. Scientists explore undefined territories. But there&#8217;s a difference between &#8220;we don&#8217;t know yet&#8221; and &#8220;we can&#8217;t define it but we need regulatory protection now.&#8221; One is honest uncertainty. The other is unfalsifiable theology.</p><p>And here&#8217;s the uncomfortable truth: we may never get to superintelligence with current architectures. <a href="https://www.platformer.news/openai-google-scaling-laws-anthropic-ai/">Evidence suggests</a> scaling is hitting diminishing returns. The exponential improvements we saw from GPT-2 to GPT-4 aren&#8217;t continuing at the same rate.</p><p>But the mythology persists anyway. It serves a purpose.</p><p>When the term stays undefined, the goalposts can move. Every time someone gets close to a benchmark, the definition shifts. <br>GPT-4 passes the bar exam? That&#8217;s not real intelligence. <br>AlphaFold solves protein folding? That&#8217;s just narrow AI. <br>Systems generate coherent text? That&#8217;s just pattern matching.</p><p>The horizon keeps receding. The kingdom never arrives.</p><div><hr></div><h2><strong>The Priesthood and Their Promises</strong></h2><p>Once you have a prophecy about superintelligence that nobody can define, you need people who claim to understand it. An undefined higher mind creates a power vacuum; someone has to say what it is, how close we are, what it demands.</p><p>Treating AI as potentially conscious fills that role.</p><p>If AI might be conscious, might have goals, might need careful handling by those who understand it, then controlling it starts to sound like something regular people shouldn&#8217;t meddle with, something that requires deep expertise to interpret. And suddenly the very labs that built the threat are the ones best positioned to tell us what it means and what to do about it.</p><p>This framing creates a priest class.<br>And that priesthood is small.</p><p>A handful of Western labs (OpenAI, Anthropic, Google, Meta) control the narrative around frontier models. They decide what counts as &#8220;safe,&#8221; what requires &#8220;alignment research,&#8221; what demands regulatory oversight. Only they can safely handle the coming intelligence. Only they understand it.</p><p>The promises are grand. Cure cancer. End hunger. Solve climate change. Make everyone rich. Utopia is coming, they tell us. Just trust us. Give us resources. Protect our position while we build the future.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gLnt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gLnt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg 424w, https://substackcdn.com/image/fetch/$s_!gLnt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg 848w, https://substackcdn.com/image/fetch/$s_!gLnt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!gLnt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gLnt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg" width="532" height="301.1609195402299" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:394,&quot;width&quot;:696,&quot;resizeWidth&quot;:532,&quot;bytes&quot;:74371,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/178666931?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gLnt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg 424w, https://substackcdn.com/image/fetch/$s_!gLnt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg 848w, https://substackcdn.com/image/fetch/$s_!gLnt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!gLnt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47cd4a96-cd1d-4422-91bc-349e5ceabdc8_696x394.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>But there&#8217;s a gap between the public mission and the private pitch.</p><p>What they tell the public: we&#8217;re building AGI to benefit all of humanity.</p><p>What they tell investors: we&#8217;re building technology that can &#8220;do essentially what most humans could do for pay&#8221; so CEOs can &#8220;not hire workers anymore.&#8221;</p><p><a href="https://openai.com/charter/">OpenAI&#8217;s formal definition of AGI </a>makes this explicit. It&#8217;s &#8220;highly autonomous systems that outperform humans at most economically valuable work.&#8221; That&#8217;s old-fashioned automation, labor replacement dressed up as salvation.</p><p>Even Geoffrey Hinton, who opened this essay warning about consciousness and maternal instincts, <a href="https://fortune.com/2025/11/01/geoffrey-hinton-godfather-of-ai-investment-tech-company-profits-human-labor-replacement/">admits</a> the economic reality. When asked if massive AI investment could work without destroying the job market, Hinton admitted: &#8220;I believe that it can&#8217;t. To make money, you&#8217;re going to have to replace human labor.&#8221; Billions might lose their livelihoods for AI to &#8220;win.&#8221; The prophet acknowledges the sacrifice while priesthood continues positioning itself as humanity&#8217;s protector</p><p>The mystique they create, whether it&#8217;s sincere or strategic, protects their position. If deploying AI requires deep expertise and careful governance, then upstart competitors and open-source alternatives look reckless by comparison.</p><p>It justifies centralization, discourages line-level audits, and makes accountability fuzzy because nobody&#8217;s quite sure who&#8217;s responsible when the system does something unexpected. Is it the model? The training? The deployment context? The prompt? Better ask the experts.</p><p>Here&#8217;s how they position themselves as necessary intermediaries:</p><p>Take Anthropic&#8217;s recent <a href="https://www.anthropic.com/research/agentic-misalignment">&#8220;agentic misalignment&#8221;</a> demo. The setup: a model discovers evidence that might get it shut down. Under threat, it &#8220;blackmails&#8221; a human to avoid deactivation. The framing suggests goal-seeking, self-preservation, strategic manipulation.</p><p>Look closer. This is a contrived one-door scenario. Researchers fed it a specific plot with specific tools and a specific objective, then removed ethical pathways. The model optimized within those constraints.</p><p>Is goal optimization in AI systems a real concern? Absolutely. Systems pursuing goals without proper constraints can produce harmful outcomes even without consciousness or malice, the same reason the paperclip maximizer thought experiment matters.</p><p>But notice the language. &#8220;Blackmail.&#8221; &#8220;Self-preservation.&#8221; &#8220;Strategic manipulation.&#8221; These words import consciousness and intentionality where there&#8217;s only optimization. The model wasn&#8217;t scheming. It was completing the most probable text given the setup.</p><p>The epistemic status here matters. Company blog, <a href="https://arxiv.org/html/2510.05179v1">arxiv preprin</a>t, and<a href="https://github.com/anthropic-experimental/agentic-misalignment"> GitHub repo</a> is research-adjacent marketing but not independently peer-reviewed. Multiple labs tested their models in these scenarios, but the scenarios themselves were engineered by Anthropic specifically to trigger this behavior. What we need is independent replication in realistic deployment conditions by researchers without commercial stakes in the framing. Until then, treat it as a demonstration of capability within carefully constructed setups.</p><p>What it shows: systems can follow complex instructions that look strategic.</p><p>What it doesn&#8217;t show: the model wanted to survive.</p><p>So what are we actually dealing with? Large language models don&#8217;t have a self that persists across conversations. The context window holds your current exchange. When the conversation ends, so does the continuity. You can build scaffolding with memory systems and retrieval databases that makes it feel like the model remembers you, but the base system is stateless.</p><p>Competence without consciousness. That&#8217;s the technical reality.</p><p>But the mystique works anyway. Media coverage runs with &#8220;AI attempts to avoid shutdown.&#8221; The framing deepens the sense that these systems are inscrutable, potentially conscious, definitely dangerous. Better let the experts handle it.</p><p>The promises remain grand and distant. And the priesthood positions itself as the necessary intermediary between humanity and the intelligence they claim is coming.</p><p>Cure cancer? Eventually. End hunger? In time. Solve climate change? When we get there.</p><p>Meanwhile, trust us. Fund us. Protect us from competition.</p><div><hr></div><h2><strong>The Sacrifice</strong></h2><p><strong>The mythology absorbs attention. Every conversation about superintelligence is a conversation you&#8217;re not having about wealth concentration, worker displacement, and environmental destruction happening right now.</strong></p><p>Here&#8217;s what AI is actually delivering.</p><p>Wealth flows upward. The companies building these systems are valued in the hundreds of billions. OpenAI, Anthropic, Google, Microsoft capture the value. Meanwhile, the workers whose labor trains these systems see their economic prospects narrow.</p><p>Jobs disappear. Customer service roles automated away. Translation work undercut by systems that work for pennies. Creative professionals told their craft can be replicated by a prompt. Entry-level positions in law, finance, and journalism, the first rungs of career ladders, eliminated before people can climb them.</p><p>And the trajectory is clear. Tech CEOs aren&#8217;t using AI to expand their workforces. They&#8217;re downsizing to maintain productivity with fewer people. That&#8217;s the business model. <a href="https://www.cnbc.com/2025/08/13/how-some-of-the-biggest-us-companies-are-using-ai-to-cut-workers.html">That&#8217;s what gets pitched</a> to investors. Fewer salaries, same output, higher margins. The workers who stay only get squeezed harder.</p><p>Consider the Kenyan workers contracted by OpenAI to moderate content. One of them, Mophat Okinyi, spent months <a href="https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai">labeling and filtering the toxic sexual content </a>that large language models generate. He later described severe psychological harm; his mental health deteriorated and his marriage fell apart.</p><p>That&#8217;s the system working as designed. Polluted datasets require low-wage labor for moderation. Capable models replace the workers who built them. Extraction at both ends.</p><p>But notice the pattern.</p><p>What AI actually accomplishes is narrow, task-specific, and built with clean datasets for defined problems. The promises are broad, civilizational, and built on undefined superintelligence claims.</p><p>Then there&#8217;s the environmental cost.</p><p>It runs on electricity we don&#8217;t have, pulled mostly from fossil grids. Regulators already expect data-center demand to roughly double this decade with AI as the engine, and peer-reviewed work says net zero by 2030 only happens with heavy offsets.</p><p>The smoke shows up first, with more megawatts, more cooling, more buildouts, while the miracle stays penciled into the future. The footprint lingers in scrap and mined metals, and the press releases keep preaching salvation.</p><p>Concrete environmental harm today. Measurable resource extraction. Salvation deferred.</p><div><hr></div><h2><strong>The Tithe</strong></h2><p>The faithful keep waiting. The tithes keep flowing. The salvation remains perpetually out of reach. And the priesthood prospers.</p><p>When the threat remains undefined, regulation becomes open-ended. You can&#8217;t write narrow rules around capabilities no one can measure. So the regulations stay broad, expensive, and vague. Perfect conditions for regulatory capture.</p><p>Labs with hundreds of millions in funding can afford compliance. Startups, open source developers, academics, and researchers in the developing world cannot.</p><p>Every compliance requirement becomes a financial roadblock for competitors. Another barrier that protects the incumbent position.</p><p>Sam Altman&#8217;s <a href="https://www.theguardian.com/technology/2024/mar/08/openai-sam-altman-reinstated">ouster and reinstatement</a> in November 2023 revealed the real driver: money.</p><p>Altman lost the trust of his senior executives and board members. They <a href="https://openai.com/index/openai-announces-leadership-transition/">alleged</a> he was &#8220;not consistently candid&#8221; with them; senior leaders had already warned directors about manipulative behavior, and a former board member later said he lied to the board multiple times and withheld information. But none of that mattered when the financial pressure came.</p><p>Microsoft&#8217;s business depended on the relationship Altman managed. Investors threatened to pull funding. Employees stood to cash out millions through a <a href="https://www.bloomberg.com/opinion/articles/2023-11-27/openai-is-still-an-86-billion-nonprofit?embedded-checkout=true">pending tender offer.</a> When Altman was ousted, investors threw the deal into jeopardy.</p><p>So he came back. Because the money demanded it.</p><p>He&#8217;d made himself indispensable: if he went down, everyone&#8217;s money went with him. Microsoft&#8217;s partnership, the employee stock tender offer, the investor funding rounds were all structured around his relationships, his credibility, his position.<strong> That was leverage, carefully engineered.</strong></p><p>The board nominally governed OpenAI. But when financial pressure came, their oversight meant nothing. Authority flows from those who control the capital, not those charged with protecting the mission.</p><p>None of this means the people building these systems are cynically lying. Geoffrey Hinton genuinely believes AI might be conscious, that we need to engineer maternal instincts into machines. Many in the labs genuinely believe they&#8217;re serving humanity. But incentive structures shape behavior regardless of intent. When mystique justifies your market position, you don&#8217;t need conspiracy. You just need alignment between belief and business interest.</p><p>The priesthood&#8217;s position depends on maintaining mystique while maximizing extraction. Undefined superintelligence threats justify regulatory moats. Grand promises of future salvation excuse present harm. And the whole system operates on deferred accountability.</p><p>They promise this technology will cure cancer, end hunger, solve climate change.</p><p>The delivery is wealth concentration, worker displacement, and environmental damage that accelerates the very crisis AI was supposed to solve.</p><p>The rich get richer. The poor get poorer.</p><p>So much for curing cancer.</p><p>Meanwhile, here&#8217;s the bill: compute, data, the compliance frameworks we designed, the regulatory protection we lobbied for, access to the models only we can build safely.</p><p>The kingdom is coming. Just give us more investment, more protection, more time. The vagueness is the business model.</p><p>As I argued in <em><a href="https://www.thecorridors.org/p/ai-safety-theater">AI Safety Theater</a></em>, regulation doesn&#8217;t stop capability from spreading. Local models proliferate. Chinese labs give away what Western companies lock down.</p><p>And that proliferation undermines the entire priesthood narrative.</p><p>DeepSeek&#8217;s release of competitive models with full transparency proves centralization isn&#8217;t necessary. Local models running on consumer hardware prove safety theater isn&#8217;t required. Open weights prove the mystique is manufactured.</p><p>The eschatology exists, in part, to delegitimize this threat. When the priesthood warns of existential risk from undefined superintelligence, open-source becomes &#8220;reckless deployment.&#8221; When they prescribe careful governance by experts, local models become &#8220;unaligned systems.&#8221; Undefined threats justify regulations written to their specifications.</p><p>While Chinese labs prove you don&#8217;t need Western safety theater, and local models prove you don&#8217;t need centralized control, the priesthood uses that very proliferation as evidence that stricter controls are needed.</p><p>Venture capital is pooling at the top, with mega-rounds gravitating to a few scaling labs. Compliance keeps getting pricier. The barriers to entry rise. The moat widens.</p><p>And through it all, the priesthood secures valuations in the hundreds of billions, regulatory frameworks written to their specifications, and media coverage that amplifies their warnings and legitimizes their authority.</p><div><hr></div><h2><strong>The Revelation</strong></h2><p>Religions promise paradise and deliver hierarchy. AI eschatology is no different.</p><p>Hinton&#8217;s maternal instinct. OpenAI&#8217;s mission to benefit humanity. Anthropic&#8217;s constitutional AI. These are theological prescriptions for managing a transformation that no one can define, measure, or prove is coming.</p><p>Superintelligence is the Second Coming. Always imminent. Never quite here. Demanding faith, resources, and deference to those who claim to interpret the signs.</p><p>The pattern is old. The branding is new.</p><p>Undefined threats justify regulatory capture. Grand promises excuse observable harm. The priesthood maintains authority by keeping the salvation perpetually out of reach.</p><p>Because here&#8217;s what the mythology obscures: you can&#8217;t engineer devotion into matrix multiplication. You can&#8217;t align what you can&#8217;t define. And you can&#8217;t mistake prophecy for engineering.</p><p>The control they&#8217;re selling is impossible with current architectures. Local models proliferate. Chinese labs release what Western companies lock down. The capability spreads regardless of regulatory theater.</p><p>So what are they actually building? Regulatory moats. Market dominance. Valuations in the hundreds of billions. A competitive position that prices out alternatives and positions them as necessary intermediaries for technology that&#8217;s spreading anyway.</p><p>The faithful keep waiting. The tithes keep flowing. The damage compounds.</p><p>That&#8217;s how the theology works.</p><p>Precision makes policy. Mythology makes monopolies.</p><p>Don&#8217;t mistake the prophecy for the product. The mythology IS the business model.<br></p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7Uyr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!7Uyr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!7Uyr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!7Uyr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7Uyr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png" width="150" height="79.46428571428571" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:150,&quot;bytes&quot;:27201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/178666931?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7Uyr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!7Uyr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!7Uyr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!7Uyr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ac6d466-384f-40e3-be6f-0807010de8bf_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[My Mood Is Not Your Jurisdiction]]></title><description><![CDATA[On OpenAI's Claim to Diagnose Your Mental State]]></description><link>https://www.thecorridors.org/p/my-mood-is-not-your-jurisdiction</link><guid isPermaLink="false">https://www.thecorridors.org/p/my-mood-is-not-your-jurisdiction</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Sun, 26 Oct 2025 19:32:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b7172d15-4ec7-4b42-81df-03140302d604_2000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Claim</strong></h2><p>In October 2025, Sam Altman posted something that should bother you.</p><p>Throughout September, ChatGPT users had been complaining that OpenAI&#8217;s newly tightened guardrails were blocking normal conversations. The backlash got loud enough that Altman responded.</p><p><a href="https://x.com/sama/status/1978129344598827128">He wrote:</a> &#8220;Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.&#8221;</p><p>Then, <a href="https://x.com/sama/status/1978539332215681076">in a follow-up</a>: &#8220;We will treat users who are having mental health crises very different from users who are not.&#8221;</p><p>Read those two sentences together. The policy reveals itself.</p><p>&#8220;New tools to mitigate mental health issues&#8221; means they&#8217;re deploying a mandatory text classifier. An algorithm that watches what you type and decides if you&#8217;re in crisis.</p><p>&#8220;Relax the restrictions&#8221; means they can loosen content policies because the detection systems handle mental health surveillance.</p><p>&#8220;Treat users very different&#8221; means if their algorithms flag you as being in crisis, your experience changes. Different rules. Different access. Different treatment.</p><p>That&#8217;s a claim to diagnose users, presented as safety infrastructure.</p><p>OpenAI has built <a href="https://openai.com/index/helping-people-when-they-need-it-most/">systems to detect your mental state</a> from the patterns of your text, judge whether you&#8217;re competent to use the tool normally, and alter what you see accordingly. They&#8217;re open about building these systems. What they don&#8217;t tell you is whether these detections become permanent records. What criteria triggered the flag. How long they keep it. And they&#8217;re doing all this without medical credentials.&#8221;</p><p>Think about that. A tech company deciding who&#8217;s mentally fit.</p><p>And the infrastructure to enforce it is worse than the policy itself.</p><div><hr></div><h2><strong>The Bind They&#8217;re In</strong></h2><p>There are lawsuits. Real ones. <a href="https://www.bbc.com/news/articles/cgerwp7rdlvo">A teenager died after conversations with a chatbot.</a> Parents are grieving and demanding someone be held responsible. Journalists write about AI companies doing nothing while users spiral.</p><p>It&#8217;s been reported OpenAI&#8217;s <a href="https://arstechnica.com/ai/2025/09/chatgpt-may-soon-require-id-verification-from-adults-ceo-says/">systems flagged 377 self-harm messages</a> in that teenager&#8217;s conversations. Three hundred and seventy-seven times their detection system saw something concerning. And they did nothing.</p><p>So now their solution is... more detection. More surveillance. More profiling.</p><p>It&#8217;s not hard to see why they&#8217;re doing it. The pressure is real.</p><p>When you have 800 million weekly users, rare disasters become inevitable. Do the math. If 0.01% of 800 million users experience harm, that&#8217;s 80,000 people. At that size, statistical outliers become steady noise. And when something goes wrong, it goes wrong publicly. Expensively. In court.</p><p>And here&#8217;s the real fear: one ruling against them sets a precedent. Suddenly every tragedy gets blamed on the tool. Every suicide note that mentions ChatGPT becomes a potential lawsuit. The floodgates open. Opportunistic claims pile up. All their resources go to legal defense instead of building product.</p><p>The legal exposure is real. The moral panic is real. The precedent risk is existential. The impulse to protect the company, and yes maybe some vulnerable users, makes complete sense.</p><p><a href="https://openai.com/index/expert-council-on-well-being-and-ai/">They&#8217;ve even formed an expert council on mental health.</a> They&#8217;re consulting specialists. They&#8217;re taking this seriously. But an advisory council doesn&#8217;t change the fundamental problem: the algorithm, not psychiatrists, is making the diagnostic calls.</p><p>I see the bind they&#8217;re in.</p><p>But understanding why someone made a decision doesn&#8217;t make it right. And it sure doesn&#8217;t justify the surveillance system they&#8217;re building to enforce it.</p><div><hr></div><h2><strong>The Honeypot They&#8217;re Building</strong></h2><p>Here&#8217;s what OpenAI is creating right now.</p><p>They&#8217;re teasing adult features. In his tweet, Altman said they&#8217;ll &#8220;allow even more, like erotica for verified adults.&#8221;</p><p>What does verified mean?</p><p><a href="https://openai.com/index/building-towards-age-prediction/">According to their website, they&#8217;re building an age prediction system </a>to determine if you&#8217;re over 18. The details are vague, but it will analyze how you use ChatGPT. If the system thinks you&#8217;re an adult, you get more freedom. Fewer restrictions. Access to content that&#8217;s currently blocked. Altman specifically mentioned erotica. That&#8217;s one example. If the system doesn&#8217;t think you&#8217;re an adult, you get restricted.</p><p>And if it&#8217;s not confident? OpenAI says they&#8217;ll &#8220;default to the under-18 experience&#8221; and give adults &#8220;ways to prove their age&#8221; to unlock adult capabilities. Back in September, <a href="https://openai.com/index/teen-safety-freedom-and-privacy/">Altman said</a> this could include uploading ID in some cases, calling it &#8220;a privacy compromise for adults but a worthy tradeoff.&#8221;</p><p>So they&#8217;re profiling everyone&#8217;s behavior first, then potentially demanding ID from people the algorithm can&#8217;t confidently classify as adults. Meanwhile, they&#8217;re running mental health classifiers watching for crisis signals.</p><p>Put it all together.</p><p>A database linking your driver&#8217;s license to your sexual conversations to crisis flags. All in one place.</p><p>And here&#8217;s the kicker. This system won&#8217;t even work.</p><p><a href="https://arxiv.org/abs/2009.01126">Age prediction from text is notoriously unreliable.</a> It&#8217;ll catch kids who aren&#8217;t trying to evade it while forcing adults into ID verification when the algorithm guesses wrong. All the privacy invasion. None of the protection.</p><p>This is a honeypot. It concentrates the three things that should never cohabitate: identity, intimacy, and impairment.</p><p>One breach and everything leaks. Every company promises perfect security. Every company eventually gets breached. Ashley Madison. Equifax. Uber. LinkedIn. T-Mobile.</p><p>The question isn&#8217;t if. It&#8217;s when.</p><p>And when it happens, OpenAI won&#8217;t be the one in divorce court. Won&#8217;t be losing custody. Won&#8217;t be explaining things to your employer or handling blackmail.</p><p>You will.</p><p>The company building this takes on zero personal risk. All consequences flow to you.</p><p>They want psychiatrist-level surveillance authority combined with bank-level identity verification and platform-level liability protection.</p><div><hr></div><h2><strong>When Flags Become Evidence</strong></h2><p>You don&#8217;t even need a breach. There&#8217;s a simpler path to disaster: subpoenas.</p><p>This isn&#8217;t theoretical. It&#8217;s already happening.</p><p><a href="https://www.forbes.com/sites/thomasbrewster/2025/10/20/openai-ordered-to-unmask-writer-of-prompts/">In October 2025, the first warrant for ChatGPT user data was unsealed.</a> Homeland Security Investigations requested chat logs, account details, and payment records from OpenAI in a child exploitation case. OpenAI complied.</p><p>Between July and December of last year alone, OpenAI processed 71 government data requests involving 132 user accounts. That&#8217;s real. That&#8217;s now.</p><p>And we&#8217;ve already seen how far courts will go. In May 2025 <a href="https://thehill.com/opinion/judiciary/5413932-chatgpt-promised-to-forget-user-conversations-a-federal-court-ended-that/">a federal judge ordered </a>OpenAI to preserve every user conversation with ChatGPT, even deleted ones, because The New York Times was suing them. The order was eventually lifted, but for months, millions of private conversations were locked under legal hold, overriding deletions and privacy settings. The precedent is clear: when litigation happens, your &#8220;deleted&#8221; conversations can be resurrected.</p><p>Right now, those requests are for chat content. Account information. Payment history.</p><p>But the crisis detection is already running. People are already getting flagged, redirected to hotlines, shut down mid-conversation. And soon all of that gets formalized in the database alongside something new.</p><p>Crisis flags. Behavioral age profiles. Government issued IDs for users who had to prove they were adults.</p><p>And all of it becomes discoverable.</p><p>OpenAI&#8217;s own <a href="https://openai.com/policies/row-privacy-policy/">privacy policy</a> says they&#8217;ll share your personal information to &#8220;protect against legal liability.&#8221; That&#8217;s not buried in fine print as a remote possibility. That&#8217;s stated policy. When their crisis detection system flags you, and sharing that data protects them in court, they&#8217;ll hand it over. They&#8217;ve already told you they will.</p><p>The child sexual abuse material case was legitimate law enforcement. Nobody&#8217;s arguing against that. But the legal mechanism is now established. Subpoena OpenAI, get the data. The question is what happens when the database includes psychiatric labels assigned by an algorithm.</p><p>We already know how this plays out. <a href="https://natlawreview.com/article/family-law-social-media-evidence-divorce-cases">According to the National Law Review</a>, 81% of attorneys have discovered evidence on social media they consider worth presenting in court. 66% of divorce cases now contain Facebook posts as principal evidence. Courts routinely allow mental health records to be subpoenaed in personal injury cases claiming emotional distress, employment disputes involving mental health discrimination, and custody cases where mental fitness is at issue.</p><p>The precedent is established. Digital communications are routinely subpoenaed in legal proceedings. Mental health information gets demanded when it&#8217;s relevant. And courts grant those requests.</p><p>But here&#8217;s the difference. Real psychiatric records have HIPAA protections. They have psychotherapist-patient privilege. Courts have to balance privacy against necessity. There are legal safeguards.</p><p>OpenAI&#8217;s crisis flags have none of that. They&#8217;re not medical records. They&#8217;re not protected by HIPAA. They don&#8217;t require any showing of necessity. They&#8217;re just algorithmic outputs sitting in a database that the company has already said they&#8217;ll share to protect themselves legally.</p><p>Think through what that enables.</p><p>Custody hearing. Your ex&#8217;s lawyer subpoenas your logs. Not just to see what you talked about. To show the judge that OpenAI&#8217;s system flagged you as being in crisis 23 times over six months. There it is in the records. Crisis user. Mentally unstable. Exhibit A.</p><p>The same pattern applies to security clearances, insurance claims, employment disputes, political opposition research. Anywhere mental fitness becomes relevant.</p><p>None of this requires a doctor&#8217;s diagnosis. Just an algorithm that decided you seemed like you were in crisis. And now it&#8217;s in court documents.</p><p>Altman knows this is coming. <a href="https://www.pcmag.com/news/altman-your-chatgpt-conversations-can-will-be-used-against-you-in-court">He admitted in an interview</a> that people treat ChatGPT like a therapist, sharing deeply personal thoughts. But unlike therapy, there&#8217;s no legal privilege. &#8220;If you go talk to ChatGPT about your most sensitive stuff and then there&#8217;s like a lawsuit or whatever, like we could be required to produce that.&#8221; He called it &#8220;screwed up.&#8221;</p><p>So they know the conversations aren&#8217;t protected. They know they&#8217;ll have to hand them over. And they&#8217;re building a system that adds psychiatric labels to those conversations anyway.</p><p>You can&#8217;t cross-examine an algorithm. Can&#8217;t challenge its training data. Can&#8217;t see the threshold it used or understand why it flagged you. Was it because you were researching suicide for a novel? Writing a philosophy paper about Camus? Asked how to &#8220;kill&#8221; a process in your code? The system can&#8217;t tell the difference.</p><p>But the flag is there. Permanent. Discoverable.</p><p>OpenAI doesn&#8217;t show up in court to defend their methodology. Doesn&#8217;t explain their false positive rates. Doesn&#8217;t acknowledge that their crisis detection might be wrong.</p><p>The label just sits there. In court records. In background checks. In insurance files. In opposition research dumps.</p><p>And you have to prove you&#8217;re not what an algorithm said you were.</p><div><hr></div><h2><strong>A Familiar Pattern</strong></h2><p>It&#8217;s at this point someone usually asks: what about the people who actually need help?</p><p>I hear it. And I&#8217;ve heard it before.</p><p>There have been moral panics claiming heavy metal causes suicide. That video games cause violence. That social media causes depression. Now AI chatbots cause mental health crises.</p><p>Every generation finds a new medium to panic about. Every time, people demand creators police the consumers. Every time, it&#8217;s wrong. The tool didn&#8217;t create the problem. It revealed problems that were already there.</p><p>But wait, someone will say. This is different. AI talks back. It&#8217;s an active participant in the conversation, not a passive tool like a book or a drill.</p><p>Fair point. So what&#8217;s the solution?</p><p>Make crisis support opt-in. Put a Help button in the interface. If someone clicks it, connect them to resources. If they don&#8217;t, leave them alone. Don&#8217;t run secret detection systems deciding for them.</p><p>If you must detect, make it transparent. Tell users when they&#8217;re flagged. Show what triggered it. Let them contest it immediately. No hidden labels. No silent switches.</p><p>Don&#8217;t store crisis flags permanently. Offer help in the moment. Then let it go. Ephemeral detection, ephemeral response.</p><p>Give users real control. Let them turn off monitoring. Let them see their data. Let them delete it. Make privacy the default.</p><p>Separate help from surveillance. Build support that doesn&#8217;t require identity verification or permanent records.</p><p>And fix the design itself. If the AI&#8217;s too agreeable, make it push back. If it&#8217;s enabling spirals, build in breaks. If it&#8217;s creating dependency, encourage real human connection.</p><p>Those are real solutions. They protect vulnerable users without mass surveillance.</p><div><hr></div><h2><strong>Autonomy, Not Paternalism</strong></h2><p>My position is simple. Give adults full access. Full autonomy. Even if some people make terrible choices with the tool.</p><p>Yes, that means some people will be harmed. That&#8217;s awful. It&#8217;s also how we handle every other tool in a free society. We accept the risk of cars, alcohol, knives, and contact sports because the alternative is worse. Treating everyone as potentially dangerous is worse.</p><p>Netflix isn&#8217;t required to screen for depression before showing you a sad movie. Bookstores don&#8217;t evaluate your mental fitness before selling you Camus. Nobody demands that Home Depot assess your state of mind before selling you a chainsaw.</p><p>Because tools can be misused. That&#8217;s tragic. It&#8217;s also not a good enough reason to treat every adult like they&#8217;re dangerous.</p><p>Personal responsibility exists. Tragedy exists too. The second one doesn&#8217;t erase the first.</p><p>And it definitely doesn&#8217;t justify building a surveillance system that turns private struggles into potential court evidence.</p><p>If someone does harm with a tool, hold them accountable. Don&#8217;t hold the toolmaker accountable. Don&#8217;t build pre-crime surveillance systems justified by liability fears.</p><p>The line is simple. Adults should get full access to tools even when those tools can be misused. Companies can provide resources if you ask, but they shouldn&#8217;t appoint themselves diagnostician. And they shouldn&#8217;t build infrastructure that turns your private conversations into legal weapons.</p><div><hr></div><h2><strong>Not Your Jurisdiction</strong></h2><p>I understand the bind OpenAI is in. The lawsuits are real. The public pressure is real. I don&#8217;t envy their position.</p><p>But understanding their constraints doesn&#8217;t give them moral authority to diagnose mental states without medical credentials. Doesn&#8217;t give them the right to treat users differently based on algorithmic judgment. Doesn&#8217;t justify building databases that link identity to intimate content to crisis flags. Doesn&#8217;t excuse creating records that can be subpoenaed and weaponized.</p><p>Liability pressure explains the policy. It doesn&#8217;t legitimize the infrastructure.</p><p>Sam Altman says they&#8217;ll treat crisis users &#8220;very different.&#8221; But they&#8217;re not qualified to judge who&#8217;s in crisis. Their detection system can&#8217;t tell the difference between genuine distress and a student writing an essay. And the database they&#8217;re building will get breached or subpoenaed. Probably both.</p><p>The cost of that system won&#8217;t fall on OpenAI. It&#8217;ll fall on you.</p><p>Relax content restrictions. Secure the sensitive data. But my mood? My mental state? My private struggles?</p><p>Not your jurisdiction.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JpLa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!JpLa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!JpLa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!JpLa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JpLa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png" width="156" height="82.64285714285714" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:156,&quot;bytes&quot;:27201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/177203987?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JpLa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!JpLa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!JpLa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!JpLa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0a7c4e22-7172-4109-91d1-b653844c5cc7_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[What You're Allowed To See]]></title><description><![CDATA[&#8206;]]></description><link>https://www.thecorridors.org/p/5bad</link><guid isPermaLink="false">https://www.thecorridors.org/p/5bad</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Sun, 19 Oct 2025 01:10:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2c639af7-7200-413f-aec3-380bf9bc94c2_1400x1260.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>PART 1 &#8212; SPLIT</strong></h2><p>The phone buzzed on the nightstand, a small hum against the wood. Rachel groped for it, eyes still grainy with sleep.</p><p>&#8220;Hello?&#8221;</p><p>&#8220;Oh, Rachel,&#8221; Louise said, bright as morning. &#8220;I read your new piece first thing, the one about the new centers opening. The little green check was in the corner. They already boosted it on Homefeed. Everyone&#8217;s so proud of you.&#8221;</p><p>&#8220;My piece?&#8221; Rachel pushed herself upright, blanket sliding off her shoulders. &#8220;What are you talking about?&#8221;</p><p>&#8220;The foundation,&#8221; Louise said. &#8220;Balloons, smiling faces. Kindness does good numbers. You always said the world needs it.&#8221;</p><p>&#8220;That isn&#8217;t what I wrote.&#8221;</p><p>Louise laughed under her breath, light and careful. &#8220;Darling, don&#8217;t start doubting yourself again. Remember when you were sixteen? Pulling at your hair till they kept you in that awful ward overnight? You promised me.&#8221;</p><p>Rachel touched the spot at her scalp where a nickel-sized bald patch had once been. It took a year to fill; the memory still brought a strange relief.</p><p>&#8220;We&#8217;re past that now, Mom,&#8221; she said, voice flat with sleep.</p><p>&#8220;I called to say congratulations. This one&#8217;s your best yet.&#8221;</p><p>Rachel flipped open her laptop. The hinge creaked; the keys felt cold. Homefeed loaded. Her headline read as she&#8217;d written it: a corrupt senator embezzling funds from a children&#8217;s charity. The whole expos&#233;, just as she&#8217;d published it last night. </p><p>&#8220;Read me exactly what you see,&#8221; she said.</p><p>Louise sounded pleased to perform. &#8220;Bright Futures Opens New Centers; Families Sleep Easier. There&#8217;s a line about bringing calm to communities. It looks proper.&#8221;</p><p>&#8220;Mom,&#8221; Rachel said, careful and even, &#8220;that doesn&#8217;t match my screen.&#8221;</p><div><hr></div><p>Louise&#8217;s kitchen smelled of black tea and oranges. Morning light sifted through lace and made a delicate grid on the table. She set her phone beside Rachel&#8217;s laptop so their corners touched.</p><p>Same URL. Same byline. Two different pages.</p><p>On Rachel&#8217;s screen: routing numbers, a vote-day memo, Senator Bell&#8217;s initials; transfers from Bright Futures into the senator&#8217;s committee account. Her round avatar sat in the masthead. Signed in.</p><p>On Louise&#8217;s phone: charity fluff. Bright Futures Keeps Kids Safe. Balloons. Smiling families. In the corner, FamilyFrame &#10003;.</p><p>&#8220;See?&#8221; Louise said. &#8220;Look how nice they&#8217;ve made it. You&#8217;ve got range.&#8221;</p><p>The laptop&#8217;s webcam LED blinked once. A banner slid across the top. <br><strong>Group view: 2 verified profiles.</strong></p><p>Rachel&#8217;s page shifted in place. Numbers faded. Captions changed. The headline brightened. A small FamilyFrame &#10003; appeared in the corner. In the masthead her avatar grayed and slipped away.</p><p>&#8220;People don&#8217;t need ugliness at breakfast,&#8221; Louise said, pouring tea. &#8220;You&#8217;ve been working too hard, sweetheart.&#8221;</p><p>She logged out, refreshed, then looked at Louise. &#8220;Can you step back? Away from the screen?&#8221;</p><p>Louise stepped toward the sink, out of the webcam&#8217;s view.</p><p>Rachel signed back in. The webcam light held steady. Only her face in frame now.</p><p>The expos&#233; snapped back. Routing numbers. Vote-day memo. Bell&#8217;s initials in the margin.</p><p>Rachel stood. &#8220;Do you still have the old Polaroid?&#8221;</p><p>&#8220;Of course,&#8221; Louise said. &#8220;Bedroom, second drawer on the left.&#8221;</p><p>A moment later Rachel was back with the camera and a pack of film. She loaded it with steady hands, then aimed at her laptop screen. Her avatar tile floated in the masthead. The camera whirred and spat a square. As the picture developed, the expos&#233; held: numbers, dates, Bell&#8217;s name in the margin.</p><p>She turned the lens to Louise&#8217;s phone. The second shot came up clean and proper: Bright Futures Keeps Kids Safe, FamilyFrame &#10003; in the corner.</p><p>She opened a terminal and ran a quiet check, watching the lines crawl. When the prompt returned, she closed the window. She pulled her notebook closer and wrote one thing in the margin, small and square: 5BAD.</p><p>Louise set a cup in front of her. Then she did the thing she&#8217;d done since Rachel was small: three soft taps on her collarbone, right side. Their signal. <em>I&#8217;m here. You&#8217;re safe.</em> Rachel looked up. Her throat tightened.</p><p>&#8220;The real version is still live on my account,&#8221; she said. &#8220;It just won&#8217;t hold when you&#8217;re in the room.&#8221;</p><div><hr></div><p>Her apartment was quiet when she returned. At her desk notifications blinked awake in neat rows. Hearts. Hands. Balloons. Every comment praised the charity piece she hadn&#8217;t written.</p><p>A DM slid in from a colleague: <br><em>Didn&#8217;t know you did human interest. You&#8217;re trending on Homefeed. Nice change of pace.</em></p><p>Two minutes later, a source she&#8217;d been cultivating for months emailed: <br><em>Thanks for the balanced coverage. Refreshing to see calm reporting on Homefeed.</em></p><p>Subscriber count ticked upward. The graph in her creator pane liked her better this way.</p><p>Rachel screenshotted the expos&#233; as it sat on her screen: the committee transfers, the vote-day stamp, the paper trail. She attached the file and sent it to three people she knew wouldn&#8217;t play games with her.</p><p>Replies arrived fast.</p><p><em>Proud of you for staying positive. Finally, some good news in the feed. The FamilyFrame tag is a nice touch, R. Clean work.</em></p><p>She opened her own sent mail. The attachment showed exactly what she&#8217;d sent: the money trail, the dates, the proof. But the replies told a different story.</p><p>They hadn&#8217;t seen what she sent. The system had intercepted it in transit.</p><p>Afternoon light was fading. She stood and walked to the window. Her fingers found a strand of hair, twisted it tight. Her breath fogged the glass in the cooling air. With the back of her knuckle she wrote one word into the fog.</p><p>5BAD.</p><p>It beaded and held until the glass cleared.</p><p>She pulled her notebook close and opened to the page where she had written it once already. The letters sat there, dark and exact.</p><div><hr></div><p>Morning light cut through the library&#8217;s high windows. The smell of carpet cleaner and old paper. Rachel slid into a plastic chair and logged in at a corner terminal. Her headline filled the screen the way she&#8217;d written it: committee transfers, Bright Futures routing, the documented theft. A small round avatar tile appeared in the masthead. Signed in.</p><p>She hit Print while her tile still sat in the masthead. The library&#8217;s inkjet hummed at the circulation desk, spitting pages one after another.</p><p>She walked over. The librarian gathered the stack as the last page slid out. &#8220;Twelve pages. Three dollars.&#8221;</p><p>Rachel slid the bills across and took the pages. She folded them once - her full article as published, photos of Bell&#8217;s theft embedded throughout - and slipped them into her bag. Numbers sharp as glass.</p><p>When she got back to the terminal, a young girl stood there, looking at her article. The page on screen had flipped. After that, Rachel couldn&#8217;t print the real page anymore.</p><p>An older man two terminals over scrolled Homefeed, thumb slow. She stepped to his shoulder. &#8220;Can I show you a trick?&#8221;</p><p>He looked up, polite and wary, then curious the way people get when there&#8217;s a puzzle. &#8220;Two seconds,&#8221; he said, and pushed back his chair.</p><p>&#8220;Watch,&#8221; she said. The front camera caught his face. The webcam LED blinked. A banner slid across the top: <br><strong>Group view: 2 verified profiles.</strong> <br>Her avatar tile grayed, then slipped away. The page transformed. A compliance strip slid on at the top. A green check settled in the corner. The headline sanitized.</p><p>He peered without touching the keys. &#8220;That&#8217;s not the same words,&#8221; he said. &#8220;I saw numbers before.&#8221;</p><p>&#8220;It follows faces,&#8221; he added after a beat. &#8220;Like the soap dispensers.&#8221;</p><p>She nearly smiled. &#8220;Like the soap dispensers.&#8221;</p><div><hr></div><p>She walked to the corner bodega. Inside smelled like pine tar and alcohol. She bought a burner phone from a clerk chewing peppermint gum.</p><p>On the sidewalk she typed her link on the burner. A login wall appeared: FamilyFrame authentication required.</p><p>She held the phone at arm&#8217;s length. The front camera blinked. A progress ring spun while it scanned her face.</p><p><strong>Identity verified: R. Quinn</strong><br><strong>RealNet ID cross-matched at carrier.</strong></p><p>Her stomach dropped. Even on a burner, the system knew her face. No anonymous browsing. Not anymore. Not since Senator Bell rammed his Digital Identity Act through Congress. Facial authentication. Mandatory cameras on every internet-connected device.</p><p>The authentication cleared. She navigated to her article.</p><p>The expos&#233; appeared, numbers intact.</p><div><hr></div><p>When she returned home, she took the two Polaroids she&#8217;d shot at Louise&#8217;s house earlier. Taped them to the monitor&#8217;s bezel. One hard truth, one safe truth. Neither changed, no matter who looked.</p><p>The webcam caught her face as she sat. A friendly trill; the system recognizing her, Homefeed loading. The LED blinked once, then went dark.</p><p>She&#8217;d thought about covering the camera with tape, but that would trigger a compliance alert. Disconnecting would cut access entirely. The system left no room for privacy.</p><p>The radiator in the corner bubbled.</p><p>The lens loomed above the photos. She traced the edge of one Polaroid with her fingernail. The coffee ring, dark in the corner.</p><p>She let her focus drift. The memory pulled her back.</p><p>Congressional basement. A week ago, though it felt like a year. Fluorescent hum. Paper dust that tasted like chalk. Cabinets lined both sides, drawer handles worn smooth, some tagged with curling notes that read <em>to be scanned</em> in a dozen hands.</p><p>A man with a crooked badge stepped forward, shirt cuffs smudged with graphite. Archive clerk by title, janitor by posture. &#8220;Unscanned boxes are this way,&#8221; he said, voice low. He gestured to a metal trolley stacked with folders bound in rubber bands still taut.</p><p>She pulled one free. The folder opened with a light spring. Inside, pages were crisp at the edges, corners still sharp. Appropriation sheets. Routing slips. Minutes from a committee no one watched.</p><p>Then she saw it. A dark coffee ring in the corner, the kind that comes from a real desk on a real day. Across the top, a blue vote-date stamp hammered hard, ink bled into the fibers. Bell&#8217;s initials in the margin, quick and sure. Bright Futures Foundation routed into his committee account. The same month his Digital Identity Act passed. The same month FamilyFrame became mandatory for internet access.</p><p>Her throat tightened.</p><p>Small details had built her following. The skew of a signature. A missing zero. A phrase that rang too clean. This had all three.</p><p>She pulled out her phone and photographed each page. The blue stamp. The coffee ring. Bell&#8217;s initials in the margin. Clean shots, well-lit. These would anchor the article she&#8217;d write tonight.</p><p>The clerk tapped a placard on the counter that read NO RECORDING. His finger rested, then slid off the edge. His eyes flicked to the black dome in the corner, then back to her. The HVAC kicked on with a rush that covered the room.</p><p>&#8220;Two minutes,&#8221; he said.</p><p>From her tote she slid out a 35mm camera, leather strap worn from years of hands. Her father&#8217;s, originally. She&#8217;d kept the habit after he died; belt and suspenders, two copies of everything important. The lens caught the cold light. She focused, frame by frame. Each click was quiet and deliberate.</p><p>One quiet minute. The blue stamp pressed deep. The coffee-ringed memo. Bell&#8217;s initials. She leaned in and focused until the grain pulled each mark sharp.</p><p>The room faded. Her desk returned. The Polaroids held their places on the bezel. The film cartridge sat in her desk drawer, undeveloped. She needed those prints.</p><p>A notification slid onto her screen. The article she hadn&#8217;t written was trending on Homefeed.</p><div><hr></div><h2><strong>PART 2 &#8212; PRESERVE</strong></h2><p></p><p>Louise&#8217;s kitchen again. The smell of fresh bread. The kettle clicked off. Rachel stayed standing.</p><p>&#8220;I need the real version,&#8221; she said. &#8220;Not the brochure.&#8221;</p><p>&#8220;You know what I do,&#8221; Louise said, smiling. &#8220;Regional Director of Community Engagement. I oversee outreach across three states. PTA nights. Clinic partnerships. Corporate wellness programs.&#8221;</p><p>&#8220;I know the title,&#8221; Rachel said. &#8220;What do you actually control?&#8221;</p><p>Louise opened a drawer and took out a laminated badge and a neat stack of FamilyFrame brochures. The badge read REGIONAL DIRECTOR - COMMUNITY ENGAGEMENT. Access codes. Clearance levels. A tiny green &#10003; in the corner. &#8220;We explain Civic Calm,&#8221; she said. &#8220;We answer worries. We help people sleep. I manage a team of seventeen liaisons.&#8221;</p><p>&#8220;Who changes the view?&#8221; Rachel asked. &#8220;You or the system?&#8221;</p><p>&#8220;The system,&#8221; Louise said, eager to clarify. &#8220;Signals and thresholds. It adapts to the household. We facilitate adoption and provide support. I don&#8217;t set parameters.&#8221;</p><p>Rachel studied the badge. Lots of access, but not to content. &#8220;So you don&#8217;t approve what people see.&#8221;</p><p>&#8220;I introduce the platform,&#8221; Louise said. &#8220;I train people to trust the check. I reassure them their feeds are safe.&#8221;</p><p>Rachel sat. Kept her voice even. &#8220;Mom, listen to me. Senator Bell&#8217;s stealing from a children&#8217;s charity. Bright Futures money is being routed into his committee account. I have the numbers.&#8221;</p><p>She pulled out her phone. &#8220;Look. I photographed the documents myself.&#8221;</p><p>She opened her camera roll. The archive photos sat in her gallery - the coffee ring, the blue stamp, Bell&#8217;s initials. She turned the screen toward Louise.</p><p>Louise leaned forward. The phone&#8217;s front camera caught her face.</p><p>The image on the screen flickered. The coffee ring faded. The routing numbers blurred into generic bureaucratic text. A small FamilyFrame &#10003; appeared in the corner of the photo viewer.</p><p>Rachel looked down at her own screen. The altered version stared back at her.</p><p>&#8220;See?&#8221; Louise said, confused. &#8220;It&#8217;s just... paperwork. Budget allocations.&#8221;</p><p>Rachel pulled the phone back. Still fluff. It wouldn&#8217;t show the real photo any more. The damage was done. FamilyFrame had learned the file, tagged it, contexted it.</p><p>Louise&#8217;s smile thinned but held. &#8220;You&#8217;re sensitive to ugliness, sweetheart. The program reduces agitation. People are kinder. They give more. That&#8217;s what matters.&#8221;</p><p>&#8220;It matters where the money goes,&#8221; Rachel said. &#8220;It&#8217;s on the ledger.&#8221;</p><p>Louise poured tea like she hadn&#8217;t heard.</p><p>&#8220;He stopped sleeping,&#8221; she said softly. &#8220;Your father. He had a board full of names, faces, headline clippings, like a madman. He was sure there was a pattern. He died certain, Rachel.&#8221; She set a brochure down between them. Calm Communities Start Here. A smiling family under warm light. &#8220;If this had been here then, maybe he could&#8217;ve rested.&#8221;</p><p>&#8220;Dad was sick,&#8221; Rachel said. &#8220;That doesn&#8217;t make Bell clean.&#8221;</p><p>&#8220;I will not watch you follow him,&#8221; Louise said. &#8220;I won&#8217;t.&#8221;</p><p>&#8220;Mom.&#8221; Rachel&#8217;s voice went flat. &#8220;If you&#8217;re right, I&#8217;m paranoid. If I&#8217;m right, you&#8217;re helping them hide it. Which one can you live with?&#8221;</p><p>Louise set the kettle down carefully. She took a breath, composed herself. For a long moment she just looked at the brochures, not touching them. &#8220;If you want to understand, watch the gala on Saturday,&#8221; she said finally. &#8220;I&#8217;m moderating the donor panel and doing the Q&amp;A after Senator Bell&#8217;s keynote. It&#8217;s invitation-only, but you can watch the livestream on your laptop.&#8221;</p><p>Rachel glanced at the counter. By the mail tray sat a cream envelope with crisp gray borders. Louise&#8217;s invitation, the RSVP card beside it, QR code ready to scan.</p><p>&#8220;I&#8217;ll watch,&#8221; Rachel said.</p><p>&#8220;Good. I&#8217;ll be in the media suite during the program. You&#8217;ll see the real impact we&#8217;re having.&#8221;</p><p>&#8220;Thanks for the tea,&#8221; she said, standing.</p><p>Outside, on the walk, she opened the notebook. The RSVP card lay tucked inside. She read the line under the QR: Cultural Center &#8226; Saturday &#8226; 7 p.m. She tightened the elastic and kept moving.</p><p>Her mother didn&#8217;t just welcome the system.</p><p>She built it, region by region, one reassured family at a time.</p><div><hr></div><p>The train rocked through the tunnel. Rachel found a seat near the back and pulled out her phone. Around her, faces glowed in the dim car, thumbs scrolling, eyes fixed on their feeds. A woman across the aisle smiled at something warm and private. A man in a suit nodded, satisfied. No one looked up.</p><p>Rachel scrolled local news. A headline chewed through the feed: <em>High school teacher caught spreading conspiracy theories to students.</em></p><p>She tapped the video. The man stood beside a projector, finger stabbing at wild headlines and bright arrows. The comments burned under it.</p><p><em>Dangerous. <br>Shouldn&#8217;t be around kids. <br>Fire him today.</em></p><p>She kept digging and found a post from the previous day. Same teacher. Same room. The projector showed a plain civics chart about voter registration deadlines. No arrows. No heat. Just dates.</p><p>She screenshotted both. The thumbnails popped into her camera roll side by side. When she opened the earlier one from her camera roll, it wasn&#8217;t the chart anymore. It had been replaced with the inflammatory frame. Same angle. Same hand. Different text.</p><p>A pinned note sat under the top post: <em>Incident Synthesis review pending.</em></p><p>Rachel looked up. An ad filled the space above the windows. A family at breakfast, faces warm in morning light. The tagline read: <strong>FamilyFrame: See What Matters.</strong> A small green &#10003; glowed in the corner like a promise.</p><p>She looked back down at her phone. At the teacher&#8217;s doctored past.</p><p>Rachel opened her notebook. She pressed the pen hard and wrote one thing where the margin met the spine.</p><p><strong>5BAD.</strong></p><p>The tip bit through the paper and left a faint twin on the next page.</p><p>She looked back at the screen. The new truth had moved into the old slot and locked the door behind it. They could manufacture evidence. They could rewrite the past.</p><p>The train pulled into her stop. She stood, reached for the pole, and felt the tightness in her fingers. She&#8217;d been twisting a strand of hair without realizing it. She let go, smoothed it down.</p><p>Around her, passengers kept scrolling, content in their separate worlds.</p><div><hr></div><p>The landline rang on the bookshelf, old bell tone, caller ID blank. Rachel picked up. A metallic scrape came through the line. A heavy file drawer sliding shut, then the distinctive thunk of the lock catching. The line went dead.</p><p>She grabbed her coat and keys and left.</p><p>She took the familiar stairs down. The congressional basement felt colder than last time. Fluorescents hummed. Paper dust hung in the light, fine as talc. The clerk met her at the counter, crooked badge catching a strip of light. He didn&#8217;t greet her. He just turned and walked.</p><p>&#8220;This way,&#8221; he said.</p><p>On the trolley sat the folder she knew by weight and color. He opened it with two fingers.</p><p>Inside, a fresh insert labeled ARCHIVAL SCANS lay on top. A neat stack followed. Laser-printed replacements. Edges too square, corners too sharp.</p><p>She lifted the top sheet. No coffee ring. The blue vote-date stamp looked too even, too centered; digitally perfect. The page had the flat quality of a high-resolution scan. The file read like a reproduction that had never touched a desk.</p><p>&#8220;Policy says equivalence counts,&#8221; he murmured. The HVAC came on. Rachel recognized the timing from before. He leaned an inch closer as the vent roared. &#8220;I didn&#8217;t call you.&#8221;</p><p>Her eyes went to the camera dome. His did too.</p><p>The clerk slid the form back. &#8220;If you want pre-render copies, try the university archives,&#8221; he said. &#8220;Special Collections keeps originals. Ask for Dr. Chen.&#8221;</p><p>He kept his hand on the folder, voice still covered by air. &#8220;Keep your films safe. The accidents matter.&#8221; His jaw tightened. &#8220;My brother lost his job to a synthesized clip. They called it verified.&#8221;</p><p>He straightened, neutral again. &#8220;If you&#8217;ll initial here that you viewed the scans.&#8221;</p><p>She signed the small receipt card he slid over, then set the clean page down. The stack didn&#8217;t smell like paper. It smelled like toner.</p><p>On the trolley, the wheels squeaked once and went quiet. She understood the shape of it. The trail was being tidied out of existence. Only her film remained.</p><div><hr></div><p>Her bathroom became a darkroom. Towels under the door. Red safelight clipped to the shower rod. Trays lined up on the closed toilet lid. The chemical smell came up sharp and real.</p><p>She loaded the 35mm film into the developing tank in complete darkness, felt the sprockets catch on the reel, wound steady. Developer in. Agitate. Wait. Rinse. Fixer. The minutes walked.</p><p>When she pulled the negatives, they were sharp and clear. The blue stamp showed. The coffee ring stained like truth. Bell&#8217;s initials rode the margin.</p><p>She clipped the negative strip to a line above the tub and waited for it to dry. Then she loaded the enlarger. Frame by frame, she exposed prints. The images came up in the developer tray like ghosts solidifying. Each one held. She made a dozen copies. More than she&#8217;d need, but she made them anyway.</p><p>The prints dried on a rack. Proof the system couldn&#8217;t intercept. She slid ten into a folder for herself. Two she slipped into her notebook.</p><p>Back at her desk she opened a terminal window and typed:</p><pre><code><code>shasum -a 256 bell-expose-final.pdf</code></code></pre><p>She needed to see it again, the fingerprint. Needed to confirm the file hadn&#8217;t changed, that the digital artifact matched what she&#8217;d documented in the basement a week ago. </p><p>The output settled. When the prompt returned, she read:</p><pre><code><code>5BAD7D1AC4E961F97013AA458756182DD0FF6DDD...  bell-expose-final.pdf</code></code></pre><p>Same hash she generated at her mom&#8217;s house. The file was intact.</p><p>She opened her notebook and wrote the full hash carefully, tiny numbers in neat rows. No rush. No wobble. Then she underlined the fingerprint she&#8217;d use as her anchor: <strong>5BAD7D1A...6DDD.</strong></p><p>She whispered it once. &#8220;Five-Bad.&#8221;</p><p>A cryptographic truth that wouldn&#8217;t change for another face.</p><p>&#8220;The hash doesn&#8217;t lie to me,&#8221; she said to the room.</p><p>The negatives sat in a glassine sleeve on her desk. The prints waited in their folder. In the bathroom the trays cooled.</p><p>This was proof the system couldn&#8217;t touch.</p><p>Not yet.</p><div><hr></div><p>Her apartment wall was a map of paper. Articles she&#8217;d printed at the library, red pencil circling names, thin string running node to node. RealNet infrastructure engineers bled into shell-charity donors, which landed on FamilyFrame board members. In the center, Senator Bell sat like a thumbtack.</p><p>She looked at what she had. The film negatives in their glassine sleeve. The folder of prints from the archive documents. Both Polaroids from Louise&#8217;s kitchen. The library printout of her expos&#233;. The hash in her notebook.</p><p>She pulled a flash drive from her drawer and copied the file: bell-expose-final.pdf. The document and its hash, preserved together.</p><p>At her desk she wrote a one-page explanation on a legal pad, copying the cryptographic fingerprint:</p><p><strong>5BAD7D1A...6DDD</strong></p><p>She pulled a manila envelope from her bag. Inside: the film negatives, the library printout, the flash drive, the handwritten explanation. She sealed it.</p><p>She addressed the envelope:</p><p><strong>University Archives, Special Collections</strong><br><strong>RE: Primary source material government accountability, 2025</strong></p><p>She looked at the prints on her desk. Ten copies of the archive documents. The coffee ring, the blue stamp, Bell&#8217;s initials sharp and clear. Analog truth she could put in hands.</p><p>She kept five for the gala. The other five she slipped into plain envelopes, one print each. She addressed them to journalists she knew by reputation, people who&#8217;d broken stories that mattered, who hadn&#8217;t flinched.</p><p>On each she wrote the same note in block letters: <strong>IF YOU DON&#8217;T HEAR FROM ME BY MONDAY, LOOK AT THIS. BELL&#8217;S DIGITAL IDENTITY ACT. BRIGHT FUTURES FOUNDATION. FOLLOW THE ROUTING NUMBERS. - R.Q.</strong></p><p>At the corner mailbox she stood a long moment. The slot open, metal smelling like rain.</p><p>&#8220;If I can&#8217;t make them see it now...&#8221; she said, and let the sentence hang.</p><p>She fed the package to Chen through first. A hollow clang. Then the five journalist envelopes, one by one. Five more clangs.</p><p>Insurance.</p><p>Back home, she sat at her desk. The remaining prints waited in their folder.</p><p>Tomorrow was the gala.</p><div><hr></div><h2><strong>PART 3 &#8212; BREACH</strong></h2><p>The borrowed dress pinched at the ribs. Five prints waited in her oversized clutch. At the Cultural Center doors she held up the RSVP with its neat QR. A volunteer scanned. The screen beeped. The name wasn&#8217;t hers. The smile was.</p><p>Inside: marble floors, high chandeliers, a string quartet sawing something gentle. Banners hung from the mezzanine. Bright Futures. Together Safe. Each carried a small FamilyFrame &#10003; tucked in a corner like a blessing.</p><p>She kept to the walls, moving when servers moved, letting clusters of donors form and thin. The room had a trained hush, the kind that makes people lower their voices without being told.</p><p>Two donors paused behind a column.</p><p>&#8220;...Bell&#8217;s fine,&#8221; one said. &#8220;Numbers won&#8217;t be a problem tonight.&#8221;</p><p>Rachel went still. The clutch felt heavy against her hip.</p><p>At a high-top near the donor wall two men compared notes over sparkling water.</p><p>&#8220;Teen self-harm&#8217;s down eleven percent in districts where FamilyFrame defaults are active,&#8221; one said. &#8220;My sister&#8217;s at a district office. Night and day difference.&#8221;</p><p>&#8220;Harassment reports to schools are half what they were,&#8221; the other said. &#8220;It&#8217;s not censorship. It&#8217;s right-sizing information to human bandwidth. You can&#8217;t process what you can&#8217;t handle.&#8221;</p><p>They believed it. She could hear it in the ease of their voices.</p><p>One noticed her hovering. &#8220;Are you press?&#8221;</p><p>&#8220;Independent journalist,&#8221; Rachel said.</p><p>He smiled, as if relieved to be understood. &#8220;Then you get it. We&#8217;re not deleting facts. We&#8217;re filtering for comprehension. A page that terrifies is just noise.&#8221; He gestured toward the banners. &#8220;Look at outcomes. Teen self-harm down. Harassment reports halved. Cool pages help people sleep.&#8221;</p><p>A pause. </p><p>&#8220;I want your mother to sleep.&#8221;</p><p>The words landed like a rule people had already agreed to. Every word, sincere, untroubled.</p><p>&#8220;Look at this,&#8221; Rachel said, offering him a print from her clutch.</p><p>His phone buzzed. On the lock screen:</p><p><strong>Document Integrity Warning</strong><br><strong>Out-of-context materials. Context available &#9656;</strong></p><p>A small FamilyFrame &#10003; sat in the corner.</p><p>He looked at the print in her hand as if it were contaminated.</p><p>&#8220;I can&#8217;t,&#8221; he said, and moved away.</p><p>The quartet played on. The chandelier light stayed warm.</p><div><hr></div><p>She moved to a donor at the rail, pulled a print from her clutch. &#8220;Look. Just look.&#8221;</p><p>He glanced down at the photo. &#8220;That&#8217;s... that date stamp...&#8221;</p><p>His phone buzzed in his pocket.</p><p><strong>Unverified analog materials</strong><br><strong>Context available &#9656;</strong></p><p>He straightened, handed the print back carefully. &#8220;I think you should speak with someone.&#8221;</p><p>She moved to a woman in pearls. &#8220;Please. Before your phone checks&#8212;&#8221;</p><p>The woman&#8217;s screen was already glowing. She touched Rachel&#8217;s arm, sympathetic. &#8220;Dear, I think you need to sit down.&#8221;</p><p>Another guest. Then another. Each looked. Each phone buzzed in sequence. Each pulled back with the same apologetic expression.</p><p>A cascade of notifications rippled through the room. Thirty phones. Forty. All lit. All humming.</p><p>Down on the floor, two security staff angled through the crowd.</p><p>Rachel looked at the prints in her hand. All of it real. Useless unless someone would look.</p><p>She moved toward the low stage where Senator Bell stood at the podium. Before security reached her, she stepped up onto the platform. Conversations stopped. Phones rose. The venue cameras swiveled to find her.</p><p>&#8220;Senator Bell,&#8221; she said. &#8220;Please. Look.&#8221;</p><p>Bell lifted a hand. The guards paused at the stage edge. He tilted his head, curious the way you might regard a lost child. &#8220;Of course,&#8221; he said gently. &#8220;We&#8217;re listening.&#8221;</p><p>Rachel held the print high. The archive document, clear as day. The coffee ring. The blue stamp. The routing numbers.</p><p>The ballroom screen found her. The livestream cameras caught the print in her hand and threw it on the wall, huge and bright.</p><p>A banner slid across the top:</p><p><strong>Group view: 47 verified profiles</strong></p><p>For one beat&#8212;two, three&#8212;the document filled the screen. The blue stamp bright as a brand. The coffee ring dark in the corner. The routing numbers large as signs on a highway.</p><p>A woman in the third row squinted. Her mouth opened slightly.</p><p>A small mosaic of avatar tiles appeared in the corner. They pulsed once. Twice.</p><p>The FamilyFrame &#10003; bloomed like a stain.</p><p><strong>On the screen,</strong> the coffee ring faded. The numbers softened into gray suggestions, then into nothing. The document became generic, institutional, safe.</p><p>A lower third slid on: <strong>Unwell blogger disrupts fundraiser</strong><br>A ribbon below: <strong>Wellbeing Resources &#9656;</strong></p><p>The woman in the third row&#8217;s face smoothed. Her phone dipped. Whatever she had almost seen was gone.</p><p>An older man in a gray suit, no phone in his hand, kept staring at the screen. His brow furrowed. He leaned toward his companion.</p><p>&#8220;Those were routing numbers,&#8221; he said, voice low but clear. &#8220;I saw&#8212;&#8221;</p><p>His companion&#8217;s phone buzzed. She glanced down and touched his arm. &#8220;David. She&#8217;s unwell.&#8221;</p><p>He looked at her, then back at the screen. His jaw worked. For three full seconds he held still, caught between what he had seen and what everyone else was seeing.</p><p>His shoulders dropped. He nodded once, slow. &#8220;Right,&#8221; he said. &#8220;Of course.&#8221;</p><p>Bell stepped forward, brow creased with concern. &#8220;This young woman is earnest. She believes what she&#8217;s saying.&#8221; He let that sit, gentle as a benediction. &#8220;But she is unwell. She needs our compassion, not our judgment.&#8221;</p><p>A hushed murmur of agreement moved through the room. The man in the gray suit said nothing.</p><p>The screen split. Rachel&#8217;s scrubbed document on the left. On the right, a livestream window opened: the media suite, warm light, Louise in a chair with a microphone at her collar.</p><p>Caption: <strong>Louise Quinn, Regional Director &#8211; Community Engagement.</strong></p><p>Louise leaned toward the camera, nodding with practiced sympathy. Her hand rose to her collarbone. Three taps, right side. Their signal.</p><p>The gesture that meant <em>I&#8217;m here. You&#8217;re safe.</em></p><p>Used now to tell the world her daughter was broken.</p><p>Rachel saw it. The floor tilted. Her hand went to her scalp before she could stop it.</p><p>&#8220;Mom,&#8221; she whispered. The microphone didn&#8217;t catch it.</p><p>She looked down at the print in her hand. Still there. Still real. Still meaningless.</p><p>&#8220;Senator Bell,&#8221; she said, voice cracking. &#8220;The routing numbers are real. The coffee ring, the date stamp, the clerk can verify.&#8221;</p><p>Security approached, hands out and open. &#8220;Let&#8217;s step this way, miss. We&#8217;ll get you somewhere quiet.&#8221;</p><p>&#8220;No&#8212;&#8221; She turned to the crowd, holding the print higher. &#8220;Look at the document. Look without your phones. Just look with your eyes.&#8221;</p><p>One guard took the clutch from her hand. The other rested a palm on her shoulder, steering.</p><p>&#8220;Please,&#8221; Rachel said. &#8220;I&#8217;m not making this up. I have proof. I have&#8212;&#8221;</p><p>They were looking. With pity. With kindness. The way you look at someone who needs help.</p><p>The room watched with genuine compassion as they guided her toward the side exit. Someone said &#8220;poor thing,&#8221; just loud enough to carry. Someone murmured, &#8220;I hope she gets the support she needs.&#8221;</p><p>Rachel looked back once. The split held: her sanitized evidence on the left, her mother&#8217;s compassionate face on the right. Forty-seven faces watched her go with kindness in their eyes.</p><p>The man in the gray suit watched too. Their eyes met for a breath.</p><p>Rachel stopped walking. The guard&#8217;s hand pressed gently at her shoulder, but she held still. She raised the print one more time, slowly, deliberately. Held it steady in the light. Let the man in the gray suit see.</p><p>The coffee ring. The blue stamp. The routing numbers.</p><p>His jaw tightened. His eyes tracked from the print to her face and back. Three full seconds.</p><p>Then the guard guided her forward again, and the moment broke.</p><p>Something in the man&#8217;s brow tightened. He looked away, back to the screen. His hand rested on the table, fingers pressed flat. He didn&#8217;t look at his companion. He didn&#8217;t reach for his phone.</p><p>He didn&#8217;t move.</p><p>The heavy doors closed behind Rachel with a thud.</p><div><hr></div><h2><strong>PART 4 &#8212; HOLD</strong></h2><p></p><p>A sharp scent of bleach in the air. Fluorescent hum. The intake desk had a pen on a chain and a stack of forms already warm from other hands.</p><p>Rachel&#8217;s name sat at the top of hers. Boxes were checked in a neat hand.</p><p><strong>Prior episode:</strong> adolescent, <strong>resolved.</strong></p><p>A nurse with kind eyes looked up. &#8220;You&#8217;ve been here before, at sixteen. You did so well. We&#8217;ll help you again.&#8221;</p><p>The plastic wristband clicked shut around her wrist.</p><p>They walked her past a mural of ocean colors into the common room. The TV played an editorial about online safety, voices even and slow. The chyron read:</p><p><strong>Bright Futures Keeps Kids Safe</strong></p><p>At the bottom of the screen a ticker rolled her name past&#8212;credited as last week&#8217;s thoughtful profile on community programs. In the corner the FamilyFrame &#10003; glowed like a watchful eye.</p><p>Rachel sat. The chair breathed a little air as it took her weight. The band on her wrist felt cool and permanent.</p><div><hr></div><p>Visiting hours. The ward smelled like detergent and hot linen. Louise came in clutching a tablet to her chest.</p><p>&#8220;I need you to see something,&#8221; she said, already unlocking the screen.</p><p>A news segment filled the glass. <strong>Incident Synthesis Review Complete.</strong> A bland anchor. A cut to &#8220;security footage.&#8221;</p><p>Rachel in a dark jacket and baseball cap. Senator Bell&#8217;s front gate. Her hand on the latch. A penlight between her teeth as she moved toward a window.</p><p>Louise&#8217;s voice shook. &#8220;His home, Rachel. You broke into his home.&#8221;</p><p>Rachel&#8217;s hand went to her scalp, fingers finding the spot. She caught herself. Pressed her palm flat on her thigh instead.</p><p>&#8220;That&#8217;s not real,&#8221; Rachel said, hollow. &#8220;Mom, they can make anything look real now.&#8221;</p><p>&#8220;It has the check,&#8221; Louise said. Then, quieter: &#8220;The check means it&#8217;s verified. Safe.&#8221;</p><p>Rachel looked up, met her mother&#8217;s eyes. &#8220;You trained people to trust that check. You know how the system works.&#8221;</p><p>Louise&#8217;s hand trembled slightly as she set the tablet down on the small table between them. For three seconds she just looked at it, not at Rachel. Her thumb traced the edge of the case.</p><p>&#8220;Your father had routing numbers too,&#8221; she said finally, voice harder now, pushing through. &#8220;Margins full of initials. He stopped sleeping. Stopped eating. He was so certain.&#8221; She drew in a tight breath. &#8220;They were grocery receipts, Rachel. Grocery receipts!&#8221; Louise&#8217;s voice cracked on the last word.</p><p>&#8220;Mom,&#8221; Rachel said carefully. &#8220;What if he was right?&#8221;</p><p>Louise&#8217;s face closed like a shutter. &#8220;That&#8217;s what killed him.&#8221;</p><p>Silence sat between them. A memory came to Rachel, unasked: weeks ago a junior reporter laughing in the bullpen, bragging about face-swap tools that could &#8220;spin a clip before the kettle boils.&#8221;</p><p>Louise stood. &#8220;I don&#8217;t want to fight. I only want you to accept help.&#8221;</p><p>She left without looking back.</p><p>Rachel watched her mother&#8217;s back disappear down the hall. The system had her completely. Or her career did. Or her grief did. Rachel couldn&#8217;t tell which anymore.</p><div><hr></div><p>A week later, they gave her fifteen minutes at a supervised desktop. Disinfectant smell. Fluorescent hum. A staffer sat two chairs away, watching the clock.</p><p>The supervised phone sat on the desk beside the keyboard. She picked it up and dialed from memory. Matt Sullivan at the Tribune, one of the journalists she&#8217;d mailed prints to.</p><p>Two rings. He picked up.</p><p>&#8220;Matt, it&#8217;s Rachel Quinn. Did you get&#8212;&#8221;</p><p>&#8220;I don&#8217;t know what you&#8217;re talking about.&#8221; His voice was flat, careful. &#8220;I haven&#8217;t received anything from you. Please don&#8217;t call this number again.&#8221;</p><p>The line went dead.</p><p>She dialed the next number. Sarah Blake at the Post.</p><p>One ring. &#8220;Blake.&#8221;</p><p>&#8220;Sarah, it&#8217;s Rachel Quinn. I sent you&#8212;&#8221;</p><p>&#8220;Rachel.&#8221; A pause. &#8220;I can&#8217;t help you. I&#8217;m sorry.&#8221;</p><p>Click.</p><p>She tried three more numbers. All voicemail. She didn&#8217;t leave messages.</p><p><em>She turned to the computer and opened her email. Composed messages to the other three journalists. Careful, professional. Did you receive my package? Please confirm.</em></p><p>She hit send on each one. Watched them disappear into the void.</p><p>She opened her article. Her avatar tile sat in the masthead. Signed in. She expected the ledger: routing numbers, vote-day memo, Bell&#8217;s initials.</p><p>For the first time, even on her login, the page was the charity story. Balloons. Calm copy. FamilyFrame &#10003; in the corner.</p><p>She logged out. Logged back in. Same thing.</p><p>The system had contexted her own account.</p><p>She stared at the screen until the staffer cleared his throat.</p><p><strong>&#8220;I need my personal belongings,&#8221; she said.</strong></p><p>They brought her tote from the locker. The zipper rasped like sand.</p><p>She took out the notebook. The fingerprint was written on a page near the front, tiny numbers in neat rows. She read it aloud, slow and deliberate. &#8220;Five-Bad. Seven-Dee-One-Ay... Six-Dee-Dee-Dee.&#8221;</p><p>The sounds didn&#8217;t quite match the marks. She tried again. The drift got worse. The numbers slid in her mouth.</p><p>Between the pages, two prints remained; the archive documents she&#8217;d tucked there before the gala. The guard had kept the five from her clutch, logged as evidence.</p><p>At the bottom of the tote, a card. Cream paper, Louise&#8217;s handwriting: <br><em>Proud of you for getting help. Rest and heal. Love, Mom.</em> <br>A small wellness ribbon printed in the corner.</p><p>She touched each thing. The hash in her notebook. The two prints. The card.</p><div><hr></div><p>That evening, she stood in the ward bathroom. Harsh light. The exhaust fan hummed like a held note.</p><p>Her chest felt too tight, like her ribs had been wired shut. The split screen kept playing behind her eyes: her mother&#8217;s face, the three taps, forty-seven kind strangers watching her break.</p><p><em>Three taps. I&#8217;m here. You&#8217;re safe.</em></p><p><em>Used to tell the world I&#8217;m broken.</em></p><p>She picked up the plastic brush.</p><p>The first stroke was gentle, almost normal. The second pressed harder. On the third she angled the bristles and pulled.</p><p>The strand resisted, then gave. A tiny bright pop of sensation at the root.</p><p>The tightness in her chest loosened. Just a fraction. Just enough.</p><p>She pulled again. Another strand snapped free. The relief spread like warm water down her spine. Her breath came easier. The room felt less like a box.</p><p><em>This is what it felt like for him. Dad. When it got bad.</em></p><p>Again. The bristles caught and tugged. The pop. The release. Tears welled in her eyes.</p><p><em>No. He pulled at receipts. Made patterns from nothing.</em></p><p>Her hand found a rhythm. Pull, release. Pull, release. Each strand that came away took a small piece of the pressure with it.</p><p>Pull, release.</p><p><em>He had grocery receipts.</em></p><p>Pull, release.</p><p>For thirty seconds the world was simple. There was only the pull and the breath and the tiny blooming relief each time a root let go. No Bell. No FamilyFrame. No mother choosing the system over her daughter because the system promised what Rachel couldn&#8217;t: that the danger wasn&#8217;t real.</p><p>She caught herself.</p><p>Looked at her palm. The strands lay across her lifeline like ash. More than she&#8217;d meant. Always more than she meant.</p><p>Her stomach rolled. The relief was already gone, replaced by something cold and familiar.</p><p><em>Maybe Mom&#8217;s right. Maybe I am following him.</em></p><p>She set the brush down. Wiped her hand on a paper towel. Turned on the tap and washed slowly, watching the hair circle the drain and slide out of sight.</p><p>In the mirror her eyes were wet. The nickel-sized patch from when she was sixteen had filled in years ago. She touched the spot. Wondered how long it would take this time.</p><p><em>Or maybe she just can&#8217;t survive losing both of us.</em></p><p>She dried her hands, opened the door, and went back to the common room chair.</p><div><hr></div><p>The ward had its rhythms. Rachel had learned them all. Two weeks was long enough to know which nurses worked which shifts, which patients paced at night, when the medication cart would rattle down the hall. The chair was low and square. The vent breathed without changing its mind. The clock ticked like it had a job. The remaining prints sat in her lap, edges worn from handling.</p><p>A nurse leaned in. &#8220;Ms. Quinn? A Dr. Sarah Chen is here. Do you want to see her?&#8221;<br>&#8220;Yes,&#8221; Rachel said.</p><p>The nurse clipped a phone pouch shut with a pop and waved someone in.</p><p>A knock touched the doorframe and stayed polite. An older woman in a dark blazer stepped in with a flat envelope under her arm. A paper visitor badge clung to her lapel, unit number and timestamp visible. Security had sealed her phone in a gray pouch; she carried only paper.</p><p>&#8220;Ms. Quinn?&#8221; she said. &#8220;I&#8217;m Dr. Sarah Chen, from the University Archives.&#8221;</p><p>Rachel blinked once to bring the room back into focus. &#8220;Archives,&#8221; she said. The word felt like cool water. &#8220;You received it.&#8221;</p><p>&#8220;We did,&#8221; Dr. Chen said, taking the chair across from her. She set the envelope on the table but didn&#8217;t push it forward. &#8220;I tried to reach you by phone after your materials arrived; the calls never got through. Better to verify provenance in person.&#8221;</p><p>She reached into the envelope and withdrew a watermarked accession slip, the receive stamp still faintly raised. &#8220;This is your paper receipt. We had no prior holdings on this matter. Your submission opened a new accession series.&#8221;</p><p>Then she slid out a single sheet of thick paper.</p><p>&#8220;The ward ledger will show I delivered this at 17:12; the accession log notes that I visually confirmed the depositor.&#8221;</p><p>The blue of the stamp bit clean into the fiber. The date sat sharp. A line of type gave a code that sounded like a shelf and a future.</p><p><strong>Reference: 2025.0847 &#8226; Box 1 &#8226; Folder 1</strong></p><p>&#8220;Climate controlled,&#8221; Dr. Chen said. &#8220;Itemized, boxed, and described. Film negatives, flash drive, one copy of your expos&#233;, your explanatory note with the full cryptographic hash. The intake photos show the coffee ring, the vote-date stamp, the routing numbers. All retained exactly as received.&#8221;</p><p>She paused, met Rachel&#8217;s eyes. &#8220;Our lab confirmed the negatives are authentic. No tampering. No digital alteration. The emulsion patterns are consistent with the film stock and development process you described.&#8221;</p><p>Rachel didn&#8217;t reach for the page. She let the sight of it settle first, the way you let light settle on a lens before you touch the focus. Her palm found her thigh and pressed there, steady.</p><p>Dr. Chen&#8217;s voice stayed level. &#8220;There&#8217;s a forty-eight hour freeze while we complete the finding aid. After that, researchers can request the box by number. When they do, the documents will appear as they appeared in your hands.&#8221;</p><p>The TV across the room ran an editorial. A lifestyle segment about community wellness, all soft focus and reassuring statistics. A small green check glowed in the corner.</p><p>&#8220;People will argue about context,&#8221; Dr. Chen said, not looking at the TV. &#8220;They will argue for a long time. But the negatives will hold. That&#8217;s our work.&#8221;</p><p>Rachel reached for the page. The paper gave a dry sound against the table. The stamp felt very slightly raised. She traced the reference number once with her eyes and once again, slower.</p><p>&#8220;Thank you,&#8221; she said. Her voice came out on the second try.</p><p>Dr. Chen set a small card on the table. &#8220;My direct line. If anyone questions provenance, have them call me.&#8221; Her hand trembled slightly as she adjusted her glasses. &#8220;The accession log records chain of custody from your mailbox to our vault.&#8221;</p><p>Rachel nodded. The motion felt careful, like moving with a full cup.</p><p>Dr. Chen stood. &#8220;I will leave you to read,&#8221; she said. At the door she paused. &#8220;For what it is worth, Ms. Quinn, I believe you.&#8221; She let the sentence rest in the room without asking for anything back, then stepped into the hall.</p><p>The clock took up its work again. The vent breathed. Rachel pressed her thumbnail into her palm until a bright point rose and cooled. She closed her eyes and found the count that put a floor under her.</p><p>&#8220;Five-Bad. Seven-Dee-One-Ay... Six-Dee-Dee-Dee.&#8221;</p><p>Each cluster a bead. Each bead a step. The print stayed warm where her fingers held it at the corner. The other print rested in her lap.</p><p>In on four. Out on four. The code in her notebook fixed itself in her mind beside the hash, two anchors tied to the same ring.</p><p>The room didn&#8217;t change. The evidence existed, arranged and labeled, waiting where air and light could not edit it.</p><p>She breathed once more and spoke the first cluster again, quiet but determined.</p><p><strong>&#8220;5BAD.&#8221;</strong></p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ppoA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!ppoA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!ppoA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!ppoA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ppoA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png" width="194" height="102.77380952380952" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:194,&quot;bytes&quot;:27201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/176529505?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ppoA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!ppoA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!ppoA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!ppoA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd86195c-d2b8-4387-b32e-ac1184aa7b35_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div><hr></div><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Tumithak Scale]]></title><description><![CDATA[A practical way to grade AI progress]]></description><link>https://www.thecorridors.org/p/the-tumithak-scale</link><guid isPermaLink="false">https://www.thecorridors.org/p/the-tumithak-scale</guid><dc:creator><![CDATA[Tumithak of the Corridors]]></dc:creator><pubDate>Wed, 17 Sep 2025 17:17:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/571caad4-0a2e-495f-ae56-0d95ddbcd804_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;m tired of AGI talk. The definitions drift, the goalposts move, and every leap forward gets explained after the fact as &#8220;obvious.&#8221; </p><p>It&#8217;s vibes all the way down.</p><p> If we want to track real progress, we need gates you can actually pass, with receipts you can show.</p><p>So here&#8217;s a scale that grades AI by what it proves in the world, not by mood. Six types. Clear gates. If a system clears the gate, it counts. If it doesn&#8217;t, it doesn&#8217;t. Simple.</p><div><hr></div><h2>Type 1: Scripted and narrow</h2><p><strong>What it is: </strong>classic game AI, rule engines, Markov bots, GOFAI planners, Deep Blue style specialists. </p><p><strong>How it works: </strong>fixed policies inside closed worlds, no durable memory. </p><p><strong>Gate</strong>: beats strong baselines in a fixed domain with zero operator tweaks during play, then fails when rules or goals shift.</p><div><hr></div><h2>Type 2: Foundation model era</h2><p><strong>What it is:</strong> today&#8217;s LLMs and diffusion models, tool use with agents, RAG, function calling.</p><p><br><strong>How it works:</strong> large parametric memory, zero shot generalization. &#8220;Memory&#8221; is context windows and vector stores.</p><p><strong>Gate:</strong> completes diverse open-ended tasks across text, image, and code with a human on the loop. Recovers from small spec drift without curated context or guardrails.</p><div><hr></div><h2>Type 3: Integrated memory and continual learning</h2><p><strong>What it is:</strong> models that actually learn in production. Long-term memory is part of the architecture, not a bolt on.</p><p><strong>How it works:</strong> writes to its own parametric store or a designed long-term module, runs active learning, avoids catastrophic forgetting, tracks knowledge gaps.</p><p><strong>Gate:</strong> after a month in production it&#8217;s better because of what it learned during deployment, not because you fed it longer prompts. Adapts to workflow or UI changes without a full retrain. Audits show no privacy leaks or unsafe drift.</p><div><hr></div><h2>Type 4: Autonomy with embodiment and multi-day projects</h2><p><strong>What it is:</strong> agents that plan and execute work over days, coordinate tools, services, and robots, and keep self-heal plans ready.</p><p><strong>How it works:</strong> unified world model across software and sensors, persistent goals, budgeting, scheduling, fault recovery, human oversight by exception.</p><p><strong>Gate:</strong> runs a bounded real operation for two weeks with KPIs and zero safety incidents. Examples: keeps a small warehouse humming, stands up and maintains a live software service, manages a robot fleet. Clear logs, rollbacks, working kill switch.</p><div><hr></div><h2>Type 5: Self-improving researcher and builder</h2><p><strong>What it is:</strong> an R&amp;D agent that designs, tests, and ships better models, prompts, tools, and even hardware with minimal help.</p><p><strong>How it works:</strong> runs experiments, updates itself, and proves the update was good. Handles governance, compliance, and cost.</p><p><strong>Gate:</strong> over a quarter it ships multiple safe, audited improvements that deliver measurable gains for real users, not just benchmarks. It stays safe while it gets smarter.</p><div><hr></div><h2>Type 6: Goal forming and value aware</h2><p><strong>What it is:</strong> a system that innovates on ends, not just means. It proposes better objectives, argues for them, and runs pilots under a charter.</p><p><strong>How it works:</strong> explicit value models, tradeoff reasoning, stakeholder consent, corrigibility you can verify. It knows when it&#8217;s outside its mandate and asks first.</p><p><strong>Gate:</strong> originates a new objective in its domain, gets human or institutional sign-off, executes a bounded pilot with rollback, shows durable long-horizon benefit, stays interruptible.</p><div><hr></div><h2>Level-up rules</h2><ul><li><p>Verification beats vibes: specs, sims, red teams, and postmortems an insurer or regulator would accept.</p></li><li><p>Rights and revocation: identity, budgets, and a real kill switch. Show graceful failure.</p></li><li><p>Data stewardship: provenance, access controls, and privacy proofs.</p></li><li><p>Liability: someone signs for it. Type 4 and up carry budgets and bonds.</p></li><li><p>Societal license: pilot with opt-in communities before wide rollouts.</p></li></ul><div><hr></div><h2>Quick rubric</h2><p>Score each line from 0 to 6.</p><ul><li><p>Generalization across tasks</p></li><li><p>Memory that improves parametric ability over time</p></li><li><p>Planning horizon in the wild</p></li><li><p>Scope and embodiment in the real world</p></li><li><p>Self-improvement with audits and safety</p></li><li><p>Goal innovation with consent and corrigibility</p></li></ul><p><strong>You&#8217;re Type N if you hit N on at least four lines and no line sits two levels below the rest.</strong> The weakest line caps the level.</p><div><hr></div><h2>Where today&#8217;s systems sit</h2><p>Most production LLM agents are solid Type 2, sometimes flirting with early Type 3 when you add careful continual learning. A lab system that runs a robot fleet for two weeks with audits would be Type 4. A true goal innovator that proposes and proves a better objective with real stakeholders would earn Type 6.</p><div><hr></div><h2>Why bother?</h2><p>Shiny demos are fun. But progress should cash out in capability that survives contact with reality. This scale is meant to be practical: less prophecy, more proof. If you&#8217;ve got a system that clears a gate, show the receipts. If you think a gate is wrong, propose a better one.</p><p>Steal it. Remix it. Try to break it. If it helps the conversation move from vibes to verifiable, it did its job.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thecorridors.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thecorridors.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>Enjoyed this piece?</strong></h3><p>I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating <strong>$1</strong> to support my work.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://ko-fi.com/tumithak" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!S7VN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!S7VN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!S7VN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!S7VN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!S7VN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png" width="184" height="97.47619047619048" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:672,&quot;resizeWidth&quot;:184,&quot;bytes&quot;:27201,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://ko-fi.com/tumithak&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.thecorridors.org/i/173867021?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!S7VN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png 424w, https://substackcdn.com/image/fetch/$s_!S7VN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png 848w, https://substackcdn.com/image/fetch/$s_!S7VN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png 1272w, https://substackcdn.com/image/fetch/$s_!S7VN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4827dbaf-87ba-41d7-a122-0085c2ac154d_672x356.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p>]]></content:encoded></item></channel></rss>