On Super Bowl Sunday, February 9, 2026, two AI companies spent a combined eight figures on advertisements. OpenAI ran a spot promoting ChatGPT. Anthropic ran one with the words "Deception," "Betrayal," "Treachery," and "Violation" plastered across the screen — a pointed attack on OpenAI's decision to introduce advertising into ChatGPT for free-tier users.
Sam Altman called Anthropic's ad "deceptive." Dario Amodei said mass monetization is "not essential at this stage of AI development." Demis Hassabis at Google said his company has "no plans" for ads in Gemini.
I argue that all three are performing a version of honesty that conveniently obscures a larger truth: none of them has a sustainable business model, and each of their chosen paths carries risks they are not discussing publicly.
What OpenAI Is Not Saying
OpenAI's move to introduce ads in ChatGPT is being framed as a modest revenue diversification play. The company emphasizes that ads will only appear for free and lower-tier users. Altman insists OpenAI would "obviously never" run ads in the manner Anthropic depicted.
Here is what the framing omits.
OpenAI reportedly burns through cash far faster than its approximately $3.7 billion in annualized revenue comes in. The company has raised capital at a valuation of over $300 billion. To justify that valuation, OpenAI needs to become one of the largest revenue-generating technology companies in history.
Subscription growth has plateaued. ChatGPT Plus conversions have remained flat for three consecutive quarters, according to estimates from Sensor Tower and similar analytics firms. The free tier, which OpenAI initially positioned as a funnel to paid subscriptions, has instead become the product for the majority of users. An estimated 85-90% of ChatGPT users have never paid.
Ads are not a choice. They are an inevitability dictated by the arithmetic of the business. The question was never whether, but when.
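That arithmetic can be made concrete with a back-of-envelope sketch. Only the 85-90% free share and the $3.7 billion annualized revenue figure come from the reporting above; the user base, the $20/month Plus price, and the ad ARPU are placeholder assumptions, not disclosed numbers:

```python
# Back-of-envelope: why a large free tier pushes toward ads.
# The ~85-90% never-paid share is from the reporting; the user base
# and per-user figures below are hypothetical placeholders.

def subscription_revenue(users: float, paid_share: float,
                         monthly_price: float) -> float:
    """Annual subscription revenue from the paying minority."""
    return users * paid_share * monthly_price * 12

def ad_revenue(users: float, free_share: float,
               annual_arpu: float) -> float:
    """Annual ad revenue if free users are monetized at a given ARPU."""
    return users * free_share * annual_arpu

USERS = 500e6        # hypothetical total user base
PAID_SHARE = 0.12    # i.e. ~88% never pay (the 85-90% estimate)

subs = subscription_revenue(USERS, PAID_SHARE, 20.0)  # $20/mo Plus tier
ads = ad_revenue(USERS, 1 - PAID_SHARE, 10.0)         # $10/yr ARPU, assumed

print(f"subscriptions: ${subs/1e9:.1f}B, ads: ${ads/1e9:.1f}B")
```

With these placeholder inputs, even a $10-per-year ad ARPU on the free majority produces revenue on the order of the company's entire reported $3.7 billion annualized total. That is the arithmetic behind "not a choice."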
The risk that OpenAI is not discussing: advertising incentives reshape products. Google Search began as a tool for users. Twenty years of advertising optimization turned it into a tool for advertisers, with users as the product. The same dynamic will apply to ChatGPT. When an AI assistant's revenue depends on engagement, the assistant's incentive shifts from answering your question quickly to keeping you in the conversation longer.
OpenAI's ad model introduces a structural conflict of interest into the most intimate software relationship most people have ever had. That is worth saying directly, without the euphemisms.
What Anthropic Is Not Saying
Anthropic's Super Bowl ad was effective marketing. It positioned the company as the principled alternative: the AI lab that respects users, that refuses to monetize attention, that prioritizes safety.
The framing omits a different arithmetic problem.
Anthropic's annualized revenue is estimated at somewhere between $800 million and $1.2 billion. Against operating costs that include some of the most expensive computing infrastructure in history, this likely represents significant net losses.
Anthropic can afford to be principled because Amazon and Google — its two largest investors — are subsidizing that principle. Between them, they have committed well over $3 billion. These are not philanthropic donations. They are strategic bets by companies that want Anthropic's models embedded in AWS and Google Cloud.
When Dario Amodei says mass monetization is "not essential at this stage," the unspoken clause is: "because our investors' cloud platforms are monetizing our technology at scale through enterprise API contracts."
Anthropic is not ad-free because of superior ethics. It is ad-free because it has a different revenue model — one that depends on the continued generosity of two of the largest technology companies on earth. If that generosity contracts, Anthropic's principles will be tested in ways the Super Bowl ad did not contemplate.
What Google Is Not Saying
Demis Hassabis's statement that Google has "no plans" for ads in Gemini is technically true and strategically meaningless. Google does not need ads in Gemini because Google is an advertising company. Every interaction with Gemini feeds data back into the advertising profile that Google has been building on you for two decades.
Google's AI strategy has always been defensive. Gemini exists not to generate direct revenue but to prevent users from migrating to ChatGPT or Claude for queries they would otherwise make on Google Search — queries that Google monetizes at approximately $0.04-0.08 per search through advertising.
The risk Google is not discussing: AI chatbots fundamentally undermine the search advertising model. A search query generates a page of results with ten blue links and multiple ad placements. A chatbot query generates one answer. There is no "page 2" in a chatbot. There is no ad inventory in a direct answer.
Google is subsidizing Gemini's development with search advertising revenue while Gemini simultaneously cannibalizes the search advertising model. This is the innovator's dilemma applied to a $175 billion annual revenue business.
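The scale of that exposure can be estimated from the figures above — roughly $175 billion in annual search advertising revenue, monetized at roughly $0.04-0.08 per query. The migration shares in the sketch below are hypothetical inputs, not reported numbers:

```python
# Back-of-envelope: search ad revenue at risk if queries migrate to
# chatbots. Inputs from the article: ~$175B annual search ad revenue,
# ~$0.04-0.08 monetized per query. Migration fractions are assumptions.

SEARCH_AD_REVENUE = 175e9  # annual search advertising revenue, USD
REV_PER_QUERY_LOW, REV_PER_QUERY_HIGH = 0.04, 0.08

# Implied annual query volume consistent with those per-query figures.
queries_high = SEARCH_AD_REVENUE / REV_PER_QUERY_LOW   # ~4.4 trillion
queries_low = SEARCH_AD_REVENUE / REV_PER_QUERY_HIGH   # ~2.2 trillion

def revenue_at_risk(migration_fraction: float) -> float:
    """Ad revenue lost if this fraction of queries moves to a chatbot
    with no equivalent ad inventory (the 'one answer' case above)."""
    return SEARCH_AD_REVENUE * migration_fraction

for share in (0.05, 0.10, 0.20):
    print(f"{share:.0%} migration -> ${revenue_at_risk(share)/1e9:.0f}B at risk")
```

Even a single-digit migration share puts billions of dollars of annual ad inventory at risk, which is why "no plans for ads in Gemini" says nothing about the underlying exposure.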
The Safety Departure Problem
While the CEOs exchange public barbs about advertising, something more consequential is happening at their companies.
CNN reported on February 11, 2026, that AI safety researchers are leaving both OpenAI and Anthropic in increasing numbers. The former head of Anthropic's Safeguards Research team stated publicly that "the world is in peril." A departing OpenAI researcher cited "a potential for manipulating users in ways we don't have the tools to understand, let alone prevent."
OpenAI reportedly fired a top safety executive after she opposed the rollout of an "adult mode" for ChatGPT. This is the same company that positioned itself as the responsible steward of artificial general intelligence.
The pattern is consistent: safety researchers arrive, gain access to internal capabilities, become alarmed, and leave — or are pushed out when their concerns conflict with product timelines. This has happened at OpenAI (repeatedly), at Anthropic (where the irony is sharpest), and at Google DeepMind (where several alignment researchers departed in 2025).
I do not know whether these departures signal genuine existential risk or the predictable friction between researchers and product teams. What I do know is that the Super Bowl ad war is a distraction. While OpenAI and Anthropic argue about advertisements, the people who understand these systems best are walking out the door and warning anyone who will listen.
Perhaps we should be listening to them instead of watching the ads.