OpenAI’s push into classified U.S. military networks collided with a consumer backlash and a quieter but consequential infrastructure pivot, underscoring the tightrope the artificial intelligence (AI) giant is now walking between national security ambitions and user trust.
The controversy ignited around Feb. 28, when OpenAI confirmed an agreement with the U.S. Department of Defense to deploy advanced AI systems, including ChatGPT technology, on classified networks.
The company framed the deal as lawful and tightly controlled, but critics saw something else entirely: a consumer-facing AI platform stepping deeper into military operations at a moment when public scrutiny of AI is already running hot.
OpenAI said the agreement includes explicit guardrails, including bans on mass domestic surveillance of U.S. persons, autonomous weapons control, and high-stakes automated decision-making systems.
It also stressed technical constraints, including cloud-only deployments and retained control over safety systems, alongside compliance with U.S. legal frameworks such as the Fourth Amendment and Department of Defense rules governing human oversight of lethal force.
Still, the optics were not exactly subtle.
Within hours of the announcement, a grassroots boycott campaign under the banner #QuitGPT began circulating across social media, urging users to cancel subscriptions, delete the app, and migrate to competitors. The backlash quickly translated into measurable shifts in app behavior.
Screenshot from the website Quitgpt.org
According to app analytics data, U.S. ChatGPT uninstall rates jumped 295% day over day on Feb. 28, while downloads slipped 13% the next day and another 5% after that.
User sentiment took an even sharper turn in app reviews, where one-star ratings spiked 775% in a single day and continued climbing, while five-star reviews dropped by roughly half.
Competitors benefited from the moment. Anthropic’s Claude app recorded download increases of 37% to 51% during the same period, briefly overtaking ChatGPT in U.S. App Store rankings as users explored alternatives. Organizers of the boycott claimed millions of actions tied to the campaign, including cancellations and pledges, though exact figures vary depending on the source and how participation is defined.
OpenAI moved quickly to contain the fallout. Chief Executive Officer Sam Altman acknowledged shortcomings in how the deal was communicated, calling the rollout “opportunistic and sloppy,” and within days the company revised the agreement language.
The updated terms explicitly prohibited intentional domestic surveillance using AI systems and added stricter requirements for any intelligence agency involvement, including separate contractual layers. The company also announced plans to coordinate with other AI developers on shared safety frameworks, positioning the changes as a tightening rather than a retreat.
While the backlash cooled somewhat after the revisions, the episode left a mark, highlighting how quickly consumer sentiment can shift when AI crosses into sensitive territory. At the same time, OpenAI was making less visible but strategically significant moves behind the scenes.
In early March, the company reorganized its computing and infrastructure operations, splitting responsibilities into three focused groups covering data center design, commercial partnerships, and on-the-ground facility management. The restructuring reflects a broader shift in how OpenAI plans to scale its computing power.
Rather than aggressively building and owning massive data centers tied to its ambitious “Stargate” initiative, the company is leaning more heavily on leasing and partnerships with cloud providers. Microsoft’s Azure remains central to that strategy, while OpenAI has also expanded relationships with Oracle and Amazon Web Services as part of multiyear capacity agreements.
Earlier plans involving large-scale, jointly owned infrastructure projects have been scaled back or reworked, as the financial and logistical realities of building AI supercomputing capacity at scale become harder to ignore. Instead, OpenAI is focusing on controlling key elements such as custom hardware and chips, while outsourcing the physical infrastructure layer to established hyperscalers.
The two developments — one public and contentious, the other operational and pragmatic — are not directly linked, but together they sketch a company moving quickly on multiple fronts, sometimes faster than its messaging can keep up.
For OpenAI, the challenge now is less about whether it can build powerful systems and more about how it manages the consequences of deploying them in places where the stakes are anything but theoretical.