AI Daily Briefing -- March 11, 2026

TL;DR

  • Anthropic vs. the Pentagon escalates: Microsoft files amicus brief backing Anthropic's lawsuit against its "supply chain risk" designation; 30+ OpenAI and Google DeepMind employees, including Jeff Dean, also voice support.
  • FTC AI policy statement due today: The March 11 deadline for the FTC to clarify how Section 5 applies to AI models arrives, with implications for state-vs-federal AI regulation.
  • Nvidia bets big on Mira Murati's Thinking Machines Lab: A multiyear deal gives the startup access to 1+ gigawatt of next-gen Vera Rubin chips and a significant Nvidia investment.
  • Oracle stock surges on AI-fueled earnings: Cloud infrastructure revenue up 84% YoY, AI infrastructure revenue up 243%, and the company projects $90B in fiscal-year revenue.
  • GTC 2026 looms: Nvidia's annual conference kicks off next Monday in San Jose, with physical AI and robotics taking center stage.

Lead Story

The AI Industry Closes Ranks Around Anthropic

The standoff between Anthropic and the U.S. Department of Defense has become the defining story in AI this month, and Tuesday brought a significant escalation in the industry's response.

Microsoft filed an amicus brief urging a federal court to temporarily block the Pentagon's designation of Anthropic as a supply chain risk -- a label historically reserved for foreign adversaries like Huawei. Microsoft argued that the ban could "disrupt" the military's existing AI deployments and "hamper" warfighters who rely on products built with Anthropic's technology.

The brief follows an unprecedented joint statement from more than 30 employees at rival firms OpenAI and Google DeepMind, including Google DeepMind chief scientist Jeff Dean, who called the government's action "an improper and arbitrary use of power." Over 200 founders, engineers, and investors have signed a separate open letter urging the Department of Defense to withdraw the designation.

The dispute centers on two red lines Anthropic refused to cross: allowing Claude to be used for mass domestic surveillance of American citizens, or for autonomous weapons without human oversight. When negotiations broke down, Anthropic walked away from a $200 million Pentagon contract. The Pentagon responded by designating the company a supply chain risk on February 27 and almost immediately signed a deal with OpenAI instead.

That pivot triggered its own fallout. Caitlin Kalinowski, OpenAI's head of robotics, resigned on March 7, saying domestic surveillance without judicial oversight and lethal autonomy without human authorization "are lines that deserved more deliberation than they got." Anthropic CEO Dario Amodei escalated the rhetoric further, calling OpenAI's safety assurances "safety theater" and accusing Sam Altman of telling "straight up lies" about the deal's guardrails. The consumer response has been measurable: ChatGPT uninstalls spiked nearly 300% in the days after the Pentagon deal, and Claude briefly hit No. 1 on the U.S. App Store before ChatGPT reclaimed the top spot.

The financial stakes are substantial -- Anthropic risks $150 million or more in annual defense revenue, and the dispute reportedly intensified over how AI would be deployed in the Golden Dome missile defense program. But the case has crystallized a question the industry has debated in abstract for years: when a government demands unrestricted access to AI systems, who gets to draw the line? For now, a federal court in Northern California will decide.


In Other News

FTC faces its own March 11 deadline on AI. Today is the date by which the Federal Trade Commission must publish a policy statement on how Section 5 of the FTC Act applies to AI models, as mandated by President Trump's December 2025 executive order on AI governance. The statement is expected to address when state AI laws requiring "alterations to truthful AI outputs" are preempted by federal law -- a framework that could reshape the patchwork of state-level AI regulations that took effect January 1. The Commerce Department faces the same deadline to identify "burdensome" state AI laws for review. Legal analysts expect the FTC statement to stake out a federal floor, though the underlying preemption theory -- that state bias-mitigation requirements could compel "deceptive" outputs under federal law -- remains untested.

Nvidia invests in Mira Murati's Thinking Machines Lab. The AI startup founded by former OpenAI CTO Mira Murati has secured a multiyear deal with Nvidia that includes a "significant investment" and access to at least one gigawatt of next-generation Vera Rubin systems. Thinking Machines Lab, now roughly 120 employees, has raised approximately $2 billion at a valuation near $10-12 billion within its first year -- though it has weathered leadership departures, with cofounders Barret Zoph and Luke Metz returning to OpenAI in January. The deal underscores Nvidia's strategy of locking in next-gen chip commitments well ahead of production.

Oracle rides AI infrastructure to a blowout quarter. Oracle shares surged as much as 15% after Q3 results showed cloud infrastructure revenue up 84% year-over-year to $4.9 billion and AI infrastructure revenue up 243%. Total revenue hit $17.2 billion, and the company projected $90 billion for the fiscal year beginning in June. Remaining performance obligations -- contracted future revenue -- ballooned 325% to $553 billion. The results are the strongest signal yet that enterprise AI spending is translating into real infrastructure revenue, not just GPU orders.

Anthropic study complicates the "AI jobs" narrative. A new economic study from Anthropic published Monday found that AI's impact on employment is "more complicated than you think." Rather than wholesale replacement, the research points to task-level automation that reshapes roles rather than eliminating them -- a finding that pushes back against both the doom and the boom forecasts.


X / Social Pulse

The Anthropic-Pentagon fight dominated online discourse. Pentagon CTO Emil Michael's public attack on Dario Amodei -- calling him someone with a "God complex" who "wants nothing more than to try to personally control" the military -- drew sharp backlash from the AI safety community and working researchers. The Stanford Daily ran an editorial calling the dispute "a wakeup call" for AI and democracy. The open letter supporting Anthropic crossed 200 signatures. Meanwhile, Cognizant's research on enterprise AI adoption -- finding that "plug-and-play AI is a myth" -- circulated widely among enterprise tech accounts, with CIOs noting it validated what they had learned the hard way.


One to Watch

GTC 2026 (March 16-19, San Jose). Nvidia's annual developer conference arrives next week, and all signals point to "physical AI" as the headline theme. Jensen Huang's Monday keynote will cover the full stack -- accelerated compute, AI factories, agentic systems, open models, and robotics. With venture investment in physical AI on pace to nearly double last year's level, and Nvidia aggressively locking in next-gen compute deals (see: Thinking Machines), GTC could mark the moment the industry formally pivots from chatbots-as-main-event to AI that operates in warehouses, factories, and the physical world.


Quick Hits

  • Cognizant research: 86% of enterprises plan to increase AI budgets this year, but most prefer specialized "AI Builder" firms over DIY deployments. (Cognizant)
  • Google's Pentagon deal: Google will provide AI agents to the DOD's 3-million-person workforce for unclassified tasks, threading the needle between the OpenAI and Anthropic positions. (Axios)
  • Jim Cramer on AI stocks: CNBC's Cramer flagged an unnamed AI stock as "in the sweet spot" ahead of GTC, calling Nvidia's conference a near-term catalyst. (CNBC)
  • Colorado delays its AI Act: Implementation pushed from February 1 to June 30, 2026, as the state reassesses requirements in light of the federal preemption executive order. (King & Spalding)
  • "The AI Bubble" discourse grows: The Bronx Science Survey published a student investigation into whether AI valuations have outrun fundamentals -- a question Oracle's results may partially answer.

The through-line today is accountability -- who answers for how AI is built, deployed, and governed. Anthropic is betting its federal business on the principle that some uses should be off-limits regardless of who is asking. The FTC is being asked to decide where federal authority ends and state authority begins. And the market, for its part, keeps writing bigger checks. The answers that emerge from courtrooms, regulatory offices, and next week's GTC stage will set the terms for the rest of 2026.


Sources