Three words. That was all Elon Musk offered. “Proceed with caution.” The response, posted on X in reply to a cybersecurity researcher’s observation that Amazon had called a mandatory all-hands meeting to discuss AI-related outages, was brief enough to be ignored but pointed enough to stick around. The warning carried a certain weight because it came from the man who has said AI will completely replace human coding by the end of this year. It wasn’t alarmist or technical, just quietly serious in a way the rest of the internet noticed right away.
The circumstances that led to Musk’s remark had been developing for several weeks. More than 22,000 users reported an outage on Downdetector earlier in March when Amazon’s website and shopping app went down for a while, preventing them from checking out, viewing product prices, or accessing their accounts.
Key Facts: Amazon AI Outages & Elon Musk’s Warning
| Item | Detail |
| --- | --- |
| Company | Amazon.com, Inc. — global e-commerce, cloud computing, and AI services giant |
| Incident date | Early March 2026 — Amazon website and app down for 22,000+ users; unable to check out or access accounts |
| Amazon’s explanation | “A software code deployment” — one incident linked to AI-assisted changes; none involved AI-written code, per the company |
| Internal meeting | Mandatory “TWiST” (This Week in Stores Tech) meeting held March 11, 2026, led by SVP Dave Treadwell |
| Internal language used | “High blast radius” — Amazon’s own term for AI-related incidents with wide-reaching system consequences |
| Elon Musk’s response | “Proceed with caution” — posted publicly on X in reply to cybersecurity expert Lukasz Olejnik, March 11, 2026 |
| Amazon AI investment | $200 billion capex projected for 2026, up from $131 billion in 2025 |
| Recent layoffs | 16,000 additional staff cuts in January 2026, following late-2025 rounds citing efficiency and cultural alignment |
| Expert quoted | Lukasz Olejnik — cybersecurity consultant & visiting senior research fellow, Dept. of War Studies, King’s College London |
| Reference | Fortune.com — original reporting |
At the time, Amazon explained it as the outcome of “a software code deployment,” which is both technically accurate and, depending on how you interpret it, not the most insightful explanation a company with a $200 billion capital expenditure budget could provide. The phrase is the business equivalent of stating that “a vehicle in motion” was the cause of an automobile collision.
What transpired later made the situation more intriguing and unsettling for Amazon’s communications team. Internal briefs and emails detailing a “trend of incidents” over the past few months were obtained by The Financial Times; many of these incidents had what Amazon referred to as a “high blast radius.” The extent of the harm that AI-assisted code modifications had been causing throughout Amazon’s internal systems was summed up in that phrase, which was striking in its military-adjacent directness.
It’s not the language that businesses expect to see reported back to them in the financial press, but rather the language that engineers use among themselves. Amazon’s senior vice president of e-commerce services, Dave Treadwell, began an internal email with a candor that is uncommon in big businesses: “Folks, as you likely know, the availability of the site and related infrastructure has not been good recently.”
There’s actually a refreshing quality to that sentence. Has not been good recently. It sounds like a man who, in a world of meticulously controlled corporate language, was tired of pretending.
Following the news, Amazon’s stance was predictable, if not entirely convincing. A spokesperson told reporters that the TWiST meeting, the weekly “This Week in Stores Tech” session that internal sources described as having been transformed into a required deep dive, was merely a routine operational meeting. Routine stuff. Business as usual. The company also clarified that none of the incidents involved code actually written by AI, that only one outage was connected to AI at all, and that AWS was not involved in any of them.
The company further disputed the FT’s claim that junior engineers would now need senior approval for AI-assisted changes, saying no such requirement was in effect. Maybe all of that is true. But a company planning to spend $200 billion on AI-related capital expenditures in a single year also has a strong institutional incentive to minimize the scope of its AI failures.
Lukasz Olejnik, a visiting senior research fellow at King’s College London’s Department of War Studies and the cybersecurity researcher whose post Musk addressed, has been one of the more perceptive voices attempting to find the real argument in all of this. He took care to frame his concern as an argument against speed divorced from judgment rather than an argument against the deployment of AI, which he described as inevitable and unstoppable.
Moving quickly because the technology is fascinating, because rivals are moving quickly, or because executives have publicly declared that AI will replace human coders by the end of the year—none of these is a particularly good reason to forgo the safety precautions that ensure a platform can continue to serve hundreds of millions of users. Stated plainly, Olejnik’s framing—that there is a significant middle ground between irrelevance and self-destruction—seems obvious. In practice, very few large organizations appear to be finding it.
It is hard to overlook the larger picture here. In January alone, Amazon laid off 16,000 more workers. At the same time, the company invested record amounts of money in AI infrastructure and tools. The internal reasoning makes sense: fewer engineers are required because AI increases the productivity of each remaining engineer.
The operational risk, however, is that the people being cut were frequently the ones who identified issues before they spread—the second set of eyes, the institutional memory, or the person who recalled the last time a deployment of a similar nature went awry. The blast radius of a single error tends to increase as that layer thins.
It’s difficult to ignore the pattern as you watch this develop. Amazon is not the only company facing this issue. Right now, all of the big tech companies are betting on the same thing: accelerating the deployment of AI, cutting staff, and assuming that the technology will advance faster than its failure modes. That wager can occasionally pay off handsomely.
A shopping app goes down for 22,000 users on a Tuesday, a mandatory meeting is scheduled, and three words from the most famous tech personality in the world end up in the headline. Proceed with caution. The message is simple. It just turns out to be difficult to follow.