28 Days Later
There’s a scene in the opening minutes of 28 Days Later where an unnamed patient, played by Cillian Murphy, wakes up alone in a hospital. He wanders through the halls, steps into the eerily empty streets of London, and eventually stumbles into a church. The pews are filled with motionless figures. He asks, disbelievingly, “Hello?” A church bell rings, and one of the figures stirs. Every viewer can tell you where they were when they realized what was happening. For the most part, the movie could have ended there. Yet it is estimated that over 7 million people watched within the first 12 months of its release. Since there’s no clean transition from a zombie movie to the rest of this blog, and because the pontificators and thoughtposters have already written six different versions this week, let’s just get to the point:
The past sixty days have felt like a frenzied awakening for anyone paying attention. Or at least for the terminally online among us. The irony is that the adoption curve for large language models is real and unprecedented, yet our tendency toward recency bias tricks us into thinking this all started last Tuesday. And, to be clear, advancements are accelerating. Yet the truth is, things have been building for some time. Consider how quickly the world can change, as Hemingway put it in The Sun Also Rises:
Gradually and then suddenly.
The truth is Anthropic’s Opus 4.6 has been unimaginably transformative. Still, it was the release of Opus 4.5 back in November 2025 that really created the uptick: AI shifting from an interesting novelty to a widely used tool for a large swath of tech users. Like most people, I had noticed conversational interfaces quietly finding their way into daily workflows for some time. No-code tools followed. Zapier’s promise was compelling. Yet the results were often uneven, the context lagged, and there was a general sense that the current state of AI was a capable assistant that never quite finished the assignment.
Now, depending on the source and estimation methodology, somewhere between 4 and 8% of all commits on GitHub are authored by an agentic assistant. And that was before the advent of Clawdbot, OpenClaw, and the rise of the modern-day Daedalus, Peter Steinberger. It was also prior to Tobi Lutke’s prompt: are you going exponential? The idea that a quarter of all commits will be co-authored by a non-human by the end of 2026 is not all that far-fetched. Why? Claude Code.
For context: my last line of functional code before this year was BASIC on MS-DOS. Yet over the course of 28 days and 112 commits, the experience was consistently surprising, often flummoxing, and a persistent reminder that there are simply not enough hours in the day. One Sunday was cleared entirely in the belief that the well of ideas would finally run dry; instead, the hacking continued well into the late evening. By the end of the month, a plethora of ideas for work had been validated. But also: an app for health and fitness that brings Garmin, Apple, and Peloton data together in the same data layer, with inputs for weather, workload, and travel to modify weekly training plans. It turns out you can build some interesting predictive layers for sports by widely sourcing quantitative results, evaluating betting markets, and bringing in insights from news feeds, broadcasts, and podcasts.
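The shape of that data layer can be sketched in a few lines. To be clear, everything below is illustrative: the `Workout` record, the de-duplication key, and the adjustment coefficients are assumptions for the sake of the sketch, not any vendor’s actual API or a validated training model.

```python
from dataclasses import dataclass

# Hypothetical unified record; field names are illustrative,
# not Garmin's, Apple's, or Peloton's actual schema.
@dataclass
class Workout:
    source: str   # "garmin", "apple", or "peloton"
    date: str     # ISO date
    minutes: int
    avg_hr: int

def merge_workouts(*feeds):
    """Flatten per-source feeds into one layer, de-duplicating sessions
    that appear in multiple feeds (keyed here, naively, on date + duration)."""
    seen, merged = set(), []
    for feed in feeds:
        for w in feed:
            key = (w.date, w.minutes)
            if key not in seen:
                seen.add(key)
                merged.append(w)
    return sorted(merged, key=lambda w: w.date)

def adjust_weekly_minutes(base_minutes, heat_index, travel_days):
    """Scale a planned weekly load for weather and travel.
    The coefficients are made up for illustration."""
    factor = 1.0
    if heat_index > 90:          # very hot week: trim volume
        factor -= 0.15
    factor -= 0.05 * travel_days # each travel day costs a little capacity
    return round(base_minutes * max(factor, 0.5))
```

The interesting part in practice is the de-duplication: the same ride often arrives from both Peloton and Apple, so some heuristic for “this is the same session” has to sit at the boundary of the merged layer.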
Gradually, then suddenly, indeed.
Compelling, eye-opening, all of the descriptors apply. Yet what was truly revealing was how quickly the learning compounded, and it brought to mind two academically validated perspectives on how people actually get better at things. The first is double-loop learning, introduced by Chris Argyris. Single-loop learning corrects the error. Double-loop learning questions the belief that created the error and rewrites the underlying model. Claude Code accelerates this relentlessly: instead of fixing the broken function, you discover that your entire architecture was built on an assumption you didn’t know you were making. The acknowledgement happens in real time, and the awareness builds with each cycle. The second is Lev Vygotsky’s zone of proximal development: the distance between what you can do independently and what becomes achievable with the right scaffolding. Claude Code is that scaffold. It doesn’t replace thinking; rather, it extends and accelerates it, enabling engagement with ideas that would never surface alone. Each of those dynamics operates on its own axis of acceleration. Double-loop learning compresses the time to deep insight. The zone of proximal development expands the range of what’s attemptable. Claude Code applies force on both axes simultaneously, and the result clearly compounds. So the estimate of 25% of commits coming from a non-human by 2026 seems a bit light.
The instinct, naturally, is to ask: are we in a bubble?
Hardly. We are in the early phase of a shift so fundamental that our collective pattern recognition simply cannot keep pace. When humans encounter change at this speed, we default to analogy. The dot-com era. The global financial crisis. The early days of mobile. The comparisons are understandable; they are also largely misplaced.
One of the persistent mistakes we make as a society is assigning rationality to errors of the past. The financial crisis was not principally a failure of risk evaluation; it was a failure of risk management. As ProPublica’s investigation later revealed, by 2007 roughly two-thirds of the riskiest collateralized debt obligation (CDO) tranches were being purchased by other CDOs. The banks were not distributing risk; they were recycling it. The packaging and layering of low-probability events gradually increased the risk profile of the entire system, and nobody wanted the music to stop. Incentives were misaligned. Awareness was absent. That specific combination is what made it catastrophic.
This moment is structurally different. The tools are accessible, the limitations are visible, and this is not a black box held by a handful of institutions making leveraged bets on behalf of an uninformed public. It is available, today, to anyone willing to sit down, learn, and challenge themselves to see the future differently. It works. Not flawlessly; not without supervision. But functionally and increasingly well. More importantly: today’s output is the worst it will ever be. That single acknowledgment changes the calculus entirely. The risks are real, but the most discussed version is also the most overblown: the sweeping notion that automation will displace entire categories of work overnight. It won’t. It never has. And unlike prior shifts, we are not flying blind; we are broadly aware of both the capabilities and the risks.
In 1990, Stanford economist Paul David published “The Dynamo and the Computer,” tracing a pattern that should give every skeptic pause. Electric power was commercially available by 1881. Forty years later, most American factories still looked the same. The productivity gains did not arrive until managers stopped replacing steam engines with electric motors and started rethinking the factory from the floor up. Four decades between availability and adoption. We are somewhere in the early years of that same arc, with one critical difference: the compression is tightening. What took factories forty years to learn, an individual with focus and the right tools can begin to internalize in a month.
What will happen, and is already underway, is a restructuring of how value gets created in knowledge work. Consider the law firm: a hundred attorneys producing similar work product under a shared letterhead. The model has persisted because expertise was expensive to distribute. That constraint is dissolving. An attorney embedded inside a technology company, supported by tools that can research, draft, and flag risk at scale, can deliver a higher level of service to more clients than a traditional associate billing 2,000 hours a year. The traditional firm grouped specialists together because that was the most efficient structure for output. The emerging model distributes one specialist’s judgment across an organization built for speed and reach. The 200-person firm gives way to the 15-person company that outperforms it.
The built world will follow. Slower, but with greater consequence. Infrastructure, construction, manufacturing: these are domains where the consequences of failure are physical, not theoretical. Adoption will be more deliberate; but the forces are already in motion. Autonomous equipment is grading, paving, and excavating on active jobsites today. The construction industry cannot hire fast enough to meet demand, and the housing deficit continues to widen. When autonomy begins to compress the timeline and cost of building, the effects will not be measured in quarterly earnings. They will be measured in units built, commutes shortened, and communities that exist where they otherwise would not have. That is a longer conversation, but it is one worth having.
For most of our lives, the default question when meeting someone new has been: what do you do? The tools available today are beginning to change the question entirely. Twenty-eight days was all it took to understand that the more interesting prompt, for yourself and for the people around you, is: what are you building? The answer does not require a title, a team, or a technical background. It requires curiosity, a willingness to learn, and the recognition that the ceiling you assumed was fixed is, in fact, moving. Every single day.
