Why AI anxiety is different
There’s a reason the anxiety about AI feels so real. Humans are wired for loss aversion — costs register at roughly twice the psychological weight of equivalent gains. And technology’s costs are always vivid and immediate: the skill that’s suddenly worth less, the role that looks precarious, the thing you spent years learning that a model now approximates in seconds. The benefits are abstract and future-tense. The brain does the math wrong every time, and it does it by design.
AI adds a dynamic that older technologies didn’t. Every previous wave — railroads, electricity, the internet — was at least comprehensible in principle. You could understand how it worked, even if you couldn’t build it. AI’s decision-making is opaque to the people building it, let alone the people using it. That’s ambiguity aversion with nowhere to land — not fear of a known danger, but fear of something that resists being mapped even by its makers. So if this technology feels different, that’s because it is. That’s not weakness. It’s an appropriate response to something genuinely new.
What history suggests, though, is stranger and more interesting than the fear implies.
Bubbles are a feature, not a bug
Every major technological revolution has followed the same arc. Speculative capital floods into a new infrastructure — more than the economics can justify, more than the market can absorb. The bubble inflates. Then it bursts. Companies go bankrupt. Investors lose fortunes. And then, quietly, something unexpected happens: the infrastructure stays.
The economist Carlota Perez calls this the Installation Period. Financial capital builds what production capital cannot rationally justify. The crash is not the failure of the innovation — it’s a transfer mechanism. The infrastructure moves from the people who overpaid to the people who get it cheap. And that’s when things get interesting.
The railroad manias of the mid-nineteenth century left the United States with roughly 70,000 miles of track by the early 1870s. The companies that built it mostly went under. But freight costs dropped 90% over the following decades, and on top of that cheap network, things became possible that nobody had modelled during the bubble. Chicago became the food system of the world — grain markets, stockyards, commodity futures, none of it viable without cheap freight. Sears Roebuck launched a mail-order catalogue in 1897 that was, in effect, a software layer on the rail network. Carnegie negotiated freight rates to near-zero on overbuilt capacity and built a steel empire nobody could touch. The bubble built the tracks. The tracks rewired commerce. Nobody in 1845 was running a spreadsheet that showed nationwide mail-order retail as the outcome.
The telecom bubble of the late 1990s is the cleaner example, because the lag was shorter. By 2001, 95% of the fibre laid during the boom was dark — the companies bankrupt, the cable sitting unused in the ground. YouTube launched in 2005. Netflix began streaming in 2007. AWS launched in 2006. Every one of these was a bet on bandwidth staying near-free. It did, because the infrastructure had been acquired at bankruptcy-auction prices. Streaming video at 1999 bandwidth pricing was economically insane. At 2005 pricing it was inevitable. The entire SaaS model — software delivered over the web, priced by the month — only works at those bandwidth costs. The bubble funded the substrate. The substrate made the application layer possible.
The pattern runs through history. The canal bubble left waterways that made New York’s commercial dominance possible. The electrification bubble left a grid that enabled refrigeration, radio, and the 1920s consumer economy. The PC hardware glut of 2001 made AWS’s unit economics work in 2006.
This is something I’ve come to genuinely love about capitalism — not uncritically, but specifically: it has a mechanism for funding infrastructure that no rational actor would build alone. Financial bubbles are wasteful and painful and they transfer wealth in ugly ways. They are also, in aggregate, how we build the substrate for things we can’t yet imagine. The people who lose money in the bubble are never the people who win in the deployment phase. Different actors, different timescales, same infrastructure.
The AI version of this argument is straightforward. The LLM and GPU buildout of 2022–2025 is being financed at bubble scale. Whether the builders profit is almost beside the point. The question is what becomes possible when inference is cheap and abundant — running on infrastructure built by companies that no longer need to exist for the capability to be used.
We’re in that moment. The infrastructure is being laid. And for someone who builds, who has taste, who can feel the wrongness before it compounds — this is a strange and clarifying time to be alive.
What taste actually is
There’s a line from Michael Polanyi, a philosopher of science, that I keep coming back to: we know more than we can tell.
He was writing about tacit knowledge — the expertise that’s been internalized past the point of articulation. The surgeon who can’t explain why they made a particular cut. The designer who knows something’s wrong before they can say what. The engineer who feels a system is about to break before the logs confirm it.
That’s taste. It’s not soft. It’s not decoration. It’s analytical judgment that’s been compressed into something faster than language.
Gary Klein spent years studying how experts make decisions under pressure — firefighters, military commanders, surgeons. His finding: they don’t analyze. They pattern-match and feel rightness, then act. The emotional response is the computation. The feeling of wrongness is load-bearing signal, not noise.
Antonio Damasio showed the same thing from the neuroscience side. People with damage to emotion-processing areas of the brain make worse decisions — not better ones — even when their analytical reasoning is completely intact. The emotion is part of the decision-making apparatus, not separate from it.
What this means practically: your instinct that something is off — in a design, in a product flow, in a codebase — is not a feeling you should override with analysis. It’s a compressed body of domain knowledge firing. The longer you’ve been building, the more reliable that signal gets.
And here’s the thing AI can’t do: it cannot feel the wrongness. It will confidently go further in the wrong direction without noticing. You’re the check. That’s not a limitation to work around — it’s the entire point of the collaboration.
Two things that are yours
There’s a distinction worth making explicit, because the two are related but different.
The canvas is yours. Your taste — what you notice, what draws you, what feels unmet — determines what gets built. The product vision, the design direction, the decision about what matters to a user: these come from something inside you that isn’t transferable. The Big Five personality research links aesthetic sensitivity directly to Openness to Experience — what you notice reflects who you are at a trait level. Your instincts about what to build are expressions of your personality, not just your skill.
AI executes against the canvas. It doesn’t create it.
Drift detection is yours. The technical, design, or product person who can feel when something is getting off track — before it’s visible in the output, before the tests fail — that’s intuition built from experience. The structured process in this kit (the runbooks, the locks, the session rhythm) is the formalisation of that intuition. But the underlying sense that something is wrong has to come from you first. A process without the human instinct behind it is just bureaucracy. With it, it’s leverage.
Both of these compound with time. The canvas gets richer as your taste develops. The drift detection gets sharper as you accumulate scar tissue. Neither can be downloaded.
The need to express yourself through work
Ryan and Deci, the psychologists behind Self-Determination Theory, identified three fundamental human needs: autonomy, competence, and relatedness. Their definition of autonomy is specific — it doesn’t just mean freedom. It means acting from self. Expressing your values and identity through what you do.
Work that satisfies this produces intrinsic motivation. Work that doesn’t produces alienation.
Csikszentmihalyi’s flow research adds the phenomenology — what it actually feels like from the inside when you’re fully engaged. The challenge-skill balance that produces flow is essentially the experience of your full self being in the work. Not just your analytical self. All of it.
This is not a nice-to-have. For a certain kind of person — the one who builds because they have to, who feels the wrongness before they can name it, who can’t half-invest in anything — it’s closer to a requirement.
AI, used well, moves execution out of the way so more of the work can be this. The parts that were blocking — the boilerplate, the repetition, the translation between idea and implementation — those get offloaded. What’s left is more canvas, more judgment, more expression.
That’s not a threat to the person who builds this way.
The learning curve is real
Understanding where things fail — the seams between layers, why production breaks in ways staging didn’t, what drift looks like before it compounds — this is more learnable right now than it’s ever been. You can ship fast and see the consequences fast. The feedback loop is compressed.
But it would be dishonest not to say what the feedback loop actually feels like. This way of working is genuinely brutal. You are holding the product vision, managing an AI that will confidently go wrong without noticing, reading the output of every task for drift, making architecture calls, feeling the seams of a system you’re building in real time — on a small team, moving fast, with no institution slowing you down.
Which, looking back at how institutions actually work, might be less of a loss than it sounds. A lot of what we called process was just humans slowing other humans down. A lot of what we called bugs were just coordination failures dressed up as technical problems.
The cognitive load is not metaphorical. Working memory research puts the human brain’s capacity at roughly four chunks of information held simultaneously — and this work routinely asks for more, across more layers, with fewer people to distribute the weight across. What makes it hardest is the range it requires: the further you can see into the consequences of each decision, the more you’re holding at once. The people who find this most demanding are often the ones most capable of doing it. That’s not irony. It’s the nature of the work.
Anders Ericsson, whose research on deliberate practice underpins most serious work on skill development, found that the kind of practice that builds deep competence is uniformly uncomfortable — not occasionally, but structurally. You are operating at the edge of what you can currently do. That’s the only place real learning happens. This work lives there permanently, and it demands more of you, not less. The intuition, the product sense, the ability to feel drift before it compounds — these are hard-won and require the kind of patient, uncomfortable sitting-with-things that most people spend their lives avoiding.
You’re not just learning to build. You’re building the instrument of intuition — 100x faster than was previously possible. And something I’m only beginning to understand about myself: we’re both evolving, the AI and the humans working alongside it.
I’m evolving probably as fast as the thing underneath my keyboard.