Something Big Is Always Happening

Every few years, someone in proximity to a technological frontier writes a piece like this.

It moves along a familiar spine: first the awakening, then the confession from someone who has seen behind the curtain; next the reminder that only a small circle truly controls the levers; then the chart bending upward, steeper and steeper; and, at the end, the insistence that this is no longer optional — that you, too, must grasp what is unfolding before it is too late.

The rhetorical engine is fear mixed with privilege. I have seen it first, therefore you are late.

What makes this one powerful is not the data points, but the emotional structure. It invokes February 2020 — the last time the world felt blindsided — and invites you to relive the embarrassment of underreaction. The implication is clear: if you do not believe this now, you are repeating the same mistake.

But analogies are not arguments.

Covid was a biological pathogen with exponential spread dynamics and public health consequences that could be measured in bodies. AI is a general-purpose technology embedded in economic, political, and social systems, and those systems move at slower, friction-filled speeds. Using the fear and shock of a pandemic to make a productivity tool feel just as urgent is powerful storytelling, but it blurs important differences and makes the comparison less serious than it sounds.

The piece oscillates between two claims without resolving the tension between them.

On the one hand, AI progress is portrayed as almost autonomous, as though a few hundred researchers are midwifing an intelligence explosion that even they cannot control. On the other hand, we are told that individuals can meaningfully “get ahead” by spending one hour a day experimenting with tools and paying $20 a month for access.

These cannot both be true at the scale implied.

If we are truly on the brink of systems “smarter than almost all humans at almost all tasks,” then the marginal advantage of being an early adopter of ChatGPT is irrelevant compared to the structural shifts that would follow. And if individual adaptation meaningfully matters, then we are not witnessing an unstoppable intelligence singularity, but a powerful, yet still human-mediated, wave of automation.

There is also a pattern of conflating capability with displacement.

Yes, models can draft contracts. Yes, they can generate code. Yes, they can summarize medical research. But the ability to produce an output is not identical to the social replacement of a profession. Professions are not bundles of text generation tasks. They are institutional arrangements, legal liabilities, human trust networks, and accountability structures.

The managing partner at a law firm using AI as a super-associate does not prove that associates disappear. It proves that firms reconfigure workflow. That has happened with every productivity tool in the history of modern capitalism.

Word processors did not eliminate writers.

Spreadsheets did not eliminate accountants.

Search engines did not eliminate researchers.

They changed the shape of work. They compressed some tasks and expanded others. They redistributed value. They did not produce 50% white-collar extinction events within five years.

But even if the capability to do so were there, replacement would still come down to economics. Many of today's assumptions are premised on the idea that AI will remain cheap.

Right now, most people interact with extremely powerful systems for $20 a month, sometimes less. That price does not reflect the real cost of building and running them. It reflects subsidy. Companies like Anthropic and OpenAI are raising tens of billions of dollars at a time. That capital is not philanthropic. It is patient, strategic money expecting outsized returns.

When you pay $20 for Claude, you are not paying the full cost of the infrastructure behind it. You are participating in a growth phase. Investors are underwriting usage in order to accelerate adoption and entrench dependency. That is a familiar playbook in technology markets.

At some point, the subsidy logic changes.

There are only a few paths forward. One is that labs dramatically reduce the cost of training and running models through breakthroughs in hardware efficiency, algorithmic optimization, or energy supply. That is possible. Another is that prices rise to reflect the true cost of compute and capital. That is also possible, and historically more common. A third path is monetization through additional layers: advertising, enterprise lock-in, data extraction, premium tiers, usage-based billing, or bundled services that quietly increase effective cost.

None of the capital currently flowing into AI labs is neutral. It is priced with expectations of return. If operating costs remain high — and data centers, GPUs, and electricity are not cheap — then pricing pressure eventually follows.

This matters for the job-loss narrative.

If the marginal cost of deploying AI at scale remains significant, and if enterprise-grade usage becomes expensive, then replacing human workers is not simply a question of capability but of economics. A firm will only eliminate a role if the alternative is reliably cheaper or meaningfully more productive. If AI tools become costly, metered, regulated, and compliance-heavy, the cost comparison changes. Managing models, auditing outputs, ensuring legal defensibility, and absorbing risk all add layers of expense.

It is entirely plausible that, at scale, the total cost of AI infrastructure — compute, oversight, energy, compliance, integration, and maintenance — approaches or even exceeds the cost of certain categories of human labor.

We should at least allow for that possibility.

Technological capability does not automatically determine economic dominance. Prices, incentives, capital expectations, and regulation shape adoption just as much as performance benchmarks do. And if today’s low price is a temporary growth subsidy rather than a stable equilibrium, then some of the displacement projections rest on assumptions about cost that may not hold.
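To make that concrete, here is a minimal back-of-envelope sketch of the comparison a firm would actually have to run. Every number in it is a hypothetical placeholder, not data; the only point is that once oversight, metered usage, and compliance are counted, the break-even between AI deployment and human labor is an open question rather than a foregone conclusion.

```python
# A minimal back-of-envelope sketch, not a forecast. Every number below is a
# hypothetical placeholder; the only point is that the break-even between
# AI deployment and human labor depends on assumptions that are still open.

def annual_ai_cost(seats, subscription, usage_fees,
                   oversight_hours, oversight_rate, compliance_overhead):
    """Rough annual cost of deploying AI across a team, including the human
    oversight and compliance work that does not disappear."""
    tooling = seats * subscription * 12                 # per-seat subscriptions
    usage = seats * usage_fees * 12                     # metered, usage-based billing
    oversight = oversight_hours * oversight_rate * 12   # review and auditing labor
    return (tooling + usage + oversight) * (1 + compliance_overhead)

def annual_human_cost(headcount, fully_loaded_salary):
    """Rough annual cost of the roles the AI is supposed to replace."""
    return headcount * fully_loaded_salary

# Hypothetical scenario: 50 seats at $200/month, $300/month of metered usage,
# 80 hours/month of human review at $60/hour, 25% compliance overhead,
# versus five roles at a $90,000 fully loaded salary.
ai = annual_ai_cost(50, 200, 300, 80, 60, 0.25)
humans = annual_human_cost(5, 90_000)

print(f"AI deployment: ${ai:,.0f} per year")
print(f"Human labor:   ${humans:,.0f} per year")
```

Change any of those placeholders and the conclusion flips, which is exactly the point: the comparison is sensitive to costs that are not yet settled.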

We should also pause on the claim that AI is now smart enough to build the next version of itself, because this is where the rhetoric often takes on a life of its own. It is true that models are being used to write code, debug systems, and assist in research that feeds into future models. But that is not the same thing as autonomous self-creation. These systems do not wake up with goals, allocate capital, secure compute, design chip architectures, or decide research priorities.

Human teams still frame the problems, curate the data, set the objectives, evaluate outputs, and choose what gets deployed. AI can accelerate parts of the process. It can act as a powerful tool within the loop. But a tool participating in its own refinement is not the same as an independent agent driving its own evolution. Conflating assistance with autonomy makes the feedback loop sound mystical, when in reality it is still bounded by human direction, institutional incentives, and material constraints.

There’s also a quieter possibility that rarely makes it into these narratives: that AI may not eliminate as much work as it rearranges it. Running these systems means powering the data centers, maintaining the infrastructure, auditing outputs, monitoring misuse, complying with regulations, retraining staff, rewriting workflows, updating models, and fixing the failures they inevitably produce. All of this takes labor. A great deal of it.

The history of complex technologies suggests that the effort required to build, govern, and sustain them often grows alongside their capability. The work does not disappear. It shifts. And sometimes the machinery demands more coordination, oversight, and human involvement than the tasks it automates ever did.

The piece gestures toward this being “different from every previous wave of automation,” but never fully demonstrates why. It asserts general cognitive substitution, but sidesteps the economic reality that automation adoption is constrained by cost, regulation, liability, politics, and inertia. Technology does not diffuse at the speed of capability; it diffuses at the speed of institutions.

There is also a subtle inflation of timelines through selective framing.

“By 2023 it could pass the bar.”

Yes, in controlled conditions.

“By 2025 engineers handed over most of their coding.”

In specific contexts, with guardrails, by highly technical users.

The gap between demonstration and universalization is enormous. Nuclear fusion has worked in laboratories for decades. That does not mean it powers your home.

Another thing that deserves closer attention is the steady appeal to authority.

Dario Amodei says.

OpenAI documentation says.

METR data shows.

These are not neutral observers speaking from outside the system. They are executives and researchers leading companies locked in an expensive, high-stakes race for dominance. When the CEO of an AI lab predicts that half of white-collar jobs could disappear within five years, it is not only a warning about the future. It is also a message to investors, regulators, and competitors about how powerful and transformative their technology is. Dramatic forecasts attract capital. They command attention. And in a field driven by funding and momentum, attention is leverage.

This does not make the claim false. It does make it an interested one. We should be careful not to mistake strategic messaging for prophecy.

What stands out most in the piece is not its technical detail but its psychological framing. Adaptation is presented almost as a moral obligation: if you do not engage, you are naïve; if you hesitate, you are complacent; if you dismiss it, you are destined to fall behind. That posture moves beyond analysis and into something closer to evangelism.

There is a reason technological insiders often feel the ground shake first. They are standing closest to the machinery. They see the early versions before the public does. But being that close can also distort perspective. When you’re near the center of change, it’s easy to mistake rapid progress in a lab for immediate transformation everywhere else.

Every technological revolution feels total from inside its most intense node.

The early internet felt like it would dissolve all borders within five years.

Blockchain felt like it would eliminate banks.

Social media felt like it would democratize power irreversibly.

Each of these technologies changed the world profoundly. None unfolded on the timelines evangelists predicted. None displaced institutions as cleanly as imagined. And each generated secondary effects no one foresaw.

This isn’t to say AI is empty hype. It’s to say that history teaches us to be cautious about straight-line acceleration stories. Big change rarely unfolds as neatly or as quickly as it feels from the inside.

The piece also underestimates human friction.

Work is not merely cognitive throughput. It is negotiation, politics, reputation, tacit knowledge, and trust. A model can generate a diagnosis; it cannot bear legal responsibility for malpractice. It can draft a brief; it cannot stand before a judge. It can suggest strategy; it cannot absorb blame when that strategy fails.

Until liability transfers from humans to systems, humans remain in the loop. And liability transfers slowly, because it is political.

The “intelligence explosion” idea assumes that once AI becomes powerful enough, the rest of society simply gives way. But that’s not how change usually works. Institutions push back. Governments regulate. Courts get involved. Workers organize. Companies lobby. Adoption slows.

That friction isn’t a flaw in the system. It’s how societies protect themselves. It’s the immune response that kicks in when something disruptive appears.

There is another quiet assumption embedded in the piece: that exponential technical capability maps cleanly onto exponential economic displacement. History suggests otherwise. Productivity gains often concentrate wealth before they eliminate labor. They augment high-skill workers before they displace them. They generate new coordination problems before they settle into equilibrium.

If 50% of entry-level white-collar jobs disappeared in five years, the resulting political shock would dwarf the technology itself. Democracies do not absorb that scale of disruption quietly. Policy would intervene, whether clumsily or effectively.

The idea that technology simply rolls forward untouched by politics leaves out one of the most powerful forces in society: collective decision-making.

None of this is an argument for complacency. It is an argument against confusing urgency with clarity.

There is also something else worth noticing in the piece: the heavy sense of inevitability that runs through it. We are told the future is already here. That it just hasn’t reached you yet. That it is about to. This kind of language narrows the field of possibility. It leaves little room for uncertainty, debate, or delay. It turns a fast-moving trend into a fixed destination, and it treats projection as though it were already fact.

But no technology develops in isolation. AI depends on electricity, on access to advanced chips, on global supply chains, on government policy, on investment cycles, and on whether the public accepts or rejects its use. Each of these factors can speed things up or slow them down. None of them are fixed. None of them are guaranteed to move in one direction forever.

We are not strapped to a runaway train. We are part of a system shaped by politics, power, and choices.

The most important question raised by AI is not whether you should spend an hour a day experimenting with Claude. It is whether our institutions are strong enough to guide powerful technologies without handing effective control to a small circle of researchers and investors. That is the real issue at stake, and it is largely missing from the piece.

Instead, we are offered advice about personal advantage.

Adapt faster.

Use better tools.

Get ahead.

But the largest consequences of AI will not be settled through individual productivity gains. They will be shaped in policy debates, in corporate boardrooms, in regulatory agencies, and at the ballot box. If this moment truly is “bigger than Covid,” then the response cannot be reduced to better subscriptions and smarter workflows. It has to involve collective decisions about how the technology is governed and who it ultimately serves.

What unsettles me most is the ending. After pages describing an intelligence explosion that could compress a century into a decade, eliminate entire job categories, reorder geopolitics, and destabilize economies, the advice narrows into something almost ordinary: experiment more, upgrade your tools, move faster than the person next to you.

The tension is simply too difficult to ignore. If the stakes are truly existential, then the response cannot be reduced to career tips. And if the response really is about career tips, then what we are facing is not civilizational collapse, but faster productivity. Both cannot be fully true at once.

Yes, something big is happening. But big does not automatically mean breakdown. Faster does not mean unstoppable. Capability does not automatically mean mass job loss. And being close to the technology does not guarantee clear sight about how society will respond.

The right posture is neither denial nor panic. It is steady, serious engagement. It means building regulatory systems as quickly as we build models. It means strengthening social safety nets before disruption forces crisis decisions. It means resisting narratives that turn complex change into a countdown.

The water may be rising. But telling everyone to swim harder is not the same as building barriers that protect whole communities.

If AI is truly transformative, then the real urgency lies in how we govern it, not just how we use it. And that is the conversation that deserves the most attention.

 