When does technology kill a profession?
Three conditions that decide everything
A few weeks ago, a friend sent our group chat a news article. The CEO of America’s largest public hospital system had announced he was ready to replace radiologists with AI.
My response: “This will increase demand and pay for radiologists.”
What followed was a two-hour argument that touched on ATMs, the iPhone, cardiac surgery, and the entire history of medicine. At some point I realized I had written more in that chat than most essays I’ve published. So here we are.
The debate about AI and professional obsolescence is usually framed as binary. Either AI kills the job or it doesn’t. Either you’re a techno-optimist who thinks everything will be fine, or a doomer who thinks we’re all replaceable. Both camps argue from vibes. Neither has a good framework for why some professions survive technology and others don’t.
I think there are three conditions that determine the answer. Understanding them clarifies not just radiology’s future, but every profession currently eyeing AI with either hope or dread.
The paradox that started this
Before the conditions, you need to understand Jevons Paradox, because it’s the reason the intuitive answer is usually wrong.
In 1865, economist William Stanley Jevons noticed something counterintuitive: as steam engines became more fuel-efficient, Britain consumed more coal, not less. Cheaper efficiency didn’t reduce demand. It expanded it. Lower cost unlocked new uses nobody had imagined before.
The ATM is the modern textbook example. When automated teller machines proliferated through the 1980s and 1990s, the obvious prediction was fewer bank tellers. The ATM could handle deposits, withdrawals, balance checks, the core of what tellers did. In 1988, the average urban bank branch needed about 21 tellers. By 2004, ATMs had cut that to 13.
But here’s what actually happened. Between 1970 and 2006, the number of bank tellers in the United States more than doubled, from 268,000 to 608,000. Cheaper branches meant more branches. More branches meant more tellers, even at lower headcount per location. Jevons won.
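The Jevons dynamic here is just two numbers racing each other. A toy calculation, using the per-branch figures from the paragraphs above; the break-even threshold is my own arithmetic, not a claim from any study:

```python
# Total teller jobs = branches x tellers per branch.
# ATMs cut the second factor; Jevons wins only if the first
# factor grows faster than the second one shrinks.
tellers_per_branch_before = 21   # average urban branch, 1988
tellers_per_branch_after = 13    # same branch after ATMs, 2004

# How much the branch count must grow just to keep total
# teller employment flat despite leaner branches.
breakeven = tellers_per_branch_before / tellers_per_branch_after - 1
print(f"Branches must grow {breakeven:.0%} to hold teller jobs flat")
```

The point of the sketch: efficiency alone predicts fewer jobs. Only demand expansion past that break-even threshold explains employment growth, which is exactly what the elastic branch-banking market of that era delivered.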
The natural conclusion: technology doesn’t kill jobs, it creates them. ATMs didn’t destroy tellers.
Then the iPhone came along.
Mobile banking eliminated the premise of the branch entirely. By 2022, teller employment had collapsed from 332,000 full-time positions in 2010 to 164,000. Not a delayed ATM shock. A completely different technology, attacking not the task but the institution itself.
So the ATM story isn’t proof that technology always creates jobs. It’s proof that the outcome depends on which technology, attacking what, in what context.
Which brings me to the three conditions.
Condition 1: Is the automation partial or complete?
The most important question to ask about any technology is not “what can it do?” but “what can’t it do, and does the remaining human role still matter?”
ATMs automated one task within a multi-task job. Tellers still handled loan inquiries, resolved disputes, cross-sold products, built relationships. The machine took the repetitive core; the human held the contextual periphery. And that periphery turned out to be more valuable, not less, as routine transactions moved to machines and tellers became relationship bankers. Their wages went up. Banks started hiring more college graduates for the role.
Mobile banking was different in kind, not degree. It didn’t automate a task within branch banking. It automated the reason to go to a branch at all. The question stopped being “how do we staff this branch more efficiently?” and became “why does this branch need to exist?” When that question flips, there’s no peripheral role to retreat into. The institution dissolves, and the job with it.
For radiology, this is the first diagnostic question. AI is advancing rapidly on image interpretation, the core diagnostic read. But radiologists spend only 36% of their time on direct image interpretation. The rest is consultation, clinical integration, procedure guidance, peer review, communicating with referring physicians. These functions are relational, contextual, and harder to automate.
The radiologist who only reads scans is in the position of the teller whose only job was counting cash, vulnerable to the first wave. The radiologist who reads scans and runs tumor boards and performs interventional procedures and advises on imaging strategy is more like the teller who became a relationship banker, with the machine having clarified their value rather than erased it.
But most analyses miss something: the human role in radiology isn’t just surviving alongside AI. It’s producing what AI needs to keep advancing. More on this below.
Condition 2: Does lower cost actually expand the market?
Jevons Paradox requires an elastic market. Lower cost has to unlock new demand, not just make existing demand cheaper to serve.
This is where ATMs and mobile banking diverge most clearly.
ATMs made branches cheaper to operate, and banks responded by opening more of them. Urban branch density increased 43% between 1988 and 2004. More branches meant more customers, more complex transactions, more need for human judgment at the window. The efficiency gain grew the pie and kept humans in it.
Mobile banking made transactions cheaper too, dramatically so. But it didn’t generate more banking activity requiring human presence. It routed existing activity away from places where humans worked. The efficiency gain accrued to consumers and to bank margins, and it didn’t expand the market in a way that preserved teller jobs so much as shrink the institutional context those jobs depended on.
The question this raises for radiology: when AI makes imaging reads cheaper and faster, does that expand the market in a way that still requires radiologists?
The structural case is stronger than it looks. Global imaging volume is growing faster than radiologist supply. The UK projects a 40% shortage of radiologist consultants by 2028. The US faces a projected shortfall of as many as 122,000 physicians by 2032, with radiology among the strained specialties. Cheaper, faster AI-assisted reads could make imaging more accessible, enable broader screening programs, and generate clinical volume the current workforce simply cannot handle. That’s the ATM outcome: efficiency enabling expansion, expansion requiring expertise.
But there’s a deeper version of this argument, one that I haven’t seen articulated clearly enough.
Clayton Christensen’s The Innovator’s Prescription makes an observation about the history of medicine that cuts to the heart of this debate. Every time medicine developed a new diagnostic technology, a new way of seeing something previously invisible, it didn’t just improve care. It discovered new disease. New disease created new clinical categories. New categories required new specialists. The EKG didn’t just help cardiologists see hearts more clearly; it revealed arrhythmias nobody had characterized before, which required cardiologists to exist as a distinct specialty in the first place. The endoscope didn’t just help surgeons; it created gastroenterologists. MRI didn’t augment existing neurology; it birthed neuroradiology.
The pattern is consistent enough to feel like a law: new visibility creates new disease categories, which create new high-value subspecialties.
AI reading imaging at population scale is the next iteration of this. When a model processes a million chest X-rays, it doesn’t just read them faster than a human. It finds patterns no individual radiologist could see, subtle correlations across thousands of cases that no human career is long enough to accumulate. Some of those patterns will be noise. But some will describe real pathology that currently has no name, no ICD code, no treatment protocol, and no specialist.
Those discoveries will need humans to validate them, characterize them, study them, and ultimately treat them. That work creates new subspecialties, and those subspecialties will be high-acuity and high-value almost by definition, the kind of complex, rare, consequential cases that command the upper end of the compensation curve.
This is why the bifurcation argument is more optimistic than it sounds. The middle of radiology, routine reads on standard cases, faces real pressure. But the frontier of radiology, the part that exists at the edge of what AI has taught us to see, expands every time AI finds something new. The question isn’t whether there will be enough work. It’s whether the profession moves fast enough to claim it.
Condition 3: Is there a regulatory or institutional moat?
The third condition is the one most people forget.
Branch banking survived ATMs partly because it was regulated, liability-laden, and institutionally structured around physical presence. The moat wasn’t just economic; it was legal and relational. Customers needed branches for mortgages, disputes, complex products. Regulators required certain functions to happen in person. That structure bought tellers decades.
Mobile banking eventually breached that moat, not by defeating the regulations, but by changing what customers wanted so fundamentally that the structure protecting branches became irrelevant. The moat dried up not because someone filled it in but because the river that fed it changed course.
For radiology, the moat is deeper. The most important word in healthcare is licensure. You cannot practice medicine without it. You cannot bill Medicare for a diagnostic read without a credentialed physician signing off. You cannot deploy an autonomous AI for clinical diagnosis without FDA clearance, and as of 2025, not a single FDA-cleared radiology AI tool is approved for fully autonomous diagnosis without physician oversight.
This is structural, not incidental. The regulatory architecture around physician practice was built over a century, is defended by powerful professional associations, and serves genuine patient safety interests that make it politically durable. Any scenario where AI fully replaces radiologists requires not just technical capability but regulatory permission, liability restructuring, and insurance coverage changes, and each of these moves slowly and faces organized resistance.
The mobile banking lesson is worth sitting with. The moat protecting branch banking didn’t fail because regulators capitulated. It failed because consumer behavior shifted so completely that the moat became beside the point. If patients and health systems eventually prefer and trust autonomous AI reads, regulatory structures follow rather than lead.
The moat is real. It is not permanent. But it is almost certainly durable enough to outlast the careers of anyone currently in radiology training, and in a profession with 30-year careers, that is worth something.
The part nobody is talking about: humans are the training data
Here is the argument that reframes everything, and I have not seen it made clearly anywhere.
Think about CAPTCHAs. For roughly twenty years, every time you squinted at a distorted image of a street address or clicked the traffic lights in a grid of photos, you weren’t just proving you were human. You were labeling training data. Google used those interactions, billions of them, contributed unknowingly by ordinary internet users, to train the computer vision models that now power autonomous vehicles, AI image recognition, and a significant fraction of modern machine learning infrastructure.
The AI didn’t arrive from nowhere. Humans built it, one labeled image at a time, without realizing that’s what they were doing.
Radiology is in the same relationship with medical AI, and this changes how you should think about the profession’s future entirely.
Every scan a radiologist reads and annotates, every report they generate, every finding they describe with clinical precision, becomes potential training data for the next generation of imaging AI. The models that will eventually surpass human performance on standard reads are being trained right now on the outputs of human radiologists. The AI that threatens radiology’s routine work is, in a very literal sense, built on radiologists’ work product.
This creates a dependency that is easy to miss. AI imaging models don’t self-improve in a vacuum. They need new labeled data, especially for rare findings, novel pathology, and edge cases systematically underrepresented in existing training sets. The AI that finds a new cancer phenotype in a population-scale dataset still needs expert radiologists to validate that finding, characterize its clinical significance, and generate the annotated cases that train the next model to detect it reliably.
The better AI gets at finding new things, the more it needs expert humans to confirm what it found, because the stakes of validating a potentially novel finding are considerably higher than the stakes of confirming a routine pneumonia. The frontier keeps moving. The human role at that frontier doesn’t diminish; if anything, it becomes more consequential at each iteration.
This is why the new diagnostic technology loop, the Christensen insight, is self-reinforcing rather than self-terminating. AI finds a new pattern. Radiologists validate it. Validation generates labeled data. Labeled data trains better AI. Better AI finds more subtle patterns. More subtle patterns require more expert validation. The loop accelerates, but the human node doesn’t disappear; it becomes more specialized and more valuable as the loop tightens.
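The loop described above can be made concrete with a toy simulation. Every parameter here is invented for illustration; the only claim the code encodes is the structural one: if each round of labeled data widens what the model can detect, the expert-validation workload at the frontier grows rather than shrinks.

```python
# Toy model of the AI-radiologist feedback loop. All numbers are
# hypothetical; this illustrates a dynamic, not an empirical result.
patterns_surfaced = 100.0   # candidate findings per cycle (assumed)
novel_fraction = 0.05       # share needing expert validation (assumed)
widening = 1.3              # frontier growth per training cycle (assumed)

validation_load = []
for cycle in range(1, 6):
    needs_review = patterns_surfaced * novel_fraction
    validation_load.append(needs_review)
    print(f"cycle {cycle}: {patterns_surfaced:6.0f} patterns surfaced, "
          f"{needs_review:4.0f} need expert validation")
    # Validated findings become labeled data, which widens what the
    # next model can see -- so the frontier moves outward each cycle.
    patterns_surfaced *= widening

assert validation_load[-1] > validation_load[0]  # load grows, not shrinks
```

The constant `novel_fraction` is the load-bearing assumption: if AI progress instead drove the novel share toward zero, the loop would self-terminate, which is precisely the hinge of the objection taken up below.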
New hospitals are already building revenue streams around emerging imaging subspecialties. Those programs need enough radiologists to generate the case volume that AI then trains on to detect the new thing at scale. You can’t shortcut this. You can’t train a model to detect a rare cardiac imaging biomarker without having enough expert radiologists reading enough cardiac imaging to label enough positive cases. The human and the machine are in a genuinely symbiotic relationship, not a replacement relationship, at least at the frontier.
The routine nighthawk reads? That’s the cash-handling teller, under pressure, probably headed for compression. The frontier radiologist who sits at the edge of what AI can currently see is something different entirely, a role that gets more valuable as the AI gets better, not less.
The history of medicine is actually this argument
The three conditions and the training data loop aren’t novel theory. They’re what medicine has been doing for two centuries.
The general practitioner nearly disappeared. In 1931, 84% of American physicians identified as general practitioners. By 2019, family and internal medicine physicians, the successors to GPs, constituted about 25% of the physician workforce. Technology didn’t kill generalism. It fragmented it into 40 specialties and 89 subspecialties, because each new technology made something visible that was previously invisible and created enough new clinical demand to justify a new specialist.
Cardiac surgery didn’t exist before the bypass machine. C. Walton Lillehei at the University of Minnesota pioneered open heart surgery in 1954 using cross-circulation, with a patient’s parent serving as the living heart-lung machine, because mechanical bypass wasn’t ready yet. By 1955, the only two places in the world performing open heart surgery were 90 miles apart in Minneapolis and Rochester. The bypass machine didn’t threaten cardiac surgery. The technology was cardiac surgery.
In none of these cases did technology produce a clean elimination. What it produced was bifurcation, transformation, or creation, outcomes that looked very different depending on where in the profession you sat and how fast you moved.
The objection worth taking seriously
My friend Joe (not his real name) pushed back on the whole framework: “You’re following Altman and Dario’s siren song.”
He’s not wrong to push back. AI is categorically different from every previous technology because it attacks all three conditions simultaneously and doesn’t have an obvious ceiling. ATMs had a clear ceiling; they couldn’t do relationship banking. Mobile banking had one too; it couldn’t handle a mortgage dispute requiring human judgment. Every previous automation had a functional boundary that defined the surviving human role.
With multimodal AI, that boundary is genuinely unclear. The technology isn’t specialized. It synthesizes clinical history, drafts reports, suggests differentials, explains findings to patients, and each capability expansion narrows the peripheral role that radiologists retreat into.
My response to Joe isn’t that he’s wrong about the ceiling being unclear. It’s that the ceiling question matters less than how fast the institutions around it can actually move. Healthcare can’t move fast and break things. Hospitals don’t. Regulators don’t. Insurance companies don’t. Medical boards don’t. Branch banking’s moat eventually fell, but it fell over fifteen years, which is a long time in a profession with 30-year careers. A teller who adapted when ATMs spread still had decades of career left before the mobile-banking contraction hit.
The honest answer to Joe is that if AI eventually achieves general-purpose clinical cognition, the three conditions probably can’t hold indefinitely. But the training data loop provides a floor even then. General clinical AI needs humans to validate novel findings. As long as AI keeps finding new things, and the evidence suggests it will, there’s a human role in confirming what it found and training it to find it better. That role is smaller than the current radiologist workforce. But it is higher-acuity, higher-paid, and more interesting. Which is, historically, how medicine has always responded to the tools it builds.
What this actually means
The most common mistake in these debates is treating “will technology change this profession?” as the same question as “will technology eliminate this profession?”
They are very different questions.
Bank tellers changed beyond recognition, from cash handlers to relationship bankers, and then contracted sharply when mobile banking made the branch optional. They weren’t eliminated by ATMs. They were transformed by ATMs and then hollowed out by something nobody predicted when ATMs were deployed.
Cardiac surgeons didn’t exist before 1954. They weren’t threatened by the bypass machine. They were created by it.
Radiology will follow the same logic. The bottom of the distribution, routine reads on standard cases, faces real pressure. The top, subspecialists at the frontier of AI-discovered pathology, interventionalists, imaging informaticists, has a different calculus entirely. And underneath all of it sits a layer most people miss: the radiologist isn’t just surviving alongside AI. They’re building it, one annotated finding at a time, the same way you built Google’s computer vision by clicking on fire hydrants for a decade without knowing that’s what you were doing.
The profession will bifurcate. The middle will hollow out, the frontier will expand, and the radiologists who understand that their work product is the substrate the next generation of AI trains on, and who position themselves accordingly, will be fine.
Which is, as I said in that group chat, the entire history of medicine.
